Category Archives: Fiber To The Home


Data Center Containment: Types, Benefits & Challenges

Over the past decade, data center containment has been widely adopted by data centers. It can greatly improve the predictability and efficiency of traditional data center cooling systems. This article will explain what data center containment is, the common types, and their benefits and challenges.

What Is Data Center Containment?

Data center containment is the separation of cold supply air from the hot exhaust air from IT equipment so as to reduce operating cost, optimize power usage effectiveness, and increase cooling capacity. Containment systems enable uniform and stable supply air temperature to the intake of IT equipment and a warmer, drier return air to cooling infrastructure.

Types of Data Center Containment

There are two main types of data center containment: hot aisle containment and cold aisle containment.

Hot aisle containment encloses the warm exhaust air from IT equipment in data center racks and returns it to the cooling infrastructure. The air from the enclosed hot aisle returns to the cooling equipment via a ceiling plenum or ductwork, and the conditioned air then re-enters the data center via a raised floor, computer room air conditioning (CRAC) units, or ductwork.

Hot aisle containment

Cold aisle containment encloses the cold aisles where cold supply air is delivered to the IT equipment, so the rest of the data center becomes a hot-air return plenum where the temperature can be high. Physical barriers such as solid metal panels, plastic curtains, or glass confine the supply air so that it flows properly through the cold aisles.

Cold aisle containment

Hot Aisle vs. Cold Aisle

There are mixed views on whether it’s better to contain the hot aisle or the cold aisle. Both containment strategies have their own benefits as well as challenges.

Hot aisle containment benefits

  • The open areas of the data center stay cool, so visitors to the room will not get the impression that the IT equipment is insufficiently cooled. In addition, it allows some low-density areas to remain uncontained if desired.
  • It is generally considered to be more effective. Any leakages that come from raised floor openings in the larger part of the room go into the cold space.
  • With hot aisle containment, low-density network racks and stand-alone equipment like storage cabinets can be situated outside the containment system, and they will not get too hot, because they are able to stay in the lower temperature open areas of the data center.
  • Hot aisle containment typically adjoins the ceiling where fire suppression is installed. With a well-designed space, it will not affect normal operation of a standard grid fire suppression system.

Hot aisle containment challenges

  • It is generally more expensive. A contained path is needed for air to flow from the hot aisle all the way to the cooling units. Often a drop ceiling is used as a return air plenum.
  • High temperatures in the hot aisle can be undesirable for data center technicians. When they need to access IT equipment and infrastructure, a contained hot aisle can be a very uncomfortable place to work. But this problem can be mitigated using temporary local cooling.

Cold aisle containment benefits

  • It is easy to implement without the need for additional architecture, such as a drop ceiling or air plenum, to contain and return exhaust air.
  • Cold aisle containment is less expensive to install, as it only requires doors at the ends of the aisles and baffles or a roof over the aisle.
  • Cold aisle containment is typically easier to retrofit in an existing data center. This is particularly true for data centers that have overhead obstructions such as existing duct work, lighting and power, and network distribution.

Cold aisle containment challenges

  • When utilizing a cold aisle system, the rest of the data center becomes hot, resulting in high return air temperatures. It also may create operational issues if any non-contained equipment such as low-density storage is installed in the general data center space.
  • Conditioned air that leaks from openings under equipment such as PDUs and from the raised floor tends to enter the air paths that return to the cooling units. This reduces the efficiency of the system.
  • In many cases, cold aisles have intermediate ceilings over the aisle. This may affect the overall fire protection and lighting design, especially when added to an existing data center.

How to Choose the Best Containment Option?

Every data center is unique. To find the most suitable option, you have to take into account a number of aspects. The first thing is to evaluate your site and calculate the Cooling Capacity Factor (CCF) of the computer room. Then observe the unique layout and architecture of each computer room to discover conditions that make hot aisle or cold aisle containment preferable. With adequate information and careful consideration, you will be able to choose the best containment option for your data center.
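As a rough illustration of the CCF calculation mentioned above, the short sketch below follows the commonly cited definition of CCF as total running rated cooling capacity divided by 110% of the IT critical load; the room figures are hypothetical examples, not measurements.

```python
def cooling_capacity_factor(running_cooling_kw, it_load_kw):
    """CCF = total running rated cooling capacity / (IT critical load x 1.1).

    The 10% uplift approximates room loads such as lighting and people.
    A CCF far above ~1.2 usually points to stranded cooling capacity.
    """
    return running_cooling_kw / (it_load_kw * 1.1)

# Hypothetical computer room: four CRAC units of 100 kW running, 250 kW IT load
print(round(cooling_capacity_factor(4 * 100, 250), 2))  # -> 1.45
```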

Article Source: Data Center Containment: Types, Benefits & Challenges

Related Articles:

What Is a Containerized Data Center: Pros and Cons

The Most Common Data Center Design Missteps

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

The COVID-19 pandemic forced several companies to shut down, which reduced production and altered supply chains. In the tech world, where silicon microchips are the heart of everything electronic, raw material shortage became a barrier to new product creation and development.

During the lockdown periods, all but essential workers were required to stay home, which meant chip manufacturing was unavailable for several months. By the time lockdowns were lifted and the world embraced the new normal, the rising demand for consumer and business electronics was enough to ripple up the supply chain.

Below, we’ve discussed the challenges associated with the current chip shortage, what to expect moving forward, and the possible interventions necessary to overcome the supply chain constraints.

Challenges Caused by the Current Chip Shortage

As technology and rapid innovation sweep across industries, semiconductor chips have become an essential part of manufacturing – from devices like switches, wireless routers, computers, and automobiles to basic home appliances.

devices

To understand and quantify the impact this chip shortage has had across industries, we'll need to look at some of the most affected sectors. Here's a quick breakdown of how things have unfolded over the last eighteen months.

Automobile Industry

Automakers in North America and Europe slowed or stopped production due to a lack of computer chips. Major automakers like Tesla, Ford, BMW, and General Motors have all been affected. The major implication is that the global automobile industry will manufacture 4 million fewer cars by the end of 2021 than earlier planned and will forfeit an estimated $110 billion in revenue.

Consumer Electronics

Consumer electronics such as desktop PCs and smartphones rose in demand throughout the pandemic, thanks to the shift to virtual learning among students and the rise in remote working. At the start of the pandemic, several automakers slashed their vehicle production forecasts before abandoning open semiconductor chip orders. And while the consumer electronics industry stepped in and scooped most of those microchips, the supply couldn’t catch up with the demand.

Data Centers

Most chip fabrication companies like Samsung Foundries, Global Foundries, and TSMC prioritized high-margin orders from PC and data center customers during the pandemic. And while this has given data centers a competitive edge, it isn’t to say that data centers haven’t been affected by the global chip shortage.

data center

Some of the components data centers have struggled to source include those needed to put together their data center switching systems. These include BMC chips, capacitors, resistors, circuit boards, etc. Another challenge is the extended lead times due to wafer and substrate shortages, as well as reduced assembly capacity.

LED Lighting

LED backlights common in most display screens are powered by hard-to-find semiconductor chips. Gadgets with LED lighting features are now highly priced due to the shortage of raw materials and increased market demand. This is expected to continue into the beginning of 2022.

Renewable Energy: Solar and Turbines

Renewable energy systems, particularly solar and turbines, rely on semiconductors and sensors to operate. The global supply chain constraints have hurt the industry and have even affected energy solutions manufacturers such as Enphase Energy.

Semiconductor Trends: What to Expect Moving Forward

In response to the global chip shortage, several component manufacturers have ramped up production to help mitigate the shortages. However, top electronics and semiconductor manufacturers say the crunch will only worsen before it gets better. Most of these industry leaders speculate that the semiconductor shortage could persist into 2023.

Based on the ongoing disruption and supply chain volatility, various analysts in a recent CNBC article and Bloomberg interview echoed their views, and many are convinced that the coming year will be challenging. Here are some of the key takeaways:

Pat Gelsinger, CEO of Intel Corp., noted in April 2021 that the chip shortage would recover after a couple of years.

A DigiTimes report found that lead times for Intel and AMD server ICs for data centers have extended to 45 to 66 weeks.

The world’s third-largest EMS and OEM provider, Flex Ltd., expects the global semiconductor shortage to proceed into 2023.

In May 2021, Global Foundries, the fourth-largest contract semiconductor manufacturer, signed a $1.6 billion, 3-year silicon supply deal with AMD, and in late June, it launched its new $4 billion, 300mm-wafer facility in Singapore. Yet, the company says the added capacity will not increase component production until 2023 at the earliest.

TSMC, one of the leading pure-play foundries in the industry, says it won't meaningfully increase component output until 2023. However, it is optimistic that it can ramp up the fabrication of automotive micro-controllers by 60% by the end of 2021.

From the industry insights above, it’s evident that despite the many efforts that major players put into resolving the global chip shortage, the bottlenecks will probably persist throughout 2022.

Additionally, some industry observers believe that the move by big tech companies such as Amazon, Microsoft, and Google to design their own chips for cloud and data center business could worsen the chip shortage crisis and other problems facing the semiconductor industry.

In a recent article, the authors hint that the entry of Microsoft, Amazon, and Google into the chip design market will be a turning point in the industry. These tech giants have the resources to design superior and cost-effective chips of their own, resources that most chip designers like Intel have only in limited measure.

As these tech giants become more independent, each will be looking to build component stockpiles to endure long waits and meet production demands between inventory refreshes. Again, this will further worsen the existing chip shortage.

Possible Solutions

To stay ahead of the game, major industry players such as chip designers and manufacturers and the many affected industries have taken several steps to mitigate the impacts of the chip shortage.

For many chip makers, expanding their production capacity has been an obvious response. Other suppliers in certain regions decided to stockpile and limit exports to better respond to market volatility and political pressures.

Similarly, improving the yields or increasing the number of chips manufactured from a silicon wafer is an area that many manufacturers have invested in to boost chip supply by some given margin.

chip manufacturing

Here are the other possible solutions that companies have had to adopt:

Embracing flexibility to accommodate older chip technologies that may not be “state of the art” but are still better than nothing.

Leveraging software solutions such as smart compression and compilation to build efficient AI models to help unlock hardware capabilities.

Conclusion

The latest global chip shortage has led to severe shocks in the semiconductor supply chain, affecting several industries, from automobiles and consumer electronics to data centers, LED lighting, and renewables.

Industry thought leaders believe that shortages will persist into 2023 despite the current build-up of mitigation measures. And while full recovery will not be seen any time soon, some chip makers are optimistic that they can ramp up fabrication to meet the demand from their automotive customers.

That said, staying ahead of the game is an all-time struggle considering this is an issue affecting every industry player, regardless of size or market position. Expanding production capacity, accommodating older chip technologies, and leveraging software solutions to unlock hardware capabilities are some of the promising solutions.


This article is being updated continuously. If you want to share any comments on FS switches, or if you are inclined to test and review our switches, please email us via media@fs.com or inform us on social media platforms. We cannot wait to hear more about your ideas on FS switches.

Article Source: The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Related Articles:

Impact of Chip Shortage on Datacenter Industry

Infographic – What Is a Data Center?

The Most Common Data Center Design Missteps

Introduction

The purpose of data center design is to provide IT equipment with a high-quality, standard, safe, and reliable operating environment that fully meets the environmental requirements for stable and reliable operation of IT devices and prolongs the service life of computer systems. Design is the most important part of data center construction, relating directly to the success or failure of long-term planning, so it should be professional, advanced, integral, flexible, safe, reliable, and practical.

9 Missteps in Data Center Design

Good data center design is an effective solution to overcrowded or outdated data centers, while inappropriate design creates obstacles for growing enterprises. Poor planning can waste valuable funds, create more issues, and increase operating expenses. Here are 9 mistakes to be aware of when designing a data center.

Miscalculation of Total Cost

Data center operation expense is made up of two key components: maintenance costs and operating costs. Maintenance costs refer to the costs associated with maintaining all critical facility support infrastructure, such as OEM equipment maintenance contracts, data center cleaning fees, etc. Operating costs refer to costs associated with day-to-day operations and field personnel, such as the creation of site-specific operational documentation, capacity management, and QA/QC policies and procedures. If you plan to build or expand a business-critical data center, the best approach is to focus on three basic parameters: capital expenditures, operating and maintenance expenses, and energy costs. If you take any component out of the equation, the resulting model may not properly align with an organization's risk profile and business spending profile.
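To see why all three parameters belong in the same model, here is a minimal sketch that sums amortized capital, operating and maintenance expenses, and energy cost into a yearly figure; every number in the example is a hypothetical placeholder.

```python
def annual_data_center_cost(capex, lifetime_years, annual_om_expense,
                            it_load_kw, pue, price_per_kwh):
    """Rough yearly cost model: amortized capital + O&M + energy.

    Energy cost = IT load x PUE x 8760 hours x electricity price.
    Dropping any term skews the comparison between design options.
    """
    amortized_capex = capex / lifetime_years
    energy = it_load_kw * pue * 8760 * price_per_kwh
    return amortized_capex + annual_om_expense + energy

# Hypothetical 1 MW facility: $12M capex over 15 years, $800k O&M per year,
# PUE of 1.5, electricity at $0.08/kWh
print(f"${annual_data_center_cost(12e6, 15, 8e5, 1000, 1.5, 0.08):,.0f} per year")
```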

Unspecified Planning and Infrastructure Assessment

Infrastructure assessment and clear planning are essential processes for data center construction. For example, every construction project needs to have a chain of command that clearly defines areas of responsibility and who is responsible for each aspect of data center design. Those involved need to evaluate the potential applications of the data center infrastructure and what types of connectivity requirements they have. In general, planning involves a rack-by-rack blueprint, including network connectivity and mobile devices, power requirements, system topology, cooling facilities, virtual local and on-premises networks, third-party applications, and operational systems. Given the importance of data center design, you should have a thorough understanding of the required functionality before construction begins. Otherwise, you'll fall short and spend more money on maintenance.

data center

Inappropriate Design Criteria

Two missteps can send enterprises into an overspending death spiral. First, everyone has different design ideas, but not everyone is right. Second, the actual business may be mismatched with the desired vision and fail to support the assumed kilowatts per square foot or per rack. Over-planning in design is a waste of capital. Higher-tier facilities also result in higher operational and energy costs. A data center designer establishes the proper design criteria and performance characteristics and then builds capital expenditure and operating expenses around them.

Unsuitable Data Center Site

Enterprises often need to find an ideal building location when designing a data center. Missing site-critical information can cause problems later. Large users are well aware of data center requirements and have concerns about power availability and cost, fiber connectivity, and force majeure risks. Smaller users often let the needs of their core business decide whether they need to build or refurbish. Hence, premature site selection or an unreasonable geographic location will fail to meet the design requirements.

Pre-design Space Planning

It is also very important to plan the space capacity inside the data center. The ratio of raised-floor space to support space can be as high as 1 to 1, and the mechanical and electrical equipment needs enough room to be accommodated. In addition, the planning of office and IT equipment storage areas also needs to be considered. Therefore, it is critical to estimate and plan the space capacity during data center design. Estimation errors can make the design unsuitable for the site space, which means suspending the project for re-evaluation and possibly repurchasing components.

Mismatched Business Goals

Enterprises need to clearly understand their business goals when commissioning a data center so that they can complete the design. Once the business goals are defined, other things should be considered, such as which specific applications the data center supports, additional computing power, and later business expansion. Additionally, enterprises need to communicate these goals to data center architects, engineers, and builders to ensure that the overall design meets business needs.

Design Limitations

The importance of modular design is well publicized in the data center industry. Although the modular approach means adding infrastructure only as it is needed in order to preserve capital, it doesn't guarantee complete success. Modular and flexible design is the key to long-term stable operation and to meeting your data center plans. On the power system, make sure that UPS (Uninterruptible Power Supply) capacity can be added to existing modules without system disruption. Input and output distribution system design shouldn't be overlooked either; it allows the data center to adapt to any future changes in the underlying construction standards.

Improper Data Center Power Equipment

To design a data center that maximizes equipment uptime and reduces power consumption, you must choose the right power equipment based on the projected capacity. A common mistake is to provision for triple the expected server load in the name of redundancy, which is wasteful. Long-term power consumption trends are what you need to consider. Install automatic power-on generators and backup power sources, and choose equipment that can provide enough power to support the data center without waste.
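The sketch below contrasts the wasteful "triple the load" habit with sizing power from a projected peak load plus a growth margin and an N+1 redundant module; the module size and margins are illustrative assumptions rather than a standard.

```python
import math

def ups_modules_needed(projected_peak_kw, growth_margin=0.2, module_kw=250):
    """Size UPS capacity for the projected peak load plus a growth margin,
    then add one redundant module (N+1) instead of simply tripling the load."""
    required_kw = projected_peak_kw * (1 + growth_margin)
    modules = math.ceil(required_kw / module_kw)
    return modules + 1  # one extra module for redundancy

peak_kw = 600  # hypothetical projected peak IT load
print("N+1 sizing:", ups_modules_needed(peak_kw), "x 250 kW modules")
print("Naive 3x rule:", math.ceil(peak_kw * 3 / 250), "x 250 kW modules")
```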

Over-complicated Design

In many cases, redundancy targets introduce complexity. If you then add multiple ways of building a modular system, things can quickly get complicated. An over-complicated data center design means more equipment and components, and every additional component is a potential source of failure, which can cause problems such as:

  • Human error. Errors in operational data lead to system vulnerability and increase operational risks.
  • Higher cost. In addition to the extra equipment and components themselves, maintaining and repairing failed components incurs further charges.
  • Poor maintainability. If maintainability was not considered in the design, normal system operation and even personnel safety can be affected when the IT team needs to operate or service the equipment.

Conclusion

Avoid the nine missteps above to find design solutions for data center IT infrastructure and build a data center that suits your business. Data center design missteps have some impacts on enterprises, such as business expansion, infrastructure maintenance, and security risks. Hence, all infrastructure facilities and data center standards must be rigorously estimated during data center design to ensure long-term stable operation within a reasonable budget.

Article Source: The Most Common Data Center Design Missteps

Related Articles:

How to Utilize Data Center Space More Effectively?

Data Center White Space and Gray Space

Infographic – What Is a Data Center?

The Internet is where we store and receive a huge amount of information. Where is all the information stored? The answer is data centers. At its simplest, a data center is a dedicated place that organizations use to house their critical applications and data. Here is a short look into the basics of data centers. You will get to know the data center layout, the data pathway, and common types of data centers.

what is a data center

To know more about data centers, click here.

Article Source: Infographic – What Is a Data Center?

Related Articles:

What Is a Data Center?

Infographic — Evolution of Data Centers

Why Data Center Location Matters?

When it comes to data center design, location is a crucial aspect that no business can overlook. Where your data center is located matters a lot more than you might realize. In this article, we will walk you through the importance of data center location and factors you should keep in mind when choosing one.

The Importance of Data Center Location

Though data centers can be located anywhere with power and connectivity, the site selection can have a great impact on a wide range of aspects such as business uptime and cost control. Overall, a good data center location can better secure your data center and extend the life of data centers. Specifically, it means lower TCO, faster internet speed, higher productivity, and so on. Here we will discuss two typical aspects that are the major concerns of businesses.

Greater physical security

Data centers have extremely high security requirements, and once problems occur, normal operation will be affected. Of course, security and reliability can be improved by various means, such as building redundant systems. However, reasonable planning of the physical location of a data center can also effectively avoid harm caused by natural disasters such as earthquakes, floods, and fires. If a data center is located in a risk zone that is prone to natural disasters, that would lead to longer downtime and more potential damage to infrastructure.

Higher speed and better performance

Where your data center is located can also affect your website’s speed and business performance. When a user visits a page on your website, their computer has to communicate with servers in your data center to access data or information they need. That data is then transferred from servers to their computer. If your data center is located far away from your users who initiate certain requests, information and data will have to travel longer distances. That will be a lengthy process for your users who could probably get frustrated with slow speeds and latency. The result is lost users leaving your site with no plans to come back. In a sense, a good location can make high speed and impressive business performance possible.
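To make the distance argument concrete, here is a rough estimate of round-trip propagation delay over fibre, assuming light travels at about 200,000 km/s in glass (roughly 5 µs per km one way); real-world latency is higher once routing and queuing are added.

```python
def fiber_rtt_ms(route_km, speed_km_per_s=200_000):
    """Best-case round-trip time over fibre for a given route distance."""
    one_way_s = route_km / speed_km_per_s
    return 2 * one_way_s * 1000  # seconds -> milliseconds

for km in (50, 500, 5000):
    print(f"{km:>5} km route -> ~{fiber_rtt_ms(km):.1f} ms RTT (propagation only)")
```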

Choosing a Data Center Location — Key Factors

Choosing where to locate your data center requires balancing many different priorities. Here are some major considerations to help you get started.

key factors of choosing a data center location

Business Needs

First and foremost, the decision has to be made based on your business needs and market demands. Where are your users? Is the market promising in the location you are considering? You should always build your data center as close as possible to users you serve. It can shorten the time for users to obtain files and data and make for happy customers. For smaller companies that only operate in a specific region or country, it’s best to choose a nearby data center location. For companies that have much more complicated businesses, they may want to consider more locations or resort to third-party providers for more informed decisions.

Natural Disasters

Damages and losses caused by natural disasters are not something any data center can afford. These include big weather and geographical events such as hurricanes, tornadoes, floods, lightning and thunder, volcanoes, earthquakes, tsunamis, blizzards, hail, fires, and landslides. If your data center is in a risk zone, it is almost a matter of time before it falls victim to one. Conversely, a good location less susceptible to various disasters means a higher possibility of less downtime and better operation.

It is also necessary to analyze the climatic conditions of a data center location in order to select the most suitable cooling measures, thus reducing the TCO of running a data center. At the same time, you might want to set up a disaster recovery site that is far enough from the main site, so that it is almost impossible for any natural disaster to affect them at the same time.

Power Supply

The nature of data centers and requirements for quality and capacity determine that the power supply in a data center must be sufficient and stable. As power is the biggest cost of operating a data center, it is very important to choose a place where electricity is relatively cheap.

The factors we need to consider include:

Availability — You have to know the local power supply situation. At the same time, you need to check whether there are multiple mature power grids in alternative locations.

Cost — As we’ve mentioned, power costs a lot. So it is necessary to compare various power costs. That is to say, the amount of power should be viable and the cost of it should be low enough.

Alternative energy sources — You might also want to consider whether renewable energy sources such as solar and wind are available in alternative locations, which will help enterprises build a greener corporate image.

It is also necessary to understand local power supply reliability, electricity prices, and relevant policies, as well as the expected trends in power supply and market demand over the next few years.
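A quick way to compare candidate locations on electricity price alone is to estimate the annual energy bill from the IT load, the facility PUE, and the local rate, as in the sketch below; the loads and rates are hypothetical.

```python
def annual_energy_cost(it_load_kw, pue, price_per_kwh):
    """Annual electricity cost = IT load x PUE x 8760 hours x price per kWh."""
    return it_load_kw * pue * 8760 * price_per_kwh

# Hypothetical 2 MW IT load at PUE 1.4, two candidate electricity rates
for site, rate in (("Site A", 0.06), ("Site B", 0.11)):
    print(f"{site}: ${annual_energy_cost(2000, 1.4, rate):,.0f} per year")
```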

Other Factors

There are a number of additional factors to consider. These include local data protection laws, tax structures, land policy, availability of suitable networking solutions, local infrastructure, the accessibility of a skilled labor pool, and other aspects. All these things combined can have a great impact on the TCO of your data center and your business performance. This means you will have to do enough research before making an informed decision.

There is no one right answer for the best place to build a data center. A lot of factors come into play, and you may have to weigh different priorities. But one thing is for sure: A good data center location is crucial to data center success.

Article Source: Why Data Center Location Matters?

Related Articles:

Data Center White Space and Gray Space

Five Ways to Ensure Data Center Physical Security

Carrier Neutral vs. Carrier Specific: Which to Choose?

As the need for data storage drives the growth of data centers, colocation facilities are increasingly important to enterprises. A colocation data center brings many advantages to an enterprise, such as having carriers help manage its IT infrastructure, which reduces management costs. There are two types of hosting carriers: carrier-neutral and carrier-specific. In this article, we will discuss the differences between them.

Carrier Neutral and Carrier Specific Data Center: What Are They?

Accompanied by the accelerated growth of the Internet, the exponential growth of data has led to a surge in the number of data centers to meet the needs of companies of all sizes and market segments. Two types of carriers that offer managed services have emerged on the market.

Carrier-neutral data centers allow access and interconnection of multiple different carriers while the carriers can find solutions that meet the specific needs of an enterprise’s business. Carrier-specific data centers, however, are monolithic, supporting only one carrier that controls all access to corporate data. At present, most enterprises choose carrier-neutral data centers to support their business development and avoid some unplanned accidents.

For example, in 2021, about a third of AWS's cloud infrastructure was overwhelmed and down for 9 hours. This not only affected millions of websites but also countless other devices running on AWS. A week later, AWS was down again for about an hour, bringing down the PlayStation Network, Zoom, and Salesforce, among others. A third AWS outage also impacted Internet giants such as Slack, Asana, Hulu, and Imgur to a certain extent. Three cloud infrastructure outages in one month cost AWS dearly, and they also exposed the fragility of cloud dependence.

The above example shows that unplanned accidents at a single provider can disrupt an enterprise's business development and cause huge losses. To lower the risks of relying on a single carrier, enterprises need to choose a carrier-neutral data center and adjust their system architecture to protect their data center.

Why Should Enterprises Choose Carrier Neutral Data Center?

Carrier-neutral data centers are data centers operated by third-party colocation providers, but these third parties are rarely involved in providing Internet access services. Hence, the existence of carrier-neutral data centers enhances the diversity of market competition and provides enterprises with more beneficial options.

Another colocation advantage of a carrier-neutral data center is the ability to change internet providers as needed, saving the labor cost of physically moving servers elsewhere. We have summarized several main advantages of a carrier-neutral data center as follows.

Why Should Enterprises Choose Carrier Neutral Data Center

Redundancy

A carrier-neutral colocation data center is independent of the network operators and not owned by a single ISP. Because of this, it offers enterprises multiple connectivity options, creating a fully redundant infrastructure. If one carrier goes down, the carrier-neutral data center can instantly switch servers to another online carrier. This ensures that the entire infrastructure keeps running and stays online. On the network side, a cross-connect is used to link the ISP or telecom company directly to the customer's servers to obtain bandwidth from the source. This effectively avoids the extra delay introduced by network switching and ensures network performance.

Options and Flexibility

Flexibility is a key factor and advantage for carrier-neutral data center providers. For one thing, the carrier-neutral model allows network transmission capacity to be increased or decreased as needed. And as the business continues to grow, enterprises need colocation data center providers that can offer scalability and flexibility. For another, carrier-neutral facilities can provide additional benefits to their customers, such as enterprise DR options, interconnects, and MSP services. Whether your business is large or small, a carrier-neutral data center provider may be the best choice for you.

Cost-effectiveness

First, colocation data center solutions provide a high level of control and scalability, with room to expand storage, which supports business growth and saves expenses. They also lower physical transport costs for enterprises. Second, with all operators in the market competing for the best price and maximum connectivity, a carrier-neutral data center has a cost advantage over a single-network facility. What's more, since enterprises are free to use any carrier in a carrier-neutral data center, they can choose the best cost-benefit ratio for their needs.

Reliability

Carrier-neutral data centers also boast reliability. One of the most important aspects of a data center is the ability to maintain 100% uptime. Carrier-neutral data center providers can offer users ISP redundancy that a carrier-specific data center cannot. Having multiple ISPs at the same time provides better assurance for all clients: even if one carrier fails, another carrier can keep the system running. At the same time, the data center service provider supplies 24/7 security and uses advanced technology to secure login access at all access points, ensuring that customer data is safe. Also, the multi-layered protection of physical security cabinets ensures the safety of data transmission.

Summary

While every enterprise needs to determine the best option for its specific business needs, a comparison of carrier-neutral and carrier-specific facilities shows that a carrier-neutral data center service provider is the better option for today's cloud-based business customers. The advantages of working with a carrier-neutral managed service provider include lower total cost, lower network latency, and better network coverage. With no downtime and fewer concerns about equipment performance, IT decision-makers for enterprise clients have more time to focus on the more valuable areas that drive continued business growth and success.

Article Source: Carrier Neutral vs. Carrier Specific: Which to Choose?

Related Articles:

What Is Data Center Storage?

On-Premises vs. Cloud Data Center, Which Is Right for Your Business?

Fibre Patch Panel Termination

We already know that the fibre patch panel is the bridge between fibre patch cables. The fibre patch panel, also known as a fibre distribution panel, serves as a convenient place to terminate all the fibre optic cables running from different rooms into the wiring closet and provides connection access to the cable's individual fibres. Fibre patch panels are termination units, designed with a secure, organised chamber for housing connectors and splice units.

How Do Patch Panel Termination Units Work?

There are two major termination solutions for fibre cable: field-terminated and pre-terminated. Pre-termination, with most devices terminated by the manufacturer in advance, requires less effort during installation than field termination does. Therefore, this post offers a glimpse into field termination, which is the termination of the fibre optic cable in the field, after installation.

Fibre Patch Panel Termination Procedure

In the termination process, the fibre optic cable needs to be pulled between two points, connectors need to be attached to each individual fibre strand, and the connectors are then connected to a patch panel; a variety of tools are needed along the way. With field termination, we can determine the cable length on site, and fibre optic bulk cable is very easy to pull from either end of the installation circuit.
To carry out the termination, the following tools are needed: fibre optic enclosure, fibre cable, patch panel, cable ties, connector panels, permanent marker, fibre optic stripper, cleaver, metric ruler, and rubbing alcohol.

To terminate the cable, first slide the boot onto the fibre. Strip the fibre back at least about an inch and a half. Place a mark at 15.5 mm for ST and SC connectors or at 11.5 mm for LC connectors. Clean the stripped fibre with an alcohol wipe and remove any debris. Set the stripped fibre into the cleaver and cleave it. Insert the cleaved fibre into the rear of the connector until the mark aligns with the back of the connector body. Slide the boot up and over the rear of the connector body. After the termination, transmission testing of the assemblies needs to be performed.

fiber optics termination
For the final fibre patch panel termination, first open the front and rear doors of the patch panel and remove the covers. Remove the internal strain relief bracket. Second, use cable ties to fasten the cables to the bracket. The fibres should be placed inside the clips on the tray to segregate the fibres from the A and B slots. Snap the connector panels into the panel clips. Route the excess fibre slack into the slack management clips. Make a bend in the fibre to maintain slight pressure on the connection.

fix the cover

Conclusion

The processes of device connection and cable management are linked with each other; missing or failing at any one of them will result in an imperfect system, or even damage. If we own a fibre patch panel, we should make full use of its termination function. The products provided by FS.COM enable you to perfect your cabling system.

Wall Mount VS Rack Mount Patch Panel

Patch panels are termination units designed to provide a secure, organised chamber for housing connectors and splice units. Their main function is to terminate the fibre optic cable and provide connection access to the cable's individual fibres. Patch panels can be categorised into different types based on a few different criteria. Last time, we shed light on copper and fibre patch panels; now let's look at a different pair, namely the wall mount patch panel and the rack mount patch panel.

Wall Mount Patch Panel

As the name suggests, a wall mount patch panel is a patch panel fixed on the wall. Wall mount patch panels are designed to provide the essential interface between multiple fibre cables and the optical equipment installed on the customer's premises. The units offer networking and fibre distribution from the vault or wiring closet to the user's terminal equipment.

This kind of patch panel consists of two separate compartments. As shown below, the left side is used for accommodating outside plant cables entering the building, pigtails, and pigtail splices, whereas the right side is designed for internal cable assembly networking. Both sides have a door secured with a quarter-turn latch.

wall mount patch panel

Rack Mount Patch Panel

The rack mount patch panel usually holds the fibres horizontally and looks like a drawer. Rack mount panels come in 1U, 2U, and 4U sizes and can hold up to 288 or even more fibres. They can be mounted on 19″ and 23″ standard relay racks. Rack mount enclosures come in two kinds: one is the slide-out variety, and the other incorporates a removable lid. With the latter, the tray can be pulled out and lowered to a 10-degree or even 45-degree working angle to provide easy access for maintenance or installation work.

rack mount patch panel
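As a rough planning aid, the sketch below estimates how many rack mount panels a given fibre count requires; the per-panel densities are illustrative assumptions only, since actual capacity varies by product.

```python
import math

# Hypothetical panel densities (fibres per panel) - check vendor specifications
PANEL_CAPACITY = {"1U": 48, "2U": 96, "4U": 288}

def panels_needed(total_fibres, size="1U"):
    """Number of panels of the given size needed to terminate the fibres."""
    return math.ceil(total_fibres / PANEL_CAPACITY[size])

print(panels_needed(240, "1U"))  # -> 5 x 1U panels
print(panels_needed(240, "4U"))  # -> 1 x 4U panel
```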

Wall Mount VS Rack Mount Patch Panel

  • Installation

When installing wall mount patch panels, users need to leave at least 51 mm of additional space on each side to allow the doors to be opened and removed. Although the panel can be easily mounted to the wall using the internal mounting holes, four screws are required when it is attached to a plywood wall, expansion inserts with wood screws for concrete walls, and “molly bolts” for sheetrock. In contrast, the installation of a rack mount patch panel just needs four screws and no drilling into the wall.

  • Space Occupation

Thinking from another perspective, the advantage of wall mount patch panels is that they allow you to optimise your work space by keeping equipment off floors and desks, which is superior to the rack mount patch panel.

  • Application

Both panels can be applied to indoor premise networks, central offices (FTTx), telecommunication networks, security surveillance applications, process automation and control systems, and power systems and controls, while the rack mount patch panel has an advantage over the wall mount patch panel in that it can also be applied to data centres.

Conclusion

To sum up, patch panels are available in rack mount and wall mount versions and are usually placed near terminating equipment (within patch cable reach). Both types provide easy cable management, since the panel ports can be labeled according to location, desktop number, etc., to help identify which cable from which location is terminated on which port of the patch panel, and changes can be made at the patch panel. The world-renowned FS.COM can provide you with top-quality rack mount and wall mount patch panels. Buyers are welcome to contact us.

A Wise Decision to Choose DWDM Mux/DeMux

The advent of big data calls for highly efficient and high-capacity data transmission. To solve the paradox of increasing bandwidth while spending less, the WDM (wavelength division multiplexing) multiplexer/demultiplexer is the perfect choice. This technology can transport extremely large volumes of data traffic in telecom networks. It's a good way to deal with the bandwidth explosion from the access network.

WDM

WDM stands for wavelength division multiplexing. At the transmitting side, various light waves are multiplexed into one single signal that is transmitted through an optical fibre. At the receiving end, the light signal is split back into different light waves. There are two standards of WDM: coarse wavelength division multiplexing (CWDM) and dense wavelength division multiplexing (DWDM). The main difference is the wavelength step between the channels: for CWDM this is 20nm (coarse), and for DWDM it is typically 0.8nm (dense). The following introduces the DWDM Mux/Demux.

DWDM Technology

DWDM technology works by combining and transmitting multiple signals simultaneously at different wavelengths over the same fibre. This technology responds to the growing need for efficient and capable data transmission by working with different formats, such as SONET/SDH, while increasing bandwidth. It uses different colours (wavelengths) which are combined in a device called a Mux/Demux, short for multiplexer/demultiplexer, where the optical signals are multiplexed and de-multiplexed. A demultiplexer is usually used with a multiplexer at the receiving end.

Mux/Demux

A mux selects one of several input signals to send to the output, so a multiplexer is also known as a data selector. A mux acts as a multiple-input, single-output switch. It sends optical signals at high speed over a single fibre optic cable and makes it possible for several signals to share one device or resource instead of requiring one device per input signal. A mux is mainly used to increase the amount of data that can be sent over the network within a certain amount of time and bandwidth.

A demux works in exactly the opposite manner. It is a device that has one input and more than one output, and it is often used to send one single input signal to one of many devices. The main function of an optical demultiplexer is to receive a signal from a fibre consisting of multiple optical frequencies and separate it into its frequency components, which are coupled into as many individual fibres as there are frequencies.

mux-and-demux

DWDM Mux/Demux modules deliver the benefits of DWDM technology in a fully passive solution. They are designed for long-haul transmission where wavelengths are packed compactly together. FS.COM can provide modules that pack up to 48 wavelengths on the 100GHz grid (0.8nm spacing) and 96 wavelengths on the 50GHz grid (0.4nm spacing) onto a single fibre, compliant with the ITU G.694.1 standard and Telcordia GR-1221. When applied with Erbium-Doped Fibre Amplifiers (EDFAs), higher-speed communications over longer reach (thousands of kilometres) can be achieved.

Currently, common DWDM Mux/Demux configurations range from 8 to 96 channels, and in the future they may reach 200 channels or more. A DWDM system typically transports channels (wavelengths) in what is known as the conventional band or C band, with all channels in the 1550nm region. The denser channel spacing requires tighter control of the wavelengths, so cooled DWDM optical transceiver modules are required, in contrast to CWDM, which has broader channel spacing and uses uncooled optics such as CWDM SFP and CWDM XFP modules.
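To illustrate how tightly DWDM channels sit on the C-band grid, the short sketch below derives channel wavelengths from the 100GHz ITU grid anchored at 193.1 THz (per ITU-T G.694.1) using lambda = c / f; the channel numbers chosen are arbitrary examples.

```python
C = 299_792_458  # speed of light in vacuum, m/s

def dwdm_channel_wavelength_nm(n, spacing_ghz=100):
    """Wavelength of ITU grid channel n, where f = 193.1 THz + n x spacing."""
    f_hz = 193.1e12 + n * spacing_ghz * 1e9
    return C / f_hz * 1e9  # metres -> nanometres

for n in (-2, -1, 0, 1, 2):
    print(f"n={n:+d}: {dwdm_channel_wavelength_nm(n):.2f} nm")
# Adjacent 100GHz channels differ by roughly 0.8 nm around 1550 nm.
```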

DWDM Mux/Demux offered by FS.COM are available in the form of plastic ABS module cassette, 19” rack mountable box or standard LGX box. Our DWDM Mux/Demux are modular, scalable and are perfectly suited to transport PDH, SDH / SONET, ETHERNET services over DWDM in optical metro edge and access networks. FS.COM highly recommends you our 40-CH DWDM Mux/DeMux. It can be used in fibre transition application as well as data centre interconnection for bandwidth expansion. With the extra 1310nm port, it can easily connect to the existing metro network, achieving high-speed service without replacing any infrastructure.

DWDM MUX DEMUX

Conclusion

With DWDM Mux/DeMux, single fibres have been able to transmit data at speeds up to 400Gb/s. To expand the bandwidth of your optical communication networks with lower loss and greater distance capabilities, DWDM Mux/DeMux module is absolutely a wise choice. For other DWDM equipment, please contact via sales@fs.com.

Available Interconnect Solutions for FTTH Drop Cables

FTTH, short for fibre to the home, is the installation and use of optical fibre from a central point directly to individual buildings such as residences, apartment buildings and businesses to provide unprecedented high-speed Internet access. In determining the best solution for a particular FTTH deployment, providers must first decide between splices and connectors. Then, they must choose the best splice or connector for the particular circumstances of deployment. This article explores the available interconnect solutions for FTTH drop cables and discusses their advantages and disadvantages in various deployment circumstances.

FTTH

Splice vs Connector

Before deploying an FTTH network, providers must first decide whether to use a splice, which is a permanent joint, or a connector, which can be easily mated and un-mated by hand. Both splices and connectors are widely used at the distribution point. At the home's optical network terminal (ONT) or network interface device (NID), either a field-terminated connector or a spliced-on factory-terminated connector can be used.

Splices enable a transition from 250-micron drop cable fibre to jacketed cable with high reliability and eliminate the possibility of the interconnection point becoming damaged or dirty. Splices are most appropriate for drop cables dedicated to a particular living unit where no future fibre rearrangement is necessary, such as in a greenfield or new construction application where the service provider can easily install all of the drop cables during the living unit construction.

Connectors are easier to operate and provide greater network flexibility than splices, because they can be mated and unmated repeatedly, allowing them to be reused over and over again. Connectors also provide an access point for network testing. However, connectors cost more than splices, even though they make network rearrangement much cheaper. Therefore, providers must weigh the material cost of connectors, along with the potential for contamination and damage, against their greater flexibility and lower network management expense.

Splice vs Connector

Choosing the Right Splice

Splicing technology for FTTH deployment falls into two major categories: fusion and mechanical.

Fusion splicing is considered to be a solution for FTTH drop splicing, especially considering it provides a high quality splice with low insertion loss and reflection. However, fusion splicing is expensive and requires trained technicians to operate. It is time-consuming and the slow installation speed hinders its status as the preferred solution. Fusion splicing is best suited for companies that have already invested in fusion splicing equipment and do not need to purchase additional splicing machines.

Mechanical splices can perform well in many environments and have been successfully deployed around the world in FTTH installations. A typical mechanical fibre optic splice includes a small plastic housing with an aluminum alloy element to precisely align and clamp fibres. An index matching gel inside the splices maintains a low-loss optical interface, which results in an average insertion loss of less than 0.1 dB.

Splicer
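When weighing splices against connectors, a simple link loss budget helps quantify the trade-off. The sketch below uses commonly quoted ballpark values (about 0.35 dB/km of fibre at 1310nm, about 0.1 dB per mechanical splice, about 0.3 dB per connector pair); these are illustrative assumptions, not figures from a specific product.

```python
def link_loss_db(length_km, n_splices, n_connector_pairs,
                 fiber_db_per_km=0.35, splice_db=0.1, connector_db=0.3):
    """Estimated end-to-end loss of an FTTH drop link in dB."""
    return (length_km * fiber_db_per_km
            + n_splices * splice_db
            + n_connector_pairs * connector_db)

# Hypothetical 300 m drop: spliced at both ends vs. connectorised at both ends
print(f"Spliced drop:       {link_loss_db(0.3, 2, 0):.2f} dB")
print(f"Connectorised drop: {link_loss_db(0.3, 0, 2):.2f} dB")
```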

Choosing the Right Connector

According to the drop cables used, connectors can be divided into two types: factory-terminated and field-terminated.

Factory-terminated
Factory-terminated drop cables can provide high-performing, reliable connections with low optical loss. Factory termination also keeps labor costs low by reducing installation time. An excellent application is a patch cord that connects a desktop ONT to a wall outlet box inside the living unit. A key failure point in the network is when the end user accidentally breaks the fibre in the cable that connects the desktop ONT. If this occurs, the patch cord can be easily replaced. However, factory-terminated cables can be expensive compared to field-terminated alternatives.

Field-terminated
Many providers prefer field-terminated connectors where the installation can be customised by using a reel of cable and connectors, such as fuse-on connectors and mechanical connectors. For example, fuse-on connectors use the same technology as fusion splicing to provide the highest level of optical performance in a field-terminated connector. By incorporating the fusion splice inside the connector, the need for a separate splice tray is eliminated. However, fuse-on connectors share many of the same drawbacks as fusion splicing: they require expensive equipment, highly trained technicians, packing and unpacking time, and a power source, ratcheting up installation costs. Mechanical connectors, on the other hand, provide an alternative to fuse-on connectors for field installation of drop cables.

Summary

The drop cable interconnect solution comprises a key component of an FTTH network. Reliable broadband service depends upon robust connections at the distribution point and the NID/ONT. Choosing the right connectivity product can result in cost savings and efficient deployment while providing reliable service to customers. Globally, most FTTH drop cable installations have been and continue to be field-terminated on both ends of the cable with mechanical connectivity solutions.