
MTP Trunks for High-Density Data Centres

The demand for bandwidth keeps growing, and higher bandwidth means more fibres in the cabling infrastructure. These demands also make network architectures more complex. In a spine-and-leaf architecture, every leaf switch is interconnected with every spine switch, so with a leaf-spine configuration in data centres, fibre counts multiply far more quickly than in traditional three-layer distribution architectures.

At the same time, 40GbE and 100GbE are spreading quickly in the data centre. Parallel optics such as 40G QSFP+ use a 12-fibre MPO/MTP interface instead of duplex fibre, which further increases the fibre count in the structured cabling. As data centres evolve, links may require 144 fibres, 288 fibres or even more, so data centre managers face challenges such as limited space, deployment efficiency and, of course, cost.
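
To get a feel for how quickly the counts grow, here is a minimal sketch of the arithmetic, assuming a hypothetical full-mesh spine-and-leaf fabric in which every leaf-to-spine link is a 40G parallel-optic link over a 12-fibre MPO/MTP trunk (the switch counts are illustrative only):

```python
# Rough fibre-count estimate for a full-mesh spine-and-leaf fabric.
# Assumptions (illustrative only): every leaf connects to every spine,
# and each leaf-spine link is a 40G parallel-optic link on a 12-fibre MPO/MTP.

def fabric_fibre_count(spines: int, leaves: int, fibres_per_link: int = 12) -> int:
    links = spines * leaves          # full mesh: one link per spine-leaf pair
    return links * fibres_per_link   # total fibres in the backbone

# Example: 4 spines and 32 leaves already need 1,536 backbone fibres.
print(fabric_fibre_count(spines=4, leaves=32))   # -> 1536
```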

MTP Trunks Deployment Solutions

To address these challenges, many data centre cabling designs use MTP trunks with up to 144 fibres. In data centres requiring more than 144 fibres, multiple runs of a 144-fibre cable assembly are typically installed to reach the total desired fibre count. For example, if a link requires 288 fibres from the main distribution area to another location, two 144-fibre trunk cables would be installed. This method, however, consumes pathway space that could otherwise be kept for future growth. Figure 1 depicts the space savings across three deployment scenarios in a 12-inch x 6-inch cable tray with a 50 percent fill ratio:

  • 4,440 total fibres using 370 x 12-fibre MTP trunks
  • 13,680 total fibres using 95 x 144-fibre MTP trunks
  • 16,128 total fibres using 56 x 288-fibre MTP trunks
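
The totals in the list are simply the number of trunks multiplied by the fibres per trunk; a minimal sketch of that arithmetic, using the scenario figures above:

```python
# Total backbone fibres for each tray scenario: trunks x fibres per trunk.
scenarios = {
    "12-fibre MTP trunks": (370, 12),
    "144-fibre MTP trunks": (95, 144),
    "288-fibre MTP trunks": (56, 288),
}

for name, (trunks, fibres_per_trunk) in scenarios.items():
    print(f"{name}: {trunks * fibres_per_trunk} fibres")
# 12-fibre trunks:  4,440 fibres
# 144-fibre trunks: 13,680 fibres
# 288-fibre trunks: 16,128 fibres
```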


Figure 1. Comparison of trunks with different fibre counts

MTP connectivity is one of the key solutions for high-density environments. MTP cable allows 12 fibres to be terminated at a time rather than terminating individual fibre strands, and this kind of cabling also simplifies future migration to 40/100/200/400GbE networks based on parallel optics. Several implementation options are available for high-fibre-count cable and connectivity.

MTP Trunks

MTP trunk cable assemblies are offered in standard 12-, 24-, 48-, 72-, 96- and 144-fibre versions in a compact, rugged micro-cable structure. Their high port density brings significant savings in installation time and cost. Thanks to premium connectors and high-quality fibre, they deliver low insertion loss and low power penalties in high-speed network environments, and the multifibre connector and compact outer diameter also ease space pressure in costly data centres.

MTP trunk cables are available either as mesh bundles or as distribution (fan-out) trunks. Because infrastructure designs, cabling environments and pathway types differ, MTP connectivity in backbone cabling can be implemented in different ways. Two common options are:

  • Cables factory terminated on both ends with MTP connectors (MTP-MTP trunks)
  • Cables factory terminated on one end with MTP connectors (MTP pigtail trunks)


Figure 2. MTP assembly types

MTP-MTP Trunks

MTP-MTP trunk assemblies are used where all fibres land at a single location at each end of the link—for example, between the main distribution area (MDA) and the server rows, or between the MDA and the core switching racks in a computer room or data hall, as Figure 3 shows. MTP-MTP trunks also appear between the MDAs of multiple computer rooms or data halls where open cable tray is the pathway.


Figure 3. MTP-MTP trunk assembly deployed in a computer room

MTP Pigtail Trunks

MTP pigtail trunks can be used in environments where the pathway doesn’t allow a pre-terminated end with a pulling grip to fit through—for example, a small conduit space (see Figure 4). This approach is common when connectivity is needed between the MDAs of multiple computer rooms or data halls. A pigtail-trunk deployment is also useful when the exact pathway or route is not fully known, since it avoids having to measure exact lengths before ordering the assembly.


Figure 4. MTP pigtail trunk field terminated in two computer rooms

Conclusion

Many factors should be considered when planning and installing a data centre cabling infrastructure for current and future needs, especially in high-density environments. Before choosing the best cabling installation solution, take the following points into account:

  • Application environment: inside or between computer rooms or data halls
  • Design requirements: traditional three-layer or spine-and-leaf architecture
  • Future proofing: transition path and future-technology support

As this article has shown, high-fibre-count MTP trunks are the best solution for your backbone cabling: they enable faster installation, lower pathway congestion and greater efficiency while delivering the bandwidth needed for 40GbE/100GbE/200GbE and beyond.

A Wise Decision to Choose DWDM Mux/DeMux

The advent of big data calls for highly efficient, high-capacity data transmission. To resolve the paradox of needing more bandwidth while spending less, the WDM (wavelength division multiplexing) multiplexer/demultiplexer is an excellent choice. This technology can carry extremely large volumes of traffic in telecom networks and is a good way to cope with the bandwidth explosion coming from the access network.

WDM

WDM stands for wavelength division multiplexing. At the transmitting side, multiple light waves are multiplexed into a single signal that is transmitted over one optical fibre; at the receiving end, the signal is split back into its individual wavelengths. There are two WDM standards: coarse wavelength division multiplexing (CWDM) and dense wavelength division multiplexing (DWDM). The main difference is the wavelength spacing between channels: 20 nm for CWDM (coarse) and typically 0.8 nm for DWDM (dense). The following sections introduce the DWDM Mux/Demux.

DWDM Technology

DWDM works by combining and transmitting multiple signals simultaneously on different wavelengths over the same fibre. The technology answers the growing need for efficient, high-capacity transmission, working with existing formats such as SONET/SDH while multiplying bandwidth. The different colours (wavelengths) are combined and separated in a device called a Mux/Demux, short for multiplexer/demultiplexer, where the optical signals are multiplexed and demultiplexed; the demultiplexer is typically paired with a multiplexer at the receiving end.

Mux/Demux

A mux selects among several input signals and forwards them to a single output, which is why a multiplexer is also known as a data selector. It acts as a multiple-input, single-output switch, sending optical signals at high speed over a single fibre optic cable. A mux lets several signals share one device or resource instead of requiring one device per input signal, and it is mainly used to increase the amount of data that can be sent over the network within a given amount of time and bandwidth.

A demux works in exactly the opposite manner: it has one input and multiple outputs and is often used to route a single input signal to one of many devices. The main function of an optical demultiplexer is to take a fibre carrying multiple optical frequencies and separate it into its frequency components, coupling each one into its own individual fibre.
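
As a rough illustration of the data-selector idea (the electronics view, not the optics), here is a minimal Python sketch of a mux as a multiple-input, single-output switch and a demux as its single-input, multiple-output counterpart; the signal values and port counts are illustrative only:

```python
# Data-selector view of a mux and demux (an electronics analogy, not the optics):
# a mux forwards one of several inputs to a single output, while a demux routes
# a single input to one of several outputs. Signal values here are illustrative.

def mux(inputs, select):
    """Multiple-input, single-output switch: forward the selected input."""
    return inputs[select]

def demux(signal, select, outputs):
    """Single-input, multiple-output switch: deliver the signal to one output."""
    routed = [None] * outputs
    routed[select] = signal
    return routed

print(mux(["a", "b", "c", "d"], select=2))     # -> 'c'
print(demux("payload", select=1, outputs=4))   # -> [None, 'payload', None, None]
```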


DWDM Mux/Demux modules deliver the benefits of DWDM technology in a fully passive solution. They are designed for long-haul transmission, where wavelengths are packed tightly together. FS.COM provides modules that pack up to 48 wavelengths on the 100 GHz grid (0.8 nm spacing) or 96 wavelengths on the 50 GHz grid (0.4 nm spacing) onto a single fibre, compliant with the ITU G.694.1 standard and Telcordia GR-1221. Combined with erbium-doped fibre amplifiers (EDFAs), higher-speed communication over longer reach (thousands of kilometres) can be achieved.

Currently, common DWDM Mux/Demux configurations range from 8 to 96 channels; in the future, 200 channels or more may be possible. A DWDM system typically transports its channels (wavelengths) in what is known as the conventional band, or C band, with all channels in the 1550 nm region. The denser channel spacing requires tighter wavelength control, so cooled DWDM optical transceivers are required, in contrast to CWDM, whose broader channel spacing allows uncooled optics such as CWDM SFP and CWDM XFP modules.
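
For reference, the ITU G.694.1 grid mentioned above anchors its channels at 193.1 THz and spaces them every 100 GHz (about 0.8 nm) or 50 GHz (about 0.4 nm); a minimal sketch of the frequency-to-wavelength arithmetic:

```python
# ITU-T G.694.1 DWDM frequency grid: channels are anchored at 193.1 THz and
# spaced every 100 GHz (0.1 THz, ~0.8 nm) or 50 GHz (0.05 THz, ~0.4 nm).
C_VACUUM_M_PER_S = 299_792_458.0

def dwdm_channel(n: int, spacing_ghz: float = 100.0):
    """Return (frequency in THz, wavelength in nm) for grid offset n from 193.1 THz."""
    freq_thz = 193.1 + n * spacing_ghz / 1000.0
    wavelength_nm = C_VACUUM_M_PER_S / (freq_thz * 1e12) * 1e9
    return freq_thz, wavelength_nm

# Example: 193.1 THz sits at ~1552.52 nm; one 100 GHz step away is ~0.8 nm apart.
print(dwdm_channel(0))    # -> (193.1, ~1552.52)
print(dwdm_channel(-1))   # -> (193.0, ~1553.33)
```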

The DWDM Mux/Demux units offered by FS.COM are available as plastic ABS module cassettes, 19” rack-mountable boxes or standard LGX boxes. They are modular and scalable and are well suited to transporting PDH, SDH/SONET and Ethernet services over DWDM in optical metro edge and access networks. FS.COM particularly recommends its 40-channel DWDM Mux/Demux, which can be used in fibre transition applications as well as data centre interconnection for bandwidth expansion. With its extra 1310 nm port, it connects easily to an existing metro network, enabling high-speed services without replacing any infrastructure.


Conclusion

With a DWDM Mux/Demux, a single fibre can carry data at speeds of up to 400 Gb/s. To expand the bandwidth of your optical communication network with lower loss and greater reach, a DWDM Mux/Demux module is a wise choice. For other DWDM equipment, please contact sales@fs.com.

User Guide for CWDM MUX/DEMUX

Is there a way to enhance your network while saving cost, time and effort? Would you like to move away from the traditional model of running many fibre cables? The cost-effective answer is the CWDM MUX/DEMUX (coarse wavelength division multiplexing multiplexer/demultiplexer). If you are using one for the first time, the following guide will help.


CWDM Mux/Demux Introduction

First, you need to know what CWDM is. CWDM is a technology that multiplexes multiple optical signals onto a single fibre by using different laser wavelengths to carry different signals. A CWDM MUX/DEMUX applies this principle: it maximizes capacity and increases bandwidth over a single or dual fibre cable by mixing signals at different wavelengths onto one fibre and splitting them back into the original signals at the end of the link. Such a device reduces the number of fibre cables required while providing multiple independent data links. CWDM MUX/DEMUX modules range from 2 to 18 channels and come in 1RU 19’’ rack chassis. The example below uses a 9-channel 1290–1610 nm single-fibre CWDM Mux/Demux, a half-19’’/1RU module with LC/UPC connections.
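
For orientation, the full CWDM wavelength plan from which such modules draw their channels spans 18 centre wavelengths on a 20 nm spacing (ITU-T G.694.2; some references state the centres as 1271–1611 nm, i.e. shifted by 1 nm); a minimal sketch:

```python
# Commonly quoted CWDM wavelength plan: 18 channels on a 20 nm spacing.
# Some references shift these centres by 1 nm (1271-1611 nm).
FIRST_NM, SPACING_NM, CHANNELS = 1270, 20, 18

grid = [FIRST_NM + i * SPACING_NM for i in range(CHANNELS)]
print(grid)               # [1270, 1290, 1310, ..., 1590, 1610]

# A 2- to 18-channel mux/demux uses a subset of this plan; the 9-channel
# 1290-1610 nm module described below draws its wavelengths from these centres.
print(grid[1], grid[-1])  # 1290 1610
```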

Features are as follows:
  • Supports up to 9 data streams
  • Wavelength range: 1260~1620 nm
  • Low insertion loss; half-19’’/1RU low-profile modular design
  • Passive, no electrical power required
  • Simplex LC/UPC line port
  • Duplex LC/UPC CWDM channel ports, easily supporting duplex patch cables between the transceiver and the passive unit
  • Operating temperature: 0~70℃
  • Storage temperature: -40~85℃

Preparation for Installation

To connect the CWDM Mux/Demux, you need two strands of 9/125 μm single-mode fibre cable. Supported transceivers cover the wavelengths 1290 nm, 1370 nm, 1410 nm, 1450 nm, 1490 nm, 1530 nm, 1570 nm and 1610 nm. This device is used together with the corresponding 9-channel CWDM-2759-LC-LGX-SFB module.


To ensure reliable and safe long-term operation, please note the points below:

  • Use only in dry, indoor environments.
  • Do not place the CWDM Mux/Demux in an enclosed space without airflow, since the unit generates heat.
  • Do not place a power supply directly on top of the unit.
  • Do not obstruct the unit’s ventilation holes.

System Installation Procedures

1. To install the CWDM MUX/DEMUX system, first switch off all devices.
2. Install the CWDM transceivers. Each channel uses a transceiver with a specific wavelength, so each transceiver must be plugged into its matching channel and no wavelength may be used more than once in the system. Paired devices must carry transceivers of the same wavelength.
3. Connect the CWDM MUX/DEMUX units with matching single-mode fibre cables. Before connecting a cable, inspect the connectors to make sure they are clean; cleaning is an important factor in achieving good network performance. Guidelines:

  • Keep connectors covered when not in use to prevent damage.
  • Regularly inspect fibre ends for signs of damage.
  • Always clean and inspect fibre connectors before making a connection.

4. Power up the system.

Troubleshooting

If you have connected the system but find there is no data link, do the following:

1. Check the attached devices by directly connecting the CWDM MUX/DEMUX units with a short fibre cable.
2. Check the fibre cables and fibre connectors.
3. Check that no wavelength occurs more than once across the CWDM MUX/DEMUX units.
4. Check that each transceiver is inserted into its matching port on the CWDM MUX/DEMUX units (the sketch below shows one way to record and check items 3 and 4).
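
A simple way to double-check items 3 and 4 is to record which transceiver wavelength sits in which channel port and validate that record; a minimal sketch, with hypothetical port labels and wavelengths:

```python
# Minimal sketch of the wavelength checks in troubleshooting items 3 and 4.
# The port labels and wavelengths below are hypothetical examples only.

def validate_assignments(assignments: dict) -> list:
    """assignments maps channel-port label (e.g. '1530 nm') -> transceiver wavelength in nm."""
    problems = []
    seen = {}
    for port, wavelength in assignments.items():
        expected = int(port.split()[0])          # a '1530 nm' port expects a 1530 nm transceiver
        if wavelength != expected:
            problems.append(f"{port}: transceiver is {wavelength} nm, expected {expected} nm")
        if wavelength in seen:
            problems.append(f"{wavelength} nm used on both {seen[wavelength]} and {port}")
        seen[wavelength] = port
    return problems

print(validate_assignments({"1530 nm": 1530, "1570 nm": 1530}))
# -> mismatched port and duplicated wavelength are both reported
```
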
Conclusion

The CWDM MUX/DEMUX is a cost-effective solution for expanding bandwidth over short links. Besides saving cost, CWDM lasers consume less power and take up less space. FS.COM offers high-quality CWDM MUX/DEMUX and DWDM MUX/DEMUX units. If you are using these devices for the first time, keep the notes above in mind.

Things You Need to Know Before Deploying 10 Gigabit Ethernet Network

Since its introduction, 10 Gigabit Ethernet has been adopted by a large number of enterprises in their corporate backbones, data centres and server farms to support high-bandwidth applications. But how do you achieve a reliable, stable and cost-effective 10 Gbps network? Here are several things you should know before starting the deployment.

More Efficient for the Server Edge

Many organizations optimize their data centres through server virtualization, which supports several applications and operating systems on a single server by defining multiple virtual machines on it. This lets organizations reduce server inventory, utilize servers better and manage resources more efficiently. Server virtualization relies heavily on networking and storage: virtual machines require a lot of storage, and the network connectivity between servers and storage must be fast enough to avoid bottlenecks. 10GbE provides that fast connectivity for virtualized environments.

More Cost-effective for SAN

There are three types of storage in a network: direct-attached storage, network-attached storage and the SAN (storage area network). Of the three, the SAN is the most flexible and scalable solution for data centre and high-density applications, but it is expensive and requires specially trained staff to install and maintain the Fibre Channel interconnect fabric.

The Internet Small Computer System Interface (iSCSI) makes 10 Gigabit Ethernet an attractive interconnect fabric for SAN applications. iSCSI allows 10 Gigabit Ethernet infrastructure to be used as a SAN fabric, which compares favourably with Fibre Channel because it reduces equipment and management costs, enhances server management, improves disaster recovery and delivers excellent performance.

Reducing Bottlenecks for the Aggregation Layer

Traffic at the edge of the network has increased dramatically. Gigabit Ethernet to the desktop has become more popular as it has become less expensive, and this wider adoption raises the oversubscription ratios in the rest of the network, creating a bottleneck between the large amount of Gigabit traffic at the edge and the aggregation layer or core.

10 Gigabit Ethernet allows the aggregation layer to scale to meet the increasing demands of users and applications, and it relieves this bottleneck for three reasons. First, a 10 Gigabit Ethernet link uses fewer fibre strands than Gigabit Ethernet aggregation. Second, 10 Gigabit Ethernet can carry multiple Gigabit streams. Third, it provides greater scalability, making the network more future-proof.
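
To make the oversubscription point concrete, here is a minimal sketch of the ratio calculation; the port counts are illustrative and not taken from any particular switch:

```python
# Oversubscription ratio = edge-facing bandwidth / uplink bandwidth.
# Illustrative port counts only; they are not taken from any particular switch.

def oversubscription(edge_ports: int, edge_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    return (edge_ports * edge_gbps) / (uplink_ports * uplink_gbps)

# 48 x 1GbE desktop ports aggregated onto 4 x 1GbE uplinks: 12:1 oversubscription.
print(oversubscription(48, 1, 4, 1))    # -> 12.0
# The same edge served by 4 x 10GbE uplinks: 1.2:1, far less of a bottleneck.
print(oversubscription(48, 1, 4, 10))   # -> 1.2
```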

Fibre Cabling Choices

When deploying a 10 Gigabit Ethernet network, three important factors should be considered: the type of fibre cable (MMF or SMF), the type of 10 Gigabit Ethernet physical interface, and the optics module form factor (XENPAK, X2, XFP or SFP+).

Cable Type              Interface       Max Distance
MMF (OM1/OM2/OM3)       10GBASE-SR      300 m (over OM3)
MMF (OM1/OM2/OM3)       10GBASE-LRM     220 m
SMF (9/125 µm fibre)    10GBASE-LR      10 km
SMF (9/125 µm fibre)    10GBASE-ER      40 km
SMF (9/125 µm fibre)    10GBASE-ZR      80 km
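
For planning purposes, the reach limits in the table above can be captured in a small lookup; a minimal sketch (the 10GBASE-SR figure assumes OM3 fibre, as noted in the table):

```python
# Reach limits from the table above (10GBASE-SR figure assumes OM3 fibre).
REACH = {
    "10GBASE-SR":  ("MMF", 300),       # metres, over OM3
    "10GBASE-LRM": ("MMF", 220),       # metres
    "10GBASE-LR":  ("SMF", 10_000),    # metres
    "10GBASE-ER":  ("SMF", 40_000),    # metres
    "10GBASE-ZR":  ("SMF", 80_000),    # metres
}

def link_ok(interface: str, fibre: str, distance_m: float) -> bool:
    """Check that the chosen optic matches the fibre type and covers the distance."""
    medium, max_m = REACH[interface]
    return fibre == medium and distance_m <= max_m

print(link_ok("10GBASE-LR", "SMF", 8_000))   # True
print(link_ok("10GBASE-SR", "SMF", 200))     # False: SR needs multimode fibre
```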

Different form factors are interoperable as long as the 10 Gigabit Ethernet physical interface type is the same on both ends of the fibre link. For example, a 10GBASE-SR XFP on one end can be linked with a 10GBASE-SR SFP+ on the other, but a 10GBASE-SR SFP+ cannot connect to a 10GBASE-LRM SFP+ at the far end of the link.

Copper Cabling Solutions

As copper cabling standards mature, copper solutions for 10GbE are becoming common. Copper cabling is suitable for short-distance connections. There are three copper cabling solutions for 10 Gigabit Ethernet: 10GBASE-CX4, SFP+ DAC (direct attach cable) and 10GBASE-T.

10GBASE-CX4 was the first 10 Gigabit Ethernet copper standard. It is economical and offers very low latency, but its form factor is too large for high-density port counts in aggregation switches.

10G SFP+ DAC is a newer copper solution for 10 Gigabit Ethernet. It has become the main choice for servers and storage devices within a rack because of its low latency, small connector and reasonable cost, making it the best option for short 10 Gigabit Ethernet connections.

10GBASE-T runs 10G Ethernet over Cat6a and Cat7 cabling at distances up to 100 m, but the standard is not yet widely adopted, since it still needs technology improvements to reduce its cost, power consumption and latency.

For Top of Rack Applications

A top-of-rack (ToR) switch is a low-port-count switch that sits at the very top or in the middle of a 19’’ telco rack in the data centre. It provides a simple, low-cost way to add capacity to a network, connecting several servers and other network components such as storage within a single rack.

A ToR switch uses SFP+ ports to provide 10G networking in an efficient 1U form factor, and DAC makes rack cabling and termination easier. Each server and network storage device can connect directly to the ToR switch, eliminating the need for intermediate patch panels. DAC is flexible for vertical cable management within the rack, and the only cabling leaving the rack—the ToR switch uplink to the aggregation layer—makes moving racks simpler.

The following figure shows a 10 Gigabit Ethernet ToR switching solution for servers and network storage. Because the servers are virtualized, the active-active server team can be distributed across two stacked switches, giving the servers physical redundancy while they remain connected to the same logical switch. Failover protection is also provided if one physical link goes down.

Figure: 10 Gigabit Ethernet ToR switching solution for servers and network storage

Conclusion

A 10 Gigabit Ethernet network is not the fastest available, but it is more than enough for common everyday use, so it is worth reviewing the points above before you begin your deployment. FS.COM provides both fibre and copper cabling solutions for 10G networks; for more details, please visit www.fs.com.