
100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

NIC, short for network interface card and also known as network interface controller, network adapter, or LAN adapter, allows a networking device to communicate with other networking devices; without one, networking is hardly possible. NICs come in different types and speeds, from wired and wireless cards to rates from 10G to 100G. Among them, the 100G NIC, a relatively recent product, has yet to take a large market share. This post describes the 100G NIC and the trends shaping the NIC market.

What Is 100G NIC?

A NIC is installed in a computer and used to communicate over a network with another computer, server, or other network device. It comes in many forms, but there are two main types: wired NICs and wireless NICs. Wireless NICs use wireless technologies to access the network, while wired NICs connect via a DAC cable or a transceiver with fiber patch cable. The most popular wired LAN technology is Ethernet. By application, NICs can be divided into computer NICs and server NICs. A client computer usually needs only one NIC, but for servers it often makes sense to use more than one to handle heavier network traffic. Generally a NIC has a single network interface, though some server NICs build two or more interfaces into a single card.


Figure 1: FS 100G NIC

As data centers expand from 10G to 100G, the 25G server NIC has gained a firm foothold in the NIC market. Meanwhile, growing bandwidth demand is driving data centers toward 200G/400G, and 100G transceivers have become widespread, paving the way for 100G servers.

How to Select 100G NIC?

How do you choose the best 100G NIC among all the vendors? If you are stuck on this question, the following sections list the main considerations.

Connector

Connector types such as RJ45, LC, FC, and SC are commonly used on NICs, so check which connector type a NIC supports. Many networks today use only RJ45, which makes choosing the right connector type easier than it used to be. Even so, some networks may use a different interface, such as coax. Check that the card you plan to buy supports your connection type before purchasing.

Bus Type

PCI is a hardware bus used for adding internal components to a computer. Three main PCI bus types are used by servers and workstations today: PCI, PCI-X, and PCI-E. PCI is the oldest; it has a fixed width of 32 bits and can handle only 5 devices at a time. PCI-X is an upgraded version that provides more bandwidth, but with the emergence of PCI-E, PCI-X cards are gradually being replaced. PCI-E is a serial connection, so devices no longer share bandwidth as they do on a conventional bus. PCI-E cards also come in different physical sizes: x16, x8, x4, and x1. Before purchasing a 100G NIC, make sure which PCI version and slot width are compatible with your current equipment and network environment.
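To see why slot width matters for a 100G NIC, here is a rough bandwidth sketch. The per-lane transfer rates and line encodings are published PCI-SIG figures; the function and table names are illustrative.

```python
# Effective one-direction PCIe bandwidth by generation and lane count.
PCIE_GEN = {
    # gen: (transfer rate in GT/s, line-coding efficiency)
    1: (2.5, 8 / 10),     # 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),  # 128b/130b encoding
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def pcie_throughput_gbps(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in Gb/s (one direction)."""
    rate, eff = PCIE_GEN[gen]
    return rate * eff * lanes

# A 100G NIC needs more than 100 Gb/s of slot bandwidth:
for gen, lanes in [(3, 8), (3, 16), (4, 8)]:
    bw = pcie_throughput_gbps(gen, lanes)
    print(f"PCIe Gen{gen} x{lanes}: {bw:.1f} Gb/s -> {'OK' if bw > 100 else 'too slow'}")
```

The loop shows why 100G NICs typically require a Gen3 x16 or Gen4 x8 slot: a Gen3 x8 slot tops out near 63 Gb/s.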

Hot Swapping

There are some NICs that can be installed and removed without shutting down the system, which helps minimize downtime by allowing faulty devices to be replaced immediately. While you are choosing your 100G NIC, be sure to check if it supports hot swapping.

Trends in NIC

NICs were commonly used in desktop computers in the 1990s and early 2000s, and today they are widely used in servers and workstations in many types and at many rates. With the spread of wireless networking and WiFi, wireless NICs have grown in popularity, yet wired cards remain popular for relatively immobile network devices owing to their reliable connections. NICs have kept being upgraded over the years. As data centers expand at an unprecedented pace and drive the need for higher bandwidth between servers and switches, networking is moving from 10G to 25G and even 100G. Companies like Intel and Mellanox have launched 100G NICs in succession.

During the upgrade from 10G to 100G in data centers, 25G server connectivity became popular because 100G uplinks can be realized with four lanes of 25G, and the 25G NIC is still the mainstream. However, given that overall data center bandwidth grows quickly and hardware upgrade cycles occur every two years, Ethernet speeds can rise faster than we expect. The 400G data center is on the horizon, and there is a good chance that the 100G NIC will play an integral role in next-generation 400G networking.

Meanwhile, the need for 100G NICs will drive demand for other network devices as well. For instance, the 100G transceiver, the device between the NIC and the network, is bound to spread. 100G transceivers are now offered by many brands in different types, such as CXP, CFP, and QSFP28. FS supplies a full series of compatible 100G QSFP28 and CFP transceivers that can be matched with major brands of 100G Ethernet NICs, such as Mellanox and Intel.

Conclusion

Nowadays, with the rise of the next-generation cellular technology, 5G, higher bandwidth is needed for data flow, which paves the way for the 100G NIC. Accordingly, 100G transceivers and 400G network switches will be in great demand. We believe the 5G era will see the popularization of the 100G NIC and usher in a new level of network performance.

Article Source: 100G NIC: An Irresistible Trend in Next-Generation 400G Data Center

Related Articles:

400G QSFP Transceiver Types and Fiber Connections

How Many 400G Transceiver Types Are in the Market?

400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications


400G ZR and 400G ZR+ coherent pluggable optics have become new solutions for high-density networks with data rates from 100G to 400G, featuring low power and a small footprint. Let’s see how the latest generation of 400G ZR and 400G ZR+ optics extends economic benefits to network operators, maximizes fiber utilization, and reduces the cost of data transport.

400G ZR & ZR+: Definitions

What Is 400G ZR?

400G ZR coherent optical modules are compliant with the OIF-400ZR standard, ensuring industry-wide interoperability. They provide 400Gbps of optical bandwidth over a single optical wavelength using DWDM (dense wavelength division multiplexing) and higher-order modulation such as 16 QAM. Implemented predominantly in the QSFP-DD form factor, 400G ZR serves the specific requirement of massively parallel 400GbE data center interconnect at distances of 80-120km. To learn more about 400G transceivers, see How Many 400G Transceiver Types Are in the Market?
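As a rough sanity check of how 16 QAM fits 400G onto one wavelength, the sketch below estimates the required symbol rate; the ~15% FEC/framing overhead is an assumed round figure, not the exact OIF-400ZR value.

```python
# Back-of-the-envelope symbol-rate estimate for a 400G ZR wavelength.
PAYLOAD_GBPS = 400       # 400GbE client signal
BITS_PER_SYMBOL = 8      # dual-polarization 16QAM: 4 bits/symbol x 2 polarizations
FEC_OVERHEAD = 1.15      # assumed ~15% coding/framing overhead (approximate)

line_rate = PAYLOAD_GBPS * FEC_OVERHEAD   # Gb/s actually on the wire
baud = line_rate / BITS_PER_SYMBOL        # Gbaud per wavelength
print(f"line rate ~ {line_rate:.0f} Gb/s, symbol rate ~ {baud:.1f} Gbaud")
```

The result lands near 60 Gbaud, which is why a single coherent wavelength can replace what would otherwise take many lower-rate DWDM channels.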

Overview of 400G ZR+

ZR+ covers a range of coherent pluggable solutions with line capacities up to 400Gbps and reaches well beyond 80km, supporting various application requirements. The specific operational and performance requirements of each application will determine which types of 400G ZR+ coherent plugs are used in networks. Some applications will take advantage of interoperable, multi-vendor ecosystems defined by standards-body or MSA specifications, while others will rely on the maximum performance achievable within the constraints of a pluggable module package. Four categories of 400G ZR+ applications are explained in the following part.

400G ZR & ZR+: Applications

400G ZR – Application Scenario

The arrival of 400G ZR modules has ushered in a new era of DWDM technology marked by open, standards-based, pluggable DWDM optics, enabling true IP-over-DWDM. 400G ZR is often applied to point-to-point DCI (up to 80km), making the task of interconnecting data centers as simple as connecting switches inside a data center (as shown below).

Figure 1: 400G ZR Applied in Single-span DCI

Four Primary Deployment Applications for 400G ZR+

Extended-reach P2P Packet

One definition of ZR+ is a straightforward extension of 400G ZR transcoded mappings of Ethernet with a higher performance FEC to support longer reaches. In this case, 400G ZR+ modules are narrowly defined as supporting a single-carrier 400Gbps optical line rate and transporting 400GbE, 2x 200GbE or 4x 100GbE client signals for point-to-point reaches (up to around 500km). This solution is specifically dedicated to packet transport applications and destined for router platforms.

Multi-span Metro OTN

Another definition of ZR+ is the inclusion of support for OTN, such as client mapping and multiplexing into FlexO interfaces. This coherent pluggable solution is intended to support the additional requirements of OTN networks, carry both Ethernet and OTN clients, and address transport in multi-span ROADM networks. This category of 400G ZR+ is required where demarcation is important to operators, and is destined primarily for multi-span metro ROADM networks.

Figure 2: 400G ZR+ Applied in Multi-span Metro OTN

Multi-span Metro Packet

The third definition of ZR+ is support for extended-reach Ethernet or packet transcoded solutions further optimized for critical performance characteristics such as latency. This 400G ZR+ coherent pluggable, with high-performance FEC and sophisticated coding algorithms, supports the longest reaches, over 1000km, in multi-span metro packet transport.

Figure 3: 400G ZR+ Applied in Multi-span Metro Packet

Multi-span Metro Regional OTN

The fourth definition of ZR+ supports both Ethernet and OTN clients. This coherent pluggable also leverages high performance FEC and PCS, along with tunable optical filters and amplifiers for maximum reach. It supports a rich feature set of OTN network functions for deployment over both fixed and flex-grid line systems. This category of 400G ZR+ provides solutions with higher performance to address a much wider range of metro/regional packet networking requirements.

400G ZR & ZR+: What Makes Them Suitable for Longer-reach Transmission in Data Center?

Coherent Technology Adopted by 400G ZR & ZR+

Coherent technology uses three degrees of freedom (the amplitude, phase, and polarization of light) to pack more data onto the transmitted wave. In this way, coherent optics can transport more data over a single fiber for greater distances using higher-order modulation techniques, which results in better spectral efficiency. 400G ZR and ZR+ are a leap forward in the application of coherent technology. With higher-order modulation and DWDM unlocking high bandwidth, 400G ZR and ZR+ modules reduce cost and complexity for large-scale data center interconnects.

Importance of 400G ZR & ZR+

400G ZR and 400G ZR+ coherent pluggable optics take implementation to the next level, adding elements of high-performance solutions while pushing component design toward low power, pluggability, and modularity.

Conclusion

There are still many challenges in making 400G ZR and 400G ZR+ transceiver modules that fit into the small size and power budget of OSFP or QSFP-DD packages while also achieving interoperability and meeting cost and volume targets. Even so, with 400Gbps of optical bandwidth and low power consumption, 400G ZR & ZR+ may very well be the new generation of longer-reach optical communications.

Original Source: 400G ZR & ZR+ – New Generation of Solutions for Longer-reach Optical Communications

400G OSFP Transceiver Types Overview


OSFP stands for Octal Small Form-factor Pluggable; the module carries 8 electrical lanes running at 50Gb/s each, for a total bandwidth of 400Gb/s. This post introduces the 400G OSFP transceiver types, their fiber connections, and some Q&As about OSFP.

400G OSFP Transceiver Types

Listed below are the current main 400G OSFP transceiver types: OSFP SR8, OSFP DR4, OSFP DR4+, OSFP FR4, OSFP 2×FR4, and OSFP LR4, grouped by the two transmission media they support (multimode fiber and single-mode fiber).

Fiber Connections for 400G OSFP Transceivers

400G OSFP SR8

Figure 1: OSFP SR8 to OSFP SR8
  • 400G OSFP SR8 to 2× 200G SR4 over MTP-16 to 2× MPO-8 breakout cable.
Figure 2: OSFP SR8 to 2× 200G SR4
  • 400G OSFP SR8 to 8× 50G SFP via MTP-16 to 8× LC duplex breakout cable, reaching up to 100m.
Figure 3: OSFP SR8 to 8× 50G SFP

400G OSFP DR4

  • 400G OSFP DR4 to 400G OSFP DR4 over an MTP-12/MPO-12 cable.
  • 400G OSFP DR4 to 4× 100G DR over MTP-12/MPO-12 to 4× LC duplex breakout cable.
Figure 4: OSFP DR4 to 4× 100G DR

400G OSFP XDR4/DR4+

  • 400G OSFP DR4+ to 400G OSFP DR4+ over an MTP-12/MPO-12 cable.
  • 400G OSFP DR4+ to 4× 100G DR over MTP-12/MPO-12 to 4× LC duplex breakout cable.
Figure 5: OSFP DR4+ to 4× 100G DR

400G OSFP FR4

400G OSFP FR4 to 400G OSFP FR4 over duplex LC cable.

Figure 6: OSFP FR4 to OSFP FR4

400G OSFP 2×FR4

OSFP 2×FR4 can break out to 2× 200G and interoperate with 2× 200G-FR4 QSFP transceivers via a 2× CS to 2× LC duplex cable.

400G OSFP Transceivers: Q&A

Q: What do “SR8”, “DR4”, “XDR4”, “FR4”, and “LR4” mean?

A: “SR” refers to short range, and the “8” means there are 8 optical channels. “DR” refers to 500m reach over single-mode fiber, and the “4” means there are 4 optical channels. “XDR4” is short for “eXtended reach DR4”. “FR” refers to 2km reach and “LR” to 10km reach over single-mode fiber.
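A quick way to keep these suffixes straight is a small lookup table, sketched below. The reach values follow the naming conventions discussed above (with FR = 2km per the usual IEEE convention); the function and dictionary names are illustrative.

```python
# Toy decoder for transceiver reach/lane names like "SR8" or "DR4".
import re

REACH = {
    "SR": "short range (multimode)",
    "DR": "500m (single-mode)",
    "XDR": "extended-reach DR (single-mode)",
    "FR": "2km (single-mode)",
    "LR": "10km (single-mode)",
}

def decode(name: str):
    """Split e.g. 'DR4' into (reach description, optical channel count)."""
    m = re.fullmatch(r"(X?[A-Z]{2})(\d+)", name)
    if not m:
        raise ValueError(f"unrecognized module name: {name}")
    prefix, lanes = m.groups()
    return REACH[prefix], int(lanes)

print(decode("SR8"))   # short range, 8 optical channels
print(decode("DR4"))   # 500m single-mode, 4 optical channels
```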

Q: Can I plug an OSFP transceiver module into a QSFP-DD port?

A: No. QSFP-DD and OSFP are totally different form factors. For more information about QSFP-DD transceivers, you can refer to 400G QSFP-DD Transceiver Types Overview. You can use only one kind of form factor in the corresponding system. E.g., if you have an OSFP system, OSFP transceivers and cables must be used.

Q: Can I plug a 100G QSFP28 module into an OSFP port?

A: Yes. A QSFP28 module can be inserted into an OSFP port, but only with an adapter. When using a QSFP28 module in an OSFP port, the OSFP port must be configured for a data rate of 100G instead of 400G.

Q: What other breakout options are possible apart from using OSFP modules mentioned above?

A: OSFP 400G DACs & AOCs can also be used for 400G breakout connections. See 400G Direct Attach Cables (DAC & AOC) Overview for more information about 400G DACs & AOCs.

Original Source: 400G OSFP Transceiver Types Overview

Data Center Containment: Types, Benefits & Challenges

Over the past decade, data center containment has been adopted by a large share of data centers. It can greatly improve the predictability and efficiency of traditional data center cooling systems. This article elaborates on what data center containment is, its common types, and their benefits and challenges.

What Is Data Center Containment?

Data center containment is the separation of cold supply air from the hot exhaust air from IT equipment so as to reduce operating cost, optimize power usage effectiveness, and increase cooling capacity. Containment systems enable uniform and stable supply air temperature to the intake of IT equipment and a warmer, drier return air to cooling infrastructure.

Types of Data Center Containment

There are two main types of data center containment: hot aisle containment and cold aisle containment.

Hot aisle containment encloses warm exhaust air from IT equipment in data center racks and returns it back to cooling infrastructure. The air from the enclosed hot aisle is returned to cooling equipment via a ceiling plenum or duct work, and then the conditioned air enters the data center via raised floor, computer room air conditioning (CRAC) units, or duct work.

Hot aisle containment

Cold aisle containment encloses cold aisles where cold supply air is delivered to cool IT equipment. So the rest of the data center becomes a hot-air return plenum where the temperature can be high. Physical barriers such as solid metal panels, plastic curtains, or glass are used to allow for proper airflow through cold aisles.

Cold aisle containment

Hot Aisle vs. Cold Aisle

There are mixed views on whether it’s better to contain the hot aisle or the cold aisle. Both containment strategies have their own benefits as well as challenges.

Hot aisle containment benefits

  • The open areas of the data center stay cool, so visitors to the room will not get the impression that the IT equipment is insufficiently cooled. It also allows some low-density areas to be left uncontained if desired.
  • It is generally considered more effective: any leakage from raised-floor openings in the larger part of the room goes into the cold space.
  • With hot aisle containment, low-density network racks and stand-alone equipment such as storage cabinets can sit outside the containment system without overheating, because they remain in the cooler open areas of the data center.
  • Hot aisle containment typically adjoins the ceiling where fire suppression is installed. In a well-designed space, it will not affect normal operation of a standard grid fire suppression system.

Hot aisle containment challenges

  • It is generally more expensive. A contained path is needed for air to flow from the hot aisle all the way to the cooling units; often a drop ceiling is used as the return air plenum.
  • High temperatures in the hot aisle can be undesirable for data center technicians. When they need to access IT equipment and infrastructure, a contained hot aisle can be a very uncomfortable place to work, though this can be mitigated with temporary local cooling.

Cold aisle containment benefits

  • It is easy to implement without additional architecture, such as a drop ceiling or air plenum, to contain and return exhaust air.
  • Cold aisle containment is less expensive to install, as it only requires doors at the ends of the aisles and baffles or a roof over the aisle.
  • Cold aisle containment is typically easier to retrofit into an existing data center. This is particularly true for data centers with overhead obstructions such as existing duct work, lighting, power, and network distribution.

Cold aisle containment challenges

  • With a cold aisle system, the rest of the data center becomes hot, resulting in high return air temperatures. It may also create operational issues if any non-contained equipment, such as low-density storage, is installed in the general data center space.
  • Conditioned air that leaks from openings under equipment such as PDUs and from raised-floor tiles tends to enter air paths that return to the cooling units, which reduces the efficiency of the system.
  • In many cases, cold aisles have intermediate ceilings over the aisle. This may affect the overall fire protection and lighting design, especially when added to an existing data center.

How to Choose the Best Containment Option?

Every data center is unique. To find the most suitable option, you have to take into account a number of aspects. The first thing is to evaluate your site and calculate the Cooling Capacity Factor (CCF) of the computer room. Then observe the unique layout and architecture of each computer room to discover conditions that make hot aisle or cold aisle containment preferable. With adequate information and careful consideration, you will be able to choose the best containment option for your data center.
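The CCF mentioned above can be computed with a simple ratio. One commonly cited formulation (from Upsite Technologies, which introduced the metric) divides the total rated capacity of the running cooling units by 110% of the IT critical load; the figures below are invented for illustration.

```python
# Cooling Capacity Factor (CCF) sketch. The 1.1 factor is an allowance
# for ancillary room loads on top of the IT critical load.
def cooling_capacity_factor(running_cooling_kw: float, it_load_kw: float) -> float:
    return running_cooling_kw / (it_load_kw * 1.1)

# Hypothetical room: 800 kW of running rated cooling, 400 kW IT load.
ccf = cooling_capacity_factor(running_cooling_kw=800, it_load_kw=400)
print(f"CCF = {ccf:.2f}")  # well above 1.0: cooling is heavily overprovisioned
```

A CCF far above 1.0 suggests stranded cooling capacity that containment can help reclaim, while a value near or below 1.0 indicates little headroom.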

Article Source: Data Center Containment: Types, Benefits & Challenges

Related Articles:

What Is a Containerized Data Center: Pros and Cons

The Most Common Data Center Design Missteps

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

The COVID-19 pandemic caused several companies to shut down, reducing production and altering supply chains. In the tech world, where silicon microchips are the heart of everything electronic, raw material shortages became a barrier to new product creation and development.

During lockdown periods, many workers were required to stay home, which meant chip manufacturing stalled for several months. By the time lockdowns were lifted and the world embraced the new normal, the rising demand for consumer and business electronics was enough to ripple up the supply chain.

Below, we’ve discussed the challenges associated with the current chip shortage, what to expect moving forward, and the possible interventions necessary to overcome the supply chain constraints.

Challenges Caused by the Current Chip Shortage

As technology and rapid innovation sweep across industries, semiconductor chips have become an essential part of manufacturing, from devices like switches, wireless routers, computers, and automobiles to basic home appliances.


To understand and quantify the impact this chip shortage has caused spanning the industry, we’ll need to look at some of the most affected sectors. Here’s a quick breakdown of how things have unfolded over the last eighteen months.

Automobile Industry

Automakers in North America and Europe slowed or stopped production due to a lack of computer chips. Major automakers like Tesla, Ford, BMW, and General Motors have all been affected. The major implication is that the global automobile industry will manufacture 4 million fewer cars by the end of 2021 than earlier planned, forfeiting an average of $110 billion in revenue.

Consumer Electronics

Consumer electronics such as desktop PCs and smartphones rose in demand throughout the pandemic, thanks to the shift to virtual learning among students and the rise of remote working. At the start of the pandemic, several automakers slashed their vehicle production forecasts and abandoned open semiconductor chip orders. And while the consumer electronics industry stepped in and scooped up most of those microchips, supply couldn’t catch up with demand.

Data Centers

Most chip fabrication companies like Samsung Foundries, Global Foundries, and TSMC prioritized high-margin orders from PC and data center customers during the pandemic. And while this has given data centers a competitive edge, it isn’t to say that data centers haven’t been affected by the global chip shortage.


Some of the components data centers have struggled to source include those needed to put together their data center switching systems. These include BMC chips, capacitors, resistors, circuit boards, etc. Another challenge is the extended lead times due to wafer and substrate shortages, as well as reduced assembly capacity.

LED Lighting

LED backlights, common in most display screens, are powered by hard-to-find semiconductor chips. Gadgets with LED lighting features are now highly priced due to the shortage of raw materials and increased market demand. This is expected to continue into the beginning of 2022.

Renewable Energy- Solar and Turbines

Renewable energy systems, particularly solar and wind turbines, rely on semiconductors and sensors to operate. The global supply chain constraints have hurt the industry, and even energy solutions manufacturers like Enphase Energy have felt the squeeze.

Semiconductor Trends: What to Expect Moving Forward

In response to the global chip shortage, several component manufacturers have ramped up production to help mitigate the shortages. However, top electronics and semiconductor manufacturers say the crunch will only worsen before it gets better. Most of these industry leaders speculate that the semiconductor shortage could persist into 2023.

Based on the ongoing disruption and supply chain volatility, various analysts in a recent CNBC article and Bloomberg interview echoed their views, and many are convinced that the coming year will be challenging. Here are some of the key takeaways:

  • Pat Gelsinger, CEO of Intel Corp., noted in April 2021 that the chip shortage would take a couple of years to recover from.
  • A DigiTimes report found that lead times for Intel and AMD server ICs for data centers have extended to 45-66 weeks.
  • The world’s third-largest EMS and OEM provider, Flex Ltd., expects the global semiconductor shortage to persist into 2023.
  • In May 2021, Global Foundries, the fourth-largest contract semiconductor manufacturer, signed a $1.6 billion, 3-year silicon supply deal with AMD, and in late June it launched its new $4 billion, 300mm-wafer facility in Singapore. Yet the company says the added capacity will not increase component output until 2023 at the earliest.
  • TSMC, one of the leading pure-play foundries, says it won’t meaningfully increase component output until 2023. However, it is optimistic that it can ramp up fabrication of automotive micro-controllers by 60% by the end of 2021.

From the industry insights above, it’s evident that despite the many efforts that major players put into resolving the global chip shortage, the bottlenecks will probably persist throughout 2022.

Additionally, some industry observers believe that the move by big tech companies such as Amazon, Microsoft, and Google to design their own chips for cloud and data center business could worsen the chip shortage crisis and other problems facing the semiconductor industry.

In a recent article, the authors hint that the entry of Microsoft, Amazon, and Google into the chip design market will be a turning point for the industry. These tech giants have the resources to design superior, cost-effective chips of their own, resources that most chip designers like Intel have only in limited measure.

As these tech giants become independent, each will look to create component stockpiles to endure long waits and meet production demands between inventory refreshes, further worsening the existing chip shortage.

Possible Solutions

To stay ahead of the game, major industry players such as chip designers and manufacturers and the many affected industries have taken several steps to mitigate the impacts of the chip shortage.

For many chip makers, expanding their production capacity has been an obvious response. Other suppliers in certain regions decided to stockpile and limit exports to better respond to market volatility and political pressures.

Similarly, improving the yields or increasing the number of chips manufactured from a silicon wafer is an area that many manufacturers have invested in to boost chip supply by some given margin.
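To see why yield improvements translate directly into supply, here is a rough sketch using two standard approximations: a dies-per-wafer estimate and a Poisson defect-yield model. The die size and defect density below are invented for illustration.

```python
# Rough dies-per-wafer estimate plus a simple Poisson defect-yield model.
import math

def dies_per_wafer(wafer_d_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: usable wafer area minus edge loss."""
    r = wafer_d_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies with zero killer defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

gross = dies_per_wafer(300, 100)            # 300mm wafer, 100 mm^2 die
good = gross * poisson_yield(100, 0.001)    # assumed 0.1 defects per cm^2
print(f"{gross} gross dies, ~{good:.0f} good dies per wafer")
```

Under these assumptions, shaving the defect density even slightly adds tens of good dies per wafer, which at fab volumes is a meaningful boost to supply without any new capacity.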


Here are the other possible solutions that companies have had to adopt:

  • Embracing flexibility to accommodate older chip technologies that may not be “state of the art” but are still better than nothing.
  • Leveraging software solutions such as smart compression and compilation to build efficient AI models that help unlock hardware capabilities.

Conclusion

The latest global chip shortage has caused severe shocks in the semiconductor supply chain, affecting industries from automobiles and consumer electronics to data centers, LED lighting, and renewables.

Industry thought leaders believe the shortages will persist into 2023 despite the current build-up of mitigation measures. And while full recovery will not be witnessed any time soon, some chip makers are optimistic that they can ramp up fabrication enough to meet demand from their automotive customers.

That said, staying ahead of the game is an all-time struggle considering this is an issue affecting every industry player, regardless of size or market position. Expanding production capacity, accommodating older chip technologies, and leveraging software solutions to unlock hardware capabilities are some of the promising solutions.

Added

This article is being updated continuously. If you want to share any comments on FS switches, or if you are inclined to test and review our switches, please email us via media@fs.com or inform us on social media platforms. We cannot wait to hear more about your ideas on FS switches.

Article Source: The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Related Articles:

Impact of Chip Shortage on Datacenter Industry

Infographic – What Is a Data Center?

The Most Common Data Center Design Missteps

Introduction

Data center design aims to provide IT equipment with a high-quality, standard, safe, and reliable operating environment that fully meets the environmental requirements for stable, reliable operation of IT devices and prolongs the service life of computer systems. Design is the most important part of data center construction, bearing directly on the success or failure of long-term planning, so it should be professional, advanced, integral, flexible, safe, reliable, and practical.

9 Missteps in Data Center Design

Data center design is one of the effective solutions to overcrowded or outdated data centers, while inappropriate design creates obstacles for growing enterprises. Poor planning can waste valuable funds and create more issues, increasing operating expenses. Here are 9 mistakes to be aware of when designing a data center.

Miscalculation of Total Cost

Data center operating expense is made up of two key components: maintenance costs and operating costs. Maintenance costs are those associated with maintaining all critical facility support infrastructure, such as OEM equipment maintenance contracts and data center cleaning fees. Operating costs are those associated with day-to-day operations and field personnel, such as the creation of site-specific operational documentation, capacity management, and QA/QC policies and procedures. If you plan to build or expand a business-critical data center, the best approach is to focus on three basic parameters: capital expenditure, operating and maintenance expenses, and energy costs. Take any component out of the equation, and the model may no longer properly align with an organization’s risk profile and business spending profile.
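As a sketch of how the three parameters named above combine, here is a minimal annual-cost model. All input values are invented for illustration; the only fixed constant is the 8,760-hour year.

```python
# Minimal annual total-cost model: amortized capex + O&M + energy.
def annual_cost(capex: float, years: int, om_per_year: float,
                it_load_kw: float, pue: float, rate_per_kwh: float) -> float:
    energy = it_load_kw * pue * 8760 * rate_per_kwh  # 8760 hours per year
    return capex / years + om_per_year + energy

# Hypothetical 1 MW facility, 10-year amortization, PUE of 1.5:
cost = annual_cost(capex=10_000_000, years=10, om_per_year=500_000,
                   it_load_kw=1000, pue=1.5, rate_per_kwh=0.10)
print(f"annual cost ~ ${cost:,.0f}")
```

Even in this toy model, energy is the largest line item, which is why a design decision like the PUE target feeds straight into the miscalculations described above when it is left out.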

Unspecified Planning and Infrastructure Assessment

Infrastructure assessment and clear planning are essential processes for data center construction. For example, every construction project needs a chain of command that clearly defines areas of responsibility, including who is responsible for which aspects of data center design. Those involved need to evaluate the potential applications of the data center infrastructure and what types of connectivity they require. In general, planning involves a rack-by-rack blueprint, including network connectivity and mobile devices, power requirements, system topology, cooling facilities, virtual local and on-premises networks, third-party applications, and operational systems. Given the importance of data center design, you should have a thorough understanding of the intended functionality before construction begins; otherwise, you’ll fall short and spend more money on maintenance.


Inappropriate Design Criteria

Two missteps can send enterprises into an overspending death spiral. First, everyone has different design ideas, but not everyone is right. Second, the actual business may be mismatched with the desired vision and not support the chosen kilowatts per square foot or per rack. Overplanning in design wastes capital, and higher-tier facilities also bring higher operational and energy costs. A data center designer establishes the proper design criteria and performance characteristics and then builds capital and operating expenditures around them.

Unsuitable Data Center Site

Enterprises often need to find the right building location when designing a data center, and missing site-critical information leads to problems. Large users know the data center market well and have concerns about power availability and cost, fiber availability, and force majeure factors. Baseline users often have building shells in their core business areas that determine whether they need to build new or refurbish. Hence, premature site selection or an unreasonable geographic location will fail to meet the design requirements.

Pre-design Space Planning

It is also very important to plan the space capacity inside the data center. The ratio of raised-floor space to support space can be as high as 1:1, and the mechanical and electrical equipment needs enough room to be accommodated. In addition, office areas and IT equipment storage areas also need to be considered. Therefore, it is critical to estimate and plan space capacity during data center design. Estimation errors can make the design unsuitable for the site, forcing the project to pause for re-evaluation and possibly requiring components to be repurchased.
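To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The function name and all the figures are hypothetical, invented for illustration; it simply shows how a 1:1 raised-floor-to-support ratio, plus office and storage allowances, translates into total site area.

```python
def estimate_site_area(white_space_m2: float,
                       support_ratio: float = 1.0,
                       office_m2: float = 0.0,
                       storage_m2: float = 0.0) -> float:
    """Rough total-area estimate for early-stage space planning.

    support_ratio is the mechanical/electrical support space required
    per unit of raised-floor white space (as high as 1:1 per the text).
    """
    support_m2 = white_space_m2 * support_ratio
    return white_space_m2 + support_m2 + office_m2 + storage_m2

# Hypothetical site: 1,000 m2 of white space at a 1:1 support ratio,
# plus 150 m2 of offices and 100 m2 of storage.
print(estimate_site_area(1000, support_ratio=1.0, office_m2=150, storage_m2=100))
# -> 2250.0
```

Note how the support ratio alone doubles the footprint: underestimating it at the planning stage is exactly the kind of error that forces a mid-project re-evaluation.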

Mismatched Business Goals

Enterprises need to clearly understand their business goals when commissioning a data center so that the design can fulfill them. Beyond the core business goals, other factors should be considered, such as which specific applications the data center supports, additional computing power, and later business expansion. Additionally, enterprises need to communicate these goals to data center architects, engineers, and builders to ensure that the overall design meets business needs.

Design Limitations

The importance of modular design is well-publicized in the data center industry. Although the modular approach means adding infrastructure incrementally as needed to preserve capital, it doesn’t guarantee complete success. A modular, flexible design is the key to long-term stable operation and keeps the facility aligned with your data center plans. On the power system, make sure UPS (Uninterruptible Power Supply) capacity can be added to existing modules without system disruption. Input and output distribution system design shouldn’t be overlooked either; it allows the data center to adapt to future changes in the underlying construction standards.

Improper Data Center Power Equipment

To design a data center that maximizes equipment uptime and reduces power consumption, you must choose the right power equipment based on the projected capacity. Typically, designers project triple the actual server usage to ensure adequate power, which is wasteful. Long-term power consumption trends are what you need to consider. Install automatic-start generators and backup power sources, and choose equipment that can supply enough power to support the data center without waste.

Over-complicated Design

In many cases, redundancy targets introduce complexity, and layering multiple approaches onto a modular system can quickly make things complicated. An over-complicated data center design means more equipment and components, and every component is a potential point of failure, which can cause problems such as:

  • Human error. Errors in operational data and statistics leave the system vulnerable and increase operational risk.
  • Higher cost. Beyond the extra equipment and components, repairing failed components also incurs more charges.
  • Poor maintainability. If maintainability wasn’t considered in the design, normal system operation, and even personnel safety, can be affected when the IT team needs to operate or service the equipment.

Conclusion

Avoid the missteps above to find design solutions for data center IT infrastructure and build a data center that suits your business. Data center design missteps affect enterprises in areas such as business expansion, infrastructure maintenance, and security. Hence, all infrastructure facilities and data center standards must be rigorously evaluated during design to ensure long-term stable operation within a reasonable budget.

Article Source: The Most Common Data Center Design Missteps

Related Articles:

How to Utilize Data Center Space More Effectively?

Data Center White Space and Gray Space

Impact of Chip Shortage on Datacenter Industry

As the global chip shortage drags on, many chip manufacturers have had to slow or even halt semiconductor production. Makers of all kinds of electronics, such as switches, PCs, and servers, are scrambling to get enough chips in the pipeline to match the surging demand for their products. Every manufacturer, supplier, and solution provider in the datacenter industry is feeling the impact of the ongoing chip scarcity, and relief is nowhere in sight yet.

What’s Happening?

Due to the rise of AI and cloud computing, datacenter chips have been a hot topic in recent times. Because networking switches and modern servers, indispensable equipment in datacenter applications, use more advanced components than an average consumer’s PC, data centers naturally get top priority from chip manufacturers and suppliers. However, with demand for data center machines far outstripping supply, chip shortages may remain pervasive over the next few years. Coupled with the economic uncertainties caused by the pandemic, this puts further stress on datacenter management.

According to a report from the Dell’Oro Group, robust datacenter switch sales over the past year could foretell a looming shortage. As the mismatch in supply and demand keeps growing, enterprises looking to buy datacenter switches face extended lead times and elevated costs over the course of the next year.

“So supply is decreasing and demand is increasing,” said Sameh Boujelbene, leader of the analyst firm’s campus and data-center research team. “There’s a belief that things will get worse in the second half of the year, but no consensus on when it’ll start getting better.”

Back in March, Broadcom said that more than 90% of its total chip output for 2021 had already been ordered by customers, who are pressuring it for chips to meet booming demand for servers used in cloud data centers and consumer electronics such as 5G phones.

“We intend to meet such demand, and in doing so, we will maintain our disciplined process of carefully reviewing our backlog, identifying real end-user demand, and delivering products accordingly,” CEO Hock Tan said on a conference call with investors and analysts.

Major Implications

Extended Lead Times

Arista Networks, one of the largest data center networking switch vendors and a supplier of switches to cloud providers, predicts that switch-silicon lead times will extend to as long as 52 weeks.

“The supply chain has never been so constrained in Arista history,” the company’s CEO, Jayshree Ullal, said on an earnings call. “To put this in perspective, we now have to plan for many components with 52-week lead time. COVID has resulted in substrate and wafer shortages and reduced assembly capacity. Our contract manufacturers have experienced significant volatility due to country specific COVID orders. Naturally, we’re working more closely with our strategic suppliers to improve planning and delivery.”

Hock Tan, CEO of Broadcom, also acknowledged on an earnings call that the company had “started extending lead times.” He said, “part of the problem was that customers were now ordering more chips and demanding them faster than usual, hoping to buffer against the supply chain issues.”

Elevated Cost

Vertiv, one of the biggest sellers of datacenter power and cooling equipment, mentioned it had to delay previously planned “footprint optimization programs” due to strained supply. The company’s CEO, Robert Johnson, said on an earnings call, “We have decided to delay some of those programs.”

Supply chain constraints combined with inflation would cause “some incremental unexpected costs over the short term,” he said, “To share the cost with our customers where possible may be part of the solution.”

“Prices are definitely going to be higher for a lot of devices that require a semiconductor,” says David Yoffie, a Harvard Business School professor who spent almost three decades serving on the board of Intel.

Conclusion

There is no telling how the situation will continue playing out and, most importantly, when supply and demand might get back to normal. Opinions vary on when the shortage will end: the CEO of chipmaker STMicro estimated that it will last until early 2023, while Intel CEO Patrick Gelsinger said it could last two more years.

As a high-tech network solutions and services provider, FS has been actively working with our customers to help them plan for, adapt to, and overcome the supply chain challenges, hoping that we can both ride out this chip shortage crisis. At least, we cannot lose hope, as advised by Bill Wyckoff, vice president at technology equipment provider SHI International, “This is not an ‘all is lost’ situation. There are ways and means to keep your equipment procurement and refresh plans on track if you work with the right partners.”

Article Source: Impact of Chip Shortage on Datacenter Industry

Related Articles:

The Chip Shortage: Current Challenges, Predictions, and Potential Solutions

Infographic – What Is a Data Center?

Infographic – What Is a Data Center?

The Internet is where we store and receive a huge amount of information. Where is all the information stored? The answer is data centers. At its simplest, a data center is a dedicated place that organizations use to house their critical applications and data. Here is a short look into the basics of data centers. You will get to know the data center layout, the data pathway, and common types of data centers.

what is a data center


Article Source: Infographic – What Is a Data Center?

Related Articles:

What Is a Data Center?

Infographic — Evolution of Data Centers

Why Data Center Location Matters?

When it comes to data center design, location is a crucial aspect that no business can overlook. Where your data center is located matters a lot more than you might realize. In this article, we will walk you through the importance of data center location and factors you should keep in mind when choosing one.

The Importance of Data Center Location

Though data centers can be located anywhere with power and connectivity, site selection has a great impact on everything from business uptime to cost control. Overall, a good location better secures your data center and extends its service life. Specifically, it means lower TCO, faster internet speeds, higher productivity, and so on. Here we will discuss two aspects that are major concerns for businesses.

Greater physical security

Data centers have extremely high security requirements; once problems occur, normal operation is affected. Security and reliability can of course be improved by various means, such as building redundant systems. However, sensible planning of a data center’s physical location can also effectively avoid harm caused by natural disasters such as earthquakes, floods, and fires. A data center located in a risk zone prone to natural disasters faces longer downtime and more potential damage to its infrastructure.

Higher speed and better performance

Where your data center is located also affects your website’s speed and business performance. When a user visits a page on your website, their computer has to communicate with servers in your data center to access the data they need, which is then transferred back to them. If your data center is far from the users who initiate requests, information has to travel longer distances. The process becomes lengthy, and users frustrated by slow speeds and latency will leave your site with no plans to come back. In a sense, a good location makes high speed and impressive business performance possible.
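As a rough illustration of why distance matters, the Python sketch below estimates the best-case round-trip propagation delay over optical fiber. The physical constants are standard figures; the distances are arbitrary examples, and real-world latency is always higher once routing detours, queuing, and server processing are added.

```python
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.468           # typical refractive index of silica fiber

def fiber_rtt_ms(distance_km: float) -> float:
    """Idealized round-trip propagation delay over a straight fiber run,
    ignoring routing detours, queuing, and processing delays."""
    v_fiber = C_VACUUM_KM_S / FIBER_INDEX    # ~204,000 km/s in fiber
    return 2 * distance_km / v_fiber * 1000  # seconds -> milliseconds

for km in (50, 500, 5000):
    print(f"{km:>5} km -> ~{fiber_rtt_ms(km):.1f} ms round trip")
```

Even under these ideal assumptions, a user 5,000 km from the data center pays roughly 50 ms of round-trip delay before any server processing happens, which is why serving users from a nearby site matters.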

Choosing a Data Center Location — Key Factors

Choosing where to locate your data center requires balancing many different priorities. Here are some major considerations to help you get started.

key factors of choosing a data center location

Business Needs

First and foremost, the decision has to be made based on your business needs and market demands. Where are your users? Is the market promising in the location you are considering? You should always build your data center as close as possible to users you serve. It can shorten the time for users to obtain files and data and make for happy customers. For smaller companies that only operate in a specific region or country, it’s best to choose a nearby data center location. For companies that have much more complicated businesses, they may want to consider more locations or resort to third-party providers for more informed decisions.

Natural Disasters

Damages and losses caused by natural disasters are not something any data center can afford. These include big weather and geographical events such as hurricanes, tornadoes, floods, lightning and thunder, volcanoes, earthquakes, tsunamis, blizzards, hail, fires, and landslides. If your data center is in a risk zone, it is almost a matter of time before it falls victim to one. Conversely, a good location less susceptible to various disasters means a higher possibility of less downtime and better operation.

It is also necessary to analyze the climatic conditions of a data center location in order to select the most suitable cooling measures, thus reducing the TCO of running a data center. At the same time, you might want to set up a disaster recovery site that is far enough from the main site, so that it is almost impossible for any natural disaster to affect them at the same time.

Power Supply

The nature of data centers and requirements for quality and capacity determine that the power supply in a data center must be sufficient and stable. As power is the biggest cost of operating a data center, it is very important to choose a place where electricity is relatively cheap.

The factors we need to consider include:

Availability — You have to know the local power supply situation. At the same time, you need to check whether there are multiple mature power grids in alternative locations.

Cost — As we’ve mentioned, power costs a lot, so it is necessary to compare power prices across candidate sites. That is to say, the amount of power should be sufficient and its cost should be low enough.

Alternative energy sources — You might also want to consider whether renewable energy sources such as solar and wind power are available in alternative locations, which will help enterprises build a greener corporate image.

It is necessary to understand the local power supply’s reliability, electricity prices, and relevant policies, as well as the projected trends of power supply and market demand over the next few years.
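To see how electricity price and facility efficiency interact, here is a minimal sketch of the standard annual-energy-cost estimate: IT load scaled by PUE (total facility power divided by IT power) over 8,760 hours a year. The loads, PUE values, and tariffs below are invented purely for illustration.

```python
def annual_power_cost(it_load_kw: float, pue: float, price_per_kwh: float) -> float:
    """Yearly electricity cost estimate: total facility power is the IT
    load multiplied by PUE, running for 8,760 hours a year."""
    return it_load_kw * pue * 8760 * price_per_kwh

# Two hypothetical sites with the same 500 kW IT load:
site_a = annual_power_cost(500, pue=1.6, price_per_kwh=0.11)
site_b = annual_power_cost(500, pue=1.3, price_per_kwh=0.08)
print(round(site_a), round(site_b), round(site_a - site_b))
# -> 770880 455520 315360
```

In this invented comparison, the site with cheaper power and a better PUE saves over 40% per year on electricity alone, which is why power price and climate (cooling efficiency) weigh so heavily in site selection.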

Other Factors

There are a number of additional factors to consider. These include local data protection laws, tax structures, land policy, availability of suitable networking solutions, local infrastructure, the accessibility of a skilled labor pool, and other aspects. All these things combined can have a great impact on the TCO of your data center and your business performance. This means you will have to do enough research before making an informed decision.

There is no one right answer for the best place to build a data center. A lot of factors come into play, and you may have to weigh different priorities. But one thing is for sure: A good data center location is crucial to data center success.

Article Source: Why Data Center Location Matters?

Related Articles:

Data Center White Space and Gray Space

Five Ways to Ensure Data Center Physical Security

5G and Multi-Access Edge Computing

Over the years, the Internet of Things and IoT devices have grown tremendously, effectively boosting productivity and accelerating network agility. This technology has also elevated the adoption of edge computing while ushering in a set of advanced edge devices. By adopting edge computing, computational needs are efficiently met since the computing resources are distributed along the communication path, i.e., via a decentralized computing infrastructure.

One of the benefits of edge computing is improved performance as analytics capabilities are brought closer to the machine. An edge data center also reduces operational costs, thanks to the reduced bandwidth requirement and low latency.

Below, we’ve explored more about 5G wireless systems and multi-access edge computing (MEC), an advanced form of edge computing, and how both extend cloud computing benefits to the edge and closer to the users. Keep reading to learn more.

What Is Multi-Access Edge Computing

Multi-access edge computing (MEC) is a relatively new technology that offers cloud computing capabilities at the network’s edge. It works by moving some computing capabilities out of the cloud and closer to the end devices, so data doesn’t travel as far, resulting in faster processing.

Broadly, there are two types of MEC: dedicated and distributed. Dedicated MEC is typically deployed at the customer’s site on a mobile private network and is designed for a single business. Distributed MEC, on the other hand, is deployed on a public network, either 4G or 5G, and connects shared assets and resources.

With both the dedicated and distributed MEC, applications run locally, and data is processed in real or near real-time. This helps avoid latency issues for faster response rates and decision-making. MEC technology has seen wider adoption in video analytics, augmented reality, location services, data caching, local content distribution, etc.

How MEC and 5G are Changing Different Industries

At the heart of multi-access edge computing are wireless and radio access network technologies that open up different networks to a wide range of innovative services. Today, 5G technology is the ultimate network that supports ultra-reliable low latency communication. It also provides an enhanced mobile broadband (eMBB) capability for use cases involving significant data rates such as virtual reality and augmented reality.

That said, 5G use cases can be categorized into three domains, massive IoT, mission-critical IoT, and enhanced mobile broadband. Each of the three categories requires different network features regarding security, mobility, bandwidth, policy control, latency, and reliability.

Why MEC Adoption Is on the Rise

5G MEC adoption is growing exponentially, and there are several reasons why this is the case. One reason is that this technology aligns with the distributed and scalable nature of the cloud, making it a key driver of technical transformation. Similarly, MEC technology is a critical business transformation change agent that offers the opportunity to improve service delivery and even support new market verticals.

The top use cases driving 5G MEC implementation include video content delivery, the emergence of smart cities, smart utilities (e.g., water and power grids), and connected cars. This also showcases the significant role MEC plays across different IoT domains. Here’s a quick overview of the primary use cases:

  • Autonomous vehicles – 5G MEC can help enhance operational functions such as continuous sensing and real-time traffic monitoring. This reduces latency issues and increases bandwidth.
  • Smart homes – MEC technology can process data locally, boosting privacy and security. It also reduces communication latency and allows for fast mobility and relocation.
  • AR/VR – Moving computational capabilities and processing to the edge amplifies the immersive experience for users and extends the battery life of AR/VR devices.
  • Smart energy – MEC resolves traffic congestion issues and delays due to huge data generation and intermittent connectivity. It also reduces cyber-attacks by enforcing security mechanisms closer to the edge.
MEC Adoption

Getting Started With 5G MEC

One of the key benefits of adopting 5G MEC technology is openness, particularly API openness and the option to integrate third-party apps. Standards compliance and application agility are the other value propositions of multi-access edge computing. Therefore, enterprises looking to benefit from a flexible and open cloud should base their integration on the key competencies they want to achieve.

One of the challenges common during the integration process is hardware platforms’ limitations, as far as scale and openness are concerned. Similarly, deploying 5G MEC technology is costly, especially for small-scale businesses with limited financial backing. Other implementation issues include ecosystem and standards immaturity, software limitations, culture, and technical skillset challenges.

To successfully deploy multi-access edge computing, you need an effective, tried-and-tested 5G MEC implementation strategy. You should also consider partnering with an expert IT or edge computing company for professional guidance.

5G MEC Technology: Key Takeaways

Edge-driven transformation is a game-changer in the modern business world, and 5G multi-access edge computing technology is undoubtedly leading the charge. Enterprises that embrace this new technology in their business models benefit from streamlined operations, reduced costs, and enhanced customer experience.

Even then, MEC integration isn’t without its challenges. Companies looking to deploy multi-access edge computing technology should have a solid implementation strategy that aligns with their entire digital transformation agenda to avoid silos.

Article Source: 5G and Multi-Access Edge Computing

Related Articles:

What Is Multi-Access Edge Computing?

Edge Computing vs. Multi-Access Edge Computing

What Is Edge Computing?