Budgeting Sufficient Power: Key to Future-proof Fiber Infrastructure

With the technology transformations happening in today’s enterprises, many types of organizations – from hotels and gaming facilities to schools and offices – are deploying new fiber cabling infrastructure.

However, it’s crucial to understand the power budget of the new architecture, as well as the desired number of connections in each link. The power budget indicates the amount of loss that a link (from the transmitter to the receiver) can tolerate while maintaining an acceptable level of operation.

This blog provides you with multimode fiber (MMF) link specifications so you can ensure your fiber connections have sufficient power for best performance. In an upcoming blog, we’ll cover the link specifications for singlemode fiber.

 

Attenuation and Effective Modal Bandwidth

The latest IEC and ANSI/TIA standards set the maximum attenuation coefficient for cabled OM3 and OM4 fiber at 3.0 dB/km at 850 nm. Attenuation, also known as “transmission loss,” is the loss of optical power due to absorption, scattering, bending and other effects as light travels through the fiber. OM4 can support a longer reach than OM3, mainly due to its better light-confining characteristics, defined by its effective modal bandwidth (EMB).
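
As a quick illustration of how a power budget is consumed: the total channel loss is the fiber attenuation over the link length plus the loss of every connection point, and the link works only if that total stays within the application’s power budget. The sketch below uses the 3.0 dB/km attenuation figure above; the 0.75 dB per mated connector pair and 0.3 dB per splice are assumed planning values, not figures from this article, so substitute your own component specifications.

```python
def channel_loss_db(length_m: float, connector_pairs: int, splices: int = 0,
                    fiber_atten_db_per_km: float = 3.0,    # cabled OM3/OM4 maximum at 850 nm (cited above)
                    connector_loss_db: float = 0.75,       # assumed loss per mated connector pair
                    splice_loss_db: float = 0.3) -> float: # assumed loss per splice
    """Estimate total channel insertion loss: fiber attenuation plus connection losses."""
    fiber_loss_db = fiber_atten_db_per_km * (length_m / 1000.0)
    return fiber_loss_db + connector_pairs * connector_loss_db + splices * splice_loss_db

# Example: a 100 m OM4 link with two mated connector pairs and no splices.
loss = channel_loss_db(length_m=100, connector_pairs=2)
print(f"Estimated channel loss: {loss:.2f} dB")  # 0.30 dB fiber + 1.50 dB connectors = 1.80 dB
```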

Read full article

Checkpoint 3: Optical Fiber Standards for Fiber Infrastructure Deployment

To support the expanding cloud ecosystem, optical active component vendors have designed and commercialized new transceiver types under multi-source agreements (MSAs) for different data center types; standards bodies are incorporating these new variants into new standards development.

For example, IEEE 802.3 taskforces are working on 50 Gbps- and 100 Gbps-per-lane technologies for next-generation Ethernet speeds from 50 Gbps to 400 Gbps. Moving from 10 Gbps to 25 Gbps, and then to 50 Gbps and 100 Gbps per lane, creates new challenges in semiconductor integrated circuit design and manufacturing processes, as well as in high-speed data transmission.
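
For a sense of how those per-lane rates map onto aggregate Ethernet speeds, the simple arithmetic below (an illustrative sketch only, not a list of standardized port types) shows the lane counts implied by 25, 50 and 100 Gbps lanes.

```python
# Illustrative mapping of Ethernet speeds to lane counts at different per-lane
# rates (general pattern only; actual PMDs are defined by the IEEE 802.3 standards).
ethernet_speeds_gbps = [50, 100, 200, 400]
lane_rates_gbps = [25, 50, 100]

for speed in ethernet_speeds_gbps:
    options = [f"{speed // rate} x {rate}G" for rate in lane_rates_gbps if speed % rate == 0]
    print(f"{speed}G Ethernet: {', '.join(options)}")
# 50G Ethernet: 2 x 25G, 1 x 50G
# 100G Ethernet: 4 x 25G, 2 x 50G, 1 x 100G
# 200G Ethernet: 8 x 25G, 4 x 50G, 2 x 100G
# 400G Ethernet: 16 x 25G, 8 x 50G, 4 x 100G
```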

As you get ready to deploy new fiber infrastructure to accommodate these upcoming changes, there are four essential checkpoints that we think you should keep in mind:

  1. Determine the active equipment I/O interface based on application types
  2. Choose optical link media based on reach and speed
  3. Verify optical fiber standards developed by standards bodies
  4. Validate optical link budget based on link distance and number of connection points

In this blog series, which began on March 23, 2017, we are discussing these checkpoints, describing current technology trends and explaining the latest industry standards for data center applications. This blog covers checkpoint No. 3: verifying optical fiber standards developed by standards bodies.

Read full article

Rack Scale Design: “Data-Center-in-a-Box”

The “data-center-in-a-box” concept is becoming a reality as data center operators look for solutions that are easily replicated, scaled and deployed following a just-in-time methodology.

Rack scale design is a modular, efficient design approach that supports this demand for easier-to-manage compute and storage solutions.

What is Rack Scale Design?

Rack scale design solutions serve as the building blocks of a new data center methodology that incorporates a software-defined, hyper-converged management system within a consolidated, single-rack solution. In essence, rack scale design is a design approach that supports hyper-convergence.

Rack scale design is changing the data center environment. Read on to discover how the progression to a hyper-converged, software-defined environment came about; its pros and cons; its effects on data center infrastructure; and where rack scale design solutions are headed.

What is Hyper-Convergence?

Two years ago, the term “hyper-convergence” meant nothing in our industry. By 2019, however, hyper-convergence is expected to be a $5 billion market.

Offering a centralized approach to organizing data center infrastructure, hyper-convergence can collapse compute, storage, virtualization and networking into one SKU, adding a software-defined layer to manage data, software and physical infrastructure. Based on software and/or appliances, or supplied with commodity-based servers, hyper-convergence places compute, storage and networking into one package or “physical container” to create a virtualized data center.

Read full article

IT as a Utility

There are accepted utility services that businesses require in order to function: water, gas, sewer and electricity are at the top of the list. As public utilities, these services are provided to all organizations; their cost is typically determined based on usage and demand, and customers pay a metered fee based on individual consumption levels.

When you reflect on public utilities, what comes to mind? How reliant we are on them? How fundamental they are to our survival? How the services are primarily invisible to us, and how we often take them for granted?

When you think about it, many of these statements could also be said about IT networks, especially as they have changed over the past few years to support digital buildings and IoT. It’s becoming extremely common to refer to – or think about – IT as a utility because of how central it is to every business – and to our everyday lives. Enterprise networks are just as vital as electricity and water to keeping a business afloat.

Today’s users expect networks to be fast and fully functional. They do not think about the behind-the-scenes work it takes to make that network connection happen. When you flip a light switch, do you think about where the electricity is coming from, or the process required to make your overhead lights turn on? When you think about IT as a utility, you expect to be able to connect to a network whenever you want – you assume it will always be available and easy to access, regardless of where you are.

Read full article

 

Cable Braid Design: The Significant Factor in New 4K Coax Cable

There is a revolution underway in 4K coax cable design. In a previous post, we outlined the factors that set new 4K coax cable designs apart (silver-coated copper center conductors, the insulation, polyethylene types and foil shields bonded to the core). Then we concluded that post by promising to tell you about the last remaining factor that sets new 4K coax cable designs apart from the others: the cable braid in 12 GHz 4K design.

Learn more about the most difficult part of the journey to a new 4K coax cable design …

Cable Braid Performance

After the first foil comes a braid layer with 95% coverage – the most coverage possible in a single cable braid. Enhancing cable braid performance is one of the most difficult aspects of creating a new 4K coax design.

If you look at the cable’s return loss (a measure of how uniform its impedance is), older designs show a number of spikes at different frequencies. These are caused by the dimensions of the braid, the relationship between the individual conductors, how the braid wires are woven together, the angle at which they cross each other (the braid angle) and many other factors.
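
To make the impedance/return loss relationship concrete, here is a minimal sketch (not from the original post) of the standard conversion from an impedance deviation to return loss, assuming a 75-ohm nominal coax impedance.

```python
import math

def return_loss_db(z_actual: float, z_nominal: float = 75.0) -> float:
    """Return loss (dB) caused by an impedance deviation from the nominal value.

    Uses RL = -20 * log10(|Gamma|), where Gamma = (Z - Z0) / (Z + Z0)
    is the reflection coefficient at the mismatch.
    """
    gamma = (z_actual - z_nominal) / (z_actual + z_nominal)
    return -20.0 * math.log10(abs(gamma))

# A braid-related impedance spike from 75 ohms to 78 ohms:
print(f"{return_loss_db(78.0):.1f} dB")  # ~34.2 dB return loss
# A larger deviation to 82 ohms reflects more energy (lower return loss):
print(f"{return_loss_db(82.0):.1f} dB")  # ~27.0 dB return loss
```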

Read full article

IT-OT Convergence and Conflict: Who is Responsible for ICS Security?

Who is responsible for industrial cyber security in your organization? Whether it is Information Technology (IT) or a cross-functional ICS operations and process control group – often labeled Operations Technology (OT) – the two groups likely have incompatible approaches to resolving cyber security risk.

To both secure ICS and reap the productivity benefits of IT-OT convergence, the industrial cyber security program must be recognized as a cross-functional lifecycle and journey. IT and OT must work together for either team to be successful.

Pre-internet, the line between IT and OT was quite clear. Today, that line has blurred. Technology can potentially provide connectivity to nearly every device on the plant floor and out to field locations, and it is connecting IT and OT in new ways, too.

IT and OT are very different organizations that have begun to converge. This blog addresses one of the many causes of their conflict and how to start resolving the growing pains.

IT and OT Resisting Convergence

IT and OT are resisting the convergence occurring all around them, says Luigi De Bernardini, CEO of Autoware, an MES and smart manufacturing automation firm in Italy. When working with clients on large manufacturing automation projects, he finds that “many manufacturers still see strong resistance to bringing information and operational technologies together, with mistrust coming from both sides.”

Read full article

 

Is DC Power Heading for Your Data Center?

Could DC power be an energy-saving game changer in the data center industry?

As power densities expand, colocation and hyperscale data center operators need to take advantage of every opportunity to decrease power consumption. Is it possible that 380V direct current (DC) might be the solution?

To answer that query, it’s important to understand the history behind AC (alternating current) and DC power, the pros and cons of using DC power in data centers, and the potential future of DC power.

Some History: AC vs. DC

The world might look very different today if Thomas Edison had won the power war back in the 1800s. In addition to inventing the lightbulb, Edison was the inventor and patent holder of an electrical distribution system based on direct current. He established the first electric utility company in New York in 1882 to supply electricity to 59 customers. By the late 1890s, he had constructed and was operating more than 100 direct current power plants in the Northeast.

His push to deploy DC power plants ended after one of his employees, Nikola Tesla, joined George Westinghouse; together, they developed an AC power distribution system. The AC power plant was significantly more efficient than Edison’s DC plant; AC power plants could distribute power to customers over hundreds of miles, whereas DC power plants needed to be placed within a few miles of homes and offices.

Read full article

AI Uses in Data Centers

Like many of the digital transformations we have seen over the past couple of years, artificial intelligence (AI) is altering the way we all do business – including in data centers.

An increasingly used term that describes the use of “machine logic” to solve very complex problems for humans, artificial intelligence also describes the potential for a machine to “learn” in much the same way human beings learn. Software algorithms (programming, more specifically) develop relationships between large sets of data, then repeat the same function using the same algorithms, this time including the “learning factor.”

The reason we are hearing so much about artificial intelligence is that it is one of the fastest-growing sectors in technology today. Artificial intelligence use is expected to increase by 63% between last year (2016) and 2022; the prediction is a $16.6 billion market driven by technology companies like IBM, Intel and Microsoft.

According to Siemens, there are specific artificial intelligence uses that are expected to rise between 2019 and 2024:

  • Autonomous robots (self-driving cars): 31%
  • Digital assistants (Siri-like automated online assistants): 30%
  • Neurocomputers (machines that recognize patterns and relationships): 22%
  • Embedded systems (machine monitoring and control): 19%
  • Expert systems (medical diagnosis and the smart grid): 12%

Artificial intelligence uses in data centers are also expected to grow. AI can help data centers reduce energy consumption and operating costs while improving uptime and maintaining high levels of performance. Need a few examples? Let’s take a closer look.

Read full article

5G Networks and Mobile Edge Computing

Global mobile data traffic is growing much faster than fixed broadband data traffic, with a compound annual growth rate of 47% from 2016 to 2021, according to Cisco’s VNI Mobile 2017 report. Expansion of mobile-access bandwidth is being driven by the proliferation of web applications, cloud computing and content streaming (including audio, video and virtual reality).
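
To put that figure in perspective, here is a quick back-of-the-envelope calculation (ours, not Cisco’s) of what a 47% compound annual growth rate implies over the five-year forecast window.

```python
# Compound annual growth: traffic multiplies by (1 + CAGR) each year.
cagr = 0.47
years = 5  # 2016 through 2021
growth_factor = (1 + cagr) ** years
print(f"Total mobile traffic growth over {years} years: ~{growth_factor:.1f}x")  # ~6.9x
```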

The Evolution of Mobile Networks

The mobile network system, which serves as the communications backbone for cellular phones, has changed our lives and our communication over the last 30 years. Today, smartphones do not just support basic services like voice and SMS – they have become indispensable tools that offer millions of applications to improve work efficiency, continuously provide updated news information, keep us in contact with peers and friends, provide instant streaming of our favorite TV series and movies, take and share high-definition pictures and videos … our smartphones have become our personal assistants for completing all kinds of tasks.

Since the first cellular mobile network system was introduced in 1981, a new mobile generation has appeared every 10 years. These mobile-network milestones remind us of just how far we’ve come since then:

  • A 1G cellular system that supported analog voice service using frequency division multiple access (FDMA) was introduced in 1982.
  • A 2G GSM cellular system that supported digital voice and messaging using time division multiple access (TDMA) and code division multiple access (CDMA) was introduced in 1992.
  • 3G first appeared in 2001 to support digital voice and messaging, as well as data and multimedia service; it moved us to the wideband spread-spectrum with wideband code-division multiple access (WCDMA).
  • 4G/LTE (long-term evolution), our current mobile-network generation, supports IP voice and data, as well as mobile Internet service. 4G has moved to complex modulation formats with orthogonal frequency-division multiplexing (OFDM), and was first standardized and introduced in 2012.

Read full article

Transmit Wireless Data at Speeds up to 867 Mbit/s

Nobody should pay for features they do not require. With the Hirschmann BAT867-R industrial wireless access point, you do not have to compromise performance for price. Space and budgets are limited, which is why the BAT867-R includes a refined set of features that helps reduce the device’s size, as well as overall networking costs.

 

BAT867-R Blends High-Performance with Cost-Effectiveness

Its rugged design, compact size and select feature set help you maximize efficiency and performance. The BAT867-R wireless access point is ideal for industrial settings where space and budgets are limited, such as discrete automation and machine building settings.

  • Enables high-speed data transmission up to 867 Mbit/s
  • Meets the IEEE 802.11ac standard
  • Provides reliable wireless connectivity for tablets and smartphones
  • Allows wireless connectivity for moving vehicles to improve warehouse efficiency

Transmit data efficiently – up to 867 megabits per second (Mbps) – with the BAT867-R industrial wireless access point. This device supports high-speed IEEE 802.11ac data rates, making it the fastest wireless device in Belden’s portfolio.
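
As a rough illustration of where the 867 Mbit/s figure comes from (our arithmetic, not Belden’s; it assumes an 80 MHz channel, two spatial streams, 256-QAM with a 5/6 coding rate and a short guard interval), the 802.11ac PHY rate works out as follows.

```python
# Rough 802.11ac PHY-rate arithmetic for the 867 Mbit/s figure (assumed
# configuration: 80 MHz channel, 2 spatial streams, 256-QAM 5/6, short GI).
data_subcarriers = 234        # data subcarriers in an 80 MHz 802.11ac channel
bits_per_subcarrier = 8       # 256-QAM carries 8 bits per subcarrier
coding_rate = 5 / 6           # MCS 9 coding rate
symbol_time_us = 3.2 + 0.4    # OFDM symbol plus short guard interval (microseconds)
spatial_streams = 2

per_stream_mbps = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time_us
total_mbps = per_stream_mbps * spatial_streams
print(f"{per_stream_mbps:.1f} Mbit/s per stream, {total_mbps:.0f} Mbit/s total")
# -> 433.3 Mbit/s per stream, 867 Mbit/s total
```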

By only including the essential interfaces, Hirschmann offers a cost-effective, high-speed solution. You also have access to extensive management, redundancy and security functions with Hirschmann.

Read full article
