Introducing Magnum 5RX Security Router

This ruggedized device delivers high-performance routing and advanced firewall functions while ensuring network security, giving you an opportunity to reduce total infrastructure costs, especially in high-volume deployments and highly distributed networks.

 

Ultimate Performance and Reliability in a 2-in-1 Package

Integrating advanced firewall security and routing in a fixed configuration, the Magnum 5RX Security Router provides current and legacy network interfaces and a valuable migration path to the new generation of network backbones. It features eight DB9 DTE serial ports along with six standard Gigabit Ethernet ports and one WAN (T1/E1 or DDS) port.

  • Combined 2-in-1 solution
  • Ensures optimal performance
  • Total network support with Magnum series

The GarrettCom Magnum 5RX Fixed Configuration Security Router offers a cost-efficient, two-in-one solution for industrial energy and utility applications.

The Magnum 5RX Security Router is a mid-level, industrial-grade security router serving the power generation, transmission and distribution markets by delivering an efficient edge-of-network solution.

Offering advanced routing and security capabilities in a single platform, the new router provides a natural migration path for customers planning a move to next-generation, high performance Gigabit Ethernet and Transmission Control Protocol/Internet Protocol (TCP/IP) technology.

 

Combined two-in-one solution

  • Routing and security functionalities in a single device for streamlined management
  • Fixed configuration for a cost-effective system, especially in highly distributed deployment scenarios

Read full article

Checkpoint 3: Optical Fiber Standards for Fiber Infrastructure Deployment

To reinforce the expanding cloud ecosystem, optical active component vendors have designed and commercialized new transceiver types under multi-source agreements (MSAs) for different data center types; standards bodies are incorporating these new variants into new standards development.

For example, IEEE 802.3 taskforces are working on 50 Gbps- and 100 Gbps-per-lane technologies for next-generation Ethernet speeds from 50 Gbps to 400 Gbps. Moving from 10 Gbps to 25 Gbps, and then to 50 Gbps and 100 Gbps per lane, creates new challenges in semiconductor integrated circuit design and manufacturing processes, as well as in high-speed data transmission.

As you get ready to deploy new fiber infrastructure to accommodate these upcoming changes, we think there are four essential checkpoints you should keep in mind:

  1. Determine the active equipment I/O interface based on application types
  2. Choose optical link media based on reach and speed
  3. Verify optical fiber standards developed by standards bodies
  4. Validate optical link budget based on link distance and number of connection points
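Checkpoint No. 4 can be illustrated with a simple calculation. The sketch below is a minimal Python example; the fiber, connector and splice losses, the margin, and the power budget are purely illustrative assumptions – real values come from the applicable standard and your component datasheets:

```python
# Illustrative optical link-budget check (example figures, not from any standard).
FIBER_LOSS_DB_PER_KM = 0.35   # assumed attenuation per km of fiber
CONNECTOR_LOSS_DB = 0.5       # assumed loss per mated connector pair
SPLICE_LOSS_DB = 0.1          # assumed loss per fusion splice

def link_loss_db(distance_km, connectors, splices):
    """Total channel loss: fiber attenuation plus connection-point losses."""
    return (distance_km * FIBER_LOSS_DB_PER_KM
            + connectors * CONNECTOR_LOSS_DB
            + splices * SPLICE_LOSS_DB)

def budget_ok(distance_km, connectors, splices, power_budget_db, margin_db=3.0):
    """Checkpoint 4: does the channel fit the power budget with safety margin?"""
    return link_loss_db(distance_km, connectors, splices) + margin_db <= power_budget_db

# A 2 km link with 4 connector pairs and 2 splices against a 6.3 dB budget:
loss = link_loss_db(2.0, 4, 2)   # 0.7 + 2.0 + 0.2 = 2.9 dB
print(loss, budget_ok(2.0, 4, 2, 6.3))
```

The same arithmetic applies whatever figures you substitute: sum the losses at every connection point along the link distance, add margin, and compare against the transceiver's power budget.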

In a series of blogs – the first published on March 23, 2017 – we are discussing these checkpoints, describing current technology trends and explaining the latest industry standards for data center applications. This blog covers checkpoint No. 3: verifying optical fiber standards developed by standards bodies.

Read full article

Rack Scale Design: “Data-Center-in-a-Box”

The “data-center-in-a-box” concept is becoming a reality as data center operators look for solutions that are easily replicated, scaled and deployed following a just-in-time methodology.

Rack scale design is a modular, efficient design approach that supports this demand for easier-to-manage compute and storage solutions.

What is Rack Scale Design?

Rack scale design solutions serve as the building blocks of a new data center methodology that incorporates a software-defined, hyper-converged management system within a concentrated, single rack solution. In essence, rack scale design is a design approach that supports hyper-convergence.

Rack scale design is changing the data center environment. Read on to discover how the progress to a hyper-converged, software-defined environment came about; its pros and cons; the effects on the data center infrastructure; and where rack scale design solutions are headed.

What is Hyper-Convergence?

Two years ago, the term “hyper-convergence” meant nothing in our industry. By 2019, however, hyper-convergence is expected to be a $5 billion market.

Offering a centralized approach to organizing data center infrastructure, hyper-convergence can collapse compute, storage, virtualization and networking into one SKU, adding a software-defined layer to manage data, software and physical infrastructure. Based on software and/or appliances, or supplied with commodity-based servers, hyper-convergence places compute, storage and networking into one package or “physical container” to create a virtualized data center.

Read full article

IT as a Utility

There are accepted utility services which businesses require in order to function: water, gas, sewer and electricity services are at the very top. As public utilities, these services are provided to all organizations; their cost is typically determined based on usage and demand, and customers pay a metered fee based on individual consumption levels.

When you reflect on public utilities, what comes to mind? How reliant we are on them? How fundamental they are to our survival? How the services are primarily invisible to us, and how we often take them for granted?

When you think about it, many of these statements could also be said about IT networks, especially as they have changed over the past few years to support digital buildings and IoT. It’s becoming extremely common to refer to – or think about – IT as a utility because of how central it is to every business – and to our everyday lives. Enterprise networks are just as vital as electricity and water to keeping a business afloat.

Today’s users expect networks to be fast and fully functional. They do not think about the behind-the-scenes work it takes to make that network connection happen. When you flip a light switch, do you think about where the electricity is coming from, or the process required to make your overhead lights turn on? When you think about IT as a utility, you expect to be able to connect to a network whenever you want – you assume it will always be available and easy to access, regardless of where you are.

Read full article

 

AI Uses in Data Centers

Like many of the digital transformations we have seen in the past couple of years, artificial intelligence (AI) is altering the way we all do business – including in data centers.

An increasingly used term that describes the method of using “machine logic” to solve very complex problems for humans, artificial intelligence also describes the potential for a machine to “learn” similar to the way human beings learn. Software algorithms (programming, more specifically) develop relationships between large sets of data, then repeat the same function using the same algorithms, but including the “learning factor.”
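As a minimal illustration of that idea – a toy sketch, not any vendor's actual algorithm – the Python snippet below repeats the same update rule over a small data set; the learning rate plays the role of the “learning factor”:

```python
# Toy illustration of machine "learning": repeatedly apply the same update
# rule to fit y = w * x; the learning rate acts as the "learning factor".
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # toy data where y = 2 * x

w = 0.0              # initial guess for the weight
learning_rate = 0.05

for _ in range(200):                    # repeat the same function...
    for x, y in data:
        error = w * x - y               # ...measure the error on each pass...
        w -= learning_rate * error * x  # ...and nudge w to reduce it

print(round(w, 3))  # converges toward 2.0
```

Each pass through the data applies the identical algorithm, yet the result improves: that feedback loop is the essence of what the article calls the “learning factor.”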

We are hearing so much about artificial intelligence because it is one of the fastest-growing sectors in technology today. Artificial intelligence uses are expected to increase by 63% between last year (2016) and 2022; the prediction is a $16.6 billion market that’s driven by technology companies like IBM, Intel and Microsoft.

According to Siemens, there are specific artificial intelligence uses that are expected to rise between 2019 and 2024:

  • Autonomous robots (self-driving cars): 31%
  • Digital assistants (Siri-like automated online assistants): 30%
  • Neurocomputers (machines that recognize patterns and relationships): 22%
  • Embedded systems (machine monitoring and control): 19%
  • Expert systems (medical diagnosis and the smart grid): 12%

Artificial intelligence uses in data centers are also expected to grow. AI can help data centers reduce energy consumption and operating costs while improving uptime and maintaining high levels of performance. Need a few examples? Let’s take a closer look.

Read full article

5G Networks and Mobile Edge Computing

Global mobile data traffic is growing much faster than fixed broadband data traffic, with a compound annual growth rate of 47% from 2016 to 2021, according to Cisco’s VNI Mobile 2017 report. Expansion of mobile-access bandwidth is being driven by the proliferation of web applications, cloud computing and content streaming (including audio, video and virtual reality).
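To put that figure in perspective, a quick back-of-envelope calculation shows what a 47% compound annual growth rate means over the 2016–2021 period:

```python
# A 47% compound annual growth rate over the five years from 2016 to 2021
# multiplies traffic by (1 + 0.47) ** 5 -- roughly sevenfold.
cagr = 0.47
years = 5
multiplier = (1 + cagr) ** years
print(round(multiplier, 2))  # ≈ 6.86
```

In other words, a network sized for 2016 mobile traffic would need nearly seven times the capacity by 2021.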

The Evolution of Mobile Networks

The mobile network system, which serves as the communications backbone for cellular phones, has changed our lives and our communication over the last 30 years. Today, smartphones do not just support basic services like voice and SMS – they have become indispensable tools that offer millions of applications to improve work efficiency, continuously provide updated news information, keep us in contact with peers and friends, provide instant streaming of our favorite TV series and movies, take and share high-definition pictures and videos … our smartphones have become our personal assistants to complete all kind of tasks.

Since the first cellular mobile network system was introduced in 1981, a new mobile generation has appeared every 10 years. These mobile-network milestones remind us of just how far we’ve come since then:

  • A 1G cellular system that supported analog voice service using frequency division multiple access (FDMA) was introduced in 1982.
  • A 2G GSM cellular system that supported digital voice and messaging using time division multiple access (TDMA) and code division multiple access (CDMA) was introduced in 1992.
  • 3G first appeared in 2001 to support digital voice and messaging, as well as data and multimedia service; it moved us to the wideband spread-spectrum with wideband code-division multiple access (WCDMA).
  • 4G/LTE (long-term evolution), our current mobile-network generation, supports IP voice and data, as well as mobile Internet service. 4G has moved to complex modulation formats with orthogonal frequency-division multiplexing (OFDM), and was first standardized and introduced in 2012.

Read full article

Cabinet Seismic Ratings: Reduce the Risk of Downtime

The International Building Code (IBC) requires that certain facilities – data centers often included – remain operational during and after earthquakes or other seismic events. Based on building type, and on how vital a building’s operations are, facilities are placed into four IBC-defined risk categories:

  • Risk Category 4: Hospitals, aviation control towers, police/fire stations, facilities containing highly toxic materials
  • Risk Category 3: Lecture halls, theaters, power-generation stations, water treatment plants, prisons
  • Risk Category 2: Buildings that don’t fall into Risk Categories 1, 3 or 4
  • Risk Category 1: Storage buildings and agricultural facilities

Data centers typically fall into Risk Category 4, meaning that their operation is regarded as vital during and after an earthquake. To protect against downtime, it’s critical to minimise the potential for equipment damage during seismic events – especially if data centers are not backed up at a secondary location. Some data centers are considered vital to preserving communications (wireless, email, voice, etc.) after a seismic event.

Read full article

High-Speed Optical Links: Checkpoint 2 for Fiber Infrastructure Deployment

All the devices housed in today’s data centers – from virtualization equipment to storage devices – require cabling that provides high performance and flexibility. Because of this, deploying new fiber infrastructure in data centers demands careful thought and planning.

We advise keeping these four essential checkpoints in mind:

  1. Determine the active equipment I/O interface based on application types
  2. Choose optical link media based on reach and speed
  3. Verify optical fiber standards developed by standards bodies
  4. Validate optical link budget based on link distance and number of connection points

In a series of blogs – the first one published on March 23, 2017 – we will cover each of these checkpoints in detail, describe current technology trends and explain the latest industry standards for data center applications. This blog covers checkpoint No. 2: choosing optical link media based on reach and speed.

Read full article

Category Cables: Planning for Power Delivery

The utilisation of category cables for power delivery has been getting a great deal of attention lately – especially given amendments in the 2017 NEC (NFPA 70) and proposed 2017 revisions to CEC C22.1. This attention is related to potential safety issues that may emerge when high power, high temperature and high cabling density are present.

NEC Table 725.144, “Transmission of Power and Data” (published by the National Fire Protection Association, NFPA), contains information about the ampacity rating of conductors at various temperature ratings based on gauge and bundle size. UL has created LP certifications (optional – not required by code) to identify cables that are designed and tested to carry the marked current under reasonable worst-case installation scenarios without exceeding the cable’s temperature rating.

This arose through an allowance in the older version of NEC, which allowed electricians to substitute Class 2 and Class 3 data cables (category cables) for 18 AWG wire in certain instances.

Read full article

LAN Cabling: Going Beyond Standards to Improve Capacity

Cabling standards exist for a purpose – they help you get the most out of your networks. Many cabling solutions are designed to perform beyond what the standards specify.

When standards for performance are set by groups like the Telecommunications Industry Association (TIA), the International Organization for Standardization (ISO/IEC) and the Institute of Electrical and Electronics Engineers (IEEE), why go beyond what they advise? Because cable performance that exceeds the standards can lead to a more reliable LAN connection for enterprises.

Bandwidth and Information Capacity

The standards spell out specifications for insertion loss and background noise levels (return loss, near-end crosstalk [NEXT], etc.). If the cable stays within the recommended parameters, the cabling system will function as intended in terms of signal-to-noise ratio, or information capacity. For cabling, this is referred to as bandwidth.
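The connection between signal-to-noise ratio and information capacity is the Shannon-Hartley theorem, which can be sketched in a few lines of Python; the channel figures below are illustrative examples, not values taken from any cabling standard:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: capacity = B * log2(1 + SNR), with SNR linear."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures only: a 500 MHz channel at 30 dB SNR.
c = shannon_capacity_bps(500e6, 30.0)
print(f"{c / 1e9:.2f} Gbps")  # prints "4.98 Gbps"
```

This is why cable that exceeds the standard's noise limits matters: a better signal-to-noise ratio raises the theoretical ceiling on how much information the link can carry.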

Read full article

Copyright © 2024 Jaycor International
Engineered by: NJIN Agency