
Do You Need an IP67 Ethernet Switch? Ask These Questions First

Written by Rick Saro and Mike Krueger

 

In automotive environments, Ethernet switches make it possible to connect essential devices to the network so they can gather data and communicate.

Choosing the right Ethernet switch often comes down to deciding between IP ratings: an IP20 or IP67 switch. Both serve the same purpose but offer different advantages and drawbacks you should consider.

An IP20 switch is installed inside a control cabinet. It is considered touchproof (users won't make contact with hazardous or energized parts) and prevents ingress of large dust particles.

IP67 switches allow equipment operators to deploy Ethernet-based systems right at a machine, process or factory floor instead of in a cabinet. This allows them to configure, manage and monitor connected machines and devices remotely—outside the control cabinet—without having to run long lengths of cable or install enclosures for switches and powering devices.

Due to many factors—including their space-saving, cabinet-less design—IP67 switches are sometimes considered the automotive manufacturing industry’s go-to option for Ethernet switches. But does your plant environment really need an IP67 switch? Would an IP20 switch work just as well?

In some environments, IP67 switches may be necessary. In other cases, however, IP20 switches may be the more cost-effective choice.

Which IP-rated Ethernet switch is right for your automotive plant? To find out, ask yourself these questions …

 

1. Is There Moisture or Frequent Washdowns?

Water plays a big role in the automotive manufacturing process and is used at a number of different stages along the assembly line.

These applications might include:

  • Paint booths where water is used as a filtration medium
  • Rinsing and metal finishing
  • Processing equipment that must be regularly cleaned with water
  • Body-washing areas where cars are cleaned before leaving the plant
  • Rain test chambers that ensure water tightness

If an Ethernet switch will be deployed in a water-intensive production area, then it needs to be protected from water intrusion. To guard against water ingress, IP67 Ethernet switches rely on sealed M12 connectivity instead of the RJ45 connectivity found on IP20 switches.

 

2. Are Dust and Debris Present?

Running a production line often generates large volumes of dust. When significant amounts of dust are present in your manufacturing environment, Ethernet switches need to guard against dust intrusion to remain operational.

These types of dust-generating applications can include:

  • Cutting
  • Grinding
  • Machining
  • Plastic processing
  • Rubber manufacturing
  • Stamping
  • Welding

IP20 switches prevent ingress of solid objects larger than 12.5 mm in diameter, which offers only limited protection against fine dust. IP67 switches are considered completely dust tight, offering full protection from dust and other particulates.
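
If it helps to keep the two ratings straight, an IP code can be read digit by digit: the first digit describes protection against solids, the second against liquids. The short Python sketch below is a minimal illustration that decodes the two ratings discussed here, paraphrasing the general IEC 60529 categories; it isn't tied to any particular product.

```python
# Illustrative lookup for the two IP ratings discussed in this article.
# Descriptions paraphrase the general IEC 60529 categories.

SOLIDS = {
    "2": "protected against solid objects larger than 12.5 mm (e.g., fingers)",
    "6": "dust tight (no ingress of dust)",
}

LIQUIDS = {
    "0": "no protection against liquids",
    "7": "protected against temporary immersion in water (up to 1 m)",
}

def describe_ip(rating: str) -> str:
    """Return a plain-language description of an IP rating such as 'IP67'."""
    solids_digit, liquids_digit = rating[2], rating[3]
    return (
        f"{rating}: {SOLIDS.get(solids_digit, 'see IEC 60529')}; "
        f"{LIQUIDS.get(liquids_digit, 'see IEC 60529')}"
    )

if __name__ == "__main__":
    for rating in ("IP20", "IP67"):
        print(describe_ip(rating))
```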

 

3. Do We Need Clear Lines of Sight?

Is having a clear line of sight to production lines important in your plant to support communication, determine when assistance is needed, watch for alerts, maintain productivity or ensure that quality standards are met?

Because IP67 switches can be installed outside protective cabinets and directly on machines, they don’t create any visual clutter that may impede the ability to see production lines or interfere with visual verification.

 

4. Do Control Cabinets Need More Space?

Real estate can be one of the biggest expenses involved with running a plant. Maximizing space inside control cabinets can help reduce the size and footprint of the cabinets themselves, optimize plant square footage and reduce labor and material costs.

If you need to find ways to optimize the space inside your automotive plant's control cabinets, then an IP67 Ethernet switch's cabinet-less design can help. Mounting the switch outside the cabinet, directly at the machine, also results in shorter cable runs (saving even more labor and material costs).

 

5. Is Maintenance a Concern?

Many U.S. requirements state that electricians must dress in personal protective equipment, including clothing that doesn't conduct electricity, before accessing a cabinet containing 110 V service or higher. If an IP20 switch is inside that cabinet, then only electricians can access it.

IP67 switches eliminate this requirement, along with the potential for arc flash exposure, because they can be deployed outside enclosures and cabinets while still performing reliably in dusty, wet and harsh environments.

Mounting Ethernet switches outside the control cabinet also reduces the amount of time an electrician spends working inside a cabinet, improving life safety.

 

Making the Right Choice

If the factors mentioned above—water and dust ingress, space optimization, maintenance and clear lines of sight—are important to your manufacturing operation, then IP67 switches may be the best choice for your automotive environment.

If these factors aren’t a major concern, however, then IP20 switches can be a practical and cost-effective solution to support your connectivity goals.
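
To summarize the decision logic, the five questions above boil down to a simple screening rule: if any of them gets a "yes" for your plant, IP67 deserves a closer look; otherwise IP20 is usually sufficient. Here is a minimal, hypothetical sketch of that checklist in Python; the field names are illustrative only and not part of any product or standard.

```python
# Hypothetical helper that encodes the article's five screening questions.
# Field names are illustrative, not part of any real product or standard.

from dataclasses import dataclass

@dataclass
class PlantConditions:
    moisture_or_washdowns: bool       # Question 1
    heavy_dust_or_debris: bool        # Question 2
    clear_sightlines_needed: bool     # Question 3
    cabinet_space_constrained: bool   # Question 4
    maintenance_access_concern: bool  # Question 5

def recommend_switch(c: PlantConditions) -> str:
    """Return 'IP67' if any screening question is answered yes, else 'IP20'."""
    if any(
        (
            c.moisture_or_washdowns,
            c.heavy_dust_or_debris,
            c.clear_sightlines_needed,
            c.cabinet_space_constrained,
            c.maintenance_access_concern,
        )
    ):
        return "IP67"
    return "IP20"

if __name__ == "__main__":
    body_shop = PlantConditions(False, True, False, True, False)
    print(recommend_switch(body_shop))  # -> IP67
```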

For automotive environments that demand IP67 Ethernet switches, Belden offers its OCTOPUS IP67 Ethernet Switch, which allows automotive plants to install reliable, fail-safe networks in demanding conditions. Available in unmanaged and managed versions, these switches offer a cabinet-less design for easy installation directly on machines, built-in network security and complete protection against dust and water intrusion.

 

Learn more about OCTOPUS IP67 Switches

 

Find the original article here

Are You Ready For The Era Of Private Wireless Networks?

Written by Steve Carroll

Ericsson predicts that North Americans' per-user data consumption will increase by 500% over the next four years. By 2026, the average user is expected to consume 48 GB of data monthly.

Much of this data consumption will occur over carrier networks—the networks that support mobile/cellular connections. Today, mobile networks carry almost 300 times more mobile data traffic than they did in 2011. And the vast majority of this traffic—80%—is now consumed indoors.

What does this all mean for the buildings where the data is consumed?

Adapting properties to support growth in dedicated in-building wireless will be key to keeping employees, visitors and guests connected indoors. In fact, many buildings are now being evaluated based on the technology and connectivity they offer to their tenants and occupants. We'll share more about this concept in a future blog, but there are certification programs that rank new and existing buildings based on their digital infrastructure, future readiness and user connectivity experience. One of the newest categories ranks the in-building wireless capabilities of a facility.

Poor indoor mobile connectivity isn't something that can be overlooked any longer. But, many times, the building itself prevents a wireless carrier's cellular signals from coming inside. Materials like metal, tinted glass, brick and concrete act as physical barriers that attenuate or block signal penetration.

In the past, mobile carriers were big investors in wireless infrastructure. If they knew their customers would be located in or near a venue—a high-rise office, arena or shopping district, for example—then they would help fund that facility’s wireless infrastructure to provide customers the best experience possible indoors (sometimes even paying a monthly fee to rent space for the infrastructure). In many situations, it didn’t cost the owner much money to deploy a mobile network.

Today, this approach has changed. Because most carriers no longer have the budgets to continue operating this way, enterprises now have to provide their own in-building wireless. As owners take on these costs, they’re looking for other connectivity options—such as private wireless networks.

In future blogs, we’ll talk about where private wireless networks work best, how they may be positioned to support emerging technology initiatives and best practices to design and deploy private wireless networks. For now, we want to explain what private wireless networks are—and how they’re different.

 

What Is A Private Wireless Network?

The purpose of a private wireless network is to give individuals or organizations the chance to deploy their own connectivity systems. These systems can operate by leveraging a combination of licensed, quasi-licensed and/or unlicensed spectrum. In other words, they can be LTE (the technology behind 4G) or 5G networks. They’re owned and operated by an enterprise, not a mobile carrier.

Globally, each region of the world is at a different stage of enabling its own access to private wireless spectrum. In the United States, private wireless networks can operate within the Citizens Broadband Radio Service (CBRS) and C-Band spectrum.

The CBRS frequency range spans 3.55 GHz to 3.7 GHz and was historically reserved for incumbent users such as the U.S. Department of Defense.

In 2015, the U.S. Federal Communications Commission decided to make this spectrum range available to a wider variety of users. The spectrum is “shared” among these groups, and the OnGo™ Alliance, a coalition of industry organizations focused on shared-spectrum solutions, promotes its commercial adoption.

 

Why Are Owners Choosing Private Wireless Networks?

There are many reasons why an owner may be considering a private wireless network. One of the biggest has to do with costs, as we mentioned above. In some cases, such as in highly populated areas, carriers may continue to help fund infrastructure. In situations where they can't or won't, owners will be looking for cost-effective ways to bring mobile connectivity into their buildings.

Other reasons involve privacy and security. In a public network, data traffic travels back and forth to the carrier's core network in another location. Private wireless network traffic doesn't have to leave the premises. This not only improves security and privacy, but also lowers latency and improves speed.

Private networks also allow enterprises to control their own bandwidth distribution. A smart manufacturing plant, for example, may choose to prioritize connectivity for its latency-sensitive production lines over back-of-house systems.
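
As one small illustration of how an application can participate in that prioritization: endpoints can mark latency-sensitive packets with a DSCP value that the private network's QoS policies may be configured to honor. The sketch below marks a UDP socket with the Expedited Forwarding code point on a Linux host; the destination address is purely illustrative, and whether a given private LTE/5G network acts on the marking depends entirely on how it is configured.

```python
# Minimal illustration: mark outbound UDP traffic with DSCP EF (46) so that
# network-side QoS policies, if configured to honor it, can prioritize it.
# Assumes a Linux host; the destination address below is purely illustrative.

import socket

DSCP_EF = 46              # Expedited Forwarding code point
TOS_VALUE = DSCP_EF << 2  # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Send a latency-sensitive telemetry message to a hypothetical line controller.
sock.sendto(b"line-3 telemetry sample", ("192.0.2.10", 5000))
sock.close()
```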

 

Where To Learn More About Private Wireless

Recently, Belden teamed up with Ranplan to lead a discussion on the topic of private wireless.

If you missed it, you can watch Private Wireless Networks Explained on demand. We walk through the basics of private wireless so that you understand its capabilities and benefits in terms of deployment, bandwidth, maintenance and costs.

Because every situation is different, private wireless may not be the exact fit to replace a distributed antenna system (DAS). Belden can help you determine your specific connectivity needs.

To learn more about in-building wireless networks, download this Navigating In-Building Wireless white paper.

Find the original article here

Reduce Data Center Operating Costs to Improve PUE

Written by Shad Sechrist


If you're looking for ways to reduce data center operating costs, then lowering monthly energy bills is a great place to start. By far, the biggest contributor to high data center operating costs is this recurring expense.

 

What drives your utility bill so high each month? In most data centers, it comes down to the operation of non-IT systems:

  • Cooling and air handling
  • Lighting
  • Security cameras

 

To determine how much of your total power usage goes to systems that don’t provide compute services, you can calculate your data center’s power usage effectiveness (PUE). This ratio compares the total amount of power used by your data center to the amount of power delivered to its computing equipment. It also reveals how much energy is used for non-IT activities and systems.

 

PUE = total facility energy / IT equipment energy

 

A high PUE indicates that a large share of your data center's power goes to overhead rather than to running IT equipment. A low ratio suggests that energy is being used effectively to get compute work done.

 

As we examined the Uptime Institute’s 11th Annual Global Data Center Survey, we discovered that energy-efficiency progress has slowed down for many data centers.

 

From 2003 to 2010, for example, the data center industry made great strides in improving PUE. The average data center's PUE dropped from about 2.5 to 1.6. In the last five years, however, the industry has hit a plateau. The average PUE has been stagnant, sitting near 1.56.

 

When this PUE is translated to a percentage (using the data center infrastructure efficiency [DCiE] metric), it shows that only about 64% of the energy entering the data center reaches the compute gear; the remaining 36% or so powers the non-IT systems we mentioned earlier.
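
To make the arithmetic concrete, here is a minimal worked example of the PUE and DCiE calculations in Python. The monthly energy figures are assumptions chosen to land near the industry-average PUE of 1.56.

```python
# Worked example of the PUE and DCiE formulas with illustrative numbers.
# total_facility_kwh and it_equipment_kwh are assumed monthly energy readings.

total_facility_kwh = 780_000   # everything the utility meter records
it_equipment_kwh = 500_000     # energy delivered to compute, storage, network gear

pue = total_facility_kwh / it_equipment_kwh    # 1.56
dcie = it_equipment_kwh / total_facility_kwh   # about 0.64, i.e. 64%
non_it_share = 1 - dcie                        # about 0.36, i.e. 36% overhead

print(f"PUE:  {pue:.2f}")
print(f"DCiE: {dcie:.0%} of incoming energy reaches IT equipment")
print(f"      {non_it_share:.0%} goes to cooling, lighting and other non-IT loads")
```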

 

Newly constructed data centers designed to maximize energy efficiency typically see PUEs of 1.1 or 1.05—proof that this level of performance can be achieved. And while there’s plenty of new space on the horizon, most data centers have been running for years and rely on older systems.

 

Why is PUE Progress Slowing Down?

By now, most data center managers have had time to pick the low-hanging fruit, such as:

  • Isolating supply and return air through containment walls or using end-of-row doors on aisles to prevent air mixing.
  • Using blanking panels to fill unused “U” positions in racks or enclosures and separate hot and cool air.
  • Sealing holes in walls and raised floors with plenum-rated products.
  • Replacing missing or poorly fitting floor tiles.
  • Getting rid of underused or non-operational servers.

 

If your data center hasn’t implemented these best practices, now’s the time to do so. You’ll see an immediate improvement in energy use and lower data center operating costs.

 

The next phase of efficiency improvements, which can take PUE from 1.5 to 1.2 or 1.1, requires more time and money. Once you pick all your low-hanging fruit, here are some examples of what’s waiting higher up the tree.

 

Deploy Power Distribution Units (PDUs)

PDUs are like well-constructed power strips designed to be used in data centers. Today’s smart PDUs help data center managers remotely monitor power use, energy efficiency and environmental conditions.

 

They can track metrics like real-time power usage, data and event logs, the amount of current drawn by each PDU and the amount of current drawn by each outlet so you can optimize usage down to the device level.

 

This level of granularity is key. When you know exactly how much energy certain systems use, it becomes obvious where changes need to be made—even down to the rack level.
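
As a simple illustration of how that granularity can be used, the hypothetical sketch below rolls per-outlet readings up to the rack level so unusual draws stand out. The data structure and values are assumptions for illustration; in practice the readings would come from your PDU's SNMP or REST interface.

```python
# Hypothetical aggregation of per-outlet PDU readings to the rack level.
# In practice these readings would come from the PDU's SNMP or REST interface;
# the values below are made up for illustration.

from collections import defaultdict

# (rack, pdu_outlet, watts) tuples as they might arrive from a polling job
readings = [
    ("rack-01", "outlet-1", 310.0),
    ("rack-01", "outlet-2", 295.5),
    ("rack-02", "outlet-1", 842.0),   # unusually high draw worth investigating
    ("rack-02", "outlet-2", 120.0),
]

per_rack_watts = defaultdict(float)
for rack, _outlet, watts in readings:
    per_rack_watts[rack] += watts

for rack, watts in sorted(per_rack_watts.items()):
    print(f"{rack}: {watts:.1f} W")
```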

 

Install More Efficient Cooling Equipment

If you want to replace your legacy cooling equipment with new, more efficient systems to better control heat, there are many options to choose from. The right one for your data center depends on its size, location, configuration and unique design challenges.

 

You can choose to cool at the room level, the row level or the rack level (or use a combination), and there’s a long list of systems to choose from: computer room air conditioners (CRACs), liquid cooling and precision cooling are just a few examples.

 

If your cooling equipment is outdated, then it’s likely inefficient. Upgrading your system can reduce energy use and lower data center operating costs.

 

Invest in White Cabinets

Lighter-colored cabinets can conserve electricity in a few ways. Light colors like silver or white naturally reflect more light than dark colors like black because they have higher light reflectance values (LRVs). For this reason, additional lighting is often needed to see labels and ports among dark cabinets.

 

When you lower lighting levels, you also reduce the amount of heat given off by the lighting system, which reduces cooling requirements. We estimate that swapping black enclosures for a lighter color leads to energy savings of between 1% and 2%.

 

Update Lighting

Modernize your lighting systems to take advantage of LED technology. LEDs are a good fit for data centers for many reasons:

  • They generate less heat than fluorescents, which translates to lower cooling costs
  • They use less energy than alternatives
  • They offer lighting uniformity so all areas are equally bright, reducing shadows that make maintenance work difficult

 

Occupancy sensors and lighting zones are also effective ways to control data center operating costs. When no one’s in the data center, the lights will automatically shut off. (Depending on your surveillance equipment, you may need enough illumination for proper video capture, but many of today’s cameras can see in low-light and dark conditions.) When the lights are on, initial entry areas and halls don’t need to be as bright as equipment areas, and they can be zoned accordingly.

 

Keep People Out of the Data Center

Data center spaces are built to process data, not to host people. Keeping the data center as “hands-off” or “lights-out” as possible is another step you can take to reduce data center operating costs.

 

IT equipment can operate at higher ambient temperatures than those typically comfortable for people. If you can automate certain processes and reduce the need for onsite staff, then the space doesn’t need to be as cool.

 

Lights-out data centers may not be common yet, but COVID-19 revealed what these unmanned spaces may look like. In many cases, the examples proved that data centers can operate with little human involvement.

 

 

Find the original article here
