As companies undergo mergers and acquisitions, on-premises facilities continue to age, and consolidation mandates are handed down, the need arises to migrate data center equipment to new facilities.
Most of the processes occur the same way no matter the data center in question, he said: “How do you physically move servers, how do you track inventory, how do you power everything down?” A dependency mapping exercise, down to the fiber, is also part of the work.
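The dependency-mapping and power-down questions raised above can be sketched programmatically. The example below, using hypothetical host names, models "A depends on B" relationships and derives a shutdown order in which nothing is powered off while something else still relies on it; it is an illustration of the idea, not any vendor's migration tooling.

```python
# Sketch of a dependency-mapping exercise: derive a safe power-down order
# from "depends on" relationships. All host names are hypothetical.
from graphlib import TopologicalSorter

# "app01 depends on db01" means db01 must stay up until app01 is down.
depends_on = {
    "app01": {"db01", "san01"},
    "db01": {"san01"},
    "web01": {"app01"},
}

# static_order() yields dependencies before dependents; reversing it gives
# a shutdown order where dependents go down before what they rely on.
shutdown_order = list(TopologicalSorter(depends_on).static_order())[::-1]
# → ['web01', 'app01', 'db01', 'san01']
```

The same graph, walked forward instead of reversed, gives the power-up order at the destination facility.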
Steve Carlini is Senior Director of Data Center Global Solutions and Soeren Jensen is Vice President of Software and Managed Services for Schneider Electric.
Data center operators are under more pressure than ever to provide the fastest, most reliable data possible while balancing demands for higher computing power and efficiency.
Cooling optimization in a data center provides a significant opportunity for reducing operating costs, since cooling systems consume 34 percent of the power used by a data center, according to Lawrence Berkeley National Laboratory.
Due to the need to maintain five-nines (99.999 percent) reliability, data centers are too often overdesigned in terms of cooling capacity, and operating budgets pay the price for this practice.
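To see what the 34 percent figure means in dollar terms, the sketch below estimates annual cooling spend for a facility and the savings from an efficiency improvement. The facility size, electricity price, and improvement percentage are hypothetical inputs chosen for illustration.

```python
# Back-of-the-envelope cooling cost, using the 34 percent share cited above.
# Facility size, price, and savings percentage are hypothetical examples.

def cooling_cost(total_kw, hours_per_year, price_per_kwh, cooling_share=0.34):
    """Annual cooling cost, assuming cooling draws a fixed share of total power."""
    total_kwh = total_kw * hours_per_year
    return total_kwh * cooling_share * price_per_kwh

# A hypothetical 1 MW facility running year-round at $0.10/kWh:
annual = cooling_cost(total_kw=1000, hours_per_year=8760, price_per_kwh=0.10)
# → about $297,840 per year on cooling alone

# A 20 percent cooling-efficiency improvement would then save:
savings = annual * 0.20
# → about $59,568 per year
```

Even modest optimization percentages translate into meaningful operating-budget relief at this scale, which is why overdesigned cooling is costly.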
In the past, companies managed data center infrastructure and the processes surrounding those resources using spreadsheets. This approach, though cheap and easy, was a stopgap rather than a true enterprise management platform. Data center infrastructure management (DCIM) materialized to fill this need.
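A toy example makes the spreadsheet-to-DCIM contrast concrete: once assets are structured records rather than free-form rows, questions about capacity become simple queries. The fields and host names below are hypothetical and do not reflect any particular DCIM product's schema.

```python
# Minimal illustration of structured asset inventory versus a spreadsheet.
# Field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Asset:
    hostname: str
    rack: str
    rack_unit: int
    power_draw_watts: int

inventory = [
    Asset("db01", "A-12", 4, 450),
    Asset("web01", "A-12", 6, 300),
    Asset("app01", "B-03", 2, 350),
]

# Questions a spreadsheet answers only by hand become one-liners,
# e.g. total power drawn by everything in rack A-12:
rack_a12_watts = sum(a.power_draw_watts for a in inventory if a.rack == "A-12")
# → 750
```

A real DCIM platform layers monitoring, alarming, and change workflows on top of this kind of model; the structured data is what makes those features possible.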
When it comes to the strategic importance and business value of the network in today’s data center, there is a pretty big disconnect. Even with momentum building behind the software-defined data center (SDDC) and software-defined services, applications and networks often pass like ships in the night: in many cases they are designed, built, deployed, and managed separately, as discrete entities.
Hazard: The circuit breakers will not trip during an overload condition.
The recall involves PowerPact J-frame molded case circuit breakers with thermal-magnetic trip units. The circuit breakers are made of black plastic and have a three-position breaker handle that indicates whether the breaker is off, on, or tripped. The recalled circuit breakers are rated for 150 to 250 amps and have interruption ratings of D, G, J, L, and R. They were manufactured in two-pole and three-pole configurations with either lug-in/lug-out or plug-in (I-Line) style connectors.
Keystone NAP has delivered the first modular “KeyBlock” unit to its data center on the border between Pennsylvania and New Jersey. The modules were co-developed with Schneider Electric and are configured to each individual customer’s power, cooling, and network connectivity needs.
Deployment of active optical cable (AOC) is on the rise in data center cabling and is expected to maintain an upward trajectory in the coming years. According to market research firm LightCounting, 10 GbE SFP+ AOCs now account for a quarter of the overall market, driven primarily by substantial growth in the data center over the past decade or so. Going forward, throughput exceeding 100G and even 400G, along with core switching interconnectivity, is expected to push today’s $100 million market to more than $260 million by the end of the decade.
Providing a rare look inside its data center operations, Google recently posted a video describing its data center in Berkeley County, South Carolina, including descriptions of the facility’s cooling system and security measures.
The company announced it would build the South Carolina data center in 2007. Including an expansion project in 2013, Google’s total investment in the site amounts to $1.2 billion.