Tag Archive for Data center

Underwater Data Center Resurfaces

Underwater Data Center Resurfaces – Updated 07/07/2024 – Microsoft has discontinued its efforts to build data centers on the sea floor. "I'm not building subsea data centers anywhere in the world," Noelle Walsh, the head of Microsoft's Cloud Operations and Innovation division, told DatacenterDynamics.

Two years ago, Microsoft sank a data center half a mile off Scotland's Orkney Islands under 117 feet of North Sea water. Earlier this week, they dredged the shipping-container-size data center of 864 servers and 27.6 petabytes of storage back to the surface. Now that it has resurfaced, Microsoft (MSFT) researchers are studying how it survived its trip into Davy Jones' locker and what the trip can tell us about land-loving data centers.

Lower failure rate

Their first conclusion is that the cylinder, with servers packed in like sardines, had a lower failure rate than a conventional data center. Only eight out of the 855 servers on board had failed. Ben Cutler, a project manager in Microsoft's Special Projects research group who leads Project Natick, said in a presser,

Our failure rate in the water is one-eighth of what we see on land.
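For a rough sense of what "one-eighth" means in numbers, here is a quick back-of-the-envelope check in Python. It uses only the eight-failure and 855-server figures quoted above; the implied land-based rate is an inference for illustration, not a number Microsoft published.

```python
# Back-of-the-envelope failure-rate comparison (illustrative only).
failed = 8        # servers that failed underwater, per the article
total = 855       # servers counted in the comparison
underwater_rate = failed / total

# Cutler's "one-eighth of what we see on land" implies roughly:
implied_land_rate = underwater_rate * 8

print(f"Underwater failure rate over the deployment: {underwater_rate:.2%}")
print(f"Implied land-based rate for a comparable fleet: {implied_land_rate:.2%}")
```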

The MSFT team is speculating that the greater reliability may be connected to the fact that there were no humans on board.  Microsoft’s John Roach explained:

"The team hypothesizes that the atmosphere of nitrogen, which is less corrosive than oxygen, and the absence of people to bump and jostle components, are the primary reasons for the difference. If the analysis proves this correct, the team may be able to translate the findings to land data centers." In other words, land-loving data centers often run into issues like corrosion from oxygen and humidity, temperature fluctuations, and bumps and jostles from the people who replace broken components.

Microsoft "Northern Isles"

Alternative power sources for data centers

Project Natick is also about addressing the huge energy demands of data centers as more and more of our data is stored in the cloud. All of Orkney's electricity comes from alternative sources, wind and solar power, which was not a problem for the underwater data center "Northern Isles." Spencer Fowers, a principal member of technical staff in Microsoft's Special Projects research group, said:

We have been able to run really well on what most land-based data centers consider an unreliable grid.

Not only can data centers run on alternative power, but they may not need the huge investment in dedicated buildings, rooms of batteries, and racks of UPSes. Microsoft's Fowers speculates:

We are hopeful that we can look at our findings and say maybe we don’t need to have quite as much infrastructure focused on power and reliability.

Underwater data center availability

Microsoft has clammed up about the availability of an underwater data center SKU, but MSFT's Cutler is confident the project has proved the idea has value:

We think that we’re past the point where this is a science experiment … Now it’s simply a question of what do we want to engineer – would it be a little one, or would it be a large one?

rb-

The drive toward autonomous vehicles is just one use case that explains MSFT's interest in small, self-contained data centers versus mega-data centers. Even with 5G, computing power will have to move closer to the user, to the edge of the network. How much latency do you want as your autonomous Tesla, traveling at 70 MPH, tries to figure out where it is?
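To make that concrete, here is a quick sketch of how far a car moves while it waits on the network; the 70 MPH comes from the paragraph above, and the latency values are illustrative assumptions.

```python
# How far does a car at highway speed travel during a given network delay?
MPH_TO_MPS = 0.44704          # miles per hour -> meters per second

speed_mps = 70 * MPH_TO_MPS   # the 70 MPH from the paragraph above (~31.3 m/s)

for latency_ms in (10, 50, 100, 250):          # assumed round-trip delays
    distance_m = speed_mps * latency_ms / 1000
    print(f"{latency_ms:>3} ms of latency -> the car travels about {distance_m:.1f} m")
```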

Stay safe out there!

Related article

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Undersea Data Center

Updated 08/09/2019 – Microsoft has installed two underwater cameras that offer live video feeds of the sunken data center. You can now watch all kinds of sea creatures swimming around a tank that holds 27.6 petabytes of data.

Followers of the Bach Seat know that Microsoft (MSFT) has experimented with undersea data centers to save costs associated with deploying data centers. Back in 2015, I wrote about MSFT's initial experiment off the California coast, where the company first tried out the idea of an underwater data center. Redmond has announced Phase 2 of Project Natick, which is designed to test the practical aspects of deploying a full-scale, lights-out data center underwater, called "Northern Isles."

Kurt Mackie wrote in an article at Redmond Magazine that Microsoft is testing this underwater data center off the coast of Scotland near the Orkney Islands in the North Sea. Microsoft wants to place data centers offshore because about half the world's population lives within 125 miles of a coast. Locating data closer to its users reduces latency for bandwidth-intensive applications such as video streaming and gaming, as well as emerging artificial intelligence-powered apps. Latency is the time it takes data to travel from its source to customers. It is like the difference between using an application on your hard drive and using one over the network.
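For a feel for why proximity matters, here is a minimal sketch that estimates one-way propagation delay through fiber. It assumes light travels at roughly two-thirds of its vacuum speed in glass; the 125-mile figure comes from the paragraph above, and the longer distances are illustrative assumptions.

```python
# Rough one-way propagation delay through optical fiber.
SPEED_IN_FIBER_MPS = 2.0e8    # m/s, roughly two-thirds the speed of light
MILES_TO_METERS = 1609.34

def one_way_delay_ms(distance_miles: float) -> float:
    return distance_miles * MILES_TO_METERS / SPEED_IN_FIBER_MPS * 1000

for label, miles in [("coastal user, 125 miles", 125),
                     ("cross-country, 2,500 miles", 2500),
                     ("intercontinental, 6,000 miles", 6000)]:
    print(f"{label}: ~{one_way_delay_ms(miles):.1f} ms one way (propagation only)")
```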

Mr. Mackie posts that the original underwater data center had the computing power of 300 PCs, while Phase 2's computing power is equal to "several thousand high-end consumer PCs," according to Microsoft's FAQ page. This next-generation underwater data center requires 240 kW of power, is 40 feet in length, and holds 12 racks with 864 servers. The submarine container is mounted on a metal platform on the seafloor, 117 feet deep. The Phase 2 data center can house 27.6 petabytes of data. A fiber-optic cable keeps it connected to the outside world. Naval Group, a 400-year-old French company, built the submarine portion of the project.
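Those published figures also give a feel for the density involved. A quick sanity check, using only the numbers quoted above:

```python
# Density implied by the Phase 2 figures quoted above.
power_kw = 240
racks = 12
servers = 864

print(f"Servers per rack: {servers // racks}")                  # 72
print(f"Power per rack:   {power_kw / racks:.0f} kW")           # 20 kW
print(f"Power per server: {power_kw * 1000 / servers:.0f} W")   # ~278 W
```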

The interesting part (U.S. Navy submarines have had computers onboard for years) is the lights-out part. Lights-out operation lets Microsoft change how data centers are deployed. Northern Isles also changes the cooling approach: the cold-aisle temperature is kept at a chilly 54°F (12°C) to remove the stress that temperature variations place on components. This temperature is maintained using a heat-exchange process developed for cooling submarines. Ben Cutler, Microsoft Research's Project Natick lead, told Data Center Knowledge, "... by deploying in the water we benefit from ready access to cooling – reducing the requirement for energy for cooling by up to 95%."

With Phase 2, Mr. Cutler explained to DCK, there is no external heat exchanger: "We're pulling raw seawater in through the heat exchangers in the back of the rack and back out again." This cooling system could cope with very high power densities, such as those required by GPU-packed servers used for heavy-duty high-performance computing and AI workloads.

According to DCK, the first iteration of Project Natick had a Power Usage Effectiveness (PUE) rating of 1.07, compared to 1.125 for Microsoft's latest-generation data centers. PUE is total facility power divided by the power delivered to IT equipment, so the lower the metric, the more efficiently the data center uses electricity. Microsoft hopes to improve the PUE for the Phase 2 data center.
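Here is a small sketch of what those two PUE ratings imply in overhead power, using the vessel's 240 kW figure purely as an illustrative IT load:

```python
# PUE = total facility power / IT equipment power.
def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power spent on cooling, distribution, etc., beyond the IT load itself."""
    return it_load_kw * (pue - 1)

it_load_kw = 240   # Northern Isles' draw, used here only as an illustrative IT load
for label, pue in [("Project Natick phase 1", 1.07),
                   ("Microsoft land-based", 1.125)]:
    print(f"{label}: PUE {pue} -> ~{overhead_kw(it_load_kw, pue):.0f} kW of overhead")
```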

Data centers are believed to consume up to 3% of the world's electricity. The new cooling options change the Northern Isles data center's power requirements. It can run off the Orkney Islands' local electrical grid, which is powered by renewable wind, solar, and tidal sources. One of the goals of the project is to test powering the data center with an off-the-grid source, such as nearby tidal power.

Future versions of the underwater data center could also have their own power generation. Mr. Cutler told DCK, “Tide is a reliable, predictable sort of a thing; we know when it’s going to happen … Imagine we have tidal energy, we have battery storage, so you can get a smooth roll across the full 24-hour cycle and the whole lunar cycle.”

This would allow Microsoft to do away with backup generators and rooms full of batteries. They could over-provision the tidal generation capacity to ensure reliability (13 tidal turbines instead of 10, for example). Mr. Cutler says, “You end up with a simpler system that’s purely renewable and has the smallest footprint possible.”
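A hedged sketch of why over-provisioning works, assuming independent turbine failures and a made-up 95% per-turbine availability (neither figure comes from Microsoft):

```python
# Probability that at least `needed` of `installed` turbines are available,
# assuming independent failures; the 95% per-turbine availability is an assumption.
from math import comb

def fleet_availability(installed: int, needed: int, p_up: float = 0.95) -> float:
    return sum(comb(installed, k) * p_up**k * (1 - p_up)**(installed - k)
               for k in range(needed, installed + 1))

print(f"10 turbines, all 10 needed:     {fleet_availability(10, 10):.3f}")  # ~0.60
print(f"13 turbines, any 10 sufficient: {fleet_availability(13, 10):.3f}")  # ~0.997
```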

The Northern Isles underwater data center is designed to run without being staffed, which cuts down on human errors. It is designed with a "fail in place" approach: failed components are not serviced, they are just left in place. Operations are monitored by artificial intelligence. Mr. Cutler said, "There's a lot of data showing that when people fix things they're also likely to cause some other problem."

Operating in "lights-out" mode with no human presence allows most of the oxygen and water vapor to be removed from Northern Isles' atmosphere. MSFT replaced the oxygen with 100% dry nitrogen. This environment should greatly cut the amount of corrosion in the equipment, a major problem in data centers on land. Mr. Cutler told DCK, "With the nitrogen atmosphere, the lack of oxygen, and the removal of some of the moisture is to get us to a better place with corrosion, so the problems with connectors and the like we think should be less."

The Redmond Magazine article says Project Natick's Phase 2 has already proved that it's possible to deploy an underwater data center in less than 90 days "from the factory to operation." The logistics of building underwater data centers are very different from building data centers on land. Northern Isles was manufactured via a standardized supply chain, not as a construction process. Mr. Cutler said, "Instead of a construction project, it's a manufactured item; it's manufactured in a factory just like the computers we put inside it, and now we use the standard logistical supply chain to ship those anywhere."

The data center is also more standardized: it was purposely built to the size of a standard ISO shipping container, so it can be shipped by truck, train, or ship. Naval Group shipped Northern Isles to Scotland on a flatbed truck. Mr. Cutler told DCK, "We think the structure is potentially simpler and more uniform than we have for data centers today … the expectation is there actually may be a cost advantage to this."

The rapid deployment of these data centers doesn't only mean expanding faster; it also means spending less capital. Mr. Cutler explained, "It takes us in some cases 18 months or two years to build new data centers … Imagine if instead … where I can rapidly get them anywhere in 90 days. Well, now my cost of capital is very different … As long as we're in this mode where we have exponential growth of web services and consequently data centers, that's enormous leverage."

rb-

If Project Natick stays on the same trajectory, MSFT could bring data centers to any place in the developed or developing world without adding more stress on local infrastructure. MSFT's Cutler told DCK, "There's no pressure on the electric grid, no pressure on the water supply, but we bring the cloud."

As more of the world’s population comes online, the need for data centers is going to skyrocket, and having a fast, green solution like this would prove remarkably useful.

Related article

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Ethernet Marches On

It has been a while since we talked about networking on the Bach Seat, so it is time to get back to my roots. Ethernet continues to dominate the world. The Institute of Electrical and Electronics Engineers (IEEE) 802.3 Ethernet Working Group, the group responsible for the Ethernet standard, recently ratified four new Ethernet-related standards. The committee approved IEEE 802.3bp, IEEE 802.3bq, IEEE 802.3br, and IEEE 802.3by.

IEEE 802.3br has implications for IoT and connected cars. This new standard addresses the needs of industrial control system manufacturers and the automotive market by specifying a pre-emption methodology for time-sensitive traffic. IEEE 802.3bp addresses how Ethernet operates in harsh environments found in automotive and industrial applications.

The two new standards more interesting to networkers are IEEE 802.3bq and IEEE 802.3by. These standards help define how 25 GbE and 40 GbE will work and, more importantly, how products from multiple vendors should interoperate in the data center. For a summary of the rationale for the new standards, see the IEEE presentation (PDF).

IEEE 802.3bq, "Standard for Ethernet Amendment: Physical Layer and Management Parameters for 25 Gb/s and 40 Gb/s Operation, Types 25GBASE-T and 40GBASE-T," opens the door to higher-speed 25 Gb/s and 40 Gb/s twisted-pair solutions with auto-negotiation capabilities and Energy Efficient Ethernet (EEE) support for data center applications.

IEEE 802.3by, “Standard for Ethernet Amendment: Media Access Control Parameters, Physical Layers, and Management Parameters for 25 Gb/s Operation”, introduces cost-optimized 25 Gb/s PHY specifications for single-lane server and switch interconnects for data centers.

Siemon's Standards Informant explains that 25GBASE-T will be backward-compatible with existing BASE-T technology, and both 25GBASE-T and 40GBASE-T are planned for operation over TIA Category 8 cabling. The deployment opportunity for 25GBASE-T is aligned with 40GBASE-T and defined as the same 2-connector, 30-meter-reach topology supporting data center edge connections (i.e., switch-to-server connections in row-based structured cabling or top-of-rack configurations).

The standard’s ratification comes shortly after the Telecommunications Industry Association (TIA) approved its standard specifications for Category 8 cabling, the twisted-pair type designed to support 25GBase-T and 40GBase-T.

Though 25 Gigabit Ethernet is only now becoming an official standard, Enterprise Networking Planet reports that multiple vendors already have products in the market. Among the early adopters of 25 GbE is Broadcom (AVGO), which announced back in 2014 that its StrataXGS Tomahawk silicon would support 25 GbE. In 2015, Arista (ANET) announced its lineup of 25 GbE switches. Cisco (CSCO) is also embedding 25 GbE support in some of its switches, including the Nexus 9516 switch.

That is where 25 Gb/s Ethernet comes in. It uses the same LC fiber cables, and its SFP28 transceiver modules are compatible with standard SFP+ modules. This means that data-center operators can upgrade from 10 GbE to 25 GbE using the existing installed optical cabling and get a 2.5X increase in performance.

The IEEE 25 GbE standard seems to have come out of nowhere (especially compared to the long, drawn-out 802.11n process), but the technology actually came into being as the natural single-lane version of the IEEE 802.3ba 100 Gb/s Ethernet standard. The 100 Gb/s Ethernet standard uses four separate 25 Gb/s lanes running in parallel, so defining a single lane makes 25 GbE a straightforward and natural subset of the 100 Gb/s standard.
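The lane arithmetic is easy to check; a small sketch using the figures above (the 48-port switch is just an assumed example, not a specific product):

```python
# 25 GbE is a single lane of the four-lane 100 GbE standard.
LANE_GBPS = 25
print(f"100 GbE = {100 // LANE_GBPS} x {LANE_GBPS} Gb/s lanes")

# Upgrading a server link from 10 GbE to 25 GbE over the same fiber:
print(f"Per-port gain: {25 / 10:.1f}x")

# Illustrative aggregate for a 48-port top-of-rack switch (assumed example):
ports = 48
print(f"{ports} x 10 GbE = {ports * 10} Gb/s vs. {ports} x 25 GbE = {ports * 25} Gb/s")
```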

rb-

IEEE P802.3by and P802.3bq were initially targeted at server connections in mega data centers like Amazon, Facebook, and Google. In the next five years, 25G will be the next mainstream server upgrade from 10G, even for smaller data centers. SMB data centers will be facing a connectivity crisis as the pace of virtualization increases.

According to IDC, the typical virtualized server supported about 10 virtual machines (VMs) in 2014 and will support in excess of 12 VMs by 2017. In many organizations, the majority of production workloads are already virtualized and almost all new workloads are deployed on virtualized infrastructure, placing inexorable stress on server connectivity.
In order to accommodate this growth, Twinax copper and short-reach MMF are included in the "by" standard, while 25GBASE-T (twisted pair) was added to the existing 40GBASE-T "bq" project, making 25G possible in smaller data centers without having to re-wire the data center.
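To see the connectivity squeeze, here is a quick sketch that divides a server's uplink evenly across IDC's roughly 12 VMs per host; the even split per VM is a simplifying assumption.

```python
# Rough per-VM share of a server uplink, assuming an even split across VMs.
vms_per_host = 12   # IDC's ~2017 figure cited above

for nic_gbps in (10, 25):
    print(f"{nic_gbps} GbE uplink -> ~{nic_gbps / vms_per_host:.2f} Gb/s per VM")
```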
Related articles

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Data Center in Space

Cloud computing is old technology. An LA-based start-up wants to move your data beyond the cloud. Cloud Constellation wants to store your data in space. The firm is planning to build a satellite-based data center that will have room for petabytes of data and may start orbiting Earth as early as 2019, according to Computerworld.

CEO Scott Sobhani told the author that Cloud Constellation is looking upward to give companies and governments direct access to their data from anywhere in the world. Its data centers on satellites would let users bypass the Internet and the thousands of miles of fiber their bits now have to traverse in order to circle the globe. And instead of just transporting data, the company's satellites would store it, too.

The article describes the pitch like this: data centers and cables on Earth are susceptible to hacking and to national regulations covering things like government access to information. They can also slow data down as it goes through switches and from one carrier to another, and all those carriers need to get paid.

Cloud Constellation's system, called SpaceBelt, would be a one-stop shop for data storage and transport. Need to set up a new international office? No need to call a local carrier or data-center operator. Cloud Constellation plans to sell capacity on SpaceBelt to cloud providers that could offer such services.

Security is another selling point. Data centers on satellites would be safe from disasters like earthquakes, tornadoes, and tsunamis. Internet-based hacks wouldn't directly threaten the SpaceBelt network. The system will use hardware-assisted encryption, and just to communicate with the satellites an intruder would need an advanced Earth station that couldn't simply be bought off the shelf, Mr. Sobhani told Computerworld.

Cloud Constellation's secret sauce is a technology it developed to cut the cost of all this from US$4 billion to about US$460 million, Sobhani said. The network would begin with eight or nine satellites and grow from there. Together, the linked satellites would form a computing cloud in space that could do things like transcode video as well as store bits. Each new generation of spacecraft would have more modern data center gear inside.


The company plans to store petabytes of data across this network of satellites. Computerworld points out that the SpaceBelt hardware would have to be certified for use in space. Hardware in space is more prone to bombardment by cosmic particles that can cause errors. Most computer gear in space today is more expensive and less advanced than what’s on the ground, satellite analyst Tim Farrar of TMF Associates said.

Taneja Group storage analyst Mike Matchett told the author that the idea of petabytes in space is not as far-fetched as it may sound. A petabyte can already fit on a few shelves in a data center rack, and each generation of storage gear packs more data into the same amount of space. This is likely to get even better before the first satellites are built.
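The "few shelves" claim is easy to sanity-check; the drive and enclosure sizes below are illustrative assumptions, not SpaceBelt specifications.

```python
# How much rack space might a petabyte take with commodity hardware?
import math

petabyte_tb = 1000        # 1 PB expressed in TB
drive_tb = 16             # assumed capacity of one commodity HDD
drives_per_shelf = 24     # assumed drives in a typical 2U enclosure

drives = math.ceil(petabyte_tb / drive_tb)
shelves = math.ceil(drives / drives_per_shelf)
print(f"~{drives} drives, or about {shelves} x 2U shelves ({shelves * 2}U of rack space)")
```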

But if you do put your data in space, don't expect it to float free from the laws of Earth. Under the United Nations Outer Space Treaty of 1967, the country where a satellite is registered still has jurisdiction over it after it's in space, said Michael Listner, an attorney and founder of Space Law & Policy Solutions. If Cloud Constellation's satellites are registered in the US, for example, the company will have to comply with subpoenas from the U.S. and other countries, he said.

And while the laws of physics are constant, the laws on Earth are unpredictable. For example, the US hasn't passed any laws that directly address data storage in orbit, but in 1990 it extended patents to space, said Frans von der Dunk, a professor of space law at the University of Nebraska. "Looking towards the future, that gap could always be filled."

rb-

On the Bach Seat, we have covered different approaches to data centers several times. These included manure, sewer gas, and used cars to power DCs, as well as proposed data centers underwater and at Kmart. This one, however, seems the most unusual, considering the start-up costs to build and launch satellites.

Related articles

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Is Your Data Center Underwater?

Every time you like something on Facebook, it causes a computer in a cloud data center somewhere in the world to do something. That computer uses electricity to let the world know you like the sleepy puppy video or what your dinner looked like.

As you may have noticed if you have ever left your laptop on your lap for too long, computers also produce heat. Facebook (FB), Twitter (TWTR), Instagram, and all the other time-wasters have millions of computers generating excess heat that needs to go somewhere. It is estimated that Facebook alone has hundreds of thousands of servers.

Keep servers cool

One of the ways to keep servers cool is to keep them wet. As counter-intuitive as that seems, there are companies that use liquid immersion to cool their servers, according to the Register. This approach uses data centers featuring large "baths" filled with a dielectric liquid into which racks of equipment are submerged.

Mineral oil has been used in immersion cooling before. Perhaps the best-known proponent of liquid immersion cooling is Green Revolution Cooling. Its CarnotJet system allows rack-mounted servers from any OEM to be dunked in special racked baths filled with a dielectric mineral oil blend called ElectroSafe (PDF), an electrical insulator the company claims has 1,200 times more heat capacity by volume than air.

Green Revolution Cooling claims cooling energy reductions of up to 95%, server power savings of 10-25%, data center build-out cost reductions of up to 60% through simplified architecture, and improved server performance and reliability as a result of less exposure to dust (and moisture).
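As a hedged illustration of what a 95% cut in cooling energy could mean for efficiency, here is a sketch; the baseline PUE and the share of overhead that goes to cooling are assumptions for illustration, not Green Revolution Cooling figures.

```python
# Illustrative effect of cutting cooling energy by 95% on facility PUE.
it_load_kw = 1000        # normalized IT load (illustrative)
baseline_pue = 1.6       # assumed conventional air-cooled facility
cooling_share = 0.8      # assumed share of the overhead that goes to cooling

overhead_kw = it_load_kw * (baseline_pue - 1)
cooling_kw = overhead_kw * cooling_share
new_overhead_kw = (overhead_kw - cooling_kw) + cooling_kw * 0.05
print(f"Baseline PUE: {baseline_pue:.2f}")
print(f"With 95% less cooling energy: PUE ~{1 + new_overhead_kw / it_load_kw:.2f}")
```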
Microsoft has taken this technology to the next level: it is experimenting with locating entire data centers underwater.

Microsoft underwater data center

Computerworld is reporting that Microsoft designed, built, and deployed its own sub-sea data center in the ocean in the period of about a year. The Redmond, WA firm started working on the project in late 2014. Microsoft employee Sean James, who served on a U.S. Navy submarine, had submitted a paper on the concept.

The eight-foot-diameter steel prototype vessel, named after the Halo character Leona Philpot, operated 30 feet underwater on the Pacific Ocean seafloor, about 1 kilometer off the California coast near San Luis Obispo, for 105 days from August to November 2015, according to Microsoft. Microsoft engineers remotely controlled the data center and even ran commercial data-processing projects from Microsoft's Azure cloud computing service in the submerged data center.

The sub-sea data center experiment, called Project Natick after a town in Massachusetts, is in the research stage, and Microsoft warns it is "still early days" when it comes to evaluating whether the concept could be adopted by the company and other cloud service providers. Microsoft says,

Project Natick reflects Microsoft’s ongoing quest for cloud data center solutions that offer rapid provisioning, lower costs, high responsiveness, and are more environmentally sustainable.

Microsoft believes that undersea data centers can serve the 50% of people who live within 200 kilometers of the ocean. The company says that deployment in deep water offers "ready access to cooling, renewable power sources, and a controlled environment." Moreover, a data center can be deployed from start to finish in 90 days.

Microsoft is weighing coupling the data center with a turbine or a tidal energy system to generate electricity, according to the New York Times.

Environmental impact

A new trial is expected to begin next year, possibly near Florida or in Northern Europe, Microsoft engineers told the NYT.

Some users questioned whether an undersea data center could have an environmental impact, including heating up the water around the data center. But Microsoft claimed on its website that the project envisages data centers that would be totally recycled and would have zero emissions when co-located with offshore renewable energy sources. MSFT told Computerworld:

No waste products, whether due to the power generation, computers, or human maintainers are emitted into the environment … During our deployment of the Leona Philpot vessel, sea life in the local vicinity quickly adapted to the presence of the vessel.

rb-

I have covered some other alternative ways to deal with data centers on Bach Seat, including HP’s plans to use cow manure to generate electricity and Microsoft’s plan to use sewer gas to power a data center in Wyoming.

Underwater data centers are an attractive idea, but there are challenges. One concern is that saltwater could corrode the structures. This issue could be addressed by locating the data centers in the freshwater Great Lakes. The Great Lakes basin is projected to reach a population of about 65 million by 2025.

The region includes eight U.S. states (Illinois, Indiana, Michigan, Minnesota, New York, Ohio, Pennsylvania, and Wisconsin) and the Canadian provinces of Ontario and Quebec.

Related article

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.