Tag Archive for Terabit

What's a Petabit Network

It seems like only a couple of months ago that we were excited about fiber optic cable that twisted light to carry data at 1.6 Tbps per strand. Now a Petabit network is the new benchmark. U.K. and Japanese researchers mashed up software-defined networking (SDN) and multicore fiber to produce the first Petabit pipe, according to Kevin Fitchard at GigaOM. A Petabit is one quadrillion (1,000,000,000,000,000, or 10^15) bits, or one thousand Terabits.
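To put those units side by side, here is a quick back-of-envelope sketch (decimal SI units assumed):

```python
# Back-of-envelope unit comparison for the rates in this post (decimal SI units).
TBPS = 10**12   # bits per second in 1 Tbps
PBPS = 10**15   # bits per second in 1 Pbps

terabits_per_petabit = PBPS // TBPS
print(terabits_per_petabit)   # 1000 -- a Petabit is a thousand Terabits

# How many of those 1.6 Tbps twisted-light strands would one Petabit pipe replace?
strands = PBPS / (1.6 * TBPS)
print(strands)                # 625.0
```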

Petabit network uses multicore fibers

The researchers mashed up multicore fibers and SDN to make very high-speed networks programmable. GigaOM speculates this will allow carriers to adjust network capacity and latency to meet the needs of the traffic traveling over their networks. GigaOM explains that the fiber is unlike today's fiber, in which a single strand of glass, or core, carries a single beam of light. Multicore fiber is exactly what its name implies: multiple cores, each carrying a single core's worth of capacity over the same link. Professor Dimitra Simeonidou at the University of Bristol called current single-core fiber a capacity bottleneck.

Space Division Multiplexing

The multicore group, led by NICT and NTT in Japan, built a 450 km (280 mile) section of fiber optics using 12 cores in two rings, capable of transmitting 409 Tbps in either direction. That's 818 Tbps in total, which is within spitting distance of the seemingly mythical Petabit speeds, according to GigaOM. The MCF research relies on Space Division Multiplexing (SDM) provided by the multicore fibers.
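The headline figures check out arithmetically; a minimal sketch:

```python
# Sanity-checking the article's figures for the NICT/NTT multicore link.
per_direction_tbps = 409          # 12 cores, two rings, each direction
total_tbps = 2 * per_direction_tbps
print(total_tbps)                 # 818 Tbps in total, as stated

# "Spitting distance" from a Petabit:
shortfall_tbps = 1000 - total_tbps
print(shortfall_tbps)             # 182 Tbps short of 1 Pbps
```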

In order to control the massive bandwidth, a team from the High Performance Networks Group at the University of Bristol created an OpenFlow software-based control element to manage those enormous capacities. The Brits implemented an interface that dynamically configures the network nodes so that the network can more effectively deal with application-specific traffic requirements such as bandwidth and Quality of Transport.
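A toy sketch of the idea behind that control plane: a controller that hands out whole fiber cores to application flows based on requested bandwidth. This is purely illustrative; the class name, core count, and per-core capacity here are my assumptions, not the Bristol team's actual OpenFlow interface.

```python
import math

# Toy SDN-style controller: grant whole fiber cores to flows by bandwidth need.
# Illustrative only -- names and numbers are assumptions, not the real interface.
class CoreAllocator:
    def __init__(self, cores, core_capacity_tbps):
        self.free = list(range(cores))
        self.capacity = core_capacity_tbps
        self.flows = {}

    def request(self, flow_id, bandwidth_tbps):
        """Grant enough whole cores to carry the flow, or reject it."""
        needed = math.ceil(bandwidth_tbps / self.capacity)
        if needed > len(self.free):
            return None                       # reject: not enough spare cores
        grant, self.free = self.free[:needed], self.free[needed:]
        self.flows[flow_id] = grant
        return grant

ctrl = CoreAllocator(cores=12, core_capacity_tbps=34)  # ~409 Tbps / 12 cores
print(ctrl.request("video-cdn", 100))  # -> [0, 1, 2]: three cores granted
print(ctrl.request("backup", 400))     # -> None: needs 12 cores, only 9 free
```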

According to the researchers, this was the first time SDN had been used on a multicore network. The University of Bristol presser announcing the work says the technology will overcome critical capacity barriers that threaten the evolution of the Internet.

rb-

OK, so that is really, really, really fast. We also know from a 2011 New Scientist article that the total capacity of one of the world's busiest routes, between New York and Washington DC, is only a few Terabits per second. With bandwidth-hungry applications like cloud computing, social media, and video streaming continuously growing, network planners at firms like AT&T (T), Verizon (VZ), and the NSA must find new ways to grow their capacity.

Data center

Comcast (CMCSA) just finished a 1 Tbps network field trial on a production network between Ashburn, VA, and Charlotte, NC. Most likely the first place Pbps networking will be used is in the mega-data centers of the likes of Google (GOOG), Facebook (FB), or Microsoft (MSFT).


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

New Data Rate Speed Record

The BBC is reporting that researchers from the Karlsruhe Institute of Technology in Germany have set a new data rate record: 26 terabits per second down 50 km of optical fiber. Professor Wolfgang Freude, a co-author of the paper in Nature Photonics, told the BBC how they did it.

The trick is to use what is known as a “fast Fourier transform,” which separates a single laser beam into hundreds of colors and encodes data in each different color. Rather than using a separate laser for every color, Professor Freude and his colleagues worked out how to create comparable data rates using just one laser with exceedingly short pulses. Within these pulses are a number of discrete colors of light in what is known as a “frequency comb.” When the pulses are sent into an optical fiber, the different colors can mix together to create 325 different colors in total, each of which can be encoded with its own data stream, according to the article.

At the receiving end, the researchers implemented an optical fast Fourier transform to separate the data streams, based on the times at which the different parts of the beam arrive and at what intensity. The story says that stringing together all the data in the different colors then becomes the simpler problem of organizing data that essentially arrive at different times. The authors of the paper say the technique can be easily integrated into existing silicon photonics technology.
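Dividing the headline rate across the comb gives the per-color data rate; a quick check, assuming the full 26 Tbps is spread evenly over all 325 colors (decimal units):

```python
# Per-channel rate implied by the article's figures (even split assumed).
total_bps = 26e12        # 26 Tbit/s across the whole frequency comb
channels = 325           # colors reported in the article
per_channel_gbps = total_bps / channels / 1e9
print(per_channel_gbps)  # ~80 Gbit/s carried by each color
```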

Professor Freude told the BBC that the current design outperforms earlier approaches simply by moving all the time delays further apart, and that it is a technology that could be integrated onto a silicon chip, making it a better candidate for scaling up to commercial use.

rb-

So what does it mean to transfer 26 terabits per second over fiber optic cable? Reportedly, the contents of nearly 1,000 high-definition DVDs could be transmitted down an optical fiber in a second, or the entire Library of Congress collection could be sent in 10 seconds. Since the LOC already has a home in Washington DC, more likely uses of these new technologies will be applications like cloud computing, virtual reality, and 3-D high-definition TV.
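Working backwards from those comparisons gives a feel for the implied sizes. This is back-of-envelope only (decimal units; the per-disc and collection sizes below are what the claims imply, not measured figures):

```python
# What the 26 Tbps comparisons imply, in bytes (decimal units).
link_bps = 26e12
bytes_per_second = link_bps / 8
print(bytes_per_second / 1e12)    # 3.25 -- terabytes moved every second

# "Nearly 1,000 HD DVDs per second" implies roughly this per-disc size:
implied_dvd_gb = bytes_per_second / 1000 / 1e9
print(implied_dvd_gb)             # ~3.25 GB per disc

# "The LOC in 10 seconds" implies a collection of about:
implied_loc_tb = bytes_per_second * 10 / 1e12
print(implied_loc_tb)             # ~32.5 TB
```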

Just last year I wrote about Intel Corp.'s (INTC) efforts in this domain and noted that “1 terabit per second link could transfer the entire printed collection of the Library of Congress in 1.5 minutes.”


Terabit Ethernet Developing

Researchers at the University of California, Santa Barbara (UCSB) are working on the next evolution of Ethernet: Terabit Ethernet. UCSB Professor of Electrical and Computer Engineering Dan Blumenthal told LightReading that the goal of the recently created Terabit Optical Ethernet Center (TOEC) is to create Terabit Ethernet (TbE), which runs at 1 trillion bits per second, by 2015, and to follow it up with 100 Tbit/s Ethernet by 2020.

Professor Blumenthal explained to LightReading that he wants the TOEC and its partners to produce something the industry can use, not a one-time lab experiment that only works with duct tape and glue. “We’re not talking about lab hero experiments,” Blumenthal told LightReading. The real-world focus of TOEC has helped attract partners like Agilent Technologies Inc. (NYSE: A), Google (NASDAQ: GOOG), Intel Corp. (NASDAQ: INTC), Rockwell Collins Inc., and Verizon Communications Inc. (NYSE: VZ) to help with the research. I wrote about Intel’s Tbps efforts back in July.

Terabit Ethernet is hard

TOEC could probably use the help, because developing TbE looks like no simple task, according to LightReading. Bob Metcalfe, Ethernet’s creator and now a partner at Polaris Venture Partners, speculated two years ago that a terabit standard might require a rethinking of everything, even the fiber itself.

Based on current UCSB research, Professor Blumenthal speculates that TbE may include:

  • Photonic integrated circuits (PICs) are a must.
  • Coherent receivers, but at a scale well beyond what’s being used for 100Gbit/s Ethernet. A likely candidate is 1,024-QAM: quadrature amplitude modulation (QAM) transmitting 10 bits per symbol, a scheme likely to require 100GHz electronics.
  • To make that coherent receiver energy-efficient, TOEC is “trying to move a lot of what’s in the digital signal processor into the optics,” Blumenthal says.
  • New materials for fiber-optics aren’t out of the question. “We won’t start out with that, but it’ll move in that direction,” Blumenthal says.
  • Other items on the TOEC shopping list include optical phase-locked loops, new semiconductor optical amplifiers (SOAs), and methods for drastically lowering on-chip optical losses.
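The 1,024-QAM figure in the list above can be checked directly: QAM with M constellation points carries log2(M) bits per symbol, and the symbol rate needed for 1 Tbit/s falls out of that:

```python
import math

# 1,024-QAM packs log2(1024) bits into each transmitted symbol.
bits_per_symbol = int(math.log2(1024))
print(bits_per_symbol)                 # 10, as the list above states

# At 10 bits/symbol, a 1 Tbit/s stream needs a 100 GBaud symbol rate,
# which is why Blumenthal expects "100GHz electronics".
symbol_rate_gbaud = 1e12 / bits_per_symbol / 1e9
print(symbol_rate_gbaud)               # 100.0
```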

The questions go beyond the optical layer. Padding and frame delineation were added to 10Gbit/s and 100Gbit/s Ethernet to make operations more synchronous, Blumenthal pointed out. “Do we keep doing that? Or do we go purely asynchronous? We don’t know yet. … Once you put the word ‘Ethernet’ in there, it’s not about just transmission. It’s about being backward-compatible. That’s the beauty of Ethernet. We can’t lose that essence.”

rb-

The need for TbE is real (I first wrote about Intel’s TbE efforts here) and is being driven by video. More video is already riding over existing networks. “We’re going to need much faster networking to handle the explosion in Internet traffic and support new large-scale applications like cloud computing,” Professor Blumenthal told Physorg. Stuart Elby, Vice President of Network Architecture for Verizon, told Physorg, “Based on current traffic growth, it’s clear that 1 Terabit per second trunks will be needed in the near future.”

Facebook is already looking at TbE in its data centers. PCWorld reports that at the Ethernet Alliance’s Technology Exploration Forum, Donn Lee, a Facebook engineer, said, “… there is already a need for 1 terabit.” Facebook has so many servers, and those servers can process data so fast, that they could fill 64 Terabit Ethernet pipes in the backbone of one data center, Lee said.


Intel Shows TBps Connections

The EETimes reports that researchers at Intel Corp. (INTC) have demonstrated optical chips capable of scaling to terabit-per-second data transmission. Intel predicts the new silicon photonic chips will replace copper connections in everything from supercomputers to servers to PCs. The new chips can currently transmit data at 50 Gigabits per second (Gbps), which equates to transferring an HD movie every second.

“This milestone marks the beginning of silicon photonics in the high-volume marketplace, in applications from [high-performance computing] all the way down to the client PC,” said Mario Paniccia, director of Intel’s Photonics Technology Lab. “We see a clear development path from 50 Gbps today to a terabit in the future,” Mr. Paniccia told EETimes.

Intel says that optical connections could eventually replace the copper connections between systems, between boards in the same system, and even down to cores on the same board. Intel’s Paniccia estimated that the first commercial applications of silicon photonics will begin appearing in as little as five years, in data centers and supercomputer facilities.

The modulators required to encode optical information using signal waveguides and photodiodes are cast in silicon on custom chips designed by Intel. The transmitter chip uses Intel’s hybrid silicon laser technology, which bonds a small indium phosphide die to on-chip silicon waveguides, four of which are patterned into a connected optical laser. “We combined our silicon manufacturing techniques with our hybrid laser and demonstrated an integrated transmitter using four lasers, each operating at a different wavelength, and four silicon modulators, each operating at 12.5 Gbps, then combined them together into an aggregate 50 Gbps into the optical fiber,” said Paniccia.

The optical fiber output on the receiver chip is then filtered into separate colors and diverted by waveguides into four separate photodiodes, each of which receives one of the four separate 12.5-Gbps channels. In the future, Intel plans to add more lasers per chip and increase the number of channels. Intel believes that it can put 25 lasers on a single chip to produce 1 Tbps capability. It then hopes to commercialize the optical connection technology. Intel has been developing the technology since 2004.
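The wavelength-division arithmetic behind the demo, plus what scaling to 25 lasers would imply per channel (the per-laser rate for the 1 Tbps target is my simple division, not an Intel figure):

```python
# The demonstrated link: four WDM channels at 12.5 Gbps each.
lasers = 4
per_laser_gbps = 12.5
aggregate_gbps = lasers * per_laser_gbps
print(aggregate_gbps)            # 50.0, matching the demo

# Intel's stated path to 1 Tbps with 25 lasers implies each channel must
# also get faster (assumption: even split, no overhead):
per_laser_needed_gbps = 1000 / 25
print(per_laser_needed_gbps)     # 40.0 Gbps per laser
```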

Intel already has a 10-Gbps Light Peak chip, an optical cable technology aimed at reducing the number of port connections on a computer. The Silicon Photonics Link is different: Intel said Light Peak uses traditional optical devices, and scaling that approach beyond 10 Gbps speeds would be difficult.

rb-

For some perspective, the 1 terabit per second link could transfer the entire printed collection of the Library of Congress in 1.5 minutes.
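Working backwards from that claim gives the collection size it assumes (decimal units; a rough check, not a measurement of the actual LOC holdings):

```python
# What size collection fits through a 1 Tbps link in 1.5 minutes?
link_bps = 1e12
seconds = 1.5 * 60
implied_collection_tb = link_bps * seconds / 8 / 1e12
print(implied_collection_tb)     # 11.25 TB -- consistent with the
                                 # often-cited ~10 TB estimate for the
                                 # LOC's printed collection
```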

Intel is preaching high bandwidth and low cost with these chips. If Intel can deliver, it could change the nature of system design. Theoretically, these chips could allow system components to be spaced further apart without a performance hit. With these chips, a data center expansion could go down the hall instead of requiring a full redesign. It may now be cheaper to take the new gear to the available electrical panel rather than adding a new panel to the server room.

Intel’s Paniccia told VentureBeat that the accuracy of the data transfer is superb. So far, the link has been proven able to transfer data with no errors for 27 hours straight, which works out to more than four petabits of data without an error.
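A quick sanity check of that error-free run, assuming the full 50 Gbps rate was sustained for the entire 27 hours:

```python
# How much data is 27 error-free hours at 50 Gbps?
gbps = 50e9
seconds = 27 * 3600
total_bits = gbps * seconds
print(total_bits / 1e15)      # ~4.86 petabits
print(total_bits / 8 / 1e15)  # ~0.61 petabytes
```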

 


Terabit Ethernet

Over at The Register, there is an article heralding the coming of Terabit Ethernet. Apparently, researchers from Australia, China, and Denmark think they have opened the door to terabit-per-second Ethernet links, using multiplexed 10 Gbit/s data streams and small chalcogenide chips to demultiplex those streams at the receiving end.

In the paper, entertainingly entitled Breakthrough switching speed with an all-optical chalcogenide glass chip: 640 Gbit/s demultiplexing, the researchers describe how injecting multiple 10 Gbit/s data streams into optical cables is not a problem using existing optical technology (an electro-optic modulator per stream) and optical time-division multiplexing (OTDM).

Recombining the data streams

The obstacle has been recombining those separate data streams at the end of the link, and doing it fast enough. Despite the recent hype about 40 Gbit/s Ethernet, receiving and recombining these streams is a problem at output rates higher than 40 Gbit/s, according to the research paper published in Optics Express, Vol. 17, Issue 4, on February 16th.

Until now, the recombination has been carried out using photo-detectors that can operate at up to 40 Gbit/s or so, which limits a link to just four 10 Gbit/s streams. Achieving higher data rates this way means sending more parallel data streams down the cable and demultiplexing them (switching or recombining them into one data stream) faster still. This latest research uses waveguides just 5 cm long, made from chalcogenide glass, with switching speeds measured in femtoseconds, or quadrillionths of a second.

The researchers conclude that their test results confirm the enormous potential of chalcogenide-based waveguides for ultrafast optical signal processing.

They believe their technology can be extended to demultiplex 100 10 Gbit/s data streams and so achieve terabit Ethernet capability. The article points out that commercialization of such technology, if it takes place at all, is many years away.
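The time-division idea itself is simple to illustrate: tributary streams take turns occupying successive time slots, and the demultiplexer recovers each one by sampling every N-th slot. A conceptual sketch (lists of bits standing in for optical pulses; this models the slot arithmetic only, not the photonics):

```python
# Conceptual OTDM: interleave tributaries slot by slot, then recover them.
def otdm_mux(streams):
    """Interleave equal-length bit streams, one slot per stream per round."""
    return [bit for slots in zip(*streams) for bit in slots]

def otdm_demux(line, n):
    """Recover the n tributaries by taking every n-th slot."""
    return [line[i::n] for i in range(n)]

tributaries = [[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 1, 1]]
line = otdm_mux(tributaries)
assert otdm_demux(line, 4) == tributaries   # round-trip recovers each stream

# The researchers' target: 100 tributaries at 10 Gbit/s each.
print(100 * 10)   # 1000 Gbit/s = 1 Tbit/s aggregate
```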

rb-

Seems like it’s time to add another synonym for huge to our vocabulary: petabyte, exabyte, zettabyte …

Some thoughts from Bob Metcalfe on TB Ethernet

 
