OFC/NFOEC features breakthroughs in next-generation Ethernet, metamaterials, networks

Published: Tuesday, March 17, 2009 - 15:51 in Physics & Chemistry

The world's largest international conference on optical communications begins next week, running March 22-26 at the San Diego Convention Center in San Diego. The Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC) is the premier meeting where experts from industry and academia meet to share their results, experiences, and insights on the future of electronic and wireless communication and the optical technologies that will enable it. Journalists are invited to attend the meeting, where more than 15,000 attendees are expected. This year's lineup includes many engaging talks and panels:

  • MARKET WATCH, a three-day series of presentations and panel discussions featuring esteemed guest speakers from the industrial, research, and investment communities on the applications and business of optical communications. See: http://www.ofcnfoec.org/conference_program/Market_Watch.aspx.
  • PLENARY PRESENTATIONS: "The Changing Landscape in Optical Communications," Philippe Morin, president, Metro Ethernet Networks; "Getting the Network the World Needs," Lawrence Lessig, professor, Stanford Law School; "The Growth of Fiber Networks in India," Shri Kuldeep Goyal, chairman and managing director, Bharat Sanchar Nigam Ltd. To access speaker bios and talk abstracts, see: http://www.ofcnfoec.org/conference_program/Plenary.aspx.
  • SERVICE PROVIDER SUMMIT, a dynamic program with topics and speakers of interest to CTOs, network architects, network designers, and technologists within the service provider and carrier sector. See: http://www.ofcnfoec.org/conference_program/Service_Provider_Summit.aspx.

The OFC/NFOEC Web site is http://www.ofcnfoec.org. Also on the site is information on the trade show and exposition, where the latest in optical technology from more than 550 of the industry's key companies will be on display.

SCIENTIFIC HIGHLIGHTS

The conference also features a comprehensive technical program with talks covering the latest research related to all aspects of optical communication. Some of the highlights at OFC/NFOEC 2009 include the following.

A simpler receiver to cut costs of tomorrow's Internet

Upgrading broadband networks to handle the Internet traffic of the future is proving to be a big, expensive job. Engineers at NEC Laboratories America in Princeton, N.J., are hoping to cut costs by simplifying the hybrid optical/electronic receiver that sorts out the flood of data at the end of the optical fiber.

In recent years, telecommunications giants like Verizon and AT&T have begun to beef up their broadband networks from 10 gigabits per second (10G) to 40G, enough bandwidth to broadcast 1,000 Hollywood movies simultaneously. Achieving these high speeds requires data to be "multiplexed," a process in which a single high-speed stream is split into several slower streams using a multi-level modulation format and polarization multiplexing technologies. The slower data streams are received and reassembled at the end of the line by an expensive device called a coherent receiver.
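
As a rough sketch of the arithmetic (assuming, for illustration, two multiplexed polarizations and a format carrying two bits per symbol – not necessarily the exact scheme any one carrier deploys), the electronics only need to run at a quarter of the line rate:

\[
R_{\mathrm{symbol}} = \frac{R_{\mathrm{bit}}}{N_{\mathrm{pol}} \times b} = \frac{40\ \mathrm{Gb/s}}{2 \times 2\ \mathrm{bits/symbol}} = 10\ \mathrm{Gbaud}.
\]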

To cut costs, Dayou Qian and colleagues at NEC have eliminated expensive lasers and other components from the conventional coherent receiver. Their simpler "direct-detection receiver" relies instead on a narrow optical carrier signal that travels with the data along the fiber; in the receiver, this carrier regenerates the electrical signal and helps the digital signal processing algorithm piece the cut-up data back together.
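
The underlying trick is the photodiode's square-law response: because the carrier and the data field mix on the detector, the photocurrent contains a beat term from which the signal can be recovered. In simplified textbook notation (ours, not NEC's),

\[
I(t) \propto |E_c + E_s(t)|^2 = |E_c|^2 + |E_s(t)|^2 + 2\,\mathrm{Re}\{E_c^* E_s(t)\},
\]

where E_c is the carrier field, E_s(t) is the data field, and the final cross term is what the digital signal processing works from.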

The drawback to this approach is that the extra carrier signal consumes some of the transmission power, reducing the signal-to-noise ratio of the data in the line. Still, in data presented at OFC/NFOEC 2009 in San Diego, the researchers report using the device to reliably detect signals sent at 40G. They hope this lower-cost technology will pave the way for cheaper, higher-speed upgrades to metropolitan-area and access/office networks.

Talk OTuO7, "22.4-Gb/s OFDM Transmission over 1000 km SSMF Using Polarization Multiplexing with Direct Detection" (Tuesday, March 24, 6:15 - 6:30 p.m.)


New standards released to guide evolution of Ethernet

For more than 30 years, computer networks running on Ethernet standards have allowed co-workers to send data back and forth in the office. But as data volumes grow exponentially, companies like Google and Facebook are realizing that today's 10G pipes will need to be much bigger in the future. At this year's OFC/NFOEC, John D'Ambrosia, chair of the IEEE P802.3ba 40 Gb/s and 100 Gb/s Ethernet Task Force and a scientist at Force10 Networks, will provide an update on the new physical layer specifications for tomorrow's Ethernet.

The IEEE 802.3 Working Group formed the Higher Speed Study Group – a collection of server managers, network engineers, and companies that use Ethernet – in July 2006. One goal of the study group was to decide how fast the Ethernet of the future will need to be. Two speeds have been selected: 40G for computing applications and 100G for core networking applications, whose bandwidth requirements are growing even faster. This dual standard marks a break from the tradition of simply increasing Ethernet speed standards by a factor of 10.

The IEEE P802.3ba 40 Gb/s and 100 Gb/s Ethernet Task Force was formed in December 2007, and in March 2009 its draft standard was approved for Working Group ballot. If balloting proceeds on schedule, the next steps are a Sponsor Ballot in November 2009, with approval of the standard in June 2010.

An overview of the architecture and physical layer standards will be presented at OFC/NFOEC. As for how long it will take new hardware meeting these specifications to find its way into offices, "that's impossible to predict, but the market forces will work through the issues," says D'Ambrosia.

Talk NTuA4, "The Continuing Evolution of Ethernet" (Tuesday, March 24, 3:20 – 4 p.m.)


New routers marry light and silicon to cut down on power and ramp up speed

Tomorrow's ultra-fast broadband may be limited not by the speed at which data can be sent, but by the electrical power needed to route data to millions of users. A new technology that weds light and silicon aims to sustain the massive connectivity of a faster Internet by cutting down on its power consumption.

To send a single stream of data to many computers, networks have to "multicast," sending out multiple copies of a single input signal carried by an optical fiber. With electronic switching, this requires converting optical data into digital electronic data, making copies in the electronic domain, and converting the copies back into optical data. Electronic multicasters need a great deal of power to do this, and that demand rises rapidly as transmission speeds increase – an energy bottleneck for the industry.

To solve this problem, a team of researchers at Columbia University and Cornell University has built a purely optical device that cuts out the energy-hungry electronic middleman. Using a process called "four-wave mixing," they employ a pulsing laser to clone the light coming in from an optical fiber into eight identical outgoing waves. This all happens in silicon – one of the most efficient materials for this process – directly embedded on a computer chip. And though the multicasting itself requires no electronics, other electronic components, like switches, could be installed on the chip to modify the signal as it passes through.
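
In its simplest (degenerate) form, four-wave mixing obeys an energy-conservation relation – a textbook sketch, not the team's full multicasting scheme:

\[
\omega_{\mathrm{idler}} = 2\,\omega_{\mathrm{pump}} - \omega_{\mathrm{signal}},
\]

meaning two pump photons are converted into a new "idler" photon that carries a copy of the signal's data at a new wavelength; several pumps at once yield several simultaneous copies.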

The device can handle speeds of more than 160G and draws several orders of magnitude less power than current electronic devices. "We're looking ahead to next-generation networks that will run at terabits per second," says Keren Bergman of Columbia University. "You just can't do that kind of multicasting in electronics."

Talk OTuI3, "First Demonstration of On-Chip Wavelength Multicasting" (Tuesday, March 24, 6 - 6:15 p.m.)


Cost-effective solutions for ever-increasing speeds

The broadband market has traditionally been like a sponge – as it becomes saturated, it expands. The advent of high-speed broadband and its associated streams of video, music, and voice in the last few years has only increased demand for greater bandwidth and ever-higher speeds. The need for speed shows no signs of diminishing, says Jeffrey H. Sinsky, a scientist at Bell Labs, the research arm of Alcatel-Lucent. As a result, the need for new, cost-effective technologies that can handle the higher speeds has become acute.

Some of the fastest transfer rates ever achieved in the laboratory top out above 100 billion bits per second (100G) – enough speed to copy the contents of a typical personal computer hard drive in well under a minute. Moving a signal over an optical fiber at speeds above 100G is challenging but achievable. The problem comes when the optical signals need to be converted into electrical signals. Moving electrical signals at anything above 40G creates what are known as distributed circuit effects, which can confound transmission and lead to a loss of data. One way to deal with this problem is to break optical signals into several lower-rate, more easily handled parallel optical data streams, but this may require adding many extra components, which complicates packaging and inflates costs.
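
For scale, the unit conversion is simple (illustrative arithmetic only):

\[
100\ \mathrm{Gb/s} = \frac{100 \times 10^{9}\ \mathrm{bits/s}}{8\ \mathrm{bits/byte}} = 12.5\ \mathrm{gigabytes\ per\ second}.
\]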

With the right know-how and with emerging technology, dealing with ever-increasing speeds can be done cost-effectively, says Sinsky, who is an expert in integrating optical components with electronic ones into small packages. In his talk, he will outline his designs for small integrated hybrid systems that can convert high-rate optical data streams into several lower-rate electrical streams that are easier to handle – and cost-effective even at state-of-the-art 100G speeds.

The demand for commercial Ethernet at these speeds is inevitable, says Sinsky. With new and even more bandwidth-hungry applications such as 3DTV and telepresence just on the horizon, it is clear that what is considered adequate network capacity today will fall far short of tomorrow's demand. In anticipation, researchers are already beginning to explore so-called "terabit" Ethernet, which would run at speeds of 1,000G.

Talk OThN6, "Integration and Packaging of Devices for 100-Gb/s Transmission" (Thursday, March 26, 2:30 - 3:00 p.m.)


Reliable multiplexing at 640G

Sliced light is how we communicate now. Millions of phone calls and cable television shows are dispatched every second through fibers in the form of digital zeros and ones, formed by chopping laser pulses into bits. This slicing and dicing is generally done with an "electro-optic modulator," a device that lets an electric signal switch a laser beam on and off at high speed (the equivalent of putting a hand in front of a flashlight). Reliably reading that fast data stream is another matter. A new reliable speed-reading record – 640 billion bits per second – has now been established by a collaboration of scientists from Denmark and Australia.

Conventional readers of optical data depend on photodetectors, electronic devices that can operate at up to approximately 40G. That is in itself a great feat of rapid reading, but it's not good enough for the higher-rate data streams being designed now; the receiving rate has to keep up. To speed up transmission, several signals – each with its own stream of coded data – are often sent down an optical fiber at the same time, a process called multiplexing. Ten parallel streams of 10 billion bits per second (10G) would add up to an effective stream of 100G. At the receiving end, the parallel signals have to be read out in a complementary process called de-multiplexing. Reliable and fast multiplexing and de-multiplexing represent a major bottleneck in linking the electronic and photonic worlds.
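
As an illustration of the principle only – a toy software model of bit interleaving, nothing like the optical hardware involved – multiplexing and de-multiplexing are complementary interleave and de-interleave operations:

    # Toy model of time-division multiplexing: interleave n slow bit
    # streams into one fast stream, then recover them at the far end.
    # Real optical systems interleave pulses of light, not integers.

    def multiplex(tributaries):
        """Interleave equal-length bit streams into one fast stream."""
        return [bit for frame in zip(*tributaries) for bit in frame]

    def demultiplex(stream, n):
        """Recover n tributary streams by taking every n-th bit."""
        return [stream[i::n] for i in range(n)]

    slow = [[1, 0, 1], [0, 0, 1], [1, 1, 0]]  # three tributaries
    fast = multiplex(slow)                    # one stream at 3x the rate
    assert demultiplex(fast, 3) == slow       # de-mux recovers the inputs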

The new de-multiplexing device demonstrated at the Technical University of Denmark, by contrast, can handle the high data rate, and does so in a stable manner. Furthermore, where earlier approaches needed optical fibers some 50 meters long, the Danish researchers accomplish their untangling of the data stream with a waveguide only 5 cm long – an innovation developed by Danish scientist Leif K. Oxenlowe and his colleagues at the Technical University and at the Centre for Ultrahigh Bandwidth Devices for Optical Systems (CUDOS) in Australia.

Talk OThF3, "640 Gbit/s Optical Signal Processing" (Thursday, March 26, 8:30 - 9 a.m.)


Red-light metamaterials

Metamaterials make it possible to manipulate light on the nanoscale. They are nanostructured materials made of tiny metallic rings, rods, or strips arranged in such a way as to produce a negative index of refraction – a situation unique to metamaterials in which incoming light is bent to the opposite side of the normal (an imaginary line perpendicular to the boundary between air and the material) from where ordinary refraction would send it. This property in turn is expected to lead to novel optical devices, such as flat-panel lenses and hyperlenses. These lenses can image objects with a spatial resolution smaller than the wavelength of the illuminating light source, circumventing the normal "diffraction limit," which says that a lens cannot produce an image with a spatial resolution better than approximately half the wavelength of the light used to make the image.
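
In textbook shorthand (a general summary, not specific to the Purdue designs), refraction obeys Snell's law,

\[
n_1 \sin\theta_1 = n_2 \sin\theta_2,
\]

so a negative n_2 forces the refracted beam to the same side of the normal as the incident one. And the diffraction limit being circumvented is, in its simplest form, \(d \approx \lambda/2\): for the 710-nm red light reported here, roughly 355 nm.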

Alexander Kildishev will report on optical metamaterial progress at Purdue, including the shortest wavelength (710 nm) yet achieved for a negative-index metamaterial and an improved design for a cylindrical hyperlens. One goal of this work is cloaking – rendering an object inside a metamaterial enclosure invisible to outside viewers – but Kildishev says achieving invisibility in the visible portion of the electromagnetic spectrum will be difficult. Nearer-term applications of metamaterials, he says, will likely be seen in microscopy, biosensing, and the harvesting of solar energy. (More information is available at http://cobweb.ecn.purdue.edu/~shalaev/MURI/overview.shtml.)

Talk OThK1, "Progress in Metamaterials for Optical Devices" (Thursday, March 26, 1 - 1:30 p.m.)


Optofluidic assembly of microlasers

One of the problems of marrying electronics and photonics is that they are embodied in very different elements. Many photonic components – such as modulators, detectors, switches, and waveguides – can be fashioned from silicon, but the light source itself, the microlaser, is usually assembled from elements residing in columns III and V of the periodic table, and these elements don't sit well on top of silicon. Ming-Chun Tien and Professor Ming Wu of the University of California, Berkeley will report on progress in their lab, where their team has made III-V microlasers (6 microns in diameter and only 200 nm thick) using a wet chemical etch process.

Once the microdisk lasers are formed, the substrate (the indium phosphide [InP] platform on which the indium gallium arsenide phosphide [InGaAsP] lasers were built) is etched away. The lasers are then floated in a mini-lake of ethanol (which accounts for the "fluidic" in optofluidic) and moved into position by a patterned array of light from a computer-controlled projector – a technique the researchers call optoelectronic tweezers (OET). An applied voltage holds each laser in place over optical waveguides defined in silicon until the attachment process is complete. Tien says the microlasers, costing less than 1 cent each, can be positioned on the chip with better-than-quarter-micron accuracy.

Talk OMR7, "Optofluidic Assembly of InGaAsP Microdisk Lasers on Si Photonic Circuits with Submicron Alignment Accuracy" (Monday, March 23, 5:45 - 6:00 p.m.)


First true optical packet switching

Supercomputers have the capacity to process immense amounts of data in parallel. Traditionally, computers have relied on electrons moving through wires, directed by electronic switches (relays in the early days, then vacuum tubes, then transistors). To support much higher data-flow rates, computers now resort to encoding data in light waves, and many high-end systems are necessarily opto-electronic hybrids. But how to switch and maneuver all that data? Optical switches are faster and consume less power than electronic switches, but optical systems don't do as good a job as their electronic counterparts when it comes to fast-access memory, and optical switching is more expensive and harder to control.

Two corporations, IBM (through its Zurich research lab) and Corning, have collaborated on a new opto-electronic system, called OSMOSIS (Optical Shared Memory Supercomputer Interconnect System), for processing packets of optical data in and among supercomputers. This hybrid approach uses electronic circuitry for data buffering and control operations and optical circuitry for switching and transmitting data.

The result, to be announced at OFC/NFOEC, is the first true high-capacity optical packet switching system. At a cost close to that of comparable electronic products, the OSMOSIS system can move data through 64 ports at a rate of 40G per port, for an overall data rate of 2.5 terabits per second. Other characteristics of the OSMOSIS architecture include a latency (the time between the sending and the processing of a data packet) of less than 250 nanoseconds and an optical switching time of 13 nanoseconds.
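
The headline capacity is straightforward port arithmetic:

\[
64\ \mathrm{ports} \times 40\ \mathrm{Gb/s} = 2{,}560\ \mathrm{Gb/s} \approx 2.5\ \mathrm{terabits\ per\ second}.
\]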

Talk OTuF3, "The Osmosis Optical Packet Switch for Supercomputers" (Tuesday, March 24, 2:45 - 3:15 p.m.)

Source: Optical Society of America
