OptIPuter Featured in R&D Magazine

January 27th, 2006

Categories: Applications, Devices, Networking, Visualization

EVL Researcher Investigating Satellite Data Displayed on LambdaVision

About

Photonic Switches Put the Internet on Steroids
Richard Gaughan
Founder and Chief Engineer, Mountain Optical Systems Technology


Researchers around the world use the Internet to share data, collaborate on analyses, and publish results. But communication glitches that inconvenience the casual user disrupt scientific collaboration. Without reliable, fast, well-defined communication links, large data sets cannot be shared and scientific collaboration cannot prosper. The years since the birth of the Internet have seen tremendous advances in computing speed, and comparable growth in data collection: more instruments around the world are gathering more data. Bioscience and geoscience experiments can generate terabytes, even petabytes of data. Transmission speeds, however, have not kept pace.

Several years ago Larry Smarr, a professor in computer science and engineering at the Univ. of California, San Diego (UCSD), identified the problem, which he recently summarized: “There’s something deadly wrong with the infrastructure when the natural rate of the PC is so much higher than the bandwidth interconnecting them.” Smarr is now the principal investigator for the OptIPuter, a five-year National Science Foundation (NSF) project named for its use of optical networking, Internet protocol, computer storage, processing, and visualization technologies.

The intent of the OptIPuter project is to employ the latest commercially available technology in coordination with existing resources to take advantage of the high bandwidth capabilities inherent in optical fiber.

Share and share alike
Anyone who has connected to a remote Web site has been the victim of a clever illusion. It appears as if a circuit has been established directly between the user’s computer and the Web site’s home computer. That’s not the case. The user’s computer has broken the outgoing data into packets according to the transmission control protocol (TCP). Internet protocol (IP) information is added to each TCP packet, so the resulting TCP/IP packet carries the destination’s Internet address. When a packet leaves a user’s computer, it is sent to a router, which examines the IP address, looks at paths to the destination, and selects an appropriate one. That path leads to the next router down the line, which then determines which path to use. Each router makes a decision for each packet. Because the path depends on how busy a particular route is, two packets sent one after another from a user’s computer may take different paths to reach the destination. All of those decisions are transparent to the user.
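
To make those per-hop decisions concrete, here is a toy Python sketch of hop-by-hop forwarding. The topology and the random choice of next hop are purely illustrative stand-ins for congestion-based routing, not actual router logic and not anything from the OptIPuter project.

```python
import random

# Toy hop-by-hop forwarding sketch (illustrative only, not real router logic).
# Each router independently picks a next hop, so two packets sent back to back
# from the same computer can reach the destination over different paths.

TOPOLOGY = {                        # hypothetical network: node -> possible next hops
    "user": ["routerA", "routerB"],
    "routerA": ["routerC", "routerD"],
    "routerB": ["routerC", "routerD"],
    "routerC": ["server"],
    "routerD": ["server"],
}

def route_packet(src, dst):
    """Forward one packet hop by hop, choosing a next hop at each router."""
    path, node = [src], src
    while node != dst:
        node = random.choice(TOPOLOGY[node])   # stand-in for a congestion-based choice
        path.append(node)
    return path

for i in range(2):
    print(f"packet {i}: {' -> '.join(route_packet('user', 'server'))}")
# The two packets may well print different paths, as described above.
```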

TCP/IP is an excellent method of allocating telecommunications links fairly, but the sharing of data paths and the dynamic rerouting decisions made by the routers along the way are also responsible for variations in data rate, as the software detects network congestion and backs off.
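
That “backing off” has a characteristic shape: TCP grows its sending rate until it senses congestion, then cuts it sharply. The loop below is a simplified additive-increase / multiplicative-decrease sketch, not a real TCP implementation; the loss probability is an arbitrary assumption chosen for illustration.

```python
import random

# Simplified additive-increase / multiplicative-decrease (AIMD) loop, the
# behavior behind TCP "backing off" under congestion. Not a real TCP stack;
# the loss probability below is an arbitrary assumption for illustration.

window = 1.0                                  # congestion window, arbitrary units
for rtt in range(12):
    congested = random.random() < 0.25        # stand-in for detecting a lost packet
    if congested:
        window = max(1.0, window / 2)         # multiplicative decrease: back off
    else:
        window += 1.0                         # additive increase: probe for bandwidth
    note = "(backed off)" if congested else ""
    print(f"round {rtt:2d}: window = {window:4.1f} {note}")
# The resulting sawtooth is the data-rate variation described above.
```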

This equitable sharing also means that users rarely approach their optimum data transmission bandwidth. The problem? It’s difficult to optimize the speed of a link if there’s no way of determining what path the data will take.

Mine, all mine
So the first step in optimizing the speed of a data link is to put the selection of the data path in the hands of the user. In the same way that users schedule time at high-performance computing facilities, OptIPuter users schedule their telecommunications links. From storage to processor to display and visualization, the wavelengths and fibers are allocated to a specific user for the duration of a task. This “deterministic” link is predictable, with a specific, constant latency.

Wavelength division multiplexing (WDM) puts multiple users’ data on separate, distinguishable wavelengths of light in a single fiber. Philip Papadopoulos, the program director for grid and cluster computing at the San Diego Supercomputer Center, says “circuit-style connections were not practical without WDM. Now, with the rapid price decline of fixed-wavelength lasers, the notion of putting eight to ten 10-Gbps channels into a computing facility is becoming economically feasible.” So the selection and combination of wavelengths can be put under users’ control directly from their computers.
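
A rough way to picture the resulting control model is a reservation table: each fiber carries a handful of 10-Gbps wavelengths, and a session holds one of them end to end for its entire run. The Python sketch below is a hypothetical illustration of that bookkeeping, with invented names; it is not the OptIPuter’s actual control software.

```python
# Hypothetical sketch of wavelength reservation on a WDM link; the class and
# session names are invented for illustration, not OptIPuter control software.

class WDMFiber:
    def __init__(self, n_wavelengths=8):
        self.free = set(range(n_wavelengths))    # e.g. eight 10-Gbps channels
        self.allocated = {}                      # wavelength index -> session name

    def reserve(self, session):
        """Dedicate one wavelength to a session for its whole duration."""
        if not self.free:
            raise RuntimeError("no free wavelengths on this fiber")
        wavelength = self.free.pop()
        self.allocated[wavelength] = session
        return wavelength

    def release(self, wavelength):
        self.allocated.pop(wavelength, None)
        self.free.add(wavelength)

fiber = WDMFiber()
channel = fiber.reserve("storage-to-visualization transfer")
print(f"session holds wavelength {channel} until it finishes")   # no per-packet rerouting
fiber.release(channel)
```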

The next step is to select a specific optical fiber down which to send the data. Here, another economic development was key. Maxine Brown, the OptIPuter project manager, says “what helped us was the telecom bust. Companies had lots of fiber in the ground, and no use for it.” The OptIPuter needed a pathway selection method that didn’t suffer from the variation inherent with the smart routers of the existing grid.

Enter the optical switch
Routers in the existing Internet grid must convert optical data to electronic form to make pathway decisions based on the data they receive. But with deterministic routing chosen by the user, it is no longer necessary to make the electronic conversion. This, in turn, opens up the possibility of using an all-optical switch.

Glimmerglass, Hayward, Calif., has supplied its MEMS-based optical switches to two key components of the OptIPuter network: the Electronic Visualization Laboratory at the Univ. of Illinois at Chicago (UIC) and the San Diego Supercomputer Center at UCSD. The Glimmerglass switch couples light from an input fiber array through a collimating lens array to a MEMS mirror array. The light reflected off the individual MEMS mirrors is directed toward the output lens array and the associated output fiber array.

Each of the three arrays is fabricated from silicon using deep reactive ion etching. The fiber array is constructed by boring 125-μm-diameter holes in a precise pattern in a silicon block; bare fiber ends are inserted through the holes, and the block is then polished. The lens array is constructed by varying the etch depth with the radius away from each lens center. Finally, the 1-mm-diameter MEMS mirror array uses a four-flexure design, with the mirrors suspended 200 μm above an electrode array. The UCSD switch is a 128 x 128 device. The all-optical, free-space coupling technique is independent of traffic density, data format, and data rate, all of which can affect optical-to-electronic conversion. Alignment is maintained with closed-loop control, which picks off 1% of the output fiber power as input to the control system. The OptIPuter project has also integrated a similar MEMS switch developed by Calient, San Jose, Calif.
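
Functionally, a switch like this behaves as an N x N crossbar: each input fiber’s mirror is aimed at exactly one output fiber, and the light never touches electronics on the way through. The short Python model below captures only that connection bookkeeping, under assumed names; it is not a driver for the Glimmerglass or Calient hardware.

```python
# Toy model of an N x N all-optical crossbar (assumed names, illustration only;
# not a driver for the Glimmerglass or Calient hardware).

class OpticalCrossbar:
    def __init__(self, n=128):                    # the UCSD switch is 128 x 128
        self.n = n
        self.connections = {}                     # input port -> output port

    def connect(self, inp, out):
        """Aim the mirror pair so input fiber `inp` illuminates output fiber `out`."""
        if out in self.connections.values():
            raise ValueError(f"output {out} is already in use")
        self.connections[inp] = out

    def route(self, inp, signal):
        """The light passes through untouched: format and data rate do not matter."""
        return self.connections[inp], signal

switch = OpticalCrossbar()
switch.connect(inp=5, out=42)
print(switch.route(5, "10-Gbps stream"))          # -> (42, '10-Gbps stream')
```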

Papadopoulos also expressed interest in the optical phased array (OPA) switch developed by Chiaro, Richardson, Texas. Chiaro’s switch, like Calient’s and Glimmerglass’, also has input and output arrays of fibers coupled through free space, but rather than using a mechanical switch, Chiaro steers individual output beams by varying the phase across the beam. Light from each input fiber is sent through 128 parallel GaAs waveguides. If no voltage is applied, light exits each of the waveguides at the same time and the beam is undeflected. If voltage is applied to a waveguide, the phase of the wave within it is retarded, and that section of the beam exits its guide after the others. By applying voltage in a specified pattern, an arbitrary deflection is introduced. With a 64 x 64 array, any one of the input fibers can be coupled to any one of the output fibers. The 30-nsec settling time of the OPA method is significantly faster than the roughly 100-msec settling time of the MEMS devices. “Although,” says Papadopoulos, “OptIPuter sessions tend to be longer than a minute at a minimum, and the performance of both methods is equivalent in that application.”
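
The steering principle is the same as in any phased array: a linear phase ramp across the waveguides tilts the emerging wavefront by an angle satisfying sin(theta) = (delta-phi x wavelength) / (2 pi x pitch). The numbers in the short calculation below, the telecom wavelength, waveguide pitch, and phase step, are assumptions chosen for illustration, not Chiaro’s published specifications.

```python
import math

# Back-of-the-envelope phased-array steering estimate. The wavelength, pitch,
# and phase step are assumed illustrative values, not Chiaro's specifications.
#     sin(theta) = (delta_phi * wavelength) / (2 * pi * pitch)

wavelength = 1.55e-6      # meters: typical telecom wavelength (assumption)
pitch = 10e-6             # meters: assumed spacing between adjacent waveguides
delta_phi = math.pi / 8   # assumed phase step between neighboring waveguides

theta = math.degrees(math.asin(delta_phi * wavelength / (2 * math.pi * pitch)))
print(f"deflection angle ~ {theta:.2f} degrees")  # a steeper phase ramp steers further
```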

Supernet emergence
With 1- to 10-Gbps network interface cards, local fiber networking, WDM, and optical-switch access to long-haul fiber, available bandwidth exploded. That’s also the metaphor Brown uses to describe the new vision of distributed computing: “The OptIPuter structure is as if you took a PC and exploded it, then connected the components with high-bandwidth fiber.” Scientific instruments send data to storage components, the storage units send data to the processors, and the processors send it to displays. Not only can each component be geographically separated from the others, but the sheer volume of data flowing among them is difficult to comprehend. That’s one reason the OptIPuter project has also developed visualization technologies, capped by the LambdaVision facility at UIC’s Electronic Visualization Laboratory.

LambdaVision is a 5.1 x 1.8 m display wall composed of an 11 x 5 array of LCD panels, each 1,600 x 1,200 pixels, for a total of 105 million pixels. Airborne imagery, microscope images, and seismic data can all be collected, analyzed, and displayed, combined with other data sets, and correlated at a scale that was unimaginable just a few years ago.
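
The pixel count is easy to verify: fifty-five panels at 1,600 x 1,200 pixels each.

```python
# Quick arithmetic check of the LambdaVision pixel count quoted above.
tiles_wide, tiles_high = 11, 5
pixels_per_tile = 1600 * 1200                 # resolution of each LCD panel

total_pixels = tiles_wide * tiles_high * pixels_per_tile
print(f"{total_pixels:,} pixels")             # 105,600,000 -> about 105 million
```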

Although the OptIPuter researchers always make a point of emphasizing that the program is a research project, it is difficult to imagine that its enhanced operational and collaborative capabilities will not be duplicated as photonic technology continues to be deployed.