
Tuesday, February 07, 2012

#CLOUD: "Smarter Net Boosts Tough Apps"

Compute-intensive applications that process network data on the fly are turning to a new genre of accelerator that harnesses field-programmable gate arrays (FPGAs).



When IT needs an application-specific performance boost, it usually upgrades the whole server, but a new genre of smarter server interface harnesses field-programmable gate arrays (FPGAs) to accelerate compute-intensive applications more efficiently.

What do stock market trading programs, automated video surveillance, global oil and gas exploration, and Internet cyber-security applications have in common? They all need speed when executing tasks such as parsing, filtering, sorting, encoding/decoding, and other compute-intensive operations on real-time data streams. IT's solution today is to increase the capability of the server, dedicate a core to the task, or buy an accelerator card that attaches to the server's main processor. But a new approach is to add an accelerated network interface card (NIC) that harnesses a field-programmable gate array (FPGA).

Solarflare's Application Onload Engine (AOE) is the world's first server adapter with a built-in field-programmable gate array (FPGA) for accelerating compute-intensive operations on real-time data streams.
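
To make the workload concrete, here is a minimal software sketch, in C, of the sort of per-record filtering such applications run on a server core today; the record layout and the symbol field are invented for illustration, and an in-line FPGA would apply the same test in hardware as packets arrive.

    /* Illustrative only: a per-record filter of the kind IT runs in
       software today, and which an in-line FPGA could apply at wire
       speed. The "quote" layout and symbol field are invented. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct quote {                      /* hypothetical market-data record */
        char     symbol[8];
        uint64_t price_cents;
    };

    /* Keep only the records whose symbol matches; return the count kept. */
    static size_t filter_stream(const struct quote *in, size_t n,
                                const char *want, struct quote *out)
    {
        size_t kept = 0;
        for (size_t i = 0; i < n; i++)
            if (strncmp(in[i].symbol, want, sizeof in[i].symbol) == 0)
                out[kept++] = in[i];
        return kept;
    }

    int main(void)
    {
        struct quote feed[] = { {"IBM", 18452}, {"AAPL", 45601}, {"IBM", 18460} };
        struct quote hits[3];
        printf("kept %zu of 3 quotes\n", filter_stream(feed, 3, "IBM", hits));
        return 0;
    }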

FPGAs are hardware chips that can be reconfigured to execute a specific task. Unlike general-purpose accelerator chips, which run programmed algorithms, FPGAs route signals through arrays of high-speed transistors configured for the task rather than executing instructions. As a result, compute-intensive tasks that operate on real-time data streams can be accelerated more efficiently with an FPGA than with faster cores running software algorithms.

Programmed with a hardware-description language (HDL) rather than a conventional software language, an FPGA can be scorchingly fast at real-time data-stream processing, much faster than software. Today FPGAs handle digital-signal processing in NASA space probes, medical imagers, computer-vision systems, bioinformatics, radio astronomy, and all sorts of time-critical aerospace and military applications. But now IT is gaining access to the blinding speed of FPGAs by virtue of their integration into the NIC.
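
Before committing a design to an HDL, engineers often prototype such stream logic as a plain software reference model; the C sketch below is one rough illustration of that idea (it is not Solarflare's design flow): a byte-at-a-time framing stage that an FPGA would implement as a small block of wired logic handling one byte per clock.

    /* Rough C reference model of a byte-at-a-time framing stage (not
       Solarflare's design flow): scan an incoming byte stream and count
       message boundaries on a newline delimiter. On an FPGA the loop
       body becomes dedicated logic that handles one byte per clock. */
    #include <stdio.h>

    static size_t count_frames(const unsigned char *chunk, size_t len)
    {
        size_t frames = 0;
        for (size_t i = 0; i < len; i++)
            if (chunk[i] == '\n')       /* delimiter marks end of a message */
                frames++;
        return frames;
    }

    int main(void)
    {
        const unsigned char feed[] = "ORDER 100\nCANCEL 7\nORDER 250\n";
        printf("complete messages: %zu\n", count_frames(feed, sizeof feed - 1));
        return 0;
    }
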
"Applications that require super high-speed on-the-fly processing of data streams can achieve an unparalleled boost in performance by turning to FPGAs," said Mike Smith, vice president and general manager of host solutions at Solarflare Communications Inc. (Irvine, Calif.).

Solarflare already sells the world's fastest NICs, offered as options by several high-performance server manufacturers. These companies use the NICs because their application-specific integrated circuits (ASICs) lower latency from 10 to 15 microseconds down to 2 microseconds over 10 Gigabit Ethernet. As a result, Solarflare's NICs are used by the New York Stock Exchange (NYSE), the National Association of Securities Dealers Automated Quotations (NASDAQ), the Chicago Board Options Exchange (CBOE), and other enterprises whose success depends on the high-speed processing of real-time data streams.
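
Latency figures like these are typically established with round-trip measurements between two hosts. The generic C sketch below, which is not Solarflare's own benchmark and uses a placeholder address, shows the basic technique: time a small datagram against a monotonic clock.

    /* Generic round-trip latency probe over UDP; not Solarflare's own
       benchmark, and the destination address is a placeholder. Times a
       small datagram sent to a UDP echo service. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(7);                          /* assumes a UDP echo service */
        inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);  /* placeholder address */

        char buf[64] = "ping";
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        sendto(fd, buf, sizeof buf, 0, (struct sockaddr *)&dst, sizeof dst);
        recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);     /* blocks until the echo returns */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("round trip: %.1f microseconds\n", us);
        close(fd);
        return 0;
    }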

The company's new FPGA-based NIC, the Application Onload Engine (AOE), promises to open up a new market in real-time data-stream processing by lowering latency below one microsecond and by performing compute-intensive tasks during the last step before data transmission. For instance, in video surveillance applications the AOE eliminates the need for the main processor to compress video before storing it, since the NIC performs the compression on the fly.
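
One way to picture that offload: with compression done on the card, the host's job in the surveillance example reduces to copying already-compressed data to storage. The C sketch below is purely illustrative; the input file stands in for the card's output, and it is not the AOE's actual interface.

    /* Purely illustrative: host-side loop when compression happens on the
       adapter. The input file stands in for the card's compressed output;
       this is not Solarflare's actual AOE interface. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int src = open("compressed_frames.bin", O_RDONLY);   /* stand-in source */
        int dst = open("archive.bin", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (src < 0 || dst < 0) { perror("open"); return 1; }

        char buf[65536];
        ssize_t n;
        while ((n = read(src, buf, sizeof buf)) > 0)          /* no CPU-side encode step */
            if (write(dst, buf, (size_t)n) != n) { perror("write"); return 1; }

        close(src);
        close(dst);
        return 0;
    }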

Solarflare is providing a variety of application-specific models of its AOE for different market segments, such as financial transaction processing and oil-and-gas exploration, but it will also supply a software toolkit and API with which IT can roll its own custom FPGA configurations.
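
The article gives no detail of that toolkit, so the following is an entirely hypothetical C sketch of what a roll-your-own workflow could look like, with invented function names and stub bodies standing in for whatever the real API exposes: load a custom FPGA configuration, then bind it to a traffic stream.

    /* Entirely hypothetical: invented function names and stub bodies stand
       in for whatever Solarflare's real toolkit exposes. The shape of the
       workflow is: load a custom FPGA image, then bind it to traffic. */
    #include <stdio.h>

    static int aoe_load_bitstream(const char *path)              /* hypothetical */
    {
        printf("loading FPGA image %s\n", path);
        return 0;                                                /* engine handle */
    }

    static int aoe_attach_filter(int engine, const char *rule)   /* hypothetical */
    {
        printf("engine %d now filtering: %s\n", engine, rule);
        return 0;
    }

    int main(void)
    {
        int engine = aoe_load_bitstream("custom_filter.bit");
        if (engine < 0 || aoe_attach_filter(engine, "udp port 9000") != 0) {
            fprintf(stderr, "offload setup failed\n");
            return 1;
        }
        puts("custom FPGA pipeline active");
        return 0;
    }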