Multi-SoC PoE Cluster

The Multi-System-on-Chip Power-over-Ethernet Cluster is a project to investigate the performance, cost and power consumption of a cluster of low-cost System-on-Chip (SoC) processors (typically ARM), interconnected through a combination of their on-chip USB 2.0 On-The-Go (OTG) ports, a bespoke USB 2 to Gigabit Ethernet switch (the USB 2 GigE switch) and an enterprise-grade Power-over-Ethernet (PoE) Gigabit Ethernet switch infrastructure.

Rationale

Low-cost System-on-Chip devices have become abundant in mobile devices in the second decade of the third millennium. These CPU/memory/co-processor devices typically have limited I/O beyond wireless (IEEE 802.11 and/or 3G+) and, generally, a USB 2.0 OTG port; unsurprisingly, there is usually no wired Ethernet interface. They do, however, consume very little power for the work they do and can be purchased at relatively low cost.

Under a Linux kernel, the USB 2 port can be interfaced to a 100 Mbit/s full-duplex Ethernet adaptor, providing up to around 20 MBytes/s of aggregate bidirectional bandwidth. The USB 2 port can theoretically provide up to 48 MBytes/s, although the fastest published results suggest something closer to 43 MBytes/s, which is still over twice the usable aggregate bandwidth of a 100 Mbit/s Ethernet link, and more than four times the one-way rate.
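
As a quick sanity check, the ratios implied by those figures can be reproduced in a few lines of Python; the throughput values below are the ones quoted above, not new measurements.

 # Back-of-the-envelope check of the throughput figures quoted above
 # (the document's own numbers, not fresh measurements).
 usb2_practical_MBps   = 43.0   # fastest published USB 2 bulk throughput
 eth100_aggregate_MBps = 20.0   # ~20 MByte/s usable over 100 Mbit/s full duplex
 eth100_one_way_MBps   = eth100_aggregate_MBps / 2
 
 print(f"USB 2 vs Fast Ethernet aggregate: {usb2_practical_MBps / eth100_aggregate_MBps:.1f}x")
 print(f"USB 2 vs Fast Ethernet one-way:   {usb2_practical_MBps / eth100_one_way_MBps:.1f}x")
 # -> a bit over twice the usable aggregate rate and more than 4 times the
 #    one-way rate, matching the claims above.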

A Gigabit Ethernet interface, on the other hand, can provide up to around 200 MBytes/s aggregate per port. Provisioning one per SoC device is possible, but it would waste most of that bandwidth while incurring a greater per-port cost at the switch end.

The best match would be to have between 4 and 8 SoCs sharing a single switched Gigabit Ethernet link via their USB 2 ports. A bespoke switch is required in this middle role: the USB 2 GigE switch.
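
A rough sizing sketch using the figures quoted above suggests why 4 to 8 SoCs per Gigabit link is the sweet spot; the "typical" per-SoC rate is taken from the USB-to-100-Mbit discussion earlier and is indicative only.

 # Rough sizing: how many USB 2-attached SoCs can share one Gigabit
 # Ethernet uplink?  Figures are those quoted in the sections above.
 gige_aggregate_MBps = 200.0   # usable aggregate per GigE port
 soc_best_case_MBps  = 43.0    # best published USB 2 bulk throughput per SoC
 soc_typical_MBps    = 20.0    # typical sustained rate through a USB-Ethernet path
 
 print(f"SoCs per port, all running flat out: {gige_aggregate_MBps / soc_best_case_MBps:.1f}")
 print(f"SoCs per port, at typical rates:     {gige_aggregate_MBps / soc_typical_MBps:.1f}")
 # -> roughly 5 SoCs if every uplink is saturated, around 10 at more typical
 #    rates, which brackets the 4 to 8 SoCs per link proposed above.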

Utilising an enterprise-grade Gigabit Ethernet switch with PoE, enough power can likely be delivered down each network link (typically 13 W available at each port) to power the USB 2 GigE switch and up to 4 SoC devices.
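
A minimal power-budget sketch for one node is shown below; the per-device draws are illustrative assumptions, not measured figures for any particular SoC or for the as-yet-undesigned switch board.

 # Illustrative PoE power budget for one node: an FPGA-based USB 2 GigE
 # switch plus 4 SoC modules fed from a single PoE port (~13 W available).
 # The per-device draws are assumptions for illustration only.
 poe_budget_W   = 13.0   # power available at the PoE port
 switch_board_W = 3.0    # assumed draw of the USB 2 GigE switch board
 soc_module_W   = 2.0    # assumed draw of one SoC module under load
 num_socs       = 4
 
 node_W = switch_board_W + num_socs * soc_module_W
 headroom_W = poe_budget_W - node_W
 print(f"Estimated node draw: {node_W:.1f} W of {poe_budget_W:.1f} W "
       f"({headroom_W:.1f} W headroom)")
 # -> 11 W against a 13 W budget: feasible if the assumed figures hold,
 #    but leaving little margin for peaks or DC-DC conversion losses.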

Development

It is proposed to build a cluster of some 48 nodes, each consisting of 4 SoCs (likely Tegra 2, or similar) and a PoE-powered USB 2 GigE switch on its own Gigabit port, for a total of 192 SoCs and possibly 384 or more cores.
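
Those totals follow directly from the node count and the per-SoC core count (the Tegra 2 is a dual-core Cortex-A9); a trivial check:

 # Cluster totals implied by the proposal: 48 nodes, 4 SoCs per node,
 # 2 CPU cores per SoC (Tegra 2 is a dual-core Cortex-A9).
 nodes, socs_per_node, cores_per_soc = 48, 4, 2
 
 total_socs  = nodes * socs_per_node        # 192 SoCs
 total_cores = total_socs * cores_per_soc   # 384 cores
 print(f"{total_socs} SoCs, {total_cores} cores")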

The key component is the USB 2 GigE switch device, which needs to be developed specifically for this project, although it could easily find many other applications.

The USB 2 GigE switch will likely be based on a Xilinx Field Programmable Gate Array (FPGA) and a number of USB 2.0 interface devices, such as the Cypress CY7C68013 FX2LP USB 2.0 microcontroller.

Development of the USB 2 GigE switch will start by provisioning USB 2.0 interfaces for the NetFPGA 1G, an FPGA-based network interface card from Stanford University. This card already has working Gigabit Ethernet ports and a reference learning switch implemented in the Verilog Hardware Description Language (HDL). It also has a 40-pin "debug" connector with 32 data signals and 2 clocks connected directly to its Xilinx Virtex-II Pro 50 FPGA.

This provisioning is currently underway with the NetFPGA USB 2 interface board project.

Once the proof-of-concept USB 2 GigE switch has been completed on the NetFPGA infrastructure, development will focus on designing and building a board with a Xilinx FPGA, a single GigE port with PoE and 4 USB 2 device ports. This board may well include sockets or other mounting options for SoC modules such as the Colibri T20 or Gumstix boards.

Once that design has been proven, the number of boards required for a full system build can be produced.

Finally, benchmark codes will be run to validate the speed, power consumption and cost of the complete cluster.

Similar projects