Abstract
Although some interesting routing algorithms based on Hopfield Neural Networks (HNN) have already been proposed, they are slower than other routing algorithms. Since HNN are inherently parallel, they are well suited to parallel platforms such as Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). In this chapter, the authors present parallel implementations of an HNN-based routing algorithm for GPUs and FPGAs, addressing several implementation issues: the hardware limitations of the devices, memory bottlenecks, the complexity of the HNN, and, for the GPU implementation, how the kernel functions should be written, as well as, for the FPGA implementation, the accuracy of the number representation and memory storage on the device. The authors run simulations of one variation of the routing algorithm on three communication network topologies with an increasing number of nodes. The simulated FPGA model achieves speed-ups of up to 78 over the sequential CPU version, and the GPU version is 55 times faster than the sequential one. These results suggest that HNN can be used to implement routers for real networks, including optical networks.
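The parallelism the abstract refers to comes from the HNN update rule: every neuron computes its next output from the same shared state vector, so all neurons can be updated simultaneously. A minimal sketch of one synchronous update step of a continuous Hopfield network is shown below; the weight matrix, bias, and sigmoid slope are illustrative placeholders, not the chapter's actual routing energy function.

```python
import numpy as np

def hopfield_step(v, W, b, slope=1.0):
    """One synchronous update of a continuous Hopfield network.

    Every neuron reads the full state vector, so all N net inputs
    are independent dot products -- the parallelism that GPU and
    FPGA implementations exploit. W, b, and slope are illustrative.
    """
    u = W @ v + b                            # net input to every neuron at once
    return 0.5 * (1.0 + np.tanh(slope * u))  # sigmoid activation into [0, 1]

# Tiny example: a 4-neuron state relaxing under symmetric weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W = 0.5 * (W + W.T)       # Hopfield weights are symmetric
np.fill_diagonal(W, 0.0)  # no self-connections
b = np.zeros(4)
v = rng.random(4)
for _ in range(50):
    v = hopfield_step(v, W, b)
```

On a GPU, the matrix-vector product maps naturally to one thread per neuron; on an FPGA, each neuron can be a dedicated arithmetic unit, which is why the chapter examines number representation and on-chip memory storage.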
Original language | English |
---|---|
Title of host publication | Intelligent Systems for Optical Networks Design: Advancing Techniques |
Publisher | IGI Global |
Pages | 235-254 |
ISBN (Electronic) | 9781466636538 |
ISBN (Print) | 1466636521, 9781466636521 |
DOIs | |
Publication status | Published - 31 Mar 2013 |
Externally published | Yes |