MindTalks: Computation spike by spike – hardware and wetware (Prof. Dr. Klaus Pawelzik and Prof. Dr. Alberto Garcia-Ortiz)

Abstract

Recent advances in machine learning with deep neural networks (DNNs) show impressive performance in solving difficult problems. However, current DNN approaches are still inefficient compared with their biological counterparts. It appears that evolution has found solutions that are still superior to current technical implementations. Spiking neural networks (SNNs) could offer an alternative to standard CNNs. Like the brain, SNNs can operate reliably using mechanisms that are inherently unreliable. Besides robustness, SNNs offer further advantages, such as potentially higher energy efficiency and more efficient asynchronous parallelization.
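The following minimal Python sketch (not part of the talk material) illustrates the point that reliable quantities can emerge from inherently unreliable spiking: each individual spike is a noisy stochastic event, yet averaging over many spikes recovers the underlying firing rate. The rate and number of time steps are purely illustrative assumptions.

```python
# Minimal sketch: reliable rate estimates from unreliable, stochastic spikes.
import numpy as np

rng = np.random.default_rng(0)

rate = 0.3            # assumed true firing probability per time step
n_steps = 10_000      # number of simulated time steps

# Each individual spike is a noisy Bernoulli event ...
spikes = rng.random(n_steps) < rate

# ... but the average over many spikes converges to the true rate.
estimate = spikes.mean()
print(f"true rate = {rate:.3f}, estimate from {n_steps} noisy spikes = {estimate:.3f}")
```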

However, current implementations of SNNs require hundreds of cores with large and complex circuits. We present an alternative approach, 'Spike-by-Spike' (SbS) networks, which represent a compromise between computational requirements and biological realism, preserving essential advantages of biological networks while allowing a much more compact technical implementation. To fully exploit the robustness and efficiency of SbS, dedicated hardware architectures are required. By combining optimized hardware architectures with stochastic and approximate processing, this approach aims to improve the performance and energy consumption of neural networks by at least one order of magnitude.
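To give a flavor of what computing 'spike by spike' means, the sketch below implements a multiplicative per-spike inference update of the form reported in the SbS literature: the latent variables h are nudged by each individual incoming spike and renormalized. The network size, weights W, and update rate eps are illustrative assumptions; this is a toy simulation, not the speakers' hardware implementation.

```python
# Hedged sketch of a Spike-by-Spike (SbS) style inference step:
# one incoming spike at a time drives a multiplicative update of the
# latent variables h, followed by renormalization.
import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_hidden = 8, 4
W = rng.random((n_inputs, n_hidden))
W /= W.sum(axis=0, keepdims=True)        # columns normalized: p(input s | hidden i)

h = np.full(n_hidden, 1.0 / n_hidden)    # latent variables, start uniform
eps = 0.1                                # assumed update rate per spike

input_pattern = rng.random(n_inputs)
input_pattern /= input_pattern.sum()     # probability of each input emitting a spike

for _ in range(200):
    s = rng.choice(n_inputs, p=input_pattern)       # one incoming spike
    h = h * (1.0 + eps * W[s] / (h @ W[s]))         # spike-driven multiplicative update
    h /= h.sum()                                    # keep h a probability vector

print("inferred latent activity:", np.round(h, 3))
```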

For the Zoom-access and further information, please refer to the MindTalks website: https://www.bernstein.uni-bremen.de/drupal/node/229

Please feel free to mail us if you have any further questions.