Neuromorphic computing; Optimisation; Segmented bus; Spiking neural network; Compilers
Neuromorphic computing has emerged as a highly promising paradigm for energy-efficient, real-time processing of event-driven workloads such as spiking neural networks (SNNs). These systems mimic the sparse, asynchronous communication behavior of the brain and are particularly well suited for edge AI, robotics, and low-power sensory processing. However, as neuromorphic processors scale to support larger and more complex SNNs, the communication infrastructure becomes a primary bottleneck. Conventional Network-on-Chip (NoC) architectures, while widely used in many-core systems, are ill-suited for neuromorphic workloads because they rely on area- and energy-intensive buffers, routing tables, and continuous link activity. Moreover, NoCs often fail to accommodate the bursty, multicast, and temporally sensitive nature of SNN traffic, leading to spike loss, latency variation, and unnecessary power dissipation.

This dissertation introduces a comprehensive interconnect solution based on the segmented ladder bus, a lightweight and scalable bus architecture optimized for spiking communication. The first contribution is ADIONA, a dynamic segmented ladder bus architecture that leverages bufferless switching and compile-time programmability to enable multiple concurrent paths at low energy and latency. ADIONA's segmented topology and runtime control provide flexibility while maintaining deterministic behavior and hardware simplicity.

To support application-level deployment on ADIONA, the second major contribution is MASS, a mapping and scheduling framework for SNNs on segmented buses. MASS includes: (1) a heuristic cluster-mapping strategy that minimizes spike loss and communication energy, (2) a traffic-scheduling algorithm that prevents path collisions through temporal grouping, and (3) a path-routing method that minimizes path length and congestion.
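The temporal-grouping idea behind MASS's scheduler can be illustrated with a minimal sketch: spike flows that would contend for the same bus segment are placed in different time slots, greedily filling the earliest non-conflicting slot. The function name, flow identifiers, and segment sets below are illustrative assumptions, not taken from the dissertation.

```python
def schedule_flows(flows):
    """Greedily assign each flow (a set of bus segments it traverses)
    to the earliest time slot whose existing flows share no segment
    with it, so no two flows in one slot collide on a segment."""
    slots = []  # each slot: list of (flow_id, segments) pairs
    for flow_id, segments in flows.items():
        for slot in slots:
            if all(segments.isdisjoint(other) for _, other in slot):
                slot.append((flow_id, segments))  # fits this slot
                break
        else:
            slots.append([(flow_id, segments)])  # open a new slot
    return slots

# Illustrative traffic: B conflicts with A on segment 2; C is disjoint.
flows = {
    "A": {1, 2},
    "B": {2, 3},
    "C": {4},
}
slots = schedule_flows(flows)
# A and C share slot 0; B is deferred to slot 1.
```

This is a first-fit heuristic; the actual MASS algorithm also accounts for spike loss and communication energy, which this sketch omits.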
Experimental results show that MASS eliminates spike loss and reduces interconnect energy by up to 13.5% compared to randomized baselines, while supporting high-throughput traffic patterns.

The final contribution is an efficient architecture proposal and hardware realization of the control plane: a scenario-aware controller implemented on an FPGA. To manage the scaling complexity of the control plane, we propose two algorithms, greedy grouping and maximal clique extraction, that compress switch-control scenarios and minimize memory usage. Our FPGA results show that the control plane occupies less than 10% of the total area and scales sublinearly with network size, validating the practicality of the approach.

Together, these contributions form a complete and scalable communication framework for neuromorphic systems, spanning architectural design, algorithmic optimization, and hardware implementation. The segmented ladder bus offers a promising path toward future neuromorphic processors that demand low-power, low-latency, and scalable interconnects.
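The greedy-grouping compression can be sketched as follows: each switch-control scenario is a partial assignment of switches to states, and two scenarios can be merged into one stored configuration whenever they assign no switch conflicting states. The representation and data below are illustrative assumptions; the dissertation's actual encoding and its maximal-clique variant are not reproduced here.

```python
def compatible(a, b):
    """Two scenarios are compatible if every switch they share
    is assigned the same state in both."""
    return all(b[k] == v for k, v in a.items() if k in b)

def greedy_group(scenarios):
    """Greedily merge each scenario into the first compatible group,
    reducing the number of configurations the control plane stores."""
    groups = []  # each group is one merged scenario dict
    for sc in scenarios:
        for g in groups:
            if compatible(g, sc):
                g.update(sc)  # absorb into the existing group
                break
        else:
            groups.append(dict(sc))  # no compatible group: start one
    return groups

# Illustrative scenarios: the second agrees with the first on s2 and
# merges; the third conflicts on s1 and needs its own group.
scenarios = [
    {"s1": 1, "s2": 0},
    {"s2": 0, "s3": 1},
    {"s1": 0},
]
groups = greedy_group(scenarios)
```

Greedy grouping is fast but order-sensitive; maximal clique extraction over the scenario-compatibility graph, as the abstract notes, can find denser groupings at higher compile-time cost.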
Details
Title
On-chip interconnects and compiler techniques for efficient neuromorphic computing
Creators
Phu Khanh Huynh
Contributors
Anup Das (Advisor)
Francky Catthoor (Advisor)
Awarding Institution
Drexel University
Degree Awarded
Doctor of Philosophy (Ph.D.)
Publisher
Drexel University; Philadelphia, Pennsylvania
Number of pages
xi, 56 pages
Resource Type
Dissertation
Language
English
Academic Unit
College of Engineering (1970-2026); Electrical (and Computer) Engineering [Historical]; Drexel University