Nature-Inspired Interconnects for Self-Assembled Large-Scale Network-on-Chip Designs
Future nano-scale electronics built up from an Avogadro number of components
will need efficient, highly scalable, and robust means of communication in
order to be competitive with traditional silicon approaches. In recent years, the
Networks-on-Chip (NoC) paradigm emerged as a promising solution to interconnect
challenges in silicon-based electronics. Current NoC architectures are either
highly regular or fully customized, both of which represent implausible
assumptions for emerging bottom-up self-assembled molecular electronics that
are generally assumed to have a high degree of irregularity and imperfection.
Here, we pragmatically and experimentally investigate important design
trade-offs and properties of an irregular, abstract, yet physically plausible
3D small-world interconnect fabric that is inspired by modern network-on-chip
paradigms. We vary the framework's key parameters, such as the connectivity,
the number of switch nodes, and the distribution of long- versus short-range
connections, and measure the network's relevant communication characteristics.
We further explore robustness against link failures and the network's ability
to solve, and its efficiency in solving, a simple toy problem: the
synchronization task. The results
confirm (1) that computation in irregular assemblies is a promising and
disruptive computing paradigm for self-assembled nano-scale electronics and (2)
that 3D small-world interconnect fabrics with a power-law decaying distribution
of shortcut lengths are physically plausible and have major advantages over
local 2D and 3D regular topologies.
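The fabric described in the abstract can be sketched with a Kleinberg-style construction: nodes on a 3D lattice get nearest-neighbour links plus long-range shortcuts whose length d is drawn with probability proportional to d^-alpha. A minimal pure-Python sketch, where the grid size, shortcut probability, and alpha value are illustrative assumptions rather than the paper's settings:

```python
import itertools
import random
from collections import deque

def small_world_3d(n=6, p_shortcut=0.2, alpha=2.0, seed=0):
    """Toy 3D small-world fabric: an n x n x n lattice with local
    nearest-neighbour links, plus shortcuts whose target is chosen with
    probability ~ (Manhattan distance)^-alpha (power-law decaying lengths).
    Returns an adjacency dict {node: set(neighbours)}."""
    rng = random.Random(seed)
    nodes = list(itertools.product(range(n), repeat=3))
    adj = {v: set() for v in nodes}
    # Local links: standard 6-neighbour lattice connectivity.
    for (x, y, z) in nodes:
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            u = (x + dx, y + dy, z + dz)
            if u in adj:
                adj[(x, y, z)].add(u)
                adj[u].add((x, y, z))
    # Shortcuts: each node gets one with probability p_shortcut.
    for v in nodes:
        if rng.random() < p_shortcut:
            cands, weights = [], []
            for u in nodes:
                d = sum(abs(a - b) for a, b in zip(u, v))
                if d > 1:  # skip self and existing lattice neighbours
                    cands.append(u)
                    weights.append(d ** -alpha)
            u = rng.choices(cands, weights=weights, k=1)[0]
            adj[v].add(u)
            adj[u].add(v)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length, via BFS from every node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs
```

Comparing avg_path_length with and without shortcuts shows the small-world effect the abstract relies on: even a sparse sprinkling of power-law shortcuts sharply reduces the mean hop count relative to the pure local lattice.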
Resolving structural variability in network models and the brain
Large-scale white matter pathways crisscrossing the cortex create a complex
pattern of connectivity that underlies human cognitive function. Generative
mechanisms for this architecture have been difficult to identify in part
because little is known about mechanistic drivers of structured networks. Here
we contrast network properties derived from diffusion spectrum imaging data of
the human brain with 13 synthetic network models chosen to probe the roles of
physical network embedding and temporal network growth. We characterize both
the empirical and synthetic networks using familiar diagnostics presented in
statistical form, as scatter plots and distributions, to reveal the full range
of variability of each measure across scales in the network. We focus on the
degree distribution, degree assortativity, hierarchy, topological Rentian
scaling, and topological fractal scaling---in addition to several summary
statistics, including the mean clustering coefficient, shortest path length,
and network diameter. The models are investigated in a progressive, branching
sequence, aimed at capturing different elements thought to be important in the
brain, and range from simple random and regular networks, to models that
incorporate specific growth rules and constraints. We find that synthetic
models that constrain the network nodes to be embedded in anatomical brain
regions tend to produce distributions that are similar to those extracted from
the brain. We also find that network models hardcoded to display one network
property do not in general also display a second, suggesting that multiple
neurobiological mechanisms might be at play in the development of human brain
network architecture. Together, the network models that we develop and employ
provide a potentially useful starting point for the statistical inference of
brain network structure from neuroimaging data.
Comment: 24 pages, 11 figures, 1 table, supplementary material
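Several of the summary statistics this abstract compares (degree distribution, mean clustering coefficient, characteristic path length) can be computed directly from an edge list. A minimal pure-Python sketch, assuming an undirected, connected graph; function and variable names are illustrative:

```python
from collections import deque

def diagnostics(edges):
    """Compute the node degrees, mean clustering coefficient, and
    characteristic (mean shortest) path length of an undirected,
    connected graph given as an edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    degrees = {v: len(nb) for v, nb in adj.items()}

    def clustering(v):
        # Fraction of this node's neighbour pairs that are themselves linked.
        nb = list(adj[v])
        k = len(nb)
        if k < 2:
            return 0.0
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in adj[nb[i]])
        return 2.0 * links / (k * (k - 1))

    mean_clustering = sum(clustering(v) for v in adj) / len(adj)

    # Characteristic path length via breadth-first search from every node.
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return degrees, mean_clustering, total / pairs
```

Presenting such diagnostics as full distributions rather than single numbers, as the abstract advocates, simply means returning the per-node values (degrees, per-node clustering) instead of their means.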
Design of a Neuromemristive Echo State Network Architecture
Echo state neural networks (ESNs) provide an efficient classification
technique for spatiotemporal signals. The feedback connections in the ESN
enable feature extraction from both the spatial and temporal components of
time-series data. This property has been exploited in several application
domains, such as image and video analysis, anomaly detection, and speech
recognition. Software implementations of the ESN process such applications
efficiently and offer low design cost and flexibility. However, hardware
implementation is necessary for power-constrained applications such as
therapeutic and mobile devices; moreover, a software realization consumes an
order of magnitude or more power compared to a hardware realization. In this
work, a hardware ESN architecture based on a neuromemristive system is
proposed. A neuromemristive system is a brain-inspired computing system that
uses memristive devices for synaptic plasticity. The memristive devices in
neuromemristive systems have several attractive properties: a small footprint,
a simple device structure, and, most importantly, zero static power
dissipation. The proposed architecture is reconfigurable for different ESN
topologies. 2-D mesh and toroidal networks are exploited in the reservoir
layer, and the relation between the performance of the proposed reservoir
architecture and reservoir metrics is analyzed. The proposed architecture is
tested on a suite of medical and human-computer interaction applications,
including epileptic seizure detection, speech emotion recognition, and
electromyography (EMG) based finger motion recognition. The proposed ESN
architecture demonstrated high accuracy on epileptic seizure detection, speech
emotion recognition, and EMG-based prosthetic finger control.
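The ESN recipe this abstract builds on (a fixed random recurrent reservoir rescaled to a spectral radius below one, with only a linear readout trained, typically by ridge regression) can be sketched in NumPy. This is a generic software ESN, not the proposed memristive hardware, and every size and constant below is an illustrative assumption:

```python
import numpy as np

def make_reservoir(n=100, spectral_radius=0.9, density=0.1, seed=0):
    """Sparse random reservoir, rescaled so its largest eigenvalue
    magnitude equals spectral_radius (the usual echo-state recipe)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (n, n)) * (rng.random((n, n)) < density)
    radius = max(abs(np.linalg.eigvals(W)))
    return W * (spectral_radius / radius)

def run_esn(W, W_in, inputs):
    """Drive the reservoir with a scalar input sequence; collect states.
    The reservoir weights stay fixed; only the readout is ever trained."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Linear readout fitted by ridge regression."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]),
                           S.T @ targets)
```

The feedback (recurrent) connections give each state a fading memory of past inputs, which is what lets a purely linear readout recover temporal features such as a delayed copy of the input signal.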
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
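The core EPANN loop, in which evolution tunes the coefficients of a plasticity rule rather than the synaptic weights themselves, can be illustrated with a deliberately tiny sketch: a genome encodes a generalised Hebbian rule for a single plastic synapse, and a simple (1+lambda) evolutionary loop keeps whichever genome yields the best lifetime learning on a trivial recall task. The rule form, the task, and every constant here are illustrative assumptions, not a method from the reviewed literature:

```python
import random

def lifetime_error(genome, trials=40, seed=0):
    """One 'lifetime': a plastic synapse w starts at zero and is updated by
    the evolved rule  dw = eta * (A*pre*post + B*pre + C*post + D),
    with the postsynaptic unit teacher-clamped to the target during
    learning (an assumed toy setup). Afterwards we score how well the
    output y = w * x reproduces the input x = 1."""
    eta, A, B, C, D = genome
    rng = random.Random(seed)
    w = 0.0
    for _ in range(trials):
        x = rng.choice([0.0, 1.0])
        post = x  # teacher-clamped postsynaptic activity
        w += eta * (A * x * post + B * x + C * post + D)
    return (w * 1.0 - 1.0) ** 2

def evolve(generations=100, offspring=8, seed=1):
    """(1+lambda) evolutionary loop over the five rule coefficients:
    mutate, evaluate a lifetime of plasticity, keep the best genome."""
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(5)]
    best_err = lifetime_error(best)
    for _ in range(generations):
        for _ in range(offspring):
            child = [g + rng.gauss(0.0, 0.1) for g in best]
            err = lifetime_error(child)
            if err < best_err:
                best, best_err = child, err
    return best, best_err
```

Note the division of labour the abstract describes: the inner loop is lifetime learning driven by the plasticity rule, while the outer loop is evolution acting only on the rule's genome.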