9,812 research outputs found
Analog readout for optical reservoir computers
Reservoir computing is a new, powerful and flexible machine learning
technique that is easily implemented in hardware. Recently, by using a
time-multiplexed architecture, hardware reservoir computers have reached
performance comparable to digital implementations. Operating speeds allowing
for real-time information processing have been reached using optoelectronic
systems. At present the main performance bottleneck is the readout layer, which
uses slow digital post-processing. We have designed an analog readout suitable
for time-multiplexed optoelectronic reservoir computers, capable of working in
real time. The readout has been built and tested experimentally on a standard
benchmark task. Its performance is better than that of non-reservoir methods,
with ample room for further improvement. The present work thereby overcomes one
of the major limitations for the future development of hardware reservoir
computers.
Comment: to appear in NIPS 201
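The readout the abstract targets is the linear output layer of a reservoir computer: a weighted sum of reservoir states whose weights are fitted by linear regression, conventionally in slow digital post-processing. A minimal software sketch of this structure (a generic echo state network on a toy recall task, with assumed sizes and parameters, not the paper's optoelectronic setup) is:

```python
import numpy as np

# Minimal echo state network sketch. All sizes and parameters are illustrative;
# the trained linear readout at the end is the stage the abstract proposes to
# replace with analog hardware.
rng = np.random.default_rng(0)
N, T = 100, 1000                      # reservoir size, number of time steps
u = rng.uniform(0.0, 0.5, T)          # random input sequence
y_target = np.roll(u, 1)              # toy task: recall the previous input

W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1
W_in = rng.uniform(-1.0, 1.0, N)

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])  # leaky-free reservoir update
    states[t] = x

# Readout: ridge regression on the collected states (the "digital" step),
# discarding a short initial washout.
lam = 1e-6
S = states[10:]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y_target[10:])
y_pred = states @ W_out
nmse = np.mean((y_pred[10:] - y_target[10:]) ** 2) / np.var(y_target[10:])
```

The point of the comparison: the reservoir update can already run in fast analog hardware, while computing `states @ W_out` sample-by-sample is what an analog readout would perform in real time.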
A multiplexed mixed-signal fuzzy architecture
Analog circuits provide better area/power efficiency than their digital counterparts for low-to-medium precision requirements. This limit on precision, together with the lack of design tools comparable to those of the digital approach, imposes a limit on complexity; hence fuzzy analog controllers are usually oriented to fast, low-power systems of low-to-medium complexity. The paper presents a strategy that preserves most of the advantages of an analog implementation while allowing a notable increase in system complexity. The strategy consists of implementing only the reduced number of rules that actually determine the output of a lattice controller, in what we call the analog core; this core is then dynamically programmed to perform the computation related to a specific rule set. The data used to program the analog core are stored in a memory and constitute the whole knowledge base as a kind of virtual rule set. HSPICE simulations of an exemplary controller are shown to illustrate the viability of the proposal.
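The "virtual rule set" idea rests on a property of lattice controllers with triangular, overlapping membership functions: for two inputs, at most four rules (one lattice cell) contribute to the output at any instant, so a small core can evaluate them with consequent values fetched from memory. A hypothetical numerical sketch of that selection and evaluation (all grid sizes and stored values are made up for illustration) is:

```python
import numpy as np

# Hypothetical lattice fuzzy controller with two inputs on [0, 1].
# Triangular membership functions on a uniform grid mean only the 4 rules of
# the active lattice cell fire; their consequents play the role of the
# memory-stored knowledge base programming the "analog core".
grid = np.linspace(0.0, 1.0, 5)                 # 5 labels per input -> 25 rules
consequents = np.random.default_rng(1).uniform(-1.0, 1.0, (5, 5))

def controller(x1, x2):
    # locate the active lattice cell for each input
    i = max(min(np.searchsorted(grid, x1) - 1, 3), 0)
    j = max(min(np.searchsorted(grid, x2) - 1, 3), 0)
    # membership degrees of the two neighbouring triangular labels
    a = (x1 - grid[i]) / (grid[i + 1] - grid[i])
    b = (x2 - grid[j]) / (grid[j + 1] - grid[j])
    # only these 4 rules determine the output; weights sum to 1
    w = np.array([(1 - a) * (1 - b), (1 - a) * b, a * (1 - b), a * b])
    c = np.array([consequents[i, j], consequents[i, j + 1],
                  consequents[i + 1, j], consequents[i + 1, j + 1]])
    return float(w @ c)
```

At a grid vertex the output equals the stored consequent exactly, and between vertices it interpolates the four cell consequents, which is why evaluating the full 25-rule base is unnecessary.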
Spectral Efficiency of MIMO Millimeter-Wave Links with Single-Carrier Modulation for 5G Networks
Future wireless networks will extensively rely upon bandwidths centered on
carrier frequencies above 10 GHz. Indeed, recent research has shown that,
despite the large path-loss, millimeter wave (mmWave) frequencies can be
successfully exploited to transmit very large data-rates over short distances
to slowly moving users. Due to hardware complexity and cost constraints,
single-carrier modulation schemes, as opposed to the popular multi-carrier
schemes, are being considered for use at mmWave frequencies. This paper
presents preliminary studies on the achievable spectral efficiency on a
wireless MIMO link operating at mmWave in a typical 5G scenario. Two different
single-carrier modem schemes are considered, i.e. a traditional modulation
scheme with linear equalization at the receiver, and a single-carrier
modulation with cyclic prefix, frequency-domain equalization and FFT-based
processing at the receiver. Our results show that the former achieves a larger
spectral efficiency than the latter. Results also confirm that the spectral
efficiency increases with the dimension of the antenna array, as well as that
performance gets severely degraded when the link length exceeds 100 meters and
the transmit power falls below 0 dBW. Nonetheless, mmWave frequencies appear to
be well suited for providing very large data rates over short distances.
Comment: 8 pages, 8 figures, to appear in Proc. 20th International ITG
Workshop on Smart Antennas (WSA 2016)
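The second scheme compared in the abstract, single-carrier modulation with cyclic prefix and FFT-based frequency-domain equalization, can be sketched in a few lines. This is a generic noiseless illustration with assumed block and channel sizes, not the paper's MIMO simulator: the cyclic prefix turns the multipath channel into a circular convolution, so equalization reduces to one division per frequency bin.

```python
import numpy as np

# Illustrative single-carrier block with cyclic prefix (SC-FDE), noiseless,
# single antenna; all parameters are assumptions for the sketch.
rng = np.random.default_rng(0)
N, L = 64, 4                                   # block length, channel taps
bits = rng.integers(0, 2, 2 * N)
s = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)  # channel

tx = np.concatenate([s[-(L - 1):], s])         # prepend cyclic prefix (L-1 symbols)
rx = np.convolve(tx, h)[:len(tx)]              # linear multipath convolution
r = rx[L - 1:L - 1 + N]                        # strip prefix -> circular convolution

H = np.fft.fft(h, N)                           # channel frequency response
s_hat = np.fft.ifft(np.fft.fft(r) / H)         # one-tap per-bin equalizer, then IFFT
```

With noise, the per-bin division would be replaced by an MMSE weight, but the FFT/IFFT structure, which is what makes the receiver cheap at mmWave bandwidths, is unchanged.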
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their performance metrics. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook of
neuro-inspired photonic processing.
Comment: 28 pages, 19 figures