4,579 research outputs found

    Nanophotonic reservoir computing with photonic crystal cavities to generate periodic patterns

    Reservoir computing (RC) is a technique in machine learning inspired by neural systems. RC has been used successfully to solve complex problems such as signal classification and signal generation. These systems are mainly implemented in software, which limits their speed and power efficiency. Several optical and optoelectronic implementations have been demonstrated, in which the signals carry both an amplitude and a phase. It has been shown that this enriches the dynamics of the system, which is beneficial for performance. In this paper, we introduce a novel optical architecture based on nanophotonic crystal cavities. This allows us to integrate many neurons on one chip, which, compared with other photonic solutions, most closely resembles a classical neural network. Furthermore, the components are passive, which simplifies the design and reduces the power consumption. To assess the performance of this network, we train a photonic network to generate periodic patterns, using an alternative online learning rule called first-order reduced and controlled error (FORCE). For this, we first train a classical hyperbolic tangent reservoir, but then we vary some of its properties to incorporate typical aspects of a photonic reservoir, such as the use of continuous-time versus discrete-time signals and the use of complex-valued versus real-valued signals. Then, the nanophotonic reservoir is simulated and we explore the role of relevant parameters such as the topology, the phases between the resonators, the number of nodes that are biased, and the delay between the resonators. It is important that these parameters are chosen such that no strong self-oscillations occur. Finally, our results show that for a signal generation task a complex-valued, continuous-time nanophotonic reservoir outperforms a classical (i.e., discrete-time, real-valued) leaky hyperbolic tangent reservoir (normalized root-mean-square error (NRMSE) = 0.030 versus 0.127).
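    As a rough illustration of the classical baseline mentioned in this abstract, the sketch below implements a leaky hyperbolic tangent reservoir trained online with the FORCE rule (recursive least squares on the readout weights) to reproduce a periodic target signal. The reservoir size, leak rate, spectral radius, target waveform, and training length are illustrative assumptions, not the parameters used in the paper, and no attempt is made to model the photonic (complex-valued, continuous-time) case.

    # Minimal sketch: leaky-tanh reservoir with output feedback, trained online
    # with FORCE (first-order reduced and controlled error) via recursive least
    # squares on the readout weights. All parameters are assumed for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 300            # reservoir size (assumed)
    leak = 0.1         # leak rate of the leaky-tanh neurons (assumed)
    spectral = 1.2     # spectral radius scaling (assumed)
    steps = 3000       # training steps (assumed)

    # Random recurrent and feedback weights; only w_out is trained
    W = rng.normal(0, 1.0 / np.sqrt(N), (N, N)) * spectral
    w_fb = rng.uniform(-1, 1, N)
    w_out = np.zeros(N)

    # FORCE keeps a running inverse correlation estimate for the RLS update
    P = np.eye(N)

    # Periodic target: a simple sum of sines (assumed)
    t = np.arange(steps) * 0.01
    target = 0.5 * np.sin(2 * np.pi * t) + 0.25 * np.sin(4 * np.pi * t)

    x = rng.normal(0, 0.5, N)    # reservoir state
    z = 0.0                      # readout, fed back into the reservoir
    errors = []

    for k in range(steps):
        # Leaky-tanh reservoir update with output feedback
        x = (1 - leak) * x + leak * np.tanh(W @ x + w_fb * z)
        z = w_out @ x

        # FORCE / RLS update of the readout weights
        err = z - target[k]
        Px = P @ x
        gain = Px / (1.0 + x @ Px)
        P -= np.outer(gain, Px)
        w_out -= err * gain
        errors.append(err ** 2)

    # NRMSE over the training run (the paper reports test NRMSE values)
    nrmse = np.sqrt(np.mean(errors)) / np.std(target)
    print(f"training NRMSE = {nrmse:.3f}")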

    Scalable Digital Architecture of a Liquid State Machine

    A Liquid State Machine (LSM) is an adaptive neural computational model with rich dynamics for processing spatio-temporal inputs. These machines learn extremely fast because the goal-oriented training is moved to the output layer, unlike in conventional recurrent neural networks. The ability to multiplex multiple tasks at the output layer makes the LSM a powerful intelligent engine. These properties are desirable in several machine learning applications such as speech recognition, anomaly detection, and user identification. Scalable hardware architectures for spatio-temporal signal processing algorithms like LSMs are energy efficient compared to software implementations. These designs can also naturally adapt to different temporal streams of inputs. Early literature shows a few behavioral models of the LSM; however, they cannot process real-time data, either due to their hardware complexity or their fixed design approach. In this thesis, a scalable digital architecture of an LSM is proposed. A key feature of the architecture is a digital liquid that exploits spatial locality and is capable of processing real-time data. The quality of the proposed LSM is analyzed using kernel quality, the separation property of the liquid, and the Lyapunov exponent. When realized using a TSMC 65 nm technology node, the total power dissipation of the liquid layer, with 60 neurons, is 55.7 mW with an area requirement of 2 mm^2. The proposed model is validated on two benchmarks: for epileptic seizure detection an average accuracy of 84% is observed, and for user identification/authentication using gait an average accuracy of 98.65% is achieved.
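    A minimal structural sketch of an LSM of the kind described above is given below: a fixed, sparsely connected spiking liquid whose connection probability decays with the distance between neurons (spatial locality), followed by a linear readout that is the only trained part. The neuron model, network parameters, and toy classification task are assumptions for illustration and do not reproduce the thesis design or its benchmarks.

    # Minimal LSM-style sketch: fixed spiking liquid with distance-dependent
    # connectivity, linear readout trained by ridge regression. All parameters
    # and the toy task are assumed for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)

    N = 60                            # liquid size (matching the 60-neuron liquid)
    T = 200                           # time steps per input stream (assumed)
    pos = rng.uniform(0, 1, (N, 3))   # 3-D neuron positions used for locality

    # Connection probability decays with distance between neurons (assumed)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    conn = rng.uniform(size=(N, N)) < 0.3 * np.exp(-(d / 0.3) ** 2)
    np.fill_diagonal(conn, False)
    W = conn * rng.normal(0, 2.0, (N, N))
    w_in = rng.normal(0, 1.0, N)

    def liquid_states(u):
        """Run a 1-D input stream u through a simple LIF-style liquid and
        return the low-pass-filtered spike trace as the liquid state."""
        v = np.zeros(N)          # membrane potentials
        trace = np.zeros(N)      # filtered spikes, used as readout features
        for t in range(len(u)):
            spikes = (v > 1.0).astype(float)
            v = 0.9 * v * (1 - spikes) + 0.1 * (W @ spikes + w_in * u[t])
            trace = 0.95 * trace + spikes
        return trace

    # Toy two-class task: distinguish slow from fast sinusoidal streams (assumed)
    X, y = [], []
    for _ in range(100):
        label = rng.integers(2)
        freq = 0.05 if label == 0 else 0.15
        u = np.sin(2 * np.pi * freq * np.arange(T)) + 0.1 * rng.normal(size=T)
        X.append(liquid_states(u))
        y.append(label)
    X, y = np.array(X), np.array(y)

    # Goal-oriented training happens only at the readout: ridge regression
    A = X.T @ X + 1e-3 * np.eye(N)
    w_out = np.linalg.solve(A, X.T @ (2 * y - 1))
    acc = np.mean((X @ w_out > 0) == (y == 1))
    print(f"toy readout accuracy = {acc:.2f}")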

    Towards a neural hierarchy of time scales for motor control

    Animals show remarkably rich motion skills which are still far from realizable with robots. Inspired by the neural circuits which generate rhythmic motion patterns in the spinal cord of all vertebrates, one main research direction points towards the use of central pattern generators in robots. One of the key advantages of this is that the dimensionality of the control problem is reduced. In this work we investigate this further by introducing a multi-timescale control hierarchy with a hierarchy of recurrent neural networks at its core. By means of robot experiments, we demonstrate that this hierarchy can embed any rhythmic motor signal by imitation learning. Furthermore, the proposed hierarchy allows the tracking of several high-level motion properties (e.g., amplitude and offset), which are usually observed at a slower rate than the generated motion. Although these experiments are preliminary, the results are promising and have the potential to open the door to rich motor skills and advanced control.
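    The sketch below illustrates the multi-timescale idea in its simplest form, assuming a two-level hierarchy: a fast recurrent layer generates a rhythmic pattern while a slower recurrent layer tracks high-level properties (here amplitude and offset) and modulates the fast layer's output. The network sizes, time constants, and modulation scheme are assumptions; the imitation-learning procedure used in the paper is not reproduced here.

    # Minimal structural sketch of a two-timescale RNN control hierarchy.
    # All sizes, time constants, and weights are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)

    N_fast, N_slow = 100, 20
    dt = 0.01
    tau_fast, tau_slow = 0.05, 1.0     # slow layer evolves ~20x slower (assumed)

    W_fast = rng.normal(0, 1.5 / np.sqrt(N_fast), (N_fast, N_fast))
    W_slow = rng.normal(0, 1.0 / np.sqrt(N_slow), (N_slow, N_slow))
    W_cmd = rng.normal(0, 1.0, (N_slow, 2))      # high-level command -> slow layer
    w_pattern = rng.normal(0, 1.0, N_fast)       # fast layer -> raw rhythm
    w_amp = rng.normal(0, 0.3, N_slow)           # slow layer -> amplitude
    w_off = rng.normal(0, 0.3, N_slow)           # slow layer -> offset

    x_f = rng.normal(0, 0.5, N_fast)
    x_s = np.zeros(N_slow)
    motor = []

    for t in range(2000):
        # High-level command (amplitude, offset) changes slowly over time
        cmd = np.array([1.0 if t < 1000 else 0.5, 0.2])

        # Slow layer integrates the command on a long time constant
        x_s += dt / tau_slow * (-x_s + np.tanh(W_slow @ x_s + W_cmd @ cmd))

        # Fast layer generates the rhythmic pattern
        x_f += dt / tau_fast * (-x_f + np.tanh(W_fast @ x_f))

        # Slow layer modulates amplitude and offset of the fast rhythm
        amplitude = 1.0 + w_amp @ x_s
        offset = w_off @ x_s
        motor.append(amplitude * (w_pattern @ x_f) + offset)

    print(f"generated {len(motor)} motor samples, last value {motor[-1]:.3f}")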

    Deep Learning for Mobile Multimedia: A Survey

    Deep Learning (DL) has become a crucial technology for multimedia computing. It offers a powerful instrument to automatically produce high-level abstractions of complex multimedia data, which can be exploited in a number of applications, including object detection and recognition, speech-to-text, media retrieval, multimodal data analysis, and so on. The availability of affordable large-scale parallel processing architectures, and the sharing of effective open-source code implementing the basic learning algorithms, have caused a rapid diffusion of DL methodologies, bringing a number of new technologies and applications that outperform, in most cases, traditional machine learning technologies. In recent years, the possibility of implementing DL technologies on mobile devices has attracted significant attention. Thanks to this technology, portable devices may become smart objects capable of learning and acting. The path toward these exciting future scenarios, however, entails a number of important research challenges. DL architectures and algorithms are not well suited to the storage and computation resources of a mobile device. Therefore, there is a need for new generations of mobile processors and chipsets, small-footprint learning and inference algorithms, new models of collaborative and distributed processing, and a number of other fundamental building blocks. This survey reports the state of the art in this exciting research area, looking back to the evolution of neural networks and arriving at the most recent results in terms of methodologies, technologies, and applications for mobile environments.

    Index to NASA Tech Briefs, 1975

    This index contains abstracts and four indexes (subject, personal author, originating Center, and Tech Brief number) for 1975 Tech Briefs.