    Ultrafast single-channel machine vision based on neuro-inspired photonic computing

    High-speed machine vision is of growing importance in both scientific and technological applications. Neuro-inspired photonic computing is a promising approach to speeding up machine vision processing with ultralow latency. However, the processing rate is fundamentally limited by the low frame rate of image sensors, which typically operate at tens of hertz. Here, we propose an image-sensor-free machine vision framework that optically processes real-world visual information through only a single input channel, based on a random temporal encoding technique. This approach allows for compressive acquisition of visual information with a single channel at gigahertz rates, outperforming conventional approaches, and enables direct photonic processing with a photonic reservoir computer in the time domain. We experimentally demonstrate that the proposed approach is capable of high-speed image recognition and anomaly detection and, furthermore, can be used for high-speed imaging. The approach is multipurpose and can be extended to a wide range of applications, including tracking, controlling, and capturing sub-nanosecond phenomena.
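
    As a rough illustration of the random temporal encoding idea (not the paper's optical setup), the following Python sketch compresses an image into a single temporal channel by correlating it against a sequence of random binary masks. The function name encode_image and the binary-mask model are assumptions made for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)

        def encode_image(image, n_samples=64):
            """Compressively encode a 2-D image into a single temporal channel.

            Each output sample is the inner product of the flattened image with
            an independent random binary mask -- a toy stand-in for optical
            random temporal encoding.
            """
            x = image.ravel().astype(float)
            masks = rng.integers(0, 2, size=(n_samples, x.size))  # random {0, 1} masks
            return masks @ x  # single-channel time series of length n_samples

        # Example: a 28x28 image (784 pixels) is compressed to 64 temporal samples.
        image = rng.random((28, 28))
        waveform = encode_image(image)
        print(waveform.shape)  # (64,)

    The compressed waveform could then be fed to any temporal classifier; in the paper this role is played by a photonic reservoir computer operating in the time domain.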

    Emerging opportunities and challenges for the future of reservoir computing

    Reservoir computing originated in the early 2000s; the core idea is to utilize dynamical systems as reservoirs (nonlinear generalizations of standard bases) to adaptively learn spatiotemporal features and hidden patterns in complex time series. Shown to have the potential to achieve higher-precision prediction of chaotic systems, those pioneering works attracted a great deal of interest and follow-up work in the nonlinear dynamics and complex systems community. To unlock the full capabilities of reservoir computing as a fast, lightweight, and significantly more interpretable learning framework for temporal dynamical systems, substantially more research is needed. This Perspective elucidates the parallel progress of mathematical theory, algorithm design, and experimental realizations of reservoir computing, and identifies emerging opportunities, as well as existing challenges, for large-scale industrial adoption, together with a few ideas and viewpoints on how some of those challenges might be resolved through joint efforts by academic and industrial researchers across multiple disciplines.
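
    To make the core idea concrete, below is a minimal echo state network sketch in Python, assuming the standard formulation in which a fixed random reservoir is driven by the input and only a linear, ridge-regressed readout is trained; all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(42)

        # Fixed random reservoir: only the readout W_out is ever trained.
        N = 200                                     # reservoir size
        W_in = rng.uniform(-0.5, 0.5, (N, 1))       # input weights
        W = rng.normal(0.0, 1.0, (N, N))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo state property)

        def run_reservoir(u):
            """Drive the reservoir with the scalar sequence u and collect states."""
            x, states = np.zeros(N), []
            for u_t in u:
                x = np.tanh(W @ x + W_in[:, 0] * u_t)
                states.append(x.copy())
            return np.array(states)

        # One-step-ahead prediction of a noisy sine wave.
        t = np.linspace(0, 40 * np.pi, 4000)
        u = np.sin(t) + 0.05 * rng.normal(size=t.size)
        X, y = run_reservoir(u[:-1]), u[1:]
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)  # ridge regression
        print("training MSE:", np.mean((X @ W_out - y) ** 2))

    Only the final linear solve constitutes training, which is what makes the framework fast and lightweight relative to fully trained recurrent networks.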


    Delayed Dynamical Systems: Networks, Chimeras and Reservoir Computing

    We present a systematic approach to reveal the correspondence between time-delay dynamics and networks of coupled oscillators. After early demonstrations of the usefulness of spatio-temporal representations of time-delay system dynamics, extensive research on optoelectronic feedback loops has revealed their immense potential for realizing complex system dynamics, such as chimeras in rings of coupled oscillators, and applications to reservoir computing. Delayed dynamical systems have been enriched in recent years through the application of digital signal processing techniques. Very recently, we showed that one can significantly extend these capabilities and implement networks with arbitrary topologies through the use of field-programmable gate arrays (FPGAs). This architecture allows the design of appropriate filters and multiple time delays, which greatly extend the possibilities for exploring synchronization patterns in networks of arbitrary topology. It has enabled us to explore complex dynamics on networks whose nodes can be perfectly identical, to introduce parameter heterogeneities and multiple time delays, and to change network topologies to control the formation and evolution of patterns of synchrony.
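
    The spatio-temporal representation mentioned above can be sketched numerically: slicing a scalar delay-system trajectory into consecutive delay intervals yields a space-time view in which each position within the delay loop acts as a node of a virtual network. The sketch below uses the Mackey-Glass delay equation as a stand-in delay system; the equation and its parameters are standard, but their use here as an example is our assumption.

        import numpy as np

        # Mackey-Glass delay system: dx/dt = 0.2 x_tau / (1 + x_tau^10) - 0.1 x,
        # integrated with a simple Euler scheme.
        tau, dt = 17.0, 0.1
        n_tau = int(tau / dt)              # samples per delay interval ("virtual nodes")
        n_steps = 200 * n_tau
        x = np.full(n_steps + n_tau, 1.2)  # constant history as initial condition
        for i in range(n_tau, n_steps + n_tau - 1):
            x_d = x[i - n_tau]
            x[i + 1] = x[i] + dt * (0.2 * x_d / (1 + x_d**10) - 0.1 * x[i])

        # Rows = round trips (pseudo-time), columns = virtual nodes (pseudo-space).
        space_time = x[n_tau:].reshape(-1, n_tau)
        print(space_time.shape)  # (200, 170)

    Plotting space_time as an image reveals the network-like patterns (including chimera-like coexistence of coherent and incoherent regions for suitable systems) that this line of research exploits.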

    Chain-structure time-delay reservoir computing for synchronizing chaotic signal and an application to secure communication

    In this work, a chain-structure time-delay reservoir (CSTDR) computer, a new kind of machine-learning-based recurrent neural network, is proposed for synchronizing chaotic signals. Compared with a single time-delay reservoir, the proposed CSTDR shows excellent performance in synchronizing chaotic signals, achieving an order of magnitude higher accuracy. Noise considerations and optimal parameter settings of the model are discussed. With the CSTDR at its core, a novel secure communication scheme is designed in which the “smart” receiver differs from traditional designs in that it adaptively synchronizes to the chaotic signal used for encryption. The scheme avoids issues found in conventional settings, such as the design constraints of identical dynamical systems and the coupling between transmitter and receiver. To further demonstrate its practical significance, a digital implementation using a field-programmable gate array is built and tested experimentally on real-world examples, including image and video transmission. This work sheds light on the development of machine-learning-based signal processing and communication applications.
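
    As a toy illustration of the chain idea (not the authors' exact CSTDR architecture, which uses time-delay reservoirs), the sketch below cascades two software reservoirs so that the second stage is driven by the first stage's readout, with both readouts ridge-regressed against a chaotic Lorenz signal; all names and parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(7)

        def reservoir_states(u, N=100, rho=0.9, seed=0):
            """Run a simple random reservoir over input u and return its state matrix."""
            r = np.random.default_rng(seed)
            W_in = r.uniform(-0.5, 0.5, N)
            W = r.normal(0.0, 1.0, (N, N))
            W *= rho / max(abs(np.linalg.eigvals(W)))   # echo state scaling
            u = u / (np.max(np.abs(u)) + 1e-12)         # keep the reservoir in its sensitive range
            x, states = np.zeros(N), []
            for u_t in u:
                x = np.tanh(W @ x + W_in * u_t)
                states.append(x.copy())
            return np.array(states)

        def ridge_readout(X, y, lam=1e-6):
            """Ridge-regress a linear readout mapping states X to target y."""
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

        # Chaotic target: x-component of the Lorenz system (Euler integration).
        dt, n = 0.01, 5000
        s = np.zeros((n, 3))
        s[0] = (1.0, 1.0, 1.0)
        for i in range(n - 1):
            xl, yl, zl = s[i]
            s[i + 1] = s[i] + dt * np.array([10 * (yl - xl), xl * (28 - zl) - yl, xl * yl - 8 / 3 * zl])
        target = s[:, 0]

        # Stage 1 is driven by the noisy received signal; stage 2 is driven by
        # stage 1's readout, forming a two-link chain.
        drive = target + 0.1 * rng.normal(size=n)
        X1 = reservoir_states(drive, seed=1)
        y1 = X1 @ ridge_readout(X1, target)
        X2 = reservoir_states(y1, seed=2)
        y2 = X2 @ ridge_readout(X2, target)
        print("stage-1 MSE:", np.mean((y1 - target) ** 2))
        print("stage-2 MSE:", np.mean((y2 - target) ** 2))

    In the paper's secure communication setting, an analogous chained receiver adaptively reconstructs the transmitter's chaotic carrier rather than requiring an identical, physically coupled copy of the transmitter's dynamics.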