Neural Lyapunov Control
We propose new methods for learning control policies and neural network
Lyapunov functions for nonlinear control problems, with provable guarantee of
stability. The framework consists of a learner that attempts to find the
control and Lyapunov functions, and a falsifier that finds counterexamples to
quickly guide the learner towards solutions. The procedure terminates when no
counterexample is found by the falsifier, in which case the controlled
nonlinear system is provably stable. The approach significantly simplifies the
process of Lyapunov control design, provides an end-to-end correctness
guarantee, and can obtain much larger regions of attraction than existing
methods such as LQR and SOS/SDP. We show in experiments how the new methods
obtain high-quality solutions for challenging control problems.
Comment: NeurIPS 2019
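As a rough illustration of the learner-falsifier loop this abstract describes, the sketch below alternates between training a neural Lyapunov candidate and searching for counterexamples. The toy plant, network sizes, and helper names are assumptions for illustration; in particular, the paper's falsifier is an SMT solver (dReal), for which dense random sampling stands in here.

```python
# Minimal sketch of the learner-falsifier loop, assuming a toy 2-D system.
# Illustrative only: the paper's falsifier is an SMT solver (dReal);
# dense random sampling stands in for it here.
import torch
import torch.nn as nn

def dynamics(x, u):
    # Hypothetical example plant: damped pendulum with torque input.
    theta, omega = x[:, 0], x[:, 1]
    return torch.stack([omega, -torch.sin(theta) - 0.1 * omega + u.squeeze(-1)], dim=1)

V = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))  # Lyapunov candidate
policy = nn.Linear(2, 1, bias=False)                              # control policy
opt = torch.optim.Adam(list(V.parameters()) + list(policy.parameters()), lr=1e-2)

def violations(x):
    # Returns V(x) and its Lie derivative along the closed-loop dynamics.
    x = x.detach().requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    vdot = (grad_v * dynamics(x, policy(x))).sum(dim=1, keepdim=True)
    return v, vdot

samples = (torch.rand(500, 2) - 0.5) * 4.0
for _ in range(200):
    # Learner: penalize V <= 0 and Vdot >= 0 on the sample set, pin V(0) = 0.
    opt.zero_grad()
    v, vdot = violations(samples)
    loss = (torch.relu(-v) + torch.relu(vdot)).mean() + V(torch.zeros(1, 2)).pow(2).sum()
    loss.backward()
    opt.step()
    # Falsifier (stand-in): search for violations away from the origin.
    cand = (torch.rand(10_000, 2) - 0.5) * 4.0
    v, vdot = violations(cand)
    bad = ((v <= 0) | (vdot >= 0)).squeeze(-1) & (cand.norm(dim=1) > 0.1)
    cex = cand[bad]
    if len(cex) == 0:
        break  # no counterexample found (the paper would certify this via SMT)
    samples = torch.cat([samples.detach(), cex[:100]])  # counterexamples guide the learner
```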
Control of quantum phenomena: Past, present, and future
Quantum control is concerned with active manipulation of physical and
chemical processes on the atomic and molecular scale. This work presents a
perspective on progress in the field of control over quantum phenomena, tracing
the evolution of theoretical concepts and experimental methods from early
developments to the most recent advances. The current experimental successes
would be impossible without the development of intense femtosecond laser
sources and pulse shapers. The two most critical theoretical insights were (1)
realizing that ultrafast atomic and molecular dynamics can be controlled via
manipulation of quantum interferences and (2) understanding that optimally
shaped ultrafast laser pulses are the most effective means for producing the
desired quantum interference patterns in the controlled system. Finally, these
theoretical and experimental advances were brought together by the crucial
concept of adaptive feedback control, which is a laboratory procedure employing
measurement-driven, closed-loop optimization to identify the best shapes of
femtosecond laser control pulses for steering quantum dynamics towards the
desired objective. Optimization in adaptive feedback control experiments is
guided by a learning algorithm, with stochastic methods proving to be
especially effective. Adaptive feedback control of quantum phenomena has found
numerous applications in many areas of the physical and chemical sciences, and
this paper reviews these extensive experiments. Other subjects discussed include
quantum optimal control theory, quantum control landscapes, the role of
theoretical control designs in experimental realizations, and real-time quantum
feedback control. The paper concludes with a perspective on open research
directions that are likely to attract significant attention in the future.
Comment: Review article, final version (significantly updated), 76 pages,
accepted for publication in New J. Phys. (Focus issue: Quantum control)
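To make the closed-loop idea concrete, here is a minimal sketch of adaptive feedback control with a stochastic learning algorithm: a (1+1) evolution strategy adjusts a pulse shaper's phase mask against a measured objective. The `measure_yield` function and all parameters are hypothetical placeholders for the laboratory measurement, not a model from the review.

```python
# Schematic of adaptive feedback control: a stochastic learning algorithm
# reshapes the spectral phase mask of a pulse shaper based on closed-loop
# measurements. `measure_yield` is a stand-in for the laboratory signal.
import numpy as np

rng = np.random.default_rng(0)

def measure_yield(phases):
    # Hypothetical objective: in an experiment this would be a detected signal
    # (e.g., product yield or fluorescence) after applying the shaped pulse.
    target = np.linspace(0.0, np.pi, phases.size)
    return -np.sum((phases - target) ** 2)

n_pixels = 128                                      # phase pixels of the pulse shaper
phases = rng.uniform(0.0, 2.0 * np.pi, n_pixels)
best = measure_yield(phases)
step = 0.5

for _ in range(5000):                               # (1+1) evolution strategy
    trial = phases + step * rng.normal(size=n_pixels)   # mutate the pulse shape
    y = measure_yield(trial)                            # apply pulse, measure outcome
    improved = y > best
    if improved:
        phases, best = trial, y                         # keep improving shapes
    step *= 1.05 if improved else 0.99                  # crude step-size adaptation
```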
Comparative evaluation of approaches in T.4.1-4.3 and working definition of adaptive module
The goal of this deliverable is two-fold: (1) to present and compare different approaches towards learning and encoding movements using dynamical systems that have been developed by the AMARSi partners (in the past and during the first six months of the project), and (2) to analyze their suitability to be used as adaptive modules, i.e. as building blocks for the complete architecture that will be developed in the project. The document presents a total of eight approaches, in two groups: modules for discrete movements (i.e. with a clear goal where the movement stops) and for rhythmic movements (i.e. which exhibit periodicity). The basic formulation of each approach is presented together with some illustrative simulation results. Key characteristics such as the type of dynamical behavior, learning algorithm, generalization properties, and stability analysis are then discussed for each approach. We then compare the approaches along these characteristics and discuss their suitability for the AMARSi project.
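As a minimal illustration of the two movement classes the deliverable compares, the sketch below integrates a point attractor (discrete: converges to a goal and stops) and a Hopf oscillator (rhythmic: converges to a limit cycle). The equations and gains are illustrative textbook assumptions, not any partner's actual module formulation.

```python
# Two classes of dynamical "adaptive modules": a point attractor for
# discrete movements and a Hopf oscillator for rhythmic movements.
import numpy as np

def discrete_step(x, v, goal, dt, k=25.0, d=10.0):
    # Damped second-order point attractor: x -> goal, v -> 0.
    a = k * (goal - x) - d * v
    return x + dt * v, v + dt * a

def rhythmic_step(x, y, dt, mu=1.0, omega=2.0 * np.pi):
    # Hopf oscillator: trajectories settle onto a circle of radius sqrt(mu).
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

dt = 1e-3
x, v = 0.0, 0.0
for _ in range(5000):
    x, v = discrete_step(x, v, goal=1.0, dt=dt)   # settles at the goal
px, py = 0.1, 0.0
for _ in range(5000):
    px, py = rhythmic_step(px, py, dt)            # settles onto a periodic orbit
```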
Connections Between Adaptive Control and Optimization in Machine Learning
This paper demonstrates many immediate connections between adaptive control
and optimization methods commonly employed in machine learning. Starting from
common output error formulations, similarities in update law modifications are
examined. Concepts in stability, performance, and learning, common to both
fields are then discussed. Building on the similarities in update laws and
common concepts, new intersections and opportunities for improved algorithm
analysis are provided. In particular, a specific problem related to higher
order learning is solved through insights obtained from these intersections.
Comment: 18 pages
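A minimal sketch of the kind of connection the paper draws, assuming a linear-in-parameters output-error setup: the normalized gradient adaptive law coincides, line for line, with a normalized SGD step on the instantaneous squared output error. The plant and constants are illustrative.

```python
# Output-error parallel between adaptive control and ML optimization:
# the normalized adaptive update law is an SGD step on 0.5 * e**2.
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0, 0.5])   # unknown plant parameters
theta_hat = np.zeros(3)                   # adjustable (learned) parameters
gamma = 0.5                               # adaptation gain == learning rate

for _ in range(1000):
    phi = rng.normal(size=3)              # regressor (measured signals)
    y = theta_true @ phi                  # plant output
    e = theta_hat @ phi - y               # output error
    # Normalized gradient adaptive law, i.e., normalized SGD on 0.5 * e**2:
    theta_hat -= gamma * e * phi / (1.0 + phi @ phi)
```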
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their metric performance. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook of
neuro-inspired photonic processing.
Comment: 28 pages, 19 figures
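For orientation, the sketch below shows the generic continuous-time neuron model that neuromorphic hardware, photonic or electronic, typically emulates: a leaky integrator with weighted fan-in and a saturating nonlinearity. This is a textbook abstraction under assumed parameters, not the device equations from this article.

```python
# Generic analog neuron model targeted by neuromorphic hardware: the weighted
# fan-in and saturating nonlinearity would be realized optically on a photonic
# platform. Purely illustrative parameters.
import numpy as np

def simulate_neuron(weights, inputs, tau=1.0, dt=1e-3, steps=5000):
    s = 0.0                                  # internal state (e.g., carrier density)
    out = np.empty(steps)
    for t in range(steps):
        drive = weights @ inputs(t * dt)     # weighted fan-in of input signals
        s += dt * (-s / tau + drive)         # leaky integration
        out[t] = np.tanh(s)                  # saturating output nonlinearity
    return out

w = np.array([0.8, -0.3, 0.5])
trace = simulate_neuron(w, lambda t: np.array([np.sin(2 * np.pi * t), 1.0,
                                               np.cos(2 * np.pi * t)]))
```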
A continuous-time analysis of distributed stochastic gradient
We analyze the effect of synchronization on distributed stochastic gradient
algorithms. By exploiting an analogy with dynamical models of biological quorum
sensing -- where synchronization between agents is induced through
communication with a common signal -- we quantify how synchronization can
significantly reduce the magnitude of the noise felt by the individual
distributed agents and by their spatial mean. This noise reduction is in turn
associated with a reduction in the smoothing of the loss function imposed by
the stochastic gradient approximation. Through simulations on model non-convex
objectives, we demonstrate that coupling can stabilize higher noise levels and
improve convergence. We provide a convergence analysis for strongly convex
functions by deriving a bound on the expected deviation of the spatial mean of
the agents from the global minimizer for an algorithm based on quorum sensing,
the same algorithm with momentum, and the Elastic Averaging SGD (EASGD)
algorithm. We discuss extensions to new algorithms which allow each agent to
broadcast its current measure of success and shape the collective computation
accordingly. We supplement our theoretical analysis with numerical experiments
on convolutional neural networks trained on the CIFAR-10 dataset, where we note
a surprising regularizing property of EASGD even when applied to the
non-distributed case. This observation suggests alternative second-order
in-time algorithms for non-distributed optimization that are competitive with
momentum methods.
Comment: 9/14/19: Final version, accepted for publication in Neural
Computation. 4/7/19: Significant edits: addition of simulations, deep
network results, and revisions throughout. 12/28/18: Initial submission
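A sketch of the elastic coupling at the heart of EASGD-style algorithms like those analyzed here: each agent takes a noisy gradient step while being pulled toward a shared center variable, which in turn drifts toward the agents. The toy objective, noise model, and constants are assumptions for illustration.

```python
# EASGD-style coupled agents: noisy gradient steps plus an elastic pull
# toward a shared center variable (the "quorum" signal).
import numpy as np

rng = np.random.default_rng(2)

def noisy_grad(x):
    # Gradient of the non-convex f(x) = x**4/4 - x**2/2 (minima at x = +/-1),
    # observed through additive noise (the stochastic gradient approximation).
    return x**3 - x + 0.5 * rng.normal(size=x.shape)

n_agents, eta, alpha = 8, 0.05, 0.05
x = rng.normal(size=n_agents)            # the distributed agents' parameters
center = x.mean()                        # shared center variable

for _ in range(2000):
    elastic = alpha * (x - center)
    x = x - eta * noisy_grad(x) - elastic   # agents: gradient step + coupling
    center = center + elastic.sum()         # center: pulled toward the agents
# Averaging over coupled agents damps the noise felt by the spatial mean x.mean().
```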