Dynamics of coherent and incoherent emission from an artificial atom in a 1D space
We study the dynamics of an artificial two-level atom in an open 1D space by
measuring the evolution of its coherent and incoherent emission. The states of
the atom -- a superconducting flux qubit coupled to a transmission line -- are
fully controlled by resonant microwave excitation pulses. The coherent emission
-- a direct measure of superposition in the atom -- exhibits decaying
oscillations shifted by π/2 from the oscillations of the incoherent emission,
which, in turn, is proportional to the atomic population. The emission dynamics
provides information about the states and properties of the atom. By measuring
the coherent dynamics, we derive the two-time correlation function of
fluctuations and, using the quantum regression formula, reconstruct the
incoherent spectrum of the resonance fluorescence triplet, which is in good
agreement with the directly measured one.
Comment: 4 pages, 4 figures
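The spectrum-from-correlations step above can be sketched numerically: by the quantum regression formula, the two-time correlation function inherits the decaying-oscillation form of the coherent dynamics, and its Fourier transform gives the incoherent spectrum. A minimal numpy sketch, with a purely hypothetical Rabi frequency and decay rate (not the paper's values):

```python
import numpy as np

omega_r = 2 * np.pi * 10e6          # hypothetical Rabi frequency (rad/s)
gamma = 2 * np.pi * 1e6             # hypothetical decoherence rate (rad/s)

tau = np.linspace(0, 5e-6, 4096)
# Toy two-time correlation: by the quantum regression formula it obeys
# the same decaying-oscillation dynamics as the coherent emission.
g = np.exp(-gamma * tau) * np.cos(omega_r * tau)

# Wiener-Khinchin: the incoherent spectrum is the Fourier transform of g;
# sidebands appear at +/- the Rabi frequency, as in the Mollow triplet.
spectrum = np.fft.fftshift(np.abs(np.fft.fft(g)))
freqs = np.fft.fftshift(np.fft.fftfreq(tau.size, d=tau[1] - tau[0]))
peak_detuning = abs(freqs[np.argmax(spectrum)])   # sideband offset from carrier
```

With these toy numbers the reconstructed sideband sits at the assumed Rabi frequency (10 MHz), the same structure the paper recovers for its measured triplet.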
Dynamical transitions in the evolution of learning algorithms by selection
We study the evolution of artificial learning systems by means of selection.
Genetic programming is used to generate a sequence of populations of algorithms
which can be used by neural networks for supervised learning of a rule that
generates examples. Instead of concentrating on final results, which would be
the natural aim when designing good learning algorithms, we study the evolution
process and pay particular attention to the temporal order of appearance of the
functional structures responsible for improvements in the learning process, as
measured by the generalization capabilities of the resulting algorithms. The
effect of such appearances can be described as dynamical phase transitions. The
concepts of phenotypic and genotypic entropy, which describe the distribution
of fitness in the population and the distribution of symbols, respectively, are
used to monitor the dynamics. In different runs the phase transitions may or
may not be present, with the system either finding good solutions or staying in
poor regions of algorithm space. Whenever phase transitions occur, the sequence
of appearances is the same. We identify combinations of variables and operators
which are useful in measuring experience or performance in rule extraction and
can thus implement useful annealing of the learning schedule.
Comment: 11 pages, 11 figures, 2 tables
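The genotypic entropy used above to monitor the dynamics is the Shannon entropy of the symbol distribution across the population's programs. A minimal sketch, with a hypothetical toy population (the symbols and programs are illustrative, not from the paper):

```python
import math
from collections import Counter

def genotypic_entropy(population):
    """Shannon entropy (bits) of the symbol distribution over all programs.

    `population` is a list of programs, each a sequence of symbols
    (variables and operators). Low entropy means the population has
    converged on few symbols; a sharp drop can accompany one of the
    dynamical transitions described above.
    """
    counts = Counter(sym for prog in population for sym in prog)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical toy populations of symbol strings:
diverse = ["x+y*w", "w-x/y", "y*y+x"]
converged = ["x+x+x", "x+x+x", "x+x+x"]
```

A diverse population has higher genotypic entropy than one that has collapsed onto a single program, which is how the measure distinguishes runs that keep exploring from runs stuck in poor regions of algorithm space.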
Recovering missing data on satellite images
Data Assimilation is commonly used in environmental sciences to improve forecasts, obtained by meteorological, oceanographic or air quality simulation models, with observation data. It aims to solve an evolution equation, describing the dynamics, and an observation equation, measuring the misfit between the state vector and the observations, to get a better knowledge of the actual system's state, named the reference. In this article, we describe how to use this technique to recover missing data and reduce noise on satellite images. The recovering process is based on assumptions on the underlying dynamics displayed by the sequence of images. This is a promising alternative to methods such as space-time interpolation. In order to better evaluate our approach, results are first quantified for an artificial noise applied on the acquisitions and then displayed for real data.
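The forecast/analysis cycle behind the evolution and observation equations can be illustrated with a minimal Kalman-filter sketch (not the authors' assimilation scheme; the advection dynamics, noise levels, and half-missing observation pattern below are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

n, steps = 32, 40
truth = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))  # toy "image row"
M = np.roll(np.eye(n), 1, axis=0)   # cyclic shift: a known advection model

x = np.zeros(n)                     # analysis state, deliberately poor guess
P = np.eye(n)                       # state error covariance
R, Q = 0.01, 1e-4                   # observation / model error variances

for _ in range(steps):
    truth = M @ truth               # the (unknown) reference evolves
    # Forecast step: propagate state and covariance with the dynamics.
    x = M @ x
    P = M @ P @ M.T + Q * np.eye(n)
    # Observe only half the pixels; the rest are "missing data".
    obs_idx = rng.choice(n, size=n // 2, replace=False)
    H = np.eye(n)[obs_idx]
    y = truth[obs_idx] + rng.normal(0.0, np.sqrt(R), size=n // 2)
    # Analysis step: the Kalman update corrects observed pixels now and,
    # via the dynamics, fills the unobserved ones over later cycles.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R * np.eye(n // 2))
    x = x + K @ (y - H @ x)
    P = (np.eye(n) - K @ H) @ P

rmse = np.sqrt(np.mean((x - truth) ** 2))
```

Even though half the pixels are missing at every step, the dynamics carries information between cycles, so the analysis converges toward the reference rather than interpolating in space-time alone.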
Quantum Artificial Life in an IBM Quantum Computer
We present the first experimental realization of a quantum artificial life
algorithm in a quantum computer. The quantum biomimetic protocol encodes
tailored quantum behaviors belonging to living systems, namely,
self-replication, mutation, interaction between individuals, and death, into
the cloud quantum computer IBM ibmqx4. In this experiment, entanglement spreads
throughout generations of individuals, where genuine quantum information
features are inherited through genealogical networks. As a pioneering
proof-of-principle, the experimental data fits the ideal model with good
accuracy. Thereafter, these and other models of quantum artificial life, whose
quantum supremacy evolution no classical device may predict, can be further
explored in novel generations of quantum computers. Quantum biomimetics,
quantum machine learning, and quantum artificial intelligence will move forward
hand in hand through more elaborate levels of quantum complexity.
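Because the no-cloning theorem forbids copying an unknown quantum state, biomimetic self-replication copies only the genotype's classical information into a fresh individual through an entangling gate. A minimal numpy sketch of that idea (a toy illustration, not the ibmqx4 protocol; the angle is arbitrary):

```python
import numpy as np

# Parent "genotype": a single-qubit state cos(theta)|0> + sin(theta)|1>.
theta = 0.7                                          # arbitrary toy angle
parent = np.array([np.cos(theta), np.sin(theta)])
state = np.kron(parent, np.array([1.0, 0.0]))        # attach blank offspring

# A CNOT with the parent as control entangles parent and offspring;
# basis ordering is |parent, offspring>.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = cnot @ state

# Afterwards both qubits carry identical |1>-populations: the classical
# genotype information is inherited through the entangling link, while
# the full quantum state is never cloned.
probs = state ** 2                    # amplitudes are real in this toy case
p_parent_1 = probs[2] + probs[3]      # parent measured in |1>
p_child_1 = probs[1] + probs[3]       # offspring measured in |1>
```

Both populations equal sin²(theta), so the offspring inherits the genotype's measurable content even though the joint state is entangled rather than a product of two copies.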
Measuring autonomy and emergence via Granger causality
Concepts of emergence and autonomy are central to artificial life and related cognitive and behavioral sciences. However, quantitative and easy-to-apply measures of these phenomena are mostly lacking. Here, I describe quantitative and practicable measures for both autonomy and emergence, based on the framework of multivariate autoregression and specifically Granger causality. G-autonomy measures the extent to which knowing the past of a variable helps predict its future, as compared to predictions based on past states of external (environmental) variables. G-emergence measures the extent to which a process is both dependent upon and autonomous from its underlying causal factors. These measures are validated by application to agent-based models of predation (for autonomy) and flocking (for emergence). In the former, evolutionary adaptation enhances autonomy; the latter model illustrates not only emergence but also downward causation. I end with a discussion of the relations among autonomy, emergence, and consciousness.
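The G-autonomy measure can be sketched directly from its definition: compare the residual variance of an autoregression on the environment's past alone against one that also includes the variable's own past. A minimal lag-1 sketch with hypothetical toy signals (not the predation model from the paper):

```python
import numpy as np

def residual_variance(y, regressors):
    """Residual variance of y after least-squares regression on regressors."""
    X = np.column_stack(regressors + [np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def g_autonomy(x, env, lag=1):
    """Log ratio of prediction-error variances: environment-only model
    versus environment plus x's own past. Positive values mean x's own
    history adds predictive power, i.e. x is G-autonomous."""
    y = x[lag:]
    env_past = [e[:-lag] for e in env]
    restricted = residual_variance(y, env_past)
    full = residual_variance(y, env_past + [x[:-lag]])
    return np.log(restricted / full)

# Toy check: an AR(1) process is autonomous from its drive; pure
# observation noise with no self-dependence is not.
rng = np.random.default_rng(1)
e = rng.normal(size=2000)                 # hypothetical environmental series
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.9 * x[t - 1] + 0.1 * e[t]    # depends strongly on its own past
noise = rng.normal(size=2000)             # no self-dependence at all
```

On these signals `g_autonomy(x, [e])` is large while `g_autonomy(noise, [e])` is near zero, matching the intended reading of the measure.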
Evolving collective behavior in an artificial ecology
Collective behavior refers to coordinated group motion, common to many animals. The dynamics of a group can be seen as a distributed model, each "animal" applying the same rule set. This study investigates the use of evolved sensory controllers to produce schooling behavior. A set of artificial creatures "live" in an artificial world with hazards and food. Each creature has a simple artificial neural network brain that controls movement in different situations. A chromosome encodes the network structure and weights, which may be combined using artificial evolution with another chromosome, if a creature should choose to mate. Prey and predators coevolve without an explicit fitness function for schooling to produce sophisticated, nondeterministic behavior. The work highlights the role of a species' physiology in understanding behavior and the role of the environment in encouraging the development of sensory systems.
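A hand-coded boids-style controller illustrates what "each animal applying the same rule set" means, although the study above evolves its controllers rather than specifying rules; the cohesion/alignment rules and all weights below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
pos = rng.normal(0.0, 5.0, size=(n, 2))   # scattered creatures in 2D
vel = rng.normal(0.0, 1.0, size=(n, 2))   # random initial headings

def spread(p):
    """Mean distance of creatures from the group centre."""
    return np.mean(np.linalg.norm(p - p.mean(axis=0), axis=1))

before = spread(pos)
for _ in range(100):
    cohesion = pos.mean(axis=0) - pos      # steer toward the flock centre
    alignment = vel.mean(axis=0) - vel     # match the average heading
    vel += 0.05 * cohesion + 0.05 * alignment
    vel *= 0.9                             # drag keeps speeds bounded
    pos += 0.1 * vel
after = spread(pos)
```

Identical local rules in every agent produce group-level schooling (the spread shrinks markedly); the paper's point is that comparable coordination can emerge from coevolution alone, with no such rules or schooling objective written in by hand.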
Global adaptation in networks of selfish components: emergent associative memory at the system scale
In some circumstances complex adaptive systems composed of numerous self-interested agents can self-organise into structures that enhance global adaptation, efficiency or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalisation and optimisation, are well-understood. Such global functions within a single agent or organism are not wholly surprising since the mechanisms (e.g. Hebbian learning) that create these neural organisations may be selected for this purpose, but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviours when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully-distributed habituation or positive feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g. when they can influence which other agents they interact with) then, in adapting these inter-agent relationships to maximise their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviours as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalise by idealising stored patterns and/or creating new combinations of sub-patterns. 
Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviours in the same sense, and by the same mechanism, as the organisational principles familiar in connectionist models of organismic learning.
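The claimed homology can be made concrete: if agent i's utility is u_i = s_i * Σ_j w[i,j] s_j and agent i controls the relationships w[i], then ∂u_i/∂w[i,j] = s_i s_j, so purely selfish gradient ascent on the relationships is exactly a Hebbian update, and the adapted system recalls visited configurations like an associative memory. A toy sketch (the model and all parameters are hypothetical, not the paper's simulations):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20

# A configuration the system repeatedly visits while agents adapt their
# relationships. Each agent's selfish weight gradient is s_i * s_j,
# i.e. a Hebbian update on the inter-agent connections.
p = rng.choice([-1, 1], size=n)
w = np.zeros((n, n))
for _ in range(10):
    w += 0.1 * np.outer(p, p)             # self-interested = Hebbian
np.fill_diagonal(w, 0)                    # no self-connections

# System-scale consequence: associative recall. Corrupt a few agents'
# states and let each agent best-respond to maximise its own utility;
# the previously visited configuration is recovered.
s = p.copy()
s[:3] *= -1                               # corrupt 3 of 20 agents
for _ in range(5):
    for i in range(n):
        s[i] = 1 if w[i] @ s >= 0 else -1
recovered = np.array_equal(s, p)
```

No agent aims at recall; the Hopfield-like attractor is a side effect of each agent adjusting its own relationships for its own benefit, which is the system-scale associative memory described above.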