1,656 research outputs found
Contact-Aided Invariant Extended Kalman Filtering for Legged Robot State Estimation
This paper derives a contact-aided inertial navigation observer for a 3D
bipedal robot using the theory of invariant observer design. Aided inertial
navigation is fundamentally a nonlinear observer design problem; thus, current
solutions are based on approximations of the system dynamics, such as an
Extended Kalman Filter (EKF), which uses a system's Jacobian linearization
along the current best estimate of its trajectory. On the basis of the theory
of invariant observer design by Barrau and Bonnabel, and in particular, the
Invariant EKF (InEKF), we show that the error dynamics of the point
contact-inertial system follows a log-linear autonomous differential equation;
hence, the observable state variables can be rendered convergent with a domain
of attraction that is independent of the system's trajectory. Due to the
log-linear form of the error dynamics, it is not necessary to perform a
nonlinear observability analysis to show that when using an Inertial
Measurement Unit (IMU) and contact sensors, the absolute position of the robot
and a rotation about the gravity vector (yaw) are unobservable. We further
augment the state of the developed InEKF with IMU biases, as the online
estimation of these parameters has a crucial impact on system performance. We
evaluate the convergence of the proposed system with the commonly used
quaternion-based EKF observer using a Monte-Carlo simulation. In addition, our
experimental evaluation using a Cassie-series bipedal robot shows that the
contact-aided InEKF provides better performance in comparison with the
quaternion-based EKF as a result of exploiting symmetries present in the system
dynamics.
Comment: Published in the proceedings of Robotics: Science and Systems 201
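The trajectory-independence property claimed above can be illustrated numerically on the attitude part of the state alone: when two attitude trajectories are propagated with the same gyro input, the invariant error between them stays exactly constant, regardless of what that input is. A minimal numpy sketch (illustrative only; the paper's contact-aided filter operates on a larger matrix Lie group that also carries velocity, position, and contact states):

```python
import numpy as np

def so3_exp(w):
    """Rodrigues formula: map a rotation vector w to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

rng = np.random.default_rng(0)

# True and estimated attitudes start with some initial error.
R_true = so3_exp(rng.normal(size=3))
R_est = R_true @ so3_exp(0.3 * rng.normal(size=3))

# Propagate both with the same (arbitrary) gyro inputs.
dt = 0.01
errors = []
for _ in range(200):
    w = rng.normal(size=3)        # body angular-velocity sample
    step = so3_exp(w * dt)
    R_true = R_true @ step
    R_est = R_est @ step
    # The invariant error R_est @ R_true.T is unchanged by propagation.
    errors.append(R_est @ R_true.T)

drift = np.linalg.norm(errors[-1] - errors[0])
print(f"invariant-error drift over 200 steps: {drift:.2e}")
```

The error matrix is constant up to floating-point accumulation, whatever trajectory the gyro samples produce; a conventional EKF's linearized error, by contrast, depends on the estimated trajectory.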
Seeing things
This paper is concerned with the problem of attaching meaningful symbols to aspects of the visible environment in machine and biological vision. It begins with a review of some of the arguments commonly used to support either the 'symbolic' or the 'behaviourist' approach to vision. Having explored these avenues without arriving at a satisfactory conclusion, we then present a novel argument, which starts from the question: given a functional description of a vision system, when could it be said to support a symbolic interpretation? We argue that to attach symbols to a system, its behaviour must exhibit certain well-defined regularities in its response to its visual input, and that these are best described in terms of invariance and equivariance to transformations which act in the world and induce corresponding changes of the vision system state. This approach is illustrated with a brief exploration of the problem of identifying and acquiring visual representations having these symmetry properties, which also highlights the advantages of using an 'active' model of vision.
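The equivariance criterion invoked above can be made concrete with a toy example: a circular convolution is equivariant to shifts, i.e. filtering a shifted signal gives the same result as shifting the filtered signal. A minimal sketch with an illustrative 1-D signal and kernel (the signal, kernel, and shift are arbitrary choices, not taken from the paper):

```python
import numpy as np

def conv1d_circular(x, k):
    """Circular 1-D correlation of signal x with kernel k."""
    n = len(x)
    return np.array([sum(k[j] * x[(i + j) % n] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 4.0, 2.0, 7.0, 3.0, 0.0])   # toy "visual input"
k = np.array([0.25, 0.5, 0.25])                # toy smoothing filter

shift = 2
shifted_then_filtered = conv1d_circular(np.roll(x, shift), k)
filtered_then_shifted = np.roll(conv1d_circular(x, k), shift)

# Equivariance: the filter commutes with the world transformation (a shift).
print(np.allclose(shifted_then_filtered, filtered_then_shifted))  # prints: True
```

A transformation acting in the world (here, a shift) induces exactly the corresponding change in the system state, which is the regularity the paper proposes as the basis for a symbolic interpretation.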
Searching for collective behavior in a network of real neurons
Maximum entropy models are the least structured probability distributions
that exactly reproduce a chosen set of statistics measured in an interacting
network. Here we use this principle to construct probabilistic models which
describe the correlated spiking activity of populations of up to 120 neurons in
the salamander retina as it responds to natural movies. Already in groups as
small as 10 neurons, interactions between spikes can no longer be regarded as
small perturbations in an otherwise independent system; for 40 or more neurons
pairwise interactions need to be supplemented by a global interaction that
controls the distribution of synchrony in the population. We show that
such "K-pairwise" models, systematic extensions of the previously used
pairwise Ising models, provide an excellent account of the data. We explore the
properties of the neural vocabulary by: 1) estimating its entropy, which
constrains the population's capacity to represent visual information; 2)
classifying activity patterns into a small set of metastable collective modes;
3) showing that the neural codeword ensembles are extremely inhomogeneous; 4)
demonstrating that the state of individual neurons is highly predictable from
the rest of the population, enabling error correction.
Comment: 24 pages, 19 figures
Representation Learning: A Review and New Perspectives
The success of machine learning algorithms generally depends on data
representation, and we hypothesize that this is because different
representations can entangle and hide more or less the different explanatory
factors of variation behind the data. Although specific domain knowledge can be
used to help design representations, learning with generic priors can also be
used, and the quest for AI is motivating the design of more powerful
representation-learning algorithms implementing such priors. This paper reviews
recent work in the area of unsupervised feature learning and deep learning,
covering advances in probabilistic models, auto-encoders, manifold learning,
and deep networks. This motivates longer-term unanswered questions about the
appropriate objectives for learning good representations, for computing
representations (i.e., inference), and the geometrical connections between
representation learning, density estimation, and manifold learning.
Using Decoupled Features for Photo-realistic Style Transfer
In this work we propose a photorealistic style transfer method for image and
video that is based on vision science principles and on a recent mathematical
formulation for the deterministic decoupling of sample statistics. The novel
aspects of our approach include matching decoupled moments of higher order than
in common style transfer approaches, and matching a descriptor of the power
spectrum so as to characterize and transfer diffusion effects between source
and target, which is something that has not been considered before in the
literature. The results are of high visual quality, without spatio-temporal
artifacts, and validation tests in the form of observer preference experiments
show that our method compares very well with the state-of-the-art. The
computational complexity of the algorithm is low, and we propose a numerical
implementation that is amenable for real-time video application. Finally,
another contribution of our work is to point out that current deep learning
approaches for photorealistic style transfer do not actually achieve
photorealistic quality outside of limited examples, because their results too
often show unacceptable visual artifacts.
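The core operation, transferring statistics from a style image to a content image, can be illustrated with first- and second-moment matching on a single channel. This is only a sketch of the general idea: the paper's method matches decoupled moments of higher order and a power-spectrum descriptor, neither of which this reproduces, and the arrays below are synthetic stand-ins for image channels.

```python
import numpy as np

def match_moments(source, target):
    """Shift and scale `source` so its mean and std match `target`'s.

    First/second-moment matching only; the paper additionally matches
    decoupled higher-order moments and a power-spectrum descriptor.
    """
    s_mean, s_std = source.mean(), source.std()
    t_mean, t_std = target.mean(), target.std()
    return (source - s_mean) / (s_std + 1e-12) * t_std + t_mean

rng = np.random.default_rng(2)
source = rng.uniform(0.2, 0.8, size=(64, 64))   # stand-in "content" channel
target = rng.uniform(0.0, 0.4, size=(64, 64))   # stand-in "style" channel

out = match_moments(source, target)
print(f"matched mean: {out.mean():.3f}  target mean: {target.mean():.3f}")
```

Because the operation is a deterministic per-channel affine map, it is cheap and free of the spatial artifacts that optimization-based deep transfer can introduce, which is consistent with the low computational complexity the abstract reports.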
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available