
    On Variational Data Assimilation in Continuous Time

    Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are adopted in part from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered a continuous-time generalisation of what is known as weakly constrained four-dimensional variational assimilation (WC-4DVAR) in the geosciences. The technique makes it possible to assimilate trajectories in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. For (nearly) perfect models, this trade-off turns out to be (nearly) trivial in some sense, yet allowing for some dynamical error is shown to have positive effects even in this situation. The presented formalism is dynamical in character; no assumptions need to be made about the presence (or absence) of dynamical or observational noise, let alone about their statistics. (28 pages, 12 figures)
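    As a hedged illustration (not quoted from the paper), a weak-constraint, continuous-time formulation of this kind typically minimises a cost functional of the form below over candidate trajectories x(t); here f is the model vector field, h the observation operator, y(t) the observations, and Q, R weights on the dynamical and observational error terms. These symbols and the time window [0, T] are assumptions chosen for illustration.

```latex
% Sketch of a weak-constraint, continuous-time variational cost functional.
% Symbols (f, h, y, Q, R, window [0,T]) are illustrative assumptions.
\[
J[x] \;=\; \frac{1}{2}\int_{0}^{T}
  \bigl\lVert \dot{x}(t) - f\bigl(x(t)\bigr) \bigr\rVert_{Q^{-1}}^{2}
  \;+\;
  \bigl\lVert y(t) - h\bigl(x(t)\bigr) \bigr\rVert_{R^{-1}}^{2}\,\mathrm{d}t
\]
```

    Minimising J balances the dynamical error (first term) against the observational error (second term); the stationarity conditions of such a minimisation are what lead to the two-point boundary value problem mentioned in the abstract.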

    Sparse Dynamic Factor Models with Loading Selection by Variational Inference

    In this paper we develop a novel approach for estimating large and sparse dynamic factor models using variational inference, also allowing for missing data. Inspired by Bayesian variable selection, we place slab-and-spike priors on the factor loadings to induce sparsity. An algorithm is developed to find locally optimal mean-field approximations of the posterior distributions; these can be computed quickly, making the method suitable for nowcasting and frequently updated analyses in practice. We evaluate the method in two simulation experiments, which show well-identified sparsity patterns and precise loading and factor estimation.
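    To make the generative structure concrete, here is a minimal, hypothetical simulation sketch of the kind of sparse dynamic factor model described above (slab-and-spike loadings on an AR(1) factor process). The dimensions, hyperparameters, and variable names are assumptions, and this is not the authors' estimation code, which uses variational inference rather than simulation.

```python
# Minimal sketch (not the paper's code): simulate a sparse dynamic factor model
# with slab-and-spike loadings. Dimensions and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_factors, n_time = 30, 3, 200
pi_slab, slab_sd, obs_sd, ar_coef = 0.3, 1.0, 0.5, 0.8

# Slab-and-spike loadings: each entry is zero (spike) with prob 1 - pi_slab,
# otherwise drawn from a Gaussian slab.
mask = rng.random((n_series, n_factors)) < pi_slab
Lambda = mask * rng.normal(0.0, slab_sd, size=(n_series, n_factors))

# Factors follow independent AR(1) dynamics.
F = np.zeros((n_time, n_factors))
for t in range(1, n_time):
    F[t] = ar_coef * F[t - 1] + rng.normal(0.0, 1.0, n_factors)

# Observations: y_t = Lambda f_t + noise (missing entries could be masked out here).
Y = F @ Lambda.T + rng.normal(0.0, obs_sd, size=(n_time, n_series))

print("nonzero loadings:", int(mask.sum()), "of", mask.size)
```

    A variational estimator would then approximate the posterior over the loadings and factors with a factorised (mean-field) distribution; that inference step is deliberately omitted from this sketch.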

    Signal processing for microwave imaging systems with very sparse array

    This dissertation investigates image reconstruction algorithms for near-field, two-dimensional (2D) synthetic aperture radar (SAR) using compressed sensing (CS) based methods. In conventional SAR imaging systems, acquiring higher-quality images requires longer measurement times and/or more elements in the antenna array. Millimeter-wave imaging systems using evenly-spaced antenna arrays also have spatial resolution constraints due to the large size of the antennas. This dissertation applies the CS principle to a bistatic antenna array that consists of separate transmitter and receiver subarrays very sparsely and non-uniformly distributed on a 2D plane. One pair of transmitter and receiver elements is turned on at a time, and different pairs are turned on in series to achieve a synthetic aperture and controlled random measurements. This dissertation contributes to CS-hardware co-design by proposing several signal-processing methods, including monostatic approximation, re-gridding, adaptive interpolation, CS-based reconstruction, and image denoising. The proposed algorithms enable the successful implementation of CS-SAR hardware cameras, improve resolution and image quality, and reduce hardware cost and experiment time. This dissertation also describes and analyzes the results for each independent method. The algorithms proposed in this dissertation break the limitations of the hardware configuration. Using 16 × 16 transmit and receive elements with an average spacing of 16 mm, the sparse-array camera achieves an image resolution of 2 mm. This is equivalent to six percent of the λ/4 evenly-spaced array. The reconstructed images achieve quality similar to that of the fully-sampled array, with structural similarity (SSIM) larger than 0.8 and peak signal-to-noise ratio (PSNR) greater than 25.
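    As a hedged, generic illustration of the compressed-sensing reconstruction step (not the dissertation's actual algorithm or data), the sketch below recovers a sparse signal from random undersampled linear measurements using iterative soft-thresholding (ISTA). The measurement matrix, dimensions, and regularisation weight are assumptions.

```python
# Generic ISTA sketch for sparse recovery from undersampled measurements.
# Not the dissertation's reconstruction code; sizes and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8            # signal length, number of measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

A = rng.normal(0, 1.0 / np.sqrt(m), size=(m, n))   # random measurement matrix
y = A @ x_true                                      # undersampled measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1 / Lipschitz constant of the gradient
lam = 0.01                                          # sparsity weight
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

    In an imaging setting the same idea applies with the forward operator mapping the scene reflectivity to the sparse-array measurements, and the thresholding acting in a transform domain where the image is sparse.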

    Hierarchical Models in the Brain

    This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
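    As a hedged sketch of the layered structure described above (the notation is an assumption following common state-space conventions, not a quotation from the paper), each level i passes its output down as the causal input of the level below, with hidden states x^(i), causes v^(i), and random fluctuations z^(i), w^(i):

```latex
% Illustrative hierarchical dynamic model; notation assumed, not quoted.
% Requires amsmath for the aligned environment.
\[
\begin{aligned}
  y             &= g^{(1)}\bigl(x^{(1)}, v^{(1)}\bigr) + z^{(1)},\\
  v^{(i-1)}     &= g^{(i)}\bigl(x^{(i)}, v^{(i)}\bigr) + z^{(i)}, \qquad i = 2,\dots,m,\\
  \dot{x}^{(i)} &= f^{(i)}\bigl(x^{(i)}, v^{(i)}\bigr) + w^{(i)}.
\end{aligned}
\]
```

    Dropping the dynamics and taking g linear recovers the general linear model for static data, while nonlinear f and g with system noise give the generalised convolution models mentioned in the abstract.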

    Mathematical theory of the Goddard trajectory determination system

    Basic mathematical formulations are presented for coordinate and time systems, perturbation models, orbital estimation techniques, observation models, and numerical integration methods.
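    As a generic, hedged illustration of the numerical integration component (not the Goddard system's actual formulation), the sketch below propagates a two-body orbit with a fixed-step fourth-order Runge-Kutta integrator; the state layout, gravitational parameter, and step size are assumptions.

```python
# Generic two-body orbit propagation with RK4; illustrative only,
# not the Goddard trajectory determination system's formulation.
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2, standard Earth gravitational parameter

def two_body(state):
    """state = [x, y, z, vx, vy, vz] in km and km/s."""
    r = state[:3]
    a = -MU_EARTH * r / np.linalg.norm(r) ** 3
    return np.concatenate([state[3:], a])

def rk4_step(state, dt):
    k1 = two_body(state)
    k2 = two_body(state + 0.5 * dt * k1)
    k3 = two_body(state + 0.5 * dt * k2)
    k4 = two_body(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Roughly circular low Earth orbit as an example initial condition.
state = np.array([7000.0, 0.0, 0.0, 0.0, 7.546, 0.0])
for _ in range(5400):              # propagate with a 1 s step
    state = rk4_step(state, 1.0)
print("position after ~90 min:", state[:3])
```

    A real orbit determination system would add perturbation models to the acceleration and fit the propagated trajectory to observations, which is the estimation part the abstract refers to.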

    Deep learning for fast and robust medical image reconstruction and analysis

    Medical imaging is an indispensable component of modern medical research as well as clinical practice. Nevertheless, imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) are costly and less accessible to the majority of the world. To make medical devices more accessible, affordable and efficient, it is crucial to re-calibrate our current imaging paradigm for smarter imaging. In particular, as medical imaging techniques have highly structured forms in the way they acquire data, they provide us with an opportunity to optimise the imaging techniques holistically by leveraging data. The central theme of this thesis is to explore different opportunities where we can exploit data and deep learning to improve the way we extract information for better, faster and smarter imaging. This thesis explores three distinct problems. The first problem is the time-consuming nature of dynamic MR data acquisition and reconstruction. We propose deep learning methods for accelerated dynamic MR image reconstruction, resulting in up to a 10-fold reduction in imaging time. The second problem is the redundancy in our current imaging pipeline. Traditionally, the imaging pipeline has treated acquisition, reconstruction and analysis as separate steps. However, we argue that one can approach them holistically and optimise the entire pipeline jointly for a specific target goal. To this end, we propose deep learning approaches for obtaining high-fidelity cardiac MR segmentation directly from significantly undersampled data, greatly exceeding the undersampling limit for image reconstruction. The final part of this thesis tackles the problem of interpretability of deep learning algorithms. We propose attention models that can implicitly focus on salient regions in an image to improve accuracy for ultrasound scan plane detection and CT segmentation. More crucially, these models can provide explainability, which is a crucial stepping stone for the harmonisation of smart imaging and current clinical practice.
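    As a hedged sketch of one ingredient commonly used in accelerated MR reconstruction pipelines (a k-space data-consistency step), and not a claim about this thesis's exact architecture, the snippet below replaces a network's predicted k-space values with the acquired samples wherever the undersampling mask is set. Array names, shapes, and the toy data are assumptions.

```python
# Generic k-space data-consistency step for undersampled MRI reconstruction.
# Illustrative only; not the thesis's actual model. Shapes and names are assumed.
import numpy as np

def data_consistency(x_cnn, k_sampled, mask):
    """x_cnn: complex image predicted by a network, shape (H, W).
    k_sampled: acquired (undersampled) k-space, shape (H, W).
    mask: boolean sampling mask, True where k-space was measured."""
    k_cnn = np.fft.fft2(x_cnn, norm="ortho")
    # Keep measured samples, fill unmeasured locations from the prediction.
    k_out = np.where(mask, k_sampled, k_cnn)
    return np.fft.ifft2(k_out, norm="ortho")

# Toy usage with random data.
rng = np.random.default_rng(2)
H, W = 64, 64
x_cnn = rng.normal(size=(H, W)) + 1j * rng.normal(size=(H, W))
mask = rng.random((H, W)) < 0.25                       # ~4x undersampling
k_sampled = mask * np.fft.fft2(rng.normal(size=(H, W)), norm="ortho")
x_dc = data_consistency(x_cnn, k_sampled, mask)
print(x_dc.shape, x_dc.dtype)
```

    Enforcing consistency with the acquired samples after each network stage is one common way such methods keep the reconstruction faithful to the measured data while the network fills in the missing k-space.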

    Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation

    Interacting with the environment using hands is one of the distinctive abilities of humans with respect to other species. This aptitude reflects the crucial role played by objects' manipulation in the world that we have shaped for ourselves. With a view to bringing robots out of industry to support people in everyday life, the ability to manipulate objects autonomously and in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterized by great complexity, especially regarding the processing of sensor information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, proprioception for awareness of the relative position of their own body in space, and the sense of touch for local information when physical interaction with objects happens. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve similar performance in manipulating objects. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, and it has drawn increasing research attention in recent years. In this Thesis, we propose possible solutions to some key components of autonomous manipulation, focusing in particular on the perception problem and testing the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modeling and grasping pipeline based on superquadric functions we designed meets this need, since it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object. Retrieving object information with touch sensors only is a relevant skill that becomes crucial when vision is occluded, as happens for instance during physical interaction with the object. We addressed this problem with the design of a novel tactile localization algorithm, named Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object surface. Another key point of autonomous manipulation we report on in this Thesis is bi-manual coordination. The execution of more advanced manipulation tasks might in fact require the use and coordination of two arms. Tool usage, for instance, often requires a proper in-hand object pose that can be obtained via dual-arm re-grasping. In pick-and-place tasks, the initial and target positions of the object sometimes do not belong to the same arm's workspace, requiring one hand to lift the object and the other to place it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other. The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This actually differs from what humans do, in that humans develop their manipulation skills by learning through experience and a trial-and-error strategy. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved to be successful in many robotics applications. For this reason, in this Thesis we also report on the six-month experience carried out at the Berkeley Artificial Intelligence Research laboratory, with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation.
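    As a hedged illustration of the superquadric modelling idea mentioned above (a generic sketch, not the thesis's actual pipeline), the snippet below evaluates the standard superquadric inside-outside function on a point cloud given shape parameters. Parameter names and values are assumptions; a real pipeline would estimate these parameters, together with a pose, by fitting them to the partial point cloud.

```python
# Generic superquadric inside-outside function; illustrative only,
# not the thesis's grasping pipeline. Parameter values are assumed.
import numpy as np

def superquadric_F(points, a1, a2, a3, e1, e2):
    """points: (N, 3) array expressed in the superquadric's own frame.
    Returns F < 1 inside, F = 1 on the surface, F > 1 outside."""
    x, y, z = np.abs(points).T
    xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

# Toy check: random points against a box-like superquadric (small exponents).
rng = np.random.default_rng(3)
pts = rng.uniform(-0.05, 0.05, size=(100, 3))
F = superquadric_F(pts, a1=0.06, a2=0.04, a3=0.08, e1=0.3, e2=0.3)
print("fraction of points inside:", np.mean(F < 1.0))
```

    Fitting then amounts to choosing the size and shape parameters (and a pose) that make F close to 1 for the observed surface points, after which grasp poses can be computed on the recovered analytic surface.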