
    Signal processing for microwave imaging systems with very sparse array

    This dissertation investigates image reconstruction algorithms for near-field, two-dimensional (2D) synthetic aperture radar (SAR) using compressed sensing (CS) based methods. In conventional SAR imaging systems, acquiring higher-quality images requires longer measurement time and/or more elements in an antenna array. Millimeter-wave imaging systems using evenly spaced antenna arrays also face spatial resolution constraints due to the large size of the antennas. This dissertation applies the CS principle to a bistatic antenna array that consists of separate transmitter and receiver subarrays distributed very sparsely and non-uniformly on a 2D plane. One pair of transmitter and receiver elements is turned on at a time, and different pairs are turned on in series to achieve a synthetic aperture and controlled random measurements. This dissertation contributes to CS-hardware co-design by proposing several signal-processing methods, including monostatic approximation, re-gridding, adaptive interpolation, CS-based reconstruction, and image denoising. The proposed algorithms enable the successful implementation of CS-SAR hardware cameras, improve resolution and image quality, and reduce hardware cost and experiment time. This dissertation also describes and analyzes the results for each independent method. The algorithms proposed in this dissertation overcome limitations of the hardware configuration. Using 16 x 16 transmit and receive elements with an average spacing of 16 mm, the sparse-array camera achieves an image resolution of 2 mm with only six percent of the element count of a λ/4 evenly spaced array. The reconstructed images achieve quality similar to the fully sampled array, with structural similarity (SSIM) larger than 0.8 and peak signal-to-noise ratio (PSNR) greater than 25 --Abstract, page iv
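    The sparse-recovery step at the heart of CS imaging can be sketched in a few lines. The example below is illustrative only, not the dissertation's reconstruction chain: it recovers a k-sparse signal from far fewer random measurements than unknowns using orthogonal matching pursuit (OMP); the sizes and the Gaussian measurement matrix are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground truth: n samples, only k nonzero entries, m << n measurements.
n, k, m = 100, 5, 40
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

# Random (compressed) measurements.
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, then re-fit the coefficients by least squares."""
    residual = y.copy()
    support, coef = [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonal residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.2e}")
```

In this regime (m = 40 measurements for a 5-sparse signal of length 100) greedy recovery succeeds with high probability, which is the same under-sampling phenomenon the sparse-array camera exploits.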

    Autonomous Exploration of Large-Scale Natural Environments

    This thesis addresses issues which arise when using robotic platforms to explore large-scale, natural environments. Two main problems are identified: the volume of data collected by autonomous platforms and the complexity of planning surveys in large environments. Autonomous platforms are able to rapidly accumulate large data sets. The volume of data that must be processed is often too large for human experts to analyse exhaustively in a practical amount of time or in a cost-effective manner. This burden can create a bottleneck in the process of converting observations into scientifically relevant data. Although autonomous platforms can collect precisely navigated, high-resolution data, they are typically limited by finite battery capacities, data storage and computational resources. Deployments are also limited by project budgets and time frames. These constraints make it impractical to sample large environments exhaustively. To use the limited resources effectively, trajectories which maximise the amount of information gathered from the environment must be designed. This thesis addresses these problems. Three primary contributions are presented: a new classifier designed to accept probabilistic training targets rather than discrete training targets; a semi-autonomous pipeline for creating models of the environment; and an offline method for autonomously planning surveys. These contributions allow large data sets to be processed with minimal human intervention and promote efficient allocation of resources. In this thesis environmental models are established by learning the correlation between data extracted from a digital elevation model (DEM) of the seafloor and habitat categories derived from in-situ images. The DEM of the seafloor is collected using ship-borne multibeam sonar and the in-situ images are collected using an autonomous underwater vehicle (AUV). 
    While the thesis specifically focuses on mapping and exploring marine habitats with an AUV, the research applies equally to other applications, such as aerial and terrestrial environmental monitoring and planetary exploration.
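    The first contribution, a classifier that accepts probabilistic rather than discrete training targets, can be illustrated with a minimal sketch (not the thesis's actual model): a logistic model fit by gradient descent on cross-entropy against soft labels in [0, 1]. The data, weights, and learning rate are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy features with *probabilistic* targets: each label is a probability
# (e.g. expert confidence in a habitat class), not a hard 0/1 class.
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])
p_target = sigmoid(X @ true_w)          # soft labels in (0, 1)

# Gradient descent on cross-entropy; the usual logistic-regression update
# applies unchanged when the targets are probabilities.
w = np.zeros(2)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - p_target) / len(X)

print(w)  # approaches true_w
```

The point of the sketch is that the cross-entropy gradient `X.T @ (p - p_target)` is identical whether the targets are discrete or probabilistic, so uncertainty in the training labels is carried through training for free.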

    System Design, Motion Modelling and Planning for a Reconfigurable Wheeled Mobile Robot

    Over the past five decades the use of mobile robotic rovers to perform in-situ scientific investigations on the surfaces of the Moon and Mars has been tremendously influential in shaping our understanding of these extraterrestrial environments. As robotic missions have evolved there has been a greater desire to explore more unstructured terrain. This has exposed mobility limitations with conventional rover designs such as getting stuck in soft soil or simply not being able to access rugged terrain. Increased mobility and terrain traversability are key requirements when considering designs for next generation planetary rovers. Coupled with these requirements is the need to autonomously navigate unstructured terrain by taking full advantage of increased mobility. To address these issues, a high degree-of-freedom reconfigurable platform that is capable of energy-intensive legged locomotion in obstacle-rich terrain as well as wheeled locomotion in benign terrain is proposed. The complexities of the planning task that considers the high degree-of-freedom state space of this platform are considerable. A variant of asymptotically optimal sampling-based planners that exploits the presence of dominant sub-spaces within a reconfigurable mobile robot's kinematic structure is proposed to increase path quality and ensure platform safety. The contributions of this thesis include: the design and implementation of a highly mobile planetary analogue rover; motion modelling of the platform to enable novel locomotion modes, along with experimental validation of each of these capabilities; the sampling-based HBFMT* planner that hierarchically considers sub-spaces to better guide search of the complete state space; and experimental validation of the planner with the physical platform that demonstrates how the planner exploits the robot's capabilities to fluidly transition between various physical geometric configurations and wheeled/legged locomotion modes.
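    As a point of reference for the sampling-based family that HBFMT* belongs to, here is a minimal 2-D RRT sketch (a basic RRT, not HBFMT* itself, and without the hierarchical sub-space machinery); the workspace, obstacle, and step size are arbitrary demo values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Plan from start to goal in a 10 x 10 workspace with one circular obstacle.
start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
obstacle_c, obstacle_r = np.array([5.0, 5.0]), 2.0
step = 0.5

def collision_free(p):
    return np.linalg.norm(p - obstacle_c) > obstacle_r

nodes, parents = [start], [0]
for _ in range(5000):
    # Sample a target (20% goal bias), extend the nearest node toward it.
    target = goal if rng.random() < 0.2 else rng.uniform(0.0, 10.0, 2)
    i = int(np.argmin([np.linalg.norm(n - target) for n in nodes]))
    d = target - nodes[i]
    new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-9)
    if collision_free(new):
        nodes.append(new)
        parents.append(i)
        if np.linalg.norm(new - goal) < step:
            break

# Walk parent pointers back from the last node to recover the path.
path, i = [], len(nodes) - 1
while i != 0:
    path.append(nodes[i])
    i = parents[i]
path.append(start)
path.reverse()
print(f"found a path with {len(path)} waypoints")
```

The hierarchical idea in the thesis goes further: search a dominant low-dimensional sub-space (e.g. the base pose) first and use it to guide sampling of the full configuration space, rather than sampling all degrees of freedom uniformly as this sketch does.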

    Statistical models and decision making for robotic scientific information gathering

    Submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, September 2018.
    Mobile robots and autonomous sensors have seen increasing use in scientific applications, from planetary rovers surveying for signs of life on Mars, to environmental buoys measuring and logging oceanographic conditions in coastal regions. This thesis makes contributions in both planning algorithms and model design for autonomous scientific information gathering, demonstrating how theory from machine learning, decision theory, theory of optimal experimental design, and statistical inference can be used to develop online algorithms for robotic information gathering that are robust to modeling errors, account for spatiotemporal structure in scientific data, and have probabilistic performance guarantees. This thesis first introduces a novel sample selection algorithm for online, irrevocable sampling in data streams that have spatiotemporal structure, such as those that commonly arise in robotics and environmental monitoring. Given a limited sampling capacity, the proposed periodic secretary algorithm uses an information-theoretic reward function to select samples in real-time that maximally reduce posterior uncertainty in a given scientific model. Additionally, we provide a lower bound on the quality of samples selected by the periodic secretary algorithm by leveraging the submodularity of the information-theoretic reward function. Finally, we demonstrate the robustness of the proposed approach by employing the periodic secretary algorithm to select samples irrevocably from a seven-year oceanographic data stream collected at the Martha's Vineyard Coastal Observatory off the coast of Cape Cod, USA.
    Secondly, we consider how scientific models can be specified in environments – such as the deep sea or deep space – where domain scientists may not have enough a priori knowledge to formulate a formal scientific model and hypothesis. These domains require scientific models that start with very little prior information and construct a model of the environment online as observations are gathered. We propose unsupervised machine learning as a technique for scientific model learning in these environments. To this end, we introduce a hybrid Bayesian-deep learning model that learns a nonparametric topic model of a visual environment. We use this semantic visual model to identify observations that are poorly explained by the current model, and show experimentally that these highly perplexing observations often correspond to scientifically interesting phenomena. On a marine dataset collected by the SeaBED AUV on the Hannibal Sea Mount, images of high perplexity in the learned model corresponded, for example, to a scientifically novel crab congregation in the deep sea. The approaches presented in this thesis capture the depth and breadth of the problems facing the field of autonomous science. Developing robust autonomous systems that enhance our ability to perform exploratory science in environments such as the oceans, deep space, and agricultural and disaster-relief zones will require insight and techniques from classical areas of robotics, such as motion and path planning, mapping, and localization, and from other domains, including machine learning, spatial statistics, optimization, and theory of experimental design. This thesis demonstrates how theory and practice from these diverse disciplines can be unified to address problems in autonomous scientific information gathering.
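    The submodularity argument behind the sample-selection guarantee can be made concrete with a toy sketch (not the periodic secretary algorithm itself): greedy selection of sensing locations under a GP prior, where each pick maximizes posterior-variance reduction. The kernel, grid, noise level, and budget are invented for the demo.

```python
import numpy as np

# Candidate sensing locations on a 1-D transect; squared-exponential GP prior.
xs = np.linspace(0, 10, 50)
K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2)   # prior covariance
noise = 1e-3

def greedy_select(K, budget):
    """Greedily pick samples that maximize posterior-variance reduction.
    Because the variance/entropy-reduction reward is submodular, greedy
    selection enjoys a (1 - 1/e) approximation guarantee."""
    chosen = []
    cov = K.copy()
    for _ in range(budget):
        j = int(np.argmax(np.diag(cov)))              # most uncertain point
        chosen.append(j)
        kj = cov[:, j:j + 1]
        cov = cov - kj @ kj.T / (cov[j, j] + noise)   # GP rank-1 update
    return chosen, cov

chosen, cov = greedy_select(K, budget=5)
print(sorted(chosen), float(np.diag(cov).max()))
```

The online setting in the thesis is harder, since samples arrive in a stream and each accept/reject decision is irrevocable, but the same submodular reward is what makes a performance lower bound possible.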

    Statistical models and decision making for robotic scientific information gathering

    Thesis: S.M., Joint Program in Applied Ocean Physics and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2018, by Genevieve Elaine Flaspohler. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 97-107). The abstract is identical to the preceding record.

    Efficient Environment Sensing and Learning for Mobile Robots

    Data-driven learning is becoming an integral part of many robotic systems. Robots can be used as mobile sensors to learn about the environment in which they operate. Robots can also seek to learn essential skills, such as navigation, within the environment. A critical challenge in both types of learning is sample efficiency. Acquiring samples with physical robots can be prohibitively time-consuming. As a result, when applying learning techniques in robotics that require physical interaction with the environment, minimizing the number of such interactions becomes key. The key question we seek to answer is: how do we make robots learn efficiently with a minimal amount of physical interaction? We approach this question along two fronts: extrinsic learning and intrinsic learning. In extrinsic learning, we want the robot to learn about the external environment in which it is operating. In intrinsic learning, our focus is on the robot learning a skill, such as navigating in an environment, using reinforcement learning (RL). In this dissertation, we develop algorithms that carefully plan where the robots obtain samples in order to perform intrinsic and extrinsic learning efficiently. In particular, we exploit the structural properties of Gaussian Process (GP) regression to design efficient sampling algorithms. We study two types of problems under extrinsic learning. We start with the problem of efficiently learning a spatially varying field modeled by a GP. Our goal is to ensure that the GP posterior variance, which is also the mean square error between the learned and actual fields, is below a predefined value. By exploiting the underlying properties of the GP, we present a series of constant-factor approximation algorithms for minimizing the number of stationary sensors to place, minimizing the total time taken by a single robot, and minimizing the time taken by a team of robots to learn the field. Here, we assume that the GP hyperparameters are known.
    We then study a variant where our goal is to identify the hotspot in an environment. Here we do not assume that the hyperparameters are known. For this problem, we present Upper Confidence Bound (UCB) and Monte Carlo Tree Search (MCTS) based algorithms for a single robot and later extend them to decentralized multi-robot teams. We also validate their performance on real-world datasets. For intrinsic learning, our aim is to reduce the number of physical interactions by leveraging simulations, an approach known as Multi-Fidelity Reinforcement Learning (MFRL). In the MFRL framework, an agent uses multiple simulators of the real environment to perform actions. We present two versions of the MFRL framework, model-based and model-free, that leverage GPs to learn the optimal policy in a real-world environment. By incorporating GPs in the MFRL framework, we empirically observe a significant reduction in the number of samples needed for model-based and model-free learning.
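    The UCB-based hotspot search can be illustrated with a toy 1-D sketch (not the dissertation's algorithms): a GP surrogate with an optimism bonus chooses where to sample next. The kernel, grid, bonus weight of 2, and hotspot location are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown field with a hotspot near x = 7.3; the robot samples sequentially.
xs = np.linspace(0, 10, 101)
f = np.exp(-0.5 * (xs - 7.3) ** 2)
kernel = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
noise = 1e-4

sampled, values = [], []
for t in range(15):
    if not sampled:
        mu, var = np.zeros_like(xs), np.ones_like(xs)   # GP prior
    else:
        Xs = xs[sampled]
        Kss = kernel(Xs, Xs) + noise * np.eye(len(Xs))
        Ks = kernel(xs, Xs)
        alpha = np.linalg.solve(Kss, np.array(values))
        mu = Ks @ alpha                                  # posterior mean
        var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(Kss), Ks)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))       # optimism bonus
    j = int(np.argmax(ucb))                              # sample here next
    sampled.append(j)
    values.append(f[j] + rng.normal(0, 1e-3))

best = float(xs[sampled[int(np.argmax(values))]])
print(f"estimated hotspot location: {best:.2f}")
```

Early on the bonus term dominates and the rule explores uncovered regions; once uncertainty falls, the posterior mean pulls sampling toward the hotspot, which is the exploration-exploitation trade-off UCB formalizes.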