
    Inertial learning and haptics for legged robot state estimation in visually challenging environments

    Legged robots have enormous potential to automate dangerous or dirty jobs because they are capable of traversing a wide range of difficult terrains, such as stairs or mud. However, a significant challenge preventing the widespread deployment of legged robots is the lack of robust state estimation, particularly in visually challenging conditions such as darkness or smoke. In this thesis, I address these challenges by exploiting proprioceptive sensing from inertial, kinematic and haptic sensors to provide more accurate state estimation when visual sensors fail. Four methods are presented: haptic localisation, terrain semantic localisation, learned inertial odometry, and deep learning to infer the evolution of IMU biases. The first approach exploits haptics as a source of proprioceptive localisation by comparing geometric information to a prior map. The second method expands on this concept by fusing both semantic and geometric information, allowing for accurate localisation on diverse terrain. Next, I combine new techniques in inertial learning with classical IMU integration and legged robot kinematics to provide more robust state estimation. This is further developed to use only IMU data, for an application entirely different from robotics: 3D reconstruction of bone with a handheld ultrasound scanner. Finally, I present the novel idea of using deep learning to infer the evolution of IMU biases, improving state estimation in exteroceptive systems where vision fails.

    Legged robots have the potential to benefit society by automating dangerous, dull, or dirty jobs and by assisting first responders in emergency situations. However, many unsolved challenges remain before legged robots can be deployed in the real world, including accurate state estimation in vision-denied environments. The work presented in this thesis takes a step towards solving these challenges and enabling the deployment of legged robots in a variety of applications.
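The drift behaviour that motivates inferring IMU biases can be illustrated with a minimal 1-D dead-reckoning sketch (illustrative only: the constant-bias model, sample rate, and bias value below are assumptions, not taken from the thesis):

```python
import numpy as np

def integrate_imu(accel, dt, bias=0.0):
    """Double-integrate accelerometer samples to position (1-D).
    Subtracting an estimated bias before integration shows why accurate
    bias inference matters: an uncorrected bias drifts roughly as t^2."""
    vel = np.cumsum((accel - bias) * dt)
    pos = np.cumsum(vel * dt)
    return pos

dt = 0.01                      # 100 Hz IMU
t = np.arange(0, 10, dt)
true_accel = np.zeros_like(t)  # robot standing still
bias = 0.05                    # constant accelerometer bias in m/s^2
measured = true_accel + bias

drift_uncorrected = integrate_imu(measured, dt)      # no bias estimate
drift_corrected = integrate_imu(measured, dt, bias)  # perfect bias estimate

print(f"drift after 10 s, uncorrected: {drift_uncorrected[-1]:.2f} m")  # ~2.5 m
print(f"drift after 10 s, corrected:   {drift_corrected[-1]:.2f} m")    # 0.00 m
```

Even this tiny, constant bias produces metres of position error within seconds, which is why the thesis treats bias evolution as something worth learning rather than assuming.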

    Efficient Graph-based Computation and Analytics

    With data explosion in many domains, such as social media, large code repositories, the Internet of Things (IoT), and inertial sensors, only 32% of the data available to academia and industry is put to work, while the remaining 68% goes unleveraged. Moreover, people face a growing number of obstacles to complex analytics on data of this scale, including: 1) How to perform dynamic graph analytics in a parallel and robust manner within a reasonable time? 2) How to conduct performance optimizations on a property graph representing the semantics of code, data, and runtime systems for big data applications? 3) How to innovate neural graph approaches (i.e., Transformers) to solve realistic research problems, such as automated program repair and inertial navigation? To tackle these problems, I present two efforts along this road: efficient graph-based computation and intelligent graph analytics. Specifically, I first propose two theory-based dynamic graph models to characterize temporal trends in large social media networks, then implement and optimize them atop Apache Spark GraphX to improve their performance. In addition, I investigate a semantics-aware optimization framework, consisting of offline static analysis and online dynamic analysis on a property graph representing the skeleton of a data-intensive application, to interactively and semi-automatically assist programmers in scrutinizing the performance problems camouflaged in the source code. In the design of intelligent graph-based algorithms, I develop novel neural graph-based approaches with multi-task learning techniques to repair a broad range of programming bugs automatically, and to improve the accuracy of pedestrian navigation systems using only sensor data from Inertial Measurement Units (IMUs), i.e., accelerometer, gyroscope, and magnetometer.
In this dissertation, I elaborate on the definitions of these research problems and leverage the knowledge of graph computation, program analysis, and deep learning techniques to seek solutions to them, followed by comprehensive comparisons with state-of-the-art baselines and discussions of future research.
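The idea of characterizing temporal trends over a dynamic graph can be conveyed with a small stand-alone sketch (the thesis builds on Apache Spark GraphX; the edge stream, snapshot scheme, and metric below are illustrative assumptions):

```python
from collections import defaultdict

class DynamicGraph:
    """Toy in-memory dynamic graph: edges arrive over time and a
    per-snapshot metric (here, average degree) records the temporal trend."""
    def __init__(self):
        self.adj = defaultdict(set)

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def avg_degree(self):
        if not self.adj:
            return 0.0
        return sum(len(n) for n in self.adj.values()) / len(self.adj)

# Replay timestamped edges; record the metric whenever the timestamp advances.
edge_stream = [(0, (1, 2)), (0, (2, 3)), (1, (1, 3)), (2, (3, 4))]
g, trend, current = DynamicGraph(), [], 0
for ts, (u, v) in edge_stream:
    if ts != current:
        trend.append((current, g.avg_degree()))
        current = ts
    g.add_edge(u, v)
trend.append((current, g.avg_degree()))
print(trend)
```

A distributed engine such as GraphX parallelizes exactly this kind of per-snapshot computation across partitions of a much larger graph.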

    Pushing the limits of inertial motion sensing


    Towards Optimization and Robustification of Data-Driven Models

    In the past two decades, data-driven models have experienced a renaissance, with notable success achieved by models such as deep neural networks (DNNs) in various applications. However, complete reliance on intelligent machine learning systems is still a distant dream. Nevertheless, the initial success of data-driven approaches presents a promising path for building trustworthy data-oriented models. This thesis aims to take a few steps toward improving the performance of existing data-driven frameworks in both the training and testing phases. Specifically, we focus on several key questions: 1) How to efficiently design optimization methods for learning algorithms that can be used in parallel settings, and also when first-order information is unavailable? 2) How to revise existing adversarial attacks on DNNs into structured attacks with minimal distortion of benign samples? 3) How to integrate attention models such as Transformers into data-driven inertial navigation systems? 4) How to address the lack-of-data problem for existing data-driven models and enhance the performance of existing semi-supervised learning (SSL) methods? In terms of parallel optimization methods, we investigate a delay-aware asynchronous variance-reduced coordinate descent approach. Additionally, we develop a proximal zeroth-order algorithm for nonsmooth nonconvex problems where first-order information is unavailable, and extend our study to zeroth-order stochastic gradient descent. As for robustness, we develop a structured white-box adversarial attack to advance research on robust machine learning schemes. Furthermore, we investigate a group threat model in which adversaries can only perturb image segments, rather than the entire image, to generate adversarial examples.
We also explore the use of attention models, specifically Transformers, for deep inertial navigation systems based on the Inertial Measurement Unit (IMU). To address data scarcity during training, we propose quantifying the uncertainty of the unlabeled data and the corresponding pseudo-labels, and incorporating it into the loss term to compensate for noisy pseudo-labelling. We also extend this generic semi-supervised method to data-driven noise suppression frameworks by utilizing a reinforcement learning (RL) model to learn contrastive features in an SSL fashion. Each chapter of the thesis presents a problem and our solution as concrete algorithms. We verify our approaches through comparisons with existing methods on different benchmarks and discuss future research directions.
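A minimal sketch of the uncertainty-weighted pseudo-labelling idea described above (the specific weighting, normalized predictive entropy, is an assumption for illustration; the thesis' exact formulation may differ):

```python
import numpy as np

def uncertainty_weighted_pseudo_loss(probs, eps=1e-12):
    """Weight each pseudo-label's cross-entropy by a confidence term
    derived from predictive entropy, so noisy (high-entropy) pseudo-labels
    contribute less to the unlabeled loss. Illustrative sketch only."""
    probs = np.asarray(probs, dtype=float)
    pseudo = probs.argmax(axis=1)                        # hard pseudo-labels
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    max_entropy = np.log(probs.shape[1])
    weight = 1.0 - entropy / max_entropy                 # 1 = confident, 0 = uniform
    ce = -np.log(probs[np.arange(len(probs)), pseudo] + eps)
    return float((weight * ce).mean())

# A confident prediction keeps near-full weight; an ambiguous one is
# heavily down-weighted, suppressing its (likely wrong) pseudo-label.
probs = [[0.95, 0.03, 0.02],
         [0.40, 0.35, 0.25]]
print(f"{uncertainty_weighted_pseudo_loss(probs):.4f}")
```

The net effect is that the unlabeled loss is dominated by samples the model is already sure about, which is the compensation for noisy pseudo-labels that the abstract describes.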

    Sensor fusion with Gaussian processes

    This thesis presents a new approach to multi-rate sensor fusion for (1) user matching and (2) position stabilisation and lag reduction. The Microsoft Kinect sensor and the inertial sensors in a mobile device are fused using a Gaussian Process (GP) prior method. We present a GP prior model-based framework for multisensor data fusion and explore its use for fusing mobile inertial sensors with an external position-sensing device. The GP prior model provides a principled mechanism for incorporating low-sampling-rate position measurements and high-sampling-rate derivatives in multi-rate sensor fusion, taking account of the uncertainty of each sensor type. We explore the complementary properties of the Kinect sensor and the built-in inertial sensors in a mobile device, and apply the GP framework for sensor fusion in mobile human-computer interaction. The GP prior model-based sensor fusion is presented as a principled probabilistic approach to dealing with position uncertainty and system lag, which are critical for indoor augmented reality (AR) and other location-aware sensing applications. The sensor fusion increases the stability of the position estimate and reduces lag, which is of great benefit for the usability of a human-computer interaction system. We develop two applications using the novel and improved GP prior model: (1) user matching and identification, in which we apply the GP model to identify individual users by matching observed Kinect skeletons with the sensed inertial data from their mobile devices; and (2) position stabilisation and lag reduction in a spatially aware display application for improving user performance. We conduct a user study.
    Experimental results show improved accuracy of target selection and reduced delay from the sensor fusion system, allowing users to acquire the target more rapidly and with fewer errors compared with the Kinect-filtered system. Participants also reported improved performance in their responses to subjective questions. The two applications can be combined seamlessly in a proxemic interaction system, as identifying people and their positions in a room-sized environment plays a key role in proxemic interactions.
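The multi-rate fusion described above can be sketched as a single joint GP over a latent trajectory and its derivative, with low-rate position samples and high-rate velocity samples entering one Gaussian (the 1-D setup, RBF kernel, and hyperparameters below are assumptions for illustration; the thesis fuses Kinect positions with mobile inertial derivatives):

```python
import numpy as np

def rbf(x1, x2, sigma=1.0, ell=0.5):
    """RBF kernel values and pairwise differences; the differences are
    reused to build the derivative cross-covariances below."""
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-d**2 / (2 * ell**2)), d

def gp_fuse(t_pos, y_pos, t_vel, y_vel, t_star,
            sigma=1.0, ell=0.5, noise_pos=0.05, noise_vel=0.05):
    """Posterior mean of f(t_star) given low-rate positions y_pos = f(t_pos)
    and high-rate derivatives y_vel = f'(t_vel), under one GP prior.
    Uses cov(f, f') = k * d / ell^2 and cov(f', f') = k * (1 - d^2/ell^2) / ell^2."""
    Kpp, _ = rbf(t_pos, t_pos, sigma, ell)
    Kpv_k, d_pv = rbf(t_pos, t_vel, sigma, ell)
    Kpv = Kpv_k * d_pv / ell**2
    Kvv_k, d_vv = rbf(t_vel, t_vel, sigma, ell)
    Kvv = Kvv_k * (1 - d_vv**2 / ell**2) / ell**2
    K = np.block([[Kpp + noise_pos**2 * np.eye(len(t_pos)), Kpv],
                  [Kpv.T, Kvv + noise_vel**2 * np.eye(len(t_vel))]])
    Ksp, _ = rbf(t_star, t_pos, sigma, ell)
    Ksv_k, d_sv = rbf(t_star, t_vel, sigma, ell)
    Ks = np.hstack([Ksp, Ksv_k * d_sv / ell**2])
    y = np.concatenate([y_pos, y_vel])
    return Ks @ np.linalg.solve(K, y)

# Sparse positions (like Kinect) plus dense derivatives (like inertial rates)
# recover the trajectory between and ahead of the slow position samples.
t_pos = np.array([0.0, 1.0, 2.0])
t_vel = np.arange(0.0, 2.01, 0.1)
t_star = np.array([0.5, 1.5])
pred = gp_fuse(t_pos, np.sin(t_pos), t_vel, np.cos(t_vel), t_star)
print(np.round(pred, 3))
```

The dense derivative channel is what pins down the trajectory between the slow position updates, which is the mechanism behind the stabilisation and lag reduction reported in the abstract.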