    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles with very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames, and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to resolve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.
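    The stereo variant of this idea can be sketched in a few lines: detect edges, estimate disparity, and keep 3D points only at edge pixels so that thin structures survive. This is a minimal illustration, not the authors' method; the intrinsics, baseline and matcher settings below are assumed placeholder values.

```python
# Minimal sketch: triangulate edge pixels from a rectified stereo pair.
import cv2
import numpy as np

fx, fy, cx, cy = 700.0, 700.0, 320.0, 240.0   # assumed intrinsics (pixels)
baseline = 0.12                                # assumed stereo baseline (metres)

def thin_obstacle_points(left_gray, right_gray, max_disp=64):
    """Return an (N, 3) array of 3D points reconstructed at edge pixels."""
    edges = cv2.Canny(left_gray, 50, 150)                  # edge map of left image
    # Semi-global matching gives a dense disparity map; keep it only at edges.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=max_disp,
                                 blockSize=5)
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    v, u = np.nonzero((edges > 0) & (disp > 0.5))           # valid edge pixels
    z = fx * baseline / disp[v, u]                          # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Points closer than a clearance threshold could then be flagged as obstacles:
# obstacles = pts[pts[:, 2] < 3.0]
```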

    Image-Guided Robot-Assisted Techniques with Applications in Minimally Invasive Therapy and Cell Biology

    There are several situations where tasks can be performed better robotically than manually. Among these are situations (a) where high accuracy and robustness are required, (b) where difficult or hazardous working conditions exist, and (c) where very large or very small motions or forces are involved. Recent advances in technology have resulted in smaller robots with higher accuracy and reliability. As a result, robotics is finding more and more applications in biomedical engineering. Medical robotics and cell micro-manipulation are two of these applications, involving interaction with delicate living organs at very different scales. The availability of a wide range of imaging modalities, from ultrasound and X-ray fluoroscopy to high-magnification optical microscopes, makes it possible to use imaging as a powerful means to guide and control robot manipulators. This thesis comprises three parts, focusing on three applications of image-guided robotics in biomedical engineering. Vascular catheterization: a robotic system was developed to insert a catheter through the vasculature and guide it to a desired point via visual servoing. The system provides shared control with the operator to perform a task semi-automatically or through master-slave control, and it controls the catheter tip with high accuracy while reducing X-ray exposure to the clinicians and providing a more ergonomic situation for the cardiologists. Cardiac catheterization: a master-slave robotic system was developed to accurately control a steerable catheter to touch and ablate faulty regions on the inner walls of a beating heart in order to treat arrhythmia. The system facilitates making contact with a target point in a beating heart chamber through master-slave control with coordinated visual feedback. Live neuron micro-manipulation: a microscope image-guided robotic system was developed to provide shared control over multiple micro-manipulators to touch cell membranes in order to perform patch-clamp electrophysiology. Image-guided robot-assisted techniques with master-slave control were implemented in each case to provide shared control between a human operator and a robot. The results show increased accuracy and reduced operation time in all three cases.
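    The visual-servoing loop underlying such image-guided control can be illustrated with a simple proportional update that drives a tracked tip toward a target pixel. This is an illustrative sketch only, not the thesis implementation; the image Jacobian J, gain, and the send_velocity() interface are hypothetical placeholders.

```python
# Sketch of one image-based visual-servoing control step.
import numpy as np

def servo_step(tip_px, target_px, J, gain=0.5):
    """Map the image-space tracking error to actuator velocities."""
    error = np.asarray(target_px, float) - np.asarray(tip_px, float)  # pixels
    # Least-squares inverse of the image Jacobian converts pixel error into
    # actuator velocity; the gain sets how aggressively we correct per step.
    dq = gain * np.linalg.pinv(J) @ error
    return dq

# Example with an assumed 2x2 Jacobian relating two actuators to pixel motion:
J = np.array([[12.0, 0.5],
              [0.3, 11.0]])
dq = servo_step(tip_px=(310, 242), target_px=(320, 240), J=J)
# send_velocity(dq)  # hypothetical robot/catheter actuation interface
```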

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, the application of such techniques within safety-critical systems such as driverless cars remains scarce. Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road or highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks in intelligent mobility. Specifically, the thesis investigates multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions. To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It utilizes an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which requires large datasets to generalize to unfamiliar surroundings. The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence.
    Within intelligent mobility it is imperative that an autonomous vehicle be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging (LiDAR) sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point cloud. The corresponding 3-dimensional data are converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%. When compared to an alternative point cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
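    The camera-guided segmentation step described above can be sketched as follows: a person detection in the image defines a 2D box, LiDAR points are projected into the image, and only points falling inside the box are kept for activity classification. This is a hedged illustration, not the thesis code; the intrinsics K and extrinsics (R, t) are hypothetical calibration values.

```python
# Sketch: use a camera detection box to segment the human region of a LiDAR scan.
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])            # assumed camera intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])    # assumed LiDAR->camera extrinsics

def segment_human_points(points_xyz, bbox):
    """Keep LiDAR points whose image projection lies inside a detection box."""
    cam = points_xyz @ R.T + t                 # transform into the camera frame
    in_front = cam[:, 2] > 0.1                 # discard points behind the camera
    uvw = cam[in_front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]              # perspective division to pixels
    x0, y0, x1, y1 = bbox
    inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
             (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    return points_xyz[in_front][inside]

# roi = segment_human_points(scan, bbox=(250, 80, 400, 470))
# The roi would then be encoded (e.g. as a Fisher Vector) and classified by a CNN.
```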

    A modified model for the Lobula Giant Movement Detector and its FPGA implementation

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and its proximity. It has been found that it can respond to looming stimuli very quickly and trigger avoidance reactions, and it has been successfully applied in visual collision-avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional depth-direction information for the movement. The proposed model retains the simplicity of the previous model by adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
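    A minimal numerical sketch of an LGMD-style looming detector, loosely following the excitation/inhibition structure described above, is shown below. It is an illustration only, not the modified model or the FPGA implementation from the paper; the weight and threshold are assumed values.

```python
# Toy LGMD-style looming detector: excitation from luminance change,
# delayed spatially-spread inhibition, and a thresholded membrane potential.
import numpy as np
from scipy.ndimage import uniform_filter

class SimpleLGMD:
    def __init__(self, inhibition_weight=0.4, threshold=0.15):
        self.prev_frame = None   # previous grey frame (photoreceptor layer)
        self.prev_exc = None     # previous excitation (source of delayed inhibition)
        self.w_i = inhibition_weight
        self.threshold = threshold

    def step(self, frame):
        """Return (membrane_potential, spike) for one grey frame scaled to [0, 1]."""
        if self.prev_frame is None:
            self.prev_frame = frame
            self.prev_exc = np.zeros_like(frame)
            return 0.0, False
        exc = np.abs(frame - self.prev_frame)          # P layer: luminance change
        inh = uniform_filter(self.prev_exc, size=3)    # delayed, laterally spread inhibition
        s = np.maximum(exc - self.w_i * inh, 0.0)      # S layer: excitation minus inhibition
        potential = float(s.mean())                    # normalised membrane potential
        self.prev_frame, self.prev_exc = frame, exc
        return potential, potential > self.threshold   # spike suggests a looming object
```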

    B2B2: LiDAR 2D Mapping Rover

    Autonomous machines are becoming more popular and useful, with self-driving cars already a reality. Most of these machines navigate using cameras and LiDAR, which do not detect glass, so they give misleading results when objects and obstacles are transparent to the wavelengths of light used. This is problematic in modern building floor plans with glass walls. A solution is to build a Robot Operating System (ROS) based system that fuses ultrasonic sensors with LiDAR so that a robot can navigate in a building that has glass walls. Using both sensors, the final product is a robot that creates a 2D map using Simultaneous Localization and Mapping (SLAM) together with other pertinent ROS packages. This map enables any mobile robot to plan a path from point A to point B on the resulting 2D floor plan, which incorporates both glass and non-glass obstacles. This saves time and energy compared to a robot that must continuously change paths in the presence of obstacles.
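    One possible shape for such a fusion node is sketched below. This is a hedged illustration, not the project's implementation: the topic names, the assumption that the ultrasonic sensor points along the LiDAR's zero-angle axis, and the simple "override in the sonar cone" rule are all assumptions. The idea is that when the ultrasonic sensor reports a much shorter range than the LiDAR inside its cone, the LiDAR beams in that cone are clamped so that glass appears in the SLAM map.

```python
#!/usr/bin/env python
# Sketch of a ROS 1 node that fuses an ultrasonic range into a LaserScan.
import numpy as np
import rospy
from sensor_msgs.msg import LaserScan, Range

class GlassAwareScan:
    def __init__(self):
        self.sonar_range = None
        self.sonar_fov = 0.26  # rad; assumed ultrasonic beam width
        rospy.Subscriber('/ultrasound', Range, self.on_sonar)   # assumed topic
        rospy.Subscriber('/scan', LaserScan, self.on_scan)      # assumed topic
        self.pub = rospy.Publisher('/scan_fused', LaserScan, queue_size=1)

    def on_sonar(self, msg):
        self.sonar_range = msg.range
        self.sonar_fov = msg.field_of_view or self.sonar_fov

    def on_scan(self, scan):
        ranges = np.array(scan.ranges)
        if self.sonar_range is not None:
            angles = scan.angle_min + scan.angle_increment * np.arange(len(ranges))
            in_cone = np.abs(angles) < self.sonar_fov / 2.0
            # Glass signature: the sonar sees a surface the LiDAR sees "through".
            glass = in_cone & (ranges > self.sonar_range + 0.5)
            ranges[glass] = self.sonar_range
        scan.ranges = ranges.tolist()
        self.pub.publish(scan)

if __name__ == '__main__':
    rospy.init_node('glass_aware_scan')
    GlassAwareScan()
    rospy.spin()
```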

    Sewer Robotics


    Cooperative Material Handling by Human and Robotic Agents:Module Development and System Synthesis

    Get PDF
    In this paper we present the results of a collaborative effort to design and implement a system for cooperative material handling by a small team of human and robotic agents in an unstructured indoor environment. Our approach makes fundamental use of human agents' expertise for aspects of task planning, task monitoring, and error recovery. Our system is neither fully autonomous nor fully teleoperated. It is designed to make effective use of human abilities within the present state of the art of autonomous systems, and to allow for and promote cooperative interaction between distributed agents with various capabilities and resources. Our robotic agents refer to systems which are each equipped with at least one sensing modality and which possess some capability for self-orientation and/or mobility. Our robotic agents are not required to be homogeneous with respect to either capabilities or function. Our research stresses both paradigms and testbed experimentation. Theoretical issues include the requisite coordination principles and techniques which are fundamental to the basic functioning of such a cooperative multi-agent system. We have constructed a testbed facility for experimenting with distributed multi-agent architectures. The required modular components of this testbed are currently operational and have been tested individually. Our current research focuses on the integration of agents in a scenario for cooperative material handling.

    Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

    Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasonic sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
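    The two stages described above can be sketched as follows: a geometric model projects the sparse LiDAR returns into the image plane, and a Gaussian Process regressor interpolates a denser depth estimate with per-point uncertainty. This is an illustration with assumed calibration values and kernel settings, not the paper's implementation (which also handles the wide-angle lens model).

```python
# Sketch: LiDAR-to-image projection followed by GP-based resolution matching.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

K = np.array([[600.0, 0.0, 640.0],
              [0.0, 600.0, 360.0],
              [0.0,   0.0,   1.0]])           # assumed pinhole intrinsics
R, t = np.eye(3), np.array([0.0, -0.1, 0.0])  # assumed LiDAR->camera extrinsics

def project_lidar(points_xyz):
    """Project LiDAR points to pixel coordinates and return per-point depth."""
    cam = points_xyz @ R.T + t
    cam = cam[cam[:, 2] > 0.5]                # keep points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]               # perspective division to pixels
    return uv, cam[:, 2]

def densify_depth(uv, depth, query_uv):
    """GP regression over pixel coordinates: mean depth and standard deviation."""
    kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(uv, depth)
    mean, std = gp.predict(query_uv, return_std=True)  # std quantifies uncertainty
    return mean, std

# The returned 'std' can be thresholded or propagated into an uncertainty-aware
# free space detector, as mentioned in the abstract.
```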