
    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid the clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and the modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras. While deploying our complementary multi-modality approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies cannot handle. Differing bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment create situations in which sensors can intermittently degrade below their design-specification accuracy. This observation necessitates methods that integrate multi-sensor data while accounting for sensor conflict, performance degradation and potential failure during operation. This dissertation contributes to the data fusion literature a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that evolves a belief policy on the sensors within the multi-agent 3D mapping systems, allowing them to survive failures in challenging operating conditions. In addition to eliminating failed or non-functional sensors and avoiding catastrophic fusion, the implementation of this information-theoretic framework minimizes uncertainty during autonomous operation by adaptively deciding whether to fuse sensors or to choose the most believable ones.
    We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, together with the opportunistic sensing intelligence, provides significant progress toward autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
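The fuse-or-choose decision described in this abstract can be illustrated with a minimal sketch: redundant sensors estimate the same scalar state, a robust consensus gates out readings from degraded sensors, and the believable remainder is fused by inverse-variance weighting. The function name, the gating rule, and all thresholds below are illustrative assumptions, not the dissertation's actual framework.

```python
# Minimal sketch: belief-gated inverse-variance fusion of redundant
# sensor estimates of one scalar state (e.g. robot heading).
# All names and thresholds here are illustrative, not from the dissertation.

def fuse_believable(readings, variances, gate=3.0):
    """Fuse only sensors whose reading lies within `gate` standard
    deviations of the median consensus; return (estimate, variance)."""
    assert len(readings) == len(variances) and readings
    consensus = sorted(readings)[len(readings) // 2]  # robust reference
    kept = [(x, v) for x, v in zip(readings, variances)
            if abs(x - consensus) <= gate * v ** 0.5]
    if not kept:                      # every sensor conflicts: fall back
        kept = [(consensus, max(variances))]
    w = [1.0 / v for _, v in kept]    # inverse-variance weights
    est = sum(x * wi for (x, _), wi in zip(kept, w)) / sum(w)
    return est, 1.0 / sum(w)

# The 9.50 outlier (a degraded sensor) is rejected before fusing.
est, var = fuse_believable([1.00, 1.02, 9.50], [0.01, 0.02, 0.01])
```

Gating before fusing is what avoids catastrophic fusion: a failed sensor's reading never enters the weighted average in the first place.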

    Robotic Exploration of an Unknown Nuclear Environment Using Radiation Informed Autonomous Navigation

    From MDPI via Jisc Publications Router. History: accepted 2021-05-15; published electronically 2021-05-24. Publication status: Published.
    This paper describes a novel autonomous ground vehicle designed to explore unknown environments that contain sources of ionising radiation, such as a nuclear disaster site or a legacy nuclear facility. While exploring, it is important that the robot avoids radiation hot spots to minimise breakdowns. Broken-down robots present a real problem: they not only cause the mission to fail but can also block access routes for future missions. Until now, such robots have had no autonomous gamma radiation avoidance capabilities. New software algorithms are presented that convert radiation measurements into a format that can be integrated into the robot's navigation system, so that it actively avoids receiving a high radiation dose during a mission. An unmanned ground vehicle was fitted with a gamma radiation detector and an autonomous navigation package that included the new radiation avoidance software. The full system was evaluated experimentally in a complex semi-structured environment containing two radiation sources. In the experiment, the robot successfully identified both sources and avoided areas found to have high levels of radiation while navigating between user-defined waypoints. This advancement of the state of the art has the potential to deliver real benefit to the nuclear industry, both by increasing the chance of mission success and by reducing the reliance on human operatives to perform tasks in dangerous radiation environments.
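One common way to make measurements "integrable into the navigation system", as the abstract puts it, is to fold the sensed dose rate into the planner's cost map so that hot cells become expensive to traverse. The sketch below assumes that representation; the grid values, the `dose_penalty` weight, and the tiny Dijkstra planner are illustrative, not the paper's implementation.

```python
# Minimal sketch: folding radiation intensity into a navigation cost
# map so the planner prefers low-dose routes between waypoints.
import heapq

def plan(grid_rad, start, goal, dose_penalty=10.0):
    """Dijkstra over a 4-connected grid; step cost = 1 + penalty * dose."""
    rows, cols = len(grid_rad), len(grid_rad[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + dose_penalty * grid_rad[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# A hot spot (dose 5.0) sits on the straight-line route; the planner
# detours around it rather than through it.
rad = [[0, 0, 0],
       [0, 5.0, 0],
       [0, 0, 0]]
route = plan(rad, (0, 0), (2, 2))
```

Because radiation enters only through the per-step cost, any cost-based planner (A*, D* Lite, etc.) could consume the same map without modification.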

    Migration from Teleoperation to Autonomy via Modular Sensor and Mobility Bricks

    In this thesis, the teleoperated communications of a Remotec ANDROS robot have been reverse engineered. This research uses the information acquired through the reverse engineering process to enhance the teleoperation and add intelligence to the initially teleoperated robot. The main contribution of this thesis is the implementation of the mobility brick paradigm, which enables autonomous operations using the commercial teleoperated ANDROS platform. The brick paradigm is a generalized architecture for a modular approach to robotics. This architecture, and the contribution of this thesis, represent a paradigm shift from the proprietary commercial models that exist today. The modular system of sensor bricks integrates the transformed mobility platform and defines it as a mobility brick. In the wall-following application implemented in this work, the mobile robotic system acquires intelligence using the range sensor brick. This application illustrates one way to alleviate the burden on the human operator and delegate certain tasks to the robot. Wall following is one among several examples of giving a degree of autonomy to an essentially teleoperated robot through the Sensor Brick System. Indeed, once the proprietary robot has been altered into a mobility brick, the possibilities for autonomy are numerous and vary with different sensor bricks. The autonomous system implemented is not a fixed-application robot but rather a non-specific, autonomy-capable platform. Meanwhile, the native controller and the computer-interfaced teleoperation remain available when necessary. Rather than trading off by switching from teleoperation to autonomy, this system provides the flexibility to switch between the two at the operator's command. The contributions of this thesis reside in the reverse engineering of the original robot, its upgrade to a computer-interfaced teleoperated system, the mobility brick paradigm and the addition of autonomy capabilities.
    The application of a robot autonomously following a wall is subsequently implemented, tested and analyzed in this work. The analysis provides the programmer with information on controlling the robot and launching the autonomous function. The results are conclusive and open up possibilities for a variety of autonomous applications for mobility platforms using modular sensor bricks.
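The wall-following behaviour described above is classically realised as a simple feedback loop on the range reading: steer toward the wall when too far, away when too close. The sketch below assumes a proportional controller; the gains, set distance and saturation limit are illustrative values, not those used on the ANDROS platform.

```python
# Minimal sketch of the wall-following behaviour: a proportional
# controller holds a set distance from the wall, as measured by the
# range sensor brick. Gains and distances are illustrative.

def wall_follow_step(range_to_wall, desired=0.5, kp=2.0, max_turn=1.0):
    """Return a turn-rate command (rad/s): positive steers toward the
    wall when too far, negative steers away when too close."""
    error = range_to_wall - desired
    turn = kp * error
    return max(-max_turn, min(max_turn, turn))  # saturate the command

# Too far from the wall (0.8 m vs 0.5 m): steer toward it.
cmd_far = wall_follow_step(0.8)    # ~ 0.6 rad/s
# Too close (0.3 m): steer away.
cmd_near = wall_follow_step(0.3)   # ~ -0.4 rad/s
```

In practice this one-line control law would be called each sensor cycle, with forward velocity held constant, which is what makes it an easy first autonomous behaviour for a converted teleoperated platform.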

    On exploiting haptic cues for self-supervised learning of depth-based robot navigation affordances

    This article presents a method for online learning of robot navigation affordances from spatiotemporally correlated haptic and depth cues. The method allows the robot to incrementally learn which objects present in the environment are actually traversable. This is a critical requirement for any wheeled robot operating in natural environments, in which the inability to discern vegetation from non-traversable obstacles frequently hampers terrain progression. A wheeled robot prototype was developed to experimentally validate the proposed method. The prototype obtains haptic and depth sensory feedback from a pan-tilt telescopic antenna and from a structured light sensor, respectively. With the presented method, the robot learns a mapping between objects' descriptors, given the range data provided by the sensor, and objects' stiffness, as estimated from the interaction between the antenna and the object. Learning confidence estimation is used to progressively reduce the number of physical interactions required with acquainted objects. To raise the number of meaningful interactions per object under time pressure, the segments of the object under analysis are prioritised according to a set of morphological criteria. Field trials show the ability of the robot to progressively learn which elements of the environment are traversable.
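The learn-from-touch loop in this abstract can be sketched as a memory of (depth descriptor, measured stiffness) pairs: while no confident estimate exists for a descriptor, the robot probes with the antenna; once a nearby descriptor is acquainted, it classifies from memory. The 1-nearest-neighbour memory, 2-D descriptors and thresholds below are illustrative simplifications of the paper's method.

```python
# Minimal sketch: online traversability learning from haptic probes,
# with a confidence rule that skips probing acquainted objects.
# Descriptors, radii and limits are illustrative assumptions.

def descriptor_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class TraversabilityMemory:
    def __init__(self, known_radius=0.2, stiff_limit=0.5):
        self.samples = []                 # (descriptor, stiffness) pairs
        self.known_radius = known_radius  # "acquainted object" radius
        self.stiff_limit = stiff_limit    # above this: treat as obstacle

    def needs_probe(self, descriptor):
        """True while no confident stiffness estimate exists nearby."""
        return all(descriptor_distance(descriptor, d) > self.known_radius
                   for d, _ in self.samples)

    def record(self, descriptor, stiffness):
        self.samples.append((descriptor, stiffness))

    def traversable(self, descriptor):
        _, s = min(self.samples,
                   key=lambda p: descriptor_distance(descriptor, p[0]))
        return s < self.stiff_limit

mem = TraversabilityMemory()
grass = (0.1, 0.9)          # hypothetical 2-D depth descriptor
mem.record(grass, 0.1)      # antenna felt low stiffness: vegetation
# A similar-looking patch no longer needs a physical interaction.
similar = (0.12, 0.88)
```

The point of the confidence gate is exactly the abstract's claim: physical interactions are expensive, so they are spent only on unfamiliar descriptors.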

    Robot Mapping and Navigation by Fusing Sensory Information


    On Semantic Segmentation and Path Planning for Autonomous Vehicles within Off-Road Environments

    There are many challenges involved in creating a fully autonomous vehicle capable of safely navigating through off-road environments. In this work we focus on two of the most prominent such challenges, namely scene understanding and path planning. Scene understanding is a challenging computer vision task in which recent advances in convolutional neural networks (CNN) have achieved results that notably surpass prior traditional feature-driven approaches. Here, we build on recent work in urban road-scene understanding, training a state-of-the-art CNN architecture for the task of classifying off-road scenes. We analyse the effects of transfer learning and training data set size on CNN performance, evaluating multiple configurations of the network at multiple points during the training cycle and investigating in depth how the training process is affected. We compare this CNN to a more traditional feature-driven approach with a Support Vector Machine (SVM) classifier and demonstrate state-of-the-art results on this particularly challenging problem of off-road scene understanding. We then expand on this with the addition of multi-channel RGBD data, which we encode in multiple configurations for CNN input. We evaluate each of these configurations over our own off-road RGBD data set and compare performance to that of the network model trained using RGB data. Next, we investigate end-to-end navigation, whereby a machine learning algorithm optimises to predict the vehicle control inputs of a human driver. After evaluating such a technique in an off-road environment and identifying several limitations, we propose a new approach in which a CNN learns to predict the vehicle path visually, combining a novel approach to automatic training data creation with a state-of-the-art CNN architecture to map a predicted route directly onto image pixels. We then evaluate this approach using our off-road data set and demonstrate effectiveness surpassing existing end-to-end methods.
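Comparisons between segmentation approaches such as the CNN and SVM ones above are conventionally scored with per-class intersection-over-union (IoU). As a minimal illustration under assumed labels (the thesis's actual class set and data are not reproduced here):

```python
# Minimal sketch: per-class intersection-over-union (IoU), the usual
# metric for semantic-segmentation comparisons like those described.
# The label set and the tiny example masks are illustrative.

def class_iou(pred, truth, cls):
    """IoU for one class over flattened per-pixel label lists."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 1.0   # class absent everywhere

# 0 = traversable ground, 1 = obstacle (hypothetical off-road labels)
truth = [0, 0, 1, 1, 1, 0]
pred  = [0, 1, 1, 1, 0, 0]
iou_obstacle = class_iou(pred, truth, 1)   # inter 2, union 4 -> 0.5
```

Averaging `class_iou` over all classes gives the mean IoU figure usually quoted when comparing an RGB-trained model against an RGBD-trained one.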

    Addressing Tasks Through Robot Adaptation

    Developing flexible, broadly capable systems is essential for robots to move out of factories and into our daily lives, functioning as responsive agents that can handle whatever the world throws at them. This dissertation focuses on two kinds of robot adaptation. Modular self-reconfigurable robots (MSRR) adapt to the requirements of their task and environment by transforming themselves. By rearranging the connective structure of their component robot modules, these systems can assume different morphologies: for example, a cluster of modules might configure themselves into a car to maneuver on flat ground, a snake to climb stairs, or an arm to pick and place objects. Conversely, environment augmentation is a strategy in which the robot transforms its environment to meet its own needs, adding physical structures that allow it to overcome obstacles. In both areas, the presented work includes elements of hardware design, algorithms, and integrated systems, with the common goal of establishing these methods of adaptation as viable strategies for addressing tasks. The research takes a systems-level view of robotics, placing particular emphasis on experimental validation in hardware.
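The car/snake/arm example above implies a small decision layer that maps observed task and environment features to a target morphology. The sketch below assumes such a rule-based mapping purely for illustration; the feature names and thresholds are hypothetical and not the dissertation's actual reconfiguration planner.

```python
# Minimal sketch of the reconfiguration decision for an MSRR cluster:
# map simple environment features to a target configuration.
# Feature names and the 0.15 m step threshold are illustrative.

def choose_morphology(env):
    """Return a target configuration name for the module cluster."""
    if env.get("needs_manipulation"):
        return "arm"        # pick-and-place tasks
    if env.get("step_height", 0.0) > 0.15:
        return "snake"      # climb stairs or high steps
    return "car"            # default: fast travel on flat ground

morph = choose_morphology({"step_height": 0.3})   # -> "snake"
```

A real system would follow this decision with a reconfiguration plan that physically rearranges module connections, which is where most of the hardware and algorithmic work lies.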

    Autonomous surveillance for biosecurity

    Full text link
    The global movement of people and goods has increased the risk of biosecurity threats and their potential to incur large economic, social, and environmental costs. Conventional manual biosecurity surveillance methods are limited in their scalability in space and time. This article focuses on autonomous surveillance systems, comprising sensor networks, robots, and intelligent algorithms, and their applicability to biosecurity threats. We discuss the spatial and temporal attributes of autonomous surveillance technologies and map them to three broad categories of biosecurity threat: (i) vector-borne diseases; (ii) plant pests; and (iii) aquatic pests. Our discussion reveals a broad range of opportunities to serve biosecurity needs through autonomous surveillance.
    Comment: 26 pages. Published in Trends in Biotechnology, 3 March 2015, ISSN 0167-7799, http://dx.doi.org/10.1016/j.tibtech.2015.01.003 (http://www.sciencedirect.com/science/article/pii/S0167779915000190)