1,856 research outputs found

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    Get PDF
    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection, and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners, and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies cannot handle. Different bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions, and the interaction of the sensing mechanisms with the environment introduce situations in which sensors can intermittently degenerate to accuracy levels below their design specification. This observation necessitates methods for integrating multi-sensor data that account for sensor conflict, performance degradation, and potential failure during operation. This dissertation contributes a fault-diagnosis framework, inspired by information complexity theory, to the data fusion literature. We implement the framework as opportunistic sensing intelligence that evolves a belief policy over the sensors in the multi-agent 3D mapping systems, allowing them to survive and counter failures in challenging operating conditions. In addition to eliminating failed or non-functional sensors and avoiding catastrophic fusion, the information-theoretic framework minimizes uncertainty during autonomous operation by adaptively deciding whether to fuse sensors or to choose only the believable ones. We demonstrate the framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. The modular hardware and software design of our robotic imaging prototypes, together with the opportunistic sensing intelligence, provides significant progress towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
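
    The adaptive fuse-or-choose behaviour described above can be illustrated with a minimal sketch (not the dissertation's actual information-complexity framework): each sensor carries a belief weight, readings that disagree too strongly with the belief-weighted consensus are rejected as degraded, and only the believable sensors are fused. All names, thresholds, and the belief-update rule below are illustrative assumptions.

```python
import numpy as np

def belief_gated_fusion(readings, variances, beliefs, gate=3.0):
    """Fuse scalar sensor readings, rejecting sensors that look degraded.

    readings  : per-sensor measurements of the same quantity
    variances : nominal (design-spec) measurement variances
    beliefs   : current belief weights in [0, 1], evolved over time
    gate      : residual threshold (in standard deviations) for rejection
    """
    readings = np.asarray(readings, dtype=float)
    variances = np.asarray(variances, dtype=float)
    beliefs = np.asarray(beliefs, dtype=float)

    # Provisional consensus: belief- and precision-weighted mean.
    w = beliefs / variances
    consensus = np.sum(w * readings) / np.sum(w)

    # Reject sensors whose residual exceeds the gate -> treated as degraded/failed.
    residual = np.abs(readings - consensus) / np.sqrt(variances)
    alive = residual < gate
    if not np.any(alive):
        return consensus, beliefs          # fall back rather than fuse nothing

    # Re-fuse using only the believable sensors; decay belief in rejected ones.
    w = np.where(alive, w, 0.0)
    fused = np.sum(w * readings) / np.sum(w)
    new_beliefs = np.where(alive, np.minimum(1.0, beliefs * 1.05), beliefs * 0.5)
    return fused, new_beliefs
```

    For example, belief_gated_fusion([2.0, 2.1, 9.7], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]) rejects the third reading as an outlier, fuses the first two to roughly 2.05, and halves the rejected sensor's belief weight.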

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Get PDF
    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
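
    For reference, the de-facto standard formulation the survey refers to is maximum-a-posteriori estimation over a factor graph; a compact statement of it (notation mine) is:

```latex
\mathcal{X}^{\star}
  = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X}\mid\mathcal{Z})
  = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X}) \prod_{k=1}^{m} p(z_k \mid \mathcal{X}_k)
  = \operatorname*{arg\,min}_{\mathcal{X}} \; \sum_{k=1}^{m} \bigl\lVert h_k(\mathcal{X}_k) - z_k \bigr\rVert^{2}_{\Omega_k}
```

    where \mathcal{X} collects the robot trajectory and map variables, z_k is the k-th measurement with measurement model h_k, and the last equality assumes Gaussian measurement noise with information matrix \Omega_k (squared Mahalanobis norm).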

    A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications

    Full text link
    This survey samples from the ever-growing family of adaptive resonance theory (ART) neural network models used to perform the three primary machine learning modalities, namely unsupervised, supervised, and reinforcement learning. It comprises a representative list from classic to modern ART models, thereby painting a general picture of the architectures developed by researchers over the past 30 years. The learning dynamics of these ART models are briefly described, and their distinctive characteristics, such as code representation, long-term memory, and corresponding geometric interpretation, are discussed. Useful engineering properties of ART (speed, configurability, explainability, parallelization and hardware implementation) are examined along with current challenges. Finally, a compilation of online software libraries is provided. It is expected that this overview will be helpful to new and seasoned ART researchers.
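
    As a concrete illustration of the learning dynamics shared by many of these models, here is a compact Fuzzy ART sketch (one member of the family), showing complement coding, the choice function, the vigilance test, and category learning; the class name and default parameters are mine, not taken from the survey.

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART sketch: unsupervised category learning with a vigilance test."""

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta   # vigilance, choice, learning rate
        self.w = []                      # long-term memory: one weight vector per category

    def _complement_code(self, x):
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])

    def learn(self, x):
        i = self._complement_code(x)
        # Choice function ranks the existing categories.
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                      / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:        # vigilance test passed: resonance, update weights
                self.w[j] = self.beta * np.minimum(i, self.w[j]) + (1 - self.beta) * self.w[j]
                return j
        self.w.append(i.copy())          # no resonance: commit a new category
        return len(self.w) - 1
```

    With beta=1.0 this performs fast learning; raising rho towards 1 forces finer-grained categories, which is the usual way the vigilance parameter is read geometrically.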

    The fusion and integration of virtual sensors

    Get PDF
    There are numerous sensors from which to choose when designing a mobile robot: ultrasonic, infrared, radar, or laser range finders, video, collision detectors, or beacon-based systems such as the Global Positioning System. In order to meet the need for reliability, accuracy, and fault tolerance, mobile robot designers often place multiple sensors on the same platform, or combine sensor data from multiple platforms. The combination of data from multiple sensors to improve reliability, accuracy, and fault tolerance is termed Sensor Fusion. The types of robotic sensors are as varied as the properties of the environment that need to be sensed. To reduce the complexity of system software, roboticists have found it highly desirable to adopt a common interface between each type of sensor and the system responsible for fusing the information. The process of abstracting the essential properties of a sensor is called Sensor Virtualization. Sensor virtualization to date has focused on abstracting the properties shared by sensors of the same type. The approach taken by T. Henderson is simply to expose to the fusion system only the data from the sensor, along with a textual label describing the sensor. We extend Henderson's work in the following manner. First, we encapsulate both the fusion algorithm and the interface layer in the virtual sensor. This allows us to build multi-tiered virtual sensor hierarchies. Secondly, we show how common fusion algorithms can be encapsulated in the virtual sensor, facilitating the integration and replacement of both physical and virtual sensors. Finally, we provide a physical proof of concept using monostatic sonars, vector sonars, and a laser range-finder.
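
    A minimal sketch of the multi-tiered virtual-sensor idea described above, assuming a hypothetical driver call get_range(): a wrapped physical device and a fused composite expose the same read() interface, so a fused sensor can itself be composed into higher tiers of the hierarchy. The class and method names are illustrative, not the thesis's actual interfaces.

```python
from abc import ABC, abstractmethod
from statistics import median

class VirtualSensor(ABC):
    """Common interface exposed to the rest of the system (sensor virtualization)."""

    @abstractmethod
    def read(self) -> float:
        """Return the current estimate of the sensed quantity."""

class RangeFinder(VirtualSensor):
    """Wraps one physical device behind the common interface."""
    def __init__(self, device):
        self.device = device
    def read(self) -> float:
        return self.device.get_range()   # hypothetical driver call

class FusedRange(VirtualSensor):
    """Encapsulates both its children and the fusion rule, so it can be
    stacked into higher tiers just like a physical sensor."""
    def __init__(self, children: list[VirtualSensor]):
        self.children = children
    def read(self) -> float:
        return median(c.read() for c in self.children)   # simple, outlier-tolerant fusion
```

    To the layers above it, FusedRange([RangeFinder(sonar), RangeFinder(lidar)]) is indistinguishable from a single physical range sensor, which is the point of encapsulating the fusion algorithm inside the virtual sensor.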

    A New Simulation Metric to Determine Safe Environments and Controllers for Systems with Unknown Dynamics

    Full text link
    We consider the problem of extracting safe environments and controllers for reach-avoid objectives for systems with known state and control spaces, but unknown dynamics. In a given environment, a common approach is to synthesize a controller from an abstraction or a model of the system (potentially learned from data). However, in many situations, the relationship between the dynamics of the model and the actual system is not known, and hence it is difficult to provide safety guarantees for the system. In such cases, the Standard Simulation Metric (SSM), defined as the worst-case norm distance between the model and the system output trajectories, can be used to modify a reach-avoid specification for the system into a more stringent specification for the abstraction. Nevertheless, the obtained distance, and hence the modified specification, can be quite conservative. This limits the set of environments for which a safe controller can be obtained. We propose SPEC, a specification-centric simulation metric, which overcomes these limitations by computing the distance using only the trajectories that violate the specification for the system. We show that modifying a reach-avoid specification with SPEC allows us to synthesize a safe controller for a larger set of environments compared to SSM. We also propose a probabilistic method to compute SPEC for a general class of systems. Case studies using simulators for quadrotors and autonomous cars illustrate the advantages of the proposed metric for determining safe environment sets and controllers.
    Comment: 22nd ACM International Conference on Hybrid Systems: Computation and Control (2019).
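
    A toy, sampled-trajectory illustration of the difference between the two metrics, under the simplifying assumption that matched model/system trajectory pairs are available as arrays; the paper's actual probabilistic computation of SPEC is more involved than this sketch.

```python
import numpy as np

def ssm(model_trajs, system_trajs):
    """Standard Simulation Metric: worst-case output distance over all trajectory pairs.

    Each trajectory is a (T, d) array of outputs; pairs are matched by index.
    """
    return max(np.max(np.linalg.norm(m - s, axis=1))
               for m, s in zip(model_trajs, system_trajs))

def spec_metric(model_trajs, system_trajs, violates_spec):
    """Specification-centric metric: the same distance, but taken only over the
    trajectories for which the *system* violates the reach-avoid specification.

    violates_spec : callable mapping a system trajectory to True/False.
    """
    dists = [np.max(np.linalg.norm(m - s, axis=1))
             for m, s in zip(model_trajs, system_trajs)
             if violates_spec(s)]
    return max(dists) if dists else 0.0   # no violating trajectories -> no expansion needed
```

    Because spec_metric ignores the (often large) model-system mismatch on trajectories that already satisfy the specification, the resulting distance, and hence the tightened specification for the abstraction, is generally less conservative than the one obtained from ssm.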

    Analyzing and Predicting Railway Operational Accidents Based on Fishbone Diagram and Bayesian Networks

    Get PDF
    The prevention of railway operational accidents has become one of the leading issues in railway safety. Identifying the factors that significantly affect railway operation is critical for decreasing the occurrence of railway accidents. In this study, 8,440 samples of accident data are selected as the dataset for analysis. A fishbone diagram is applied to obtain the factors that cause accidents from the perspective of human-equipment-environment-management system theory. A Bayesian network is then used to establish a railway operational safety accident prediction model, and sensitivity analysis is used to obtain the sensitivity of each variable factor to the accident level. The results show that season, location, trouble maker and job function have a significant impact on railway safety, with sensitivities of 0.4577, 0.4116, 0.3478 and 0.3192, respectively. This research helps the railway sector understand the fundamental causes of accidents and provides an effective reference for accident prevention, which is conducive to the long-term development of railway transportation.
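
    A toy sketch of the kind of query this pipeline supports, with invented conditional probabilities and only two parent factors (not the paper's learned parameters or network structure): the posterior probability of a severe accident is computed under different values of one factor, and the spread of that posterior serves as a crude sensitivity score.

```python
from itertools import product

# Toy conditional probability table P(severity | season, location); the numbers
# below are illustrative placeholders, not the study's learned parameters.
SEASONS = ["spring", "summer", "autumn", "winter"]
LOCATIONS = ["station", "open_line"]
SEVERITIES = ["minor", "major"]
CPT = {(s, l): {"minor": 0.9 - 0.05 * i, "major": 0.1 + 0.05 * i}
       for i, (s, l) in enumerate(product(SEASONS, LOCATIONS))}

def posterior(evidence):
    """P(severity | evidence), marginalising uniformly over unspecified parents."""
    totals = {sev: 0.0 for sev in SEVERITIES}
    n = 0
    for s, l in product(SEASONS, LOCATIONS):
        if evidence.get("season", s) == s and evidence.get("location", l) == l:
            for sev in SEVERITIES:
                totals[sev] += CPT[(s, l)][sev]
            n += 1
    return {sev: v / n for sev, v in totals.items()}

def sensitivity(factor, values):
    """Crude sensitivity: spread of P(major) as one factor sweeps over its values."""
    probs = [posterior({factor: v})["major"] for v in values]
    return max(probs) - min(probs)

print(sensitivity("season", SEASONS), sensitivity("location", LOCATIONS))
```

    In the study itself the network has more factors and its parameters are learned from the 8,440 accident records, but the reported sensitivities (e.g. 0.4577 for season) play the same role as the spread computed here: larger values mark factors whose variation moves the predicted accident level the most.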