39 research outputs found

    A stereo vision based mapping algorithm for detecting inclines, drop-offs, and obstacles for safe local navigation


    Feature recognition and obstacle detection for drive assistance in indoor environments

    The goal of this research project was to develop a robust feature recognition and obstacle detection method for smart wheelchair navigation in indoor environments. Because two types of depth sensors were employed, two different methods were proposed and implemented in this thesis. Both methods combine colour, edge, depth and motion information to detect obstacles, compute movements and recognize indoor room features. The first method was based on a stereo vision sensor. It started by optimizing the noisy disparity images, then used RANSAC to estimate the ground plane, followed by a watershed-based image segmentation algorithm for ground pixel classification. Meanwhile, a novel algorithm, the standard deviation ridge straight line detector, was used to extract straight lines from the RGB images; it provides more useful information than the Canny edge detector combined with the Hough Transform. Novel drop-off and stairs-up detection algorithms were then built on the proposed straight line detector, and the camera movements were calculated by optical flow. The second method was based on a structured light sensor. After RANSAC ground plane estimation, morphology operations were applied to smooth the ground surface area. An obstacle detection algorithm then created a top-down map of the ground plane using inverse perspective mapping and segmented obstacles using a region-growing algorithm. Both the drop-off and open-door detection algorithms employ straight lines extracted from depth discontinuity maps. The performance and accuracy of the two proposed methods were evaluated. Ground plane classification achieved 98.58% true positives with the first method, improving to 99% with the second. The drop-off detection algorithms of the first method also achieved good results, with no false negatives in the test video sequences. The system provided top-down maps of the surroundings to detect and segment obstacles correctly. Overall, the accurate distances reported for the detected indoor features and obstacles suggest that the proposed colour/edge/motion/depth approach would be useful as a navigation aid through doorways and hallways.
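The RANSAC ground-plane estimation step used by both methods can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the function name, iteration count, and distance threshold are all our own assumptions.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.02, rng=None):
    """Estimate a dominant plane (n . p + d = 0) from 3-D points with RANSAC.

    Repeatedly fits a plane to 3 random points and keeps the model that
    explains the most inliers within dist_thresh.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        # Plane normal from two edge vectors of the sampled triangle.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

On depth-sensor data, the returned inlier mask would correspond to candidate ground pixels, which the thesis then refines (watershed segmentation in the first method, morphology in the second).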

    An Approach for Environment Mapping and Control of Wall Follower Cellbot Through Monocular Vision and Fuzzy System

    This paper presents an approach that uses range measurements obtained through homography calculation to build a 2D visual occupancy grid and control the robot through monocular vision. The approach is designed for a Cellbot architecture. The robot is equipped with a wall-following behaviour to explore the environment, which enables it to trail object contours, while a fuzzy controller is responsible for providing the commands that execute the robot's movements correctly despite adversities in the environment. In this approach the Cellbot camera works as a sensor capable of correlating image elements to the real world, so the system can find the distances to obstacles; that information is used both for occupancy grid mapping and as fuzzy control input. Experimental results with the V-REP simulator are presented to validate the proposal, and they favour the use of the approach in robotics at acceptable computing cost. Sociedad Argentina de Informática e Investigación Operativa (SADIO)
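The homography-based ranging and grid update described above can be sketched roughly as follows. This is a hedged illustration under our own assumptions: the function names, the grid layout, and the calibration matrix are not taken from the paper, which would obtain the homography from camera calibration against the ground plane.

```python
import numpy as np

def pixel_to_ground(H, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates via a
    3x3 ground-plane homography H (homogeneous coordinates)."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

def mark_obstacle(grid, origin, resolution, xy):
    """Mark the occupancy-grid cell containing ground point xy as occupied."""
    i = int((xy[0] - origin[0]) / resolution)
    j = int((xy[1] - origin[1]) / resolution)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = 1
    return grid
```

With a calibrated H, each detected obstacle pixel yields a metric ground-plane distance; the same distance could feed the fuzzy controller's input as the paper describes.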

    Realization of Performance Advancements for WPI's UGV - Prometheus

    The objective of this project is to design and implement performance improvements for WPI's intelligent ground vehicle, Prometheus, leading to a more competitive entry at the Intelligent Ground Vehicle Competition. Performance enhancements implemented by the project team include a new upper chassis design, a reconfigurable camera mount, extended Kalman filter-based localization with a GPS receiver and a compass module, a lane detection algorithm, and a modular software framework. As a result, Prometheus has improved autonomy, accessibility, robustness, reliability, and usability.
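The extended Kalman filter localization fusing GPS position and compass heading could, in outline, look like the sketch below. The unicycle motion model, function names, and all noise values are illustrative assumptions, not Prometheus's actual code.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """EKF predict for pose x = [px, py, theta] under a unicycle motion model
    with forward speed v and turn rate w."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Linear measurement update. For GPS, H selects (px, py); for the
    compass, H selects theta."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

A real implementation would also wrap the heading innovation to (-pi, pi] before the compass update; that detail is omitted here for brevity.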

    NeBula: Team CoSTAR's robotic autonomy solution that won phase II of DARPA Subterranean Challenge

    This paper presents and discusses the algorithms, hardware, and software architecture developed by Team CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tube) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, and the specific results and lessons learned from fielding this solution on the challenging courses of the DARPA Subterranean Challenge competition.
    The work is partially supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004), and the Defense Advanced Research Projects Agency (DARPA).
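NeBula's belief-space machinery is far richer than any short snippet can convey, but the core idea of maintaining a probability distribution over world states can be illustrated with a toy discrete Bayes filter. Everything here (names, the transition matrix, the likelihood) is our own simplified assumption, not NeBula's implementation.

```python
import numpy as np

def belief_update(belief, transition, likelihood):
    """One step of a discrete Bayes filter over a finite set of world states:
    predict through a transition matrix (transition[i, j] = P(next=j | cur=i)),
    then weight by the observation likelihood and renormalize."""
    predicted = transition.T @ belief        # belief after the state transition
    posterior = predicted * likelihood       # weight each state by the evidence
    return posterior / posterior.sum()
```

Planning "in the belief space" then means choosing actions based on this whole distribution (e.g., its entropy or risk tails) rather than on a single best-guess state.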

    Design of a walking robot

    Carnegie Mellon University's Autonomous Planetary Exploration Program (APEX) is currently building the Daedalus robot; a system capable of performing extended autonomous planetary exploration missions. Extended autonomy is an important capability because the continued exploration of the Moon, Mars and other solid bodies within the solar system will probably be carried out by autonomous robotic systems. There are a number of reasons for this - the most important of which are the high cost of placing a man in space, the high risk associated with human exploration and communication delays that make teleoperation infeasible. The Daedalus robot represents an evolutionary approach to robot mechanism design and software system architecture. Daedalus incorporates key features from a number of predecessor systems. Using previously proven technologies, the Apex project endeavors to encompass all of the capabilities necessary for robust planetary exploration. The Ambler, a six-legged walking machine was developed by CMU for demonstration of technologies required for planetary exploration. In its five years of life, the Ambler project brought major breakthroughs in various areas of robotic technology. Significant progress was made in: mechanism and control, by introducing a novel gait pattern (circulating gait) and use of orthogonal legs; perception, by developing sophisticated algorithms for map building; and planning, by developing and implementing the Task Control Architecture to coordinate tasks and control complex system functions. The APEX project is the successor of the Ambler project

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, constructing a spatial representation from the sensory information perceived; (iv) localization, estimating the robot's position within the spatial map; (v) path planning, finding a path (optimal or not) towards a goal location; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors from all over the world. Research cases are documented in 32 chapters organized into the 7 categories described next.

    Rehabilitation Engineering

    Population ageing has major consequences and implications in all areas of our daily life, as well as in other important areas such as economic growth, savings, investment and consumption, labour markets, pensions, property, and care from one generation to another. Health and related care, family composition and lifestyle, housing, and migration are also affected. Given the rapid increase in the ageing of the population and the further increase expected in the coming years, an important problem to be faced is the corresponding increase in chronic illness, disability, and the loss of functional independence endemic to the elderly (WHO 2008). For this reason, novel methods of rehabilitation and care management are urgently needed. This book covers many rehabilitation support systems and robots developed for the upper limbs, the lower limbs, and visual impairment. The lower-limb research works discussed include a motorized foot rest for an electric powered wheelchair and a standing assistance device.

    On-line, Incremental Visual Scene Understanding for an Indoor Navigating Robot.

    An indoor navigating robot must perceive its local environment in order to act. The robot must construct a model that captures critical navigation information from the stream of visual data it acquires while traveling within the environment. Visual processing must be done on-line and efficiently to keep up with the robot's needs. This thesis contributes both representations and algorithms toward solving the problem of modeling the local environment for an indoor navigating robot. Two representations, the Planar Semantic Model (PSM) and the Action Opportunity Star (AOS), are proposed to capture important navigation information about the local indoor environment. PSM models the geometric structure of the indoor environment in terms of the ground plane and walls, and captures rich relationships among the wall segments. AOS is an abstracted representation that reasons about the navigation opportunities at a given pose. Both representations can capture incomplete knowledge, so representations of unknown regions can be built incrementally as observations become available. An on-line generate-and-test framework is presented to construct the PSM from a stream of visual data. The framework includes two key elements: an incremental process for generating structural hypotheses, and an on-line hypothesis testing mechanism using a Bayesian filter. Our framework is evaluated in three phases. First, we evaluate the effectiveness of the on-line hypothesis testing mechanism with an initially generated set of hypotheses in simple empty environments. We demonstrate that our method outperforms state-of-the-art methods on geometric reasoning, both in accuracy and in applicability to a navigating robot. Second, we evaluate the incremental hypothesis generating process and demonstrate the expressive power of our proposed representations. At this phase, we also demonstrate an attention-focusing method to efficiently discriminate among the active hypothesized models. Finally, we demonstrate a general metric to test hypotheses with partial explanations in cluttered environments.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/108914/1/gstsai_1.pd
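The Bayesian-filter hypothesis testing described in the abstract can be sketched, in heavily simplified form, as a posterior update over competing structural hypotheses with pruning. The function name, pruning threshold, and likelihood inputs are our own assumptions; the thesis's actual mechanism operates on PSM structural hypotheses against image evidence.

```python
import numpy as np

def filter_hypotheses(priors, likelihood_seq, prune_thresh=1e-3):
    """Recursively update a posterior over competing model hypotheses,
    pruning any hypothesis whose probability falls below prune_thresh."""
    post = np.asarray(priors, dtype=float)
    alive = np.ones(len(post), dtype=bool)
    for like in likelihood_seq:
        post = post * np.asarray(like, dtype=float)  # Bayes: prior x likelihood
        post[~alive] = 0.0                           # pruned hypotheses stay dead
        post /= post.sum()                           # renormalize
        alive &= post >= prune_thresh                # prune unlikely hypotheses
    return post, alive
```

Pruning is what keeps an on-line generate-and-test loop tractable: as observations accumulate, implausible structural models stop consuming computation, in the spirit of the attention-focusing method the abstract mentions.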