1,041 research outputs found

    Camera Marker Networks for Pose Estimation and Scene Understanding in Construction Automation and Robotics.

    The construction industry faces challenges that include high rates of workplace injuries and fatalities, stagnant productivity, and skill shortages. Automation and Robotics in Construction (ARC) has been proposed in the literature as a potential solution that makes machinery easier to collaborate with, facilitates better decision-making, or enables autonomous behavior. However, there are two primary technical challenges in ARC: 1) unstructured and featureless environments; and 2) differences between the as-designed and the as-built. It is therefore impossible to directly replicate, on construction sites, conventional automation methods adopted in industries such as manufacturing. In particular, two fundamental problems, pose estimation and scene understanding, must be addressed to realize the full potential of ARC. This dissertation proposes a pose estimation and scene understanding framework that addresses the identified research gaps by exploiting cameras, markers, and planar structures to mitigate the identified technical challenges. A fast plane extraction algorithm is developed for efficient modeling and understanding of built environments. A marker registration algorithm is designed for robust, accurate, cost-efficient, and rapidly reconfigurable pose estimation in unstructured and featureless environments. Camera marker networks are then established for unified and systematic design, estimation, and uncertainty analysis in larger-scale applications. The proposed algorithms' efficiency has been validated through comprehensive experiments. Specifically, the speed, accuracy, and robustness of the fast plane extraction and the marker registration have been demonstrated to be superior to existing state-of-the-art algorithms. These algorithms have also been implemented in two groups of ARC applications to demonstrate the proposed framework's effectiveness, wherein the applications themselves have significant social and economic value. The first group is related to in-situ robotic machinery, including an autonomous manipulator for assembling digital architecture designs on construction sites to help improve productivity and quality; and an intelligent guidance and monitoring system for articulated machinery such as excavators to help improve safety. The second group emphasizes human-machine interaction to make ARC more effective, including a mobile Building Information Modeling and way-finding platform with discrete location recognition to increase indoor facility management efficiency; and a 3D scanning and modeling solution for rapid and cost-efficient dimension checking and concise as-built modeling.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/113481/1/cforrest_1.pd
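
    As a concrete illustration of the camera-and-marker idea (not the dissertation's own registration algorithm), the sketch below estimates a camera's pose from the four image corners of a single square fiducial marker using OpenCV's PnP solver; the marker size, camera intrinsics, and corner ordering are assumed values.

```python
# Minimal single-marker pose estimation via PnP. Hypothetical values: a 0.20 m
# square marker and precalibrated camera intrinsics; this is a generic sketch,
# not the dissertation's marker registration method.
import numpy as np
import cv2

MARKER_SIZE = 0.20  # assumed marker side length in metres

# 3D corners of the marker in its own frame (z = 0 plane), ordered to match
# the detected image corners.
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float64)

def estimate_marker_pose(image_corners, camera_matrix, dist_coeffs):
    """Return the marker's rotation (3x3) and translation (3,) in the camera frame."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  np.asarray(image_corners, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP did not converge")
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return rotation, tvec.reshape(3)
```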

    Active Vision for Scene Understanding

    Visual perception is one of the most important sources of information for both humans and robots. A particular challenge is the acquisition and interpretation of complex unstructured scenes. This work contributes to active vision for humanoid robots. A semantic model of the scene is created, which is extended by successively changing the robot's view in order to explore interaction possibilities of the scene.

    Traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data

    Scene perception and traversability analysis are major challenges for autonomous driving systems. In the context of off-road autonomy, there are additional challenges due to the unstructured environments and the existence of various vegetation types. It is necessary for Autonomous Ground Vehicles (AGVs) to be able to identify obstacles and load-bearing surfaces in the terrain to ensure safe navigation (McDaniel et al. 2012). The presence of vegetation in off-road autonomy applications presents unique challenges for scene understanding: 1) understory vegetation makes it difficult to detect obstacles or to identify load-bearing surfaces; and 2) trees are usually regarded as obstacles even though only their trunks pose a collision risk in navigation. The overarching goal of this dissertation was to study traversability analysis in unstructured forested terrains for off-road autonomy using LIDAR data. More specifically, to address the aforementioned challenges, this dissertation studied the impact of understory vegetation density on the solid-obstacle detection performance of off-road autonomous systems. By leveraging a physics-based autonomous driving simulator, a classification-based machine learning framework was proposed for obstacle detection based on point cloud data captured by LIDAR. Features were extracted using a cumulative approach, meaning that each feature was updated at every timeframe as new LIDAR data arrived. It was concluded that an increase in the density of understory vegetation adversely affected the classification performance in correctly detecting solid obstacles. Additionally, a regression-based framework was proposed for estimating understory vegetation density for safe path planning, in which the traversability risk level was treated as a function of the estimated density. Thus, the higher the predicted density of an area, the higher the risk of collision if the AGV traverses that area. Finally, for the trees in the terrain, the dissertation investigated statistical features that can be used in machine learning algorithms to differentiate trees from solid obstacles in the context of forested off-road scenes. Using the proposed extracted features, the classification algorithm was able to generate high-precision results for differentiating trees from solid obstacles. Such differentiation can result in more optimized path planning in off-road applications.
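
    To make the classification step concrete, the hedged sketch below computes simple per-cell height and density features from a LIDAR point cloud and feeds them to a random-forest classifier; the grid size, the three features, and the choice of classifier are illustrative assumptions rather than the dissertation's actual cumulative feature set.

```python
# Hypothetical per-cell features for obstacle vs. vegetation classification.
# Grid size, features, and the random forest are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CELL = 0.5  # assumed grid-cell size in metres

def cell_features(points):
    """points: (N, 3) LIDAR returns; yields one feature vector per occupied ground cell."""
    cells = np.floor(points[:, :2] / CELL).astype(int)
    features = []
    for cell in np.unique(cells, axis=0):
        z = points[(cells == cell).all(axis=1), 2]
        features.append([
            z.max() - z.min(),  # height spread: large for trunks and rocks
            z.std(),            # vertical roughness: vegetation scatters returns
            float(len(z)),      # return density inside the cell
        ])
    return np.array(features)

# Labels: 0 = traversable or penetrable vegetation, 1 = solid obstacle.
classifier = RandomForestClassifier(n_estimators=100)
# classifier.fit(cell_features(training_scan), per_cell_training_labels)
# predictions = classifier.predict(cell_features(new_scan))
```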

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
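
    The final fusion step described above, mapping detections into a global occupancy grid, can be illustrated with a minimal log-odds update; the resolution, map extent, and hit/miss probabilities below are assumed values, not those used in the thesis.

```python
# Minimal log-odds occupancy grid fusion. Resolution, map size, and the
# hit/miss probabilities are assumed; coordinates are taken to be already
# expressed in a map frame with non-negative x and y.
import numpy as np

RESOLUTION = 0.10                  # assumed cell size in metres
L_HIT = np.log(0.7 / 0.3)          # log-odds increment for an obstacle detection
L_MISS = np.log(0.4 / 0.6)         # log-odds decrement for observed free space

grid = np.zeros((500, 500))        # log-odds map covering 50 m x 50 m

def fuse_scan(grid, obstacle_xy, free_xy):
    """Fuse one scan: obstacle_xy and free_xy are (N, 2) map-frame coordinates."""
    for points, delta in ((obstacle_xy, L_HIT), (free_xy, L_MISS)):
        idx = np.floor(points / RESOLUTION).astype(int)
        # np.add.at accumulates correctly even when several points hit one cell.
        np.add.at(grid, (idx[:, 0], idx[:, 1]), delta)
    return grid

def occupancy_probability(grid):
    """Convert the log-odds map back to occupancy probabilities in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-grid))
```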

    Outdoor navigation of mobile robots

    AGVs in the manufacturing industry currently constitute the largest application area for mobile robots. Other applications have been gradually emerging, including various transporting tasks in demanding environments, such as mines or harbours. Most of the new potential applications require a free-ranging navigation system, which means that the path of a robot is no longer bound to follow a buried inductive cable. Moreover, changing the route of a robot or taking a new working area into use must be as effective as possible. These requirements set new challenges for the navigation systems of mobile robots. One of the basic methods of building a free-ranging navigation system is to combine dead reckoning navigation with the detection of beacons at known locations. This approach is the backbone of the navigation systems in this study. The study describes research and development work in the area of mobile robotics, including applications in forestry, agriculture, mining, and transportation in a factory yard. The focus is on describing navigation sensors and methods for position and heading estimation by fusing dead reckoning and beacon detection information. A Kalman filter is typically used here for sensor fusion. Both cases, using either artificial or natural beacons, have been covered. Artificial beacons used in the research and development projects include specially designed flat objects to be detected using a camera as the detection sensor, the GPS satellite positioning system, and passive transponders buried in the ground along the route of a robot. The walls in a mine tunnel have been used as natural beacons. In this case, special attention has been paid to map building and using the map for positioning. The main contribution of the study is in describing the structure of a working navigation system, including positioning and position control. The navigation system for the mining application, in particular, contains some unique features that provide an easy-to-use procedure for taking new production areas into use and make it possible to drive a heavy mining machine autonomously at a speed comparable to that of an experienced human driver.
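
    As a hedged illustration of the dead-reckoning-plus-beacons idea, the sketch below fuses odometry increments with absolute beacon fixes using a linear Kalman filter on 2D position; the noise covariances and the identity measurement model are assumptions, and the systems described in the study use richer vehicle and sensor models.

```python
# Minimal linear Kalman filter fusing dead-reckoning increments (predict step)
# with absolute beacon position fixes (update step). The noise covariances and
# the identity measurement model are assumed values for illustration only.
import numpy as np

Q = np.eye(2) * 0.05   # assumed dead-reckoning (process) noise per step
R = np.eye(2) * 0.20   # assumed beacon measurement noise

def predict(x, P, odometry_delta):
    """Dead reckoning: shift the position estimate by the measured displacement."""
    return x + odometry_delta, P + Q

def update(x, P, beacon_fix):
    """Correct the estimate with an absolute position fix from a known beacon."""
    S = P + R                       # innovation covariance (measurement matrix H = I)
    K = P @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (beacon_fix - x)
    P = (np.eye(2) - K) @ P
    return x, P

# Usage: start at the origin with moderate uncertainty, then alternate steps.
x, P = np.zeros(2), np.eye(2)
x, P = predict(x, P, np.array([0.5, 0.0]))      # wheel odometry says 0.5 m forward
x, P = update(x, P, np.array([0.48, 0.02]))     # beacon fix near that position
```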

    Simultaneous localization and mapping for inspection robots in water and sewer pipe networks: a review

    At the present time, water and sewer pipe networks are predominantly inspected manually. In the near future, smart cities will perform intelligent autonomous monitoring of buried pipe networks, using teams of small robots. These robots, equipped with all necessary computational facilities and sensors (optical, acoustic, inertial, thermal, pressure and others), will be able to inspect pipes whilst navigating, self-localising and communicating information about the pipe condition and faults such as leaks or blockages to human operators for monitoring and decision support. The predominantly manual inspection of pipe networks will be replaced with teams of autonomous inspection robots that can operate for long periods of time over a large spatial scale. Reliable autonomous navigation and reporting of faults at this scale requires effective localization and mapping, which is the estimation of the robot's position and of its surrounding environment. This survey presents an overview of state-of-the-art work on robot simultaneous localization and mapping (SLAM) with a focus on water and sewer pipe networks. It considers various aspects of the SLAM problem in pipes, from the motivation and the water industry requirements to modern SLAM methods, map types and sensors suited to pipes. Future challenges such as robustness for long-term robot operation in pipes are discussed, including how prior knowledge, e.g. from geographic information systems (GIS), can be used to build map estimates and improve multi-robot SLAM in the pipe environment.
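
    SLAM here means jointly estimating where the robot is and what the surrounding pipe looks like. The toy sketch below expresses that joint estimation as a tiny 1-D pose graph along a pipe, combining noisy odometry with one loop-closure constraint via linear least squares; the measurement values are invented, and the surveyed methods are far more sophisticated.

```python
# Toy 1-D pose-graph SLAM: poses along a pipe, relative odometry constraints,
# and one loop closure, solved jointly by linear least squares. Values assumed.
import numpy as np

odometry = [1.0, 1.1, 0.9, -2.9]                  # pose0->1, 1->2, 2->3, 3->0
constraints = [(0, 1), (1, 2), (2, 3), (3, 0)]

A = np.zeros((len(constraints) + 1, 4))
b = np.zeros(len(constraints) + 1)
for row, ((i, j), z) in enumerate(zip(constraints, odometry)):
    A[row, j], A[row, i], b[row] = 1.0, -1.0, z   # constraint: x_j - x_i ~ z
A[-1, 0], b[-1] = 1.0, 0.0                        # anchor pose 0 at the origin

poses, *_ = np.linalg.lstsq(A, b, rcond=None)
print(poses)   # optimised pose positions, reconciling odometry and loop closure
```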

    A review on computer vision based defect detection and condition assessment of concrete and asphalt civil infrastructure

    To ensure the safety and serviceability of civil infrastructure, it is essential to visually inspect and assess its physical and functional condition. This review paper presents the current state of practice for assessing the visual condition of vertical and horizontal civil infrastructure; in particular reinforced concrete bridges, precast concrete tunnels, underground concrete pipes, and asphalt pavements. Since the rate of creation and deployment of computer vision methods for civil engineering applications has been increasing exponentially, the main part of the paper presents a comprehensive synthesis of the state of the art in computer vision based defect detection and condition assessment of concrete and asphalt civil infrastructure. Finally, the current achievements and limitations of existing methods, as well as open research challenges, are outlined to assist both the civil engineering and the computer science research communities in setting an agenda for future research.
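
    As one small, hedged example from the classical image-processing end of the methods such reviews cover, the sketch below flags thin dark structures (candidate cracks) in a grayscale surface image using adaptive thresholding and morphological cleaning; the kernel sizes and threshold parameters are assumed, and the reviewed literature includes far more capable approaches.

```python
# Illustrative classical crack-candidate detection on an 8-bit grayscale image.
# Blur, threshold, and kernel parameters are assumed values for demonstration.
import cv2

def detect_cracks(gray_image):
    """Return a binary mask of thin dark structures in a grayscale surface image."""
    # Suppress fine surface texture while keeping thin dark cracks.
    blurred = cv2.GaussianBlur(gray_image, (5, 5), 0)
    # Cracks are darker than the surrounding surface: adaptive threshold, inverted.
    binary = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    # Remove isolated speckles; elongated crack-like components survive opening.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```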

    Real-Time Context-Aware Computing with Applications in Civil Infrastructure Systems.

    This dissertation contributes a structured understanding of the fundamental processes involved in developing context-aware computing applications for the civil infrastructure industry. The civil infrastructure industry is characterized by mobile human and machine agents actively engaged in real-time decision-making tasks in a dynamic and unstructured workspace environment. This distinguishes context-aware computing from other computing technologies in three aspects: 1) it has the ability to perceive, interpret, and adapt to the agent's evolving workspace; 2) it streamlines project data and presents the agent with information pertinent to its context, eliminating the need for the agent to retrieve it; and 3) by leveraging contextual information, it supplements decision-making tasks in real time. This research has successfully investigated technical approaches to address fundamental aspects of introducing context-aware applications to civil engineering, including: the ubiquitous localization of mobile agents in dynamic, unstructured environments; the abstraction of the spatial context and identification of the objects of interest to the agent; and the suitability of standard models for managing and organizing data for context-aware computing applications. A computational framework for designing context-aware applications to support real-time decision-making has also been implemented. The framework allows researchers and other end users to leverage currently available context-sensing technology to design and implement innovative solutions to domain-specific problems. The researched methods have been validated through several experiments conducted at the University of Michigan, the National Institute of Standards and Technology, and the Michigan Department of Transportation. These experiments have resulted in the implementation of several applications, supporting real-life decision-making tasks, that not only serve to illustrate the usefulness of the framework, but also have significant social and economic implications. Among these applications are a controlled drilling system that warns drilling personnel when the drill bit tip is about to strike rebar or utility lines, thus helping preserve the structural integrity of concrete decks and preventing utility strike accidents; an automated fault detection system that diagnoses faulty components of an underperforming HVAC distribution network; and an innovative bridge inspection solution that supports condition assessment decision-making, thus introducing objectivity to visual condition assessment by providing concurrence with the Structural Health Monitoring data.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/99816/1/akulaman_1.pd
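
    One ingredient named above, abstracting spatial context by identifying the objects of interest around a localized agent, can be sketched as a simple proximity query over a model of tagged objects; the object data, the context radius, and the data layout below are hypothetical and stand in for whatever project model a real deployment would use.

```python
# Hypothetical spatial-context query: given a localized agent, return nearby
# objects from a simple in-memory project model. Object data, the context
# radius, and the dataclass layout are illustrative assumptions.
from dataclasses import dataclass
from math import hypot

@dataclass
class SiteObject:
    name: str
    kind: str        # e.g. "rebar", "duct", "column"
    x: float         # plan coordinates in metres
    y: float

def spatial_context(agent_xy, objects, radius=2.0):
    """Return the objects within `radius` metres of the agent, nearest first."""
    ax, ay = agent_xy
    nearby = [(hypot(obj.x - ax, obj.y - ay), obj) for obj in objects]
    return [obj for dist, obj in sorted(nearby, key=lambda pair: pair[0]) if dist <= radius]

# Example: warn if any rebar lies within an assumed 0.05 m drilling safety radius.
model = [SiteObject("R-12", "rebar", 3.02, 1.50), SiteObject("D-3", "duct", 7.80, 2.10)]
hazards = [o for o in spatial_context((3.00, 1.50), model, radius=0.05) if o.kind == "rebar"]
if hazards:
    print("Warning: drill bit near", ", ".join(o.name for o in hazards))
```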

    Secretin interactions in the type II secretion system

    PhD
    The type II secretion system (T2SS) is the major terminal branch of the general secretory pathway. It is composed of 12-15 proteins, most in multiple copies, and spans the inner and outer membranes of Gram-negative bacteria. The T2SS secretin subunits form a large dodecameric torus-like structure in the outer membrane. The secretin is the only essential component in the outer membrane, and secreted proteins and virulence factors pass through the pore in the toroidal secretin dodecamer and out into the environment. The interaction between the secretin and its partners plays a key role in regulation of the T2SS. The interaction between the so-called homology region of the inner-membrane protein GspC (GspC-HR) and the secretin provides the structural and functional integrity of the secretion machinery across the two cell membranes. The interaction between the secretin and its pilotin translocates the secretin subunits to the outer membrane. In this thesis, the interactions between the secretin and its partners are studied at the molecular level. The GspC-HR structure is solved using NMR spectroscopy. Its interaction with the secretin (GspD) is elucidated using several biochemical and biophysical approaches, and a model of the complex is proposed. Also, the interaction between the secretin (GspD) and the pilotin (GspS) is further characterised. An 18-residue secretin sequence is identified as responsible for interacting with the pilotin. Upon binding to the pilotin, the unstructured secretin sequence forms a helical structure.
    MRC NMR centre (NIMR), Medical Research Council; National Institute for Medical Research (NIMR) N.M.R. Centr