    Human-Robot Collaboration for Effective Bridge Inspection in the Artificial Intelligence Era

    Advancements in sensor, Artificial Intelligence (AI), and robotic technologies have formed a foundation to enable a transformation from traditional engineering systems to complex adaptive systems. This paradigm shift will bring exciting changes to civil infrastructure systems and their builders, operators, and managers. Funded by the INSPIRE University Transportation Center (UTC), Dr. Qin's group investigated the holism of an AI-robot-inspector system for bridge inspection. Dr. Qin will discuss the need for close collaboration among the constituent components of the AI-robot-inspector system. In drone-based bridge inspection, the mobile robotic inspection platform rapidly collects large volumes of inspection video data that must be processed prior to element-level inspection. She will illustrate how human intelligence and artificial intelligence can collaborate in creating an AI model both efficiently and effectively. Obtaining a large amount of expert-annotated data for model training is undesirable, if not unrealistic, in bridge inspection. This INSPIRE project addressed the annotation challenge by developing a semi-supervised self-learning (S3T) algorithm that uses a small amount of time and guidance from inspectors to help the model achieve excellent performance. The project evaluated the improvement in job efficacy produced by the developed AI model. The presentation will conclude by introducing ongoing work toward adapting AI models to new or revised bridge inspection tasks, motivated by the National Bridge Inventory's more than 600,000 bridges of varying material, shape, and age.
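The semi-supervised idea described above can be illustrated with a toy self-training sketch. This is an illustrative assumption, not the project's S3T algorithm: a 1-D threshold classifier stands in for the deep inspection model, and confident pseudo-labels on unlabeled data stand in for the inspector-guided labeling loop.

```python
def train_threshold(xs, ys):
    """Fit a decision threshold t: predict class 1 when x >= t."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(xs):
        acc = sum((x >= t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def confidence(x, t):
    """Distance from the decision boundary as a crude confidence proxy."""
    return abs(x - t)

def self_train(labeled, unlabeled, rounds=3, tau=1.0):
    """Train on a small labeled set, then repeatedly pseudo-label
    high-confidence unlabeled samples and retrain."""
    xs = [x for x, _ in labeled]
    ys = [y for _, y in labeled]
    t = train_threshold(xs, ys)
    pool = list(unlabeled)
    for _ in range(rounds):
        confident = [x for x in pool if confidence(x, t) >= tau]
        if not confident:
            break
        # Pseudo-label the confident samples with the current model's prediction.
        xs += confident
        ys += [int(x >= t) for x in confident]
        pool = [x for x in pool if x not in confident]
        t = train_threshold(xs, ys)
    return t

# Four expert-labeled samples plus an unlabeled pool.
labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [0.5, 1.5, 8.5, 9.5, 5.0]
t = self_train(labeled, unlabeled)
```

The point of the sketch is the loop structure: each round spends no new expert effort, only the model's own confident predictions, which mirrors how a small amount of inspector guidance can be amplified.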

    Cataloging Public Objects Using Aerial and Street-Level Images – Urban Trees

    Each corner of the inhabited world is imaged from multiple viewpoints with increasing frequency. Online map services like Google Maps or Here Maps provide direct access to huge amounts of densely sampled, georeferenced images from street-view and aerial perspectives. There is an opportunity to design computer vision systems that will help us search, catalog, and monitor public infrastructure, buildings, and artifacts. We explore the architecture and feasibility of such a system. The main technical challenge is combining test-time information from multiple views of each geographic location (e.g., aerial and street views). We implement two modules: det2geo, which detects the set of locations of objects belonging to a given category, and geo2cat, which computes the fine-grained category of the object at a given location. We introduce a solution that adapts state-of-the-art CNN-based object detectors and classifiers. We test our method on "Pasadena Urban Trees", a new dataset of 80,000 trees with geographic and species annotations, and show that combining multiple views significantly improves both tree detection and tree species classification, rivaling human performance.
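One simple way to combine per-view evidence for geo2cat-style fine-grained classification is to multiply the class-probability vectors from each view and renormalize, assuming the views are conditionally independent. This is a minimal sketch of that fusion step; the probability values and class names below are illustrative stand-ins, not outputs of the paper's CNNs.

```python
def fuse_views(view_probs):
    """Fuse per-view class-probability dicts by elementwise product
    followed by renormalization (naive-Bayes-style late fusion)."""
    classes = set().union(*view_probs)
    scores = {c: 1.0 for c in classes}
    for probs in view_probs:
        for c in classes:
            # Small floor so a class missing from one view is not zeroed out.
            scores[c] *= probs.get(c, 1e-6)
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Hypothetical species posteriors from an aerial and a street-level classifier.
aerial = {"oak": 0.4, "palm": 0.35, "pine": 0.25}
street = {"oak": 0.6, "palm": 0.2, "pine": 0.2}
fused = fuse_views([aerial, street])
best = max(fused, key=fused.get)
```

Product fusion rewards classes that both views agree on, which is the intuition behind the reported gains from combining aerial and street imagery.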

    Vision-Based Damage Localization Method for an Autonomous Robotic Laser Cladding Process

    Currently, damage identification and localization in remanufacturing is a manual visual task. It is time-consuming, labour-intensive, and can result in an imprecise repair. To mitigate this, an automatic vision-based damage localization method is proposed in this paper that integrates a camera into a robotic laser cladding repair cell. Two case studies analyzing different configurations of the Faster Region-based Convolutional Neural Network (Faster R-CNN) are performed. This research aims to select the most suitable configuration to localize the wear on damaged fixed bends. Images were collected for training and testing the R-CNN, and the results of this study indicated a decreasing trend in training and validation losses and a mean average precision (mAP) of 88.7%.
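The mAP figure reported above is the mean over classes of average precision (AP), the area under the precision-recall curve built from confidence-ranked detections. A minimal rectangle-rule AP computation is sketched below; the example detections are illustrative, and this is not necessarily the exact evaluation protocol used in the paper.

```python
def average_precision(scored, n_positives):
    """AP via the rectangle rule over the precision-recall curve.

    scored: list of (confidence, is_true_positive) per detection.
    n_positives: number of ground-truth objects."""
    scored = sorted(scored, key=lambda s: -s[0])  # rank by confidence
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in scored:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / n_positives
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# Four hypothetical detections against three ground-truth damage regions.
ap = average_precision(
    [(0.9, True), (0.8, True), (0.7, False), (0.6, True)],
    n_positives=3,
)
```

mAP is then the unweighted mean of this quantity across damage classes.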