2,569 research outputs found

    Acoustic Echo Estimation using the model-based approach with Application to Spatial Map Construction in Robotics


    Robots for Exploration, Digital Preservation and Visualization of Archeological Sites

    Monitoring and conservation of archaeological sites are important activities necessary to prevent damage to, or perform restoration on, cultural heritage. Standard techniques, such as mapping and digitizing, are typically used to document the status of such sites. While these tasks are normally accomplished manually by humans, this is not possible in hard-to-access areas. For example, due to the possibility of structural collapse, underground tunnels such as catacombs are considered highly unstable environments. Moreover, they are filled with radon, a radioactive gas that limits human presence to only a few minutes. Recent progress in artificial intelligence and robotics has opened new possibilities for using mobile robots in locations that humans are not allowed to enter. The ROVINA project aims at developing autonomous mobile robots to make the monitoring of archaeological sites faster, cheaper, and safer. ROVINA will be evaluated in the catacombs of Priscilla (in Rome) and S. Gennaro (in Naples).

    Surface and Sub-Surface Analyses for Bridge Inspection

    The development of bridge inspection solutions has been discussed in the recent past. This dissertation proposes significant developments and improvements on the state of the art in bridge inspection using multiple sensors (e.g., ground penetrating radar (GPR) and visual sensors). The first part of this research (discussed in chapter 3) focuses on developing effective and novel methods for the detection and localization of steel rebars in sub-surface bridge inspection. The data were collected with a Ground Penetrating Radar (GPR) sensor on real bridge decks. In this regard, a number of different approaches have been successively developed that continue to improve the state of the art in this particular research area. The second part of this research (discussed in chapter 4) deals with the development of an automated steel bridge defect detection system using a Multi-Directional Bicycle Robot. The training data were acquired from actual bridges in Vietnam, and validation is performed on data collected with the Bicycle Robot from an actual bridge located on Highway-80, Lovelock, Nevada, USA. A number of the proposed methods are discussed in chapter 4. The final chapter of the dissertation concludes the findings from the different parts and discusses ways of improving on the existing work in the near future.

    Design, Construction, Energy Modeling, and Navigation of a Six-Wheeled Differential Drive Robot to Deliver Medical Supplies inside Hospitals

    Differential drive mobile robots have been among the most ubiquitous kinds of robots for the last few decades. Because each wheel of a differential drive mobile robot can be controlled independently, such robots give end-users additional flexibility in creating new applications. These applications include personal assistance, security, warehouse and distribution tasks, ocean and space exploration, etc. In a clinic or hospital, the delivery of medicines and patients’ records is a frequently needed activity, and medical personnel often find it repetitive and time-consuming. Our research was to design, construct, produce an energy model of, and develop a navigation control method for a six-wheeled differential drive robot intended to deliver medical supplies inside hospitals. Such a robot is expected to lessen the workload of medical staff. Therefore, the design and implementation of a six-wheeled differential drive robot with a password-protected medicine carrier are presented. This password-protected carrier ensures that only authorized medical personnel can receive medical supplies. The low-cost robot base and the medicine carrier were physically built. Besides the actual robot design and fabrication, a kinematic model of the robot was developed, and a navigation control algorithm to avoid obstacles was implemented using MATLAB/Simulink. The kinematic model helps the robot achieve better energy optimization. To develop the obstacle avoidance algorithm, we investigated the use of the Robot Operating System (ROS) and the Simultaneous Localization and Mapping (SLAM) algorithm for the mapping and navigation of a robotic platform named TurtleBot 2. Finally, using the Webots robot simulator, the navigation of the six-wheeled mobile robot was demonstrated in a hospital-like simulation environment.
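
    As a rough, hedged illustration of the kinematic model mentioned above (developed in MATLAB/Simulink in the thesis), the Python sketch below implements the standard differential-drive (unicycle) kinematics such a model builds on, integrating the robot pose from the two wheel speeds. The wheel radius, track width, speeds, and time step are hypothetical values, not parameters from the thesis.

```python
import math

def diff_drive_step(x, y, theta, omega_l, omega_r, r, L, dt):
    """Advance a differential-drive pose (x, y, theta) by one time step.

    omega_l, omega_r : left/right wheel angular velocities [rad/s]
    r                : wheel radius [m]
    L                : track width, i.e. distance between the wheel contact lines [m]
    dt               : integration step [s]
    """
    v = r * (omega_r + omega_l) / 2.0   # forward speed of the chassis
    w = r * (omega_r - omega_l) / L     # yaw rate of the chassis
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Hypothetical example: equal wheel speeds drive the robot straight for 1 s.
pose = (0.0, 0.0, 0.0)
for _ in range(100):                    # 100 steps of 10 ms
    pose = diff_drive_step(*pose, omega_l=5.0, omega_r=5.0, r=0.1, L=0.4, dt=0.01)
print(pose)                             # approx (0.5, 0.0, 0.0)
```

    For a six-wheeled skid-steer base, the same unicycle approximation is commonly used, with L taken as an effective track width.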

    Road Surface Feature Extraction and Reconstruction of Laser Point Clouds for Urban Environment

    Automakers are developing end-to-end three-dimensional (3D) mapping systems for Advanced Driver Assistance Systems (ADAS) and autonomous vehicles (AVs), using geomatics, artificial intelligence, and SLAM (Simultaneous Localization and Mapping) to handle all stages of map creation, sensor calibration, and alignment. It is crucial that such a system be highly accurate and efficient, as it is an essential part of vehicle control. Such mapping requires significant resources, including geographic information (GIS and GPS), optical laser and radar spectroscopy, Lidar, and 3D modeling applications, in order to extract roadway features (e.g., lane markings, traffic signs, road edges) detailed enough to construct a “base map”. To keep this map current, it must be updated with changes caused by events such as construction, altered traffic patterns, or growth of vegetation. Road information plays a very important role in road traffic safety and is essential for guiding autonomous vehicles (AVs) and predicting upcoming road situations. Because the level of information provided by the different sensor modalities makes the map data extensive, this thesis presents a method for data optimization and extraction from three-dimensional (3D) mobile laser scanning (MLS) point clouds. The research shows that the proposed hybrid filter configuration, together with the dynamic mechanism developed, provides a significant reduction of the point cloud data under computational and size constraints. The results obtained in this work are validated on a real-world system.
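
    The hybrid filter configuration itself is not described in this abstract, so as an illustrative stand-in the sketch below shows one common way of reducing mobile laser scanning point clouds: voxel-grid downsampling with NumPy, which replaces all points falling in a voxel by their centroid. The voxel size and the random input cloud are hypothetical.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by keeping one centroid per occupied voxel.

    points     : (N, 3) array of x, y, z coordinates [m]
    voxel_size : edge length of the cubic voxels [m]
    """
    # Integer voxel index of every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that fall into the same voxel.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)  # flatten for consistent indexing across NumPy versions
    # Average the coordinates inside each voxel (the centroid).
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Hypothetical example: one million random points reduced with 0.5 m voxels.
cloud = np.random.rand(1_000_000, 3) * 50.0
reduced = voxel_downsample(cloud, voxel_size=0.5)
print(cloud.shape, "->", reduced.shape)
```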

    Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting on the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop.

    On-plate autonomous exploration for an inspection robot using ultrasonic guided waves

    Autonomous robotic exploration is a major research topic in robotics, concerned with how to decide on the next actions so as to maximize information gain and minimize cost. In this work, we develop an active-sensing strategy based on frontier-based exploration to enable the autonomous reconstruction of the geometry of a metal surface by a mobile robot relying on ultrasonic echoes. Such a strategy can be beneficial to the development of a fully autonomous robotic agent for the inspection of large metal structures such as storage tanks and ship hulls. Our exploration strategy relies on an occupancy grid generated by detecting the first echo of the signal, corresponding to the edge closest to the sensor, and it employs a utility function that we define to balance travel cost and information gain using the estimated plate geometry. The sensor is then directed to the next best location. In simulation, the developed method is evaluated and compared with multiple algorithms, namely closest and random frontier-point selection. Finally, an experiment using a mobile robot equipped with a co-localized emitter/receiver pair of transducers validates the viability of the proposed approach.
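
    The utility function used in the paper depends on the estimated plate geometry, which the abstract does not spell out; as a hedged sketch of the general idea of balancing expected information gain against travel cost when choosing the next frontier, the snippet below uses a simple linear trade-off with a hypothetical weight lam and a Euclidean travel cost.

```python
import math

def select_next_frontier(robot_xy, frontiers, info_gain, lam=0.5):
    """Pick the frontier maximizing utility = expected gain - lam * travel cost.

    robot_xy  : (x, y) current sensor position
    frontiers : list of (x, y) candidate frontier positions
    info_gain : dict mapping each frontier to its expected information gain
    lam       : hypothetical weight trading gain against travel cost
    """
    def utility(f):
        cost = math.dist(robot_xy, f)   # straight-line travel cost
        return info_gain[f] - lam * cost
    return max(frontiers, key=utility)

# Hypothetical example with three candidate frontiers on an occupancy grid.
frontiers = [(1.0, 0.0), (3.0, 4.0), (0.0, 2.0)]
gains = {(1.0, 0.0): 0.8, (3.0, 4.0): 2.5, (0.0, 2.0): 1.0}
print(select_next_frontier((0.0, 0.0), frontiers, gains))   # -> (1.0, 0.0)
```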

    Autonomous robot systems and competitions: proceedings of the 12th International Conference

    This is the 2012 edition of the proceedings of the scientific meeting of the Portuguese Robotics Open (ROBOTICA'2012). It aims to disseminate scientific contributions and to promote discussion of theories, methods, and experiences in areas of relevance to Autonomous Robotics and Robotic Competitions. All accepted contributions are included in this proceedings book. The conference program also included an invited talk by Dr.ir. Raymond H. Cuijpers, from the Department of Human Technology Interaction of Eindhoven University of Technology, Netherlands. The conference is kindly sponsored by the IEEE Portugal Section / IEEE RAS Chapter and SPR - Sociedade Portuguesa de Robótica.

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
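
    To make the event-stream output format concrete, the minimal sketch below defines an event record carrying the timestamp, pixel location, and polarity (sign of the brightness change), and accumulates a batch of events into a per-pixel frame. The sensor resolution and the example events are hypothetical, not data from the survey.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    t: float   # timestamp in seconds (real sensors resolve microseconds)
    x: int     # pixel column
    y: int     # pixel row
    p: int     # polarity: +1 for a brightness increase, -1 for a decrease

def accumulate(events, height, width):
    """Sum event polarities per pixel to form a simple event frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += e.p
    return frame

# Hypothetical 240x180 sensor and a handful of events.
events = [Event(1e-6, 10, 5, +1), Event(3e-6, 10, 5, +1), Event(7e-6, 42, 17, -1)]
print(accumulate(events, height=180, width=240)[5, 10])   # -> 2
```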