
    Automated Global Feature Analyzer - A Driver for Tier-Scalable Reconnaissance

    For the purposes of space flight, reconnaissance field geologists have trained to become astronauts. However, the initial forays to Mars and other planetary bodies have been carried out by purely robotic craft. Training and equipping a robotic craft with the sensory and cognitive capabilities of a field geologist, to form a "science craft", is therefore a necessary prerequisite. Numerous steps are necessary before a science craft can map, analyze, and characterize a geologic field site, as well as effectively formulate working hypotheses. We report on the continued development of the integrated software system AGFA (Automated Global Feature Analyzer®), originated by Fink and his collaborators at Caltech in 2001. AGFA is an automatic, feature-driven target characterization system that operates in an imaged operational area, such as a geologic field site on a remote planetary surface. AGFA performs automated target identification and detection through segmentation, providing feature extraction, classification, and prioritization within mapped or imaged operational areas at different length scales and resolutions, depending on the vantage point (e.g., spaceborne, airborne, or ground). AGFA extracts features such as target size, color, albedo, vesicularity, and angularity. Based on the extracted features, AGFA summarizes the mapped operational area numerically and flags targets of "interest", i.e., targets that exhibit sufficient anomaly within the feature space. AGFA enables automated science analysis aboard robotic spacecraft and, embedded in tier-scalable reconnaissance mission architectures, is a driver of future intelligent and autonomous robotic planetary exploration.
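The anomaly-flagging step described above can be illustrated as a distance test in feature space. This is only a minimal sketch of the idea, not AGFA's actual method; the feature names and the threshold are hypothetical.

```python
def flag_anomalous_targets(targets, threshold=2.0):
    """Flag targets whose feature vector deviates strongly from the
    mean of the mapped operational area.

    targets: list of dicts with a "name" key plus numeric features
    (e.g. size, albedo).  Returns names of targets whose z-score
    distance exceeds the (illustrative) threshold."""
    keys = sorted(targets[0].keys() - {"name"})
    n = len(targets)
    means = {k: sum(t[k] for t in targets) / n for k in keys}
    # Population standard deviation per feature; guard against zero spread.
    stds = {k: (sum((t[k] - means[k]) ** 2 for t in targets) / n) ** 0.5 or 1.0
            for k in keys}
    flagged = []
    for t in targets:
        # Euclidean distance in standardized (z-scored) feature space.
        d = sum(((t[k] - means[k]) / stds[k]) ** 2 for k in keys) ** 0.5
        if d > threshold:
            flagged.append(t["name"])
    return flagged
```

A target like a large, bright rock among small dark ones ends up far from the feature-space mean and is flagged as "interesting".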

    Novel Pole Photogrammetric System for Low-Cost Documentation of Archaeological Sites: The Case Study of “Cueva Pintada”

    Close-range photogrammetry is a powerful and widely used technique for 3D reconstruction of archaeological environments, specifically when a high level of detail is required. This paper presents an innovative low-cost system that allows high-quality, detailed reconstructions of complex indoor scenarios with unfavorable lighting conditions by means of close-range nadir and oblique images, as an alternative to drone acquisitions for those places where the use of drones is limited or discouraged: (i) indoor scenarios in which both loss of GNSS signal and the need for long exposure times occur, (ii) scenarios with risk of raising dust in suspension due to the proximity to the ground, and (iii) complex scenarios with nooks and vertical elements of different heights. The low-altitude aerial view reached with this system allows high-quality 3D documentation of complex scenarios, helped by its ergonomic design, self-stability, lightness, and flexibility of handling. In addition, its interchangeable, remote-controlled support allows different sensors to be mounted and permits both acquisitions that follow the ideal photogrammetric epipolar geometry and acquisitions with geometry variations that favor a more complete and reliable reconstruction by avoiding occlusions. This versatile pole photogrammetry system has been successfully used to 3D reconstruct and document the “Cueva Pintada” archaeological site in Gran Canaria (Spain), covering approximately 5400 m², with a Canon EOS 5D Mark II SLR digital camera. As final products, the following were obtained: (i) a high-quality photorealistic 3D model with 1.47 mm resolution and ±8.4 mm accuracy, (ii) detailed orthophotos of the main assets of the archaeological remains, and (iii) a 3D viewer with associated information on the structures, materials, and plans of the site.
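The millimetre-level resolution quoted above is governed by the standard close-range photogrammetry relation between pixel size, focal length, and camera-to-object distance. The sketch below uses that textbook relation with hypothetical sensor values, not the paper's actual camera parameters.

```python
def ground_sample_distance(pixel_size_mm, focal_length_mm, distance_mm):
    """Standard photogrammetric relation:
    GSD = pixel size * object distance / focal length.
    All inputs in millimetres; returns the GSD in millimetres."""
    return pixel_size_mm * distance_mm / focal_length_mm

# Hypothetical setup: 6.4 um pixels, 24 mm lens, camera pole 5 m above the floor.
gsd = ground_sample_distance(0.0064, 24.0, 5000.0)
```

Lowering the camera (the pole's low-altitude advantage) shrinks the distance term and therefore the GSD, which is why a pole rig can out-resolve a drone flying higher with the same camera.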

    SLAM algorithm applied to robotics assistance for navigation in unknown environments

    Background: The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI).
    Methods: A sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn left, turn right, stop, start, and exit. A kinematic controller for the mobile robot was implemented, along with a low-level behaviour strategy to avoid collisions with the environment and moving agents.
    Results: The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees, and two young normally limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface.
    Conclusions: The integration of a highly demanding processing algorithm (SLAM) with an MCI, and real-time communication between both, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose role could be reduced to choosing robot destinations. Also, the mobile robot shares the kinematic model of a motorized wheelchair, an advantage that can be exploited for autonomous wheelchair navigation.
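The sequential EKF cycle at the core of such a SLAM algorithm can be illustrated in one dimension. This scalar sketch is not the paper's implementation (which maintains a joint state of robot pose plus line and corner features); it only shows the predict/update structure the abstract refers to.

```python
def ekf_predict(x, P, u, Q):
    """Motion step: move by commanded displacement u; Q is the
    motion noise variance, so uncertainty grows."""
    return x + u, P + Q

def ekf_update(x, P, z, landmark, R):
    """Correction step: z is a measured range to a landmark at a
    known position; R is the measurement noise variance."""
    z_pred = landmark - x          # expected range measurement
    H = -1.0                       # Jacobian d z_pred / d x
    S = H * P * H + R              # innovation covariance
    K = P * H / S                  # Kalman gain
    x = x + K * (z - z_pred)       # correct the state
    P = (1.0 - K * H) * P          # shrink the uncertainty
    return x, P
```

In the full 2D algorithm the scalars become a state vector and covariance matrix, and each observed line or corner feature contributes one such update, "sequentially" processed per measurement.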

    Advances in Robot Navigation

    Robot navigation includes different interrelated activities: perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot to select the next direction to go; mapping - the construction of a spatial representation using the sensory information perceived; localization - the strategy to estimate the robot's position within the spatial map; path planning - the strategy to find a path towards a goal location, optimal or not; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the above-mentioned activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation are drawn from nature, and diverse applications are described in the context of an important field of study: social robotics.

    Mobile Robots - Past, Present and Future


    Autonomous aerial robot for high-speed search and intercept applications

    In recent years, high-speed navigation and environment interaction in the context of aerial robotics have become a field of interest for several academic and industrial research studies. In particular, Search and Intercept (SaI) applications for aerial robots pose a compelling research area due to their potential usability in several environments. Nevertheless, SaI tasks involve challenging development regarding sensor weight, onboard computational resources, actuation design, and algorithms for perception and control, among others. In this work, a fully autonomous aerial robot for high-speed object grasping is proposed. As an additional subtask, our system is able to autonomously pierce balloons located on poles close to the surface. Our first contribution is the design of the aerial robot at the actuation and sensory level, consisting of a novel gripper design with additional sensors enabling the robot to grasp objects at high speed. The second contribution is a complete software framework consisting of perception, state estimation, motion planning, motion control, and mission control in order to rapidly and robustly perform the autonomous grasping mission. Our approach has been validated in a challenging international competition, showing outstanding results: the robot autonomously searched for, followed, and grasped a moving object at 6 m/s in an outdoor environment. (Funded by the Agencia Estatal de Investigación and Khalifa University.)

    Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera

    The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis delves into LiDAR's advancements in autonomous robotic systems, with a focus on its role in simultaneous localization and mapping (SLAM) methodologies and LiDAR-as-a-camera tracking of Unmanned Aerial Vehicles (UAVs). Our contributions span two primary domains: the Multi-Modal LiDAR SLAM Benchmark and LiDAR-as-a-camera UAV tracking. In the former, we have expanded our previous multi-modal LiDAR dataset by adding data sequences from additional scenarios. In contrast to the previous dataset, we employ different ground-truth-generating approaches: we propose a new multi-modal, multi-LiDAR, SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps, and we supplement our data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images built from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud-only or image-only methods. Additionally, we evaluated the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform. Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
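The "LiDAR-as-a-camera" input representation mentioned above is built by spherically projecting the point cloud into a panoramic reflectivity image. The sketch below follows the usual spinning-LiDAR range-image layout; the image size and vertical field of view are illustrative, not the thesis's actual sensor parameters.

```python
import math

def spherical_projection(points, width=1024, height=64,
                         fov_up_deg=16.6, fov_down_deg=-16.6):
    """Project (x, y, z, reflectivity) LiDAR points into a panoramic
    reflectivity image of shape height x width."""
    fov_down = math.radians(fov_down_deg)
    fov = math.radians(fov_up_deg) - fov_down
    image = [[0.0] * width for _ in range(height)]
    for x, y, z, refl in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0:
            continue
        yaw = math.atan2(y, x)      # horizontal angle, [-pi, pi]
        pitch = math.asin(z / r)    # vertical angle
        # Map yaw to a column and pitch to a row of the panorama.
        u = int(0.5 * (1.0 - yaw / math.pi) * width) % width
        v = int((1.0 - (pitch - fov_down) / fov) * height)
        v = min(max(v, 0), height - 1)
        image[v][u] = refl          # store reflectivity as the pixel value
    return image
```

The resulting grayscale panorama can then be fed to an ordinary image detector (such as the custom YOLOv5 model described above) as if it came from a camera.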

    System Development of an Unmanned Ground Vehicle and Implementation of an Autonomous Navigation Module in a Mine Environment

    There are numerous benefits to the insights gained from the exploration and exploitation of underground mines. There are also great risks and challenges involved, such as accidents that have claimed many lives. To avoid these accidents, inspections of large mines are carried out by miners, which is not always economically feasible and puts the inspectors' safety at risk. Despite progress in the development of robotic systems and of autonomous navigation, localization, and mapping algorithms, these environments remain particularly demanding for such systems. The successful implementation of an autonomous unmanned system will allow mine workers to determine the structural integrity of the roof and pillars through the generation of high-fidelity 3D maps. These maps will allow miners to respond rapidly to emerging hazards with proactive measures, such as sending workers to build or rebuild support structures to prevent accidents. The objective of this research is the development, implementation, and testing of a robust unmanned ground vehicle (UGV) that will operate in mine environments for extended periods of time. To achieve this, a custom skid-steer four-wheeled UGV was designed to operate in these challenging underground mine environments. To navigate these environments autonomously, the UGV employs a Light Detection and Ranging (LiDAR) sensor and a tactical-grade inertial measurement unit (IMU) for localization and mapping through a tightly-coupled LiDAR Inertial Odometry via Smoothing and Mapping framework (LIO-SAM). The autonomous navigation module was implemented based upon a fast likelihood-based collision-avoidance framework, with an extension to human-guided navigation, and a terrain traversability analysis framework. In order to successfully operate and generate high-fidelity 3D maps, the system was rigorously tested in different environments and terrain to verify its robustness. To assess its capabilities, several localization, mapping, and autonomous navigation missions were carried out in a coal mine environment. These tests allowed for the verification and tuning of the system so that it could successfully navigate autonomously and generate high-fidelity maps.
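A terrain traversability analysis of the kind named above typically reduces to per-cell geometric checks on an elevation grid. The toy sketch below (a simple slope test, not the framework actually used in this work) shows the core idea: mark a cell untraversable when the rise to any neighbour exceeds what the vehicle's slope limit allows.

```python
import math

def traversable_cells(elev, cell_size, max_slope_deg=20.0):
    """elev: 2-D list of cell elevations in metres; cell_size in metres.
    A cell is traversable when the slope to every 4-neighbour stays
    under max_slope_deg (an illustrative limit, not the UGV's spec)."""
    rows, cols = len(elev), len(elev[0])
    # Maximum elevation change allowed between adjacent cells.
    max_rise = cell_size * math.tan(math.radians(max_slope_deg))
    ok = [[True] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    if abs(elev[ni][nj] - elev[i][j]) > max_rise:
                        ok[i][j] = False
    return ok
```

Real frameworks add further terms (roughness, step height, clearance), but they share this structure: a local geometric test per map cell feeding a cost map for the planner.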

    Study on quality in 3D digitisation of tangible cultural heritage: mapping parameters, formats, standards, benchmarks, methodologies and guidelines: final study report.

    This study was commissioned by the European Commission to help advance 3D digitisation across Europe and thereby support the objectives of the Recommendation on a common European data space for cultural heritage (C(2021) 7953 final), adopted on 10 November 2021. The Recommendation encourages Member States to set up digital strategies for cultural heritage that set clear digitisation and digital preservation goals, aiming at higher quality through the use of advanced technologies, notably 3D. The aim of the study is to map the parameters, formats, standards, benchmarks, methodologies, and guidelines relating to 3D digitisation of tangible cultural heritage. The overall objective is to further the quality of 3D digitisation projects by enabling cultural heritage professionals, institutions, content developers, stakeholders, and academics to define and produce high-quality digitisation standards for tangible cultural heritage. This study identifies key parameters of the digitisation process, estimates their relative complexity, and examines how they are linked to technology and how they affect quality and its various factors. It also identifies standards and formats used for 3D digitisation, including data types, data formats, and metadata schemas for 3D structures. Finally, the study forecasts the potential impacts of future technological advances on 3D digitisation.

    Long Distance GNSS-Denied Visual Inertial Navigation for Autonomous Fixed Wing Unmanned Air Vehicles: SO(3) Manifold Filter based on Virtual Vision Sensor

    This article proposes a visual inertial navigation algorithm intended to diminish the horizontal position drift experienced by autonomous fixed-wing UAVs (Unmanned Air Vehicles) in the absence of GNSS (Global Navigation Satellite System) signals. In addition to accelerometers, gyroscopes, and magnetometers, the proposed navigation filter relies on the accurate incremental displacement outputs generated by a VO (Visual Odometry) system, denoted here as a Virtual Vision Sensor or VVS, which relies on images of the Earth's surface taken by an onboard camera and is itself assisted by the filter's inertial estimations. Although not a full replacement for a GNSS receiver, since its position observations are relative instead of absolute, the proposed system enables major reductions in GNSS-denied attitude and position estimation errors. In order to minimize the accumulation of errors in the absence of absolute observations, the filter is implemented in the manifold of rigid body rotations, SO(3). Stochastic high-fidelity simulations of two representative scenarios involving the loss of GNSS signals are employed to evaluate the results. The authors release the C++ implementation of both the visual inertial navigation filter and the high-fidelity simulation as open-source software.
    Comment: 27 pages, 14 figures. arXiv admin note: substantial text overlap with arXiv:2205.1324
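A standard way to realize such an SO(3) manifold filter (a general sketch of the common multiplicative formulation, not equations taken from the article) propagates the attitude with the SO(3) exponential map and applies corrections multiplicatively:

```latex
% Attitude propagation from gyro rate \omega_k and gyro bias estimate b_g:
R_{k+1} = R_k \, \mathrm{Exp}\big( (\boldsymbol{\omega}_k - \mathbf{b}_g)\,\Delta t \big),
\qquad
\mathrm{Exp}(\boldsymbol{\phi}) = I
  + \frac{\sin\lVert\boldsymbol{\phi}\rVert}{\lVert\boldsymbol{\phi}\rVert}\,[\boldsymbol{\phi}]_\times
  + \frac{1-\cos\lVert\boldsymbol{\phi}\rVert}{\lVert\boldsymbol{\phi}\rVert^2}\,[\boldsymbol{\phi}]_\times^2
% Measurement updates correct the attitude through a small error rotation
% \delta\boldsymbol{\theta} estimated by the filter in the tangent space:
R \leftarrow R \, \mathrm{Exp}(\delta\boldsymbol{\theta})
```

Because the error state lives in the tangent space of SO(3), this formulation avoids Euler-angle singularities and quaternion normalization constraints, which is the usual motivation for implementing the filter on the manifold.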