
    Robust mobile robot localization based on security laser scanner

    This paper addresses the development of a new localization system based on the security laser present on most AGVs for safety reasons. An enhanced artificial-beacon detection algorithm is combined with a Kalman filter and an outlier rejection method in order to increase the robustness and precision of the system. This robust approach makes it possible to deploy such a system on current AGVs. Real-world results in an industrial environment validate the proposed methodology.
    The work presented in this paper, part of the project "NORTE-07-0124-FEDER-000060", is financed by the North Portugal Regional Operational Programme (ON.2 – O Novo Norte), under the National Strategic Reference Framework (NSRF), through the European Regional Development Fund (ERDF), and by national funds through the Portuguese funding agency, Fundação para a Ciência e a Tecnologia (FCT).
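    The combination described above (a Kalman filter whose measurement update is guarded by an outlier rejection gate) can be sketched as follows. This is an illustrative scalar model, not the authors' implementation; the function name, the identity measurement model, and the gate threshold are assumptions.

```python
def kf_update_with_gate(x, P, z, R, gate=9.0):
    """One scalar Kalman measurement update with outlier rejection.

    x, P : prior state estimate and its variance
    z, R : measurement (e.g. a beacon-derived position) and its variance
    gate : threshold on the normalized innovation squared; 9.0 is roughly
           a 3-sigma gate for a 1-D measurement.
    Returns (x, P, accepted).
    """
    y = z - x            # innovation (measurement model H = 1)
    S = P + R            # innovation covariance
    if y * y / S > gate: # reflection from a spurious target: reject, keep prior
        return x, P, False
    K = P / S            # Kalman gain
    x = x + K * y
    P = (1.0 - K) * P
    return x, P, True
```

    Feeding beacon readings through such a gate lets grossly inconsistent detections (e.g. a reflective jacket mistaken for a beacon) be discarded before they corrupt the state estimate.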

    Robust mobile robot localization based on a security laser: An industry case study

    This paper aims to address a mobile robot localization system that avoids using a dedicated laser scanner, making it possible to reduce implementation costs and the robot's size. The system has enough precision and robustness to meet the requirements of industrial environments.
    Design/methodology/approach - Using an algorithm for artificial beacon detection combined with a Kalman filter and an outlier rejection method, it was possible to enhance the precision and robustness of the overall localization system.
    Findings - Usually, industrial automated guided vehicles feature two kinds of lasers: one for navigation, placed on top of the robot, and another for obstacle detection (security lasers). Recently, security lasers extended their output data with obstacle distance (contours) and reflectivity. These new features made it possible to develop a novel localization system based on a security laser.
    Research limitations/implications - Once the proposed methodology is completely validated, a scheme for global localization and failure detection should be addressed in future work.
    Practical implications - This paper presents a comparison between the proposed approach and a commercial localization system for industry. The proposed algorithms were tested in an industrial application under realistic working conditions.
    Social implications - The presented methodology reduces the effective cost of the mobile robot platform, as it discards the need for a dedicated laser for localization purposes.
    Originality/value - This paper presents a novel approach that benefits from the presence of a security laser on mobile robots (a mandatory sensor in industrial applications), using it simultaneously with other sensors not only to guarantee safety during operation but also to localize the robot in the environment. The paper is also valuable for its comparison with a commercialized system and for the tests conducted in real industrial environments, which show that the presented approach is suitable for these demanding conditions.
    The project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020" is financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).

    Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping

    Modern 3D laser range scanners have a high data rate, making online simultaneous localization and mapping (SLAM) computationally challenging. Recursive state estimation techniques are efficient but commit to a state estimate immediately after a new scan is made, which may lead to misalignments of measurements. We present a 3D SLAM approach that allows for refining alignments during online mapping. Our method is based on efficient local mapping and a hierarchical optimization back-end. Measurements of a 3D laser scanner are aggregated in local multiresolution maps by means of surfel-based registration. The local maps are used in a multi-level graph for allocentric mapping and localization. In order to incorporate corrections when refining the alignment, the individual 3D scans in the local map are modeled as a sub-graph, and graph optimization is performed to account for drift and misalignments in the local maps. Furthermore, in each sub-graph, a continuous-time representation of the sensor trajectory allows measurements between scan poses to be corrected. We evaluate our approach in multiple experiments by showing qualitative results, and we quantify the map quality with an entropy-based measure.
    Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201
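    The entropy-based map-quality idea mentioned above can be illustrated with a small sketch: fit a Gaussian to each local point cluster (a surfel-like primitive) and use its differential entropy, so that crisper, better-aligned surfaces score lower. This is a minimal stand-in under assumed function names, not the paper's actual measure.

```python
import numpy as np

def gaussian_entropy(points):
    """Differential entropy of a Gaussian fitted to a 3D point cluster.

    Thin, well-aligned clusters have near-singular covariance and thus
    low entropy; smeared (misaligned) clusters score higher.
    """
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T) + 1e-9 * np.eye(3)  # regularize degenerate clusters
    d = cov.shape[0]
    return 0.5 * np.log((2.0 * np.pi * np.e) ** d * np.linalg.det(cov))

def map_entropy(clusters):
    """Mean local entropy over all clusters: a scalar map-quality score."""
    return float(np.mean([gaussian_entropy(c) for c in clusters]))
```

    Comparing the score before and after a graph-optimization pass gives a simple proxy for whether the refinement actually sharpened the map.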

    Multisensor-based human detection and tracking for mobile service robots

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
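    The core of laser-based leg detection is segmenting a scan at range discontinuities and keeping segments whose width is plausible for a human leg. A minimal sketch of that idea follows; the function name, thresholds, and the chord-width test are illustrative assumptions, not the paper's classifier.

```python
import numpy as np

def detect_leg_candidates(ranges, angles, jump=0.1, min_w=0.05, max_w=0.25):
    """Split a 2D laser scan into segments at range discontinuities and
    keep segments whose chord width matches a typical human leg.

    Returns a list of (x, y) centroids of leg-sized segments.
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    # segment boundaries wherever consecutive ranges jump by more than `jump`
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    legs = []
    for seg in np.split(np.arange(len(ranges)), breaks):
        if len(seg) < 3:
            continue  # too few returns to be a reliable pattern
        w = np.hypot(xs[seg[-1]] - xs[seg[0]], ys[seg[-1]] - ys[seg[0]])
        if min_w <= w <= max_w:
            legs.append((float(xs[seg].mean()), float(ys[seg].mean())))
    return legs
```

    A real pattern classifier would also check segment curvature and pair nearby candidates into two-leg patterns before handing them to the tracker.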

    Evaluation of Using Semi-Autonomy Features in Mobile Robotic Telepresence Systems

    Mobile robotic telepresence systems used for social interaction scenarios require users to steer robots in a remote environment. As a consequence, a heavy workload can be placed on users who are unfamiliar with robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization, navigation to certain points in the home, and automatic docking of the robot to the charging station. In this paper we describe the implementation of these autonomous features along with a user evaluation study. The evaluation scenario focuses on novice users' first experience with the system; importantly, the scenario assumed that participants had as little prior information about the system as possible. Four different use cases were identified from the user behaviour analysis.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Plan Nacional de Investigación, project DPI2011-25483.

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection, and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners, and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions, and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degenerate to accuracy levels below their design specification. This observation necessitates methods for integrating multi-sensor data that account for sensor conflict, performance degradation, and potential failure during operation. This dissertation contributes to the data fusion literature a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that evolves a belief policy on the sensors within the multi-agent 3D mapping systems, allowing them to survive and counter failures in challenging operating conditions. In addition to eliminating failed or non-functional sensors and avoiding catastrophic fusion, the information-theoretic framework minimizes uncertainty during autonomous operation by adaptively deciding whether to fuse sensors or rely only on the believable ones.
    We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, along with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
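    The "believable sensors" idea above (discard conflicting sensors, then fuse the rest) can be sketched in a few lines for redundant scalar readings. This is a simplified stand-in using a robust median consensus and inverse-variance weighting, with assumed names and thresholds; it is not the dissertation's information-theoretic framework.

```python
import numpy as np

def fuse_believable(readings, variances, k=3.0):
    """Fuse redundant sensor readings of the same quantity.

    Sensors that conflict with the robust consensus (the median) by more
    than k standard deviations are marked unbelievable and excluded; the
    rest are combined by inverse-variance weighting.
    Returns (fused_value, belief_mask).
    """
    z = np.asarray(readings, dtype=float)
    var = np.asarray(variances, dtype=float)
    consensus = np.median(z)
    ok = np.abs(z - consensus) <= k * np.sqrt(var)  # belief policy
    w = ok / var                                    # zero weight if rejected
    return float(np.sum(w * z) / np.sum(w)), ok
```

    A faulty sensor reporting a wild value is thus dropped instead of dragging the fused estimate with it, which is the essence of avoiding catastrophic fusion.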

    Integration of a minimalistic set of sensors for mapping and localization of agricultural robots

    Robots have recently become ubiquitous in many aspects of daily life. For in-house applications there are vacuuming, mopping, and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks, such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots, and the essential capabilities required for autonomous operation are mapping, localization, and navigation. The goal of this thesis is to develop a new approach to solve the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale, feature-sparse areas, in comparison to urban scenarios, where good features such as pavements, buildings, road lanes, and traffic signs abound. In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts, but this limits the robot's activities to areas with accessible GNSS signals and fails in indoor environments. In such cases, exteroceptive sensors such as (RGB, depth, thermal) cameras, laser scanners, and Light Detection and Ranging (LiDAR), together with proprioceptive sensors such as an Inertial Measurement Unit (IMU) and wheel encoders, can be fused to better estimate the robot's state. Generic approaches that combine many different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability.
    For agricultural robots, it is equally important to be robust for long-term operation and cost-effective for mass production. We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR, and an IMU, for representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods for urban scenarios to cope with agricultural environments, where the presence of slopes, vegetation, and trees causes traditional approaches to fail. Our mapping method substantially reduces the memory footprint of map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) and depth sensor to extract and track only invariant features, which eliminates the need for remapping to deal with dynamic scenes. As a demonstration of the minimalistic requirements for autonomous agricultural robots, we show the ability to autonomously traverse between rows in a difficult, zigzag-like polytunnel environment using only a laser scanner. Furthermore, we present an autonomous navigation capability that uses only a camera, without explicitly performing mapping or localization. Finally, our mapping and localization methods are generic and platform-agnostic and can be applied to different types of agricultural robots. All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments, and all approaches have been published in, or submitted to, peer-reviewed conferences and journals.
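    The row-traversal capability described above (driving between crop rows with only a laser scanner) can be illustrated with a minimal proportional controller that centers the robot between the left and right rows. The function name, the mean-clearance error, and the gain are assumptions for illustration, not the thesis's controller.

```python
import numpy as np

def row_following_steer(ranges, angles, gain=1.0):
    """Proportional steering command to stay centered between two rows.

    Converts scan returns to lateral offsets and compares the mean
    clearance on the left half of the scan with the right half; assumes
    the scan sees rows on both sides.
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    ys = ranges * np.sin(angles)   # lateral offset of each return
    left = ys[ys > 0]
    right = -ys[ys < 0]
    error = np.mean(left) - np.mean(right)  # >0: more room on the left
    return gain * error                     # steer toward the wider side
```

    Even this crude rule shows why a single scanner suffices for in-row driving: the rows themselves act as continuous lateral references, so no map or global pose is needed.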