12 research outputs found

    Fusing sonars and LRF data to perform SLAM in reduced visibility scenarios

    Simultaneous Localization and Mapping (SLAM) approaches have evolved considerably in recent years. However, many situations remain difficult to handle, such as smoky, dusty, or foggy environments, where the range sensors commonly used for SLAM are strongly disturbed by noise that particles of smoke, dust, or steam induce in the measurement process. This work presents a sensor fusion method for range sensing in SLAM under reduced visibility conditions. The proposed method exploits the complementary characteristics of a Laser Range Finder (LRF) and an array of sonars in order to map smoky environments. The method was validated through experiments in a smoky indoor scenario, and the results showed that it adequately copes with the induced disturbances, decreasing the impact of smoke particles on the mapping task.
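    The abstract does not spell out the fusion rule itself. Assuming LRF and sonar readings resampled to a common set of bearings, a minimal per-beam heuristic in the spirit of the paper (all names and thresholds here are hypothetical, not the authors' method) might look like:

```python
import numpy as np

def fuse_ranges(lrf_ranges, sonar_ranges, max_discrepancy=0.5):
    """Per-direction fusion of LRF and sonar ranges (illustrative sketch).

    Smoke tends to produce spuriously short LRF returns, while sonar is
    largely unaffected by airborne particles. When the LRF reads much
    shorter than the co-located sonar, the beam is assumed to have hit
    smoke and the sonar value is used instead.
    """
    lrf = np.asarray(lrf_ranges, dtype=float)
    sonar = np.asarray(sonar_ranges, dtype=float)
    smoke_hit = (sonar - lrf) > max_discrepancy  # LRF much shorter: suspect smoke
    return np.where(smoke_hit, sonar, lrf)
```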

    Confined Spaces Industrial Inspection with Micro Aerial Vehicles and Laser Range Finder Localization

    This work addresses the problem of semi-automatic inspection and navigation in confined environments. A system that overcomes many challenges in the state of the art is presented. It comprises a mu..

    VIEW-FINDER: Final Activity Report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.

    Mobile Robot Localization Based on Kalman Filter

    Robot localization is one of the most important subjects in robotics. It is an interesting and complicated topic, and many algorithms exist to solve the localization problem. Each localization system has its own set of features, and based on them a solution is chosen. In this thesis, I present a solution for finding the best estimate of a robot's position in a space for which a map is available. The thesis starts with an elementary introduction to probability and Gaussian theory. Simple and advanced practical examples are presented to illustrate each concept related to localization. The Extended Kalman Filter is chosen as the main algorithm for finding the best estimate of the robot's position. It is presented over two chapters with many examples, all of which were simulated in Matlab in order to give readers and future students a clear and complete introduction to the Kalman Filter. I applied this algorithm to a robot whose base I built from scratch. MCECS-Bot is a project started in Winter 2012 and assigned to me by my adviser, Dr. Marek Perkowski. The robot consists of a base with four Mecanum wheels, a waist based on four linear actuators, an arm, a neck, and a head. The base is equipped with many sensors: bumper switches, encoders, sonars, an LRF, and a Kinect. Additional devices, a tablet and a camera, can provide extra information as backup sensors. The ultimate goal of this thesis is to make MCECS-Bot an open-source system accessible to future classes, capstone projects, and graduate thesis students for educational purposes. The well-known MRPT software system was used to present the results of the Extended Kalman Filter (EKF), namely the robot positions estimated by the EKF, demonstrated on the base floor of the FAB building at PSU. In parallel, simulated results for all the solutions derived in this thesis are presented using Matlab. Future students will have a ready platform and a good starting point for continuing to develop this system.
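    The abstract omits the filter equations. A compact sketch of one EKF predict/update cycle for a planar robot with a known landmark map, assuming a unicycle motion model and range-bearing measurements (not necessarily the exact models used on MCECS-Bot), could look like:

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, v, w, dt, Q):
    """Unicycle motion model; state x = [x, y, theta], controls (v, w)."""
    th = x[2]
    x = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    return x, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Range-bearing measurement z = [r, phi] of a known landmark (lx, ly)."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r = np.hypot(dx, dy)
    zhat = np.array([r, wrap(np.arctan2(dy, dx) - x[2])])
    H = np.array([[-dx / r,    -dy / r,     0],
                  [ dy / r**2, -dx / r**2, -1]])
    nu = z - zhat
    nu[1] = wrap(nu[1])                     # normalize bearing innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ nu, (np.eye(3) - K @ H) @ P
```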

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot in selecting the next direction to go; (iii) mapping, involving the construction of a spatial representation from the sensory information perceived; (iv) localization, as the strategy for estimating the robot's position within the spatial map; (v) path planning, as the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized within 7 categories, described next.
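    The blurb stays at the level of concepts. As one self-contained illustration of activity (v), path planning, a grid-based A* search (illustrative only, not drawn from any particular chapter) is sketched below:

```python
import heapq, itertools

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()  # tie-breaker so heap entries never compare nodes
    frontier = [(h(start), next(tie), start, None)]
    came_from, g_best = {}, {start: 0}
    while frontier:
        _, _, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                      # already expanded with a better cost
        came_from[cur] = parent
        if cur == goal:                   # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        g = g_best[cur]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < g_best.get(nxt, float("inf")):
                    g_best[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), nxt, cur))
    return None  # no path exists
```

    For example, astar([[0, 0], [1, 0]], (0, 0), (1, 1)) returns [(0, 0), (0, 1), (1, 1)].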

    Perception Systems for Autonomous Forest Machines

    A prerequisite for increasing the autonomy of forest machinery is to provide robots with digital situational awareness, including a representation of the surrounding environment and of the robot's own state within it. This article-based dissertation therefore proposes perception systems for autonomous or semi-autonomous forest machinery, as a summary of seven publications. The work consists of several perception methods using machine vision, lidar, inertial sensors, and positioning sensors, which are used together by means of probabilistic sensor fusion. Semi-autonomy is interpreted as a useful intermediate step between current mechanized solutions and full autonomy, intended to assist the operator. In this work, perception of the robot's self is achieved through estimation of its orientation and position in the world, the posture of its crane, and the pose of the attached tool. The view around the forest machine is produced with a rotating lidar, which provides approximately equal-density 3D measurements in all directions. Furthermore, a machine vision camera is used for detecting young trees among other vegetation, and sensor fusion of an actuated lidar and a machine vision camera is utilized for the detection and classification of tree species. In addition, in an operator-controlled semi-autonomous system, the operator requires a functional view of the data around the robot. To achieve this, the thesis proposes an augmented reality interface, which requires measuring the pose of the operator's head-mounted display in the forest machine cabin; here, the work adopts a sensor fusion solution for a head-mounted camera and inertial sensors. In order to increase the level of automation and the productivity of forest machines, the work focuses on scientifically novel solutions that are also adaptable for industrial use in forest machinery; all the proposed perception methods seek to address a real, existing problem within current forest machinery. All the proposed solutions are implemented on a prototype forest machine and field-tested in a forest. The proposed methods include posture measurement of a forestry crane, positioning of a freely hanging forestry crane attachment, attitude estimation of an all-terrain vehicle, positioning of a head-mounted camera in a forest machine cabin, detection of young trees for point cleaning, classification of tree species, and measurement of the surrounding tree stems and the ground surface underneath.
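    The dissertation relies on probabilistic sensor fusion (e.g., Kalman-type filters). As a simpler stand-in that conveys the same gyro-plus-gravity idea behind attitude estimation, a one-axis complementary filter, under assumed axis conventions, might look like:

```python
import numpy as np

def complementary_filter(pitch, gyro_rate, accel, dt, alpha=0.98):
    """One step of pitch estimation from an IMU (illustrative stand-in,
    not the thesis's actual probabilistic fusion).

    pitch     -- previous pitch estimate [rad]
    gyro_rate -- pitch rate from the gyroscope [rad/s]
    accel     -- (ax, az) body-frame accelerometer reading [m/s^2];
                 assumed mounting: static reading (g*sin(pitch), g*cos(pitch))
    """
    gyro_pitch = pitch + gyro_rate * dt             # fast but drifting integration
    accel_pitch = np.arctan2(accel[0], accel[1])    # slow, noisy gravity reference
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

    The design choice is the usual one: the gyroscope dominates at high frequency, while the accelerometer's gravity reference corrects the low-frequency drift.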

    Environment Modeling with Radar Sensors under Impaired Visibility Conditions

    A mobile robot requires a representation of its surroundings in order to navigate autonomously. This representation, the so-called environment model, can contain objects and landmarks or abstract information such as topological relations. To this end, the robot must capture its surroundings with its sensors and process the sensor data. Solid and liquid suspended particles (aerosols) pose a problem for mobile robotics: stirred-up dust obscures the view of RGB cameras in agricultural and mining robotics, smoke and fire degrade the measurements of LiDAR scanners in search-and-rescue robotics, and adverse weather conditions (rain, snow, fog) are typical problems for autonomous road vehicles. Under these conditions, popular sensors such as LiDAR scanners do not deliver enough usable measurements to carry out the core competencies of an autonomously driving system (mapping, localization, and navigation). The integration of sensor types that are not affected by aerosols is therefore required in order to build environment models in such settings. In this context, this work addresses the use of radar for mapping and localization. On the one hand, new radar measurement principles for environment modeling in mobile robotics are investigated; on the other, LiDAR-radar fusion methods are presented. Fusing radar and LiDAR measurements combines the advantages of both sensors, particularly in environments with changing visibility conditions. To this end, three fusion methods and one SLAM method are described and evaluated in detail. The presented fusion methods make it possible to map environments in which neither LiDAR nor radar scanners alone would succeed. A concentration quantity derived from the fused data describes the distribution of aerosols and is entered into the environment model with a finite-difference model, in parallel with the presented SLAM method.
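    The summary names three fusion methods without detail. One generic way to combine the two modalities, cell-wise fusion of occupancy grids in log-odds form weighted by an aerosol-concentration estimate, is sketched below (an illustration, not the dissertation's actual formulation):

```python
import numpy as np

def fuse_grids(logodds_lidar, logodds_radar, lidar_trust):
    """Cell-wise fusion of LiDAR and radar occupancy grids in log-odds form.

    lidar_trust is a per-cell weight in [0, 1], e.g. derived from an
    aerosol-concentration estimate: where smoke or dust is dense the
    LiDAR evidence is down-weighted and the radar evidence dominates.
    """
    w = np.clip(np.asarray(lidar_trust, dtype=float), 0.0, 1.0)
    return w * np.asarray(logodds_lidar) + (1.0 - w) * np.asarray(logodds_radar)
```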

    Autonomous Navigation of Distributed Spacecraft using Graph-based SLAM for Proximity Operations in Small Celestial Bodies

    Establishing a sustainable human presence beyond cislunar space is a major milestone for mankind. Small celestial bodies (SCBs) such as asteroids are known to contain valuable natural resources necessary for developing the space assets essential to accomplishing this goal. Consequently, future robotic spacecraft missions to SCBs are envisioned with the objective of commercial in-situ resource utilization (ISRU). In mission design, there is also increasing interest in the use of distributed spacecraft, to benefit from specialization and redundancy. The ability of distributed spacecraft to navigate autonomously in the proximity of an SCB is indispensable for the successful realization of ISRU mission objectives. The quasi-autonomous methods currently used for proximity navigation require extensive ground support for mapping and model development, which can be an impediment to large-scale multi-spacecraft ISRU missions in the future. It is prudent to leverage advances in terrestrial robotic navigation to investigate the development of novel methods for autonomous spacecraft navigation. The primary objective of the work presented in this thesis is to evaluate the feasibility and investigate the development of methods based on graph-based simultaneous localization and mapping (SLAM), a popular algorithm in terrestrial autonomous navigation, for the autonomous navigation of distributed spacecraft in the proximity of SCBs. To this end, recent research in graph-based SLAM is studied extensively to identify strategies used to enable multi-agent navigation. The spacecraft navigation requirement is formulated as a graph-based SLAM problem using metric GraphSLAM or topometric graph-based SLAM. Techniques developed from the identified strategies, namely map merging, inter-spacecraft measurements, and relative localization, are then applied to this formulation to enable distributed spacecraft navigation. In each case, navigation is formulated in terms of its application to a proximity-operation scenario that best suits the multi-agent navigation technique. Several challenges in applying graph-based SLAM to spacecraft navigation, such as computational cost and illumination variation, are also identified and addressed in the development of these methods. Experiments are performed using simulated models of asteroids and spacecraft dynamics, comparing the estimated states of the spacecraft and landmarks to the assumed true states. The results indicate a consistent and robust state-determination process, suggesting that multi-agent navigation techniques applied to graph-based SLAM are suitable for enabling the autonomous navigation of distributed spacecraft near SCBs.
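    The abstract names graph-based SLAM without reproducing the formulation. A minimal 2D pose-graph with odometry factors and one loop closure, written against the GTSAM Python bindings (an assumed dependency with toy values, not the thesis's actual implementation), could look like:

```python
import numpy as np
import gtsam

# Noise models: standard deviations for [x, y, theta]
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph = gtsam.NonlinearFactorGraph()
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))

# Odometry chain: four unit moves, each turning left, tracing a square
for i in range(4):
    graph.add(gtsam.BetweenFactorPose2(i, i + 1,
                                       gtsam.Pose2(1.0, 0.0, np.pi / 2), odom_noise))
# Loop closure: pose 4 should coincide with pose 0
graph.add(gtsam.BetweenFactorPose2(4, 0, gtsam.Pose2(0, 0, 0), odom_noise))

# Deliberately perturbed initial guesses, as drifting odometry would produce
initial = gtsam.Values()
for i, (x, y, th) in enumerate([(0, 0, 0), (1.1, 0.1, 1.6), (0.9, 1.1, 3.1),
                                (-0.1, 1.0, -1.5), (0.1, -0.1, 0.1)]):
    initial.insert(i, gtsam.Pose2(x, y, th))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for i in range(5):
    print(result.atPose2(i))  # optimized poses snap back onto the square
```

    In a distributed setting, the strategies the thesis names (map merging, inter-spacecraft measurements, relative localization) would amount to adding factors that connect the pose graphs of different spacecraft.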