
    Use of Advanced Driver Assistance System Sensors for Human Detection and Work Machine Odometry

    This master's thesis covers two major topics: the use of advanced driver assistance system (ADAS) sensors for human detection, and the use of ADAS sensors for odometry estimation of a mobile work machine. A solid-state lidar and an automotive radar are used as the ADAS sensors. Real-time Simulink models are created for both sensors. Data is collected by connecting the sensors to an xPC Target via CAN communication and is then sent to the Robot Operating System (ROS) for visualization. The solid-state lidar and automotive radar were tested in a range of conditions and scenarios beyond human detection alone: detection of cars, machines, buildings, fences, and other objects was also evaluated. Testing covered two major cases, static and dynamic. In the static case, both sensors were mounted on a stationary rack and detected moving and stationary objects. In the dynamic case, both sensors were mounted on the GIM mobile machine, which was driven around so the sensors could detect objects in the environment. The results are promising, and it is concluded that the sensors can be used for human detection as well as other applications. Furthermore, this research presents an algorithm that estimates the complete odometry/ego-motion of the mobile work machine using an automotive radar sensor. Combining this sensor with a gyroscope, we seek the complete odometry of the GIM mobile machine: two components of linear velocity (forward and side slip) and one component of angular velocity. Kinematic equations are derived under the constraints of vehicle motion and stationary points in the environment.
    The radial velocity and azimuth angle of the detected objects, provided by the automotive radar sensor, are the major inputs to these kinematic equations. A stationary environment is a prerequisite for accurate radar odometry. Assuming the points detected by the radar are stationary, it is in principle possible to calculate all three unknown velocity components. However, they cannot all be calculated from a single radar sensor, because the resulting system of equations becomes singular. The literature suggests using multiple radar sensors; in this research, a vertical gyroscope is used instead to overcome the singularity. The GIM mobile machine, equipped with a single automotive radar sensor and a vertical gyroscope, is used for the experiments. The results are compared with the algorithm presented in [32] and with the wheel odometry of the GIM mobile machine, and are also evaluated against a complete navigation solution (including GNSS) as a reference path.
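
The kinematics described above reduce to a small least-squares problem: each stationary detection at azimuth a_i with measured radial velocity v_r,i constrains the two components of sensor velocity, while the gyroscope supplies the yaw rate that a single radar cannot observe. The following is a minimal sketch of that idea; the function name, sign convention, and lever-arm layout are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def ego_motion(azimuths, radial_vels, omega_gyro, lever_arm):
    """Estimate 2D vehicle velocity from one radar scan plus a gyro yaw rate.

    azimuths    : (N,) azimuth angles of stationary detections [rad]
    radial_vels : (N,) measured radial velocities [m/s]
    omega_gyro  : yaw rate from the vertical gyroscope [rad/s]
    lever_arm   : (lx, ly) sensor position in the vehicle frame [m]
    """
    # For a stationary target: v_r_i = -(cos a_i * v_sx + sin a_i * v_sy)
    A = -np.column_stack([np.cos(azimuths), np.sin(azimuths)])
    v_sensor, *_ = np.linalg.lstsq(A, radial_vels, rcond=None)
    # Transfer to the vehicle frame: v_sensor = v_vehicle + omega x r
    lx, ly = lever_arm
    v_vehicle = v_sensor - omega_gyro * np.array([-ly, lx])
    return v_vehicle, omega_gyro
```

With the yaw rate taken from the gyroscope, the remaining system has only two unknowns and stays well-conditioned as long as the detections span a range of azimuths.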

    Doppler-only Single-scan 3D Vehicle Odometry

    We present a novel 3D odometry method that recovers the full motion of a vehicle from a Doppler-capable range sensor alone. It leverages the radial velocities measured from the scene, estimating the sensor's velocity from a single scan. The vehicle's 3D motion, defined by its linear and angular velocities, is calculated using its kinematic model, which provides a constraint between the velocity measured at the sensor frame and at the vehicle frame. Experiments demonstrate the viability of our single-sensor method compared to mounting an additional IMU. Our method provides the translation of the sensor, which cannot be reliably determined from an IMU, as well as its rotation. Its short-term accuracy and fast operation (~5 ms) make it a good candidate to supply the initialization to more complex localization algorithms or mapping pipelines. Not only does it reduce the error of the mapper, but it does so at a level of accuracy comparable to an IMU, all without the need to mount and calibrate an extra sensor on the vehicle. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
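
The core of such a Doppler-only estimator is again a linear least-squares fit: for a stationary target in direction d_i (a unit vector), the measured radial velocity is the negative projection of the sensor velocity onto d_i. The sketch below shows only this single-scan velocity recovery step under an assumed sign convention; the paper's full pipeline (kinematic-model constraint, outlier handling) is not reproduced here.

```python
import numpy as np

def sensor_velocity_from_scan(directions, radial_vels):
    """Recover the 3D sensor velocity from one Doppler scan.

    directions  : (N, 3) unit vectors from sensor to stationary targets
    radial_vels : (N,) measured radial velocities [m/s]
    """
    # Each stationary target gives: v_r_i = -d_i . v_sensor
    v, *_ = np.linalg.lstsq(-directions, radial_vels, rcond=None)
    return v
```

Three non-coplanar directions already determine the velocity; in practice many detections are fused, and moving targets would have to be rejected (e.g. by a RANSAC loop) before the fit.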

    Real-Time Pose Graph SLAM based on Radar

    This work presents a real-time pose-graph-based Simultaneous Localization and Mapping (SLAM) system for automotive radar. The algorithm constructs a map from radar detections, using the Iterative Closest Point (ICP) method to match consecutive scans obtained from a single, front-facing radar sensor. Evaluated on a range of real-world datasets, the algorithm shows mean translational errors as low as 0.62 m and demonstrates robustness on long tracks. Using a single radar, our proposed system achieves state-of-the-art performance compared to other radar-based SLAM algorithms that use multiple, higher-resolution radars.
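
The scan-to-scan matching step named above can be illustrated with a minimal 2D point-to-point ICP: alternate between nearest-neighbour correspondence search and a closed-form (Kabsch/SVD) rigid alignment. This is a textbook sketch, not the paper's radar-specific implementation, which would additionally handle sparse, noisy detections.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form R, t minimizing sum |R @ src_i + t - dst_i|^2 (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Align point set src to dst; returns the accumulated rigid transform."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

The resulting relative transforms become edge constraints in the pose graph, which a back-end optimizer then keeps globally consistent.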

    Lane-Precise Localization with Production Vehicle Sensors and Application to Augmented Reality Navigation

    This work describes an approach to lane-precise localization on current digital maps. A particle filter fuses data from production vehicle sensors such as GPS, radar, and camera. Performance evaluations on more than 200 km of data show that the proposed algorithm can reliably determine the current lane. Furthermore, a possible architecture for an intuitive route-guidance system based on augmented reality is proposed, together with a lane-change recommendation for unclear situations.
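
A particle filter for lane determination can be sketched as follows: particles represent hypotheses of the vehicle's lateral position, a coarse GPS measurement and a precise camera offset-to-lane-centre measurement reweight them, and resampling concentrates the set on the most likely lane. All measurement models, noise levels, and the lane width below are illustrative assumptions, not the paper's actual sensor models.

```python
import numpy as np

rng = np.random.default_rng(0)
LANE_WIDTH = 3.5  # assumed lane width in metres

def gaussian(x, std):
    return np.exp(-0.5 * (x / std) ** 2)

def pf_update(particles, weights, gps_lat, cam_offset,
              gps_std=2.0, cam_std=0.3):
    """One predict / update / resample cycle over lateral-position particles."""
    # predict: small random lateral drift between updates
    particles = particles + rng.normal(0.0, 0.1, particles.shape)
    # camera measures the offset from the nearest lane centre;
    # GPS gives a coarse absolute lateral position
    offset = particles - LANE_WIDTH * np.round(particles / LANE_WIDTH)
    weights = weights * gaussian(particles - gps_lat, gps_std) \
                      * gaussian(offset - cam_offset, cam_std)
    weights = weights / weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

def most_likely_lane(particles, weights):
    lanes = np.round(particles / LANE_WIDTH).astype(int)
    shifted = lanes - lanes.min()
    return int(np.bincount(shifted, weights=weights).argmax() + lanes.min())
```

The camera measurement alone is ambiguous across lanes (every lane centre yields the same offset); it is the fusion with the coarse absolute measurement that lets the filter resolve the correct lane.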

    4DEgo: ego-velocity estimation from high-resolution radar data

    Automotive radars allow perception of the environment in adverse visibility and weather conditions. New high-resolution sensors have demonstrated potential for tasks beyond obstacle detection and velocity adjustment, such as mapping or target tracking. This paper proposes an end-to-end method for ego-velocity estimation based on radar scan registration. Our architecture includes a 3D convolution over all three channels of the heatmap, capturing features associated with motion, and an attention mechanism for selecting significant features for regression. To the best of our knowledge, this is the first work utilizing the full 3D radar heatmap for ego-velocity estimation. We verify the efficacy of our approach on the publicly available ColoRadar dataset and study the effect of architectural choices and distributional shifts on performance.

    Personal Navigation Based on Wireless Networks and Inertial Sensors

    This thesis deals with a navigation system based on wireless networks and inertial sensors. The work aims at the development of a positioning algorithm suitable for low-cost indoor or urban pedestrian navigation. Sensor fusion is applied to increase localization accuracy. Due to the required low application cost, only low-grade inertial sensors and wireless-network-based ranging were taken into account. The wireless network was assumed to be preinstalled for other required functionality (for example, building control); therefore, only the received signal strength (RSS) range-measurement technique was considered. A wireless-channel loss-mapping method is proposed to overcome the natural uncertainties and restrictions of RSS range measurements. The available sensor and environment models are first summarized, and the most appropriate ones are then selected. Their effective and novel application to the navigation task, and a favorable fusion (particle filtering) of all available information, are the main objectives of this thesis.
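
RSS-based ranging typically rests on the log-distance path-loss model, which the thesis's loss-mapping method refines locally. A minimal sketch of the inversion from received power to distance follows; the reference power and path-loss exponent values are illustrative assumptions, and in practice they vary strongly with the environment, which is exactly the uncertainty the proposed channel-loss mapping addresses.

```python
def rss_to_distance(rss_dbm, p0_dbm=-40.0, path_loss_exp=2.5, d0=1.0):
    """Invert the log-distance path-loss model:
        RSS(d) = P0 - 10 * n * log10(d / d0)
    where P0 is the received power [dBm] at reference distance d0 [m]
    and n is the path-loss exponent (environment dependent)."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10 * path_loss_exp))
```

Because distance enters the model logarithmically, a few dB of fading noise translates into large ranging errors at long range, motivating the fusion with inertial sensors.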

    Autonomisten metsäkoneiden koneaistijärjestelmät (Perception Systems for Autonomous Forest Machines)

    A prerequisite for increasing the autonomy of forest machinery is to provide robots with digital situational awareness, including a representation of the surrounding environment and the robot's own state within it. This article-based dissertation therefore proposes perception systems for autonomous or semi-autonomous forest machinery as a summary of seven publications. The work consists of several perception methods using machine vision, lidar, inertial sensors, and positioning sensors, combined by means of probabilistic sensor fusion. Semi-autonomy is interpreted as a useful intermediate step between current mechanized solutions and full autonomy, intended to assist the operator. In this work, perception of the robot's own state is achieved through estimation of its orientation and position in the world, the posture of its crane, and the pose of the attached tool. The view around the forest machine is produced with a rotating lidar, which provides approximately equal-density 3D measurements in all directions. Furthermore, a machine vision camera is used for detecting young trees among other vegetation, and sensor fusion of an actuated lidar and a machine vision camera is utilized for detection and classification of tree species. In addition, in an operator-controlled semi-autonomous system, the operator requires a functional view of the data around the robot. To achieve this, the thesis proposes the use of an augmented reality interface, which requires measuring the pose of the operator's head-mounted display in the forest machine cabin; here, this work adopts a sensor fusion solution for a head-mounted camera and inertial sensors. In order to increase the level of automation and productivity of forest machines, the work focuses on scientifically novel solutions that are also adaptable for industrial use in forest machinery; therefore, all the proposed perception methods address real existing problems in current forest machinery.
    All the proposed solutions are implemented in a prototype forest machine and field-tested in a forest. The proposed methods include posture measurement of a forestry crane, positioning of a freely hanging forestry crane attachment, attitude estimation of an all-terrain vehicle, positioning of a head-mounted camera in a forest machine cabin, detection of young trees for point cleaning, classification of tree species, and measurement of the surrounding tree stems and the ground surface underneath.
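
As one concrete flavour of the gyro/accelerometer fusion used for attitude estimation, the sketch below shows a single-axis complementary filter: the integrated gyro rate is smooth but drifts, while the accelerometer-derived pitch is noisy but drift-free, and a fixed blend combines them. This is a deliberately simplified stand-in; the dissertation itself uses probabilistic sensor fusion, and the gain value here is an illustrative assumption.

```python
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One update of a single-axis complementary filter.

    pitch       : previous pitch estimate [rad]
    gyro_rate   : angular rate from the gyro [rad/s] (may be biased)
    accel_pitch : pitch inferred from the accelerometer [rad] (noisy)
    dt          : time step [s]
    alpha       : blend factor; higher trusts the gyro more
    """
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch
```

A small residual gyro bias shows up as a steady-state offset of roughly alpha * bias * dt / (1 - alpha), which is one reason the thesis's probabilistic formulations, estimating the bias explicitly, are preferable in practice.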