2,398 research outputs found

    Ground Profile Recovery from Aerial 3D LiDAR-based Maps

    Get PDF
    The paper presents the study and implementation of a ground detection methodology that filters and removes forest points from a LiDAR-based 3D point cloud using the Cloth Simulation Filtering (CSF) algorithm. The methodology makes it possible to recover the terrestrial relief and create a landscape map of a forested region. As a proof of concept, we carried out an outdoor flight experiment, launching a hexacopter over a mixed forest area with sharp ground changes near Innopolis city (Russia), which demonstrated encouraging results for both ground detection and methodology robustness. Comment: 8 pages, FRUCT-2019 conference
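    A minimal sketch of the CSF-based separation of ground and forest points described above, assuming the open-source cloth-simulation-filter Python bindings (imported as CSF) and laspy for point-cloud I/O; the parameter values and the input file name forest_scan.las are illustrative assumptions, not taken from the paper.

        import numpy as np
        import laspy
        import CSF  # pip install cloth-simulation-filter (assumed bindings)

        # Load the aerial LiDAR scan (hypothetical file name).
        las = laspy.read("forest_scan.las")
        xyz = np.vstack((las.x, las.y, las.z)).T

        # Configure the cloth simulation: resolution and rigidness control how
        # tightly the inverted "cloth" drapes over the terrain.
        csf = CSF.CSF()
        csf.params.cloth_resolution = 0.5   # metres between cloth grid nodes
        csf.params.rigidness = 2            # lower values follow sharp relief better
        csf.params.class_threshold = 0.5    # max point-to-cloth distance for "ground"
        csf.setPointCloud(xyz)

        # Run the filter: indices of ground vs. off-ground (vegetation) points.
        ground_idx, offground_idx = CSF.VecInt(), CSF.VecInt()
        csf.do_filtering(ground_idx, offground_idx)

        ground_points = xyz[np.asarray(ground_idx)]      # terrain relief
        forest_points = xyz[np.asarray(offground_idx)]   # removed canopy/tree points

    The recovered ground points can then be gridded into a digital terrain model to obtain the landscape map of the forested region.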

    NASA Automated Rendezvous and Capture Review. Executive summary

    Get PDF
    In support of the Cargo Transfer Vehicle (CTV) Definition Studies in FY-92, the Advanced Program Development division of the Office of Space Flight at NASA Headquarters conducted an evaluation and review of United States capabilities and the state of the art in Automated Rendezvous and Capture (AR&C). The review was held in Williamsburg, Virginia on 19-21 Nov. 1991 and included over 120 attendees from U.S. government organizations, industry, and universities. One hundred abstracts were submitted to the organizing committee for consideration; forty-two were selected for presentation. The review was structured into five technical sessions, and the forty-two papers addressed topics in the five categories below: (1) hardware systems and components; (2) software systems; (3) integrated systems; (4) operations; and (5) supporting infrastructure.

    Pre-Deployment Testing of Low Speed, Urban Road Autonomous Driving in a Simulated Environment

    Full text link
    Low-speed autonomous shuttles emulating SAE Level 4 automated driving with human-driver-assisted autonomy have been operating in geo-fenced areas in several cities in the US and the rest of the world. These autonomous vehicles (AV) are operated by small to mid-sized technology companies that do not have the resources of automotive OEMs to carry out exhaustive, comprehensive testing of their AV technology solutions before public road deployment. Because they operate at low speed and therefore do not drive on highways, the base vehicles of these AV shuttles are not required to go through rigorous certification tests. The way this driver-assisted AV technology is tested and approved for public road deployment is continuously evolving, is not standardized, and differs between the states where these vehicles operate. Currently, AVs and AV shuttles deployed on public roads are using these deployments for testing and improving their technology. However, this is not the right approach. Safe and extensive testing in a lab and a controlled test environment, including Model-in-the-Loop (MiL), Hardware-in-the-Loop (HiL), and Autonomous-Vehicle-in-the-Loop (AViL) testing, should be the prerequisite to such public road deployments. This paper presents three-dimensional virtual modeling of an AV shuttle deployment site and simulation testing in this virtual environment. These AV shuttles have two deployment sites in Columbus through the Department of Transportation funded Smart City Challenge project named Smart Columbus. The Linden residential area AV shuttle deployment site of Smart Columbus is used as the specific example to illustrate the AV testing method proposed in this paper.

    Customized Co-Simulation Environment for Autonomous Driving Algorithm Development and Evaluation

    Full text link
    Increasing the implemented SAE level of autonomy in road vehicles requires extensive simulation and verification in a realistic simulation environment before proving ground and public road testing. The level of detail in the simulation environment helps ensure the safety of a real-world implementation and reduces algorithm development cost by allowing developers to complete most of the validation in simulation. Considering the sensors used in autonomous vehicles, such as camera, LIDAR, radar, and V2X, it is essential to create a simulation environment that can reproduce these sensors as realistically as possible. While sensor simulation is of crucial importance for perception algorithm development, the simulation environment remains incomplete for holistic AV operation unless it is complemented by a realistic vehicle dynamics model and traffic co-simulation. Therefore, this paper investigates existing simulation environments, identifies use case scenarios, and creates a co-simulation environment that satisfies the simulation requirements for autonomous driving function development, using the Carla simulator (based on the Unreal game engine) for the environment, Sumo or Vissim for traffic co-simulation, Carsim or Matlab/Simulink for vehicle dynamics co-simulation, and Autoware or the authors' or users' own routines for autonomous driving algorithm co-simulation. As a result of this work, a model-based vehicle dynamics simulation with realistic sensor simulation and traffic simulation is presented. A sensor fusion methodology is implemented in the created simulation environment as a use case scenario. The results of this work will be a valuable resource for researchers who need a comprehensive co-simulation environment to develop connected and autonomous driving algorithms.
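    The abstract above describes lock-step co-simulation between the Carla environment, a traffic simulator, a vehicle dynamics model, and the driving stack. A minimal sketch of such a synchronous loop using the Carla Python API is shown below; the hooks step_traffic_sim and step_vehicle_dynamics are hypothetical placeholders for the external co-simulators (e.g. Sumo via TraCI, or a Carsim/Simulink model), not part of any of the named tools.

        import carla  # Carla Python API client

        # Connect to a running Carla server (default host and port).
        client = carla.Client("localhost", 2000)
        client.set_timeout(10.0)
        world = client.get_world()

        # Synchronous mode: every co-simulator advances by the same fixed step.
        settings = world.get_settings()
        settings.synchronous_mode = True
        settings.fixed_delta_seconds = 0.05  # 20 Hz co-simulation step
        world.apply_settings(settings)

        def step_traffic_sim(dt):
            # Hypothetical hook: advance the external traffic simulator
            # (e.g. Sumo via TraCI) by dt seconds.
            pass

        def step_vehicle_dynamics(control, dt):
            # Hypothetical hook: advance the external vehicle dynamics model
            # and return the updated ego pose to be applied in Carla.
            return None

        try:
            for _ in range(1000):
                step_traffic_sim(settings.fixed_delta_seconds)
                # ... the autonomous driving stack (e.g. Autoware or user
                # routines) would compute an ego control command here ...
                step_vehicle_dynamics(control=None, dt=settings.fixed_delta_seconds)
                world.tick()  # advance the Carla world by one fixed step
        finally:
            # Restore asynchronous mode before exiting.
            settings.synchronous_mode = False
            world.apply_settings(settings)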

    Automatic Internal Stray Light Calibration of AMCW Coaxial Scanning LiDAR Using GMM and PSO

    Full text link
    In this paper, an automatic calibration algorithm is proposed to reduce the depth error caused by internal stray light in amplitude-modulated continuous wave (AMCW) coaxial scanning light detection and ranging (LiDAR). Assuming that the internal stray light generated in the process of emitting the laser is static, the amplitude and phase delay of the internal stray light are estimated using a Gaussian mixture model (GMM) and particle swarm optimization (PSO). Specifically, the pixel positions in a raw signal amplitude map of a calibration checkerboard are segmented by a two-cluster GMM that separates the dark and bright image patterns. The loss function is then defined as the L1-norm of the difference between the mean depths of the two amplitude-segmented clusters. To avoid overfitting at a specific distance in the PSO process, the calibration checkerboard is measured at multiple distances and the average of the corresponding L1 loss functions is chosen as the actual loss. This loss is minimized by PSO to find the two optimal target parameters: the amplitude and phase delay of the internal stray light. In validation of the proposed algorithm, the original loss is reduced from tens of centimeters to 3.2 mm when the measured distances of the calibration checkerboard are between 1 m and 4 m. This accurate calibration performance is also maintained in geometrically complex measured scenes. The proposed internal stray light calibration algorithm can be used for any type of AMCW coaxial scanning LiDAR regardless of its optical characteristics.
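    A minimal sketch of the GMM segmentation and PSO loss minimization described above, in Python with scikit-learn and NumPy. The stray-light correction model (subtracting a static complex phasor from the measured signal before phase-to-depth conversion), the modulation frequency, the parameter bounds, and the hand-rolled PSO are illustrative assumptions rather than the paper's implementation.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        C = 3.0e8        # speed of light [m/s]
        F_MOD = 10.0e6   # assumed AMCW modulation frequency [Hz]

        def corrected_depth(amplitude, phase, stray_amp, stray_phase):
            # Subtract the assumed static stray-light phasor from the measured
            # phasor, then convert the corrected phase back to depth.
            measured = amplitude * np.exp(1j * phase)
            corrected = measured - stray_amp * np.exp(1j * stray_phase)
            corr_phase = np.mod(np.angle(corrected), 2.0 * np.pi)
            return C * corr_phase / (4.0 * np.pi * F_MOD)

        def l1_loss(params, frames):
            # Average, over all checkerboard distances, of |mean depth of bright
            # cluster - mean depth of dark cluster| after stray-light correction.
            stray_amp, stray_phase = params
            losses = []
            for amplitude, phase in frames:
                labels = GaussianMixture(n_components=2, random_state=0).fit_predict(
                    amplitude.reshape(-1, 1))      # dark vs. bright squares
                depth = corrected_depth(amplitude, phase, stray_amp, stray_phase).ravel()
                losses.append(abs(depth[labels == 0].mean() - depth[labels == 1].mean()))
            return float(np.mean(losses))

        def pso_minimize(loss, frames, bounds, n_particles=30, iters=100, seed=0):
            # Tiny global-best PSO over the two target parameters
            # (stray-light amplitude and phase delay).
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, 2))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.array([loss(p, frames) for p in x])
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, 2))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([loss(p, frames) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, float(pbest_val.min())

        # frames: list of (amplitude_map, phase_map) pairs measured at several
        # checkerboard distances, e.g. between 1 m and 4 m.
        # best, residual = pso_minimize(l1_loss, frames, bounds=[(0.0, 1.0), (0.0, 2 * np.pi)])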

    Integration of a Minimal Set of Sensors for Mapping and Localization of Agricultural Robots

    Get PDF
    Robots have recently become ubiquitous in many aspects of daily life. For in-house applications there are vacuuming, mopping, and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous driving cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots. The essential capabilities required for autonomous operation are mapping, localization, and navigation. The goal of this thesis is to develop a new approach to solve the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale environments with sparse features, compared to urban scenarios where good features such as pavements, buildings, road lanes, and traffic signs are abundant. In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts, but this limits the robot's activities to areas with accessible GNSS signals and fails indoors. In such cases, exteroceptive sensors such as (RGB, depth, thermal) cameras, laser scanners, and Light Detection and Ranging (LiDAR), together with proprioceptive sensors such as Inertial Measurement Units (IMU) and wheel encoders, can be fused to better estimate the robot's states. Generic approaches that combine several different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability. For agricultural robots, it is equally important to be robust for long-term operation and cost-effective for mass production. We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR, and IMU, for representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization, while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods for urban scenarios to cope with agricultural environments, where slopes, vegetation, and trees cause traditional approaches to fail. Our mapping method substantially reduces the memory footprint for map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) sensor and a depth sensor to extract and track only invariant features, which eliminates the need for remapping to deal with dynamic scenes. As a demonstration of the minimalistic requirements for autonomous agricultural robots, we also show the ability to autonomously traverse between rows in a difficult, zigzag-like polytunnel environment using only a laser scanner. Furthermore, we present an autonomous navigation capability using only a camera, without explicitly performing mapping or localization.
Finally, our mapping and localization methods are generic and platform-agnostic and can be applied to different types of agricultural robots. All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments. All approaches have been published in, or submitted to, peer-reviewed conference papers and journal articles.
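    The thesis abstract above mentions autonomous traversal between crop rows using only a laser scanner. As an illustrative sketch (not the thesis' own method), the snippet below shows a common minimal in-row controller: compare the mean lateral clearance measured by a 2D scan on the left and right sides and steer toward the centre line; the scan conventions, gain, and thresholds are assumptions.

        import numpy as np

        def row_following_cmd(ranges, angles, forward_speed=0.3, k_lateral=1.0,
                              max_range=3.0):
            # Keep the robot centred between the left and right crop rows.
            # ranges/angles: 2D laser scan in the robot frame (angle 0 = straight ahead).
            valid = np.isfinite(ranges) & (ranges < max_range)
            y = ranges[valid] * np.sin(angles[valid])   # lateral offsets of scan hits
            left = y[y > 0]
            right = -y[y < 0]
            if len(left) == 0 or len(right) == 0:
                return forward_speed, 0.0               # no row detected on one side
            error = left.mean() - right.mean()          # > 0: robot is closer to the right row
            angular = k_lateral * error                 # steer left when error > 0
            return forward_speed, angular

        # Example with a synthetic symmetric scan (rows at +/- 1 m): a centred
        # robot yields a zero turn rate.
        angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
        ranges = np.full_like(angles, 1.0) / np.maximum(np.abs(np.sin(angles)), 1e-3)
        v, w = row_following_cmd(np.minimum(ranges, 5.0), angles)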