    Tackling different aspects of drone services utilizing technologies from cross-sectional industries

    Enabling autonomous and Beyond Visual Line of Sight (BVLOS) operation of Unmanned Aerial Vehicles (UAVs) in the Very Low Level (VLL) airspace requires further advancement of technologies such as environmental sensing and secure, reliable communication. This paper addresses these challenges by presenting solutions developed within the project Airborne Data Collection on Resilient System Architectures (ADACORSA). Here, findings from cross-sectional areas such as the automotive industry are further enhanced to fulfill the demands of aviation, in particular for use in the UAV domain. The developed technologies include an advanced Ethernet-based deterministic network for reliable onboard communication, a multi-sensor architecture for sensing the spatial environment, as well as a multi-link communication gateway that provides reliable communication to the ground and a secure handover architecture. ADACORSA has received funding from the ECSEL Joint Undertaking (JU) and National Authorities under grant agreement No 876019. Follow www.adacorsa.eu for more information.

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human-level performance. Furthermore, detections must run in real-time to allow vehicles to actuate and avoid collisions. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve detection of obstacles and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera, and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera, and one for the multi-beam lidar) and fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform is able to run the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded, and unknown obstacles in agriculture. Compared to a state-of-the-art object detector, Faster R-CNN, DeepAnomaly is able, for an agricultural use case, to detect humans better and at longer ranges (45-90 m) using a smaller memory footprint and 7.3-times faster processing.
Its low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar, and radar. Precise localization and pose are provided using IMU and GPS. Ground truth of static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, with GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis presents many scientific contributions to the state of the art in perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms, and procedures to perform multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed, and solved critical issues in utilizing camera-based perception systems that are essential to make autonomous vehicles in agriculture a reality.
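    The Inverse Sensor Model fusion mentioned above updates an occupancy grid in log-odds form. As a rough, self-contained sketch (the probabilities `p_hit`/`p_miss` and the grid layout are illustrative, not taken from the thesis):

```python
import numpy as np

def log_odds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_grid(grid, cells, p_hit=0.7, p_miss=0.4):
    """Accumulate inverse-sensor-model evidence into a log-odds grid.

    grid:  2D array of log-odds values (0.0 means "unknown", p = 0.5)
    cells: list of ((row, col), occupied_bool) observations
    """
    for (r, c), occupied in cells:
        grid[r, c] += log_odds(p_hit if occupied else p_miss)
    return grid

def occupancy_prob(grid):
    """Convert the log-odds grid back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))
```

    Each detection raises the log-odds of its cell and each observed-free cell lowers it, so repeated agreeing observations from different sensors sharpen the map without storing per-sensor state.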

    Multi-Object Tracking System based on LiDAR and RADAR for Intelligent Vehicles applications

    This Final Degree Project aims to develop a 3D Multi-Object Tracking and Detection System based on the sensor fusion of LiDAR and RADAR for autonomous driving applications, built on traditional Machine Learning algorithms. The implementation is based on Python and ROS and complies with real-time requirements. In the object detection stage, the RANSAC plane segmentation algorithm is used, followed by extraction of bounding boxes using DBSCAN. A late sensor fusion using 3D Intersection over Union and a BEV-SORT tracking system complete the proposed architecture. (Degree in Industrial Electronics and Automation Engineering)
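    The detection stage described above starts by fitting the dominant ground plane with RANSAC and discarding its inliers, leaving obstacle points for DBSCAN clustering. A hand-rolled sketch of the RANSAC step only (the original work presumably uses library implementations; iteration count and threshold here are illustrative):

```python
import numpy as np

def ransac_ground_plane(points, n_iters=100, dist_thresh=0.2, seed=0):
    """Fit a plane ax + by + cz + d = 0 with RANSAC.

    points: (N, 3) array. Returns a boolean inlier mask; points outside
    the mask are candidate obstacles to be clustered (e.g. with DBSCAN).
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Hypothesize a plane from 3 random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        # Keep the hypothesis with the most inliers.
        mask = np.abs(points @ normal + d) < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

    The non-inlier points would then be clustered and wrapped in bounding boxes before the late fusion and tracking stages.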

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles.
The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
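    One of the classification methods above operates on 2D range images. A minimal spherical projection from a 3D point cloud to such an image might look like this (the resolution and vertical field of view are illustrative, not the sensor parameters used in the thesis):

```python
import numpy as np

def to_range_image(points, h=16, w=360, v_fov=(-15.0, 15.0)):
    """Project an (N, 3) lidar point cloud to an h-by-w range image.

    Each cell stores the range of the (last-written) point that falls
    into it; empty cells stay 0. v_fov is the vertical field of view
    in degrees, matching a multi-beam spinning lidar.
    """
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.degrees(np.arctan2(y, x))                     # [-180, 180)
    pitch = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))
    col = ((yaw + 180.0) / 360.0 * w).astype(int) % w
    row = ((v_fov[1] - pitch) / (v_fov[1] - v_fov[0]) * h).astype(int)
    row = np.clip(row, 0, h - 1)
    img = np.zeros((h, w))
    img[row, col] = r
    return img
```

    A 2D convolutional network can then classify this dense image far more cheaply than operating on the unordered 3D points directly.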

    Automotive Intelligence Embedded in Electric Connected Autonomous and Shared Vehicles Technology for Sustainable Green Mobility

    The automotive sector's digitalization accelerates the technology convergence of perception, computing processing, connectivity, propulsion, and data fusion for electric connected autonomous and shared (ECAS) vehicles. This brings cutting-edge computing paradigms with embedded cognitive capabilities into vehicle domains and data infrastructure to provide holistic intrinsic and extrinsic intelligence for new mobility applications. Digital technologies are a significant enabler in achieving the sustainability goals of the green transformation of the mobility and transportation sectors. Innovation occurs predominantly in ECAS vehicles’ architecture, operations, intelligent functions, and automotive digital infrastructure. The traditional ownership model is moving toward multimodal and shared mobility services. ECAS vehicle technology allows for the development of virtual automotive functions that run on shared hardware platforms with data unlocking value, and for introducing new, shared computing-based automotive features. Vehicle automation, vehicle electrification, and vehicle-to-everything (V2X) communication are facilitated by the convergence of artificial intelligence (AI), cellular/wireless connectivity, edge computing, the Internet of things (IoT), the Internet of intelligent things (IoIT), digital twins (DTs), virtual/augmented reality (VR/AR) and distributed ledger technologies (DLTs). Vehicles become more intelligent and connected, functioning as edge micro-servers on wheels, powered by sensors/actuators, hardware (HW), software (SW) and smart virtual functions that are integrated into the digital infrastructure. Electrification, automation, connectivity, digitalization, decarbonization, decentralization, and standardization are the main drivers that unlock intelligent vehicles' potential for sustainable green mobility applications.
ECAS vehicles act as autonomous agents using swarm intelligence to communicate and exchange information, either directly or indirectly, with each other and the infrastructure, accessing independent services such as energy, high-definition maps, routes, infrastructure information, traffic lights, tolls, parking (micropayments), and finding emergent/intelligent solutions. The article gives an overview of the advances in AI technologies and applications to realize intelligent functions and optimize vehicle performance, control, and decision-making for future ECAS vehicles, to support the acceleration of deployment in various mobility scenarios. ECAS vehicles, systems, sub-systems, and components are subject to stringent regulatory frameworks, which set rigorous requirements for autonomous vehicles. An in-depth assessment of existing standards, regulations, and laws, including a thorough gap analysis, is required, and global guidelines must be provided on how to fulfill the requirements. ECAS vehicle technology trustworthiness, including AI-based HW/SW and algorithms, is necessary for developing ECAS systems across the entire automotive ecosystem. The safety and transparency of AI-based technology and the explainability of the purpose, use, benefits, and limitations of AI systems are critical for fulfilling trustworthiness requirements. The article presents ECAS vehicles’ evolution toward domain-controller, zonal, and federated vehicle/edge/cloud-centric architectures based on distributed intelligence at the vehicle and infrastructure levels, and the role of AI techniques and methods in implementing the different autonomous driving and optimization functions for sustainable green mobility.

    Perception architecture exploration for automotive cyber-physical systems

    Spring 2022. Includes bibliographical references. In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents as a result of human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting an enhanced object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets are inside the field of view of each sensor or in the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false-positive detections will be high in real time, and appropriate sensor fusion algorithms are required for filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road.
Position and velocity estimation using sensor fusion algorithms has a lower margin for error when the trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as an incorrect measurement can cause accidents. Due to the various complex inter-dependencies between design decisions, constraints, and optimization goals, a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of global co-optimization of deep learning and sensing infrastructure. The framework is capable of exploring the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework can obtain optimal sensor configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS, requiring solutions to multiple complex problems related not only to the selection and placement of sensors but also to object detection and sensor fusion. Experimental results with the Audi TT and BMW Mini Cooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
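    A core sub-problem in this kind of sensor-placement exploration is scoring a candidate configuration by its combined field of view. A toy 2D version of such a coverage objective (the sensor tuples and geometry here are illustrative simplifications, not the actual VESPA formulation):

```python
import numpy as np

def in_fov(sensor_pos, sensor_yaw, fov_deg, max_range, targets):
    """Boolean mask: which 2D target points a single sensor can see."""
    d = targets - sensor_pos
    dist = np.linalg.norm(d, axis=1)
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0])) - sensor_yaw
    ang = (ang + 180.0) % 360.0 - 180.0       # wrap to [-180, 180)
    return (dist <= max_range) & (np.abs(ang) <= fov_deg / 2.0)

def coverage(config, targets):
    """Fraction of targets inside the combined field of view.

    config: list of (position, yaw_deg, fov_deg, max_range) tuples.
    """
    covered = np.zeros(len(targets), dtype=bool)
    for pos, yaw, fov, rng in config:
        covered |= in_fov(np.asarray(pos), yaw, fov, rng, targets)
    return covered.mean()
```

    A design-space explorer can then search over positions and orientations to maximize this score, possibly penalizing redundant overlap between individual sensors as the abstract describes.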

    A review and perspective on optical phased array for automotive LiDAR

    This paper aims to review the state of the art of Light Detection and Ranging (LiDAR) sensors for automotive applications, and particularly for automated vehicles, focusing on recent advances in the field of integrated LiDAR and one of its key components: the Optical Phased Array (OPA). LiDAR is still a sensor that divides the automotive community, with several automotive companies investing in it and some stating that LiDAR is a ‘useless appendix’. However, currently there is not a single sensor technology able to robustly and completely support automated navigation. Therefore, LiDAR, with its capability to map the vehicle surroundings in three dimensions (3D), is a strong candidate to support Automated Vehicles (AVs). This manuscript highlights current AV sensor challenges and analyses the strengths and weaknesses of the perception sensors currently deployed. It then discusses the main LiDAR technologies emerging in automotive and focuses on integrated LiDAR, the challenges associated with light-beam steering on a chip, and the use of Optical Phased Arrays, finally discussing the factors currently hindering the adoption of silicon photonics OPAs and their future research directions.
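    Beam steering with an OPA follows standard phased-array theory: a linear phase gradient Δφ between adjacent emitters at pitch d steers the far-field beam to sin θ = Δφ·λ/(2πd). A quick numeric check of this textbook relation (the wavelength and pitch values below are only examples):

```python
import numpy as np

def steering_angle_deg(delta_phi_rad, wavelength_m, pitch_m):
    """Far-field steering angle of a uniform 1D optical phased array.

    delta_phi_rad: phase increment between adjacent emitters
    wavelength_m:  free-space wavelength
    pitch_m:       emitter spacing
    """
    s = delta_phi_rad * wavelength_m / (2.0 * np.pi * pitch_m)
    return np.degrees(np.arcsin(s))
```

    The relation also shows why small emitter pitch matters: a pitch below λ/2 is needed to steer over the full angular range without grating lobes, which is one of the fabrication challenges the review discusses.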

    Model Predictive Control System Design of a passenger car for Valet Parking Scenario

    A recent expansion of passenger cars’ automated functions has led to increasingly challenging design problems for engineers. Among these, the development of Automated Valet Parking is the latest addition. The system represents the next evolution of automated systems, giving the vehicle greater autonomy: the efforts of most automotive OEMs go towards achieving market deployment of such an automated function. To this end, the focus of each OEM is on taking part in this competitive endeavor and succeeding by developing a proprietary solution with the support of hardware and software suppliers. Within this framework, the present work aims at developing an effective control strategy for the considered scenarios. To reach this goal, a Model Predictive Control approach is employed, taking advantage of previous work within the automotive OEM in the automated driving field. The control algorithm is developed in a Simulink® simulation according to the requirements of the application and tested; results show the control strategy successfully drives the vehicle on the predefined path.
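    As a generic illustration of the receding-horizon idea behind such a controller (not the proprietary implementation described in the work), one unconstrained MPC step for a double-integrator position model can be solved as a regularized least-squares problem:

```python
import numpy as np

def mpc_step(x0, ref, horizon=10, dt=0.1, rho=0.01):
    """One receding-horizon step for a double integrator x = [pos, vel].

    Minimizes the sum of squared position-tracking errors over the
    horizon plus rho * control effort, and returns only the first
    control input (acceleration), as in standard MPC.
    """
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    n = horizon
    # Precompute powers of A for the prediction matrices.
    powers = [np.eye(2)]
    for _ in range(n):
        powers.append(A @ powers[-1])
    F = np.zeros((n, 2))   # maps x0 to predicted positions
    G = np.zeros((n, n))   # maps controls to predicted positions
    for k in range(1, n + 1):
        F[k - 1] = powers[k][0]
        for j in range(k):
            G[k - 1, j] = (powers[k - 1 - j] @ B)[0, 0]
    # Solve min ||G u + F x0 - ref||^2 + rho ||u||^2 in one shot.
    H = np.vstack([G, np.sqrt(rho) * np.eye(n)])
    b = np.concatenate([ref - F @ x0, np.zeros(n)])
    u = np.linalg.lstsq(H, b, rcond=None)[0]
    return u[0]
```

    Applying only the first input and re-solving at every sample is what distinguishes MPC from open-loop trajectory optimization; a production valet-parking controller would add a vehicle kinematics model and state/input constraints on top of this skeleton.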