
    A phased array antenna system of a millimeter-wave FMCW radar for blind spot detection of mobile robots

    Mobile robots have been extensively used in manufacturing plants for inter-logistic transportation in recent years. This paper covers a phased array antenna design for a millimeter-wave radar system intended to improve the safety and environmental awareness of lidar-based navigation systems. The K-band phased array antenna, when integrated with a 24 GHz Frequency-Modulated Continuous-Wave (FMCW) radar, not only enhances the accuracy of the 2-D area-scanning lidar system but also supports the safe operation of the vehicle. The safety improvement is achieved by covering blind spots to mitigate collision risks during rotations. The paper first reviews the system-level details of the 2D lidar sensor and shows the blind spots that arise when it is integrated into a mobile robot prototype. It then continues with the inclusion of an FMCW low-speed-ramp radar system and discusses the design details of the proposed K-band antenna array, which will be integrated with a radar sensor.
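The core of any FMCW ranging system, including the low-speed-ramp radar described above, is the linear relation between the measured beat frequency and target range. A minimal sketch follows; the ramp bandwidth, chirp duration, and beat frequency used in the example are illustrative assumptions, not parameters from the paper.

```python
# Sketch of the basic FMCW range equation. The sweep parameters below
# (250 MHz bandwidth, 1 ms chirp) are hypothetical example values.
C = 3e8  # speed of light, m/s

def fmcw_range(beat_freq_hz, bandwidth_hz, chirp_time_s):
    """Target range implied by the beat frequency of a linear FMCW ramp."""
    slope = bandwidth_hz / chirp_time_s       # sweep rate, Hz per second
    return C * beat_freq_hz / (2.0 * slope)   # meters

def range_resolution(bandwidth_hz):
    """Minimum separable range difference for a given sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)

# Example: 250 MHz sweep over 1 ms, 16.67 kHz beat tone
print(range_resolution(250e6))            # 0.6 m
print(fmcw_range(16.67e3, 250e6, 1e-3))   # ~10 m
```

A slower ramp (longer chirp time) lowers the beat frequencies produced by nearby targets, which relaxes the sampling requirements of the baseband electronics; this is one reason low-speed ramps suit short-range blind spot coverage.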

    Multi-Lane Perception Using Feature Fusion Based on GraphSLAM

    Full text link
    An extensive, precise, and robust recognition and modeling of the environment is a key factor for the next generation of Advanced Driver Assistance Systems and for the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM, and the result constitutes the basis for a multilane clothoid model. To allow incorporation of additional information sources, input data is processed in a generic format. The method is evaluated by comparing real data, collected with an experimental vehicle on highways, to a ground-truth map. The results show that the ego and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to serial lane detection, an increase in the detection range of the ego lane and a continuous perception of neighboring lanes are achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.
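The multilane clothoid model named above is conventionally approximated by a cubic polynomial in the longitudinal distance, parameterized by lateral offset, heading angle, curvature, and curvature rate. A minimal sketch of that standard parameterization is shown below; the symbols and example values follow common lane-modeling convention and are not taken from the paper.

```python
# Cubic (clothoid-approximation) lane model under the small-angle
# assumption: lateral offset of the lane boundary at longitudinal
# distance x. Parameters: offset y0, heading phi, curvature c0,
# curvature rate c1. Example values are hypothetical.

def lane_lateral_offset(x, y0, phi, c0, c1):
    """Lateral offset of the lane at longitudinal distance x (meters)."""
    return y0 + phi * x + 0.5 * c0 * x**2 + (c1 * x**3) / 6.0

# A straight lane boundary 1.8 m to the left stays at constant offset,
# even at the 120 m detection range reported in the abstract.
print(lane_lateral_offset(120.0, 1.8, 0.0, 0.0, 0.0))  # 1.8
```

Fitting one such polynomial per lane boundary to the fused GraphSLAM estimates yields a compact model that downstream longitudinal and lateral controllers can query at arbitrary lookahead distances.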

    A Bluetooth-low-energy-based detection and warning system for vulnerable road users in the blind spot of vehicles

    Blind spot road accidents are a frequently occurring problem. Every year, several deaths are caused by this phenomenon, even though a lot of money is invested in raising awareness and in the development of prevention systems. In this paper, a blind spot detection and warning system is proposed, relying on Received Signal Strength Indicator (RSSI) measurements and Bluetooth Low Energy (BLE) wireless communication. The received RSSI samples are threshold-filtered, after which a weighted average is computed with a sliding-window filter. The technique is validated by simulations and measurements. Finally, the strength of the proposed system is demonstrated with real-life measurements.
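The two-stage filtering pipeline described above (threshold filter, then weighted sliding-window average) can be sketched in a few lines. This is a minimal illustration assuming linearly increasing weights, an arbitrary window size, and an arbitrary threshold; the paper's actual parameter choices are not given in the abstract.

```python
from collections import deque

# Sketch of the described RSSI pipeline. The threshold, window size,
# and weighting scheme below are illustrative assumptions.
RSSI_THRESHOLD_DBM = -90.0   # discard implausibly weak samples
WINDOW_SIZE = 5

def smooth_rssi(samples, threshold=RSSI_THRESHOLD_DBM, window=WINDOW_SIZE):
    """Weighted sliding-window averages of threshold-filtered RSSI samples."""
    buf = deque(maxlen=window)
    out = []
    for s in samples:
        if s < threshold:    # stage 1: threshold filter
            continue
        buf.append(s)
        # stage 2: weighted average; newer samples weigh more
        weights = range(1, len(buf) + 1)
        out.append(sum(w * v for w, v in zip(weights, buf)) / sum(weights))
    return out

# A -120 dBm outlier is dropped; the remaining samples are smoothed.
print(smooth_rssi([-60.0, -120.0, -70.0]))
```

A warning would then be raised when the smoothed estimate exceeds a calibrated proximity level, indicating a tagged road user inside the blind spot.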

    Driver Assistance Technologies

    Driver assistance technology is emerging as a new driving technology, popularly known as ADAS. It encompasses Adaptive Cruise Control, Automatic Emergency Braking, blind spot monitoring, lane change assistance, forward collision warnings, and more. ADAS is an important platform that integrates these multiple applications, using data from multifunction sensors (cameras, radars, lidars, etc.) and sending commands to a range of actuators: engine, brakes, steering, and others. ADAS technology can detect some objects, perform basic classification, alert the driver to hazardous road conditions and, in some cases, slow or stop the vehicle. The architecture of the electronic control units (ECUs) responsible for executing advanced driver assistance systems (ADAS) in the vehicle is changing in response to the demands placed on it during driving. Automotive system architecture integrates multiple applications into ADAS ECUs that serve multiple sensors. The hardware architecture of ADAS and autonomous driving includes automotive Ethernet, TSN, Ethernet switches and gateways, and domain controllers, while the software architecture includes AUTOSAR Classic and Adaptive, ROS 2.0, and QNX. This chapter explains the functioning of driver assistance technology with the help of its architecture and the various types of sensors.

    Perception architecture exploration for automotive cyber-physical systems

    2022 Spring. Includes bibliographical references. In emerging autonomous and semi-autonomous vehicles, accurate environmental perception by automotive cyber-physical platforms is critical for achieving safety and driving-performance goals. An efficient perception solution capable of high-fidelity environment modeling can improve Advanced Driver Assistance System (ADAS) performance and reduce the number of lives lost to traffic accidents caused by human driving errors. Enabling robust perception for vehicles with ADAS requires solving multiple complex problems related to the selection and placement of sensors, object detection, and sensor fusion. Current methods address these problems in isolation, which leads to inefficient solutions. For instance, there is an inherent accuracy-versus-latency trade-off between one-stage and two-stage object detectors, which makes selecting an object detector from a diverse range of choices difficult. Further, even if a perception architecture were equipped with an ideal object detector performing high-accuracy, low-latency inference, the relative position and orientation of the selected sensors (e.g., cameras, radars, lidars) determine whether static or dynamic targets fall inside the field of view of each sensor, or within the combined field of view of the sensor configuration. If the combined field of view is too small or contains redundant overlap between individual sensors, important events and obstacles can go undetected. Conversely, if the combined field of view is too large, the number of false-positive detections will be high, and appropriate sensor fusion algorithms are required for real-time filtering. Sensor fusion algorithms also enable tracking of non-ego vehicles in situations where traffic is highly dynamic or there are many obstacles on the road.
Position and velocity estimation using sensor fusion algorithms has a lower margin for error when the trajectories of other vehicles in traffic are in the vicinity of the ego vehicle, as incorrect measurements can cause accidents. Due to the various complex inter-dependencies between design decisions, constraints, and optimization goals, a framework capable of synthesizing perception solutions for automotive cyber-physical platforms is not trivial to construct. We present a novel perception architecture exploration framework for automotive cyber-physical platforms capable of globally co-optimizing the deep learning and sensing infrastructure. The framework can explore the synthesis of heterogeneous sensor configurations towards achieving vehicle autonomy goals. As our first contribution, we propose a novel optimization framework called VESPA that explores the design space of sensor placement locations and orientations to find the optimal sensor configuration for a vehicle. We demonstrate how our framework obtains optimal configurations for heterogeneous sensors deployed across two contemporary real vehicles. We then utilize VESPA to create a comprehensive perception architecture synthesis framework called PASTA. This framework enables robust perception for vehicles with ADAS, providing solutions to multiple complex problems related not only to the selection and placement of sensors but also to object detection and sensor fusion. Experimental results with the Audi-TT and BMW Minicooper vehicles show how PASTA can intelligently traverse the perception design space to find robust, vehicle-specific solutions.
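The combined field-of-view reasoning in the abstract reduces, at its simplest, to checking whether a target lies inside at least one sensor's range and angular cone. The toy sketch below illustrates that geometric test only; the sensor poses and FOV angles are hypothetical, and the actual VESPA/PASTA frameworks optimize over such configurations rather than merely evaluating one.

```python
import math

# Toy combined-FOV coverage check. Each sensor is a tuple:
# (x, y, heading_deg, fov_deg, max_range_m). All values are hypothetical.

def in_fov(sensor, target):
    """True if `target` (x, y) lies within the sensor's range and angular FOV."""
    sx, sy, heading_deg, fov_deg, max_range = sensor
    dx, dy = target[0] - sx, target[1] - sy
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between target bearing and sensor heading
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def covered(sensors, target):
    """Combined-FOV test: covered if at least one sensor sees the target."""
    return any(in_fov(s, target) for s in sensors)

# Front camera (60 deg FOV, 80 m) and rear radar (120 deg FOV, 30 m)
sensors = [(0, 0, 0, 60, 80), (0, 0, 180, 120, 30)]
print(covered(sensors, (40, 5)))   # True: inside the camera cone
print(covered(sensors, (0, 40)))   # False: lateral blind spot
```

Sampling such a test over a grid of target positions gives a coverage map of a candidate configuration, which is the kind of objective a placement optimizer can score and compare across configurations.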

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.

    Ultra-wide bandwidth systems for the surveillance of railway crossing areas

    Level crossings are critical elements of railway networks where a large number of accidents take place every year. With the recent enforcement of new and higher safety standards for railway transportation systems, dedicated and reliable technologies for level crossing surveillance must be introduced in order to comply with the safety requirements. In this survey, the worldwide problem of level crossing surveillance is addressed, with particular attention to the recent European safety regulations. In this context, the capability of detecting, localizing, and discriminating a vehicle or obstacle that might be trapped in a level crossing area is considered of paramount importance to save lives while avoiding costly false alarms. The main solutions available today are illustrated, and their pros and cons are discussed. In particular, the recent ultra-wide bandwidth technology, combined with proper signal processing and backhauling over the already deployed optical fiber backbone, is shown to represent a promising solution for safety improvement at level crossings.