1,424 research outputs found

    Object-level fusion for surround environment perception in automated driving applications

    Get PDF
    Driver assistance systems have come to rely on an increasing number of sensors for new functions. As advanced driver assistance systems evolve towards automated driving, new methods are required for processing sensor data efficiently and economically in such complex systems. The detection of dynamic objects is one of the most important capabilities required by advanced driver assistance systems and automated driving. In this thesis, an environment model approach for the detection of dynamic objects is presented in order to realize an effective method for sensor data fusion. A scalable high-level fusion architecture is developed for fusing object data from several sensors in a single system, where processing occurs at three levels: sensor, fusion and application. A complete and consistent object model, comprising the object’s dynamic state, existence probability and classification, is defined as a sensor-independent, generic interface for sensor data fusion across all three processing levels. Novel algorithms are developed for object data association and fusion at the fusion level of the architecture. An asynchronous sensor-to-global fusion strategy is applied in order to process sensor data immediately within the high-level fusion architecture, giving driver assistance systems the most up-to-date information about the vehicle’s environment. Track-to-track fusion algorithms are applied for dynamic state fusion, where the information matrix fusion algorithm produces results comparable to a low-level central Kalman filter approach. The existence probability of an object is fused using a novel approach based on Dempster-Shafer evidence theory, where each sensor’s existence estimation performance is taken into account during fusion. A similar novel approach based on Dempster-Shafer evidence theory is applied to the fusion of an object’s classification. The developed high-level sensor data fusion architecture and its algorithms are evaluated using a prototype vehicle equipped with 12 sensors for surround environment perception. A thorough evaluation of the complete object model is performed on a closed test track using vehicles equipped with hardware for generating an accurate ground truth. Existence and classification performance is evaluated using labeled data sets from real traffic scenarios. The evaluation demonstrates the accuracy and effectiveness of the proposed sensor data fusion approach. The work presented in this thesis has additionally been used extensively in several research projects as the dynamic object detection platform for automated driving applications on highways in real traffic.
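
    For illustration, a minimal sketch of the information-matrix form of track-to-track fusion referred to above, assuming two Gaussian sensor-level tracks that share a common prior (predicted) estimate; the variable names and the common-prior setup are illustrative, not taken from the thesis.

```python
import numpy as np

def information_matrix_fusion(x1, P1, x2, P2, x_prior, P_prior):
    """Fuse two sensor-level track estimates (x1, P1) and (x2, P2) that were
    both predicted from the common prior (x_prior, P_prior).

    In information form, the shared prior information is subtracted once so
    that it is not double-counted when the two tracks are combined."""
    Y1, Y2, Yp = np.linalg.inv(P1), np.linalg.inv(P2), np.linalg.inv(P_prior)
    Y_fused = Y1 + Y2 - Yp                      # fused information matrix
    y_fused = Y1 @ x1 + Y2 @ x2 - Yp @ x_prior  # fused information vector
    P_fused = np.linalg.inv(Y_fused)
    return P_fused @ y_fused, P_fused

# Toy usage with a 2-D state (position, velocity)
x_prior = np.array([0.0, 1.0]); P_prior = np.diag([4.0, 2.0])
x_cam   = np.array([0.3, 1.1]); P_cam   = np.diag([1.0, 1.5])
x_rad   = np.array([0.2, 0.9]); P_rad   = np.diag([2.0, 0.5])
x_f, P_f = information_matrix_fusion(x_cam, P_cam, x_rad, P_rad, x_prior, P_prior)
```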

    Automotive sensor fusion systems for traffic aware adaptive cruise control

    Get PDF
    The autonomous driving (AD) industry is advancing at a rapid pace. New sensing technologies for tracking vehicles, controlling vehicle behavior, and communicating with infrastructure are being added to commercial vehicles. These new automotive technologies reduce on-road fatalities, improve ride quality, and improve vehicle fuel economy. This research explores two types of automotive sensor fusion systems: a novel radar/camera sensor fusion system that uses a long short-term memory (LSTM) neural network (NN) to perform data fusion and improve tracking capabilities in a simulated environment, and a traditional radar/camera sensor fusion system deployed in Mississippi State’s entry in the EcoCAR Mobility Challenge (a 2019 Chevrolet Blazer) for an adaptive cruise control (ACC) system that functions in on-road applications. Along with detecting vehicles, pedestrians, and cyclists, the sensor fusion system deployed in the 2019 Chevrolet Blazer uses vehicle-to-everything (V2X) communication to communicate with infrastructure such as traffic lights to optimize and autonomously control vehicle acceleration through a connected corridor.
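
    As a rough illustration of the LSTM-based fusion idea described above (not the actual network from this work), the sketch below feeds concatenated per-frame radar and camera measurements through an LSTM that regresses a fused track state; all dimensions, feature choices and names are assumptions.

```python
import torch
import torch.nn as nn

class LSTMFusionTracker(nn.Module):
    """Toy radar/camera fusion: concatenate per-frame radar and camera
    measurements and let an LSTM regress the fused track state."""
    def __init__(self, radar_dim=4, camera_dim=4, hidden_dim=64, state_dim=4):
        super().__init__()
        self.lstm = nn.LSTM(radar_dim + camera_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)  # e.g. [x, y, vx, vy]

    def forward(self, radar_seq, camera_seq):
        # radar_seq, camera_seq: (batch, time, feature)
        fused_in = torch.cat([radar_seq, camera_seq], dim=-1)
        out, _ = self.lstm(fused_in)
        return self.head(out)  # fused state estimate per time step

# Toy usage: batch of 8 tracks, 20 frames each
model = LSTMFusionTracker()
radar = torch.randn(8, 20, 4)    # e.g. range, azimuth, range rate, RCS
camera = torch.randn(8, 20, 4)   # e.g. bounding-box centre and size
states = model(radar, camera)    # shape (8, 20, 4)
```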

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    Get PDF
    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.
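
    A crude sketch of how a probabilistic collision risk over reachable sets might be estimated by Monte Carlo sampling; the motion models, uncertainties and distance threshold below are illustrative assumptions, not the method of the paper.

```python
import numpy as np

def collision_risk(ego_traj, ped_pos, ped_vel_cov, horizon, dt=0.1,
                   n_samples=2000, safe_dist=1.0, rng=np.random.default_rng(0)):
    """Monte Carlo collision-probability estimate: sample pedestrian velocities
    from a Gaussian (spanning its reachable positions), propagate them over the
    horizon and count samples that come within safe_dist of the predicted ego
    trajectory at any time step."""
    steps = int(horizon / dt)
    vels = rng.multivariate_normal(np.zeros(2), ped_vel_cov, n_samples)
    hits = np.zeros(n_samples, dtype=bool)
    for k in range(steps):
        ped_k = ped_pos + vels * (k + 1) * dt          # (n_samples, 2) positions
        d = np.linalg.norm(ped_k - ego_traj[k], axis=1)
        hits |= d < safe_dist
    return hits.mean()

# Ego vehicle driving straight at 10 m/s, pedestrian 15 m ahead, 2 m to the side
dt, horizon = 0.1, 2.0
ego = np.array([[10.0 * (k + 1) * dt, 0.0] for k in range(int(horizon / dt))])
risk = collision_risk(ego, ped_pos=np.array([15.0, 2.0]),
                      ped_vel_cov=np.diag([0.5, 1.5]), horizon=horizon)
```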

    Radar networks: A review of features and challenges

    Full text link
    Networks of multiple radars are typically used to improve coverage and tracking accuracy. Recently, such networks have facilitated the deployment of commercial radars for civilian applications such as healthcare, gesture recognition, home security, and autonomous automobiles. They exploit advanced signal processing techniques together with efficient data fusion methods in order to achieve high performance in event detection and tracking. This paper reviews outstanding features of radar networks, their challenges, and their state-of-the-art solutions from the perspective of signal processing. Each of the discussed subjects could evolve into an active research topic. Comment: To appear soon in Information Fusion

    MMPDA Vehicle Tracking System using Asynchronous Sensor Fusion of Radar and Vision

    Get PDF

    Cooperative Perception for Social Driving in Connected Vehicle Traffic

    Get PDF
    The development of autonomous vehicle technology has moved to the center of automotive research in recent decades. In the foreseeable future, road vehicles at all levels of automation and connectivity will be required to operate safely in hybrid traffic where human-operated vehicles (HOVs) and fully and semi-autonomous vehicles (AVs) coexist. Having an accurate and reliable perception of the road is an important requirement for achieving this objective. This dissertation addresses some of the associated challenges by developing a human-like social driver model and devising a decentralized cooperative perception framework. A human-like driver model can aid the development of AVs by building an understanding of the interactions among human drivers and AVs in hybrid traffic, thereby facilitating an efficient and safe integration. The presented social driver model categorizes and defines the driver's psychological decision factors in mathematical representations (target force, object force, and lane force). Model predictive control (MPC) is then employed for motion planning by evaluating the prevailing social forces and considering the kinematics of the controlled vehicle as well as other operating constraints, ensuring a safe maneuver in a way that mimics the predictive nature of the human driver's decision-making process. A hierarchical model predictive control structure is also proposed, where an additional upper-level controller aggregates the social forces over a longer prediction horizon whenever an extended perception of the upcoming traffic is available via vehicular networking. Based on the prediction of the upper-level controller, a sequence of reference lanes is passed to a lower-level controller to track while avoiding local obstacles. This hierarchical scheme helps reduce unnecessary lane changes, resulting in smoother maneuvers. The dynamic vehicular communication environment requires a robust framework that consistently evaluates and exploits the set of communicated information in order to improve the perception of a participating vehicle beyond the limitations of its own sensors. This dissertation presents a decentralized cooperative perception framework that considers uncertainties in traffic measurements and allows scalability (for various settings of traffic density, participation rate, etc.). The framework utilizes a Bhattacharyya distance filter (BDF) for data association and a fast covariance intersection (FCI) scheme for data fusion. The conservatism of the covariance intersection fusion scheme is investigated in comparison to the traditional Kalman filter (KF), and two different fusion architectures, sensor-to-sensor and sensor-to-system track fusion, are evaluated. The performance of the overall proposed framework is demonstrated via Monte Carlo simulations with a set of empirical communication models and traffic microsimulations, where each connected vehicle asynchronously broadcasts its local perception, consisting of estimates of the motion states of itself and neighboring vehicles along with the corresponding uncertainty measures of the estimates. The evaluated framework includes a vehicle-to-vehicle (V2V) communication model that considers intermittent communications as well as a model that accounts for dynamic changes in an individual vehicle's sensor field of view (FoV) in accordance with the prevailing traffic conditions.
The results show that there is an optimal participation rate: increasing participation beyond a certain level increases the delay in packet delivery and the computational complexity of the data association and fusion processes without a significant improvement in the accuracy achieved via cooperative perception. In highly dense traffic, the vehicular network can often become congested, leading to limited bandwidth availability at high participation rates of connected vehicles in the cooperative perception scheme. To alleviate the bandwidth utilization issues, an information-value discriminating networking scheme is proposed, where each sender selectively broadcasts perception data based on the novelty value of the information. The potential benefits of this approach include, but are not limited to, the reduction of bandwidth bottlenecks and the minimization of the computational cost of data association and fusion post-processing of the shared perception data at receiving nodes. It is argued that the proposed information-value discriminating communication scheme can alleviate these adverse effects without sacrificing the fidelity of the perception.
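
    To make the association and fusion steps concrete, here is a minimal sketch of a Bhattacharyya-distance gate and a non-iterative ("fast") covariance intersection fusion of two Gaussian track estimates; the trace-based weight and the gate threshold are common simplifications, not necessarily those used in the dissertation.

```python
import numpy as np

def bhattacharyya_distance(x1, P1, x2, P2):
    """Bhattacharyya distance between two Gaussian track estimates,
    usable as an association/gating statistic."""
    P = 0.5 * (P1 + P2)
    d = x1 - x2
    term1 = 0.125 * d @ np.linalg.solve(P, d)
    term2 = 0.5 * np.log(np.linalg.det(P) /
                         np.sqrt(np.linalg.det(P1) * np.linalg.det(P2)))
    return term1 + term2

def fast_covariance_intersection(x1, P1, x2, P2):
    """Covariance intersection with a non-iterative, trace-based weight
    (one common 'fast CI' choice); consistent under unknown correlation."""
    w = np.trace(P2) / (np.trace(P1) + np.trace(P2))
    Y1, Y2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * Y1 + (1.0 - w) * Y2)
    x = P @ (w * Y1 @ x1 + (1.0 - w) * Y2 @ x2)
    return x, P

# Associate and fuse two reported estimates of the same vehicle
xa, Pa = np.array([12.0, 3.0]), np.diag([2.0, 1.0])
xb, Pb = np.array([12.5, 2.7]), np.diag([1.0, 2.0])
if bhattacharyya_distance(xa, Pa, xb, Pb) < 1.0:   # illustrative gate
    x_fused, P_fused = fast_covariance_intersection(xa, Pa, xb, Pb)
```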

    Data fusion architecture for intelligent vehicles

    Get PDF
    Traffic accidents are an important socio-economic problem. Every year, the cost in human lives and the economic consequences are inestimable. In recent years, efforts to reduce or mitigate this problem have led to a reduction in casualties, but the death toll in road accidents remains high, which means that there is still much work to be done. Recent advances in information technology have led to more complex applications, which have the ability to assist or even substitute for the driver in hazardous situations, allowing more secure and efficient driving. These complex systems, however, require more trustworthy and accurate sensing technology, able to detect and reconstruct the surrounding environment and to identify the different objects and road users in it. The sensing technology available nowadays is insufficient on its own, so combining the different available technologies is mandatory in order to fulfill the exacting requirements of road safety applications. In this way, the limitations of each individual system are overcome and more dependable and reliable information is obtained. These kinds of applications are called Data Fusion (DF) applications. The present document provides a solution to the Data Fusion problem in the Intelligent Transport Systems (ITS) field through a set of techniques and algorithms that allow the combination of information from different sensors. By combining these sensors, the basic performance of classical ITS approaches can be enhanced, satisfying the demands of safety applications. The work presented is related to two research fields. Intelligent Transport Systems, the field in which this thesis is framed, uses recent advances in information technology to increase the security and efficiency of transport systems. Data Fusion techniques, on the other hand, address the processes involved in combining information from different sources, enhancing the basic capabilities of the systems and adding trustworthiness to their inferences. This work uses Data Fusion algorithms and techniques to provide solutions to classic ITS applications. The sensors used in the present application are a laser scanner and computer vision. The former is a well-known and widely used sensor that in recent years has started to be applied in different ITS applications, showing strong performance mainly related to its reliability. The latter is a more recent sensor in automotive applications, widely used in the ITS advances of the last decade; thanks to computer vision, road safety applications such as traffic sign detection, driver monitoring, lane detection and pedestrian detection are becoming possible. The present thesis addresses the environment reconstruction problem, identifying road users (i.e. pedestrians and vehicles) by means of Data Fusion techniques. The solution delivers a complete, level-based approach to the Data Fusion problem, providing tools both for detecting other road users and for estimating the degree of danger involved in each detection. The presented algorithms represent a step forward in the ITS world, providing novel Data Fusion based algorithms that allow the detection and estimation of movement of pedestrians and vehicles in a robust and trustworthy way. To perform such a demanding task, other information sources were needed: GPS, inertial systems and context information.
Finally, it is important to remark that, within the framework of this thesis, the lack of detection and identification techniques based on laser radar made it necessary to research and provide more innovative approaches, based on the laser scanner, able to detect and identify the different actors involved in the road environment.
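
    As a toy illustration of the kind of laser/vision combination described above (not the thesis's actual algorithms), the sketch below projects a laser-scanner detection into the image with an assumed pinhole camera model and checks whether it falls inside a vision-based bounding box, a simple way to associate the two sensors' detections.

```python
import numpy as np

# Assumed pinhole intrinsics and laser-to-camera extrinsics (illustrative values)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                       # rotation laser -> camera
t = np.array([0.0, -0.5, 0.0])      # translation laser -> camera (metres)

def project_to_image(p_laser):
    """Transform a 3-D laser point into the camera frame (z forward) and
    project it onto the image plane; returns pixel (u, v) or None if behind."""
    p_cam = R @ p_laser + t
    if p_cam[2] <= 0.0:
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def associate(laser_obstacles, vision_boxes):
    """Pair each laser obstacle with the first vision box containing its projection."""
    pairs = []
    for i, p in enumerate(laser_obstacles):
        uv = project_to_image(p)
        if uv is None:
            continue
        for j, (u0, v0, u1, v1) in enumerate(vision_boxes):
            if u0 <= uv[0] <= u1 and v0 <= uv[1] <= v1:
                pairs.append((i, j))   # fused detection: laser range + vision class
                break
    return pairs

# Toy data: one pedestrian-like obstacle 10 m ahead, one matching bounding box
laser_obstacles = [np.array([1.0, 0.0, 10.0])]
vision_boxes = [(680.0, 280.0, 780.0, 420.0)]
print(associate(laser_obstacles, vision_boxes))   # -> [(0, 0)]
```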

    GM-PHD Filter Based Sensor Data Fusion for Automotive Frontal Perception System

    Get PDF
    Advanced driver assistance systems and highly automated driving functions require an enhanced frontal perception system. The requirements of a frontal environment perception system cannot be satisfied by any single existing automotive sensor. A commonly used sensor cluster for these functions consists of a mono-vision smart camera and an automotive radar. Sensor fusion is intended to combine the data of these sensors to perform robust environment perception. Multi-object tracking algorithms provide a suitable software architecture for sensor data fusion. Several multi-object tracking algorithms, such as JPDAF or MHT, have good tracking performance; however, their computational requirements are significant owing to their combinatorial complexity. The GM-PHD filter is a straightforward algorithm with favorable runtime characteristics that can track an unknown and time-varying number of objects. However, the conventional GM-PHD filter performs poorly in object cardinality estimation. This paper proposes a method that extends the GM-PHD filter with an object birth model that relies on the sensor detections and a robust object extraction module, including Bayesian estimation of the objects' existence probability, to compensate for the drawbacks of the conventional algorithm.
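
    For orientation, a minimal single-step Gaussian-mixture PHD predict/update in the standard form, with a measurement-driven birth term in the spirit described above; all models, parameters and the birth construction are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Linear-Gaussian models (illustrative constant-velocity example, dt = 0.1 s)
dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
Q = 0.1 * np.eye(4)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
R = 0.5 * np.eye(2)
p_survive, p_detect, clutter_density, w_birth = 0.99, 0.9, 1e-4, 0.05

def gaussian(z, mean, cov):
    d = z - mean
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
        np.linalg.det(2 * np.pi * cov))

def gmphd_step(components, measurements, prev_measurements):
    """One GM-PHD predict/update. components: list of (weight, mean, covariance)."""
    # Predict surviving components
    pred = [(p_survive * w, F @ m, F @ P @ F.T + Q) for w, m, P in components]
    # Measurement-driven birth: spawn a component at each earlier detection
    for z in prev_measurements:
        pred.append((w_birth, np.array([z[0], z[1], 0.0, 0.0]),
                     np.diag([1.0, 1.0, 4.0, 4.0])))
    # Missed-detection terms
    updated = [((1 - p_detect) * w, m, P) for w, m, P in pred]
    # Detection terms, one block per measurement
    for z in measurements:
        block = []
        for w, m, P in pred:
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            block.append((p_detect * w * gaussian(z, H @ m, S),
                          m + K @ (z - H @ m), (np.eye(4) - K @ H) @ P))
        norm = clutter_density + sum(w for w, _, _ in block)
        updated += [(w / norm, m, P) for w, m, P in block]
    return updated  # pruning, merging and object extraction would follow

# Toy usage: one existing component, two new detections (reused here as births)
comps = [(0.8, np.array([10.0, 2.0, 1.0, 0.0]), np.eye(4))]
dets = [np.array([10.1, 2.1]), np.array([25.0, -3.0])]
comps = gmphd_step(comps, dets, prev_measurements=dets)
print(f"estimated object count ~ {sum(w for w, _, _ in comps):.2f}")
```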

    Fast Optimal Joint Tracking-Registration for Multi-Sensor Systems

    Full text link
    Sensor fusion of multiple sources plays an important role in vehicular systems for achieving refined target position and velocity estimates. In this article, we address the general registration problem, which is a key module of a fusion system for accurately correcting systematic sensor errors. A fast maximum a posteriori (FMAP) algorithm for joint registration-tracking (JRT) is presented. The algorithm uses a recursive two-step optimization that involves orthogonal factorization to ensure numerical stability. A statistical efficiency analysis based on Cramér-Rao lower bound theory is presented to show the asymptotic optimality of FMAP. In addition, Givens rotations are used to derive a fast implementation with complexity O(n), with n the number of tracked targets. Simulations and experiments are presented to demonstrate the promise and effectiveness of FMAP.
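
    As a side note on the numerical machinery mentioned above, here is a minimal sketch of solving a linear least-squares problem with an orthogonal factorization built from Givens rotations, the kind of operation the fast implementation relies on; it is a generic textbook routine, not the FMAP algorithm itself.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) so that the rotation [[c, s], [-s, c]] maps (a, b) to (r, 0)."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def least_squares_givens(A, y):
    """Solve min ||A x - y|| by zeroing A's subdiagonal with Givens rotations
    (a numerically stable orthogonal factorization), then back-substitution."""
    R, z = A.astype(float).copy(), y.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(j + 1, m):
            c, s = givens(R[j, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[[j, i], :] = G @ R[[j, i], :]
            z[[j, i]] = G @ z[[j, i]]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):          # back-substitution on upper-triangular R
        x[k] = (z[k] - R[k, k + 1:] @ x[k + 1:]) / R[k, k]
    return x

# Toy usage: fit a line y = a + b*t to noisy data
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t + 0.01 * np.random.default_rng(0).standard_normal(20)
print(least_squares_givens(A, y))   # approximately [2, 3]
```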