
    Driver lane change intention inference for intelligent vehicles: framework, survey, and challenges

    Intelligent vehicles and advanced driver assistance systems (ADAS) need proper awareness of both the traffic context and the driver's state, since ADAS share vehicle control authority with the human driver. This study provides an overview of ego-vehicle driver intention inference (DII), focusing mainly on lane change intention on highways. First, the human intention mechanism is discussed to give an overall understanding of driver intention. Next, ego-vehicle driver intentions are classified into categories according to various criteria. A complete DII system can be decomposed into modules for traffic context awareness, driver state monitoring, and vehicle dynamics measurement. The relationships between these modules and their respective impacts on DII are analyzed. Then, lane change intention inference (LCII) systems are reviewed from the perspective of input signals, algorithms, and evaluation. Finally, future concerns and emerging trends in this area are highlighted.
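
    As a rough illustration of the modular structure this survey describes, the sketch below fuses outputs of the three modules into one feature vector for intention classification; all names, dimensions, and the linear classifier are hypothetical placeholders, not a reference implementation from the survey.

```python
# Hypothetical sketch of a modular DII pipeline (not the survey's code).
from dataclasses import dataclass

import numpy as np

@dataclass
class DiiInputs:
    traffic_context: np.ndarray   # e.g., relative positions/speeds of neighbors
    driver_state: np.ndarray      # e.g., gaze and head-pose features
    vehicle_dynamics: np.ndarray  # e.g., speed, yaw rate, steering angle

def infer_lane_change_intention(x: DiiInputs, weights: np.ndarray) -> str:
    """Fuse the three module outputs and score left/keep/right intentions."""
    features = np.concatenate([x.traffic_context, x.driver_state, x.vehicle_dynamics])
    logits = weights @ features                      # (3, d) @ (d,) -> (3,)
    return ["left", "keep", "right"][int(np.argmax(logits))]
```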

    Neural Network Predicting Remote Vehicle Movement with Vehicle-to-Vehicle Data

    This paper presents a neural network for predicting the path of a remote vehicle from post hoc constructed vehicle-to-vehicle (V2V) data, and uses that prediction to determine whether it is safe for the host vehicle to change lanes. The data were collected in a 2013 experiment involving various drivers traveling on public roads in Ann Arbor, MI. The trips covered suburban roads, city roads, and divided highways over a two-day period. The vehicles' satellite global positioning system (GPS) data from this period were gathered and post-processed to find vehicle paths that came within 10 meters of one another. The path traces of each vehicle pair were combined to simulate what a V2V network would have provided to properly equipped vehicles, had such a network and vehicles existed on real road networks exhibiting natural driving behavior. This research uses these data to quantify how much an available V2V network improves a neural network's prediction of the future path of remote vehicles and of lane change safety. The most studied maneuver is overtaking. To a lesser extent, this paper also examines how a neural network predicts remote vehicle behavior when the host vehicle is equipped only with perceptive hardware and receives no information from the remote vehicle.
    Master of Science in Engineering, Electrical Engineering, College of Engineering & Computer Science, University of Michigan-Dearborn
    https://deepblue.lib.umich.edu/bitstream/2027.42/146791/1/49698122_breg_thesis_embedded (1).pd
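
    The 10-meter path-pairing step described above can be pictured with a small post-processing sketch; the haversine helper and the already time-aligned traces are assumptions for illustration, not the thesis code.

```python
# Hedged sketch: find where two time-aligned GPS traces pass within 10 m.
import numpy as np

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def close_encounters(trace_a, trace_b, threshold_m=10.0):
    """Indices where two time-aligned (lat, lon) traces are within threshold_m."""
    d = haversine_m(trace_a[:, 0], trace_a[:, 1], trace_b[:, 0], trace_b[:, 1])
    return np.nonzero(d <= threshold_m)[0]
```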

    Spatiotemporal Learning of Multivehicle Interaction Patterns in Lane-Change Scenarios

    Interpretation of common yet challenging interaction scenarios can support well-founded decisions for autonomous vehicles. Previous research achieved this using prior knowledge of specific scenarios with predefined models, which limits adaptive capability. This paper describes a Bayesian nonparametric approach that leverages continuous (i.e., Gaussian process) and discrete (i.e., Dirichlet process) stochastic processes to reveal the underlying interaction patterns between the ego vehicle and other nearby vehicles. Our model relaxes the dependency on the number of surrounding vehicles by developing an acceleration-sensitive velocity field based on Gaussian processes. Experimental results demonstrate that the velocity field can represent the spatial interactions between the ego vehicle and its surroundings. A discrete Bayesian nonparametric model, integrating Dirichlet processes and hidden Markov models, is then developed to learn the interaction patterns over time by automatically segmenting and clustering the sequential interaction data into interpretable granular patterns. We evaluate our approach on highway lane-change scenarios using the highD dataset, collected in real-world settings. Results demonstrate that the proposed Bayesian nonparametric approach provides insight into the complicated lane-change interactions of the ego vehicle with multiple surrounding traffic participants, based on the interpretable interaction patterns and their transition properties over time. The approach also points toward efficient analysis of other kinds of multi-agent interactions, such as vehicle-pedestrian interactions. Demos: https://youtu.be/z_vf9UHtdAM. Supplements: https://chengyuan-zhang.github.io/Multivehicle-Interaction
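
    A minimal sketch of a Gaussian-process velocity field in the spirit of the approach above: observed 2-D velocities are regressed on 2-D positions, and the field (with uncertainty) can then be queried anywhere. The toy data, kernel choice, and hyperparameters are assumptions, not the paper's implementation.

```python
# Illustrative GP velocity field: position (x, y) -> velocity (vx, vy).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(200, 2))                 # vehicle positions (m)
velocities = np.column_stack([25 + 0.05 * positions[:, 0],     # toy longitudinal speed
                              0.02 * (positions[:, 1] - 50)])  # toy lateral speed

gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(positions, velocities)

query = np.array([[50.0, 50.0]])
mean_vel, std_vel = gp.predict(query, return_std=True)  # field mean and uncertainty
```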

    Relational Recurrent Neural Networks For Vehicle Trajectory Prediction

    Scene understanding and future motion prediction of surrounding vehicles are crucial for safe and reliable decision-making and motion planning in autonomous highway driving. This is a challenging task given the correlations between drivers' behaviors. Building on the strength of Long Short-Term Memory (LSTM) networks in sequence modeling and the power of the attention mechanism to capture long-range dependencies, we bring relational recurrent neural networks (RRNNs) to the vehicle motion prediction problem. We propose an RRNN-based encoder-decoder architecture in which the encoder analyzes the patterns underlying the past trajectories and the decoder generates the future trajectory sequence. The originality of this network is that it combines the advantages of LSTM blocks in representing the temporal evolution of trajectories with the attention mechanism's ability to model the relative interactions between vehicles. This paper compares the proposed approach with an LSTM encoder-decoder on the new large-scale naturalistic driving highD dataset. The proposed method outperforms the LSTM encoder-decoder in terms of RMSE of the predicted trajectories, producing estimates of future trajectories over a 5 s horizon with longitudinal and lateral prediction RMSE of about 3.34 m and 0.48 m, respectively.
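
    The LSTM encoder-decoder baseline that the RRNN is compared against can be sketched as below; the dimensions, the 25-step horizon (assuming 5 s at 5 Hz), and the autoregressive decoding loop are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal LSTM encoder-decoder trajectory predictor (baseline sketch).
import torch
import torch.nn as nn

class Seq2SeqTrajectory(nn.Module):
    def __init__(self, hidden=64, horizon=25):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # predict (x, y) per step

    def forward(self, history):                     # history: (B, T_obs, 2)
        _, state = self.encoder(history)            # summarize past trajectory
        step = history[:, -1:, :]                   # last observed position
        outputs = []
        for _ in range(self.horizon):               # autoregressive decoding
            out, state = self.decoder(step, state)
            step = self.head(out)
            outputs.append(step)
        return torch.cat(outputs, dim=1)            # (B, horizon, 2)

model = Seq2SeqTrajectory()
pred = model(torch.randn(4, 15, 2))                 # 15 observed steps -> 25 predicted
gt = torch.randn(4, 25, 2)                          # dummy ground-truth future
rmse = torch.sqrt(((pred - gt) ** 2).mean())        # the RMSE metric reported above
```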

    Multi-task near-field perception for autonomous driving using surround-view fisheye cameras

    The formation of eyes led to the big bang of evolution: the dynamics changed from a primitive organism waiting for food to come into contact with it, to food being sought out via visual sensors. The human eye is one of the most sophisticated products of evolution, yet it still has defects. Over millions of years, humans have evolved a biological perception algorithm capable of driving cars, operating machinery, piloting aircraft, and navigating ships. Automating these capabilities for computers is critical for various applications, including self-driving cars, augmented reality, and architectural surveying. Near-field visual perception, in the context of self-driving cars, perceives the environment in a range of 0-10 meters with 360° coverage around the vehicle; it is a critical decision-making component in the development of safer automated driving. Recent advances in computer vision and deep learning, in conjunction with high-quality sensors such as cameras and LiDARs, have fueled mature visual perception solutions. Until now, far-field perception has been the primary focus. Another significant issue is the limited processing power available for developing real-time applications; because of this bottleneck, there is frequently a trade-off between performance and run-time efficiency. To address these issues, we concentrate on: 1) developing near-field perception algorithms with high performance and low computational complexity for various geometric and semantic visual perception tasks, using convolutional neural networks; and 2) using multi-task learning to overcome computational bottlenecks by sharing initial convolutional layers between tasks and developing optimization strategies that balance the tasks.
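
    A minimal sketch of the shared-encoder multi-task idea: early convolutional layers are shared, lightweight heads solve one semantic and one geometric task, and a weighted joint loss balances them. The layer sizes, the two example tasks, and the fixed loss weights are assumptions, not the thesis architecture.

```python
# Sketch: shared convolutional encoder with per-task heads and a joint loss.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.shared = nn.Sequential(                 # shared early conv layers
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_classes, 1)  # semantic task head
        self.depth_head = nn.Conv2d(64, 1, 1)        # geometric task head

    def forward(self, x):
        f = self.shared(x)                           # one forward pass, reused
        return self.seg_head(f), self.depth_head(f)

net = MultiTaskNet()
seg, depth = net(torch.randn(1, 3, 128, 256))        # outputs at 1/4 resolution
seg_target = torch.randint(0, 10, (1, 32, 64))       # dummy labels
depth_target = torch.rand(1, 1, 32, 64)
# Fixed weights stand in for a learned task-balancing strategy.
loss = nn.CrossEntropyLoss()(seg, seg_target) + 0.5 * nn.L1Loss()(depth, depth_target)
```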