7 research outputs found

    PFL-LSTR: A privacy-preserving framework for driver intention inference based on in-vehicle and out-vehicle information

    An intelligent vehicle's ability to anticipate the movement intentions of other drivers can reduce collisions. Typically, when the human driver of another vehicle (referred to as the target vehicle) engages in specific behaviors, such as checking the rearview mirror prior to a lane change, a valuable clue about the target driver's intentions is provided. Furthermore, the target driver's intentions can be influenced and shaped by the driving environment. For example, if the target vehicle is too close to a leading vehicle, its driver may renege on the lane-change decision. Similarly, a following vehicle in the target lane that is too close to the target vehicle could lead its driver to reverse the decision to change lanes. Knowledge of such intentions of all vehicles in a traffic stream can help enhance traffic safety. Unfortunately, such information is often captured in the form of images/videos, and utilization of personally identifiable data to train a general model could violate user privacy. Federated Learning (FL) is a promising tool to resolve this conundrum: FL efficiently trains models without exposing the underlying data. This paper introduces a Personalized Federated Learning (PFL) model embedded within a long short-term transformer (LSTR) framework. The framework predicts drivers' intentions by leveraging in-vehicle videos (of driver movement, gestures, and expressions) and out-of-vehicle videos (of the vehicle's surroundings: frontal/rear areas). The proposed PFL-LSTR framework is trained and tested on real-world driving data collected from human drivers on Interstate 65 in Indiana. The results suggest that PFL-LSTR exhibits high adaptability and high precision, and that out-of-vehicle information (particularly, the driver's rear-mirror viewing actions) is important because it helps reduce false positives and thereby enhances the precision of driver intention inference.
    Comment: Submitted for presentation only at the 2024 Annual Meeting of the Transportation Research Board.
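    The abstract does not give implementation details of PFL-LSTR, so the following is only a minimal sketch of the personalized federated learning pattern it describes: each client keeps a personalized head locally while a shared backbone is averaged across clients, so raw video data never leaves the vehicle. The module names, layer sizes, toy GRU backbone, and FedAvg-style aggregation are assumptions for illustration, not the paper's actual model.

```python
# Minimal sketch of a personalized federated learning (PFL) round, assuming a
# FedAvg-style scheme: clients share a common backbone that is averaged on the
# server, while each client keeps a personalized classification head that is
# never transmitted. The actual PFL-LSTR fuses in-vehicle and out-of-vehicle
# video features with a long short-term transformer, not reproduced here.
import copy
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self, feat_dim=128, n_intentions=3):
        super().__init__()
        # Shared part: aggregated across clients each round.
        self.backbone = nn.GRU(input_size=64, hidden_size=feat_dim, batch_first=True)
        # Personalized part: stays on the vehicle, adapts to the individual driver.
        self.head = nn.Linear(feat_dim, n_intentions)

    def forward(self, x):                  # x: (batch, time, 64) video features
        _, h = self.backbone(x)            # h: (1, batch, feat_dim)
        return self.head(h.squeeze(0))     # intention logits

def local_update(model, loader, epochs=1, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:                # raw data never leaves the client
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def federated_round(global_backbone, clients, loaders):
    """One communication round: broadcast backbone, train locally, average backbones."""
    states = []
    for model, loader in zip(clients, loaders):
        model.backbone.load_state_dict(global_backbone)   # receive shared parameters
        local_update(model, loader)
        states.append(copy.deepcopy(model.backbone.state_dict()))
    # FedAvg: element-wise mean of the shared parameters only; heads stay personalized.
    return {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}
```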

    A Deep Learning Framework for Generation and Analysis of Driving Scenario Trajectories

    We propose a unified deep learning framework for the generation and analysis of driving scenario trajectories, and validate its effectiveness in a principled way. To model and generate scenarios of trajectories with different lengths, we develop two approaches. First, we adapt the Recurrent Conditional Generative Adversarial Networks (RC-GAN) by conditioning on the length of the trajectories. This provides us the flexibility to generate variable-length driving trajectories, a desirable feature for scenario test case generation in the verification of autonomous driving. Second, we develop an architecture based on Recurrent Autoencoder with GANs to obviate the variable length issue, wherein we train a GAN to learn/generate the latent representations of the original trajectories. In this approach, we train an integrated feed-forward neural network to estimate the length of the trajectories to be able to bring them back from the latent space representation. In addition to trajectory generation, we employ the trained autoencoder as a feature extractor, for the purpose of clustering and anomaly detection, to obtain further insights into the collected scenario dataset. We experimentally investigate the performance of the proposed framework on real-world scenario trajectories obtained from in-field data collection.
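    As a rough illustration of the second approach described above, the sketch below encodes variable-length trajectories into fixed-size latent codes with a recurrent autoencoder, defines a GAN that operates over those codes, and uses a small feed-forward network to estimate how many points to decode. The training loops are omitted, and all layer sizes and names are illustrative assumptions rather than the paper's actual architecture.

```python
# Sketch of the latent-space approach, under the assumptions stated above: a
# recurrent autoencoder maps variable-length trajectories to fixed-size codes,
# a GAN generates new codes, and a feed-forward estimator recovers the number
# of points to decode. GAN/autoencoder training is not shown.
import torch
import torch.nn as nn

LATENT = 32

class TrajectoryAutoencoder(nn.Module):
    def __init__(self, point_dim=2, hidden=64):
        super().__init__()
        self.hidden = hidden
        self.encoder = nn.GRU(point_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, LATENT)
        self.decoder_cell = nn.GRUCell(LATENT, hidden)
        self.to_point = nn.Linear(hidden, point_dim)

    def encode(self, traj):                        # traj: (batch, T, 2)
        _, h = self.encoder(traj)
        return self.to_latent(h.squeeze(0))        # (batch, LATENT)

    def decode(self, z, length):
        h = torch.zeros(z.size(0), self.hidden, device=z.device)
        points = []
        for _ in range(length):                    # unroll for the estimated length
            h = self.decoder_cell(z, h)
            points.append(self.to_point(h))
        return torch.stack(points, dim=1)          # (batch, length, 2)

# Feed-forward estimator of how many points a latent code should decode into.
length_estimator = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 1))

# Latent-space GAN: the generator produces codes; the discriminator separates
# them from codes produced by the encoder on real trajectories.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, LATENT))
discriminator = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 1))

def sample_trajectories(autoencoder, n=8):
    """Generate latent codes, estimate a length, and decode trajectories."""
    z = generator(torch.randn(n, 16))
    length = int(length_estimator(z).mean().round().clamp(min=1).item())
    return autoencoder.decode(z, length)
```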

    Robust and Efficient Activity Recognition from Videos

    With technological advancement in embedded system design, powerful cameras have been embedded within smartphones, and wireless cameras can be easily deployed at street corners, traffic lights, big stadiums, train stations, etc. Besides, the growth of online media, surveillance, and mobile cameras has resulted in an explosion of videos being uploaded to social media sites such as Facebook and YouTube. The availability of such a vast volume of videos has attracted the computer vision community to conduct much research on human activity recognition, since people are arguably the most interesting subjects of such videos. Automatic human activity recognition allows engineers and computer scientists to design smarter surveillance systems, semantically aware video indexes, and more natural human-computer interfaces. Despite the explosion of video data, the ability to automatically recognize and understand human activities is still rather limited. This is primarily due to multiple challenges inherent to the recognition task, namely the large variability in human execution styles, the complexity of the visual stimuli in terms of camera motion, background clutter, viewpoint changes, etc., and the number of activities that can be recognized. In addition, the ability to predict future actions of objects based on past observed video frames is very useful. Therefore, in this thesis, we explore four designs to solve the problems discussed above: (1) A semantics-based deep learning model, SBGAR, is proposed to perform group activity recognition. This model achieves higher accuracy and efficiency than existing group activity recognition methods. (2) Despite its high accuracy, SBGAR has some limitations, namely (i) it requires a large dataset with caption information, and (ii) its activity recognition model is independent of the caption generation model, and hence SBGAR may not perform well in some cases. To remove such limitations, we design ReHAR, a robust and efficient human activity recognition scheme. ReHAR can be used to recognize both single-person activities and group activities. (3) In many application scenarios, merely knowing what the moving agents are doing is not sufficient; predictions of the future trajectories of moving agents are also required. Thus, we propose GRIP, a graph-based interaction-aware motion intent prediction scheme. The scheme uses a graph to represent the relationships between objects, e.g., human joints or traffic agents, and predicts the motion intents of all observed objects simultaneously. (4) Action recognition and trajectory prediction schemes are typically deployed on resource-constrained devices, so any technique that can accelerate their computation is important. Hence, we propose a novel deep learning model decomposition method called DAC that is capable of factorizing an ordinary convolutional layer into two layers with far fewer parameters. DAC computes the weights of the newly generated layers directly from the weights of the original convolutional layer, so no training (or fine-tuning) and no data are needed.
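    The DAC idea at the end of this abstract (computing the weights of two lighter layers directly from a trained convolutional layer, with no retraining) can be illustrated with a generic low-rank factorization. The exact DAC factorization is not specified here, so the sketch below uses a truncated SVD that replaces a k x k convolution with a k x k convolution producing `rank` channels followed by a 1 x 1 convolution; the rank and layer sizes are assumptions.

```python
# Generic low-rank substitute for DAC (illustrative, not the thesis's exact
# factorization): approximate a k x k convolution with a k x k convolution
# producing `rank` channels followed by a 1 x 1 convolution, with both weight
# tensors computed directly from a truncated SVD of the original weights.
import torch
import torch.nn as nn

def decompose_conv(conv: nn.Conv2d, rank: int) -> nn.Sequential:
    """Build a two-layer, rank-limited approximation of `conv` with no retraining."""
    c_out, c_in, kh, kw = conv.weight.shape
    w = conv.weight.detach().reshape(c_out, c_in * kh * kw)
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]

    # First layer: `rank` spatial filters taken from the right singular vectors.
    first = nn.Conv2d(c_in, rank, (kh, kw),
                      stride=conv.stride, padding=conv.padding, bias=False)
    first.weight.data = vh.reshape(rank, c_in, kh, kw)

    # Second layer: 1x1 convolution mixing the `rank` responses back to c_out channels.
    second = nn.Conv2d(rank, c_out, 1, bias=conv.bias is not None)
    second.weight.data = (u * s).reshape(c_out, rank, 1, 1)
    if conv.bias is not None:
        second.bias.data = conv.bias.detach().clone()
    return nn.Sequential(first, second)

# Illustrative check of approximation error and parameter savings.
conv = nn.Conv2d(64, 128, 3, padding=1)
approx = decompose_conv(conv, rank=16)
x = torch.randn(1, 64, 32, 32)
err = (conv(x) - approx(x)).abs().mean().item()
orig = sum(p.numel() for p in conv.parameters())
new = sum(p.numel() for p in approx.parameters())
print(f"mean abs error {err:.4f}, parameters {orig} -> {new}")
```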

    Enhancing Vehicle Sensing for Traffic Safety and Mobility Performance Improvements Using Roadside LiDAR Sensor Data

    Recent technological advancements in computer vision algorithms and data acquisition devices have greatly facilitated research activities towards enhancing traffic sensing for traffic safety performance improvements. Significant research efforts have been devoted to developing and deploying more effective technologies to detect, sense, and monitor traffic dynamics and rapidly identify crashes in Rural, Isolated, Tribal, or Indigenous (RITI) communities. As a new modality for 3D scene perception, Light Detection and Ranging (LiDAR) data have gained increasing popularity for traffic perception due to their advantages over conventional RGB data, such as being insensitive to varying lighting conditions. In the past decade, researchers and professionals have extensively adopted LiDAR data to promote traffic perception for transportation research and applications. Nevertheless, a series of challenges and research gaps are yet to be fully addressed in LiDAR-based transportation research, such as the disturbance caused by adverse weather conditions, the lack of roadside LiDAR data for deep learning analysis, and roadside LiDAR-based vehicle trajectory prediction. In this technical report, we focus on addressing these research gaps and propose a series of methodologies to optimize deep learning-based feature recognition for roadside LiDAR-based traffic object recognition tasks. The proposed methodologies will help transportation agencies monitor traffic flow, identify crashes, and develop timely countermeasures with improved accuracy, efficiency, and robustness, and thus enhance traffic safety in RITI communities in the States of Alaska, Washington, Idaho, and Hawaii.
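    As background for the kind of roadside LiDAR processing the report addresses, the sketch below shows a conventional (non-deep-learning) baseline: subtract a static background from a LiDAR frame and cluster the remaining returns into candidate vehicles with DBSCAN. The thresholds, synthetic data, and DBSCAN parameters are illustrative assumptions; the report's deep learning models are not reproduced here.

```python
# Minimal classical baseline for roadside LiDAR vehicle detection, assuming
# frames of (x, y, z) points in metres: remove returns that match a static
# background scan, then cluster the remaining points with DBSCAN so that each
# cluster is a candidate vehicle. All parameters are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def foreground_points(frame, background, dist_thresh=0.3):
    """Keep points farther than `dist_thresh` metres from every background point."""
    # Brute-force nearest-background distance; a KD-tree would be used at scale.
    d = np.linalg.norm(frame[:, None, :] - background[None, :, :], axis=2).min(axis=1)
    return frame[d > dist_thresh]

def cluster_vehicles(points, eps=1.0, min_points=10):
    """Group foreground points into clusters; each cluster is a candidate vehicle."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    return [points[labels == k] for k in set(labels) if k != -1]   # -1 marks noise

# Illustrative usage on synthetic points.
rng = np.random.default_rng(0)
background = rng.uniform(-50, 50, size=(500, 3))                    # static scene
frame = np.vstack([
    background + rng.normal(0, 0.05, background.shape),             # background returns
    rng.normal([10.0, 3.5, 1.0], 0.8, size=(200, 3)),               # a "vehicle" blob
])
clusters = cluster_vehicles(foreground_points(frame, background))
print(f"{len(clusters)} candidate vehicle(s) detected")
```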