
    Smartphone Based Detection of Vehicle Encounters

    Riding a bicycle in shared traffic alongside motor vehicles causes discomfort or even stress for many cyclists. Avoiding busy or crowded roads is only possible with good local knowledge, since for most roads no data on the frequency of encounters with motor vehicles is available. A data set that combines smartphone sensor data with known vehicle encounters can become the foundation for a smartphone-based moving-vehicle detector: readings from the magnetometer and barometer, sensors present in virtually every smartphone, can be exploited as indicators of passing vehicles. In this paper, a novel approach to detecting vehicle encounters in smartphone sensor data is presented. For this purpose, a modular mobile sensor platform is first constructed and set up to collect smartphone, camera and ultrasonic sensor data in real traffic scenarios. The platform is designed to be used with various sensor configurations so that it can serve a broader set of use cases in the future. In the presented use case, the platform is configured to create a reference data set of vehicle encounters consisting of location, direction, distance, speed and further metadata, and a methodology is presented to process the collected camera images and ultrasonic distance data. In addition, two smartphones are used to collect raw data from their magnetometer and barometric sensors. Based on the reference data set and the smartphones' data, a classifier for the detection of vehicle encounters is then trained to operate on smartphone sensor data alone. Experiments on real data show that a Random Forest classifier can be successfully applied to the recorded smartphone sensor data: the presented approach detects overtaking vehicle encounters with an F1-score of 71.0 %, which is sufficient to rank different cycling routes by their 'stress factor'.
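    To make the classification step concrete, the sketch below shows how a Random Forest of the kind described might be trained on windowed magnetometer and barometer statistics. The feature choices, window length and the synthetic placeholder data are assumptions for illustration, not the authors' pipeline.

```python
# Sketch only: Random Forest on windowed magnetometer/barometer features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def window_features(mag_xyz, baro, win=100, step=50):
    """Summarize raw sensor streams as per-window statistics (assumed features)."""
    feats = []
    for start in range(0, len(baro) - win, step):
        m = mag_xyz[start:start + win]                 # (win, 3) magnetometer samples
        b = baro[start:start + win]                    # (win,) barometric pressure
        feats.append([
            m.std(axis=0).mean(),                              # magnetic-field fluctuation
            np.abs(np.diff(np.linalg.norm(m, axis=1))).max(),  # largest field jump
            b.std(),                                           # pressure fluctuation
            b.max() - b.min(),                                 # pressure swing in the window
        ])
    return np.asarray(feats)

# Placeholder arrays standing in for the recorded streams and reference labels.
rng = np.random.default_rng(0)
mag_xyz = rng.normal(size=(20_000, 3))
baro = rng.normal(size=20_000)
X = window_features(mag_xyz, baro)
y = rng.integers(0, 2, size=len(X))        # 1 = overtaking encounter in the window

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("F1-score:", f1_score(y_te, clf.predict(X_te)))
```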

    Cognitive Modeling Approach for Dealing with Challenges in Cyber-Physical Systems

    In this paper, inspired by our previous work, we propose an architecture for the design and realization of cyber-physical systems (CPS) that considers the spatio-temporal context of events, promotes anomaly detection, facilitates efficient human-computer interaction and is capable of discovering novel human and/or machine knowledge. We view deep neural networks as smart sensors; their sensory data from the environment represent the semantic and episodic input to a consistency-seeking component of the cyber-space. Starting from a knowledge base infused with a deterministic world assumption, this module can detect anomalies and correct estimation errors by combining the outputs of multiple sensors. We also exploit an episodic description of ongoing situations by integrating temporal segmentation with kernel- and low-dimensional embedding-based methods. We demonstrate parts of the architecture through illustrative examples on our self-collected driving dataset. Our framework can be related to cognitive-science foundations and may facilitate the reliable functioning of CPS by integrating traditional AI and deep learning methods with deterministic models and reasoning tools. We expect that such knowledge-base- and cognition-driven approaches to combining deep neural networks will be adopted in complex CPS, as they offer a scalable and beneficial match between human knowledge and rapidly advancing deep learning technologies.
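    As an illustration of the consistency-seeking idea (flagging sensor outputs that disagree with the consensus of the others), the following sketch uses a robust median/MAD check; the fusion rule and threshold are assumptions, not the paper's method.

```python
# Illustrative sketch: several "smart sensors" (e.g. deep-network detectors)
# report estimates of the same quantity; readings far from the robust consensus
# are flagged as anomalies.
import numpy as np

def consistency_check(estimates, n_sigmas=3.0):
    """Return a fused value and a boolean anomaly mask over sensor estimates."""
    estimates = np.asarray(estimates, dtype=float)
    fused = np.median(estimates)                       # robust consensus
    mad = np.median(np.abs(estimates - fused)) + 1e-9  # robust spread (MAD)
    anomalous = np.abs(estimates - fused) > n_sigmas * 1.4826 * mad
    return fused, anomalous

# Example: three detectors agree on the distance to a lead vehicle, one does not.
print(consistency_check([12.1, 11.8, 12.3, 25.0]))
```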

    Sensor fusion methodology for vehicle detection

    A novel sensor fusion methodology is presented that provides intelligent vehicles with augmented environment information and knowledge, enabled by a vision-based system, a laser sensor and a global positioning system. The presented approach contributes to safer roads through data fusion techniques, especially on single-lane carriageways, where casualties are higher than on other road classes, and focuses on the interplay between vehicle drivers and intelligent vehicles. The system builds on the reliability of the laser scanner for obstacle detection, camera-based identification techniques, and advanced tracking and data association algorithms, namely the Unscented Kalman Filter and Joint Probabilistic Data Association. The achieved results support the implementation of the sensor fusion methodology in forthcoming Intelligent Transportation Systems.
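    The tracking core of such a pipeline can be sketched with an off-the-shelf Unscented Kalman Filter. The constant-velocity model, noise values and use of the filterpy library below are illustrative assumptions, and the JPDA step that assigns detections to tracks is omitted.

```python
# Minimal sketch, not the paper's implementation: a constant-velocity UKF track
# fed with fused laser/vision position measurements.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # assumed sensor period [s]

def fx(x, dt):
    """Constant-velocity motion model: state = [px, vx, py, vy]."""
    F = np.array([[1, dt, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, dt],
                  [0, 0,  0, 1]], dtype=float)
    return F @ x

def hx(x):
    """Measurement model: the fused detection provides position only."""
    return np.array([x[0], x[2]])

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 5.0, 0.0, 0.0])   # initial guess: obstacle ahead, 5 m/s
ukf.P *= 10.0
ukf.R = np.diag([0.3 ** 2, 0.3 ** 2])    # assumed laser/vision position noise
ukf.Q = np.eye(4) * 0.05                 # process noise (tuning assumption)

for z in [np.array([0.5, 0.0]), np.array([1.0, 0.1]), np.array([1.6, 0.1])]:
    ukf.predict()
    ukf.update(z)
    print("position:", ukf.x[[0, 2]], "velocity:", ukf.x[[1, 3]])
```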

    Data Fusion for Overtaking Vehicle Detection Based on Radar and Optical Flow

    Trustworthiness is a key point when dealing with vehicle safety applications. In this paper, an approach for a real application is presented that is able to fulfill the requirements of such demanding applications. Most commercial sensors available nowadays are designed to detect vehicles in front but lack the ability to detect overtaking vehicles. The work presented here combines the information provided by two sensors, a Stop&Go radar and a camera. Fusion is performed using the unprocessed information from the radar and computer vision based on optical flow. The basic capabilities of the commercial systems are thereby upgraded, improving the front-vehicle detection system by detecting overtaking vehicles with a high positive rate. This work was supported by the Spanish Government through the CICYT projects FEDORA (grant TRA2010-20225-C03-01) and D3System (TRA2011-29454-C03-02). The BRAiVE prototype has been developed in the framework of the Open intelligent systems for Future Autonomous Vehicles (OFAV) projects funded by the European Research Council (ERC) within an Advanced Investigation Grant.
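    A hedged sketch of the vision side of such a fusion scheme is given below: dense optical flow is computed in a lateral image region and combined with a raw radar flag. The region boundaries, thresholds, helper names and placeholder frames are assumptions, not the paper's implementation.

```python
# Sketch only: optical-flow cue in a lateral image margin, AND-combined with a
# raw radar detection flag to reduce false alarms.
import cv2
import numpy as np

def overtaking_flow_cue(prev_gray, curr_gray, flow_thresh=4.0):
    """Return True if strong flow appears in the left image margin."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    left = flow[:, : w // 4]          # margin where overtakers are assumed to appear
    fwd = left[..., 1]                # image-y flow component (assumed overtaking direction)
    return float(np.mean(np.abs(fwd))) > flow_thresh

def overtaking_alert(radar_detects_side_object, prev_gray, curr_gray):
    """Fuse the unprocessed radar flag with the vision cue."""
    return radar_detects_side_object and overtaking_flow_cue(prev_gray, curr_gray)

# Synthetic placeholder frames; real input would be consecutive grayscale images.
prev = np.zeros((240, 320), np.uint8)
curr = np.roll(prev, 5, axis=0)
print(overtaking_alert(True, prev, curr))
```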

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016).
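    The attractor/repeller steering idea can be illustrated with a simple heading-dynamics rule in the style of well-known steering-dynamics models; the gains and exponential falloff below are assumptions and are not the ViSTARS equations themselves.

```python
# Illustrative sketch: the goal direction attracts the heading, each obstacle
# direction repels it with a strength that decays with angular distance.
import numpy as np

def heading_rate(phi, goal_dir, obstacle_dirs, k_g=3.0, k_o=2.0, c=1.5):
    """d(phi)/dt: attraction toward the goal, repulsion from each obstacle."""
    dphi = -k_g * (phi - goal_dir)
    for psi in obstacle_dirs:
        dphi += k_o * (phi - psi) * np.exp(-c * abs(phi - psi))
    return dphi

# Euler-integrate the heading, starting while pointing straight at an obstacle.
phi, goal, obstacles = 0.0, 0.4, [0.0]
for _ in range(200):
    phi += 0.01 * heading_rate(phi, goal, obstacles)
print(round(phi, 3))  # settles near (slightly beyond) the goal bearing, deflected off the obstacle
```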

    Investigation of low-cost infrared sensing for intelligent deployment of occupant restraints

    In automotive transport, airbags and seatbelts are effective at restraining the driver and passenger in the event of a crash, and statistics show a dramatic reduction in the number of casualties from road crashes. However, statistics also show that a small number of people have been injured or even killed by striking the airbag, and that the elderly and small children are especially at risk of airbag-related injury. This is because in-car restraint systems were designed around the average male and a nominal crash speed of 50 km/h, so people outside these norms are at risk. One of the future safety goals of car manufacturers is therefore to deploy sensors that gather more information about the driver or passenger in order to tailor the safety systems specifically to that person, and this is the goal of this project. This thesis describes a novel approach to occupant detection, position measurement and monitoring using a low-cost thermal-imaging-based system, a departure from traditional video-camera-based systems, at an affordable price. Experiments were carried out with members of the public using a specially designed test rig and a car driving simulator. The results show that the thermal imager can detect a human in a car cabin mock-up and provide crucial real-time position data, which could be used to support intelligent restraint deployment. Other valuable information can also be detected, such as whether the driver is smoking, drinking a hot or cold drink, or using a mobile phone, which can help to infer the level of driver attentiveness or engagement.
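    A minimal sketch of how an occupant position might be read from a thermal frame is given below: pixels in a skin-temperature band are segmented and the largest warm blob's centroid is taken as a rough occupant location. The temperature band, frame size and use of OpenCV are assumptions, not the thesis system.

```python
# Sketch only: threshold a low-resolution thermal frame around skin temperature
# and return the centroid of the largest warm blob as a coarse occupant position.
import cv2
import numpy as np

def occupant_position(thermal_deg_c, t_lo=28.0, t_hi=40.0):
    """Return the (x, y) centroid of the largest skin-temperature blob, or None."""
    mask = ((thermal_deg_c >= t_lo) & (thermal_deg_c <= t_hi)).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:                       # label 0 is the background
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return tuple(centroids[largest])

# Synthetic 32x32 frame: cold cabin with one warm region standing in for a face.
frame = np.full((32, 32), 22.0)
frame[10:18, 14:22] = 34.0
print(occupant_position(frame))
```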

    Target Trailing With Safe Navigation With Colregs for Maritime Autonomous Surface Vehicles

    Systems and methods for operating autonomous waterborne vessels in a safe manner. The systems include hardware for identifying the locations and motions of other vessels, as well as the locations of stationary objects that represent navigation hazards. By applying to these data a computational method that uses a Velocity Obstacles-based maritime navigation algorithm for avoiding hazards and obeying COLREGS, the autonomous vessel computes a safe and effective path to follow in order to accomplish a desired navigational end result, while operating so as to avoid hazards and maintain compliance with the standard navigational procedures defined by international agreement. The systems and methods have been successfully demonstrated on water with radar and stereo cameras as the perception sensors, and integrated with a higher-level planner for trailing a maneuvering target.
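    The admissibility test at the heart of a Velocity Obstacles planner can be sketched as a closest-point-of-approach check on candidate velocities; the safety radius, horizon and function below are illustrative assumptions, and the COLREGS preferences (e.g. favoring starboard maneuvers) would act as an additional filter on the admissible velocity set.

```python
# Sketch only: reject a candidate own-ship velocity if, with both velocities
# held constant, the vessels come closer than a safety radius within the horizon.
import numpy as np

def violates_velocity_obstacle(p_own, v_candidate, p_obs, v_obs,
                               safety_radius=50.0, horizon=120.0):
    """True if the relative motion enters the safety disc within `horizon` seconds."""
    p_rel = np.asarray(p_obs, float) - np.asarray(p_own, float)
    v_rel = np.asarray(v_obs, float) - np.asarray(v_candidate, float)
    # Time of closest approach for constant-velocity relative motion, clamped
    # to the planning horizon.
    denom = float(v_rel @ v_rel)
    t_cpa = 0.0 if denom < 1e-9 else -float(p_rel @ v_rel) / denom
    t_cpa = min(max(t_cpa, 0.0), horizon)
    closest = p_rel + v_rel * t_cpa
    return float(np.linalg.norm(closest)) < safety_radius

# A crossing target 200 m east and 200 m north, heading west at 5 m/s:
# holding 5 m/s due north leads to a collision, so the candidate is rejected.
print(violates_velocity_obstacle([0, 0], [0, 5], [200, 200], [-5, 0]))
```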