A high speed Tri-Vision system for automotive applications
Purpose: Cameras are an excellent means of non-invasively monitoring the interior and exterior of vehicles. In particular, high-speed stereovision and multivision systems are important for transport applications such as driver eye tracking and collision avoidance. This paper addresses the synchronisation problem that arises when multivision camera systems are used to capture the high-speed motion common in such applications.
Methods: An experimental, high-speed tri-vision camera system intended for real-time driver eye-blink and saccade measurement was designed, developed, implemented and tested using prototype, ultra-high-dynamic-range, automotive-grade image sensors specifically developed by E2V (formerly Atmel) Grenoble SA as part of the European FP6 project SENSATION (advanced sensor development for attention, stress, vigilance and sleep/wakefulness monitoring).
Results: The developed system sustains frame rates of 59.8 Hz at the full stereovision resolution of 1280 × 480, rising to 750 Hz when a 10 kpixel Region of Interest (ROI) is used, with a maximum global shutter speed of 1/48 000 s and a shutter efficiency of 99.7%. The data can be reliably transmitted uncompressed over 5 metres of standard copper Camera-Link® cable. The synchronisation error between the left and right stereo images is less than 100 ps, which has been verified both electrically and optically. Synchronisation is established automatically at boot-up and maintained during resolution changes. A third camera in the set can be configured independently. The dynamic range of the 10-bit sensors exceeds 123 dB, with a spectral sensitivity extending well into the infra-red range.
Conclusion: The system was subjected to a comprehensive testing protocol, which confirmed that the salient requirements of the driver-monitoring application are adequately met and, in some respects, exceeded. The synchronisation technique presented may also benefit several other automotive stereovision applications, including near- and far-field obstacle detection and collision avoidance, road condition monitoring, and others. Partially funded by the EU FP6 through the IST-507231 SENSATION project.
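The headline figures above can be sanity-checked with simple arithmetic: the uncompressed data rate at full stereovision resolution follows directly from the resolution, bit depth and frame rate. The sketch below is purely illustrative back-of-envelope arithmetic using the numbers quoted in the abstract; it is not code from the paper.

```python
# Back-of-envelope data-rate check (figures taken from the abstract above;
# the calculation itself is an illustrative assumption, not the authors' code).

BITS_PER_PIXEL = 10          # 10-bit sensor output
WIDTH, HEIGHT = 1280, 480    # full stereovision resolution
FRAME_RATE = 59.8            # Hz, sustained full-resolution rate

# Uncompressed payload at full resolution
full_rate = WIDTH * HEIGHT * BITS_PER_PIXEL * FRAME_RATE
print(f"Full-resolution data rate: {full_rate / 1e6:.1f} Mbit/s")

# A ~10 kpixel ROI at 750 Hz stays modest by comparison
roi_rate = 10_000 * BITS_PER_PIXEL * 750
print(f"ROI data rate: {roi_rate / 1e6:.1f} Mbit/s")
```

Both rates sit comfortably within what a standard Camera-Link connection carries, consistent with the claim of reliable uncompressed transmission over 5 metres of cable.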
The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems
Scenario-based testing for the safety validation of highly automated vehicles
is a promising approach that is being examined in research and industry. This
approach heavily relies on data from real-world scenarios to derive the
necessary scenario information for testing. Measurement data should be
collected at a reasonable effort, contain naturalistic behavior of road users
and include all data relevant for a description of the identified scenarios in
sufficient quality. However, the current measurement methods fail to meet at
least one of the requirements. Thus, we propose a novel method to measure data
from an aerial perspective for scenario-based validation fulfilling the
mentioned requirements. Furthermore, we provide a large-scale naturalistic
vehicle trajectory dataset from German highways called highD. We evaluate the
data in terms of quantity, variety and contained scenarios. Our dataset
consists of 16.5 hours of measurements from six locations with 110 000
vehicles, a total driven distance of 45 000 km and 5600 recorded complete lane
changes. The highD dataset is available online at: http://www.highD-dataset.com
Comment: IEEE International Conference on Intelligent Transportation Systems (ITSC) 201
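Statistics such as the 5600 complete lane changes quoted above are typically derived by scanning each vehicle's per-frame lane assignment for transitions. The sketch below shows that idea on a synthetic track; the function name and the assumption that lane assignments are available as an integer sequence per vehicle are illustrative, not the dataset's actual schema or tooling.

```python
# Hypothetical sketch: counting lane changes from per-frame lane assignments,
# as one might do with per-vehicle tracks in a trajectory dataset like highD.
# The representation (a plain list of lane ids per vehicle) is an assumption.

def count_lane_changes(lane_ids):
    """Count transitions between distinct lane assignments along one track."""
    changes = 0
    for prev, curr in zip(lane_ids, lane_ids[1:]):
        if curr != prev:
            changes += 1
    return changes

# Synthetic track: a vehicle moves from lane 2 to lane 3 and back
track = [2, 2, 2, 3, 3, 3, 2, 2]
print(count_lane_changes(track))  # → 2
```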
VANET Applications: Hot Use Cases
Car manufacturers currently face the challenges of making roads safe, achieving
free-flowing traffic with little congestion, and reducing pollution through more
efficient fuel use. To reach these goals, many improvements are made in-car, but
a growing number of approaches rely on connected cars with communication
capabilities between cars, with infrastructure, or with IoT devices. Monitoring
and coordinating vehicles then makes it possible to compute intelligent modes of
transportation. Connected cars have introduced a new way of thinking about the
car: not merely a means for a driver to go from A to B, but a smart car, an
extension of the user much like the smartphone today. In this report, we
introduce concepts and specific vocabulary to classify current innovations and
ideas on the emerging topic of the smart car. We present a graphical
categorisation showing this evolution as a function of societal change.
Different perspectives are adopted: a vehicle-centric view, a vehicle-network
view, and a user-centric view, each described by simple and complex use cases
and illustrated by a list of emerging and current projects from academia and
industry. We identified a gap in innovation between users and their cars:
paradoxically, even though the two constantly interact, they are separated by
different application uses. The future challenge is to weave the social concerns
of the user into intelligent and efficient driving.
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles. In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolution network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of controller and explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment, strong- and weak-alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data. 
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, in which we train an end-to-end vehicle controller that accepts advice. The controller adapts the way it attends to the scene (visual attention) and its control outputs (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
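The core mechanism in the visual-explanation pipeline described above is spatial attention: score every location of a convolutional feature map, normalise the scores into a distribution, and use the weights both to pool features for the controller and as a saliency map to display. The sketch below illustrates that idea with NumPy; the shapes and the linear scoring layer are assumptions for the example, not the dissertation's architecture.

```python
import numpy as np

# Minimal sketch of spatial ("visual") attention over a conv feature map.
# Shapes and the scoring vector are illustrative assumptions.

rng = np.random.default_rng(0)
H, W, C = 6, 8, 16                      # feature-map height, width, channels
features = rng.standard_normal((H, W, C))
w_score = rng.standard_normal(C)        # learned scoring vector (assumed)

scores = features @ w_score             # (H, W) relevance scores
attn = np.exp(scores - scores.max())
attn /= attn.sum()                      # softmax over all H*W locations

# Attention-weighted pooling: the controller sees this context vector,
# while `attn` itself can be rendered as the highlighted-region overlay.
context = (features * attn[..., None]).sum(axis=(0, 1))

print(attn.shape, context.shape)        # (6, 8) (16,)
```

The causal-filtering step in Chapter 3 would then perturb high-weight regions and keep only those whose removal actually changes the steering output, separating true influences from spurious ones.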
Development Of Algorithms For Vehicle Classification And Speed Estimation From Dynamic Scenes By On-Board Camera Using Image Processing Techniques
Vehicle assistance systems help drivers and passengers by promoting better and safer driving. Regarding dash cameras, most vehicle owners install one for personal safety, to record the route they have travelled. The many dash-camera models widely available on the market, however, lack the intelligence to process the information that the camera system itself can provide. Moreover, most studies in Intelligent Transport Systems (ITS) rely on static cameras such as CCTV, which motivates the development of a vehicle assistance system based on dynamic camera scenes. The main purpose of this research was to develop a vehicle detection, vehicle classification, and vehicle speed estimation system for dynamic scenes using image processing techniques alone. The scope of this research covered Malaysian highways in Skudai, Johor; Ayer Keroh, Melaka; and Kajang, Selangor. A video database of these highway areas was recorded by an on-board camera unit placed on the front dashboard of the host vehicle. An image dataset was collected with positive image sets containing four vehicle classes: car, lorry, bus, and motorcycle. Vehicle detection used Haar-like features with a cascade classifier, while vehicle classification was based on the aspect ratio of the detected vehicle. The ratio value is an advantage for the classification process, since the prepared image dataset was based on the dimensions of each vehicle class and the ratio is a distinguishing property of each class. Speed estimation began with estimating the host vehicle's speed through lane detection, since the road lane markings are the most consistent moving objects in the video region.
Host-vehicle distance measurement used broken-lane detection; for the scale-factor calculation, the width of the highway lanes was measured in the image and calibrated against the real lane width in metres specified by Jabatan Kerja Raya (1997). The speeds of detected vehicles were measured by tracking their centroids. Accuracy analysis of the vehicle detection system yielded a true positive rate of 0.93 over the 300 vehicles present in the video data. Further analysis of vehicle classification showed true positive rates of 0.98 for the car class, 0.89 for the lorry class, 0.89 for the bus class, and 0.75 for the motorcycle class. Host-vehicle speed estimation achieved an average speed error of 6.42% across 10 different videos. For detected vehicles, speed estimates were derived from the host vehicle's speed by observing each vehicle's position and motion relative to the host. Overall, the development indicated that image processing can visualise the surrounding area for drivers and passengers at near-human visual quality, a contribution to human-machine interaction that can be beneficial.
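The two post-detection steps described above, aspect-ratio classification and centroid-based speed estimation, can be sketched compactly. The ratio thresholds and the metres-per-pixel scale factor below are invented for illustration, not the thesis's calibrated values; the bounding boxes would in practice come from a Haar cascade detector.

```python
# Illustrative sketch of the post-detection pipeline described above:
# (1) classify a detected bounding box by its width/height ratio, and
# (2) estimate speed from centroid displacement scaled by lane-width
#     calibration. Thresholds and scale factor are assumed values.

def classify_by_ratio(w_px, h_px):
    """Map a bounding-box aspect ratio to a coarse vehicle class."""
    ratio = w_px / h_px
    if ratio < 0.8:
        return "motorcycle"   # tall, narrow silhouette
    elif ratio < 1.3:
        return "car"
    elif ratio < 2.0:
        return "lorry"
    return "bus"              # long side profile

def speed_kmh(c0, c1, metres_per_px, fps):
    """Speed from centroid displacement between consecutive frames."""
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    dist_m = (dx ** 2 + dy ** 2) ** 0.5 * metres_per_px
    return dist_m * fps * 3.6  # m/s → km/h

print(classify_by_ratio(120, 100))                          # → car
print(round(speed_kmh((100, 50), (110, 50), 0.05, 30), 1))  # → 54.0
```

In the thesis's setup, `metres_per_px` is obtained by measuring the lane width in the image and calibrating it against the standard lane width in metres, and detected-vehicle speeds are interpreted relative to the host vehicle's own estimated speed.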