Moving object detection for automobiles by the shared use of H.264/AVC motion vectors : innovation report.
Cost is one of the barriers to wider adoption of Advanced Driver Assistance Systems (ADAS) in China. The objective of this research project is to develop a low-cost ADAS through the shared use of motion vectors (MVs) from an H.264/AVC video encoder originally designed for video recording only. There have been few studies on using MVs from video encoders on a moving platform for moving object detection. The main contribution of this research is a novel algorithm that addresses the problems of moving object detection when MVs from an H.264/AVC encoder are used. The approach is suitable for mass-produced in-vehicle devices: reusing the encoder's MVs for moving object detection reduces the cost and complexity of the system, while the recording function remains available by default at no extra cost. The estimated cost of the proposed system is 50% lower than that of a system based on the optical flow approach.
To reduce the area of the region of interest and to meet the real-time computation requirement, a new block-based region growth algorithm is used for road region detection. To account for the small amplitude and limited precision of H.264/AVC MVs on relatively slow moving objects, the detection task separates the region of interest into relatively fast and relatively slow speed regions by examining the amplitude of the MVs, the position of the focus of expansion and the result of road region detection.
Relatively slow moving objects are detected and tracked using generic horizontal and vertical contours of rear-view vehicles. This method addresses the limited precision and erroneous motion vectors that H.264/AVC encoders produce for relatively slow moving objects and for regions near the focus of expansion.
Relatively fast moving objects are detected by a two-stage approach comprising a Hypothesis Generation (HG) stage and a Hypothesis Verification (HV) stage. This approach addresses the problem that H.264/AVC MVs are generated for coding efficiency rather than for minimising the motion error of objects. The HG stage reports a potential moving object by clustering the planar parallax residuals that satisfy the constraints set out in the algorithm. The HV stage verifies the existence of the moving object based on the temporal consistency of its displacement in successive frames.
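The HG/HV pipeline described above can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' implementation: the threshold values, the simple one-dimensional clustering of per-block residuals, and the jitter-based consistency test are all assumptions.

```python
# Hypothetical sketch of the two-stage HG/HV moving object detection.
# Thresholds and the 1-D clustering over block indices are illustrative
# assumptions, not the report's exact algorithm.

def hypothesis_generation(residuals, threshold=2.0, min_cluster=3):
    """Cluster block indices whose planar parallax residual exceeds a
    threshold; each sufficiently large cluster is a candidate object."""
    candidates, cluster = [], []
    for idx, r in enumerate(residuals):
        if r > threshold:
            cluster.append(idx)
        else:
            if len(cluster) >= min_cluster:
                candidates.append(cluster)
            cluster = []
    if len(cluster) >= min_cluster:
        candidates.append(cluster)
    return candidates

def hypothesis_verification(track, max_jitter=1.5):
    """Accept a candidate only if its frame-to-frame displacement is
    temporally consistent (low jitter) across successive frames."""
    steps = [b - a for a, b in zip(track, track[1:])]
    mean = sum(steps) / len(steps)
    return all(abs(s - mean) <= max_jitter for s in steps)
```

A candidate produced by `hypothesis_generation` would be tracked for a few frames and then confirmed or rejected by `hypothesis_verification`.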
The test results show a vehicle detection rate higher than 90%, which is on a par with methods proposed by other authors, and the computation cost is low enough to meet the real-time performance requirement.
An invention patent, one international journal paper and two international conference papers have been published or accepted, demonstrating the originality of the work in this project. A further international journal paper is in preparation.
Understanding and improving methods for exterior sound quality evaluation of hybrid and electric vehicles
Electric and Hybrid Electric Vehicles [(H)EVs] are harder for pedestrians to hear when moving at speeds below 20 kph. Laws require (H)EVs to emit additional exterior sounds to alert pedestrians to the vehicles' approach and prevent potential collisions. These sounds also influence pedestrians' impression of the vehicle brand. Current methods for evaluating (H)EV exterior sounds focus on pedestrians' safety but overlook the sounds' influence on the vehicle brand, and do not balance experimental control and correct context with external and ecological validity.
This research addresses the question: "How should (H)EV exterior sounds be evaluated?" It proposes an experimental methodology for evaluating (H)EV exterior sounds that assesses pedestrians' safety and the influence on the vehicle brand by measuring a listener's detection rate and sound quality evaluation of the (H)EV in a Virtual Environment (VE). This methodology was tested, improved and validated through three experimental studies.
Study 1 examined the fidelity of the VE setup used for the experiments. The VE was immersive, with a sufficient degree of involvement/control, naturalness, resolution, and interface quality. The study also explored a new interactive way of evaluating (H)EV sounds in which participants freely navigate the VE and interact with vehicles more naturally. This interactivity increased the experiments' ecological validity but reduced reliability and quadrupled the experiment duration compared with using a predefined scenario (non-interactive mode). Thus, a predefined scenario is preferred.
Study 2 tested the non-interactive mode of the proposed methodology. Manipulating the target vehicle's manoeuvre by varying factors, namely the vehicle's 'arrival time', 'approach direction' and 'distance of travel', across the experimental conditions increased ecological validity. This allowed participants to think, respond and pay attention similarly to a real-world pedestrian. These factors are neglected by existing methodologies, but were found to affect the participants' detection rate and impression of the vehicle brand. Participants sometimes detected the vehicle more than once owing to confusion with real-world ambient sounds; in the real world, pedestrians continuously detect a vehicle in the presence of non-vehicular ambient sounds. Therefore, recommendations to better represent real-world vehicle detection in listening experiments include an option to re-detect a vehicle and a subjective evaluation of the 'detectability' of the vehicle sounds.
The improved methodology adds 'detectability' and 'recognisability' of (H)EV sounds as measures and the (H)EV's arrival time as an independent variable. The external validity of VEs is a highly debated yet unresolved topic. Study 3 tested the external validity of the improved methodology. The methodology accurately predicted participants' real-world evaluations of the detectability of (H)EV sounds, the rank order of the recognisability of (H)EV sounds, and their impressions of the vehicle brand. The vehicle's arrival time affected participants' detection rate and was reaffirmed as a key element in methodologies for vehicle sound detection.
The final methodological guidelines can help transportation researchers, automotive engineers and legislators determine how pedestrians will respond to new (H)EV sounds.
Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Hyperspectral cameras provide unique spectral signatures that can consistently distinguish materials, which can be exploited to solve surveillance tasks. In this paper, we propose a novel real-time hyperspectral likelihood maps-aided tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving object tracking system generally consists of registration, object detection, and tracking modules. We focus on the target detection part and remove the need to build offline classifiers and tune a large number of hyperparameters; instead, we learn a generative target model in an online manner for hyperspectral channels ranging from visible to infrared wavelengths. The key idea is that our adaptive fusion method combines likelihood maps from multiple bands of hyperspectral imagery into a single, more distinctive representation, increasing the margin between the mean values of foreground and background pixels in the fused map. Experimental results show that the HLT not only outperforms all established fusion methods but is on par with the current state-of-the-art hyperspectral target tracking frameworks.
Comment: Accepted at the International Conference on Computer Vision and Pattern Recognition Workshops, 201
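The margin-driven fusion idea can be sketched as follows. This is a hypothetical illustration rather than the paper's HLT method: it weights each band's likelihood map by the foreground/background margin that band achieves, then forms a normalized weighted sum.

```python
import numpy as np

def fuse_likelihood_maps(maps, fg_mask):
    """Fuse per-band likelihood maps into one representation.

    Each band is weighted by the margin between the mean foreground and
    mean background likelihood it produces (clipped at zero), so more
    discriminative bands dominate the fused map. The weighting rule is
    an illustrative assumption, not the paper's adaptive scheme.
    """
    weights = []
    for m in maps:
        margin = m[fg_mask].mean() - m[~fg_mask].mean()
        weights.append(max(margin, 0.0))
    w = np.array(weights)
    if w.sum() == 0:
        w = np.ones_like(w)          # fall back to uniform weights
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, maps))
```

A band whose likelihood map does not separate foreground from background at all receives zero weight and drops out of the fused result.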
Traffic monitoring using image processing : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Information and Telecommunications Engineering at Massey University, Palmerston North, New Zealand
Traffic monitoring involves the collection of data describing the characteristics of vehicles and their movements. Such data may be used for automatic tolls, congestion and incident detection, law enforcement, and road capacity planning. With recent advances in Computer Vision technology, videos can be analysed automatically and relevant information extracted for particular applications. Automatic surveillance using video cameras with image processing techniques is becoming a powerful and useful technology for traffic monitoring. In this research project, a video image processing system for traffic monitoring, including vehicle tracking, counting, and classification, is developed with the potential for real-time application. A heuristic approach is applied in developing this system. The system is divided into several parts, and several different functional components have been built and tested on traffic video sequences. Evaluations show that the system is robust and can be developed towards real-time applications.
Deep Lidar CNN to Understand the Dynamics of Moving Vehicles
Perception technologies in Autonomous Driving are experiencing their golden age due to the advances in Deep Learning. Yet most of these systems rely on the semantically rich information of RGB images. Deep Learning solutions applied to the data of other sensors typically mounted on autonomous cars (e.g. lidars or radars) remain much less explored. In this paper we propose a novel solution to understand the dynamics of moving vehicles in the scene from lidar information alone. The main challenge of this problem stems from the need to disambiguate the proprio-motion of the 'observer' vehicle from that of the external 'observed' vehicles. For this purpose, we devise a CNN architecture which at testing time is fed with pairs of consecutive lidar scans. To properly learn the parameters of this network, during training we introduce a series of so-called pretext tasks which also leverage image data. These tasks include semantic information about 'vehicleness' and a novel lidar-flow feature that combines standard image-based optical flow with lidar scans. We obtain very promising results and show that including distilled image information only during training improves the inference results of the network at test time, even when image data is no longer used.
Comment: Presented at IEEE ICRA 2018. IEEE Copyrights: Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses. (V2 just corrected comments on arXiv submission)
Homography-based ground plane detection using a single on-board camera
This study presents a robust method for ground plane detection in vision-based systems with a non-stationary camera. The proposed method is based on reliable estimation of the homography between ground planes in successive images. This homography is computed using a feature matching approach which, in contrast to classical approaches to on-board motion estimation, does not require explicit ego-motion calculation. Instead, a novel homography calculation method based on a linear estimation framework is presented. This framework provides predictions of the ground plane transformation matrix that are dynamically updated with new measurements. The method is especially suited to challenging environments, in particular traffic scenarios, in which information is scarce and the homography computed from the images is often inaccurate or erroneous. The proposed estimation framework is able to remove erroneous measurements and to correct inaccurate ones, hence producing a reliable homography estimate at each instant. It is based on evaluating the difference between the predicted and observed transformations, measured by the spectral norm of the associated matrix of differences. Moreover, an example is provided of how the information extracted from ground plane estimation can be used for object detection and tracking. The method has been successfully demonstrated for the detection of moving vehicles in traffic environments.
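The spectral-norm consistency check described above can be sketched as follows; the tolerance value and the simple scale normalisation are illustrative assumptions, not the study's tuned framework.

```python
import numpy as np

def gate_homography(H_pred, H_obs, tol=0.1):
    """Accept the observed ground-plane homography only if it is close
    to the prediction, measured by the spectral norm (largest singular
    value) of the difference matrix; otherwise fall back on the
    prediction. The tolerance is an illustrative assumption.
    """
    H_pred = H_pred / H_pred[2, 2]       # fix the projective scale
    H_obs = H_obs / H_obs[2, 2]          # ambiguity before comparing
    diff = np.linalg.norm(H_pred - H_obs, ord=2)   # spectral norm
    return (H_obs, diff) if diff <= tol else (H_pred, diff)
```

A grossly erroneous measurement thus never corrupts the running estimate: the predicted transformation is kept until a consistent observation arrives.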
An Intelligent Monitoring System of Vehicles on Highway Traffic
Vehicle speed monitoring and management on highways is a critical problem in this modern age of growing technology and population. Poor management results in frequent traffic jams, traffic rule violations and fatal road accidents. Addressing this problem with traditional RADAR, LIDAR and LASER techniques is time-consuming, expensive and tedious. This paper presents an efficient framework for a simple, cost-efficient and intelligent vehicle speed monitoring system. The proposed method uses an HD (High Definition) camera mounted on the roadside, either on a pole or on a traffic signal, to record video frames. From these frames, a vehicle can be tracked using a radius growing method, and its speed can be calculated from its mask and its displacement in consecutive frames. The method uses pattern recognition, digital image processing and mathematical techniques for vehicle detection, tracking and speed calculation. The validity of the proposed model is demonstrated by testing it on different highways.
Comment: 5 pages
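The displacement-based speed calculation can be sketched as follows. Reducing the vehicle mask to its centroid and assuming a known calibration factor (metres per pixel) are simplifications for illustration; the paper works with the full mask.

```python
def vehicle_speed_kmh(centroid_prev, centroid_curr, metres_per_pixel, fps):
    """Estimate vehicle speed from the displacement of its mask centroid
    between two consecutive frames.

    The metres-per-pixel calibration factor is assumed to be known from
    the fixed roadside camera geometry (a hypothetical setup parameter).
    """
    dx = centroid_curr[0] - centroid_prev[0]
    dy = centroid_curr[1] - centroid_prev[1]
    pixels = (dx * dx + dy * dy) ** 0.5          # displacement in pixels
    metres_per_second = pixels * metres_per_pixel * fps
    return metres_per_second * 3.6               # convert m/s to km/h
```

For example, a 10-pixel displacement at 0.05 m/pixel and 30 fps corresponds to 15 m/s, i.e. 54 km/h.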
Dynamic Arrival Rate Estimation for Campus Mobility on Demand Network Graphs
Mobility On Demand (MOD) systems are revolutionizing transportation in urban settings by improving vehicle utilization and reducing parking congestion. A key factor in the success of an MOD system is the ability to measure and respond to real-time customer arrival data. Real-time traffic arrival rate data is traditionally difficult to obtain because fixed sensors must be installed throughout the MOD network. This paper presents a framework for measuring pedestrian traffic arrival rates using sensors onboard the vehicles that make up the MOD fleet. A novel distributed fusion algorithm is presented which combines onboard LIDAR and camera sensor measurements to detect trajectories of pedestrians with a 90% detection hit rate and 1.5 false positives per minute. A novel moving observer method is introduced to estimate pedestrian arrival rates from pedestrian trajectories collected by the mobile sensors. The moving observer method is evaluated in both simulation and hardware, and is shown to achieve arrival rate estimates comparable to those that would be obtained with multiple stationary sensors.
Comment: Appears in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). http://ieeexplore.ieee.org/abstract/document/7759357
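A minimal sketch of turning pooled mobile observations into an arrival-rate estimate, assuming per-pass pedestrian counts and exposure times are already available. The paper's moving observer method additionally corrects for the sensor platform's own motion, which is omitted here.

```python
def arrival_rate_per_min(counts, exposure_minutes):
    """Pool pedestrian counts from several passes of a moving vehicle
    and divide by the total time the region was actually observed,
    giving an arrival-rate estimate in arrivals per minute.

    This simple pooling is an illustrative assumption, not the paper's
    full moving-observer correction.
    """
    total_count = sum(counts)
    total_time = sum(exposure_minutes)
    if total_time == 0:
        raise ValueError("no observation time accumulated")
    return total_count / total_time
```

Because passes are short, pooling many of them is what makes the mobile estimate comparable to a stationary sensor's.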