Saliency difference based objective evaluation method for a superimposed screen of the HUD with various background
The head-up display (HUD) is an emerging device which can project information
on a transparent screen. The HUD has been used in airplanes and vehicles, and
it is usually placed in front of the operator's view. In the case of the
vehicle, the driver can see not only various information on the HUD but also
the backgrounds (driving environment) through the HUD. However, the projected
information on the HUD may interfere with the colors in the background because
the HUD is transparent. For example, a red message on the HUD will be less
noticeable when there is an overlap between it and the red brake light from the
front vehicle. As the first step to solve this issue, how to evaluate the
mutual interference between the information on the HUD and backgrounds is
important. Therefore, this paper proposes a method to evaluate the mutual
interference based on saliency. It can be evaluated by comparing the HUD part
cut from a saliency map of a measured image with the HUD image. Comment: 10 pages, 5 figures, 1 table, accepted by IFAC-HMS 201
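The comparison described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the saliency model here is the spectral-residual method (the abstract does not name one), and the function names `spectral_residual_saliency` and `hud_visibility_score` are hypothetical. The idea is to compute a saliency map of the measured scene, cut out the HUD region, and correlate it with the HUD image itself; a high correlation suggests the projected symbology still stands out against the background.

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral-residual saliency for a 2-D grayscale array (one common model;
    the paper may use a different one)."""
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Residual = log amplitude minus its local average (3x3 box blur via FFT).
    k = np.ones((3, 3)) / 9.0
    avg = np.real(np.fft.ifft2(np.fft.fft2(log_amp) * np.fft.fft2(k, s=img.shape)))
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)  # normalize to [0, 1]

def hud_visibility_score(scene, hud_mask, hud_image):
    """Correlate the HUD part of the scene's saliency map with the HUD image.
    Returns a Pearson correlation in [-1, 1]; lower values indicate that the
    background is interfering with the projected information."""
    sal = spectral_residual_saliency(scene)
    a = sal[hud_mask].ravel()
    b = hud_image[hud_mask].ravel()
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))
```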
Improving acoustic vehicle classification by information fusion
We present an information fusion approach for ground vehicle classification based on the emitted acoustic signal. Many acoustic factors can contribute to the classification accuracy of working ground vehicles. Classification relying on a single feature set may lose useful information if its underlying sound production model is not comprehensive. To improve classification accuracy, we consider an information fusion diagram in which various aspects of an acoustic signature are taken into account and emphasized separately by two different feature extraction methods. The first set of features aims to represent internal sound production, and a number of harmonic components are extracted to characterize the factors related to the vehicle's resonance. The second set of features is extracted based on a computationally efficient discriminatory analysis, and a group of key frequency components is selected by mutual information, accounting for the sound production from the vehicle's exterior parts. In correspondence with this structure, we further put forward a modified Bayesian fusion algorithm, which takes advantage of matching each specific feature set with its favored classifier. To assess the proposed approach, experiments are carried out on a data set containing acoustic signals from different types of vehicles. Results indicate that the fusion approach can effectively increase classification accuracy compared to that achieved using each individual feature set alone. The Bayesian-based decision-level fusion is found to outperform a feature-level fusion approach.
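The decision-level fusion step in this abstract can be sketched with the standard Bayesian product rule. This is an illustrative sketch, not the paper's modified algorithm: it assumes the two feature sets are conditionally independent given the class, and the function name `bayesian_decision_fusion` is hypothetical. Each classifier (one per feature set) outputs per-class posteriors, and the fused posterior is proportional to the prior times the two likelihood ratios.

```python
import numpy as np

def bayesian_decision_fusion(posteriors_a, posteriors_b, priors=None):
    """Fuse two classifiers' posterior vectors with the Bayesian product rule.

    Assuming conditional independence of the two feature sets given the class:
        P(c | x_a, x_b)  ∝  P(c) * [P(c | x_a) / P(c)] * [P(c | x_b) / P(c)]
    """
    pa = np.asarray(posteriors_a, dtype=float)
    pb = np.asarray(posteriors_b, dtype=float)
    if priors is None:
        priors = np.full(pa.shape[-1], 1.0 / pa.shape[-1])  # uniform prior
    fused = priors * (pa / priors) * (pb / priors)
    return fused / fused.sum(axis=-1, keepdims=True)  # renormalize
```

In this scheme each feature set keeps its favored classifier and only the per-class posteriors are combined, which is what distinguishes decision-level fusion from concatenating the two feature vectors before a single classifier.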
Thermo-visual feature fusion for object tracking using multiple spatiogram trackers
In this paper, we propose a framework that can efficiently combine features for robust tracking based on fusing the outputs of multiple spatiogram trackers. This is achieved without the exponential increase in storage and processing that other multimodal tracking approaches suffer from. The framework allows the features to be split arbitrarily between the trackers, as well as providing the flexibility to add, remove or dynamically weight features. We derive a mean-shift type algorithm for the framework that allows efficient object tracking with very low computational overhead. We especially target the fusion of thermal infrared and visible spectrum features as the most useful features for automated surveillance applications. Results are shown on multimodal video sequences clearly illustrating the benefits of combining multiple features using our framework.
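The fusion of the individual trackers' outputs can be illustrated with a minimal sketch. This is not the paper's derivation: the weighting scheme shown (similarity-weighted averaging of per-tracker position estimates) and the function name `fuse_tracker_estimates` are assumptions for illustration only. The point is that each spatiogram tracker (e.g. one on thermal features, one on visible-spectrum features) produces its own mean-shift position estimate plus a similarity score, and the fused estimate lets the more confident tracker dominate.

```python
import numpy as np

def fuse_tracker_estimates(positions, similarities):
    """Fuse per-tracker (x, y) estimates by similarity-weighted averaging.

    positions   : (n_trackers, 2) array of mean-shift position estimates
    similarities: (n_trackers,) non-negative confidence/similarity scores
    """
    positions = np.asarray(positions, dtype=float)
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                          # normalize weights
    return (w[:, None] * positions).sum(axis=0)
```

Dynamically re-weighting features then amounts to updating the `similarities` vector each frame, e.g. down-weighting the visible-spectrum tracker at night.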