    Detection and emotional evaluation of an electric vehicle’s exterior sound in a simulated environment

    Electric vehicles are quiet at low speeds and thus potentially pose a threat to pedestrians’ safety. Laws are being formulated worldwide that mandate these vehicles emit sounds to alert pedestrians to their approach. It is necessary that these sounds promote a positive perception of the vehicle brand, and understanding their impact on soundscapes is also important. Detection time of the vehicle sounds is an important measure of pedestrians’ safety, while emotional evaluation of these sounds influences assessment of the vehicle brand. Laboratory simulation is a new approach for evaluating exterior automotive sounds. This study describes the implementation of laboratory simulation to compare the detection time and emotional evaluation of artificial sounds for an electric vehicle. An Exterior Sound Simulator presented audio-visual stimuli of an electric car passing a crossroad of a virtual town at 4.47 m/s (10 mph), from the perspective of a pedestrian standing at the crossroad. In this environment, 15 sounds were tested in experiments where participants detected the car and evaluated its sound using perceptual dimensions. Results show that these sounds vary significantly in their detection times and emotional evaluations, but crucially that traditional metrics like dB(A) do not always relate to the detection of these sounds. Detection time and emotional evaluation do not correlate significantly; hence, a vehicle’s sounds could be detected quickly but still portray negative perceptions of the vehicle. Simulation provides a means to evaluate potential electric vehicle sounds more fully against these competing criteria.
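
    As an illustrative aside (not code from the study), the reported lack of correlation between detection time and emotional evaluation could be checked with a rank correlation across the candidate sounds. A minimal sketch follows; the per-sound arrays are hypothetical placeholders, not data from the paper.

```python
# Hypothetical check of the detection-time vs. emotional-rating relationship
# across 15 candidate sounds, as in the study design described above.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
detection_time_s = rng.uniform(1.0, 5.0, size=15)  # mean detection time per sound (s)
pleasantness = rng.uniform(1.0, 7.0, size=15)      # mean rating on a 7-point scale

rho, p_value = spearmanr(detection_time_s, pleasantness)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A non-significant p-value would mirror the finding that a sound can be
# detected quickly yet still be rated negatively.
```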

    Improving acoustic vehicle classification by information fusion

    We present an information fusion approach for ground vehicle classification based on the emitted acoustic signal. Many acoustic factors can contribute to the classification accuracy of working ground vehicles, and classification relying on a single feature set may lose useful information if its underlying sound production model is not comprehensive. To improve classification accuracy, we consider an information fusion scheme in which various aspects of an acoustic signature are taken into account and emphasized separately by two different feature extraction methods. The first set of features aims to represent internal sound production: a number of harmonic components are extracted to characterize factors related to the vehicle’s resonance. The second set of features is extracted by a computationally efficient discriminatory analysis, in which a group of key frequency components is selected by mutual information, accounting for sound production from the vehicle’s exterior parts. In correspondence with this structure, we further put forward a modified Bayesian fusion algorithm, which takes advantage of matching each specific feature set with its favored classifier. To assess the proposed approach, experiments are carried out on a data set containing acoustic signals from different types of vehicles. Results indicate that the fusion approach effectively increases classification accuracy compared to that achieved using each individual feature set alone, and the Bayesian decision-level fusion is found to improve on a feature-level fusion approach.
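
    A minimal sketch of decision-level Bayesian fusion is given below, assuming the common conditional-independence combination rule; it illustrates the general idea, not the paper’s exact “modified Bayesian” algorithm (which additionally matches each feature set to its favored classifier). The priors and posteriors are hypothetical.

```python
# Decision-level Bayesian fusion of two classifiers, each trained on its own
# feature set (harmonic components vs. mutual-information-selected frequencies).
# Under conditional independence: p(c|x1,x2) is proportional to
# p(c|x1) * p(c|x2) / p(c).
import numpy as np

def bayesian_fusion(post_a, post_b, prior):
    """Combine two posterior vectors into a fused, normalized posterior."""
    fused = post_a * post_b / prior
    return fused / fused.sum()

prior = np.array([0.5, 0.3, 0.2])           # hypothetical class priors
post_harmonic = np.array([0.6, 0.3, 0.1])   # classifier on harmonic features
post_keyfreq = np.array([0.5, 0.4, 0.1])    # classifier on key-frequency features

print(bayesian_fusion(post_harmonic, post_keyfreq, prior))
```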

    Evaluating Multimodal Driver Displays of Varying Urgency

    Previous studies have evaluated audio, visual and tactile warnings for drivers, highlighting the importance of conveying the appropriate level of urgency through the signals. However, these modalities have never been combined exhaustively with different urgency levels and tested in a driving simulator. This paper describes two experiments investigating all multimodal combinations of such warnings across three levels of designed urgency. The warnings were first evaluated in terms of perceived urgency and perceived annoyance in the context of a driving simulator. The results showed that perceived urgency matched the designed urgency of the warnings. More urgent warnings were also rated as more annoying, but the effect on annoyance was smaller than the effect on urgency. The warnings were then tested for recognition time when presented during a simulated driving task. Warnings of high urgency induced quicker and more accurate responses than warnings of medium and low urgency. In both studies, the number of modalities used in a warning (one, two or three) affected both subjective and objective responses: more modalities led to higher ratings of urgency and annoyance (again with a smaller effect on annoyance than on urgency) and also to quicker responses. These results provide implications for multimodal warning design and reveal how modalities and modality combinations can influence participant responses during a simulated driving task.

    Polyphonic Sound Event Detection by using Capsule Neural Networks

    Artificial sound event detection (SED) aims to mimic the human ability to perceive and understand what is happening in the surroundings. Nowadays, deep learning offers valuable techniques for this goal, such as Convolutional Neural Networks (CNNs). The Capsule Neural Network (CapsNet) architecture has recently been introduced in the image processing field with the intent to overcome some of the known limitations of CNNs, specifically their scarce robustness to affine transformations (i.e., perspective, size, orientation) and the detection of overlapping objects. This motivated the authors to employ CapsNets for the polyphonic-SED task, in which multiple sound events occur simultaneously. Specifically, we propose to exploit the capsule units to represent a set of distinctive properties for each individual sound event. Capsule units are connected through a so-called “dynamic routing” procedure that encourages the learning of part-whole relationships and improves detection performance in a polyphonic context. This paper reports extensive evaluations carried out on three publicly available datasets, showing that the CapsNet-based algorithm not only outperforms standard CNNs but also achieves the best results with respect to state-of-the-art algorithms.
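
    For readers unfamiliar with dynamic routing, the following is a minimal NumPy sketch of the routing-by-agreement procedure from Sabour et al. (2017), which CapsNet-based systems build on; the shapes, iteration count, and random inputs are illustrative, not taken from the paper.

```python
# Routing-by-agreement between a layer of lower capsules and output capsules.
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Non-linearity that preserves vector orientation and squashes length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: prediction vectors of shape (n_in, n_out, dim) from lower capsules."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                                # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # softmax over output capsules
        s = np.einsum('io,iod->od', c, u_hat)                  # weighted sum of predictions
        v = squash(s)                                          # output capsule vectors
        b += np.einsum('iod,od->io', u_hat, v)                 # agreement update
    return v

v = dynamic_routing(np.random.randn(32, 10, 16))
print(v.shape)  # (10, 16): one pose vector per output capsule
```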

    VLSI implementation of an energy-aware wake-up detector for an acoustic surveillance sensor network

    We present a low-power VLSI wake-up detector for a sensor network that uses acoustic signals to localize ground-based vehicles. The detection criterion is the degree of low-frequency periodicity in the acoustic signal, and the periodicity is computed from the “bumpiness” of the autocorrelation of a one-bit version of the signal. We then describe a CMOS ASIC that implements the periodicity estimation algorithm. The ASIC is functional and its core consumes 835 nanowatts. It was integrated into an acoustic enclosure and deployed in field tests with synthesized sounds and ground-based vehicles.
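
    The periodicity criterion can be sketched as follows. This is an illustrative reconstruction: the exact “bumpiness” measure used in the ASIC is an assumption here, modeled as peak counting on the normalized autocorrelation of the one-bit signal.

```python
# Quantize the signal to one bit, autocorrelate, and score low-frequency
# periodicity by how "bumpy" the autocorrelation is. A periodic engine
# signature yields strong, regularly spaced peaks; broadband noise does not.
import numpy as np

def periodicity_score(x, max_lag=2000):
    one_bit = np.sign(x - np.median(x))                # one-bit quantization
    ac = np.correlate(one_bit, one_bit, mode='full')
    ac = ac[len(one_bit) - 1: len(one_bit) - 1 + max_lag]
    ac = ac / ac[0]                                    # normalize to lag 0
    # Count interior peaks above a threshold (the threshold is an assumption).
    peaks = (ac[1:-1] > ac[:-2]) & (ac[1:-1] > ac[2:]) & (ac[1:-1] > 0.3)
    return int(peaks.sum())

fs = 8000
t = np.arange(fs) / fs
engine = np.sign(np.sin(2 * np.pi * 30 * t))           # 30 Hz periodic source
noise = np.random.randn(fs)
print(periodicity_score(engine), periodicity_score(noise))  # periodic >> noise
```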