
    Adaptive Audio Classification Framework for in-Vehicle Environment with Dynamic Noise Characteristics

    With the ever-increasing number of car-mounted electronic devices that are accessed, managed, and controlled with smartphones, car apps are becoming an important part of the automotive industry. Audio classification is one of the key components of car apps, serving as a front-end technology that enables human-app interaction. Existing approaches to audio classification, however, fall short because the unique and time-varying audio characteristics of car environments are not appropriately taken into account. Leveraging recent advances in mobile sensing technology that allow for active and accurate detection of the driving environment, this thesis develops an audio classification framework for mobile apps that categorizes an audio stream into music, speech, speech and music, or noise, adapting to different driving environments. A case study is performed with four driving environments, i.e., highway, local road, crowded city, and stopped vehicle. More than 420 minutes of audio data are collected, including various genres of music, speech, speech and music, and noise from these driving environments.
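    As a rough illustration of the environment-adaptive design described above, the sketch below keeps one model per driving environment and routes each audio frame to the model matching the detected environment. The feature extraction, model choice, and names are assumptions for illustration, not the thesis's actual implementation.

```python
# Illustrative sketch (not the thesis's implementation): an audio classifier
# that switches models according to the detected driving environment.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Target classes and environments named in the abstract.
CLASSES = ["music", "speech", "speech_and_music", "noise"]
ENVIRONMENTS = ["highway", "local_road", "crowded_city", "stopped"]


def extract_features(audio_frame: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor: simple spectral statistics per frame."""
    spectrum = np.abs(np.fft.rfft(audio_frame))
    return np.array([spectrum.mean(), spectrum.std(), spectrum.max()])


class AdaptiveAudioClassifier:
    """One classifier per driving environment; the environment detected via
    mobile sensing (assumed given here) selects which model to use."""

    def __init__(self):
        self.models = {env: RandomForestClassifier(n_estimators=50)
                       for env in ENVIRONMENTS}

    def fit(self, env: str, frames: np.ndarray, labels: np.ndarray) -> None:
        # labels are drawn from CLASSES; frames has shape (n_frames, frame_len).
        feats = np.array([extract_features(f) for f in frames])
        self.models[env].fit(feats, labels)

    def predict(self, env: str, frame: np.ndarray) -> str:
        feats = extract_features(frame).reshape(1, -1)
        return self.models[env].predict(feats)[0]
```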

    Modality-Independent Teachers Meet Weakly-Supervised Audio-Visual Event Parser

    Audio-visual learning has been a major pillar of multi-modal machine learning, where the community has mostly focused on the modality-aligned setting, i.e., the audio and visual modalities are both assumed to signal the prediction target. With the Look, Listen, and Parse dataset (LLP), we investigate the under-explored unaligned setting, where the goal is to recognize audio and visual events in a video with only weak labels observed. Such weak video-level labels only tell what events happen, without indicating the modality in which they are perceived (audio, visual, or both). To enhance learning in this challenging setting, we incorporate large-scale contrastively pre-trained models as modality teachers. A simple, effective, and generic method, termed Visual-Audio Label Elaboration (VALOR), is introduced to harvest modality labels for the training events. Empirical studies show that the harvested labels significantly improve an attentional baseline by 8.0 in average F-score (Type@AV). Surprisingly, we find that modality-independent teachers outperform their modality-fused counterparts, since they are insulated from noise in the other, potentially unaligned, modality. Moreover, our best model achieves a new state of the art on all metrics of LLP by a substantial margin (+5.4 F-score for Type@AV). VALOR further generalizes to Audio-Visual Event Localization, where it also achieves a new state of the art. Code is available at: https://github.com/Franklin905/VALOR
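    The label-harvesting idea can be pictured with a small sketch: frozen, modality-independent teachers score each weak video-level event separately for the visual and audio streams, and an event label is kept for a modality only when that modality's teacher is confident. The segment structure, score values, and threshold below are illustrative assumptions rather than the authors' exact procedure.

```python
# Hedged sketch of VALOR-style modality-label harvesting: each modality's
# teacher votes only on its own stream, so noise in the other (possibly
# unaligned) modality cannot leak into its labels. All names and numbers
# here are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Segment:
    visual_scores: dict[str, float]  # teacher similarity per candidate event (visual)
    audio_scores: dict[str, float]   # teacher similarity per candidate event (audio)


def harvest_modality_labels(segment: Segment, video_events: list[str],
                            tau: float = 0.5) -> dict[str, set[str]]:
    """For each weak video-level event, decide which modalities keep the label."""
    labels = {"visual": set(), "audio": set()}
    for event in video_events:
        if segment.visual_scores.get(event, 0.0) >= tau:
            labels["visual"].add(event)
        if segment.audio_scores.get(event, 0.0) >= tau:
            labels["audio"].add(event)
    return labels


# Example: a video weakly labelled with {"dog", "speech"}; only the visual
# teacher is confident about "dog" and only the audio teacher about "speech".
seg = Segment(visual_scores={"dog": 0.8, "speech": 0.2},
              audio_scores={"dog": 0.1, "speech": 0.7})
print(harvest_modality_labels(seg, ["dog", "speech"]))
# {'visual': {'dog'}, 'audio': {'speech'}}
```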

    Advances in Intelligent Vehicle Control

    This book is a printed edition of the Special Issue Advances in Intelligent Vehicle Control that was published in the journal Sensors. It presents a collection of eleven papers covering a range of topics, such as the development of intelligent control algorithms for active safety systems, smart sensors, and intelligent and efficient driving. The contributions presented in these papers can serve as useful tools for researchers interested in new vehicle technology and in the improvement of vehicle control systems.

    Long-Term Consequences of Early Eye Enucleation on Audiovisual Processing

    A growing body of research shows that complete deprivation of the visual system through the loss of both eyes early in life results in changes in the remaining senses. Is the adaptive plasticity observed in the remaining intact senses also found in response to partial sensory deprivation, specifically the loss of one eye early in life? My dissertation examines evidence of adaptive plasticity following the loss of one eye (unilateral enucleation) early in life. Unilateral eye enucleation is a unique model for examining the consequences of the loss of binocularity, since the brain is completely deprived of all visual input from that eye. My dissertation expands our understanding of the long-term effects of losing one eye early in life on the development of audiovisual processing, both behaviourally and in terms of the underlying neural representation. The over-arching goal is to better understand neural plasticity resulting from sensory deprivation. To achieve this, I conducted seven experiments, divided into five experimental chapters, that focus on the behavioural and structural correlates of audiovisual perception in a unique group of adults who lost one eye in the first few years of life. Behavioural data (Chapters II-V), in conjunction with neuroimaging data (Chapter VI), relate the structure and function of the auditory, visual, and audiovisual systems in this rare patient group, allowing a more refined understanding of the cross-sensory effects of early sensory deprivation. This information contributes to a better understanding of how audiovisual information is experienced by people with one eye. This group can be used as a model for learning how to accommodate people with less extreme forms of visual deprivation, maintain their visual health, and promote overall long-term visual health.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Sound event detection in the DCASE 2017 Challenge

    Each edition of the challenge on Detection and Classification of Acoustic Scenes and Events (DCASE) has contained several tasks involving sound event detection in different setups. DCASE 2017 presented participants with three such tasks, each having specific datasets and detection requirements: Task 2, in which target sound events were very rare in both training and testing data; Task 3, having overlapping events annotated in real-life audio; and Task 4, in which only weakly labeled data was available for training. In this paper, we present the three tasks, including the datasets and baseline systems, and analyze the challenge entries for each task. We observe the popularity of methods using deep neural networks and the continued wide use of mel-frequency-based representations, with only a few approaches standing out as radically different. Analysis of the systems' behavior reveals that task-specific optimization plays a big role in producing good performance; however, this optimization often closely follows the ranking metric, and its maximization or minimization does not result in universally good performance. We also introduce the calculation of confidence intervals based on a jackknife resampling procedure to perform statistical analysis of the challenge results. The analysis indicates that while the 95% confidence intervals for many systems overlap, there are significant differences in performance between the top systems and the baseline for all tasks.
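    The jackknife-based confidence intervals mentioned above can be sketched as follows. This is a generic leave-one-out jackknife over per-item scores under our own assumptions; the challenge analysis may differ in its exact resampling unit and metric.

```python
# Illustrative leave-one-out jackknife for a 95% confidence interval on a
# performance metric (e.g., F-score averaged over evaluation items).
import numpy as np
from scipy import stats


def jackknife_ci(per_item_scores: np.ndarray, alpha: float = 0.05):
    """Return (mean, lower, upper) using leave-one-out jackknife resampling."""
    n = len(per_item_scores)
    full_estimate = per_item_scores.mean()
    # Leave-one-out estimates of the metric.
    loo = np.array([np.delete(per_item_scores, i).mean() for i in range(n)])
    # Jackknife standard error.
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return full_estimate, full_estimate - t * se, full_estimate + t * se


# Hypothetical per-item F-scores for one system (illustrative numbers only).
scores = np.array([0.62, 0.58, 0.71, 0.66, 0.60, 0.64, 0.69, 0.57])
print(jackknife_ci(scores))  # mean F-score with its 95% interval
```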