
    Engage D2.2 Final Communication and Dissemination Report

    This deliverable reports on the communication and dissemination activities carried out by the Engage consortium over the duration of the network. Planned activities had to be adapted due to the Covid-19 pandemic; nevertheless, a full programme of workshops and summer schools was organised. Support was given to the annual SESAR Innovation Days conference, and Engage was present at many other events. The Engage website launched in the first month of the network. It was later joined by the Engage ‘knowledge hub’, known as the EngageWiki, which hosts ATM research and knowledge. The wiki provides a platform and consolidated repository with novel user functionality, as well as an additional channel for the dissemination of SESAR results. Engage has also supported and publicised numerous research outputs produced by PhD candidates and catalyst-fund projects.

    Fourteenth Biennial Status Report: March 2017 - February 2019


    A comprehensive survey of multi-view video summarization

    There has been exponential daily growth in the amount of visual data acquired from single- or multi-view surveillance camera networks. This massive amount of data requires efficient mechanisms such as video summarization to ensure that only significant data are reported and redundancy is reduced. Multi-view video summarization (MVS) provides a less redundant and more concise account of the video content of all the cameras, in the form of either keyframes or video segments. This paper presents an overview of the existing strategies proposed for MVS, including their advantages and drawbacks. Our survey covers the generic steps in MVS: pre-processing of video data, feature extraction, and post-processing followed by summary generation. We also describe the datasets available for the evaluation of MVS. Finally, we examine the major open issues in MVS and put forward recommendations for future research.
    This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2B5B01070067).
    Hussain, T.; Muhammad, K.; Ding, W.; Lloret, J.; Baik, S.W.; De Albuquerque, V.H.C. (2021). A comprehensive survey of multi-view video summarization. Pattern Recognition, 109:1-15. https://doi.org/10.1016/j.patcog.2020.107567
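    The generic MVS pipeline described in the abstract (pre-processing, feature extraction, post-processing, summary generation) can be illustrated with a minimal sketch. The histogram features, pooling step, and threshold rule below are illustrative assumptions, not the methods surveyed in the paper:

    ```python
    import numpy as np

    def frame_features(frame, bins=16):
        # Feature extraction: a per-frame intensity histogram, a simple
        # stand-in for the learned features surveyed in the paper.
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        return hist / hist.sum()

    def select_keyframes(frames, threshold=0.1):
        # Summary generation: keep a frame only when its features differ
        # enough from the last selected keyframe (redundancy reduction).
        keyframes = [0]
        last = frame_features(frames[0])
        for i in range(1, len(frames)):
            feat = frame_features(frames[i])
            if np.abs(feat - last).sum() > threshold:
                keyframes.append(i)
                last = feat
        return keyframes

    # Toy "multi-view" input: two cameras with four 32x32 frames each,
    # where constant intensity stands in for scene content.
    view_a = [np.full((32, 32), v) for v in (0, 0, 128, 128)]
    view_b = [np.full((32, 32), v) for v in (128, 128, 255, 255)]
    # Pre-processing: pool candidate frames from all views into one sequence.
    pool = view_a + view_b
    print(select_keyframes(pool))  # → [0, 2, 6]
    ```

    Duplicate frames within a view and across views are skipped, so the summary keeps only the three distinct scenes.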

    Deep Learning: Our Miraculous Year 1990-1991

    In 2020, we will celebrate that many of the basic ideas behind the deep learning revolution were published three decades ago, within fewer than 12 months, in our "Annus Mirabilis" or "Miraculous Year" 1990-1991 at TU Munich. Back then, few people were interested, but a quarter of a century later, neural networks based on these ideas were running on over 3 billion devices such as smartphones, used many billions of times per day, and consuming a significant fraction of the world's compute. Comment: 37 pages, 188 references, based on work of 4 Oct 201

    Data Labeling tools for Computer Vision: a Review

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science.
    Large volumes of labeled data are required to train Machine Learning models to solve today's computer vision challenges. The recent surge of hype and investment in data labeling tools and services has led to many ad-hoc labeling tools. This review frames a detailed comparison between a selection of data labeling tools to support the best software choice for holistically optimizing the data labeling process in a Computer Vision problem. The analysis is built on multiple domains of features and functionalities related to Computer Vision, Natural Language Processing, Automation, and Quality Assurance, enabling its application to the most prevalent data labeling use cases across the scientific community and the global market.

    Spectral Representation of Behaviour Primitives for Depression Analysis


    Classification of Animal Sound Using Convolutional Neural Network

    Recently, labeling of acoustic events has emerged as an active topic covering a wide range of applications. High-level semantic inference can be conducted from the main audio effects to facilitate various content-based applications for analysis, efficient retrieval, and content management. This paper proposes a flexible convolutional neural network (CNN)-based framework for animal audio classification. The work takes inspiration from various deep neural networks recently developed for multimedia classification. The model identifies the animal sound in an audio file by forcing the network to pay attention to the core audio effects present in the generated Mel-spectrogram. The designed framework achieves an accuracy of 98% when classifying animal audio on weakly labelled datasets. A notable aspect of this work is that the framework can run on a basic machine and does not require high-end devices for classification.
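    The front end described here (a Mel-spectrogram fed to a CNN) can be sketched in plain NumPy. The triangular filterbank, window sizes, and random-weight forward pass below are toy assumptions for illustration; they are not the paper's trained network or any specific library's implementation:

    ```python
    import numpy as np

    def log_mel_spectrogram(signal, n_fft=512, hop=256, n_mels=32):
        # Frame the signal and take magnitude spectra (a plain STFT).
        frames = [signal[i:i + n_fft] * np.hanning(n_fft)
                  for i in range(0, len(signal) - n_fft, hop)]
        spec = np.abs(np.fft.rfft(frames, axis=1))   # (time, n_fft//2 + 1)
        # Crude triangular "mel-style" filterbank (illustrative only).
        n_bins = spec.shape[1]
        edges = np.linspace(0, n_bins, n_mels + 2).astype(int)
        fb = np.zeros((n_mels, n_bins))
        for m in range(n_mels):
            lo, mid, hi = edges[m], edges[m + 1], edges[m + 2]
            fb[m, lo:mid] = np.linspace(0, 1, max(mid - lo, 1))
            fb[m, mid:hi] = np.linspace(1, 0, max(hi - mid, 1))
        return np.log1p(spec @ fb.T)                 # (time, n_mels)

    def tiny_cnn_logits(mel, n_classes=3, seed=0):
        # One random 3x3 conv filter bank + ReLU + global average pooling
        # + linear head: a stand-in forward pass with untrained weights.
        rng = np.random.default_rng(seed)
        kernels = rng.standard_normal((8, 3, 3))
        t, f = mel.shape
        feats = []
        for w in kernels:
            out = np.zeros((t - 2, f - 2))
            for i in range(t - 2):
                for j in range(f - 2):
                    out[i, j] = max((mel[i:i + 3, j:j + 3] * w).sum(), 0.0)
            feats.append(out.mean())                 # global average pool
        W = rng.standard_normal((n_classes, 8))
        return W @ np.array(feats)

    sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz tone
    mel = log_mel_spectrogram(sig)
    print(np.argmax(tiny_cnn_logits(mel)))  # class index (untrained, arbitrary)
    ```

    In the paper's setting the convolutional weights would be learned from weakly labelled training audio; the pipeline shape, however, is the same: spectrogram in, class scores out.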