
    ORCA-SPOT: An Automatic Killer Whale Sound Detection Toolkit Using Deep Learning

    Large bioacoustic archives of wild animals are an important source for identifying reappearing communication patterns, which can then be related to recurring behavioral patterns to advance the current understanding of intra-specific communication in non-human animals. A main challenge is that most large-scale bioacoustic archives contain only a small percentage of animal vocalizations and a large amount of environmental noise, which makes it extremely difficult to manually retrieve sufficient vocalizations for further analysis – a problem particularly acute for species with advanced social systems and complex vocalizations. In this study, deep neural networks were trained on 11,509 killer whale (Orcinus orca) signals and 34,848 noise segments. The resulting toolkit, ORCA-SPOT, was tested on a large-scale bioacoustic repository – the Orchive – comprising roughly 19,000 hours of killer whale underwater recordings. An automated segmentation of the entire Orchive (about 2.2 years of audio) took approximately 8 days. It achieved a time-based precision, or positive predictive value (PPV), of 93.2% and an area under the curve (AUC) of 0.9523. This approach enables automated annotation of large bioacoustic databases to extract killer whale sounds, which are essential for the subsequent identification of significant communication patterns. The code will be publicly available in October 2019 to support the application of deep learning to bioacoustic research. ORCA-SPOT can be adapted to other animal species.
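
    The detector described above is, at its core, a binary classifier that separates killer whale signals from noise segments. A minimal sketch of such a spectrogram-based detector is shown below; the PyTorch/torchaudio architecture, input shapes and hyperparameters are illustrative assumptions, not the published ORCA-SPOT model.

        # Minimal sketch of a spectrogram-based binary sound detector.
        # Assumptions: the architecture, mel settings and hyperparameters are
        # illustrative; they are not the published ORCA-SPOT network.
        import torch
        import torch.nn as nn
        import torchaudio

        class OrcaDetector(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(32, 1)   # one logit: signal vs. noise

            def forward(self, spec):                 # spec: (batch, 1, mels, frames)
                h = self.features(spec).flatten(1)
                return self.classifier(h)            # apply sigmoid for a probability

        def segment_to_spectrogram(waveform, sample_rate=44100):
            # Convert a fixed-length audio segment to a log-mel spectrogram.
            mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)(waveform)
            return torch.log1p(mel)                  # (channels, mels, frames)

        # Training would minimise binary cross-entropy over labelled signal/noise
        # segments, e.g. with nn.BCEWithLogitsLoss() and an Adam optimiser.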

    Detecting complex events in user-generated video using concept classifiers

    Automatic detection of complex events in user-generated videos (UGV) is a challenging task because their characteristics differ from those of broadcast video. In this work, we first summarize the distinctive characteristics of UGV and then explore how to use concept classifiers to recognize complex events in UGV content. The method starts by manually selecting a variety of relevant concepts and constructing classifiers for these concepts. Complex event detectors are then learned by using the concatenated probabilistic scores of these concept classifiers as features. We also compare three fusion operations over the probabilistic scores: Maximum, Average and Minimum fusion. Experimental results suggest that the method is promising and that Maximum fusion tends to give better performance for most complex events.
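
    Since the event detectors are built from per-segment concept scores, the three fusion operations reduce to simple column-wise reductions over a score matrix. A small sketch follows; the array layout and function name are assumptions for illustration, not the authors' code.

        # Sketch of Maximum / Average / Minimum fusion of concept-classifier scores.
        # Assumption: `scores` is a (segments x concepts) NumPy array of
        # probabilistic outputs for one video; names are illustrative.
        import numpy as np

        def fuse_scores(scores: np.ndarray, mode: str = "max") -> np.ndarray:
            """Collapse per-segment concept scores into one video-level feature vector."""
            if mode == "max":
                return scores.max(axis=0)    # strongest evidence per concept
            if mode == "avg":
                return scores.mean(axis=0)   # average evidence per concept
            if mode == "min":
                return scores.min(axis=0)    # weakest evidence per concept
            raise ValueError(f"unknown fusion mode: {mode}")

        # The fused vector is then used as the feature input of the complex event
        # detector (for example, a linear SVM trained per event).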

    WGBH's Teachers' Domain: Producing Open Materials and Engaging Users

    Launched in 2002 by WGBH, the non-commercial public media service located in Boston, Massachusetts, Teachers' Domain is an online repository of multimedia open educational resources for use in classrooms and for professional development. As part of its effort to increase the availability of freely accessible resources, WGBH has developed content from public media archives into high-quality open educational resources for Teachers' Domain. Using a participatory case study methodology, this report examines WGBH and Teachers' Domain's successes and challenges in 1) converting proprietary content to open content, 2) engaging users in content, and 3) redesigning the Teachers' Domain site to accommodate new categories of use and tools for teachers and learners of different backgrounds and activity levels. For OER projects more generally, ongoing research on user behaviors, experiences and perceptions can be a challenging and resource-intensive process; however, by building data collection mechanisms and research questions into organizational practices, knowledge can be cultivated to inform how users are best supported and to drive continuous improvement for the projects overall.

    Fast Fight Detection

    Action recognition has become a hot topic within computer vision. However, the community has focused mainly on relatively simple actions such as clapping, walking and jogging, while the detection of specific events with direct practical use, such as fights or aggressive behavior in general, has been comparatively less studied. Such a capability could be extremely useful in video surveillance scenarios such as prisons and psychiatric centers, or even embedded in camera phones, so there is growing interest in violence detection algorithms. Recent work applied the well-known Bag-of-Words framework to the specific problem of fight detection: spatio-temporal features are extracted from the video sequences and used for classification. Despite encouraging results with high accuracy rates, the computational cost of extracting such features is prohibitive for practical applications. This work proposes a novel method to detect violent sequences. Features extracted from motion blobs are used to discriminate fight from non-fight sequences. Although the method is outperformed in accuracy by the state of the art, it has a significantly lower computation time, making it amenable to real-time applications.
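
    A rough idea of how motion-blob features can be computed cheaply is sketched below with OpenCV: consecutive frames are differenced, thresholded into a binary motion mask, and the connected components (blobs) are summarised by simple statistics. The threshold and the chosen statistics are assumptions for illustration, not the exact descriptors of the paper.

        # Sketch: simple motion-blob features from inter-frame differencing.
        # Assumptions: OpenCV is available; the threshold and the blob statistics
        # are illustrative, not the paper's exact feature set.
        import cv2
        import numpy as np

        def motion_blob_features(video_path, thresh=25):
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            areas = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                diff = cv2.absdiff(gray, prev)                 # inter-frame motion
                _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
                n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
                areas.extend(stats[1:, cv2.CC_STAT_AREA])      # blob areas, skipping background
                prev = gray
            cap.release()
            if not areas:
                return np.zeros(4)
            a = np.asarray(areas, dtype=float)
            return np.array([a.mean(), a.std(), a.max(), len(a)])  # clip-level feature vector

        # Feature vectors of labelled fight / non-fight clips can then be fed to
        # any fast classifier (e.g. a linear SVM) for real-time screening.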

    Engaging men with penile cancer in qualitative research: reflections from an interview-based study.

    To explore the challenges of engaging men with penile cancer in qualitative interview research.

    The Ohio State University Libraries Audiovisual Assessment Interim Report

    The 26-month audiovisual (AV) assessment took place from June 2017 to August 2019, with a focus on rare or unique audiovisual items. Curators and archivists within each unit identified collections or collecting areas of significance or research value. Approximately 65% of the AV materials in the seven library units have not been assessed. Upon completion, 18,389 audiovisual items in 32 distinct formats had been individually assessed.

    Video Manipulation Techniques for the Protection of Privacy in Remote Presence Systems

    Systems that give control of a mobile robot to a remote user raise privacy concerns about what the remote user can see and do through the robot. We aim to preserve some of that privacy by manipulating the video data that the remote user sees. Through two user studies, we explore the effectiveness of different video manipulation techniques at providing different types of privacy, while simultaneously examining task performance in the presence of privacy protection. In the first study, participants watched video captured by a robot exploring an office environment and completed a series of observational tasks under differing video manipulation conditions. Our results show that manipulating the video stream can lead to fewer privacy violations across different privacy types. A second user study demonstrated that these privacy-protecting techniques were effective without diminishing the task performance of the remote user.
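
    As a concrete illustration of manipulating the video stream before it reaches the remote user, the sketch below applies a strong blur to each frame with OpenCV. The choice of filter and kernel size is an assumption for illustration; the paper evaluates its own set of manipulation techniques.

        # Illustrative sketch: degrade the robot's video before streaming it to the
        # remote operator. The blur filter and kernel size are assumptions, not the
        # specific manipulations evaluated in the study.
        import cv2

        def privacy_filter(frame, kernel=(31, 31)):
            # Blur the frame so fine detail (documents, faces) is obscured while
            # coarse scene structure useful for navigation remains visible.
            return cv2.GaussianBlur(frame, kernel, 0)

        cap = cv2.VideoCapture(0)           # robot camera (device 0 as a stand-in)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imshow("remote view", privacy_filter(frame))
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()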