Beyond standard benchmarks: Parameterizing performance evaluation in visual object tracking
Object-to-camera motion produces a variety of apparent motion patterns that
significantly affect performance of short-term visual trackers. Despite being
crucial for designing robust trackers, their influence is poorly explored in
standard benchmarks due to weakly defined, biased and overlapping attribute
annotations. In this paper we propose to go beyond pre-recorded benchmarks with
post-hoc annotations by presenting an approach that utilizes omnidirectional
videos to generate realistic, consistently annotated, short-term tracking
scenarios with exactly parameterized motion patterns. We have created an
evaluation system, constructed a fully annotated dataset of omnidirectional
videos and the generators for typical motion patterns. We provide an in-depth
analysis of major tracking paradigms which is complementary to the standard
benchmarks and confirms the expressiveness of our evaluation approach.
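The abstract does not spell out how the motion-pattern generators work; purely as an illustration, a minimal numpy sketch of the core idea — a virtual viewport panning over an equirectangular omnidirectional frame at an exactly parameterized angular velocity — could look like the following (all function names and parameters are hypothetical, not the authors' implementation):

```python
import numpy as np

def viewport_trajectory(num_frames, omega_deg, start_deg=0.0):
    """Yaw angles (degrees) of a virtual camera panning at a constant,
    exactly parameterized angular velocity omega_deg per frame."""
    t = np.arange(num_frames)
    return (start_deg + omega_deg * t) % 360.0

def crop_viewport(frame, yaw_deg, fov_deg=90.0):
    """Cut a horizontal viewport of width fov_deg degrees from an
    equirectangular frame (H x W), centred on yaw_deg; column indices
    wrap around the 360-degree seam."""
    h, w = frame.shape[:2]
    half = int(round(w * fov_deg / 360.0 / 2))
    centre = int(round(w * (yaw_deg % 360.0) / 360.0))
    cols = np.arange(centre - half, centre + half) % w
    return frame[:, cols]

# Toy equirectangular frame: 4 rows x 360 columns, pixel value = longitude.
frame = np.tile(np.arange(360), (4, 1))
yaws = viewport_trajectory(num_frames=3, omega_deg=10.0)
crops = [crop_viewport(frame, y, fov_deg=90.0) for y in yaws]
print(yaws)            # [ 0. 10. 20.]
print(crops[0].shape)  # (4, 90)
```

Because the yaw sequence is generated rather than post-hoc annotated, the apparent motion in every synthesized clip is known exactly, which is the property the benchmark exploits.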
Robust sound event detection in bioacoustic sensor networks
Bioacoustic sensors, sometimes known as autonomous recording units (ARUs),
can record sounds of wildlife over long periods of time in scalable and
minimally invasive ways. Deriving per-species abundance estimates from these
sensors requires detection, classification, and quantification of animal
vocalizations as individual acoustic events. Yet, variability in ambient noise,
both over time and across sensors, hinders the reliability of current automated
systems for sound event detection (SED), such as convolutional neural networks
(CNN) in the time-frequency domain. In this article, we develop, benchmark, and
combine several machine listening techniques to improve the generalizability of
SED models across heterogeneous acoustic environments. As a case study, we
consider the problem of detecting avian flight calls from a ten-hour recording
of nocturnal bird migration, recorded by a network of six ARUs in the presence
of heterogeneous background noise. Starting from a CNN yielding
state-of-the-art accuracy on this task, we introduce two noise adaptation
techniques, respectively integrating short-term (60 milliseconds) and long-term
(30 minutes) context. First, we apply per-channel energy normalization (PCEN)
in the time-frequency domain, which applies short-term automatic gain control
to every subband in the mel-frequency spectrogram. Secondly, we replace the
last dense layer in the network by a context-adaptive neural network (CA-NN)
layer. Combining them yields state-of-the-art results that are unmatched by
artificial data augmentation alone. We release a pre-trained version of our
best performing system under the name of BirdVoxDetect, a ready-to-use detector
of avian flight calls in field recordings.

Comment: 32 pages, in English. Submitted to PLOS ONE journal in February 2019; revised August 2019; published October 2019.
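PCEN is a published, well-defined operation (Wang et al., 2017): a first-order IIR filter tracks the slowly varying loudness of each frequency band, division by that estimate acts as short-term automatic gain control, and a root compression replaces the usual log. The following numpy implementation follows that standard formulation; the parameter values are common illustrative defaults, not necessarily those used by BirdVoxDetect:

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a mel spectrogram E with
    shape (n_mels, n_frames).  M is a per-band IIR smoother of E;
    E / (eps + M)**alpha is the automatic gain control stage, and
    (x + delta)**r - delta**r is the root compression stage."""
    M = np.empty_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1.0 - s) * M[:, t - 1] + s * E[:, t]
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

# A constant-energy input: the AGC stage flattens it, so the output
# is (nearly) the same value everywhere regardless of the input level.
E = np.full((40, 100), 10.0)
out = pcen(E)
print(out.shape)  # (40, 100)
```

For production use, librosa also ships a `pcen` function with a streaming-friendly interface; the loop above is written out explicitly only to make the per-band gain control visible.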
Video-based assistance system for training in minimally invasive surgery
In this paper, the development of an assistance system for laparoscopic surgical training is presented. With this system, we expect to facilitate the early stages of training in laparoscopic surgery and to contribute to an objective evaluation of surgical skills. To achieve this, we propose inserting multimedia content and task outlines, adapted to the trainee's level of experience, into the monitored image, and detecting the movements of the laparoscopic instrument within it. A module to track the instrument is implemented, focusing on the tip of the laparoscopic tool. This tracking method does not require artificial markers or special colours to distinguish the instruments. Similarly, the system uses another visual-tracking method to anchor supporting multimedia content at a stable position in the field of view, so that the position of this content adapts to the movements of the camera and the working area. Experimental results are presented to show the feasibility of the proposed system for assisting in laparoscopic surgical training.
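The abstract does not detail how the instrument tip is tracked without markers or special colours; as a hedged illustration only (not the authors' algorithm), one generic marker-free building block is to threshold the inter-frame difference and take the centroid of the moving pixels:

```python
import numpy as np

def motion_centroid(prev, curr, thresh=25):
    """Threshold the absolute difference between two grayscale frames
    and return the centroid (row, col) of the moving pixels, or None
    if nothing moved.  A real tracker would add temporal smoothing and
    a shape prior for the instrument shaft and tip."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Toy grayscale frames: a bright "tool tip" moves from (10, 10) to (10, 14).
prev = np.zeros((32, 32), dtype=np.uint8)
curr = np.zeros((32, 32), dtype=np.uint8)
prev[10, 10] = 255
curr[10, 14] = 255
print(motion_centroid(prev, curr))  # (10.0, 12.0)
```

The centroid here lands midway between the old and new tip positions, which is why practical systems refine such a coarse motion cue with appearance-based localization of the tip itself.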
How can video analysis help laparoscopic surgeons?
Automatic analysis of minimally invasive surgical (MIS) video has the potential to drive new solutions that address existing needs for safer surgery: reproducible training programs, objective and transparent assessment systems, and navigation tools that assist surgeons and improve patient safety. As surgical video is an unobtrusive, always-available source of information in the operating room (OR), this research proposes using it to extract useful information during surgical operations. The proposed methodology includes an instrument-tracking algorithm and 3D reconstruction of the surgical field. The motivation for these solutions is the augmentation of the laparoscopic view in order to provide orientation aids, optimal surgical path visualization, or overlays of preoperative virtual models.
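The abstract mentions 3D reconstruction of the surgical field without specifying the algorithm; the standard building block behind any such reconstruction is triangulating a point seen in two calibrated views. A minimal linear (DLT) triangulation sketch, with toy camera matrices rather than real laparoscope calibration, might look like:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pinhole
    views.  P1 and P2 are 3x4 projection matrices; x1 and x2 are the
    point's pixel coordinates in each view.  Each observation gives
    two homogeneous linear constraints on the 3D point; the SVD null
    vector of the stacked system is the least-squares solution."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose, and a 1-unit baseline along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.round(triangulate(P1, P2, x1, x2), 3))
```

In a monocular laparoscope, the two "views" come from camera motion rather than a stereo rig, so the relative pose between frames must itself be estimated before triangulation; that estimation is the hard part the research addresses.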