Learning without Prejudice: Avoiding Bias in Webly-Supervised Action Recognition
Webly-supervised learning has recently emerged as an alternative paradigm to
traditional supervised learning based on large-scale datasets with manual
annotations. The key idea is that models such as CNNs can be learned from the
noisy visual data available on the web. In this work we aim to exploit web data
for video understanding tasks such as action recognition and detection. One of
the main problems in webly-supervised learning is cleaning the noisy labeled
data from the web. The state-of-the-art paradigm relies on training a first
classifier on noisy data that is then used to clean the remaining dataset. Our
key insight is that this procedure biases the second classifier towards samples
that the first one understands. Here we train two independent CNNs: an RGB
network on web images and video frames, and a second network using temporal
information from optical flow. We show that training the networks independently
is vastly superior to selecting the frames for the flow classifier by using our
RGB network. Moreover, we show benefits in enriching the training set with
different data sources from heterogeneous public web databases. We demonstrate
that our framework outperforms all other webly-supervised methods on two public
benchmarks, UCF-101 and THUMOS'14.
Comment: Submitted to CVIU SI: Computer Vision and the Web
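To illustrate the paper's key idea, here is a minimal PyTorch sketch (all names, sizes, and the late-fusion rule are illustrative assumptions, not the authors' code) in which the RGB and flow streams are trained independently on their own web data and combined only at test time, rather than letting the RGB network select the flow network's training frames:

```python
import torch
import torch.nn as nn
from torchvision import models

def make_stream(in_channels, num_classes=101):
    """One CNN stream; the flow stream takes stacked flow fields as input."""
    net = models.resnet18(weights=None)
    net.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                          padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

def train_stream(net, loader, epochs=1, lr=1e-3):
    """Train one stream on its own noisy web data, with no filtering by the
    other stream (the bias the paper warns against)."""
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    net.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(net(x), y).backward()
            opt.step()

# Independent training: each stream sees its own data source.
rgb_net = make_stream(in_channels=3)    # web images and video frames
flow_net = make_stream(in_channels=10)  # e.g. 5 stacked (dx, dy) flow fields

def fuse(rgb_logits, flow_logits):
    """Late fusion at test time, e.g. by averaging softmax scores."""
    return (rgb_logits.softmax(-1) + flow_logits.softmax(-1)) / 2
```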
Recognizing Objects In-the-wild: Where Do We Stand?
The ability to recognize objects is an essential skill for a robotic system
acting in human-populated environments. Despite decades of effort from the
robotics and vision research communities, robots still lack reliable visual
perception systems, preventing the deployment of autonomous agents in
real-world applications. Progress is slowed by the lack of a testbed able to
accurately represent the world perceived by the robot in-the-wild. In order to
fill this gap, we introduce a large-scale, multi-view object dataset collected
with an RGB-D camera mounted on a mobile robot. The dataset embeds the
challenges faced by a robot in a real-life application and provides a useful
tool for validating object recognition algorithms. Besides describing the
characteristics of the dataset, the paper evaluates the performance of a
collection of well-established deep convolutional networks on the new dataset
and analyzes the transferability of deep representations from Web images to
robotic data. Despite the promising results obtained with such representations,
the experiments demonstrate that object classification with real-life robotic
data is far from being solved. Finally, we provide a comparative study to
analyze and highlight the open challenges in robot vision, explaining the
discrepancies in performance.
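A minimal sketch of the kind of transferability test described above, under assumed settings (PyTorch, an ImageNet-pretrained backbone, a hypothetical number of robot object classes): the representation learned from Web images is frozen and only a linear classifier is trained on the robot-collected data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on Web images (ImageNet); we keep its pooled features.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()          # expose the 2048-d feature vector
for p in backbone.parameters():
    p.requires_grad = False          # the Web-image representation stays fixed
backbone.eval()

num_robot_classes = 50               # assumption: depends on the dataset
clf = nn.Linear(2048, num_robot_classes)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One step of linear-probe training on robotic data."""
    with torch.no_grad():
        feats = backbone(images)     # transfer: features learned on Web images
    loss = loss_fn(clf(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```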
VIENA2: A Driving Anticipation Dataset
Action anticipation is critical in scenarios where one needs to react before
the action is finalized. This is, for instance, the case in automated driving,
where a car needs to, e.g., avoid hitting pedestrians and respect traffic
lights. While solutions have been proposed to tackle subsets of the driving
anticipation tasks, by making use of diverse, task-specific sensors, there is
no single dataset or framework that addresses them all in a consistent manner.
In this paper, we therefore introduce a new, large-scale dataset, called
VIENA2, covering 5 generic driving scenarios, with a total of 25 distinct
action classes. It contains more than 15K full-HD, 5-second videos acquired in
various driving conditions, weather, times of day, and environments, complemented
with a common and realistic set of sensor measurements. This amounts to more
than 2.25M frames, each annotated with an action label, corresponding to 600
samples per action class. We discuss our data acquisition strategy and the
statistics of our dataset, and benchmark state-of-the-art action anticipation
techniques, including a new multi-modal LSTM architecture with an effective
loss function for action anticipation in driving scenarios.
Comment: Accepted in ACCV 2018
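For illustration, a compact multi-modal LSTM for anticipation might look like the following PyTorch sketch; the layer sizes, fusion by concatenation, and per-timestep classification head are assumptions for illustration, not the authors' exact architecture or loss.

```python
import torch
import torch.nn as nn

class MultiModalLSTM(nn.Module):
    def __init__(self, visual_dim, sensor_dim, hidden=256, num_classes=25):
        super().__init__()
        self.visual_enc = nn.Linear(visual_dim, hidden)
        self.sensor_enc = nn.Linear(sensor_dim, hidden)
        self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, visual, sensor):
        # visual: (B, T, visual_dim) per-frame appearance features
        # sensor: (B, T, sensor_dim) synchronized sensor measurements
        x = torch.cat([torch.relu(self.visual_enc(visual)),
                       torch.relu(self.sensor_enc(sensor))], dim=-1)
        h, _ = self.lstm(x)
        # A prediction at every timestep, so the action can be
        # recognized before the video is over (anticipation).
        return self.head(h)          # (B, T, num_classes)

model = MultiModalLSTM(visual_dim=2048, sensor_dim=10)
logits = model(torch.randn(2, 30, 2048), torch.randn(2, 30, 10))
```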
Magnetic and radar sensing for multimodal remote health monitoring
With increased life expectancy and the rise in health conditions related to aging, there is a need for new technologies that can routinely monitor vulnerable people, identify their daily patterns of activity, and detect anomalies or critical events such as falls. This paper evaluates magnetic and radar sensors as suitable technologies for remote health monitoring, both individually and by fusing their information. After experiments collecting data from 20 volunteers, numerical features were extracted in both the time and frequency domains. To validate the fusion method across different classifiers, a Support Vector Machine with a quadratic kernel and an Artificial Neural Network with one and multiple hidden layers were implemented. Furthermore, for both classifiers, feature selection was performed to obtain salient features. Using this technique along with fusion, both classifiers can detect 10 different activities with an accuracy of approximately 96%. In cases where the user is unknown to the classifier, an accuracy of approximately 92% is maintained.
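A minimal scikit-learn sketch of the fusion-plus-feature-selection pipeline described above, using synthetic data and assumed feature dimensions: magnetic and radar features are concatenated (feature-level fusion), the most salient features are selected, and a quadratic-kernel SVM performs the classification.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200                                    # synthetic stand-in for real recordings
mag_feats = rng.normal(size=(n, 12))       # time/frequency features, magnetic
radar_feats = rng.normal(size=(n, 20))     # time/frequency features, radar
y = rng.integers(0, 10, size=n)            # 10 activity classes

X = np.hstack([mag_feats, radar_feats])    # fusion by concatenation

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=16),          # keep only the most salient features
    SVC(kernel="poly", degree=2),          # quadratic kernel, as in the paper
)
clf.fit(X, y)
print(clf.score(X, y))
```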
Survey on Vision-based Path Prediction
Path prediction is a fundamental task for estimating how pedestrians or
vehicles are going to move in a scene. Because path prediction as a computer
vision task takes video as input, the various cues used for prediction, such
as the environment surrounding the target and the target's internal state,
must be estimated from the video in addition to predicting the paths themselves.
Many prediction approaches that include understanding the environment and the
internal state have been proposed. In this survey, we systematically summarize
methods of path prediction that take video as input and extract features
from the video. Moreover, we introduce datasets used to evaluate path
prediction methods quantitatively.
Comment: DAPI 2018