Egocentric Activity Recognition with Multimodal Fisher Vector
With the increasing availability of wearable devices, research on egocentric
activity recognition has received much attention recently. In this paper, we
build a Multimodal Egocentric Activity dataset which includes egocentric videos
and sensor data of 20 fine-grained and diverse activity categories. We present
a novel strategy to extract temporal trajectory-like features from sensor data.
We propose to apply the Fisher Kernel framework to fuse video and temporally
enhanced sensor features. Experimental results show that, with careful design
of the feature extraction and fusion algorithms, sensor data can enhance
information-rich video data. We make publicly available the Multimodal
Egocentric Activity dataset to facilitate future research.
Comment: 5 pages, 4 figures, ICASSP 2016 accepted
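The Fisher Kernel fusion described above encodes local descriptors as gradients of a Gaussian mixture model's log-likelihood. A minimal NumPy sketch of the standard Fisher vector encoding (diagonal-covariance GMM, gradients with respect to means and standard deviations, with the usual power and L2 normalization) is below; the GMM parameters and descriptors are placeholders, not the paper's actual features.

```python
import numpy as np

def fisher_vector(X, means, sigmas, priors):
    """Fisher vector of local descriptors X (N, D) under a diagonal-covariance
    GMM with K components (means, sigmas: (K, D); priors: (K,)).
    Illustrative sketch of the standard encoding, not the paper's exact pipeline."""
    N, D = X.shape
    # whitened residuals of each descriptor against each component
    z = (X[:, None, :] - means[None, :, :]) / sigmas[None, :, :]      # (N, K, D)
    log_p = (-0.5 * np.sum(z ** 2 + np.log(2 * np.pi * sigmas ** 2)[None],
                           axis=2) + np.log(priors)[None, :])          # (N, K)
    log_p -= log_p.max(axis=1, keepdims=True)       # numerical stability
    q = np.exp(log_p)
    q /= q.sum(axis=1, keepdims=True)               # posteriors (N, K)
    # gradients w.r.t. means and standard deviations
    g_mu = np.einsum('nk,nkd->kd', q, z) / (N * np.sqrt(priors)[:, None])
    g_sig = np.einsum('nk,nkd->kd', q, z ** 2 - 1) / (N * np.sqrt(2 * priors)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])                 # (2*K*D,)
    # power normalization followed by L2 normalization, as is standard
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)
```

Fusing modalities then amounts to concatenating the normalized Fisher vectors computed from video descriptors and from the trajectory-like sensor features.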
Extraction and Classification of Diving Clips from Continuous Video Footage
Due to recent advances in technology, the recording and analysis of video
data has become an increasingly common component of athlete training
programmes. Today it is incredibly easy and affordable to set up a fixed camera
and record athletes in a wide range of sports, such as diving, gymnastics,
golf, tennis, etc. However, the manual analysis of the obtained footage is a
time-consuming task which involves isolating actions of interest and
categorizing them using domain-specific knowledge. In order to automate this
kind of task, three challenging sub-problems are often encountered: 1)
temporally cropping events/actions of interest from continuous video; 2)
tracking the object of interest; and 3) classifying the events/actions of
interest.
Most previous work has focused on solving just one of the above sub-problems
in isolation. In contrast, this paper provides a complete solution to the
overall action monitoring task in the context of a challenging real-world
exemplar. Specifically, we address the problem of diving classification. This
is a challenging problem since the person (diver) of interest typically
occupies fewer than 1% of the pixels in each frame. The model is required to
learn the temporal boundaries of a dive, even though other divers and
bystanders may be in view. Finally, the model must be sensitive to subtle
changes in body pose over a large number of frames to determine the
classification code. We provide effective solutions to each of the sub-problems
which combine to provide a highly functional solution to the task as a whole.
The techniques proposed can be easily generalized to video footage recorded
from other sports.
Comment: To appear at CVsports 201
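The first sub-problem listed above, temporally cropping events of interest from continuous video, can be illustrated with a simple baseline: given per-frame event scores (however obtained), keep contiguous runs above a threshold that last a minimum number of frames. This is only a sketch with assumed `threshold` and `min_len` parameters; the paper's cropping model is learned, not a fixed threshold.

```python
def extract_segments(scores, threshold=0.5, min_len=8):
    """Return (start, end) frame-index pairs of contiguous runs of
    scores >= threshold lasting at least min_len frames.
    Illustrative baseline for temporal event cropping."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                     # run begins
        elif s < threshold and start is not None:
            if i - start >= min_len:      # keep only sufficiently long runs
                segments.append((start, i))
            start = None
    if start is not None and len(scores) - start >= min_len:
        segments.append((start, len(scores)))   # run extends to the last frame
    return segments
```

Short spurious detections (e.g., a bystander briefly scoring high) are suppressed by the minimum-length constraint.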
Flight Dynamics-based Recovery of a UAV Trajectory using Ground Cameras
We propose a new method to estimate the 6-dof trajectory of a flying object
such as a quadrotor UAV within a 3D airspace monitored using multiple fixed
ground cameras. It is based on a new structure from motion formulation for the
3D reconstruction of a single moving point with known motion dynamics. Our main
contribution is a new bundle adjustment procedure which in addition to
optimizing the camera poses, regularizes the point trajectory using a prior
based on motion dynamics (or specifically flight dynamics). Furthermore, we can
infer the underlying control input sent to the UAV's autopilot that determined
its flight trajectory.
Our method requires neither perfect single-view tracking nor appearance
matching across views. For robustness, we allow the tracker to generate
multiple detections per frame in each video. The true detections and the data
association across videos are estimated using robust multi-view triangulation
and subsequently refined during our bundle adjustment procedure. Quantitative
evaluation on simulated data and experiments on real videos from indoor and
outdoor scenes demonstrate the effectiveness of our method.
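The idea of regularizing a point trajectory with a motion-dynamics prior can be sketched in a much-simplified form: replace the full flight-dynamics model and bundle adjustment with a linear least-squares smoother that penalizes the second difference (acceleration) of an already-triangulated 3D trajectory. The `lam` weight and the constant-velocity prior are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def smooth_trajectory(obs, lam=10.0):
    """Smooth a noisy 3D point trajectory obs (T, 3) by minimizing
    ||x - obs||^2 + lam * ||D2 x||^2, where D2 is the second-difference
    (acceleration) operator. A linear-least-squares stand-in for the
    paper's flight-dynamics prior inside bundle adjustment."""
    T = obs.shape[0]
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]   # x_{t} - 2 x_{t+1} + x_{t+2}
    # normal equations of the quadratic objective: (I + lam * D2^T D2) x = obs
    A = np.eye(T) + lam * (D2.T @ D2)
    return np.linalg.solve(A, obs)          # solved independently per axis
```

A constant-velocity (zero-acceleration) trajectory is a fixed point of this smoother, since its second differences vanish; noisy observations are pulled toward such a trajectory as `lam` grows.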
Use of nonintrusive sensor-based information and communication technology for real-world evidence for clinical trials in dementia
Cognitive function is an important end point of treatments in dementia clinical trials. Measuring cognitive function by standardized tests, however, is biased toward highly constrained environments (such as hospitals) in selected samples. Patient-powered real-world evidence using information and communication technology devices, including environmental and wearable sensors, may help to overcome these limitations. This position paper describes current and novel information and communication technology devices and algorithms to continuously monitor behavior and function in people with prodromal and manifest stages of dementia, and discusses clinical, technological, ethical, regulatory, and user-centered requirements for collecting real-world evidence in future randomized controlled trials. Challenges of data safety, quality, and privacy, as well as regulatory requirements, need to be addressed by future smart sensor technologies. When these requirements are satisfied, such technologies will provide access to truly user-relevant outcomes and broader cohorts of participants than are currently sampled in clinical trials.