Dynamic Vision Sensors for Human Activity Recognition
Unlike conventional cameras which capture video at a fixed frame rate,
Dynamic Vision Sensors (DVS) record only changes in pixel intensity values. The
output of DVS is simply a stream of discrete ON/OFF events based on the
polarity of change in its pixel values. DVS has many attractive features such
as low power consumption, high temporal resolution, high dynamic range and
low storage requirements. All these make DVS a very promising camera for
potential applications in wearable platforms where power consumption is a major
concern.
In this paper, we explore the feasibility of using DVS for Human Activity
Recognition (HAR). We propose to use the various slices (such as x-y, x-t,
and y-t) of the DVS video as a feature map for HAR and denote them as Motion
Maps. We show that fusing motion maps with Motion Boundary Histogram (MBH) gives
good performance on the benchmark DVS dataset as well as on a real DVS gesture
dataset collected by us. Interestingly, the performance of DVS is comparable to
that of conventional videos although DVS captures only sparse motion
information.
Comment: 6 pages, 9 figures, accepted at the 4th Asian Conference on Pattern Recognition (ACPR) 2017
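As a concrete illustration of the Motion Map idea above, here is a minimal sketch assuming events arrive as (x, y, t, polarity) tuples; the function name and the time-binning scheme are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def motion_maps(events, width, height, t_bins=64):
    """Project a DVS event stream onto the x-y, x-t and y-t planes of the
    spatio-temporal volume, accumulating signed polarity counts."""
    xs = np.array([e[0] for e in events])               # pixel column
    ys = np.array([e[1] for e in events])               # pixel row
    ts = np.array([e[2] for e in events], float)        # timestamp
    ps = np.array([1 if e[3] else -1 for e in events])  # ON=+1, OFF=-1

    # Quantise timestamps into t_bins discrete temporal slices.
    t_idx = np.minimum(((ts - ts.min()) / (np.ptp(ts) + 1e-9) * t_bins).astype(int),
                       t_bins - 1)

    xy = np.zeros((height, width))   # spatial map of net event polarity
    xt = np.zeros((t_bins, width))   # horizontal motion over time
    yt = np.zeros((t_bins, height))  # vertical motion over time
    np.add.at(xy, (ys, xs), ps)
    np.add.at(xt, (t_idx, xs), ps)
    np.add.at(yt, (t_idx, ys), ps)
    return xy, xt, yt
```

Summing signed polarities rather than raw counts keeps the ON/OFF distinction the sensor reports, which is the only intensity information a DVS provides.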
Human activity recognition for pervasive interaction
PhD Thesis
This thesis addresses the challenge of computing food preparation context in the kitchen. The automatic
recognition of fine-grained human activities and food ingredients is realized through pervasive sensing
which we achieve by instrumenting kitchen objects such as knives, spoons, and chopping boards with
sensors. Context recognition in the kitchen lies at the heart of a broad range of real-world applications. In
particular, activity and food ingredient recognition in the kitchen is an essential component for situated
services such as automatic prompting services for cognitively impaired kitchen users and digital situated
support for healthier eating interventions. Previous works, however, have addressed the activity
recognition problem by exploring high-level human activities using wearable sensing (i.e. sensors worn
on the human body) or using technologies that raise privacy concerns (i.e. computer vision). Although such
approaches have yielded significant results for a number of activity recognition problems, they are not
applicable to our domain of investigation, for which we argue that the technology itself must be genuinely
'invisible', thereby allowing users to perform their activities in a completely natural manner.
In this thesis we describe the development of pervasive sensing technologies and algorithms for fine-grained
human activity and food ingredient recognition in the kitchen. After reviewing previous work on
food and activity recognition we present three systems that constitute increasingly sophisticated
approaches to the challenge of kitchen context recognition. Two of these systems, Slice&Dice and Class-based
Threshold Dynamic Time Warping (CBT-DTW), recognize fine-grained food preparation
activities. Slice&Dice is a proof-of-concept application, whereas CBT-DTW is a real-time application
that also addresses the problem of recognising unknown activities. The final system, KitchenSense, is a
real-time context recognition framework that deals with the recognition of a more complex set of
activities, and includes the recognition of food ingredients and events in the kitchen. For each system, we
describe the prototyping of pervasive sensing technologies and algorithms, as well as real-world experiments
and empirical evaluations that validate the proposed solutions.
Funding: Vietnamese government's 322 project, executed by the Vietnamese Ministry of Education and Training
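To make the CBT-DTW idea above concrete, here is a minimal sketch of the general mechanism of DTW template matching with per-class rejection thresholds, which is one way unknown activities can be flagged. All names are hypothetical; this is not the thesis's actual implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates, thresholds):
    """templates: {activity: [example sequence, ...]};
    thresholds: {activity: maximum acceptable DTW distance}."""
    best_cls, best_d = None, np.inf
    for cls, seqs in templates.items():
        d = min(dtw_distance(query, s) for s in seqs)
        if d < best_d:
            best_cls, best_d = cls, d
    # Reject as unknown when even the best match is too far away.
    if best_cls is None or best_d > thresholds[best_cls]:
        return "unknown"
    return best_cls
```

The per-class threshold is what distinguishes this from plain nearest-template DTW: a query far from every class's templates falls through to "unknown" instead of being forced into the closest class.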
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting the attention and
investment of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives like object detection,
activity recognition, user-machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, most commonly used features,
methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction
Human-activity-centered measurement system: challenges from laboratory to the real environment in assistive gait wearable robotics
Assistive gait wearable robots (AGWR) have shown great advancement in developing intelligent devices to assist humans in their activities of daily living (ADLs). Rapid technological advancement in sensory technology, actuators, materials and computational intelligence has sped up this development process towards more practical and smart AGWR. However, most assistive gait wearable robots are still confined to being controlled and assessed indoors, within laboratory environments, limiting their potential to provide the real assistance and rehabilitation that humans require in real environments. Gait assessment parameters play an important role not only in evaluating patient progress and assistive device performance but also in controlling smart self-adaptable AGWR in real time. Self-adaptable wearable robots must interactively conform to changing environments and to different users to provide optimal functionality and comfort. This paper discusses the performance parameters, such as comfort, safety, adaptability, and energy consumption, which are required for the development of an intelligent AGWR for outdoor environments. The challenges of measuring these parameters with current data collection and analysis systems based on vision capture and wearable sensors are presented and discussed.
Environmental Sensing by Wearable Device for Indoor Activity and Location Estimation
We present results from a set of experiments in this pilot study to
investigate the causal influence of user activity on various environmental
parameters monitored by occupant-carried multi-purpose sensors. Hypotheses with
respect to each type of measurement are verified, including temperature,
humidity, and light level collected during eight typical activities: sitting in
lab / cubicle, indoor walking / running, resting after physical activity,
climbing stairs, taking elevators, and outdoor walking. Our main contribution
is the development of features for activity and location recognition based on
environmental measurements, which exploit location- and activity-specific
characteristics and capture the trends resulting from the underlying
physiological process. The features are statistically shown to have good
separability and are also information-rich. Fusing environmental sensing
together with acceleration is shown to achieve classification accuracy as high
as 99.13%. For building applications, this study motivates a sensor fusion
paradigm for learning individualized activity, location, and environmental
preferences for energy management and user comfort.
Comment: submitted to the 40th Annual Conference of the IEEE Industrial Electronics Society (IECON)
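As a sketch of the sensor-fusion paradigm described above, the snippet below extracts simple level-and-trend features from windows of environmental and acceleration channels and trains a classifier. The window layout, feature set and choice of classifier are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """window: (n_samples, n_channels) array, e.g. columns =
    [temperature, humidity, light, accel_x, accel_y, accel_z].
    Per-channel mean and std capture levels; the fitted slope
    captures the trend within the window."""
    t = np.arange(len(window))
    slope = np.polyfit(t, window, 1)[0]   # linear trend per channel
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           slope])

def train(windows, labels):
    """windows: list of (n_samples, n_channels) arrays; labels: activities."""
    X = np.stack([window_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```

The slope feature is included because, as the abstract notes, much of the discriminative signal lies in trends (e.g. temperature drift while climbing stairs) rather than in absolute levels alone.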
RGBD Datasets: Past, Present and Future
Since the launch of the Microsoft Kinect, scores of RGBD datasets have been
released. These have propelled advances in areas from reconstruction to gesture
recognition. In this paper we explore the field, reviewing datasets across
eight categories: semantics, object pose estimation, camera tracking, scene
reconstruction, object tracking, human actions, faces and identification. By
extracting relevant information in each category we help researchers to find
appropriate data for their needs, and we consider which datasets have succeeded
in driving computer vision forward and why.
Finally, we examine the future of RGBD datasets. We identify key areas which
are currently underexplored, and suggest that future directions may include
synthetic data and dense reconstructions of static and dynamic scenes.
Comment: 8 pages excluding references (CVPR style)