Towards Egocentric Person Re-identification and Social Pattern Analysis
Wearable cameras capture a first-person view of the daily activities of the
camera wearer, offering a visual diary of the user's behaviour. Detecting the
people the camera wearer interacts with is of high interest for the analysis of
social interactions. Generally speaking, social events, lifestyle and
health are highly correlated, but there is a lack of tools to monitor and
analyse them. We consider that egocentric vision provides a tool to obtain
information about and understand users' social interactions. We propose a model that
enables us to evaluate and visualize social traits obtained by analysing social
interactions appearance within egocentric photostreams. Given sets of
egocentric images, we detect the appearance of faces within the days of the
camera wearer, and rely on clustering algorithms to group their feature
descriptors in order to re-identify persons. Recurrence of detected faces
within photostreams allows us to shape an idea of the social pattern of
behaviour of the user. We validated our model over several weeks recorded by
different camera wearers. Our findings indicate that social profiles are
potentially useful for social behaviour interpretation.
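The pipeline described above (detect faces, extract feature descriptors, cluster them to re-identify people, then read social patterns off the recurrence of clusters) can be illustrated with a minimal greedy clustering over synthetic descriptors. This is only a sketch, not the authors' implementation: the function name, the cosine-distance threshold, and the toy embeddings are assumptions for illustration.

```python
import numpy as np

def cluster_face_descriptors(descriptors, threshold=0.5):
    """Greedy clustering of face descriptors by cosine distance.

    Each descriptor joins the first cluster whose (normalized) centroid
    lies within `threshold` cosine distance, otherwise it opens a new
    cluster. Recurring cluster labels across days then hint at people
    the wearer meets repeatedly.
    """
    sums, labels = [], []  # per-cluster running sum of member descriptors
    for d in descriptors:
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        best, best_dist = None, threshold
        for i, s in enumerate(sums):
            c = s / np.linalg.norm(s)       # normalized cluster centroid
            dist = 1.0 - float(d @ c)       # cosine distance
            if dist < best_dist:
                best, best_dist = i, dist
        if best is None:
            sums.append(d.copy())           # open a new cluster
            labels.append(len(sums) - 1)
        else:
            sums[best] = sums[best] + d     # update running centroid sum
            labels.append(best)
    return labels

# Toy "face embeddings": two well-separated identities plus small noise.
rng = np.random.default_rng(0)
id_a, id_b = rng.normal(size=128), rng.normal(size=128)
faces = [id_a + 0.05 * rng.normal(size=128) for _ in range(3)] + \
        [id_b + 0.05 * rng.normal(size=128) for _ in range(2)]
print(cluster_face_descriptors(faces))  # → [0, 0, 0, 1, 1]
```

Real systems would replace the toy embeddings with descriptors from a face-embedding network and typically use density-based clustering, but the recurrence idea is the same.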
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real-time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives like object detection,
activity recognition, user machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, most commonly used features,
methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart
Glasses, Computer Vision, Video Analytics, Human-machine Interaction
Future Person Localization in First-Person Videos
We present a new task that predicts future locations of people observed in
first-person videos. Consider a first-person video stream continuously recorded
by a wearable camera. Given a short clip of a person that is extracted from the
complete stream, we aim to predict that person's location in future frames. To
facilitate this future person localization ability, we make the following three
key observations: a) First-person videos typically involve significant
ego-motion which greatly affects the location of the target person in future
frames; b) Scales of the target person act as a salient cue to estimate a
perspective effect in first-person videos; c) First-person videos often capture
people up-close, making it easier to leverage target poses (e.g., where they
look) for predicting their future locations. We incorporate these three
observations into a prediction framework with a multi-stream
convolution-deconvolution architecture. Experimental results reveal our method
to be effective on our new dataset as well as on a public social interaction
dataset.
Comment: Accepted to CVPR 201
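As a much simpler illustration of observation (a) above — that ego-motion strongly affects where the target person appears in future frames — the sketch below extrapolates an on-image track after subtracting a camera-induced shift. The function name, the constant-velocity assumption, and the toy numbers are mine; the paper's actual model is a learned multi-stream convolution-deconvolution network, not this baseline.

```python
import numpy as np

def predict_future_locations(past_xy, ego_shift_per_frame, n_future=5):
    """Toy ego-motion-compensated constant-velocity baseline.

    past_xy: (T, 2) past on-image locations of the target person.
    ego_shift_per_frame: (2,) apparent per-frame image shift induced
    by the camera wearer's own motion.
    """
    past_xy = np.asarray(past_xy, dtype=float)
    ego = np.asarray(ego_shift_per_frame, dtype=float)
    # Subtract the camera-induced shift to isolate the person's own motion.
    person_motion = np.diff(past_xy, axis=0) - ego
    v = person_motion.mean(axis=0)              # constant-velocity estimate
    steps = np.arange(1, n_future + 1)[:, None]
    # Re-add the ego shift when projecting back onto future frames.
    return past_xy[-1] + steps * (v + ego)

past = [[100, 200], [104, 198], [108, 196]]     # observed track on the image
future = predict_future_locations(past, ego_shift_per_frame=[2, 0], n_future=3)
print(future)
```

With a per-frame ego shift of (2, 0), the person's own velocity works out to (2, -2), so the predicted on-image track continues at (4, -2) per frame.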
Lifestyle understanding through the analysis of egocentric photo-streams
At 8:15, before going to work, Rose puts on her pullover and attaches to it the small portable camera that looks like a hanger. The camera will take two images per minute throughout the day and will record almost everything Rose experiences: the people she meets, how long she sits in front of her computer, what she eats, where she goes, etc. These images give an objective description of Rose's experiences. This thesis addresses the development of automatic computer vision tools for the study of people's behaviours. To this end, we rely on the analysis of the visual data offered by these sequences of images collected by wearable cameras. Our models have demonstrated to be a powerful tool for the extraction of information about the behaviour of people in society. Examples of applications: 1) Selected images as cues to trigger autobiographical memory about past events, for the prevention of cognitive and functional decline and for memory enhancement in elderly people. 2) Self-monitoring, since people want to increase their self-knowledge through quantitative analysis, expecting that it will lead to psychological well-being and the improvement of their lifestyle. 3) Businesses are already making use of such data about their employees and clients in order to improve productivity, well-being and customer satisfaction. The ultimate goal is to help people like Rose improve their quality of life by creating awareness of their habits and life balance.
Behaviour understanding through the analysis of image sequences collected by wearable cameras
Describing people's lifestyle has become a hot topic in the field of artificial intelligence. Lifelogging is described as the process of collecting personal activity data describing the daily behaviour of a person. Nowadays, the development of new technologies and the increasing use of wearable sensors allow us to automatically record data from our daily living. In this paper, we describe the automatic tools we developed for the analysis of collected visual data that describes the daily behaviour of a person. For this analysis, we rely on sequences of images collected by wearable cameras, which are called egocentric photo-streams. These images are a rich source of information about the behaviour of the camera wearer, since they show an objective and first-person view of his or her lifestyle.
Analysis of the hands in egocentric vision: A survey
Egocentric vision (a.k.a. first-person vision - FPV) applications have
thrived over the past few years, thanks to the availability of affordable
wearable cameras and large annotated datasets. The position of the wearable
camera (usually mounted on the head) allows recording exactly what the camera
wearers have in front of them, in particular hands and manipulated objects.
This intrinsic advantage enables the study of the hands from multiple
perspectives: localizing hands and their parts within the images; understanding
what actions and activities the hands are involved in; and developing
human-computer interfaces that rely on hand gestures. In this survey, we review
the literature that focuses on the hands using egocentric vision, categorizing
the existing approaches into: localization (where are the hands or parts of
them?); interpretation (what are the hands doing?); and application (e.g.,
systems that used egocentric hand cues for solving a specific problem).
Moreover, a list of the most prominent datasets with hand-based annotations is
provided.