Early Recognition of Human Activities from First-Person Videos Using Onset Representations
In this paper, we propose a methodology for early recognition of human
activities from videos taken from a first-person viewpoint. Early recognition,
also known as activity prediction, is the ability to infer an ongoing
activity at its early stage. We present an algorithm to recognize
activities targeted at the camera from streaming videos, enabling the system
to predict the intended activities of the interacting person and avert harmful
events before they actually happen.
before they actually happen. We introduce the novel concept of 'onset' that
efficiently summarizes pre-activity observations, and design an approach to
consider event history in addition to ongoing video observation for early
first-person recognition of activities. We propose to represent onset using
cascade histograms of time series gradients, and we describe a novel
algorithmic setup to take advantage of onset for early recognition of
activities. The experimental results clearly illustrate that the proposed
concept of onset enables better and earlier recognition of human activities
from first-person videos.
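The onset descriptor, "cascade histograms of time-series gradients," can be read as a temporal pyramid of gradient histograms over per-frame feature series. The following is a minimal pure-Python sketch under that interpretation; the function names, bin counts, gradient range, and normalization are illustrative assumptions, not the paper's implementation.

```python
# Sketch: summarize a scalar feature time series (e.g. per-frame motion
# magnitude before the activity starts) as cascaded histograms of its
# temporal gradients. All parameters below are illustrative assumptions.

def gradient_histogram(series, n_bins=4, lo=-1.0, hi=1.0):
    """Histogram of frame-to-frame gradients of a scalar feature series."""
    grads = [b - a for a, b in zip(series, series[1:])]
    hist = [0] * n_bins
    width = (hi - lo) / n_bins
    for g in grads:
        idx = min(n_bins - 1, max(0, int((g - lo) / width)))
        hist[idx] += 1
    total = sum(hist) or 1
    # Normalize so segments of different lengths are comparable.
    return [h / total for h in hist]

def onset_representation(series, levels=2):
    """Cascade: concatenate gradient histograms over 1, 2, 4, ... segments."""
    feats = []
    for level in range(levels + 1):
        n_seg = 2 ** level
        seg_len = max(1, len(series) // n_seg)
        for s in range(n_seg):
            # Overlap by one frame so each segment has its own gradients.
            seg = series[s * seg_len:(s + 1) * seg_len + 1]
            if len(seg) >= 2:
                feats.extend(gradient_histogram(seg))
    return feats
```

Concatenating histograms from progressively finer segments preserves coarse-to-fine temporal ordering of the pre-activity observations while staying fixed-length for a downstream classifier.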
Active Clothing Material Perception using Tactile Sensing and Deep Learning
Humans represent and discriminate objects in the same category by their
properties, and an intelligent robot should be able to do the same. In
this paper, we build a robot system that can autonomously perceive object
properties through touch. We work on the common object category of clothing.
The robot moves under the guidance of an external Kinect sensor and squeezes
the clothes with a GelSight tactile sensor; it then recognizes 11 properties
of the clothing from the tactile data. These properties include physical
properties, such as thickness, fuzziness, softness, and durability, and
semantic properties, such as wearing season and preferred washing methods. We
collect a dataset of 153 varied pieces of clothing and conduct 6616 robot
exploration iterations on them. To extract useful information from the
high-dimensional sensory output, we apply Convolutional Neural Networks (CNNs)
to the tactile data for recognizing the clothing properties, and to the Kinect
depth images for selecting exploration locations. Experiments show that, using
the trained neural networks, the robot can autonomously explore unknown
clothes and learn their properties. This work proposes a new framework for an
active tactile perception system combining vision and touch, and has the
potential to enable robots to help humans with varied clothing-related
housework.
Comment: accepted to ICRA 2018
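The recognition step, a CNN over tactile data producing one prediction per clothing property, can be illustrated with a toy multi-label pipeline. The sketch below mirrors only the structure (convolution, pooling, per-property scores); the kernels, weights, and threshold are hand-picked toy values, not the paper's trained networks.

```python
# Toy sketch of CNN-style multi-label property recognition on a tactile
# "image". All weights here are illustrative; the actual system trains
# CNNs on GelSight data.

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def global_max_pool(fmap):
    """Collapse a feature map to its strongest response."""
    return max(max(row) for row in fmap)

def predict_properties(image, kernels, weights, bias, threshold=0.0):
    """One boolean per property: independent multi-label decisions."""
    feats = [global_max_pool(conv2d(image, k)) for k in kernels]
    scores = [sum(w * f for w, f in zip(ws, feats)) + b
              for ws, b in zip(weights, bias)]
    return [s > threshold for s in scores]
```

The key structural point is that properties such as thickness and fuzziness are predicted independently from a shared feature vector, so one tactile squeeze yields all 11 labels at once.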