Human Pose Estimation using Global and Local Normalization
In this paper, we address the problem of estimating the positions of human
joints, i.e., articulated pose estimation. Recent state-of-the-art solutions
model two key issues, joint detection and spatial configuration refinement,
together using convolutional neural networks. Our work mainly focuses on
spatial configuration refinement by reducing variations of human poses
statistically, which is motivated by the observation that the scattered
distribution of the relative locations of joints (e.g., the left wrist is
distributed nearly uniformly in a circular area around the left shoulder) makes
the learning of convolutional spatial models hard. We present a two-stage
normalization scheme, human body normalization and limb normalization, to make
the distribution of the relative joint locations compact, resulting in easier
learning of convolutional spatial models and more accurate pose estimation. In
addition, our empirical results show that incorporating multi-scale supervision
and multi-scale fusion into the joint detection network is beneficial.
Experimental results demonstrate that our method consistently outperforms
state-of-the-art methods on the benchmarks.
Comment: ICCV 2017
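The normalization scheme is described only at a high level above; the following is a minimal sketch of what a two-stage (body, then limb) normalization could look like, not the authors' implementation. The joint indices, frame conventions, and function names are illustrative assumptions.

```python
import numpy as np

def normalize_body(joints, neck_idx, pelvis_idx, target_len=1.0):
    """Stage 1 (body normalization, sketch): rotate and scale all joint
    coordinates so the torso axis (neck -> pelvis) points straight down
    with a fixed length.  `joints` is an (N, 2) array of (x, y) image
    coordinates; the landmark indices are hypothetical."""
    axis = joints[pelvis_idx] - joints[neck_idx]
    angle = np.arctan2(axis[0], axis[1])        # deviation from the +y axis
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    centered = joints - joints[neck_idx]
    scale = target_len / (np.linalg.norm(axis) + 1e-8)
    return centered @ rot.T * scale

def normalize_limb(joints, parent_idx, child_idx):
    """Stage 2 (limb normalization, sketch): re-express all joints in a
    frame centered on a parent joint and rotated so the parent -> child
    limb lies along the +x axis, making the distribution of the child's
    relative location compact."""
    limb = joints[child_idx] - joints[parent_idx]
    angle = np.arctan2(limb[1], limb[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    return (joints - joints[parent_idx]) @ rot.T
```

Applied across a dataset, this kind of canonicalization is what shrinks the nearly uniform circular spread of, e.g., wrist-around-shoulder offsets into a compact cluster that a convolutional spatial model can learn more easily.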
Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction
The visual focus of attention (VFOA) has been recognized as a prominent
conversational cue. We are interested in estimating and tracking the VFOAs
associated with multi-party social interactions. We note that in this type of
situation the participants either look at each other or at an object of
interest; therefore their eyes are not always visible. Consequently, neither
gaze nor VFOA estimation can be based on eye detection and tracking. We propose a
method that exploits the correlation between eye gaze and head movements. Both
VFOA and gaze are modeled as latent variables in a Bayesian switching
state-space model. The proposed formulation leads to a tractable learning
procedure and to an efficient algorithm that simultaneously tracks gaze and
visual focus. The method is tested and benchmarked using two publicly available
datasets that contain typical multi-party human-robot and human-human
interactions.
Comment: 15 pages, 8 figures, 6 tables
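The full model in the paper is a Bayesian switching state-space model in which both gaze and VFOA are latent. As a rough, much-simplified illustration (not the authors' algorithm), the sketch below collapses the continuous gaze state and runs a plain HMM-style forward filter over discrete VFOA targets, exploiting the well-known coupling that head orientation follows only a fraction of a gaze shift. All parameter names and values (`kappa`, `sigma`, `trans`) are assumptions for illustration.

```python
import numpy as np

def vfoa_forward(head_obs, target_dirs, trans, kappa=0.5, sigma=0.2):
    """Filter a posterior over K candidate VFOA targets from observed
    head yaw angles.

    head_obs:    (T,) observed head yaw per frame (radians)
    target_dirs: (K,) gaze direction toward each candidate target
    trans:       (K, K) row-stochastic VFOA transition matrix
    kappa:       assumed fraction of a gaze shift carried by the head
    sigma:       assumed head-pose observation noise (radians)
    """
    T, K = len(head_obs), len(target_dirs)
    # Gaussian log-likelihood of each head observation under each target,
    # using the coupling: expected head yaw = kappa * gaze direction.
    log_lik = -0.5 * ((head_obs[:, None] - kappa * target_dirs[None, :]) / sigma) ** 2
    alpha = np.full(K, 1.0 / K)            # uniform prior over targets
    posterior = np.empty((T, K))
    for t in range(T):
        alpha = (alpha @ trans) * np.exp(log_lik[t])   # predict + update
        alpha /= alpha.sum()
        posterior[t] = alpha
    return posterior
```

The paper's switching formulation additionally tracks gaze as a continuous latent trajectory within each VFOA regime, which is what allows both quantities to be estimated jointly when the eyes are not visible.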
Human-centric light sensing and estimation from RGBD images: the invisible light switch
Lighting design in indoor environments is of primary importance for at least two reasons: 1) people should perceive adequate light; 2) an effective lighting design means considerable energy saving. We present the Invisible Light Switch (ILS) to address both aspects. ILS dynamically adjusts the room illumination level to save energy while keeping the users' perceived light level constant, so the energy saving is invisible to them. ILS leverages a radiosity model to estimate the light level perceived by a person within an indoor environment, taking into account the person's position and viewing frustum (head pose). ILS can therefore dim the luminaires that are not seen by the user, resulting in effective energy saving, especially in large open offices (where light may otherwise be on everywhere for a single person). To quantify the system's performance, we collected a new dataset in which people wear luxmeter devices while working in office rooms. The luxmeters measure the amount of light (in lux) reaching the people's gaze, which we consider a proxy for their perceived illumination level. Our initial results are promising: in a room with 8 LED luminaires, the energy consumption over a day may be reduced from 18585 to 6206 watts with ILS (which itself needs 1560 watts to operate). Meanwhile, the perceived lighting drops by just 200 lux, a value considered negligible when the original illumination level is above 1200 lux, as is normally the case in offices.
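As a toy illustration of the dimming policy (not the paper's radiosity model), the sketch below treats the lux perceived at the user's gaze as a linear combination of per-luminaire contributions and dims the luminaires outside the viewing frustum to a low floor. All numbers, names, and the linearity assumption are hypothetical.

```python
import numpy as np

def perceived_lux(dim_levels, contrib):
    """Lux at the user's gaze as a linear mix of luminaire outputs;
    contrib[i] is the (assumed known) lux that luminaire i delivers at
    full power, a quantity the paper instead derives from radiosity."""
    return float(dim_levels @ contrib)

def dim_unseen(dim_levels, in_frustum, floor=0.1):
    """Dim every luminaire outside the user's viewing frustum to a low
    floor, leaving the visible ones untouched (a simplification of ILS)."""
    out = np.asarray(dim_levels, dtype=float).copy()
    out[~np.asarray(in_frustum)] = floor
    return out

# Hypothetical 8-luminaire room: the user sees only the first three,
# which also dominate the lux reaching her/his gaze.
contrib = np.array([500.0, 450.0, 350.0, 60.0, 50.0, 40.0, 30.0, 20.0])
seen = np.array([True, True, True, False, False, False, False, False])
full = np.ones(8)
dimmed = dim_unseen(full, seen)
print(perceived_lux(full, contrib), "->", perceived_lux(dimmed, contrib))
# 1500.0 -> 1320.0: a small perceived drop for a large power saving
```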