7,462 research outputs found
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
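The spoke-shift manipulation described above can be sketched in a few lines: each rectangle is moved radially along the imaginary spoke from central fixation by ±1 degree of visual angle. The coordinates, eccentricity, and random seed below are illustrative assumptions, not values from the study.

```python
import math
import random

def spoke_shift(positions, shift_deg=1.0, rng=None):
    """Shift each position by +/- shift_deg along its spoke from fixation.

    Positions are (x, y) pairs in degrees of visual angle relative to the
    central fixation point at the origin.
    """
    rng = rng or random.Random(0)
    shifted = []
    for (x, y) in positions:
        r = math.hypot(x, y)
        ux, uy = x / r, y / r                     # unit vector along the spoke
        d = shift_deg if rng.random() < 0.5 else -shift_deg
        shifted.append((x + d * ux, y + d * uy))
    return shifted

# Eight rectangles on a circle of 5 degrees eccentricity (illustrative).
pos = [(5 * math.cos(k * math.pi / 4), 5 * math.sin(k * math.pi / 4))
       for k in range(8)]
new_pos = spoke_shift(pos)
```

After the shift, every rectangle sits at an eccentricity of either 4 or 6 degrees, so global spatial relations between items change while each item stays on its own spoke.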
Human-centric light sensing and estimation from RGBD images: the invisible light switch
Lighting design in indoor environments is of primary importance for at least two reasons: 1) people should perceive adequate light; 2) effective lighting design means consistent energy savings. We present the Invisible Light Switch (ILS) to address both aspects. ILS dynamically adjusts the room illumination level to save energy while keeping the users' perceived light level constant, so the energy saving is invisible to them. Our proposed ILS leverages a radiosity model to estimate the light level perceived by a person within an indoor environment, taking into account the person's position and viewing frustum (head pose). ILS may therefore dim those luminaires which are not seen by the user, resulting in effective energy savings, especially in large open offices (where light may otherwise be ON everywhere for a single person). To quantify the system's performance, we have collected a new dataset in which people wear luxmeter devices while working in office rooms. The luxmeters measure the amount of light (in lux) reaching the people's gaze, which we consider a proxy for their perceived illumination level. Our initial results are promising: in a room with 8 LED luminaires, the energy consumption in a day may be reduced from 18585 to 6206 watts with ILS (which itself needs 1560 watts to operate). While doing so, the perceived lighting drops by just 200 lux, a value considered negligible when the original illumination level is above 1200 lux, as is normally the case in offices.
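The core ILS dimming rule described above can be sketched as: luminaires outside the user's viewing frustum are dimmed, the rest stay at full power. All names, power levels, and geometry below are illustrative assumptions for a 2D floor plan, not the paper's actual radiosity-based implementation.

```python
import math

def in_frustum(user_pos, head_dir, lum_pos, half_angle_deg=60.0):
    """True if a luminaire falls inside the user's viewing frustum (2D sketch)."""
    vx, vy = lum_pos[0] - user_pos[0], lum_pos[1] - user_pos[1]
    norm = math.hypot(vx, vy) or 1e-9
    cos_angle = (vx * head_dir[0] + vy * head_dir[1]) / norm
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def ils_dimming(user_pos, head_dir, luminaires,
                full_power=195.0, dim_power=20.0):
    """Return per-luminaire power (watts) under the hypothetical ILS policy."""
    return [full_power if in_frustum(user_pos, head_dir, p) else dim_power
            for p in luminaires]

# Example: a user at the origin looking along +x in a room with 8 luminaires.
lums = [(x, y) for x in (1.0, 3.0) for y in (-3.0, -1.0, 1.0, 3.0)]
powers = ils_dimming((0.0, 0.0), (1.0, 0.0), lums)
saving = len(lums) * 195.0 - sum(powers)   # watts saved vs. all-on baseline
```

The real system replaces the binary visible/invisible test with a radiosity estimate of the lux actually reaching the user's gaze, so dimming can be graded rather than on/off.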
MoSculp: Interactive Visualization of Shape and Time
We present a system that allows users to visualize complex human motion via
3D motion sculptures, a representation that conveys the 3D structure swept by
a human body as it moves through space. Given an input video, our system
computes the motion sculpture and provides a user interface for rendering it
in different styles, including options to insert the sculpture back into
the original video, render it in a synthetic scene, or physically print it.
To provide this end-to-end workflow, we introduce an algorithm that estimates
the human's 3D geometry over time from a set of 2D images, and develop a
3D-aware image-based rendering approach that embeds the sculpture back into the
scene. By automating the process, our system takes motion sculpture creation
out of the realm of professional artists and makes it applicable to a wide
range of existing video material.
By providing viewers with 3D information, motion sculptures reveal space-time
motion information that is difficult to perceive with the naked eye, and allow
viewers to interpret how different parts of the object interact over time. We
validate the effectiveness of this approach with user studies, finding that our
motion sculpture visualizations are significantly more informative about motion
than existing stroboscopic and space-time visualization methods.
Comment: UIST 2018. Project page: http://mosculp.csail.mit.edu
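The idea of a structure "swept by a body as it moves through space" can be sketched as accumulating a moving shape's outline over time into a single (x, y, t) point set. This is purely illustrative; the actual MoSculp pipeline recovers the human's 3D geometry per frame before sweeping it.

```python
import math

def circle_points(cx, cy, r, n=16):
    """Outline of a disc as n points (a stand-in for a per-frame silhouette)."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def sweep(frames):
    """Stack per-frame 2D outlines into a single (x, y, t) point cloud."""
    cloud = []
    for t, outline in enumerate(frames):
        cloud.extend((x, y, float(t)) for (x, y) in outline)
    return cloud

# A disc translating right over 5 frames sweeps out a slanted tube in (x, y, t).
frames = [circle_points(cx=0.5 * t, cy=0.0, r=1.0) for t in range(5)]
sculpture = sweep(frames)
```

Rendering this point cloud as a surface (and substituting real time-varying 3D body geometry for the disc) gives the motion sculpture the abstract describes.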
Learning Manipulation under Physics Constraints with Visual Perception
Understanding physical phenomena is a key competence that enables humans and
animals to act and interact under uncertain perception in previously unseen
environments containing novel objects and configurations. In this work,
we consider the problem of autonomous block stacking and explore solutions to
learning manipulation under physics constraints with the visual perception
inherent to the task. Inspired by intuitive physics in humans, we first present
an end-to-end learning-based approach to predict stability directly from
appearance, contrasting it with a more traditional model-based approach with
explicit 3D representations and physical simulation. We study the model's
behavior together with an accompanying human subject test. The model is then
integrated into a real-world robotic system to guide the placement of a single
wood block into the scene without collapsing the existing tower structure. To
further automate the process of stacking consecutive blocks, we present an
alternative approach in which the model learns the physics constraints through
interaction with the environment, bypassing the dedicated physics learning of
the former part of this work. In particular, we are interested in tasks that
require the agent to reach a given goal state that may be different for every
new trial. We therefore propose a deep reinforcement learning framework that
learns policies for stacking tasks parametrized by a target structure.
Comment: arXiv admin note: substantial text overlap with arXiv:1609.04861,
arXiv:1711.00267, arXiv:1604.0006
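A policy "parametrized by a target structure" is usually realized by conditioning the policy's input on an encoding of the goal, so one network serves many target towers. The encoding and feature shapes below are illustrative assumptions, not the paper's architecture.

```python
def encode_target(structure):
    """Flatten a target tower (list of (x, y) block positions) into a vector."""
    return [coord for block in structure for coord in block]

def policy_input(observation, target_structure):
    """Concatenate observation features with the encoded goal.

    The resulting vector is what a goal-conditioned policy network would
    consume; changing the target changes the behavior without retraining.
    """
    return list(observation) + encode_target(target_structure)

obs = [0.1, 0.2, 0.3]                  # e.g. current block pose features
target = [(0.0, 0.0), (0.0, 1.0)]      # a hypothetical 2-block target tower
x = policy_input(obs, target)          # vector fed to the policy network
```

Because the goal enters the input rather than the reward alone, the same trained policy can be asked for a different tower on every new trial, matching the per-trial goal states described above.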
Smoothness perception: investigation of the effect of beat rate on frame rate perception
Despite the complexity of the Human Visual System (HVS), research over the last few decades has highlighted a number of its limitations. These limitations can be exploited in computer graphics to significantly reduce computational cost, and thus required rendering time, without a viewer perceiving any difference in resultant image quality. Furthermore, cross-modal interaction between different modalities, such as the influence of audio on visual perception, has also been shown to be significant both in psychology and in computer graphics. In this paper we investigate the effect of beat rate on temporal visual perception, i.e. frame rate perception. For the visual quality and perception evaluation, a series of psychophysical experiments was conducted and the data analysed. The results indicate that beat rates in some cases do affect temporal visual perception, and that certain beat rates can be used to reduce the amount of rendering required to achieve perceptually high quality. This is another step towards a comprehensive understanding of auditory-visual cross-modal interaction and could potentially be used in high-fidelity interactive multi-sensory virtual environments.
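The exploitation the abstract suggests can be sketched as a simple control rule: render at a lower frame rate only when the accompanying beat rate is one the experiments found to mask temporal artifacts. The beat rates and frame rates below are invented placeholders, not values from the study.

```python
# Hypothetical set of "masking" beat rates (beats per minute). In practice
# these would come from the psychophysical results, not be hard-coded.
MASKING_BEAT_RATES_BPM = {120, 140}

def target_frame_rate(beat_rate_bpm, full_fps=60, reduced_fps=30):
    """Drop the rendering frame rate only when the beat rate masks the change."""
    if beat_rate_bpm in MASKING_BEAT_RATES_BPM:
        return reduced_fps
    return full_fps
```

A renderer would query this per audio segment, cutting the rendering workload roughly in half during masking segments while leaving perception of smoothness intact.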