Event Retrieval Using Motion Barcodes
We introduce a simple and effective method for retrieval of videos showing a
specific event, even when the videos of that event were captured from
significantly different viewpoints. Appearance-based methods fail in such
cases, as appearance changes greatly under large viewpoint changes.
Our method is based on a pixel-based feature, "motion barcode", which records
the existence/non-existence of motion as a function of time. While appearance,
motion magnitude, and motion direction can vary greatly between disparate
viewpoints, the existence of motion is viewpoint invariant. Based on the motion
barcode, a similarity measure is developed for videos of the same event taken
from very different viewpoints. This measure is robust to occlusions common
under different viewpoints, and can be computed efficiently.
Event retrieval is demonstrated using challenging videos from stationary and
hand-held cameras.
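As a rough illustration of the idea, here is a minimal Python sketch of per-pixel motion barcodes and a barcode similarity measure. The motion detector (simple frame differencing) and the similarity function (normalized correlation) are assumptions for illustration; the abstract does not specify either.

```python
import numpy as np

def motion_barcodes(frames, thresh=15):
    """Per-pixel motion barcodes for a grayscale video.

    frames: uint8 array of shape (T, H, W).
    Returns an (H, W, T-1) binary array: entry (y, x, t) is 1 if pixel
    (y, x) changed between frames t and t+1 (simple frame differencing
    stands in for whatever motion detector the paper uses).
    """
    diffs = np.abs(np.diff(frames.astype(np.int16), axis=0))  # (T-1, H, W)
    return (diffs > thresh).astype(np.uint8).transpose(1, 2, 0)

def barcode_similarity(b1, b2):
    """Normalized correlation between two binary barcodes (1D arrays)."""
    a = b1 - b1.mean()
    b = b2 - b2.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

Because the barcode records only whether motion occurred, not its magnitude or direction, the same event filmed from two very different viewpoints should produce similar barcodes at corresponding locations.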
Camera Calibration from Dynamic Silhouettes Using Motion Barcodes
Computing the epipolar geometry between cameras with very different
viewpoints is often problematic as matching points are hard to find. In these
cases, it has been proposed to use information from dynamic objects in the
scene for suggesting point and line correspondences.
We propose a speed-up of about two orders of magnitude, as well as an
increase in robustness and accuracy, for methods computing epipolar geometry
from dynamic silhouettes. This improvement is based on a new temporal
signature: the motion barcode for lines. A line's motion barcode is a binary
temporal sequence indicating, for each frame, the existence of at least one
foreground pixel on that line. The motion barcodes of two corresponding
epipolar lines are very similar, so the search for corresponding epipolar lines
can be limited only to lines having similar barcodes. The use of motion
barcodes leads to increased speed, accuracy, and robustness in computing the
epipolar geometry.
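A minimal sketch of the line barcode and the pruning step it enables, assuming per-frame foreground masks and lines given as sampled pixel coordinates; normalized correlation and the 0.9 threshold are illustrative stand-ins for the paper's actual similarity measure.

```python
import numpy as np

def line_barcode(fg_masks, line_pixels):
    """Motion barcode of one line: entry t is 1 if at least one
    foreground pixel lies on the line in frame t.

    fg_masks: (T, H, W) boolean foreground masks.
    line_pixels: (N, 2) integer (row, col) coordinates along the line.
    """
    rows, cols = line_pixels[:, 0], line_pixels[:, 1]
    return fg_masks[:, rows, cols].any(axis=1).astype(np.uint8)  # shape (T,)

def candidate_line_pairs(barcodes_a, barcodes_b, min_sim=0.9):
    """Prune the search: keep only line pairs whose barcodes are
    similar, leaving far fewer candidates for geometric verification."""
    pairs = []
    for i, ba in enumerate(barcodes_a):
        for j, bb in enumerate(barcodes_b):
            if ba.std() == 0 or bb.std() == 0:
                continue  # constant barcodes carry no temporal signal
            if np.corrcoef(ba, bb)[0, 1] >= min_sim:
                pairs.append((i, j))
    return pairs
```

The speed-up comes from this filtering: instead of geometrically verifying every line pair, only the small fraction with matching temporal signatures is tested.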
An Epipolar Line from a Single Pixel
Computing the epipolar geometry from feature points between cameras with very
different viewpoints is often error prone, as an object's appearance can vary
greatly between images. For such cases, it has been shown that using motion
extracted from video can achieve much better results than using a static image.
This paper extends these earlier works based on scene dynamics. We propose a
new method to compute the epipolar geometry from a video stream by exploiting
the following observation: for a pixel p in Image A, all
pixels corresponding to p in Image B are on the same epipolar line.
Equivalently, the image of the line going through camera A's center and p is an
epipolar line in B. Therefore, when cameras A and B are synchronized, the
momentary images of two objects projecting to the same pixel, p, in camera A at
times t1 and t2, lie on an epipolar line in camera B. Based on this observation
we achieve fast and precise computation of epipolar lines. Calibrating cameras
based on our method of finding epipolar lines is much faster and more robust
than previous methods.
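To make the observation concrete, here is a hypothetical Python sketch: collect the frames in which a moving object projects onto pixel p in camera A, take the object's image positions in camera B at those frames, and fit a line through them. The total-least-squares fit and the single-object assumption are illustrative; the paper's actual estimation procedure may differ.

```python
import numpy as np

def epipolar_line_from_pixel(active_frames, points_b):
    """Estimate the epipolar line in camera B induced by one pixel p of
    camera A, from frames in which a moving object projects onto p.

    active_frames: frame indices where pixel p is foreground in camera A.
    points_b: dict, frame index -> (x, y) image position of the moving
              object in camera B at that frame (one object per frame).

    Returns (a, b, c) with a*x + b*y + c = 0, via total least squares.
    """
    pts = np.array([points_b[t] for t in active_frames if t in points_b],
                   dtype=float)
    centroid = pts.mean(axis=0)
    # Direction of least variance (last right singular vector) is the
    # normal to the best-fit line through the centered points.
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]
    c = -(vt[-1] @ centroid)
    return a, b, c
```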
The hunt for submarines in classical art: mappings between scientific invention and artistic interpretation
This is a report to the AHRC's ICT in Arts and Humanities Research Programme.
This report stems from a project which aimed to produce a series of mappings between advanced imaging information and communications technologies (ICT) and needs within visual arts research. A secondary aim was to demonstrate the feasibility of a structured approach to establishing such mappings.
The project was carried out over 2006, from January to December, by the visual arts centre of the Arts and Humanities Data Service (AHDS Visual Arts). It was funded by the Arts and Humanities Research Council (AHRC) as one of the Strategy Projects run under the aegis of its ICT in Arts and Humanities Research programme. The programme, which runs from October 2003 until September 2008, aims "to develop, promote and monitor the AHRC's ICT strategy, and to build capacity nation-wide in the use of ICT for arts and humanities research". As part of this, the Strategy Projects were intended to contribute to the programme in two ways: knowledge-gathering projects would inform the programme's Fundamental Strategic Review of ICT, conducted for the AHRC in the second half of 2006, focusing "on critical strategic issues such as e-science and peer-review of digital resources". Resource-development projects would "build tools and resources of broad relevance across the range of the AHRC's academic subject disciplines". This project fell into the knowledge-gathering strand.
The project ran under the leadership of Dr Mike Pringle, Director, AHDS Visual Arts, and the day-to-day management of Polly Christie, Projects Manager, AHDS Visual Arts. The research was carried out by Dr Rupert Shepherd.
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting the attention and
investment of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives such as object
detection, activity recognition, and user-machine interaction. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, most commonly used features,
methods, challenges and opportunities within the field.
The effects of "order" and "disorder" on human cognitive perception in navigating through urban environments
This paper investigates how "order", "structure", and "disorder" of street layouts are perceived
when navigating through an urban environment. It builds on the assumption that a mixture of
"order" and "disorder" might be a key factor for the quality of understanding within an urban
context, and that an "ordered" environment tends to be more intelligible when occasionally
broken up by an irregularity. Knowledge about urban layouts can be accrued by the traveller
in different ways: from static viewpoints, from top-down maps, and by travelling through the
scenery. Cognitive processes that are involved in organising information about the structure of
the built environment are known to simplify and schematise information. Such a "mental map"
creates an image of the city, helps in memorising it, and facilitates wayfinding tasks. Wayfinding
experiments and investigations into the configuration of street networks have so far supported
the understanding of movement behaviour and given insight into an urban environment from
different perspectives. This paper attempts to relate two aspects, the configurational and the
sequential experience of navigation (along a route), to each other, using a methodological
framework that allows for the comparison of quantitative measurements and findings from both
fields of research. The centre of attention is the perception of "order", "structure" and
"disorder" from both perspectives: from "above" and from "along within" an urban environment.
A virtual movement experiment with pre-chosen routes through six city samples is expected to
provide meaningful empirical data on the perception of both configurational (view from above)
and sequential (moving through the scenery) embodiments of "order" and "disorder", thereby
introducing a methodological approach that applies string code computation in the spirit of
probabilistic information theory.
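The abstract does not spell out its string code, so the following Python fragment is only a loose, hypothetical illustration of the general idea: a route is encoded as a symbol string and scored with an information-theoretic measure, here plain Shannon entropy.

```python
from collections import Counter
import math

def route_entropy(route_string):
    """Shannon entropy (bits per symbol) of a route encoded as a string,
    e.g. 'L' = left turn, 'R' = right turn, 'S' = straight ahead.
    Unigram entropy ignores symbol order; block entropies would be
    needed to capture sequential structure."""
    counts = Counter(route_string)
    n = len(route_string)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A repetitive, "ordered" route yields lower entropy than a mixed one:
print(route_entropy("SLSLSLSL"))  # 1.0
print(route_entropy("SLRSSRLS"))  # 1.5
```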