A Qualitative Representation of Spatial Scenes in R^2 with Regions and Lines
Regions and lines are common abstractions for geographic objects. Collections of regions, lines, and other representations of spatial objects, together with their relations, form a spatial scene. For instance, the states of Maine and New Hampshire can be represented by a pair of regions and related through their topological properties: the two states are adjacent (i.e., they meet along their shared boundary), whereas Maine and Florida are not (i.e., they are disjoint).
A detailed model for qualitatively describing spatial scenes should capture the essential properties of a configuration such that a description of the represented objects and their relations can be generated. Such a description should then make it possible to reproduce the scene in a way that preserves all topological relationships, without regard to metric details.
Coarse approaches to qualitative spatial reasoning may underspecify certain relations. For example, if two objects meet, it is unclear whether they meet along an edge, at a single point, or multiple times along their boundaries. A location where the boundaries of spatial objects converge is called a spatial intersection. This thesis develops a model for spatial scene descriptions based primarily on sequences of detailed spatial intersections and object containment, capturing how complex spatial objects relate.
With a theory of complex spatial scenes developed, a tool that automatically generates a formal description of a spatial scene is prototyped, enabling the described objects to be analyzed. The strengths and weaknesses of the model are discussed relative to other models of spatial scene description, along with possible further refinements.
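As a concrete, hypothetical illustration of the coarse topological predicates mentioned above, the following Python sketch uses the shapely library's DE-9IM relations to classify two regions as meeting or disjoint; the rectangle coordinates are made-up placeholders, not real state boundaries.

    # Illustrative sketch only: classify topological relations between
    # regions with shapely's DE-9IM predicates. Coordinates are invented
    # placeholders, not actual state geometries.
    from shapely.geometry import Polygon

    maine = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
    new_hampshire = Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])  # shares an edge
    florida = Polygon([(10, 0), (12, 0), (12, 2), (10, 2)])    # far away

    print(maine.touches(new_hampshire))  # True: regions meet along a boundary
    print(maine.disjoint(florida))       # True: regions share no points
    print(maine.relate(new_hampshire))   # full DE-9IM intersection string

Note that `touches` alone cannot distinguish meeting at a single point from meeting along an edge; the DE-9IM string returned by `relate` encodes the dimension of each boundary intersection, which is the kind of detail the thesis's intersection sequences make explicit.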
Context-Dependent Diffusion Network for Visual Relationship Detection
Visual relationship detection can bridge the gap between computer vision and natural language for scene understanding of images. Unlike pure object recognition tasks, the relation triplets of subject-predicate-object span an extremely diverse space, such as person-behind-person and car-behind-building, while suffering from combinatorial explosion. In this paper, we propose a context-dependent diffusion network (CDDN) framework for visual relationship detection. To capture the interactions of different object instances, two types of graphs, a word semantic graph and a visual scene graph, are constructed to encode global context interdependency. The semantic graph is built from language priors to model semantic correlations across objects, whilst the visual scene graph defines the connections of scene objects so as to exploit the surrounding scene information. For the graph-structured data, we design a diffusion network to adaptively aggregate information from contexts, which can effectively learn latent representations of visual relationships and is well suited to visual relationship detection in view of its isomorphic invariance to graphs. Experiments on two widely used datasets demonstrate that our proposed method is more effective and achieves state-of-the-art performance.
Comment: 8 pages, 3 figures, 2018 ACM Multimedia Conference (MM'18)
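For intuition only (a generic sketch, not the authors' CDDN implementation): a common form of graph diffusion aggregates node features through a degree-normalized adjacency matrix, so that each object's representation absorbs context from its neighbors. All names and sizes below are illustrative.

    # Minimal sketch of one diffusion step over graph-structured features.
    # A is a binary scene-graph adjacency matrix, H holds per-object
    # features, and W is a learnable projection.
    import numpy as np

    def diffusion_step(A, H, W):
        A_hat = A + np.eye(A.shape[0])            # add self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)  # symmetric norm
        return np.maximum(A_norm @ H @ W, 0.0)    # propagate, project, ReLU

    rng = np.random.default_rng(0)
    A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3 objects
    H = rng.normal(size=(3, 8))                   # 8-d object features
    W = rng.normal(size=(8, 8))
    print(diffusion_step(A, H, W).shape)          # (3, 8)

Because the normalized propagation depends only on graph connectivity, the operator behaves identically on isomorphic graphs, which is the invariance the abstract alludes to.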
Practical Auto-Calibration for Spatial Scene-Understanding from Crowdsourced Dashcamera Videos
Spatial scene-understanding, including dense depth and ego-motion estimation, is an important problem in computer vision for autonomous vehicles and advanced driver-assistance systems. It is therefore beneficial to design perception modules that can utilize crowdsourced videos collected from arbitrary vehicular onboard or dashboard cameras. However, the intrinsic parameters of such cameras are often unknown or change over time. Typical manual calibration approaches require objects such as a chessboard or additional scene-specific information, whereas automatic camera calibration has no such requirements. Yet the automatic calibration of dashboard cameras is challenging, as forward and planar navigation produces critical motion sequences with reconstruction ambiguities. Structure reconstruction of complete visual sequences, which may contain tens of thousands of images, is also computationally untenable. Here, we propose a system for practical monocular onboard camera auto-calibration from crowdsourced videos. We show the effectiveness of our proposed system on the KITTI raw, Oxford RobotCar, and crowdsourced D-City datasets under varying conditions. Finally, we demonstrate its application to accurate monocular dense depth and ego-motion estimation on uncalibrated videos.
Comment: Accepted at the 16th International Conference on Computer Vision Theory and Applications (VISAPP 2021)
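To make auto-calibration concrete (a toy heuristic, not the system proposed above): given 2D point correspondences between two frames, one can grid-search a focal length by counting how many matches are consistent with an essential matrix under each candidate intrinsic matrix. The correspondence arrays and image size below are assumed inputs from an upstream feature matcher.

    # Hedged sketch: choose the focal length whose intrinsics yield the
    # most epipolar inliers. Purely illustrative, and itself degenerate
    # for the critical motions (pure forward/planar travel) noted above.
    import numpy as np
    import cv2

    def inlier_count(f, pts1, pts2, cx, cy):
        K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)
        E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                       method=cv2.RANSAC, threshold=1.0)
        return 0 if mask is None else int(mask.sum())

    def search_focal(pts1, pts2, width, height):
        cx, cy = width / 2.0, height / 2.0   # assume centered principal point
        candidates = np.linspace(0.5, 2.0, 16) * width  # plausible f range
        return max(candidates, key=lambda f: inlier_count(f, pts1, pts2, cx, cy))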
Cognitive Interpretation of Everyday Activities - Toward Perceptual Narrative Based Visuo-Spatial Scene Interpretation
We position a narrative-centred computational model for high-level knowledge representation and reasoning in the context of a range of assistive technologies concerned with visuo-spatial perception and cognition tasks. Our proposed narrative model encompasses aspects such as space, events, actions, change, and interaction from the viewpoint of commonsense reasoning and learning in large-scale cognitive systems. The broad focus of this paper is the domain of human-activity interpretation in smart environments, ambient intelligence, etc. Against the backdrop of a smart meeting cinematography domain, we position the proposed narrative model, preliminary work on perceptual narrativisation, and the immediate outlook on constructing general-purpose open-source tools for perceptual narrativisation.
Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
Funding: CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
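As a toy numerical analogue of the memory-based guidance described above (not the ARTSCENE Search model itself): a context-triggered prior over candidate locations can be fused with per-fixation feature evidence and renormalized, so that belief accumulates on the target's location over successive saccades. All numbers are invented for illustration.

    # Hedged sketch: multiplicative accumulation of spatial evidence
    # over a small grid of candidate target locations.
    import numpy as np

    def accumulate(prior, likelihoods):
        belief = prior.copy()
        for lik in likelihoods:          # one bottom-up map per fixation
            belief *= lik                # fuse prior with new evidence
            belief /= belief.sum()       # keep it a probability distribution
        return belief

    prior = np.array([0.1, 0.2, 0.6, 0.1])  # scene context primes location 2
    fix1  = np.array([0.3, 0.3, 0.3, 0.1])  # weak, ambiguous evidence
    fix2  = np.array([0.1, 0.2, 0.6, 0.1])  # target-like object at location 2
    print(accumulate(prior, [fix1, fix2]))  # mass concentrates on location 2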
Learning Action Maps of Large Environments via First-Person Vision
When people observe and interact with physical spaces, they are able to associate functionality with regions in the environment. Our goal is to automate dense functional understanding of large spaces by leveraging sparse activity demonstrations recorded from an egocentric viewpoint. The method we describe enables functionality estimation both in large scenes where people have behaved and in novel scenes where no behaviors are observed. Our method learns and predicts "Action Maps", which encode the ability of a user to perform activities at various locations. By using an egocentric camera to observe human activities, our method scales with the size of the scene without the need to mount multiple static surveillance cameras, and it is well suited to observing activities up close. We demonstrate that by capturing appearance-based attributes of the environment and associating these attributes with activity demonstrations, our proposed mathematical framework allows for the prediction of Action Maps in new environments. Additionally, we offer a preliminary glance at the applicability of Action Maps through a proof-of-concept application in which they are used in concert with activity detections to perform localization.
Comment: To appear at CVPR 2016
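As a minimal sketch of the underlying idea (not the paper's actual framework): given appearance features for scene locations and activity scores observed at a sparse subset of them, a simple regressor can extrapolate an Action Map to unvisited locations, including those of a novel scene. Feature extraction is assumed to happen upstream; all arrays are random placeholders.

    # Hedged sketch: regress per-location action-affordance scores from
    # appearance features, trained on sparsely demonstrated locations.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X_demo = rng.normal(size=(50, 16))   # features where activity was observed
    y_demo = rng.random(50)              # demonstrated affordance scores
    X_new  = rng.normal(size=(200, 16))  # feature grid of a novel scene

    model = Ridge(alpha=1.0).fit(X_demo, y_demo)
    action_map = model.predict(X_new)    # one predicted score per location
    print(action_map.shape)              # (200,)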