On the Expressiveness of Spatial Constraint Systems
In this paper we report on our progress in using spatial constraint systems as an abstract representation of modal and epistemic behaviour. We first give an introduction and the background to our work. We then present our preliminary results on the representation of modal behaviour using spatial constraint systems, followed by our ongoing work on characterizing the epistemic notion of knowledge. Finally, we discuss the future directions of our research.
Multi-body Non-rigid Structure-from-Motion
Conventional structure-from-motion (SFM) research is primarily concerned with
the 3D reconstruction of a single, rigidly moving object seen by a static
camera, or a static and rigid scene observed by a moving camera -- in both cases
there is only one relative rigid motion involved. Recent progress has
extended SFM to the areas of multi-body SFM (where there are multiple rigid
relative motions in the scene), as well as non-rigid SFM (where there is a
single non-rigid, deformable object or scene). Along this line of thinking,
there is an apparent gap, "multi-body non-rigid SFM", in which the
task would be to jointly reconstruct and segment multiple 3D structures of the
multiple, non-rigid objects or deformable scenes from images. Such a multi-body
non-rigid scenario is common in reality (e.g. two persons shaking hands,
multi-person social event), and how to solve it represents a natural
{next-step} in SFM research. By leveraging recent results of subspace
clustering, this paper proposes, for the first time, an effective framework for
multi-body NRSFM, which simultaneously reconstructs and segments each 3D
trajectory into its respective low-dimensional subspace. Under our
formulation, 3D trajectories for each non-rigid structure can be well
approximated with a sparse affine combination of other 3D trajectories from the
same structure (self-expressiveness). We solve the resultant optimization with
the alternating direction method of multipliers (ADMM). We demonstrate the
efficacy of the proposed framework through extensive experiments on both
synthetic and real data sequences. Our method clearly outperforms other
alternative methods, such as first clustering the 2D feature tracks to groups
and then doing non-rigid reconstruction in each group or first conducting 3D
reconstruction by using single subspace assumption and then clustering the 3D
trajectories into groups.
Comment: 21 pages, 16 figures
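The self-expressiveness property described above (each 3D trajectory approximated by a sparse combination of the other trajectories, with the coefficient problem solved by ADMM) can be illustrated on a toy problem. The sketch below is our own minimal illustration, not the authors' method: it uses linear rather than affine combinations, skips the joint 3D reconstruction, and all parameter values and names are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def self_expressive_coeffs(X, lam=0.05, rho=1.0, iters=300):
    """Express each column x_i of X as a sparse combination of the OTHER
    columns:  min_c 0.5*||x_i - X_{-i} c||^2 + lam*||c||_1,
    solved column-by-column with ADMM. Returns an N x N coefficient
    matrix C whose diagonal is zero (no trivial self-representation)."""
    _, N = X.shape
    C = np.zeros((N, N))
    for i in range(N):
        idx = [j for j in range(N) if j != i]      # exclude column i itself
        D, x = X[:, idx], X[:, i]
        G_inv = np.linalg.inv(D.T @ D + rho * np.eye(N - 1))
        Dtx = D.T @ x
        c = np.zeros(N - 1)
        z = np.zeros(N - 1)
        u = np.zeros(N - 1)
        for _ in range(iters):
            c = G_inv @ (Dtx + rho * (z - u))      # quadratic subproblem
            z = soft_threshold(c + u, lam / rho)   # sparsity subproblem
            u = u + c - z                          # dual update
        C[idx, i] = z
    return C

# Toy data: trajectories of two groups, each lying in its own random
# 3-dimensional subspace of R^30 (stacked frame coordinates per column).
rng = np.random.default_rng(0)
F, d, n = 30, 3, 10
X = np.hstack([rng.standard_normal((F, d)) @ rng.standard_normal((d, n))
               for _ in range(2)])
X /= np.linalg.norm(X, axis=0)                     # unit-norm columns

A = np.abs(self_expressive_coeffs(X))
A = A + A.T                                        # symmetric affinity
within = A[:n, :n].sum() + A[n:, n:].sum()
cross = 2 * A[:n, n:].sum()
```

On such data the affinity mass concentrates within each group, so the matrix `A` can be fed to any standard spectral-clustering routine to recover the segmentation; the sparsity weight `lam` trades off reconstruction accuracy against how aggressively cross-subspace connections are suppressed.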
Generation of Whole-Body Expressive Movement Based on Somatical Theories
An automatic choreography method to generate lifelike body movements is proposed. The method is based on somatic theories that are conventionally used to evaluate humans' psychological and developmental states by analyzing body movement. The idea of this paper is to use these theories in the inverse direction: to facilitate the generation of artificial body movements that are plausible with respect to the evolutionary, developmental, and emotional states of robots or other non-living movers. This paper reviews somatic theories and describes a strategy for implementing automatic body movement generation. In addition, a psychological experiment is reported to verify the expressive ability of body movement rhythm. The method facilitates choreographing the body movements of humanoids, animal-shaped robots, and computer-graphics characters in video games.
Combining Spatial and Temporal Logics: Expressiveness vs. Complexity
In this paper, we construct and investigate a hierarchy of spatio-temporal
formalisms that result from various combinations of propositional spatial and
temporal logics such as the propositional temporal logic PTL, the spatial
logics RCC-8, BRCC-8, S4u and their fragments. The obtained results give a
clear picture of the trade-off between expressiveness and computational
realisability within the hierarchy. We demonstrate how different combining
principles as well as spatial and temporal primitives can produce NP-, PSPACE-,
EXPSPACE-, 2EXPSPACE-complete, and even undecidable spatio-temporal logics out
of components that are at most NP- or PSPACE-complete.
Seeing What You're Told: Sentence-Guided Activity Recognition In Video
We present a system that demonstrates how the compositional structure of
events, in concert with the compositional structure of language, can interplay
with the underlying focusing mechanisms in video action recognition, thereby
providing a medium, not only for top-down and bottom-up integration, but also
for multi-modal integration between vision and language. We show how the roles
played by participants (nouns), their characteristics (adjectives), the actions
performed (verbs), the manner of such actions (adverbs), and changing spatial
relations between participants (prepositions) in the form of whole sentential
descriptions mediated by a grammar, guide the activity-recognition process.
Further, the utility and expressiveness of our framework are demonstrated by
performing three separate tasks in the domain of multi-activity videos:
sentence-guided focus of attention, generation of sentential descriptions of
video, and query-based video search, simply by leveraging the framework in
different manners.
Comment: To appear in CVPR 201