Recognizing Human-Object Interactions in Videos
Understanding human actions that involve interacting with objects is important for a wide range of real-world applications, such as security surveillance and healthcare. In this thesis, three different approaches are presented for addressing the problem of human-object interaction (HOI) recognition in videos.
Firstly, we propose a hierarchical framework for analyzing human-object interactions in a video sequence. The framework comprises Long Short-Term Memory (LSTM) networks that capture human motion and temporal object information independently. These pieces of information are then combined through a bilinear layer and fed into a global deep LSTM to learn high-level information about HOIs. To concentrate on the key components of human and object temporal information, the proposed approach incorporates an attention mechanism into LSTMs.
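As background, the fusion step described above can be sketched numerically: attention-weighted temporal pooling of the two LSTM streams, followed by a bilinear combination. All shapes and weights below are illustrative stand-ins (random arrays in place of learned LSTM outputs and parameters), not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(seq, w):
    # seq: (T, D) per-frame features; w: (D,) learned attention scoring vector.
    scores = seq @ w                      # (T,) unnormalized attention scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # softmax over the T time steps
    return alpha @ seq                    # (D,) summary focused on key frames

T, D = 8, 16
human_seq = rng.standard_normal((T, D))   # stand-in for human-motion LSTM outputs
object_seq = rng.standard_normal((T, D))  # stand-in for temporal-object LSTM outputs

h = attention_pool(human_seq, rng.standard_normal(D))
o = attention_pool(object_seq, rng.standard_normal(D))

# Bilinear layer: fused[k] = h^T W[:, :, k] o, one scalar per output unit.
W = rng.standard_normal((D, D, 4)) * 0.1
fused = np.einsum('i,ijk,j->k', h, W, o)  # joint representation fed to the global LSTM
print(fused.shape)
```

The bilinear form lets every pair of human and object feature dimensions interact, which is what distinguishes this fusion from simple concatenation.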
Secondly, we aim to achieve a holistic understanding of HOIs by exploiting both their local and global contexts through knowledge distillation. The local context graphs are used to learn the relationship between humans and objects at the frame level by capturing their co-occurrence at a specific time step. The global relation graph, on the other hand, is constructed from video-level human-object interactions, identifying their long-term relations throughout a video sequence. We investigate how knowledge from these context graphs can be distilled into their counterparts to improve HOI recognition.
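The distillation between the two context graphs can be illustrated with a standard temperature-softened KL objective; the logit values and the symmetric two-way formulation below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def softmax(x, t=1.0):
    z = np.exp((x - x.max()) / t)   # subtract max for numerical stability
    return z / z.sum()

def distill_loss(student_logits, teacher_logits, t=2.0):
    # Soften both distributions with temperature t, then KL(teacher || student).
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.sum(p * (np.log(p) - np.log(q))))

local_logits = np.array([2.0, 0.5, -1.0])   # frame-level (local graph) HOI scores
global_logits = np.array([1.5, 0.8, -0.5])  # video-level (global graph) HOI scores

# Distil in both directions so each context benefits from its counterpart.
loss = distill_loss(local_logits, global_logits) + distill_loss(global_logits, local_logits)
print(loss >= 0.0)
```

Raising the temperature spreads probability mass over non-target classes, so the "dark knowledge" in the teacher's relative scores is what gets transferred.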
Lastly, we propose the Spatio-Temporal Interaction Transformer-based (STIT)
network to reason about spatio-temporal changes of humans and objects. Specifically, the spatial transformers learn the local context of humans and objects at specific frame times. The temporal transformer then learns the relations at a higher level between spatial context representations at different time steps, capturing long-term dependencies across frames. We further investigate multiple hierarchy designs for learning human interactions.
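The temporal-transformer stage can be sketched as plain scaled dot-product self-attention over one spatial-context token per frame; the random tokens and single-head, single-layer setup are simplifying assumptions for illustration only.

```python
import numpy as np

def self_attention(tokens, Wq, Wk, Wv):
    # tokens: (T, D) spatial-context representations, one per frame.
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])     # (T, T) pairwise frame affinities
    scores -= scores.max(axis=1, keepdims=True)
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)          # softmax over key frames
    return A @ V                               # each frame attends to all others

rng = np.random.default_rng(1)
T, D = 6, 8
spatial_ctx = rng.standard_normal((T, D))      # stand-in spatial-transformer outputs
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
out = self_attention(spatial_ctx, Wq, Wk, Wv)
print(out.shape)
```

Because every frame attends to every other frame in one step, long-term dependencies are captured without the sequential bottleneck of a recurrent model.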
The effectiveness of each of the proposed methods mentioned above is evaluated
using various video action datasets that include human-object interactions, such as Charades, CAD-120, and Something-Something V1.
Point-Particle Effective Field Theory III: Relativistic Fermions and the Dirac Equation
We formulate point-particle effective field theory (PPEFT) for relativistic
spin-half fermions interacting with a massive, charged finite-sized source
using a first-quantized effective field theory for the heavy compact object and
a second-quantized language for the lighter fermion with which it interacts.
This description shows how to determine the near-source boundary condition for
the Dirac field in terms of the relevant physical properties of the source, and
reduces to the standard choices in the limit of a point source. Using a
first-quantized effective description is appropriate when the compact object is
sufficiently heavy, and is simpler than (though equivalent to) the effective
theory that treats the compact source in a second-quantized way. As an
application we use the PPEFT to parameterize the leading energy shift for the
bound energy levels due to finite-sized source effects in a model-independent
way, allowing these effects to be fit in precision measurements. Besides
capturing finite-source-size effects, the PPEFT treatment also efficiently
captures how other short-distance source interactions can shift bound-state
energy levels, such as due to vacuum polarization (through the Uehling
potential) or strong interactions for Coulomb bound states of hadrons, or any
hypothetical new short-range forces sourced by nuclei.
Comment: 29 pages plus appendices, 3 figures
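For context (standard atomic-physics background, quoted here as orientation rather than from the paper itself), the leading finite-size shift of a Coulomb bound level is conventionally written as

```latex
\delta E_n \;\simeq\; \frac{2\pi}{3}\, Z\alpha\, |\psi_n(0)|^2 \,\langle r^2 \rangle ,
```

where \(\langle r^2 \rangle\) is the source's mean-square charge radius and \(\psi_n(0)\) the bound-state wavefunction at the origin. In the PPEFT framework the coefficient of \(|\psi_n(0)|^2\) is instead an effective contact coupling, fixed by matching to the physical properties of the source, which is what allows the shift to be fit in a model-independent way.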
InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion
This paper addresses a novel task of anticipating 3D human-object
interactions (HOIs). Most existing research on HOI synthesis lacks
comprehensive whole-body interactions with dynamic objects, e.g., often limited
to manipulating small or static objects. Our task is significantly more
challenging, as it requires modeling dynamic objects with various shapes,
capturing whole-body motion, and ensuring physically valid interactions. To
this end, we propose InterDiff, a framework comprising two key steps: (i)
interaction diffusion, where we leverage a diffusion model to encode the
distribution of future human-object interactions; (ii) interaction correction,
where we introduce a physics-informed predictor to correct denoised HOIs in a
diffusion step. Our key insight is to inject the prior knowledge that interactions, expressed relative to contact points, follow a simple
pattern and are easily predictable. Experiments on multiple human-object
interaction datasets demonstrate the effectiveness of our method for this task,
capable of producing realistic, vivid, and remarkably long-term 3D HOI
predictions.
Comment: ICCV 2023; Project Page: https://sirui-xu.github.io/InterDiff
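The two-step structure (diffusion, then correction) can be illustrated with a purely schematic toy: both `denoise_step` and `physics_correction` below are hypothetical stand-ins, not the paper's learned model or physics module. The sketch only shows the control flow of interleaving a contact-relative correction with each reverse-diffusion step.

```python
import numpy as np

rng = np.random.default_rng(2)

def denoise_step(x, step):
    # Hypothetical stand-in for the learned reverse-diffusion update.
    return 0.9 * x + 0.1 * rng.standard_normal(x.shape) / (step + 1)

def physics_correction(x, contact_point):
    # Toy stand-in for the interaction-correction idea: nudge the denoised
    # object position toward a simple pattern expressed relative to the
    # contact point (here, just damping drift away from the contact).
    return contact_point + 0.8 * (x - contact_point)

contact = np.zeros(3)
x = rng.standard_normal(3) * 5.0        # noisy sample of a future object position
for step in range(10):
    x = denoise_step(x, step)
    x = physics_correction(x, contact)  # correction interleaved with each diffusion step
print(np.linalg.norm(x - contact))
```

The point of the structure is that the corrector sees the partially denoised sample at every step, so physical plausibility is enforced throughout sampling rather than only as a post-hoc fix.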
Dynamics of eye-hand coordination are flexibly preserved in eye-cursor coordination during an online, digital, object interaction task
Do patterns of eye-hand coordination observed during real-world object
interactions apply to digital, screen-based object interactions? We adapted a
real-world object interaction task (physically transferring cups in sequence
about a tabletop) into a two-dimensional screen-based task
(dragging-and-dropping circles in sequence with a cursor). We collected gaze
(with webcam eye-tracking) and cursor position data from 51 fully-remote,
crowd-sourced participants who performed the task on their own computer. We
applied real-world time-series data segmentation strategies to resolve the
self-paced movement sequence into phases of object interaction and rigorously
cleaned the webcam eye-tracking data. In this preliminary investigation, we
found that: 1) real-world eye-hand coordination patterns persist and adapt in
this digital context, and 2) remote, online, cursor-tracking and webcam
eye-tracking are useful tools for capturing visuomotor behaviours during this
ecologically-valid human-computer interaction task. We discuss how these
findings might inform design principles and further investigations into natural
behaviours that persist in digital environments.
A Modeling Paradigm for Integrating Processes and Data at the Micro Level
Despite the widespread adoption of BPM, many business processes are not adequately supported by existing BPM technology. In previous work we reported on the properties of these processes. As a major insight we learned that, in accordance with the data model comprising object types and object relations, the modeling and execution of processes can be based on two levels of granularity: object behavior and object interactions. This paper focuses on micro processes, which capture object behavior and constitute a fundamental pillar of our framework for object-aware process management. Our approach applies the well-established concept of modeling object behavior in terms of states and state transitions. In contrast to existing work, we establish a mapping between attribute values and object states to ensure compliance between them. Finally, we provide a well-defined operational semantics enabling the automatic and dynamic generation of most end-user components (e.g., overview tables and user forms) at run time.
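The attribute-to-state mapping can be made concrete with a minimal sketch: the object's state is *derived* from its attribute values, so the two can never drift out of sync, and attribute changes that would imply an illegal state transition are rejected. The object type (a job application) and its states are illustrative examples, not taken from the paper.

```python
def derive_state(application):
    # Compliance mapping: the object state is a function of attribute values.
    if application.get("decision") == "accepted":
        return "accepted"
    if application.get("decision") == "rejected":
        return "rejected"
    if application.get("review_score") is not None:
        return "reviewed"
    if application.get("submitted_at") is not None:
        return "submitted"
    return "created"

ALLOWED = {                      # state transitions of the micro process
    "created": {"submitted"},
    "submitted": {"reviewed"},
    "reviewed": {"accepted", "rejected"},
}

def set_attribute(application, key, value):
    before = derive_state(application)
    candidate = dict(application, **{key: value})
    after = derive_state(candidate)
    # Reject attribute changes that would skip or reverse a state transition.
    if after != before and after not in ALLOWED.get(before, set()):
        raise ValueError(f"illegal transition {before} -> {after}")
    application[key] = value

app = {}
set_attribute(app, "submitted_at", "2024-01-01")
set_attribute(app, "review_score", 4)
print(derive_state(app))
```

Deriving states from data is also what enables the generation of end-user components: a form generator can ask which attributes are still unset in the current state and render exactly those fields.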
Augmented Reality Meets Computer Vision : Efficient Data Generation for Urban Driving Scenes
The success of deep learning in computer vision is based on availability of
large annotated datasets. To lower the need for hand labeled images, virtually
rendered 3D worlds have recently gained popularity. Creating realistic 3D
content is challenging on its own and requires significant human effort. In
this work, we propose an alternative paradigm which combines real and synthetic
data for learning semantic instance segmentation and object detection models.
Exploiting the fact that not all aspects of the scene are equally important for
this task, we propose to augment real-world imagery with virtual objects of the
target category. Capturing real-world images at large scale is easy and cheap,
and directly provides real background appearances without the need for creating
complex 3D models of the environment. We present an efficient procedure to
augment real images with virtual objects. This allows us to create realistic
composite images which exhibit both realistic background appearance and a large
number of complex object arrangements. In contrast to modeling complete 3D
environments, our augmentation approach requires only a few user interactions
in combination with 3D shapes of the target object. Through extensive
experimentation, we determine the set of parameters that produces augmented
data which maximally enhances the performance of instance segmentation
models. Further, we demonstrate the utility of our approach on training
standard deep models for semantic instance segmentation and object detection of
cars in outdoor driving scenes. We test the models trained on our augmented
data on the KITTI 2015 dataset, which we have annotated with pixel-accurate
ground truth, and on the Cityscapes dataset. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on synthetic data or on a limited amount of annotated real data.
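The core augmentation operation, compositing a rendered virtual object onto a real background image, can be sketched as straightforward alpha blending; the image sizes, the all-ones "car" crop, and the square silhouette below are toy stand-ins for rendered 3D content.

```python
import numpy as np

def augment(real_image, virtual_object, alpha_mask, top_left):
    # Alpha-composite a rendered virtual object onto a real background image.
    out = real_image.copy()
    y, x = top_left
    h, w = alpha_mask.shape
    patch = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (alpha_mask[..., None] * virtual_object
                             + (1.0 - alpha_mask[..., None]) * patch)
    return out

background = np.zeros((64, 64, 3))   # stand-in real-world image
car = np.ones((16, 16, 3))           # stand-in rendered crop of the target object
mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0               # object silhouette within the crop

composite = augment(background, car, mask, top_left=(10, 10))
print(composite[18, 18])
```

Because the background is a real photograph, only the inserted object and its placement need to be synthesized, which is the source of the approach's cost advantage over modeling complete 3D environments.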