Propagation of an Earth-directed coronal mass ejection in three dimensions
Solar coronal mass ejections (CMEs) are the most significant drivers of
adverse space weather at Earth, but the physics governing their propagation
through the heliosphere is not well understood. While stereoscopic imaging of
CMEs with the Solar Terrestrial Relations Observatory (STEREO) has provided
some insight into their three-dimensional (3D) propagation, the mechanisms
governing their evolution remain unclear due to difficulties in reconstructing
their true 3D structure. Here we use a new elliptical tie-pointing technique to
reconstruct a full CME front in 3D, enabling us to quantify its deflected
trajectory from high latitudes toward the ecliptic, and measure its increasing
angular width and propagation from 2 to 46 solar radii (approximately 0.2 AU).
Beyond 7 solar radii, we show that its motion is determined by an aerodynamic
drag in the solar wind and, using our reconstruction as input for a 3D
magnetohydrodynamic simulation, we determine an accurate arrival time at the
Lagrangian L1 point near Earth.
Comment: 5 figures, 2 supplementary movies
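The aerodynamic-drag behaviour described in this abstract is commonly captured by a drag-based equation of motion, dv/dt = -γ|v - w|(v - w), in which the CME accelerates or decelerates toward the ambient solar-wind speed w. A minimal sketch is below; the drag parameter, wind speed, and launch conditions are illustrative values, not those used in the paper.

```python
# Minimal sketch of a drag-based model (DBM) for CME propagation:
# dv/dt = -gamma * |v - w| * (v - w), where w is the ambient solar-wind
# speed and gamma a drag parameter. All values below are illustrative only.

R_SUN_KM = 6.957e5  # solar radius in km

def propagate_cme(r0_km, v0_kms, w_kms=400.0, gamma_per_km=1e-7,
                  dt_s=60.0, t_end_s=4 * 24 * 3600):
    """Integrate CME heliocentric distance and speed with simple Euler steps."""
    r, v, t = r0_km, v0_kms, 0.0
    while t < t_end_s:
        a = -gamma_per_km * abs(v - w_kms) * (v - w_kms)  # km/s^2
        v += a * dt_s
        r += v * dt_s
        t += dt_s
    return r, v

# Example: a fast CME launched at 20 solar radii decelerates toward the
# ambient wind speed as it propagates outward.
r, v = propagate_cme(r0_km=20 * R_SUN_KM, v0_kms=1000.0)
```

In this formulation a CME faster than the solar wind decelerates and a slower one is dragged up to the wind speed, which is the qualitative behaviour the abstract reports beyond 7 solar radii.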
Vision for Social Robots: Human Perception and Pose Estimation
In order to extract the underlying meaning from a scene captured from the surrounding world in a single still image, social robots will need to learn the human ability to detect different objects, understand their arrangement and relationships relative both to their own parts and to each other, and infer the dynamics under which they are evolving. Furthermore, they will need to develop and hold a notion of context to allow assigning different meanings (semantics) to the same visual configuration (syntax) of a scene.
The underlying thread of this Thesis is the investigation of new ways for enabling interactions between social robots and humans, by advancing the visual perception capabilities of robots when they process images and videos in which humans are the main focus of attention.
First, we analyze the general problem of scene understanding, as social robots moving through the world need to be able to interpret scenes without having been assigned a specific preset goal. Throughout this line of research, i) we observe that human actions and interactions which can be visually discriminated from an image follow a very heavy-tailed distribution; ii) we develop an algorithm that can obtain a spatial understanding of a scene by only using cues arising from the effect of perspective on a picture of a person’s face; and iii) we define a novel taxonomy of errors for the task of estimating the 2D body pose of people in images to better explain the behavior of algorithms and highlight their underlying causes of error.
Second, we focus on the specific task of 3D human pose and motion estimation from monocular 2D images using weakly supervised training data, as accurately predicting human pose will open up the possibility of richer interactions between humans and social robots. We show that when 3D ground-truth data is only available in small quantities, or not at all, it is possible to leverage knowledge about the physical properties of the human body, along with additional constraints related to alternative types of supervisory signals, to learn models that can regress the full 3D pose of the human body and predict its motions from monocular 2D images.
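One widely used constraint of the kind the abstract alludes to (knowledge about the physical properties of the human body) is that bone lengths are roughly fixed, which can supervise 3D pose prediction without 3D ground truth. The sketch below is illustrative only: the two-bone skeleton, joint indices, and reference lengths are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical weak-supervision signal: penalize predicted 3D poses whose
# bone lengths deviate from known reference lengths, so no 3D ground-truth
# poses are required. Skeleton and reference lengths are illustrative only.

BONES = [(0, 1), (1, 2)]              # e.g. hip->knee, knee->ankle
REF_LENGTHS = np.array([0.45, 0.42])  # metres, assumed known priors

def bone_length_loss(pose_3d):
    """pose_3d: (J, 3) array of predicted 3D joint positions."""
    lengths = np.array([np.linalg.norm(pose_3d[j] - pose_3d[i])
                        for i, j in BONES])
    return float(np.mean((lengths - REF_LENGTHS) ** 2))

# A pose whose bone lengths match the priors incurs (near) zero loss.
pose = np.array([[0.0, 0.0, 0.0],
                 [0.0, -0.45, 0.0],
                 [0.0, -0.87, 0.0]])
loss = bone_length_loss(pose)
```

Such a term would typically be combined with a 2D reprojection loss, so that the network fits the observed 2D keypoints while the anatomical prior resolves depth ambiguity.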
Taken in its entirety, the intent of this Thesis is to highlight the importance of, and provide novel methodologies for, social robots' ability to interpret their surrounding environment, learn in a way that is robust to low data availability, and generalize previously observed behaviors to unknown situations in a similar way to humans.
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
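Among the passive optical techniques such a review covers, stereo reconstruction recovers depth from the disparity between a rectified image pair via Z = f·B/d. A minimal sketch follows; the focal length and baseline are illustrative values for a hypothetical laparoscopic stereo rig, not parameters from the paper.

```python
# Minimal sketch of passive stereo depth recovery, one family of optical
# 3D-reconstruction techniques: for a rectified stereo pair, depth follows
# from disparity as Z = f * B / d. Camera parameters are illustrative only.

def disparity_to_depth(disparity_px, focal_px=800.0, baseline_mm=5.0):
    """Depth in mm from disparity in pixels for a rectified stereo rig."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Example: with this rig, a 40-pixel disparity corresponds to
# 800 * 5 / 40 = 100.0 mm of depth.
depth = disparity_to_depth(40.0)
```

The inverse relationship between disparity and depth is why the short baselines imposed by laparoscopic ports limit depth resolution at larger working distances, one of the practical challenges such reviews discuss.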
Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data
Interest in the exploitation of soft biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, 'higher level' characteristics such as emotional or mental states. This study will present a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.
From surfaces to objects: Recognizing objects using surface information and object models.
This thesis describes research on recognizing partially obscured objects using
surface information like Marr's 2½D sketch ([MAR82]) and surface-based geometrical
object models. The goal of the recognition process is to produce a fully
instantiated object hypothesis, with either image evidence for each feature or
an explanation for its absence, in terms of self or external occlusion.
The central point of the thesis is that using surface information should be
an important part of the image understanding process. This is because surfaces
are the features that directly link perception to the objects perceived (for
normal "camera-like" sensing) and because surfaces make explicit information
needed to understand and cope with some visual problems (e.g. obscured features).
Further, because surfaces are both the data and model primitive, detailed
recognition can be made both simpler and more complete.
Recognition input is a surface image, which represents surface orientation and
absolute depth. Segmentation criteria are proposed for forming surface patches
with constant curvature character, based on surface shape discontinuities which
become labeled segmentation boundaries.
Partially obscured object surfaces are reconstructed using stronger surface-based
constraints. Surfaces are grouped to form surface clusters, which are 3D
identity-independent solids that often correspond to model primitives. These are
used here as a context within which to select models and find all object features.
True three-dimensional properties of image boundaries, surfaces and surface
clusters are directly estimated using the surface data.
Models are invoked using a network formulation, where individual nodes
represent potential identities for image structures. The links between nodes are
defined by generic and structural relationships. They define indirect evidence relationships
for an identity. Direct evidence for the identities comes from the data
properties. A plausibility computation is defined according to the constraints inherent
in the evidence types. When a node acquires sufficient plausibility, the
model is invoked for the corresponding image structure.
Objects are primarily represented using a surface-based geometrical model.
Assemblies are formed from subassemblies and surface primitives, which are
defined using surface shape and boundaries. Variable affixments between assemblies
allow flexibly connected objects.
The initial object reference frame is estimated from model-data surface relationships,
using correspondences suggested by invocation. With the reference
frame, back-facing, tangential, partially self-obscured, totally self-obscured and
fully visible image features are deduced. From these, the oriented model is used
for finding evidence for missing visible model features. If no evidence is found,
the program attempts to find evidence to justify the features obscured by an unrelated
object. Structured objects are constructed using a hierarchical synthesis
process.
Fully completed hypotheses are verified using both existence and identity
constraints based on surface evidence.
Each of these processes is defined by its computational constraints and is
demonstrated on two test images. These test scenes are interesting because they
contain partially and fully obscured object features, a variety of surface and solid
types and flexibly connected objects. All modeled objects were fully identified
and analyzed to the level represented in their models and were also acceptably
spatially located.
Portions of this work have been reported elsewhere ([FIS83], [FIS85a], [FIS85b],
[FIS86]) by the author.