3,789 research outputs found
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
We propose a method for reconstructing 3D shapes from 2D sketches in the form
of line drawings. Our method takes as input a single sketch, or multiple
sketches, and outputs a dense point cloud representing a 3D reconstruction of
the input sketch(es). The point cloud is then converted into a polygon mesh. At
the heart of our method lies a deep, encoder-decoder network. The encoder
converts the sketch into a compact representation encoding shape information.
The decoder converts this representation into depth and normal maps capturing
the underlying surface from several output viewpoints. The multi-view maps are
then consolidated into a 3D point cloud by solving an optimization problem that
fuses depth and normals across all viewpoints. Based on our experiments,
compared to other methods, such as volumetric networks, our architecture offers
several advantages, including more faithful reconstruction, higher output
surface resolution, and better preservation of topology and shape structure.
Comment: 3DV 2017 (oral)
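The consolidation step described above, turning per-view depth maps into a single point cloud, can be pictured with a minimal back-projection sketch. All names here are hypothetical, and this naive union of views omits the optimization the paper solves to fuse depths and normals consistently across viewpoints:

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Back-project a depth map (H, W) into 3D world-space points.

    K: 3x3 camera intrinsics; cam_to_world: 4x4 camera pose.
    Hypothetical helper, not the paper's implementation.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T           # camera-space rays at z = 1
    pts_cam = rays * depth.reshape(-1, 1)     # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]

def fuse_views(depth_maps, Ks, poses):
    """Naive fusion: the union of back-projected points from every view."""
    return np.concatenate(
        [backproject(d, K, T) for d, K, T in zip(depth_maps, Ks, poses)],
        axis=0)
```

With identity intrinsics and pose, a constant depth map of 1 simply lifts each pixel (u, v) to the point (u, v, 1).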
Matterport3D: Learning from RGB-D Data in Indoor Environments
Access to large, diverse RGB-D datasets is critical for training RGB-D scene
understanding algorithms. However, existing datasets still cover only a limited
number of views or a restricted scale of spaces. In this paper, we introduce
Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views
from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided
with surface reconstructions, camera poses, and 2D and 3D semantic
segmentations. The precise global alignment and comprehensive, diverse
panoramic set of views over entire buildings enable a variety of supervised and
self-supervised computer vision tasks, including keypoint matching, view
overlap prediction, normal prediction from color, semantic segmentation, and
region classification.
Assessment of Wear in Total Knee Arthroplasty Using Advanced Radiographic Techniques
Total knee arthroplasty (TKA) has become the gold-standard approach for treating advanced osteoarthritis of the knee. Although the surgery continues to be very successful at relieving pain and restoring joint function, its longevity is challenged by wear and loosening of the implant components, which require the patient to undergo revision surgery to replace the implant, a much more challenging operation than primary arthroplasty. Wear of the polyethylene tibial inserts from TKA is assessed in vitro using mechanical wear-simulator testing and by examining failed implants retrieved from patients during revision surgery, as well as with direct in vivo measurements. Current in vitro measurement tools provide only a global estimate of wear (failing to describe whether the wear occurred on the articulating surface, the backside surface, or the stabilizing post), are merely qualitative, or lack resolution. Current in vivo measurement techniques are performed statically or quasi-statically, potentially underestimating wear volume because the contact area of the implant components changes throughout flexion. The purpose of this thesis was to describe, validate, and apply new advanced imaging techniques to measure TKA implant wear in both in vitro and in vivo applications. Micro-computed tomography (micro-CT), a non-destructive, high-resolution imaging technique, was used to produce detailed images of the geometry of tibial inserts used in wear-simulator trials or retrieved from patients, and to create surface deviation maps that accurately quantify wear. Approaches for creating an unworn reference geometry, for comparison against a worn retrieved tibial insert when the pre-wear geometry is unknown, were evaluated and a best-practice approach was determined.
These methods were then applied to study a group of tibial inserts retrieved from patients during revision surgery, which were found to be well functioning, with a yearly wear rate equivalent to that of other contemporary implant designs. Finally, a pilot study evaluating dynamic single-plane flat-panel digital radiography for measuring TKA implant wear in vivo was conducted. The system was determined to have measurement accuracy and precision sufficient to begin a pilot clinical study with patients.
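The surface-deviation idea behind the micro-CT comparison can be sketched in a few lines: resample the unworn reference and worn surfaces onto a common grid, subtract, and integrate the positive deviations into a volume estimate. This is a generic illustration with hypothetical names, not the thesis's actual pipeline:

```python
import numpy as np

def deviation_map(reference, worn):
    """Per-pixel surface deviation (mm), positive where material was lost.

    reference, worn: height maps (H, W) of the unworn and worn insert
    surfaces, already resampled onto the same grid.
    """
    return reference - worn

def wear_volume(dev, pixel_area_mm2):
    """Estimate lost volume (mm^3) by summing only the positive deviations,
    so local artefacts that add apparent material are ignored."""
    return float(np.clip(dev, 0.0, None).sum() * pixel_area_mm2)
```

Keeping the full deviation map, rather than only the summed volume, is what lets this style of analysis say where the wear occurred (articulating versus backside surface), not just how much.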
Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality
Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive measurement capabilities at high spatial and temporal resolution. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind-farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, multiple works on lidars have explored three primary methods of retrieving wind vectors: the homogeneous wind-field assumption, computationally intensive variational methods, and the use of multiple Doppler lidars.
Building on prior research, the current three-part study first demonstrates the capabilities of single- and dual-Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind-farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated against data from a cup-and-vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented for visualizing data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors such as Doppler lidars). A methodology using modern game-development platforms is presented and demonstrated with lidar-retrieved wind fields, as well as a few earth-science datasets for education and outreach activities.
Dissertation/Thesis: Doctoral Dissertation, Mechanical Engineering, 201
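The dual-Doppler retrieval mentioned above rests on a simple geometric fact: each lidar measures only the projection of the wind onto its beam, so two radial velocities along different azimuths give a 2x2 linear system for the horizontal wind (u, v). A textbook sketch, assuming horizontal beams and neglecting vertical velocity, and not the thesis's variational formulation:

```python
import numpy as np

def dual_doppler(vr1, az1, vr2, az2):
    """Retrieve the horizontal wind vector (u, v) from two radial
    velocities vr1, vr2 measured along azimuths az1, az2 (radians,
    clockwise from north).

    Each beam sees vr_i = u*sin(az_i) + v*cos(az_i); two beams with
    different azimuths make the system invertible.
    """
    A = np.array([[np.sin(az1), np.cos(az1)],
                  [np.sin(az2), np.cos(az2)]])
    return np.linalg.solve(A, np.array([vr1, vr2]))
```

For example, a wind of (u, v) = (3, 4) m/s projects to 4 m/s on a north-pointing beam and 3 m/s on an east-pointing beam, and the solver recovers (3, 4). The system becomes ill-conditioned as the two azimuths approach each other, which is one motivation for the more robust variational retrievals the study develops.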
A Generative Model of People in Clothing
We present the first image-based generative model of people in clothing for
the full body. We sidestep the commonly used complex graphics rendering
pipeline and the need for high-quality 3D scans of dressed people. Instead, we
learn generative models from a large image database. The main challenge is to
cope with the high variance in human pose, shape and appearance. For this
reason, pure image-based approaches have not been considered so far. We show
that this challenge can be overcome by splitting the generating process in two
parts. First, we learn to generate a semantic segmentation of the body and
clothing. Second, we learn a conditional model on the resulting segments that
creates realistic images. The full model is differentiable and can be
conditioned on pose, shape or color. The results are samples of people in
different clothing items and styles. The proposed model can generate entirely
new people with realistic clothing. In several experiments we present
encouraging results that suggest an entirely data-driven approach to people
generation is possible.
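The two-part split described above factors image generation as: first sample a semantic segmentation conditioned on pose, then render an image conditioned on the segments. The toy stand-ins below only illustrate that factorization; both stages in the paper are learned networks, and every name here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_segmentation(pose, n_parts=4, size=8):
    """Stage 1 (stand-in): a segmentation map conditioned on pose.
    Here a deterministic toy labeling of rows into body parts; the
    paper samples from a learned generative model."""
    ys = np.arange(size).reshape(-1, 1) + int(pose)
    return np.broadcast_to(ys, (size, size)) % n_parts

def render_from_segments(seg, palette):
    """Stage 2 (stand-in): map each segment label to an appearance.
    The paper's second stage is a learned conditional image model."""
    return palette[seg]

palette = rng.uniform(size=(4, 3))          # one RGB colour per part
seg = sample_segmentation(pose=1)           # (8, 8) label map
img = render_from_segments(seg, palette)    # (8, 8, 3) image
```

The value of the split is that each stage handles one source of variance: the segmentation stage absorbs pose and shape, while the rendering stage only has to model appearance within known segments.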
Robust signatures for 3D face registration and recognition
PhD
Biometric authentication through face recognition has been an active area of
research for the last few decades, motivated by its application-driven demand. The popularity
of face recognition, compared to other biometric methods, is largely due to its
minimal requirement for subject cooperation, relative ease of data capture, and
similarity to the natural way humans distinguish each other.
3D face recognition has recently received particular interest since three-dimensional
face scans eliminate or reduce important limitations of 2D face images, such as illumination
changes and pose variations. In fact, three-dimensional face scans are usually captured
by scanners through the use of a constant structured-light source, making them invariant
to environmental changes in illumination. Moreover, a single 3D scan also captures the
entire face structure and allows for accurate pose normalisation.
However, one of the biggest challenges that still remains in three-dimensional face
scans is their sensitivity to large local deformations due to, for example, facial expressions.
Due to the nature of the data, deformations bring about large changes in the 3D geometry
of the scan. In addition, 3D scans are also characterised by noise and artefacts such
as spikes and holes, which are uncommon in 2D images and require a pre-processing
stage that is specific to the scanner used to capture the data.
The aim of this thesis is to devise a face signature that is compact in size and
overcomes the above-mentioned limitations. We investigate the use of facial regions and
landmarks towards a robust and compact face signature, and we study, implement and
validate a region-based and a landmark-based face signature. Combinations of regions and
landmarks are evaluated for their robustness to pose and expressions, while the matching
scheme is evaluated for its robustness to noise and data artefacts.
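One way a landmark-based signature can be made robust to pose, as the abstract requires, is to describe the face by the pairwise distances between its 3D landmarks, which are invariant to rotation and translation. A generic illustration of that idea, with hypothetical names, not the thesis's exact descriptor:

```python
import numpy as np

def landmark_signature(landmarks):
    """Compact face signature: the vector of pairwise Euclidean distances
    between 3D landmarks. Distances are unchanged by rigid motion, so the
    signature is invariant to head pose."""
    pts = np.asarray(landmarks, dtype=float)
    i, j = np.triu_indices(len(pts), k=1)   # every unordered landmark pair
    return np.linalg.norm(pts[i] - pts[j], axis=1)

def match_score(sig_a, sig_b):
    """Distance between two signatures; lower means more similar."""
    return float(np.linalg.norm(sig_a - sig_b))
```

Rotating the same landmark set rigidly leaves the signature unchanged, so two scans of one subject in different poses still match; expression robustness is what drives the combination with region-based cues studied in the thesis.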
Focal Spot, Winter 2004/2005