Mesh-to-raster based non-rigid registration of multi-modal images
Region of interest (ROI) alignment in medical images plays a crucial role in
diagnostics, procedure planning, treatment, and follow-up. Frequently, a model
is represented as a triangulated mesh while the patient data is provided from CAT
scanners as pixel or voxel data. Previously, we presented a 2D method for
curve-to-pixel registration. This paper contributes (i) a general
mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a
3D surface-to-voxel application, and (iii) a comprehensive quantitative
evaluation in 2D using ground truth provided by the simultaneous truth and
performance level estimation (STAPLE) method. The registration is formulated as
a minimization problem where the objective consists of a data term, which
involves the signed distance function of the ROI from the reference image, and
a higher order elastic regularizer for the deformation. The evaluation is based
on quantitative light-induced fluoroscopy (QLF) and digital photography (DP) of
decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each
showing one corresponding tooth in both modalities. The ROI in each image is
manually marked by three experts (900 curves in total). In the QLF-DP setting,
our approach significantly outperforms the mutual information-based
registration algorithm implemented with the Insight Segmentation and
Registration Toolkit (ITK) and Elastix.
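The minimization described in the abstract can be sketched schematically (the notation below is ours, not taken from the paper):

E(\varphi) = \int_{\Omega} D\!\big(d_{\mathrm{ROI}}(\varphi(x))\big)\,dx \;+\; \alpha\,\mathcal{S}[\varphi]

where \varphi is the sought deformation, d_{\mathrm{ROI}} is the signed distance function of the ROI in the reference image, D is the data term built from it, \mathcal{S} is the higher-order elastic regularizer, and \alpha is a weighting parameter balancing the two terms.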
Morphable Face Models - An Open Framework
In this paper, we present a novel open-source pipeline for face registration
based on Gaussian processes as well as an application to face image analysis.
Non-rigid registration of faces is significant for many applications in
computer vision, such as the construction of 3D Morphable face models (3DMMs).
Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid
deformation models, with B-splines and PCA models as examples. GPMMs separate
problem specific requirements from the registration algorithm by incorporating
domain-specific adaptions as a prior model. The novelties of this paper are the
following: (i) We present a strategy and modeling technique for face
registration that considers symmetry, multi-scale and spatially-varying
details. The registration is applied to neutral faces and facial expressions.
(ii) We release an open-source software framework for registration and
model-building, demonstrated on the publicly available BU3D-FE database. The
released pipeline also contains an implementation of Analysis-by-Synthesis
model adaptation to 2D face images, tested on the Multi-PIE and LFW databases.
This enables the community to reproduce, evaluate and compare the individual
steps of registration to model-building and 3D/2D model fitting. (iii) Along
with the framework release, we publish a new version of the Basel Face Model
(BFM-2017) with an improved age distribution and an additional facial
expression model.
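A Gaussian Process Morphable Model defines a prior over deformation fields through a user-chosen covariance function; the multi-scale idea mentioned above can be illustrated with a minimal sketch, summing Gaussian kernels at several scales (the parameters and 1D setting below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def gaussian_kernel(x, y, sigma, scale):
    """Scalar RBF kernel; sigma sets smoothness, scale sets deformation magnitude."""
    d2 = np.sum((x - y) ** 2)
    return scale * np.exp(-d2 / (2.0 * sigma ** 2))

def multiscale_kernel(x, y, sigmas=(50.0, 20.0, 5.0), scales=(10.0, 3.0, 1.0)):
    """Multi-scale prior: smooth large-scale deformation plus finer detail."""
    return sum(gaussian_kernel(x, y, s, a) for s, a in zip(sigmas, scales))

# Sample one 1D deformation field from the GP prior at 20 reference points.
pts = np.linspace(0.0, 100.0, 20)[:, None]
K = np.array([[multiscale_kernel(p, q) for q in pts] for p in pts])
rng = np.random.default_rng(0)
deformation = rng.multivariate_normal(np.zeros(len(pts)), K + 1e-6 * np.eye(len(pts)))
```

In a full GPMM the kernel is matrix-valued and defined over a reference surface, and symmetry or spatially varying behaviour is encoded by further modifying the covariance; the sketch only shows the kernel-combination principle.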
Personalized 3D mannequin reconstruction based on 3D scanning
Purpose
Currently, a common method of reconstructing a mannequin is based on body measurements or body features, which preserve only the body size and lack accurate geometric shape information. However, identical body measurements do not imply identical body shapes, which may result in a garment that does not fit the target human body. The purpose of this paper is to propose a novel scanning-based pipeline for reconstructing a personalized mannequin that preserves both body size and body shape information.
Design/methodology/approach
The authors first capture the body of a subject via 3D scanning and fit a statistical body model to the scanned data, yielding a skinned, articulated model of the subject. The scanned body is then adjusted to be pose-symmetric via linear blend skinning, and the mannequin part is extracted. Finally, a slice-based method is proposed to generate a shape-symmetric 3D mannequin.
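The linear blend skinning step used for pose symmetrization can be sketched in its standard form, where each vertex is deformed by a weighted sum of per-bone rigid transforms (a generic LBS sketch, not the authors' code):

```python
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """Deform vertices as a weighted combination of per-bone transforms.

    vertices:   (n, 3) rest-pose positions
    weights:    (n, b) skinning weights; each row sums to 1
    transforms: (b, 4, 4) homogeneous bone transforms
    """
    n = len(vertices)
    homo = np.hstack([vertices, np.ones((n, 1))])          # (n, 4) homogeneous
    per_bone = np.einsum('bij,nj->nbi', transforms, homo)  # each bone's result
    blended = np.einsum('nb,nbi->ni', weights, per_bone)   # weight and sum
    return blended[:, :3]

# Identity transforms leave the rest pose unchanged.
v = np.array([[0.0, 1.0, 2.0]])
posed = linear_blend_skinning(v, np.array([[1.0]]), np.eye(4)[None])
```

To symmetrize pose, the fitted skeleton's joint rotations would be replaced by mirrored/averaged counterparts and the mesh re-posed with this formula.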
Findings
A personalized 3D mannequin can be reconstructed from the scanned body. Compared to conventional methods, the proposed method preserves both the size and the shape of the original scanned body. The reconstructed mannequin can be imported directly into apparel CAD software. The proposed method is a step toward digitizing apparel manufacturing.
Originality/value
Compared to conventional methods, the main advantage of the authors' system is that it preserves both the size and the geometry of the original scanned body. The main contributions of this paper are as follows: decomposing the mannequin reconstruction process into pose symmetry and shape symmetry; proposing a novel scanning-based pipeline to reconstruct a personalized 3D mannequin; and presenting a slice-based method for the symmetrization of a 3D mesh.
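The slice-based shape symmetrization can be illustrated with a minimal 2D sketch: each horizontal slice contour is averaged with its reflection about the mid-sagittal plane. The point-pairing below (reversing the contour order) is a simplifying assumption for illustration; a robust implementation would re-pair mirrored points by nearest neighbour on the mesh slices.

```python
import numpy as np

def symmetrize_slice(slice_pts, plane_x=0.0):
    """Average a slice contour with its reflection about the plane x = plane_x.

    slice_pts: (n, 2) array of (x, y) contour points for one horizontal slice,
    ordered so that reversing the array pairs each point with its mirror.
    """
    mirrored = slice_pts.copy()
    mirrored[:, 0] = 2.0 * plane_x - mirrored[:, 0]  # reflect x-coordinates
    mirrored = mirrored[::-1]                        # pair left with right
    return 0.5 * (slice_pts + mirrored)

# Toy asymmetric contour: the left side bulges more than the right.
contour = np.array([[-2.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
sym = symmetrize_slice(contour)
```

Repeating this per slice and re-stitching the slices yields a shape-symmetric surface, which is the role of the slice-based step in the pipeline above.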