Accelerated Parameter Estimation with DALEχ
We consider methods for improving the estimation of constraints on a high-dimensional parameter space with a computationally expensive likelihood function. In such cases Markov chain Monte Carlo (MCMC) can take a long time to converge and tends to concentrate on finding the maxima rather than the often-desired confidence contours needed for accurate error estimation. We employ DALEχ (Direct Analysis of Limits via the Exterior of χ²) to determine confidence contours by minimizing a cost function parametrized to incentivize points in parameter space that are both on the confidence limit and far from previously sampled points. We compare DALEχ to the nested sampling algorithm implemented in MultiNest on a toy likelihood function that is highly non-Gaussian and non-linear in the mapping between parameter values and χ². We find that in high-dimensional cases DALEχ finds the same confidence limit as MultiNest using roughly an order of magnitude fewer evaluations of the likelihood function. DALEχ is open-source and available at https://github.com/danielsf/Dalex.git
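The cost-function idea in the abstract (small on the target confidence contour, rewarding distance from previously sampled points) can be sketched as follows. This is a hypothetical illustration, not the actual Dalex implementation; the quadratic contour penalty, the `weight` parameter, and the toy Gaussian likelihood are all assumptions:

```python
import numpy as np

def exploration_cost(point, log_likelihood, sampled_points,
                     target_delta_chi2, weight=1.0):
    """Hypothetical cost: small when `point` lies near the target chi^2
    contour and far from points that were already sampled."""
    chi2 = -2.0 * log_likelihood(point)            # chi^2 from the log-likelihood
    on_contour = (chi2 - target_delta_chi2) ** 2   # penalty for leaving the contour
    if sampled_points:
        d_min = min(np.linalg.norm(np.asarray(p) - point) for p in sampled_points)
    else:
        d_min = 0.0
    # subtracting the distance term rewards exploring far from old samples
    return on_contour - weight * d_min

# toy 2D Gaussian likelihood: chi^2(p) = |p|^2
log_l = lambda p: -0.5 * float(np.dot(p, p))
cost = exploration_cost(np.array([1.0, 0.0]), log_l, [np.zeros(2)],
                        target_delta_chi2=1.0)
```

On this toy problem the candidate lies exactly on the Δχ² = 1 contour, so the contour penalty vanishes and the cost reduces to the (negative) exploration reward.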
A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds
This paper proposes a segmentation-free, automatic and efficient procedure to
detect general geometric quadric forms in point clouds, where clutter and
occlusions are inevitable. Our everyday world is dominated by man-made objects
which are designed using 3D primitives (such as planes, cones, spheres,
cylinders, etc.). These objects are also omnipresent in industrial
environments. This gives rise to the possibility of abstracting 3D scenes
through primitives, thereby positioning these geometric forms as an integral part
of perception and high-level 3D scene understanding.
As opposed to state-of-the-art, where a tailored algorithm treats each
primitive type separately, we propose to encapsulate all types in a single
robust detection procedure. At the center of our approach lies a closed-form 3D
quadric fit, operating in both primal & dual spaces and requiring as few as four
oriented points. Around this fit, we design a novel, local null-space voting
strategy to reduce the 4-point case to 3. Voting is coupled with the famous
RANSAC and makes our algorithm orders of magnitude faster than its conventional
counterparts. This is the first method capable of performing a generic
cross-type multi-object primitive detection in difficult scenes. Results on
synthetic and real datasets support the validity of our method.
Comment: Accepted for publication at CVPR 201
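The closed-form fit named above can be illustrated in its simplest primal-space form: a general 3D quadric has 10 homogeneous coefficients, and each oriented point contributes one on-surface constraint plus the constraint that the quadric gradient be parallel to the surface normal. The sketch below assumes a plain least-squares/SVD null-space solve, not the paper's actual primal-dual formulation:

```python
import numpy as np

def quadric_fit(points, normals):
    """Fit q = [a..j] of Q = ax^2+by^2+cz^2+dxy+exz+fyz+gx+hy+iz+j by
    stacking on-surface constraints Q(p) = 0 and gradient-normal alignment
    grad Q(p) x n = 0, then taking the SVD null vector."""
    rows = []
    for (x, y, z), (nx, ny, nz) in zip(points, normals):
        rows.append([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, 1.0])
        # gradient of Q, written as a linear map G(p) @ q  (3 x 10)
        G = np.array([
            [2*x, 0.0, 0.0, y,   z,   0.0, 1.0, 0.0, 0.0, 0.0],
            [0.0, 2*y, 0.0, x,   0.0, z,   0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 2*z, 0.0, x,   y,   0.0, 0.0, 1.0, 0.0],
        ])
        rows.append(nz * G[1] - ny * G[2])   # (grad Q x n)_x = 0
        rows.append(nx * G[2] - nz * G[0])   # (grad Q x n)_y = 0
        rows.append(ny * G[0] - nx * G[1])   # (grad Q x n)_z = 0
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1]                            # right singular vector of the
                                             # smallest singular value

def eval_quadric(q, p):
    x, y, z = p
    mono = np.array([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, 1.0])
    return float(q @ mono)

# unit sphere from six oriented points; on a sphere the normal equals the position
pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0), (0, -1, 0), (0, 0, -1)]
q = quadric_fit(pts, pts)
```

The recovered coefficient vector (up to scale and sign) is proportional to x² + y² + z² - 1, so it vanishes on any point of the unit sphere.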
Generic Primitive Detection in Point Clouds Using Novel Minimal Quadric Fits
We present a novel and effective method for detecting 3D primitives in
cluttered, unorganized point clouds, without auxiliary segmentation or type
specification. We consider the quadric surfaces for encapsulating the basic
building blocks of our environments - planes, spheres, ellipsoids, cones or
cylinders, in a unified fashion. Moreover, quadrics allow us to model
higher-degree-of-freedom shapes, such as hyperboloids or paraboloids, that could
be used in non-rigid settings.
We begin by contributing two novel quadric fits targeting 3D point sets that
are endowed with tangent space information. Based upon the idea of aligning the
quadric gradients with the surface normals, our first formulation is exact and
requires as few as four oriented points. The second fit approximates the first,
and reduces the computational effort. We theoretically analyze these fits with
rigor, and give algebraic and geometric arguments. Next, by re-parameterizing
the solution, we devise a new local Hough voting scheme on the null-space
coefficients that is combined with RANSAC, reducing the minimal sample from four
oriented points to three. To the best of our knowledge, this is the
first method capable of performing a generic cross-type multi-object primitive
detection in difficult scenes without segmentation. Our extensive qualitative
and quantitative results show that our method is efficient and flexible, as
well as being accurate.
Comment: Submitted to IEEE Transactions on Pattern Analysis and Machine
Intelligence (T-PAMI). arXiv admin note: substantial text overlap with
arXiv:1803.0719
Robust computational intelligence techniques for visual information processing
This Ph.D. thesis is about image processing by computational intelligence techniques. Firstly, a general overview of the book is given, describing the motivation, the hypothesis, the objectives, and the methodology employed. The use and analysis of different mathematical norms is our goal. After that, the state of the art in the applications of the image-processing proposals is presented. In addition, the fundamentals of the image modalities, with particular attention to magnetic resonance, and the learning techniques used in this research, mainly based on neural networks, are summarized. Finally, the mathematical framework on which this work is based, ℓp-norms, is defined.
Three different parts associated with image-processing techniques follow. The first non-introductory part of the book collects the developments concerning image segmentation. Two of them are applications to video-surveillance tasks and try to model the background of a scenario using a specific camera. The other work is centered on the medical field, where the goal of segmenting diabetic wounds in a very heterogeneous dataset is addressed.
The second part focuses on the optimization and implementation of new models for curve and surface fitting in two and three dimensions, respectively. The first work presents a parabola-fitting algorithm based on measuring the distances of the interior and exterior points to the focus and the directrix. The second work turns to the ellipse shape, and it ensembles the information of multiple fitting methods. Last, the ellipsoid problem is addressed in a similar way to the parabola one.
The third part is exclusively dedicated to the super-resolution of Magnetic Resonance Images. In one of these works, an algorithm based on the random shifting technique is developed. Besides, noise removal and resolution enhancement are studied simultaneously. To end, the cost function of deep networks is modified with different combinations of norms in order to improve training.
Finally, the general conclusions of the research are presented and discussed, as well as the possible future research lines that can make use of the results obtained in this Ph.D. thesis.
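The parabola fit mentioned above relies on comparing each point's distance to the focus with its distance to the directrix; the two are equal exactly on the parabola. A minimal residual sketch, assuming a horizontal directrix y = const (the thesis's actual optimization over the focus and directrix parameters is omitted):

```python
import numpy as np

def parabola_residuals(points, focus, directrix_y):
    """Residual per point: |p - focus| - dist(p, horizontal directrix).
    Zero exactly on the parabola; the sign tells which side a point lies on."""
    points = np.asarray(points, float)
    d_focus = np.linalg.norm(points - np.asarray(focus, float), axis=1)
    d_directrix = np.abs(points[:, 1] - directrix_y)
    return d_focus - d_directrix

# points on y = x^2/4, whose focus is (0, 1) and directrix is y = -1
pts = [(0, 0), (2, 1), (-2, 1), (4, 4)]
res = parabola_residuals(pts, focus=(0, 1), directrix_y=-1.0)

# an interior point (closer to the focus than to the directrix) gives a
# negative residual, which is how interior/exterior can be separated
res_in = parabola_residuals([(0, 1)], focus=(0, 1), directrix_y=-1.0)
```

Summing the squared residuals over a point set yields a fitting cost that could be minimized over the focus position and directrix offset.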
Shape Deformation Statistics and Regional Texture-Based Appearance Models for Segmentation
Transferring identified regions of interest (ROIs) from planning-time MRI images to the trans-rectal ultrasound (TRUS) images used to guide prostate biopsy is difficult because of the large difference in appearance between the two modalities as well as the deformation of the prostate's shape caused by the TRUS transducer. This dissertation describes methods for addressing these difficulties by both estimating a patient's prostate shape after the transducer is applied and then locating it in the TRUS image using skeletal models (s-reps) of prostate shapes. First, I introduce a geometrically-based method for interpolating discretely sampled s-reps into continuous objects. This interpolation is important for many tasks involving s-reps, including fitting them to new objects as well as the later applications described in this dissertation. This method is shown to be accurate for ellipsoids where an analytical solution is known. Next, I create a method for estimating a probability distribution on the difference between two shapes. Because s-reps live in a high-dimensional curved space, I use Principal Nested Spheres (PNS) to transform these representations to instead live in a flat space where standard techniques can be applied. This method is shown effective both on synthetic data as well as for modeling the deformation caused by the TRUS transducer to the prostate. In cases where appearance is described via a large number of parameters, such as intensity combined with multiple texture features, it is computationally beneficial to be able to turn these large tuples of descriptors into a scalar value. Using the inherent localization properties of s-reps, I develop a method for using regionally-trained classifiers to turn appearance tuples into the probability that the appearance tuple in question came from inside the prostate boundary. 
This method is shown to be able to accurately discern inside appearances from outside appearances over a large majority of the prostate boundary. Finally, I combine these techniques into a deformable model-based segmentation framework to segment the prostate in TRUS. By applying the learned mean deformation to a patient's prostate and then deforming it so that voxels with a high probability of coming from the prostate's interior are also in the model's interior, I am able to generate prostate segmentations that are comparable to state-of-the-art methods.
Continuous Medial Models in Two-Sample Statistics of Shape
In statistical shape analysis, the foremost question is how shapes should be represented. The number of parameters required for a given accuracy and the types of deformation they can express directly influence the quality and type of statistical inferences one can make. One example is a medial model, which represents a solid object using a skeleton of a lower dimension and naturally expresses intuitive changes such as "bending", "twisting", and "thickening". In this dissertation I develop a new three-dimensional medial model that allows continuous interpolation of the medial surface and provides a map back and forth between the boundary and its medial axis. It is the first such model to support branching, allowing the representation of a much wider class of objects than previously possible using continuous medial methods. A measure defined on the medial surface then allows one to write integrals over the boundary and the object interior in medial coordinates, enabling the expression of important object properties in an object-relative coordinate system. I show how these properties can be used to optimize correspondence during model construction. This improved correspondence reduces variability due to how the model is parameterized, which could otherwise mask a true shape change effect. Finally, I develop a method for performing global and local hypothesis testing between two groups of shapes. This method is capable of handling the nonlinear spaces the shapes live in and is well defined even in the high-dimension, low-sample-size case. It naturally reduces to several well-known statistical tests in the linear and univariate cases.
A Survey of Surface Reconstruction from Point Clouds
The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contains a wide variety of defects. While much of the earlier work has been focused on reconstructing a piecewise-smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations – not necessarily the explicit geometry. We survey the field of surface reconstruction, and provide a categorization with respect to priors, data imperfections, and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques, and provide directions for future work in surface reconstruction.
Computational fluids domain reduction to a simplified fluid network
The primary goal of this project is to demonstrate the practical use of data-mining algorithms to cluster a solved steady-state computational fluid dynamics (CFD) flow domain into a simplified lumped-parameter network. A commercial-quality code, “cfdMine”, was created using a volume-weighted k-means clustering that can accomplish the clustering of a 20-million-cell CFD domain on a single CPU in several hours or less. Additionally, agglomeration and k-means with the Mahalanobis distance were added as optional post-processing steps to further enhance the separation of the clusters. The resultant nodal network is considered a reduced-order model and can be solved transiently at very minimal computational cost. The reduced-order network is then instantiated in the commercial thermal solver MuSES to perform transient conjugate heat transfer using convection predicted by the lumped network (based on steady-state CFD). When inserting the lumped nodal network into a MuSES model, the potential for developing a “localized heat transfer coefficient” is shown to be an improvement over existing techniques. Also, it was found that the clustering yields a new flow-visualization technique. Finally, fixing clusters near equipment demonstrates a new capability to track temperatures near specific objects (such as equipment in vehicles).
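The volume-weighted k-means step can be sketched in a few lines: assignments use plain Euclidean distance in feature space, but each centroid update weights cells by their volume so that large cells dominate the cluster mean. This is a simplified illustration, not the cfdMine code; the feature choice, initialization, and convergence handling are all assumptions:

```python
import numpy as np

def volume_weighted_kmeans(features, volumes, k, iters=50, seed=0):
    """Sketch of k-means over CFD cells where each cell's volume weights
    the centroid update (large cells pull cluster means toward them)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each cell to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # volume-weighted centroid update
        for j in range(k):
            m = labels == j
            if m.any():
                w = volumes[m]
                centers[j] = (features[m] * w[:, None]).sum(0) / w.sum()
    return labels, centers

# toy "flow field": two well-separated groups of cells with unequal volumes
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
vols = np.array([1.0, 2.0, 1.0, 3.0])
labels, centers = volume_weighted_kmeans(feats, vols, k=2)
```

In a CFD setting the feature vector per cell would typically combine position, velocity, and temperature, and the volume weighting keeps the lumped network's nodes representative of the mass each cluster actually contains.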
Multiscale Modeling of Granular Materials
Granular materials have a “discrete” nature whose global mechanical behaviors originate from grain-scale micromechanical mechanisms. The intriguing properties and non-trivial behaviors of a granular material pose formidable challenges to the multiscale modeling of these materials. Some of the key challenges include upscaling coarse-scale continuum equations from fine-scale governing equations, calibrating material parameters at different scales, alleviating pathological mesh dependency in continuum models, and generating unit cells with versatile morphological details. This dissertation aims to address the aforementioned challenges and to investigate the mechanical behavior of granular materials through multiscale modeling.
Firstly, a three-dimensional nonlocal multiscale discrete-continuum model is presented for modeling the mechanical behavior of granular materials. We establish an information-passing coupling scheme between a discrete element model (DEM), which explicitly replicates the granular motion of individual particles, and a finite element continuum model, which captures the nonlocal overall response of the granular assemblies. Secondly, a new staggered multilevel material identification procedure is developed for phenomenological critical state plasticity models. The emphasis is placed on cases in which available experimental data and constraints are insufficient for calibration. The key idea is to create a secondary virtual experimental database from high-fidelity models, such as discrete element simulations, and then merge both the actual experimental data and the secondary database into an extended digital database to determine material parameters for the phenomenological macroscopic critical state plasticity model. This expansion of the database provides the additional constraints necessary for calibrating the phenomenological critical state plasticity models.
Thirdly, a regularized phenomenological multiscale model is investigated, in which elastic properties are computed using direct homogenization and subsequently evolved using a simple three-parameter orthotropic continuum damage model. The salient feature of the model is a unified regularization framework based on the concept of effective softening strain. The unified regularization scheme is employed in the context of constitutive law rescaling and the staggered nonlocal approach to alleviate pathological mesh dependency. Lastly, a robust parametric model is presented for generating unit cells with randomly distributed inclusions. The proposed model is computationally efficient, using a hierarchy of algorithms with increasing computational complexity, and is able to generate unit cells with different inclusion shapes.
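Unit-cell generation with randomly distributed inclusions can be sketched, in its simplest form, as rejection sampling of non-overlapping spheres. This is an illustrative baseline only, not the dissertation's hierarchical algorithm; the cubic cell, equal radii, spherical shapes, and absence of periodic wrapping are all simplifying assumptions:

```python
import numpy as np

def place_inclusions(n, radius, cell=1.0, max_tries=10000, seed=0):
    """Sketch: place n non-overlapping spherical inclusions of a given
    radius in a cubic unit cell by rejection sampling."""
    rng = np.random.default_rng(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        # draw a candidate center that keeps the sphere fully inside the cell
        c = rng.uniform(radius, cell - radius, size=3)
        # accept only if it does not overlap any previously placed sphere
        if all(np.linalg.norm(c - o) >= 2 * radius for o in centers):
            centers.append(c)
    return np.array(centers)

centers = place_inclusions(5, 0.1)
```

Rejection sampling degrades quickly at high volume fractions, which is presumably why a hierarchy of increasingly sophisticated placement algorithms is needed for dense or non-spherical microstructures.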
High-performance geometric vascular modelling
Image-based high-performance geometric vascular modelling and reconstruction is an essential component of computer-assisted surgery on the diagnosis, analysis and treatment of cardiovascular diseases. However, it is an extremely challenging task to efficiently reconstruct the accurate geometric structures of blood vessels out of medical images. For one thing, the shape of an individual section of a blood vessel is highly irregular because of the squeeze of other tissues and the deformation caused by vascular diseases. For another, a vascular system is a very complicated network of blood vessels with different types of branching structures. Although some existing vascular modelling techniques can reconstruct the geometric structure of a vascular system, they are either time-consuming or lacking sufficient accuracy. What is more, these techniques rarely consider the interior tissue of the vascular wall, which consists of complicated layered structures. As a result, it is necessary to develop a better vascular geometric modelling technique, which is not only of high performance and high accuracy in the reconstruction of vascular surfaces, but can also be used to model the interior tissue structures of the vascular walls.This research aims to develop a state-of-the-art patient-specific medical image-based geometric vascular modelling technique to solve the above problems. The main contributions of this research are:- Developed and proposed the Skeleton Marching technique to reconstruct the geometric structures of blood vessels with high performance and high accuracy. With the proposed technique, the highly complicated vascular reconstruction task is reduced to a set of simple localised geometric reconstruction tasks, which can be carried out in a parallel manner. 
These locally reconstructed vascular geometric segments are then combined using shape-preserving blending operations to faithfully represent the geometric shape of the whole vascular system.- Developed and proposed the Thin Implicit Patch method to realistically model the interior geometric structures of the vascular tissues. This method allows the multi-layer interior tissue structures to be embedded inside the vascular wall to illustrate the geometric details of the blood vessel in the real world.