578 research outputs found

    Machine Learning Approaches to Human Body Shape Analysis

    Soft biometrics, the biomedical sciences, and many other fields pay particular attention to the geometric description of the human body and its variations. Despite numerous contributions, interest remains high given the non-rigid nature of the human body, which can assume different poses and numerous shapes due to variable body composition. Unfortunately, a well-known and costly requirement in data-driven machine learning, and particularly in human-centred analysis, is the availability of data in the form of geometric information (body measurements) paired with vision information (natural images, 3D meshes, etc.). We introduce a computer graphics framework able to generate thousands of synthetic human body meshes, representing a population of individuals with stratified information: gender, Body Fat Percentage (BFP), anthropometric measurements, and pose. This contribution permits an extensive analysis of different bodies in different poses while avoiding a demanding and expensive acquisition process. We design a virtual environment that exploits the generated bodies to infer the body surface area (BSA) from a single view. The framework also makes it possible to simulate the acquisition process of recently introduced RGB-D devices, disentangling different noise components (sensor noise, optical distortion, body part occlusions). Common geometric descriptors in soft biometrics, as well as in the biomedical sciences, are based on body measurements. Unfortunately, as we prove, these descriptors are not pose invariant, restricting their usability to controlled scenarios. We introduce a differential geometry approach that treats body pose variations as isometric transformations of the body surface and body composition changes as covariant with the body surface area. This setting permits the use of the Laplace-Beltrami operator on the 2D body manifold, describing the body with a compact, efficient, and pose-invariant representation. We design a neural network architecture able to infer important body semantics from the spectral descriptors, closing the gap between abstract spectral features and traditional measurement-based indices. Finally, studying the manifold of body shapes, we propose an innovative generative adversarial model able to learn the space of body shapes; the method makes it possible to generate new bodies with unseen geometries as a walk in the latent space, a significant advantage over traditional generative methods.
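
    The pose-invariant representation above rests on the spectrum of the Laplace-Beltrami operator of the body surface. As a minimal, purely illustrative sketch (not the thesis code; the function name and parameters are assumptions), the snippet below builds a cotangent Laplacian with a lumped mass matrix for a triangle mesh and returns the low end of its spectrum, a ShapeDNA-style descriptor that is unchanged under isometric pose changes.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

def laplace_beltrami_spectrum(verts, faces, n_eigs=30):
    """Smallest n_eigs Laplace-Beltrami eigenvalues of a triangle mesh.

    verts: (n, 3) float array of vertex positions.
    faces: (m, 3) int array of vertex indices per triangle.
    """
    n = verts.shape[0]
    W = sparse.lil_matrix((n, n))
    mass = np.zeros(n)
    for ia, ib, ic in faces:
        va, vb, vc = verts[ia], verts[ib], verts[ic]
        area = 0.5 * np.linalg.norm(np.cross(vb - va, vc - va))
        # The cotangent of the angle at each vertex weights the opposite edge.
        for apex, p, q in ((ia, ib, ic), (ib, ic, ia), (ic, ia, ib)):
            cot = np.dot(verts[p] - verts[apex], verts[q] - verts[apex]) / (2.0 * area)
            W[p, q] += 0.5 * cot
            W[q, p] += 0.5 * cot
        mass[[ia, ib, ic]] += area / 3.0              # lumped (barycentric) mass matrix
    W = W.tocsr()
    L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # stiffness matrix L = D - W
    M = sparse.diags(mass)
    # Generalized eigenproblem L * phi = lambda * M * phi; shift-invert near zero
    # returns the low (pose-invariant) end of the spectrum.
    vals, _ = eigsh(L, k=n_eigs, M=M, sigma=-1e-8, which='LM')
    return np.sort(vals)

    Comparing the leading eigenvalues of the same synthetic body in two different poses should, up to discretization and numerical error, yield nearly identical descriptors, while a change in body composition shifts the spectrum.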

    Modeling small objects under uncertainties: novel algorithms and applications.

    Active Shape Models (ASM), Active Appearance Models (AAM) and Active Tensor Models (ATM) are common approaches to modeling elastic (deformable) objects. These models require an ensemble of shapes and textures, annotated by human experts, in order to identify the model order and parameters. A candidate object may then be represented by a weighted sum of basis functions generated by an optimization process. These methods have been very effective for modeling deformable objects in biomedical imaging, biometrics, computer vision and graphics. They have been tried mainly on objects with known features that are amenable to manual (expert) annotation; they have not been examined on objects whose ambiguities are too severe for experts to characterize them uniquely. This dissertation presents a unified approach for modeling, detecting, segmenting and categorizing small objects under uncertainty, with a focus on lung nodules that may appear in low dose CT (LDCT) scans of the human chest. The AAM, ASM and ATM approaches are used for the first time on this application. A new formulation of object detection by template matching, posed as an energy optimization, is introduced. Nine similarity measures for matching have been quantitatively evaluated for detecting nodules less than 1 cm in diameter. Statistical methods that combine intensity, shape and spatial interaction are examined for the segmentation of small objects. Extensions of the intensity model using the linear combination of Gaussians (LCG) approach are introduced in order to estimate the number of modes in the LCG equation. The classical maximum a posteriori (MAP) segmentation approach has been adapted to handle the segmentation of small lung nodules that are randomly located in the lung tissue. A novel empirical approach has been devised to simultaneously detect and segment the lung nodules in LDCT scans. The level set method was also applied for lung nodule segmentation; a new formulation for the energy function controlling the level set propagation has been introduced, taking into account the specific properties of the nodules. Finally, a novel approach for classifying the segmented nodules into categories has been introduced. Geometric object descriptors such as SIFT, ASIFT, SURF and LBP have been used for feature extraction and matching of small lung nodules; the LBP has been found to be the most robust. Categorization implies classification of detected and segmented objects into classes or types. The object descriptors have been deployed in the detection step for false positive reduction, and in the categorization stage to assign a class and type to the nodules. The AAM/ASM/ATM models have been used for the categorization stage. The front-end processes of lung nodule modeling, detection, segmentation and classification/categorization are model-based and data-driven. This dissertation is the first attempt in the literature at creating an entirely model-based approach for lung nodule analysis.
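
    The detection step above poses template matching as an energy optimization and evaluates several similarity measures. Purely as an illustration (this is not the dissertation's code, and normalized cross-correlation is only one plausible choice among such measures), the sketch below scores a small 3D nodule template against an LDCT volume using FFT-based convolutions; all names and thresholds are assumptions.

import numpy as np
from scipy.signal import fftconvolve

def ncc_volume(volume, template, eps=1e-8):
    """Normalized cross-correlation of a small 3D template over a CT volume (valid positions)."""
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    # Correlation = convolution with the template flipped along every axis.
    num = fftconvolve(volume, t[::-1, ::-1, ::-1], mode='valid')
    # Local sums of the volume under the template footprint, for per-position normalization.
    ones = np.ones_like(template, dtype=float)
    local_sum = fftconvolve(volume, ones, mode='valid')
    local_sq = fftconvolve(volume ** 2, ones, mode='valid')
    local_energy = local_sq - local_sum ** 2 / template.size   # sum of squared deviations
    denom = np.sqrt(np.maximum(local_energy, 0.0)) * t_norm + eps
    return num / denom              # scores in [-1, 1]; peaks are nodule candidates

# Candidate positions: local maxima of the score above a threshold, e.g.
# candidates = np.argwhere(ncc_volume(ct, nodule_template) > 0.7)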

    Understanding the Structure-Function Relationship in Peptide-Enabled High Entropy Alloy Nanocatalysts

    The structural complexity of high entropy alloy nanocatalysts (HEAs), afforded by the homogeneous mixing of five or more elements, has resulted in a limited understanding of the origin of their promising electrocatalytic properties. This thesis investigates the structure-function relationship in HEAs using advanced material characterization techniques. First, a methodology for resolving the atomic-scale structure of peptide-enabled HEAs was developed using high-energy X-ray diffraction (HE-XRD) coupled with atomic pair distribution function (PDF) analysis and reverse Monte Carlo (RMC) simulations, yielding structure models spanning the full length scale of the HEAs. Coordination analysis of these structure models revealed a multifunctional interplay of geometric and electronic attributes of surface atoms in HEAs that was responsible for the catalytic activity enhancement during the methanol electrooxidation reaction. Using this methodology together with peptide sequence engineering, the structure-function relationship of model PtPdAuCoSn HEAs during the ethanol electrooxidation reaction (EOR) was then studied. Compositional analysis of the PtPdAuCoSn structure models revealed distinct miscibility characteristics that were attributed to the unique biotic-abiotic interactions. Analysis of the structure models identified the rapid dehydrogenation of the CH3CHO intermediate into CH3COads in an optimized adsorption configuration as the factor contributing to the high selectivity towards CH3COO- in PtPdAuCoSn HEAs. Armed with these insights, a study was designed to understand the effect of the Pt concentration on the structure-function relationship of PtPdAuCoSn HEAs using spatiotemporal structural insights from in-situ PDF. The structure models demonstrated a degree of metastability as a function of their corresponding configurational entropy. Analysis of the structure models revealed that the high selectivity towards CH3COO- in PtPdAuCoSn HEAs during EOR originates from the enhanced distribution of Pd and Co surface atoms. In summary, this thesis uses atomic PDF analysis and RMC simulations to draw structure-function correlations in HEAs, presenting a path forward for developing strategies for the rational design of HEAs. Through collaborative efforts between theoreticians and experimentalists, the methodology presented here can form the basis for accelerating the discovery of promising HEA configurations for emerging electrocatalytic applications.
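
    The structure models above are refined against experimental pair distribution functions. As a rough illustration of the quantity involved (not the thesis's analysis pipeline, which relies on dedicated PDF/RMC software and proper scattering normalization), the sketch below histograms the interatomic distances of a candidate nanoparticle model into a crude radial pair distribution.

import numpy as np
from scipy.spatial.distance import pdist

def radial_pair_distribution(positions, r_max=20.0, dr=0.02):
    """Crude g(r) for a finite nanoparticle model (positions: (N, 3), in angstroms)."""
    n_atoms = positions.shape[0]
    dists = pdist(positions)                          # every unordered atom pair once
    bins = np.arange(0.0, r_max + dr, dr)
    counts, edges = np.histogram(dists, bins=bins)
    r = 0.5 * (edges[:-1] + edges[1:])                # bin centers
    # Normalize pair counts by the ideal-gas expectation in each spherical shell,
    # using a rough number density from the particle's bounding sphere; for a finite
    # particle g(r) decays toward zero beyond the particle diameter.
    radius = np.linalg.norm(positions - positions.mean(axis=0), axis=1).max()
    rho0 = n_atoms / (4.0 / 3.0 * np.pi * radius ** 3)
    shell_volume = 4.0 * np.pi * r ** 2 * dr
    expected_pairs = 0.5 * n_atoms * rho0 * shell_volume
    g_r = counts / np.maximum(expected_pairs, 1e-12)
    return r, g_r

    An RMC refinement would then repeatedly perturb atomic positions and keep the moves that bring a properly normalized model PDF closer to the experimental G(r).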

    Efficient and Accurate Segmentation of Defects in Industrial CT Scans

    Industrial computed tomography (CT) is an essential tool for the non-destructive inspection of cast light-metal or plastic parts. Comprehensive testing not only helps to ensure the stability and durability of a part; it also allows reducing the rejection rate by supporting the optimization of the casting process, and saving material (and weight) by producing equivalent but more delicate structures. With a CT scan it is theoretically possible to locate any defect in the part under examination and to determine its exact shape, which in turn helps to draw conclusions about its harmfulness. However, most of the time the data quality is not good enough to allow segmenting the defects with simple filter-based methods that operate directly on the gray values, especially when the inspection is expanded to the entire production. In such in-line inspection scenarios the tight cycle times further limit the available time for the acquisition of the CT scan, which renders the scans noisy and prone to various artifacts. In recent years, dramatic advances in deep learning (and convolutional neural networks in particular) have made even the reliable detection of small objects in cluttered scenes possible. These methods are a promising approach for quickly obtaining a reliable and accurate defect segmentation even in unfavorable CT scans. The huge drawback: a lot of precisely labeled training data is required, which is utterly challenging to obtain, particularly for the detection of tiny defects in huge, highly artifact-afflicted, three-dimensional voxel data sets. Hence, a significant part of this work deals with the acquisition of precisely labeled training data. Firstly, we consider facilitating the manual labeling process: our experts annotate high-quality CT scans with high spatial and contrast resolution, and we then transfer these labels to an aligned "normal" CT scan of the same part, which exhibits all the challenging aspects we expect in production use. Nonetheless, due to the indecisiveness of the labeling experts about what to annotate as defective, the labels remain fuzzy. Thus, we additionally explore different approaches to generate artificial training data for which a precise ground truth can be computed. We find accurate labels to be crucial for proper training. We evaluate (i) domain randomization, which simulates a super-set of reality with simple transformations, (ii) generative models, which are trained to produce samples of the real-world data distribution, and (iii) realistic simulations, which capture the essential aspects of real CT scans. Here, we develop a fully automated simulation pipeline which provides us with an arbitrary amount of precisely labeled training data. First, we procedurally generate virtual cast parts in which we place plausible artificial casting defects. Then, we realistically simulate CT scans which include typical CT artifacts like scatter, noise, cupping, and ring artifacts. Finally, we compute a precise ground truth by determining, for each voxel, its overlap with the defect mesh. To determine whether our realistically simulated CT data is eligible to serve as training data for machine learning methods, we compare the prediction performance of learning-based and non-learning-based defect recognition algorithms on the simulated data and on real CT scans. In an extensive evaluation, we compare our novel deep learning method against a baseline of image processing and traditional machine learning algorithms.
This evaluation shows how much defect detection benefits from learning-based approaches. In particular, we compare (i) a filter-based anomaly detection method which finds defect indications by subtracting the original CT data from a generated "defect-free" version, (ii) a pixel-classification method which, based on densely extracted hand-designed features, lets a random forest decide whether an image element is part of a defect or not, and (iii) a novel deep learning method which combines a U-Net-like encoder-decoder pair of three-dimensional convolutions with an additional refinement step. The encoder-decoder pair yields a high recall, which allows us to detect even very small defect instances. The refinement step yields a high precision by sorting out the false positive responses. We extensively evaluate these models on our realistically simulated CT scans as well as on real CT scans in terms of their probability of detection, which tells us with which probability a defect of a given size can be found in a CT scan of a given quality, and their intersection over union, which tells us how precise the segmentation mask is in general. While the learning-based methods clearly outperform the image processing method, the deep learning method in particular stands out for its inference speed and its prediction performance on challenging CT scans, as they occur, for example, in in-line scenarios. Finally, we further explore the possibilities and the limitations of combining our fully automated simulation pipeline with our deep learning model. Since the deep learning method yields reliable results even for CT scans of low data quality, we examine by how much we can reduce the scan time while still maintaining proper segmentation results. Then, we take a look at the transferability of these promising results to CT scans of parts made of different materials and with different manufacturing techniques, including plastic injection molding, iron casting, additive manufacturing, and composed multi-material parts. Each of these tasks comes with its own challenges, like an increased artifact level or different types of defects which occasionally are hard to detect even for the human eye. We tackle these challenges by employing our simulation pipeline to produce virtual counterparts that capture the tricky aspects and by fine-tuning the deep learning method on this additional training data. With that we can tailor our approach towards specific tasks, achieving reliable and robust segmentation results even for challenging data. Lastly, we examine whether the deep learning method, trained on our realistically simulated data, can be taught to distinguish between different types of defects (the reason why we require a precise segmentation in the first place), and whether it can detect out-of-distribution data for which its predictions become less trustworthy, i.e. provide an uncertainty estimate.
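
    The deep learning method above pairs a U-Net-like encoder-decoder of three-dimensional convolutions with a refinement step. The sketch below is a deliberately small, assumed layout of the encoder-decoder part only (not the thesis architecture, and without the refinement stage); it illustrates how a skip connection carries fine spatial detail to the voxel-wise defect logits.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """One-level 3D encoder-decoder with a skip connection; real models stack more levels."""

    def __init__(self, base_channels=16):
        super().__init__()
        self.enc = conv_block(1, base_channels)
        self.down = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base_channels, 2 * base_channels)
        self.up = nn.ConvTranspose3d(2 * base_channels, base_channels, kernel_size=2, stride=2)
        self.dec = conv_block(2 * base_channels, base_channels)   # skip doubles the channels
        self.head = nn.Conv3d(base_channels, 1, kernel_size=1)    # per-voxel defect logit

    def forward(self, x):                     # x: (batch, 1, depth, height, width)
        skip = self.enc(x)
        mid = self.bottleneck(self.down(skip))
        up = self.up(mid)
        return self.head(self.dec(torch.cat([up, skip], dim=1)))

# Example: logits = TinyUNet3D()(torch.randn(1, 1, 64, 64, 64))
# A separate refinement step (as described above) would then prune false positive responses.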

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. The volume also features benchmarks, comparative evaluations, and reviews.