
    Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis

    We introduce a data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis. From a partially-scanned input shape, our method first infers a low-resolution -- but complete -- output. To this end, we introduce a 3D-Encoder-Predictor Network (3D-EPN) which is composed of 3D convolutional layers. The network is trained to predict and fill in missing data, and operates on an implicit surface representation that encodes both known and unknown space. This allows us to predict global structure in unknown areas at high accuracy. We then correlate these intermediary results with 3D geometry from a shape database at test time. In a final pass, we propose a patch-based 3D shape synthesis method that imposes the 3D geometry from these retrieved shapes as constraints on the coarsely-completed mesh. This synthesis process enables us to reconstruct fine-scale detail and generate high-resolution output while respecting the global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms state-of-the-art completion methods, the main contribution of our work lies in the combination of a data-driven shape predictor and analytic 3D shape synthesis. In our results, we show extensive evaluations on a newly-introduced shape completion benchmark for both real-world and synthetic data.
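
    The abstract does not spell out the 3D-EPN's layer configuration. The following is a minimal sketch of a volumetric encoder-predictor in PyTorch, assuming an illustrative two-channel 32^3 input grid (distance values plus a known/unknown mask) and made-up layer sizes; it shows the general encoder-predictor pattern rather than the authors' exact network.

    import torch
    import torch.nn as nn

    class EncoderPredictor3D(nn.Module):
        """Toy volumetric encoder-predictor: a partial distance-field grid
        (plus a channel marking known vs. unknown space) goes in, a coarse
        but complete grid comes out. Layer sizes are illustrative only."""

        def __init__(self, in_channels=2, base=16):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(in_channels, base, 4, stride=2, padding=1),   # 32^3 -> 16^3
                nn.ReLU(inplace=True),
                nn.Conv3d(base, base * 2, 4, stride=2, padding=1),      # 16^3 -> 8^3
                nn.ReLU(inplace=True),
            )
            self.predictor = nn.Sequential(
                nn.ConvTranspose3d(base * 2, base, 4, stride=2, padding=1),  # 8^3 -> 16^3
                nn.ReLU(inplace=True),
                nn.ConvTranspose3d(base, 1, 4, stride=2, padding=1),         # 16^3 -> 32^3
            )

        def forward(self, x):
            return self.predictor(self.encoder(x))

    # One hypothetical partial scan on a 32^3 grid with 2 channels.
    net = EncoderPredictor3D()
    partial = torch.randn(1, 2, 32, 32, 32)
    completed = net(partial)   # shape (1, 1, 32, 32, 32)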

    Magnetic Resonance Image segmentation using Pulse Coupled Neural Networks

    The Pulse Coupled Neural Network (PCNN) was developed by Eckhorn to model the observed synchronization of neural assemblies in the visual cortex of small mammals such as the cat. In this dissertation, three novel PCNN-based automatic segmentation algorithms were developed to segment Magnetic Resonance Imaging (MRI) data: (a) PCNN image 'signature' based single-region cropping; (b) PCNN - Kittler-Illingworth minimum error thresholding; and (c) PCNN - Gaussian Mixture Model - Expectation Maximization (GMM-EM) based multiple-material segmentation. Among other control tests, the proposed algorithms were tested on three T2-weighted acquisition configurations comprising a total of 42 rat brain volumes, 20 T1-weighted MR human brain volumes from Harvard's Internet Brain Segmentation Repository, and 5 human MR breast volumes. The results were compared against manually segmented gold standards, Brain Extraction Tool (BET) V2.1 results, published results, and single-threshold methods. The Jaccard similarity index was used for numerical evaluation of the proposed algorithms. Our quantitative results demonstrate conclusively that PCNN-based multiple-material segmentation strategies can approach a human eye's intensity delineation capability in grayscale image segmentation tasks.
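
    The PCNN update equations are not given in the abstract; the sketch below uses a common textbook simplification of the Eckhorn model (pixel intensity as feeding input, linking from a local neighbourhood, an exponentially decaying dynamic threshold) together with the Jaccard index used for evaluation. Parameter values are illustrative, not the dissertation's tuned settings.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def pcnn_iterations(image, n_iter=20, beta=0.2, alpha_theta=0.2, v_theta=20.0):
        """Simplified PCNN: returns the final pulse output and the image
        'signature' (number of firing neurons per iteration)."""
        S = image.astype(float)              # feeding input = pixel intensity
        Y = np.zeros_like(S)                 # pulse output
        theta = np.full_like(S, S.max())     # dynamic threshold
        signature = []
        for _ in range(n_iter):
            L = uniform_filter(Y, size=3)    # linking from 3x3 neighbourhood
            U = S * (1.0 + beta * L)         # internal activity
            Y = (U > theta).astype(float)    # neurons that fire this step
            theta = np.exp(-alpha_theta) * theta + v_theta * Y
            signature.append(int(Y.sum()))
        return Y, signature

    def jaccard(mask_a, mask_b):
        """Jaccard similarity index between two binary segmentation masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return (a & b).sum() / (a | b).sum()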

    Automated Segmentation of Cerebral Aneurysm Using a Novel Statistical Multiresolution Approach

    Cerebral Aneurysm (CA) is a vascular disease that threatens the lives of many adults; it affects almost 1.5-5% of the general population. Subarachnoid Hemorrhage (SAH), resulting from a ruptured CA, has high rates of morbidity and mortality. Therefore, radiologists aim to detect and diagnose it at an early stage by analyzing medical images, to prevent or reduce its damage. The analysis process is traditionally done manually. However, with emerging technology, Computer-Aided Diagnosis (CAD) algorithms are adopted in clinics to overcome the disadvantages of the traditional process, such as dependence on the radiologist's experience, inter- and intra-observer variability, the probability of error, which grows with the number of medical images to be analyzed, and the artifacts introduced by the medical image acquisition methods (i.e., MRA, CTA, PET, RA, etc.), which impede the radiologist's work. For these reasons, many research works propose different segmentation approaches to automate the analysis process of detecting a CA using complementary segmentation techniques; but because developing a robust, reproducible, and reliable algorithm to detect a CA regardless of its shape, size, and location across a variety of acquisition methods is challenging, a diversity of proposed and developed approaches exist which still suffer from some limitations. This thesis aims to contribute to this research area by adopting two promising techniques based on multiresolution and statistical approaches in the Two-Dimensional (2D) domain. The first technique is the Contourlet Transform (CT), which empowers the segmentation by extracting features not apparent at the normal image scale. The second technique is the Hidden Markov Random Field model with Expectation Maximization (HMRF-EM), which segments the image based on the relationship of the neighboring pixels in the contourlet domain. The developed algorithm reveals promising results on the four tested Three-Dimensional Rotational Angiography (3D RA) datasets, where an objective and a subjective evaluation are carried out. For the objective evaluation, six performance metrics are adopted: accuracy, Dice Similarity Index (DSI), False Positive Ratio (FPR), False Negative Ratio (FNR), specificity, and sensitivity. For the subjective evaluation, one expert and four observers with some medical background are involved to assess the segmentation visually. Both evaluations compare the segmented volumes against the ground truth data.
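
    The six performance metrics listed above follow directly from the confusion matrix of a binary segmentation. A minimal sketch, assuming boolean NumPy masks and the standard definitions (the thesis may phrase them slightly differently):

    import numpy as np

    def segmentation_metrics(pred, truth):
        """Confusion-matrix metrics for a binary segmentation vs. ground truth."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        tn = np.sum(~pred & ~truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        return {
            "accuracy":    (tp + tn) / (tp + tn + fp + fn),
            "DSI":         2 * tp / (2 * tp + fp + fn),   # Dice Similarity Index
            "FPR":         fp / (fp + tn),                # False Positive Ratio
            "FNR":         fn / (fn + tp),                # False Negative Ratio
            "specificity": tn / (tn + fp),
            "sensitivity": tp / (tp + fn),
        }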

    On the 3D point cloud for human-pose estimation

    This thesis aims at investigating methodologies for estimating a human pose from a 3D point cloud that is captured by a static depth sensor. Human-pose estimation (HPE) is important for a range of applications, such as human-robot interaction, healthcare, surveillance, and so forth. Yet, HPE is challenging because of the uncertainty in sensor measurements and the complexity of human poses. In this research, we focus on addressing challenges related to two crucial components in the estimation process, namely, human-pose feature extraction and human-pose modeling. In feature extraction, the main challenge involves reducing feature ambiguity. We propose a 3D-point-cloud feature called viewpoint and shape feature histogram (VISH) to reduce feature ambiguity by capturing geometric properties of the 3D point cloud of a human. The feature extraction consists of three steps: 3D-point-cloud pre-processing, hierarchical structuring, and feature extraction. In the pre-processing step, 3D points corresponding to a human are extracted and outliers from the environment are removed to retain the 3D points of interest. This step is important because it allows us to reduce the number of 3D points by keeping only those points that correspond to the human body for further processing. In the hierarchical structuring, the pre-processed 3D point cloud is partitioned and replicated into a tree structure as nodes. Viewpoint feature histogram (VFH) and shape features are extracted from each node in the tree to provide a descriptor to represent each node. As the features are obtained based on histograms, coarse-level details are highlighted in large regions and fine-level details are highlighted in small regions. Therefore, the features from the point cloud in the tree can capture coarse level to fine level information to reduce feature ambiguity. In human-pose modeling, the main challenges involve reducing the dimensionality of human-pose space and designing appropriate factors that represent the underlying probability distributions for estimating human poses. To reduce the dimensionality, we propose a non-parametric action-mixture model (AMM). It represents high-dimensional human-pose space using low-dimensional manifolds in searching human poses. In each manifold, a probability distribution is estimated based on feature similarity. The distributions in the manifolds are then redistributed according to the stationary distribution of a Markov chain that models the frequency of human actions. After the redistribution, the manifolds are combined according to a probability distribution determined by action classification. Experiments were conducted using VISH features as input to the AMM. The results showed that the overall error and standard deviation of the AMM were reduced by about 7.9% and 7.1%, respectively, compared with a model without action classification. To design appropriate factors, we consider the AMM as a Bayesian network and propose a mapping that converts the Bayesian network to a neural network called NN-AMM. The proposed mapping consists of two steps: structure identification and parameter learning. In structure identification, we have developed a bottom-up approach to build a neural network while preserving the Bayesian-network structure. In parameter learning, we have created a part-based approach to learn synaptic weights by decomposing a neural network into parts. 
Based on the concept of distributed representation, the NN-AMM is further modified into a scalable neural network called NND-AMM. A neural-network-based system is then built by using VISH features to represent 3D-point-cloud input and the NND-AMM to estimate 3D human poses. The results showed that the proposed mapping can be utilized to design AMM factors automatically. The NND-AMM can provide more accurate human-pose estimates with fewer hidden neurons than both the AMM and NN-AMM can. Both the NN-AMM and NND-AMM can adapt to different types of input, showing the advantage of using neural networks to design factors.
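
    One concrete ingredient of the AMM described above is the stationary distribution of a Markov chain over human actions, used to redistribute probability mass across the manifolds. A generic sketch, assuming a hypothetical row-stochastic action-transition matrix (the actions and transition values below are made up, not taken from the thesis):

    import numpy as np

    def stationary_distribution(P, n_iter=1000, tol=1e-12):
        """Stationary distribution of a row-stochastic transition matrix P
        (P[i, j] = probability of moving from action i to action j),
        computed by power iteration."""
        pi = np.full(P.shape[0], 1.0 / P.shape[0])
        for _ in range(n_iter):
            nxt = pi @ P
            if np.abs(nxt - pi).max() < tol:
                pi = nxt
                break
            pi = nxt
        return pi / pi.sum()

    # Hypothetical 3-action chain (e.g. walk, sit, wave).
    P = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.3, 0.2, 0.5]])
    print(stationary_distribution(P))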

    Metamodel-based uncertainty quantification for the mechanical behavior of braided composites

    The main design requirement for any high-performance structure is minimal dead weight. Producing lighter structures for the aerospace and automotive industries directly leads to fuel efficiency and, hence, cost reduction. For wind energy, lighter wings allow larger rotor blades and, consequently, better performance. Prosthetic implants for missing body parts and athletic equipment such as rackets and sticks should also be lightweight for augmented functionality. Depending on the application, additional demands can include improved fatigue strength and damage tolerance, crashworthiness, and temperature and corrosion resistance. Fiber-reinforced composite materials lie at the intersection of all the above requirements, since they offer competitive stiffness and ultimate strength at much lower weight than metals, as well as high optimization and design potential due to their versatility. Braided composites are a special category with continuous fiber bundles interlaced around a preform. The automated braiding manufacturing process allows simultaneous material-structure assembly and, therefore, high-rate production with minimal material waste. The multi-step material processes and the intrinsic heterogeneity are the basic origins of the variability observed during mechanical characterization and operation of composite end-products. Conservative safety factors are applied during the design process to account for uncertainties, even though stochastic modeling approaches lead to more rational estimates of structural safety and reliability. Such approaches require statistical modeling of the uncertain parameters, which is quite expensive to perform experimentally. A robust virtual uncertainty quantification framework is presented, able to integrate material and geometric uncertainties of different nature and statistically assess the response variability of braided composites in terms of effective properties. Information-passing multiscale algorithms are employed for high-fidelity predictions of stiffness and strength. In order to bypass the numerical cost of the repeated multiscale model evaluations required for the probabilistic approach, smart and efficient solutions should be applied. Surrogate models are thus trained to map manifolds at different scales and eventually substitute the finite element models. The use of machine learning is viable for uncertainty quantification, optimization, and reliability applications of textile materials, but not straightforward for failure responses with complex response surfaces. Novel techniques based on variable-fidelity data and hybrid surrogate models are also integrated. Uncertain parameters are classified according to their significance to the corresponding response via variance-based global sensitivity analysis procedures. Quantification of the random properties in terms of mean and variance can be achieved by inverse approaches based on Bayesian inference. All stochastic and machine learning methods included in the framework are non-intrusive and data-driven, to ensure direct extension towards more load cases and different materials. Moreover, experimental validation of the adopted multiscale models is presented, and an application of stochastic recreation of random textile yarn distortions based on computed tomography data is demonstrated.
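
    As a minimal illustration of the surrogate-based uncertainty quantification described above, the sketch below trains a Gaussian process regressor (scikit-learn) on a handful of runs of a stand-in "expensive" model and then propagates an input distribution through the cheap surrogate by Monte Carlo. The stand-in function, sample sizes, and input range are made up for illustration; the thesis's actual surrogates map multiscale finite element responses of braided composites.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_model(x):
        # Stand-in for a costly multiscale FE evaluation (purely illustrative).
        return np.cos(3.0 * x) + 0.1 * x**2

    rng = np.random.default_rng(0)

    # A few "training" runs of the expensive model.
    X_train = rng.uniform(0.0, 2.0, size=(12, 1))
    y_train = expensive_model(X_train).ravel()

    # Surrogate: Gaussian process regression with an RBF kernel.
    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    surrogate.fit(X_train, y_train)

    # Cheap Monte Carlo on the surrogate to estimate response statistics.
    X_mc = rng.uniform(0.0, 2.0, size=(10000, 1))
    y_mc = surrogate.predict(X_mc)
    print("estimated mean:", y_mc.mean(), "estimated std:", y_mc.std())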