
    Topologically robust CAD model generation for structural optimisation

    Computer-aided design (CAD) models play a crucial role in the design, manufacturing and maintenance of products. Therefore, the mesh-based finite element descriptions common in structural optimisation must first be translated into CAD models. Currently, this can at best be performed semi-manually. We propose a fully automated and topologically accurate approach to synthesise a structurally sound parametric CAD model from topology optimised finite element models. Our solution is to first convert the topology optimised structure into a spatial frame structure and then to regenerate it in a CAD system using standard constructive solid geometry (CSG) operations. The obtained parametric CAD models are compact, that is, have as few geometric parameters as possible, which makes them ideal for editing and further processing within a CAD system. The critical task of converting the topology optimised structure into an optimal spatial frame structure is accomplished in several steps. We first generate from the topology optimised voxel model a one-voxel-wide voxel chain model using a topology-preserving skeletonisation algorithm from digital topology. The weighted undirected graph defined by the voxel chain model yields a spatial frame structure after processing it with standard graph algorithms. Subsequently, we optimise the cross-sections and layout of the frame members to recover its optimality, which may have been compromised during the conversion process. Finally, we generate the obtained frame structure in a CAD system by repeatedly combining primitive solids, like cylinders and spheres, using Boolean operations. The resulting solid model is a boundary representation (B-Rep) consisting of trimmed non-uniform rational B-spline (NURBS) curves and surfaces.
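    The chain-to-frame conversion described in this abstract can be sketched in a few lines. The sketch below is illustrative only; the function names, the 6-connectivity assumption, and the junction handling are our assumptions, not the paper's algorithm. It builds an adjacency structure over a one-voxel-wide chain, treats voxels of degree other than two as joints, and traces the paths between joints as frame members.

```python
# 6-connectivity (face neighbours); the paper may use a different adjacency.
OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def chain_to_frame(voxels):
    """Turn a one-voxel-wide chain model (a set of (x, y, z) tuples) into
    joints (junction/end voxels) and frame members (paths between joints).
    Closed loops without any junction are not handled in this sketch."""
    adj = {}
    for v in voxels:
        nbrs = [tuple(a + b for a, b in zip(v, d)) for d in OFFSETS]
        adj[v] = [n for n in nbrs if n in voxels]
    # A voxel with degree != 2 is an endpoint or a junction, i.e. a joint.
    joints = {v for v, nbrs in adj.items() if len(nbrs) != 2}
    members, seen = [], set()
    for j in joints:
        for n in adj[j]:
            if (j, n) in seen:
                continue  # this member was already traced from the other end
            path = [j, n]
            while path[-1] not in joints:
                prev, cur = path[-2], path[-1]
                path.append(next(x for x in adj[cur] if x != prev))
            seen.add((path[-1], path[-2]))
            members.append(path)
    return joints, members
```

    Each member path would then become one frame element (e.g. a cylinder between two spheres) in the subsequent CSG generation step.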

    Acta Cybernetica: Volume 20, Number 1.


    Digital Morphometry: A Taxonomy of Morphological Filters and Feature Parameters with Application to Alzheimer's Disease Research

    In this thesis the expression digital morphometry collectively describes all those procedures used to obtain quantitative measurements of objects within a two-dimensional digital image. Quantitative measurement is a two-step process: the application of geometrical transformations to extract the features of interest, and then the actual measurement of these features. With regard to the first step the morphological filters of mathematical morphology provide a wealth of suitable geometric transformations. Traditional radiometric and spatial enhancement techniques provide an additional source of transformations. The second step is more classical (e.g. Underwood, 1970; Bookstein, 1978; and Weibull, 1980); yet here again mathematical morphology is applicable - morphologically derived feature parameters. This thesis focuses on mathematical morphology for digital morphometry. In particular it proffers a taxonomy of morphological filters and investigates the morphologically derived feature parameters (Minkowski functionals) for digital images sampled on a square grid. As originally conceived by Georges Matheron, mathematical morphology concerns the analysis of binary images by means of probing with structuring elements [typically convex geometric shapes] (Dougherty, 1993, preface). Since its inception the theory has been extended to grey-level images and most recently to complete lattices. It is within the very general framework of the complete lattice that the taxonomy of morphological filters is presented. Examples are provided to help illustrate the behaviour of each type of filter. This thesis also introduces DIMPAL (Mehnert, 1994) - a PC-based image processing and analysis language suitable for researching and developing algorithms for a wide range of image processing applications.
Though DIMPAL was used to produce the majority of the images in this thesis it was principally written to provide an environment in which to investigate the application of mathematical morphology to Alzheimer's disease research. Alzheimer's disease is a form of progressive dementia associated with the degeneration of the brain. It is the commonest type of dementia and probably accounts for half the dementia of old age (Forsythe, 1990, p. 21). Post mortem examination of the brain reveals the presence of characteristic neuropathologic lesions; namely neuritic plaques and neurofibrillary tangles. They occur predominantly in the cerebral cortex and hippocampus. Quantitative studies of the distribution of plaques and tangles in normally aged and Alzheimer brains are hampered by the enormous amount of time and effort required to count and measure these lesions. Herein, a morphological algorithm is proposed for the automatic segmentation and measurement of neuritic plaques from light micrographs of post mortem brain tissue.
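As a concrete illustration of the kind of morphological filter the thesis taxonomises, a minimal binary opening (erosion followed by dilation) over a set-based image representation might look as follows. This is an illustrative sketch, not DIMPAL code, and it assumes a symmetric structuring element so that no reflection is needed in the dilation step.

```python
def erode(img, se):
    """Binary erosion: keep a pixel only if the structuring element,
    centred on it, fits entirely inside the image foreground.
    img and se are sets of (row, col) tuples; se offsets are relative to the origin."""
    return {p for p in img
            if all((p[0] + dr, p[1] + dc) in img for dr, dc in se)}

def dilate(img, se):
    """Binary dilation: mark every pixel reachable from a foreground
    pixel by a structuring-element offset (Minkowski addition)."""
    return {(p[0] + dr, p[1] + dc) for p in img for dr, dc in se}

def opening(img, se):
    """Morphological opening: erosion followed by dilation.
    Removes foreground features smaller than the structuring element."""
    return dilate(erode(img, se), se)
```

    Opening a noisy micrograph with a structuring element sized below the smallest plaque of interest would suppress isolated specks while largely preserving plaque-scale objects, which is the flavour of preprocessing such segmentation algorithms rely on.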

    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computations using traditional statistical analysis methods. Methods which often show great results on simple data have difficulties processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity. Improving the scaling properties of these methods for multidimensional data is the only way to make these methods relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we focus on the structure of the data to lower the complexity of a specific family of problems. The two types of structures considered in this work are distributive optimization with separable constraints (Chapters 2-3), and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications which were previously intractable and open the door to new research questions.
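    The lattice-input Gaussian-process scaling mentioned in this abstract typically rests on the kernel factorising as a Kronecker product across input dimensions, so that only small per-axis eigendecompositions are needed instead of one over the full grid. The sketch below is illustrative only (the function names and the RBF kernel are our assumptions, not the thesis's exact formulation); it solves the regularised kernel system for a 2D lattice via the standard Kronecker identities.

```python
import numpy as np

def rbf(x, ls=0.3):
    """Squared-exponential kernel matrix over a 1D grid (illustrative choice)."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def kron_gp_solve(K1, K2, Y, noise):
    """Solve (K1 kron K2 + noise * I) vec(alpha) = vec(Y) using only the
    per-axis eigendecompositions: O(n1^3 + n2^3) rather than O((n1*n2)^3).
    Y has shape (n1, n2); the result alpha has the same shape."""
    w1, Q1 = np.linalg.eigh(K1)
    w2, Q2 = np.linalg.eigh(K2)
    T = Q1.T @ Y @ Q2                        # rotate into the joint eigenbasis
    S = T / (np.outer(w1, w2) + noise)       # eigenvalues of the Kronecker kernel
    return Q1 @ S @ Q2.T                     # rotate back
```

    The same identity extends to more dimensions with one extra eigendecomposition per axis, which is what makes multidimensional lattice inputs tractable.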

    Highly automatic quantification of myocardial oedema in patients with acute myocardial infarction using bright blood T2-weighted CMR

    Background: T2-weighted cardiovascular magnetic resonance (CMR) is clinically useful for imaging the ischemic area-at-risk and amount of salvageable myocardium in patients with acute myocardial infarction (MI). However, to date, quantification of oedema is user-defined and potentially subjective.

    Methods: We describe a highly automatic framework for quantifying myocardial oedema from bright blood T2-weighted CMR in patients with acute MI. Our approach retains user input (i.e. clinical judgment) to confirm the presence of oedema on an image which is then subjected to an automatic analysis. The new method was tested on 25 consecutive acute MI patients who had a CMR within 48 hours of hospital admission. Left ventricular wall boundaries were delineated automatically by variational level set methods followed by automatic detection of myocardial oedema by fitting a Rayleigh-Gaussian mixture statistical model. These data were compared with results from manual segmentation of the left ventricular wall and oedema, the current standard approach.

    Results: The mean perpendicular distances between automatically detected left ventricular boundaries and corresponding manually delineated boundaries were in the range of 1-2 mm. Dice similarity coefficients for agreement (0 = no agreement, 1 = perfect agreement) between manual delineation and automatic segmentation of the left ventricular wall boundaries and oedema regions were 0.86 and 0.74, respectively.
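    The Dice similarity coefficient used to report agreement in this abstract is simple to compute; a minimal implementation for binary masks (illustrative, not the authors' code) is:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|); 0 = no agreement, 1 = perfect agreement.
    Two empty masks are treated as perfect agreement (a common convention)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

    Applied to a manual and an automatic oedema mask over the same image grid, this yields exactly the 0-to-1 agreement scores quoted above.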

    Efficient 3D Segmentation, Registration and Mapping for Mobile Robots

    Sometimes simple is better! For certain situations and tasks, simple but robust methods can achieve the same or better results in the same or less time than related sophisticated approaches. In the context of robots operating in real-world environments, key challenges are perceiving objects of interest and obstacles as well as building maps of the environment and localizing therein. The goal of this thesis is to carefully analyze such problem formulations, to deduce valid assumptions and simplifications, and to develop simple solutions that are both robust and fast. All approaches make use of sensors capturing 3D information, such as consumer RGBD cameras. Comparative evaluations show the performance of the developed approaches. For identifying objects and regions of interest in manipulation tasks, a real-time object segmentation pipeline is proposed. It exploits several common assumptions of manipulation tasks such as objects being on horizontal support surfaces (and well separated). It achieves real-time performance by using particularly efficient approximations in the individual processing steps, subsampling the input data where possible, and processing only relevant subsets of the data. The resulting pipeline segments 3D input data at up to 30 Hz. In order to obtain complete segmentations of the 3D input data, a second pipeline is proposed that approximates the sampled surface, smooths the underlying data, and segments the smoothed surface into coherent regions belonging to the same geometric primitive. It uses different primitive models and can reliably segment input data into planes, cylinders and spheres. A thorough comparative evaluation shows state-of-the-art performance while computing such segmentations in near real-time. The second part of the thesis addresses the registration of 3D input data, i.e., consistently aligning input captured from different view poses. Several methods are presented for different types of input data.
For the particular application of mapping with micro aerial vehicles where the 3D input data is particularly sparse, a pipeline is proposed that uses the same approximate surface reconstruction to exploit the measurement topology and a surface-to-surface registration algorithm that robustly aligns the data. Optimization of the resulting graph of determined view poses then yields globally consistent 3D maps. For sequences of RGBD data this pipeline is extended to include additional subsampling steps and an initial alignment of the data in local windows in the pose graph. In both cases, comparative evaluations show a robust and fast alignment of the input data.
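    A common building block behind the horizontal-support-surface assumption exploited in segmentation pipelines like the one above is a dominant-plane fit over the point cloud. The following RANSAC sketch is illustrative only (the thesis uses its own, more efficient approximations); it repeatedly fits a plane through three random points and keeps the plane with the most inliers.

```python
import numpy as np

def ransac_plane(points, n_iter=200, thresh=0.01, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud via RANSAC.
    Returns (normal, d, inlier_mask) with normal . p + d == 0 on the plane."""
    rng = np.random.default_rng(rng)
    best_count, best_plane, best_mask = -1, None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < thresh  # distance-to-plane test
        if mask.sum() > best_count:
            best_count, best_plane, best_mask = mask.sum(), (n, d), mask
    return best_plane[0], best_plane[1], best_mask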
    • …
    corecore