
    A computationally efficient framework for large-scale distributed fingerprint matching

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science, School of Computer Science and Applied Mathematics. May 2017. Biometric features are widely used in forensic and civil applications. Among the many kinds of biometric characteristics, the fingerprint is globally accepted and remains the most widely used by commercial and industrial communities owing to its easy acquisition, uniqueness, stability and reliability. Various effective solutions are currently available, yet fingerprint identification is still not considered a fully solved problem, mainly because of accuracy and computational time requirements. Although many minutiae-based fingerprint recognition systems achieve good accuracy, systems with very large databases require fast, real-time comparison of fingerprints and often either fail to meet the high performance speed requirements or compromise accuracy. For fingerprint matching against databases containing millions of fingerprints, real-time identification can only be obtained through optimal algorithms that use the given hardware as robustly and efficiently as possible. There is currently no known distributed database and computing framework that provides a real-time solution to the fingerprint recognition problem for databases containing as many as sixty million fingerprints, a size close to that of the South African population. This research serves two main purposes: 1) exploit and scale the best known minutiae matching algorithm to a minimum of sixty million fingerprints; and 2) design a distributed database framework for large fingerprint databases based on the results obtained in the former item.
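    The abstract centres on minutiae-based matching scaled across a sharded database. The sketch below is a simplified illustration of that idea, not the dissertation's algorithm: `Minutia`, `match_score` and `identify` are hypothetical names, and the naive pairwise scoring stands in for whichever optimised matcher the work actually scales.

```python
# Minimal sketch (not the dissertation's algorithm): a naive minutiae-pair
# scoring function and a scan over one database shard; in a distributed
# setting each worker scans its own shard and the best scores are reduced.
from dataclasses import dataclass
from typing import Dict, List, Tuple
import math

@dataclass
class Minutia:
    x: float       # position in pixels
    y: float
    theta: float   # ridge orientation in radians

def match_score(probe: List[Minutia], candidate: List[Minutia],
                dist_tol: float = 12.0, angle_tol: float = 0.26) -> float:
    """Fraction of probe minutiae that find a nearby candidate minutia."""
    matched = 0
    for p in probe:
        for c in candidate:
            d = math.hypot(p.x - c.x, p.y - c.y)
            a = abs(p.theta - c.theta)
            if d < dist_tol and min(a, 2 * math.pi - a) < angle_tol:
                matched += 1
                break
    return matched / max(len(probe), 1)

def identify(probe: List[Minutia],
             shard: Dict[str, List[Minutia]]) -> Tuple[str, float]:
    """Return the best-matching fingerprint id and score within one shard."""
    best_id, best = None, 0.0
    for fid, template in shard.items():
        s = match_score(probe, template)
        if s > best:
            best_id, best = fid, s
    return best_id, best
```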

    AUTOMATED TREE-LEVEL FOREST QUANTIFICATION USING AIRBORNE LIDAR

    Traditional forest management relies on a small field sample and interpretation of aerial photography that not only are costly to execute but also yield inaccurate estimates of the entire forest in question. Airborne light detection and ranging (LiDAR) is a remote sensing technology that records point clouds representing the 3D structure of a forest canopy and the terrain underneath. We present a method for segmenting individual trees from the LiDAR point clouds without making prior assumptions about tree crown shapes and sizes. We then present a method that vertically stratifies the point cloud into an overstory and multiple understory tree canopy layers. Using the stratification method, we modeled the occlusion of higher canopy layers with respect to point density. We also present a distributed computing approach that enables processing the massive data of an arbitrarily large forest. Lastly, we investigated using deep learning for coniferous/deciduous classification of point cloud segments representing individual tree crowns. We applied the developed methods to the University of Kentucky Robinson Forest, a natural, predominantly deciduous, closed-canopy forest. We detected 90% of overstory and 47% of understory trees, with false positive rates of 14% and 2%, respectively. Vertical stratification improved the detection rate of understory trees to 67% at the cost of increasing their false positive rate to 12%. According to our occlusion model, a point density of about 170 pt/m² is needed to segment understory trees located in the third layer as accurately as overstory trees. Using our distributed processing method, we segmented about two million trees within a 7400-ha forest in 2.5 hours using 192 processing cores, showing a speedup of ~170. Our deep learning experiments showed high classification accuracies (~82% coniferous and ~90% deciduous) without the need to manually assemble the features. In conclusion, the methods developed are steps forward to remote, accurate quantification of large natural forests at the individual tree level.
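    As an illustration of the vertical stratification step described above, the following sketch (an assumption, not the thesis code) splits ground-normalised point heights into canopy layers at low-occupancy valleys of the height histogram and reports point density per square metre.

```python
# Minimal sketch of vertical stratification of a normalized LiDAR point
# cloud: find low-occupancy valleys in the height histogram and use them as
# layer boundaries. Parameters `bin_m` and `valley_frac` are illustrative.
import numpy as np

def point_density(points_xyz: np.ndarray, tile_area_m2: float) -> float:
    """Points per square metre for the tile."""
    return len(points_xyz) / tile_area_m2

def stratify_by_height(heights: np.ndarray, bin_m: float = 0.5,
                       valley_frac: float = 0.05):
    """Assign each point a layer id by cutting the height histogram at bins
    whose counts fall below `valley_frac` of the peak count."""
    bins = np.arange(0.0, heights.max() + bin_m, bin_m)
    counts, edges = np.histogram(heights, bins=bins)
    threshold = valley_frac * counts.max()
    cuts = [edges[i + 1] for i in range(1, len(counts) - 1)
            if counts[i] < threshold and counts[i - 1] >= threshold]
    if not cuts:                      # no valley found: single layer
        return np.zeros(len(heights), dtype=int), cuts
    layer_ids = np.digitize(heights, cuts)   # 0 = lowest layer
    return layer_ids, cuts
```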

    Bimodal automated carotid ultrasound segmentation using geometrically constrained deep neural networks

    For asymptomatic patients suffering from carotid stenosis, the assessment of plaque morphology is an important clinical task that allows monitoring of the risk of plaque rupture and future incidents of stroke. Ultrasound imaging provides a safe and non-invasive modality for this, and the segmentation of the media-adventitia and lumen-intima boundaries of the carotid artery forms an essential part of this monitoring process. In this paper, we propose a novel deep neural network as a fully automated segmentation tool and apply it to delineating both the media-adventitia boundary and the lumen-intima boundary. We develop a new geometrically constrained objective function as part of the network's stochastic gradient descent optimisation, thus tuning it to the problem at hand. Furthermore, we apply a bimodal fusion of amplitude and phase congruency data, proposed in our previous work, as input to the network, as the latter provides an intensity-invariant data source. We report the segmentation performance of the network on transverse sections of the carotid. Tests are carried out on an augmented dataset of 81,000 images, and the results are compared to other studies by reporting the Dice coefficient of similarity, modified Hausdorff distance, sensitivity and specificity. Our proposed modification is shown to yield improved results over the standard network on this larger dataset, with the advantage of being fully automated. We conclude that deep neural networks provide a reliable, trainable means of automatically segmenting carotid ultrasound images, using amplitude data and intensity-invariant phase congruency maps as data sources.
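    The paper's exact objective function is not given in the abstract; the sketch below is only an assumed illustration of how a geometric constraint might be folded into a segmentation loss, here a Dice term plus a soft circularity penalty on the predicted mask.

```python
# Minimal sketch (an assumption, not the paper's objective function): Dice
# loss plus a penalty on masks whose area deviates from a roughly circular
# relation to their boundary length. `pred` and `target` are (N, H, W)
# probability maps in [0, 1].
import math
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def circularity_penalty(pred: torch.Tensor, eps: float = 1e-6):
    """Soft perimeter from gradient magnitude; a circle satisfies
    4*pi*area ~= perimeter**2, so penalise the squared deviation."""
    dy = pred[:, 1:, :] - pred[:, :-1, :]
    dx = pred[:, :, 1:] - pred[:, :, :-1]
    perimeter = dy.abs().sum() + dx.abs().sum()
    area = pred.sum()
    circ = 4 * math.pi * area / (perimeter ** 2 + eps)
    return (1.0 - circ) ** 2

def loss(pred, target, lam: float = 0.1):
    # lam weights the geometric constraint against the data term
    return dice_loss(pred, target) + lam * circularity_penalty(pred)
```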

    Multi-scale active shape description in medical imaging

    Shape description in medical imaging has become an increasingly important research field in recent years. Fast and high-resolution image acquisition methods like Magnetic Resonance (MR) imaging produce very detailed cross-sectional images of the human body - shape description is then a post-processing operation which abstracts quantitative descriptions of anatomically relevant object shapes. This task is usually performed by clinicians and other experts by first segmenting the shapes of interest, and then making volumetric and other quantitative measurements. High demand on expert time and inter- and intra-observer variability impose a clinical need for automating this process. Furthermore, recent studies in clinical neurology on the correspondence between disease status and degree of shape deformations necessitate the use of more sophisticated, higher-level shape description techniques. In this work a new hierarchical tool for shape description has been developed, combining two recently developed and powerful techniques in image processing: differential invariants in scale-space, and active contour models. This tool enables quantitative and qualitative shape studies at multiple levels of image detail, exploring the extra image scale degree of freedom. Using scale-space continuity, the global object shape can be detected at a coarse level of image detail, and finer shape characteristics can be found at higher levels of detail or scales. New methods for active shape evolution and focusing have been developed for the extraction of shapes at a large set of scales using an active contour model whose energy function is regularized with respect to scale and geometric differential image invariants. The resulting set of shapes is formulated as a multiscale shape stack which is analysed and described for each scale level with a large set of shape descriptors to obtain and analyse shape changes across scales. This shape stack leads naturally to several questions with regard to variable sampling and appropriate levels of detail to investigate an image. The relationship between active contour sampling precision and scale-space is addressed. After a thorough review of modern shape description, multi-scale image processing and active contour model techniques, the novel framework for multi-scale active shape description is presented and tested on synthetic images and medical images. An interesting result is the recovery of the fractal dimension of a known fractal boundary using this framework. Medical applications addressed are grey-matter deformations occurring for patients with epilepsy, spinal cord atrophy for patients with Multiple Sclerosis, and cortical impairment for neonates. Extensions to non-linear scale-spaces, comparisons to binary curve and curvature evolution schemes as well as other hierarchical shape descriptors are discussed.
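    To make the scale-space idea concrete, here is a minimal sketch (an assumption, not the thesis framework) that builds a Gaussian scale-space of an image and keeps the dominant object contour at each scale, yielding a coarse-to-fine shape stack that could then be described per level.

```python
# Minimal sketch of a multi-scale shape stack: smooth the image at several
# Gaussian scales and extract the dominant iso-contour at each one. The
# scales and contour level are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def shape_stack(image: np.ndarray, sigmas=(1, 2, 4, 8), level: float = 0.5):
    """Return one boundary contour per scale (the longest contour found)."""
    stack = []
    for sigma in sigmas:
        smoothed = gaussian_filter(image.astype(float), sigma=sigma)
        contours = measure.find_contours(smoothed, level)
        if contours:
            stack.append(max(contours, key=len))  # keep the dominant contour
    return stack  # coarser scales give simpler, more global shapes
```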

    Connected Attribute Filtering Based on Contour Smoothness


    Novel applications of spectroscopy to characterize soil variation

    This thesis embodies a collection of novel studies related to the use of multivariate information provided by spectroscopic tools such as Visible and Near Infrared (Vis-NIR) spectrometers to represent soil variation. The general structure is organized following increasing levels of soil complexity, starting from the characterization of soil aggregates and the identification of soil colloids, to the recognition of soil horizons and their boundaries in the soil profile, and finally to the depiction of the distribution of soil types in the landscape. Briefly, Chapter 1 is written as a rationale, emphasising the need for up-to-date methodologies for making effective use of the increasing amount of soil information produced worldwide. Chapter 2 presents the development of a new methodology for the measurement of soil aggregate stability and the further use of spectroscopic information to predict its values. Chapter 3 gives examples of the use of Vis-NIR spectral libraries for the prediction of soil properties. Chapter 4 presents the development of a new method for the identification of soil horizons and their boundaries using fuzzy clustering of Vis-NIR spectra. Chapter 5 expands into a new way of measuring the diversity of soils across the landscape, introducing two new indices for measuring soil diversity or “Functional Pedodiversity” inspired by previous studies in Functional Ecology. Finally, Chapter 6 discusses the main findings of this thesis and foresees issues, challenges and opportunities in the area of spectroscopy and multivariate soil data analysis.
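    Chapter 3 concerns predicting soil properties from Vis-NIR spectral libraries; the snippet below is a generic sketch of that kind of workflow (assumed, not the thesis code), fitting a partial least squares regression to placeholder spectra.

```python
# Minimal sketch of predicting a soil property from Vis-NIR spectra with
# partial least squares regression. X and y are synthetic placeholders for a
# spectral library; n_components=10 is an illustrative choice.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 500))                    # spectra: samples x wavelengths
y = X[:, 100] * 30 + rng.normal(0, 1, 120)    # placeholder property (e.g. clay %)

pls = PLSRegression(n_components=10)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
pls.fit(X, y)
print(f"cross-validated R^2: {r2:.2f}")
```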

    Curve Skeleton and Moments of Area Supported Beam Parametrization in Multi-Objective Compliance Structural Optimization

    This work addresses the end-to-end virtual automation of structural optimization, up to the derivation of a parametric geometry model that can be used for application areas such as additive manufacturing or the verification of the structural optimization result with the finite element method. Whether a holistic design in structural optimization can be achieved with the weighted sum method, and automatically parameterized with curve skeletonization and cross-section regression to virtually verify the result and control the local size for additive manufacturing, is investigated in general. In this work, a holistic design is understood as a design that considers various compliances as an objective function. The parameterization uses the automated determination of beam parameters by so-called curve skeletonization with subsequent cross-section shape parameter estimation based on moments of area, especially for multi-objective optimized shapes. An essential contribution is the linking of the parameterization with the results of the structural optimization, e.g., to include properties such as boundary conditions, load conditions, sensitivities or even density variables in the curve skeleton parameterization. The parameterization focuses on guiding the skeletonization based on the information provided by the optimization and the finite element model. In addition, the cross-section detection considers circular, elliptical, and tensor product spline cross-sections that can be applied to various shape descriptors such as convolutional surfaces, subdivision surfaces, or constructive solid geometry. The shape parameters of these cross-sections are estimated using stiffness distributions, moments of area of 2D images, and convolutional neural networks with a loss function tailored to moments of area. Each final geometry is designed by extruding the cross-section along the appropriate curve segment of the beam and joining it to other beams using only unification operations. The focus of multi-objective structural optimization considering 1D, 2D and 3D elements is on cases that can be modeled by the Poisson equation and linear elasticity. This enables the development of designs in application areas such as thermal conduction, electrostatics, magnetostatics, potential flow, linear elasticity and diffusion, which can be optimized in combination or individually. Due to the simplicity of the cases defined by the Poisson equation, no experts are required, so many conceptual designs can be generated and reconstructed by ordinary users with little effort. Specifically for 1D elements, element stiffness matrices for tensor product spline cross-sections are derived, which can be used to optimize a variety of lattice structures and automatically convert them into free-form surfaces. For 2D elements, non-local trigonometric interpolation functions are used, which should significantly increase the interpretability of the density distribution. To further improve the optimization, a parameter-free mesh deformation is embedded so that the compliances can be further reduced by locally shifting the node positions. Finally, the proposed end-to-end optimization and parameterization is applied to verify a linear elasto-static optimization result and to satisfy the local size constraint for manufacturing, by selective laser melting, a heat transfer optimization result for a heat sink of a CPU.
For the elasto-static case, the parameterization is adjusted until a certain criterion (displacement) is satisfied, while for the heat transfer case, the manufacturing constraints are satisfied by automatically changing the local size with the proposed parameterization. This heat sink is then manufactured without manual adjustment and experimentally validated to limit the temperature of a CPU to a certain level.
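    The cross-section estimation step above relies on moments of area of 2D slice images. The sketch below is an assumed illustration of that idea, not the dissertation's implementation: it derives the centroid, orientation and semi-axes of the ellipse whose principal second moments of area match those of a binary cross-section mask.

```python
# Minimal sketch: estimate an equivalent elliptical cross-section from the
# moments of area of a binary 2D slice image (e.g. one slice of a density
# field along a skeleton curve).
import numpy as np

def ellipse_from_moments(mask: np.ndarray, pixel_size: float = 1.0):
    """Return centroid, semi-axes (a, b) and major-axis orientation of the
    ellipse whose principal second moments of area match the mask's."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean() * pixel_size, ys.mean() * pixel_size
    x = xs * pixel_size - cx
    y = ys * pixel_size - cy
    # second moments of area about the centroid (pixel area = pixel_size**2)
    ixx = (y ** 2).sum() * pixel_size ** 2
    iyy = (x ** 2).sum() * pixel_size ** 2
    ixy = (x * y).sum() * pixel_size ** 2
    # principal moments and major-axis orientation (image-moment convention)
    common = np.sqrt(((ixx - iyy) / 2) ** 2 + ixy ** 2)
    i1 = (ixx + iyy) / 2 + common
    i2 = (ixx + iyy) / 2 - common
    theta = 0.5 * np.arctan2(2 * ixy, iyy - ixx)
    # ellipse with matching principal moments: I1 = pi*a^3*b/4, I2 = pi*a*b^3/4
    a = (4 / np.pi) ** 0.25 * (i1 ** 3 / i2) ** 0.125
    b = (4 / np.pi) ** 0.25 * (i2 ** 3 / i1) ** 0.125
    return (cx, cy), (a, b), theta
```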

    Lymphatic Filariasis detection in microscopic images

    In Africa, the propagation of parasites such as lymphatic filariasis seriously complicates the efforts of health professionals to cure certain diseases. Although there are medicines capable of treating lymphatic filariasis, the infection first needs to be detected, which is not always an easy task given that in most of the countries affected by this disease it can only be detected at night (nocturnal periodicity). Lymphatic filariasis is a parasitic infection that can cause changes or ruptures in the lymphatic system as well as abnormal growth of certain areas of the body, causing pain, incapacity and social stigma. Approximately 1.23 billion people in 58 countries worldwide are threatened by this disease, which requires preventive treatment to stop its propagation; this makes it even more important to have a mechanism that is less costly and more agile for analysing a blood smear to verify the existence of microfilariae (small worms produced by adult worms housed in the lymphatic system). Lymphatic filariasis is caused by infection with nematodes ("roundworms") of the Filariodidea family, of which three types are involved: Wuchereria bancrofti, responsible for 90% of all cases; Brugia malayi, responsible for almost all remaining cases; and B. timori, which also causes the disease. All three have characteristics that differentiate them and allow them to be identified. The current identification process consists of analysing a blood smear containing a blood sample through a microscope and having the observer identify the microfilariae. With this in mind, the aim is to develop image analysis and processing techniques for the recognition and counting of the two principal types of filarial worms from a thin blood smear, a smartphone and a portable microscope, making detection possible without the need for a health professional and consequently automating the process. To make this possible, a smartphone-microscope adapter can be used to obtain an image with a magnification of 40x3. The images can then be analysed on a server or on the smartphone, if it has enough processing power. This process is expected to remove the need to resort to laboratories to process the blood smear, making the process more accessible and agile instead of costly and slow. For the detection of the parasites in the acquired images, the intention is to implement, experiment with and choose the most adequate operations. These comprise pre-processing operations aimed at enhancing the acquired images and eliminating possible artifacts originating from the acquisition system. However, the principal operations are those that allow the verification of existence or non-existence, recognition and classification of the intended parasites. Processing and analysis techniques commonly used in these tasks are based on feature extraction (e.g. SIFT, SURF and FLANN), template similarity, edge detection, description of contours, and statistical pattern recognition. Once one or more parasites and their types have been detected and recognised, a rule should be defined and used to declare the presence of the disease and its stage.
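    As a concrete starting point for the pre-processing and contour-based recognition operations mentioned above, here is a minimal sketch (an assumption, not the project's method) that thresholds a smear image and counts elongated, worm-like contours with OpenCV; the thresholds `min_area` and `min_elongation` are illustrative only.

```python
# Minimal sketch of a smear pre-processing and contour-counting pipeline:
# grayscale + blur + Otsu threshold, then keep contours whose size and
# perimeter/area ratio look worm-like.
import cv2

def count_worm_like_objects(path: str, min_area: float = 200.0,
                            min_elongation: float = 0.15) -> int:
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    count = 0
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue                                      # discard small debris
        perimeter = cv2.arcLength(c, True)
        elongation = perimeter / (area + 1e-6)            # crude shape cue
        if elongation > min_elongation:
            count += 1
    return count
```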