    Enhanced iris recognition: Algorithms for segmentation, matching and synthesis

    This thesis addresses the issues of segmentation, matching, fusion and synthesis in the context of irises and makes a four-fold contribution. The first contribution is a post-matching algorithm that observes the structure of the differences in feature templates to enhance recognition accuracy. The significance of the scheme is its robustness to inaccuracies in the iris segmentation process. Experimental results on the CASIA database indicate the efficacy of the proposed technique. The second contribution is a novel iris segmentation scheme that employs Geodesic Active Contours to extract the iris from the surrounding structures. The proposed scheme elicits the iris texture in an iterative fashion, guided by both the local and global conditions of the image. The performance of an iris recognition algorithm on both the WVU non-ideal and the CASIA iris databases is observed to improve upon application of the proposed segmentation algorithm. The third contribution is the fusion, at the match score level, of multiple instances of the same iris and of multiple iris units of the eye (i.e., the left and right irises). Using a simple sum rule, it is demonstrated that both multi-instance and multi-unit fusion can lead to a significant improvement in matching accuracy. The final contribution is a technique to create a large database of digital renditions of iris images that can be used to evaluate the performance of iris recognition algorithms. This scheme is implemented in two stages. In the first stage, a Markov Random Field model is used to generate a background texture representing the global iris appearance. In the next stage, a variety of iris features, viz., radial and concentric furrows, collarette and crypts, are generated and embedded in the texture field. Experimental results confirm the validity of the synthetic irises generated using this technique.
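
    As a rough illustration of the match-score-level fusion step, the sketch below combines left- and right-iris scores with a simple (unweighted) sum rule. The min-max normalization and the similarity-score convention are assumptions made for the example, not details taken from the thesis.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw match scores onto [0, 1] so scores from different matchers are comparable."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def sum_rule_fusion(left_scores, right_scores):
    """Fuse left- and right-iris match scores with the simple (unweighted) sum rule."""
    return min_max_normalize(left_scores) + min_max_normalize(right_scores)

# Example: fuse similarity scores for three candidate identities and pick the best.
fused = sum_rule_fusion([0.42, 0.81, 0.30], [0.55, 0.77, 0.25])
best_match = int(np.argmax(fused))
```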

    A Semi-Automated Approach to Medical Image Segmentation using Conditional Random Field Inference

    Medical image segmentation plays a crucial role in delivering effective patient care in various diagnostic and treatment modalities. Manual delineation of target volumes and all critical structures is a tedious and highly time-consuming process that introduces uncertainty into patients' treatment outcomes. Fully automatic methods hold great promise for reducing cost and time while improving accuracy and eliminating expert variability, yet great challenges remain. Legally and ethically, human oversight must be integrated with "smart tools", favoring a semi-automatic technique that can leverage the best aspects of both the human and the computer. In this work we develop a semi-automatic framework by formulating segmentation as an energy minimization problem in a Conditional Random Field (CRF). We show that human input can be used as adaptive training data to condition a probabilistic boundary term modeled for the heterogeneous boundary characteristics of anatomical structures. We demonstrate that our method can effortlessly adapt to multiple structures and image modalities using a single CRF framework and tools to learn the probabilistic terms interactively. To tackle the more difficult multi-class segmentation problem, we developed a new ensemble one-vs-rest graph cut algorithm. Each graph in the ensemble performs a simple and efficient bi-class segmentation (one target class versus the rest), and the final segmentation is obtained by majority vote. Our algorithm is both faster and more accurate than the prior multi-class method, which iteratively swaps classes. In this thesis, we also include novel volumetric segmentation algorithms that employ deep learning, and we indicate how to synthesize our CRF framework with convolutional neural networks (CNNs), which would allow incorporating user guidance into CNN-based deep learning for this task. We believe a deep-learning-based method interactively guided by a human expert is the ideal solution for medical image segmentation.
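
    A structural sketch of the ensemble one-vs-rest scheme described above: one bi-class decision per class, combined by vote. In the thesis the per-class step is a graph cut minimizing a CRF energy; here it is replaced by a trivial comparison of unary costs so the example stays self-contained, and every name is hypothetical.

```python
import numpy as np

def binary_segment(unary, cls):
    """Stand-in for the bi-class (cls-vs-rest) graph cut: a pixel joins `cls`
    when its unary cost for `cls` beats the best competing class."""
    rest = np.delete(unary, cls, axis=0).min(axis=0)
    return unary[cls] < rest

def one_vs_rest_vote(unary):
    """Run one binary segmentation per class; label each pixel by vote,
    falling back to the cheapest unary cost where no single class wins."""
    n_classes = unary.shape[0]
    votes = np.stack([binary_segment(unary, c) for c in range(n_classes)])
    labels = unary.argmin(axis=0)         # fallback / tie-break labelling
    winners = votes.sum(axis=0) == 1      # pixels claimed by exactly one class
    labels[winners] = votes.argmax(axis=0)[winners]
    return labels

# Example: random 3-class unary costs on a 4x4 image.
unary = np.random.default_rng(0).random((3, 4, 4))
print(one_vs_rest_vote(unary))
```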

    Segmentation of 3D Carotid Ultrasound Images Using Weak Geometric Priors

    Vascular diseases are among the leading causes of death in Canada and around the globe. A major underlying cause of most such medical conditions is atherosclerosis, a gradual accumulation of plaque on the walls of blood vessels. Particularly vulnerable to atherosclerosis is the carotid artery, which carries blood to the brain. Dangerous narrowing of the carotid artery can lead to embolism, a dislodgement of plaque fragments which travel to the brain and are the cause of most strokes. If this pathology can be detected early, such a deadly scenario can potentially be prevented through treatment or surgery. This not only improves the patient's prognosis, but also dramatically lowers the overall cost of their treatment. Medical imaging is an indispensable tool for early detection of atherosclerosis, in particular since the exact location and shape of the plaque need to be known for accurate diagnosis. This can be achieved by locating the plaque inside the artery and measuring its volume or texture, a process which is greatly aided by image segmentation. In particular, the use of ultrasound imaging is desirable because it is a cost-effective and safe modality. However, ultrasonic images depict the sound-reflecting properties of tissue, and thus suffer from a number of artifacts not present in other medical images, such as acoustic shadowing, speckle noise and discontinuous tissue boundaries. A robust ultrasound image segmentation technique must take these properties into account. Prior to segmentation, an important pre-processing step is the extraction of a series of features from the image via application of various transforms and non-linear filters. A number of such features are explored and evaluated, many of them resulting in piecewise smooth images. It is also proposed to decompose the ultrasound image into several statistically distinct components. These components can then be used as features directly, or other features can be obtained from them instead of from the original image. The decomposition scheme is derived within a maximum-a-posteriori (MAP) estimation framework and is efficiently computable. Furthermore, this work presents and evaluates an algorithm for segmenting the carotid artery from the surrounding tissues in 3D ultrasound images. The algorithm incorporates information from different sources using an energy minimization framework. Using the ultrasound image itself, statistical differences between the region of interest and its background are exploited, and maximal overlap with strong image edges is encouraged. In order to aid convergence to anatomically accurate shapes, as well as to deal with the above-mentioned artifacts, prior knowledge is incorporated into the algorithm by using weak geometric priors. The performance of the algorithm is tested on a number of available 3D images, and encouraging results are obtained and discussed.
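
    The kind of pre-processing feature stack described above can be illustrated with common choices: a non-linear filter to suppress speckle and an edge-strength map computed from the smoothed image. The specific filters below (median, Gaussian gradient magnitude) are illustrative assumptions, not necessarily the features evaluated in the thesis.

```python
import numpy as np
from scipy import ndimage

def ultrasound_features(img, median_size=5, sigma=2.0):
    """Return an (H, W, 3) feature stack: raw intensity, despeckled image, edge strength."""
    despeckled = ndimage.median_filter(img, size=median_size)       # speckle-robust smoothing
    edges = ndimage.gaussian_gradient_magnitude(despeckled, sigma)  # strength of tissue boundaries
    return np.dstack([img, despeckled, edges])

# Example on a stand-in 2D slice (a real pipeline would process a 3D ultrasound volume).
img = np.random.default_rng(1).random((64, 64))
feats = ultrasound_features(img)  # shape (64, 64, 3)
```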

    Analyzing and Synthesizing Images by Evolving Curves with the Osher-Sethian Method

    Numerical analysis of conservation laws plays an important role in the implementation of curve evolution equations. This paper reviews the relevant concepts in numerical analysis and the relation between curve evolution, Hamilton-Jacobi partial differential equations, and differential conservation laws. This close relation enables us to introduce finite difference approximations, based on the theory of conservation laws, into curve evolution. It is shown how curve evolution serves as a powerful tool for image analysis, and how these mathematical relations enable us to construct efficient and accurate numerical schemes. Some examples demonstrate the importance of the CFL condition as a necessary condition for the stability of the numerical schemes.
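
    To make the level-set connection concrete, here is a minimal sketch of the Osher-Sethian update for phi_t + F|grad phi| = 0 with constant speed F > 0, using a first-order Godunov upwind scheme and a CFL-limited time step. The periodic boundary handling (np.roll) and the safety factor of 0.5 are illustrative choices, not taken from the paper.

```python
import numpy as np

def evolve_level_set(phi, F, dx=1.0, steps=50):
    """Advance phi_t + F * |grad phi| = 0 (constant F > 0) with a first-order
    Godunov upwind scheme; dt respects the CFL condition dt <= dx / F."""
    dt = 0.5 * dx / abs(F)  # CFL-limited time step (safety factor 0.5)
    for _ in range(steps):
        # One-sided differences (periodic boundaries via np.roll).
        dxm = (phi - np.roll(phi, 1, axis=1)) / dx   # backward in x
        dxp = (np.roll(phi, -1, axis=1) - phi) / dx  # forward in x
        dym = (phi - np.roll(phi, 1, axis=0)) / dx   # backward in y
        dyp = (np.roll(phi, -1, axis=0) - phi) / dx  # forward in y
        # Godunov upwind approximation of |grad phi| for F > 0.
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        phi = phi - dt * F * grad
    return phi

# Example: grow a circle of radius 20 (embedded as a signed distance function)
# outward with unit normal speed.
y, x = np.mgrid[0:100, 0:100]
phi0 = np.sqrt((x - 50.0) ** 2 + (y - 50.0) ** 2) - 20.0
phi = evolve_level_set(phi0, F=1.0)
```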

    Proper shape representation of single figure and multi-figure anatomical objects

    Extracting anatomic objects from medical images is an important process in various medical applications. This extraction, called image segmentation, is often realized by deformable models. Among deformable model methods, medial deformable models have the unique advantage of representing not only the object boundary surfaces but also the object interior volume. Building on one medial deformable model, the m-rep, the main goal of this dissertation is to provide proper shape representations of simple anatomical objects of one part and complex anatomical objects of multiple parts in a population. This dissertation focuses on several challenges in the existing medially based deformable model method: 1. how to derive a proper continuous form by interpolating a discrete medial shape representation; 2. how to represent complex objects with several parts and do statistical analysis on them; and 3. how to avoid local shape defects, such as folding or creasing, in shapes represented by the deformable model. The methods proposed in this dissertation address these challenges as follows: 1. An interpolation method for a discrete medial shape model is proposed that guarantees the legality of the interpolated shape; this method is based on the integration of medial shape operators. 2. A medially based representation with hierarchy is proposed to represent complex objects with multiple parts by explicitly modeling the interrelations between object parts and the smooth transitions between each pair of connected parts; a hierarchical statistical analysis is also proposed for these complex objects. 3. A method to fit a medial model to binary images is proposed that uses an explicit legality penalty derived from the medial shape operators. Probability distributions learned from shape models fitted by the proposed method have proven to yield better image segmentation results.
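
    As a loose, two-dimensional illustration of the medial idea behind m-reps, the toy below stores a discrete medial atom as a hub position, a radius, and two spoke directions, and recovers the boundary points implied at the spoke tips. The field names and the 2D simplification are hypothetical; the dissertation's interpolation via medial shape operators is not shown.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MedialAtom:
    hub: np.ndarray   # position on the medial (skeletal) sheet
    radius: float     # distance from the hub to the boundary
    axis: float       # orientation of the atom's bisector (radians)
    theta: float      # half-angle between the two spokes (radians)

    def spoke_tips(self):
        """Boundary points implied by the two spokes of this 2D atom."""
        unit = lambda a: np.array([np.cos(a), np.sin(a)])
        return (self.hub + self.radius * unit(self.axis + self.theta),
                self.hub + self.radius * unit(self.axis - self.theta))

# Two implied boundary points of a single atom.
atom = MedialAtom(hub=np.array([0.0, 0.0]), radius=1.0, axis=0.0, theta=np.pi / 3)
tip_a, tip_b = atom.spoke_tips()
```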

    Classification Algorithms based on Generalized Polynomial Chaos

    Classification is one of the most important tasks in process systems engineering. Since most classification algorithms are based on mathematical models, they inseparably involve the quantification and propagation of model uncertainty onto the variables used for classification. Such uncertainty may originate either from a lack of knowledge of the underlying process or from intrinsic time-varying phenomena such as unmeasured disturbances and noise. Often, model uncertainty has been modeled in a probabilistic way, and Monte Carlo (MC) type sampling methods have been the method of choice for quantifying the effects of uncertainty. However, MC methods may be computationally prohibitive, especially for nonlinear complex systems and systems involving many variables. Alternatively, stochastic spectral methods such as the generalized polynomial chaos (gPC) expansion have emerged as a promising technique for uncertainty quantification and propagation. Such methods approximate the stochastic variables by a truncated gPC series whose coefficients can be calculated by Galerkin projection with the mathematical models describing the process. Following these steps, gPC-expansion-based methods can converge to a solution much faster than MC-type sampling-based methods. Using the gPC-based uncertainty quantification and propagation method, this project focuses on the following three problems: (i) fault detection and diagnosis (FDD) in the presence of stochastic faults entering the system; (ii) simultaneous optimal tuning of an FDD algorithm and a feedback controller to enhance the detectability of faults while mitigating closed-loop process variability; and (iii) classification of apoptotic versus normal cells using morphological features identified by a stochastic image segmentation algorithm in combination with machine learning techniques. The algorithms developed in this work are shown to be highly efficient in terms of computational time, improved fault diagnosis and accurate classification of apoptotic versus normal cells.
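
    A minimal sketch of the truncated gPC idea for a single standard-normal input: project a scalar response onto probabilists' Hermite polynomials and read the mean and variance off the coefficients. The response function below is a placeholder rather than one of the process models from the thesis, and the quadrature-based discrete projection stands in for the full Galerkin step.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def gpc_coefficients(u, order, n_quad=40):
    """Coefficients u_k = E[u(xi) * He_k(xi)] / k! of a truncated Hermite chaos,
    xi ~ N(0, 1), computed by Gauss-HermiteE quadrature (discrete projection)."""
    x, w = He.hermegauss(n_quad)   # nodes/weights for the weight exp(-x^2 / 2)
    w = w / sqrt(2 * pi)           # rescale so the weights average over N(0, 1)
    return np.array([np.dot(w, u(x) * He.hermeval(x, np.eye(order + 1)[k]))
                     / factorial(k) for k in range(order + 1)])

order = 4
u = lambda xi: np.exp(0.3 * xi)    # placeholder stochastic response
c = gpc_coefficients(u, order)
mean = c[0]                                                       # E[u]
var = sum(c[k] ** 2 * factorial(k) for k in range(1, order + 1))  # Var[u]
```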

    Skeletonization methods for image and volume inpainting
