    Image processing system based on similarity/dissimilarity measures to classify binary images from contour-based features

    Image Processing Systems (IPS) try to solve tasks such as image classification or segmentation based on image content. Many authors have proposed a variety of techniques to tackle the image classification task. Plenty of methods address the performance of the IPS [1], as well as the influence of external circumstances such as illumination, rotation, and noise [2]. However, there is increasing interest in classifying shapes from binary images (BI). Shape Classification (SC) from BI takes a segmented image as a sample (background segmentation [3]) and aims to identify objects based on their shape.
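
    Purely as an illustration of the contour-based feature idea (not the paper's actual descriptor set), a sketch using scikit-image might look like this:

```python
# A rough sketch (not the paper's method) of contour-based features for
# binary shape classification, assuming scikit-image is available.
import numpy as np
from skimage import measure

def contour_features(binary_img):
    """Map a single-object binary image to an illustrative feature vector."""
    contours = measure.find_contours(binary_img.astype(float), 0.5)
    longest = max(contours, key=len)              # dominant object outline
    props = measure.regionprops(binary_img.astype(int))[0]
    circularity = 4 * np.pi * props.area / (props.perimeter ** 2 + 1e-9)
    return np.array([props.area, props.perimeter, circularity, len(longest)])
```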

    Fingerprint Matching Using a Hybrid Shape and Orientation Descriptor

    From the privacy perspective, most concerns arise from the storage and misuse of biometric data (Cimato et al., 2009). ... is provided with an in-depth discussion of the state-of-the-art in iris biometric cryptosystems, which completes this work.

    Exploiting novel properties of space-filling curves for data analysis

    Using space-filling curves to order multidimensional data has been found to be useful in a variety of application domains. This paper examines the ordering of multidimensional data induced by a space-filling curve after the data has been transformed using shape-preserving transformations. It is demonstrated that, although the orderings are not invariant under these transformations, the probability of an ordering depends on the geometrical configuration of the multidimensional data. This novel property extends the potential applicability of space-filling curves and is demonstrated by constructing novel features for shape matching.
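
    As a hedged illustration of ordering points along a space-filling curve, the sketch below uses a Z-order (Morton) curve; the grid resolution and the rotation used as a shape-preserving transformation are assumptions for the example, not details from the paper:

```python
# Illustrative sketch: order 2-D points by a Z-order (Morton) curve and
# compare orderings before/after a shape-preserving transform (a rotation).
import numpy as np

def morton_key(x, y, bits=16):
    """Interleave the bits of integer grid coordinates (Z-order index)."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def sfc_ordering(points, bits=16):
    """Return the permutation that sorts points along the Z-order curve."""
    scale = (1 << bits) - 1
    mins, maxs = points.min(0), points.max(0)
    grid = ((points - mins) / (maxs - mins + 1e-12) * scale).astype(int)
    keys = [morton_key(x, y, bits) for x, y in grid]
    return np.argsort(keys)

pts = np.random.rand(100, 2)
theta = np.pi / 6                      # rotation is shape preserving
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
order_before = sfc_ordering(pts)
order_after = sfc_ordering(pts @ R.T)  # the ordering generally changes
```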

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn attention not just from computer science but from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), accuracy, and a consistent response to a single edge. Moreover, most work in feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think. In this digital world, where the use of images for a variety of purposes continues to increase, researchers who are serious about addressing the aforementioned limitations must be able to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a two-dimensional (2D) signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is therefore dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
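
    A minimal sketch of the sifting idea behind BEMD, using a max/min (order-statistics) filter envelope as one common approximation; the window size, smoothing and fixed sift count below stand in for the dissertation's flexible envelope estimation and stopping criteria, which are not reproduced here:

```python
# Minimal, illustrative BEMD sifting. Real BEMD iterates until stopping
# criteria are met; the order-statistics envelope is one approximation.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, gaussian_filter

def sift_once(img, window=7, smooth=3.0):
    """One sifting iteration: subtract the local mean envelope."""
    upper = gaussian_filter(maximum_filter(img, size=window), smooth)
    lower = gaussian_filter(minimum_filter(img, size=window), smooth)
    return img - (upper + lower) / 2.0     # candidate BIMF

def bemd(img, n_imfs=3, n_sifts=5):
    """Decompose an image into a few BIMFs plus a residue."""
    residue, imfs = img.astype(float), []
    for _ in range(n_imfs):
        h = residue
        for _ in range(n_sifts):           # fixed sift count as stopping rule
            h = sift_once(h)
        imfs.append(h)
        residue = residue - h
    return imfs, residue
```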

    3D video based detection of early lameness in dairy cattle

    Lameness is a major issue in dairy cattle, and its early, automated detection offers animal welfare benefits together with potentially large commercial savings for farmers. Current advancements in automated detection have not achieved a sensitive measure for classifying early lameness; this remains a key challenge to be solved. The state of the art also lags behind on other aspects, e.g. robust feature detection from a cow's body and identification of the lame leg/side. This multidisciplinary research addresses the above issues by proposing an overhead, non-intrusive and covert 3-dimensional (3D) video setup. This facilitates an automated process for recording freely walking Holstein dairy cows at a commercial farm scale, in an unconstrained environment.

    The 3D data of the cow's body are used to automatically track key regions such as the hook bones and the spine using a curvedness feature descriptor which operates at a high detection accuracy (100% for the spine, >97% for the hooks). From these tracked regions, two locomotion traits have been developed. First, motivated by a novel biomechanical approach, a proxy for the animal's gait asymmetry is introduced. This dynamic proxy is derived from the height variations of the hip joints (hooks) during walking, and is extrapolated into right/left vertical leg motion signals. The proxy is evidently affected by minor lameness and directly contributes to identifying the lame leg. Second, back posture is analysed using two cubic-fit curvatures (X-Z plane and X-Y plane) from the spine region. The X-Z plane curvature is used to assess the spine's arch as an early lameness trait, while the X-Y plane curvature provides a novel definition for localising the lame side.

    Objective variables were extracted from both traits and used to train a linear Support Vector Machine (SVM) classifier. Validation is made against ground truth data manually scored using a 1–5 locomotion scoring (LS) system, on two datasets of 23 and 60 sessions of walking cows. A threshold has been identified between LS 1 and LS 2 (and above). This boundary is important as it represents the earliest point at which a cow is considered lame, and its early detection could improve intervention outcomes, thereby minimising losses and reducing animal suffering. The threshold achieved an accuracy of 95.7% with 100% sensitivity (detecting lame cows) and 75% specificity (detecting non-lame cows) on dataset 1, and an accuracy of 88.3% with 88% sensitivity and 92% specificity on dataset 2, thereby outperforming the state of the art at a stricter lameness boundary. The 3D video based multi-trait detection strives towards providing a comprehensive locomotion assessment on dairy farms. This contributes to detecting developing lameness trends through regular monitoring, which addresses the lack of robustness of existing methods and reduces reliance on expensive equipment and/or expertise in the dairy industry.
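
    To make the classification step concrete, here is a hedged sketch of training a linear SVM on two such traits; the asymmetry formula, feature values and labels are illustrative assumptions, not the thesis's actual variables:

```python
# Hedged sketch: an asymmetry proxy from left/right hook-height signals
# plus a spine-arch value, fed to a linear SVM. All numbers are toy values.
import numpy as np
from sklearn.svm import SVC

def gait_asymmetry(left_hook_h, right_hook_h):
    """Proxy for gait asymmetry from vertical hook-height traces."""
    left_amp = left_hook_h.max() - left_hook_h.min()
    right_amp = right_hook_h.max() - right_hook_h.min()
    return abs(left_amp - right_amp) / max(left_amp, right_amp)

# X: one row per walking session [asymmetry proxy, spine-arch curvature]
X = np.array([[0.05, 0.01], [0.30, 0.09], [0.08, 0.02], [0.41, 0.12]])
y = np.array([0, 1, 0, 1])       # 0 = LS 1 (sound), 1 = LS >= 2 (lame)
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.25, 0.07]]))
```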

    A comparative evaluation for liver segmentation from SPIR images and a novel level set method using a signed pressure force function

    Thesis (Doctoral)--Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2013. Includes bibliographical references (leaves 118-135). Text in English; abstract in Turkish and English. xv, 145 leaves.

    Developing a robust method for liver segmentation from magnetic resonance images is a challenging task due to similar intensity values of adjacent organs, the geometrically complex structure of the liver, and the injection of contrast media, which causes all tissues to have different gray level values. Several artifacts of pulsation and motion, as well as partial volume effects, further complicate automatic liver segmentation from magnetic resonance images. In this thesis, we present an overview of liver segmentation methods for magnetic resonance images and show comparative results for seven different liver segmentation approaches chosen from deterministic (K-means based), probabilistic (Gaussian model based), supervised neural network (multilayer perceptron based) and deformable model based (level set) segmentation methods. The results of qualitative and quantitative analysis using sensitivity, specificity and accuracy metrics show that the multilayer perceptron based approach and a level set based approach that uses a distance regularization term and a signed pressure force function are reasonable methods for liver segmentation from spectral pre-saturation inversion recovery images. However, the multilayer perceptron based segmentation method requires a higher computational cost, and the distance regularization based automatic level set method is very sensitive to the chosen variance of the Gaussian function. Our proposed level set based method, which uses a novel signed pressure force function that can control the direction and velocity of the evolving active contour, is faster and solves several problems of the other applied methods, such as sensitivity to the initial contour or to the variance parameter of the Gaussian kernel in edge stopping functions, without using any regularization term.
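
    For illustration only, here is a minimal level set driven by a classic mean-based signed pressure force (in the spirit of Zhang et al.'s SBGFRLS, where Gaussian smoothing of the level set replaces an explicit regularization term); the thesis's novel SPF is not reproduced here:

```python
# Minimal sketch of an SPF-driven level set; the mean-based SPF below is
# a classic illustrative choice, not the novel SPF proposed in the thesis.
import numpy as np
from scipy.ndimage import gaussian_filter

def spf_level_set(img, phi, alpha=20.0, sigma=1.5, n_iter=100, dt=0.1):
    """Evolve phi with a mean-based signed pressure force."""
    img = img.astype(float)
    for _ in range(n_iter):
        inside, outside = phi > 0, phi <= 0
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[outside].mean() if outside.any() else 0.0
        spf = img - (c1 + c2) / 2.0
        spf = spf / (np.abs(spf).max() + 1e-12)   # normalise to [-1, 1]
        gy, gx = np.gradient(phi)
        phi = phi + dt * alpha * spf * np.sqrt(gx ** 2 + gy ** 2)
        phi = gaussian_filter(phi, sigma)         # smoothing as regulariser
    return phi

# phi can start as a simple binary step, e.g. +1 inside an initial box and
# -1 outside; the Gaussian smoothing keeps it regular during evolution.
```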

    Handbook of Computer Vision Algorithms in Image Algebra

    Fast Nearest Neighbor Search in Medical Image Databases

    We examine the problem of finding similar tumor shapes. Starting from a natural similarity function (the so-called 'max morphological distance'), we showed how to lower-bound it and how to search for nearest neighbors in large collections of tumor-like shapes. Specifically, we used state-of-the-art concepts from morphology, namely the 'pattern spectrum' of a shape, to map each shape to a point in n-dimensional space. Following [Faloutsos94Fast, Jagadish91Retrieval], we organized the n-d points in an R-tree. We showed that the L∞ (= max) norm in the n-d space lower-bounds the actual distance. This guarantees no false dismissals for range queries. In addition, we developed a nearest-neighbor algorithm that also guarantees no false dismissals. Finally, we implemented the method and tested it against a testbed of realistic tumor shapes, using an established tumor-growth model of Murray Eden [Eden:61]. The experiments showed that our method is up to 27 times faster than straightforward sequential scanning. (Also cross-referenced as UMIACS-TR-96-17.)
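
    A sketch of the indexing idea: compute a pattern spectrum by successive morphological openings, then compare spectra with the L∞ norm, which lower-bounds the max morphological distance and so avoids false dismissals in range queries. The structuring-element family and number of scales below are assumptions, not the paper's settings:

```python
# Map each binary shape to its pattern spectrum (area removed by openings
# with growing square structuring elements), then compare with L-infinity.
import numpy as np
from scipy.ndimage import binary_opening

def pattern_spectrum(shape, n_scales=8):
    """Pattern spectrum: area removed at each opening scale."""
    areas, prev = [], shape.sum()
    for r in range(1, n_scales + 1):
        se = np.ones((2 * r + 1, 2 * r + 1), dtype=bool)
        opened = binary_opening(shape, structure=se).sum()
        areas.append(prev - opened)
        prev = opened
    return np.array(areas, dtype=float)

def linf_distance(spec_a, spec_b):
    """L-infinity distance between two pattern-spectrum points."""
    return np.abs(spec_a - spec_b).max()
```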

    Automated framework for robust content-based verification of print-scan degraded text documents

    Fraudulent documents frequently cause severe financial damage and pose security breaches to civil and government organizations. The rapid advances in technology and the widespread availability of personal computers have not reduced the use of printed documents. While digital documents can be verified by many robust and secure methods such as digital signatures and digital watermarks, verification of printed documents still relies on manual inspection of embedded physical security mechanisms.

    The objective of this thesis is to propose an efficient automated framework for robust content-based verification of printed documents. The principal issue is to achieve robustness with respect to the degradations and increased levels of noise that occur over multiple cycles of printing and scanning. It is shown that classic OCR systems fail under such conditions; moreover, OCR systems typically rely heavily on high-level linguistic structures to improve recognition rates. However, inferring knowledge about the contents of the document image from a priori statistics is contrary to the nature of document verification. Instead, a system is proposed that utilizes specific knowledge of the document to perform highly accurate content verification based on a print-scan degradation model and character shape recognition. Such specific knowledge is a reasonable choice for the verification domain, since the document contents must already be known in order to verify them.

    The system analyses digital multi-font PDF documents to generate a descriptive summary of the document, referred to as a "Document Description Map" (DDM). The DDM is later used for verifying the content of printed and scanned copies of the original documents. The system utilizes features based on the 2-D Discrete Cosine Transform (DCT) and an adaptive hierarchical classifier trained with synthetic data generated by a print-scan degradation model. The system is tested with varying degrees of print-scan channel corruption on a variety of documents, with corruption produced by repetitive printing and scanning of the test documents. Results show the approach achieves excellent accuracy and robustness despite the high level of noise.
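
    As a hedged sketch of the feature step, a 2-D DCT of a normalised character glyph with a low-frequency block kept as the descriptor; the 32x32 glyph size and 8x8 block are assumptions, not the thesis's settings:

```python
# Illustrative DCT feature extraction for a character glyph image.
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Orthonormal 2-D DCT (type II) of an image block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def glyph_features(glyph, keep=8):
    """Map a 32x32 character image to a low-frequency DCT feature vector."""
    coeffs = dct2(glyph.astype(float))
    return coeffs[:keep, :keep].ravel()   # 64-d descriptor per character
```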