
    ALGORITHMS FOR ADJUSTMENT OF SYMMETRY AXIS FOUND FOR 2D SHAPES BY THE SKELETON COMPARISON METHOD

    Reflection symmetry detection for 2D shapes is a well-known task in Computer Vision, but there are few efficient and effective methods for solving it. Our previously proposed approach, based on pair-wise comparison of sub-sequences of skeleton primitives, finds the axis of symmetry within a few seconds. To evaluate the degree of symmetry relative to the found axis, we use the Jaccard similarity measure, applied to the two pixel subsets into which the axis splits the shape. An axis found by the skeleton comparison method often deviates to some extent from the ground-truth axis found by exhaustive search among all potential candidates. We therefore propose algorithms that adjust the axis found by the fast skeleton method; they search for an axis located near the seed skeleton axis that has a greater Jaccard similarity measure. An experimental study on the "Flavia" and "Butterflies" datasets shows that the proposed algorithms find the ground-truth axis (or an axis with a slightly lower Jaccard similarity value than the ground-truth axis) in near real time, considerably faster than any of the optimized brute-force methods.
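
    The Jaccard-based evaluation above can be made concrete with a minimal Python sketch: reflect the foreground pixels of a binary shape mask across a candidate axis (a point plus an angle) and compute the Jaccard similarity between the original and mirrored pixel sets. The mask representation and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reflect_points(points, p0, theta):
    """Reflect 2-D points across the line through p0 with direction angle theta."""
    d = np.array([np.cos(theta), np.sin(theta)])   # unit vector along the axis
    v = points - p0
    # A reflection keeps the component along the axis and negates the orthogonal one.
    return p0 + 2.0 * (v @ d)[:, None] * d - v

def axis_jaccard(mask, p0, theta):
    """Jaccard similarity between a binary shape and its mirror image across the axis."""
    pts = np.argwhere(mask)                        # (row, col) foreground pixels
    mirrored = np.rint(reflect_points(pts.astype(float),
                                      np.asarray(p0, float), theta)).astype(int)
    a = set(map(tuple, pts))
    b = set(map(tuple, mirrored))
    return len(a & b) / len(a | b)                 # 1.0 means perfect reflection symmetry
```

    An adjustment step in the spirit of the abstract would then perturb p0 and theta locally and keep the candidate with the highest axis_jaccard value.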

    Feature Extraction Methods for Character Recognition

    Abstract not available.

    Experimental and computational analysis of random cylinder packings with applications

    Random cylinder packings are prevalent in chemical engineering applications, and they can serve as prototype models of fibrous and other particulate materials. In this research, comprehensive studies of cylinder packings were carried out by computer simulation and by experiment. The computational studies used a collective rearrangement algorithm (based on a Monte Carlo technique) to generate different packing structures. 3D random packing limits were explored, and the packing structures were quantified by their positional ordering, orientational ordering, and particle-particle contacts. Furthermore, the void space in the packings was expressed as a pore network, which retains topological and geometrical information. The significance of this approach is that any irregular continuous porous space can be approximated as a mathematically tractable pore network, thus allowing for efficient microscale flow simulation. Single-phase flow simulations were conducted, and the results were validated by calculating permeabilities. In the experimental part of the research, a series of densification experiments was conducted on equilateral cylinders. X-ray microtomography was used to image the cylinder packs, and the particle-scale packings were reconstructed from the digital data. This numerical approach makes it possible to study detailed packing structure, packing density, the onset of ordering, and wall effects. Orthogonal ordering and layered structures were found to extend at least two characteristic diameters from the wall in cylinder packings. Important applications of cylinder packings include multiphase flow in catalytic beds, heat transfer, bulk storage and transportation, and the manufacturing of fibrous composites.
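
    The collective rearrangement algorithm mentioned above can be sketched briefly. The toy Python version below relaxes randomly placed spheres, standing in for cylinders because a sphere overlap test is a one-liner, by repeatedly pushing overlapping pairs apart; all parameters and names are illustrative assumptions, not the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def collective_rearrangement(n=200, radius=0.05, box=1.0, iters=5000):
    """Relax random sphere centres until no pair overlaps."""
    pos = rng.uniform(radius, box - radius, size=(n, 3))
    for _ in range(iters):
        moved = False
        for i in range(n):
            d = pos - pos[i]                      # vectors to every other centre
            dist = np.linalg.norm(d, axis=1)
            dist[i] = np.inf                      # ignore self
            j = int(np.argmin(dist))
            overlap = 2 * radius - dist[j]
            if overlap > 0:                       # push the closest pair apart
                step = 0.5 * overlap * d[j] / dist[j]
                pos[i] -= step
                pos[j] += step
                np.clip(pos, radius, box - radius, out=pos)
                moved = True
        if not moved:                             # packing is overlap-free
            return pos
    return pos
```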

    Multi-Technique Fusion for Shape-Based Image Retrieval

    Content-based image retrieval (CBIR) is still in its early stages, although several attempts have been made to solve or minimize the challenges associated with it. CBIR techniques use visual contents such as color, texture, and shape to represent and index images. Of these, shape carries richer information than color or texture. However, retrieval based on shape remains more difficult than retrieval based on color or texture, due to the diversity of shapes and the natural occurrence of shape transformations such as deformation, scaling, and rotation. This thesis presents an approach for fusing several shape-based image retrieval techniques for the purpose of achieving reliable and accurate retrieval performance. An extensive investigation of notable existing shape descriptors is reported, and two new shape descriptors are proposed as a means of overcoming the limitations of current ones. The first descriptor is based on a novel shape signature that includes corner information in order to enhance the performance of shape retrieval techniques that use Fourier descriptors. The second descriptor is based on the curvature of the shape contour. This invariant descriptor takes an unconventional view of the curvature-scale-space map of a contour by treating it as a 2-D binary image; the descriptor is then derived from the 2-D Fourier transform of that binary image. This technique allows the descriptor to capture the detailed dynamics of the curvature of the shape and enhances the efficiency of the shape-matching process. Several experiments have been conducted to compare the proposed descriptors with several notable descriptors. The new descriptors not only speed up the online matching process but also improve retrieval accuracy. The complexity and variety of the content of real images make it impossible for any single descriptor to be effective for all types of images. Therefore, a data-fusion formulation based on a team consensus approach is proposed as a means of achieving high-accuracy performance. In this approach a select set of retrieval techniques form a team, and members of the team exchange information so as to complement each other's assessment of a database image candidate as a match to query images. Several experiments have been conducted on the MPEG-7 contour-shape databases; the results demonstrate that the performance of the proposed fusion scheme is superior to that achieved by any technique individually.
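
    To make the second descriptor more concrete, here is a hedged Python sketch of the general idea: build a binary curvature-scale-space image from a closed contour (rows indexed by smoothing scale, columns by position along the contour, ones at curvature zero crossings) and describe it by the magnitude of its 2-D Fourier transform. The sampling sizes and helper names are assumptions for illustration, not the thesis's code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_fft_descriptor(contour, n_scales=64, n_samples=128):
    """Binary CSS image of a closed contour, described by its 2-D FFT magnitude."""
    # Resample the (N, 2) contour to a fixed number of points.
    idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
    x = contour[idx, 0].astype(float)
    y = contour[idx, 1].astype(float)
    css = np.zeros((n_scales, n_samples))
    for s in range(1, n_scales + 1):
        xs = gaussian_filter1d(x, sigma=s, mode='wrap')   # periodic smoothing
        ys = gaussian_filter1d(y, sigma=s, mode='wrap')
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        curvature = dx * ddy - dy * ddx                   # numerator of signed curvature
        zero_cross = np.sign(curvature[:-1]) != np.sign(curvature[1:])
        css[s - 1, :-1][zero_cross] = 1.0
    # The FFT magnitude is invariant to circular shifts of the CSS image,
    # so the descriptor does not depend on the contour's starting point.
    return np.abs(np.fft.fft2(css))
```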

    The Role of Transient Vibration of the Skull on Concussion

    Concussion is a traumatic brain injury, usually caused by a direct or indirect blow to the head, that affects brain function. The maximum mechanical impedance of brain tissue occurs at 450±50 Hz and may be affected by the skull's resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex: the skull deforms and vibrates like a bell for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there has been little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull in concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Because of bone's weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures, and high-quality ground truth labels are one of the most important components of a successful segmentation. We therefore introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, since the brain region will be important for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important requirement for further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models; to perform this task, the skull must be precisely aligned in all anatomical planes. We therefore introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. The 2D version introduced the concept of using cephalometric landmarks and manual image grid alignment to construct the training dataset; this concept was then extended to a 3D version in which the coronal and transverse planes are aligned using a CNN approach. As alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a framework. Third, the resonant frequencies of multiple skulls are assessed to determine how skull resonant-frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Finally, theories will be proposed regarding the relation between skull geometry, such as shape and thickness, and the vibration associated with brain tissue injury, which may result in concussive injury.
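
    On a discretised (meshed) model, the modal analysis step at the end of this abstract reduces to the generalized eigenvalue problem K phi = omega^2 M phi, where K is the stiffness matrix and M the mass matrix. A minimal Python sketch with a toy two-degree-of-freedom system follows; the matrices are illustrative placeholders, not skull data.

```python
import numpy as np
from scipy.linalg import eigh

def natural_frequencies(K, M):
    """Solve K @ phi = w^2 * M @ phi and return natural frequencies in Hz."""
    w2, modes = eigh(K, M)               # eigenvalues are squared angular frequencies
    w2 = np.clip(w2, 0.0, None)          # guard against tiny negative round-off
    return np.sqrt(w2) / (2 * np.pi), modes

# Toy 2-DOF mass-spring chain standing in for a meshed skull model.
k1, k2 = 4.0e4, 2.0e4                    # spring stiffnesses (N/m), assumed values
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
M = np.diag([1.0, 0.5])                  # masses (kg), assumed values
freqs, _ = natural_frequencies(K, M)
print(freqs)                             # the model's resonant frequencies in Hz
```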

    Automatic Segmentation and Classification of Red and White Blood cells in Thin Blood Smear Slides

    In this work we develop a system for the automatic detection and classification of cytological images, which play an increasingly important role in medical diagnosis. A primary aim is the accurate segmentation of cytological images of blood smears and subsequent feature extraction, along with related classification problems such as the identification and counting of peripheral blood smear particles and the classification of white blood cells into five types. Our proposed approach benefits from powerful image processing techniques to perform a complete blood count (CBC) without human intervention. The general framework of this blood smear analysis research is as follows. First, a digital blood smear image is de-noised using an optimized Bayesian non-local means filter so that the cell counting system remains dependable under different image capture conditions. An edge-preservation technique based on the Kuwahara filter is then used to recover degraded and blurred white blood cell boundaries while reducing the residual effect of noise. After denoising and edge enhancement, the next step is binarization using a combination of Otsu and Niblack thresholding to separate the cells from the stained background. Cell separation and counting are achieved by granulometry, advanced active contours without edges, and morphological operators with the watershed algorithm. This is followed by the recognition of different types of white blood cells (WBCs) and the segmentation of red blood cells (RBCs). The next step uses three main types of features, namely shape, intensity, and texture invariant features, in combination with a variety of classifiers. The following features are used in this work: intensity histogram features, invariant moments, the relative area, co-occurrence and run-length matrices, dual-tree complex wavelet transform features, and Haralick and Tamura features. Next, different statistical approaches involving correlation, distribution, and redundancy are used to measure the dependency between sets of features and to select feature variables for white blood cell classification. A global sensitivity analysis with random sampling-high dimensional model representation (RS-HDMR), which can deal with independent and dependent input feature variables, is used to assess dominant discriminatory power and feature reliability, leading to efficient feature selection. These feature selection results are compared in experiments with the branch and bound method and with sequential forward selection (SFS). This work examines support vector machines (SVM) and convolutional neural networks (LeNet5) for white blood cell classification. Finally, the white blood cell classification system is validated in experiments conducted on cytological images of normal blood smears of poor quality, and the experimental results are assessed against ground truth obtained manually from medical experts.
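
    The Otsu-Niblack binarization step can be illustrated with a short sketch using scikit-image. The combination rule below, where a pixel is foreground only when the global Otsu test and the local Niblack test agree, is one plausible reading of "combination"; the window size and k are assumed values, and the abstract does not specify the exact rule.

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_niblack

def binarize_smear(gray, window=25, k=0.2):
    """Foreground = dark pixels that pass both a global and a local threshold.

    Requiring agreement suppresses Niblack's spurious responses in flat
    background regions while keeping its sensitivity near faint cell edges.
    """
    otsu_mask = gray < threshold_otsu(gray)       # stained cells are darker
    niblack_mask = gray < threshold_niblack(gray, window_size=window, k=k)
    return otsu_mask & niblack_mask
```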

    Traffic and road sign recognition

    This thesis presents a system to recognise and classify road and traffic signs for the purpose of developing an inventory that could assist highway engineers in updating and maintaining them. It uses images taken by a camera from a moving vehicle. The system is based on three major stages: colour segmentation, recognition, and classification. Four colour segmentation algorithms are developed and tested: a shadow and highlight invariant algorithm, a dynamic threshold algorithm, a modification of de la Escalera's algorithm, and a fuzzy colour segmentation algorithm. All algorithms are tested using hundreds of images, and the shadow-highlight invariant algorithm is eventually chosen as the best performer because it is immune to shadows and highlights. It is also robust, having been tested in different lighting conditions, weather conditions, and times of day. Approximately 97% successful segmentation was achieved using this algorithm. Recognition of traffic signs is carried out using a fuzzy shape recogniser. Fuzzy rules based on four shape measures, rectangularity, triangularity, ellipticity, and octagonality, were developed to determine the shape of the sign; among these measures, octagonality is introduced in this research. The final decision of the recogniser is based on the combination of both the colour and the shape of the sign. The recogniser was tested in a variety of conditions, giving an overall performance of approximately 88%. Classification was undertaken using a Support Vector Machine (SVM) classifier and is carried out in two stages: classification of the sign's rim shape followed by classification of the sign's interior. The classifier was trained and tested using binary images in addition to five different types of moments: geometric moments, Zernike moments, Legendre moments, orthogonal Fourier-Mellin moments, and binary Haar features. The performance of the SVM was tested using different features, kernels, SVM types, SVM parameters, and moment orders. The average classification rate achieved is about 97%. Binary images show the best testing results, followed by Legendre moments. The linear kernel gives the best testing results, followed by RBF. C-SVM shows very good performance, but ν-SVM gives better results in some cases.
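
    Two of the four shape measures can be sketched directly from a binary region. The formulations below are common textbook definitions chosen for illustration; the thesis's fuzzy recogniser may define them differently (for instance, using a rotated minimum-area rectangle for rectangularity).

```python
import numpy as np

def rectangularity(mask):
    """Region area divided by the area of its bounding box (1.0 for a full
    axis-aligned rectangle; a rotated minimum-area box would be tighter)."""
    rows, cols = np.nonzero(mask)
    box_area = (rows.max() - rows.min() + 1) * (cols.max() - cols.min() + 1)
    return mask.sum() / box_area

def ellipticity(mask):
    """Region area divided by the area of the ellipse with the same
    second-order central moments (1.0 for a perfect ellipse)."""
    rows, cols = np.nonzero(mask)
    x, y = cols - cols.mean(), rows - rows.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    a = np.sqrt(2 * (mu20 + mu02 + common))       # semi-axes of the
    b = np.sqrt(2 * (mu20 + mu02 - common))       # moment-equivalent ellipse
    return mask.sum() / (np.pi * a * b)
```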

    Dominant points detection for shape analysis

    The growing interest in multimedia in recent years, and the large amount of information exchanged across networks, has drawn many fields of research towards the study of methods for automatic identification. One of the main objectives is to associate images with their information content, using techniques that identify the objects composing them. Among image descriptors, contours are very important because most of the information can be extracted from them, and contour analysis also offers lower computational complexity. Contour analysis can be restricted to the study of salient points of high curvature, from which it is possible to reconstruct the original contour. The thesis focuses on the polygonal approximation of closed digital curves. It first gives an overview of the most common shape descriptors, distinguishing between simple descriptors, external methods, which focus on the analysis of the boundary points of objects, and internal methods, which also use the pixels inside the object; it then describes the major dominant-point extraction methods studied so far and the metrics typically used to evaluate the goodness of a polygonal approximation. Three novel approaches to the problem are then discussed in detail: a fast iterative method (DPIL), more suitable for real-time processing, and two metaheuristic methods (GAPA, ACOPA) based on genetic algorithms and Ant Colony Optimization (ACO), computationally more complex but more precise. These techniques are compared with the other main methods in the literature to assess their computational complexity and polygonal approximation error, and compared with one another to evaluate their robustness to affine transformations and noise. Two new techniques of shape matching, i.e. the identification of objects belonging to the same class in an image database, are then described. The first is based on shape alignment and the second on a correspondence established by ACO; both highlight the excellent results, in terms of computational time and recognition accuracy, obtained through the use of dominant points. In the first matching algorithm the results are compared with a selection of dominant points generated by a human operator, while in the second the dominant points are used instead of the constant sampling of the outline typically used for this kind of approach.
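
    For readers unfamiliar with polygonal approximation, the classic Ramer-Douglas-Peucker algorithm below shows what a dominant-point method computes; it is a standard baseline included for illustration, not the DPIL, GAPA, or ACOPA methods proposed in the thesis.

```python
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: keep the point farthest from the start-end
    chord and recurse on both halves while the deviation exceeds epsilon."""
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy)
    if norm == 0.0:                               # degenerate chord: radial distance
        dists = np.linalg.norm(points - start, axis=1)
    else:                                         # perpendicular distance to the chord
        dists = np.abs(dx * (points[:, 1] - start[1])
                       - dy * (points[:, 0] - start[0])) / norm
    i = int(np.argmax(dists))
    if dists[i] > epsilon:
        left = rdp(points[:i + 1], epsilon)
        right = rdp(points[i:], epsilon)
        return np.vstack([left[:-1], right])      # drop the duplicated split point
    return np.vstack([start, end])

# Example: approximate a quarter circle with a few dominant points.
t = np.linspace(0.0, np.pi / 2, 200)
curve = np.column_stack([np.cos(t), np.sin(t)]) * 100.0
print(len(rdp(curve, epsilon=1.0)))               # a handful of dominant points
```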