19 research outputs found

    Efficient Local Comparison Of Images Using Krawtchouk Descriptors

    It is known that image comparison can prove cumbersome in both computational complexity and runtime, due to factors such as the rotation, scaling, and translation of the object in question. Due to the locality of Krawtchouk polynomials, relatively few descriptors are necessary to describe a given image, and this can be achieved with minimal memory usage. Using this method, not only can images be described efficiently as a whole, but specific regions of images can be described as well, without cropping. Because of this property, queries can be found within a single large image, or a collection of large images, which serves as a database for search. Krawtchouk descriptors can also describe collections of patches of 3D objects, which is explored in this paper, along with a theoretical methodology for describing nD hyperobjects. Test results for an implementation of 3D Krawtchouk descriptors in GNU Octave, as well as statistics regarding effectiveness and runtime, are included, and the code used for testing will be published open source in the near future.
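
    As a rough illustration of the underlying idea (not the authors' forthcoming code), the sketch below computes weighted Krawtchouk moments of a 2D image in Python using the standard three-term recurrence; the parameters px and py, and all other names, are illustrative assumptions showing how shifting the polynomials' support focuses the descriptor on a local region without cropping.

    ```python
    # Minimal sketch: weighted Krawtchouk moments of a grayscale image.
    # Illustrative only; numerically naive for large images.
    import numpy as np
    from scipy.special import comb

    def weighted_krawtchouk(N, p):
        """Rows are weighted Krawtchouk polynomials K_n(x; p, N-1), x = 0..N-1."""
        x = np.arange(N)
        Nm1 = N - 1
        K = np.zeros((N, N))
        K[0] = 1.0
        K[1] = 1.0 - x / (p * Nm1)
        for n in range(1, N - 1):
            a = p * (Nm1 - n)
            K[n + 1] = ((a + n * (1 - p) - x) * K[n] - n * (1 - p) * K[n - 1]) / a
        w = comb(Nm1, x) * p**x * (1 - p)**(Nm1 - x)                  # weight function
        rho = ((1 - p) / p)**np.arange(N) / comb(Nm1, np.arange(N))   # squared norms
        return K * np.sqrt(w[None, :]) / np.sqrt(rho[:, None])

    def krawtchouk_moments(img, order, px=0.5, py=0.5):
        """Low-order moments; px, py near 0.5 describe the whole image,
        values toward 0 or 1 emphasize a local region instead."""
        H, W = img.shape
        Ky = weighted_krawtchouk(H, py)[:order]
        Kx = weighted_krawtchouk(W, px)[:order]
        return Ky @ img @ Kx.T

    img = np.random.rand(64, 64)                              # stand-in image
    Q = krawtchouk_moments(img, order=8, px=0.25, py=0.25)    # top-left region
    ```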

    Feature Extraction Methods for Character Recognition


    Digital Filters and Signal Processing

    Digital filters, together with signal processing, are being employed in new technologies and information systems, and are implemented in many different areas and applications. Digital filters and signal processing can be used at no cost and can be adapted to different cases with great flexibility and reliability. This book presents advanced developments in digital filters and signal processing methods, covering a range of case studies. It presents the essence of the subject, along with the principal approaches to the most recent mathematical models being employed worldwide.

    Automatic Segmentation and Classification of Red and White Blood cells in Thin Blood Smear Slides

    In this work we develop a system for the automatic detection and classification of cytological images, which play an increasingly important role in medical diagnosis. A primary aim of this work is the accurate segmentation of cytological images of blood smears and subsequent feature extraction, along with studying related classification problems such as the identification and counting of peripheral blood smear particles and the classification of white blood cells into five types. Our proposed approach benefits from powerful image processing techniques to perform a complete blood count (CBC) without human intervention. The general framework of this blood smear analysis research is as follows. Firstly, a digital blood smear image is de-noised using an optimized Bayesian non-local means filter, yielding a dependable cell counting system that may be used under different image capture conditions. Then an edge preservation technique with a Kuwahara filter is used to recover degraded and blurred white blood cell boundaries in blood smear images while reducing the residual negative effect of noise. After denoising and edge enhancement, the next step is binarization using a combination of Otsu and Niblack thresholding to separate the cells from the stained background. Cell separation and counting are achieved by granulometry, advanced active contours without edges, and morphological operators with the watershed algorithm. This is followed by the recognition of the different types of white blood cells (WBCs) and by red blood cell (RBC) segmentation. The next step uses three main types of features, namely shape, intensity, and texture-invariant features, in combination with a variety of classifiers. The following features are used in this work: intensity histogram features, invariant moments, the relative area, co-occurrence and run-length matrices, dual-tree complex wavelet transform features, and Haralick and Tamura features. Next, different statistical approaches involving correlation, distribution, and redundancy are used to measure the dependency between sets of features and to select feature variables for white blood cell classification. A global sensitivity analysis with random sampling-high dimensional model representation (RS-HDMR), which can deal with independent and dependent input feature variables, is used to assess the dominant discriminatory power and reliability of each feature, leading to an efficient feature selection. These feature selection results are compared in experiments with the branch and bound method and with sequential forward selection (SFS), respectively. This work examines support vector machines (SVM) and convolutional neural networks (LeNet5) in connection with white blood cell classification. Finally, the white blood cell classification system is validated in experiments conducted on cytological images of normal, poor-quality blood smears. These experimental results are also assessed against ground truth obtained manually from medical experts.
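
    As a hedged, minimal sketch of just the binarization and cell-separation stage (using scikit-image, with plain Otsu thresholding standing in for the Otsu/Niblack combination and a distance-transform watershed standing in for the full active-contour pipeline), the code below splits and counts touching cells; every name and parameter is an illustrative assumption.

    ```python
    # Minimal sketch: binarize a smear image and separate touching cells.
    from scipy import ndimage as ndi
    from skimage import filters, measure, morphology, segmentation

    def count_cells(gray):
        """Rough cell count for a grayscale smear where cells are darker than background."""
        mask = gray < filters.threshold_otsu(gray)             # global binarization
        mask = morphology.remove_small_objects(mask, min_size=64)
        distance = ndi.distance_transform_edt(mask)            # distance to background
        markers = measure.label(morphology.h_maxima(distance, h=2))
        labels = segmentation.watershed(-distance, markers, mask=mask)
        return labels.max(), labels                            # count, labeled cells
    ```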

    An analysis of combinatorial search spaces for a class of NP-hard problems

    Given a finite but very large set of states X and a real-valued objective function f defined on X, combinatorial optimization refers to the problem of finding elements of X that maximize (or minimize) f. Many combinatorial search algorithms employ some perturbation operator to hill-climb in the search space. Such perturbative local search algorithms are state of the art for many classes of NP-hard combinatorial optimization problems such as maximum k-satisfiability, scheduling, and problems of graph theory. In this thesis we analyze combinatorial search spaces by expanding the objective function into a (sparse) series of basis functions. While most analyses of the distribution of function values in the search space must rely on empirical sampling, the basis function expansion allows us to directly study the distribution of function values across regions of states for combinatorial problems without the need for sampling. We concentrate on objective functions that can be expressed as bounded pseudo-Boolean functions, which are NP-hard to solve in general. We use the basis expansion to construct a polynomial-time algorithm for exactly computing constant-degree moments of the objective function f over arbitrarily large regions of the search space. On functions with restricted codomains, these moments are related to the true distribution by a system of linear equations. Given low moments supplied by our algorithm, we construct bounds on the true distribution of f over regions of the space using a linear programming approach. A straightforward relaxation allows us to efficiently approximate the distribution and hence quickly estimate the count of states in a given region that take certain values under the objective function. The analysis is also useful for characterizing properties of specific combinatorial problems. For instance, by connecting search space analysis to the theory of inapproximability, we prove that the bound specified by Grover's maximum principle for the Max-Ek-Lin-2 problem is sharp. Moreover, we use the framework to prove that certain configurations are forbidden in regions of the Max-3-Sat search space, supplying the first theoretical confirmation of empirical results by others. Finally, we show that theoretical results can be used to drive the design of algorithms in a principled manner by using the search space analysis developed in this thesis in algorithmic applications. First, information obtained from our moment-retrieving algorithm can be used to direct a hill-climbing search across plateaus in the Max-k-Sat search space. Second, the analysis can be used to control the mutation rate of a (1+1) evolutionary algorithm on bounded pseudo-Boolean functions so that the expected fitness of the offspring of each search point is maximized. For these applications, knowledge of the search space structure supplied by the analysis translates to significant gains in the performance of search.
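
    As a toy illustration (an assumption-laden sketch, not the thesis's polynomial-time algorithm), the code below expands a small pseudo-Boolean function in the Walsh basis and reads off the exact mean of f over a region of the space, here a subcube with some bits clamped, directly from the coefficients and with no sampling; the brute-force expansion is exponential and only stands in for the sparse expansions that make the real approach tractable.

    ```python
    # Minimal sketch: Walsh expansion of f: {0,1}^n -> R, then the exact mean of f
    # over a subcube (a region with some bits fixed) from the coefficients alone.
    import itertools
    import numpy as np

    def walsh_coefficients(f, n):
        """Brute-force coefficients w[b] with basis psi_b(x) = (-1)^(b.x); O(4^n)."""
        w = np.zeros(2**n)
        for b in range(2**n):
            for x in itertools.product((0, 1), repeat=n):
                xi = int("".join(map(str, x)), 2)
                w[b] += f(x) * (-1) ** bin(b & xi).count("1")
        return w / 2**n

    def subcube_mean(w, n, fixed):
        """Exact mean of f over the region where bits in `fixed` (index -> value)
        are clamped; only coefficients supported on the fixed bits contribute."""
        fixed_mask = sum(1 << (n - 1 - i) for i in fixed)
        total = 0.0
        for b in range(2**n):
            if b & ~fixed_mask:          # b touches a free bit, so it averages to 0
                continue
            dot = sum(fixed[i] for i in fixed if (b >> (n - 1 - i)) & 1)
            total += w[b] * (-1) ** dot
        return total

    f = lambda x: sum(x) + 2 * x[0] * x[1]     # toy bounded pseudo-Boolean objective
    w = walsh_coefficients(f, 4)
    print(subcube_mean(w, 4, {0: 1}))          # mean of f with x_0 clamped to 1 -> 3.5
    ```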

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method has no requirement for a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably on heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural detail is considered important in many fusion applications, as it helps define image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective-based fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
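
    As an illustrative assumption about one ingredient of such a texture-based metric (not the dissertation's implementation), the sketch below extracts second-order GLCM statistics with scikit-image; features of this kind could stand in for the edge-based terms of an objective fusion measure.

    ```python
    # Minimal sketch: second-order GLCM texture statistics of an 8-bit image.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_texture_features(gray_uint8):
        """Contrast, correlation, energy and homogeneity averaged over 4 directions."""
        glcm = graycomatrix(gray_uint8, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=256, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "correlation", "energy", "homogeneity")}
    ```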

    The Role of Transient Vibration of the Skull on Concussion

    Concussion is a traumatic brain injury, usually caused by a direct or indirect blow to the head, that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull's resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex. The skull deforms and vibrates like a bell for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there has been little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull on concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Due to bone's weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures. One of the most important components of a successful segmentation is high-quality ground truth labels. Therefore, we introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, as the brain region will be important for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel automatic 2D and 3D method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models. To perform this task, the skull must be precisely aligned in all anatomical planes. Therefore, we introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the concept of using cephalometric landmarks and manual image grid alignment to construct the training dataset was introduced. This concept was then extended to a 3D version in which the coronal and transverse planes are aligned using a CNN approach. As the alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how the skull's resonant frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Finally, theories will be raised regarding the relation between skull geometry, such as shape and thickness, and vibration and brain tissue injury, which may result in concussive injury.
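
    For the modal-analysis step, the generic sketch below (a toy lumped mass-spring chain, not the thesis's finite-element skull model) shows how natural frequencies follow from the generalized eigenproblem K φ = ω² M φ using SciPy; every matrix and value is an illustrative assumption.

    ```python
    # Minimal sketch: natural frequencies via modal analysis of a 3-DOF chain.
    import numpy as np
    from scipy.linalg import eigh

    def natural_frequencies(K, M):
        """Frequencies in Hz from stiffness K and mass M (symmetric, M positive definite)."""
        eigvals, _ = eigh(K, M)                      # omega^2, sorted ascending
        return np.sqrt(np.clip(eigvals, 0, None)) / (2 * np.pi)

    m, k = 1.0, 4.0e4                                # lumped mass [kg], spring stiffness [N/m]
    M = np.diag([m, m, m])
    K = k * np.array([[ 2, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  1]])
    print(natural_frequencies(K, M))                 # three modal frequencies in Hz
    ```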

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume