8 research outputs found

    Development of an Automated Technique for Reconstructing Jawi Characters in Historical Documents

    Old documents in Jawi script are still widely used as references, but the quality of their hard copies deteriorates over time, and manual reconstruction can take a long time if the documents are sufficiently thick. Since the accuracy of document image recognition algorithms depends heavily on the level of noise in the document, the development of a historical Jawi character reconstruction algorithm is a significant contribution to the success of old Jawi manuscript maintenance and recognition systems. The Background Subtraction technique has proved to be the best algorithm when evaluated on historical document images. The proposed technique improves this algorithm by incorporating autonomous decision making, which makes the binarization technique scale invariant. Prefiltering and post-processing further enhance the algorithm's ability to remove noise from the documents. In the post-binarization stage, a technique that separates characters with holes from characters without holes is introduced so that different morphological operations can be applied to each class. This method enhances the connections between broken characters while preserving the originality of the document. A noise model, built from several predefined criteria, was developed to test the reliability of the proposed algorithm. The algorithms were implemented in Matlab version 6.5 and tested on both simulated and real data. The Background Subtraction technique and the proposed method were compared by manual inspection and by mathematical evaluation using the Relative Foreground Area Error. The results show that the proposed method performs better and renders historical Jawi characters more presentably. The system is not only applicable to historical Jawi characters; it can easily be adapted to historical characters in other languages.
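As a hedged sketch of the kind of pipeline this abstract describes (background-subtraction binarization, morphological repair of broken strokes, and evaluation with the Relative Foreground Area Error), the Python/OpenCV fragment below is illustrative only: the function names and parameter values are assumptions, and the thesis's separate handling of characters with and without holes is omitted.

```python
import cv2
import numpy as np

def binarize_by_background_subtraction(gray, blur_ksize=51):
    """Estimate the page background with a large median filter,
    subtract it, then threshold the difference with Otsu's method."""
    background = cv2.medianBlur(gray, blur_ksize)
    diff = cv2.absdiff(background, gray)
    _, binary = cv2.threshold(diff, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def reconnect_strokes(binary, ksize=3):
    """Morphological closing to bridge small breaks in characters."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

def relative_foreground_area_error(reference, result):
    """Relative Foreground Area Error between two binary images."""
    a0 = np.count_nonzero(reference)  # ground-truth foreground area
    at = np.count_nonzero(result)     # thresholded foreground area
    return (a0 - at) / a0 if at <= a0 else (at - a0) / at
```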

    Particle Filtering Methods for Subcellular Motion Analysis

    Advances in fluorescent probing and microscopic imaging technology have revolutionized biology in the past decade and have opened the door to studying subcellular dynamical processes. However, accurate and reproducible methods for processing and analyzing the images acquired for such studies are still lacking. Since manual image analysis is time-consuming, potentially inaccurate, and poorly reproducible, many biologically highly relevant questions are either left unaddressed or are answered with great uncertainty. The subject of this thesis is particle filtering (PF) methods and their application to multiple object tracking in different biological imaging applications. Particle filtering is a technique for implementing recursive Bayesian filtering by Monte Carlo sampling. A fundamental concept behind the Bayesian approach to inference is the possibility of encoding the information about the imaging system, possible noise sources, and the system dynamics in terms of probability density functions. In this thesis, a set of novel PF-based methods is presented.
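To make the recursive Bayesian filtering idea concrete, here is a minimal bootstrap particle filter step under assumed models (random-walk dynamics and a user-supplied measurement likelihood); the names and the resampling threshold are illustrative, not from the thesis.

```python
import numpy as np

def particle_filter_step(particles, weights, motion_std, measurement,
                         likelihood):
    # Predict: propagate particles through a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std,
                                             size=particles.shape)
    # Update: reweight each particle by the measurement likelihood.
    weights = weights * likelihood(measurement, particles)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses (degeneracy).
    ess = 1.0 / np.sum(weights ** 2)
    n = len(weights)
    if ess < 0.5 * n:
        idx = np.random.choice(n, size=n, p=weights)
        particles = particles[idx]
        weights = np.full(n, 1.0 / n)
    return particles, weights
```

A tracker would alternate this predict/update/resample step over frames, reading out a state estimate such as `np.average(particles, weights=weights, axis=0)`.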

    Contactless Fingerprint Biometrics: Acquisition, Processing, and Privacy Protection

    Biometrics is defined by the International Organization for Standardization (ISO) as "the automated recognition of individuals based on their behavioral and biological characteristics". Examples of the distinctive features evaluated by biometrics, called biometric traits, are behavioral characteristics such as the signature, gait, voice, and keystroke, and biological characteristics such as the fingerprint, face, iris, retina, hand geometry, palmprint, ear, and DNA. Biometric recognition is the process of establishing the identity of a person, and can be performed in two modalities: verification and identification. The verification modality evaluates whether the identity claimed by an individual corresponds to the acquired biometric data. In the identification modality, by contrast, the recognition application must determine a person's identity by comparing the acquired biometric data with the information related to a set of individuals. Compared with traditional techniques used to establish the identity of a person, biometrics offers greater confidence that the authenticated individual is not being impersonated by someone else. Traditional techniques, in fact, are based on surrogate representations of identity, such as tokens, smart cards, and passwords, which can be stolen or copied far more easily than biometric traits. This characteristic has permitted a wide diffusion of biometrics in different scenarios, such as physical access control, government applications, forensic applications, and logical access control to data, networks, and services. Most biometric applications, also called biometric systems, require the acquisition process to be performed in a highly controlled and cooperative manner. To obtain good-quality biometric samples, the acquisition procedures of these systems require users to perform deliberate actions, assume predetermined poses, and stay still for a period of time. There may also be limitations on the applicative scenarios, for example the need for specific lighting and environmental conditions. Examples of biometric technologies that traditionally require constrained acquisitions are those based on face, iris, fingerprint, and hand characteristics. Traditional face recognition systems need users to take a neutral pose and stay still for a period of time; moreover, the acquisitions use a frontal camera and are performed in controlled lighting conditions. Iris acquisitions are usually performed at a distance of less than 30 cm from the camera and require the user to assume a defined pose and stay still while watching the camera; moreover, they use near-infrared illumination, which can be perceived as dangerous to the health. Fingerprint recognition systems and systems based on hand characteristics require users to touch the sensor surface while applying a proper and uniform pressure. The contact with the sensor is often perceived as unhygienic and/or associated with a police procedure. Such constrained acquisition techniques can drastically reduce the usability and social acceptance of biometric technologies, thereby decreasing the number of possible applicative contexts in which biometric systems could be used.
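As a hedged illustration of the two recognition modalities just described, the sketch below contrasts 1:1 verification with 1:N identification; `matcher`, `gallery`, and `threshold` are illustrative names, not from the thesis.

```python
def verify(probe, enrolled_template, matcher, threshold):
    """1:1 verification: accept the claimed identity only if the match
    score against that identity's template passes a threshold."""
    return matcher(probe, enrolled_template) >= threshold

def identify(probe, gallery, matcher, threshold):
    """1:N identification: return the best-matching identity in the
    gallery, or None if no score passes the threshold (open set)."""
    scores = {name: matcher(probe, tpl) for name, tpl in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```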
In traditional fingerprint recognition systems, usability and user acceptance are not the only drawbacks of the acquisition procedures used: the contact of the finger with the sensor platen introduces a security weakness by leaving a latent fingerprint on the touched surface, dirt on the surface of the finger can reduce the accuracy of the recognition process, and different pressures applied to the sensor platen can introduce non-linear distortions and low-contrast regions in the captured samples. Other crucial aspects that influence the social acceptance of biometric systems are associated with privacy and with the risks of misuse of the biometric information acquired, stored, and transmitted by the systems. One of the most important perceived risks is that people consider the acquisition of biometric traits to be an exact, permanent record of their activities and behaviors, and the idea that biometric systems can guarantee 100% recognition accuracy is very common. Other perceived risks include the use of the collected biometric data for malicious purposes, for tracing all the activities of individuals, or for operating proscription lists. To increase the usability and social acceptance of biometric systems, researchers are studying less-constrained biometric recognition techniques based on different biometric traits, for example face recognition systems in surveillance applications, iris recognition techniques based on images captured at a great distance and on the move, and contactless technologies based on fingerprint and hand characteristics. Other recent studies aim to reduce the real and perceived privacy risks and consequently increase the social acceptance of biometric technologies. In this context, many studies address methods that perform the identity comparison in the encrypted domain in order to prevent possible thefts and misuses of biometric data. The objective of this thesis is to research approaches able to increase the usability and social acceptance of biometric systems by performing less-constrained and highly accurate biometric recognition in a privacy-compliant manner. In particular, approaches designed for high-security contexts are studied in order to improve the existing technologies adopted in border control, investigative, and governmental applications. Approaches based on low-cost hardware configurations are also researched with the aim of increasing the number of possible applicative scenarios of biometric systems. Privacy compliance is considered a crucial aspect in all the studied applications. The fingerprint is specifically considered in this thesis because this biometric trait is characterized by high distinctiveness and durability, is the most widely studied trait in the literature, and is adopted in a wide range of applicative contexts. The studied contactless biometric systems are based on one or more CCD cameras, can use two-dimensional or three-dimensional samples, and include privacy protection methods. The main goal of these systems is to perform accurate and privacy-compliant recognition in less-constrained applicative contexts than traditional fingerprint biometric systems. Other important goals are the use of a wider fingerprint area than traditional techniques, compatibility with existing databases, usability, social acceptance, and scalability.
The main contribution of this thesis is the realization of novel biometric systems based on contactless fingerprint acquisitions. In particular, different techniques for every step of the recognition process, based on two-dimensional and three-dimensional samples, have been researched. Novel techniques for the privacy protection of fingerprint data have also been designed. The studied approaches are multidisciplinary, since their design and realization involved optical acquisition systems, multiple-view geometry, image processing, pattern recognition, computational intelligence, statistics, and cryptography. The implemented biometric systems and algorithms have been applied to different biometric datasets describing a heterogeneous set of applicative scenarios. The results prove the feasibility of the studied approaches. In particular, the realized contactless biometric systems have been compared with traditional fingerprint recognition systems, obtaining positive results in terms of accuracy, usability, user acceptability, scalability, and security. Moreover, the developed techniques for the privacy protection of fingerprint biometric systems showed satisfactory performance in terms of security, accuracy, speed, and memory usage.
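The abstract does not specify the thesis's own template-protection methods, so as a generic illustration of matching in a protected domain, here is a minimal BioHashing-style sketch from the wider literature (a user-specific random projection followed by binarization, with comparison only between protected codes); all names and parameters are illustrative, not the thesis's technique.

```python
import numpy as np

def biohash(features, user_seed, n_bits=128):
    """Project a real-valued feature vector onto a user-specific random
    orthonormal basis and binarize; assumes len(features) >= n_bits."""
    rng = np.random.default_rng(user_seed)
    proj, _ = np.linalg.qr(rng.standard_normal((features.size, n_bits)))
    return (features @ proj > 0).astype(np.uint8)

def protected_match(code_a, code_b):
    """Normalized Hamming similarity between protected templates."""
    return 1.0 - np.count_nonzero(code_a ^ code_b) / code_a.size
```

Because the projection is keyed by `user_seed`, a compromised template can be revoked by re-enrolling with a new seed, which is the main appeal of cancelable schemes of this kind.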

    Quantitative Analysis of Ultrasound Images of the Preterm Brain

    In this PhD thesis, new algorithms are proposed to better understand and diagnose white matter damage in the preterm brain. Since ultrasound imaging is the most suitable modality for the inspection of brain pathologies in very-low-birth-weight infants, we propose multiple techniques to assist in what is called computer-aided diagnosis. As a main result, we are able to raise detectability from 70% with qualitative diagnosis to 98% with quantitative analysis.

    Edge-preserving prefiltering for document image binarization

    IEEE International Conference on Image Processing, vol. 1, pp. 1070-1073

    Application of sound source separation methods to advanced spatial audio systems

    This thesis is related to the field of Sound Source Separation (SSS). It addresses the development and evaluation of these techniques for their application to the resynthesis of high-realism sound scenes by means of Wave Field Synthesis (WFS). Because the vast majority of audio recordings are preserved in two-channel stereo format, special up-converters are required to use advanced spatial audio reproduction formats such as WFS. This is because WFS needs the original source signals to be available in order to accurately synthesize the acoustic field inside an extended listening area; thus, an object-based mix is required. Source separation problems in digital signal processing are those in which several signals have been mixed together and the objective is to find out what the original signals were. Therefore, SSS algorithms can be applied to existing two-channel mixtures to extract the different objects that compose the stereo scene. Unfortunately, most stereo mixtures are underdetermined, i.e., there are more sound sources than audio channels. This condition makes the SSS problem especially difficult, and stronger assumptions have to be made, often related to the sparsity of the sources under some signal transformation. This thesis focuses on the application of SSS techniques to the spatial sound reproduction field, and its contributions can be categorized within these two areas. First, two underdetermined SSS methods are proposed to deal efficiently with the separation of stereo sound mixtures. These techniques are based on a multi-level thresholding segmentation approach, which enables fast and unsupervised separation of sound sources in the time-frequency domain. Although both techniques rely on the same type of clustering, the features considered by each of them are related to different localization cues, enabling the separation of either instantaneous or real mixtures. Additionally, two post-processing techniques aimed at improving the isolation of the separated sources are proposed. The performance achieved by several SSS methods in the resynthesis of WFS sound scenes is then evaluated by means of listening tests, paying special attention to the change observed in the perceived spatial attributes. Although the estimated sources are distorted versions of the original ones, the masking effects involved in their spatial remixing make artifacts less perceptible, which improves the overall assessed quality. Finally, some novel developments related to the application of time-frequency processing to source localization and enhanced sound reproduction are presented.
    Cobos Serrano, M. (2009). Application of sound source separation methods to advanced spatial audio systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8969
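As a hedged sketch of time-frequency mask-based stereo separation driven by a localization cue, the fragment below clusters a per-bin panning feature and resynthesizes one track per cluster; plain k-means stands in for the thesis's multi-level thresholding segmentation, and all function and parameter names are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft
from sklearn.cluster import KMeans

def separate_stereo(left, right, fs, n_sources=3, nperseg=2048):
    """Split a stereo mixture into n_sources tracks via binary T-F masks."""
    _, _, L = stft(left, fs, nperseg=nperseg)
    _, _, R = stft(right, fs, nperseg=nperseg)
    # Panning cue: per-bin ratio of channel magnitudes, in [0, 1].
    pan = np.abs(R) / (np.abs(L) + np.abs(R) + 1e-12)
    # Cluster the cue into n_sources groups (k-means stands in for the
    # thesis's multi-level thresholding segmentation).
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(
        pan.reshape(-1, 1)).reshape(pan.shape)
    sources = []
    for k in range(n_sources):
        mask = labels == k  # binary time-frequency mask for source k
        _, y = istft(mask * (L + R), fs, nperseg=nperseg)  # masked downmix
        sources.append(y)
    return sources
```

Binary masking assumes the sources are sparse and rarely overlap in the time-frequency plane, which is the same sparsity assumption discussed in the abstract.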

    Edge-preserving prefiltering for document image binarization

    We propose a novel coplanar filter, which exploits the coplanarity of the gray-level distribution of neighboring pixels, to prefilter document images. Experiments show that the proposed filter exhibits the following properties, which are desirable for document image binarization: (1) impulsive noise removal, (2) piecewise smoothing, and (3) sharp edge preservation.
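The abstract does not spell out the filter itself, so the sketch below is one plausible reading of the coplanarity idea, not the authors' exact method: fit a plane to each pixel's gray-level neighborhood by least squares and smooth only where the fit is good, so impulsive noise and flat regions are smoothed while sharp edges (poorly planar neighborhoods) are preserved. All names and the tolerance value are assumptions.

```python
import numpy as np

def coplanar_prefilter(img, radius=2, residual_tol=8.0):
    """Smooth pixels whose neighborhood is well modeled by a plane."""
    src = img.astype(float)
    out = src.copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Design matrix for the plane z = a*x + b*y + c over the window.
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    h, w = src.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            z = src[y - radius:y + radius + 1,
                    x - radius:x + radius + 1].ravel()
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)
            rms = np.sqrt(np.mean((A @ coef - z) ** 2))
            # Replace the pixel only where the window is nearly planar,
            # so sharp edges (poor plane fits) are left untouched.
            if rms < residual_tol:
                out[y, x] = coef[2]  # fitted value at the window center
    return out
```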