
    An Approach to the Detection of Retinoblastoma based on Apriori Algorithm

    Retinoblastoma is a rare cancer, typically signalled by leukocoria (a white pupillary reflex), that develops rapidly from the immature cells of the retina, the light-detecting tissue of the eye. It is the most common malignant cancer of the eye in young children. Early detection of leukocoria can shorten the overall treatment duration. Interest is growing in medical systems that can screen large numbers of people for sight-threatening diseases such as retinoblastoma and diabetic retinopathy. We developed an image processing application for the detection of retinoblastoma that exploits a graph-theory-based Apriori algorithm as a novel approach, together with several image processing techniques. The application reviews the image in several phases and identifies the region of interest of the threatened area in the retina. The software is implemented in MATLAB, with a graphical user interface for smooth progression through the identification stages of the disease.
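    The abstract does not detail the graph-theoretic variant used, but the core of any Apriori-style method is frequent itemset mining with candidate generation and pruning. A minimal sketch in Python (the thesis itself uses MATLAB; the transaction data and function name here are illustrative only):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every itemset whose support meets min_support, with its support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    # level 1: frequent single items
    items = {frozenset([i]) for t in transactions for i in t}
    current = {s for s in items if support(s) >= min_support}
    frequent, k = {}, 1
    while current:
        frequent.update({s: support(s) for s in current})
        k += 1
        # join step: merge frequent (k-1)-itemsets into k-item candidates
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # prune step: every (k-1)-subset must itself be frequent
        current = {c for c in candidates
                   if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))
                   and support(c) >= min_support}
    return frequent
```

    In an imaging pipeline, the "transactions" would presumably be derived from image regions (e.g., sets of co-occurring pixel or texture labels), which is where a graph-theoretic formulation could enter.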

    Multispectral scleral patterns for ocular biometric recognition

    Biometrics is the science of recognizing people based on their physical or behavioral traits, such as face, fingerprints, iris, and voice. Among the various traits studied in the literature, ocular biometrics has gained popularity due to the significant progress made in iris recognition. However, iris recognition is adversely affected when the gaze direction of the eye is non-frontal with respect to the acquisition device. In such scenarios, additional parts of the eye, such as the sclera (the white of the eye), may be of significance. In this dissertation, we investigate the use of the sclera texture and the vasculature patterns evident in the sclera as potential biometric cues. Iris patterns are better discerned in the near-infrared (NIR) spectrum, while vasculature patterns are better discerned in the visible (RGB) spectrum. Therefore, multispectral images of the eye, consisting of both NIR and RGB channels, were used in this work to ensure that both the iris and the vasculature patterns are successfully imaged.

    The contributions of this work are as follows. First, a multispectral ocular database was assembled by collecting high-resolution color-infrared images of the left and right eyes of 103 subjects using the DuncanTech MS 3100 multispectral camera. Second, a novel segmentation algorithm was designed to localize the spatial extent of the iris, sclera, and pupil in the ocular images; the proposed algorithm combines region-based and edge-based schemes and exploits the multispectral information. Third, different feature extraction and matching methods were used to determine the potential of utilizing the sclera and the accompanying vasculature pattern as biometric cues. The three matching methods considered were keypoint-based matching, direct correlation matching, and minutiae matching based on blood-vessel bifurcations. Fourth, the potential of designing a bimodal ocular system that combines the sclera biometric with the iris biometric was explored.

    Experiments convey the efficacy of the proposed segmentation algorithm in localizing the sclera and the iris. Keypoint-based matching was observed to give the best recognition performance for the scleral patterns. Finally, the possibility of utilizing the scleral patterns in conjunction with the iris for recognizing ocular images exhibiting non-frontal gaze directions was established.
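    Of the three matching methods, direct correlation matching is the simplest to illustrate: normalize two aligned patches and average their product, searching over small translations to absorb segmentation jitter. A sketch under those assumptions (patch size, shift range, and function names are illustrative, not taken from the dissertation; a real matcher would crop rather than wrap at the borders):

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation between two equal-size patches
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def correlate_with_shifts(probe, gallery, max_shift=2):
    # best NCC over small translations of the gallery patch
    best = -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(gallery, dy, axis=0), dx, axis=1)
            best = max(best, ncc(probe, shifted))
    return best
```

    Genuine comparisons score near 1, impostor comparisons near 0, so a single threshold on the score yields an accept/reject decision.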

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details being on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry, while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those produced by mobile scanning systems. The challenges faced in this study were the automatic processing needs of the enhancement algorithm and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The totality of the research has been applied towards detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
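    The decomposition tool itself is the thesis's contribution, but the base/detail split it enables can be sketched in 1D: separate a smooth base layer from a detail layer, suppress detail below the noise floor, and amplify detail clearly above it. The moving-average base, threshold, and gains below are illustrative stand-ins for the actual 3D analysis tool, not a description of it:

```python
import numpy as np

def detail_enhancing_denoise(signal, window=5, noise_sigma=0.1, boost=1.5):
    # base layer: a simple moving average (stand-in for the 3D decomposition)
    kernel = np.ones(window) / window
    base = np.convolve(signal, kernel, mode="same")
    detail = signal - base
    # keep and boost detail clearly above the noise floor; attenuate the rest
    is_detail = np.abs(detail) > 2 * noise_sigma
    return base + np.where(is_detail, boost * detail, 0.2 * detail)
```

    The design choice this illustrates is the one the abstract describes: rather than smoothing uniformly (which blurs details on the same order of magnitude as the noise), the signal is classified into detail and noise regions and each is treated differently.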

    Iris Recognition: Robust Processing, Synthesis, Performance Evaluation and Applications

    The popularity of iris biometrics has grown considerably over the past few years, resulting in the development of a large number of new iris processing and encoding algorithms. In this dissertation, we discuss the following aspects of the iris recognition problem: iris image acquisition, iris quality, iris segmentation, iris encoding, performance enhancement, and two novel applications.

    The specific claimed novelties of this dissertation are: (1) a method to generate a large-scale realistic database of iris images; (2) a cross-spectral iris matching method for comparing images captured in the visible range against images captured in the near-infrared (NIR) range; (3) a method to evaluate iris image and video quality; (4) a robust quality-based iris segmentation method; (5) several approaches to enhance the recognition performance and security of traditional iris encoding techniques; (6) a method to increase the iris capture volume for acquisition of the iris on the move and from a distance; and (7) a method to improve the performance of biometric systems using available soft data in the form of links and connections in a relevant social network.
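    Several of the listed contributions build on the standard Gabor-phase style of iris encoding, which can be sketched compactly: filter the unwrapped iris strip with a complex Gabor kernel, quantize the response phase into two bits per location, and compare codes by fractional Hamming distance. The kernel parameters and function names below are illustrative, not the dissertation's:

```python
import numpy as np

def gabor_kernel(length=9, freq=0.25, sigma=2.0):
    # 1D complex Gabor: Gaussian-windowed complex exponential
    x = np.arange(length) - length // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * x)

def iris_code(strip, kernel):
    # strip: 2D array, each row an angular sample of the unwrapped iris
    resp = np.array([np.convolve(row, kernel, mode="same") for row in strip])
    # two bits per location: sign of the real and imaginary responses
    return np.concatenate([(resp.real > 0).ravel(), (resp.imag > 0).ravel()])

def hamming(code_a, code_b):
    return float(np.mean(code_a != code_b))
```

    Genuine comparisons cluster near a Hamming distance of 0, impostor comparisons near 0.5, which is what makes a single decision threshold workable.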

    Investigation of 3D Body Shapes and Robot Control Algorithms for a Virtual Fitting Room

    The electronic version of this dissertation does not contain the publications.

    Virtual fitting constitutes a fundamental element of the developments expected to raise the commercial prosperity of online garment retailers to a new level, as it is expected to reduce the manual labor and physical effort required during the fitting phase. Nevertheless, most previously proposed computer vision and graphics methods have failed to model the human body accurately and realistically, especially when it comes to 3D modeling of the whole body, which requires large amounts of data and computational resources. The failure is caused mainly by an inability to properly account for simultaneous variations in the body surface. In addition, most of the foregoing techniques cannot render realistic movement representations in real time. This project intends to overcome the aforementioned shortcomings so as to satisfy the requirements of a virtual fitting room. The proposed methodology consists of scanning and analyzing both the user's body and the prospective garment, modeling them, extracting measurements and assigning reference points, segmenting the 3D visual data imported from the mannequins, and finally superimposing, adapting, and depicting the resulting garment model on the user's body. During the project, visual data were gathered using a 3D laser scanner and the Kinect optical camera and organized into a usable database, which was used to test the algorithms devised. These algorithms mainly provide a realistic visual representation of the garment on the body and enhance the size-advisor system in the context of the virtual fitting room under study.
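    One concrete ingredient of a size-advisor system is extracting measurements from the scan. A sketch of estimating a body circumference from a horizontal slice of the point cloud (the function name, tolerance, and the assumption of a dense, roughly convex slice are mine, not taken from the thesis):

```python
import numpy as np

def slice_circumference(points, height, tol=0.01):
    # points: (N, 3) scan in metres; keep points near the given height
    band = points[np.abs(points[:, 2] - height) < tol][:, :2]
    # order the slice points by angle around their centroid
    d = band - band.mean(axis=0)
    loop = band[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]
    # circumference ~ sum of edge lengths around the closed polygon
    edges = np.roll(loop, -1, axis=0) - loop
    return float(np.sum(np.linalg.norm(edges, axis=1)))
```

    Repeating this at chest, waist, and hip heights gives the handful of measurements a size chart needs; angular ordering fails for strongly non-convex slices, where a proper contour trace would be required.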

    Using the 3D shape of the nose for biometric authentication


    Multibiometric security in wireless communication systems

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 05/08/2010.

    This thesis explores an application of multibiometrics to secure wireless communications. The media studied for this purpose were Wi-Fi, 3G, and WiMAX, over which simulations and experimental studies were carried out to assess performance. Specifically, restriction of access to authorized users only is provided by a technique referred to hereafter as a multibiometric cryptosystem. In brief, the system is built upon a complete challenge/response methodology in order to obtain a high level of security, on the basis of user identification by fingerprint and further confirmation through text-dependent speaker recognition. First is the enrolment phase, in which a database of fingerprints watermarked with memorable texts, along with voice features based on the same texts, is created by sending them to the server over the wireless channel. Then comes the verification stage, in which claimed users (those asserting a genuine identity) are verified against the database; it consists of five steps. At the identification level, a user first presents a fingerprint and a memorable word, the latter watermarked into the former, so that the system can authenticate the fingerprint, verify its validity, and retrieve the challenge for an accepted user. The following three steps involve speaker recognition: the user responds to the challenge with text-dependent speech, the server authenticates the response, and finally the server accepts or rejects the user. To implement fingerprint watermarking, i.e. incorporating the memorable word as a watermark message into the fingerprint image, a five-step algorithm was developed. The first three novel steps, concerned with fingerprint image enhancement (CLAHE with a clip limit, standard-deviation analysis, and sliding-neighborhood processing), are followed by two further steps for embedding and extracting the watermark in the enhanced fingerprint image using the Discrete Wavelet Transform (DWT). In the speaker recognition stage, the limitations of this technique over wireless channels were addressed by sending voice features (cepstral coefficients) instead of raw samples. This scheme reaps the advantages of reduced transmission time and reduced dependency of the data on the communication channel, together with no packet loss. Finally, the obtained results verified these claims.
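    The embedding and extraction steps can be illustrated with a single-level Haar DWT and quantization-based embedding in the diagonal (HH) subband. The thesis does not specify its embedding rule; the quantization-index scheme, subband choice, and step size below are assumptions made for the sketch:

```python
import numpy as np

def haar_dwt2(img):
    # one-level 2D Haar transform: returns LL, LH, HL, HH subbands
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return (a + b + c + d) / 4, (a + b - c - d) / 4, (a - b + c - d) / 4, (a - b - c + d) / 4

def haar_idwt2(ll, lh, hl, hh):
    # exact inverse of haar_dwt2
    img = np.zeros((2 * ll.shape[0], 2 * ll.shape[1]))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

def embed(img, bits, delta=8.0):
    # force each HH coefficient onto an even/odd quantization bin per bit
    ll, lh, hl, hh = haar_dwt2(img.astype(float))
    flat = hh.flatten()
    for i, bit in enumerate(bits):
        q = int(np.floor(flat[i] / delta))
        if q % 2 != bit:
            q += 1
        flat[i] = q * delta + delta / 2  # bin centre, so extraction is robust
    return haar_idwt2(ll, lh, hl, flat.reshape(hh.shape))

def extract(img, n_bits, delta=8.0):
    hh = haar_dwt2(img.astype(float))[3]
    return [int(np.floor(c / delta)) % 2 for c in hh.flatten()[:n_bits]]
```

    Embedding in a detail subband keeps the visible distortion small, which matters here because the watermarked fingerprint must still match well biometrically.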

    Establishing the digital chain of evidence in biometric systems

    Traditionally, a chain of evidence or chain of custody refers to the chronological documentation, or paper trail, showing the seizure, custody, control, transfer, analysis, and disposition of evidence, physical or electronic. Whether in the criminal justice system, military applications, or natural disasters, ensuring the accuracy and integrity of such chains is of paramount importance. Intentional or unintentional alteration, tampering, or fabrication of digital evidence can lead to undesirable effects. We find that, despite the consequences at stake, historically no unique protocol or standardized procedure has existed for establishing such chains. Current practices rely on traditional paper trails and handwritten signatures as the foundation of chains of evidence.

    Copying, fabricating, or deleting electronic data is easier than ever, and establishing equivalent digital chains of evidence has become both necessary and desirable. We propose to treat a chain of digital evidence as a multi-component validation problem that ensures the security of access control, confidentiality, integrity, and non-repudiation of origin. Our framework includes techniques from cryptography, keystroke analysis, digital watermarking, and hardware source identification. The work offers contributions to many of the fields used in the formation of the framework. Related to biometric watermarking, we provide a means of watermarking iris images without significantly impacting biometric performance. Specific to hardware fingerprinting, we establish the ability to verify the source of an image captured by biometric sensing devices such as fingerprint sensors and iris cameras. Related to keystroke dynamics, we establish that user stimulus familiarity is a driver of classification performance. Finally, example applications of the framework are demonstrated with data collected in crime scene investigations, people-screening activities at ports of entry, naval maritime interdiction operations, and mass-fatality incident disaster responses.
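    The integrity and non-repudiation side of such a framework is often realized as a tamper-evident, hash-linked log of custody events, in which each record commits to the digest of its predecessor. A minimal sketch (record fields and names are mine; the dissertation's actual protocol additionally involves watermarking, keystroke analysis, and sensor fingerprinting):

```python
import hashlib
import json

def append_record(chain, event):
    # link each custody event to the digest of the previous record
    prev = chain[-1]["digest"] if chain else "0" * 64
    payload = prev + json.dumps(event, sort_keys=True)
    chain.append({"event": event, "prev": prev,
                  "digest": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    # recompute every digest; any edit to an earlier record breaks the chain
    prev = "0" * 64
    for rec in chain:
        payload = prev + json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev or rec["digest"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["digest"]
    return True
```

    Because each digest covers the previous one, altering any record invalidates every subsequent link, which is the digital analogue of a signed, sequential paper trail.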
