
    Method for estimating potential recognition capacity of texture-based biometrics

    When adopting an image-based biometric system, an important factor for consideration is its potential recognition capacity, since this not only defines the number of individuals likely to be identifiable but also serves as a useful figure of merit for performance. Based on block transform coding, commonly used for image compression, this study presents a method for coarse estimation of the potential recognition capacity of texture-based biometrics. Essentially, each image block is treated as a constituent biometric component, and the image texture contained in each block is binary coded to represent the corresponding texture class. The statistical variability among the binary values assigned to corresponding blocks is then exploited to estimate the potential recognition capacity. In particular, methodologies are proposed to determine an appropriate image partition based on the separation between texture classes, and the informativeness of an image block based on statistical randomness. By applying the proposed method to a commercial fingerprint system and a bespoke hand vein system, the potential recognition capacity is estimated to be around 10^36 for a fingerprint area of 25 mm^2, which is in good agreement with previously reported estimates, and around 10^15 for a hand vein area of 2268 mm^2, which has not been reported before.
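    The abstract does not give the coding scheme itself, but the overall recipe (partition into blocks, binary-code each block's texture, measure per-block variability across a population) can be sketched as follows. This is a minimal illustration with an assumed one-bit texture quantizer and an assumed informativeness threshold; all names are illustrative rather than taken from the paper.

    ```python
    import numpy as np

    def texture_codes(image, block=16):
        """Illustrative texture quantizer: code each block by thresholding
        its mean intensity against the global mean (1 bit per block).
        The paper's actual coding is based on block transform coding;
        this stand-in only mirrors the overall structure."""
        h, w = image.shape
        codes = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = image[y:y + block, x:x + block]
                codes.append(int(patch.mean() > image.mean()))
        return np.array(codes)

    def capacity_estimate(images, block=16):
        """Estimate recognition capacity from the per-block bit entropy
        observed across a sample of images: blocks whose bit is nearly
        constant carry no identity information and are discarded."""
        bits = np.stack([texture_codes(im, block) for im in images])
        p = bits.mean(axis=0)                      # P(bit = 1) per block
        eps = 1e-12
        entropy = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
        informative = entropy > 0.5                # assumed threshold
        return 2.0 ** entropy[informative].sum()   # ~distinguishable patterns

    # Example: random "images" as a stand-in for real biometric samples.
    rng = np.random.default_rng(0)
    sample = [rng.integers(0, 256, (128, 128)).astype(float) for _ in range(50)]
    print(f"estimated capacity ~ 10^{np.log10(capacity_estimate(sample)):.1f}")
    ```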

    The use of low cost virtual reality and digital technology to aid forensic scene interpretation and recording

    Crime scenes are often short-lived, and the opportunity to acquire sufficient information before the scene is disturbed must not be lost. With the growth of information technology (IT) in many other scientific fields, there are also substantial opportunities for IT in forensic science. This thesis explored means by which IT can assist and benefit the ways forensic information is illustrated and elucidated in a logical manner. The central research hypothesis is that, through the use of low-cost IT, the visual presentation of information will be of significant benefit to forensic science, in particular for the recording of crime scenes and their presentation in court. The hypothesis was addressed by first examining current crime scene documentation techniques and their strengths and weaknesses, indicating the possible niche that technology could occupy within forensic science. The underlying principles of panoramic technology were examined, highlighting its ability to express spatial information efficiently. Through literature review and case studies, the current status of the technology within the forensic community and courtrooms was also explored to gauge its possible acceptance as a forensic tool. This led to the construction of a low-cost, semi-automated imaging system capable of capturing the images needed to form a panorama, providing the ability to pan around and effectively placing the viewer at the crime scene. Evaluation and analysis involving forensic personnel assessed the capabilities and effectiveness of the imaging system as a forensic tool. The imaging system was found to enhance the repertoire of techniques available for crime scene documentation, possessing sufficient capabilities and benefits to warrant its use in forensics, thereby supporting the central hypothesis.
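    The thesis's semi-automated capture rig is not reproduced here; as a rough illustration of the panorama-formation step it feeds, OpenCV's high-level stitcher can assemble overlapping photographs taken while panning from a fixed viewpoint. File names below are placeholders.

    ```python
    import cv2

    # Load an ordered set of overlapping photographs (paths are illustrative).
    paths = ["scene_01.jpg", "scene_02.jpg", "scene_03.jpg"]
    images = [cv2.imread(p) for p in paths]

    # OpenCV's Stitcher detects features, matches overlaps, and blends the
    # frames; PANORAMA mode assumes rotation about a fixed viewpoint,
    # matching the pan-around viewing described above.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("scene_panorama.jpg", pano)
    else:
        print(f"stitching failed with status {status}")
    ```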

    dugMatting: Decomposed-Uncertainty-Guided Matting

    Cutting out an object and estimating its opacity mask, known as image matting, is a key task in image and video editing. Because the problem is highly ill-posed, additional inputs, typically user-defined trimaps or scribbles, are usually needed to reduce the uncertainty. Although effective, these inputs are either time-consuming to provide or suitable only for experienced users who know where to place the strokes. In this work, we propose a decomposed-uncertainty-guided matting (dugMatting) algorithm, which exploits explicitly decomposed uncertainties to improve results efficiently and effectively. Based on the characteristics of these uncertainties, the epistemic uncertainty is reduced through guided interaction (which introduces prior knowledge), while the aleatoric uncertainty is reduced by modeling the data distribution (which introduces statistics for both the data and possible noise). The proposed matting framework relieves users of the need to determine the interaction areas, requiring only simple and efficient labeling. Extensive quantitative and qualitative results validate that the proposed method significantly improves the original matting algorithms in terms of both efficiency and efficacy.
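    The abstract does not spell out how the two uncertainties are obtained. A common decomposition in Bayesian deep learning, sketched below, assumes a model that returns a per-pixel mean and variance and is run over several stochastic forward passes (e.g. MC dropout): epistemic uncertainty is the variance of the means, aleatoric uncertainty the mean of the variances. This is a generic sketch, not dugMatting's exact formulation.

    ```python
    import torch

    def decompose_uncertainty(model, x, n_samples=20):
        """Standard epistemic/aleatoric split via Monte Carlo sampling.
        Assumes `model(x)` returns a per-pixel mean alpha matte and a
        per-pixel predicted variance (an assumption, not the paper's API)."""
        model.train()  # keep dropout active so forward passes are stochastic
        means, variances = [], []
        with torch.no_grad():
            for _ in range(n_samples):
                mu, var = model(x)
                means.append(mu)
                variances.append(var)
        means = torch.stack(means)
        variances = torch.stack(variances)
        epistemic = means.var(dim=0)        # disagreement between passes
        aleatoric = variances.mean(dim=0)   # noise the model itself predicts
        return means.mean(dim=0), epistemic, aleatoric
    ```

    High-epistemic regions are where user interaction pays off most, since labels there inject the prior knowledge the model lacks; that is the intuition behind guiding the interaction with the decomposed map.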

    Information selection and fusion in vision systems

    Handling the enormous amounts of data produced by data-intensive imaging systems, such as multi-camera surveillance systems and microscopes, is technically challenging. While image and video compression help to manage the data volumes, they do not address the basic problem of information overflow. In this PhD we tackle the problem in a more drastic way: we select the information of interest to a specific vision task and discard the rest. We also combine data from different sources into a single output product, which presents the information of interest to end users in a suitable, summarized format. We treat two types of vision systems. The first type is conventional light microscopes. During this PhD, we have exploited for the first time the potential of the curvelet transform for image fusion for depth-of-field extension, allowing us to combine the advantages of multi-resolution image analysis for image fusion with increased directional sensitivity. As a result, the proposed technique clearly outperforms state-of-the-art methods, both on real microscopy data and on artificially generated images. The second type is camera networks with overlapping fields of view. To enable joint processing in such networks, inter-camera communication is essential. Because of infrastructure costs, power consumption for wireless transmission, and similar constraints, transmitting high-bandwidth video streams between cameras should be avoided. Fortunately, recently designed 'smart cameras', which have on-board processing and communication hardware, allow the required image processing to be distributed over the cameras, permitting a compact representation of the useful information from each camera. We focus on representing information for people localization and observation, which are important tools for statistical analysis of room usage, quick localization of people in case of building fires, and similar tasks. To further save bandwidth, we select which cameras should be involved in a vision task and transmit observations only from the selected cameras. We provide an information-theoretically founded framework for general-purpose camera selection based on the Dempster-Shafer theory of evidence. Applied to tracking, it allows tracking people using a dynamic selection of as few as three cameras with the same accuracy as when using up to ten cameras.
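    The selection framework itself is not detailed in the abstract; at its core, Dempster-Shafer evidence from several cameras is fused with Dempster's rule of combination. A minimal sketch of that rule, with a toy two-cell frame of discernment standing in for real camera observations, is given below.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule of combination. Mass functions map frozensets
        (hypotheses over the frame of discernment) to masses summing to 1."""
        combined, conflict = {}, 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass assigned to incompatible pairs
        if conflict >= 1.0:
            raise ValueError("total conflict: sources are incompatible")
        # Renormalize by the non-conflicting mass.
        return {a: v / (1.0 - conflict) for a, v in combined.items()}

    # Two cameras report evidence about which room cell a person occupies.
    A, B = frozenset({"cell_A"}), frozenset({"cell_B"})
    theta = A | B  # the whole frame: "somewhere in A or B"
    cam1 = {A: 0.6, theta: 0.4}
    cam2 = {A: 0.5, B: 0.3, theta: 0.2}
    print(dempster_combine(cam1, cam2))  # belief concentrates on cell_A
    ```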

    Impulse Noise Removal Using Soft-computing

    Image restoration is an important field in numerous real-life applications where image quality matters, such as astronomical imaging, defence applications, medical imaging, and security systems. In practice, image quality is often degraded by acquisition problems; for example, satellite images cannot be captured statically, since both the sensor and the object are moving, and noise is introduced. The image restoration process deals with such corrupted images. A degradation model is used to train filtering techniques for both the noise detection and noise removal phases; this degradation is usually the result of excessive blur or noise. Standard impulse noise injection techniques are applied to standard test images. Early noise removal techniques perform well on simple kinds of noise but have deficiencies in either the detection or the removal process, so our focus is on soft computing techniques, a non-classical algorithmic approach, using artificial neural networks (ANNs). These fuzzy rule-based techniques outperform traditional filtering techniques in terms of edge preservation.
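    As a baseline for the soft-computing methods discussed, a conventional detect-then-replace scheme for impulse (salt-and-pepper) noise can be sketched as follows. The threshold is an assumed parameter for 8-bit intensities; a fuzzy variant would replace the hard threshold with graded membership degrees ("how impulse-like is this pixel?") and blend accordingly.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def remove_impulse_noise(image, threshold=40):
        """Simple detect-then-filter scheme for salt-and-pepper noise:
        a pixel is flagged as an impulse when it deviates strongly from
        its local median, and only flagged pixels are replaced, which
        preserves edges better than filtering every pixel."""
        med = median_filter(image, size=3)
        noisy = np.abs(image.astype(float) - med.astype(float)) > threshold
        restored = image.copy()
        restored[noisy] = med[noisy]
        return restored
    ```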

    Multisource Data Integration in Remote Sensing

    Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled, with full text included. New instruments and sensors are discussed that can provide a large variety of new views of the real world. This huge amount of data has to be combined and integrated into a (computer) model of this world. Multiple sources may give complementary views of the world: consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretation, and can be identified as such. As a consequence, integration results are very reliable and represent a valid source of information for any geographical information system.

    An investigation into 3D printing of osteological remains: the metrology and ethics of virtual anthropology

    Three-dimensional (3D) printed human remains are being used in courtroom demonstrations of evidence within the UK criminal justice system. This presents a potential issue, given that the use of 3D replicas has not yet been empirically tested or validated for use in crime reconstructions. Further, recent movements to critically evaluate the ethics surrounding the presentation of human remains have failed to address the use of 3D printed replica bones. As such, this research addresses the knowledge gap surrounding the accuracy of 3D printed replicas of skeletal elements and investigates how the public feels about their use. Three experimental studies focussed on metrology and found 3D printed replicas to be accurate to within ± 2.0 mm when produced from computed tomography (CT) scans, and to within ± 0.2 mm, or a 0-5% difference, when produced from micro-CT. The potential loss of micromorphological detail was also examined, and quality control steps were found to be key in identifying and mitigating such loss. A fourth experimental study collected data on public opinion regarding the use of 3D printed human remains in courtroom demonstrations. Respondents were broadly positive and considered that prints can be produced ethically by maintaining the dignity of and respect for the decedent. A framework to help assess ethical practices was developed, as well as an adaptable pathway for assessing the quality and accuracy of 3D prints. The findings contribute to an empirical evidence base that can underpin future 3D printed crime reconstructions and provide guidance for creating accurate 3D prints to inform future practice and research.

    3D Deep Learning on Medical Images: A Review

    The rapid advancements in machine learning, graphics processing technologies, and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This was accelerated by the rapid advancements in convolutional neural network (CNN) based architectures, which the medical imaging community adopted to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, give a brief mathematical description of 3D CNNs, and describe the preprocessing steps required before medical images can be fed to a 3D CNN. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and with deep learning models in general) and possible future trends in the field.
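    As a minimal illustration of the 3D convolutions the review covers, the toy volumetric classifier below (architecture chosen for brevity, not taken from the paper) shows how a 3D CNN consumes a preprocessed scan volume.

    ```python
    import torch
    import torch.nn as nn

    class Tiny3DCNN(nn.Module):
        """Toy 3D CNN for volumetric classification (e.g. a CT scan).
        Conv3d slides a kernel over depth, height and width at once,
        which is what distinguishes it from slice-by-slice 2D CNNs."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(16, n_classes)

        def forward(self, x):  # x: (batch, 1, D, H, W)
            h = self.features(x).flatten(1)
            return self.classifier(h)

    # One preprocessed volume: resampled, intensity-normalized, channel-first.
    volume = torch.randn(1, 1, 32, 64, 64)
    logits = Tiny3DCNN()(volume)
    print(logits.shape)  # torch.Size([1, 2])
    ```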