
    Iris: Interactive all-in-one graphical validation of 3D protein model iterations

    Iris validation is a Python package created to represent comprehensive per-residue validation metrics for entire protein chains in a compact, readable, and interactive view. These metrics can either be calculated by Iris or by a third-party program such as MolProbity. We show that those parts of a protein model requiring attention may generate ripples across the metrics on the diagram, immediately catching the modeler's attention. Iris can run as a standalone tool or be plugged into existing structural biology software to display per-chain model quality at a glance, with a particular emphasis on evaluating incremental changes resulting from the iterative nature of model building and refinement. Finally, the integration of Iris into the CCP4i2 graphical user interface is provided as a showcase of its pluggable design.

    Improving Iris Recognition through Quality and Interoperability Metrics

    The ability to identify individuals based on their iris is known as iris recognition. Over the past decade, iris recognition has garnered much attention because of its strong performance in comparison with other mainstream biometrics such as fingerprint and face recognition. Performance of iris recognition systems is driven by application scenario requirements. Standoff distance, subject cooperation, underlying optics, and illumination are a few examples of these requirements, which dictate the nature of the images an iris recognition system has to process. Traditional iris recognition systems, dubbed "stop and stare," operate under highly constrained conditions. This ensures that the captured image is of sufficient quality, so that the subsequent processing stages of segmentation, encoding, and matching are not compromised. When acquisition constraints are relaxed, such as for surveillance or "iris on the move," the fidelity of subsequent processing steps lessens.
    In this dissertation we propose a multi-faceted framework for mitigating the difficulties associated with non-ideal iris images. We develop and investigate a comprehensive iris image quality metric that is predictive of iris matching performance. The metric is composed of photometric measures, such as defocus, motion blur, and illumination, but also contains domain-specific measures such as occlusion and gaze angle. These measures are combined through a fusion rule based on Dempster-Shafer theory. Related to iris segmentation, arguably one of the most important tasks in iris recognition, we develop metrics to evaluate the precision of the pupil and iris boundaries. Furthermore, we illustrate three methods that take advantage of the proposed segmentation metrics to rectify incorrect segmentation boundaries. Finally, we look at the issue of iris image interoperability and demonstrate that techniques from the field of hardware fingerprinting can be utilized to improve iris matching performance when images captured from distinct sensors are involved.
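    The Dempster-Shafer fusion rule mentioned in the abstract can be sketched as follows. This is a minimal illustration of Dempster's rule of combination over a two-hypothesis frame ("good" vs. "bad" image quality); the measure names and mass values are illustrative stand-ins, not figures from the dissertation.

```python
# Dempster's rule of combination for fusing two quality measures.
# Frame of discernment: image quality is "good" or "bad"; "either"
# denotes the full frame (unassigned belief / uncertainty).

def combine(m1, m2):
    """Combine two mass functions over {good, bad, either}."""
    sets = {"good": {"good"}, "bad": {"bad"}, "either": {"good", "bad"}}
    raw = {"good": 0.0, "bad": 0.0, "either": 0.0}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = sets[a] & sets[b]
            if not inter:
                conflict += ma * mb        # contradictory evidence
            elif inter == {"good"}:
                raw["good"] += ma * mb
            elif inter == {"bad"}:
                raw["bad"] += ma * mb
            else:
                raw["either"] += ma * mb
    k = 1.0 - conflict                     # normalization constant
    return {h: v / k for h, v in raw.items()}

# Hypothetical masses from a defocus measure and an occlusion measure.
defocus = {"good": 0.6, "bad": 0.1, "either": 0.3}
occlusion = {"good": 0.5, "bad": 0.2, "either": 0.3}
fused = combine(defocus, occlusion)        # fused masses sum to 1
```

    Each measure contributes a mass function rather than a hard decision, so a measure that is uncertain (high mass on "either") simply dilutes its influence on the fused verdict.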

    A Longitudinal Analysis on the Feasibility of Iris Recognition Performance for Infants 0-2 Years Old

    The focus of this study was to longitudinally evaluate iris recognition for infants between 0 and 2 years of age. Image quality metrics of infant and adult irises acquired on the same iris camera were compared. Matching performance was evaluated for four groups: infants 0 to 6 months, 7 to 12 months, 13 to 24 months, and adults. A mixed linear regression model was used to determine whether infants' genuine similarity scores changed over time. This study found that image quality metrics differed between infants and adults, but in the oldest infant group (13 to 24 months old) the image quality metric scores were more likely to be similar to those of adults. Infants 0 to 6 months old had worse performance at an FMR of 0.01% than infants 7 to 12 months, infants 13 to 24 months, and adults.
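    Reporting performance "at an FMR of 0.01%" means fixing the decision threshold so that at most 0.01% of impostor (non-mated) comparisons are accepted, then reading off how many genuine (mated) comparisons are rejected. The sketch below illustrates that procedure; the similarity scores are synthetic draws, not data from the study.

```python
# Sketch: choose a threshold at a target false match rate (FMR),
# then measure the false non-match rate (FNMR) at that threshold.
# Scores are illustrative (higher = more similar), drawn at random.
import random

random.seed(0)
impostor = [random.gauss(0.2, 0.05) for _ in range(100_000)]  # non-mated
genuine = [random.gauss(0.5, 0.10) for _ in range(10_000)]    # mated

target_fmr = 1e-4                     # FMR of 0.01%
impostor.sort()
# Threshold: the score exceeded by only target_fmr of impostor scores.
idx = int((1.0 - target_fmr) * len(impostor))
threshold = impostor[idx]
# FNMR: fraction of genuine comparisons falling below the threshold.
fnmr = sum(s < threshold for s in genuine) / len(genuine)
```

    Comparing groups "at the same FMR" then reduces to comparing their FNMR values once each group's threshold has been set this way.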

    On Generative Adversarial Network Based Synthetic Iris Presentation Attack And Its Detection

    The human iris is considered a reliable and accurate modality for biometric recognition due to its unique texture information. The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national identification projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various presentation attacks. In this thesis, a novel iris presentation attack using deep learning-based synthetically generated iris images is presented. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, a new framework, named iDCGAN, is proposed for creating realistic-appearing synthetic iris images. An in-depth analysis is performed using quality score distributions of real and synthetically generated iris images to understand the effectiveness of the proposed approach. We also demonstrate that synthetically generated iris images can be used to attack existing iris recognition systems. As synthetically generated iris images can be effectively deployed in iris presentation attacks, it is important to develop accurate iris presentation attack detection algorithms that can distinguish such synthetic iris images from real iris images. For this purpose, a novel structural and textural feature-based iris presentation attack detection framework (DESIST) is proposed. The key emphasis of DESIST is on developing a unified framework for detecting a medley of iris presentation attacks, including synthetic iris images. Experimental evaluations showcase the efficacy of the proposed DESIST framework in detecting synthetic iris presentation attacks.

    Deep Neural Network and Data Augmentation Methodology for off-axis iris segmentation in wearable headsets

    Full text link
    A data augmentation methodology is presented and applied to generate a large dataset of off-axis iris regions and train a low-complexity deep neural network. Although of low complexity, the resulting network achieves a high level of accuracy in iris region segmentation for challenging off-axis eye-patches. Interestingly, this network is also shown to achieve high levels of performance for regular, frontal segmentation of iris regions, comparing favorably with state-of-the-art techniques of significantly higher complexity. Due to its lower complexity, this network is well suited for deployment in embedded applications such as augmented and mixed reality headsets.

    iWarpGAN: Disentangling Identity and Style to Generate Synthetic Iris Images

    Generative Adversarial Networks (GANs) have shown success in approximating complex distributions for synthetic image generation. However, current GAN-based methods for generating biometric images, such as iris images, have certain limitations: (a) the synthetic images often closely resemble images in the training dataset; (b) the generated images lack diversity in terms of the number of unique identities represented; and (c) it is difficult to generate multiple images pertaining to the same identity. To overcome these issues, we propose iWarpGAN, which disentangles identity and style in the context of the iris modality by using two transformation pathways: an Identity Transformation Pathway to generate unique identities from the training set, and a Style Transformation Pathway to extract the style code from a reference image and output an iris image in this style. By concatenating the transformed identity code and reference style code, iWarpGAN generates iris images with both inter- and intra-class variations. The efficacy of the proposed method in generating such iris DeepFakes is evaluated both qualitatively and quantitatively using the ISO/IEC 29794-6 Standard Quality Metrics and the VeriEye iris matcher. Further, the utility of the synthetically generated images is demonstrated by the improved performance of deep learning-based iris matchers whose real training data is augmented with synthetic data.
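    The core generation step described above, concatenating a transformed identity code with a style code extracted from a reference image before decoding, can be sketched as below. The latent sizes, function names, and trivial stand-in "networks" are illustrative assumptions, not the iWarpGAN architecture.

```python
# Illustrative sketch of combining an identity code and a style code
# before decoding into an image. The pathways and decoder here are
# stand-in functions; in iWarpGAN these would be learned networks.
import random

random.seed(0)
ID_DIM, STYLE_DIM = 8, 4    # illustrative latent sizes

def identity_pathway(z):
    """Stand-in for the Identity Transformation Pathway: perturb a
    latent code so it no longer matches a training identity exactly."""
    return [zi + random.gauss(0.0, 0.1) for zi in z]

def style_pathway(reference_image):
    """Stand-in for the Style Transformation Pathway: reduce a
    reference image to a fixed-size style code (crude row averages)."""
    return [sum(row) / len(row) for row in reference_image[:STYLE_DIM]]

def decode(code):
    """Stand-in decoder: would map the concatenated code to an image."""
    return code  # identity mapping, for illustration only

z = [random.gauss(0.0, 1.0) for _ in range(ID_DIM)]
reference = [[random.random() for _ in range(16)] for _ in range(STYLE_DIM)]

identity_code = identity_pathway(z)
style_code = style_pathway(reference)
combined = identity_code + style_code   # concatenate the two codes
image = decode(combined)
```

    Varying the identity code while holding the style code fixed yields inter-class variation; varying the style code for a fixed identity yields intra-class variation.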

    Situating the Next Generation of Impact Measurement and Evaluation for Impact Investing

    In taking stock of the landscape, this paper promotes a convergence of methods, building from both the impact investment and evaluation fields. The commitment of impact investors to strengthen the process of generating evidence for their social returns alongside the evidence for financial returns is a veritable game changer. But social change is a complex business, and good intentions do not necessarily translate into verifiable impact. As the public sector, bilaterals, and multilaterals increasingly partner with impact investors in achieving collective impact goals, the need for strong evidence about impact becomes even more compelling. The time has come to develop new mindsets and approaches that can be widely shared and employed in ways that will advance the frontier for impact measurement and evaluation of impact investing. Each of the menu options presented in this paper can contribute to building evidence about impact. The next generation of measurement will be stronger if the full range of options comes into play and the more evaluative approaches become commonplace as means for developing evidence and testing assumptions about the processes of change from a stakeholder perspective, with a view toward context and systems. Creating and sharing evidence about impact is a key lever for contributing to greater impact, demonstrating additionality, and building confidence among potential investors, partners, and observers in this emergent industry on its path to maturation. Further, the range of measurement options offers opportunities to choose appropriate approaches that will allow data to contribute to impact management: to improve the business model of ventures and to improve services and systems that improve conditions for people and households living in poverty.

    Aligning Capital With Mission: Lessons from the Annie E. Casey Foundation's Social Investment Program

    The Annie E. Casey Foundation engaged InSight at Pacific Community Ventures to conduct the first comprehensive third-party evaluation of the SI Program, with research support from the Center for the Advancement of Social Entrepreneurship (CASE) at Duke University's Fuqua School of Business. The evaluation focused on the social impact of the SI Program and its impact measurement practices, and had the following objectives:
    • Provide a comprehensive review of the social impact that has been achieved to date through the SI Program.
    • Assess the systems and processes used by the SI Program to measure and report on its impact, identifying the SI Program's strengths in impact measurement and areas for improvement.
    • Surface evidence-based findings and lessons that can assist the Foundation and other investors in rigorously examining and enhancing the social impact of their investments, in order to support the continued development of the impact investing field.