4,517 research outputs found

    Privacy-Aware Processing of Biometric Templates by Means of Secure Two-Party Computation

    The use of biometric data for person identification and access control is gaining more and more popularity. Handling biometric data, however, requires particular care, since biometric data is indissolubly tied to the identity of its owner, raising important security and privacy issues. This chapter focuses on the latter, presenting an innovative approach that, by relying on tools borrowed from Secure Two-Party Computation (STPC) theory, makes it possible to process biometric data in encrypted form, thus eliminating any risk that private biometric information is leaked during an identification process. The basic concepts behind STPC are reviewed, together with the basic cryptographic primitives needed to achieve privacy-aware processing of biometric data in an STPC context. The two main approaches proposed so far, namely homomorphic encryption and garbled circuits, are discussed, and the way such techniques can be used to develop a full biometric matching protocol is described. Some general guidelines for the design of a privacy-aware biometric system are given, so as to allow the reader to choose the most appropriate tools for the application at hand.
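
    As a concrete illustration of the homomorphic-encryption approach the abstract mentions, the sketch below computes an encrypted squared Euclidean distance between a probe template and an enrolled template, assuming the python-paillier (`phe`) package; the templates, dimension, and matching threshold are illustrative, not taken from the chapter.

```python
# A minimal sketch of privacy-preserving biometric matching with additively
# homomorphic (Paillier) encryption, assuming the python-paillier ("phe")
# package. Templates and threshold are illustrative placeholders.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Client: encrypt the probe template x (and its squares) and send both.
x = [3, 1, 4, 1, 5]
enc_x = [public_key.encrypt(v) for v in x]
enc_x_sq = [public_key.encrypt(v * v) for v in x]

# Server: holds the enrolled template y in the clear and computes the
# encrypted squared Euclidean distance without ever seeing x:
#   Enc(d) = sum_i ( Enc(x_i^2) - 2*y_i*Enc(x_i) + y_i^2 )
y = [3, 1, 4, 2, 5]
enc_dist = public_key.encrypt(0)
for exi, exi_sq, yi in zip(enc_x, enc_x_sq, y):
    enc_dist += exi_sq - 2 * yi * exi + yi * yi

# Client: decrypt the distance and compare against a matching threshold.
distance = private_key.decrypt(enc_dist)
print("match" if distance <= 1 else "no match")
```

    In this sketch the server learns nothing about the probe and the client learns only the final distance; a garbled-circuit comparison stage, in the spirit of the second approach the chapter covers, could replace the final decryption so that even the distance stays hidden.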

    Parameter Estimation and Quantitative Parametric Linkage Analysis with GENEHUNTER-QMOD

    Objective: We present a parametric method for linkage analysis of quantitative phenotypes. The method provides a test for linkage as well as estimates of the phenotype parameters. We have implemented the new method in the program GENEHUNTER-QMOD and evaluated its properties in simulations. Methods: The phenotype is modeled as a normally distributed variable, with a separate distribution for each genotype. Parameter estimates are obtained by maximizing the LOD score over the normal distribution parameters with a gradient-based optimization procedure called the PGRAD method. Results: The PGRAD method has lower power to detect linkage than variance components analysis (VCA) in the case of a normal distribution and small pedigrees. However, it outperforms the VCA and Haseman-Elston regression for extended pedigrees, nonrandomly ascertained data, and non-normally distributed phenotypes. In these settings, the higher power is accompanied by conservativeness, whereas the VCA shows an inflated type I error. Parameter estimation tends to underestimate residual variances but performs better for the expectation values of the phenotype distributions. Conclusion: GENEHUNTER-QMOD provides a powerful new tool to explicitly model quantitative phenotypes in the context of linkage analysis. It is freely available at http://www.helmholtz-muenchen.de/genepi/downloads. Copyright (C) 2012 S. Karger AG, Basel.
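
    To make the PGRAD idea concrete, here is a minimal sketch, assuming illustrative genotype probabilities and phenotype values rather than real pedigree output: genotype-specific normal parameters are fitted by gradient-based likelihood maximization, and a LOD-type score is formed against a single-normal null model. It is not the GENEHUNTER-QMOD implementation.

```python
# A minimal sketch of the PGRAD step: maximize a LOD-type score over
# genotype-specific normal phenotype distributions with a gradient-based
# optimizer. All data below are illustrative stand-ins for quantities
# GENEHUNTER-QMOD derives from pedigree and marker data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

pheno = np.array([1.1, 2.3, 0.8, 2.9, 1.7])         # observed phenotypes
g_prob = np.array([[0.7, 0.2, 0.1],                  # P(genotype | markers)
                   [0.1, 0.3, 0.6],                  # per individual, for
                   [0.8, 0.1, 0.1],                  # genotypes AA, Aa, aa
                   [0.0, 0.2, 0.8],
                   [0.2, 0.6, 0.2]])

def neg_log_lik(params):
    mu = params[:3]                                  # one mean per genotype
    sigma = abs(params[3]) + 1e-8                    # shared residual SD
    dens = norm.pdf(pheno[:, None], loc=mu, scale=sigma)
    return -np.sum(np.log(np.sum(g_prob * dens, axis=1)))

# Gradient-based maximization of the likelihood (L-BFGS-B uses numerically
# approximated gradients here).
fit = minimize(neg_log_lik, x0=[1.0, 2.0, 3.0, 1.0], method="L-BFGS-B")

# Null model: a single normal distribution for everyone (no linkage).
null = minimize(lambda p: -np.sum(norm.logpdf(pheno, p[0], abs(p[1]) + 1e-8)),
                x0=[pheno.mean(), pheno.std()], method="L-BFGS-B")

lod = (null.fun - fit.fun) / np.log(10)              # log10 likelihood ratio
print(f"LOD = {lod:.3f}")
```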

    The evolution of biological theories: explaining the success of Mendelian genetics, Darwin’s Theory of natural selection and their synthesis

    Darwin’s theory of natural selection was not widely accepted in the biological community until its synthesis with Mendelian genetics. I investigate the history of both sciences, with the aim of discovering why Mendelian genetics and the synthesis were scientifically successful. One possible explanation is given by constructivism, the view that developments in science are decided not by rational reasons but by contingent factors. A sophisticated version of this view is defended by Gregory Radick, who argues that Weldonian biometry, a rival theory of inheritance, could have supplanted Mendelism. For Radick, the success of Mendelism and the corresponding decline of biometry can be explained by historical circumstances, such as Weldon’s untimely death and his inability to recruit talented students. Another popular philosophical explanation of scientific developments is scientific realism, whose proponents argue that scientific success can be explained by the truth of scientific theories. More sophisticated versions of realism, such as Weisberg’s, take the routine scientific distortion of truth (idealization) into account. I argue from the history of genetics that neither constructivism nor realism, sophisticated or otherwise, can help us understand the success of Mendelian genetics. Instead, I argue that there were rational reasons in favor of Mendelian genetics, even if it was not a true theory of inheritance. I further conclude that the synthesis was successful because Mendelian genetics theoretically enriched Darwin’s theory of natural selection. This enrichment solved serious empirical and conceptual problems for Darwin’s theory, showing that we can also understand the success of the synthesis without appeal to broad realist or constructivist views.

    Netboost: boosting-supported network analysis improves high-dimensional omics prediction in acute myeloid leukemia and Huntington’s disease

    State-of-the-art selection methods fail to identify weak but cumulative effects of features found in many high-dimensional omics datasets, even though such features play an important role in certain diseases. We present Netboost, a three-step dimension reduction technique. First, a boosting-based filter is combined with the topological overlap measure to identify the essential edges of the network. Second, sparse hierarchical clustering is applied to the selected edges to identify modules. Finally, the information in each module is aggregated by its first principal components. We demonstrate the application of the newly developed Netboost in combination with CoxBoost for survival prediction from DNA methylation and gene expression data of 180 acute myeloid leukemia (AML) patients and show, based on cross-validated prediction error curve estimates, its superiority over variable selection on the full dataset as well as over an alternative clustering approach. The identified signature, related to chromatin-modifying enzymes, was replicated in an independent dataset, the phase II AMLSG 12-09 study. In a second application, we combine Netboost with Random Forest classification and improve the disease classification error in RNA-sequencing data of Huntington's disease mice. Netboost is a freely available Bioconductor R package for dimension reduction and hypothesis generation in high-dimensional omics applications.
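
    The three Netboost steps can be paraphrased in code. The sketch below is a conceptual Python analogue (Netboost itself is an R/Bioconductor package): a simple correlation screen stands in for the boosting-based edge filter with the topological overlap measure, ordinary hierarchical clustering stands in for the sparse variant, and the data are random placeholders for omics features.

```python
# A conceptual sketch of the three Netboost steps on stand-in data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))                 # samples x features

# Step 1: keep only the strongest edges of the feature network
# (stand-in for the boosting filter + topological overlap measure).
corr = np.abs(np.corrcoef(X, rowvar=False))
adjacency = np.where(corr > 0.3, corr, 0.0)

# Step 2: hierarchical clustering on the filtered network to find modules.
dist = 1.0 - adjacency
np.fill_diagonal(dist, 0.0)
condensed = dist[np.triu_indices_from(dist, k=1)]
modules = fcluster(linkage(condensed, method="average"),
                   t=0.9, criterion="distance")

# Step 3: aggregate each module by its first principal component
# ("module eigengene"), yielding a reduced feature matrix.
eigengenes = np.column_stack([
    PCA(n_components=1).fit_transform(X[:, modules == m]).ravel()
    for m in np.unique(modules)
])
print(eigengenes.shape)                        # samples x modules
```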

    Adaptive Vocal Random Challenge Support for Biometric Authentication

    The aim of this Bachelor's thesis was to develop a speech recognition application that could be used for vocal random challenges. The goal was to provide one possible solution to the central security problem of voice-based biometric authentication: replay attacks. The software is based on the open-source PocketSphinx speech recognition toolkit and is written in the Python programming language. The resulting application consists of two parts: a demonstration application with a GUI and a command line utility. The GUI application is suitable for demonstrating the capabilities of the speech recognition toolkit, whereas the command line utility can be used to add speech recognition capabilities to virtually any other application. In the GUI demo, the user interacts with a door directly by voice: to be authenticated, the user must say the correct sequence of digits or the English word corresponding to a displayed picture. Both applications can be configured by generating custom language models or, in the case of the demonstration application, by changing the length of the numeric random challenges.
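
    A minimal sketch of the random-challenge flow follows, assuming the `LiveSpeech` helper from the pocketsphinx Python bindings (the API differs between pocketsphinx versions) and a digit challenge like the demo application's; it is illustrative, not the thesis code itself.

```python
# A minimal sketch of a vocal random challenge with PocketSphinx.
# LiveSpeech is from the pocketsphinx Python bindings; the API varies
# across package versions, so treat this as an assumption-laden outline.
import random
from pocketsphinx import LiveSpeech

WORDS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
         "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

# Issue a fresh random digit challenge; a replayed recording of an
# earlier session will not match it.
challenge = [random.choice("0123456789") for _ in range(4)]
expected = " ".join(WORDS[d] for d in challenge)
print("Please say:", " ".join(challenge))

# LiveSpeech listens on the default microphone and yields hypotheses;
# compare the first one against the expected digit words.
for phrase in LiveSpeech():
    heard = str(phrase).strip().lower()
    print("authenticated" if heard == expected else "denied")
    break
```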

    Evolutionary Computation Paradigm to Determine Deep Neural Networks Architectures

    Image classification is usually done using deep learning algorithms whose architectures are set deterministically, by hand. The aim of this paper is to propose an evolutionary computation paradigm that optimises a deep learning neural network’s architecture. A set of chromosomes is randomly generated, after which selection, recombination, and mutation are applied; at each generation the fittest chromosomes are kept. The best chromosome from the last generation determines the deep learning architecture. We have tested our method on a second-trimester fetal morphology database. The proposed model is statistically compared with DenseNet201 and ResNet50, demonstrating its competitiveness.
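
    The evolutionary loop can be sketched as follows; a toy scikit-learn dataset and MLP stand in for the fetal morphology images and deep networks of the paper, so the fitness function, chromosome encoding, and hyperparameters are placeholders.

```python
# A minimal sketch of the evolutionary loop: chromosomes encode
# hidden-layer sizes, fitness is validation accuracy, and selection,
# recombination, and mutation produce each new generation.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

def fitness(chrom):
    net = MLPClassifier(hidden_layer_sizes=tuple(chrom), max_iter=150,
                        random_state=0)
    return net.fit(Xtr, ytr).score(Xva, yva)      # validation accuracy

pop = [[random.randint(8, 128) for _ in range(2)] for _ in range(6)]
for generation in range(3):
    scored = sorted(pop, key=fitness, reverse=True)
    elite = scored[:2]                            # keep fittest chromosomes
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = random.sample(elite, 2)
        child = [a[0], b[1]]                      # one-point recombination
        if random.random() < 0.3:                 # mutation
            child[random.randrange(2)] = random.randint(8, 128)
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)                      # final architecture
print("best architecture:", best, "accuracy:", round(fitness(best), 3))
```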

    Biometrics and identity documents: Performance, political context, legal considerations. Summary


    Big Data Classification of Ultrasound Doppler Scan Images Using a Decision Tree Classifier Based on Maximally Stable Region Feature Points

    The classification of ultrasound scan images is important in monitoring the development of prenatal and maternal structures. This paper proposes a big data classification system for ultrasound Doppler scan images that combines the residual of maximally stable extremal regions (MSER) and speeded up robust features (SURF) with a decision tree classifier. The algorithm first preprocesses the ultrasound scan images before detecting the MSER. A few essential regions are chosen from the MSER output, along with the residual region, to provide the best region of interest (ROI). The SURF feature points that best represent the region are detected using the gradient of the estimated cumulative region of interest. The Triangular Vertex Transform (TVT) is then used to extract features from the pixels surrounding the SURF feature points, and a decision tree classifier is trained on the extracted TVT features. The proposed classification system is validated using performance measures such as accuracy, specificity, precision, sensitivity, and F1 score on a large dataset of 12,400 scan images collected from 1,792 patients. The proposed method achieves an F1 score of 94.12%, and sensitivity, specificity, precision, and accuracy of 93.57%, 93.57%, and 97.96%, respectively. The evaluation results show that the proposed algorithm classifies Doppler scan images better than previously proposed algorithms.
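
    The detection and classification stages can be condensed into a sketch like the one below, using OpenCV and scikit-learn. Note the assumptions: SURF is available only in opencv-contrib builds with non-free modules enabled, the paper's Triangular Vertex Transform step is replaced here by pooled raw SURF descriptors, and the file paths and labels are illustrative.

```python
# A condensed sketch of the MSER + SURF + decision tree pipeline.
# SURF lives in opencv-contrib's xfeatures2d (non-free builds only);
# the TVT feature extraction is approximated by pooled SURF descriptors.
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def describe(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)           # preprocessing
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(img)             # stable regions
    mask = np.zeros_like(img)
    for region in regions:
        cv2.fillPoly(mask, [region.reshape(-1, 1, 2)], 255)
    roi = cv2.bitwise_and(img, img, mask=mask)       # candidate ROI
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    _, desc = surf.detectAndCompute(roi, mask)       # SURF feature points
    if desc is None:                                 # no keypoints found
        return np.zeros(64)
    return desc.mean(axis=0)                         # pooled descriptor

# Train a decision tree on pooled descriptors (paths/labels illustrative).
paths, labels = ["scan_001.png", "scan_002.png"], [0, 1]
features = np.array([describe(p) for p in paths])
clf = DecisionTreeClassifier(random_state=0).fit(features, labels)
print(clf.predict(features))
```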