2,008 research outputs found

    THRIVE: Threshold Homomorphic encryption based secure and privacy preserving bIometric VErification system

    In this paper, we propose a new biometric verification and template protection system which we call the THRIVE system. The system includes novel enrollment and authentication protocols based on a threshold homomorphic cryptosystem in which the private key is shared between the user and the verifier. In the THRIVE system, only encrypted binary biometric templates are stored in the database, and verification is performed on homomorphically randomized templates, so the original templates are never revealed during the authentication stage. The THRIVE system is designed for the malicious model, where a cheating party may arbitrarily deviate from the protocol specification. Since a threshold homomorphic encryption scheme is used, a malicious database owner cannot decrypt the encrypted templates of the users in the database. Security of the THRIVE system is therefore enhanced by a two-factor authentication scheme involving the user's private key and the biometric data. We prove the security and privacy preservation capability of the proposed system in the simulation-based model without additional assumptions. The proposed system is suitable for applications where the user does not want to reveal her biometrics to the verifier in plain form but needs to prove her physical presence by using biometrics. The system can be used with any biometric modality and biometric feature extraction scheme whose output templates can be binarized. The overall connection time for the proposed THRIVE system is estimated to be 336 ms on average for 256-bit biohash vectors on a desktop PC with a quad-core 3.2 GHz CPU and a 10 Mbit/s up/down link. Consequently, the proposed system can be used efficiently in real-life applications.
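    The full protocol is specified in the paper; as a rough, hedged illustration of the underlying building block, the sketch below computes a Hamming distance between a stored encrypted binary template and a fresh plaintext biohash under an additively homomorphic (Paillier) scheme. It is not the THRIVE protocol itself, which uses a threshold scheme whose key is shared between user and verifier plus randomization designed for the malicious model; the `phe` (python-paillier) package and all values here are assumptions for illustration.

```python
# Illustrative only: encrypted-vs-plaintext Hamming distance with additively
# homomorphic (Paillier) encryption. NOT the THRIVE protocol, which uses a
# *threshold* scheme with the private key shared between user and verifier
# and additional randomization for the malicious model.
# Assumes the python-paillier package (pip install phe).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

stored_template = [1, 0, 1, 1, 0, 0, 1, 0]            # enrolled binary biohash
encrypted_template = [public_key.encrypt(b) for b in stored_template]

fresh_biohash = [1, 0, 0, 1, 0, 1, 1, 0]              # presented at authentication

# XOR of a plaintext bit a with an encrypted bit Enc(b):
#   a = 0  ->  Enc(b)
#   a = 1  ->  Enc(1 - b) = Enc(1) + (-1) * Enc(b)
encrypted_distance = public_key.encrypt(0)
for a, enc_b in zip(fresh_biohash, encrypted_template):
    if a == 0:
        encrypted_distance += enc_b
    else:
        encrypted_distance += public_key.encrypt(1) + (enc_b * -1)

# In a deployment only a party holding (a share of) the key could decrypt;
# here we decrypt directly just to check the arithmetic.
print(private_key.decrypt(encrypted_distance))        # -> 2
```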

    Yo Variability! JHipster: A Playground for Web-Apps Analyses

    Though variability is everywhere, there has always been a shortage of publicly available cases for assessing variability-aware tools and techniques, as well as support for teaching variability-related concepts. Historical software product lines contain industrial secrets their owners do not want to disclose to a wide audience. The open source community has contributed large-scale cases such as Eclipse, the Linux kernel, or web-based plugin systems (Drupal, WordPress). To assess the accuracy of sampling and prediction approaches (bugs, performance), a case where all products can be enumerated is desirable. As configuration issues do not lie in one place but are scattered across technologies and assets, a case exposing such diversity is an additional asset. To this end, we present in this paper our efforts in building an explicit product line on top of JHipster, an industrial open-source Web-app configurator that is both manageable in terms of configurations (~163,000) and diverse in terms of technologies used. We present our efforts in building a variability-aware chain on top of JHipster's configurator and lessons learned using it as a teaching case at the University of Rennes. We also sketch the diversity of analyses that can be performed with our infrastructure, as well as early issues found using it. Our long-term goal is to support both students and researchers studying variability analysis and JHipster developers in the maintenance and evolution of their tools.
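    As a hedged illustration of what enumerating all configurations of such a configurator involves, the toy sketch below exhaustively lists the valid products of a miniature, invented feature model. The feature names and cross-tree constraints are hypothetical and do not reproduce JHipster's actual option space or its ~163,000 configurations.

```python
# Toy configuration enumeration in the spirit of exhaustively listing a
# configurator's products. Features and constraints are invented examples.
from itertools import product

databases         = ["sql", "mongodb", "cassandra", "no"]
authentications   = ["jwt", "oauth2", "session"]
client_frameworks = ["angular", "react", "none"]
booleans          = [False, True]

def is_valid(cfg):
    # Example cross-tree constraints (invented for illustration):
    if cfg["database"] == "no" and cfg["search_engine"]:
        return False                      # a search engine needs a database
    if cfg["client"] == "none" and cfg["websocket"]:
        return False                      # websockets only make sense with a client
    return True

configs = []
for db, auth, client, ws, se in product(databases, authentications,
                                        client_frameworks, booleans, booleans):
    cfg = {"database": db, "authentication": auth, "client": client,
           "websocket": ws, "search_engine": se}
    if is_valid(cfg):
        configs.append(cfg)

total = len(databases) * len(authentications) * len(client_frameworks) * 4
print(f"{len(configs)} valid configurations out of {total}")
```

    On a real product line the same enumeration is driven by a feature model and a SAT/CSP solver rather than hand-written constraint checks, but the principle of filtering the cross product of options is the same.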

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS think tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Privacy-Aware Processing of Biometric Templates by Means of Secure Two-Party Computation

    The use of biometric data for person identification and access control is gaining more and more popularity. Handling biometric data, however, requires particular care, since biometric data is indissolubly tied to the identity of its owner, raising important security and privacy issues. This chapter focuses on the latter, presenting an innovative approach that, by relying on tools borrowed from Secure Two-Party Computation (STPC) theory, makes it possible to process biometric data in encrypted form, thus eliminating any risk that private biometric information is leaked during an identification process. The basic concepts behind STPC are reviewed, together with the basic cryptographic primitives needed to achieve privacy-aware processing of biometric data in an STPC context. The two main approaches proposed so far, namely homomorphic encryption and garbled circuits, are discussed, and the way such techniques can be used to develop a full biometric matching protocol is described. Some general guidelines to be used in the design of a privacy-aware biometric system are given, so as to allow the reader to choose the most appropriate tools depending on the application at hand.
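    As a hedged, much-simplified illustration of the secret-sharing flavor of such two-party protocols, the sketch below shows only the linear (XOR) step of comparing XOR-shared binary templates. A real STPC protocol would also evaluate the non-linear steps, bit counting and threshold comparison, securely (for example inside a garbled circuit or via oblivious transfer), which this toy deliberately does not do; all names and values are illustrative.

```python
# Minimal sketch of the linear part of a secret-sharing based two-party
# biometric comparison. XOR of shared values is computed locally by each
# party; the popcount and threshold test would run inside a secure circuit
# in a real protocol, not in the clear as done here for demonstration.
import secrets

def share(bits):
    """Split a bit-vector into two XOR shares, one per party."""
    r = [secrets.randbelow(2) for _ in bits]
    return r, [b ^ s for b, s in zip(bits, r)]

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]   # server-side template at enrollment
query    = [1, 0, 0, 1, 0, 1, 1, 0]   # fresh reading on the client side

enr_c, enr_s = share(enrolled)        # client share, server share
qry_c, qry_s = share(query)

# Each party XORs its own shares locally; no communication is needed here.
diff_c = [a ^ b for a, b in zip(enr_c, qry_c)]
diff_s = [a ^ b for a, b in zip(enr_s, qry_s)]

# Reconstruct and count only to check the arithmetic of the toy example.
hamming = sum(a ^ b for a, b in zip(diff_c, diff_s))
print("Hamming distance:", hamming)   # -> 2
```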

    Dataremix: Aesthetic Experiences of Big Data and Data Abstraction

    This PhD by published work expands on the contribution to knowledge in two recent large-scale transdisciplinary artistic research projects, ATLAS in silico and INSTRUMENT | One Antarctic Night, and their exhibited and published outputs. The thesis reflects upon this practice-based artistic research that interrogates data abstraction: the digitization, datafication and abstraction of culture and nature as vast and abstract digital data. The research is situated in digital arts practices that engage a combination of big (scientific) data as artistic material, embodied interaction in virtual environments, and poetic recombination. A transdisciplinary and collaborative artistic practice, x-resonance, provides a framework for the hybrid processes, outcomes, and contributions to knowledge from the research. These are purposefully and productively situated at the objective | subjective interface, have the potential to convey multiple meanings simultaneously to a variety of audiences, and resist disciplinary definition. In the course of the research, a novel methodology emerges, dataremix, which is employed and iteratively evolved through artistic practice to address the research questions: 1) How can a visceral and poetic experience of data abstraction be created? and 2) How would one go about generating an artistically-informed (scientific) discovery? Several interconnected contributions to knowledge arise through the first research question: the creation of representational elements for artistic visualization of big (scientific) data that includes four new forms (genomic calligraphy, algorithmic objects as natural specimens, scalable auditory data signatures, and signal objects); an aesthetic of slowness that extends the operative forces in Jevbratt’s inverted sublime of looking down and in to also include looking fast and slow; an extension of Corby’s objective and subjective image consisting of “informational and aesthetic components” to novel virtual environments created from big (scientific) data that extend Davies’ poetic virtual spatiality to poetic objective | subjective generative virtual spaces; and an extension of Seaman’s embodied interactive recombinant poetics through embodied interaction in virtual environments as a recapitulation of scientific (objective) and algorithmic processes through aesthetic (subjective) physical gestures. These contributions combine holistically in the artworks ATLAS in silico and INSTRUMENT | One Antarctic Night to create visceral poetic experiences of big data abstraction, artworks that manifest the objective | subjective through art. Contributions to knowledge from the second research question arise from the artworks functioning as experimental systems in which experiments using analytical tools from the scientific domain are enacted within the process of creating the artwork, with the results “returned” into the artwork. These contributions are: elucidating differences in DNA helix bending and curvature along regions of gene sequences specified as either introns or exons; revealing nuanced differences in BLAST results in relation to genomic sequence metadata; and cross-correlating astronomical data to identify putative variable signals from astronomical objects for further scientific evaluation.
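    As a purely illustrative aside on the last point, the sketch below shows the kind of zero-lag cross-correlation check that can flag a candidate variable signal when two light curves of the same field are compared. It is not the pipeline used in INSTRUMENT | One Antarctic Night; all data and names are synthetic assumptions.

```python
# Illustrative only: zero-lag cross-correlation between two synthetic light
# curves as a crude indicator of a shared (putatively variable) signal.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)                        # observation epochs (arbitrary units)
signal = 0.3 * np.sin(2 * np.pi * 0.8 * t)         # a putative periodic variable

curve_a = signal + 0.05 * rng.standard_normal(t.size)
curve_b = signal + 0.05 * rng.standard_normal(t.size)
noise_only = 0.05 * rng.standard_normal(t.size)

def normalized_xcorr(x, y):
    """Pearson-style correlation at zero lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.correlate(x, y)[0] / x.size)

print("variable vs variable :", round(normalized_xcorr(curve_a, curve_b), 2))
print("variable vs noise    :", round(normalized_xcorr(curve_a, noise_only), 2))
```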