354 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring

    Full text link
    Artificially intelligent perception is increasingly present in the lives of every one of us. Vehicles are no exception, (...) In the near future, pattern recognition will have an even stronger role in vehicles, as self-driving cars will require automated ways to understand what is happening around (and within) them and act accordingly. (...) This doctoral work focused on advancing in-vehicle sensing through research on novel computer vision and pattern recognition methodologies for both biometrics and wellbeing monitoring. The main focus has been on electrocardiogram (ECG) biometrics, a trait well known for its potential for seamless driver monitoring. Major efforts were devoted to achieving improved identification and identity-verification performance in off-the-person scenarios, which are known for increased noise and variability. Here, end-to-end deep learning ECG biometric solutions were proposed and important topics were addressed, such as cross-database and long-term performance, waveform relevance through explainability, and interlead conversion. Face biometrics, a natural complement to the ECG in seamless unconstrained scenarios, was also studied in this work. The open challenges of masked face recognition and interpretability in biometrics were tackled in an effort to evolve towards algorithms that are more transparent, trustworthy, and robust to significant occlusions. Within the topic of wellbeing monitoring, improved solutions were proposed for multimodal emotion recognition in groups of people and for activity/violence recognition in in-vehicle scenarios. Finally, we also proposed a novel way to learn template security within end-to-end models, dismissing additional separate encryption processes, and a self-supervised learning approach tailored to sequential data, in order to ensure data security and optimal performance. (...) Comment: Doctoral thesis presented and approved on the 21st of December 2022 to the University of Porto
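
    As a rough, self-contained illustration of the kind of end-to-end deep learning ECG biometric pipeline mentioned above (not the thesis's actual architecture; all layer sizes and names are illustrative assumptions), the sketch below defines a small 1D convolutional network in PyTorch that maps a raw single-lead ECG segment to an identity embedding, which can then be compared for identification or verification:

        import torch
        import torch.nn as nn

        class ECGEmbeddingNet(nn.Module):
            """Toy 1D-CNN that maps a raw ECG segment to an identity embedding."""
            def __init__(self, segment_len=1000, emb_dim=128):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
                    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),          # global pooling over time
                )
                self.embed = nn.Linear(64, emb_dim)

            def forward(self, x):                      # x: (batch, 1, segment_len)
                h = self.features(x).squeeze(-1)       # (batch, 64)
                return nn.functional.normalize(self.embed(h), dim=-1)

        # Verification: compare embeddings of an enrolled and a probe segment.
        model = ECGEmbeddingNet()
        enrolled = model(torch.randn(1, 1, 1000))
        probe = model(torch.randn(1, 1, 1000))
        score = torch.cosine_similarity(enrolled, probe).item()   # accept if above a threshold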

    Chainable Functional Commitments for Unbounded-Depth Circuits

    Get PDF
    A functional commitment (FC) scheme allows one to commit to a vector x⃗ and later produce a short opening proof of (f, f(x⃗)) for any admissible function f. Since their inception, FC schemes supporting ever more expressive classes of functions have been proposed. In this work, we introduce a novel primitive that we call chainable functional commitment (CFC), which extends the functionality of FCs by allowing one to 1) open to functions of multiple inputs f(x⃗_1, …, x⃗_m) that are committed independently, 2) while preserving the output also in committed form. We show that CFCs for quadratic polynomial maps generically imply FCs for circuits. Then, we efficiently realize CFCs for quadratic polynomials over pairing groups and lattices, resulting in the first FC schemes for circuits of unbounded depth based on either pairing-based or lattice-based falsifiable assumptions. Our FCs require fixing a priori only the maximal width of the circuit to be evaluated, and have opening proofs whose size depends only on the depth of the circuit. Additionally, our FCs feature other nice properties, such as being additively homomorphic and supporting sublinear-time verification after offline preprocessing. Using a recent transformation that constructs homomorphic signatures (HS) from FCs, we obtain the first pairing- and lattice-based realisations of HS for bounded-width, but unbounded-depth, circuits. Prior to this work, the only HS for general circuits is lattice-based and requires bounding the circuit depth at setup time.
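
    To make the FC syntax concrete, the toy Python sketch below (insecure and non-succinct, purely illustrative; the function and variable names are not from the paper) shows the commit/open/verify interface for the special case of linear functions; a CFC would additionally accept several independently committed inputs and keep the output in committed form:

        # Toy (insecure, non-succinct) illustration of the FC syntax for linear maps:
        # commit(x) -> cm, open(x, f) -> (f(x), proof), verify(cm, f, y, proof).
        # A real FC has short commitments/proofs and a binding guarantee.
        import hashlib, json

        def commit(x):                       # "commitment" = hash of the vector (toy only)
            return hashlib.sha256(json.dumps(x).encode()).hexdigest()

        def open_linear(x, f):               # f is a coefficient vector; this toy proof reveals x
            y = sum(fi * xi for fi, xi in zip(f, x))
            return y, {"x": x}

        def verify(cm, f, y, proof):
            x = proof["x"]
            return commit(x) == cm and y == sum(fi * xi for fi, xi in zip(f, x))

        x = [3, 1, 4, 1, 5]
        cm = commit(x)
        y, pi = open_linear(x, [1, 0, 2, 0, 0])   # f(x) = x_0 + 2*x_2 = 11
        assert verify(cm, [1, 0, 2, 0, 0], y, pi)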

    Private set intersection: A systematic literature review

    Get PDF
    Secure Multi-party Computation (SMPC) is a family of protocols which allow some parties to compute a function on their private inputs, obtaining the output at the end and nothing more. In this work, we focus on a particular SMPC problem named Private Set Intersection (PSI). The challenge in PSI is how two or more parties can compute the intersection of their private input sets, while the elements that are not in the intersection remain private. This problem has attracted the attention of many researchers because of its wide variety of applications, contributing to the proliferation of many different approaches. Despite that, current PSI protocols still require heavy cryptographic assumptions that may be unrealistic in some scenarios. In this paper, we perform a Systematic Literature Review of PSI solutions, with the objective of analyzing the main scenarios where PSI has been studied and giving the reader a general taxonomy of the problem together with a general understanding of the most common tools used to solve it. We also analyze the performance using different metrics, trying to determine if PSI is mature enough to be used in realistic scenarios, identifying the pros and cons of each protocol and the remaining open problems. This work has been partially supported by the projects: BIGPrivDATA (UMA20-FEDERJA-082) from the FEDER Andalucía 2014–2020 Program and SecTwin 5.0 funded by the Ministry of Science and Innovation, Spain, and the European Union (Next Generation EU) (TED2021-129830B-I00). The first author has been funded by the Spanish Ministry of Education under the National F.P.U. Program (FPU19/01118). Funding for open access charge: Universidad de Málaga/CBU
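
    Among the tools commonly surveyed in the PSI literature are Diffie-Hellman-style protocols in which each party blinds the hashes of its elements with a secret exponent; the minimal Python sketch below illustrates only this blinding idea (toy parameters, no malicious-security measures, not a vetted implementation):

        # Minimal sketch of the Diffie-Hellman-based PSI idea: double-blinded
        # hashes match exactly when the underlying elements match.
        import hashlib, secrets

        P = 2**127 - 1               # toy prime modulus (use a proper group in practice)

        def h(item):                  # hash an item into Z_P (toy "hash to group")
            return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

        a, b = secrets.randbelow(P - 2) + 1, secrets.randbelow(P - 2) + 1   # secret keys
        A = {"alice", "bob", "carol"}
        B = {"bob", "dave", "carol"}

        # Round 1: each party sends its blinded set to the other.
        A_blind = {pow(h(x), a, P) for x in A}
        B_blind = {pow(h(x), b, P) for x in B}

        # Round 2: each party blinds the other's values with its own key and compares.
        A_double = {pow(v, b, P) for v in A_blind}   # computed by the B side
        B_double = {pow(v, a, P) for v in B_blind}   # computed by the A side
        print(len(A_double & B_double))              # size of the intersection -> 2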

    Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology

    Full text link
    The great behavioral heterogeneity observed between individuals with the same psychiatric disorder, and even within one individual over time, complicates both clinical practice and biomedical research. However, modern technologies are an exciting opportunity to improve behavioral characterization. Data from existing psychiatry methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at a greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometry, open avenues of novel questioning that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof-of-concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture. Comment: PhD thesis cop

    GLUE: Generalizing Unbounded Attribute-Based Encryption for Flexible Efficiency Trade-Offs

    Get PDF
    Ciphertext-policy attribute-based encryption is a versatile primitive that has been considered extensively for securely managing data in practice. Completely unbounded schemes are especially attractive, because they do not restrict the sets of attributes and policies. So far, any such schemes that support negations in the access policy or that have online/offline extensions have an inefficient decryption algorithm. In this work, we propose GLUE (Generalized, Large-universe, Unbounded and Expressive), a novel scheme that allows for an efficient decryption algorithm while supporting both negations and online/offline extensions. We achieve these properties simultaneously by uncovering an underlying dependency between encryption and decryption, which allows for a flexible trade-off in their efficiency. For the security proof, we devise a new technique that enables us to generalize multiple existing schemes. As a result, we obtain a completely unbounded scheme supporting negations that, to the best of our knowledge, outperforms all existing such schemes in the decryption algorithm.
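
    As a purely illustrative, non-cryptographic sketch of the ciphertext-policy ABE workflow discussed above (hypothetical function names, not GLUE itself), the Python snippet below ties a key to a set of attributes and a ciphertext to a Boolean policy that may contain negations; decryption succeeds only when the attributes satisfy the policy:

        # Toy illustration of the CP-ABE workflow (no actual encryption): a key carries
        # attributes, a ciphertext carries a policy over attributes (with negations),
        # and "decryption" succeeds only if the key's attributes satisfy the policy.
        def keygen(attributes):
            return {"attrs": frozenset(attributes)}

        def encrypt(message, policy):
            # policy: nested tuples ("and" | "or", left, right), ("not", attr), or an attribute string
            return {"policy": policy, "payload": message}

        def satisfies(attrs, policy):
            if isinstance(policy, str):
                return policy in attrs
            op = policy[0]
            if op == "not":
                return policy[1] not in attrs
            left, right = satisfies(attrs, policy[1]), satisfies(attrs, policy[2])
            return (left and right) if op == "and" else (left or right)

        def decrypt(key, ct):
            if not satisfies(key["attrs"], ct["policy"]):
                raise PermissionError("attributes do not satisfy the policy")
            return ct["payload"]

        sk = keygen({"doctor", "cardiology"})
        ct = encrypt("patient record", ("and", "doctor", ("not", "trainee")))
        print(decrypt(sk, ct))        # succeeds: doctor present, trainee absent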

    Fully Adaptive Decentralized Multi-Authority ABE

    Get PDF
    Decentralized multi-authority attribute-based encryption (MA-ABE) is a distributed generalization of standard (ciphertext-policy) attribute-based encryption where there is no trusted central authority: any party can become an authority and issue private keys, and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. We present the first multi-authority attribute-based encryption schemes that are provably fully adaptively secure. Namely, our construction is secure against an attacker that may corrupt some of the authorities as well as perform key queries adaptively throughout the lifetime of the system. Our main construction relies on a prime-order bilinear group where the k-linear assumption holds, as well as on a random oracle. Along the way, we present a conceptually simpler construction relying on a composite-order bilinear group with standard subgroup decision assumptions, as well as on a random oracle. Prior to this work, there was no construction that could resist adaptive corruptions of authorities, no matter the assumptions used. In fact, we point out that even standard complexity-leveraging-style arguments do not work in the multi-authority setting.
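
    The following toy Python sketch (no real cryptography, illustrative names only) highlights what is distinctive about the decentralized multi-authority setting: authorities are created independently, each issues key components only for the attributes it controls, and a user decrypts with the union of components bound to its global identifier:

        # Toy sketch of the decentralized MA-ABE workflow: each authority independently
        # issues key components for the attributes it controls, tied to a global user
        # identifier (GID); a user decrypts if the union of attributes across its key
        # components satisfies the ciphertext policy.
        class Authority:
            def __init__(self, name, attributes):
                self.name, self.attributes = name, set(attributes)   # no global coordination

            def keygen(self, gid, attribute):
                assert attribute in self.attributes
                return {"gid": gid, "attr": attribute, "issuer": self.name}

        def decrypt(gid, key_components, ct):
            attrs = {k["attr"] for k in key_components if k["gid"] == gid}
            if not ct["policy"](attrs):
                raise PermissionError("policy not satisfied")
            return ct["payload"]

        hospital = Authority("hospital", {"doctor", "nurse"})
        university = Authority("university", {"researcher"})
        keys = [hospital.keygen("alice", "doctor"), university.keygen("alice", "researcher")]
        ct = {"policy": lambda a: "doctor" in a and "researcher" in a, "payload": "dataset"}
        print(decrypt("alice", keys, ct))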

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Witness Encryption for Succinct Functional Commitments and Applications

    Get PDF
    Witness encryption (WE), introduced by Garg, Gentry, Sahai, and Waters (STOC 2013), allows one to encrypt a message to a statement x for some NP language L, such that any user holding a witness for x ∈ L can decrypt the ciphertext. The extreme power of this primitive comes at the cost of its elusiveness: a practical construction from established cryptographic assumptions is currently out of reach. In this work we introduce and construct a new notion of encryption that has a strong flavor of WE and that, crucially, we can build from well-studied assumptions (based on bilinear pairings) for interesting classes of computation. Our new notion, witness encryption for (succinct) functional commitment, takes inspiration from a prior weakening of witness encryption introduced by Benhamouda and Lin (TCC 2020). In a nutshell, theirs is a WE where: the encryption statement consists of a (non-compressible) commitment cm, a function G, and a value y; the decryption witness consists of a (non-succinct) NIZK proof that cm opens to v such that y = G(v). Benhamouda and Lin showed how to apply this primitive to obtain MPC with non-interactive and reusability properties---dubbed mrNISC---replacing the requirement of WE in existing round-collapsing techniques. Our new WE-like notion is motivated by supporting both commitments of a fixed size and fixed decryption complexity, independent of |v|---in contrast to the work by Benhamouda and Lin, where this complexity is linear. As a byproduct, our efficiency profile substantially improves the offline stage of mrNISC protocols. Our work solves the additional challenges that arise from relying on computationally binding commitments and computational soundness (of functional commitments), as opposed to the statistical binding and unconditional soundness (of NIZKs) used in Benhamouda and Lin's work. To tackle them, we not only modify their basic blueprint, but also model and instantiate different types of projective hash functions as building blocks. Furthermore, as one of our main contributions, we show the first pairing-based construction of functional commitments for NC1 circuits with linear verification. Our techniques are of independent interest and may highlight new avenues to design practical variants of witness encryption. As an additional contribution, we show that our new WE-flavored primitive and its efficiency properties are versatile: we discuss its further applications and show how to extend this primitive to better suit these settings.
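
    The toy Python sketch below (no real cryptography; names are illustrative, not the paper's scheme) is only meant to make the interface of witness encryption for functional commitments concrete: a ciphertext is bound to (cm, G, y), and anyone holding an opening v with commit(v) = cm and G(v) = y can recover the message:

        # Toy sketch of the "WE for functional commitments" interface: decryption
        # checks the witness (an opening v) against (cm, G, y) and releases the
        # payload only if the check passes. A real scheme would hide the payload.
        import hashlib, json

        def commit(v):
            return hashlib.sha256(json.dumps(v).encode()).hexdigest()

        def encrypt(cm, G, y, message):
            return {"cm": cm, "G": G, "y": y, "payload": message}   # toy: payload in the clear

        def decrypt(ct, v):
            if commit(v) != ct["cm"] or ct["G"](v) != ct["y"]:
                raise PermissionError("v is not a valid opening for (cm, G, y)")
            return ct["payload"]

        G = lambda v: sum(v)            # the committed function, e.g. a sum
        v = [2, 3, 5]
        ct = encrypt(commit(v), G, 10, "round-collapsed MPC message")
        print(decrypt(ct, v))           # succeeds: commit(v) matches and G(v) == 10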