
    The Review of Non-Technical Assumptions in Digital Identity Architectures

    The literature on digital identity management (IdM) systems is abundant, and proposed solutions vary in their technology components and non-technical requirements. In the long run, however, identities will need to be exchanged across domains or even borders, which requires interoperable solutions and flexible architectures. This article gives an overview of current research on digital identity management. We conduct a systematic literature review of digital identity solution architectures and extract their inherent non-technical assumptions. The findings show that solution designs can rest on organizational, business, and trust assumptions as well as assumptions about human users: for example, that trust relationships and collaborations among participating organizations can be established, that human users are capable of maintaining private cryptographic material, or that win-win business models can be easily identified. By reviewing the key findings of the proposed solutions and examining the differences and commonalities of their technical, organizational, and social requirements, we discuss their potential real-life inhibitors and identify opportunities for future research in IdM.

    Open Algorithms for Identity Federation

    © 2019, Springer Nature Switzerland AG. The identity problem today is a data-sharing problem. The fixed-attributes approach adopted by the consumer identity management industry provides only limited information about an individual and is therefore of limited value to service providers and other participants in the identity ecosystem. This paper proposes the Open Algorithms (OPAL) paradigm to address the increasing need for individuals and organizations to share data in a privacy-preserving manner. Instead of exchanging static or fixed attributes, participants in the ecosystem obtain better insight through a collective sharing of algorithms, governed through a trust network. Algorithms for specific datasets must be vetted to be privacy-preserving, fair, and free from bias.
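    The algorithm-to-data idea in this abstract can be illustrated with a minimal sketch. This is not the OPAL implementation or its API; the names (`VettedAlgorithm`, `DataHolder`) and the vetting-registry design are assumptions made here to show the pattern: raw records stay with the data holder, only algorithms approved by the trust network may run, and only aggregate results are returned.

    ```python
    # Hypothetical sketch of the "algorithm-to-data" pattern: raw data stays
    # local, and only vetted algorithms may run against it.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass(frozen=True)
    class VettedAlgorithm:
        """An algorithm approved by the trust network: it may only
        return aggregate insights, never raw records."""
        name: str
        fn: Callable[[List[dict]], dict]

    class DataHolder:
        """Keeps the raw dataset local; runs only vetted algorithms on it."""
        def __init__(self, records: List[dict], approved: Dict[str, VettedAlgorithm]):
            self._records = records    # raw data never leaves this object
            self._approved = approved  # registry maintained by the trust network

        def run(self, algorithm_name: str) -> dict:
            algo = self._approved.get(algorithm_name)
            if algo is None:
                raise PermissionError(f"{algorithm_name} has not been vetted")
            return algo.fn(self._records)  # only the aggregate result is shared

    # Example vetted algorithm: share of users active in the last 30 days.
    active_share = VettedAlgorithm(
        name="active_share",
        fn=lambda recs: {
            "active_share": sum(r["days_since_login"] <= 30 for r in recs) / len(recs)
        },
    )

    holder = DataHolder(
        records=[{"days_since_login": d} for d in (3, 45, 12, 90)],
        approved={"active_share": active_share},
    )
    print(holder.run("active_share"))  # {'active_share': 0.5}
    ```

    A request for an unvetted algorithm (e.g. one that would dump raw records) is rejected by the registry, which is where the vetting for privacy, fairness, and bias described above would take place.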