
    FORTES: Forensic Information Flow Analysis of Business Processes

    Nearly 70% of all business processes in use today rely on automated workflow systems for their execution. Despite growing expenditure on the design of advanced tools for the secure and compliant deployment of workflows, dependability incidents continue to grow exponentially. Concepts beyond access control that focus on information flow control offer new paradigms for designing security mechanisms for reliable and secure IT-based workflows. This talk presents FORTES, an approach for the forensic analysis of information flow properties. FORTES claims that information flow control can be made usable as the core of an audit-control system. For this purpose, it reconstructs workflow models from secure log files (i.e. execution traces) and, applying security policies, analyzes the information flows to distinguish security-relevant from security-irrelevant information flows. FORTES thus cannot prevent security policy violations, but by detecting them with well-founded analysis it improves the precision of audit controls and of the generated certificates.
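
    As a rough illustration of the kind of trace-based analysis the abstract describes, the sketch below reconstructs data flows from a workflow execution trace and checks them against a simple confidentiality policy. It is a minimal sketch under assumed inputs, not the FORTES implementation: the trace format, the label model, and the function names are all illustrative assumptions.

        # Minimal sketch of log-based information flow checking (not the FORTES
        # tool; trace format, policy model, and names are illustrative assumptions).
        from collections import defaultdict

        # Each trace event: (task, items_read, items_written)
        trace = [
            ("collect_claim",  {"customer_record"}, {"claim_form"}),
            ("assess_risk",    {"claim_form"},      {"risk_score"}),
            ("notify_partner", {"risk_score"},      {"partner_report"}),
        ]

        # Policy: items labelled "confidential" must not flow into "public" items.
        labels = {"customer_record": "confidential", "partner_report": "public"}

        def derive_flows(trace):
            """Reconstruct which source items (transitively) influence each item."""
            origins = defaultdict(set)  # data item -> originating source items
            for task, reads, writes in trace:
                influence = set()
                for r in reads:
                    influence |= origins.get(r, {r})
                for w in writes:
                    origins[w] |= influence
            return origins

        def violations(origins, labels):
            """List flows from confidential origins into public items."""
            return [(src, dst) for dst, srcs in origins.items()
                    if labels.get(dst) == "public"
                    for src in srcs if labels.get(src) == "confidential"]

        print(violations(derive_flows(trace), labels))
        # [('customer_record', 'partner_report')] -> a security-relevant flow to audit

    In FORTES itself the workflow model and policies would presumably be derived from the secured logs and the organization's security policy rather than hard-coded as above; the sketch only shows how detected violations can sharpen the precision of audit controls.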

    Online Personal Data Processing and EU Data Protection Reform. CEPS Task Force Report, April 2013

    This report sheds light on the fundamental questions and underlying tensions between current policy objectives, compliance strategies and global trends in online personal data processing, assessing the existing and future framework in terms of effective regulation and public policy. Based on the discussions among the members of the CEPS Digital Forum and independent research carried out by the rapporteurs, policy conclusions are derived with the aim of making EU data protection policy more fit for purpose in today’s online technological context. This report constructively engages with the EU data protection framework, but does not provide a textual analysis of the EU data protection reform proposal as such

    Enhancing User Authentication with Facial Recognition and Feature-Based Credentials

    This research proposes a novel and trustworthy user authentication method that creates individualized, trusted credentials from distinctive facial traits using facial recognition technology. This makes it easy to validate a user's identity across various login methods. The fundamental elements of the system are face recognition, feature extraction, and the hashing of those features to produce usernames and passwords. The method uses the OpenCV library, free computer-vision software, and additionally employs Hashlib for secure hashing and Image-based Deep Learning for Identification (IDLI) technology to extract facial tags. For increased security and dependability, the system mandates a maximum of ten characters for usernames and passwords; this restriction strengthens the system by reducing potential weaknesses in its defenses. The system also generates certificates that are neatly arranged in an Excel file for easy access and management. To better protect user data and provide reliable biometric authentication, this study aims to design and implement a recognition system that incorporates cutting-edge approaches such as facial feature extraction, feature hashing, and password generation. Additionally, the system provides robust security features based on face recognition.
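
    A minimal sketch of the pipeline the abstract outlines: detect a face with OpenCV, reduce the face region to a coarse feature vector, and hash it with hashlib to derive ten-character credentials. This is an illustrative assumption of how such a scheme could be wired together, not the paper's IDLI implementation; the cascade file, feature choice, truncation scheme, and helper names are all hypothetical.

        # Illustrative sketch of hashing facial features into short credentials
        # (not the paper's IDLI pipeline; feature choice and truncation are assumptions).
        import hashlib
        import cv2

        def face_feature_vector(image_path):
            """Detect the largest face and return a coarse grayscale feature vector."""
            img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            if img is None:
                raise FileNotFoundError(image_path)
            # Assumes the Haar cascade bundled with opencv-python is available.
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                raise ValueError("no face detected")
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face
            face = cv2.resize(img[y:y + h, x:x + w], (32, 32))   # normalize size
            return face.tobytes()

        def credentials_from_features(features, max_len=10):
            """Hash the feature bytes and truncate to at most ten characters each."""
            digest = hashlib.sha256(features).hexdigest()
            username = "u" + digest[:max_len - 1]   # 10-character username
            password = digest[-max_len:]            # 10-character password
            return username, password

        user, pwd = credentials_from_features(face_feature_vector("face.jpg"))
        print(user, pwd)

    Hashing raw pixel bytes is only illustrative: a real system would need a stable face embedding (the role the abstract assigns to IDLI feature extraction) so that small differences between captures do not change the derived credentials.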

    Ethics-based AI auditing core drivers and dimensions: A systematic literature review

    This thesis provides a systematic literature review (SLR) of ethics-based AI auditing research. The review's main goals are to report the current status of the academic literature on AI auditing and to provide findings addressing the review objectives. The review incorporated 50 articles on ethics-based AI auditing. The SLR findings indicate that the AI auditing field is still new and growing. Most of the studies were conference papers published in either 2019 or 2020; as the AI auditing field was broad and unorganized, there was clear demand for an SLR. Based on the SLR findings, fairness, transparency, non-maleficence and responsibility are the most important principles for ethics-based AI auditing. Other commonly identified principles were privacy, beneficence, freedom and autonomy, and trust. These principles were interpreted as belonging either to drivers or to dimensions, depending on whether the principle is audited directly or whether achieving it is a desired outcome. The findings also suggest that external AI auditing leads the ethics-based AI auditing discussion: the majority of the papers dealt specifically with external AI auditing. The most important stakeholders were identified as researchers, developers and deployers, regulators, auditors, users and individuals, and companies. The stakeholders' roles varied depending on whether they were proposed to conduct AI audits or whether they were beneficiaries of the audits.

    This Master's thesis presents a systematic literature review of ethics-based AI auditing. The review's main goals are to present the current state of the academic literature on AI auditing and to report the key findings in line with the thesis objectives. The review included 50 articles dealing with ethics-based AI auditing. The findings of the systematic literature review show that the field of AI auditing is still new and growing. Most of the publications were conference papers from 2019-2020. The field is also broad and unorganized, so there was demand for a systematic literature review. Based on the findings, fairness, transparency, non-maleficence and responsibility are the most important principles for ethics-based AI auditing. Other commonly identified principles were privacy, beneficence, freedom and autonomy, and trust. These principles were interpreted as belonging either to drivers or to dimensions, depending on whether the principle was audited directly or whether achieving it was the desired outcome of the audit. The findings also showed that external auditing dominates the current discussion on ethics-based AI auditing; the majority of the papers dealt specifically with external AI auditing. In addition, the most important stakeholders were identified: researchers, system developers, regulators, auditors, users, and individuals and organizations. Their roles varied depending on whether they were responsible for carrying out AI audits or whether they belonged to the beneficiaries of AI auditing.

    Data governance: Organizing data for trustworthy Artificial Intelligence

    The rise of Big, Open and Linked Data (BOLD) enables Big Data Algorithmic Systems (BDAS) which are often based on machine learning, neural networks and other forms of Artificial Intelligence (AI). As such systems are increasingly requested to make decisions that are consequential to individuals, communities and society at large, their failures cannot be tolerated, and they are subject to stringent regulatory and ethical requirements. However, they all rely on data which is not only big, open and linked but varied, dynamic and streamed at high speeds in real-time. Managing such data is challenging. To overcome such challenges and utilize opportunities for BDAS, organizations are increasingly developing advanced data governance capabilities. This paper reviews challenges and approaches to data governance for such systems, and proposes a framework for data governance for trustworthy BDAS. The framework promotes the stewardship of data, processes and algorithms, the controlled opening of data and algorithms to enable external scrutiny, trusted information sharing within and between organizations, risk-based governance, system-level controls, and data control through shared ownership and self-sovereign identities. The framework is based on 13 design principles and is proposed incrementally, for a single organization and multiple networked organizations. NORTE-01-0145-FEDER-000037

    Legal compliance by design (LCbD) and through design (LCtD) : preliminary survey

    1st Workshop on Technologies for Regulatory Compliance, co-located with the 30th International Conference on Legal Knowledge and Information Systems (JURIX 2017). The purpose of this paper is twofold: (i) carrying out a preliminary survey of the literature and research projects on Compliance by Design (CbD); and (ii) clarifying the double process of (a) extending business management techniques to other regulatory fields, and (b) converging trends in legal theory, legal technology and Artificial Intelligence. The paper highlights the connections and differences we found across different domains and proposals. We distinguish three different policy-driven types of CbD: (i) business, (ii) regulatory, and (iii) legal. The recent deployment of ethical views, and the implementation of general principles of privacy and data protection, lead to the conclusion that, in order to appropriately define legal compliance, Compliance through Design (CtD) should be differentiated from CbD.

    A Rule of Persons, Not Machines: The Limits of Legal Automation


    Binary Governance: Lessons from the GDPR’s Approach to Algorithmic Accountability

    Algorithms are now used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. A quickly growing literature has split on how to address algorithmic decision-making, with individual rights and accountability to nonexpert stakeholders and to the public at the crux of the debate. In this Article, I make the case for why both individual rights and public- and stakeholder-facing accountability are not just goods in and of themselves but crucial components of effective governance. Only individual rights can fully address dignitary and justificatory concerns behind calls for regulating algorithmic decision-making. And without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail. In this Article, I identify three categories of concern behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No one regulatory approach can effectively address all three. I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights combined with systemic regulation achieved through collaborative governance (the use of private-public partnerships). Only through this binary approach can we effectively address all three concerns raised by algorithmic decision-making, or decision-making by Artificial Intelligence (“AI”). The interplay between the two approaches will be complex. Sometimes the two systems will be complementary, and at other times, they will be in tension. The European Union’s (“EU’s”) General Data Protection Regulation (“GDPR”) is one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better governance regime for accountable algorithmic, or AI, decision-making. It shows, too, that in the absence of public and stakeholder accountability, individual rights can have a significant role to play in establishing the legitimacy of a collaborative regime

    A Comparative Study of Card Not Present E-commerce Architectures with Card Schemes: What About Privacy?

    The Internet is increasingly used for card-not-present e-commerce architectures. Several protocols, such as 3D-Secure, have been proposed in the literature by card schemes or academics. Even if some of them are deployed in real life, these solutions are not perfect with regard to data security and users' privacy. In this paper, we present a comparative study of existing solutions for card-not-present e-commerce. We consider the main security and privacy trends in e-payment in order to make an objective comparison of existing solutions. This comparative study illustrates the need to consider privacy in deployed e-commerce architectures; this has never been more urgent with the recent release of the new specifications of 3D-Secure.
