
    Privacy as Contextual Integrity in Online Proctoring Systems in Higher Education: A Scoping Review

    Privacy is one of the key challenges to the adoption and implementation of online proctoring systems (OPS) in higher education. To better understand this challenge, we adopt the theory of privacy as contextual integrity to conduct a scoping review of 17 papers. The results show that different types of students' personal and sensitive information are collected and disseminated, which raises considerable privacy concerns. Governing principles, including transparency and fairness, consent and choice, information minimization, accountability, and information security and accuracy, have been identified to address these privacy problems. This study highlights a need to clarify how these principles should be implemented and sustained, and to which privacy concerns and actors they relate. Further, it calls for clarifying the responsibilities of key actors in enacting and sustaining the responsible adoption and use of OPS in higher education.

    Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten

    To understand the Right to Be Forgotten in the context of artificial intelligence, it is necessary to first delve into an overview of the concepts of human and AI memory and forgetting. Our current law appears to treat human and machine memory alike, supporting a fictitious understanding of memory and forgetting that does not comport with reality. (Some authors have already highlighted concerns about perfect remembering.) This Article examines the problem of AI memory and the Right to Be Forgotten, using this example as a model for understanding the failures of current privacy law to reflect the realities of AI technology. First, this Article analyzes the legal background of the Right to Be Forgotten in order to understand its potential applicability to AI, including a discussion of the antagonism between the values of privacy and transparency under current E.U. privacy law. Next, the Authors explore whether the Right to Be Forgotten is practicable or beneficial in an AI/machine-learning context, in order to understand whether and how the law should address the Right to Be Forgotten in a post-AI world. The Authors discuss the technical problems faced when adhering to a strict interpretation of the data deletion requirements under the Right to Be Forgotten, ultimately concluding that it may be impossible to fulfill the legal aims of the Right to Be Forgotten in artificial intelligence environments. Finally, this Article addresses the core issue at the heart of the AI and Right to Be Forgotten problem: the unfortunate dearth of interdisciplinary scholarship supporting privacy law and regulation.
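
    The deletion problem described in this abstract can be made concrete with a small sketch (not drawn from the Article itself, which contains no code): even after a record is erased from the stored dataset, a model already trained on it keeps parameters shaped by that record, so honouring a strict deletion request would require retraining or a comparable unlearning step. The synthetic data, model choice, and variable names below are illustrative assumptions.

        # Minimal sketch: deleting a stored record does not erase its
        # influence on a model that was already trained on it.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

        # Model trained on the full dataset, including the record to be "forgotten".
        model_full = LogisticRegression().fit(X, y)

        # Honouring a deletion request on the stored data alone...
        forget_idx = 0
        X_remaining = np.delete(X, forget_idx, axis=0)
        y_remaining = np.delete(y, forget_idx)

        # ...leaves the deployed model's parameters untouched; only retraining
        # (or an unlearning procedure) actually removes the record's influence.
        model_retrained = LogisticRegression().fit(X_remaining, y_remaining)
        print("max coefficient shift after true retraining:",
              np.abs(model_full.coef_ - model_retrained.coef_).max())

    Retraining from scratch is shown here only to make the residual influence visible; at production scale that cost is part of why the Authors argue strict deletion may be impractical or impossible.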

    Identifying Practical Challenges in the Implementation of Technical Measures for Data Privacy Compliance

    Modern privacy regulations provide a strict mandate for data processing entities to implement appropriate technical measures to demonstrate compliance. In practice, determining which measures are indeed "appropriate" is not trivial, particularly in light of the vague guidance provided by privacy regulations. To exacerbate the issue, challenges arise not only in the implementation of the technical measures themselves, but also in a variety of factors involving the roles, processes, decisions, and culture surrounding the pursuit of privacy compliance. In this paper, we present 33 challenges faced in the implementation of technical measures for privacy compliance, derived from a qualitative analysis of 16 interviews with privacy professionals. In addition, we evaluate the interview findings in a survey study, which leads to a discussion of the identified challenges and their implications.
    Comment: 10 pages, 2 tables, 29th Americas Conference on Information Systems (AMCIS 2023).
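
    As one concrete illustration of the kind of "appropriate technical measure" practitioners must operationalize, the sketch below applies keyed pseudonymization to a direct identifier before downstream processing. This example is not taken from the paper; the key handling, field names, and record layout are assumptions.

        # Illustrative only: keyed pseudonymization of a direct identifier,
        # a commonly cited technical measure for privacy compliance.
        import hmac
        import hashlib

        SECRET_KEY = b"rotate-and-store-me-in-a-vault"  # hypothetical key management

        def pseudonymize(identifier: str) -> str:
            """Replace a direct identifier with a stable keyed pseudonym."""
            return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

        record = {"email": "student@example.edu", "grade": 87}
        processed = {"subject_id": pseudonymize(record["email"]), "grade": record["grade"]}
        print(processed)  # downstream systems never see the raw email address

    Even a measure this small raises the organizational questions the interviews surface: who manages the key, which roles may re-identify, and how the choice is documented as "appropriate".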