74,624 research outputs found

    Automated Analysis of Accountability

    Introducing Accountability to Anonymity Networks

    Many anonymous communication (AC) networks rely on routing traffic through proxy nodes to obfuscate the originator of the traffic. Without an accountability mechanism, exit proxy nodes risk sanctions by law enforcement if users commit illegal actions through the AC network. We present BackRef, a generic mechanism for AC networks that provides practical repudiation for proxy nodes by tracing selected outbound traffic back to the predecessor node (but not in the forward direction) through a cryptographically verifiable chain. It also provides an option for full (or partial) traceability back to the entry node, or even to the corresponding user, when all intermediate nodes cooperate. Moreover, to maintain a good balance between anonymity and accountability, the protocol incorporates whitelist directories at exit proxy nodes. BackRef offers improved deployability over related work and introduces a novel concept of pseudonymous signatures that may be of independent interest. We exemplify the utility of BackRef by integrating it into the onion routing (OR) protocol and examine its deployability by considering several system-level aspects. We also present security definitions for the BackRef system (namely anonymity, backward traceability, no forward traceability, and no false accusation) and conduct a formal security analysis of the OR protocol with BackRef using ProVerif, an automated cryptographic protocol verifier, establishing the aforementioned security properties against a strong adversarial model.
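
    A toy sketch of the back-reference idea (not the BackRef construction itself, which relies on pseudonymous signatures and whitelist directories): each relay signs a record binding a stream identifier to the predecessor that forwarded it, so an exit node can later produce a verifiable reference pointing one hop backwards without revealing anything about the forward path. The message format, field names, and the use of Ed25519 below are illustrative assumptions.

        # Hypothetical illustration of a one-hop back-reference; requires the
        # 'cryptography' package (pip install cryptography).
        from dataclasses import dataclass
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import (
            Ed25519PrivateKey,
            Ed25519PublicKey,
        )

        @dataclass
        class BackReference:
            stream_id: bytes    # identifier of the outbound stream
            predecessor: bytes  # identity/address of the previous hop
            signature: bytes    # relay's signature over (stream_id, predecessor)

        def issue_back_reference(relay_key: Ed25519PrivateKey,
                                 stream_id: bytes,
                                 predecessor: bytes) -> BackReference:
            """The relay vouches: traffic on stream_id reached me from predecessor."""
            msg = stream_id + b"|" + predecessor
            return BackReference(stream_id, predecessor, relay_key.sign(msg))

        def verify_back_reference(relay_pub: Ed25519PublicKey,
                                  ref: BackReference) -> bool:
            """Anyone holding the relay's public key can check the claim."""
            msg = ref.stream_id + b"|" + ref.predecessor
            try:
                relay_pub.verify(ref.signature, msg)
                return True
            except InvalidSignature:
                return False

        # Example: an exit relay repudiates traffic by pointing to its predecessor.
        exit_key = Ed25519PrivateKey.generate()
        ref = issue_back_reference(exit_key, b"stream-42", b"relay-B")
        assert verify_back_reference(exit_key.public_key(), ref)

    Chaining such records hop by hop corresponds to the optional full traceability back to the entry node described in the abstract, available only when all intermediate nodes cooperate.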

    Rethinking Privacy and Freedom of Expression in the Digital Era: An Interview with Mark Andrejevic

    Mark Andrejevic, Professor of Media Studies at Pomona College in Claremont, California, is a distinguished critical theorist exploring issues around surveillance, from pop culture to the logic of automated, predictive surveillance practices. In an interview with WPCC issue co-editor Pinelopi Troullinou, Andrejevic responds to pressing questions emanating from the surveillant society, looking to shift the conversation towards concepts of data holders’ accountability. He insists on the need to retain awareness of power relations in a data-driven society, highlighting the emerging challenge ‘to provide ways of understanding the long and short term consequences of data driven social sorting’. Within the context of Snowden’s revelations and policy responses worldwide, he recommends a shift of focus from discourses surrounding ‘pre-emption’ to those of ‘prevention’, also questioning the notion that citizens might only need to be concerned ‘if we are doing something “wrong”’, as this depends on a utopian notion of state and commercial processes ‘that have been purged of any forms of discrimination’. He warns of multiple concerns about the misuse of data in a context where ‘a total surveillance society looks all but inevitable’. However, the academy may be in a unique position to reframe the terms of discussions over privacy and surveillance via the analysis of ‘the long and short term consequences of data driven social sorting (and its automation)’ and, in particular, of algorithmic accountability.

    Design Challenges for GDPR RegTech

    The Accountability Principle of the GDPR requires that an organisation can demonstrate compliance with the regulation. A survey of GDPR compliance software solutions shows significant gaps in their ability to demonstrate compliance. In contrast, RegTech has recently brought great success to financial compliance, resulting in reduced risk, cost savings and enhanced financial regulatory compliance. It is shown that many GDPR solutions lack interoperability features such as standard APIs, metadata or reports, and are not supported by published methodologies or evidence of their validity or even utility. A proof-of-concept prototype was explored, using a regulator-based self-assessment checklist, to establish whether RegTech best practice could improve the demonstration of GDPR compliance. The application of a RegTech approach provides opportunities for demonstrable and validated GDPR compliance, in addition to the risk reductions and cost savings that RegTech can deliver. This paper demonstrates that a RegTech approach to GDPR compliance can help an organisation meet its accountability obligations.
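
    As a rough, hypothetical illustration of the checklist-driven approach mentioned above (the paper’s actual prototype, checklist fields and scoring are not specified here), a regulator-style self-assessment can be modelled as a set of yes/no controls with attached evidence, from which a simple machine-readable report is produced; such a report is one way to supply the standard outputs the surveyed tools were found to lack. All names below are assumptions.

        # Hypothetical self-assessment checklist model; not taken from the paper.
        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Control:
            article: str                 # GDPR provision the control maps to
            question: str                # regulator-style self-assessment question
            satisfied: bool              # organisation's yes/no answer
            evidence: List[str] = field(default_factory=list)  # policies, logs, DPIAs

        def compliance_report(controls: List[Control]) -> Dict[str, object]:
            """Summarise which controls are met and which lack supporting evidence."""
            gaps = [c.article for c in controls if not c.satisfied]
            unevidenced = [c.article for c in controls if c.satisfied and not c.evidence]
            return {
                "controls_total": len(controls),
                "controls_met": len(controls) - len(gaps),
                "gaps": gaps,
                "met_without_evidence": unevidenced,
            }

        checklist = [
            Control("Art. 30", "Are records of processing activities maintained?",
                    True, ["records-of-processing.xlsx"]),
            Control("Art. 33", "Can personal data breaches be notified within 72 hours?",
                    False),
        ]
        print(compliance_report(checklist))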

    The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism

    Computer vision and other biometric data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state', in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks whether there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.

    Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era--The Human-like Authors are Already Here- A New Model

    Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, either for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable. There is therefore a call for social and legal tools for controlling AI systems’ functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI Multi-Player paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the different stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy their benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears behind AI systems. In addition, this model unveils the powers behind the operation of AI systems; hence, it efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article also reflects on accountability for AI systems in other legal regimes, such as tort or criminal law, and in various industries using these systems.

    Dialectic tensions in the financial markets: a longitudinal study of pre- and post-crisis regulatory technology

    This article presents the findings from a longitudinal research study on regulatory technology in the UK financial services industry. The financial crisis, with its serious corporate and mutual fund scandals, raised the profile of compliance as governmental bodies, institutional and private investors introduced a ‘tsunami’ of financial regulations. Adopting a multi-level analysis, this study examines how regulatory technology was used by financial firms to meet their compliance obligations, pre- and post-crisis. Empirical data collected over 12 years examine the deployment of an investment management system in eight financial firms. Interviews with public regulatory bodies, financial institutions and technology providers reveal a culture of compliance with increased transparency, surveillance and accountability. Findings show that dialectic tensions arise as the pursuit of transparency, surveillance and accountability in compliance mandates is simultaneously rationalized, facilitated and obscured by regulatory technology. Responding to these challenges, regulatory bodies continue to impose revised compliance mandates on financial firms, forcing them to adapt their financial technologies to an ever-changing multi-jurisdictional regulatory landscape.

    Reducing No-Shows and Late Cancellations in Primary Care

    No-shows and late cancellations are a challenge across medical practices, resulting in costly, fragmented care. Many patients do not understand the impact that missing an appointment, or cancelling it less than 48 hours before a visit, can have. While reminding patients of their appointments is a known tactic for improving attendance, the most effective mode of reminder can vary significantly across patient populations. Just as critical as reminding the patient of the appointment is ensuring they understand the purpose of the visit, along with showing respect for their time and any competing priorities. This quality improvement initiative aimed to reduce the no-show rate of 21.4% and the late cancellation rate of 21.1% for the MassHealth population by 5%. Learning from previous studies, a hybrid approach to meet this population’s needs included a 7-day reminder call from a Patient Engagement Coordinator (PEC) and a 2-day automated reminder. During the 7-day reminder call, the PEC identified barriers to attending the appointment through concrete planning and motivational interviewing strategies. Appointments were rescheduled as needed, additional information was provided to solidify shared goals for the visit, and patients’ time and obligations were validated. The intervention resulted in positive feedback from the majority of patients and revealed concrete planning prompts to be a very effective form of communication. The post-intervention data analysis showed that both the no-show and late cancellation rates were reduced for the MassHealth population. Due to data limitations and confounding variables, this study is recommended as a basis for future investigation as the principal investigators enter the next pilot phase of this model.