
    Rethinking Privacy and Freedom of Expression in the Digital Era: An Interview with Mark Andrejevic

    Mark Andrejevic, Professor of Media Studies at Pomona College in Claremont, California, is a distinguished critical theorist exploring issues around surveillance, from pop culture to the logic of automated, predictive surveillance practices. In an interview with WPCC issue co-editor Pinelopi Troullinou, Andrejevic responds to pressing questions emanating from the surveillant society, looking to shift the conversation to concepts of data holders' accountability. He insists on the need to retain awareness of power relations in a data-driven society, highlighting the emerging challenge 'to provide ways of understanding the long and short term consequences of data driven social sorting'. Within the context of Snowden's revelations and policy responses worldwide, he recommends a shift of focus from discourses surrounding 'pre-emption' to those of 'prevention', also questioning the notion that citizens need only be concerned 'if we are doing something "wrong"', since this depends on a utopian notion of state and commercial processes 'that have been purged of any forms of discrimination'. He warns of multiple concerns over misuse of data in a context where 'a total surveillance society looks all but inevitable'. However, the academy may be in a unique position to reframe the terms of discussions over privacy and surveillance via analysis of 'the long and short term consequences of data driven social sorting (and its automation)', and in particular of algorithmic accountability.

    Privacy, Transparency, and Accountability in the NSAā€™s Bulk Metadata Program

    Disputes at the intersection of national security, surveillance, civil liberties, and transparency are nothing new, but they have become a particularly prominent part of public discourse in the years since the attacks on the World Trade Center in September 2001. This is in part due to the dramatic nature of those attacks, in part based on significant legal developments after the attacks (classifying persons as "enemy combatants" outside the scope of traditional Geneva protections, legal memos by White House counsel providing a rationale for torture, the USA Patriot Act), and in part because of the rapid development of communications and computing technologies that enable both greater connectivity among people and a greater ability to collect information about those connections. One important way in which these questions intersect is in the controversy surrounding bulk collection of telephone metadata by the U.S. National Security Agency. The bulk metadata program (the "metadata program" or "program") involved court orders under section 215 of the USA Patriot Act requiring telecommunications companies to provide records about all calls the companies handled, and the creation of a database that the NSA could search. The program was revealed to the general public in June 2013 as part of the large document leak by Edward Snowden, a former contractor for the NSA. A fair amount has been written about section 215 and the bulk metadata program. Much of the commentary has focused on three discrete issues. First is whether the program is legal; that is, does the program comport with the language of the statute, and is it consistent with Fourth Amendment protections against unreasonable searches and seizures? Second is whether the program infringes privacy rights; that is, does bulk metadata collection diminish individual privacy to a degree that infringes persons' rights to privacy? Third is whether the secrecy of the program is inconsistent with democratic accountability. After all, the general public only became aware of the metadata program via the Snowden leaks; absent those leaks, there would likely not have been the sort of political backlash and investigation necessary to provide some kind of accountability. In this chapter I argue that we need to look at these not as discrete questions but as intersecting ones. The metadata program is not simply a legal problem (though it is one); it is not simply a privacy problem (though it is one); and it is not simply a secrecy problem (though it is one). Instead, the importance of the metadata program is the way in which these problems intersect and reinforce one another. Specifically, I will argue that the intersection of the questions undermines the value of rights, and that this is a deeper and more far-reaching moral problem than each of the component questions.

    Privacy and Accountability in Black-Box Medicine

    Black-box medicine, the use of big data and sophisticated machine learning techniques for health-care applications, could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate but related problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to that health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and those verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.
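
    To make the three pillars concrete, here is a minimal sketch in Python of how an independent gatekeeper might mediate between algorithm developers and outside verifiers: requests for purposes outside a substantive whitelist are refused, a direct identifier is stripped before release, and every disclosure is logged. All names (Gatekeeper, ALLOWED_PURPOSES, patient_id) are hypothetical; the article proposes the pillars, not this code.

        from dataclasses import dataclass, field
        from typing import Dict, List

        # Substantive limits: the only purposes for which data may leave the developer.
        ALLOWED_PURPOSES = {"algorithm_verification", "bias_audit"}

        @dataclass
        class DisclosureLog:
            """Running record of what left the gatekeeper, for whom, and why."""
            entries: List[str] = field(default_factory=list)

            def record(self, verifier: str, purpose: str, n: int) -> None:
                self.entries.append(f"{verifier}: {n} records released for {purpose}")

        class Gatekeeper:
            """Independent intermediary between developers and verifiers."""

            def __init__(self, log: DisclosureLog) -> None:
                self.log = log

            def release(self, verifier: str, purpose: str, records: List[Dict]) -> List[Dict]:
                if purpose not in ALLOWED_PURPOSES:
                    raise PermissionError(f"purpose {purpose!r} exceeds the substantive limits")
                # Strip a direct identifier (illustrative, not full de-identification).
                deidentified = [{k: v for k, v in r.items() if k != "patient_id"}
                                for r in records]
                self.log.record(verifier, purpose, len(deidentified))
                return deidentified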

    Information Accountability Framework for a Trusted Health Care System

    Trusted health care outcomes are patient-centric. Requirements to ensure both the quality and the sharing of patients' health records are key to better clinical decision making. In the context of maintaining quality health care, the sharing of data and information between professionals and patients is paramount. This information sharing is challenging and costly if patients' trust and institutional accountability are not established. This paper proposes the establishment of an Information Accountability Framework (IAF) as one such approach. The concepts behind the IAF requirements are transparent responsibilities, relevance of the information being used, and the establishment and evidence of accountability, all of which lead to the desired outcome of a Trusted Health Care System. Once the IAF is in place, the trust component between the public and professionals can be constructed. Preservation of the confidentiality and integrity of patients' information will lead to trusted health care outcomes.
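
    One concrete, purely illustrative reading of "establishment and evidence of accountability" is a tamper-evident access log: each access to a patient record is appended together with the reason for access, and each entry hashes its predecessor so later alteration of the log is detectable. A minimal Python sketch, with all names assumed rather than taken from the paper:

        import hashlib
        import json
        import time

        class AccessLog:
            """Append-only, hash-chained log of accesses to patient records."""

            GENESIS = "0" * 64

            def __init__(self) -> None:
                self.entries = []
                self._last_hash = self.GENESIS

            @staticmethod
            def _digest(entry: dict) -> str:
                return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

            def record_access(self, who: str, record_id: str, reason: str) -> None:
                # "Relevance of the information being used": the reason travels with the entry.
                entry = {"who": who, "record_id": record_id, "reason": reason,
                         "time": time.time(), "prev": self._last_hash}
                self._last_hash = self._digest(entry)
                self.entries.append(entry)

            def verify(self) -> bool:
                """True iff no entry has been altered or removed from the middle."""
                prev = self.GENESIS
                for entry in self.entries:
                    if entry["prev"] != prev:
                        return False
                    prev = self._digest(entry)
                return True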

    Bridging the Data Divide: Understanding State Agency and University Research Partnerships within SLDS

    This report examines state agency-university researcher partnerships through an analysis of the partnerships that exist within State Longitudinal Data Systems (SLDS). Building such partnerships is an important value of SLDS. To examine them, our analysis is guided by a set of questions based on 71 interviews conducted with the individuals most directly involved with SLDS efforts in Virginia, Maryland, Texas, and Washington. The findings from this analysis suggest that each state's SLDS organization and governance structure includes university partners in differing ways. In general, stronger partnership efforts are driven by legislative action or executive-level leadership. Regardless of structure, the operation of these partnerships is shaped by the agency's previous experience and cultural norms surrounding the value and inclusion of university researchers.

    Roadmap for Next-Generation Accountability Systems

    Offers a framework for designing and implementing state accountability systems that enable consistent, aligned goals to ensure college- and career-readiness; valid measurement, support, and interventions; transparent reporting; and continuous improvement.

    CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need to share information on threats and to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report assesses the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group (HLEG) on AI, presented on April 8, 2019. In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they recognise that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear, easy measurement and implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).
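
    The criteria amount to an evaluation grid over the 131 items, which can be rendered as a simple data structure with one flag per criterion. In the Python sketch below, the field names paraphrase the report's criteria, and the likely_obstacle() heuristic is an assumption for illustration, not the Task Force's actual method:

        from dataclasses import dataclass

        @dataclass
        class ItemEvaluation:
            item_text: str
            refers_to_law: bool                     # e.g. GDPR, EU Charter of Fundamental Rights
            refers_to_moral_principle: bool         # principles rather than binding law
            treats_ai_attacks_as_distinct: bool     # AI attacks vs. traditional cyberattacks
            compatible_with_risk_levels: bool
            measurable_by_developers_and_smes: bool

            def likely_obstacle(self) -> bool:
                # Hypothetical proxy for "likely to create obstacles for the industry":
                # an item grounded neither in law nor in anything easily measurable.
                return not (self.refers_to_law or self.measurable_by_developers_and_smes)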

    Police Body Worn Cameras and Privacy: Retaining Benefits While Reducing Public Concerns

    Recent high-profile incidents of police misconduct have led to calls for increased police accountability. One proposed reform is to equip police officers with body-worn cameras, which provide more reliable evidence than eyewitness accounts. However, such cameras may pose privacy concerns for individuals who are recorded, as the footage may fall under open records statutes that would require it to be released upon request. Furthermore, storage of video data is costly, and redaction of video for release is time-consuming. While exempting all body camera video from release would address the privacy issues, it would also prevent the public from using body camera footage to uncover misconduct. Agencies and lawmakers can address privacy problems successfully by using data management techniques to identify and preserve critical video evidence, while allowing non-critical video to be deleted under data-retention policies. Furthermore, software redaction may be used to produce releasable video that does not threaten the privacy of recorded individuals.
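
    The data-management approach described here reduces to two per-clip decisions: preserve or delete, and release only redacted copies. A minimal Python sketch, where the field names and the 90-day retention period are assumptions rather than figures from the article:

        from dataclasses import dataclass
        from datetime import datetime, timedelta

        # Hypothetical default retention period for routine (non-critical) footage.
        RETENTION_PERIOD = timedelta(days=90)

        @dataclass
        class Footage:
            recorded_at: datetime
            flagged_critical: bool          # e.g. tied to a use-of-force report or complaint
            redacted_copy_ready: bool = False

        def should_delete(clip: Footage, now: datetime) -> bool:
            """Routine footage ages out under the retention policy; critical evidence never does."""
            if clip.flagged_critical:
                return False
            return now - clip.recorded_at > RETENTION_PERIOD

        def releasable(clip: Footage) -> bool:
            """Only software-redacted copies go out under open-records requests."""
            return clip.redacted_copy_ready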