
    Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier

    As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of 'grey data' about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA and FERPA and outside protections for personally identifiable information (PII). Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This paper explores the competing values inherent in data stewardship and makes recommendations for practice, drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk.

    Governing Networks and Rule-Making in Cyberspace

    The global network environment defies traditional regulatory theories and policymaking practices. At present, policymakers and private sector organizations are searching for appropriate regulatory strategies to encourage and channel the global information infrastructure (“GII”). Most attempts to define new rules for the development of the GII rely on disintegrating concepts of territory and sector, while ignoring the new network and technological borders that transcend national boundaries. The GII creates new models and sources for rules. Policy leadership requires a fresh approach to the governance of global networks. Instead of foundering on old concepts, the GII requires a new paradigm for governance that recognizes the complexity of networks, builds constructive relationships among the various participants (including governments, systems operators, information providers, and citizens), and promotes incentives for the attainment of various public policy objectives in the private sector

    Big data for monitoring educational systems

    This report considers “how advances in big data are likely to transform the context and methodology of monitoring educational systems within a long-term perspective (10-30 years) and impact the evidence based policy development in the sector”, where big data are understood as “large amounts of different types of data produced with high velocity from a high number of various types of sources.” Five independent experts were commissioned by Ecorys to respond to the themes of students' privacy, educational equity and efficiency, student tracking, assessment, and skills. The experts were asked to consider the “macro perspective on governance on educational systems at all levels from primary, secondary education and tertiary – the latter covering all aspects of tertiary from further, to higher, and to VET”, prioritising the primary and secondary levels of education.

    My Private Cloud Overview: A Trust, Privacy and Security Infrastructure for the Cloud

    Based on the assumption that cloud providers can be trusted (to a certain extent), we define a trust, security, and privacy-preserving infrastructure that relies on trusted cloud providers to operate properly. Working in tandem with legal agreements, our open source software supports trust and reputation management, sticky policies with fine-grained access controls, privacy-preserving delegation of authority, federated identity management, different levels of assurance, and configurable audit trails. Armed with these tools, cloud service providers are able to offer a reliable privacy-preserving infrastructure-as-a-service to their clients.
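    To make the "sticky policy" idea concrete, the following is a minimal, hypothetical sketch in Python, not the My Private Cloud API: a policy object travels with the data it protects, every access request is checked against it, and each decision is recorded in an audit trail. The names StickyPolicy and ProtectedData, and the role/purpose attributes, are illustrative assumptions only.

```python
# Illustrative sketch (hypothetical; not the My Private Cloud software):
# a "sticky policy" that accompanies the data it protects, enforces
# fine-grained access control, and keeps an audit trail of decisions.
from dataclasses import dataclass, field

@dataclass
class StickyPolicy:
    allowed_roles: frozenset        # roles permitted to access the data
    allowed_purposes: frozenset     # purposes for which access is permitted
    audit_trail: list = field(default_factory=list)

    def permits(self, role: str, purpose: str) -> bool:
        decision = role in self.allowed_roles and purpose in self.allowed_purposes
        self.audit_trail.append((role, purpose, "granted" if decision else "denied"))
        return decision

@dataclass
class ProtectedData:
    payload: str
    policy: StickyPolicy            # the policy is "sticky": it stays attached to the data

    def read(self, role: str, purpose: str) -> str:
        if not self.policy.permits(role, purpose):
            raise PermissionError("access denied by sticky policy")
        return self.payload

if __name__ == "__main__":
    record = ProtectedData(
        payload="student-record-42",
        policy=StickyPolicy(frozenset({"registrar"}), frozenset({"enrolment"})),
    )
    print(record.read("registrar", "enrolment"))   # granted
    print(record.policy.audit_trail)               # [('registrar', 'enrolment', 'granted')]
```

    In the infrastructure described above, such policies would be evaluated by the trusted cloud provider and backed by legal agreements, rather than enforced purely in client code as in this sketch.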

    Configuring the Networked Citizen

    Among legal scholars of technology, it has become commonplace to acknowledge that the design of networked information technologies has regulatory effects. For the most part, that discussion has been structured by the taxonomy developed by Lawrence Lessig, which classifies code as one of four principal regulatory modalities, alongside law, markets, and norms. As a result of that framing, questions about the applicability of constitutional protections to technical decisions have taken center stage in legal and policy debates. Some scholars have pondered whether digital architectures unacceptably constrain fundamental liberties, and what public design obligations might follow from such a conclusion. Others have argued that code belongs firmly on the private side of the public/private divide because it originates in the innovative activity of private actors. In a forthcoming book, the author argues that the project of situating code within one or another part of the familiar constitutional landscape too often distracts legal scholars from more important questions about the quality of the regulation that networked digital architectures produce. The gradual, inexorable embedding of networked information technologies has the potential to alter, in largely invisible ways, the interrelated processes of subject formation and culture formation. Within legal scholarship, the prevailing conceptions of subjectivity tend to be highly individualistic, oriented around the activities of speech and voluntary affiliation. Subjectivity also tends to be understood as definitionally independent of culture. Yet subjectivity is importantly collective, formed by the substrate within which individuality emerges. People form their conceptions of the good in part by reading, listening, and watching—by engaging with the products of a common culture—and by interacting with one another. Those activities are socially and culturally mediated, shaped by the preexisting communities into which individuals are born and within which they develop. They are also technically mediated, shaped by the artifacts that individuals encounter in common use. The social and cultural patterns that mediate the activities of self-constitution are being reconfigured by the pervasive adoption of technical protocols and services that manage the activities of content delivery, search, and social interaction. In developed countries, a broad cross-section of the population routinely uses networked information technologies and communications devices in hundreds of mundane, unremarkable ways. We search for information, communicate with each other, and gain access to networked resources and services. For the most part, as long as our devices and technologies work as expected, we give little thought to how they work; those questions are understood to be technical questions. Such questions are better characterized as sociotechnical. As networked digital architectures increasingly mediate the ordinary processes of everyday life, they catalyze gradual yet fundamental social and cultural change. This chapter—originally published in Imagining New Legalities: Privacy and Its Possibilities in the 21st Century, edited by Austin Sarat, Lawrence Douglas, and Martha Merrill Umphrey (2012)—considers two interrelated questions that flow from understanding sociotechnical change as (re)configuring networked subjects. First, it revisits the way that legal and policy debates locate networked information technologies with respect to the public/private divide. 
The design of networked information technologies and communications devices is conventionally treated as a private matter; indeed, that designation has been the principal stumbling block encountered by constitutional theorists of technology. The classification of code as presumptively private has effects that reach beyond debates about the scope of constitutional guarantees, shaping views about the extent to which regulation of technical design decisions is normatively desirable. This chapter reexamines that discursive process, using lenses supplied by literatures on third-party liability and governance. Second, this chapter considers the relationship between sociotechnical change and understandings of citizenship. The ways that people think, form beliefs, and interact with one another are centrally relevant to the sorts of citizens that they become. The gradual embedding of networked information technologies into the practice of everyday life therefore has important implications for both the meaning and the practice of citizenship in the emerging networked information society. If design decisions are neither merely technical nor presumptively private, then they should be subject to more careful scrutiny with regard to the kind of citizen they produce. In particular, policy-makers cannot avoid engaging with the particular values that are encoded

    CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of such activities, this report aims to assess the Ethics Guidelines for Trustworthy AI presented by the High-Level Expert Group on AI (HLEG) on April 8, 2019. In particular, this report analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and the private sector in operationalising Trustworthy AI. The list is composed of 131 items meant to guide AI designers and developers throughout the process of design, development, and deployment of AI, although not intended as guidance to ensure compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. This evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear/easy measurement, implementation by AI developers and SMEs; and overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall in that more than 450 stakeholders have signed in and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).

    Technology, governance, and a sustainability model for small and medium-sized towns in Europe

    This research focuses on new and cutting-edge technologies that are causing deep changes in societies, acting as game changers, and having a significant impact on global markets in small and medium-sized towns in Europe (SMSTEs). In this context, an analysis was carried out to identify the main dimensions of a model for promoting innovation in SMSTEs. The study combines a literature review of the main dimensions boosting innovation in SMSTEs with a survey directed at experts on the issue. The findings from the literature review indicate that the technology, governance, and sustainability dimensions are enablers of SMSTEs’ innovation, and based on the results of the survey, a model was implemented to boost innovation, which is the main contribution of this research.

    Internet Governance: the State of Play

    The Global Forum on Internet Governance held by the UN ICT Task Force in New York on 25-26 March concluded that Internet governance issues were many and complex. The Secretary-General's Working Group on Internet Governance will have to map out and navigate this complex terrain as it makes recommendations to the World Summit on the Information Society in 2005. To assist in this process, the Forum recommended, in the words of the Deputy Secretary-General of the United Nations at the closing session, that a matrix be developed "of all issues of Internet governance addressed by multilateral institutions, including gaps and concerns, to assist the Secretary-General in moving forward the agenda on these issues." This paper takes up the Deputy Secretary-General's challenge. It is an analysis of the state of play in Internet governance in different forums, with a view to showing (1) what issues are being addressed, (2) by whom, (3) what types of consideration these issues receive, and (4) what issues are not adequately addressed.