    Cross-disciplinary lessons for the future internet

    Many societal concerns emerge as a consequence of Future Internet (FI) research and development. A survey identified six key social and economic issues deemed most relevant to European FI projects. During a SESERV-organized workshop, experts in Future Internet technology engaged with social scientists (including economists), policy experts, and other stakeholders to analyze the socio-economic barriers and challenges that affect the Future Internet and, conversely, how the Future Internet will affect society, government, and business. The workshop aimed to bridge the gap between those who study and those who build the Internet. This chapter describes the socio-economic barriers related to the Future Internet as seen by the community itself and suggests their resolution, as well as investigating how relevant the EU Digital Agenda is to Future Internet technologists.

    Online Personal Data Processing and EU Data Protection Reform. CEPS Task Force Report, April 2013

    This report sheds light on the fundamental questions and underlying tensions between current policy objectives, compliance strategies, and global trends in online personal data processing, assessing the existing and future framework in terms of effective regulation and public policy. Based on the discussions among the members of the CEPS Digital Forum and independent research carried out by the rapporteurs, policy conclusions are derived with the aim of making EU data protection policy more fit for purpose in today's online technological context. This report constructively engages with the EU data protection framework, but does not provide a textual analysis of the EU data protection reform proposal as such.

    Algorithms that Remember: Model Inversion Attacks and Data Protection Law

    Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature on 'model inversion' and 'membership inference' attacks, which indicates that the process of turning training data into machine-learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.

    Comment: 15 pages, 1 figure