
    Reinventing Capitalism in the Age of Big Data

    Book review: Reinventing Capitalism in the Age of Big Data, by Viktor Mayer-Schönberger and Thomas Ramge

    The future of consumer data protection in the E.U. Rethinking the “notice and consent” paradigm in the new era of predictive analytics

    The new E.U. proposal for a general data protection regulation was introduced to respond to the challenges of the evolving digital environment. In some respects, these expectations may be disappointed, since the proposal still rests on the traditional pillars of the last generation of data protection laws. In the field of consumer data protection, these pillars are the purpose specification principle, the use limitation principle and the “notice and consent” model. Nevertheless, the complexity of data processing, the power of modern analytics and the “transformative” use of personal information drastically limit consumers’ awareness and their ability to evaluate the consequences of their choices and to give free and informed consent. To address this, it is necessary to clarify the rationale of the “notice and consent” paradigm, looking back to its origins and assessing its effectiveness in a world of predictive analytics. From this perspective, the paper considers the historical evolution of data protection and how regulations have addressed the fundamental issues arising from the technological and socio-economic context. On the basis of this analysis, the author suggests a revision of the “notice and consent” model focused on opt-in and proposes a different approach when, as in Big Data collection, the data subject cannot be fully aware of the tools of analysis and their potential output. For this reason, the author advocates a subset of rules for Big Data analytics, based on a multiple impact assessment of data processing, on a deeper level of control by data protection authorities, and on a different, opt-out model.

    Si rafforza la tutela dei dati personali: data breach notification e limiti alla profilazione mediante cookies

    The article analyses the transposition into the Italian legal system of Directives 2009/136/EC and 2009/140/EC on the use of cookies and on security against unlawful access to databases, highlighting the reasons behind the strengthening of personal data protection and underlining the Italian legislator’s delay on the matter.

    AI and Big Data: a blueprint for a human rights, social and ethical impact assessment

    The use of algorithms in modern data processing techniques, as well as data-intensive technological trends, suggests the adoption of a broader view of the data protection impact assessment. This will force data controllers to go beyond the traditional focus on data quality and security, and consider the impact of data processing on fundamental rights and collective social and ethical values. Building on studies of the collective dimension of data protection, this article sets out to embed this new perspective in an assessment model centred on human rights (Human Rights, Ethical and Social Impact Assessment, HRESIA). This self-assessment model intends to overcome the limitations of the existing assessment models, which are either too closely focused on data processing or have an extent and granularity that make them too complicated for evaluating the consequences of a given use of data. In terms of architecture, the HRESIA has two main elements: a self-assessment questionnaire and an ad hoc expert committee. As a blueprint, this contribution focuses mainly on the nature of the proposed model, its architecture and its challenges; a more detailed description of the model and the content of the questionnaire will be discussed in a future publication drawing on the ongoing research.

    Toward a New Approach to Data Protection in the Big Data Era

    The complexity of data processing, the power of modern analytics, and the transformative use of personal information drastically limit consumers’ awareness of how their data is collected and used, and preclude their ability to give free and informed consent. These elements lead us to reconsider the role of users’ self-determination in data processing and the “notice and consent” model.

    Hacia una regulación de los datos masivos basada en valores sociales y éticos. Las directrices del Consejo de Europa

    This article discusses the main provisions of the Guidelines on big data and data protection recently adopted by the Consultative Committee of the Council of Europe. After an analysis of the changes in data processing brought about by the use of predictive analytics, the author outlines the impact assessment model suggested by the Guidelines to tackle the potential risks of big data applications. This risk-assessment procedure is a key element in addressing the challenges of Big Data, since it goes beyond the traditional data protection impact assessment to encompass the social and ethical consequences of the use of data, which are the most important and critical aspects of the future algorithmic society.

    Defining a new paradigm for data protection in the world of Big Data analytics

    All the ongoing proposals for reform of data protection regulations, in both the U.S. and Europe, remain focused on the purpose limitation principle and the “notice and choice” model. This approach is inadequate in the current Big Data context. The paper suggests a revision of the existing model and proposes a subset of rules for Big Data processing, based on the opt-out model and on a deeper level of control by data protection authorities.

    Beyond Data

    This open access book focuses on the impact of Artificial Intelligence (AI) on individuals and society from a legal perspective, providing a comprehensive risk-based methodological framework to address it. Building on the limitations of data protection in dealing with the challenges of AI, the author proposes an integrated approach to risk assessment that focuses on human rights and encompasses contextual social and ethical values. The core of the analysis concerns the assessment methodology and the role of experts in steering the design of AI products and services, by businesses and public bodies, in the direction of human rights and societal values. Taking into account the ongoing debate on AI regulation, the proposed assessment model also bridges the gap between risk-based provisions and their real-world implementation. The book’s central focus on human rights and societal values in AI, and the solutions it proposes, will make it of interest to legal scholars, AI developers and providers, policy makers and regulators. Alessandro Mantelero is Associate Professor of Private Law and Law & Technology in the Department of Management and Production Engineering at the Politecnico di Torino in Turin, Italy.

    Regulating AI within the Human Rights Framework: A Roadmapping Methodology

    The ongoing European debate on Artificial Intelligence (AI) is increasingly polarised between the initial ethics-based approach and the growing focus on human rights. The prevalence of one or the other of these two approaches is not neutral and entails consequences in terms of regulatory outcomes and underlying interests. The basic assumption of this study is the need to consider the pivotal role of ethics as a complementary element of a regulatory strategy, which must have human rights principles at its core. Based on this premise, this contribution focuses on the role that the international human rights framework can play in defining common binding principles for AI regulation. The first challenge in considering human rights as a frame of reference in AI regulation is to define the exact nature of the subject matter. Since a wide range of AI-based services and products have emerged only as a recent development of the digital economy, many of the existing international legal instruments are not tailored to the specific issues raised by AI. Moreover, certain binding principles and safeguards were shaped in a different technological era and social context. Against this background, we need to examine the existing binding international human rights instruments and their non-binding implementations to extract the key principles that should underpin AI development and govern its groundbreaking applications. However, the paradigm shift brought about by the latest wave of AI development means that the principles embodied in international legally binding instruments cannot be applied in their current form, and this contribution sets out to contextualise these guiding principles for the AI era. Given the broad application of AI solutions in a variety of fields, we might look at the entire corpus of available international binding instruments. However, taking a methodological approach, this analysis focuses on two key areas – data protection and healthcare – to provide an initial assessment of the regulatory issues and a possible roadmap to addressing them.