    Designing the Health-related Internet of Things: Ethical Principles and Guidelines

    The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be rapid. However, it also raises a host of ethical concerns that must be addressed. The inherent sensitivity of the health-related data being generated and the latent risks of Internet-enabled devices pose serious challenges. Users, already in a vulnerable position as patients, face a seemingly impossible task in retaining control over their data, given the scale, scope, and complexity of the systems that create, aggregate, and analyse personal health data. In response, the H-IoT must be designed to be technologically robust and scientifically reliable, while also remaining ethically responsible, trustworthy, and respectful of user rights and interests. To assist developers of the H-IoT, this paper describes nine principles and nine guidelines for the ethical design of H-IoT devices and data protocols.

    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services—as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating which technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.
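
    To make the first two of these principles concrete, here is a minimal Python sketch of data minimization coupled with purpose specification; the event fields and the declared purpose are illustrative assumptions, not drawn from the Essay itself.

        # Hypothetical illustration: the robot strips every field that is not
        # needed for a declared purpose before any data leaves the device.
        ALLOWED_FIELDS = {"navigation": {"timestamp", "room", "obstacle_distance"}}

        def minimize(event: dict, purpose: str) -> dict:
            """Keep only the fields declared for this purpose; drop the rest."""
            allowed = ALLOWED_FIELDS.get(purpose)
            if allowed is None:
                raise ValueError(f"undeclared purpose: {purpose}")
            return {k: v for k, v in event.items() if k in allowed}

        raw = {"timestamp": 1700000000, "room": "kitchen",
               "obstacle_distance": 0.4, "audio_clip": b"...", "face_id": 17}
        print(minimize(raw, "navigation"))
        # audio_clip and face_id are dropped; they never leave the robot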

    An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems

    Different approaches have been adopted to address the challenges of Artificial Intelligence (AI), some centred on personal data and others on ethics, respectively narrowing and broadening the scope of AI regulation. This contribution aims to demonstrate that a third way is possible, starting from the acknowledgement of the role that human rights can play in regulating the impact of data-intensive systems. The focus on human rights is neither a paradigm shift nor a mere theoretical exercise. Through the analysis of more than 700 decisions and documents of the data protection authorities of six countries, we show that human rights already underpin decisions in the field of data use. Based on this empirical evidence, this work presents a methodology and a model for a Human Rights Impact Assessment (HRIA). The methodology and related assessment model are focused on AI applications, whose nature and scale require a proper contextualisation of HRIA methodology. Moreover, the proposed model provides a more measurable approach to risk assessment, consistent with regulatory proposals centred on risk thresholds. The proposed methodology is tested in concrete case studies to demonstrate its feasibility and effectiveness. The overall goal is to respond to the growing interest in HRIA, moving from a mere theoretical debate to a concrete and context-specific implementation in the field of data-intensive applications based on AI.
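
    As a rough illustration of the kind of measurable, threshold-based scoring such a model implies, the following Python sketch combines ordinal likelihood and severity scales into a risk band; the labels, weights, and thresholds are assumptions made for illustration, not the paper's actual assessment model.

        # Hypothetical HRIA-style scoring: risk = likelihood x severity,
        # then mapped onto regulatory-style thresholds.
        LIKELIHOOD = {"remote": 1, "possible": 2, "likely": 3, "very likely": 4}
        SEVERITY = {"negligible": 1, "limited": 2, "serious": 3, "critical": 4}

        def risk_score(likelihood: str, severity: str) -> int:
            """Combine the two ordinal scales into a single score (1-16)."""
            return LIKELIHOOD[likelihood] * SEVERITY[severity]

        def risk_band(score: int) -> str:
            """Map a score onto the thresholds a risk-based regime might set."""
            if score <= 2:
                return "low"
            if score <= 6:
                return "medium"
            return "high: mitigation required before deployment"

        # Example: a likely but limited interference with a right.
        print(risk_band(risk_score("likely", "limited")))  # -> medium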

    A Literature Survey on Smart Toy-related Children's Privacy Risks

    Smart toys have become popular as technological solutions offer a better experience for children. However, the technology used increases the risks to children's privacy, which does not seem to have become a real concern for toy makers. Most researchers in this domain are vague in defining their motivations due to the lack of an expert survey to support them. We conducted a literature survey to find papers on smart toy-related children's privacy risks and mitigation solutions. We analyzed 26 papers using a taxonomy of privacy principles and preserving techniques adapted from the IoT context. Our analysis shows that some types of risks received more attention, especially (a) confidentiality, (b) use, retention and disclosure limitation, (c) authorization, (d) consent and choice, (e) openness, transparency and notice, and (f) authentication. As for solutions, few were effectively presented; the vast majority related to data restriction: (a) access control and (b) cryptographic techniques.
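
    As a minimal sketch of the cryptographic class of mitigations the survey identifies, the snippet below uses the third-party Python cryptography package to encrypt toy telemetry on the device so that only a key-holding companion app can read it; the payload format and the key-provisioning story are illustrative assumptions, not taken from the surveyed papers.

        # Hypothetical mitigation: symmetric authenticated encryption of
        # telemetry with Fernet from the `cryptography` package.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()        # provisioned to the parent's app
        cipher = Fernet(key)

        telemetry = b'{"toy_id": "demo-1", "battery": 87}'
        token = cipher.encrypt(telemetry)  # only this ciphertext goes online

        assert cipher.decrypt(token) == telemetry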

    Mandated Ethical Hacking—a Repackaged Solution

    Hacking to prove a point or to expose technological vulnerabilities has been around since the 1960s, but it has been labeled and packaged differently as “white hat hacking” or “ethical hacking.” This article suggests that smart toy manufacturers, such as Mattel and VTech, should be subject to required vulnerability testing that utilizes ethical hacking under the Consumer Product Safety Improvement Act (“CPSIA”). More specifically, this article proposes to amend the Toy Safety Standard, ASTM F963-11, to include smart toys connected to the internet. The CPSIA and the Consumer Product Safety Commission (“CPSC”) impose safety testing on all toys intended for use by children twelve years of age or younger. This article explores the proposed safety testing in the context of the smart toys My Friend Cayla and Hello Barbie. Cognizant of how fast-paced the technology industry is, this article does not suggest a specific time period; rather, it suggests what must be done prior to the release of a product.

    The Internet of Things Will Thrive by 2025

    This report is the latest in a sustained effort throughout 2014 by the Pew Research Center Internet Project to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. It analyses opinions about the likely expansion of the Internet of Things (sometimes called the Cloud of Things), a catchall phrase for the array of devices, appliances, vehicles, wearable material, and sensor-laden parts of the environment that connect to each other and feed data back and forth. It covers the more than 1,600 responses offered specifically to our question about where the Internet of Things would stand by the year 2025. The report is the next in a series of eight Pew Research and Elon University analyses to be issued this year in which experts share their expectations about the future of such things as privacy, cybersecurity, and net neutrality. It includes some of the best and most provocative predictions survey respondents made when asked to share their views about the evolution of embedded and wearable computing and the Internet of Things.

    Beyond Data

    This open access book focuses on the impact of Artificial Intelligence (AI) on individuals and society from a legal perspective, providing a comprehensive risk-based methodological framework to address it. Building on the limitations of data protection in dealing with the challenges of AI, the author proposes an integrated approach to risk assessment that focuses on human rights and encompasses contextual social and ethical values. The core of the analysis concerns the assessment methodology and the role of experts in steering the design of AI products and services by business and public bodies in the direction of human rights and societal values. Taking into account the ongoing debate on AI regulation, the proposed assessment model also bridges the gap between risk-based provisions and their real-world implementation. The book's central focus on human rights and societal values in AI, together with the proposed solutions, will make it of interest to legal scholars, AI developers and providers, policy makers, and regulators. Alessandro Mantelero is Associate Professor of Private Law and Law & Technology in the Department of Management and Production Engineering at the Politecnico di Torino in Turin, Italy.