
    An Experimental Evaluation of Smart Toys’ Security and Privacy Practices

    Smart toys have captured an increasing share of the toy market and are growing ubiquitous in households with children. These toys can be considered a subset of Internet of Things (IoT) devices, often containing sensors and artificial intelligence capabilities. They may collect personal information, and frequently have Internet connectivity, either directly or indirectly through companion apps. Recent studies have found security flaws in many smart toys that have led to serious privacy leaks or allowed tracking of a child’s physical location. Some well-publicized discoveries of this nature have led governments around the world to ban some of these toys. To complement recent efforts in analyzing and quantifying the security and privacy issues of smart toys, we set out to create two thorough analysis frameworks specifically crafted for smart toys. The first framework is designed to analyze the legally binding privacy policies and terms-of-use documentation of smart toys. It is based on a set of privacy-sensitive criteria that we carefully define to systematically evaluate selected privacy aspects of smart toys. We augment our work with a static analysis of the companion Android apps, which are, in most cases, essential for the intended functioning of the toys. We use our framework to evaluate a representative set of 11 smart toys, along with their 11 companion apps. Our analysis highlights several instances of unnecessary collection of privacy-sensitive information, the use of over-privileged apps, and incomplete or missing information about data storage practices and legal compliance. The proposed framework is a step towards enabling a comparison of smart toys from a privacy perspective, which can be useful to parents, regulatory bodies, and law-makers. The second framework is used to investigate, through experimental analysis, the security and privacy practices of these IoT devices.
In particular, we inspect the real-world practices of smart toys to determine the personal information they collect and the security measures used to protect it. We also investigate potential security and privacy flaws in smart toys that can lead to the leakage of private information, or allow an adversary to control the toy to lure, harm, or distress a child. Smart toys pose risks unique to this category of devices, and our work is intended to define these risks and assess a subset of toys against them. We perform a thorough experimental analysis of five smart toys and their companion apps. Our systematic analysis has uncovered that several of these toys may expose children to multiple threats through physical, nearby, or remote access to the toy. The presented frameworks unite and complement several existing ad hoc analyses, and facilitate the comprehensive evaluation of other smart toys.
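The static over-privilege check on companion apps described above can be sketched, for illustration only, as a comparison between the permissions an app's manifest declares and a minimal set its features plausibly require. The permission baseline and the example manifest below are invented, not taken from the paper:

```python
# Hypothetical sketch of a static over-privilege check for a toy's companion
# app: parse the <uses-permission> entries of an already-extracted
# AndroidManifest.xml and flag permissions the toy's features do not need.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def declared_permissions(manifest_xml: str) -> set[str]:
    """Return the set of permission names declared in the manifest."""
    root = ET.fromstring(manifest_xml)
    return {
        elem.attrib[ANDROID_NS + "name"]
        for elem in root.iter("uses-permission")
    }

def over_privileged(declared: set[str], needed: set[str]) -> set[str]:
    """Permissions declared by the app but not required by the toy."""
    return declared - needed

if __name__ == "__main__":
    manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
      <uses-permission android:name="android.permission.BLUETOOTH"/>
      <uses-permission android:name="android.permission.RECORD_AUDIO"/>
      <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
    </manifest>"""
    # Assume a voice-enabled Bluetooth toy: audio and Bluetooth are expected,
    # fine-grained location is not.
    needed = {"android.permission.BLUETOOTH", "android.permission.RECORD_AUDIO"}
    extra = over_privileged(declared_permissions(manifest), needed)
    print(sorted(extra))  # -> ['android.permission.ACCESS_FINE_LOCATION']
```

A real analysis would also need to decompile the APK and account for permissions exercised only by bundled third-party libraries.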

    Perceived Innovativeness and Privacy Risk of Smart Toys in Brazil and Argentina

    A smart toy, such as Hello Barbie, is a device consisting of a physical toy component that connects to a computing system with online services through networking to enhance the functionality of a traditional toy. While smart toys offer new educational and entertainment value, experts in western countries such as the U.S. and Germany have warned consumers about the data security and privacy issues of these toys. In this preliminary research study, we examined Brazilian and Argentinian consumers’ perceived innovativeness, risks, and benefits of smart toys, and their purchase intention toward such toys. Results indicate that Brazilian consumers have a better perception and evaluation of the toy, and thus higher purchase intention, than Argentinian consumers do. This difference may be explained by cultural differences between the two countries, such as relatively low vs. high uncertainty avoidance.

    A Privacy-Preserving Context Ontology (PPCO) for Smart Connected Toys

    © 2019 IEEE. Ubiquitous mobile technologies like Smart Connected Toys (SCTs) face the unique challenge of clearly defining context data elements due to unstructured and persistently changing environments. SCTs interact with their context to achieve meaningful functionality while maintaining context data privacy. As SCTs become increasingly pervasive, the toys and their built-in features must be aware of and adapt to their changing contexts while providing a sense of privacy and security for the contextual data processed to support their use. This paper presents a context profile through an SCT Privacy-Preserving Context Ontology (PPCO) and examines the benefits of designing a context data model for SCT privacy goals. Our proposed context data model is an abstract model that organizes elements of data and standardizes how they relate to one another. It organizes the properties of related entries in an SCT based on eXtensible Markup Language (XML) to depict how the SCT’s contextual information, related to the SCT’s environment, is assembled and maintained. Ultimately, the PPCO provides a structured description of the SCT context profile necessary to identify the privacy controls needed to support SCT privacy goals.
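As a rough illustration of the kind of XML-based context profile the abstract describes, the sketch below builds a toy profile in which each context element carries a sensitivity attribute that privacy controls could key on. All element and attribute names here are invented; they are not taken from the PPCO itself:

```python
# Illustrative sketch only: an invented XML context profile in the spirit of
# the PPCO, tagging each SCT context element with a privacy sensitivity level.
import xml.etree.ElementTree as ET

def build_context_profile(entries: list[tuple[str, str, str]]) -> ET.Element:
    """entries: (element name, value, sensitivity) triples for one SCT."""
    root = ET.Element("sctContextProfile")
    for name, value, sensitivity in entries:
        elem = ET.SubElement(root, name, sensitivity=sensitivity)
        elem.text = value
    return root

def sensitive_elements(profile: ET.Element) -> list[str]:
    """Names of context elements that privacy controls must cover."""
    return [e.tag for e in profile if e.get("sensitivity") == "high"]

if __name__ == "__main__":
    profile = build_context_profile([
        ("location", "living-room", "high"),
        ("ambientNoise", "quiet", "low"),
        ("userUtterance", "hello", "high"),
    ])
    print(ET.tostring(profile, encoding="unicode"))
    print(sensitive_elements(profile))  # -> ['location', 'userUtterance']
```

The point of such a model is that once context data is structured and annotated, the privacy controls a toy needs can be derived mechanically rather than decided ad hoc per feature.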

    Evaluating the Contextual Integrity of Privacy Regulation: Parents' IoT Toy Privacy Norms Versus COPPA

    Increased concern about data privacy has prompted new and updated data protection regulations worldwide. However, there has been no rigorous way to test whether the practices mandated by these regulations actually align with the privacy norms of affected populations. Here, we demonstrate that surveys based on the theory of contextual integrity provide a quantifiable and scalable method for measuring the conformity of specific regulatory provisions to privacy norms. We apply this method to the U.S. Children's Online Privacy Protection Act (COPPA), surveying 195 parents and providing the first data showing that COPPA's mandates generally align with parents' privacy expectations for Internet-connected "smart" children's toys. Nevertheless, variations in the acceptability of data collection across specific smart toys, information types, parent ages, and other conditions emphasize the importance of detailed contextual factors to privacy norms, which may not be adequately captured by COPPA.
    Comment: 18 pages, 1 table, 4 figures, 2 appendices
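The contextual-integrity scoring the abstract alludes to can be sketched roughly as follows: each information flow is rated for acceptability by surveyed parents, and the average rating indicates how well a practice conforms to privacy norms. The flows, ratings, and scale endpoints below are invented for illustration, not the paper's data:

```python
# Hedged sketch of contextual-integrity survey scoring: average parent
# acceptability ratings per information flow, on an assumed -2 (completely
# unacceptable) to +2 (completely acceptable) scale. All ratings are invented.
from statistics import mean

def norm_score(ratings: list[int]) -> float:
    """Average acceptability of one information flow."""
    return mean(ratings)

if __name__ == "__main__":
    flows = {
        # hypothetical flow description -> hypothetical parent ratings
        "toy sends audio to manufacturer with parental consent": [2, 1, 1, 2, 0],
        "toy sends location to advertisers": [-2, -2, -1, -2, -2],
    }
    for flow, ratings in flows.items():
        print(f"{norm_score(ratings):+.1f}  {flow}")
```

Comparing such per-flow scores against what a regulation permits is what makes the method a quantifiable conformity test rather than an opinion poll.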

    A Look into User’s Privacy Perceptions and Data Practices of IoT Devices

    Purpose: With the rapid deployment of Internet of Things (IoT) technologies, it has become essential to address security and privacy issues by maintaining transparency in data practices. Prior research focused on identifying people’s privacy preferences in different contexts of IoT usage, and their mental models of security threats. However, the existing literature offers little understanding of the mismatch between users’ perceptions and the actual data practices of IoT devices. Such mismatches could lead users to unknowingly share their private information, exposing themselves to unanticipated privacy risks. We aim to identify these mismatched privacy perceptions in our work. Methodology: We conducted a lab study with 42 participants, where we compared participants’ perceptions with the data practices stated in the privacy policies of 28 IoT devices from different categories, including health & exercise, entertainment, smart homes, toys & games, and pets. Findings: We identified mismatched privacy perceptions of users in terms of data collection, sharing, protection, and storage period. Our findings revealed mismatches between users’ perceptions and the data practices of IoT devices for various types of information, including personal, contact, financial, health, location, media, connected device, online social media, and IoT device usage information. Value: The findings from this study lead to our recommendations on designing simplified privacy notices that highlight unexpected data practices, which, in turn, would contribute to the secure and privacy-preserving use of IoT devices.
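The core perception-vs-policy comparison in the study above can be sketched as a simple set difference per device: information types a policy states are collected but the participant did not expect, and vice versa. The device, categories, and answers below are invented for illustration:

```python
# Hypothetical sketch of the mismatch analysis: for one device, compare the
# information types a participant believed are collected with those the
# privacy policy actually states. All example data is invented.

def mismatches(perceived: set[str], stated: set[str]) -> dict[str, set[str]]:
    """Split the comparison into unexpected and overestimated collection."""
    return {
        "unexpected": stated - perceived,     # collected but not anticipated
        "overestimated": perceived - stated,  # anticipated but not stated
    }

if __name__ == "__main__":
    perceived = {"location", "device usage"}
    stated = {"location", "device usage", "contact", "media"}
    result = mismatches(perceived, stated)
    print(sorted(result["unexpected"]))     # -> ['contact', 'media']
    print(sorted(result["overestimated"]))  # -> []
```

The "unexpected" set is what a simplified privacy notice would surface first, since those are the practices users are least likely to anticipate.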

    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services—as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.

    Designing the Health-related Internet of Things: Ethical Principles and Guidelines

    The conjunction of wireless computing, ubiquitous Internet access, and the miniaturisation of sensors has opened the door for technological applications that can monitor health and well-being outside of formal healthcare systems. The health-related Internet of Things (H-IoT) increasingly plays a key role in health management by providing real-time tele-monitoring of patients, testing of treatments, actuation of medical devices, and fitness and well-being monitoring. Given its numerous applications and proposed benefits, adoption by medical and social care institutions and consumers may be rapid. However, a host of ethical concerns are also raised that must be addressed. The inherent sensitivity of health-related data being generated and the latent risks of Internet-enabled devices pose serious challenges. Users, already in a vulnerable position as patients, face a seemingly impossible task to retain control over their data due to the scale, scope, and complexity of systems that create, aggregate, and analyse personal health data. In response, the H-IoT must be designed to be technologically robust and scientifically reliable, while also remaining ethically responsible, trustworthy, and respectful of user rights and interests. To assist developers of the H-IoT, this paper describes nine principles and nine guidelines for the ethical design of H-IoT devices and data protocols.