81 research outputs found

    Scale Invariant Privacy Preserving Video via Wavelet Decomposition

    Video surveillance has become ubiquitous in the modern world. Mobile devices, surveillance cameras, and IoT devices can all record video that may violate our privacy. One proposed solution is privacy-preserving video, which removes identifying information from the video as it is produced. Several algorithms for this have been proposed, but all of them suffer from a scale problem: in order to sufficiently anonymize near-camera objects, distant objects become unidentifiable. In this paper, we propose a scale-invariant method based on wavelet decomposition.
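    The intuition behind a wavelet-based approach can be illustrated with a toy one-dimensional Haar decomposition: because the transform separates detail coefficients by scale, identifying detail can be attenuated by the same factor at every level, affecting near and far objects uniformly. Everything below (function names, the attenuation scheme) is an illustrative sketch, not the paper's actual algorithm.

```python
# Toy 1-D Haar wavelet decomposition illustrating scale-uniform
# detail removal. Illustrative sketch only, not the paper's method.

def haar_decompose(signal, levels):
    """Split a signal into a coarse approximation plus per-level details."""
    approx, details = list(signal), []
    for _ in range(levels):
        avg = [(approx[i] + approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        diff = [(approx[i] - approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        details.append(diff)
        approx = avg
    return approx, details

def haar_reconstruct(approx, details):
    """Invert haar_decompose: each pair is (a + d, a - d)."""
    signal = list(approx)
    for diff in reversed(details):
        signal = [v for a, d in zip(signal, diff) for v in (a + d, a - d)]
    return signal

def anonymize(signal, levels, keep=0.0):
    """Attenuate detail coefficients by the same factor at every scale,
    so fine structure is suppressed uniformly regardless of object size."""
    approx, details = haar_decompose(signal, levels)
    details = [[d * keep for d in level] for level in details]
    return haar_reconstruct(approx, details)
```

    With `keep=0.0` the output collapses to the coarse approximation at every scale; intermediate values of `keep` trade anonymity against fidelity in the same proportion at all levels. A real system would apply a 2-D transform to video frames.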

    The Robot Privacy Paradox: Understanding How Privacy Concerns Shape Intentions to Use Social Robots

    Conceptual research on robots and privacy has increased, but we lack empirical evidence about the prevalence, antecedents, and outcomes of different privacy concerns about social robots. To fill this gap, we present a survey testing a variety of antecedents drawn from trust, technology adoption, and robotics scholarship. Respondents are most concerned about data protection on the manufacturer side, followed by social privacy concerns and physical concerns. Using structural equation modeling, we find a privacy paradox, in which the perceived benefits of social robots override privacy concerns.

    On-Demand Collaboration in Programming

    In programming, on-demand assistance occurs when developers seek support for their tasks as needed. Traditionally, this collaboration happens within teams and organizations in which people are familiar with the context of requests and tasks. More recently, this type of collaboration has become ubiquitous outside of teams and organizations, due to the success of paid online crowdsourcing marketplaces (e.g., Upwork) and free online question-answering websites (e.g., Stack Overflow). Thousands of requests are posted on these platforms daily, and many of them are not addressed in a timely manner for a variety of reasons, often because requests lack sufficient context and access to relevant artifacts. As a result, on-demand collaboration often produces suboptimal productivity and unsatisfactory user experiences. This dissertation includes three main parts. First, I explored the challenges developers face when requesting help from or providing assistance to others on demand. I found seven common types of requests (e.g., seeking code examples) that developers make across various projects when an on-demand agent is available. By comparing these findings with existing support systems, I identify eight key system features needed to enable more effective on-demand remote assistance for developers. Second, driven by these findings, I designed and developed two systems: 1) CodeOn, a system that enables more effective task hand-offs (e.g., rich context capturing) between end-user developers and remote helpers than existing synchronous support systems by allowing asynchronous responses to on-demand requests; and 2) CoCapture, a system that enables interface designers to easily create and then accurately describe UI behavior mockups, including changes they want to propose or questions they want to ask about an aspect of the existing UI.
    Third, beyond software development assistance, I also studied intelligent assistance for embedded system development (e.g., Arduino) and revealed six challenges (e.g., communication setup remains tedious) that developers encounter during on-demand collaboration. Through an imaginary study, I propose four design implications to guide future support systems for embedded system development. This thesis envisions a future in which developers in all kinds of domains can effortlessly make context-rich, on-demand requests at any stage of their development processes, and qualified agents (machine or human) can quickly be notified and orchestrate their efforts to promptly respond to those requests.
    PhD dissertation, Information, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/166144/1/yanchenm_1.pd

    Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities

    Robotics and Artificial Intelligence (AI) have been inextricably intertwined since their inception. Today, AI-Robotics systems have become an integral part of our daily lives, from robotic vacuum cleaners to semi-autonomous cars. These systems are built upon three fundamental architectural elements: perception, navigation and planning, and control. However, while the integration of AI-Robotics systems has enhanced the quality of our lives, it has also presented a serious problem: these systems are vulnerable to security attacks. The physical components, algorithms, and data that make up AI-Robotics systems can be exploited by malicious actors, potentially leading to dire consequences. Motivated by the need to address these security concerns, this paper presents a comprehensive survey and taxonomy across three dimensions: attack surfaces, ethical and legal concerns, and Human-Robot Interaction (HRI) security. Our goal is to provide users, developers, and other stakeholders with a holistic understanding of these areas to enhance overall AI-Robotics system security. We begin by surveying potential attack surfaces and providing mitigating defensive strategies. We then delve into ethical issues, such as dependency and psychological impact, as well as legal concerns regarding accountability for these systems. In addition, emerging trends such as HRI are discussed, considering privacy, integrity, safety, trustworthiness, and explainability concerns. Finally, we present our vision for future research directions in this dynamic and promising field.

    Augmented Reality: A Technology and Policy Primer

    The vision for AR dates back at least to the 1960s with the work of Ivan Sutherland. In a way, AR represents a natural evolution of information communication technology. Our phones, cars, and other devices are increasingly reactive to the world around us. But AR also represents a serious departure from the way people have perceived data for most of human history: a Neolithic cave painting or book operates like a laptop insofar as each presents information to the user in a way that is external to her and separate from her present reality. By contrast, AR begins to collapse millennia of distinction between display and environment. Today, a number of companies are investing heavily in AR and beginning to deploy consumer-facing devices and applications. These systems have the potential to deliver enormous value, including to populations with limited physical or other resources. Applications include hands-free instruction and training, language translation, obstacle avoidance, advertising, gaming, museum tours, and much more. AR also presents novel or acute challenges for technologists and policymakers, including privacy, distraction, and discrimination. This whitepaper, which grows out of research conducted across three units of the University of Washington's interdisciplinary Tech Policy Lab, is aimed at identifying some of the major legal and policy issues AR may present as a novel technology, and outlines some conditional recommendations to help address those issues. Our key findings include:
    1. AR exists in a variety of configurations, but in general, AR is a mobile or embedded technology that senses, processes, and outputs data in real time, recognizes and tracks real-world objects, and provides contextual information by supplementing or replacing human senses.
    2. AR systems will raise legal and policy issues in roughly two categories: collection and display. Issues tend to include privacy, free speech, and intellectual property, as well as novel forms of distraction and discrimination.
    3. We recommend that policymakers (broadly defined) engage in diverse stakeholder analysis, threat modeling, and risk assessment processes. We recommend that they pay particular attention to: a) the fact that adversaries succeed when systems fail to anticipate behaviors; and b) the fact that not all stakeholders experience AR the same way.
    4. Architectural and design decisions (such as whether AR systems are open or closed, whether data is ephemeral or stored, where data is processed, and so on) will each have policy consequences that vary by stakeholder.

    Toys That Listen: A Study of Parents, Children, and Internet-Connected Toys

    Hello Barbie, CogniToys Dino, and Amazon Echo are part of a new wave of connected toys and gadgets for the home that listen. Unlike the smartphone, these devices are always on, blending into the background until needed. We conducted interviews with parent-child pairs in which they interacted with Hello Barbie and CogniToys Dino, shedding light on children’s expectations of the toys’ “intelligence” and parents’ privacy concerns and expectations for parental controls. We find that children were often unaware that others might be able to hear what was said to the toy, and that some parents draw connections between the toys and similar tools not intended as toys (e.g., Siri, Alexa) with which their children already interact. Our findings illuminate people’s mental models and experiences with these emerging technologies and will help inform the future designs of interactive, connected toys and gadgets. We conclude with recommendations for designers and policy makers.

    Password-conditioned Anonymization and Deanonymization with Face Identity Transformers

    Cameras are prevalent in our daily lives, and enable many useful systems built upon computer vision technologies, such as smart cameras and home robots for service applications. However, there is also an increasing societal concern as the captured images/videos may contain privacy-sensitive information (e.g., face identity). We propose a novel face identity transformer which enables automated photo-realistic password-based anonymization as well as deanonymization of human faces appearing in visual data. Our face identity transformer is trained to (1) remove face identity information after anonymization, (2) make the recovery of the original face possible when given the correct password, and (3) return a wrong--but photo-realistic--face given a wrong password. Extensive experiments show that our approach enables multimodal password-conditioned face anonymization and deanonymization, without sacrificing privacy compared to existing anonymization approaches.
    Comment: ECCV 202
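    The core property described here, an invertible, password-keyed transform whose output under a wrong key is still plausible but different from the original, can be sketched with a toy byte-level analogue. The real system is a learned neural transformer over face images; everything below (function names, the keystream scheme) is an illustrative assumption, not the paper's model.

```python
# Toy analogue of password-conditioned anonymization/deanonymization.
# A face "identity" is modeled as a byte vector, and the transform is
# modular addition with a password-derived keystream. Illustrative only.
import hashlib

def _keystream(password: str, n: int) -> bytes:
    """Derive n pseudo-random bytes deterministically from the password."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(f"{password}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def anonymize(identity: bytes, password: str) -> bytes:
    """Shift each byte by a password-derived amount (invertible)."""
    ks = _keystream(password, len(identity))
    return bytes((b + k) % 256 for b, k in zip(identity, ks))

def deanonymize(blob: bytes, password: str) -> bytes:
    """The correct password inverts the shift; a wrong password still
    yields a well-formed byte vector, just not the original identity."""
    ks = _keystream(password, len(blob))
    return bytes((b - k) % 256 for b, k in zip(blob, ks))
```

    The analogy captures properties (1)-(3) from the abstract: the anonymized blob hides the original, the correct password recovers it exactly, and a wrong password deterministically produces a different but structurally valid result rather than an error.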