What's “up”? Resolving interaction ambiguity through non-visual cues for a robotic dressing assistant
Paper presented at the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), held in Lisbon, Portugal, 28 August to 1 September 2017.
Robots that assist in activities of daily living (ADL), such as dressing, need to be capable of intuitive and safe interaction. Vision systems are often used to provide information on the position and movement of the robot and user. However, in a dressing context, technical complexity, occlusion and concerns over user privacy push research to investigate other approaches to human-robot interaction (HRI). We analysed verbal, proprioceptive and force feedback from 18 participants during a human-human dressing experiment in which users received dressing assistance from a researcher mimicking robot behaviour. This paper investigates the occurrence of deictic speech in an assisted-dressing task and how its ambiguity could be resolved to ensure safe and reliable HRI. We focus on one of the most frequently occurring deictic words, “up”, which was captured over 300 times during the experiments and is used as an example of an ambiguous command. We attempt to resolve the ambiguity of these commands through predictive models, which were used to predict end-effector choice and the direction in which the garment should move. The model for predicting end-effector choice achieved 70.4% accuracy based on the user's head orientation. For predicting garment direction, the model used the angle of the user's arm and achieved 87.8% accuracy. We also found that additional features, such as the starting position of the user's arms and end-effector height, may improve the accuracy of a predictive model. We present suggestions on how these inputs may be attained through non-visual means, for example through haptic perception of end-effector position, proximity sensors and acoustic source localisation.
This research was supported by EPSRC and EU CHIST-ERA.
Peer Reviewed
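The abstract above describes single-feature predictive models (head orientation predicting end-effector choice; arm angle predicting garment direction). As a minimal illustration of what such a rule-based predictor could look like, here is a hedged Python sketch; the function name, the direction labels and the 60-degree threshold are assumptions for illustration only, not values reported in the paper.

```python
def predict_garment_direction(arm_angle_deg: float) -> str:
    """Illustrative single-feature rule for the ambiguous command "up".

    A raised arm (large angle) is taken to mean the garment should move
    along the arm toward the shoulder; a lowered arm is taken to mean a
    vertical lift. The 60-degree threshold is a hypothetical value.
    """
    return "along_arm" if arm_angle_deg >= 60.0 else "vertical_lift"


# Example: a nearly horizontal arm maps to movement along the arm.
print(predict_garment_direction(80.0))  # along_arm
print(predict_garment_direction(30.0))  # vertical_lift
```

In practice the paper's models would be trained on recorded interaction data rather than a hand-set threshold, but the sketch shows the shape of a one-feature decision rule.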
Proceedings, MSVSCC 2012
Proceedings of the 6th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 19, 2012, at VMASC in Suffolk, Virginia.
Towards a Legal and Ethical Framework for Personal Care Robots. Analysis of Person Carrier, Physical Assistant and Mobile Servant Robots.
Technology is rapidly developing, and regulators and robot creators inevitably have to come to terms with new and unexpected scenarios. A thorough analysis of this new and continuously evolving reality could be useful to better understand the current situation and pave the way for the future creation of a legal and ethical framework. This is clearly a wide and complex goal, considering the variety of new technologies available today and those under development. Therefore, this thesis focuses on the evaluation of the impacts of personal care robots. In particular, it analyzes how roboticists adjust their creations to the existing regulatory framework for legal compliance purposes.
By carrying out an impact assessment, existing regulatory gaps and areas of regulatory unclarity can be highlighted. Lawmakers should then consider these gaps in shaping a future legal framework for personal care robots.
This assessment should be made first against existing regulations. If the creators of a robot do not encounter any limitations, they can proceed with its development. If there are limitations, robot creators will either (1) adjust the robot to comply with the existing regulatory framework; (2) start a negotiation with regulators to change the law; or (3) carry out the original plan and risk being non-compliant.
The regulator can discuss existing (or missing) regulations with robot developers and give a legal response accordingly. In an ideal world, robots are free of negative impacts, so threats can be met with prevention and opportunities with facilitation. In reality, the impacts of robots are often uncertain and less clear, especially when robots are deployed in care applications. Regulators will therefore have to address uncertain risks, ambiguous impacts and as-yet unknown effects.
Interactive Technologies for the Public Sphere: Toward a Theory of Critical Creative Technology
Digital media cultural practices continue to address the social, cultural and aesthetic contexts of the global information economy, perhaps better called ecology, by inventing new methods and genres that encourage interactive engagement, collaboration, exploration and learning. The theoretical framework for critical creative technology evolved from the confluence of the arts, human-computer interaction, and critical theories of technology. Molding this nascent theoretical framework from these seemingly disparate disciplines was a reflexive process in which the influence of each component on the others spiraled into the theory and practice, as illustrated through the Constructed Narratives project. Research that evolves from an arts perspective encourages experimental processes of making as a method for defining research principles. The traditional reductionist approach to research requires that all confounding variables be eliminated or silenced using statistical methods. However, that noise in the data, those confounding variables, provides the rich context, media, and processes by which creative practices thrive. As research in the arts gains recognition for its contributions of new knowledge, the traditional reductive practice in search of general principles will be respectfully joined by methodologies for defining living principles that celebrate and build from the confounding variables, the data noise. The movement to develop research methodologies from the noisy edges of human interaction has been explored in the research and practices of ludic design and ambiguity (Gaver, 2003); the affective gap (Sengers et al., 2005b; 2006); embodied interaction (Dourish, 2001); the felt life (McCarthy & Wright, 2004); and reflective HCI (Dourish et al., 2004).
The theory of critical creative technology examines the relationships between critical theories of technology, society and aesthetics, information technologies and contemporary practices in interaction design and creative digital media. It is aligned with theories and practices in social navigation (Dourish, 1999) and community-based interactive systems (Stathis, 1999) in the development of smart appliances and network systems that support people in engaging in social activities, promoting communication and enhancing the potential for learning in a community-based environment. The theory of critical creative technology amends these community-based and collaborative design theories by emphasizing methods to facilitate face-to-face dialogical interaction when the exchange of ideas, observations, dreams, concerns, and celebrations may be silenced by societal norms about how to engage others in public spaces.
The Constructed Narratives project is an experiment in the design of a critical creative technology that emphasizes the collaborative construction of new knowledge about one's lived world through computer-supported collaborative play (CSCP). To construct is to creatively invent one's world by engaging in creative decision-making, problem solving and acts of negotiation. The metaphor of construction is used to demonstrate how a simple artefact, a building block, can provide an interactive platform to support discourse between collaborating participants. The technical goal for this project was the development of a software and hardware platform for the design of critical creative technology applications that can process a dynamic flow of logistical and profile data from multiple users, to be used in applications that facilitate dialogue between people in a real-time playful interactive experience.
"Seeing Like A Rover": Images In Interaction On The Mars Exploration Rover Mission
This dissertation analyzes the use of images on the Mars Exploration Rover mission both to conduct scientific investigations of Mars and to plan robotic operations on its surface. Drawing upon three years of fieldwork with the Mars Rover team, including ethnography, participant observation, and interviews, the dissertation contributes to the literature in Science and Technology Studies by advancing the analytical framework of "drawing as": a practical corollary to Wittgenstein and Hanson's concept of "seeing as" that allows the analyst to explore the work of producing scientific images, which draw natural objects as analytical objects to enable future representations and interactions. Further, images of Mars betray the social organization of the mission team and its commitment to consensus operations. Observing how images of Mars are drawn as trustworthy documents, drawn as a hypothesis or as a record of collective agreement, drawn as a map for the Rover and drawn as a public space, the dissertation demonstrates how interactions with and around Mars Rover images support this political orientation, making the Rover's body a body politic.