1,274 research outputs found

    Disciplining the body? Reflections on the cross-disciplinary import of ‘embodied meaning’ into interaction design

    The aim of this paper is, above all, to critically examine and clarify some of the negative implications that the idea of ‘embodied meaning’ has for the emergent field of interaction design research. The term ‘embodied meaning’ was originally brought into HCI research from phenomenology and cognitive semantics in order to better understand how users’ experience of new technological systems relies to an increasing extent on full-body interaction. Embodied approaches to technology design can thus be found in Winograd & Flores (1986), Dourish (2001), Lund (2003), Klemmer, Hartman & Takayama (2006), Hornecker & Buur (2006), and Hurtienne & Israel (2007), among others. However, fertile as this cross-disciplinary import may be, design research can generally be criticised for being ‘undisciplined’ because of its tendency merely to take over reductionist ideas of embodied meaning from those neighbouring disciplines without questioning the inherent limitations it thereby subscribes to. In this paper I focus on this reductionism and what it means for interaction design research. I start out by introducing the field of interaction design and two central research questions that it raises. This serves as a prerequisite for understanding the overall intention of bringing the notion of ‘embodied meaning’ from cognitive semantics into design research. Narrowing my account down to the concepts of ‘image schemas’ and their ‘metaphorical extension’, I then explain in more detail what is reductionist about the notion of embodied meaning. Having done so, I shed light on the consequences this reductionism might have for design research by examining a recently developed framework for intuitive user interaction along with two case examples. In so doing I sketch an alternative view of embodied meaning for interaction design research. Keywords: Interaction Design, Embodied Meaning, Tangible User Interaction, Design Theory, Cognitive Semiotics

    Embodied conversations: Performance and the design of a robotic dancing partner

    This paper reports insights gained from an exploration of performance-based techniques to improve the design of relationships between people and responsive machines. It draws on the Emergent Objects project and specifically addresses notions of embodiment, as employed in the field of performance, as a means to prototype and develop a robotic agent, SpiderCrab, designed to promote expressive interaction between device and human dancer in order to achieve ‘performative merging’. The significance of the work is to bring further knowledge of embodiment to bear on the development of human-technological interaction in general. In doing so, it draws on discursive and interpretive methods of research widely used in the field of performance but not yet obviously aligned with some orthodox paradigms and practices within design research. It also posits the design outcome as an ‘objectile’ in the sense that a continuous and potentially divergent iteration of prototypes is envisaged, rather than a singular final product. The focus on performative merging draws in notions of complexity and user experience. Keywords: Embodiment; Performance; Tacit Knowledge; Practice-As-Research; Habitus.

    Unified Behavior Framework for Reactive Robot Control in Real-Time Systems

    Endeavors in mobile robotics focus on developing autonomous vehicles that operate in dynamic and uncertain environments. By reducing the need for human-in-the-loop control, unmanned vehicles are utilized to achieve tasks considered dull or dangerous by humans. Because unexpected latency can adversely affect the quality of an autonomous system's operations, which in turn can affect lives and property in the real world, the ability to detect and handle external events is paramount to providing safe and dependable operation. Behavior-based systems form the basis of autonomous control for many robots. This thesis presents the unified behavior framework, a novel approach that incorporates the critical ideas and concepts of existing reactive controllers in an effort to simplify development without locking the system developer into using any single behavior system. The modular design of the framework is based on modern software engineering principles and specifies only a functional interface for components, leaving the implementation details to the developers. In addition to its use of industry-standard techniques in the design of reactive controllers, the unified behavior framework guarantees the responsiveness of routines that are critical to the vehicle's safe operation by allowing individual behaviors to be scheduled by a real-time process controller. The experiments in this thesis demonstrate the ability of the framework to: 1) interchange behavioral components during execution to generate various global behavior attributes; 2) apply genetic programming techniques to automate the discovery of effective structures for a domain that are up to 122 percent better than those crafted by an expert; and 3) leverage real-time scheduling technologies to guarantee the responsiveness of time-critical routines regardless of the system's computational load.
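    A minimal Python sketch of the kind of design the abstract describes: behaviors implement only a functional interface, and an interchangeable composite arbitrates among them. All class and method names here are illustrative assumptions, not the thesis's actual API:

```python
from abc import ABC, abstractmethod

class Behavior(ABC):
    """One reactive routine; the framework specifies only this functional
    interface, leaving implementation details to the developer."""

    @abstractmethod
    def gen_action(self, state: dict) -> dict:
        """Map the currently sensed state to a proposed action (empty = pass)."""

class AvoidObstacle(Behavior):
    def gen_action(self, state):
        # Spin in place if anything is closer than 0.5 m (illustrative logic).
        if state.get("min_range_m", float("inf")) < 0.5:
            return {"v": 0.0, "w": 1.0}
        return {}

class GoToGoal(Behavior):
    def gen_action(self, state):
        # Drive forward while steering toward the goal heading.
        return {"v": 0.5, "w": state.get("heading_error_rad", 0.0)}

class PriorityArbiter(Behavior):
    """A composite behavior: grants control to the first child that proposes
    an action. Because the composite satisfies the same interface, arbiters
    and leaves can be interchanged during execution to change the global
    behavior without rewriting any component."""

    def __init__(self, children):
        self.children = children

    def gen_action(self, state):
        for child in self.children:
            action = child.gen_action(state)
            if action:
                return action
        return {"v": 0.0, "w": 0.0}  # default: stop

controller = PriorityArbiter([AvoidObstacle(), GoToGoal()])
print(controller.gen_action({"min_range_m": 0.3}))        # avoidance wins
print(controller.gen_action({"heading_error_rad": 0.2}))  # goal-seeking runs
```

    In a real-time variant of this sketch, each behavior's gen_action would run as a separately scheduled task so that safety-critical routines keep their deadlines under load.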

    Grounding Verbs of Motion in Natural Language Commands to Robots

    To be useful teammates to human partners, robots must be able to follow spoken instructions given in natural language. An important class of instructions involves interacting with people, such as “Follow the person to the kitchen” or “Meet the person at the elevators.” These instructions require that the robot fluidly react to changes in the environment, not simply follow a pre-computed plan. We present an algorithm for understanding natural language commands with three components. First, we create a cost function that scores the language according to how well it matches a candidate plan in the environment, defined as the log-likelihood of the plan given the command. Components of the cost function include novel models for the meanings of motion verbs such as “follow,” “meet,” and “avoid,” as well as spatial relations such as “to” and landmark phrases such as “the kitchen.” Second, an inference method uses this cost function to perform forward search, finding a plan that matches the natural language command. Third, a high-level controller repeatedly calls the inference method at each timestep to compute a new plan in response to changes in the environment, such as the movement of the human partner or other people in the scene. When a command consists of more than a single task, the controller switches to the next task when an earlier one is satisfied. We evaluate our approach on a set of example tasks that require the ability to follow both simple and complex natural language commands. Keywords: Cost Function; Spatial Relation; State Sequence; Edit Distance; Statistical Machine Translation. Funding: United States. Office of Naval Research (Grant MURI N00014-07-1-0749)
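    A rough Python sketch of the three components as the abstract describes them: a cost function over candidate plans (negative log-likelihood of the plan given the command), a bounded forward search, and a controller that replans at every timestep. The factorization and all names are assumptions for illustration; the paper's actual verb, spatial-relation, and landmark models are learned, not hand-coded:

```python
import math

def plan_cost(plan, command, env):
    """Negative log-likelihood of a candidate plan given the command,
    summed over the command's factors (verb, spatial relation, landmark)."""
    cost = 0.0
    for factor in command["factors"]:
        cost -= math.log(factor(plan, env) + 1e-9)  # each factor scores in [0, 1]
    return cost

def forward_search(start, command, env, expand, depth=3):
    """Enumerate bounded state sequences and return the cheapest plan."""
    frontier = [[start]]
    for _ in range(depth):
        frontier = [p + [s] for p in frontier for s in expand(p[-1], env)]
    return min(frontier, key=lambda p: plan_cost(p, command, env))

def run(state, command, sense, expand, satisfied, execute):
    """High-level controller: replan every timestep so the plan tracks
    a moving partner instead of following a stale pre-computed path."""
    env = sense()
    while not satisfied(command, env):
        plan = forward_search(state, command, env, expand)
        state = execute(plan[1])  # take only the first step, then replan
        env = sense()             # the person may have moved

# Tiny demo on a 1-D corridor: "go to the kitchen" at position 3.
kitchen = lambda plan, env: 1.0 if plan[-1] == env["kitchen"] else 0.1
command = {"factors": [kitchen]}
expand = lambda s, e: [s - 1, s + 1]
print(forward_search(0, command, {"kitchen": 3}, expand))  # -> [0, 1, 2, 3]
```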

    Design and semantics of form and movement (DeSForM 2006)

    Design and Semantics of Form and Movement (DeSForM) grew from applied research exploring emerging design methods and practices to support new-generation product and interface design. The products and interfaces are concerned with the context of ubiquitous computing and ambient technologies, and the need for greater empathy in the pre-programmed behaviour of the ‘machines’ that populate our lives. Such explorative research in the CfDR has been led by Young, supported by Kyffin, Visiting Professor from Philips Design, and sponsored by Philips Design over a period of four years (research funding £87k). DeSForM1 was the first of a series of three conferences that enable the presentation and debate of international work within this field:
    • 1st European conference on Design and Semantics of Form and Movement (DeSForM1), Baltic, Gateshead, 2005, Feijs L., Kyffin S. & Young R.A. eds.
    • 2nd European conference on Design and Semantics of Form and Movement (DeSForM2), Evoluon, Eindhoven, 2006, Feijs L., Kyffin S. & Young R.A. eds.
    • 3rd European conference on Design and Semantics of Form and Movement (DeSForM3), New Design School Building, Newcastle, 2007, Feijs L., Kyffin S. & Young R.A. eds.
    Philips sponsorship of practice-based enquiry led to research by three teams of research students over three years and on-going sponsorship of research through the Northumbria University Design and Innovation Laboratory (nuDIL). Young has been invited onto the steering panel of the UK Thinking Digital Conference concerning the latest developments in digital and media technologies. Informed by this research is the work of PhD student Yukie Nakano, who examines new technologies in relation to eco-design textiles.

    Robotic Faces: Exploring Dynamical Patterns of Social Interaction between Humans and Robots

    Thesis (Ph.D.) - Indiana University, Informatics, 2015. The purpose of this dissertation is two-fold: 1) to develop an empirically-based design for an interactive robotic face, and 2) to understand how dynamical aspects of social interaction may be leveraged to design better interactive technologies and/or further our understanding of social cognition. Understanding the role that dynamics plays in social cognition is a challenging problem. This is particularly true in studying cognition via human-robot interaction, which entails both the natural social cognition of the human and the “artificial intelligence” of the robot. Clearly, humans who are interacting with other humans (or even other mammals such as dogs) are cognizant of the social nature of the interaction – their behavior in those cases differs from that when interacting with inanimate objects such as tools. Humans (and many other animals) have some awareness of “social”, some sense of other agents. However, it is not clear how or why. Social interaction patterns vary across culture, context, and individual characteristics of the human interactor. These factors are subsumed into the larger interaction system, influencing the unfolding of the system over time (i.e. the dynamics). The overarching question is whether we can figure out how to utilize factors that influence the dynamics of the social interaction in order to imbue our interactive technologies (robots, clinical AI, decision support systems, etc.) with some "awareness of social", and potentially create more natural interaction paradigms for those technologies. In this work, we explore the above questions across a range of studies, including lab-based experiments, field observations, and placing autonomous, interactive robotic faces in public spaces. We also discuss future work, how this research relates to making sense of what a robot "sees", creating data-driven models of robot social behavior, and development of robotic face personalities.

    A Real-Time Architecture for Conversational Agents

    Consider two people having a face-to-face conversation. They sometimes listen, sometimes talk, and sometimes interrupt each other. They use facial expressions to signal that they are confused. They point at objects. They jump from topic to topic opportunistically. When another acquaintance walks by, they nod and say hello. All the while they have other concerns on their mind, such as not missing the meeting that starts in 10 minutes. Like many other human behaviors, these are not easy to replicate in artificial agents. In this work we look into the design requirements of an embodied agent that can participate in such natural conversations in a mixed-initiative, multi-modal setting. Such an agent needs to understand that participating in a conversation is not merely a matter of sending a message and then waiting to receive a response -- both partners are simultaneously active at all times. This agent should be able to deal with different, sometimes conflicting goals, and always be ready to address events that may interrupt the current topic of conversation. To address those requirements, we have created a modular architecture that includes distributed functional units that compete with each other to gain control over available resources. Each of these units, called a schema, has its own sense-think-act cycle. In the field of robotics, this design is often referred to as behavior-based or schema-based. The major contribution of this work is merging behavior-based robotics with plan-based human-computer interaction.
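    A toy Python sketch of the schema idea as described in the abstract: independent units, each with its own sense-think-act cycle, competing for a shared resource such as the speech channel. The names and the bid-based arbitration are illustrative assumptions, not the system's actual mechanism:

```python
class Schema:
    """A functional unit with its own sense-think-act cycle. Schemas bid
    for shared resources (e.g. the speech channel); each cycle, the
    arbiter grants a resource to the highest bidder."""

    def sense(self, world):
        self.world = world  # read whatever part of the world this schema cares about

    def think(self):
        """Return (resource, bid, action) or None to stay quiet this cycle."""
        raise NotImplementedError

class Greeter(Schema):
    def think(self):
        if self.world.get("acquaintance_nearby"):
            return ("speech", 0.9, "say: hello")  # urgent: interrupt the topic
        return None

class TopicTalker(Schema):
    def think(self):
        if self.world.get("topic"):
            return ("speech", 0.5, f"discuss: {self.world['topic']}")
        return None

def tick(schemas, world):
    """One arbitration cycle: every schema senses and thinks; conflicting
    requests for the same resource are resolved by bid strength, so the
    agent can drop its current goal when a more pressing event occurs."""
    grants = {}
    for schema in schemas:
        schema.sense(world)
        proposal = schema.think()
        if proposal:
            resource, bid, action = proposal
            if bid > grants.get(resource, (0.0, None))[0]:
                grants[resource] = (bid, action)
    return {res: action for res, (bid, action) in grants.items()}

world = {"topic": "the 10 a.m. meeting", "acquaintance_nearby": True}
print(tick([Greeter(), TopicTalker()], world))  # the greeting outbids the topic
```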