
    Semi-autonomous mobile phone communication avatar for enhanced interaction

    Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008. Cataloged from PDF version of thesis. Includes bibliographical references (p. 29). To take advantage of present cellular phone technology and to enhance the presence of the phone user in a remote location, the MeBot was designed: a semi-autonomous robot intended to embody the other side of a phone conversation in a more interactive way. This thesis covers the initial mechanical design of the MeBot. Major goals such as compactness, expressiveness, and manufacturability were pursued in the first two versions of the design. The MeBot v1.0, built and tested, proved the feasibility of the concept and generated some consumer response through the implementation of three degrees of freedom. MeBot v2.0, with six degrees of freedom, was then designed to incorporate improvements to the mechanical design. The latest mechanical design of the MeBot can travel on a flat surface, lift and lower the phone, gesture and point with its arms, and rotate its entire upper body independently of its wheels. Overall, these degrees of freedom give the MeBot the capabilities to embody the user in an expressive way. by Yingdan Gu. S.B.

    Affordable avatar control system for personal robots

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009. Includes bibliographical references (p. 76-79). Social robots (personal robots) emphasize individualized social interaction and communication with people. To maximize the communication capacity of a personal robot, designers make it more anthropomorphic (or zoomorphic), and people tend to interact more naturally with such robots. However, adopting anthropomorphism (or zoomorphism) in social robots makes the morphology of a robot more complex; thus, it becomes harder to control such robots with existing interfaces. The Huggable is a robotic Teddy bear platform developed by the Personal Robots Group at the MIT Media Lab for healthcare, elderly care, education, and family communication. It is important that a user can successfully convey meaningful context in a dialogue via the robot's puppeteering interface. I investigate relevant technologies for building a robotic puppetry system for a zoomorphic personal robot and develop three puppeteering interfaces to control the robot: a website interface, a wearable interface, and a sympathetic interface. The wearable interface was examined through a performance test, and the web interface through a user study. by Jun Ki Lee. S.M.

    A Biosymtic (Biosymbiotic Robotic) Approach to Human Development and Evolution. The Echo of the Universe.

    In the present work we demonstrate that the current Child-Computer Interaction paradigm is not potentiating human development to its fullest: it is associated with several physical and mental health problems and appears not to maximize children's cognitive performance and cognitive development. To potentiate children's physical and mental health (including cognitive performance and cognitive development), we have developed a new approach to human development and evolution. This approach proposes a particular synergy between the developing human body, computing machines, and natural environments. It emphasizes that children should be encouraged to interact with challenging physical environments offering multiple possibilities for sensory stimulation and increasing physical and mental stress to the organism. To operationalize our approach, we created and tested a new set of computing devices, the Biosymtic (Biosymbiotic Robotic) devices "Albert" and "Cratus". In two initial studies we were able to observe that the main goal of our approach is being achieved. Interaction with the Biosymtic device "Albert" in a natural environment triggered a different neurophysiological response (increases in sustained attention levels) and tended to optimize episodic memory performance in children, compared to interaction with a sedentary screen-based computing device in an artificially controlled indoor environment, and is thus a promising means of promoting cognitive performance and development. Interaction with the Biosymtic device "Cratus" in a natural environment instilled vigorous physical activity levels in children, and is thus a promising means of promoting physical and mental health.

    Building a semi-autonomous sociable robot platform for robust interpersonal telecommunication

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 73-74). This thesis presents the design of a software platform for the Huggable project. The Huggable is a new kind of robotic companion being developed at the MIT Media Lab for healthcare, education, entertainment, and social communication applications. This work focuses on the social communication application as it pertains to using a semi-autonomous robotic avatar in a remote environment. The software platform consists of an extensible and robust distributed software system that connects a remote human puppeteer to the Huggable robot over the Internet. The thesis discusses the design decisions made in building the software platform and describes the technologies created for the social communication application. An informal trial of the system reveals how the system's puppeteering interface can be improved, and pinpoints where performance enhancements are needed for this particular application. by Robert Lopez Toscano. M.Eng.
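    The abstract's core idea, a remote puppeteer sending commands to a robot over the network, can be sketched in miniature. This is an illustrative sketch only: the Huggable platform's actual protocol and APIs are not specified in the abstract, and the command format and names below are invented.

```python
import json
import socket
import threading

def encode_command(joint: str, angle: float) -> bytes:
    """Serialize one hypothetical puppeteering command as newline-delimited JSON."""
    return (json.dumps({"joint": joint, "angle": angle}) + "\n").encode()

# Robot-side endpoint: bind in the main thread first, so the puppeteer
# cannot try to connect before the listener exists.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # OS-assigned free port
srv.listen(1)
port = srv.getsockname()[1]
results = []

def robot_side():
    """Accept one connection and decode the commands it carries."""
    conn, _ = srv.accept()
    buf = b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:                # client closed the connection
            break
        buf += chunk
    for line in buf.decode().splitlines():
        results.append(json.loads(line))   # a real robot would drive a motor here
    conn.close()
    srv.close()

t = threading.Thread(target=robot_side)
t.start()

# Puppeteer side: send a single gesture command to the robot.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(encode_command("neck_pan", 15.0))
cli.close()
t.join()
print(results)
```

    A newline-delimited JSON stream keeps the wire format human-readable and extensible; a production system would add authentication, reconnection, and streaming sensor data back to the puppeteer.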

    A Framework for Test & Evaluation of Autonomous Systems Along the Virtuality-Reality Spectrum

    Test & Evaluation of autonomous vehicles presents a challenge: the vehicles may exhibit emergent behavior, and it is frequently difficult to ascertain the reason for a software decision. Current Test & Evaluation approaches for autonomous systems place the vehicles in various operating scenarios and observe their behavior. However, this introduces dependencies between the design and development lifecycles of the autonomous software and the physical vehicle hardware. Simulation-based testing can remove the need for physical hardware; however, transitioning the autonomous software to and from a simulation testing environment can be costly. The objective of this thesis is to develop a reusable framework for testing autonomous software such that testing can be conducted at various levels of mixed reality, provided the framework components supply the data required by the autonomous software. The thesis describes the design of the software framework and explores its application through use cases.
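    The key enabler described above, testing the same autonomous software at any point on the virtuality-reality spectrum, is usually achieved by having the software consume data through an interface that real and simulated sources both implement. The sketch below illustrates that pattern under stated assumptions: the class and method names are hypothetical and not taken from the thesis.

```python
from abc import ABC, abstractmethod

class SensorSource(ABC):
    """Common interface for real hardware, simulators, or mixed-reality feeds."""
    @abstractmethod
    def read_range(self) -> float:
        """Distance to the nearest obstacle, in meters."""

class SimulatedLidar(SensorSource):
    """Simulation-side implementation replaying a scripted range sequence."""
    def __init__(self, scripted):
        self._scripted = list(scripted)
    def read_range(self) -> float:
        return self._scripted.pop(0)

class Autopilot:
    """The software under test; it never knows which kind of source it has,
    so the same decision logic runs against hardware or simulation."""
    def __init__(self, source: SensorSource):
        self._source = source
    def decide(self) -> str:
        return "brake" if self._source.read_range() < 2.0 else "cruise"

# Run the unchanged autonomy code against a purely virtual sensor.
pilot = Autopilot(SimulatedLidar([5.0, 1.5]))
print(pilot.decide())  # cruise
print(pilot.decide())  # brake
```

    Swapping in a hardware-backed `SensorSource` (or one that mixes live and simulated channels) moves the test along the virtuality-reality spectrum without touching `Autopilot`.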

    Social touch in human–computer interaction

    Touch is our primary non-verbal communication channel for conveying intimate emotions and as such is essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT-mediated or -generated touch as an intuitive way of social communication is further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans, and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience, with a focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT-mediated or -generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to convey affective information more effectively. However, this research field at the crossroads of ICT and psychology is still embryonic, and we identify several topics that can help to mature it: establishing an overarching theoretical framework, employing better research methodologies, developing basic social touch building blocks, and solving specific ICT challenges.

    Social Interactions in Immersive Virtual Environments: People, Agents, and Avatars

    Immersive virtual environments (IVEs) have received increased popularity, with applications in many fields. IVEs aim to approximate real environments and to make users react similarly to how they would in everyday life. An important use case is the interaction between users and virtual characters (VCs). We interact with other people every day, so we expect others to act and behave appropriately, verbally and non-verbally (e.g., pitch, proximity, gaze, turn-taking). These expectations also apply to interactions with VCs in IVEs, and this thesis tackles some of these aspects. We present three projects that inform the area of social interactions with VCs in IVEs, focusing on non-verbal behaviours. In our first study, on interactions between people, we collaborated with the Social Neuroscience group at the Institute of Cognitive Neuroscience, UCL, on a dyadic multi-modal interaction. This study aims to understand conversation dynamics, focusing on gaze and turn-taking. The results show that people change gaze (from averted to direct and vice versa) more frequently when they are being looked at than when they are not. When they are not being looked at, they also direct their gaze to their partners more than when they are being looked at. Another contribution of this work is an automated method of annotating speech and gaze data. Next, we consider agents' higher-level non-verbal behaviours, covering social attitudes. We present a pipeline to collect data and train a machine learning (ML) model that detects social attitudes in a user-VC interaction. Here we collaborated with two game studios: Dream Reality Interaction and Maze Theory. We present a case study of the ML pipeline on social engagement recognition for the Peaky Blinders narrative VR game from the Maze Theory studio. We use a reinforcement learning algorithm with imitation learning rewards and a temporal memory element.
The results show that the model trained with raw data does not generalise and performs worse (60% accuracy) than the one trained with socially meaningful data (83% accuracy). In IVEs, people embody avatars, and their appearance can impact social interactions. In collaboration with Microsoft Research, we report a longitudinal mixed-reality study on avatar appearance in real-world meetings between co-workers, comparing personalised full-body realistic and cartoon avatars. The results imply that when participants use realistic avatars first, they may have higher expectations and perceive their colleagues' emotional states less accurately. Participants may also become more accustomed to cartoon avatars over time, and the overall use of avatars may lead to less accurate perception of negative emotions. The work presented here contributes towards the field of detecting and generating non-verbal cues for VCs in IVEs. These are also important building blocks for creating autonomous agents for IVEs. Additionally, this work contributes to the games and workplace fields through an immersive ML pipeline for detecting social attitudes and through insights into using different avatar styles over time in real-world meetings.
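    The gap the abstract reports between raw data (60% accuracy) and "socially meaningful" data (83%) comes down to feature engineering: summarizing raw telemetry into quantities that carry social meaning before a model sees them. The toy sketch below illustrates that step; the feature names, thresholds, and the stand-in rule for the trained model are all invented for illustration, not taken from the thesis.

```python
def extract_social_features(frames):
    """Summarize raw per-frame head-pose telemetry into two
    engagement-relevant features (names and thresholds are hypothetical)."""
    facing = sum(1 for f in frames if abs(f["yaw"]) < 10.0)
    return {
        "gaze_ratio": facing / len(frames),                       # fraction of frames facing the partner
        "mean_distance": sum(f["dist"] for f in frames) / len(frames),  # mean interpersonal distance (m)
    }

def engaged(features) -> bool:
    """Toy decision rule standing in for the trained ML model."""
    return features["gaze_ratio"] > 0.5 and features["mean_distance"] < 1.5

# Raw telemetry: head yaw in degrees and distance to the partner in meters.
raw = [{"yaw": 3.0, "dist": 1.2},
       {"yaw": 8.0, "dist": 1.0},
       {"yaw": 40.0, "dist": 1.1}]
feats = extract_social_features(raw)
print(feats["gaze_ratio"])  # 2 of 3 frames face the partner
print(engaged(feats))       # True
```

    A model trained on `feats`-style inputs sees a small, socially interpretable space, whereas one trained on the raw frames must rediscover these regularities itself, which is one plausible reading of why the raw-data model generalised worse.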