26 research outputs found

    Development of the huggable social robot Probo: on the conceptual design and software architecture

    This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. In terms of a social robot, Probo is classified as a social interface supporting non-verbal communication. Probo's social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure to incorporate all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo, which simulates all motions of the robot and provides visual feedback to the operator. Additionally, the model allows us to advance user testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. These input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis and object identification. The stimuli influence the attention and homeostatic system, used to define the robot's point of attention, current emotional state and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and a corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator.
All motions generated from operator-triggered systems are combined with the motions originating from the autonomous reactive systems. The resulting motion is subsequently smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform to create a friendly companion for hospitalised children.
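The combine-then-smooth stage described in the abstract can be sketched as a per-joint weighted blend of the operator-triggered and autonomous reactive targets, followed by exponential smoothing before the command reaches the actuators. The function names, the weighting scheme and the smoothing constant below are illustrative assumptions, not Probo's actual implementation.

```python
def blend(operator_target, reactive_target, operator_weight=0.7):
    """Weighted combination of two joint-angle targets (degrees)."""
    return (operator_weight * operator_target
            + (1.0 - operator_weight) * reactive_target)

def smooth(previous_output, blended_target, alpha=0.2):
    """Exponential smoothing to avoid abrupt actuator motion."""
    return previous_output + alpha * (blended_target - previous_output)

# Three control ticks for a single joint: the operator asks for ~30 deg
# while the reactive system pulls toward ~10-12 deg; the smoothed output
# ramps toward the blended target instead of jumping.
output = 0.0
for op_t, re_t in [(30.0, 10.0), (30.0, 12.0), (32.0, 12.0)]:
    output = smooth(output, blend(op_t, re_t))
```

The smoothing step is what keeps operator-triggered animations from producing jerky transitions when they override or release the reactive behaviors.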

    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in the work have been tested in the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step in the path to deploying humanoids in the real world, based on the low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.

    Automatic extraction of constraints in manipulation tasks for autonomy and interaction

    Tasks routinely executed by humans involve sequences of actions performed with high dexterity and coordination. Fully specifying these actions such that a robot could replicate the task is often difficult. Furthermore, the uncertainties introduced by the use of different tools or changing configurations demand that the specification be generic while enhancing the important task aspects, i.e. the constraints. Therefore, the first challenge of this thesis is inferring these constraints from repeated demonstrations. In addition, humans explaining a task to another person rely on that person's ability to apprehend missing or implicit information. Observations therefore contain user-specific cues alongside knowledge of how to perform the task. Thus our second challenge is correlating the task constraints with the user's behavior to improve the robot's performance. We address these challenges using a Programming by Demonstration framework. In the first part of the thesis we describe an approach for decomposing demonstrations into actions and extracting task-space constraints as continuous features that apply throughout each action. The constraints consist of: (1) the reference frame for performing manipulation, (2) the variables of interest relative to this frame, allowing a decomposition into force and position control, and (3) a stiffness gain modulating the contribution of force and position. We then extend this approach to asymmetrical bimanual tasks by extracting features that enable arm coordination: the master-slave role that enables precedence, and the motion-motion or force-motion coordination that facilitates physical interaction through an object. The set of constraints and the time-independent encoding of each action form a task prototype, used to execute the task.
In the second part of the thesis we focus on discovering additional features implicit in the demonstrations with respect to two aspects of the teaching interactions: (1) characterizing the user's performance and (2) improving the user's behavior. For the first goal, we assess the skill of the user, and implicitly the quality of the demonstrations, using objective task-specific metrics related directly to the constraints. We further analyze ways of making the user aware of the robot's state during teaching by providing task-related feedback. The feedback has a direct influence on both the teaching efficiency and the user's perception of the interaction. We evaluated our approaches in robotic experiments encompassing daily activities, using two 7-degree-of-freedom KUKA LWR robotic arms and a 53-degree-of-freedom iCub humanoid robot.
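The decomposition into force and position control with a stiffness gain, as described above, can be illustrated with a per-axis hybrid control law: a high stiffness gain favors tracking the position reference, while a low one lets the force term dominate (e.g. during contact). The variable names, gains and the specific control law are assumptions for this sketch, not the thesis's actual formulation.

```python
def hybrid_command(x, x_des, f, f_des, stiffness, kf=0.5):
    """Per-axis command mixing a position error term and a force error term.

    stiffness -- modulates the contribution of position tracking
    kf        -- fixed gain on the force error (illustrative)
    """
    position_term = stiffness * (x_des - x)  # spring toward the reference
    force_term = kf * (f_des - f)            # regulate contact force
    return position_term + force_term

# Free-space motion: stiff position tracking, no force reference.
u_free = hybrid_command(x=0.10, x_des=0.20, f=0.0, f_des=0.0, stiffness=100.0)

# In-contact motion: compliant (low stiffness), regulating a 5 N contact force.
u_contact = hybrid_command(x=0.10, x_des=0.10, f=2.0, f_des=5.0, stiffness=5.0)
```

Learning the stiffness gain per action, rather than fixing it, is what lets the same prototype cover both free-space transport and in-contact phases of a task.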

    Intuitive Instruction of Industrial Robots: A Knowledge-Based Approach

    With more advanced manufacturing technologies, small and medium-sized enterprises can compete with low-wage labor by providing customized and high-quality products. For small production series, robotic systems can provide a cost-effective solution. However, for robots to perform on par with human workers in manufacturing industries, they must become flexible and autonomous in their task execution, and swift and easy to instruct. This will enable small businesses with short production series or highly customized products to use robot coworkers without consulting expert robot programmers. The objective of this thesis is to explore programming solutions that can reduce the programming effort of sensor-controlled robot tasks. The robot motions are expressed using constraints, and multiple simple constrained motions can be combined into a robot skill. The skill can be stored in a knowledge base together with a semantic description, which enables reuse and reasoning. The main contributions of the thesis are 1) development of ontologies for knowledge about robot devices and skills, 2) a user interface that provides simple programming of dual-arm skills for non-experts and experts, 3) a programming interface for task descriptions in unstructured natural language in a user-specified vocabulary, and 4) an implementation where low-level code is generated from the high-level descriptions. The resulting system greatly reduces the number of parameters exposed to the user, is simple to use for non-experts, and reduces the programming time for experts by 80%. The representation is described on a semantic level, which means that the same skill can be used on different robot platforms. The research is presented in seven papers, the first describing the knowledge representation and the second the knowledge-based architecture that enables skill sharing between robots.
The third paper presents the translation from high-level instructions to low-level code for force-controlled motions. The two following papers evaluate the simplified programming prototype for non-expert and expert users. The last two present how program statements are extracted from unstructured natural language descriptions.

    Tangible language for hands-on play and learning

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 187-192). For over a century, educators and constructivist theorists have argued that children learn by actively forming and testing -- constructing -- theories about how the world works. Recent efforts in the design of "tangible user interfaces" (TUIs) for learning have sought to bring together interaction models like direct manipulation and pedagogical frameworks like constructivism to make new, often complex, ideas salient for young children. Tangible interfaces attempt to eliminate the distance between the computational and physical world by making behavior directly manipulable with one's hands. In the past, systems for children to model behavior have been either intuitive-but-simple (e.g. curlybot) or complex-but-abstract (e.g. LEGO Mindstorms). In order to develop a system that supports a user's transition from intuitive-but-simple constructions to constructions that are complex-but-abstract, I draw upon constructivist educational theories, particularly Bruner's theories of how learning progresses through enactive, then iconic, and then symbolic representations. This thesis presents an example system and a set of design guidelines to create a class of tools that helps people transition from simple-but-intuitive exploration to abstract-and-flexible exploration. The Topobo system is designed to facilitate mental transitions between different representations of ideas, and between different tools. A modular approach, with an inherent grammar, helps people make such transitions. With Topobo, children use enactive knowledge, e.g. knowing how to walk, as the intellectual basis to understand a scientific domain, e.g. engineering and robot locomotion.
Queens, backpacks, Remix and Robo add various abstractions to the system, and extend the tangible interface. Children use Topobo to transition from hands-on knowledge to theories that can be tested and reformulated, employing a combination of enactive, iconic and symbolic representations of ideas. By Hayes Solos Raffle. Ph.D.

    The design and development of motion detection edutainment maths for use with slow learners’ children

    This research aims to examine game-based motion detection technology for helping slow-learner children improve and enhance their levels of attention and concentration while learning mathematics. The study also explores whether game-based motion detection engages slow-learner children while they learn mathematics. Additionally, it examines the role of game-based motion detection in improving the attention and concentration of slow-learner children, compared to typical students, in terms of mathematics learning and the educational outcomes of such classes. Slow learners comprise a wide range of students who do not perform well in their studies; this group can include children with ADHD, autism, impulsivity, inattention and more. In this research, I designed and developed a motion-based mathematics game using the Kinect for Xbox to test its effectiveness and efficiency with slow-learner students compared with typical students. For this purpose, the game was designed based on several learning theories, such as Mayer's principles of learning, Kolb's learning styles and Piaget's theory, for K5 and children aged 6-8 years old. In the experimental design, the System Usability Scale (SUS) was used to rate the features, and the Physical Activity Enjoyment Scale (PACES) was used with the participants. Testing followed both qualitative and quantitative models: the qualitative model was based on feedback from expert teachers observing the students, and the quantitative model was based on demographic analysis, normality tests, reliability analysis and validity tests. The outcome illustrates the value of game-based instruction, specifically physical activities and their impact on children's mathematics. The findings highlight the suitability, usefulness, attention gains and enhanced learning afforded by game-based instructional design for slow learners.
The study demonstrates the advantages of using game-based instructional design for slow-learner students.

    Investigating User Experiences Through Animation-based Sketching


    The development and evaluation of a custom-built synchronous online learning environment for tertiary education in South Africa

    The Departments of Computer Science and Information Systems at Rhodes University currently share certain honours-level (fourth-year) course modules with students from the corresponding departments at the previously disadvantaged University of Fort Hare. These lectures are currently delivered using video-conferencing. This was found to present a number of problems, including challenges in implementing desired pedagogical approaches, inequitable learning experiences, student disengagement at the remote venue, and inflexibility of the video-conferencing system. In order to address these problems, various e-learning modes were investigated, and synchronous e-learning was found to offer a number of advantages over asynchronous e-learning. Live Virtual Classrooms (LVCs) were identified as synchronous e-learning tools that support the pedagogical principles important to the two universities and to the broader context of South African tertiary education, and commercial LVC applications were investigated and evaluated. Informed by the results of this investigation, a small, simple LVC was designed, developed and customised for use in a predominantly academic sphere and deployment in a South African tertiary educational context. Testing and evaluation of this solution was carried out and the results analysed in terms of the LVC's technical merits and the pedagogical value of the solution as experienced by students and lecturers/facilitators. The evaluation indicated that the LVC solves a number of the identified problems with video-conferencing and also provides a flexible, customisable, extensible solution that supports highly interactive, collaborative, learner-centred education. The custom LVC solution could be easily adapted to the specific needs of any tertiary educational institute in the country, and the results may benefit other tertiary educational institutions involved in or dependent on distance learning.

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to provide a tailored environment for each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
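The flavor of such a software-only, fuzzy-rule estimate can be sketched with a single Mamdani-style rule over in-game variables. The membership functions, variable names and the one rule below are assumptions for illustration; they are not FLAME's actual rule base or the paper's implementation.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def frustration(deaths_per_min, hit_ratio):
    """One fuzzy rule: IF dying often AND hitting little THEN frustrated.

    The AND is taken as min, the usual Mamdani conjunction; with a single
    rule the firing strength is the output degree directly.
    """
    dying_often = tri(deaths_per_min, 1.0, 4.0, 8.0)
    hitting_little = tri(hit_ratio, -0.1, 0.0, 0.5)  # peaks at 0% accuracy
    return min(dying_often, hitting_little)

# Player dying 4 times/min with 25% accuracy -> moderately frustrated.
level = frustration(deaths_per_min=4.0, hit_ratio=0.25)
```

A full system would aggregate many such rules and defuzzify, but the point of the approach survives in miniature: the inputs are ordinary game statistics, so no physiological sensors are required.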