80 research outputs found

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has attracted intense interest in research and development on the Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and of the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the use of Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth-sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection, and human pose estimation
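Joint-angle features computed from tracked skeleton positions are a common low-level input to the motion recognition techniques this survey classifies. As a purely illustrative sketch (the joint coordinates and the choice of feature below are hypothetical, not taken from the survey), an elbow flexion angle can be derived from three 3D joint positions:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against rounding just outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# Hypothetical shoulder, elbow, and wrist positions (metres, camera space).
shoulder, elbow, wrist = (0.0, 0.4, 2.0), (0.2, 0.2, 2.0), (0.4, 0.4, 2.0)
angle = joint_angle(shoulder, elbow, wrist)  # elbow flexion angle
```

A sequence of such angles over time forms a simple feature vector that motion classifiers can consume.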

    Dialogue management using reinforcement learning

    Dialogue is widely used for verbal communication in human-robot interaction, for example with assistant robots in hospitals. However, such robots are usually limited to predetermined dialogues, so it is difficult for them to understand new words for new goals. In this paper, we discuss conversation in Indonesian on entertainment, motivation, emergency, and helping topics, using a knowledge-growing method. We provide mp3 audio for music, fairy tale, comedy, and motivation requests. The average execution time for these requests was 3.74 ms. In an emergency situation, the patient is able to ask the robot to call the nurse; the robot records the complaint of pain and informs the nurse. All complaints from 7 emergency reports were successfully saved to the database. In a helping conversation, the robot walks to pick up the patient’s belongings. Whenever the robot does not understand the patient’s utterance, it asks until it understands. Through these asking conversations, the knowledge base expands from 2 to 10 entries, with learning execution time growing from 1405 ms to 3490 ms. SARSA converged faster to a steady state because of its higher cumulative rewards; both Q-learning and SARSA achieved the desired object within 200 episodes. We conclude that the reinforcement learning method overcomes the robot’s knowledge limitation in achieving new dialogue goals for patient assistance
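The SARSA versus Q-learning comparison above hinges on one difference in the update rule: SARSA is on-policy, Q-learning off-policy. The sketch below illustrates this on a tiny chain task invented for illustration (it is not the paper's dialogue environment; all parameters are arbitrary):

```python
import random

random.seed(0)

# Toy chain MDP: states 0..3, action 1 moves right, action 0 stays;
# reward 1 on reaching state 3 (the goal).
N_STATES, GOAL = 4, 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else s
    return s2, 1.0 if s2 == GOAL else 0.0

def eps_greedy(Q, s):
    if random.random() < EPS:
        return random.choice([0, 1])
    return 0 if Q[s][0] > Q[s][1] else 1

def train(on_policy, episodes=200):
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, a = 0, eps_greedy(Q, 0)
        while s != GOAL:
            s2, r = step(s, a)
            a2 = eps_greedy(Q, s2)
            # SARSA bootstraps on the action actually taken (on-policy);
            # Q-learning bootstraps on the greedy action (off-policy).
            target = Q[s2][a2] if on_policy else max(Q[s2])
            Q[s][a] += ALPHA * (r + GAMMA * target - Q[s][a])
            s, a = s2, a2
    return Q

q_sarsa = train(on_policy=True)
q_qlearn = train(on_policy=False)
```

Both variants learn to move right within the episode budget; on this trivial task their learned values coincide, while the on/off-policy distinction matters when exploration is risky.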

    Framework for autonomous navigation through MS HoloLenses

    In recent years, the immense development of virtual reality technologies seems to be overwhelming the technological community. The possibilities that the virtual reality family brings to the table pose a life-changing experience for both daily and industrial life. More particularly, Augmented Reality (AR) is considered by a large portion of the scientific community to be the reigning technology of User Interfaces (UIs). The key feature of AR is that it adds digital content to the real environment without isolating the user from it, providing a very realistic interaction close to the user’s perception. Given these features, AR technology can be used, for instance, for enhanced learning, machine control, and human/vehicle navigation. For example, an AR UI deployed on AR glasses can help an operator control a machine easily, and without risk, from a distance.
In addition, this functionality can be enriched by using an unmanned vehicle, a robot, as the machine to be controlled. Robotics is a field of technology whose intervention in people’s lives seems unstoppable in more and more aspects. Nowadays, unmanned vehicles are used in the majority of industrial operations and daily routines. Consider a situation where harmful waste must be extracted from a specific area: the use of an unmanned vehicle is mandatory for the collection and removal of the waste. On top of this, an Augmented Reality UI for the remote control of the unmanned vehicle (UV) allows the operator to make the most of his skills without risking his life, and offers a very natural and intimate form of control. In this thesis, we examine the scenario where the user controls/navigates an unmanned ground vehicle with the aid of an AR headset. The AR headset projects a specially designed UI for the robot’s movement control. The vehicle’s navigation depends solely on the user’s perception and experience. That is where AR technology comes in handy, as it does not affect the user’s vision or perception of the surroundings. More specifically, a series of experiments are carried out in which the user wears the AR headset and navigates the robot by giving a series of movement commands; the robot must always remain in the user’s field of view. Experiments were executed in both simulated and real-world settings. For the simulation, the Gazebo simulator was used with a virtual Turtlebot 2 running the ROS operating system, together with the Unity simulator for the AR headset. The real-world experiments were executed with a Turtlebot 2 running ROS and the Microsoft HoloLens AR headset, on which our AR application was deployed
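Movement commands like those described above are typically delivered to a ROS robot as velocity messages. The sketch below shows a hypothetical mapping from AR-UI gestures to Twist-style commands; the dataclass is a plain-Python stand-in for the actual `geometry_msgs/Twist` type, so no ROS installation is assumed, and the gesture names and velocity values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Twist:
    """Minimal stand-in for ROS geometry_msgs/Twist (planar robot:
    forward velocity in linear_x, turn rate in angular_z)."""
    linear_x: float = 0.0   # m/s
    angular_z: float = 0.0  # rad/s

# Hypothetical mapping from AR-UI gestures to velocity commands.
COMMANDS = {
    "forward": Twist(linear_x=0.2),
    "back":    Twist(linear_x=-0.2),
    "left":    Twist(angular_z=0.5),
    "right":   Twist(angular_z=-0.5),
    "stop":    Twist(),
}

def command_for(gesture: str) -> Twist:
    # Unrecognised gestures default to a safe stop.
    return COMMANDS.get(gesture, Twist())
```

In a real deployment, the returned message would be published on the robot's velocity topic (commonly `cmd_vel`) at a fixed rate while the gesture is held.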

    Expressive social exchange between humans and robots

    Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 253-264). Sociable humanoid robots are natural and intuitive for people to communicate with and to teach. We present recent advances in building an autonomous humanoid robot, Kismet, that can engage humans in expressive social interaction. We outline a set of design issues and a framework that we have found to be of particular importance for sociable robots. Having a human-in-the-loop places significant social constraints on how the robot aesthetically appears, how its sensors are configured, its quality of movement, and its behavior. Inspired by infant social development, psychology, ethology, and evolutionary perspectives, this work integrates theories and concepts from these diverse viewpoints to enable Kismet to enter into natural and intuitive social interaction with a human caregiver, reminiscent of parent-infant exchanges. Kismet perceives a variety of natural social cues from visual and auditory channels, and delivers social signals to people through gaze direction, facial expression, body posture, and vocalizations. We present the implementation of Kismet's social competencies and evaluate each with respect to: 1) the ability of naive subjects to read and interpret the robot's social cues, 2) the robot's ability to perceive and appropriately respond to naturally offered social cues, 3) the robot's ability to elicit interaction scenarios that afford rich learning potential, and 4) how this produces a rich, flexible, dynamic interaction that is physical, affective, and social. Numerous studies with naive human subjects are described that provide the data upon which we base our evaluations. by Cynthia L. Breazeal. Sc.D

    Investigating Human Perceptions of Trust and Social Cues in Robots for Safe Human-Robot Interaction in Human-oriented Environments

    As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. This thesis envisages a future where autonomous robots could be used as home companions to assist and collaborate with their human partners in unstructured environments, without the support of any roboticist or expert. To realise such a vision, it is important to identify which factors (e.g. trust, participants’ personalities, and backgrounds) influence people to accept robots as companions and to trust the robots to look after their well-being. I am particularly interested in the possibility of robots using social behaviours and natural communication as a repair mechanism to positively influence humans’ sense of trust and companionship towards robots, the main reason being that trust can change over time due to different factors (e.g. perceived erroneous robot behaviours). In this thesis, I provide guidelines for a robot to regain human trust by adopting certain human-like behaviours. Domestic robots can be expected to exhibit occasional mechanical, programming or functional errors, as occurs with any other consumer electrical device. Examples include software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or showing faulty navigational skills due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to behave acceptably when exhibiting and recovering from an error situation. In this context, several open questions need to be addressed regarding individuals’ perceptions of both the errors and the robots, and the effects of these on people’s trust in robots. 
As a first step, I investigated how the severity of the consequences and the timing of a robot’s different types of erroneous behaviours during an interaction may have different impacts on users’ attitudes towards a domestic robot. I concluded that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of the human’s trust in the robot. In particular, people’s trust was strongly affected by robot errors that had severe consequences. This led me to investigate whether people’s awareness of a robot’s functionalities may affect their trust in it. I found that people’s acceptance of and trust in the robot may be affected by their knowledge of the robot’s capabilities and limitations, differently according to the participants’ age and the robot’s embodiment. In order to deploy robots in the wild, strategies for mitigating and regaining people’s trust in robots in case of errors need to be implemented. In the following three studies, I assessed whether a robot with awareness of human social conventions would increase people’s trust in it. My findings showed that people almost blindly trusted both a social and a non-social robot in scenarios with non-severe error consequences. In contrast, people who interacted with a social robot did not trust its suggestions in a scenario with a higher-risk outcome. Finally, I investigated the effects of a robot’s errors on people’s trust in it over time. The findings showed that participants’ judgement of a robot is formed during the first stage of their interaction; therefore, people are more inclined to lose trust in a robot if it makes big errors at the beginning of the interaction. The findings from the Human-Robot Interaction experiments presented in this thesis will contribute to an advanced understanding of the trust dynamics between humans and robots for a long-lasting and successful collaboration

    Distributing intelligence in the wireless control of a mobile robot using a personal digital assistant

    Personal Digital Assistants (PDAs) have recently become a popular component in mobile robots. This compact processing device, with its touch screen, variety of built-in features, wireless technologies and affordability, can perform various roles within a robotic system. Applications include low-cost prototype development, rapid prototyping, low-cost humanoid robots, robot control, robot vision systems, algorithm development, human-robot interaction, mobile user interfaces as well as wireless robot communication schemes. Limits on processing power, memory, battery life and screen size impact the usefulness of a PDA in some applications. In addition, various implementation strategies exist, each with its own strengths and weaknesses. No comparison of the advantages and disadvantages of the different strategies and resulting architectures exists. This makes it difficult for designers to decide on the best use of a PDA within their mobile robot system. This dissertation examines and compares the available mobile robot architectures. A thorough literature study identifies robot projects using a PDA and examines how the designs incorporate a PDA and what purpose it fulfils within the system it forms part of. The dissertation categorises the architectures according to the role of the PDA within the robot system. The hypothesis is made that using a distributed control system architecture makes optimal use of the rich feature set gained from including a PDA in a robot system’s design and simultaneously overcomes the device’s inherent shortcomings. This architecture is developed into a novel distributed intelligence framework that is supported by a hybrid communications architecture, using two wireless connection schemes. A prototype implementation illustrates the framework and communications architecture in action. Various performance measurements are taken in a test scenario for an office robot. 
The results indicate that the proposed framework does deliver performance gains and is a viable alternative for future projects in this area
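The distributed split the dissertation argues for can be sketched in a few lines. The following toy example (all names and commands are hypothetical, and a thread-safe queue stands in for the wireless link) separates high-level planning, which the PDA is well suited for, from low-level execution on the robot's onboard controller:

```python
import queue
import threading

link = queue.Queue()  # stand-in for the wireless connection
executed = []

def pda_side():
    # High-level plan produced on the PDA from user input.
    for cmd in ["goto:door", "grasp:mail", "goto:desk"]:
        link.put(cmd)
    link.put(None)  # end-of-mission sentinel

def robot_side():
    # Low-level execution loop on the robot's controller.
    while True:
        cmd = link.get()
        if cmd is None:
            break
        executed.append(cmd)  # a real robot would act on cmd here

t1 = threading.Thread(target=pda_side)
t2 = threading.Thread(target=robot_side)
t1.start(); t2.start()
t1.join(); t2.join()
```

The design point is that only compact, high-level commands cross the bandwidth- and battery-constrained link, while time-critical control stays on the robot.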

    AI ethics and higher education : good practice and guidance for educators, learners, and institutions

    Artificial intelligence (AI) is exerting unprecedented pressure on the global higher-education landscape, transforming recruitment processes, subverting traditional pedagogy, and creating new research and institutional opportunities. These technologies require contextual and global ethical analysis so that they may be developed and deployed in higher education in just and responsible ways. To date, these efforts have been largely focused on small parts of the educational environment, leaving most of the world out of an essential contribution. This volume acts as a corrective to this and contributes to the building of competencies in ethics education and to broader, global debates about how AI will transform various facets of our lives, not the least of which is higher education

    Designing Embodied Interactive Software Agents for E-Learning: Principles, Components, and Roles

    Embodied interactive software agents are complex autonomous, adaptive, and social software systems with a digital embodiment that enables them to act on and react to other entities (users, objects, and other agents) in their environment through bodily actions, which include the use of verbal and non-verbal communicative behaviors in face-to-face interactions with the user. These agents have been developed for various roles in different application domains, in which they perform tasks that have been assigned to them by their developers or delegated to them by their users or by other agents. In computer-assisted learning, embodied interactive pedagogical software agents have the general task to promote human learning by working with students (and other agents) in computer-based learning environments, among them e-learning platforms based on Internet technologies, such as the Virtual Linguistics Campus (www.linguistics-online.com). In these environments, pedagogical agents provide contextualized, qualified, personalized, and timely assistance, cooperation, instruction, motivation, and services for both individual learners and groups of learners. This thesis develops a comprehensive, multidisciplinary, and user-oriented view of the design of embodied interactive pedagogical software agents, which integrates theoretical and practical insights from various academic and other fields. The research intends to contribute to the scientific understanding of issues, methods, theories, and technologies that are involved in the design, implementation, and evaluation of embodied interactive software agents for different roles in e-learning and other areas. 
For developers, the thesis provides sixteen basic principles (Added Value, Perceptible Qualities, Balanced Design, Coherence, Consistency, Completeness, Comprehensibility, Individuality, Variability, Communicative Ability, Modularity, Teamwork, Participatory Design, Role Awareness, Cultural Awareness, and Relationship Building) plus a large number of specific guidelines for the design of embodied interactive software agents and their components. Furthermore, it offers critical reviews of theories, concepts, approaches, and technologies from different areas and disciplines that are relevant to agent design. Finally, it discusses three pedagogical agent roles (virtual native speaker, coach, and peer) in the scenario of the linguistic fieldwork classes on the Virtual Linguistics Campus and presents detailed considerations for the design of an agent for one of these roles (the virtual native speaker)

    Touch future x ROBOT: examining production, consumption, and disability at a social robot research laboratory and a centre for independent living in Japan

    This thesis contributes to anthropological discussions on the relationship between production and consumption by engaging in multi-sited ethnography that investigates the design of social robots in cutting-edge Japanese research laboratories and also explores the day-to-day lives of Japanese disabled people who are potential consumers of such devices. By drawing on these disparate groups, located in disparate sites, this thesis traces connections but also disconnections as it analyses the 'friction' between the technical problem-solving of researchers and the organized activist politics of disabled people. It investigates the rationales of robot research, messy and multiple, as well as the material and political impetus behind the 'barrier free' movement for independent living. Social robots hold a special interest in Japan because not only do many people, both inside and outside of Japan, believe that the nation has a unique cultural interest and affinity for robots, but, with an ageing population, the Japanese state has looked toward social robots as potential care-givers and as a solution to the 'demographic crisis'. Through the engagement of both science and technology studies and disability studies, this thesis focuses on the theme of problems to show how the problem-making approach of robotics researchers, which identifies problems of the body as a disability to be solved by a technical fix in the form of a robot, contrasts with the perspective from disabled people themselves, who see disability as a problem of society and the environment rather than the individual and the body