
    A systematic literature review of decision-making and control systems for autonomous and social robots

    In recent years, considerable research has been carried out to develop robots that can improve our quality of life by taking on tedious and challenging tasks. In these contexts, robots that operate without human supervision open many possibilities for assisting people in their daily activities. When autonomous robots collaborate with humans, social skills are necessary for adequate communication and cooperation. Considering these facts, endowing autonomous and social robots with decision-making and control models is critical for appropriately fulfilling their goals. This manuscript presents a systematic review of the evolution of decision-making systems and control architectures for autonomous and social robots over the last three decades. These architectures have been incorporating new methods based on biologically inspired models and Machine Learning to expand what such systems can offer to society. The review explores the most novel advances in each application area, comparing their essential features. Additionally, we describe the current challenges of software architectures devoted to action selection, an analysis not provided in similar reviews of behavioural models for autonomous and social robots. Finally, we outline future directions for these systems. The research leading to these results has received funding from the projects: Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Ministerio de Ciencia, Innovación y Universidades; and Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by the Agencia Estatal de Investigación (AEI), Spanish Ministerio de Ciencia e Innovación. This publication is part of the R&D&I project PLEC2021-007819, funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.

    User Experience Design and Evaluation of Persuasive Social Robot As Language Tutor At University: Design And Learning Experiences From Design Research

    Human-Robot Interaction (HRI) is a developing field in which research and innovation are progressing rapidly. One domain on which HRI research has focused is education. Various studies have been conducted in the education field to design social robots with appropriate design guidelines, derived from user preferences, context, and technology, to help students and teachers foster their learning and teaching experience. Language learning has become popular in education because students can study subjects of interest in any language at their preferred universities around the world, which motivates research on using social robots for language learning and teaching. In this context, this thesis explores the design of a language tutoring robot for students learning Finnish at university. In language learning, motivation, the learning experience, context, and user preferences are important considerations. This thesis focuses on students learning Finnish through a language tutoring social robot at Tampere University. A design research methodology is used to design a persuasive language tutoring social robot that teaches Finnish to international students at Tampere University; the design guidelines and the future tutoring robot design, with their benefits, are derived through this methodology. Elias Robot, a language tutoring application designed by Curious Technologies, a Finnish EdTech company, was used in the explorative user study. The user study involved Pepper, a social robot, together with the Elias application running on a mobile device. The study was conducted at the university with three male and four female student participants. Its aim was to gather design requirements based on learning experiences with a social robot tutor. Based on the findings of this study and of the design research, the future language tutoring social robot was co-created through a co-design workshop. Drawing on the field study, the user study, technology acceptance model findings, design research findings, and student interviews, the persuasive social robot language tutor was designed. The findings revealed that all the multimodalities are required for efficient tutoring by persuasive social robots, and that social robots foster students' motivation to learn the language. The design implications are discussed, and the design of the social robot tutor is presented through design scenarios.

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effect of the reconstruction scale of the remote environment on the operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed that is compatible with various robotic systems and cameras, allowing teleoperation and supervised control of any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often suffer from distortions and occlusions, making it difficult to accurately represent objects' textures; this can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamically scaling the virtual world on the teleoperation workflow. The first investigated rate-mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual world scale in supervised control, comparing the scales participants chose at the beginning and at the end of a three-day experiment. The results showed that as operators became more proficient at the task they, as a group, adopted a different virtual world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, as well as shifts in their visual priorities as they became more skilled at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
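
    The thesis itself does not publish code; the following Python sketch merely illustrates the constant-versus-variable rate-mode mapping described above. The function name, gains, and scale values are invented for illustration, not taken from the thesis's implementation.

        import numpy as np

        def rate_mode_velocity(joystick, world_scale, base_gain=0.1, variable=True):
            """Map joystick deflection (each axis in [-1, 1]) to an end-effector
            velocity command. With variable=True the gain follows the current
            virtual-world scale, so a zoomed-in (small-scale) reconstruction
            yields slower, finer motion; with variable=False the gain is constant."""
            gain = base_gain * world_scale if variable else base_gain
            return gain * np.asarray(joystick)

        stick = [0.5, 0.0, -0.2]                             # operator's joystick deflection
        print(rate_mode_velocity(stick, world_scale=1.0))    # full-scale reconstruction
        print(rate_mode_velocity(stick, world_scale=0.25))   # zoomed-in: 4x slower motion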

    Expectations and expertise in artificial intelligence: specialist views and historical perspectives on conceptualisation, promise, and funding

    Artificial intelligence's (AI) distinctiveness as a technoscientific field that imitates the ability to think went through a resurgence of interest post-2010, attracting a flood of scientific and popular expectations as to its utopian or dystopian transformative consequences. This thesis offers observations about the formation and dynamics of expectations based on documentary material from previous periods of perceived AI hype (1960-1975 and 1980-1990, including the in-between periods of perceived dormancy) and 25 interviews with UK-based AI specialists directly involved with its development, who commented on these issues during the crucial period of uncertainty (2017-2019) and intense negotiation through which AI gained momentum prior to its regulation and relatively stabilised new rounds of long-term investment (2020-2021). This examination applies and contributes to longitudinal work in the sociology of expectations (SoE) and studies of expertise and experience (SEE) frameworks, proposing a historical sociology of expertise and expectations framework. The research questions, focusing on the interplay between hype mobilisation and governance, are: (1) What is the relationship between AI's practical development and the broader expectational environment, in terms of funding and the conceptualisation of AI? (2) To what extent does informal and non-developer assessment of expectations influence formal articulations of foresight? (3) What can historical examinations of AI's conceptual and promissory settings tell us about the current rebranding of AI? The following contributions are made: (1) I extend SEE by paying greater attention to the interplay between technoscientific experts and wider collective arenas of discourse amongst non-specialists, showing how AI's contemporary research cultures are overwhelmingly influenced by the hype environment but also contribute to it. This further highlights the interaction between competing rationales: exploratory, curiosity-driven scientific research versus exploitation-oriented strategies, at both formal and informal levels. (2) I suggest the benefits of examining promissory environments in AI and related technoscientific fields longitudinally, treating contemporary expectations as historical products of sociotechnical trajectories, through an authoritative historical reading of AI's shifting conceptualisation and attached expectations as responses to the availability of funding and broader national imaginaries. This has the benefit of better perceiving technological hype as migrating from social group to social group instead of fading through reductionist cycles of disillusionment, whether by the rebranding of technical operations or by the investigation of a given field by non-technical practitioners. It also sensitises us to critically examine broader social expectations as factors in the shift of theoretical/basic science research into applied technological fields. Finally, (3) I offer a model for understanding the significance of the interplay between conceptualisations, promising, and motivations across groups, within competing dynamics of collective and individual expectations and diverse sources of expertise.

    Apprenticeship Bootstrapping for Autonomous Aerial Shepherding of Ground Swarm

    Aerial shepherding of ground vehicles (ASGV) musters a group of uncrewed ground vehicles (UGVs) from the air using uncrewed aerial vehicles (UAVs). This approach enables robust uncrewed ground-air coordination, where one or multiple UAVs effectively drive a group of UGVs towards a goal. Developing artificial intelligence (AI) agents for ASGV is a non-trivial task due to the multiple sub-tasks and skills involved, and the non-linear interactions among them, required to synthesise a solution. One approach to developing AI agents is imitation learning (IL), where humans demonstrate the task to the machine. However, gathering human data for complex tasks in human-swarm interaction (HSI) requires the human to perform the entire job, which can lead to unexpected errors caused by a lack of control skill and imposes a heavy workload owing to the length and complexity of ASGV. We hypothesise that we can bootstrap the overall task by collecting human data from simpler sub-tasks, limiting errors and workload for humans. This thesis therefore attempts to answer the primary research question of how to design IL algorithms for multiple agents. We propose a new learning scheme called Apprenticeship Bootstrapping (AB). In AB, the low-level behaviours of the shepherding agents are trained from human data using our proposed hierarchical IL algorithms. The high-level behaviours are then formed using a proposed gesture-demonstration framework to collect human data for synthesising more complex controllers. The transfer mechanism aggregates the proposed IL algorithms. Experiments are designed using a mixed environment, where the UAV flies in a simulated Gazebo robotic environment while the UGVs are physical vehicles in a natural environment. A system is designed to allow switching between humans controlling the UAVs with low-level actions and humans controlling them with high-level actions. The former enables data collection for developing autonomous agents for the sub-tasks; in the latter, humans control the UAV by issuing commands that invoke those autonomous sub-task agents. We baseline the learnt agents against Strömbom scripted behaviours and show that the system can successfully generate autonomous behaviours for ASGV.
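
    As a rough illustration of the bootstrapping idea (not the thesis's actual algorithms), the Python sketch below behaviour-clones one policy per sub-task from its own demonstrations and trains a separate selector that decides which sub-task agent to invoke. The sub-task names, feature dimensions, and data loader are invented stand-ins for the logged human demonstration data.

        import numpy as np
        from sklearn.neural_network import MLPClassifier, MLPRegressor

        rng = np.random.default_rng(0)

        def load_demos(subtask):
            # Stand-in for logged human demonstrations of one sub-task:
            # rows are UAV/swarm state features, targets are UAV commands.
            states = rng.normal(size=(200, 8))
            actions = rng.normal(size=(200, 2))
            return states, actions

        # Low-level behaviours: one behaviour-cloning model per sub-task,
        # each trained only on human data for that simpler sub-task.
        low_level = {}
        for name in ("collect", "drive"):
            states, actions = load_demos(name)
            low_level[name] = MLPRegressor(hidden_layer_sizes=(64, 64),
                                           max_iter=500).fit(states, actions)

        # High-level behaviour: a selector (synthetic labels here, standing in
        # for gesture-demonstration data) that picks the sub-task to run.
        sel_states = rng.normal(size=(200, 8))
        sel_labels = rng.choice(["collect", "drive"], size=200)
        selector = MLPClassifier(hidden_layer_sizes=(32,),
                                 max_iter=500).fit(sel_states, sel_labels)

        def bootstrapped_policy(state):
            """Aggregate the learnt components: choose a sub-task, then
            return that sub-task agent's low-level UAV command."""
            s = np.asarray(state).reshape(1, -1)
            return low_level[selector.predict(s)[0]].predict(s)[0]

        print(bootstrapped_policy(rng.normal(size=8)))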

    Self-Supervised Prediction of the Intention to Interact with a Service Robot

    A service robot can provide a smoother interaction experience if it can proactively detect whether a nearby user intends to interact, so that it can adapt its behaviour, e.g. by explicitly showing that it is available to provide a service. In this work, we propose a learning-based approach to predict the probability that a human user will interact with a robot before the interaction actually begins; the approach is self-supervised because, after each encounter with a human, the robot can automatically label it depending on whether it resulted in an interaction or not. We explore different classification approaches, using different sets of features describing the pose and motion of the user. We validate and deploy the approach in three scenarios. The first collects 3442 natural sequences (both interacting and non-interacting) representing employees in an office break area: a real-world, challenging setting where we consider a coffee machine in place of a service robot. The other two scenarios represent researchers interacting with service robots (200 and 72 sequences, respectively). Results show that, even in challenging real-world settings, our approach can learn without external supervision and can achieve accurate classification (i.e. AUROC greater than 0.9) of the user's intention to interact more than 3 s before the interaction actually occurs. Comment: paper under revision for the Robotics and Autonomous Systems journal.
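
    The paper evaluates several classifiers; purely to illustrate the self-supervised labelling loop, here is a Python sketch using a random forest over invented pose/motion features, where each sequence is labelled automatically by whether the encounter ended in an interaction. Feature choices and data are synthetic assumptions, not the paper's dataset.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # One row per observed sequence: e.g. distance to the robot, heading
        # relative to it, walking speed (synthetic stand-ins here).
        X = rng.normal(size=(3442, 6))
        # Self-supervised label: 1 if the encounter ended in an interaction,
        # which the robot can log without any human annotator.
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=3442) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        p_interact = clf.predict_proba(X_te)[:, 1]   # probability of upcoming interaction
        print(f"AUROC: {roc_auc_score(y_te, p_interact):.2f}")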

    Energy-based control approaches in human-robot collaborative disassembly

    Novel Bidirectional Body-Machine Interface to Control Upper Limb Prosthesis

    Objective. The journey of a bionic prosthesis user is characterised by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb is the premise for mitigating the risk of abandonment through continuous use of the device. To achieve such a result, different aspects must be considered to make the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving the quality of life of amputees using upper limb prostheses. Still, as artificial proprioception is essential to perceive prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has recently been introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and the prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness. Approach. To achieve the objectives of this thesis, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree-of-Freedom (DoF) system and to fit all users' needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed over the skin. Secondly, I developed a vibrotactile system implementing haptic feedback to restore proprioception and create a bidirectional connection between the user and the prosthesis. Similarly, I implemented object stiffness detection to restore tactile sensation, connecting the user with the external world. This closed loop between EMG control and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface with a strong impact on amputees' daily lives. For each of these three activities, (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study collecting data from healthy subjects and amputees, in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects' ability to use the prosthesis by means of the F1 score (offline) and the Target Achievement Control (TAC) test (online). With this test, I analysed the error rate, path efficiency, and time efficiency in completing different tasks. Main results. Among the several tested methods for pattern recognition, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1 score (99%, robustness), and the offline analyses determined that a minimum of four electrodes was needed for its functioning. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency). Finally, the online implementation allowed subjects to simultaneously control the DoFs of the Hannes prosthesis in a bioinspired, human-like way. In addition, I performed further tests with the same NLR-based control endowed with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). These results demonstrate an improvement in the controllability of the system, with an impact on user experience. Significance. The obtained results confirmed the hypothesis that the implemented closed-loop approach improves the robustness and efficiency of prosthetic control. The bidirectional communication between user and prosthesis can restore lost sensory functionality, with promising implications for direct translation into clinical practice.
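
    The exact NLR formulation is given in the thesis; as one plausible reading, the Python sketch below fits a logistic model over a non-linear polynomial expansion of windowed EMG features from four electrodes (the minimum found above), with synthetic data and an F1-score evaluation loosely mirroring the offline protocol. All data and class labels are invented for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures, StandardScaler

        rng = np.random.default_rng(0)

        # Synthetic stand-in for windowed EMG envelopes from four electrodes;
        # the three classes might be hand open, hand close, and wrist rotation.
        X = rng.gamma(shape=2.0, scale=1.0, size=(1200, 4))
        y = rng.integers(0, 3, size=1200)

        # "Non-linear logistic regression" read here as logistic regression
        # over a degree-2 polynomial feature expansion of the EMG features.
        nlr = make_pipeline(
            StandardScaler(),
            PolynomialFeatures(degree=2, include_bias=False),
            LogisticRegression(max_iter=1000),
        )
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        nlr.fit(X_tr, y_tr)
        print(f"F1 (macro): {f1_score(y_te, nlr.predict(X_te), average='macro'):.2f}")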

    Touch Technology in Affective Human, Robot, Virtual-Human Interactions: A Survey

    Given the importance of affective touch in human interactions, technology designers are increasingly attempting to bring this modality to the core of interactive technology. Advances in haptics and touch-sensing technology have been critical to fostering interest in this area. In this survey, we review how affective touch is investigated to enhance and support the human experience with or through technology. We explore this topic across three different research areas to highlight their epistemology, main findings, and the challenges that persist. First, we review affective touch technology through the human–computer interaction literature to understand how it has been applied to the mediation of human–human interaction, and its roles in other human interactions, particularly with oneself, augmented objects/media, and affect-aware devices. We further highlight the datasets and methods that have been investigated for the automatic detection and interpretation of affective touch in this area. In addition, we discuss the modalities of affective touch expression in both humans and technology in these interactions. Second, we separately review how affective touch has been explored in human–robot and real-human–virtual-human interactions, where the technical challenges encountered and the types of experience aimed at are different. We conclude with a discussion of the gaps and challenges that emerge from the review, to steer research in directions that are critical for advancing affective touch technology and recognition systems. In our discussion, we also raise ethical issues that should be considered for responsible innovation in this growing area.