
    A Novel Speech to Mouth Articulation System for Realistic Humanoid Robots

    A significant ongoing issue in realistic humanoid robotics (RHRs) is inaccurate speech to mouth synchronisation. Even the most advanced robotic systems cannot authentically emulate the natural movements of the human jaw, lips and tongue during verbal communication. These visual and functional irregularities have the potential to propagate the Uncanny Valley Effect (UVE) and reduce speech understanding in human-robot interaction (HRI). This paper outlines the development and testing of a novel Computer Aided Design (CAD) robotic mouth prototype with buccinator actuators for emulating the fluidic movements of the human mouth. The robotic mouth system incorporates a custom Machine Learning (ML) application that measures the acoustic qualities of speech synthesis (SS) and translates this data into servomotor triangulation for triggering jaw, lip and tongue positions. The objective of this study is to improve current robotic mouth design and provide engineers with a framework for increasing the authenticity, accuracy and communication capabilities of RHRs for HRI. The primary contributions of this study are the engineering of a robotic mouth prototype and the programming of a speech processing application that achieved a 79.4% syllable accuracy, 86.7% lip synchronisation accuracy and 0.1s speech to mouth articulation differential.
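    The acoustic-to-servo mapping described above can be sketched in outline. This is a minimal illustration, not the paper's pipeline: the function names, servo range, and energy-to-angle mapping are assumptions, and the published system uses a trained ML model rather than a fixed formula.

```python
import math

# Hypothetical sketch: map per-frame speech energy to a jaw servo angle.
# The RMS energy of each audio frame is normalised and scaled into an
# assumed mechanical range for the jaw servo.

JAW_CLOSED_DEG = 0.0   # assumed servo angle for a closed jaw
JAW_OPEN_DEG = 35.0    # assumed maximum jaw opening

def frame_rms(samples):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def jaw_angle(samples, peak_rms=1.0):
    """Map frame energy to a servo angle, clamped to the jaw's range."""
    level = min(frame_rms(samples) / peak_rms, 1.0)
    return JAW_CLOSED_DEG + level * (JAW_OPEN_DEG - JAW_CLOSED_DEG)

# Silence keeps the jaw closed; a full-scale frame opens it fully.
silence = [0.0] * 160
loud = [1.0, -1.0] * 80
```

    In a real system this per-frame angle would be smoothed and streamed to the servo in time with playback, which is where the 0.1s articulation differential reported above becomes the figure of merit.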

    Implementation of an Intelligent Fire-Extinguishing Robot with Multi-Independent Steering

    Abstract - Robot technology has emerged to ease human work, with devices such as fire-fighting robots, rescue robots, surveillance robots, and other robots whose functions differ according to field needs. In this paper we present the implementation of an intelligent fire-extinguishing robot with multi-independent steering that can be directed to the location of a fire to find the source of hotspots. With multi-independent steering, each wheel of the robot can move in any direction, letting it navigate nimbly. We run a simulation in which the robot searches for a candle as the source of a hotspot; when the robot finds the source of the fire, it extinguishes the candle automatically. The robot contains the following functional components: an ultrasonic sensor to sense the surrounding field, a light sensor to locate hotspots or light sources, and a fan to extinguish the candle or point of fire. Although the robot's design is minimalist and small, it offers protection at high temperatures, excellent waterproofing when exposed to water spray, and high impact resistance, and it finds its way to hotspots on the fire field automatically, without being controlled. This study aims to provide an automatic fire-extinguishing system that can assist and lighten the load of the firefighting profession, which carries a high level of risk in its operation. The results of this study show that an intelligent robot using the multi-independent steering method can minimize the occupational risk of the firefighting profession: with each wheel able to move freely, the robot can move freely in all directions as a type of holonomic motion.
    Keywords: fire extinguisher robot, robotic system, portable evacuation guide, multi-independent steering
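    The candle-seeking behaviour described in the abstract can be sketched as a simple decision rule. This is an illustrative sketch only; the sensor layout, thresholds, and action names are assumptions, not the authors' implementation.

```python
# Hypothetical control logic: pick the direction of the brightest light
# reading, avoid obstacles reported by the ultrasonic sensor, and stop
# to run the fan once the flame reading crosses a "near" threshold.

FLAME_NEAR = 900   # assumed light-sensor reading meaning the flame is close
OBSTACLE_CM = 15   # assumed ultrasonic distance treated as blocked

def decide(light, distance_cm):
    """light: readings for (left, front, right); returns an action string."""
    if max(light) >= FLAME_NEAR:
        return "stop_and_fan"   # extinguish the candle
    if distance_cm < OBSTACLE_CM:
        return "avoid"          # obstacle ahead, steer around it
    # Otherwise steer toward the strongest light source.
    return ("left", "forward", "right")[light.index(max(light))]
```

    With multi-independent steering, each returned action would be translated into per-wheel angle and speed commands rather than differential-drive turns, which is what gives the platform its holonomic motion.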

    A Retro-Projected Robotic Head for Social Human-Robot Interaction

    As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze direction control, facial expressions and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, a R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, as well as improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations present the first study on human performance in reading robotic gaze and another first on users' ethnic preference towards a robot face.

    Support tools for 3D game creation

    Nowadays, tools for developing videogames are a very important part of the development process in the game industry. Such tools assist game developers in their tasks, allowing them to create functional games while writing only a few lines of code. For example, these tools allow users to import content for the game, set the game logic, or produce the source code and compile it. Several tasks and components in videogame development can become unproductive, so it is necessary to automate and/or optimize them. For example, programming events or dialogs can consume too much time in the development cycle and be a tedious, repetitive task for the programmer. For this reason, tools that support these tasks can be very important for increasing productivity and helping maintain the various processes involved in developing videogames. This dissertation aims to demonstrate the advantages of using this kind of tool during the development of videogames, presenting a case study involving the development of a Serious Game entitled Clean World. Developing a videogame without specialized tools is a complex and time-consuming process; such software is typically used both by hobbyist game creators and by professionals seeking to optimize their development process. In Clean World, certain tasks proved too repetitive and tedious when programmed entirely by hand, such as adding, modifying or removing components like dialogs, quests or items. To address this problem, a set of tools was created to increase productivity during the game's development, turning repetitive and tedious tasks into simple and intuitive processes. The toolset consists of: Item Manager, Quest Manager, Dialog Manager and Terrain Creator.
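    Tools like the Dialog Manager described above typically work by storing dialog as plain data that non-programmers can edit, with the game loading it at runtime. The sketch below is a minimal illustration under assumed field names; it is not Clean World's actual format.

```python
import json

# Hypothetical dialog definition: one dialog id and an ordered list of
# speaker/text lines. Editing this data adds or changes dialog without
# touching game code, which is the productivity gain the tools target.
dialog_json = """
{
  "id": "npc_greeting",
  "lines": [
    {"speaker": "NPC", "text": "Welcome to Clean World!"},
    {"speaker": "Player", "text": "What should I recycle first?"}
  ]
}
"""

def load_dialog(raw):
    """Parse a dialog definition and return (id, list of (speaker, text))."""
    data = json.loads(raw)
    return data["id"], [(l["speaker"], l["text"]) for l in data["lines"]]
```

    A quest or item manager follows the same pattern: the tool edits the data file, and the game only ever reads it.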

    Persuasive interactive non-verbal behaviour in embodied conversational agents

    Realism for embodied conversational agents (ECAs) requires both visual and behavioural fidelity. One significant area of ECA behaviour that has to date received little attention is non-verbal behaviour. Non-verbal behaviour occurs continually in all human-human interactions, and has been shown to be highly important in those interactions. Previous research has demonstrated that people treat media (and therefore ECAs) as real people, and so non-verbal behaviour is also important in the development of ECAs. ECAs that use non-verbal behaviour when interacting with humans or other ECAs will be more realistic, more engaging, and have higher social influence. This thesis gives an in-depth view of non-verbal behaviour in humans, followed by an exploration of the potential social influence of ECAs using a novel Wizard of Oz style approach with synthetic ECAs. It is shown that ECAs have the potential to have no less social influence (as measured using a direct measure of behaviour change) than real people, and also that it is important that ECAs have visual feedback on their interactants for this social influence to be maximised. Throughout this thesis there is a focus on empirical evaluation of ECAs, both as a validation tool and to provide directions for future research and development. Present ECAs frequently incorporate some form of non-verbal behaviour, but this is quite limited and, more importantly, not strongly connected to the behaviour of a human interactant. This interactional aspect of non-verbal behaviour is important in human-human interactions, and results from the study of the persuasive potential of ECAs support that this importance carries over to human-ECA interactions. The challenges in creating non-verbally interactive ECAs are introduced and, by drawing parallels with robotics control-systems development, behaviour-based architectures are presented as a solution to these challenges and implemented in a prototypical ECA.
    Evaluation of this ECA, using the methodology applied earlier in this thesis, demonstrates that an ECA whose non-verbal behaviour responds to its interactant is rated more positively than one whose behaviour does not, indicating that directly measurable social influences will be possible with further development.
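    The behaviour-based arbitration borrowed from robotics control can be sketched as a priority-ordered list of behaviours, each proposing an action or abstaining. The behaviour names and state fields below are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical subsumption-style arbitration: each behaviour inspects the
# interactant's state and either proposes an action or returns None; the
# highest-priority proposal drives the ECA, so reactive behaviours
# override idle defaults.

def gaze_at_speaker(state):
    """React to the interactant: look at them while they speak."""
    return "look_at_interactant" if state.get("speaking") else None

def idle_blink(state):
    """Lowest-priority default behaviour, always proposes an action."""
    return "blink"

BEHAVIOURS = [gaze_at_speaker, idle_blink]  # highest priority first

def select_action(state):
    for behaviour in BEHAVIOURS:
        action = behaviour(state)
        if action is not None:
            return action
```

    The appeal of this layered design for non-verbal behaviour is that new reactive behaviours can be added without rewriting the existing ones, mirroring how behaviour-based robot controllers are extended.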

    Is Multimedia Multisensorial? - A Review of Mulsemedia Systems

    © 2018 Copyright held by the owner/author(s). Mulsemedia - multiple sensorial media - makes possible the inclusion of layered sensory stimulation and interaction through multiple sensory channels. The recent upsurge in technology and wearables provides mulsemedia researchers with a vehicle for potentially boundless choice. However, in order to build systems that integrate various senses, there are still some issues that need to be addressed. This review deals with mulsemedia topics that remained insufficiently explored by previous work, with a focus on the multi-multi (multiple media - multiple senses) perspective, where multiple types of media engage multiple senses. Moreover, it addresses the evolution of previously identified challenges in this area and formulates new exploration directions. This article was funded by the European Union's Horizon 2020 Research and Innovation program under Grant Agreement no. 688503.

    On the Influence of Social Robots in Cognitive Multitasking and Its Application

    [Objective] I clarify the impact of social robots on cognitive tasks, such as driving a car or piloting an airplane, and show the possibility of industrial applications based on the principles of social robotics. [Approach] I adopted the MATB, a generalized version of automobile and airplane operation tasks, as the cognitive tasks, evaluating participants' performance on widely applicable reaction-speed, tracking, and short-term memory tasks rather than on tasks specific to a particular situation. As the stimulus from social robots, I used the iCub robot, which has been widely used in social communication research. In analyzing participants, I examined not only performance but also mental workload, using skin conductance, and arousal-valence emotion, using facial expression analysis. In the first experiment, I compared a social robot that uses social signals with a nonsocial robot that does not, and evaluated whether social robots affect cognitive task performance. In the second experiment, I focused on vitality forms and compared a calm social robot with an assertive social robot. As analysis methods, I adopted the Mann-Whitney U test for one-pair comparisons and ART-ANOVA for analysis of variance in repeated task comparisons. [Main results] In cognitive tasks such as car driving and airplane piloting, I clarified the effects of social robots performing social behaviors on task performance, mental workload, and emotions, and showed that the presence of social robots can be effective in cognitive tasks. Furthermore, focusing on vitality forms, one of the parameters of social behaviors, I clarified the effects of different vitality forms of social robots' behavior on cognitive tasks, and found that social robots with calm behaviors positively affected participants' facial expressions and improved their performance in a short-term memory task. Based on these results, I adopted the configuration of a robot head, eliminating the torso from the social humanoid robot iCub, considering placement in limited spaces such as car and airplane cockpits. In designing the robot head, I developed a novel soft-material eyebrow that can be mounted on the iCub robot head to achieve continuous position and velocity changes, an important factor in expressing vitality forms; for this I used a wire-driven technique, which is widely used in surgical robots to control soft materials. The novel eyebrows can express different vitality forms by changing their shape and velocity, which was conventionally represented by the iCub's torso and arms. [Significance] The results of my research are important achievements that open up the possibility of applying social robots to non-robotic industries such as automotive and aircraft. In addition, the precise shape and velocity changes of the newly developed soft-material eyebrows open up new research possibilities in social robotics and social communication research, enabling experiments with complex facial expressions beyond Ekman's definition of simple facial expression changes such as joy, anger, sadness, and pleasure. Thus, the results of this research are an important step toward both scientific and industrial applications. [Key-words] social robot, cognitive task, vitality form, robot head, facial expression, eyebrow
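    The Mann-Whitney U test mentioned above compares two independent groups without assuming normality. The sketch below computes the U statistic directly, ignoring tie correction, on invented sample scores; it is illustrative only and does not reproduce the study's data.

```python
# Illustrative Mann-Whitney U: count, over all cross-group pairs, how
# often a value from sample a exceeds one from sample b (ties count half).
# A U near 0 or near len(a)*len(b) suggests the groups differ.

def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b (no tie correction)."""
    return (sum(1 for x in a for y in b if x > y)
            + 0.5 * sum(1 for x in a for y in b if x == y))

# Invented short-term-memory scores for two robot conditions.
calm = [12, 15, 14, 16]
assertive = [9, 11, 10, 13]
```

    In practice the U statistic is converted to a p-value (e.g. via a statistics library); the two one-sided U values always sum to the number of cross-group pairs, a useful sanity check.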

    Sensory Computing and Object Processing Entity: Assistive Robotics for Healthcare

    Team SCOPE has created an assistive robot for healthcare delivery. The robot is mobile, responds to spoken commands, and possesses Artificial Intelligence (AI). It extracts meanings about the patient’s health from conversations and visual interactions. It summarizes these observations into reports that could be merged with the patient’s Electronic Health Records (EHRs). This process aids healthcare professionals in delivering better care by augmenting attendance, increasing accuracy of patient information collection, aiding in diagnosis, streamlining data collection, and automating the process of ingesting and incorporating this information into EHR systems. SCOPE’s solution uses cloud-based AI services along with local processing. Using VEX Robotics parts and an Arduino microcontroller, SCOPE created a mobile platform for the robot. The robotic platform implements basic motions and obstacle avoidance. These separate systems are integrated using a Java master program, Node-Red, and IBM Watson cloud services. The resulting AI can be expanded for different applications within healthcare delivery.