
    Graceful User Following for Mobile Balance Assistive Robot in Daily Activities Assistance

    Numerous diseases and aging can degrade people's balance ability, resulting in limited mobility and even a high risk of falls. Robotic technologies can provide more intensive rehabilitation exercises or be used as assistive devices to compensate for balance ability. However, with the new healthcare paradigm shifting from hospital care to home care, there is a gap in robotic systems that can provide care at home. This paper introduces the Mobile Robotic Balance Assistant (MRBA), a compact and cost-effective balance assistive robot that can provide both rehabilitation training and activities of daily living (ADLs) assistance at home. A three degrees of freedom (3-DoF) robotic arm was designed to mimic the therapist's arm function and provide balance assistance to the user. To minimize interference with the user's natural pelvis movements and gait patterns, the robot must have a Human-Robot Interface (HRI) that can detect user intention accurately and follow the user's movement smoothly and in a timely manner. Thus, a graceful user-following control rule was proposed. The overall control architecture consists of two parts: an observer for human input estimation and an LQR-based controller with disturbance rejection. The proposed controller was validated in high-fidelity simulation with actual human trajectories, and the results show the effectiveness of the method in different walking modes.
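
    Since the abstract names an LQR-based controller with disturbance rejection, a minimal sketch of how such a gain could be computed may help; the double-integrator model, weights, and the observer output `d_hat` below are illustrative assumptions, not the MRBA's actual design.

```python
# Minimal LQR sketch for a user-following base, assuming a double-integrator
# state x = [tracking error, velocity error]. Model and weights are
# illustrative, not the MRBA's actual parameters.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # double-integrator dynamics
B = np.array([[0.0],
              [1.0]])             # control input: base acceleration
Q = np.diag([10.0, 1.0])          # penalize tracking error over velocity error
R = np.array([[0.1]])             # control effort weight

P = solve_continuous_are(A, B, Q, R)   # solve the continuous Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # LQR gain: u = -K x

def control(x, d_hat):
    """State feedback plus feedforward rejection of the estimated human-input
    disturbance d_hat (the role the paper's observer plays)."""
    return (-(K @ x) - d_hat).item()

print("LQR gain:", K)
print("control:", control(np.array([0.2, 0.0]), d_hat=0.05))
```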

    Human-Robot Interfacing by the Aid of Cognition Based Interaction


    Development of a Voice-Controlled Human-Robot Interface

    The goal of this thesis is to develop a voice-controlled human-robot interface (HRI) which allows a person to control and communicate with a robot. Dragon NaturallySpeaking, a commercially available automatic speech recognition engine, was chosen for the development of the proposed HRI. To achieve this goal, the Dragon software is used to create custom commands (or macros) which must satisfy the tasks of (a) directly controlling the robot with voice, (b) writing a robot program with voice, and (c) developing an HRI which allows the human and robot to communicate with each other using speech. The key is to generate keystrokes upon recognizing speech, using three types of macro: step-by-step, macro recorder, and advanced scripting. Experiments were conducted in three phases to test the functionality of the developed macros in accomplishing all three tasks. The results showed that the advanced scripting macro is the only type that works. It is also the most suitable for the task because it is quick and easy to create and can be used to develop flexible and natural voice commands. Since the output of a macro is a series of keystrokes, which forms the syntax of the robot program, macros developed with the Dragon software can be used to communicate with virtually any robot by adjusting the output keystrokes.
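
    The mechanism described, recognized speech converted into keystrokes that form robot-program syntax, can be sketched as a simple lookup; the command phrases and program syntax below are hypothetical, and the thesis realizes this with Dragon advanced scripting macros rather than Python.

```python
# Sketch of the macro idea: a recognized voice phrase is translated into the
# keystrokes of a robot-program statement. Commands and syntax are invented
# for illustration only.
VOICE_TO_KEYSTROKES = {
    "move forward":  "MOVE 100, 0, 0\n",
    "rotate left":   "ROTATE -90\n",
    "open gripper":  "GRIP OPEN\n",
    "close gripper": "GRIP CLOSE\n",
}

def keystrokes_for(phrase: str) -> str:
    """Return the keystroke string a macro would 'type' into the robot
    programming environment, or raise if the phrase is unknown."""
    try:
        return VOICE_TO_KEYSTROKES[phrase.lower().strip()]
    except KeyError:
        raise ValueError(f"unrecognized command: {phrase!r}")

if __name__ == "__main__":
    program = "".join(keystrokes_for(p) for p in
                      ["move forward", "rotate left", "close gripper"])
    print(program)   # the robot-program text produced from three utterances
```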

    POSEIDON Project: Objectives and Preliminary Results

    This paper presents preliminary results of the ongoing three-year project POSEIDON (imProving underwater cOoperative manipulation by meanS of lEarnIng, augmenteD reality and wIreless cOmmunicatioNs). This project is itself a sub-project of a larger one, COOPERAMOS (COOPErative Resident Robots for Autonomous ManipulatiOn Subsea). The aim and specific objectives of the project are presented, as well as some preliminary results on simulation, HRI, and communications. Funded by the PID2020-115332RBC31 (COOPERAMOS), IDIFEDER/2018/013 (GV), UJI-B2021-30 (AUDAZ) and EU H2020-Peacetolero-NFRP-2019-2020-04 projects. https://doi.org/10.17979/spudc.978849749841

    Cooperative and Multimodal Capabilities Enhancement in the CERNTAURO Human–Robot Interface for Hazardous and Underwater Scenarios

    The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks potentially dangerous for humans. However, currently available robotic systems lack the level of usability that would allow inexperienced operators to accomplish complex tasks. Moreover, the task's complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting large objects). In this paper, a system allowing an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Organization for Nuclear Research, to perform remote interventions in its particle accelerator complex, as part of the CERNTAURO project. In this paper, the modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface has been validated at CERN in its Large Hadron Collider (LHC) mockup using a team of two mobile robotic platforms, each equipped with a robotic manipulator. Moreover, great similarities were identified between the CERNTAURO and TWINBOT projects, which aim to create usable robotic systems for underwater manipulation. Therefore, the cooperative behaviours were also validated in a multi-robot pipe transport scenario in a simulated underwater environment, experimenting with more advanced vision techniques. The cooperative teleoperation can be coupled with additional assistive tools such as vision-based tracking, grasp determination for metallic objects, and communication protocol design. The results show that the cooperative behaviours enable a single user to carry out a robotic intervention with more than one robot in a safer way.
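
    A rough sketch of the mission-plan idea, one operator activating a cooperative behaviour that drives several agents at once, is given below; all class and field names are assumptions for illustration, not the CERNTAURO interface's actual API.

```python
# Illustrative sketch: a cooperative behaviour from a mission plan dispatches
# ordered actions to multiple remote agents. Names are invented; the real
# interface is far richer than this.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    log: list = field(default_factory=list)

    def command(self, action: str):
        self.log.append(action)            # stand-in for a teleoperation link

@dataclass
class CooperativeBehaviour:
    name: str
    steps: dict                            # agent name -> ordered actions

    def activate(self, agents):
        for agent in agents:
            for action in self.steps.get(agent.name, []):
                agent.command(action)

# Two-robot pipe transport, echoing the validation scenario.
robots = [Agent("mobile_arm_1"), Agent("mobile_arm_2")]
transport = CooperativeBehaviour(
    name="pipe_transport",
    steps={
        "mobile_arm_1": ["approach pipe", "grasp left end", "lift", "follow path"],
        "mobile_arm_2": ["approach pipe", "grasp right end", "lift", "follow path"],
    },
)
transport.activate(robots)
for r in robots:
    print(r.name, "->", r.log)
```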

    Prototype of a Control System Based on a Voice Recognition Sensor and a Microcontroller for Actuator Operation

    A prototype control system for actuator operation based on a voice recognition sensor and a microcontroller has been built through the following steps: (1) integrating the voice recognition sensor into the microcontroller system, comprising (a) wiring design using the Eagle application, (b) fabrication of the board for the ATmega16 microcontroller system, (c) system wiring, and (d) placement of the voice recognition sensor on the ATmega16 microcontroller system; (2) programming in the BasCom language, comprising (a) flowchart (algorithm) design, (b) syntax writing, and (c) verification testing of the BasCom program in the Proteus application; (3) measuring the performance of the voice-controlled actuator system, comprising (a) monitoring and measuring sensor performance by simulating voice input to the sensor and (b) explaining the mechanism for controlling a lamp and a fan based on the states detected by the sensor. Integration of the sensor into the ATmega16 system shows that the ATmega16 ports are used for (i) the power supply with a 7805 regulator integrated circuit (IC), (ii) the voice recognition sensor, (iii) the downloader, and (iv) the outputs. For programming the ATmega16 to operate the system, the BasCom program was implemented in several stages: (i) pin configuration, (ii) variable declaration, (iii) constant declaration, (iv) initialization, (v) main program, (vi) data acquisition and transmission, and (vii) output. The simulation results are close to those expected: the sensor can read commands, which are then sent to the microcontroller system.
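
    The command-dispatch stage of the main program can be sketched as follows; the command codes are assumptions, and the original runs as a BasCom program on the ATmega16 rather than in Python.

```python
# Simulation of the control logic described above: the voice recognition
# sensor reports a command code, and the controller switches the lamp and
# fan accordingly. The command table is an assumption for illustration.
LAMP_ON, LAMP_OFF, FAN_ON, FAN_OFF = range(4)

state = {"lamp": False, "fan": False}

def handle_command(code: int):
    """Mimics the main loop's 'read data, drive outputs' stage."""
    if code == LAMP_ON:
        state["lamp"] = True
    elif code == LAMP_OFF:
        state["lamp"] = False
    elif code == FAN_ON:
        state["fan"] = True
    elif code == FAN_OFF:
        state["fan"] = False

for code in [LAMP_ON, FAN_ON, LAMP_OFF]:   # simulated recognized commands
    handle_command(code)
    print(state)
```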

    Alive Human Detection Robot using PIR sensor

    Modern technology has paved the way for the construction of tall buildings and houses, which increases the risk of loss of life during both natural and human-made disasters. Moreover, in harsh weather conditions people may be stranded at various locations, making it challenging to find and assist them. To address this issue, we propose an approach that uses a robot equipped with sensor technology to locate humans and determine their status. The proposed solution is a robot based on passive infrared (PIR) sensors. The robot carries a set of sensors, including ultrasonic and PIR sensors connected to a microcontroller, to detect signs of life. If a person is found to be alive, a buzzer is activated. The robot then shares the person's location through a global positioning system (GPS) module, sending this information to the receiver as a message. In this way, timely assistance can be provided to individuals in need, potentially saving lives in critical situations. The robot is operated via a Bluetooth module and uses an ultrasonic sensor to navigate autonomously.
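
    The detection-and-alert loop described above might look roughly like the following; the sensor reads are simulated, and the thresholds, coordinates, and SMS hook are assumptions rather than the robot's actual firmware.

```python
# Sketch of the detect-buzz-report loop with simulated sensor reads. Pin
# handling, the GSM send, and all thresholds are illustrative assumptions.
import random
import time

def read_pir() -> bool:
    return random.random() < 0.3          # stand-in for PIR motion output

def read_ultrasonic_cm() -> float:
    return random.uniform(10, 300)        # stand-in for echo distance

def send_sms(message: str):
    print("SMS ->", message)              # stand-in for the messaging module

def patrol_step(lat: float, lon: float):
    """One loop iteration: if the PIR sees motion near a close obstacle,
    treat it as a live person, sound the buzzer, and report the GPS fix."""
    if read_pir() and read_ultrasonic_cm() < 100:
        print("BUZZER ON")                # alert rescuers nearby
        send_sms(f"Alive person detected at {lat:.5f}, {lon:.5f}")

for _ in range(5):
    patrol_step(13.08268, 80.27072)       # hypothetical GPS coordinates
    time.sleep(0.1)
```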

    Facial recognition for human disposition identification

    Autonomous identification of human facial disposition is beneficial in a majority of applications, including healthcare, customer satisfaction, criminal investigation and Human-Robot Interaction (HRI). Deep learning techniques are able to classify human expressions into emotion categories via Convolutional Neural Networks (CNNs), a well-known deep learning approach for maintaining accuracy. A CNN can be trained to analyze and differentiate multiple human facial dispositions, since it is made up of many intermediate stages, namely an input layer, hidden layers and an output layer, which play a significant part in generating a precise outcome with minimal steps. In this research, we aim to develop an autonomous system that can recognize and differentiate multiple human facial dispositions. This study validates the models by creating a real-time vision system that mainly includes three phases: face detection through Haar cascades, normalization, and emotion recognition and classification using the proposed CNN architecture on the FER-2013 database with seven universal emotion categories: Happiness, Sadness, Anger, Disgust, Surprise, Fear and Neutral.
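
    The three-phase pipeline can be sketched as follows; the layer sizes are assumptions rather than the paper's exact architecture, and the model below is untrained.

```python
# Sketch of the pipeline: Haar-cascade face detection, normalization to
# 48x48 grayscale (the FER-2013 input size), and a small CNN classifier.
import cv2
import numpy as np
from tensorflow.keras import layers, models

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_normalize(frame):
    """Phases 1 and 2: find faces, crop, and scale to 48x48 grayscale."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y+h, x:x+w], (48, 48)) / 255.0
            for (x, y, w, h) in faces]

def build_cnn():
    """Phase 3: a compact CNN for seven-way emotion classification."""
    return models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(len(EMOTIONS), activation="softmax"),
    ])

model = build_cnn()   # train on FER-2013 before use
model.summary()
```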