2,613 research outputs found

    Robotic Remote Surveillance and Control through Speech Recognition

    Get PDF
    This paper deals with a remote robotic surveillance system controlled through speech processing. Robotic remote surveillance and control through speech recognition is a simple kind of cyber-physical system (CPS). A cyber-physical system connects the cyber world with the physical world around us. Sensors in the network map physical parameters into digital form and share the information with processors, and the CPS intelligently makes a decision after computing. Finally, the decision command is translated into the physical world by actuators. Speech commands from a user at a distant location are carried wirelessly to a multifunctional robot unit. A robotic arm mounted on a base acts on the voice commands sent over the medium. The desired surveillance is facilitated by the movement of the robot and the installed surveillance unit. The video stream fed to the user constitutes the sensing of the physical environment, while the actions of the arm represent the role of the actuator. The system can be used in heavy industry in any environment.
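    The sense-decide-actuate loop described in this abstract can be sketched minimally as a mapping from recognized speech to actuator commands. The command vocabulary, the COMMAND_MAP table, and the dispatch function below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the CPS pipeline above: a recognized speech command
# is translated into an actuator instruction for the robot base or arm.
# COMMAND_MAP and dispatch are illustrative names, not from the paper.

COMMAND_MAP = {
    "forward": ("base", "move", 1),
    "back": ("base", "move", -1),
    "grab": ("arm", "grip", 1),
    "release": ("arm", "grip", 0),
}

def dispatch(recognized_text):
    """Translate recognized speech into a (unit, action, value) command."""
    word = recognized_text.strip().lower()
    if word not in COMMAND_MAP:
        return ("none", "ignore", 0)  # unknown commands are safely ignored
    return COMMAND_MAP[word]

print(dispatch("Forward"))  # ('base', 'move', 1)
```

    In a real system the recognized text would come from a speech-recognition engine and the command tuple would be serialized over the wireless link, but the decision step has this table-lookup shape.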

    Viia-hand: a Reach-and-grasp Restoration System Integrating Voice interaction, Computer vision and Auditory feedback for Blind Amputees

    Full text link
    Visual feedback plays a crucial role when amputees complete grasping tasks under prosthesis control. For blind and visually impaired (BVI) amputees, however, the loss of both visual and grasping abilities turns the "easy" reach-and-grasp task into a formidable challenge. In this paper, we propose a novel multi-sensory prosthesis system that helps BVI amputees with sensing, navigation, and grasp operations. It combines modules for voice interaction, environmental perception, grasp guidance, collaborative control, and auditory/tactile feedback. In particular, the voice interaction module receives user instructions and invokes the other functional modules accordingly. The environmental perception and grasp guidance module obtains environmental information through computer vision and feeds it back to the user through auditory feedback modules (voice prompts and spatial sound sources) and tactile feedback modules (vibration stimulation). The prosthesis collaborative control module obtains the context of the grasp guidance process and, in conjunction with the user's control intention, collaboratively controls the grasp gestures and wrist angles of the prosthesis to achieve a stable grasp of various objects. This paper details a prototype design (named viia-hand) and presents its preliminary experimental verification on healthy subjects completing specific reach-and-grasp tasks. Our results showed that, with the help of the new design, subjects were able to achieve a precise reach and a reliable grasp of the target objects in a relatively cluttered environment. Additionally, the system is extremely user-friendly: users can adapt to it quickly with minimal training.
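    The abstract's central mechanism, a voice-interaction module that receives instructions and invokes the other functional modules, amounts to keyword-based dispatch. The sketch below is an assumed illustration of that routing; the class and handler names are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the authors' code) of the voice-interaction module
# routing user instructions to the system's other functional modules.

class ProsthesisSystem:
    def __init__(self):
        self.modules = {}  # instruction keyword -> module handler

    def register(self, keyword, handler):
        """Attach a functional module to an instruction keyword."""
        self.modules[keyword] = handler

    def on_voice_command(self, text):
        """Invoke the first module whose keyword appears in the instruction."""
        for keyword, handler in self.modules.items():
            if keyword in text.lower():
                return handler(text)
        return "voice prompt: command not understood"

system = ProsthesisSystem()
system.register("find", lambda t: "perception: locating object, emitting spatial sound")
system.register("grasp", lambda t: "control: closing gesture, adjusting wrist angle")
print(system.on_voice_command("Please grasp the cup"))
```

    The fallback reply mirrors the system's auditory-feedback role: an unrecognized instruction still produces a voice prompt rather than silence.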

    Advancing automation and robotics technology for the space station and for the US economy: Submitted to the United States Congress October 1, 1987

    Get PDF
    In April 1985, as required by Public Law 98-371, the NASA Advanced Technology Advisory Committee (ATAC) reported to Congress the results of its studies on advanced automation and robotics technology for use on the space station. This material was documented in the initial report (NASA Technical Memorandum 87566). A further requirement of the Law was that ATAC follow NASA's progress in this area and report to Congress semiannually. This report is the fifth in a series of progress updates and covers the period between 16 May 1987 and 30 September 1987. NASA has accepted the basic recommendations of ATAC for its space station efforts. ATAC and NASA agree that the mandate of Congress is that advanced automation and robotics technology be built to support an evolutionary space station program and serve as a highly visible stimulus to the long-term U.S. economy.

    Guido and Am I Robot? A Case Study of Two Robotic Artworks Operating in Public Spaces

    Get PDF
    This article is a case study of two artworks that were commissioned for and exhibited in art venues in 2016 and 2017. The first artwork, Guido the Robot Guide, guided the visitors to an art-science exhibition, presenting the exhibits with a robot's perspective. Guido was the result of a collaboration between artists and engineers. The concept was an irreverent robot guide that could switch transparently from autonomous mode to operator control, allowing for seamless natural interaction. We examine how the project unfolded, its successes and limitations. Following on Guido, the lead artist developed the robotic installation Am I Robot? where the idea of a hybrid autonomous/remote-manual mode was implemented fully in a non-utilitarian machine that was exhibited in several art galleries. The article provides a concise contextualisation and details technical and design aspects as well as observations of visitors' interactions with the artworks. We evaluate the hybrid system's potential for creative robotics applications and identify directions for future research

    Design and implementation of a domestic disinfection robot based on 2D lidar

    Get PDF
    In the battle against Covid-19, the demand for disinfection robots in China and other countries has increased rapidly. Manual disinfection is time-consuming, laborious, and carries safety hazards. For large public areas, the deployment of human resources and the effectiveness of disinfection face significant challenges, so using robots for disinfection becomes an ideal choice. At present, most disinfection robots on the market use ultraviolet light, disinfectant, or both, and are mostly deployed in hospitals, airports, hotels, shopping malls, office buildings, and other places with high daily foot traffic. These robots often have built-in automatic navigation and intelligent recognition to ensure day-to-day operation, but they are usually expensive and need regular maintenance. Sweeping robots and window-cleaning robots have been put into massive use, but domestic disinfection robots have not gained much attention, even though the health and safety of a family are also critical in epidemic prevention. This thesis proposes and implements a low-cost domestic disinfection robot based on 2D lidar. The robot provides dry-fog disinfection, ultraviolet disinfection, and air cleaning. The thesis is mainly engaged in the following work.
    The design and implementation of the control board of the robot chassis are elaborated. The control board uses an STM32F103ZET6 as the MCU. Infrared sensors prevent the robot from falling over and let it follow walls. An ultrasonic sensor installed at the front of the chassis detects and avoids obstacles in the robot's path. Photoelectric switches record potential collisions during the early phase of mapping. For air purification, the robot adopts a centrifugal fan and a HEPA filter. A ceramic atomizer breaks the disinfectant into fine droplets to produce the dry fog, and a UV germicidal lamp installed at the bottom of the chassis disinfects the ground. An air pollution sensor estimates air quality, and motors drive the chassis. The lidar transmits its data to the navigation board directly through wires and the edge-board contact on the control board. The control board also manages the atmosphere LEDs, horn, push-buttons, battery, LCD, and temperature-humidity sensor. It exchanges data with, and executes commands from, the navigation board and manages all kinds of peripheral devices; it is thus the administrative unit of the disinfection robot. Moreover, the robot is designed in a way that reduces cost while ensuring quality.
    The control board's embedded software is realized and analyzed in the thesis. The communication protocol that links the control board and the navigation board is implemented in software: standard commands, specific commands, error handling, and the data packet format are detailed and processed. The software effectively drives and manages the peripheral devices. SLAMWARE CORE is used as the navigation board to complete the system design. System tests such as disinfecting, mapping, navigating, and anti-falling were performed to polish and adjust the structure and functionality of the robot. A Raspberry Pi is also used with the control board to explore 2D Simultaneous Localization and Mapping (SLAM) algorithms, such as Hector, Karto, and Cartographer, in the Robot Operating System (ROS) for the robot's further development. The thesis is written from the perspective of engineering practice and proposes a feasible design for a domestic disinfection robot, covering hardware, embedded software, and system tests.
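    The abstract mentions a framed packet format with error handling on the control/navigation link but does not specify it. A minimal sketch of such a frame, assuming a header byte, command byte, payload length, and XOR checksum (all layout choices are assumptions for illustration, not the thesis's actual format), looks like this:

```python
# Hedged sketch of a framed serial packet such as the control board and
# navigation board might exchange. The layout (header 0xAA, command, length,
# payload, XOR checksum) is an illustrative assumption, not the thesis's spec.

HEADER = 0xAA

def encode_packet(cmd, payload):
    """Frame a command and payload with a trailing XOR checksum."""
    body = bytes([cmd, len(payload)]) + bytes(payload)
    checksum = 0
    for b in body:
        checksum ^= b
    return bytes([HEADER]) + body + bytes([checksum])

def decode_packet(frame):
    """Validate header and checksum, then return (cmd, payload)."""
    if frame[0] != HEADER:
        raise ValueError("bad header")
    body, checksum = frame[1:-1], frame[-1]
    x = 0
    for b in body:
        x ^= b
    if x != checksum:
        raise ValueError("checksum mismatch")  # the error-handling path
    cmd, length = body[0], body[1]
    return cmd, list(body[2:2 + length])

frame = encode_packet(0x10, [1, 2, 3])
print(decode_packet(frame))  # (16, [1, 2, 3])
```

    On the real robot the MCU firmware would implement the same encode/validate steps in C over a UART, but the round-trip structure is the same.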

    CLARC: A cognitive robot for helping geriatric doctors in real scenarios

    Get PDF
    Third Iberian Robotics Conference (ROBOT 2017), 22-24 November 2017, Seville, Spain. Abstract: Comprehensive Geriatric Assessment (CGA) is an integrated clinical process for evaluating the frailty of elderly persons in order to create therapy plans that improve their quality of life. To robotize these tests, we are designing and developing CLARC, a mobile robot able to help the physician capture and manage data during CGA procedures, mainly by autonomously conducting a set of predefined evaluation tests. Built around a shared internal representation of the outer world, the architecture is composed of software modules able to plan and generate a stream of actions, to execute actions emanating from the representation, or to update it by adding or removing items at different abstraction levels. Percepts, actions, and intentions coming from all software modules are grounded within this unique representation. This allows the robot to react to unexpected events and to modify its course of action according to the dynamics of a scenario built around interaction with the patient. The paper describes the architecture of the system as well as preliminary user studies and evaluation to gather new user requirements. This work has been partially funded by the EU ECHORD++ project (FP7-ICT-601116) and the TIN2015-65686-C5-1-R (MINECO and FEDER funds). Javier García is partially supported by the Comunidad de Madrid (Spain) funds under the project 2016-T2/TIC-171.
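    The shared internal representation described here, where all modules ground their percepts and intentions in one store and react to changes in it, is essentially a blackboard pattern. The toy sketch below is an assumed illustration of that pattern; the class, key names, and callback scheme are hypothetical and not taken from the CLARC codebase.

```python
# Minimal blackboard-style sketch of CLARC's shared internal representation:
# modules write percepts/intentions into one store, and subscribed modules
# react when items change. All names here are illustrative assumptions.

class SharedRepresentation:
    def __init__(self):
        self.items = {}       # key -> grounded item (percept, action, intention)
        self.listeners = []   # modules notified on every update

    def subscribe(self, callback):
        self.listeners.append(callback)

    def update(self, key, value):
        """Add or overwrite an item and notify all modules."""
        self.items[key] = value
        for callback in self.listeners:
            callback(key, value)

events = []
world = SharedRepresentation()
world.subscribe(lambda k, v: events.append((k, v)))          # e.g. planner module
world.update("percept/patient_standing", True)               # perception module writes
world.update("intention/start_walk_test", True)              # planner reacts to dynamics
print(events)
```

    Because every module sees the same store, an unexpected percept (say, the patient sitting down mid-test) can immediately reshape the stream of actions, which is the reactivity the abstract emphasizes.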

    Towards a framework for socially interactive robots

    Get PDF
    250 p. In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots, and their roles within society, are gradually expanding. Robots endowed with social abilities are intended for a range of applications: for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented in this thesis aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities a robot must display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying a distinctive personality and character, and learning social competencies. In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do we humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along those lines, the work was developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways humans use to communicate with robots naturally.
    In the second, we additionally investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid-robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic abilities, such as navigation, robot-to-robot communication, and speech recognition and understanding. In the second phase, on the other hand, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. We developed an architecture (framework) for socially interactive robots that allows robots to express different kinds of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions. The different development stages of our social robots were validated through public performances. Exposing our robots to the public in those performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed, in order to improve their sociability.