43 research outputs found

    Unmanned Drug Delivery Vehicle for COVID-19 Wards in Hospitals

    The prime reason for this work is to design and develop a low-cost guided wireless Unmanned Ground Vehicle (UGV) for use in hospitals to assist with contactless drug delivery in COVID-19 wards. The robot is designed to the requirements and technical specifications of the healthcare facility. After a detailed survey and tests of various steering mechanisms and body structures for the UGV, articulated steering and a hexagonal body structure were selected, as this combination provides the performance and stability required to achieve the objective. The UGV carries multiple sensors onboard, such as a camera, a GPS module, hydrogen and carbon gas sensors, a raindrop sensor, and an ultrasonic range finder, which allow the end user to understand the surrounding environment and the status of the UGV. The data and control options are displayed on any phone or computer in the Wi-Fi zone, but only after the user's login is validated. An ESP-32 microcontroller is the prime component used to establish reliable wireless communication between the user and the UGV. These days, the demand for robot vehicles in hospitals has increased rapidly due to pandemic outbreaks, as they enable contactless delivery of medicinal drugs. These systems are designed specifically to assist humans in situations where the lives of healthcare workers may be at risk. In addition, the robot vehicle is suitable for many other applications such as supervision, sanitization, carrying medicines and medical equipment for delivery, delivery of food and collection of used dishes, laundry, garbage, laboratory samples, and additional supplies
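    The abstract does not include the authors' code, but a minimal sketch can illustrate the login-gated control flow it describes: the browser client validates the user against the ESP-32's web server before any command or sensor data is exchanged over the local Wi-Fi network. The endpoints, token scheme, and soft-AP address below are assumptions for illustration, not the authors' actual API.

```typescript
// Hypothetical browser-side client for the login-gated UGV control flow.
const UGV = 'http://192.168.4.1'; // assumed ESP-32 soft-AP address

async function login(user: string, pass: string): Promise<string> {
  const res = await fetch(`${UGV}/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ user, pass }),
  });
  if (!res.ok) throw new Error('login rejected'); // controls stay hidden
  return res.text(); // assumed session token returned by the UGV
}

async function drive(token: string, cmd: 'fwd' | 'rev' | 'left' | 'right' | 'stop') {
  // Commands are only accepted with a validated session token.
  await fetch(`${UGV}/cmd?c=${cmd}`, { headers: { Authorization: token } });
}

async function readSensors(token: string) {
  // Returns the onboard readings, e.g. gas, raindrop, range and GPS values.
  const res = await fetch(`${UGV}/sensors`, { headers: { Authorization: token } });
  return res.json();
}
```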

    Immersive Telerobotic Modular Framework using stereoscopic HMDs

    Telepresence is the term used to describe the set of technologies that enable people to feel or appear as if they were present in a location where they are not physically. Immersive telepresence is the next step: the objective is to make the operator feel immersed in a remote location, engaging as many senses as possible and using new technologies such as stereoscopic vision, panoramic vision, 3D audio and Head Mounted Displays (HMDs). Telerobotics is a subfield of telepresence that merges it with robotics, providing the operator with the ability to control a robot remotely. In the current state-of-the-art solutions there is a gap, since telerobotics has not, in general, benefited from the recent developments in control and human-computer interface technologies. Besides the lack of studies investing in immersive solutions such as stereoscopic vision, immersive telerobotics can also include more intuitive control capabilities, such as haptic controls or movement- and gesture-based controls, which feel more natural and translate more naturally into the system. In this document we propose an alternative approach to common teleoperation methods, such as those found, for instance, in search and rescue (SAR) robots. Our main focus is to test the impact that immersive characteristics like stereoscopic vision and HMDs can bring to telepresence robots and telerobotics systems. Moreover, since this is a new and growing field, we are also developing a modular framework capable of being extended with different robots, in order to provide researchers with an extensible platform for testing different case studies. We claim that with immersive solutions the operator of a telerobotics system gains a more intuitive perception of the remote environment and is less prone to errors induced by a wrong perception of, and interaction with, the teleoperation of the robot. The operator's depth perception and situational awareness are significantly improved when using immersive solutions, and performance, both in task operation time and in the successful identification of objects of interest in the remote environment, is also enhanced. We have developed a low/medium-cost immersive telerobotic modular platform that can be extended with hardware-based Android applications on the slave (robot) side. This solution makes it possible to use the same platform in any type of case study by simply extending it with different robots. In addition to the modular and extensible framework, the project also features three main interaction modules, namely: a module that supports a head-mounted display and head tracking in the operator environment; streaming of stereoscopic vision through Android with software synchronization; and a module that enables the operator to control the robot with positional tracking. On the hardware side, not only has the mobile area (e.g. smartphones, tablets, Arduino) expanded greatly in recent years, but we have also seen the rise of low-cost immersive technologies such as the Oculus Rift DK2, Google Cardboard or Leap Motion. These cost-effective hardware solutions, combined with the advances in video and audio streaming provided by WebRTC technologies, driven mostly by Google, make the development of a real-time software solution possible. Currently there is a lack of real-time software methods for stereoscopy, but the arrival of WebRTC technologies can be a game changer. We take advantage of this recent evolution in hardware and software to keep the platform economical and low-cost, while at the same time raising the bar in terms of performance and technical specifications for this kind of platform
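    As a rough illustration of the receive path such a platform implies (not code from the project), the sketch below uses the browser WebRTC API to accept two incoming video tracks, assumed to be the left and right camera streams from the Android robot side, and routes them into per-eye video elements for an HMD. Signaling and the software synchronization step are omitted; element IDs are illustrative.

```typescript
// Operator-side WebRTC receive sketch: two tracks, one per eye.
const pc = new RTCPeerConnection();
const eyes = [
  document.getElementById('left-eye') as HTMLVideoElement,
  document.getElementById('right-eye') as HTMLVideoElement,
];

let eye = 0;
pc.ontrack = (event: RTCTrackEvent) => {
  // Assign incoming tracks to the eyes in arrival order; a real system
  // would identify and time-align the left/right streams explicitly.
  eyes[eye++ % 2].srcObject = new MediaStream([event.track]);
};
// ...offer/answer and ICE candidate exchange with the robot side go here.
```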

    Live Feedback on a Live Video Using HTML5 Canvas

    The objective of this research is to teleoperate a robot situated at a remote location with the help of a video camera. The operator relies on the live video to move the robot and carry out different tasks, but when the video lags the operator has to wait until the video is received, which is frustrating and tiresome. We intend to solve this by giving feedback to the operator instantly as the command is given. This is done by using an HTML5 Canvas as the medium for showing the video instead of a video player. The HTML5 Canvas gives us flexibility, as it can be used to access pixel values from the video and manipulate them, giving feedback to the operator on the video itself. Preliminary data shows a 30% decrease in the time taken to perform certain tasks with this system compared to without it
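    The mechanism is simple to sketch: instead of handing the stream to a video element alone, each frame is drawn onto a canvas, and the operator's most recent command is painted on top immediately, before the delayed video reflects it. The snippet below is a minimal illustration with assumed element IDs; the same 2D context also exposes getImageData for the pixel-level manipulation mentioned above.

```typescript
// Draw the live video on a canvas and overlay instant command feedback.
const video = document.getElementById('remote-video') as HTMLVideoElement;
const canvas = document.getElementById('view') as HTMLCanvasElement;
const ctx = canvas.getContext('2d')!;

let lastCommand: 'left' | 'right' | 'forward' | 'stop' | null = null;

function render(): void {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // latest frame
  if (lastCommand) {
    // Instant feedback: the command appears on the video immediately,
    // even while the video itself still lags behind.
    ctx.fillStyle = 'rgba(255, 255, 0, 0.9)';
    ctx.font = '24px sans-serif';
    ctx.fillText(`command: ${lastCommand}`, 10, 30);
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

// Called by the teleoperation UI whenever the operator issues a command.
function onCommand(cmd: 'left' | 'right' | 'forward' | 'stop'): void {
  lastCommand = cmd;
}
```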

    Pervasive Device and Service Discovery Protocol in Interoperability XBee-IP Network

    Internet of Things (IoT) communication protocols are built over both IP and non-IP environments. A gateway device is therefore needed to bridge IP and non-IP networks transparently, since an IoT user is more likely to be concerned with the service provided by the IoT device than with the complexity of the network or device configuration. Since today's ubiquitous computing needs to hide the architectural level from its users, the data- and information-centric approach was proposed. However, the data- and information-centric protocol has several issues, one of which is device and service discovery over IP and non-IP networks. This paper proposes a pervasive device and service discovery protocol able to work across interoperating IP and non-IP networks. The system environment consists of a smart device with XBee communication as the non-IP network, which sends the device and service description data to the IP network using WebSocket. The gateway is able to recognize the smart device and send the data to the web-based user application. The user application displays the discovered devices along with their services and can send control data to each of the smart devices. Our proposed protocol is also enriched with smart-device inoperability detection, using keep-alive tracking from the gateway to each of the smart devices. The results show that the delay for the user application to detect a smart device in the XBee network is around 10.13 ms, and the average service delay for requests from the user application to each device is 2.13 ms
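    A gateway along these lines might look like the following sketch on its IP side, assuming the XBee-side devices are bridged to WebSocket clients that first announce a JSON device/service description (the message format, port, and Node.js 'ws' library choice are assumptions, not the paper's implementation). It registers announced devices and implements the keep-alive tracking: each device is pinged periodically, and a missed pong marks it inoperable.

```typescript
import { WebSocketServer, WebSocket } from 'ws';

interface DeviceDescription { id: string; services: string[]; }
interface Entry { ws: WebSocket; desc: DeviceDescription; alive: boolean; }
const devices = new Map<string, Entry>();

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws: WebSocket) => {
  ws.on('message', (raw) => {
    // First message is assumed to be the device/service description.
    const desc = JSON.parse(raw.toString()) as DeviceDescription;
    devices.set(desc.id, { ws, desc, alive: true }); // device discovered
  });
  ws.on('pong', () => {
    for (const d of devices.values()) if (d.ws === ws) d.alive = true;
  });
});

// Keep-alive tracking: ping every device; a missed pong flags inoperability.
setInterval(() => {
  for (const [id, d] of devices) {
    if (!d.alive) { devices.delete(id); continue; } // inoperable device
    d.alive = false;
    d.ws.ping();
  }
}, 5000);
```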

    Building a Model of Network Interaction Between the Components of a Multi-Agent System of Mobile Robots

    The results reported here represent the first stage in the development of a full-featured laboratory system aimed at studying machine learning algorithms. The relevance of the current work is predetermined by the lack of networked small-size mobile robots and appropriate control software that would make it possible to conduct field experiments in real time. This paper reports the selection of a network data transmission technology for managing mobile robots in real time. Based on the chosen data transmission protocol, a complete technology stack for the network model of a multi-agent system of mobile robots has been proposed. This has made it possible to build a network model of the system that visualizes and investigates machine learning algorithms. In accordance with the requirements set by the OSI network model for constructing such systems, the model includes the following levels: 1) the lower level of data collection and actuating elements, the mobile robots; 2) the top level, which includes a user interface server and a business logic server. Based on the constructed protocol stack diagram and the network model, a software and hardware implementation of the obtained results has been carried out. This work employed the JavaScript library React with SPA (Single Page Application) technology and the Virtual DOM (Document Object Model) technology, in which the virtual DOM is stored in the device's RAM and synchronized with the actual DOM. That has made it possible to simplify the process of controlling the clients and to reduce network traffic. The model provides the opportunity to: 1) manage the robot-client prototypes in real time; 2) reduce the use of network traffic compared to other data transmission technologies; 3) reduce the load on the CPUs of robots and servers; 4) virtually simulate an experiment; 5) investigate the implementation of machine learning algorithms
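    For illustration only (the paper's client code is not reproduced here), a React SPA control panel in this style can hold one persistent WebSocket per session and send compact command messages, with the Virtual DOM limiting re-renders to the changed elements. The URL, message format, and component shape are assumptions.

```typescript
import React, { useEffect, useRef, useState } from 'react';

// Minimal robot-client control panel over a persistent WebSocket.
export function RobotControl({ url }: { url: string }) {
  const socket = useRef<WebSocket | null>(null);
  const [status, setStatus] = useState('connecting');

  useEffect(() => {
    socket.current = new WebSocket(url);
    socket.current.onopen = () => setStatus('connected');
    socket.current.onclose = () => setStatus('disconnected');
    return () => socket.current?.close(); // clean up on unmount
  }, [url]);

  // Compact JSON commands keep per-message network traffic small.
  const send = (cmd: string) => {
    if (socket.current?.readyState === WebSocket.OPEN) {
      socket.current.send(JSON.stringify({ cmd }));
    }
  };

  return (
    <div>
      <p>robot: {status}</p>
      <button onClick={() => send('forward')}>Forward</button>
      <button onClick={() => send('stop')}>Stop</button>
    </div>
  );
}
```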

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effects of the remote environment reconstruction scale in virtual reality on the human operator's ability to control the robot, and the human operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing for teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often have distortions and occlusions, making it difficult to accurately represent objects' textures. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated the use of rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual world scale in supervised control, comparing participants' virtual world scale at the beginning and end of a 3-day experiment. The results showed that as operators became more proficient at the task they, as a group, used a different virtual world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become more proficient at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, as well as how their visual priorities shifted as they became more proficient at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours
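    The distinction between the two joystick mappings in the first study can be captured in a few lines. The sketch below is an interpretation of the described rate mode control, not the thesis code: constant mapping uses a fixed gain from joystick deflection to end-effector speed, while variable mapping scales that gain with the current virtual world scale (function and parameter names are illustrative).

```typescript
// Rate-mode control: joystick deflection commands end-effector speed.
function endEffectorRate(
  joystick: number,         // normalized deflection in [-1, 1]
  worldScale: number,       // current VR reconstruction scale (1 = life size)
  variableMapping: boolean, // false: constant gain; true: scale-dependent gain
  baseGain = 0.1            // m/s at full deflection (illustrative value)
): number {
  const gain = variableMapping ? baseGain * worldScale : baseGain;
  return joystick * gain;   // commanded end-effector speed in m/s
}
```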

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable the embedded system to intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while readily adjusting the corresponding maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide the users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints that guide the operator's hand movements along the conical guidance, effectively aligning the welding torch for welding and constraining the welding operation within a collision-free area.
    Overall, this thesis presents a complete telerobotic application-space technology that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts
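    As a geometric illustration of the conical guidance idea (an interpretation, not the thesis implementation), a virtual fixture of this kind can be expressed as clamping the operator's hand position to the interior of a cone defined by an apex, a unit axis, and a half-angle; the haptic device would then render the force pushing the hand back toward the clamped point. All names below are illustrative.

```typescript
interface Vec3 { x: number; y: number; z: number; }

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

// Clamp a hand position to the inside of a guidance cone (apex, unit axis).
function clampToCone(hand: Vec3, apex: Vec3, axis: Vec3, halfAngle: number): Vec3 {
  const d = sub(hand, apex);
  const axial = Math.max(dot(d, axis), 0);      // depth along the cone axis
  const radial = sub(d, scale(axis, axial));    // offset perpendicular to axis
  const maxRadius = axial * Math.tan(halfAngle);
  const r = Math.sqrt(dot(radial, radial));
  if (r <= maxRadius) return hand;              // already within the cone
  // Pull the hand back onto the cone surface at the same depth.
  return add(apex, add(scale(axis, axial), scale(radial, maxRadius / r)));
}
```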

    Enabling peer-to-peer remote experimentation in distributed online remote laboratories

    Remote Access Laboratories (RALs) are online platforms that allow human users to interact with physical instruments over the Internet. RALs usually follow a client-server paradigm: dedicated providers create and maintain experiments and the corresponding educational content. In contrast, this dissertation focuses on a Peer-to-Peer (P2P) service model for RALs in which users are encouraged to host experiments at their own location. This approach can be seen as an example of an Internet of Things (IoT) system: a set of smart devices work together to provide a cyber-physical interface for users to run experiments remotely via the Internet. The majority of traditional RAL learning activities focus on undergraduate education, where hands-on experience such as building experiments is not a major focus. In contrast, this work is motivated by the need to improve Science, Technology, Engineering and Mathematics (STEM) education for school-aged children, for whom physically constructing experiments forms a substantial part of the learning experience. In the proposed approach, experiments can be designed with relatively simple components such as LEGO Mindstorms or Arduinos, and the user interface can be programmed using SNAP!, a graphical programming tool. While the motivation for the work is educational in nature, this thesis focuses on the technical details of experiment control in an opportunistic distributed environment. P2P RAL aims to enable any two random participants in the system - one in the role of maker, creating and hosting an experiment, and one in the role of learner, using the experiment - to establish a communication session during which the learner runs the remote experiment through the Internet without requiring a centralized experiment or service provider. Makers need support to create experiments that conform to a common web-based programming interface. Thus, the P2P approach to RALs requires an architecture providing a set of heterogeneous tools that makers can use to create a wide variety of experiments. The core contribution of this dissertation is an automaton-based model (twin finite state automata) of the controller units and the controller interface of an experiment, which enables the creation of experiments on a common platform, in terms of both software and hardware. This architecture enables the further development of algorithms for evaluating and supporting users' performance, which is demonstrated through a number of algorithms, and can also ensure the safety of instruments through intelligent tools. The proposed network architecture for P2P RALs is designed to minimise latency so as to improve user satisfaction and the learning experience. As experiment availability is limited in this approach to RALs, novel scheduling strategies are proposed. Each of these contributions has been validated through either simulations (as in the case of the network architecture and scheduling) or test-bed implementations (in the case of the intelligent tools). Three example experiments are discussed, along with users' feedback on their experience of creating an experiment and using others' experimental setups. The focus of the thesis is mainly on the design and hosting of experiments and on ensuring user accessibility to them. The main contributions of this thesis concern machine learning and data mining techniques applied to IoT systems in order to realize the P2P RALs system.
This research has shown that a P2P architecture of RALs can provide a wide variety of experimental setups in a modular environment with high scalability. It can potentially enhance the user-learning experience while aiding the makers of experiments. It presents new aspects of learning analytics mechanisms to monitor and support users while running experiments, thus lending itself to further research. The proposed mathematical models are also applicable to other Internet of Things applications
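    The dissertation's twin finite state automata are not specified in the abstract, but the idea can be sketched in a highly simplified form: the maker-side controller unit and the learner-side controller interface hold copies of the same automaton and advance it in lockstep on exchanged event messages, so both ends always agree on the experiment state. The states and events below are invented for illustration.

```typescript
type State = 'idle' | 'running' | 'paused' | 'fault';
type Event = 'start' | 'pause' | 'resume' | 'error' | 'reset';

// One shared transition table keeps the two automata identical.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle:    { start: 'running' },
  running: { pause: 'paused', error: 'fault' },
  paused:  { resume: 'running', error: 'fault' },
  fault:   { reset: 'idle' },
};

class TwinAutomaton {
  constructor(public state: State = 'idle') {}
  // Apply an event locally; the same event message is sent to the twin
  // so the remote copy performs the identical transition.
  step(ev: Event): State {
    const next = transitions[this.state][ev];
    if (next) this.state = next; // undefined events are ignored (safety)
    return this.state;
  }
}

const controllerUnit = new TwinAutomaton();      // at the maker's experiment
const controllerInterface = new TwinAutomaton(); // in the learner's browser
controllerUnit.step('start');
controllerInterface.step('start');               // twins remain in agreement
```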

    CCRP: A Novel Clone-Based Cloud Robotic Platform for Multi-Robots

    Recently, the cloud computing paradigm has evolved across various research fields. A new path of research, cloud robotics, has emerged, which allows robots to inherit the enormous computing and storage capability of the cloud. Advances in cloud computing technologies, networking, parallel computing and other evolving technologies, and their integration with multi-robot systems, make it possible to design systems with new capabilities. The main advantages of cloud robotics lie in overcoming the limitations of on-board robot computing and storage and in improving energy efficiency. Nevertheless, there is a lack of cloud robotics frameworks that can provide a secure environment for multi-robot applications, and implementing a robust cloud robotic platform capable of handling multi-robot applications has been shown to be challenging. This research proposes a novel Clone-based Cloud Robotic Platform architecture (CCRP), which assigns each individual robot a Virtual Machine (VM) clone of its operating system in the cloud, enabling fast and efficient collaboration between robots via the cloud's internal network. The system utilises the Robot Operating System (ROS) as middleware and as a programmable environment for robot development. The model uses OpenVPN as the communication protocol between the robot and the VM, which considerably enhances security and provides an additional network that allows multi-master ROS deployment. The Quality of Service (QoS) of the system has been tested and evaluated in terms of performance, compatibility and scalability via a comparison study that examines the CCRP's performance against a local system and a proxy-based cloud system. Two case studies were deployed for different robot scenarios. Case study 1 focused on a navigation task, including mapping and teleoperation, implemented in the Google public cloud. The real-time response was examined by using the CCRP to teleoperate NAO and TurtleBot robots, with response time and video streaming delays measured to assess overall QoS performance. Case study 2 comprised a face recognition task performed using the CCRP in a private cloud on an OpenStack platform; the objective was to evaluate the system's ability to run tasks in the cloud effectively and to assess its collaborative learning capability. During the CCRP development and deployment stages, an optimization study was conducted to determine optimal parameters for data offloading to the cloud and for the energy efficiency of a low-cost robot. The CCRP performance evaluation showed that it is capable of running on public and private cloud platforms for self-configuring and programmable robotic systems, as well as executing various applications on different robot types. The CCRP facilitates improvements in QoS performance, compatibility and scalability, and provides a secure cloud computing environment for on-board robots
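    For a sense of how a client might reach a robot's clone, the sketch below publishes a velocity command to the VM over the OpenVPN tunnel using roslibjs; the use of rosbridge as the transport, the VPN address, port, and topic are all assumptions for illustration, not the CCRP's actual configuration.

```typescript
import ROSLIB from 'roslib';

// Connect to the robot's VM clone across the OpenVPN tunnel (assumed address).
const ros = new ROSLIB.Ros({ url: 'ws://10.8.0.2:9090' });

const cmdVel = new ROSLIB.Topic({
  ros,
  name: '/cmd_vel',
  messageType: 'geometry_msgs/Twist',
});

ros.on('connection', () => {
  // Teleoperation command executed on the clone and relayed to the robot.
  cmdVel.publish(new ROSLIB.Message({
    linear: { x: 0.2, y: 0, z: 0 },
    angular: { x: 0, y: 0, z: 0.1 },
  }));
});
```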