
    Exploration Robot Controlled by an Android Application

    Exploration Robot Controlled by an Android Application (ERCAA), University of Palestine, Faculty of Information Technology. In recent years, with the pace of technological development, people have become increasingly demanding about quality of life, and there is a growing need to bring together new technological ideas to create new products. That need stems from people's curiosity to try new technologies that help and entertain them in daily life. A robot is usually an electro-mechanical machine guided by computer and electronic programming; many robots have been built for manufacturing and can be found in factories around the world. We have designed a robot that can be controlled through an Android mobile application. The robot carries a camera that lets the user explore its surroundings. The remote-control buttons in the Android app control both the robot's motion and the camera view. Wi-Fi communication is used to interface the controller with the Android device: the controller is connected to a Wi-Fi module, and the robot's motion is driven by the commands received from the app. The robot is reprogrammable, and its tooling can be interchanged to support multiple applications depending on the Arduino chip used. We used the Android, C, and HTML programming languages to develop the application, the hardware components, and the electronic chips. A test-analysis section discusses whether the proposed system met its objectives, and performance is evaluated near the end of the paper along with possible extensions of the system.
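The abstract does not give the firmware or protocol details, so the following is only a minimal sketch of the command flow it describes, assuming a plain TCP text protocol and a hypothetical command set (FWD, BACK, LEFT, RIGHT, STOP). The real ERCAA system uses Android, C, and HTML; this Python version is purely illustrative.

```python
import socket

# Hypothetical mapping of app commands to motor actions; the actual
# ERCAA command set is not specified in the abstract.
ACTIONS = {
    b"FWD": "drive forward",
    b"BACK": "drive backward",
    b"LEFT": "turn left",
    b"RIGHT": "turn right",
    b"STOP": "stop motors",
}

def serve(host="0.0.0.0", port=8888):
    """Listen for single-word commands sent by the Android app over Wi-Fi."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            while True:
                cmd = conn.recv(16).strip()
                if not cmd:
                    break
                # In firmware this would toggle motor-driver pins;
                # here we just report the decoded action.
                print(ACTIONS.get(cmd, "unknown command"))

if __name__ == "__main__":
    serve()
```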

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It gives a brief overview of existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions that must be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    A multi-modal perception based assistive robotic system for the elderly

    In this paper, we present a multi-modal perception based framework to realize a non-intrusive domestic assistive robotic system. It is non-intrusive in that it only starts interaction with a user when it detects the user's intention to do so. All of the robot's actions are based on multi-modal perception, which includes user detection from RGB-D data, detection of the user's intention to interact from RGB-D and audio data, and communication via user-distance-mediated speech recognition. The use of multi-modal cues in different parts of the robot's activity leads to successful robotic runs (94% success rate). Each perceptual component is systematically evaluated using an appropriate dataset and evaluation metrics. Finally, the complete system is fully integrated on the PR2 robotic platform and validated through system sanity-check runs and user studies with 17 volunteer elderly participants.
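As a rough Python sketch of the non-intrusive gating idea, interaction could be triggered only when the fused cues agree. The `Perception` structure, field names, and thresholds below are illustrative assumptions, not the authors' actual API.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Per-frame multi-modal cues (names are illustrative)."""
    user_detected: bool   # from RGB-D person detection
    intent_score: float   # fused RGB-D + audio intention-for-interaction
    distance_m: float     # estimated user distance from depth data

def should_engage(p: Perception,
                  intent_threshold: float = 0.7,
                  max_distance_m: float = 2.0) -> bool:
    """Non-intrusive gating: start interacting only when a user is
    present, close enough for reliable speech recognition, and shows
    intention to interact."""
    return (p.user_detected
            and p.distance_m <= max_distance_m
            and p.intent_score >= intent_threshold)

# Example frame: nearby user facing the robot and speaking.
print(should_engage(Perception(True, 0.85, 1.2)))  # True
```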

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses these to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved for 5-gesture classification.
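A hedged sketch of the classification step using the same two classifier families (scikit-learn's LDA and SVM) is shown below. The features are synthetic stand-ins, since the thesis's MMG feature set and data are not reproduced here; only the pipeline shape is illustrated.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed features from the six-sensor MMG band
# (e.g. per-channel RMS and waveform length); the real features and the
# 12-gesture label set come from the thesis experiments.
n_windows, n_features, n_gestures = 1200, 12, 12
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, n_gestures, size=n_windows)
X += y[:, None] * 0.5  # give the synthetic classes some separation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, f"accuracy: {clf.score(X_te, y_te):.3f}")
```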
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis establishes that arm pose also changes the measured signal, and introduces a new method of fusing IMU and MMG data to provide classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation-estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and trigger this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture-control systems into prosthetic devices; however, mechanomyography sensors are unaffected by such issues. There is huge potential for a system like this to be used as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface grows. Such systems have the potential to significantly improve the quality of life of prosthetic users and others.
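The thesis's new orientation-estimation algorithm is not detailed in the abstract. As a generic baseline for the same problem, a one-axis complementary filter fuses gyroscope and accelerometer readings as sketched below; all names and gains are illustrative assumptions, not the thesis's method.

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z,
                         dt, alpha=0.98):
    """One-axis complementary filter: blend the integrated gyro rate
    (accurate short-term, but drifting) with the accelerometer-derived
    pitch (noisy, but drift-free). A generic baseline only."""
    pitch_gyro = pitch_prev + gyro_rate * dt
    pitch_accel = math.atan2(accel_x, accel_z)
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

# One update: 0.1 rad/s gyro rate, gravity mostly along the z axis.
pitch = complementary_filter(0.0, 0.1, 0.05, 0.99, dt=0.01)
print(f"{pitch:.4f} rad")
```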

    A framework for abstraction and virtualization of sensors in mobile context-aware computing

    The latest mobile devices available nowadays are leading to the development of a new generation of mobile applications that are able to react to context. Context awareness requires data from the environment, usually collected by means of sensors embedded in mobile devices or connected to them through wireless networks. Developers of mobile applications face several challenges when creating context-aware applications; sensor and device heterogeneity stand out among them. To assist developers, we propose a layered conceptual framework for sensor abstraction and virtualization, called Igerri. Its main objective is to facilitate the development of context-aware applications independently of the specific sensors available in the user's environment. To avoid the need to directly manage physical sensors, a layered structure of virtual and abstract sensors is conceived. Two software components based on the proposed framework have been designed to test Igerri's robustness: the first processes the information from the successive sensor layers and generates high-level context information, while the second is responsible for managing network aspects and real-time sensor settings. The implementation has been tested using a representative context-aware application in different scenarios. The results show that the implementation, and therefore the conceptual framework, is suitable for dealing with context information and hiding sensor programming. Borja Gamecho held a PhD scholarship from the Research Staff Training Programme of the Basque Government from 2011 to 2014. This work has also been supported by the Department of Education, Universities and Research of the Basque Government under Grant IT395-10, by the Ministry of Economy and Competitiveness of the Spanish Government, and by the European Regional Development Fund (project TIN2014-52665-C2-1).
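A minimal Python sketch of the layering idea follows: applications code against abstract sensors, and virtual sensors normalize whichever physical driver happens to be present. The class names are hypothetical; Igerri's actual interfaces are not given in the abstract.

```python
from abc import ABC, abstractmethod

class AbstractSensor(ABC):
    """Top layer: what a context-aware application sees,
    independent of the underlying hardware."""
    @abstractmethod
    def read(self) -> dict:
        ...

class PhysicalSensor:
    """Bottom layer: a device-specific driver (stubbed here)."""
    def raw_value(self) -> float:
        return 23.7  # e.g. a raw thermistor reading

class VirtualTemperature(AbstractSensor):
    """Middle layer: wraps whichever physical sensor is available
    and normalizes its output, hiding device heterogeneity."""
    def __init__(self, backend: PhysicalSensor):
        self._backend = backend

    def read(self) -> dict:
        return {"kind": "temperature", "celsius": self._backend.raw_value()}

# The application depends only on the AbstractSensor interface.
sensor: AbstractSensor = VirtualTemperature(PhysicalSensor())
print(sensor.read())
```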

    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant.

    Recently, more and more robots are being investigated for future applications in healthcare. In nursing assistance, for instance, seamless Human-Robot Interaction (HRI) is very important for sharing workspaces and workloads among medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. Adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as reported by experimentation with human subjects. ARNA includes interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR, and an RGB-D camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface; the initial study indicates that a traded-control level of autonomy is ideal for tele-operating ARNA by a patient. The proposed method of using the HMI devices makes the robot perform similarly for both skilled and unskilled workers. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system non-linearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis, which utilizes a force-torque sensor at the base of the ARNA robot manipulator to detect full-body collisions and make interaction safer. Finally, a human-intent estimator (HIE) is proposed to estimate human intent while the robot and user physically collaborate during tasks such as adaptive walking. The NAC with the HIE module was validated on a PR2 robot through user studies; its implementation on the ARNA robot platform can be accomplished easily, as the controller is model-free and can learn robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA, enabling the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases, the human is "observed" by the robot, then guided and/or advised during interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead the user toward completing goals. The proposed framework incorporates interface devices as well as adaptive control systems to facilitate a higher-performance interaction between the user and the robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. User-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time, completion rate, and level of user satisfaction, were collected to shed light on the performance of the various components of the proposed framework. The results indicate an overall positive response towards the use of such an assistive robot in the healthcare environment.
The analysis of the gathered data is included in this document. To summarize, this research study makes the following contributions: (1) conducting user-experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; (2) evaluating and validating the Human Intent Estimator (HIE) and Neuro-Adaptive Controller (NAC); (3) proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; (4) building simulation models for packaged tactile sensors and validating them against experimental data; and (5) describing the Directive Observer and Lead Assistant (DOLA) framework for ARNA using adaptive interfaces.
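As a hedged illustration of the BAPI idea, a base force-torque residual can be thresholded to flag unexpected whole-body contact. This is a simplification: the thesis's actual controller is model-based and more sophisticated, and all names and limits below are assumptions.

```python
import numpy as np

def collision_detected(wrench: np.ndarray,
                       expected: np.ndarray,
                       force_limit_n: float = 25.0) -> bool:
    """Flag a whole-body collision when the measured base force-torque
    reading deviates from the expected (model-predicted) wrench by more
    than a safety threshold. Residual thresholding only sketches the
    BAPI concept."""
    residual = wrench[:3] - expected[:3]  # force components only
    return float(np.linalg.norm(residual)) > force_limit_n

# Example: a 30 N unexpected lateral push sensed at the base.
measured = np.array([30.0, 0.0, 0.0, 0.0, 0.0, 0.0])
expected = np.zeros(6)
print(collision_detected(measured, expected))  # True
```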