314 research outputs found

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    We present 3DTouch, a novel 3D wearable input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution for designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability; as a result, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on. Comment: 8 pages, 7 figures
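    As a rough illustration of the relative-positioning idea described above (a sketch under assumed conventions, not the authors' implementation), the optical sensor's 2D displacement on a touched surface can be rotated into the world frame using the IMU's orientation quaternion. The function names and the (w, x, y, z) quaternion convention are assumptions.

```python
import math

def quat_rotate(q, v):
    """Rotate vector v = (x, y, z) by unit quaternion q = (w, x, y, z).

    Uses v' = v + w*t + r x t, where r is the vector part of q and t = 2*(r x v).
    """
    w, rx, ry, rz = q
    vx, vy, vz = v
    # t = 2 * (r x v)
    tx = 2.0 * (ry * vz - rz * vy)
    ty = 2.0 * (rz * vx - rx * vz)
    tz = 2.0 * (rx * vy - ry * vx)
    # v' = v + w*t + r x t
    return (vx + w * tx + (ry * tz - rz * ty),
            vy + w * ty + (rz * tx - rx * tz),
            vz + w * tz + (rx * ty - ry * tx))

def optical_delta_to_world(dx_mm, dy_mm, q_imu):
    """Lift the optical sensor's 2D surface displacement into 3D world
    coordinates, treating the touched surface as the sensor's local x-y plane."""
    return quat_rotate(q_imu, (dx_mm, dy_mm, 0.0))
```

    For example, with the fingertip device rotated 90 degrees about the vertical axis, a rightward swipe on the surface maps to a displacement along the world y-axis.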

    A study on virtual reality and developing the experience in a gaming simulation

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Masters by Research. Virtual Reality (VR) is an experience in which a person is given the freedom to view and move within a virtual world [1]. The experience is not constrained to limited controls; rather, it is triggered interactively according to the user's physical movement [1] [2], so the user feels as if they are seeing the real world. 3D technologies also allow the viewer to experience the volume of an object and its perspective in the virtual world [1]; the human brain generates depth when each eye receives an image from its own point of view. Using the university's facilities to learn and develop the project, some of the core parts of the research have been accomplished, such as designing the VR motion controller and VR HMD (Head-Mounted Display) using an open-source microcontroller. The VR HMD together with the VR controller gives an immersive feel and a complete VR system [2]. The motive was to demonstrate a working model that creates a VR experience on a mobile platform. In particular, the VR system uses a micro-electro-mechanical system to track motion without a tracking camera. The VR experience has also been developed in a gaming simulation. To produce this, Maya, Unity, Motion Analysis System, MotionBuilder, Arduino and programming have been used. The lessons and code taken or improvised from [33] [44] [25] and [45] have been studied and implemented.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference, and this thesis established that arm pose also changes the measured signal. The thesis therefore introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to significantly impact the quality of life of prosthetic users and others.
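    As a toy illustration of the windowed-feature pipeline this abstract describes (per-channel features from a six-sensor MMG band feeding a classifier), the sketch below uses RMS features with a nearest-centroid classifier as a simplified stand-in for the LDA/SVM methods used in the thesis; all names and the six-channel layout are assumptions.

```python
import math

def rms_features(window):
    """Compute per-channel RMS over a window of samples.
    window: list of samples, each a list of channel values (e.g. 6 MMG channels)."""
    n = len(window)
    n_chan = len(window[0])
    return [math.sqrt(sum(s[c] ** 2 for s in window) / n) for c in range(n_chan)]

def train_centroids(labelled_windows):
    """labelled_windows: list of (label, window) pairs.
    Returns a dict mapping each gesture label to its mean feature vector."""
    sums, counts = {}, {}
    for label, win in labelled_windows:
        feats = rms_features(win)
        if label not in sums:
            sums[label] = [0.0] * len(feats)
            counts[label] = 0
        sums[label] = [a + b for a, b in zip(sums[label], feats)]
        counts[label] += 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(window, centroids):
    """Assign the window to the gesture with the nearest centroid (squared
    Euclidean distance in feature space)."""
    feats = rms_features(window)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(feats, centroids[lab])))
```

    A real system would add windowing over a live stream and a stronger classifier, but the train-then-match structure is the same.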

    Upper-limb Kinematic Analysis and Artificial Intelligent Techniques for Neurorehabilitation and Assistive Environments

    Stroke, one of the leading causes of death and disability around the world, usually affects the motor cortex, causing weakness or paralysis in the limbs of one side of the body. Research efforts in neurorehabilitation technology have focused on the development of robotic devices to restore motor and cognitive function in impaired individuals, with the potential to deliver high-intensity and motivating therapy. End-effector-based devices have become a common tool in upper-limb neurorehabilitation due to the ease of adapting them to patients. However, they are unable to measure the joint movements during the exercise. Thus, the first part of this thesis is focused on the development of a kinematic reconstruction algorithm that can be used in a real rehabilitation environment without disturbing the normal patient-clinician interaction. Building on an algorithm from the literature that exhibits some instabilities, a new algorithm is developed. The proposed algorithm is the first able to estimate online not only the upper-limb joints but also trunk compensation, using only two non-invasive wearable devices placed on the shoulder and upper arm of the patient. This new tool will allow the therapist to perform a comprehensive assessment combining the range of movement with clinical assessment scales. Knowing that the intensity of therapy improves the outcomes of neurorehabilitation, a 'self-managed' rehabilitation system can allow patients to continue rehabilitation at home. This thesis proposes a system to measure online a set of upper-limb rehabilitation gestures and to intelligently evaluate the quality of the exercise performed by the patient. The assessment is performed by studying the performed movement as a whole as well as by evaluating each joint independently.
    The first results are promising and suggest that this system can become a new tool to complement clinical therapy at home and improve rehabilitation outcomes. Finally, severe motor conditions can remain after the rehabilitation process. Thus, a technological solution is proposed for these patients and for people with severe motor disabilities. An intelligent environmental control interface is developed with the ability to adapt its scan control to the residual capabilities of the user. Furthermore, the system estimates the intention of the user from environmental information and the user's behaviour, helping with navigation through the interface and improving the user's independence at home.
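    The joint-angle idea behind this kind of wearable kinematic reconstruction (not the thesis algorithm itself) can be illustrated by taking the relative rotation between the orientation quaternions reported by two IMUs on adjacent limb segments; the function names and (w, x, y, z) convention below are assumptions for the sketch.

```python
import math

def quat_conj(q):
    """Conjugate (inverse, for unit quaternions) of q = (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of quaternions a and b."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def relative_angle_deg(q_upper_arm, q_forearm):
    """Angle of the rotation taking the upper-arm frame to the forearm frame,
    a crude proxy for an elbow flexion angle."""
    w = quat_mul(quat_conj(q_upper_arm), q_forearm)[0]
    w = max(-1.0, min(1.0, w))  # guard acos against rounding
    return math.degrees(2.0 * math.acos(abs(w)))
```

    A full reconstruction, as in the thesis, would further decompose this relative rotation into anatomical joint axes and separate out trunk compensation.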

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices below USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features that allow them to be used in everyday life and in clinical applications, where gold-standard equipment is often too expensive or too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    Glove-based systems for medical applications: review of recent advancements

    Human hand motion analysis is attracting researchers in the areas of neuroscience, biomedical engineering, robotics, human-machine interfaces (HMI), human-computer interaction (HCI), and artificial intelligence (AI). Among others, the fields of medical rehabilitation and physiological assessment are suggesting high-impact applications for wearable sensing systems. Glove-based systems are among the most significant devices for assessing quantities related to hand movements. This paper provides an updated survey of the main glove solutions proposed in the literature for hand rehabilitation. Then, the process for designing glove-based systems is defined, including all relevant design issues for researchers and makers. The main goal of the paper is to describe the basics of glove-based systems and to outline their potentialities and limitations. At the same time, a roadmap for designing and prototyping the next generation of these devices is laid out, based on previous experience in the scientific community.

    Upper Body Pose Estimation Using Wearable Inertial Sensors and Multiplicative Kalman Filter

    Estimating limb pose with wearable sensors may benefit multiple areas such as rehabilitation, teleoperation, human-robot interaction, gaming, and many more. Several solutions are commercially available, but they are usually expensive or not wearable/portable. We present a wearable pose estimation system (WePosE), based on inertial measurement units (IMUs), for motion analysis and body tracking. Differently from camera-based approaches, the proposed system does not suffer from occlusion problems or lighting conditions, it is cost-effective, and it can be used in indoor and outdoor environments. Moreover, since only accelerometers and gyroscopes are used to estimate the orientation, the system can be used even in the presence of iron and magnetic disturbances. An experimental validation using a high-precision optical tracker has been performed. The results confirmed the effectiveness of the proposed approach.
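    The paper's multiplicative Kalman filter is beyond a short sketch, but the underlying idea of fusing gyroscope integration with an accelerometer gravity reference can be shown with a simple complementary filter for a single pitch angle. This is an illustrative stand-in under assumed conventions, not the WePosE algorithm.

```python
import math

def accel_pitch(ax, ay, az):
    """Pitch angle (rad) implied by the measured gravity direction.
    Only valid when the sensor's acceleration is approximately gravity."""
    return math.atan2(-ax, math.hypot(ay, az))

def complementary_pitch(samples, dt, alpha=0.98):
    """Fuse gyro integration (low drift short-term) with the accelerometer
    tilt reference (drift-free long-term).
    samples: list of (gyro_y_rad_per_s, ax, ay, az) tuples at period dt."""
    pitch = accel_pitch(*samples[0][1:])  # initialise from the first sample
    for gy, ax, ay, az in samples:
        # propagate with the gyro, then pull gently toward the accel estimate
        pitch = alpha * (pitch + gy * dt) + (1 - alpha) * accel_pitch(ax, ay, az)
    return pitch
```

    A Kalman approach replaces the fixed blend weight alpha with a gain computed from the noise statistics of each sensor.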

    Motion-based remote control device for interaction with multimedia content

    This dissertation describes the development and implementation of techniques to enhance the accuracy of low-complexity filters, making them suitable for remote control devices in consumer electronics. The evolution verified in recent years in the multimedia content available to consumers on Smart TVs and set-top boxes is not raising the expected interest from users, and one of the reasons pointed out for this finding is the user interface. Although most current pointing devices rely on relative rotation increments, absolute orientation allows for more intuitive use and interaction. This possibility is explored in this work, as well as interaction with multimedia content through gestures. Classical accurate fusion algorithms are computationally intensive, so their implementation in low-energy-consumption devices is a challenging task. To tackle this problem, a performance study was carried out, comparing a relevant set of professional commercial off-the-shelf units with the developed low-complexity filters on state-of-the-art Magnetic, Angular Rate, Gravity (MARG) sensors. Part of the performance evaluation tests were carried out under harsh conditions to observe the algorithms' response in a nontrivial environment. The results demonstrate that the implementation of low-complexity filters using low-cost sensors can provide acceptable accuracy in comparison with the more complex units/filters. These results pave the way for faster adoption of absolute-orientation-based pointing devices in interactive multimedia applications, including hand-held, battery-operated devices.
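    To illustrate why absolute orientation simplifies pointing, an estimated yaw/pitch can be mapped directly to screen coordinates, with no accumulation of relative increments. The mapping below, including the field-of-view parameters, is an assumption for illustration, not the dissertation's method.

```python
def orientation_to_pointer(yaw_deg, pitch_deg, screen_w=1920, screen_h=1080,
                           fov_h=40.0, fov_v=22.5):
    """Map absolute yaw/pitch (degrees, 0 = pointing at screen centre) to pixel
    coordinates. Angles outside the angular field of view are clamped to the
    screen edge."""
    # normalise each angle to [-1, 1] over the half field of view
    nx = max(-1.0, min(1.0, yaw_deg / (fov_h / 2.0)))
    ny = max(-1.0, min(1.0, pitch_deg / (fov_v / 2.0)))
    x = (nx + 1.0) / 2.0 * (screen_w - 1)
    y = (1.0 - (ny + 1.0) / 2.0) * (screen_h - 1)  # screen y grows downward
    return round(x), round(y)
```

    Because the cursor position depends only on the current orientation estimate, filter drift shows up as a slow cursor offset rather than accumulating error, which is why the accuracy of the orientation filter matters directly for usability.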