170 research outputs found

    Design and Development of a Twisted String Exoskeleton Robot for the Upper Limb

    High-intensity, task-specific upper-limb training consisting of active, highly repetitive movements is an effective approach for patients with motor disorders. However, given the severe shortage of medical services in the United States and the significant ongoing financial costs faced by post-stroke survivors, patients often choose not to return to the hospital or clinic for complete recovery. Robot-assisted therapy can therefore be considered an alternative rehabilitation approach, because it can achieve results similar to or better than those of patients who receive intensive conventional therapy from professional physicians. The primary objective of this study was to design and fabricate an effective mobile assistive robotic system that can provide stroke patients with shoulder and elbow assistance. To reduce the size of the actuators and to minimize the weight carried by the user, two sets of dual twisted-string actuators, each with 7 strands (1 neutral and 6 effective), were used to extend/contract the adopted strings and drive the rotational movements of the shoulder and elbow joints through a Bowden cable mechanism. Furthermore, movements of non-disabled people were captured as templates of training trajectories to provide effective rehabilitation. The specific aims of this study included the development of a two-degree-of-freedom prototype for the elbow and shoulder joints; an adaptive robust control algorithm with cross-coupling dynamics that can compensate both for nonlinear factors of the system and for asynchronization between individual actuators; and an approach for extracting reference trajectories for the assistive robot from non-disabled people, based on the Microsoft Kinect sensor and the dynamic time warping algorithm. Finally, the data acquisition and control system of the robot was implemented on an Intel Galileo and a XILINX FPGA embedded system.
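The transmission principle behind the twisted-string actuators can be sketched from standard TSA kinematics: twisting a string bundle of untwisted length L and effective radius r by a motor angle θ shortens its axial span. The string length and radius below are illustrative, not the parameters of this robot:

```python
import math

def tsa_contraction(motor_angle_rad: float, string_length_m: float,
                    string_radius_m: float) -> float:
    """Axial contraction of a twisted string actuator.

    Standard TSA kinematics: twisting a string of untwisted length L
    and effective radius r by theta radians shortens the axial span to
    sqrt(L^2 - (theta*r)^2), so the contraction is
        dX = L - sqrt(L^2 - theta^2 * r^2).
    """
    L, r, theta = string_length_m, string_radius_m, motor_angle_rad
    span_sq = L**2 - (theta * r) ** 2
    if span_sq < 0:
        raise ValueError("motor angle exceeds the geometric twisting limit")
    return L - math.sqrt(span_sq)

# Example: a 0.30 m string bundle with a 2 mm effective radius,
# twisted by 20 full motor revolutions.
dx = tsa_contraction(20 * 2 * math.pi, 0.30, 0.002)
```

The nonlinear (and configuration-dependent) gearing visible in this formula is one reason the study pairs the actuators with an adaptive robust controller rather than a fixed-gain one.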

    Knee Joint Angle Measuring Portable Embedded System based on Inertial Measurement Units for Gait Analysis

    Within clinical research, gait analysis is a fundamental part of the functional evaluation of the human body's movement. It has been carried out through different methods and tools, which allow early diagnosis of diseases and support monitoring and assessing the effectiveness of therapeutic plans applied to patients for rehabilitation. The observational method is one of the most widely used in specialized centers in Colombia; however, to avoid errors associated with the subjectivity of observation, this method can be supported by technological tools that provide quantitative data. This paper deals with the methodological process for developing a computational tool and hardware device for gait analysis, specifically the articular kinematics of the knee. This work develops a prototype based on the fusion of inertial measurement unit (IMU) data as an alternative for the attenuation of errors associated with each of these technologies. A videogrammetry technique measured the same human gait patterns to validate the proposed system in terms of accuracy and repeatability of the recorded data. Results showed that the developed prototype successfully captured the knee-joint angles of flexion-extension motions with high consistency and accuracy compared with the measurements obtained from the videogrammetry technique. Statistical analysis (ICC and RMSE) exhibited a high correlation between the two systems for the measured joint angles. These results suggest the possibility of using an IMU-based prototype in realistic scenarios for accurately tracking a patient's knee-joint kinematics during human gait.
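The paper's own fusion algorithm is not detailed here, but the core computation of a two-IMU knee tracker can be sketched: given the orientation of a thigh-mounted and a shank-mounted IMU as quaternions, the knee flexion angle is the relative rotation between the two segments. The sensor-alignment assumption (axes coincide at full extension after calibration) is ours:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion in (w, x, y, z) order."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def knee_angle_deg(q_thigh, q_shank):
    """Flexion angle as the relative rotation between thigh- and
    shank-mounted IMU orientations (sensor frames assumed aligned at
    calibration, i.e. 0 degrees at full extension)."""
    R_rel = quat_to_matrix(q_thigh).T @ quat_to_matrix(q_shank)
    cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

# 60 degrees of flexion about the shared medio-lateral (x) axis:
q_thigh = np.array([1.0, 0.0, 0.0, 0.0])
q_shank = np.array([np.cos(np.radians(30)), np.sin(np.radians(30)), 0.0, 0.0])
angle = knee_angle_deg(q_thigh, q_shank)  # ~60 degrees
```

In practice the per-sensor orientations would come from an attitude filter fusing each IMU's gyroscope and accelerometer, which is where the error attenuation the paper describes takes place.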

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments, due to issues such as lack of portability, lack of robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, accuracies of 95.6% were achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference; this thesis established that arm pose also changes the measured signal. The thesis introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of the intent to perform a specific hand pose, and trigger this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; however, mechanomyography sensors are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
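The classification stage the thesis describes (LDA and SVM on windowed MMG/IMU features, 12 gesture classes) can be sketched with scikit-learn. The feature dimensions and the synthetic data below are stand-ins, not the thesis's real feature set:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed features from six MMG channels plus
# IMU orientation: 12 gesture classes, 40 windows each, 24 features per
# window (all dimensions illustrative).
n_classes, n_per_class, n_feat = 12, 40, 24
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_feat))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# The two classifiers compared in the thesis, with default settings here.
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
acc_lda = lda.score(X_te, y_te)
acc_svm = svm.score(X_te, y_te)
```

The pose-robust fusion the thesis contributes would enter upstream of this stage, e.g. by conditioning or augmenting the MMG features with the IMU-derived arm pose before training.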

    Artificial Vision Algorithms for Socially Assistive Robot Applications: A Review of the Literature

    Today, computer vision algorithms are very important for different fields and applications, such as closed-circuit television security, health status monitoring, recognition of a specific person or object, and robotics. On this topic, the present paper provides a recent review of the literature on computer vision algorithms (recognition and tracking of faces, bodies, and objects) oriented towards socially assistive robot applications. The performance, frames-per-second (FPS) processing speed, and hardware used to run the algorithms are highlighted by comparing the available solutions. Moreover, this paper provides general information for researchers interested in knowing which vision algorithms are available, enabling them to select the one most suitable for their robotic system applications. (Conacyt Doctoral Scholarship, CVU No. 64683.)
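Since the review compares algorithms by FPS on given hardware, it is worth noting how such a number is typically produced: time a per-frame routine over many frames, discarding warm-up frames so one-off initialisation does not skew the average. The helper and the placeholder detector below are our own illustration, not the review's benchmarking code:

```python
import time

def measure_fps(process_frame, frames, warmup: int = 5) -> float:
    """Average frames-per-second of a per-frame vision routine.

    process_frame: callable taking one frame; frames: sequence of frames.
    The first `warmup` frames are processed but not timed, so lazy
    initialisation cost does not skew the average.
    """
    frames = list(frames)
    for frame in frames[:warmup]:
        process_frame(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Placeholder per-frame work; a real face/body/object detector would be
# wrapped in exactly the same way.
def dummy_detector(frame):
    return sum(frame)

fps = measure_fps(dummy_detector, [list(range(1000))] * 105)
```

Reported FPS figures are only comparable when the hardware and input resolution are also reported, which is why the review lists both alongside each algorithm.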

    Development of a Real-Time, Simple and High-Accuracy Fall Detection System for Elderly Using 3-DOF Accelerometers

    © 2018, King Fahd University of Petroleum & Minerals. Falls represent a major problem for elderly people aged 60 or above. Many monitoring systems are currently available to detect falls; however, there is a great need for a system of optimal effectiveness. In this paper, we propose a low-cost fall detection system to precisely detect an event in which an elderly person accidentally falls. The fall detection algorithm compares the acceleration with lower-fall-threshold and upper-fall-threshold values to accurately detect a fall event. A post-fall recognition module, combining posture recognition and vertical velocity estimation, has been added to the proposed method to enhance performance and accuracy. In case of a fall, the device instantly transmits the location information to designated contacts via SMS and voice call. A smartphone application ensures that the notifications are delivered to the elderly person's relatives so that medical attention can be provided with minimal delay. The system was tested by volunteers and achieved 100% sensitivity and accuracy. This was confirmed by testing with public datasets, where it achieved the same sensitivity and accuracy as with our recorded datasets.
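The two-threshold scheme the abstract describes can be sketched as follows: a fall candidate is a free-fall dip in acceleration magnitude below a lower threshold followed, within a short window, by an impact spike above an upper threshold. The threshold and timing values below are common illustrative choices, not the paper's tuned values, and the posture/vertical-velocity post-fall check is omitted:

```python
def detect_fall(accel_mags_g, t_step_s=0.02,
                lower_g=0.5, upper_g=2.5, max_gap_s=1.0):
    """Two-threshold fall detector on accelerometer magnitude (in g).

    A free-fall dip below `lower_g` followed within `max_gap_s` by an
    impact spike above `upper_g` is flagged as a fall.  Returns the
    sample index of the impact, or None if no fall is detected.
    """
    max_gap = int(max_gap_s / t_step_s)
    dip_idx = None
    for i, a in enumerate(accel_mags_g):
        if a < lower_g:
            dip_idx = i                     # remember the latest dip
        elif a > upper_g and dip_idx is not None and i - dip_idx <= max_gap:
            return i                        # dip then spike: fall event
    return None

# 1 g while standing, a brief free-fall dip, then a 3 g impact:
trace = [1.0] * 50 + [0.3] * 10 + [3.0] + [1.0] * 50
event = detect_fall(trace)  # index of the impact sample
```

Requiring the dip before the spike is what separates falls from, e.g., sitting down hard, which produces an impact spike without a preceding free-fall phase; the paper's posture-recognition module then further filters the candidates.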

    Interactions Between Humans and Robots
