
    Measurements by a LEAP-Based Virtual Glove for the Hand Rehabilitation

    Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy's effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient-specific and hand-specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, can suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP Motion controllers, is described. The VG is calibrated, and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed, and reported. These measurements show that the VG operates in real time (60 fps), reduces occlusions, and manages the two LEAP sensors correctly, without any temporal or spatial discontinuity when switching from one sensor to the other. A video demonstrating the performance of the VG is also presented in the Supplementary Materials. Results are promising, but further work is needed to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and to reduce occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications.
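
    As a hedged illustration of how two orthogonal sensors might be combined without temporal or spatial discontinuity, the Python sketch below blends the two pose estimates by confidence; the SensorFrame schema, the confidence weighting, and the joint count are assumptions for illustration, not the published VG algorithm.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class SensorFrame:
    """One hand pose estimate from a single LEAP sensor (hypothetical schema)."""
    joints: np.ndarray      # (n_joints, 3) positions in the shared, calibrated frame
    confidence: float       # 0.0 (fully occluded) .. 1.0 (clear view)

def fuse_frames(a: SensorFrame, b: SensorFrame, eps: float = 1e-6) -> np.ndarray:
    """Blend the two estimates by confidence so switching sensors is continuous.

    A hard switch (picking the sensor with the highest confidence) would jump
    when the dominant sensor changes; a confidence-weighted average instead
    degrades smoothly as one sensor becomes occluded.
    """
    w_a = a.confidence / (a.confidence + b.confidence + eps)
    return w_a * a.joints + (1.0 - w_a) * b.joints

# Example: sensor A is half-occluded, sensor B has a clear view.
a = SensorFrame(joints=np.zeros((21, 3)), confidence=0.4)
b = SensorFrame(joints=np.ones((21, 3)), confidence=0.9)
fused = fuse_frames(a, b)   # closer to B's estimate, with no discontinuity
```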

    Telerehabilitation using Real Time Communication

    Many diseases affect population trends globally. Miniaturization of sensors, combined with medical information technology, provides efficient solutions for reducing costs and delivering remote medical services through connected devices. Remote consultation via video-conferencing is well established, but chronic or long-term musculoskeletal conditions require pro-active management and therapy, raising the need for more advanced telerehabilitation systems. In this paper, we introduce KinectRTC, a framework for Kinect-based telerehabilitation with efficient real-time transmission of video, audio, and skeletal data. The proposed framework builds on Web Real-Time Communication (WebRTC) technology and manages the video and audio streams according to the state of the network and the available bandwidth, to guarantee the real-time performance of the communication.
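
    To illustrate the kind of bandwidth-aware stream management described above, the sketch below shows a simplified tiered policy that reserves capacity for the small, clinically essential skeletal channel before allocating video bitrate; the tier values, reserve figures, and function name are illustrative assumptions, not KinectRTC's actual policy.

```python
# A simplified sketch of a bandwidth-adaptation policy for telerehabilitation:
# skeletal data is small and clinically essential, so it is reserved first,
# and the video encoder receives whatever headroom remains.
# All numeric values below are illustrative assumptions.

SKELETON_RESERVE_KBPS = 100                 # ~30 fps of joint coordinates plus overhead
AUDIO_KBPS = 32                             # a typical Opus speech bitrate
VIDEO_TIERS_KBPS = [2500, 1200, 600, 250]   # descending video quality levels

def pick_video_bitrate(estimated_kbps: float) -> int:
    """Return the highest video tier that still fits the link estimate."""
    headroom = estimated_kbps - SKELETON_RESERVE_KBPS - AUDIO_KBPS
    for tier in VIDEO_TIERS_KBPS:
        if tier <= headroom:
            return tier
    return 0  # link too poor for video: keep audio and skeleton only

print(pick_video_bitrate(3000.0))  # -> 2500
print(pick_video_bitrate(800.0))   # -> 600
print(pick_video_bitrate(150.0))   # -> 0 (the skeletal stream still fits)
```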

    A Real-time Sign Language Recognition System for Hearing and Speaking Challengers

    Sign language is the primary means of communication for deaf and hearing- or speech-challenged people. There are many varieties of sign language across different communities, much like ethnic languages within society. Unfortunately, few people have knowledge of sign language in daily life. In general, interpreters can help with communication, but they are typically found only in places such as government agencies and hospitals. Moreover, employing an interpreter personally is expensive, and inconvenient when privacy is required. It is therefore very important to develop a robust Human Machine Interface (HMI) system that can support these challengers in entering society. A novel sign language recognition system is proposed, composed of three parts. First, initial coordinate locations of the hands are obtained using the joint skeleton information of the Kinect. Next, features are extracted from the hand joints, which carry depth information, and handshapes are translated. A Hidden Markov Model-based threshold model is then trained on three feature sets and used to segment and recognize sign language. Experimental results show that the average recognition rates for the signer-dependent and signer-independent settings are 95% and 92%, respectively. Feature sets that include handshape achieve better recognition results. Presented at an international conference sponsored by the Asia-Pacific Education & Research Association, 11–13 July 2014, Phuket, Thailand.
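
    The sketch below illustrates the threshold-model idea using hmmlearn: one HMM per sign plus a rejection model, with a segment accepted only when its best sign model outscores the threshold model. Feature dimensions, state counts, and the random training data are placeholders, and training the threshold model on pooled data is a simplification of the paper's construction.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
N_FEATURES = 8  # e.g., hand joint coordinates plus handshape descriptors per frame

def train_hmm(sequences, n_states=4):
    """Fit one Gaussian HMM on a list of (frames, features) sequences."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

# Placeholder training sets: a few synthetic sequences per sign.
signs = {
    name: [rng.normal(loc, 1.0, size=(30, N_FEATURES)) for _ in range(5)]
    for name, loc in [("hello", 0.0), ("thanks", 3.0)]
}
sign_models = {name: train_hmm(seqs) for name, seqs in signs.items()}

# Here the threshold model is simply trained on all signs pooled, so it
# captures "generic movement" and sets an adaptive rejection bar.
threshold_model = train_hmm([s for seqs in signs.values() for s in seqs])

def classify(segment):
    """Return the best-matching sign, or None if the threshold model wins."""
    best_name, best_score = max(
        ((name, m.score(segment)) for name, m in sign_models.items()),
        key=lambda t: t[1],
    )
    return best_name if best_score > threshold_model.score(segment) else None

print(classify(rng.normal(3.0, 1.0, size=(30, N_FEATURES))))  # likely "thanks"
```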

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of the fields of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, which people often use to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, meanwhile, plays a key role in the action recognition and affective computing fields: the former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements; both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is proposed for the recognition of sign language and semaphoric hand gestures; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, in the last module, a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs) is provided. The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
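
    As a rough illustration of the second module's architecture, the sketch below wires up a two-branch stacked LSTM over 2D skeleton sequences in PyTorch; the joint count, hidden size, class count, and the choice of positions plus temporal differences as the two branches' inputs are assumptions, not the thesis's exact design.

```python
import torch
import torch.nn as nn

class TwoBranchLSTM(nn.Module):
    """One branch reads raw 2D joint positions, the other reads frame-to-frame
    joint motion; the two sequence summaries are fused for classification."""

    def __init__(self, n_joints=18, hidden=128, n_classes=10):
        super().__init__()
        feat = n_joints * 2  # (x, y) per joint per frame
        self.pos_branch = nn.LSTM(feat, hidden, num_layers=2, batch_first=True)
        self.mot_branch = nn.LSTM(feat, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, n_joints * 2)
        motion = x[:, 1:] - x[:, :-1]          # temporal differences
        _, (h_pos, _) = self.pos_branch(x)
        _, (h_mot, _) = self.mot_branch(motion)
        fused = torch.cat([h_pos[-1], h_mot[-1]], dim=-1)  # top-layer states
        return self.head(fused)

model = TwoBranchLSTM()
logits = model(torch.randn(4, 60, 36))  # 4 clips, 60 frames, 18 joints x (x, y)
print(logits.shape)                     # torch.Size([4, 10])
```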

    Overall Design and Implementation of the Virtual Glove

    Post-stroke patients and people suffering from hand diseases often need rehabilitation therapy. The recovery of original skills, when possible, is closely related to the frequency, quality, and duration of the rehabilitative therapy. Rehabilitation gloves are tools used both to facilitate rehabilitation and to monitor improvements through an evaluation system. Mechanical gloves have high cost, are often cumbersome, are not re-usable and, hence, cannot be used with the healthy hand to collect the patient-specific hand mobility information toward which rehabilitation should tend. The approach we propose is the virtual glove, a system that, unlike tools based on mechanical haptic interfaces, uses a set of video cameras surrounding the patient's hand to collect synchronized videos used to track hand movements. The hand tracking is associated with a numerical hand model that is used to calculate physical, geometrical, and mechanical parameters and to enforce boundary constraints such as joint dimensions, shapes, joint angles, and so on. Besides being accurate, the proposed system aims to be low cost, not bulky (touch-less), easy to use, and re-usable. Previous works described the virtual glove's general concepts, the hand model, and its characterization, including the system calibration strategy. The present paper provides the virtual glove's overall design, in both real-time and off-line modalities. In particular, the real-time modality is described and implemented, and a marker-based hand tracking algorithm, including a marker positioning, coloring, labeling, detection, and classification strategy, is presented for the off-line modality. Moreover, model-based hand tracking experimental measurements are reported, discussed, and compared with the corresponding poses of the real hand. An error estimation strategy is also presented and applied to the collected measurements. System limitations and future work for system improvement are also discussed.
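
    As a rough sketch of the marker coloring, detection, and classification step mentioned above, the code below thresholds each marker color in HSV and takes contour centroids as image-plane marker positions; the color ranges, marker labels, and function name are assumptions for illustration, not the paper's exact pipeline.

```python
import cv2
import numpy as np

# Hypothetical marker palette: one HSV range per marker class.
MARKER_RANGES = {
    "thumb_tip": ((0, 120, 80),  (10, 255, 255)),   # red
    "index_tip": ((50, 120, 80), (70, 255, 255)),   # green
}

def detect_markers(frame_bgr):
    """Return {marker_label: (x, y)} centroids found in one camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    found = {}
    for label, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue  # marker occluded in this view
        c = max(contours, key=cv2.contourArea)  # keep the largest blob
        m = cv2.moments(c)
        if m["m00"] > 0:
            found[label] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return found

frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[200:210, 300:310] = (0, 255, 0)   # paint a synthetic green marker
print(detect_markers(frame))            # {'index_tip': (~304.5, ~204.5)}
```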