902 research outputs found

    A review on automated facial nerve function assessment from visual face capture


    Vision-based interface applied to assistive robots

    This paper presents two vision-based interfaces that allow disabled people to command a mobile robot for personal assistance. The interfaces differ in the image-processing algorithm used to detect and track two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities to command a mobile robot. The second interface detects and tracks movements of the user's hand, with the movements transformed in the same way. In addition, this paper presents the control laws for the robot. The experimental results demonstrate good performance and a balance between complexity and feasibility for real-time applications.
    Pérez Berenguer, María Elisa: Universidad Nacional de San Juan, Facultad de Ingeniería, Departamento de Electrónica y Automática, Gabinete de Tecnología Médica, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina
    Soria, Carlos Miguel: Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - San Juan, Instituto de Automática; Universidad Nacional de San Juan, Facultad de Ingeniería, Instituto de Automática, Argentina
    López Celani, Natalia Martina: Universidad Nacional de San Juan, Facultad de Ingeniería, Departamento de Electrónica y Automática, Gabinete de Tecnología Médica, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina
    Nasisi, Oscar Herminio: Universidad Nacional de San Juan, Facultad de Ingeniería, Instituto de Automática, Argentina
    Mut, Vicente Antonio: Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - San Juan, Instituto de Automática; Universidad Nacional de San Juan, Facultad de Ingeniería, Instituto de Automática, Argentina
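    The head-tracking interface maps the user's tracked head displacement to linear and angular velocity commands for the robot. As a minimal sketch of such a mapping (not the authors' control law; the gains `k_lin`/`k_ang` and the saturation limits are illustrative assumptions), one could write:

    ```python
    import numpy as np

    def head_to_velocity(dx, dy, k_lin=0.01, k_ang=0.02, v_max=0.5, w_max=1.0):
        """Map tracked head displacement (pixels from the image centre)
        to robot velocity commands, with saturation limits.
        Vertical motion -> forward speed, horizontal motion -> turn rate.
        Gains and limits here are illustrative, not the paper's values."""
        v = float(np.clip(k_lin * dy, -v_max, v_max))   # linear velocity, m/s
        w = float(np.clip(k_ang * dx, -w_max, w_max))   # angular velocity, rad/s
        return v, w
    ```

    In a real system the resulting pair would be passed to the robot's velocity controller, which the paper governs with its own control laws.
    
    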

    Severity scoring approach using modified optical flow method and lesion identification for facial nerve paralysis assessment

    The facial nerve controls facial movement and expression; hence, a patient with facial nerve paralysis experiences impaired social interaction, psychological distress, and low self-esteem. Upon first presentation, it is crucial to determine the severity level of the paralysis and to rule out stroke or other serious causes by recognising the type of lesion, in order to prevent mistreatment of the patient. Clinically, the facial nerve is assessed subjectively by observing voluntary facial movement and assigning a score based on the clinician's deductions. However, the results are not uniform among different examiners evaluating the same patients, which is highly undesirable for both medical diagnosis and treatment planning. Acknowledging the importance of this assessment, this research developed a facial nerve assessment that classifies both the severity level of facial nerve function and the type of facial lesion, Upper Motor Neuron (UMN) or Lower Motor Neuron (LMN), in a regional assessment and a lesion assessment, respectively. For the regional assessment, two optical flow techniques, Kanade-Lucas-Tomasi (KLT) and Horn-Schunck (HS), were used to determine the local and global motion information of facial features. However, the original KLT has a limitation: its Eigen features cannot distinguish normal subjects from patients. The KLT method was therefore modified by introducing polygonal measurements, with landmarks placed on each facial region. Similarly, the HS method was modified to evaluate multiple frames rather than the single frame of the original method, so that the differences between frames do not become too small.
The features of these modified methods, Modified Local Sparse (MLS) and Modified Global Dense (MGD), were combined into the Combined Modified Local-Global (CMLG) method to capture both local (single-region) and global (entire-image) flow features. These served as the input to a k-NN classifier to assess the performance of each method in determining the severity level of paralysis. For the lesion assessment, a Gabor filter was used to extract forehead wrinkle features. The Gabor features were then combined with the CMLG features, focusing only on the forehead region, to evaluate both the wrinkle and the motion information of the facial features. This is because a patient with an LMN lesion cannot move the forehead symmetrically during the eyebrow-raising movement and is unable to wrinkle the forehead, owing to the damaged frontalis muscle, whereas a patient with a UMN lesion exhibits the same criteria as a normal subject: the forehead is spared and can be lifted symmetrically. In the regional assessment, the CMLG technique showed the best performance in distinguishing between patient and normal subjects, with an accuracy of 92.26% compared with 88.38% for MLS and 90.32% for MGD. From these results, several assessment tools were developed in this study, namely an individual score, a total score and a paralysis score chart, which were correlated with the House-Brackmann score and validated by a medical professional with 91.30% accuracy. In the lesion assessment, the combined Gabor and CMLG features on the forehead region showed better performance in distinguishing UMN from LMN lesions, with an accuracy of 89.03% compared with 78.07% for Gabor features alone.
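A Gabor filter responds strongly to oriented texture, which is why it suits extracting forehead wrinkle lines. The following is a generic real-valued Gabor kernel; the parameter values are illustrative defaults, not the values used in the thesis:

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel: a Gaussian envelope modulating a
    cosine carrier. theta sets the orientation of the stripes the
    filter responds to; convolving it with a forehead patch yields a
    wrinkle-sensitive response map. All parameter values are illustrative."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd + psi)
    return envelope * carrier
```

Summed or pooled filter responses over the forehead region give the wrinkle features that are concatenated with the motion features.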
In conclusion, the proposed facial nerve assessment approach, consisting of both a regional assessment and a lesion assessment, is capable of determining the level of facial paralysis severity and recognising the type of facial lesion, whether UMN or LMN.
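The classification step feeds the combined local and global flow features into a k-NN classifier. A minimal majority-vote k-NN sketch over concatenated feature vectors (illustrative, not the thesis's exact configuration or k value):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test feature vector by majority vote among its k
    nearest training vectors under Euclidean distance. Feature vectors
    would be concatenations of local (MLS-style) and global (MGD-style)
    flow features; here they are just generic numeric vectors."""
    X_train = np.asarray(X_train, dtype=float)
    X_test = np.asarray(X_test, dtype=float)
    y_train = np.asarray(y_train)
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)        # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]           # labels of k nearest neighbours
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])        # majority vote
    return np.array(preds)
```

With per-region features, the same scheme can be run region by region to produce the individual scores that the study aggregates into a total score.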

    Markerless Human Motion Analysis

    Measuring and understanding human motion is crucial in several domains, ranging from neuroscience to rehabilitation and sports biomechanics. Quantitative information about human motion is fundamental to study how our Central Nervous System controls and organizes movements, and to functionally evaluate motor performance and deficits. In the last decades, the research in this field has made considerable progress. State-of-the-art technologies that provide useful and accurate quantitative measures rely on marker-based systems. Unfortunately, markers are intrusive, and their number and location must be determined a priori. Also, marker-based systems require expensive laboratory settings with several infrared cameras. This can alter the naturalness of a subject's movements and induce discomfort. Last but not least, they are computationally expensive in both time and space. Recent advances in markerless pose estimation based on computer vision and deep neural networks are opening the possibility of adopting efficient video-based methods for extracting movement information from RGB video data. In this context, this thesis presents original contributions to the following objectives: (i) the implementation of a video-based markerless pipeline to quantitatively characterize human motion; (ii) the assessment of its accuracy compared with a gold standard marker-based system; (iii) the application of the pipeline to different domains in order to verify its versatility, with a special focus on the characterization of the motion of preterm infants and on gait analysis. With the proposed approach we highlight that, starting only from RGB videos and leveraging computer vision and machine learning techniques, it is possible to extract reliable information characterizing human motion, comparable to that obtained with gold standard marker-based systems.
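    A common downstream use of markerless 2D keypoints in gait analysis is computing joint angles from triplets of detected landmarks (for instance hip-knee-ankle for knee flexion). A small sketch, assuming the keypoints have already been extracted by a pose estimator (the function name and landmark choice are illustrative):

    ```python
    import numpy as np

    def joint_angle(a, b, c):
        """Angle at joint b, in degrees, formed by keypoints a-b-c,
        e.g. hip-knee-ankle for knee flexion. Inputs are 2D (or 3D)
        coordinates from a pose estimator; clipping guards arccos
        against floating-point values just outside [-1, 1]."""
        a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
        u, v = a - b, c - b
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    ```

    Evaluating such angles frame by frame yields the joint-angle trajectories that can be compared against a marker-based gold standard.
    
    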

    Determining normal and abnormal lip shapes during movement for use as a surgical outcome measure

    Craniofacial assessment for diagnosis, treatment planning and outcome has traditionally relied on imaging techniques that provide a static image of the facial structure. Objective measures of facial movement are, however, becoming increasingly important for clinical interventions where surgical repositioning of facial structures can influence soft tissue mobility. These applications include the management of patients with cleft lip, facial nerve palsy and orthognathic surgery. Although technological advances in medical imaging have now enabled three-dimensional (3D) motion scanners to become commercially available, their clinical application to date has been limited. Therefore, the aim of this study is to determine normal and abnormal lip shapes during movement for use as a clinical outcome measure using such a scanner. Lip movements were captured from an average population using a 3D motion scanner. Consideration was given to the type of facial movement captured (i.e. verbal or non-verbal) and also to the method of feature extraction (i.e. manual or semi-automatic landmarking). Statistical models of appearance (Active Shape Models) were used to convert the video motion sequences into linear data and identify reproducible facial movements via pattern recognition. Average templates of lip movement were created from the most reproducible lip movements using Geometric Morphometrics (GMM), incorporating Generalised Procrustes Analysis (GPA) and Principal Component Analysis (PCA). Finally, lip movement data from a patient group undergoing orthognathic surgery were incorporated into the model, and Discriminant Analysis (DA) was employed in an attempt to statistically distinguish abnormal lip movement. The results showed that manual landmarking was the preferred method of feature extraction. Verbal facial gestures (i.e. words) were significantly more reproducible/repeatable over time when compared to non-verbal gestures (i.e. facial expressions).
It was possible to create average templates of lip movement from the control group, which acted as an outcome measure, and from which abnormalities in movement could be discriminated pre-surgery. These abnormalities were found to normalise post-surgery. The concepts of this study form the basis of analysing facial movement in the clinical context. The methods are transferable to other patient groups. Specifically, patients undergoing orthognathic surgery have differences in lip shape/movement when compared to an average population. Correcting the position of the basal bones in this group of patients appears to normalise lip mobility.
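Generalised Procrustes Analysis builds on superimposing each landmark configuration onto a reference by removing translation, scale and rotation; PCA is then run on the aligned coordinates. A sketch of the single-shape (ordinary) Procrustes step as one building block (illustrative, not the study's exact pipeline):

```python
import numpy as np

def procrustes_align(shape, ref):
    """Ordinary Procrustes superimposition of one landmark set (n x 2)
    onto a reference: remove translation, scale, then rotation. The
    optimal rotation comes from the SVD of the cross-covariance matrix
    (orthogonal Procrustes problem)."""
    def normalise(s):
        s = np.asarray(s, dtype=float)
        s = s - s.mean(axis=0)           # remove translation (centre on centroid)
        return s / np.linalg.norm(s)     # remove scale (unit centroid size)
    s, r = normalise(shape), normalise(ref)
    u, _, vt = np.linalg.svd(s.T @ r)    # SVD of cross-covariance
    return s @ (u @ vt)                  # rotate onto the reference
```

GPA iterates this alignment against a running mean shape until convergence; the aligned lip landmark coordinates then feed PCA to build the average movement templates.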

    Intelligent Sensors for Human Motion Analysis

    The book "Intelligent Sensors for Human Motion Analysis" contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects of the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems.