7 research outputs found

    Proceedings XXII Congresso SIAMOC 2022

    The annual congress of the Società Italiana di Analisi del Movimento in Clinica (SIAMOC) gives all professionals, from both the clinical and engineering fields, the opportunity to meet, present their research, and stay up to date on the latest innovations in the clinical application of movement analysis methods. Its aim is to promote the study and clinical application of these methods in order to improve the assessment of motor disorders, increase treatment efficacy through quantitative data analysis and more focused treatment planning, and quantify the outcomes of current therapies.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators can function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces that recognize hand gestures and/or track arm motions to assess operator intent and generate robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments, due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent/commands. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that degrades the signal over time, demands an electrical connection with the skin, and has not been demonstrated to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe the muscular contractions in the forearm that are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several pattern recognition methods were implemented to decode the mechanomyographic information accurately, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved for 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference; this thesis established that arm pose also changes the measured signal. A new method of fusing IMU and MMG data is introduced that provides a classification robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time, for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that naturally indicates intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and as the desire for a simple, universal interface increases. Such systems have the potential to significantly improve the quality of life of prosthetic users and others.
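The orientation-estimation step mentioned above can be illustrated with a complementary filter, a standard textbook technique for fusing gyroscope and accelerometer readings; this is a generic sketch, not the thesis's proposed algorithm, and the sample interval and blend factor are illustrative assumptions.

```python
# Minimal 1-axis complementary filter: fuses the integrated gyroscope
# rate (smooth but drifting) with the accelerometer-derived tilt angle
# (noisy but drift-free). Generic sketch only; dt and alpha are
# illustrative values, not the thesis's parameters.

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Estimate tilt angle (degrees) from paired sensor streams.

    gyro_rates   -- angular rates in deg/s, one per sample
    accel_angles -- tilt angles (deg) computed from the accelerometer
    """
    angle = accel_angles[0]          # initialise from the accelerometer
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro short-term, the accelerometer long-term.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
        estimates.append(angle)
    return estimates
```

With a stationary sensor (zero gyro rate, constant accelerometer angle) the estimate holds steady; under motion the gyro term tracks fast changes while the accelerometer term bleeds off integration drift.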

    Detection and Prediction of Freezing of Gait in Parkinson’s Disease using Wearable Sensors and Machine Learning

    Freezing of gait (FOG) is a brief episodic absence of forward body progression despite the intention to walk. Appearing mostly in mid-to-late stage Parkinson’s disease (PD), freezing manifests as a sudden loss of lower-limb function and is closely linked to falling, decreased functional mobility, and loss of independence. Wearable-sensor-based devices can detect freezes already in progress and intervene by delivering auditory, visual, or tactile stimuli called cues. Cueing has been shown to reduce FOG duration and allow walking to continue. However, FOG detection and cueing systems require data from the freeze episode itself and are thus unable to prevent freezing. Anticipating the FOG episode before onset and supplying a timely cue could prevent the freeze from occurring altogether. FOG has been predicted in offline analyses by training machine learning models to identify wearable-sensor signal patterns known to precede FOG. The most commonly used sensors for FOG detection and prediction are inertial measurement units (IMUs) that include an accelerometer, a gyroscope, and sometimes a magnetometer. Currently, the best FOG prediction systems use data collected from multiple sensors at various body locations to develop person-specific models. Multi-sensor systems are more complex and may be challenging to integrate into real-life assistive devices. The ultimate goal of FOG prediction systems is a user-friendly assistive device that can be used by anyone experiencing FOG. To achieve this goal, person-independent models with high FOG prediction performance and a minimal number of conveniently located sensors are needed.
The objectives of this thesis were: to develop and evaluate FOG detection and prediction models using IMU and plantar pressure data; to determine whether event-based or period-of-gait-disruption FOG definitions give better classification performance for FOG detection and prediction; and to evaluate FOG prediction models that use a single unilateral plantar pressure insole sensor or bilateral sensors. In this thesis, IMU (accelerometer and gyroscope) and plantar pressure insole sensors were used to collect data from 11 people with FOG while they walked a freeze-provoking path. A custom-made synchronization and labeling program was used to synchronize the IMU and plantar pressure data and annotate FOG episodes. Data were divided into overlapping 1 s windows with a 0.2 s shift between consecutive windows. Time-domain, Fourier-transform-based, and wavelet-transform-based features were extracted from the data. A total of 861 features were extracted from each of the 71,000 data windows. To evaluate the effectiveness of FOG detection and prediction models using plantar pressure and IMU data features, three feature sets were compared: plantar pressure, IMU, and both plantar pressure and IMU features. Minimum-redundancy maximum-relevance (mRMR) and Relief-F feature selection were performed prior to training boosted ensembles of decision trees. The binary classification models identified Total-FOG or Non-FOG states, wherein the Total-FOG class included windows with data from 2 s before FOG onset until the end of the FOG episode. The plantar-pressure-only model had the greatest sensitivity, and the IMU-only model had the greatest specificity. The best overall model used the combination of plantar pressure and IMU features, achieving 76.4% sensitivity and 86.2% specificity. Next, the Total-FOG class components were evaluated individually (i.e., Pre-FOG windows, freeze windows, and transition windows between Pre-FOG and FOG).
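The windowing scheme described here (overlapping 1 s windows with a 0.2 s shift) can be sketched generically; the feature set below (mean, standard deviation, RMS, range) is a small illustrative subset of common time-domain features, not the thesis's 861-feature set.

```python
import math

def sliding_windows(signal, fs, win_s=1.0, step_s=0.2):
    """Split a 1-D signal into overlapping windows.

    fs     -- sampling rate in Hz
    win_s  -- window length in seconds (1 s, as in the study)
    step_s -- shift between consecutive windows (0.2 s, as in the study)
    """
    win, step = int(win_s * fs), int(step_s * fs)
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

def time_domain_features(window):
    """A few illustrative time-domain features for one window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    return {"mean": mean, "std": math.sqrt(var), "rms": rms,
            "range": max(window) - min(window)}
```

Each window would then be labelled (Total-FOG vs. Non-FOG), and its feature vector passed through feature selection to the boosted decision-tree ensemble.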
The best model, which used plantar pressure and IMU features, detected windows that contained both Pre-FOG and FOG data with 85.2% sensitivity, which is equivalent to detecting FOG less than 1 s after the freeze began. Models using both plantar pressure and IMU features performed better than models that used either sensor type alone. Datasets used to train machine learning models often generate ground truth FOG labels based on visual observation of specific lower limb movements (event-based definition) or an overall inability to walk effectively (period of gait disruption based definition). FOG definition ambiguity may affect FOG detection and prediction model performance, especially with respect to multiple FOG in rapid succession. This research examined the effects of defining FOG either as a period of gait disruption (merging successive FOG), or based on an event (no merging), on FOG detection and prediction. Plantar pressure and lower limb acceleration data were used to extract a set of features and train decision tree ensembles. FOG was labeled using an event-based definition. Additional datasets were then produced by merging FOG that occurred in rapid succession. A merging threshold was introduced where FOG that were separated by less than the merging threshold were merged into one episode. FOG detection and prediction models were trained for merging thresholds of 0, 1, 2, and 3 s. Merging had little effect on FOG detection model performance; however, for the prediction model, merging resulted in slightly later FOG identification and lower precision. FOG prediction models may benefit from using event-based FOG definitions and avoiding merging multiple FOG in rapid succession. Despite the known asymmetry of PD motor symptom manifestation, the difference between the more severely affected side (MSS) and less severely affected side (LSS) is rarely considered in FOG detection and prediction studies. 
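The episode-merging step with its threshold can be expressed as a simple interval merge; representing episodes as (start, end) times in seconds is an assumption made here for illustration.

```python
def merge_episodes(episodes, threshold_s):
    """Merge FOG episodes separated by less than threshold_s seconds.

    episodes -- list of (start, end) times in seconds, sorted by start.
    A threshold of 0 reproduces the event-based definition (no merging);
    larger thresholds (1-3 s in the study) approximate a
    period-of-gait-disruption definition.
    """
    if not episodes:
        return []
    merged = [list(episodes[0])]
    for start, end in episodes[1:]:
        if start - merged[-1][1] < threshold_s:
            merged[-1][1] = max(merged[-1][1], end)  # fuse with previous episode
        else:
            merged.append([start, end])
    return [tuple(ep) for ep in merged]
```

Two freezes 0.5 s apart, for example, stay separate at a 0 s threshold but collapse into one longer episode at a 1 s threshold, which shifts the episode onset used as the prediction target.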
The additional information provided by the MSS or LSS, if any, may be beneficial to FOG prediction models, especially if using a single sensor. To examine the effect of using data from the MSS, LSS, or both limbs, multiple FOG prediction models were trained and compared. Three datasets were created using plantar pressure data from the MSS, LSS, and both sides together. Feature selection was performed, and FOG prediction models were trained using the top 5, 10, 15, 20, 25, or 30 features for each dataset. The best models were the MSS model with 15 features and the LSS and bilateral models with 5 features each. The LSS model reached the highest sensitivity (79.5%) and identified the highest percentage of FOG episodes (94.9%). The MSS model achieved the highest specificity (84.9%) and the lowest false positive (FP) rate (2 FP/walking trial). Overall, the bilateral model was best. The bilateral model had 77.3% sensitivity, 82.9% specificity, and identified 94.3% of FOG episodes an average of 1.1 s before FOG onset. Compared to the bilateral model, the LSS model had a higher false positive rate; however, the bilateral and LSS models were similar in all other evaluation metrics. Therefore, using the LSS model instead of the bilateral model would produce similar FOG prediction performance at the cost of slightly more false positives. Given the advantages of single-sensor systems, the increased FP rate may be acceptable. Therefore, a single plantar pressure sensor placed on the LSS could be used to develop a FOG prediction system with performance similar to a bilateral system.
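The sensitivity and specificity figures used to compare these models follow from standard per-window confusion-matrix counts; the helper below is a generic sketch, and any example counts fed to it are hypothetical, not the study's data.

```python
def window_metrics(tp, fp, tn, fn):
    """Sensitivity and specificity from per-window confusion counts.

    tp/fn count Total-FOG windows correctly/incorrectly classified;
    tn/fp count Non-FOG windows. Generic definitions, not tied to
    the study's specific datasets.
    """
    sensitivity = tp / (tp + fn)   # proportion of FOG windows caught
    specificity = tn / (tn + fp)   # proportion of non-FOG windows kept
    return sensitivity, specificity
```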

    Proceedings of the 9th international conference on disability, virtual reality and associated technologies (ICDVRAT 2012)

    The proceedings of the conference.

    The effectiveness of PROMPT therapy for children with cerebral palsy

    The purpose of this study is to evaluate the effectiveness of a motor speech treatment approach (PROMPT) in the management of motor-speech impairment in children with cerebral palsy. Two main objectives were addressed: (1) to evaluate changes in speech intelligibility and (2) to evaluate changes in kinematic movements of the jaw and lips using three-dimensional (3D) motion analysis. A single-subject multiple-baseline-across-participants research design, with four phases: baseline (A1), two intervention phases (B and C), and maintenance (A2), was implemented. Six participants, aged 3 to 11 years (3 boys, 3 girls), with moderate to severe speech impairment were recruited through The Centre for Cerebral Palsy, Western Australia (TCCP). Inclusion criteria were: diagnosis of cerebral palsy, age 3–14 years, stable head control (supported or independent), spontaneous use of at least 15 words, speech impairment ≥1.5 standard deviations, hearing loss no greater than 25 dB, developmental quotient ≥70 (Leiter-Brief International Performance Scale R), and no previous exposure to PROMPT. Thirteen typically-developing peers were recruited to compare the trend of kinematic changes in jaw and lip movements to those of the children with cerebral palsy. Upon achievement of a stable baseline, participants completed two intervention phases, both of 10 weeks' duration. Therapist fidelity to the PROMPT approach was determined by a blinded, independent PROMPT Instructor. Perceptual outcome measures included the administration of weekly speech probes, containing trained and untrained vocabulary at the two targeted levels of intervention plus an additional level. These were analysed for both perceptual accuracy (PA) and the motor speech movement parameter. End-of-phase measures included: 1. Changes in phonetic accuracy, measured as percentage phonemes correct; 2. Speech intelligibility measures, using a standardised assessment tool; and 3.
Changes to activity/participation using the Canadian Occupational Performance Measure (COPM). Kinematic data were collected at the end of each study phase using 3D motion analysis (Vicon Motus 9.1). This involved the collection of jaw and lip measurements of distance, duration, and velocity during the production of 11 untrained stimulus words. The words contained vowels that spanned the articulatory space and represented motor-speech movement patterns at the level of mandibular and labial-facial control, as classified according to the PROMPT motor speech hierarchy. Analysis of the speech probe data showed that all participants recorded a statistically significant improvement. Between phases A1-B and B-C, 6/6 and 4/6 participants respectively recorded a statistically significant increase in performance level on the motor speech movement patterns (MSMPs) targeted during the training of that intervention priority (IP). The data further show that five participants (one participant was lost to follow-up) achieved a statistically significant increase at 12 weeks post-intervention as compared to baseline (phase A1). Four participants achieved a statistically significant increase in performance level in the PA of the speech probes of both IP1 and IP2 between phases A1-B. Whilst only one participant recorded a statistically significant increase in PA between phases B-C, five participants achieved a statistically significant increase in IP2 between phases A1-C. The data further show that all participants achieved a statistically significant increase in PA on both intervention priorities at 12 weeks post-intervention. All participants recorded data indicating improved perceptual accuracy across the study phases, as shown by a statistically significant increase in percentage phonemes correct scores, F(3,18) = 5.55, p < .05. All participants achieved improved speech intelligibility.
Five participants recorded an increase in speech intelligibility greater than 14% at the end of the first intervention (phase B). Continued improvement was observed for five participants at the end of the second intervention (phase C).