173 research outputs found

    Development and evaluation of a haptic framework supporting telerehabilitation robotics and group interaction

    Telerehabilitation robotics has grown remarkably in the past few years. It can provide intensive training to people with special needs remotely while allowing therapists to observe the whole process. Telerehabilitation robotics is a promising way to support routine care: face-to-face, one-on-one treatment sessions, which require intensive human resources and are restricted to specialised care centres, can be transformed into technology-based treatments that involve less human effort and are easy to access remotely from anywhere. However, limitations such as network latency, jitter, and delay over the internet can negatively affect the user experience and the quality of the treatment session. Moreover, because all treatments are performed over the internet, the lack of social interaction can reduce patients' motivation. These limitations make it very difficult to deliver an efficient recovery plan. This thesis developed and evaluated a new framework designed to facilitate telerehabilitation robotics. The framework integrates multiple cutting-edge technologies to generate playful activities that involve group interaction with binaural audio, visual, and haptic feedback, together with robot interaction, in a variety of environments. The research questions were: 1) Can technology-mediated activity motivate and influence the behaviour of users, so that they engage in the activity and sustain a good level of motivation? 2) Will working as a group enhance users' motivation and interaction? 3) Can we transfer a real-life activity involving group interaction to the virtual domain and deliver it reliably via the internet? There were three goals in this work: first, to compare people's behaviours and motivations while doing a task in a group and on their own; second, to determine whether group interaction in virtual and real environments differs in terms of performance, engagement and strategy to complete the task; and finally, to test the effectiveness of the framework against benchmarks drawn from the socially assistive robotics literature. Three studies were conducted to achieve the first goal, two with healthy participants and one with seven autistic children. The first study observed how people react in a challenging group task, while the other two studies compared group and individual interactions. The results showed that group interactions were more enjoyable than individual interactions and most likely had more positive effects on user behaviour. This suggests that the group interaction approach has the potential to motivate individuals to make more movements and be more active, and could be applied in the future to more serious therapy. Another study measured group-interaction performance in virtual and real environments and identified which aspects influence users' strategy for dealing with the task. The results from this study helped to build a better understanding for predicting a user's behaviour in a collaborative task. A simulation was run to compare the results generated by the predictor with the real data; it showed that, with an appropriate training method, the predictor can perform very well. This thesis has demonstrated the feasibility of group interaction via the internet using robotic technology, which could benefit people who require social interaction in their treatments (e.g. stroke patients and autistic children) without regular visits to clinical centres.
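    The behaviour prediction mentioned in this abstract can be made concrete with a minimal sketch, assuming hypothetical per-trial features (mean hand speed, applied force, distance to the partner's cursor) and hypothetical lead/follow strategy labels; it illustrates the general idea of training and validating such a predictor and is not the thesis's actual model or data.

        # Minimal sketch (hypothetical features and labels, not the thesis's
        # model): train a simple behaviour predictor for a collaborative
        # haptic task and check it on held-out data.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Hypothetical per-trial features: mean hand speed, applied force,
        # distance to the partner's cursor. Labels: adopted strategy
        # (0 = lead, 1 = follow). Real data would come from the robot logs.
        X = rng.normal(size=(200, 3))
        y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

        predictor = RandomForestClassifier(n_estimators=100, random_state=0)

        # Cross-validated accuracy stands in for the simulation-based
        # comparison between predicted and observed behaviour.
        scores = cross_val_score(predictor, X, y, cv=5)
        print(f"mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")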

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need to take action. Active and Assisted Living (AAL) technologies offer a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in that room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings in which they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is reviewed.
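    As an illustration of the camera-based vital-sign assessment mentioned above, the sketch below shows the basic idea behind remote photoplethysmography: the mean intensity of a skin region fluctuates slightly with the cardiac pulse, so band-pass filtering that trace and locating the dominant spectral peak yields a heart-rate estimate. The signal here is synthetic and the method is a generic textbook example, not a specific system surveyed in the report.

        # Toy remote-photoplethysmography sketch: estimate heart rate from a
        # synthetic per-frame mean green-channel trace of a skin region.
        import numpy as np
        from scipy.signal import butter, filtfilt

        fps = 30.0                      # camera frame rate (assumed)
        t = np.arange(0, 30, 1 / fps)   # 30 s of frames

        # Synthetic trace: a 72 bpm pulse (1.2 Hz) buried in noise. In a real
        # pipeline this would be the per-frame mean over a detected skin ROI.
        signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)

        # Band-pass to the plausible heart-rate range (0.7-4 Hz, i.e. 42-240 bpm).
        b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
        filtered = filtfilt(b, a, signal)

        # Heart rate = frequency of the largest spectral peak, in beats per minute.
        spectrum = np.abs(np.fft.rfft(filtered))
        freqs = np.fft.rfftfreq(filtered.size, d=1 / fps)
        print(f"estimated heart rate: {60 * freqs[np.argmax(spectrum)]:.0f} bpm")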

    EEG and ECoG features for Brain Computer Interface in Stroke Rehabilitation

    Non-invasive Brain-Computer Interfaces (BCIs) have been used to control exoskeletons for motor rehabilitation in stroke patients or as assistive devices for the paralyzed. However, there is still a need to create a more reliable BCI that could control several degrees of freedom (DoFs) and thereby improve rehabilitation results. Decoding different movements from the same limb, and achieving high accuracy and reliability, are some of the main difficulties when using conventional EEG-based BCIs and the challenges we tackled in this thesis. In this PhD thesis, we showed that the classification of several functional hand-reaching movements from the same limb using EEG is possible with acceptable accuracy. Moreover, we investigated how recalibration affects the classification results. For this reason, we tested multi-class decoding within sessions, between sessions with recalibration, and between sessions without recalibration. The results showed the great influence of recalibrating the generated classifier with data from the current session on improving the stability and reliability of the decoding. Moreover, we used a multiclass extension of the Filter Bank Common Spatial Patterns (FBCSP) algorithm to improve the feature-based decoding accuracy and compared it to our previous study using CSP. Sensorimotor-rhythm-based BCI systems have been used within the same frequency ranges as a way to influence brain plasticity or to control external devices. However, neural oscillations have been shown to synchronize activity according to motor and cognitive functions. For this reason, cross-frequency interactions produce oscillations with different frequencies in neural networks. In this PhD, we investigated for the first time the existence of cross-frequency coupling during rest and movement using ECoG in chronic stroke patients. We found an exaggerated phase-amplitude coupling between the phase of the alpha frequency and the amplitude of the gamma frequency, which can be used as a feature or target for neurofeedback interventions using BCIs. This coupling has also been reported in other neurological disorders affecting motor function (Parkinson's disease and dystonia) but, to date, it had not been investigated in stroke patients. This finding might change the future design of assistive or therapeutic BCI systems for motor restoration in stroke patients.
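    For readers unfamiliar with phase-amplitude coupling, the sketch below computes a mean-vector-length style alpha-gamma coupling measure (in the spirit of Canolty et al.) on a synthetic signal using the Hilbert transform; it is a generic illustration of the metric, assuming made-up signal parameters, and not the thesis's exact ECoG analysis pipeline.

        # Generic alpha-gamma phase-amplitude coupling (PAC) sketch:
        # mean vector length of the gamma envelope weighted by alpha phase.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 1000.0                       # sampling rate in Hz (assumed)
        t = np.arange(0, 10, 1 / fs)

        # Synthetic signal: the amplitude of gamma (70 Hz) is modulated by the
        # phase of alpha (10 Hz), plus background noise.
        alpha = np.sin(2 * np.pi * 10 * t)
        gamma = (1 + alpha) * 0.3 * np.sin(2 * np.pi * 70 * t)
        x = alpha + gamma + 0.5 * np.random.randn(t.size)

        def bandpass(sig, lo, hi):
            b, a = butter(3, [lo, hi], btype="band", fs=fs)
            return filtfilt(b, a, sig)

        # Alpha phase and gamma amplitude envelope via the Hilbert transform.
        phase = np.angle(hilbert(bandpass(x, 8, 12)))
        amp = np.abs(hilbert(bandpass(x, 60, 80)))

        # Magnitude of the average amplitude-weighted phase vector:
        # larger values indicate stronger coupling.
        mvl = np.abs(np.mean(amp * np.exp(1j * phase)))
        print(f"alpha-gamma coupling (mean vector length): {mvl:.3f}")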

    XXII International Conference on Mechanics in Medicine and Biology - Abstracts Book

    This book contains the abstracts presented at the XXII ICMMB, held in Bologna in September 2022. The abstracts are organized according to the sessions scheduled during the conference.

    Physical Diagnosis and Rehabilitation Technologies

    The book focuses on the diagnosis, evaluation, and assistance of gait disorders; all the papers have been contributed by research groups working on assistive robotics, instrumentation, and augmentative devices.

    Analysis of sensorimotor rhythms based on lower-limbs motor imagery for brain-computer interface

    Over recent years, significant advancements in the field of assistive technologies have been observed. Beyond congenital or diagnosed chronic disorders, one of the paramount needs that has urged researchers to contribute to the field is the rising number of people worldwide affected by accidents, natural calamities (due to climate change), or warfare, resulting in spinal cord injuries (SCI), neural disorders, or limb amputation, which impede a normal life. In addition, more than ten million people in the world are living with some form of handicap due to central nervous system (CNS) disorders, which is a precarious situation. Biomedical devices for rehabilitation have been a centre of research focus for many years. For people with lost motor control or amputation, but intact sensory control, deriving control signals from the source, i.e. electrophysiological signals, is vital for seamless control of assistive biomedical devices. Control signals, i.e. motion intentions, arise in the sensorimotor cortex of the brain and can be detected using invasive or non-invasive modalities. With the non-invasive modality, electroencephalography (EEG) is used to record these motion intentions, encoded in the electrical activity of the cortex, which are then deciphered to recognize the user's intent for locomotion. They are further transferred to the actuator or end effector of the assistive device for control purposes. This can be executed via brain-computer interface (BCI) technology. BCI is an emerging research field that establishes a real-time bidirectional connection between the human brain and a computer or output device. Amongst its diverse applications, neurorehabilitation delivering sensory feedback and brain-controlled biomedical devices for rehabilitation are the most popular. While there is substantial literature on the control of upper-limb assistive technologies via BCI, less is known about BCI control of lower-limb (LL) biomedical devices for navigation or gait assistance. The types of EEG signals compatible with an independent BCI are the oscillatory/sensorimotor rhythms (SMR) and event-related potentials (ERP). These signals have successfully been used in BCIs for navigation control of assistive devices. However, the ERP paradigm requires a voluminous setup for stimulus presentation to the user during operation of a BCI assistive device. Contrary to this, the SMR does not require a large setup for activation of cortical activity; it instead depends on motor imagery (MI) that is produced synchronously or asynchronously by the user. MI is a covert cognitive process, also termed kinaesthetic motor imagery (KMI), that elicits clearly after rigorous training trials, in the form of event-related desynchronization (ERD) or synchronization (ERS), depending on imagery activity or resting period. It usually comprises limb-movement tasks, but is not limited to them in a BCI paradigm. In order to produce detectable features that correlate with the user's intent, selection of the cognitive task is an important aspect of improving the performance of a BCI. MI used in BCI predominantly remains associated with the upper limbs, particularly the hands, due to the somatotopic organization of the motor cortex. The hand representation area is substantially large, in contrast to the anatomical location of the LL representation areas in the human sensorimotor cortex. The LL area is located within the interhemispheric fissure, i.e. between the mesial walls of both hemispheres of the cortex. This makes it arduous to detect EEG features prompted upon imagination of the LL. Detailed investigation of the ERD/ERS in the mu and beta oscillatory rhythms during left and right LL KMI tasks is therefore required, as the user's intent to walk is of paramount importance in everyday activity. This is an important area of research, followed by the improvement of existing rehabilitation systems that serve LL affectees. Though challenging, a solution to these issues is also imperative for the development of robust controllers that follow asynchronous BCI paradigms to operate LL assistive devices seamlessly.
    This thesis focuses on the investigation of cortical lateralization of ERD/ERS in the SMR, based on foot dorsiflexion KMI and knee extension KMI separately. The research infers the possibility of deploying these features in a real-time BCI by finding the maximum possible classification accuracy from machine learning (ML) models. The EEG signal is non-stationary, as it is characterized by individual-to-individual and trial-to-trial variability and a low signal-to-noise ratio (SNR), which is challenging. The data are high-dimensional with a relatively low number of samples available for fitting ML models, factors that made ML methods the tool of choice for analysing single-trial EEG data. Hence, the selection of an appropriate ML model for true detection of the class label, without the trade-off of overfitting, is crucial. The feature extraction part of the thesis consisted of testing the band-power (BP) and common spatial pattern (CSP) methods individually. The study focused on the synchronous BCI paradigm, to ensure the exhibition of SMR for the possibility of a practically viable control system in a BCI. For the left vs. right foot KMI, the objective was to distinguish the bilateral tasks, in order to use them as unilateral commands in a 2-class BCI for controlling/navigating a robotic/prosthetic LL for rehabilitation. The approach was similar for the left vs. right knee KMI.
    The research was based on four main experimental studies. In addition to these, it also includes a comparison of intra-cognitive tasks within the same limb, i.e. left foot vs. left knee and right foot vs. right knee tasks, respectively (Chapter 4). This adds another novel contribution based on the comparison of different tasks within the same LL, and provides a basis for increasing the dimensionality of control signals within one BCI paradigm, such as a BCI-controlled LL assistive device with multiple degrees of freedom (DOF) for restoration of locomotion function. This study was based on analysis of the statistically significant mu ERD feature using the BP feature extraction method. The first stage of the research comprised the left vs. right foot KMI tasks, wherein the ERD/ERS elicited in the mu and beta rhythms were analysed using the BP feature extraction method (Chapter 5). Three individual features, i.e. mu ERD, beta ERD, and beta ERS, were investigated on EEG topography and time-frequency (TF) maps and on the average time course of power percentage, using the common average reference and bipolar reference methods; a comparative study of the two references was drawn to infer the optimal method. This was followed by ML, i.e. classification of the three feature vectors (mu ERD, beta ERD, and beta ERS) using linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbour (KNN) algorithms, separately. Finally, multiple-comparison statistical tests were performed in order to predict the maximum possible classification accuracy amongst all paradigms for the most significant feature. All classifier models were supported by k-fold cross-validation and evaluation of the area under the receiver operating characteristic curve (AUC-ROC) for prediction of the true class label. The highest classification accuracy, 83.4% ± 6.72, was obtained with the KNN model for the beta ERS feature. The next study aimed at enhancing the classification accuracy obtained in the previous one, using similar cognitive tasks to those in Chapter 5 but a different methodology for feature extraction and classification. In this second study, ERD/ERS from the mu and beta rhythms were extracted using the CSP and filter bank common spatial pattern (FBCSP) algorithms, to optimize the individual spatial patterns (Chapter 6). This was followed by the ML process, for which supervised logistic regression (Logreg) and LDA were deployed separately. The maximum classification accuracy was 77.5% ± 4.23 with the FBCSP feature vector and the LDA model, with a maximum kappa coefficient of 0.55, in the moderate range of agreement between the two classes. The left vs. right foot discrimination results were nearly the same; however, the BP feature vector performed better than CSP. The third stage was based on the deployment of the novel cognitive task of left vs. right knee extension KMI. Analysis of the ERD/ERS in the mu and beta rhythms was done for verification of cortical lateralization via the BP feature vector (Chapter 7). As in Chapter 5, the analysis of ERD/ERS features was done on EEG topography and TF maps, followed by determination of the average time course and peak latency of feature occurrence. However, for this study only the mu ERD and beta ERS features were taken into consideration and the EEG recording method comprised only the common average reference, owing to the established results from the earlier foot study in Chapter 5, where the beta ERD feature showed a lower average amplitude. The LDA and KNN classification algorithms were employed. Unexpectedly, the left vs. right knee KMI yielded the highest accuracy of 81.04% ± 7.5 and an AUC-ROC of 0.84, strong enough to be used in a real-time BCI as two independent control features; this was obtained using the KNN model for the beta ERS feature. The final study followed the same paradigm as Chapter 6, but for the left vs. right knee KMI cognitive task (Chapter 8). Primarily this study aimed at enhancing the accuracy obtained in Chapter 7, using the CSP and FBCSP methods with the Logreg and LDA models respectively. Results were in accordance with those of the already established foot KMI study, i.e. the BP feature vector performed better than CSP. The highest classification accuracy of 70.00% ± 2.85, with a kappa score of 0.40, was obtained with Logreg using the FBCSP feature vector. The results support the utilization of ERD/ERS in the mu and beta bands as independent control features for discrimination of the bilateral foot or the novel bilateral knee KMI tasks, and the resulting classification accuracies imply that any 2-class BCI employing unilateral foot or knee KMI is suitable for real-time implementation.
    In conclusion, this thesis demonstrates possible EEG pre-processing, feature extraction and classification methods for instigating a real-time BCI from the conducted studies. Following this, the critical aspects of latency in information transfer rate, SNR, and the trade-off between dimensionality and overfitting need to be taken care of during the design of a real-time BCI controller. The thesis also highlights the need for consensus on the development of standardized cognitive tasks for MI-based BCI. Finally, the application of wireless EEG for portable assistance is essential, as it will contribute to laying the foundations for the development of an independent asynchronous BCI based on SMR.
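    The kind of decoding pipeline described in this abstract (spatial filtering of band-passed, epoched EEG followed by a linear classifier and k-fold cross-validation) can be sketched as follows, assuming MNE-Python and scikit-learn and using random placeholder arrays in place of real KMI epochs; it is an illustrative sketch, not the thesis's actual code or parameters. On real data, an FBCSP variant would apply the same CSP step per frequency band and concatenate the log-variance features before classification.

        # Illustrative CSP + LDA decoding sketch for 2-class motor imagery
        # (placeholder data; chance level is ~50% here).
        import numpy as np
        from mne.decoding import CSP
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline

        rng = np.random.default_rng(0)
        n_epochs, n_channels, n_times = 80, 32, 500

        # Placeholder epochs (band-pass filtered to mu/beta in a real
        # pipeline) and binary labels: 0 = left KMI, 1 = right KMI.
        X = rng.standard_normal((n_epochs, n_channels, n_times))
        y = rng.integers(0, 2, n_epochs)

        # CSP learns spatial filters that maximise the variance ratio between
        # the two classes; log-variance of the filtered signals is the feature.
        clf = Pipeline([
            ("csp", CSP(n_components=4, log=True)),
            ("lda", LinearDiscriminantAnalysis()),
        ])

        # 5-fold cross-validated accuracy, analogous to the accuracies
        # reported in the thesis.
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")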

    Biomedical Engineering

    Biomedical engineering is currently a relatively broad scientific area that has been constantly bringing innovations with the objective of supporting and improving all areas of medicine, such as therapy, diagnostics and rehabilitation. It also holds a strong position in the natural and biological sciences. In terms of application, biomedical engineering is present at almost all technical universities, some of which are dedicated to research and development in this area. The present book brings together selected outputs and results of research and development tasks, often supported by important international or European framework programmes or grant agencies. The knowledge and findings from the areas of biomaterials, bioelectronics, bioinformatics, biomedical devices and tools, and computer support in the processes of diagnostics and therapy are presented in a way that offers both basic information to the reader and specific outputs with possible further use in research and development.