
    Analysis of sensorimotor rhythms based on lower-limbs motor imagery for brain-computer interface

    Recent years have seen significant advances in the field of assistive technologies. Beyond congenital or diagnosed chronic disorders, a major driver of this research is the rising number of people worldwide affected by accidents, natural calamities (exacerbated by climate change), or warfare, resulting in spinal cord injuries (SCI), neurological disorders, or limb amputation that prevent them from living a normal life. In addition, more than ten million people worldwide live with some form of disability due to central nervous system (CNS) disorders. Biomedical devices for rehabilitation have therefore been a focus of research for many years. For people with lost motor control or amputation but intact sensory function, deriving control signals from their source, i.e. electrophysiological signals, is vital for seamless control of assistive biomedical devices. These control signals, i.e. motion intentions, arise in the sensorimotor cortex of the brain and can be detected using invasive or non-invasive modalities. Non-invasively, electroencephalography (EEG) is used to record the motion intentions encoded in the electrical activity of the cortex, which are deciphered to recognize the user's intent for locomotion and then transferred to the actuator or end effector of the assistive device for control. This is executed via brain-computer interface (BCI) technology. BCI is an emerging research field that establishes a real-time bidirectional connection between the human brain and a computer/output device. Amongst its diverse applications, neurorehabilitation delivering sensory feedback and brain-controlled biomedical devices for rehabilitation are the most popular. While there is substantial literature on BCI control of upper-limb assistive technologies, less is known about lower-limb (LL) control of biomedical devices for navigation or gait assistance via BCI.
    The types of EEG signals compatible with an independent BCI are the oscillatory/sensorimotor rhythms (SMR) and event-related potentials (ERP). These signals have been used successfully in BCIs for navigation control of assistive devices. However, the ERP paradigm requires a bulky setup for stimulus presentation to the user during operation of a BCI assistive device. In contrast, SMR does not require a large setup to evoke cortical activity; it instead depends on motor imagery (MI) produced synchronously or asynchronously by the user. MI is a covert cognitive process, also termed kinaesthetic motor imagery (KMI), that manifests clearly after rigorous training trials in the form of event-related desynchronization (ERD) or synchronization (ERS), depending on whether the user is imagining movement or at rest. In a BCI paradigm it usually comprises limb-movement tasks, although it is not limited to these. To produce detectable features that correlate with the user's intent, selection of the cognitive task is an important aspect of improving BCI performance. MI used in BCI remains predominantly associated with the upper limbs, particularly the hands, due to the somatotopic organization of the motor cortex. The hand representation area is substantially large, in contrast to the anatomical location of the LL representation areas in the human sensorimotor cortex: the LL area lies within the interhemispheric fissure, between the mesial walls of the two hemispheres. This makes it arduous to detect EEG features elicited by imagination of the LL. Detailed investigation of ERD/ERS in the mu and beta oscillatory rhythms during left and right LL KMI tasks is therefore required, as the user's intent to walk is of paramount importance in everyday activity. This is an important area of research, alongside improvement of existing rehabilitation systems that serve LL-affected users. Though challenging, solving these issues is also imperative for developing robust controllers that follow asynchronous BCI paradigms to operate LL assistive devices seamlessly.
    This thesis focuses on the investigation of cortical lateralization of ERD/ERS in the SMR, based on foot dorsiflexion KMI and knee extension KMI separately. The research assesses the possibility of deploying these features in a real-time BCI by finding the maximum classification accuracy achievable with machine learning (ML) models. EEG signals are non-stationary, characterized by individual-to-individual and trial-to-trial variability and a low signal-to-noise ratio (SNR), which is challenging; they are also high-dimensional, with relatively few samples available for fitting ML models. These factors have made ML methods the tool of choice for analysing single-trial EEG data. Hence, selecting an appropriate ML model that detects the true class label without overfitting is crucial. The feature extraction part of the thesis consisted of testing the band-power (BP) and common spatial pattern (CSP) methods individually. The study focused on the synchronous BCI paradigm, to ensure the expression of SMR for a practically viable BCI control system. For left vs. right foot KMI, the objective was to distinguish the bilateral tasks in order to use them as unilateral commands in a 2-class BCI for controlling/navigating a robotic/prosthetic LL for rehabilitation; the approach for left vs. right knee KMI was similar. The research was based on four main experimental studies. In addition, it includes a comparison of intra-cognitive tasks within the same limb, i.e. left foot vs. left knee and right foot vs. right knee (Chapter 4). This is a further novel contribution based on the comparison of different tasks within the same LL, providing a basis for increasing the dimensionality of control signals within one BCI paradigm, such as a BCI-controlled LL assistive device with multiple degrees of freedom (DOF) for restoring locomotion. That study analysed the statistically significant mu ERD feature using the BP feature extraction method. The first stage of the research comprised the left vs. right foot KMI tasks, wherein the ERD/ERS elicited in the mu and beta rhythms were analysed using the BP feature extraction method (Chapter 5). Three individual features, i.e. mu ERD, beta ERD, and beta ERS, were investigated on EEG topography and time-frequency (TF) maps and on the average time course of power percentage, using the common average reference and bipolar reference methods. A comparative study of the two references was made to infer the optimal method. This was followed by machine learning.
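    As an illustration of the band-power ERD/ERS approach described above (not the thesis's exact pipeline), the sketch below computes ERD/ERS as the percentage change of band power relative to a pre-cue baseline; the epoch layout, band edges, and baseline window are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpower_erd(epochs, fs, band=(8.0, 12.0), baseline=(0.0, 2.0)):
    """ERD/ERS (%) per channel and sample, relative to a baseline window.

    epochs   : array, shape (n_trials, n_channels, n_samples)
    fs       : sampling rate in Hz
    band     : frequency band of interest (e.g. mu = 8-12 Hz)
    baseline : baseline window in seconds within the epoch
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)      # band-pass each trial
    power = filtered ** 2                            # instantaneous power
    avg_power = power.mean(axis=0)                   # average over trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = avg_power[:, i0:i1].mean(axis=-1, keepdims=True)
    return 100.0 * (avg_power - ref) / ref           # negative = ERD, positive = ERS
```

    In practice the resulting time course is usually smoothed before reading off peak ERD/ERS latencies, as in the topography and TF analyses mentioned above.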
    The three feature vectors (mu ERD, beta ERD, and beta ERS) were classified separately using linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbour (KNN) algorithms. Finally, multiple-comparison-corrected statistical tests were performed to identify the maximum classification accuracy amongst all paradigms for the most significant feature. All classifier models were supported by k-fold cross-validation and evaluation of the area under the receiver operating characteristic curve (AUC-ROC) for prediction of the true class label. The highest classification accuracy, 83.4% ± 6.72, was obtained with the KNN model for the beta ERS feature. The next study aimed to improve on this accuracy using cognitive tasks similar to those in Chapter 5 but a different feature extraction and classification methodology. In this second study, ERD/ERS in the mu and beta rhythms were extracted using the CSP and filter bank common spatial pattern (FBCSP) algorithms to optimize the individual spatial patterns (Chapter 6), followed by ML with supervised logistic regression (Logreg) and LDA deployed separately. The maximum classification accuracy was 77.5% ± 4.23 with the FBCSP feature vector and LDA model, with a maximum kappa coefficient of 0.55, in the moderate range of agreement between the two classes. The left vs. right foot discrimination results were nearly the same across studies; however, the BP feature vector performed better than CSP. The third stage deployed the novel cognitive task of left vs. right knee extension KMI. ERD/ERS in the mu and beta rhythms was analysed to verify cortical lateralization via the BP feature vector (Chapter 7). As in Chapter 5, the ERD/ERS features were analysed on EEG topography and TF maps, followed by determination of the average time course and peak latency of feature occurrence. However, only the mu ERD and beta ERS features were considered, and the EEG recording used only the common average reference, because the earlier foot study in Chapter 5 had shown lower average amplitude for the beta ERD feature. The LDA and KNN classification algorithms were employed. Unexpectedly, left vs. right knee KMI yielded the highest accuracy of 81.04% ± 7.5 and an AUC-ROC of 0.84, strong enough to be used in a real-time BCI as two independent control features; this was obtained with the KNN model for the beta ERS feature. The final study followed the same paradigm as Chapter 6, but for the left vs. right knee KMI cognitive task (Chapter 8). It primarily aimed at improving the accuracy from Chapter 7, using the CSP and FBCSP methods with the Logreg and LDA models, respectively. The results accorded with the established foot KMI study, i.e. the BP feature vector performed better than CSP. The highest classification accuracy, 70.00% ± 2.85 with a kappa score of 0.40, was obtained with Logreg using the FBCSP feature vector. The results support the use of ERD/ERS in the mu and beta bands as independent control features for discriminating bilateral foot or the novel bilateral knee KMI tasks, and the resulting classification accuracies indicate that a 2-class BCI employing unilateral foot or knee KMI is suitable for real-time implementation.
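    A minimal sketch of the classification and validation stage just described, using scikit-learn; the feature matrix, labels, fold count, and classifier settings are placeholders rather than the thesis's exact configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

# X: feature vectors (e.g. band-power ERD/ERS per channel), y: 0 = left, 1 = right
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))           # placeholder features
y = rng.integers(0, 2, size=120)         # placeholder labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf", probability=True),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: accuracy {acc.mean():.3f} ± {acc.std():.3f}, AUC-ROC {auc.mean():.3f}")
```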
    In conclusion, this thesis demonstrates, from the conducted studies, EEG pre-processing, feature extraction, and classification methods suitable for instantiating a real-time BCI. The critical aspects of latency in information transfer rate, SNR, and the trade-off between dimensionality and overfitting need to be addressed when designing a real-time BCI controller. The thesis also highlights the need for consensus on standardized cognitive tasks for MI-based BCI. Finally, the application of wireless EEG for portable assistance is essential, as it will help lay the foundations for independent asynchronous BCIs based on SMR.
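    Since the studies above contrast band-power features with CSP and FBCSP, a minimal sketch of a two-class CSP filter computation is given here; it follows the standard textbook formulation (averaged class covariances and a generalized eigen-decomposition), and the trial arrays are placeholders, not the thesis's data.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """CSP spatial filters for two classes of band-passed EEG trials.

    trials_a, trials_b : arrays of shape (n_trials, n_channels, n_samples)
    Returns a (2*n_pairs, n_channels) filter matrix W.
    """
    def mean_cov(trials):
        # Channel covariance per trial, averaged; regularization is often added in practice.
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized symmetric eigenproblem Ca w = lambda (Ca + Cb) w, eigenvalues ascending
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    picks = np.r_[0:n_pairs, -n_pairs:0]        # most discriminative filters from both ends
    return eigvecs[:, picks].T

def csp_features(trials, W):
    """Log-variance features of spatially filtered trials."""
    Z = np.einsum("fc,ncs->nfs", W, trials)     # apply filters to every trial
    var = Z.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))
```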

    An Occupational Therapist\u27s Guide for Rehabilitative Driving with Traumatic Brain Injured Clients

    Traumatic brain injuries are devastating occurrences, accounting for nearly 10 million injuries each year, 2 million of which occur in the United States. As these individuals progress through rehabilitation and begin to acquire independence once again, they look for opportunities to reintegrate within the communities in which they live. Driving has been identified as a monumental stage of rehabilitation and is a key way to experience the community for individuals after a traumatic brain injury. This scholarly project was conducted to help occupational therapists address driving rehabilitation with traumatic brain injured clients and to ease some of the problems that inexperienced occupational therapists face with rehabilitative driving. The problems addressed include the limited information available to inexperienced occupational therapists as they deal with rehabilitative driving. Rehabilitative driving is an emerging field in occupational therapy. Many therapists will not address driving on a full-time basis and may not have driving specializations. This guide will help those limited by inexperience to approach driving concerns with traumatic brain injured clients. A comprehensive literature review was conducted to support the outcome of the developed product. This research suggests that rehabilitative driving resources are needed to increase and support the evidence base on driving. The development of additional resources will provide increased access to rehabilitative driving for inexperienced occupational therapists. As the literature review progressed, it also became evident that traumatic brain injured clients need rehabilitative driving services specific to their diagnosis. Significant findings from the literature review include that the deficits currently addressed by occupational therapists are similar to needs related to driving, that clients view driving as a monumental stage in recovery, and that occupational therapists need more guidelines and resources to meet the driving needs of their traumatic brain injured clients. To help address these findings, a product has been developed that specifically addresses the driving concerns of traumatic brain injured clients. Included in this product are tools and resources to ease the stress experienced by inexperienced occupational therapists addressing rehabilitative driving. Specific evaluation tools have been developed for both on-road and off-road evaluations. The off-road evaluation tool is a semi-structured interview that addresses specific details related to driving and the history of the client's driving experiences. The on-road evaluation provides a checklist that will aid in the behind-the-wheel driving assessment.

    Drivers’ response to attentional demand in automated driving

    Vehicle automation can make driving safer; it can compensate for the human impairments recognized as the leading cause of crashes. Vehicle automation has become a central topic in transportation and human factors research. This thesis addresses some unresolved challenges on how to guide attention for safe use of automation and how to improve the design of automation to account for humans' abilities and limitations. Specifically, this thesis investigated how driver attention changed with automation and the driving situation. The objective was to inform the design of vehicle systems and develop design knowledge to support safe driving. A novelty of this thesis was the use of real-world driving data and Bayesian methods (improved statistical modeling techniques). The analysis of driver behavior was based on data collected in naturalistic driving studies (to study the effect of assistive automation) and in a simulator experiment (to study the effect of unsupervised automation). Driver behavior was examined with measures of visual and motor response, together with contextual information on the driving situation. The results show that assistive automation affected driver attention in real-world driving. In general, drivers devoted less attention to the forward path with automation than without. However, driver attention was sensitive to the presence of other traffic and to changes in illumination (variations in the surrounding environment that increased the uncertainty of the driving situation), and it was elicited by visual, audio, and vestibular-kinesthetic-somatosensory information (perceptual cues) alerting to an impending conflict. Driver response to a critical situation with unsupervised automation had a reflexive component (glance on-path, hands on wheel, and feet on pedals) and a planned component (decision and execution of an evasive maneuver). Warnings primarily alerted attention rather than triggering an intervention. Expectation, which changed over time depending on experience, affected driver response substantially. This thesis found that the safety implications of diverting attention away from the driving situation need to be interpreted in relation to the characteristics and criticality of the driving situation (driving context) and need to consider the reduction of risk exposure due to automation (e.g., headway maintenance and collision warnings). Drivers were, for example, successful at changing their behavior in the presence of other vehicles and in different light conditions independently of automation. If drivers are not attentive at critical points, warnings are effective for triggering a quick shift of attention to the driving task in preparation for an evasive action. The results improve on those of earlier studies by providing a comprehensive assessment of driver attentional response in routine driving and critical situations. They can support evidence-based recommendations (inattention guidelines) and be used as a reference for driver modeling and vehicle systems development.
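    As a toy illustration of the Bayesian flavour of analysis mentioned above (not the thesis's actual model), the snippet below performs a conjugate Beta-Binomial update of the probability that a glance is directed off the forward path, comparing driving with and without automation; the glance counts are invented for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical glance counts (off-path glances, total glances) -- illustrative only
counts = {"manual": (140, 620), "automated": (215, 600)}

def posterior(off, total, a=1.0, b=1.0):
    """Beta(a, b) prior updated with binomial glance data -> Beta posterior."""
    return stats.beta(a + off, b + (total - off))

posts = {name: posterior(*c) for name, c in counts.items()}
for name, post in posts.items():
    lo, hi = post.interval(0.95)
    print(f"P(off-path glance | {name}): mean {post.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

# Posterior probability that off-path glancing is more likely with automation
diff = posts["automated"].rvs(100_000) - posts["manual"].rvs(100_000)
print(f"P(automated > manual) ~ {(diff > 0).mean():.3f}")
```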

    Vision Therapy Promotional Packet: Through the Eyes of Traumatic Brain Injury/Acquired Brain Injury: An informational Resource on the Role of OT

    Very few in the health care professions, including head trauma rehabilitation centers, are adequately aware of the visual problems resulting from Traumatic Brain Injury (TBI)/Acquired Brain Injury (ABI) and their visual-perceptual consequences. These visual deficits may lead to impaired functioning in the person's daily activities and roles because vision affects all other functions (braininjuries.org, ¶ 1). A few examples of activities of daily living (ADLs) and instrumental activities of daily living (IADLs) that may be impacted by visual deficits include, but are not limited to, driving, eating, dressing, leisure participation (movies, reading, sports), and employment. Unfortunately, this creates a gap in rehabilitative services, resulting in incomplete treatment and frustration for the patient, family, and treatment team (braininjuries.org, ¶ 1). Occupational Therapy's basic premise is to increase the independence of an individual in their daily activities and roles. An occupational therapist's (OT's) training in the assessment, design, and provision of effective interventions can be instrumental in rehabilitating vision deficits and addressing their impact on daily living. OTs are trained in evaluation and treatment design specific to individuals diagnosed with TBI/ABI, including cognitive, visual-perceptual, physical, and psychological aspects in relation to activities of daily living. Occupational therapists can be important members of the multidisciplinary team serving this population. Unfortunately, both occupational therapists and other members of the multidisciplinary team are not always clear on the role and protocol of OTs in providing treatment interventions specific to TBI/ABI and visual deficits. A concentrated literature review was conducted to identify current standard best practices and protocols, and the potential role of OT was identified. TBI/ABI visual-perceptual deficits were identified and compared to OT training to ensure OTs are qualified to meet the unique needs of this population. The roles of other multidisciplinary team members were explored to identify possible areas OT could address, or where an OT's specialized training could strengthen the rehabilitative treatment intervention. The findings from this review demonstrate that occupational therapists have the proficiency and competence to evaluate and describe the functional ability or disability of the acquired or traumatic brain injured client as a whole. The outcome of the project is a promotional packet, The Role of Occupational Therapy in Vision Therapy, which includes: 1. an educational brochure, Through the Eyes of Traumatic Brain Injury/Acquired Brain Injury: The Role of Occupational Therapy in Vision Therapy, and 2. a more in-depth educational packet entitled Through the Eyes of Traumatic Brain Injury/Acquired Brain Injury: An Informational Resource on the Role of OT. This promotional packet is intended as a means for occupational therapists to educate others further on the role of occupational therapy in providing vision therapy to the acquired and traumatic brain injured populace.

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is nearly impossible to program decision-making logic for every eventuality manually. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, the application of these techniques within safety-critical systems such as driverless cars remains scarce. Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road or highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks in intelligent mobility. Specifically, it investigates multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions. To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It uses an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which require large datasets to generalize to unfamiliar surroundings. The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence.
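    A minimal sketch of the kind of uncertainty-driven online update loop the abstract describes, assuming a probabilistic free-space classifier over camera features with the ultrasound range reading used as a weak supervisory signal; the feature dimensions, thresholds, and class labels are placeholders, not the thesis's pipeline.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Online free-space classifier over per-cell camera features (placeholder dimensionality)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])                                # 0 = obstacle, 1 = free space
model.partial_fit(np.zeros((2, 32)), classes, classes=classes)   # initialise the model

def ultrasound_pseudo_label(range_m, near=0.5, far=2.0):
    """Weak label from the range sensor; None when the reading is ambiguous."""
    if range_m < near:
        return 0
    if range_m > far:
        return 1
    return None

def online_step(cam_features, range_m, uncertainty_band=(0.35, 0.65)):
    """Predict, then update only on uncertain samples with a confident weak label."""
    p_free = model.predict_proba(cam_features.reshape(1, -1))[0, 1]
    label = ultrasound_pseudo_label(range_m)
    lo, hi = uncertainty_band
    if label is not None and lo < p_free < hi:            # active selection of samples
        model.partial_fit(cam_features.reshape(1, -1), [label])
    return p_free
```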
    It is imperative within the spectrum of intelligent mobility that an autonomous vehicle be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging sensors. The proposed algorithm leverages multimodality by using the camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data are converted to a Fisher Vector representation before being classified by a deep convolutional neural network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%. When compared to an alternative point-cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, as well as the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
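    As an illustration of the Fisher Vector encoding step mentioned above (a reduced, mean-gradient form rather than the thesis's exact implementation), the sketch below fits a small Gaussian mixture vocabulary on pooled local point descriptors and encodes one point cloud; the descriptor dimensionality and mixture size are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_vocabulary(descriptor_bank, n_components=16, seed=0):
    """Fit a diagonal-covariance GMM 'vocabulary' on pooled local descriptors (n, d)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    gmm.fit(descriptor_bank)
    return gmm

def fisher_vector(descriptors, gmm):
    """Mean-gradient Fisher Vector of one point cloud's local descriptors (n, d)."""
    q = gmm.predict_proba(descriptors)                                    # (n, K) soft assignments
    diff = (descriptors[:, None, :] - gmm.means_[None]) / np.sqrt(gmm.covariances_)[None]
    grad_mu = (q[:, :, None] * diff).sum(axis=0)                          # (K, d) mean gradients
    grad_mu /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    fv = grad_mu.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                                # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                              # L2 normalisation
```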

    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for millions of people experiencing transportation-limiting visual impairments worldwide. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here includes ten studies, three published papers, and three submitted papers in high-impact outlets that together address accessibility across the complete trip of transportation. The first paper began with a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. The results guided work on pre-journey ridesharing needs among BVI users, which were addressed in paper two via a survey with (n=90) transit service drivers, interviews with (n=12) BVI users, and prototype design evaluations with (n=6) users, all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with (n=12) users, presented in paper three, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with (n=187) BVI users. Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness with (n=14) participants (paper five), leading to a novel gestural-audio interface with (n=23) users (paper six). Strong support from users across these studies suggested positive outcomes in pursuit of actionable situational awareness and control. Cumulative results from this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV BVI accessibility to date. By considering both pre-journey and in-vehicle accessibility, the results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.

    Annotated Bibliography: Anticipation
