
    Phase Image Texture Analysis for Motion Detection in Diffusion MRI (PITA-MDD)

    Purpose: Pronounced spin-phase artifacts appear in diffusion-weighted imaging (DWI) with only minor subject motion. While DWI data corruption is often identified as signal dropout in diffusion-weighted (DW) magnitude images, DW phase images may have higher sensitivity for detecting subtle subject motion. Methods: This article describes a novel method that returns a metric of subject motion, computed using an image texture analysis of the DW phase image. This Phase Image Texture Analysis for Motion Detection in dMRI (PITA-MDD) method is computationally fast and reliably detects subject motion in diffusion-weighted images. A threshold of the motion metric was identified to remove motion-corrupted slices, and the effect of removing corrupted slices was assessed on the reconstructed fractional anisotropy (FA) maps and fiber tracts. Results: Using a motion-metric threshold to remove motion-corrupted slices yields superior fiber tracts and fractional anisotropy maps. Compared with a state-of-the-art magnitude-based motion correction method, PITA-MDD detected comparable corrupted slices in a more computationally efficient manner. Conclusion: In this study, we evaluated the use of DW phase images to detect motion corruption. The proposed method can be a robust and fast alternative for automatic motion detection in the brain, with applications to prospective motion correction, real-time feedback for data quality control during scanning, and quality assessment after data acquisition.
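The abstract does not specify the texture features PITA-MDD computes, so the following is only a minimal sketch of the threshold-and-discard idea, assuming a stand-in metric (the spread of the wrapped spatial phase gradient) rather than the paper's actual texture analysis:

```python
import numpy as np

def phase_motion_metric(phase_slice):
    """Toy motion score for a DW phase slice (values in radians).

    Stand-in for PITA-MDD's texture metric: motion-corrupted slices
    show rapid spatial phase variation, which inflates the spread of
    the wrapped phase gradient.
    """
    # Wrapped finite differences along both axes (jumps kept in (-pi, pi])
    dy = np.angle(np.exp(1j * np.diff(phase_slice, axis=0)))
    dx = np.angle(np.exp(1j * np.diff(phase_slice, axis=1)))
    return float(np.hypot(dy.std(), dx.std()))

def flag_corrupted(slices, threshold):
    """Indices of slices whose motion metric exceeds the threshold."""
    return [i for i, s in enumerate(slices)
            if phase_motion_metric(s) > threshold]
```

A smooth phase ramp scores near zero, while a slice with randomized phase scores high, so a fixed threshold separates the two, mirroring the slice-rejection step described in the abstract.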

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments, owing to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference, and this thesis established that arm pose also changes the measured signal. A new method of fusing IMU and MMG data is introduced to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that naturally indicates intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be used as a controller as ubiquitous computing systems become more prevalent and as the desire for a simple, universal interface increases. Such systems have the potential to significantly improve the quality of life of prosthetic users and others.
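The thesis compares Linear Discriminant Analysis and Support Vector Machine classifiers on MMG features; as a minimal illustration of the LDA half only, the sketch below implements the LDA decision rule from scratch on synthetic stand-in feature vectors (the real MMG data, feature extraction, and the reported 94.5% figure are not reproduced here):

```python
import numpy as np

def lda_fit(X, y):
    """Fit LDA: per-class means plus a pooled within-class covariance."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    centered = np.vstack([X[y == c] - m for c, m in zip(classes, means)])
    pooled = centered.T @ centered / (len(X) - len(classes))
    return classes, means, np.linalg.inv(pooled)

def lda_predict(model, X):
    """Assign each sample to the nearest class mean in Mahalanobis
    distance under the pooled covariance, which (with equal priors)
    is exactly the LDA decision rule."""
    classes, means, icov = model
    diff = X[:, None, :] - means[None, :, :]
    maha = np.einsum('ncd,de,nce->nc', diff, icov, diff)
    return classes[maha.argmin(axis=1)]
```

On well-separated synthetic "gesture" clusters this classifier reaches near-perfect accuracy; the thesis's numbers reflect much harder real sensor data.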

    Combining task-evoked and spontaneous activity to improve pre-operative brain mapping with fMRI

    Noninvasive localization of brain function is used to understand and treat neurological disease, exemplified by pre-operative fMRI mapping prior to neurosurgical intervention. The principal approach for generating these maps relies on brain responses evoked by a task and, despite known limitations, has dominated clinical practice for over 20 years. Recently, pre-operative fMRI mapping based on correlations in spontaneous brain activity has been demonstrated; however, this approach has its own limitations and has not seen widespread clinical use. Here we show that spontaneous and task-based mapping can be performed together using the same pre-operative fMRI data, provide complementary information relevant for functional localization, and can be combined to improve identification of eloquent motor cortex. The accuracy, sensitivity, and specificity of our approach are quantified through comparison with electrical cortical stimulation mapping in eight patients with intractable epilepsy. Broad applicability and reproducibility of our approach are demonstrated through prospective replication in an independent dataset of six patients from a different center. In both cohorts and in every individual patient, we see a significant improvement in signal-to-noise ratio and mapping accuracy independent of threshold, quantified using receiver operating characteristic curves. Collectively, our results suggest that modifying the processing of fMRI data to incorporate both task-based and spontaneous activity significantly improves functional localization in pre-operative patients. Because this method requires no additional scan time or modification of conventional pre-operative data acquisition protocols, it could have widespread utility.
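The threshold-independent ROC evaluation described above can be sketched with a rank-based AUC computation. The simulated "task", "spontaneous", and "combined" maps below are illustrative assumptions, not the authors' pipeline; they only show why averaging two noisy maps of the same underlying signal raises AUC against a stimulation-derived gold standard:

```python
import numpy as np

def auc(scores, labels):
    """Threshold-independent ROC area via the rank-sum
    (Mann-Whitney U) identity; assumes continuous, tie-free scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
labels = rng.random(2000) < 0.2            # stand-in for ECS-positive sites
signal = labels.astype(float)
task = signal + rng.normal(0, 1.0, 2000)   # noisy task-evoked map
rest = signal + rng.normal(0, 1.0, 2000)   # noisy spontaneous-activity map
combined = (task + rest) / 2               # averaging halves the noise variance
```

With independent noise of equal variance, the combined map's effect size grows by a factor of sqrt(2), so its ROC curve dominates the task-only curve at every threshold, which is the kind of threshold-independent gain the abstract reports.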

    Cerebral representations of the speech articulators

    In order to localize the cerebral regions involved in articulatory control processes, ten subjects were examined using functional magnetic resonance imaging while executing lip, tongue, and jaw movements. Although the three motor tasks activated a set of common brain areas classically involved in motor control, distinct movement representation sites were found in the motor cortex. These results support and extend previous brain imaging studies by demonstrating a sequential dorsoventral somatotopic organization of the lips, jaw, and tongue in the motor cortex.

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier. AR complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, though it is not free of human-factors issues and other restrictions. AR applications also demand less time and effort to build, because the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medicine, biology, and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Design of Participatory Virtual Reality System for visualizing an intelligent adaptive cyberspace

    The concept of 'Virtual Intelligence' is proposed as an intelligent adaptive interaction between a simulated 3-D dynamic environment and the 3-D dynamic virtual image of the participant in the cyberspace created by a virtual reality system. A system design for such interaction is realised using only a stereoscopic optical head-mounted LCD display with an ultrasonic head tracker, a pair of gesture-controlled fibre-optic gloves, and a speech recognition and synthesis device, all connected to a Pentium computer. A 3-D dynamic environment is created by physically based modelling and rendering in real time, and by modification of existing object description files by a fractals-based Morph software. It is supported by an extensive library of audio and video functions, and of functions characterising the dynamics of various objects. The multimedia database files so created are retrieved or manipulated by intelligent hypermedia navigation and intelligent integration with existing information. Speech commands control the dynamics of the environment and the corresponding multimedia databases. The concept of a virtual camera developed by Zeltzer as well as Thalmann and Thalmann, as automated by Noma and Okada, can be applied to dynamically relate the orientation and actions of the participant's virtual image to the simulated environment. Using the fibre-optic gloves, gesture-based commands are given by the participant to control his 3-D virtual image via a gesture language. Optimal estimation methods and dataflow techniques enable synchronisation between the commands of the participant, expressed through the gesture language, and his 3-D dynamic virtual image. Using a framework, developed earlier by the author, for adaptive computational control of distributed multimedia systems, the data access required for the environment as well as for the virtual image of the participant can be endowed with adaptive capability.

    Filter Design and Consistency Evaluation for 3D Tongue Motion Estimation using Harmonic Phase Analysis Method

    Understanding patterns of tongue motion in speech using 3D motion estimation is challenging. Harmonic phase analysis has been used to perform noninvasive tongue motion and strain estimation using tagged magnetic resonance imaging (MRI). Two main contributions are made in this thesis. First, the filtering process used to produce the harmonic phase images used for tissue tracking influences the estimation accuracy. For this work, we evaluated different filtering approaches and propose a novel high-pass filter for volumes tagged in individual directions. Testing was done using an open benchmarking dataset and synthetic images obtained from a mechanical model. Second, datasets with inconsistent motion need to be excluded to yield meaningful motion estimation. For this work, we used a tracking-based method to evaluate the motion consistency between datasets and give a strategy to identify inconsistent datasets. Experiments involving two normal subjects were performed to validate our method. In summary, the first contribution (3D filter design) improves motion estimation accuracy, and the second (the motion consistency test) ensures the meaningfulness of the estimation results.
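The harmonic-phase filtering being evaluated can be illustrated with a generic HARP-style sketch: tagging modulates the anatomy with a spatial cosine, placing harmonic peaks at plus/minus the tag frequency in k-space, and a window around the positive peak isolates one harmonic whose angle is the phase used for tissue tracking. The Gaussian window and the function name here are this sketch's assumptions, not the high-pass filter the thesis proposes:

```python
import numpy as np

def harp_phase(tagged, tag_freq, bandwidth):
    """Extract a harmonic phase image from a slice tagged along x.

    tag_freq and bandwidth are in cycles/pixel. The spectrum is
    windowed around the positive tag harmonic; the angle of the
    inverse FFT of the windowed spectrum is the harmonic phase.
    """
    F = np.fft.fftshift(np.fft.fft2(tagged))
    ny, nx = tagged.shape
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)),
                         indexing="ij")
    # Gaussian window centered on the positive harmonic (a sketch choice)
    window = np.exp(-((kx - tag_freq) ** 2 + ky ** 2) / (2 * bandwidth ** 2))
    harmonic = np.fft.ifft2(np.fft.ifftshift(F * window))
    return np.angle(harmonic)
```

For a pure cosine tag pattern, the recovered phase is a wrapped linear ramp whose gradient equals 2*pi times the tag frequency, which is what makes the phase usable as a material coordinate for tracking.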

    Three Dimensional Tissue Motion Analysis from Tagged Magnetic Resonance Imaging

    Motion estimation of soft tissues during organ deformation has been an important topic in medical imaging studies. Its applications involve a variety of internal and external organs, including the heart, the lung, the brain, and the tongue. Tagged magnetic resonance imaging has been used for decades to observe and quantify motion and strain of deforming tissues. It places temporary noninvasive markers, so-called "tags", in the tissue of interest that deform together with the tissue during motion, producing images that carry motion information in the deformed tag patterns. These images can later be processed using phase-extraction algorithms to achieve motion estimation and strain computation. In this dissertation, we study three-dimensional (3D) motion estimation and analysis using tagged magnetic resonance images, with applications focused on speech studies and traumatic brain injury modeling. Novel algorithms are developed to assist tagged motion analysis. First, a pipeline of methods, TMAP, is proposed to compute 3D motion from tagged and cine images of the tongue during speech. TMAP produces an estimate of motion along with a multi-subject analysis of motion pattern differences between healthy control subjects and post-glossectomy patients. Second, an enhanced 3D motion estimation algorithm, E-IDEA, is proposed. E-IDEA tackles incompressible motion both in the internal tissue region and on the tissue boundaries, reducing boundary errors and yielding a motion estimate that is more accurate overall. Third, a novel 3D motion estimation algorithm, PVIRA, is developed. Based on image registration and tracking, PVIRA is a faster and more robust method that performs phase extraction in a novel way. Lastly, a method to reveal muscle activity using strain in the line of action of muscle fiber directions is presented. It is a first step toward relating motion production to individual muscles and provides a new tool for future clinical and scientific use.
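The last contribution, strain in the line of action of a fiber direction, has a standard continuum-mechanics formulation: from the deformation gradient F, the Green-Lagrange strain tensor is E = (F^T F - I)/2, and the normal strain felt by a fiber with unit direction f is f^T E f. The sketch below shows only this projection step; how the dissertation obtains F from tag tracking is not reproduced:

```python
import numpy as np

def fiber_strain(F, fiber):
    """Normal (Green-Lagrange) strain along a muscle-fiber direction.

    F     : 3x3 deformation gradient at a tissue point.
    fiber : fiber direction vector (normalized internally).
    """
    f = np.asarray(fiber, float)
    f = f / np.linalg.norm(f)
    E = 0.5 * (F.T @ F - np.eye(3))  # Green-Lagrange strain tensor
    return float(f @ E @ f)
```

For a 10% uniaxial stretch along x, a fiber aligned with x reads a strain of (1.1^2 - 1)/2 = 0.105, while a perpendicular fiber reads zero, which is what makes this projection useful for attributing deformation to individual muscles.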

    Genetic and environmental influences on the variability and uniqueness of human brain activations: a twin family design based on brain imaging data from the Human Connectome Project

    Human behaviour is both singular and universal. Singularity is believed to be mainly due to life trajectories unique to each individual (influenced, among other things, by culture), whereas universality would stem from a universal nature grounded in a panhuman genome. Unravelling the influences of nature and nurture on human behaviour is the Holy Grail of biological anthropology. I approach this issue by exploring genetic and environmental influences on the neuropsychological underpinnings of behaviour. In particular, I test the hypothesis that the singularity and universality of human behaviour are also observed at the psychological level through the exploration of their neurobiological substrate, and that this substrate has both genetic and environmental sources. Using functional magnetic resonance imaging (fMRI) data from 862 participants of the Human Connectome Project (HCP), I analyze brain activations related to seven socio-cognitive tasks covering language, memory, risk taking, logic, emotions, motor skills, and social reasoning. After grouping subjects according to the similarity of their brain activation patterns (i.e., their neurobiological subtypes), I estimate the genetic and environmental influences on the inter-individual variability of these subtypes.
The results demonstrate that subjects do cluster according to the similarity of their brain activation maps for a given socio-cognitive task, reflecting both the singular and universal character of the neural correlates of an observable behaviour. The inter-individual variability within these cerebral groupings shows both genetic (heritability) and environmental (environmentality) effects, whose respective magnitudes vary with the nature of the task. Moreover, the uncovered neurobiological subtypes are associated with the behavioural and performance measures collected during the tasks under study. Finally, the neurobiological subtypes across the seven tasks share common genetic bases. Overall, these results support the notion that human behaviour, as well as the neurobiological processes underlying it, are phenotypes in the same way as morphological or physiological traits, i.e., the result of the joint expression of genetic (nature) and environmental (nurture) influences.
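Heritability and environmentality in a classical twin design are often introduced through Falconer's correlation formulas. The thesis presumably uses fuller ACE modelling of the HCP twin structure, so the following is only a minimal sketch of the underlying idea: comparing monozygotic (MZ) and dizygotic (DZ) twin-pair correlations of a phenotype yields h2 = 2(rMZ - rDZ), c2 = 2rDZ - rMZ, and e2 = 1 - rMZ:

```python
import numpy as np

def falconer_ace(mz_pairs, dz_pairs):
    """Classical twin-design variance decomposition (Falconer).

    mz_pairs, dz_pairs : (n, 2) arrays, one phenotype value per twin.
    Returns (h2, c2, e2): additive-genetic, shared-environment, and
    unique-environment variance fractions.
    """
    r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
    r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
    return 2 * (r_mz - r_dz), 2 * r_dz - r_mz, 1 - r_mz
```

On simulated data where the true decomposition is h2 = 0.5, c2 = 0.3, e2 = 0.2 (MZ pairs share all genetic variance, DZ pairs half of it), the estimates recover these fractions up to sampling noise.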