
    Investigating the Impact of Susceptibility Artifacts on Adjacent Tumors in PET/MRI through Simulated Tomography Experiments

    For quantitative PET imaging, attenuation correction (AC) is mandatory. Currently, all major vendors of hybrid PET/MRI systems apply a segmentation-based approach to compute a Dixon AC-map from fat and water images derived from in- and opposed-phase MR images. Changes in magnetic susceptibility pose major problems for MRI and may lead to artifacts resulting in tissue misclassification in the segmented AC-map. Cases have been reported where the liver was misidentified as lung tissue due to iron overload, e.g. from hemochromatosis or iron oxide MR contrast agents, resulting in severe underestimation in PET quantification. In this thesis, simulated tomography experiments were conducted to investigate the impact of susceptibility artifacts on adjacent tumors, focusing on the misclassification of liver tissue as lung tissue. A digital phantom was programmed, and synthetic tumors and artifacts were introduced into a realistic PET/MRI patient dataset. The data were reconstructed with attenuation maps both with and without artifacts to compute the relative error (RE) in tumor uptake. It was shown that relevant errors can be introduced in tumors adjacent to the artifact. The RE was found to follow a strong inverse-square relationship with the distance (d) between the center points of a tumor and an artifact. Further, because the RE was known to be proportional to the volume (V) of misclassified tissue, it was shown that a linear equation describing the RE can be obtained using only V and d. However, this assumes similar information, i.e. activity and attenuation, along the common lines of response (LORs) of the artifact and the tumor. A correction method was developed to correct lung-liver misclassifications. The proposed method uses the already acquired opposed-phase Dixon images, which are less sensitive to susceptibility changes. It successfully corrected 96% of misclassified tissue down to a 50% MR signal reduction from the liver.
The method benefits from using already acquired data to correct the artifacts, and may be made fully automatic to function in real time.
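The reported dependence of the relative error on V and d can be sketched as follows. This is an illustrative model of the described relationship, not the thesis code; the coefficient k is hypothetical and would have to be fitted for a given scanner and reconstruction.

```python
# Illustrative sketch: the abstract reports that the relative error (RE)
# in tumor uptake is proportional to the misclassified volume V and falls
# off with the inverse square of the center-to-center distance d.

def relative_error(v_ml: float, d_cm: float, k: float = 0.5) -> float:
    """Estimate RE (%) from misclassified volume V (mL) and distance d (cm).

    k is a hypothetical proportionality constant, not a fitted value.
    """
    if d_cm <= 0:
        raise ValueError("distance must be positive")
    return k * v_ml / d_cm ** 2

# Inverse-square behavior: a tumor twice as far from the artifact
# sees one quarter of the error.
near = relative_error(v_ml=30.0, d_cm=4.0)
far = relative_error(v_ml=30.0, d_cm=8.0)
assert abs(near / far - 4.0) < 1e-9
```

Note that this simple form only holds under the stated assumption of similar activity and attenuation along the common LORs of artifact and tumor.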

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules, a manifestation of lung cancer. These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypotheses that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy treatment.
These hypotheses have been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy from injured lung tissue in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of ensuring co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also supports the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, combining the previous two medical image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissue by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (the air flow rate) of the lung tissue, using the Jacobian of the deformation field, and the tissue elasticity, using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
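The ventilation and elasticity features described above can be sketched with NumPy. This is a hedged illustration of the general technique (Jacobian determinant and strain tensor of a displacement field), not the dissertation's implementation; the function names and grid are our own.

```python
import numpy as np

def jacobian_determinant(u: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Local volume change of the mapping x -> x + u(x).

    u has shape (3, nx, ny, nz); det > 1 indicates local expansion
    (a common surrogate for ventilation in 4D-CT analysis).
    """
    grads = [np.gradient(u[i], *spacing) for i in range(3)]  # grads[i][j] = du_i/dx_j
    J = np.empty(u.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)  # I + grad u
    return np.linalg.det(J)

def strain_tensor(u: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Infinitesimal strain: symmetric part of the displacement gradient."""
    grads = [np.gradient(u[i], *spacing) for i in range(3)]
    E = np.empty(u.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            E[..., i, j] = 0.5 * (grads[i][j] + grads[j][i])
    return E

# Zero displacement: no volume change (det = 1) and zero strain.
u = np.zeros((3, 4, 4, 4))
assert np.allclose(jacobian_determinant(u), 1.0)
assert np.allclose(strain_tensor(u), 0.0)

# Uniform 10% expansion along each axis: local volume change = 1.1**3.
x = np.arange(4, dtype=float)
X = np.stack(np.meshgrid(x, x, x, indexing="ij"))
assert np.allclose(jacobian_determinant(0.1 * X), 1.1 ** 3)
```

In practice the displacement field would come from a deformable registration of successive respiratory phases; the synthetic fields here only verify the two limiting cases.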

    Impact of tumor size and tracer uptake heterogeneity in 18F-FDG PET and CT non-small cell lung cancer tumor delineation

    The objectives of this study were to investigate the relationship between CT- and 18F-FDG PET-based tumor volumes in non-small cell lung cancer (NSCLC) and the impact of tumor size and uptake heterogeneity on various approaches to delineating uptake on PET images. METHODS: Twenty-five NSCLC patients imaged with 18F-FDG PET/CT were considered. Seventeen underwent surgical resection of their tumor, and the maximum diameter was measured. Two observers manually delineated the tumors on the CT images and the tumor uptake on the corresponding PET images, using a fixed threshold at 50% of the maximum (T50), an adaptive threshold methodology, and the fuzzy locally adaptive Bayesian (FLAB) algorithm. Maximum diameters of the delineated volumes were compared with the histopathology reference when available. The volumes of the tumors were compared, and correlations between the anatomic volume, PET uptake heterogeneity, and the differences between delineations were investigated. RESULTS: All maximum diameters measured on PET and CT images significantly correlated with the histopathology reference (r > 0.89, P < 0.0001). Significant differences were observed among the approaches: CT delineation resulted in large overestimation (+32% ± 37%), whereas all delineations on PET images resulted in underestimation (from -15% ± 17% for T50 to -4% ± 8% for FLAB) except manual delineation (+8% ± 17%). Overall, CT volumes were significantly larger than PET volumes (55 ± 74 cm³ for CT vs. from 18 ± 25 to 47 ± 76 cm³ for PET). A significant correlation was found between anatomic tumor size and heterogeneity (larger lesions were more heterogeneous). Finally, the more heterogeneous the tumor uptake, the larger the underestimation of PET volumes by threshold-based techniques. CONCLUSION: Volumes based on CT images were larger than those based on PET images.
Tumor size and tracer uptake heterogeneity have an impact on threshold-based methods, which should not be used to delineate large, heterogeneous NSCLC tumors, as these methods tend to largely underestimate the spatial extent of the functional tumor in such cases. For accurate delineation of PET volumes in NSCLC, advanced image segmentation algorithms able to deal with tracer uptake heterogeneity should be preferred.
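The fixed-threshold (T50) delineation evaluated in this study is simple to state in code. The sketch below, with synthetic uptake values of our own invention, illustrates the failure mode the authors report: in a heterogeneous lesion, low-uptake tumor regions fall below the cutoff and the delineated extent is underestimated.

```python
import numpy as np

def t50_mask(pet: np.ndarray, fraction: float = 0.5) -> np.ndarray:
    """Fixed-threshold delineation: keep voxels >= fraction * lesion maximum."""
    return pet >= fraction * pet.max()

# Synthetic 1D uptake profile across a heterogeneous lesion (arbitrary units).
lesion = np.array([4.0, 8.0, 10.0, 3.0, 2.0])

# Threshold is 0.5 * 10 = 5.0, so only two of five tumor voxels survive;
# the low-uptake extent of the lesion is missed entirely.
mask = t50_mask(lesion)
assert mask.tolist() == [False, True, True, False, False]
```

A homogeneous lesion (all voxels near the maximum) would be captured almost completely by the same threshold, which is why the underestimation grows with heterogeneity.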

    Image processing and machine learning techniques used in computer-aided detection system for mammogram screening - a review

    This paper aims to review previously developed computer-aided detection (CAD) systems for mammogram screening, because the increasing death rate in women due to breast cancer is a global medical issue that can be controlled only by early detection through regular screening. To date, mammography is the most widely used breast imaging modality. CAD systems have been adopted by radiologists to increase the accuracy of breast cancer diagnosis by avoiding human errors and experience-related issues. This study reveals that, in spite of the high accuracy obtained by earlier proposed CAD systems for breast cancer diagnosis, they are not fully automated. Moreover, false-positive mammogram screening cases are numerous, and over-diagnosis of breast cancer exposes patients to harmful overtreatment, on which a huge amount of money is wasted. In addition, it is reported that mammogram screening results with and without CAD systems show no noticeable difference, whereas cancer cases undetected by CAD systems are increasing. Thus, future research is required to improve the performance of CAD systems for mammogram screening and make them completely automated.

    Development of methods for time efficient scatter correction and improved attenuation correction in time-of-flight PET/MR

    The present work addresses two persistent issues of image reconstruction in time-of-flight (TOF) PET: acceleration of TOF scatter correction and improvement of emission-based attenuation correction. Due to the lack of any capability to measure photon attenuation directly, improving attenuation correction by joint reconstruction of the activity and attenuation coefficient distributions using the MLAA technique is of special relevance for PET/MR, while accelerating TOF scatter correction is of equal importance for TOF-capable PET/CT systems as well. To achieve the stated goals, in a first step the high-resolution PET image reconstruction THOR, previously developed in our group, was adapted to take advantage of the TOF information delivered by state-of-the-art PET systems. TOF-aware image reconstruction reduces image noise and improves the convergence rate, both of which are highly desirable. Based on these adaptations, this thesis describes new developments for the improvement of TOF scatter correction and MLAA reconstruction, and reports results obtained with the new algorithms on the Philips Ingenuity PET/MR jointly operated by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the University Hospital.
A crucial requirement for quantitative TOF image reconstruction is TOF-aware scatter correction. The currently accepted reference method, the TOF extension of the single scatter simulation approach (TOF-SSS), was implemented as part of the TOF-related modifications of THOR. The major drawback of TOF-SSS is a 3- to 7-fold increase in the computation time required for the scatter estimation compared to regular SSS, which leads to a considerable image reconstruction slowdown. This problem was addressed by the development and implementation of a novel accelerated TOF scatter correction algorithm called ISA. This new algorithm proved to be a viable alternative to TOF-SSS and speeds up scatter correction by a factor of up to five. Images reconstructed using ISA are in excellent quantitative agreement with those obtained using TOF-SSS, while overall reconstruction time is reduced by a factor of two in whole-body investigations. This can be considered a major achievement, especially with regard to the use of advanced image reconstruction in a clinical context. The second major topic of this thesis is a contribution to improved attenuation correction in PET/MR through the utilization of MLAA reconstruction. First of all, knowledge of the actual time resolution operational in the considered PET scan is mandatory for a viable MLAA implementation. Since vendor-provided figures for the time resolution are not necessarily reliable and do not cover count-rate-dependent effects at all, a new algorithm was developed and implemented to determine the time resolution as a function of count rate. This algorithm (MLRES) is based on the maximum likelihood principle and makes it possible to determine the functional dependency of the time resolution of the Philips Ingenuity PET/MR on the given count rate and to integrate this information into THOR.
Notably, the present work shows that the time resolution of the Ingenuity PET/MR can degrade by more than 250 ps over the clinically relevant range of count rates compared to the vendor-provided figure of 550 ps, which is only realized in the limit of extremely low count rates. Based on the previously described developments, MLAA could be integrated into THOR. The list-mode MLAA implementation is capable of deriving realistic, patient-specific attenuation maps. In particular, correct identification of osseous structures and air cavities could be demonstrated, which is very difficult or even impossible with MR-based approaches to attenuation correction. Moreover, we have confirmed that MLAA is capable of reducing metal-induced artifacts that are otherwise present in MR-based attenuation maps. However, a detailed analysis of the obtained MLAA results revealed remaining problems regarding the stability of global scaling as well as local cross-talk between activity and attenuation estimates. Therefore, further work beyond the scope of the present thesis will be necessary to address these remaining issues.
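The relation between the up-to-five-fold scatter-correction speedup and the roughly two-fold overall reconstruction speedup can be checked with a back-of-the-envelope Amdahl-style calculation. The 60% scatter-time fraction used below is our illustrative assumption, not a figure from the thesis.

```python
# Amdahl-style estimate: only the scatter-estimation fraction of the
# total reconstruction time benefits from the ISA speedup.

def overall_speedup(scatter_fraction: float, scatter_speedup: float) -> float:
    """Overall speedup when a fraction of the runtime is accelerated."""
    return 1.0 / ((1.0 - scatter_fraction) + scatter_fraction / scatter_speedup)

# If scatter estimation were ~60% of the TOF reconstruction time (assumed),
# a 5x faster scatter step yields roughly a 2x overall speedup.
s = overall_speedup(scatter_fraction=0.6, scatter_speedup=5.0)
assert 1.9 < s < 2.1
```

The exact fraction varies with scanner, data size, and reconstruction settings; the point is only that a five-fold component speedup plausibly translates into the reported factor of two overall.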

    Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment

    In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans acquired during treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm relies on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images, and its performance was compared to that of threshold-based methods proposed for the assessment of therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion; on clinical datasets, it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method provided improved and more robust estimates, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. The applications of medical imaging have therefore become increasingly crucial in clinical oncology routines, providing screening, diagnosis, treatment monitoring, and non- or minimally invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, it consists of six studies: the first two introduce novel methods for tumor segmentation, and the remaining four develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy.
The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework over the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head-and-neck tumors in images from single and multiple modalities. The proposed auto-inpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end deep learning (DL) models, and deep-feature-based radiomics analysis on the same dataset; the numerical analyses show the potential of fusing the learned deep features into radiomic features to boost classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the last treatment session.
The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-and-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques into methods developed for the quantitative assessment of tumor characteristics, and they contribute to the essential procedures of cancer diagnosis and prognosis.

    Metabolically active volumes automatic delineation methodologies in PET imaging: review and perspectives

    PET imaging is now considered a gold standard tool in clinical oncology, especially for diagnosis purposes. More recent applications, such as therapy follow-up or tumor targeting in radiotherapy, require fast, accurate, and robust delineation of metabolically active tumor volumes on emission images, which cannot be obtained through manual contouring. This clinical need has spurred a large number of methodological developments regarding automatic methods to define tumor volumes on PET images. This paper reviews most of the recently proposed methodologies and discusses their frameworks and their methodological and/or clinical validation. Perspectives regarding future work are also suggested.

    Reorganization of retinotopic maps after occipital lobe infarction

    Published in final edited form as: J Cogn Neurosci. 2014 June; 26(6): 1266–1282. doi:10.1162/jocn_a_00538. We studied patient JS, who had a right occipital infarct that encroached on visual areas V1, V2v, and VP. When tested psychophysically, he was very impaired at detecting the direction of motion in random dot displays in which a variable proportion of dots moving in one direction (signal) were embedded in masking motion noise (noise dots). The impairment on this motion coherence task was especially marked when the display was presented to the upper left (affected) visual quadrant, contralateral to his lesion. However, with extensive training, by 11 months his threshold fell to the level of healthy participants. Training on the motion coherence task generalized to another motion task, the motion discontinuity task, in which he had to detect the presence of an edge defined by the difference in the direction of the coherently moving dots (signal) within the display. He was much better at this task at 8 months than at 3 months, and this improvement was associated with an increase in the activation of the human MT complex (hMT+) and of the kinetic occipital region, as shown by repeated fMRI scans. We also used fMRI to perform retinotopic mapping at 3, 8, and 11 months after the infarct. We quantified the retinotopy and areal shifts by measuring the distances between the centers of mass of functionally defined areas, computed in spherical surface-based coordinates. The functionally defined retinotopic areas V1, V2v, V2d, and VP were initially smaller in the lesioned right hemisphere, but they increased in size between 3 and 11 months. This change was not found in the normal, left hemisphere of the patient or in either hemisphere of the healthy control participants. We were interested in whether practice on the motion coherence task promoted the changes in the retinotopic maps.
We compared the results for patient JS with those from another patient (PF) who had a comparable lesion but had not been given such practice. We found similar changes in the maps in the lesioned hemisphere of PF. However, PF was only scanned at 3 and 7 months, and the biggest shifts in patient JS were found between 8 and 11 months. Thus, it is important to carry out a prospective study with a trained and an untrained group to determine whether the patterns of reorganization that we have observed can be further promoted by training. This work was supported by NIH grant R01NS064100 to L. M. V.
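The center-of-mass shift measurement described above can be sketched as follows. This is a hedged illustration of the general technique (geodesic distance between area centers on a spherical surface model); the function names and the sphere radius are our illustrative assumptions, not values from the paper.

```python
import numpy as np

def spherical_center(vertices: np.ndarray) -> np.ndarray:
    """Center of mass of unit-sphere vertex positions, re-projected onto the sphere.

    vertices has shape (n, 3); each row is a point on the unit sphere.
    """
    c = vertices.mean(axis=0)
    return c / np.linalg.norm(c)

def geodesic_distance(a: np.ndarray, b: np.ndarray, radius: float = 100.0) -> float:
    """Great-circle distance between two unit vectors on a sphere of given radius."""
    return radius * float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# Two orthogonal directions on a 100-unit sphere are a quarter circle apart.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
assert np.isclose(geodesic_distance(a, b), np.pi / 2 * 100.0)

# The center of two symmetric points lies on their bisector, back on the sphere.
center = spherical_center(np.stack([a, b]))
assert np.isclose(np.linalg.norm(center), 1.0)
```

An areal shift between two scan sessions would then be the geodesic distance between the centers computed from the same functionally defined area at each time point.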