
    An F-ratio-Based Method for Estimating the Number of Active Sources in MEG

    Get PDF
    Magnetoencephalography (MEG) is a powerful technique for studying human brain function. However, accurately estimating the number of sources that contribute to the MEG recordings remains a challenging problem due to the low signal-to-noise ratio (SNR), the presence of correlated sources, inaccuracies in head modeling, and variations in individual anatomy. To address these issues, our study introduces a robust method for accurately estimating the number of active sources in the brain based on the F-ratio statistical approach, which allows for a comparison between a full model with a higher number of sources and a reduced model with fewer sources. Using this approach, we developed a formal statistical procedure that sequentially increases the number of sources in the multiple dipole localization problem until all sources are found. Our results revealed that the selection of thresholds plays a critical role in determining the method's overall performance: appropriate thresholds needed to be adjusted for the number of sources and the SNR level, while they remained largely invariant to different inter-source correlations, modeling inaccuracies, and different cortical anatomies. By identifying optimal thresholds and validating our F-ratio-based method on simulated, real phantom, and human MEG data, we demonstrated its superiority over existing state-of-the-art statistical approaches, such as the Akaike Information Criterion (AIC) and Minimum Description Length (MDL). Overall, when tuned for optimal selection of thresholds, our method offers researchers a precise tool to estimate the true number of active brain sources and to accurately model brain function.
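    The core of the procedure described above is a sequence of F-ratio tests between nested dipole models. A minimal sketch of that idea is given below, assuming a hypothetical rss(k) routine that returns the residual sum of squares of the best-fitting k-dipole model; the parameter counts, the fixed significance threshold, and all names are illustrative rather than the authors' implementation, which tunes the thresholds to the SNR and the number of sources.

    # Illustrative sketch: sequential F-tests for the number of active sources.
    # rss(k) is a hypothetical routine returning the residual sum of squares of
    # the best-fitting k-dipole model for the measured MEG data.
    from scipy import stats

    def estimate_num_sources(rss, n_obs, params_per_dipole=5, max_sources=10, alpha=0.05):
        """Add dipoles one at a time until the extra dipole no longer yields a
        significant F-ratio (here a plain F critical value; the paper instead
        uses thresholds tuned to SNR and source count)."""
        k = 0
        while k < max_sources:
            rss_reduced = rss(k)       # reduced model with k dipoles
            rss_full = rss(k + 1)      # full model with k + 1 dipoles
            df1 = params_per_dipole    # extra parameters in the full model
            df2 = n_obs - (k + 1) * params_per_dipole
            f_ratio = ((rss_reduced - rss_full) / df1) / (rss_full / df2)
            if f_ratio < stats.f.ppf(1 - alpha, df1, df2):
                break                  # the additional source is not supported
            k += 1
        return k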

    Parallel Magnetic Resonance Imaging as Approximation in a Reproducing Kernel Hilbert Space

    Full text link
    In Magnetic Resonance Imaging (MRI), data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data with multiple receivers (parallel imaging) and by using more efficient non-Cartesian sampling schemes. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a Reproducing Kernel Hilbert Space (RKHS) with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of sample selection beyond the traditional g-factor noise analysis to both noise amplification and approximation errors. This is demonstrated with numerical examples. (28 pages, 7 figures.)
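    As a sketch of the kind of formulation meant here (our notation, not necessarily the paper's): with coil sensitivities c_j(x) and image m(x), each coil samples s_j(k) in k-space, and the matrix-valued reproducing kernel is built from the sensitivities, so that reconstruction amounts to kernel interpolation from the acquired k-space locations:

    % assumed notation: c_j(x) coil sensitivities, m(x) image, s_j(k) coil signals
    s_j(k) = \int c_j(x)\, m(x)\, e^{-i 2\pi k \cdot x}\, \mathrm{d}x ,
    \qquad
    K_{jl}(k, k') = \int c_j(x)\, \overline{c_l(x)}\, e^{-i 2\pi (k - k') \cdot x}\, \mathrm{d}x .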

    Comparison of beamformer implementations for MEG source localization

    Get PDF
    Beamformers are applied for estimating spatiotemporal characteristics of neuronal sources underlying measured MEG/EEG signals. Several MEG analysis toolboxes include an implementation of a linearly constrained minimum-variance (LCMV) beamformer. However, differences between the implementations and their results complicate the selection and application of beamformers and may hinder their wider adoption in research and clinical use. Additionally, combinations of different MEG sensor types (such as magnetometers and planar gradiometers) and preprocessing methods for interference suppression, such as signal space separation (SSS), can affect the results differently across implementations. So far, a systematic evaluation of the different implementations has not been performed. Here, we compared the localization performance of the LCMV beamformer pipelines in four widely used open-source toolboxes (MNE-Python, FieldTrip, DAiSS (SPM12), and Brainstorm) using datasets both with and without SSS interference suppression. We analyzed MEG data that were i) simulated, ii) recorded from a static and a moving phantom, and iii) recorded from a healthy volunteer receiving auditory, visual, and somatosensory stimulation. We also investigated the effects of SSS and of combining the magnetometer and gradiometer signals. We quantified how localization error and point-spread volume vary with the signal-to-noise ratio (SNR) in all four toolboxes. When applied carefully to MEG data with a typical SNR (3-15 dB), all four toolboxes localized the sources reliably; however, they differed in their sensitivity to preprocessing parameters. As expected, localizations were highly unreliable at very low SNR, but localization error was also high at very high SNR for the first three toolboxes, while Brainstorm showed greater robustness but with lower spatial resolution. We also found that the SNR improvement offered by SSS led to more accurate localization. Peer reviewed.
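    For reference, all four toolboxes implement essentially the same unit-gain LCMV estimator; a minimal textbook-style sketch (not any specific toolbox's code, with an illustrative regularization constant) is:

    # Minimal LCMV beamformer sketch: C is the (channels x channels) data
    # covariance, L the (channels x 3) leadfield of one source location.
    import numpy as np

    def lcmv_weights(C, L, reg=0.05):
        # Diagonal (Tikhonov-style) regularization before inversion, as the
        # toolboxes typically offer; the 5% value is only an example.
        C_reg = C + reg * np.trace(C) / C.shape[0] * np.eye(C.shape[0])
        C_inv = np.linalg.inv(C_reg)
        # Unit-gain constraint: W = (L' C^-1 L)^-1 L' C^-1
        W = np.linalg.solve(L.T @ C_inv @ L, L.T @ C_inv)
        return W  # (3 x channels); source time courses = W @ data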

    Fast Bayesian estimation of brain activation with cortical surface fMRI data using EM

    Full text link
    Task functional magnetic resonance imaging (fMRI) is a type of neuroimaging data used to identify areas of the brain that activate during specific tasks or stimuli. These data are conventionally modeled using a massive univariate approach across all data locations, which ignores spatial dependence at the cost of model power. We previously developed and validated a spatial Bayesian model that leverages dependencies along the cortical surface of the brain in order to improve accuracy and power. This model uses stochastic partial differential equation (SPDE) spatial priors with sparse precision matrices to appropriately model the spatially dependent activations seen in the neuroimaging literature, resulting in substantial increases in model power. Our original implementation relies on the computational efficiencies of the integrated nested Laplace approximation (INLA) to overcome the computational challenges of analyzing high-dimensional fMRI data while avoiding issues associated with variational Bayes implementations. However, it requires significant memory resources, extra software, and software licenses to run. In this article, we develop an exact Bayesian analysis method for the general linear model, employing an efficient expectation-maximization algorithm to find maximum a posteriori estimates of task-based regressors on cortical surface fMRI data. Through an extensive simulation study of cortical surface-based fMRI data, we compare our proposed method to the existing INLA implementation, as well as to a conventional massive univariate approach employing ad hoc spatial smoothing. We also apply the method to task fMRI data from the Human Connectome Project and show that our proposed implementation produces results similar to those of the validated INLA implementation. Both the INLA- and EM-based implementations are available through our open-source BayesfMRI R package. (29 pages, 10 figures. arXiv admin note: text overlap with arXiv:2203.0005.)
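    Schematically, the model being estimated is a general linear model with a spatial prior on the activation amplitudes (a sketch in our notation, not the exact parameterization of the paper):

    % y_v: fMRI time series at surface vertex v, X: task design matrix,
    % beta_v: activation amplitudes at vertex v, beta_k: field of the k-th
    % task coefficient over all vertices, Q_k: sparse SPDE/Matern precision
    y_v = X \beta_v + \varepsilon_v, \qquad \varepsilon_v \sim N(0, \sigma^2 I),
    \qquad \beta_k \sim N\!\bigl(0,\; Q_k(\kappa_k, \tau_k)^{-1}\bigr),

    where Q_k is the sparse SPDE precision matrix over the cortical surface mesh; an EM scheme of this kind alternates between computing the posterior of the coefficients given the current hyperparameters (E-step) and updating (\kappa, \tau, \sigma^2) (M-step) to reach the maximum a posteriori estimates.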

    An F-ratio-based method for estimating the number of active sources in MEG

    Get PDF
    Introduction: Magnetoencephalography (MEG) is a powerful technique for studying human brain function. However, accurately estimating the number of sources that contribute to the MEG recordings remains a challenging problem due to the low signal-to-noise ratio (SNR), the presence of correlated sources, inaccuracies in head modeling, and variations in individual anatomy. Methods: To address these issues, our study introduces a robust method for accurately estimating the number of active sources in the brain based on the F-ratio statistical approach, which allows for a comparison between a full model with a higher number of sources and a reduced model with fewer sources. Using this approach, we developed a formal statistical procedure that sequentially increases the number of sources in the multiple dipole localization problem until all sources are found. Results: Our results revealed that the selection of thresholds plays a critical role in determining the method's overall performance: appropriate thresholds needed to be adjusted for the number of sources and the SNR level, while they remained largely invariant to different inter-source correlations, translational modeling inaccuracies, and different cortical anatomies. By identifying optimal thresholds and validating our F-ratio-based method on simulated, real phantom, and human MEG data, we demonstrated its superiority over existing state-of-the-art statistical approaches, such as the Akaike Information Criterion (AIC) and Minimum Description Length (MDL). Discussion: Overall, when tuned for optimal selection of thresholds, our method offers researchers a precise tool to estimate the true number of active brain sources and to accurately model brain function.

    Spatially and temporally distinct encoding of muscle and kinematic information in rostral and caudal primary motor cortex

    Get PDF
    The organising principle of human motor cortex does not follow an anatomical body map, but rather a distributed representational structure in which motor primitives are combined to produce motor outputs. Electrophysiological recordings in primates and human imaging data suggest that M1 encodes kinematic features of movements, such as joint position and velocity. However, M1 exhibits well-documented sensory responses to cutaneous and proprioceptive stimuli, raising questions regarding the origins of kinematic motor representations: are they relevant in top-down motor control, or are they an epiphenomenon of bottom-up sensory feedback during movement? Here we provide evidence for spatially and temporally distinct encoding of kinematic and muscle information in human M1 during the production of a wide variety of naturalistic hand movements. Using a powerful combination of high-field fMRI and MEG, a spatial and temporal multivariate representational similarity analysis revealed encoding of kinematic information in more caudal regions of M1, over 200 ms before movement onset. In contrast, patterns of muscle activity were encoded in more rostral motor regions much later, after movements began. We provide compelling evidence that top-down control of dexterous movement engages kinematic representations in caudal regions of M1 prior to movement production.
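    The representational similarity analysis referred to here can be sketched as follows (a conceptual illustration, not the authors' pipeline): activity patterns for each movement condition are turned into a representational dissimilarity matrix (RDM), which is then rank-correlated with model RDMs built from kinematic or muscle (EMG) features.

    # Conceptual RSA sketch: compare a neural RDM with a model RDM.
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        # patterns: conditions x features (voxels or sensor values);
        # pairwise correlation distance between condition patterns
        return pdist(patterns, metric="correlation")

    def model_fit(neural_patterns, model_rdm):
        # Rank-correlate the neural RDM with a model RDM (e.g., built from
        # joint-velocity or EMG-pattern dissimilarities).
        return spearmanr(rdm(neural_patterns), model_rdm).correlation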

    Integration of 3D and 4D seismic impedance into the simulation model to improve reservoir characterization

    Get PDF
    Advisors: Denis José Schiozer, Alessandra Davólio Gomes. Doctoral thesis (Doutorado em Ciências e Engenharia de Petróleo, Reservatórios e Gestão), Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências. The ultimate goal of reservoir simulation in reservoir surveillance is to produce long-term production forecasts and to plan further development of petroleum fields by maintaining reliable reservoir models that honor the available static and dynamic data. Time-lapse (4D) seismic plays a prominent role in reservoir surveillance by providing data that describe the dynamic behavior of reservoir properties during production. Recent applications have shown that 4D seismic reduces uncertainty in reservoir properties, improving knowledge of the geological framework and enabling more effective reservoir management. The 4D seismic response can be integrated with reservoir flow simulation either qualitatively (for example, by interpreting likely causes of 4D anomalies due to changes in saturation and pore pressure) or quantitatively (by adding seismic-derived attributes to the objective function of a history-matching process). 3D seismic data, in turn, are associated with the static reservoir parameters and can provide knowledge of the reservoir framework. Thus, closing the loop between the flow simulation model and the observed seismic data (the engineering and seismic domains) must honor the static, dynamic, structural, and stratigraphic interpretation of the reservoir through forward and inverse modeling and subsequent comparison between predicted and actual observations.
This work aims to use 3D and 4D seismic data to mitigate uncertainties in the numerical reservoir simulation model, proposing a circular workflow of inverted seismic impedance (3D and 4D) and engineering studies, with emphasis on the interface between static and dynamic models. The methodology is applied to a sandstone reservoir with complex structural geology, the Norne Field benchmark case (Norway). The first part of the work presents a 3D seismic inversion of the baseline survey (2001), discussing different numbers and locations of wells for characterizing the static reservoir framework; it shows that the 3D inversion provides better results when the input data, in this case the well-log data, respect the complex structural geology of the Norne reservoir. We then highlight the advantages of time-lapse seismic interpretation in the form of inverted impedance by running a 4D seismic inversion and comparing the derived impedance anomalies with standard seismic amplitude differences for several examples in the Norne Field; the 4D inversion mitigates anomalies that are not caused by production activity. Next, we interpret impedance variations between the base (2001) and monitor (2006) surveys for the entire field to identify 4D impedance anomalies (hardening and softening signals) and to decouple the effects of fluid and pressure variations due to production activity, supported by reservoir engineering data. An accurate qualitative 4D seismic interpretation is thus obtained from the inversion results, making it possible to understand the effects of production activity, which is another important contribution. However, the multidisciplinary nature of reservoir modeling demands a more quantitative approach to integrating 4D seismic data into history-matching workflows. Quantitative evaluation of the consistency between reservoir flow simulation and elastic parameters relies on a calibrated petro-elastic model (PEM) to provide a logical cross-domain comparison, yet the PEM itself can be very uncertain. Accordingly, we update the reservoir model using quantitative integration of inverted seismic impedance (3D and 4D) within the reservoir flow simulation model, taking into account that the seismic data mismatch can be associated with an uncertain simulation model as well as with an uncertain PEM. The case studied presented a considerable initial mismatch between simulated and measured bottom-hole pressure (BHP). We therefore propose two steps to resolve the ambiguity in generating a validated reservoir flow simulation model under an uncertain PEM: first, we improve the reliability of the reservoir model using quantitative integration of the 3D and 4D observed seismic impedance together with well history data; then, we calibrate the parameters of the petro-elastic model against the 4D observations and the production history to ensure realistic values for the changes in in-situ elastic parameters caused by production activity. This study integrates the engineering and seismic domains in an iterative workflow on a real field to close the loop, updating the reservoir flow simulation model and validating the petro-elastic model.
The main contribution of this work is to highlight the incorporation of the available static and dynamic reservoir data to diagnose the reliability of the reservoir flow simulation for a complex case, considering the uncertainties inherent to these data and improving the understanding of reservoir behavior.
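    In a quantitative seismic history-matching loop of the kind described above, the data mismatch typically combines a well-data term and a 4D impedance term, for example (a schematic form in our notation, not the thesis' exact objective function):

    O(m) = \sum_{w,t} \frac{\bigl(d^{\mathrm{sim}}_{w,t}(m) - d^{\mathrm{obs}}_{w,t}\bigr)^2}{\sigma_{w,t}^{2}}
         \; + \; \lambda \sum_{i} \bigl(\Delta I^{\mathrm{sim}}_{p,i}(m) - \Delta I^{\mathrm{obs}}_{p,i}\bigr)^2 ,

    where d_{w,t} are well production and pressure data (such as BHP), \Delta I_{p} is the 4D change in acoustic impedance predicted through the petro-elastic model, and \lambda weights the seismic term against the well-history term.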

    Observer models applied to low-contrast detectability in computed tomography

    Get PDF
    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Medicina (Depto. de Radiología, Rehabilitación y Fisioterapia), defended on 15/01/2016. European-format thesis (compendium of articles). Introduction. Medical imaging has become one of the cornerstones of modern healthcare. Computed tomography (CT) is an imaging modality widely used in radiology worldwide. The technique allows three-dimensional volume reconstructions of different parts of the patient to be obtained with isotropic spatial resolution, and sharp images of moving organs, such as the heart or the lungs, to be acquired without artifacts. The spectrum of indications that can be tackled with this technique is wide, comprising brain perfusion, cardiology, oncology, vascular radiology, interventional radiology, and traumatology, among others. CT is a very popular imaging technique, widely implemented in healthcare services worldwide. The number of CT scans performed per year has grown continuously over the past decades, which has brought great benefit to patients. At the same time, CT exams represent the highest contribution to the collective radiation dose; patient dose in CT is one order of magnitude higher than in conventional X-ray studies. Regarding patient dose in X-ray imaging, the ALARA criterion is universally accepted: patient images should be obtained using a dose as low as reasonably achievable and compatible with the diagnostic task. Some cases of patient radiation overexposure, most of them in brain perfusion procedures, have come to the public eye and had a great impact in the US media. These cases, together with the increasing number of CT scans performed per year, have raised a red flag about the doses imparted to patients in CT. Several guidelines and recommendations for dose optimization in CT have been published by different organizations; they have been included in European and national regulations and adopted by CT manufacturers. In CT, the X-ray tube rotates around the patient, emitting photon beams from different angles or projections. These photons interact with the tissues in the patient, depending on their energy and on the tissue composition and density. A fraction of these photons deposit all or part of their energy inside the patient, resulting in absorbed dose to the organs. The images are generated from the projections of the X-ray beam that reach the detectors after passing through the patient; each projection represents the total attenuation of the X-ray beam integrated along its path. A CT protocol is defined as a collection of settings that can be selected on the CT console and that affect both image quality and patient dose. They can be acquisition parameters, such as beam collimation, tube current, rotation time, kV, and pitch, or reconstruction parameters, such as slice thickness and spacing, reconstruction filter, and reconstruction method (filtered back projection (FBP) or iterative algorithms). All main CT manufacturers offer default protocols for different indications, depending on the anatomical region. The user can frequently set the protocol parameters by selecting among a range of values to adapt them to the clinical indication and to patient characteristics, such as size or age. The selected settings greatly affect image quality and dose, and many combinations of scan parameters can render appropriate image quality for a particular study. Protocol optimization is a complex task in CT because most scan protocol parameters are intertwined and affect both image quality and patient dose...
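    The statement above that each projection represents the integrated attenuation along the beam path is the standard Beer-Lambert relation underlying CT reconstruction:

    % standard form: I_0 incident intensity, I transmitted intensity,
    % mu(x) linear attenuation coefficient along the ray
    I = I_0 \, e^{-\int_{\mathrm{ray}} \mu(x)\, \mathrm{d}l}
    \quad\Longrightarrow\quad
    p = -\ln\frac{I}{I_0} = \int_{\mathrm{ray}} \mu(x)\, \mathrm{d}l .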

    A unified view on beamformers for M/EEG source reconstruction

    Get PDF
    Beamforming is a popular method for functional source reconstruction using magnetoencephalography (MEG) and electroencephalography (EEG) data. Beamformers, which were first proposed for MEG more than two decades ago, have since been applied in hundreds of studies, demonstrating that they are a versatile and robust tool for neuroscience. However, certain characteristics of beamformers remain somewhat elusive, and there is currently no unified documentation of the mathematical underpinnings and computational subtleties of beamformers as implemented in the most widely used academic open-source software packages for MEG analysis (Brainstorm, FieldTrip, MNE, and SPM). Here, we provide such documentation, aiming to lay out the mathematical background of beamforming and to unify the terminology. Beamformer implementations are compared across toolboxes, and pitfalls of beamforming analyses are discussed. Specifically, we provide details on handling rank-deficient covariance matrices, prewhitening, rank reduction of the forward fields, and the combination of heterogeneous sensor types, such as magnetometers and gradiometers. The overall aim of this paper is to contribute to contemporary efforts towards higher levels of computational transparency in functional neuroimaging.
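    Two of the pitfalls listed above, rank-deficient covariance matrices and prewhitening, amount to working with a truncated eigendecomposition instead of a plain matrix inverse. A minimal illustration follows (our sketch, not the code of any of the toolboxes named above):

    # Handling a rank-deficient covariance (e.g., after SSS or averaging):
    # invert only the retained eigenspace, and build a whitener from the
    # noise covariance restricted to that subspace.
    import numpy as np

    def truncated_pinv(C, rank=None, rtol=1e-10):
        eigval, eigvec = np.linalg.eigh(C)
        order = np.argsort(eigval)[::-1]
        eigval, eigvec = eigval[order], eigvec[:, order]
        if rank is None:
            rank = int(np.sum(eigval > rtol * eigval[0]))
        inv_vals = np.zeros_like(eigval)
        inv_vals[:rank] = 1.0 / eigval[:rank]
        return (eigvec * inv_vals) @ eigvec.T

    def prewhitener(noise_cov, rank=None, rtol=1e-10):
        # Returns W (rank x channels) such that W @ noise_cov @ W.T is the
        # identity on the retained subspace.
        eigval, eigvec = np.linalg.eigh(noise_cov)
        order = np.argsort(eigval)[::-1]
        eigval, eigvec = eigval[order], eigvec[:, order]
        if rank is None:
            rank = int(np.sum(eigval > rtol * eigval[0]))
        return (eigvec[:, :rank] / np.sqrt(eigval[:rank])).T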