
    2D Electrical Resistivity Tomography (ERT) Survey using the Multi-Electrode Gradient Array at the Bosumtwi Impact Crater, Ghana

    The 10.5 km diameter Bosumtwi impact crater in Ghana is occupied by a lake about 8.5 km in diameter. The multi-electrode gradient array has been used to carry out a 2D electrical resistivity tomography (ERT) survey at different locations around the crater. The 2 m take-out cable of the ABEM LUND Resistivity Imaging System was modified to function as a 5 m take-out. The 2D electrical resistivity survey was carried out along six (6) radial profiles running from the shore of the lake towards the crater rim. The least-squares inversion technique was used to invert the topographically corrected data. The area extending from the lake shore towards the crater rim contains essentially three formations: low resistivity regions extending uphill from the shore of the lake, with resistivities < 64 Ω·m, representing the lake sediments; moderately high resistivity regions with values between 128 and 200 Ω·m, interpreted as impact-related breccias such as dikes, allochthonous or parautochthonous depending on their geometries; lastly, the model clearly differentiates the resistive basement metamorphic rocks, with resistivities > 128 Ω·m, from the lake sediments and the breccias on the basis of their geometry and lateral extent. The ERT models allowed us to locate faults and fractures and to estimate the thickness of the post-impact lake sediments and the breccias. The results showed that the take-outs of the multi-core cable can be modified to suit the requirements of a particular survey, highlighting the utility of this technique in impact cratering studies and geo-electrical imaging studies in general. Keywords: impact crater, target rock, electrical resistivity tomography (ERT), multi-electrode gradient array, roll-along technique
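    The apparent resistivities quoted above follow from the standard four-electrode relation ρa = K·ΔV/I, where K is the geometric factor of the electrode spread. A minimal sketch of that computation, with hypothetical electrode positions (the ABEM-specific cabling and the gradient-array measurement sequence are not reproduced here):

```python
import math

def geometric_factor(a, b, m, n):
    """Geometric factor K (metres) for four surface electrodes on a
    half-space: current electrodes at positions a and b, potential
    electrodes at m and n.  K = 2*pi / (1/AM - 1/AN - 1/BM + 1/BN)."""
    am, an = abs(m - a), abs(n - a)
    bm, bn = abs(m - b), abs(n - b)
    return 2 * math.pi / (1 / am - 1 / an - 1 / bm + 1 / bn)

def apparent_resistivity(a, b, m, n, delta_v, current):
    """Apparent resistivity (ohm.m) from one voltage/current measurement."""
    return geometric_factor(a, b, m, n) * delta_v / current
```

    For a Wenner spread (electrodes at 0, 5, 10 and 15 m) this reduces to K = 2πa with a = 5 m; the gradient array uses the same relation, with the potential dipole moved between widely separated current electrodes.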

    An empirical comparison of surface-based and volume-based group studies in neuroimaging

    Being able to reliably detect functional activity in a population of subjects is crucial in human brain mapping, both for the understanding of cognitive functions in normal subjects and for the analysis of patient data. The usual approach proceeds by normalizing brain volumes to a common three-dimensional template. However, a large part of the data acquired in fMRI aims at localizing cortical activity, and methods working on the cortical surface may provide better inter-subject registration than the standard procedures that process the data in the volume. Nevertheless, few assessments of the performance of surface-based (2D) versus volume-based (3D) procedures have been reported so far, mostly because inter-subject cortical surface maps are not easily obtained. In this paper we present a systematic comparison of 2D versus 3D group-level inference procedures, using cluster-level and voxel-level statistics assessed by permutation, in random effects (RFX) and mixed-effects (MFX) analyses. We consider different schemes to perform meaningful comparisons between thresholded statistical maps in the volume and on the cortical surface. We find that surface-based multi-subject statistical analyses are generally more sensitive than their volume-based counterparts, in the sense that they detect slightly denser networks of regions when performing peak-level detection; this effect is less clear for cluster-level inference and is reduced by smoothing. Surface-based inference also increases the reliability of the activation maps.
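    The permutation scheme for one-sample group inference can be sketched as a sign-flipping test with max-statistic familywise correction. This is a generic illustration of the technique, not the paper's exact 2D/3D pipeline, and the array shapes are assumptions:

```python
import numpy as np

def sign_flip_max_t(contrast_maps, n_perm=1000, seed=0):
    """One-sample group inference by sign-flipping permutations.

    contrast_maps: (n_subjects, n_voxels) per-subject contrast values.
    Returns the observed t map and FWER-corrected p-values obtained from
    the null distribution of the maximum |t| across voxels.
    """
    rng = np.random.default_rng(seed)
    n_sub, _ = contrast_maps.shape

    def t_stat(x):
        # Classical one-sample t statistic, voxel-wise
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n_sub))

    t_obs = t_stat(contrast_maps)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        # Under the null, each subject's contrast sign is exchangeable
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        max_null[i] = np.abs(t_stat(signs * contrast_maps)).max()
    # Corrected p: fraction of permutations whose max |t| reaches t_obs
    p_corr = (1 + (max_null[:, None] >= np.abs(t_obs)[None, :]).sum(0)) \
        / (n_perm + 1)
    return t_obs, p_corr
```

    The same machinery applies whether the "voxels" live on the 3D grid or on cortical surface vertices; only the spatial support changes.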

    Beyond Voxel Prediction Uncertainty: Identifying brain lesions you can trust

    Deep neural networks have become the gold-standard approach for the automated segmentation of 3D medical images. Their full acceptance by clinicians remains however hampered by the lack of intelligible uncertainty assessment of the provided results. Most approaches to quantify their uncertainty, such as the popular Monte Carlo dropout, are restricted to some measure of uncertainty in prediction at the voxel level. Besides not being clearly related to genuine medical uncertainty, this is not clinically satisfying, as most objects of interest (e.g. brain lesions) are made of groups of voxels whose overall relevance may not simply reduce to the sum or mean of their individual uncertainties. In this work, we propose to go beyond voxel-wise assessment using an innovative Graph Neural Network approach, trained from the outputs of a Monte Carlo dropout model. This network allows the fusion of three estimators of voxel uncertainty: entropy, variance, and model's confidence; and can be applied to any lesion, regardless of its shape or size. We demonstrate the superiority of our approach for uncertainty estimation on a task of Multiple Sclerosis lesion segmentation. Comment: Accepted for presentation at the Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 202
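    The three voxel-level estimators named in the abstract (entropy, variance, model confidence) can all be computed from the stochastic forward passes of a Monte Carlo dropout model. The exact definitions below are illustrative assumptions for the binary case, not the paper's implementation:

```python
import numpy as np

def voxel_uncertainties(mc_probs):
    """Voxel-wise uncertainty estimators from Monte Carlo dropout.

    mc_probs: (n_samples, n_voxels) foreground probabilities collected
    over T stochastic forward passes of the same input.
    """
    mean_p = mc_probs.mean(axis=0)
    eps = 1e-12
    # Entropy of the mean predictive distribution (binary case)
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1 - mean_p) * np.log(1 - mean_p + eps))
    # Sample variance of the probability across stochastic passes
    variance = mc_probs.var(axis=0)
    # Confidence: distance of the mean probability from the 0.5 boundary
    confidence = np.abs(mean_p - 0.5) * 2
    return entropy, variance, confidence
```

    These three maps are the kind of per-voxel inputs that a lesion-level model (here, a Graph Neural Network over the voxels of each lesion) can then aggregate into a single trust score per lesion.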

    Surface-based versus volume-based fMRI group analysis: a case study

    Being able to reliably detect functional activity in a population of subjects is crucial in human brain mapping, both for the understanding of cognitive functions in normal subjects and for the analysis of patient data. The usual approach proceeds by normalizing brain volumes to a common 3D template. However, a large part of the data acquired in fMRI aims at localizing cortical activity, and methods working on the cortical surface may provide better inter-subject registration than the standard procedures that process the data in 3D. Nevertheless, few assessments of the performance of surface-based (2D) versus volume-based (3D) procedures have been shown so far, mostly because inter-subject cortical surface maps are not easily obtained. In this paper we present a systematic comparison of 2D versus 3D group-level inference procedures, by using cluster-level and voxel-level statistics assessed by permutation, in random effects (RFX) and mixed-effects (MFX) analyses. We find that, using a voxel-level thresholding, and to some extent, cluster-level thresholding, the surface-based approach generally detects more, but smaller active regions than the corresponding volume-based approach for both RFX and MFX procedures, and that surface-based supra-threshold regions are more reproducible by bootstrap.

    Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis

    The full acceptance of Deep Learning (DL) models in the clinical field is rather low with respect to the quantity of high-performing solutions reported in the literature. In particular, end users are reluctant to rely on the rough predictions of DL models. Uncertainty quantification methods have been proposed in the literature as a potential response to the rough decisions provided by the DL black box, and thus as a way to increase the interpretability and the acceptability of the results by the end user. In this review, we propose an overview of the existing methods to quantify the uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints associated with real-life clinical routine. We then discuss the evaluation protocols used to validate the relevance of uncertainty estimates. Finally, we highlight the open challenges of uncertainty quantification in the medical field.

    Structural analysis of fMRI data revisited: improving the sensitivity and reliability of fMRI group studies.

    Group studies of functional magnetic resonance imaging datasets are usually based on the computation of the mean signal across subjects at each voxel (random effects analyses), assuming that all subjects have been set in the same anatomical space (normalization). Although this approach allows for a correct specificity (rate of false detections), it is not very efficient, for three reasons: i) its underlying hypotheses (perfect coregistration of the individual datasets and normality of the measured signal at the group level) are frequently violated; ii) the group size is small in general, so that asymptotic approximations on the parameter distributions do not hold; iii) the large size of the images requires conservative strategies to control the false detection rate, at the risk of increasing the number of false negatives. Given that it is still very challenging to build generative or parametric models of intersubject variability, we rely on a rule-based, bottom-up approach: we present a set of procedures that detect structures of interest from each subject's data, then search for correspondences across subjects and outline the most reproducible activation regions in the group studied. This framework enables a strict control of the number of false detections. It is shown here that this analysis demonstrates increased validity and improves both the sensitivity and reliability of group analyses compared with standard methods. Moreover, it directly provides information on the spatial correspondence or variability of the activated regions across subjects, which is difficult to obtain in standard voxel-based analyses.

    Parcellation Schemes and Statistical Tests to Detect Active Regions on the Cortical Surface

    Activation detection in functional Magnetic Resonance Imaging (fMRI) datasets is usually performed by thresholding activation maps in the brain volume or, better, on the cortical surface. However, basing the analysis on a site-by-site statistical decision may be detrimental both to the interpretation of the results and to the sensitivity of the analysis, because a perfect point-to-point correspondence of brain surfaces from multiple subjects cannot be guaranteed in practice. In this paper, we propose a new approach that first defines anatomical regions such as cortical gyri outlined on the cortical surface, and then segments these regions into functionally homogeneous structures using a parcellation procedure that includes an explicit between-subject variability model, i.e. random effects. We show that random effects inference can be performed in this framework. Our procedure allows an exact control of the specificity using permutation techniques, and we show that the sensitivity of this approach is higher than that of voxel- or cluster-level random effects tests performed on the cortical surface.

    Axisymmetrical water infiltration in soil imaged by non-invasive electrical resistivimetry

    Axisymmetrical infiltration of water in soil has been widely studied since the introduction of tension disc infiltrometers. Procedures have been developed to derive the hydraulic properties of soils from axisymmetrical infiltration measurements, but they rely on simplifying and/or a priori assumptions on the homogeneity of the soil from the point of view of its hydraulic properties and its initial water status prior to infiltration. Such assumptions are difficult to ascertain. We present here an attempt to image, in a vertical 2D plane, the development of the axisymmetrical infiltration bulb in soils using electrical resistivimetry. Bi-dimensional images of the soil electrical resistivity were obtained at various times during the infiltration process by inverting apparent electrical resistivity measurements taken by a 32-electrode Wenner array with a 10 cm spacing laid across a diameter of the infiltrometer. The inversion was done using the Res2Dinv software. The infiltration experiments used either a CaCl2 or a KBr solution at 40 g/L to enhance the soil electrical resistivity contrast, and either 8 cm or 25 cm diameter disks. Most of the infiltration experiments were done at a single water potential (-0.1 kPa) and lasted 3.5 to 5 hours. A multipotential experiment was conducted, as classically done to derive hydraulic conductivity values according to Reynolds & Elrick's method. At the end of each experiment, the soil was sampled for Cl or Br concentrations on the 2D plane corresponding to the resistivity measurements. Electrical resistivity measurements provided clear images of the infiltration bulb and allowed the user to monitor its development through time. The infiltration bulb imaged by resistivimetry at the end of the infiltrations matched well that imaged from the anion concentrations in soil. Some geometrical features of the infiltration bulb could be seen through both resistivity and anion measurements and were consistent between the two imaging methods. High-resolution geophysical imaging of water infiltration in field soils thus seems a fruitful approach to the development of efficient methods for the hydraulic characterisation of soils.

    MRI-based screening of preclinical Alzheimer's disease for prevention clinical trials

    The final publication is available at IOS Press through http://dx.doi.org/10.3233/JAD-180299. The identification of healthy individuals harboring amyloid pathology represents one important challenge for secondary prevention clinical trials in Alzheimer's disease (AD). Consequently, noninvasive and cost-efficient techniques to detect preclinical AD constitute an unmet need of critical importance. In this manuscript, we apply machine learning to structural MRI (T1 and DTI) of 96 cognitively normal subjects to identify amyloid-positive ones. Models were trained on public ADNI data and validated on an independent local cohort. Used for subject classification in a simulated clinical trial setting, the proposed method is able to save 60% of unnecessary CSF/PET tests and to reduce the cost of recruitment by 47%. This recruitment strategy capitalizes on available MR scans to reduce the overall amount of invasive PET/CSF tests in prevention trials, demonstrating a potential value as a tool for preclinical AD screening. This protocol could foster the development of secondary prevention strategies for AD.
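    The reported test savings follow the usual pre-screening arithmetic: only subjects the MRI classifier flags as likely amyloid-positive proceed to confirmatory CSF/PET testing. A toy version of that calculation, where the prevalence, sensitivity and specificity values are assumptions for illustration rather than the paper's figures:

```python
def screening_savings(prevalence, sensitivity, specificity):
    """Fraction of confirmatory PET/CSF tests avoided when only
    MRI-screen-positive subjects are sent for testing.

    Illustrative model: the test rate is the fraction of subjects the
    classifier flags positive (true positives + false positives).
    """
    test_rate = (prevalence * sensitivity
                 + (1 - prevalence) * (1 - specificity))
    return 1 - test_rate

# Hypothetical operating point: 30% amyloid prevalence, 90% sensitivity,
# 70% specificity -> 52% of confirmatory tests avoided.
saved = screening_savings(0.3, 0.9, 0.7)
```

    The trade-off is that lowering the screening threshold recovers more true positives at the price of more confirmatory tests; the paper's 60% figure corresponds to one such operating point on its own cohort.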

    A supervised clustering approach for fMRI-based inference of brain states

    We propose a method that combines signals from many brain regions observed in functional Magnetic Resonance Imaging (fMRI) to predict the subject's behavior during a scanning session. Such predictions suffer from the huge number of brain regions sampled on the voxel grid of standard fMRI data sets: the curse of dimensionality. Dimensionality reduction is thus needed, but it is often performed using a univariate feature selection procedure that handles neither the spatial structure of the images nor the multivariate nature of the signal. By introducing a hierarchical clustering of the brain volume that incorporates connectivity constraints, we reduce the span of the possible spatial configurations to a single tree of nested regions tailored to the signal. We then prune the tree in a supervised setting, hence the name supervised clustering, in order to extract a parcellation (division of the volume) such that parcel-based signal averages best predict the target information. Dimensionality reduction is thus achieved by feature agglomeration, and the constructed features now provide a multi-scale representation of the signal. Comparisons with reference methods on both simulated and real data show that our approach yields higher prediction accuracy than standard voxel-based approaches. Moreover, the method infers an explicit weighting of the regions involved in the regression or classification task.
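    The connectivity-constrained agglomeration can be illustrated on a 1D grid, where only spatially adjacent features are allowed to merge. This greedy toy version stands in for hierarchical clustering with a connectivity graph on the 3D voxel grid; the function name and the variance-based merge cost are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def contiguous_parcellation(X, n_parcels):
    """Greedy, connectivity-constrained feature agglomeration on a line.

    X: (n_samples, n_features) with features ordered along a 1D grid, so
    only adjacent features (or parcels) may be merged.
    Returns parcel labels (n_features,) and parcel-averaged features.
    """
    parcels = [[j] for j in range(X.shape[1])]
    while len(parcels) > n_parcels:
        best, best_cost = None, np.inf
        for i in range(len(parcels) - 1):
            merged = X[:, parcels[i] + parcels[i + 1]]
            # Merge cost: within-parcel variance of the candidate merge
            cost = merged.var(axis=1).sum()
            if cost < best_cost:
                best, best_cost = i, cost
        parcels[best] = parcels[best] + parcels.pop(best + 1)
    labels = np.empty(X.shape[1], dtype=int)
    for lab, idx in enumerate(parcels):
        labels[idx] = lab
    reduced = np.column_stack([X[:, idx].mean(axis=1) for idx in parcels])
    return labels, reduced
```

    The reduced matrix of parcel averages is what a downstream classifier or regressor is trained on; supervised pruning then chooses the tree cut whose parcel averages best predict the target.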