
    Port: A software tool for digital data donation

    Recently, a new workflow has been introduced that allows academic researchers to partner with individuals interested in donating their digital trace data for academic research purposes (Boeschoten, Ausloos, et al., 2022). In this workflow, the digital traces of participants are processed locally on their own devices in such a way that only the subset of participants' digital trace data that is of legitimate interest to a research project is shared with the researcher, and only after the participant has provided their informed consent.

    This data donation workflow consists of the following steps. First, the participant requests a digital copy of their personal data, i.e., their Data Download Package (DDP), from the platform of interest, such as Google, Meta, or Twitter. Platforms, as data controllers, are required under the European Union's General Data Protection Regulation (GDPR) to share such a digital copy with each person requesting one. Second, the participant downloads the DDP onto their personal device. Third, by means of local processing, only the data points of interest to the researcher are extracted from that DDP. Fourth, the participant inspects the extracted data points, after which the participant can consent to donate. Only after this consent has been provided are the donated data sent to a storage location that the researcher can access for further analysis.

    In this paper, we introduce Port, a software tool that allows researchers to configure the local processing step of the data donation workflow, so that they collect exactly the digital traces needed to answer their research question. When using Port, a researcher can decide:
    • Which digital platforms are investigated;
    • Which digital traces are collected;
    • How the extracted digital traces are visually presented to the participant;
    • What is communicated to the participant.
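    To make the local processing step more concrete, the sketch below shows what a minimal extraction script might look like. It is an illustration only, not Port's actual API: the ZIP-based DDP layout, the file name watch-history.json, and the time/title fields are assumptions chosen for the example.

```python
import json
import zipfile

def extract_watch_history(ddp_zip_path):
    """Pull only a minimal subset (timestamps and titles) from a hypothetical
    JSON file inside a Data Download Package (DDP). File and field names are
    assumptions for illustration, not a real platform layout or Port's API."""
    rows = []
    with zipfile.ZipFile(ddp_zip_path) as ddp:
        with ddp.open("watch-history.json") as f:  # hypothetical file name
            for entry in json.load(f):
                # Keep only the data points of legitimate interest; everything
                # else in the DDP never leaves the participant's device.
                rows.append({"time": entry.get("time"),
                             "title": entry.get("title")})
    # The returned rows would be shown to the participant for inspection and
    # sent to the researcher's storage only after informed consent.
    return rows
```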

    Noise reduction in computed tomography scans using 3-d anisotropic hybrid diffusion with continuous switch.

    Noise filtering techniques that maintain image contrast while decreasing image noise have the potential to optimize the quality of computed tomography (CT) images acquired at reduced radiation dose. In this paper, a hybrid diffusion filter with continuous switch (HDCS) is introduced, which exploits the benefits of three-dimensional edge-enhancing diffusion (EED) and coherence-enhancing diffusion (CED). Noise is filtered, while edges, tubular structures, and small spherical structures are preserved. From ten high dose thorax CT scans, acquired at clinical doses, ultra low dose (15 mAs) scans were simulated and used to evaluate and compare HDCS to other diffusion filters, such as regularized Perona-Malik diffusion and EED. Quantitative results show that the HDCS filter outperforms the other filters in restoring the high dose CT scan from the corresponding simulated low dose scan. A qualitative evaluation was performed on filtered real low dose CT thorax scans. An expert observer scored artifacts as well as fine structures and was asked to choose one of three scans (two filtered (blinded), one unfiltered) for three different settings (trachea, lung, and mediastinal). Overall, the HDCS filtered scan was chosen most often.
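    As a rough illustration of the kind of nonlinear diffusion filtering discussed above, the sketch below implements a single explicit iteration of regularized Perona-Malik diffusion on a 3D volume, one of the baseline filters the paper compares against. It is not the HDCS filter itself (which switches continuously between EED and CED), and the parameter values are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perona_malik_step(volume, kappa=30.0, dt=0.1, sigma=0.5):
    """One explicit 3D Perona-Malik diffusion update (illustrative baseline,
    not the HDCS filter). kappa controls edge sensitivity, dt the step size,
    sigma the pre-smoothing used for the regularized gradient."""
    smoothed = gaussian_filter(volume, sigma)           # regularization
    grads = np.gradient(smoothed)                       # gradients along z, y, x
    grad_mag2 = sum(g * g for g in grads)
    c = 1.0 / (1.0 + grad_mag2 / (kappa ** 2))          # diffusivity: low at edges
    flux = [c * g for g in np.gradient(volume)]
    divergence = sum(np.gradient(f, axis=i) for i, f in enumerate(flux))
    return volume + dt * divergence                     # filtered volume
```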

    Automatic segmentation of MR brain images with a convolutional neural network

    Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only. The segmentation method is applied to five different data sets: coronal T2-weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T2-weighted images of preterm infants acquired at 40 weeks PMA, axial T1-weighted images of ageing adults acquired at an average age of 70 years, and T1-weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86 and 0.91. The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol.
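    The multi-scale idea described above can be sketched as a network with one branch per patch size, each branch using its own kernel size, whose features are concatenated before the per-voxel classification. The PyTorch sketch below is a simplified illustration under assumed patch counts, kernel sizes, and channel widths, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class MultiScalePatchNet(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): one branch per
    patch size, each with its own kernel size, fused before classification."""
    def __init__(self, n_classes=8):
        super().__init__()
        # Hypothetical choices: three 2D patch branches with kernels 3, 5 and 7.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 24, k, padding=k // 2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for k in (3, 5, 7)
        ])
        self.classifier = nn.Linear(24 * 3, n_classes)

    def forward(self, patches):
        # `patches` is a list of three tensors, one per patch size, all centred
        # on the same voxel; their features are concatenated for classification.
        feats = [b(p).flatten(1) for b, p in zip(self.branches, patches)]
        return self.classifier(torch.cat(feats, dim=1))
```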

    Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in MRI

    Automatic segmentation of brain tissues and white matter hyperintensities of presumed vascular origin (WMH) in MRI of older patients is widely described in the literature. Although brain abnormalities and motion artefacts are common in this age group, most segmentation methods are not evaluated in a setting that includes these challenges. In the present study, our tissue segmentation method for brain MRI was extended and evaluated for additional WMH segmentation. Furthermore, our method was evaluated in two large cohorts with a realistic variation in brain abnormalities and motion artefacts. The method uses a multi-scale convolutional neural network with a T1-weighted image, a T2-weighted fluid attenuated inversion recovery (FLAIR) image and a T1-weighted inversion recovery (IR) image as input. The method automatically segments white matter (WM), cortical grey matter (cGM), basal ganglia and thalami (BGT), cerebellum (CB), brain stem (BS), lateral ventricular cerebrospinal fluid (lvCSF), peripheral cerebrospinal fluid (pCSF), and WMH. Our method was evaluated quantitatively with images publicly available from the MRBrainS13 challenge (n = 20), quantitatively and qualitatively in relatively healthy older subjects (n = 96), and qualitatively in patients from a memory clinic (n = 110). The method can accurately segment WMH (overall Dice coefficient in the MRBrainS13 data of 0.67) without compromising performance for tissue segmentations (overall Dice coefficients in the MRBrainS13 data of 0.87 for WM, 0.85 for cGM, 0.82 for BGT, 0.93 for CB, 0.92 for BS, 0.93 for lvCSF, 0.76 for pCSF). Furthermore, the automatic WMH volumes showed a high correlation with manual WMH volumes (Spearman's ρ = 0.83 for relatively healthy older subjects). In both cohorts, our method produced reliable segmentations (as determined by a human observer) in most images (relatively healthy/memory clinic: tissues 88%/77% reliable, WMH 85%/84% reliable) despite various degrees of brain abnormalities and motion artefacts. In conclusion, this study shows that a convolutional neural network-based segmentation method can accurately segment brain tissues and WMH in MR images of older patients with varying degrees of brain abnormalities and motion artefacts.
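    The per-tissue scores reported above are Dice overlap coefficients between the automatic and manual segmentations. A minimal NumPy sketch for binary masks is shown below; it illustrates the metric only and does not reproduce the exact MRBrainS13 evaluation protocol.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary segmentation masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / denom if denom else 1.0
```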

    TIPS bilateral noise reduction in 4D CT perfusion scans produces high-quality cerebral blood flow maps

    Cerebral computed tomography perfusion (CTP) scans are acquired to detect areas of abnormal perfusion in patients with cerebrovascular diseases. These 4D CTP scans consist of multiple sequential 3D CT scans over time. Therefore, to reduce radiation exposure to the patient, the amount of x-ray radiation that can be used per sequential scan is limited, which results in a high level of noise. To detect areas of abnormal perfusion, perfusion parameters are derived from the CTP data, such as the cerebral blood flow (CBF). Algorithms to determine perfusion parameters, especially singular value decomposition, are very sensitive to noise. Therefore, noise reduction is an important preprocessing step for CTP analysis. In this paper, we propose a time-intensity profile similarity (TIPS) bilateral filter to reduce noise in 4D CTP scans, while preserving the time-intensity profiles (fourth dimension) that are essential for determining the perfusion parameters. The proposed TIPS bilateral filter is compared to standard Gaussian filtering, and 4D and 3D (applied separately to each sequential scan) bilateral filtering on both phantom and patient data. Results on the phantom data show that the TIPS bilateral filter is best able to approach the ground truth (noise-free phantom), compared to the other filtering methods (lowest root mean square error). An observer study is performed using CBF maps derived from fifteen CTP scans of acute stroke patients filtered with standard Gaussian, 3D, 4D and TIPS bilateral filtering. These CBF maps were blindly presented to two observers who indicated which map they preferred for (1) gray/white matter differentiation, (2) detectability of infarcted area, and (3) overall image quality. Based on these results, the TIPS bilateral filter ranked best and its CBF maps were scored to have the best overall image quality in 100% of the cases by both observers. Furthermore, quantitative CBF and cerebral blood volume values in both the phantom and the patient data showed that the TIPS bilateral filter resulted in realistic mean values with a smaller standard deviation than the other evaluated filters and higher contrast-to-noise ratios. Therefore, applying the proposed TIPS bilateral filtering method to 4D CTP data produces higher quality CBF maps than applying the standard Gaussian, 3D bilateral or 4D bilateral filter. Furthermore, the TIPS bilateral filter is computationally faster than both the 3D and 4D bilateral filters.
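    Conceptually, the TIPS bilateral filter weights each neighbouring voxel by its spatial closeness and by the similarity of its entire time-intensity profile, rather than by a single intensity difference. The brute-force sketch below illustrates that idea; the profile distance (mean squared difference over time), the Gaussian weighting, and all parameter names are assumptions for illustration, and a naive loop like this is far slower than the implementation evaluated in the paper.

```python
import numpy as np

def tips_bilateral_filter(ctp, spatial_sigma=1.0, profile_sigma=100.0, radius=2):
    """Illustrative sketch of a time-intensity profile similarity (TIPS)
    bilateral filter for a 4D CTP volume shaped (t, z, y, x). Parameters and
    the profile distance are assumptions, not the paper's exact formulation."""
    t, zdim, ydim, xdim = ctp.shape
    out = np.zeros_like(ctp, dtype=float)
    offsets = [(dz, dy, dx)
               for dz in range(-radius, radius + 1)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)]
    for z in range(zdim):
        for y in range(ydim):
            for x in range(xdim):
                center = ctp[:, z, y, x]          # time-intensity profile
                weights, acc = 0.0, np.zeros(t)
                for dz, dy, dx in offsets:
                    zz, yy, xx = z + dz, y + dy, x + dx
                    if not (0 <= zz < zdim and 0 <= yy < ydim and 0 <= xx < xdim):
                        continue
                    neighbor = ctp[:, zz, yy, xx]
                    # Weight by spatial closeness and by similarity of the
                    # whole time-intensity profile, not a single intensity.
                    spatial = np.exp(-(dz*dz + dy*dy + dx*dx) / (2 * spatial_sigma**2))
                    profile = np.exp(-np.mean((center - neighbor)**2) / (2 * profile_sigma**2))
                    w = spatial * profile
                    weights += w
                    acc += w * neighbor
                out[:, z, y, x] = acc / weights
    return out
```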