    Novel Brain Complexity Measures Based on Information Theory

    Brain networks are widely used models to understand the topology and organization of the brain. These networks can be represented by a graph, where nodes correspond to brain regions and edges to structural or functional connections. Several measures have been proposed to describe the topological features of these networks, but it is still unclear which measures give the best representation of the brain. In this paper, we propose a new set of measures based on information theory. Our approach interprets the brain network as a stochastic process in which impulses are modeled as a random walk on the graph nodes. This interpretation provides a solid theoretical framework from which several global and local measures are derived. Global measures provide quantitative values characterizing the whole brain network and include entropy, mutual information, and erasure mutual information; the latter is a new measure based on mutual information and erasure entropy. Local measures, in turn, are based on different decompositions of the global measures and describe different properties of the nodes; they include entropic surprise, mutual surprise, mutual predictability, and erasure surprise. The proposed approach is evaluated using synthetic model networks and structural and functional human networks at different scales. Results demonstrate that the global measures can characterize new properties of the topology of a brain network and that, for a given number of nodes, an optimal number of edges exists for small-world networks. Local measures reveal different properties of the nodes, such as the uncertainty associated with a node or the uniqueness of the path to which a node belongs. Finally, the consistency of the results across healthy subjects demonstrates the robustness of the proposed measures.
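
    The random-walk interpretation above can be made concrete with a short sketch. The following Python snippet (a minimal illustration, not the authors' code) computes the entropy, entropy rate, and mutual information of a stationary random walk defined by an undirected, weighted adjacency matrix; the function name and the toy ring network are assumptions made purely for this example.

    import numpy as np

    def random_walk_measures(adjacency):
        """Entropy H(X), entropy rate H(X'|X) and mutual information I(X;X') of the walk."""
        strength = adjacency.sum(axis=1)              # node strengths (weighted degrees)
        pi = strength / strength.sum()                # stationary distribution of the walk
        P = adjacency / strength[:, None]             # row-stochastic transition matrix
        H_state = -np.sum(pi * np.log2(pi))           # entropy of the stationary distribution
        logP = np.zeros_like(P)
        mask = P > 0
        logP[mask] = np.log2(P[mask])
        H_rate = -np.sum(pi[:, None] * P * logP)      # conditional entropy H(X_{t+1} | X_t)
        return H_state, H_rate, H_state - H_rate      # I(X_t; X_{t+1}) = H(X) - H(X'|X)

    # Toy example: a 6-node ring network (each node connected to its two neighbours)
    A = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
    print(random_walk_measures(A))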

    3D CATBraTS: Channel Attention Transformer for Brain Tumour Semantic Segmentation

    Brain tumour diagnosis is a challenging task yet crucial for planning treatments to stop or slow the growth of a tumour. In the last decade, there has been a dramatic increase in the use of convolutional neural networks (CNNs), owing to their high performance in the automatic segmentation of tumours in medical images. More recently, the Vision Transformer (ViT) has become a central focus of medical imaging for its robustness and efficiency compared to CNNs. In this paper, we propose a novel 3D transformer, named 3D CATBraTS, for brain tumour semantic segmentation on magnetic resonance images (MRIs), based on the state-of-the-art Swin transformer with a modified CNN encoder architecture using residual blocks and a channel attention module. The proposed approach is evaluated on the BraTS 2021 dataset and achieves a mean Dice similarity coefficient (DSC) that surpasses the current state-of-the-art approaches in the validation phase.
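
    As an illustration of the channel attention component mentioned above, the sketch below shows a squeeze-and-excitation-style 3D channel attention block in PyTorch. It is a generic example under assumed tensor sizes and reduction ratio, not the exact module used in 3D CATBraTS.

    import torch
    import torch.nn as nn

    class ChannelAttention3D(nn.Module):
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool3d(1)           # squeeze: global spatial average
            self.fc = nn.Sequential(                      # excitation: per-channel weights
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c = x.shape[:2]
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
            return x * w                                   # re-weight feature channels

    # Example: attention over a batch of 3D feature maps from an MRI encoder
    features = torch.randn(2, 32, 16, 16, 16)
    print(ChannelAttention3D(32)(features).shape)          # torch.Size([2, 32, 16, 16, 16])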

    Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification

    Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up. Probe placement and ultrasound image interpretation are manual tasks contingent upon operator skill, leading to interoperator uncertainties that degrade radiotherapy precision. We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data. Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe are combined with an image classifier using a recurrent neural network to generate two sets of predictions in real time. The first set identifies relevant prostate anatomy visible in the field of view using the classes: outside prostate, prostate periphery, prostate centre. The second set recommends a probe angular adjustment to achieve alignment between the probe and prostate centre with the classes: move left, move right, stop. The algorithm was trained and tested on 9,743 clinical images from 61 treatment sessions across 32 patients. We evaluated classification accuracy against class labels derived from three experienced observers at 2/3 and 3/3 agreement thresholds. For images with unanimous consensus between observers, anatomical classification accuracy was 97.2% and probe adjustment accuracy was 94.9%. The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7° (1.2°) from angle labels with full observer consensus, comparable to the 2.8° (2.6°) mean interobserver range. We propose that such an algorithm could assist radiotherapy practitioners with limited experience of ultrasound image interpretation by providing effective real-time feedback during patient set-up.
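
    A minimal sketch of the multi-input, multi-task idea described above is given below: per-frame image features are concatenated with tracked probe coordinates, passed through a recurrent layer, and decoded by two classification heads. The layer sizes, the GRU choice, and the toy encoder are assumptions for illustration and do not reproduce the published architecture.

    import torch
    import torch.nn as nn

    class ProbeGuidanceNet(nn.Module):
        def __init__(self, img_feat_dim=64, coord_dim=6, hidden=128):
            super().__init__()
            self.cnn = nn.Sequential(                     # tiny image encoder (placeholder)
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, img_feat_dim),
            )
            self.rnn = nn.GRU(img_feat_dim + coord_dim, hidden, batch_first=True)
            self.anatomy_head = nn.Linear(hidden, 3)      # outside / periphery / centre
            self.adjust_head = nn.Linear(hidden, 3)       # move left / move right / stop

        def forward(self, frames, coords):
            # frames: (B, T, 1, H, W); coords: (B, T, coord_dim) from the optical tracker
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.rnn(torch.cat([feats, coords], dim=-1))
            last = out[:, -1]                             # prediction at the latest frame
            return self.anatomy_head(last), self.adjust_head(last)

    model = ProbeGuidanceNet()
    anat, adj = model(torch.randn(2, 5, 1, 64, 64), torch.randn(2, 5, 6))
    print(anat.shape, adj.shape)                          # (2, 3) (2, 3)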

    Brain parcellation based on information theory

    BACKGROUND AND OBJECTIVE: In computational neuroimaging, brain parcellation methods subdivide the brain into individual regions that can be used to build a network to study its structure and function. Using anatomical or functional connectivity, hierarchical clustering methods aim to offer a meaningful parcellation of the brain at each level of granularity. However, some of these methods have only been applied to small regions and depend strongly on the similarity measure used to merge regions. The aim of this work is to present a robust whole-brain hierarchical parcellation that preserves the global structure of the network. METHODS: Neural impulses are modeled as a random walk on the connectome. From this model, a Markov process is derived in which the nodes represent brain regions and the network structure can be quantified. Functional or anatomical brain regions are clustered using an agglomerative information bottleneck method that minimizes the overall loss of structural information, with mutual information as the similarity measure. RESULTS: The method is tested with synthetic models and with structural and functional human connectomes, and is compared with classic k-means clustering. Results show that the parcellated networks preserve the main properties of the original networks and are consistent across subjects. CONCLUSION: This work provides a new framework for studying the human connectome using functional or anatomical connectivity at different levels of granularity.
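
    The agglomerative, information-based clustering described above can be illustrated with a brute-force sketch: at each step, the pair of regions whose merge least reduces the mutual information of the stationary random walk on the connectome is merged. This is an illustrative toy implementation under assumed function names and a random synthetic connectome, not the authors' information bottleneck code.

    import itertools
    import numpy as np

    def walk_mutual_information(A):
        s = A.sum(axis=1)
        pi = s / s.sum()
        P = A / s[:, None]
        logP = np.zeros_like(P)
        m = P > 0
        logP[m] = np.log2(P[m])
        return -np.sum(pi * np.log2(pi)) + np.sum(pi[:, None] * P * logP)

    def merge(A, i, j):
        """Coarsen the graph by merging node j into node i (strength is preserved)."""
        keep = [k for k in range(A.shape[0]) if k != j]
        B = A.copy()
        B[i, :] += B[j, :]
        B[:, i] += B[:, j]
        return B[np.ix_(keep, keep)]

    def agglomerate(A, n_regions):
        labels = [[i] for i in range(A.shape[0])]
        while A.shape[0] > n_regions:
            # choose the merge that keeps the walk's mutual information highest
            i, j = max(itertools.combinations(range(A.shape[0]), 2),
                       key=lambda ij: walk_mutual_information(merge(A, *ij)))
            A = merge(A, i, j)
            labels[i] += labels.pop(j)
        return A, labels

    # Toy symmetric "connectome" with 8 regions, parcellated into 3 clusters
    A = np.random.default_rng(0).random((8, 8))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0)
    print(agglomerate(A, 3)[1])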

    k-Space tutorial: an MRI educational tool for a better understanding of k-space

    A key difference between magnetic resonance (MR) imaging and other medical imaging modalities is the degree of control over data acquisition and over how the acquired data are managed to produce the final reconstructed image. With some basic programming adjustments, the user can modify the spatial resolution, field of view (FOV), image contrast, acquisition speed, artifacts, and many other parameters that contribute to the final image. The central element of all this control is k-space, the matrix in which the MR data are stored prior to a Fourier transformation that yields the desired image.
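
    The k-space/image relationship described above can be demonstrated in a few lines: the image is recovered from k-space by an inverse 2D Fourier transform, and keeping only the central (low-frequency) portion of k-space yields a lower-resolution image. The synthetic phantom and crop size below are arbitrary choices for illustration.

    import numpy as np

    image = np.zeros((128, 128))
    image[40:88, 40:88] = 1.0                             # simple synthetic "phantom"

    # Forward model: k-space is the 2D Fourier transform of the object
    kspace = np.fft.fftshift(np.fft.fft2(image))

    # Full reconstruction: inverse FFT of the complete k-space matrix
    recon_full = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

    # Low-resolution reconstruction: zero out the high spatial frequencies
    mask = np.zeros_like(kspace)
    c = kspace.shape[0] // 2
    mask[c - 16:c + 16, c - 16:c + 16] = 1                # keep only the central 32x32 samples
    recon_lowres = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

    print(np.allclose(recon_full, image, atol=1e-9))      # True: full k-space recovers the image
    print(recon_lowres.shape)                             # same matrix size, blurred content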

    Assessing Chronotypes by Ambulatory Circadian Monitoring

    © The authors 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). This document is the published version of a work that appeared in final form in Frontiers in Physiology; the final edited and published version is available at https://doi.org/10.3389/fphys.2019.01396

    In order to develop objective indexes for chronotype identification by means of direct measurement of circadian rhythms, 159 undergraduate students were recruited as volunteers and instructed to wear ambulatory circadian monitoring (ACM) sensors that continuously gathered information on each individual’s environmental light and temperature exposure, wrist temperature, body position, activity, and the integrated TAP (temperature, activity, and position) variable for 7 consecutive days under regular free-living conditions. Among all the proposed indexes, the night phase marker (NPM) of the TAP variable was the best suited to discriminate among chronotypes, due to its relationship with the Munich ChronoType Questionnaire (b = 0.531; p < 0.001). The NPM of TAP allowed subjects to be classified as early- (E-type, 20%), neither- (N-type, 60%), and late-types (L-type, 20%), each of which had its own characteristics. In terms of light exposure, while all subjects had short exposure times to bright light (>100 lux), with a daily average of 93.84 ± 5.72 min, the earlier chronotypes were exposed to brighter days and darker nights compared to the later chronotypes. Furthermore, the earlier chronotypes were associated with higher stability and day–night contrast, along with an earlier phase, which could be the cause or consequence of their light exposure habits. Overall, these data support the use of ACM for chronotype identification and evaluation under free-living conditions, using objective markers.
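
    As a simple illustration of the 20/60/20 chronotype split described above, the sketch below classifies subjects from a night phase marker (NPM) using the 20th and 80th percentiles as cut-offs. The NPM values are randomly generated placeholders; this is not the study's analysis code.

    import numpy as np

    rng = np.random.default_rng(1)
    npm_hours = rng.normal(loc=4.0, scale=1.0, size=159)  # hypothetical NPM clock times (h)

    p20, p80 = np.percentile(npm_hours, [20, 80])

    def chronotype(npm: float) -> str:
        if npm <= p20:
            return "E-type"     # earlier night phase marker -> early chronotype
        if npm >= p80:
            return "L-type"     # later night phase marker -> late chronotype
        return "N-type"

    labels = [chronotype(x) for x in npm_hours]
    print({c: labels.count(c) for c in ("E-type", "N-type", "L-type")})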

    Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network

    Endoscopic ultrasound (EUS) is a challenging procedure that requires skill both in endoscopy and in ultrasound image interpretation. Classification of key anatomical landmarks visible on EUS images can assist the gastroenterologist during navigation. Current applications of deep learning have shown the ability to automatically classify ultrasound images with high accuracy. However, these techniques require a large amount of labelled data, which is time-consuming to obtain and, in the case of EUS, is also difficult to produce retrospectively due to the lack of 3D context. In this paper, we propose the use of an image-to-image translation method to create synthetic EUS (sEUS) images from CT data that can be used as a data augmentation strategy when EUS data are scarce. We train a cycle-consistent adversarial network with unpaired EUS images and CT slices, extracted such that they mimic plausible EUS views, to generate sEUS images of the pancreas, aorta and liver. We quantitatively evaluate the use of sEUS images in a classification sub-task and assess the Fréchet Inception Distance. We show that synthetic data obtained from CT data impose only a minor classification accuracy penalty and may help generalization to new, unseen patients. The code and a dataset containing generated sEUS images are available at: https://ebonmati.github.io
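
    The cycle-consistency idea behind the network described above can be sketched as follows: two generators translate between the CT and EUS domains, and an L1 cycle loss penalises images that do not map back to themselves. The tiny generators and tensor sizes are placeholders; the adversarial terms and the actual architecture are omitted.

    import torch
    import torch.nn as nn

    def tiny_generator():
        # Placeholder image-to-image generator; real models use deeper encoder-decoders.
        return nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    G_ct2eus = tiny_generator()   # CT slice  -> synthetic EUS
    G_eus2ct = tiny_generator()   # EUS image -> synthetic CT

    l1 = nn.L1Loss()
    ct = torch.rand(4, 1, 64, 64) * 2 - 1                 # unpaired CT batch in [-1, 1]
    eus = torch.rand(4, 1, 64, 64) * 2 - 1                # unpaired EUS batch in [-1, 1]

    fake_eus = G_ct2eus(ct)
    fake_ct = G_eus2ct(eus)
    cycle_loss = l1(G_eus2ct(fake_eus), ct) + l1(G_ct2eus(fake_ct), eus)
    # The full objective adds adversarial losses from two discriminators; the cycle term
    # above is what enforces consistency when the CT and EUS data are unpaired.
    print(float(cycle_loss))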

    Effect of Single and Combined Monochromatic Light on the Human Pupillary Light Response

    The pupillary light reflex (PLR) is a neurological reflex driven by rods, cones, and melanopsin-containing retinal ganglion cells. Our aim was to obtain a more precise picture of the effects of 5-min monochromatic light stimuli, alone or in combination, on the human PLR, to determine its spectral sensitivity, and to assess the importance of photon flux. Using pupillometry, the PLR was assessed in 13 participants (6 women) aged 27.2 ± 5.41 years (mean ± SD) during 5-min light stimuli of purple (437 nm), blue (479 nm), red (627 nm), and combinations of red+purple or red+blue light. In addition, nine 5-min, photon-matched light stimuli, in 10 nm increments peaking between 420 and 500 nm, were tested in 15 participants (8 women) aged 25.7 ± 8.90 years. Maximum pupil constriction, the time to achieve it, constriction velocity, the area under the curve (AUC) for short (0–60 s) and longer duration (240–300 s) light exposures, and the 6-s post-illumination pupillary response (6-s PIPR) were assessed. Photoreceptor activation was estimated by mathematical modeling. The velocity of constriction was significantly faster with blue monochromatic light than with red or purple light. Within the blue light spectrum (between 420 and 500 nm), the velocity of constriction was significantly faster with the 480 nm light stimulus, while the slowest pupil constriction was observed with 430 nm light. Maximum pupil constriction was achieved with 470 nm light, and the greatest AUC(0–60) and AUC(240–300) were observed with 490 and 460 nm light, respectively. The 6-s PIPR was maximal after the 490 nm light stimulus. Both the transient (AUC(0–60)) and sustained (AUC(240–300)) responses were significantly correlated with melanopic activation. Higher photon fluxes of both purple and blue light produced sustained pupillary constriction of greater amplitude. The findings confirm that the human PLR depends on wavelength, on monochromatic versus bichromatic light, and on photon flux under 5-min light stimuli. Since the most rapid and highest-amplitude PLR occurred within the 460–490 nm light range (alone or combined), our results suggest that color discrimination should be studied under total or partial substitution of this blue light range (460–490 nm) by shorter wavelengths (~440 nm). Thus, for nocturnal lighting, replacing blue light with purple light might be a plausible solution to preserve color discrimination while minimizing melanopic activation.
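
    As a small worked example of the AUC measures referred to above, the sketch below integrates a hypothetical pupil-constriction trace over the transient (0–60 s) and sustained (240–300 s) windows using the trapezoidal rule. The trace itself is synthetic and purely illustrative.

    import numpy as np

    t = np.arange(0.0, 300.1, 0.1)                    # 5-min stimulus sampled at 10 Hz
    baseline = 6.0                                    # hypothetical baseline pupil diameter (mm)
    diameter = baseline - 2.0 * (1 - np.exp(-t / 3)) + 0.5 * (1 - np.exp(-t / 120))
    constriction = (baseline - diameter) / baseline   # relative constriction (0 = none)

    def auc(signal, time, t0, t1):
        m = (time >= t0) & (time <= t1)
        y, x = signal[m], time[m]
        return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))   # trapezoidal rule

    print("transient AUC(0-60 s):  ", round(auc(constriction, t, 0, 60), 2))
    print("sustained AUC(240-300 s):", round(auc(constriction, t, 240, 300), 2))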

    Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks

    Ultrasound imaging is a commonly used technology for visualising patient anatomy in real time during diagnostic and therapeutic procedures. High operator dependency and low reproducibility make ultrasound imaging and interpretation challenging, with a steep learning curve. Automatic image classification using deep learning has the potential to overcome some of these challenges by supporting ultrasound training for novices, as well as aiding ultrasound image interpretation in patients with complex pathology for more experienced practitioners. However, the use of deep learning methods requires a large amount of data in order to provide accurate results. Labelling large ultrasound datasets is a challenging task because labels are retrospectively assigned to 2D images without the 3D spatial context available in vivo, or the context that would be inferred by visually tracking structures between frames during the procedure. In this work, we propose a multi-modal convolutional neural network (CNN) architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure. We use a CNN composed of two branches, one for voice data and another for image data, which are joined to predict image labels from the spoken names of anatomical landmarks. The network was trained using recorded verbal comments from expert operators. Our results show a prediction accuracy of 76% at the image level on a dataset with 5 different labels. We conclude that the addition of spoken commentaries can increase the performance of ultrasound image classification and eliminate the burden of manually labelling the large EUS datasets necessary for deep learning applications.
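
    The two-branch design described above can be sketched as follows: one branch encodes the ultrasound image, the other encodes an audio representation of the spoken comment (for example a spectrogram), and the concatenated features feed a classification layer. The layer sizes and input shapes are assumptions for illustration, not the published network.

    import torch
    import torch.nn as nn

    class VoiceImageClassifier(nn.Module):
        def __init__(self, n_classes: int = 5):
            super().__init__()
            self.image_branch = nn.Sequential(            # input: (B, 1, H, W) ultrasound frame
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.voice_branch = nn.Sequential(            # input: (B, 1, mel bins, frames)
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.classifier = nn.Linear(32 + 32, n_classes)

        def forward(self, image, spectrogram):
            joint = torch.cat([self.image_branch(image), self.voice_branch(spectrogram)], dim=1)
            return self.classifier(joint)

    model = VoiceImageClassifier()
    logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 64, 100))
    print(logits.shape)                                   # torch.Size([2, 5])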