
    Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification

    Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up. Probe placement and ultrasound image interpretation are manual tasks contingent upon operator skill, leading to interoperator uncertainties that degrade radiotherapy precision. We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data. Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe are combined with an image classifier using a recurrent neural network to generate two sets of predictions in real time. The first set identifies relevant prostate anatomy visible in the field of view using the classes: outside prostate, prostate periphery, prostate centre. The second set recommends a probe angular adjustment to achieve alignment between the probe and prostate centre with the classes: move left, move right, stop. The algorithm was trained and tested on 9,743 clinical images from 61 treatment sessions across 32 patients. We evaluated classification accuracy against class labels derived from three experienced observers at 2/3 and 3/3 agreement thresholds. For images with unanimous consensus between observers, anatomical classification accuracy was 97.2% and probe adjustment accuracy was 94.9%. The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7° (1.2°) from angle labels with full observer consensus, comparable to the 2.8° (2.6°) mean interobserver range. We propose that such an algorithm could assist radiotherapy practitioners with limited experience of ultrasound image interpretation by providing effective real-time feedback during patient set-up.
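
    As a rough illustration of the kind of multi-input multi-task model described above, the following PyTorch sketch fuses image features with tracked probe coordinates in a recurrent layer and outputs the two sets of class predictions. The architecture, layer sizes and the 6-DoF pose input are assumptions for illustration, not the authors' implementation.

        # Minimal sketch (PyTorch, assumed) of a multi-input multi-task model:
        # image features and probe-position features are fused and passed through
        # a recurrent layer, with two classification heads. Sizes are illustrative.
        import torch
        import torch.nn as nn

        class ProbeGuidanceNet(nn.Module):
            def __init__(self, n_anatomy=3, n_adjust=3, hidden=128):
                super().__init__()
                # Lightweight image encoder (stand-in for whatever backbone was used).
                self.img_enc = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                # Encoder for the tracked probe's spatial coordinates (assumed 6-DoF pose).
                self.pos_enc = nn.Sequential(nn.Linear(6, 32), nn.ReLU())
                # Recurrent layer over the image/position sequence.
                self.gru = nn.GRU(32 + 32, hidden, batch_first=True)
                # Two task heads: anatomy class and recommended probe adjustment.
                self.head_anatomy = nn.Linear(hidden, n_anatomy)
                self.head_adjust = nn.Linear(hidden, n_adjust)

            def forward(self, images, positions):
                # images: (B, T, 1, H, W); positions: (B, T, 6)
                B, T = images.shape[:2]
                f_img = self.img_enc(images.flatten(0, 1)).view(B, T, -1)
                f_pos = self.pos_enc(positions)
                h, _ = self.gru(torch.cat([f_img, f_pos], dim=-1))
                last = h[:, -1]  # most recent time step, for real-time prediction
                return self.head_anatomy(last), self.head_adjust(last)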

    Brain parcellation based on information theory

    BACKGROUND AND OBJECTIVE: In computational neuroimaging, brain parcellation methods subdivide the brain into individual regions that can be used to build a network to study its structure and function. Using anatomical or functional connectivity, hierarchical clustering methods aim to offer a meaningful parcellation of the brain at each level of granularity. However, some of these methods have only been applied to small regions and strongly depend on the similarity measure used to merge regions. The aim of this work is to present a robust whole-brain hierarchical parcellation that preserves the global structure of the network. METHODS: Brain regions are modeled as a random walk on the connectome. From this model, a Markov process is derived, where the different nodes represent brain regions and in which the structure can be quantified. Functional or anatomical brain regions are clustered by using an agglomerative information bottleneck method that minimizes the overall loss of information about the structure, using mutual information as a similarity measure. RESULTS: The method is tested with synthetic models and with structural and functional human connectomes, and is compared with classic k-means. Results show that the parcellated networks preserve the main properties and are consistent across subjects. CONCLUSION: This work provides a new framework to study the human connectome using functional or anatomical connectivity at different levels of granularity.
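
    The clustering idea can be illustrated with a small NumPy sketch: the connectome is treated as a random walk, the joint distribution of consecutive steps is formed, and regions are merged greedily so that the least mutual information is lost at each merge. This toy agglomerative information-bottleneck-style routine illustrates the principle only; it is not the authors' code, and the function names are hypothetical.

        # Toy sketch (NumPy, assumed) of information-bottleneck-style agglomeration
        # on a random walk over a connectome. Brute-force and for illustration only.
        import numpy as np

        def mutual_information(p_xy):
            """I(X;Y) of a joint distribution given as a 2D array."""
            px = p_xy.sum(axis=1, keepdims=True)
            py = p_xy.sum(axis=0, keepdims=True)
            nz = p_xy > 0
            return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])))

        def merge_pair(p_xy, a, b):
            """Merge region b into region a by summing its row and column (a < b)."""
            row = np.delete(p_xy[a, :] + p_xy[b, :], b)
            col = np.delete(p_xy[:, a] + p_xy[:, b], b)
            p = np.delete(np.delete(p_xy, b, axis=0), b, axis=1)
            p[a, :] = row
            p[:, a] = col
            p[a, a] = p_xy[a, a] + p_xy[a, b] + p_xy[b, a] + p_xy[b, b]
            return p

        def agglomerative_ib(W, n_clusters):
            """W: symmetric weighted connectivity matrix. Returns lists of merged node indices."""
            p_xy = W / W.sum()        # joint p(current, next) of the stationary random walk
            clusters = [[i] for i in range(len(W))]
            while len(clusters) > n_clusters:
                base = mutual_information(p_xy)
                # Greedily merge the pair whose merge loses the least mutual information.
                losses = {(a, b): base - mutual_information(merge_pair(p_xy, a, b))
                          for a in range(len(clusters)) for b in range(a + 1, len(clusters))}
                a, b = min(losses, key=losses.get)
                p_xy = merge_pair(p_xy, a, b)
                clusters[a] += clusters.pop(b)
            return clusters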

    Novel Brain Complexity Measures Based on Information Theory

    Brain networks are widely used models to understand the topology and organization of the brain. These networks can be represented by a graph, where nodes correspond to brain regions and edges to structural or functional connections. Several measures have been proposed to describe the topological features of these networks, but unfortunately, it is still unclear which measures give the best representation of the brain. In this paper, we propose a new set of measures based on information theory. Our approach interprets the brain network as a stochastic process where impulses are modeled as a random walk on the graph nodes. This new interpretation provides a solid theoretical framework from which several global and local measures are derived. Global measures provide quantitative values for whole-brain network characterization and include entropy, mutual information, and erasure mutual information. The latter is a new measure based on mutual information and erasure entropy. Local measures, on the other hand, are based on different decompositions of the global measures and capture different properties of the nodes; they include entropic surprise, mutual surprise, mutual predictability, and erasure surprise. The proposed approach is evaluated using synthetic model networks and structural and functional human networks at different scales. Results demonstrate that the global measures can characterize new properties of the topology of a brain network and, in addition, that for a given number of nodes an optimal number of edges is found for small-world networks. Local measures reveal different properties of the nodes, such as the uncertainty associated with a node or the uniqueness of the path to which the node belongs. Finally, the consistency of the results across healthy subjects demonstrates the robustness of the proposed measures.
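
    A minimal NumPy sketch of the random-walk interpretation is given below: from a weighted adjacency matrix it derives the stationary distribution, the walk's entropy and entropy rate, the mutual information between consecutive steps, and a simple per-node decomposition. The exact measures defined in the paper (for example erasure mutual information and the surprise measures) are not reproduced; the names and formulas here are illustrative assumptions.

        # Sketch (NumPy, assumed) of information-theoretic measures of a random walk
        # on a connected, undirected weighted brain network.
        import numpy as np

        def _plogq(p, q):
            """Sum of p * log2(q) over entries where p > 0 (0 log 0 treated as 0)."""
            m = p > 0
            return float(np.sum(p[m] * np.log2(q[m])))

        def random_walk_measures(W):
            """W: weighted adjacency matrix of a connected, undirected network."""
            p_xy = W / W.sum()                 # joint p(current, next) of the stationary walk
            px = p_xy.sum(axis=1)              # stationary distribution (degree-proportional)
            p_y_given_x = p_xy / px[:, None]   # Markov transition matrix

            H_X = -_plogq(px, px)                        # entropy of the stationary walk
            H_Y_given_X = -_plogq(p_xy, p_y_given_x)     # entropy rate (uncertainty of next step)
            I_XY = H_X - H_Y_given_X                     # mutual information between steps
            # Per-node decomposition of I(X;Y): how predictable each region's next step is.
            per_node = [_plogq(p_xy[i], p_y_given_x[i] / px) for i in range(len(W))]
            return {"H": H_X, "entropy_rate": H_Y_given_X, "I": I_XY, "per_node": per_node}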

    Endoscopic Ultrasound Image Synthesis Using a Cycle-Consistent Adversarial Network

    Endoscopic ultrasound (EUS) is a challenging procedure that requires skill, both in endoscopy and in ultrasound image interpretation. Classification of key anatomical landmarks visible on EUS images can assist the gastroenterologist during navigation. Current applications of deep learning have shown the ability to automatically classify ultrasound images with high accuracy. However, these techniques require a large amount of labelled data, which is time-consuming to obtain and, in the case of EUS, is also a difficult task to perform retrospectively due to the lack of 3D context. In this paper, we propose the use of an image-to-image translation method to create synthetic EUS (sEUS) images from CT data, which can be used as a data augmentation strategy when EUS data is scarce. We train a cycle-consistent adversarial network with unpaired EUS images and CT slices extracted in a manner such that they mimic plausible EUS views, to generate sEUS images of the pancreas, aorta and liver. We quantitatively evaluate the use of sEUS images in a classification sub-task and assess the Fréchet Inception Distance. We show that synthetic data, obtained from CT data, imposes only a minor classification accuracy penalty and may help generalization to new unseen patients. The code and a dataset containing generated sEUS images are available at: https://ebonmati.github.io
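
    The training objective of such a cycle-consistent network can be sketched as follows (PyTorch, with least-squares adversarial terms and an L1 cycle term); the generator and discriminator names and the loss weight are placeholders, not the paper's exact configuration.

        # Generator-side losses for CycleGAN-style unpaired CT <-> EUS translation
        # (a sketch under assumed network handles; weights are illustrative).
        import torch
        import torch.nn.functional as nnf

        def cyclegan_generator_loss(G_ct2eus, G_eus2ct, D_eus, D_ct,
                                    real_ct, real_eus, lambda_cyc=10.0):
            fake_eus = G_ct2eus(real_ct)    # synthetic EUS (sEUS) from a CT slice
            fake_ct = G_eus2ct(real_eus)
            # Least-squares adversarial terms: generators try to fool the discriminators.
            d_fake_eus, d_fake_ct = D_eus(fake_eus), D_ct(fake_ct)
            adv = nnf.mse_loss(d_fake_eus, torch.ones_like(d_fake_eus)) + \
                  nnf.mse_loss(d_fake_ct, torch.ones_like(d_fake_ct))
            # Cycle consistency: CT -> sEUS -> CT and EUS -> sCT -> EUS should reconstruct.
            cyc = nnf.l1_loss(G_eus2ct(fake_eus), real_ct) + \
                  nnf.l1_loss(G_ct2eus(fake_ct), real_eus)
            return adv + lambda_cyc * cyc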

    Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks

    Ultrasound imaging is a commonly used technology for visualising patient anatomy in real time during diagnostic and therapeutic procedures. High operator dependency and low reproducibility make ultrasound imaging and interpretation challenging, with a steep learning curve. Automatic image classification using deep learning has the potential to overcome some of these challenges by supporting ultrasound training in novices, as well as aiding ultrasound image interpretation in patients with complex pathology for more experienced practitioners. However, the use of deep learning methods requires a large amount of data in order to provide accurate results. Labelling large ultrasound datasets is a challenging task because labels are retrospectively assigned to 2D images without the 3D spatial context that is available in vivo or that would be inferred by visually tracking structures between frames during the procedure. In this work, we propose a multi-modal convolutional neural network (CNN) architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure. We use a CNN composed of two branches, one for voice data and another for image data, which are joined to predict image labels from the spoken names of anatomical landmarks. The network was trained using recorded verbal comments from expert operators. Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels. We conclude that the addition of spoken commentaries can increase the performance of ultrasound image classification and eliminate the burden of manually labelling the large EUS datasets necessary for deep learning applications.
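
    A hedged PyTorch sketch of such a two-branch design is shown below: one branch encodes the ultrasound frame, the other a spectrogram of the spoken comment, and the fused features predict one of the anatomical labels. The branch architectures and dimensions are illustrative assumptions rather than the paper's network.

        # Two-branch voice + image classifier (illustrative sketch, assumed PyTorch).
        import torch
        import torch.nn as nn

        class VoiceImageClassifier(nn.Module):
            def __init__(self, n_labels=5):
                super().__init__()
                def branch(in_ch):
                    # Small convolutional encoder shared in shape by both modalities.
                    return nn.Sequential(
                        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    )
                self.image_branch = branch(1)   # B-mode EUS frame
                self.voice_branch = branch(1)   # log-mel spectrogram of the verbal comment
                self.classifier = nn.Sequential(
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_labels))

            def forward(self, image, spectrogram):
                # Concatenate the two 32-dimensional embeddings and classify.
                fused = torch.cat(
                    [self.image_branch(image), self.voice_branch(spectrogram)], dim=1)
                return self.classifier(fused)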

    Evidence of paleoecological changes and Mousterian occupations at the Galeria de las Estatuas site, Sierra de Atapuerca, northern Iberian plateau, Spain

    Here we present a new site in the Sierra de Atapuerca (Burgos, Spain): Galeria de las Estatuas (GE), which provides new information about Mousterian occupations on the Iberian Plateau. The GE was an ancient entrance to the cave system, which is currently closed and sealed by a stalagmitic crust, below which a detrital sedimentary sequence of more than 2 m is found. This has been divided into five lithostratigraphic units with a rich assemblage of faunal and lithic remains of clear Mousterian affinity. Radiocarbon dates provide minimum ages and suggest occupations older than 45 ¹⁴C ka BP. The palynological analysis detected a landscape change towards increased tree coverage, which suggests that the sequence recorded a warming episode. The macromammal assemblage is composed of both ungulates (mainly red deer and equids) and carnivores. Taphonomic analysis reveals both anthropic and, to a lesser extent, carnivore activities. The GE was occupied by Neanderthals and also sporadically by carnivores. This new site broadens the information available regarding different human occupations at the Sierra de Atapuerca, which emphasizes the importance of this site-complex for understanding human evolution in Western Europe.