180 research outputs found

    Protein Tracking by CNN-Based Candidate Pruning and Two-Step Linking with Bayesian Network

    Protein trafficking plays a vital role in understanding many biological processes and diseases. Automated tracking of protein vesicles is challenging due to their erratic behaviour, changing appearance, and visual clutter. In this paper we present a novel tracking approach which utilizes a two-step linking process that exploits a probabilistic graphical model to predict tracklet linkage. The vesicles are initially detected with the help of a candidate selection process, where the candidates are identified by a multi-scale spot-enhancing filter. Subsequently, these candidates are pruned and selected by a lightweight convolutional neural network. At the linking stage, the tracklets are formed based on the distance and the detection assignment, which is implemented via a combinatorial optimization algorithm. Each tracklet is described by a number of parameters used to evaluate the probability of tracklet connection by inference over the Bayesian network. The tracking results are presented for confocal fluorescence microscopy data of protein trafficking in epithelial cells. The proposed method achieves a root mean square error (RMSE) of 1.39 for vesicle localisation and a score of 0.7 for the degree of track matching with the ground truth. The presented method is also evaluated against the state-of-the-art "TrackMate" framework.
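    The detection-then-linking pipeline described above can be sketched as follows. The filter scales, distance gate, and Euclidean cost are illustrative assumptions, not the paper's exact parameters; the Hungarian algorithm stands in for the unspecified combinatorial optimization.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace
    from scipy.optimize import linear_sum_assignment

    def multiscale_spot_enhance(frame, sigmas=(1.0, 1.5, 2.0)):
        # Scale-normalised Laplacian-of-Gaussian responses; bright spots give
        # strongly negative LoG, so negate and take the per-pixel max over scales.
        responses = [-(s ** 2) * gaussian_laplace(frame.astype(float), s) for s in sigmas]
        return np.max(responses, axis=0)

    def assign_detections(prev_pts, curr_pts, max_dist=5.0):
        # Frame-to-frame detection assignment as a combinatorial optimisation:
        # Hungarian algorithm on the pairwise Euclidean cost matrix,
        # with a distance gate rejecting implausible links.
        cost = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    ```

    In the full method, the links produced this way would form tracklets whose connection probabilities are then scored by the Bayesian network.
    
    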

    Single-Molecule Localization Microscopy Reconstruction Using Noise2Noise for Super-Resolution Imaging of Actin Filaments

    Single-molecule localization microscopy (SMLM) is a super-resolution imaging technique developed to image structures smaller than the diffraction limit. This modality results in sparse and non-uniform sets of localized blinks that need to be reconstructed to obtain a super-resolution representation of the tissue. In this paper, we explore the use of the Noise2Noise (N2N) paradigm to reconstruct SMLM images. Noise2Noise is an image denoising technique in which a neural network is trained with only pairs of noisy realizations of the data instead of pairs of noisy/clean images, as in Noise2Clean (N2C). Here we have adapted Noise2Noise to the 2D SMLM reconstruction problem, exploring different pair creation strategies (fixed and dynamic). The approach was applied to synthetic data and to real 2D SMLM data of actin filaments. This revealed that N2N can achieve reconstruction performance close to the Noise2Clean training strategy, without having access to the super-resolution images. This could open the way to further improvements in SMLM acquisition speed and reconstruction performance.
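    The dynamic pair-creation strategy mentioned above can be sketched as splitting the blink list into two random disjoint halves and rendering each half as an independent noisy realization of the same underlying structure. The Gaussian rendering model and the 50/50 split are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def render(locs, shape, sigma=1.0):
        # Render a list of (y, x) localisations into an image by Gaussian splatting.
        img = np.zeros(shape)
        yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
        for y, x in locs:
            img += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        return img

    def make_n2n_pair(locs, shape, rng):
        # Dynamic pair creation: randomly split the blink list into two disjoint
        # halves; each half is an independent noisy realisation of the target,
        # so the pair can serve as (input, target) for Noise2Noise training.
        idx = rng.permutation(len(locs))
        half = len(locs) // 2
        a = render(locs[idx[:half]], shape)
        b = render(locs[idx[half:]], shape)
        return a, b
    ```

    A "fixed" strategy would compute the split once per image; the dynamic variant resamples it every epoch, giving the network fresh pairs.
    
    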

    Extracting Axial Depth and Trajectory Trend Using Astigmatism, Gaussian Fitting, and CNNs for Protein Tracking

    Accurate analysis of vesicle trafficking in live cells is challenging for a number of reasons: varying appearance, complex protein movement patterns, and imaging conditions. To allow fast image acquisition, we study how an astigmatism can be exploited to obtain additional information that could make tracking more robust. We present two approaches for measuring the z position of individual vesicles. Firstly, Gaussian curve fitting with CNN-based denoising is applied to infer the absolute depth around the focal plane of each localized protein. We demonstrate that adding denoising yields more accurate depth estimation while preserving the overall structure of the localized proteins. Secondly, we investigate whether the axial trajectory trend can be predicted using a custom CNN architecture. We demonstrate that this method performs well on calibration bead data without the need for denoising. By incorporating the obtained depth information into a trajectory analysis, we demonstrate the potential improvement in vesicle tracking.
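    In astigmatism-based localization, a cylindrical lens makes the point-spread function elongate along one axis above the focal plane and along the other below it, so the fitted x/y widths encode depth. A minimal sketch of the Gaussian-fitting route, assuming an elliptical Gaussian spot model and a toy linear calibration (real calibrations are measured on beads):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def elliptical_gaussian(coords, a, x0, y0, sx, sy, b):
        # Elliptical 2-D Gaussian with independent x/y widths, flattened for curve_fit.
        x, y = coords
        return (a * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                             + (y - y0) ** 2 / (2 * sy ** 2))) + b).ravel()

    def fit_widths(patch):
        # Fit the model to a cropped spot and return the (sx, sy) widths.
        y, x = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        p0 = (patch.max(), patch.shape[1] / 2, patch.shape[0] / 2, 1.5, 1.5, 0.0)
        popt, _ = curve_fit(elliptical_gaussian, (x, y), patch.ravel(), p0=p0)
        return abs(popt[3]), abs(popt[4])

    def z_from_widths(sx, sy, slope_nm=400.0):
        # Toy linear calibration: the sign of (sx - sy) says which side of the
        # focal plane the emitter is on; the magnitude maps to depth.
        return slope_nm * (sx - sy) / (sx + sy)
    ```

    The paper's CNN-based denoising would act on the patch before fitting; `slope_nm` is a hypothetical calibration constant.
    
    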

    Beyond attention: deriving biologically interpretable insights from weakly-supervised multiple-instance learning models

    Recent advances in attention-based multiple instance learning (MIL) have improved our insights into the tissue regions that models rely on to make predictions in digital pathology. However, the interpretability of these approaches is still limited. In particular, they do not report whether high-attention regions are positively or negatively associated with the class labels, or how well these regions correspond to previously established clinical and biological knowledge. We address this by introducing a post-training methodology to analyse MIL models. Firstly, we introduce prediction-attention-weighted (PAW) maps by combining tile-level attention and prediction scores produced by a refined encoder, allowing us to quantify the predictive contribution of high-attention regions. Secondly, we introduce a biological feature instantiation technique by integrating PAW maps with nuclei segmentation masks. This further improves interpretability by providing biologically meaningful features related to the cellular organisation of the tissue and facilitates comparisons with known clinical features. We illustrate the utility of our approach by comparing PAW maps obtained for prostate cancer diagnosis (i.e. samples containing malignant tissue, 381/516 tissue samples) and prognosis (i.e. samples from patients with biochemical recurrence following surgery, 98/663 tissue samples) in a cohort of patients from the International Cancer Genome Consortium (ICGC UK Prostate Group). Our approach reveals that regions that are predictive of adverse prognosis do not tend to co-locate with the tumour regions, indicating that non-cancer cells should also be studied when evaluating prognosis.
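    The abstract does not spell out the exact PAW weighting, but a score of the kind described can plausibly be formed by multiplying each tile's normalized attention by a signed prediction term, so that high-attention tiles contribute positively or negatively depending on what the refined encoder predicts. This sketch is an assumption about one reasonable form, not the authors' formula.

    ```python
    import numpy as np

    def paw_map(attention, tile_probs, target_class=1):
        # Prediction-attention-weighted (PAW) score per tile: attention weight
        # times a signed prediction term, positive when the tile's prediction
        # supports the target class and negative when it opposes it.
        a = attention / attention.sum()                   # normalise attention
        signed = 2.0 * tile_probs[:, target_class] - 1.0  # map [0, 1] -> [-1, 1]
        return a * signed
    ```

    Mapping these per-tile scores back to slide coordinates would yield the heatmaps that are then intersected with nuclei segmentation masks.
    
    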

    A Pilot Study on Automatic Three-Dimensional Quantification of Barrett’s Esophagus for Risk Stratification and Therapy Monitoring

    Background & Aims Barrett’s epithelium measurement using the widely accepted Prague C&M classification is highly operator dependent. We propose a novel methodology for measuring this risk score automatically. The method also enables quantification of the area of Barrett’s epithelium (BEA) and islands, which was not possible before. Furthermore, it allows 3-dimensional (3D) reconstruction of the esophageal surface, enabling interactive 3D visualization. We aimed to assess the accuracy of the proposed artificial intelligence system on both phantom and endoscopic patient data. Methods Using advanced deep learning, a depth estimator network is used to predict endoscope camera distance from the gastric folds. By segmenting BEA and the gastroesophageal junction and projecting them to the estimated mm distances, we measure C&M scores including the BEA. The derived endoscopy artificial intelligence system was tested on a purpose-built 3D printed esophagus phantom with varying BEAs and on 194 high-definition videos from 131 patients with C&M values scored by expert endoscopists. Results Endoscopic phantom video data demonstrated a 97.2% accuracy with a marginal ±0.9 mm average deviation for C&M and island measurements, while for BEA we achieved 98.4% accuracy with only ±0.4 cm2 average deviation compared with ground truth. On patient data, the C&M measurements provided by our system concurred with expert scores with a marginal overall relative error (mean difference) of 8% (3.6 mm) and 7% (2.8 mm) for C and M scores, respectively. Conclusions The proposed methodology automatically extracts Prague C&M scores with high accuracy. Quantification and 3D reconstruction of the entire Barrett’s area provides new opportunities for risk stratification and assessment of therapy response.
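    Once the camera-to-tissue distance is estimated, projecting pixel measurements to millimetres reduces to pinhole-camera geometry. A minimal sketch under simplifying assumptions: `focal_px` is a hypothetical focal length in pixels, and the Prague C/M rule is reduced to the minimum (circumferential) and maximum extent over sampled directions from the gastroesophageal junction.

    ```python
    def pixel_to_mm(pixel_len, depth_mm, focal_px):
        # Pinhole-camera back-projection: a segment spanning `pixel_len` pixels
        # viewed at `depth_mm` from the camera covers about pixel_len * depth / f mm.
        return pixel_len * depth_mm / focal_px

    def prague_cm(extents_px, depths_mm, focal_px):
        # C = circumferential extent (the length involved all the way around,
        # i.e. the minimum over directions); M = maximal extent; both in mm.
        mm = [pixel_to_mm(p, d, focal_px) for p, d in zip(extents_px, depths_mm)]
        return min(mm), max(mm)
    ```

    The real system derives per-pixel depth from the learned estimator rather than a single distance per direction; this sketch only illustrates the projection step.
    
    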

    A comparison of passive and active dust sampling methods for measuring airborne methicillin-resistant Staphylococcus aureus in pig farms

    Methicillin-resistant strains of Staphylococcus aureus (MRSA) are resistant to most β-lactam antibiotics. Pigs are an important reservoir of livestock-associated MRSA (LA-MRSA), which is genetically distinct from both hospital- and community-acquired MRSA. Occupational exposure to pigs on farms can lead to LA-MRSA carriage by workers. There is a growing body of research on MRSA found in the farm environment, the airborne route of transmission, and its implications for human health. This study aims to directly compare two sampling methods used to measure airborne MRSA in the farm environment: passive dust sampling with electrostatic dust fall collectors (EDCs), and active inhalable dust sampling using stationary air pumps with Gesamtstaubprobenahme (GSP) sampling heads containing Teflon filters. Paired dust samples using EDCs and GSP samplers, totaling 87 samples, were taken from 7 Dutch pig farms, in multiple compartments housing pigs of varying ages. Total nucleic acids of both types of dust samples were extracted, and targets indicating MRSA (femA, nuc, mecA) and total bacterial count (16S rRNA) were quantified using quantitative real-time PCR. MRSA could be measured in all GSP samples and in 94% of the EDCs; additionally, MRSA was present on every farm sampled. There was a strong positive relationship between the paired MRSA levels found in EDCs and those measured on filters (normalized by 16S rRNA: Pearson's correlation coefficient r = 0.94; not normalized: r = 0.84). This study suggests that EDCs can be used as an affordable and easily standardized method for quantifying airborne MRSA levels in the pig farm setting.
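    The normalization and correlation steps above can be sketched in a few lines. The ΔCq model in `relative_quantity` assumes roughly 100% amplification efficiency (a factor of 2 per cycle), which is a common simplification rather than the study's exact quantification protocol.

    ```python
    import math

    def relative_quantity(cq_target, cq_16s):
        # Delta-Cq normalisation: MRSA marker quantity relative to total
        # bacterial load (16S rRNA), assuming a doubling per PCR cycle.
        return 2.0 ** -(cq_target - cq_16s)

    def pearson_r(xs, ys):
        # Pearson's correlation coefficient between paired EDC / filter levels.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)
    ```

    Applied to the paired EDC and GSP measurements, this is the computation behind the reported r = 0.94 (normalized) and r = 0.84 (raw) values.
    
    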

    Image-based consensus molecular subtype (imCMS) classification of colorectal cancer using deep learning

    OBJECTIVE Complex phenotypes captured on histological slides represent the biological processes at play in individual cancers, but the link to the underlying molecular classification has not been clarified or systematised. In colorectal cancer (CRC), histological grading is a poor predictor of disease progression, and consensus molecular subtypes (CMSs) cannot be distinguished without gene expression profiling. We hypothesise that image analysis is a cost-effective tool to associate complex features of tissue organisation with molecular and outcome data and to resolve unclassifiable or heterogeneous cases. In this study, we present an image-based approach to predict CRC CMS from standard H&E sections using deep learning. DESIGN Training and evaluation of a neural network were performed using a total of n=1206 tissue sections with comprehensive multi-omic data from three independent datasets (training on FOCUS trial, n=278 patients; test on rectal cancer biopsies, GRAMPIAN cohort, n=144 patients; and The Cancer Genome Atlas (TCGA), n=430 patients). Ground truth CMS calls were ascertained by matching random forest and single-sample predictions from the CMS classifier. RESULTS Image-based CMS (imCMS) accurately classified slides in unseen datasets from TCGA (n=431 slides, AUC=0.84) and rectal cancer biopsies (n=265 slides, AUC=0.85). imCMS spatially resolved intratumoural heterogeneity and provided secondary calls correlating with bioinformatic prediction from molecular data. imCMS classified samples previously unclassifiable by RNA expression profiling, reproduced the expected correlations with genomic and epigenetic alterations, and showed similar prognostic associations as transcriptomic CMS. CONCLUSION This study shows that a prediction of RNA expression classifiers can be made from H&E images, opening the door to simple, cheap and reliable biological stratification within routine workflows.
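    The slide-level aggregation rule is not specified in the abstract; one common choice in tile-based pipelines, shown here purely as an assumption, is to average tile-level softmax outputs and take the top two classes as the primary and secondary CMS calls. The secondary call is what allows spatially heterogeneous slides to be flagged.

    ```python
    import numpy as np

    def slide_call(tile_probs):
        # Aggregate tile-level CMS softmax outputs into a slide-level call by
        # averaging probabilities; the runner-up class serves as a secondary
        # call for spatially heterogeneous slides.
        mean = np.asarray(tile_probs).mean(axis=0)
        order = np.argsort(mean)[::-1]
        return order[0], order[1], mean
    ```

    Mapping each tile's own argmax back onto the slide would give the spatial heterogeneity maps described in the results.
    
    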


    Modeling the Development of Goal-Specificity in Mirror Neurons

    Neurophysiological studies have shown that parietal mirror neurons encode not only actions but also the goals of these actions. Although some mirror neurons will fire whenever a certain action is perceived (goal-independently), most will only fire if the motion is perceived as part of an action with a specific goal. This result is important for the action-understanding hypothesis as it provides a potential neurological basis for such a cognitive ability. It is also relevant for the design of artificial cognitive systems, in particular robotic systems that rely on computational models of the mirror system in their interaction with other agents. Yet, to date, no computational model has explicitly addressed the mechanisms that give rise to both goal-specific and goal-independent parietal mirror neurons. In the present paper, we present a computational model based on a self-organizing map, which receives artificial inputs representing information about both the observed or executed actions and the context in which they were executed. We show that the map develops a biologically plausible organization in which goal-specific mirror neurons emerge. We further show that the fundamental cause for both the appearance and the number of goal-specific neurons can be found in the geometric relationships between the different inputs to the map. The results are important to the action-understanding hypothesis as they provide a mechanism for the emergence of goal-specific parietal mirror neurons and lead to a number of predictions: (1) learning of new goals may mostly reassign existing goal-specific neurons rather than recruit new ones; (2) input differences between executed and observed actions can explain observed corresponding differences in the number of goal-specific neurons; and (3) the percentage of goal-specific neurons may differ between motion primitives.
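    The core mechanism, a self-organizing map over concatenated action-and-context inputs, can be sketched in a few lines. Grid size, learning-rate and neighbourhood schedules below are illustrative defaults, not the paper's settings; the point is that units come to specialise on recurring action+context combinations, the analogue of goal-specific neurons.

    ```python
    import numpy as np

    def train_som(inputs, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        # Minimal self-organizing map: each unit holds a weight vector; the
        # best-matching unit (BMU) and its grid neighbours move toward each
        # input, so nearby units specialise on similar action+context inputs.
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, inputs.shape[1]))
        gy, gx = np.mgrid[0:h, 0:w]
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)                 # decaying learning rate
            sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
            for x in inputs[rng.permutation(len(inputs))]:
                d = np.linalg.norm(weights - x, axis=2)
                by, bx = np.unravel_index(np.argmin(d), d.shape)
                nb = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
                weights += lr * nb[:, :, None] * (x - weights)
        return weights
    ```

    In the paper's terms, a unit whose BMU status depends on the context part of the input behaves goal-specifically, while one driven by the action part alone is goal-independent.
    
    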

    Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy

    The Endoscopy Computer Vision Challenge (EndoCV) is a crowd-sourcing initiative to address prominent problems in developing reliable computer-aided detection and diagnosis endoscopy systems and to suggest a pathway for clinical translation of technologies. Whilst endoscopy is a widely used diagnostic and treatment tool for hollow organs, there are several core challenges often faced by endoscopists, mainly: 1) the presence of multi-class artefacts that hinder visual interpretation, and 2) difficulty in identifying subtle precancerous precursors and cancer abnormalities. Artefacts often affect the robustness of deep learning methods applied to the gastrointestinal tract organs as they can be confused with tissue of interest. The EndoCV2020 challenges are designed to address research questions in these remits. In this paper, we present a summary of the methods developed by the top 17 teams and provide an objective comparison of state-of-the-art methods and methods designed by the participants for two sub-challenges: i) artefact detection and segmentation (EAD2020), and ii) disease detection and segmentation (EDD2020). Multi-center, multi-organ, multi-class, and multi-modal clinical endoscopy datasets were compiled for both the EAD2020 and EDD2020 sub-challenges. The out-of-sample generalization ability of detection algorithms was also evaluated. Whilst most teams focused on accuracy improvements, only a few methods hold credibility for clinical usability. The best performing teams provided solutions to tackle class imbalance, and variabilities in size, origin, modality and occurrences, by exploring data augmentation, data fusion, and optimal class thresholding techniques.
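    One of the imbalance countermeasures mentioned above, optimal class thresholding, can be sketched as a per-class sweep over detection thresholds on a validation split. The grid step and the F1 criterion are illustrative assumptions; participants may have optimised other metrics.

    ```python
    import numpy as np

    def optimal_thresholds(probs, labels, n_classes, step=0.05):
        # Per-class threshold search: for each class, sweep a detection
        # threshold and keep the value maximising F1 on validation data,
        # a simple counter to class imbalance (rare classes get lower
        # thresholds than a uniform 0.5 would impose).
        best = []
        for c in range(n_classes):
            y = (labels == c).astype(int)
            scores = []
            for t in np.arange(step, 1.0, step):
                pred = (probs[:, c] >= t).astype(int)
                tp = int(((pred == 1) & (y == 1)).sum())
                fp = int(((pred == 1) & (y == 0)).sum())
                fn = int(((pred == 0) & (y == 1)).sum())
                f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
                scores.append((f1, t))
            best.append(max(scores)[1])
        return best
    ```

    At inference time, each class is then reported only where its score clears its own tuned threshold, rather than a single global cut-off.
    
    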