53 research outputs found
Spectral unmixing of multiply stained fluorescence samples
The widespread use of fluorescence microscopy along with the vast library of available fluorescent stains and staining
methods has been extremely beneficial to researchers in many fields, ranging from material sciences to plant biology. In
clinical diagnostics, the ability to combine different markers in a given sample allows the simultaneous detection of the
expression of several different molecules, which in turn provides a powerful diagnostic tool for pathologists, allowing a
better classification of the sample at hand. The correct detection and separation of multiple stains in a sample is achieved
not only by the biochemical and optical properties of the markers, but also by the use of appropriate hardware and software
tools. In this chapter, we will review and compare these tools along with their advantages and limitations.
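The linear-unmixing step at the heart of most such software tools can be sketched as a non-negative least-squares problem: each pixel's measured spectrum is modeled as a weighted sum of the reference emission spectra of the individual fluorophores. The reference spectra and channel count below are invented for illustration, not taken from any particular instrument:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical reference emission spectra (endmembers) for two
# fluorophores, sampled over 5 spectral detection channels.
S = np.array([
    [0.9, 0.6, 0.2, 0.05, 0.0],   # fluorophore A emission profile
    [0.0, 0.1, 0.5, 0.8,  0.7],   # fluorophore B emission profile
]).T  # shape: (channels, fluorophores)

true_abundances = np.array([2.0, 3.0])
measured = S @ true_abundances  # one mixed pixel, noiseless for clarity

# Non-negative least squares recovers the per-fluorophore contributions.
abundances, residual = nnls(S, measured)
```

In practice the same solve is applied per pixel, and noise, spectral bleed-through, and autofluorescence make the recovery approximate rather than exact.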
Automated Quantitative Analysis of a Mouse Model of Chronic Pulmonary Inflammation using Micro X-ray Computed Tomography
Micro-CT has emerged as an excellent tool for in-vivo imaging
of the lungs of small laboratory animals. Several studies have shown
that it can be used to assess the evolution of pulmonary lung diseases in
longitudinal studies. However, most of them rely on non-automatic tools
for image analysis, or are merely qualitative. In this article, we present
a longitudinal, quantitative study of a mouse model of silica-induced
pulmonary inflammation. To automatically assess disease progression,
we have devised and validated a lung segmentation method that combines
threshold-based segmentation, atlas-based segmentation and level
sets. Our volume measurements, based on the automatic segmentations,
point at a compensation mechanism which leads to an increase of the
healthy lung volume in response to the loss of functional tissue caused
by inflammation.
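The threshold-based first stage of a combined pipeline like the one described exploits the fact that air-filled lung parenchyma has much lower Hounsfield-unit (HU) values than surrounding soft tissue. The threshold value and toy image below are illustrative assumptions, not the parameters used in the study:

```python
import numpy as np

def threshold_lung_mask(ct_slice, hu_threshold=-320):
    """Crude first-stage lung segmentation: voxels darker than the
    threshold are treated as candidate lung parenchyma."""
    return ct_slice < hu_threshold

# Synthetic 2D "CT slice": soft tissue at ~40 HU, lung at ~-800 HU.
slice_hu = np.full((8, 8), 40.0)
slice_hu[2:6, 2:6] = -800.0

mask = threshold_lung_mask(slice_hu)
lung_volume_voxels = int(mask.sum())  # 16 voxels in this toy example
```

Atlas registration and level sets would then refine this rough mask, which by itself cannot separate the lungs from airways or handle dense (inflamed) tissue.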
Reduction of motion effects in myocardial arterial spin labeling
Purpose
To evaluate the accuracy and reproducibility of myocardial blood flow measurements obtained under different breathing strategies and motion correction techniques with arterial spin labeling.
Methods
A prospective cardiac arterial spin labeling study was performed in 12 volunteers at 3 Tesla. Perfusion images were acquired twice under breath-hold, synchronized-breathing, and free-breathing. Motion detection based on the temporal intensity variation of a myocardial voxel, as well as image registration based on pairwise and groupwise approaches, were applied and evaluated in synthetic and in vivo data. A region of interest was drawn over the mean perfusion-weighted image for quantification. Original breath-hold datasets, analyzed with individual regions of interest for each perfusion-weighted image, were considered as reference values.
Results
Perfusion measurements in the reference breath-hold datasets were in line with those reported in the literature. In original datasets, prior to motion correction, myocardial blood flow quantification was significantly overestimated due to contamination of the myocardial perfusion signal with the high-intensity signal of the blood pool. These effects were minimized with motion detection or registration. Synthetic data showed that the accuracy of the perfusion measurements was higher with the use of registration, in particular after the pairwise approach, which proved to be more robust to motion.
Conclusion
Satisfactory results were obtained for the free-breathing strategy after pairwise registration, with higher accuracy and robustness (in synthetic datasets) and higher intrasession reproducibility together with lower myocardial blood flow variability across subjects (in in vivo datasets). Breath-hold and synchronized-breathing after motion correction provided similar results, but these breathing strategies can be difficult for patients to perform.
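The motion-detection idea described in the Methods, flagging frames from the temporal intensity variation of a myocardial voxel, can be sketched with a simple robust outlier test. The specific statistic (a MAD-based z-score) and threshold are assumptions for illustration, not necessarily the criterion used in the study:

```python
import numpy as np

def detect_motion_frames(voxel_timeseries, z_thresh=2.0):
    """Flag frames whose intensity deviates strongly from the temporal
    median -- a proxy for blood-pool contamination due to motion."""
    ts = np.asarray(voxel_timeseries, dtype=float)
    med = np.median(ts)
    mad = np.median(np.abs(ts - med)) or 1.0  # guard against zero MAD
    z = np.abs(ts - med) / (1.4826 * mad)     # robust z-score
    return z > z_thresh

# Toy timeseries: frame 4 is contaminated by bright blood-pool signal.
ts = [100, 102, 99, 101, 180, 100, 98]
flags = detect_motion_frames(ts)
```

Flagged frames can then be discarded before averaging the perfusion-weighted images, which is cheaper than registration but sacrifices data rather than correcting it.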
New strategies for echocardiographic evaluation of left ventricular function in a mouse model of long-term myocardial infarction
In summary, we have performed a complete characterization of LV post-infarction remodeling in a DBA/2J mouse model of MI, using parameters adapted to the particular characteristics of the model. In the future, this well-characterized model will be used in both investigative and pharmacological studies that require accurate quantitative monitoring of cardiac recovery after myocardial infarction.
Smokers with CT detected emphysema and no airway obstruction have decreased plasma levels of EGF, IL-15, IL-8 and IL-1ra
Current or former smokers expressing a well-defined disease characteristic such as emphysema have a specific plasma cytokine profile. This includes a decrease in cytokines mainly implicated in the activation of apoptosis or in immunosurveillance. This information should be taken into account when evaluating patients with tobacco-related respiratory diseases.
Semi-supervised segmentation of ultrasound images based on patch representation and continuous min cut.
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation; that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm compares favorably with the literature.
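The Dice similarity coefficient used to report the 94% agreement above measures the overlap between a predicted binary mask and an expert delineation. A minimal sketch on a toy pair of masks (the masks themselves are invented for illustration):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 1, 0],
                 [1, 1, 0]])
d = dice(pred, gt)  # 2*3 / (3+4) = 6/7 ≈ 0.857
```

Dice weights the overlap against the sizes of both masks, which makes it more informative than plain pixel accuracy when the target occupies a small fraction of the image.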
Growth Pattern Analysis of Murine Lung Neoplasms by Advanced Semi-Automated Quantification of Micro-CT Images
Computed tomography (CT) is a non-invasive imaging modality used to monitor human lung cancers. Typically, tumor volumes are calculated using manual or semi-automated methods that require substantial user input, and an exponential growth model is used to predict tumor growth. However, these measurement methodologies are time-consuming and can lack consistency. In addition, the availability of datasets with sequential images of the same tumor, which are needed to characterize in vivo growth patterns for human lung cancers, is limited due to treatment interventions and the radiation exposure associated with multiple scans. In this paper, we performed micro-CT imaging of mouse lung cancers induced by overexpression of ribonucleotide reductase, a key enzyme in nucleotide biosynthesis, and developed an advanced semi-automated algorithm for efficient and accurate tumor volume measurement. Tumor volumes determined by the algorithm were first validated by comparison with results from manual methods for volume determination as well as direct physical measurements. A longitudinal study was then performed to investigate in vivo murine lung tumor growth patterns. Individual mice were imaged at least three times, with at least three weeks between scans. The tumors analyzed exhibited an exponential growth pattern, with an average doubling time of 57.08 days. The accuracy of the algorithm in the longitudinal study was also confirmed by comparing its output with manual measurements. These results suggest an exponential growth model for lung neoplasms and establish a new advanced semi-automated algorithm to measure lung tumor volume in mice that can aid efforts to improve lung cancer diagnosis and the evaluation of therapeutic responses.
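Under the exponential growth model referenced above, V(t) = V0 · 2^(t/Td), the doubling time Td follows directly from two volume measurements: Td = Δt · ln 2 / ln(V2/V1). A minimal sketch with invented volumes (not data from the study):

```python
import math

def doubling_time(v1, v2, dt_days):
    """Tumor doubling time (days) under an exponential growth model
    V(t) = V0 * 2**(t / Td), given volumes v1 and v2 measured dt_days apart."""
    return dt_days * math.log(2) / math.log(v2 / v1)

# Toy example: a tumor grows from 10 to 20 mm^3 in 57 days,
# i.e. exactly one doubling, so Td = 57 days.
td = doubling_time(10.0, 20.0, 57.0)
```

With more than two time points, Td is typically obtained by a linear fit of log-volume against time, which averages out measurement noise in any single scan pair.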
Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
Rudyanto, R.D., Kerkstra, S., van Rikxoort, E.M., Fetita, C., Brillet, P., Lefevre, C., Xue, W., et al. (2014). Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study. Medical Image Analysis, 18(7), 1217-1232. doi:10.1016/j.media.2014.07.003
The Cell Tracking Challenge: 10 years of objective benchmarking
The Cell Tracking Challenge is an ongoing benchmarking initiative that
has become a reference in cell segmentation and tracking algorithm
development. Here, we present a significant number of improvements
introduced in the challenge since our 2017 report. These include the
creation of a new segmentation-only benchmark, the enrichment of
the dataset repository with new datasets that increase its diversity and
complexity, and the creation of a silver standard reference corpus based
on the most competitive results, which will be of particular interest for
data-hungry deep learning-based strategies. Furthermore, we present
the up-to-date cell segmentation and tracking leaderboards, an in-depth
analysis of the relationship between the performance of the state-of-the-art
methods and the properties of the datasets and annotations, and two
novel, insightful studies about the generalizability and the reusability
of top-performing methods. These studies provide critical practical
conclusions for both developers and users of traditional and machine
learning-based cell segmentation and tracking algorithms.