
    Automatic Recognition of Muscle-invasive T-lymphocytes Expressing Dipeptidyl-peptidase IV (CD26) and Analysis of the Associated Cell Surface Phenotypes

    A neural cell detection system (NCDS) for the automatic quantitation of fluorescent lymphocytes in tissue sections was used to analyze CD26 expression in muscle-invasive T-cells. CD26 is a cell surface dipeptidyl-peptidase IV (DPP IV) involved in co-stimulatory activation of T-cells and also in adhesive events. The NCDS acquires visual knowledge from a set of training cell image patches selected by a user. The trained system evaluates an image in 2 min, calculating (i) the number, (ii) the positions and (iii) the phenotypes of the fluorescent cells. In the present study we have used the NCDS to identify DPP IV (CD26)-expressing invasive lymphocytes in sarcoid myopathy and to analyze the associated cell surface phenotypes. We find highly unusual phenotypes characterized by differential combinations of seven cell surface receptors usually involved in co-stimulatory events in T-lymphocytes. The data support a differential adhesive rather than a co-stimulatory role of CD26 in muscle-invasive cells. The adaptability of the NCDS algorithm to diverse types of cells should enable us to approach any invasion process, including invasion of malignant cells.
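    A minimal sketch of the patch-based detection scheme this abstract describes follows: a classifier is trained on user-selected cell and background patches and then scanned over a fluorescence image to count and localise cells. The function names, patch size, and choice of classifier are assumptions made for illustration, not the authors' NCDS implementation.

```python
# Hypothetical sketch of patch-based fluorescent-cell detection (not the NCDS code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

PATCH = 15  # assumed square patch size in pixels

def extract_patch(img, y, x, size=PATCH):
    """Return a flattened square patch centred on (y, x)."""
    h = size // 2
    return img[y - h:y + h + 1, x - h:x + h + 1].ravel()

def train_detector(img, cell_pts, background_pts):
    """Fit a classifier on user-selected cell and background positions.

    Assumes all training points lie at least PATCH // 2 pixels from the border.
    """
    X = [extract_patch(img, y, x) for y, x in cell_pts + background_pts]
    y = [1] * len(cell_pts) + [0] * len(background_pts)
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def detect_cells(img, clf, stride=4):
    """Scan the image on a grid and return positions classified as cells."""
    h = PATCH // 2
    hits = []
    for yy in range(h, img.shape[0] - h, stride):
        for xx in range(h, img.shape[1] - h, stride):
            if clf.predict([extract_patch(img, yy, xx)])[0] == 1:
                hits.append((yy, xx))
    return hits
```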

    Robust normalization protocols for multiplexed fluorescence bioimage analysis

    The study of mapping and interaction of co-localized proteins at a sub-cellular level is important for understanding complex biological phenomena. One of the recent techniques to map co-localized proteins is to use standard immuno-fluorescence microscopy in a cyclic manner (Nat Biotechnol 24:1270–8, 2006; Proc Natl Acad Sci 110:11982–7, 2013). Unfortunately, these techniques suffer from variability in intensity and positioning of signals from protein markers within a run and across different runs. It is therefore necessary to standardize protocols for preprocessing the multiplexed bioimaging (MBI) data from multiple runs to a comparable scale before any further analysis can be performed. In this paper, we compare various normalization protocols and, on the basis of the obtained results, propose a robust normalization technique that produces consistent results on MBI data collected from different runs using the Toponome Imaging System (TIS). Normalization results produced by the proposed method on a sample TIS data set for colorectal cancer patients were ranked favorably by two pathologists and two biologists. We show that the proposed method produces higher between-class Kullback-Leibler (KL) divergence and lower within-class KL divergence on a distribution of cell phenotypes from colorectal cancer and histologically normal samples.
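    The evaluation criterion used above ranks normalization protocols by how well they separate phenotype distributions between classes while keeping within-class distributions similar, measured with KL divergence. Below is a minimal sketch of that criterion using SciPy; the example counts and the smoothing constant are invented for illustration.

```python
# Sketch of the between-/within-class KL-divergence criterion (illustrative values).
import numpy as np
from scipy.stats import entropy

def kl_divergence(p_counts, q_counts, eps=1e-9):
    """KL(P || Q) for two phenotype histograms; eps avoids log(0)."""
    p = np.asarray(p_counts, dtype=float) + eps
    q = np.asarray(q_counts, dtype=float) + eps
    return entropy(p / p.sum(), q / q.sum())

# Hypothetical phenotype counts for a tumour sample and a normal sample.
tumour = [120, 30, 5, 45]
normal = [60, 70, 40, 10]
between_class = kl_divergence(tumour, normal)             # higher is better
within_class = kl_divergence([118, 32, 6, 44], tumour)    # lower is better
print(between_class, within_class)
```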

    AI in Medical Imaging Informatics: Current Challenges and Future Directions

    This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.

    Application of digital pathology-based advanced analytics of tumour microenvironment organisation to predict prognosis and therapeutic response.

    In recent years, the application of advanced analytics, especially artificial intelligence (AI), to digital H&E images, and other histological image types, has begun to radically change how histological images are used in the clinic. Alongside the recognition that the tumour microenvironment (TME) has a profound impact on tumour phenotype, the technical development of highly multiplexed immunofluorescence platforms has enhanced the biological complexity that can be captured in the TME with high precision. AI has an increasingly powerful role in the recognition and quantitation of image features and the association of such features with clinically important outcomes, as occurs in distinct stages in conventional machine learning. Deep-learning algorithms are able to elucidate TME patterns inherent in the input data with minimal human input and, hence, have the potential to achieve clinically relevant predictions and discovery of important TME features. Furthermore, the diverse repertoire of deep-learning algorithms able to interrogate TME patterns extends beyond convolutional neural networks to include attention-based models, graph neural networks, and multimodal models. To date, AI models have largely been evaluated retrospectively, outside the well-established rigour of prospective clinical trials, in part because traditional clinical trial methodology may not always be suitable for the assessment of AI technology. However, to enable digital pathology-based advanced analytics to meaningfully impact clinical care, specific measures of 'added benefit' to the current standard of care and validation in a prospective setting are important. This will need to be accompanied by adequate measures of explainability and interpretability. Despite such challenges, the combination of expanding datasets, increased computational power, and the possibility of integration of pre-clinical experimental insights into model development means there is exciting potential for the future progress of these AI applications.

    Automated identification of neurons and their locations

    Individual locations of many neuronal cell bodies (>10^4) are needed to enable statistically significant measurements of spatial organization within the brain, such as nearest-neighbor and microcolumnarity measurements. In this paper, we introduce an Automated Neuron Recognition Algorithm (ANRA) which obtains the (x,y) location of individual neurons within digitized images of Nissl-stained, 30-micron-thick, frozen sections of the cerebral cortex of the Rhesus monkey. Identification of neurons within such Nissl-stained sections is inherently difficult due to the variability in neuron staining, the overlap of neurons, the presence of partial or damaged neurons at tissue surfaces, and the presence of non-neuron objects such as glial cells, blood vessels, and random artifacts. To overcome these challenges and identify neurons, ANRA applies a combination of image segmentation and machine learning. The steps involve active contour segmentation to find outlines of potential neuron cell bodies, followed by artificial neural network training using the segmentation properties (size, optical density, gyration, etc.) to distinguish between neuron and non-neuron segmentations. ANRA positively identifies 86[5]% of neurons with 15[8]% error (mean[st.dev.]) on a wide range of Nissl-stained images, whereas semi-automatic methods obtain 80[7]%/17[12]%. A further advantage of ANRA is that it affords a substantial increase in speed over semi-automatic methods and is computationally efficient, with the ability to recognize ~100 neurons per minute using a standard personal computer. ANRA is amenable to analysis of huge photo-montages of Nissl-stained tissue, thereby opening the door to fast, efficient and quantitative analysis of the vast stores of archival material that exist in laboratories and research collections around the world. Supplemental material and software are available at http://physics.bu.edu/~ainglis/ANRA
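    A simplified sketch of the two-stage idea described in this abstract is given below: candidate cell-body segmentations are reduced to shape and intensity features, and a small neural network separates neurons from non-neurons. Otsu thresholding stands in for the active-contour step, and the feature set only approximates the size/optical-density/gyration properties the abstract mentions; none of this is the published ANRA code.

```python
# Simplified neuron-vs-non-neuron classification sketch (not the ANRA implementation).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.neural_network import MLPClassifier

def candidate_features(image):
    """Segment candidate cell bodies and return one feature row per candidate."""
    mask = image < threshold_otsu(image)  # Nissl-stained somata are darker than background
    feats, centroids = [], []
    for r in regionprops(label(mask), intensity_image=image):
        feats.append([r.area, r.mean_intensity, r.eccentricity, r.solidity])
        centroids.append(r.centroid)
    return np.array(feats), centroids

def train_neuron_classifier(feature_rows, labels):
    """labels: 1 for neuron, 0 for glia/vessel/artifact, taken from manual marking."""
    return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(feature_rows, labels)
```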

    Fast Ray Features for Learning Irregular Shapes

    We introduce a new class of image features, the Ray feature set, which considers image characteristics at distant contour points, capturing information that is difficult to represent with standard feature sets. This property allows Ray features to efficiently and robustly recognize deformable or irregular shapes, such as cells in microscopic imagery. Experiments show that Ray features clearly outperform other powerful features, including Haar-like features and Histograms of Oriented Gradients, when applied to detecting irregularly shaped neuron nuclei and mitochondria. Ray features can also provide important complementary information to Haar features for other tasks such as face detection, reducing the number of weak learners and the computational cost.
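    The core Ray-feature idea can be illustrated with a short sketch: from a point of interest, walk along a fixed direction until an edge pixel is reached and record the travelled distance; collecting such distances over several directions gives a descriptor of the enclosing shape. The sketch below is a conceptual illustration only; the published feature set also includes gradient-based and difference-of-ray variants.

```python
# Conceptual sketch of a distance Ray feature (not the authors' implementation).
import numpy as np
from skimage.feature import canny

def ray_distance(edge_map, y, x, angle, max_steps=200):
    """Distance from (y, x) to the nearest edge along `angle` (radians)."""
    dy, dx = np.sin(angle), np.cos(angle)
    for step in range(1, max_steps):
        yy, xx = int(round(y + step * dy)), int(round(x + step * dx))
        if not (0 <= yy < edge_map.shape[0] and 0 <= xx < edge_map.shape[1]):
            break
        if edge_map[yy, xx]:
            return float(step)
    return float(max_steps)

def ray_feature_vector(image, y, x, n_angles=8):
    """Ray distances in n_angles evenly spaced directions around (y, x)."""
    edges = canny(image)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    return [ray_distance(edges, y, x, a) for a in angles]
```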

    Automated Vascular Smooth Muscle Segmentation, Reconstruction, Classification and Simulation on Whole-Slide Histology

    Histology of the microvasculature depicts detailed characteristics relevant to tissue perfusion. One important histologic feature is the smooth muscle component of the microvessel wall, which is responsible for controlling vessel caliber. Abnormalities can cause disease and organ failure, as seen in hypertensive retinopathy, diabetic ischemia, Alzheimer’s disease and improper cardiovascular development. However, assessments of smooth muscle cell content are conventionally performed on selected fields of view on 2D sections, which may lead to measurement bias. We have developed a software platform for automated (1) 3D vascular reconstruction, (2) detection and segmentation of muscularized microvessels, (3) classification of vascular subtypes, and (4) simulation of function through blood flow modeling. Vessels were stained for α-actin using 3,3'-diaminobenzidine, assessing both normal (n=9 mice) and regenerated vasculature (n=5 at day 14, n=4 at day 28). 2D locally adaptive segmentation involved vessel detection, skeletonization, and fragment connection. 3D reconstruction was performed using our novel nucleus landmark-based registration. Arterioles and venules were categorized using supervised machine learning based on texture and morphometry. Simulation of blood flow for the normal and regenerated vasculature was performed at baseline and during demand, based on the structural measures obtained from the above tools. Vessel medial area and vessel wall thickness were found to be greater in the normal vasculature as compared to the regenerated vasculature (p<0.001), and a higher density of arterioles was found in the regenerated tissue (p<0.05). Validation showed a Dice coefficient of 0.88 (compared to manual segmentation), a 3D reconstruction target registration error of 4 μm, and an area under the receiver operating characteristic curve of 0.89 for vessel classification. We found 89% and 67% decreases in blood flow through the network for the regenerated vasculature during increased oxygen demand as compared to the normal vasculature, at 14 and 28 days post-ischemia respectively. We developed a software platform for automated vasculature histology analysis involving 3D reconstruction, segmentation, and arteriole vs. venule classification. This work advances understanding of how conventional histology sampling compares with whole-slide analysis, of the morphological and density differences in the regenerated vasculature, and of the effect of those differences on blood flow and function.
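    The segmentation validation above reports a Dice coefficient of 0.88 against manual delineation. For reference, the Dice overlap between an automated and a manual binary mask is conventionally computed as shown below; this is a generic sketch, not the authors' validation code.

```python
# Generic Dice overlap between two binary segmentation masks.
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two boolean masks of equal shape."""
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(manual_mask, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```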

    Feature selection and modelling methods for microarray data from acute coronary syndrome

    Acute coronary syndrome (ACS) represents a leading cause of mortality and morbidity worldwide. Providing better diagnostic solutions and developing therapeutic strategies customized to the individual patient represent societal and economic urgencies. Progressive improvement in diagnosis and treatment procedures requires a thorough understanding of the underlying genetic mechanisms of the disease. Recent advances in microarray technologies, together with the decreasing costs of the specialized equipment, have enabled affordable harvesting of time-course gene expression data. The high-dimensional data generated demand computational tools able to extract the underlying biological knowledge. This thesis is concerned with developing new methods for analysing time-course gene expression data, focused on identifying differentially expressed genes, deconvolving heterogeneous gene expression measurements and inferring dynamic gene regulatory interactions. The main contributions include: a novel multi-stage feature selection method; a new deconvolution approach for estimating cell-type-specific signatures and quantifying the contribution of each cell type to the variance of the gene expression patterns; a novel approach to identify the cellular sources of differential gene expression; a new approach to model gene expression dynamics using sums of exponentials; and a novel method to estimate stable linear dynamical systems from noisy and unequally spaced time series data. The performance of the proposed methods was demonstrated on a time-course dataset consisting of microarray gene expression levels collected from the blood samples of patients with ACS and associated blood count measurements. The results of the feature selection study are of significant biological relevance. For the first time, high diagnostic performance for the ACS subtypes was reported up to three months after hospital admission. The deconvolution study exposed features of within- and between-group variation in expression measurements and identified potential cell-type markers and cellular sources of differential gene expression. It was shown that the dynamics of post-admission gene expression data can be accurately modelled using sums of exponentials, suggesting that gene expression levels undergo a transient response to the ACS event before returning to equilibrium. The linear dynamical models capturing the gene regulatory interactions exhibit high predictive performance and can serve as platforms for system-level analysis, numerical simulations and intervention studies.
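    The sum-of-exponentials model mentioned above can be illustrated by fitting one gene's post-admission time course with scipy.optimize.curve_fit. The two-term form, the example time points and expression values, and the initial parameter guess are all assumptions made for this sketch, not the thesis's estimation procedure.

```python
# Illustrative two-term sum-of-exponentials fit to a single gene's time course.
import numpy as np
from scipy.optimize import curve_fit

def two_exponentials(t, a1, k1, a2, k2, c):
    """x(t) = a1*exp(-k1*t) + a2*exp(-k2*t) + c, where c is the equilibrium level."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + c

t = np.array([0.0, 1.0, 3.0, 7.0, 30.0, 90.0])   # days after admission (invented)
x = np.array([5.2, 4.1, 3.0, 2.4, 2.1, 2.0])     # log2 expression (invented)

params, _ = curve_fit(two_exponentials, t, x, p0=[2.0, 1.0, 1.0, 0.1, 2.0], maxfev=10000)
print(params)  # transient response decaying towards the equilibrium level c
```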