845 research outputs found
Recommended from our members
Interpretable classification of Alzheimer's disease pathologies with a convolutional neural network pipeline.
Neuropathologists assess vast brain areas to identify diverse and subtly differentiated morphologies. Standard semi-quantitative scoring approaches, however, are coarse-grained and lack precise neuroanatomic localization. We report a proof-of-concept deep learning pipeline that identifies specific neuropathologies (amyloid plaques and cerebral amyloid angiopathy) in immunohistochemically stained archival slides. Using automated segmentation of stained objects and a cloud-based interface, we annotate >70,000 plaque candidates from 43 whole slide images (WSIs) to train and evaluate convolutional neural networks. Networks achieve strong plaque classification on a 10-WSI hold-out set (0.993 and 0.743 areas under the receiver operating characteristic and precision-recall curves, respectively). Prediction confidence maps visualize morphology distributions at high resolution. The resulting network-derived amyloid beta (Aβ) burden scores correlate well with established semi-quantitative scores on a 30-WSI blinded hold-out set. Finally, saliency mapping demonstrates that the networks learn patterns consistent with accepted pathologic features. This scalable means of augmenting a neuropathologist's ability suggests a route to neuropathologic deep phenotyping.
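The hold-out metrics reported above (areas under the ROC and precision-recall curves) can be computed directly from per-candidate labels and classifier scores. A minimal pure-Python sketch, using illustrative data rather than the paper's, and assuming no tied scores:

```python
# Sketch: AUROC via the rank-sum (Mann-Whitney) statistic, and average
# precision as a simple AUPRC estimate. Labels/scores are illustrative.

def auroc(labels, scores):
    """Area under the ROC curve from ranks of positive examples."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = sum(r for r, (_, y) in enumerate(pairs, start=1) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def average_precision(labels, scores):
    """Mean of precision at each true-positive hit, in score order."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total, ap = 0, sum(labels), 0.0
    for k, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            ap += hits / k
    return ap / total

labels = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
print(auroc(labels, scores))
print(average_precision(labels, scores))
```

In practice a library routine (e.g. scikit-learn's metric functions) would be used; the rank formulation above just shows what the two reported numbers measure.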
Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue
Histological staining is a vital step used to diagnose various diseases and
has been used for more than a century to provide contrast to tissue sections,
rendering the tissue constituents visible for microscopic analysis by medical
experts. However, this process is time-consuming, labor-intensive, expensive
and destructive to the specimen. Recently, the ability to virtually-stain
unlabeled tissue sections, entirely avoiding the histochemical staining step,
has been demonstrated using tissue-stain specific deep neural networks. Here,
we present a new deep learning-based framework which generates
virtually-stained images using label-free tissue, where different stains are
merged following a micro-structure map defined by the user. This approach uses
a single deep neural network that receives two different sources of information
at its input: (1) autofluorescence images of the label-free tissue sample, and
(2) a digital staining matrix which represents the desired microscopic map of
different stains to be virtually generated at the same tissue section. This
digital staining matrix is also used to virtually blend existing stains,
digitally synthesizing new histological stains. We trained and blindly tested
this virtual-staining network using unlabeled kidney tissue sections to
generate micro-structured combinations of Hematoxylin and Eosin (H&E), Jones
silver stain, and Masson's Trichrome stain. Using a single network, this
approach multiplexes virtual staining of label-free tissue with multiple types
of stains and paves the way for synthesizing new digital histological stains
that can be created on the same tissue cross-section, which is currently not
feasible with standard histochemical staining methods.
Comment: 19 pages, 5 figures, 2 tables
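The role of the digital staining matrix described above can be illustrated with a toy blending step (a sketch of the concept only, not the authors' network: the per-stain renderings and weights here are stand-ins): each pixel carries a weight for each target stain, and the output is the weighted combination.

```python
import numpy as np

# Illustrative sketch: a per-pixel "digital staining matrix" assigns each
# pixel a weight for each target stain; the blended output is the
# weighted sum of per-stain renderings.

def blend_stains(stain_images, staining_matrix):
    """stain_images: (S, H, W, 3) renderings, one per stain.
    staining_matrix: (S, H, W) per-pixel weights summing to 1 over S."""
    return np.einsum('shwc,shw->hwc', stain_images, staining_matrix)

h_and_e = np.full((4, 4, 3), [0.8, 0.2, 0.5])    # stand-in H&E rendering
trichrome = np.full((4, 4, 3), [0.2, 0.6, 0.7])  # stand-in Trichrome
stains = np.stack([h_and_e, trichrome])

weights = np.zeros((2, 4, 4))
weights[0, :, :2] = 1.0   # left half rendered as H&E
weights[1, :, 2:] = 1.0   # right half as Masson's Trichrome

blended = blend_stains(stains, weights)
print(blended.shape)  # (4, 4, 3)
```

In the paper this matrix is a second network input alongside the autofluorescence image, so the blending is learned end to end rather than applied to finished renderings as above.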
Automated HER2 Scoring in Breast Cancer Images Using Deep Learning and Pyramid Sampling.
Objective and Impact Statement: Human epidermal growth factor receptor 2 (HER2) is a critical protein in cancer cell growth that signifies the aggressiveness of breast cancer (BC) and helps predict its prognosis. Here, we introduce a deep learning-based approach utilizing pyramid sampling for the automated classification of HER2 status in immunohistochemically (IHC) stained BC tissue images. Introduction: Accurate assessment of IHC-stained tissue slides for HER2 expression levels is essential for both treatment guidance and understanding of cancer mechanisms. Nevertheless, the traditional workflow of manual examination by board-certified pathologists encounters challenges, including inter- and intra-observer inconsistency and extended turnaround times. Methods: Our deep learning-based method analyzes morphological features at various spatial scales, efficiently managing the computational load and facilitating a detailed examination of cellular and larger-scale tissue-level details. Results: This approach addresses the tissue heterogeneity of HER2 expression by providing a comprehensive view, leading to a blind testing classification accuracy of 84.70% on a dataset of 523 core images from tissue microarrays. Conclusion: This automated system, proving reliable as an adjunct pathology tool, has the potential to enhance diagnostic precision and evaluation speed, and might substantially impact cancer treatment planning.
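The multi-scale view behind pyramid sampling can be sketched as repeated downsampling of the same tissue tile, so a classifier sees both cellular detail and larger tissue context. This is an assumed, simplified form, not the authors' exact implementation:

```python
import numpy as np

# Sketch of pyramid sampling: view the same region at several spatial
# scales via repeated 2x average-pooling. Level count is illustrative.

def avg_pool2(img):
    """Downsample a 2-D array by 2x using non-overlapping 2x2 means."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(img, levels=3):
    out = [img]
    for _ in range(levels - 1):
        out.append(avg_pool2(out[-1]))
    return out

tile = np.arange(64.0).reshape(8, 8)
levels = pyramid(tile, levels=3)
print([l.shape for l in levels])  # [(8, 8), (4, 4), (2, 2)]
```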
Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images
Pain intensity classification using facial images is a challenging problem in computer vision research.
This work proposed a patch and transfer learning-based model to classify various pain intensities
using facial images. The input facial images were segmented into dynamic-sized horizontal patches
or “shutter blinds”. A lightweight deep network DarkNet19 pre-trained on ImageNet1K was used
to generate deep features from the shutter blinds and the undivided resized segmented input facial
image. The most discriminative features were selected from these deep features using iterative
neighborhood component analysis and then fed to a standard shallow fine
k-nearest-neighbor classifier evaluated with tenfold cross-validation. The
proposed shutter blinds-based model
was trained and tested on datasets derived from two public databases—University of Northern
British Columbia-McMaster Shoulder Pain Expression Archive Database and Denver Intensity of
Spontaneous Facial Action Database—which both comprised four pain intensity classes that had
been labeled by human experts using validated facial action coding system methodology. Our shutter
blinds-based classification model attained more than 95% overall accuracy rates on both datasets.
The excellent performance suggests that the automated pain intensity classification model can be
deployed to assist doctors in the non-verbal detection of pain using facial images in various situations
(e.g., non-communicative patients or during surgery). This system can facilitate timely detection and
management of pain.
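The "shutter blinds" segmentation described above amounts to splitting a face image into horizontal strips. A minimal sketch, with strip counts and sizes illustrative rather than the paper's exact configuration:

```python
import numpy as np

# Sketch of the "shutter blinds" idea: split a face image into
# near-equal horizontal strips; each strip (plus the whole image)
# would then be passed to a pre-trained network for deep features.

def shutter_blinds(img, n_strips):
    """Split image rows into n_strips near-equal horizontal patches."""
    bounds = np.linspace(0, img.shape[0], n_strips + 1, dtype=int)
    return [img[bounds[i]:bounds[i + 1]] for i in range(n_strips)]

face = np.zeros((100, 80))
strips = shutter_blinds(face, 4)
print([s.shape for s in strips])  # four 25x80 strips
```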
Inspection and evaluation of artifacts in digital video sources
Streaming digital video content providers such as YouTube, Amazon, Hulu, and Netflix collaborate with production teams to obtain new and old video content. These collaborations lead to an accumulation of video sources, some of which might contain unacceptable visual artifacts. Artifacts may inadvertently enter the video master at any point in the production pipeline, due to any of a number of equipment and user failures. Unfortunately, these artifacts are difficult to detect since no pristine reference exists for comparison. As of now, few automated tools exist that can effectively capture the most common forms of these artifacts. This work studies no-reference video source inspection for generalized artifact detection and subjective quality prediction, which will ultimately inform decisions related to the acquisition of new content.
Automatically identifying the locations and severities of video artifacts is a difficult problem. We have developed a general method for detecting local artifacts by learning differences in the statistics between distorted and pristine video frames. Our model, which we call the Video Impairment Mapper (VID-MAP), produces a full-resolution map of artifact detection probabilities based on comparisons of excitatory and inhibitory convolutional responses. Validation on a large database shows that our method outperforms the previous state of the art, including distortion-specific detectors.
A variety of powerful picture quality predictors are available that rely on neuro-statistical models of distortion perception. We extend these principles to video source inspection, by coupling spatial divisive normalization with a series of filterbanks tuned for artifact detection, implemented using a common convolutional framework. We developed the Video Impairment Detection by SParse Error CapTure (VIDSPECT) model, which leverages discriminative sparse dictionaries that are tuned to detect specific artifacts. VIDSPECT is simple, highly generalizable, and yields better accuracy than competing methods.
To evaluate the perceived quality of video sources containing artifacts, we built a new digital video database, called the LIVE Video Masters Database, which contains 384 videos affected by the types of artifacts encountered in otherwise pristine digital video sources. We find that VIDSPECT delivers top performance on this database for most artifacts tested, and competitive performance otherwise, using the same basic architecture in all cases.
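The spatial divisive normalization underlying these neuro-statistical models can be sketched as a local mean-and-contrast normalization of each frame. Window size and stabilizing constant below are illustrative, not the dissertation's parameters:

```python
import numpy as np

# Sketch of spatial divisive normalization: center each pixel by a
# local mean and divide by a local standard deviation estimate, the
# standard preprocessing behind many no-reference quality models.

def divisive_normalize(frame, k=3, eps=1.0):
    pad = k // 2
    padded = np.pad(frame, pad, mode='reflect')
    # local mean/std via an explicit sliding window (fine for a sketch;
    # real implementations use separable Gaussian filtering)
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mu = windows.mean(axis=(2, 3))
    sigma = windows.std(axis=(2, 3))
    return (frame - mu) / (sigma + eps)

frame = np.random.default_rng(0).normal(size=(16, 16))
mscn = divisive_normalize(frame)
print(mscn.shape)  # (16, 16)
```

The resulting normalized coefficients are what the filterbanks tuned for artifact detection would then operate on.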
A Comprehensive Review of Deep Learning-based Single Image Super-resolution
Image super-resolution (SR) is one of the vital image processing methods that
improve the resolution of an image in the field of computer vision. In the last
two decades, significant progress has been made in the field of
super-resolution, especially by utilizing deep learning methods. This article
provides a detailed survey of recent progress in single-image super-resolution
from the perspective of deep learning, while also reviewing the earlier
classical methods used for image super-resolution. The survey
classifies the image SR methods into four categories, i.e., classical methods,
supervised learning-based methods, unsupervised learning-based methods, and
domain-specific SR methods. We also introduce the problem of SR to provide
intuition about image quality metrics, available reference datasets, and SR
challenges. Deep learning-based approaches of SR are evaluated using a
reference dataset. Some of the reviewed state-of-the-art image SR methods
include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN),
multiscale residual network (MSRN), meta residual dense network (Meta-RDN),
recurrent back-projection network (RBPN), second-order attention network (SAN),
SR feedback network (SRFBN) and the wavelet-based residual attention network
(WRAN). Finally, the survey concludes with future directions and trends in
SR, and open problems in SR to be addressed by researchers.
Comment: 56 pages, 11 figures, 5 tables
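Among the reference image quality metrics this survey introduces, PSNR is the simplest; a minimal sketch (8-bit peak value assumed):

```python
import numpy as np

# Sketch of PSNR, a standard full-reference metric for comparing an
# SR output against its ground-truth high-resolution image.

def psnr(reference, restored, peak=255.0):
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
out = np.full((4, 4), 16.0)
print(psnr(ref, out))
```

Perceptual metrics such as SSIM are reported alongside PSNR in most of the surveyed work, since PSNR alone correlates poorly with visual quality.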
Denoising OCT Images Using Steered Mixture of Experts with Multi-Model Inference
In Optical Coherence Tomography (OCT), speckle noise significantly hampers
image quality, affecting diagnostic accuracy. Current methods, including
traditional filtering and deep learning techniques, have limitations in noise
reduction and detail preservation. Addressing these challenges, this study
introduces a novel denoising algorithm, Block-Matching Steered-Mixture of
Experts with Multi-Model Inference and Autoencoder (BM-SMoE-AE). This method
combines block-matched implementation of the SMoE algorithm with an enhanced
autoencoder architecture, offering efficient speckle noise reduction while
retaining critical image details. Our method stands out by providing improved
edge definition and reduced processing time. Comparative analysis with existing
denoising techniques demonstrates the superior performance of BM-SMoE-AE in
maintaining image integrity and enhancing OCT image usability for medical
diagnostics.
Comment: This submission contains 10 pages and 4 figures. It was presented at
the 2024 SPIE Photonics West, held in San Francisco. The paper details
advancements in photonics applications related to healthcare and includes
supplementary material with additional datasets for review.
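The block-matching step that BM-SMoE-AE builds on can be sketched as a similarity search over patches: for a reference block, find its nearest neighbors in the frame by sum of squared differences. This is a generic sketch of block matching, not the authors' implementation; the joint SMoE/autoencoder processing of the matched group is omitted:

```python
import numpy as np

# Sketch of block matching for denoising: rank all candidate blocks by
# SSD against a reference block and keep the k most similar; matched
# groups would then be denoised jointly.

def match_blocks(img, ref_top, ref_left, size=4, k=3):
    """Return (row, col) of the k blocks most similar to the reference."""
    ref = img[ref_top:ref_top + size, ref_left:ref_left + size]
    candidates = []
    for i in range(img.shape[0] - size + 1):
        for j in range(img.shape[1] - size + 1):
            block = img[i:i + size, j:j + size]
            candidates.append((np.sum((block - ref) ** 2), i, j))
    candidates.sort(key=lambda t: t[0])
    return [(i, j) for _, i, j in candidates[:k]]

img = np.random.default_rng(1).normal(size=(12, 12))
matches = match_blocks(img, 0, 0)
print(matches[0])  # the reference block matches itself first: (0, 0)
```

Practical implementations restrict the search to a window around the reference block; the exhaustive scan above is for clarity only.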