
    Histopathological image analysis with connections to genomics

    The fields of imaging and genomics in cancer research have mostly been studied independently, but recently available datasets have made it possible to investigate the synergy between the two. This work demonstrates the efficacy of computational histopathological image analysis for extracting quantitative nuclear and cellular features from hematoxylin and eosin stained images that have meaningful connections to genomic data. Moreover, with the advent of whole slide images, significantly more data representing the variation in nuclear characteristics and tumor heterogeneity is available, which can aid in developing new analytical tools, such as the proposed convolutional neural network for nuclear segmentation, which produces state-of-the-art segmentation results on challenging cases seen in normal pathology. This robust segmentation tool is essential for capturing reliable features for computational pathology. Whole slide images also capture rich spatial information about tumors, which presents both a challenge and an opportunity for developing new image processing tools that exploit this spatial information, a direction left for future work. Other histopathological image modalities and relevant machine learning tools are also considered for elucidating the cellular processes of cancer.
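    As a point of reference, the following is a minimal, hedged sketch of the kind of convolutional encoder-decoder that could be used for nuclear segmentation of H&E patches; the thesis's actual architecture and training details are not specified here, and all names and sizes are illustrative.

```python
# Minimal sketch of a per-pixel nucleus segmenter for H&E patches (PyTorch).
# Illustrative only: not the network proposed in the thesis.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Small encoder-decoder mapping an RGB patch to nucleus-vs-background logits."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one logit per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
patch = torch.rand(1, 3, 256, 256)            # a dummy H&E image patch
mask = torch.sigmoid(model(patch)) > 0.5      # binary nucleus mask, (1, 1, 256, 256)
```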

    On the Effectiveness of Leukocytes Classification Methods in a Real Application Scenario

    Automating the analysis of digital microscopic images to identify cell sub-types or the presence of illness has assumed great importance, since it aids the laborious manual process of review and diagnosis. In this paper, we focus on the analysis of white blood cells. They are the body’s main defence against infections and diseases and, therefore, their reliable classification is very important. Current systems for leukocyte analysis are mainly dedicated to counting, sub-type classification, and disease detection or classification. Although these tasks seem very different, they share many steps in the analysis process, especially those dedicated to the detection of cells in blood smears. A very accurate detection step yields accurate results in the classification of white blood cells; conversely, when detection is not accurate, it can adversely affect classification performance. However, real-world applications very commonly have to work on inaccurately detected regions. Many problems can affect detection results. They can be related to the quality of the blood smear images, e.g., colour and lighting conditions, the absence of standards, or even the density and presence of overlapping cells. To this end, we performed an in-depth investigation of the above scenario, simulating the regions produced by detection-based systems. We exploit various image descriptors combined with different classifiers, including CNNs, in order to evaluate which is the most suitable in such a scenario when performing two different tasks: classification of WBC sub-types and leukaemia detection. Experimental results have shown that convolutional neural networks are very robust in such a scenario, outperforming common machine learning techniques combined with hand-crafted descriptors. However, when appropriate images are used for model training, even simpler approaches can lead to accurate results in both tasks.
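    The hand-crafted-descriptor baselines discussed above can be illustrated with a small, hedged sketch: a uniform LBP histogram fed to an SVM, evaluated with cross-validation on placeholder crops (the real experiments use the descriptors, datasets and CNNs listed in the paper).

```python
# Sketch of a hand-crafted descriptor + classical classifier baseline for WBC crops.
# The data here is random placeholder content, not the paper's datasets.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def lbp_histogram(gray_patch, points=8, radius=1):
    """Uniform LBP code histogram as a simple texture descriptor."""
    codes = local_binary_pattern(gray_patch, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

patches = (np.random.rand(100, 64, 64) * 255).astype(np.uint8)   # placeholder crops
labels = np.random.randint(0, 2, size=100)                       # placeholder labels

features = np.array([lbp_histogram(p) for p in patches])
scores = cross_val_score(SVC(kernel="rbf"), features, labels, cv=5)
print("LBP + SVM cross-validated accuracy:", scores.mean())
```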

    Deep learning in food category recognition

    Integrating artificial intelligence with food category recognition has been a field of research interest for the past few decades. It is potentially one of the next steps in revolutionizing human interaction with food. The modern advent of big data and the development of data-oriented fields like deep learning have driven advances in food category recognition. With increasing computational power and ever-larger food datasets, the approach’s full potential has yet to be realized. This survey provides an overview of methods that can be applied to various food category recognition tasks, including detecting type, ingredients, quality, and quantity. We survey the core components for constructing a machine learning system for food category recognition, including datasets, data augmentation, hand-crafted feature extraction, and machine learning algorithms. We place a particular focus on the field of deep learning, including the utilization of convolutional neural networks, transfer learning, and semi-supervised learning. We provide an overview of relevant studies to promote further developments in food category recognition for research and industrial applications. Funding: MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); LIAS (P202ED10); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); Sino-UK Education Fund (OP202006); BBSRC (RM32G0178B8).
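    A common deep learning ingredient surveyed above is transfer learning; the sketch below shows one hedged way to reuse an ImageNet backbone for a food label set, with the class count and training data as placeholder assumptions rather than details taken from the survey.

```python
# Hedged transfer-learning sketch: freeze a pretrained backbone, retrain the head.
# NUM_FOOD_CLASSES and the mini-batch below are illustrative placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_FOOD_CLASSES = 101                        # e.g. a Food-101-sized label set

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False               # keep ImageNet features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_FOOD_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)           # dummy mini-batch of food photos
targets = torch.randint(0, NUM_FOOD_CLASSES, (8,))
loss = criterion(backbone(images), targets)
loss.backward()
optimizer.step()
```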

    Peripheral Blood Smear Analyses Using Deep Learning

    Peripheral Blood Smear (PBS) analysis is a vital routine test carried out by hematologists to assess aspects of human health status. PBS analysis is prone to human error, and utilizing computer-based analysis can greatly enhance this process in terms of accuracy and cost. Recent learning algorithms, such as deep learning, are data hungry, but due to the scarcity of labeled medical images, researchers have had to find viable alternative solutions to increase the size of available datasets. Synthetic datasets provide a promising solution to data scarcity; however, the complexity of the natural structure of blood smears adds an extra layer of challenge to the synthesizing process. In this thesis, we propose a methodology that utilizes Locality Sensitive Hashing (LSH) to create a novel balanced dataset of synthetic blood smears. This dataset, which was automatically annotated during the generation phase, covers 17 essential categories of blood cells. The dataset was also approved by 5 experienced hematologists as meeting the general standards for making thin blood smears. Moreover, a platelet classifier and a WBC classifier were trained on the synthetic dataset. For classifying platelets, a hybrid approach combining deep learning and image processing techniques is proposed. This approach improved platelet classification accuracy and macro-average precision from 82.6% to 98.6% and from 76.6% to 97.6%, respectively. Moreover, for white blood cell classification, a novel scheme for training deep networks is proposed, namely Enhanced Incremental Training, which automatically recognises and handles classes that confuse and negatively affect neural network predictions. To handle the confusable classes, we also propose a procedure called "training revert". Application of the proposed method improved classification accuracy and macro-average precision from 61.5% to 95% and from 76.6% to 94.27%, respectively. In addition, the feasibility of using animal reticulocyte cells as a viable solution to compensate for the deficiency of human data is investigated. The integration of animal cells is implemented by employing multiple deep classifiers that utilize transfer learning in different experimental setups, in a procedure that mimics the protocol followed in experimental medical labs. Moreover, three measures are defined, namely the pretraining boost, the dataset similarity boost, and the dataset size boost, to compare the effectiveness of the utilized experimental setups. All the experiments in this work were conducted on a novel public human reticulocyte dataset, and the best performing model achieved 98.9%, 98.9%, and 98.6% average accuracy, average macro precision, and average macro F-score, respectively. Finally, this work provides a comprehensive framework for analysing two main blood smears that are still examined manually in labs. To automate the analysis process, a novel method for constructing synthetic whole-slide blood smear datasets is proposed. Moreover, to conduct the blood cell classification, which includes eighteen blood cell types and abnormalities, two novel techniques are proposed, namely enhanced incremental training and animal-to-human cells transfer learning. The outcomes of this work were published in six reputable international conferences and journals, including Computers in Biology and Medicine and IEEE Access.
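    The LSH idea mentioned above can be illustrated with a hedged sketch: random-projection hashing buckets similar cell-crop feature vectors so that a balanced subset can be drawn per bucket. The features, bucket count and sampling rule below are illustrative assumptions, not the thesis's actual scheme.

```python
# Sketch of random-projection LSH used to balance a synthetic cell-crop dataset.
# Feature vectors are random placeholders standing in for real crop descriptors.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(feature_vec, planes):
    """Signs of projections onto random hyperplanes -> compact binary bucket key."""
    return tuple(bool(b) for b in (feature_vec @ planes.T) > 0)

feature_dim, num_planes = 128, 12
planes = rng.standard_normal((num_planes, feature_dim))
cell_features = rng.standard_normal((1000, feature_dim))   # placeholder descriptors

buckets = defaultdict(list)
for idx, vec in enumerate(cell_features):
    buckets[lsh_signature(vec, planes)].append(idx)

k = 5                                                       # cap per bucket
balanced_indices = [i for members in buckets.values() for i in members[:k]]
print(len(buckets), "buckets,", len(balanced_indices), "selected crops")
```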

    Analysis of Signal Decomposition and Stain Separation methods for biomedical applications

    Nowadays, biomedical signal processing and classification and medical image interpretation play an essential role in the detection and diagnosis of several human diseases. The problem of the high variability and heterogeneity of information extracted from digital data can be addressed with signal decomposition and stain separation techniques, which are useful approaches for highlighting hidden patterns or rhythms in biological signals and specific cellular structures in histological color images, respectively. This thesis work can be divided into two macro-sections. In the first part (Part I), a novel cascaded RNN model based on long short-term memory (LSTM) blocks is presented with the aim of classifying sleep stages automatically. A general workflow based on single-channel EEG signals is developed to improve the low performance in staging N1 sleep without reducing performance in the other sleep stages (i.e., Wake, N2, N3 and REM). In the same context, several signal decomposition techniques and time-frequency representations are deployed for the analysis of EEG signals. All extracted features are analyzed using a novel correlation-based timestep feature selection, and the selected features are finally fed to a bidirectional RNN model. In the second part (Part II), a fully automated method named SCAN (Stain Color Adaptive Normalization) is proposed for the separation and normalization of staining in digital pathology. This normalization system standardizes the color intensity of a tissue slide with respect to that of a target image digitally, automatically and in a few seconds, in order to improve the pathologist’s diagnosis and increase the accuracy of computer-assisted diagnosis (CAD) systems. Multiscale evaluation and multi-tissue comparison are performed to assess the robustness of the proposed method. In addition, a stain normalization based on a novel mathematical technique, named ICD (Inverse Color Deconvolution), is developed for immunohistochemical (IHC) staining in histopathological images. In conclusion, the proposed techniques achieve satisfactory results compared to state-of-the-art methods in the same research field. The workflow proposed in this thesis and the developed algorithms can be employed for the analysis and interpretation of other biomedical signals and for digital medical image analysis.
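    For orientation, the sketch below shows the classical Ruifrok–Johnston color deconvolution available in scikit-image, which separates hematoxylin, eosin and DAB channels; it only illustrates the general stain-separation idea, not the SCAN or ICD methods proposed in the thesis.

```python
# Generic stain separation with scikit-image's color deconvolution.
# Illustrative only; this is not the SCAN or ICD method from the thesis.
import numpy as np
from skimage import data
from skimage.color import rgb2hed, hed2rgb

ihc_rgb = data.immunohistochemistry()          # sample IHC image bundled with skimage
ihc_hed = rgb2hed(ihc_rgb)                     # -> hematoxylin, eosin, DAB channels

# rebuild an RGB view of the hematoxylin channel alone
null = np.zeros_like(ihc_hed[:, :, 0])
h_only = hed2rgb(np.stack((ihc_hed[:, :, 0], null, null), axis=-1))
print(h_only.shape)
```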

    Radiomics in prostate cancer: an up-to-date review

    Prostate cancer (PCa) is the most commonly diagnosed malignancy in the male population worldwide. Diagnosis, identification of aggressive disease, and post-treatment follow-up need a more comprehensive and holistic approach. Radiomics is the extraction and interpretation of image phenotypes in a quantitative manner. Radiomics may provide an advantage through advancements in imaging modalities and through the potential power of artificial intelligence techniques, by translating those features into predictions of clinical outcome. This article gives an overview of the current evidence on methodology and reviews the available literature on radiomics in PCa patients, highlighting its potential for personalized treatment and future applications.
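    As a hedged illustration of what a radiomic feature is, the sketch below computes a handful of GLCM texture statistics on a placeholder region of interest; real radiomics pipelines compute many more standardized feature classes on segmented clinical images.

```python
# A few GLCM texture features on a stand-in ROI; illustrative, not a clinical pipeline.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(42)
roi = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)    # quantized ROI stand-in

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```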

    Automatic Esophageal Abnormality Detection and Classification

    Esophageal cancer is one of the deadliest cancers worldwide, ranking sixth among all types of cancer. Early esophageal cancer typically causes no symptoms and mainly arises from overlooked or untreated premalignant abnormalities in the esophagus tube. Endoscopy is the main tool used for the detection of abnormalities, and the cell deformation stage is confirmed by taking biopsy samples. The process of detection and classification is considered challenging for several reasons: different types of abnormalities (including early cancer stages) can be located randomly throughout the esophagus tube; abnormal regions can have various sizes and appearances, which makes them difficult to capture; and discriminating the columnar mucosa from the metaplastic epithelium can fail. Although many studies have been conducted, this remains a challenging task, and improving the accuracy of automatically classifying and detecting different esophageal abnormalities is an ongoing field. This thesis aims to develop novel automated methods for the detection and classification of abnormal esophageal regions (precancerous and cancerous) from endoscopic images and videos. In this thesis, firstly, the abnormality stage of esophageal cell deformation is classified from confocal laser endomicroscopy (CLE) images. CLE is an endoscopic tool that provides a digital pathology view of the esophagus cells. The classification is achieved by enhancing the internal features of the CLE image using a novel enhancement filter that utilizes fractional integration and differentiation. Different imaging features, including Multi-Scale pyramid rotation LBP (MP-RLBP), gray level co-occurrence matrices (GLCM), fractal analysis, fuzzy LBP and maximally stable extremal regions (MSER), are calculated from the enhanced image to ensure a robust classification result. The support vector machine (SVM) and random forest (RF) classifiers are employed to classify each image into its pathology stage. Secondly, we propose an automatic detection method to locate abnormal regions in high definition white light (HD-WLE) endoscopic images. We first investigate the performance of different deep learning detection methods on our dataset. Then we propose an approach that combines hand-designed Gabor features with extracted convolutional neural network features, which are used by Faster R-CNN to detect abnormal regions. Moreover, to further improve the detection performance, we propose a novel two-input network named GFD-Faster RCNN. The proposed method generates a Gabor fractal image from the original endoscopic image using Gabor filters. Features are then learned separately from the endoscopic image and the generated Gabor fractal image using a densely connected convolutional network to detect abnormal esophageal regions. Thirdly, we present a novel model to detect abnormal regions in endoscopic videos. We design a 3D Sequential DenseConvLstm network to extract spatiotemporal features from the input videos, which are utilized by a region proposal network and an ROI pooling layer to detect abnormal regions in each frame throughout the video. Additionally, we suggest an FS-CRF post-processing method that incorporates a Conditional Random Field (CRF) at the frame level to recover missed abnormal regions in neighborhood frames within the same clip.
    The methods are evaluated on four datasets: (1) the CLE dataset, used for the classification model; (2) the publicly available Kvasir dataset; (3) the MICCAI’15 Endovis challenge dataset, with datasets (2) and (3) used to evaluate the detection model on endoscopic images; and (4) the Gastrointestinal Atlas dataset, used to evaluate the video detection model. The experimental results demonstrate promising performance of the different models, which have outperformed the state-of-the-art methods.
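    The hand-designed Gabor features mentioned above can be sketched as a small filter bank applied to an endoscopic frame; the frequencies and orientations below are illustrative choices, not the settings used in the thesis.

```python
# Gabor filter-bank responses for a (placeholder) endoscopic frame.
# Parameters are illustrative, not the thesis configuration.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gabor

frame = np.random.rand(256, 256, 3)            # placeholder endoscopic frame
gray = rgb2gray(frame)

responses = []
for frequency in (0.1, 0.2, 0.3):
    for theta in np.arange(0, np.pi, np.pi / 4):
        real, imag = gabor(gray, frequency=frequency, theta=theta)
        responses.append(np.hypot(real, imag))  # magnitude response per filter

gabor_stack = np.stack(responses, axis=0)      # (12, 256, 256) feature maps
print(gabor_stack.shape)
```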

    Feature extraction in image processing and deep learning

    This thesis develops theoretical analysis of the approximation properties of neural networks and algorithms to extract useful features of images in the fields of deep learning, quantum energy regression and cancer image analysis. The separate applications are connected by the use of representation systems from harmonic analysis; in this thesis we focus on deriving proper representations of data using the Gabor transform. A novel neural network with proven approximation properties dependent on its size is developed using the Gabor system. In quantum energy regression, an invariant representation of chemical molecules based on electron densities is obtained via the Gabor transform. Additionally, we investigate pooling functions, the feature extractors in deep neural networks, and develop a novel pooling strategy, originating from the maximal function, with a stability property and stable performance. The anisotropic representation of data offered by the Shearlet transform is also explored for its ability to detect regions of interest around nuclei in cancer images.
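    For readers unfamiliar with the Gabor system, the hedged sketch below builds discrete Gabor atoms (translated, modulated Gaussian windows) and computes their inner products with a test signal; it illustrates the representation itself, not the network construction developed in the thesis.

```python
# Discrete Gabor atoms and coefficients of a test signal; illustrative only.
import numpy as np

def gabor_atom(length, center, frequency, width):
    """Gaussian window translated to `center`, modulated at integer `frequency`."""
    t = np.arange(length)
    window = np.exp(-0.5 * ((t - center) / width) ** 2)
    return window * np.exp(2j * np.pi * frequency * t / length)

signal = np.sin(2 * np.pi * 30 * np.linspace(0, 1, 512, endpoint=False))

# Gabor coefficients on a coarse grid of translations and modulations
coefficients = np.array([
    [np.vdot(gabor_atom(512, c, f, width=16), signal) for f in range(0, 256, 8)]
    for c in range(0, 512, 32)
])
print(coefficients.shape)                      # (translations, frequencies)
```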