
    Gigapixel Histopathological Image Analysis using Attention-based Neural Networks

    Although CNNs are widely considered the state-of-the-art models in many image analysis applications, one of the main open challenges is training a CNN on high-resolution images. The strategies proposed so far involve either rescaling the image or processing parts of it individually. Neither strategy suits images such as gigapixel histopathological images: a strong reduction in resolution inherently causes a loss of discriminative information, while analyzing single parts of the image either suffers from a lack of global information or implies a heavy annotation workload to select the significant parts. We propose a method for analyzing gigapixel histopathological images using only weak image-level labels. Two analysis tasks are considered: binary classification and prediction of the tumor proliferation score. Our method is based on a CNN structure consisting of a compressing path and a learning path. In the compressing path, the gigapixel image is packed into a grid-based feature map by a residual network that extracts a feature vector from each patch into which the image has been divided. In the learning path, attention modules are applied to the grid-based feature map, taking into account the spatial correlations of neighboring patch features to find regions of interest, which are then used for the final whole-slide analysis. Our method integrates both global and local information, is flexible with regard to the size of the input images, and requires only weak image-level labels. Comparisons with state-of-the-art methods on two well-known datasets, Camelyon16 and TUPAC16, confirm the validity of the proposed model.
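    The compressing/learning split described above can be sketched in a few lines of NumPy: each patch is projected to a fixed-length feature vector to form the grid-based feature map, and an attention layer scores every patch feature and pools the grid into a single slide-level descriptor. The linear projection and the random attention weights below are stand-ins for the trained residual network and attention modules, not the paper's actual model.

```python
import numpy as np

def compress(slide, patch=32, dim=16, seed=0):
    """Compressing path (sketch): tile the slide into patches and map each
    to a feature vector, producing a grid-based feature map. The random
    linear projection stands in for the residual feature extractor."""
    H, W = slide.shape
    gh, gw = H // patch, W // patch
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((patch * patch, dim)) / patch
    grid = np.empty((gh, gw, dim))
    for i in range(gh):
        for j in range(gw):
            tile = slide[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            grid[i, j] = tile.reshape(-1) @ P
    return grid

def attention_pool(grid, seed=1):
    """Learning path (sketch): score each patch feature, softmax over the
    whole grid, and pool into one slide-level descriptor."""
    gh, gw, dim = grid.shape
    feats = grid.reshape(-1, dim)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)          # stand-in for learned attention weights
    scores = feats @ w
    alpha = np.exp(scores - scores.max()) # numerically stable softmax
    alpha /= alpha.sum()
    return alpha.reshape(gh, gw), alpha @ feats
```

    The attention map `alpha` plays the role of the regions of interest: patches with high weight dominate the pooled descriptor used for the final whole-slide decision.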

    BIRD: Watershed Based IRis Detection for mobile devices

    Communication with a central iris database over common wireless technologies, together with iris acquisition in the field on devices such as tablets and smartphones, are important capabilities of a mobile iris identification device. However, when images are acquired by mobile devices under uncontrolled conditions, the resulting images are noisy and the effectiveness of the iris recognition system is significantly degraded. This paper proposes a technique based on the watershed transform for iris detection in noisy images captured by mobile devices. The method exploits the information related to the limbus to segment the periocular region and fuses its matching score with that of the iris to achieve greater accuracy in the recognition phase.
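    The score-merging step can be illustrated with a simple sum-rule fusion: both sets of matching scores are min-max normalised to a common range and then combined with a weight. The weight value here is a hypothetical choice for illustration; the abstract does not specify the exact fusion rule.

```python
def normalize(scores):
    """Min-max normalisation to [0, 1] so both modalities are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(iris_scores, periocular_scores, w=0.6):
    """Weighted sum-rule fusion of iris and periocular matching scores.
    The weight w=0.6 is a hypothetical choice, not taken from the paper."""
    return [w * a + (1 - w) * b
            for a, b in zip(normalize(iris_scores), normalize(periocular_scores))]
```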

    A Deep Learning Approach for Breast Invasive Ductal Carcinoma Detection and Lymphoma Multi-Classification in Histological Images

    Accurately identifying and categorizing cancer structures/sub-types in histological images is an important clinical task involving a considerable workload and a specific subspecialty of pathologists. Digitizing pathology is a current trend that provides large amounts of visual data, allowing a faster and more precise diagnosis through the development of automatic image analysis techniques. Recent studies have shown promising results for the automatic analysis of cancer tissue by using deep learning strategies that automatically extract and organize the discriminative information from the data. This paper explores deep learning methods for the automatic analysis of Hematoxylin and Eosin stained histological images of breast cancer and lymphoma. In particular, a deep learning approach is proposed for two different use cases: the detection of invasive ductal carcinoma in breast histological images and the classification of lymphoma sub-types. Both use cases have been addressed by adopting a residual convolutional neural network that is part of a convolutional autoencoder network (i.e., FusionNet). The performance has been evaluated on public datasets of digital histological images and compared with that obtained by different deep neural networks (UNet and ResNet). Additionally, comparisons with the state of the art, covering different deep learning approaches, have been considered. The experimental results show an improvement of 5.06% in F-measure for the detection task and of 1.09% in accuracy for the classification task.
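    The residual building block underlying ResNet- and FusionNet-style encoders can be sketched as follows: the block's output adds the input back through an identity shortcut, which is what keeps very deep stacks trainable. The two-matrix, fully-connected form below is a simplification of the convolutional blocks actually used in those networks.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Identity-shortcut residual block (simplified, fully connected):
    output = relu(x + F(x)) with F(x) = W2 @ relu(W1 @ x).
    With zero weights the block reduces exactly to the identity."""
    return relu(x + W2 @ relu(W1 @ x))
```

    The identity-at-zero property is the design point: a deep stack of such blocks can start close to the identity mapping and only learn residual corrections.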

    Improving Breast Tumor Multi-Classification from High-Resolution Histological Images with the Integration of Feature Space Data Augmentation

    To support pathologists in breast tumor diagnosis, deep learning plays a crucial role in the development of histological whole slide image (WSI) classification methods. However, automatic classification is challenging due to the high-resolution data and the scarcity of representative training data. To tackle these limitations, we propose a deep learning-based breast tumor gigapixel histological image multi-classifier integrated with a high-resolution data augmentation model that processes the entire slide by exploring its local and global information and generating different synthetic versions of it. The key idea is to perform both the classification and the augmentation in the feature latent space, reducing the computational cost while preserving the class label of the input. We adopt a deep learning-based multi-classification method and evaluate the contribution of a conditional generative adversarial network-based data augmentation model to the classifier’s performance on three tumor classes in the BRIGHT Challenge dataset. The proposed method achieves an average F1 score of 69.5 using only the WSI dataset of the Challenge. The results are comparable to those obtained by the Challenge winning method (71.6), which was also trained on the annotated tumor region dataset of the Challenge.
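    The idea of augmenting in the feature latent space while preserving the class label can be sketched as a class-conditioned perturbation of latent vectors: each synthetic feature is the original plus noise shifted by a per-class offset, so the label stays valid by construction. The random projection below is a stand-in for the trained conditional GAN generator, not the paper's model.

```python
import numpy as np

def augment_latent(features, labels, n_classes, scale=0.1, seed=0):
    """Feature-space augmentation sketch: perturb each latent feature
    vector with class-conditioned noise. `features` is (n, dim), `labels`
    is (n,) integer class ids. The class-offset matrix C is a hypothetical
    stand-in for a conditional GAN generator."""
    rng = np.random.default_rng(seed)
    onehot = np.eye(n_classes)[labels]            # (n, n_classes)
    dim = features.shape[1]
    C = rng.standard_normal((n_classes, dim))     # one offset direction per class
    noise = rng.standard_normal(features.shape)
    return features + scale * (noise + onehot @ C)
```

    Working at this level is much cheaper than synthesising gigapixel images: the augmentation operates on compact vectors instead of full-resolution slides.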

    Watershed Based Iris SEgmentation

    Recently, research interest in biometric systems and applications has grown significantly, aiming to bring the benefits of biometrics to a broader range of users. As signal processing and feature extraction play a very important role in biometric applications, these can be thought of as a particular subset of pattern recognition techniques. Most iris biometric systems have been designed for security applications and work on near-infrared (NIR) images. NIR images are not affected by illumination changes in visible light, allowing systems to work in both darker and lighter conditions. The other side of the coin is the very short distance required between the acquisition camera and the user, in addition to a strictly controlled pose of the eye. For these reasons, the viability of NIR-based systems in commercial applications is quite limited. Several efforts have been devoted to designing new iris biometric approaches for color images acquired in visible-wavelength light (VW). However, illumination changes significantly affect the iris pattern as well as the periocular region, making both segmentation and feature extraction harder than in NIR. In iris biometrics, segmentation is a crucial aspect, as it must be fast as well as accurate. To this aim, a new watershed-based approach for iris segmentation in color images is presented in this paper. The watershed transform is exploited to binarize an image of the eye, while circle fitting combined with a ranking approach is applied to approximate the iris boundary with a circle. The experimental results demonstrate the approach to be effective in terms of localization accuracy.
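    One standard way to carry out the circle-fitting step is the algebraic (Kåsa) least-squares fit, which recovers a circle's centre and radius from candidate boundary points in closed form. This is only an illustration of the fitting sub-step; the ranking approach the paper combines it with is omitted here.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit. `points` is an (n, 2)
    array of candidate boundary points; returns (cx, cy, r).
    Solves 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense,
    with r^2 = c + cx^2 + cy^2."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)
```

    The closed-form solve is fast, which matters here because segmentation must be both fast and accurate.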

    A new unsupervised approach for segmenting and counting cells in high-throughput microscopy image sets

    New technological advances in automated microscopy have given rise to large volumes of data, which have made human-based analysis infeasible, heightening the need for automatic systems for high-throughput microscopy applications. In particular, in the field of fluorescence microscopy, automatic tools for image analysis are making an essential contribution to increasing the statistical power of the cell analysis process. The development of these automatic systems is a difficult task due to both the diversification of the staining patterns and the local variability of the images. In this paper, we present an unsupervised approach for automatic cell segmentation and counting, namely CSC, in high-throughput microscopy images. The segmentation is performed by dividing the whole image into square patches that undergo a gray level clustering followed by an adaptive thresholding. Subsequently, the cell labeling is obtained by detecting the centers of the cells, using both distance transform and curvature analysis, and by applying a region growing process. The advantages of CSC are manifold. The foreground detection process works on gray levels rather than on individual pixels, so it proves to be very efficient. Moreover, the combination of distance transform and curvature analysis makes the counting process very robust to clustered cells. A further strength of the CSC method is the limited number of parameters that must be tuned. Indeed, two different versions of the method have been considered, CSC-7 and CSC-3, depending on the number of parameters to be tuned. The CSC method has been tested on several publicly available image datasets of real and synthetic images. Results in terms of standard metrics and spatially aware measures show that CSC outperforms the current state-of-the-art techniques.
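    The per-patch foreground detection step (gray-level clustering followed by adaptive thresholding) can be approximated with the classic iterative intermeans (isodata) threshold, which alternates between splitting the gray levels into two clusters and re-centring the threshold between the cluster means. This is a simplified stand-in for CSC's actual procedure, sketched per patch.

```python
import numpy as np

def patch_threshold(patch, iters=20):
    """Adaptive per-patch threshold via iterative intermeans (isodata):
    split gray levels into two clusters at t, then move t to the midpoint
    of the two cluster means until it stabilises. Working on gray levels
    rather than individual pixels keeps the step cheap."""
    g = patch.ravel().astype(float)
    t = (g.min() + g.max()) / 2.0
    for _ in range(iters):
        low, high = g[g <= t], g[g > t]
        if low.size == 0 or high.size == 0:
            break
        t_new = (low.mean() + high.mean()) / 2.0
        if t_new == t:
            break
        t = t_new
    return t
```

    Applying `patch > patch_threshold(patch)` yields the binary foreground mask; in the full pipeline this mask would then feed the distance-transform, curvature, and region-growing stages used for center detection and labeling.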