
    Hyperspectral colon tissue cell classification

    A novel algorithm to discriminate between normal and malignant tissue cells of the human colon is presented. The microscopic-level images of human colon tissue cells were acquired using hyperspectral imaging technology at contiguous wavelength intervals of visible light. While hyperspectral imagery provides a wealth of information, its large size normally implies high computational processing complexity. Several methods exist to avoid the so-called curse of dimensionality and hence reduce the computational complexity. In this study, we experimented with Principal Component Analysis (PCA) and two modifications of Independent Component Analysis (ICA). In the first stage of the algorithm, the extracted components are used to separate four constituent parts of the colon tissue: nuclei, cytoplasm, lamina propria, and lumen. The segmentation is performed in an unsupervised fashion using the nearest centroid clustering algorithm. The segmented image is further used, in the second stage of the classification algorithm, to exploit the spatial relationship between the labeled constituent parts. Experimental results using supervised Support Vector Machine (SVM) classification based on multiscale morphological features reveal discrimination between normal and malignant tissue cells with a reasonable degree of accuracy.
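    The first stage described above — spectral dimensionality reduction followed by unsupervised centroid-based clustering into four tissue constituents — can be sketched as follows. This is an illustrative approximation on synthetic data, not the authors' code: the cube dimensions, component count, and use of scikit-learn's KMeans (a nearest-centroid clustering) are assumptions for demonstration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Synthetic stand-in for a hyperspectral cube: 32x32 pixels, 40 spectral bands.
    cube = rng.normal(size=(32, 32, 40))
    pixels = cube.reshape(-1, 40)          # one spectrum per pixel

    # Reduce the spectral dimension to mitigate the curse of dimensionality.
    components = PCA(n_components=4).fit_transform(pixels)

    # Unsupervised clustering into four constituent classes (nuclei, cytoplasm,
    # lamina propria, lumen in the paper; cluster labels here are arbitrary).
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(components)
    segmentation = labels.reshape(32, 32)
    print(segmentation.shape)
    ```

    On real data, the segmentation map produced here would feed the second, supervised SVM stage.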

    CT diagnosis of early stroke: the initial approach to the new CAD tool based on multiscale estimation of ischemia

    Background: Computer-aided diagnosis (CAD) has become one of the most important diagnostic tools for urgent conditions such as cerebral stroke and other life-threatening states where time plays a crucial role. Routine CT is still diagnostically insufficient in the hyperacute stage of stroke, which falls within the therapeutic window for thrombolytic therapy. The authors present a computer assistant for early ischemic stroke diagnosis that supports radiologic interpretation. A new semantic-visualization system of ischemic symptoms, applied to noncontrast routine CT examinations, was based on multiscale image processing and diagnostic content estimation. Material/Methods: 95 sets of examinations in patients admitted to hospital with symptoms suggesting stroke were evaluated by four radiologists from two medical centers who were unaware of the final clinical findings. All of the consecutive cases were considered to have no direct CT signs of hyperacute ischemia. In the first test stage, only the CT scans performed at admission were evaluated independently by the radiologists. Next, the same early scans were evaluated again with the additional use of a multiscale computer assistant of stroke (MulCAS). The computerized suggestion, with increased sensitivity to subtle image manifestations of cerebral ischemia, was presented as an additional view showing the estimated diagnostic content with enhanced stroke symptoms, synchronized to the routine CT data preview. Follow-up CT examinations and clinical features confirmed or excluded the diagnosis of stroke, constituting the 'gold standard' for verifying stroke detection performance. Results: Higher AUC (area under curve) values were found for MulCAS-aided radiological diagnosis for all readers, and the differences were statistically significant under random-readers, random-cases parametric and non-parametric DBM MRMC analysis. Sensitivity and specificity of acute stroke detection for the readers increased by 30% and 4%, respectively.
    Conclusions: Routine CT complemented with the proposed method of computer-assisted diagnosis provided noticeably better diagnostic efficiency for acute stroke according to the ratings and opinions of all test readers. Further research includes fully automatic detection of hypodense regions to complete the assisted indications and formulate suggestions for stroke cases more objectively. Planned prospective studies will allow more accurate evaluation of the impact of this CAD tool on diagnosis and further treatment in patients suffering from stroke. It remains to be determined whether this method can be applied widely.
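    The reader-study metrics reported above (sensitivity and specificity) are computed from per-reader confusion counts. A minimal sketch, using made-up counts that are not the study's data:

    ```python
    # Sensitivity: fraction of true stroke cases the reader detects.
    # Specificity: fraction of non-stroke cases the reader correctly clears.
    def sensitivity(tp, fn):
        return tp / (tp + fn)

    def specificity(tn, fp):
        return tn / (tn + fp)

    # Hypothetical counts for one reader (illustration only).
    print(sensitivity(tp=18, fn=12))  # 0.6
    print(specificity(tn=45, fp=5))   # 0.9
    ```

    In the study, these per-reader rates were compared before and after MulCAS aid, with significance assessed via DBM MRMC analysis.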

    DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online. Comment: Accepted by TPAMI.
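    The core idea of atrous convolution — the same filter taps spaced `rate` samples apart, widening the field of view at no extra parameter cost — can be sketched in 1-D. This is a toy NumPy illustration, not DeepLab's implementation (which applies the 2-D analogue inside a DCNN):

    ```python
    import numpy as np

    def atrous_conv1d(x, w, rate):
        """Correlate x with filter w whose taps are spaced `rate` apart.

        The effective receptive field is (len(w) - 1) * rate + 1, so it grows
        with `rate` while the number of filter parameters stays fixed.
        """
        k = len(w)
        span = (k - 1) * rate + 1
        out = np.empty(len(x) - span + 1)
        for i in range(len(out)):
            out[i] = sum(w[j] * x[i + j * rate] for j in range(k))
        return out

    x = np.arange(10, dtype=float)
    w = np.array([1.0, 1.0, 1.0])
    print(atrous_conv1d(x, w, rate=1))  # rate 1 = standard convolution
    print(atrous_conv1d(x, w, rate=2))  # same 3 taps, wider field of view
    ```

    With `rate=1` this reduces to an ordinary convolution; increasing `rate` inserts "holes" (French *trous*) between the filter taps, which is how DeepLab computes denser feature responses without extra computation per tap.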