    Improved Classification of Histopathological images using the feature fusion of Thepade sorted block truncation code and Niblack thresholding

    Histopathology is the study of disease-affected tissues; it is particularly helpful in diagnosing a disease and in determining its severity and how rapidly it is spreading. It also shows how different human tissues can be recognized and how the alterations brought on by disease can be analyzed. Certain disease characteristics, such as lymphocytic infiltration of a malignancy, can only be determined from histopathological images. A histopathological image is the "gold standard" for diagnosing practically all forms of cancer. Early diagnosis and prognosis of cancer are essential for treatment and have become a requirement in cancer research. The importance and advantages of classifying cancer patients into higher-risk or lower-risk groups have motivated many researchers to study and improve the application of machine learning (ML) methods, so it is worthwhile to explore how well different ML algorithms classify these histopathological images. Feature extraction is crucial in this area of ML for differentiating between images. Features are the distinctive identifiers of an image that summarize its content, and they are extracted for discrimination between images using a variety of handcrafted algorithms. This paper presents a fusion of features extracted with Thepade sorted block truncation code (TSBTC) and the Niblack thresholding algorithm for the classification of histopathological images. Experimental validation is performed on the 960 images of the Kimiapath-960 histopathological image dataset using performance metrics such as sensitivity, specificity and accuracy. The best performance is observed for an ensemble of TSBTC N-ary and Niblack thresholding features, with 97.92% accuracy in 10-fold cross-validation.
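    The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, how TSBTC-style N-ary features and Niblack-thresholding statistics could be fused at the feature level and evaluated with 10-fold cross-validation in Python, using scikit-image's threshold_niblack and scikit-learn's cross_val_score. The dataset loader load_kimiapath960 and the random-forest classifier are placeholders, not part of the paper.

```python
import numpy as np
from skimage.filters import threshold_niblack
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def tsbtc_features(image, n=4):
    """Thepade-style sorted BTC: sort pixel intensities and take the mean
    of each of the n equal-sized partitions (N-ary variant, simplified)."""
    sorted_pixels = np.sort(image.ravel())
    return np.array([part.mean() for part in np.array_split(sorted_pixels, n)])


def niblack_features(image, window_size=15, k=0.2):
    """Binarize with Niblack's local threshold and summarize the result:
    foreground ratio plus mean/std of the local threshold surface."""
    thresh = threshold_niblack(image, window_size=window_size, k=k)
    binary = image > thresh
    return np.array([binary.mean(), thresh.mean(), thresh.std()])


def fused_features(image, n=4):
    """Feature-level fusion: concatenate both descriptors."""
    return np.concatenate([tsbtc_features(image, n), niblack_features(image)])


# Hypothetical loader returning grayscale images and their class labels.
# images, labels = load_kimiapath960()
# X = np.stack([fused_features(img) for img in images])
# scores = cross_val_score(RandomForestClassifier(), X, labels, cv=10)
# print("10-fold cross-validation accuracy:", scores.mean())
```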

    Machine learning methods for histopathological image analysis

    Abundant accumulation of digital histopathological images has led to increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathological images and the related tasks raise some issues that need to be considered. In this mini-review, we introduce the application of digital pathological image analysis using machine learning algorithms, address some problems specific to such analysis, and propose possible solutions. (Comment: 23 pages, 4 figures)

    Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data

    Detecting sentiment in natural language is tricky even for humans, which makes its automated detection all the more complicated. This research proposes a hybrid deep learning model for fine-grained sentiment prediction in real-time multimodal data. It combines the strengths of deep learning networks with machine learning to handle two specific semiotic systems, namely the textual (written text) and the visual (still images), and their combination within online content, using decision-level multimodal fusion. The proposed contextual ConvNet-SVMBoVW model has four modules: discretization, text analytics, image analytics, and decision. The input to the model is multimodal content, m ∈ {text, image, info-graphic}. The discretization module uses Google Lens to separate the text from the image; the two are then processed as discrete entities and sent to the respective text analytics and image analytics modules. The text analytics module determines sentiment using a convolutional neural network (ConvNet) enriched with the contextual semantics of SentiCircle, and an aggregation scheme is introduced to compute the hybrid polarity. A support vector machine (SVM) classifier trained on bag-of-visual-words (BoVW) features predicts the sentiment of the visual content. A Boolean decision module with a logical OR operation is added to the architecture; it validates and categorizes the output into five fine-grained sentiment categories (truth values), namely ‘highly positive,’ ‘positive,’ ‘neutral,’ ‘negative’ and ‘highly negative.’ The accuracy achieved by the proposed model is nearly 91%, an improvement over the accuracy obtained by the text and image modules individually.
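    As a rough illustration of the decision-level fusion step described above (not the paper's exact aggregation scheme), the sketch below maps text and image polarity scores, assumed to lie in [-1, 1], to the five fine-grained sentiment labels. The OR-style rule that falls back to whichever modality produced a score, and averages when both are present, as well as the label thresholds, are assumptions for illustration only.

```python
from typing import Optional


def to_label(score: float) -> str:
    """Map a polarity score in [-1, 1] to one of five fine-grained classes
    (the thresholds here are illustrative assumptions)."""
    if score <= -0.6:
        return "highly negative"
    if score <= -0.2:
        return "negative"
    if score < 0.2:
        return "neutral"
    if score < 0.6:
        return "positive"
    return "highly positive"


def fuse(text_score: Optional[float], image_score: Optional[float]) -> str:
    """OR-style decision module: use whichever modality produced a score,
    averaging when both are available (assumed aggregation rule)."""
    if text_score is None and image_score is None:
        return "neutral"
    if text_score is None:
        return to_label(image_score)
    if image_score is None:
        return to_label(text_score)
    return to_label((text_score + image_score) / 2.0)


print(fuse(0.7, 0.3))    # both modalities present -> "positive"
print(fuse(None, -0.8))  # image only -> "highly negative"
```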