153 research outputs found

    Segmentation of articular cartilage and early osteoarthritis based on the fuzzy soft thresholding approach driven by modified evolutionary ABC optimization and local statistical aggregation

    Articular cartilage assessment, aimed at identifying cartilage loss, is a crucial task in clinical orthopedic practice. Conventional software (SW) tools allow only visualization of the knee structure, without post-processing that offers objective cartilage modeling. In this paper, we propose a multiregional segmentation method intended to provide a mathematical model reflecting the physiological morphological structure of the cartilage and the spots corresponding to early cartilage loss, which are poorly recognizable by the naked eye in magnetic resonance imaging (MRI). The proposed segmentation model consists of two pixel-classification stages. First, the image histogram is decomposed using a sequence of triangular fuzzy membership functions whose localization is driven by a modified artificial bee colony (ABC) optimization algorithm, which generates a random sequence of candidate solutions based on real cartilage features. In the second stage, a pixel's original membership in its segmentation class may be modified by local statistical aggregation, which takes into account spatial relationships with adjacent pixels. In this way, the image noise and artefacts commonly present in MR images can be identified and eliminated, making the model robust and sensitive with regard to distorting signals. We evaluated the proposed model on 2D MR image records and present different clinical cases of articular cartilage segmentation with identification of cartilage loss. In the final part of the analysis, we compared the model's performance against selected conventional methods on MR image records corrupted by additive image noise.
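    The abstract above describes a two-stage pixel classification: triangular fuzzy membership functions placed over the grey-level histogram, with their positions tuned by a modified ABC optimizer, followed by local statistical aggregation. The minimal sketch below illustrates only the first, fuzzy-membership classification stage under stated assumptions: the membership centres are supplied directly rather than found by ABC, the grey range is 0-255, and all function names are hypothetical.

```python
import numpy as np

def triangular_membership(x, left, center, right):
    """Triangular fuzzy membership function over grey levels x."""
    x = np.asarray(x, dtype=float)
    rising = (x - left) / max(center - left, 1e-12)
    falling = (right - x) / max(right - center, 1e-12)
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)

def fuzzy_threshold_segment(image, centers):
    """Assign each pixel to the class whose triangular membership is highest.

    `centers` are the membership peaks over the 0-255 grey-level range;
    in the paper they would be placed by the modified ABC optimizer,
    here they are simply supplied by the caller.
    """
    centers = sorted(centers)
    # Each triangle spans from the previous centre to the next one,
    # clamped to the histogram range [0, 255].
    supports = [0] + centers + [255]
    memberships = []
    for i, c in enumerate(centers):
        left, right = supports[i], supports[i + 2]
        memberships.append(triangular_membership(image, left, c, right))
    return np.argmax(np.stack(memberships, axis=0), axis=0)

# Hypothetical usage on a synthetic 2D "MR slice"
img = np.random.randint(0, 256, size=(64, 64))
labels = fuzzy_threshold_segment(img, centers=[40, 120, 200])
```

    The second stage (local statistical aggregation over adjacent pixels) and the noise-robustness analysis are not reproduced here; the sketch only shows how a histogram-based fuzzy membership assignment produces a multiregional label map.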

    HSMA_WOA: A hybrid novel Slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images

    Recently, a novel virus, COVID-19, has spread worldwide, starting in China and moving across the globe, claiming many lives. Many attempts have been made to identify COVID-19 infection, and analysis of X-ray images is one of them. According to X-ray analysis, COVID-19 can cause bilateral pulmonary parenchymal ground-glass and consolidative pulmonary opacities, sometimes with a rounded morphology and a peripheral lung distribution. Unfortunately, determining from X-ray images alone whether a person is infected with COVID-19 is very difficult. X-ray images can be classified using machine learning techniques to determine whether a person is severely infected, mildly infected, or not infected. To improve the classification accuracy of the machine learning, the region of interest within the image that contains the features of COVID-19 must be extracted. This is known as the image segmentation problem (ISP). Many techniques have been proposed to tackle the ISP; the most commonly used, owing to its simplicity, speed, and accuracy, is threshold-based segmentation. This paper proposes a new hybrid thresholding approach to the ISP for COVID-19 chest X-ray images that integrates a novel meta-heuristic, the slime mould algorithm (SMA), with the whale optimization algorithm (WOA) to maximize Kapur's entropy. The performance of the integrated SMA was evaluated on 12 chest X-ray images with threshold levels up to 30 and compared with five algorithms, the L-SHADE algorithm, WOA, the firefly algorithm (FFA), the Harris hawks algorithm (HHA), and the salp swarm algorithm (SSA), as well as the standard SMA. The experimental results demonstrate that the proposed algorithm outperforms the standard SMA under Kapur's entropy on all the metrics used, while the standard SMA in turn performs better than the other compared algorithms on all metrics.
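    The objective that the hybrid SMA/WOA search maximizes in the abstract above is Kapur's entropy over the thresholded histogram classes. The sketch below evaluates that standard criterion for a candidate threshold vector; the meta-heuristic search itself is not reproduced, and the histogram and function names are illustrative assumptions.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's entropy for a set of thresholds over a grey-level histogram.

    The hybrid SMA/WOA search in the paper would maximize this value;
    here we only evaluate it for a given candidate threshold vector.
    """
    p = hist / max(hist.sum(), 1)
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()          # class probability mass
        if w <= 0:
            continue
        q = p[lo:hi] / w            # normalized in-class distribution
        q = q[q > 0]
        total += -(q * np.log(q)).sum()
    return total

# Hypothetical usage: compare two candidate threshold sets on a random histogram
hist = np.random.randint(1, 100, size=256).astype(float)
print(kapur_entropy(hist, [80, 160]), kapur_entropy(hist, [60, 120, 180]))
```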

    Improved Otsu and Kapur approach for white blood cells segmentation based on LebTLBO optimization for the detection of Leukemia.

    Full text link
    The diagnosis of leukemia involves the detection of abnormal characteristics of blood cells by a trained pathologist. Currently, this is done manually by observing the morphological characteristics of white blood cells in microscopic images. Although some equipment-based and chemical-based tests are available, the use and adoption of automated computer-vision-based systems remains an issue. Certain software frameworks exist in the literature; however, they have not yet been adopted commercially, so there is a need for an automated, software-based framework for the detection of leukemia. In software-based detection, segmentation is the first critical stage, producing the region of interest for further accurate diagnosis. This paper therefore explores an efficient hybrid segmentation approach as part of a more effective system for leukemia diagnosis. A popular publicly available database, the acute lymphoblastic leukemia image database (ALL-IDB), is used in this research. First, the images are pre-processed and segmented using multilevel thresholding with the Otsu and Kapur methods. To further optimize segmentation performance, the learning-enthusiasm-based teaching-learning-based optimization (LebTLBO) algorithm is employed. Different metrics are used to measure system performance, and a comparative analysis of the proposed methodology against existing benchmark methods is carried out. The proposed approach proves better than earlier techniques in terms of PSNR and similarity index, and the results show a significant improvement in the performance measures when the threshold algorithms are optimized with the LebTLBO technique.
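    Alongside Kapur's entropy, the abstract above uses Otsu's criterion for multilevel thresholding, i.e. the between-class variance that the LebTLBO search would maximize. The following sketch evaluates that criterion for a candidate threshold set; the LebTLBO optimizer and the ALL-IDB pre-processing are not reproduced, and the example histogram is synthetic.

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance criterion used in multilevel Otsu thresholding.

    LebTLBO in the paper would search for thresholds maximizing this value;
    here it is simply evaluated for a given candidate.
    """
    levels = np.arange(len(hist))
    p = hist / max(hist.sum(), 1)
    mu_total = (levels * p).sum()   # global mean grey level
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()          # class probability mass
        if w <= 0:
            continue
        mu = (levels[lo:hi] * p[lo:hi]).sum() / w
        variance += w * (mu - mu_total) ** 2
    return variance

# Hypothetical usage on a synthetic blood-smear histogram
hist = np.random.randint(1, 100, size=256).astype(float)
print(otsu_between_class_variance(hist, [90, 170]))
```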

    Grey Scale Image Multi-Thresholding Using Moth-Flame Algorithm and Tsallis Entropy

    In the current era, image evaluation plays a foremost role in a variety of domains where the processing of digital images is essential to identify vital information. Image multi-thresholding is a vital image pre-processing task in which the available digital image is enhanced by grouping similar pixel values. Normally, digital test images are available in RGB/greyscale format, and an appropriate processing methodology is essential to treat them. In the proposed approach, Tsallis Entropy (TE) supported multi-level thresholding is applied to benchmark greyscale imagery of dimension 512x512x1 pixels for chosen threshold counts (T = 2, 3, 4, 5). This work formulates the Cost Value (CV) to be considered during the optimization search, and the proposed work is executed by taking maximization of the TE as the CV. The entire thresholding task is carried out using the Moth-Flame Algorithm (MFA), and the results are validated using image quality measures at the various thresholds. The results attained with the MFA are better than those of CS, BFO, PSO, and GA.
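    The Cost Value maximized by the Moth-Flame Algorithm in the abstract above is the Tsallis entropy of the thresholded classes. The sketch below evaluates one common multilevel formulation, the sum of per-class Tsallis entropies plus a pseudo-additive product term, for a candidate threshold set; the exact form and the entropic index q used in the paper are assumptions here, and the MFA search itself is not shown.

```python
import numpy as np

def tsallis_entropy_objective(hist, thresholds, q=0.8):
    """Multilevel Tsallis-entropy criterion for a candidate threshold set.

    Uses the common pseudo-additive form: sum of per-class entropies plus
    (1 - q) times their product. The Moth-Flame Algorithm in the paper
    would maximize this value; q is the Tsallis entropic index (assumed).
    """
    p = hist / max(hist.sum(), 1)
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    entropies = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()          # class probability mass
        if w <= 0:
            entropies.append(0.0)
            continue
        pk = p[lo:hi] / w           # normalized in-class distribution
        entropies.append((1.0 - (pk ** q).sum()) / (q - 1.0))
    return sum(entropies) + (1.0 - q) * np.prod(entropies)

# Hypothetical usage with T = 2 thresholds on a random greyscale histogram
hist = np.random.randint(1, 200, size=256).astype(float)
print(tsallis_entropy_objective(hist, [85, 170]))
```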