
    An innovative technique for contrast enhancement of computed tomography images using normalized gamma-corrected contrast-limited adaptive histogram equalization

    Image contrast is an essential visual feature that determines whether an image is of good quality. In computed tomography (CT), captured images tend to be low contrast, a prevalent artifact that reduces image quality and hampers the extraction of useful information. A common tactic for correcting this artifact is to use histogram-based techniques. Although these techniques can improve contrast in many grayscale imaging applications, their results are mostly unacceptable for CT images because they introduce various faults: noise amplification, excess brightness, and imperfect contrast. Therefore, an improved version of contrast-limited adaptive histogram equalization (CLAHE) is introduced in this article to provide good brightness with decent contrast for CT images. The modification adds an initial normalized gamma correction phase that adjusts the gamma of the processed image, avoiding the excess brightness and imperfect contrast produced by the basic CLAHE. The newly developed technique is tested on synthetic and real degraded low-contrast CT images, where it contributes substantially to producing better-quality results. Moreover, the proposed contrast enhancement technique has low computational complexity, and its performance is compared against various versions of histogram-based enhancement techniques using three advanced image quality assessment metrics: the Universal Image Quality Index (UIQI), the Structural Similarity Index (SSIM), and the Feature Similarity Index (FSIM). The proposed technique provided acceptable results with no visible artifacts and outperformed all the comparable techniques.
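
    As a rough illustration of the pipeline described above (a normalized gamma correction stage followed by CLAHE), the sketch below uses OpenCV and NumPy. The brightness-based gamma heuristic, clip limit, and tile size are illustrative assumptions, not the parameters of the published method.

```python
# Hedged sketch, not the paper's code: normalized gamma correction followed by CLAHE.
import cv2
import numpy as np

def gamma_corrected_clahe(img_u8, gamma=None, clip_limit=2.0, tile_grid=(8, 8)):
    """Enhance an 8-bit grayscale CT slice: normalized gamma correction, then CLAHE."""
    # Normalize intensities to [0, 1] before estimating and applying gamma.
    norm = img_u8.astype(np.float32) / 255.0
    if gamma is None:
        # Illustrative heuristic (an assumption): pick gamma from the mean brightness
        # so dark images are brightened and overly bright images are darkened.
        gamma = np.log(0.5) / np.log(norm.mean() + 1e-6)
    corrected = np.clip(np.power(norm, gamma) * 255.0, 0, 255).astype(np.uint8)

    # Contrast-limited adaptive histogram equalization on the gamma-corrected image.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(corrected)

# Example: enhanced = gamma_corrected_clahe(cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE))
```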

    Rouleaux red blood cells splitting in microscopic thin blood smear images via local maxima, circles drawing, and mapping with original RBCs.

    Splitting rouleaux RBCs (chains of RBCs) from single RBCs, and further subdividing them, is a challenging task in the computer-assisted diagnosis of blood. It is relevant to complete blood count, anemia, leukemia, and malaria tests. Several automated techniques have been reported in the state of the art for this task, but they suffer from either under- or over-splitting. The current research presents a novel approach to split precisely the rouleaux RBCs that are frequently observed in thin blood smear images. Accordingly, this research addresses the rouleaux splitting problem in a realistic, efficient, and automated way by considering the distance transform and the local maxima of the rouleaux RBCs. Rouleaux RBCs are split by taking their local maxima as the centres of circles drawn with the midpoint circle algorithm. The resulting circles are then mapped onto the single RBCs within the rouleaux to preserve their original shapes. Results of the proposed approach on a standard data set are presented and analyzed statistically, achieving an average recall of 0.059, an average precision of 0.067, and an F-measure of 0.063 against ground truth established by visual inspection.
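
    The splitting steps described above (distance transform, local maxima as circle centres, circle drawing, and mapping the circles back onto the original cells) can be sketched roughly as follows with SciPy and scikit-image. Here circle_perimeter stands in for the midpoint circle algorithm, and the min_distance value and per-cell radius estimate are illustrative assumptions.

```python
# Hedged sketch of the splitting idea, not the authors' implementation.
import numpy as np
from scipy import ndimage as ndi
from skimage.draw import circle_perimeter
from skimage.feature import peak_local_max

def split_rouleaux(binary_mask, min_distance=7):
    """Return per-cell circle outlines clipped to a binary rouleaux mask."""
    mask = binary_mask.astype(bool)
    # Distance of every foreground pixel to the background; peaks lie near cell centres.
    dist = ndi.distance_transform_edt(mask)
    centres = peak_local_max(dist, min_distance=min_distance, labels=mask.astype(int))

    outlines = np.zeros_like(mask)
    for r, c in centres:
        radius = max(int(round(dist[r, c])), 1)  # radius estimated from the distance map
        rr, cc = circle_perimeter(int(r), int(c), radius, shape=mask.shape)
        outlines[rr, cc] = True

    # Keep only the circle pixels lying on the original chain so each recovered
    # single RBC preserves the rouleaux's original shape.
    return outlines & mask
```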

    Crowd region detection in outdoor scenes using color spaces

    Crowd detection and counting using a static and dynamic platform: state of the art

    Automated object detection and crowd density estimation are popular and important areas of visual surveillance research. The last decades have witnessed much significant research in this field; however, it remains a challenging problem for automatic visual surveillance. The ever-increasing research on crowd dynamics and crowd motion necessitates a detailed and updated survey of the techniques and trends in this field. This paper presents a survey of crowd detection and crowd density estimation from moving platforms and reviews the different methods employed for this purpose. The review categorizes and delineates the detection and counting estimation methods that have been applied to the examination of scenes from static and moving platforms.

    Image Enhancement and Segmentation Techniques for Detection of Knee Joint Diseases: A Survey

    Knee bone diseases are rare but can be highly destructive. Magnetic resonance imaging (MRI) is the main approach for identifying knee cancer and guiding its treatment. Normally, knee cancers are pointed out with the help of different MRI techniques, and image analysis strategies then interpret these images. Computer-based medical image analysis is attracting researchers' interest due to its advantages in speed and accuracy over traditional techniques. The focus of the current research is MRI-based medical image analysis for knee bone disease detection. Accordingly, several approaches to feature extraction and segmentation for knee bone cancer are analyzed and compared on a benchmark database. Finally, the current state of the art is reviewed and future directions are proposed.

    Full-resolution Lung Nodule Segmentation from Chest X-ray Images using Residual Encoder-Decoder Networks

    Lung cancer is the leading cause of cancer death, and early diagnosis is associated with a positive prognosis. Chest X-ray (CXR) provides an inexpensive imaging mode for lung cancer diagnosis. Suspicious nodules are difficult to distinguish from vascular and bone structures using CXR. Computer vision has previously been proposed to assist human radiologists in this task; however, leading studies use down-sampled images and computationally expensive methods with unproven generalization. Instead, this study localizes lung nodules using efficient encoder-decoder neural networks that process full-resolution images, avoiding any signal loss from down-sampling. The encoder-decoder networks are trained and tested on the JSRT lung nodule dataset and then used to localize lung nodules in an independent external CXR dataset. Sensitivity and false positive rates are measured with an automated framework to eliminate observer subjectivity. These experiments allow the determination of the optimal network depth, image resolution, and pre-processing pipeline for generalized lung nodule localization. We find that nodule localization is influenced by subtlety, with more subtle nodules being detected in earlier training epochs. We therefore propose a novel self-ensemble model built from three consecutive epochs centered on the validation optimum. This ensemble achieved a sensitivity of 85% in 10-fold internal testing at a false positive rate of 8 per image, and a sensitivity of 81% at a false positive rate of 6 per image following morphological false positive reduction. This result is comparable to that of more computationally complex systems based on linear and spatial filtering, but with a sub-second inference time that is faster than other methods. The proposed algorithm achieved excellent generalization on an external dataset, with a sensitivity of 77% at a false positive rate of 7.6 per image.
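
    A minimal sketch of the self-ensemble step is given below, assuming PyTorch, a hypothetical ResidualEncoderDecoder model, and checkpoints saved as state dicts at the three epochs around the validation optimum; this is not the authors' implementation.

```python
# Hedged sketch: average the outputs of one model restored from three epoch checkpoints.
import torch

def self_ensemble_predict(model, checkpoint_paths, image, device="cpu"):
    """Average nodule probability maps from several saved epochs of the same network.

    model: an encoder-decoder (e.g. a hypothetical ResidualEncoderDecoder instance)
    checkpoint_paths: state-dict files, e.g. the epochs just before, at, and after
        the validation optimum
    image: a (1, H, W) tensor holding one full-resolution CXR
    """
    model = model.to(device).eval()
    probs = []
    with torch.no_grad():
        for path in checkpoint_paths:
            model.load_state_dict(torch.load(path, map_location=device))
            logits = model(image.unsqueeze(0).to(device))  # (1, 1, H, W) nodule logits
            probs.append(torch.sigmoid(logits))
    return torch.mean(torch.stack(probs), dim=0)           # ensemble probability map
```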

    Factors determining generalization in deep learning models for scoring COVID-CT images

    The COVID-19 pandemic has inspired unprecedented data collection and computer vision modelling efforts worldwide, focused on the diagnosis of COVID-19 from medical images. However, these models have found limited, if any, clinical application, due in part to unproven generalization to data sets beyond their source training corpus. This study investigates the generalizability of deep learning models using publicly available COVID-19 Computed Tomography data through cross-dataset validation. The predictive ability of these models for COVID-19 severity is assessed using an independent dataset that is stratified for COVID-19 lung involvement. Each inter-dataset study is performed using histogram equalization and contrast-limited adaptive histogram equalization, with and without a learnable Gabor filter. We show that under certain conditions, deep learning models can generalize well to an external dataset, with F1 scores of up to 86%. The best-performing model shows predictive accuracy of between 75% and 96% for lung involvement scoring against an external, expertly stratified dataset. From these results we identify key factors promoting deep learning generalization: primarily the uniform acquisition of the training images, and secondly diversity in CT slice position.
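
    A rough sketch of the cross-dataset (inter-dataset) validation protocol, fit on one corpus and evaluate on every other, is shown below. The train_model callable, the feature/label arrays, and the weighted F1 averaging are placeholders, not the authors' exact pipeline.

```python
# Hedged sketch of inter-dataset (cross-dataset) validation, not the study's code.
from sklearn.metrics import f1_score

def cross_dataset_f1(datasets, train_model):
    """datasets: dict name -> (X, y); train_model: callable returning a fitted classifier."""
    results = {}
    for train_name, (X_tr, y_tr) in datasets.items():
        model = train_model(X_tr, y_tr)               # train on one corpus only
        for test_name, (X_te, y_te) in datasets.items():
            if test_name == train_name:
                continue                              # evaluate only on external datasets
            preds = model.predict(X_te)
            results[(train_name, test_name)] = f1_score(y_te, preds, average="weighted")
    return results
```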