43 research outputs found

    Prominent region of interest contrast enhancement for knee MR images: data from the OAI

    Get PDF
    Osteoarthritis is the most common form of arthritis, affecting 30.8 million adults in 2015. Magnetic resonance imaging (MRI) plays a key role in providing direct visualization and quantitative measurement of knee cartilage for monitoring osteoarthritis progression. However, the visual quality of MRI data can be degraded by poor background luminance, complex human knee anatomy, and indistinct tissue contrast. Typical histogram equalisation methods have proven ill-suited to biomedical images because their steep cumulative density function (CDF) mapping curves can cause severe washout and distortion of subject details. In this paper, the prominent region of interest contrast enhancement method (PROICE) is proposed to separate the original histogram of a 16-bit biomedical image into two Gaussians covering the dark-pixel region and the bright-pixel region respectively. The mean of the brighter region, where our ROI – the knee cartilage – falls, becomes a break point from which two Bézier transform curves are constructed separately. The Bézier curves are then combined to replace the typical CDF curve when equalising the original histogram. The enhanced image preserves knee features as well as the region of interest (ROI) mean brightness. Image enhancement performance tests show that PROICE achieves the highest peak signal-to-noise ratio (PSNR = 24.747 ± 1.315 dB), the lowest absolute mean brightness error (AMBE = 0.020 ± 0.007) and a notably high structural similarity index (SSIM = 0.935 ± 0.019). In other words, PROICE considerably outperforms the other approaches in noise reduction, perceived image quality and precision, and shows great potential to visually assist physicians in their diagnosis and decision-making process.
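
    The abstract describes a concrete pipeline: fit two Gaussians to the histogram, take the bright component's mean as a break point, and remap intensities through two joined Bézier curves in place of the CDF. The following is a minimal Python sketch of that idea, assuming scikit-learn's GaussianMixture for the two-Gaussian split; the function name and the Bézier control-point choices are illustrative assumptions, since the abstract does not give the exact construction.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def proice_like_enhance(img16):
        """PROICE-style sketch: model the histogram of a 16-bit image as two
        Gaussians (dark and bright regions), take the bright component's mean
        as the break point, and remap intensities through two quadratic
        Bezier curves joined at that point instead of the usual CDF curve."""
        pixels = img16.reshape(-1, 1).astype(np.float64)
        sample = pixels[:: max(1, pixels.shape[0] // 50000)]  # subsample for speed
        gmm = GaussianMixture(n_components=2, random_state=0).fit(sample)
        dark_mean, bright_mean = np.sort(gmm.means_.ravel())

        lo, hi = float(img16.min()), float(img16.max())
        break_pt = bright_mean  # the ROI (knee cartilage) lies in the bright region

        def bezier_map(x, a, b, ctrl):
            # Quadratic Bezier from a to b; the control point 'ctrl' is an
            # assumed choice, as the abstract omits the exact construction.
            t = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
            return (1 - t) ** 2 * a + 2 * (1 - t) * t * ctrl + t ** 2 * b

        x = img16.astype(np.float64)
        out = np.where(x <= break_pt,
                       bezier_map(x, lo, break_pt, dark_mean),
                       bezier_map(x, break_pt, hi, bright_mean))
        return out.astype(img16.dtype)
    ```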

    Image contrast enhancement for preserving entropy and image visual features

    Get PDF
    Histogram equalization is essential for low-contrast enhancement in image processing. Several methods have been proposed; however, one of the most critical problems encountered by existing methods is preserving the information of the original image in the enhanced one. This research proposes an image enhancement method based on a histogram equalization approach that preserves the entropy and fine details of the original image. This is achieved through proposed probability density functions (PDFs) that preserve the small gray values of the usual PDF. The method consists of several steps. First, occurrence and clipped histograms are extracted according to the proposed thresholding. Then, they are equalized and used by a proposed transfer function to calculate the new pixel values in the enhanced image. The proposed method is compared with widely used methods such as CLAHE, CS, HE, and GTSHE. Experiments using benchmark datasets and entropy, contrast, PSNR, and SSIM measurements are conducted to evaluate the performance. The results show that the proposed method is the only one that preserves the original image's entropy in the enhanced image. In addition, it is efficient and reliable in enhancing image quality. The method preserves fine details and improves image quality, supporting the computer vision and pattern recognition fields.
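
    The clip-then-equalize sequence described above follows a well-known pattern. Below is a generic Python sketch of clipped-histogram equalization for an 8-bit image; the mean-based clipping threshold and the uniform redistribution of the clipped excess are stand-in assumptions, since the abstract does not specify the paper's own PDFs or thresholding rule.

    ```python
    import numpy as np

    def clipped_hist_equalize(img8, clip_ratio=2.0):
        """Clip the histogram at a threshold, redistribute the excess
        uniformly, then equalize with the clipped histogram's CDF."""
        hist = np.bincount(img8.ravel(), minlength=256).astype(np.float64)
        clip = clip_ratio * hist.mean()                 # assumed clipping rule
        excess = np.maximum(hist - clip, 0.0).sum()
        hist = np.minimum(hist, clip) + excess / 256.0  # spread excess evenly
        cdf = np.cumsum(hist) / hist.sum()
        lut = np.round(255.0 * cdf).astype(np.uint8)    # transfer function
        return lut[img8]
    ```

    Clipping bounds the slope of the mapping curve, which limits over-amplification of flat regions and is what lets such methods keep the enhanced histogram, and hence the entropy, closer to the original than plain histogram equalization.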

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    Full text link
    The book offers a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using color and depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing and a multidisciplinary topic to analyze, so it requires familiarity with other topics of Artificial Intelligence (AI) such as machine learning, digital image processing, psychology and more. It is therefore a great opportunity to write a book which covers all of these topics for beginner to professional readers in the field of AI, even those without a background in AI. Our goal is to provide a standalone introduction to FMER analysis in the form of theoretical descriptions for readers with no background in image processing, together with reproducible MATLAB practical examples. We also describe the basic definitions for FMER analysis and the MATLAB libraries used in the text, which helps the reader apply the experiments in real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills along with a basic understanding of the field. We expect that, after reading this book, the reader will feel comfortable with different key stages such as color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expressions recognition, feature extraction and dimensionality reduction. Comment: This is the second edition of the book

    Connected Attribute Filtering Based on Contour Smoothness

    Get PDF

    Entropy in Image Analysis III

    Get PDF
    Image analysis can be applied to rich and assorted scenarios; the aim of this recent research field is therefore not only to mimic the human vision system. Image analysis is among the main methods computers use today, and there is a body of knowledge that they will be able to manage in a totally unsupervised manner in the future, thanks to their artificial intelligence. The articles published in this book clearly show such a future.

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    Get PDF
    No abstract available

    Adaptive Methods for Robust Document Image Understanding

    Get PDF
    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: a fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a theoretically optimal solution to the document binarization problem from both the computational-complexity and the threshold-selection points of view, a layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
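
    Among the stages listed above, binarization reduces to choosing a threshold that best separates ink from background. As a familiar baseline for that stage (not the thesis's own optimal method, which the abstract does not detail), here is a compact Otsu threshold selector in Python:

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Pick the gray level that maximizes the between-class variance
        of foreground and background (classic Otsu criterion)."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        levels = np.arange(256)
        omega = np.cumsum(p)            # class-0 probability up to level k
        mu = np.cumsum(p * levels)      # cumulative mean up to level k
        mu_t = mu[-1]                   # global mean
        valid = (omega > 0) & (omega < 1)
        sigma_b2 = np.zeros(256)
        sigma_b2[valid] = (mu_t * omega[valid] - mu[valid]) ** 2 \
            / (omega[valid] * (1.0 - omega[valid]))
        return int(np.argmax(sigma_b2))

    # Usage: binary = (gray > otsu_threshold(gray)).astype(np.uint8) * 255
    ```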

    Advances in Image Processing, Analysis and Recognition Technology

    Get PDF
    For many decades, researchers have been trying to make computers' analysis of images as effective as the human visual system. For this purpose, many algorithms and systems have previously been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. In fact, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational power and computer efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need to develop novel approaches.