51 research outputs found

    Noise Removal in Microarray Images Using Variational Mode Decomposition Technique

    Microarray technology allows the simultaneous monitoring of thousands of genes in parallel. Based on gene expression measurements, microarray technology has proven powerful in gene expression profiling for discovering new types of diseases and for predicting the type of a disease. Enhancement, gridding, segmentation and intensity extraction are important steps in microarray image analysis. This paper presents a noise removal method for microarray images based on Variational Mode Decomposition (VMD). VMD is a signal processing method that decomposes an input signal into a discrete number of sub-signals (called variational mode functions), with each mode band-limited around a centre frequency in the spectral domain. First, the noisy image is processed using 2-D VMD to produce 2-D VMFs. Then a Discrete Wavelet Transform (DWT) thresholding technique is applied to each VMF for denoising. The denoised microarray image is reconstructed by summation of the VMFs. This method is named the 2-D VMD and DWT thresholding method. The proposed method is compared with the DWT thresholding method and the BEMD and DWT thresholding method. Qualitative and quantitative analysis shows that the 2-D VMD and DWT thresholding method produces better noise removal than the other two methods.
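
    As a rough illustration of the per-mode denoising step, the sketch below soft-thresholds the wavelet coefficients of each 2-D mode and sums the results. The vmd2d decomposition call is a hypothetical stand-in (2-D VMD appears in research code; the vmdpy package covers only the 1-D case), while the wavelet operations use PyWavelets.

```python
import numpy as np
import pywt

def denoise_vmf(vmf, wavelet="db4", level=2):
    """Soft-threshold the detail coefficients of one 2-D mode."""
    coeffs = pywt.wavedec2(vmf, wavelet, level=level)
    # Universal threshold estimated from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(vmf.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

def vmd_dwt_denoise(noisy_image, K=5):
    modes = vmd2d(noisy_image, K)  # hypothetical 2-D VMD producing K VMFs
    out = sum(denoise_vmf(m) for m in modes)
    # waverec2 can pad odd-sized images by one pixel; crop back.
    return out[: noisy_image.shape[0], : noisy_image.shape[1]]
```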

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn attention not just from computer science but from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., failure to mark true edges), good localization accuracy, and a consistent response to a single edge. Moreover, most work in feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where needs constantly change, we must equally change the way we think. In a digital world where the use of images, for a variety of purposes, continues to increase, researchers who are serious about addressing the aforementioned limitations must think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is accordingly dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
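
    The decomposition itself is the sifting process described above and is not reproduced here; the sketch below only illustrates a plausible downstream marking stage, combining the fine-scale BIMFs and applying the two morphological operators (binarization and thinning). The bemd(image) call and the fine-scale-energy combination rule are assumptions, not the dissertation's exact operator.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import thin

def bimf_feature_map(image, n_detail=2):
    # bemd() is a hypothetical BEMD routine returning BIMFs ordered
    # from highest to lowest spatial frequency.
    bimfs = bemd(image)
    # Edges, corners and curves live in the fine-scale modes.
    detail = np.sum(np.abs(np.asarray(bimfs[:n_detail])), axis=0)
    binary = detail > threshold_otsu(detail)  # binarization operator
    return thin(binary)                       # thinning operator
```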

    Dimensionality reduction and hierarchical clustering in framework for hyperspectral image segmentation

    Hyperspectral data contain hundreds of narrow bands representing the same scene on Earth, with each pixel having a continuous reflectance spectrum. The first attempts to analyse hyperspectral images were based on techniques developed for multispectral images, randomly selecting a few spectral channels, usually fewer than seven. This random selection of bands degrades the accuracy of segmentation algorithms on hyperspectral data. In this paper, a new framework is designed for the analysis of hyperspectral images that takes information from all the data channels, combining a dimensionality reduction method based on subset selection with hierarchical clustering. A methodology based on subset construction is used to select k informative bands from a d-band dataset. In this selection, similarity metrics such as Average Pixel Intensity (API), Histogram Similarity (HS), Mutual Information (MI) and Correlation Similarity (CS) are used to create k distinct subsets, and from each subset a single band is selected. The selected informative bands are merged into a single image using a hierarchical fusion technique. After obtaining the fused image, a hierarchical clustering algorithm is used for segmentation. Qualitative and quantitative analysis shows that the CS similarity metric in the dimensionality reduction algorithm yields the highest-quality segmented image.
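
    A minimal sketch of the band-selection idea follows, using the Correlation Similarity (CS) metric the analysis favours: the d bands are grouped into k clusters by correlation distance and one representative band is kept per cluster. The function name and the pick-the-first-member rule are illustrative assumptions; the paper's subset construction and hierarchical fusion steps are not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def select_bands(cube, k):
    """cube: (rows, cols, d) hyperspectral array; returns k band indices."""
    d = cube.shape[-1]
    flat = cube.reshape(-1, d).T                  # one row per band
    corr = np.corrcoef(flat)                      # d x d CS matrix
    dist = squareform(1.0 - np.abs(corr), checks=False)
    labels = fcluster(linkage(dist, method="average"), k, criterion="maxclust")
    # Keep the first member of each cluster as its representative band.
    return [int(np.argmax(labels == c)) for c in range(1, k + 1)]
```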

    Image processing and machine learning techniques used in computer-aided detection system for mammogram screening - a review

    This paper reviews previously developed computer-aided detection (CAD) systems for mammogram screening, because the increasing death rate in women due to breast cancer is a global medical issue that can be controlled only by early detection through regular screening. To date, mammography is the most widely used breast imaging modality. CAD systems have been adopted by radiologists to increase the accuracy of breast cancer diagnosis by avoiding human error and experience-related issues. This study reveals that, despite the high accuracy obtained by earlier proposed CAD systems for breast cancer diagnosis, they are not fully automated. Moreover, false-positive mammogram screening cases are numerous, and over-diagnosis of breast cancer exposes patients to harmful overtreatment on which a large amount of money is wasted. It is also reported that mammogram screening results with and without CAD systems show no noticeable difference, while the number of cancer cases undetected by CAD systems is increasing. Thus, future research is required to improve the performance of CAD systems for mammogram screening and to make them completely automated.

    Noise reduction and mammography image segmentation optimization with novel QIMFT-SSA method

    Breast cancer is one of the most dreaded diseases affecting women worldwide and has led to many deaths. Early detection of breast masses prolongs life expectancy in women, so the development of an automated system for detecting breast masses can support radiologists in accurate diagnosis. Computer-aided detection techniques aim to provide the fastest and most accurate means of determining the exact area of a breast tumour, so that a decision support system can assist physicians. This study proposes an optimal approach to noise reduction in mammographic images, identifying salt-and-pepper, Gaussian, Poisson and impulse noise, so that mass detection can proceed accurately after noise reduction. It offers a noise reduction method called Quantum Inverse MFT Filtering and a precise mass segmentation method called the optimal Social Spider Algorithm (SSA). The hybrid approach, called QIMFT-SSA, is evaluated against previous methods using criteria such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) for noise reduction, and detection accuracy for mass area recognition. The proposed method outperforms state-of-the-art methods in both noise reduction and segmentation.
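
    For reference, the two noise reduction criteria named above can be computed as follows for 8-bit images; this mirrors the standard definitions rather than the paper's own code.

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a test image."""
    return np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in decibels (higher is better)."""
    m = mse(ref, test)
    return np.inf if m == 0 else 10.0 * np.log10(peak**2 / m)
```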

    A survey, review, and future trends of skin lesion segmentation and classification

    The computer-aided diagnosis or detection (CAD) approach to skin lesion analysis is an emerging field of research with the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown increasing interest in developing such CAD systems, with the intention of providing dermatologists with a user-friendly tool that reduces the challenges associated with manual inspection. This article provides a comprehensive literature survey and review of a total of 594 publications (356 on skin lesion segmentation and 238 on skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the development of CAD systems, including: relevant and essential definitions and theories; input data (dataset utilization, preprocessing, augmentation, and fixing imbalance problems); method configuration (techniques, architectures, module frameworks, and losses); training tactics (hyperparameter settings); and evaluation criteria. We also investigate a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss these dimensions to reveal their current trends based on utilization frequency. In addition, we highlight the primary difficulties associated with evaluating skin lesion segmentation and classification systems using minimal datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.

    Detection and Mosaicing through Deep Learning Models for Low-Quality Retinal Images

    Glaucoma is a severe eye disease that is asymptomatic in the initial stages and, due to its degenerative character, can lead to blindness. There is no available cure, and it is the second most common cause of blindness in the world. Most people affected by it only discover the disease when it is already too late. Regular visits to the ophthalmologist, with a precise diagnosis performed using professional equipment, are the best way to prevent or contain it. For some individuals or populations, however, this can be difficult to accomplish due to several constraints, such as low income, geographical adversity, and travel restrictions (distance, lack of means of transportation, etc.). Professional equipment is also bulky and expensive to relocate, making it impractical to bring to remote areas. Low-cost products on the market, such as the D-Eye lens, offer an alternative to meet this need. The D-Eye lens can be attached to a smartphone to capture fundus images, but it has a major drawback: lower image quality compared with professional equipment. This work presents and evaluates methods for retinal reading from D-Eye recordings, exposing the retina in two steps: object detection and summarization via object mosaicing. Deep learning methods, such as YOLO-family architectures, were used to register the retina as a detected object. The summarization methods presented in this work mosaic the best retina images together to produce a more detailed resulting image. After selecting the best workflow among these methods, a final inference was performed and visually evaluated; the results were not rich enough to serve as a pre-screening medical assessment, indicating that improvements in the algorithm and technology are needed to retrieve better imaging.
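
    A hedged sketch of the detect-then-mosaic pipeline follows, using the Ultralytics YOLO API for detection and OpenCV's stitcher for mosaicing. The weights file, confidence threshold and crop-then-stitch arrangement are assumptions about the workflow, not the authors' exact implementation.

```python
import cv2
from ultralytics import YOLO

def mosaic_retina(frames, weights="retina_yolo.pt", min_conf=0.5):
    """Crop the detected retina from each frame, then mosaic the crops."""
    model = YOLO(weights)  # hypothetical detector trained on D-Eye frames
    crops = []
    for frame in frames:
        for box in model(frame)[0].boxes:
            if float(box.conf) >= min_conf:
                x1, y1, x2, y2 = map(int, box.xyxy[0])
                crops.append(frame[y1:y2, x1:x2])
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, pano = stitcher.stitch(crops)
    return pano if status == cv2.Stitcher_OK else None
```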

    Information Theory and Its Application in Machine Condition Monitoring

    Condition monitoring of machinery is one of the most important aspects of many modern industries. With the rapid advancement of science and technology, machines are becoming increasingly complex. Moreover, exponentially increasing demand is driving a growing requirement for machine output. As a result, in most modern industries machines have to work 24 hours a day. All these factors cause machine health to deteriorate at a higher rate than before. Breakdown of key machine components such as bearings, gearboxes or rollers can have catastrophic effects in terms of both financial and human costs. From this perspective, it is important not only to detect a fault at its earliest point of inception, but also to design the overall monitoring process, including fault classification, fault severity assessment and remaining useful life (RUL) prediction, for better planning of the maintenance schedule. Information theory is one of the pioneering contributions of modern science and has evolved into various forms and algorithms over time. Its ability to address the non-linearity and non-stationarity of machine health deterioration has made it a popular choice among researchers, and it is an effective technique for extracting features of machines under different health conditions. In this context, this book discusses the potential applications, research results and latest developments of information theory-based condition monitoring of machinery.
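
    As a small generic example of the kind of information-theoretic feature surveyed here, the sketch below computes the normalized spectral entropy of a vibration signal, a quantity that tends to shift as a developing fault alters the signal's frequency content. The feature choice is a common illustration, not a method taken from the book.

```python
import numpy as np

def spectral_entropy(signal, eps=1e-12):
    """Shannon entropy of the power spectrum, normalized to [0, 1]."""
    psd = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    p = psd / (np.sum(psd) + eps)        # spectrum as a probability mass
    h = -np.sum(p * np.log2(p + eps))    # Shannon entropy in bits
    return float(h / np.log2(len(p)))    # divide by the maximum entropy
```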

    Machine Learning Methods with Noisy, Incomplete or Small Datasets

    In many machine learning applications, available datasets are incomplete, noisy or affected by artifacts. In supervised scenarios, label information may be of low quality, which can mean unbalanced training sets, noisy labels and other problems. Moreover, in practice it is very common that the available data samples are not sufficient to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, to help disseminate new ideas for solving this challenging problem and to provide clear examples of application in real scenarios.

    A framework for ancient and machine-printed manuscripts categorization

    Document image understanding (DIU) has attracted a lot of attention and has become an active field of research. Although the ultimate goal of DIU is extracting the textual information of a document image, many steps are involved in the process, such as categorization, segmentation and layout analysis, all of which are needed to obtain an accurate result from character or word recognition. One of the important steps in DIU is document image categorization (DIC), which is needed in many situations, such as when document images are written or printed in more than one script, font or language. This step provides useful information for the recognition system and helps reduce its error by allowing a category-specific Optical Character Recognition (OCR) or word recognition (WR) system to be incorporated. This research focuses on the problem of DIC across different categories of scripts, styles and languages, and establishes a framework for flexible representation and feature extraction that can be adapted to many DIC problems. Current methods for DIC have many limitations and drawbacks that restrict their practical usage. We propose an efficient framework for document image categorization based on patch representation and Non-negative Matrix Factorization (NMF); the framework is flexible and can be adapted to different categorization problems.

    Many methods exist for script identification of document images, but few address the problem in handwritten manuscripts, and those that do have many limitations and drawbacks. Our first goal is therefore to introduce a novel method for script identification of ancient manuscripts. The proposed method is based on a patch representation in which the patches are extracted using the skeleton map of a document image. This representation overcomes the limitation of current methods that are tied to a fixed level of layout. The proposed feature extraction scheme, based on Projective Non-negative Matrix Factorization (PNMF), is robust against noise and handwriting variation and can be used for different scripts. The proposed method has higher performance than state-of-the-art methods and can be applied to different levels of layout.

    Current methods for font (style) identification are mostly designed for machine-printed document images, and many of them can only be used at a specific level of layout. We therefore propose a new method for font and style identification of printed and handwritten manuscripts based on patch representation and Non-negative Matrix Tri-Factorization (NMTF). The images are represented by overlapping patches obtained from the foreground pixels, with the positions of the patches set according to the skeleton map to reduce their number. NMTF is used to learn bases for each font (style), and these bases are then used to classify a new image based on minimum representation error. The method can easily be extended to new fonts, as the bases for each font are learned separately from the others. It was tested on two datasets of machine-printed and ancient manuscripts, and the results confirm its performance compared to state-of-the-art methods.

    Finally, we propose a novel method for language identification of printed and handwritten manuscripts based on patch representation and NMTF. Current methods for language identification rely either on textual data obtained by an OCR engine or on image data through coding and comparison with textual data. OCR-based methods require a great deal of processing, and current image-based methods are not applicable to cursive scripts such as Arabic. In this work we introduce a new method for language identification of machine-printed and handwritten manuscripts based on patch representation and NMTF. The patch representation provides the components of the Arabic script (letters) that cannot be extracted simply by segmentation methods. NMTF is then used for dictionary learning and for generating codebooks that represent a document image with a histogram. The proposed method was tested on two datasets of machine-printed and handwritten manuscripts and compared to n-gram features (text-based), texture features and codebook features (image-based) to validate its performance.

    The proposed methods are robust against variation in handwriting, changes in font (handwriting style) and the presence of degradation, and are flexible enough to be used at various levels of layout (from a text line to a paragraph). The methods in this research have been tested on datasets of handwritten and machine-printed manuscripts and compared to state-of-the-art methods; all of the evaluations show the efficiency, robustness and flexibility of the proposed methods for document image categorization. As mentioned before, the proposed strategies provide a framework for efficient and flexible representation and feature extraction for document image categorization. This framework can be applied to different levels of layout, information from different levels of layout can be merged and mixed, and the framework can be extended to more complex situations and different tasks.
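
    As a rough illustration of the minimum-representation-error classification idea, the sketch below substitutes scikit-learn's standard NMF for the tri-factorization (NMTF) used in this work; patch extraction and skeleton-map positioning are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

def train_bases(patches_per_class, r=32):
    """patches_per_class: dict label -> (n_patches, patch_dim) array
    of nonnegative pixel intensities; learns one basis per class."""
    return {c: NMF(n_components=r, max_iter=500).fit(X)
            for c, X in patches_per_class.items()}

def classify(patches, bases):
    """Assign the class whose learned basis reconstructs the patches best."""
    errors = {}
    for c, model in bases.items():
        codes = model.transform(patches)        # nonnegative coefficients
        recon = codes @ model.components_
        errors[c] = np.linalg.norm(patches - recon)
    return min(errors, key=errors.get)
```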