
    Adaptive decomposition-based evolutionary approach for multiobjective sparse reconstruction

    © 2018 Elsevier Inc. This paper aims at solving the sparse reconstruction (SR) problem via a multiobjective evolutionary algorithm. Existing multiobjective evolutionary algorithms for the SR problem have high computational complexity, especially in high-dimensional reconstruction scenarios. Furthermore, these algorithms focus on estimating the whole Pareto front rather than the knee region, leading to limited diversity of solutions in the knee region and wasted computational effort. To tackle these issues, this paper proposes an adaptive decomposition-based evolutionary approach (ADEA) for the SR problem. Firstly, we employ the decomposition-based evolutionary paradigm to guarantee high computational efficiency and diversity of solutions across the whole objective space. Then, we propose a two-stage iterative soft-thresholding (IST)-based local search operator to improve convergence. Finally, we develop an adaptive decomposition-based environmental selection strategy, by which the decomposition in the knee region can be adjusted dynamically. This strategy focuses the selection effort on the knee region and keeps the computational complexity low. Experimental results on simulated signals, benchmark signals and images demonstrate the superiority of ADEA in terms of reconstruction accuracy and computational efficiency, compared to five state-of-the-art algorithms.
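
    The two-stage IST-based local search above builds on the standard soft-thresholding proximal operator. As a rough illustration only (not the paper's ADEA operator), a minimal NumPy sketch of one IST update for min 0.5*||y - Ax||^2 + lam*||x||_1, with illustrative names such as ist_step and step_size, could look like this:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm: shrink each entry of v towards zero by tau."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ist_step(x, A, y, lam, step_size):
    """One iterative soft-thresholding (IST) update for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    grad = A.T @ (A @ x - y)                         # gradient of the least-squares term
    return soft_threshold(x - step_size * grad, step_size * lam)

# Usage sketch: start from x = np.zeros(A.shape[1]) and repeat
# x = ist_step(x, A, y, lam=0.1, step_size=1e-2) until the change in x is small.
```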

    A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends

    Computer vision (CV) is a big and important field in artificial intelligence covering a wide range of applications. Image analysis is a major task in CV aiming to extract, analyse and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, domain expertise requirement, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievement. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this paper provides a comprehensive survey covering all essential EC approaches to important image analysis tasks including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarised to provide further guidelines and opportunities for future research.

    Segmentation of articular cartilage and early osteoarthritis based on the fuzzy soft thresholding approach driven by modified evolutionary ABC optimization and local statistical aggregation

    Articular cartilage assessment, with the aim of identifying cartilage loss, is a crucial task in the clinical practice of orthopedics. Conventional software (SW) instruments allow only a visualization of the knee structure, without post-processing that would offer objective cartilage modeling. In this paper, we propose a multiregional segmentation method that aims to provide a mathematical model reflecting the physiological morphological structure of the cartilage and the spots corresponding to early cartilage loss, which are poorly recognizable by the naked eye in magnetic resonance imaging (MRI). The proposed segmentation model is composed of two pixel-classification parts. First, the image histogram is decomposed using a sequence of triangular fuzzy membership functions, whose localization is driven by a modified artificial bee colony (ABC) optimization algorithm utilizing a random sequence of candidate solutions based on real cartilage features. In the second part of the segmentation model, the original membership of a pixel in a respective segmentation class may be modified by local statistical aggregation, which takes into account the spatial relationships among adjacent pixels. In this way, image noise and artefacts, which are commonly present in MR images, can be identified and eliminated, making the model robust against distorting signals while preserving its sensitivity. We analyzed the proposed model on 2D MR image records. We show different MR clinical cases of articular cartilage segmentation with identification of cartilage loss. In the final part of the analysis, we compared the performance of our model against selected conventional methods applied to MR image records corrupted by additive image noise.
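
    The triangular fuzzy membership functions mentioned above assign each gray level a membership degree that peaks at a class centre and falls to zero at the neighbouring centres. A minimal sketch of such a histogram partition, with illustrative names (triangular_membership, fuzzy_partition) and with the class centres assumed to be the values placed by the ABC optimizer, is:

```python
import numpy as np

def triangular_membership(g, left, center, right):
    """Membership of gray level(s) g in a triangular fuzzy set with support [left, right] and apex at center."""
    g = np.asarray(g, dtype=float)
    rising = (g - left) / max(center - left, 1e-12)
    falling = (right - g) / max(right - center, 1e-12)
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)

def fuzzy_partition(gray_levels, centers):
    """Membership of each gray level in each class; one triangle per centre.

    Boundary classes use the gray-level extremes as supports; shoulder (trapezoidal)
    sets could be used there instead.
    """
    cs = [float(gray_levels.min())] + sorted(centers) + [float(gray_levels.max())]
    return np.stack([triangular_membership(gray_levels, cs[i], cs[i + 1], cs[i + 2])
                     for i in range(len(cs) - 2)], axis=-1)

# Pixel classification: assign each pixel to the class with the highest membership (argmax over the last axis).
```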

    Hybrid Multilevel Thresholding and Improved Harmony Search Algorithm for Segmentation

    This paper proposes a new image segmentation method that hybridizes multilevel thresholding with an improved harmony search algorithm, a method for finding vector solutions with increased accuracy. The proposed method searches for random candidate solutions, whose quality is then evaluated through the Otsu objective function. The operator continues to evolve the candidate solutions until the optimal solution is found. The datasets used in this study are the retina, tongue, Lenna, baboon, and cameraman images. The experimental results show that this method produces high performance as measured by peak signal-to-noise ratio (PSNR) analysis. The PSNR for the retinal images averaged 40.342 dB, while the tongue images averaged 35.340 dB. The Lenna, baboon, and cameraman images produced averages of 33.781 dB, 33.499 dB, and 34.869 dB, respectively. Furthermore, the process of object recognition and identification is expected to use this method to produce a high degree of accuracy.
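
    The Otsu objective function referred to above scores a candidate threshold vector by the between-class variance of the gray-level histogram; the harmony search then maximizes this score. A minimal sketch (illustrative name otsu_between_class_variance, assuming an 8-bit histogram) is:

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance of a gray-level histogram split at the given sorted thresholds.

    hist: counts for gray levels 0..255; thresholds: sorted ints in (0, 255).
    Higher is better; a multilevel-thresholding optimizer maximizes this value.
    """
    p = hist / hist.sum()                          # gray-level probabilities
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()                  # global mean gray level
    edges = [0] + list(thresholds) + [len(hist)]
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                         # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w   # class mean
            variance += w * (mu - mu_total) ** 2
    return variance
```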

    A Hybrid COVID-19 Detection Model Using an Improved Marine Predators Algorithm and a Ranking-Based Diversity Reduction Strategy

    Many countries are challenged by the medical resources required for COVID-19 detection, which necessitates the development of a low-cost, rapid tool to detect and diagnose the virus effectively for large numbers of tests. Although a chest X-ray scan is a useful candidate tool, the images generated by the scans must be analyzed accurately and quickly if large numbers of tests are to be processed. COVID-19 causes bilateral pulmonary parenchymal ground-glass and consolidative pulmonary opacities, sometimes with a rounded morphology and a peripheral lung distribution. In this work, we aim to rapidly extract from chest X-ray images the similar small regions that may contain the identifying features of COVID-19. This paper therefore proposes a hybrid COVID-19 detection model based on an improved marine predators algorithm (IMPA) for X-ray image segmentation. A ranking-based diversity reduction (RDR) strategy is used to enhance the performance of the IMPA so that it reaches better solutions in fewer iterations. RDR identifies the particles that have not found better solutions within a consecutive number of iterations and then moves those particles towards the best solutions found so far. The performance of IMPA has been validated on nine chest X-ray images with threshold levels between 10 and 100 and compared with five state-of-the-art algorithms: the equilibrium optimizer (EO), whale optimization algorithm (WOA), sine cosine algorithm (SCA), Harris hawks algorithm (HHA), and salp swarm algorithm (SSA). The experimental results demonstrate that the proposed hybrid model outperforms all other algorithms across a range of metrics. In addition, the performance of our proposed model was consistent across all threshold levels in terms of the Structural Similarity Index Metric (SSIM) and Universal Quality Index (UQI).
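
    As described, RDR relocates particles that have stagnated for a number of consecutive iterations towards the best solutions found so far. A minimal sketch of that relocation step, with illustrative names (apply_rdr, stall_counts, pull) rather than the authors' implementation, is:

```python
import numpy as np

def apply_rdr(population, stall_counts, best_solution, max_stall, pull=0.5, rng=None):
    """Move particles that have not improved for max_stall iterations towards the best solution.

    population: (n, d) candidate threshold vectors; stall_counts: (n,) iterations without improvement.
    """
    rng = np.random.default_rng() if rng is None else rng
    stagnant = stall_counts >= max_stall
    step = pull * rng.random((int(stagnant.sum()), population.shape[1]))
    population[stagnant] += step * (best_solution - population[stagnant])
    stall_counts[stagnant] = 0                     # relocated particles start counting again
    return population, stall_counts
```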

    Edges Detection Based On Renyi Entropy with Split/Merge

    Most of the classical methods for edge detection are based on the first- and second-order derivatives of the gray levels of the pixels of the original image. These processes lead to a steep increase in computational time, especially for large images. This paper presents a new algorithm for edge detection based on both the Rényi entropy and the Shannon entropy, using a split-and-merge technique. The objective is to find the best edge representation and decrease the computation time. A set of experiments in the domain of edge detection is presented. The system yields edge detection performance comparable to classic methods such as Canny, LoG, and Sobel. The experimental results show that this method performs better than the LoG and Sobel methods, and it outperforms the three classic methods in CPU time. Another benefit is the method's easy implementation. Keywords: Rényi Entropy, Information content, Edge detection, Thresholding
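
    The Rényi entropy of order α generalizes the Shannon entropy, which it recovers in the limit α → 1. A minimal sketch of both measures computed from a region's gray-level histogram (illustrative names, assuming NumPy; not the paper's split/merge criterion itself) is:

```python
import numpy as np

def renyi_entropy(hist, alpha):
    """Renyi entropy of order alpha (alpha > 0, alpha != 1) of a gray-level histogram."""
    p = hist[hist > 0] / hist.sum()
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def shannon_entropy(hist):
    """Shannon entropy, the alpha -> 1 limit of the Renyi entropy."""
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log(p)).sum()

# In a split/merge scheme, a block whose entropy exceeds a threshold is split further,
# neighbouring blocks with similar entropy are merged, and edges are traced along block boundaries.
```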

    A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images

    Ultrasound imaging has become one of the most popular medical imaging modalities with numerous diagnostic applications. However, ultrasound (US) image segmentation, which is the essential process for further analysis, is a challenging task due to the poor image quality

    A generic optimising feature extraction method using multiobjective genetic programming

    In this paper, we present a generic, optimising feature extraction method using multiobjective genetic programming. We re-examine the feature extraction problem and show that effective feature extraction can significantly enhance the performance of pattern recognition systems with simple classifiers. A framework is presented to evolve optimised feature extractors that transform an input pattern space into a decision space in which maximal class separability is obtained. We have applied this method to real-world datasets from the UCI Machine Learning and StatLog databases to verify our approach and compare our proposed method with other reported results. We conclude that our algorithm is able to produce classifiers of superior (or equivalent) performance to the conventional classifiers examined, suggesting that the need to exhaustively evaluate a large family of conventional classifiers on any new problem can be removed. (C) 2010 Elsevier B.V. All rights reserved.
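
    One common way to quantify the class separability that such a decision space is evolved to maximize is a Fisher-style ratio of between-class to within-class scatter; the sketch below (illustrative name fisher_separability, not the authors' exact objective) shows how one such measure could be computed for a candidate feature extractor's outputs:

```python
import numpy as np

def fisher_separability(features, labels):
    """Ratio of between-class to within-class scatter of transformed (decision-space) features.

    features: (n, d) outputs of a candidate feature extractor; labels: (n,) class labels.
    Higher values indicate better class separability.
    """
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        class_feats = features[labels == c]
        class_mean = class_feats.mean(axis=0)
        between += len(class_feats) * np.sum((class_mean - overall_mean) ** 2)
        within += np.sum((class_feats - class_mean) ** 2)
    return between / max(within, 1e-12)
```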