341 research outputs found

    Application of multiobjective genetic programming to the design of robot failure recognition systems

    We present an evolutionary approach that uses multiobjective genetic programming (MOGP) to derive optimal feature extraction preprocessing stages for robot failure detection. This data-driven machine learning method is compared with both conventional (nonevolutionary) classifiers and a set of domain-dependent feature extraction methods. We conclude that MOGP is an effective and practical design method for failure recognition systems, delivering enhanced recognition accuracy over conventional classifiers without requiring domain knowledge.
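
    The core selection step in such an MOGP loop is Pareto dominance over the competing objectives. The sketch below is a minimal illustration, not the paper's implementation; the (error rate, tree size) objective pair and the scored structure are assumptions made for the example.

        # Pareto selection over candidate feature extractors (illustrative).
        # scored is a list of (extractor, (error_rate, tree_size)) pairs;
        # both objectives are minimized.

        def dominates(a, b):
            # a Pareto-dominates b: no worse in every objective, better in at least one.
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def nondominated(scored):
            # Keep extractors whose objective pair no other extractor dominates.
            return [(c, o) for c, o in scored
                    if not any(dominates(other, o) for _, other in scored)]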

    Parameter Optimization for Image Denoising Based on Block Matching and 3D Collaborative Filtering

    Clinical MRI images are generally corrupted by random noise during acquisition, which blurs subtle structural features. Many denoising methods have been proposed to remove noise from corrupted images, but they do so at the expense of distorted structural features. There is therefore always a compromise between removing noise and preserving structural information. For a specific denoising method, it is crucial to tune its parameters so that the best trade-off can be obtained. In this paper, we define several cost functions to assess the quality of noise removal and of structure preservation in the denoised image. The Strength Pareto Evolutionary Algorithm 2 (SPEA2) is used to optimize these cost functions simultaneously by modifying the parameters of the denoising method. The effectiveness of the algorithm is demonstrated by applying the proposed optimization procedure to image denoising based on block matching and 3D collaborative filtering (BM3D). Experimental results show that the proposed optimization algorithm can significantly improve the performance of image denoising methods in terms of both noise removal and structure preservation.
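
    To make the tuning idea concrete, the sketch below sweeps a denoiser parameter and keeps the Pareto-optimal settings under two costs. It is a simplified stand-in: SPEA2 searches the parameter space adaptively rather than over a fixed grid, and denoise, noise_cost, and structure_cost are hypothetical placeholders for a BM3D implementation and the paper's cost functions.

        # Grid-sweep caricature of multiobjective denoiser tuning (illustrative).

        def pareto_front(points):
            # Indices of nondominated (cost1, cost2) points; both costs minimized.
            def dom(q, p):
                return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))
            return [i for i, p in enumerate(points) if not any(dom(q, p) for q in points)]

        def tune(noisy, denoise, noise_cost, structure_cost, sigmas):
            scores = []
            for s in sigmas:
                den = denoise(noisy, s)
                scores.append((noise_cost(noisy, den), structure_cost(noisy, den)))
            return [(sigmas[i], scores[i]) for i in pareto_front(scores)]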

    Adaptive decomposition-based evolutionary approach for multiobjective sparse reconstruction

    This paper solves the sparse reconstruction (SR) problem via a multiobjective evolutionary algorithm. Existing multiobjective evolutionary algorithms for the SR problem have high computational complexity, especially in high-dimensional reconstruction scenarios. Furthermore, these algorithms focus on estimating the whole Pareto front rather than the knee region, which limits the diversity of solutions in the knee region and wastes computational effort. To tackle these issues, this paper proposes an adaptive decomposition-based evolutionary approach (ADEA) for the SR problem. Firstly, we employ the decomposition-based evolutionary paradigm to guarantee high computational efficiency and diversity of solutions across the whole objective space. Then, we propose a two-stage iterative soft-thresholding (IST)-based local search operator to improve convergence. Finally, we develop an adaptive decomposition-based environmental selection strategy, by which the decomposition in the knee region can be adjusted dynamically. This strategy focuses the selection effort on the knee region while keeping the computational complexity low. Experimental results on simulated signals, benchmark signals and images demonstrate the superiority of ADEA in terms of reconstruction accuracy and computational efficiency compared to five state-of-the-art algorithms.
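
    The IST operator at the heart of the local search is standard; a minimal single-objective sketch is given below, solving min_x 0.5*||Ax - y||^2 + lam*||x||_1. This is the generic ISTA update, not the paper's exact two-stage variant.

        import numpy as np

        def soft_threshold(v, t):
            # Elementwise shrinkage: the proximal operator of t*||.||_1.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ist(A, y, lam, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)       # gradient of the least-squares term
                x = soft_threshold(x - grad / L, lam / L)
            return x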

    Monotonicity for Multiobjective Accelerated Proximal Gradient Methods

    Accelerated proximal gradient methods, also called fast iterative shrinkage-thresholding algorithms (FISTA), are known to be efficient for many applications. Recently, Tanabe et al. proposed an extension of FISTA to multiobjective optimization problems. However, as in the single-objective case, the objective function values may increase in some iterations, and inexact computation of the subproblems can also lead to divergence. Motivated by this, we propose a variant of FISTA for multiobjective optimization that imposes monotonicity on the objective function values. In the single-objective case, we retrieve the so-called MFISTA proposed by Beck and Teboulle. We also prove that our method has a global convergence rate of O(1/k^2), where k is the number of iterations, and we show some numerical advantages of requiring monotonicity.
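
    For reference, here is a sketch of the single-objective case the paper reduces to: MFISTA applied to the lasso objective F(x) = 0.5*||Ax - b||^2 + lam*||x||_1. The acceleration step is the standard one; the monotonicity safeguard is the line that rejects an iterate whenever it would increase F. The multiobjective subproblem itself is not reproduced here.

        import numpy as np

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def mfista(A, b, lam, n_iter=300):
            L = np.linalg.norm(A, 2) ** 2
            F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
            x_prev = np.zeros(A.shape[1])
            yk, t = x_prev.copy(), 1.0
            for _ in range(n_iter):
                z = soft(yk - A.T @ (A @ yk - b) / L, lam / L)  # proximal step at yk
                x = z if F(z) <= F(x_prev) else x_prev          # monotonicity safeguard
                t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                yk = x + (t / t_next) * (z - x) + ((t - 1.0) / t_next) * (x - x_prev)
                x_prev, t = x, t_next
            return x_prev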

    A generic optimising feature extraction method using multiobjective genetic programming

    In this paper, we present a generic, optimising feature extraction method using multiobjective genetic programming. We re-examine the feature extraction problem and show that effective feature extraction can significantly enhance the performance of pattern recognition systems with simple classifiers. A framework is presented to evolve optimised feature extractors that transform an input pattern space into a decision space in which maximal class separability is obtained. We have applied this method to real-world datasets from the UCI Machine Learning and StatLog databases to verify our approach and to compare our proposed method with other reported results. We conclude that our algorithm is able to produce classifiers of superior (or equivalent) performance to the conventional classifiers examined, suggesting that the need to exhaustively evaluate a large family of conventional classifiers on any new problem can be removed.
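
    One plausible way to score class separability in the evolved decision space is a Fisher-style scatter ratio, sketched below; this criterion is illustrative and not necessarily the one used in the paper.

        import numpy as np

        def separability(Z, labels):
            # Ratio of between-class to within-class scatter of the transformed
            # data Z (n_samples x n_features); higher means more separable.
            mu = Z.mean(axis=0)
            between, within = 0.0, 0.0
            for c in np.unique(labels):
                Zc = Z[labels == c]
                between += len(Zc) * np.sum((Zc.mean(axis=0) - mu) ** 2)
                within += np.sum((Zc - Zc.mean(axis=0)) ** 2)
            return between / max(within, 1e-12)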

    An accelerated proximal gradient method for multiobjective optimization

    This paper presents an accelerated proximal gradient method for multiobjective optimization, in which each objective function is the sum of a continuously differentiable, convex function and a closed, proper, convex function. Extending first-order methods to multiobjective problems without scalarization has been widely studied, but providing accelerated methods with rigorous proofs of convergence rates remains an open problem. Our proposed method is a multiobjective generalization of the accelerated proximal gradient method, also known as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), for scalar optimization. The key to this extension is solving a subproblem with terms exclusive to the multiobjective case. This approach allows us to establish the global convergence rate of the proposed method, O(1/k^2), using a merit function to measure the complexity. Furthermore, we present an efficient way to solve the subproblem via its dual representation, and we confirm the validity of the proposed method through numerical experiments.
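
    The scalar method being generalized is the familiar FISTA iteration, sketched below for a composite objective F(x) = f(x) + g(x), with grad_f, Lipschitz constant L, and the proximal operator prox_g supplied by the caller. The paper's multiobjective subproblem, which couples all objectives, is not reproduced.

        import numpy as np

        def fista(grad_f, prox_g, L, x0, n_iter=300):
            x_prev, y, t = x0.copy(), x0.copy(), 1.0
            for _ in range(n_iter):
                x = prox_g(y - grad_f(y) / L, 1.0 / L)             # proximal gradient step
                t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
                y = x + ((t - 1.0) / t_next) * (x - x_prev)        # extrapolation
                x_prev, t = x, t_next
            return x_prev

        # For g = lam*||.||_1, prox_g(v, s) is soft-thresholding with threshold lam*s.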

    Cardinality-Constrained Multi-Objective Optimization: Novel Optimality Conditions and Algorithms

    In this paper, we consider multi-objective optimization problems with a sparsity constraint on the vector of variables. For this class of problems, inspired by the homonymous necessary optimality condition for sparse single-objective optimization, we define the concept of L-stationarity and analyze its relationships with other existing conditions and with Pareto optimality concepts. We then propose two novel algorithmic approaches: the first is an Iterative Hard Thresholding method aiming to find a single L-stationary solution, while the second is a two-stage algorithm designed to construct an approximation of the whole Pareto front. Both methods are proved to converge to points satisfying necessary conditions for Pareto optimality. Moreover, we report numerical results establishing the practical effectiveness of the proposed methodologies.
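
    The hard-thresholding projection that gives the method its name is easy to state in the single-objective case; the sketch below performs a gradient step followed by projection onto the sparsity constraint ||x||_0 <= s. The paper applies this idea in the multiobjective setting, which is not reproduced here.

        import numpy as np

        def hard_threshold(v, s):
            # Keep the s largest-magnitude entries of v, zero out the rest.
            out = np.zeros_like(v)
            idx = np.argsort(np.abs(v))[-s:]
            out[idx] = v[idx]
            return out

        def iht(grad_f, step, x0, s, n_iter=200):
            x = hard_threshold(x0, s)
            for _ in range(n_iter):
                x = hard_threshold(x - step * grad_f(x), s)
            return x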

    A multiobjective continuation method to compute the regularization path of deep neural networks

    Sparsity is a highly desired feature in deep neural networks (DNNs), since it ensures numerical efficiency, improves the interpretability of models (due to the smaller number of relevant features), and increases robustness. In machine learning approaches based on linear models, it is well known that there exists a connecting path between the sparsest solution in terms of the ℓ1 norm (i.e., zero weights) and the non-regularized solution, which is called the regularization path. Very recently, a first attempt was made to extend the concept of regularization paths to DNNs by treating the empirical loss and sparsity (the ℓ1 norm) as two conflicting criteria and solving the resulting multiobjective optimization problem. However, due to the non-smoothness of the ℓ1 norm and the high number of parameters, this approach is not very efficient from a computational perspective. To overcome this limitation, we present an algorithm that approximates the entire Pareto front for the above objectives in a very efficient manner. We present numerical examples using both deterministic and stochastic gradients, and we furthermore demonstrate that knowledge of the regularization path allows for a well-generalizing network parametrization.
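
    For a linear model, a regularization path can be traced naively by sweeping the ℓ1 weight and recording (loss, ℓ1 norm) pairs, i.e., points on the loss-versus-sparsity front. The sketch below uses a plain ISTA inner solver with warm starts; it is a grid sweep for intuition, not the paper's continuation method, and the grid of lambdas is arbitrary.

        import numpy as np

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def reg_path(A, y, lams, n_iter=300):
            L = np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            path = []
            for lam in sorted(lams, reverse=True):   # warm-start from the sparsest end
                for _ in range(n_iter):
                    x = soft(x - A.T @ (A @ x - y) / L, lam / L)
                loss = 0.5 * np.sum((A @ x - y) ** 2)
                path.append((lam, loss, float(np.sum(np.abs(x)))))
            return path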

    Mathematics in health care with applications

    The author aims to show how mathematics can support key activities in a hospital, including noninvasive measurement of a patient's status (see chapter 1), evaluation of quality of services (see chapter 2), business and clinical administration (see chapter 3), and diagnosis and prognosis (see chapter 4). Such applications suggest the development of innovative projects to improve health care processes, services and systems. In this way, mathematics can be a very important tool for technological and societal development.