
    Local Optimal Sets and Bounded Archiving on Multi-objective NK-Landscapes with Correlated Objectives

    The properties of local optimal solutions in multi-objective combinatorial optimization problems are crucial for the effectiveness of local search algorithms, particularly when these algorithms are based on Pareto dominance. Such local search algorithms typically return a set of mutually nondominated Pareto local optimal (PLO) solutions, that is, a PLO-set. This paper investigates two aspects of PLO-sets by means of experiments with Pareto local search (PLS). First, we examine the impact of several problem characteristics on the properties of PLO-sets for multi-objective NK-landscapes with correlated objectives. In particular, we report that either increasing the number of objectives or decreasing the correlation between objectives leads to an exponential increase in the size of PLO-sets, whereas the variable correlation has only a minor effect. Second, we study the running time and the quality reached when using bounded archiving methods to limit the size of the archive handled by PLS, and thus the maximum size of the PLO-set found. We argue that there is a clear relationship between the running time of PLS and the difficulty of a problem instance. Comment: appears in Parallel Problem Solving from Nature - PPSN XIII, Ljubljana, Slovenia (2014)
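
    To make the archiving behaviour concrete, the following is a minimal, hypothetical Python sketch of Pareto local search with a size-bounded archive. It is not the implementation used in the paper; the `neighbours` and `evaluate` callables are assumed to be supplied by the user, and the random pruning rule is only a placeholder for a real bounded-archiving method.

```python
# Simplified sketch of Pareto local search (PLS) with a bounded archive
# (maximization of all objectives). Not the authors' implementation.
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def update_archive(archive, candidate, max_size):
    """Insert candidate if it is not dominated; drop members it dominates.
    If the size bound is exceeded, discard an arbitrary member (a real
    bounded archiver would use a smarter rule, e.g. crowding or an indicator)."""
    objectives = candidate[1]
    if any(dominates(a[1], objectives) for a in archive):
        return False
    archive[:] = [a for a in archive if not dominates(objectives, a[1])]
    archive.append(candidate)
    if len(archive) > max_size:
        archive.pop(random.randrange(len(archive)))
    return True

def pareto_local_search(initial, neighbours, evaluate, max_size=100):
    """Explore neighbourhoods of newly archived solutions until none remain
    unexplored; the archive then approximates a (bounded) PLO-set."""
    archive = [(initial, evaluate(initial))]
    unexplored = [initial]
    while unexplored:
        current = unexplored.pop()
        for n in neighbours(current):
            if update_archive(archive, (n, evaluate(n)), max_size):
                unexplored.append(n)
    return archive
```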

    Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database

    Radiologists in their daily work routinely find and annotate significant abnormalities on a large number of radiology images. Such abnormalities, or lesions, have been collected over the years and stored in hospitals' picture archiving and communication systems. However, they are largely unsorted and lack semantic annotations such as type and location. In this paper, we aim to organize and explore them by learning a deep feature representation for each lesion. A large-scale and comprehensive dataset, DeepLesion, is introduced for this task. DeepLesion contains bounding boxes and size measurements of over 32K lesions. To model their similarity relationship, we leverage multiple supervision information including types, self-supervised location coordinates, and sizes. They require little manual annotation effort but describe useful attributes of the lesions. Then, a triplet network is utilized to learn lesion embeddings with a sequential sampling strategy to depict their hierarchical similarity structure. Experiments show promising qualitative and quantitative results on lesion retrieval, clustering, and classification. The learned embeddings can be further employed to build a lesion graph for various clinically useful applications. We propose algorithms for intra-patient lesion matching and missing annotation mining. Experimental results validate their effectiveness. Comment: Accepted by CVPR2018. DeepLesion URL added.
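
    As an illustration of the triplet objective mentioned above, here is a small, assumed Python/NumPy sketch of a triplet margin loss: the anchor embedding should be closer to a similar lesion (positive) than to a dissimilar one (negative) by at least a margin. The margin value and the random embeddings are placeholders, not values from the paper.

```python
# Illustrative triplet margin loss over batches of embeddings (not the paper's code).
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.5):
    """Hinge loss max(0, d(a,p) - d(a,n) + margin), averaged over a batch.
    Inputs are (batch, dim) arrays of embedding vectors."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Toy usage with random vectors standing in for lesion embeddings.
rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(8, 128)) for _ in range(3))
print(triplet_margin_loss(a, p, n))
```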

    Investigation of Archiving Techniques for Evolutionary Multi-objective Optimizers

    The optimization of multi-objective problems from the Pareto dominance viewpoint can lead to huge sets of incomparable solutions. Many heuristic techniques proposed for these problems have to deal with approximation sets whose size may or may not be limited. Usually, a new solution generated by a heuristic is compared with other archived non-dominated solutions generated previously. Many techniques deal with limited-size archives, since comparisons within unlimited archives may require significant computational effort. To maintain limited archives, solutions need to be discarded. Several techniques have been proposed to decide which solutions remain in the archive and which are discarded. Previous investigations showed that those techniques might not prevent deterioration of the archives. In this study, we propose to store discarded solutions in a secondary archive and, periodically, recycle them, bringing them back into the optimization process. Three recycling techniques were investigated for three known methods. The datasets for the experiments consisted of 91 instances of discrete and continuous problems with 2, 3 and 4 objectives. The results showed that the recycling method can benefit the tested optimizers on many problem classes.
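
    The sketch below illustrates, under simplifying assumptions, the recycling idea described in this abstract: solutions pruned from the bounded primary archive are kept in a secondary archive and periodically handed back to the optimizer. The class name, pruning rule, and recycling period are hypothetical, not the paper's three techniques.

```python
# Hypothetical sketch of a bounded archive with a secondary "recycling" store
# (maximization). Not one of the specific methods evaluated in the paper.
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

class RecyclingArchive:
    def __init__(self, capacity, recycle_period=50):
        self.capacity = capacity
        self.recycle_period = recycle_period
        self.primary = []    # bounded non-dominated archive
        self.secondary = []  # discarded solutions awaiting recycling
        self.steps = 0

    def add(self, objectives):
        """Insert a new objective vector; return any recycled solutions."""
        self.steps += 1
        if any(dominates(p, objectives) for p in self.primary):
            self.secondary.append(objectives)
        else:
            self.secondary += [p for p in self.primary if dominates(objectives, p)]
            self.primary = [p for p in self.primary if not dominates(objectives, p)]
            self.primary.append(objectives)
            if len(self.primary) > self.capacity:
                self.secondary.append(self.primary.pop(0))  # naive pruning rule
        if self.steps % self.recycle_period == 0:
            recycled, self.secondary = self.secondary, []
            return recycled  # hand these back to the optimization process
        return []
```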

    Multi-Objective Archiving

    Most multi-objective optimisation algorithms maintain an archive, explicitly or implicitly, during their search. Such an archive can be used solely to store high-quality solutions presented to the decision maker, but in many cases it also participates in the search process (e.g., as the population in evolutionary computation). Over the last two decades, archiving, the process of comparing new solutions with previous ones and deciding how to update the archive/population, has stood as an important issue in evolutionary multi-objective optimisation (EMO). This is evidenced by constant efforts from the community on developing various effective archiving methods, ranging from conventional Pareto-based methods to more recent indicator-based and decomposition-based ones. However, the focus of these efforts is on empirical performance comparison in terms of specific quality indicators; there is a lack of systematic study of archiving methods from a general theoretical perspective. In this paper, we attempt to conduct a systematic overview of multi-objective archiving, in the hope of paving the way to understand archiving algorithms from a holistic perspective of theory and practice, and more importantly providing guidance on how to design theoretically desirable and practically useful archiving algorithms. In doing so, we also show that archiving algorithms based on weakly Pareto-compliant indicators (e.g., the epsilon indicator), as long as they are designed properly, can achieve the same theoretical desirables as archivers based on Pareto-compliant indicators (e.g., the hypervolume indicator). Such desirables include the property limit-optimal, the limit form of the possible optimal property that a bounded archiving algorithm can have with respect to the most general form of superiority between solution sets. Comment: 21 pages, 4 figures, journal
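
    To make the general archiving loop discussed above concrete, here is a hedged Python sketch of a bounded, dominance-based archiver with a pluggable pruning step. The nearest-neighbour (crowding-style) pruning rule is one illustrative choice of my own, not a method recommended by the paper.

```python
# Generic bounded archiver sketch (maximization): compare the new solution
# with the archive, remove dominated members, and prune when over the bound.
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nearest_neighbour_distance(point, others):
    """Squared Euclidean distance to the closest other archive member."""
    return min(sum((x - y) ** 2 for x, y in zip(point, o)) for o in others)

def archive_update(archive, new, bound):
    if any(dominates(a, new) for a in archive):
        return archive                       # new solution rejected
    archive = [a for a in archive if not dominates(new, a)] + [new]
    if len(archive) > bound:
        # drop the most crowded member (smallest distance to its nearest neighbour)
        idx = min(range(len(archive)),
                  key=lambda i: nearest_neighbour_distance(
                      archive[i], archive[:i] + archive[i + 1:]))
        archive.pop(idx)
    return archive
```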

    Elitist archiving for multi-objective evolutionary algorithms: To adapt or not to adapt

    Objective-space discretization is a popular method to control the elitist archive size in evolutionary multi-objective optimization and to avoid problems with convergence. By setting the level of discretization, the proximity and diversity of the Pareto approximation set can be controlled. This paper proposes an adaptive archiving strategy that is developed from a rigid-grid discretization mechanism. The main advantage of this strategy is that the practitioner only decides the desired target size for the elitist archive, while all the maintenance details are handled automatically. We compare the adaptive and rigid archiving strategies on the basis of a performance indicator that measures front quality, success rate, and running time. Experimental results confirm the competitiveness of the adaptive method while showing its advantages in terms of transparency and ease of use.
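
    The following is a rough, assumed Python sketch of objective-space discretization for an elitist archive: objective vectors are mapped to grid boxes, at most one representative is kept per box, and an adaptive variant coarsens or refines the cell size so the archive tracks a target size. The doubling/halving rule and the tie-breaking rule are simplifications for illustration, not the mechanism proposed in the paper.

```python
# Simplified grid-based elitist archive with an adaptive discretization level
# (maximization). Illustrative only.
import math

class GridArchive:
    def __init__(self, target_size, cell=1.0):
        self.target = target_size
        self.cell = cell          # discretization level (box side length)
        self.boxes = {}           # box index -> objective vector

    def _box(self, objs):
        return tuple(math.floor(o / self.cell) for o in objs)

    def add(self, objs):
        key = self._box(objs)
        kept = self.boxes.get(key)
        # keep the box representative with the larger objective sum (simple tie rule)
        if kept is None or sum(objs) > sum(kept):
            self.boxes[key] = objs
        self._adapt()

    def _adapt(self):
        # coarsen when too full, refine when too empty, then re-bucket members
        if len(self.boxes) > self.target:
            self.cell *= 2.0
        elif len(self.boxes) < self.target // 2 and self.cell > 1e-9:
            self.cell /= 2.0
        else:
            return
        old = list(self.boxes.values())
        self.boxes = {}
        for o in old:
            key = self._box(o)
            if key not in self.boxes or sum(o) > sum(self.boxes[key]):
                self.boxes[key] = o
```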

    No-reference bitstream-based visual quality impairment detection for high definition H.264/AVC encoded video sequences

    Ensuring and maintaining adequate Quality of Experience for end-users are key objectives for video service providers, not only for increasing customer satisfaction but also as a service differentiator. However, in the case of High Definition video streaming over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standards organizations have established a minimum set of performance objectives which should be achieved for obtaining satisfactory quality. Therefore, video service providers should continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them inappropriate for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. By incorporating only information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end-user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
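
    The sketch below is purely illustrative of the kind of visible/invisible decision described above. The feature names, weights, and threshold are hypothetical placeholders of my own and are not the features or classifier used in the article; the only assumption carried over is that all inputs come from the encoded bitstream, with no reference video.

```python
# Hypothetical no-reference visibility check for a packet-loss event,
# using only bitstream-derived features (placeholder feature set).
from dataclasses import dataclass

@dataclass
class LossEventFeatures:
    lost_macroblocks: int          # spatial extent of the loss
    mean_motion_magnitude: float   # temporal activity around the loss
    frames_until_refresh: int      # how long the error may propagate

def impairment_visible(f: LossEventFeatures) -> bool:
    """Flag a loss event as visible when it is large, lies in a moving
    region, and is not quickly concealed by an intra refresh."""
    score = (0.4 * min(f.lost_macroblocks / 100.0, 1.0)
             + 0.4 * min(f.mean_motion_magnitude / 8.0, 1.0)
             + 0.2 * min(f.frames_until_refresh / 30.0, 1.0))
    return score > 0.5

print(impairment_visible(LossEventFeatures(250, 6.0, 24)))
```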

    A software framework based on a conceptual unified model for evolutionary multiobjective optimization: ParadisEO-MOEO

    This paper presents a general-purpose software framework dedicated to the design and implementation of evolutionary multiobjective optimization techniques: ParadisEO-MOEO. A concise overview of evolutionary algorithms for multiobjective optimization is given. A substantial number of methods have been proposed so far, and an attempt at conceptually unifying existing approaches is presented here. Based on a fine-grained decomposition and following the main issues of fitness assignment, diversity preservation and elitism, a conceptual model is proposed and validated by regarding a number of state-of-the-art algorithms as simple variants of the same structure. This model is then incorporated into the ParadisEO-MOEO software framework. This framework has proven its validity and high flexibility by enabling the resolution of many academic, real-world and hard multiobjective optimization problems.
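
    As a rough illustration of the fine-grained decomposition described above, here is a minimal Python sketch of an evolutionary multiobjective loop assembled from interchangeable fitness-assignment, diversity-preservation and elitism (archiving) components. ParadisEO-MOEO itself is a C++ framework; the function names and the tournament rule here are illustrative assumptions, not its API.

```python
# Sketch of a unified EMO loop with pluggable components (all callables are
# user-supplied; higher fitness/diversity values are assumed to be better).
import random

def evolve(population, evaluate, fitness_assignment, diversity, archive_update,
           variation, generations=100):
    archive = []
    for _ in range(generations):
        objs = [evaluate(ind) for ind in population]
        fitness = fitness_assignment(objs)       # e.g. dominance ranking
        spread = diversity(objs)                 # e.g. crowding distance
        for o in objs:
            archive = archive_update(archive, o)  # elitism component
        # binary tournament on (fitness, diversity), then variation
        def better(i, j):
            return i if (fitness[i], spread[i]) > (fitness[j], spread[j]) else j
        parents = [population[better(random.randrange(len(population)),
                                     random.randrange(len(population)))]
                   for _ in population]
        population = [variation(p) for p in parents]
    return archive
```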