
    An Evolutionary Algorithm to Optimize Log/Restore Operations within Optimistic Simulation Platforms

    In this work we address state recoverability in advanced optimistic simulation systems by proposing an evolutionary algorithm that optimizes at run-time the parameters associated with state log/restore activities. Optimization takes place by adaptively selecting, for each simulation object, both (i) the best-suited log mode (incremental vs. non-incremental) and (ii) the corresponding optimal value of the log interval. Our performance optimization approach allows us to cope indirectly with hidden effects (e.g., locality) as well as cross-object effects caused by varying the log/restore parameters of different simulation objects (e.g., rollback thrashing). Neither of these effects is captured by literature solutions based on analytical models of the overhead associated with log/restore tasks. More specifically, our evolutionary algorithm dynamically adjusts the log/restore parameters of the distinct simulation objects as a whole, steering them toward a well-suited configuration. In this way, we prevent negative effects on performance due to biasing the optimization toward individual simulation objects, which may yield reduced (or even decreased) performance gains precisely because of the aforementioned hidden and/or cross-object phenomena. We also present an application-transparent implementation of the evolutionary algorithm within the ROme OpTimistic Simulator (ROOT-Sim), an open-source, general-purpose simulation environment designed according to the optimistic synchronization paradigm.
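
    To make the idea concrete, the following is a minimal sketch (not the ROOT-Sim implementation) of a generational evolutionary loop over per-object (log mode, log interval) configurations. The object count, the candidate interval range, and the measure_throughput fitness function are stand-ins introduced here for illustration; in the actual system the fitness would be run-time feedback observed from the simulation platform.

    # Minimal sketch of an evolutionary search over per-object log/restore
    # parameters; all constants and the fitness function are assumptions.
    import random

    N_OBJECTS = 8                      # number of simulation objects (assumed)
    POP_SIZE = 20
    GENERATIONS = 50
    MODES = ("incremental", "full")    # log modes considered
    INTERVALS = range(1, 51)           # candidate log intervals (assumed range)

    def random_config():
        # one gene per simulation object: (log mode, log interval)
        return [(random.choice(MODES), random.choice(INTERVALS))
                for _ in range(N_OBJECTS)]

    def measure_throughput(config):
        # placeholder fitness: in the real platform this would be the observed
        # committed-event rate under this whole-system configuration
        return -sum(interval for _, interval in config) + random.random()

    def mutate(config, rate=0.1):
        return [(random.choice(MODES), random.choice(INTERVALS))
                if random.random() < rate else gene
                for gene in config]

    def crossover(a, b):
        cut = random.randrange(1, N_OBJECTS)
        return a[:cut] + b[cut:]

    population = [random_config() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=measure_throughput, reverse=True)
        parents = scored[:POP_SIZE // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    best = max(population, key=measure_throughput)

    Note that each individual encodes the parameters of all simulation objects at once, which mirrors the paper's point that configurations are tuned as a whole rather than object by object.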

    Robust optimization with incremental recourse

    In this paper, we consider an adaptive approach to optimization problems with uncertain cost parameters. Here, the decision maker selects an initial decision, observes the realization of the uncertain cost parameters, and is then permitted to modify the initial decision. We treat the uncertainty using the framework of robust optimization, in which the uncertain parameters lie within a given set. The decision maker optimizes so as to obtain the best cost guarantee under worst-case analysis. The recourse decision is "incremental"; that is, the decision maker is permitted to change the initial solution only by a small fixed amount. We refer to the resulting problem as the robust incremental problem. We study robust incremental variants of several optimization problems. We show that the robust incremental counterpart of a linear program is itself a linear program if the uncertainty set is polyhedral, and hence is solvable in polynomial time. We establish NP-hardness of robust incremental linear programming for the case of a discrete uncertainty set. We show that the robust incremental shortest path problem is NP-complete when costs are chosen from a polyhedral uncertainty set, even in the case where only one new arc may be added to the initial path. We also address the complexity of several special cases of the robust incremental shortest path problem and of the robust incremental minimum spanning tree problem.
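
    A compact way to state this setting (the notation here is assumed for illustration, not taken from the paper) is as the min-max-min problem

    \min_{x \in X} \; \max_{c \in U} \; \min_{\substack{x' \in X \\ d(x', x) \le \delta}} \; c^\top x'

    where X is the feasible set, U is the (polyhedral or discrete) uncertainty set for the cost vector c, d measures how far the recourse decision x' may move from the initial decision x, and \delta is the small fixed amount by which the initial solution may be changed.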

    Measuring Catastrophic Forgetting in Neural Networks

    Deep neural networks are used in many state-of-the-art systems for machine perception. Once a network is trained to do a specific task, e.g., bird classification, it cannot easily be trained to do new tasks, e.g., incrementally learning to recognize additional bird species or learning an entirely different task such as flower recognition. When new tasks are added, typical deep neural networks are prone to catastrophically forgetting previous tasks. Networks that are capable of assimilating new information incrementally, much as humans form new memories over time, will be more efficient than retraining the model from scratch each time a new task needs to be learned. There have been multiple attempts to develop schemes that mitigate catastrophic forgetting, but these methods have not been directly compared, the tests used to evaluate them vary considerably, and they have only been evaluated on small-scale problems (e.g., MNIST). In this paper, we introduce new metrics and benchmarks for directly comparing five different mechanisms designed to mitigate catastrophic forgetting in neural networks: regularization, ensembling, rehearsal, dual-memory, and sparse-coding. Our experiments on real-world images and sounds show that the mechanism(s) critical for optimal performance vary with the incremental training paradigm and the type of data being used, but they all demonstrate that the catastrophic forgetting problem has yet to be solved. Comment: To appear in AAAI 201
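
    As a rough illustration of the kind of incremental evaluation such benchmarks rely on (a sketch under assumed names; the retention quantity below is illustrative and is not one of the paper's exact metrics), one can train on tasks sequentially and track how accuracy on the first task degrades as later tasks are learned:

    # Sketch of a sequential-training evaluation loop; train_on and evaluate
    # are placeholders standing in for real model training and testing.
    def train_on(model, task):
        # placeholder: fine-tune the model on one task's data
        model.setdefault("seen", []).append(task)
        return model

    def evaluate(model, task):
        # placeholder accuracy: perfect on the most recently trained task,
        # degraded on earlier ones, to mimic catastrophic forgetting
        seen = model.get("seen", [])
        if not seen or task not in seen:
            return 0.0
        return 1.0 if seen[-1] == task else 0.5

    tasks = ["task_0", "task_1", "task_2"]
    model = {}
    base_acc = None
    retention = []

    for t in tasks:
        model = train_on(model, t)
        if base_acc is None:
            base_acc = evaluate(model, tasks[0])
        # fraction of the original first-task accuracy that is retained
        retention.append(evaluate(model, tasks[0]) / base_acc)

    print(retention)   # [1.0, 0.5, 0.5] under the fake evaluator above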