
    POWERPLAY: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem

    Most of computer science focuses on automatically solving given computational problems. I focus on automatically inventing or discovering problems in a way inspired by the playful behavior of animals and humans, to train an increasingly general problem solver from scratch in an unsupervised fashion. Consider the infinite set of all computable descriptions of tasks with possibly computable solutions. The novel algorithmic framework POWERPLAY (2011) continually searches the space of possible pairs of new tasks and modifications of the current problem solver, until it finds a more powerful problem solver that provably solves all previously learned tasks plus the new one, while the unmodified predecessor does not. Wow-effects are achieved by continually making previously learned skills more efficient such that they require less time and space. New skills may (partially) re-use previously learned skills. POWERPLAY's search orders candidate pairs of tasks and solver modifications by their conditional computational (time and space) complexity, given the stored experience so far. The new task and its corresponding task-solving skill are those first found and validated. The computational costs of validating new tasks need not grow with task repertoire size. POWERPLAY's ongoing search for novelty keeps breaking the generalization abilities of its present solver. This is related to Gödel's sequence of increasingly powerful formal theories, based on adding formerly unprovable statements to the axioms without affecting previously provable theorems. The continually increasing repertoire of problem-solving procedures can be exploited by a parallel search for solutions to additional, externally posed tasks. POWERPLAY may be viewed as a greedy but practical implementation of basic principles of creativity. A first experimental analysis can be found in separate papers [53,54]. (Comment: 21 pages; additional connections to previous work; references to first experiments with POWERPLAY.)
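    As a reading aid only, here is a minimal toy sketch of the outer search loop described above, in which tasks are simple (input, output) pairs and the solver is a lookup table; the task family, the cost ordering by input size, and every name are illustrative assumptions, not the paper's construction.

```python
# Toy POWERPLAY-style outer loop: accept a (new task, solver modification) pair
# only if the modified solver solves the new task and everything learned before,
# while the unmodified solver does not solve the new task.
from itertools import count

def powerplay_toy(max_new_tasks=5):
    repertoire = []          # tasks provably solved so far
    solver = {}              # toy solver: memoized input -> output table

    def solves(s, task):
        x, y = task
        return s.get(x) == y

    for _ in range(max_new_tasks):
        for x in count(0):                         # cheapest candidates first
            task = (x, x * x)                      # toy task: square the input
            candidate = {**solver, x: x * x}       # toy solver modification
            if (not solves(solver, task)                           # task is novel
                    and solves(candidate, task)                    # ...and now solved
                    and all(solves(candidate, t) for t in repertoire)):
                solver = candidate
                repertoire.append(task)
                break
    return repertoire, solver

print(powerplay_toy())
```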

    The structure of problem-solving knowledge and the structure of organisations

    This work presents a model of organisational problem solving able to account for the relationships between problem complexity, task decentralization and problem-solving efficiency. Whenever problem solving requires the coordination of a multiplicity of interdependent elements, the varying degrees of decentralization of cognitive and operational tasks shape the solutions which can be generated, tested and selected. Suboptimality and path-dependence are shown to be ubiquitous features of organisational problem solving. At the same time, the model allows a precise exploration of the possible trade-offs between decomposition patterns and search efficiency involved in different organisational architectures.
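    A rough sketch of the kind of trade-off the model addresses, under heavy simplifying assumptions of my own (an NK-style random landscape, two sub-units that each control half of the decision variables, global fitness visible to everyone); it is not the paper's model.

```python
# Compare centralized hill-climbing with a decentralized variant in which each
# unit may only change its own decision variables.
import random

N, K = 12, 3
random.seed(0)
tables = [{} for _ in range(N)]          # lazily filled random fitness contributions

def fitness(bits):
    total = 0.0
    for i in range(N):
        key = tuple(bits[(i + j) % N] for j in range(K + 1))
        total += tables[i].setdefault(key, random.random())
    return total / N

def local_search(bits, movable, steps=300):
    """Hill-climb, but only the 'movable' positions may be changed."""
    bits = list(bits)
    for _ in range(steps):
        i = random.choice(movable)
        trial = bits[:]
        trial[i] ^= 1
        if fitness(trial) >= fitness(bits):
            bits = trial
    return bits

start = [random.randint(0, 1) for _ in range(N)]
centralized = local_search(start, movable=list(range(N)))
# Decentralized: two units, each adjusting only its own half of the decisions.
step1 = local_search(start, movable=list(range(N // 2)))
decentralized = local_search(step1, movable=list(range(N // 2, N)))
print("centralized   fitness:", round(fitness(centralized), 3))
print("decentralized fitness:", round(fitness(decentralized), 3))
```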

    An information-theoretic and dissipative systems approach to the study of knowledge diffusion and emerging complexity in innovation systems

    The paper applies information theory and the theory of dissipative systems to discuss the emergence of complexity in an innovation system as a result of its adaptation to an uneven distribution of the cognitive distance between its members. By modelling cognitive distance as noise, on the one hand, and the inefficiencies linked to a poor flow of information as costs, on the other, we propose a model of the dynamics by which a horizontal network evolves into a hierarchical network, with some members emerging as intermediaries in the transfer of knowledge between seekers and problem-solvers. Our theoretical model contributes to the understanding of the evolution of an innovation system by explaining how the increased complexity of the system can be thermodynamically justified by purely internal factors. Complementing previous studies, we demonstrate mathematically that the complexity of an innovation system can increase not only to address the complexity of the problems that the system has to solve, but also to improve the performance of the system in transferring the knowledge needed to find a solution.
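    A toy numerical illustration of the intuition, under assumptions that are mine rather than the paper's: each link is treated as a binary symmetric channel whose error probability stands in for cognitive distance, and transfer cost is approximated by channel uses per reliably delivered bit.

```python
# Two lower-noise hops through an intermediary can be cheaper than one
# high-noise direct link between cognitively distant members.
from math import log2

def bsc_capacity(p):
    """Capacity (bits per use) of a binary symmetric channel with error probability p."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)

def transfer_cost(error_probs, message_bits=1000):
    # Cost proxy: channel uses needed to move the message reliably over each hop.
    return sum(message_bits / bsc_capacity(p) for p in error_probs)

direct = transfer_cost([0.35])            # seeker <-> cognitively distant solver
via_hub = transfer_cost([0.10, 0.10])     # seeker <-> intermediary <-> solver
print(f"direct link     : {direct:8.0f} channel uses")
print(f"via intermediary: {via_hub:8.0f} channel uses")
```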

    Automated Design of Metaheuristic Algorithms: A Survey

    Metaheuristics have gained great success in academia and practice because their search logic can be applied to any problem with an available solution representation, a solution quality evaluation, and certain notions of locality. Manually designing metaheuristic algorithms for solving a target problem is criticized for being laborious, error-prone, and requiring intensive specialized knowledge. This gives rise to increasing interest in the automated design of metaheuristic algorithms. With computing power to fully explore potential design choices, automated design could reach and even surpass human-level design, and could make high-performance algorithms accessible to a much wider range of researchers and practitioners. This paper presents a broad picture of the automated design of metaheuristic algorithms by surveying the common grounds and representative techniques in this field in terms of design space, design strategies, performance evaluation strategies, and target problems.
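    A compressed sketch of the survey's three ingredients in miniature: a small design space of components, a design strategy (plain random search here), and a performance evaluation strategy (mean cost over training instances). The components and the onemax-style target problem are illustrative choices, not taken from the paper.

```python
import random

random.seed(1)
DESIGN_SPACE = {
    "flips_per_move": [1, 2, 4],
    "accept": ["improving_only", "always"],
    "restart_every": [50, 200, 1000],
}

def run_config(cfg, n=40, budget=1000):
    """Run one configured local search on a onemax-style instance; return its gap to the optimum."""
    cur = [random.randint(0, 1) for _ in range(n)]
    best = sum(cur)
    for step in range(1, budget + 1):
        if step % cfg["restart_every"] == 0:
            cur = [random.randint(0, 1) for _ in range(n)]
        trial = cur[:]
        for i in random.sample(range(n), cfg["flips_per_move"]):
            trial[i] ^= 1
        if cfg["accept"] == "always" or sum(trial) >= sum(cur):
            cur = trial
        best = max(best, sum(cur))
    return n - best                                   # lower is better

def automated_design(trials=30, instances=5):
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):                           # design strategy: random search
        cfg = {k: random.choice(v) for k, v in DESIGN_SPACE.items()}
        score = sum(run_config(cfg) for _ in range(instances)) / instances
        if score < best_score:                        # evaluation: mean over instances
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

print(automated_design())
```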

    Organizational Routines Development and New Venture Performance

    To better understand how entrepreneurial ventures vary as they evolve, we introduce and develop the concept of an organizational routine in a prototypical state, a protoroutine. Protoroutines allow experienced new ventures (but not inexperienced start-ups) to economize on decision-making and execution time in problem solving by drawing from an inventory of prior solutions to challenges. Protoroutines are not, however, tailored to the challenge at hand. We embed protoroutines into a simulation-based model featuring agents with differing decision-making speeds and abilities to explore more distant solutions, two parameters influenced by founding-team characteristics. Search speed and distance are typically traded off against each other at the team-design level. Protoroutines may therefore be particularly helpful in organizational contexts in which it is optimal to have both search speed and distance. We characterize the organizational contextual configurations, along the dimensions of environmental turbulence and decision complexity, in which protoroutines, search speed, and search distance are associated with elevated (and dampened) organizational performance. One important conclusion is that decision-making speed can be a valuable organizational resource across organizational environments. Overall, our agent-based model and simulation results deepen our understanding of how, and with what performance consequences, new ventures develop.
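    A stripped-down sketch of an agent-based search of this kind, with protoroutines modelled, by assumption, as an inventory of previously successful solutions reused as starting points; the landscape, parameters, and names are illustrative and not the authors' model.

```python
import random

N = 16
random.seed(2)

def score(bits, weights):
    return sum(w for b, w in zip(bits, weights) if b)

def search(speed, distance, protoroutines, periods=30):
    weights = [random.random() for _ in range(N)]            # the current challenge
    # Experienced ventures start from the best-fitting prior solution, untailored.
    start = max(protoroutines, key=lambda s: score(s, weights), default=None)
    cur = list(start) if start else [random.randint(0, 1) for _ in range(N)]
    for _ in range(periods * speed):                         # speed = moves per period
        trial = cur[:]
        for i in random.sample(range(N), distance):          # distance = jump size
            trial[i] ^= 1
        if score(trial, weights) >= score(cur, weights):
            cur = trial
    return score(cur, weights)

inventory = [[1] * N, [1, 0] * (N // 2)]                     # toy prior solutions
print("experienced, fast:", round(search(speed=4, distance=1, protoroutines=inventory), 3))
print("start-up, slow   :", round(search(speed=1, distance=1, protoroutines=[]), 3))
```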

    Robust evolutionary algorithms

    Evolutionary Algorithms (EAs) have shown great potential to solve complex real-world problems, but their dependence on problem-specific configuration in order to obtain high-quality performance prevents EAs from achieving widespread use. While it is widely accepted that statically configuring an EA is already a complex problem, dynamic configuration of an EA is a combinatorially harder problem. Evidence provided here supports the claim that EAs achieve the best results when using dynamic configurations. By designing methods that automatically configure parts of an EA, or by changing how EAs work to avoid configurable aspects, EAs can be made more robust, allowing them better performance on a wider variety of problems with fewer requirements on the user. Two methods are presented in this thesis to increase the robustness of EAs. The first is a novel algorithm designed to automatically configure and dynamically update the recombination method which is used by the EA to exploit known information to create new solutions. The techniques used by this algorithm can likely be applied to other aspects of an EA in the future, leading to even more robust EAs. The second is an existing set of algorithms which only require a single configurable parameter. The analysis of the existing set led to the creation of a new variation, as well as a better understanding of how these algorithms work. Both methods are able to outperform more traditional EAs while also being easier to apply to new problems. By building upon these methods, and perhaps combining them, EAs can become even more robust and more widely used --Abstract, page iv
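    A hedged sketch in the spirit of the first method, using generic adaptive operator selection rather than the thesis's specific algorithm: recombination operators earn credit when their offspring improve on the parents and are then chosen in proportion to that credit.

```python
import random

random.seed(3)
N, POP = 30, 20

def one_point(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def uniform(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

operators = {"one_point": one_point, "uniform": uniform}
credit = {name: 1.0 for name in operators}                  # dynamic configuration state
pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
fit = sum                                                    # onemax fitness

for _ in range(300):
    a, b = random.sample(pop, 2)
    name = random.choices(list(credit), weights=list(credit.values()))[0]
    child = operators[name](a, b)
    if random.random() < 0.2:                                # light mutation
        j = random.randrange(N)
        child[j] ^= 1
    improved = fit(child) > max(fit(a), fit(b))
    credit[name] = 0.9 * credit[name] + (1.0 if improved else 0.0)
    worst = min(range(POP), key=lambda i: fit(pop[i]))
    if fit(child) >= fit(pop[worst]):
        pop[worst] = child

print("best fitness:", max(map(fit, pop)), "| operator credit:", credit)
```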

    The Value and Costs of Modularity: A Cognitive Perspective

    This paper discusses the issue of modularity from a problem-solving perspective. Modularity is in fact a decomposition heuristic, through which a complex problem is decomposed into independent or quasi-independent sub-problems. By means of a model of problem decomposition, this paper studies the trade-offs of modularity: on the one hand, finer modules increase the speed of search, but on the other hand, they usually lead to lock-in on sub-optimal solutions. How effectively this trade-off can be balanced depends upon the problem environment and its complexity and volatility: we show that in stationary and complex environments there exists an evolutionary advantage to over-modularization, while in highly volatile, though “simple”, environments, contrary to usual wisdom, modular search is inefficient. The empirical relevance of our findings is discussed, especially with reference to the literature on system integration. Keywords: modularity, problem solving, complex systems
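    A back-of-envelope illustration of the speed side of the trade-off, under the simplifying assumption that each module of an N-bit design problem is searched exhaustively and independently; interdependencies across module boundaries, the source of lock-in, are deliberately ignored here.

```python
# Exhaustive search cost under different decompositions of an N-bit problem.
N = 24
for module_size in (1, 2, 4, 8, 12, 24):
    modules = N // module_size
    cost = modules * 2 ** module_size          # search each module independently
    print(f"{modules:2d} modules of size {module_size:2d}: {cost:>12,d} evaluations")
# Finer modules cut search cost dramatically, but any interdependency that
# crosses a module boundary is invisible to this search, which is where
# lock-in to sub-optimal solutions comes from.
```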

    Directed Incremental Symbolic Execution

    The last few years have seen a resurgence of interest in the use of symbolic execution -- a program analysis technique developed more than three decades ago to analyze program execution paths. Scaling symbolic execution and other path-sensitive analysis techniques to large systems remains challenging despite recent algorithmic and technological advances. An alternative to solving the problem of scalability is to reduce the scope of the analysis. One approach that is widely studied in the context of regression analysis is to analyze the differences between two related program versions. While such an approach is intuitive in theory, finding efficient and precise ways to identify program differences, and to characterize their effects on how the program executes, has proved challenging in practice. In this paper, we present Directed Incremental Symbolic Execution (DiSE), a novel technique for detecting and characterizing the effects of program changes. The novelty of DiSE is to combine the efficiency of static analysis techniques to compute program difference information with the precision of symbolic execution to explore program execution paths and generate path conditions affected by the differences. DiSE is a complementary technique to other reduction or bounding techniques developed to improve symbolic execution. Furthermore, DiSE does not require analysis results to be carried forward as the software evolves -- only the source code for two related program versions is required. A case study of our implementation of DiSE illustrates its effectiveness at detecting and characterizing the effects of program changes.
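    A very reduced sketch of the idea, simplified well beyond the authors' implementation: a textual diff stands in for the static change-impact analysis, and the symbolic-execution side is reduced to filtering a hypothetical precomputed table of path conditions down to those that touch a changed line.

```python
import difflib

OLD = ["x = read()", "if x > 0:", "    y = x + 1", "else:", "    y = -x", "return y"]
NEW = ["x = read()", "if x > 0:", "    y = x + 2", "else:", "    y = -x", "return y"]

def changed_lines(old, new):
    """Indices of old-version lines touched by the change (a stand-in for change-impact analysis)."""
    changed = set()
    for tag, i1, i2, _, _ in difflib.SequenceMatcher(a=old, b=new).get_opcodes():
        if tag != "equal":
            changed.update(range(i1, i2))
    return changed

# Hypothetical path table: each path condition plus the old-version lines whose
# statements influence values along that path.
PATHS = {
    "x > 0":  {"condition": "x > 0",       "lines": {0, 1, 2, 5}},
    "x <= 0": {"condition": "not (x > 0)", "lines": {0, 1, 3, 4, 5}},
}

affected = changed_lines(OLD, NEW)
to_explore = [name for name, info in PATHS.items() if info["lines"] & affected]
print("changed old-version lines:", sorted(affected))
print("paths to re-explore symbolically:", to_explore)
```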