
    Behavioral conservatism is linked to complexity of behavior in chimpanzees (<i>Pan troglodytes</i>): implications for cognition and cumulative culture

    Cumulative culture is rare, if not altogether absent, in nonhuman species. At the foundation of cumulative learning is the ability to modify, relinquish, or build upon previous behaviors flexibly to make them more productive or efficient. Within the primate literature, a failure to optimize solutions in this way is often proposed to derive from low-fidelity copying of witnessed behaviors, suboptimal social learning heuristics, or a lack of relevant sociocognitive adaptations. However, humans can also be markedly inflexible in their behaviors, perseverating with, or becoming fixated on, outdated or inappropriate responses. Humans show differential patterns of flexibility as a function of cognitive load, exhibiting difficulties with inhibiting suboptimal behaviors when there are high demands on working memory. We present a series of studies on captive chimpanzees indicating that behavioral conservatism in apes may be underlain by similar constraints: chimpanzees showed relatively little conservatism when behavioral optimization involved the inhibition of a well-established but simple solution, or the addition of a simple modification to a well-established but complex solution. In contrast, when behavioral optimization involved the inhibition of a well-established but complex solution, chimpanzees showed evidence of conservatism. We propose that conservatism is linked to behavioral complexity, potentially mediated by cognitive resource availability, and may be an important factor in the evolution of cumulative culture.

    PID control system analysis, design, and technology

    Designing and tuning a proportional-integral-derivative (PID) controller appears to be conceptually intuitive, but can be hard in practice if multiple (and often conflicting) objectives, such as a short transient and high stability, are to be achieved. Usually, initial designs obtained by all means need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired. This stimulates the development of "intelligent" tools that can assist engineers in achieving the best overall PID control for the entire operating envelope. This development has further led to the incorporation of some advanced tuning algorithms into PID hardware modules. Corresponding to these developments, this paper presents a modern overview of functionalities and tuning methods in patents, software packages and commercial hardware modules. It is seen that many PID variants have been developed in order to improve transient performance, but standardising and modularising PID control are desired, although challenging. The inclusion of system identification and "intelligent" techniques in software-based PID systems helps automate the entire design and tuning process to a useful degree. This should also assist future development of "plug-and-play" PID controllers that are widely applicable and can be set up easily and operate optimally for enhanced productivity, improved quality and reduced maintenance requirements.
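    The textbook control law the abstract refers to can be sketched in a few lines of Python. This is a generic discrete-time PID implementation driving a hypothetical first-order plant, not any of the surveyed industrial modules; the gains and plant model are illustrative assumptions.

    ```python
    class PID:
        """Minimal discrete-time PID controller: u = kp*e + ki*∫e dt + kd*de/dt."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt                    # integral term
            derivative = (error - self.prev_error) / self.dt    # derivative term
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Drive a simple first-order plant (dy/dt = u - y) toward a setpoint of 1.0.
    pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
    y = 0.0
    for _ in range(200):
        u = pid.update(1.0, y)
        y += (u - y) * 0.05     # Euler step of the plant dynamics
    print(y)                    # settles near the setpoint
    ```

    Tuning, in the paper's sense, is the search for gains `kp`, `ki`, `kd` that trade off transient speed against stability for a given plant.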

    The SOS Platform: Designing, Tuning and Statistically Benchmarking Optimisation Algorithms

    We present Stochastic Optimisation Software (SOS), a Java platform facilitating the algorithmic design process and the evaluation of metaheuristic optimisation algorithms. SOS reduces the burden of coding miscellaneous methods for dealing with several bothersome and time-demanding tasks, such as parameter tuning, implementation of comparison algorithms and testbed problems, collecting and processing data to display results, measuring algorithmic overhead, etc. SOS provides numerous off-the-shelf methods, including: (1) customised implementations of statistical tests, such as the Wilcoxon rank-sum test and the Holm–Bonferroni procedure, for comparing the performances of optimisation algorithms and automatically generating result tables in PDF and LaTeX formats; (2) the implementation of an original advanced statistical routine for accurately comparing pairs of stochastic optimisation algorithms; (3) the implementation of a novel testbed suite for continuous optimisation, derived from the IEEE CEC 2014 benchmark, allowing for controlled activation of the rotation on each testbed function. Moreover, we briefly comment on the current state of the literature in stochastic optimisation and highlight similarities shared by modern metaheuristics inspired by nature. We argue that the vast majority of these algorithms are simply reformulations of the same methods, and that metaheuristics for optimisation should be treated simply as stochastic processes, with less emphasis on the inspiring metaphor behind them.
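    The Holm–Bonferroni step-down procedure mentioned in the abstract corrects for multiple comparisons when one algorithm is tested against several others. A generic Python sketch (an illustrative implementation, not code from the SOS platform):

    ```python
    def holm_bonferroni(p_values, alpha=0.05):
        """Return a list of booleans: True where the null hypothesis is rejected."""
        m = len(p_values)
        # Sort indices by ascending p-value
        order = sorted(range(m), key=lambda i: p_values[i])
        reject = [False] * m
        for rank, idx in enumerate(order):
            # Compare the k-th smallest p-value against alpha / (m - k)
            if p_values[idx] <= alpha / (m - rank):
                reject[idx] = True
            else:
                break  # step-down: stop at the first non-rejection
        return reject

    result = holm_bonferroni([0.01, 0.04, 0.03, 0.005])
    print(result)  # → [True, False, False, True]
    ```

    The thresholds tighten as more hypotheses remain, which controls the family-wise error rate more powerfully than a plain Bonferroni correction.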

    Investment decisions and portfolios classification based on robust methods of estimation

    In the process of asset selection and allocation to an investment portfolio, the most important issue is the accurate evaluation of the volatility of the rate of return. To achieve stable and accurate parameter estimates for contaminated multivariate normal distributions, robust estimators are required. In this paper we apply several robust estimators to the selection of optimal investment portfolios. The main goal of the paper is a comparative analysis of the generated investment portfolios with respect to the chosen robust estimation methods.

    Keywords: investment decisions, robust estimators, portfolio classification, cluster analysis
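    The value of robust estimation on contaminated return data can be illustrated with a toy univariate sketch (a hypothetical Python example; the estimators studied in the paper are multivariate): a single outlier inflates the sample standard deviation substantially, while the median absolute deviation (MAD) barely moves.

    ```python
    import statistics

    def mad(xs):
        """Median absolute deviation, scaled by ~1.4826 so it is consistent
        with the standard deviation under normality."""
        med = statistics.median(xs)
        return 1.4826 * statistics.median([abs(x - med) for x in xs])

    returns = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02, -0.005, 0.01]
    contaminated = returns + [0.50]          # one extreme outlier

    sd_clean, sd_dirty = statistics.stdev(returns), statistics.stdev(contaminated)
    mad_clean, mad_dirty = mad(returns), mad(contaminated)

    # The classical scale estimate blows up; the robust one is nearly unchanged.
    print(sd_dirty / sd_clean, mad_dirty / mad_clean)
    ```

    The same intuition carries over to robust covariance estimates used for portfolio construction: bounding the influence of outliers stabilises the volatility inputs to the allocation problem.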

    An empirical evaluation of High-Level Synthesis languages and tools for database acceleration

    High-Level Synthesis (HLS) languages and tools are emerging as the most promising technique to make FPGAs more accessible to software developers. Nevertheless, picking the most suitable HLS for a certain class of algorithms depends on requirements such as area and throughput, as well as on programmer experience. In this paper, we explore the different trade-offs present when using a representative set of HLS tools in the context of Database Management Systems (DBMS) acceleration. More specifically, we conduct an empirical analysis of four representative frameworks (Bluespec SystemVerilog, Altera OpenCL, LegUp and Chisel) that we utilize to accelerate commonly-used database algorithms such as sorting, the median operator, and hash joins. Through our implementation experience and empirical results for database acceleration, we conclude that the selection of the most suitable HLS depends on a set of orthogonal characteristics, which we highlight for each HLS framework.
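    For readers unfamiliar with the hash-join operator the paper accelerates, here is the generic build/probe algorithm in software form (a Python sketch for exposition only; the paper's implementations are written in the HLS frameworks above, and the relation and key names here are made up):

    ```python
    def hash_join(left, right, lkey, rkey):
        """Equi-join two lists of dict rows on left[lkey] == right[rkey]."""
        # Build phase: hash the left relation on its join key.
        table = {}
        for row in left:
            table.setdefault(row[lkey], []).append(row)
        # Probe phase: stream the right relation, emitting merged matches.
        out = []
        for row in right:
            for match in table.get(row[rkey], []):
                out.append({**match, **row})
        return out

    users  = [{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]
    orders = [{"uid": 1, "item": "disk"}, {"uid": 1, "item": "cpu"},
              {"uid": 3, "item": "ram"}]
    joined = hash_join(users, orders, "id", "uid")
    print(joined)  # two rows, both joining user "ada"
    ```

    The build and probe phases have very different memory-access patterns, which is precisely why hash joins are an interesting stress test for the area/throughput trade-offs of different HLS toolchains.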
