
    Theories about architecture and performance of multi-agent systems

    Multi-agent systems are promising as models of organization because they build on the idea that most work in human organizations rests on intelligence, communication, cooperation, and massive parallel processing. They offer an alternative to system theories of organization, which are rather abstract in nature and pay no attention to the agent level. In contrast, classical organization theories offer a rich source of inspiration for developing multi-agent models because of their focus on the agent level. This paper studies the plausibility of theoretical choices in the construction of multi-agent systems. Multi-agent systems have to be plausible from a philosophical, psychological, and organizational point of view, and for each of these points of view alternative theories exist. Philosophically, the organization can be seen from the viewpoints of realism and constructivism. Psychologically, several agent types can be distinguished; a main problem in the construction of psychologically plausible computer agents is the integration of response function systems with representational systems. Organizationally, we study aspects of the architecture of multi-agent systems, namely topology, system function decomposition, coordination and synchronization of agent processes, and the distribution of knowledge and language characteristics among agents. For each of these aspects, several theoretical perspectives exist.
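    A minimal sketch of what these architectural aspects might look like in code, assuming a ring topology, synchronous round-based coordination, and knowledge distributed as per-agent fact sets; all class and function names are illustrative, since the paper itself proposes no concrete implementation.

        import random

        class Agent:
            def __init__(self, name, knowledge):
                self.name = name
                self.knowledge = set(knowledge)   # each agent holds only part of the knowledge
                self.inbox = []

            def step(self):
                # Integrate facts received from neighbours in the previous round.
                self.knowledge.update(self.inbox)
                self.inbox.clear()
                # Cooperate by sharing one known fact.
                return random.choice(sorted(self.knowledge)) if self.knowledge else None

        def run(agents, topology, rounds=5):
            # Synchronous coordination: every agent acts, then messages are delivered.
            for _ in range(rounds):
                outgoing = {a.name: a.step() for a in agents}
                for a in agents:
                    for neighbour in topology[a.name]:
                        if outgoing[neighbour] is not None:
                            a.inbox.append(outgoing[neighbour])

        agents = [Agent("a", {"x"}), Agent("b", {"y"}), Agent("c", {"z"})]
        ring = {"a": ["b"], "b": ["c"], "c": ["a"]}   # ring topology: who listens to whom
        run(agents, ring)
        print({a.name: sorted(a.knowledge) for a in agents})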

    A micro-meso-macro perspective on the methodology of evolutionary economics: integrating history, simulation and econometrics

    Applied economics has long been dominated by multiple regression techniques. In this regard, econometrics has tended to have a narrower focus than, for example, psychometrics in psychology. Over the last two decades, the simulation and calibration approach to modelling has become more popular as an alternative to traditional econometric strategies. However, in contrast to the well-developed methodologies that now exist in econometrics, simulation/calibration remains exploratory and provisional, both as an explanatory and as a predictive modelling technique, although clear progress has recently been made in this regard (see Brenner and Werker (2006)). In this paper, we suggest an approach that can usefully integrate both of these modelling strategies into a coherent evolutionary economic methodology.
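    As an illustration of the simulation/calibration strategy the paper discusses, the sketch below calibrates a single parameter of a toy model by matching a simulated moment to an empirical one (a grid-search variant of the method of simulated moments); the model, the moment, and the parameter grid are assumptions made for illustration, not taken from the paper.

        import random
        import statistics

        def simulate(theta, n=200, seed=0):
            # Toy data-generating process: growth rates scattered around theta.
            rng = random.Random(seed)
            return [theta + rng.gauss(0, 0.05) for _ in range(n)]

        empirical_mean = 0.021   # stand-in for an empirically measured moment

        # Grid search over theta in [0, 0.1]: pick the value whose simulated
        # moment lies closest to the empirical one.
        best = min((t / 1000 for t in range(101)),
                   key=lambda t: abs(statistics.mean(simulate(t)) - empirical_mean))
        print(f"calibrated parameter: {best:.3f}")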

    Simulation models of technological innovation: A Review

    The use of simulation modelling techniques in studies of technological innovation dates back to Nelson and Winter's 1982 book "An Evolutionary Theory of Economic Change" and is an area which has been steadily expanding ever since. Four main issues are identified in reviewing the key contributions to this burgeoning literature. Firstly, a key driver in the construction of computer simulations has been the desire to develop more complicated theoretical models capable of dealing with the complex phenomena characteristic of technological innovation. Secondly, no single model captures all of the dimensions and stylised facts of innovative learning; indeed, this paper argues that one can usefully distinguish between the various contributions according to the particular dimensions of the learning process which they explore. To this end the paper develops a taxonomy which distinguishes between these dimensions and also clarifies the quite different perspectives underpinning the contributions made by mainstream economists and non-mainstream, neo-Schumpeterian economists. This brings us to the third point highlighted in the paper: the character of the simulation models that are developed is heavily influenced by the generic research questions of these different schools of thought. Finally, attention is drawn to an important distinction between the process of learning and adaptation within a static environment, and dynamic environments in which the introduction of new artefacts and patterns of behaviour changes the selective pressure faced by agents. We show that modellers choosing to explore one or other of these settings reveal quite different conceptual understandings of "technological innovation".
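    To make the Nelson-Winter lineage concrete, here is a heavily simplified evolutionary sketch in which firms search (innovate or imitate) and market selection shifts share toward higher-productivity firms; the parameter values and the replicator rule are illustrative assumptions, not a reproduction of any model reviewed in the paper.

        import random

        rng = random.Random(1)
        firms = [{"productivity": 1.0, "share": 0.25} for _ in range(4)]

        for _ in range(50):
            best = max(f["productivity"] for f in firms)
            for f in firms:
                if rng.random() < 0.1:        # innovation: a stochastic productivity draw
                    f["productivity"] *= 1 + abs(rng.gauss(0, 0.05))
                elif rng.random() < 0.2:      # imitation: copy (most of) the best practice
                    f["productivity"] = max(f["productivity"], 0.95 * best)
            # Replicator selection: shares grow with relative productivity.
            avg = sum(f["productivity"] * f["share"] for f in firms)
            for f in firms:
                f["share"] *= f["productivity"] / avg
            total = sum(f["share"] for f in firms)
            for f in firms:
                f["share"] /= total

        print([round(f["share"], 3) for f in firms])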

    Exploring foundations for using simulations in IS research

    Simulation has been adopted in many disciplines as a means of understanding the behavior of a system by imitating it through an artificial object that exhibits nearly identical behavior. Although simulation approaches have been widely adopted for theory building in disciplines such as engineering, computer science, management, and the social sciences, their potential in the IS field is often overlooked. The aim of this paper is to understand how different simulation approaches are used in IS research, thereby providing insights and methodological recommendations for future studies. A literature review of simulation studies published in top-tier IS journals leads to the definition of three classes of simulations, namely the self-organizing, the elementary, and the situated. A set of stylized facts is identified for characterizing the ways in which the premise, the inference, and the contribution are presented in IS simulation studies. As a result, this study provides guidance to future simulation researchers in designing and presenting their findings.

    Measuring autonomy and emergence via Granger causality

    Concepts of emergence and autonomy are central to artificial life and related cognitive and behavioral sciences. However, quantitative and easy-to-apply measures of these phenomena are mostly lacking. Here, I describe quantitative and practicable measures of both autonomy and emergence, based on the framework of multivariate autoregression and specifically Granger causality. G-autonomy measures the extent to which knowing the past of a variable helps predict its future, as compared to predictions based on past states of external (environmental) variables. G-emergence measures the extent to which a process is both dependent upon and autonomous from its underlying causal factors. These measures are validated by application to agent-based models of predation (for autonomy) and flocking (for emergence). In the former, evolutionary adaptation enhances autonomy; the latter model illustrates not only emergence but also downward causation. I end with a discussion of the relations among autonomy, emergence, and consciousness.
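    A rough sketch of the G-autonomy idea as described in the abstract: compare the residual variance of predicting a variable from its environment alone against predicting it from the environment plus its own past, and take the log ratio. The lag-1 least-squares estimator below follows the standard Granger-causality recipe; the paper's exact estimator may differ in its details.

        import numpy as np

        def g_autonomy(x, env):
            # Lag-1 G-autonomy of x given an environmental series env.
            y = x[1:]
            restricted = np.column_stack([np.ones(len(y)), env[:-1]])    # environment only
            full = np.column_stack([np.ones(len(y)), env[:-1], x[:-1]])  # plus x's own past
            def resid_var(design):
                beta = np.linalg.lstsq(design, y, rcond=None)[0]
                return np.var(y - design @ beta)
            # Positive when x's own past improves prediction beyond the environment.
            return np.log(resid_var(restricted) / resid_var(full))

        rng = np.random.default_rng(0)
        env = rng.standard_normal(1000)
        x = np.zeros(1000)
        for t in range(1, 1000):   # x is strongly driven by its own past, weakly by env
            x[t] = 0.8 * x[t - 1] + 0.1 * env[t - 1] + 0.1 * rng.standard_normal()
        print(g_autonomy(x, env))  # clearly > 0 for this self-driven process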

    Divide and Conquer? Decentralisation, Co-ordination and Cluster Survival

    This paper develops a simulation model of the behaviour of clusters in the face of bifurcation events in their environment. Bifurcations are understood as the regional equivalent of Schumpeterian creative destruction. The model investigates the role of decentralisation and co-ordination in the likelihood of successful adaptation by comparing the adaptive performance of clusters exhibiting different degrees of decentralisation and alternative modes of co-ordination. Using Kauffman’s (1993) N/K model, it is found that there is an optimum degree of decentralisation with respect to cluster adaptability, while different co-ordination mechanisms face a trade-off between speed and cluster-level optimality of results. In doing so, the model sheds light on an empirical controversy regarding the role of both factors in adaptation that has emerged between the Silicon Valley – Boston Route 128 comparison on the one hand and the Italian Industrial District experience on the other. Moreover, the identification of the roles played by decentralisation and co-ordination in cluster adaptability in changing environments could serve as guidance for future empirical research as well as policy initiatives.
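    For readers unfamiliar with Kauffman's (1993) N/K model, the sketch below builds a random N/K fitness landscape (each of N sites contributes a fitness that depends on its own state and K others, with ruggedness growing in K) and runs a simple adaptive walk; the hill-climbing rule is an illustrative stand-in for the paper's cluster-adaptation dynamics, not its actual setup.

        import random

        def nk_landscape(N, K, rng):
            # Site i's contribution depends on its own state plus K random other sites.
            deps = [[i] + rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
            tables = [dict() for _ in range(N)]   # lazily filled random lookup tables
            def fitness(state):
                total = 0.0
                for i in range(N):
                    key = tuple(state[j] for j in deps[i])
                    if key not in tables[i]:
                        tables[i][key] = rng.random()
                    total += tables[i][key]
                return total / N
            return fitness

        rng = random.Random(42)
        fitness = nk_landscape(N=10, K=3, rng=rng)
        state = [rng.randint(0, 1) for _ in range(10)]
        for _ in range(200):   # adaptive walk: accept single-bit flips that improve fitness
            i = rng.randrange(10)
            trial = state[:]
            trial[i] ^= 1
            if fitness(trial) > fitness(state):
                state = trial
        print(state, round(fitness(state), 3))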