11 research outputs found

    A Mathematical Model for the Origin of Name Brands and Generics

    Full text link
    Firms in the U.S. spend over 200 billion dollars each year advertising their products to consumers, around one percent of the country's gross domestic product. It is of great interest to understand how that aggregate expenditure affects prices, market efficiency, and overall welfare. Here, we present a mathematical model for the dynamics of competition through advertising and find a surprising prediction: when advertising is relatively cheap compared to the maximum benefit advertising offers, rational firms split into two groups, one with significantly less advertising (a "generic" group) and one with significantly more advertising (a "name brand" group). Our model predicts that this segmentation will also be reflected in price distributions; we use large consumer data sets to test this prediction and find good qualitative agreement. (Comment: 24 pages, 11 figures)

    The complementary relationship between institutional and complexity economics: The example of deep mechanismic explanations

    Get PDF
    The evolutionary-institutional and the complexity perspective are two complementary approaches to economic inquiry. Three arguments in favor of this hypothesis are discussed: (i) eminent institutional economists have considered the economy as what today would be called a complex system; (ii) complexity economics lacks meta-theoretical foundations, which could be provided by institutionalist theory; (iii) institutional economists could benefit from using methods of complexity economics. In this context I argue that scholars considering the economy to be complex should seek to explain it by discovering social mechanisms instead of focusing on prediction. To discriminate between alternative explanations, scholars should refer to the deepness of an explanation rather than to Occam's razor.

    Methodology Does Matter: About Implicit Assumptions in Applied Formal Modelling. The case of Dynamic Stochastic General Equilibrium Models vs Agent-Based Models

    Get PDF
    This article uses the functional decomposition approach to modeling of Mäki (2009b) to discuss the importance of methodological considerations before choosing a modeling framework in applied research. It considers the case of agent-based models and dynamic stochastic general equilibrium models to illustrate the implicit epistemological and ontological statements related to the choice of the corresponding modeling framework, and highlights the important role of the purpose and audience of a model. Special focus is put on the limited capacity for model exploration of equilibrium models and their difficulty in modeling mechanisms explicitly. Modeling mechanisms that interact with other mechanisms is identified as a particular challenge, one that sometimes makes it difficult to explain phenomena by isolating the underlying mechanisms. I therefore argue for a more extensive use of agent-based models, as they provide a formal tool to address this challenge. The overall conclusion is that a plurality of models is required: single models are simply pushed to their limits if one wishes to identify the right degree of isolation required to understand reality.

    A systemic framework for the computational analysis of complex economies: An evolutionary-institutional perspective on the ontology, epistemology, and methodology of complexity economics

    Get PDF
    This thesis introduces the idea of a symbiotic relationship between evolutionary-institutional and complexity economics. It consists of two major contributions: The first contribution focuses on how the emerging research program of complexity economics can benefit from evolutionary-institutional theory. I show that complexity economics still lacks an adequate philosophical foundation. I explicate why such a foundation is needed if complexity economics is to promote further scientific progress, and that such a foundation must consist of an adequate ontology, epistemology, and methodology. The following parts of the thesis then draw upon institutionalist and social theory to develop these three aspects: I derive a definition of complex economic systems by identifying their essential properties. I then propose an epistemology that is based on the concepts of mechanism-based explanation, generative sufficiency, and an extended version of Uskali Mäki's concept of "Models as Isolations and Surrogate Systems". I continue with some methodological considerations and argue that the method of agent-based computational economic modeling must play a distinctive role in the analysis of complex economies. The second contribution of the thesis shows how evolutionary institutionalism can profit from a methodological transfer from complexity economics. In particular, I argue that the method of agent-based computational modeling can advance institutionalism both as a formalization device and by providing theoretical concepts that are useful for institutionalist theorizing itself. The thesis closes by discussing a potential convergence of evolutionary-institutional and complexity economics and gives an outlook on avenues for further research.

    Agent-Based Computational Models - A Formal Heuristic for Institutionalist Pattern Modelling?

    Get PDF
    Institutionalist economists have long criticized the neoclassical way of studying the economy, especially because of its obsession with a very strict and flawed formalism. This formalism also receives critique from advocates of agent-based computational economic (ACE) models, and their criticism appears similar to that of institutional economists. Although some authors consider ACE models to belong to a completely new way of thinking about economics, many concepts of ACE have been anticipated by institutionalists: although using a different vocabulary, ACE proponents speak about cumulative causation, realistic agents, explanatory models, dynamic relations among individuals, and the necessity to see the economy as a systemic whole rather than from an atomistic perspective. Consequently, the emergence of the ACE framework should not be left unconsidered by institutionalist economists. This paper investigates the consistency of ACE models with the institutionalist research program as defined by Myrdal, Wilber and Harrison, and other original institutionalists, and discusses whether ACE models can be a useful heuristic for institutionalist "pattern modelling". I study the ability of ACE models to provide a holistic, systemic and evolutionary picture of the economy, examine the conception of agents in ACE models, and ask whether they can help to understand the social stratification of a society with its power relations. I also compare ACE models with earlier attempts to formalize institutionalist analysis, e.g. by Bush and Elsner (Theory of Institutional Change), Hayden (Social Fabric Matrix) or Radzicki (System Dynamics).

    Multi-Scale Evacuation Models To Support Emergency And Disaster Response

    Get PDF
    Evacuation is a short-term measure to mitigate human injuries and losses by temporarily relocating exposed populations before, during, or after disasters. With the increasing growth of population and cities, buildings and urban areas are over-populated, which raises safety issues when emergency evacuation is needed. In disaster studies, simulation is widely used to explore how natural hazards might evolve in the future and how societies might respond to these events. Accordingly, evacuation simulation is a potentially helpful tool for emergency responders and policy makers to evaluate the required time for evacuation and the estimated number and distribution of casualties under a disaster scenario. The healthcare system is an essential subsystem of a community that ensures the health and well-being of its residents. Hence, the resilience of the healthcare system plays an essential role in the resilience of the whole community. In disasters, patient mobility is a major challenge for healthcare systems to overcome, and this is where the scientific community enters with modeling and simulation techniques to help decision-makers. Hospital evacuation simulation, which must consider patients with different mobility characteristics, needs, and interactions, demands a microscopic modeling approach such as Agent-Based Modeling (ABM). However, as the system increases in size, the models become highly complex and intractable. Large-scale complex ABMs can be reduced by reformulating the micro-scale model of agents as a meso-scale model of population densities and partial differential equations, or as a macro-scale model of population stocks and ordinary differential equations. However, reducing the size and fidelity of microscopic models to meso- or macro-scale models implies certain drawbacks. This dissertation contributes to the improvement of large-scale agent-based evacuation simulation and multi-scale hospital evacuation models.
For large-scale agent-based models, the application of bug navigation algorithms, popular in the field of robotics, is evaluated as a way to improve the efficiency of such models. A candidate bug algorithm is proposed on the basis of a performance evaluation framework, and its applicability and practicability are demonstrated with a real-world example. For hospital evacuation simulation, crowd evacuation of people with different physical and mobility characteristics is modeled on three different scales: microscopic (ABM), mesoscopic (fluid dynamics model), and macroscopic (system dynamics model). Similar to the well-known predator-prey model, the results of this study show the extent to which macroscopic and mesoscopic models can reproduce global behaviors that emerge from agents' interactions in ABMs. To evaluate the performance of these multi-scale models, the evacuation of the emergency department at Johns Hopkins University is simulated, and the outputs and performance of the models are compared in terms of implementation complexity, required input data, provided output data, and computation time. It is concluded that the microscopic agent-based model is recommended to hospital emergency planners for long-term use, such as evaluating different emergency scenarios and the effectiveness of different evacuation plans. The macroscopic system dynamics model, on the other hand, is best used as a simple tool (like an app) for rapid situation assessment and decision making in the case of imminent events. The fluid dynamics model is found to be suitable only for studying crowd dynamics at medium to high densities and offers no particular advantage as an evacuation simulation tool.
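The macro-scale model described above reduces evacuation to a stock of occupants drained through exits at a capped flow rate. The following is a minimal sketch of that idea in Python; the function name and all parameter values are illustrative assumptions, not taken from the dissertation:

```python
def macro_evacuation(occupants, exit_capacity, dt=1.0):
    """System-dynamics style sketch: one stock (people still inside),
    one outflow (people leaving), capped by total exit capacity.
    Illustrative only; not calibrated to any real building."""
    t = 0.0
    trace = [(t, occupants)]
    while occupants > 1e-9:
        # Outflow is limited by exit capacity and by how many remain.
        outflow = min(exit_capacity, occupants / dt)
        occupants -= outflow * dt
        t += dt
        trace.append((t, occupants))
    return t, trace

# Hypothetical scenario: 500 occupants, two exits of 1.5 persons/s each.
clear_time, trace = macro_evacuation(500.0, exit_capacity=3.0)
```

A model at this scale answers only aggregate questions such as total clearance time; the per-patient mobility differences that motivate the agent-based model are invisible here, which is exactly the fidelity trade-off the dissertation discusses.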

    Markov chain aggregation for agent-based models

    Get PDF
    Banisch S. Markov chain aggregation for agent-based models. Bielefeld: Universitätsbibliothek Bielefeld; 2014.
    This thesis introduces a Markov chain approach that allows a rigorous analysis of a class of agent-based models (ABMs). It provides a general framework of aggregation in agent-based and related computational models by making use of Markov chain aggregation and lumpability theory in order to link the micro and the macro level of observation. The starting point is a microscopic Markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent model, which is obtained by considering the set of all possible agent configurations as the state space of a huge Markov chain. This is referred to as the micro chain, and an explicit formal representation including microscopic transition rates can be derived for a class of models by using the random mapping representation of a Markov process. The explicit micro formulation enables the application of the theory of Markov chain aggregation -- namely, lumpability -- in order to reduce the state space of the micro chain and relate microscopic descriptions to a macroscopic formulation of interest. Well-known conditions for lumpability make it possible to establish the cases where the macro model is still Markov, and in this case we obtain a complete picture of the dynamics including the transient stage, the most interesting phase in applications. For such a purpose a crucial role is played by the type of probability distribution used to implement the stochastic part of the model, which defines the updating rule and governs the dynamics. Namely, if we decide to remain at a Markovian level, then the partition, or equivalently, the collective variables used to build the macro model must be compatible with the symmetries of the probability distribution ω.
This underlines the theoretical importance of homogeneous or complete mixing in the analysis of »voter-like« models in use in population genetics, evolutionary game theory and social dynamics. On the other hand, if a favored level of observation is not compatible with the symmetries in ω, a certain amount of memory is introduced by the transition from the micro level to such a macro description, and this is the fingerprint of emergence in ABMs. The resulting divergence from Markovianity can be quantified using information-theoretic measures, and the thesis presents a scenario in which these measures can be explicitly computed. Two simple models are used to illustrate these theoretical ideas: the voter model (VM) and an extension of it called the contrarian voter model (CVM). Using these examples, the thesis shows that Markov chain theory allows for a rather precise understanding of the model dynamics in the case of »simple« population structures where a tractable macro chain can be derived. Constraining the system by interaction networks with a strong local structure leads to the emergence of meta-stable states in the transient of the model. Constraints on the interaction behavior, such as bounded confidence or assortative mating, lead to the emergence of new absorbing states in the associated macro chain and are related to stable patterns of polarization (stable co-existence of different opinions or species). Constraints and heterogeneities in the microscopic system and complex social interactions are the basic characteristics of ABMs, and the Markov chain approach of linking the micro chain to a macro-level description (and likewise the failure of a Markovian link) highlights the crucial role played by those ingredients in the generation of complex macroscopic outcomes.
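The micro-to-macro construction described in this abstract can be made concrete for a tiny complete-graph voter model. The sketch below (an illustrative reconstruction, not code from the thesis) builds the full 2^n micro chain, lumps its states by the number of 1-opinions, and checks the strong lumpability condition explicitly:

```python
import itertools

def voter_micro_chain(n):
    """Micro chain of the voter model on the complete graph: the state
    space is all 2**n opinion configurations; in one step a uniformly
    chosen agent copies a uniformly chosen other agent."""
    states = list(itertools.product([0, 1], repeat=n))
    index = {s: i for i, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for s in states:
        for i in range(n):        # agent that updates
            for j in range(n):    # agent being imitated
                if i != j:
                    t = list(s)
                    t[i] = s[j]
                    P[index[s]][index[tuple(t)]] += 1.0 / (n * (n - 1))
    return states, P

def lump_by_magnetization(states, P):
    """Aggregate micro states by k = number of 1-opinions and verify
    strong lumpability: every state in a class must have the same total
    transition probability into each class."""
    n = len(states[0])
    classes = [[i for i, s in enumerate(states) if sum(s) == k]
               for k in range(n + 1)]
    Q = []
    for members in classes:
        rows = [[sum(P[m][c] for c in cls) for cls in classes]
                for m in members]
        assert all(abs(a - b) < 1e-12
                   for row in rows for a, b in zip(row, rows[0])), \
            "partition is not lumpable"
        Q.append(rows[0])
    return Q
```

For n = 3 the resulting macro chain is a birth-death chain on k ∈ {0, 1, 2, 3} with absorbing consensus states k = 0 and k = 3; from k = 1 the chain moves down, stays, or moves up with probability 1/3 each. This is exactly the kind of tractable macro description the thesis derives for homogeneous mixing, and the row-by-row assertion is where a symmetry-breaking constraint (e.g. a local interaction network) would make the lump fail.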

    Markov Chain Aggregation for Agent-Based Models

    Full text link