516 research outputs found

    Ubiquitous Computing and Distributed Agent-based Simulation

    Although ubiquitous computing systems are already claimed to exist in the real world, the further development of these systems still poses challenges to computer science that remain well beyond the state of the art. Two challenges stand out in particular: the complexity of next-generation ubiquitous computing systems, and their inherent scalability issues. This paper aims to establish that agent-based modelling provides a powerful tool for tackling these issues. As an example of a practical, readily available solution, the paper highlights the distributed agent-based simulation infrastructure PDES-MAS as particularly suited to the task. Using the PDES-MAS infrastructure, designers, developers, and builders of next-generation ubiquitous computing systems can, through an iterative agent-based simulation process, gain the required knowledge and information about these systems without having to proceed to deployment of the system itself.
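
    To make the iterative simulation process concrete, the sketch below shows a minimal agent-based simulation loop in Python. It is purely illustrative: the DeviceAgent class, the battery-offloading rule, and all parameters are assumptions invented for this example and do not reflect the PDES-MAS interface, which the abstract does not describe.

```python
# Minimal agent-based simulation loop (illustrative sketch only; this is
# not the PDES-MAS API, and the agent behaviour is invented).
import random

class DeviceAgent:
    """A hypothetical ubiquitous-computing device that drains its battery
    and occasionally offloads work to a better-charged neighbour."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.battery = 1.0

    def step(self, neighbours):
        self.battery -= random.uniform(0.01, 0.05)          # background drain
        if self.battery < 0.2 and neighbours:
            helper = max(neighbours, key=lambda a: a.battery)
            helper.battery -= 0.05                           # helper does the work
            self.battery += 0.05                             # requester saves energy

def run_simulation(num_agents=100, num_steps=50):
    agents = [DeviceAgent(i) for i in range(num_agents)]
    for _ in range(num_steps):
        for agent in agents:
            neighbours = random.sample(agents, k=3)          # crude neighbourhood
            agent.step(neighbours)
    return sum(a.battery for a in agents) / num_agents       # mean remaining charge

if __name__ == "__main__":
    print("mean battery after run:", run_simulation())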

    Motivated Health Risk Denial and Preventative Health Care Investments

    People deny health risks, invest too little in disease prevention, and are highly sensitive to the price of preventative health care, especially in developing countries. Moreover, private-sector R&D spending on developing-country diseases is almost non-existent. To explain these empirical observations, I propose a model of motivated belief formation, in which an agent's decision to engage in health risk denial balances the psychological benefits of reduced anxiety with the physical cost of underprevention. I use the model to study firms' price-setting behaviour and incentive to innovate. I also show that tax-funded prevention subsidies are welfare-enhancing.
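
    The denial-versus-prevention trade-off can be illustrated with a stylized numerical sketch. The functional forms, parameter values, and the utility function below are assumptions made for illustration only; they are not the paper's actual model.

```python
# Stylized illustration of the denial-versus-prevention trade-off; the
# functional forms and all numbers are invented, not the paper's model.

def utility(denial, prevention, *, risk=0.3, anxiety_cost=2.0,
            harm=10.0, price=1.0):
    """Utility of an agent facing a health risk of probability `risk`.

    Denial removes the anxiety of facing the risk, but the agent then acts
    as if the risk were zero and buys no prevention.
    """
    perceived_risk = 0.0 if denial else risk
    anxiety = anxiety_cost * perceived_risk
    effective_prevention = 0.0 if denial else prevention
    expected_harm = harm * risk * (1.0 - effective_prevention)
    return -(anxiety + expected_harm + price * effective_prevention)

# Compare denial with facing the risk and buying full prevention.
print("deny the risk:         ", utility(denial=True, prevention=1.0))
print("face risk and prevent: ", utility(denial=False, prevention=1.0))
# A higher assumed `price` makes denial relatively more attractive,
# mirroring the price sensitivity of preventative care in the abstract.
print("face risk, costly care:", utility(denial=False, prevention=1.0, price=4.0))
```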

    Games between responsive behavioural rules

    I study recurrent strategic interaction between two responsive behavioural rules in generic bi-matrix weakly acyclic games. The two individuals that play the game in a particular period choose their strategy by responding to a sample of strategies used by the co-players in the recent history of play. The response of a player is determined by his behavioural rule. I show that the game reaches a convention whenever the behavioural rule of each player is 'weakly responsive' to the manner in which strategies were chosen in the past by the co-players, and stays locked into the convention if the behavioural rules are 'mildly responsive'. Furthermore, amongst 'mildly responsive' behavioural rules, individuals described by the behavioural rule of 'extreme optimism' perform the best in the sense that their most preferred convention is always in the stochastically stable set; under an additional mild restriction that differentiates the behavioural rule of the other player from extreme optimism, the convention referred to above is the unique stochastically stable state.
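
    The sample-and-respond dynamic described here can be sketched with a simple simulation. The 2x2 coordination game, the sample size, and the plain best-response rule below are assumptions chosen for illustration; the paper's 'weakly responsive' and 'mildly responsive' rules are considerably more general.

```python
# Sample-based response dynamics in a 2x2 coordination game (illustrative
# assumptions throughout; the paper's behavioural rules are more general).
import random

# (row payoff, column payoff) for each strategy profile.
PAYOFFS = {("A", "A"): (2, 2), ("B", "B"): (1, 1),
           ("A", "B"): (0, 0), ("B", "A"): (0, 0)}

def best_response(sample, player):
    """Best reply to the empirical distribution of the co-player's sample."""
    def total_payoff(own):
        if player == "row":
            return sum(PAYOFFS[(own, s)][0] for s in sample)
        return sum(PAYOFFS[(s, own)][1] for s in sample)
    return max(("A", "B"), key=total_payoff)

history = [(random.choice("AB"), random.choice("AB")) for _ in range(5)]
for _ in range(50):
    recent = history[-8:]
    row_sample = random.sample([c for _, c in recent], k=4)  # column's recent play
    col_sample = random.sample([r for r, _ in recent], k=4)  # row's recent play
    history.append((best_response(row_sample, "row"),
                    best_response(col_sample, "col")))

# Play typically locks into a single convention, (A, A) or (B, B).
print("last five periods:", history[-5:])
```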

    Revenue Maximisation: An Examination of the Influences of Heuristics and Biases on the Yield Management Decision Process in Dublin Hotels

    Yield management in hotels has been described as a method of profitably managing fixed room capacity. A critical element of yield management is the decision strategy employed, as this determines the degree to which optimum financial solutions are generated. Recent research has indicated that the use of technology-assisted decision-optimising models, specifically the management science model of decision making, would greatly improve decision optimisation by minimising the need to employ guesswork in achieving financial goals. However, despite this assurance, yield management remains couched in uncertainty, as it is inextricably associated with forecasting future demand for a perishable product in an increasingly volatile and competitive environment. The consequential pressures on decision-makers have afforded the opportunity for human idiosyncrasies to play a significant role in the decision-making process. The primary objective of this paper is, therefore, to gain an insight into how decisions are constructed in the yield management environment of Dublin hotels. The study reviews current literature on management science as a decision-making option. It also assesses heuristics and biases associated with decision-making, and their influence on rational decision protocol. The methodology employed phenomenological and hermeneutical techniques, with discourse analysis, in accessing and analysing data. The research findings reveal that within Dublin hotels, the management science model of decision-making has been sidelined in favour of decision strategies wherein “human intervention” plays a more significant role. The findings also suggest that this “human intervention” has subconsciously facilitated an environment in which decision-makers fall into psychological traps, with the potential to make systematically biased errors, through satisfaction of ego needs and rationalising the irrational.
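
    As a concrete example of the kind of technology-assisted, decision-optimising model the study contrasts with intuition-driven choices, the sketch below applies Littlewood's classic two-fare rule, a textbook yield-management calculation; the fares, capacity, and demand distribution are invented and are not drawn from the study.

```python
# Littlewood's two-fare rule, a textbook yield-management calculation, shown
# only as an example of a decision-optimising model; all figures are invented.
from statistics import NormalDist

full_fare = 200.0      # assumed late-booking (full) rate per room
discount_fare = 120.0  # assumed early-booking (discount) rate per room
capacity = 100         # assumed number of rooms

# Assume full-fare demand is approximately normal with this mean and spread.
full_fare_demand = NormalDist(mu=35, sigma=12)

# Littlewood's rule: protect y rooms for the full fare, where
# P(full-fare demand > y) = discount_fare / full_fare.
critical_ratio = discount_fare / full_fare
protection_level = full_fare_demand.inv_cdf(1.0 - critical_ratio)
booking_limit = capacity - protection_level

print(f"protect {protection_level:.0f} rooms for full-fare guests; "
      f"accept up to {booking_limit:.0f} discount bookings")
```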

    Two Dogmas of Biology

    The problem with reductionism in biology is not the reduction, but the implicit attitude of determinism that usually accompanies it. Methodological reductionism is supported by deterministic beliefs, but making such a connection is problematic when it is based on an idea of determinism as fixed predictability. Conflating determinism with predictability gives rise to inaccurate models that overlook the dynamic complexity of our world, as well as ignore our epistemic limitations when we try to model it. Furthermore, the assumption of a strictly deterministic framework unnecessarily hinders biology. By removing the dogma of determinism, biological methods, including reductive methods, can be expanded to include stochastic models and probabilistic interpretations. Thus, the dogma of reductionism can be saved once its ties with determinism are severed. In this paper, I analyze two problems that have faced molecular biology for the last 50 years: protein folding and cancer. Both cases demonstrate the long influence of reductionism and determinism on molecular biology, as well as how abandoning determinism has opened the door to more probabilistic and unconstrained reductive methods in biology.
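
    The contrast between deterministic and probabilistic modelling can be illustrated with a toy two-state folding example. Everything in the sketch, including the per-step folding probability, is an invented assumption used only to show how a stochastic description exposes variability that a deterministic rate rule averages away.

```python
# Toy contrast between a deterministic and a stochastic description of a
# two-state folding process; the rate and time step are invented numbers.
import random

FOLD_PROBABILITY_PER_STEP = 0.1
STEPS = 30

def deterministic_fraction_folded(steps):
    # Rate-equation view: the same fixed fraction folds at every step.
    unfolded = 1.0
    for _ in range(steps):
        unfolded *= 1.0 - FOLD_PROBABILITY_PER_STEP
    return 1.0 - unfolded

def stochastic_fraction_folded(steps, molecules=1000):
    # Stochastic view: each molecule folds at a random step (or not at all),
    # so repeated runs of the same model give different outcomes.
    folded = sum(
        any(random.random() < FOLD_PROBABILITY_PER_STEP for _ in range(steps))
        for _ in range(molecules)
    )
    return folded / molecules

print("deterministic prediction:   ", deterministic_fraction_folded(STEPS))
print("one stochastic realisation: ", stochastic_fraction_folded(STEPS))
```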

    Modelling innovation and the macroeconomics of low-carbon transitions: theory, perspectives and practical use

    This is the author accepted manuscript. The final version is available from Taylor & Francis (Routledge) via the DOI in this record. Energy and climate policies may have significant economy-wide impacts, which are regularly assessed based on quantitative energy-environment-economy models. These tend to vary in their conclusions on the scale and direction of the likely macroeconomic impacts of a low-carbon transition. This paper traces the characteristic discrepancies in models’ outcomes to their origins in different macro-economic theories, most importantly their treatment of technological innovation and finance. We comprehensively analyse the relevant branches of macro-innovation theory and group them into two classes: ‘Equilibrium’ and ‘Non-equilibrium’. While both approaches are rigorous and self-consistent, they frequently yield opposite conclusions for the economic impacts of low-carbon policies. We show that model outcomes are mainly determined by their representations of monetary and finance dimensions, and their interactions with investment, innovation and technological change. Improving these in all modelling approaches is crucial for strengthening the evidence base for policy making and gaining a more consistent picture of the macroeconomic impacts of achieving emissions reductions objectives. The paper contributes towards the ongoing effort of enhancing the transparency and understanding of sophisticated model mechanisms applied to energy and climate policy analysis. It helps tackle the overall “black box” critique, much-cited in policy circles and elsewhere.
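
    A stylized numerical contrast can make the central claim concrete: how a model represents finance largely determines the sign of the estimated macroeconomic impact. The sketch below compares a fixed-savings ('crowding out') assumption with an endogenous-credit assumption for the same low-carbon investment shock; all figures are invented and neither branch corresponds to any specific model analysed in the paper.

```python
# Stylized contrast between two finance assumptions for the same low-carbon
# investment shock; all figures are invented and match no specific model.

baseline_investment = 100.0  # economy-wide investment before the policy
green_shock = 20.0           # extra low-carbon investment induced by the policy
multiplier = 1.5             # assumed simple expenditure multiplier

def equilibrium_view(baseline, shock):
    # Fixed pool of savings: green investment crowds out other investment
    # one-for-one, so total investment and the demand impulse are unchanged.
    return {"total_investment": baseline, "demand_impulse": 0.0}

def non_equilibrium_view(baseline, shock, multiplier):
    # Endogenous credit: banks finance the extra investment, so total
    # investment rises and a positive demand impulse follows.
    return {"total_investment": baseline + shock,
            "demand_impulse": multiplier * shock}

print("equilibrium (fixed savings):        ",
      equilibrium_view(baseline_investment, green_shock))
print("non-equilibrium (endogenous credit):",
      non_equilibrium_view(baseline_investment, green_shock, multiplier))
```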

    Time to rebuild and reaggregate fluctuations: Minsky, complexity and agent-based modelling


    A new paradigm of knowledge management: Crowdsourcing as emergent research and development

    Drawing from knowledge management theory, this paper argues that the knowledge aggregation problem poses a fundamental constraint to knowledge creation and innovation, and offers a potential solution to this problem. Specific consequences of innovation failure include the failure of research and development to deliver new medicines to address threats such as widespread and increasing antibiotic resistance, the rise of airborne multidrug-resistant or totally drug-resistant tuberculosis, as well as a lack of new drugs to deal with emerging threats such as Ebola. Persistent constraints to knowledge creation exist in the form of market failure, or the failure of profit-seeking models of innovation to internalise the positive externalities associated with innovations, as well as academic failure, or the failure of academic research to provide much-needed innovations to address societal problems. However, little theory exists as to how to transcend these constraints to knowledge aggregation. This paper presents a probabilistic theoretical framework of innovation, suggesting that the ‘wisdom of the crowd’, or emergent properties of problem-solving, may emerge as a function of scale when crowdsourcing principles are applied to research and development. It is argued that the consequences of innovation failure are already upon us, and that a radical new approach to knowledge management and innovation is needed. Keywords: probabilistic innovation, knowledge management, innovation, crowdsourcing, crowdsourced R&D
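
    The 'function of scale' argument can be made concrete with a simple probability calculation: if each independent solver has only a small chance of cracking a problem, the chance that at least one of N solvers succeeds grows quickly with N. The per-solver success probability used below is an invented figure chosen only to illustrate the shape of the relationship.

```python
# Chance that at least one of n independent solvers succeeds, given a small
# per-solver success probability p: 1 - (1 - p)**n. The value of p is an
# invented illustration, not a parameter from the paper.
p = 0.001  # assumed chance that any single contributor solves the problem

for n in (100, 1_000, 10_000, 100_000):
    p_at_least_one = 1.0 - (1.0 - p) ** n
    print(f"{n:>7} contributors -> P(at least one solution) = {p_at_least_one:.3f}")
```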