
    Principles and Concepts of Agent-Based Modelling for Developing Geospatial Simulations

    The aim of this paper is to outline fundamental concepts and principles of the Agent-Based Modelling (ABM) paradigm, with particular reference to the development of geospatial simulations. The paper begins with a brief definition of modelling, followed by a classification of model types and a comment regarding a shift (in certain circumstances) towards modelling systems at the individual level. Automata approaches (e.g. Cellular Automata, CA, and ABM) have been particularly popular, with ABM moving to the fore. A definition of agents and agent-based models is given, identifying their advantages and disadvantages, especially in relation to geospatial modelling. The potential use of agent-based models is discussed, and how-to instructions for developing an agent-based model are provided. Types of simulation / modelling systems available for ABM are defined, supplemented with criteria to consider before choosing a particular system for a modelling endeavour. Information pertaining to a selection of simulation / modelling systems (Swarm, MASON, Repast, StarLogo, NetLogo, OBEUS, AgentSheets and AnyLogic) is provided, categorised by licensing policy (open source, shareware / freeware and proprietary systems). The evaluation (i.e. verification, calibration, validation and analysis) of agent-based models and their output is examined, and noteworthy applications are discussed. Geographical Information Systems (GIS) are a particularly useful medium for representing model input and output of a geospatial nature. However, GIS are not well suited to dynamic modelling (e.g. ABM); in particular, problems of representing time and change within GIS are highlighted. Consequently, this paper explores the opportunity of linking (through coupling or integration / embedding) a GIS with a purpose-built simulation / modelling system better suited to supporting the requirements of ABM. The paper concludes with a synthesis of the preceding discussion.
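    The individual-level automata approach the abstract describes can be illustrated with a minimal sketch (illustrative only; the random-walk rule, grid size and parameter names are assumptions for exposition, not taken from any model in the paper):

```python
import random

GRID = 20  # width/height of a toroidal grid (assumed for illustration)

class Agent:
    """A minimal mobile agent: holds a position and moves randomly each step."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self):
        # Move one cell in a random cardinal direction, wrapping at grid edges.
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        self.x = (self.x + dx) % GRID
        self.y = (self.y + dy) % GRID

def run(n_agents=10, n_steps=50, seed=1):
    random.seed(seed)
    agents = [Agent(random.randrange(GRID), random.randrange(GRID))
              for _ in range(n_agents)]
    for _ in range(n_steps):  # synchronous update loop: every agent acts each tick
        for a in agents:
            a.step()
    return agents

agents = run()
```

    Toolkits such as NetLogo or Repast supply exactly this loop, plus scheduling, visualisation and data collection, which is why the paper's choice criteria matter.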

    Tools of the Trade: A Survey of Various Agent Based Modeling Platforms

    Agent Based Modeling (ABM) toolkits are as diverse as the community of people who use them. With so many toolkits available, the choice of which one is best suited to a project is often left to word of mouth, past experience with particular toolkits and toolkit publicity. This is especially troublesome for projects that require specialization. Rather than settling for the most publicized toolkits, which are designed for general projects, readers of this paper will be able to choose an existing toolkit that may be built especially for their particular domain and specialized needs. In this paper, we examine the entire continuum of agent based toolkits. We characterize each based on five important characteristics users consider when choosing a toolkit, and then we categorize the characteristics into user-friendly taxonomies that aid rapid indexing and easy reference.
    Keywords: Agent Based Modeling, Individual Based Model, Multi Agent Systems

    From Artifacts to Aggregations: Modeling Scientific Life Cycles on the Semantic Web

    In the process of scientific research, many information objects are generated, all of which may remain valuable indefinitely. However, artifacts such as instrument data and associated calibration information may have little value in isolation; their meaning is derived from their relationships to each other. Individual artifacts are best represented as components of a life cycle that is specific to a scientific research domain or project. Current cataloging practices do not describe objects at a sufficient level of granularity, nor do they offer the globally persistent identifiers necessary to discover and manage scholarly products with World Wide Web standards. The Open Archives Initiative's Object Reuse and Exchange data model (OAI-ORE) meets these requirements. We demonstrate a conceptual implementation of OAI-ORE to represent the scientific life cycles of embedded networked sensor applications in seismology and environmental sciences. By establishing relationships between publications, data, and contextual research information, we illustrate how to obtain a richer and more realistic view of scientific practices. That view can facilitate new forms of scientific research and learning. Our analysis is framed by studies of scientific practices in a large, multi-disciplinary, multi-university science and engineering research center, the Center for Embedded Networked Sensing (CENS). Comment: 28 pages. To appear in the Journal of the American Society for Information Science and Technology (JASIST).
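    The OAI-ORE idea of an aggregation relating life-cycle artifacts can be sketched as a tiny triple store (stdlib only; the `example.org` resource URIs are invented placeholders, while `ore:Aggregation` and `ore:aggregates` are terms from the ORE vocabulary):

```python
# Minimal triple-store sketch of an OAI-ORE aggregation (stdlib only).
ORE = "http://www.openarchives.org/ore/terms/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

# Hypothetical identifiers for a sensing study's life-cycle artifacts.
agg = "http://example.org/study/aggregation"
artifacts = {
    "http://example.org/study/instrument-data",
    "http://example.org/study/calibration",
    "http://example.org/study/publication",
}

# The aggregation is typed, and ore:aggregates links it to each artifact,
# so the artifacts' meaning is carried by relationships, not by each item alone.
triples = {(agg, RDF_TYPE, ORE + "Aggregation")}
triples |= {(agg, ORE + "aggregates", a) for a in artifacts}

# Query: which artifacts does the aggregation aggregate?
aggregated = {o for (s, p, o) in triples if s == agg and p == ORE + "aggregates"}
```

    A real implementation would serialise such triples as an ORE resource map (e.g. with an RDF library) so the aggregation gets the persistent, Web-resolvable identity the abstract calls for.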

    Endogenous space in the Net era

    Libre Software communities are among the most interesting and advanced socio-economic laboratories on the Net. In terms of directions for Regional Science research, this paper addresses a simple question: “Is the socio-economics of digital nets out of scope for Regional Science, or might the latter expand to a cybergeography of digitally enhanced territories?” As with most simple questions, the answers are neither obvious nor easy. The authors start drafting one in a positive sense, focussing upon a fil rouge running across the paper: endogenous spaces woven by socio-economic processes. The drafted answer takes the form of an Evolutionary Location Theory formulation, together with two computational modelling views.
    Keywords: Complex networks, Computational modelling, Economics of Internet, Endogenous spaces, Evolutionary location theory, Free or Libre Software, Path dependence, Positionality.

    Research Agenda for Studying Open Source II: View Through the Lens of Referent Discipline Theories

    In a companion paper [Niederman et al., 2006] we presented a multi-level research agenda for studying information systems using open source software. This paper examines open source in terms of MIS and referent discipline theories that form the base needed for rigorous study of the research agenda.

    “It Takes All Kinds”: A Simulation Modeling Perspective on Motivation and Coordination in Libre Software Development Projects

    This paper presents a stochastic simulation model to study implications of the mechanisms by which individual software developers’ efforts are allocated within large and complex open source software projects. It illuminates the role of different forms of “motivations-at-the-margin” in the micro-level resource allocation process of distributed and decentralized multi-agent engineering undertakings of this kind. We parameterize the model by isolating the parameter ranges in which it generates structures of code that share certain empirical regularities found to characterize actual projects. We find that, in this range, a variety of different motivations are represented within the community of developers. There is a correspondence between the indicated mixture of motivations and the distribution of avowed motivations for engaging in FLOSS development, found in the survey responses of developers who were participants in large projects.
    Keywords: free and open source software (FLOSS), libre software engineering, maintainability, reliability, functional diversity, modularity, developers’ motivations, user-innovation, peer-esteem, reputational reward systems, agent-based modeling, stochastic simulation, stigmergy, morphogenesis.
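    The marginal-allocation mechanism the abstract describes (developers with mixed motivations choosing where to contribute) can be caricatured in a few lines; the two motivation types, their weights and the update rule below are invented for illustration and are not the paper's parameterization:

```python
import random

def simulate(n_devs=30, n_modules=5, n_steps=200, seed=0):
    """Each step, every developer contributes to one module, chosen by a
    personal mix of reputation-seeking (join the largest module) and
    use-value (work on a personally favoured module)."""
    rng = random.Random(seed)
    sizes = [1] * n_modules  # contributions accumulated per module
    # Each developer: (weight on reputation, index of favoured module).
    devs = [(rng.random(), rng.randrange(n_modules)) for _ in range(n_devs)]
    for _ in range(n_steps):
        for rep_w, fav in devs:
            if rng.random() < rep_w:
                # Reputation-driven choice: the currently largest module.
                m = max(range(n_modules), key=lambda i: sizes[i])
            else:
                # Use-value-driven choice: the personally favoured module.
                m = fav
            sizes[m] += 1
    return sizes

sizes = simulate()
```

    Even this toy version produces a skewed distribution of module sizes, because reputation-driven contributions preferentially reinforce whichever module is already largest; the paper's point is to ask which motivation mixtures reproduce the regularities of real code bases.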

    Intellectual property rights in a knowledge-based economy

    Intellectual property rights (IPR) have been created as economic mechanisms to facilitate ongoing innovation by granting inventors a temporary monopoly in return for disclosure of technical know-how. Since the beginning of the 1980s, IPR have come under scrutiny as new technological paradigms appeared with the emergence of knowledge-based industries. Knowledge-based products are intangible, non-excludable and non-rivalrous goods. Consequently, it is difficult for their creators to control their dissemination and use. In particular, many information goods are based on network externalities and on the creation of market standards. At the same time, information technologies are generic in the sense of being useful in many places in the economy. Hence, policy makers often describe current IPR regimes in the context of new technologies as both over- and under-protective: over-protective in the sense that they prevent the dissemination of information which has a very high social value, and under-protective in the sense that they do not provide strong control over the appropriation of rents from invention and thus may not provide strong incentives to innovate. During the 1980s, attempts to assess the role of IPR in the process of technological learning found that even though firms in high-tech sectors do use patents as part of their strategy for intellectual property protection, the reliance of these sectors on patents as an information source for innovation is lower than in traditional industries. Intellectual property rights are based mainly on patents for technical inventions and on copyrights for artistic works. Patents are granted only if inventions display minimal levels of utility, novelty and non-obviousness of technical know-how. By contrast, copyrights protect only final works and their derivatives, but guarantee protection for longer periods, according to the Berne Convention.
    Licensing is a legal instrument that allows the use of patented technology by other firms, in return for royalty fees paid to the inventor. Licensing can be contracted on an exclusive or non-exclusive basis, but in most countries patented knowledge can be exclusively held by its inventors, as legal provisions for compulsory licensing of technologies do not exist. The fair use doctrine aims to prevent the formation of perfect monopolies over technological fields and copyrighted artefacts as a result of IPR application. Hence, the use of patented and copyrighted works is permissible in academic research, education and the development of technologies that are complementary to core technologies. Trade secrecy is meant to prevent inadvertent technology transfer to rival firms and is based on contracts between companies and employees. However, as trade secrets prohibit transfer of knowledge within industries, regulators have attempted to foster disclosure of technical know-how by institutional means of patents, copyrights and sui-generis laws. And indeed, following the provisions formed by IPR regulation, firms have shifted from methods of trade secrecy towards patenting strategies, both to achieve improved protection of intellectual property and to acquire competitive advantages in the market through monopolization of technological advances.
    Keywords: economics of technology