
    Principles and Concepts of Agent-Based Modelling for Developing Geospatial Simulations

    The aim of this paper is to outline fundamental concepts and principles of the Agent-Based Modelling (ABM) paradigm, with particular reference to the development of geospatial simulations. The paper begins with a brief definition of modelling, followed by a classification of model types and a comment regarding a shift (in certain circumstances) towards modelling systems at the individual level. Automata approaches (e.g. Cellular Automata, CA, and ABM) have been especially popular, with ABM moving to the fore. A definition of agents and agent-based models is given, identifying their advantages and disadvantages, especially in relation to geospatial modelling. The potential use of agent-based models is discussed, and how-to instructions for developing an agent-based model are provided. Types of simulation / modelling systems available for ABM are defined, supplemented with criteria to consider before choosing a particular system for a modelling endeavour. Information pertaining to a selection of simulation / modelling systems (Swarm, MASON, Repast, StarLogo, NetLogo, OBEUS, AgentSheets and AnyLogic) is provided, categorised by their licensing policy (open source, shareware / freeware and proprietary systems). The evaluation (i.e. verification, calibration, validation and analysis) of agent-based models and their output is examined, and noteworthy applications are discussed. Geographical Information Systems (GIS) are a particularly useful medium for representing model input and output of a geospatial nature. However, GIS are not well suited to dynamic modelling (e.g. ABM); in particular, problems of representing time and change within GIS are highlighted. Consequently, this paper explores the opportunity of linking (through coupling or integration / embedding) a GIS with a simulation / modelling system purposely built, and therefore better suited, to supporting the requirements of ABM. The paper concludes with a synthesis of the preceding discussion.
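    The paper's how-to guidance maps naturally onto a small amount of code. The following is a minimal sketch, in Python, of the individual-level approach described above: agents with a simple behavioural rule, updated over discrete time steps on a grid. The random-walk rule, class names and parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal individual-level ABM sketch: agents on a grid, a step rule,
# and a scheduler loop. All behaviour here is illustrative only.
import random

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, width, height):
        # Each agent acts with some simple purpose; here, a random walk
        # clamped to the bounds of the grid.
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        self.x = max(0, min(width - 1, self.x + dx))
        self.y = max(0, min(height - 1, self.y + dy))

def run(n_agents=100, width=50, height=50, steps=200):
    agents = [Agent(random.randrange(width), random.randrange(height))
              for _ in range(n_agents)]
    for _ in range(steps):
        for agent in agents:  # one synchronous pass per tick
            agent.step(width, height)
    return agents

if __name__ == "__main__":
    final = run()
    print(len(final), "agents simulated")
```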

    Modelling shared space users via rule-based social force model

    The promotion of space sharing in order to raise the quality of community living and the safety of street surroundings is an increasingly accepted feature of modern urban design. In this context, the development of a shared space simulation tool is essential in helping determine whether particular shared space schemes are suitable alternatives to traditional street layouts. Such a tool would enable urban designers to visualise pedestrian and vehicle trajectories, extract flow-density relations in a new shared space design, and arrive at optimal design features before implementation. This paper presents a three-layered microscopic mathematical model which is capable of representing the behaviour of pedestrians and vehicles in shared space layouts and is implemented in a traffic simulation tool. The top layer calculates route maps based on static obstacles in the environment. It plans the shortest path towards agents' respective destinations by generating one or more intermediate targets. In the second layer, the Social Force Model (SFM) is modified and extended for mixed traffic to produce feasible trajectories. Since vehicle movements are not as flexible as pedestrian movements, velocity angle constraints are included for vehicles. The conflicts described in the third layer are resolved by rule-based constraints for shared space users. An optimisation algorithm is applied to determine the interaction parameters of the force-based model for shared space users using empirical data. This new three-layer microscopic model can be used to simulate shared space environments and assess, for example, new street designs.
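    As a rough illustration of the force-based second layer, the sketch below implements a Helbing-style social force update: a driving term relaxing an agent toward its current intermediate target, plus exponential repulsion from neighbours. The parameter values (A, B, tau, v0) and the Euler integration step are illustrative assumptions; the paper fits its interaction parameters to empirical data and adds velocity angle constraints for vehicles, which are omitted here.

```python
# Sketch of a social-force update for one agent (Helbing-style form).
# Parameters are illustrative, not the paper's calibrated values.
import numpy as np

def social_force(pos, vel, target, neighbours, radii,
                 v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.3):
    # Driving force: relax toward desired speed v0 along the unit
    # vector pointing at the current intermediate target.
    e = (target - pos) / np.linalg.norm(target - pos)
    force = (v0 * e - vel) / tau
    # Repulsive forces from other shared space users.
    for other_pos, other_r in zip(neighbours, radii):
        diff = pos - other_pos
        d = np.linalg.norm(diff)
        if d > 0:
            force += A * np.exp((radius + other_r - d) / B) * (diff / d)
    return force

# One explicit Euler step for a single pedestrian:
pos = np.array([0.0, 0.0]); vel = np.array([0.0, 0.0])
target = np.array([10.0, 0.0])
neighbours = [np.array([1.0, 0.2])]; radii = [0.3]
dt = 0.1
vel = vel + dt * social_force(pos, vel, target, neighbours, radii)
pos = pos + dt * vel
print(pos, vel)
```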

    Interacting with the biomolecular solvent accessible surface via a haptic feedback device

    Background: From the 1950s, computer-based renderings of molecules have been produced to aid researchers in their understanding of biomolecular structure and function. A major consideration for any molecular graphics software is the ability to visualise the three-dimensional structure of the molecule. Traditionally, this was accomplished via stereoscopic pairs of images and later realised with three-dimensional display technologies. Using a haptic feedback device in combination with molecular graphics has the potential to enhance three-dimensional visualisation. Although haptic feedback devices have been used to feel the interaction forces during molecular docking, they have not been used explicitly as an aid to visualisation. Results: A haptic rendering application for biomolecular visualisation has been developed that allows the user to gain three-dimensional awareness of the shape of a biomolecule. By using a water molecule as the probe, modelled as an oxygen atom having hard-sphere interactions with the biomolecule, the process of exploration has the further benefit of being able to determine regions on the molecular surface that are accessible to the solvent. This gives insight into how awkward it is for a water molecule to gain access to or escape from channels and cavities, indicating possible entropic bottlenecks. In the case of liver alcohol dehydrogenase bound to the inhibitor SAD, it was found that there is a channel just wide enough for a single water molecule to pass through. Placing the probe coincident with crystallographic water molecules suggests that they are sometimes located within small pockets that provide a sterically stable environment irrespective of hydrogen bonding considerations. Conclusion: By using the software, named HaptiMol ISAS (available from http://www.haptimol.co.uk), one can explore the accessible surface of biomolecules using a three-dimensional input device to gain insights into the shape and water accessibility of the biomolecular surface that cannot be so easily attained using conventional molecular graphics software.
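    The hard-sphere probe test at the heart of this exploration is simple to state in code. Below is a minimal sketch, assuming a spherical water probe of roughly 1.4 Å and illustrative van der Waals radii: it checks whether the probe can occupy a point without clashing with any atom, which is the criterion for solvent accessibility described above.

```python
# Hard-sphere probe test: a water probe fits at a point only if it
# overlaps no atom of the biomolecule. Atom data are a toy example.
import numpy as np

PROBE_RADIUS = 1.4  # approximate water-oxygen probe radius, in angstroms

def probe_fits(probe_pos, atom_centres, atom_radii):
    """True if a hard-sphere probe at probe_pos clashes with no atom."""
    d = np.linalg.norm(atom_centres - probe_pos, axis=1)
    return bool(np.all(d >= atom_radii + PROBE_RADIUS))

# Toy molecule: three carbons in a row (van der Waals radius ~1.7 A).
atoms = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [6.0, 0.0, 0.0]])
radii = np.full(3, 1.7)

print(probe_fits(np.array([1.5, 0.0, 0.0]), atoms, radii))  # clash -> False
print(probe_fits(np.array([1.5, 5.0, 0.0]), atoms, radii))  # clear -> True
```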

    Augmented reality meeting table: a novel multi-user interface for architectural design

    Immersive virtual environments have received widespread attention as possible replacements for the media and systems that designers traditionally use and, more generally, as support for collaborative work. Relatively little attention has been given to date, however, to the problem of how to merge immersive virtual environments into real-world work settings, and so to add to the media at the disposal of the designer and the design team rather than to replace them. In this paper we report on a research project in which optical see-through augmented reality displays have been developed together with prototype decision support software for architectural and urban design. We suggest that a critical characteristic of multi-user augmented reality is its ability to generate visualisations from a first-person perspective in which the scale of rendition of the design model follows many of the conventions that designers are used to. Different scales of model appear to allow designers to focus on different aspects of the design under consideration. Augmenting the scene with simulations of pedestrian movement appears to assist both in scale recognition and in moving from a first-person to a third-person understanding of the design. This research project is funded by the European Commission IST program (IST-2000-28559).

    Developing serious games for cultural heritage: a state-of-the-art review

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

    From buildings to cities: techniques for the multi-scale analysis of urban form and function

    The built environment is a significant factor in many urban processes, yet direct measures of built form are seldom used in geographical studies. Representation and analysis of urban form and function could provide new insights and improve the evidence base for research. So far progress has been slow due to limited data availability, computational demands, and a lack of methods to integrate built environment data with aggregate geographical analysis. Spatial data and computational improvements are overcoming some of these problems, but there remains a need for techniques to process and aggregate urban form data. Here we develop a Built Environment Model of urban function and dwelling type classifications for Greater London, based on detailed topographic and address-based data (sourced from Ordnance Survey MasterMap). The multi-scale approach allows the Built Environment Model to be viewed at fine scales for local planning contexts, and at city-wide scales for aggregate geographical analysis, allowing an improved understanding of urban processes. This flexibility is illustrated in two examples, urban function and residential type analysis, where both local-scale urban clustering and city-wide trends in density and agglomeration are shown. While we demonstrate the multi-scale Built Environment Model to be a viable approach, a number of accuracy issues are identified, including the limitations of 2D data, inaccuracies in commercial function data and problems with temporal attribution. These limitations currently restrict the more advanced applications of the Built Environment Model.
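    As a rough sketch of the aggregation step such a model requires, the code below bins building-level function labels into coarser grid cells and reports a dominant function and dwelling count per cell. The column names, cell size and toy data are assumptions made for illustration; the paper's model is built from Ordnance Survey MasterMap topographic and address data.

```python
# Sketch of multi-scale aggregation: building records binned to grid
# cells, each cell summarised by dominant function and dwelling count.
import pandas as pd

buildings = pd.DataFrame({
    "x": [12.0, 15.0, 180.0, 190.0, 195.0],
    "y": [8.0, 22.0, 110.0, 115.0, 130.0],
    "function": ["residential", "residential", "retail", "office", "retail"],
    "dwellings": [4, 6, 0, 0, 0],
})

CELL = 100.0  # aggregation cell size in metres (illustrative)
buildings["cell_x"] = (buildings["x"] // CELL).astype(int)
buildings["cell_y"] = (buildings["y"] // CELL).astype(int)

cells = buildings.groupby(["cell_x", "cell_y"]).agg(
    dominant_function=("function", lambda s: s.mode().iloc[0]),
    dwelling_count=("dwellings", "sum"),
    building_count=("function", "size"),
)
print(cells)
```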

    Serious Games in Cultural Heritage

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to the state-of-the-art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem, to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, for example, more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings a lot of expertise on ontologies together, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
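    To make the kind of semantically-informed retrieval discussed above concrete, here is a small sketch using the Python rdflib library to query a toy RDF store with SPARQL, following a sub-topic relation the way an ontology-aware service might. The vocabulary and data are invented for illustration and do not reproduce AKT's actual services or ontologies.

```python
# Sketch of ontology-mediated retrieval: SPARQL over a tiny RDF graph,
# using a property path to follow sub-topic links transitively.
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/akt#> .
ex:Paper1 ex:author ex:Alice ; ex:topic ex:KnowledgeManagement .
ex:Paper2 ex:author ex:Bob ;   ex:topic ex:OntologyMapping .
ex:OntologyMapping ex:subTopicOf ex:KnowledgeManagement .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# Find papers on knowledge management, directly or via a sub-topic:
query = """
PREFIX ex: <http://example.org/akt#>
SELECT ?paper ?author WHERE {
  ?paper ex:author ?author ; ex:topic ?t .
  ?t ex:subTopicOf* ex:KnowledgeManagement .
}
"""
for row in g.query(query):
    print(row.paper, row.author)
```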

    Key challenges in agent-based modelling for geo-spatial simulation

    Agent-based modelling (ABM) is fast becoming the dominant paradigm in social simulation, due primarily to a worldview that suggests that complex systems emerge from the bottom up, are highly decentralised, and are composed of a multitude of heterogeneous objects called agents. These agents act with some purpose, and their interaction, usually through time and space, generates emergent order, often at higher levels than those at which such agents operate. ABM, however, raises as many challenges as it seeks to resolve. It is the purpose of this paper to catalogue these challenges and to illustrate them using three somewhat different agent-based models applied to city systems. The seven challenges we pose involve: the purpose for which the model is built, the extent to which the model is rooted in independent theory, the extent to which the model can be replicated, the ways the model might be verified, calibrated and validated, the way model dynamics are represented in terms of agent interactions, the extent to which the model is operational, and the way the model can be communicated and shared with others. Once catalogued, we then illustrate these challenges with a pedestrian model for emergency evacuation in central London, a hypothetical model of residential segregation tuned to London data which elaborates the standard Schelling (1971) model, and an agent-based residential location model built according to spatial interaction principles, calibrated to trip data for Greater London. The ambiguities posed by this new style of modelling are drawn out as conclusions.
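    Since the second case study elaborates the standard Schelling (1971) model, a minimal sketch of that baseline is given below: two agent types on a grid, each relocating to an empty cell when fewer than a tolerance fraction of its neighbours share its type. Grid size, tolerance and vacancy rate are illustrative choices, not the London-tuned values used in the paper.

```python
# Minimal Schelling (1971) segregation model on a toroidal grid.
# All parameter values are illustrative.
import random

SIZE, TOLERANCE, EMPTY = 30, 0.4, 0.1

def neighbours(grid, x, y):
    """Occupants of the eight cells around (x, y), wrapping at edges."""
    cells = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
                if grid[nx][ny] is not None:
                    cells.append(grid[nx][ny])
    return cells

def unhappy(grid, x, y):
    """True if too few neighbours share this agent's type."""
    me = grid[x][y]
    nbrs = neighbours(grid, x, y)
    return bool(nbrs) and sum(n == me for n in nbrs) / len(nbrs) < TOLERANCE

# Random initial grid: two agent types 'A'/'B' plus empty cells (None).
grid = [[None if random.random() < EMPTY else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(50):  # relocation sweeps
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is None]
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is not None and unhappy(grid, x, y) and empties:
                ex, ey = empties.pop(random.randrange(len(empties)))
                grid[ex][ey], grid[x][y] = grid[x][y], None
                empties.append((x, y))

print("final grid computed")
```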