17 research outputs found

    Exascale Agent-Based Modelling for Policy Evaluation in Real-Time (ExAMPLER)

    Exascale computing could revolutionise the way we design and build agent-based models (ABMs), for example by enabling scaling up as well as robust calibration and validation. At present there is, to our knowledge, no exascale computing operating with ABM, only pockets of work using High Performance Computing (HPC). While exascale computing is expected to become more widely available towards the latter half of this decade, the ABM community is largely unaware of what exascale agent-based modelling would require in order to support policy evaluation. This project will engage with the ABM community to understand what computing resources are currently used and what is needed (in terms of both hardware and software), and to set out a roadmap by which to make it happen.

    Evaluating the potential of agent-based modelling to capture consumer grocery retail store choice behaviours

    Evolving consumer behaviours with regard to store and channel choice, shopping frequency, shopping mission and spending heighten the need for robust spatial modelling tools for use within retail analytics. In this paper, we report on a collaboration with a major UK grocery retailer to assess the feasibility of modelling consumer store choice behaviours at the level of the individual consumer. We benefit from very rare access to our collaborating retailer’s customer data, which we use to develop a proof-of-concept agent-based model (ABM). Utilising the retailer’s loyalty card database, we extract key consumer behaviours in relation to shopping frequency, mission, store choice and spending. We build these observed behaviours into our ABM, based on a simplified urban environment, calibrated and validated against observed consumer data. Our ABM is able to capture key spatiotemporal drivers of consumer store choice behaviour at the individual level. Our findings could afford new opportunities for spatial modelling within the retail sector, enabling the complexity of consumer behaviours to be captured and simulated within a novel modelling framework. We reflect on the further model development required for use in a commercial context for location-based decision-making.
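The paper does not publish its model code, but the store-choice mechanism such ABMs typically describe — utility rising with store attractiveness and decaying with distance — can be sketched with a Huff-style spatial interaction rule. All coordinates, attractiveness values and the exponents below are illustrative assumptions, not the authors' calibrated model:

```python
import math
import random

def huff_probabilities(agent_xy, stores, alpha=1.0, beta=2.0):
    """Huff-style choice probabilities for one consumer: utility rises
    with store attractiveness and decays with distance (power laws)."""
    utilities = []
    for (sx, sy, attractiveness) in stores:
        d = math.dist(agent_xy, (sx, sy)) or 1e-9   # guard divide-by-zero
        utilities.append(attractiveness ** alpha / d ** beta)
    total = sum(utilities)
    return [u / total for u in utilities]

def choose_store(agent_xy, stores, rng):
    """Sample one store choice for a single shopping trip."""
    probs = huff_probabilities(agent_xy, stores)
    return rng.choices(range(len(stores)), weights=probs, k=1)[0]

# Illustrative scene: two stores as (x, y, attractiveness, e.g. floorspace).
stores = [(0.0, 0.0, 10.0), (5.0, 5.0, 30.0)]
rng = random.Random(42)
choices = [choose_store((1.0, 1.0), stores, rng) for _ in range(1000)]
```

In a full model the exponents would be calibrated against loyalty-card trip records, and agents would additionally carry shopping-frequency, mission and spending attributes.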

    Developing an Individual-level Geodemographic Classification

    Geodemographics is a spatially explicit classification of socio-economic data, which can be used to describe and analyse individuals by where they live. Geodemographic information is used by the public sector for planning and resource allocation, but it also has considerable use within commercial-sector applications. Early geodemographic systems, such as the UK’s ACORN (A Classification of Residential Neighbourhoods), used only area-based census data, but more recent systems have added supplementary layers of information, e.g. credit details and survey data, to provide better discrimination between classes. Although much more data has now become available, geodemographic systems are still fundamentally built from area-based census information. This is partly because privacy laws require the release of census data at an aggregate level, but mostly because much of the research remains proprietary. Household-level classifications do exist, but they are often based on regressions between area and household data sets. This paper presents a different approach for creating a geodemographic classification at the individual level using only census data. A generic framework is presented, which classifies data from the UK Census Small Area Microdata and then allocates the resulting clusters to a synthetic population created via microsimulation. The framework is then applied to the creation of an individual-based system for the city of Leeds, demonstrated using data from the 2001 census, and is further validated using individual and household survey data from the British Household Panel Survey.
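The framework's core steps — cluster individual-level census microdata records, then allocate each member of a synthetic population to its nearest cluster — can be sketched with a plain k-means in stdlib Python. The toy attribute values, the choice of k, and the deterministic initialisation are illustrative assumptions; a real build would standardise many census variables and use k-means++ with multiple restarts:

```python
def allocate(person, centroids):
    """Assign an individual (tuple of standardised numeric attributes)
    to the nearest cluster centroid by squared Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(person, centroids[c])))

def kmeans(points, k, iters=50):
    """Plain k-means over individual-level records. Deterministic
    initialisation (first k points) for illustration only."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[allocate(p, centroids)].append(p)
        # Recompute each centroid; keep the old one if its group emptied.
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

# Toy microdata in a 2-attribute space with two obvious groups.
microdata = [(0.0, 0.0), (2.0, 2.0), (0.1, 0.1), (2.1, 1.9), (0.2, 0.0), (1.9, 2.1)]
centroids = kmeans(microdata, k=2)
```

The same `allocate` rule then tags each microsimulated synthetic individual with a cluster label, giving an individual-level classification rather than an area-level one.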

    Analysing trajectories of a longitudinal exposure: A causal perspective on common methods in lifecourse research

    Longitudinal data are commonly analysed to inform prevention policies for diseases that may develop throughout life. Common methods interpret the longitudinal data either as a series of discrete measurements or as continuous patterns. Some of the latter methods condition on the outcome, aiming to capture ‘average’ patterns within outcome groups, while others capture individual-level pattern features before relating these to the outcome. Conditioning on the outcome may prevent meaningful interpretation. Repeated measurements of a longitudinal exposure (weight) and a later outcome (glycated haemoglobin levels) were simulated to match three scenarios: one with no causal relationship between growth rate and glycated haemoglobin, and two with a positive causal effect of growth rate on glycated haemoglobin. Two methods that condition on the outcome, and one that does not, were applied to the data in 1000 simulations. The interpretation of the two-step method matched the simulation in all causal scenarios, but that of the methods conditioning on the outcome did not. Methods that condition on the outcome do not accurately represent a causal relationship between a longitudinal pattern and an outcome. Researchers considering longitudinal data should carefully determine whether they wish to analyse longitudinal data as a series of discrete time points or by extracting pattern features.

    DAG-informed regression modelling, agent-based modelling, and microsimulation modelling: A critical comparison of methods for causal inference

    The current paradigm for causal inference in epidemiology relies primarily on the evaluation of counterfactual contrasts via statistical regression models informed by graphical causal models (often in the form of directed acyclic graphs, or DAGs) and their underlying mathematical theory. However, there have been growing calls for supplementary methods, and one such method that has been proposed is agent-based modelling, due to its potential for simulating counterfactuals. Yet within the epidemiological literature there currently exists a general lack of clarity regarding what exactly agent-based modelling is (and is not) and, importantly, how it differs from microsimulation modelling – perhaps its closest methodological comparator. We clarify this distinction by briefly reviewing the history of each method, which provides context for their similarities and differences, and casts light on the types of research questions that they have evolved (and thus are well suited) to answer; we do the same for DAG-informed regression methods. The distinct historical evolutions of DAG-informed regression modelling, microsimulation modelling, and agent-based modelling have given rise to distinct features of the methods themselves, and provide a foundation for critical comparison. Not only are the three methods well suited to addressing different types of causal questions, but in doing so they place differing levels of emphasis on fixed and random effects, and also tend to operate on different timescales and in different timeframes.
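The counterfactual-simulation idea at the heart of this comparison can be made concrete with a toy example: in a confounded system, a naive observational contrast is biased, while simulating the intervention directly — as an agent-based or microsimulation model can — recovers the true effect. The causal structure and all coefficients below are invented for illustration:

```python
import random
import statistics

def simulate(n, rng, do_x=None):
    """Toy causal system: confounder C -> X, C -> Y, and X -> Y with a
    true effect of 2.0. Passing do_x emulates an intervention on X,
    severing the C -> X arrow (what a simulation model can do freely)."""
    data = []
    for _ in range(n):
        c = rng.gauss(0, 1)
        x = do_x if do_x is not None else (1 if c + rng.gauss(0, 1) > 0 else 0)
        y = 2.0 * x + 1.5 * c + rng.gauss(0, 1)
        data.append((x, y))
    return data

rng = random.Random(7)
obs = simulate(20_000, rng)
# Confounded observational contrast: inflated by C's effect on both X and Y.
naive = (statistics.mean(y for x, y in obs if x == 1)
         - statistics.mean(y for x, y in obs if x == 0))
# Simulated counterfactual contrast: set X, compare outcomes.
causal = (statistics.mean(y for _, y in simulate(20_000, rng, do_x=1))
          - statistics.mean(y for _, y in simulate(20_000, rng, do_x=0)))
```

A DAG-informed regression would instead recover the effect by adjusting for C; the point is that both routes target the same counterfactual contrast by different means.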

    Genetic algorithm optimisation of an agent-based model for simulating a retail market

    Traditionally, researchers have used elaborate regression models to simulate the retail petrol market. Such models are limited in their ability to model individual behaviour and geographical influences. Heppenstall et al. presented a novel agent-based framework for modelling individual petrol stations as agents and integrated important additional system behaviour through the use of established methodologies such as spatial interaction models. The parameters for this model were initially determined by the use of real data analysis and experimentation. This paper explores the parameterisation and verification of the model through data analysis and by use of a genetic algorithm (GA). The results show that a GA can be used to produce not just an optimised match, but results that match those derived by expert analysis through rational exploration. This may suggest that despite the apparent nonlinear and complex nature of the system, there are a limited number of optimal or near optimal behaviours given its constraints, and that both user-driven and GA solutions converge on them.
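The petrol-market ABM itself is not reproduced here, but the GA calibration loop the abstract describes — evolve candidate parameter sets, score each by model-versus-data error, keep the fittest — follows a standard pattern. Everything below (the stand-in linear "model", population size, mutation scale) is an illustrative assumption, not the authors' implementation:

```python
import random

def model_error(params, observed):
    """Toy stand-in for the ABM: a two-parameter linear 'model' scored
    against observed (x, y) data by summed squared error."""
    a, b = params
    return sum((a + b * x - y) ** 2 for x, y in observed)

def genetic_algorithm(observed, pop_size=40, generations=60, seed=3):
    """Evolve parameter pairs: rank by fit, keep the best half (elitism),
    breed children by blend crossover plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: model_error(p, observed))
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            children.append(tuple((x + y) / 2 + rng.gauss(0, 0.1)
                                  for x, y in zip(p1, p2)))
        pop = parents + children
    return min(pop, key=lambda p: model_error(p, observed))

observed = [(x, 1.0 + 0.5 * x) for x in range(10)]  # generated with a=1, b=0.5
best = genetic_algorithm(observed)
```

In a real calibration, `model_error` would wrap a full ABM run compared against market data, which is why GAs are attractive: they need only the error value, not gradients.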

    How well does Western environmental theory explain crime in the Arabian context? The case study of Riyadh, Saudi Arabia

    Crime within Arabic countries differs significantly from Western crime in type, frequency, and motivation. For example, motor vehicle theft (MVT) has constituted the largest proportion of property crime incidents in Saudi Arabia (SA) for decades. This is in stark contrast to Western countries, where burglary and street theft dominate. Environmental criminology theories, such as routine activity theory and crime pattern theory, have the potential to help investigate Arabic crime. However, no research has sought to evaluate the validity of these theories within such a different cultural context. This article represents a first step in addressing this substantial research gap, taking MVT within SA as a case study. We evaluate previous MVT studies that use an environmental criminology approach, with a critical view to applying environmental criminology in an Arabic context. The article identifies a range of key features in SA that differ from typical Western contexts. These differences could limit the appropriateness of existing methodologies used to apply environmental criminology. The study also reveals that the methodologies associated with traditional environmental crime theory need adjusting more generally when working with MVT, not least to account for shifts in the location of opportunities for crime over time.

    The utility of multilevel models for continuous-time feature selection of spatio-temporal networks

    Many models for the analysis of spatio-temporal networks specify time as a series of discrete steps. This requires either evenly spaced measurement times or the aggregation of data into measurement windows, either of which can introduce bias. An alternative is to use continuous-time models, for example multilevel models. Models capturing complex spatio-temporal variation are often difficult to visualise and interpret. This can be addressed by simplifying the results, for example by extracting ‘features’ of interest (such as maxima or minima) from the temporal patterns associated with different network connections. This paper uses simulation to evaluate the accuracy and precision with which b-spline-based multilevel models (a flexible form of continuous-time model that can easily capture complex variation associated with a spatio-temporal network structure) capture the timing and extent of maximum delays to journeys made between pairs of stations in a small railway network. On average, the models captured the timing and extent of maximum delay with small bias, but there was evidence of overestimation of low values of these features and underestimation of high values. This systematic bias may have partially caused the undercoverage of credible intervals for the pattern features. Alternative model specifications – specifically to capture x-axis random variation, for example – should be considered in future work.
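The feature-extraction step — fit a continuous-time curve to irregularly spaced delay observations, then read off the timing and extent of the maximum — can be sketched in miniature. A single least-squares quadratic stands in for the paper's b-spline multilevel model, and the data are noise-free toy values, so this only illustrates the idea of extracting pattern features from a continuous-time fit:

```python
def fit_quadratic(times, delays):
    """Least-squares fit of y = c0 + c1*t + c2*t^2 via the 3x3 normal
    equations, solved with Gaussian elimination (stdlib only)."""
    X = [[1.0, t, t * t] for t in times]
    A = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    b = [sum(row[i] * y for row, y in zip(X, delays)) for i in range(3)]
    for col in range(3):                       # forward elimination, partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                        # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

def peak_feature(c):
    """Timing and extent of the maximum of a concave quadratic."""
    t_max = -c[1] / (2 * c[2])
    return t_max, c[0] + c[1] * t_max + c[2] * t_max ** 2

# Irregularly spaced, noise-free delay observations peaking at t = 8:
# no aggregation into discrete windows is needed for a continuous-time fit.
times = [0.0, 2.5, 4.0, 6.5, 8.0, 9.5, 12.0, 15.0]
delays = [5 - 0.2 * (t - 8.0) ** 2 for t in times]
t_max, d_max = peak_feature(fit_quadratic(times, delays))
```

With noisy data and many journeys, these per-connection features would themselves carry estimation error, which is where the paper's bias and coverage questions arise.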