
    The development of linked databases and environmental modelling systems for decision-making in London

    A basic requirement for a city's growth is the availability of land, raw materials and water. For the continued and sustainable development of today's cities we must be able to meet these basic requirements whilst being mindful of the environment and its relationship with anthropogenic activity. The heterogeneous and complex nature of urban systems, with their obvious environmental and anthropogenic inter-dependencies, necessitates a more holistic approach to decision-making. New developments such as linked databases of environmental data and integrated environmental modelling systems provide new ways of organising cross-disciplinary information and a means to apply it to explain, explore and predict the urban system's response to environmental change. In this paper we show how access to linked databases, a detailed understanding of the geology and integrated environmental modelling solutions have the potential to provide decision-makers and policy developers with the science-based information needed to understand and address these challenges.

    The application of componentised modelling techniques to catastrophe model generation

    In this paper we show that integrated environmental modelling (IEM) techniques can be used to generate a catastrophe model for groundwater flooding. Catastrophe models are probabilistic models built upon sets of events representing the hazard; the likelihood of each event is weighted by the impact of it happening, and the result is used to estimate future financial losses. These probabilistic loss estimates often underpin re-insurance transactions. Modelled loss estimates can vary significantly because of the assumptions used within the models. A rudimentary insurance-style catastrophe model for groundwater flooding has been created by linking seven individual components together, each component linked to the next using an open modelling framework (i.e. an implementation of OpenMI). Finally, we discuss how a flexible model integration methodology, such as that described in this paper, facilitates a better understanding of the assumptions used within the catastrophe model by enabling the interchange of model components created using different, yet appropriate, assumptions.
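
    The likelihood-weighted loss calculation at the heart of such a model, and the idea of a chain of interchangeable components, can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration in Python, not the seven-component OpenMI implementation described in the paper; the component names, event set, damage curve and exposed value are all assumed for illustration.

```python
# Hypothetical sketch of a componentised catastrophe-model chain.
# Not the paper's seven-component OpenMI implementation: the component
# names, event set and damage curve are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    event_id: str
    annual_rate: float      # expected occurrences per year
    flood_depth_m: float    # groundwater flood depth at the site

def hazard_component() -> List[Event]:
    """Stand-in for a hazard module: returns a stochastic event set."""
    return [
        Event("gw_minor", annual_rate=0.10, flood_depth_m=0.2),
        Event("gw_moderate", annual_rate=0.02, flood_depth_m=0.6),
        Event("gw_severe", annual_rate=0.005, flood_depth_m=1.5),
    ]

def vulnerability_component(depth_m: float) -> float:
    """Stand-in damage curve: fraction of value lost at a given depth."""
    return min(1.0, 0.3 * depth_m)

def financial_component(damage_ratio: float, exposed_value: float) -> float:
    """Converts a damage ratio into a monetary loss."""
    return damage_ratio * exposed_value

def annual_average_loss(exposed_value: float) -> float:
    """Likelihood-weighted sum of event losses (the core catastrophe-model output)."""
    return sum(
        event.annual_rate
        * financial_component(vulnerability_component(event.flood_depth_m), exposed_value)
        for event in hazard_component()
    )

if __name__ == "__main__":
    print(f"Annual average loss: {annual_average_loss(exposed_value=250_000):,.0f}")
```

    Swapping in an alternative vulnerability or financial component changes the loss estimate without altering the rest of the chain, which illustrates how interchanging components exposes the effect of their underlying assumptions.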

    Meta-model: ensuring the widespread access to metadata and data for environmental models: scoping report

    This work is a response to the challenge posed by the NERC Environmental Data call, and is designed to scope out how to meet the following objectives: 1. Ensure that the data used to create models are recorded and their source known. 2. Ensure that the models produced are themselves available. 3. Ensure that the results produced by these models can be obtained. To scope out how to fulfil these objectives a series of visits, phone calls and meetings was undertaken, alongside a Survey Monkey (online) questionnaire. For the latter, a request to fill out the questionnaire was sent to over three hundred contacts at institutions across the UK, Europe and America, of whom 106 responded. The responses have been analysed in conjunction with the information gained from other sources. There are a significant number of standards for both discovery and technical metadata. There are also a range of services by which metadata can be recorded and the data stored alongside them. NERC itself puts a significant amount of effort into storing data and model results and making the metadata available: for example, there are seven Data Centres, and the Data Catalogue Service (DCS) allows metadata for datasets stored in the NERC data centres to be searched. Whilst a significant amount of time and effort has been put into standards, their use is variable. There are a number of different standards, mainly related to ISO standards, WaterML, GEMINI, MEDIN and climate-based standards, as well as bespoke standards for data, but there is a lack of formal standards for model metadata. Storage of data and their associated metadata is facilitated via the NERC data centres, with reasonable uptake. Whilst the standards and approaches for discovery and technical metadata for data are well advanced and, in theory, well used, there are a number of issues: recognition of what the user wants rather than what the data manager feels is required; consolidation of discovery metadata schemas based on ISO 19115; recording of different file formats, and tools to allow ease of transfer between file formats; retrospective capture of metadata for data and models; and incorporation of time-based information into metadata. For model metadata, however, the situation is less well advanced. There is no internationally recognised standard for model metadata, and one should be developed to include features such as: model code and version; code guardian contact details; links to further information (URLs to papers, manuals, etc.); details on how to run the model; and the spatial extent of the model instance. Other considerations include the need to record an assessment of data quality and uncertainty so that model uncertainty can be quantified, and the question of how to store the models themselves, whether as model code (via standard repositories) or as executables. These gaps could be filled by a work programme consisting of the development of a metadata standard for models, a portal for recording and supplying these metadata, testing with appropriate user organisations, and liaison with international standards organisations to ensure that the development is recognised. The results of the whole process should be disseminated through as many channels as possible.
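
    To make the proposed model-metadata features concrete, the sketch below shows one way such a record might be structured. It is a hypothetical illustration in Python, not a published schema: the field names, the example values and the JSON serialisation are assumptions based on the features listed above.

```python
# Hypothetical model-metadata record based on the fields suggested in the
# scoping report; the field names and example values are assumptions, not
# a published standard.
from dataclasses import dataclass, field, asdict
from typing import List, Tuple
import json

@dataclass
class ModelMetadata:
    model_name: str
    code_version: str
    code_guardian_contact: str                  # who maintains the code
    further_information_urls: List[str]         # papers, manuals, etc.
    run_instructions: str                       # how to run the model
    spatial_extent_bbox: Tuple[float, float, float, float]   # min_x, min_y, max_x, max_y
    input_datasets: List[str] = field(default_factory=list)  # provenance of the data used

record = ModelMetadata(
    model_name="ExampleGroundwaterModel",
    code_version="2.1.0",
    code_guardian_contact="model.guardian@example.org",
    further_information_urls=["https://example.org/manual"],
    run_instructions="See manual; requires recharge and abstraction inputs.",
    spatial_extent_bbox=(-2.6, 50.6, -1.9, 50.9),
    input_datasets=["borehole logs (example)", "river flow gauges (example)"],
)

# Serialise to JSON so the record could be lodged in a catalogue or portal.
print(json.dumps(asdict(record), indent=2))
```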

    Controls on the magnitude-frequency scaling of an inventory of secular landslides

    Linking landslide size and frequency is important at both human and geological timescales, for quantifying landslide hazards and for quantifying the effectiveness of landslides in removing sediment from evolving landscapes. Landslide inventories are usually compiled following a particular triggering event such as an earthquake or storm, and their magnitude-frequency statistics are often characterised by a power-law relationship with a rollover at small landslide sizes. The occurrence of landslides is expected to be influenced by the material properties of the rock and/or regolith in which failure occurs. Here we explore the statistical behaviour, and the controls on it, of a secular landslide inventory (SLI), i.e. an inventory of events occurring over an indefinite geological time period, consisting of mapped landslide deposits and their underlying lithology (bedrock or superficial) across the United Kingdom. The magnitude-frequency distribution of this secular inventory exhibits an inflected power-law relationship, well approximated by either an inverse gamma or a double Pareto model. The power-law scaling exponent for medium to large landslides is −1.71 ± 0.02. The small-event rollover occurs at a significantly higher magnitude (1.0–7.0 × 10⁻³ km²) than observed in single-event landslide records (approximately 4 × 10⁻³ km²). We interpret this as evidence of landscape annealing, from which we infer that the SLI underestimates the frequency of small landslides. This is supported by a subset of data for which a complete landslide inventory was recently mapped. Large landslides also appear to be under-represented relative to model predictions. There are several possible reasons for this, including an incomplete data set, an incomplete landscape (i.e. relatively steep slopes are under-represented), and/or temporal transience in landslide activity during emergence from the last glacial maximum toward a generally more stable late-Holocene state. The proposed process of landscape annealing and the possibility of a transient hillslope response mean that it is not possible to use the statistical properties of the current SLI database to rigorously constrain probabilities of future landslides in the UK.
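
    The power-law tail of a magnitude-frequency distribution is commonly estimated by maximum likelihood above a chosen cut-off. The sketch below is a generic illustration of that step, not the inverse gamma or double Pareto fitting used for the SLI; the synthetic sample, cut-off area and estimator are assumptions for demonstration only.

```python
# Generic sketch of estimating a power-law tail exponent for landslide areas.
# This is NOT the paper's inverse-gamma / double Pareto fit: the synthetic
# sample, the cut-off area and the estimator are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic landslide areas (km^2) drawn from a Pareto tail above a_min.
a_min = 4e-3                       # illustrative rollover / cut-off area
true_alpha = 2.7                   # density p(A) ~ A**(-true_alpha) for A >= a_min
areas = a_min * (1.0 - rng.random(5000)) ** (-1.0 / (true_alpha - 1.0))

# Maximum-likelihood (Hill-type) estimate of the exponent of the density
# p(A) ~ A**(-alpha) for A >= a_min.
tail = areas[areas >= a_min]
alpha_hat = 1.0 + tail.size / np.sum(np.log(tail / a_min))
alpha_err = (alpha_hat - 1.0) / np.sqrt(tail.size)   # approximate standard error

print(f"Estimated tail exponent: -{alpha_hat:.2f} +/- {alpha_err:.2f}")
```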

    The geological framework of the Frome-Piddle Catchment

    The purpose of this report is to describe the solid and drift geology of the catchment area of the rivers Frome and Piddle and a 5 km-wide buffer zone (together comprising the study area). The Frome-Piddle Catchment is centred on Dorchester in South Dorset and is predominantly a chalkland drainage basin containing rivers that flow SE into Poole Harbour (Figure 1). The drainage basin is some 48 km in length and 22 km wide, with the Frome draining an area of 463.7 km² and the Piddle an area of 187.5 km². The catchment has recently been geologically surveyed at 1:10 000 scale (1985-1997), and most of it is covered by published 1:50 000-scale maps (Figure 1). These have been compiled into a map covering the catchment and its 5 km buffer zone (Map 1). While the map is largely seamless within the catchment, there is one major misfit in the buffer zone, resulting from the different lithostratigraphical subdivisions used on the modern Shaftesbury (313) and much older Yeovil (312) sheets. Some of the lithostratigraphical terms used on the 1:10 000-scale maps have been superseded, and a table showing the current terminology is included as Appendix 1. In addition, borehole and surface outcrop data have been used to model important geological surfaces and to produce a series of maps (Maps 2-5) and cross-sections (Sections 1-5).
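
    Modelling a geological surface from scattered borehole and outcrop observations, as done for Maps 2-5, is essentially a spatial interpolation problem. The sketch below is a generic, hypothetical illustration using linear interpolation over made-up picks; it does not reproduce the report's modelling workflow or data.

```python
# Generic sketch of gridding a geological surface elevation from scattered
# borehole/outcrop picks; the coordinates and elevations are made up and the
# method (linear interpolation) is an assumption, not the report's workflow.
import numpy as np
from scipy.interpolate import griddata

# Scattered observations: x, y (m, arbitrary local grid) and surface elevation (m OD).
points = np.array([
    [1000.0, 2000.0], [1500.0, 2100.0], [2000.0, 2500.0],
    [2500.0, 1800.0], [3000.0, 2200.0], [1800.0, 3000.0],
])
elevations = np.array([55.0, 52.0, 48.0, 61.0, 45.0, 50.0])

# Regular grid over the area of interest.
xi = np.linspace(points[:, 0].min(), points[:, 0].max(), 50)
yi = np.linspace(points[:, 1].min(), points[:, 1].max(), 50)
grid_x, grid_y = np.meshgrid(xi, yi)

# Interpolate the surface; values are NaN outside the convex hull of the data.
surface = griddata(points, elevations, (grid_x, grid_y), method="linear")

print("Interpolated surface shape:", surface.shape)
print("Elevation near grid centre:", surface[25, 25])
```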

    Three-dimensional geological modelling of anthropogenic deposits at small urban sites: a case study from Sheepcote Valley, Brighton, U.K.

    Improvements in computing speed and capacity, and the increasing collection and digitisation of geological data, now allow geoscientists to produce meaningful 3D spatial models of the shallow subsurface in many large urban areas, to predict ground conditions and reduce risk and uncertainty in urban planning. It is not yet clear how useful this 3D modelling approach is at smaller urban scales, where poorly characterised anthropogenic deposits (artificial/made ground and fill) form the dominant subsurface material and where the availability of borehole and other geological data is less comprehensive. This is important because it is these smaller urban sites, with complex site histories, that frequently form the focus of urban regeneration and redevelopment schemes. This paper examines the extent to which the 3D modelling approach previously utilised at large urban scales can be extended to smaller, less well-characterised urban sites, using a historic landfill site in Sheepcote Valley, Brighton, UK as a case study. Two 3D models were generated and compared using GSI3D software: one using borehole data only, the other combining borehole data with local geological maps and the results of a desk study (involving collation of available site data, including ground contour plans). These models clearly delimit the overall subsurface geology at the site, and allow visualisation and modelling of the anthropogenic deposits present. Shallow geophysical data collected from the site partially validate the 3D modelled data, and can improve GSI3D outputs where the boundaries of anthropogenic deposits may not be clearly defined by surface, contour or borehole data. Attribution of geotechnical and geochemical properties to the 3D model is problematic without intrusive investigations and sampling. However, combining available borehole data, shallow geophysical methods and site histories may allow attribution of generic fill properties, and a consequent reduction of urban development risk and uncertainty.
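
    The closing suggestion, that generic fill properties might be attributed to modelled units, can be illustrated schematically. The sketch below is hypothetical: a simplified borehole log is attributed with indicative property ranges from a lookup table, and the unit names, depths and values are assumptions rather than site data.

```python
# Hypothetical attribution of generic properties to units in a simplified
# borehole log; unit names, depths and property ranges are illustrative
# assumptions, not measured site data.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Layer:
    unit: str
    top_m: float      # depth to top of unit (m below ground level)
    base_m: float     # depth to base of unit (m below ground level)

# Simplified log for one borehole at a hypothetical landfill site.
borehole_log = [
    Layer("made_ground", 0.0, 3.2),
    Layer("head_deposits", 3.2, 4.1),
    Layer("chalk", 4.1, 12.0),
]

# Generic (indicative) attributes per unit: bulk density range (Mg/m^3)
# and a qualitative permeability class.
generic_properties: Dict[str, Tuple[Tuple[float, float], str]] = {
    "made_ground": ((1.4, 1.9), "variable"),
    "head_deposits": ((1.8, 2.1), "low-moderate"),
    "chalk": ((1.9, 2.3), "moderate-high"),
}

for layer in borehole_log:
    density_range, permeability = generic_properties[layer.unit]
    thickness = layer.base_m - layer.top_m
    print(f"{layer.unit}: {thickness:.1f} m thick, "
          f"density {density_range[0]}-{density_range[1]} Mg/m^3, "
          f"permeability {permeability}")
```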

    The potential for the use of model fusion techniques in building and developing catastrophe models

    Global economic losses related to natural hazards are large and increasing, peaking at US$380 billion in 2011, driven by earthquakes in Japan and New Zealand and flooding in Thailand. Catastrophe models are stochastic, event-set-based computer models, first created 25 years ago, that are now vital to risk assessment within the insurance and reinsurance industry. They estimate likely losses from extreme events, whether natural or man-made. Most catastrophe models limit the level of user interaction and are often stereotyped as ‘black boxes’. In this paper we investigate how model fusion techniques could be used to develop ‘plug and play’ catastrophe models, and discuss the impact of open-access modelling on the insurance industry and other stakeholders (e.g. local government).
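
    The ‘plug and play’ idea amounts to giving components a common interface so that one implementation can be swapped for another without changing the rest of the model chain. The sketch below is a hypothetical Python illustration of that pattern, not an OpenMI or commercial-platform interface; the protocol, the two vulnerability components and the numbers are assumptions.

```python
# Hypothetical sketch of a 'plug and play' component interface: any
# vulnerability module implementing the same protocol can be swapped into
# the chain without changing the loss calculation. Names and numbers are
# illustrative assumptions, not an OpenMI or vendor interface.
from typing import Protocol

class VulnerabilityModel(Protocol):
    def damage_ratio(self, hazard_intensity: float) -> float:
        """Fraction of exposed value lost at a given hazard intensity."""
        ...

class LinearVulnerability:
    def damage_ratio(self, hazard_intensity: float) -> float:
        return min(1.0, 0.25 * hazard_intensity)

class ThresholdVulnerability:
    def damage_ratio(self, hazard_intensity: float) -> float:
        return 0.0 if hazard_intensity < 0.5 else min(1.0, 0.6 * hazard_intensity)

def expected_loss(model: VulnerabilityModel, intensities, rates, exposed_value: float) -> float:
    """Likelihood-weighted loss using whichever vulnerability component is plugged in."""
    return sum(
        rate * model.damage_ratio(intensity) * exposed_value
        for intensity, rate in zip(intensities, rates)
    )

intensities = [0.3, 0.8, 1.6]      # e.g. flood depths (m) for three events
rates = [0.1, 0.02, 0.005]         # annual occurrence rates

for component in (LinearVulnerability(), ThresholdVulnerability()):
    loss = expected_loss(component, intensities, rates, exposed_value=250_000)
    print(f"{type(component).__name__}: expected annual loss {loss:,.0f}")
```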