
    Decision support system for the long-term city metabolism planning problem

    A Decision Support System (DSS) tool for the assessment of intervention strategies (Alternatives) in an Urban Water System (UWS) with an integral simulation model called “WaterMet²” is presented. The DSS permits the user to identify one or more optimal Alternatives over a fixed long-term planning horizon using performance metrics mapped to the TRUST sustainability criteria (Alegre et al., 2012). The DSS exposes lists of built-in intervention options and system performance metrics from which the user can compose new Alternatives. The quantitative metrics are calculated by the WaterMet² model, and further qualitative or user-defined metrics may be specified by the user or by external tools feeding into the DSS. A Multi-Criteria Decision Analysis (MCDA) approach is employed within the DSS to compare the defined Alternatives and to rank them with respect to a pre-specified weighting scheme for different Scenarios. Two rich, interactive Graphical User Interfaces, one desktop and one web-based, guide the end user through the stages of defining the problem and evaluating and ranking Alternatives. This provides a useful tool for decision makers to compare different strategies for the planning of a UWS with respect to multiple Scenarios. The efficacy of the DSS is demonstrated on a northern European case study, inspired by a real-life urban water system, for a mixture of quantitative and qualitative criteria. The results demonstrate how the DSS, integrated with a UWS modelling approach, can assist planners in meeting their long-term, strategic-level sustainability objectives.
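    To illustrate the ranking step, below is a minimal weighted-sum MCDA sketch in Python. The abstract does not specify which MCDA aggregation the DSS uses, and the Alternatives, metrics, values and weights here are hypothetical, so this shows only the general mechanism.

```python
def normalise(values, benefit=True):
    """Min-max normalise a metric; invert when lower is better."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if benefit else [1.0 - s for s in scaled]

# One value per Alternative, plus a higher-is-better flag; all numbers are toy data.
alternatives = ["A1: mains rehabilitation", "A2: rainwater harvesting", "A3: do nothing"]
metrics = {
    "reliability":     ([0.92, 0.88, 0.75], True),
    "cost_MEUR":       ([140.0, 95.0, 10.0], False),
    "leakage_percent": ([8.0, 12.0, 20.0], False),
}
weights = {"reliability": 0.5, "cost_MEUR": 0.3, "leakage_percent": 0.2}

scores = [0.0] * len(alternatives)
for name, (values, benefit) in metrics.items():
    for i, s in enumerate(normalise(values, benefit)):
        scores[i] += weights[name] * s

for alt, score in sorted(zip(alternatives, scores), key=lambda p: -p[1]):
    print(f"{alt}: {score:.3f}")
```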

    On the use of schedule risk analysis for project management

    The purpose of this paper is to give an overview of the existing literature and recent developments in research on Schedule Risk Analysis (SRA) in Project Management (PM) for measuring the sensitivity of activities and resources in the project network. SRA is a technique that relies on Monte-Carlo simulation runs to analyse the impact of changes in activity durations and costs on the overall project time and cost objectives. First, the paper gives an overview of the most commonly known sensitivity metrics from the literature that are widely used by PM software tools to measure the time and cost sensitivity of activities, as well as the sensitivity of project resources. Second, the relevance of these metrics in an integrated project control setting is discussed based on recent research studies. Finally, a short discussion of the challenges for future research is given. All sections of this paper are based on previously published research studies, for which references are given throughout the manuscript.
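    As a concrete illustration of the simulation idea, the sketch below estimates one of the best-known SRA sensitivity metrics, the criticality index: the fraction of Monte-Carlo runs in which an activity lies on the critical path. The toy four-activity network and triangular duration distributions are assumptions for illustration, not taken from the paper.

```python
import random

# activity -> (predecessors, (min, mode, max) duration in days); toy data
network = {
    "A": ([],         (2, 4, 8)),
    "B": (["A"],      (3, 5, 9)),
    "C": (["A"],      (4, 6, 12)),
    "D": (["B", "C"], (1, 2, 4)),
}

RUNS = 10_000
critical_counts = {a: 0 for a in network}

for _ in range(RUNS):
    # Draw a duration for every activity (random.triangular takes low, high, mode).
    dur = {a: random.triangular(lo, hi, mode)
           for a, (_, (lo, mode, hi)) in network.items()}
    # Forward pass; dict insertion order is already topological here.
    finish = {}
    for a, (preds, _) in network.items():
        finish[a] = dur[a] + max((finish[p] for p in preds), default=0.0)
    # Walk back from the last-finishing activity along the longest path.
    a = max(finish, key=finish.get)
    while True:
        critical_counts[a] += 1
        preds = network[a][0]
        if not preds:
            break
        a = max(preds, key=lambda p: finish[p])

for a, hits in critical_counts.items():
    print(f"{a}: criticality index = {hits / RUNS:.2f}")
```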

    WISER deliverable D3.1-4: guidance document on sampling, analysis and counting standards for phytoplankton in lakes

    Sampling, analysis and counting of phytoplankton have been undertaken in European lakes for more than 100 years (Apstein 1892, Lauterborn 1896, Lemmermann 1903, Woloszynska 1912, Nygaard 1949). Since this early pioneering period, there has been progress in the methods used to sample, fix, store and analyse phytoplankton. The aim of deliverable D3.1-4 is to select, harmonise and recommend the most suitable method as a basis for lake assessment. We do not report and review the large number of European national methods or other published manuals for phytoplankton sampling and analysis that are available. Agreement on a proper sampling procedure is not trivial for lake phytoplankton. In the early 20th century, sampling was carried out using plankton nets. An unconcentrated sample without any pre-screening is required for quantitative phytoplankton analysis, for which various water samplers were developed. Whether distinct water depths are sampled or an integrated sample of the euphotic zone is taken affects the choice of sampler and sampling procedure. The widely accepted method to quantify algal numbers together with species determination was developed by Utermöhl (1958), who proposed a counting technique using sedimentation chambers and inverted microscopy. This is the basis for the recently agreed CEN standard “Water quality - Guidance standard on the enumeration of phytoplankton using inverted microscopy (Utermöhl technique)” (CEN 15204, 2006). This CEN standard does not cover the sampling procedure or the calculation of biovolumes for phytoplankton species, although Rott (1981), Hillebrand et al. (1999) and Pohlmann & Friedrich (2001) have contributed advice on how to calculate taxa biovolumes effectively. Willén (1976) suggested a simplified counting method based on counting 60 individuals of each species. For the Scandinavian region, an agreed phytoplankton sampling and counting manual was compiled and has been in use for about 20 years (Olrik et al. 1998, Blomqvist & Herlitz 1998). It is unfortunate that no European guidance on sampling of phytoplankton in lakes was agreed before the phytoplankton assessment methods for the EU-WFD were developed and intercalibrated by Member States. In 2008, an initiative by the European Commission (Mandate M424) for two draft CEN standards, on sampling in freshwaters and on calculation of phytoplankton biovolume, was delayed by administrative difficulties. A grant agreement was signed between the Commission and DIN (the German Institute for Standardization) in January 2012 to develop these standards. We believe this WISER guidance document can usefully contribute to these upcoming standards.
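    To make the biovolume step concrete: approaches in the spirit of Hillebrand et al. (1999) assign each taxon a simple geometric solid and compute cell volume from measured dimensions. The sketch below illustrates that general idea; the taxa, shapes, dimensions and counts are hypothetical and not taken from this document.

```python
import math

def sphere(d):
    """Volume of a sphere of diameter d (µm³ if d is in µm)."""
    return math.pi / 6 * d ** 3

def cylinder(d, h):
    """Volume of a cylinder of diameter d and height h."""
    return math.pi / 4 * d ** 2 * h

# taxon -> (counted cells per mL, approximate single-cell volume in µm³); toy data
taxa = {
    "coccoid green alga (sphere)": (12_000, sphere(4.0)),
    "pennate diatom (cylinder)":   (3_500,  cylinder(6.0, 25.0)),
}

for taxon, (cells_per_ml, cell_vol) in taxa.items():
    # µm³ per mL divided by 1e6 gives mm³ per litre.
    biovolume = cells_per_ml * cell_vol / 1e6
    print(f"{taxon}: {biovolume:.3f} mm³/L")
```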

    Towards an automatic data value analysis method for relational databases

    Data is becoming one of the world’s most valuable resources, and it has been suggested that those who own the data will own the future. However, despite data being an important asset, data owners struggle to assess its value. Some recent pioneering works have led to an increased awareness of the need to measure data value. They have also put forward some simple but engaging survey-based methods to help with first-level data assessment in an organisation. However, these methods are manual and depend on the costly input of domain experts. In this paper, we propose to extend the manual survey-based approaches with additional metrics and dimensions derived from the evolving literature on data value dimensions and tailored specifically to our case study. We also developed an automatic, metric-based data value assessment approach that (i) automatically quantifies the business value of data in Relational Databases (RDB), and (ii) provides a scoring method that facilitates the ranking and extraction of the most valuable RDB tables. We evaluate our proposed approach on a real-world relational database from a small online retailer (MyVolts) and show in our experimental study that the data value assessments made by our automated system match those expressed by the domain expert approach.
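    As an illustration of what a metric-based, automatic scoring of RDB tables might look like, the sketch below combines a few automatically measurable per-table signals into a weighted score and ranks the tables. The tables, metrics and weights are hypothetical; the paper's actual metric set and scoring method are its own.

```python
# table -> (row count, completeness 0-1, incoming foreign-key references); toy data
tables = {
    "orders":    (250_000, 0.98, 6),
    "customers": (40_000,  0.91, 4),
    "audit_log": (900_000, 0.60, 0),
}
# Weight keys are listed in the same order as the tuple fields above.
weights = {"rows": 0.3, "completeness": 0.4, "references": 0.3}

def normalise(col):
    """Scale a column to [0, 1] by its maximum."""
    hi = max(col) or 1
    return [v / hi for v in col]

names = list(tables)
columns = list(zip(*tables.values()))
norm = {metric: normalise(col) for metric, col in zip(weights, columns)}

scores = {
    name: sum(weights[m] * norm[m][i] for m in weights)
    for i, name in enumerate(names)
}
for name, score in sorted(scores.items(), key=lambda p: -p[1]):
    print(f"{name}: value score = {score:.2f}")
```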

    Methodologies to develop quantitative risk evaluation metrics

    The goal of this work is to advance a new methodology for measuring a severity cost for each host using the Common Vulnerability Scoring System (CVSS), combining related sub-scores from the base, temporal and environmental metrics into a single severity cost by modelling the problem's parameters in a mathematical framework. We build our own CVSS calculator using our equations to simplify the calculation of vulnerability scores and to benchmark against other models. We design and develop a new approach to represent the cost assigned to each host by dividing the vulnerability scores into two main privilege levels, user and root, and we classify these levels into operational levels to identify and calculate the severity cost of multi-step vulnerabilities. Finally, we implement our framework on a simple network, using the Nessus scanner to discover known vulnerabilities and using the results to build and represent our cost-centric attack graph.
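    The sketch below illustrates the general shape of such an aggregation: per-vulnerability CVSS scores, split by the privilege level an exploit yields, are combined into a single severity cost per host. The weights and the noisy-OR combination rule are illustrative assumptions, not the paper's equations.

```python
# host -> [(CVSS score 0-10, privilege level gained), ...] as a scanner
# such as Nessus might report; the data and weights are illustrative.
host_vulns = {
    "web01": [(9.8, "root"), (6.5, "user"), (5.3, "user")],
    "db01":  [(7.5, "user")],
}
PRIV_WEIGHT = {"root": 1.0, "user": 0.6}  # root-level access costs more

def severity_cost(vulns):
    # Normalise each score to [0, 1], weight by privilege level, and
    # combine with a noisy-OR so the cost saturates at 1.
    survive = 1.0
    for score, priv in vulns:
        survive *= 1.0 - PRIV_WEIGHT[priv] * score / 10.0
    return 1.0 - survive

for host, vulns in host_vulns.items():
    print(f"{host}: severity cost = {severity_cost(vulns):.2f}")
```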

    Risk-Informed Interference Assessment for Shared Spectrum Bands: A Wi-Fi/LTE Coexistence Case Study

    Interference evaluation is crucial when deciding whether and how wireless technologies should operate. In this paper we demonstrate the benefit of risk-informed interference assessment for aiding spectrum regulators in making decisions and for readily conveying engineering insight. Our contributions are twofold: we apply, for the first time, risk assessment to a problem of inter-technology spectrum sharing, i.e. Wi-Fi/LTE coexistence in the 5 GHz unlicensed band, and we demonstrate that this method comprehensively quantifies the interference impact. We perform simulations with our newly released, publicly available tool and consider throughput degradation and fairness metrics to assess the risk for different network densities, numbers of channels, and deployment scenarios. Our results show that no regulatory intervention is needed to ensure harmonious technical Wi-Fi/LTE coexistence: for the typically large number of channels available in the 5 GHz band, the risk for Wi-Fi from LTE is negligible, rendering both policy and engineering concerns largely moot. As an engineering insight, Wi-Fi coexists better with itself in dense deployments, but better with LTE in sparse deployments. Moreover, both main LTE-in-unlicensed variants coexist well with Wi-Fi in general. For LTE intra-technology, inter-operator coexistence, both variants typically coexist well in the 5 GHz band, although in dense deployments the variant implementing listen-before-talk causes less interference.
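    As a toy illustration of risk-informed assessment, the sketch below estimates risk as the expected throughput degradation of a Wi-Fi network under random co-channel collisions with an LTE network, showing why risk shrinks as the number of channels grows. All parameters are hypothetical and unrelated to the paper's released simulation tool.

```python
import random

RUNS = 100_000
N_CHANNELS = 19                  # roughly the 5 GHz unlicensed channel count
COCHANNEL_DEGRADATION = 0.5      # assumed throughput loss on a shared channel

total = 0.0
for _ in range(RUNS):
    wifi_ch = random.randrange(N_CHANNELS)
    lte_ch = random.randrange(N_CHANNELS)
    if wifi_ch == lte_ch:
        total += COCHANNEL_DEGRADATION
print(f"expected throughput degradation: {total / RUNS:.3f}")
# The co-channel probability is 1/N_CHANNELS, so the expected degradation
# shrinks as channels are added, echoing the paper's qualitative finding.
```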