
    An empirical learning-based validation procedure for simulation workflow

    Simulation workflow is a top-level model for the design and control of a simulation process. It connects multiple simulation components with timing and interaction constraints to form a complete simulation system. Before the component models are constructed and evaluated, validating the upper-layer simulation workflow is of the utmost importance in a simulation system. However, methods specifically for validating simulation workflows are very limited. Many of the existing validation techniques are domain-dependent and rely on cumbersome questionnaire design and expert scoring. Therefore, this paper presents an empirical learning-based validation procedure that implements a semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations with validation indices are proposed. The calculation of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. To make full use of historical data and implement more efficient validation, four learning algorithms, namely back-propagation neural network (BPNN), extreme learning machine (ELM), evolving neo-fuzzy neuron (eNFN) and fast incremental Gaussian mixture model (FIGMN), are introduced to construct the empirical relation between workflow credibility and its features. A case study on a landing-process simulation workflow is established to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of the state-of-the-art learning algorithms for the credibility evaluation of simulation models.
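    The abstract names AHP as the aggregation step. Below is a minimal sketch of that standard calculation, assuming an invented pairwise comparison matrix and invented index scores (the paper's actual features and weights are not reproduced here): the priority weights are the principal eigenvector of the comparison matrix, and credibility is their weighted sum.

```python
# Minimal AHP weighting sketch. The comparison matrix and index scores are
# hypothetical illustrations, not values from the paper.
import numpy as np

# Pairwise comparisons over three assumed validation indices
# (e.g. timing consistency, interaction correctness, output accuracy).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w /= w.sum()                          # normalised priority weights

scores = np.array([0.9, 0.8, 0.7])    # hypothetical per-index scores in [0, 1]
credibility = float(w @ scores)       # aggregated workflow credibility
print(f"weights={w.round(3)}, credibility={credibility:.3f}")
```

    In the paper's procedure, the learning algorithms (BPNN, ELM, eNFN, FIGMN) are then trained to map workflow features directly to such credibility values, replacing repeated expert scoring.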

    Estimating the volatility of property assets

    When an investor is allocating assets between equities, bonds and property, this allocation needs to provide a portfolio with an appropriate risk/return trade-off: for instance, a pension scheme may prefer a robust portfolio that holds its aggregate value in a number of different situations. To do this, some estimate of the volatility or uncertainty in the property assets is needed, so that it can be used in the same way as the volatilities of equities and bonds in the allocation. However, property assets are only valued monthly or quarterly (and are sold only rarely), whereas equities and bonds are priced continuously and recorded daily. Currently, many actuaries may assume that the volatility of property assets lies between those of equities and bonds, without quantifying it from real data. The challenge for the Study Group was to produce a model for estimating the volatility or uncertainty in property asset values, for use in portfolio planning. The Study Group examined contexts for the use of volatility estimates, particularly in relation to solvency calculations as required by the Financial Services Authority, fund trustees and corporate boards, and it proposed a number of possible approaches. This report summarises that work and suggests directions for further investigation.
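    The report only summarises candidate approaches, so the sketch below shows one common technique for appraisal-based property series rather than the Study Group's proposal: de-smooth the quarterly valuation returns with a first-order filter, then annualise the standard deviation. The return series and smoothing coefficient are invented for illustration.

```python
# Toy volatility estimate from quarterly appraisal-based returns.
# The data and the smoothing coefficient alpha are assumptions.
import numpy as np

quarterly = np.array([0.012, 0.015, 0.010, -0.004,
                      0.008, 0.011, 0.006, -0.002])
alpha = 0.6  # assumed degree of appraisal smoothing in the valuations

# First-order de-smoothing: recover a less autocorrelated "market" return.
unsmoothed = (quarterly[1:] - alpha * quarterly[:-1]) / (1 - alpha)

vol_quarterly = unsmoothed.std(ddof=1)
vol_annual = vol_quarterly * np.sqrt(4)   # scale by sqrt of periods per year
print(f"annualised volatility estimate: {vol_annual:.2%}")
```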

    Linear Encodings of Bounded LTL Model Checking

    We consider the problem of bounded model checking (BMC) for linear temporal logic (LTL). We present several efficient encodings that have size linear in the bound. Furthermore, we show how the encodings can be extended to LTL with past operators (PLTL). The generalised encoding is still of linear size, but cannot detect minimal-length counterexamples. By using the virtual unrolling technique, minimal-length counterexamples can be captured; however, the size of the encoding is quadratic in the specification. We also extend virtual unrolling to Büchi automata, enabling them to accept minimal-length counterexamples. Our BMC encodings can be made incremental in order to benefit from incremental SAT technology. With fairly small modifications, the incremental encoding can be further enhanced with a termination check, allowing us to prove properties with BMC. Experiments clearly show that our new encodings improve the performance of BMC considerably, particularly in the case of the incremental encoding, and that they are very competitive for finding bugs. An analysis of the liveness-to-safety transformation reveals many similarities to the BMC encodings in this paper. Using the liveness-to-safety translation with BDD-based invariant checking results in an efficient method to find shortest counterexamples that complements the BMC-based approach. Comment: Final version for the Logical Methods in Computer Science CAV 2005 special issue.
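    The core of any BMC encoding is unrolling the transition relation up to the bound and handing the result to a SAT/SMT solver. The sketch below illustrates that step for a toy safety check using the z3 solver's Python bindings; the system (a 3-bit counter), the bound, and the property are assumptions for illustration and do not reproduce the paper's LTL/PLTL encodings.

```python
# Toy bounded model checking: is a "bad" state reachable within K steps?
# Requires the z3-solver package. System and property are illustrative.
from z3 import BitVec, Solver, Or, sat

K = 10  # unrolling bound

s = Solver()
x = [BitVec(f"x_{i}", 3) for i in range(K + 1)]  # one state copy per step

s.add(x[0] == 0)                       # initial state: counter = 0
for i in range(K):
    s.add(x[i + 1] == x[i] + 1)        # transition relation, unrolled K times
s.add(Or([xi == 7 for xi in x]))       # bad state hit at some step <= K

print("counterexample found" if s.check() == sat
      else f"property holds up to bound {K}")
```

    Full LTL encodings additionally constrain a loop back-edge so that lasso-shaped counterexamples for liveness properties fit within the bound; the paper's contribution is keeping such encodings linear in the bound.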

    An efficient methodology to estimate probabilistic seismic damage curves

    Incremental dynamic analysis (IDA) is a powerful methodology that can be easily extended to calculate probabilistic seismic damage curves. These curves are key inputs for assessing the seismic risk of structures. Although this methodology requires significant computational effort, it should be the reference for correctly estimating the seismic risk of structures. Nevertheless, it would be of high practical interest to have a simpler methodology, based for instance on pushover analysis (PA), that obtains results similar to those based on IDA. In this article, PA is used to obtain probabilistic seismic damage curves from the stiffness degradation and the energy of the nonlinear part of the capacity curve. A fully probabilistic methodology is tackled by means of Monte Carlo simulations in order to establish that the results based on the simplified proposed approach are compatible with those obtained with IDA. Comparisons between the results of both approaches are included for a low- to mid-rise reinforced concrete building. The proposed methodology significantly reduces the computational effort of calculating probabilistic seismic damage curves.
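    As a rough illustration of the Monte Carlo ingredient, the sketch below estimates damage-state exceedance probabilities from lognormal fragility parameters, a standard formulation in probabilistic seismic damage assessment. The medians and dispersions are invented, and this is not the paper's capacity-curve-based derivation.

```python
# Monte Carlo damage-state exceedance under assumed lognormal fragilities.
# All parameter values are hypothetical.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)
im = 0.3  # hypothetical intensity measure, e.g. PGA in g

damage_states = {            # name: (median capacity theta, dispersion beta)
    "slight":   (0.10, 0.40),
    "moderate": (0.25, 0.45),
    "severe":   (0.50, 0.50),
}

for name, (theta, beta) in damage_states.items():
    # Sample capacities; the damage state is reached when capacity <= demand.
    capacity = rng.lognormal(mean=np.log(theta), sigma=beta, size=100_000)
    p_mc = (capacity <= im).mean()
    p_exact = lognorm.cdf(im, s=beta, scale=theta)  # closed-form cross-check
    print(f"P(>= {name} | IM={im}): {p_mc:.3f} (exact {p_exact:.3f})")
```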

    Shared Arrangements: practical inter-query sharing for streaming dataflows

    Current systems for data-parallel, incremental processing and view maintenance over high-rate streams isolate the execution of independent queries. This creates unwanted redundancy and overhead in the presence of concurrent, incrementally maintained queries: each query must independently maintain the same indexed state over the same input streams, and new queries must build this state from scratch before they can emit their first results. This paper introduces shared arrangements: indexed views of maintained state that allow concurrent queries to reuse the same in-memory state without compromising data-parallel performance and scaling. We implement shared arrangements in a modern stream processor and show order-of-magnitude improvements in query response time and resource consumption for interactive queries against high-throughput streams, while also significantly improving performance in other domains including business analytics, graph processing, and program analysis.
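    The paper's implementation lives inside a Rust stream processor; the toy sketch below only illustrates the central idea, one incrementally maintained index that several concurrent queries read instead of each building and maintaining its own copy. All names and structures here are invented.

```python
# Conceptual sketch of a shared arrangement: one maintained index, many readers.
from collections import defaultdict

class SharedArrangement:
    """An indexed view of the input stream, maintained once, read by many."""
    def __init__(self):
        self.index = defaultdict(list)   # key -> values seen so far

    def apply(self, key, value):
        self.index[key].append(value)    # the single incremental update path

arrangement = SharedArrangement()

# Two independent "queries" reuse the same state rather than re-indexing
# the stream from scratch when they are installed.
def query_count(key):
    return len(arrangement.index[key])

def query_latest(key):
    vals = arrangement.index[key]
    return vals[-1] if vals else None

for key, value in [("a", 1), ("b", 2), ("a", 3)]:   # toy input stream
    arrangement.apply(key, value)

print(query_count("a"), query_latest("a"))  # -> 2 3
```

    A new query attaching to the arrangement can answer immediately from the existing index, which is the source of the response-time improvements the abstract reports.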