
    Comparing stochastic design decision belief models: pointwise versus interval probabilities.

    Decision support systems can either directly support a product designer or support an agent operating within a multi-agent system (MAS). Stochastically based decision support systems require an underlying belief model that encodes domain knowledge. The underlying supporting belief model has traditionally been a probability distribution function (PDF), which uses pointwise probabilities for all possible outcomes. This can present a challenge during the knowledge elicitation process. To overcome this, it is proposed to test the performance of a credal set belief model. Credal sets (sometimes also referred to as p-boxes) use interval probabilities rather than pointwise probabilities and are therefore easier to elicit from domain experts. The PDF and credal set belief models are compared using a design-domain MAS which is able to learn, and thereby refine, the belief model based on its experience. The outcome of the experiment illustrates that there is no significant difference between the PDF-based and credal-set-based belief models in the performance of the MAS.
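
    To make the contrast concrete, here is a minimal Python sketch of the two belief representations: a pointwise PDF that assigns one probability per outcome, and an interval (credal-set style) model that only bounds each probability. The outcome names, numbers, and the coherence check are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: outcome names and probabilities are made up.

OUTCOMES = ["low_cost", "medium_cost", "high_cost"]

# Pointwise belief model: a full probability distribution function (PDF),
# one exact probability per outcome.
pointwise_belief = {"low_cost": 0.2, "medium_cost": 0.5, "high_cost": 0.3}
assert abs(sum(pointwise_belief.values()) - 1.0) < 1e-9

# Interval belief model: only lower/upper probability bounds per outcome,
# which is typically easier to elicit from a domain expert.
interval_belief = {
    "low_cost": (0.1, 0.3),
    "medium_cost": (0.4, 0.6),
    "high_cost": (0.2, 0.4),
}

def intervals_are_coherent(intervals):
    """Necessary consistency check: some distribution must fit inside the
    bounds, so the lower bounds cannot sum above 1 and the upper bounds
    cannot sum below 1."""
    lower = sum(lo for lo, _ in intervals.values())
    upper = sum(hi for _, hi in intervals.values())
    return lower <= 1.0 <= upper

print(intervals_are_coherent(interval_belief))  # True
```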

    Copulas in finance and insurance

    Copulas provide a potentially useful modeling tool to represent the dependence structure among variables and to generate joint distributions by combining given marginal distributions. Simulations play a relevant role in finance and insurance. They are used to replicate efficient frontiers or extremal values, to price options, to estimate joint risks, and so on. Using copulas, it is easy to construct and simulate from multivariate distributions based on almost any choice of marginals and any type of dependence structure. In this paper we outline recent contributions of statistical modeling using copulas in finance and insurance. We review issues related to the notion of copulas, copula families, copula-based dynamic and static dependence structure, copulas and latent factor models, and simulation of copulas. Finally, we outline hot topics in copulas with a special focus on model selection and goodness-of-fit testing.
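
    As a concrete illustration of the simulation recipe described above, the following Python sketch combines a Gaussian copula with lognormal and exponential marginals. The copula family, correlation value, and marginal choices are assumptions made for illustration, not taken from the paper.

```python
# Simulate a joint distribution from a Gaussian copula plus arbitrary marginals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7                                      # assumed dependence parameter
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw from the copula: correlated normals mapped to uniforms via the normal CDF.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = stats.norm.cdf(z)                          # each column ~ Uniform(0, 1), dependent

# 2. Apply the inverse CDFs of the chosen marginals (e.g. a loss size and a lifetime).
x = stats.lognorm(s=0.5).ppf(u[:, 0])
y = stats.expon(scale=2.0).ppf(u[:, 1])

# The simulated pairs (x, y) keep the copula's dependence structure while each
# margin follows its own distribution, e.g. for joint risk estimation.
print(np.corrcoef(x, y)[0, 1])
```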

    Ordinary kriging for on-demand average wind interpolation of in-situ wind sensor data

    We have developed a domain-agnostic ordinary kriging algorithm accessible via a standards-based service-oriented architecture for sensor networks, exploiting the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards. Because we need on-demand interpolation maps, runtime performance is a major priority. Our sensor data come from in-situ wind observation stations in an area approximately 200 km by 125 km, from which we provide on-demand average wind interpolation maps. These spatial estimates can then be compared with the results of other estimation models in order to identify spurious results that sometimes occur in wind estimation. Our processing is based on ordinary kriging with automated variogram model selection (AVMS). This procedure can smooth point-in-time wind measurements to obtain average wind by using a variogram model that reflects the characteristics of the wind phenomenon. Kriging is enabled for wind direction estimation by a simple but effective solution to the problem of estimating periodic variables, based on vector rotation and stochastic simulation. In cases where all wind directions in the region of interest span no more than 180 degrees, we rotate them so they lie between 90 and 270 degrees and apply ordinary kriging with AVMS directly to the meteorological angle. Otherwise, we transform the meteorological angle to Cartesian space, apply ordinary kriging with AVMS, and use simulation to transform the kriging estimates back to the meteorological angle. Tests run on a 50 by 50 grid using standard hardware take about 5 minutes to execute the backward transformation with a sample size of 100,000, which is acceptable for our on-demand processing service requirements.
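
    A rough Python sketch of the direction-handling logic described in the abstract follows. The inverse-distance interpolator is only a stand-in for ordinary kriging with AVMS, the back transform uses a plain arctan2 instead of the paper's stochastic simulation, and the station data are invented, so this outlines only the rotate-or-transform decision.

```python
import numpy as np

def interpolate(xy_obs, values, xy_grid, power=2.0):
    """Stand-in spatial interpolator (inverse distance weighting), NOT kriging."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w @ values) / w.sum(axis=1)

def estimate_wind_direction(xy_obs, met_deg, xy_grid):
    """Meteorological angles in degrees; returns estimated angles on the grid."""
    spread = met_deg.max() - met_deg.min()
    if spread <= 180.0:
        # Rotate so all observed angles sit within [90, 270] degrees and
        # interpolate the meteorological angle directly.
        shift = 180.0 - (met_deg.min() + spread / 2.0)
        est = interpolate(xy_obs, met_deg + shift, xy_grid) - shift
        return est % 360.0
    # Otherwise work in Cartesian components and transform back afterwards.
    theta = np.deg2rad(met_deg)
    u, v = np.sin(theta), np.cos(theta)        # simple component convention
    u_est = interpolate(xy_obs, u, xy_grid)
    v_est = interpolate(xy_obs, v, xy_grid)
    return np.rad2deg(np.arctan2(u_est, v_est)) % 360.0

xy_obs = np.array([[0.0, 0.0], [50.0, 10.0], [120.0, 60.0]])   # station coords (km)
met_deg = np.array([350.0, 20.0, 310.0])                        # observed directions
grid = np.array([[25.0, 5.0], [80.0, 40.0]])
print(estimate_wind_direction(xy_obs, met_deg, grid))
```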

    Dependence of dissipation on the initial distribution over states

    We analyze how the amount of work dissipated by a fixed nonequilibrium process depends on the initial distribution over states. Specifically, we compare the amount of dissipation when the process is used with some specified initial distribution to the minimal amount of dissipation possible for any initial distribution. We show that the difference between those two amounts of dissipation is given by a simple information-theoretic function that depends only on the initial and final state distributions. Crucially, this difference is independent of the details of the process relating those distributions. We then consider how dissipation depends on the initial distribution for a 'computer', i.e., a nonequilibrium process whose dynamics over coarse-grained macrostates implement some desired input-output map. We show that our results still apply when stated in terms of distributions over the computer's coarse-grained macrostates. This can be viewed as a novel thermodynamic cost of computation, reflecting changes in the distribution over inputs rather than the logical dynamics of the computation.
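
    The abstract does not state the formula, but assuming the information-theoretic function takes the mismatch-cost form familiar from this literature (extra dissipation equals the drop in relative entropy between the actual and the dissipation-minimizing distributions), a small Python illustration could look as follows. The transition matrix and all distributions below are made up for illustration.

```python
# Hedged sketch under an assumed mismatch-cost form of the result.
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in nats."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Column-stochastic map over three coarse-grained states (rows: next state).
T = np.array([[0.90, 0.20, 0.10],
              [0.05, 0.70, 0.20],
              [0.05, 0.10, 0.70]])

q0 = np.array([0.5, 0.3, 0.2])   # assumed dissipation-minimizing initial distribution
p0 = np.array([0.2, 0.2, 0.6])   # initial distribution actually supplied
q1, p1 = T @ q0, T @ p0          # corresponding final distributions

# Extra dissipation (in units of kT) relative to the minimum under the assumed
# form; it depends only on the initial and final distributions, not on the
# internal details of the process, and is non-negative by the data processing
# inequality for relative entropy.
print(kl(p0, q0) - kl(p1, q1))
```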