
    Modelling network travel time reliability under stochastic demand

    A technique is proposed for estimating the probability distribution of total network travel time, in the light of normal day-to-day variations in the travel demand matrix over a road traffic network. A solution method is proposed, based on a single run of a standard traffic assignment model, which operates in two stages. In stage one, moments of the total travel time distribution are computed by an analytic method, based on the multivariate moments of the link flow vector. In stage two, a flexible family of density functions is fitted to these moments. It is discussed how the resulting distribution may in practice be used to characterise unreliability. Illustrative numerical tests are reported on a simple network, where the method is seen to provide a means for identifying sensitive or vulnerable links, and for examining the impact on network reliability of changes to link capacities. Computational considerations for large networks, and directions for further research, are discussed.
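    The stage-two step of matching a parametric density to computed moments can be illustrated with a minimal sketch. The two-parameter gamma family, the moment values and the reliability threshold below are illustrative assumptions only; the paper fits a more flexible family to higher-order moments.

```python
import numpy as np
from scipy import stats

# Illustrative stand-in: suppose stage one has produced the first two moments
# of total network travel time T (hypothetical values, in vehicle-hours).
mean_T = 5200.0
var_T = 640_000.0

# Match a gamma(shape k, scale theta): mean = k*theta, variance = k*theta**2.
theta = var_T / mean_T
k = mean_T / theta
fitted = stats.gamma(a=k, scale=theta)

# Unreliability can then be summarised from the fitted distribution,
# e.g. the probability of exceeding a planning threshold, or a high quantile.
threshold = 6500.0
print("P(T > threshold) ~", fitted.sf(threshold))
print("95th percentile of T ~", fitted.ppf(0.95))
```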

    Implementing Loss Distribution Approach for Operational Risk

    To quantify the operational risk capital charge under the current regulatory framework for banking supervision, referred to as Basel II, many banks adopt the Loss Distribution Approach. Many modeling issues must be resolved before the approach can be used in practice. In this paper we review the quantitative methods suggested in the literature for implementing the approach. In particular, we discuss the use of Bayesian inference methods that allow expert judgement and parameter uncertainty to be taken into account, the modeling of dependence, and the inclusion of insurance.
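    At its core, the Loss Distribution Approach builds the annual loss as a random sum of individual losses and reads the capital charge off a high quantile. A minimal Monte Carlo sketch follows; the Poisson frequency, lognormal severity and parameter values are illustrative assumptions and do not reproduce the estimation issues (Bayesian inference, dependence, insurance) discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed frequency and severity models (illustrative only).
lam = 25.0             # mean number of losses per year
mu, sigma = 10.0, 1.8  # lognormal severity parameters

# Simulate annual losses as compound sums: sum of N_i severities per year.
n_years = 100_000
counts = rng.poisson(lam, size=n_years)
annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

# The capital charge is usually read off as a high quantile (VaR) of the
# annual loss distribution, e.g. 99.9% under Basel II.
var_999 = np.quantile(annual_loss, 0.999)
print(f"Simulated 99.9% annual loss VaR: {var_999:,.0f}")
```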

    A COMPARISON OF SUBJECTIVE AND HISTORICAL CROP YIELD PROBABILITY DISTRIBUTIONS

    Forecast distributions based on historical yields and subjective expectations for 1987 expected crop yields were compared for 90 Western Kentucky grain farms. Different subjective probability elicitation techniques were also compared. In many individual cases, results indicate large differences between subjective and empirical moments. Overall, farmer expectations for 1987 corn yields were below those predicted from their past yields, while soybean expectations were above the historical forecast. Geographical location plays a larger role than crop in comparisons of relative variability of yield. Neither elicitation technique nor manager characteristics have significant effects on the comparisons of the forecasts.
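    The kind of moment comparison described (moments implied by an elicited subjective distribution versus moments of past yields) can be sketched as follows. The triangular form for the elicited distribution and all numbers are hypothetical; the study's elicitation techniques and farm data are not reproduced here.

```python
import numpy as np

# Hypothetical historical yields (bu/acre) and an elicited (low, mode, high) triple.
hist_yields = np.array([118, 95, 132, 110, 87, 140, 125, 101])
low, mode, high = 90.0, 120.0, 150.0

# Moments of a triangular distribution with these elicited values.
subj_mean = (low + mode + high) / 3.0
subj_var = (low**2 + mode**2 + high**2 - low*mode - low*high - mode*high) / 18.0

print("historical mean/sd :", hist_yields.mean(), hist_yields.std(ddof=1))
print("subjective mean/sd :", subj_mean, subj_var**0.5)
```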

    Local generalised method of moments: an application to point process-based rainfall models

    Long series of simulated rainfall are required at point locations for a range of applications, including hydrological studies. Clustered point process-based rainfall models have been used for generating such simulations for many decades. These models suffer from a major limitation, however: they are stationary. Although seasonality can be allowed for by fitting separate models for each calendar month or season, the models are unsuitable in their basic form for climate impact studies. In this paper, we develop new methodology to address this limitation. We extend the current fitting approach by allowing the discrete covariate, calendar month, to be replaced or supplemented with continuous covariates that are more directly related to the incidence and nature of rainfall. The covariate-dependent model parameters are estimated for each time interval using a kernel-based nonparametric approach within a generalised method-of-moments framework. An empirical study demonstrates the new methodology using a time series of 5-min rainfall data. The study considers both local mean and local linear approaches. While asymptotic results are included, the focus is on developing usable methodology for a complex model that can only be solved numerically. Issues including the choice of weighting matrix, estimation of parameter uncertainty, and bandwidth and model selection are considered from this perspective.
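    The kernel-weighted (local) GMM machinery can be sketched with a toy model: kernel weights in a continuous covariate localise the empirical moments, which are then matched to model-implied moments. The toy mean/variance "model", the Gaussian kernel, the identity weighting matrix and the synthetic data below are illustrative assumptions standing in for the clustered point-process rainfall models used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2)

def local_gmm_fit(y, x, x_target, bandwidth):
    # Kernel weights localise the sample around the target covariate value.
    w = gaussian_kernel((x - x_target) / bandwidth)
    w = w / w.sum()

    # Kernel-weighted empirical moments (local-mean version).
    m1 = np.sum(w * y)
    m2 = np.sum(w * y**2)

    def objective(theta):
        mu, log_s2 = theta
        s2 = np.exp(log_s2)
        # Differences between empirical and model-implied moments;
        # identity weighting matrix used here for simplicity.
        g = np.array([m1 - mu, m2 - (s2 + mu**2)])
        return g @ g

    start = np.array([m1, np.log(max(m2 - m1**2, 1e-8))])
    res = minimize(objective, x0=start)
    return res.x[0], np.exp(res.x[1])   # local mean and variance estimates

# Hypothetical usage: rainfall-like depths y with a continuous covariate x.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 500)
y = rng.gamma(shape=2.0, scale=0.5 + x)   # synthetic data for illustration
print(local_gmm_fit(y, x, x_target=0.8, bandwidth=0.1))
```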

    Extreme Value Index Estimators and Smoothing Alternatives: A Critical Review

    Extreme-value theory and the corresponding analysis are extensively applied in many different fields. The central point of this theory is the estimation of a parameter γ, known as the extreme-value index. In this paper we review several extreme-value index estimators, ranging from the oldest ones to the most recent developments. Moreover, some smoothing and robustifying procedures for these estimators are presented.
    Keywords: Extreme value index; semi-parametric estimation; smoothing modification.
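    As a concrete instance of the estimators reviewed, the classical Hill estimator (valid for γ > 0) is sketched below; the choice of k and the Pareto test data are illustrative.

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimate of the extreme-value index from the k largest observations."""
    x = np.sort(np.asarray(data))[::-1]          # descending order statistics
    if not (0 < k < len(x)) or x[k] <= 0:
        raise ValueError("need 0 < k < n and a positive threshold order statistic")
    # Mean of log-excesses over the (k+1)-th largest observation.
    return np.mean(np.log(x[:k]) - np.log(x[k]))

# Hypothetical usage on heavy-tailed data: Pareto with alpha = 2, so gamma = 0.5.
rng = np.random.default_rng(2)
sample = rng.pareto(2.0, size=5000) + 1.0
print(hill_estimator(sample, k=200))
```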

    Nonparametric estimation of the dynamic range of music signals

    The dynamic range is an important parameter which measures the spread of sound power, and for music signals it is a measure of recording quality. There are various descriptive measures of sound power, none of which has strong statistical foundations. We start from a nonparametric model for sound waves in which an additive stochastic term captures transient energy. This component is recovered by a simple rate-optimal kernel estimator that requires a single data-driven tuning parameter. The distribution of its variance is approximated by a consistent random subsampling method that is able to cope with the massive size of the typical dataset. Based on this, we propose a statistic and an estimation method that represent the dynamic range concept consistently. The behavior of the statistic is assessed in a large numerical experiment where we simulate dynamic compression on a selection of real music signals. Application of the method to real data also shows how the proposed method can predict subjective experts' opinions about the hi-fi quality of a recording.
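    A rough sketch of the ingredients described: a simple kernel (moving-average) smoother separates a smooth component from the wave, the residual plays the role of the transient term, and the spread of its local power is summarised in decibels. The smoother, window lengths and the final summary are illustrative choices and are not the statistic or the subsampling scheme defined in the paper.

```python
import numpy as np

def moving_average(x, width):
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def dynamic_spread_db(signal, smooth_width=64, block=4096):
    smooth = moving_average(signal, smooth_width)   # crude kernel smoother
    transient = signal - smooth                     # residual "transient" term
    n_blocks = len(transient) // block
    power = np.array([np.var(transient[i*block:(i+1)*block]) for i in range(n_blocks)])
    power = power[power > 0]
    # Spread of local transient power, in dB, between a high and a low quantile.
    return 10.0 * np.log10(np.quantile(power, 0.95) / np.quantile(power, 0.05))

# Hypothetical usage with a synthetic, amplitude-modulated noise signal.
rng = np.random.default_rng(3)
t = np.arange(10 * 44100) / 44100.0
x = (0.2 + 0.8 * np.sin(2 * np.pi * 0.1 * t) ** 2) * rng.standard_normal(len(t))
print(f"{dynamic_spread_db(x):.1f} dB")
```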

    A proposed framework for characterising uncertainty and variability in rock mechanics and rock engineering

    This thesis develops a novel understanding of the fundamental issues in characterising and propagating unpredictability in rock engineering design. This unpredictability stems from the inherent complexity and heterogeneity of fractured rock masses as engineering media. It establishes the importance of: a) recognising that unpredictability results from epistemic uncertainty (i.e. resulting from a lack of knowledge) and aleatory variability (i.e. due to inherent randomness); and b) the means by which uncertainty and variability associated with the parameters that characterise fractured rock masses are propagated through the modelling and design process. Through a critical review of the literature, this thesis shows that in geotechnical engineering – rock mechanics and rock engineering in particular – there is a lack of recognition of the existence of epistemic uncertainty and aleatory variability, and hence inappropriate design methods are often used. To overcome this, a novel taxonomy is developed and presented that facilitates characterisation of epistemic uncertainty and aleatory variability in the context of rock mechanics and rock engineering. Using this taxonomy, a new framework is developed that gives a protocol for correctly propagating uncertainty and variability through engineering calculations. The effectiveness of the taxonomy and the framework is demonstrated through their application to simple challenge problems commonly found in rock engineering. This new taxonomy and framework will provide engineers engaged in preparing rock engineering designs with an objective means of characterising unpredictability in parameters commonly used to define properties of fractured rock masses. These new tools will also provide engineers with a means of clearly understanding the true nature of unpredictability inherent in rock mechanics and rock engineering, and thus direct the selection of an appropriate unpredictability model to propagate unpredictability faithfully through engineering calculations. Thus, the taxonomy and framework developed in this thesis provide practical tools to improve the safety of rock engineering designs through an improved understanding of unpredictability concepts.
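    One common way to keep the two sources separate when propagating them is to treat aleatory variability probabilistically while carrying epistemic uncertainty as an interval, yielding bounds rather than a single distribution. The toy factor-of-safety model, the distributions and all numbers below are hypothetical and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def factor_of_safety(cohesion_kpa, friction_deg, driving_stress_kpa=120.0):
    # Toy resisting/driving ratio; purely illustrative, not a design equation.
    resisting = cohesion_kpa + driving_stress_kpa * np.tan(np.radians(friction_deg))
    return resisting / driving_stress_kpa

# Epistemic: cohesion known only as an interval (lack of knowledge) -> outer loop.
for cohesion in (40.0, 80.0):                       # interval endpoints, kPa
    # Aleatory: friction angle varies inherently -> sampled in an inner loop.
    friction = rng.normal(35.0, 3.0, size=50_000)   # degrees
    p_fail = np.mean(factor_of_safety(cohesion, friction) < 1.0)
    print(f"cohesion = {cohesion:>5.1f} kPa -> P(FoS < 1) ~ {p_fail:.4f}")
```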

    Multiscale Bayesian State Space Model for Granger Causality Analysis of Brain Signal

    Modelling time-varying and frequency-specific relationships between two brain signals is becoming an essential methodological tool to answer theoretical questions in experimental neuroscience. In this article, we propose to estimate a frequency Granger causality statistic that may vary in time in order to evaluate the functional connections between two brain regions during a task. We use for that purpose an adaptive Kalman filter type of estimator of a linear Gaussian vector autoregressive model with coefficients evolving over time. The estimation procedure is achieved through variational Bayesian approximation and is extended for multiple trials. This Bayesian State Space (BSS) model provides a dynamical Granger-causality statistic that is quite natural. We propose to extend the BSS model to include the à trous Haar decomposition. This wavelet-based forecasting method is based on a multiscale resolution decomposition of the signal using the redundant à trous wavelet transform and allows us to capture short- and long-range dependencies between signals. Equally importantly, it allows us to derive the desired dynamical and frequency-specific Granger-causality statistic. The application of these models to intracranial local field potential data recorded during a psychological experimental task shows the complex frequency-based cross-talk between the amygdala and the medial orbito-frontal cortex.
    Keywords: à trous Haar wavelets; multiple trials; neuroscience data; nonstationarity; time-frequency; variational methods.
    The published version of this article is Cekic, S., Grandjean, D., Renaud, O. (2018). Multiscale Bayesian state-space model for Granger causality analysis of brain signal. Journal of Applied Statistics. https://doi.org/10.1080/02664763.2018.145581
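    The redundant à trous Haar decomposition the extended model relies on is simple enough to sketch directly: each smooth level averages the previous level with a copy shifted into the past by 2^j samples (the causal, forecasting-friendly variant), the wavelet detail is the difference, and the signal is recovered exactly as the final smooth plus all details. Boundary handling and the number of levels below are illustrative choices.

```python
import numpy as np

def a_trous_haar(x, n_levels):
    x = np.asarray(x, dtype=float)
    smooth = x.copy()
    details = []
    for j in range(n_levels):
        shift = 2 ** j
        # Shift into the past, replicating the first value at the boundary.
        shifted = np.concatenate([np.repeat(smooth[0], shift), smooth[:-shift]])
        next_smooth = 0.5 * (smooth + shifted)   # Haar smoothing at scale 2**(j+1)
        details.append(smooth - next_smooth)     # detail (wavelet) coefficients
        smooth = next_smooth
    return details, smooth

# Hypothetical usage: decompose a short series and check exact reconstruction.
rng = np.random.default_rng(5)
x = np.cumsum(rng.standard_normal(256))
details, smooth = a_trous_haar(x, n_levels=4)
print(np.allclose(x, smooth + np.sum(details, axis=0)))   # True
```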

    Analysis of Beta Distribution for Subjective Uncertainty Analysis in Cost Models

    Subjective uncertainty exists within the realm of cost estimation. The typical methodology for subjective uncertainty involves eliciting a high, low, and most likely value from a subject matter expert, defining a triangular distribution to model said uncertainty. This manuscript explores ways to leverage research on elicitation geared towards defining a triangular distribution and provides a simple conversion to a beta distribution usable by cost analysts with varying degrees of mathematical knowledge. Furthermore, this manuscript attempts to demonstrate the benefits of using a beta distribution through its application as a conjugate prior for Bayesian updating in cost models.
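    One common way to map an elicited (low, most likely, high) triple onto a beta distribution is the PERT approximation sketched below; the manuscript's specific conversion may differ, and the cost figures are hypothetical.

```python
from scipy import stats

def pert_beta(low, mode, high):
    # Standard PERT shape parameters from the elicited triple.
    alpha = 1.0 + 4.0 * (mode - low) / (high - low)
    beta = 1.0 + 4.0 * (high - mode) / (high - low)
    # scipy's beta lives on [0, 1]; loc/scale shift it onto [low, high].
    return stats.beta(alpha, beta, loc=low, scale=high - low)

# Hypothetical elicited cost values ($M): low, most likely, high.
dist = pert_beta(8.0, 12.0, 20.0)
print("mean:", dist.mean(), " 80% interval:", dist.interval(0.8))
```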