    Consistency and stability of risk indicators: The case of road infrastructures

    Over the last decade, the World Road Association (PIARC) and several European research projects, among them Ecoroads, have encouraged a promising reflection on risk analysis methods, acceptance criteria and safety practices applied to the road system. The goal of this research activity is to define best practice for safety analysis and management on the Trans-European Road Network (TERN). Quantitative Risk Analysis (QRA) provides a wealth of information for safety management. Nevertheless, the potential fragility of the method, its stochastic uncertainties (in both parameters and models), and the ethical aspects of the acceptance criteria must be adequately analyzed. This paper addresses all of these aspects, assessing the reliability of QRA in the presence of modeling and statistical errors, and the statistical consistency of the risk indicators it produces.
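
    To make the notion of statistical error in a risk indicator concrete, the sketch below (not taken from the paper; the accident rate, fatality probability and simulation length are invented) estimates a simple QRA-style indicator, expected fatalities per year on a hypothetical road section, by Monte Carlo simulation and reports the sampling uncertainty of the estimate.

    # Minimal Monte Carlo sketch of a QRA-style risk indicator (illustrative only;
    # the accident rate and consequence model below are invented, not from the paper).
    import numpy as np

    rng = np.random.default_rng(0)

    ACCIDENT_RATE = 0.8      # assumed mean accidents per year on the section (Poisson)
    P_FATAL = 0.05           # assumed probability that an accident is fatal
    N_YEARS = 10_000         # number of simulated years

    def simulate_annual_fatalities(n_years):
        """Simulate fatalities per year: Poisson accident counts, binomial fatal outcomes."""
        accidents = rng.poisson(ACCIDENT_RATE, size=n_years)
        return rng.binomial(accidents, P_FATAL)

    fatalities = simulate_annual_fatalities(N_YEARS)

    # Risk indicator: expected fatalities per year, with a normal-approximation CI
    # showing the purely statistical component of the estimation error.
    mean = fatalities.mean()
    stderr = fatalities.std(ddof=1) / np.sqrt(N_YEARS)
    print(f"Expected fatalities/year: {mean:.4f} +/- {1.96 * stderr:.4f} (95% CI)")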

    Kernel-based Information Criterion

    This paper introduces the Kernel-based Information Criterion (KIC) for model selection in regression analysis. The novel kernel-based complexity measure in KIC efficiently captures the interdependency between the parameters of the model using a variable-wise variance and yields the selection of better, more robust regressors. Experimental results show superior performance on both simulated and real data sets compared to Leave-One-Out Cross-Validation (LOOCV), kernel-based Information Complexity (ICOMP), and maximization of the log marginal likelihood in Gaussian Process Regression (GPR).
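
    The KIC complexity measure itself is defined in the paper and not reproduced in this abstract, so the sketch below only illustrates the LOOCV baseline it is compared against: selecting an RBF kernel width for kernel ridge regression by leave-one-out cross-validation on toy data (all names and values here are illustrative assumptions).

    # Sketch of the LOOCV baseline mentioned in the abstract: choosing a kernel
    # bandwidth for kernel ridge regression by leave-one-out cross-validation.
    # (Illustrative only; KIC itself is defined in the paper, not reproduced here.)
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(60, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)   # toy regression data

    candidate_gammas = [0.01, 0.1, 1.0, 10.0]              # RBF kernel widths to compare
    scores = {}
    for gamma in candidate_gammas:
        model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=gamma)
        # Negative MSE averaged over all leave-one-out splits.
        cv_score = cross_val_score(model, X, y, cv=LeaveOneOut(),
                                   scoring="neg_mean_squared_error").mean()
        scores[gamma] = cv_score

    best = max(scores, key=scores.get)
    print("LOOCV scores:", scores, "-> selected gamma:", best)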

    A Note on High-Probability versus In-Expectation Guarantees of Generalization Bounds in Machine Learning

    Statistical machine learning theory often seeks to give generalization guarantees for machine learning models. Such models are naturally subject to fluctuation, since they are built from a data sample. If we are unlucky and gather a sample that is not representative of the underlying distribution, we cannot expect to construct a reliable machine learning model. Consequently, statements about the performance of machine learning models have to take the sampling process into account. The two common approaches are to derive statements that hold either with high probability or in expectation over the random sampling process. In this short note we show how one type of statement can be transformed into the other. As a technical novelty we address the case of unbounded loss functions, where we use a fairly new assumption called the witness condition.
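
    For intuition, the two directions of such a transformation are standard in the bounded-tail case (a textbook argument, not the note's contribution). Writing $Z \ge 0$ for the generalization gap, Markov's inequality turns an in-expectation bound into a high-probability one, and integrating the tail turns a high-probability bound into an in-expectation one:

        $\Pr[Z \ge \mathbb{E}[Z]/\delta] \le \delta$, so $\mathbb{E}[Z] \le B$ implies $Z \le B/\delta$ with probability at least $1-\delta$;

        if $\Pr[Z > a + b\ln(1/\delta)] \le \delta$ for every $\delta \in (0,1)$, then $\mathbb{E}[Z] = \int_0^\infty \Pr[Z > t]\,dt \le a + \int_a^\infty e^{-(t-a)/b}\,dt = a + b$.

    The unbounded-loss case treated in the note is harder precisely because such tail control is not available for free, which is where the witness condition enters.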

    Efficient cross-validation of the complete two stages in KFD classifier formulation

    This paper presents an efficient evaluation algorithm for cross-validating the two-stage formulation of Kernel Fisher Discriminant (KFD) classifiers. The proposed algorithm has the same complexity as the existing indirect efficient cross-validation methods, but it is more reliable since it is direct and constitutes exact cross-validation for the KFD classifier formulation. Simulations demonstrate that the proposed algorithm is almost as fast as the existing fast indirect evaluation algorithm, and that the two-stage cross-validation selects better models on most of the thirteen benchmark data sets.
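
    As a point of reference for what the efficient algorithm replaces, the sketch below performs naive exact cross-validation of a two-stage kernel Fisher discriminant classifier by retraining both stages from scratch on every fold (stage 1: the KFD projection; stage 2: a simple threshold on the projected scores). It is illustrative only: the RBF kernel, regularization constant and thresholding rule are assumptions, not the paper's formulation.

    # Naive, exact K-fold cross-validation of a two-stage kernel-Fisher-discriminant
    # style classifier: stage 1 learns the KFD projection on the training folds,
    # stage 2 fits a simple threshold on the projected scores.  This is the brute-force
    # baseline that efficient CV schemes avoid, not the paper's algorithm.
    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    def rbf_kernel(A, B, gamma=0.5):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_kfd(X, y, reg=1e-3):
        """Stage 1: kernel Fisher discriminant coefficients alpha on the training set."""
        K = rbf_kernel(X, X)
        n = len(y)
        M, N = [], np.zeros((n, n))
        for c in (0, 1):
            Kc = K[:, y == c]                       # n x n_c block for class c
            nc = Kc.shape[1]
            M.append(Kc.mean(axis=1))               # kernelized class mean
            N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
        return np.linalg.solve(N + reg * np.eye(n), M[1] - M[0])

    def project(alpha, X_train, X):
        return rbf_kernel(X, X_train) @ alpha

    def fit_threshold(scores, y):
        # Stage 2: threshold at the midpoint of the projected class means.
        return 0.5 * (scores[y == 0].mean() + scores[y == 1].mean())

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
    y = np.repeat([0, 1], 40)

    accs = []
    for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
        alpha = fit_kfd(X[tr], y[tr])               # both stages retrained per fold
        s_tr, s_te = project(alpha, X[tr], X[tr]), project(alpha, X[tr], X[te])
        thr = fit_threshold(s_tr, y[tr])
        accs.append(np.mean((s_te > thr).astype(int) == y[te]))
    print("naive exact 5-fold CV accuracy:", np.mean(accs))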