    Loss Distribution Approach for Operational Risk Capital Modelling under Basel II: Combining Different Data Sources for Risk Estimation

    The management of operational risk in the banking industry has undergone significant changes over the last decade due to substantial shifts in the operational risk environment. Globalization, deregulation, the use of complex financial products and changes in information technology have resulted in exposure to new risks very different from market and credit risks. In response, the Basel Committee on Banking Supervision has developed a regulatory framework, referred to as Basel II, that introduced an operational risk category and corresponding capital requirements. Over the past five years, major banks in most parts of the world have received accreditation under the Basel II Advanced Measurement Approach (AMA) by adopting the loss distribution approach (LDA), despite a number of unresolved methodological challenges in its implementation. Different approaches and methods are still hotly debated. In this paper, we review methods proposed in the literature for combining different data sources (internal data, external data and scenario analysis), which is one of the regulatory requirements for AMA.
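
    At its core, LDA simulates the distribution of the annual aggregate loss, a compound sum of a random number of severity draws, and reads the regulatory capital charge off its 99.9% quantile. Below is a minimal sketch of that simulation, assuming an illustrative Poisson frequency and lognormal severity; the parameter values are invented for the example, not taken from the paper.

```python
# Minimal sketch of the LDA aggregate-loss simulation: annual loss is a
# compound sum S = X_1 + ... + X_N with Poisson frequency N and lognormal
# severities X_i. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)

lam = 25.0             # assumed annual loss frequency (Poisson mean)
mu, sigma = 10.0, 2.0  # assumed lognormal severity parameters

n_years = 100_000      # number of simulated years
counts = rng.poisson(lam, size=n_years)
annual_losses = np.array([
    rng.lognormal(mu, sigma, size=n).sum() for n in counts
])

# Basel II-style capital: the 99.9% quantile (one-year value-at-risk)
# of the simulated annual aggregate loss distribution.
var_999 = np.quantile(annual_losses, 0.999)
print(f"Simulated 99.9% annual-loss VaR: {var_999:,.0f}")
```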

    Student Modeling in Intelligent Tutoring Systems

    After decades of development, Intelligent Tutoring Systems (ITSs) have become a common learning environment for learners across domains and academic levels. ITSs are computer systems designed to provide instruction and immediate feedback customized to individual students, without requiring the intervention of human instructors. All ITSs share the same goal: to provide tutorial services that support learning. Since learning is a very complex process, it is not surprising that a range of technologies and methodologies from different fields is employed. Student modeling is a pivotal technique used in ITSs. The model observes student behaviors in the tutor and creates a quantitative representation of the student properties of interest needed to customize instruction, respond effectively, engage students’ interest and promote learning. In this dissertation work, I focus on the following aspects of student modeling.

    Part I: Student Knowledge: Parameter Interpretation. Student modeling is widely used to obtain scientific insights about how people learn. Student models typically produce semantically meaningful parameter estimates, such as how quickly students learn a skill on average. It is therefore fundamental that parameter estimates be interpretable and plausible. My work includes automatically generating data-suggested Dirichlet priors for the Bayesian Knowledge Tracing model in order to obtain more plausible parameter estimates. I also proposed, implemented, and evaluated an approach that generates multiple Dirichlet priors to improve parameter plausibility, accommodating the assumption that there are subsets of skills which students learn similarly.

    Part II: Student Performance: Student Performance Prediction. Accurately predicting student performance is one of the most desired features of an ITS and a common evaluation for student models. The task, however, is very challenging, particularly predicting a student’s response on an individual problem in the tutor. I analyzed the components of two common student models to determine which aspects provide predictive power in classifying student performance, and found that modeling the student’s overall knowledge led to improved predictive accuracy. I also presented an approach which, rather than assuming students are drawn from a single distribution, models multiple distributions of student performance to improve accuracy.

    Part III: Wheel-spinning: Student Future Failure in Mastery Learning. One drawback of the mastery learning framework is the possibility of leaving a student stuck attempting to learn a skill he or she is unable to master. We refer to this phenomenon of students being given practice with no improvement as wheel-spinning. I analyzed student wheel-spinning across different tutoring systems and estimated the scope of the problem. To investigate the negative consequences wheel-spinning may have for students, I examined the relationships between wheel-spinning and two other constructs of interest: efficiency of learning and “gaming the system”. In addition, I designed a generic model of wheel-spinning that uses features easily obtained by most ITSs. The model generalizes well to unseen students, classifying mastery and wheel-spinning problems with high accuracy. When used as a detector, the model can detect wheel-spinning in its early stages with satisfactory precision and recall.
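
    For context, the Bayesian Knowledge Tracing model mentioned in Part I tracks the probability that a student knows a skill and updates it after each observed response. Below is a minimal sketch of the standard BKT update with illustrative parameter values; the dissertation's contribution (data-suggested Dirichlet priors on these parameters) is not shown here.

```python
# Minimal sketch of the standard Bayesian Knowledge Tracing (BKT) update.
# The parameter values are illustrative assumptions, not fitted estimates.
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """Return updated P(student knows the skill) after one observed response."""
    if correct:
        # Posterior probability the skill was known given a correct answer.
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        # Posterior probability the skill was known given an incorrect answer.
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Account for the chance of learning between practice opportunities.
    return posterior + (1 - posterior) * p_learn

# Example: belief in mastery after the response sequence wrong, right, right.
p = 0.3  # P(L0), assumed prior probability of knowing the skill
for obs in [False, True, True]:
    p = bkt_update(p, obs)
    print(f"P(know) = {p:.3f}")
```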

    A role for the developing lexicon in phonetic category acquisition

    Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
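
    A toy illustration (not the paper's model) of the underlying idea: an acoustic token that falls between two overlapping vowel categories is ambiguous on acoustics alone, but a lexical prior from segmented words can resolve it. All category parameters and the prior below are invented for the sketch.

```python
# Toy sketch: word-level information disambiguating two overlapping
# vowel categories along one acoustic dimension (e.g. F1, in Hz).
import math

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Two overlapping categories (hypothetical means and standard deviations).
cats = {"vowel_A": (400.0, 80.0), "vowel_B": (520.0, 80.0)}

token = 460.0  # an ambiguous token halfway between the category means

# Posterior from acoustics alone (uniform prior): nearly 50/50.
lik = {c: normal_pdf(token, m, s) for c, (m, s) in cats.items()}
z = sum(lik.values())
print({c: round(v / z, 3) for c, v in lik.items()})

# Suppose segmentation places the token in a word frame that the developing
# lexicon has mostly stored with vowel_A: a top-down lexical prior.
prior = {"vowel_A": 0.9, "vowel_B": 0.1}
post = {c: prior[c] * lik[c] for c in cats}
z = sum(post.values())
print({c: round(v / z, 3) for c, v in post.items()})
```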

    Assessing the Reliability of Diverse Fault-Tolerant Systems

    Design diversity between redundant channels is a way of improving the dependability of software-based systems, but it does not alleviate the difficulties of dependability assessment.

    Bayesian hierarchical clustering for studying cancer gene expression data with unknown statistics

    Clustering analysis is an important tool in studying gene expression data. The Bayesian hierarchical clustering (BHC) algorithm can automatically infer the number of clusters and uses Bayesian model selection to improve clustering quality. In this paper, we present an extension of the BHC algorithm. Our Gaussian BHC (GBHC) algorithm represents data as a mixture of Gaussian distributions. It uses a normal-gamma distribution as the conjugate prior on the mean and precision of each Gaussian component. We tested GBHC on 11 cancer and 3 synthetic datasets. The results on cancer datasets show that in sample clustering, GBHC on average produces a clustering partition that is more concordant with the ground truth than those obtained from other commonly used algorithms. Furthermore, GBHC frequently infers a number of clusters close to the ground truth. In gene clustering, GBHC also produces a clustering partition that is more biologically plausible than several other state-of-the-art methods. This suggests GBHC as an alternative tool for studying gene expression data. The implementation of GBHC is available at https://sites.google.com/site/gaussianbhc.
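
    The merge decisions in a BHC-style algorithm rest on closed-form marginal likelihoods, which the normal-gamma prior provides for Gaussian data. Below is a simplified sketch of that marginal likelihood for 1-D data; it omits the Dirichlet-process prior term of full BHC, and the hyperparameters are illustrative defaults rather than the paper's settings.

```python
# Sketch: conjugate normal-gamma marginal likelihood for scoring a merge.
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(x, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Log p(x) for 1-D data under a Normal likelihood with a normal-gamma
    prior on (mean, precision); the standard conjugate closed form."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    kappa_n = kappa0 + n
    alpha_n = alpha0 + n / 2.0
    beta_n = (beta0 + 0.5 * np.sum((x - xbar) ** 2)
              + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n))
    return (gammaln(alpha_n) - gammaln(alpha0)
            + alpha0 * np.log(beta0) - alpha_n * np.log(beta_n)
            + 0.5 * (np.log(kappa0) - np.log(kappa_n))
            - 0.5 * n * np.log(2.0 * np.pi))

# A merge looks attractive when the pooled data are explained better by one
# cluster than by keeping the two clusters separate.
a = np.random.default_rng(1).normal(0, 1, 50)
b = np.random.default_rng(2).normal(5, 1, 50)
merged = log_marginal_likelihood(np.concatenate([a, b]))
separate = log_marginal_likelihood(a) + log_marginal_likelihood(b)
print(f"merged: {merged:.1f}  separate: {separate:.1f}")
```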

    Bayesian inversion for finite fault earthquake source models I—theory and algorithm

    The estimation of finite fault earthquake source models is an inherently underdetermined problem: there is no unique solution to the inverse problem of determining the rupture history at depth as a function of time and space when our data are limited to observations at the Earth’s surface. Bayesian methods allow us to determine the set of all plausible source model parameters that are consistent with the observations, our a priori assumptions about the physics of the earthquake source and wave propagation, and models for the observation errors and the errors due to the limitations in our forward model. Because our inversion approach does not require inverting any matrices other than covariance matrices, we can restrict our ensemble of solutions to only those models that are physically defensible while avoiding the need to restrict our class of models based on considerations of numerical invertibility. We only use prior information that is consistent with the physics of the problem rather than some artifice (such as smoothing) needed to produce a unique optimal model estimate. Bayesian inference can also be used to estimate model-dependent and internally consistent effective errors due to shortcomings in the forward model or data interpretation, such as poor Green’s functions or extraneous signals recorded by our instruments. Until recently, Bayesian techniques have been of limited utility for earthquake source inversions because they are computationally intractable for problems with as many free parameters as typically used in kinematic finite fault models. Our algorithm, called cascading adaptive transitional Metropolis in parallel (CATMIP), allows sampling of high-dimensional problems in a parallel computing framework. CATMIP combines the Metropolis algorithm with elements of simulated annealing and genetic algorithms to dynamically optimize the algorithm’s efficiency as it runs. The algorithm is a generic Bayesian Markov Chain Monte Carlo sampler; it works independently of the model design, a priori constraints and data under consideration, and so can be used for a wide variety of scientific problems. We compare CATMIP’s efficiency relative to several existing sampling algorithms and then present synthetic performance tests of finite fault earthquake rupture models computed using CATMIP.
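
    Below is a highly simplified, serial sketch of the transitional tempering idea behind samplers like CATMIP: temper the likelihood with an exponent beta raised from 0 to 1, reweighting and resampling the ensemble at each stage and rejuvenating it with short Metropolis chains. The toy 1-D target, fixed beta schedule and step-size rule are assumptions for the sketch; the real algorithm chooses the schedule adaptively and runs its chains in parallel.

```python
# Toy transitional Metropolis sampler: sample p(theta|d) proportional to
# prior(theta) * likelihood(theta)**beta, with beta raised in stages.
import numpy as np

rng = np.random.default_rng(0)

def log_like(theta):
    # Stand-in 1-D target: Gaussian likelihood centered at 3 (toy problem).
    return -0.5 * ((theta - 3.0) / 0.5) ** 2

# Stage 0: draw the ensemble from the prior (beta = 0), here N(0, 5).
n = 2000
samples = rng.normal(0.0, 5.0, size=n)

for beta_prev, beta in zip([0.0, 0.2, 0.5], [0.2, 0.5, 1.0]):
    # Importance-resample the ensemble toward the new tempered target.
    w = np.exp((beta - beta_prev) * log_like(samples))
    w /= w.sum()
    samples = samples[rng.choice(n, size=n, p=w)]
    # Rejuvenate each sample with a few Metropolis steps at this beta,
    # with proposals scaled to the current ensemble spread.
    step = 2.38 * samples.std()
    for _ in range(10):
        prop = samples + rng.normal(0.0, step, size=n)
        log_a = beta * (log_like(prop) - log_like(samples)) \
                + 0.5 * (samples**2 - prop**2) / 25.0  # prior ratio, N(0, 5)
        accept = np.log(rng.uniform(size=n)) < log_a
        samples = np.where(accept, prop, samples)

print(f"posterior mean ~ {samples.mean():.2f} (likelihood centered at 3.0)")
```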