
    Probability Semantics for Aristotelian Syllogisms

    We present a coherence-based probability semantics for (categorical) Aristotelian syllogisms. To frame Aristotelian syllogisms as probabilistic inferences, we interpret the basic syllogistic sentence types A, E, I, O by suitable precise and imprecise conditional probability assessments. We then define the validity of probabilistic inferences and probabilistic notions of the existential import that is required for the validity of the syllogisms. Based on a generalization of de Finetti's fundamental theorem to conditional probability, we investigate the coherent probability propagation rules of the argument forms of the syllogistic Figures I, II, and III. These results allow us to show, for all three Figures, that each traditionally valid syllogism is also valid in our coherence-based probability semantics. Moreover, we interpret the basic syllogistic sentence types by suitable defaults and negated defaults, thereby building a bridge from our probability semantics of Aristotelian syllogisms to nonmonotonic reasoning. Finally, we show how the proposed semantics can be used to analyze syllogisms involving generalized quantifiers.
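    As a small sanity check of this kind of semantics (a sketch, not the paper's construction), one can verify on randomly generated finite probability models that the syllogism Barbara is probabilistically valid in the certainty case: whenever P(M|S) = 1 and P(P|M) = 1 and the existential import P(S) > 0 holds, coherence forces P(P|S) = 1.

    ```python
    import itertools
    import random

    # Atoms: the eight truth assignments to the terms (S, M, P).
    ATOMS = list(itertools.product([0, 1], repeat=3))
    S = lambda a: a[0] == 1
    M = lambda a: a[1] == 1
    P = lambda a: a[2] == 1

    def cond(dist, event, given):
        """Conditional probability P(event | given), or None if P(given) = 0."""
        den = sum(p for a, p in dist.items() if given(a))
        if den == 0:
            return None
        return sum(p for a, p in dist.items() if event(a) and given(a)) / den

    def random_premise_model(rng):
        """A random distribution over ATOMS in which both Barbara premises are
        certain: atoms violating S -> M or M -> P get zero mass.  The atom
        (1, 1, 1) keeps positive mass, so P(S) > 0 (existential import)."""
        w = {a: rng.random() for a in ATOMS}
        for a in ATOMS:
            if (S(a) and not M(a)) or (M(a) and not P(a)):
                w[a] = 0.0
        total = sum(w.values())
        return {a: x / total for a, x in w.items()}

    rng = random.Random(0)
    for _ in range(1000):
        d = random_premise_model(rng)
        assert cond(d, M, S) == 1.0 and cond(d, P, M) == 1.0  # premises certain
        assert cond(d, P, S) == 1.0                           # conclusion forced
    ```

    This only exercises the precise, certainty-one corner of the semantics; the interval propagation rules for imprecise assessments in the paper are strictly more general.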

    Thread Reconstruction in Conversational Data using Neural Coherence Models

    Discussion forums are an important source of information. They are often used to answer specific questions a user might have and to discover more about a topic of interest. Discussions in these forums may evolve in intricate ways, making it difficult for users to follow the flow of ideas. We propose a novel approach for automatically identifying the underlying thread structure of a forum discussion. Our approach is based on a neural model that computes coherence scores of possible reconstructions and then selects the highest-scoring, i.e., the most coherent, one. Preliminary experiments demonstrate promising results, outperforming a number of strong baseline methods.
    Comment: Neu-IR: Workshop on Neural Information Retrieval 201
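    The selection step can be sketched as exhaustive search over candidate thread structures. The coherence scorer below is a deliberately crude stand-in (word overlap between a post and its assigned parent), not the paper's neural model, and the example posts are invented:

    ```python
    from itertools import product

    def overlap_score(parent, child):
        """Crude coherence proxy: Jaccard word overlap of a post and its parent."""
        a, b = set(parent.lower().split()), set(child.lower().split())
        return len(a & b) / max(1, len(a | b))

    def thread_score(posts, parents):
        """Total coherence of a reconstruction: sum over posts of the score
        with their assigned parent (the root post has parent None)."""
        return sum(overlap_score(posts[p], posts[i])
                   for i, p in enumerate(parents) if p is not None)

    def reconstruct(posts):
        """Enumerate every thread structure in which post i replies to some
        earlier post, and return the highest-scoring (most coherent) one."""
        candidates = product(*(range(i) for i in range(1, len(posts))))
        best = max(candidates, key=lambda c: thread_score(posts, (None,) + c))
        return (None,) + best

    posts = [
        "how do I install the driver on linux",
        "which linux distribution are you using",
        "the driver fails to compile on ubuntu kernel",
        "try installing the headers for your ubuntu kernel",
    ]
    structure = reconstruct(posts)  # parent index per post; root has None
    ```

    Exhaustive enumeration grows factorially with thread length; a real system would score candidates with a trained model and search or decode greedily instead.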

    The Tail that Wags the Dog: Integrating Credit Risk in Asset Portfolios

    Tails are of paramount importance in shaping the risk profile of portfolios with credit-risk-sensitive securities. In this context, risk management tools require simulations that accurately capture the tails, and optimization models that limit tail effects. Ignoring the tails in the simulation, or using inadequate optimization metrics, can have significant effects and destroy portfolio efficiency. The resulting portfolio risk profile can be grossly misrepresented when long-run performance is optimized without consideration of short-term tail effects. This paper illustrates the pitfalls and suggests models for avoiding them.
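    As a minimal illustration of why mean-based metrics can mask tail effects (the distributions and parameters below are invented for the example, not taken from the paper), compare two simulated loss samples with the same mean but different tails using value-at-risk and expected shortfall:

    ```python
    import random

    def var_cvar(losses, alpha=0.95):
        """Empirical value-at-risk and conditional VaR (expected shortfall)."""
        xs = sorted(losses)
        k = int(alpha * len(xs))
        return xs[k], sum(xs[k:]) / len(xs[k:])

    rng = random.Random(42)
    # Two illustrative loss samples with (roughly) the same mean loss of 1.0:
    # "thin" has modest Gaussian losses; "fat" is usually slightly cheaper but
    # suffers a severe loss of 8.0 with probability 1%.
    thin = [rng.gauss(1.0, 0.3) for _ in range(100_000)]
    fat = [8.0 if rng.random() < 0.01 else rng.gauss(0.93, 0.1)
           for _ in range(100_000)]
    ```

    The fat-tailed sample looks safer than the thin one by mean and even by 95% VaR, yet its expected shortfall is far worse, which is exactly why tail-aware metrics belong in the optimization model rather than only in reporting.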

    Credit Risk Modelling: The Loss Distribution of a Loan Portfolio

    The aim of this work is to present a methodology that allows the regulatory capital for credit risk to be computed in a simple way. The Vasicek model is a popular one-factor model that derives the limiting form of the portfolio loss distribution. This model allows different risk measures to be calculated, such as the expected loss (EL), the value at risk (VaR), and the expected shortfall (ES). Due to the difficulty of obtaining real data, simulated data were used. For this study, three different portfolios were proposed: the first was a homogeneous portfolio with the same weighting for all loans; then a portfolio with unequal weights was considered; and finally a mixed portfolio with different weights and different probabilities of default was used. Monte Carlo simulation with 100,000 scenarios served as our benchmark. It was observed that the Vasicek model correctly estimates the results for the homogeneous portfolio. On the other hand, when the portfolio is not homogeneous (the unequal-weight and mixed portfolios), the Vasicek model correctly estimates the mean (expected loss) but underestimates the value at risk and the expected shortfall. This is because the approximation of the Vasicek model is good on average but not at the extremes.
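    The comparison for the homogeneous case can be sketched in a few lines. The following is a minimal illustration, not the paper's code: it checks the Vasicek asymptotic loss-quantile formula against a Monte Carlo simulation of a homogeneous portfolio; the default probability, asset correlation, portfolio size, and scenario count are illustrative assumptions.

    ```python
    import random
    from statistics import NormalDist

    N = NormalDist()

    def vasicek_quantile(pd, rho, alpha):
        """Limiting loss-fraction quantile of a large homogeneous portfolio:
        Phi((Phi^-1(pd) + sqrt(rho) * Phi^-1(alpha)) / sqrt(1 - rho))."""
        return N.cdf((N.inv_cdf(pd) + rho ** 0.5 * N.inv_cdf(alpha))
                     / (1 - rho) ** 0.5)

    def simulate_losses(pd, rho, n_loans, n_scenarios, rng):
        """One-factor Monte Carlo: loan i defaults in a scenario when
        sqrt(rho)*Z + sqrt(1-rho)*eps_i < Phi^-1(pd)."""
        thr = N.inv_cdf(pd)
        losses = []
        for _ in range(n_scenarios):
            z = rng.gauss(0, 1)
            defaults = sum(
                1 for _ in range(n_loans)
                if rho ** 0.5 * z + (1 - rho) ** 0.5 * rng.gauss(0, 1) < thr)
            losses.append(defaults / n_loans)
        return losses

    rng = random.Random(7)
    pd_, rho = 0.02, 0.15  # illustrative default probability and correlation
    losses = simulate_losses(pd_, rho, n_loans=200, n_scenarios=5000, rng=rng)
    ```

    For a homogeneous portfolio the simulated mean loss matches the expected loss (here 2%), and the simulated high quantiles sit near the analytic Vasicek quantile, consistent with the paper's observation that the approximation is good on average and for homogeneous portfolios.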

    Estimation of Default Probabilities with Support Vector Machines

    Predicting default probabilities is important for firms and banks to operate successfully and to estimate their specific risks. There are many reasons to use nonlinear techniques for predicting bankruptcy from financial ratios. Here we propose the Support Vector Machine (SVM) to estimate default probabilities of German firms. Our analysis is based on the Creditreform database. The results reveal that the eight most important predictors of bankruptcy for these German firms belong to the ratios of activity, profitability, liquidity, leverage, and the percentage of incremental inventories. Based on the performance measures, the SVM tool can predict a firm's default risk and identify insolvent firms more accurately than the benchmark logit model. The sensitivity investigation and a corresponding visualization tool reveal that the classifying ability of the SVM appears to be superior over a wide range of the SVM parameters. Based on the nonparametric Nadaraya-Watson estimator, the expected returns predicted by the SVM for regression have a significant positive linear relationship with the risk scores obtained for classification. This evidence is stronger than empirical results for the CAPM based on a linear regression and confirms that higher risks need to be compensated by higher potential returns.
    Keywords: Support Vector Machine, Bankruptcy, Default Probabilities Prediction, Expected Profitability, CAPM.
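    To illustrate the classification side, here is a minimal sketch under invented assumptions: the paper applies kernel SVMs to Creditreform data with eight predictors, whereas this toy trains a linear SVM by Pegasos-style stochastic subgradient descent on two synthetic ratios (leverage and profitability).

    ```python
    import random

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def train_linear_svm(xs, ys, lam=0.01, epochs=200, seed=0):
        """Pegasos-style stochastic subgradient descent for a linear SVM.
        xs: feature vectors (bias handled via an appended constant feature),
        ys: labels in {-1, +1}."""
        rng = random.Random(seed)
        w = [0.0] * len(xs[0])
        t = 0
        for _ in range(epochs):
            for i in rng.sample(range(len(xs)), len(xs)):
                t += 1
                eta = 1.0 / (lam * t)
                margin = ys[i] * dot(w, xs[i])
                w = [(1 - eta * lam) * wj for wj in w]  # regularization shrink
                if margin < 1:                          # hinge-loss subgradient
                    w = [wj + eta * ys[i] * xj for wj, xj in zip(w, xs[i])]
        return w

    # Synthetic firms described by (leverage, profitability, constant bias):
    # solvent firms (+1) have low leverage and positive profitability,
    # insolvent firms (-1) the opposite.  Purely illustrative data.
    rng = random.Random(1)
    solvent = [(rng.gauss(0.3, 0.1), rng.gauss(0.15, 0.05), 1.0)
               for _ in range(100)]
    insolvent = [(rng.gauss(0.8, 0.1), rng.gauss(-0.05, 0.05), 1.0)
                 for _ in range(100)]
    xs, ys = solvent + insolvent, [1] * 100 + [-1] * 100
    w = train_linear_svm(xs, ys)
    accuracy = sum((1 if dot(w, x) >= 0 else -1) == y
                   for x, y in zip(xs, ys)) / len(xs)
    ```

    On such well-separated synthetic data the linear SVM classifies nearly all firms correctly; the paper's point is that on real ratios a nonlinear (kernel) SVM outperforms the logit benchmark, which this sketch does not attempt to reproduce.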