    Sharp and fuzzy observables on effect algebras

    Observables on effect algebras and their fuzzy versions, obtained by means of confidence measures (Markov kernels), are studied. It is shown that, on effect algebras with the (E)-property, given an observable and a confidence measure, there exists a fuzzy version of the observable. An ordering of observables according to their fuzzy properties is introduced, and some minimality conditions with respect to this ordering are found. Applications of some results from the classical theory of experiments are considered.
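
    The abstract does not spell the construction out; the following is a minimal sketch of the standard smearing of an observable by a Markov kernel, with all notation assumed rather than taken from the paper.

    ```latex
    % Sketch: fuzzy version of an observable via a confidence measure.
    % Here xi : B(R) -> E is an observable on the effect algebra E and
    % mu : R x B(R) -> [0,1] is a Markov kernel (confidence measure),
    % so mu(x, .) is a probability measure for every x.
    \[
      \xi_{\mu}(A) \;=\; \int_{\mathbb{R}} \mu(x, A)\, \mathrm{d}\xi(x),
      \qquad A \in \mathcal{B}(\mathbb{R}).
    \]
    % The (E)-property is what would guarantee that this weak integral
    % exists in E, giving the fuzzy version xi_mu of xi.
    ```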

    "Can Banks Learn to Be Rational?"

    Can banks learn to be rational in their lending activities? The answer depends on the institutionally bounded constraints to learning. From an evolutionary perspective, the functionality (for survival) of "learning to be rational" creates strong incentives for such learning without, however, guaranteeing that each member of the particular economic species actually achieves increased fitness. I investigate this issue for a particular economic species, namely, commercial banks. The purpose of this paper is to illustrate the key issues related to learning in an economic model by proposing a new screening model for bank commercial loans that uses the neuro-fuzzy technique. The technical modeling aspect is integrally connected, in a rigorous way, to the key conceptual and theoretical aspects of the capability for learning to be rational in a broad but precise sense. The paper also compares the relative predictability of loan default among three methods of prediction (discriminant analysis, logit-type regression, and neuro-fuzzy), based on real data obtained from a bank in Taiwan. The neuro-fuzzy model, in contrast with the other two, incorporates recursive learning in a real-world, imprecise linguistic environment. The empirical results show that, in addition to its better screening ability, the neuro-fuzzy model is superior in explaining the relationships among the variables as well. With further modifications, this model could be used by bank regulatory agencies for loan examination and by bank loan officers for loan review. The main theoretical conclusion to draw from this demonstration is that non-linear learning in a vague semantic world is both possible and useful. Therefore, the search for alternatives to full neoclassical rationality and its equivalent under uncertainty, rational expectations, is a plausible and desirable search, especially when the probability of convergence to a rational-expectations equilibrium is low.
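
    The paper's model itself is not reproduced here; below is a minimal, self-contained sketch of the inference step of a Takagi-Sugeno-style neuro-fuzzy scorer of the kind used for loan screening. The borrower features, fuzzy sets, rule base, and all numeric parameters are hypothetical.

    ```python
    import numpy as np

    # Sketch of a Takagi-Sugeno-style neuro-fuzzy forward pass.
    # All features and parameters are hypothetical, not from the paper.

    def gauss(x, center, width):
        """Gaussian membership: degree to which x belongs to a fuzzy set."""
        return np.exp(-0.5 * ((x - center) / width) ** 2)

    # Two borrower features (hypothetical): debt ratio, years in business.
    x = np.array([0.65, 3.0])

    # Two fuzzy sets per feature, given as (center, width).
    sets = [
        [(0.2, 0.2), (0.8, 0.2)],   # debt ratio: "low", "high"
        [(2.0, 2.0), (10.0, 4.0)],  # years in business: "short", "long"
    ]

    # Each rule picks one fuzzy set per feature and carries a linear
    # consequent [bias, w_debt, w_years].
    rules = [(0, 0), (0, 1), (1, 0), (1, 1)]
    consequents = np.array([
        [0.3, 0.2, -0.02],
        [0.1, 0.1, -0.01],
        [0.9, 0.5, -0.03],
        [0.6, 0.4, -0.02],
    ])

    # Firing strength of each rule: product t-norm of its memberships.
    strengths = np.array([
        gauss(x[0], *sets[0][i]) * gauss(x[1], *sets[1][j])
        for i, j in rules
    ])
    weights = strengths / strengths.sum()

    # Default score: normalized weighted sum of the rules' linear outputs.
    outputs = consequents @ np.concatenate(([1.0], x))
    print(f"predicted default score: {weights @ outputs:.3f}")
    ```

    In a trained neuro-fuzzy system the centers, widths, and consequent weights would be fitted to the loan data (for example by gradient descent or hybrid least squares); the recursive learning the abstract describes is that fitting loop, which the sketch omits.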

    Semantic Information Measure with Two Types of Probability for Falsification and Confirmation

    Logical Probability (LP) is strictly distinguished from Statistical Probability (SP). To measure semantic information or confirm hypotheses, we need to use a sampling distribution (a conditional SP function) to test or confirm a fuzzy truth function (a conditional LP function). The proposed Semantic Information Measure (SIM) is compatible with Shannon's information theory and Fisher's likelihood method. It ensures that the smaller the LP of a predicate and the larger the truth value of the proposition, the more information there is. The SIM can therefore serve as Popper's information criterion for falsification or testing. The SIM also allows us to optimize the truth value of counterexamples, or the degree of disbelief in a hypothesis, to obtain the optimized degree of belief, i.e., the Degree of Confirmation (DOC). To explain confirmation, this paper 1) provides the calculation method for the DOC of universal hypotheses; 2) discusses how to resolve the Raven Paradox with the new DOC and its increment; 3) derives the DOC of rapid HIV tests: DOC of "+" = 1 - (1 - specificity)/sensitivity, which is similar to the Likelihood Ratio (= sensitivity/(1 - specificity)) but has an upper limit of 1; 4) discusses negative DOC for excessive affirmations, wrong hypotheses, or lies; and 5) discusses the DOC of general hypotheses, with GPS as an example.
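
    The two formulas in item 3 can be read side by side numerically. A small sketch (the sensitivity and specificity values are illustrative, not from the paper):

    ```python
    # Compare the Degree of Confirmation of a positive result with the
    # classical Likelihood Ratio. Input values are illustrative only.

    def doc_positive(sensitivity: float, specificity: float) -> float:
        """DOC of '+' = 1 - (1 - specificity) / sensitivity."""
        return 1 - (1 - specificity) / sensitivity

    def likelihood_ratio(sensitivity: float, specificity: float) -> float:
        """LR+ = sensitivity / (1 - specificity)."""
        return sensitivity / (1 - specificity)

    for sens, spec in [(0.90, 0.95), (0.98, 0.99), (0.99, 0.999)]:
        print(f"sens={sens}, spec={spec}: "
              f"DOC={doc_positive(sens, spec):.4f}, "
              f"LR+={likelihood_ratio(sens, spec):.1f}")
    ```

    As specificity approaches 1 the likelihood ratio grows without bound, while the DOC approaches its upper limit of 1, which is the contrast the abstract draws.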

    Semantic Information G Theory and Logical Bayesian Inference for Machine Learning

    An important problem in machine learning is that when the number of labels n > 2, it is very difficult to construct and optimize a group of learning functions, and we wish the optimized learning functions to remain useful when the prior distribution P(x) (where x is an instance) changes. To resolve this problem, the semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms together form a systematic solution. A semantic channel in the G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and logistic functions used by popular methods, membership functions can be used more conveniently as learning functions without the above problem. In LBI, every label's learning is independent. For multilabel learning, we can directly obtain a group of optimized membership functions from a large enough labeled sample, without preparing different samples for different labels. A group of Channel Matching (CM) algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions on a two-dimensional feature space, 2-3 iterations can make the mutual information between the three classes and three labels surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved into the CM-EM algorithm, which can outperform the EM algorithm when mixture ratios are imbalanced or convergence is only local. The CM iteration algorithm needs to be combined with neural networks for MMI classification on high-dimensional feature spaces. LBI needs further study for the unification of statistics and logic.
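
    As a sketch of how a membership function could be obtained directly from a labeled sample, the code below rescales the empirical posterior of one label so that its maximum is 1. That normalization is an assumption of the sketch (one common reading of LBI's optimized truth function), and the synthetic one-dimensional sample is illustrative, not from the paper.

    ```python
    import numpy as np

    # Sketch: estimate a membership (truth) function for one label from
    # a labeled sample by rescaling its empirical posterior to max 1.
    # The synthetic data and binning are illustrative assumptions.

    rng = np.random.default_rng(0)

    # Synthetic instances: the label y1 tends to be used for larger x.
    x_all = rng.normal(0.0, 1.0, 10_000)
    p_true = 1 / (1 + np.exp(-3 * (x_all - 0.5)))   # ground-truth P(y1|x)
    has_y1 = rng.random(10_000) < p_true            # observed uses of y1

    # Empirical P(y1|x) on a grid of bins over x.
    edges = np.linspace(-3, 3, 25)
    idx = np.digitize(x_all, edges)
    post = np.array([
        has_y1[idx == k].mean() if np.any(idx == k) else 0.0
        for k in range(len(edges) + 1)
    ])

    # Membership function: posterior rescaled so its maximum equals 1.
    membership = post / post.max()
    print(np.round(membership, 2))
    ```

    The abstract's claim is that such membership functions, learned independently per label, remain usable as learning functions when the prior P(x) changes; the sketch shows only the one-label estimation step.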