Conditional probability estimation
This paper studies an aspect of the estimation of conditional probability distributions by maximum likelihood that seems to have been overlooked in the literature on Bayesian networks: the information conveyed by the conditioning event should also be included in the likelihood function.
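As a point of reference for the abstract above, the following is a minimal sketch of the textbook baseline it discusses: maximum-likelihood estimation of a conditional probability from joint observations, where only the cases in which the conditioning event occurs enter the conditional likelihood (the paper's point is that the likelihood should also reflect the information carried by the conditioning event itself). The function name and data are illustrative, not taken from the paper.

```python
# Baseline sketch (not the paper's refined proposal): the standard MLE of
# a conditional probability P(Y=1 | X=1) is the conditional relative
# frequency, computed only from observations where the conditioning
# event {X=1} occurred.

def conditional_mle(pairs):
    """MLE of P(Y=1 | X=1) from a list of (x, y) observations."""
    n_x = sum(1 for x, _ in pairs if x == 1)
    n_xy = sum(1 for x, y in pairs if x == 1 and y == 1)
    return n_xy / n_x if n_x else None  # undefined if X=1 never occurs

data = [(1, 1), (1, 0), (1, 1), (0, 1), (0, 0)]
estimate = conditional_mle(data)  # 2 of the 3 cases with X=1 have Y=1
```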
Empirical interpretation of imprecise probabilities
This paper investigates the possibility of a frequentist interpretation of imprecise probabilities by generalizing the approach of Bernoulli's Ars Conjectandi: that is, by studying, in the case of games of chance, under which assumptions imprecise probabilities can be satisfactorily estimated from data. In fact, estimability on the basis of finite amounts of data is a necessary condition for imprecise probabilities to have a clear empirical meaning. Unfortunately, imprecise probabilities can be estimated arbitrarily well from data only in very limited settings.
Transverse fracture properties of green wood and the anatomy of six temperate tree species
© Institute of Chartered Foresters, 2016. All rights reserved. The aim of this study was to investigate the effect of wood anatomy and density on the mechanics of fracture when wood is split in the radial-longitudinal (RL) and tangential-longitudinal (TL) fracture systems. The specific fracture energies (Gf, J m⁻²) of the trunk wood of six tree species were studied in the green state using double-edge-notched tensile tests. The fracture surfaces were examined in both systems using Environmental Scanning Electron Microscopy (ESEM). Wood density and ray characteristics were also measured. The results showed that Gf in RL was greater than in TL for five of the six species. In particular, the greatest degree of anisotropy was observed in Quercus robur L., and the lowest in Larix decidua Mill. ESEM micrographs of fractured specimens suggested reasons for the anisotropy and for the differences across tree species: in the RL system, fractures broke across rays, whose walls unwound like tracheids in longitudinal-tangential (LT) and longitudinal-radial (LR) failure, producing a rough fracture surface that would absorb energy, whereas in the TL system, fractures often ran alongside rays.
Pre-Poisson submanifolds
This is an expository and introductory note on some results obtained in "Coisotropic embeddings in Poisson manifolds" (arXiv:math/0611480). Some original material is contained in the last two sections, where we consider linear Poisson structures. Comment: Proceedings of the conference "Poisson 2006". 14 pages.
On maxitive integration
The Shilkret integral is maxitive (i.e., the integral of a pointwise supremum of functions is the supremum of their integrals), but defined only for nonnegative functions. In the present paper, some properties of this integral (such as subadditivity and a law of iterated expectations) are studied, in comparison with the additive and Choquet integrals. Furthermore, the definition of a maxitive integral for all real functions is discussed. In particular, a convex, maxitive integral is introduced and some of its properties are derived
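The maxitivity property stated above can be illustrated concretely. On a finite space with a possibility (maxitive) measure induced by a possibility distribution pi, the Shilkret integral of a nonnegative function f reduces to the maximum of f(x)·pi(x); the example data below are assumptions for illustration only.

```python
# Sketch of the Shilkret integral on a finite space, assuming the
# maxitive measure is given by a possibility distribution pi, i.e.
# mu(A) = max_{x in A} pi(x). For nonnegative f it reduces to
#   Sh(f) = max_x f(x) * pi(x).

def shilkret(f, pi):
    """Shilkret integral of f w.r.t. the maxitive measure induced by
    the possibility distribution pi (dicts over the same finite domain)."""
    return max(f[x] * pi[x] for x in pi)

pi = {"a": 1.0, "b": 0.5, "c": 0.2}
f = {"a": 1.0, "b": 3.0, "c": 4.0}
g = {"a": 2.0, "b": 1.0, "c": 5.0}
sup_fg = {x: max(f[x], g[x]) for x in pi}

# Maxitivity: the integral of the pointwise supremum of functions
# equals the supremum of their integrals.
assert shilkret(sup_fg, pi) == max(shilkret(f, pi), shilkret(g, pi))
```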
Likelihood decision functions
In both classical and Bayesian approaches, statistical inference is unified and generalized by the corresponding decision theory. This is not the case for the likelihood approach to statistical inference, in spite of the manifest success of likelihood methods in statistics. The goal of the present work is to fill this gap by extending the likelihood approach to cover decision making as well. The resulting decision functions, called likelihood decision functions, generalize the usual likelihood methods (such as ML estimators and LR tests), in the sense that these methods appear as the likelihood decision functions in particular decision problems. In general, the likelihood decision functions maintain some key properties of the usual likelihood methods, such as equivariance and asymptotic optimality. By unifying and generalizing the likelihood approach to statistical inference, the present work offers a new perspective on statistical methodology and on the connections among likelihood methods.
Different distance measures for fuzzy linear regression with Monte Carlo methods
The aim of this study was to determine the best distance measure for estimating the parameters of a fuzzy linear regression model with Monte Carlo (MC) methods. Only one distance measure has been used for fuzzy linear regression with MC methods in the literature; therefore, three different definitions of the distance between two fuzzy numbers are introduced. The estimation accuracies of the existing and proposed distance measures are explored in a simulation study. Comparing the distance measures in terms of estimation accuracy, this study demonstrates that the best distance measures for estimating fuzzy linear regression model parameters with MC methods are those defined by Kaufmann and Gupta (Introduction to fuzzy arithmetic: theory and applications. Van Nostrand Reinhold, New York, 1991), Heilpern-2 (Fuzzy Sets Syst 91(2):259–268, 1997), and Chen and Hsieh (Aust J Intell Inf Process Syst 6(4):217–229, 2000). On the other hand, the worst is the distance measure used by Abdalla and Buckley (Soft Comput 11:991–996, 2007; Soft Comput 12:463–468, 2008). These results should be useful for enriching studies of fuzzy linear regression models.
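To make the notion of a distance between fuzzy numbers concrete, here is a hypothetical illustration: a simple "vertex" distance between triangular fuzzy numbers. This particular formula is an assumption for exposition and is not necessarily any of the measures compared in the study above.

```python
# Hypothetical illustration (not one of the study's measures, as far as
# the abstract states): Euclidean distance between the three defining
# points of two triangular fuzzy numbers (left, peak, right).
import math

def vertex_distance(A, B):
    """Distance between triangular fuzzy numbers A = (aL, aM, aR)
    and B = (bL, bM, bR) via their defining vertices."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(A, B)))

A = (1.0, 2.0, 3.0)  # triangular fuzzy number "about 2"
B = (2.0, 3.0, 4.0)  # triangular fuzzy number "about 3"
d = vertex_distance(A, B)
```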
On maxitive integration
A functional is said to be maxitive if it commutes with the (pointwise) supremum operation. Such functionals find application in particular in decision theory and related fields. In the present paper, maxitive functionals are characterized as integrals with respect to maxitive measures (also known as possibility measures or idempotent measures). These maxitive integrals are then compared with the usual additive and nonadditive integrals on the basis of some important properties, such as convexity, subadditivity, and the law of iterated expectations
On the validity of minimin and minimax methods for support vector regression with interval data
Paper delivered at the 9th International Symposium on Imprecise Probability: Theories and Applications, Pescara, Italy, 2015. Abstract: In recent years, generalizations of support vector methods for analyzing interval-valued data have been suggested in both the regression and classification contexts. Standard support vector methods for precise data formalize these statistical problems as optimization problems that can be based on various loss functions. In the case of Support Vector Regression (SVR), on which we focus here, the function that best describes the relationship between a response and some explanatory variables is derived as the solution of the minimization problem associated with the expectation of some function of the residual, called the risk functional. The key idea of SVR is that, even when considering an infinite-dimensional space of arbitrary regression functions, given a finite-dimensional data set, the function minimizing the risk can be represented as a finite weighted sum of kernel functions. This allows the SVR estimate to be determined in practice by solving a much simpler optimization problem, even in the case of nonlinear regression. When only interval-valued observations of the variables of interest are available, it has been suggested to minimize the minimal or maximal risk values that are compatible with the imprecise data, yielding precise SVR estimates on the basis of interval data. In this paper, we show that, also in the case of an interval-valued response, the optimal function can be represented as a finite weighted sum of kernel functions. Thus, the minimin and minimax SVR estimates can be obtained by minimizing the corresponding simplified expressions of the empirical lower and upper risks, respectively.
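The lower and upper empirical risks mentioned above can be sketched for a fixed candidate function: for each interval-valued response, take the smallest and largest residuals compatible with the interval. The epsilon-insensitive loss and all names below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch (assumed loss and names): lower and upper empirical risks of
# fitted predictions under an epsilon-insensitive loss, when responses
# are observed as intervals [y_lo, y_hi]. Minimizing the lower risk
# over candidate functions gives the "minimin" estimate; minimizing
# the upper risk gives the "minimax" estimate.

def eps_loss(r, eps=0.1):
    """Epsilon-insensitive loss of a residual r."""
    return max(abs(r) - eps, 0.0)

def interval_risks(preds, intervals, eps=0.1):
    lower = upper = 0.0
    for p, (lo, hi) in zip(preds, intervals):
        # smallest residual compatible with the interval response
        r_min = 0.0 if lo <= p <= hi else min(abs(p - lo), abs(p - hi))
        # largest residual compatible with the interval response
        r_max = max(abs(p - lo), abs(p - hi))
        lower += eps_loss(r_min, eps)
        upper += eps_loss(r_max, eps)
    n = len(preds)
    return lower / n, upper / n

preds = [1.0, 2.0]
intervals = [(0.5, 1.5), (3.0, 4.0)]  # second prediction misses its interval
lo_risk, up_risk = interval_risks(preds, intervals)
```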
Robust regression with imprecise data
We consider the problem of regression analysis with imprecise data. By imprecise data we mean imprecise observations of precise quantities in the form of sets of values. In this paper, we explore a recently introduced likelihood-based approach to regression with such data. The approach is very general, since it covers all kinds of imprecise data (i.e. not only intervals) and it is not restricted to linear regression. Its result consists of a set of functions, reflecting the entire uncertainty of the regression problem. Here we study in particular a robust special case of the likelihood-based imprecise regression, which can be interpreted as a generalization of the method of least median of squares. Moreover, we apply it to data from a social survey, and compare it with other approaches to regression with imprecise data. It turns out that the likelihood-based approach is the most generally applicable one and is the only approach accounting for multiple sources of uncertainty at the same time
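For orientation, the classical least median of squares criterion that the robust special case above generalizes can be sketched by brute force: among candidate lines through pairs of data points (a standard LMS heuristic), pick the one minimizing the median of squared residuals. The data and names are illustrative assumptions, not from the paper.

```python
# Sketch of classical least median of squares (LMS) regression: choose
# the line minimizing the MEDIAN of squared residuals, which makes the
# fit insensitive to a minority of gross outliers.
from itertools import combinations
from statistics import median

def lms_line(xs, ys):
    """Brute-force LMS fit over candidate lines through data pairs."""
    best, best_ab = float("inf"), None
    for i, j in combinations(range(len(xs)), 2):
        if xs[i] == xs[j]:
            continue  # vertical line, skip
        a = (ys[j] - ys[i]) / (xs[j] - xs[i])
        b = ys[i] - a * xs[i]
        m = median((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
        if m < best:
            best, best_ab = m, (a, b)
    return best_ab

# y = 2x with one gross outlier; LMS recovers the clean line.
xs = [0, 1, 2, 3, 4]
ys = [0, 2, 4, 6, 100]
a, b = lms_line(xs, ys)  # slope 2, intercept 0
```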