Conditional probability estimation
This paper studies an aspect of the estimation of conditional probability distributions by maximum likelihood that seems to have been overlooked in the literature on Bayesian networks: the information conveyed by the conditioning event should be included in the likelihood function as well.
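To make the claim concrete, a minimal sketch in assumed notation, not taken from the paper: if the data record both the conditioning event $B$ and the outcome $A$, the full likelihood of a parameter $\theta$ factorizes as

    L(\theta) = P_\theta(A \cap B) = P_\theta(A \mid B)\, P_\theta(B),

whereas common practice retains only the conditional factor $P_\theta(A \mid B)$, discarding the information carried by the occurrence of $B$ itself.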
Empirical interpretation of imprecise probabilities
This paper investigates the possibility of a frequentist interpretation of imprecise probabilities by generalizing the approach of Bernoulli's Ars Conjectandi: that is, by studying, in the case of games of chance, under which assumptions imprecise probabilities can be satisfactorily estimated from data. Estimability on the basis of finite amounts of data is in fact a necessary condition for imprecise probabilities to have a clear empirical meaning. Unfortunately, imprecise probabilities can be estimated arbitrarily well from data only in very limited settings.
Transverse fracture properties of green wood and the anatomy of six temperate tree species
The aim of this study was to investigate the effect of wood anatomy and density on the mechanics of fracture when wood is split in the radial-longitudinal (RL) and tangential-longitudinal (TL) fracture systems. The specific fracture energies (Gf, J m⁻²) of the trunk wood of six tree species were studied in the green state using double-edge notched tensile tests. The fracture surfaces were examined in both systems using Environmental Scanning Electron Microscopy (ESEM). Wood density and ray characteristics were also measured. The results showed that Gf in RL was greater than in TL for five of the six species. In particular, the greatest degree of anisotropy was observed in Quercus robur L., and the lowest in Larix decidua Mill. ESEM micrographs of fractured specimens suggested reasons for the anisotropy and for the differences across tree species. In the RL system, fractures broke across rays, whose walls unwound like tracheids in longitudinal-tangential (LT) and longitudinal-radial (LR) failure, producing a rough fracture surface that would absorb energy; in the TL system, by contrast, fractures often ran alongside rays.
Pre-Poisson submanifolds
This is an expository and introductory note on some results obtained in "Coisotropic embeddings in Poisson manifolds" (arXiv:math/0611480). Some original material is contained in the last two sections, where we consider linear Poisson structures. (Comment: Proceedings of the conference "Poisson 2006", 14 pages.)
On maxitive integration
The Shilkret integral is maxitive (i.e., the integral of a pointwise supremum of functions is the supremum of their integrals), but it is defined only for nonnegative functions. In the present paper, some properties of this integral (such as subadditivity and a law of iterated expectations) are studied in comparison with the additive and Choquet integrals. Furthermore, the definition of a maxitive integral for all real functions is discussed. In particular, a convex, maxitive integral is introduced and some of its properties are derived.
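For orientation, the standard definition (notation assumed): for a nonnegative function $f$ and a maxitive measure $\mu$, the Shilkret integral is

    \int^{\mathrm{Sh}} f \,\mathrm{d}\mu \;=\; \sup_{t > 0}\; t \cdot \mu(\{f \ge t\}),

and maxitivity is the property

    \int^{\mathrm{Sh}} \Big(\sup_i f_i\Big) \,\mathrm{d}\mu \;=\; \sup_i \int^{\mathrm{Sh}} f_i \,\mathrm{d}\mu.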
Likelihood decision functions
In both the classical and Bayesian approaches, statistical inference is unified and generalized by the corresponding decision theory. This is not the case for the likelihood approach to statistical inference, in spite of the manifest success of likelihood methods in statistics. The goal of the present work is to fill this gap by extending the likelihood approach to cover decision making as well. The resulting decision functions, called likelihood decision functions, generalize the usual likelihood methods (such as ML estimators and LR tests), in the sense that these methods appear as the likelihood decision functions in particular decision problems. In general, the likelihood decision functions maintain some key properties of the usual likelihood methods, such as equivariance and asymptotic optimality. By unifying and generalizing the likelihood approach to statistical inference, the present work offers a new perspective on statistical methodology and on the connections among likelihood methods.
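As a concrete reference point (standard textbook formulations, not quoted from the paper): with likelihood function $L$ on parameter space $\Theta$, the ML estimator and the LR test arise as the decisions

    \delta_{\mathrm{ML}} = \arg\max_{\theta \in \Theta} L(\theta),
    \qquad
    \text{reject } H_0 : \theta \in \Theta_0 \;\iff\; \frac{\sup_{\theta \in \Theta_0} L(\theta)}{\sup_{\theta \in \Theta} L(\theta)} \le c,

so a decision theory that reproduces these in the corresponding estimation and testing problems generalizes both.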
Different distance measures for fuzzy linear regression with Monte Carlo methods
The aim of this study was to determine the best distance measure for estimating the parameters of the fuzzy linear regression model with Monte Carlo (MC) methods. It is pointed out that only one distance measure has been used for fuzzy linear regression with MC methods in the literature. Therefore, three different definitions of the distance between two fuzzy numbers are introduced. The estimation accuracies of the existing and proposed distance measures are explored in a simulation study. The distance measures are compared to each other in terms of estimation accuracy; this study demonstrates that the best distance measures for estimating the fuzzy linear regression model parameters with MC methods are those defined by Kaufmann and Gupta (Introduction to fuzzy arithmetic theory and applications. Van Nostrand Reinhold, New York, 1991), Heilpern-2 (Fuzzy Sets Syst 91(2):259–268, 1997) and Chen and Hsieh (Aust J Intell Inf Process Syst 6(4):217–229, 2000). On the other hand, the worst distance measure is the one used by Abdalla and Buckley (Soft Comput 11:991–996, 2007; Soft Comput 12:463–468, 2008). These results should be useful for enriching studies that focus on fuzzy linear regression models.
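The distance definitions actually compared in the study are those of the cited references; purely as an illustration of the kind of computation involved, the following sketch computes an α-cut based L2-type distance between two triangular fuzzy numbers (the function names and this particular distance are illustrative assumptions, not the paper's definitions):

    import numpy as np

    def alpha_cuts(tri, alphas):
        # Endpoints of the alpha-cuts of a triangular fuzzy number (l, m, r):
        # the cut at level alpha is [l + alpha*(m - l), r - alpha*(r - m)].
        l, m, r = tri
        return l + alphas * (m - l), r - alphas * (r - m)

    def l2_alpha_distance(tri_a, tri_b, n_levels=101):
        # Illustrative L2-type distance: integrate the squared differences
        # of alpha-cut endpoints over alpha in [0, 1].
        alphas = np.linspace(0.0, 1.0, n_levels)
        a_left, a_right = alpha_cuts(tri_a, alphas)
        b_left, b_right = alpha_cuts(tri_b, alphas)
        squared = (a_left - b_left) ** 2 + (a_right - b_right) ** 2
        return np.sqrt(np.trapz(squared, alphas))

    # Example: two triangular fuzzy numbers given as (left, mode, right).
    print(l2_alpha_distance((1.0, 2.0, 3.0), (1.5, 2.5, 3.5)))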
On the validity of minimin and minimax methods for support vector regression with interval data
Paper delivered at the 9th International Symposium on Imprecise Probability: Theories and Applications, Pescara, Italy, 2015. Abstract: In recent years, generalizations of support vector methods for analyzing interval-valued data have been suggested in both the regression and classification contexts. Standard support vector methods for precise data formalize these statistical problems as optimization problems that can be based on various loss functions. In the case of Support Vector Regression (SVR), on which we focus here, the function that best describes the relationship between a response and some explanatory variables is derived as the solution of the minimization problem associated with the expectation of some function of the residual, which is called the risk functional. The key idea of SVR is that, even when considering an infinite-dimensional space of arbitrary regression functions, given a finite-dimensional data set, the function minimizing the risk can be represented as a finite weighted sum of kernel functions. This makes it practical to determine the SVR estimate by solving a much simpler optimization problem, even in the case of nonlinear regression. When only interval-valued observations of the variables of interest are available, it has been suggested to minimize the minimal or maximal risk values that are compatible with the imprecise data, yielding precise SVR estimates on the basis of interval data. In this paper, we show that, also in the case of an interval-valued response, the optimal function can be represented as a finite weighted sum of kernel functions. Thus, the minimin and minimax SVR estimates can be obtained by minimizing the corresponding simplified expressions of the empirical lower and upper risks, respectively.
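In symbols, assuming for simplicity precise inputs $x_i$ with interval-valued responses $[\underline{y}_i, \overline{y}_i]$ (a reading of the abstract, not a quotation from the paper), the empirical lower and upper risks of a candidate function $f$ under a loss $L$ are

    \underline{R}(f) = \frac{1}{n} \sum_{i=1}^{n} \min_{y \in [\underline{y}_i, \overline{y}_i]} L\big(y - f(x_i)\big),
    \qquad
    \overline{R}(f) = \frac{1}{n} \sum_{i=1}^{n} \max_{y \in [\underline{y}_i, \overline{y}_i]} L\big(y - f(x_i)\big),

and the minimin (resp. minimax) estimate minimizes $\underline{R}$ (resp. $\overline{R}$) over functions of the representer form $f(\cdot) = \sum_j \alpha_j k(x_j, \cdot)$, possibly plus an offset.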
On maxitive integration
A functional is said to be maxitive if it commutes with the (pointwise) supremum operation. Such functionals find application in particular in decision theory and related fields. In the present paper, maxitive functionals are characterized as integrals with respect to maxitive measures (also known as possibility measures or idempotent measures). These maxitive integrals are then compared with the usual additive and nonadditive integrals on the basis of some important properties, such as convexity, subadditivity, and the law of iterated expectations.
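For reference (standard definitions, notation assumed): a measure $\mu$ is maxitive, i.e. a possibility measure, when $\mu(\bigcup_i A_i) = \sup_i \mu(A_i)$, and the characterization described above then takes the schematic form

    T\Big(\sup_i f_i\Big) = \sup_i T(f_i) \text{ for all families } (f_i)
    \quad \Longleftrightarrow \quad
    T(f) = \int^{\mathrm{Sh}} f \,\mathrm{d}\mu \text{ for some maxitive } \mu,

with the precise hypotheses spelled out in the paper.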
On the implementation of LIR: the case of simple linear regression with interval data
This paper considers the problem of simple linear regression with interval-censored data. That is, n pairs of intervals are observed instead of the n pairs of precise values for the two variables (dependent and independent). Each of these intervals is closed but possibly unbounded, and contains the corresponding (unobserved) value of the dependent or independent variable. The goal of the regression is to describe the relationship between (the precise values of) these two variables by means of a linear function.
Likelihood-based Imprecise Regression (LIR) is a recently introduced, very general approach to regression for imprecisely observed quantities. The result of a LIR analysis is in general set-valued: it consists of all regression functions that cannot be excluded on the basis of likelihood inference. These regression functions are said to be undominated.
Since the interval data can be unbounded, a robust regression method is necessary. Hence, we consider the robust LIR method based on the minimization of the residuals' quantiles. For this method, we prove that the set of all intercept-slope pairs corresponding to the undominated regression functions is the union of finitely many polygons. We give an exact algorithm for determining this set (i.e., for determining the set-valued result of the robust LIR analysis), and show that it has worst-case time complexity O(n^3 log n). We have implemented this exact algorithm as part of the R package linLIR.
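As a rough sketch of the dominance comparison underlying the method (simplified to precise inputs, interval-valued responses, and a finite candidate set; the names and simplifications are mine, not the paper's algorithm or the linLIR API):

    import numpy as np

    def residual_bounds(a, b, x, y_low, y_up):
        # Bounds on |y - (a + b*x_i)| over all responses y compatible with
        # the interval [y_low_i, y_up_i]; inputs x_i are taken as precise.
        fitted = a + b * x
        # Smallest compatible residual: 0 if the line meets the interval.
        lower = np.maximum(0.0, np.maximum(y_low - fitted, fitted - y_up))
        # Largest compatible residual: distance to the farther endpoint.
        upper = np.maximum(y_up - fitted, fitted - y_low)
        return lower, upper

    def quantile_bounds(a, b, x, y_low, y_up, p=0.5):
        # Bounds on the p-quantile of the residuals of the line (a, b).
        lower, upper = residual_bounds(a, b, x, y_low, y_up)
        k = int(np.ceil(p * len(x))) - 1
        return np.sort(lower)[k], np.sort(upper)[k]

    def undominated(candidates, x, y_low, y_up, p=0.5):
        # A candidate line is undominated if the lower bound of its residual
        # quantile does not exceed the smallest upper bound over all candidates.
        bounds = [quantile_bounds(a, b, x, y_low, y_up, p) for a, b in candidates]
        best_upper = min(u for _, u in bounds)
        return [c for c, (l, _) in zip(candidates, bounds) if l <= best_upper]

Unlike this brute-force comparison over a discretized candidate set, the paper's exact algorithm characterizes the full undominated set directly as a union of polygons in the intercept-slope plane.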