A decision-theoretic approach for segmental classification
This paper is concerned with statistical methods for the segmental
classification of linear sequence data where the task is to segment and
classify the data according to an underlying hidden discrete state sequence.
Such analysis is commonplace in the empirical sciences including genomics,
finance and speech processing. In particular, we are interested in answering
the following question: given data and a statistical model of the hidden
states, what should we report as the prediction under the posterior
distribution? That is, how should we make a prediction of the underlying
states? We demonstrate that traditional approaches
such as reporting the most probable state sequence or most probable set of
marginal predictions can give undesirable classification artefacts and offer
limited control over the properties of the prediction. We propose a decision
theoretic approach using a novel class of Markov loss functions and report
predictions via the principle of minimum expected loss (maximum expected
utility). We demonstrate that the sequence of minimum expected loss under the
Markov loss function can be enumerated exactly using dynamic programming
methods and that it offers flexibility and performance improvements over
existing techniques. The result is generic and applicable to any probabilistic
model on a sequence, such as Hidden Markov models, change point or product
partition models.

Comment: Published at http://dx.doi.org/10.1214/13-AOAS657 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
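The decoding idea above can be sketched with dynamic programming. The sketch below is an illustration, not the authors' exact Markov loss: it assumes we already have posterior marginals and uses a simple per-switch penalty, so the reported sequence maximizes expected pointwise gain minus a cost for each state change. The function name and penalty form are assumptions for this example.

```python
import numpy as np

def markov_loss_decode(marginals, switch_penalty=0.5):
    """Decode a state sequence from posterior marginals, trading off
    pointwise accuracy against the number of state switches.

    marginals: (T, K) array, marginals[t, k] = P(state_t = k | data).
    switch_penalty: cost charged for each change of state.
    Returns the sequence maximizing sum_t marginals[t, s_t]
    minus switch_penalty * (number of switches), via dynamic programming.
    """
    T, K = marginals.shape
    score = marginals[0].copy()          # best score ending in each state
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        # transition score: staying is free, switching costs switch_penalty
        trans = score[:, None] - switch_penalty * (1 - np.eye(K))
        back[t] = np.argmax(trans, axis=0)
        score = trans[back[t], np.arange(K)] + marginals[t]
    # backtrack from the best final state
    seq = np.empty(T, dtype=int)
    seq[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):
        seq[t - 1] = back[t, seq[t]]
    return seq
```

With `switch_penalty=0` this reduces to the marginal (pointwise MAP) prediction; raising the penalty suppresses the short spurious segments the abstract calls classification artefacts.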
A New Weighting Scheme in Weighted Markov Model for Predicting the Probability of Drought Episodes
Drought is a complex stochastic natural hazard caused by prolonged shortage
of rainfall. Several environmental factors are involved in determining drought
classes at the specific monitoring station. Therefore, efficient sequence
processing techniques are required to explore and predict the periodic
information about the various episodes of drought classes. In this study, we
proposed a new weighting scheme to predict the probability of various drought
classes under Weighted Markov Chain (WMC) model. We provide a standardized
scheme of weights for ordinal sequences of drought classifications by
normalizing the squared weighted Cohen's kappa. Illustrations of the proposed scheme
are given by including temporal ordinal data on drought classes determined by
the standardized precipitation temperature index (SPTI). Experimental results
show that the proposed weighting scheme for WMC model is sufficiently flexible
to address actual changes in drought classifications by restructuring the
transient behavior of a Markov chain. In summary, this paper proposes a new
weighting scheme to improve the accuracy of the WMC, specifically in the field
of hydrology.
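The Weighted Markov Chain prediction step can be sketched as follows. This is a minimal illustration: the paper derives the weights from a normalized squared weighted Cohen's kappa, whereas here they are simply passed in and renormalized, and the function name is an assumption.

```python
import numpy as np

def weighted_markov_predict(seq, n_states, weights):
    """One-step-ahead WMC prediction for an ordinal class sequence.

    seq: observed sequence of integer class labels (e.g. drought classes).
    weights: weight w_k for each lag k = 1..m; combined prediction is
    sum_k w_k * (row of the empirical k-step transition matrix selected
    by the state observed k steps back).
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    probs = np.zeros(n_states)
    for k, w in enumerate(weights, start=1):
        # empirical k-step transition matrix from lag-k pairs
        P = np.zeros((n_states, n_states))
        for a, b in zip(seq[:-k], seq[k:]):
            P[a, b] += 1
        row_sums = P.sum(axis=1, keepdims=True)
        P = np.divide(P, row_sums, out=np.full_like(P, 1.0 / n_states),
                      where=row_sums > 0)   # unseen rows fall back to uniform
        probs += w * P[seq[-k]]             # condition on the state k steps back
    return probs
```

The returned vector is a probability distribution over the next drought class; the predicted class is its argmax.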
Infinite Latent Feature Selection: A Probabilistic Latent Graph-Based Ranking Approach
Feature selection is playing an increasingly significant role with respect to
many computer vision applications spanning from object recognition to visual
object tracking. However, most of the recent solutions in feature selection are
not robust across different and heterogeneous sets of data. In this paper, we
address this issue proposing a robust probabilistic latent graph-based feature
selection algorithm that performs the ranking step while considering all the
possible subsets of features, as paths on a graph, bypassing the combinatorial
problem analytically. An appealing characteristic of the approach is that it
aims to discover an abstraction behind low-level sensory data, that is,
relevancy. Relevancy is modelled as a latent variable in a PLSA-inspired
generative process that allows the investigation of the importance of a feature
when injected into an arbitrary set of cues. The proposed method has been
tested on ten diverse benchmarks, and compared against eleven state-of-the-art
feature selection methods. Results show that the proposed approach attains the
highest performance levels across many different scenarios and difficulties,
thereby confirming its strong robustness while setting a new state of the art
in the feature selection domain.

Comment: Accepted at the IEEE International Conference on Computer Vision
(ICCV), 2017, Venice. Preprint cop
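The "all possible subsets as paths on a graph" idea can be made concrete with a matrix geometric series, which sums contributions from paths of every length in closed form instead of enumerating them. This is a generic path-ranking sketch under that assumption, not the paper's full probabilistic latent model; the function name and damping choice are illustrative.

```python
import numpy as np

def graph_path_ranking(A, alpha=None):
    """Rank features by aggregating over all paths in a feature graph.

    A: (n, n) nonnegative affinity matrix between features.
    Paths of length l contribute alpha**l * A**l; summing the geometric
    series over all lengths gives S = (I - alpha*A)^{-1} - I, computed in
    closed form. alpha must keep the series convergent
    (alpha < 1 / spectral_radius(A)); if not given, it is set just below
    that bound.
    """
    n = A.shape[0]
    rho = max(np.abs(np.linalg.eigvals(A)))
    if alpha is None:
        alpha = 0.9 / rho
    S = np.linalg.inv(np.eye(n) - alpha * A) - np.eye(n)
    return S.sum(axis=1)   # total path "energy" flowing through each feature
```

Features with higher scores participate in more, and more strongly weighted, paths through the affinity graph, which is the intuition behind ranking by relevancy.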
Handling non-ignorable dropouts in longitudinal data: A conditional model based on a latent Markov heterogeneity structure
We illustrate a class of conditional models for the analysis of longitudinal
data subject to attrition, within a random effects framework, where the
subject-specific random effects are assumed to be discrete and to follow a
time-dependent latent process. The latent process accounts for unobserved
heterogeneity and correlation between individuals in a dynamic fashion, and for
dependence between the observed process and the missing data mechanism. Of
particular interest is the case where the missing mechanism is non-ignorable.
To deal with this issue, we introduce a conditional-on-dropout model. A shape
change in the random effects distribution is considered by directly modeling
the effect of the missing data process on the evolution of the latent
structure. To estimate the resulting model, we rely on the conditional maximum
likelihood approach and for this aim we outline an EM algorithm. The proposal
is illustrated via simulations and then applied to a dataset concerning skin
cancers. Comparisons with other well-established methods are provided as well.
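The discrete random-effects idea can be illustrated with a minimal EM algorithm for a latent class model. This toy sketch strips out the paper's time dynamics and dropout mechanism entirely: each observation is Gaussian around one of a few latent class means with a shared variance, and EM alternates posterior class probabilities (E-step) with weighted updates of the parameters (M-step). The function name and initialization are assumptions.

```python
import numpy as np

def latent_class_em(y, n_classes=2, n_iter=50):
    """Minimal EM for a discrete random-effects (latent class) model.

    y: (n,) array of per-subject responses.
    Returns (pi, mu): class mixing weights and class means.
    """
    y = np.asarray(y, dtype=float)
    mu = np.linspace(y.min(), y.max(), n_classes)  # spread-out initialization
    pi = np.full(n_classes, 1.0 / n_classes)
    sigma = y.std() + 1e-8
    for _ in range(n_iter):
        # E-step: posterior probability of each latent class per observation
        logp = -0.5 * ((y[:, None] - mu[None, :]) / sigma) ** 2 + np.log(pi)
        w = np.exp(logp - logp.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, class means, and shared std dev
        pi = w.mean(axis=0)
        mu = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)
        sigma = np.sqrt((w * (y[:, None] - mu) ** 2).sum() / len(y)) + 1e-8
    return pi, mu
```

The paper's model replaces the static class labels with a latent Markov process and augments the likelihood with the dropout mechanism, but the E/M alternation has the same structure.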
Fully Bayesian Logistic Regression with Hyper-Lasso Priors for High-dimensional Feature Selection
High-dimensional feature selection arises in many areas of modern science.
For example, in genomic research we want to find the genes that can be used to
separate tissues of different classes (e.g. cancer and normal) from tens of
thousands of genes that are active (expressed) in certain tissue cells. To this
end, we wish to fit regression and classification models with a large number of
features (also called variables, predictors). In the past decade, penalized
likelihood methods for fitting regression models based on hyper-LASSO
penalization have received increasing attention in the literature. However,
fully Bayesian methods that use Markov chain Monte Carlo (MCMC) remain
underdeveloped. In this paper we introduce an MCMC
(fully Bayesian) method for learning severely multi-modal posteriors of
logistic regression models based on hyper-LASSO priors (non-convex penalties).
Our MCMC algorithm uses Hamiltonian Monte Carlo in a restricted Gibbs sampling
framework; we call our method Bayesian logistic regression with hyper-LASSO
(BLRHL) priors. We have used simulation studies and real data analysis to
demonstrate the superior performance of hyper-LASSO priors, and to investigate
the issues of choosing heaviness and scale of hyper-LASSO priors.

Comment: 33 pages. arXiv admin note: substantial text overlap with
arXiv:1308.469
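The target posterior can be sketched with a much simpler sampler than the paper's. The toy below uses plain random-walk Metropolis, not the authors' Hamiltonian Monte Carlo within restricted Gibbs, and a Student-t prior as a stand-in for a heavy-tailed hyper-LASSO-style penalty; the function name, step size, and prior parameterization are all assumptions, and the point is only to show the posterior being sampled.

```python
import numpy as np

def logistic_mh_sampler(X, y, n_samples=2000, step=0.1, df=1.0, scale=1.0):
    """Toy MCMC for Bayesian logistic regression with a heavy-tailed
    Student-t prior on each coefficient (random-walk Metropolis)."""
    rng = np.random.default_rng(1)
    n, p = X.shape

    def log_post(beta):
        z = X @ beta
        loglik = np.sum(y * z - np.logaddexp(0.0, z))        # Bernoulli-logit
        logprior = np.sum(-0.5 * (df + 1.0)
                          * np.log1p((beta / scale) ** 2 / df))  # Student-t
        return loglik + logprior

    beta = np.zeros(p)
    lp = log_post(beta)
    draws = []
    for _ in range(n_samples):
        prop = beta + step * rng.standard_normal(p)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            beta, lp = prop, lp_prop
        draws.append(beta.copy())
    return np.array(draws)
```

The heavy tails let truly relevant coefficients escape shrinkage while the sharp peak at zero suppresses the rest; the resulting multi-modality is exactly why the paper needs the more powerful HMC-within-Gibbs sampler rather than a random walk like this one.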