Efficient Computation of Log-likelihood Function in Clustering Overdispersed Count Data
In this work, we present an overdispersed count data clustering algorithm that uses the mesh method to compute the log-likelihood function of the multinomial Dirichlet, multinomial generalized Dirichlet, and multinomial Beta-Liouville distributions. Count data arise in many areas such as information retrieval, data mining, and computer vision. The multinomial Dirichlet distribution (MDD) is one of the most widely used models for multi-categorical count data with overdispersion. Recent work has proposed the mesh algorithm, which approximates the MDD's log-likelihood function using Bernoulli polynomials, as an alternative to the traditional numerical computation of the log-likelihood, which is either unstable or so slow that modeling large-scale data becomes infeasible. We therefore extend the mesh algorithm to compute the log-likelihood function of two more flexible distributions, the multinomial generalized Dirichlet (MGD) and the multinomial Beta-Liouville (MBL). A finite mixture model based on these distributions is optimized by expectation maximization and aims at high accuracy for count data clustering. Experiments on two real-world clustering problems, natural scene categorization and facial expression recognition, demonstrate the merits of the proposed approach.
Toward Reliable Human Pose Forecasting with Uncertainty
Recently, there has been an arms race of pose forecasting methods aimed at
solving the spatio-temporal task of predicting a sequence of future 3D poses of
a person given a sequence of past observed ones. However, the lack of unified
benchmarks and limited uncertainty analysis have hindered progress in the
field. To address this, we first develop an open-source library for human pose
forecasting, featuring multiple models, datasets, and standardized evaluation
metrics, with the aim of promoting research and moving toward a unified and
fair evaluation. Second, we devise two types of uncertainty in the problem to
increase performance and convey better trust: 1) we propose a method for
modeling aleatoric uncertainty by using uncertainty priors to inject knowledge
about the behavior of uncertainty. This focuses the capacity of the model in
the direction of more meaningful supervision while reducing the number of
learned parameters and improving stability; 2) we introduce a novel approach
for quantifying the epistemic uncertainty of any model through clustering and
measuring the entropy of its assignments. Our experiments demonstrate
improvements in accuracy and better performance in uncertainty
estimation.