Dynamic Control of Explore/Exploit Trade-Off In Bayesian Optimization
Bayesian optimization offers the possibility of optimizing black-box
functions not accessible through traditional techniques. The success of
Bayesian optimization methods such as Expected Improvement (EI) is
significantly affected by the degree of trade-off between exploration and
exploitation. Too much exploration can lead to inefficient optimization
protocols, whilst too much exploitation leaves the protocol open to strong
initial biases, and a high chance of getting stuck in a local minimum.
Typically, a constant margin is used to control this trade-off, which results
in yet another hyper-parameter to be optimized. We propose contextual
improvement as a simple yet effective heuristic to counter this, achieving a
one-shot optimization strategy. Our proposed heuristic can be swiftly
calculated and improves both the speed and robustness of discovery of optimal
solutions. We demonstrate its effectiveness on both synthetic and real world
problems and explore the unaccounted-for uncertainty in the pre-determination
of search hyperparameters controlling the explore-exploit trade-off.
Comment: Accepted for publication in the proceedings of the 2018 Computing Conference.
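A minimal sketch of the trade-off under discussion: standard Expected Improvement with a constant exploration margin xi, next to a hypothetical "contextual" variant that ties the margin to the surrogate's current predictive uncertainty. The contextual rule below is an illustrative assumption, not the paper's exact heuristic.

```python
# Illustrative sketch only: EI with a fixed margin xi, plus an assumed
# "contextual" margin derived from the surrogate's current uncertainty.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f, xi=0.01):
    """Standard EI for minimization; xi is the constant exploration margin."""
    imp = best_f - mu - xi
    z = imp / np.maximum(sigma, 1e-12)
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

def contextual_improvement(mu, sigma, best_f):
    """Hypothetical contextual margin: exploration scales with the average
    posterior variance, shrinking as the surrogate becomes confident."""
    xi = np.mean(sigma ** 2)
    return expected_improvement(mu, sigma, best_f, xi=xi)
```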
Gene Function Classification Using Bayesian Models with Hierarchy-Based Priors
We investigate the application of hierarchical classification schemes to the
annotation of gene function based on several characteristics of protein
sequences, including phylogenetic descriptors, sequence-based attributes, and
predicted secondary structure. We discuss three Bayesian models and compare
their performance in terms of predictive accuracy. These models are the
ordinary multinomial logit (MNL) model, a hierarchical model based on a set of
nested MNL models, and an MNL model with a prior that introduces correlations
between the parameters for classes that are nearby in the hierarchy. We also
provide a new scheme for combining different sources of information. We use
these models to predict the functional class of Open Reading Frames (ORFs) from
the E. coli genome. The results from all three models show substantial
improvement over previous methods, which were based on the C5 algorithm. The
MNL model using a prior based on the hierarchy outperforms both the
non-hierarchical MNL model and the nested MNL model. In contrast to previous
attempts at combining these sources of information, our approach results in a
higher accuracy rate when compared to models that use each data source alone.
Together, these results show that gene function can be predicted with higher
accuracy than previously achieved, using Bayesian models that incorporate
suitable prior information.
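The hierarchy-based prior can be pictured as follows: each leaf class's coefficient vector is the sum of effects shared along its path in the functional hierarchy, so classes with common ancestors receive correlated parameters. The toy hierarchy, class names, and feature dimension below are illustrative assumptions, not the E. coli annotation scheme itself.

```python
# Toy sketch of a hierarchy-based prior for a multinomial logit (MNL) model:
# coefficients for nearby classes share ancestor effects and are thus correlated.
import numpy as np

rng = np.random.default_rng(0)
hierarchy = {                       # leaf class -> ancestors (illustrative)
    "transport":  ["metabolism"],
    "energy":     ["metabolism"],
    "structural": ["cell_structure"],
}
p = 5                               # number of sequence-derived features
node_effects = {node: rng.normal(0.0, 1.0, size=p)
                for path in hierarchy.values() for node in path}
leaf_effects = {leaf: rng.normal(0.0, 0.5, size=p) for leaf in hierarchy}

def class_coefficients(leaf):
    """Coefficients = leaf-specific effect + sum of ancestor effects."""
    beta = leaf_effects[leaf].copy()
    for node in hierarchy[leaf]:
        beta += node_effects[node]
    return beta

def class_probabilities(x):
    """Multinomial logit over leaf classes for a feature vector x."""
    logits = np.array([x @ class_coefficients(c) for c in hierarchy])
    logits -= logits.max()          # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

print(class_probabilities(rng.normal(size=p)))
```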
Sampling constrained probability distributions using Spherical Augmentation
Statistical models with constrained probability distributions are abundant in
machine learning. Some examples include regression models with norm constraints
(e.g., Lasso), probit, many copula models, and latent Dirichlet allocation
(LDA). Bayesian inference involving probability distributions confined to
constrained domains could be quite challenging for commonly used sampling
algorithms. In this paper, we propose a novel augmentation technique that
handles a wide range of constraints by mapping the constrained domain to a
sphere in the augmented space. By moving freely on the surface of this sphere,
sampling algorithms handle constraints implicitly and generate proposals that
remain within boundaries when mapped back to the original space. Our proposed
method, called Spherical Augmentation, provides a mathematically natural and
computationally efficient framework for sampling from constrained probability
distributions. We show the advantages of our method over state-of-the-art
sampling algorithms, such as exact Hamiltonian Monte Carlo, using several
examples including truncated Gaussian distributions, Bayesian Lasso, Bayesian
bridge regression, reconstruction of quantized stationary Gaussian process, and
LDA for topic modeling.
Comment: 41 pages, 13 figures.
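A minimal sketch of the augmentation idea for the simplest (ball-type) constraint: a point inside the unit ball is lifted onto the unit sphere with one auxiliary coordinate, a proposal is made on the sphere, and dropping the auxiliary coordinate returns a point that automatically satisfies the constraint. This illustrates only the mapping, not the paper's full spherical HMC machinery.

```python
# Sketch of the Spherical Augmentation idea for a unit-ball constraint ||beta|| <= 1.
import numpy as np

rng = np.random.default_rng(1)

def lift_to_sphere(beta):
    """Embed a point of the unit ball on the unit sphere in R^{d+1}."""
    return np.append(beta, np.sqrt(max(1.0 - beta @ beta, 0.0)))

def sphere_step(theta, step=0.1):
    """Random-walk proposal re-projected onto the unit sphere."""
    prop = theta + step * rng.normal(size=theta.size)
    return prop / np.linalg.norm(prop)

beta = np.array([0.3, -0.2])
theta_new = sphere_step(lift_to_sphere(beta))
beta_new = theta_new[:-1]            # mapping back: the constraint holds for free
assert beta_new @ beta_new <= 1.0 + 1e-12
```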
BioMetricNet: deep unconstrained face verification through learning of metrics regularized onto Gaussian distributions
We present BioMetricNet: a novel framework for deep unconstrained face
verification which learns a regularized metric to compare facial features.
Differently from popular methods such as FaceNet, the proposed approach does
not impose any specific metric on facial features; instead, it shapes the
decision space by learning a latent representation in which matching and
non-matching pairs are mapped onto clearly separated and well-behaved target
distributions. In particular, the network jointly learns the best feature
representation, and the best metric that follows the target distributions, to
be used to discriminate face images. In this paper we present this general
framework, first of its kind for facial verification, and tailor it to Gaussian
distributions. This choice enables the use of a simple linear decision boundary
that can be tuned to achieve the desired trade-off between false alarm and
genuine acceptance rate, and leads to a loss function that can be written in
closed form. Extensive analysis and experimentation on publicly available
datasets such as Labeled Faces in the Wild (LFW), YouTube Faces (YTF),
Celebrities in Frontal-Profile in the Wild (CFP), and challenging datasets like
cross-age LFW (CALFW), cross-pose LFW (CPLFW), In-the-wild Age Dataset (AgeDB)
show a significant performance improvement and confirm the effectiveness and
superiority of BioMetricNet over existing state-of-the-art methods.
Comment: Accepted at ECCV 2020.
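One way to picture the target-distribution idea: the network outputs a scalar score for each face pair, matching pairs are pushed toward one Gaussian and non-matching pairs toward another, and with equal variances the optimal decision boundary reduces to a simple threshold. The target means and variances below are illustrative assumptions, not the paper's chosen values.

```python
# Hedged sketch of a loss that shapes pair scores toward two Gaussian targets.
import numpy as np

MATCH    = dict(mu=-1.0, sigma=0.5)   # assumed target for genuine pairs
NONMATCH = dict(mu=+1.0, sigma=0.5)   # assumed target for impostor pairs

def nll(score, t):
    """Negative log-likelihood of a scalar score under a 1-D Gaussian target."""
    return 0.5 * ((score - t["mu"]) / t["sigma"]) ** 2 \
        + np.log(t["sigma"] * np.sqrt(2.0 * np.pi))

def pair_loss(scores, labels):
    """labels: 1 for matching pairs, 0 for non-matching pairs."""
    return float(np.mean([nll(s, MATCH if y else NONMATCH)
                          for s, y in zip(scores, labels)]))

# With equal variances the linear decision boundary is the midpoint of the means.
threshold = 0.5 * (MATCH["mu"] + NONMATCH["mu"])
print(pair_loss([-0.9, 1.2, 0.1], [1, 0, 1]), threshold)
```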
Improved algorithm for neuronal ensemble inference by Monte Carlo method
Neuronal ensemble inference is one of the significant problems in the study
of biological neural networks. Various methods have been proposed for ensemble
inference from their activity data taken experimentally. Here we focus on a
Bayesian inference approach for ensembles with a generative model, which was
proposed in recent work. However, this method requires a large computational
cost, and the result sometimes gets stuck in a bad local maximum of the
Bayesian inference. In this work, we give an improved Bayesian inference
algorithm for these problems. We modify the ensemble generation rule in the
Markov chain Monte Carlo method and introduce the idea of simulated annealing
for hyperparameter control. We also compare the performance of ensemble
inference between our algorithm and the original one.
Comment: 14 pages, 3 figures.
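The annealing ingredient can be sketched in isolation: a Metropolis update whose acceptance test is tempered by a temperature that decays over sweeps, so early sweeps can escape poor local maxima and later sweeps settle down. The toy energy function and cooling schedule below are stand-ins, not the paper's generative model of ensemble activity.

```python
# Simulated-annealing-flavoured Metropolis sketch with a toy energy function.
import numpy as np

rng = np.random.default_rng(2)

def metropolis_annealed(energy, state, n_sweeps=1000, t0=2.0, t_min=0.05):
    for sweep in range(n_sweeps):
        temp = max(t0 * 0.995 ** sweep, t_min)   # geometric cooling schedule
        proposal = state + rng.normal(scale=0.1, size=state.shape)
        delta = energy(proposal) - energy(state)
        if delta < 0 or rng.random() < np.exp(-delta / temp):
            state = proposal
    return state

# Toy usage: a double-well energy standing in for the negative log-posterior.
print(metropolis_annealed(lambda x: float((x @ x - 1.0) ** 2),
                          np.array([2.0, 2.0])))
```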
Generalized Bayesian Record Linkage and Regression with Exact Error Propagation
Record linkage (de-duplication or entity resolution) is the process of
merging noisy databases to remove duplicate entities. While record linkage
removes duplicate entities from such databases, the downstream task is any
inferential, predictive, or post-linkage task on the linked data. One goal of
the downstream task is obtaining a larger reference data set, allowing one to
perform more accurate statistical analyses. In addition, there is inherent
record linkage uncertainty passed to the downstream task. Motivated by the
above, we propose a generalized Bayesian record linkage method and consider
multiple regression analysis as the downstream task. Records are linked via a
random partition model, which allows a wide class of partition models to be considered. In
addition, we jointly model the record linkage and downstream task, which allows
one to account for the record linkage uncertainty exactly. Moreover, one is
able to generate a feedback propagation mechanism of the information from the
proposed Bayesian record linkage model into the downstream task. This feedback
effect is essential to eliminate potential biases that can jeopardize the resulting
downstream task. We apply our methodology to multiple linear regression, and
illustrate empirically that the "feedback effect" is able to improve the
performance of record linkage.
Comment: 18 pages, 5 figures.
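A plainer two-stage approximation conveys the propagation idea, though it is not the paper's joint model: regression estimates are computed under each posterior draw of the linkage and then pooled, so linkage uncertainty shows up as between-draw variance rather than being ignored. The helper names and the use of ordinary least squares are assumptions for illustration.

```python
# Sketch: propagate record-linkage uncertainty into a downstream regression
# by averaging over sampled linkages (a simplification of the joint model).
import numpy as np

def ols(X, y):
    """Ordinary least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def pooled_regression(linkage_samples, build_design):
    """linkage_samples: posterior draws of the record partition (assumed given);
    build_design: maps one partition to a merged design matrix (X, y)."""
    draws = np.array([ols(*build_design(part)) for part in linkage_samples])
    beta = draws.mean(axis=0)            # pooled coefficient estimate
    between = draws.var(axis=0)          # variance due to linkage uncertainty
    return beta, between
```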
Hierarchical approach for deriving a reproducible unblocked LU factorization
We propose a reproducible variant of the unblocked LU factorization for graphics processor units (GPUs). For this purpose, we build upon Level-1/2 BLAS kernels that deliver correctly-rounded and reproducible results for the dot (inner) product, vector scaling, and the matrix-vector product. In addition, we draw a strategy to enhance the accuracy of the triangular solve via iterative refinement. Following a bottom-up approach, we finally construct a reproducible unblocked implementation of the LU factorization for GPUs, which accommodates partial pivoting for stability and can eventually be integrated into a high-performance and stable algorithm for the (blocked) LU factorization.
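For reference, the conventional (non-reproducible) unblocked, right-looking LU with partial pivoting that such a variant builds on looks as follows; the correctly-rounded Level-1/2 BLAS kernels and the iterative refinement of the triangular solve are not shown.

```python
# Textbook unblocked LU with partial pivoting (reference sketch, not the
# reproducible GPU implementation described in the paper).
import numpy as np

def lu_unblocked(A):
    """Returns packed LU factors and permutation p such that A[p] = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    p = np.arange(n)
    for k in range(n - 1):
        piv = k + np.argmax(np.abs(A[k:, k]))    # partial pivoting for stability
        if piv != k:
            A[[k, piv]] = A[[piv, k]]
            p[[k, piv]] = p[[piv, k]]
        A[k+1:, k] /= A[k, k]                    # column scaling (Level-1 kernel)
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # rank-1 update (Level-2)
    return A, p

# Quick check that the factorization reproduces the permuted matrix.
M = np.random.default_rng(4).normal(size=(5, 5))
LU, p = lu_unblocked(M)
L = np.tril(LU, -1) + np.eye(5)
U = np.triu(LU)
assert np.allclose(M[p], L @ U)
```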
Posterior-based proposals for speeding up Markov chain Monte Carlo
Markov chain Monte Carlo (MCMC) is widely used for Bayesian inference in
models of complex systems. Performance, however, is often unsatisfactory in
models with many latent variables due to so-called poor mixing, necessitating
the development of application-specific implementations. This paper introduces
"posterior-based proposals" (PBPs), a new type of MCMC update applicable to a
huge class of statistical models (whose conditional dependence structures are
represented by directed acyclic graphs). PBPs generate large joint updates in
parameter and latent variable space, whilst retaining good acceptance rates
(typically 33%). Evaluation against other approaches (from standard Gibbs /
random walk updates to state-of-the-art Hamiltonian and particle MCMC methods)
was carried out for widely varying model types: an individual-based model for
disease diagnostic test data, a financial stochastic volatility model, a mixed
model used in statistical genetics and a population model used in ecology.
Whilst different methods worked better or worse in different scenarios, PBPs
were found to be either near to the fastest or significantly faster than the
next best approach (by up to a factor of 10). PBPs therefore represent an
additional general purpose technique that can be usefully applied in a wide
variety of contexts.
Comment: 54 pages, 11 figures, 2 tables.
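The abstract does not spell out how a posterior-based proposal is constructed, so the sketch below shows only the generic Metropolis-Hastings accept/reject step that any joint (parameter, latent-variable) proposal of this kind must pass; log_post and log_q are assumed user-supplied densities, not quantities defined by the paper.

```python
# Generic MH step for a joint update of parameters and latent variables.
import numpy as np

rng = np.random.default_rng(5)

def mh_joint_step(state, propose, log_post, log_q):
    """state bundles (theta, z); log_q(frm, to) is the log proposal density
    of moving from `frm` to `to`."""
    new = propose(state)
    log_alpha = (log_post(new) - log_post(state)
                 + log_q(new, state) - log_q(state, new))
    return new if np.log(rng.random()) < log_alpha else state
```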
Geometrical Insights for Implicit Generative Modeling
Learning algorithms for implicit generative models can optimize a variety of
criteria that measure how the data distribution differs from the implicit model
distribution, including the Wasserstein distance, the Energy distance, and the
Maximum Mean Discrepancy criterion. A careful look at the geometries induced by
these distances on the space of probability measures reveals interesting
differences. In particular, we can establish surprising approximate global
convergence guarantees for the 1-Wasserstein distance, even when the
parametric generator has a nonconvex parametrization.
Comment: this version fixes a typo in a definition.
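Two of the criteria being compared are easy to state from samples; below are plain V-statistic estimators of the energy distance and a Gaussian-kernel MMD between a data sample and a model sample, for illustration only (the bandwidth and sample sizes are arbitrary choices).

```python
# Sample-based estimators of the energy distance and Gaussian-kernel MMD.
import numpy as np

def pairwise_dists(X, Y):
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)

def energy_distance(X, Y):
    return (2.0 * pairwise_dists(X, Y).mean()
            - pairwise_dists(X, X).mean()
            - pairwise_dists(Y, Y).mean())

def mmd_gaussian(X, Y, bandwidth=1.0):
    k = lambda D: np.exp(-D ** 2 / (2.0 * bandwidth ** 2))
    return (k(pairwise_dists(X, X)).mean()
            + k(pairwise_dists(Y, Y)).mean()
            - 2.0 * k(pairwise_dists(X, Y)).mean())

X = np.random.default_rng(6).normal(size=(200, 2))           # "data" sample
Y = np.random.default_rng(7).normal(loc=0.5, size=(200, 2))  # "model" sample
print(energy_distance(X, Y), mmd_gaussian(X, Y))
```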
Probabilistic Clustering of Time-Evolving Distance Data
We present a novel probabilistic clustering model for objects that are
represented via pairwise distances and observed at different time points. The
proposed method utilizes the information given by adjacent time points to find
the underlying cluster structure and obtain a smooth cluster evolution. This
approach allows the number of objects and clusters to differ at every time
point, and no identification of the objects' identities across time points is needed.
Further, the model does not require the number of clusters to be specified in
advance -- they are instead determined automatically using a Dirichlet process
prior. We validate our model on synthetic data showing that the proposed method
is more accurate than state-of-the-art clustering methods. Finally, we use our
dynamic clustering model to analyze and illustrate the evolution of brain
cancer patients over time.
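The reason the number of clusters need not be fixed can be seen from the sequential form of the Dirichlet process prior, the Chinese restaurant process: each object joins an existing cluster with probability proportional to its size, or opens a new one with probability proportional to the concentration alpha. The sketch below draws cluster labels from that prior alone, without the distance likelihood used in the paper.

```python
# Chinese restaurant process draw: the cluster count emerges rather than being fixed.
import numpy as np

def crp_assignments(n_objects, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    assignments = [0]                         # first object starts cluster 0
    for _ in range(1, n_objects):
        counts = np.bincount(assignments)
        probs = np.append(counts, alpha).astype(float)  # existing clusters + new one
        probs /= probs.sum()
        assignments.append(int(rng.choice(len(probs), p=probs)))
    return assignments

print(crp_assignments(10))   # cluster labels; new labels appear as needed
```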