The Graph Cut Kernel for Ranked Data
Many algorithms for ranked data become computationally intractable as the
number of objects grows due to the complex geometric structure induced by
rankings. An additional challenge is posed by partial rankings, i.e. rankings
in which the preference is only known for a subset of all objects. For these
reasons, state-of-the-art methods cannot scale to real-world applications, such
as recommender systems. We address this challenge by exploiting the geometric
structure of ranked data and additional available information about the objects
to derive a kernel for ranking based on the graph cut function. The graph cut
kernel combines the efficiency of submodular optimization with the theoretical
properties of kernel-based methods.
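The submodular building block the abstract refers to, the graph cut function, can be sketched in a few lines. This is an illustrative toy (graph, weights, and function names are assumptions, not from the paper): the cut value of a vertex subset is the total weight of edges crossing to its complement.

```python
# Hypothetical sketch of the graph cut function: the submodular set function
# that the abstract's kernel builds on. Edges are stored as (u, v) -> weight.
def cut_value(edges, subset):
    """Total weight of edges with exactly one endpoint in `subset`."""
    s = set(subset)
    return sum(w for (u, v), w in edges.items() if (u in s) != (v in s))

# Toy weighted graph on 4 nodes.
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.5, (0, 3): 0.5}

print(cut_value(edges, {0, 1}))  # edges (1,2) and (0,3) cross: 2.0 + 0.5 = 2.5
```

The cut function is submodular, which is what allows kernel evaluations built on it to reuse efficient submodular optimization machinery.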
Uncovering Causality from Multivariate Hawkes Integrated Cumulants
We design a new nonparametric method that allows one to estimate the matrix
of integrated kernels of a multivariate Hawkes process. This matrix not only
encodes the mutual influences of each node of the process, but also
disentangles the causality relationships between them. Our approach is the
first that leads to an estimation of this matrix without any parametric
modeling and estimation of the kernels themselves. A consequence is that it can
give an estimation of causality relationships between nodes (or users), based
on their activity timestamps (on a social network for instance), without
knowing or estimating the shape of the activity kernels. For that purpose,
we introduce a moment matching method that fits the third-order integrated
cumulants of the process. Numerical experiments show that our approach is
indeed very robust to the shape of the kernels, and gives appealing results on
the MemeTracker database.
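As a minimal illustration of the cumulant-matching idea, the first integrated cumulant of a multivariate point process is just each node's mean event rate, which can be estimated directly from timestamps with no kernel model. The function and variable names below are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of the simplest nonparametric moment estimate used in
# cumulant-based Hawkes estimation: the mean intensity Lambda_i = N_i(T) / T
# of each node, computed directly from its event timestamps on [0, T].
def mean_intensities(timestamps, T):
    """Empirical mean intensity per node: event count divided by horizon."""
    return [len(ts) / T for ts in timestamps]

# Two nodes observed over a horizon of T = 10 time units.
events = [[0.5, 2.1, 3.3, 7.9], [1.0, 6.4]]
print(mean_intensities(events, 10.0))  # [0.4, 0.2]
```

The second- and third-order integrated cumulants are estimated similarly from empirical moments of event counts; matching them pins down the matrix of integrated kernels without any parametric assumption.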
Sampling permutations for Shapley value estimation
Game-theoretic attribution techniques based on Shapley values are used to interpret black-box machine learning models, but their exact calculation is generally NP-hard, requiring approximation methods for non-trivial models. As the computation of Shapley values can be expressed as a summation over a set of permutations, a common approach is to sample a subset of these permutations for approximation. Unfortunately, standard Monte Carlo sampling methods can exhibit slow convergence, and more sophisticated quasi-Monte Carlo methods have not yet been applied to the space of permutations. To address this, we investigate new approaches based on two classes of approximation methods and compare them empirically. First, we demonstrate quadrature techniques in an RKHS containing functions of permutations, using the Mallows kernel in combination with kernel herding and sequential Bayesian quadrature. The RKHS perspective also leads to quasi-Monte Carlo type error bounds, with a tractable discrepancy measure defined on permutations. Second, we exploit connections between the hypersphere S^{d-2} and permutations to create practical algorithms for generating permutation samples with good properties. Experiments show the above techniques provide significant improvements for Shapley value estimates over existing methods, converging to a smaller RMSE in the same number of model evaluations.
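The baseline the abstract improves on, plain Monte Carlo Shapley estimation over sampled permutations, can be sketched as follows. The set function and variable names are illustrative assumptions; the toy game is additive, so the exact Shapley values equal the per-player weights.

```python
import random

# Hedged sketch of Monte Carlo Shapley estimation: sample permutations of the
# players and average each player's marginal contribution over the orderings.
def shapley_mc(value, players, n_samples, rng):
    est = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = list(players)
        rng.shuffle(perm)
        coalition, prev = set(), value(set())
        for p in perm:
            coalition.add(p)
            cur = value(coalition)
            est[p] += cur - prev  # marginal contribution of p in this ordering
            prev = cur
    return {p: v / n_samples for p, v in est.items()}

# Additive toy game: exact Shapley values are just the per-player weights.
weights = {0: 1.0, 1: 2.0, 2: 3.0}
value = lambda s: sum(weights[p] for p in s)
print(shapley_mc(value, list(weights), 200, random.Random(0)))
# -> {0: 1.0, 1: 2.0, 2: 3.0}
```

The paper's contribution is to replace the i.i.d. permutation samples above with better-spread samples (kernel herding, Bayesian quadrature, sphere-based constructions), which reduces RMSE for the same number of model evaluations.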
BaCO: A Fast and Portable Bayesian Compiler Optimization Framework
We introduce the Bayesian Compiler Optimization framework (BaCO), a general
purpose autotuner for modern compilers targeting CPUs, GPUs, and FPGAs. BaCO
provides the flexibility needed to handle the requirements of modern autotuning
tasks. Particularly, it deals with permutation, ordered, and continuous
parameter types along with both known and unknown parameter constraints. To
reason about these parameter types and efficiently deliver high-quality code,
BaCO uses Bayesian optimization algorithms specialized towards the autotuning
domain. We demonstrate BaCO's effectiveness on three modern compiler systems:
TACO, RISE & ELEVATE, and HPVM2FPGA for CPUs, GPUs, and FPGAs, respectively. For
these domains, BaCO outperforms current state-of-the-art autotuners by
delivering on average 1.36x-1.56x faster code with a tiny search budget, and
BaCO is able to reach expert-level performance 2.9x-3.9x faster.
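The mixed parameter types the abstract mentions (permutation, ordered, continuous) can be made concrete with a toy search space. Everything below is an illustrative assumption: the cost function is synthetic, and the search is plain random sampling standing in for the Bayesian optimization BaCO actually uses on real compile-and-run measurements.

```python
import random

# Illustrative-only sketch of an autotuning search space with the three
# parameter types BaCO handles. Names and values are hypothetical.
def sample_config(rng):
    loops = ['i', 'j', 'k']
    rng.shuffle(loops)
    return {
        'loop_order': tuple(loops),           # permutation parameter
        'tile': rng.choice([8, 16, 32, 64]),  # ordered parameter
        'unroll': rng.uniform(1.0, 8.0),      # continuous parameter
    }

def synthetic_cost(cfg):
    # Toy stand-in for measured runtime: prefers ('i','j','k'), tile 32,
    # and an unroll factor near 4. A real autotuner would compile and run.
    penalty = 0.0 if cfg['loop_order'] == ('i', 'j', 'k') else 1.0
    return penalty + abs(cfg['tile'] - 32) / 32 + abs(cfg['unroll'] - 4.0)

def random_search(n, seed=0):
    rng = random.Random(seed)
    return min((sample_config(rng) for _ in range(n)), key=synthetic_cost)

best = random_search(500)
print(best['loop_order'], best['tile'])
```

A larger budget can only improve (or match) the best configuration found, which is the baseline behavior Bayesian optimization is meant to beat by needing far fewer evaluations.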