
    Model Extraction Warning in MLaaS Paradigm

    Cloud vendors are increasingly offering machine learning services as part of their platform and services portfolios. These services enable the deployment of machine learning models on the cloud that are offered on a pay-per-query basis to application developers and end users. However, recent work has shown that the hosted models are susceptible to extraction attacks. Adversaries may launch queries to steal the model and compromise future query payments or the privacy of the training data. In this work, we present a cloud-based extraction monitor that can quantify the extraction status of models by observing the query and response streams of both individual and colluding adversarial users. We present a novel technique that uses information gain to measure the model learning rate by users with an increasing number of queries. Additionally, we present an alternate technique that maintains intelligent query summaries to measure the learning rate relative to the coverage of the input feature space in the presence of collusion. Both approaches have low computational overhead and can easily be offered as services to model owners to warn them of possible extraction attacks from adversaries. We present performance results for these approaches for decision tree models deployed on the BigML MLaaS platform, using open source datasets and different adversarial attack strategies.
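    As a rough illustration of the information-gain idea, the sketch below accumulates an adversary's query responses and reports entropy-based quantities. This is a minimal sketch, not the paper's implementation: the `ExtractionMonitor` class and its entropy-ratio proxy for the learned fraction are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a multiset of labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, partitions):
    """Entropy reduction from splitting `labels` into `partitions` --
    the quantity a decision-tree learner maximizes at each split."""
    n = len(labels)
    remainder = sum(len(p) / n * entropy(p) for p in partitions if p)
    return entropy(labels) - remainder

class ExtractionMonitor:
    """Hypothetical monitor: records the (query, response) stream of one
    user and reports a crude proxy for how much of the hosted model the
    user could have learned so far."""
    def __init__(self):
        self.responses = []

    def observe(self, query, response):
        self.responses.append(response)

    def learned_fraction(self, model_entropy):
        # proxy: entropy of observed responses relative to the entropy
        # of the full model's label distribution
        if not self.responses:
            return 0.0
        return min(1.0, entropy(self.responses) / model_entropy)
```

    A real monitor would additionally track coverage of the input feature space and aggregate streams across potentially colluding users, as the paper describes.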

    Matter Wave Scattering from Ultracold Atoms in an Optical Lattice

    We study matter wave scattering from an ultracold, many-body atomic system trapped in an optical lattice. We determine the angular cross section that a matter wave probe sees and show that it is strongly affected by the many-body phase, superfluid or Mott insulator, of the target lattice. We determine these cross sections analytically in the first Born approximation, and we examine the variation at intermediate points in the phase transition by numerically diagonalizing the Bose-Hubbard Hamiltonian for a small lattice. We show that matter wave scattering offers a convenient method for non-destructively probing the quantum many-body phase transition of atoms in an optical lattice.
    Comment: 4 pages, 2 figures
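    The small-lattice diagonalization step can be illustrated with the simplest case, a two-site Bose-Hubbard model. This is a toy sketch under stated assumptions (two sites, fixed particle number; the paper's lattice and observables differ), with the standard Hamiltonian H = -J(a1†a2 + h.c.) + (U/2) Σ n_i(n_i - 1).

```python
import numpy as np

def bose_hubbard_2site(N, J, U):
    """Exact diagonalization of the two-site Bose-Hubbard model with N
    bosons. Basis states are |n, N-n> for n = 0..N bosons on site 1.
    Returns the sorted energy eigenvalues."""
    dim = N + 1
    H = np.zeros((dim, dim))
    for n in range(dim):
        n1, n2 = n, N - n
        # on-site interaction: (U/2) * sum_i n_i (n_i - 1)
        H[n, n] = 0.5 * U * (n1 * (n1 - 1) + n2 * (n2 - 1))
        if n1 > 0:   # hop 1 -> 2: |n1, n2> -> |n1-1, n2+1>
            H[n - 1, n] = -J * np.sqrt(n1 * (n2 + 1))
        if n2 > 0:   # hop 2 -> 1: |n1, n2> -> |n1+1, n2-1>
            H[n + 1, n] = -J * np.sqrt(n2 * (n1 + 1))
    return np.linalg.eigvalsh(H)
```

    Sweeping U/J through the superfluid-to-Mott-insulator crossover and recomputing the ground state is the same procedure, scaled up, that the paper uses to follow the cross sections through the transition.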

    Dynamical trapping and chaotic scattering of the harmonically driven barrier

    A detailed analysis of the classical nonlinear dynamics of a single driven square potential barrier with harmonically oscillating position is performed. The system exhibits dynamical trapping, which is associated with the existence of a stable island in phase space. Due to the unstable periodic orbits of the KAM structure, the driven barrier is a chaotic scatterer and shows stickiness of scattering trajectories in the vicinity of the stable island. The transmission function of a suitably prepared ensemble yields results which are very similar to tunneling resonances in the quantum mechanical regime; however, the origin of these resonances is different in the classical regime.
    Comment: 14 pages
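    A classical scattering trajectory in such a driven barrier can be integrated numerically. The sketch below is an assumption-laden toy version: it smooths the square barrier's sharp edges with Fermi functions of softness `s` so a standard integrator applies (the paper treats the sharp barrier exactly), and all parameter values are illustrative.

```python
import math

def barrier_force(x, t, V0=1.0, width=1.0, A=0.5, omega=2.0, s=0.05):
    """Force from a smoothed square barrier of height V0 and given width
    whose centre oscillates as A*sin(omega*t):
    V(x, t) = V0 / (1 + exp((|x - xb(t)| - width/2) / s))."""
    xb = A * math.sin(omega * t)
    u = (abs(x - xb) - width / 2) / s
    eu = math.exp(min(u, 50.0))          # cap to avoid overflow far away
    dVdx = -V0 * math.copysign(1.0, x - xb) / s * eu / (1 + eu) ** 2
    return -dVdx

def trajectory(x0, v0, t_max=50.0, dt=1e-3):
    """Velocity-Verlet integration of a unit-mass particle; returns the
    final position and velocity."""
    x, v, t = x0, v0, 0.0
    a = barrier_force(x, t)
    while t < t_max:
        x += v * dt + 0.5 * a * dt * dt
        a_new = barrier_force(x, t + dt)
        v += 0.5 * (a + a_new) * dt
        a, t = a_new, t + dt
    return x, v
```

    Launching an ensemble of such trajectories with slightly different initial conditions and recording which ones transmit is how a transmission function like the one discussed above would be assembled; near the stable island, trajectories can remain trapped for long times.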

    An Extended Empirical Saddlepoint Approximation for Intractable Likelihoods

    The challenges posed by complex stochastic models used in computational ecology, biology and genetics have stimulated the development of approximate approaches to statistical inference. Here we focus on Synthetic Likelihood (SL), a procedure that reduces the observed and simulated data to a set of summary statistics, and quantifies the discrepancy between them through a synthetic likelihood function. SL requires little tuning, but it relies on the approximate normality of the summary statistics. We relax this assumption by proposing a novel, more flexible density estimator: the Extended Empirical Saddlepoint approximation. In addition to proving the consistency of SL, under either the new or the Gaussian density estimator, we illustrate the method using two examples. One of these is a complex individual-based forest model for which SL offers one of the few practical possibilities for statistical inference. The examples show that the new density estimator is able to capture large departures from normality, while being scalable to high dimensions, and this in turn leads to more accurate parameter estimates, relative to the Gaussian alternative. The new density estimator is implemented by the esaddle R package, which can be found on the Comprehensive R Archive Network (CRAN).
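    The baseline Gaussian SL step described above can be sketched in a few lines. This is a minimal Python sketch rather than the paper's R implementation, and it uses the Gaussian density that the paper's extended empirical saddlepoint estimator is designed to replace; the function names and toy model are assumptions.

```python
import numpy as np

def synthetic_loglik(theta, observed_stats, simulate, n_sim=200, rng=None):
    """Gaussian Synthetic Likelihood: simulate n_sim datasets at `theta`,
    reduce each to summary statistics via `simulate`, fit a multivariate
    normal to the simulated summaries, and evaluate the observed
    summaries under it."""
    if rng is None:
        rng = np.random.default_rng(0)
    sims = np.array([simulate(theta, rng) for _ in range(n_sim)])
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False) + 1e-8 * np.eye(sims.shape[1])
    diff = observed_stats - mu
    _, logdet = np.linalg.slogdet(cov)
    d = len(mu)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))
```

    Swapping the normal fit for a more flexible density estimator of the simulated summaries, as the paper does, leaves the rest of this pipeline unchanged.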

    Structure of multicorrelation sequences with integer part polynomial iterates along primes

    Let $T$ be a measure preserving $\mathbb{Z}^\ell$-action on the probability space $(X,\mathcal{B},\mu)$, $q_1,\dots,q_m:\mathbb{R}\to\mathbb{R}^\ell$ vector polynomials, and $f_0,\dots,f_m\in L^\infty(X)$. For any $\epsilon>0$ and multicorrelation sequences of the form $$\alpha(n)=\int_X f_0\cdot T^{\lfloor q_1(n)\rfloor}f_1\cdots T^{\lfloor q_m(n)\rfloor}f_m\,d\mu$$ we show that there exists a nilsequence $\psi$ for which $$\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|\alpha(n)-\psi(n)|\leq\epsilon \quad\text{and}\quad \lim_{N\to\infty}\frac{1}{\pi(N)}\sum_{p\in\mathbb{P}\cap[1,N]}|\alpha(p)-\psi(p)|\leq\epsilon.$$ This result simultaneously generalizes previous results of Frantzikinakis [2] and the authors [11,13].
    Comment: 7 pages

    Partial Clustering in Binary Two-Dimensional Colloidal Suspensions

    Strongly interacting binary mixtures of superparamagnetic colloidal particles confined to a two-dimensional water-air interface are examined by theory, computer simulation and experiment. The mixture exhibits partial clustering in equilibrium: in the voids of the matrix of unclustered big particles, the small particles form subclusters with a sponge-like topology, which is accompanied by a characteristic small-wave-vector peak in the small-small structure factor. This partial clustering is a general phenomenon occurring for strongly coupled negatively non-additive mixtures.
    Comment: 12 pages, 5 figures, submitted 200
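    The small-wave-vector signature mentioned above is read off a structure factor, which for a single finite 2D configuration can be computed directly. A minimal sketch, assuming a snapshot of (here, small-small) particle positions; in practice one averages over many configurations and bins by $|k|$.

```python
import numpy as np

def structure_factor(positions, k_vectors):
    """Static structure factor S(k) = |sum_j exp(i k . r_j)|^2 / N for a
    set of 2D positions (shape (N, 2)) at the wave vectors in
    `k_vectors` (shape (nk, 2)). A peak at small |k| in the small-small
    channel signals sponge-like subclustering."""
    N = len(positions)
    phases = positions @ k_vectors.T        # (N, nk) array of k . r_j
    rho = np.exp(1j * phases).sum(axis=0)   # collective density mode
    return (np.abs(rho) ** 2) / N
```

    For a binary mixture one evaluates this separately for each species (and for cross terms) to obtain the partial structure factors discussed in the abstract.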

    Learning Multiple Defaults for Machine Learning Algorithms

    The performance of modern machine learning methods depends strongly on their hyperparameter configurations. One simple way of selecting a configuration is to use default settings, often proposed along with the publication and implementation of a new algorithm. Those default values are usually chosen in an ad-hoc manner to work well enough on a wide variety of datasets. To address this problem, different automatic hyperparameter configuration algorithms have been proposed, which select an optimal configuration per dataset. This principled approach usually improves performance, but adds algorithmic complexity and computational cost to the training procedure. As an alternative, we propose learning a set of complementary default values from a large database of prior empirical results. Selecting an appropriate configuration on a new dataset then requires only a simple, efficient and embarrassingly parallel search over this set. We demonstrate the effectiveness and efficiency of the proposed approach in comparison to random search and Bayesian optimization.
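    Learning complementary defaults from prior results can be sketched as a set-cover-style greedy selection over a performance matrix. This is a hedged illustration of the general idea: the `greedy_defaults` function and its mean-best objective are assumptions, and the paper's exact selection procedure may differ.

```python
import numpy as np

def greedy_defaults(perf, k):
    """Select k complementary default configurations. perf[i, j] is the
    performance of configuration i on prior dataset j (higher is
    better). Greedily add the configuration that most improves the
    average per-dataset best-so-far."""
    n_conf, n_data = perf.shape
    chosen = []
    best = np.full(n_data, -np.inf)   # best performance covered so far
    for _ in range(k):
        # objective if each candidate were added to the chosen set
        gains = np.maximum(perf, best).mean(axis=1)
        i = int(np.argmax(gains))
        chosen.append(i)
        best = np.maximum(best, perf[i])
    return chosen
```

    On a new dataset, one then simply evaluates the k chosen configurations in parallel and keeps the best, which matches the "embarrassingly parallel search" described above.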