557 research outputs found

    A new dp-minimal expansion of the integers

    Full text link
    We consider the structure $(\mathbb{Z},+,0,|_{p_1},\dots,|_{p_n})$, where $x \mid_p y$ means $v_p(x)\leq v_p(y)$ and $v_p$ is the $p$-adic valuation. We prove that its theory has quantifier elimination in the language $\{+,-,0,1,(D_m)_{m\geq 1},|_{p_1},\dots,|_{p_n}\}$, where $D_m(x)\leftrightarrow \exists y\,(my=x)$, and that it has dp-rank $n$. In addition, we prove that a first-order structure with universe $\mathbb{Z}$ which is an expansion of $(\mathbb{Z},+,0)$ and a reduct of $(\mathbb{Z},+,0,|_p)$ must be interdefinable with one of them. We also give an alternative proof of Conant's analogous result about $(\mathbb{Z},+,0,<)$. Comment: 24 pages
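To make the notation in the abstract concrete (an illustration not taken from the paper itself): for $p = 2$, the relation $x \mid_2 y$ compares 2-adic valuations, and the $D_m$ are the usual divisibility predicates used for quantifier elimination.

```latex
% v_2(x) is the exponent of 2 in the factorization of x.
% Example: 4 |_2 24 holds, since v_2(4) = 2 <= v_2(24) = 3,
% while 8 |_2 4 fails, since v_2(8) = 3 > v_2(4) = 2.
%
% The divisibility predicates:
D_m(x) \;\leftrightarrow\; \exists y\,(my = x),
\qquad \text{e.g. } D_3(12) \text{ holds } (y = 4), \text{ but } D_3(5) \text{ does not.}
```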

    The use of chimeric bacterial and plant protein toxins for targeted chemotherapy

    Get PDF

    Forwarders vs. centralized server

    Get PDF

    No-Regret Caching with Noisy Request Estimates

    Full text link
    Online learning algorithms have been successfully used to design caching policies with regret guarantees. Existing algorithms assume that the cache knows the exact request sequence, but this may not be feasible in high-load and/or memory-constrained scenarios, where the cache may have access only to sampled requests or to approximate request counters. In this paper, we propose the Noisy-Follow-the-Perturbed-Leader (NFPL) algorithm, a variant of the classic Follow-the-Perturbed-Leader (FPL) algorithm for settings where request estimates are noisy, and we show that the proposed solution has sublinear regret under specific conditions on the request estimator. The experimental evaluation compares the proposed solution against classic caching policies and validates the proposed approach on both synthetic and real request traces.
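The general Follow-the-Perturbed-Leader idea behind this abstract can be sketched as follows. This is a minimal illustration under assumed parameters (Gaussian perturbations, a Gaussian noise model on the counters), not the authors' NFPL algorithm or its regret-optimal tuning:

```python
import numpy as np

rng = np.random.default_rng(0)

def nfpl_cache(requests, catalog_size, cache_size, noise_std=0.5, eta=1.0):
    """Sketch of perturbed-leader caching with noisy request estimates:
    at each step, cache the `cache_size` items whose perturbed cumulative
    (estimated) request counts are largest.  Returns the empirical hit rate."""
    counts = np.zeros(catalog_size)  # noisy cumulative request estimates
    hits = 0
    for t, item in enumerate(requests, start=1):
        # perturbation scale grows like sqrt(t), as in classic FPL analyses
        perturbation = eta * np.sqrt(t) * rng.standard_normal(catalog_size)
        cache = np.argpartition(counts + perturbation, -cache_size)[-cache_size:]
        if item in cache:
            hits += 1
        # the cache only sees a noisy estimate of the request counter
        counts[item] += 1.0 + noise_std * rng.standard_normal()
    return hits / len(requests)
```

With a strongly skewed trace, the perturbed leader quickly concentrates on the popular items despite the counter noise.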

    Computing the Hit Rate of Similarity Caching

    Full text link
    Similarity caching allows requests for an item i to be served by a similar item i′. Applications include recommendation systems, multimedia retrieval, and machine learning. Recently, many similarity caching policies have been proposed, but we still do not know how to compute the hit rate even for the simplest policies, such as SIM-LRU and RND-LRU, which are straightforward modifications of classical caching algorithms. This paper proposes the first algorithm to compute the hit rate of similarity caching policies under the independent reference model for the request process. In particular, our work shows how to extend the popular TTL approximation from classic caching to similarity caching. The algorithm is evaluated on both synthetic and real-world traces.
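For context, the classic TTL (Che) approximation that the paper extends to similarity caching can be sketched for exact LRU caching under the independent reference model. This is the standard textbook version, not the paper's similarity-caching extension; the bracketing interval for the root search is an assumption:

```python
import numpy as np

def che_ttl_hit_rates(rates, cache_size, lo=1e-9, hi=1e9, iters=200):
    """Classic TTL (Che) approximation for LRU under the independent
    reference model: find the characteristic time T such that the expected
    cache occupancy sum_i (1 - exp(-lambda_i T)) equals the cache size,
    then the per-item hit probability is h_i = 1 - exp(-lambda_i T)."""
    rates = np.asarray(rates, dtype=float)

    def occupancy(T):
        return float(np.sum(1.0 - np.exp(-rates * T)))

    # occupancy(T) is increasing in T, so plain bisection suffices
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if occupancy(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    T = 0.5 * (lo + hi)
    return 1.0 - np.exp(-rates * T)
```

For four equally popular items and a cache of size two, symmetry forces every per-item hit probability to 1/2, which the fixed-point computation recovers.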

    Simulation analysis of download and recovery processes in P2P storage systems

    Get PDF
    International audience. Peer-to-peer storage systems rely on data fragmentation and distributed storage. Unreachable fragments are continuously recovered, requiring multiple fragments of data (constituting a "block") to be downloaded in parallel. Recent modeling efforts have assumed the recovery process to follow an exponential distribution, an assumption made mainly in the absence of studies characterizing the "real" distribution of the recovery process. This work aims at filling this gap through a simulation study. To that end, we implement the distributed storage protocol in the NS-2 network simulator and run a total of seven experiments covering a large variety of scenarios. We show that the fragment download time approximately follows an exponential distribution. We also show that the block download time and the recovery time essentially follow a hypo-exponential distribution with many distinct phases. We use expectation-maximization and least-squares estimation algorithms to fit the empirical distributions. We also provide a good approximation of the number of phases of the hypo-exponential distribution that applies in all scenarios considered. Lastly, we test the goodness of our fits using statistical (Kolmogorov-Smirnov test) and graphical methods.
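The Kolmogorov-Smirnov goodness-of-fit check mentioned at the end can be sketched for the exponential case. This is an illustrative one-sample KS distance against a fitted exponential, not the authors' EM/least-squares pipeline, and the rate estimator (1/sample mean) is the standard MLE assumed here:

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_statistic_exponential(samples):
    """One-sample Kolmogorov-Smirnov distance between the empirical CDF of
    `samples` and an exponential distribution fitted by maximum likelihood
    (rate = 1 / sample mean).  Smaller values indicate a better fit."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    rate = 1.0 / x.mean()                # MLE of the exponential rate
    F = 1.0 - np.exp(-rate * x)          # fitted exponential CDF at the samples
    ecdf_hi = np.arange(1, n + 1) / n    # empirical CDF just after each x_i
    ecdf_lo = np.arange(0, n) / n        # empirical CDF just before each x_i
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))
```

Genuinely exponential samples yield a small distance, while clearly non-exponential data (e.g. uniform samples) yield a visibly larger one.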

    Actes du 10ème Atelier en Évaluation de Performances

    Get PDF
    National audience. The Atelier en Évaluation de Performances is a meeting intended to bring together, and give a voice to, young researchers (PhD students and recent PhDs) in the field of performance modeling and evaluation, a discipline devoted to the study and optimization of stochastic and/or timed dynamic systems arising in computer science, telecommunications, manufacturing, and robotics, among other areas. Informal presentations of work, including work in progress, are encouraged in order to strengthen interactions between young researchers and to prepare submissions of new scientific projects. Survey talks on current research topics, given by established researchers in the field, reinforce the training component of the workshop.

    Modeling modern DNS caches

    Get PDF
    International audience. Caching is undoubtedly one of the most popular solutions, as it scales easily with a world-wide deployment of resources. Records in Domain Name System (DNS) caches are kept for a pre-set duration (time-to-live, or TTL) to avoid becoming outdated. Modern caches are those that set the TTL locally, regardless of what authoritative servers say. In this paper, we introduce analytic models, based on renewal arguments, to study the behavior of modern DNS caches. For tree cache networks, we derive the cache performance metrics and characterize, at each cache, the miss process and the aggregate request process. We address the problem of the optimal caching duration and find that a constant TTL is best only if inter-request times have a concave CDF. We validate our theoretical findings using real DNS traces (single-cache case) and via event-driven simulations (network case). Our models are very robust, as the relative error between empirical and analytic values stays within 1% in the former case and below 5% in the latter.
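The renewal-argument setting of this abstract can be illustrated with a single-record TTL cache of the DNS kind (the TTL is not renewed on hits). Under Poisson requests with rate λ, a renewal-reward argument gives hit rate λT/(1 + λT) for TTL T; the sketch below checks this by simulation, with hypothetical parameters λ = 1 and T = 2:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ttl_hit_rate(inter_arrivals, ttl):
    """Simulate a single-record TTL cache: a miss fetches the record, which
    then expires `ttl` time units later regardless of intervening hits
    (as in DNS).  Returns the empirical hit rate over the request trace."""
    t, expiry = 0.0, -1.0
    hits = 0
    for gap in inter_arrivals:
        t += gap
        if t < expiry:
            hits += 1            # record still cached: hit
        else:
            expiry = t + ttl     # miss: re-fetch and restart the TTL
    return hits / len(inter_arrivals)
```

Each miss starts a cycle; the hits in a cycle are the Poisson(λT) arrivals before expiry, so the hit fraction is E[λT] / (1 + E[λT]) = λT/(1 + λT), i.e. 2/3 for λ = 1 and T = 2.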