
    Seeing through black boxes: Tracking transactions through queues under monitoring resource constraints

    We consider the problem of optimally allocating monitoring resources for tracking transactions progressing through a distributed system, modeled as a queueing network. Two forms of monitoring information are considered, viz., locally unique transaction identifiers, and arrival and departure timestamps of transactions at each processing queue. The timestamps are assumed to be available at all the queues but, in the absence of identifiers, only enable imprecise tracking, since parallel processing can result in out-of-order departures. On the other hand, identifiers enable precise tracking but are not available without proper instrumentation. Given an instrumentation budget, only a subset of queues can be selected for the production of identifiers, while the remaining queues have to resort to imprecise tracking using timestamps. The goal is then to optimally allocate the instrumentation budget to maximize the overall tracking accuracy. The challenge is that the optimal allocation strategy depends on the accuracies of timestamp-based tracking at different queues, which have complex dependencies on the arrival and service processes and on the queueing discipline. We propose two simple heuristics for allocation by predicting the order of timestamp-based tracking accuracies of different queues. We derive sufficient conditions for these heuristics to achieve optimality through the notion of the stochastic comparison of queues. Simulations show that our heuristics are close to optimality, even when the parameters deviate from these conditions.
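The ambiguity that timestamps alone leave behind can be seen in a small sketch (hypothetical code, not from the paper): when a queue serves jobs in parallel, more than one arrival-to-departure matching can be consistent with the observed timestamps, so timestamp-based tracking cannot always tell which departure belongs to which transaction.

```python
from itertools import permutations

def feasible_matchings(arrivals, departures):
    """Count arrival->departure assignments consistent with the timestamps.

    A matching is feasible if every transaction departs no earlier than it
    arrived. Under FIFO there is exactly one feasible matching; parallel
    servers can make several matchings feasible, which is exactly the
    ambiguity that makes timestamp-based tracking imprecise.
    """
    count = 0
    for perm in permutations(departures):
        if all(d >= a for a, d in zip(arrivals, perm)):
            count += 1
    return count

# Two overlapping jobs served in parallel: either could have departed first.
print(feasible_matchings([0.0, 1.0], [2.0, 3.0]))  # 2 feasible matchings
# Non-overlapping jobs: the timestamps pin down a unique matching.
print(feasible_matchings([0.0, 5.0], [1.0, 6.0]))  # 1 feasible matching
```

The more feasible matchings a queue admits, the lower its timestamp-based tracking accuracy, which is the quantity the allocation heuristics in the paper try to rank.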

    A metadata calculus for secure information sharing

    In both commercial and defense sectors, a compelling need is emerging for rapid, yet secure, dissemination of information to the concerned actors. Traditional approaches to information sharing that rely on security labels (e.g., Multi-Level Security (MLS)) suffer from at least two major drawbacks. First, static security labels do not account for tactical information whose value decays over time. Second, MLS-like approaches have often ignored information transform semantics when deducing security labels (e.g., output security label = max over all input security labels). While MLS-like label deduction appears to be conservative, we argue that this approach can result in both underestimation and overestimation of security labels. We contend that overestimation may adversely throttle information flows, while underestimation incites information misuse and leakage. In this paper we present a novel calculus approach to securely sharing tactical information. We model security metadata as a vector half-space (as opposed to a lattice in an MLS-like approach) that supports three operators: Γ, +, and ·. The value operator Γ maps a metadata vector to a time-sensitive scalar value. The operators + and · support arithmetic on the metadata vector space that is homomorphic with the semantics of information transforms. We show that it is unfortunately impossible to achieve strong homomorphism without incurring exponential metadata expansion. We use B-splines (a class of compact parametric curves) to develop concrete realizations of our metadata calculus that satisfy weak homomorphism without suffering from metadata expansion, and we quantify the tightness of the value estimates in the proposed approach.
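The first drawback can be made concrete with a toy sketch (hypothetical code; the paper's actual value operator Γ is realized with B-splines, and the exponential decay and parameters below are stand-ins chosen only for illustration): a time-sensitive value operator lets the worth of tactical information fade, where a static MLS label would rate it "high" forever.

```python
import math

def value(metadata, t):
    """Toy value operator: map a metadata vector (initial value, decay
    rate) to a scalar that decays over time. The paper's Gamma operator
    uses B-spline realizations; exponential decay is only an illustrative
    stand-in for time-sensitive value."""
    v0, lam = metadata
    return v0 * math.exp(-lam * t)

intel = (10.0, 0.5)                 # worth 10 now, decaying at rate 0.5
print(round(value(intel, 0.0), 2))  # 10.0 at release time
print(round(value(intel, 6.0), 2))  # ~0.5 once the tactical window closes
```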

    Policy technologies for self-managing systems


    Probabilistic treatment of mixes to hamper traffic analysis

    The goal of anonymity-providing techniques is to preserve the privacy of users (who has communicated with whom, for how long, and from which location) by hiding traffic information. This is accomplished by organizing additional traffic to conceal particular communication relationships and by embedding the sender and receiver of a message in their respective anonymity sets. If the number of overall participants is greater than the size of the anonymity set, and if the anonymity set changes with time due to unsynchronized participants, then the anonymity technique becomes prone to traffic analysis attacks. In this paper, we are interested in the statistical properties of the disclosure attack, a newly suggested traffic analysis attack on MIXes. Our goal is to provide analytical estimates of the number of observations required by the disclosure attack, to identify fundamental (but avoidable) 'weak operational modes' of MIXes, and thus to protect users against traffic analysis by the disclosure attack.
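A much-simplified sketch of the attack's two phases may help (hypothetical code; the function names and the toy observations are made up, and real MIX traffic is far noisier): the learning phase searches the observed recipient anonymity sets for m mutually disjoint ones, and the exclusion phase intersects them with later observations that hit exactly one basis set, shrinking each toward a single peer of the target sender.

```python
from itertools import combinations

def find_disjoint_basis(recipient_sets, m):
    """Learning phase: find m mutually disjoint recipient anonymity sets
    among batches the target sent into. Each then contains exactly one
    of the target's m communication peers."""
    for combo in combinations(recipient_sets, m):
        sets = [set(s) for s in combo]
        if all(a.isdisjoint(b) for a, b in combinations(sets, 2)):
            return sets
    return None  # not enough observations yet

def refine(basis, recipient_sets):
    """Exclusion phase: an observation intersecting exactly one basis set
    pins the target's peer for that batch inside the intersection."""
    changed = True
    while changed:
        changed = False
        for obs in map(set, recipient_sets):
            hits = [b for b in basis if b & obs]
            if len(hits) == 1 and not hits[0] <= obs:
                hits[0] &= obs
                changed = True
    return basis

# Target talks to peers {1, 2}; each batch mixes a true peer with cover traffic.
observations = [{1, 5}, {2, 6}, {1, 7}, {2, 8}]
basis = find_disjoint_basis(observations, m=2)
print(refine(basis, observations))  # peers isolated: [{1}, {2}]
```

The number of observations such a procedure needs before the basis sets collapse to singletons is the quantity the paper estimates analytically.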

    GMD Decoding of Euclidean-Space Codes and Iterative Decoding of Turbo Codes

    113 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1999. Using bifurcation theory, the qualitative transition of phase trajectories in the waterfall region is associated with the bifurcation of certain fixed points. This connection explains the dramatic improvement of performance in the waterfall region and leads to a better understanding of the turbo decoding algorithm. Specifically, a simplified stopping criterion based on the characteristics of the phase trajectories is provided.

    Measuring Anonymity: The Disclosure Attack

    The goal of anonymity-providing techniques is to preserve the privacy of users (who has communicated with whom, for how long, and from which location) by hiding traffic information. This is accomplished by organizing additional traffic to conceal particular communication relationships and by embedding the sender and receiver of a message in their respective anonymity sets. If the number of overall participants is greater than the size of the anonymity set, and if the anonymity set changes with time due to unsynchronized participants, then the anonymity technique becomes prone to traffic analysis attacks. We are interested in the statistical properties of the disclosure attack, a newly suggested traffic analysis attack on MIXes. Our goal is to provide analytical estimates of the number of observations required by the disclosure attack, to identify fundamental (but avoidable) 'weak operational modes' of MIXes, and thus to protect users against traffic analysis by the disclosure attack.

    Utility Sampling for Trust Metrics in PKI

    We propose a new trust metric for a network of public key certificates, e.g. as in a PKI, which allows a user to buy insurance at a fair price against the possibility of failure of the certifications relied upon while transacting with an arbitrary party in the network. Our metric builds on a metric and model of insurance provided by Reiter and Stubblebine [RS], while addressing various limitations and drawbacks of the latter. It conserves all the beneficial properties of the latter over other schemes, including protecting the user from unintentional or malicious dependencies in the network of certifications. Our metric is built on top of a simple and intuitive model of trust and risk based on "utility sampling", which may be of interest for non-monetary applications as well.