1,054 research outputs found

    Covert Bits Through Queues

    Full text link
    We consider covert communication using a queuing timing channel in the presence of a warden. The covert message is encoded using the inter-arrival times of the packets, and the legitimate receiver and the warden observe the inter-departure times of the packets from their respective queues. The transmitter and the legitimate receiver also share a secret key to facilitate covert communication. We propose achievable schemes that obtain a non-zero covert rate for both exponential and general queues when a sufficiently high-rate secret key is available. This is in contrast to other channel models, such as the Gaussian channel or the discrete memoryless channel, where only $\mathcal{O}(\sqrt{n})$ covert bits can be sent over $n$ channel uses, yielding a zero covert rate. Comment: To appear at IEEE CNS, October 201
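    As a rough illustration of the timing-channel model in this abstract (not the paper's achievable scheme), the sketch below encodes each bit as a short or long inter-arrival gap, pushes the packets through a FIFO queue with exponential service, and decodes by thresholding the observed inter-departure gaps. The gap values, service rate, and threshold are arbitrary assumptions, and neither the shared secret key nor the covertness constraint against the warden is modeled.

```python
import random

# Toy timing channel (illustration only): bits -> inter-arrival gaps ->
# FIFO queue with exponential service -> threshold decoding on the
# inter-departure gaps. All parameters below are arbitrary choices.

def encode_arrivals(bits, gap0=0.5, gap1=2.0):
    """Map each bit to an inter-arrival gap and return absolute arrival times."""
    t, arrivals = 0.0, []
    for b in bits:
        t += gap1 if b else gap0
        arrivals.append(t)
    return arrivals

def fifo_exponential_queue(arrivals, mu=2.0, rng=random):
    """Departure times of a FIFO queue with i.i.d. Exp(mu) service times."""
    departures, free_at = [], 0.0
    for a in arrivals:
        start = max(a, free_at)            # wait if the server is still busy
        free_at = start + rng.expovariate(mu)
        departures.append(free_at)
    return departures

def decode_departures(departures, threshold=1.25):
    """Recover bits by thresholding the observed inter-departure gaps."""
    bits, prev = [], 0.0
    for d in departures:
        bits.append(1 if d - prev > threshold else 0)
        prev = d
    return bits

bits = [random.randint(0, 1) for _ in range(1000)]
decoded = decode_departures(fifo_exponential_queue(encode_arrivals(bits)))
print(f"bit errors: {sum(b != d for b, d in zip(bits, decoded))}/{len(bits)}")
```

    The residual bit errors come from the queueing jitter, which is the noise that any coding scheme over such a channel has to absorb.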

    Towards Provably Invisible Network Flow Fingerprints

    Full text link
    Network traffic analysis reveals important information even when messages are encrypted. We consider active traffic analysis via flow fingerprinting by invisibly embedding information into packet timings of flows. In particular, assume Alice wishes to embed fingerprints into flows of a set of network input links, whose packet timings are modeled by Poisson processes, without being detected by a watchful adversary Willie. Bob, who receives the set of fingerprinted flows after they pass through the network modeled as a collection of independent and parallel $M/M/1$ queues, wishes to extract Alice's embedded fingerprints to infer the connection between input and output links of the network. We consider two scenarios: 1) Alice embeds fingerprints in all of the flows; 2) Alice embeds fingerprints in each flow independently with probability $p$. Assuming that the flow rates are equal, we calculate the maximum number of flows in which Alice can invisibly embed fingerprints while having those fingerprints successfully decoded by Bob. Then, we extend the construction and analysis to the case where flow rates are distinct, and discuss the extension of the network model
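    As a hedged sketch of the flow-fingerprinting setup above (not the paper's construction), the code below nudges the inter-arrival gaps of each Poisson flow up or down according to a per-flow ±1 code, passes every flow through an $M/M/1$ queue, and has Bob link output flows to input links by correlating inter-departure gaps with the candidate codes. The rates, perturbation size, and correlation decoder are illustrative assumptions, and no invisibility constraint against Willie is enforced.

```python
import random

# Toy flow fingerprinting: perturb Poisson inter-arrival gaps with a +/-1
# code per flow, route each flow through an M/M/1 queue, and match outputs
# to inputs by correlation. Parameters are arbitrary illustrative choices.

def poisson_gaps(rate, n, rng):
    return [rng.expovariate(rate) for _ in range(n)]

def embed(gaps, code, delta=0.2):
    """Shift each gap by +/-delta according to the fingerprint code."""
    return [max(1e-6, g + delta * c) for g, c in zip(gaps, code)]

def mm1_departures(gaps, mu, rng):
    t, free_at, deps = 0.0, 0.0, []
    for g in gaps:
        t += g                              # arrival time of this packet
        start = max(t, free_at)             # FIFO service
        free_at = start + rng.expovariate(mu)
        deps.append(free_at)
    return deps

def correlation(deps, code):
    gaps, prev = [], 0.0
    for d in deps:
        gaps.append(d - prev)
        prev = d
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) * c for g, c in zip(gaps, code))

rng = random.Random(0)
n, lam, mu = 500, 1.0, 2.0
codes = [[rng.choice([-1, 1]) for _ in range(n)] for _ in range(4)]   # one code per input link
outputs = [mm1_departures(embed(poisson_gaps(lam, n, rng), c), mu, rng) for c in codes]
for i, deps in enumerate(outputs):        # Bob: pick the best-correlating code
    guess = max(range(len(codes)), key=lambda j: correlation(deps, codes[j]))
    print(f"output flow {i} matched to input link {guess}")
```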

    Bits through queues with feedback

    Full text link
    In their 1996 paper, Anantharam and Verdú showed that feedback does not increase the capacity of a queue when the service time is exponentially distributed. Whether this conclusion holds for general service times has remained an open question, which this paper addresses. Two main results are established for both the discrete-time and the continuous-time models. First, a sufficient condition on the service distribution for feedback to increase capacity under the FIFO service policy. Underlying this condition is a notion of weak feedback wherein, instead of the queue departure times, the transmitter is informed about the instants when packets start to be served. Second, a condition in terms of the output entropy rate under which feedback does not increase capacity. This condition is general in that it depends on the output entropy rate of the queue but explicitly depends neither on the queue policy nor on the service time distribution. This condition is satisfied, for instance, by queues with LCFS service policies and bounded service times.
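    To make the notion of weak feedback above concrete, the following minimal sketch (an assumption-laden illustration, not tied to the paper's analysis) simulates a FIFO queue with a pluggable service-time distribution and reports both feedback signals: the service-start instants (weak feedback) and the departure instants (full feedback).

```python
import random

# FIFO queue with an arbitrary service-time sampler; "weak feedback" reveals
# when each packet starts service, full feedback reveals when it departs.
# The uniform service distribution is just an example of a bounded,
# non-exponential service time.

def fifo_feedback(arrivals, service_sampler):
    starts, departures, free_at = [], [], 0.0
    for a in arrivals:
        start = max(a, free_at)        # service begins once the server frees up
        free_at = start + service_sampler()
        starts.append(start)           # weak feedback signal
        departures.append(free_at)     # full (departure-time) feedback signal
    return starts, departures

rng = random.Random(1)
arrivals, t = [], 0.0
for _ in range(5):
    t += rng.expovariate(1.0)          # Poisson packet arrivals
    arrivals.append(t)

starts, deps = fifo_feedback(arrivals, lambda: rng.uniform(0.2, 1.0))
for a, s, d in zip(arrivals, starts, deps):
    print(f"arrive {a:.2f}  start {s:.2f}  depart {d:.2f}")
```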

    Plugging Side-Channel Leaks with Timing Information Flow Control

    Get PDF
    The cloud model's dependence on massive parallelism and resource sharing exacerbates the security challenge of timing side-channels. Timing Information Flow Control (TIFC) is a novel adaptation of IFC techniques that may offer a way to reason about, and ultimately control, the flow of sensitive information through systems via timing channels. With TIFC, objects such as files, messages, and processes carry not just content labels describing the ownership of the object's "bits," but also timing labels describing information contained in timing events affecting the object, such as process creation/termination or message reception. With two system design tools, deterministic execution and pacing queues, TIFC enables the construction of "timing-hardened" cloud infrastructure that permits statistical multiplexing, while aggregating and rate-limiting timing information leakage between hosted computations. Comment: 5 pages, 3 figures
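    The sketch below is one way to picture the "pacing queue" design tool mentioned in this abstract (a hypothetical illustration, not the paper's implementation): output messages are buffered and released only on fixed-rate clock ticks, so an observer learns at most which tick a message became available in, rather than its precise submission time. The tick period and the class interface are assumptions.

```python
import math
from collections import deque

# Pacing queue sketch: submissions are quantized to fixed clock ticks, so
# fine-grained timing of the sender is hidden from anyone watching releases.
# (Assumes submissions arrive in time order.)

class PacingQueue:
    def __init__(self, tick_period=0.010):
        self.tick_period = tick_period
        self.buffer = deque()

    def submit(self, msg, submit_time):
        # Only the enclosing tick matters; sub-tick timing is discarded.
        release_tick = math.ceil(submit_time / self.tick_period)
        self.buffer.append((release_tick, msg))

    def drain(self, current_time):
        """Release every buffered message whose release tick has elapsed."""
        tick = math.floor(current_time / self.tick_period)
        out = []
        while self.buffer and self.buffer[0][0] <= tick:
            out.append(self.buffer.popleft()[1])
        return out

q = PacingQueue(tick_period=0.010)
q.submit("reply-A", submit_time=0.0031)   # exact times might depend on a secret...
q.submit("reply-B", submit_time=0.0074)   # ...but both land in the same tick
print(q.drain(current_time=0.012))        # ['reply-A', 'reply-B']
```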