Analysis and Decentralised Optimal Flow Control of Heterogeneous Computer Communication Network Models
General closed queueing networks are used to model local flow
control in multiclass computer communication networks with single and
multiple transmission links. The analysis of multiclass general
closed queueing network models with single-server and multiserver
stations is presented first, followed by the decentralised optimal
local flow control of multiclass general computer communication
networks with single and multiple transmission links.
The generalised exponential (GE) distribution, characterised by its
first two moments, is used to represent general interarrival and
transmission time distributions, since different users have different
traffic characteristics.
A new method of general model reduction, combining an extension of
Norton's theorem for general queueing networks with the universal
maximum entropy algorithm, is proposed for the analysis of large
general closed queueing networks. This extension of Norton's theorem
has an advantage over direct application of the universal maximum
entropy approach: a subset of queueing centres of interest can be
studied without repeatedly solving the entire network.
The principle of maximum entropy is used to derive new
approximate solutions for the joint queue length distributions of
multiclass general queueing network models with single-server and
multiserver stations, and favourable comparisons with other methods are made.
The decentralised optimal local flow control of multiclass
computer communication networks with single and multiple transmission
links is shown to be a state-dependent window-type mechanism of the
kind traditionally used in practice.
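The GE distribution above matches a target mean and squared coefficient of variation (SCV) with a two-point mixture: a zero interarrival time (a batch effect) with probability 1 - tau, else an exponential phase, with tau = 2/(SCV + 1). A minimal sampler sketch under that assumed two-moment parameterisation (all names illustrative, not from the thesis):

```python
import random

def sample_ge(mean, scv, rng):
    """Draw one GE(mean, scv) interarrival time.

    Assumes the standard two-moment GE form
    F(t) = 1 - tau * exp(-tau * t / mean), tau = 2 / (scv + 1):
    a zero-length gap (batch arrival) w.p. 1 - tau, else an
    exponential phase with rate tau / mean.
    """
    tau = 2.0 / (scv + 1.0)
    if rng.random() >= tau:
        return 0.0                      # batch arrival: zero gap
    return rng.expovariate(tau / mean)  # exponential phase

# Empirical check that the first two moments are matched.
rng = random.Random(42)
samples = [sample_ge(1.0, 4.0, rng) for _ in range(200_000)]
est_mean = sum(samples) / len(samples)
est_scv = (sum(t * t for t in samples) / len(samples)
           - est_mean ** 2) / est_mean ** 2
```

By construction E[T] = mean and Var[T]/E[T]^2 = scv, so the empirical estimates should be close to the targets (1.0 and 4.0 here).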
Bits Through Bufferless Queues
This paper investigates the capacity of a channel in which information is
conveyed by the timing of consecutive packets passing through a queue with
independent and identically distributed service times. Such timing channels are
commonly studied under the assumption of a work-conserving queue. In contrast,
this paper studies the case of a bufferless queue that drops arriving packets
while a packet is in service. Under this bufferless model, the paper provides
upper bounds on the capacity of timing channels and establishes achievable
rates for the case of bufferless M/M/1 and M/G/1 queues. In particular, it is
shown that a bufferless M/M/1 queue at worst suffers less than 10% reduction in
capacity when compared to an M/M/1 work-conserving queue.
Comment: 8 pages, 3 figures; accepted at the 51st Annual Allerton Conference on
Communication, Control, and Computing, University of Illinois, Monticello,
Illinois, Oct 2-4, 201
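The bufferless discipline studied here (an arriving packet is dropped while an earlier packet is in service) is easy to simulate. A toy Monte Carlo sketch with illustrative rates; for Poisson arrivals and exponential service this is an M/M/1/1 loss system, whose drop probability is rho/(1 + rho):

```python
import random

def bufferless_drops(arrival_rate, service_rate, n_packets, rng):
    """Simulate a bufferless M/M/1 queue: a packet arriving while
    the server is still busy with an earlier packet is dropped."""
    t = 0.0
    busy_until = 0.0
    served = dropped = 0
    for _ in range(n_packets):
        t += rng.expovariate(arrival_rate)      # Poisson arrivals
        if t < busy_until:
            dropped += 1                        # server busy: packet lost
        else:
            busy_until = t + rng.expovariate(service_rate)
            served += 1
    return served, dropped

rng = random.Random(1)
served, dropped = bufferless_drops(1.0, 1.0, 100_000, rng)
loss = dropped / (served + dropped)
```

At rho = 1 the Erlang loss formula gives a drop probability of 0.5, which the simulation should approximate.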
Bits through queues with feedback
Anantharam and Verdú showed that feedback does not
increase the capacity of a queue when the service time is exponentially
distributed. Whether this conclusion holds for general service times has
remained an open question, which this paper addresses.
Two main results are established, for both the discrete-time and the
continuous-time models. First, a sufficient condition on the service
distribution under which feedback increases capacity under the FIFO service
policy. Underlying this condition is a notion of weak feedback, wherein the
transmitter is informed not of the queue departure times but of the instants
when packets start to be served. Second, a condition in terms of output
entropy rate under which feedback does not increase capacity. This condition
is general in that it depends only on the output entropy rate of the queue,
not explicitly on the queue policy or the service time distribution. It is
satisfied, for instance, by queues with LCFS service policies and
bounded service times.
Improving the network transmission cost of differentiated web services
This paper investigates the transmission cost of Web services messages, which is affected by network
latency. Web services enable seamless interaction and integration of e-business applications: a Web service exposes a
collection of operations for interacting with the outside world over the Internet through XML messaging. Although XML
describes message-related information effectively and is fairly human readable, it degrades the performance of Web
services in terms of transmission cost, processing cost, and so on. This paper aims to minimize the network latency of
Web services message communication by employing pre-emptive resume scheduling. The fundamental principle of this
approach is to give preferential treatment to some messages over others: distinct classes of messages are assigned
different priorities, given that some messages can tolerate longer delays than others. For instance, shorter
messages may be given higher priority than longer messages, or the Web service provider may give higher priority to the
messages of paying subscribers.
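Pre-emptive resume scheduling can be illustrated with a small single-link simulator (a sketch with hypothetical message tuples, not the paper's implementation): a higher-priority arrival interrupts a lower-priority transmission, which later resumes from the point of interruption rather than restarting.

```python
import heapq

def preemptive_resume(messages):
    """Transmit messages under preemptive-resume priority.

    `messages` is a list of (arrival_time, priority, size), where a
    lower priority value means more urgent.  Returns a dict mapping
    message index -> completion time.  A message in service is
    interrupted by any new arrival; if the arrival is more urgent it
    takes the link, and the interrupted message keeps its remaining
    work and resumes later.
    """
    events = sorted(range(len(messages)), key=lambda i: messages[i][0])
    ready = []          # heap of (priority, arrival, index, remaining work)
    t = 0.0
    done = {}
    k = 0
    while k < len(events) or ready:
        if not ready:                       # link idle: jump to next arrival
            t = max(t, messages[events[k]][0])
        while k < len(events) and messages[events[k]][0] <= t:
            i = events[k]                   # admit everything arrived by t
            heapq.heappush(ready, (messages[i][1], messages[i][0],
                                   i, messages[i][2]))
            k += 1
        prio, arr, i, rem = heapq.heappop(ready)
        next_arr = messages[events[k]][0] if k < len(events) else float("inf")
        if t + rem <= next_arr:             # finishes before next arrival
            t += rem
            done[i] = t
        else:                               # interrupted: keep remaining work
            rem -= next_arr - t
            t = next_arr
            heapq.heappush(ready, (prio, arr, i, rem))
    return done

# A long low-priority message starts first; a short high-priority
# message arriving at t=1 preempts it and finishes first.
msgs = [(0.0, 2, 10.0), (1.0, 1, 2.0)]
done = preemptive_resume(msgs)
```

In this trace the high-priority message completes at t = 3.0, while the low-priority one resumes afterwards and completes at t = 12.0 (its 10 units of work plus the 2-unit interruption).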
Towards Practical Oblivious RAM
We take an important step forward in making Oblivious RAM (O-RAM) practical.
We propose an O-RAM construction achieving an amortized overhead of 20X-35X
(for an O-RAM roughly 1 terabyte in size), about 63 times faster than the best
existing scheme. On the theoretic front, we propose a fundamentally novel
technique for constructing Oblivious RAMs: specifically, we partition a bigger
O-RAM into smaller O-RAMs, and employ a background eviction technique to
obliviously evict blocks from the client-side cache into a randomly assigned
server-side partition. This novel technique is the key to achieving the gains
in practical performance.
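The partitioning idea can be illustrated abstractly (a toy sketch of the principle, not the paper's construction): each block lives in one of several smaller partitions; a read fetches the block from its current partition into a client-side cache, and a background eviction later writes cached blocks back to freshly chosen random partitions.

```python
import random

class PartitionedStore:
    """Toy sketch of partition-based O-RAM: blocks are spread over P
    smaller partitions, reads pull a block into a client-side cache,
    and background eviction reassigns it to a uniformly random
    partition.  Illustrative only -- a real O-RAM also hides accesses
    *within* each partition and pads evictions with dummy blocks."""

    def __init__(self, num_partitions, rng):
        self.rng = rng
        self.partitions = [dict() for _ in range(num_partitions)]
        self.position = {}      # block_id -> current partition index
        self.cache = {}         # client-side cache (not yet evicted)

    def write(self, block_id, data):
        self.cache[block_id] = data
        if block_id in self.position:            # drop stale server copy
            del self.partitions[self.position.pop(block_id)][block_id]

    def read(self, block_id):
        if block_id in self.cache:
            return self.cache[block_id]
        p = self.position.pop(block_id)
        data = self.partitions[p].pop(block_id)  # fetch from its partition
        self.cache[block_id] = data              # hold client-side until evicted
        return data

    def background_evict(self):
        """Evict one cached block to a uniformly random partition."""
        if not self.cache:
            return
        block_id = self.rng.choice(sorted(self.cache))
        p = self.rng.randrange(len(self.partitions))
        self.partitions[p][block_id] = self.cache.pop(block_id)
        self.position[block_id] = p

store = PartitionedStore(4, random.Random(7))
store.write("b1", b"secret")
store.background_evict()                 # b1 now sits in a random partition
value = store.read("b1")                 # read pulls it back into the cache
```

Because the eviction target is chosen uniformly at random and independently of the access, the server-visible sequence of partition accesses reveals nothing about which logical block was touched.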
The security of NTP's datagram protocol
For decades, the Network Time Protocol (NTP) has been
used to synchronize computer clocks over untrusted network paths. This
work takes a new look at the security of NTP’s datagram protocol. We
argue that NTP’s datagram protocol in RFC5905 is both underspecified
and flawed. The NTP specifications do not sufficiently respect (1) the
conflicting security requirements of different NTP modes, and (2) the
mechanism NTP uses to prevent off-path attacks. A further problem
is that (3) NTP’s control-query interface reveals sensitive information
that can be exploited in off-path attacks. We exploit these problems
in several attacks that remote attackers can use to maliciously alter a
target’s time. We use network scans to find millions of IPs that are
vulnerable to our attacks. Finally, we move beyond identifying attacks
by developing a cryptographic model and using it to prove the security
of a new backwards-compatible client/server protocol for NTP.
https://eprint.iacr.org/2016/1006.pdf
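The off-path protection mentioned in point (2) is, at its core, NTP's origin-timestamp check: the client treats the transmit timestamp of its query as a nonce that the server must echo back as the origin timestamp of its response. A much-simplified sketch of that check (illustrative names; real NTP uses 64-bit NTP timestamps in a binary packet, not hex strings):

```python
import secrets

def make_query():
    """Client query: the transmit timestamp doubles as a nonce that
    an off-path attacker, who cannot see the query, cannot guess."""
    return {"transmit": secrets.token_hex(8)}

def accept_response(query, response):
    """Simplified origin-timestamp check: accept a response only if
    its origin timestamp echoes the query's transmit timestamp, so a
    blindly spoofed response is rejected."""
    return response.get("origin") == query["transmit"]

q = make_query()
good = {"origin": q["transmit"], "time": 1234}     # legitimate server reply
spoofed = {"origin": "0" * 16, "time": 9999}       # off-path attacker's guess
```

The attacks in the paper turn on this check being underspecified and on side channels (such as the control-query interface) that leak the very state this nonce is meant to protect.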
Blindspot: Indistinguishable Anonymous Communications
Communication anonymity is a key requirement for individuals under targeted
surveillance. Practical anonymous communications also require
indistinguishability - an adversary should be unable to distinguish between
anonymised and non-anonymised traffic for a given user. We propose Blindspot, a
design for high-latency anonymous communications that offers
indistinguishability and unobservability under a (qualified) global active
adversary. Blindspot creates anonymous routes between sender-receiver pairs by
subliminally encoding messages within the pre-existing communication behaviour
of users in a social network, specifically their organic image sharing
behaviour. Channel bandwidth thus depends on the intensity of image
sharing along a route. A major challenge we successfully
overcome is that routing must be accomplished in the face of significant
restrictions - channel bandwidth is stochastic. We show that conventional
social network routing strategies do not work. To solve this problem, we
propose a novel routing algorithm. We evaluate Blindspot using a real-world
dataset. We find that it delivers reasonable results for applications requiring
low-volume unobservable communication.
Comment: 13 pages