Sign-Compute-Resolve for Random Access
We present an approach to random access that is based on three elements:
physical-layer network coding, signature codes and tree splitting. Upon
occurrence of a collision, physical-layer network coding enables the receiver
to decode the sum of the information that was transmitted by the individual
users. For each user this information consists of the data that the user wants
to communicate as well as the user's signature. As long as no more than K
users collide, their identities can be recovered from the sum of their
signatures. A splitting protocol is used to deal with the case that more than
K users collide. We measure the performance of the proposed method in terms
of user resolution rate as well as overall throughput of the system. The
results show that our approach significantly increases the performance of the
system even compared to coded random access, where collisions are not wasted,
but are reused in successive interference cancellation.
Comment: Accepted for presentation at the 52nd Annual Allerton Conference on Communication, Control, and Computing.
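The identity-recovery step can be made concrete with a deliberately naive signature choice. The sketch below is only an illustration, not the paper's signature code: it uses one-hot indicator vectors, so the component-wise sum of signatures reveals the colliding set directly for any number of users, whereas practical signature codes achieve this for up to K users with far shorter signatures.

```python
# Toy illustration: recovering colliding users' identities from the
# (noiseless) sum of their signatures, as physical-layer network coding
# would deliver it. One-hot signatures make the decoding trivial; real
# signature codes compress this drastically.
N_USERS = 8

def signature(user_id):
    """One-hot signature: length-N_USERS vector with a 1 at user_id."""
    return [1 if i == user_id else 0 for i in range(N_USERS)]

def decode_sum(sig_sum):
    """Recover the set of colliding users from the summed signatures."""
    return {i for i, v in enumerate(sig_sum) if v > 0}

# Users 1, 4 and 6 transmit in the same slot; the receiver obtains only
# the component-wise sum of what was sent.
active = {1, 4, 6}
sig_sum = [sum(col) for col in zip(*(signature(u) for u in active))]

assert decode_sum(sig_sum) == active
```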
Sign-Compute-Resolve for Tree Splitting Random Access
We present a framework for random access that is based on three elements:
physical-layer network coding (PLNC), signature codes and tree splitting. In
presence of a collision, physical-layer network coding enables the receiver to
decode, i.e. compute, the sum of the packets that were transmitted by the
individual users. For each user, the packet consists of the user's signature,
as well as the data that the user wants to communicate. As long as no more than
K users collide, their identities can be recovered from the sum of their
signatures. This framework for creating and transmitting packets can be used as
a fundamental building block in random access algorithms, since it helps to
deal efficiently with the uncertainty of the set of contending terminals. In
this paper we show how to apply the framework in conjunction with a
tree-splitting algorithm, which is required to deal with the case that more
than K users collide. We demonstrate that our approach achieves throughput that
tends to 1 rapidly as K increases. We also present results on net data-rate of
the system, showing the impact of the overheads of the constituent elements of
the proposed protocol. We compare the performance of our scheme with an upper
bound that is obtained under the assumption that the active users are a priori
known. Also, we consider an upper bound on the net data-rate for any PLNC based
strategy in which one linear equation per slot is decoded. We show that already
at modest packet lengths, the net data-rate of our scheme becomes close to the
second upper bound, i.e. the overhead of the contention resolution algorithm
and the signature codes vanishes.
Comment: This is an extended version of arXiv:1409.6902. Accepted for publication in the IEEE Transactions on Information Theory.
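For intuition, the classical binary tree splitting that such schemes build on can be simulated in a few lines. This sketch assumes the standard collision-channel model in which only singleton slots are useful (i.e., K = 1); the paper's scheme instead makes every slot with up to K simultaneous users useful.

```python
import random

def tree_split(users, rng):
    """Slots needed to resolve `users` by fair-coin binary tree splitting.

    After a collision, every involved user flips a coin: heads join the
    first subgroup and retransmit immediately; tails wait until that
    subgroup is fully resolved. An idle or singleton slot costs one slot;
    a collision costs its own slot plus the resolution of both subgroups.
    """
    if len(users) <= 1:
        return 1                      # idle slot or a success
    heads, tails = [], []
    for u in users:
        (heads if rng.random() < 0.5 else tails).append(u)
    return 1 + tree_split(heads, rng) + tree_split(tails, rng)

rng = random.Random(42)
avg_slots = sum(tree_split([0, 1, 2], rng) for _ in range(5000)) / 5000
assert 2 * 3 - 1 <= avg_slots      # resolving n users takes >= 2n - 1 slots
```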
Perfect tag identification protocol in RFID networks
Radio Frequency IDentification (RFID) systems are becoming more and more
popular in the field of ubiquitous computing, in particular for objects
identification. An RFID system is composed by one or more readers and a number
of tags. One of the main issues in an RFID network is the fast and reliable
identification of all tags in the reader range. The reader issues some queries,
and tags properly answer. Then, the reader must identify the tags from such
answers. This is crucial for most applications. Since the transmission medium
is shared, the typical problem to be faced is a MAC-like one, i.e. to avoid or
limit the number of tags transmission collisions. We propose a protocol which,
under some assumptions about transmission techniques, always achieves a 100%
perfomance. It is based on a proper recursive splitting of the concurrent tags
sets, until all tags have been identified. The other approaches present in
literature have performances of about 42% in the average at most. The
counterpart is a more sophisticated hardware to be deployed in the manufacture
of low cost tags.Comment: 12 pages, 1 figur
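The recursive-splitting idea is the same one behind the classical query-tree protocol, sketched below under the usual assumptions (unique binary IDs and reliable collision detection at the reader); the paper's protocol adds transmission-technique assumptions on top of this to reach 100% performance.

```python
def query_tree_identify(tag_ids):
    """Identify all tags by recursively splitting on ID prefixes.

    The reader broadcasts a bit-string prefix; every tag whose ID starts
    with that prefix answers. On a collision the reader re-queries with
    the prefix extended by '0' and by '1'. With unique IDs this always
    terminates with every tag identified.
    """
    identified, queries = [], [""]
    while queries:
        prefix = queries.pop()
        matching = [t for t in tag_ids if t.startswith(prefix)]
        if len(matching) == 1:
            identified.append(matching[0])           # singleton: tag read
        elif len(matching) > 1:
            queries += [prefix + "0", prefix + "1"]  # collision: split
    return identified

tags = ["0110", "0111", "1010", "0001"]
assert sorted(query_tree_identify(tags)) == sorted(tags)
```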
A Bayesian regression tree approach to identify the effect of nanoparticles' properties on toxicity profiles
We introduce a Bayesian multiple regression tree model to characterize
relationships between physico-chemical properties of nanoparticles and their
in-vitro toxicity over multiple doses and times of exposure. Unlike
conventional models that rely on data summaries, our model solves the low
sample size issue and avoids arbitrary loss of information by combining all
measurements from a general exposure experiment across doses, times of
exposure, and replicates. The proposed technique integrates Bayesian trees for
modeling threshold effects and interactions, and penalized B-splines for dose-
and time-response surface smoothing. The resulting posterior distribution is
sampled by Markov Chain Monte Carlo. This method allows for inference on a
number of quantities of potential interest to substantive nanotoxicology, such
as the importance of physico-chemical properties and their marginal effect on
toxicity. We illustrate the application of our method to the analysis of a
library of 24 nano metal oxides.
Comment: Published at http://dx.doi.org/10.1214/14-AOAS797 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
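The threshold effects that the trees capture come from the elementary CART-style split search, sketched here on a single hypothetical property. This shows only the basic ingredient: the paper's model is Bayesian, samples over many trees by MCMC, and adds penalized B-splines for dose- and time-response smoothing, none of which is shown.

```python
def best_split(x, y):
    """Find the threshold on one property that best explains the response.

    Greedy search: try every midpoint between sorted x values and keep
    the split minimizing the total squared error of the two leaf means.
    This is the elementary step a regression tree repeats recursively.
    """
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    pairs = sorted(zip(x, y))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [v for u, v in pairs if u <= thr]
        right = [v for u, v in pairs if u > thr]
        best = min(best, (sse(left) + sse(right), thr))
    return best[1]

# Hypothetical data: toxicity jumps once the property exceeds ~5.
prop = [1, 2, 3, 4, 6, 7, 8, 9]
tox = [0.1, 0.2, 0.1, 0.2, 0.9, 1.0, 0.9, 1.1]
assert 4 < best_split(prop, tox) < 6
```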
Network Coding Tree Algorithm for Multiple Access System
Network coding is well known for significantly improving the throughput of
networks. Successful decoding of network-coded data relies on side information
about the original data. In that framework, independent data flows are usually
first decoded and then network-coded by relay nodes. With appropriate signal
design, physical-layer network coding arises naturally in wireless networks. In
this work, a network coding tree algorithm which enhances the efficiency of the
multiple access system (MAS) is presented. For MAS, existing works try to avoid
collisions, yet collisions happen frequently under heavy load. By introducing
network coding to MAS, our proposed algorithm achieves better throughput and
delay performance. When multiple users transmit in the same time slot, the
mixed signals are saved and used to jointly decode the collided frames once
some component frames of the network-coded frame have been received. The
splitting-tree structure is extended to the new algorithm for collision
resolution. The throughput of the system and the average frame delay are
derived recursively. Extensive simulations show that the network coding tree
algorithm enhances the system throughput and decreases the average frame delay
compared with other algorithms, thus improving overall system performance.
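The core reuse of a collision can be sketched in an idealized noiseless binary model, where the network-coded sum reduces to a XOR of frames (actual physical-layer network coding decodes the sum from superimposed analog signals):

```python
def xor_bytes(a, b):
    """Component-wise XOR of two equal-length frames."""
    return bytes(x ^ y for x, y in zip(a, b))

# Slot 1: frames from users A and B collide. Instead of discarding the
# slot, the receiver stores the (network-coded) XOR sum of the frames.
frame_a = b"payload-from-A"
frame_b = b"payload-from-B"
stored_sum = xor_bytes(frame_a, frame_b)

# Slot 2: user A retransmits alone and is decoded cleanly. The stored
# sum now yields B's frame without B ever being received by itself.
recovered_b = xor_bytes(stored_sum, frame_a)
assert recovered_b == frame_b
```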
Leveraging Coding Techniques for Speeding up Distributed Computing
Large scale clusters leveraging distributed computing frameworks such as
MapReduce routinely process data that are on the order of petabytes or more.
The sheer size of the data precludes the processing of the data on a single
computer. The philosophy in these methods is to partition the overall job into
smaller tasks that are executed on different servers; this is called the map
phase. This is followed by a data shuffling phase where appropriate data is
exchanged between the servers. The final so-called reduce phase, completes the
computation.
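The three phases can be made concrete with a toy single-process word count (real frameworks distribute each phase across many servers):

```python
from collections import defaultdict

# Input partitions, one per (hypothetical) map server.
documents = ["a b a", "b c", "a c c"]

# Map phase: each partition is turned into (key, value) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: pairs with the same key are routed to the same reducer.
shuffled = defaultdict(list)
for key, value in mapped:
    shuffled[key].append(value)

# Reduce phase: each reducer aggregates the values for its keys.
counts = {key: sum(values) for key, values in shuffled.items()}
assert counts == {"a": 3, "b": 2, "c": 3}
```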
One potential approach, explored in prior work for reducing the overall
execution time is to operate on a natural tradeoff between computation and
communication. Specifically, the idea is to run redundant copies of map tasks
that are placed on judiciously chosen servers. The shuffle phase exploits the
location of the nodes and utilizes coded transmission. The main drawback of
this approach is that it requires the original job to be split into a number of
map tasks that grows exponentially in the system parameters. This is
problematic, as we demonstrate that splitting jobs too finely can in fact
adversely affect the overall execution time.
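The coded-transmission gain in the shuffle phase can be sketched with the smallest possible example, assuming an idealized broadcast link and the redundant map placement described above:

```python
def xor_bytes(a, b):
    """Component-wise XOR of two equal-length values."""
    return bytes(x ^ y for x, y in zip(a, b))

# Redundant placement: server 0 computed both intermediate values,
# v1 (needed by server 1) and v2 (needed by server 2). Thanks to their
# own redundant map copies, server 1 already holds v2 and server 2
# already holds v1.
v1 = b"intermediate-1"
v2 = b"intermediate-2"

# Uncoded shuffle: two unicasts (v1 to server 1, v2 to server 2).
# Coded shuffle: ONE broadcast of v1 XOR v2 serves both receivers.
coded = xor_bytes(v1, v2)

# Each server cancels the value it already holds to get the one it needs.
assert xor_bytes(coded, v2) == v1   # server 1 recovers v1
assert xor_bytes(coded, v1) == v2   # server 2 recovers v2
```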
In this work we show that one can simultaneously obtain low communication
loads while ensuring that jobs do not need to be split too finely. Our approach
uncovers a deep relationship between this problem and a class of combinatorial
structures called resolvable designs. Appropriate interpretation of resolvable
designs can allow for the development of coded distributed computing schemes
where the splitting levels are exponentially lower than prior work. We present
experimental results obtained on Amazon EC2 clusters for a widely known
distributed algorithm, namely TeraSort. We obtain over a 4.69x improvement in
speedup over the baseline approach and more than a 2.6x improvement over the
current state of the art.
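The defining property of a resolvable design, that its blocks split into parallel classes each of which partitions the point set, is easy to check on the smallest example, a q-by-q grid; the constructions actually used for coded computing are richer (e.g. from affine resolvable designs), and this sketch shows only the combinatorial property itself.

```python
def grid_resolvable_design(q):
    """A toy resolvable design on the q*q grid of points.

    The rows form one parallel class and the columns another: each
    class consists of disjoint blocks that together cover every point,
    which is exactly the resolvability property.
    """
    points = [(r, c) for r in range(q) for c in range(q)]
    rows = [[(r, c) for c in range(q)] for r in range(q)]
    cols = [[(r, c) for r in range(q)] for c in range(q)]
    return points, [rows, cols]

points, classes = grid_resolvable_design(3)
for parallel_class in classes:
    covered = [p for block in parallel_class for p in block]
    assert sorted(covered) == sorted(points)   # each class partitions the points
```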
Goodbye, ALOHA!
The vision of the Internet of Things (IoT) to interconnect and Internet-connect everyday people, objects, and machines poses new challenges in the design of wireless communication networks. The design of medium access control (MAC) protocols has traditionally been an intense area of research due to their high impact on the overall performance of wireless communications. The majority of research activities in this field deal with variations of protocols somehow based on ALOHA, either with or without listen-before-talk, i.e., carrier sense multiple access. These protocols operate well under low traffic loads and a low number of simultaneous devices. However, they suffer from congestion as the traffic load and the number of devices increase. For this reason, unless revisited, the MAC layer can become a bottleneck for the success of the IoT. In this paper, we provide an overview of the existing MAC solutions for the IoT, describing current limitations and envisioned challenges for the near future. Motivated by those, we identify a family of simple algorithms based on distributed queueing (DQ), which can operate for an infinite number of devices generating any traffic load and pattern. A description of the DQ mechanism is provided, and the most relevant existing studies of DQ applied in different scenarios are described. In addition, we provide a novel performance evaluation of DQ when applied to the IoT. Finally, a description of the very first demo of DQ for its use in the IoT is also included in this paper.
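The basic DQ rule can be sketched as follows. This is a simplification that ignores, among other things, the rules governing when new arrivals may contend; the queue names CRQ and DTQ follow the DQ literature.

```python
import random

def dq_frame(contending, crq, dtq, m=3, rng=random):
    """One frame of a simplified Distributed Queueing (DQ) round.

    Each contending device picks one of m access minislots. Devices that
    collide in a minislot join the collision-resolution queue (CRQ) as a
    group, to retry together in a later frame; a device alone in its
    minislot joins the data-transmission queue (DTQ) and will transmit
    its data collision-free.
    """
    slots = {}
    for dev in contending:
        slots.setdefault(rng.randrange(m), []).append(dev)
    for group in slots.values():
        (dtq if len(group) == 1 else crq).append(group)

crq, dtq = [], []
devices = list(range(10))
dq_frame(devices, crq, dtq, m=3, rng=random.Random(1))

# Every device ends up queued exactly once: no contention is wasted,
# which is why DQ keeps working as the number of devices grows.
queued = [d for group in crq + dtq for d in group]
assert sorted(queued) == devices
```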
A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication
This paper has two contributions. First, we propose a novel coded matrix
multiplication technique called Generalized PolyDot codes that advances on
existing methods for coded matrix multiplication under storage and
communication constraints. This technique uses "garbage alignment," i.e.,
aligning computations in coded computing that are not a part of the desired
output. Generalized PolyDot codes bridge between Polynomial codes and MatDot
codes, trading off between recovery threshold and communication costs. Second,
we demonstrate that Generalized PolyDot can be used for training large Deep
Neural Networks (DNNs) on unreliable nodes prone to soft-errors. This requires
us to address three additional challenges: (i) prohibitively large overhead of
coding the weight matrices in each layer of the DNN at each iteration; (ii)
nonlinear operations during training, which are incompatible with linear
coding; and (iii) not assuming presence of an error-free master node, requiring
us to architect a fully decentralized implementation without any "single point
of failure." We allow all primary DNN training steps, namely, matrix
multiplication, nonlinear activation, Hadamard product, and update steps as
well as the encoding/decoding to be error-prone. We consider both the case of
mini-batch size one and that of larger mini-batch sizes, leveraging coded
matrix-vector products and matrix-matrix products respectively. The problem of
DNN training under soft-errors also motivates an interesting, probabilistic
error model under which a real-number MDS code is shown to correct more errors
with high probability than under the more conventional, adversarial error
model. We also demonstrate that our
proposed strategy can provide unbounded gains in error tolerance over a
competing replication strategy and a preliminary MDS-code-based strategy for
both these error models.
Comment: Presented in part at the IEEE International Symposium on Information Theory 2018 (submission date: Jan 12, 2018); currently under review at the IEEE Transactions on Information Theory.
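One endpoint of the PolyDot trade-off, Polynomial codes, can be illustrated with scalar blocks so the algebra stays visible. This sketch is not the paper's DNN training scheme; Generalized PolyDot interpolates between this construction and MatDot codes.

```python
from fractions import Fraction

# A is split into row blocks A0, A1 and B into column blocks B0, B1
# (scalars here, for readability). Worker i evaluates
#   pA(x) = A0 + A1*x    and    pB(x) = B0 + B1*x**2
# and returns pA(x_i) * pB(x_i), a degree-3 polynomial in x_i whose
# coefficients are exactly the block products A0B0, A1B0, A0B1, A1B1.
A0, A1 = 2, 5
B0, B1 = 7, 3

def worker(x):
    return (A0 + A1 * x) * (B0 + B1 * x * x)

# Any 4 worker results suffice (recovery threshold 4): interpolate the
# degree-3 polynomial and read off its coefficients.
xs = [1, 2, 3, 4]                         # 4 surviving workers
ys = [Fraction(worker(x)) for x in xs]

def coeffs_from_points(xs, ys):
    """Lagrange interpolation, returning polynomial coefficients."""
    n = len(xs)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        basis = [Fraction(1)]             # i-th Lagrange basis polynomial
        denom = Fraction(1)
        for j in range(n):
            if j == i:
                continue
            denom *= xs[i] - xs[j]
            basis = [Fraction(0)] + basis  # multiply basis by x ...
            for k in range(len(basis) - 1):
                basis[k] -= xs[j] * basis[k + 1]  # ... minus xs[j]
        for k in range(n):
            coeffs[k] += ys[i] * basis[k] / denom
    return coeffs

c = coeffs_from_points(xs, ys)
assert c == [A0 * B0, A1 * B0, A0 * B1, A1 * B1]
```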
Interactive Visual Analysis of Networked Systems: Workflows for Two Industrial Domains
We report on a first study of interactive visual analysis of networked systems. Working with ABB Corporate Research and Ericsson Research, we have created workflows which demonstrate the potential of visualization in the domains of industrial automation and telecommunications. By a workflow in this context, we mean a sequence of visualizations and the actions for generating them. Visualizations can be any images that represent properties of the data sets analyzed, and actions typically either change the selection of data visualized or change the visualization by choice of technique or change of parameters.