Cryptographic error correction
Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (leaves 67-71).

It has been said that "cryptography is about concealing information, and coding theory is about revealing it." Despite these apparently conflicting goals, the two fields have common origins and many interesting relationships. In this thesis, we establish new connections between cryptography and coding theory in two ways: first, by applying cryptographic tools to solve classical problems from the theory of error correction; and second, by studying special kinds of codes that are motivated by cryptographic applications. In the first part of this thesis, we consider a model of error correction in which the source of errors is adversarial, but limited to feasible computation. In this model, we construct appealingly simple, general, and efficient cryptographic coding schemes which can recover from much larger error rates than schemes for classical models of adversarial noise. In the second part, we study collusion-secure fingerprinting codes, which are of fundamental importance in cryptographic applications like data watermarking and traitor tracing. We demonstrate tight lower bounds on the lengths of such codes by devising and analyzing a general collusive attack that works for any code.

By Christopher Jason Peikert, Ph.D.
A STUDY OF ERASURE CORRECTING CODES
This work focuses on erasure codes, particularly high-performance codes and their decoding algorithms, especially those of low computational complexity. The work is composed of several pieces, but the main components are developed within the following two themes.
Ideas from message passing are applied to solve erasures after transmission. An efficient matrix representation of the belief propagation (BP) decoding algorithm on the binary erasure channel (BEC) is introduced as the recovery algorithm. Gallager's bit-flipping algorithm is further developed into guess and multi-guess algorithms, applied in particular to recover the erasures left unsolved by the recovery algorithm.
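The BP recovery step on the binary erasure channel can be sketched as a peeling decoder: repeatedly find a parity check involving exactly one erased bit and solve it. The code below is a minimal illustration of that idea (the matrix form and the guess/multi-guess extensions described above are not shown); the (7,4) Hamming code used here is an assumed toy example, not one from the thesis.

```python
import numpy as np

def bp_erasure_decode(H, y):
    """Peeling (BP) decoder for the binary erasure channel.

    H : parity-check matrix as a 0/1 numpy array
    y : received vector, known bits in {0, 1}, erasures marked -1
    Returns the (possibly only partially) recovered vector.
    """
    y = y.copy()
    progress = True
    while progress:
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = [i for i in idx if y[i] == -1]
            if len(erased) == 1:  # a check with one unknown is solvable
                known = [i for i in idx if y[i] != -1]
                y[erased[0]] = np.bitwise_xor.reduce(y[known]) if known else 0
                progress = True
    return y

# Toy (7,4) Hamming code: H @ codeword = 0 (mod 2)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 0, 1, 0])
y = codeword.copy()
y[[0, 4]] = -1                 # erase two positions
recovered = bp_erasure_decode(H, y)
```

When a check involves two or more erasures the peeling decoder stalls; that is exactly the situation the guess and multi-guess algorithms above are designed to break.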
A novel maximum-likelihood decoding algorithm, the In-place algorithm, is proposed with reduced computational complexity. A further study of the marginal number of correctable erasures under the In-place algorithm establishes a lower bound on the average number of correctable erasures. In the spirit of searching for the most likely codeword given the received vector, we propose a new branch-evaluation-search-on-the-code-tree (BESOT) algorithm, which is powerful enough to approach ML performance for all linear block codes.
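For erasures, maximum-likelihood decoding of a linear block code reduces to solving a linear system over GF(2) in the erased positions. The sketch below illustrates that baseline by plain Gaussian elimination; it is not the In-place algorithm itself (whose point is to reduce this complexity), and the Hamming-code example is an assumed illustration.

```python
import numpy as np

def ml_erasure_decode(H, y):
    """ML erasure decoding as GF(2) Gaussian elimination.

    Solves H_e x = H_k y_k (mod 2) for the erased bits x,
    where e/k index the erased/known positions of y.
    Returns the recovered vector, or None if the erasure
    pattern is not uniquely solvable.
    """
    erased = np.flatnonzero(y == -1)
    known = np.flatnonzero(y != -1)
    s = H[:, known] @ y[known] % 2          # syndrome from the known bits
    M = np.concatenate([H[:, erased], s[:, None]], axis=1)
    rows = M.shape[0]
    pivot_row, pivots = 0, []
    for col in range(len(erased)):
        pr = next((r for r in range(pivot_row, rows) if M[r, col]), None)
        if pr is None:
            continue
        M[[pivot_row, pr]] = M[[pr, pivot_row]]
        for r in range(rows):
            if r != pivot_row and M[r, col]:
                M[r] ^= M[pivot_row]        # GF(2) row elimination
        pivots.append(col)
        pivot_row += 1
    if len(pivots) < len(erased):
        return None                          # underdetermined pattern
    out = y.copy()
    for r, col in enumerate(pivots):
        out[erased[col]] = M[r, -1]
    return out

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 0, 1, 0])
y = codeword.copy()
y[[0, 1, 4]] = -1                            # three erasures
recovered = ml_erasure_decode(H, y)
```

Unlike the peeling decoder, this ML approach recovers any erasure pattern whose columns of H are linearly independent, at the cost of roughly quadratic work per block.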
To maximise the recovery capability of the In-place algorithm in network transmissions, we propose a product packetisation structure to reconcile the computational complexity of the In-place algorithm. Combined with this structure, the computational complexity falls below the quadratic bound. We then extend the scheme to the Rayleigh fading channel to correct both errors and erasures. By concatenating an outer code, such as a BCH code, the product-packetised RS codes under the hard-decision In-place algorithm perform significantly better than soft-decision iterative algorithms on optimally designed LDPC codes.
Swarm intelligence and its applications to wireless ad hoc and sensor networks.
Swarm intelligence, inspired by natural biological swarms, has numerous powerful properties for distributed problem solving in complex real-world applications such as optimisation and control. These properties can be found in natural systems such as ants, bees and birds, whereby unsophisticated agents interacting locally with their environment achieve collective problem solving without centralised control. Recent advances in wireless communication and digital electronics have instigated important changes in distributed computing. Pervasive computing environments have emerged, such as large-scale communication networks and wireless ad hoc and sensor networks, which are extremely dynamic and unreliable. Network management and control must be based on distributed principles, since centralised approaches may not be suitable for exploiting the enormous potential of these environments. In this thesis, we focus on applying swarm intelligence to optimisation and control problems in wireless ad hoc and sensor networks.
Firstly, an analysis of the recently proposed particle swarm optimisation, a swarm intelligence technique, is presented. Previous stability analyses of particle swarm optimisation were restricted to the assumption that all parameters are non-random, since theoretical analysis with random parameters is difficult. We analyse the stability of the particle dynamics without these restrictive assumptions, using Lyapunov stability and passive-systems concepts. Particle swarm optimisation is then used to solve the sink-node placement problem in sensor networks.
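The particle dynamics analysed above follow the standard velocity/position update rule. The sketch below shows that rule on a stand-in objective: minimising the total squared distance from a candidate sink position to fixed sensor nodes. The objective, parameter values and sensor layout are illustrative assumptions, not the thesis's formulation of the sink placement problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimisation sketch.

    Update rule: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- x + v,
    with r1, r2 uniform random in [0, 1) (the random parameters whose effect
    the stability analysis above accounts for).
    """
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Hypothetical stand-in for sink placement: place one sink to minimise
# total squared distance to three fixed sensors (optimum = centroid).
sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
cost = lambda p: float(((sensors - p) ** 2).sum())
best, best_cost = pso(cost, dim=2)
```

With w = 0.7 and c1 = c2 = 1.5 the swarm sits inside the usual deterministic convergence region; the stochastic analysis above addresses precisely the case where r1 and r2 are kept random rather than replaced by their means.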
Secondly, swarm intelligence based routing methods for mobile ad hoc networks are investigated. Two protocols based on the foraging behaviour of biological ants have been proposed and implemented in the NS2 network simulator. The first protocol allows each node in the network to choose the next node to which packets are forwarded on the basis of a mobility-influenced routing table. Since mobility is one of the most important factors in route changes in mobile ad hoc networks, the mobility of each neighbour node is predicted using HELLO packets and then translated into a pheromone decay, as found in natural biological systems. The second protocol uses the same mechanism as the first, but instead of mobility it uses the neighbour node's remaining energy level and its drain rate. The thesis clearly shows that swarm intelligence methods have a very useful role to play in the management and control problems associated with wireless ad hoc and sensor networks. It has given a number of example applications and demonstrated their usefulness in improving performance over other existing methods.
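The shared mechanism of the two protocols can be sketched as a per-node routing table in which each neighbour's pheromone is reinforced by ant packets and evaporated at a rate driven by that neighbour's predicted mobility (protocol 1) or energy drain (protocol 2). The class below is a hypothetical simplification for illustration; names and the exact decay law are assumptions, not the thesis's specification.

```python
import random

class AntRoutingTable:
    """Sketch of an ant-inspired routing table for one node."""

    def __init__(self):
        self.pheromone = {}  # neighbour id -> pheromone level

    def reinforce(self, neighbour, amount=1.0):
        # Ant packets arriving via a neighbour deposit pheromone on it.
        self.pheromone[neighbour] = self.pheromone.get(neighbour, 0.0) + amount

    def decay(self, rate_per_neighbour):
        # Higher predicted mobility (or energy drain), given as a rate in
        # [0, 1], means faster evaporation, so routes through unstable
        # neighbours fade more quickly.
        for n, rate in rate_per_neighbour.items():
            if n in self.pheromone:
                self.pheromone[n] *= (1.0 - rate)

    def next_hop(self, rng=random.random):
        # Probabilistic forwarding proportional to pheromone level.
        total = sum(self.pheromone.values())
        r = rng() * total
        acc = 0.0
        for n, p in self.pheromone.items():
            acc += p
            if r <= acc:
                return n
```

Keeping the forwarding choice probabilistic, rather than always taking the strongest neighbour, is what lets the ant mechanism keep exploring alternative routes as the network changes.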
Combined Industry, Space and Earth Science Data Compression Workshop
The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements, and the constraints imposed by the data collection, transmission, distribution and archival systems.
Cognitive Foundations for Visual Analytics
In this report, we provide an overview of scientific/technical literature on information visualization and visual analytics (VA). Topics discussed include an update and overview of the extensive literature search conducted for this study, the nature and purpose of the field, major research thrusts, and scientific foundations. We review methodologies for evaluating and measuring the impact of VA technologies, as well as taxonomies that have been proposed for various purposes to support the VA community. A cognitive science perspective underlies each of these discussions.