Improving randomness characterization through Bayesian model selection
Nowadays random number generation plays an essential role in technology with
important applications in areas ranging from cryptography, which lies at the
core of current communication protocols, to Monte Carlo methods, and other
probabilistic algorithms. In this context, a crucial scientific endeavour is to
develop effective methods that allow the characterization of random number
generators. However, commonly employed methods either lack formality (e.g. the
NIST test suite), or are inapplicable in principle (e.g. the characterization
derived from the Algorithmic Theory of Information (ATI)). In this letter we
present a novel method based on Bayesian model selection, which is both
rigorous and effective, for characterizing randomness in a bit sequence. We
derive analytic expressions for a model's likelihood, which are then used to
compute its posterior probability distribution. Our method proves to be more
rigorous than NIST's suite and the Borel-Normality criterion and its
implementation is straightforward. We have applied our method to an
experimental device based on the process of spontaneous parametric
downconversion, implemented in our laboratory, to confirm that it behaves as a
genuine quantum random number generator (QRNG). As our approach relies on
Bayesian inference, which entails model generalizability, our scheme transcends
individual sequence analysis, leading to a characterization of the source of
the random sequences itself.
Comment: 25 pages
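The model-comparison idea behind the paper can be illustrated with a minimal sketch (these are not the authors' actual models): compare a fair-coin model M0 (bits i.i.d. with p = 1/2) against a biased-coin model M1 with a uniform Beta prior on the unknown bias, whose marginal likelihood has a closed Beta-Binomial form, and combine the two marginal likelihoods via Bayes' rule.

```python
from math import lgamma, log, exp

def log_marginal_fair(n):
    # M0: n i.i.d. fair bits, p = 1/2 exactly
    return n * log(0.5)

def log_marginal_biased(k, n, a=1.0, b=1.0):
    # M1: unknown bias p with Beta(a, b) prior; the marginal likelihood
    # of k ones in n bits is the Beta-Binomial integral B(k+a, n-k+b)/B(a, b)
    return (lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
            - (lgamma(a) + lgamma(b) - lgamma(a + b)))

def posterior_prob_fair(bits):
    # Equal prior probabilities over {M0, M1}; apply Bayes' rule
    n, k = len(bits), sum(bits)
    l0 = log_marginal_fair(n)
    l1 = log_marginal_biased(k, n)
    return 1.0 / (1.0 + exp(l1 - l0))

bits = [0, 1] * 500                 # perfectly balanced 1000-bit sequence
print(posterior_prob_fair(bits))    # close to 1: the fair model is preferred
```

A heavily biased sequence (say 900 ones in 1000 bits) drives the same posterior toward 0, favouring the biased model; the authors' scheme generalizes this comparison to a richer family of models of the source.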
Automated test sequence generation for finite state machines using genetic algorithms
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Testing software implementations that are formally specified using finite state automata (FSA) has long been of interest. Such systems include communication protocols and the control sections of safety-critical systems. There is extensive literature on how to formally validate an FSA-based specification, but testing that an implementation conforms to the specification is still an open problem.
Two aspects of FSA-based testing, both NP-hard problems, are discussed in this thesis and then combined. These are the generation of state verification sequences (UIOs) and the generation of sequences of conditional transitions that are easy to trigger.
To facilitate test sequence generation, a novel representation of the transition conditions and a number of fitness function algorithms are defined. An empirical study of their effectiveness on real FSA-based systems and example FSAs provides some interesting positive results. The use of genetic algorithms (GAs) makes these problems scalable for large FSAs. The experiments used a software tool developed in Java.
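The GA approach to triggering a sequence of conditional transitions can be sketched roughly as follows. The FSM, target path, and fitness function here are illustrative assumptions, not the thesis's actual representation, which encodes transition conditions in a richer form.

```python
import random

# Hypothetical FSM: (state, input symbol) -> next state
TRANSITIONS = {
    ("s0", "a"): "s1",
    ("s1", "b"): "s2",
    ("s2", "c"): "s3",
}
TARGET_PATH = ["a", "b", "c"]   # transition sequence we want to trigger
ALPHABET = ["a", "b", "c"]

def fitness(chromosome):
    # Reward a candidate input sequence by how far along TARGET_PATH
    # it drives the FSM from the initial state s0.
    state, progress = "s0", 0
    for sym in chromosome:
        if (progress < len(TARGET_PATH) and sym == TARGET_PATH[progress]
                and (state, sym) in TRANSITIONS):
            state = TRANSITIONS[(state, sym)]
            progress += 1
    return progress

def evolve(pop_size=30, length=6, generations=40, seed=0):
    # Simple generational GA: truncation selection, one-point crossover,
    # point mutation; survivors are carried over unchanged (elitism).
    rng = random.Random(seed)
    pop = [[rng.choice(ALPHABET) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:
                child[rng.randrange(length)] = rng.choice(ALPHABET)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # usually reaches 3 on this toy FSM
```

The point of a graded fitness (progress along the path rather than a pass/fail flag) is to give the GA a gradient to climb, which is what makes the search scalable for large FSAs.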
The STRESS Method for Boundary-point Performance Analysis of End-to-end Multicast Timer-Suppression Mechanisms
Evaluation of Internet protocols usually uses random scenarios or scenarios
based on designers' intuition. Such an approach may be useful for average-case
analysis but does not cover boundary-point (worst- or best-case) scenarios. To
synthesize boundary-point scenarios, a more systematic approach is needed. In
this paper, we present a method for the automatic synthesis of worst- and
best-case scenarios for protocol boundary-point evaluation.
Our method uses a fault-oriented test generation (FOTG) algorithm for
searching the protocol and system state space to synthesize these scenarios.
The algorithm is based on a global finite state machine (FSM) model. We extend
the algorithm with timing semantics to handle end-to-end delays and address
performance criteria. We introduce the notion of a virtual LAN to represent
delays of the underlying multicast distribution tree. The algorithms used in
our method utilize implicit backward search using branch and bound techniques
and start from given target events. This aims to reduce the search complexity
drastically. As a case study, we use our method to evaluate variants of the
timer suppression mechanism, used in various multicast protocols, with respect
to two performance criteria: overhead of response messages and response time.
Simulation results for reliable multicast protocols show that our method
provides a scalable way for synthesizing worst-case scenarios automatically.
Results obtained using stress scenarios differ dramatically from those obtained
through average-case analyses. We hope for our method to serve as a model for
applying systematic scenario generation to other multicast protocols.
Comment: 24 pages, 10 figures, IEEE/ACM Transactions on Networking (ToN) [To appear]
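The implicit backward search from target events can be sketched on a toy global FSM. The states, events, and transitions below are hypothetical; the paper's FOTG algorithm additionally handles timing semantics and end-to-end delays, which this sketch omits. The bound simply prunes any partial scenario already as long as the best one found.

```python
from collections import defaultdict
import heapq

# Hypothetical global FSM transitions: (source state, event, destination state)
TRANSITIONS = [
    ("idle", "loss", "timer_set"),
    ("timer_set", "timeout", "request_sent"),
    ("timer_set", "suppress", "idle"),
    ("request_sent", "reply", "idle"),
    ("request_sent", "loss", "timer_set"),
]

def backward_scenarios(target, initial, max_len=10):
    # Index transitions by destination so we can walk backward from the
    # target state toward the initial state.
    by_dst = defaultdict(list)
    for src, ev, dst in TRANSITIONS:
        by_dst[dst].append((src, ev))
    best = None
    frontier = [(0, target, [])]           # (cost, state, event sequence)
    while frontier:
        cost, state, events = heapq.heappop(frontier)
        if best is not None and cost >= len(best):
            continue                       # bound: prune longer scenarios
        if state == initial:
            best = events                  # shortest scenario reaching target
            continue
        for src, ev in by_dst[state]:      # branch: expand predecessors
            if cost + 1 <= max_len:
                heapq.heappush(frontier, (cost + 1, src, [ev] + events))
    return best

print(backward_scenarios("request_sent", "idle"))  # → ['loss', 'timeout']
```

Starting from the target event and searching backward keeps the explored state space focused on scenarios that can actually produce the event of interest, which is the source of the complexity reduction the paper describes.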
Opaque Service Virtualisation: A Practical Tool for Emulating Endpoint Systems
Large enterprise software systems make many complex interactions with other
services in their environment. Developing and testing for production-like
conditions is therefore a very challenging task. Current approaches include
emulation of dependent services using either explicit modelling or
record-and-replay approaches. Models require deep knowledge of the target
services while record-and-replay is limited in accuracy. Both face
developmental and scaling issues. We present a new technique that improves the
accuracy of record-and-replay approaches, without requiring prior knowledge of
the service protocols. The approach uses Multiple Sequence Alignment to derive
message prototypes from recorded system interactions and a scheme to match
incoming request messages against prototypes to generate response messages. We
use a modified Needleman-Wunsch algorithm for distance calculation during
message matching. Our approach has shown greater than 99% accuracy for four
evaluated enterprise system messaging protocols. The approach has been
successfully integrated into the CA Service Virtualization commercial product
to complement its existing techniques.
Comment: In Proceedings of the 38th International Conference on Software Engineering Companion (pp. 202-211). arXiv admin note: text overlap with arXiv:1510.0142
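The Needleman-Wunsch distance calculation used during message matching can be sketched as below. The scoring parameters and the prototype strings are illustrative assumptions; the paper uses a modified variant of the algorithm.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    # Global alignment score via dynamic programming; a higher score
    # means the two messages are more similar.
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                          else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    return score[n][m]

def best_prototype(request, prototypes):
    # Match an incoming request against the message prototypes derived
    # from recorded interactions; pick the most similar one.
    return max(prototypes, key=lambda p: needleman_wunsch(request, p))

prototypes = ["GET /users/{id}", "DELETE /users/{id}", "GET /orders/{id}"]
print(best_prototype("GET /users/42", prototypes))  # → GET /users/{id}
```

In the full technique, the selected prototype then drives generation of the response message, so the emulated endpoint needs no explicit model of the service protocol.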
Compiling symbolic attacks to protocol implementation tests
Recently, efficient model-checking tools have been developed to find flaws in
security protocol specifications. These flaws can be interpreted as potential
attack scenarios, but the feasibility of these scenarios needs to be confirmed
at the implementation level. However, bridging the gap between an abstract
attack scenario derived from a specification and a penetration test on real
implementations of a protocol is still an open issue. This work investigates an
architecture for automatically generating abstract attacks and converting them
to concrete tests on protocol implementations. In particular we aim to improve
previously proposed blackbox testing methods in order to discover automatically
new attacks and vulnerabilities. As a proof of concept, we have experimented
with our proposed architecture to detect a renegotiation vulnerability in some
implementations of SSL/TLS, a protocol widely used for securing electronic
transactions.
Comment: In Proceedings SCSS 2012, arXiv:1307.802
Packet reordering, high speed networks and transport protocol performance
We performed end-to-end measurements of UDP/IP flows across an Internet backbone network. Using this data, we characterized the packet reordering processes seen in the network. Our results demonstrate the high prevalence of packet reordering relative to packet loss, and show a strong correlation between packet rate and reordering on the network we studied. We conclude that, given the increased parallelism in modern networks and the demands of high-performance applications, new application and protocol designs should treat packet reordering on an equal footing with packet loss, and must be robust and resilient to both in order to achieve high performance.
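The kind of reordering characterization described above can be sketched with a simple sequence-number metric, similar in spirit to the RFC 4737 reordered-packet definition (an assumption: the abstract does not state which metrics the study used).

```python
def reordering_stats(seq_numbers):
    # A packet is counted as reordered if it arrives with a sequence
    # number below the running maximum seen so far (a late arrival).
    max_seen, reordered = -1, 0
    for s in seq_numbers:
        if s < max_seen:
            reordered += 1
        else:
            max_seen = s
    return reordered, reordered / len(seq_numbers)

arrivals = [0, 1, 3, 2, 4, 6, 5, 7]   # packets 2 and 5 arrive late
print(reordering_stats(arrivals))      # → (2, 0.25)
```

Computing this ratio alongside the loss rate on the same flow makes the paper's central comparison concrete: on parallel network paths the reordered fraction can dominate the lost fraction.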