Completion of Computation of Improved Upper Bound on the Maximum Average Linear Hull Probability for Rijndael
This report presents the results from the completed computation of an algorithm introduced by the authors in [11] for evaluating the provable security of the AES (Rijndael) against linear cryptanalysis. This algorithm, later named KMT2, can in fact be applied to any SPN [8]. Preliminary results in [11] were based on 43% of the total computation, estimated at 200,000 hours on our benchmark machine at the time, a Sun Ultra 5. After some delay, we obtained access to the necessary computational resources, and were able to run the algorithm to completion. In addition to the above, this report presents the results from the dual version of our algorithm (KMT2-DC) as applied to the AES.
ASYMPTOTIC ANALYSIS OF SINGLE-HOP STOCHASTIC PROCESSING NETWORKS USING THE DRIFT METHOD
Today’s era of cloud computing and big data is powered by massive data centers. The
focus of my dissertation is on resource allocation problems that arise in the operation of
these large-scale data centers. Analyzing these systems exactly is usually intractable, so a common approach is to study them in various asymptotic regimes, heavy traffic being a popular one. We use the drift method, a two-step procedure to obtain bounds that
are asymptotically tight. In the first step, one shows state-space collapse, which, intuitively,
means that one detects the bottleneck(s) of the system. In the second step, one sets to zero
the drift of a carefully chosen test function. Then, using state-space collapse, one can obtain
the desired bounds.
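As a minimal sketch of the second step (notation assumed here, not taken from the abstract): for the stationary queue-length vector q and the quadratic test function V(q) = \|q\|^2, setting the steady-state drift of V to zero gives

    \mathbb{E}\left[ V(q(k+1)) - V(q(k)) \right] = 0,

and expanding this identity, together with state-space collapse onto the bottleneck direction, yields bounds on the expected total queue length that become tight as the heavy-traffic parameter goes to zero.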
This dissertation focuses on exploiting the properties of the drift method and providing
conditions under which one can completely determine the asymptotic distribution of the
queue lengths. In chapter 1 we present the motivation, research background, and main
contributions.
In chapter 2 we revisit some well-known definitions and results that will be repeatedly
used in the following chapters.
In chapter 3, chapter 4, and chapter 5 we focus on load-balancing systems, also known as
supermarket checkout systems. In the load-balancing system, there are a certain number of
servers, and jobs arrive in a single stream. Upon arrival, each job joins the queue associated with one of the servers and waits in line until the corresponding server processes it.
In chapter 3 we introduce the moment generating function (MGF) method. The MGF,
also known as the two-sided Laplace transform, is an invertible transformation of the random variable's distribution and hence provides the same information as the cumulative distribution function or the density (when it exists). The MGF method is a two-step procedure to
compute the MGF of the delay in stochastic processing networks (SPNs) that satisfy the
complete resource pooling (CRP) condition. Intuitively, CRP means that the SPN has a
single bottleneck in heavy traffic.
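A hedged sketch of the idea, with symbols assumed for illustration: the MGF of a random variable X is

    M_X(\theta) = \mathbb{E}\left[ e^{\theta X} \right],

and under CRP one sets to zero the steady-state drift of \exp(\theta \epsilon \langle c, q \rangle), where c is the bottleneck direction and \epsilon the heavy-traffic parameter, and then solves for the stationary MGF of the scaled queue length; a limit of the form (1 - \theta m)^{-1} identifies the scaled queue length, and hence the delay, as exponentially distributed with mean m.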
A popular routing algorithm is power-of-d choices, under which one selects d servers
at random and routes each new arrival to the shortest queue among those d. The power-of-d
choices algorithm has been widely studied in load-balancing systems with homogeneous
servers. However, it is not well understood when the servers are different. In chapter 4 we
study this routing policy under heterogeneous servers. Specifically, we provide necessary
and sufficient conditions on the service rates so that the load-balancing system achieves
throughput and heavy-traffic optimality. We use the MGF method to show heavy-traffic
optimality.
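For concreteness, a minimal sketch of the power-of-d routing rule itself (hypothetical names; the rule is as described above, and the chapter's contribution is the analysis, not this code):

    import random

    def route_power_of_d(queue_lengths, d, rng=random):
        """Sample d distinct servers uniformly at random and return the index
        of the sampled server with the shortest queue (ties broken arbitrarily)."""
        sampled = rng.sample(range(len(queue_lengths)), d)
        return min(sampled, key=lambda i: queue_lengths[i])

    # Example: route one arrival among 5 servers with d = 2
    queues = [3, 0, 4, 1, 2]
    dest = route_power_of_d(queues, d=2)
    queues[dest] += 1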
In chapter 5 we study the load-balancing system in the many-server heavy-traffic regime,
which means that we analyze the limit as the number of servers and the load increase together.
Specifically, we are interested in studying how fast the number of servers can grow
with respect to the load if we want to observe the same probabilistic behavior of the delay as in a system with a fixed number of servers in heavy traffic. We present two approaches to obtain the results: the MGF method and Stein's method.
In chapter 6 we apply the MGF method to a generalized switch, which is one of the
most general single-hop SPNs with control on the service process. Many systems, such
as ad hoc wireless networks, input-queued switches, and parallel-server systems, can be
modeled as special cases of the generalized switch.
Most of the literature on SPNs (including the previous chapters of this thesis) focuses on
systems that satisfy the CRP condition in heavy traffic, i.e., systems that behave as single-server
queues in the limit. In chapter 7 we study systems that do not satisfy this condition
and, hence, may have multiple bottlenecks. We specify conditions under which the drift
method is sufficient to obtain the distribution function of the delay, and when it can only be
used to obtain information about its mean value. Our results are valid for both the CRP and non-CRP cases, and they are immediately applicable to a variety of systems. Additionally, we provide a mathematical proof that shows a limitation of the drift method.
Computing performability measures in Markov chains by means of matrix functions
We discuss the efficient computation of performance, reliability, and
availability measures for Markov chains; these metrics, and the ones obtained
by combining them, are often called performability measures. We show that this
computational problem can be recast as the evaluation of bilinear forms induced by appropriate matrix functions, and thus solved by leveraging the fast methods available for this task. We provide a comprehensive analysis of the theory required to translate the problem from the language of Markov chains to that of matrix functions. The advantages of this new formulation are discussed, and it is shown that this setting allows one to easily study the sensitivities of the measures with respect to the model parameters. Numerical experiments confirm the effectiveness of our approach; the tests we have run show that we can outperform the solvers available in state-of-the-art commercial packages on a representative set of large-scale examples.
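As a hedged illustration of the bilinear-form viewpoint (the generator, initial distribution, and reward vector below are invented for the example, not taken from the paper): the expected instantaneous reward of a continuous-time Markov chain with generator Q at time t can be written as the bilinear form pi0^T f(Q) r with f(z) = exp(t z).

    import numpy as np
    from scipy.linalg import expm

    # Toy 3-state CTMC generator (each row sums to zero) -- illustrative only
    Q = np.array([[-2.0,  1.5,  0.5],
                  [ 1.0, -3.0,  2.0],
                  [ 0.5,  0.5, -1.0]])
    pi0 = np.array([1.0, 0.0, 0.0])   # initial distribution
    r   = np.array([1.0, 1.0, 0.0])   # reward of 1 in the "operational" states

    t = 2.0
    # pi0 @ expm(Q * t) is the transient distribution at time t; the inner
    # product with r gives the expected reward, i.e. pi0^T f(Q) r with f(z) = exp(t z).
    expected_reward = pi0 @ expm(Q * t) @ r
    print(expected_reward)

Forming expm(Q * t) densely, as above, is only for illustration; the point of the matrix-function formulation is that such bilinear forms can be evaluated with fast methods that avoid computing f(Q) explicitly.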
Statistical cryptanalysis of block ciphers
Since the development of cryptology in the industrial and academic worlds in the seventies, public knowledge and expertise have grown tremendously, notably because of the increasing, nowadays almost ubiquitous, presence of electronic communication means in our lives. Block ciphers are essential building blocks of the security of various electronic systems. Recently, many advances have been published in the field of public-key cryptography, whether in the understanding of the security models involved or in the mathematical security proofs applied to specific cryptosystems. Unfortunately, this is still not the case in the world of symmetric-key cryptography, and the current state of knowledge is far from reaching such a goal. However, block and stream ciphers tend to counterbalance this lack of "provable security" by other advantages, like high data throughput and ease of implementation. In the first part of this thesis, we would like to add a (small) stone to the wall of provable security of block ciphers with the (theoretical and experimental) statistical analysis of the mechanisms behind Matsui's linear cryptanalysis as well as more abstract models of attacks. For this purpose, we consider the underlying problem as a statistical hypothesis testing problem and we make heavy use of the Neyman-Pearson paradigm. Then, we generalize the concept of linear distinguisher and we discuss the power of such a generalization. Furthermore, we introduce the concept of sequential distinguishers, based on sequential sampling, and of aggregate distinguishers, which allow one to build sub-optimal but efficient distinguishers. Finally, we propose new attacks against reduced-round versions of the block cipher IDEA. In the second part, we propose the design of a new family of block ciphers named FOX. First, we study the efficiency of optimal diffusive components when implemented on low-cost architectures, and we present several new constructions of MDS matrices; then, we precisely describe FOX and we discuss its security regarding linear and differential cryptanalysis, integral attacks, and algebraic attacks. Finally, various implementation issues are considered.
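As a minimal, hedged sketch of the hypothesis-testing view of a linear distinguisher discussed above (masks, threshold, and names are illustrative, not taken from the thesis): count how often a fixed linear approximation holds over the available plaintext/ciphertext pairs and decide "cipher" when the empirical fraction deviates enough from 1/2.

    def parity(x, mask):
        """Parity of the bits of x selected by mask."""
        return bin(x & mask).count("1") & 1

    def linear_distinguisher(pairs, in_mask, out_mask, threshold):
        """pairs: iterable of (plaintext, ciphertext) integers.
        Returns True ('biased, looks like the cipher') when the observed
        fraction of pairs satisfying the linear approximation deviates from
        1/2 by more than the threshold, and False ('looks random') otherwise."""
        n = 0
        hits = 0
        for pt, ct in pairs:
            # The approximation holds when the two masked parities agree.
            hits += 1 - (parity(pt, in_mask) ^ parity(ct, out_mask))
            n += 1
        return abs(hits / n - 0.5) > threshold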
Decorrelation: A Theory for Block Cipher Security
Pseudorandomness is a classical model for the security of block ciphers. In this paper we propose convenient tools in order to study it in connection with Shannon theory, the Carter-Wegman universal hash functions paradigm, and the Luby-Rackoff approach. This enables the construction of new ciphers with security proofs under specific models. We show how to ensure security against basic differential and linear cryptanalysis and even more general attacks. We propose practical construction schemes.
Strong Topological Trivialization of Multi-Species Spherical Spin Glasses
We study the landscapes of multi-species spherical spin glasses. Our results
determine the phase boundary for annealed trivialization of the number of
critical points, and establish its equivalence with a quenched "strong topological trivialization" property. Namely, in the "trivial" regime, the
number of critical points is constant, all are well-conditioned, and all
approximate critical points are close to a true critical point. As a
consequence, we deduce that Langevin dynamics at sufficiently low temperature
has logarithmic mixing time.
Our approach begins with the Kac–Rice formula. We derive closed-form
expressions for some asymptotic determinants studied in (Ben
Arous-Bourgade-McKenna 2023, McKenna 2021), and characterize the annealed
trivialization phase by explicitly solving a suitable multi-dimensional
variational problem. To obtain more precise quenched results, we develop
general purpose techniques to avoid sub-exponential correction factors and show
non-existence of approximate critical points. Many of the results are new even in the single-species case.
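For context, the Kac–Rice identity in its generic form (stated here for a smooth random field H on a manifold M under the usual regularity conditions; this is standard background, not the paper's specific computation):

    \mathbb{E}\big[\#\{x \in M : \nabla H(x) = 0\}\big]
      = \int_{M} \mathbb{E}\big[\, |\det \nabla^2 H(x)| \;\big|\; \nabla H(x) = 0 \big]\, \varphi_{\nabla H(x)}(0)\, dx,

where \varphi_{\nabla H(x)} denotes the density of the gradient at x; the paper evaluates the resulting asymptotic determinants and solves the associated variational problem.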
A Methodology for Extracting Human Bodies from Still Images
Monitoring and surveillance of humans is one of the most prominent applications today, and it is expected to be part of many aspects of our future lives, for purposes such as safety and assisted living. Many efforts have been made towards automatic and robust solutions, but the general problem is very challenging and remains open. In this PhD dissertation we examine the problem from many perspectives. First, we study the performance of a hardware architecture designed for large-scale surveillance systems. Then, we focus on the general problem of human activity recognition, present an extensive survey of methodologies that deal with this subject, and propose a maturity metric to evaluate them.
Image segmentation is one of the most popular image-processing algorithms in the field, and we propose a blind metric to evaluate segmentation results with respect to the activity at local regions. Finally, we propose a fully automatic system for segmenting and extracting human bodies from challenging single images, which is the main contribution of the dissertation. Our methodology is a novel bottom-up approach relying mostly on anthropometric constraints and is facilitated by our research in the fields of face, skin, and hand detection. Experimental results and comparison with state-of-the-art methodologies demonstrate the success of our approach.