Coherent, super resolved radar beamforming using self-supervised learning
High resolution automotive radar sensors are required to meet the demanding
performance and regulatory requirements of autonomous vehicles. However, current radar
systems are limited in their angular resolution, causing a technological gap. The
industry and academic trend of improving angular resolution by increasing the
number of physical channels also increases system complexity, requires
sensitive calibration processes, lowers robustness to hardware malfunctions, and
drives higher costs. We offer an alternative approach, named Radar signal
Reconstruction using Self Supervision (R2-S2), which significantly improves the
angular resolution of a given radar array without increasing the number of
physical channels. R2-S2 is a family of algorithms that use a Deep Neural
Network (DNN) with complex range-Doppler radar data as input, trained in a
self-supervised manner using a loss function which operates in multiple data
representation spaces. A 4x improvement in angular resolution was demonstrated
using a real-world dataset collected in urban and highway environments during
clear and rainy weather conditions.
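The gap R2-S2 targets can be made concrete with a short numerical sketch (an illustration of the underlying physics, not the paper's method): for a uniform linear array, angular resolution is set by the number of channels, so quadrupling the channels narrows the half-power beamwidth roughly fourfold — the improvement R2-S2 obtains without adding channels. The array model and parameters below are illustrative assumptions.

```python
import numpy as np

def beam_pattern(num_channels, angles, spacing=0.5):
    """Normalized broadside response of a uniform linear array
    (element spacing in wavelengths; half-wavelength assumed)."""
    n = np.arange(num_channels)[:, None]
    steering = np.exp(1j * 2 * np.pi * spacing * n * np.sin(angles)[None, :])
    return np.abs(steering.sum(axis=0)) / num_channels

def half_power_beamwidth(num_channels, angles):
    p = beam_pattern(num_channels, angles)
    main_lobe = angles[p >= 1 / np.sqrt(2)]   # -3 dB region around broadside
    return main_lobe.max() - main_lobe.min()

angles = np.linspace(-np.pi / 2, np.pi / 2, 20001)
w8 = half_power_beamwidth(8, angles)
w32 = half_power_beamwidth(32, angles)
print(round(w8 / w32, 1))  # ~4.0: 4x the channels -> ~4x finer angular resolution
```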
Efficient Dissection of Bicomposite Problems with Cryptanalytic Applications
In this paper we show that a large class of diverse problems have a
bicomposite structure which makes it possible to solve them with a new
type of algorithm called {\it dissection}, which has much better
time/memory tradeoffs than previously known algorithms. A typical example is the
problem of finding the key of multiple encryption schemes with r independent
n-bit keys. All the previous error-free attacks required time T and
memory M satisfying T*M = 2^rn, and even if ``false negatives'' are allowed,
no attack could achieve T*M < 2^0.75rn. Our new technique yields the first
algorithm which never errs and finds all the possible keys with a smaller
product of T*M, such as T = 2^4n time and M = 2^n memory for breaking
the sequential execution of r=7 block ciphers. The improvement ratio we obtain
increases in an unbounded way as r increases, and if we allow algorithms
which can sometimes miss solutions, we can get even better tradeoffs by
combining our dissection technique with parallel collision search.
To demonstrate the generality of the new dissection technique, we show how
to use it in a generic way to improve rebound attacks on hash
functions and to solve hard combinatorial search problems, such as the
well-known knapsack problem, with better time complexities (for small memory complexities).
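As a concrete point of reference (a toy illustration, not the dissection algorithm itself), the sketch below runs the classical meet-in-the-middle attack on double encryption, which achieves the T*M = 2^rn tradeoff curve that dissection improves on. The 8-bit "cipher" is an assumed stand-in, not a real scheme.

```python
def enc(k, x):
    # Toy 8-bit cipher (assumed for illustration): xor the key, then rotate.
    r = k % 8
    y = (x ^ k) & 0xFF
    return ((y << r) | (y >> (8 - r))) & 0xFF if r else y

def dec(k, y):
    r = k % 8
    x = ((y >> r) | (y << (8 - r))) & 0xFF if r else y
    return x ^ k

def mitm_double(pairs):
    """Recover all (k1, k2) with enc(k2, enc(k1, p)) == c for every pair,
    using 2^n memory and ~2*2^n cipher evaluations instead of 2^(2n) time."""
    p0, c0 = pairs[0]
    table = {}
    for k1 in range(256):                  # forward half: store middle values
        table.setdefault(enc(k1, p0), []).append(k1)
    found = []
    for k2 in range(256):                  # backward half: meet in the middle
        for k1 in table.get(dec(k2, c0), []):
            if all(enc(k2, enc(k1, p)) == c for p, c in pairs[1:]):
                found.append((k1, k2))
    return found

k1, k2 = 0x3A, 0xC5
pairs = [(p, enc(k2, enc(k1, p))) for p in (0x00, 0x55, 0xAA)]
print((k1, k2) in mitm_double(pairs))  # True
```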
Decomposing the ASASA Block Cipher Construction
We consider the problem of recovering the internal specification of a general SP-network consisting of three linear layers (A) interleaved with two Sbox layers (S) (denoted by ASASA for short), given only black-box access to the scheme. The decomposition of such general ASASA schemes was first considered at ASIACRYPT 2014 by Biryukov et al., who used the alleged difficulty of this problem to propose several concrete block cipher designs as candidates for white-box cryptography.
In this paper, we present several attacks on general ASASA schemes that significantly outperform the analysis of Biryukov et al. As a result, we are able to break all the proposed concrete ASASA constructions with practical complexity, decomposing them at complexities far below their claimed security levels. Whenever possible, our findings are backed up with experimental verification.
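To make the object of study concrete, the sketch below builds a toy ASASA permutation on 8-bit values and exposes it only as a black box, the setting of the decomposition problem. It is a hedged illustration: bit permutations plus constants stand in for general invertible affine layers, and the PRESENT 4-bit S-box is an arbitrary choice.

```python
import random
random.seed(1)

SBOX4 = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
         0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # the PRESENT 4-bit S-box

def affine_layer():
    """Random invertible affine layer over 8 bits: a bit permutation plus a
    constant stands in for a general invertible linear map (an assumption)."""
    perm = random.sample(range(8), 8)
    const = random.randrange(256)
    def A(x):
        y = 0
        for src, dst in enumerate(perm):
            y |= ((x >> src) & 1) << dst
        return y ^ const
    return A

def sbox_layer(x):
    return SBOX4[x & 0xF] | (SBOX4[x >> 4] << 4)   # two parallel 4-bit S-boxes

A1, A2, A3 = affine_layer(), affine_layer(), affine_layer()

def asasa(x):
    """The black box an attacker may query: A-S-A-S-A."""
    return A3(sbox_layer(A2(sbox_layer(A1(x)))))

print(len({asasa(x) for x in range(256)}) == 256)  # True: it is a permutation
```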
New Attacks on Feistel Structures with Improved Memory Complexities
Feistel structures are an extremely important and extensively researched type of cryptographic schemes. In this paper we describe improved attacks on Feistel structures with more than 4 rounds.
We achieve this by a new attack that combines the main benefits of
meet-in-the-middle attacks (which can reduce the time complexity by
comparing only half blocks in the middle) and dissection attacks (which can reduce the memory complexity but have to guess full blocks in the middle in order to perform independent attacks above and below them). For example, for a 7-round Feistel structure on n-bit inputs with seven independent round keys of n/2 bits each, a MITM attack can use (2^1.5n, 2^1.5n) time and memory, while dissection requires (2^2n, 2^n) time and memory. Our new attack requires only (2^1.5n, 2^n) time and memory, using a few known plaintext/ciphertext pairs. When we are allowed to use more known plaintexts, we develop new techniques which rely on the existence of multicollisions and differential properties deep in the structure in order to further reduce the memory complexity.
Our new attacks are not just theoretical generic constructions - in fact, we can use them to improve the best known attacks on several concrete cryptosystems such as CAST-128 (where we reduce the memory complexity from 2^111 to 2^64) and DEAL-256 (where we reduce the memory complexity from 2^200 to 2^144), without affecting their time and data complexities. An extension of our techniques applies even to some non-Feistel structures - for example, in the case of FOX, we reduce the memory complexity of all the best known attacks by a factor of 2^16.
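For readers unfamiliar with the structure being attacked, the sketch below is a minimal Feistel network with independent round keys; the round function is an arbitrary assumption, since a Feistel construction is invertible regardless of whether its round function is.

```python
MASK = 0xFFFF  # 32-bit blocks split into two 16-bit halves

def f(x, k):
    # Illustrative keyed round function (an assumption); it need not be invertible.
    return ((x + k) * 2654435761 >> 7) & MASK

def feistel_encrypt(block, round_keys):
    l, r = block >> 16, block & MASK
    for k in round_keys:            # each round: swap halves, mix one half
        l, r = r, l ^ f(r, k)
    return (l << 16) | r

def feistel_decrypt(block, round_keys):
    l, r = block >> 16, block & MASK
    for k in reversed(round_keys):  # invertible even though f itself is not
        l, r = r ^ f(l, k), l
    return (l << 16) | r

keys = [7, 13, 42, 99, 123, 201, 255]   # seven independent round keys
ct = feistel_encrypt(0xDEADBEEF, keys)
print(hex(feistel_decrypt(ct, keys)))  # 0xdeadbeef
```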
Memory-Efficient Algorithms for Finding Needles in Haystacks
One of the most common tasks in cryptography and cryptanalysis is to find
some interesting event (a needle) in an exponentially large collection (haystack) of
possible events, or to demonstrate that no such event is likely to
exist. In particular, we are interested in finding needles which are defined as events that
happen with an unusually high probability of p >> 2^-n in a haystack which is an almost uniform
distribution on 2^n possible events. When the search algorithm can
only sample values from this distribution, the best known time/memory
tradeoff for finding such an event requires O(1/(M*p^2)) time given O(M)
memory.
In this paper we develop much faster needle searching algorithms in the common
cryptographic setting in which the distribution is defined
by applying some deterministic function f to random inputs.
Such a distribution can be modelled by a random directed graph with 2^n vertices in
which almost all the vertices have O(1) predecessors while
the vertex we are looking for has an unusually large number of about p*2^n predecessors.
When we are given only a constant amount of memory, we propose a new search methodology which we call
\textbf{NestedRho}. As p increases, such random graphs undergo several subtle phase transitions,
and thus the log-log dependence of the time complexity on p
becomes a piecewise linear curve which bends four times. Our new algorithm is faster than the
O(1/p^2) time complexity of the best previous algorithm in the full range of p, and in particular
it improves the previous time complexity by a significant factor of 2^{n/3} for any p in the range 2^{-n} <= p <= 2^{-2n/3}. When we are given more memory, we show how to combine the \textbf{NestedRho} technique with the parallel collision
search technique in order to further reduce its time complexity. Finally, we show how to apply our new search
technique to more complicated distributions with multiple peaks when we want to find all the peaks whose
probabilities are higher than a given threshold.
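The constant-memory idea that Rho-type searches build on can be sketched with classical Floyd cycle finding on the iterates of a function (a minimal illustration of the building block, not NestedRho itself; the iteration map below is an arbitrary assumption).

```python
def rho_cycle(f, x0):
    """Floyd's two-pointer cycle finding on the iterates of f from x0:
    O(1) memory. Returns (mu, lam): tail length and cycle length."""
    slow, fast = f(x0), f(f(x0))
    while slow != fast:                 # advance at speeds 1 and 2 until they meet
        slow, fast = f(slow), f(f(fast))
    mu, slow = 0, x0                    # locate the entry point of the cycle
    while slow != fast:
        slow, fast = f(slow), f(fast)
        mu += 1
    lam, fast = 1, f(slow)              # measure the cycle length
    while slow != fast:
        fast = f(fast)
        lam += 1
    return mu, lam

f = lambda x: (x * x + 1) % 1021       # assumed pseudo-random iteration map
mu, lam = rho_cycle(f, 2)
x = 2
for _ in range(mu):
    x = f(x)
y = x
for _ in range(lam):
    y = f(y)
print(x == y)  # True: f^mu(x0) lies on a cycle of length lam
```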
Improved Top-Down Techniques in Differential Cryptanalysis
The fundamental problem of differential cryptanalysis is to find the highest entries in the Difference Distribution Table (DDT) of a given mapping F over n-bit values, and in particular to find the highest diagonal entries which correspond to the best iterative characteristics of F. The standard bottom-up approach to this problem is to consider all the internal components of the mapping along some differential characteristic, and to multiply their transition probabilities. However, this can provide seriously distorted estimates since the various events can be dependent, and there can be a huge number of low probability characteristics contributing to the same high probability entry.
In this paper we use a top-down approach which considers the given mapping as a black box, and uses only its input/output relations in order to obtain direct experimental estimates for its DDT entries which are likely to be much more accurate. In particular, we describe three new techniques which reduce the time complexity of three crucial aspects of this problem: finding the exact values of all the diagonal entries in the DDT for small values of n, approximating all the diagonal entries which correspond to low Hamming weight differences for large values of n, and finding an accurate approximation for any entry whose large value is obtained from many small contributions. To demonstrate the potential contribution of our new techniques, we apply them to the SIMON family of block ciphers, show experimentally that most of the previously published bottom-up estimates of the probabilities of various differentials are off by a significant factor, and describe new differential properties which can cover more rounds with roughly the same probability for several of its members. In addition, we show how to use our new techniques to attack a 1-key version of the iterated Even-Mansour scheme in the related-key setting, obtaining the first generic attack on 4 rounds of this well-studied construction.
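The object under study here, the DDT, is easy to compute exactly when n is small; the sketch below does so by brute force (the naive baseline, not the paper's faster techniques), using the PRESENT 4-bit S-box as an example input.

```python
def ddt(sbox):
    """Exact Difference Distribution Table of an S-box on n-bit values:
    DDT[dx][dy] = #{x : S(x) ^ S(x ^ dx) = dy}."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
t = ddt(PRESENT_SBOX)
print(max(max(row) for row in t[1:]))  # 4: its best differential holds for 4/16 inputs
```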
Tight Bounds on Online Checkpointing Algorithms
The problem of online checkpointing is a classical problem with numerous applications which has been studied in various forms for almost 50 years. In the simplest version of this problem, a user has to maintain k memorized checkpoints during a long computation, where the only allowed operation is to move one of the checkpoints from its old time to the current time, and the goal is to keep the checkpoints as evenly spread out as possible at all times.
At ICALP'13 Bringmann et al. studied this problem as a special case of an online/offline optimization problem in which the deviation from uniformity is measured by the natural discrepancy metric of the worst-case ratio between real and ideal segment lengths. They showed that this discrepancy is smaller than 1.59-o(1) for all k, and smaller than ln4-o(1) ~ 1.39 for the sparse subset of k's which are powers of 2. In addition, they obtained upper bounds on the achievable discrepancy for some small values of k.
In this paper we solve the main problems left open in the ICALP'13 paper by proving that ln4 is a tight upper and lower bound on the asymptotic discrepancy for all large k, and by providing tight upper and lower bounds (in the form of provably optimal checkpointing algorithms, some of which are in fact better than those of Bringmann et al.) for all the small values of k <= 10.
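A short sketch of one plausible formalization of the discrepancy metric (an assumption for illustration, not necessarily the paper's exact definition): the k checkpoints split [0, t] into k+1 segments, and the discrepancy is the worst-case ratio of a real segment length to the ideal uniform length t/(k+1).

```python
def discrepancy(checkpoints, t):
    """Assumed formalization: worst-case ratio between a real segment length
    and the ideal length t/(k+1) for k checkpoints inside [0, t]."""
    pts = sorted(checkpoints)
    segments = [b - a for a, b in zip([0.0] + pts, pts + [t])]
    ideal = t / len(segments)
    return max(segments) / ideal

print(discrepancy([2.5, 5.0, 7.5], 10.0))  # 1.0 -- perfectly uniform spread
print(discrepancy([1.0, 2.0, 3.0], 10.0))  # ~2.8 -- the last segment is far too long
```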
Efficient Detection of High Probability Statistical Properties of Cryptosystems via Surrogate Differentiation
A central problem in cryptanalysis is to find all the significant deviations from randomness in a given n-bit cryptographic primitive. When n is small (e.g., for an S-box), this is easy to do, but for large n, the only practical way to find such statistical properties was to exploit the internal structure of the primitive and to speed up the search with a variety of heuristic rules of thumb. However, such bottom-up techniques can miss many properties, especially in cryptosystems which are designed to have hidden trapdoors.
In this paper we consider the top-down version of the problem in which the cryptographic primitive is given as a structureless black box, and reduce the complexity of the best known techniques for finding all its significant differential and linear properties by a large factor. Our main new tool is the idea of using {\it surrogate differentiation}. In the context of finding differential properties, it enables us to simultaneously find information about all the differentials of the form F(x) ^ F(x ^ D) in all possible directions D by differentiating F in a single arbitrarily chosen direction (which is unrelated to the D's). In the context of finding linear properties, surrogate differentiation can be combined in a highly effective way with the Fast Fourier Transform. For n-bit cryptographic primitives, this technique makes it possible to automatically find in about 2^n time all their high-probability differentials and large-bias linear approximations; previous algorithms for these problems required at least 2^1.5n time. Similar techniques can be used to significantly improve the best known time complexities of finding related-key differentials, second-order differentials, and boomerangs. In addition, we show how to run variants of these algorithms which require no memory, and how to detect such statistical properties even in trapdoored cryptosystems whose designers specifically try to evade our techniques.
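The linear-property side of this problem rests on a Fourier transform over (Z/2)^n; the sketch below computes exact linear correlations of a black-box function via the fast Walsh-Hadamard transform (the exhaustive baseline, not the surrogate-differentiation speedup; function and masks are illustrative).

```python
def walsh_hadamard(v):
    """In-place fast Walsh-Hadamard transform, the FFT over (Z/2)^n:
    W[a] = sum_x (-1)^(a.x) v[x], computed in O(n * 2^n)."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def linear_correlations(f, n_bits, out_mask):
    """Correlation of every input mask a against one output bit pattern:
    corr(a) = Pr[a.x == out_mask.f(x)] - Pr[a.x != out_mask.f(x)]."""
    size = 1 << n_bits
    signs = [(-1) ** bin(out_mask & f(x)).count("1") for x in range(size)]
    return [w / size for w in walsh_hadamard(signs)]

# Sanity check on a linear map f(x) = x: only mask a = out_mask correlates.
corr = linear_correlations(lambda x: x, 4, 0b0001)
print(corr[0b0001])  # 1.0
```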
Pairing of Competitive and Topologically Distinct Regulatory Modules Enhances Patterned Gene Expression
Biological networks are inherently modular, yet little is known about how modules are assembled to enable coordinated and complex functions. We used RNAi and time series, whole-genome microarray analyses to systematically perturb and characterize components of a Caenorhabditis elegans lineage-specific transcriptional regulatory network. These data are supported by selected reporter gene analyses and comprehensive yeast one-hybrid and promoter sequence analyses. Based on these results, we define and characterize two modules composed of muscle- and epidermal-specifying transcription factors that function together within a single cell lineage to robustly specify multiple cell types. The expression of these two modules, although positively regulated by a common factor, is reliably segregated among daughter cells. Our analyses indicate that these modules repress each other, and we propose that this cross-inhibition, coupled with their relative time of induction, functions to enhance the initial asymmetry in their expression patterns, thus leading to the observed invariant gene expression patterns and cell lineage. The coupling of asynchronous and topologically distinct modules may be a general principle of module assembly that functions to potentiate genetic switches. (Molecular and Cellular Biology)