
    Coherent, super resolved radar beamforming using self-supervised learning

    High-resolution automotive radar sensors are required in order to meet the demanding needs and regulations of autonomous vehicles. However, current radar systems are limited in their angular resolution, creating a technological gap. The industry and academic trend of improving angular resolution by increasing the number of physical channels also increases system complexity, requires sensitive calibration processes, lowers robustness to hardware malfunctions, and drives higher costs. We offer an alternative approach, named Radar signal Reconstruction using Self Supervision (R2-S2), which significantly improves the angular resolution of a given radar array without increasing the number of physical channels. R2-S2 is a family of algorithms which use a Deep Neural Network (DNN) that takes complex range-Doppler radar data as input and is trained in a self-supervised manner, using a loss function which operates in multiple data representation spaces. A 4x improvement in angular resolution was demonstrated using a real-world dataset collected in urban and highway environments during clear and rainy weather conditions. (28 pages, 10 figures)
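
    The abstract leaves the network architecture and the exact loss terms unspecified. Purely as a sketch of what a loss that "operates in multiple data representation spaces" can look like, the following PyTorch fragment trains a toy network to reconstruct full-array range-Doppler data from a decimated subarray; all names, shapes, and the FFT-across-channels stand-in for beamforming are assumptions, not the authors' R2-S2 implementation.

```python
# Hypothetical sketch of a self-supervised, multi-space reconstruction loss;
# the architecture, shapes, and loss weighting are assumptions.
import torch
import torch.nn as nn

N_FULL, N_SUB = 16, 4    # full-array channels vs. decimated input channels
RD_BINS = 64             # range-Doppler cells per channel (flattened)

class ArrayUpsampler(nn.Module):
    """Maps decimated-array range-Doppler data to a full-array estimate.
    Complex values are handled as stacked real/imaginary parts."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_SUB * RD_BINS, 512), nn.ReLU(),
            nn.Linear(512, 2 * N_FULL * RD_BINS),
        )

    def forward(self, x):                  # x: (batch, N_SUB, RD_BINS), complex
        flat = torch.view_as_real(x).flatten(1)
        out = self.net(flat).view(-1, N_FULL, RD_BINS, 2)
        return torch.view_as_complex(out.contiguous())

def multi_space_loss(pred, target):
    # Term 1: error in the element (channel) space.
    elem = (pred - target).abs().pow(2).mean()
    # Term 2: error in the angular space; an FFT across the channel axis is
    # used here as a crude stand-in for digital beamforming.
    ang = (torch.fft.fft(pred, dim=1) -
           torch.fft.fft(target, dim=1)).abs().pow(2).mean()
    return elem + ang

model = ArrayUpsampler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
full = torch.randn(8, N_FULL, RD_BINS, dtype=torch.complex64)  # stand-in data
sparse = full[:, ::N_FULL // N_SUB, :]   # self-supervision: decimate channels
opt.zero_grad()
loss = multi_space_loss(model(sparse), full)
loss.backward()
opt.step()
```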

    Efficient Dissection of Bicomposite Problems with Cryptanalytic Applications

    In this paper we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called {\it dissection}, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with $r$ independent $n$-bit keys. All the previous error-free attacks required time $T$ and memory $M$ satisfying $TM = 2^{rn}$, and even if "false negatives" are allowed, no attack could achieve $TM < 2^{3rn/4}$. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product of $TM$, such as $T = 2^{4n}$ time and $M = 2^n$ memory for breaking the sequential execution of $r = 7$ block ciphers. The improvement ratio we obtain increases in an unbounded way as $r$ increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way in order to improve rebound attacks on hash functions and to solve, with better time complexities (for small memory complexities), hard combinatorial search problems such as the well-known knapsack problem.
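
    As a toy illustration of the dissection idea (not the paper's general algorithm), the sketch below attacks 4-fold sequential encryption built from a made-up 8-bit cipher. Guessing the state after the first two encryptions splits the problem into two independent meet-in-the-middle halves, so the tables hold $2^n$ entries instead of the $2^{2n}$ a straight MITM on the full composition would need, at a cost of $2^{2n}$ time.

```python
# Toy dissection of 4-fold sequential encryption with four independent
# 8-bit keys.  E is a placeholder permutation, not from the paper; the
# control structure is the generic dissection: guess the state after two
# encryptions, then run an independent MITM on each half.
N = 256                                    # 2^n for n = 8
INV = pow(167, -1, N)                      # 167 is odd, hence invertible mod 256

def E(k, x): return (((x ^ k) * 167) + 13) % N    # toy keyed permutation
def D(k, y): return ((((y - 13) * INV) % N) ^ k)  # its inverse

def encrypt4(keys, p):
    for k in keys:
        p = E(k, p)
    return p

def dissect4(pairs):
    """All 4-key tuples consistent with the given plaintext/ciphertext pairs."""
    (p1, c1), (p2, c2) = pairs[0], pairs[1]
    found = []
    for m in range(N):                     # guess the middle value E_k2(E_k1(p1))
        # Upper MITM: every (k1, k2) sending p1 -> m, indexed by the middle
        # value it assigns to the second plaintext.
        up = {E(k1, p1): k1 for k1 in range(N)}
        table = {}
        for k2 in range(N):
            k1 = up.get(D(k2, m))
            if k1 is not None:
                table.setdefault(E(k2, E(k1, p2)), []).append((k1, k2))
        # Lower MITM: every (k3, k4) sending m -> c1, matched on pair 2.
        low = {E(k3, m): k3 for k3 in range(N)}
        for k4 in range(N):
            k3 = low.get(D(k4, c1))
            if k3 is None:
                continue
            for k1, k2 in table.get(D(k3, D(k4, c2)), []):
                keys = (k1, k2, k3, k4)
                if all(encrypt4(keys, p) == c for p, c in pairs[2:]):
                    found.append(keys)
    return found

secret = (17, 99, 201, 7)
pairs = [(p, encrypt4(secret, p)) for p in (1, 2, 3, 4)]
assert secret in dissect4(pairs)
```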

    Decomposing the ASASA Block Cipher Construction

    We consider the problem of recovering the internal specification of a general SP-network consisting of three linear layers (A) interleaved with two S-box layers (S) (denoted by ASASA for short), given only black-box access to the scheme. The decomposition of such general ASASA schemes was first considered at ASIACRYPT 2014 by Biryukov et al., who used the alleged difficulty of this problem to propose several concrete block cipher designs as candidates for white-box cryptography. In this paper, we present several attacks on general ASASA schemes that significantly outperform the analysis of Biryukov et al. As a result, we are able to break all the proposed concrete ASASA constructions with practical complexity. For example, we can decompose an ASASA structure that was supposed to provide 64-bit security in roughly $2^{28}$ steps, and break the scheme that supposedly provides 128-bit security in about $2^{41}$ time. Whenever possible, our findings are backed up with experimental verifications.
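
    The attacks themselves are intricate, so the following sketch only makes the attacked object concrete: a small random ASASA black box over GF(2), with toy block and S-box widths chosen for illustration. In the paper's model an attacker may query asasa() but never sees A1, S1, A2, S2, or A3.

```python
# A small random ASASA instance, to make the object of the decomposition
# problem concrete.  Block size and S-box width are toy assumptions; this
# builds the black box only, not the attack.
import random

BLOCK = 16        # block size in bits (toy choice)
SBOX_W = 4        # width of each S-box; BLOCK // SBOX_W parallel S-boxes

def random_invertible_matrix(n):
    """Random invertible n x n matrix over GF(2), rows stored as bitmasks."""
    while True:
        rows = [random.getrandbits(n) for _ in range(n)]
        m, rank = rows[:], 0               # Gaussian elimination for the rank
        for bit in range(n):
            piv = next((i for i in range(rank, n) if (m[i] >> bit) & 1), None)
            if piv is None:
                continue
            m[rank], m[piv] = m[piv], m[rank]
            for i in range(n):
                if i != rank and (m[i] >> bit) & 1:
                    m[i] ^= m[rank]
            rank += 1
        if rank == n:
            return rows

def apply_linear(rows, x):
    # y_i = <row_i, x> over GF(2)
    return sum((bin(r & x).count("1") & 1) << i for i, r in enumerate(rows))

def random_sbox_layer():
    return [random.sample(range(1 << SBOX_W), 1 << SBOX_W)
            for _ in range(BLOCK // SBOX_W)]

def apply_sboxes(layer, x):
    y, mask = 0, (1 << SBOX_W) - 1
    for j, s in enumerate(layer):
        y |= s[(x >> (j * SBOX_W)) & mask] << (j * SBOX_W)
    return y

A1, A2, A3 = (random_invertible_matrix(BLOCK) for _ in range(3))
S1, S2 = random_sbox_layer(), random_sbox_layer()

def asasa(x):     # the black box an attacker is allowed to query
    x = apply_linear(A1, x)
    x = apply_sboxes(S1, x)
    x = apply_linear(A2, x)
    x = apply_sboxes(S2, x)
    return apply_linear(A3, x)

print(hex(asasa(0x1234)))
```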

    New Attacks on Feistel Structures with Improved Memory Complexities

    Feistel structures are an extremely important and extensively researched type of cryptographic scheme. In this paper we describe improved attacks on Feistel structures with more than 4 rounds. We achieve this by a new attack that combines the main benefits of meet-in-the-middle attacks (which can reduce the time complexity by comparing only half blocks in the middle) and dissection attacks (which can reduce the memory complexity, but have to guess full blocks in the middle in order to perform independent attacks above and below it). For example, for a 7-round Feistel structure on $n$-bit inputs (with seven independent round keys of $n/2$ bits each), a MITM attack can use $(2^{1.5n}, 2^{1.5n})$ time and memory, while dissection requires $(2^{2n}, 2^n)$ time and memory. Our new attack requires only $(2^{1.5n}, 2^n)$ time and memory, using a few known plaintext/ciphertext pairs. When we are allowed to use more known plaintexts, we develop new techniques which rely on the existence of multicollisions and differential properties deep in the structure in order to further reduce the memory complexity. Our new attacks are not just theoretical generic constructions; in fact, we can use them to improve the best known attacks on several concrete cryptosystems such as CAST-128 (where we reduce the memory complexity from $2^{111}$ to $2^{64}$) and DEAL-256 (where we reduce the memory complexity from $2^{200}$ to $2^{144}$), without affecting their time and data complexities. An extension of our techniques applies even to some non-Feistel structures; for example, in the case of FOX, we reduce the memory complexity of all the best known attacks by a factor of $2^{16}$.
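
    The half-block matching that gives MITM its advantage can be seen in a few lines. In this toy sketch (the round function and sizes are illustrative assumptions), encrypting rounds 1-3 from the plaintext and decrypting rounds 5-7 from the ciphertext makes the two directions comparable on a half block, via the Feistel relation L4 = R3, without ever guessing the round-4 key.

```python
# Toy demonstration of half-block matching in a Feistel MITM.  The round
# function F and all sizes are illustrative assumptions.  With rounds 1-3
# computed from the plaintext and rounds 5-7 undone from the ciphertext,
# the Feistel equation L4 = R3 is checkable without the round-4 key.
HALF = 8                                  # half-block size in bits (toy)
MASK = (1 << HALF) - 1

def F(k, x):                              # toy round function
    return ((x ^ k) * 29 + 5) & MASK

def enc_rounds(keys, L, R):               # forward Feistel rounds
    for k in keys:
        L, R = R, L ^ F(k, R)
    return L, R

def dec_rounds(keys, L, R):               # inverse Feistel rounds
    for k in reversed(keys):
        L, R = R ^ F(k, L), L
    return L, R

keys = [3, 141, 59, 26, 53, 58, 97]       # seven independent round keys
pt = (0x12, 0x34)
ct = enc_rounds(keys, *pt)

L3, R3 = enc_rounds(keys[:3], *pt)        # top direction: guessed keys 1-3
L4, R4 = dec_rounds(keys[4:], *ct)        # bottom direction: guessed keys 5-7

# Half-block match across the unguessed round 4: L4 must equal R3 ...
assert L4 == R3
# ... while the other halves are linked only through F(k4, .):
assert R4 == L3 ^ F(keys[3], R3)
```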

    Memory-Efficient Algorithms for Finding Needles in Haystacks

    One of the most common tasks in cryptography and cryptanalysis is to find some interesting event (a needle) in an exponentially large collection (haystack) of $N = 2^n$ possible events, or to demonstrate that no such event is likely to exist. In particular, we are interested in finding needles which are defined as events that happen with an unusually high probability of $p \gg 1/N$ in a haystack which is an almost uniform distribution on $N$ possible events. When the search algorithm can only sample values from this distribution, the best known time/memory tradeoff for finding such an event requires $O(1/Mp^2)$ time given $O(M)$ memory. In this paper we develop much faster needle searching algorithms in the common cryptographic setting in which the distribution is defined by applying some deterministic function $f$ to random inputs. Such a distribution can be modelled by a random directed graph with $N$ vertices in which almost all the vertices have $O(1)$ predecessors, while the vertex we are looking for has an unusually large number of $O(pN)$ predecessors. When we are given only a constant amount of memory, we propose a new search methodology which we call NestedRho. As $p$ increases, such random graphs undergo several subtle phase transitions, and thus the log-log dependence of the time complexity $T$ on $p$ becomes a piecewise linear curve which bends four times. Our new algorithm is faster than the $O(1/p^2)$ time complexity of the best previous algorithm in the full range of $1/N < p < 1$, and in particular it improves the previous time complexity by a significant factor of $\sqrt{N}$ for any $p$ in the range $N^{-0.75} < p < N^{-0.5}$. When we are given more memory, we show how to combine the NestedRho technique with the parallel collision search technique in order to further reduce its time complexity. Finally, we show how to apply our new search technique to more complicated distributions with multiple peaks, when we want to find all the peaks whose probabilities are higher than $p$.
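
    A rough, small-scale reading of the collision-search baseline: iterating $f$ from a random start and extracting a collision with Floyd's cycle finding uses constant memory, and the collisions land disproportionately on a vertex with many predecessors. The sketch below plants one heavy output in a toy $f$ (the mixing function and all parameters are assumptions, and this is the baseline idea, not the NestedRho algorithm itself) and tallies collision values over repeated runs.

```python
# Memoryless collision search as a needle detector: Floyd's cycle finding
# on a toy f with one planted heavy output.  The mixing function, the
# planted probability and the run count are all assumptions; this is the
# rho-based baseline, not the paper's NestedRho algorithm.
import random

n = 16
N = 1 << n
NEEDLE = 1234                       # the heavy output we plant

def f(x):
    # Roughly a 1/64 fraction of all inputs is redirected to NEEDLE; the
    # rest is spread by a Pollard-style quadratic mixing step.
    y = (x * x + 1) % N
    return NEEDLE if y < N // 64 else y

def rho_collision(seed):
    """Return (u, v) with u != v and f(u) == f(v) using O(1) memory,
    or None when the start already lies on the cycle."""
    tort, hare = f(seed), f(f(seed))
    while tort != hare:             # Floyd's tortoise and hare
        tort, hare = f(tort), f(f(hare))
    u, v = seed, tort               # walk to the cycle entry point: the two
    while f(u) != f(v):             # walks step into it from distinct
        u, v = f(u), f(v)           # predecessors u and v
    return None if u == v else (u, v)

counts = {}
rng = random.Random(1)
for _ in range(200):                # repeat from random starts and tally
    pair = rho_collision(rng.randrange(N))
    if pair:
        val = f(pair[0])
        counts[val] = counts.get(val, 0) + 1
print(max(counts, key=counts.get))  # the planted needle should dominate
```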

    Improved Top-Down Techniques in Differential Cryptanalysis

    The fundamental problem of differential cryptanalysis is to find the highest entries in the Difference Distribution Table (DDT) of a given mapping $F$ over $n$-bit values, and in particular to find the highest diagonal entries, which correspond to the best iterative characteristics of $F$. The standard bottom-up approach to this problem is to consider all the internal components of the mapping along some differential characteristic, and to multiply their transition probabilities. However, this can provide seriously distorted estimates since the various events can be dependent, and there can be a huge number of low probability characteristics contributing to the same high probability entry. In this paper we use a top-down approach which considers the given mapping as a black box, and uses only its input/output relations in order to obtain direct experimental estimates for its DDT entries which are likely to be much more accurate. In particular, we describe three new techniques which reduce the time complexity of three crucial aspects of this problem: finding the exact values of all the diagonal entries in the DDT for small values of $n$, approximating all the diagonal entries which correspond to low Hamming weight differences for large values of $n$, and finding an accurate approximation for any DDT entry whose large value is obtained from many small contributions. To demonstrate the potential contribution of our new techniques, we apply them to the SIMON family of block ciphers, show experimentally that most of the previously published bottom-up estimates of the probabilities of various differentials are off by a significant factor, and describe new differential properties which can cover more rounds with roughly the same probability for several of its members. In addition, we show how to use our new techniques to attack a 1-key version of the iterated Even-Mansour scheme in the related key setting, obtaining the first generic attack on 4 rounds of this well-studied construction.
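
    For small $n$, both the exact computation and the black-box sampling estimate are a few lines of code. A minimal sketch with a toy 4-bit S-box (the S-box and sample counts are arbitrary choices, not from the paper):

```python
# Exact DDT of a small mapping, plus a black-box sampling estimate of a
# single DDT entry.  The 4-bit S-box is a toy example, not from the paper.
import random

SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
n = 4
N = 1 << n

def exact_ddt(f, N):
    """ddt[a][b] = #{x : f(x) ^ f(x ^ a) == b}."""
    ddt = [[0] * N for _ in range(N)]
    for a in range(N):
        for x in range(N):
            ddt[a][f(x) ^ f(x ^ a)] += 1
    return ddt

ddt = exact_ddt(lambda x: SBOX[x], N)

# Highest diagonal entries ddt[a][a], a != 0: the best iterative characteristics.
diag = sorted(((ddt[a][a], a) for a in range(1, N)), reverse=True)
print("top diagonal entries:", diag[:3])

def sample_ddt_entry(f, n, a, b, samples=1 << 16):
    """Black-box estimate of Pr[f(x) ^ f(x ^ a) == b] over random x."""
    hits = sum(f(x) ^ f(x ^ a) == b
               for x in (random.getrandbits(n) for _ in range(samples)))
    return hits / samples

# The sampled estimate agrees with the exact table up to sampling error.
a, b = 1, 2
print(sample_ddt_entry(lambda x: SBOX[x], n, a, b), ddt[a][b] / N)
```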

    Tight Bounds on Online Checkpointing Algorithms

    The problem of online checkpointing is a classical problem with numerous applications which has been studied in various forms for almost 50 years. In the simplest version of this problem, a user has to maintain $k$ memorized checkpoints during a long computation, where the only allowed operation is to move one of the checkpoints from its old time to the current time, and the goal is to keep the checkpoints as evenly spread out as possible at all times. At ICALP'13, Bringmann et al. studied this problem as a special case of an online/offline optimization problem in which the deviation from uniformity is measured by the natural discrepancy metric of the worst-case ratio between real and ideal segment lengths. They showed that this discrepancy is smaller than $1.59 - o(1)$ for all $k$, and smaller than $\ln 4 - o(1) \approx 1.39$ for the sparse subset of $k$'s which are powers of 2. In addition, they obtained upper bounds on the achievable discrepancy for some small values of $k$. In this paper we solve the main problems left open in the ICALP'13 paper by proving that $\ln 4$ is a tight upper and lower bound on the asymptotic discrepancy for all large $k$, and by providing tight upper and lower bounds (in the form of provably optimal checkpointing algorithms, some of which are in fact better than those of Bringmann et al.) for all the small values of $k \le 10$.
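
    To make the quantity being bounded concrete, here is a small simulator under one natural reading of the discrepancy metric (largest segment length relative to the ideal $t/(k+1)$), driven by a naive greedy strategy. Both the exact metric normalization and the strategy are assumptions for illustration; the paper's provably optimal algorithms are more subtle.

```python
# Simulator for the checkpointing discrepancy under one natural reading:
# largest segment length divided by the ideal length t / (k + 1).  The
# greedy strategy below is a hypothetical baseline, not the paper's
# provably optimal algorithms.
def discrepancy(checkpoints, t):
    pts = [0.0] + sorted(checkpoints) + [float(t)]
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    return max(gaps) * (len(checkpoints) + 1) / t

def greedy_run(k, steps):
    cps = [float(i + 1) for i in range(k)]     # checkpoints taken at times 1..k
    worst = 0.0
    for t in range(k + 1, k + 1 + steps):
        # The only allowed move: relocate one checkpoint to the current time.
        # Greedily pick the relocation that minimizes the discrepancy now.
        best = min(range(k), key=lambda i:
                   discrepancy(cps[:i] + cps[i + 1:] + [float(t)], t))
        cps = cps[:best] + cps[best + 1:] + [float(t)]
        worst = max(worst, discrepancy(cps, t))
    return worst

for k in (2, 4, 8):
    print(k, round(greedy_run(k, 2000), 3))    # worst discrepancy observed
```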

    Efficient Detection of High Probability Statistical Properties of Cryptosystems via Surrogate Differentiation

    A central problem in cryptanalysis is to find all the significant deviations from randomness in a given $n$-bit cryptographic primitive. When $n$ is small (e.g., an 8-bit S-box), this is easy to do, but for large $n$, the only practical way to find such statistical properties was to exploit the internal structure of the primitive and to speed up the search with a variety of heuristic rules of thumb. However, such bottom-up techniques can miss many properties, especially in cryptosystems which are designed to have hidden trapdoors. In this paper we consider the top-down version of the problem, in which the cryptographic primitive is given as a structureless black box, and reduce the complexity of the best known techniques for finding all its significant differential and linear properties by a large factor of $2^{n/2}$. Our main new tool is the idea of using {\it surrogate differentiation}. In the context of finding differential properties, it enables us to simultaneously find information about all the differentials of the form $f(x) \oplus f(x \oplus \alpha)$ in all possible directions $\alpha$ by differentiating $f$ in a single arbitrarily chosen direction $\gamma$ (which is unrelated to the $\alpha$'s). In the context of finding linear properties, surrogate differentiation can be combined in a highly effective way with the Fast Fourier Transform. For 64-bit cryptographic primitives, this technique makes it possible to automatically find in about $2^{64}$ time all their differentials with probability $p \geq 2^{-32}$ and all their linear approximations with bias $|p| \geq 2^{-16}$; previous algorithms for these problems required at least $2^{96}$ time. Similar techniques can be used to significantly improve the best known time complexities of finding related-key differentials, second-order differentials, and boomerangs. In addition, we show how to run variants of these algorithms which require no memory, and how to detect such statistical properties even in trapdoored cryptosystems whose designers specifically try to evade our techniques.
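
    A small-scale reading of the differential case: if $(\alpha, \beta)$ is a strong differential of $f$, then for many $x$ both $f(x) \oplus f(x \oplus \alpha)$ and $f(x \oplus \gamma) \oplus f(x \oplus \alpha \oplus \gamma)$ equal $\beta$, so the pair $(x, x \oplus \alpha)$ collides under the surrogate derivative $g(x) = f(x) \oplus f(x \oplus \gamma)$. Tallying the input differences of colliding pairs of $g$ therefore surfaces heavy $\alpha$'s in every direction at once. The sketch below plants a differential in a toy 16-bit function and recovers it this way; the function and all parameters are assumptions.

```python
# Toy illustration of surrogate differentiation for finding differentials.
# The function f, the planted differential (ALPHA, BETA) and all sizes are
# assumptions made for the demonstration.
import random
from collections import Counter

n = 16
N = 1 << n
ALPHA, BETA, GAMMA = 0x1A2B, 0x0F0F, 0x0001   # plant + arbitrary direction
rng = random.Random(7)

# Build f with f(x ^ ALPHA) = f(x) ^ BETA on roughly 1/4 of the pairs.
f = [0] * N
for x in range(N):
    if x < x ^ ALPHA:
        f[x] = rng.getrandbits(n)
        forced = rng.random() < 0.25
        f[x ^ ALPHA] = (f[x] ^ BETA) if forced else rng.getrandbits(n)

# Surrogate derivative g(x) = f(x) ^ f(x ^ GAMMA) in the single direction GAMMA.
buckets = {}
for x in range(N):
    buckets.setdefault(f[x] ^ f[x ^ GAMMA], []).append(x)

# Colliding pairs of g vote for input differences; heavy differentials show
# up as ALPHA (and as the mirror candidate ALPHA ^ GAMMA, a known artifact
# of the chosen surrogate direction).
votes = Counter()
for xs in buckets.values():
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            votes[xs[i] ^ xs[j]] += 1
print(votes.most_common(3))

# Direct verification separates the true differential from its mirror.
for cand, _ in votes.most_common(2):
    beta, cnt = Counter(f[x] ^ f[x ^ cand] for x in range(N)).most_common(1)[0]
    print(hex(cand), hex(beta), cnt / N)   # the real ALPHA shows BETA at ~0.25
```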