
    Fast Computation of Sliding Discrete Tchebichef Moments and Its Application in Duplicated Regions Detection

    Computational load remains a major concern when processing signals by means of sliding transforms. In this paper, we present an efficient algorithm for the fast computation of one-dimensional and two-dimensional sliding discrete Tchebichef moments. To do so, we first establish the relationships between the Tchebichef moments of two neighboring windows, taking advantage of the properties of Tchebichef polynomials. We then propose an original way to compute the moments of one window quickly by reusing the moment values of its previous window. We further establish the theoretical complexity of our fast algorithm and illustrate its interest in digital forensics, more precisely in the detection of duplicated regions in an audio signal or an image. Our algorithm is used to extract local features of such signal tampering. Experimental results show that its complexity is independent of the window size, validating the theory. They also show that our algorithm is well suited to digital forensics and, more broadly, to any application based on sliding Tchebichef moments.
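
    As a point of reference for the complexity claim, the sketch below computes sliding Tchebichef moments the naive way, recomputing every window from scratch at a cost proportional to the window size; the paper's recurrence between neighboring windows is what removes that dependence. The basis construction by QR orthonormalisation is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def tchebichef_basis(N, order):
    # Discrete orthonormal Tchebichef polynomials on the grid {0, ..., N-1}.
    # Obtained here by QR-orthonormalising the monomial basis: orthonormal
    # polynomials for the uniform discrete weight are unique up to sign,
    # so the columns span the same moment space.
    V = np.vander(np.arange(N, dtype=float), order + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q  # column n holds t_n evaluated on the grid

def sliding_moments(signal, window, order):
    # Naive baseline: recompute the moments of every window from scratch,
    # i.e. one inner product per moment per window.
    T = tchebichef_basis(window, order)
    return np.array([T.T @ signal[k:k + window]
                     for k in range(len(signal) - window + 1)])
```

    The paper's algorithm would replace the per-window inner products with an update from the previous window's moments, making the per-window cost independent of the window size.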

    Source and channel coding using Fountain codes

    The invention of Fountain codes is a major advance in the field of error-correcting codes. The goal of this work is to study and develop algorithms for source and channel coding using a family of Fountain codes known as Raptor codes. From an asymptotic point of view, the best currently known sum-product decoding algorithm for non-binary alphabets has a high complexity that limits its use in practice. For binary channels, sum-product decoding algorithms have been extensively studied and are known to perform well. In the first part of this work, we develop a decoding algorithm for binary codes on non-binary channels based on a combination of sum-product and maximum-likelihood decoding. We apply this algorithm to Raptor codes on both symmetric and non-symmetric channels. On symmetric channels, our algorithm achieves the best performance in terms of complexity and symbol error rate for finite-length blocks. In the second part, we examine the performance of Raptor codes under sum-product decoding when transmission takes place over piecewise stationary memoryless channels and over channels with memory corrupted by noise. We develop algorithms for joint estimation and detection, employing expectation maximization to estimate the noise and the sum-product algorithm to correct errors. We also develop a hard-decision algorithm for Raptor codes on piecewise stationary memoryless channels, and we generalize our joint LT estimation-decoding algorithms to Markov-modulated channels. In the third part of this work, we develop compression algorithms using Raptor codes. More specifically, we introduce a lossless text compression algorithm that obtains competitive results compared to existing classical approaches. Moreover, we propose distributed source coding algorithms based on the paradigm proposed by Slepian and Wolf.
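
    To illustrate the fountain-code setting, here is a toy LT-style encoder and peeling decoder over an erasure channel; the peeling process is the sum-product algorithm specialised to erasures. The degree distribution and packet format are simplifications invented for this sketch — a real LT or Raptor code uses a robust soliton distribution plus a precode.

```python
import random

def lt_encode(blocks, n_packets, seed=0):
    # Toy LT/fountain encoder: each packet is the XOR of a random subset
    # of the k source blocks. The degree distribution below is an ad-hoc
    # stand-in for the robust soliton distribution of a real LT code.
    rng = random.Random(seed)
    k = len(blocks)
    packets = []
    for _ in range(n_packets):
        d = min(rng.choice([1, 2, 2, 3, 3, 4]), k)
        idx = frozenset(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        packets.append((idx, val))
    return packets

def lt_decode(packets, k):
    # Peeling decoder (sum-product specialised to erasures): repeatedly
    # strip known blocks from packets; a packet of residual degree 1
    # reveals one more source block.
    pkts = [[set(idx), val] for idx, val in packets]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for p in pkts:
            idx = p[0]
            for i in list(idx):
                if i in known:       # subtract already-recovered blocks
                    idx.discard(i)
                    p[1] ^= known[i]
            if len(idx) == 1:        # residual degree 1: recover a block
                i = idx.pop()
                if i not in known:
                    known[i] = p[1]
                    progress = True
    return [known.get(i) for i in range(k)]
```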

    The role of Walsh structure and ordinal linkage in the optimisation of pseudo-Boolean functions under monotonicity invariance.

    Optimisation heuristics rely on implicit or explicit assumptions about the structure of the black-box fitness function they optimise. A review of the literature shows that an understanding of structure and linkage is helpful to the design and analysis of heuristics. The aim of this thesis is to investigate the role that problem structure plays in heuristic optimisation. Many heuristics use ordinal operators, that is, operators invariant under monotonic transformations of the fitness function. In this thesis we develop a classification of pseudo-Boolean functions based on rank-invariance. This approach classifies functions which are monotonic transformations of one another as equivalent, and so partitions an infinite set of functions into a finite set of classes. Reasoning about heuristics composed of ordinal operators is, by construction, invariant over these classes. We perform a complete analysis of 2-bit and 3-bit pseudo-Boolean functions. We use Walsh analysis to define concepts of necessary, unnecessary, and conditionally necessary interactions, and of Walsh families. This helps to make precise some existing ideas in the literature, such as benign interactions. Many algorithms are invariant under the classes we define, which allows us to examine the difficulty of pseudo-Boolean functions in terms of function classes. We analyse a range of ordinal selection operators for an EDA. Using a concept of directed ordinal linkage, we define precedence networks and precedence profiles to represent key algorithmic steps and their interdependency in terms of problem structure. The precedence profiles provide a measure of problem difficulty in terms of the algorithmic steps required for optimisation. This work develops insight into the relationship between function structure and problem difficulty, which may be used to direct the development of novel algorithms. Concepts of structure are also used to construct easy and hard problems for a hill-climber.
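
    The Walsh analysis referred to above decomposes a pseudo-Boolean function into interaction coefficients. A minimal sketch of computing this spectrum from a full fitness table, feasible only for small n as in the 2-bit and 3-bit analysis:

```python
import numpy as np

def walsh_coefficients(f_table):
    # Walsh spectrum of a pseudo-Boolean function given as its table of
    # 2^n fitness values (index bit i = variable x_i). The coefficient
    # for mask m measures the interaction among the variables set in m;
    # a zero coefficient for every mask of weight >= 2 means the function
    # is linear (no epistasis).
    w = np.asarray(f_table, dtype=float).copy()
    h = 1
    while h < w.size:                  # in-place fast Walsh-Hadamard transform
        for i in range(0, w.size, 2 * h):
            a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
            w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return w / w.size                  # entry 0 is the mean fitness
```

    For example, onemax on two bits (table [0, 1, 1, 2]) has a zero coefficient for the weight-2 mask, while XOR (table [0, 1, 1, 0]) needs it — the "necessary interaction" distinction the thesis makes precise.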

    Some Words on Cryptanalysis of Stream Ciphers

    In the world of cryptography, stream ciphers are known as primitives used to ensure privacy over a communication channel. One common way to build a stream cipher is to use a keystream generator to produce a pseudo-random sequence of symbols. In such algorithms, the ciphertext is the sum of the keystream and the plaintext, resembling the one-time pad principle. Although the idea behind stream ciphers is simple, serious investigation of these primitives started only in the late 20th century; the cryptanalysis and design of stream ciphers therefore remain important. In recent years, many stream cipher designs have been proposed in an effort to find a proper candidate to be chosen as a world standard for data encryption. Such a candidate should be proven good over time and by the results of cryptanalysis. Different methods of analysis, in fact, explain how a stream cipher should be constructed, so techniques for cryptanalysis are important as well. This thesis starts with an overview of cryptography in general, and introduces the reader to modern cryptography. Later, we focus on basic principles of the design and analysis of stream ciphers. Since statistical methods are the most important cryptanalysis techniques, they are described in detail. Applying statistical methods in practice reveals several bottlenecks when implementing various analysis algorithms. For example, ciphers commonly produce n-bit words instead of single bits, which makes a multidimensional analysis of such designs more natural; in practice, however, one often has to truncate the words simply because the tools needed for the analysis are missing. We propose a set of algorithms and data structures for multidimensional cryptanalysis when distributions over a large probability space have to be constructed. This thesis also includes results of cryptanalysis for various cryptographic primitives, such as A5/1, Grain, SNOW 2.0, Scream, Dragon, VMPC, RC4, and RC4A. Most of these results were achieved with intensive use of the proposed tools for cryptanalysis.
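
    As a concrete example of the kind of keystream generator analysed, here is a plain reference implementation of RC4, one of the primitives listed above; this is the textbook algorithm itself, not one of the thesis' attacks.

```python
def rc4(key, data):
    # Plain RC4 (KSA + PRGA). The keystream byte is XORed with the data,
    # so encryption and decryption are the same operation.
    S = list(range(256))
    j = 0
    for i in range(256):                     # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                        # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

    Statistical attacks on RC4 exploit biases in the distribution of these output bytes; the thesis' multidimensional tools target exactly such distributions over large spaces.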

    Formal Methods in Quantum Circuit Design

    The design and compilation of correct, efficient quantum circuits is integral to the future operation of quantum computers. This thesis makes contributions to the problems of optimizing and verifying quantum circuits, with an emphasis on the development of formal models for such purposes. We also present software implementations of these methods, which together form a full stack of tools for the design of optimized, formally verified quantum oracles. On the optimization side, we study methods for the optimization of Rz and CNOT gates in Clifford+Rz circuits. We develop a general, efficient optimization algorithm called phase folding, which computes a circuit's phase polynomial to reduce the number of Rz gates without increasing any other cost metric. This algorithm can further be combined with synthesis techniques for CNOT-dihedral operators to optimize circuits with respect to particular costs. We then study the optimal synthesis problem for CNOT-dihedral operators from the perspectives of Rz and CNOT gate optimization. In the case of Rz gate optimization, we show that the optimal synthesis problem is polynomial-time equivalent to minimum-distance decoding in certain Reed-Muller codes. For the CNOT optimization problem, we show that the optimal synthesis problem is at least as hard as a combinatorial problem related to Gray codes. In both cases, we develop heuristics for the optimal synthesis problem, which together with phase folding reduce T counts by 42% and CNOT counts by 22% across a suite of real-world benchmarks. From the perspective of formal verification, we make two contributions. The first is the development of a formal model of quantum circuits with ancillary bits based on the Feynman path integral, along with a concrete verification algorithm. The path integral model, with some syntactic sugar, further doubles as a natural specification language for quantum computations. Our experiments show that some practical circuits with up to hundreds of qubits can be efficiently verified. Our second contribution is a formally verified, optimizing compiler for reversible circuits. The compiler compiles a classical, irreversible language to reversible circuits, with a formal, machine-checked proof of correctness written in the proof assistant F*. The compiler is structured as a partial evaluator, allowing verification to be carried out significantly faster than in previous results.
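
    The core idea behind phase folding can be sketched in a few lines: in a CNOT + Rz circuit, every wire always carries a parity (XOR) of the input variables, and Rz gates applied to equal parities contribute to the same phase-polynomial term, so their angles can be merged. The representation below is a simplified illustration, not the thesis' implementation.

```python
def phase_fold(n_qubits, gates):
    # Minimal phase-folding sketch for CNOT + Rz circuits. CNOT acts
    # linearly on the wires, so each wire holds a bitmask encoding a
    # parity of input variables; Rz rotations on equal parities are
    # summed into one phase-polynomial term.
    wires = [1 << q for q in range(n_qubits)]   # wire q starts as parity x_q
    phase = {}                                  # parity mask -> summed angle
    for g in gates:
        if g[0] == 'cnot':
            _, ctrl, tgt = g
            wires[tgt] ^= wires[ctrl]           # target picks up control's parity
        elif g[0] == 'rz':
            _, q, angle = g
            phase[wires[q]] = phase.get(wires[q], 0.0) + angle
    return phase, wires
```

    For instance, two Rz gates separated by CNOT conjugations that restore the same parity fold into a single term, which is exactly the Rz-count reduction the algorithm performs.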

    Symmetry-Adapted Machine Learning for Information Security

    Symmetry-adapted machine learning has shown encouraging ability to mitigate security risks in information and communication technology (ICT) systems. It is a subset of artificial intelligence (AI) that relies on the principle of predicting future events by learning from past events or historical data. The autonomous nature of symmetry-adapted machine learning supports effective data processing and analysis for security detection in ICT systems without the intervention of human authorities. Many industries are developing machine-learning-adapted solutions to support security for smart hardware, distributed computing, and the cloud. In this Special Issue book, we focus on the deployment of symmetry-adapted machine learning for information security in various application areas. This security approach can handle the dynamic nature of security attacks by extracting and analyzing data to identify hidden patterns. The main topics of this Issue include malware classification, intrusion detection systems, image watermarking, color image watermarking, a battlefield target aggregation behavior recognition model, IP cameras, Internet of Things (IoT) security, service function chains, indoor positioning systems, and cryptanalysis.

    Real-time system identification and self-tuning control of DC-DC power converter using Kalman Filter approach

    Switch-mode power converters (SMPCs) are employed in many industrial and consumer devices. Due to the continuous reduction in the cost of microprocessors, and improvements in their processing power, digital control solutions for SMPCs have become a viable alternative to traditional analogue controllers. However, in order to achieve high-performance control of modern DC-DC converters using direct digital design techniques, an accurate discrete model of the converter is necessary. This model can be acquired by means of prior knowledge of the system parameters or using system identification methods. For the best performance of the designed controller, system identification methods are preferred, since they can handle model uncertainties such as component variations and load changes. This process is called indirect adaptive control: the model is estimated from input and output data using a recursive algorithm, and the controller parameters are tuned and adjusted accordingly. In the parameter estimation step, the Recursive Least Squares (RLS) method and its modifications exhibit very good identification metrics (fast convergence rate, accurate estimates, and small prediction error) during steady-state operation. However, in real-time implementation, the accuracy of the model estimated using the RLS algorithm is affected by measurement noise. Moreover, there is a need to continuously inject an excitation signal to avoid estimator wind-up. In addition, the computational complexity of the RLS algorithm is high, which demands significant hardware resources and hence increases the overall cost of the digital system. For these reasons, this thesis presents a robust parametric identification method that provides accurate estimates and a computationally efficient self-tuning controller suitable for real-time implementation in SMPC systems.
    This thesis presents two complete real-time solutions for parametric system identification and explicit self-tuning control of SMPCs. The first is a new parametric estimation method, based on a state-of-the-art Kalman Filter (KF) algorithm, to estimate the discrete model of a synchronous DC-DC buck converter. The proposed method can accurately identify the discrete coefficients of the DC-DC converter. The estimator possesses the advantage of providing an independent adaptation strategy for each individual parameter, thus offering a robust and reliable solution for real-time parameter estimation. To improve the tracking performance of the proposed KF, an adaptive tuning technique is proposed. Unlike many other published schemes, this approach offers the unique advantage of updating the parameter vector coefficients at different rates. This thesis also validates the performance of the identification algorithm under time-varying parameters, such as an abrupt load change. Furthermore, the proposed method demonstrates robust estimation with and without an excitation signal, which makes it very well suited to real-time power electronic control applications. Additionally, the estimator convergence time is significantly shorter than in many other schemes, such as the classical Exponentially weighted Recursive Least Squares (ERLS) method. To design a computationally efficient self-tuning controller for DC-DC SMPCs, the second part of the thesis develops a complete package for real-time explicit self-tuning control. A novel partial-update KF (PUKF) is introduced for real-time parameter estimation. In this approach, a significant complexity reduction is attained, as the number of arithmetic operations is reduced, specifically in the computation of adaptation gains and covariance updates. The explicit self-tuning control scheme is constructed by integrating the developed PUKF with a low-complexity control algorithm such as the BĂĄnyĂĄsz/Keviczky PID controller.
    Experimental and simulation results clearly show an enhancement in the overall dynamic performance of the closed-loop control system compared to a conventional PID controller designed from a pre-calculated average model. Importantly, in this thesis, unlike a significant proportion of the existing literature, the entire system identification and closed-loop control process is implemented seamlessly in real-time hardware, without any intermediate remote post-processing analysis.
    Ministry of Higher Education; General Electricity Company of Libya.
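
    The idea of treating the unknown discrete model coefficients as the state of a Kalman Filter can be sketched as follows: the coefficients follow a random-walk model and are observed through the regressor vector. The noise settings and the first-order ARX test system are illustrative assumptions, not the converter model used in the thesis.

```python
import numpy as np

def kf_identify(phi, y, q=1e-6, r=1e-2):
    # Kalman-filter parameter estimation sketch: the parameter vector
    # theta is the state, modelled as a random walk (process noise q)
    # and observed through y_k = phi_k . theta + v_k (noise r).
    n = phi.shape[1]
    theta = np.zeros(n)
    P = np.eye(n)                               # parameter covariance
    for k in range(len(y)):
        P = P + q * np.eye(n)                   # time update (random walk)
        h = phi[k]
        K = P @ h / (h @ P @ h + r)             # Kalman gain
        theta = theta + K * (y[k] - h @ theta)  # measurement update
        P = P - np.outer(K, h) @ P              # covariance update
    return theta
```

    Because the gain `K` weights each parameter through its own covariance entry, each coefficient effectively adapts at its own rate, which is the property the thesis exploits and extends.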

    Techniques améliorées pour la cryptanalyse des primitives symétriques (Improved techniques for the cryptanalysis of symmetric primitives)

    This thesis proposes improvements which can be applied to several techniques for the cryptanalysis of symmetric primitives. Special attention is given to linear cryptanalysis, for which a technique based on the fast Walsh transform was already known (Collard et al., ICISC 2007). We introduce a generalised version of this attack, which allows us to apply it to key-recovery attacks over multiple rounds, as well as to reduce the complexity of the problem using information extracted, for example, from the key schedule. We also propose a general technique for speeding up key-recovery attacks which is based on the representation of S-boxes as binary decision trees. Finally, we showcase the construction of a linear approximation of the full version of the Gimli permutation using mixed-integer linear programming (MILP) optimisation.
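
    The fast Walsh transform underlying the Collard et al. technique computes, in a single pass, the correlation of a Boolean function with every linear mask; key-recovery attacks use the same transform to evaluate the counters for all key guesses at once. A minimal sketch of the transform itself:

```python
def correlations(truth_table):
    # Correlation of an n-bit Boolean function (given as its 2^n-entry
    # truth table, index bit i = input bit x_i) with every linear mask,
    # computed with one in-place fast Walsh transform. Entry m is
    # cor(f, <m, x>) = Pr[f = m.x] - Pr[f != m.x].
    w = [1 - 2 * b for b in truth_table]   # map 0/1 -> +1/-1
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):  # butterfly layer
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return [x / len(w) for x in w]
```

    The same butterfly structure is what lets the key-recovery counters for all keys be computed in O(2^k · k) instead of O(2^{2k}) operations.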

    Efficient algorithms and data structures for compressive sensing

    Along with the ever-increasing number of sensors, which are also generating rapidly growing amounts of data, the traditional paradigm of sampling adhering to the Nyquist criterion is facing an equally increasing number of obstacles. The rather recent theory of Compressive Sensing (CS) promises to alleviate some of these drawbacks by generalizing the sampling and reconstruction schemes such that the acquired samples can contain more complex information about the signal than Nyquist samples. The proposed measurement process is more complex and the reconstruction algorithms necessarily need to be nonlinear. Additionally, the hardware design process needs to be revisited as well in order to account for this new acquisition scheme. Hence, one can identify a trade-off between the information contained in individual samples of a signal and the effort during development and operation of the sensing system. This thesis addresses the steps necessary to shift this trade-off in favor of CS. We do so by providing new results that make CS easier to deploy in practice while maintaining the performance indicated by theoretical results. The sparsity order of a signal plays a central role in any CS system; hence, we present a method to estimate this crucial quantity from a single snapshot prior to recovery.
    As we show, the proposed Sparsity Order Estimation method reduces the reconstruction error compared to an unguided reconstruction. During the development of the theory we notice that a matrix-free view of the involved linear mappings offers many possibilities to render the reconstruction and modeling stages much more efficient. Hence, we present an open-source software architecture for constructing these matrix-free representations and showcase its ease of use and performance when used for sparse recovery, both to detect defects from ultrasound data and to estimate scatterers in a radio channel from ultra-wideband impulse responses. For the former application, we present a complete reconstruction pipeline for ultrasound data compressed by means of sub-sampling in the frequency domain. Here, we present the algorithms for the forward model and the reconstruction stage, and we give asymptotic bounds for the number of measurements and the expected reconstruction error. We show that our proposed system allows significant compression levels without substantially deteriorating the imaging quality. For the second application, we develop a sampling scheme to acquire the channel Impulse Response (IR) based on a Random Demodulator, which captures enough information in the recorded samples to reliably estimate the IR when exploiting sparsity. Compared to the state of the art, this improves robustness to the effects of time-variant radar channels while also outperforming methods based on Nyquist sampling in terms of reconstruction error. In order to circumvent the inherent model mismatch of early grid-based compressive sensing theory, we make use of the Atomic Norm Minimization framework and show how it can be used to estimate the signal covariance with R-dimensional parameters from multiple compressive snapshots. To this end, we derive a variant of the ADMM that can estimate this covariance in a very general setting, and we show how to use it for direction finding with realistic antenna geometries. In this context we also present a method based on a stochastic gradient descent iteration scheme to find compression schemes that are well suited for parameter estimation, since the resulting sub-sampling has a uniform effect on the whole parameter space. Finally, we show numerically that the combination of these two approaches yields a well-performing grid-free CS pipeline.
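
    As grid-based background for the ADMM variant mentioned above, the following sketch applies standard ADMM to the LASSO sparse-recovery problem; the thesis' atomic-norm ADMM generalises this idea to grid-free R-dimensional parameters, and the parameter values here are illustrative.

```python
import numpy as np

def lasso_admm(A, y, lam=0.1, rho=1.0, iters=200):
    # Standard ADMM for the LASSO problem
    #   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1,
    # the basic grid-based sparse-recovery formulation.
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    H = A.T @ A + rho * np.eye(n)          # quadratic-step system matrix
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(H, Aty + rho * (z - u))        # least-squares step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                      # scaled dual update
    return z
```

    Each iteration splits the smooth least-squares term from the non-smooth l1 term; replacing the soft-threshold step with the atomic-norm proximal step is what the grid-free variant changes.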