Efficient Implementation of Elliptic Curve Cryptography on FPGAs
This work presents the design strategies of an FPGA-based elliptic curve co-processor. Elliptic curve cryptography is an important topic in cryptography due to its relatively short key lengths and higher efficiency compared to other well-known public-key cryptosystems such as RSA. The most important contributions of this work are:
- Analyzing how different representations of finite fields and of points on elliptic curves affect the performance of an elliptic curve co-processor, and implementing a high-performance co-processor.
- Proposing a novel dynamic programming approach to find the optimum combination of different recursive polynomial multiplication methods, where optimum means the smallest number of bit operations.
- Designing a new normal-basis multiplier based on polynomial multipliers. The most important part of this multiplier is a circuit for changing the representation between polynomial and normal basis.
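The dynamic programming idea in the second contribution can be sketched in a few lines: at each recursion level, pick whichever of schoolbook or Karatsuba multiplication costs fewer bit operations. The cost model below (n² ANDs plus (n−1)² XORs for schoolbook; three half-size products plus a linear number of reconstruction XORs for Karatsuba) is a common textbook model assumed for illustration, not the thesis's exact one.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_cost(n):
    """Minimal bit-operation count to multiply two n-bit GF(2) polynomials,
    choosing between schoolbook and Karatsuba at every recursion level."""
    if n == 1:
        return 1                                 # a single AND gate
    schoolbook = n * n + (n - 1) ** 2            # n^2 ANDs + (n-1)^2 XORs
    half = (n + 1) // 2
    # 3 half-size sub-products + an assumed linear reconstruction overhead
    karatsuba = 3 * best_cost(half) + 8 * half
    return min(schoolbook, karatsuba)
```

Under this model the optimum typically mixes both methods: Karatsuba at the outer recursion levels, schoolbook once the operands are small.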
Generalised Mersenne Numbers Revisited
Generalised Mersenne Numbers (GMNs) were defined by Solinas in 1999 and
feature in the NIST (FIPS 186-2) and SECG standards for use in elliptic curve
cryptography. Their form is such that modular reduction is extremely efficient,
thus making them an attractive choice for modular multiplication
implementation. However, the issue of residue multiplication efficiency seems
to have been overlooked. Asymptotically, using a cyclic rather than a linear
convolution, residue multiplication modulo a Mersenne number is twice as fast
as integer multiplication; this property does not hold for prime GMNs, unless
they are of Mersenne's form. In this work we exploit an alternative
generalisation of Mersenne numbers for which an analogue of the above property
--- and hence the same efficiency ratio --- holds, even at bitlengths for which
schoolbook multiplication is optimal, while also maintaining very efficient
reduction. Moreover, our proposed primes are abundant at any bitlength, whereas
GMNs are extremely rare. Our multiplication and reduction algorithms can also
be easily parallelised, making our arithmetic particularly suitable for
hardware implementation. Furthermore, the field representation we propose also
naturally protects against side-channel attacks, including timing attacks,
simple power analysis and differential power analysis, which is essential in
many cryptographic scenarios, in contrast to GMNs.
Comment: 32 pages. Accepted to Mathematics of Computation
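The efficiency property attributed to Mersenne-form moduli can be made concrete: for M = 2^p − 1 we have 2^p ≡ 1 (mod M), so a double-length product folds back to p bits with shifts and additions, no division. A minimal sketch of this general principle (not of the paper's proposed primes):

```python
# Hedged sketch: reduction modulo a Mersenne number M = 2**p - 1 using
# only shifts and adds, exploiting 2^p ≡ 1 (mod M). The parameter p is
# an illustrative choice; real implementations fix p and unroll the loop.

def mersenne_reduce(x, p):
    """Reduce a non-negative integer x modulo M = 2**p - 1."""
    m = (1 << p) - 1
    while x > m:
        x = (x >> p) + (x & m)   # fold the high part back in
    return 0 if x == m else x    # canonical representative in [0, M)
```

For example, with p = 7 (M = 127), `mersenne_reduce(12345, 7)` agrees with `12345 % 127` while using no division at all.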
CYBER 200 Applications Seminar
Applications suited to the CYBER 200 digital computer are discussed, covering areas including meteorology, algorithms, fluid dynamics, Monte Carlo methods, petroleum, electronic circuit simulation, biochemistry, lattice gauge theory, economics, and ray tracing
A Generic Component-Based Software Architecture for the Simulation of Probabilistic Models
Uncertain behaviour is in the nature of many physical phenomena. This uncertainty has to be quantified for a meaningful prediction by computer-aided simulation. A stochastic description of the uncertainty translates a physical phenomenon into a probabilistic model, which is usually solved by numerical schemes.
The present thesis discusses and develops models for challenging uncertain physical phenomena, efficient numerical schemes for uncertainty quantification (UQ), and a sustainable and efficient software implementation.
Probabilistic models are often described by stochastic partial differential equations (SPDEs). The stochastic Galerkin method represents the solution of an SPDE by a finite set of stochastic basis polynomials. A problem-independent choice of basis polynomials typically limits the application to relatively small maximum polynomial degrees; moreover, many coefficients have to be computed and stored. This thesis presents new error-controlled low-rank schemes that additionally select the relevant basis polynomials, addressing both problems.
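As a toy illustration of the stochastic Galerkin idea (not of the thesis's low-rank schemes), a scalar random quantity u = f(ξ) with ξ ~ N(0, 1) can be expanded in probabilists' Hermite polynomials He_k, with projection coefficients computed by Gauss quadrature; the function f and the degree below are illustrative choices:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Hedged sketch: expand u = f(xi), xi ~ N(0,1), in the Hermite chaos basis.
# Coefficients are c_k = E[f(xi) He_k(xi)] / k!, since E[He_k(xi)^2] = k!.

def pce_coefficients(f, degree):
    """Projection coefficients of f onto He_0, ..., He_degree."""
    x, w = He.hermegauss(2 * degree + 2)   # Gauss-HermiteE quadrature nodes/weights
    w = w / sqrt(2 * pi)                   # normalise to the N(0,1) density
    return [float(np.sum(w * f(x) * He.hermeval(x, [0] * k + [1]))) / factorial(k)
            for k in range(degree + 1)]

coeffs = pce_coefficients(lambda x: x**2, 4)
# xi^2 = 1*He_0 + 0*He_1 + 1*He_2 exactly, so coeffs ≈ [1, 0, 1, 0, 0]
```

The problem-independent truncation visible here (a fixed degree, every coefficient stored) is exactly what the low-rank, basis-selecting schemes of the thesis are designed to avoid.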
The complexity of a UQ is reflected in the software implementation as well. A sustainable implementation relies on the reuse of software. Here, a software architecture for the simulation of probabilistic models is presented, which is based on distributed generic components. Many of these components are reused in different frameworks (and may also be used beyond UQ). They can be instantiated many times in a distributed system and are interchangeable at runtime, while the generic aspect is preserved.
Probabilistic models are derived and simulated in this thesis, which for instance describe uncertainties in a composite material and an aircraft design. Among other things, several hundred stochastic dimensions or long simulation runtimes arise
The Magnus expansion and some of its applications
Approximate resolution of linear systems of differential equations with
varying coefficients is a recurrent problem shared by a number of scientific
and engineering areas, ranging from Quantum Mechanics to Control Theory. When
formulated in operator or matrix form, the Magnus expansion furnishes an
elegant setting to build up approximate exponential representations of the
solution of the system. It provides a power series expansion for the
corresponding exponent and is sometimes referred to as Time-Dependent
Exponential Perturbation Theory. Every Magnus approximant corresponds in
Perturbation Theory to a partial re-summation of infinitely many terms, with the
important additional property of preserving, at any order, certain symmetries of
the exact solution. The goal of this review is threefold. First, to collect a
number of developments scattered through half a century of scientific
literature on Magnus expansion. They concern the methods for the generation of
terms in the expansion, estimates of the radius of convergence of the series,
generalizations and related non-perturbative expansions. Second, to provide a
bridge with its implementation as a generator of special-purpose numerical
integration methods, a field of intense activity during the last decade. Third,
to illustrate with examples the kind of results one can expect from Magnus
expansion in comparison with those from both perturbative schemes and standard
numerical integrators. We buttress this issue with a review of the wide range
of physical applications found by the Magnus expansion in the literature.
Comment: Report on the Magnus expansion for differential equations and its applications to several physical problems
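The leading Magnus term already yields a practical integrator. A minimal sketch for Y'(t) = A(t) Y(t), truncating at Ω₁ = ∫₀ʰ A(s) ds evaluated by midpoint quadrature (a second-order, structure-preserving method): the 2×2 rotation generator and the coefficient function below are illustrative choices, picked as a commuting family for which this truncation is exact.

```python
import numpy as np

# Hedged sketch of the first Magnus approximant: Y(t+h) ≈ exp(Omega_1) Y(t).
# Example generator: A(t) = w(t) * J, with J the plane-rotation generator.

J = np.array([[0.0, -1.0], [1.0, 0.0]])

def rot_exp(theta):
    """Closed-form matrix exponential of theta * J (a plane rotation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def magnus1_step(w, t, h, y):
    """One step of the Magnus method truncated at Omega_1 (midpoint rule)."""
    omega = h * w(t + h / 2)      # midpoint quadrature for ∫ w(s) ds
    return rot_exp(omega) @ y

# Integrate Y' = cos(t) J Y on [0, 1]; the exact rotation angle is sin(1).
y = np.array([1.0, 0.0])
h, T = 0.01, 1.0
for k in range(int(T / h)):
    y = magnus1_step(np.cos, k * h, h, y)
# y ≈ (cos(sin 1), sin(sin 1)), since the rotations here commute
```

Because every step is an exact rotation, the numerical solution stays on the unit circle regardless of step size, illustrating the symmetry-preservation property emphasised in the review.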
High accuracy computational methods for the semiclassical Schrödinger equation
The computation of Schrödinger equations in the semiclassical regime presents several enduring challenges due to the presence of the small semiclassical parameter. Standard approaches for solving these equations commence with spatial discretisation followed by exponentiation of the discretised Hamiltonian via exponential splittings.
In this thesis we follow an alternative strategy: we develop a new technique, called the symmetric Zassenhaus splitting procedure, which involves directly splitting the exponential of the undiscretised Hamiltonian. This technique allows us to design methods that are highly efficient in the semiclassical regime. Our analysis takes place in the Lie algebra generated by multiplicative operators and polynomials of the differential operator.
This Lie algebra is completely characterised by Jordan polynomials in the differential operator, which constitute naturally symmetrised differential operators. Combined with the graded structure of this Lie algebra, the symmetry results in skew-Hermiticity of the exponents appearing in Zassenhaus-style splittings, yielding unitary evolution and numerical stability.
The properties of commutator simplification and height reduction in these Lie algebras result in a highly effective form of exponential splittings where consecutive terms are scaled by increasing powers of the small semiclassical parameter. This leads to high accuracy methods whose costs grow quadratically with higher orders of accuracy.
Time-dependent potentials are tackled by developing commutator-free Magnus expansions in our Lie algebra, which are subsequently split using the Zassenhaus algorithm. We present two approaches for developing arbitrarily high-order Magnus--Zassenhaus schemes: one where the integrals are discretised using Gauss--Legendre quadrature at the outset, and another where integrals are preserved throughout.
These schemes feature high accuracy, allow large time steps, and the quadratic growth of their costs is found to be superior to traditional approaches such as Magnus--Lanczos methods and Yoshida splittings based on traditional Magnus expansions that feature nested commutators of matrices.
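For contrast with the schemes described above, the conventional baseline they improve on is a Strang/Fourier splitting: one step for the semiclassical equation iε u_t = −(ε²/2) u_xx + V(x) u alternates a potential half-step, a kinetic full step in Fourier space, and another potential half-step. This sketch is the standard method, not the thesis's Zassenhaus procedure; grid, potential, and ε are left as the caller's illustrative choices.

```python
import numpy as np

# Hedged sketch: one Strang splitting step on a periodic grid x for
# i*eps*u_t = -(eps^2/2) u_xx + V(x) u.  Each factor is a unimodular
# phase multiplication, so the step is unitary (norm-preserving).

def strang_step(u, V, eps, h, x):
    n = x.size
    L = x[-1] - x[0] + (x[1] - x[0])             # periodic domain length
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    u = np.exp(-1j * h * V / (2 * eps)) * u                              # half potential step
    u = np.fft.ifft(np.exp(-1j * h * eps * k**2 / 2) * np.fft.fft(u))    # full kinetic step
    return np.exp(-1j * h * V / (2 * eps)) * u                           # half potential step
```

Note how the small parameter ε divides the potential phase and multiplies the kinetic one; resolving this imbalance accurately is precisely the difficulty the asymptotic Zassenhaus splittings address.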
An analysis of these operatorial splittings and expansions is carried out by characterising the highly oscillatory behaviour of the solution.
The author acknowledges the generous financial support of King's College, Cambridge, in the form of a King's College Studentship, which supported this doctoral research at Cambridge
Approximate inference in astronomy
This thesis utilizes the rules of probability theory and Bayesian reasoning to perform inference about astrophysical quantities from observational data, with a main focus on the inference of dynamical systems extended in space and time. The assumptions necessary for successfully solving such inference problems in practice are discussed, and the resulting methods are applied to real-world data. These assumptions range from the simplifying prior assumptions that enter the inference process to the development of a novel approximation method for the resulting posterior distributions.
The prior models developed in this work follow a maximum entropy principle by solely constraining those physical properties of a system that appear most relevant to inference, while remaining uninformative regarding all other properties. To this end, prior models that only constrain the statistically homogeneous space-time correlation structure of a physical observable are developed. The constraints placed on these correlations are based on generic physical principles, which makes the resulting models quite flexible and allows for a wide range of applications. This flexibility is verified and explored using multiple numerical examples, as well as an application to data provided by the Event Horizon
Telescope about the center of the galaxy M87. Furthermore, as an advanced and extended form of application, a variant of these priors is utilized within the context of simulating partial differential equations. Here, the prior is used in order to quantify the physical plausibility of an associated numerical solution, which in turn improves the accuracy of the simulation. The applicability and implications of this probabilistic approach to simulation are discussed and studied using numerical examples.
Finally, utilizing such prior models, paired with the vast amount of observational data provided by modern telescopes, results in Bayesian inference problems that are typically too complex to be fully solved analytically. Specifically, most resulting posterior probability distributions become too complex and therefore require a numerical approximation via a simplified distribution. To improve upon existing methods, this work proposes a novel approximation method for posterior probability distributions: the geometric Variational Inference (geoVI) method. The approximation capacities of geoVI are theoretically established and demonstrated using numerous numerical examples. These results suggest a broad range of applicability, as the method decreases approximation errors compared to state-of-the-art methods at a moderate computational cost
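The generic idea of replacing an intractable posterior by a simplified distribution can be illustrated, far more simply than geoVI does it, by a one-dimensional Laplace approximation: a Gaussian centred at the posterior mode with inverse-curvature variance. The example posterior below is an assumption for illustration; geoVI itself additionally uses geometry-aware coordinate transformations.

```python
# Hedged sketch: fit a Gaussian N(m, v) at the mode of a posterior given
# by its negative log-density.  Gradients and curvature are estimated by
# central finite differences; Newton iteration finds the mode.

def laplace_approx(neg_log_post, x0, steps=50, eps=1e-5):
    """Return (mean, variance) of a Gaussian fitted at the posterior mode."""
    x = x0
    for _ in range(steps):
        g = (neg_log_post(x + eps) - neg_log_post(x - eps)) / (2 * eps)
        h = (neg_log_post(x + eps) - 2 * neg_log_post(x)
             + neg_log_post(x - eps)) / eps**2
        x -= g / h                    # Newton step toward the mode
    return x, 1.0 / h                 # variance = inverse curvature

# Sanity check on a Gaussian posterior N(2, 0.25), where the fit is exact.
m, v = laplace_approx(lambda x: (x - 2.0)**2 / (2 * 0.25), 0.0)
```

For a genuinely non-Gaussian posterior this fit is only local, which is exactly the kind of approximation error that methods such as geoVI aim to reduce.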
Quantum Algorithm Implementations for Beginners
As quantum computers become available to the general public, the need has
arisen to train a cohort of quantum programmers, many of whom have been
developing classical computer programs for most of their careers. While
currently available quantum computers have fewer than 100 qubits, quantum
computing hardware is widely expected to grow in terms of qubit count, quality,
and connectivity. This review aims to explain the principles of quantum
programming, which are quite different from classical programming, with
straightforward algebra that makes an understanding of the fascinating underlying
quantum mechanical principles optional. We give an introduction to quantum
computing algorithms and their implementation on real quantum hardware. We
survey 20 different quantum algorithms, attempting to describe each in a
succinct and self-contained fashion. We show how these algorithms can be
implemented on IBM's quantum computer, and in each case, we discuss the results
of the implementation with respect to differences between the simulator and the
actual hardware runs. This article introduces computer scientists, physicists,
and engineers to quantum algorithms and provides a blueprint for their
implementations
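In the algebra-first spirit of the review, a two-qubit circuit can be simulated with nothing but matrix-vector products. The Bell-state preparation below (Hadamard then CNOT) is a standard introductory example of this style, not code taken from the article, and no quantum SDK is required.

```python
import numpy as np

# Hedged sketch: simulate a 2-qubit circuit by tracking the state vector.
# Basis ordering is |00>, |01>, |10>, |11>, with qubit 0 the left factor.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # control = qubit 0, target = qubit 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = CNOT @ np.kron(H, I2) @ state          # H on qubit 0, then CNOT
probs = state**2                               # measurement probabilities
# probs ≈ [0.5, 0, 0, 0.5]: only 00 and 11 are observed, each half the time
```

On real hardware the same two-gate circuit would show small deviations from these ideal probabilities, which is exactly the simulator-versus-hardware comparison the article discusses.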
Diffeomorphic Transformations for Time Series Analysis: An Efficient Approach to Nonlinear Warping
The proliferation and ubiquity of temporal data across many disciplines has
sparked interest for similarity, classification and clustering methods
specifically designed to handle time series data. A core issue when dealing
with time series is determining their pairwise similarity, i.e., the degree to
which a given time series resembles another. Traditional distance measures such
as the Euclidean are not well-suited due to the time-dependent nature of the
data. Elastic metrics such as dynamic time warping (DTW) offer a promising
approach, but are limited by their computational complexity,
non-differentiability and sensitivity to noise and outliers. This thesis
proposes novel elastic alignment methods that use parametric and diffeomorphic
warping transformations as a means of overcoming the shortcomings of DTW-based
metrics. The proposed method is differentiable and invertible, well-suited for
deep learning architectures, robust to noise and outliers, computationally
efficient, and is expressive and flexible enough to capture complex patterns.
Furthermore, a closed-form solution was developed for the gradient of these
diffeomorphic transformations, which allows an efficient search in the
parameter space, leading to better solutions at convergence. Leveraging the
benefits of these closed-form diffeomorphic transformations, this thesis
proposes a suite of advancements that include: (a) an enhanced temporal
transformer network for time series alignment and averaging, (b) a
deep-learning based time series classification model to simultaneously align
and classify signals with high accuracy, (c) an incremental time series
clustering algorithm that is warping-invariant, scalable and can operate under
limited computational and time resources, and finally, (d) a normalizing flow
model that enhances the flexibility of affine transformations in coupling and
autoregressive layers.
Comment: PhD Thesis, defended at the University of Navarra on July 17, 2023. 277 pages, 8 chapters, 1 appendix
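For context, the DTW baseline whose drawbacks motivate the thesis is the textbook O(nm) dynamic program below; this is the classical elastic metric, not the proposed diffeomorphic approach, and the sine-wave usage example is illustrative.

```python
import numpy as np

# Hedged sketch: classical dynamic time warping between two 1-D sequences.
# Quadratic cost and the non-differentiable min() are the limitations the
# thesis's closed-form diffeomorphic transformations are designed to avoid.

def dtw_distance(a, b):
    """DTW distance with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# A time-shifted sine is close under DTW despite a large Euclidean gap.
t = np.linspace(0, 2 * np.pi, 100)
d = dtw_distance(np.sin(t), np.sin(t + 0.5))
```

The nested min() in the recurrence is what makes DTW non-differentiable, which is why gradient-based deep learning architectures favour the parametric warps proposed in the thesis.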