The Power-Series Algorithm: A Numerical Approach to Markov Processes
Abstract: The development of computer and communication networks and flexible manufacturing systems has led to new and interesting multidimensional queueing models. The Power-Series Algorithm is a numerical method to analyze and optimize the performance of such models. In this thesis, the applicability of the algorithm is extended. This is illustrated by introducing and analyzing a wide class of queueing networks with very general dependencies between the different queues. The theoretical basis of the algorithm is strengthened by proving analyticity of the steady-state distribution in light traffic and by finding remedies for previous imperfections of the method. Applying similar ideas to the transient distribution yields new analyticity results. Finally, various aspects of Markov processes, analytic functions, and extrapolation methods that are necessary for a thorough understanding and efficient implementation of the Power-Series Algorithm are reviewed.
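The core idea behind the Power-Series Algorithm is to expand the steady-state probabilities as power series in the traffic intensity around the light-traffic limit. A minimal sketch of that idea, using the M/M/1 queue (where p_n(ρ) = (1 − ρ)ρ^n, so the series coefficients are known in closed form) rather than a multidimensional model; the function and variable names are illustrative, not from the thesis:

```python
import numpy as np

def mm1_series_coeffs(n, order):
    """Power-series coefficients (in rho) of the M/M/1 steady-state
    probability p_n(rho) = (1 - rho) * rho**n = rho**n - rho**(n+1).
    The actual Power-Series Algorithm computes such coefficients
    recursively from the balance equations of a multidimensional model;
    here they are written down directly for illustration."""
    c = np.zeros(order + 1)
    if n <= order:
        c[n] = 1.0
    if n + 1 <= order:
        c[n + 1] = -1.0
    return c

def eval_series(c, rho):
    """Evaluate the truncated power series at traffic intensity rho."""
    return sum(ck * rho**k for k, ck in enumerate(c))

rho = 0.3                                  # light traffic
exact = (1 - rho) * rho**2                 # exact P(N = 2) for M/M/1
approx = eval_series(mm1_series_coeffs(2, 10), rho)
```

For genuinely multidimensional models no closed form exists, which is exactly where the recursive coefficient computation (and the extrapolation methods reviewed in the thesis) become necessary.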
Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems
Advances in artificial intelligence (AI) are fueling a new paradigm of
discoveries in natural sciences. Today, AI has started to advance natural
sciences by improving, accelerating, and enabling our understanding of natural
phenomena at a wide range of spatial and temporal scales, giving rise to a new
area of research known as AI for science (AI4Science). Being an emerging
research paradigm, AI4Science is unique in that it is an enormous and highly
interdisciplinary area. Thus, a unified and technical treatment of this field
is needed yet challenging. This work aims to provide a technically thorough
account of a subarea of AI4Science; namely, AI for quantum, atomistic, and
continuum systems. These areas aim at understanding the physical world from the
subatomic (wavefunctions and electron density), atomic (molecules, proteins,
materials, and interactions), to macro (fluids, climate, and subsurface) scales
and form an important subarea of AI4Science. A unique advantage of focusing on
these areas is that they largely share a common set of challenges, thereby
allowing a unified and foundational treatment. A key common challenge is how to
capture physics first principles, especially symmetries, in natural systems by
deep learning methods. We provide an in-depth yet intuitive account of
techniques to achieve equivariance to symmetry transformations. We also discuss
other common technical challenges, including explainability,
out-of-distribution generalization, knowledge transfer with foundation and
large language models, and uncertainty quantification. To facilitate learning
and education, we provide categorized lists of resources that we found to be
useful. We strive to be thorough and unified, and hope this initial effort may spark further community interest and effort to advance AI4Science.
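One way to capture physics first principles such as symmetries, mentioned above, is to feed a model only features that are already invariant under the relevant transformations. A small numerical sketch (illustrative only; the feature choice and names are not from the survey): sorted pairwise distances between particle positions are invariant to rotations, translations, and permutations, so any model built on them inherits E(3)-invariance.

```python
import numpy as np

rng = np.random.default_rng(0)

def invariant_features(pos):
    """Sorted pairwise distances of a point cloud: invariant under
    rotation, translation, and permutation of the points -- a simple
    way to bake E(3) symmetry into a learned model."""
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    return np.sort(dist[iu])

pos = rng.normal(size=(5, 3))              # five points in 3-D

# Apply a random orthogonal transform (via QR) plus a translation.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
transformed = pos @ Q.T + np.array([1.0, -2.0, 0.5])

f1 = invariant_features(pos)
f2 = invariant_features(transformed)       # identical features
```

Equivariant (rather than merely invariant) architectures, which the survey treats in depth, go further by letting intermediate representations transform predictably under the symmetry group instead of discarding the geometric information.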
The Foundations of Infinite-Dimensional Spectral Computations
Spectral computations in infinite dimensions are ubiquitous in the sciences. However, their many applications and theoretical studies depend on computations which are infamously difficult. This thesis, therefore, addresses the broad question,
“What is computationally possible within the field of spectral theory of separable Hilbert spaces?”
The boundaries of what computers can achieve in computational spectral theory and mathematical physics are unknown, leaving many open questions that have been unsolved for decades. This thesis provides solutions to several such long-standing problems.
To determine these boundaries, we use the Solvability Complexity Index (SCI) hierarchy, an idea which has its roots in Smale's comprehensive programme on the foundations of computational mathematics. The Smale programme led to a real-number counterpart of the Turing machine, yet left a substantial gap between theory and practice. The SCI hierarchy encompasses both these models and provides universal bounds on what is computationally possible. What makes spectral problems particularly delicate is that many of the problems can only be computed by using several limits, a phenomenon also shared in the foundations of polynomial root-finding as shown by McMullen. We develop and extend the SCI hierarchy to prove optimality of algorithms and construct a myriad of different methods for infinite-dimensional spectral problems, solving many computational spectral problems for the first time.
For arguably almost any operator of applicable interest, we solve the long-standing computational spectral problem and construct algorithms that compute spectra with error control. This is done for partial differential operators with coefficients of locally bounded total variation and also for discrete infinite matrix operators. We also show how to compute spectral measures of normal operators (when the spectrum is a subset of a regular enough Jordan curve), including spectral measures of classes of self-adjoint operators with error control and the construction of high-order rational kernel methods. We classify the problems of computing measures, measure decompositions, types of spectra (pure point, absolutely continuous, singular continuous), functional calculus, and Radon–Nikodym derivatives in the SCI hierarchy. We construct algorithms for, and classify, the following problems: fractal dimensions of spectra, Lebesgue measures of spectra, spectral gaps, discrete spectra, eigenvalue multiplicities, capacity, different spectral radii, and the detection of algorithmic failure of previous methods (the finite section method). The infinite-dimensional QR algorithm is also analysed, recovering extremal parts of spectra, corresponding eigenvectors, and invariant subspaces, with convergence rates and error control. Finally, we analyse pseudospectra of pseudoergodic operators (a generalisation of random operators) on vector-valued spaces.
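The finite section method mentioned above is the classical baseline: truncate an infinite matrix operator to its leading n×n corner and compute the eigenvalues of the truncation. A minimal sketch for the discrete free Laplacian on l²(ℕ) (zero diagonal, ones on the off-diagonals), whose spectrum is the interval [−2, 2]; for this self-adjoint example the finite sections behave well, which is precisely what can fail for non-self-adjoint operators and motivates the algorithms with error control developed in the thesis:

```python
import numpy as np

def finite_section(n):
    """n-by-n truncation of the discrete free Laplacian on l^2(N):
    zeros on the diagonal, ones on the first off-diagonals."""
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = 1.0
    A[idx + 1, idx] = 1.0
    return A

# The spectrum of the infinite operator is the interval [-2, 2].
# Eigenvalues of the n-th finite section are 2*cos(k*pi/(n+1)),
# which fill out [-2, 2] as n grows.
eigs = np.linalg.eigvalsh(finite_section(200))
```

For non-normal operators, finite sections can produce spurious eigenvalues ("spectral pollution") or miss parts of the spectrum entirely, which is why detecting their failure is itself one of the problems classified in the SCI hierarchy.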
All of the algorithms developed in this thesis are sharp in the sense of the SCI hierarchy. In other words, we prove that they are optimal, realising the boundaries of what digital computers can achieve. They are also implementable and practical, and the majority are parallelisable. Extensive numerical examples are given throughout, demonstrating efficiency and tackling difficult problems taken from mathematics and also physical applications.
In summary, this thesis allows scientists to rigorously and efficiently compute many spectral properties for the first time. The framework provided by this thesis also encompasses a vast number of areas in computational mathematics, including the classical problem of polynomial root-finding, as well as optimisation, neural networks, PDEs and computer-assisted proofs. This framework will be explored in the future work of the author within these settings.
Feature Extraction for image super-resolution using finite rate of innovation principles
To understand a real-world scene from several multiview pictures, it is necessary to find
the disparities existing between each pair of images so that they are correctly related to one
another. This process, called image registration, requires the extraction of some specific
information about the scene. This is achieved by taking features out of the acquired
images. Thus, the quality of the registration depends largely on the accuracy of the
extracted features.
Feature extraction can be formulated as a sampling problem for which perfect reconstruction of the desired features is wanted. The recent sampling theory for signals with finite rate of innovation (FRI) and the B-spline theory offer an appropriate new framework for the extraction of features in real images. This thesis first focuses on extending the
sampling theory for FRI signals to a multichannel case and then presents exact sampling
results for two different types of image features used for registration: moments and edges.
In the first part, it is shown that the geometric moments of an observed scene can
be retrieved exactly from sampled images and used as global features for registration. The
second part describes how edges can also be retrieved perfectly from sampled images for
registration purposes. The proposed feature extraction schemes therefore allow, in theory, exact registration of images. Indeed, various simulations show that the proposed extraction/registration methods outperform traditional ones, especially at low resolution.
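Geometric moments as global registration features can be illustrated very simply: the first-order moments give an image's centroid, and the centroid difference between two translated views estimates their disparity. A toy sketch (illustrative only; the thesis retrieves these moments exactly from the *sampled* images via FRI/B-spline theory, which this sketch does not attempt):

```python
import numpy as np

def geometric_moment(img, p, q):
    """Geometric moment m_pq = sum over pixels of x**p * y**q * img[y, x]."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum(xs**p * ys**q * img))

def centroid(img):
    """Centroid (m10/m00, m01/m00) -- a global feature of the scene."""
    m00 = geometric_moment(img, 0, 0)
    return (geometric_moment(img, 1, 0) / m00,
            geometric_moment(img, 0, 1) / m00)

# Toy scene: a bright square, and a copy translated by (+3, +5) pixels.
img1 = np.zeros((64, 64))
img1[10:20, 12:22] = 1.0
img2 = np.zeros((64, 64))
img2[15:25, 15:25] = 1.0

cx1, cy1 = centroid(img1)
cx2, cy2 = centroid(img2)
shift = (cx2 - cx1, cy2 - cy1)   # estimated disparity in (x, y)
```

Higher-order moments extend the same idea to estimating rotation and affine distortion, which is why exact moment retrieval from samples is valuable for registration.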
These characteristics make such feature extraction techniques very appropriate for
applications like image super-resolution for which a very precise registration is needed. The
quality of the super-resolved images obtained using the proposed feature extraction methods is improved in comparison with other approaches. Finally, the notion of polyphase components is used to adapt the image acquisition model to the characteristics of real digital cameras in order to run super-resolution experiments on real images.
Architectures and implementations for the Polynomial Ring Engine over small residue rings
This work considers VLSI implementations for the recently introduced Polynomial Ring Engine (PRE) using small residue rings. To allow for a comprehensive approach to the implementation of the PRE mappings for DSP algorithms, this dissertation introduces novel techniques ranging from system-level architectures to transistor-level considerations. The Polynomial Ring Engine combines both classical residue mappings and new polynomial mappings. This dissertation develops a systematic approach for generating pipelined systolic/semi-systolic structures for the PRE mappings. An example architecture is constructed and simulated to illustrate the properties of the new architectures. To simultaneously achieve large computational dynamic range and high throughput rate, the basic
building blocks of the PRE architecture use transistor size profiling. Transistor sizing software is developed for profiling the Switching Tree dynamic logic used to build the basic modulo blocks. The software handles complex nFET structures using a simple iterative algorithm. Issues such as convergence of the iterative technique and validity of the sizing formulae have been treated with an appropriate mathematical analysis.
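The classical residue mappings that the PRE builds on can be sketched in a few lines: an integer is represented by its residues modulo small pairwise-coprime moduli, arithmetic proceeds independently (carry-free) in each small residue channel, and the result is reconstructed with the Chinese Remainder Theorem. A minimal illustration (the moduli and operation are arbitrary examples, not taken from the dissertation):

```python
from math import prod

MODULI = (13, 15, 16, 17)   # pairwise coprime small residue rings

def to_residues(x):
    """Map an integer to its residue representation."""
    return tuple(x % m for m in MODULI)

def crt_reconstruct(res):
    """Chinese Remainder Theorem: recover x mod prod(MODULI)."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

a, b = 123, 45
# Channel-wise multiply-accumulate: no carries propagate between channels,
# which is what enables small, fast, parallel hardware blocks.
res = tuple((ra * rb + ra) % m
            for ra, rb, m in zip(to_residues(a), to_residues(b), MODULI))
result = crt_reconstruct(res)        # equals a*b + a, since it fits in M
```

The PRE's novelty lies in combining such residue channels with polynomial mappings; the hardware contribution of this work is then in the modulo building blocks themselves (switching-tree dynamic logic with profiled transistor sizes).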
As an illustration of the use of PRE architectures for modern DSP computational problems, a Wavelet Transform for HDTV image compression is implemented. An interesting use is made of the PRE technique of using polynomial indeterminates as 'placeholders' for components of the processed data. In this case we use an indeterminate to symbolically handle the irrational number √3 of the Daubechies mother wavelet for N = 4.
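The indeterminate-as-placeholder trick can be mimicked in software: represent every quantity exactly as a + b·√3 and let √3 behave as a symbol whose square is 3, so no irrational rounding ever occurs. A hedged sketch (the class and its use are illustrative, not the dissertation's hardware realisation), verifying exactly that the four Daubechies D4 scaling coefficients (1±√3, 3±√3, up to the common factor 1/(4√2)) satisfy the orthonormality condition Σh² = 32 = (4√2)²:

```python
from fractions import Fraction

class Rt3:
    """Exact number a + b*sqrt(3), with sqrt(3) kept as an indeterminate
    (mirroring the PRE trick of a symbolic placeholder; illustrative)."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return Rt3(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*s)(c + d*s) = (ac + 3bd) + (ad + bc)*s, since s*s = 3
        return Rt3(self.a * other.a + 3 * self.b * other.b,
                   self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

# Daubechies D4 scaling coefficients, up to the common factor 1/(4*sqrt(2)):
h = [Rt3(1, 1), Rt3(3, 1), Rt3(3, -1), Rt3(1, -1)]

# Orthonormality: sum of h_k^2 must equal (4*sqrt(2))**2 = 32, exactly.
total = Rt3(0)
for hk in h:
    total = total + hk * hk
```

Carrying √3 symbolically through the whole transform and substituting only at the end is what lets the residue-ring datapath stay exact despite the irrational filter coefficients.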
Finally, a multi-level fault-tolerant PRE architecture is developed by combining the classical redundant residue approach and the circuit parity check approach. The proposed architecture uses syndromes to correct faulty residue channels and an embedded parity check to correct faulty computational channels. The architecture offers superior fault detection and correction without online data interruption.
Efficient algorithms for arbitrary sample rate conversion with application to wave field synthesis
Arbitrary sample rate conversion (ASRC) is used in many fields of digital signal processing to alter the sampling rate of discrete-time signals by arbitrary, potentially time-varying ratios.
This thesis investigates efficient algorithms for ASRC and proposes several improvements. First, closed-form descriptions for the modified Farrow structure and Lagrange interpolators are derived that are directly applicable to algorithm design and analysis. Second, efficient implementation structures for ASRC algorithms are investigated. Third, this thesis considers coefficient design methods that are optimal for a selectable error norm and optional design constraints.
Finally, the performance of different algorithms is compared for several performance metrics. This enables the selection of ASRC algorithms that meet the requirements of an application with minimal complexity.
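The Lagrange interpolators analysed above can be sketched compactly: for each output instant, the signal value is interpolated from a handful of neighbouring input samples by evaluating a Lagrange polynomial at the fractional intersample position (the quantity a Farrow structure computes efficiently from the fractional delay). A toy resampler under simplifying assumptions (no anti-aliasing filtering, fixed conversion ratio, illustrative function names):

```python
import numpy as np

def lagrange_resample(x, ratio, order=3):
    """Resample x by an arbitrary ratio using Lagrange interpolation over
    order+1 neighbouring samples. A toy version of the interpolators
    analysed for ASRC: no band-limiting filter, naive evaluation."""
    n_out = int(len(x) * ratio)
    y = np.zeros(n_out)
    for m in range(n_out):
        t = m / ratio                          # output time on input grid
        base = int(np.floor(t)) - order // 2   # left edge of the stencil
        base = min(max(base, 0), len(x) - order - 1)
        mu = t - base                          # fractional position
        val = 0.0
        for k in range(order + 1):
            lk = 1.0                           # Lagrange basis l_k(mu)
            for j in range(order + 1):
                if j != k:
                    lk *= (mu - j) / (k - j)
            val += lk * x[base + k]
        y[m] = val
    return y

t = np.arange(200)
x = np.sin(2 * np.pi * 0.01 * t)       # slowly varying test sine
y = lagrange_resample(x, 1.5)          # convert rate by factor 1.5
t_out = np.arange(len(y)) / 1.5
err = np.max(np.abs(y - np.sin(2 * np.pi * 0.01 * t_out)))
```

Real ASRC designs replace the fixed Lagrange coefficients with polynomials optimised for a chosen error norm, which is exactly the coefficient-design problem this thesis addresses.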
Wave field synthesis (WFS), a high-quality spatial sound reproduction technique, is the main application considered in this work. For WFS, sophisticated ASRC algorithms improve the quality of moving sound sources. However, the improvements proposed in this thesis are not limited to WFS, but applicable to general-purpose ASRC problems.

Arbitrary sample rate conversion (ASRC) methods allow the sampling rate of discrete-time signals to be changed by arbitrary, time-varying ratios, and are used in many applications of digital signal processing. This work investigates the use of ASRC methods in wave field synthesis (WFS), a technique for high-quality, spatially accurate audio reproduction. ASRC algorithms can considerably improve the reproduction quality of moving sound sources in WFS. However, because of the large number of simultaneous ASRC operations required in a WFS reproduction system, a direct application of high-quality algorithms is usually not feasible.

To solve this problem, several contributions are presented. The complexity of the WFS signal processing is significantly reduced by a suitable partitioning of the ASRC algorithms that enables efficient reuse of intermediate results. This allows the use of high-quality sample rate conversion algorithms at a complexity comparable to that of simple conventional ASRC algorithms. However, this partitioning scheme also places additional demands on ASRC algorithms and requires trade-offs between performance measures such as algorithmic complexity, memory requirements, and memory bandwidth.

Several measures are proposed to improve algorithms and implementation structures for ASRC. First, closed-form analytic descriptions of the continuous frequency response are introduced for several classes of ASRC structures. In particular, compact representations are derived for Lagrange interpolators, the modified Farrow structure, and combinations of oversampling with continuous-time resampling functions; these provide insight into the behaviour of these filters and can be used directly in design methods. A second focus is the coefficient design for these structures, in particular optimal design with respect to a chosen error norm and optional design conditions and constraints. In contrast to previous approaches, such optimal design methods are also presented for multi-stage ASRC structures that combine integer-factor oversampling with continuous-time resampling functions. For this class of structures, a set of adapted resampling functions is proposed that, in combination with the developed optimal design methods, enables significant quality improvements.

The large number of ASRC structures and their design parameters is a main difficulty in selecting a method suited to a given application. Evaluation and performance comparisons therefore form a third focus. On the one hand, the influence of different design parameters on the achievable quality of ASRC algorithms is examined. On the other hand, the required effort with respect to several performance metrics is presented as a function of design quality. In this way, the results of this work are not limited to WFS, but are usable in a wide range of applications of arbitrary sample rate conversion.