
    Decay bounds for the numerical quasiseparable preservation in matrix functions

    Given matrices A and B such that B=f(A), where f(z) is a holomorphic function, we analyze the relation between the singular values of the off-diagonal submatrices of A and B. We provide a family of bounds which depend on the interplay between the spectrum of the argument A and the singularities of the function. In particular, these bounds guarantee the numerical preservation of quasiseparable structures under mild hypotheses. We extend the Dunford–Cauchy integral formula to the case in which some poles are contained inside the contour of integration. We use this tool together with the technology of hierarchical matrices (H-matrices) for the effective computation of matrix functions with quasiseparable arguments.
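    A toy illustration of the phenomenon the abstract describes (a sketch only, not code from the paper; the choice of the 1D Laplacian and f(z) = exp(-z) is an assumption for illustration): a tridiagonal matrix is quasiseparable, and the singular values of the off-diagonal blocks of f(A) decay rapidly, so those blocks are numerically low-rank even though f(A) is dense.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # A tridiagonal matrix A is quasiseparable: every off-diagonal block
    # has rank 1.  The decay bounds discussed above predict that B = f(A)
    # is *numerically* quasiseparable when f is well behaved; here
    # f(z) = exp(-z) is entire, so the bounds apply for any spectrum.
    n = 64
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian

    B = expm(-A)

    # Singular values of the lower-left off-diagonal block of B: they
    # decay rapidly, so the block has small numerical rank.
    s = np.linalg.svd(B[n // 2:, :n // 2], compute_uv=False)
    print(s[:5] / s[0])  # rapid relative decay
    ```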

    The application of parallel computer technology to the dynamic analysis of suspension bridges

    This research is concerned with the application of distributed computer technology to the solution of non-linear structural dynamic problems, in particular the onset of aerodynamic instabilities in long-span suspension bridge structures, such as flutter, a catastrophic aeroelastic phenomenon. The thesis is set out in two distinct parts. Part I presents the theoretical background of the main forms of aerodynamic instability and describes in detail the main solution techniques used to solve the flutter problem. The previously written analysis package ANSUSP, developed specifically to predict numerically the onset of flutter instability, is presented, and the various solution techniques employed to predict the onset of flutter for the Severn Bridge are discussed. All the results presented in Part I were obtained using a 486DX2 66 MHz serial personal computer. Part II examines the main solution techniques in detail and goes on to apply them to a large distributed supercomputer, which allows the solution of the problem to be achieved considerably faster than is possible using the serial computer system. The solutions presented in Part II are expressed as Performance Indices (PI), the ratio of the time taken to perform a specific calculation using a serial algorithm to that using a parallel algorithm on the same computer system.

    A direct method for the numerical solution of optimization problems with time-periodic PDE constraints

    In this dissertation we develop, on the basis of the direct multiple shooting method, a new numerical method for optimal control problems (OCPs) with time-periodic partial differential equations (PDEs). The proposed method features asymptotically optimal scaling of the numerical effort in the number of spatial discretization points. It consists of a Linear Iterative Splitting Approach (LISA) within a Newton-type iteration, together with a globalization strategy based on natural level functions. We investigate the LISA-Newton method within the framework of Bock's kappa-theory and develop reliable a-posteriori kappa-estimators. We then extend the LISA-Newton method to the case of inexact Sequential Quadratic Programming (SQP) for inequality-constrained problems and investigate its local convergence behavior. In addition, we develop classical and two-grid Newton-Picard preconditioners for LISA and prove mesh-independent convergence of the classical variant on a model problem. Numerical results show that, compared with the classical variant, the two-grid variant is even more efficient on typical application problems. Furthermore, we develop a two-grid approximation of the Lagrangian Hessian, which fits well into the two-grid Newton-Picard framework and which, compared with the exact Hessian, reduces the runtime by 68% on a nonlinear benchmark problem. We further show that the quality of the fine grid determines the accuracy of the solution, while the quality of the coarse grid determines the asymptotic linear convergence rate, i.e., Bock's kappa. Reliable kappa-estimators make it possible to control coarse-grid refinement automatically for fast convergence.
    For the solution of the large quadratic programs (QPs) that arise, we choose a structure-exploiting two-stage approach. In the first stage we exploit the structures induced by the multiple shooting approach and the Newton-Picard preconditioners to reduce the large QPs to equivalent QPs whose size is independent of the number of spatial discretization points. For the second stage we develop extensions of a Parametric Active Set Method (PASM) that yield a reliable and efficient solver for the resulting, possibly nonconvex, QPs. Furthermore, we construct three illustrative counter-intuitive problems which show that convergence of a one-shot one-step optimization method is neither necessary nor sufficient for convergence of the corresponding method for the forward problem. Our analysis of three regularization approaches shows that the de-facto loss of convergence cannot be prevented even with these approaches. In addition, we have implemented the proposed methods in a computer code named MUSCOP, which provides automatic first- and second-order derivative generation for model functions and solutions of the dynamic systems, parallelization based on the multiple shooting structure, and a hybrid-language programming paradigm, in order to minimize the time needed to set up and solve new application problems. We demonstrate the applicability, reliability, and efficiency of MUSCOP, and thus of the proposed numerical methods, on a sequence of PDE OCPs of increasing difficulty, ranging from linear academic problems, through highly nonlinear academic problems of mathematical biology, up to a highly nonlinear application problem of chemical engineering in the field of preparative chromatography based on real data: the Simulated Moving Bed (SMB) process.
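    A generic toy sketch (not taken from MUSCOP; the test function and frozen-Jacobian choice are assumptions for illustration) of the idea behind the LISA-Newton analysis: replacing the exact Jacobian with a fixed approximation M turns Newton's method into a linearly convergent iteration whose contraction rate plays the role of Bock's kappa.

    ```python
    import numpy as np

    # Inexact, LISA-style Newton iteration on a 2x2 toy root-finding
    # problem.  Freezing the Jacobian at the initial guess gives linear
    # convergence with rate kappa < 1 (the exact method would be quadratic).
    def F(x):
        # F(x) = 0 at x0 = x1 = (sqrt(5) - 1) / 2
        return np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1]])

    def J(x):
        return np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])

    x = np.array([1.0, 0.0])
    M = J(x)  # fixed Jacobian approximation, never updated
    root = (np.sqrt(5.0) - 1.0) / 2.0
    errors = []
    for _ in range(30):
        x = x - np.linalg.solve(M, F(x))
        errors.append(np.linalg.norm(x - root))

    print(x)  # converges linearly to (0.618..., 0.618...)
    ```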

    Sparse Cholesky factorization by greedy conditional selection

    Dense kernel matrices resulting from pairwise evaluations of a kernel function arise naturally in machine learning and statistics. Previous work on constructing sparse approximate inverse Cholesky factors of such matrices by minimizing Kullback-Leibler divergence recovers the Vecchia approximation for Gaussian processes. These methods rely only on the geometry of the evaluation points to construct the sparsity pattern. In this work, we instead construct the sparsity pattern by leveraging a greedy selection algorithm that maximizes mutual information with target points, conditional on all points previously selected. For selecting k points out of N, the naive time complexity is O(N k^4), but by maintaining a partial Cholesky factor we reduce this to O(N k^2). Furthermore, for multiple (m) targets we achieve a time complexity of O(N k^2 + N m^2 + m^3), which is maintained in the setting of aggregated Cholesky factorization, where a selected point need not condition every target. We apply the selection algorithm to image classification and recovery of sparse Cholesky factors. By minimizing Kullback-Leibler divergence, we apply the algorithm to Cholesky factorization, Gaussian process regression, and preconditioning with the conjugate gradient, improving over k-nearest neighbors selection.
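    A naive sketch of the conditional-selection idea, in its O(N k^4) form without the partial-Cholesky speed-up; the RBF kernel, its length scale, and the 1D grid are assumptions made purely for illustration.

    ```python
    import numpy as np

    # RBF (squared-exponential) kernel; ell is an assumed length scale.
    def rbf(x, y, ell=0.5):
        return np.exp(-np.subtract.outer(x, y) ** 2 / (2.0 * ell ** 2))

    def greedy_select(x, target, k):
        """Repeatedly add the candidate that most reduces the target's
        posterior variance given the points chosen so far (equivalently,
        maximises conditional mutual information with the target)."""
        K = rbf(x, x)
        kt = rbf(x, np.array([target]))[:, 0]
        vt = rbf(np.array([target]), np.array([target]))[0, 0]
        selected = []
        for _ in range(k):
            best, best_var = None, np.inf
            for i in range(len(x)):
                if i in selected:
                    continue
                S = selected + [i]
                # Posterior variance of the target given observations at S.
                v = vt - kt[S] @ np.linalg.solve(K[np.ix_(S, S)], kt[S])
                if v < best_var:
                    best, best_var = i, v
            selected.append(best)
        return selected

    x = np.linspace(0.0, 1.0, 51)
    picks = greedy_select(x, target=0.5, k=3)
    print(picks)  # the first pick is the grid point at the target itself
    ```

    Each candidate evaluation re-solves a dense k-by-k system, which is exactly the redundancy the paper's maintained partial Cholesky factor eliminates.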

    Aalto-2 satellite attitude control system

    The attitude control system for the Aalto-2 satellite was designed and verified against the requirements of the QB50 project mission. Attitude control is achieved through passive atmospheric drag torque stabilization and active control with magnetorquers. For the detumbling phase, B-dot control is used; for the nominal stabilization phase, PD, LQR, and SDRE control methods were investigated and compared through computer simulations. Software algorithms for solving the algebraic matrix Riccati equation (Schur decomposition and Kleinman's method) and system tasks were implemented for the onboard computer, and the overall system functionality was tested with the hardware-in-the-loop method. The LQR control method showed the best performance, though none of the controllers completely met the mission needs.
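    A minimal sketch of the LQR design step the abstract mentions, on a stand-in plant (the toy double integrator below is an assumption; the actual Aalto-2 attitude model is not reproduced here). SciPy solves the algebraic Riccati equation via an invariant-subspace method, in the same spirit as the onboard Schur-decomposition solver.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy double-integrator plant standing in for the satellite dynamics.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)          # state weight
    R = np.array([[1.0]])  # control weight

    # Solve A'P + PA - P B R^{-1} B' P + Q = 0 for P, then form the
    # LQR feedback gain u = -K x.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)

    # The closed loop A - B K must be stable (eigenvalues in the left
    # half-plane); for this plant K = [1, sqrt(3)].
    eigs = np.linalg.eigvals(A - B @ K)
    print(K, eigs.real)
    ```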

    Tree tensor networks for high-dimensional quantum systems and beyond

    This thesis presents the development of a numerical simulation technique, the Tree Tensor Network (TTN), aiming to overcome current limitations in the simulation of two- and higher-dimensional quantum many-body systems. The development and application of methods based on Tensor Networks (TNs) for such systems is one of the most relevant challenges of the current decade, with the potential to advance research and technologies in a broad range of fields, from condensed matter physics, high-energy physics, and quantum chemistry to quantum computation and quantum simulation. The particular challenge for TNs is the combination of accuracy and scalability, which to date is met by other established TN techniques only for one-dimensional systems. This thesis first describes the interdisciplinary field of TNs, combining mathematical modelling, computational science, and quantum information, before illustrating the limitations of standard TN techniques in higher-dimensional cases. Following a description of the newly developed TTN, the thesis presents its application to the study of a lattice gauge theory approximating the low-energy behaviour of quantum electrodynamics, demonstrating the successful applicability of TTNs to high-dimensional gauge theories. Subsequently, a novel TN is introduced that augments the TTN for efficient simulations of high-dimensional systems. Along the way, the TTN is applied to problems from various fields, ranging from low-energy and high-energy physics to medical physics.
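    A minimal sketch of the tree-tensor-network idea (an assumption for illustration, not the thesis code): for four spin-1/2 sites, two leaf tensors feed a root tensor, and the bond dimension chi controls the accuracy of the representation.

    ```python
    import numpy as np

    # Binary tree tensor network for 4 sites: two leaves and one root.
    rng = np.random.default_rng(1)
    chi = 2
    left = rng.standard_normal((2, 2, chi))   # sites 1, 2 -> bond a
    right = rng.standard_normal((2, 2, chi))  # sites 3, 4 -> bond b
    root = rng.standard_normal((chi, chi))    # joins the two branches

    # Contract the tree into the full 4-site state (2**4 amplitudes).
    psi = np.einsum('ija,klb,ab->ijkl', left, right, root)
    psi /= np.linalg.norm(psi)  # normalise the state

    # The TTN stores 2*(2*2*chi) + chi*chi parameters instead of 2**N,
    # which is what makes larger systems tractable.
    print(psi.shape)
    ```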

    Research in Structures and Dynamics, 1984

    A symposium on advances and trends in structures and dynamics was held to communicate new insights into physical behavior and to identify trends in solution procedures for structures and dynamics problems. Pertinent areas of concern were: (1) multiprocessors, parallel computation, and database management systems; (2) advances in finite element technology; (3) interactive computing and optimization; (4) mechanics of materials; (5) structural stability; (6) dynamic response of structures; and (7) advanced computer applications.

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound (US) acquisition is widespread in the biomedical field because of its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as 2D images, 2D videos, and volumetric images, allow the physician to monitor the evolution of the patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant for improving both the physician's visual evaluation and the performance and accuracy of processing methods such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of 2D US images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition.
    While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, preservation of anatomical features, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. The design of the network architecture and the loss function improves the accuracy of the prediction of the reconstructed lines. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to 2D and 3D US images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
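    A hand-rolled sketch of low-rank SVD denoising on a synthetic image (the thesis *learns* the optimal SVD thresholds; here a fixed truncation rank and the rank-1 synthetic image are assumptions for illustration).

    ```python
    import numpy as np

    # Synthetic rank-1 "image" corrupted by additive Gaussian noise,
    # standing in for a speckle-degraded ultrasound frame.
    rng = np.random.default_rng(0)
    n = 64
    clean = np.outer(np.sin(np.linspace(0, 3, n)),
                     np.cos(np.linspace(0, 2, n)))
    noisy = clean + 0.1 * rng.standard_normal((n, n))

    # Truncate the SVD at an assumed rank: keeping only the dominant
    # singular components suppresses the broadband noise.
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    rank = 1
    denoised = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    err_noisy = np.linalg.norm(noisy - clean)
    err_denoised = np.linalg.norm(denoised - clean)
    print(err_denoised < err_noisy)  # truncation removes most of the noise
    ```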