
    Eigenmeasures and stochastic diagonalization of bilinear maps

    A new stochastic approach is presented to understand general spectral-type problems for (not necessarily linear) functions between topological spaces. To show its potential applications, we construct the theory for the case of bilinear forms acting on couples of a Banach space and its dual. Our method consists of using integral representations of bilinear maps that satisfy particular domination properties, which is shown to be equivalent to having a certain spectral structure. Thus, we develop a measure-based technique for the characterization of bilinear operators having a spectral representation, introducing the notion of eigenmeasure, which becomes the central tool of our formalism. Specific applications are provided for operators between finite- and infinite-dimensional linear spaces.

    Funding: Ministerio de Ciencia, Innovación y Universidades; Agencia Estatal de Investigación; FEDER, Grant/Award Number: MTM2016-77054-C2-1-P.

    Erdogan, E.; Sánchez Pérez, E. A. (2021). Eigenmeasures and stochastic diagonalization of bilinear maps. Mathematical Methods in the Applied Sciences, 44(6), 5021-5039. https://doi.org/10.1002/mma.7085
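    The abstract does not reproduce the representation itself; as a schematic illustration of the kind of spectral structure meant (the notation below is ours, not the authors'), a bilinear map B on a pair (X, X') admitting a diagonal integral representation with respect to a scalar measure mu would look like

% Schematic only: a diagonal integral representation of a bilinear map
% B : X x X' -> R; the measure \mu plays the role of an eigenmeasure.
\[
  B(x, x') \;=\; \int_{\Omega} \langle x, \phi(\omega) \rangle\,
                 \langle \phi'(\omega), x' \rangle \, d\mu(\omega),
\]
% generalizing the finite-dimensional diagonalization
\[
  B(x, x') \;=\; \sum_{n} \lambda_n\, x_n\, x'_n .
\]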

    An algorithm to schedule water delivery in pressurized irrigation networks

    This study presents a deterministic constrained optimisation algorithm developed for use in a pressurized irrigation network. In irrigation networks, or water networks supplied by a head tank, utility managers can fully adapt the delivery times to suit their needs. The program provides a strategy for scheduling water delivery at a constant flow rate (opening and closing of hydrants, units, and subunits) so as to minimise energy consumption. This technique improves on earlier approaches by employing a deterministic method with little computing time. The method has been tested in the University of Alicante pressurized irrigation network, where decision-makers have identified the need to reduce the energy spent on watering the University's gardens.

    This work was supported by the research project "DESENREDA" through the 2021 call "Estancias de movilidad en el extranjero José Castillejo" of the Ministerio de Universidades (CAS21/00085) and by the project "Hi-Edu Carbon", Erasmus Plus Programme, Key Action KA220 2021 (2021-1-SK01-KA220-HED-000023274).
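    As a rough illustration of the kind of deterministic scheduling problem the abstract describes, the sketch below assigns irrigation subunits to time slots at constant flow so that the peak aggregate flow, used here as a crude stand-in for energy cost, stays low. All names and the cost model are hypothetical; the paper's actual algorithm and constraints are its own.

# Hypothetical sketch: schedule subunits (each needing a fixed flow for a
# fixed number of slots) so that the peak aggregate flow stays low.
# A greedy, deterministic heuristic, not the paper's algorithm.
def schedule(subunits, n_slots):
    """subunits: list of (name, flow_rate, duration_in_slots)."""
    load = [0.0] * n_slots              # aggregate flow per time slot
    plan = {}
    # Largest demands first: a common deterministic ordering.
    for name, flow, dur in sorted(subunits, key=lambda s: -s[1] * s[2]):
        # Pick the start slot whose window currently has the lowest peak load.
        best = min(range(n_slots - dur + 1),
                   key=lambda t: max(load[t:t + dur]))
        for t in range(best, best + dur):
            load[t] += flow
        plan[name] = best
    return plan, max(load)

demo = [("subunit_A", 12.0, 3), ("subunit_B", 8.0, 2), ("subunit_C", 12.0, 3)]
plan, peak = schedule(demo, n_slots=8)
print(plan, "peak flow:", peak)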

    Constructing networks of quantum channels for state preparation

    Entangled, possibly mixed, states are an essential resource for quantum computation, communication, metrology, and the simulation of many-body systems. It is important to develop and improve preparation protocols for such states. One possible way to prepare states of interest is to design an open system that evolves only towards the desired states. A Markovian evolution of a quantum system can generally be described by a Lindbladian. Tensor networks provide a framework to construct physically relevant entangled states. In particular, matrix product density operators (MPDOs) form an important variational class of states. MPDOs generalize matrix product states to mixed states, can represent thermal states of local one-dimensional Hamiltonians at sufficiently high temperatures, describe systems that satisfy the area law of entanglement, and form the basis of powerful numerical methods. In this work we develop an algorithm that determines, for a given linear subspace of MPDOs, whether this subspace can be the stable space of some frustration-free k-local Lindbladian and, if so, outputs an appropriate Lindbladian. We then use machine learning with networks of quantum channels, also known as quantum neural networks (QNNs), to train denoising post-processing devices for quantum sources. First, we show that QNNs can be trained on imperfect devices even when part of the training data is corrupted. Second, we show that QNNs can be trained to extrapolate quantum states to, e.g., lower temperatures. Third, we show how to denoise quantum states in an unsupervised manner. We develop a novel quantum autoencoder that successfully denoises Greenberger-Horne-Zeilinger, W, Dicke, and cluster states subject to spin-flip errors, dephasing errors, and random unitary noise. Finally, we develop recurrent QNNs (RQNNs) for denoising tasks that require memory, such as combating drifts. RQNNs can be thought of as matrix product quantum channels with a quantum algorithm for training and are closely related to MPDOs. The proposed preparation and denoising protocols can be beneficial for various emergent quantum technologies and are within reach of present-day experiments.
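    The abstract takes the Lindbladian as given; for reference, the standard (GKSL) form of a Markovian generator, textbook material rather than anything specific to this thesis, reads

% Standard GKSL (Lindblad) master equation: H is the Hamiltonian,
% the L_k are jump operators, and {.,.} is the anticommutator.
\[
  \frac{d\rho}{dt} \;=\; -\,i\,[H,\rho]
  \;+\; \sum_k \Big( L_k \rho L_k^{\dagger}
        - \tfrac{1}{2}\,\bigl\{ L_k^{\dagger} L_k,\; \rho \bigr\} \Big).
\]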

    Context adaptivity for selected computational kernels with applications in optoelectronics and in phylogenetics

    Computational kernels are the crucial part of computationally intensive software, where most of the computing time is spent; hence, their design and implementation have to be accomplished carefully. Two scientific application problems, from optoelectronics and from phylogenetics, and their corresponding computational kernels motivate this thesis. In the first application problem, components for the computational solution of complex symmetric eigenvalue problems (EVPs) are discussed, arising in the simulation of waveguides in optoelectronics. LAPACK and ScaLAPACK contain highly effective reference implementations for certain numerical problems in linear algebra. With respect to EVPs, only real symmetric and complex Hermitian codes are available; efficient codes for complex symmetric (non-Hermitian) EVPs are therefore highly desirable. In the second application problem, a parallel scientific workflow for computing phylogenies is designed, implemented, and evaluated. The reconstruction of phylogenetic trees is an NP-hard problem that demands large-scale computing capabilities, so a parallel approach is necessary.

    One idea underlying this thesis is to investigate the interaction between the context of the kernels considered and their efficiency. The context of a computational kernel comprises model aspects (for instance, the structure of the input data), software aspects (for instance, computational libraries), hardware aspects (for instance, available RAM and supported precision), and certain requirements or constraints, e.g. on runtime, memory usage, or required accuracy. The concept of context adaptivity is demonstrated for selected problems in computational science. The method proposed here is a meta-algorithm that utilizes aspects of the context to achieve optimal performance with respect to the applied metric. It is important to consider the context because requirements may be traded for each other, resulting in higher performance; for instance, when a low accuracy suffices, a faster algorithmic approach may be favored over an established but slower method. With respect to EVPs, prototypical codes targeted specifically at complex symmetric EVPs aim at trading accuracy for speed; the innovation is evidenced by new algorithmic approaches exploiting the algebraic structure. Concerning the computation of phylogenetic trees, the mapping of a scientific workflow onto a campus grid system is demonstrated. The adaptive implementation of the workflow features concurrent instances of a computational kernel on a distributed system; here, adaptivity refers to the ability of the workflow to vary the computational load in terms of available computing resources, available time, and the quality of the reconstructed phylogenetic trees.

    Context adaptivity is discussed by means of computational problems from optoelectronics and from phylogenetics. For optoelectronics, a family of implemented algorithms aims at solving generalized complex symmetric EVPs. Our alternative approach, which exploits the structural symmetry, trades accuracy for runtime: it is faster but (usually) less accurate than the conventional approach. In addition to a complete sequential solver, a parallel variant is discussed and partly evaluated on a cluster utilizing up to 1024 CPU cores. The achieved runtimes evidence the superiority of our approach, although further investigations on improving accuracy are suggested. For phylogenetics, we show that phylogenetic tree reconstruction can be parallelized efficiently on a Condor-based campus grid infrastructure. The parallel scientific workflow features a low parallel overhead, resulting in excellent efficiency.
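    A minimal sketch of the meta-algorithm idea, with entirely hypothetical names: given the context (here, just the accuracy the caller needs), pick among candidate solvers, preferring a cheaper low-accuracy path when loose tolerances permit. Reduced precision stands in for the thesis's structure-exploiting fast path, which is its own algorithm.

import numpy as np
import scipy.linalg as sla

# Hypothetical sketch of context-driven solver selection: trade accuracy
# for speed by dropping to single precision when the caller's tolerance
# allows it. Only a stand-in for "faster but less accurate".
def eigvals_context_adaptive(A, rtol):
    if rtol >= 1e-5:
        # Loose tolerance: single precision roughly halves memory traffic.
        return sla.eigvals(A.astype(np.complex64)).astype(np.complex128)
    # Tight tolerance: established double-precision solver.
    return sla.eigvals(A)

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200)) + 1j * rng.standard_normal((200, 200))
A = B + B.T                    # complex symmetric: A == A.T, not Hermitian
fast = eigvals_context_adaptive(A, rtol=1e-4)
accurate = eigvals_context_adaptive(A, rtol=1e-12)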

    Computational Tools for Large-Scale Linear Systems

    While the theoretical analysis of linear dynamical systems with finite state spaces is a mature topic, in situations where the underlying model has a large number of dimensions, modelers must turn to computational tools to better visualize and analyze the dynamic behavior of interest. In these situations, we are confronted with the Curse of Dimensionality: computational and storage complexity grow exponentially in the number of dimensions. This doctoral project focuses on two main classes of large-scale linear systems which arise in systems biology. The Chemical Master Equation (CME) is the forward Kolmogorov equation describing the evolution of the probability mass function of a countable-state-space Markov process. Each state of the CME is labelled with an ordered S-tuple corresponding to one configuration of a well-mixed chemical system, where S is the number of distinct chemical species of interest. Even in cases where one considers only a projection of the CME onto a finite subset of the states, one must still contend with the Curse of Dimensionality: the computational complexity grows exponentially in the number of chemical species. This dissertation describes a computational methodology for efficient solution of the CME which, in the best cases, scales linearly in the number of chemical species. The second main class of high-dimensional problems requiring computational tools is coupled linear reaction-diffusion equations. For this class of models, we focus primarily on the computation of certain high-dimensional matrices which describe, in a quantitative sense, the input-to-state and state-to-output relationships. We describe algorithms for extracting the useful information stored in these matrices and use this information to efficiently compute both reduced-order models and open-loop control laws for steering the full system. A key feature of this approach is that the method is completely simulation- and experiment-free; in fact, in our numerical experiments, the computation of a reduced model or open-loop control law is an order of magnitude faster on a laptop than simulation of the full system on a 32-core node of a high-performance cluster. In both projects, the enabling computational technology is the recently proposed Tensor Train (TT) structured low-parametric representation of high-dimensional data. The TT format effectively exploits the low-rank structure of the "unfolding matrices" for compression and computational efficiency. Formally, the computational complexity of basic TT arithmetic scales linearly in the number of dimensions, potentially circumventing the Curse of Dimensionality. To demonstrate the effectiveness of this approach, we performed numerous numerical experiments whose results are reported here.
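    As a reference point for the TT idea the abstract leans on: the TT-SVD algorithm compresses a d-dimensional array by repeatedly reshaping it into an "unfolding matrix" and truncating an SVD, one dimension at a time, so that storage scales linearly in d when the TT ranks stay small. A minimal sketch (ours, not the dissertation's code):

import numpy as np

# Minimal TT-SVD sketch: decompose a d-dimensional array into TT cores
# by successive truncated SVDs of the unfolding matrices.
def tt_svd(tensor, eps=1e-10):
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k, n_k in enumerate(shape[:-1]):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))      # truncation rank
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

# Round-trip check on a small random 4-dimensional tensor.
T = np.random.rand(4, 5, 6, 7)
cores = tt_svd(T)
rec = cores[0]
for core in cores[1:]:
    rec = np.tensordot(rec, core, axes=([-1], [0]))
print(np.allclose(rec.reshape(T.shape), T))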

    Systematic Antenna Design Using the Theory of Characteristic Modes

    The day Faraday moved a magnet in and out of a wire loop and detected the time-varying magnetic field, the first wireless transmitter/receiver system was created and the world was changed forever. However, it took almost fifty years for Heinrich Hertz to use Maxwell's equations and Faraday's insights, during his professorship at Karlsruhe, to create the first electromagnetic wireless communication system, using a spark-gap dipole transmitter and a loop-antenna receiver. This simple system utilized the first non-optical human-designed electromagnetic antenna, and since then businesses, researchers, doctoral candidates, and hobbyists have been trying to determine the best way to design antennas for a variety of different applications. In almost all situations, antennas are designed using intuition, closed-form equations, or information obtained from Maxwell's equations and a set of boundary conditions. This thesis combines these three design techniques into one by using the Theory of Characteristic Modes (TCM). This theory allows physics-based electromagnetic insights about an object to be obtained and combined with closed-form equations for all real media, limiting the overall design space and allowing an engineer's intuition to be focused on the areas of greatest importance to antenna performance. TCM is a unique amalgamation of many different theoretical concepts, including Maxwell's equations, Sturm-Liouville eigenvalue decomposition, Poynting's theorem, and, in practical applications, the Method of Moments (MoM). TCM was first developed by Garbacz in 1965 and then popularized by Harrington and Mautz in 1971. Many great researchers have put years of effort and hard work into advancing and popularizing TCM, and this thesis would not exist without the advances provided by these great women and men. The research that led to the initial idea of this thesis was based on the development of multiple-input multiple-output (MIMO) antennas for handheld devices, as this design environment is challenging due to the electrical size of the device and the limited real estate available. As TCM is uniquely suited to analyzing electrically compact systems which require orthogonal modes of radiation, it was a perfect candidate for studying how the theory can be better applied to this type of application. The research contained within this thesis, as well as the articles published during this doctoral study, analyzes the practical and theoretical applications of TCM, presents a set of theoretical proofs which explain some of the shortcomings of MoM-based TCM analysis of dielectric or magnetic objects, and provides solutions to many of these problems. Furthermore, a unique antenna design methodology was developed which allows electrically compact MIMO terminal antennas to be designed in a fundamentally new way. TCM provides a set of excitation-free attributes, as well as a set of orthogonal surface currents and far fields, determined only by the object's shape and material. These orthogonal attributes can be used to determine how the object's characteristic modes (CMs) relate to a set of closed-form equations. Using the knowledge gained from each CM, and from how the CMs link to these equations, small alterations to the object can be defined and used to adapt and feed the object, creating single or multiple optimized antennas from it.
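    For readers unfamiliar with the machinery: in MoM-based TCM, the complex impedance matrix Z = R + jX produced by the Method of Moments is split into its real and imaginary parts, and the characteristic modes solve the generalized eigenvalue problem X J_n = lambda_n R J_n (Harrington and Mautz). A minimal sketch, with a random stand-in for the impedance matrix an MoM solver would provide:

import numpy as np
from scipy.linalg import eigh

# Characteristic modes from a MoM impedance matrix Z = R + jX:
# solve X J_n = lambda_n R J_n; |lambda_n| near 0 marks resonance.
def characteristic_modes(Z):
    R, X = Z.real, Z.imag          # both symmetric for reciprocal media
    lam, J = eigh(X, R)            # generalized symmetric-definite EVP
    order = np.argsort(np.abs(lam))
    return lam[order], J[:, order]

# Stand-in Z: symmetric, with positive-definite real part (radiating).
rng = np.random.default_rng(1)
S = rng.standard_normal((40, 40))
T = rng.standard_normal((40, 40))
Z = (S @ S.T + 40 * np.eye(40)) + 1j * (T + T.T)
lam, J = characteristic_modes(Z)
print("most significant eigenvalues:", lam[:3])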

    Light-triggered unidirectional molecular rotors: theoretical investigations on conformational dynamics and laser control

    Two light-triggered molecular motors based on chiral overcrowded alkenes have been studied in the electronic ground state: a second-generation motor (2) and a redesigned motor (3). A semiempirical Monte Carlo type of conformational search has been implemented to find local minima in the ground-state potential energy surfaces (PESs) of 2 and 3, which have then been reoptimized by ab initio calculations. While in 3 only the four isomers of the rotary cycle are found, new isomers have been found in the case of 2, leading to different reaction pathways for the thermal helix inversion. Transition states (TSs) for all possible thermal conversions have also been computed. The obtained activation energies E_a are in excellent agreement with those reported in the literature. The simple model BCH (the core unit of many motors) has been studied from a quantum chemical and quantum dynamical point of view. The controversial nature of BCH's electronic transitions has been investigated using high-level ab initio multiconfigurational and perturbational methods, including the development of a basis set specific to the problem at hand. The first two excited states of B_u symmetry ((pi,3s)-Rydberg and (pi,pi*), respectively) are resolved at the MS-CASPT2 level of theory, providing vertical transition energies and oscillator strengths matching the experimental values. In addition, the origin of the (pi,pi*) band is computed, yielding an energy value well below the Franck-Condon value of the (pi,3s)-Rydberg maximum, explaining this band's unexpected intensity. Finally, a one-dimensional PES along BCH's torsional coordinate has been computed at the MS-CASPT2 level of theory, and quantum dynamical simulations have been carried out. These have focused on obtaining control laser fields that are able to trigger unidirectionality even in the symmetric PES (in contrast to systems 2 and 3). Optimal control strategies as well as the intuitive IR+UV scheme both succeeded in achieving sustained, unidirectional torsional motion of BCH in the excited state.
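    The abstract does not spell out the propagation scheme; as a generic illustration of quantum dynamics on a one-dimensional torsional coordinate, here is a split-operator propagation sketch on a periodic grid, with all parameters illustrative and in arbitrary units (the thesis's PES, laser coupling, and constants are not reproduced):

import numpy as np

# Generic split-operator propagation of a 1D wavepacket on a periodic
# torsional coordinate (hbar = 1, arbitrary units). Illustrative only.
N = 256
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
dtheta = theta[1] - theta[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dtheta)    # conjugate momentum grid

I_mom = 1.0                                    # moment of inertia
V = 0.5 * (1 - np.cos(2 * theta))              # symmetric double-well PES
dt = 0.01

psi = np.exp(-((theta - np.pi / 2) ** 2) / 0.1).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dtheta)

expV = np.exp(-0.5j * V * dt)                  # half-step in the potential
expT = np.exp(-0.5j * k ** 2 / I_mom * dt)     # full step in kinetic energy

for _ in range(1000):                          # Strang splitting
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print("norm:", np.sum(np.abs(psi) ** 2) * dtheta)   # conserved, ~1.0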

    Computation of minimal covariants bases for 2D coupled constitutive laws

    We produce minimal integrity bases for both the isotropic and the hemitropic invariant algebras (and more generally covariant algebras) of the most common bidimensional constitutive tensors and (possibly coupled) laws, including the piezoelectricity law, photoelasticity, the Eshelby and elasticity tensors, the complex viscoelasticity tensor, Hill elasto-plasticity, and (totally symmetric) fabric tensors up to twelfth order. The concept of a covariant, which extends that of an invariant, is explained and motivated; it appears to be much more useful for applications. All the tools required to obtain these results are explained in detail, and a cleaning algorithm is formulated to achieve minimality in the isotropic case. The invariants and covariants are first expressed in complex form and then in tensorial form, thanks to explicit translation formulas which are provided. The proposed approach also applies to any n-tuple of bidimensional constitutive tensors.
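    For intuition about the complex framework mentioned above (illustrative notation, not necessarily the paper's): a 2D harmonic component of order n can be encoded as a single complex number that picks up a phase under rotation, so invariants are exactly the monomials of total rotational weight zero.

% Under a rotation by \theta, an order-n harmonic component transforms as
\[
  z_n \;\longmapsto\; e^{\,i n \theta}\, z_n ,
\]
% so a monomial z_{n_1} \cdots z_{n_p}\, \bar{z}_{m_1} \cdots \bar{z}_{m_q}
% is invariant precisely when its total weight vanishes:
\[
  (n_1 + \cdots + n_p) \;-\; (m_1 + \cdots + m_q) \;=\; 0 .
\]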

    Solving Large Dense Symmetric Eigenproblem on Hybrid Architectures

    The dense symmetric eigenproblem is one of the most significant problems in numerical linear algebra, arising in research fields such as bioinformatics, computational chemistry, and meteorology. In past years, the problems arising in these fields have become bigger than ever, resulting in growing demands on both computational power and storage capacity. In such problems, the eigenproblem becomes the main computational bottleneck, and its solution requires extremely high computational power. Modern computing architectures that can meet these growing demands combine the power of traditional multi-core processors with general-purpose GPUs and are called hybrid systems. These systems exhibit very high performance when the data fits into the GPU memory; however, if the volume of the data exceeds the total GPU memory, i.e. the data is out-of-core from the GPU's perspective, performance decreases rapidly. This dissertation focuses on the development of algorithms that solve dense symmetric eigenproblems on hybrid GPU-based architectures. In particular, it aims at developing eigensolvers that exhibit very high performance even if the problem is out-of-core for the GPU. The developed out-of-core eigensolvers are evaluated and compared on real problems that arise in the simulation of molecular motions. In such problems the data, usually too large to fit into the GPU memory, are stored in the main memory and copied to the GPU memory in pieces. That approach results in a performance drop due to the slow interconnect and high memory latency. To overcome this problem, an approach is presented that applies a blocking strategy and redesigns the existing eigensolvers in order to decrease the volume of data transferred and the number of memory transfers. This approach designs and implements a set of block-oriented, communication-avoiding BLAS routines that overlap data transfers with computation. Next, these routines are applied to speed up the following eigensolvers: the solver based on the multi-stage reduction to tridiagonal form, the Krylov-subspace-based method, and the spectral divide-and-conquer method. Although the out-of-core BLAS routines significantly improve the performance of these three eigensolvers, a careful redesign is required to tackle the solution of large eigenproblems on hybrid CPU-GPU systems. In the out-of-core multi-stage reduction approach, the factor that most influences performance is the bandwidth of the obtained band matrix. The Krylov-subspace-based method, although based on memory-bound BLAS-2 operations, is the fastest method if only a small subset of the eigenpairs is required. Finally, the spectral divide-and-conquer algorithm, which exhibits significantly higher arithmetic cost than the other two eigensolvers, achieves extremely high performance since it can be performed entirely in terms of compute-bound BLAS-3 operations; furthermore, its high arithmetic cost is reduced further by exploiting the special structure of the matrix. The results presented in the dissertation show that the three out-of-core eigensolvers, for a set of specific macromolecular problems, significantly outperform the multi-core variants and attain high flop rates even when the data does not fit into the GPU memory. This proves that it is possible to solve large eigenproblems on modest computing systems equipped with a single GPU.
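    A minimal sketch of the overlap idea behind such out-of-core routines, with tiling and names entirely ours (the dissertation's BLAS routines are more elaborate): while the GPU multiplies the current tile, the next tile is staged host-to-device on a second CuPy stream. For true asynchrony the host blocks would additionally need to be in pinned memory.

import numpy as np
import cupy as cp

# Sketch of an out-of-core GEMM C = A @ B: A is streamed in row-block
# tiles; the copy of tile k+1 overlaps the multiply of tile k.
def ooc_matmul(A, B, tile=1024):
    m, n = A.shape[0], B.shape[1]
    C = np.empty((m, n), dtype=A.dtype)
    dB = cp.asarray(B)                   # B assumed to fit on the GPU
    copy_s, comp_s = cp.cuda.Stream(), cp.cuda.Stream()
    bufs = [cp.empty((tile, A.shape[1]), dtype=A.dtype) for _ in range(2)]
    starts = list(range(0, m, tile))
    first = min(tile, m)
    bufs[0][:first].set(A[:first])       # prefetch the first tile
    for i, r in enumerate(starts):
        cur = bufs[i % 2]
        rows = min(tile, m - r)
        if i + 1 < len(starts):          # stage the next tile in parallel
            r2 = starts[i + 1]
            rows2 = min(tile, m - r2)
            bufs[(i + 1) % 2][:rows2].set(A[r2:r2 + rows2], stream=copy_s)
        with comp_s:
            dC = cur[:rows] @ dB         # compute on the current tile
        comp_s.synchronize()
        C[r:r + rows] = cp.asnumpy(dC)
        copy_s.synchronize()             # next tile ready before reuse
    return C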

    Numerical study and optimization of photonic crystals

    Photonic crystals (PhCs) are engineered nanostructures that enable extraordinary control over the flow of light. These structures can be fabricated out of common semiconductors, are compatible with existing industrial fabrication technologies, and are expected to play a major role in future devices integrating photonic circuits, e.g. for telecommunications or in future quantum technologies. In this thesis, we explore a wide range of properties of the most common class of PhCs, formed by a lattice of circular holes in a semiconductor slab. To compute the electromagnetic eigenmodes of a given structure, we use fast mode-expansion methods, which are presented in detail here. The first application consists in a detailed analysis of the effects of fabrication disorder on PhC structures. It is by now well known that disorder is in many cases the limiting factor in device performance. Here, we shed more light on its effects by statistically comparing various designs for PhC cavities with a high quality factor, and by analyzing the effect of irregular hole shapes on a PhC waveguide. The second application stems from the fact that PhCs are tremendously flexible, with features determined by a large number of controllable parameters. This is on the one hand a great advantage, but on the other a great challenge when it comes to finding the optimal device for a given application. To face this challenge, we have developed an automated optimization procedure, using a global optimization algorithm to explore an insightfully selected parameter space. This was applied to various devices of interest and consistently resulted in a vast improvement of their figures of merit. Specifically, we demonstrate various high-Q cavity designs and a slow-light coupled-cavity waveguide with extraordinary features. We also present several experimental confirmations of the validity of our designs. Finally, we discuss two domains in which PhCs (and our optimization procedure) can be expected to play a major role. The first is the integration of quantum dots with the goal of long-range, photon-assisted dot-dot coupling, with implications for quantum information processing. We develop a semi-classical formalism and analyze the magnitude and attenuation length of this coupling in large PhC cavities, as well as in a waveguide. The second outlook is in the field of topological photonics. We describe an array of resonators in which an effective gauge field for photons can be induced through an appropriate time-periodic modulation of the resonant frequencies. This results in a quantum Hall effect for light, and, in a finite system, unidirectional edge states immune to fabrication disorder are predicted. We discuss the possibilities for a practical implementation, for which a PhC slab is among the most promising platforms.
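    A minimal sketch of the automated design loop described above, with a hypothetical figure of merit standing in for the mode-expansion solver: a global optimizer searches a few hole-displacement parameters that maximize the cavity quality factor Q.

import numpy as np
from scipy.optimize import differential_evolution

# Sketch of global PhC cavity optimization. The real objective would call
# an electromagnetic mode solver; compute_Q is a smooth hypothetical
# surrogate so that the sketch runs.
def compute_Q(shifts):
    # NOT a physical model: a single good basin around shifts ~ 0.12.
    return 1e6 * np.exp(-np.sum((shifts - 0.12) ** 2) / 0.01)

def objective(shifts):
    return -compute_Q(shifts)            # minimize negative Q

bounds = [(-0.3, 0.3)] * 4               # displacements of 4 holes (units of a)
result = differential_evolution(objective, bounds, seed=0, tol=1e-8)
print("best Q:", -result.fun, "at shifts:", result.x)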