
    Classical and quantum algorithms for scaling problems

    This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature. For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and computing the sum of a list of numbers. We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
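
    For context on the commutative baseline this abstract compares against: matrix scaling asks for diagonal matrices that rescale a positive matrix to prescribed row and column sums, and the classical workhorse is Sinkhorn's alternating-normalization iteration. Below is a minimal sketch for a positive square matrix with unit target marginals; the function name, tolerance, and iteration cap are illustrative, and this is the textbook method rather than the quantum or IPM algorithms described in the thesis.

        import numpy as np

        def sinkhorn_scale(A, tol=1e-9, max_iter=10_000):
            """Find positive vectors x, y such that diag(x) @ A @ diag(y) is
            (approximately) doubly stochastic, for a positive n-by-n matrix A."""
            n = A.shape[0]
            x, y = np.ones(n), np.ones(n)
            for _ in range(max_iter):
                x = 1.0 / (A @ y)         # normalize row sums to 1
                y = 1.0 / (A.T @ x)       # normalize column sums to 1
                B = A * np.outer(x, y)    # current scaled matrix
                err = max(abs(B.sum(1) - 1).max(), abs(B.sum(0) - 1).max())
                if err < tol:
                    break
            return x, y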

    Towards Neuromorphic Gradient Descent: Exact Gradients and Low-Variance Online Estimates for Spiking Neural Networks

    Spiking Neural Networks (SNNs) are biologically plausible models that can run on low-power, non-von Neumann neuromorphic hardware, positioning them as promising alternatives to conventional Deep Neural Networks (DNNs) for energy-efficient edge computing and robotics. Over the past few years, the Gradient Descent (GD) and Error Backpropagation (BP) algorithms used in DNNs have inspired various training methods for SNNs. However, the non-local and reverse nature of BP, combined with the inherent non-differentiability of spikes, represents a fundamental obstacle to computing gradients with SNNs directly on neuromorphic hardware. Therefore, novel approaches are required to overcome the limitations of GD and BP and enable online gradient computation on neuromorphic hardware. In this thesis, I address the limitations of GD and BP with SNNs by proposing three algorithms. First, I extend a recent method that computes exact gradients with temporally coded SNNs by relaxing the firing constraint of temporal coding and allowing multiple spikes per neuron. My proposed method generalizes the computation of exact gradients with SNNs and improves the tradeoffs between performance and various other aspects of spiking neurons. Next, I introduce a novel alternative to BP that computes low-variance gradient estimates in a local and online manner. Compared to other alternatives to BP, the proposed method demonstrates an improved convergence rate and increased performance with DNNs. Finally, I combine these two methods and propose an algorithm that estimates gradients with SNNs in a manner that is compatible with the constraints of neuromorphic hardware. My empirical results demonstrate the effectiveness of the resulting algorithm in training SNNs without performing BP.
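
    To make the "exact gradients with temporally coded SNNs" idea concrete: for simple integrate-and-fire models, the first-spike time itself can have a closed form that is differentiable in the weights, so no surrogate for the spike non-linearity is needed. The toy single-neuron sketch below assumes a non-leaky neuron with step input currents, V(t) = sum over t_i < t of w_i * (t - t_i); it only illustrates the principle under these stated assumptions and is not the thesis's algorithm.

        import numpy as np

        def spike_time_and_grad(w, t_in, theta=1.0):
            """Exact first-spike time t_s and gradient d t_s / d w for a non-leaky
            integrate-and-fire neuron with V(t) = sum_{t_i < t} w_i * (t - t_i).
            On the causal input set C, V(t_s) = theta gives the closed form
            t_s = (theta + sum_C w_i t_i) / sum_C w_i,
            hence d t_s / d w_j = (t_j - t_s) / sum_C w_i for j in C."""
            t = np.asarray(t_in, dtype=float)
            order = np.argsort(t)
            w_sum = wt_sum = 0.0
            for k, idx in enumerate(order):
                w_sum += w[idx]
                wt_sum += w[idx] * t[idx]
                if w_sum <= 0.0:                 # membrane not rising yet
                    continue
                t_s = (theta + wt_sum) / w_sum
                t_next = t[order[k + 1]] if k + 1 < len(order) else np.inf
                if t[idx] <= t_s <= t_next:      # fires before the next input arrives
                    grad = np.zeros_like(w, dtype=float)
                    causal = order[:k + 1]
                    grad[causal] = (t[causal] - t_s) / w_sum
                    return t_s, grad
            return np.inf, np.zeros_like(w, dtype=float)   # neuron never fires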

    Conversations on Empathy

    In the aftermath of a global pandemic, amidst new and ongoing wars, genocide, inequality, and staggering ecological collapse, some in the public and political arena have argued that we are in desperate need of greater empathy — be this with our neighbours, refugees, war victims, the vulnerable or disappearing animal and plant species. This interdisciplinary volume asks the crucial questions: How does a better understanding of empathy contribute, if at all, to our understanding of others? How is it implicated in the ways we perceive, understand and constitute others as subjects? Conversations on Empathy examines how empathy might be enacted and experienced either as a way to highlight forms of otherness or, instead, to overcome what might otherwise appear to be irreducible differences. It explores the ways in which empathy enables us to understand, imagine and create sameness and otherness in our everyday intersubjective encounters, focusing on a varied range of "radical others" – others who are perceived as being dramatically different from oneself. With a focus on the importance of empathy for understanding difference, the book contends that the role of empathy is critical, now more than ever, for thinking about local and global challenges of interconnectedness, care and justice.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Phase field modeling of hydraulic fracture propagation in transversely isotropic poroelastic media

    This paper proposes a phase field model (PFM) for describing hydraulic fracture propagation in transversely isotropic media. The coupling between the fluid flow and displacement fields is established according to the classical Biot poroelasticity theory, while the phase field model characterizes the fracture behavior. The proposed method uses a transversely isotropic constitutive relationship between stress and strain as well as anisotropy in fracture toughness and permeability. An additional pressure-related term and an anisotropic fracture toughness tensor are added to the energy functional, from which the governing equations in strong form are obtained via the variational approach. In addition, the phase field is used to construct indicator functions that transition the fluid properties from the intact domain to the fully fractured one. Moreover, the proposed PFM is implemented using the finite element method, where a staggered scheme is applied in which the displacement and fluid pressure are solved monolithically within each staggered step. Two examples are then tested to verify the proposed PFM: a transversely isotropic single-edge-notched square plate subjected to tension and an isotropic porous medium subjected to internal fluid pressure. Finally, numerical examples of 2D and 3D transversely isotropic media with one or two interior notches subjected to internal fluid pressure are presented to further demonstrate the capability of the proposed PFM in 2D and 3D problems.
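
    For orientation, a commonly used starting point for pressurized phase-field fracture is an energy functional of the following form, written here in LaTeX for a generic isotropic AT2-type model with scalar toughness G_c; this is only a sketch of the standard setting, and the paper's functional additionally involves the anisotropic toughness tensor and the transversely isotropic elasticity described above:

        % u: displacement, d in [0,1]: phase field (d = 1 fully broken),
        % psi_e: elastic energy density, G_c: fracture toughness, ell: length scale,
        % p: fluid pressure; the last term is one common way to include the work
        % of the fluid pressure on the crack faces.
        E(\mathbf{u}, d) =
            \int_\Omega (1-d)^2\, \psi_e\bigl(\boldsymbol{\varepsilon}(\mathbf{u})\bigr)\,\mathrm{d}\Omega
          + G_c \int_\Omega \Bigl( \frac{d^2}{2\ell} + \frac{\ell}{2}\,\lvert\nabla d\rvert^2 \Bigr)\,\mathrm{d}\Omega
          + \int_\Omega p\,\mathbf{u}\cdot\nabla d\,\mathrm{d}\Omega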

    Data-driven exact model order reduction for computational multiscale methods to predict high-cycle fatigue-damage in short-fiber reinforced plastics

    Motivated by the development of more energy-efficient machines and means of transport, lightweight engineering has gained enormously in importance in recent years. An important class of lightweight materials are fiber-reinforced plastics. This thesis focuses on the development and provision of material models for predicting the fatigue behavior of short glass-fiber reinforced thermoplastics. These materials differ from thermoset-based materials in that they can be remelted and are therefore easier to recycle. Moreover, in contrast to long fibers, short glass fibers allow a simple and time-efficient manufacturing of complex components. Fatigue is an important failure mechanism in such components, in particular for parts that are exposed to vibration-like loads, e.g., in vehicles. Due to the inherent anisotropy of the material, however, the experimental characterization and prediction of this failure mechanism are extremely time-consuming and thus pose a major challenge in the development process and for the broader adoption of such components. The development of complementary simulation methods is therefore of great interest. In this thesis, methods for predicting fatigue damage in short glass-fiber reinforced materials are developed within a multiscale framework. The multiscale models considered here make it possible to predict complex anisotropic effects of the composite solely from the experimentally characterized material parameters of the constituents, i.e., fiber and matrix, which drastically reduces the experimental effort. To this end, material models for the constituents of the composite are developed first. Using FFT-based computational homogenization, the material behavior of the composite is then predicted for various microstructures and load cases. The precomputed load cases at the microstructure level are transferred to the macroscale with data-driven methods. This enables an efficient simulation of components within a few hours, whereas a corresponding computation that geometrically resolves every individual fiber of the microstructure would take many years on today's computers. Different damage models are investigated for the matrix, and their advantages and disadvantages are analyzed. The microstructure simulations provide insight into the influence of statistical parameters such as fiber length and fiber volume fraction on the composite behavior. A new exact model order reduction method is developed and applied to simulate fatigue damage behavior at the component level. Furthermore, model extensions accounting for the R-ratio and for viscoelastic effects in the evolution of fatigue damage are developed and validated against experimental results. After precomputations on a small set of microstructures and load cases, the resulting simulation framework allows an efficient macroscale simulation of a component. Effects such as viscoelasticity and R-ratio dependence can be included or neglected depending on the desired modeling depth, so that the most efficient model capturing all relevant effects can always be used.
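
    To illustrate the FFT-based computational homogenization mentioned above: the classic Moulinec-Suquet basic scheme iterates a fixed point that alternates a local constitutive evaluation in real space with a Green-operator projection in Fourier space. The sketch below solves a periodic scalar conductivity analogue on a 2D pixel grid; all names and parameters are illustrative, and the thesis's framework (elasticity, damage evolution, model order reduction) is considerably more involved.

        import numpy as np

        def fft_homogenize_conductivity(k, E=(1.0, 0.0), tol=1e-8, max_iter=500):
            """Moulinec-Suquet basic scheme, scalar analogue: given a periodic
            conductivity field k (N-by-N pixels) and a prescribed mean gradient E,
            iterate for the local gradient field e and return the effective flux."""
            N = k.shape[0]
            k0 = 0.5 * (k.min() + k.max())             # reference medium
            xi1, xi2 = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
            xi_sq = xi1**2 + xi2**2
            xi_sq[0, 0] = 1.0                          # zero mode carries the mean, handled below
            E_field = np.stack([np.full((N, N), E[0]), np.full((N, N), E[1])])
            e = E_field.copy()
            for _ in range(max_iter):
                tau = (k - k0) * e                     # polarization, in real space
                tau_h = np.fft.fft2(tau, axes=(1, 2))
                # Green operator of the reference medium:
                # Gamma0(xi) tau = xi (xi . tau) / (k0 |xi|^2), zero at xi = 0
                xi_dot = xi1 * tau_h[0] + xi2 * tau_h[1]
                gamma_tau = np.real(np.fft.ifft2(
                    np.stack([xi1 * xi_dot, xi2 * xi_dot]) / (k0 * xi_sq), axes=(1, 2)))
                e_new = E_field - gamma_tau            # fixed-point update
                if np.max(np.abs(e_new - e)) < tol:
                    e = e_new
                    break
                e = e_new
            return (k * e).mean(axis=(1, 2))           # effective flux for mean gradient E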

    The Bit Complexity of Efficient Continuous Optimization

    We analyze the bit complexity of efficient algorithms for fundamental optimization problems, such as linear regression, p-norm regression, and linear programming (LP). State-of-the-art algorithms are iterative, and in terms of the number of arithmetic operations, they match the current time complexity of multiplying two n-by-n matrices (up to polylogarithmic factors). However, previous work has typically assumed infinite precision arithmetic, and due to complicated inverse maintenance techniques, the actual running times of these algorithms are unknown. To settle the running time and bit complexity of these algorithms, we demonstrate that a core common subroutine, known as inverse maintenance, is backward-stable. Additionally, we show that iterative approaches for solving constrained weighted regression problems can be accomplished with bounded-error pre-conditioners. Specifically, we prove that linear programs can be solved approximately in matrix multiplication time multiplied by polylog factors that depend on the condition number κ of the matrix and the inner and outer radius of the LP problem. p-norm regression can be solved approximately in matrix multiplication time multiplied by polylog factors in κ. Lastly, linear regression can be solved approximately in input-sparsity time multiplied by polylog factors in κ. Furthermore, we present results for achieving lower than matrix multiplication time for p-norm regression by utilizing faster solvers for sparse linear systems. (Comment: 71 pages)
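
    For intuition about the inverse-maintenance subroutine analyzed above: interior-point iterations change the system matrix by low-rank updates, so the inverse can be maintained cheaply instead of being recomputed from scratch. In exact arithmetic the rank-one case is the classical Sherman-Morrison formula, sketched below; the paper's contribution concerns the finite-precision (backward-stability) behavior of such routines, which this toy snippet does not address.

        import numpy as np

        def sherman_morrison_update(A_inv, u, v):
            """Given A^{-1}, return (A + u v^T)^{-1} in O(n^2) operations via
            (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u),
            instead of the O(n^3) cost of refactorizing."""
            Au = A_inv @ u                 # A^{-1} u
            vA = v @ A_inv                 # v^T A^{-1}
            denom = 1.0 + v @ Au
            if abs(denom) < 1e-12:         # illustrative guard: update is (near-)singular
                raise np.linalg.LinAlgError("rank-one update makes A singular")
            return A_inv - np.outer(Au, vA) / denom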

    On linear, fractional, and submodular optimization

    In this thesis, we study four fundamental problems in the theory of optimization.

    1. In fractional optimization, we are interested in minimizing a ratio of two functions over some domain. A well-known technique for solving this problem is the Newton–Dinkelbach method. We propose an accelerated version of this classical method and give a new analysis using the Bregman divergence. We show how it leads to improved or simplified results in three application areas (a sketch of the basic method follows this abstract).

    2. The diameter of a polyhedron is the maximum length of a shortest path between any two vertices. The circuit diameter is a relaxation of this notion, whereby shortest paths are not restricted to edges of the polyhedron. For a polyhedron in standard equality form with constraint matrix A, we prove an upper bound on the circuit diameter that is quadratic in the rank of A and logarithmic in the circuit imbalance measure of A. We also give circuit augmentation algorithms for linear programming with similar iteration complexity.

    3. The correlation gap of a set function is the ratio between its multilinear and concave extensions. We present improved lower bounds on the correlation gap of a matroid rank function, parametrized by the rank and girth of the matroid. We also prove that for a weighted matroid rank function, the worst correlation gap is achieved with uniform weights. Such improved lower bounds have direct applications in submodular maximization and mechanism design.

    4. The last part of this thesis concerns parity games, a problem intimately related to linear programming. A parity game is an infinite-duration game between two players on a graph. The problem of deciding the winner lies in NP and co-NP, with no known polynomial algorithm to date. Many of the fastest (quasi-polynomial) algorithms have been unified via the concept of a universal tree. We propose a strategy iteration framework which can be applied on any universal tree.
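
    As referenced in item 1 above, the classical Newton–Dinkelbach method reduces ratio minimization to a sequence of parametric subproblems. A minimal sketch follows; the oracle argmin_linear is problem-specific and assumed given, and the thesis's accelerated variant and Bregman-divergence analysis are not shown.

        def dinkelbach_minimize_ratio(f, g, argmin_linear, x0, tol=1e-10, max_iter=100):
            """Minimize f(x)/g(x) over a domain with g > 0. `argmin_linear(delta)`
            must return a minimizer of the parametric problem min_x f(x) - delta*g(x)."""
            x = x0
            for _ in range(max_iter):
                delta = f(x) / g(x)                  # current ratio estimate
                x = argmin_linear(delta)             # solve the linearized subproblem
                if f(x) - delta * g(x) >= -tol:      # optimal value ~ 0 => delta is optimal
                    return delta, x
            return f(x) / g(x), x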

    The European Experience: A Multi-Perspective History of Modern Europe, 1500–2000

    The European Experience brings together the expertise of nearly a hundred historians from eight European universities to internationalise and diversify the study of modern European history, exploring a grand sweep of time from 1500 to 2000. Offering a valuable corrective to the Anglocentric narratives of previous English-language textbooks, scholars from all over Europe have pooled their knowledge on comparative themes such as identities, cultural encounters, power and citizenship, and economic development to reflect the complexity and heterogeneous nature of the European experience. Rather than another grand narrative, the international author teams offer a multifaceted and rich perspective on the history of the continent of the past 500 years. Each major theme is dissected through three chronological sub-chapters, revealing how major social, political and historical trends manifested themselves in different European settings during the early modern (1500–1800), modern (1800–1900) and contemporary period (1900–2000). This resource is of utmost relevance to today's history students in the light of ongoing internationalisation strategies for higher education curricula, as it delivers one of the first multi-perspective and truly 'European' analyses of the continent's past. Beyond the provision of historical content, this textbook equips students with the intellectual tools to interrogate prevailing accounts of European history, and enables them to seek out additional perspectives in a bid to further enrich the discipline.

    An Interdisciplinary Survey on Origin-destination Flows Modeling: Theory and Techniques

    Origin-destination (OD) flow modeling is an extensively researched subject across multiple disciplines, such as the investigation of travel demand in transportation and spatial interaction modeling in geography. However, researchers from different fields tend to employ their own unique research paradigms and lack interdisciplinary communication, preventing the cross-fertilization of knowledge and the development of novel solutions to challenges. This article presents a systematic interdisciplinary survey that comprehensively and holistically scrutinizes OD flows, from utilizing fundamental theory to studying the mechanisms of population mobility and solving practical problems with engineering techniques, such as computational models. Specifically, regional economics, urban geography, and sociophysics are adept at employing theoretical research methods to explore the underlying mechanisms of OD flows. They have developed three influential theoretical models: the gravity model, the intervening opportunities model, and the radiation model. These models focus on examining the fundamental influences of distance, opportunities, and population on OD flows, respectively. In the meantime, fields such as transportation, urban planning, and computer science primarily focus on addressing four practical problems: OD prediction, OD construction, OD estimation, and OD forecasting. Advanced computational models, such as deep learning models, have gradually been introduced to address these problems more effectively. Finally, based on the existing research, this survey summarizes current challenges and outlines future directions for this topic. Through this survey, we aim to break down the barriers between disciplines in OD flow-related research, fostering interdisciplinary perspectives and modes of thinking. (Comment: 49 pages, 6 figures)
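
    To make the first of the three theoretical models concrete: the basic (unconstrained) gravity model posits flows proportional to the product of zone populations and decaying with distance, T_ij = K * m_i * m_j / d_ij^beta. A minimal sketch follows; the parameter values and the power-law deterrence function are illustrative, and applied work typically uses calibrated, production- or attraction-constrained variants.

        import numpy as np

        def gravity_model_flows(pop, dist, beta=2.0, K=1.0):
            """Unconstrained gravity model over a set of zones:
            T[i, j] = K * m[i] * m[j] / d[i, j]**beta."""
            m = np.asarray(pop, dtype=float)
            d = np.asarray(dist, dtype=float).copy()
            np.fill_diagonal(d, np.inf)          # no intra-zone flow in this toy setup
            return K * np.outer(m, m) / d**beta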