
    Graduate Catalog of Studies, 2023-2024


    3D magnetotelluric modeling using high-order tetrahedral Nédélec elements on massively parallel computing platforms

    We present a routine for 3D magnetotelluric (MT) modeling based on the high-order edge finite element method (HEFEM), tailored unstructured tetrahedral meshes, and high-performance computing (HPC). This implementation extends the capabilities of the PETGEM modeller, initially developed for active-source electromagnetic methods in the frequency domain. We assess the accuracy, robustness, and performance of the code using a set of reference models developed by the MT community in well-known workshops. The scale and geological properties of these 3D MT setups are challenging, making them ideal for a rigorous validation. Our numerical assessment proves that this new algorithm produces the expected solutions for arbitrary 3D MT models. Our extensive experimental results also reveal four main insights: (1) high-order discretizations in conjunction with tailored meshes can offer excellent accuracy; (2) a rigorous mesh design based on the skin-depth principle can benefit the solution of the 3D MT problem in terms of both numerical accuracy and run-time; (3) high-order polynomial basis functions achieve better speed-up and parallel efficiency than low-order basis functions on cutting-edge HPC platforms; (4) a triple-helix approach based on HEFEM, tailored meshes, and HPC can be extremely competitive for the solution of realistic and complex 3D MT models and geophysical electromagnetics in general.
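    The skin-depth mesh-design principle mentioned in insight (2) can be made concrete with a short sketch. The formula delta = sqrt(2 / (omega * mu0 * sigma)) is standard electromagnetics; the elements-per-skin-depth ratio below is an illustrative assumption, not a value taken from the paper.

```python
import math

def skin_depth_m(resistivity_ohm_m: float, frequency_hz: float) -> float:
    """Electromagnetic skin depth delta = sqrt(2 / (omega * mu0 * sigma)),
    roughly 503 * sqrt(rho / f) metres for Earth materials."""
    mu0 = 4e-7 * math.pi                      # vacuum permeability (H/m)
    omega = 2.0 * math.pi * frequency_hz      # angular frequency (rad/s)
    sigma = 1.0 / resistivity_ohm_m           # conductivity (S/m)
    return math.sqrt(2.0 / (omega * mu0 * sigma))

def target_element_size(resistivity_ohm_m, frequency_hz,
                        elements_per_skin_depth=4):
    """Hypothetical mesh-sizing heuristic: resolve each skin depth with a
    fixed number of elements (the ratio of 4 is an assumption)."""
    return skin_depth_m(resistivity_ohm_m, frequency_hz) / elements_per_skin_depth

# For a 100 ohm-m half-space at 1 Hz, delta is about 5 km, so a tailored
# mesh would use roughly kilometre-scale elements near the surface.
delta = skin_depth_m(100.0, 1.0)
```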

    A review of differentiable digital signal processing for music and speech synthesis

    The term “differentiable digital signal processing” describes a family of techniques in which loss function gradients are backpropagated through digital signal processors, facilitating their integration into neural networks. This article surveys the literature on differentiable audio signal processing, focusing on its use in music and speech synthesis. We catalogue applications to tasks including music performance rendering, sound matching, and voice transformation, discussing the motivations for and implications of the use of this methodology. This is accompanied by an overview of digital signal processing operations that have been implemented differentiably, which is further supported by a web book containing practical advice on differentiable synthesiser programming (https://intro2ddsp.github.io/). Finally, we highlight open challenges, including optimisation pathologies, robustness to real-world conditions, and design trade-offs, and discuss directions for future research.
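    The core DDSP idea, backpropagating a loss gradient through a DSP operation, can be illustrated with a toy sound-matching task. This is a hand-derived sketch, not code from the survey: a sinusoidal oscillator's amplitude is fitted to a target signal by gradient descent, with the chain-rule gradient written out explicitly instead of using an autodiff framework. The oscillator settings and learning rate are assumptions.

```python
import numpy as np

def oscillator(amp, freq, sr=16000, n=1024):
    """Render a sinusoid: the differentiable DSP module in this toy example."""
    t = np.arange(n) / sr
    return amp * np.sin(2 * np.pi * freq * t)

target = oscillator(0.7, 440.0)   # target rendered with amplitude 0.7
amp = 0.1                         # initial parameter guess

for _ in range(200):
    pred = oscillator(amp, 440.0)
    # loss = mean((pred - target)^2); d loss / d amp by the chain rule,
    # since d pred / d amp is just the unit-amplitude oscillator output.
    grad = 2.0 * np.mean((pred - target) * oscillator(1.0, 440.0))
    amp -= 0.5 * grad             # plain gradient-descent update
# amp has now converged to (approximately) the target amplitude 0.7
```

In a real DDSP system the same gradient flows automatically through far more complex processors (filters, reverbs, vocoders), which is what lets them sit inside a neural network.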


    Single-cell time-series analysis of metabolic rhythms in yeast

    The yeast metabolic cycle (YMC) is a biological rhythm in budding yeast (Saccharomyces cerevisiae). It entails oscillations in the concentrations and redox states of intracellular metabolites, oscillations in transcript levels, temporal partitioning of biosynthesis, and, in chemostats, oscillations in oxygen consumption. Most studies on the YMC have been based on chemostat experiments, and it is unclear whether YMCs arise from interactions between cells or are generated independently by each cell. This thesis aims to characterise the YMC in single cells and its response to nutrient and genetic perturbations. Specifically, I use microfluidics to trap and separate yeast cells, then record the time-dependent intensity of flavin autofluorescence, which is a component of the YMC. Single-cell microfluidics produces a large amount of time series data. The noisy, short time series produced by biological experiments restrict which computational tools are useful for analysis. I developed a method to filter time series, a machine learning model to classify whether time series are oscillatory, and an autocorrelation method to examine the periodicity of time series data. My experimental results show that yeast cells exhibit oscillations in flavin fluorescence. Specifically, I show that in high glucose conditions, cells generate flavin oscillations asynchronously within a population, and these flavin oscillations couple with the cell division cycle. I show that cells can individually reset the phase of their flavin oscillations in response to abrupt nutrient changes, independently of the cell division cycle. I also show that deletion strains generate flavin oscillations that exhibit different behaviour from dissolved oxygen oscillations in chemostat conditions.
    Finally, I use flux balance analysis to address whether proteomic constraints in cellular metabolism mean that temporal partitioning of biosynthesis is advantageous for the yeast cell, and whether such partitioning explains the timing of the metabolic cycle. My results show that under proteomic constraints, it is advantageous for the cell to synthesise biomass components sequentially, because doing so shortens the timescale of biomass synthesis. However, the advantage of sequential over parallel biosynthesis is smaller when both carbon and nitrogen sources are limiting. This thesis thus confirms autonomous generation of flavin oscillations, and suggests a model in which the YMC responds to nutrient conditions and subsequently entrains the cell division cycle. It also emphasises the possibility that subpopulations in the culture explain chemostat-based observations of the YMC. Furthermore, this thesis paves the way for using computational methods to analyse large datasets of oscillatory time series, which is useful for various fields of study beyond the YMC.
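    The autocorrelation approach to periodicity described above can be sketched in a few lines. This is an illustration of the general technique, not the thesis code: the lag of the first autocorrelation peak after the initial decay estimates the oscillation period of a noisy single-cell trace. The signal parameters and noise level are assumptions.

```python
import numpy as np

def estimate_period(signal):
    """Estimate the dominant period (in samples) of a noisy trace from its
    autocorrelation: skip the initial decay to the first zero crossing,
    then take the lag of the largest remaining peak."""
    x = signal - signal.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    acf = acf / acf[0]                                  # normalise to acf[0] = 1
    below = np.where(acf < 0)[0]
    if len(below) == 0:
        return None                                     # no clear oscillation
    start = below[0]
    return start + int(np.argmax(acf[start:]))

# Synthetic flavin-like trace: a 50-sample period plus Gaussian noise.
rng = np.random.default_rng(0)
t = np.arange(600)
trace = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(600)
period = estimate_period(trace)
```

Skipping to the first zero crossing avoids the trivial peak at lag 0, which otherwise dominates short, noisy biological time series.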

    Backpropagation Beyond the Gradient

    Automatic differentiation is a key enabler of deep learning: previously, practitioners were limited to models for which they could manually compute derivatives. Now, they can create sophisticated models with almost no restrictions and train them using first-order, i.e., gradient, information. Popular libraries like PyTorch and TensorFlow compute this gradient efficiently, automatically, and conveniently with a single line of code. Under the hood, reverse-mode automatic differentiation, or gradient backpropagation, powers the gradient computation in these libraries. Their entire design centers around gradient backpropagation. These frameworks are specialized for one task: computing the average gradient in a mini-batch. This specialization often complicates the extraction of other information like higher-order statistical moments of the gradient, or higher-order derivatives like the Hessian. It limits practitioners and researchers to methods that rely on the gradient. Arguably, this hampers the field from exploring the potential of higher-order information, and there is evidence that focusing solely on the gradient has not led to significant recent advances in deep learning optimization. To advance algorithmic research and inspire novel ideas, information beyond the batch-averaged gradient must be made available at the same level of computational efficiency, automation, and convenience. This thesis presents approaches to simplify experimentation with rich information beyond the gradient by making it more readily accessible. We present an implementation of these ideas as an extension to the backpropagation procedure in PyTorch. Using this newly accessible information, we demonstrate possible use cases by (i) showing how it can inform our understanding of neural network training by building a diagnostic tool, and (ii) enabling novel methods to efficiently compute and approximate curvature information.
    First, we extend gradient backpropagation for sequential feedforward models to Hessian backpropagation which enables computing approximate per-layer curvature. This perspective unifies recently proposed block-diagonal curvature approximations. Like gradient backpropagation, the computation of these second-order derivatives is modular, and therefore simple to automate and extend to new operations. Based on the insight that rich information beyond the gradient can be computed efficiently and at the same time, we extend the backpropagation in PyTorch with the BackPACK library. It provides efficient and convenient access to statistical moments of the gradient and approximate curvature information, often at a small overhead compared to computing just the gradient. Next, we showcase the utility of such information to better understand neural network training. We build the Cockpit library that visualizes what is happening inside the model during training through various instruments that rely on BackPACK’s statistics. We show how Cockpit provides a meaningful statistical summary report to the deep learning engineer to identify bugs in their machine learning pipeline, guide hyperparameter tuning, and study deep learning phenomena. Finally, we use BackPACK’s extended automatic differentiation functionality to develop ViViT, an approach to efficiently compute curvature information, in particular curvature noise. It uses the low-rank structure of the generalized Gauss-Newton approximation to the Hessian and addresses shortcomings in existing curvature approximations. Through monitoring curvature noise, we demonstrate how ViViT’s information helps in understanding challenges to make second-order optimization methods work in practice. This work develops new tools to experiment more easily with higher-order information in complex deep learning models.
    These tools have impacted works on Bayesian applications with Laplace approximations, out-of-distribution generalization, differential privacy, and the design of automatic differentiation systems. They constitute one important step towards developing and establishing more efficient deep learning algorithms.
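    The central observation, that per-sample gradients carry information the mini-batch average discards, can be illustrated without any framework. The NumPy sketch below is not the BackPACK API: for a linear model with squared loss it computes every individual gradient in closed form and then the first and second statistical moments that the thesis makes accessible. All shapes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))            # mini-batch of 64 inputs
w = rng.standard_normal(3)                  # model parameters
y = X @ w + 0.1 * rng.standard_normal(64)   # noisy regression targets

# Per-sample gradient of the squared loss l_i = (x_i . w - y_i)^2
# is 2 * residual_i * x_i, so all 64 gradients fit in one array.
residual = X @ w - y
per_sample_grads = 2.0 * residual[:, None] * X        # shape (64, 3)

grad_mean = per_sample_grads.mean(axis=0)             # what backprop usually returns
grad_second_moment = (per_sample_grads ** 2).mean(axis=0)
grad_variance = grad_second_moment - grad_mean ** 2   # extra diagnostic signal
```

In deep networks this closed form is unavailable, which is exactly why an extended backpropagation that emits per-sample quantities alongside the average is useful.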

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Machine learning applications in search algorithms for gravitational waves from compact binary mergers

    Gravitational waves from compact binary mergers are now routinely observed by Earth-bound detectors. These observations enable exciting new science, as they have opened a new window to the Universe. However, extracting gravitational-wave signals from the noisy detector data is a challenging problem. The most sensitive search algorithms for compact binary mergers use matched filtering, an algorithm that compares the data with a set of expected template signals. As detectors are upgraded and more sophisticated signal models become available, the number of required templates will increase, which can make some sources computationally prohibitive to search for. The computational cost is of particular concern when low-latency alerts should be issued to maximize the time for electromagnetic follow-up observations. One potential solution to reduce computational requirements that has started to be explored in the last decade is machine learning. However, different proposed deep learning searches target varying parameter spaces and use metrics that are not always comparable to existing literature. Consequently, a clear picture of the capabilities of machine learning searches has been sorely missing. In this thesis, we closely examine the sensitivity of various deep learning gravitational-wave search algorithms and introduce new methods to detect signals from binary black hole and binary neutron star mergers at previously untested statistical confidence levels. By using the sensitive distance as our core metric, we allow for a direct comparison of our algorithms to state-of-the-art search pipelines. As part of this thesis, we organized a global mock data challenge to create a benchmark for machine learning search algorithms targeting compact binaries. This way, the tools developed in this thesis are made available to the greater community by publishing them as open source software. 
    Our studies show that, depending on the parameter space, deep learning gravitational-wave search algorithms are already competitive with current production search pipelines. We also find that strategies developed for traditional searches can be effectively adapted to their machine learning counterparts. In regions where matched filtering becomes computationally expensive, available deep learning algorithms are also limited in their capability. We find reduced sensitivity to long-duration signals compared to the excellent results for short-duration binary black hole signals.
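    Matched filtering, the baseline these deep learning searches are compared against, reduces to correlating the data with a template. The sketch below is a toy illustration, not a production pipeline: a known waveform is injected into white Gaussian noise and recovered by correlation. The template shape, injection amplitude, and sampling choices are all assumptions for the sketch (real searches whiten coloured detector noise and use large template banks).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4096

# Toy "chirp-like" template: a windowed sinusoid, normalised to unit energy
# so the filter output has unit noise variance under white noise.
template = np.sin(2 * np.pi * 30 * np.arange(256) / 1024) * np.hanning(256)
template /= np.linalg.norm(template)

data = rng.standard_normal(n)         # white Gaussian noise
data[2000:2256] += 8.0 * template     # inject a signal at sample 2000

# Matched filter: slide the template across the data; the peak of the
# correlation is the detection statistic and time-of-arrival estimate.
snr = np.correlate(data, template, mode="valid")
peak = int(np.argmax(np.abs(snr)))    # recovered arrival time, near 2000
```

The computational cost the abstract mentions comes from repeating this correlation over many templates and long data stretches, which is what motivates machine learning alternatives.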

    On the path integration system of insects: there and back again

    Navigation is an essential capability of animate organisms and robots. Among animate organisms, insects are of particular interest because they are capable of a variety of navigation competencies, solving challenging problems with limited resources and thereby providing inspiration for robot navigation. Ants, bees and other insects are able to return to their nest using a navigation strategy known as path integration. During path integration, the animal maintains a running estimate of the distance and direction to its nest as it travels. This estimate, known as the 'home vector', enables the animal to return to its nest. Path integration is also the technique that sea navigators historically used to cross the open seas. To perform path integration, both sailors and insects need access to two pieces of information: their direction and their speed of motion over time. Neurons encoding heading and speed have been found to converge on a highly conserved region of the insect brain, the central complex. It is, therefore, believed that the central complex is key to the computations pertaining to path integration. However, several questions remain about the exact structure of the neuronal circuit that tracks the animal's heading, how it differs between insect species, and how speed and direction are integrated into a home vector and maintained in memory. In this thesis, I have combined behavioural, anatomical, and physiological data with computational modelling and agent simulations to tackle these questions. Analysis of the internal compass circuit of two insect species with highly divergent ecologies, the fruit fly Drosophila melanogaster and the desert locust Schistocerca gregaria, revealed that despite 400 million years of evolutionary divergence, both species share a fundamentally common internal compass circuit that keeps track of the animal's heading.
    However, subtle differences in the neuronal morphologies result in distinct circuit dynamics adapted to the ecology of each species, thereby providing insights into how neural circuits evolved to accommodate species-specific behaviours. Fast-moving insects need to update their home vector memory continuously as they move, yet they can remember it for several hours. This conjunction of fast updating and long persistence of the home vector does not directly map to current short-, mid-, and long-term memory accounts. An extensive literature review revealed a lack of available memory models that could support the home vector memory requirements. A comparison of existing behavioural data with the homing behaviour of simulated robot agents illustrated that the prevalent hypothesis, which posits that the neural substrate of the path integration memory is a bump attractor network, is contradicted by behavioural evidence. An investigation of the type of memory utilised during path integration revealed that cold-induced anaesthesia disrupts the ability of ants to return to their nest, but it does not eliminate their ability to move in the correct homing direction. Using computational modelling and simulated agents, I argue that the best explanation for this phenomenon is not two separate memories differently affected by temperature but a shared memory that encodes both direction and distance. The results presented in this thesis shed further light on the labyrinth that researchers of animal navigation have been exploring in their attempts to unravel a few more rounds of Ariadne's thread back to its origin. The findings provide valuable insights into the path integration system of insects and inspiration for future memory research, advancing path integration techniques in robotics, and developing novel neuromorphic solutions to computational problems.
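    The path integration computation itself, accumulating a home vector from heading and speed, is simple enough to sketch. This is a minimal illustration of the general strategy, not the thesis model (which is neural): Cartesian accumulation with a unit time step, with the outbound route chosen as an assumption for the example.

```python
import math

def integrate_path(steps):
    """Accumulate a home vector from (heading_radians, speed) samples taken
    at unit time steps; return distance and bearing back to the nest."""
    x = y = 0.0
    for heading, speed in steps:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    home_distance = math.hypot(x, y)
    home_direction = math.atan2(-y, -x)   # bearing pointing back to the nest
    return home_distance, home_direction

# Outbound leg: 10 steps east (heading 0), then 10 steps north (heading pi/2).
outbound = [(0.0, 1.0)] * 10 + [(math.pi / 2, 1.0)] * 10
dist, direction = integrate_path(outbound)
# The agent is 10 east and 10 north of the nest, so the home vector has
# length sqrt(200) and points south-west.
```

A biological implementation must maintain this running estimate in neural activity or synaptic state, which is precisely the memory question the thesis investigates.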