228 research outputs found

    Novel sinks for the atmospherically potent gas nitrous oxide

    Get PDF
    Nitrous oxide (N2O) is a potent climate gas whose strong warming potential and ozone-depleting properties have focused research on its sources. While undersaturation of N2O has been reported in natural waters, indicating sinks for N2O, most of the sinks found in the surface ocean and shallow freshwaters remain unaccounted for. Although a sink for N2O through biological fixation has been observed in the Pacific, the regulation of N2O fixation compared with canonical N2 fixation is unknown. Here I show that both N2O and N2 can be fixed by freshwater communities, but with distinct seasonalities and temperature dependencies. N2O fixation appears less sensitive to temperature than N2 fixation, driving a strong sink for N2O in winter. Moreover, by quantifying both N2O and N2 fixation I show that, rather than N2O first being reduced to N2 through denitrification, N2O fixation is direct and could explain the widely reported N2O sinks in natural waters. N2O can be fixed into NH4+, which could then be further oxidised to NO2- and NO3-, becoming available to the wider community. In the cold, total N2O reduction was higher and a higher proportion of the reduced N2O was conserved. In addition, with nitrification activity not detected in most of the ponds and anammox not detected in any pond, denitrification seems to be the primary process producing both N2O and N2. The availability of nitrate limits the temperature sensitivity of N2O and N2 production from denitrification, with production of both gases sensitive to changes in temperature only at high concentrations of added nitrate. At high substrate concentrations, the net production ratio of N2O to N2 from denitrification increases at lower temperatures, which could provide more N2O relative to N2 for N fixation in the cold.

    Deep Neural Networks and Tabular Data: Inference, Generation, and Explainability

    Get PDF
    Over the last decade, deep neural networks have enabled remarkable technological advancements, potentially transforming a wide range of aspects of our lives in the future. It is becoming increasingly common for deep-learning models to be used in a variety of situations in modern life, ranging from search and recommendations to financial and healthcare solutions, and the number of applications utilizing deep neural networks is still on the rise. However, recent research efforts in deep learning have focused primarily on the domains in which neural networks excel: computer vision, audio processing, and natural language processing. Data in these areas tend to be homogeneous, whereas heterogeneous tabular datasets have received relatively scant attention despite being extremely prevalent. In fact, more than half of the datasets on the Google dataset platform are structured and can be represented in tabular form. The first aim of this study is to provide a thoughtful and comprehensive analysis of the application of deep neural networks to modeling and generating tabular data. In addition, an open-source performance benchmark on tabular data is presented, in which we thoroughly compare over twenty machine- and deep-learning models on heterogeneous tabular datasets. The second contribution relates to synthetic tabular data generation. Inspired by their success in other homogeneous data modalities, deep generative models such as variational autoencoders and generative adversarial networks are commonly applied to tabular data generation. However, the use of Transformer-based large language models (which are also generative) for tabular data generation has received scant research attention. Our contribution to this literature is a novel method for generating tabular data based on this family of autoregressive generative models, which outperformed the current state-of-the-art methods for tabular data generation on multiple challenging benchmarks. Another crucial aspect of a deep-learning data system is that it must be reliable and trustworthy to gain broader acceptance in practice, especially in life-critical fields. One possible way to bring trust into a data-driven system is to use explainable machine-learning methods. However, current explanation methods often fail to provide robust explanations due to their high sensitivity to hyperparameter selection or even changes of the random seed. Furthermore, most of these methods are based on feature-wise importance, ignoring the crucial relationships between variables in a sample. The third aim of this work is to address both of these issues by offering more robust and stable explanations, as well as by taking the relationships between variables into account using a graph structure. In summary, this thesis makes significant contributions across many areas related to deep neural networks and heterogeneous tabular data, as well as to the use of explainable machine-learning methods.
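    As a rough illustration of the autoregressive idea (a minimal sketch, not the thesis's actual method; the schema, the "key is value" template, and the helper functions below are invented for the example), each table row can be serialized into a short sentence so that a causal language model can learn, and later sample, the joint distribution of the columns:

```python
# Minimal sketch of text serialization for LLM-based tabular data generation.
# The column names and the "key is value" template are illustrative assumptions.
import random

random.seed(0)
COLUMNS = ["age", "income", "city"]  # hypothetical schema

def row_to_text(row: dict) -> str:
    # Shuffling the feature order encourages order-invariance, a trick
    # used by several autoregressive tabular generators.
    items = list(row.items())
    random.shuffle(items)
    return ", ".join(f"{key} is {value}" for key, value in items)

def text_to_row(text: str) -> dict:
    # Parse a sampled sentence back into a table row.
    row = {}
    for part in text.split(", "):
        key, _, value = part.partition(" is ")
        if key in COLUMNS:
            row[key] = value
    return row

example = {"age": 42, "income": 55000, "city": "Tuebingen"}
encoded = row_to_text(example)
print(encoded)               # e.g. "income is 55000, city is Tuebingen, age is 42"
print(text_to_row(encoded))  # round-trips to a dict (values come back as strings)
```

    In a full pipeline, such encoded strings would be used to fine-tune a pretrained causal language model, and sampled completions would be parsed back into rows; the sketch shows only the serialization step.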

    Understanding the robustness difference between stochastic gradient descent and adaptive gradient methods

    Full text link
    Stochastic gradient descent (SGD) and adaptive gradient methods, such as Adam and RMSProp, have been widely used in training deep neural networks. We empirically show that while the difference in standard generalization performance between models trained using these methods is small, those trained using SGD exhibit far greater robustness under input perturbations. Notably, our investigation demonstrates the presence of irrelevant frequencies in natural datasets, alterations to which do not affect models' generalization performance. However, models trained with adaptive methods show sensitivity to these changes, suggesting that their use of irrelevant frequencies can lead to solutions sensitive to perturbations. To better understand this difference, we study the learning dynamics of gradient descent (GD) and sign gradient descent (signGD) on a synthetic dataset that mirrors natural signals. With a three-dimensional input space, the models optimized with GD and signGD have standard risks close to zero but differ in their adversarial risks. Our result shows that linear models' robustness to ℓ2-norm bounded changes is inversely proportional to the weight norm of the model parameters: a smaller weight norm implies better robustness. In the context of deep learning, our experiments show that SGD-trained neural networks have smaller Lipschitz constants, explaining their better robustness to input perturbations compared with networks trained using adaptive gradient methods.
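    The claimed inverse dependence on the weight norm has a one-line derivation in the linear case; a minimal worked bound (notation ours, not quoted from the paper):

```latex
% For a linear predictor f(x) = w^T x + b, the worst-case output change under
% an l2-bounded input perturbation is attained along the direction of w:
\sup_{\|\delta\|_2 \le \epsilon} \bigl| f(x+\delta) - f(x) \bigr|
  = \sup_{\|\delta\|_2 \le \epsilon} \bigl| w^\top \delta \bigr|
  = \epsilon \, \|w\|_2 .
% Hence a prediction with margin m = |f(x)| can only be flipped by a
% perturbation of norm at least m / \|w\|_2: among models with the same
% margin, the one with the smaller weight norm is the more robust.
```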

    A Woman's Right to Know: Pregnancy Testing in Twentieth-Century Britain

    Get PDF
    The history of pregnancy testing, and how it transformed from an esoteric laboratory tool into a commonplace of everyday life. Pregnancy testing has never been easier. Waiting on one side or the other of the bathroom door for a “positive” or “negative” result has become a modern ritual and rite of passage. Today, the ubiquitous home pregnancy test is implicated in personal decisions and public debates about all aspects of reproduction, from miscarriage and abortion to the “biological clock” and IVF. Yet, only three generations ago, women typically waited not minutes but months to find out whether they were pregnant. A Woman's Right to Know tells, for the first time, the story of pregnancy testing, one of the most significant and least studied technologies of reproduction. Focusing on Britain from around 1900 to the present day, Jesse Olszynko-Gryn shows how demand shifted from doctors to women, and goes further to explain the remarkable transformation of pregnancy testing from an obscure laboratory service into an easily accessible (though fraught) tool for every woman. Lastly, the book reflects on the resources the past might hold for the present and future of sexual and reproductive health. Solidly researched and compellingly argued, the book demonstrates that the rise of pregnancy testing has had a significant, and not always expected, impact and has changed the ways in which we conceive of pregnancy itself.

    Multimessenger Characterization of Markarian 501 during Historically Low X-Ray and γ-Ray Activity

    Get PDF
    We study the broadband emission of Mrk 501 using multiwavelength observations from 2017 to 2020 performed with a multitude of instruments, involving, among others, MAGIC, Fermi's Large Area Telescope (LAT), NuSTAR, Swift, GASP-WEBT, and the Owens Valley Radio Observatory. Mrk 501 showed extremely low broadband activity, which may help to unravel its baseline emission. Nonetheless, significant flux variations are detected at all wave bands, with the highest occurring in X-rays and very-high-energy (VHE) γ-rays. A significant correlation (>3σ) between X-rays and VHE γ-rays is measured, supporting leptonic scenarios to explain the variable parts of the emission, also during low activity. This is further supported when we extend our data from 2008 to 2020 and identify, for the first time, significant correlations between the Swift X-Ray Telescope and Fermi-LAT. We additionally find correlations between high-energy γ-rays and radio, with the radio lagging by more than 100 days, placing the γ-ray emission zone upstream of the radio-bright regions in the jet. Furthermore, Mrk 501 showed historically low activity in X-rays and VHE γ-rays from mid-2017 to mid-2019, with a stable VHE flux (>0.2 TeV) of 5% of the emission of the Crab Nebula. The broadband spectral energy distribution (SED) of this 2-year-long low state, the potential baseline emission of Mrk 501, can be characterized with one-zone leptonic models and with (lepto-)hadronic models fulfilling neutrino flux constraints from IceCube. We explore the time evolution of the SED toward the low state, revealing that the stable baseline emission may be ascribed to a standing shock, and the variable emission to an additional expanding or traveling shock. © 2023. The Author(s). Published by the American Astronomical Society.
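    As an aside on methodology, interband lags of this kind are commonly estimated with an interpolated cross-correlation of the two light curves; the sketch below is illustrative only (the synthetic data, grid choices, and plain ICCF-style estimator are assumptions, not the paper's pipeline) and recovers a known 100-day lag:

```python
# Minimal sketch: estimate a time lag between two unevenly sampled light
# curves via an interpolated cross-correlation (ICCF-style). Synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def signal(t):
    return np.sin(2 * np.pi * t / 300.0)   # slow, smooth variability

# "Gamma-ray" curve, and a "radio" curve that lags it by 100 days.
t_g = np.sort(rng.uniform(0.0, 1000.0, 200))        # irregular epochs [days]
f_g = signal(t_g) + 0.1 * rng.normal(size=t_g.size)
t_r = np.sort(rng.uniform(0.0, 1000.0, 150))
f_r = signal(t_r - 100.0) + 0.1 * rng.normal(size=t_r.size)

lags = np.arange(-200, 201, 5)                      # trial lags [days]
cc = []
for lag in lags:
    # Undo the trial lag on the radio epochs, interpolate onto gamma epochs.
    f_r_aligned = np.interp(t_g, t_r - lag, f_r)
    cc.append(np.corrcoef(f_g, f_r_aligned)[0, 1])

best = lags[int(np.argmax(cc))]
print(f"estimated radio lag: {best:+d} days")       # ~ +100 for this example
```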

    Identification and characterization of N-degron pathways

    Get PDF

    Piezoelectric digital vibration absorbers for vibration mitigation of bladed structures

    Full text link
    Climate change and resource scarcity pose increasingly difficult challenges for the aviation industry, requiring a reduction in fossil-fuel consumption. To address these problems and increase the efficiency of aircraft engines, some of their parts are now manufactured in one piece. For example, a rotor of the compressor stage of an airplane engine consists of a drum carrying a large number of blades and is called a BluM. These structures are lightweight and feature low structural damping, as well as high modal density owing to the large number of nodal diameters. Their particular dynamic characteristics require sophisticated solutions for vibration mitigation, and this is precisely the starting point of this thesis. Based on a digital realization of piezoelectric shunt circuits, we provide a damping concept that is able to tackle the complex dynamics of bladed structures and to mitigate their vibrations. To this end, multiple digital vibration absorbers (DVAs) are used simultaneously. Two new strategies to tune these DVAs are proposed in the thesis, namely the isolated-mode and mean-shunt strategies. These strategies not only take advantage of the fact that multiple absorbers act simultaneously on the structure, but also address the problem of closely spaced modes. In order to target multiple families of BluM modes, these strategies are incorporated into a multi-stage shunt circuit. The concepts are demonstrated experimentally using two bladed structures of increasing complexity, namely a bladed rail and a BluM. Both methods exhibit excellent damping performance on multiple groups of modes. In addition, they prove robust to changes in the host structure, which could, e.g., be due to mistuning. Thanks to their digital realization, DVAs are also easily adjustable. Finally, this thesis reveals the parallel that exists between resonant piezoelectric shunts with a negative capacitance and active positive position feedback (PPF) controllers. Based on this comparison, a new H∞ norm-based tuning rule is derived for the PPF controller. It is demonstrated both numerically and experimentally on a cantilever beam. To this end, a method that accounts, by means of correction factors, for the influence of modes higher in frequency than the targeted one is developed.
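    To make the shunt/PPF parallel concrete, here is a minimal sketch of positive position feedback acting on a single structural mode; all parameter values are illustrative assumptions, and the ad hoc tuning is not the thesis's H∞ norm-based rule:

```python
# Minimal sketch: positive position feedback (PPF) on one structural mode.
# Parameter values are illustrative; the tuning is NOT the thesis's H-inf rule.
import numpy as np

wn, zn = 2 * np.pi * 100.0, 0.002     # lightly damped mode: 100 Hz, 0.2%
wf, zf, g = wn, 0.3, 0.05             # PPF filter tuned near the mode (assumed)

w = 2 * np.pi * np.linspace(80.0, 120.0, 4001)   # frequency grid [rad/s]
s = 1j * w

G = 1.0 / (s**2 + 2 * zn * wn * s + wn**2)                # modal plant x/f
C = g * wn**2 * wf**2 / (s**2 + 2 * zf * wf * s + wf**2)  # PPF force per unit x
T = G / (1.0 - G * C)                                     # positive-feedback loop

print(f"open-loop   resonance peak |G|: {np.max(np.abs(G)):.3e}")
print(f"closed-loop resonance peak |T|: {np.max(np.abs(T)):.3e}")  # much reduced
```

    With this scaling, the classical static stability condition g < 1 applies; the thesis replaces such an ad hoc gain choice with an H∞ norm-based rule and correction factors for higher-frequency modes.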