
    An embodied approach to informational interventions: using conceptual metaphors to promote sustainable healthy diets

    Poor diet quality and environmental degradation are two major challenges of our times. Unhealthy and unsustainable dietary practices, such as the overconsumption of meat and consumer food waste, contribute greatly to both issues. Across seventeen online and field experiments in two cultures (the US and China), this thesis investigates whether the embodied cognition approach, and more specifically research on conceptual metaphors, can be used to develop interventions that promote sustainable healthy diets. Interventions relying on conceptual metaphors have been shown to stimulate attitudinal and behavioural change in other fields (e.g., marketing and political communication), but are rarely adopted to encourage sustainable healthy diets. To fill this gap in the literature, I conducted five sets of experimental studies examining the effects of different metaphors on specific sustainable healthy dietary practices, each of which forms an independent empirical paper (Chapters 2-6 of the thesis). After introducing the current perspectives on embodied cognition and conceptual metaphors in the context of this research (Chapter 1), Chapter 2 examines the conceptual metaphor “Healthy is Up”, demonstrating that US participants implicitly associate healthiness with verticality, and offering recommendations for healthy eating guidelines. Chapter 3 extends this research to Chinese samples and partially replicates the results. Chapter 4 shows that the anthropomorphic metaphor “Animals are Friends” discourages meat consumption by inducing anticipatory guilt among US omnivores, whereas Chapter 5 reveals that Chinese omnivores are more responsive to another anthropomorphic metaphor, “Animals are Family”. Bringing lab insights to the real world, Chapter 6 demonstrates in a longitudinal field experiment that anthropomorphic metaphors combined with environmental feedback produce a greater reduction in food waste than other feedback interventions. The strengths, limitations and implications of these empirical papers are discussed in the concluding part of the thesis.

    AI: Limits and Prospects of Artificial Intelligence

    The emergence of artificial intelligence has triggered enthusiasm and the promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and accompanying economic hopes and fears, utopias and dystopias, associated with the current and future development of artificial intelligence.

    Novel neural architectures & algorithms for efficient inference

    In the last decade, the machine learning community embraced deep neural networks (DNNs) wholeheartedly with the advent of neural architectures such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and transformers. These models have empowered applications such as ChatGPT and Imagen, and have achieved state-of-the-art (SOTA) performance on many vision, speech, and language modeling tasks. However, SOTA performance comes at a cost: large model size, compute-intensive training, increased inference latency, and higher working memory. This thesis aims to improve the resource efficiency of neural architectures, i.e., to significantly reduce the computational, storage, and energy consumption of a DNN without any significant loss in performance. Towards this goal, we explore novel neural architectures as well as training algorithms that allow low-capacity models to achieve near-SOTA performance. We divide this thesis into two dimensions: “Efficient Low Complexity Models” and “Input Hardness Adaptive Models”. Along the first dimension, Efficient Low Complexity Models, we improve DNN performance by addressing instabilities in existing architectures and training methods. We propose novel neural architectures inspired by ordinary differential equations (ODEs) to reinforce input signals and attend to salient feature regions. In addition, we show that carefully designed training schemes improve the performance of existing neural networks. We divide this exploration into two parts. (a) Efficient Low Complexity RNNs: we improve RNN resource efficiency by addressing poor gradients, noise amplification, and issues with backpropagation-through-time (BPTT) training. First, we improve RNNs by solving ODEs that eliminate vanishing and exploding gradients during training. To do so, we present Incremental Recurrent Neural Networks (iRNNs) that keep track of increments in the equilibrium surface. Next, we propose Time Adaptive RNNs that mitigate the noise propagation issue in RNNs by modulating the time constants in the ODE-based transition function. We empirically demonstrate the superiority of ODE-based neural architectures over existing RNNs. Finally, we propose the Forward Propagation Through Time (FPTT) algorithm for training RNNs and show that FPTT yields significant gains compared to the more conventional BPTT scheme. (b) Efficient Low Complexity CNNs: next, we improve CNN architectures by reducing their resource usage. CNNs require great depth to generate high-level features, resulting in computationally expensive models. We design a novel residual block, the Global layer, that constrains the input and output features by approximately solving partial differential equations (PDEs). It yields better receptive fields than traditional convolutional blocks and thus results in shallower networks. Further, we reduce the model footprint by enforcing a novel inductive bias that formulates the output of a residual block as a spatial interpolation between high-compute anchor pixels and low-compute cheaper pixels. This results in spatially interpolated convolutional blocks (SI-CNNs) with better compute-performance trade-offs. Finally, we propose an algorithm that enforces various distributional constraints during training in order to achieve better generalization; we refer to this scheme as distributionally constrained learning (DCL).
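
    The ODE view of recurrence described above can be pictured with a minimal sketch: a forward-Euler recurrent cell whose hidden state follows dh/dt = -h + tanh(Wx + Uh), with a learnable per-unit step size that loosely mimics modulating time constants. This is hypothetical illustration code, not the thesis's iRNN or Time Adaptive RNN implementation.

```python
import torch
import torch.nn as nn

class EulerODECell(nn.Module):
    """Forward-Euler recurrent cell for dh/dt = -h + tanh(W x + U h).

    A hedged sketch of an ODE-based transition function; the learnable
    per-unit step size `tau` loosely mimics time-constant modulation.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.hidden_size = hidden_size
        self.inp = nn.Linear(input_size, hidden_size)
        self.rec = nn.Linear(hidden_size, hidden_size, bias=False)
        self.tau = nn.Parameter(torch.zeros(hidden_size))  # pre-sigmoid step sizes

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (time, batch, input_size); returns the final hidden state.
        h = x_seq.new_zeros(x_seq.size(1), self.hidden_size)
        dt = torch.sigmoid(self.tau)  # per-unit step size in (0, 1)
        for x_t in x_seq:
            dh = -h + torch.tanh(self.inp(x_t) + self.rec(h))  # ODE right-hand side
            h = h + dt * dh  # explicit Euler update
        return h
```

    For instance, EulerODECell(8, 32)(torch.randn(20, 4, 8)) consumes a 20-step batch of 4 sequences and returns a (4, 32) final state.
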
In the second dimension, Input Hardness Adaptive Models, we introduce the notion of the hardness of an input relative to an architecture. In the first dimension, a neural network allocates the same resources, such as compute, storage, and working memory, to all inputs, inherently assuming that all examples are equally hard for the model. Here we challenge this assumption, reasoning that some inputs are relatively easy for a network to predict compared to others. Input hardness enables us to create selective classifiers in which a low-capacity network handles simple inputs while abstaining from predictions on complex inputs. Next, we create hybrid models that route the hard inputs from the low-capacity abstaining network to a high-capacity expert model, and we design various architectures that adhere to this hybrid inference style. Further, input hardness enables us to selectively distill the knowledge of a high-capacity model into a low-capacity model by discarding hard inputs during the distillation procedure. Finally, we conclude the thesis by sketching out various interesting future research directions that emerge as extensions of the ideas explored in this work.
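
    The hybrid inference style can be sketched as a simple confidence-based router (an illustrative sketch with assumed names; the max-softmax confidence proxy and fixed threshold are assumptions, not the thesis's abstention criterion):

```python
import torch

@torch.no_grad()
def hybrid_predict(x, small_model, expert_model, threshold: float = 0.9):
    """Route inputs by hardness: the low-capacity model answers only when
    confident and abstains otherwise; abstained (hard) inputs go to the
    high-capacity expert. Hypothetical sketch of hybrid inference.
    """
    probs = torch.softmax(small_model(x), dim=-1)
    conf, preds = probs.max(dim=-1)
    hard = conf < threshold                    # inputs the small model abstains on
    if hard.any():
        expert_logits = expert_model(x[hard])  # expert handles only hard inputs
        preds[hard] = expert_logits.argmax(dim=-1)
    return preds, hard
```

    The design point is that the expert runs only on the abstained subset, so average inference cost tracks the fraction of hard inputs rather than the expert's full cost.
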

    Vitalism and Its Legacy in Twentieth Century Life Sciences and Philosophy

    This Open Access book combines philosophical and historical analysis of various alternatives to mechanism and mechanistic explanation, from the nineteenth century to the present. It addresses vitalism, organicism and responses to materialism, and their relevance to current biological science. In doing so, it promotes dialogue and discussion about the historical and philosophical importance of vitalism and other non-mechanistic conceptions of life, and points towards the integration of genomic science into the broader history of biology. The volume engages broadly with a variety of nineteenth-, twentieth- and twenty-first-century vitalisms and conceptions of life, and discusses important threads in the history of these concepts in the United States and Europe, including new reception histories in eastern and south-eastern Europe. While vitalism, organicism and similar epistemologies are often the concern of specialists in the history and philosophy of biology and of historians of ideas, the range of the contributions and the geographical and temporal scope of the volume allow it to appeal to historians of science and of biology more generally.

    Applications of Molecular Dynamics simulations for biomolecular systems and improvements to density-based clustering in the analysis

    Molecular Dynamics simulations provide a powerful tool to study biomolecular systems in atomistic detail. The key to better understanding the function and behaviour of these molecules can often be found in their structural variability, and simulations can help expose information that is otherwise hard or impossible to obtain experimentally. This work covers two application examples in which sampling and characterising the conformational ensemble revealed the structural basis needed to answer a topical research question. For the fungal toxin phalloidin—a small bicyclic peptide—the product ratios observed in different cyclisation reactions could be rationalised by assessing the conformational pre-organisation of precursor fragments. For the C-type lectin receptor langerin, conformational changes induced by different side-chain protonations could explain the pH dependency of the protein's calcium binding. These investigations were accompanied by the continued development of a density-based clustering protocol into a corresponding software package, which is broadly applicable to extracting conformational states from Molecular Dynamics data.
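
    As a rough illustration of that use case (a sketch only: scikit-learn's DBSCAN stands in for the thesis's own density-based protocol, and the two-dimensional features are synthetic stand-ins for, e.g., dihedral or PCA coordinates of trajectory frames):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic stand-in for low-dimensional features of MD trajectory frames;
# two dense regions mimic two conformational states.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.1, size=(500, 2)),  # state A
    rng.normal(loc=(1.5, 1.5), scale=0.1, size=(500, 2)),  # state B
])

# Density-based clustering: frames labeled -1 fall in low-density regions
# ("noise"); the remaining labels index candidate conformational states.
labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(features)
print(np.unique(labels, return_counts=True))
```
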

    Compositional synthesis of reactive systems

    Synthesis is the task of automatically deriving correct-by-construction implementations from formal specifications. While it is a promising path toward developing verified programs, it is infamous for being hard to solve. Compositionality is recognized as a key technique for reducing the complexity of synthesis. So far, compositional approaches require extensive manual effort. In this thesis, we introduce algorithms that automate these steps. In the first part, we develop compositional synthesis techniques for distributed systems. Providing assumptions on other processes' behavior is fundamental in this setting due to inter-process dependencies. We establish delay-dominance, a new requirement for implementations that allows for implicitly assuming that other processes will not maliciously violate the shared goal. Furthermore, we present an algorithm that computes explicit assumptions on process behavior to address more complex dependencies. In the second part, we transfer the concept of compositionality from distributed to single-process systems. We present a preprocessing technique for synthesis that identifies independently synthesizable system components. We extend this approach to an incremental synthesis algorithm, resulting in more fine-grained decompositions. Our experimental evaluation shows that our techniques automate the required manual efforts, resulting in fully automated compositional synthesis algorithms for both distributed and single-process systems.
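
    The component-identification step can be pictured with a small, hypothetical sketch (illustrative names and a deliberately simplified criterion; the thesis's preprocessing and incremental algorithm are more involved): conjuncts of a specification that share no output variables can be synthesized independently, which reduces to grouping conjuncts by shared outputs.

```python
from itertools import combinations

def decompose(conjuncts: dict[str, set[str]]) -> list[set[str]]:
    """Group specification conjuncts into independently synthesizable
    components: conjuncts sharing an output variable must stay together.
    Simplified union-find sketch, not the thesis's algorithm.
    """
    parent = {name: name for name in conjuncts}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for a, b in combinations(conjuncts, 2):
        if conjuncts[a] & conjuncts[b]:  # shared output variable
            parent[find(a)] = find(b)    # merge the two components

    groups: dict[str, set[str]] = {}
    for name in conjuncts:
        groups.setdefault(find(name), set()).add(name)
    return list(groups.values())

# Toy spec: phi1 and phi2 share output "g", phi3 controls only "ack".
spec = {"phi1": {"g"}, "phi2": {"g", "r"}, "phi3": {"ack"}}
print(decompose(spec))  # -> [{'phi1', 'phi2'}, {'phi3'}]
```

    On this toy spec, phi1 and phi2 end up in one component (they share output g) while phi3 forms its own independently synthesizable component.
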

    24th Nordic Conference on Computational Linguistics (NoDaLiDa)


    Dark energy in quantum field theory: Implications on modern cosmology

    In this dissertation, the nature of Dark Energy (DE) is examined from both theoretical and phenomenological perspectives. The possibility of DE being a dynamical quantity in quantum field theory (QFT) in curved spacetime is studied. The primary aim is to go beyond the usual approach that relies on ad hoc fields and instead treat DE as a quantum vacuum under appropriate QFT renormalization. Specifically, the dynamical behavior of DE could arise from quantum vacuum fluctuations in the Universe, evolving alongside the background expansion. Thus, the evolution of the vacuum energy density can be expressed in terms of the Hubble function and its derivatives, $\rho_{\rm vac} = \rho_{\rm vac}(H)$. This approach yields a significant result: the equation of state of the quantum vacuum, derived from first principles, deviates from its traditional constant value of $w_{\rm vac} = -1$. Additionally, a new inflationary mechanism emerges in this context, rooted in quantum effects in curved spacetime. Moreover, the thesis presents a phenomenological exploration of two related models that go beyond the $\Lambda$CDM model: the Brans-Dicke model with a cosmological constant and the Running Vacuum Model, which is connected to the QFT calculations. These models have been tested against different datasets and scenarios to determine the constraints on their free parameters. The results of the fits are presented and discussed in relation to the cosmological tensions concerning $H_0$ and $\sigma_8$. The conclusions drawn from this thesis indicate promising signals of the dynamical behavior of the quantum vacuum, potentially bearing on the cosmological constant problem and the cosmological tensions.
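
    For concreteness, the Running Vacuum Model referenced above is commonly parameterized in the literature by a vacuum energy density that runs with the Hubble rate; a representative low-energy form (an assumption based on the standard RVM literature, not necessarily the exact expression used in the thesis) is:

```latex
% Representative running-vacuum parameterization (standard low-energy RVM form):
% a constant term plus a small H^2 correction, with \nu a dimensionless
% running parameter expected to satisfy |\nu| << 1.
\rho_{\rm vac}(H) \;=\; \frac{3}{8\pi G}\left(c_0 + \nu H^2\right),
\qquad |\nu| \ll 1 .
```

    Such an $H$-dependent density makes $w_{\rm vac}$ deviate mildly from $-1$ as the Universe expands, which is the kind of dynamical signature that fits against $H_0$ and $\sigma_8$ data can probe.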