664 research outputs found

    30th European Congress on Obesity (ECO 2023)

    This is the abstract book of the 30th European Congress on Obesity (ECO 2023).

    Resilience and food security in a food systems context

    This open access book compiles a series of chapters written by internationally recognized experts known for their in-depth but critical views on questions of resilience and food security. The book assesses rigorously and critically the contribution of the concept of resilience in advancing our understanding and ability to design and implement development interventions in relation to food security and humanitarian crises. For this, the book departs from the narrow beaten tracks of agriculture and trade, which have influenced the mainstream debate on food security for nearly 60 years, and adopts instead a wider, more holistic perspective, framed around food systems. The foundation for this new approach is the recognition that in the current post-globalization era, the food and nutritional security of the world’s population no longer depends just on the performance of agriculture and policies on trade, but rather on the capacity of the entire (food) system to produce, process, transport and distribute safe, affordable and nutritious food for all, in ways that remain environmentally sustainable. In that context, adopting a food system perspective provides a more appropriate frame, as it invites us to broaden conventional thinking and to acknowledge the systemic nature of the different processes and actors involved. This book is written for a broad audience, from academics and policymakers to students and practitioners.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume.

    What do rendering options tell us about the translating mind? Testing the choice network analysis hypothesis

    Frame. Assessing the difficulty of source texts and parts thereof is important in CTIS, whether for research comparability, for didactic purposes or for setting price differences in the market. In order to measure it empirically, Campbell & Hale (1999) and Campbell (2000) developed the Choice Network Analysis (CNA) framework. The CNA’s main hypothesis is that the more translation options (a group of) translators have to render a given source text stretch, the higher the difficulty of that text stretch will be. We will call this the CNA hypothesis. In a nutshell, this research project puts the CNA hypothesis to the test and studies whether it actually measures difficulty. Data collection. Two groups of participants (n=29) of different profiles and from two universities in different countries had three translation tasks keylogged with Inputlog, and filled in pre- and post-translation questionnaires. Participants translated from English (L2) into their L1s (Spanish or Italian), and worked—first in class and then at home—using their own computers, on texts ca. 800–1000 words long. Each text was translated in approximately equal halves in two 1-hour sessions, in three consecutive weeks. Only the parts translated at home were considered in the study. Results. A very different picture emerged from the data than that which the CNA hypothesis might predict: there was no prevalence of disfluent task segments when there were many translation options, nor a prevalence of fluent task segments associated with fewer translation options. Indeed, there was no correlation between the number of translation options (many and few) and behavioral fluency. Additionally, there was no correlation between pauses and either behavioral fluency or typing speed. The theoretical flaws discussed and the empirical evidence lead to the conclusion that the CNA framework does not and cannot measure text and translation difficulty.
    State of the art. Assessing the difficulty of source texts and parts thereof plays a central role in cognitive translation and interpreting studies (CTIS). To measure it empirically, Campbell & Hale (1999) and Campbell (2000) developed Choice Network Analysis (CNA). The main hypothesis of CNA is that the more translation options a group of translators has for rendering a stretch of text, the higher its difficulty will be. This research project puts the CNA hypothesis to the test to verify its validity as an instrument for measuring difficulty. Data collection. Two groups of participants (n=29) with different profiles, from two universities in different countries, carried out three translation tasks using Inputlog, each preceded and followed by a questionnaire. Participants translated from English (L2) into their L1 (Spanish or Italian) and worked, first in class and then at home on their own computers, on texts of about 800–1000 words. Each text was divided into roughly equal halves and translated in two one-hour sessions, over three consecutive weeks. Results. The data revealed a picture very different from the one suggested by the CNA hypothesis: there was no prevalence of less fluent segments linked to a larger number of translation options, nor a prevalence of more fluent segments associated with a smaller number of options. On the contrary, in both cases segment fluency remained close to the average. Finally, no correlation was found between pauses and behavioral fluency or typing speed. The theoretical inaccuracies discussed above and the empirical evidence lead to the conclusion that CNA does not and cannot measure text and translation difficulty.
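The CNA hypothesis described in this abstract amounts to a simple statistical claim: per-segment option counts should correlate with a disfluency measure. A minimal toy sketch of such a check (hypothetical numbers and variable names, not the study's data or tooling; the study used Inputlog logs, not this code):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation, computed as the Pearson
    correlation of the ranks (assumes no tied values)."""
    rx = np.argsort(np.argsort(x))  # rank of each element of x
    ry = np.argsort(np.argsort(y))  # rank of each element of y
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical per-segment data: number of distinct renderings observed
# across translators, and fraction of task time spent pausing
# (a common behavioral-disfluency proxy).
options = np.array([2, 5, 3, 8, 4, 6, 1, 7])
pause_ratio = np.array([0.31, 0.28, 0.35, 0.30, 0.29, 0.33, 0.27, 0.32])

rho = spearman_rho(options, pause_ratio)
# Under the CNA hypothesis rho should be strongly positive; a weak
# correlation, as in these toy numbers, is the pattern the study reports.
print(abs(rho) < 0.5)  # → True
```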

    Statistical learning of random probability measures

    The study of random probability measures is a lively research topic that has attracted interest from different fields in recent years. In this thesis, we consider random probability measures in the context of Bayesian nonparametrics, where the law of a random probability measure is used as a prior distribution, and in the context of distributional data analysis, where the goal is to perform inference given a sample from the law of a random probability measure. The contributions of this thesis can be subdivided into three topics: (i) the use of almost surely discrete repulsive random measures (i.e., whose support points are well separated) for Bayesian model-based clustering; (ii) the proposal of new laws for collections of random probability measures for Bayesian density estimation of partially exchangeable data subdivided into different groups; and (iii) the study of principal component analysis and regression models for probability distributions seen as elements of the 2-Wasserstein space. Specifically, for point (i) we propose an efficient Markov chain Monte Carlo algorithm for posterior inference, which sidesteps the need for split-merge reversible jump moves typically associated with poor performance; we propose a model for clustering high-dimensional data by introducing a novel class of anisotropic determinantal point processes; and we study the distributional properties of the repulsive measures, shedding light on important theoretical results which enable more principled prior elicitation and more efficient posterior simulation algorithms. For point (ii), we consider several models suitable for clustering homogeneous populations, inducing spatial dependence across groups of data, and extracting the characteristic traits common to all the data groups, and we propose a novel vector autoregressive model to study the growth curves of Singaporean children.
    Finally, for point (iii), we propose a novel class of projected statistical methods for distributional data analysis for measures on the real line and on the unit circle.
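The central object of this thesis, a random probability measure used as a Bayesian nonparametric prior, can be illustrated with a minimal sketch (not taken from the thesis; the truncation level and base measure are illustrative assumptions): a truncated stick-breaking draw from a Dirichlet process, the canonical example of an almost surely discrete random probability measure of the kind referenced in point (i).

```python
import numpy as np

def dp_stick_breaking(alpha, n_atoms, base_sampler, rng):
    """Truncated stick-breaking draw from a Dirichlet process DP(alpha, G0).

    Returns the atom locations and weights of a (truncated) realization,
    which is a discrete probability measure sum_k w_k * delta_{atom_k}.
    """
    betas = rng.beta(1.0, alpha, size=n_atoms)              # stick proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * remaining                             # broken-off stick lengths
    atoms = base_sampler(n_atoms)                           # i.i.d. draws from G0
    return atoms, weights

rng = np.random.default_rng(0)
atoms, weights = dp_stick_breaking(
    alpha=2.0,
    n_atoms=500,
    base_sampler=lambda n: rng.normal(0.0, 1.0, size=n),    # G0 = N(0, 1)
    rng=rng,
)
# With 500 atoms the leftover stick mass is negligible, so the weights
# sum to 1 up to floating-point truncation error.
print(round(weights.sum(), 4))  # → 1.0
```

Draws from this prior are discrete with probability one, which is what makes such measures usable for model-based clustering: observations sharing an atom form a cluster.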

    50 Years of quantum chromodynamics – Introduction and Review


    Undergraduate Bulletin, 2023-2024


    Selected Topics in Gravity, Field Theory and Quantum Mechanics

    Quantum field theory has achieved some extraordinary successes over the past sixty years; however, it retains a set of challenging problems. It is not yet able to describe gravity in a mathematically consistent manner. CP violation remains unexplained. Grand unified theories have been eliminated by experiment, and a viable unification model has yet to replace them. Even the highly successful quantum chromodynamics, despite significant computational achievements, struggles to provide theoretical insight into the low-energy regime of quark physics, where the nature and structure of hadrons are determined. The only proposal for resolving the fine-tuning problem, low-energy supersymmetry, has been eliminated by results from the LHC. Since mathematics is the true and proper language for quantitative physical models, we expect new mathematical constructions to provide insight into physical phenomena and fresh approaches for building physical theories.