
    Strategic and Selfless Interactions: a study of human behaviour

    Human beings are unique animals, cooperating on a scale unmatched by any other species. We build societies composed of unrelated individuals, and empirical results have shown that people hold social preferences and may be willing to take costly actions that benefit others. On the other hand, humans also compete with one another, which at times carries negative consequences such as the overexploitation of natural resources. Yet competition between economic agents underlies the proper functioning of markets, and its destabilisation -- as in an unbalanced distribution of market power -- can harm trading efficiency. Analysing how people cooperate and compete is therefore of paramount importance for understanding human behaviour, especially in view of the imminent challenges threatening the future well-being of our societies.

    This thesis presents work analysing people's behaviour in social dilemmas -- situations in which selfish decisions diverge from the social optimum -- and in other strategic scenarios. Using the framework of game theory, these interactions take place in games that abstract such situations. Specifically, we ran behavioural experiments in which people played adapted common-pool resource games, public goods games, and other purpose-built games. In addition, to understand the existence of cooperation in humans, we propose a theoretical approach that models its evolution through a heuristic-selection dynamic.

    We begin by presenting the theoretical and empirical foundations on which this thesis rests, namely game theory, experimental economics, network science, and the evolution of cooperation. We then illustrate the practical aspects of running experiments through software implementations.

    To understand people's behaviour in collective-action problems -- such as climate change mitigation, which requires a global level of coordination and cooperation -- we ran public goods and common-pool resource games with Chinese and Spanish participants. The results provide insights into the variations and universals of people's responses in these scenarios.

    Along these lines, people and institutions have grown increasingly concerned with social and environmental issues in recent years. Contributions in these settings, however, demand a substantial level of altruism from agents who must make costly decisions. We ran two experiments to understand the factors driving such decisions in two situations of contemporary relevance: charitable donations and socially responsible investments. The results indicate that framing and certain sociodemographic characteristics are significantly associated with prosocial and altruistic decisions.

    We also analysed people's behaviour in a complex, competitive scenario in which subjects acted as intermediaries in price-formation experiments. We did so through an experiment implementing a generalisation of the bargaining game on complex networks. Our findings show significant effects of network topology both in the experimental results and in theoretical models based on the observed behaviour.

    Finally, we present theoretical work that seeks to understand the emergence of cooperation through a novel approach to studying the evolution of strategies in structured populations. This is achieved by modelling agents' decisions as the outcomes of heuristics, which are themselves selected through a process inspired by evolutionary algorithms. Our analyses show that when agents retain a memory of their previous interactions, cooperative strategies thrive; those strategies, however, operate according to different heuristics depending on the information they take into account.
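    A heuristic-selection dynamic of this general kind can be illustrated with a toy model. Everything below is an illustrative assumption rather than the thesis's actual model: memory-one heuristics that react only to the opponent's last move in an iterated prisoner's dilemma, standard payoff values, and tournament selection with mutation as the evolutionary process.

```python
# Toy sketch (NOT the thesis model): memory-one heuristics in an iterated
# prisoner's dilemma, evolved by tournament selection with mutation.
import random

# Standard PD payoffs: (row player, column player)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(s1, s2, rounds=50):
    """Play an iterated PD; each heuristic maps the opponent's last move to an action."""
    m1 = m2 = 'C'                      # both open with cooperation
    p1 = p2 = 0
    for _ in range(rounds):
        a1, a2 = s1[m2], s2[m1]        # react to the opponent's last move (memory of one round)
        r1, r2 = PAYOFF[(a1, a2)]
        p1, p2, m1, m2 = p1 + r1, p2 + r2, a1, a2
    return p1, p2

def random_strategy():
    """A memory-one heuristic: a response to each possible opponent move."""
    return {'C': random.choice('CD'), 'D': random.choice('CD')}

def evolve(pop_size=60, generations=100):
    """Select heuristics by binary tournaments on payoffs against random co-players."""
    pop = [random_strategy() for _ in range(pop_size)]
    for _ in range(generations):
        fit = [sum(play(s, o)[0] for o in random.sample(pop, 5)) for s in pop]
        new = []
        for _ in range(pop_size):
            i, j = random.randrange(pop_size), random.randrange(pop_size)
            child = dict(pop[i] if fit[i] >= fit[j] else pop[j])
            if random.random() < 0.05:             # occasional mutation
                child[random.choice('CD')] = random.choice('CD')
            new.append(child)
        pop = new
    return pop
```

    Under these assumptions, reciprocal heuristics such as tit-for-tat (cooperate after 'C', defect after 'D') tend to spread once agents can condition on past interactions, which is the qualitative effect of memory described above.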

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
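    The optimized streaming algorithms themselves are not reproduced here, but the basic map/reduce decomposition of one Life generation can be sketched as follows. The function names and the local shuffle emulation are illustrative assumptions, not the authors' Elastic MR implementation:

```python
# One generation of Conway's Life as a map + reduce pass, in the spirit of
# MR streaming (illustrative sketch, not the paper's implementation).
from collections import defaultdict

def mapper(live_cells):
    """Emit (cell, 'LIVE') for each live cell and (neighbour, 1) for its 8 neighbours."""
    for (x, y) in live_cells:
        yield (x, y), 'LIVE'
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    yield (x + dx, y + dy), 1

def reducer(cell, values):
    """Life rules: survive with 2-3 live neighbours, birth with exactly 3."""
    alive = 'LIVE' in values
    neighbours = sum(v for v in values if v != 'LIVE')
    if neighbours == 3 or (alive and neighbours == 2):
        yield cell

def step(live_cells):
    """Local driver emulating the shuffle phase (group values by key)."""
    groups = defaultdict(list)
    for key, value in mapper(live_cells):
        groups[key].append(value)
    return {c for key, values in groups.items() for c in reducer(key, values)}
```

    A horizontal blinker {(0,0), (1,0), (2,0)} oscillates to its vertical form and back, which makes a convenient smoke test. In a real streaming deployment the mapper and reducer would read and write key-value lines over stdin/stdout, with the framework performing the grouping that `step` emulates here.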

    The force interpretation of evolutionary theory: scope and limits

    Evolutionary theory is usually understood as a causal theory in which the main causes of evolutionary change are identified with natural selection, genetic drift, mutation, and migration. Following this reasoning, many biologists and philosophers of biology have structured evolutionary theory by analogy with Newtonian mechanics, understanding it as a theory of forces. The key point supporting the analogy is that the structure of Newtonian mechanics allows the causal elements of the system of interest to be identified. Evolutionary theory thereby gains a useful explanatory picture of evolutionary phenomena by structuring itself as a ‘quasi-Newtonian theory’ (Maudlin 2004). This way of structuring or conceptualising a theory along Newtonian lines has been used in several areas: in the composition of colours, of desires, of services, of “social forces”, of duties, in ethical questions, and in the composition of causal powers in general (Massin 2016). The analogy has nevertheless been challenged over the last decade, with critics not only exposing its limitations but advancing a radically new view according to which the so-called evolutionary forces or causes are nothing more than pseudo-processes. On this view, causal action resides at the level of individuals, with selection, drift, and the rest being statistical summaries of those facts. In this work we analyse this controversy, show the merits but also the limitations of the force analogy and, above all, discern the proper structure of evolutionary theory, paying particular attention to genetic drift, the causal factor that fits worst within the framework of forces.

    Since Darwin’s times, evolutionary theory has been conceptualized as a causal theory.
In order to emphasize this causal view, textbooks and most of the evolutionary literature talk about evolutionary forces acting on a population. Elliott Sober, in his influential book The Nature of Selection (1984), argues that evolutionary theory is a theory of forces because, in the same way that the different forces of Newtonian mechanics cause changes in the movement of bodies, evolutionary forces cause changes in gene and/or genotype frequencies. As a result, selection, drift, mutation and migration would be the main forces or causes of evolution. Nevertheless, the appropriateness of the causal view, and particularly of the Newtonian analogy, has been challenged in the last decade. Several authors (Denis Walsh, Mohan Matthen, André Ariew…) have argued for a new view, the statistical view, in which the evolutionary process and its parts (selection, drift, etc.) are mere statistical outcomes, inseparable from each other. The so-called evolutionary forces should instead be conceptualized as statistical population-level tendencies, abandoning any causal role for them. I have developed a third way to defend the causal view. Authors committed to the Newtonian analogy capture the theoretical structure shared by evolutionary theory and Newtonian mechanics. On the other hand, causalists not committed to the Newtonian analogy share the statisticalists' concern about some important problems with the force interpretation (the most important being the mismatch in the analogy produced by the lack of directionality of genetic drift). My approach postulates a broader causal framework (a difference-maker account of causation) that unifies different causalist approaches and avoids problems such as searching for a directionality of genetic drift. In addition, it clarifies the features that any Zero-Cause Law must satisfy. Finally, my approach explains why the force metaphor was formulated in the first place and why it persists in the evolutionary literature.
The Newtonian analogy is illuminating insofar as it helps reveal the causal structure of evolutionary theory. In other words, the theory is constructed from a Zero-Cause Law that stipulates a default behaviour, and change arises by introducing factors which alter that behaviour. On the other hand, I have developed a new analysis of the Price equation, showing its virtues as a key equation in evolutionary theory and overcoming recent critiques of its usefulness.
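    For reference, the Price equation discussed here is standardly written in its covariance form, whose two terms separate selection from transmission (with $w_i$ the fitness of type $i$, $z_i$ its trait value, and $\bar{w}$ the mean fitness):

```latex
\Delta \bar{z}
  = \underbrace{\frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}}_{\text{selection}}
  + \underbrace{\frac{\operatorname{E}\!\left(w_i \, \Delta z_i\right)}{\bar{w}}}_{\text{transmission}}
```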

    Differential evolution of non-coding DNA across eukaryotes and its close relationship with complex multicellularity on Earth

    Here, I elaborate on the hypothesis that complex multicellularity (CM, sensu Knoll) is a major evolutionary transition (sensu Szathmary), which has convergently evolved a few times in Eukarya only: within red and brown algae, plants, animals, and fungi. Paradoxically, CM seems to correlate with the expansion of non-coding DNA (ncDNA) in the genome rather than with genome size or the total number of genes. Thus, I investigated the correlation between genome and organismal complexities across 461 eukaryotes under a phylogenetically controlled framework. To that end, I introduce the first formal definitions and criteria to distinguish ‘unicellularity’, ‘simple’ (SM) and ‘complex’ multicellularity. Rather than using the limited available estimations of unique cell types, the 461 species were classified according to our criteria by reviewing their life cycle and body plan development from the literature. Then, I investigated the evolutionary association between genome size and 35 genome-wide features (introns and exons from protein-coding genes, repeats and intergenic regions) describing the coding and ncDNA complexities of the 461 genomes. To this end, I developed ‘GenomeContent’, a program that systematically retrieves massive multidimensional datasets from gene annotations and calculates over 100 genome-wide statistics. R-scripts coupled to parallel computing were created to calculate >260,000 phylogenetically controlled pairwise correlations. As previously reported, both repetitive and non-repetitive DNA are found to scale strongly and positively with genome size across most eukaryotic lineages. In contrast to previous studies, I demonstrate that changes in the length and repeat composition of introns are only weakly or moderately associated with changes in genome size at the global phylogenetic scale, while changes in intron abundance (within and across genes) are either not or only very weakly associated with changes in genome size.
Our evolutionary correlations are robust to different phylogenetic regression methods, uncertainties in the tree of eukaryotes, variations in genome size estimates, and randomly reduced datasets. Then, I investigated the correlation between the 35 genome-wide features and the cellular complexity of the 461 eukaryotes with phylogenetic Principal Component Analyses. Our results endorse a genetic distinction between SM and CM in Archaeplastida and Metazoa, but not so clearly in Fungi. Remarkably, complex multicellular organisms and their closest ancestral relatives are characterized by high intron-richness, regardless of genome size. Finally, I argue why and how a vast expansion of non-coding RNA (ncRNA) regulators, rather than of novel protein regulators, can promote the emergence of CM in Eukarya. As a proof of concept, I co-developed a novel ‘ceRNA-motif pipeline’ for the prediction of “competing endogenous” ncRNAs (ceRNAs) that regulate microRNAs in plants. We identified three candidate ceRNA motifs: MIM166, MIM171 and MIM159/319, which were found to be conserved across land plants and to be potentially involved in diverse developmental processes and stress responses. Collectively, the findings of this dissertation support our hypothesis that CM on Earth is a major evolutionary transition promoted by the expansion of two major ncDNA classes, introns and regulatory ncRNAs, which might have boosted the irreversible commitment of cell types in certain lineages by canalizing the timing and kinetics of the eukaryotic transcriptome.

    Table of contents:
    Cover page; Abstract; Acknowledgements; Index
    1. The structure of this thesis (1.1. Structure of this PhD dissertation; 1.2. Publications of this PhD dissertation; 1.3. Computational infrastructure and resources; 1.4. Disclosure of financial support and information use; 1.5. Acknowledgements; 1.6. Author contributions and use of impersonal and personal pronouns)
    2. Biological background (2.1. The complexity of the eukaryotic genome; 2.2. The problem of counting and defining “genes” in eukaryotes; 2.3. The “function” concept for genes and “dark matter”; 2.4. Increases of organismal complexity on Earth through multicellularity; 2.5. Multicellularity is a “fitness transition” in individuality; 2.6. The complexity of cell differentiation in multicellularity)
    3. Technical background (3.1. The Phylogenetic Comparative Method (PCM); 3.2. RNA secondary structure prediction; 3.3. Some standards for genome and gene annotation)
    4. What is in a eukaryotic genome? GenomeContent provides a good answer (4.1. Background; 4.2. Motivation: an interoperable tool for data retrieval of gene annotations; 4.3. Methods; 4.4. Results; 4.5. Discussion)
    5. The evolutionary correlation between genome size and ncDNA (5.1. Background; 5.2. Motivation: estimating the relationship between genome size and ncDNA; 5.3. Methods; 5.4. Results; 5.5. Discussion)
    6. The relationship between non-coding DNA and Complex Multicellularity (6.1. Background; 6.2. Motivation: how to define and measure complex multicellularity across eukaryotes?; 6.3. Methods; 6.4. Results; 6.5. Discussion)
    7. The ceRNA motif pipeline: regulation of microRNAs by target mimics (7.1. Background; 7.2. A revisited protocol for the computational analysis of Target Mimics; 7.3. Motivation: a novel pipeline for ceRNA motif discovery; 7.4. Methods; 7.5. Results; 7.6. Discussion)
    8. Conclusions and outlook (8.1. Contributions and lessons for the bioinformatics of large-scale comparative analyses; 8.2. Intron features are evolutionarily decoupled among themselves and from genome size throughout Eukarya; 8.3. “Complex multicellularity” is a major evolutionary transition; 8.4. Role of RNA throughout the evolution of life and complex multicellularity on Earth)
    9. Supplementary Data
    Bibliography; Curriculum Scientiae; Selbständigkeitserklärung (declaration of authorship)

    Implementation of Gaussian process models for non-linear system identification

    This thesis is concerned with investigating the use of Gaussian Process (GP) models for the identification of nonlinear dynamic systems. The Gaussian Process model is a non-parametric approach to system identification where the model of the underlying system is to be identified through the application of Bayesian analysis to empirical data. The GP modelling approach has been proposed as an alternative to more conventional methods of system identification due to a number of attractive features. In particular, the Bayesian probabilistic framework employed by the GP model has been shown to have potential in tackling the problems found in the optimisation of complex nonlinear models such as those based on multiple model or neural network structures. Furthermore, due to this probabilistic framework, the predictions made by the GP model are probability distributions composed of mean and variance components. This is in contrast to more conventional methods where a predictive point estimate is typically the output of the model. This additional variance component of the model output has been shown to be of potential use in model-predictive or adaptive control implementations. A further property that is of potential interest to those working on system identification problems is that the GP model has been shown to be particularly effective in identifying models from sparse datasets. Therefore, the GP model has been proposed for the identification of models in off-equilibrium regions of operating space, where more established methods might struggle due to a lack of data. The majority of the existing research into modelling with GPs has concentrated on detailing the mathematical methodology and theoretical possibilities of the approach. Furthermore, much of this research has focused on the application of the method toward statistics and machine learning problems. 
This thesis investigates the use of the GP model for identifying nonlinear dynamic systems from an engineering perspective. In particular, it is the implementation aspects of the GP model that are the main focus of this work. Due to its non-parametric nature, the GP model may also be considered a ‘black-box’ method, as the identification process relies almost exclusively on empirical data and not on prior knowledge of the system. As a result, the methods used to collect and process this data are of great importance, and the experimental design and data pre-processing aspects of the system identification procedure are investigated in detail. Accordingly, in the research presented here, the inclusion of prior system knowledge into the overall modelling procedure is shown to be an invaluable asset in improving the overall performance of the GP model. In previous research, the computational implementation of the GP modelling approach has been shown to become problematic for applications where the size of the training dataset is large (i.e. one thousand or more points). This is due to the requirement in the GP modelling approach for repeated inversion of a covariance matrix whose size is dictated by the number of points included in the training dataset. Therefore, in order to maintain the computational viability of the approach, a number of different strategies have been proposed to lessen the computational burden. Many of these methods seek to make the covariance matrix sparse through the selection of a subset of existing training data. However, instead of operating on an existing training dataset, in this thesis an alternative approach is proposed where the training dataset is specifically designed to be as small as possible whilst still containing as much information as possible.
In order to achieve this goal of improving the ‘efficiency’ of the training dataset, the basis of the experimental design involves adopting a more deterministic approach to exciting the system, rather than the more common random excitation approach used for the identification of black-box models. This strategy is made possible through the active use of prior knowledge of the system. The implementation of the GP modelling approach has been demonstrated on a range of simulated and real-world examples. The simulated examples investigated include both static and dynamic systems. The GP model is then applied to two laboratory-scale nonlinear systems: a Coupled Tanks system where the volume of liquid in the second tank must be predicted, and a Heat Transfer system where the temperature of the airflow along a tube must be predicted. Further extensions to the GP model are also investigated, including the propagation of uncertainty from one prediction to the next, the application of sparse matrix methods, and the use of derivative observations. A feature of the application of the GP modelling approach to nonlinear system identification problems is the reliance on the squared exponential covariance function. In this thesis the benefits and limitations of this particular covariance function are made clear, and the use of alternative covariance functions and ‘mixed-model’ implementations is also discussed.
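    The mean-plus-variance prediction and the squared exponential covariance described above can be sketched in a few lines. The hyperparameter values, the toy sin(x) data, and the function names below are illustrative assumptions, not the implementation used in the thesis:

```python
# Minimal GP regression sketch with a squared-exponential covariance.
# Hyperparameters are fixed by hand here; a real implementation would
# optimise them (e.g. by maximising the marginal likelihood).
import numpy as np

def sq_exp(xa, xb, length=1.0, signal=1.0):
    """Squared-exponential covariance k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    d = xa[:, None] - xb[None, :]
    return signal**2 * np.exp(-0.5 * (d / length)**2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Return the predictive mean and variance at the test inputs."""
    K = sq_exp(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = sq_exp(x_train, x_test)
    L = np.linalg.cholesky(K)                     # stable alternative to inverting K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha                           # predictive mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(sq_exp(x_test, x_test)) - np.sum(v**2, axis=0)  # predictive variance
    return mean, var

# Toy data: noise-free observations of sin(x) (illustrative only)
x = np.linspace(0.0, 5.0, 10)
y = np.sin(x)
mu, var = gp_predict(x, y, np.array([2.5]))
```

    The variance component shrinks near the training inputs and grows away from them, which is precisely the property exploited in the model-predictive and adaptive control settings mentioned above. The repeated Cholesky factorisation of the n-by-n matrix K is also where the cost for large training datasets arises.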