Strategic and Selfless Interactions: a study of human behaviour
Human beings are unique animals, cooperating on a scale unmatched by any other species. We build societies composed of unrelated individuals, and empirical results have shown that people have social preferences and may be willing to take costly actions that benefit others. On the other hand, humans also compete with one another, which at times carries negative consequences such as the overexploitation of natural resources. Yet competition between economic agents underlies the proper functioning of markets, and its destabilization -- as in an unbalanced distribution of market power -- can harm market efficiency. Analyzing how people cooperate and compete is therefore of primary importance for understanding human behaviour, especially in light of the imminent challenges that threaten the future well-being of our societies.
In this thesis, we present work analyzing people's behaviour in social dilemmas -- situations in which selfish decisions diverge from the social optimum -- and in other strategic scenarios. Using the framework of game theory, their interactions take place in games that abstract these situations. Specifically, we conducted behavioural experiments in which people participated in adapted common-pool resource games, public goods games, and other purpose-built games. Furthermore, aiming to understand the existence of cooperation in humans, we propose a theoretical approach to model its evolution through a heuristic-selection dynamic.
We begin by presenting the theoretical and empirical foundations on which this thesis rests, namely game theory, experimental economics, network science, and the evolution of cooperation. We then illustrate the practical aspects of running experiments through software implementations.
To understand people's behaviour in collective-action problems -- such as climate change mitigation, which requires a global level of coordination and cooperation -- we ran public goods and common-pool resource games with Chinese and Spanish participants. The results provide insights into the variations and universalities of people's responses in these scenarios.
Along these lines, in recent years people and institutions have grown increasingly concerned with social and environmental issues. Contributions in these settings, however, demand a substantial level of altruism from agents who must make costly decisions. We conducted two experiments to understand the factors driving such decisions in two situations of contemporary relevance: charitable donations and socially responsible investments. The results indicate that framing and certain sociodemographic characteristics are significantly associated with prosocial and altruistic decisions.
We have also analyzed people's behaviour in a complex competitive scenario in which subjects participated as intermediaries in price-formation experiments. We do so through an experiment implementing a generalization of the bargaining game on complex networks. Our findings indicate significant effects of network topology both on the experimental outcomes and in theoretical models based on the observed behaviour.
Finally, we present theoretical work that seeks to understand the emergence of cooperation through a novel approach to studying the evolution of strategies in structured populations. This is achieved by modelling agents' decisions as the outcomes of heuristics, with these heuristics selected through a process inspired by evolutionary algorithms. Our analyses show that, when these agents have memory of their previous interactions, cooperative strategies thrive. However, those strategies operate according to different heuristics depending on the information they take into account.
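The public goods setting used in these experiments can be made concrete with a payoff sketch. The following is a minimal illustration of the standard linear public goods game, not the authors' actual experimental software; the group size, endowment, and multiplier values are hypothetical:

```python
def public_goods_payoffs(contributions, endowment=20.0, multiplier=1.6):
    """Per-player payoffs in a linear public goods game.

    Each player keeps (endowment - contribution) and receives an equal
    share of the contribution pool scaled by the multiplier.
    """
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Free-riding pays individually even though full cooperation is socially optimal.
all_in = public_goods_payoffs([20.0] * 4)            # everyone contributes fully
defect = public_goods_payoffs([0.0, 20.0, 20.0, 20.0])  # one player free-rides
```

Because each contributed unit returns only multiplier/n < 1 to the contributor, the free-rider earns more than any full contributor, which is exactly the tension between selfish decisions and the social optimum described above.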
Nature-inspired computational intelligence for financial contagion modelling
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Financial contagion refers to a scenario in which small shocks, which initially affect only a few financial institutions or a particular region of the economy, spread to the rest of the financial sector and to other countries whose economies were previously healthy. This resembles the “transmission” of a medical disease. Financial contagion happens at both the domestic and the international level. At the domestic level, transmission is usually triggered by the failure of a domestic bank or financial intermediary that defaults on inter-bank liabilities, sells assets in a fire sale, and undermines confidence in similar banks. An example of this phenomenon is the failure of Lehman Brothers and the subsequent turmoil in the US financial markets. International financial contagion happens in both advanced and developing economies, and is the transmission of financial crises across financial markets. Within the current globalised financial system, with large volumes of cash flow and the cross-regional operations of large banks and hedge funds, financial contagion usually happens simultaneously among domestic institutions and across countries. There is no conclusive definition of financial contagion; most research papers study contagion by analyzing the change in the variance-covariance matrix during periods of market turmoil. King and Wadhwani (1990) first test the correlations between the US, UK and Japan during the US stock market crash of 1987. Boyer (1997) finds significant increases in correlation during financial crises, reinforcing a definition of financial contagion as a change in correlation during the crash period. Forbes and Rigobon (2002) give a definition of financial contagion. In their work, the term interdependence is used as the alternative to contagion. They claim that, for the period they study, there is no contagion but only interdependence.
Interdependence leads to common price movements during periods both of stability and of turmoil. In the past two decades, many studies (e.g. Kaminsky et al., 1998; Kaminsky, 1999) have developed early warning systems focused on the origins of financial crises rather than on financial contagion. Other authors (e.g. Forbes and Rigobon, 2002; Caporale et al., 2005) have instead focused on studying contagion or interdependence. In this thesis, an overall mechanism is proposed that simulates the characteristics of a crisis propagating through contagion. Within that scope, a new co-evolutionary market model is developed in which some of the technical traders change their behaviour during a crisis and transform into herd traders, making their decisions based on market sentiment rather than on underlying strategies or factors. The thesis focuses on the transformation of market interdependence into contagion and on the effects of contagion. The author first builds a multi-national platform that allows different types of players to trade, implementing their own rules and considering information from the domestic and a foreign market. Traders' strategies and the performance of the simulated domestic market are trained using historical prices on both markets, optimizing the artificial market's parameters through immune particle swarm optimization (I-PSO) techniques. The author also introduces a mechanism contributing to the transformation of technical into herd traders. A generalized autoregressive conditional heteroscedasticity copula (GARCH-copula) model is further applied to calculate the tail dependence between the affected market and the origin of the crisis; that parameter is used in the fitness function for selecting the best solutions within the evolving population of possible model parameters, and therefore in the optimization criteria for the contagion simulation.
The overall model is also applied in predictive mode: the author optimizes in the pre-crisis period using data from the domestic market and the crisis-origin foreign market, and predicts the affected domestic market in the crisis period using data from the foreign market.
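The calibration step described above relies on particle swarm optimization. Below is a minimal sketch of a plain PSO minimizer; the thesis's immune-PSO hybrid adds immune-system operators that are not reproduced here, and the test function and all parameter values are illustrative only:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimize `objective` over a box with basic particle swarm optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia plus attraction toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy fitness standing in for the model-calibration error: the sphere function.
best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```

With the sphere function as a stand-in fitness, the swarm converges toward the origin; in the thesis the fitness would instead score how well simulated prices reproduce historical ones, augmented by the GARCH-copula tail-dependence term.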
Using MapReduce Streaming for Distributed Life Simulation on the Cloud
Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
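An MR formulation of Conway's life can be sketched as a map step in which every live cell "votes" for its eight neighbours, and a reduce step that applies the birth/survival rules per cell. This is a single-process illustration of the pattern, not the authors' optimized Elastic MR streaming code; in Hadoop streaming, the mapper and reducer would instead read and write key-value lines on stdin/stdout:

```python
from collections import defaultdict

def mapper(live_cells):
    """Map step: each live cell votes for its 8 neighbours and flags itself."""
    for (x, y) in live_cells:
        yield (x, y), "alive"
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    yield (x + dx, y + dy), 1

def reducer(pairs):
    """Reduce step: group values by cell and apply Conway's rules."""
    groups = defaultdict(list)
    for key, val in pairs:
        groups[key].append(val)
    next_gen = set()
    for cell, vals in groups.items():
        n = sum(v for v in vals if v != "alive")  # live-neighbour count
        alive = "alive" in vals
        if n == 3 or (alive and n == 2):          # birth or survival
            next_gen.add(cell)
    return next_gen

def step(live_cells):
    return reducer(mapper(live_cells))

# A blinker oscillates between a horizontal and a vertical bar of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Because each key-value pair is independent, the neighbour votes can be partitioned across workers arbitrarily, which is what makes the lattice update fit the MR streaming pattern.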
The force interpretation of evolutionary theory: scope and limits
Evolutionary theory is usually understood as a causal theory in which the main causes of evolutionary change are identified with natural selection, genetic drift, mutation, and migration. Following this reasoning, many biologists and philosophers of biology have structured evolutionary theory by analogy with Newtonian mechanics, understanding it as a theory of forces. The key point on which the analogy rests is that the structure of Newtonian mechanics makes it possible to identify the causal elements of the system of interest. In this way, evolutionary theory gains a useful explanatory picture of the evolutionary phenomenon, structured as a ‘quasi-Newtonian theory’ (Maudlin 2004). This way of structuring or conceptualizing a theory along Newtonian lines has been used in different areas: in the composition of colours, of desires, of services, in the composition of “social forces”, of duties, in ethical questions, and in the composition of causal powers in general (Massin 2016).
This analogy, however, has been challenged over the last decade, not only by showing its limitations but also by postulating a radically new view according to which the so-called evolutionary forces or causes would be nothing more than pseudoprocesses. Causal action would lie at the level of individuals, with selection, drift, and so on being statistical summaries of those facts. In this work we propose to analyze this controversy, to show the virtues but also the limitations of the force analogy and, above all, to discern the proper structure of evolutionary theory, paying special attention to genetic drift, as it is the causal factor that fits worst within the framework of forces.
Since Darwin’s times, evolutionary theory has been conceptualized as a causal theory. In order to emphasize this causal view, textbooks and most of the evolutionary literature talk about evolutionary forces acting on a population. Elliott Sober, in his influential book The Nature of Selection (1984), argues that evolutionary theory is a theory of forces because, in the same way that different forces of Newtonian mechanics cause changes in the movement of bodies, evolutionary forces cause changes in gene and/or genotype frequencies. As a result, selection, drift, mutation and migration would be the main forces or causes of evolution. Nevertheless, the appropriateness of the causal view, and particularly the Newtonian analogy, has been challenged in the last decade. Several authors (Denis Walsh, Mohan Matthen, André Ariew…) have argued for a new view, the statistical view, where the evolutionary process and its parts (selection, drift, etc.) are mere statistical outcomes, inseparable from each other. The so-called evolutionary forces should be conceptualized as statistical population-level tendencies, abandoning any causal role for them.
I have developed a third way to defend the causal view. Authors committed to the Newtonian analogy capture the common theoretical structure between evolutionary theory and Newtonian mechanics. On the other hand, causalists not committed to the Newtonian analogy share the statisticalists’ concern about some important problems in the force interpretation (the most important being the mismatch in the analogy produced by the lack of directionality of genetic drift). My approach postulates a broader causal framework (a difference-maker account of causation) that unifies different causalist approaches and avoids problems such as searching for a directionality of genetic drift. In addition, it clarifies the features that any Zero-Cause Law must satisfy. Finally, my approach explains why the force metaphor was formulated in the first place and why it still persists in the evolutionary literature. The Newtonian analogy is illuminating insofar as it helps reveal the causal structure of evolutionary theory. In other words, the theory is constructed from a Zero-Cause Law that stipulates a default behaviour, and it proceeds by introducing factors that alter that behaviour.
On the other hand, I have developed a new analysis of the Price equation, showing its virtues as a key equation in evolutionary theory and overcoming recent critiques of its usefulness.
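The Price equation referred to above partitions the change in a population's mean trait as Δz̄ = Cov(w, z)/w̄ + E(w Δz)/w̄, where w is fitness and z the trait. A quick numerical check of the selection term, under the simplifying assumption of perfect transmission (so the second term vanishes) and with made-up numbers:

```python
import numpy as np

# Hypothetical toy population: trait values z and offspring counts (fitness) w.
z = np.array([0.2, 0.5, 0.8, 0.4])   # parental trait values
w = np.array([1.0, 2.0, 3.0, 2.0])   # offspring per parent

w_bar = w.mean()
# Price equation with no transmission bias: the change in the mean trait
# equals the fitness-trait covariance scaled by mean fitness.
delta_z_price = np.cov(w, z, bias=True)[0, 1] / w_bar

# Direct bookkeeping: offspring inherit the parental trait exactly.
z_offspring_mean = np.sum(w * z) / np.sum(w)
delta_z_direct = z_offspring_mean - z.mean()
```

Both routes give the same answer, which is the sense in which the equation is an identity: it re-expresses the bookkeeping of inheritance as a covariance between fitness and trait.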
Differential evolution of non-coding DNA across eukaryotes and its close relationship with complex multicellularity on Earth
Here, I elaborate on the hypothesis that complex multicellularity (CM, sensu Knoll) is a major evolutionary transition (sensu Szathmary), which has convergently evolved a few times in Eukarya only: within red and brown algae, plants, animals, and fungi. Paradoxically, CM seems to correlate with the expansion of non-coding DNA (ncDNA) in the genome rather than with genome size or the total number of genes. Thus, I investigated the correlation between genome and organismal complexities across 461 eukaryotes under a phylogenetically controlled framework. To that end, I introduce the first formal definitions and criteria to distinguish ‘unicellularity’, ‘simple’ (SM) and ‘complex’ multicellularity. Rather than using the limited available estimations of unique cell types, the 461 species were classified according to our criteria by reviewing their life cycle and body plan development from the literature. Then, I investigated the evolutionary association between genome size and 35 genome-wide features (introns and exons from protein-coding genes, repeats and intergenic regions) describing the coding and ncDNA complexities of the 461 genomes. To that end, I developed ‘GenomeContent’, a program that systematically retrieves massive multidimensional datasets from gene annotations and calculates over 100 genome-wide statistics. R-scripts coupled to parallel computing were created to calculate >260,000 phylogenetically controlled pairwise correlations. As previously reported, both repetitive and non-repetitive DNA are found to scale strongly and positively with genome size across most eukaryotic lineages. In contrast to previous studies, I demonstrate that changes in the length and repeat composition of introns are only weakly or moderately associated with changes in genome size at the global phylogenetic scale, while changes in intron abundance (within and across genes) are either not or only very weakly associated with changes in genome size.
Our evolutionary correlations are robust to different phylogenetic regression methods, uncertainties in the tree of eukaryotes, variations in genome size estimates, and randomly reduced datasets. Then, I investigated the correlation between the 35 genome-wide features and the cellular complexity of the 461 eukaryotes with phylogenetic Principal Component Analyses. Our results endorse a genetic distinction between SM and CM in Archaeplastida and Metazoa, but not so clearly in Fungi. Remarkably, complex multicellular organisms and their closest ancestral relatives are characterized by high intron-richness, regardless of genome size. Finally, I argue why and how a vast expansion of non-coding RNA (ncRNA) regulators, rather than of novel protein regulators, can promote the emergence of CM in Eukarya. As a proof of concept, I co-developed a novel ‘ceRNA-motif pipeline’ for the prediction of “competing endogenous” ncRNAs (ceRNAs) that regulate microRNAs in plants. We identified three candidate ceRNA motifs: MIM166, MIM171 and MIM159/319, which were found to be conserved across land plants and to be potentially involved in diverse developmental processes and stress responses. Collectively, the findings of this dissertation support our hypothesis that CM on Earth is a major evolutionary transition promoted by the expansion of two major ncDNA classes, introns and regulatory ncRNAs, which might have boosted the irreversible commitment of cell types in certain lineages by canalizing the timing and kinetics of the eukaryotic transcriptome.
Cover page
Abstract
Acknowledgements
Index
1. The structure of this thesis
1.1. Structure of this PhD dissertation
1.2. Publications of this PhD dissertation
1.3. Computational infrastructure and resources
1.4. Disclosure of financial support and information use
1.5. Acknowledgements
1.6. Author contributions and use of impersonal and personal pronouns
2. Biological background
2.1. The complexity of the eukaryotic genome
2.2. The problem of counting and defining “genes” in eukaryotes
2.3. The “function” concept for genes and “dark matter”
2.4. Increases of organismal complexity on Earth through multicellularity
2.5. Multicellularity is a “fitness transition” in individuality
2.6. The complexity of cell differentiation in multicellularity
3. Technical background
3.1. The Phylogenetic Comparative Method (PCM)
3.2. RNA secondary structure prediction
3.3. Some standards for genome and gene annotation
4. What is in a eukaryotic genome? GenomeContent provides a good answer
4.1. Background
4.2. Motivation: an interoperable tool for data retrieval of gene annotations
4.3. Methods
4.4. Results
4.5. Discussion
5. The evolutionary correlation between genome size and ncDNA
5.1. Background
5.2. Motivation: estimating the relationship between genome size and ncDNA
5.3. Methods
5.4. Results
5.5. Discussion
6. The relationship between non-coding DNA and Complex Multicellularity
6.1. Background
6.2. Motivation: How to define and measure complex multicellularity across eukaryotes?
6.3. Methods
6.4. Results
6.5. Discussion
7. The ceRNA motif pipeline: regulation of microRNAs by target mimics
7.1. Background
7.2. A revisited protocol for the computational analysis of Target Mimics
7.3. Motivation: a novel pipeline for ceRNA motif discovery
7.4. Methods
7.5. Results
7.6. Discussion
8. Conclusions and outlook
8.1. Contributions and lessons for the bioinformatics of large-scale comparative analyses
8.2. Intron features are evolutionarily decoupled among themselves and from genome size throughout Eukarya
8.3. “Complex multicellularity” is a major evolutionary transition
8.4. Role of RNA throughout the evolution of life and complex multicellularity on Earth
9. Supplementary Data
Bibliography
Curriculum Scientiae
Selbständigkeitserklärung (declaration of authorship)
Implementation of Gaussian process models for non-linear system identification
This thesis is concerned with investigating the use of Gaussian Process (GP) models for the identification of nonlinear dynamic systems. The Gaussian Process model is a non-parametric approach to system identification where the model of the underlying system is to be identified through the application of Bayesian analysis to empirical data. The GP modelling approach has been proposed as an alternative to more conventional methods of system identification due to a number of attractive features. In particular, the Bayesian probabilistic framework employed by the GP model has been shown to have potential in tackling the problems found in the optimisation of complex nonlinear models such as those based on multiple model or neural network structures. Furthermore, due to this probabilistic framework, the predictions made by the GP model are probability distributions composed of mean and variance components. This is in contrast to more conventional methods where a predictive point estimate is typically the output of the model. This additional variance component of the model output has been shown to be of potential use in model-predictive or adaptive control implementations. A further property that is of potential interest to those working on system identification problems is that the GP model has been shown to be particularly effective in identifying models from sparse datasets. Therefore, the GP model has been proposed for the identification of models in off-equilibrium regions of operating space, where more established methods might struggle due to a lack of data.
The majority of the existing research into modelling with GPs has concentrated on detailing the mathematical methodology and theoretical possibilities of the approach. Furthermore, much of this research has focused on the application of the method toward statistics and machine learning problems. This thesis investigates the use of the GP model for identifying nonlinear dynamic systems from an engineering perspective. In particular, it is the implementation aspects of the GP model that are the main focus of this work. Due to its non-parametric nature, the GP model may also be considered a ‘black-box’ method as the identification process relies almost exclusively on empirical data, and not on prior knowledge of the system. As a result, the methods used to collect and process this data are of great importance, and the experimental design and data pre-processing aspects of the system identification procedure are investigated in detail. Therefore, in the research presented here the inclusion of prior system knowledge into the overall modelling procedure is shown to be an invaluable asset in improving the overall performance of the GP model.
In previous research, the computational implementation of the GP modelling approach has been shown to become problematic for applications where the size of the training dataset is large (i.e. one thousand or more points). This is due to the requirement in the GP modelling approach for repeated inversion of a covariance matrix whose size is dictated by the number of points included in the training dataset. Therefore, in order to maintain the computational viability of the approach, a number of different strategies have been proposed to lessen the computational burden. Many of these methods seek to make the covariance matrix sparse through the selection of a subset of existing training data. However, instead of operating on an existing training dataset, in this thesis an alternative approach is proposed where the training dataset is specifically designed to be as small as possible whilst still containing as much information as possible. In order to achieve this goal of improving the ‘efficiency’ of the training dataset, the basis of the experimental design involves adopting a more deterministic approach to exciting the system, rather than the more common random excitation approach used for the identification of black-box models. This strategy is made possible through the active use of prior knowledge of the system.
The implementation of the GP modelling approach has been demonstrated on a range of simulated and real-world examples. The simulated examples investigated include both static and dynamic systems. The GP model is then applied to two laboratory-scale nonlinear systems: a Coupled Tanks system where the volume of liquid in the second tank must be predicted, and a Heat Transfer system where the temperature of the airflow along a tube must be predicted. Further extensions to the GP model are also investigated, including the propagation of uncertainty from one prediction to the next, the application of sparse matrix methods, and the use of derivative observations. A feature of the application of the GP modelling approach to nonlinear system identification problems is the reliance on the squared exponential covariance function. In this thesis the benefits and limitations of this particular covariance function are made clear, and the use of alternative covariance functions and ‘mixed-model’ implementations is also discussed.
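The core GP regression computation with the squared exponential covariance function discussed above can be sketched in a few lines. This is a textbook illustration rather than the thesis code: hyperparameters are fixed by hand instead of being optimized, and the static toy data are hypothetical:

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0, signal=1.0):
    """Squared exponential covariance k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return signal ** 2 * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, length=1.0, signal=1.0, noise=1e-2):
    """GP regression: predictive mean and variance at the test inputs."""
    K = sq_exp_kernel(x_train, x_train, length, signal) + noise * np.eye(len(x_train))
    Ks = sq_exp_kernel(x_train, x_test, length, signal)
    Kss = sq_exp_kernel(x_test, x_test, length, signal)
    alpha = np.linalg.solve(K, y_train)       # K^{-1} y
    mean = Ks.T @ alpha                       # predictive mean
    v = np.linalg.solve(K, Ks)
    var = np.diag(Kss - Ks.T @ v)             # predictive variance per test point
    return mean, var

# Hypothetical static example: noisy observations of sin(x).
rng = np.random.default_rng(1)
x_tr = np.linspace(0, 2 * np.pi, 20)
y_tr = np.sin(x_tr) + 0.05 * rng.standard_normal(20)
mean, var = gp_predict(x_tr, y_tr, np.array([np.pi / 2]))
```

The predictive variance is small near the training data and grows away from it, which is the uncertainty signal the thesis highlights as potentially useful in model-predictive or adaptive control.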
A Multidisciplinary Study Of Antecedents To Voluntary Knowledge Contribution Within Online Forums
One challenge faced by online forums is the provision of a sustainable supply of knowledge contributions (Wasco et al., 2009). Previous studies have identified online trust and perceived critical mass as antecedents of online knowledge contributions. However, the dynamic aspects of these antecedents have been little investigated. Moreover, how these dynamics jointly impact members’ willingness to contribute knowledge is an open question to be further investigated.
To examine the dynamic antecedents of continued online knowledge contribution, this thesis develops a holistic approach through three studies. Drawing on the decomposed theory of planned behaviour (Taylor and Todd, 1995), study one identifies dynamic antecedents of intentional online contribution behaviours. Covariance-based structural equation modelling analysis of the 910 responses obtained shows that perceived critical mass and trust in online forums, which mediates trust in members, are the salient antecedents in the context of online forums. The development of trust in online forums is investigated through a time-series approach in study two. Findings from webnographic and machine learning analysis show that the cognitive dimension of institutional trust is essential in initial trust building. Study three uses network analysis techniques to explore the role of critical mass members. Results indicate that only 5% of critical mass members can sustain online forums. However, critical mass members compete for their connections, suggesting the importance of brand building at the beginning of online forum development. A summary of findings from the three studies suggests that the structural assurance of online forums can mediate the effects of interactions between members towards a coalition of membership over time. The study deepens knowledge of voluntary contribution within online forums by taking a dynamic approach, whereas previous studies in this field are predominantly cross-sectional and non-predictive.