41 research outputs found

    Rhythms and Evolution: Effects of Timing on Survival

    The evolution of metabolic regulation is an intertwined process in which different strategies are constantly being developed towards a cognitive ability to perceive and respond to the environment. Organisms depend on the orchestration of a complex set of chemical reactions: maintaining homeostasis in a changing environment while simultaneously directing material and energetic resources to where they are needed. The success of an organism requires efficient metabolic regulation, highlighting the connection between evolution, population dynamics and the underlying biochemistry. In this work, I represent organisms as coupled information-processing networks, that is, gene-regulatory networks receiving signals from the environment and acting on chemical reactions, eventually affecting material flows. I discuss the mechanisms through which metabolic control improves during evolution and how the nonlinearities of competition influence this solution-searching process. The propagation of populations through the resulting landscapes generally points to the rhythm of cell division as an essential phenotypic feature driving evolution. Subsequently, different representations of organisms as oscillators are constructed to indicate more precisely how the interplay between competition, maturation timing and cell-division synchronisation affects the expected evolutionary outcomes, which do not always lead to the "survival of the fastest".
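The abstract does not give the oscillator model itself; as a generic illustration of how a population of coupled cycles can lock to a common rhythm, here is a minimal mean-field Kuramoto sketch. Reading the phases as cell-division cycles is an assumption for illustration only, not the thesis's construction.

```python
import math
import random

def kuramoto_order(frequencies, coupling, dt=0.01, n_steps=5000, seed=0):
    """Mean-field Kuramoto model of coupled phase oscillators, a generic
    stand-in for cell-division cycles that may synchronise.  Returns the
    order parameter r in [0, 1]; r near 1 means the population beats to a
    common rhythm."""
    rng = random.Random(seed)
    n = len(frequencies)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(n_steps):
        # mean-field order parameter r * exp(i * psi)
        sx = sum(math.cos(p) for p in phases) / n
        sy = sum(math.sin(p) for p in phases) / n
        r, psi = math.hypot(sx, sy), math.atan2(sy, sx)
        # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, frequencies)]
    sx = sum(math.cos(p) for p in phases) / n
    sy = sum(math.sin(p) for p in phases) / n
    return math.hypot(sx, sy)

# identical intrinsic rhythms with coupling synchronise;
# uncoupled, spread rhythms stay incoherent
r_sync = kuramoto_order([1.0] * 50, coupling=2.0)
r_incoherent = kuramoto_order([1.0 + 0.02 * i for i in range(50)], coupling=0.0)
```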

    Evolutionary dynamics, topological disease structures, and genetic machine learning

    Topological evolution is a new dynamical systems model of biological evolution occurring within a genomic state space. It can be modeled equivalently as a stochastic dynamical system, a stochastic differential equation, or a partial differential equation drift-diffusion model. An application of this approach is a model of disease evolution tracing diseases in ways similar to standard functional traits (e.g., organ evolution). Genetically embedded diseases become evolving functional components of species-level genomes. The competition between species-level evolution (which tends to maintain diseases) and individual evolution (which acts to eliminate them), yields a novel structural topology for the stochastic dynamics involved. In particular, an unlimited set of dynamical time scales emerges as a means of timing different levels of evolution: from individual to group to species and larger units. These scales exhibit a dynamical tension between individual and group evolutions, which are modeled on very different (fast and slow, respectively) time scales. This is analyzed in the context of a potentially major constraint on evolution: the species-level enforcement of lifespan via (topological) barriers to genomic longevity. This species-enforced behavior is analogous to certain types of evolutionary altruism, but it is denoted here as extreme altruism based on its potential shaping through mass extinctions. We give examples of biological mechanisms implementing some of the topological barriers discussed and provide mathematical models for them. This picture also introduces an explicit basis for lifespan-limiting evolutionary pressures. This involves a species-level need to maintain flux in its genome via a paced turnover of its biomass. This is necessitated by the need for phenomic characteristics to keep pace with genomic changes through evolution. Put briefly, the phenome must keep up with the genome, which occurs with an optimized limited lifespan. 
An important consequence of this model is a new role for diseases in evolution. Rather than being the accidental side-effects they are commonly taken for, they play a central functional role in shaping an optimal lifespan for a species, implemented through the topology of their embedding into the genome state space. This includes cancers, which are known to be embedded into the genome in complex and sometimes hair-triggered ways arising from DNA damage. Such cancers are also known to act in engineered and teleological ways that have been difficult to explain using currently popular theories of intra-organismic cancer evolution. This alternative inter-organismic picture presents cancer evolution as occurring over much longer (evolutionary) time scales, rather than the very short somatic evolution that occurs in individual cancers. This in turn may explain some evolved, intricate, and seemingly engineered properties of cancer. This dynamical evolutionary model is framed in a multiscale picture in which different time scales are almost independently active in the evolutionary process, acting on semi-independent parts of the genome. We additionally move from natural evolution to artificial implementations of evolutionary algorithms. We study genetic programming for the structured construction of machine learning features in a new structural risk minimization environment. While genetic programming in feature engineering is not new, we propose a Lagrangian optimization criterion for defining new feature sets, inspired by structural risk minimization in statistical learning. We bifurcate the optimization of this Lagrangian into two exhaustive categories involving local and global search. The former is accomplished through local descent within given basins of attraction, while the latter is done through a combinatorial search for new basins via an evolutionary algorithm.
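The equivalence the abstract cites between a stochastic dynamical system, a stochastic differential equation, and a drift-diffusion model can be made concrete with the standard Euler–Maruyama discretisation. The drift and noise terms below are illustrative placeholders, not the paper's actual genomic-state-space model.

```python
import random

def euler_maruyama(drift, diffusion, x0, dt, n_steps, seed=0):
    """Simulate dX = drift(X) dt + diffusion(X) dW with the Euler-Maruyama
    scheme, the standard bridge between an SDE and its sample paths."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)  # Wiener increment ~ N(0, dt)
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path

# illustrative Ornstein-Uhlenbeck-style drift toward an optimum at x = 1
path = euler_maruyama(drift=lambda x: 1.0 - x,
                      diffusion=lambda x: 0.1,
                      x0=0.0, dt=0.01, n_steps=2000)
```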

    Multinomial logit processes and preference discovery: inside and outside the black box

    We provide two characterizations, one axiomatic and the other neuro-computational, of the dependence of choice probabilities on deadlines within the widely used softmax representation. Our axiomatic analysis provides a behavioural foundation of softmax (also known as the Multinomial Logit Model). Our neuro-computational derivation provides a biologically inspired algorithm that may explain the emergence of softmax in choice behaviour. Jointly, the two approaches provide a thorough understanding of softmaximization in terms of internal causes (neuro-physiological mechanisms) and external effects (testable implications).
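For concreteness, a minimal sketch of the softmax (multinomial logit) choice rule; treating the deadline as a temperature-like parameter is only an illustrative reading of the dependence described above, not the paper's axiomatization.

```python
import math

def softmax_choice_probabilities(values, temperature):
    """Softmax / multinomial-logit rule: P(i) proportional to exp(v_i / T).
    Low T (e.g. ample deliberation time) concentrates choice on the best
    option; high T yields noisier, near-uniform choice."""
    scaled = [v / temperature for v in values]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

values = [1.0, 2.0, 3.0]
noisy = softmax_choice_probabilities(values, temperature=5.0)  # near-uniform
sharp = softmax_choice_probabilities(values, temperature=0.1)  # near-argmax
```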

    Anticipation in multiple-criteria decision making under uncertainty

    Advisor: Fernando José Von Zuben. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
Abstract: The presence of uncertainty in future outcomes can lead to indecision in choice processes, especially when eliciting the relative importance of multiple decision criteria and of long-term vs. near-term performance. Some decisions, however, must be taken under incomplete information, which may result in precipitate actions with unforeseen consequences. When a solution must be selected under multiple conflicting views for operating in time-varying and noisy environments, implementing flexible provisional alternatives can be critical to circumvent the lack of complete information while keeping future options open. Anticipatory engineering can then be regarded as the strategy of designing flexible solutions that enable decision makers to respond robustly to unpredictable scenarios.
This strategy can thus mitigate the risks of strong unintended commitments to uncertain alternatives, while increasing adaptability to future changes. In this thesis, the roles of anticipation and of flexibility in automating sequential multiple-criteria decision-making processes under uncertainty are investigated. The dilemma of assigning relative importance to decision criteria and to immediate rewards under incomplete information is then handled by autonomously anticipating flexible decisions predicted to maximally preserve the diversity of future choices. An online anticipatory learning methodology is then proposed for improving the range and quality of future trade-off solution sets. This goal is achieved by predicting maximal expected hypervolume sets, for which the anticipation capabilities of multi-objective metaheuristics are augmented with Bayesian tracking in both the objective and search spaces. The methodology has been applied to obtaining investment decisions, which are shown to significantly improve the future hypervolume of trade-off financial portfolios on out-of-sample stock data when compared to a myopic strategy. Moreover, implementing flexible portfolio rebalancing decisions was confirmed as a significantly better strategy than randomly choosing an investment decision from the evolved stochastic efficient frontier, in all tested artificial and real-world markets. Finally, the results suggest that anticipating flexible choices led to portfolio compositions that are significantly correlated with the observed improvements in out-of-sample future expected hypervolume. Doctorate: Engenharia de Computação; degree: Doutor em Engenharia Elétrica.
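The hypervolume indicator that the thesis maximises in expectation can be computed exactly in two objectives by a simple sweep over the non-dominated points. The points below are made up, and this sketch is not the thesis's anticipatory algorithm.

```python
def hypervolume_2d(points, reference):
    """Exact hypervolume for a 2-D minimisation problem: the area dominated
    by the non-dominated points and bounded by the reference point."""
    front = []
    best_f2 = float("inf")
    for f1, f2 in sorted(set(points)):  # sweep in increasing f1
        if f2 < best_f2:                # strictly better on f2 => kept
            front.append((f1, f2))
            best_f2 = f2
    rx, ry = reference
    hv, prev_f2 = 0.0, ry
    for f1, f2 in front:
        hv += (rx - f1) * (prev_f2 - f2)  # add one rectangular slab
        prev_f2 = f2
    return hv

# three trade-off points; the dominated point (3, 3) adds nothing
hv = hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 3.0), (3.0, 1.0)],
                    reference=(4.0, 4.0))  # -> 6.0
```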

    Finite-Time Thermodynamics

    The theory built around the concept of finite time describes how processes of any nature can be optimized when their rate is required to be non-negligible, i.e., when they must come to completion in a finite time. What the theory makes explicit is “the cost of haste”. Intuitively, it is quite obvious that you drive your car differently when you want to reach your destination as quickly as possible than when you are running out of gas. Finite-time thermodynamics quantifies such opposing requirements and may provide the optimal control to achieve the best compromise. The theory was initially developed for heat engines (steam, Otto, Stirling, among others) and for refrigerators, but it has by now spread into essentially all areas of dynamic systems, from the most abstract to the most practical. The present collection shows some fascinating current examples.
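A classic quantification of “the cost of haste” is the Curzon–Ahlborn result for an endoreversible heat engine operated at maximum power rather than reversibly; a short numerical comparison with the Carnot bound:

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Reversible (infinitely slow) upper bound on heat-engine efficiency."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Efficiency of an endoreversible engine at *maximum power*: the
    finite-time counterpart of the Carnot bound."""
    return 1.0 - math.sqrt(t_cold / t_hot)

# example reservoirs: 500 K and 300 K
eta_c = carnot_efficiency(500.0, 300.0)           # 0.40
eta_ca = curzon_ahlborn_efficiency(500.0, 300.0)  # ~0.225
```

Running in finite time costs roughly 0.17 in efficiency here: the price paid for non-negligible power output.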

    Jamming, glass transition, and entropy in monodisperse and polydisperse hard-sphere packings

    This thesis is dedicated to the investigation of properties of computer-generated monodisperse and polydisperse three-dimensional hard-sphere packings, frictional and frictionless. For frictionless packings, we (i) assess their total (fluid) entropy over a wide range of packing densities (solid volume fractions), (ii) investigate the structure of their phase space, and (iii) estimate several characteristic densities (the J-point, the ideal glass transition density, and the ideal glass density). For frictional packings, we estimate the Edwards entropy over a wide range of densities. We utilize the Lubachevsky–Stillinger, Jodrey–Tory, and force-biased packing generation algorithms. We always generate packings of 10000 particles in cubic boxes with periodic boundary conditions. For the estimation of the Edwards entropy, we also use experimentally produced and reconstructed packings of fluidized beds. In polydisperse cases, we use the log-normal, Pareto, and Gaussian particle diameter distributions with polydispersities (relative standard deviations of the radii) from 0.05 (5%) to 0.3 (30%) in steps of 0.05. This work consists of six chapters, each corresponding to a published paper. In the first chapter, we introduce a method to estimate the probability of inserting a particle into a packing (the insertion probability) through the so-called pore-size (nearest-neighbour) distribution. Under certain assumptions about the structure of the phase space, we link this probability to the (total) entropy of packings. In this chapter, we use only frictionless monodisperse hard-sphere packings. We conclude that the two characteristic particle volume fractions (or densities, φ) often associated with the Random Close Packing limit, φ ≈ 0.64 and φ ≈ 0.65, may refer to two distinct phenomena: the J-point and the Glass Close Packing limit (the ideal glass density), respectively.
In the second chapter, we investigate the behaviour of jamming densities of frictionless polydisperse packings produced with different packing generation times. Packings produced quickly are structurally closer to Poisson packings and jam at the J-point (φ ≈ 0.64 for monodisperse packings). Jamming densities (inherent structure densities) of packings with sufficient polydispersity that were produced slowly approach the glass close packing (GCP) limit. Monodisperse packings overcome the GCP limit (φ ≈ 0.65) because they can incorporate crystalline regions. Their jamming densities eventually approach the face-centered cubic (FCC) / hexagonal close packing (HCP) crystal density φ = π/(3 √2) ≈ 0.74. These results support the premise that φ ≈ 0.64 and φ ≈ 0.65 in the monodisperse case may refer to the J-point and the GCP limit, respectively. Frictionless random jammed packings can be produced with any density in-between. In the third chapter, we add one more intermediate step to the procedure from the second chapter. We take the unjammed (initial) packings in a wide range of densities from the second chapter, equilibrate them, and only then jam (search for their inherent structures). Thus, we investigate the structure of their phase space. We determine the J-point, ideal glass transition density, and ideal glass density. We once again recover φ ≈ 0.64 as the J-point and φ ≈ 0.65 as the GCP limit for monodisperse packings. The ideal glass transition density for monodisperse packings is estimated at φ ≈ 0.585. In the fourth chapter, we demonstrate that the excess entropies of the polydisperse hard-sphere fluid at our estimates of the ideal glass transition densities do not significantly depend on the particle size distribution. 
This suggests a simple procedure for estimating the ideal glass transition density for an arbitrary particle size distribution by solving an equation which requires that the excess fluid entropy equal a universal value characteristic of the ideal glass transition. Excess entropies for an arbitrary particle size distribution and density can be computed through equations of state, for example the Boublík–Mansoori–Carnahan–Starling–Leland (BMCSL) equation. In the fifth chapter, we improve the procedure from the first chapter. We retain the insertion probability estimation from the pore-size distribution, but switch from the initial assumptions about the structure of the phase space to the more general Widom particle insertion method, which for hard spheres links the insertion probability to the excess chemical potential. With the chemical potential at hand, we can estimate the excess fluid entropy, which complies well with theoretical predictions from the BMCSL equation of state. In the sixth chapter, we extend the Widom particle insertion method from the fifth chapter, as well as the insertion probability estimation method from the first chapter, to determine an upper bound on the Edwards entropy per particle in monodisperse frictional packings. The Edwards entropy counts the number of mechanically stable configurations at a given density (density interval). We demonstrate that the Edwards entropy estimate is maximal at the Random Loose Packing (RLP) limit (φ ≈ 0.55) and decreases as density increases. In this chapter, we accompany computer-generated packings with experimentally produced and reconstructed ones. Overall, this study extends the understanding of the glass transition, jamming, and the Edwards entropy in systems of hard spheres. The results can help comprehend these phenomena in more complex molecular, colloidal, and granular systems.
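The Widom insertion step described above can be sketched as a toy Monte Carlo estimate: place a test sphere at random and count non-overlapping placements; for hard spheres, the excess chemical potential is mu_ex = -kT ln P_insert. All numbers below (box size, trial count) are illustrative, not the thesis's settings.

```python
import random

def insertion_probability(centres, diameter, box, n_trials=5000, seed=1):
    """Toy Widom insertion for equal hard spheres: estimate the probability
    that a randomly placed test sphere overlaps no existing sphere, using
    minimum-image periodic boundaries in a cubic box of side `box`."""
    rng = random.Random(seed)
    d2 = diameter * diameter
    hits = 0
    for _ in range(n_trials):
        p = [rng.uniform(0.0, box) for _ in range(3)]
        overlap = False
        for c in centres:
            r2 = 0.0
            for a, b in zip(p, c):
                dx = abs(a - b)
                dx = min(dx, box - dx)  # minimum-image convention
                r2 += dx * dx
            if r2 < d2:                 # centres closer than one diameter
                overlap = True
                break
        if not overlap:
            hits += 1
    return hits / n_trials

# a nearly empty box: insertion almost always succeeds
p = insertion_probability([(5.0, 5.0, 5.0)], diameter=1.0, box=10.0)
```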