11,728 research outputs found

    UMSL Bulletin 2023-2024

    Get PDF
    The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis.

    Genetic parameters of the linear morphological traits of the Murciano-Granadina goat breed and their relationships with other functional traits

    Get PDF
    Linear appraisal systems (LAS) are effective strategies for systematically collecting zoometric information from animal populations. The LAS traditionally applied in goats was developed considering the variability and scales found in highly selected breeds. Implementing LAS may reduce the time, personnel, and resources needed for large-scale zoometric collection. Moreover, selection for zoometrics determines individuals’ productive longevity, endurance, and enhanced productive abilities and, consequently, long-term profitability. As a result, traditional LAS may no longer cover the different contexts of goat breeds widespread throughout the world, and departures from normality may be indicative of the stage of selection at which a given population can be found. In the first study, an evaluation of the distribution and symmetry properties of twenty-eight zoometric traits was carried out. After the symmetry analysis, the scale readjustment proposal suggested that specific strategies should be implemented, such as the reduction of lower or upper scale levels, the determination of a set evaluation moment at which information is collected separately from young bucks (up to 2 years) and adult bucks (over 2 years), and the addition of upper categories in males, since upper values in the scale were incorrectly clustered together. Thus, the individual analysis of each variable permits determining specific strategies for each trait and serves as a model for other breeds, whether already selected or undergoing selection. The aim of the second study was to propose a method to optimize and validate LAS as opposed to the traditional measuring protocols routinely implemented in Murciano-Granadina goats. The data sample consisted of 41,323 LAS and traditional measuring records belonging to 22,727 herdbook-registered primiparous does, 17,111 multiparous does, and 1,485 bucks. Each record comprised information on 17 linear traits for primiparous and multiparous does and 10 traits for bucks. All zoometric parameters were scored on a 9-point scale. Cronbach’s alpha values suggested a high internal consistency of the optimized variable panel. Measures of model fit, explained variability, and predictive power (MSE, AIC/AICc, and BIC) suggested that a model comprising zoometric LAS scores performed better than traditional zoometry. Optimization procedures resulted in reduced models able to capture the variability of dairy-related zoometric traits without noticeable detrimental effects on model validity properties. The third study aimed to perform an individual analysis of each variable that permits determining specific strategies for each trait and serves as a model for other breeds. Among the strategies proposed are the reduction/readjustment of scale levels, as for limb-related traits, the extension of the scale, as for the stature of males, and the subdivision of the scale used in males into two categories, bucks younger than two years and bucks two years old and older. The Murciano-Granadina goat breed has drifted towards better dairy-linked conformation traits without losing the zoometric basis that confers enhanced adaptability to the environment. Hence, such strategies can help achieve a better understanding of the momentum of selection for dairy-linked zoometric traits in the Murciano-Granadina population and its future evolution, to enhance the profitability and efficiency of breeding plans.
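As a quick illustration of the internal-consistency check cited above, the sketch below computes Cronbach's alpha for a matrix of LAS scores; the toy score matrix and variable names are hypothetical and not taken from the study's data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (animals x traits) matrix of LAS scores."""
    k = scores.shape[1]                          # number of traits in the panel
    item_vars = scores.var(axis=0, ddof=1)       # variance of each trait
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 6 animals scored on four 9-point LAS traits
las_scores = np.array([
    [5, 6, 5, 7],
    [7, 7, 6, 8],
    [4, 5, 4, 5],
    [8, 8, 7, 9],
    [6, 6, 6, 7],
    [3, 4, 4, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(las_scores):.2f}")
```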
The objective of the fourth study was to evaluate the progress of the heritabilities of the traits comprising the linear appraisal system in the Murciano-Granadina breed over the complete decade from December 2011 to December 2021. Heritability estimates were obtained from multivariate analyses using the BLUP methodology and the MTDFREML software. For the 2021 heritabilities, a simple animal model was applied to records collected from 22,727 primiparous goats and 17,111 multiparous goats belonging to 85 herds. The model included linear and quadratic components for the covariate age and a linear component for days in milk. The fixed effects considered in the model were herd, reproductive status, kidding month, and the herd/year interaction. The animal was considered a random effect. The variables studied included five traits related to structure and capacity, two related to dairy structure, six related to the mammary system, and three related to legs and feet. Heritabilities for structure and capacity traits progressed from 0.22-0.28, with non-convergent variables, in June 2012 to values between 0.10 and 0.41, with all variables converging, in June 2021. Heritabilities for dairy structure progressed from 0.18, with non-convergent variables, in 2011 to 0.17-0.25 in 2021. Heritabilities for mammary system traits progressed from 0.12-0.27, with non-convergent variables, in 2012 to between 0.10 and 0.41 in 2021. For legs and feet, heritabilities progressed from 0.16-0.17, with non-convergent variables, to 0.09-0.22. Genetic progress is evident not only in the heritability values; there has also been a notable reduction in the standard errors of the heritabilities, from 0.100 (0.080-0.120) to 0.000 (0.000-0.001), between 2011 and 2021. These results provide evidence of the enhanced effectiveness and precision of the linear appraisal system applied during the past decade and of its successful integration into the breeding program of the Murciano-Granadina breed. The fifth study estimated genetic and phenotypic parameters for zoometric/LAS traits in Murciano-Granadina goats, estimated genetic and phenotypic correlations among all traits, and determined whether major-area selection would be appropriate or whether adaptability strategies may need to be followed. Heritability estimates for the zoometric/LAS traits were low to high, ranging from 0.09 to 0.43, and the accuracy of estimation has improved over the decades, rendering standard errors negligible. Scale inversion of specific traits may need to be performed before major-area selection strategies are implemented. Genetic and phenotypic correlations suggest that negative selection against thicker bones and higher rear insertion heights indirectly results in the optimization of selection practices for the remaining traits, especially those in the structure and capacity and mammary system major areas. The integration and implementation of the proposed strategies within the Murciano-Granadina breeding program maximize selection opportunities and the sustainable international competitiveness of the Murciano-Granadina goat in the dairy goat breed panorama. The objective of the sixth study was to develop a discriminant canonical analysis (DCA) tool that permits outlining the role of the individual haplotypes of each component of the casein complex (αS1, β, αS2, and κ-casein) on zoometric/linear appraisal breeding values.
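For reference, the single-trait animal model and narrow-sense heritability underlying estimates of this kind are usually written as below; this is the standard textbook form, not an equation reproduced from the thesis.

```latex
% Standard animal model and narrow-sense heritability
\begin{align}
  \mathbf{y} &= \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{e},
  \qquad \mathbf{a} \sim N(\mathbf{0},\, \mathbf{A}\sigma_a^{2}),
  \qquad \mathbf{e} \sim N(\mathbf{0},\, \mathbf{I}\sigma_e^{2}) \\
  h^{2} &= \frac{\sigma_a^{2}}{\sigma_a^{2} + \sigma_e^{2}}
\end{align}
```

Here y holds the appraisal scores, β the fixed effects (herd, reproductive status, kidding month, herd/year), a the additive genetic effects with pedigree relationship matrix A, and e the residuals.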
The relationship between the predicted breeding values for 17 zoometric/linear appraisal traits and the haplotypic sequences of the αS1-, β-, αS2-, and κ-casein genes was assessed. The results suggest that, although no significant differences (P>0.05) were found across the predicted breeding values of zoometric/linear appraisal traits for αS1-, αS2-, and κ-casein, significant differences were found for β-casein (P<0.05). The presence of the β-casein haplotypic sequences GAGACCCC, GGAACCCC, GGAACCTC, GGAATCTC, GGGACCCC, GGGATCTC, and GGGGCCCC, linked to differential combinations of larger quantities of higher-quality milk in terms of composition, may also be related to higher zoometric/linear appraisal predicted breeding values. Selection must be carried out carefully, given that considering apparently desirable animals carrying the GGGATCCC haplotypic sequence in the β-casein gene, on account of their positive predicted breeding values for certain zoometric/linear appraisal traits such as rear insertion height, bone quality, fore udder attachment, udder depth, rear legs side view, and rear legs rear view, may lead to indirect selection against the remaining zoometric/linear appraisal traits and, in turn, to inefficient selection towards an optimal dairy morphotype in Murciano-Granadina goats. Conversely, considering animals carrying the GGAACCCC haplotypic sequence also means considering animals that increase the genetic potential for all zoometric/linear appraisal traits, which makes them recommendable as breeding stock. The information derived from the present analyses will improve the selection of breeding individuals aiming at a highly desirable dairy type, through the determination of the haplotypic sequences they carry at the β-casein locus. All of these studies pursue a deeper understanding of the linear morphological traits of the Murciano-Granadina goat breed and their relationships with other functional traits. This lays the groundwork for standardization strategies and for the improvement of the productive capacity and dairy morphotype of the Murciano-Granadina goat, and will help it achieve competitive consolidation in the international dairy goat panorama
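The DCA mentioned in the sixth study relates categorical casein haplotypes to multivariate breeding values. A minimal sketch of that style of analysis is given below using scikit-learn's LinearDiscriminantAnalysis as a simple stand-in for a full discriminant canonical analysis; the simulated breeding values, haplotype labels, and sample sizes are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical inputs: predicted breeding values (animals x traits) and the
# beta-casein haplotype carried by each animal.
n_animals, n_traits = 200, 17
breeding_values = rng.normal(size=(n_animals, n_traits))
haplotypes = rng.choice(["GGAACCCC", "GGGATCCC", "GGGACCCC"], size=n_animals)

# Discriminant analysis: which linear combinations of breeding values best
# separate the haplotype groups?
dca = LinearDiscriminantAnalysis(n_components=2)
scores = dca.fit_transform(breeding_values, haplotypes)

print("explained variance ratio of the discriminant axes:",
      dca.explained_variance_ratio_)
print("first canonical scores for five animals:\n", scores[:5])
```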

    Reinforcement learning in large state action spaces

    Get PDF
    Reinforcement learning (RL) is a promising framework for training intelligent agents which learn to optimize long-term utility by directly interacting with the environment. Creating RL methods which scale to large state-action spaces is a critical problem for ensuring real-world deployment of RL systems. However, several challenges limit the applicability of RL to large-scale settings. These include difficulties with exploration, low sample efficiency, computational intractability, task constraints like decentralization, and a lack of guarantees about important properties like performance, generalization, and robustness in potentially unseen scenarios. This thesis is motivated by the need to bridge the aforementioned gap. We propose several principled algorithms and frameworks for studying and addressing the above challenges in RL. The proposed methods cover a wide range of RL settings (single and multi-agent systems (MAS) with all the variations in the latter, prediction and control, model-based and model-free methods, value-based and policy-based methods). In this work we propose the first results on several different problems: e.g., tensorization of the Bellman equation which allows exponential sample efficiency gains (Chapter 4), provable suboptimality arising from structural constraints in MAS (Chapter 3), combinatorial generalization results in cooperative MAS (Chapter 5), generalization results on observation shifts (Chapter 7), and learning deterministic policies in a probabilistic RL framework (Chapter 6). Our algorithms exhibit provably enhanced performance and sample efficiency along with better scalability. Additionally, we shed light on generalization aspects of the agents under different frameworks. These properties have been driven by the use of several advanced tools (e.g., statistical machine learning, state abstraction, variational inference, tensor theory). In summary, the contributions in this thesis significantly advance progress towards making RL agents ready for large-scale, real-world applications
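For context, the tensorization result of Chapter 4 builds on the standard Bellman optimality equation for a discounted MDP, reproduced below in generic textbook notation rather than the thesis's own notation.

```latex
% Bellman optimality equation for a discounted MDP
Q^{*}(s,a) \;=\; R(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s,a)\, \max_{a'} Q^{*}(s',a')
```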

    Deep Multimodality Image-Guided System for Assisting Neurosurgery

    Get PDF
    Intracranial brain tumors are among the ten most common malignant cancers and are responsible for substantial morbidity and mortality. The largest histological category of primary brain tumors are the gliomas, which have an extremely heterogeneous appearance and are difficult to distinguish radiologically from other brain lesions. Neurosurgery is usually the standard treatment for newly diagnosed glioma patients and may be followed by radiotherapy and adjuvant temozolomide chemotherapy. However, brain tumor surgery faces major challenges in achieving maximal tumor removal while avoiding postoperative neurological deficits. Two of these neurosurgical challenges are presented below. First, manual delineation of a glioma, including its subregions, is difficult due to its infiltrative character and the presence of heterogeneous contrast enhancement. Second, the brain deforms its shape, the so-called "brain shift", in response to surgical manipulation, swelling caused by osmotic drugs, and anesthesia, which limits the usefulness of preoperative image data for guiding the procedure. Image-guided systems provide clinicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided tools are mainly computer-assisted systems that use computer vision methods to facilitate perioperative surgical procedures. However, surgeons still have to mentally fuse the surgical plan from preoperative images with real-time information while manipulating the surgical instruments inside the body and monitoring progress toward the target. The need for image guidance during neurosurgical procedures has therefore always been a major concern for clinicians. The aim of this research is to develop a novel system for perioperative image-guided neurosurgery (IGN), namely DeepIGN, with which the expected outcomes of brain tumor surgery can be achieved, thereby maximizing overall survival and minimizing postoperative neurological morbidity. In this thesis, novel methods are first proposed for the core components of the DeepIGN system: brain tumor segmentation in MRI and multimodal registration of preoperative MRI to intraoperative ultrasound (iUS) images, using recent developments in deep learning. Subsequently, the predictions of the deep learning networks used are further interpreted and examined by generating human-understandable, explainable maps. Finally, open-source packages were developed and integrated into widely recognized software that is responsible for integrating information from tracking systems, image visualization and fusion, and displaying real-time updates of the instruments relative to the patient space. The components of DeepIGN were validated in the laboratory and evaluated in a simulated operating room. 
For the segmentation module, DeepSeg, a generic decoupled deep learning framework for automatic glioma delineation in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance improvements were observed when applying advanced deep learning approaches such as 3D convolutions across all layers, region-based training, on-the-fly data augmentation techniques, and ensemble methods. To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering preoperative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments were conducted with two multi-location databases: BITE and RESECT. Two experienced neurosurgeons carried out an additional qualitative validation of this study by overlaying MRI-iUS pairs before and after deformable registration. The experimental results show that the proposed iRegNet is fast and achieves the best accuracies. Furthermore, the proposed iRegNet can deliver competitive results even on unseen images, demonstrating its generality, and can therefore be useful for intraoperative neurosurgical guidance. For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in the application of AI techniques and deep neural networks. NeuroXAI comprises seven explanation methods that provide visualization maps to make deep learning models transparent. The experimental results show that the proposed XAI framework performs well in extracting local and global contexts and in generating explainable saliency maps for understanding the deep network's predictions. In addition, visualization maps are generated to trace the information flow in the internal layers of the encoder-decoder network and to understand the contribution of the MRI modalities to the final prediction. The explanation process could provide medical professionals with additional information about the tumor segmentation results and thus help them understand how the deep learning model can successfully process MRI data. Furthermore, an interactive neurosurgical display for procedure guidance was developed that supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multimodal DeepIGN system were established, with the ability to integrate (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The accuracy of this system was tested using a custom agar phantom model, and its use in a preclinical operating room was simulated. The results of the clinical simulation confirmed that the system setup is straightforward, can be completed within a clinically acceptable time of 15 minutes, and achieves clinically acceptable accuracy. 
In this work, a multimodal IGN system was developed that leverages recent advances in deep learning to guide neurosurgeons precisely and to incorporate pre- and intraoperative patient image data as well as interventional devices into the surgical procedure. DeepIGN was developed as open-source research software to accelerate research in this field, to facilitate sharing among multiple research groups, and to enable continuous development by the community. The experimental results are very promising for the application of deep learning models to support interventional procedures, a crucial step toward improving the surgical treatment of brain tumors and the corresponding long-term postoperative outcomes
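Since the segmentation accuracy above is reported as a Dice coefficient, a minimal sketch of how that overlap score is computed for binary masks is shown below; the toy arrays are illustrative and are not DeepSeg outputs.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary segmentation masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Toy 2D masks standing in for a predicted and a reference tumor volume
prediction = np.zeros((8, 8), dtype=int)
reference = np.zeros((8, 8), dtype=int)
prediction[2:6, 2:6] = 1
reference[3:7, 3:7] = 1
print(f"Dice: {dice_coefficient(prediction, reference):.2f}")
```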

    Science and Innovations for Food Systems Transformation

    Get PDF
    This Open Access book compiles the findings of the Scientific Group of the United Nations Food Systems Summit 2021 and its research partners. The Scientific Group was an independent group of 28 food systems scientists from all over the world with a mandate from the Deputy Secretary-General of the United Nations. The chapters provide science- and research-based, state-of-the-art, solution-oriented knowledge and evidence to inform the transformation of contemporary food systems in order to achieve more sustainable, equitable and resilient systems

    Bayesian Inference for Multivariate Monotone Densities

    Full text link
    We consider a nonparametric Bayesian approach to estimation and testing for a multivariate monotone density. Instead of following the conventional Bayesian route of putting a prior distribution complying with the monotonicity restriction, we put a prior on the step heights through binning and a Dirichlet distribution. An arbitrary piecewise constant probability density is converted to a monotone one by a projection map, taking its L1-projection onto the space of monotone functions, which is subsequently normalized to integrate to one. We construct consistent Bayesian tests of multivariate monotonicity of a probability density based on the L1-distance to the class of monotone functions. The test is shown to have a size going to zero and high power against alternatives sufficiently separated from the null hypothesis. To obtain a Bayesian credible interval for the value of the density function at an interior point with guaranteed asymptotic frequentist coverage, we consider a posterior quantile interval of an induced map transforming the function value to its value optimized over certain blocks. The limiting coverage is explicitly calculated and is seen to be higher than the credibility level used in the construction. By exploring the asymptotic relationship between the coverage and the credibility, we show that a desired asymptotic coverage can be obtained exactly by starting with an appropriate credibility level
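As a rough, one-dimensional illustration of the construction described above (Dirichlet prior on bin probabilities, projection onto monotone step functions, renormalization), the sketch below uses a weighted least-squares pool-adjacent-violators projection as a stand-in for the paper's L1-projection; the bin count and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def pava_decreasing(y: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Weighted least-squares projection onto non-increasing sequences
    (pool adjacent violators). The paper uses an L1 projection; L2 is used
    here only to keep the illustration short."""
    heights = list(y.astype(float))
    weights = list(w.astype(float))
    counts = [1] * len(y)
    i = 0
    while i < len(heights) - 1:
        if heights[i] < heights[i + 1] - 1e-12:  # violates non-increasing order
            tot_w = weights[i] + weights[i + 1]
            merged = (weights[i] * heights[i] + weights[i + 1] * heights[i + 1]) / tot_w
            heights[i], weights[i], counts[i] = merged, tot_w, counts[i] + counts[i + 1]
            del heights[i + 1], weights[i + 1], counts[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    return np.repeat(heights, counts)

# Step 1: Dirichlet prior on bin probabilities -> piecewise-constant density on [0, 1]
n_bins = 20
bin_width = 1.0 / n_bins
bin_probs = rng.dirichlet(np.ones(n_bins))
density = bin_probs / bin_width

# Step 2: project onto the class of non-increasing step densities
monotone = pava_decreasing(density, np.full(n_bins, bin_width))

# Step 3: renormalize so the projected density integrates to one
monotone /= monotone.sum() * bin_width
print(np.round(monotone, 3))
```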

    A Statistical View of Column Subset Selection

    Full text link
    We consider the problem of selecting a small subset of representative variables from a large dataset. In the computer science literature, this dimensionality reduction problem is typically formalized as Column Subset Selection (CSS). Meanwhile, the typical statistical formalization is to find an information-maximizing set of Principal Variables. This paper shows that these two approaches are equivalent, and moreover, both can be viewed as maximum likelihood estimation within a certain semi-parametric model. Using these connections, we show how to efficiently (1) perform CSS using only summary statistics from the original dataset; (2) perform CSS in the presence of missing and/or censored data; and (3) select the subset size for CSS in a hypothesis testing framework
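As a small illustration of point (1), the greedy sketch below selects columns using only the sample covariance matrix as a summary statistic, deflating it after each pick; this is a generic heuristic for column subset selection, not the estimator developed in the paper, and the toy data are invented.

```python
import numpy as np

def greedy_css_from_cov(S: np.ndarray, k: int) -> list[int]:
    """Greedy column subset selection using only the covariance matrix.

    At each step, pick the column whose removal (by regression) most reduces
    the trace of the residual covariance, then deflate."""
    R = S.astype(float).copy()
    selected: list[int] = []
    for _ in range(k):
        scores = {
            j: (R[:, j] @ R[:, j]) / R[j, j]
            for j in range(R.shape[0])
            if j not in selected and R[j, j] > 1e-12
        }
        best = max(scores, key=scores.get)
        selected.append(best)
        R -= np.outer(R[:, best], R[best, :]) / R[best, best]
    return selected

# Toy data: 500 samples, 8 variables, two near-duplicate pairs
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=500)   # variable 1 nearly duplicates 0
X[:, 5] = X[:, 4] + 0.1 * rng.normal(size=500)   # variable 5 nearly duplicates 4
cov = np.cov(X, rowvar=False)                    # the only "summary statistic" needed
print("selected columns:", greedy_css_from_cov(cov, k=3))
```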

    A New Paradigm for Generative Adversarial Networks based on Randomized Decision Rules

    Full text link
    The Generative Adversarial Network (GAN) was recently introduced in the literature as a novel machine learning method for training generative models. It has many applications in statistics, such as nonparametric clustering and nonparametric conditional independence tests. However, training the GAN is notoriously difficult due to the issue of mode collapse, which refers to the lack of diversity among generated data. In this paper, we identify the reasons why the GAN suffers from this issue, and to address it, we propose a new formulation for the GAN based on randomized decision rules. In the new formulation, the discriminator converges to a fixed point while the generator converges to a distribution at the Nash equilibrium. We propose to train the GAN with an empirical Bayes-like method, treating the discriminator as a hyper-parameter of the posterior distribution of the generator. Specifically, we simulate generators from their posterior distribution conditioned on the discriminator using a stochastic gradient Markov chain Monte Carlo (MCMC) algorithm, and update the discriminator using stochastic gradient descent along with simulations of the generators. We establish convergence of the proposed method to the Nash equilibrium. Apart from image generation, we apply the proposed method to nonparametric clustering and nonparametric conditional independence tests. A portion of the numerical results is presented in the supplementary material
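A minimal sketch of the training scheme described above is given below in PyTorch: generator parameters are updated with a stochastic gradient Langevin dynamics (SG-MCMC) step targeting a posterior conditioned on the discriminator, while the discriminator is updated by stochastic gradient descent. The network sizes, learning rates, prior scale, and 1-D toy data are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim, batch = 8, 1, 128
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
d_opt = torch.optim.SGD(D.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
eps = 1e-4          # SGLD step size for the generator
prior_std = 10.0    # Gaussian prior on generator weights

def real_batch():
    return torch.randn(batch, data_dim) * 0.5 + 2.0   # toy target: N(2, 0.25)

for step in range(2000):
    # --- Discriminator: stochastic gradient descent ---
    z = torch.randn(batch, latent_dim)
    fake = G(z).detach()
    d_loss = bce(D(real_batch()), torch.ones(batch, 1)) + \
             bce(D(fake), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- Generator: one SGLD step targeting its posterior given D ---
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(batch, 1))        # "log-likelihood" term
    G.zero_grad(); g_loss.backward()
    with torch.no_grad():
        for p in G.parameters():
            grad = p.grad + p / prior_std**2            # add prior gradient
            p -= 0.5 * eps * grad                       # drift step
            p += torch.randn_like(p) * (eps ** 0.5)     # Langevin noise

print("mean of generated samples:", G(torch.randn(1000, latent_dim)).mean().item())
```

In this scheme the discriminator acts as a point estimate (a "hyper-parameter"), while the injected Langevin noise keeps the generator sampling from a distribution rather than collapsing to a single point, which is the intuition behind the mixture-of-generators view at the Nash equilibrium.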

    An integrative ecological and evolutionary genomic study of lake Daphnia across time

    Get PDF
    An undisputable fact of the modern age is that human activities are now a major force shaping the biosphere. Some have called for a new geological epoch called the “Anthropocene,” or the age of humans; the search is underway for a reliable and unambiguous mark in the geological record for the designation of this new epoch. As a biologist, however, I am reminded daily of the “marks” this age has left on the world around us. The American philosopher and ecologist Aldo Leopold perhaps said it best when he described an “ecological education” as “living alone in a world of wounds.” What follows in this dissertation is perhaps best described as a search for wounds, for marks, in biological archives that allow us to better understand the ecosystems of the Anthropocene. My dissertation is focused primarily on studying the widespread crustacean zooplankter Daphnia pulicaria. A common refrain in the following chapters will be to point out the function of this and related species in lake ecosystems and their value to the humans who enjoy and benefit from lakes. Daphnia in lakes are important for two reasons: first, they are keystone species of pelagic food webs, connecting primary production from algae to higher trophic levels, namely fish. Daphnia thus support the recreational and commercial fisheries of freshwater lakes. A knock-on benefit of Daphnia's trophic position is that they control the standing crop of algae in freshwater ecosystems and maintain water clarity, so lakes are not choked with noxious algal blooms. My dissertation is separated into three chapters. In the first chapter, I sequence, assemble, and annotate a genome for D. pulicaria using the latest long-read DNA sequencing technology. We use the genomic resources developed for this species to better understand its evolutionary history, especially its split from the closely related “sister” species D. pulex. This reference genome is important for enabling other work, a thread we pick up in the third and final chapter. In my second chapter, I chronicled the 175-year history of D. pulicaria and other Daphnia species in a small lake that will be the primary focus of the last two chapters. Tanners Lake is an ecosystem replete with wounds from the numerous human activities that dominate the landscape surrounding it. Due to their landscape position, lakes integrate vast amounts of information about their watersheds in their sediments. Tanners Lake records in its sediments the history of a landscape dominated by humans, with the development of a major city surrounding it. Located just outside of Saint Paul, MN, Tanners Lake is subject to two of the major human impacts inflicted on northern temperate lakes. It is not only eutrophic, from the export of unprecedented amounts of nutrients, but also severely salinized. Freshwater salinization is a consequence of the widespread use of de-icing salts on impervious surfaces such as roads and parking lots, a ubiquitous feature of human-dominated landscapes. I collected sediment cores from Tanners Lake to reconstruct the ecological dynamics of the Daphnia community across time in this lake by examining the abundance and diversity of resting eggs (encased in durable sclerotized structures called ephippia) across time in the core. I found that only modest changes in diversity and abundance occurred in the lake during salinization. This result suggests that Daphnia may be resilient to the threat of salinization, perhaps maintaining the ecosystems they support despite salinization. 
The results of my second chapter set up an important question to tackle in my third chapter. Since D. pulicaria remains in Tanners Lake despite salinization, is this population evolving higher tolerance to these conditions? We tackle this question using an approach that is somewhat unique to Daphnia: hatching the eggs contained in the ephippia from across time. This method, known as Resurrection Ecology, allows us to sample individuals from across time and study their phenotypes and genotypes. I hatched Daphnia from across approximately 25 years (~1994-2019) and resequenced their genomes. In addition, we evaluated these D. pulicaria clones for their tolerance to salinity using phenotypic (i.e., survivorship) assays. I compiled this data set to understand the evolution of this population through time. The genomic data support the idea that salinity is a driving force for evolution in this population. In particular, genes related to osmoregulation and salinity tolerance are enriched among the statistical outliers. The phenotypic data supported this finding, as we observed that the salinity tolerance of modern Daphnia was higher than that of the ancestors hatched from the sediment. Interestingly, while I initially described this work as a search for ecological and/or evolutionary wounds, I was surprised to find that while they exist in very real ways in the biological archives studied, they are not mortal. My research, taken together, suggests that Daphnia populations should have the potential to respond to human threats and evolve to maintain the ecosystems they support. However, this resiliency will be highly dependent on the speed and intensity of the threats that these systems will face, as well as the strength of the evolutionary forces (i.e., selection, drift, migration, mutation) that will shape the underlying genetic structure of these populations

    A stratified decision-making model for long-term planning: application in flood risk management in Scotland

    Get PDF
    In a standard decision-making model for a game of chance, the best strategy is chosen based on the current state of the system under various conditions. There is, however, a shortcoming of this standard model: it is applicable only to short-term decision-making periods. This is primarily because it does not evaluate the dynamic characteristics and changes in the status of the system, or the outcomes of nature, relative to an a priori target or ideal state, which can occur over longer periods. Thus, in this study, a decision-making model based on the concept of stratification (CST), game theory, and the shared socio-economic pathway (SSP) is developed and its applicability to disaster management is shown. The game of chance and CST have been integrated to incorporate the dynamic nature of the decision environment into long-term disaster risk planning, while accounting for various states of the system and an ideal state. Furthermore, an interactive web application with a dynamic user interface is built on the proposed model to enable decision makers to identify the best choices in their model through a predictive approach. Monte Carlo simulation is applied to experimentally validate the proposed model. It is then demonstrated how this methodology can suitably be applied to obtain ad hoc models, solutions, and analyses in the strategic decision-making process of flood risk strategy evaluation. The model's applicability is shown in an uncertain real-world decision-making context, considering the dynamic nature of socio-economic situations and flood hazards in the Highland and Argyll Local Plan District in Scotland. The empirical results show that flood forecasting and awareness raising are the two most beneficial mitigation strategies in the region, followed by emergency plans/response, planning policies, maintenance, and self-help
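To make the game-of-chance framing concrete, the sketch below scores a few candidate mitigation strategies by Monte Carlo sampling over uncertain states of nature and comparing expected utilities; the state probabilities and payoff numbers are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical states of nature (flood severities) and their probabilities,
# which in the paper's setting would shift with the socio-economic scenario (SSP).
states = ["minor", "moderate", "severe"]
state_probs = [0.6, 0.3, 0.1]

# Hypothetical net benefit of each strategy under each state of nature.
payoffs = {
    "flood forecasting": {"minor": 2.0, "moderate": 5.0, "severe": 9.0},
    "awareness raising": {"minor": 1.5, "moderate": 4.0, "severe": 8.0},
    "planning policies": {"minor": 1.0, "moderate": 3.0, "severe": 6.0},
}

def expected_utility(strategy: str, n_draws: int = 100_000) -> float:
    """Monte Carlo estimate of a strategy's expected payoff over random states."""
    draws = rng.choice(states, size=n_draws, p=state_probs)
    return float(np.mean([payoffs[strategy][s] for s in draws]))

for name in payoffs:
    print(f"{name:20s} expected utility ~ {expected_utility(name):.2f}")
```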