2,111 research outputs found

    Modelos para el apoyo a la gestión forestal bajo un ambiente cambiante

    Get PDF
    Forests are experiencing an environment that is changing much faster than at any time in at least the past several hundred years. In addition, the abiotic factors determining forest dynamics vary depending on the forest's location. Forest modeling thus faces the new challenge of supporting forest management in the context of environmental change. This review focuses on three types of models used in forest management: empirical models (EMs), process-based models (PBMs), and hybrid models. Recent approaches, such as (i) the dynamic state-space approach and (ii) the development of productivity-environment relationships, may make EMs applicable under changing environmental conditions. Twenty-five PBMs in use in Europe were analyzed in terms of their structure, inputs, and outputs from a forest management perspective. Two paths for hybrid modeling were distinguished: (i) coupling of EMs and PBMs by developing signal-transfer environment-productivity functions; and (ii) hybrid models with a causal structure that includes both empirical and mechanistic components. Several knowledge gaps were identified for the three types of models reviewed. The strengths and weaknesses of the three model types differ, and all are likely to remain in use. There is a trade-off between how little data the models need for calibration and simulation, and the variety of input-output relationships they can quantify. PBMs are the most versatile, accounting for a wide range of environmental conditions and output variables, but they require more data, which makes them less applicable whenever data for calibration are scarce. EMs, on the other hand, are easier to run because they require much less prior information, but their simplicity makes them less reliable in the context of environmental change. These complementary deficiencies of PBMs and EMs suggest that hybrid models may be a good compromise, but more extensive testing of these models in practice is required.
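    As a purely illustrative reading of the first hybridization path above (an empirical growth model driven through a signal-transfer environment-productivity function), the following Python sketch scales a hypothetical empirical increment equation by a toy climate modifier. All function forms and parameter values are invented for illustration and come from no published model.

    # Minimal sketch of coupling an empirical stand-growth model to a
    # signal-transfer environment-productivity function. All names, forms,
    # and parameter values are hypothetical illustrations.
    import math


    def environment_modifier(temp_c: float, precip_mm: float) -> float:
        """Toy transfer function mapping mean temperature (deg C) and annual
        precipitation (mm) onto a multiplier of baseline site productivity."""
        temp_effect = math.exp(-((temp_c - 12.0) / 8.0) ** 2)   # optimum near 12 deg C
        water_effect = min(1.0, precip_mm / 900.0)               # saturating water supply
        return temp_effect * water_effect


    def annual_volume_increment(volume_m3ha: float, site_index: float,
                                temp_c: float, precip_mm: float) -> float:
        """Empirical increment equation whose site index is rescaled by the
        environmental modifier (the 'hybrid' step)."""
        effective_site = site_index * environment_modifier(temp_c, precip_mm)
        asymptote = 40.0 * effective_site    # proxy for maximum annual increment
        return asymptote * (1.0 - math.exp(-0.02 * volume_m3ha)) * math.exp(-0.004 * volume_m3ha)


    if __name__ == "__main__":
        vol = 150.0  # standing volume, m3/ha
        for climate in [(10.0, 1000.0), (16.0, 600.0)]:   # (temp, precip) scenarios
            print(climate, round(annual_volume_increment(vol, 1.0, *climate), 2))

    The design point illustrated is that the empirical core stays cheap to calibrate, while the environmental signal enters only through the transfer function that would be fitted or borrowed from a process-based model.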

    Accelerating Science: A Computing Research Agenda

    Full text link
    The emergence of "big data" offers unprecedented opportunities not only for accelerating scientific advances but also for enabling new modes of discovery. Scientific progress in many disciplines is increasingly enabled by our ability to examine natural phenomena through the computational lens, i.e., using algorithmic or information processing abstractions of the underlying processes, and by our ability to acquire, share, integrate, and analyze disparate types of data. However, there is a huge gap between our ability to acquire, store, and process data and our ability to make effective use of the data to advance discovery. Despite successful automation of routine aspects of data management and analytics, most elements of the scientific process currently require considerable human expertise and effort. Accelerating science to keep pace with the rate of data acquisition and processing calls for the development of algorithmic or information processing abstractions, coupled with formal methods and tools for modeling and simulation of natural processes, as well as major innovations in cognitive tools for scientists, i.e., computational tools that leverage and extend the reach of human intellect and partner with humans on a broad range of tasks in scientific discovery (e.g., identifying, prioritizing, and formulating questions, designing, prioritizing, and executing experiments to answer a chosen question, drawing inferences and evaluating the results, and formulating new questions, in a closed-loop fashion). This calls for a concerted research agenda aimed at: development, analysis, integration, sharing, and simulation of algorithmic or information processing abstractions of natural processes, coupled with formal methods and tools for their analysis and simulation; and innovations in cognitive tools that augment and extend human intellect and partner with humans in all aspects of science.
    Comment: Computing Community Consortium (CCC) white paper, 17 pages
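    The closed-loop cycle described above (pose a question, design and run an experiment, update inferences, pose the next question) can be made concrete with a toy sketch. The instrument, model form, and experiment-selection rule below are hypothetical stand-ins, not the white paper's proposal.

    # A minimal sketch of a closed discovery loop: infer -> choose -> measure.
    # The "instrument", model form, and selection rule are hypothetical toys.
    import random

    random.seed(0)


    def run_experiment(x: float) -> float:
        """Stand-in for a real experiment: a noisy measurement of a hidden law."""
        return 3.0 * x + 1.0 + random.gauss(0.0, 0.5)


    def fit_line(xs, ys):
        """Least-squares fit of y = a*x + b, the current working hypothesis."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx


    candidates = [i / 10 for i in range(0, 101)]   # experiment designs under consideration
    xs, ys = [0.0, 10.0], [run_experiment(0.0), run_experiment(10.0)]

    for step in range(8):
        a, b = fit_line(xs, ys)
        # Choose the design least constrained so far: here, simply the point
        # farthest from all previous experiments.
        x_next = max(candidates, key=lambda c: min(abs(c - x) for x in xs))
        xs.append(x_next)
        ys.append(run_experiment(x_next))
        print(f"step {step}: a={a:.2f}, b={b:.2f}, next experiment at x={x_next}")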

    The Architecture of Ignorance

    Get PDF

    Equilibrium and non-equilibrium concepts in forest genetic modelling: population- and individually-based approaches

    Get PDF
    The environment is changing and so are forests: in their functioning, in species composition, and in the species' genetic composition. Many empirical and process-based models exist to support forest management. However, most of these models consider neither the impact of environmental changes and forest management on genetic diversity nor the rate of adaptation of critical plant processes. Modelling how genetic diversity and rates of adaptation depend on management actions is a crucial next step in model development. Modelling approaches of the genetic and demographic processes that operate in forests are categorized here in two classes. One approach assumes equilibrium conditions in phenotype and tree density, and analyses the characteristics of the demography and the genetic system of the species that determine the rate at which that equilibrium is attained. The other modelling approach does not assume equilibrium conditions; it describes both the ecological and the genetic processes, to analyse how environmental changes result in selection pressures on functional traits of trees and what the consequences of that selection are for tree and ecosystem functioning. The equilibrium approach allows analysing the recovery rate after a perturbation in stable environments, i.e. the return towards the same pre-perturbation stable state. The non-equilibrium approach additionally allows analysing the consequences of ongoing environmental changes and forest management, i.e. non-stationary environments, for tree functioning, species composition, and the genetic composition of the trees in the forest ecosystem. In this paper we describe these two modelling approaches and discuss their advantages and disadvantages and current knowledge gaps
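    A minimal sketch, with invented parameter values, of the contrast drawn above: under a fixed optimum a perturbed trait recovers toward equilibrium, while under an optimum that keeps moving (ongoing environmental change) a persistent adaptation lag develops.

    # Toy illustration (not any published model) of equilibrium vs
    # non-equilibrium views of a population mean trait under selection
    # toward an environmental optimum.
    def simulate(optimum_shift_per_year, years=60, h2=0.3, strength=0.2):
        trait, optimum = -1.0, 0.0        # start displaced from the optimum (a past perturbation)
        lag = []
        for _ in range(years):
            optimum += optimum_shift_per_year              # zero shift = stationary environment
            differential = strength * (optimum - trait)    # selection differential toward optimum
            trait += h2 * differential                     # breeder's-equation-style response
            lag.append(optimum - trait)
        return lag


    if __name__ == "__main__":
        # Equilibrium view: one-off perturbation, optimum fixed afterwards.
        print("fixed optimum, lag after 60 yr:", round(simulate(0.0)[-1], 3))
        # Non-equilibrium view: optimum keeps moving (ongoing environmental change).
        print("moving optimum, lag after 60 yr:", round(simulate(0.05)[-1], 3))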

    Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    Get PDF
    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of the 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, called Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations). It constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS); 2) a community-wide survey; and 3) the establishment of nine expert panels (one per KE), consisting on average of 10 non-team members from academia, government, and industry, to review and update content and to prioritize gaps and actions. The study envisions the development of a cyber-physical-social ecosystem comprised of experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focuses on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive, biomedical) can benefit from the proposed framework with only minor modifications. Finally, it is TTT's hope that this vision provides strategic guidance to public and private research and development decision makers, helping make the proposed 2040 vision state a reality and thereby significantly advancing the United States' global competitiveness

    MultiCellDS: a community-developed standard for curating microenvironment-dependent multicellular data

    Get PDF
    Exchanging and understanding scientific data and their context represents a significant barrier to advancing research, especially with respect to information siloing. Maintaining information provenance and providing data curation and quality control help overcome common concerns and barriers to the effective sharing of scientific data. To address these problems, and the unique challenges of multicellular systems, we assembled a panel of investigators from several disciplines to create the MultiCellular Data Standard (MultiCellDS) through a use-case-driven development process. The standard includes (1) digital cell lines, which are analogous to traditional biological cell lines and record the metadata, cellular microenvironment, and cellular phenotype variables of a biological cell line; (2) digital snapshots, which consistently record simulation, experimental, and clinical data for multicellular systems; and (3) collections, which logically group digital cell lines and snapshots. We have created a MultiCellular DataBase (MultiCellDB) to store digital snapshots and the 200+ digital cell lines we have generated. By providing a fixed standard, MultiCellDS enables discoverability, extensibility, maintainability, searchability, and sustainability of data, creating the biological applicability and clinical utility needed to identify upcoming challenges and to develop strategies and therapies for improving human health
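    The MultiCellDS standard itself is specified as an XML schema; the Python data structures below are only an illustrative sketch of the three element types the abstract names (digital cell lines, digital snapshots, collections), and none of the field names are taken from the real schema.

    # Illustrative-only data structures mirroring the three element types
    # named in the abstract. Field names are invented, not MultiCellDS terms.
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class DigitalCellLine:
        """Metadata, microenvironment, and phenotype variables for one cell line."""
        name: str
        metadata: Dict[str, str] = field(default_factory=dict)
        microenvironment: Dict[str, float] = field(default_factory=dict)  # e.g. oxygen (mmHg)
        phenotype: Dict[str, float] = field(default_factory=dict)         # e.g. cycle time (h)


    @dataclass
    class DigitalSnapshot:
        """One recorded state of a multicellular system."""
        source: str                      # "simulation" | "experiment" | "clinical"
        time_hours: float
        cell_states: List[Dict[str, float]] = field(default_factory=list)


    @dataclass
    class Collection:
        """Logical grouping of digital cell lines and snapshots."""
        label: str
        cell_lines: List[DigitalCellLine] = field(default_factory=list)
        snapshots: List[DigitalSnapshot] = field(default_factory=list)


    if __name__ == "__main__":
        line = DigitalCellLine("example-line",
                               metadata={"curator": "unknown"},
                               microenvironment={"oxygen_mmHg": 38.0},
                               phenotype={"cycle_time_h": 18.0})
        print(Collection("demo", cell_lines=[line]))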

    CROSS-DISCIPLINARY COLLABORATIONS IN DATA QUALITY RESEARCH

    Get PDF
    Data quality has been the target of research and development for over four decades and, due to its cross-disciplinary nature, has been approached by business analysts, solution architects, database experts, and statisticians, to name a few. As data quality increases in importance and complexity, there is a need to exploit synergies across diverse research communities in order to form holistic solutions that span its organizational, architectural, and computational aspects. As a first step towards bridging gaps between the various research communities, we undertook a comprehensive literature study of data quality research published in the last two decades, considering a broad range of Information Systems (IS) and Computer Science (CS) publication outlets. The main aims of the study were to understand the current landscape of data quality research, create better awareness of the (lack of) synergies between the various research communities, and, subsequently, direct attention towards holistic solutions. In this paper, we present a summary of the findings of the study, outlining the overlaps and distinctions between the two communities from various points of view, including publication outlets, topics and themes of research, highly cited or influential contributors, and the strength and nature of co-authorship networks
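    As a hypothetical illustration of the co-authorship analysis mentioned above, the sketch below tallies co-authorship ties within and across the IS and CS communities from an invented set of papers; it is not the study's actual dataset or method.

    # Count co-authorship ties within and across two communities.
    # The papers and affiliations below are invented examples.
    from itertools import combinations
    from collections import Counter

    papers = [                      # (venue community, author list)
        ("IS", ["A", "B"]),
        ("CS", ["B", "C", "D"]),
        ("CS", ["A", "C"]),
    ]
    community = {"A": "IS", "B": "IS", "C": "CS", "D": "CS"}   # hypothetical affiliations

    ties = Counter()
    for _, authors in papers:
        for u, v in combinations(sorted(authors), 2):
            kind = "cross" if community[u] != community[v] else community[u]
            ties[kind] += 1

    print(dict(ties))   # e.g. {'IS': 1, 'cross': 3, 'CS': 1}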