
    Combining local- and large-scale models to predict the distributions of invasive plant species

    Habitat-distribution models are increasingly used to predict the potential distributions of invasive species and to inform monitoring. However, these models assume that species are in equilibrium with the environment, which is clearly not true for most invasive species. Although this assumption is frequently acknowledged, solutions to the problem have not been adequately explored. There are several potential methods for improving habitat-distribution models. Models that require only presence data may be more effective for invasive species, but this possibility has rarely been tested. In addition, combining model types to form ‘ensemble’ models may improve the accuracy of predictions. However, even with these improvements, models developed for recently invaded areas are greatly influenced by the current distributions of species and thus reflect near- rather than long-term potential for invasion. Larger-scale models from species’ native and invaded ranges may better reflect long-term invasion potential, but they lack finer-scale resolution. We compared logistic regression (which uses presence/absence data) and two presence-only methods for modeling the potential distributions of three invasive plant species on the Olympic Peninsula in Washington State, USA. We then combined the three methods to create ensemble models. We also developed climate-envelope models for the same species based on larger-scale distributions and combined models from multiple scales to create an index of near- and long-term invasion risk to inform monitoring in Olympic National Park (ONP). Neither presence-only nor ensemble models were more accurate than logistic regression for any of the species. Larger-scale models predicted much greater areas at risk of invasion. Our index of near- and long-term invasion risk indicates that <4% of ONP is at high near-term risk of invasion, while 67-99% of the Park is at moderate or high long-term risk of invasion.
We demonstrate how modeling results can be used to guide the design of monitoring protocols, and monitoring results can in turn be used to refine models. We propose that by using models from multiple scales to predict invasion risk and by explicitly linking model development to monitoring, it may be possible to overcome some of the limitations of habitat-distribution models.
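The two-scale risk index described above can be roughly illustrated by averaging per-cell suitability predictions from several local models into an ensemble and intersecting it with a large-scale climate-envelope prediction. The arrays and 0.5 thresholds below are invented for illustration; they are not the study's data or its exact scoring rules.

```python
import numpy as np

# Hypothetical suitability predictions (0-1) from three local-scale models
# on a tiny 2x2 raster; values are illustrative, not from the study.
logistic = np.array([[0.9, 0.2], [0.1, 0.7]])
presence_only_a = np.array([[0.8, 0.3], [0.2, 0.6]])
presence_only_b = np.array([[0.7, 0.1], [0.3, 0.8]])

# Simple unweighted ensemble: average the per-cell suitabilities.
ensemble = np.mean([logistic, presence_only_a, presence_only_b], axis=0)

# Large-scale climate-envelope prediction (long-term climatic suitability).
climate_envelope = np.array([[1.0, 0.9], [0.6, 1.0]])

# Two-scale index: high near-term risk requires both high local suitability
# and climatic suitability; long-term risk requires climatic suitability only.
near_term = (ensemble > 0.5) & (climate_envelope > 0.5)
long_term = (climate_envelope > 0.5) & ~near_term
```

In this sketch a cell that is climatically suitable but not yet locally suitable shows up as long-term risk only, mirroring how the larger-scale models predict much greater areas at risk than the local ones.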

    A Bayesian Downscaler Model to Estimate Daily PM2.5 levels in the Continental US

    There has been growing interest in extending the coverage of ground PM2.5 monitoring networks using satellite remote sensing data. With broad spatial and temporal coverage, a satellite-based monitoring network has strong potential to complement the ground monitoring system in terms of the spatio-temporal availability of air quality data. However, most existing calibration models have focused on relatively small spatial domains and cannot be generalized to nationwide studies. In this paper, we propose a statistically reliable and interpretable national modeling framework based on Bayesian downscaling methods, applied to the calibration of daily ground PM2.5 concentrations across the Continental U.S. in 2011 using satellite-retrieved aerosol optical depth (AOD) and other ancillary predictors. Our approach flexibly models PM2.5 versus AOD and potentially related geographical factors that vary across climate regions, and it yields spatially and temporally specific parameters that enhance model interpretability. Moreover, our model accurately predicts national PM2.5 with an R2 of 70% and generates reliable annual and seasonal PM2.5 concentration maps with their standard deviations. Overall, this modeling framework can be applied to national-scale PM2.5 exposure assessments while also quantifying prediction errors.
Comment: 14 pages, 6 figures
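The core calibration idea (ground PM2.5 regressed on satellite AOD, with coefficients allowed to vary by climate region) can be sketched with ordinary least squares on synthetic data. The region names, coefficients, and noise level below are invented; the paper's actual model is a hierarchical Bayesian downscaler that shares information across regions, not independent per-region fits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: ground PM2.5 depends linearly on satellite AOD, with
# region-specific (intercept, slope) pairs standing in for the paper's
# spatially varying coefficients. All numbers are illustrative.
regions = {"Southeast": (8.0, 25.0), "Northwest": (5.0, 15.0)}
fits = {}
for name, (intercept, slope) in regions.items():
    aod = rng.uniform(0.05, 0.8, size=200)
    pm25 = intercept + slope * aod + rng.normal(0.0, 1.0, size=200)
    # Per-region ordinary least squares: beta[0] = intercept, beta[1] = slope.
    X = np.column_stack([np.ones_like(aod), aod])
    beta, *_ = np.linalg.lstsq(X, pm25, rcond=None)
    fits[name] = beta
```

Fitting each region separately recovers the region-specific relationship; a Bayesian downscaler would additionally shrink the per-region estimates toward a shared prior and propagate uncertainty into the predicted concentration maps.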

    Part 3: Systemic risk in ecology and engineering

    The Federal Reserve Bank of New York released a report -- New Directions for Understanding Systemic Risk -- that presents key findings from a cross-disciplinary conference that it cosponsored in May 2006 with the National Academy of Sciences' Board on Mathematical Sciences and Their Applications.
The pace of financial innovation over the past decade has increased the complexity and interconnectedness of the financial system. This development is important to central banks, such as the Federal Reserve, because of their traditional role in addressing systemic risks to the financial system.
To encourage innovative thinking about systemic issues, the New York Fed partnered with the National Academy of Sciences to bring together more than 100 experts on systemic risk from 22 countries to compare cross-disciplinary perspectives on monitoring, addressing, and preventing this type of risk.
This report, released as part of the Bank's Economic Policy Review series, outlines some of the key points concerning systemic risk made by the various disciplines represented -- including economic research, ecology, physics, and engineering -- as well as presentations on market-oriented models of financial crises and on systemic risk in the payments system and the interbank funds market. The report concludes with observations gathered from the sessions and a discussion of potential applications to policy.
The three papers presented in this conference session highlighted the positive-feedback effects that produce herdlike behavior in markets, and the subsequent discussion focused in part on means of encouraging heterogeneous investment strategies to counter such behavior. Participants in the session also discussed the types of models used to study systemic risk and commented on the challenges and trade-offs researchers face in developing their models.
Subjects: Financial risk management; Financial markets; Financial stability; Financial crises

    The Blind Oracle, eXplainable Artificial Intelligence (XAI) and human agency

    An explainable machine learning model is a requirement for trust. Without it, the human operator cannot form a correct mental model and will distrust and reject the machine learning model; nobody will ever trust a system that exhibits apparently erratic behaviour. eXplainable AI (XAI) techniques try to uncover how a model works internally and why it makes some predictions and not others. The ultimate objective is to use these techniques to guide the training and deployment of fair automated decision systems that support human agency and are beneficial to humanity. Automated decision systems based on machine learning models are being used for an increasing number of purposes. However, the use of black-box models and of massive quantities of training data makes the deployed models inscrutable. Consequently, predictions made by systems integrating these models may be rejected by their users when they appear arbitrary. Moreover, the risk is compounded when models are used in high-risk environments or in situations where predictions can have serious consequences.
Departamento de Informática (Arquitectura y Tecnología de Computadores, Ciencias de la Computación e Inteligencia Artificial, Lenguajes y Sistemas Informáticos). Máster en Ingeniería Informática.

    A review of machine learning applications in wildfire science and management

    Artificial intelligence has been applied in wildfire science and management since the 1990s, with early applications including neural networks and expert systems. Since then, the field has progressed rapidly alongside the wide adoption of machine learning (ML) in the environmental sciences. Here, we present a scoping review of ML in wildfire science and management. Our objective is to improve awareness of ML among wildfire scientists and managers, as well as to illustrate the diverse range of challenging problems in wildfire science open to data scientists. We first present an overview of popular ML approaches used in wildfire science to date, and then review their use within six problem domains: 1) fuels characterization, fire detection, and mapping; 2) fire weather and climate change; 3) fire occurrence, susceptibility, and risk; 4) fire behavior prediction; 5) fire effects; and 6) fire management. We also discuss the advantages and limitations of various ML approaches and identify opportunities for future advances in wildfire science and management within a data science context. We identified 298 relevant publications, in which the most frequently used ML methods were random forests, MaxEnt, artificial neural networks, decision trees, support vector machines, and genetic algorithms. There are opportunities to apply more current ML methods (e.g., deep learning and agent-based learning) in wildfire science. However, despite the ability of ML models to learn on their own, expertise in wildfire science is necessary to ensure realistic modelling of fire processes across multiple scales, and the complexity of some ML methods requires sophisticated knowledge for their application. Finally, we stress that the wildfire research and management community plays an active role in providing relevant, high-quality data for use by practitioners of ML methods.
Comment: 83 pages, 4 figures, 3 tables
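To make one of the listed methods concrete, here is a toy bootstrap-aggregation (bagging) classifier built from one-dimensional decision stumps, the mechanism underlying the random forests that dominate the reviewed literature. The "fuel dryness" predictor, the thresholds, and the synthetic data are all invented for illustration, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fire-occurrence data: fires become likely once fuel dryness exceeds
# a noisy threshold. Purely illustrative; real studies use rich predictor sets.
dryness = rng.uniform(0.0, 1.0, 300)
fire = (dryness + rng.normal(0.0, 0.15, 300) > 0.6).astype(int)

def stump_fit(x, y, thresholds=np.linspace(0.1, 0.9, 17)):
    """Pick the threshold on x that best separates y (a 1-D decision stump)."""
    return max(thresholds, key=lambda t: np.mean((x > t) == y))

# Bagging: fit each stump on a bootstrap resample, then average the votes.
stumps = []
for _ in range(25):
    idx = rng.integers(0, len(dryness), len(dryness))
    stumps.append(stump_fit(dryness[idx], fire[idx]))

def predict(x):
    votes = np.mean([(x > t).astype(float) for t in stumps], axis=0)
    return (votes > 0.5).astype(int)

accuracy = np.mean(predict(dryness) == fire)
```

A real random forest additionally randomizes the features considered at each split and uses full trees rather than stumps, but the bootstrap-and-vote structure is the same.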

    Fine-tuning the BFOLDS fire regime module to support the assessment of fire-related functions and services in a changing Mediterranean mountain landscape

    Fire simulation models are useful for advancing fire research and improving landscape management. However, a better understanding of these tools is crucial to increase their reliability and to expand their application into research fields where it remains limited (e.g., ecosystem services). We evaluated several components of the BFOLDS Fire Regime Module and then tested its ability to simulate fire regime attributes in a Mediterranean mountainous landscape. Based on model outputs, we assessed the landscape's fire regulation capacity over time and its implications for supporting the climate regulation ecosystem service. We found that input data quality and the adjustment of fuel and fire behaviour parameters are crucial for accurately emulating key fire regime attributes. In addition, the high predictive capacity shown by BFOLDS-FRM means it can reliably inform the planning and sustainable management of fire-prone mountainous areas of the Mediterranean. We also identified and discussed modelling limitations and made recommendations to improve future model applications.
A. Sil received support from the Portuguese Foundation for Science and Technology (FCT) through Ph.D. Grant SFRH/BD/132838/2017, funded by the Ministry of Science, Technology and Higher Education, and by the European Social Fund - Operational Program Human Capital within the 2014-2020 EU Strategic Framework. P.M. Fernandes contributed in the framework of the UIDB/04033/2020 project, funded by the Portuguese Foundation for Science and Technology (FCT).

    Landcover and crop type classification with intra-annual times series of sentinel-2 and machine learning at central Portugal

    Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies.
Land cover and crop type mapping have benefited from the daily revisit period of sensors such as MODIS, SPOT-VGT, and NOAA-AVHRR, which provide long time-series archives. However, these sensors yield low accuracy within a Region of Interest (ROI) because of their coarse spatial resolution (i.e., pixel size > 250 m). The Copernicus Sentinel-2 mission from the European Space Agency (ESA) provides free data access for Sentinel-2A (S2a) and Sentinel-2B (S2b). This satellite constellation guarantees high temporal (5-day revisit cycle) and high spatial (10 m) resolution, allowing frequent updates of land cover products through supervised classification. Nevertheless, this requires training samples, which are traditionally collected manually via fieldwork or image interpretation. This thesis aims to implement an automatic workflow to classify land cover and crop types at 10 m resolution in central Portugal using existing databases, intra-annual time series of S2a and S2b, and Random Forest, a supervised machine learning algorithm. Agricultural classes such as temporary and permanent crops, as well as agricultural grasslands, were extracted from the Portuguese Land Parcel Identification System (LPIS) of the Instituto de Financiamento da Agricultura e Pescas (IFAP); land cover classes such as urban, forest, and water were trained from the Carta de Ocupação do Solo (COS), the national Land Use and Land Cover (LULC) map of Portugal; and burned areas were identified from the corresponding national map of the Instituto da Conservação da Natureza e das Florestas (ICNF). In addition, a set of preprocessing steps based on ancillary data was defined to avoid feeding mislabeled pixels to the classifier.
Mislabeling of pixels can occur due to errors in digitization, generalization, and differences in the Minimum Mapping Unit (MMU) between datasets. An inner buffer was applied to all datasets to reduce border overlap among classes; the mask from the ICNF was applied to remove burned areas; and an NDVI rule based on Landsat 8 removed recent clear-cuts in the forest. The Copernicus High-Resolution Layers (HRL) datasets from 2015 (the latest available), namely Dominant Leaf Type (DLT) and Tree Cover Density (TCD), were used to distinguish between forest with more than 60% coverage (coniferous and broadleaf), such as Holm Oak and Stone Pine, and forest with between 10 and 60% coverage (coniferous), for instance open Maritime Pine. Next, temporally gap-filled monthly composites were created for the agricultural period in Portugal, from October 2017 to September 2018. The composites provided data free of missing values, unlike single-date acquisition images. Finally, a pixel-based classification was carried out in the “Tejo and Sado” region of Portugal using Random Forest (RF). The resulting map achieves 76% overall accuracy for 31 classes (17 land cover and 14 crop types). The RF algorithm captured the most relevant features for classification from the cloud-free composites, mainly during spring and summer and in the Red Edge, NIR, and SWIR bands. Overall, classification was most successful for irrigated temporary crops, whereas grasslands were the most complex to classify, as they were confused with other rainfed crops and burned areas.
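The NDVI-based cleaning rule mentioned above can be sketched on toy rasters: compute NDVI from red and near-infrared reflectance and drop forest training pixels whose NDVI is too low (e.g., recent clear-cuts). The band values and the 0.4 cutoff are illustrative assumptions, not the thesis's actual data or threshold.

```python
import numpy as np

# Toy 4x4 reflectance rasters standing in for satellite band composites;
# the right half of the top rows mimics a recent clear-cut (low NIR contrast).
red = np.array([[0.05, 0.05, 0.30, 0.30],
                [0.05, 0.05, 0.30, 0.30],
                [0.05, 0.05, 0.05, 0.05],
                [0.05, 0.05, 0.05, 0.05]])
nir = np.array([[0.50, 0.50, 0.32, 0.32],
                [0.50, 0.50, 0.32, 0.32],
                [0.50, 0.50, 0.50, 0.50],
                [0.50, 0.50, 0.50, 0.50]])

# NDVI = (NIR - Red) / (NIR + Red); healthy forest scores high, bare or
# recently cleared ground scores near zero.
ndvi = (nir - red) / (nir + red)

# All 16 pixels are labeled "forest" in the training database; the NDVI
# rule removes the 4 clear-cut pixels before they reach the classifier.
forest_labels = np.ones_like(ndvi, dtype=bool)
clean_labels = forest_labels & (ndvi > 0.4)
```

The same masking pattern extends to the other cleaning steps in the workflow, such as erasing ICNF burned-area polygons or eroding label borders with an inner buffer.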

    Fire and Smoke Digital Twin -- A computational framework for modeling fire incident outcomes

    Fires and burning are the chief causes of particulate matter (PM2.5), a key measure of air quality in communities and cities worldwide. This work develops a live fire tracking platform that shows actively reported fires from over twenty cities in the U.S. and predicts their smoke paths and impacts on the air quality of regions within their range. Specifically, our close-to-real-time tracking and prediction culminate in a digital twin designed to protect public health and inform the public of fire and air quality risk. The tool tracks fire incidents in real time, uses the 3D building footprints of Austin to simulate smoke outputs, and predicts the falloff of fire incident smoke within the complex city environment. Results from this study include a complete fire and smoke digital twin model for Austin. We work in cooperation with the City of Austin Fire Department to ensure the accuracy of our forecasts, and we also show that the density of air quality sensors within our cities is insufficient to validate the presence of urban fires. We additionally release code and methodology to replicate these results for any city in the world. This work paves the path for similar digital twin models to be developed and deployed to better protect the health and safety of citizens.
Comment: 8 pages, 8 figures, conference