
    A Constant-Time Algorithm for Vector Field SLAM using an Exactly Sparse Extended Information Filter

    Abstract — Designing a localization system for a low-cost robotic consumer product poses a major challenge. In previous work, we introduced Vector Field SLAM [5], a system for simultaneously estimating robot pose and a vector field induced by stationary signal sources present in the environment. In this paper we show how this method can be realized on a low-cost embedded processing unit by applying the concepts of the Exactly Sparse Extended Information Filter [15]. By restricting the set of active features to the 4 nodes of the current cell, the size of the map becomes linear in the area explored by the robot, while the time for updating the state can be held constant under certain approximations. We report results from running our method on an ARM7 embedded board with 64 kB of RAM controlling a Roomba 510 vacuum cleaner in a standard test environment.
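    The constant-time property hinges on the vector field being defined cell-wise: the signal expected at a pose is interpolated from only the 4 node values of the cell containing the robot, so only those 4 features are active at any time. A minimal sketch of that per-cell interpolation (standard bilinear weighting over a unit cell; names and ordering are illustrative, not taken from the paper's code):

```python
def bilinear_field(nodes, x, y):
    """Interpolate a 2-D signal vector at (x, y) inside a unit cell.

    nodes: signal values at the 4 cell corners, ordered
           [(0,0), (1,0), (0,1), (1,1)], each a (hx, hy) pair.
    x, y:  local coordinates in [0, 1] within the cell.
    """
    # Bilinear weights for the four corners.
    w = [(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y]
    hx = sum(wi * n[0] for wi, n in zip(w, nodes))
    hy = sum(wi * n[1] for wi, n in zip(w, nodes))
    return hx, hy
```

Because each measurement touches only these 4 nodes, the information-filter update stays sparse regardless of how many cells the map contains.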

    Development and performance comparison of MPI and Fortran Coarrays within an atmospheric research model

    A mini-application of the Intermediate Complexity Atmospheric Research (ICAR) model offers an opportunity to compare the costs and performance of the Message Passing Interface (MPI) versus coarray Fortran, two methods of communication across processes. The application requires repeated communication of halo regions, which is performed with either MPI or coarrays. The MPI communication is done using non-blocking two-sided communication, while the coarray library is implemented using a one-sided MPI or OpenSHMEM communication backend. We examine the development cost in addition to strong and weak scalability analysis to understand the performance costs.
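    The halo-region exchange both backends implement can be illustrated serially. The sketch below is a hedged, single-process illustration of the pattern only, using plain Python lists in place of MPI ranks or coarray images: in the real code each copy would become a non-blocking send/receive pair (MPI) or a one-sided put/get (coarrays), overlapped with computation.

```python
def exchange_halos(subdomains, halo=1):
    """Serial sketch of a 1-D halo exchange: each subdomain copies
    `halo` interior cells from its neighbours into its ghost cells.

    subdomains: list of lists; each list carries `halo` ghost cells
                at both ends, interior values in between.
    """
    for i, dom in enumerate(subdomains):
        if i > 0:                       # fill left ghosts from left neighbour
            left = subdomains[i - 1]
            dom[:halo] = left[-2 * halo:-halo]
        if i < len(subdomains) - 1:     # fill right ghosts from right neighbour
            right = subdomains[i + 1]
            dom[-halo:] = right[halo:2 * halo]
    return subdomains
```

The development-cost comparison in the paper is essentially about how much ceremony each backend adds around copies like these.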

    Fortran coarray implementation of semi-Lagrangian convected air particles within an atmospheric model

    This work added semi-Lagrangian convected air particles to the Intermediate Complexity Atmospheric Research (ICAR) model. The ICAR model is a simplified atmospheric model using quasi-dynamical downscaling to gain performance over more traditional atmospheric models. The ICAR model uses Fortran coarrays to split the domain amongst images and handle the halo region communication of the image’s boundary regions. The newly implemented convected air particles use trilinear interpolation to compute initial properties from the Eulerian domain and calculate humidity and buoyancy forces as the model runs. This paper investigated the performance cost and scaling attributes of executing unsaturated and saturated air particles versus the original particle-less model. An in-depth analysis was done on the communication patterns and performance of the semi-Lagrangian air particles, as well as the performance cost of a variety of initial conditions such as wind speed and saturation mixing ratios. This study found that, given a linear increase in the number of particles communicated, performance initially decreases but then levels out: over the runtime of the model, the initial cost of particle communication is quickly offset by the computational benefits. The study provided insight into the number of processors required to amortize the additional computational cost of the air particles.
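    The trilinear interpolation used to seed particle properties from the Eulerian grid can be sketched as follows. This is the textbook formulation over a unit cube; the ICAR implementation details may differ:

```python
def trilinear(c, x, y, z):
    """Trilinear interpolation inside a unit cube.

    c:       corner values indexed c[i][j][k] for i, j, k in {0, 1}
             (i along x, j along y, k along z).
    x, y, z: local coordinates in [0, 1].
    """
    # Collapse the x axis first...
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    # ...then y...
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    # ...then z.
    return c0 * (1 - z) + c1 * z
```

Each particle thus reads only the 8 grid points of its enclosing cell, which is what makes the per-particle cost independent of domain size.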

    ESD Reviews: Model Dependence in Multi-Model Climate Ensembles: Weighting, Sub-Selection and Out-Of-Sample Testing

    The rationale for using multi-model ensembles in climate change projections and impacts research is often based on the expectation that different models constitute independent estimates; therefore, a range of models allows a better characterisation of the uncertainties in the representation of the climate system than a single model. However, it is known that research groups share literature, ideas for representations of processes, parameterisations, evaluation data sets and even sections of model code. Thus, nominally different models might have similar biases because of similarities in the way they represent a subset of processes, or even be near-duplicates of others, weakening the assumption that they constitute independent estimates. If there are near-replicates of some models, then treating all models equally is likely to bias the inferences made using these ensembles. The challenge is to establish the degree to which this might be true for any given application. While this issue is recognised by many in the community, quantifying and accounting for model dependence in anything other than an ad hoc way is challenging. Here we present a synthesis of the range of disparate attempts to define, quantify and address model dependence in multi-model climate ensembles in a common conceptual framework, and provide guidance on how users can test the efficacy of approaches that move beyond the equally weighted ensemble. In the upcoming Coupled Model Intercomparison Project phase 6 (CMIP6), several new models that are closely related to existing models are anticipated, as well as large ensembles from some models. We argue that quantitatively accounting for dependence in addition to model performance, and thoroughly testing the effectiveness of the approach used, will be key to a sound interpretation of the CMIP ensembles in future scientific studies.
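    One simple weighting scheme of the kind this review surveys is to down-weight near-duplicates directly. The sketch below is an illustrative ad hoc rule, not a method proposed by the paper: the inter-model distance matrix and the duplicate `radius` are assumed inputs, and choosing them well is exactly the hard part the review discusses.

```python
def dedup_weights(dist, radius):
    """Down-weight near-duplicate ensemble members.

    dist:   symmetric matrix of inter-model distances (dist[i][i] == 0).
    radius: models closer than this are treated as effective duplicates.

    Each model's weight is 1 / (count of models within `radius` of it,
    itself included); weights are then normalised to sum to 1.
    """
    n = len(dist)
    raw = [1.0 / sum(1 for j in range(n) if dist[i][j] < radius)
           for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]
```

With this rule, a pair of near-replicates shares the weight a single independent model would receive, which is the qualitative behaviour any dependence correction should have.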

    Characterizing uncertainty of the hydrologic impacts of climate change

    The high climate sensitivity of hydrologic systems, the importance of those systems to society, and the imprecise nature of future climate projections all motivate interest in characterizing uncertainty in the hydrologic impacts of climate change. We discuss recent research that exposes important sources of uncertainty that are commonly neglected by the water management community, especially uncertainties associated with internal climate system variability and with hydrologic modeling. We also discuss research exposing several issues with widely used climate downscaling methods. We propose that progress can be made following parallel paths: first, by explicitly characterizing the uncertainties throughout the modeling process (rather than using an ad hoc “ensemble of opportunity”) and second, by reducing uncertainties through developing criteria for excluding poor methods/models, as well as with targeted research to improve modeling capabilities. We argue that such research to reveal, reduce, and represent uncertainties is essential to establish a defensible range of quantitative hydrologic storylines of climate change impacts.
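    Explicitly characterizing uncertainty throughout the modeling chain often starts with partitioning ensemble spread into its sources. The sketch below illustrates that idea with the law of total variance; the grouping labels are invented for illustration, and the clean split assumes equal-sized groups:

```python
def variance_components(runs):
    """Split ensemble variance into between-group and within-group
    parts via the law of total variance.

    runs: dict mapping a group label (e.g. an emissions scenario) to a
          list of outcomes (e.g. streamflow projected by each
          hydrologic model under that scenario).
    """
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    group_means = [mean(v) for v in runs.values()]
    between = var(group_means)                      # e.g. scenario spread
    within = mean([var(v) for v in runs.values()])  # e.g. model spread
    return between, within
```

Comparing the two components shows which link in the chain (climate forcing versus hydrologic model choice) dominates the total uncertainty.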

    Shower Thoughts: Why Scientists Should Spend More Time in the Rain

    Stormwater is a vital resource and dynamic driver of terrestrial ecosystem processes. However, processes controlling interactions during and shortly after storms are often poorly seen and poorly sensed when direct observations are substituted with technological ones. We discuss how human observations complement technological ones and the benefits of scientists spending more time in the storm. Human observation can reveal ephemeral storm-related phenomena such as biogeochemical hot moments, organismal responses, and sedimentary processes that can then be explored in greater resolution using sensors and virtual experiments. Storm-related phenomena trigger lasting, oversized impacts on hydrologic and biogeochemical processes, organismal traits or functions, and ecosystem services at all scales. We provide examples of phenomena in forests, across disciplines and scales, that have been overlooked in past research to inspire mindful, holistic observation of ecosystems during storms. We conclude that technological observations alone are insufficient to trace the process complexity and unpredictability of fleeting biogeochemical or ecological events without the shower thoughts produced by scientists' human sensory and cognitive systems during storms.

    A Unified Approach for Process-Based Hydrologic Modeling: 2. Model Implementation and Case Studies

    This work advances a unified approach to process-based hydrologic modeling, which we term the “Structure for Unifying Multiple Modeling Alternatives (SUMMA).” The modeling framework, introduced in the companion paper, uses a general set of conservation equations with flexibility in the choice of process parameterizations (closure relationships) and spatial architecture. This second paper specifies the model equations and their spatial approximations, describes the hydrologic and biophysical process parameterizations currently supported within the framework, and illustrates how the framework can be used in conjunction with multivariate observations to identify model improvements and future research and data needs. The case studies illustrate the use of SUMMA to select among competing modeling approaches based on both observed data and theoretical considerations. Specific examples of preferable modeling approaches include the use of physiological methods to estimate stomatal resistance, careful specification of the shape of the within-canopy and below-canopy wind profile, explicitly accounting for dust concentrations within the snowpack, and explicitly representing distributed lateral flow processes. Results also demonstrate that changes in parameter values can affect model predictions as much as, or more than, changes in the process representation. This emphasizes that improvements in model fidelity require a sagacious choice of both process parameterizations and model parameters. In conclusion, we envisage that SUMMA can facilitate ongoing model development efforts, the diagnosis and correction of model structural errors, and improved characterization of model uncertainty.
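    The "fixed conservation equations, swappable closure relationships" architecture can be pictured as a dispatch table resolved at configuration time. Everything below is a hypothetical toy, not SUMMA's actual code or parameterizations: the option names and formulas are invented to show the shape of the design.

```python
# Toy closures: the formulas are placeholders, not physical models.
def stomatal_jarvis(par, vpd):
    """Empirical (Jarvis-type) multiplicative stress factors (toy form)."""
    return max(par, 0.0) * max(1.0 - 0.1 * vpd, 0.0)

def stomatal_ball_berry(par, vpd):
    """Physiological-style response (toy form, for illustration only)."""
    return max(par, 0.0) / (1.0 + vpd)

# Registry of alternatives per process, keyed by option name.
PARAMETERIZATIONS = {
    "stomatal_resistance": {
        "jarvis": stomatal_jarvis,
        "ballBerry": stomatal_ball_berry,
    },
}

def build_model(config):
    """Resolve each process to the closure named in the configuration."""
    return {process: options[config[process]]
            for process, options in PARAMETERIZATIONS.items()}
```

Because the conservation equations call the resolved functions through a common interface, swapping a closure changes one configuration entry rather than the model code, which is what makes controlled comparisons between process representations practical.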

    Snow Ensemble Uncertainty Project (SEUP): quantification of snow water equivalent uncertainty across North America via ensemble land surface modeling

    The Snow Ensemble Uncertainty Project (SEUP) is an effort to establish a baseline characterization of snow water equivalent (SWE) uncertainty across North America with the goal of informing global snow observational needs. An ensemble-based modeling approach, encompassing a suite of current operational models, is used to assess the uncertainty in SWE and total snow storage (SWS) estimation over North America during the 2009–2017 period. The highest modeled SWE uncertainty is observed in mountainous regions, likely due to the relatively deep snow, forcing uncertainties, and variability between the different models in resolving the snow processes over complex terrain. This highlights a need for high-resolution observations in mountains to capture the high spatial SWE variability. The greatest SWS is found in Tundra regions where, even though the spatiotemporal variability in modeled SWE is low, there is considerable uncertainty in the SWS estimates due to the large areal extent over which those estimates are spread. This highlights the need for high accuracy in snow estimations across the Tundra. In midlatitude boreal forests, large uncertainties in both SWE and SWS indicate that vegetation–snow impacts are a critical area where focused improvements to modeled snow estimation efforts need to be made. Finally, the SEUP results indicate that SWE uncertainty is driving runoff uncertainty, and measurements may be beneficial in reducing uncertainty in SWE and runoff during the melt season at high latitudes (e.g., Tundra and Taiga regions) and in the western mountain regions, whereas observations at (or near) peak SWE accumulation are more helpful over the midlatitudes.
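    A per-grid-cell spread statistic of the kind used to map ensemble SWE uncertainty can be sketched as follows. This uses the coefficient of variation across ensemble members as the uncertainty measure; the specific metric and gridding used by SEUP may differ:

```python
def ensemble_uncertainty(swe_by_model):
    """Per-grid-cell ensemble mean and spread of SWE estimates.

    swe_by_model: list of per-model SWE lists, one value per grid cell.
    Returns (means, cv), where cv is the coefficient of variation
    (population std / mean) at each cell, a simple relative-uncertainty
    measure.
    """
    ncell = len(swe_by_model[0])
    means, cv = [], []
    for c in range(ncell):
        vals = [m[c] for m in swe_by_model]
        mu = sum(vals) / len(vals)
        sd = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
        means.append(mu)
        cv.append(sd / mu if mu else 0.0)
    return means, cv
```

Mapping `cv` over the domain is what surfaces the mountain, tundra, and boreal-forest contrasts the abstract describes.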