Managing climate risk in water supply systems: materials and tools designed to empower technical professionals to better understand key issues
This manual has been developed as a learning tool to be used with a companion series of practical exercises. Together they provide a hands-on approach to learning key concepts in hydrology and climate science as they relate to climate risk management in water supply systems, as introduced in the text
A synthetic model to downscale and forecast evapotranspiration using wavelets and SVMs
Provision of reliable forecasts of evapotranspiration (ET) at the farm level can be a key element in efficient water management in irrigated basins. This paper presents an algorithm that provides a means to downscale and forecast ET images. The key concepts driving the development of this algorithm are building multiple relationships between inputs and outputs at all different spatial scales, and using these relationships to downscale and forecast the output at the finest scale. This downscaling/forecasting algorithm is designed for dependent properties such as ET. Decomposing and reconstructing processes are done using two-dimensional (2D) discrete wavelet decomposition (2D-DWT) with basis functions that suit the physics of the property in question. 2D-DWT, for one level, results in one datum image (Low-Low pass filter image, or LL) and three detailing images (Low-High or LH, High-Low or HL, and High-High or HH). The underlying physics between the input variables and the output are learned by using Support Vector Machines (SVMs) at the resolution of the output. The machines are then applied at a higher resolution to produce detailing images that help downscale the output image (e.g., ET). In addition to being downscaled, the output image can be shifted ahead in time, providing a means for the algorithm to be used for forecasting. The algorithm has been applied to two case studies: one in Bondville, Illinois, where the results have been validated against AmeriFlux observations, and another in the Sevier River Basin, Utah.
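The one-level 2D-DWT split described above can be sketched with a hand-rolled Haar transform. This is a minimal NumPy illustration, not the authors' implementation; the Haar basis and the toy field are placeholder assumptions:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the datum image (LL) and the
    three detailing images (LH, HL, HH), each at half resolution."""
    # low- and high-pass filtering along rows
    lo = (img[:, ::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, ::2] - img[:, 1::2]) / np.sqrt(2)
    # the same filter pair along columns
    LL = (lo[::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, (LH, HL, HH)

field = np.arange(16.0).reshape(4, 4)   # stand-in for an ET image
LL, (LH, HL, HH) = haar_dwt2(field)     # four 2x2 sub-images
```

The LL image carries the smoothed (datum) field, while LH, HL, and HH carry the horizontal, vertical, and diagonal detail the algorithm learns to predict at finer scales.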
Problems Related to Missing Data in Hydrologic Modeling: Implications and Solution
A common practice in pre-processing data for hydrological modeling is to ignore observations with any missing variable values at any given time step, even if only one of the independent variables is missing. These rows of data are labeled incomplete and are not used in either model building or subsequent testing and verification steps. This is not necessarily the best practice, as information is lost when incomplete rows of data are discarded. Learning algorithms are affected by such problems more than physically-based models, as they rely heavily on the data to learn the underlying input/output relationships. In this study, the extent of damage to the performance of the learning algorithm due to missing data is explored in a field-scale application. We have tested and compared the performance of two well-known learning algorithms, namely Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs), for short-term prediction of groundwater levels in a well field. A comparison of these two algorithms is made using various percentages of missing data. In addition to understanding the relative performance of these algorithms in dealing with missing data, a solution in the form of an imputation methodology is proposed for filling the data gaps. The proposed imputation methodology is tested against observed data.
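The contrast between discarding incomplete rows and filling the gaps can be sketched as follows. This is a minimal NumPy illustration with simple column-mean imputation standing in for the paper's imputation methodology; the predictor matrix and the 20% missing rate are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 3))            # synthetic predictor matrix

# knock out ~20% of one predictor, as in a patchy field record
mask = rng.random(100) < 0.2
X_missing = X.copy()
X_missing[mask, 1] = np.nan

# common practice: listwise deletion of any row with a missing value
complete_rows = X_missing[~np.isnan(X_missing).any(axis=1)]

# alternative: fill each gap with the column mean so no rows are lost
col_means = np.nanmean(X_missing, axis=0)
X_imputed = np.where(np.isnan(X_missing), col_means, X_missing)
```

Listwise deletion shrinks the training set by the full missing-row count even though only one variable per row is absent, which is exactly the information loss the paper targets.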
Downscaling and Assimilation of Surface Soil Moisture Using Ground Truth Measurements
Methods for reconciliation of spatial and temporal scales of data have become increasingly important as remote sensing data become more readily available and as the science of hydrology moves more heavily toward distributed modeling. The purpose of this paper is to develop a method to disaggregate coarse-resolution remote sensing data to finer scale resolutions that are more appropriate for use in hydrologic studies and water management. This disaggregation is done with the help of point measurements on the ground. The downscaling of remote sensing data is achieved by three main steps: initialization, spatial pattern mimicking, and assimilation. The first two steps are part of the main algorithm, and the last step, assimilation, is included for fine-tuning and to ensure further compatibility between the coarse-scale and fine-scale images. The assimilation step also incorporates the information coming from the point measurements. The approach has been applied and validated by downscaling images for two cases. In the first case, a synthetically generated random field is reproduced at fine and coarse resolutions. The downscaled image has been shown to match the spatial properties of the true image according to the variogram test, as well as the magnitude of values according to various univariate goodness-of-fit measures (R² = 0.91). In the second case, a soil moisture field from the Southern Great Plains (SGP 97) experiments is downscaled from a resolution of 800 m × 800 m to a resolution of 50 m × 50 m.
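The initialization and assimilation steps can be caricatured in a few lines. This is a hypothetical sketch, assuming a mean-preserving nudge toward a single ground measurement; it is not the paper's algorithm, and the coarse field and gauge value are invented:

```python
import numpy as np

def downscale_with_points(coarse, factor, points):
    """Initialize a fine grid by replicating coarse pixels, then assimilate
    point measurements with a nudge that preserves each coarse-pixel mean."""
    fine = np.kron(coarse, np.ones((factor, factor)))   # initialization
    n = factor * factor
    for i, j, value in points:                          # assimilation
        delta = value - fine[i, j]
        ci, cj = (i // factor) * factor, (j // factor) * factor
        # spread the compensating mass over the rest of the coarse cell
        fine[ci:ci + factor, cj:cj + factor] -= delta / (n - 1)
        fine[i, j] = value                              # honour the gauge
    return fine

coarse = np.array([[1.0, 2.0], [3.0, 4.0]])
fine = downscale_with_points(coarse, 2, [(0, 0, 2.0)])  # one gauge at (0, 0)
```

The constraint enforced here, that each fine block still aggregates back to its coarse pixel, is the kind of coarse/fine compatibility the assimilation step is meant to guarantee.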
Downscaling and Forecasting of Evapotranspiration Using a Synthetic Model of Wavelets and Support Vector Machines
Providing reliable forecasts of evapotranspiration (ET) at the farm level is a key element toward efficient water management in irrigated basins. This paper presents an algorithm that provides a means to downscale and forecast dependent variables such as ET images. Using the discrete wavelet transform (DWT) and support vector machines (SVMs), the algorithm finds multiple relationships between inputs and outputs at all different spatial scales and uses these relationships to predict the output at the finest resolution. Decomposing and reconstructing processes are done by using 2-D DWT with basis functions that suit the physics of the property in question. Two-dimensional DWT for one level will result in one datum image (low-low-pass filter image) and three detail images (low-high, high-low, and high-high). The underlying relationship between the input variables and the output is learned by training an SVM on the datum images at the resolution of the output. The SVM is then applied on the detail images to produce the detail images of the output, which are needed to help downscale the output image to a higher resolution. In addition to being downscaled, the output image can be shifted ahead in time, providing a means for the algorithm to be used for forecasting. The algorithm has been applied on two case studies, one in Bondville, IL, where the results have been validated against AmeriFlux observations, and another in the Sevier River Basin, UT.
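The core trick, learning the input/output mapping on the datum images and reusing it on the detail images, can be sketched with ordinary least squares standing in for the SVM regressor. The fields, the 0.7 slope, and the noise level are synthetic assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# pretend the coarse-scale physics is output ≈ 0.7 * input
input_LL = rng.random((16, 16))
output_LL = 0.7 * input_LL + 0.01 * rng.standard_normal((16, 16))

# learn the mapping on the datum images (OLS as an SVM stand-in)
w, b = np.polyfit(input_LL.ravel(), output_LL.ravel(), 1)

# reuse the learned slope on an input detail image to synthesize the
# missing output detail image (details are zero-mean, so no intercept)
input_LH = 0.05 * rng.standard_normal((16, 16))
output_LH_pred = w * input_LH
```

The synthesized output detail images would then feed an inverse DWT to reconstruct the output at the finer resolution; with an SVM in place of the linear fit, nonlinear input/output physics can be captured the same way.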
Designing Index-Based Weather Insurance for Farmers in Central America: Final Report to the World Bank Commodity Risk Management Group, ARD
This report is one of the deliverables for the project "Commodity Risk Management Group (ARD) seeks a qualified firm for Designing Index-Based Weather Insurance Contracts For Farmers in Central America, Terms of Reference." In this report, we document the development of eleven revised and improved standardized drought contracts, including six contracts specified in the World Bank's Commodity Risk Management Group's Terms of Reference for this project. Contracts are developed for three locations in Nicaragua (Chinandega, Leon, and Managua) for rice, soy, and sorghum crops and three locations in Honduras (La Conce, Catacamas, and Guayabillas/Olancho) for sorghum, soy, and maize crops. We provide background on the contract structure and design methods used. The final standardized drought contracts perform very well in our statistical analysis using crop models, likely due to the strong potential represented by the initial contracts proposed by project partners. We provide a detailed description and discussion of the contracts and the refinement process. Of course, it is important that project partners validate this performance through alternate sources of information, such as discussions with farmers, experts, and accurate historical yield data, when available. Going forward, it could be worthwhile to establish a more systematic process for quantitatively evaluating and tuning the contracts for additional risks (such as excess rainfall) based on farmer interviews and agronomic knowledge. Specifically, we recommend evaluating each risk of the contract independently prior to bundling. We also recommend development of a process for documentation of farmer and expert interviews that would provide information on each risk, as well as a historical record of when each risk was an issue.
Closer involvement of reinsurers in the design process for standardized contracts, as well as development of guidelines for features that may lead to additional expense, could help provide for fewer surprises in reinsurance pricing. We also recommend that the contracts for additional risks be structured and designed so that they can be adjusted to meet price, payout, and coverage constraints through systematic, statistically-based tuning of a small number of parameters. In response to queries raised during the project by project partners, we have deepened our study of the climate of Central America and its implications for the forecasts. We find that although there appears to be a strong link between the natural ENSO climate cycle and contract payouts, there is probably little scope for geographical hedging, as the crops covered do not span the geographic regions with negatively correlated rainfall. It is likely that the best hedging strategy would be to include excess rainfall contracts in the drought portfolio. Also, we see little evidence for altering contracts or pricing to address potential long-term precipitation trends in the near term. Given the strong potential for index insurance as a mechanism for adapting to climate risk, we highly recommend that products and prices be regularly updated over the years, with care to ensure value and product continuity with each change. In response to further queries from project partners, we perform an in-depth illustration and analysis of the use of rainfall simulators on the contracts, developing a rainfall simulator for the analysis. We illustrate the limitations of rainfall simulators as well as their potential for improving contract design and pricing for areas with short datasets. Certain features of some contracts (the shifting sowing window) led a wide range of rainfall simulators to under-represent variability.
Often, subtle contract features can lead to a lack of robustness to sensitivity tests and difficulty in analysis, and could potentially lead to increased reinsurance pricing without substantially adding to the quality of the coverage. We have noticed that the bulk of the protection of many of the contracts could be provided through much simpler indices that are much more robust to sensitivity tests and, where practically implementable, perform much more predictably on rainfall simulators. An additional stage of index design may therefore be very valuable following the development of a sophisticated contract: determining whether the bulk of the coverage of the contract could be duplicated in a simplified statistical approximation of the contract.
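As a concrete illustration of the "much simpler indices" argument, a basic drought contract can be reduced to a linear payout on cumulative rainfall between a trigger and an exit. The threshold and payout values below are invented for illustration; the contracts in the report bundle several such phases and risks:

```python
def drought_payout(rain_mm, trigger=100.0, exit_=50.0, max_payout=1000.0):
    """Linear rainfall-index payout: zero above the trigger, full payout
    at or below the exit, and proportional to the deficit in between."""
    if rain_mm >= trigger:
        return 0.0
    if rain_mm <= exit_:
        return max_payout
    return max_payout * (trigger - rain_mm) / (trigger - exit_)
```

An index this simple is easy to sensitivity-test: perturbing the accumulated rainfall, or the window over which it is accumulated, changes the payout smoothly, with no hidden interactions of the kind a shifting sowing window introduces.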