
    Near Real-Time Biophysical Rice (Oryza sativa L.) Yield Estimation to Support Crop Insurance Implementation in India

    Immediate yield loss information is required to trigger crop insurance payouts, which are important for securing agricultural income stability for millions of smallholder farmers. Techniques for monitoring crop growth in real time and at 5 km spatial resolution may also aid in designing price interventions or storage strategies for domestic production. In India, the government-backed PMFBY (Pradhan Mantri Fasal Bima Yojana) insurance scheme is seeking such technologies to enable cost-efficient insurance premiums for Indian farmers. In this study, we used the Decision Support System for Agrotechnology Transfer (DSSAT) to estimate yield and yield anomalies at 5 km spatial resolution for Kharif rice (Oryza sativa L.) over India between 2001 and 2017. We calibrated the model using publicly available data: gridded weather data, nutrient applications, sowing dates, a crop mask, irrigation information, and genetic coefficients of staple varieties. Model performance over the calibration years (2001–2015) was exceptionally good: 13 of the 15 years achieved a correlation coefficient (r) above 0.7 with observed yields, and more than half exceeded 0.75. After calibration, around 52% (67%) of the major rice-growing districts (>25% area under cultivation) obtained a relative Root Mean Square Error (rRMSE) below 20% (25%). An out-of-sample validation of the calibrated model in the Kharif seasons of 2016 and 2017 yielded differences between state-wise observed and simulated yield anomalies ranging from –16% to 20%. Overall, the model's ability to simulate rice yield indicates that it is applicable in selected states of India and that its outputs are useful as a yield-loss assessment index for the PMFBY crop insurance scheme.
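
    As a hedged illustration of the two skill metrics quoted above (Pearson r and relative RMSE between observed and simulated district yields), the following minimal sketch shows one way to compute them; the yield arrays and all values are hypothetical, not the study's data.

```python
# Minimal sketch (not from the paper): computing the skill metrics reported
# above -- Pearson correlation (r) and relative RMSE -- for paired observed
# and simulated district yields. Array contents are illustrative placeholders.
import numpy as np

def pearson_r(obs: np.ndarray, sim: np.ndarray) -> float:
    """Pearson correlation coefficient between observed and simulated yields."""
    return float(np.corrcoef(obs, sim)[0, 1])

def rrmse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Relative RMSE (%): RMSE normalized by the mean observed yield."""
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    return float(100.0 * rmse / np.mean(obs))

# Hypothetical district-level yields (kg/ha) for one Kharif season.
observed  = np.array([2100.0, 2450.0, 1980.0, 2600.0, 2300.0])
simulated = np.array([2000.0, 2520.0, 2100.0, 2480.0, 2250.0])

print(f"r     = {pearson_r(observed, simulated):.2f}")
print(f"rRMSE = {rrmse(observed, simulated):.1f} %")
```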

    Modelling flood inundation in the Mlazi river under uncertainty.

    Thesis (M.Sc.), University of Natal, Durban, 2003. The research project described in this dissertation studies the modelling techniques employed for the Mlazi River in the context of flood analysis and flood forecasting, in order to model flood inundation. These techniques are applicable to an environment where there is uncertainty due to a lack of historical input data for calibration and validation. This uncertainty is best explained by understanding the process and data required to model flood inundation. To model flood inundation in real time, forecasted flood flows are required as input to a hydraulic river model that simulates inundation levels. These forecasted flows would be obtained from a flood-forecasting model that must itself be calibrated and validated: calibration requires historical rainfall data that correlates with streamflow data, and validation requires real-time streamflow data. In the Mlazi Catchment there are only two stream gauges, located in the upper subcatchments. Although these gauges have recorded data for 20 years, the streamflow data does not correlate with the disaggregated daily rainfall data, of which there are records for at least 40 years. It would therefore be difficult to develop the forecasting model from the available rainfall and streamflow data. In this instance, a more realistic approach to modelling flood inundation involved integrating GIS technology, a physically based hydrological model for flood analysis, a conceptual forecasting model for real-time forecasting, and a hydraulic model for computing inundation levels. The integration of these modelling techniques is best explained in three phases.
    Phase 1, desktop catchment modelling: a continuous, physically based simulation model (HEC-HMS) was set up using GIS technology. The model applied the SCS-UH method for the estimation of peak discharges, with synthetic hyetographs for various recurrence intervals as input. A sensitivity analysis was carried out, after which the HEC-HMS model was calibrated against the output of the SCS-UH method and peak discharges were simulated. The synthetic hyetographs, together with results from the HEC-HMS model, were used to validate the Mlazi Meta Model (MMM) used for real-time flood forecasting.
    Phase 2, implementation of the inundation model: the hydraulic model (HEC-RAS) was created from a Digital Elevation Model (DEM). A field survey was conducted to capture the roughness coefficients and hydraulic structures, which were incorporated into the model, and to confirm the terrain cross-sections derived from the DEM. Flow data for computing inundation levels were obtained from the HEC-HMS model. Inundation levels for the natural channel of the Mlazi River were simulated using one-dimensional steady-state analysis, whereas the canal overbank areas were simulated under unsteady-state conditions.
    Phase 3, creation of the Mlazi Meta Model (MMM): the MMM used for real-time flood forecasting is a linear catchment model consisting of a semi-distributed three-reservoir cell model (Pegram and Sinclair, 2002). The MMM parameters were initially adjusted using the HEC-HMS model so that it became representative of the Mlazi catchment. Although validating one model against another may seem unreasonable, this approach gave the best initial estimate of the parameters, rather than relying on trial and error. The MMM will be further updated using recorded radar and streamflow data once all structures are in place. Confidence in the applicability of the HEC-HMS model rests on the intensive effort applied in setting it up; furthermore, the output of the calibrated HEC-HMS model was compared with other reliable methods of computing design peak discharges and validated against a frequency analysis conducted on one of the subcatchments.
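
    The abstract describes the MMM as a semi-distributed linear three-reservoir cell model. As a rough, hedged sketch of that general structure (not the actual MMM of Pegram and Sinclair, 2002), the following routes a rainfall series through three linear reservoirs in series; the release coefficients, time step, and storm are invented for illustration.

```python
# Minimal sketch (assumed parameters, not the actual MMM): a cascade of three
# linear reservoir cells. Each cell releases outflow proportional to its
# storage (Q = k * S), and the outflow of one cell feeds the next.
import numpy as np

def three_reservoir_cascade(rain: np.ndarray, k=(0.3, 0.2, 0.1), dt=1.0) -> np.ndarray:
    """Route a rainfall series (mm per step) through three linear reservoirs
    in series and return the outflow from the final cell."""
    s = [0.0, 0.0, 0.0]                  # storages of the three cells
    out = np.zeros_like(rain, dtype=float)
    for t, r in enumerate(rain):
        inflow = r
        for i in range(3):
            q = k[i] * s[i]              # linear release from cell i
            s[i] += dt * (inflow - q)    # update storage by mass balance
            inflow = q                   # outflow of cell i feeds cell i+1
        out[t] = inflow
    return out

# Hypothetical storm: five wet steps followed by a dry recession.
rainfall = np.array([0, 10, 20, 15, 5, 0, 0, 0, 0, 0], dtype=float)
print(np.round(three_reservoir_cascade(rainfall), 2))
```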

    Numerical simulations of two-dimensional saturated granular media

    The liquefaction phenomenon in soil has been studied in great detail during the past 20 years. The need to understand this phenomenon has been emphasized by the extent of the damage resulting from soil liquefaction during earthquakes. Although an overall explanation exists for this phenomenon through the concept of effective stress, the basic mechanism of loss of strength of the soil skeleton has not been thoroughly examined and remains unclear. The present study proposes a numerical model for simulating the behavior of saturated granular media. The model was developed with two main objectives: (1) to represent the mechanical response of an assemblage of discrete particles having the shape of discs, and (2) to model the interaction of the interstitial pore fluid with the idealized granular medium. The representation of the solid skeleton is based on Cundall and Strack's distinct element model, in which discrete particles are modelled as discs in two dimensions, each obeying Newton's laws; interparticle contacts are represented by springs, dashpots, and frictional elements. Assuming a Newtonian incompressible fluid with constant viscosity and density, and quasi-steady flow, the fluid phase is described by Stokes' equations, solved through a boundary integral element formulation. Several validation test cases are presented, along with four simple shear tests on dry and saturated granular assemblages. For these last four tests, the numerical results indicate that the model can represent qualitatively the behavior of real soil, while clarifying the processes occurring at the microscale that influence soil response.
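
    To make the distinct-element idea concrete, here is a hedged minimal sketch of a single normal contact between two discs in the Cundall-Strack spirit: a spring-dashpot force at the contact, with each disc obeying Newton's second law under explicit time stepping. All parameters are illustrative, and the Stokes-flow fluid coupling used in the study is not reproduced here.

```python
# Minimal sketch (illustrative parameters, dry case only): two discs collide
# along one axis; when they overlap, a spring-dashpot contact force acts,
# and velocities/positions are advanced by semi-implicit Euler integration.
import numpy as np

kn, cn = 1.0e5, 50.0          # normal spring stiffness (N/m), dashpot coefficient (N.s/m)
r, m, dt = 0.01, 0.05, 1e-5   # disc radius (m), mass (kg), time step (s)

x = np.array([0.0, 0.0205])   # disc centre positions along one axis (m)
v = np.array([0.5, -0.5])     # initial velocities: discs approaching (m/s)

for step in range(2000):
    gap = (x[1] - x[0]) - 2 * r           # separation; negative => overlap
    f = 0.0
    if gap < 0.0:                         # contact: spring + dashpot force
        rel_v = v[1] - v[0]
        f = -kn * gap - cn * rel_v        # elastic repulsion + viscous damping
    a = np.array([-f, f]) / m             # equal and opposite accelerations (Newton's 3rd law)
    v += a * dt                           # semi-implicit Euler: velocity first,
    x += v * dt                           # then position with the new velocity

print(f"post-collision velocities: {v.round(3)} m/s")
```

    The dashpot dissipates energy during contact, so the discs rebound with reduced speed, a qualitative stand-in for inelastic grain collisions.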

    VO2 prediction based on physiologic and mechanical exercise measurements

    The Cardiopulmonary Exercise Test (CPET) is a diagnostic test that evaluates the functional capacity of an individual through the integrated response of the cardiovascular, respiratory, and metabolic systems. VO2 max is the parameter that assesses functional capacity, although it is difficult to obtain given the effort its measurement demands. In recent years, an increase in computing capability combined with the availability of large amounts of stored data has led to a heightened interest in machine learning (ML). In this study, we aimed to complement CPET with ML models that predict oxygen consumption in healthy individuals. The methodology is based on the cleaning and exploratory analysis of a public database of 992 CPETs performed on healthy individuals and athletes. To predict each value of VO2 (~569,000 instances), five ML algorithms were used (Random Forests, kNN, Neural Networks, Linear Regression, and SVM) with heart rate, respiratory rate, time from the beginning of the exam, and treadmill speed as features, evaluated with 20-fold cross-validation. The best result came from the Random Forest model, with an R2 of 0.88 and an RMSE of 334.34 ml.min-1. Furthermore, using the same methodology but different features, we tried to predict VO2 max for the 724 adult participants with a maximal test (RER ≥ 1.05), but weaker results were obtained (the best model was Linear Regression, with an R2 of 0.50 and an RMSE of 498.06 ml.min-1). Still, this model showed a better correlation with the real VO2 max than the Wasserman equation (R = 0.71 vs R = 0.59). It is therefore possible to predict breath-by-breath VO2 accurately from easy-to-obtain physiological and mechanical measurements.
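
    A hedged sketch of the modelling step described above follows: a Random Forest regressor scored by cross-validation on the four predictors the abstract names. The data here are synthetic placeholders, not the study's database, and the 5-fold setting is for speed (the study used 20 folds).

```python
# Minimal sketch (synthetic data, not the study's database): Random Forest
# regression of VO2 on heart rate, respiratory rate, elapsed time, and
# treadmill speed, scored with k-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(60, 190, n),    # heart rate (bpm)
    rng.uniform(12, 50, n),     # respiratory rate (breaths/min)
    rng.uniform(0, 900, n),     # time since exam start (s)
    rng.uniform(2, 16, n),      # treadmill speed (km/h)
])
# Toy VO2 target (ml/min) loosely coupled to the features, plus noise.
y = 12 * X[:, 0] + 15 * X[:, 3] ** 1.5 + rng.normal(0, 150, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
print(f"R2 = {r2.mean():.2f}, RMSE = {rmse.mean():.1f} ml/min")
```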

    Slope stability prediction using the Artificial Neural Network (ANN)

    Slope failure is a significant risk in both civil and mining operations. This failure phenomenon is more likely to occur during high-rainfall seasons, in areas with a high probability of seismic activity, and in cold countries subject to freeze-thaw cycles. Further, a poor understanding of hydrogeological and geotechnical factors can contribute to erroneous engineering designs. Several Limit Equilibrium Methods (LEMs) and numerical modelling tools have been developed over the years; however, the success of Artificial Neural Networks (ANNs) in other disciplines has motivated researchers to apply them to forecasting the Factor Of Safety (FOS). This paper aims to develop ANNs that predict the FOS for slopes formed by (i) a single uniform soil/rock material and (ii) two soil/rock materials. Each of these slope types has three sub-models, with 6, 7, and 8 input material parameters respectively. Thousands of FOS values were generated for each sub-model by running LEMs on randomly generated material input parameters. About 80% of the generated FOS values were used to train the ANNs and the remaining 20% for validation. The one-material models performed better than the two-material models overall. The first sub-model of the one-material models and the third sub-model of the two-material models exhibited the best performance, achieving Mean Square Errors (MSE) of 8.35E-4 and 5.10E-3, respectively. The third sub-model of the one-material models and the first sub-model of the two-material models had MSEs of 2.00E-3 and 9.80E-3, respectively, while the second sub-models showed the lowest performance. The minimal differences between LEMs and ANNs lead to the conclusion that ANNs can serve design engineers as a tool for quick, first-pass analysis without undertaking the rigorous, complex, and time-consuming computation of the FOS using LEMs. An actual field-tested database could then be used to predict real-world slope failures.
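
    The following hedged sketch mirrors the workflow described above: a small feed-forward network trained on (material parameters → FOS) pairs with an 80/20 train/validation split and an MSE metric. In the study the FOS labels came from LEM computations; here a toy surrogate function generates them so the example runs end to end, and the six input parameters are illustrative.

```python
# Minimal sketch (synthetic stand-in data): train a small neural network to
# map material parameters to a Factor of Safety, then report validation MSE.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 5000
# Six illustrative inputs: cohesion, friction angle, unit weight,
# slope angle, slope height, pore-pressure ratio (all scaled to [0, 1]).
X = rng.uniform(0, 1, size=(n, 6))
# Toy stand-in for the LEM-computed FOS labels.
fos = 0.8 + 1.5 * X[:, 0] + 1.2 * X[:, 1] - 0.9 * X[:, 3] - 0.6 * X[:, 4]

# 80/20 split, matching the proportion described in the abstract.
X_tr, X_va, y_tr, y_va = train_test_split(X, fos, test_size=0.2, random_state=1)
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1)
net.fit(X_tr, y_tr)
print(f"validation MSE = {mean_squared_error(y_va, net.predict(X_va)):.2e}")
```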

    Measurement error in long-term retrospective recall surveys of earnings

    Several recent studies in labour and population economics use retrospective surveys as a substitute for longitudinal survey data, given the latter's high cost and limited availability. Although a single interview can obtain a lifetime history, inaccurate long-term recall could make such retrospective surveys a poor substitute for longitudinal surveys, especially if it induces non-classical error that makes conventional statistical corrections less effective. In this paper, we use the unique Panel Study of Income Dynamics Validation Study to assess the accuracy of long-term recall data. We find underreporting of transitory events, and this recall error creates a non-classical measurement error problem. A limited cost-benefit analysis is also conducted, showing how savings from using a cheaper retrospective recall survey might be compared with the cost of applying the less accurate recall data to a specific policy objective, such as designing transfers to reduce chronic poverty.
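
    As a hedged illustration of why underreported transitory events produce non-classical error, the simulation below (entirely synthetic, not the PSID Validation Study data) builds earnings from permanent and transitory components, lets recall capture only part of the transitory component, and shows that the resulting error correlates with true earnings, violating the classical independence assumption.

```python
# Minimal sketch (simulated data): underreporting transitory earnings yields
# NON-classical measurement error -- the error is correlated with the truth.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
permanent  = rng.normal(30_000, 5_000, n)   # stable component of earnings
transitory = rng.normal(0, 4_000, n)        # short-lived shocks
true_earnings = permanent + transitory

recalled = permanent + 0.4 * transitory     # transitory events underreported
error = recalled - true_earnings            # equals -0.6 * transitory

corr = np.corrcoef(error, true_earnings)[0, 1]
print(f"corr(error, true earnings) = {corr:.2f}  (classical error would be ~0)")
```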