21 research outputs found
Development of a nitrogen recommendation tool for corn considering static and dynamic variables
Many soil and weather variables can affect the economic optimum nitrogen (N) rate (EONR) for maize. We classified 54 potential factors as dynamic (change rapidly over time, e.g. soil water) and static (change slowly over time, e.g. soil organic matter) and explored their relative importance for EONR and yield prediction by analyzing a dataset of 51 N trials from the Central-West region of Argentina. Across trials, the average EONR was 113 ± 83 kg N ha-1 and the average optimum yield was 12.3 ± 2.2 Mg ha-1, roughly 50% higher than the N rates currently used and the yields obtained by maize producers in that region. Dynamic factors alone explained 50% of the variability in the EONR, whereas static factors explained only 20%. The best EONR predictions resulted from combining one static variable (soil depth) with four dynamic variables (number of days with precipitation >20 mm, residue amount, soil nitrate at planting, and heat stress around silking). The resulting EONR model had a mean absolute error of 39 kg N ha-1 and an adjusted R2 of 0.61. Interestingly, the yield of the previous crop was not an important factor explaining EONR variability. Regression models for yield at the optimum and at zero N fertilization rate, as well as regression models to be used as forecasting tools at maize planting time, were developed and discussed. The proposed regression models are driven by a few easy-to-measure variables, filling the gap between simple (minimal- to no-input) and complex EONR prediction tools such as simulation models. In view of increasing data availability, our proposed models can be further improved and deployed across environments.
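As a hedged illustration of the kind of five-variable model described above (one static plus four dynamic predictors), the sketch below fits an ordinary least squares regression. The column names, synthetic data, and coefficients are assumptions for illustration only; they are not the paper's fitted model.

```python
# Illustrative sketch only: an OLS regression of EONR on one static and four
# dynamic predictors, mirroring the structure described in the abstract.
# Synthetic data; variable names and coefficients are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 51  # number of N trials in the study
trials = pd.DataFrame({
    "soil_depth_cm":       rng.uniform(50, 200, n),  # static
    "days_precip_gt20mm":  rng.integers(0, 12, n),   # dynamic
    "residue_mg_ha":       rng.uniform(2, 10, n),    # dynamic
    "nitrate_at_planting": rng.uniform(10, 80, n),   # dynamic
    "heat_stress_silking": rng.integers(0, 8, n),    # dynamic
})
# Synthetic response standing in for the trial-derived EONR (kg N/ha)
trials["eonr"] = (
    40 + 0.3 * trials["soil_depth_cm"] - 4 * trials["days_precip_gt20mm"]
    + 5 * trials["residue_mg_ha"] - 0.8 * trials["nitrate_at_planting"]
    + 6 * trials["heat_stress_silking"] + rng.normal(0, 30, n)
)

X = sm.add_constant(trials.drop(columns="eonr"))
fit = sm.OLS(trials["eonr"], X).fit()
print(fit.summary())
print(f"adjusted R2 = {fit.rsquared_adj:.2f}, "
      f"MAE = {np.mean(np.abs(fit.resid)):.0f} kg N/ha")
```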
Includes supplemental figures and table. Excel model attached below as an additional file.
Leveraging digital agriculture for on-farm testing of technologies
The Precision Nitrogen Project (PNP) worked with more than 80 corn and winter wheat producers to inexpensively design and implement randomized, replicated field strip trials on whole commercial farm fields, and to provide site-specific testing of current nitrogen (N) technologies. This article proposes a conceptual framework and detailed procedure to select the N technology to be tested; design and implement field trials; generate, process, and manage field trial data; and automatically analyze, report, and share benefits from precision N technology. The selection of the N technology was farmer-driven to ensure a good fit and to increase the likelihood of future technology adoption. The technology selection method, called the "N tiered approach", consisted of selecting technologies of progressively increasing complexity without outpacing the farmer's learning process or exceeding farm logistical constraints. The N tools were classified into (1) crop model-based, (2) remote sensing-based, (3) enhanced efficiency fertilizers, and (4) biologicals. Field strip trials comparing producers' traditional management and the selected N technology were combined with site-specific N rate blocks placed in contrasting areas of the fields. Yield data from the N rate blocks were used to derive the site-specific optimal N rate. The benefits of current N technologies were quantified by comparing their yield, profit, and N use efficiency (NUE) to growers' traditional management and to the estimated site-specific optimal N rate. Communicating the trial results back to the growers was crucial to ensure the promotion and adoption of these N technologies farm wide. The framework and overall benefits from N technologies were presented and discussed. The proposed framework allowed researchers, agronomists, and farmers to carry out on-farm precision N experimentation using novel technologies to quantify the benefits of digital ag technology and promote adoption.
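As a hedged sketch of how a site-specific optimal N rate might be derived from N rate block yields, the example below fits a quadratic yield response and solves for the economic optimum. The quadratic form, rates, yields, and prices are assumptions, not the project's actual method or data.

```python
# Illustrative sketch: derive a site-specific economic optimum N rate (EONR)
# from N-rate block yields. A quadratic response is assumed; rates, yields,
# and prices are made up for illustration.
import numpy as np

n_rates = np.array([0, 56, 112, 168, 224], dtype=float)  # kg N/ha blocks
yields = np.array([7.1, 9.6, 11.4, 12.2, 12.4])          # measured Mg/ha

c2, c1, c0 = np.polyfit(n_rates, yields, 2)  # yield = c0 + c1*N + c2*N**2
price_grain = 187.0  # $/Mg grain (assumed)
price_n = 1.1        # $/kg N (assumed)

# Optimum where marginal revenue equals marginal cost:
# price_grain * (c1 + 2*c2*N) = price_n
eonr = (price_n / price_grain - c1) / (2 * c2)
print(f"site-specific EONR ~ {eonr:.0f} kg N/ha")
```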
A yield comparison between small-plot and on-farm foliar fungicide trials in soybean and maize
Agronomic research provides management recommendations based on small-plot trials (SPTs) and on-farm trials (OFTs) with very different characteristics. SPTs are traditionally conducted at agricultural experiment stations by research institutes or universities, while OFTs are conducted under commercial-scale conditions and managed by farmers using their own equipment. Several researchers have claimed that discrepancies can occur between these two types of trials, which can make extrapolating results from SPTs to the farm level difficult. In our study, we conducted an extensive comparison of small-plot and on-farm trials to analyze the effect of foliar fungicide application on maize and soybean yields, using data collected from five US states. Analysis of the soybean data showed similar mean yield responses and within-trial standard deviations to fungicide application between 479 OFTs and 83 SPTs. For maize, our comparison of 300 OFTs and 114 SPTs also showed similar mean yield responses. Nevertheless, the within-trial standard deviation was three times smaller in on-farm than in small-plot trials. On the other hand, the between-trial standard deviation (measuring the variability of the effects of fungicide application across different environments) was almost twice as large in SPTs as in OFTs for both crops. Hence, for both crops, the estimated effects of fungicide on yield were similar whether they came from OFTs or SPTs. This implies that OFTs can potentially detect significant yield differences with fewer replicates and thus reduce the cost of data generation. We argue that SPTs can be seen as a preliminary step before scaling up to OFTs to facilitate technology transfer and extrapolate the results to real farming conditions.
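The within-trial and between-trial standard deviations discussed above can be separated with a random-intercept mixed model; the sketch below does this on simulated strip-trial responses. All data and magnitudes are assumptions, not the study's.

```python
# Illustrative sketch: partition fungicide yield-response variability into
# between-trial and within-trial components with a random-intercept model.
# Simulated data; magnitudes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_trials, reps = 40, 4
trial_effect = rng.normal(0.15, 0.20, n_trials)  # Mg/ha, varies by environment
df = pd.DataFrame([
    {"trial": t, "resp": trial_effect[t] + rng.normal(0, 0.10)}
    for t in range(n_trials) for _ in range(reps)
])

m = smf.mixedlm("resp ~ 1", df, groups=df["trial"]).fit()
print(f"between-trial SD: {np.sqrt(m.cov_re.iloc[0, 0]):.3f} Mg/ha")
print(f"within-trial SD:  {np.sqrt(m.scale):.3f} Mg/ha")
```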
Development of a Scalable Edge-Cloud Computing Based Variable Rate Irrigation Scheduling Framework
Currently, variable-rate precision irrigation (VRI) scheduling methods require large amounts of data and processing time to accurately determine crop water demands and spatially process those demands into an irrigation prescription. Unfortunately, irrigated crops continue to develop additional water stress while the previously collected data are being processed. Machine learning is a helpful tool, but handling and transmitting large datasets can be problematic; many rural areas lack the wireless data transmission infrastructure needed to support cloud interaction. The introduction of "edge-cloud" processing to agricultural applications has been shown to be effective at increasing data processing speed and reducing the amount of data transmitted to remote processing computers or base stations. In irrigation in particular, edge-cloud computing has so far seen limited implementation. Therefore, an initial logic flow concept was developed to effectively implement this new processing technique for VRI. Using edge-cloud computer nodes in the field, autonomous data collection devices such as center pivot-mounted infrared canopy thermometers, soil moisture sensors, local weather stations, and UAVs could transmit highly localized crop data to the edge-cloud computer for processing. Following the implementation of an irrigation strategy created by the edge-cloud computer with a machine learning model, data would be transmitted to the cloud (requiring transmission of only minimal model parameters), resulting in a feedback loop for continual improvement of the global model on the cloud (federated learning). VRI prescription maps from the SETMI model were used as training data for the machine learning model.
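A minimal sketch of the federated feedback loop just described, assuming a linear irrigation-demand model refined by gradient descent at each edge node and FedAvg-style weighted parameter averaging in the cloud; the model form, node count, and data are all assumptions for illustration.

```python
# Illustrative federated-learning loop: edge nodes fit local parameters on
# local sensor data and transmit only those parameters; the cloud averages
# them into a new global model. Linear model and data are hypothetical.
import numpy as np

def edge_update(weights, X, y, lr=0.05, epochs=50):
    """Local gradient-descent refinement on one edge node's field data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w  # only these few parameters leave the node, not X or y

def cloud_aggregate(updates, sizes):
    """FedAvg-style weighted average of edge parameters -> new global model."""
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

rng = np.random.default_rng(2)
true_w = np.array([1.5, -0.8, 0.3])  # e.g., canopy temp, soil moisture, ET terms
global_w = np.zeros(3)
for _round in range(5):  # one round per irrigation scheduling cycle
    updates, sizes = [], []
    for _node in range(4):  # four pivots, each with its own local data
        X = rng.normal(size=(100, 3))
        y = X @ true_w + rng.normal(0, 0.1, 100)
        updates.append(edge_update(global_w, X, y))
        sizes.append(len(y))
    global_w = cloud_aggregate(updates, sizes)
print("global model after 5 rounds:", global_w.round(2))
```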
A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate
Historically, crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest, when it is too late for farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16 years of continuous corn and 15 years of soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time and the 6th leaf, 12th leaf, and silking phenological stages); (2) determine whether using analogous historical weather years, selected on precipitation and temperature patterns, as opposed to the full 35-year dataset could improve forecast accuracy; and (3) quantify the value added by the crop model in predicting annual EONR and yields, using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that mean corn yield predictions at planting time (R2 = 0.77) using 35 years of historical weather were close to the observed and predicted yields at maturity (R2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than in soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below, or at the site-mean EONR) in 62% of the cases examined (n = 31), with an average error range of ±38 kg N ha-1 (22% of the average N rate). Across all forecast times, the prediction error of EONR was about three times higher than that of yield. Using the full 35-year weather record was better than using selected historical weather years (RRMSE on average 3% lower). Overall, the proposed approach of using the crop model as a forecasting tool could improve year-to-year predictability of corn yields and optimum N rates. Further improvements in modeling and set-up protocols are needed for more accurate forecasts, especially for extreme weather years with the most significant economic and environmental costs.
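The weather-splicing step behind such a forecast can be sketched as follows: observed weather up to the forecast date is concatenated with the remainder of each historical year to form an ensemble, the model is run over candidate N rates for each member, and the EONR is read off the mean profit curve. The toy response function below stands in for APSIM; all data, prices, and the response form are assumptions.

```python
# Illustrative in-season EONR forecast by weather splicing. A toy yield
# response stands in for a calibrated APSIM run; data and prices are assumed.
import numpy as np

rng = np.random.default_rng(3)
season_days, forecast_day = 180, 60          # forecast around the 6th leaf
observed = rng.normal(22, 3, forecast_day)   # current-season daily temps so far
historical = rng.normal(21, 3, (35, season_days))  # 35-year daily record

def run_model(weather, n_rate):
    """Toy stand-in for a crop model's yield (Mg/ha) at a given N rate."""
    heat_stress = np.clip(weather - 30, 0, None).sum()
    return max(0.0, 12.0 + 0.02 * n_rate - 4e-5 * n_rate**2 - 0.05 * heat_stress)

n_rates = np.arange(0, 301, 25)
ensemble = np.array([
    [run_model(np.concatenate([observed, hist[forecast_day:]]), n) for n in n_rates]
    for hist in historical
])  # shape: (35 weather futures, N rates)

mean_yield = ensemble.mean(axis=0)
profit = mean_yield * 187.0 - n_rates * 1.1  # assumed $187/Mg grain, $1.1/kg N
print(f"forecast EONR at day {forecast_day}: {n_rates[np.argmax(profit)]} kg N/ha")
```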
The nitrogen fertilizer conundrum: why is yield a poor determinant of crops' nitrogen fertilizer requirements?
The application of nitrogen (N) fertilizer both underpins the high productivity of agricultural systems and contributes to multiple environmental harms. The search for ways that farmers can optimize N fertilizer applications to their crops is of global significance. A common concept in developing recommendations for N fertilizer applications is the "mass balance paradigm" (that is, bigger crops need more N, and smaller crops less), despite several studies showing that the crop yield at the optimum N rate (Nopt) is poorly related to Nopt. In this study we simulated two contrasting field experiments where crops were grown for 5 and 16 consecutive years under uniform management, but in which yield at Nopt was poorly correlated with Nopt. We found that N lost to the environment relative to yield (i.e., kg N t-1) varied by ±124 and ±164% of the mean in the simulations of the two experiments. Conversely, N exported in harvested produce (kg N t-1) varied by only ±11 and ±48% of the mean. Given that the experiments were uniformly managed across time, the variations result from crop-to-crop climatic differences. These results provide, for the first time, a quantitative example of the importance of climatic causes of the poor correlation between yield at Nopt and Nopt. An implication of this result is that, even if the yield of the coming crop could be accurately predicted, it would be of little use in determining the amount of N fertilizer farmers need to apply, because of the variability in environmental N losses and/or crop N uptake. These results, in addition to previous empirical evidence that yield at Nopt and Nopt are poorly correlated, may help industry and farmers move to more credible systems of N fertilizer management.
This article is published as Thorburn, P.J., Biggs, J.S., Puntel, L.A. et al. The nitrogen fertilizer conundrum: why is yield a poor determinant of crops' nitrogen fertilizer requirements? Agron. Sustain. Dev. 44, 18 (2024). https://doi.org/10.1007/s13593-024-00955-7. Copyright 2024, The Authors. Licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
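To make the mass-balance arithmetic concrete, the toy calculation below holds the yield forecast and grain N export fixed and varies only the per-tonne environmental losses, echoing the contrast in variability reported above; every number is an assumption for illustration, not a value from the paper.

```python
# Toy mass-balance N recommendation:
#   N_fert = yield * (N exported + N lost per tonne) - soil N supply.
# All values are illustrative assumptions.
yield_t_ha = 10.0       # predicted grain yield, t/ha (held fixed)
n_export_per_t = 12.0   # kg N exported per t grain (relatively stable)
soil_n_supply = 60.0    # kg N/ha supplied by the soil (held fixed)

# Vary only the environmental losses per tonne, the term the simulations
# showed to be far more variable than grain N export:
for loss_per_t in (2.0, 9.0, 20.0):  # kg N lost to the environment per t grain
    n_fert = yield_t_ha * (n_export_per_t + loss_per_t) - soil_n_supply
    print(f"losses {loss_per_t:4.1f} kg N/t -> recommend {n_fert:5.1f} kg N/ha")
# The same yield forecast gives very different N requirements, which is why
# yield alone is a poor determinant of Nopt.
```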
How digital is agriculture in a subset of countries from South America? Adoption and limitations
Digital agriculture (DA) can contribute solutions to meet increasing demand for healthy, nutritious, and affordable food in an efficient and sustainable way. South America (SA) is one of the main grain and protein producers in the world, but the status of DA in the region is unknown. A systematic review and case studies from Brazil, Argentina, Uruguay, and Chile were conducted to address the following objectives: (1) quantify adoption of existing DA technologies; (2) identify limitations to DA adoption; and (3) summarise existing metrics to benchmark DA benefits. The level of DA adoption was led by Brazil and Argentina, followed by Uruguay and, at a slower rate, Chile. GPS guidance systems, mapping tools, mobile apps, and remote sensing were the most adopted DA technologies in SA. The most reported limitations to adoption were technology cost, lack of training, the limited number of companies providing services, and unclear benefits from DA. Across the case studies, there was no clear definition of DA. To mitigate some of these limitations, our findings suggest the need for a DA educational curriculum that can fulfill the demand for job skills such as data processing, analysis, and interpretation. Regional efforts are also needed to standardise metrics that benchmark DA benefits. This will allow stakeholders to design targeted initiatives to promote DA towards sustainability of food production in the region.
Understanding the 2016 yields and interactions between soils, crops, climate and management
Several technologies to forecast crop yields and soil nutrient dynamics have emerged over the past years. These include process-based models, statistical models, machine learning, aerial images, or combinations of these. These technologies are viewed as promising tools to help Midwestern agriculture achieve production and environmental goals, but most are in the initial stages of implementation. In June 2016 we launched a web tool (http://crops.extension.iastate.edu/facts/) that provided real-time information and yield predictions for 20 combinations of crops and management practices. Our project, called FACTS (Forecast and Assessment of Cropping sysTemS), takes a systems approach to forecasting and evaluating cropping system performance. In this paper we report the accuracy of FACTS yield predictions against ground-truth measurements and analyze the factors responsible for achieving 200-240 bu/acre corn yields and 55-75 bu/acre soybean yields in the FACTS plots in 2016.
Maize and soybean root front velocity and maximum depth in Iowa, USA
Quantitative measurements of root traits can improve our understanding of how crops respond to soil and weather conditions, but such data are rare. Our objective was to quantify maximum root depth and root front velocity (RFV) for maize (Zea mays) and soybean (Glycine max) crops across a range of growing conditions in the Midwest USA. Two sets of root measurements were taken every 10-15 days, in the crop row (in-row) and between two crop rows (center-row), across six Iowa sites with different management practices such as planting dates and drainage systems, totaling 20 replicated experimental treatments. Temporal root data were best described by linear segmental functions. Maize RFV was 0.62 ± 0.2 cm d-1 until the 5th leaf stage, when it increased to 3.12 ± 0.03 cm d-1 until maximum depth occurred at the 18th leaf stage (860 °Cd after planting). Similarly, soybean RFV was 1.19 ± 0.4 cm d-1 until the 3rd node, when it increased to 3.31 ± 0.5 cm d-1 until maximum root depth occurred at the 13th node (813.6 °Cd after planting). The maximum root depth was similar between crops (P > 0.05) and ranged from 120 to 157 cm across 18 experimental treatments, and 89-90 cm in the other two. Root depth did not exceed the average water table (two weeks prior to the start of grain filling), and there was a significant relationship between maximum root depth and water table depth (R2 = 0.61; P = 0.001). Current models of root dynamics rely on temperature as the main control on root growth; our results provide strong support for this relationship (R2 > 0.76; P < 0.001) but suggest that water table depth should also be considered, particularly in conditions such as the Midwest USA, where excess water routinely limits crop production. These results can assist crop model calibration and improvement, as well as agronomic assessments and plant breeding efforts in this region.
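The linear segmental RFV function reported for maize can be written down directly; the sketch below uses the abstract's velocities (0.62 cm d-1 to the 5th leaf, then 3.12 cm d-1 to maximum depth at the 18th leaf) but assumes illustrative calendar-day timings for the stage transitions, since the paper expresses them in thermal time.

```python
# Piecewise-linear (segmental) maize root depth vs. days after planting, using
# the RFV values reported above. Stage timings in days are assumed here; the
# study reports them in thermal time (maximum depth at ~860 degC d).
import numpy as np

def maize_root_depth(days, v5_day=25, v18_day=70, max_depth_cm=150.0):
    """Root depth (cm): 0.62 cm/d to V5, 3.12 cm/d to V18, then constant."""
    days = np.asarray(days, dtype=float)
    depth = np.where(
        days <= v5_day,
        0.62 * days,
        0.62 * v5_day + 3.12 * (np.minimum(days, v18_day) - v5_day),
    )
    return np.minimum(depth, max_depth_cm)

print(maize_root_depth([10, 40, 90]))  # -> [  6.2  62.3 150. ]
```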