Yield monitor accuracy
The accuracy of yield monitors is a hot topic. Yield monitors were developed before global positioning system (GPS) technology was generally available, and they were originally designed to give good accuracy on "loads." Loads are usually the amount of grain in a combine hopper or on a truck or wagon. Even now, most calibration methods depend on loads: the amount reported by the yield monitor for some reasonably large area is compared with the amount measured by some type of scale. Research has shown that it is extremely important to calibrate the yield monitor under the conditions in which it will be used.

One way yield monitors could be helpful is in measuring the amount of grain harvested from test plots. We used a combine equipped with both scales and a yield monitor to harvest an experimental field with 60 plots that had combinations of tillage and herbicide treatments. The plots were 300 feet long and five 30-inch rows wide. The focus of this article is the comparison of yields obtained with the yield monitor with yields measured on the same plots with the scale.

In the figure, the upper line is the yield for each plot and the lower line is the comparison between the yield monitor and the scale. This comparison is not about dots on a yield map but about loads as reported by a monitor compared with a scale. Most of the time the yield monitor recorded about 2 to 3 percent higher than the scale. We had calibrated the yield monitor in some of the "average" plots (approximately 130 bu/acre), and when the yields shown by the scales are near this value (plot No. 10), the yield monitor is almost the same as the scales. At higher yields (plot No. 51), the monitor shows more grain than the scales indicate. At lower yields (plot No. 30), the monitor shows less grain than the scales indicate.

This information is from one set of comparisons and may not be repeatable, but let's consider the implications of the data in the figure. The yield shown by the scale for plot 30 is about 80 bu/acre; for plot 51 it is about 145 bu/acre. If we apply the percentage differences between scale and yield monitor to these yields, the yield as measured by the monitor for plot 30 would be 80 x 0.96 = 76.8 bu/acre, whereas the yield for plot 51 would be 145 x 1.03 = 149.4 bu/acre. Under these circumstances, low yields get lower and high yields get higher. For the extreme yields this would not cause problems in deciding which treatment gives the better yield, as long as you are not interested in the exact difference. But the difference between plots 51 and 30 increased by almost 8 bu/acre when measured with the yield monitor rather than the scales.

The point is that the percentage difference is important, but the sign and the consistency of the sign (+ or -) are also important. If, for some reason, the monitor caused the high-yield plots to show a lower yield, and the reverse, there could be a problem in determining which treatments gave better yields, because the actual differences might be erased or even reversed. The difference in yields needed to show statistically significant differences between treatments in experiments of this type is about 10 percent. Notice that the errors in this experiment range from about 4 percent positive to 5 percent negative, a spread of 9 percent. That might be enough to distort comparisons and show significant differences when none really exist.
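The arithmetic above can be laid out as a short Python sketch. The bushel-per-acre values and percent errors are the approximate figures read from the plot discussed in this article, not exact measurements, and the plot numbers are used only for illustration.

    # Monitor-vs-scale arithmetic for three plots, using approximate
    # values from the figure (bu/acre and error fractions are rounded).
    plots = {
        # plot number: (scale yield in bu/acre, monitor error as a fraction)
        10: (130.0, 0.00),   # near the ~130 bu/acre calibration point
        30: (80.0, -0.04),   # low-yield plot: monitor reads ~4 percent low
        51: (145.0, 0.03),   # high-yield plot: monitor reads ~3 percent high
    }

    for plot, (scale, error) in sorted(plots.items()):
        monitor = scale * (1.0 + error)
        print(f"Plot {plot}: scale {scale:6.1f} bu/acre, "
              f"monitor {monitor:6.1f} bu/acre ({error:+.0%})")

    # The spread between the extreme plots widens under the monitor:
    scale_diff = plots[51][0] - plots[30][0]                  # 65.0 bu/acre
    monitor_diff = plots[51][0] * 1.03 - plots[30][0] * 0.96  # ~72.6 bu/acre
    print(f"Scale difference:   {scale_diff:5.1f} bu/acre")
    print(f"Monitor difference: {monitor_diff:5.1f} bu/acre "
          f"(+{monitor_diff - scale_diff:.1f})")

Running this shows the scale difference of 65.0 bu/acre growing to about 72.6 bu/acre under the monitor, the "almost 8 bu/acre" exaggeration noted above.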
Careful operation and good calibration are needed to keep you from collecting bad data, whether for experiments or for in-field comparisons.
I thank Jeff Cook at the National Soil Tilth Laboratory for his help with the field operations and data analysis for this report. This research is supported by Case Corporation.

This article originally appeared on page 9 of the IC-482 (PrecisAg) -- May 5, 1999 issue.



