V3.51 Calibration considerations

Now that BenF has eliminated the GndPos (0V) interaction with the gain adjustments, I have been digging deeper into the calibration procedures and thought I would share my findings. I was hoping to find a shortcut method of calibration.

Some consideration was given to the fact that, with respect to calibration, the front end consists of simple voltage dividers at DC. The V/Div steps within a range scale should therefore be linear, which would give the same %cal for each V/Div within a range. After all, the only thing that changes resistor values at DC is simple heating, and that should be negligible in the Nano front end. The attached zip contains a PDF showing the equivalent front-end circuit for each range. For my method I centered GndPos vertically on the display and used top-of-screen levels to perform the calibration in each case.
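The divider reasoning above can be sketched numerically. A minimal sketch, assuming illustrative placeholder resistor values (not the actual values from the attached PDF): at DC the divider ratio is a constant, so any gain error it contributes is the same percentage at every voltage within a range.

```python
def divider_ratio(r_top, r_bottom):
    """Attenuation of a simple two-resistor divider at DC."""
    return r_bottom / (r_top + r_bottom)

# Hypothetical 10:1 divider; values chosen only for illustration.
ratio = divider_ratio(r_top=900e3, r_bottom=100e3)

# The ratio does not depend on the input voltage, so the percentage
# error versus the ideal 10:1 attenuation is identical at each level.
for v_in in (0.5, 1.0, 5.0):
    v_out = v_in * ratio
    error_pct = (v_out / (v_in * 0.1) - 1.0) * 100
    print(f"{v_in} V in -> {v_out:.4f} V out, error {error_pct:+.2f}%")
```

This is why a purely resistive front end would predict linear %cal across V/Div steps within one range.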

As seen below, my calibration adjustments on the 1x probe ranges show quite the contrary. These non-linear results suggest that something else is involved, such as ADC calibration-constant errors for each V/Div setting, or charge-and-hold leakage of the ADC input capacitor.

V/Div   %cal adjust   Range
0.01    -0.64         range-1
0.02    -0.67         range-1
0.05    -0.67         range-1
0.1     -3.04         range-1

0.2     -1.52         range-2
0.5     -1.79         range-2
1       -1.43         range-2

2       +0.77         range-3
5       +1.07         range-3
10      +1.73         range-3

In summary, one calibration event per range scale does not appear sufficient for maximum accuracy; each individual V/Div seems to need calibrating with its own unique voltage. Rather than calibrating at full-scale voltages, it also makes sense to calibrate for how you intend to use the Nano. In my case I plan to keep GndPos centered on the display so I can see both positive and negative signals, so I calibrate only to half full scale for both positive and negative voltages.
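As a sketch of the arithmetic behind a per-V/Div adjustment, something like the following derives a %cal correction from a known reference voltage and the reading the scope actually shows. Both the formula and the sample readings are my assumptions for illustration, not the Nano firmware's internal calculation:

```python
def cal_percent(v_reference, v_measured):
    """Percentage gain adjustment needed so v_measured reads as v_reference."""
    return (v_reference / v_measured - 1.0) * 100

# Hypothetical half-full-scale calibration points, one per V/Div,
# in the spirit of calibrating each V/Div with its own voltage.
readings = {
    0.01: (0.0400, 0.04026),  # (reference V, measured V)
    0.10: (0.4000, 0.41250),
}
for v_div, (v_ref, v_meas) in readings.items():
    print(f"{v_div} V/div: adjust {cal_percent(v_ref, v_meas):+.2f}%")
```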
front end.zip (25.4 KB)

Thanks for the input!

I think we’re fooled here by the displayed gain percentage. This figure is calculated as a percentage variation of gain (I’ll see if I can replace it with a more useful figure) and does not make much sense as an absolute value, and even less so as a comparative figure across V/Div settings. The following steps (keep gain at zero for all V/Div settings) may give us some answers:

  1. V/Div settings that share the same input stage should require the same calibration correction for any single voltage. One way to confirm this is to check the absolute Vavg value across those V/Div settings (same input range, constant voltage) and see that they are indeed equal.

  2. The next step is to check the linearity of U5A (TL082 op-amp). We can do this by keeping the input voltage constant while altering Gnd Pos. Variations in Vavg readings must then come from non-linearity in U5A (allow a few seconds between Gnd Pos changes for the automatic offset calibration to settle).

  3. The final step is to check the linearity of the total gain (from probe to actual measurement, including U5B, U5A and the ADC). For this test we need a voltmeter connected across the input probes, with Gnd Pos kept fixed. We then record voltmeter and Vavg readings across a series of input voltages. If we calculate the error as the percentage difference between the voltmeter and Vavg readings, these percentages should ideally be equal.
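The comparison in step 3 can be sketched as below; the voltmeter/Vavg pairs are hypothetical placeholders, and the "spread" check is simply one way to judge whether the percentage errors agree:

```python
def percent_error(v_meter, v_avg):
    """Error of the scope's Vavg reading relative to the voltmeter."""
    return (v_avg / v_meter - 1.0) * 100

# Hypothetical (voltmeter, Vavg) pairs recorded at a fixed Gnd Pos.
pairs = [(0.5, 0.497), (1.0, 0.994), (2.0, 1.989)]
errors = [percent_error(m, a) for m, a in pairs]

# If total gain is linear, the errors are nearly equal, so their
# spread (max minus min) should be close to zero.
spread = max(errors) - min(errors)
print([f"{e:+.2f}%" for e in errors], f"spread {spread:.2f}%")
```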

I’ll see if I can carry out these measurements and perhaps if you (or others) do the same we may have some useful results to share.

To verify automatic offset calibration and get some peace of mind, I took measurements in accordance with suggested steps. Findings are as follows:

  1. Voltage stays the same across V/Div’s within the same range.
  2. Vavg stays constant irrespective of Gnd Pos.
  3. Linearity from probe through to actual measurements checks out fine.

On my Nano, range-1 and range-2 are better than 0.5% with factory calibration; range-3 reads about 2% low. I’ve chosen to calibrate when running on battery power.