Calibration procedure alternative

Some forum members have reported that the Cal Offs and Cal Gain adjustments interact to some degree. I have also noticed this, and it can be very pronounced. I have found a method that seems to eliminate that interaction.

I understand that full-scale Cal Gain adjustment is always better, but in this case the half-scale Cal Gain seems to be adequate.

If (for each V/Div range) you place Gnd Pos in the center of the screen (halfway up) and then adjust and save Cal Offs, you have a constant baseline for easy Cal Gain adjustment without Cal Offs interaction (for that V/Div range).

With a 1x probe, the 5V/Div range, Gnd Pos set to mid-screen (Cal Offs = 0), and ME set to Vavg, I alternately applied +9.53 VDC and -9.53 VDC (monitored with a digital voltmeter) and adjusted and saved Cal Gain so as to average out the plus and minus voltage differences. The Cal Gain I saved was +0.43%.
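To make the averaging step concrete, here is a minimal sketch of the arithmetic. The Nano readings in it are hypothetical, chosen only so the numbers reproduce the +0.43% figure above; substitute your own DVM and Nano values.

```c
#include <stdio.h>

int main(void)
{
    /* DVM reference voltages and (hypothetical) Nano readings,
       picked so the result matches the +0.43% described above. */
    double ref_pos = 9.53,  nano_pos = 9.489;   /* +9.53 VDC applied */
    double ref_neg = -9.53, nano_neg = -9.489;  /* -9.53 VDC applied */

    /* Per-polarity gain error, as a fraction. */
    double err_pos = (nano_pos - ref_pos) / ref_pos;
    double err_neg = (nano_neg - ref_neg) / ref_neg;

    /* Averaging the two polarities cancels any residual offset
       component and leaves the pure gain error; Cal Gain is set to
       the opposite sign to correct for it. */
    double avg_err = (err_pos + err_neg) / 2.0;
    printf("average gain error: %+.2f%% -> Cal Gain %+.2f%%\n",
           avg_err * 100.0, -avg_err * 100.0);
    return 0;
}
```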

Then I set Gnd Pos at the bottom of the screen, applied 36.52 VDC to the Nano, and measured 36.34 VDC on it (once again monitored with the digital voltmeter). That works out to 36.34/36.52 = 0.99507, or just about half of one percent error. For a non-lab-calibrated instrument this accuracy should be more than adequate for most users, and it makes the calibration much faster and simpler, with no diddling around going back and forth between the two adjustments.

It may also be possible that if you simply double the Cal Gain value this method gives you, you will get results similar to full-scale calibration. I say this because my Cal Gain was +0.43% and my measured error was -0.493%; the total correction needed (0.43% + 0.49% ≈ 0.92%) is roughly double what I saved at half scale. It will take more testing to confirm this, so maybe some of you forum members can test this theory across various V/Div scales and report your findings. :slight_smile:

This post is techie and requires a good understanding of the Nano HW design, but if you struggle through (or skip the middle part), there is some good news at the end.

I took some measurements of the bias voltage used to offset the input signal going into the Nano ADC. This voltage is controlled in firmware (relative to Gnd Pos) by adjusting the duty cycle of a PWM output pin, and I looked at the relationship between duty cycle and output voltage. The current calibration model (as inherited from 2.5e) expects this relationship to be linear. For Gnd Pos close to vertical center it is, but less so towards the top and bottom. This suggests that the procedure outlined by lygra above makes sense (calibrate with Gnd Pos at vertical center).
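For illustration, this is the linearity assumption in sketch form. The constants and names are hypothetical; the real values depend on the Nano's supply rail, RC filter and timer configuration.

```c
#include <stdint.h>

/* Hypothetical constants -- not the actual Nano register values. */
#define PWM_PERIOD 4096u  /* timer auto-reload (full-scale duty) */
#define V_HIGH     2.8    /* PWM high level in volts */

/* The 2.5e-style model assumes the filtered PWM output (the ADC bias
   voltage) is a linear function of duty cycle. Measurements track
   this line well near mid-range (Gnd Pos near vertical center) but
   deviate towards the top and bottom of the screen. */
static double bias_from_duty(uint16_t duty)
{
    return V_HIGH * (double)duty / (double)PWM_PERIOD;
}
```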

When we calibrate offset, we determine an offset for the bias output voltage, whereas gain calibration is related to the probe input voltage. Missing from this model is a way to calibrate the gain of the bias voltage itself. The effect is that calibration is only accurate for the current Gnd Pos. As soon as we move Gnd Pos (I tend to keep mine at a fixed location and hadn’t really noticed this until V3.5), the offset voltage drifts away from zero.
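A sketch of what such an offset-plus-gain model looks like, and why it breaks down when Gnd Pos moves. This is my illustration of the model described above, not the actual firmware code; all names and units are assumptions.

```c
#include <stdint.h>

/* Illustrative model: cal_offs corrects the bias (PWM) output,
   cal_gain scales the probe input, and no term calibrates the gain
   of the bias voltage itself. */
static double sample_to_volts(uint16_t adc_counts, uint16_t gnd_pos,
                              double volts_per_count,
                              double cal_offs, double cal_gain)
{
    /* The saved offset shifts the baseline derived from Gnd Pos. */
    double baseline = (double)gnd_pos + cal_offs;

    /* A gain error in the PWM-to-bias path is not modelled here, so
       moving gnd_pos moves the true zero point and the saved cal_offs
       no longer cancels it -- the drift described above. */
    return ((double)adc_counts - baseline) * volts_per_count
           * (1.0 + cal_gain);
}
```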

Rather than improving on this model (which is complex as it is), I found a way to measure the bias voltage relative to ground using the Nano ADC. This is the equivalent of a voltage reference with feedback, so we can determine signal ground offset accurately and irrespective of Gnd Pos. Using this approach I’ve implemented automatic offset calibration. The offset now gets calibrated whenever we change Gnd Pos and also at scheduled intervals during operation (currently every 5 s). This only takes a couple of milliseconds (done between acquisitions) and so is transparent to normal operation.
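In outline, the scheme might look like the sketch below. The function names and the idea of polling from the acquisition loop are my own illustration of the description above, not the V3.51 sources.

```c
#include <stdint.h>
#include <stdbool.h>

#define RECAL_INTERVAL_MS 5000u  /* scheduled re-calibration period */

/* Hypothetical hooks into the firmware. */
extern uint32_t millis(void);                    /* system tick        */
extern uint16_t adc_read_bias_channel(void);     /* bias via the ADC   */
extern uint16_t bias_expected(uint16_t gnd_pos); /* ideal bias setting */

static int16_t  g_offset;       /* current signal-ground offset, counts */
static uint32_t g_last_cal_ms;

/* Runs between acquisitions; only takes a couple of milliseconds. */
static void calibrate_offset(uint16_t gnd_pos)
{
    uint16_t measured = adc_read_bias_channel();
    g_offset = (int16_t)(measured - bias_expected(gnd_pos));
    g_last_cal_ms = millis();
}

/* Called whenever Gnd Pos changes, and polled from the acquisition
   loop so the offset is also refreshed every RECAL_INTERVAL_MS. */
static void offset_cal_poll(uint16_t gnd_pos, bool gnd_pos_changed)
{
    if (gnd_pos_changed ||
        millis() - g_last_cal_ms >= RECAL_INTERVAL_MS)
        calibrate_offset(gnd_pos);
}
```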

The simplified and more accurate calibration model will be in V3.51.