There seems to be lots of confusion around the DSO Quad calibration process, and no documentation, so I’ve put the following guide together based on what the firmware actually does. This is all based on the v2.33 app source, but the calibration function in v2.32 is identical.
First, the bugs:
Don’t bother trying to calibrate channel B: a positive adjustment to the zero calibration for any range also modifies the drift/offset calibration (which affects all ranges).
Exit option selection is broken in two ways. Calibration changes to the 10V range also change the exit option (without changing the displayed text), so don’t try to calibrate the 10V range if you want to save changes. And navigation through the cyclic buffer of exit options is not handled correctly either; pay attention to calibration step 5 below.
(Not a calibration bug, but related) Signal levels reported by the Quad when in use will only match calibrated values when ypos is 25 (1 div up from the bottom of the screen).
I have modified the app firmware to fix the first two of these, which I can post if anyone is interested, but until the ypos-dependent amplitude issue is fixed, calibration is a bit pointless anyway.
The - and + controls:
In the Zero column, they change the zero calibration. One value per range per channel.
In the Diff column, they change the gain calibration. One value per range per channel.
In the Votage column, they change the drift/offset calibration. One value per channel, shared across all ranges.
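Putting the three adjustments together, the correction the firmware appears to apply per sample can be sketched like this. This is my own reading of what the columns do, not lifted from the app source; the function and variable names are mine, and the real scaling and order of operations may differ:

```python
def calibrated_mv(raw_mv, zero, diff, offset_uv):
    """Illustrative model of the three calibration values.

    zero      - Zero column: per range, per channel
    diff      - Diff column: gain in thousandths, per range, per channel
    offset_uv - Votage column drift/offset: one per channel, all ranges

    Hypothetical arithmetic -- the actual firmware may scale or order
    these operations differently.
    """
    return (raw_mv - zero) * (diff / 1000.0) + offset_uv / 1000.0

# Example: raw reading of 280 mV, zero point 3, gain 990 (i.e. 0.99)
print(calibrated_mv(280, 3, 990, 0))
```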
General points:
The value displayed in a cell only updates when the cell is selected, so to get valid readings in a cell, you need to navigate through it to see the effect of calibration changes in other cells.
Navigating through a cell may change the displayed value, but does not overwrite/lose calibration settings. (E.g. If you navigate through a calibrated Zero cell with the probe not grounded, you will get a non-zero reading, but you don’t lose your zero calibration.)
Calibration steps:
1. Ground the probe.
2. Adjust the Zero column value to 0. (You can confirm the zero voltage by navigating through the Votage cell; you should see 0.00uV.)
3. Connect the probe to a suitable reference voltage for the range being calibrated.
4. Adjust the gain calibration in the Diff column until the value in the Votage column matches the reference voltage. (Yes, this requires making a change in the Diff column, then navigating through the Votage cell to update the voltage reading.) Note that the values displayed in the Diff column are in thousandths. You should expect a value fairly close to 1000 (1.00), but only 2 decimal places are displayed for values above 999, so don’t expect a visible change for every adjustment above 1.00.
5. Navigate down to the bottom row and select “+” once to change to “Exit with save”. (This is important; reaching “Exit with save” any other way will not save your changes.) Press button 2 (square) to save and exit. If it doesn’t say “Save the calibration data” at the top of the screen for a second, you have just lost all your changes.
One argument is that you should calibrate at mid-scale for the best accuracy across the entire range. There is also a counterargument that you should always calibrate at full scale. Consider which argument you find more convincing, and how you will actually be using each range.
If your typical input voltages fall in the upper half of the scale, full-scale calibration makes sense; if they fall in the lower half, mid-scale calibration makes sense.
Exceeding the scale range is probably not a good option unless you know for sure that the ADC range overhead can properly measure that excess voltage in a linear manner.
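To make the trade-off concrete, here is a toy numeric sketch. The quadratic error term and all the numbers are invented purely for illustration: with any nonlinearity present, single-point gain calibration zeroes the error at the calibration point and leaves residual error elsewhere, so the best calibration point depends on where on the scale you usually measure.

```python
def measured(v_divs):
    # Hypothetical front-end response: 1% quadratic error at full
    # scale (8 divisions). Made-up coefficient for illustration only.
    full_scale = 8.0
    return v_divs * (1.0 + 0.01 * v_divs / full_scale)

def gain_for(cal_point):
    # Single-point gain calibration: pick the gain that makes the
    # reading exact at cal_point (in divisions).
    return cal_point / measured(cal_point)

for cal_point in (4.0, 8.0):              # mid-scale vs full-scale
    g = gain_for(cal_point)
    for v in (1.0, 4.0, 8.0):
        err = g * measured(v) - v         # residual error in divisions
        print(f"cal at {cal_point} div: error at {v} div = {err:+.4f}")
```

Running this shows the error is zero at whichever point you calibrated and grows as you move away from it, which is the whole mid-scale vs full-scale argument in miniature.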
I am having a problem calibrating the voltage on my DSO Quad. I am running APP: 2.45(b), SYS: 1.41.
I hit the calibrate key and get “Please connect CH_A input to GND”. The device is connected to GND and I hit the calibrate key one more time.
At this point the device says “Input 250-300mV Standard Voltage to CH_A”. I connect the channel A probe to a regulated source set to 277 mV DC. Now, I understand that I need to adjust the DIFF column on the CH_A row so that the VOLTAGE reads “+277mV”.
Using Navigator A (-…+), I move the button to the right or left. Neither the DIFF nor the VOLTAGE changes.
If I use Navigator B (SELECT RANGE <—>) to move to a new voltage range, I cannot change the DIFF or VOLTAGE values on that row either.
Not sure if it’s worth my 2 cents, but I just installed said software and the zero and diff went into an “auto-cal” mode, streaming through the readings for 2 or 3 passes and then instructing me to supply voltage and adjust. It seemed to work better than expected.
Thanks for your note. You gave me hope to try again.
With the 2.45(b) revision, I would amend step 4 to include the note that the (+…-) Navigator A key must be held right or left while the Quad beeps in the background as the value in the VOLTAGE box changes. I would also note that the value shown in the DIFF column is not updated.
For step 5, after the last calibration value (50-60V) has been accomplished, one must hit the Navigator B button one more time. The banner at the top of the display will ask the user to hit the square button to save the update.
The ranges are labelled in volts per div, so the 50 mV range is 50 mV per division. You would want to calibrate at mid scale or above. Perhaps 3 to 5 divisions would be mid scale; in this case use 3 to 5 times 50 mV, i.e. 150-250 mV. This is just a general rule. Hope it helps you.
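That 3-to-5-division rule of thumb is easy to tabulate. The range list below is just my guess at typical volts-per-division settings, so substitute your unit’s actual ranges:

```python
def cal_window_mv(vdiv_mv, low_divs=3, high_divs=5):
    """Suggested reference-voltage window for one range, in mV."""
    return low_divs * vdiv_mv, high_divs * vdiv_mv

# Illustrative volts-per-division settings (check your own unit):
for vdiv in (50, 100, 200, 500, 1000, 2000, 5000, 10000):
    lo, hi = cal_window_mv(vdiv)
    print(f"{vdiv} mV/div: calibrate with roughly {lo}-{hi} mV")
```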
The latest APP v2.45b displays recommended values for each scale as you move from one scale to another during the calibration process. Those recommended values tend to be around 5 to 6 x the volts per division for that scale.
My calibration results were less than satisfactory. Apparently the APP interpolation algorithm needs improvement.
And, by the way, the first post above is outdated. The APP v2.45b automatically calibrates the zero voltages. The cursor cannot be moved into the diff field. Instead you adjust the displayed voltage in the voltage field.
You use an adjustable power supply and set the voltage with a digital voltmeter. Also, if you have another oscilloscope, sometimes they have a built-in calibrator that may have the required voltage. For example, I have a TEK 7834 that has selectable square waves of 40 mV, 400 mV, and 4 V peak-to-peak.
I hate to admit it, but I used a mostly dead battery that was lying on my desk. I connected a voltmeter and set the DSO to whatever the actual output was. Otherwise, you could put two potentiometers (variable resistors) in series with a battery, with both pots set to their highest resistance and a voltmeter across one pot. Adjust that pot until the voltmeter reads somewhere in the calibration range, and set the DSO accordingly. Your resistances are going to be low, so expect to have to adjust the other pot down as well, but you could burn out a pot if you turn both to zero. Just a hillbilly field fix.
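The two-pot arrangement above is just a voltage divider. A quick sketch of the math, where the component values are made-up examples and probe/meter loading is ignored:

```python
def divider_out(v_batt, r_top, r_bottom):
    # Classic voltage divider: battery across two series resistances,
    # output taken across the bottom one. Ignores meter/probe loading.
    return v_batt * r_bottom / (r_top + r_bottom)

# e.g. a tired 1.3 V cell with 10k on top and 2.2k on the bottom
v = divider_out(1.3, 10_000, 2_200)
print(f"{v * 1000:.0f} mV")
```

Turning either pot shifts the ratio, which is why you can dial in anywhere inside the calibration window with a voltmeter watching the output.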