I have discovered that, starting from 2µs/div, when one of channels A or B is hidden, the other one uses both channels of the ADC to sample the input. But due to calibration problems the signal is worse.
If both channels are enabled, the trace looks better.
The problem is mainly due to offset and gain difference between the two channels. This can be made visible this way:
disable one channel
make an acquisition
stop the acquisition (the display shows the first view in that post)
while in hold mode, go to the hidden channel and make it visible.
you get this:
You can see the slight offset and the gain difference between the two channels.
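For what it's worth, here is a minimal sketch (plain C, nothing to do with the actual firmware; the gain and offset numbers are made up, roughly matching the percent errors reported further down in this thread) of why the interleaved trace gets worse when the two ADC channels don't share the same calibration:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical mismatch: channel B reads ~12% high with a small offset. */
    const double gain_a = 1.00, zero_a = 0.0;
    const double gain_b = 1.12, zero_b = 3.0;

    /* Interleaved sampling: even samples from ADC A, odd samples from ADC B.
     * A flat 100 mV input comes out as a zig-zag. */
    for (int i = 0; i < 8; i++) {
        double gain = (i % 2 == 0) ? gain_a : gain_b;
        double zero = (i % 2 == 0) ? zero_a : zero_b;
        printf("sample %d: %.1f mV\n", i, 100.0 * gain + zero);
    }
    return 0;
}
```

The printed samples alternate between about 100 and 115 mV, which is exactly the sawtooth-looking artifact you see on a DC level when only one channel is displayed.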
I tried to do a calibration, but I don't fully understand how it works.
For the first column (Zero) it is easy: you short-circuit the probe and, by toggling the A navigator, you zero the display.
For the second column I suppose it works the same way, but I am not sure, and I don't understand what it is used for.
For the third one, as far as I understand, it is only a display of the voltage you apply at the input.
In the third column (or the fifth), when there is a difference between what the Quad and the voltmeter display, there is no way to correct the Quad. I thought the calibration was a two-point calibration (offset and gain), but I don't understand how to set the gain.
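If it really is a two-point calibration, I would expect the math to look something like the sketch below. This is only my guess at what the columns are for (Zero = reading with the probe shorted, plus a known reference voltage to derive the gain); all names and numbers are mine, not the firmware's:

```c
#include <stdio.h>

typedef struct {
    int    raw_zero;   /* ADC count with input shorted (the Zero column?)  */
    int    raw_ref;    /* ADC count with the reference voltage applied     */
    double v_ref;      /* known reference voltage from a DMM, in mV        */
    double gain;       /* derived scale factor: mV per ADC count           */
} cal_point;

static void calibrate(cal_point *c)
{
    /* Gain follows from the two points; the offset is raw_zero itself. */
    c->gain = c->v_ref / (double)(c->raw_ref - c->raw_zero);
}

static double convert(const cal_point *c, int raw)
{
    return (raw - c->raw_zero) * c->gain;
}

int main(void)
{
    cal_point ch_a = { .raw_zero = 12, .raw_ref = 212, .v_ref = 250.0 };
    calibrate(&ch_a);
    printf("raw=112 -> %.1f mV\n", convert(&ch_a, 112));  /* prints 125.0 */
    return 0;
}
```

With that model, shorting the probe fixes the offset and one known voltage fixes the gain, which is why I would expect a way to enter the reference value somewhere.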
On my Quad, channel A shows a few percent difference between the Quad and the voltmeter, so it is of nearly no importance. But channel B shows between 10 and 15 percent difference. This could explain the problem I have when using the two ADC channels to sample a single input.
I also noticed similar behavior.
The other interesting thing I noticed is that after, say, channel A is calibrated, if I move the zero cross up or down, the unit no longer holds its calibration and magically shows average and RMS values different from 0 with the probe shorted. The only way to fix that appears to be recalibration. I smell a software bug, but I may be wrong.
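If it is a software bug, one way it could happen is sketched below: the zero calibration stored as an absolute screen position instead of relative to the trace position, so that moving ypos afterwards shifts every reading. This is purely hypothetical; I have not looked at the source:

```c
#include <stdio.h>

static int ypos     = 120;  /* vertical trace position at calibration time */
static int cal_zero = 120;  /* stored as an absolute position (the bug)    */

static int measure(int raw_sample)
{
    /* Correct behaviour would subtract a value tied to the current ypos;
     * subtracting a fixed cal_zero means the result tracks any later
     * change to the trace position. */
    return (raw_sample + ypos) - cal_zero;
}

int main(void)
{
    printf("shorted probe, ypos unchanged: %d\n", measure(0));  /* 0  */
    ypos = 130;  /* user moves the zero cross up by 10 pixels */
    printf("shorted probe, ypos moved:     %d\n", measure(0));  /* 10 */
    return 0;
}
```

That would explain nonzero average and RMS with a shorted probe, and why recalibrating (which re-captures the current position) "fixes" it.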
The calibration process needs a lot of explanation and documentation.
There is no clear explanation of what the three tables are or what the end user is required to do to calibrate the unit.
After editing Row 1 of Chn A, it jumps to Chn B (no big deal, I can live with that), then it jumps back to Row 1 of Chn A. Wouldn't this mess up the cal that was done before?
So, I hope the developer can get the calibration manual up soon!
Could Seeed please post a more coherent guide to calibration? We could probably figure this out ourselves if we were told what each field represents (Zero, Diff, Votage). I appreciate your efforts, but the section of the manual on calibration is not helpful. We can't help test these devices (which is what we spent over $100 on beta hardware for) if we can't calibrate them.
This is also disappointing because I was hoping to have some fun with my new DSO.
Yes, SeeedStudio or someone, please post a clear, concise, step-by-step procedure for calibration (or better, more automated firmware).
I think I have totally miscalibrated my unit by trying to guess my way through this procedure. Very annoying. I’m just going to put it aside until I get better documentation. Don’t want to break it.
The 0.91b manual is not helpful; it’s almost as if it was written for another version of the calibration code. No mention of three fields or what they represent. NOT good.
Oh, and BTW, before anyone does the probe compensation procedure, I would suggest hooking the Quad analog inputs up to another lab calibration source. I thought mine needed adjustment, but I hooked the unit up to a Tek 2430A calibration square wave only to find that the probes/Quad needed NO calibration: an almost perfect square wave, some noise, but no overshoot whatsoever. So the problem is not compensation of the probes.
Then I connected the DSO Wave Out to my Tek scope: "nearly" zero overshoot. Could be better, but not nearly as bad as with the DSO analog in -> Wave Out. The issue is an interaction between the analog in and Wave Out. Sounds like a hardware issue.
I gave up on the user manual and inspected the code instead. I think I understand how the calibration is supposed to work, and can provide details if anyone is interested, but my short answer would be don’t bother.
There is a typo in the calibration code that makes it impossible to calibrate channel B, and unless I am completely off track (which is always possible) there appears to be a dependency between reported signal levels and ypos, which means that the calibration is only valid for one specific ypos anyway.
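To give a feel for what such a typo can look like, here is a reconstruction of the symptom, not a quote from the actual source; the names and array layout are invented:

```c
#include <stdio.h>

#define CH_A 0
#define CH_B 1

static int cal_zero[2][8];  /* [channel][voltage range] */

static void set_zero(int ch, int range, int value)
{
    if (ch == CH_A)
        cal_zero[CH_A][range] = value;
    else
        cal_zero[CH_A][range] = value;  /* typo: should be CH_B */
}

int main(void)
{
    set_zero(CH_B, 0, 42);
    /* Channel B's entry is untouched; channel A's was silently clobbered. */
    printf("A: %d, B: %d\n", cal_zero[CH_A][0], cal_zero[CH_B][0]);
    return 0;
}
```

A copy-paste slip like this would match what people report: adjusting channel B has no effect (or the wrong effect), and it can corrupt a channel A calibration that was done earlier.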
I have also tried to calibrate, but I believe there is a bug. The way I understand it, you must calibrate each of the vertical scales: 50mV, 0.1V, 0.2V, 0.5V, 1V, 2V, 5V, 10V. First I connect the probe to ground and adjust to read 0.00. Then I connect the probe to the input voltage, for instance 250mV for the 50mV scale, which is the first row. I ignore Diff; it looks as if it is correlated with the "Votage" column. In the "Votage" column I then adjust until the voltage displayed is the same as on my multimeter. The problem is that there seems to be a bug: I can adjust the voltage lower but not higher. Sometimes it works and goes higher, but other times it will go lower but not higher.
The interface is not very good because if you then go back to the first cell in the row, the previous value is overwritten, so you have to be careful and move on to the next row.
This is how I think it works, but I might be wrong.
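Assuming that reading is right, the calibration data would amount to a table like the sketch below, one row per vertical scale with the three fields from the menu. The struct layout and field meanings are my interpretation, not the firmware's:

```c
#include <stdio.h>

typedef struct {
    short zero;    /* offset, set with the probe shorted                */
    short diff;    /* seems tied to the Votage column (purpose unclear) */
    short votage;  /* [sic] adjusted until the display matches a DMM    */
} cal_row;

/* One row for each vertical scale. */
static const char *ranges[8] =
    { "50mV", "0.1V", "0.2V", "0.5V", "1V", "2V", "5V", "10V" };

static cal_row cal_table[2][8];  /* [channel A=0 / B=1][range] */

int main(void)
{
    /* Example: pretend row 0 of channel A has been calibrated. */
    cal_table[0][0] = (cal_row){ .zero = 0, .diff = 0, .votage = 250 };

    for (int r = 0; r < 8; r++)
        printf("row %d (%s/div): zero=%d diff=%d votage=%d\n",
               r, ranges[r],
               cal_table[0][r].zero, cal_table[0][r].diff,
               cal_table[0][r].votage);
    return 0;
}
```

If that is the model, then each row is independent, and the 250mV reference for the 50mV/div row would need a suitable reference for every other range too, which is a lot of manual work without better documentation.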
I think you are right. That's the only way I could make any progress. But as soon as I tried the B channel, everything got all messed up. I had the voltage set at 260mV, and I couldn't adjust B above 216. Then when I finished and went back to the scope, the voltage I was reading on the B channel was over 400. I gave up and put the scope aside. Waste of time; the cal code is broken.
Another question: how do I set the unit back to defaults?