Yes, it works now, congratulations and thank you so much!
There is still a problem with the display of channels A and B (but no crash). Both channels are grounded, and the problem occurs with the short as well as the long buffer:
In AUTO and SCAN modes, I get huge offsets on A (e.g. 6 V on a 2V/div scale).
In SCAN with the short buffer, it sometimes refreshes the screen several times per second on a 1 s time base (!) and triggers on a 6 V square pulse (at 2V/div), one division long, synchronous on both channels, with grounded inputs…
Marco’s firmware does not show this behaviour.
Patrick
PS: when using the 10x mode, the readings on the right (Vrms and so on) are correctly multiplied by 10, but the voltage shown in the menu (second line) is not. E.g. with CH(A) / DC 2V, when I choose 10x(A) I would expect “DC 20V”, not “DC 2V”. Do you understand why?
I’m now seeing this behavior as well. Didn’t happen in the previous builds.
Tried running the calibration procedure and that seems to go well, with only minor offsets detected, so something’s wrong elsewhere in the code.
At least we are now seeing the same thing
Predictable/reproducible behavior is a great step forward
…now all it needs is a bit more love&care to squash some of those nasty bugs.
EDIT: Seems to be a question of the offsets being completely messed up.
Fed a 2 V square wave from the signal generator into CH A, and the signal itself is perfect at all scales…
Compiled with “-O0” this thing is just too big, and some variables end up pointing to illegal addresses.
I caught the channel A low-voltage correction array in that state; for some ranges it even holds fluctuating values.
I think an “-Os” compile is probably mandatory to avoid these problems, but unfortunately it’s not working for me.
I have been playing with the compiler settings with no success. The settings from jpa’s “Frequency response analyser” (CFLAGS=-Wall -Werror -mcpu=cortex-m3 -mthumb -mno-thumb-interwork -fno-common -O2 -g -std=gnu99 -I !STM_INC! -I !SRC_INC! -I !SRC_SRC!) don’t work any better than -O0: no crash, but totally unusable traces.
What I don’t understand is what’s behind those unpredictable offsets:
- Improperly initialized variables? (I learnt a long time ago that software where not every variable is properly initialized before use is scrap.) If so, shouldn’t it be enough to initialize them properly in the source code?
- Overlapping variables/arrays/stacks? Shouldn’t the compiler prevent that?
- Something else?
With all those experts who wrote in the forum last year, I can’t believe that nobody understands the system architecture, how the memory is organised and how the compiler works well enough to tell us which settings are appropriate!
Patrick
PS: does anybody know which compilation options Marco used? I know this is not the same compiler but maybe we could learn from them?
Re-did a complete code merge of marcosin into 2.51 by hand and got it to compile with -Os
Still a few problems with it, but they should be “fixable”.
Will post it ASAP.
…well, better than 1.4, but there is still a huge offset on CH A, as well as an incomprehensible Vpp reading with a DC signal. No more random square signals, though.
I still don’t understand what happens with this processor/compiler…
Why does merging marcosin’s source code into 2.51 make any difference compared to using marcosin’s complete source tree? Why does it now compile with -Os?
And why doesn’t the X-Y mode work? Just merging marcosin’s code doesn’t seem to me like a sufficient explanation…
Yes, you probably had the calibration settings messed up by the previous release.
Marco’s code doesn’t compile directly in gcc. There are a few differences between IAR and gcc source, like some datatypes.
The last version was more or less a straight port of Marco’s code to gcc, but it had issues, like you saw. (and something in the code didn’t “like” -Os).
Merging his code over the stable gcc 2.51 code I had actually involved more work/time than the previous attempt, but I knew I could control each step of the process to determine any incompatibilities with -Os.
Don’t know yet what the problem with the X-Y part is, but it shouldn’t be hard to fix. (We’ve all heard that before…)
I have a version ready that already fixes the max, min and vpp readings, but will try to fix X-Y also before release.
…by the way, I have just noticed that this firmware has been available since September 2011 (beta) and December 2011 (v1.00) on the Chinese forum for the DSO203 (same as the DSO Quad):
I was looking for a portable, low-cost oscilloscope and found the DSO203. It looks like a good piece of equipment, but searching a bit I see there are quite a few bugs, and there is no real official change/fix log, so it’s hard to tell which of those have been fixed.
Triggering is, I would say, a bit unreliable. I wouldn’t say it’s useless, just unreliable.
It doesn’t work consistently in all situations. That’s about it.
It works for me, in most situations.
The triggering is done in the FPGA, and there’s currently no way to rebuild that code without purchasing a licence, and that I would say is my biggest gripe about it.
I find it sadder that the analog bandwidth has been “strangled” by some bad hardware design choices. The digital processing side is actually superior, but you can’t take advantage of it.
ps: This build has correct frequency units, yes.
The official one (still) doesn’t.
ps2: In the end, it’s a (very nice) expensive toy, yes.
You can’t compare it to a professional dso, but then again, how many good dso’s do you know of this size?
Thanks for your contributions. Last night I upgraded the firmware with the latest SYS and FPGA images. The only thing to watch out for is to download the app as a zip, otherwise you get an error. I haven’t had a chance to do any measurements yet, but the upgrade worked perfectly.
Regarding the FPGA compiler licence: how much does it cost? The thing is, for other open-source software the tools are also free, so how would people deal with open FPGA source?
I am not sure if the community is willing to contribute to buy a license, but it is something to think about.