DSO203 GCC APP - Community Edition (2.51+SmTech1.8+Fixes)

Hello all. I admire Wildcat and the others for the great job. The little DSO has now turned into a very powerful tool.



My question - can you explain step by step how to adjust the horizontal thickness? I read the guide and tried many combinations with no success. The only thing that shows up is ADCoffset.

Take your time Wildcat! It’s not like there is a deadline or something :p Either way, if you understand the topic it is easy to find everything! I don’t even have an actual oscilloscope. On that topic, what is the best and proper way to calibrate the software and hardware? I have a computer power supply that I converted to a bench power supply, but it is not very accurate, although I have a nice boost converter. Can someone tell me the proper steps to calibrate with these limitations?

This is adjusted while in the display brightness menu, which is part of the meters. You need to have the small meters engaged to see it (engage by short pressing button 3 until the small meters appear). You can then use the right toggle to move the focus until B.L. blinks. With this blinking, press the right toggle center button, and the thickness can be adjusted with the left toggle; press again to return to brightness adjust. The display can be observed while changing. Finally, save your selection with config 0.



This is in the user’s guide under button functions, option for right toggle center button short press:

WITH MENU ON DISPLAY BRIGHT ADJUST: TOGGLES WAVEFORM HORIZONTAL THICKNESS ADJUST

(Change with LEFT TOGGLE)

Well, you need a clean, stable, finely adjustable and monitored DC source with a wide range. The problem with using switched-mode power supplies is that they very often have a lot of noise. A better solution would be to use batteries (for example 1 to 3 nine volt batteries in series). Batteries have no noise, and also benefit from not being connected to the AC supply, which greatly reduces RF interference pickup from nearby equipment.



Next, you need to have a way of adjusting the output with good precision. The best way to do this, regardless of whether the source is a power supply or batteries, is with some resistors and a pot. A 10 turn pot will work great if available. Use the resistors in series with the pot, and also from the pot to ground, to reduce the voltage range to a small value and improve resolution. Change resistors to get the various values needed for the different ranges. Monitor the voltage with a digital multimeter while adjusting the pot to get the values needed for each range. Then adjust the Quad to display exactly the same reading as on the DMM for each range.
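To see why padding the pot with series resistors helps, here is a minimal sketch of the divider arithmetic. The part values and the C helper are hypothetical, purely for illustration, not a recommended calibration circuit:

/* Hypothetical series-resistor + pot divider feeding the Quad input.
   Shows how the padding resistors shrink the adjustable span so a
   10 turn pot gives millivolt-level resolution. */
#include <stdio.h>

int main(void)
{
    double v_in    = 9.0;      /* one 9 V battery                      */
    double r_top   = 82000.0;  /* series resistor above the pot (ohms) */
    double r_pot   = 10000.0;  /* 10 turn pot                          */
    double r_bot   = 4700.0;   /* resistor from the pot to ground      */
    double r_total = r_top + r_pot + r_bot;

    /* Output taken from the pot wiper (assuming a high impedance load). */
    double v_min = v_in * r_bot / r_total;
    double v_max = v_in * (r_bot + r_pot) / r_total;

    printf("adjustable span: %.3f V to %.3f V\n", v_min, v_max);
    printf("per turn of a 10 turn pot: about %.0f mV\n",
           (v_max - v_min) / 10.0 * 1000.0);
    return 0;
}

With these made-up values the whole pot only covers about 0.9 V, so each turn moves the output by well under 100 mV, which is the kind of resolution you want while watching the DMM.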



If doing this twice to enable supply voltage fluctuation compensation, the low settings should be done with the Quad battery as low as possible without risking shutdown. The high settings should be done with a charger connected. Some chargers may provide more current than others; it helps if the battery voltage displayed is as high as possible, say around 4.5V (this is actually the system voltage and not the battery voltage when a charger is connected). Also, chargers vary greatly as to noise output, so it may be useful to check how much noise is induced (at low V/Div settings) prior to using one for calibration.



Finally, once a good calibration is achieved, save a backup of the WPT config file. This can be reloaded later if ever needed, and any other saved config merged back with the calibration by loading and saving as #0. While all this is a tedious pain to do, it only needs to ever be done once for the life of the device.

Thanks a lot Wildcat. Still, I’m limited to my multimeter’s accuracy this way… and I wouldn’t say I have even a good entry-level multimeter (although I will be buying a good one in the near future). After this I have to do the hardware calibration, I guess, to minimize overshoot and rise time. Or should I do this first? Now I remember something about calibrating at 50V not being possible due to the clamping diodes of the HW v2.81!?!

Unless you have a REAL crappy multimeter (for example with limited ranges), it’s probably accurate enough. The 8 bit readings in the Quad do not offer great accuracy, even when compared with cheap DMM’s. That’s why it’s important to calibrate it to get the most out of it. What the Quad voltage meters lack in accuracy they make up for in bandwidth and the ability to measure complex waveforms, while generally providing enough accuracy for most purposes.



Hardware calibration (high frequency compensation) is not in any way related to the DC software calibration. Doesn’t matter which one you do first.



If your device still has the clamping diodes at the analog inputs, I would strongly recommend removing these. Not only will they limit the voltage applied to the inputs, but if what you measure has a low enough impedance (it very often does), the diodes can short out and result in a dead channel, or possibly even blow a circuit trace and/or damage the circuitry you are measuring. The HW 2.81 device I purchased from SEEED had these already removed when I got it, presumably at SEEED’s request to their supplier. It did, however, still have them at the digital inputs.



I could go into a very long, drawn out discussion about the input circuitry of the Quad. I don’t wish to do that at this time, suffice it to say that clamping diodes at the input, without any resistive buffering is NOT a good idea…

Hello,



First of all, many thanks to Wildcat for maintaining the software. It is much better than the original one! I just bought a DSO203. The original software is quite buggy, so I searched for an alternative and found yours. It works great and I have not found any bugs so far! However, for the moment I’m using V5.0 and the original FPGA.

This is typical when you make changes inside the FPGA. I assume that the timing was already critical before you did the changes. If your changes add some additional delay, this can happen.



Now if the output of the ADC changes, for example, from 1000 0000 to 0111 1111 (128 to 127) and bit 5 is a bit too slow, the sampled value can be 0101 1111 (95). By heating or cooling the ADC, these delays change and the effect can get better or worse. This is the same if you change the ADC.
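As a small illustration of that arithmetic (a hypothetical C sketch, not FPGA code), latching one stale bit turns a 128 → 127 transition into a reading of 95:

/* If one data bit arrives late, the FPGA latches that bit from the previous
   word and the rest from the new word: 127 with a stale bit 5 becomes 95. */
#include <stdio.h>

static unsigned sample_with_slow_bit(unsigned prev, unsigned next, int slow_bit)
{
    unsigned mask = 1u << slow_bit;
    return (next & ~mask) | (prev & mask);
}

int main(void)
{
    unsigned prev = 128;  /* 1000 0000 */
    unsigned next = 127;  /* 0111 1111 */
    printf("latched value: %u\n", sample_with_slow_bit(prev, next, 5));  /* 95 */
    return 0;
}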



I just reverse engineered the FPGA code a bit. MCI seems to be the master clock from the CPU (72MHz). The ADC clocks are given by: assign CKA=MCI;

assign CKB= (Ctrl_Link[1]) ? !MCI : MCI;



In the DP_RAM, the ADC Data is sampled on the rising edge of MCI.



This all seems fine, but assign CKB= (Ctrl_Link[1]) ? !MCI : MCI; adds some additional delay on CKB. Just give it a try and use:

assign CKA= (Ctrl_Link[1]) ? !MCI : MCI;

to have the same delay on CKA. I looked through the code and it seems Ctrl_Link[1] is always zero.



The code integrates a mux, which gives additional delay. It is possible that all the delays are so large that the FPGA samples the signal one rising edge later, so adding more delay could fix the bug.



It might also be that CHA and CHB are exchanged somewhere by accident, so trying:

assign CKB=MCI;

removes the delay on CHB which could be CHA… :wink:



Normally, such timing issues are checked and fixed if possible by the FPGA tool, but the current SDC file has no information about the external delay of the ADC, so the tool assumes zero delay. And it might work or not by chance and temperature! :wink:



By the way: Which tool are you using to synthesize the FPGA? I could improve the SDC file and do some FPGA simulations to see if there is really a problem. And which compiler are you using for the C code?



Regards,

Hennes

This is what I originally assumed before doing countless builds, however the problem has proven frustratingly difficult to pinpoint.

I did try that; while it may have changed things a bit (just about anything you changed affected it somewhat), it did not fix the problem. The reason for the Ctrl_Link conditional is to provide support for 144MHz sampling, which requires the ADC to be set up with out-of-phase clocks.

Some of the early builds in trying to solve this looked as if this could have been the case, as post synthesis timing showed around a 1 MCI clock period shortfall. However, other compiles with everything stripped down to keep timings in line still did not solve the problem.

I didn’t add delay with the constraints file, but after observing that reading on the falling rather than the rising edge made no difference whatsoever to the noise, I kind of dismissed the possibility of input timing issues. Still, it wouldn’t hurt to try; it would be great if such a simple fix worked…

I’m using Lattice’s ICEcube2, release 2015.04.27409, with the Synopsys synthesizer, and GCC 4.6.1 from CodeSourcery for the C compiler (CodeBench Lite).



I spent a good amount of time trying to solve this problem before changing the ADC chip. I eventually came up with some builds that actually worked pretty well, but while some minimized the noise to barely noticeable levels, they never completely eliminated it.



Here are some notes from my experience with this issue that may be of help:



First of all, the issue was confirmed to be coming from the ADC by swapping the A ch up to bits 8-15 and the B ch down to 0-7 at the input ports of the FPGA. This caused the problem to now display in the B channel (the A ch from the ADC was now being sent to the B display). The noise was not changed at all; it was exactly the same.



The problem seems to be intermittent in nature, in other words it’s not a continuous, every-cycle event but a random occasional blip. Of course, if something is “on the edge” it can act that way, but it’s unusually persistent, occurring to a varying extent at all clock speeds and with a wide variety of completely different FPGA configurations.



Most of the builds I tried did not show the effect at all in normal modes, due to its intermittent nature. However, full speed mode stores any event like this. At slow timebases, the cumulative effect of having, say, 60,000 samples taken, stored and displayed makes it a near certainty that at least one event is captured for each displayed sample, resulting in what looks like a continuous string of noise.
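The arithmetic behind that “near certainty” is simple; here is a rough sketch (the per-sample glitch probability is a made-up figure purely for illustration):

/* Even a rare glitch becomes almost certain to appear when tens of thousands
   of samples are accumulated per displayed point: P = 1 - (1 - p)^n. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double p_glitch  = 1e-4;    /* hypothetical chance of a glitch per sample */
    double n_samples = 60000.0; /* samples accumulated per displayed point    */

    double p_at_least_one = 1.0 - pow(1.0 - p_glitch, n_samples);
    printf("P(at least one glitch per displayed point) = %.4f\n",
           p_at_least_one);     /* about 0.998 */
    return 0;
}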



While some bit toggling at various levels could be observed with some configurations, and these would respond to changes in timing, as well as diminish and eventually disappear when reducing the clock speed (keep in mind that in normal modes the timebase sets the clock speed; it can also be adjusted in full speed mode), there was one “level” at which the noise occurred that did NOT respond to reducing the clock speed and proved frustratingly persistent.



Many configurations were tried, with different functions clocking on different edges. For example, reading the input on the falling edge rather than the rising edge did not have any effect, with the exact same particular bits in the ADC ch A behaving in the exact same way.



The one thing that seemed to cause the most problems was increasing XTthreshold from 16 bits to 32 bits to account for the faster clock used with full speed mode at slow timebases. This seemed to come from the increased complexity of having to compare several 32 bit registers, as opposed to comparing 16 bit regs.



The final FPGA I published is NOT optimized to minimize this problem, but rather optimized to function properly. Final timing analysis for that one looks good, the only exceptions being with some inter-clock relationships with unspecified constraints between the main clocks and program/FPGA control transfers that occur only occasionally, too seldom to be an issue. Many builds were found to almost eliminate the noise, but frustratingly needed a compromise somewhere (most notably in the time based triggering function which needed to be changed to 32 bits) and for that reason could not be used.



None of this made any sense to me. At first, I was sure the issue was timing but no matter what I tried, the results were always random and unpredictable.



After the ADC was changed, none of the builds showed ANY trace of this whatsoever.



At this point I’m leaning more towards some hardware issue being the culprit, possibly the added gate count in the FPGA causing increased noise on the supply lines that the ADC can’t cope with, or something like that.



I hope you can find something I overlooked that can solve the problems using this with the ADC’s that come with the units. It also appears that some units seem to work just fine… At this point, though, I have no way to reproduce the issue for testing as my unit no longer has the problem, so it’s up to you if you wish to pursue it.



Thanks for taking an interest in this, I believe this is a potentially very useful mode and well worth the effort to make it right. Let me know if I can help in any way.



By the way, you wouldn’t by any chance have the means to program the ICE65 chips used in the earlier devices? These were made by Silicon Blue which was bought by Lattice but they do not appear to want to support the old chips. Although the 65 version can be selected, the compiler comes back with a missing DEV file. An earlier version of ICEcube from Silicon Blue only has Synopsys available for synthesizing and will not take the current license available from Lattice.

I see you did a lot of tests!


I was thinking this could be used to double the sample rate by combining the two channels. Is this supported by the software and by the bandwidth of the analog input?



I’m still waiting to be able to download ICEcube2… I hate these stupid sites where you need to register, then wait… :evil:



I see, you did a lot of testing…

So as you describe, it might also be possible that this problem exists with the original FPGA image too, but there is no way to see it because not enough samples are displayed to catch it in a reasonable time?


What do you mean by “level”? At which clock speeds did the problem disappear?



It might also really be the ADC. The HWD9288 is just a cheap copy of the AD9288; they even copied the data sheet images! :shock: The FPGA gates run on 1.2V, but this supply is generated from the 2.8V rail that the CPU and the ADC run on. I’ve also seen that the ADC is specified to run from 2.7 to 3.6V, so the supply is just 100mV more than required. This might be a problem, especially with a switched supply, which is possibly slow and not so stable.



No, I don’t have anything for the ICE65 chips. At the company, we use Altera FPGA’s. Maybe you could just ask Lattice support for a license for the old tool?

It is really crappy for accuracy and precision, but really good for 8 GBP :p No auto-range, no lead compensation, only 2000 counts I believe. The next one I’m buying is going to be about 100 GBP, 6000 counts minimum (looking at Brymen ones). I understand about the ADC; with only 8 bits you can’t do much accuracy-wise :p



I checked the hardware, and the clamping diodes are missing, which is good. I didn’t even need to take the shield off, just lift it up a bit and I could just barely check. Maybe all HW v2.81 units have them removed, but who knows.



I managed to do a calibration. Not the best way though; I’ll probably re-do it. I used 2 18650 batteries in series, which went to a power module with selectable 5V and 3.3V (on-board regulators), and used two variable resistors as potential dividers, with the output of one connected to the input of the other so I could make finer adjustments. It was really hard to pinpoint the ADC to the center at low voltage ranges, probably due to noise from the power module regulators (dumb me ;p). For the higher ranges I used a power supply connected to a boost converter to go up to 35V. For the last range I couldn’t generate a high enough voltage, so I just grounded the probe (I’m not sure if this is the correct thing to do).



Also, every time I save a calibration for low battery and full battery, does it overwrite the old one automatically? Or do I need to delete the old calibration files manually?

144M sampling is not currently implemented in the software. It’s supported in all FPGA versions, even very early ones. I set it up not too long ago to see how well it worked. It works, but is extremely complex and tedious to set up, with special timebases, both time and level interlacing compensation needed, and the meters needing to be adapted to it. It is only of any benefit at the 0.1uS/div timebase, since at 0.2 you need to halve the sampling rate, so you might as well just use a non-interleaved mode. The display is already pretty good at the ~10MHz max anyway; not much to gain from such an elaborate function in my opinion.
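For anyone wondering what the interleaving amounts to in principle, here is a minimal sketch (hypothetical names; the real firmware additionally has to correct the time and level skew between the two halves, which is where the tedium comes from):

/* With the two ADC halves clocked on opposite edges of the 72MHz clock,
   channel A holds the even samples and channel B the odd ones; the merged
   144MS/s record simply alternates between the two buffers. */
#include <stdint.h>
#include <stddef.h>

void interleave(const uint8_t *ch_a, const uint8_t *ch_b,
                uint8_t *out, size_t n_per_channel)
{
    for (size_t i = 0; i < n_per_channel; i++) {
        out[2 * i]     = ch_a[i];  /* rising-edge sample  */
        out[2 * i + 1] = ch_b[i];  /* falling-edge sample */
    }
}

int main(void)
{
    uint8_t a[4] = {10, 20, 30, 40};   /* even (rising-edge) samples */
    uint8_t b[4] = {15, 25, 35, 45};   /* odd (falling-edge) samples */
    uint8_t merged[8];
    interleave(a, b, merged, 4);       /* merged = 10,15,20,25,...   */
    return 0;
}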



The noise problem may have been there with previous FPGA’s, but the added gate functions, particularly the 32 bit compares for the time triggering, make it much worse. Plus, the nature of full speed sampling “records” every little blip, so it really shows up with that.

By a certain “level” I mean a level shift on the vertical display where some more significant bit takes over the lesser ones (e.g. a shift from b01111 to b10000). There was one such noise “spot” about 3/4 of the way up (actually this is 1/4 of the way up, since the FPGA inverts the input from the ADC) that acted differently: it seemed to always be there, even with the sampling rate reduced way down, while if a particular configuration otherwise created a lot of noise (at other “levels”), those would generally disappear below 18 or 9 MS/sec.
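To make the display-versus-ADC relationship concrete, here is a small hypothetical sketch, assuming a straightforward inversion of the 8 bit code as described above:

/* A spot 3/4 of the way up the display corresponds to an ADC code only 1/4
   of the way up its range because of the inversion, and that code sits right
   at a transition where many bits flip at once (0011 1111 -> 0100 0000),
   exactly the kind of boundary where one slow bit gives a large error. */
#include <stdio.h>

int main(void)
{
    unsigned display_code = 192;                /* roughly 3/4 of 0..255 */
    unsigned adc_code     = 255 - display_code; /* inverted -> 63 (0x3F) */

    printf("display code %u corresponds to ADC code %u (0x%02X)\n",
           display_code, adc_code, adc_code);
    return 0;
}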



As far as the HWD9288 is concerned, I had to replace another one of those a couple of years back on another device that produced nothing but garbage on one channel at the 2 fastest timebases. And yes, I saw the photocopied data sheets… Pretty blatant of them!

You don’t need to delete the old ones, the new values overwrite the old ones.

Thanks for an amazing firmware, Wildcat. I’m mostly a hobbyist and this is my first feature-rich scope. I’m learning a lot thanks to your efforts!



Here’s a feature request I’ve been thinking about: what if there was a mode where the scope boots in the same condition it was in when it was turned off? (Technically, the config 0 file would be saved whenever parameters were adjusted, perhaps after a timeout to avoid excessive writes.)



I try to remember to save my config before I power-off but sometimes I forget and spend a lot of time trying to restore whatever settings I had before. Thanks for reading -Zach

Well, I’ve done that myself on more than one occasion. If I can come up with a simple way of monitoring the parameters that doesn’t use a lot of memory, I might do that. Working with only 48 KILObytes of RAM for the entire program, things have to be done in a very efficient way. There’s very little memory left to spare…



Will be a while before another update though, I have little free time right now. Also waiting to see how the new FPGA I posted works out.

Thanks for considering the request, Wildcat. I understand that there’s no timeline for features like these. Take care.

I’m sorry to bother you. I’m sure the information I seek is somewhere in here, but my eyes and head hurt after trying to scan a lot of the 60 pages here.



Mine is HW version 2.72, System version 1.60, and it says Device Firmware Upgrade V3.12C. I get no end of errors when trying to use it.



I was hoping to get it working, but it is unclear to me what works with it. I had tried Gabonator and that didn’t work; I saw later that it doesn’t work with my HW version. I put Version 1.60 back on it, and now I get error messages many times when I press a button.



I was hoping to install Wildcat’s firmware. But it is not clear to me which one I can use. I see 5.1 has FPGA files with a ReadMe that says they don’t work with earlier HW versions like mine. But does that mean 5.1 won’t work for me, or only that I can use it but should not install the FPGA files?



Thank you for any help.

5.1 should work fine. It’s just the FPGA that can only be installed on HW 2.81. There will be a couple of functions, such as full speed sampling, that will be missing, but everything else should work OK.

Hi, Wildcat,



Just installed 5.1 on my 2.81 DSO. It works without any problems on my device.



Thanks!

Thanks, Wildcat!



Now I have to figure out why my 2.72 Quad won’t stay connected…

Thanks for the feedback!



Hope to hear from more users, curious to know how many have the problem vs how many don’t.