DSO203 GCC APP - Community Edition (2.51+SmTech1.8+Fixes)

Thanks for checking this other one. It does not appear to have the problem; I would check it with a full-scale waveform though, just to be sure it doesn't show up at the top or bottom. It doesn't matter what voltage scale you're on, it's all the same to the ADC: it basically sees what you see on the screen.

Wildcat,

I was wondering which AD9288 grade from Analog Devices you used as a replacement on your DSO. Was it another 40, or did you use a faster-rated chip like the 80 or 100? Are they all 48-pin LQFP packages?



Also, I found this photo of an older PCB with 2MB flash from an eBay seller. The photo seems to indicate an option for the AD9218 (10-bit A/D). Is the AD9218 pin-compatible with the AD9288 and supported by the DSO's firmware?

I used an 80 MS/s chip for the replacement. All speed versions are the same, just different grades. The 10-bit version is not compatible with the hardware; that's just advertising hype. They do make a pin-compatible 10-bit version, but the extra bit ports would go unused if it were installed in the Quad.



The speed rating of the chip, however, I don't believe is a factor in this, since the parasitic noise was still there even with the sampling rate brought down to 2 MS/s, though somewhat less prominent. I chose an 80M part for the replacement simply because it's proper engineering practice, but the 40M devices seem to work OK at 72M.



What I would be curious to know, though, is the difference, if any, between the ADC versions used in the two devices you tested. The schematic for HW 2.81 specifies an HWD9288 from Chengdu Sino (CSMT), which is what I had in my device. Another, earlier HW V2.70 device I have might also have had one of these, or possibly something from another source (can't remember), while the early V2.60 versions have Analog Devices chips.

Single Channel A. Looks good to me. Let me know if this is the sample you are looking for: the new DSO203 with the new FPGA chip, Wildcat 5.1 and the new FPGA 1.1.

[attachment=0]image.jpg[/attachment]

Thanks for checking this out. You should, however, engage the full speed oversampling mode (center-press the right toggle until the orange zig-zag line at the bottom turns blue). Also, a larger portion of the screen should be covered; you can use the 0.2V/div range, for example, and adjust the waveform for close to full scale.

I have expanded the waveforms and attached screenshots for both devices, each in two buffer modes.



I ordered the AD9288-80 from Digikey.com. When I find out, I'll let you know what chip is installed, but I expect both units have the same chip you found. It may be a couple of weeks before I do the transplant surgery.

Interesting, some units don’t show the problem at all. Thanks again.

Hello Wildcat. I did some testing to help improve the software; I don't seem to get the problems in full buffer mode:









The ambient temperature is about 20°C. I haven't done testing at lower temperatures.



Also in the manual you could add:

  • Save IMG in the File Functions list
  • Mention the colours (for example, blue in full buffer mode) under BUFFER SIZE, AVERAGING AND OVERSAMPLING MODES and under RIGHT TOGGLE CENTER PRESS - SHORT PRESS



Also, I'm not sure if you've seen one of my previous posts: the problem I had with installing your hex files from v4.x and upwards seems to be a hardware support issue/incompatibility with the USB controller on my specific laptop (HM55 chipset). It is not an OS problem after all; it seems to disconnect on large hex files. On other PCs I tested it on, it is fine. I apologize for bashing you in the past, but I don't think you can do anything about it; the only software fix would be for MiniDSO to address it in the DFU firmware.

No problem. Just glad you got things worked out.



Thanks for posting these results, nice to see more of these units operating properly.



I agree about the user guide. Things were added in a bit of a hurry; every time I add some items, the whole guide has to be re-paged and re-indexed. Once I feel the program is in a reasonable state of completion I would like to update the entire guide, perhaps with more screenshots demonstrating various functions. Pretty busy right now though; it will be a while before I have some free time…

Hello all. I admire Wildcat and the others for a great job. The little DSO has now turned into a very powerful tool.



My question: can you explain step by step how to adjust the horizontal thickness? I read the guide and tried many combinations with no success. The only thing that shows up is ADCoffset.

Take your time Wildcat! It's not like there is a deadline or something :p Either way, if you understand the topic it is easy to find everything! I don't even have an actual oscilloscope. On that topic, what is the best and proper way to calibrate the software and hardware? I have a computer power supply that I converted to a bench power supply, but it is not very accurate, although I have a nice boost converter. Can someone tell me the proper steps to calibrate with these limitations?

This is adjusted while in the display brightness menu, which is part of the meters. You need to have the small meters engaged to see it (engage by short-pressing button 3 until the small meters appear). You can then use the right toggle to move the focus until B.L. blinks. While it's blinking, press the right toggle center button, and the thickness can be adjusted with the left toggle; press again to return to brightness adjust. The display can be observed while changing it. Finally, save your selection with config 0.



This is in the user’s guide under button functions, option for right toggle center button short press:

WITH MENU ON DISPLAY BRIGHT ADJUST: TOGGLES WAVEFORM HORIZONTAL THICKNESS ADJUST

(Change with LEFT TOGGLE)

Well, you need a clean, stable, finely adjustable and monitored source of DC covering a wide range. The problem with using switched-mode power supplies is that they very often have a lot of noise. A better solution would be to use batteries (for example 1 to 3 nine-volt batteries in series). Batteries have no noise, and also benefit from not being connected to the AC supply, which greatly reduces RF interference pickup from nearby equipment.



Next, you need a way of adjusting the output with good precision. The best way to do this, regardless of whether the source is a power supply or batteries, is with some resistors and a pot. A 10-turn pot will work great if available. Use resistors in series with the pot, and also from the pot to ground, to reduce the voltage range to a small value and improve resolution. Change the resistors to get the various values needed for the different ranges. Monitor the voltage with a digital multimeter while adjusting the pot to get the values needed for each range. Then adjust the Quad to display exactly the same reading as on the DMM for each range.
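To give a feel for the resolution this buys (the values here are just an illustration, not a recommendation): say a 9V battery feeds 8.2k in series, then a 1k 10-turn pot, then 1k to ground, for 10.2k total. The wiper can then only swing between

9V x 1k/10.2k ≈ 0.88V and 9V x 2k/10.2k ≈ 1.76V

so the full ten turns cover less than a volt, under 0.1V per turn, which makes hitting an exact DMM reading easy. Scale the three resistors to move that window wherever a given range needs it.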



If doing this twice to enable supply voltage fluctuation compensation, the low settings should be done with the Quad battery as low as possible without risking shutdown. The high settings should be done with a charger connected. Some chargers may provide more current than others; it helps if the battery voltage displayed is as high as possible, say around 4.5V (this is actually the system voltage, not the battery voltage, when a charger is connected). Also, chargers will vary greatly as to noise output; it may be useful to check how much noise is induced (at low V/div settings) prior to using one for calibration.



Finally, once a good calibration is achieved, save a backup of the WPT config file. This can be reloaded later if ever needed, and any other saved config can be merged back with the calibration by loading it and saving it as #0. While all this is a tedious pain to do, it only ever needs to be done once for the life of the device.

Thanks a lot Wildcat. Still, I'm limited to my multimeter's accuracy this way… and I wouldn't say I have even a good entry-level multimeter (although I will be buying a good one in the near future). After this I have to do the hardware calibration, I guess, to minimize overshoot and rise time. Or should I do this first? Now I remember something about calibrating at 50V not being possible due to the clamping diodes of HW v2.81!?

Unless you have a REAL crappy multimeter (for example with limited ranges), it's probably accurate enough. The 8-bit readings in the Quad do not offer great accuracy, even when compared with cheap DMMs. That's why it's important to calibrate it, to get the most out of it. What the Quad voltage meters lack in accuracy they make up for in bandwidth and the ability to measure complex waveforms, while generally providing enough accuracy for most purposes.
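To put rough numbers on that (my own back-of-envelope figures): 8 bits gives 256 levels, so one LSB is 1/256 ≈ 0.4% of full scale, while even a cheap 2000-count DMM resolves 1/2000 = 0.05%. Just about any DMM in working order will out-resolve the Quad as a calibration reference.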



Hardware calibration (high frequency compensation) is not in any way related to the DC software calibration. Doesn’t matter which one you do first.



If your device still has the clamping diodes at the analog inputs, I would strongly recommend removing them. Not only will they limit the voltage applied to the inputs, but if what you measure is of a low enough impedance (it very often is), it can short the diodes and result in a dead channel, or even possibly blow a circuit trace and/or damage the circuitry you are measuring. The HW 2.81 device I purchased from SEEED had these already removed when I got it, presumably at SEEED's request to their supplier. It did, however, still have them at the digital inputs.



I could go into a very long, drawn out discussion about the input circuitry of the Quad. I don’t wish to do that at this time, suffice it to say that clamping diodes at the input, without any resistive buffering is NOT a good idea…

Hello,



First of all, many thanks to Wildcat for maintaining the software. It is much better than the original one! I just bought a DSO203. The original software is quite buggy, so I searched for an alternative and found yours. It works great and I have not found any bugs so far! For the moment, however, I'm using V5.0 and the original FPGA.

This is typical when you make changes inside the FPGA. I assume that the timing was already critical before you did the changes. If your changes add some additional delay, this can happen.



Now if the output of the ADC changes, for example from 1000 0000 to 0111 1111 (128 to 127), and bit 5 is a bit too slow, the sampled value can be 0101 1111 (95). By heating or cooling the ADC, these delays change and the effect can get better or worse. The same applies if you change the ADC.



I just reverse engineered the FPGA code a bit. MCI seems to be the master clock from the CPU (72MHz). The ADC clocks are given by: assign CKA=MCI;

assign CKB= (Ctrl_Link[1]) ? !MCI : MCI;



In the DP_RAM, the ADC Data is sampled on the rising edge of MCI.



This all seems fine, but assign CKB= (Ctrl_Link[1]) ? !MCI : MCI; adds some additional delay on CKB. Just give it a try and use:

assign CKA= (Ctrl_Link[1]) ? !MCI : MCI;

to have the same delay on CKA. I looked through the code and it seems Ctrl_Link[1] is always zero.
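Putting that together, this is what I mean (a sketch only: the module wrapper and port list are my assumption, the assigns are the point):

// Route both ADC clocks through an identical mux so CKA and CKB
// pick up the same mux/routing delay.
module adc_clocks (
    input        MCI,        // 72MHz master clock from the CPU
    input  [1:0] Ctrl_Link,
    output       CKA,        // channel A ADC clock
    output       CKB         // channel B ADC clock
);
    assign CKA = (Ctrl_Link[1]) ? !MCI : MCI;
    assign CKB = (Ctrl_Link[1]) ? !MCI : MCI;
endmodule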



The mux in that path adds some additional delay. It is possible that all the delays combined are so big that the FPGA samples the signal one rising edge later, so adding more delay could fix the bug.



It might also be that CHA and CHB are exchanged somewhere by accident, so trying:

assign CKB=MCI;

removes the delay on CHB, which could actually be CHA… :wink:



Normally, such timing issues are tested and fixed, if possible, by the FPGA tool, but the actual SDC file has no information about the external delay of the ADC, so it assumes zero delay. And then it might work or not, depending on chance and temperature! :wink:



By the way: Which tool are you using to synthesize the FPGA? I could improve the SDC file and do some FPGA simulations to see if there is really a problem. And which compiler are you using for the C code?



Regards,

Hennes

This is what I originally assumed before doing countless builds; however, the problem has proven frustratingly difficult to pinpoint.

I did try that; while it may have changed things a bit (just about anything you changed affected it somewhat), it did not fix the problem. The reason for the Ctrl_Link conditional is to provide support for 144MHz sampling, which requires the ADC to be set up with out-of-phase clocks.

Some of the early builds while trying to solve this looked as if that could have been the case, as post-synthesis timing showed around a one MCI clock period shortfall. However, other compiles with everything stripped down to keep timings in line still did not solve the problem.

I didn't add delay with the constraints file, but after observing that reading on the falling rather than the rising edge made no difference whatsoever to the noise, I pretty much dismissed the possibility of input timing issues. Still, it wouldn't hurt to try; it would be great if such a simple fix worked…

I'm using Lattice's iCEcube2, release 2015.04.27409, with the Synopsys synthesizer, and GCC 4.6.1 from CodeSourcery (CodeBench Lite) as the C compiler.



I spent a good amount of time trying to solve this problem before changing the ADC chip. I eventually came up with some builds that actually worked pretty well, but while some minimized the noise to barely noticeable levels, they never completely eliminated it.



Here are some notes from my experience with this issue that may be of help:



First of all, the issue was confirmed to be coming from the ADC by swapping the A channel up to bits 8-15 and the B channel down to 0-7 at the input ports of the FPGA. This caused the problem to now display on the B channel (the A channel from the ADC was now being sent to the B display). The noise was not changed at all; it was exactly the same.
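For anyone wanting to repeat that test, the swap amounts to something like this (a sketch; the bus and capture path names are made up, not from the actual source):

// Swap test: feed the ADC's A byte into the B capture path and vice
// versa. If the noise follows the ADC channel, it originates upstream
// of the FPGA.
module swap_test (
    input  [15:0] adc_bus,  // [7:0] = ch A, [15:8] = ch B at the FPGA pins
    output [7:0]  cap_a,    // data feeding the A capture path
    output [7:0]  cap_b     // data feeding the B capture path
);
    assign cap_a = adc_bus[15:8];  // B data now drives the A path
    assign cap_b = adc_bus[7:0];   // A data now drives the B path
endmodule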



The problem seems to be intermittent in nature; in other words, it's not a continuous, every-cycle event but a random, occasional blip. Of course, if something is "on the edge" it can act that way, but it's unusually persistent, occurring to a varying extent at all clock speeds and with a wide variety of completely different FPGA configurations.



Most of the builds I tried did not show the effect at all in normal modes, due to its intermittent nature. However, full speed mode stores any event like this. At slow timebases, the cumulative effect of having, say, 60,000 samples taken, stored and displayed results in near certainty of capturing at least one event for each displayed sample, which looks like a continuous string of noise.
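To put rough numbers on that (purely illustrative figures): if a glitch corrupts any given sample with probability p, the chance of at least one glitch among the N raw samples behind a displayed point is 1 - (1-p)^N. Even a tiny p = 0.0001 with N = 60,000 gives 1 - (1-0.0001)^60000 ≈ 99.8%, so virtually every displayed sample catches a blip.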



Some bit toggling at various levels could be observed with some configurations; these would respond to changes in timing, and would diminish and eventually disappear when reducing the clock speed (keep in mind that in normal modes the timebase sets the clock speed; it can also be adjusted in full speed mode). However, there was one "level" at which the noise occurred that did NOT respond to reducing the clock speed and proved frustratingly persistent.



Many configurations were tried, with different functions clocking on different edges. For example, reading the input on the falling edge rather than the rising edge did not have any effect, with the exact same particular bits in the ADC ch A behaving in the exact same way.



The one thing that seemed to cause the most problems was increasing XTthreshold from 16 bits to 32 bits to account for the faster clock used with full speed mode at slow timebases. This seemed to come from the increased complexity of having to compare several 32-bit registers instead of 16-bit ones.
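Just to illustrate why the wider compare hurts (a sketch with made-up names; only XTthreshold is from the actual code):

// A 32-bit magnitude compare synthesizes a carry/compare chain roughly
// twice as deep as the 16-bit one, eating into the clock period budget.
module time_trigger (
    input             clk,
    input      [31:0] sample_time,  // hypothetical free-running sample counter
    input      [31:0] XTthreshold,  // was 16 bits before the change
    output reg        time_trig
);
    always @(posedge clk)
        time_trig <= (sample_time >= XTthreshold);
endmodule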



The final FPGA I published is NOT optimized to minimize this problem, but rather optimized to function properly. The final timing analysis for that one looks good; the only exceptions are some inter-clock relationships with unspecified constraints, between the main clocks and program/FPGA control transfers, which occur too seldom to be an issue. Many builds were found to almost eliminate the noise, but frustratingly needed a compromise somewhere (most notably in the time-based triggering function, which needed to be changed to 32 bits) and for that reason could not be used.



None of this made any sense to me. At first I was sure the issue was timing, but no matter what I tried, the results were always random and unpredictable.



After the ADC was changed, none of the builds showed ANY trace of this whatsoever.



At this point I’m leaning more towards some hardware issue being the culprit, possibly the added gate count in the FPGA causing increased noise on the supply lines that the ADC can’t cope with, or something like that.



I hope you can find something I overlooked that can solve the problems with the ADCs that come with the units, though it also appears that some work just fine… At this point I have no way to reproduce the issue for testing, as my unit no longer has the problem, so it's up to you if you wish to pursue it.



Thanks for taking an interest in this, I believe this is a potentially very useful mode and well worth the effort to make it right. Let me know if I can help in any way.



By the way, you wouldn't by any chance have the means to program the ICE65 chips used in the earlier devices? These were made by SiliconBlue, which was bought by Lattice, but they do not appear to want to support the old chips. Although the 65 version can be selected, the compiler comes back with a missing DEV file. An earlier version of iCEcube from SiliconBlue only has Synopsys available for synthesizing and will not take the current license available from Lattice.

I see you did a lot of tests!


I was thinking this could be for double the sample rate using the two channels combined. Is this supported by the software and by the bandwidth of the analog input?



I'm still waiting to be able to download iCEcube2… I hate these stupid sites where you need to register, then wait… :evil:



I see, you did a lot of testing…

So, as you describe, it might also be possible that this problem exists with the original FPGA image, but there is no way to see it because you just don't display enough samples to catch it in a reasonable time?


What do you mean by "level"? At which clock speeds did the problem disappear?



It might also really be the ADC. The HWD9288 is just a cheap copy of the AD9288; they even copied the data sheet images! :shock: The FPGA gates run on 1.2V, but this supply is generated from the 2.8V rail that the CPU and the ADC run on. I've also seen that the ADC is specified from 2.7 to 3.6V, so the supply is just 100mV above the minimum. This might be a problem, especially on a switched supply, which is possibly slow and not so stable.



No, I don't have anything for the ICE65 chips. At my company we use Altera FPGAs. Maybe you could just ask Lattice support for a license for the old tool?

It is really crappy for accuracy and precision, but really good for 8 GBP :p No auto-range, no lead compensation, only 2000 counts I believe. The next one I'm buying is going to be about 100 GBP, 6000 counts minimum (looking at Brymen ones). I understand about the ADC; with only 8 bits you can't do much accuracy-wise :p



I checked the hardware; the clamping diodes are missing, which is good. I didn't even need to take the shield off, just lifted it up a bit and could just barely check. Maybe all HW v2.81 units have them removed, but who knows.



I managed to do a calibration. Not the best way though; I'll probably re-do it. I used two 18650 batteries in series feeding a power module with selectable 5V and 3.3V (on-board regulators), and used two variable resistors as potential dividers, with the output of one connected to the input of the other so I could make finer adjustments. It was really hard to pinpoint the ADC to the center at the low voltage ranges, probably due to noise from the power module regulators (dumb me ;p). For the higher ranges I used a power supply connected to a boost converter to go up to 35V. For the last range I couldn't generate a high enough voltage, so I just grounded the probe (I'm not sure if that's the correct thing to do).



Also, every time I save a calibration for low battery and full battery, does it overwrite the old one automatically? Or do I need to delete the old calibration files manually?

144M sampling is not currently implemented in the software. It's supported in all FPGA versions, even very early ones. I set it up not too long ago to see how well it worked. It works, but it is extremely complex and tedious to set up, with special timebases, compensation needed for both time and level interlacing, and the meters needing to be adapted to it. It is only of any benefit at the 0.1uS/div timebase, since at 0.2 you need to halve the sampling rate, so you might as well just use a non-interleaved mode. The display is already pretty good at the ~10MHz max anyway; not much to gain from such an elaborate function, in my opinion.
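For reference, the basic idea (my sketch, not the shipped FPGA code; the port names are assumed) is that with out-of-phase clocks the two ADC channels digitize the same input half a clock apart:

// 144 MS/s interleaving sketch: both ADC inputs tied to the same signal,
// CKB = !MCI so channel B converts half an MCI period after channel A.
module interleave_144 (
    input            MCI,     // 72MHz master clock
    input      [7:0] ADC_A,   // channel A data, clocked by CKA = MCI
    input      [7:0] ADC_B,   // channel B data, clocked by CKB = !MCI
    output reg [7:0] samp_a,
    output reg [7:0] samp_b
);
    always @(posedge MCI) samp_a <= ADC_A;  // ch A sample
    always @(negedge MCI) samp_b <= ADC_B;  // ch B sample, half a period later
    // Merging the two streams in order gives 2 x 72 = 144 MS/s, at the cost
    // of correcting the time and level ("interlacing") mismatch between A and B.
endmodule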



The noise problem may have been there with previous FPGAs, but the added gate functions, particularly the 32-bit compares for the time triggering, make it much worse. On top of that, full speed sampling by nature "records" every little blip, so the problem really shows up in that mode.

By a certain "level" I mean a level on the vertical display where a more significant bit takes over from the lesser ones (e.g. a shift from b01111 to b10000). There was one such noise "spot" about 3/4 of the way up (actually 1/4 of the way up, since the FPGA inverts the input from the ADC) that acted differently; it seemed to always be there, even with the sampling rate reduced way down, whereas if a particular configuration otherwise created a lot of noise (at other "levels"), that noise would generally disappear below 18 or 9 MS/s.
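(To tie this back to Hennes' skew example: exactly at such a carry boundary, say b01111111 to b10000000, a single late or early bit can make one capture read b11111111 or b00000000, a near half-scale blip, which would explain why the noise lives at these "levels".)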



As far as the HWD9288 is concerned, I had to replace another one of those a couple of years back on another device that produced nothing but garbage on one channel at the two fastest timebases. And yes, I saw the photocopied data sheets… Pretty blatant of them!