DSO203 GCC APP - Community Edition (2.51+SmTech1.8+Fixes)

Hi,



I am having problems with version 4.4.

I’m not sure if the problem was there before, but I get a constant DC offset even after doing a calibration (I did the full calibration - DC offset correction and voltages from a calibrated power supply - but I didn’t change the CPU frequency calibration).



With the input shorted to GND, on 50mV/div scale, I read a 50mV offset.



Is there anything that can be done? Maybe the hardware is faulty, or there is a limit to the offset the calibration can correct?



I have attached a screenshot showing the offset. During the acquisition, Ch A was shorted to GND.



Interestingly, the offset changes depending on the input range setting:

50mV/div: 50mV offset
0.1V/div: 84mV offset
0.2V/div: 0.152V offset
0.5V/div: 0.360V offset
1V/div: 0.400V offset
2V/div: 1.12V offset
5V/div: 3.40V offset
10V/div: 6.80V offset



I have attached my WPT file. The forum does not allow WPT attachments (seriously, this is annoying), so I changed the extension to ZIP. It is not actually a ZIP file; just rename it back to WPT.



Thanks

Not sure, but:

I found the analog calibration somewhat time consuming, but I would suggest you retry it after loading Wildcat version 4.3 or earlier.



If this corrects your problem, the resultant WPT file can be archived and used with Wildcat 4.4.


The screenshot shows the device is in one of the OS modes (green buffer display with orange box outline at the bottom of the screen; this is the so-called averaging or summing mode, but it is still oversampling).



As previously noted, performing the initial calibration while in OS mode will fail, resulting in large DC offsets and perhaps even a program lockup.



There is no need to calibrate with another version; simply shift over to a regular buffer mode (orange buffer display with white box outline) before entering calibration.



If the device was previously calibrated with an earlier version, and a config file was saved in one of the extra positions (it appears as a CONF00x.CFG file on the drive) and was not overwritten by saving to that position after the improper calibration, that file can be copied/renamed over the xxxx.WPT bootup file and the previous calibration data will be retrieved. The program does not read calibration data when loading an extra config file - it only loads it from bootup file #0 (the WPT) - but it does save the data to them, so they can be used as backups. This can possibly save some time and frustration.



A xxxx.BAK file is also generated from the previous WPT file when a new one is saved. If incorrect data has only been saved once, this can also be used.
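For anyone who prefers to script the restore from a PC, here is a minimal sketch in Python of the copy-over step. The drive letter and file names below are placeholders only - check your own device’s drive for the actual WPT and CONF file names. A plain copy/rename in your file manager does exactly the same thing.

[code]
# Minimal sketch, assuming the DSO203 drive is mounted and the extra config
# slot was saved BEFORE the bad calibration. All names are placeholders.
import shutil

DRIVE  = "E:/"                    # wherever the DSO203 drive mounts on your PC
BACKUP = DRIVE + "CONF001.CFG"    # extra config slot holding good calibration data
BOOTUP = DRIVE + "WILDCAT.WPT"    # placeholder name for the #0 bootup WPT file

# Overwriting the bootup file restores the calibration data, since the
# program only reads calibration from file #0 at startup.
shutil.copyfile(BACKUP, BOOTUP)
[/code]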



I was waiting for a future update to fix this. I’m looking to add more functions as well as update existing ones, specifically relating to triggering (adding the ability to trigger on configurable binary sequences, and perhaps also on data sequences in SPI and I2C decode modes). This would require reprogramming the FPGA, and while I now have the necessary software to do this, it only works with the new chips in HW V2.81, meaning these added functions would not work on earlier devices unless someone with a licensed older version (which appears to no longer be available) can generate a bitmap from the source if/when it is developed. I don’t have one of the new Quads with HW 2.81 at the moment, but plan to purchase one shortly.



I’m also having some serious health issues, so it may be a while before a new update. When I get a chance I might post a minor revision with a fix for this calibration issue for anyone concerned; it will simply amount to calibration switching over to a non-oversampling mode when engaged. For the time being, doing this manually works just as well.

Just acquired a HW V2.81 Quad with the latest DFU.

A few things I noted:



The “swapped Ch B least significant bits” issue of the earlier FPGA versions has been fixed. Since the program compensated for this in the earlier devices, this now has the effect of CREATING the problem on V2.81. This just amounts to some excessive noise on CH B, but likely will also affect initial calibration somewhat. The next update will detect the device version to fix this.
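For those curious, the compensation itself is tiny. Here is a hedged sketch of what I understand it to amount to - exchanging the two least significant bits of each Ch B sample, applied only on FPGA versions that need it. This is my illustration, not the actual app source.

[code]
# Sketch only (assumed mapping, not the actual app code):
# exchange bits 0 and 1 of an 8-bit Ch B sample.
def unswap_chb_lsbs(sample: int) -> int:
    b0 = sample & 0x01
    b1 = (sample >> 1) & 0x01
    return (sample & 0xFC) | (b0 << 1) | b1

# Applied conditionally, so the fix for old FPGAs doesn't "create" the
# problem on HW V2.81:
def correct_chb(sample: int, fpga_has_swapped_bits: bool) -> int:
    return unswap_chb_lsbs(sample) if fpga_has_swapped_bits else sample
[/code]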



Also, I noted on my sample that the preamp clips below the top limit of the ADC, producing visible clipping at the top of the display. This can easily be fixed by shifting the ADC “window” down; see the user guide for how to do this.



Will be posting a new update shortly; just waiting to finish a few added/updated functions. It will also include removing the requirement to have config files present before being able to save them on devices with 8MB drives, as well as a fix for the “calibration while in OS modes” issue.



Some of the new functions I’m working on include extending the chart mode time base up to 100mS/div, auto-saving incrementing BUF and CSV files at the end of each buffer while in chart mode, and the ability to create a binary file image of the entire ROM so a device can be restored to its original state via the internal JTAG header if ever necessary (a utility to do this was previously posted by bobtidey and jpa, but it only works for devices with 2MB drives).

Good news! Thank you!

Version 4.5: fixes some issues with 8MB drives and HW 2.81, adds/updates a few functions and the user guide has also been updated. I now have the means to program the new FPGA chip so will be looking to see what can be done with that. Unfortunately, still have no way to program the earlier FPGA version.



CHANGELOG TO VERSION W4.5:



-Added version detection to properly handle the “swapped 2 least significant Ch B bits” issue of earlier FPGA versions.

Fixes the excessive noise, and possible initial calibration inaccuracy, on Ch B waveforms with hardware 2.81.



-Fixed initial calibration failing if device is in oversampling mode.



-Added overwrite file warnings when saving BMP, CSV and BUF file formats.



-BMP and BUF load functions now auto increment file numbers.



-BMP load can now display next file with just one push of center button 5.



-Fixed 16 color BMP load not covering entire screen on 8MB drive devices.



-Added function to save the entire ROM to an image file, to restore a disabled device to its original state via the JTAG header if ever necessary.



-Extended Chart mode time base up to 100mS/div.



-Added option to auto save incrementing BUF or CSV files at end of each acquired buffer in full buffer chart mode. Provides continuous recording of long periods of data.



-Disabled TH, TL, %duty and period time meters while in chart mode.



-Fixed T cursor delta time display while in chart mode, now works up to 1000 seconds.

WC,



Thanks for the update. Hope you have luck programming the later FPGA.



Can’t seem to figure out what you mean by "shifting the ADC window down"? I looked in the guide, but can’t find it. Are you saying to make the adjustment while doing the analog calibration?

If you hold button 2 for more than 3 seconds, as if to enter calibration, but with the menu NOT on ChA or ChB, you will get a screen where you can change a value (with the left toggle). This is 54 by default. Changing it to a smaller value changes the operating point of the ADC window, which it is advantageous to set as high as possible (e.g. with a setting of 54, the window runs from 54 to 254 - 200 steps of the display). This was done to get away from some non-linearity at the bottom of the screen.
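To put some numbers on that, here is the arithmetic as I understand it. The clip level used in the second comment is just an example value; it varies from unit to unit.

[code]
# Rough arithmetic sketch of the ADC "window" setting (illustration only).
DISPLAY_STEPS = 200        # visible window height, in ADC counts

def window(setting: int) -> tuple[int, int]:
    # The window spans 'setting' .. 'setting' + 200 ADC counts.
    return setting, setting + DISPLAY_STEPS

print(window(54))   # (54, 254): default, top of window near the 8-bit ADC limit
print(window(48))   # (48, 248): lowered window; a preamp that clips around
                    # count ~250 (example value only) would now clip off-screen
[/code]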



If clipping happens in the preamp at the top of the window or close to it, you may see it. This varies from unit to unit, and maybe even from range to range or with battery level. Changing the default setting of 54 to a lower value will bring the window down and can be used to move the visible clipping out of sight. It does not stop the clipping, it just moves the window so you can’t see it.



Factory apps leave the window at the very lowest point, so you don’t see it, but it is still happening…



Move the window down so it just clears the clipping. Note that you have to exit the adjustment screen for the change to take effect. Finally, save the config so it stays that way.



This is explained in the user guide at the very beginning of the calibration section (2nd paragraph).

Just uploaded 4.5 with no issues. Went through a calibration but could not accurately calibrate the 10V range, as my supply would only run to 47.8V. Set it there since I will probably never use that range anyhow.



Kudos to Wildcat. Fantastic job, I just wish I had the talent to understand and program at his level.

I’m having trouble getting voltages calibrated with good accuracy. I don’t know if it is a common fault or not.



I’m having some fluctuation in the last two digits of the low/medium voltage readings when calibrating. I sometimes get two readings blinking; it looks like the DAC is picking up voltage reference fluctuations. Normally I get two values (e.g. 1.045V and 1.037V) and I don’t know whether to set it by the higher or the lower value.



I am using the latest Wildcat 4.5 FW and all the latest fixes, etc.



I don’t believe it’s my calibration power supply, or else I would get the same results on my 5-digit multimeter, which reads steady.


Use the one closest to what your supply is indicating; this is only a difference of 8mV and is likely the limit of the resolution of the ADC. As has been mentioned before, the voltage resolution of an 8-bit ADC is low, with only 256 coarse steps compared to that of even a cheap DVM. This is the trade-off you get between speed and accuracy. The resolution of the calibration display has been extended beyond what the device is capable of providing, to minimize any errors there.
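As a rough worked example (the counts-per-division figure is my assumption, taken from the 200-step display window mentioned earlier spread over 8 divisions; the exact numbers may differ slightly):

[code]
# Why an ~8mV jump is roughly one ADC count (illustration with assumed figures).
volts_per_div  = 0.2      # e.g. calibrating around 1V on the 0.2V/div range
counts_per_div = 25       # assumed: 200-count window / 8 divisions
mv_per_count = volts_per_div / counts_per_div * 1000
print(mv_per_count)       # 8.0 -> one LSB of the 8-bit ADC, about the observed jump
[/code]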



One thing you can do while calibrating, if your supply is finely adjustable, is to try to get onto the center of each step as indicated by the “centering” display. Often, if this jumps around too much, moving up or down a “step” or two can prove helpful. The exact voltage you choose for each range is not critical; just set the display to the closest value as indicated on your supply.



Just finishing up a revised FPGA; will be posting it shortly along with a new app update. The new FPGA will fix inconsistent triggering at the fastest timebases under some conditions, provide a more coherent AUTO mode wave display while untriggered, and allow the development of 144 MS/sec sampling and other functions and refinements, hopefully in a future update.



The app update will include a fix for jitter stabilization not working at the fastest timebases if either channel is in invert mode, plus a few other minor fixes and additions.

Thank you for your quick reply; your work is amazing, especially the serial decode.



Maybe I didn’t explain my point well, or I don’t fully understand your answer.





Power supply -> 1.037V

I get two types of readings, blinking between values:

A: 1.037V <-> 1.045V
B: 1.030V <-> 1.037V



I will try to adjust the PSU to have steady values. It’s a bit tricky.

If the display is fluctuating between 2 values, it just means that the correction “step” is in between 2 values and can’t “decide” which one to use. From a practical standpoint, it makes little difference in such a case which one you choose. If you want to be really fussy, choose the one it stays on the longest, if you can tell.



In the end, the relatively low resolution of an 8-bit ADC, some non-linearity inherent in the analog-to-digital conversion and calibration process used in the hardware (particularly at the bottom of the screen), and the fact that gain and DC offsets in the preamp stages are affected by battery/power supply level will all have more of an effect on accuracy.



I have extended the resolution of the calibration display, and provided some means to compensate for various influences affecting accuracy, all in an effort to at least avoid adding more errors while calibrating. This may have given the impression that the device is capable of higher accuracy than it really is. The core function of the calibration routine is basically unchanged from the original authors’, and it seems to work as well as it can under the circumstances, but the fact remains that from a voltage level standpoint this is a relatively low accuracy device. It was not unusual with the original software on earlier devices for voltage readings to be off by 30% or more under some conditions.



The hardware could certainly be improved by providing better power supply regulation and analog compensation of DC offsets and gain, rather than digital “steps” that distort the waveform and add ambiguity to the voltage meters, but these would greatly add to the complexity of the design. With careful calibration, errors can be brought down to the 2-3% level or less, depending on the range and the level measured, at least making it useful for troubleshooting purposes. Waveform distortion from these calibration “steps” can be disabled: long press center button 5 (left toggle) with meters off.

V5.0: Just some fixes for previously unnoticed issues with recently added functions.

Specifically,

-Fixed jitter stabilization not working at fastest timebases if either channel A or B was in invert mode.

-Fixed BUF file loads not scaling the time base correctly if loaded while in time bases faster than 5uS/div.

-Fixed UAR file load not stopping UART GEN transmission at end of file on 8MB devices.

-Moved the “File already exists” notification off the screen so it does not get saved along with the display.

-Increased hysteresis of battery level compensation to minimize frequency of shifting DC offsets from varying loads on battery.

-Fixed pixels left behind in the meter section after loading BMPs with meters on.

-Fixed screen update after “scope disabled” notification while in XY mode with wave generator on.



Also added button 6 center press (right toggle) to view previous BMP/BUF file.



When used with revised FPGA, provides selection of untriggered AUTO time base mode behavior (see screenshots). Toggle selection menu by pressing left toggle center button while main menu is in TIME BASE MODE with AUTO flashing. Change with left toggle.



V5.x and up will also mark the starting point at which any functions added to new FPGA versions will be accessible from the program.



The revised FPGA includes an added triggering data buffer to eliminate the possibility of read/write collisions while capturing data on the fly (as was previously done), and an implementation of a freerun mode for AUTO trig. This will also serve as a base for any possible new FPGA-based functions added in the future.

Upload to the device by first copying the ADR file; then, when the volume reappears, copy the BIN file.

NOTE: this is ONLY FOR HARDWARE V2.81 and will NOT work on older devices.
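If anyone wants to script that two-step copy from a PC, here is a very rough sketch. The mount point and file names are placeholders only - use whatever is actually in the downloaded archive - and copying the two files by hand works just as well.

[code]
# Rough sketch only: copy the ADR file, wait for the volume to drop and
# reappear, then copy the BIN file. Paths and names are placeholders.
import os, shutil, time

MOUNT = "E:/"                 # wherever the DSO203 drive mounts
ADR   = "FPGA_V281.ADR"       # placeholder name from the archive
BIN   = "FPGA_V281.BIN"       # placeholder name from the archive

shutil.copyfile(ADR, MOUNT + ADR)          # step 1: ADR file

while os.path.exists(MOUNT):               # wait for the volume to disconnect...
    time.sleep(0.5)
while not os.path.exists(MOUNT):           # ...and come back
    time.sleep(0.5)

shutil.copyfile(BIN, MOUNT + BIN)          # step 2: BIN file
[/code]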







“Continuous” mode on the right vs “Fast” mode on the left while untriggered in AUTO mode. CONT minimizes waveform corruption when the display can’t keep up with the incoming signal, but at the expense of losing the pre-trigger section when the mode first comes out of freerun on a trigger event.

Wildcat,



Thanks for the update. Unfortunately, I used Windows 10 to do the FPGA update and bricked my HW2.81 with no way to recover or launch WC4.5/5.0.



Welcome screen now states:



FPGA Configuration Err!

Hardware Ver 2.81d tSerial No: 7E1AC099

DS203 Mini DSO SYS Ver 1.64

FPGA error



That’s not a typo; it previously stated: Ver 2.81 Serial No:.



DFU mode (DFU 3.45c) still appears to work under Windows 10, so I tried reloading the factory FPGA 2.81.ADR and FPGA2.81.Bin. They appear to load properly, but this does not correct/change the welcome screen.

These strings are in the SYS area of ROM; I would suggest reloading the SYS file.



I can’t think of any way this could have been written over unless something has gotten corrupted. I tested this a great number of times with essentially the same SYS and HW version without any problems. Would be curious if this has happened to anyone else…



Will check more into it tomorrow, it’s very late for me right now.

Just to make sure, I downloaded the archive again, extracted the files and loaded the FPGA to my device, with no problems.

This is Hardware V 2.81 with DFU 3.45c , SYS 1.64

Does the remark about 2.81 apply only to the revised FPGA part? That’s what it sounds like.



So one can load the v5 app onto older devices and still benefit from the non-FPGA bits?

Yes, the app will detect whatever hardware/FPGA version is used and adjust itself accordingly.