DSO firmware version 3.64

I cannot duplicate your problem. As seen in the attached views, the trigger level always assumes the last thing loaded. Perhaps you have some YA offsets involved.

First view = 10xmlprimaryload.jpg = file010.xml loaded into primary buffer.

Second view = 09xmlrefload.jpg = file09.xml loaded into Reference buffer.

Third view = both10primaryloadedlast.jpg = file09.xml into the ref buffer first, file10.xml into the primary buffer last. I believe that the phase shift is due to a different trigger level and/or signal variations between captures. These two captures were obtained by placing my finger on the probe tip. This view also demonstrates why I feel that it would be good to separate the vertical offset and V/Div for the two buffers.

See the next post for the XML files zip attachment (the limit of 3 attachments has been exceeded). If we all use these files, then we can describe any issues against a common reference.
10xmlprimaryload.jpg
09xmlrefload.jpg
both10primaryloadedlast.jpg

Because of the attachment limit, 09xmlprimaryload.jpg is shown here; it is file09.xml loaded into the primary buffer.

Here is the zip attachment.
09xmlprimaryload.jpg
muffingspawn.zip (67.9 KB)

Hello,
this firmware works well, but I like the design of Paul's version.
Michael

Hi all,

I found that when changing the GND position, the V1 and V2 cursors do not move with the GND level and I need to move them separately. Is this a bug or a feature? Same with Vert Pos.

Both voltage and time cursors are disconnected from the waveform position by design. This is how most scopes work and I also think it makes sense more often than not. If we were to link cursors to gnd/vert pos (and V/Div) we would have issues with pushing cursors off screen and this would complicate the user interface (where are my cursors?).

Knowing how it works, you can adapt usage accordingly – set gnd/vert pos first and then position your voltage cursors.

benf - how hard would it be to put a few software attenuators in there for using a current probe that converts 10mV per 1 amp or 100mV per 1 amp? I keep confusing myself with millivolts only while using a current clamp… Thanks for the hard work you did with the 3.6 series. I've been using the Nano instead of my Pico scope for quick waveform checks at work; it's nice when I can pull something out of my pocket to look at a quick waveform and not have to clean up 10 minutes' worth of test leads like I do with my other scopes.

That would be a very important application of the Nano because clamp-on ammeters are extremely useful for troubleshooting many current-demand situations. They are used to observe current waveforms in addition to DC values. They have become another valuable o’scope probe.

Maybe BenF would consider this possibility in the measurements function, with a new measurement value that converts the millivolts to ampere values.

Just an extension of your idea…

ingra - how would he do the ranges? There are 3 common ranges: 1mV = 1A, 10mV = 1A and 100mV = 1A. It wouldn't be useful unless all three were implemented. I use the first one for starter relative compression checks, the second one for alternator ripple measurements and diesel injectors, and the third is the most used, for coil/injector/IAC and such. I think you have a good idea; I guess the options would be 10A (100mV = 1A), 100A (10mV = 1A) and 1000A (1mV = 1A), these being your max current in those ranges.

After giving this more thought, I think your idea of an attenuator might be the better approach for evaluating amp clamp waveforms. It could be handled as a configurable custom probe (in lieu of 1x and 10x) using user configuration parameters. This would also allow users to connect custom probes with alternate attenuation for voltage applications. If it can be done for one custom probe, then a couple of custom probes could share the same structure.

A configuration pop-up would allow the user to select an input ratio from the choices 1, 2, 3, 5, 10, 100, 1000, an output ratio from the same choices, and an output unit of V or A. The input voltage would then be scaled for this custom probe (using the selected ratios), the vertical units could display V or A, and the custom probe could be saved with the configuration data.
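To make the idea concrete, here is a minimal sketch of what such a custom probe descriptor and scaling step could look like. This is purely illustrative; the structure, field names and function below are hypothetical and not part of the current firmware.

```c
#include <stdint.h>

/* Hypothetical custom-probe descriptor (illustrative only, not part of
 * the BenF firmware). For a 10 mV/A current clamp one would enter
 * in_ratio = 1, out_ratio = 100, unit = 'A': a 10 mV input then scales
 * to 1000 milli-units and would be shown as 1.000 A. */
typedef struct {
    uint16_t in_ratio;   /* selected from 1, 2, 3, 5, 10, 100, 1000 */
    uint16_t out_ratio;  /* selected from 1, 2, 3, 5, 10, 100, 1000 */
    char     unit;       /* 'V' or 'A' for the vertical axis label  */
} custom_probe_t;

/* Scale a measured input (in millivolts) to the displayed milli-units. */
static int32_t probe_scale_mv(const custom_probe_t *p, int32_t input_mv)
{
    return (input_mv * (int32_t)p->out_ratio) / (int32_t)p->in_ratio;
}
```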

BenF is much better at sorting out the menu organization and he most likely has a better idea. In summary, the custom probe concept would have many applications beyond amp clamps.

Improved support for non-standard probes is probably a good idea, but it will be a challenge to implement this in a way that is intuitive, logical and easy to use. Changing the user interface to use terms like Amp/Div, delta amps and amp trigger level seems awkward. Also, we would not want to limit ourselves to amps, but should consider that custom probes may be used for pressure, energy, mass, force and so on. As it is now, we can use any such probe as long as it outputs a voltage/frequency that fits within the Nano hardware constraints, but measurements may then require a final conversion step (perhaps something for the XML analyzer).

A custom probe with user configurable gain/attenuation might work and I’ll keep that in mind for future upgrades.

Maybe this would be much simpler if the user just configured a ratio to reflect the desired units (volts, millivolts, etc.) and then called the voltage units by another name such as amps, pressure, strain, or whatever unit applies. The Nano could then stay with voltage units on the display. For example, for a probe where 10mV = 100mA, the user selects a probe ratio of 1 input unit = 10 output units; the 10mV would then read as 100mV and the user would interpret that as 100mA (or, in other applications, as 100 milli-units of whatever quantity is being measured).

Just another discussion of the feature.

Hello, I think brandonb's approach would be much simpler for an amp probe like the one in this video.
youtube.com/watch?v=gMq26dubD5I
thanks a lot for your great improvements.

It would also be nice to have the possibility to save several calibration settings (like with the preferences: one as the power-on default and some ‘customized’). This would help re-calibrate the whole system of probe and oscilloscope for different probes.

Just updated my v1.1 DSO Nano to BenF 3.61. (Yeah I am slow…) Woah, what a difference!

Thank you very much for your work, your firmware rocks.

BenF,

Thanks a lot for all your work on this. I have a few suggestions:

  1. A faster refresh rate in Auto (and Norm) sync mode for higher T/Div (lower frequency), because we can miss some changes. For example, you do not need to capture the whole buffer before displaying results on screen (you can use a circular buffer, if you do not already, and keep the last N samples at any time; see the sketch after this list).

  2. It is not possible to see short-interval changes at lower sample rates. If the CPU allows, I would prefer to always capture at the highest frequency, maybe always at 1 MHz. For each pixel on screen you can keep two values, the min and max over the corresponding time interval, and display a vertical line for each X (time dot on screen). That way we can see whether there are fast changes even at a big T/Div (low sample rate), and then change the other parameters to capture that fast signal. Otherwise we do not know whether these fast signals exist. This option reduces the effective buffer size to 2048, but that is still enough.

  3. A Scan "trigger" option for faster T/Div, as far as the CPU allows.

  4. Currently there are different and strange results when using Normal versus Fast sampling. If 1-3 are implemented, then we no longer need this option.

  5. This is harder. Filters: low, high and mid (band) pass. This probably needs a few extra bytes of buffer for filtering. If we have, for example, a high-amplitude 50 Hz AC signal and some small 1 kHz signal, it is hard to see the small signal. If we use a high-pass filter with f = 500 Hz, we can see only that signal. Also, if there is large high-frequency noise, we can reduce it with a low-pass filter.

There could be other improvements (auto sync, FFT), but 1-3 are the most important.
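As a concrete illustration of point 1, a minimal circular-buffer sketch might look like the following (sizes and names are made up for the example, not taken from the existing firmware). Samples are written continuously from the ADC interrupt, and the display routine can copy out the most recent N samples whenever it is ready to redraw, instead of waiting for a full capture cycle.

```c
#include <stdint.h>

#define CAPTURE_LEN 4096u            /* illustrative buffer size */

static uint8_t           capture[CAPTURE_LEN];
static volatile uint16_t head;       /* index of the next write position */

/* Called for every new ADC sample; oldest data is silently overwritten. */
void capture_put(uint8_t sample)
{
    capture[head] = sample;
    head = (head + 1u) % CAPTURE_LEN;
}

/* Copy the most recent n samples (oldest first) for display, n <= CAPTURE_LEN. */
void capture_latest(uint8_t *dst, uint16_t n)
{
    uint16_t idx = (uint16_t)((head + CAPTURE_LEN - n) % CAPTURE_LEN);
    for (uint16_t i = 0; i < n; i++) {
        dst[i] = capture[idx];
        idx = (uint16_t)((idx + 1u) % CAPTURE_LEN);
    }
}
```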

Regards,
Dejan

Hi BenF,

I was thinking more about my suggestions and have some more ideas.

Related to always capturing at the highest possible frequency and keeping MIN and MAX values: thinking about it more, it would be better to capture three values:

  1. Average value – used to draw lines as before
  2. Min value (stored as the absolute difference from the average value)
  3. Max value (stored as the absolute difference from the average value)

To save buffer memory, the average value can be stored with 8 bits, while the DIFFERENCE from the average to the Min and to the Max can be stored with 4-bit resolution each (2 x 4 = 8 bits).

Because 4 bits is a very low resolution (only 16 values), and because the min and max will usually be close to the average, a non-linear conversion is needed: smaller distances are stored with higher precision than larger ones. For example, a table like this:
Diff of min/max from average    Value stored in buffer (4 bits)
0                               0
1                               1
2                               2
3                               3
...                             ...
7                               7
8                               8
< 10  (8+2)                     9  (8+1)
< 12  (8+4)                     10 (8+2)
< 16  (8+8)                     11 (8+3)
< 24  (8+16)                    12 (8+4)
< 40  (8+32)                    13 (8+5)
< 72  (8+64)                    14 (8+6)
<= 136 (8+128)                  15 (8+7)

There could be better non-linear functions, but I guess this one is simple to implement.
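Just to sketch how the table above could be realized in code (the function names are made up for the example; the bucket limits follow the table): small differences are stored exactly, and larger ones fall into buckets whose width doubles at each step.

```c
#include <stdint.h>

/* Encode an absolute min/max difference into the 4-bit code from the
 * table above: 0..8 are stored exactly, larger values fall into
 * doubling-width buckets 9..15 (limits 10, 12, 16, 24, 40, 72, 136). */
static uint8_t encode_diff(uint8_t diff)
{
    if (diff <= 8u)
        return diff;

    uint8_t  code  = 9u;      /* first bucket: diff < 10 (8+2) */
    uint16_t limit = 8u + 2u;
    while (code < 15u && diff >= limit) {
        limit = 8u + (uint16_t)((limit - 8u) << 1); /* 12, 16, 24, 40, 72, 136 */
        code++;
    }
    return code;
}

/* Approximate decode: return the lower bound of the bucket. */
static uint8_t decode_diff(uint8_t code)
{
    if (code <= 8u)
        return code;
    return (uint8_t)(8u + (1u << (code - 9u)));     /* 9, 10, 12, 16, 24, 40, 72 */
}
```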

To simplify programming, you can draw your lines as you already do, but before drawing each line, the program displays a vertical line between the min and max values in a lower-intensity color. There are other possibilities, such as drawing and filling lines between the Mins and between the Maxs for a better look, but that is not important.


For the Auto option, if a trigger is not found, why wait 100 ms before a new display? Let’s display it as soon as possible, or better, give the user the possibility to define that time.


If you need to extend your user interface with more options, some of which require entering values (the time for the previous option, fs for filters, …), I suggest making a routine for entering these numbers. I’ve attached a picture of what I’m using in my own application, which uses only 4 keys for the user interface (Up/Down, Esc, Ok).


This option is not important and is hard to implement, but it looks great. If needed, it is possible to display the full input resolution on a smaller screen resolution using anti-aliased lines with varying color intensity. If a value falls between two pixels but is closer to one, you display it on that pixel with a higher intensity than on the other, but the summed intensity of the pixels needs to stay the same. Example from an Ubuntu screen.
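A minimal sketch of that intensity-split idea (illustrative only; the callback and names are made up for the example): the fractional part of the vertical position decides how a fixed total intensity is shared between the two neighbouring pixels.

```c
#include <stdint.h>

/* Plot one anti-aliased sample at fractional vertical position y
 * (e.g. 12.3): the pixel it is closer to gets more of the intensity,
 * and the two contributions always sum to max_intensity.
 * Bounds checking is omitted for brevity. */
void plot_antialiased(uint16_t x, float y, uint8_t max_intensity,
                      void (*set_pixel)(uint16_t x, uint16_t y, uint8_t intensity))
{
    uint16_t y0   = (uint16_t)y;        /* upper of the two pixels     */
    float    frac = y - (float)y0;      /* distance into the gap, 0..1 */

    set_pixel(x, y0,      (uint8_t)((1.0f - frac) * (float)max_intensity));
    set_pixel(x, y0 + 1u, (uint8_t)(frac * (float)max_intensity));
}
```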

Regards,
Dejan
Entering nubers.JPG
Antialiasing Lines.JPG

Wow! That was quite a list of ideas and implementation details on your part.

Perhaps you could single out your most desired enhancement and present a simple use case/example that I and others can comment on, where you focus on the actual measurement and desired result rather than the implementation?

One of your ideas is to always sample at 1 MHz (the Nano maximum sample rate) for all T/Div settings. When we sample at 1 MHz, the time between the first and last sample (for one capture cycle of 4098 points) is approximately 4 ms. For a T/Div of 10 ms, all we would see plotted is a tiny waveform about 12 pixels wide. Surely this is not practical, so a compromise is needed.
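To spell out the arithmetic (assuming roughly 30 pixels per horizontal division, i.e. about 10 divisions across the 300-point display): 4098 samples at 1 MHz cover about 4.1 ms, one pixel at 10 ms/div covers 10 ms / 30 ≈ 0.33 ms, so the whole capture would span only about 4.1 / 0.33 ≈ 12 pixels.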

Another idea of yours is to capture two points per on-screen pixel as opposed to one. With the current firmware and using fast buffer mode we capture 10 points per single pixel.

Visual perception is also an aspect to consider. The current refresh rate is 10 Hz (300 points redrawn 10 times every second) and this is about as much as we humans can comprehend. If we increase the refresh rate further, consecutive capture cycles will appear as if they blend together. If we need to capture a specific waveform detail, we may be better off letting the DSO do the hard work (continuous sampling) and then stop and show us (trigger) if and when an issue is found.

You’re also suggesting there are issues with using fast buffer mode and Normal trigger. Perhaps you could expand on this with some examples?

BenF;

  1. In your Firmware User’s Guide, on page 13, you describe a calibration procedure using a DC voltage. Is it possible to establish gain calibration using a different procedure that would support using a function generator’s AC signal instead?

The reason I ask is because many more users have function generators than variable DC voltage supplies. If we use an external calibrated o’scope to measure the input waveform amplitude, could we use something like Vpp?

  2. In the Measurement section of your Firmware User’s Guide, on page 11, you discuss the measurement of an AC signal. When you refer to the number of complete acquired waveforms, are you referring to the capture buffer or the display buffer as the source for these acquired and measured waveforms?

Thanks

I would say any calibrated voltage source (AC or DC) close in amplitude/frequency to whatever you need the DSO calibrated for will do fine. Vavg may still be preferred since Vpp is a peak measurement that will include ripple if present. Personally I use a calibrated Fluke connected in parallel with the Nano probe and so can use any available DC source (9V battery, car battery, lithium cell phone battery, USB port, etc.).

Measurements are calculated from values in the capture/acquisition buffer with partial cycles excluded for AC waveforms.

BenF:

Sorry, but I have another question about the Nano. When the firmware detects a trigger condition during an acquisition, the trigger is always shown on a sample point. What happens if the trigger condition occurs between sample points? Does the firmware interpolate that the trigger condition happened between two sample points, or does the firmware miss the trigger condition when it does not fall exactly on a sample point?

Thanks