DSO firmware version 3.64

BenF,

Thanks a lot for all your work on this. I have a few suggestions:

  1. A faster refresh rate in Auto (and Norm) sync mode at higher TD (lower frequencies), because otherwise we can miss some changes. For example, you do not need to capture the whole buffer before displaying results on screen (you could use a circular buffer, if you do not already, so the last N samples are available at any time).

  2. It is not possible to see small, short changes at a lower sampling frequency. If the CPU allows, I would prefer to always capture at a higher frequency, maybe always at 1 MHz. For each pixel on screen you could capture two dots, the min and max value in the corresponding time interval, and display a vertical line for each X (time dot on screen). That way we can see whether there are fast changes even at a big TD (low sampling frequency), and can then change the other parameters to capture that fast signal. Otherwise we do not know whether these fast signals exist. This option reduces the buffer size to 2048, but that is still enough.

  3. A scan "trigger" option for faster TD, as far as the CPU allows.

  4. Currently there are different and strange results when we use Normal versus Fast sampling. If items 1-3 are done, then we do not need this option.

  5. This one is harder. Filters: low-pass, high-pass and mid (band-pass). This probably needs a few bytes of buffer for filtering. If we have, for example, a high-amplitude 50 Hz AC component and some small 1 kHz signal, it is hard to see that small signal. If we use a high-pass filter with f = 500 Hz, we see only the small signal. Likewise, if there is big high-frequency noise, we can reduce it with a low-pass filter. (A minimal filter sketch follows this list.)
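As an illustration of item 5, here is a minimal one-pole high-pass filter sketch in C. This is not firmware code; the 500 Hz corner follows the example above, and the 100 kHz sample rate is an assumption:

```c
/* Minimal one-pole high-pass sketch: y[n] = a*(y[n-1] + x[n] - x[n-1]),
 * with a = RC/(RC + dt) and fc = 1/(2*pi*RC). For fc = 500 Hz at an
 * assumed fs = 100 kHz: a = 318.3us / (318.3us + 10us) ~= 0.9695. */
static float hp_prev_in = 0.0f, hp_prev_out = 0.0f;

float highpass_step(float in)
{
    const float a = 0.9695f;
    float out = a * (hp_prev_out + in - hp_prev_in);
    hp_prev_in  = in;
    hp_prev_out = out;
    return out;
}
```

Running every sample through `highpass_step()` lets the 50 Hz component decay away while the 1 kHz signal passes nearly unchanged.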

There could be other improvements (auto sync, FFT), but items 1-3 are the most important.

Regards,
Dejan

Hi BenF,

I was thinking more about my suggestions, and have some further ideas.

Regarding always capturing at the highest possible frequency with MIN and MAX values: on further thought, it is better to capture three values:

  1. Average value – used to draw lines as you did before
  2. Min value (absolute difference from the average value)
  3. Max value (absolute difference from the average value)

To save buffer memory, the normal average value can be stored in 8 bits, while the DIFFERENCE between the average value and Min/Max can be stored with 4-bit resolution each (2 × 4 = 8 bits).

Because 4 bits is very low resolution (only 16 values), and because the min and max will usually be close to the average, a non-linear conversion is needed: smaller distances should be stored with greater precision than larger ones. For example, a table like this:
Diff from Avg (Min/Max)   Value stored in buffer
0                         0
1                         1
2                         2
3                         3
…                         …
7                         7
8                         8
< 10  (= 8 + 2)           9   (= 8 + 1)
< 12  (= 8 + 4)           10  (= 8 + 2)
< 16  (= 8 + 8)           11  (= 8 + 3)
< 24  (= 8 + 16)          12  (= 8 + 4)
< 40  (= 8 + 32)          13  (= 8 + 5)
< 72  (= 8 + 64)          14  (= 8 + 6)
≤ 136 (= 8 + 128)         15  (= 8 + 7)

There could be better non-linear functions, but I guess this one is simple to implement.
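A minimal C sketch of this encoding, and of packing one stored sample (8-bit average plus two 4-bit diffs); the thresholds mirror the table above, and all names are illustrative:

```c
#include <stdint.h>

/* Map an absolute diff from the average to the 4-bit code above. */
static uint8_t encode_diff(uint8_t diff)
{
    if (diff <=  8) return diff;   /* 0..8 stored as-is   */
    if (diff <  10) return 9;
    if (diff <  12) return 10;
    if (diff <  16) return 11;
    if (diff <  24) return 12;
    if (diff <  40) return 13;
    if (diff <  72) return 14;
    return 15;                     /* saturate at the top */
}

/* One stored sample: 8-bit average plus two packed 4-bit diffs.
 * Assumes min <= avg <= max. */
typedef struct { uint8_t avg; uint8_t minmax; } packed_sample;

static packed_sample pack(uint8_t avg, uint8_t min, uint8_t max)
{
    packed_sample s;
    s.avg    = avg;
    s.minmax = (uint8_t)((encode_diff(avg - min) << 4) |
                          encode_diff(max - avg));
    return s;
}
```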

To simplify programming, you can draw your lines as you already did, but before drawing the line, the program displays a vertical line between the min and max values in a lower-intensity color. There are other possibilities, such as drawing and filling lines between the mins and between the maxes for a better look, but that is not important.


For the Auto option, if a trigger is not found, why wait 100 ms for a new display? Let's display it as soon as possible, or better, give the user the possibility to define that time.


If you need to extend your user interface with more options, some of which require entering values (for the previous option, fs for filters, …), I suggest making a routine for entering these numbers. I've attached a picture of what I use in my application, which needs only 4 keys for the user interface (Up/Down, Esc, Ok).


This option is not important and it is hard to implement, but it gives a great look. If needed, it is possible to display the full input resolution on a smaller screen resolution using antialiased lines with varying color intensity. If something falls between two pixels, but closer to one of them, you can display the value with higher intensity on that pixel than on the other. But the total intensity of all pixels needs to stay the same. See the attached example from an Ubuntu screen.
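A minimal sketch of the intensity-splitting idea (in the spirit of Wu's line algorithm); `plot_pixel()` is a hypothetical routine taking an intensity 0-255:

```c
#include <math.h>

extern void plot_pixel(int x, int y, int intensity);

/* A sample that falls between two pixel rows lights both rows,
 * weighted by proximity, so the summed brightness stays constant. */
void draw_antialiased(int x, float y)
{
    int   y0   = (int)floorf(y);
    float frac = y - (float)y0;   /* 0.0 means exactly on row y0 */

    plot_pixel(x, y0,     (int)(255.0f * (1.0f - frac) + 0.5f));
    plot_pixel(x, y0 + 1, (int)(255.0f * frac + 0.5f));
}
```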

Regards,
Dejan
Entering nubers.JPG
Antialiasing Lines.JPG

Wow! That was quite a list of ideas and implementation details on your part.

Perhaps you could single out your most desired enhancement and present a simple use case / example that I and others can comment on, where you focus on the actual measurement and desired result rather than the implementation?

One of your ideas is to always sample at 1 MHz (the Nano maximum sample rate) for all T/Div's. When we sample at 1 MHz, the time between the first and last sample (for one capture cycle of 4098 points) is approximately 4 ms. For a T/Div of 10 ms, all we would see plotted then is a tiny waveform about 12 pixels wide. Surely this is not practical, and so a compromise is needed.

Another idea of yours is to capture two points per on-screen pixel as opposed to one. With the current firmware, using fast buffer mode, we capture 10 points per pixel.

Visual perception is also an aspect to consider. The current refresh rate is 10 Hz (300 points redrawn 10 times every second) and this is about as much as we humans can comprehend. If we increase the refresh rate further, consecutive capture cycles will appear as if they blend together. If we need to capture a specific waveform detail, we may be better off letting the DSO do the hard work (continuous sampling) and then stop and show us (trigger) if and when an issue is found.

You’re also suggesting there are issues with using fast buffer mode and Normal trigger. Perhaps you could expand on this with some examples?

BenF,

  1. In your Firmware User’s Guide, on page 13, you describe a calibration procedure using a DC voltage. Is it possible to establish gain calibration using a different procedure that would support the use of an AC signal from a function generator?

The reason I ask is that many more users have function generators than variable DC voltage supplies. If we use an external calibrated o’scope to measure the input waveform amplitude, could we use something like Vpp?

  2. In the Measurement section of your Firmware User’s Guide, on page 11, you discuss the measurement of an AC signal. When you refer to the number of complete acquired waveforms, are you referring to the capture buffer or the display buffer as the source for these acquired and measured waveforms?

Thanks

I would say any calibrated voltage source (AC or DC) close in amplitude/frequency to whatever you need the DSO calibrated for will do fine. Vavg may still be preferred, since Vpp is a peak measurement that will include ripple if present. Personally I use a calibrated Fluke connected in parallel with the Nano probe, and so can use any available DC source (9V battery, car battery, lithium cell phone battery, USB port etc.).

Measurements are calculated from values in the capture/acquisition buffer with partial cycles excluded for AC waveforms.

BenF:

Sorry, but I have another question about the Nano. When the firmware detects a trigger condition during an acquisition, the trigger is always shown on a sample point. What happens if the trigger condition occurs between sample points? Does the firmware interpolate that the trigger condition happened between two sample points, or does it miss a trigger condition that does not fall on a sample point?

Thanks

An edge trigger condition is not satisfied from a single point, but as a transition across a lower and upper threshold as determined by trigger level, trigger sensitivity and trigger kind.

Let’s say we configure for a rising edge trigger and start with the waveform sitting well above trigger level. The waveform then drops below trigger level (or more precisely below the sensitivity band lower level) and now the first condition is satisfied. Eventually, the waveform will start to rise and once we reach or exceed trigger level (sensitivity band upper level), the second condition is satisfied and we have a successful trigger. That is a low to high transition crossing the sensitivity band (a rising edge). If the waveform starts out below trigger level, the same logic applies, but the first condition will be satisfied immediately.

Trigger position as marked on the DSO coincides with the sample point that satisfies the second condition.
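For readers who want the logic spelled out, here is a minimal sketch of the two-condition test described above. Names and types are illustrative, not the firmware's internals:

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t lower;   /* sensitivity band lower threshold        */
    uint16_t upper;   /* sensitivity band upper threshold        */
    bool     armed;   /* condition 1 met: sample seen below band */
} rising_trigger;

/* Returns true on the sample that completes a rising-edge trigger. */
static bool trigger_check(rising_trigger *t, uint16_t sample)
{
    if (!t->armed) {
        if (sample < t->lower)    /* condition 1: drop below the band */
            t->armed = true;
        return false;
    }
    return sample >= t->upper;    /* condition 2: rise above the band */
}
```

The sample for which `trigger_check()` returns true corresponds to the marked trigger position.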

BenF:

Thanks for that very thorough explanation. I had forgotten about trigger sensitivity because I usually keep that set to zero. Now I and others can completely understand the firmware trigger detection process.

Hi BenF,

I’m replying to your post of “Sat May 07, 2011 2:52 am”.

First of all, thank you a lot for your time and improvements on this. I use the DSO Nano occasionally for developing and servicing machines with analog and digital signals below 100 kHz. The small size of the DSO Nano makes it great.

I had two main problems (I’m using version 3.61):

  1. In most cases, I’m examining signals around 50 Hz (for example, controlling thyristors) and use TD ~ 5 ms. The “Normal” and “Single” trigger modes work OK. But before I use these modes, I want to see the signal “in general” with AUTO mode. At this TD, the refresh rate is too slow (I didn’t measure it, but let’s say 0.2 - 0.5 s), so I cannot see all signal changes and may miss something.

  2. There can be occasional high-frequency components on top of low-frequency changes. It is hard to see these high-frequency changes (at the least, you cannot set a trigger for them).

My main suggestions are:

  1. Increase the refresh rate in Auto mode at bigger TD, to be able to see what is going on with the signal.

  2. Always sample at 1 MHz. Unfortunately, you didn’t understand my idea. Sorry for my bad English.

For example, let’s use TD = 5 ms. Currently, in Normal sampling mode you use fs = 5 kHz and fill the buffer, giving 25 values per time division. Because fs is only 5 kHz, faster frequency changes (theoretically >= 2.5 kHz) look very strange (or can be missed) and get shifted to a low frequency. This is the normal theoretical situation: if you sample a 6 kHz sine signal at 5 kHz you will get 1 kHz as the result (I think?). Use my example in the attachment and change the input frequency in field B2 to: 100, 200, …, 1000, 2000, …, 6000 (the same as 1000).
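The folding arithmetic behind this is easy to check; this tiny C helper (illustrative only) computes the apparent frequency of a tone f_in sampled at fs:

```c
#include <math.h>
#include <stdio.h>

/* A tone at f_in sampled at fs appears at |f_in - fs*round(f_in/fs)|. */
static double aliased_freq(double f_in, double fs)
{
    return fabs(f_in - fs * round(f_in / fs));
}

int main(void)
{
    printf("%g Hz\n", aliased_freq(6000.0, 5000.0));  /* prints 1000 */
    printf("%g Hz\n", aliased_freq(2000.0, 5000.0));  /* prints 2000 */
    return 0;
}
```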

My idea is the following. For ONE pixel on screen (5 ms / 25 = 200 µs), capture 200 µs / (1 / 1 MHz) = 200 samples. Calculate the average, min and max values for this set of 200 samples (you need just three variables), put all three values in the buffer and plot them on screen. You can use the average value as you did before to draw lines, while the min and max values can be drawn as a vertical line with lower color intensity, or just plotted as two dots.

That way we can DETECT and see the amplitude of high-frequency changes even on low-frequency signals. For example, if there is a very small impulse with dt = 5 µs, we can currently miss it or display it only occasionally. With min/max values we will always see this signal, because we sample with dt = 1 µs (we will catch it 5 times).
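A minimal sketch of the per-pixel reduction (assuming, per the example above, 200 raw samples per screen column; names are illustrative):

```c
#include <stdint.h>

typedef struct { uint8_t avg, min, max; } pixel_bin;

/* Reduce one block of n >= 1 raw samples to average, min and max -
 * three variables, one pass. */
static pixel_bin reduce_block(const uint8_t *raw, int n)
{
    uint32_t  sum = raw[0];
    pixel_bin b   = { 0, raw[0], raw[0] };

    for (int i = 1; i < n; i++) {
        sum += raw[i];
        if (raw[i] < b.min) b.min = raw[i];
        if (raw[i] > b.max) b.max = raw[i];
    }
    b.avg = (uint8_t)(sum / (uint32_t)n);
    return b;
}
```

Rendering would then draw a dim vertical line from `b.min` to `b.max` in each column before plotting the `b.avg` trace.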

For trigger modes, there needs to be an option to trigger on only the average, or on all three values. Or just use the average value to simplify this.

This will reduce the buffer size by a factor of 2 or 3, depending on whether you store min and max with 4 or 8 bits. Probably it is best just to store each value with 8 bits, to simplify things and have a true picture of the signal, instead of the idea I suggested in my second email (min and max as a 4-bit non-linear difference from the average).

There could be an option to enable or disable this mode.

Regarding the Normal / Fast sampling modes, I get a strange picture and the levels of the signals are different. I expected that the level of a signal with high-frequency content could be higher (if it catches some high-frequency peak), but I got the opposite (I think, not sure now).

Anyway, with an increased refresh rate and the additional sampled min and max values, we would not need fast mode.

Thanks and regards,
Dejan
sine at 5kHz sampling.zip (38.2 KB)

For the Open Office/Excel example: field B1, not B2, sorry.

BenF,

Just a few additional comments and ideas.

We need average, min and max values for each time step t. You can use the average values as a replacement for your old samples: for displaying the waveform, triggers, saving, … The min and max can be used only for displaying high-frequency components with lower color intensity.

The average value is in effect a low-pass filter, which is good and eliminates high-frequency noise. The min and max display those high-frequency values. With the current values we have these mixed together and cannot see either of them in the best way.

My idea for calculating the average is the simplest possible, and it is better than nothing. Theoretically, it is possible to make a better quality low-pass filter (for this resampling purpose) using the formula c0·v0 + c1·v1 + c2·v2 + … + cn·vn, but this requires memory for storing the ci coefficients. I suggest using my simplest-possible approach.
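For what it's worth, that weighted sum is a standard FIR filter. A minimal sketch, with an illustrative (not designed) 5-tap smoothing kernel whose coefficients sum to 1:

```c
#define TAPS 5

static const float c[TAPS] = { 0.1f, 0.2f, 0.4f, 0.2f, 0.1f };

/* v holds the last TAPS input samples, newest first. */
static float fir_step(const float *v)
{
    float acc = 0.0f;
    for (int i = 0; i < TAPS; i++)
        acc += c[i] * v[i];
    return acc;
}
```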

Another idea: in Normal and Single sync modes, we cannot see the signal until it is caught. It would be possible to always display the current signal changes, while the triggered (held) signal could be displayed in another color, for example red (actually light red for the average value, and dark red for min/max). This way we can adjust the trigger position and see the signal while waiting for the trigger to catch.

The trigger mode is an important option, so it makes sense for it to be a separate menu, similar to TD and VD.

The Auto mode could be split into two modes: the old Auto, and a new Auto which displays the new waveform immediately if a trigger is not caught. It could be used in Normal and Single modes too, for the current “green” signal while the held signal is red.

Some links:
en.wikipedia.org/wiki/Circular_buffer
en.wikipedia.org/wiki/Xiaolin_Wu … _algorithm
en.wikipedia.org/wiki/Downsampling

Thanks,
Dejan

First of all, THANKS! This software is great!

In the FIT (auto scale) mode, it would be nice if it automatically adjusted the ground position so that the waveform is centered vertically. I tend to keep the ground position centered in the middle of the screen for bipolar signals (signals that swing both + and -). Then when I go to a digital signal (the output on the DSO Nano, for instance), the displayed signal is asymmetric and compressed toward the top of the screen until I go and adjust the ground position. Quite often the peaks are off the top of the screen as well when ground is not near the bottom.

It’s not a big deal, but it’s something that would save me from having to constantly re-adjust the ground position even in FIT mode. I’d appreciate it if you could add it to the list of improvements.

Again … thanks for all the hard work.

Ben,

Just my two cents on allowing the reference waveform to be positioned independently of the real-time waveform. When debugging serial UARTs, I often send out a reference waveform, something like “UUUU” (which is 0x55,0x55,0x55,0x55), at the baud rate I’m using. This gives me a 01010101… pattern showing the bit times. On other scopes, I position this at the top of the screen as a reference so I can compare it to the real-time serial data. It makes it very easy to count long strings of 0’s and 1’s when you have an alternating bit pattern at the same baud rate to use as a reference.

This can still be done if they overlap, but it would be nice in this case (and in general for all digital signals) to have the ability to place one above or below the other, like you see on a typical scope with digital channels or a logic analyzer.

It may be more trouble than it’s worth, or may clutter the menu, etc. … your call. I read earlier that you like real-world examples/needs to be articulated, so I just thought I would weigh in on the topic with one that I run into quite a bit.

Best,
Dave

Fast-mode is likely your best choice for short cycle time and high waveform detail at a T/Div of 5 ms. Cycle time will then be close to 100 ms for NORM and slightly above 100 ms for AUTO. This mode acquires a 10x oversampled signal and uses simple averaging for display rendering.

The Nano does not have the capacity to do significant processing on real-time waveform data while high-speed data acquisition is ongoing. Part of the reason is that DMA (used for acquisition) steals bus cycles and prevents concurrent data access. Processing is limited to searching for a trigger, and so display data is scaled and refreshed directly from raw buffer data between cycles. If you stop capture in fast-mode and use the zoom capability, you should see progressively more waveform detail. Measurements and XML export data are also obtained directly from the acquisition buffer in order to preserve the full ADC range. You should be able to trigger on waveform details in fast-mode even though you may not see such details on the display (due to averaging). This is also why you see a somewhat flattened signal when you compare fast-mode to normal in real time for the same waveform (see page 5 of this thread for more detail on this issue).

What we could do is add an optional min/max (peak) mode for display rendering of fast-mode acquisitions and this may be something to consider for a future upgrade.

Automatic ground level positioning and vertical reference offset are certainly within the reach of what we could add. A workaround for different preferences when viewing bipolar and ground referenced waveforms might be to prepare two profiles and load them as and when needed.

You make a good case for adding a vertical reference offset and so I’ll consider this for a future upgrade.

Thanks for your input.

BenF,

Thanks for the consideration. To be clear (and I think you understood correctly), I’m only interested in being able to control the vertical position of the reference waveform. I like the way you have other aspects of the reference waveform tracking the real-time signal (time/div, V/div, etc.).

Thanks again,
Dave

BenF: “Fast-mode is likely your best choice for short cycle time and high waveform detail at a T/Div of 5ms. …”

I understand the problem with the CPU. That’s very bad news; actually it is worse than I expected. If I understand the algorithm, not only do we have a problem with high frequencies at big TD, but there is an additional one: the scope does not fully work in real time, because the trigger cannot detect a regular event if the signal arrives during screen refresh or while it is still at the end/beginning of the buffer.

I had a problem with the trigger, but I thought it was because of a high trigger level. Now, though, I have figured out that a trigger event can be missed.

I think you are focused too much on the high-frequency cases. Actually, high-frequency signals work much better than I expected (I tested RS232).
But I think most people do not expect to see a 300 kHz waveform with a 1 MHz sampling rate; we definitely do expect to see a good quality signal for frequencies below 50 kHz:

  • Industrial/Automotive signals,
  • Sound,
  • Car/Auto signals,
  • Power supply signals,
  • Switches (inverters, …),
  • Very slow signals: temperature, …

I think this is the real and very big target of this scope.

For example, we have ONLY a 5 kHz sampling rate for TD = 5 ms (or 50 kHz, but then with an effectively smaller buffer), with no LP filter, with slow auto refresh and the possibility of missing some signals or trigger events.

Don’t get me wrong. I’m very, very thankful for your enormous effort and time on this, and you have made fantastic improvements compared to the original software, but I think you are not focused on the middle/low frequencies which are the most important for this kind of scope.

  1. Let’s keep the DMA approach for low TD (high frequencies). Also, maybe you could consider splitting the buffer into two parts: while one part is being filled by DMA, you can examine the other one, and then switch the DMA to the second part of the buffer (a rough sketch follows this list). I used that approach in the past for an MS-DOS sound card oscilloscope and FFT, and it worked great in real time. Of course, I’m not sure if the CPU can handle this. This part is not so important.

  2. Most important: for middle and low frequencies, let’s skip the DMA approach. I hope you can process at a sampling rate of at least 100 kHz without DMA, and keep the average, min and max values in the buffer as I suggested, with your existing approach and buffer size (but reduced for the MIN/MAX values). Also, the buffer must be circular, so you never lose a trigger and can make auto refresh fast, because you always have the last N values in the buffer.
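A rough sketch of the ping-pong idea from item 1 (hardware setup omitted; `dma_start()` and `process()` are hypothetical):

```c
#include <stdint.h>

extern void dma_start(uint16_t *dst, int n);      /* hypothetical */
extern void process(const uint16_t *src, int n);  /* hypothetical */

#define HALF 2048
static uint16_t buf[2][HALF];
static volatile int filling = 0;   /* which half the DMA writes into */

/* Called when the DMA has filled one half: swap halves, restart the
 * DMA on the other half, then examine the freshly filled one. */
void dma_complete_irq(void)
{
    int ready = filling;
    filling ^= 1;
    dma_start(buf[filling], HALF);
    process(buf[ready], HALF);     /* trigger search, display, ... */
}
```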

I think that calculating avg, min and max could be very fast (I think the CPU has a conditional-assign instruction). For each sample, you need just three additional instructions in the interrupt:

  0. Take the sample (skip calling a utility function to save time)
  1. avg_sum += sample_value
  2. if (sample_value > max) max = sample_value
  3. if (sample_value < min) min = sample_value

After calculating avg, min and max for a series of samples, store these three values in the circular buffer. This is also the time to process trigger detection and avoid that latency. When you detect a trigger, you start counting the time position, so you can find the beginning of the signal in the buffer. The buffer is used only for displaying. All (or most) other calculations can be done in real time. If the measurements make this complicated, skip that functionality. The main purpose of an oscilloscope is to see the signal waveform.
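A rough sketch of that "count from the trigger" bookkeeping in a circular buffer (`display_from()` is hypothetical):

```c
#include <stdint.h>
#include <stdbool.h>

extern void display_from(const uint8_t *ring, int trigger_pos);

#define RING_LEN 1024
static uint8_t ring[RING_LEN];
static int wr = 0, post = -1, trig_pos = 0;  /* post < 0: no trigger yet */

/* Store one reduced value; on a trigger, keep acquiring until half the
 * ring holds post-trigger data, then hand it off for display. */
void store_value(uint8_t value, bool triggered)
{
    ring[wr] = value;
    wr = (wr + 1) % RING_LEN;

    if (post < 0 && triggered) {
        trig_pos = wr;
        post = RING_LEN / 2;
    } else if (post > 0 && --post == 0) {
        display_from(ring, trig_pos);
    }
}
```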

I think that processing the interrupt, taking the sample and executing these three additional instructions for avg, min and max can be done at a sampling rate of at least 100 MHz, because they require just a “few” CPU clocks. I would not be surprised if this can be done at a sampling rate of 250 kHz or even more.

Thanks,
Dejan

BenF,

I’m sorry (and somewhat embarrassed) to just keep throwing out requests … I’m new to this wonderful device, and if nothing else, I want to document my wish list as I run across issues using it.

I’m looking at some serial data packets with relatively long delays between packets. When I try to zoom in on an area, it re-triggers on other bits in the same packet, making things look a bit unstable. It appears there is no way to force a standard trigger hold-off so that it will wait until the first bit of the next packet?

Again, this is not ultra important. I realize it’s easy for me to request new features, while you are the one who has to triage all the requests (by the looks of it, some pretty crazy :open_mouth: ) and do the real work of implementing them. I can’t imagine how much time you spend on this. Anyway, if nothing else, add it to the list for some day when you have too much time on your hands … ha ha :mrgreen: .

Thanks again,
Dave

P.S. An afterthought … it may be triggering in the middle of a subsequent packet rather than re-triggering within the same packet. Not sure how the timing in the code all works out. In any event, it would be useful to be able to add (additional) trigger delay to push it out to the start of a new packet.

Dejan,

I appreciate your enthusiasm for the Nano and your sharing of ideas, but you also display all the signs of someone a bit immature, with loads of unsubstantiated assumptions and a naive approach to technologies you don’t fully understand yet. I have some 20+ years of experience with commercial product development using related technologies and managing teams of engineers spanning multiple disciplines. You would be a breath of fresh air as a new team member in one such project group, but I also know that new members (especially talented ones) tend to get lost pursuing ever-changing ideas and never produce anything worthy of release to customers unless properly mentored. You seem to fit that profile well.

Although it may be of interest to a number of aspiring professionals, I cannot take on entertaining implementation discussions along the lines of your posts. For this to be productive, you would need a detailed overall design, a thorough analysis of constraints and an in-depth understanding of the technologies involved and how they relate to your design goals. This is too much to ask of anyone, and so my choice is to keep this at the user level (what you observe and what you want to accomplish). As users, anyone’s opinion is valued regardless of specific competences or lack thereof, and so we create an arena more accessible to the majority of Nano owners.

As for some of your assumptions that may leave users confused:

  • We will never lose triggers. Triggers are processed in real time from a circular buffer with no samples skipped or lost between buffer cycles or otherwise.
  • Using DMA is more efficient than interrupt-driven acquisition by a factor of about 100 (due to the context-switching overhead inherent in interrupt-based designs) and is the main contributing factor towards getting optimal performance from our Nanos.
  • Display refresh rate is exactly where we want it to be. For T/Div’s of 100 ms and above, the display is refreshed in real time (scan mode), and for full-screen modes we’re only limited by the display width time span. The refresh rate (for high-frequency waveforms) is intentionally limited to 100 ms so that consecutive acquisitions do not appear to blend together (this would otherwise distort the display and make it difficult to read).
  • A blink of an eye (as the expression goes) is about 300-400 ms for a healthy individual. For every single blink during acquisition, we lose track of 3 to 4 full acquisition cycles (4k+ samples each). Proper use of triggers, on the other hand, will never miss a single sample.
  • Every frequency range supported by the Nano is treated with equal importance, as the frequency-to-sample-rate ratio is maintained across the full range. We also have the choice of 1x or 10x sampling rate, as well as normal/post priority, to tune buffer usage and frequency response to match a wide variety of real-life requirements.
  • Claims of being able to process samples at a rate of 100 MHz using a microcontroller running at 72 MHz lack any touch with reality whatsoever.

Unfortunately I don’t see any workarounds other than trying run/stop until you succeed (if at all possible). Hold-off would be tricky to implement with the current firmware design, so don’t expect to see it any time soon. Perhaps you should consider a low-cost digital protocol analyzer as a supplement?