DSO firmware version 3.64

BenF:

Sorry, but I have another question about the Nano. When the firmware detects a trigger condition during an acquisition, the trigger is always shown on a sample point. What happens if the trigger condition occurs between sample points? Does the firmware interpolate that the trigger condition happened between two sample points, or does it miss the trigger condition when it does not fall on a sample point?

Thanks

An edge trigger condition is not satisfied by a single point, but by a transition across a lower and upper threshold as determined by trigger level, trigger sensitivity and trigger kind.

Let’s say we configure for a rising edge trigger and start with the waveform sitting well above trigger level. The waveform then drops below trigger level (or more precisely below the sensitivity band lower level) and now the first condition is satisfied. Eventually, the waveform will start to rise and once we reach or exceed trigger level (sensitivity band upper level), the second condition is satisfied and we have a successful trigger. That is a low to high transition crossing the sensitivity band (a rising edge). If the waveform starts out below trigger level, the same logic applies, but the first condition will be satisfied immediately.

Trigger position as marked on the DSO coincides with the sample point that satisfies the second condition.
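In outline, the two conditions could be expressed like this (an illustrative C sketch of the arm-then-fire logic described above, not the actual firmware source; names and thresholds are made up):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of a two-threshold (hysteresis) edge trigger --
 * NOT the actual BenF firmware code.  For a rising edge, the sample
 * stream must first drop below the lower threshold (arming), then
 * reach or exceed the upper threshold (triggering). */
typedef struct {
    uint16_t lower;   /* trigger level minus sensitivity band */
    uint16_t upper;   /* trigger level plus sensitivity band  */
    int      armed;   /* first condition satisfied?           */
} edge_trigger;

/* Returns 1 at the sample that completes a rising edge. */
static int rising_edge_check(edge_trigger *t, uint16_t sample)
{
    if (!t->armed) {
        if (sample < t->lower)
            t->armed = 1;          /* condition 1: below the band   */
        return 0;
    }
    if (sample >= t->upper) {      /* condition 2: at/above the band */
        t->armed = 0;
        return 1;                  /* trigger fires on this sample   */
    }
    return 0;
}
```

Note that if the waveform starts out below the lower threshold, the very first sample arms the trigger, matching the "first condition satisfied immediately" case above.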

BenF:

Thanks for that very thorough explanation. I had forgotten about trigger sensitivity because I usually keep that set to zero. Now I and others can completely understand the firmware trigger detection process.

Hi BenF,

I’m commenting on “Sat May 07, 2011 2:52 am”.

First of all, thank you a lot for your time and improvements on this. I’m using the DSO Nano occasionally for developing and servicing some machines with analog and digital signals less than 100 kHz. The small size of the DSO Nano makes it great.

I had two main problems (I’m using version 3.61):

  1. In most cases, I’m examining signals of about 50 Hz (for example, controlling thyristors) and use TD ~ 5ms. The “Normal” and “Single” trigger modes work OK. But before I use these modes, I wish to see the signal “in general” with AUTO mode. At this TD, the refresh rate is too slow (I didn’t measure it, but let’s say 0.2 - 0.5 sec), so I cannot see all signal changes and may miss something.

  2. There can be occasional high frequency components on top of the low frequency changes. It is hard to see these high frequency changes (at least you cannot set a trigger for them).

My main suggestions are:

  1. Increase the refresh rate in auto mode for bigger TD, to be able to see what is going on with the signal.

  2. Always sample at 1MHz. Unfortunately, you didn’t understand my idea. Sorry for my bad English.

For example, let’s use TD=5ms. Currently, in Normal sampling mode you use fs=5kHz sampling and fill the buffer. You have 25 values per time div. Because fs is only 5kHz, faster frequency changes (theoretically >= 2.5 kHz) look very strange (or can be missed) and are shifted down to low frequency. This is the normal theoretical situation: if you sample a 6kHz sine signal at 5kHz you will get 1kHz as the result (I think?). Use my example in the attachment and change the input frequency in field B2 to: 100, 200, …, 1000, 2000, …, 6000 (the same as 1000).

My idea is the following. For ONE pixel on screen (5ms / 25 = 200us), capture 200us / (1/1MHz) = 200 samples. Calculate the Average, Min and Max values for this set of 200 samples (you need just three variables), put all three values in the buffer and plot them on screen. You can use the Average value as before to draw lines, while Min and Max can be drawn as a vertical line with lower color intensity, or just plotted as two dots.

That way we can DETECT and see the amplitude of high frequency changes even on low frequency signals. For example, if there is a very short impulse with dt = 5us, we can miss it or display it only occasionally. With min/max values, we will always see this signal because we sample with dt = 1us (we will catch it 5 times).
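Dejan’s per-pixel reduction could be sketched in C like this (names and sizes are illustrative, not firmware code; the point is that one pass over the raw samples yields all three display values):

```c
#include <assert.h>
#include <stdint.h>

/* Min/max/average decimation: reduce n raw 8-bit samples to the
 * three values plotted for one screen pixel.  A narrow glitch that
 * a plain average would smear out survives in the min/max pair. */
typedef struct { uint8_t avg, min, max; } pixel_bin;

static pixel_bin decimate(const uint8_t *samples, int n)
{
    pixel_bin b = { 0, 255, 0 };
    uint32_t sum = 0;
    for (int i = 0; i < n; i++) {
        uint8_t v = samples[i];
        sum += v;
        if (v < b.min) b.min = v;   /* narrow glitches survive here */
        if (v > b.max) b.max = v;
    }
    b.avg = (uint8_t)(sum / n);
    return b;
}
```

For example, a single 200-count spike in an otherwise flat 10-count stretch barely moves the average but is fully preserved in `max`.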

For the trigger modes, there needs to be an option to trigger on only the Average value, or on all three values. Or just use the Average value to simplify this.

This will reduce the effective buffer depth by a factor of 2 or 3, depending on whether you store Min and Max with 4 or 8 bits. Probably it is best just to store each value with 8 bits, to simplify things and keep a true picture of the signal, instead of the idea I suggested in my second email (Min and Max as 4-bit non-linear differences from the Average).

There could be an option to enable or disable this mode.

Related to the Normal / Fast sampling modes, I get a strange picture and the levels of these signals are different. I expected that the level of a signal with high frequency content could be higher (if it catches some high freq value), but I got the opposite (I think, not sure now).

Anyway, with an increased refresh rate and sampling of additional Min and Max values, we would not need fast mode.

Thanks and regards,
Dejan
sine at 5kHz sampling.zip (38.2 KB)

For the Open Office/Excel example: field B1, not B2, sorry.

Benf,

Just a few additional comments and ideas.

We need Average, Min and Max values for each time step t. You can use the Average values as a replacement for your old samples: for displaying the waveform, triggers, saving, … The Min and Max values can be used only for displaying the high freq components with lower color intensity.

The Average value is by nature a low pass filter, which is good and eliminates high freq noise. The Min and Max display these high freq values. With the current values we have these mixed together and cannot see either of them well.

My idea for calculating the Avg is the simplest possible, and it is better than nothing. Theoretically, it is possible to make a higher quality low pass filter (for this resampling purpose) using the formula c0·v0 + c1·v1 + c2·v2 + … + cn·vn, but this requires memory for storing the ci. I suggest using my simplest possible approach.
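For reference, the weighted filter Dejan alludes to is a standard FIR low-pass, and his plain average is just the special case where every coefficient equals 1/(n+1). A minimal sketch (the coefficients below are illustrative, not a designed filter):

```c
#include <assert.h>

/* FIR filter: y = c0*v0 + c1*v1 + ... + c(n-1)*v(n-1).
 * With all c[i] equal to 1/n this degenerates to the plain
 * moving average Dejan proposes for decimation. */
static double fir(const double *c, const double *v, int n)
{
    double y = 0.0;
    for (int i = 0; i < n; i++)
        y += c[i] * v[i];       /* weighted sum over last n samples */
    return y;
}
```

The memory cost Dejan mentions is real: a designed FIR needs a coefficient table and a history of the last n samples, while the running average needs only one accumulator.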

Another idea: in the Normal and Single sync modes, we cannot see the signal until it is caught. It would be possible to always display the current signal, while the triggered (held) signal could be displayed in another color, for example red (light red for the Avg value, dark red for Min/Max). This way we could adjust the trigger position and watch the signal while waiting for the trigger to catch.

The trigger mode is an important option, so it makes sense for it to be a separate menu, similar to TD and VD.

The Auto mode could be split into two modes: the old Auto, and a new Auto which displays a new waveform immediately if the trigger is not caught. It could be used in Normal and Single mode too, showing the current “green” signal while the held signal is red.

Some links:
en.wikipedia.org/wiki/Circular_buffer
en.wikipedia.org/wiki/Xiaolin_Wu … _algorithm
en.wikipedia.org/wiki/Downsampling

Thanks,
Dejan

First of all, THANKS! This software is great!

In the FIT (auto scale) mode, it would be nice if it automatically adjusted the ground position so that the waveform is centered vertically. I tend to keep the ground position centered in the middle of the screen for bipolar signals (signals that swing both + and -). Then when I go to a digital signal (the output on the DSO Nano, for instance) the displayed signal is asymmetric and compressed toward the top of the screen until I go and adjust the ground position. Quite often the peaks are off the top of the screen as well when ground is not near the bottom.

It’s not a big deal, but something that would save me from having to constantly re-adjust the ground position even when in FIT mode. I’d appreciate it if you could add it to the list of improvements.

Again … thanks for all the hard work.

Ben,

Just my two cents on allowing the reference waveform to be positioned independently of the real-time waveform. When debugging serial UARTs, I often send out a reference waveform, something like “UUUU” (which is 0x55, 0x55, 0x55, 0x55), at the baud rate I’m using. This gives me a 01010101… pattern showing the bit times. On other scopes, I will position this at the top of the screen as a reference so I can compare it to the real-time serial data. It makes it very easy to count long strings of 0’s and 1’s when you have an alternating bit pattern at the same baud rate to use as a reference.
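For the curious, here is why “U” works so well as a timing reference. Under standard 8N1 UART framing (an illustrative sketch of the framing rules, not scope code), 0x55 sent LSB-first between a start bit (0) and a stop bit (1) puts a perfectly alternating 0101010101 pattern on the wire:

```c
#include <assert.h>

/* Build the 10-bit 8N1 UART frame for one byte: start bit (0),
 * eight data bits LSB first, stop bit (1).  For 'U' = 0x55 =
 * 0b01010101 the result alternates on every bit, i.e. a square
 * wave at half the baud rate -- ideal for marking bit times. */
static int uart_frame_bits(unsigned char c, int *bits /* [10] */)
{
    bits[0] = 0;                         /* start bit        */
    for (int i = 0; i < 8; i++)
        bits[1 + i] = (c >> i) & 1;      /* data, LSB first  */
    bits[9] = 1;                         /* stop bit         */
    return 10;
}
```

Sent back-to-back as “UUUU”, consecutive frames keep alternating across the stop/start boundary as well, so the pattern is continuous.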

This can still be done if they overlap, but it would be nice in this case (and in general for all digital signals) to have the ability to place one above or below the other, as you would on a typical scope with digital channels or a logic analyzer.

It may be more trouble than it’s worth, or clutter the menu, etc. … your call. I read earlier that you like real-world examples/needs to be articulated, so I just thought I would weigh in on the topic with one that I run into quite a bit.

Best,
Dave

Fast-mode is likely your best choice for short cycle time and high waveform detail at a T/Div of 5ms. Cycle time will then be close to 100ms for NORM and slightly above 100ms for AUTO. This mode acquires a 10x oversampled signal and uses simple averaging for display rendering.

The Nano does not have the capacity to do significant processing on real-time waveform data while high-speed data acquisition is ongoing. Part of the reason is that DMA (used for acquisition) steals bus cycles and prevents concurrent data access. Processing is limited to searching for a trigger, and so display data is scaled and refreshed directly from raw buffer data between cycles. If you stop capture in fast-mode and use the zoom capability, you should see progressively more waveform detail. Measurements and XML export data are also obtained directly from the acquisition buffer in order to preserve full ADC range. You should be able to trigger on waveform details in fast-mode even though you may not see such details on the display (due to averaging). This is also why you see a somewhat flattened signal when you compare fast-mode to normal in real-time for the same waveform (see page 5 of this thread for more detail on this issue).

What we could do is add an optional min/max (peak) mode for display rendering of fast-mode acquisitions and this may be something to consider for a future upgrade.

Automatic ground level positioning and vertical reference offset are certainly within the reach of what we could add. A workaround for different preferences when viewing bipolar and ground referenced waveforms might be to prepare two profiles and load them as and when needed.

You make a good case for adding a vertical reference offset and so I’ll consider this for a future upgrade.

Thanks for your input.

BenF,

Thanks for the consideration. To be clear (and I think you understood correctly), I’m only interested in being able to control the vertical position of the reference waveform. I like the way you have other aspects of the reference waveform tracking the real-time signal (time/div, V/div, etc.).

Thanks again,
Dave

BenF: “Fast-mode is likely your best choice for short cycle time and high waveform detail at a T/Div of 5ms. …”

I understand the problem with the CPU. That’s very bad news; actually it is worse than I expected. If I understand the algorithm, not only do we have a problem with high frequencies at big TD, but there is an additional one: the scope does not fully work in real time, because a trigger event cannot be detected if the signal arrives during screen refresh or while it is still at the end/beginning of the buffer.

I had a problem with the trigger, but I thought it was because of a high trigger level. Now, though, I figure that a trigger event can be missed.

I think you are focused too much on the high frequency cases. Actually, high frequency signals work much better than expected (I tested RS232).
But I think most people do not expect to see a 300kHz waveform with a 1 MHz sampling rate; we definitely expect to see a good quality signal for frequencies less than 50kHz:

  • Industrial/Automotive signals,
  • Sound,
  • Car/Auto signals,
  • Power supply signals,
  • Switches (inverters, …)
  • Very slow signals: temperature, …
    I think this is the real and very big target audience for this scope.

For example, we have ONLY a 5 kHz (or 50 kHz, but then with a virtually smaller buffer) sampling rate for TD=5ms, without an LP filter, with slow auto refresh and the possibility of missing some signals or trigger events.

Don’t get me wrong. I’m very, very thankful for your enormous effort and time on this, and you have made fantastic improvements compared to the original software, but I think you are not focused on the middle/low frequencies which are the most important for this kind of scope.

  1. Let’s keep the DMA approach for low TD (high frequencies). Also, maybe you could consider splitting the buffer into two parts: while one part is being filled by DMA, you can examine the other, and then switch the DMA to the second part of the buffer. I used that approach in the past for an MS-DOS sound card oscilloscope and FFT and it worked great in real time. Of course, I’m not sure the CPU can cover this. This part is not so important.

  2. Most important: for middle and low frequencies, let’s skip the DMA approach. I hope you can process at a sampling rate of at least 100 kHz without DMA and keep the Average, Min and Max values in the buffer as I suggested, with your existing approach and buffer size (but reduced for the Min/Max values). Also, the buffer must be circular, so you will never lose a trigger and can make auto refresh fast, because you always have the last n values in the buffer.
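The circular (ring) buffer Dejan proposes, and links to earlier, could be sketched like this (size and names are illustrative only, not the Nano’s actual acquisition buffer):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal ring buffer: the newest sample overwrites the oldest, so
 * the last RING_SIZE samples are always available for display or
 * trigger search without ever stopping the writer. */
#define RING_SIZE 8
typedef struct {
    uint8_t  data[RING_SIZE];
    unsigned head;    /* index of next write                        */
    unsigned count;   /* number of valid samples (<= RING_SIZE)     */
} ring;

static void ring_push(ring *r, uint8_t v)
{
    r->data[r->head] = v;
    r->head = (r->head + 1) % RING_SIZE;
    if (r->count < RING_SIZE) r->count++;
}

/* Read the i-th most recent sample (0 = newest, i < count). */
static uint8_t ring_recent(const ring *r, unsigned i)
{
    return r->data[(r->head + RING_SIZE - 1 - i) % RING_SIZE];
}
```

This is what makes the "always have the last n values" property hold: old data is discarded implicitly by the wrap-around rather than by an explicit buffer swap.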

I think that calculating the Avg, Min and Max could be very fast (I think the CPU has a conditional assignment instruction). Per sample, you need just three additional instructions in the interrupt:
[0]. Take the sample (skip calling a utility function to save time)

  1. avg_sum += sample_value
  2. if sample_value > max then max := sample_value
  3. if sample_value < min then min := sample_value

After calculating the avg, min and max for a series of samples, store these three values in the circular buffer. That is the time to process trigger detection and avoid the latency. When a trigger is detected, you start counting the time position, so you can find the beginning of the signal in the buffer. The buffer is used only for displaying. All (or most) other calculations can be done in real time. If the measurements make this complicated, skip that functionality. The main purpose of an oscilloscope is to see the signal waveform.

I think that processing the interrupt, taking the sample and these three additional instructions for avg, min and max can be done at a sampling rate of at least 100 MHz because they require just a “few” CPU clocks. I would not be surprised if this can be done at 250 kHz or an even higher sampling rate.

Thanks,
Dejan

BenF,

I’m sorry (and somewhat embarrassed) to just keep throwing out requests … I’m new to this wonderful device, and if nothing else, I want to document my wish list as I run across issues using it.

I’m looking at some serial data packets with relatively long delays between packets. When I try to zoom in on an area, it re-triggers on other bits in the same packet making things look a bit unstable. It appears there is no way to force a standard trigger hold-off so that it will wait until the first bit of the next packet?

Again, this is not ultra important. I realize it’s easy for me to request new features, while you are the one who has to triage all the requests (by the looks of it, some pretty crazy ones :open_mouth: ) and do the real work of implementing them. I can’t imagine how much time you spend on this. Anyways, if nothing else, add it to the list for some day when you have too much time on your hands … ha ha :mrgreen: .

Thanks again,
Dave

P.S. An afterthought … it may be triggering in the middle of a subsequent packet rather than re-triggering within the same packet. I’m not sure how all that timing in the code works out. In any event, it would be useful to add (additional) trigger delay to push it out to the start of a new packet.

Dejan,

I appreciate your enthusiasm for the Nano and your sharing of ideas, but you also display all the signs of someone a bit immature, with loads of unsubstantiated assumptions and a naive approach to technologies you don’t fully understand yet. I have some 20+ years of experience with commercial product development using related technologies and managing teams of engineers spanning multiple disciplines. You would be a breath of fresh air as a new team member in one such project group, but I also know that new members (especially talented ones) tend to get lost pursuing ever-changing ideas and never produce anything worthy of release to customers unless properly mentored. You seem to fit that profile well.

Although it may be of interest to a number of aspiring professionals, I cannot take on entertaining implementation discussions along the lines of your posts. For this to be productive, you would need a detailed overall design, a thorough analysis of constraints and an in-depth understanding of the technologies involved and how they relate to your design goals. This is too much to ask of anyone, and so my choice is to keep this at the user level (what you observe and what you want to accomplish). As users, anyone’s opinion is valued regardless of specific competences or lack thereof, and so we create an arena more accessible to the majority of Nano owners.

As for some of your assumptions that may leave users confused:

  • We will never lose triggers. Triggers are processed in real-time from a circular buffer with no samples skipped or lost between buffer cycles or otherwise.
  • Using DMA is more efficient than interrupt-driven acquisition by a factor of about 100 (due to the context switching overhead inherent in interrupt based designs) and is the main contributing factor towards getting optimal performance from our Nanos.
  • Display refresh rate is exactly where we want it to be. For T/Divs of 100ms and above, the display is refreshed in real-time (scan mode) and for full screen modes we’re only limited by the display width time span. Refresh rate (for high frequency waveforms) is intentionally limited to 100ms so that consecutive acquisitions do not appear to blend together (this would otherwise distort the display and make it difficult to read).
  • A blink of an eye (as the expression goes) is about 300-400ms for a healthy individual. For every single blink during acquisitions, we will lose track of 3 to 4 full acquisition cycles (4k+ samples each). Proper use of triggers, on the other hand, will never miss a single sample.
  • Every frequency range supported by the Nano is treated with equal importance, as the frequency to sample rate ratio is maintained across the full range. We also have the choice of 1x or 10x sampling rate as well as normal/post priority to tune buffer usage and frequency response to match a wide variety of real-life requirements.
  • Claims of being able to process samples at a rate of 100 MHz using a microcontroller running at 72 MHz lack any touch with reality whatsoever.

Unfortunately I don’t see any workarounds other than trying run/stop until you succeed (if at all possible). Hold-off will be tricky to implement with the current firmware design, so don’t expect to see this any time soon. Perhaps you should consider a low cost digital protocol analyzer as a supplement?

BenF,

if you don’t like my suggestions or have no time for them I understand that, but you don’t need to shout at me. Your profiling is mostly wrong: I have been programming for about 25 years (the last 12 professionally), and I have developed the automation, electronics and software for more than 30 machines which customers have been using for the last 10 years (that’s my second, additional job).
Related to the DSO Nano, you are right: I don’t know the details of this, and that is obvious. But I can make suggestions. I will stop there, though. I have already spent too much time on this.

I’m not planning to join any group, simply because I do not have additional time for this (as I told you, I’m already working two jobs).

Related to the technical stuff, my assumption that the DSO Nano can lose a trigger is based on your previous email: DMA fills the buffer, then the software displays and examines the buffer. If that is true, you can lose a trigger. If two buffers are used and the software can examine the second buffer in real time, then the trigger is NOT lost. Or there is some third thing I’m not aware of. Anyway, even losing a trigger is not so bad if that is what is going on, but users need to be aware of it. If that is not the case, then please accept my apologies. My assumption was based on your explanation.

The 100 MHz is a typo of course. I meant 100 kHz.

As I told you, you did a great job with this: the display/visual effect is nice and high frequencies work fine. But the 5ms per div time scale could be better. Fast mode is not a full solution because the high frequencies are lost (which is generally good, but we need the high freq info too). Adding Min and Max even to this fast mode would make it much more useful for being aware of these high freq signals, but it is not a full solution if the CPU can scan faster.

Did you test the maximum sampling rate without DMA?

If it is better than even 50 kHz, it is worth using my approach. The effect of Min and Max is almost the same as having a big buffer that stores all samples at the high frequency and displays all of them at a high TD.

Here are some concrete problems I had with this 5ms time scale:

  1. On 50Hz AC there is high freq noise at 10kHz from a servo motor switcher. I cannot see it well.
  2. I’m controlling thyristors: there are cases where the trigger does not catch the thyristor “current” waveform. But maybe that was from a high trigger level or sensitivity. The thyristor control runs at 100Hz, while the input signal is very short, ~0.1 ms. This is another situation where I need to see both the low and the “high” frequency. For this case I use a smaller TD, but then the buffer cannot catch all the impulses, even though it could with TD=5ms (but then the impulses are not shown well).
  3. I tried fast mode, but because there are no Min and Max values, it is not so useful. And the buffer is small.
    I guess using my suggestion could give much better results for these cases, provided the CPU can sample at a reasonably high frequency.

Anyway, this Nano and your changes are useful and definitely better than nothing. But I think they could be better still. If you do not have more time, I understand.

Dejan

Excuse me for butting in here, but I feel that I am qualified to enter into this discussion because I once did the same thing as you are doing in your posts. I was also trying to explain “how”, when all BenF wants is the “why”. BenF is a reasonable man and only wants to respond accurately to feedback. But when you spend excessive time explaining “how”, then that wastes BenF’s time sorting through it to find the “why”. As you have already stated, you and I do not know the innards as well as BenF, so we should stick to the “why”. Like I say, I have already been guilty of this too, so your options are to get upset about this or to learn from the experience. The choice is entirely yours.

As for the missed triggers, no standard DSO with software trigger detection can see triggers outside the acquisition time, so your argument is moot. You appear to want a second simultaneous acquisition while the Nano is processing the current acquisition, and that cannot be done with the Nano hardware limitations.

When the Nano capabilities are exceeded, then your next option is to use a scope that costs more than $89. The Nano is a general purpose scope only, and has no exotic trigger and/or display capabilities.

It sounds as if you need a scope that can stream capture data to a file so you can examine it for missed waveform actions. You should move to something like a Pico 2205, whose educational version can be purchased for about $250. It allows you to stream to a PC file, and then you can scroll through this file and expand your view of the captured data as needed.

I understand. I was thinking it would be “easy” to ignore additional triggers for a specified amount of time, but I can surmise reasons why that might not be the case. I’ll take your word for it. I can get by with single shots, or if needed, I have access to plenty of more powerful tools at work with built in protocol analysis, etc. I just like working at home in my lazyboy with a netbook and a couple of USB powered boards …

If I run into other issues that might have a relatively easy fix, I’ll chime back in. I promise I won’t ask you to add a second channel, violate Nyquist or calculate a real time FFT. I would like X-Y mode though :smiley: .

BTW, what toolchain do you use? I’m EE faculty at a state university and I have a pretty strong background in hardware. I’ve done some embedded C and can pump out assembly code and bit-bang with the best of them, but I consider myself a real “hack” when it comes to high quality, structured software. I wouldn’t mind playing around with the code, more for fun and the learning experience than anything else (I’m actually more likely to break the code than improve anything).

Thanks again,
Dave

It’s not my intention to push you away, but if we keep the discussions at a higher level, it is likely to be more productive (from my point of view at least).

A single ADC interrupt may require upwards of 70 cycles just for saving and restoring context. At a sampling rate of 1Ms/s we would then consume 100% of CPU time just for ADC interrupt overhead. Also we have to account for all other interrupts. This includes a millisecond interval timer used for general time keeping, scanning input button key states (so that the Nano appears responsive at all times) and a few other housekeeping tasks. Connected to USB it gets a lot worse as we have to service inbound interrupt requests for the SD card file system. Service time for all interrupts combined will determine the minimum ADC cycle time. DMA is the performance enabler here, not the problem.
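BenF’s arithmetic checks out. A back-of-the-envelope sketch (the 70-cycle figure and the 72 MHz clock are from this thread; the helper function itself is illustrative):

```c
#include <assert.h>

/* CPU load from interrupt overhead alone: cycles-per-interrupt times
 * interrupts-per-second, as a percentage of the CPU clock.  At 70
 * cycles per ADC interrupt and 1 Ms/s on a 72 MHz core, overhead
 * alone consumes essentially the whole CPU. */
static unsigned cpu_load_percent(unsigned cycles_per_irq,
                                 unsigned irq_rate_hz,
                                 unsigned cpu_hz)
{
    /* 64-bit intermediate avoids overflow at these magnitudes */
    return (unsigned)((unsigned long long)cycles_per_irq
                      * irq_rate_hz * 100u / cpu_hz);
}
```

That is roughly 97% of the core spent on context save/restore alone, before a single instruction of useful sample processing runs, which is the case for DMA in a nutshell.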

Using DMA and a 4k buffer size provides a window for real-time concurrent access to the acquisition buffer, and we need this in order not to lose samples.

At 5ms we will sample at 50kHz and so barely capture the 10kHz ripple (5 samples per cycle). Step down one TD however and you should be ok.

In this case I think you need to go back and check your settings (fast mode, zero sensitivity and correct trigger level) as this should be within the capability of the Nano/firmware.

You cannot have a large buffer AND a short cycle time at the same time, so this is a trade-off. The min/max samples however will be in the acquisition buffer and so can be targeted by triggers.

If we add a hold-off feature, the expectation (mine at least) is for precise interval timing between trigger cycles at sub ms level. Time between cycles is used for display refresh, measurement calculations, general house-keeping, checking battery level, processing user input, and periodic ground level calibration and so is not easily controlled with the current design (a special purpose trigger is probably easier to implement).

Assembly and bit flipping is the right background for this so why not get your hands dirty with the V3.1 source available for download? It is based on the IAR toolchain for ARM Cortex with its C/C++ compiler and assembler.

I may be squeezed time wise to work on this, but it shouldn’t stop anyone from wishing or requesting.