DSO firmware version 3.64

First of all, THANKS! This software is great!

In the FIT (auto scale) mode, it would be nice if it automatically adjusted the ground position so that the waveform is centered vertically. I tend to keep the ground position centered in the middle of the screen for bipolar signals (signals that swing both + and -). Then when I switch to a digital signal (the output on the DSO Nano, for instance) the displayed signal is asymmetric and compressed toward the top of the screen until I go and adjust the ground position. Quite often the peaks are off the top of the screen as well when ground is not near the bottom.

It's not a big deal, but it's something that would save me from having to constantly re-adjust the ground position even when in FIT mode. I'd appreciate it if you could add it to the list of improvements.

Again … thanks for all the hard work.


Just my two cents on allowing the reference waveform to be positioned independently of the real-time waveform. When debugging serial UARTs, I often send out a reference waveform such as "UUUU" (which is 0x55,0x55,0x55,0x55) at the baud rate I'm using. This gives me a 01010101… pattern showing the bit times. On other scopes, I will position this at the top of the screen as a reference, and then I can compare it to the real-time serial data. It makes it very easy to count long strings of 0's and 1's when you have an alternating bit pattern at the same baud rate to use as a reference.

This can still be done if they overlap, but it would be nice in this case (and in general for all digital signals) to have the ability to place one above or below the other, like you see on a typical scope with digital channels or a logic analyzer.

It may be more trouble than it's worth, or clutter the menu, etc. … your call. I read earlier that you like real-world examples/needs to be articulated, so I thought I would weigh in on the topic with one that I run into quite a bit.


Fast-mode is likely your best choice for short cycle time and high waveform detail at a T/Div of 5ms. Cycle time will then be close to 100ms for NORM and slightly above 100ms for AUTO. This mode acquires a 10x oversampled signal and uses simple averaging for display rendering.

The Nano does not have the capacity to do significant processing on real-time waveform data while high-speed data acquisition is ongoing. Part of the reason is that DMA (used for acquisition) steals bus cycles and prevents concurrent data access. Processing is limited to searching for a trigger, and so display data is scaled and refreshed directly from raw buffer data between cycles. If you stop capture in fast-mode and use the zoom capability, you should see progressively more waveform detail. Measurements and XML export data are also obtained directly from the acquisition buffer in order to preserve full ADC range. You should be able to trigger on waveform details in fast-mode even though you may not see such details on the display (due to averaging). This is also why you see a somewhat flattened signal when you compare fast-mode to normal in real-time for the same waveform (see page 5 of this thread for more detail on this issue).

What we could do is add an optional min/max (peak) mode for display rendering of fast-mode acquisitions and this may be something to consider for a future upgrade.

Automatic ground level positioning and vertical reference offset are certainly within the reach of what we could add. A workaround for different preferences when viewing bipolar and ground referenced waveforms might be to prepare two profiles and load them as and when needed.

You make a good case for adding a vertical reference offset and so I’ll consider this for a future upgrade.

Thanks for your input.


Thanks for the consideration. To be clear (and I think you understood correctly), I'm only interested in being able to control the vertical position of the reference waveform. I like the way you have the other aspects of the reference waveform tracking the real-time signal (time/div, V/div, etc.).

Thanks again,

BenF: “Fast-mode is likely your best choice for short cycle time and high waveform detail at a T/Div of 5ms. …”

I understand the problem with the CPU. That's very bad news; actually it is worse than I expected. If I understand the algorithm, not only do we have a problem with high frequencies at a big T/Div, but there is an additional one: the scope does not work fully in real time, because the trigger cannot detect a regular event if the signal arrives during screen refresh or while it is still at the end/beginning of the buffer.

I had a problem with the trigger, but I thought it was because of a high trigger level. But now I have figured out that a trigger event can be missed.

I think you are focused too much on the high-frequency cases. Actually, high-frequency signals work much better than I expected (I tested RS232).
But I think most people do not expect to see a 300kHz waveform with a 1 MHz sampling rate; we do definitely expect to see a good, quality signal for frequencies below 50kHz:

  • Industrial/Automotive signals,
  • Sound,
  • Car/Auto signals,
  • Power supply signals,
  • Switches (inverters, …)
  • Very slow signals: temperature, …
    I think this is the real and very big target for this scope.

For example, we have ONLY a 5 kHz (or 50kHz, but with a virtually small buffer size) sampling rate for T/Div=5ms, without an LP filter, with slow auto refresh and the possibility of missing some signals or trigger events.

Don't get me wrong. I'm very, very thankful for your enormous effort and time on this, and you have made fantastic improvements compared to the original software, but I think you are not focused on the middle/low frequencies, which are the most important for this kind of scope.

  1. Let's keep the DMA approach for low T/Div (high frequencies). Also, maybe you could consider splitting the buffer into two parts: while one part is filled by DMA, you can examine the other one, and then switch the DMA to the second part of the buffer. I used that approach in the past for an MS-DOS sound card oscilloscope and FFT, and it worked great in real time. Of course, I'm not sure whether the CPU can handle this. This part is not so important.

  2. Most important: for middle and low frequencies, let's skip the DMA approach. I hope you can process at least a 100kHz sampling rate without DMA and keep the average, Min and Max values in a buffer as I suggested, with your existing approach and buffer size (but reduced for the MIN/MAX values). Also, the buffer must be circular, so you will never lose a trigger and can make auto refresh fast, because you always have the last n values in the buffer.

I think that calculating Avg, Min and Max could be very fast (I think the CPU has a conditional-assign instruction). Per sample, you need just three additional instructions in the interrupt:
[0]. Take the sample (skip calling a utility function, to save time)

  1. avg_sum += sample_value
  2. if sample_value > max then max := sample_value
  3. if sample_value < min then min := sample_value

After calculating avg, min and max for a series of samples, store these three values in the circular buffer. This is the time to process trigger detection and avoid that latency. When you detect a trigger, you start counting the time position, so you can find the beginning of the signal in the buffer. The buffer is used only for displaying. All (or most) other calculations can be done in real time. If measurements make this complicated, skip that functionality. The main purpose of an oscilloscope is to see the signal waveform.

I think that processing the interrupt, taking the sample and these three additional instructions for avg, min and max can be done at a sampling rate of at least 100 MHz, because they require just a "few" CPU clocks. I will not be surprised if this can be done at a 250 kHz or even higher sampling rate.
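In C, the three per-sample steps sketched above might look like this (a sketch only; all names are illustrative and not from the BenF firmware):

```c
#include <stdint.h>

/* Running accumulators for one (avg, min, max) tuple.
 * Names are hypothetical, not from the actual firmware. */
static uint32_t avg_sum;
static uint16_t min_val = 0xFFFF;
static uint16_t max_val = 0;

/* Called once per ADC sample, e.g. from the ADC interrupt. */
static inline void accumulate(uint16_t sample)
{
    avg_sum += sample;                      /* step 1: running sum for avg */
    if (sample > max_val) max_val = sample; /* step 2: track peak high     */
    if (sample < min_val) min_val = sample; /* step 3: track peak low      */
}

typedef struct { uint16_t avg, min, max; } tuple_t;

/* After n samples, emit one tuple for the circular buffer and reset. */
static tuple_t flush(uint32_t n)
{
    tuple_t t = { (uint16_t)(avg_sum / n), min_val, max_val };
    avg_sum = 0;
    min_val = 0xFFFF;
    max_val = 0;
    return t;
}
```

For example, feeding the samples 5, 6, 7, 6 and flushing with n = 4 yields the tuple (6, 5, 7).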



I’m sorry (and somewhat embarrassed) to just keep throwing out requests … I’m new to this wonderful device, and if nothing else, I want to document my wish list as I run across issues using it.

I’m looking at some serial data packets with relatively long delays between packets. When I try to zoom in on an area, it re-triggers on other bits in the same packet making things look a bit unstable. It appears there is no way to force a standard trigger hold-off so that it will wait until the first bit of the next packet?

Again, this is not ultra important. I realize it's easy for me to request new features, while you are the one that has to triage all the requests (by the looks of it, some pretty crazy :open_mouth: ) and do the real work of implementing them. I can't imagine how much time you spend on this. Anyways, if nothing else, add it to the list for some day when you have too much time on your hands … ha ha :mrgreen: .

Thanks again,

P.S. An afterthought … it may be triggering in the middle of a subsequent packet rather than re-triggering within the same packet. I'm not sure how the timing in the code all works out. In any event, it would be useful to add (additional) trigger delay to push it out to the start of a new packet.


I appreciate your enthusiasm for the Nano and sharing of ideas, but you also display all the signs of someone a bit immature, with loads of unsubstantiated assumptions and a naive approach to technologies you don't fully understand yet. I have some 20+ years of experience with commercial product development using related technologies and managing teams of engineers spanning multiple disciplines. You would be a breath of fresh air as a new team member in one such project group, but I also know that new members (especially talented ones) tend to be lost in pursuing ever-changing ideas and never produce anything worthy of release to customers unless properly mentored. You seem to fit that profile well.

Although it may be of interest to a number of aspiring professionals, I cannot entertain implementation discussions along the lines of your posts. For this to be productive, you need a detailed overall design, a thorough analysis of constraints and an in-depth understanding of the technologies involved and how they relate to your design goals. This is too much to ask from anyone, and so my choice is to keep this at the user level (what you observe and what you want to accomplish). As users, anyone's opinion is valued regardless of specific competences or lack thereof, and so we create an arena more accessible to the majority of Nano owners.

As for some of your assumptions that may leave users confused:

  • We will never lose triggers. Triggers are processed in real-time from a circular buffer with no samples skipped or lost between buffer cycles or otherwise.
  • Using DMA is more efficient than interrupt-driven acquisition by a factor of about 100 (due to context switching overhead inherent in interrupt-based designs) and is the main contributing factor towards getting optimal performance from our Nanos.
  • Display refresh rate is exactly where we want it to be. For T/Div’s 100ms and above, display is refreshed in real-time (scan mode) and for full screen modes we’re only limited by the display width time span. Refresh rate (for high frequency waveforms) is intentionally limited to 100ms so that consecutive acquisitions do not appear as if they blend together (this would otherwise distort the display and make it difficult to read).
  • A blink of an eye (as the expression goes) is about 300-400ms for a healthy individual. For every single blink during acquisitions, we will lose track of 3 to 4 full acquisition cycles (4k+ samples each). Proper use of triggers, on the other hand, will never miss a single sample.
  • Every frequency range supported by the Nano is treated with equal importance as frequency to sample rate is maintained across the full range. We also have the choice of 1x or 10x sampling rate as well as normal/post priority to tune buffer usage and frequency response to match a wide variety of real-life requirements.
  • Claims of being able to process samples at a rate of 100MHz using a microcontroller running at 72MHz lack any touch with reality whatsoever.

Unfortunately I don’t see any workarounds other than trying run/stop until you succeed (if at all possible). Hold-off will be tricky to implement with the current firmware design, so don’t expect to see this any time soon. Perhaps you should consider a low cost digital protocol analyzer as a supplement?


If you don't like my suggestions or have no time for them, I understand that, but you don't need to shut me down. Your profiling is mostly wrong: I have been programming for about 25 years (the last 12 professionally); I have developed automation, electronics and software for more than 30 machines which customers have been using over the last 10 years (that's my second, additional job).
Regarding the DSO Nano, you are right: I don't know the details, and that is obvious. But I can make suggestions. I will stop there, though. I have already spent too much time on this.

I'm not planning to join any group, simply because I do not have additional time for this (as I told you, I'm already working two jobs).

Regarding the technical stuff, my assumption that the DSO Nano can lose a trigger is based on your previous explanation: DMA fills the buffer, then software displays and examines the buffer. If that is true, it can lose a trigger. If two buffers are used and the software can examine the second buffer in real time, then the trigger is NOT lost. Or there is some third option which I'm not aware of. Anyway, even losing a trigger is not so bad if that is what is going on, but users need to be aware of it. If that is not the case, then accept my apologies. My assumption is based on your explanation.

100MHz was a typo, of course. I meant 100kHz.

As I told you, you did a great job on this: the display/visual effect is nice and high frequencies work fine. But the 5ms per div time scale could be better. Fast mode is not a full solution because high frequencies are lost (which is generally good, but we need the high-frequency info too). Adding Min and Max even in this fast mode would make it much more useful for seeing these high-frequency signals, but it is still not a full solution if the CPU can scan faster.

Did you test the maximum sampling rate without DMA?

If this is better than even 50 kHz, it is worth using my approach. The effect of MIN and MAX is almost the same as having a big buffer that stores all samples at high frequency and displays all of them at a big T/Div.

Here are some concrete problems I had with this 5ms time scale:

  1. On 50Hz AC there is high-frequency noise at 10kHz from a servo motor switcher. I cannot see it well.
  2. I'm controlling thyristors: there are some cases where the trigger does not catch the thyristor "current" waveform. But maybe that was from a high trigger level or sensitivity. The thyristor control runs at 100Hz, while the input signal is very short, ~0.1 ms. This is another situation where I need to see both low and "high" frequencies. For this case I use a smaller T/Div, but then the buffer cannot catch all the impulses, even though it could at T/Div=5ms (but then the impulses are not shown well).
  3. I tried fast mode, but because there is no Min/Max, it is not so useful. And the buffer is small.
    I guess my suggestion could give much better results for these cases, provided the CPU can sample at a reasonably high frequency.

Anyway, this Nano and your changes are useful and definitely better than nothing. But I think they could be better still. If you do not have more time, I understand that.


Excuse me for butting in here, but I feel that I am qualified to enter into this discussion because I once did the same thing as you are doing in your posts. I was also trying to explain "how", when all BenF wants is the "why". BenF is a reasonable man and only wants to accurately respond to feedback. But when you spend excessive time explaining "how", that wastes BenF's time sorting through it to find the "why". As you have already stated, you and I do not know the innards as well as BenF, so we should stick to "why". Like I say, I have been guilty of this too, so your options are to get upset about this or to learn from the experience. The choice is entirely yours.

As for the missed triggers, no standard DSO with software trigger detection can see triggers outside the acquisition time, so your argument is moot. You appear to want a second simultaneous acquisition while the Nano is processing the current acquisition, and that cannot be done within the Nano's hardware limitations.

When the Nano capabilities are exceeded, then your next option is to use a scope that costs more than $89. The Nano is a general purpose scope only, and has no exotic trigger and/or display capabilities.

It sounds as if you need a scope that can stream capture data to a file so you can examine it for missed waveform actions. You should move to something like a Pico 2205, whose educational version can be purchased for about $250. It allows you to stream to a PC file, and then you can scroll through this file and expand your view of the captured data as needed.

I understand. I was thinking it would be “easy” to ignore additional triggers for a specified amount of time, but I can surmise reasons why that might not be the case. I’ll take your word for it. I can get by with single shots, or if needed, I have access to plenty of more powerful tools at work with built in protocol analysis, etc. I just like working at home in my lazyboy with a netbook and a couple of USB powered boards …

If I run into other issues that might have a relatively easy fix, I’ll chime back in. I promise I won’t ask you to add a second channel, violate Nyquist or calculate a real time FFT. I would like X-Y mode though :smiley: .

BTW, what tool chain do you use? I’m an EE faculty at a State University and I have a pretty strong background in hardware. I’ve done some embedded C and can pump out assembly code and bit-bang with the best, but I consider myself to be a real “hack” when it comes to high quality, structured software. I wouldn’t mind playing around with the code more for fun and the learning experience than anything else (I’m actually more likely to break the code than improve anything).

Thanks again,

It’s not my intention to push you away, but if we keep the discussions at a higher level, it is likely to be more productive (from my point of view at least).

A single ADC interrupt may require upwards of 70 cycles just for saving and restoring context. At a sampling rate of 1 Ms/s we would then consume nearly 100% of CPU time just for ADC interrupt overhead. Also, we have to account for all other interrupts. This includes a millisecond interval timer used for general time keeping, scanning input button key states (so that the Nano appears responsive at all times) and a few other housekeeping tasks. Connected to USB it gets a lot worse, as we have to service inbound interrupt requests for the SD card file system. Service time for all interrupts combined will determine the minimum ADC cycle time. DMA is the performance enabler here, not the problem.
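The arithmetic behind this argument can be sketched with a small helper (the ~70 cycles per interrupt and the 72 MHz clock are the figures stated in this thread; the function itself is purely illustrative):

```c
#include <stdint.h>

/* Percentage of CPU time consumed by per-interrupt overhead alone:
 * cycles_per_irq context-switch cycles, samples_per_sec interrupts. */
static uint32_t isr_load_percent(uint32_t cycles_per_irq,
                                 uint32_t samples_per_sec,
                                 uint32_t cpu_hz)
{
    /* 64-bit intermediate avoids overflow at 1 Ms/s rates */
    return (uint32_t)((uint64_t)cycles_per_irq * samples_per_sec * 100 / cpu_hz);
}
```

With 70 cycles per interrupt on a 72 MHz core, a 1 Ms/s interrupt rate burns about 97% of all CPU cycles before any useful work is done, while the 100 kHz rate proposed earlier in the thread would cost roughly 9% (plus the per-sample processing itself and all other interrupt load).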

Using DMA and a 4k buffer size provides a window for real-time concurrent access to the acquisition buffer, and we need this in order not to lose samples.

At 5ms we will sample at 50kHz and so barely capture the 10kHz ripple (5 samples per cycle). Step down one TD however and you should be ok.

In this case I think you need to go back and check your settings (fast mode, zero sensitivity and correct trigger level) as this should be within the capability of the Nano/firmware.

You cannot have a large buffer AND a short cycle at the same time, so this is a trade-off. The min/max samples, however, will be in the acquisition buffer and so can be targeted by triggers.

If we add a hold-off feature, the expectation (mine at least) is for precise interval timing between trigger cycles at sub ms level. Time between cycles is used for display refresh, measurement calculations, general house-keeping, checking battery level, processing user input, and periodic ground level calibration and so is not easily controlled with the current design (a special purpose trigger is probably easier to implement).

Assembly and bit flipping is the right background for this so why not get your hands dirty with the V3.1 source available for download? It is based on the IAR toolchain for ARM Cortex with its C/C++ compiler and assembler.

I may be squeezed time wise to work on this, but it shouldn’t stop anyone from wishing or requesting.

Hi guys,

First of all, I wish to say that I don't like the "I'm … you are …" tone. If you all agree, please let's avoid this in the future.

I hope we can all agree on this:

  • The DSO Nano is a fine and useful product, but it has more limitations than we originally expected at big T/Div (T/Div>=5ms). The sampling rate is too low, there is no LP filter and no way to see the high-frequency components. I put a demonstration on page 9, with an OO/Excel file, where you can see 6kHz as 1kHz if you sample at 5kHz.

  • BenF's software is a big improvement over the original and we all respect that work. A long time ago, I made an FFT (and oscilloscope) PC sound card app. I made the first version in just a few days, but needed a few months to make it good. A PC is much easier for programming and testing than this hardware. I can only imagine how difficult it is to do the same within the hardware limitations of the Nano. Testing also requires a lot of time, because you need to transfer the program to the Nano, etc. This is a big and hard job. I'm aware of that, and I'm sure others are too.

The main question from my point of view is: can this be better at bigger T/Div? My intuition says it can. But maybe it cannot. I cannot be 100% sure.

Unfortunately, I really do not have time to go even a bit deeper into this. So I will just offer a few more ideas. Let BenF or somebody else think about them further. If they cannot be implemented, or are too hard to implement, that's OK. Otherwise, maybe they can help.

  1. First, I was surprised that interrupt switching requires so much overhead. I assume BenF has checked that. If it is true, can we still manage sampling at 100 kHz, or even 50 kHz? Maybe that would work? Even at the cost of battery life.
    50kHz IRQ sampling may look the same as the current T/Div=5ms in fast mode, but it is not: with Avg, Min and Max we have an LP filter (Avg) and can see the high-frequency components (Min, Max), and the buffer is 10x bigger.

  2. If I understand correctly, the main limitation is interrupt switching time. If #1 is not feasible, let's consider another, more complicated approach:

  • Let's use DMA, but with a very small buffer. Actually, two small buffers: while DMA fills one of them, the DSO processes the other.
  • Data from the small buffer is transferred to the main, bigger CIRCULAR buffer used for displaying, saving, etc. During this transfer, more than one value from the small buffer corresponds to one tuple (Avg, Min, Max) in the big/main buffer. Also during this transfer from the "small" to the "big" buffer, the program looks for the trigger and processes other calculations.

How big does this small buffer have to be? Will the whole "small" buffer map to just one tuple in the big (main) buffer, or to more?
I will leave that to be considered. Maybe the small buffer will turn out bigger than the main buffer. This needs some calculation and consideration.
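A minimal sketch of the ping-pong part of this idea, with all names hypothetical (on the STM32 one would typically hook the DMA transfer-complete interrupt for the switch point):

```c
#include <stdint.h>
#include <stddef.h>

#define SMALL_BUF 32  /* samples per DMA block (hypothetical size) */

static uint16_t dma_buf[2][SMALL_BUF]; /* ping-pong pair filled by DMA */
static int dma_active = 0;             /* index the DMA is currently filling */
static int blocks_processed = 0;

/* Stub: in real firmware this would search for the trigger and fold
 * the block into (Avg, Min, Max) tuples in the big circular buffer. */
static void process_block(const uint16_t *block, size_t n)
{
    (void)block; (void)n;
    blocks_processed++;
}

/* Hypothetical DMA transfer-complete handler: swap buffers so the
 * hardware keeps acquiring while we process the finished block. */
static void dma_complete_irq(void)
{
    int done = dma_active; /* the block the DMA just finished      */
    dma_active ^= 1;       /* DMA continues into the other block   */
    /* (re-arming the DMA channel is hardware-specific; omitted)   */
    process_block(dma_buf[done], SMALL_BUF);
}
```

The point of the scheme is that processing only ever touches the block the DMA is not writing, so there is no bus contention on the small buffers; whether the Nano has the cycles to drain each block before the other fills is exactly the open question above.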

I kind of figured there were other things going on that would preclude a precise deterministic hold-off.

I’ll make it a goal for the summer to get a few DSO bits stuck under my fingernails …

If nothing else, that will help me to filter and refine future questions/suggestions so as not to waste your time. On that note, I’m fully content with short answers like “that’s not feasible/easy”. I trust you have good reasons … no need to be verbose unless you have the time and think the details would be of general interest to the forum.


P.S. - I have now downloaded BenF_V3.1 and IAR Kickstart and got things compiled and downloaded into the DSO.


Thank you very much for those changes in V3.62, they are awesome. With regard to the Peak Display mode, the User Guide talks about using Peak for oversampling. Can the Peak display mode also be used in normal acquisition mode? I understand that there would be more noise, but glitches may become more observable if Peak can be used in normal acquisition mode.

Thank you, thank you very much


Thanks a lot for being so responsive to the never ending requests (including mine) for changes/improvement!

On my unit, if I am in FIT mode and looking at the output of the internal signal generator (while powered by the internal battery), the vertical scale continuously alternates back and forth between 1V/div (good display) and 0.5V/div (square wave off the top of the display), causing an unstable view. The setting is stable when powered from my laptop's USB port (the square wave amplitude is a bit larger due to the higher supply voltage). Is it possible to adjust the threshold a little for when it decides to increase the sensitivity … and/or put in a little hysteresis so it stabilizes at one setting when near the threshold for a vertical sensitivity change?


Thanks for the feedback. As this is based on forum feedback however, credit is all yours.

You can keep this as a preference for any configuration, but it will only have an effect on displayed output when there are more samples per div in the buffer than needed for the display (i.e. > 25 samples per div). This is the case with fast mode (250 samples per div) or if zooming out (increase T/Div) on a stopped waveform when in normal mode.

There is hysteresis on all auto-fit dimensions already, but I’ll see if I can reproduce it and tune it a bit to avoid this issue.


Thanks for the v3.62 update. The peak mode makes it possible to see high-frequency components in fast sample mode.


The peak mode makes no sense for Normal sample mode, i.e. Average and Peak are the same in Normal sample mode.

An explanation of Avg and Peak mode:
Sampling buffer: 5, 6, 7, 6, 0, 1, 2, 1
Let's assume the oversampling factor is 4, instead of 10.
In Average mode the program displays 6 for the first dot t1 (= avg of 5, 6, 7, 6) and 1 for the second dot t2 (= avg of 0, 1, 2, 1).
In Peak mode the effect is as if the program displays a vertical line between 5 and 7 (min and max of 5, 6, 7, 6) for the first dot t1, and a vertical line between 0 and 2 (min/max of 0, 1, 2, 1) for the second dot t2, so you can see the high-frequency components.
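The worked example above can be expressed as a small reduction function (a sketch with illustrative names, not the actual rendering code):

```c
/* One display dot reduced from `factor` oversampled points. */
typedef struct { int avg, min, max; } dot_t;

/* Average mode would plot d.avg; Peak mode would plot a vertical
 * line from d.min to d.max for the same dot. */
static dot_t render_dot(const int *samples, int factor)
{
    dot_t d = { 0, samples[0], samples[0] };
    int sum = 0;
    for (int i = 0; i < factor; i++) {
        sum += samples[i];
        if (samples[i] < d.min) d.min = samples[i];
        if (samples[i] > d.max) d.max = samples[i];
    }
    d.avg = sum / factor;
    return d;
}
```

Applied to the buffer 5, 6, 7, 6, 0, 1, 2, 1 with factor 4, the first dot comes out as avg 6 with a 5..7 peak line and the second as avg 1 with a 0..2 peak line, matching the numbers above.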

This gives a similar effect to my first suggestion with Min and Max only, but only in Fast sample mode and with a small equivalent buffer size.
My idea is to show Min/Max at a lower color intensity and Avg at a higher color intensity, so both can be seen. But my main suggestion is to always sample at the fastest possible rate, computing Avg, Min and Max in real time and storing the resulting values in the buffer, so we can detect components at the highest possible sampling rate and have a bigger buffer size (relative to the current Fast mode), both at the same time.
The main disadvantage of my suggestion is that you will not get per-sample detail if you reduce T/Div on a held (stopped) waveform.

Anyway, this is much closer to what we need.

Thanks again to BenF!