DSO firmware version 3.64

BenF,

if you don’t like my suggestions or have no time for them, I understand, but you don’t need to shout at me. Your profiling of me is mostly wrong: I have been programming for about 25 years (the last 12 professionally), and I develop automation, electronics and software for more than 30 machines that customers have been using over the last 10 years (that’s my second job).
Regarding the DSO Nano, you are right: I don’t know the internal details, and that is obvious. But I can make suggestions. I will stop there, though; I have already spent too much time on this.

I’m not planning to join any group, simply because I have no additional time for it (as I told you, I’m already working two jobs).

On the technical side, my assumption that the DSO Nano can lose the trigger is based on your previous email: DMA fills the buffer, then software displays and examines it. If that is true, you can lose the trigger. If two buffers are used and the software can examine the second buffer in real time, then the trigger is NOT lost. Or there is some third option I’m not aware of. Anyway, even losing the trigger is not so bad if that is what happens, but users need to be aware of it. If that is not the case, then please accept my apologies; my assumption was based on your explanation.

100MHz was a typo, of course. I meant 100kHz.

As I told you, you did a great job on this: the display and visual effects are nice, and high frequencies work fine. But the 5ms/div time scale could be better. Fast mode is not a full solution because high-frequency content is lost (which is generally fine, but we need the high-frequency info too). Adding Min and Max even in fast mode would make it much more useful for spotting high-frequency signals, but it is still not a full solution if the CPU can sample faster.

Did you test the maximum sampling rate without DMA?

If it is better than even 50kHz, my approach is worth using. The effect of Min and Max is almost the same as having a big buffer that stores all samples at high frequency and displays all of them at a large T/Div.

Here are some concrete problems I had with the 5ms time scale:

  1. On AC=50Hz there is high-frequency noise at 10kHz from a servo motor switcher. I cannot see it well.
  2. I’m controlling thyristors: in some cases the trigger does not catch the thyristors’ “current” waveform. Maybe that was due to a high trigger level or sensitivity. The thyristor control runs at 100Hz, while the input signal is very short (~0.1ms). This is another situation where I need to see both low and “high” frequencies. For this case I use a smaller T/Div, but then the buffer cannot catch all the impulses; with TD=5ms it could, but then the impulses are not shown well.
  3. I tried fast mode, but because there is no Min/Max it is not that useful. And the buffer is small.
    I guess my suggestion could give much better results for these cases, provided the CPU can sample at a reasonably high frequency.

Anyway, the Nano and your changes are useful and definitely better than nothing. But I think they could be better still. If you do not have more time, I understand.

Dejan

Excuse me for butting in here, but I feel qualified to enter this discussion because I once did the same thing you are doing in your posts. I was also trying to explain the “how”, when all BenF wants is the “why”. BenF is a reasonable man and only wants to respond accurately to feedback. But when you spend excessive time explaining “how”, you waste BenF’s time sorting through it to find the “why”. As you have already stated, you and I do not know the innards as well as BenF, so we should stick to “why”. As I say, I have been guilty of this too, so your options are to get upset about it or to learn from the experience. The choice is entirely yours.

As for the missed triggers, no standard DSO with software trigger detection can see triggers outside the acquisition time, so your argument is moot. You appear to want a second simultaneous acquisition while the Nano is processing the current acquisition, and that cannot be done given the Nano’s hardware limitations.

When the Nano capabilities are exceeded, then your next option is to use a scope that costs more than $89. The Nano is a general purpose scope only, and has no exotic trigger and/or display capabilities.

It sounds as if you need a scope that can stream capture data to a file so you can examine it for missed waveform actions. You should move to something like a Pico 2205, whose educational version can be purchased for about $250. It allows you to stream to a PC file, and then you can scroll through this file and expand your view of the captured data as needed.

I understand. I was thinking it would be “easy” to ignore additional triggers for a specified amount of time, but I can surmise reasons why that might not be the case. I’ll take your word for it. I can get by with single shots, or if needed, I have access to plenty of more powerful tools at work with built in protocol analysis, etc. I just like working at home in my lazyboy with a netbook and a couple of USB powered boards …

If I run into other issues that might have a relatively easy fix, I’ll chime back in. I promise I won’t ask you to add a second channel, violate Nyquist or calculate a real time FFT. I would like X-Y mode though :smiley: .

BTW, what tool chain do you use? I’m an EE faculty at a State University and I have a pretty strong background in hardware. I’ve done some embedded C and can pump out assembly code and bit-bang with the best, but I consider myself to be a real “hack” when it comes to high quality, structured software. I wouldn’t mind playing around with the code more for fun and the learning experience than anything else (I’m actually more likely to break the code than improve anything).

Thanks again,
Dave

It’s not my intention to push you away, but if we keep the discussions at a higher level, it is likely to be more productive (from my point of view at least).

A single ADC interrupt may require upwards of 70 cycles just for saving and restoring context. At a sampling rate of 1Ms/s we would then consume 100% of CPU time just for ADC interrupt overhead. Also we have to account for all other interrupts. This includes a millisecond interval timer used for general time keeping, scanning input button key states (so that the Nano appears responsive at all times) and a few other housekeeping tasks. Connected to USB it gets a lot worse as we have to service inbound interrupt requests for the SD card file system. Service time for all interrupts combined will determine the minimum ADC cycle time. DMA is the performance enabler here, not the problem.
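As a sanity check on the numbers above, here is a rough sketch. The 72 MHz core clock is my assumption (typical of the STM32F103 family in the Nano); the ~70-cycle overhead figure is the one quoted in the post:

```c
#include <assert.h>

/* Assumed core clock (typical STM32F103) and the ~70-cycle ISR
 * save/restore overhead quoted above -- both are the post's figures
 * plus one assumption, not measured values. */
#define CPU_HZ      72000000UL
#define ISR_CYCLES  70UL

/* Percent of CPU time eaten by ISR overhead alone at a given sampling
 * rate, before the handler body does any useful work. */
static unsigned overhead_percent(unsigned long sample_rate_hz)
{
    return (unsigned)(100UL * ISR_CYCLES * (sample_rate_hz / 1000UL)
                      / (CPU_HZ / 1000UL));
}
/* overhead_percent(1000000) -> 97: essentially all of the CPU at 1 Ms/s.
 * overhead_percent(100000)  -> 9,  overhead_percent(50000) -> 4.       */
```

At 100 kHz the raw ISR entry/exit overhead alone looks modest; it is the other interrupt sources listed above (timer tick, keys, USB/SD) that consume the remaining headroom.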

Using DMA and a 4k buffer provides a window for real-time concurrent access to the acquisition buffer, and we need this in order not to lose samples.

At 5ms/div we sample at 50kHz and so barely capture the 10kHz ripple (5 samples per cycle). Step down one T/Div however and you should be OK.

In this case I think you need to go back and check your settings (fast mode, zero sensitivity and correct trigger level), as this should be within the capability of the Nano/firmware.

You cannot have a large buffer AND a short cycle at the same time, so this is a trade-off. The min/max samples however will be in the acquisition buffer and so can be targeted by triggers.

If we add a hold-off feature, the expectation (mine at least) is for precise interval timing between trigger cycles at sub ms level. Time between cycles is used for display refresh, measurement calculations, general house-keeping, checking battery level, processing user input, and periodic ground level calibration and so is not easily controlled with the current design (a special purpose trigger is probably easier to implement).

Assembly and bit flipping is the right background for this so why not get your hands dirty with the V3.1 source available for download? It is based on the IAR toolchain for ARM Cortex with its C/C++ compiler and assembler.

I may be squeezed time wise to work on this, but it shouldn’t stop anyone from wishing or requesting.

Hi guys,

First of all, I wish to say that I don’t like “I’m … you are …” arguments. If you all agree, let’s avoid this in the future.

I hope we can all agree on this:

  • The DSO Nano is a fine and useful product, but it has more limitations than we originally expected at large T/Div (TD >= 5ms). The sampling frequency is too low, there are no LP filters, and there is no way to see high-frequency components. I posted a demonstration on page 9 with an OO/Excel file where you can see 6kHz displayed as 1kHz if you sample at 5kHz.

  • BenF’s software is a big improvement over the original, and we all respect that work. A long time ago I made an FFT (and oscilloscope) PC sound card app. I made the first version in just a few days, but needed a few months to make it good. The PC is much easier to program and test on than this hardware; I can only imagine how difficult it is to do the same within the Nano’s hardware limitations. Testing also takes a lot of time because you need to transfer the program to the Nano each time. This is big, hard work. I’m aware of that, and I’m sure others are too.

The main question from my point of view is: can this be better at larger T/Div? My intuition says it can. But maybe it cannot; I cannot be 100% sure.

Unfortunately, I really do not have time to go even a bit deeper into this, so I will just offer a few more ideas. Let BenF or somebody else think them over. If they cannot be implemented, or are too hard to implement, that’s OK. Otherwise, maybe they can help.

  1. First, I was surprised that IRQ context switching requires so much overhead. I assume BenF checked that. If it is true, where do we stand with sampling at 100kHz, or even 50kHz? Maybe that could work, even at some cost in battery life.
    Sampling at 50kHz via IRQ looks the same as the current TD=5ms fast mode, but it is not: with Avg, Min and Max we get an LP filter (Avg), we can see high-frequency components (Min, Max), and the buffer is 10x bigger.

  2. If I understand correctly, the main limitation is IRQ switching time. If #1 cannot work, let’s consider another, more complicated approach:

  • Use DMA, but with a very small buffer. Actually two small buffers: while DMA fills one of them, the DSO processes the other.
  • Data from the small buffer is transferred into the main, bigger CIRCULAR buffer used for displaying, saving, etc. During this transfer, more than one value from the small buffer maps to one tuple (Avg, Min, Max) in the big/main buffer. While transferring from the “small” to the “big” buffer, the program looks for the trigger and performs the other calculations.

How long does this small buffer have to be? Will the whole “small” buffer reduce to just one tuple in the big (main) buffer, or to several?
I will leave these questions open. Maybe the small buffer will end up bigger than the main buffer; this needs some calculation and consideration.
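The transfer step in the second bullet might look something like this. A sketch only: the names, the decimation factor and the 16-bit sample type are my assumptions, not firmware code:

```c
#include <assert.h>
#include <stdint.h>

/* Fold DECIM raw ADC samples into one (avg, min, max) tuple for the
 * main circular buffer. DECIM = 8 is an arbitrary illustrative choice. */
#define DECIM 8

typedef struct { uint16_t avg, min, max; } tuple_t;

/* Reduce n raw samples (assumed a multiple of DECIM, e.g. one DMA
 * half-buffer) into n/DECIM tuples written to out. */
static void reduce(const uint16_t *raw, int n, tuple_t *out)
{
    for (int t = 0; t < n / DECIM; t++) {
        uint32_t sum = 0;
        uint16_t mn = 0xFFFF, mx = 0;
        for (int i = 0; i < DECIM; i++) {
            uint16_t s = raw[t * DECIM + i];
            sum += s;
            if (s < mn) mn = s;
            if (s > mx) mx = s;
        }
        out[t].avg = (uint16_t)(sum / DECIM);
        out[t].min = mn;
        out[t].max = mx;
    }
}
```

In the double-buffer scheme this would run on the buffer DMA has just finished while DMA fills the other one, and the trigger search could then run over the stored tuples.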

I kind of figured there were other things going on that would preclude a precise deterministic hold-off.

I’ll make it a goal for the summer to get a few DSO bits stuck under my fingernails …

If nothing else, that will help me to filter and refine future questions/suggestions so as not to waste your time. On that note, I’m fully content with short answers like “that’s not feasible/easy”. I trust you have good reasons … no need to be verbose unless you have the time and think the details would be of general interest to the forum.

Best,
Dave

P.S. - I have now downloaded BenF_V3.1 and IAR Kickstart and got things compiled and downloaded into the DSO.

BenF:

Thank you very much for those changes in V3.62, they are awesome. With regard to the Peak Display mode, the User Guide talks about using Peak for oversampling. Can the Peak display mode also be used in normal acquisition mode? I understand that there would be more noise, but glitches may become more observable if Peak can be used in normal acquisition mode.

Thank you, thank you very much

BenF,

Thanks a lot for being so responsive to the never ending requests (including mine) for changes/improvement!

On my unit, if I am in FIT mode and looking at the output of the internal signal generator (while powered by the internal battery), the vertical scale continuously alternates back and forth between 1V/div (good display) and 0.5V/div (square wave off the top of the display), causing an unstable view. The setting is stable when powered off my laptop’s USB port (the square wave amplitude is a bit larger due to the higher supply voltage). Is it possible to adjust the threshold a little for when it decides to increase the sensitivity, and/or put in a little hysteresis so it stabilizes at one setting when near the threshold for a vertical sensitivity change?

Thanks,
Dave

Thanks for the feedback. As this is based on forum feedback however, credit is all yours.

You can keep this as a preference for any configuration, but it will only have an effect on displayed output when there are more samples per div in the buffer than needed for the display (i.e. > 25 samples per div). This is the case with fast mode (250 samples per div) or if zooming out (increase T/Div) on a stopped waveform when in normal mode.

There is hysteresis on all auto-fit dimensions already, but I’ll see if I can reproduce it and tune it a bit to avoid this issue.

BenF,

thanks for the v3.62 update. The peak mode makes it possible to see high-frequency components in fast sample mode.

Iygra,

the peak mode makes no sense in Normal sample mode, i.e. Average and Peak are identical in Normal sample mode.

Explanation of Avg and Peak mode:
Sampling buffer: 5, 6, 7, 6, 0, 1, 2, 1
Let’s assume the oversampling factor is 4 instead of 10.
In Average mode the program displays 6 for the first dot t1 (= avg of 5, 6, 7, 6) and 1 for the second dot t2 (= avg of 0, 1, 2, 1).
In Peak mode the effect is similar to the program displaying a vertical line between 5 and 7 (min and max of 5, 6, 7, 6) for the first dot t1, and a vertical line between 0 and 2 (min/max of 0, 1, 2, 1) for the second dot t2, so you can see high-frequency components.
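The worked example above can be written out as code. A sketch only: the factor-4 decimation matches the example rather than the firmware’s actual factor of 10, and the function names are mine:

```c
#include <assert.h>
#include <stdint.h>

#define FACTOR 4  /* samples per display dot, per the example above */

/* Average mode: one mean value per display dot. */
static uint8_t dot_avg(const uint8_t *s)
{
    unsigned sum = 0;
    for (int i = 0; i < FACTOR; i++) sum += s[i];
    return (uint8_t)(sum / FACTOR);
}

/* Peak mode: min and max per dot, drawn as a vertical line. */
static void dot_peak(const uint8_t *s, uint8_t *mn, uint8_t *mx)
{
    *mn = *mx = s[0];
    for (int i = 1; i < FACTOR; i++) {
        if (s[i] < *mn) *mn = s[i];
        if (s[i] > *mx) *mx = s[i];
    }
}
/* For the buffer {5,6,7,6,0,1,2,1}: dot_avg gives 6 then 1;
 * dot_peak gives (5,7) then (0,2), matching the explanation above. */
```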

This gives a similar effect to my first suggestion with Min and Max only, but only for Fast sample mode and with an equivalently small buffer.
My idea is to have Min and Max at lower color intensity and Avg at higher intensity, so both can be seen. But my main suggestion is to always sample at the fastest possible rate, computing Avg, Min and Max in real time and storing the resulting values in the buffer, so that we detect components at the highest possible sampling rate and get a bigger buffer (relative to the current Fast mode) at the same time.
The main disadvantage of my suggestion is that you will not get per-sample detail if you reduce T/Div on a held waveform.

Anyway, this is much closer to what we need.

Thanks again to BenF!

I hooked it up to a sig-gen when I came to work today to see if this was an isolated phenomenon or something more general. For both sine and square waves there appeared to be a range of amplitudes near the thresholds where it very consistently oscillates back and forth between two vertical range settings. Interestingly, the range of amplitudes (minimum to maximum) over which this occurs is about the same as I would intuitively use to set the hysteresis levels, just eyeballing it. It’s almost as though there is a sign problem on the vertical scaling hysteresis causing “anti-hysteresis”, if that makes any sense?

Thanks,
Dave

P.S. I watched this action a little more this evening. It appears there may be some kind of feedback between the ground level adjustment and the vertical sensitivity: one adjusts first, affecting the other; the second then adjusts, affecting the first; and this continues forever. Just another thought on what might be happening.

I am glad that I asked that question and appreciate BenF’s answer which points out that peak can be useful in normal mode while zooming a stopped waveform “if zooming out (increase T/Div) on a stopped waveform when in normal mode”. This appears to be a condition when display oversampling is averaging, while not in “Fast” mode.

Not sure if this is the right place to post this?

BenFWaves.Client is not happy with the new FIT trigger mode.

“System.InvalidOperationException
Instance validation error: ‘FIT’ is not a valid value for TriggerMode.”

Regards,
Dave

This appears to come from the XML viewer utility submitted by forum user Bainesbunch and so you would need to get his attention.

Posting in this thread might be better:

viewtopic.php?f=22&t=1871&start=10

It’s actually from me, though Bainesbunch is developing something with some shared functionality. I’ve already written a fix for the FIT mode breakage, and as soon as Google Code decides to let me log in again, I’ll issue another release.

A suggestion: instead of the following format for dumped data buffers:

<Point> <seq>1418</seq> <val>-40.000e-3</val> </Point>

Could the user select an option for a slightly more compact form, such as:

<Point seq="1418" val="-40.000e-3"/>

This is still compliant XML, but would be nicer on SD cards with limited space and/or many dumped buffers. Cheers.

Sorry about the author mix-up. :blush:

The XML format is chosen so that it can efficiently be parsed by the Nano firmware on import (one tag and value per line). XML support is optimized and hand-coded for this particular format and requires no more than 1KB or so of program memory and virtually no RAM at all. A full blown XML parser on the other hand would require several orders of magnitude more space than would fit within Nano capacity constraints.
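The one-tag-per-line constraint is what makes such a tiny parser possible. As an illustration (my own sketch, not the actual firmware code), a reader for `<tag>value</tag>` lines needs little more than three string scans:

```c
#include <assert.h>
#include <string.h>

/* Extract tag and value from a line of the form <tag>value</tag>.
 * Returns 1 on success, 0 if the line doesn't match or a field is
 * too long. Illustrative only; names are mine, not the firmware's. */
static int parse_line(const char *line, char *tag, size_t tagsz,
                      char *val, size_t valsz)
{
    const char *p = strchr(line, '<');          /* opening '<'  */
    if (!p) return 0;
    const char *q = strchr(p, '>');             /* end of tag   */
    if (!q || (size_t)(q - p - 1) >= tagsz) return 0;
    memcpy(tag, p + 1, q - p - 1);
    tag[q - p - 1] = '\0';
    const char *r = strstr(q + 1, "</");        /* closing tag  */
    if (!r || (size_t)(r - q - 1) >= valsz) return 0;
    memcpy(val, q + 1, r - q - 1);
    val[r - q - 1] = '\0';
    return 1;
}
/* parse_line("<seq>1418</seq>", ...) yields tag "seq", value "1418". */
```

Nothing here allocates, recurses or builds a tree, which is how the footprint stays near 1KB of code and essentially no RAM.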

Despite limited XML support, the export file format is still fully compliant, and export file size is not an issue with the SD cards we use in our Nanos. The Quad however is limited by its total secondary storage capacity of 2MB, and using XML format on these units is not practical. A compact binary format may be an option for the Quad, but even this would be severely limited capacity-wise.

Is the v3.6 firmware under source control somewhere where I could check it out? I see that there’s a copy of BenF’s source in the DSO Nano Google Code project, but it’s from November. If it isn’t under source control, how would I go about getting a copy of the source?

Thanks.

BenF source availability is limited to versions 3.11/3.12 (courtesy of Seeed).

BenF, are you planning to add FFT functionality from Paulvvz’s FFT firmware?