DSO firmware version 3.43

Hi Tim,

For triggering, it may be worthwhile to spend some time with the video tutorials prepared by forum user lygra.

http://www.youtube.com/user/lwgraves?feature=mhum#p/p

Trigger level is set as a percentage of full scale, so yes, it will change in absolute value when you change V/Div. Trigger sensitivity is explained in the referenced videos, but you may want to set it to zero to keep it simple.
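To make the V/Div dependence concrete, here is a tiny sketch (not firmware code; the 8-division screen height is an assumption for illustration) of how a percentage trigger level maps to volts:

```c
/* Toy illustration: a trigger level stored as a percentage of full scale
   changes its absolute value when V/Div changes. The 8 vertical divisions
   are an assumption, not taken from the firmware. */
#include <stdio.h>

int main(void)
{
    const double divisions    = 8.0;
    const double trig_percent = 50.0;            /* 50% of full scale */
    const double vdiv[]       = { 0.1, 0.5, 1.0 }; /* example V/Div settings */

    for (int i = 0; i < 3; i++) {
        double full_scale = vdiv[i] * divisions;
        double trig_volts = full_scale * trig_percent / 100.0;
        printf("%.1f V/Div -> trigger at %.2f V\n", vdiv[i], trig_volts);
    }
    return 0;
}
```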

Your calibration figures seem way off and may need some attention. Are you using the probes that came with the Nano?

The calibration procedure is documented as follows:

As a further clarification to the video, the trigger sensitivity levels are two boundaries that the signal transition must cross. If the signal fails to cross both boundary levels, then no trigger is found. As BenF explained, you can set the trigger sensitivity to zero so that it no longer conditions the trigger decision, and then only the trigger level affects whether a trigger is found. The video does show you how to adjust Trig Sens; the lower-right context-sensitive display window shows the sensitivity value as you change it. Because Trig Sens is usually set to a much smaller value than the trigger level, it is very sensitive to V/Div changes.

This trigger sensitivity feature is normally used to generate a single display sweep when some abnormal portion of the waveform exceeds the normal waveform pattern.

EDIT: In the above paragraph I removed “touch” because I went back and looked at the code, and the signal must cross (exceed, not just equal) those boundaries.
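For readers who prefer code to prose, here is a minimal sketch of a rising-edge trigger test with a sensitivity window. This is not the 3.43 source; the function name and the choice of a window centered on the trigger level are assumptions made for illustration.

```c
/* Hypothetical sketch of rising-edge trigger detection with a sensitivity
   window. Whether the real firmware centers the window on the trigger level
   is an assumption; the point is that the signal must strictly exceed both
   boundaries, and that sensitivity = 0 collapses the test to the level alone. */
#include <stdint.h>
#include <stddef.h>

int find_rising_trigger(const int16_t *samples, size_t count,
                        int16_t level, int16_t sensitivity)
{
    int16_t lower = level - sensitivity;
    int16_t upper = level + sensitivity;
    int armed = 0;                 /* set once the signal has dipped below 'lower' */

    for (size_t i = 0; i < count; i++) {
        if (samples[i] < lower)
            armed = 1;             /* the transition must start below the window */
        else if (armed && samples[i] > upper)
            return (int)i;         /* must exceed, not just equal, the upper boundary */
    }
    return -1;                     /* no trigger found in this sweep */
}
```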

Gents,

Thanks for the help.

The trig sens makes more sense now, but I still don’t completely see why it is necessary (probably because I look at audio signals 99.9% of the time).

I did set the ground offset by shorting the (supplied) probes together first, then tried to adjust the gain on a couple of ranges. I followed the instructions, applied DC and calibrated the gain, but got the results in the table from my previous post. Is it better to cal a range near its full scale? For instance, on the 1V/div scale, should I input 8V?

Thanks,

Tim

Hello Ben,
First of all, I want to thank you for your work!

I really like the single trigger mode and the ability to view the full buffer, but I find a small problem with that.
After the signal is saved in the buffer, if I go and change V/Div or T/Div, the signal disappears from the screen.
If I then go to the X-axis mode and move the buffer, the signal appears again.

I would say yes, but the values you posted are so far off that one suspects there is more to it. Keep in mind that each V/Div needs to be calibrated separately for both offset and gain, and also that they will revert to defaults after a power cycle unless you explicitly save the calibration.

I too would welcome the possibility to stretch a captured waveform in both the x and y directions. As it is now, however, it is a trade-off for a faster refresh rate. The fact that it reappears after a change of T/Div was new to me, however, and it is not intentional (the waveform will not reflect the new time base).

The waveform does not reappear with the new time and voltage scales; it goes back to the original time/voltage scales (those the signal was captured with).
So I think there’s nothing wrong with this.

Closer inspection will reveal that the waveform maintains its same physical dimensions on the screen, but the changed T/Div and/or V/Div indicators do change. All the measurement and cursor functions (another form of measurement) are incorrect because of the changed scales (for example, changing T/Div from 200us to 50us results in an erroneous frequency measurement of 4 kHz instead of the correct 1 kHz). You can change the scales back to the original values and all is normal once again. So, bottom line, nothing was gained by this maneuver at this time except potential confusion.
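A quick numeric check of the example above, assuming the readout simply rescales the unchanged on-screen period by the new time base:

```c
/* Illustration of the erroneous readout: the captured samples keep their
   physical size on screen, so a 1 kHz signal captured at 200 us/Div reads
   as 4 kHz once the indicator is changed to 50 us/Div. */
#include <stdio.h>

int main(void)
{
    const double captured_tdiv  = 200e-6;   /* time base at capture */
    const double displayed_tdiv = 50e-6;    /* time base after the change */
    const double true_freq      = 1000.0;   /* the signal really is 1 kHz */

    double reported = true_freq * (captured_tdiv / displayed_tdiv);
    printf("reported frequency: %.0f Hz\n", reported);   /* prints 4000 Hz */
    return 0;
}
```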

BenF: Thank you for your input on this. I played with it a bit more this morning, and I think I figured it out. The offset and gain seem to be interacting with each other. If I set the offset to 0uV, cal the gain (near full scale for the range), then check the offset again, it is off by 400mV or so. I adjust it to half of the offset, re-cal the gain, check the offset, and it is off by less this time. I wash, rinse and repeat until it is very close. Now my 1V/div range is close enough for what this will be used for.
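If it helps to see why the halve-and-repeat approach settles down, here is a toy model (invented numbers, not the Nano's calibration code) showing the residual offset shrinking by half on each pass:

```c
/* Toy convergence model for the wash-rinse-repeat calibration: correcting
   only half of the measured offset on each pass avoids overshooting while
   the gain trim is still moving, and the residual halves every iteration. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double residual = 0.4;   /* ~400 mV initial offset error, as reported */

    for (int pass = 1; fabs(residual) > 0.01 && pass <= 10; pass++) {
        residual -= residual / 2.0;             /* apply half of what was measured */
        printf("after pass %d: %.3f V left\n", pass, fabs(residual));
    }
    return 0;
}
```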

Thanks again.

Tim

EDIT: I have been going through the ranges and the 10V range will not cal. I have a two to three volt disparity between what is going in and what the DSO reads. The ranges from 0.1V to 5V all cal’d quite nicely.

It would be a nice feature if it displayed correctly again once you return to the same time and volt/div, so the buffer is not lost because of clumsiness with the buttons. My friend’s Fluke industrial ScopeMeter even maintains the buffer when powered off, so it can be turned on and looked at again as long as you don’t start to measure again.

This is a feature I would highly appreciate too. Could this be made an option (between faster refresh and redraw after T/Div or V/Div change)?

I think some older firmware had this “feature”, but then the refresh rate was not so fast, I guess.

I remember requesting this feature at some time, but have never seen it implemented in any versions from Seeed or others.

I’ve done some work on an upcoming V3.5 and this update will include support for waveform resizing.

Is there some way to stream the capture data via USB in a format that a Linux/Windows Oscilloscope program could use?

USB should be able to handle around 20MB/s or more, so a 1MSps data stream should work …

This is related to a separate topic that I will post in a little while as well.

The Nano supports export of capture data to XML files, but not streaming of live data. Doing so would require development of a custom firmware to replace the current USB file system interface with a USB serial interface as well as a PC application to process the live data.
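As a rough idea of what the PC side of such a project might look like, here is a minimal Linux sketch that reads raw bytes from a CDC serial device and pipes them to stdout. The device path and the idea that a hypothetical custom firmware would enumerate as a CDC ACM port are assumptions; nothing like this exists for the Nano today.

```c
/* Hypothetical PC-side reader for a (non-existent) streaming firmware that
   enumerates as a USB CDC serial device. /dev/ttyACM0 is an assumption. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyACM0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                  /* raw mode: no line editing or translation */
    tcsetattr(fd, TCSANOW, &tio);

    uint8_t buf[4096];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n <= 0) break;
        fwrite(buf, 1, (size_t)n, stdout);   /* pipe raw samples to a scope program */
    }
    close(fd);
    return 0;
}
```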

For those interested in pursuing such development, here are some high-level constraints to consider:

BenF, can the DSO Nano hardware use USB 2.0 high speed (480 Mbit/s) instead of full speed (12 Mbit/s)?

Ahhh, OK. I assume that is a limitation of the chip and/or PHY being used rather than of the software? If so, perhaps the DSO Quad should have a high-speed (480 Mbps) capable PHY and chip.
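To put rough numbers on the full-speed limitation (the 2 bytes per sample and the ~1 MB/s practical bulk throughput below are assumptions for illustration, not measured figures):

```c
/* Back-of-the-envelope bandwidth check. Full-speed USB is 12 Mbit/s on the
   wire; practical bulk throughput is assumed to be roughly 1 MB/s here, and
   the 2-bytes-per-sample figure is also an assumption for illustration. */
#include <stdio.h>

int main(void)
{
    const double sample_rate      = 1e6;   /* 1 MSps */
    const double bytes_per_sample = 2.0;   /* assumed: one 12-bit sample in 2 bytes */
    const double usb_fs_practical = 1e6;   /* ~1 MB/s usable on full-speed bulk (assumed) */

    double needed = sample_rate * bytes_per_sample;   /* 2 MB/s */
    printf("stream needs %.1f MB/s, full speed offers ~%.1f MB/s -> %s\n",
           needed / 1e6, usb_fs_practical / 1e6,
           needed <= usb_fs_practical ? "fits" : "needs decimation or packing");
    return 0;
}
```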

As to the custom firmware, while I have only ever been involved with USB storage at that level, I think you can support multiple endpoints if you set things up correctly.

I agree that it would involve a PC application to support the live data, but there are already some to choose from, like XOScope and others, although their code might not be capable enough or in good enough shape to work from. However, having been involved in Ethereal/Wireshark, I have seen how something can start from humble beginnings and take over the world.

Will I be able to use the DSO Nano V2 to analyse 100kbps I2C?

I imagine that 400kbps I2C would be beyond the DSO Nano’s capabilities (I seem to recall an HP document suggesting that you need a sample rate of something like 5 times the bit rate you are trying to measure to get reasonable fidelity on the waveforms).
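Putting numbers on that rule of thumb (the Nano's 1 MSps maximum sample rate is my recollection of the specs, not something stated in this thread):

```c
/* Rule-of-thumb check: need roughly 5x the bit rate in sample rate.
   The 1 MSps maximum for the Nano is an assumption from memory of the specs. */
#include <stdio.h>

int main(void)
{
    const double max_sps = 1e6;                /* assumed Nano maximum sample rate */
    const double rates[] = { 100e3, 400e3 };   /* standard- and fast-mode I2C */

    for (int i = 0; i < 2; i++) {
        double needed = 5.0 * rates[i];
        printf("%3.0f kbps I2C -> ~%4.0f kSps needed: %s\n",
               rates[i] / 1e3, needed / 1e3,
               needed <= max_sps ? "within reach" : "beyond the Nano");
    }
    return 0;
}
```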

I have an answer for you, but since this thread is discussing future DSO capability and your question relates to existing capability I have posted it in a separate topic.

Please see:

http://www.seeedstudio.com/forum/viewtopic.php?f=12&t=1698

I recently received my DSO Nano v1.
I have updated to DSO firmware version 3.43.
I tried to use it during my lab exam at college and compared the results to the Tektronix oscilloscope that we have in the lab.
I found that, for some strange reason, when I measure two outputs in the circuit and overlap them on the DSO, the time line doesn’t match.
For example, I had this circuit:

I have measured the two outputs with my DSO and got:

After that I have measured the same circuit with Tektronix and got:

Is there any way to fix this?
Thanks.

The following info would be required to best analyze your question: the Trig. Level and Trig. Sens. for each original Nano waveform capture, and the sampling rate, trigger level and trigger sensitivity (if it has this feature) that the Tek scope was using for the Tek capture(s).

I can explain the reason, but finding a solution to this is more of a challenge.

A limitation of a single-channel (and single-trigger) scope is that you cannot synchronize two waveforms in time. You can do two separate captures and compare their shape (amplitude, frequency etc.), but you cannot determine the relative timing between them. On the Tektronix, both waveforms are synchronized to the rising edge of the square wave. The DSO triangle waveform, however, will trigger on its own rising edge independently of the square wave. If you repeat the measurements and move the trigger level up towards the triangle peak, the DSO triangle waveform will move left and so approximate what you’re after. The only way you can determine that this is how they relate timing-wise, however, is if you already know.

The upside would be that as a lab exercise, you probably learn more from this than any of your fellow students without access to a Nano.

Edit: I didn’t see your post, lygra, but I assume he is referring to the time displacement between the two waveforms.