DSO Quad bandwidth

Thanks for providing these very necessary measurements.

Neither you nor HugeMan has specified which range scale you are using for these measurements. As long as the range scale is not changed, I would suspect that calibration of that range scale is not a factor. On the other hand, the probe compensation adjustments for that channel could have a very dramatic effect.

U5 pin “Y”: is that what you are measuring? The pin numbering seems to be inconsistent on the schematic. It appears that U5 “Y” is the input to the op-amp U7, and U5 “X” is the output of the op-amp U7. If so, then U5 “X” will be the low-impedance signal into the ADC, and that is where the measurement should take place.

I am still awaiting my replacement Quad. Maybe you could upgrade to the latest firmware, perform the new probe compensation procedure provided by HugeMan, and then repeat your test to see if the results are different. Hopefully the probe compensation procedure will help flatten this bandwidth. It may be robbing Peter (2-3 MHz) to pay Paul (15 MHz). If so, the bandwidth will probably end up being more than 15 MHz.

Thanks for sharing this info.

OK, I revisited the code that you suggested. I cannot find your reference; please tell me where this file is located. App-Bios.h only defines terms and parameters. It also defines some routine headers, which is to be expected, but even in those routine headers there is no mention of trigger detection routines.

If you look at App-process.c, there appear to be three routines relevant to trigger detection: “Update Trigger”, “Synchro”, and “Process”. Now, my Chinese is a little rusty (actually non-existent), but it appears to me that the “Update Trigger” routine sets the trigger levels to look for, the “Synchro” routine determines what kind of trigger to look for, and “Synchro” then uses the “Process” routine to scan the various channel capture buffers looking for those trigger conditions. Of course, “Process” is also looking for many other parameters at the same time.
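
If “Process” really does scan the capture buffers for the trigger condition in firmware, the core of such a scan might look like the sketch below. This is a guess at the idea only; the function name and buffer handling are mine, not taken from Bure’s code:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a firmware rising-edge trigger scan over
 * a capture buffer of unsigned 8-bit ADC samples. Returns the index
 * of the first upward threshold crossing, or -1 if none is found. */
int scan_rising_trigger(const uint8_t *buf, size_t len, uint8_t threshold)
{
    for (size_t i = 1; i < len; i++) {
        if (buf[i - 1] < threshold && buf[i] >= threshold)
            return (int)i;
    }
    return -1;
}
```

A falling-edge or pulse trigger would just change the comparison; the expensive part is that the CPU has to walk the whole buffer, which is exactly what offloading to the FPGA avoids.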

Now, I don’t claim to be as knowledgeable as you in C, so maybe you could point out which lines in App-process.c indicate that my observations here are incorrect.


I was planning to order a DSO Quad, but I think I will wait until this product becomes more mature.

One idea, based on vernarm’s “Wed May 11, 2011 8:52 pm” measurements: it looks like the bandwidth could be about 20 MHz (I guess) if Seeed Studio corrects this with an appropriate software digital filter. I guess this is possible because the problem is in the middle of the band and not so big (50% of nominal level?), rather than at the end of the band (which would be critical for filters). As I suggested for the DSO Nano, the sampling rate always needs to be the maximum possible, in this case to allow the best possible digital filter.
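
To make the digital-filter idea concrete, the correction would boil down to running the samples through a short FIR kernel like the sketch below. The taps shown in the test are a placeholder identity filter, not an equalizer designed for the Quad’s measured response:

```c
#include <stddef.h>

/* Minimal FIR convolution: out[n] = sum over k of taps[k] * in[n-k].
 * A real equalizer would use taps designed from the measured
 * frequency response; this only shows the mechanism. */
void fir_apply(const float *in, float *out, size_t len,
               const float *taps, size_t ntaps)
{
    for (size_t n = 0; n < len; n++) {
        float acc = 0.0f;
        for (size_t k = 0; k < ntaps && k <= n; k++)
            acc += taps[k] * in[n - k];
        out[n] = acc;
    }
}
```

With a single tap of 1.0 this passes the signal through unchanged; an equalizer would use a dozen or more taps fitted to the inverse of the measured roll-off, which is where the CPU cost concern comes from.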


I don’t have much time now, so I have to be very brief.

In process.h (APP code) is where the function Update_trig (trigger update) lives. In the very first lines, the __Set function is used to set parameters. How? BIOS.h has the definitions. At some point in the file you can see that some parameters are used to reset the FPGA, etc. I also don’t speak Chinese :frowning:.

Following that trail, you have to find the __Set function (which is in the SYS code, in the BIOS.c file). It’s a giant switch used to set parameters both in the ARM and in the FPGA. For simplicity’s sake, look for TRIGG_MODE, V_THRESHOLD, etc. These cases call Set_param(), which is also in the same file. Looking at Set_param(), there is a call to SendByte(), which sends a byte to the FPGA. I think the rest is self-explanatory…
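
For anyone following the trail, the chain described above can be sketched like this. The function names follow the posts, but the bodies, register codes, and framing are my guesses, not the actual BIOS.c code:

```c
#include <stdint.h>

/* Hypothetical sketch of the parameter path: the app calls __Set(),
 * a big switch routes the parameter, Set_param() frames it, and
 * SendByte() pushes each byte out to the FPGA. */

enum { TRIGG_MODE = 1, V_THRESHOLD = 2 };

static uint8_t last_sent;               /* stands in for the FPGA link */

static void SendByte(uint8_t b) { last_sent = b; }

static void Set_param(uint8_t reg, uint8_t value)
{
    SendByte(reg);                      /* which FPGA register to write */
    SendByte(value);                    /* the new parameter value      */
}

void __Set(int object, uint8_t value)
{
    switch (object) {                   /* the "giant switch" in BIOS.c */
    case TRIGG_MODE:  Set_param(0x10, value); break;
    case V_THRESHOLD: Set_param(0x11, value); break;
    default:          break;            /* many more cases in reality   */
    }
}
```

The point is simply that no triggering happens on the ARM side here: this path only delivers settings to the FPGA.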

Venarim, your measurements could indicate that there is possibly a phase inversion (op-amp instability). Or at least that’s what I think.

And Dejan (are you the Nokia one?), although an equalizer is a very good idea, Venarim has only tested discrete frequency steps. It is possible that the passband line would not be so easy to equalize (very small gains at specific frequencies). It would also draw more current (a 15-stage FIR filter at 72 MS/s means switching losses). But definitely a nice idea.

So the problem now is that somebody with a spectrum analyzer has to do some sweeps :slight_smile: in order to verify that “line”.

I think we should conduct the simplest test first, and that is to do the front-end compensation alignment as provided by HugeMan. If that does not flatten out Venarim’s test results, then go to the next level and consider a spectral analysis to identify what is happening here.

Some new threads have mentioned improvements with the latest firmware updates, so it is very important that we all apply these updates before more testing is continued.

I was told that my Quad is shipping so I can lend some help with testing as soon as it arrives.

Thanks everybody.

About the op-amp instability: I could agree, but the enhancement is not so great. It reaches about +6 dB, so…

I don’t agree with the digital filtering equalization.
The ADC is 8-bit, so its dynamic range is about 46 dB.
Also, the sampling rate is 72 MS/s, so, to avoid aliasing, we should be able to keep the signal as low as -46 dB at fc/2 = 36 MHz.
This is a constraint for correct sampling: any further consideration is useless if the ADC’s sampled signal is noisy. No digital processing can repair or equalize that.
Assuming a nominal 15 MHz of bandwidth (what HugeMan is supposed to obtain), it means we need an analog section capable of cutting over 43 dB in just one octave!
This is not impossible, of course, but it is very hard to obtain (an over-16-pole lowpass filter) while keeping decent flatness within the passband.

My goal is a GOOD analog section, with a flat passband from 0 to 10 MHz. By “flat” I mean within ±1 dB.

Does anyone know how the analog section switches are closed for the various settings?
It would be convenient to simulate that on a PC before touching any hardware.


I must agree with you, now that I have found the SYS source code, that the APP_Process.c “Update Trigger” function takes the trigger changes provided by the user interface and passes them to the SYS_Bios.c “SET” function, which then uses its “SET_PARAM” function to call its “SendByte” function, passing those new trigger parameters to the FPGA via the I2C serial data bus.

The APP_Process.c “Synchro” function is less clear. It definitely refreshes the LCD screen with the previously captured waveform, and it appears to search for min/max values. What is confusing to me is the use of trigger conditions prior to looking at a FIFO buffer input. Maybe you could shed some light on this aspect of that routine.

So at this juncture, we cannot tell whether the FPGA uses a hardware trigger circuit or just runs firmware that scans the captured data, similar to the Nano. I will search the Internet for examples of using an FPGA to form hardware trigger circuits. It is reasonable to expect that Bure had some reference application note for his trigger detection method.

After several hours of research, my initial findings support the following operations, and this seems reasonable to me.

  1. The ADC data sheet has no trigger capabilities defined. One thing I did discover is that there is an interleave mode where both ADC channels can be connected to the same clock, with one channel having inverted ADC output. It would seem that the single-clock approach would have been better (less chance of jitter), with the inverted channel’s output simply being 2’s-complemented before storage to its dual-port FIFO buffer.

  2. The FPGA data sheet reveals no comparators, either digital or analog, so a true hardware trigger is unlikely. Most likely the STM ships the trigger parameters to the FPGA, and the FPGA uses look-up tables to find the trigger condition, much as a firmware implementation would, but without losing CPU cycles.

  3. It also appears that when the STM is signaled that the dual-port FIFO buffer is ready, the STM reads the FIFO buffer until it is empty. The ADC side waits for an empty dual-port FIFO, and when it finds the buffer empty, new acquisition captures commence into this circular buffer until a trigger is detected. When a trigger is found (by the FPGA), the ADC captures FIFO/2 more bytes and stops sampling. This forces the trigger into the middle of the captured buffer data. These methods are not confirmed for the Quad FPGA, but they do reflect the related info I could find on the Internet. They allow the STM to use a slower clock to fetch the acquired data from the dual-port FIFO, which is done between acquisitions, and they allow the ADC to stuff the same (but empty) dual-port FIFO with high-speed samples during acquisition. What is neat is that this is done asynchronously, with no handshaking between the ADC and the STM.

Another thing I found in this data sheet is that each dual-port FIFO memory cell is 4K-bits, not 4K-bytes as the Nano ADC buffer was. So it is likely that Bure has joined multiple memory cells to achieve the desired 4K-byte buffers, if that is possible.

  4. This acquisition and trigger process described above seems to be in agreement with the published firmware code listings for the STM.
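
The mid-buffer trigger behavior described in item 3 can be modeled as a circular pre-trigger buffer. The sketch below is my own model of what the FPGA might be doing, not anything taken from Bure’s design:

```c
#include <stddef.h>
#include <stdint.h>

#define FIFO_SIZE 4096

/* Model of mid-buffer triggering: samples fill a circular buffer
 * continuously; once the trigger fires, capture FIFO_SIZE/2 more
 * samples and stop, leaving the trigger point mid-buffer. */
typedef struct {
    uint8_t buf[FIFO_SIZE];
    size_t  head;        /* next write position              */
    int     triggered;   /* has the trigger condition fired? */
    size_t  post_count;  /* post-trigger samples still to go */
    int     done;        /* acquisition complete             */
} capture_t;

void capture_sample(capture_t *c, uint8_t sample, int trigger_hit)
{
    if (c->done)
        return;
    c->buf[c->head] = sample;
    c->head = (c->head + 1) % FIFO_SIZE;   /* circular wrap */
    if (!c->triggered && trigger_hit) {
        c->triggered = 1;
        c->post_count = FIFO_SIZE / 2;     /* half the buffer after trigger */
    } else if (c->triggered && --c->post_count == 0) {
        c->done = 1;                       /* stop sampling */
    }
}
```

Because the pre-trigger half is whatever happened to be in the ring when the trigger fired, no handshaking with the STM is needed during acquisition, which matches the asynchronous behavior described above.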

Slimfish: I’m not the Nokia guy. :slight_smile:

I agree with the concerns about digital filters. But it is cheaper than fixing the hardware (which is the only real solution). Depending on the frequency characteristic, maybe the filter would not have to be so big as to cause problems with performance, power constipation, and additional noise. Also, it needs to be tested whether the frequency characteristic depends on temperature or the chosen TD (the resistance of the transistor switches on the input).

But, if the Quad works similarly to the Nano, MAYBE there could be a problem with mixing high and low frequencies depending on TD (no LP sampling filter), …, plus maybe missed trigger signals (I’m not sure about that, this is just an assumption). I posted some comments for the Nano; maybe they could be helpful for the Quad: viewtopic.php?f=12&t=1793&p=7157#p7157
But, based on some comments above, maybe the Quad works better than the Nano. I’m not sure.

Hi lygra,


  1. The ADC has no trigger. As far as I know, none of them do (in the sense of starting a conversion when a predefined limit is reached).
  2. An FPGA is a very complex device. Its advantage comes from its versatility in implementing digital logic that can run at hundreds of MHz. It can implement comparators (useful for triggers), adders, multipliers, filters, counters, registers… the list is endless.
  3. Memory: this FPGA has 80 kbit of memory (divided into 20 blocks, which can be grouped). So there is enough space to fit 2x4096 buffers (analog, 8-bit) and 2x8192 buffers (digital, 1-bit). But I don’t know the exact buffer sizes.
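
As a quick sanity check, those buffer sizes fill the quoted 80 kbit exactly. The sizes below are the ones quoted above, not confirmed against Bure’s actual FPGA configuration:

```c
/* 2 analog buffers of 4096 x 8-bit samples plus 2 digital buffers
 * of 8192 x 1-bit samples come to 65536 + 16384 = 81920 bits,
 * exactly the 80 kbit (20 blocks of 4 kbit) quoted above. */
int total_buffer_bits(void)
{
    int analog_bits  = 2 * 4096 * 8;
    int digital_bits = 2 * 8192 * 1;
    return analog_bits + digital_bits;
}
```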

How the scope works (or should work; I don’t want to analyze the code in depth), again based on my experience:
- The STM controls the display, peripherals, and commands to the FPGA.
- The FPGA controls ADC sampling, buffering, and triggering.

  1. The FPGA is commanded with a trigger mode, threshold, sampling speed, etc. The ADC is continuously sampling, and samples are stored in a circular buffer. Once a sample activates the trigger, the FPGA completes the buffer (based on the settings) and signals that to the STM µC.
  2. The µC transfers the buffer, processes it locally (computing amplitudes, frequencies, etc.), and displays it.
  3. The µC commands the FPGA to look for the next trigger.
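
Those three steps map onto a simple control loop on the STM side. Here is a sketch under the same assumptions; the FPGA interface is abstracted behind function pointers so it can be exercised standalone, and every name is a placeholder rather than real firmware:

```c
#include <stddef.h>
#include <stdint.h>

/* Placeholder FPGA interface: arm with trigger settings, poll the
 * capture-ready flag, read back the completed buffer. */
typedef struct {
    void (*arm)(uint8_t mode, uint8_t threshold);
    int  (*capture_ready)(void);
    void (*read_buffer)(uint8_t *dst, size_t len);
} fpga_if_t;

/* One arm -> wait -> fetch cycle (steps 1 and 2); the caller loops
 * and re-arms for the next trigger (step 3). Returns bytes fetched. */
size_t acquisition_cycle(const fpga_if_t *fpga, uint8_t mode,
                         uint8_t threshold, uint8_t *buf, size_t len)
{
    fpga->arm(mode, threshold);         /* step 1: command the FPGA  */
    while (!fpga->capture_ready())      /* wait for trigger + buffer */
        ;
    fpga->read_buffer(buf, len);        /* step 2: fetch the capture */
    return len;
}

/* Trivial simulated FPGA so the cycle can run on a PC. */
static void sim_arm(uint8_t m, uint8_t t) { (void)m; (void)t; }
static int  sim_ready(void) { return 1; }
static void sim_read(uint8_t *dst, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] = (uint8_t)i;            /* a dummy ramp "waveform" */
}
const fpga_if_t sim_fpga = { sim_arm, sim_ready, sim_read };
```

In the real firmware the wait would presumably be interrupt- or flag-driven rather than a busy poll, but the division of labor is the same.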

Maybe I explained everything too simply for you. If I offended you by oversimplifying, excuse me in advance. It was not my intention, for sure.

My work in this thread is done until I get a Quad in my hands for my own bandwidth testing. Too many people here have strayed from scientific observation to conjecture. That includes me, so I will stop until my test results can be observed and presented to this thread.


ROFLMAO, please tell me you meant to say “power consumption”.

Cheers Pete

The worst part of this situation is the almost total silence from the Company.
It’s a silence that breeds a lot of rumors…

HugeMan, would you explain what the future of (our) Quads will be (at least of mine)?

To anyone reading this message: I’ll sell my Quad at a discounted price (it is only a few weeks old). Contact me privately.

Actually, if I recall correctly, they (Seeed) offered to change our Quads (beta versions) for production versions, once they had ironed out all the faults, for the cost of the postage and the price difference. Considering the debacle this has turned out to be, I would have thought that they (Seeed) would be trying very hard to do some damage-limitation public relations by at least offering to change them free of charge once they are fixed.

They (Seeed) have, after all, mis-sold us a device under the description and sale of goods acts, since the item is clearly not as described.

Furthermore, if you had bought one to measure signals over 10 MHz, then it is also not fit for purpose.


Cheers Pete.

Although I will refrain from further analog bandwidth discussions until I can conduct my own tests, I will not bad-mouth the Quad’s analog bandwidth quality until someone performs the following:

  1. Loads the current firmware
  2. Performs the published compensation adjustments
  3. Repeats the sine-wave bandwidth tests in steps of 1 MHz or finer, while looking at the ADC input with a high-quality scope.

I will do this when I get my replacement Quad. Until someone does all these things, the Quad’s analog bandwidth remains undefined.

Beta hardware replacement discussions are only valid once it has been determined that the firmware and adjustment procedures cannot fix the problems. I don’t think we are there yet. Most things “beta” carry some level of frustration, so why should this be any different? Patience is a virtue that is hard to manage, yet necessary when dealing with “beta” products.

Bainsbunch has a previous post about “lightening up”, and I have to agree with that concept, even though “lightening up” now appears to have become roadkill. Seeed Studio does not control the folks who write the firmware changes, so Seeed certainly cannot expedite those updates. It would be most helpful, though, if Seeed would confirm when forum issues and concerns have been presented to the manufacturer for action.

By the way, all the above is just my opinion, not factual representation, and is therefore not subject to argument.

Whilst I am not qualified to make any technical input into this debate I have to say that Seeed have been conspicuously quiet on the subject.

I understand the risks of beta systems, but when I signed up for the trials by investing my money, I gained, I believe, a reasonable expectation of some response from the people who took my money.

One of the reasons for beta testing is to iron out issues like the ones being discussed in this thread, and other issues in another thread about firmware improvements, which incidentally are getting a response.

This thread has moved into a technical debate about the theory of what to test and how to test it, but it has not addressed the post that started it all off: Seeed telling us that the unit was not coming up to specification.

All I am asking for is a statement from Seeed as to what, if anything, they are doing to address the issue they themselves brought to our attention.

“Lightening” the debate does not mean staying quiet and hoping that Seeed will eventually speak up and say something.

Cheers Pete.

For Bainesbunch: yes, my English is horrible, plus the auto-correct option => funny sentences :slight_smile:

I meant to say that digital filters require more CPU work, so the CPU draws more current (power consumption), which reduces battery life.

See my May 14 post at viewtopic.php?f=22&t=2003

I want to correct a previous post, which is in error. Further examination of the FPGA (U6) caps C21 and C22 finds that I was mistaken when I thought both caps were 22 µF. Instead, the two caps are connected in parallel and C22 is a “105” (1 µF) cap, which will provide the necessary decoupling. I think my eyes crossed on that one.

OK, I decided to model the front end of the analog channels, using Multisim.

Here are the results:


The -3 dB point for signal 1 is about 700 kHz and for signal 2 about 200 kHz. Looking at the large resistors in the front end, the results actually do not surprise me: the RC constants are a far stretch from minimized. The resistances in the op-amp feedback loops are also far too large to expect wide overall bandwidth. Realistically, the front end should have been buffered, then scaled and sampled.
Keeping in mind that this simulation assumes ideal conditions (no stray capacitance, at the very least), I’d say the front end needs some work, but that’s just my 2c.
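
For reference, single-pole corners like those come straight from f = 1/(2πRC). Here is a quick helper for playing with the simulated values; the R and C in the example are made-up round numbers for illustration, not values taken from the Quad schematic:

```c
/* -3 dB corner frequency of a single-pole RC lowpass:
 * f = 1 / (2 * pi * R * C). */
#define RC_PI 3.14159265358979323846

double rc_corner_hz(double r_ohms, double c_farads)
{
    return 1.0 / (2.0 * RC_PI * r_ohms * c_farads);
}
```

For example, a hypothetical 10 kΩ source resistance against 22.7 pF of node capacitance already puts the corner near 700 kHz, which shows how easily large front-end resistors eat bandwidth.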