Interface to CLK SYNC Control

The Core v2 schematics (https://raw.githubusercontent.com/SeeedDocument/Respeaker_V2/master/img/SYS.png) show an interface from the I2S bus and CPU to the CLK SYNC Control of the microphone array. Does this mean that the clock of the array can be accessed in order to synchronize several arrays across several Core boards via a common timestamp (GPS time)? I do not mean just using the GPS timestamp as the starting time of a recording, but doing sample-accurate synchronization of several Core v2's by truly synchronizing the A/D clocks (with a timed WiFi signal, the GPS time, or a combination of both), which would dramatically increase the accuracy of sound source localization with custom/third-party (post-processing) algorithms.



Thanks for any information on whether (and how) this is possible!



Max Ringler

University of Vienna

Hi Max,



This is a great question… The simple answer is no, this is not possible.



The Core v2.0 is not a real-time system. As such, when we start to record the audio, we have to send a message to get the system time/timestamp or the GPS time/timestamp. There is overhead in sending this message and getting the time, and we cannot guarantee how long that will take. The real GPS time on one device may be off by only 0.05 ms, while another may be off by 0.15 ms, leaving a 0.1 ms difference that we can neither correct nor detect. This would not allow a highly accurate localization to be recorded. If you run FFT-based localization algorithms on this data, you would end up with bad results.

Note: I have no idea what the ± delay would actually be; it could be much larger or smaller, but it would not be consistent.
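To put that 0.1 ms residual offset in perspective, here is a minimal back-of-the-envelope sketch (not ReSpeaker code; the function name and speed-of-sound constant are mine) of how an uncorrected clock offset between two recorders turns into a time-difference-of-arrival error, and hence a ranging error for source localization:

```python
# Toy illustration: an uncorrected clock offset between two recorders
# appears directly as a time-difference-of-arrival (TDOA) error, which
# maps to a distance error via the speed of sound.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def tdoa_range_error(clock_offset_s):
    """Worst-case ranging error (m) caused by an uncorrected clock offset."""
    return SPEED_OF_SOUND * clock_offset_s

# The 0.1 ms offset from the example above:
err = tdoa_range_error(0.1e-3)
print(f"{err * 100:.1f} cm")  # -> 3.4 cm
```

Whether ~3 cm of error matters depends entirely on the array geometry and the application, but since the offset is neither constant nor detectable, it cannot be calibrated out afterwards.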



From the chat I had with our engineers, there is no simple way to reduce this latency on the board as it is.

Best of luck Max, and please let us know if you find something. The engineers are hungry for knowledge!

Hi Seth,



Thanks for your (even if currently negative) reply!



It would not necessarily have to be a GPS timestamp; any other shared/common periodic synchronization signal that controls the clock of the A/D converter of the microphone array(s) would do. The recordings would also not necessarily have to start synchronously, as long as they rely on an underlying shared timestamp that is periodically injected into the A/D clocks of all Core v2's in the meta-array.



The procedure I was thinking of would be similar to the one described here: http://lac.linuxaudio.org/2014/papers/18.pdf



“Analogue to digital conversion is performed by an AD1974, a 4 channel, 24 bit ADC with integrated phase-locked loop (PLL). The internal sampling clock of the AD1974 is derived from the word clock provided by the synchronisation module. Wireless synchronisation within the WiLMA system is established via a 1 pulse-per-second timestamp signal that is broadcasted by the master module on a sub-GHz ISM band. The synchronisation module is populated with a voltage controlled oscillator (VCXO) that is disciplined by a frequency locked loop (FLL) and a subsequent frequency divider to obtain the 48 kHz word clock for the ADC. The sample accurate timestamps generated by the synchronisation module is multiplexed with the output data of the ADC into a 8-channel/32 bit time-division multiplexing (TDM) stream.”
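The disciplining idea in the quoted passage can be sketched in a few lines. This is a hedged toy simulation, not WiLMA's actual firmware: the loop gain, oscillator model, and nominal frequency are invented for illustration. Each second, the oscillator's cycles between PPS edges are counted and a frequency-locked loop steers the VCXO toward the nominal count, from which the 48 kHz word clock is then divided down:

```python
# Toy FLL sketch (illustrative values only): discipline a local oscillator
# with a 1 pulse-per-second (PPS) reference by counting cycles between
# PPS edges and steering the oscillator toward the nominal count.

NOMINAL_HZ = 12_288_000   # e.g. 256 * 48 kHz master clock (assumed value)
LOOP_GAIN = 0.5           # proportional FLL gain (invented for this demo)

def discipline(vcxo_hz, seconds=20):
    """Return the oscillator frequency after `seconds` of PPS corrections."""
    for _ in range(seconds):
        cycles = vcxo_hz              # cycles counted in one PPS interval
        error = cycles - NOMINAL_HZ   # frequency error in Hz
        vcxo_hz -= LOOP_GAIN * error  # nudge the VCXO control voltage
    return vcxo_hz

# Start 50 ppm high; the loop converges back toward 12.288 MHz,
# and the ADC word clock is NOMINAL_HZ / 256 = 48 kHz.
final = discipline(NOMINAL_HZ * (1 + 50e-6))
print(abs(final - NOMINAL_HZ) < 1.0)  # -> True
```

The point of the sketch is only that the control loop lives entirely in dedicated clocking hardware (VCXO, counter, divider); the question for the Core v2 is whether its CLK SYNC interface exposes the ADC clock path so that an external, PPS-disciplined word clock could drive it.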



I am collaborating with the people who built the WiLMA system, so in general they know what to do; the question is whether the ReSpeaker Core v2 allows for such a procedure.

Hi again,



As my topic has attracted quite some interest/reads, I want to bring it up again on the list and ask for thoughts/experiences on injecting a (common) periodic synchronization timestamp into the clock of the A/D converter (across several devices), as per the procedure described in my last post.