View Full Version : timing: how to determine lag?

rnd me
11-03-2012, 08:32 AM
I've come across a simple yet rather interesting problem that i can't find a simple solution to.

I need to figure out how to exactly sync two clients, or at least determine the one-way lag between them.

i have a music app that loads a list of http-viewable mp3s and plays them using an <audio> tag. i wanted to hear it around the house, so i wrote a lightweight shell client that talks to this music app, getting its mp3 url and using its own <audio> tag to play that same url on a different device.

i am using a comet/EventSource server to broadcast events:
"client > server > all clients"

The master player broadcasts its <audio>.currentTime every 10 seconds for the clients to sync upon, but the lag between the mp3 server, the master player app, the event server, and each client is unpredictable.

This means the songs don't sync up exactly, which is annoying when you're in a part of the house where two devices overlap. Even an 80ms discrepancy is noticeable; it sounds like a parking garage. around 120-200ms it sounds bouncy, and longer delays are just plain annoying.

if i broadcast the master position as 1m23s132ms, it's 20-120ms behind by the time the clients get it. this lag can vary with each ping, so it needs to be computed on-the-fly each time.

Theory: If i can determine how far apart each client's system clock is from the master app's clock, i can determine an EXACT offset by bundling a timestamp with each response, and subtracting the difference. that's the solution, i know it is...

How can one tell how far apart two clocks are, if one doesn't know how long it takes to check the time?

for example, if it comes back that they are 150ms apart, are they basically synced and the trip took 149ms, or are they 100ms apart and the trip took 50ms? it feels like i don't have enough info to solve the equation.

if it helps to visualize, the challenge is the same as determining what portion of a regular ajax request is spent getting to the server, and what portion is spent on the journey back. While it's easy to log the round-trip time, the half-leg distance is what's important, and 50% of the total is not correct in my tests...
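for what it's worth, this "how do i split the round trip?" question is exactly what NTP solves. if the client records when it sent the request and when the reply came back, and the server stamps the reply with its own receive/send times, you can estimate the offset under the assumption that the two legs are roughly symmetric. a minimal sketch (the helper names are mine, not from the app above):

```javascript
// NTP-style clock offset estimate (hypothetical helpers, not from the thread).
// t0: client send time, t1: server receive time,
// t2: server send time,  t3: client receive time.
// All in ms; t0/t3 are on the client clock, t1/t2 on the server clock.
// Assumes the outbound and return legs take about the same time.
function clockOffset(t0, t1, t2, t3) {
  // how far the server clock is ahead of the client clock
  return ((t1 - t0) + (t2 - t3)) / 2;
}

function roundTripDelay(t0, t1, t2, t3) {
  // total time on the wire, excluding time spent on the server
  return (t3 - t0) - (t2 - t1);
}
```

e.g. if the server is really 100ms ahead and each leg takes 25ms: the client sends at t0=1000, the server stamps t1=1125 and t2=1130, and the client receives at t3=1055, giving an offset of 100 and a round trip of 50. the symmetry assumption is the weak spot; if the legs are lopsided the error can be up to half the round trip.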

any suggestions or ideas? i'm VERY open here.

this has stumped me for a couple weeks, please help. pretty please, with &%^# sugar on top!

rnd me
11-04-2012, 12:18 AM
ok, i got it down to about ±20ms using a simple ajax routine.

i was able to do this because i noticed in chrome's inspector's Net tab that my connections were 98% lag. that is to say, most of the round-trip time of an ajax request is spent reaching the server; once there, the response returns to my browser in about 2ms.

so if your Net tab looks like that (nearly the whole round trip spent waiting on the request, with the response arriving almost instantly, rather than an even split between the two legs), you can do the same.

here i make a request the first time, grab the delay, and build a Date emulator so that i can compare my local estimate to the server's clock, which is what happens on every execution except the first:

aGet(location.href, function (e) {
    // first request: establish the offset between local and server clocks.
    // e is the server's timestamp; add ~2ms for the (tiny) response leg.
    var n = +new Date, serverTime = (e * 1) + 2;
    var off = serverTime - n;
    window.dt = function () { return +new Date() + off; };
    console.log(off);

    aGet(location.href, function (e) {
        // later requests: compare the local estimate to the server's clock.
        var serverTime = (e * 1) + 2;
        console.log(serverTime - dt());
    });
});

window.dt here is like Date.now(), but returns the current time on the server instead of the local time.
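with a shared clock like that, keeping a client's <audio> in step only requires the master to broadcast its position together with a server timestamp; the client projects the position forward using dt(). a sketch under my own assumptions (the message shape and the 50ms seek threshold are made up, not from the app above):

```javascript
// Hypothetical sketch: applying the shared clock to sync a client's <audio>.
// Assumes a global dt() that returns server time in ms, and that the master
// broadcasts { position: <seconds into the track>, sentAt: <server ms> }.
function syncAudio(audio, msg) {
  var elapsed = (dt() - msg.sentAt) / 1000;  // seconds since the master read its position
  var target = msg.position + elapsed;       // where the track should be right now
  if (Math.abs(audio.currentTime - target) > 0.05) {
    audio.currentTime = target;              // only seek when drift exceeds 50ms
  }
}
```

the threshold matters: seeking on every broadcast causes audible skips, so it's better to only correct when the drift is worse than what you're trying to cure.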