Originally Posted by Old Pedant
Hmmm...correct me if I'm wrong, then.
If you had 1000 web browsers all hitting you with long AJAX wait times, then wouldn't each one require a separate thread on the server? How else does the server keep track of which request a given server "page" is responding to?
Don't get me wrong: I *can* imagine a server-side architecture that could say "oh, that HTTP response isn't ready yet, so I'll just put it in a queue and use this thread for a different HTTP request." But does PHP actually *do* that?
when i was big into node.js and socket.io, i wrote a server that turned around 1 million HTTP transactions per hour at 7% cpu on what's now a five-year-old box. it trounced the php competition, especially anything using sql for state storage, since node can use a plain JS variable instead of a DB to pass data between connections. PHP's APC is a ram cache, and though it still needs serialization, it's WAY more apropos than a DB for anything real-time.
if you are broadcasting, then a minimal php routine could be under 1mb, so figure 2.5mb/user as a starting place. if everyone needs their own history maintained, authentication, and the whole 9 yards, 20mb/user is fairly conservative. if your server has 4gb of ram and your app needs 4mb/user, that's about 1000 users on an apache/php rig.
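that back-of-envelope math is just ram divided by footprint per user; here's the calculation spelled out (the 4mb/20mb figures are my ballpark estimates from above, not measurements):

```javascript
// rough capacity estimate: concurrent users = usable RAM / RAM per user
function maxUsers(ramBytes, perUserBytes) {
  return Math.floor(ramBytes / perUserBytes);
}

const MB = 1024 ** 2;
const GB = 1024 ** 3;

console.log(maxUsers(4 * GB, 4 * MB));  // lean app: ~1000 users on a 4gb box
console.log(maxUsers(4 * GB, 20 * MB)); // heavy per-user state: ~200 users
```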
for the older comet techs (forever iframe, jsonp, ajax long-polling), you really need a "keep-alive" data packet sent just under every 30 seconds; accounting for lag and buffering, 27.5 seconds is the longest interval i've found to be safe. turning around 2,200 connections per hour should be no problem for an operational webserver. 10,000 per hour should be quite doable, and that provides 1,000 users a 12 msg/min message volume.
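the long-poll loop itself is simple; here's a hedged node-style sketch of it (function names are mine, and a php version would use the same 27.5 s timing with a flush loop instead of a timer): hold the response open, write a keep-alive packet on the interval, and end the response the moment real data arrives.

```javascript
// comet-style long-poll handler: keep the connection open, flush a
// keep-alive packet every 27.5 s so proxies don't kill it at ~30 s.
const KEEP_ALIVE_MS = 27500; // longest interval i've found to be safe

function longPoll(req, res, getMessage) {
  res.setHeader('Content-Type', 'text/plain');
  const timer = setInterval(() => res.write('\n'), KEEP_ALIVE_MS);
  const done = (msg) => { clearInterval(timer); res.end(msg); };
  getMessage(done);                           // fire done(msg) when data is ready
  req.on('close', () => clearInterval(timer)); // client gave up: stop the timer
}
```

the client re-issues the request as soon as the response ends, which is where the connections-per-hour numbers above come from.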
so, i guess the answer would be yes, php and apache can do that. node.js can do 10-1000X more, but php can handle the 500 users the OP needs...