I keep going back to this for some reason. I've read numerous pages on the internet about how to combat
bots, and every one of them wants you to check whether HTTP_USER_AGENT matches a known bot. Doesn't this
seem kinda backwards from the normal way things are checked? I mean, instead of checking against a mega-long
list of bad bots, or having to add each and every one to an .htaccess file, shouldn't we just be checking the
user agent or server name against an 'accepted' list and sending everything else away? Like a login check does,
or a doorman at a club: you ain't on the list, go home!
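
Something like this is what I'm picturing (just a rough sketch, assuming Apache with mod_rewrite enabled; the names on the 'accepted' list are made-up placeholders):

    RewriteEngine On
    # Only let through user agents that match the accepted list (placeholder names)
    RewriteCond %{HTTP_USER_AGENT} !(Mozilla|Opera|Googlebot) [NC]
    # Everyone else gets a 403: you ain't on the list, go home
    RewriteRule .* - [F,L]

(I realize the user-agent string can be faked, but at least the list stays short.)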
Just wondering, since new bots come out every day. And which is better to compare against: the user agent or the server name?
It just seems like there should be an easier way.