I'm trying to detect two kinds of URLs in some text: normal URLs that point to other websites or pages, and URLs that point to files on the client's site.

I have this for URLs which point to files, and it seems to work.

Code:
$parsed_txt = ereg_replace("(http://www.mysite.com/files/)([_a-zA-Z0-9\-\/.]+)", "<a href=\"/files/\\2\" target=\"blank\" google tracking stuff...>\\0</a>", $parsed_txt);
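To show what that pass does, here's the same thing written with preg_replace (which is what I can actually test with locally); the sample text and file name are just made up, and I've left the Google tracking attributes out:

Code:
<?php
// Made-up sample text containing one of the file URLs
$parsed_txt = 'Grab the report at http://www.mysite.com/files/report_2009.pdf today.';

// preg equivalent of the ereg_replace above (tracking attributes omitted)
$parsed_txt = preg_replace(
    '#(http://www\.mysite\.com/files/)([_a-zA-Z0-9\-/.]+)#',
    '<a href="/files/$2" target="blank">$0</a>',
    $parsed_txt
);

echo $parsed_txt;
// Grab the report at <a href="/files/report_2009.pdf" target="blank">http://www.mysite.com/files/report_2009.pdf</a> today.
?>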
After that I also want to pick up and replace URLs which point to ordinary pages or websites (these links don't need the Google tracking stuff). I have the code below, but what happens is that it also re-parses the file links that were already converted above!

Code:
$parsed_txt = ereg_replace("[[:alpha:]]+://[^<>[:space:]]+[[:alnum:]/]", "<a href=\"\\0\" target=\"blank\">\\0</a>", $parsed_txt);
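Just to show exactly what goes wrong, here's that second pattern (again written as a preg equivalent) run over the output of the first pass; the file link's visible text gets wrapped in a second anchor:

Code:
<?php
// Output from the first pass in the sketch above
$parsed_txt = 'Grab the report at <a href="/files/report_2009.pdf" target="blank">'
            . 'http://www.mysite.com/files/report_2009.pdf</a> today.';

// preg equivalent of the second ereg_replace
$parsed_txt = preg_replace(
    '#[[:alpha:]]+://[^<>[:space:]]+[[:alnum:]/]#',
    '<a href="$0" target="blank">$0</a>',
    $parsed_txt
);

echo $parsed_txt;
// The URL that forms the first anchor's link text is matched again,
// so the file link ends up as an <a> nested inside an <a>.
?>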
Normal links shouldn't have the /mysite.com/files/ part in them, or the '.' for that matter. Can anyone enlighten me as to how I could write this second replace? I've been trying for about 40 minutes with no luck...
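For reference, this is roughly the shape of thing I've been attempting: the same generic replace, but refusing to touch anything under www.mysite.com/files/. I've only been able to write it in preg syntax (I don't think ereg supports lookaheads at all), and I'm not sure a lookahead is even the right way to go about it:

Code:
<?php
// Made-up sample: one external link plus an already-converted file link
$parsed_txt = 'See http://example.com/page and '
            . '<a href="/files/report_2009.pdf" target="blank">'
            . 'http://www.mysite.com/files/report_2009.pdf</a> today.';

// Negative lookahead straight after the scheme is meant to skip the files URLs
$parsed_txt = preg_replace(
    '#[[:alpha:]]+://(?!www\.mysite\.com/files/)[^<>[:space:]]+[[:alnum:]/]#',
    '<a href="$0" target="blank">$0</a>',
    $parsed_txt
);

echo $parsed_txt;
// http://example.com/page gets wrapped in an anchor; the files link is left alone.
?>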