I would ignore the code, per se. I would just use a debugger and figure out where to set a breakpoint that will show me the name of the file that is about to be downloaded (or, when you ask for a chart, shown in another window).
I would try to determine the PATTERN of the file names (URLs) used for various purposes.
Then, if I only needed one or two *kinds* of files/URLs, I'd just build an interface to download only those.
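For example, here's a minimal sketch of what I mean, assuming (purely hypothetically) that the files turn out to live at URLs like https://example.com/charts/<symbol>.csv — the real pattern is whatever the debugger reveals:

```python
import requests

# Hypothetical URL pattern -- substitute whatever pattern the
# debugger / network inspection actually shows you.
BASE_URL = "https://example.com/charts/{symbol}.csv"

def download_chart(symbol: str, dest: str) -> None:
    """Download one chart file whose URL follows the assumed pattern."""
    url = BASE_URL.format(symbol=symbol)
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # fail loudly if the server rejects the request
    with open(dest, "wb") as f:
        f.write(response.content)

if __name__ == "__main__":
    download_chart("ABC", "ABC.csv")
```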
*ALL* of this assumes that they don't have some sort of heavy protection in their system that rejects requests that don't come from their own web site.
Actually, they probably do have that. When you submit anything to server-side code, the request headers include a Referer header (exposed to server-side code as HTTP_REFERER) that tells the server the full URL of the page making the request. Many servers are set up to reject requests from pages not from the same site and/or not from a specific page on that site.
So, again, all of this likely depends upon your level of expertise in coding and your ability to "spoof" the referring page.
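To make that concrete, here's a minimal sketch of what "spoofing" the header looks like from a script — the URLs are hypothetical placeholders, and whether it works at all depends on how the site actually checks:

```python
import requests

# Hypothetical URLs -- stand-ins for whatever the real site uses.
FILE_URL = "https://example.com/data/report.csv"
REFERRING_PAGE = "https://example.com/charts/viewer.html"

# Send the request with a Referer header that claims it came from
# one of the site's own pages. Most HTTP client libraries let you
# set arbitrary headers this way.
response = requests.get(
    FILE_URL,
    headers={"Referer": REFERRING_PAGE},
    timeout=30,
)
response.raise_for_status()

with open("report.csv", "wb") as f:
    f.write(response.content)
```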
On top of which, if that site has a legal "terms of service" agreement, you may well be breaking the law.
An optimist sees the glass as half full.
A pessimist sees the glass as half empty.
A realist drinks it no matter how much there is.