Well, personally, I would probably *NOT* choose to use INNODB for a table with millions of records. But I suppose it depends on what your data access patterns are.
I've never tried to do things such as COUNT(*) based on an index in INNODB with such a large table. (I do it all the time with tables of thousands and tens of thousands of records, though.) So I don't really know how bad the performance might be.
I would comment, though, that the worst case is that INNODB has to scan the index, which should still beat scanning the records themselves, since index entries are much smaller than full rows.
I do have a similar situation (happens to be using MyISAM, but it involves joining a pair of tables and so joining indexes, which even with MyISAM is not super fast), and my solution was to build a cache table.
Somewhat like your suggestion of creating a separate table of counts, but with the difference that the table starts empty. Then I use a Stored Procedure to find a given count. If I find it in the cache, then I'm done. If not, then I run the query to get the count from the join and I store that result in the cache table of counts, so it is there next time.

This happens to be data that depends upon city and state, so the most popular (and populous) cities will be requested most often. Only the first request for a particular city and state has to do the expensive query. From then on, the data is in the cache table.

It's really simple to do and well worth the effort. You don't have to guess which data is "important"; the natural action of queries will soon build the cache with the most important values.
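In case it helps, here's a minimal sketch of that lazy count-cache idea. It uses Python with sqlite3 just so it's self-contained; in practice you'd put the same logic in a MySQL Stored Procedure. The table and column names (`people`, `count_cache`, `city`, `state`) are made up for the example:

```python
import sqlite3

# Hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE people (name TEXT, city TEXT, state TEXT)")
cur.execute("""CREATE TABLE count_cache (
                   city TEXT, state TEXT, cnt INTEGER,
                   PRIMARY KEY (city, state))""")
cur.executemany("INSERT INTO people VALUES (?, ?, ?)",
                [("Ann", "Austin", "TX"),
                 ("Bob", "Austin", "TX"),
                 ("Cy",  "Reno",   "NV")])

def cached_count(city, state):
    # Check the cache table first; the table starts empty.
    row = cur.execute("SELECT cnt FROM count_cache WHERE city=? AND state=?",
                      (city, state)).fetchone()
    if row:
        return row[0]  # cache hit: skip the expensive query
    # Cache miss: run the expensive count once and remember the result.
    (cnt,) = cur.execute("SELECT COUNT(*) FROM people WHERE city=? AND state=?",
                         (city, state)).fetchone()
    cur.execute("INSERT INTO count_cache VALUES (?, ?, ?)", (city, state, cnt))
    return cnt
```

The first call for a given city/state pays for the real COUNT(*); every later call is a single primary-key lookup in the cache table.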
An optimist sees the glass as half full.
A pessimist sees the glass as half empty.
A realist drinks it no matter how much there is.