I used to manage a MySQL DB that *GREW* at the rate of roughly ONE GIGABYTE every 20 minutes. There were several tables in the DB, but probably 90% of each gigabyte was concentrated in one of three tables. And those three tables each had two indexes. And *AT THE SAME TIME* the DB was growing at that rate, we could run a SQL query to find 100 particular records from *EACH* of those three tables, use Java code (not JS) to build a GRAPH of the 2100 data points thus represented (7 data points per record), and present that graph to the user in CONSIDERABLY less than one second.
Oh...did I mention that the database was also trimming old data at the same time, keeping the total DB size to a user-specified size between 20GB and 50GB?
Meaning that the total number of records in those three tables ranged from 100 million to 250 million at any given time.
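If you want to sanity-check those numbers yourself, the back-of-envelope arithmetic works out. This is just a sketch of the math, not the original code; the use of decimal gigabytes and the implied per-record size are my own simplifying assumptions:

```java
// Back-of-envelope check on the figures above:
// growth of 1 GB per 20 minutes, a size cap of 20-50 GB,
// and 100-250 million records implied by that cap.
public class DbEnvelope {
    // 1 GB every 20 minutes = 3 GB per hour.
    static final double GROWTH_GB_PER_HOUR = 3.0;

    // How many hours of data the trimmer can retain under a given size cap.
    static double retentionHours(double capGb) {
        return capGb / GROWTH_GB_PER_HOUR;
    }

    // Implied average bytes per record (decimal GB for simplicity).
    static double bytesPerRecord(double sizeGb, double records) {
        return sizeGb * 1e9 / records;
    }

    public static void main(String[] args) {
        System.out.printf("retention at 20 GB cap: %.1f hours%n", retentionHours(20)); // ~6.7
        System.out.printf("retention at 50 GB cap: %.1f hours%n", retentionHours(50)); // ~16.7
        // Both ends of the range imply the same average record size,
        // which is why the 100M-250M range is internally consistent:
        System.out.printf("bytes/record at 20 GB / 100M: %.0f%n", bytesPerRecord(20, 100e6)); // 200
        System.out.printf("bytes/record at 50 GB / 250M: %.0f%n", bytesPerRecord(50, 250e6)); // 200
    }
}
```

So the trimmer was keeping roughly 7 to 17 hours of data around, at about 200 bytes per record including index overhead. Perfectly ordinary territory for MySQL with decent indexes.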
And you are worried about a piddling 20,000 records? Really?
An optimist sees the glass as half full.
A pessimist sees the glass as half empty.
A realist drinks it no matter how much there is.