
How to optimize querying multiple unrelated tables in SQLite?

I have a scenario where I have to iterate through multiple tables in a fairly big SQLite database. The tables store information about planet positions in the sky over the years, so e.g. for Mars I have tables Mars_2000, Mars_2001 and so on. The table structure is always the same:
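The actual schema isn't shown in the post; as a purely hypothetical illustration of the layout (one identically-structured table per year), it might look something like this, sketched with Python's stdlib sqlite3:

```python
import sqlite3

# Hypothetical schema -- the post does not show the real columns.
# Each per-year table (Mars_2000, Mars_2001, ...) is identical.
SCHEMA = """
CREATE TABLE IF NOT EXISTS {table} (
    jd  REAL,  -- Julian date of the position (assumed column)
    ra  REAL,  -- right ascension (assumed column)
    dec REAL   -- declination (assumed column)
);
"""

conn = sqlite3.connect(":memory:")
for year in (2000, 2001, 2002):
    conn.execute(SCHEMA.format(table=f"Mars_{year}"))

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # one identical table per year
```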

The thing is that for a certain task I need to iterate through these tables, which costs a lot of time (for more than 10 queries it's painful).

I suppose that if I merged all the per-year tables into one big table, performance might be better, as one query against one big table should beat 50 queries against smaller ones. I want to make sure this would actually help before committing to it, as the database is humongous (around 20 GB) and reshaping it would take a while.
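The merge idea can be sketched as follows: copy each per-year table into one big table with an extra `year` column and index it, so a single range query replaces N per-table queries. Table and column names here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical per-year tables with a made-up single column `ra`.
for year, ra in ((2000, 10.0), (2001, 11.5)):
    conn.execute(f"CREATE TABLE Mars_{year} (ra REAL)")
    conn.execute(f"INSERT INTO Mars_{year} VALUES (?)", (ra,))

# One merged table with an added `year` column, so one indexed
# query can replace many per-table queries.
conn.execute("CREATE TABLE Mars (year INTEGER, ra REAL)")
for year in (2000, 2001):
    conn.execute(f"INSERT INTO Mars SELECT {year}, * FROM Mars_{year}")
conn.execute("CREATE INDEX idx_mars_year ON Mars(year)")

rows = conn.execute(
    "SELECT year, ra FROM Mars WHERE year BETWEEN 2000 AND 2001 "
    "ORDER BY year").fetchall()
print(rows)
```

With the index on `year`, SQLite can answer the range query without scanning the whole merged table, which is what makes the single-table layout competitive.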

Is the plan I just described viable? Is there any other solution for such a case?

It might be helpful, so I attach the function that produces my SQL query, which is unique for each table:


Answer

I solved the issue programmatically. The whole thing was done in Rust with the r2d2_sqlite library. I'm still doing a lot of queries, but now they run in threads, which reduced the execution time from 25 s to around 3 s. Here's the code:

As you can see, no optimization happened on the SQLite side, which kinda makes me feel I'm doing something wrong, but for now it's alright. It might be good to exert some more control over the number of spawned threads.

User contributions licensed under: CC BY-SA