
Is creating a new table for each `list` the best way to speed up database queries here?

My Postgres database has lists and list_items tables. There is an index on the list_id column of list_items. To list all items in a list, I search list_items for rows whose list_id equals the id of the row in the lists table.

lists:

 id | title
----+-------
  0 | 'foo'
  1 | 'bar'

list_items:

 id | content  | list_id
----+----------+---------
  0 | 'hello'  |       0
  1 | 'world'  |       1
  2 | 'foobar' |       0
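
A minimal sketch of the schema and the query I run (the column types here are assumptions, not my exact DDL):

CREATE TABLE lists (
    id    bigint PRIMARY KEY,
    title text NOT NULL
);

CREATE TABLE list_items (
    id      bigint PRIMARY KEY,
    content text NOT NULL,
    list_id bigint NOT NULL REFERENCES lists (id)
);

CREATE INDEX list_items_list_id_idx ON list_items (list_id);

-- list all items in list 0
SELECT content FROM list_items WHERE list_id = 0;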

However, as the number of lists and list items increases, it will take longer to search through list_items and match them to a list.


The current implementation works for me. But suppose I wanted to speed things up. I was thinking of adding an extra column to the lists table called items_list. When a list is created, a new table would be created and its name stored in items_list; when a list is deleted, the table referenced by items_list would be dropped (a sketch of the DDL follows the example tables below). It feels awkward, but it seems like a good way to speed up queries.

lists:

 id | title | items_list
----+-------+----------------
  0 | 'foo' | 'list_0_items'
  1 | 'bar' | 'list_1_items'

list_0_items:

 id | content
----+----------
  0 | 'hello'
  1 | 'foobar'

list_1_items:

 id | content
----+---------
  0 | 'world'
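
Roughly, the DDL I imagine the application running for each list (table and column names are only illustrative):

-- on creating list 0
CREATE TABLE list_0_items (
    id      bigint PRIMARY KEY,
    content text NOT NULL
);
INSERT INTO lists (id, title, items_list) VALUES (0, 'foo', 'list_0_items');

-- on deleting list 0
DELETE FROM lists WHERE id = 0;
DROP TABLE list_0_items;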

If I do end up needing to speed things up, is this a good way to do it?

Are there other methods or databases (such as NoSQL) that I should try instead?


Answer

Decisions like this always depend on what your final queries are likely to look like. Your initial solution works well in most cases, provided you put indexes on the lookup columns; then you can simply join the tables on the ids when you run your searches. Keeping all list items in a single table also gives you the benefit of normalization: a specific item only takes up space in your database once.
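
For example, a single indexed join covers the common lookups (a sketch, assuming the schema from the question):

-- items for one list
SELECT i.content
FROM lists l
JOIN list_items i ON i.list_id = l.id
WHERE l.id = 0;

-- or everything at once, grouped by list
SELECT l.title, i.content
FROM lists l
JOIN list_items i ON i.list_id = l.id
ORDER BY l.id;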

Sometimes you might partition a table so each part holds a subset of the data, say all of the items that start with a particular letter, but you wouldn't do that sort of thing until the table reached a certain size. Your multiple-table solution does work, but you are going to need UNIONs over lots of tables if you want to pull the data together in a single query.
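
With one table per list, the equivalent export looks something like this, and it grows by one UNION per list (sketch only, titles hard-coded for illustration):

SELECT 'foo' AS title, content FROM list_0_items
UNION ALL
SELECT 'bar' AS title, content FROM list_1_items;
-- ...one more UNION ALL for every additional list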

If you never need to look up individual items and just want to export them as-is, you could consider jsonb, which lets you store a binary JSON object directly in the row alongside the list details. You can query the items inside the JSON, but it is not as efficient as an indexed column for quick lookups. Using your example, you would end up with a single table:

 id | title | list_items
----+-------+----------------------
  0 | 'foo' | ["hello", "foobar"]
  1 | 'bar' | ["world"]
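
A sketch of the single-table jsonb version and a couple of queries (the GIN index is optional and only an assumption about what you might need):

CREATE TABLE lists (
    id         bigint PRIMARY KEY,
    title      text NOT NULL,
    list_items jsonb
);

-- optional: speeds up containment queries such as @>
CREATE INDEX lists_items_gin ON lists USING gin (list_items);

-- which lists contain the item 'hello'?
SELECT id, title FROM lists WHERE list_items @> '["hello"]';

-- expand one list's items into rows
SELECT jsonb_array_elements_text(list_items) AS content
FROM lists WHERE id = 0;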

If you had more data relating to each item, you could make the item the key or the value (depending on your use case) to form a dictionary, or even go for a tree of data:

 id | title | list_items
----+-------+---------------------------------
  0 | 'foo' | {"i1": "hello", "i3": "foobar"}
  1 | 'bar' | {"i2": "world"}

 id | title | list_items
----+-------+----------------------------------------------------
  0 | 'foo' | {"hello": {"note": "hello info"}, "foobar": null}
  1 | 'bar' | {"world": {"note": "some notes"}}
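
Queries against the keyed forms use the -> and ->> operators, for example (again just a sketch against the illustrative keys above):

-- dictionary form: value stored under key 'i1'
SELECT list_items ->> 'i1' FROM lists WHERE id = 0;

-- tree form: the note attached to item 'hello'
SELECT list_items -> 'hello' ->> 'note' FROM lists WHERE id = 0;

-- tree form: which lists have an entry for 'world'?
SELECT id, title FROM lists WHERE list_items ? 'world';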