
Proper use of MySQL Memory Tables for performance

I have a table with 60 columns (and thousands of rows) that references 15 catalog tables, 3 of which reference another 3 catalog tables (the whole database is less than 1 GB). These tables are used to perform an “onchange” search query in a web app: the user selects or types a value in a select/input field on the screen, and the results update automatically until they find what they are looking for.

As you can expect, this search query runs frequently, so MySQL struggles to fetch the results (with 10 concurrent users… for now).

I figured that using MEMORY tables (copies of the original tables) just for this query would be a good option, but I have never worked with that kind of table. I don’t know how to manage inserts/updates/deletes in these tables. Do I have to perform each action twice (once on the memory table and once on the original table)? I don’t mind doing that, but is there another/better way?
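Here is a minimal sketch of what I have in mind, with hypothetical table/column names (`items` / `items_mem`), where triggers on the InnoDB original mirror the writes into the MEMORY copy instead of the application doing it twice:

```sql
-- Hypothetical example: keep a MEMORY copy of `items` in sync via triggers.
-- Note: MEMORY tables cannot hold BLOB/TEXT columns, and CREATE TABLE ... AS SELECT
-- does not copy indexes, so indexes used by the search query must be added manually.
CREATE TABLE items_mem ENGINE=MEMORY AS SELECT * FROM items;

CREATE TRIGGER items_ai AFTER INSERT ON items FOR EACH ROW
  INSERT INTO items_mem (id, name, price) VALUES (NEW.id, NEW.name, NEW.price);

CREATE TRIGGER items_au AFTER UPDATE ON items FOR EACH ROW
  UPDATE items_mem SET name = NEW.name, price = NEW.price WHERE id = NEW.id;

CREATE TRIGGER items_ad AFTER DELETE ON items FOR EACH ROW
  DELETE FROM items_mem WHERE id = OLD.id;
```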

I understand the limitations of MEMORY tables regarding size (I can change max_heap_table_size) and the fact that they are emptied if the system crashes or restarts (I can use init_file to repopulate them), and I have enough RAM (the system uses only 6 GB of the 16 GB installed). Is there anything else I should keep in mind when using these kinds of tables?
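For reference, this is roughly the configuration I had in mind (hypothetical paths and sizes):

```ini
# my.cnf fragment (example values): raise the per-table MEMORY size limit and
# repopulate the in-memory copies automatically after a server restart.
[mysqld]
max_heap_table_size = 1G
init_file = /etc/mysql/reload_memory_tables.sql
```

```sql
-- /etc/mysql/reload_memory_tables.sql (hypothetical)
-- MEMORY tables come back empty after a restart; refill them from the InnoDB originals.
INSERT INTO items_mem SELECT * FROM items;
```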


Answer

If your entire database fits in innodb_buffer_pool_size, chances are that ENGINE=MEMORY will actually be slower, especially if you have any writes going into it, because the MEMORY engine has the same locking characteristics as MyISAM (table-level locking on writes). So in a read-only workload where all the data fits into the buffer pool, InnoDB performance should be similar. In a read-write workload, InnoDB will massively outperform it.
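As a rough sanity check (a sketch; replace `your_db` with the actual schema name), you can compare the total data-plus-index size against the buffer pool setting:

```sql
-- Total data + index size of the schema, in MB.
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS total_mb
FROM information_schema.tables
WHERE table_schema = 'your_db';

-- Buffer pool size, in bytes. If total_mb fits comfortably below this,
-- the working set stays cached and InnoDB reads are served from memory.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
```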

By all means, test it for your specific usage. But don’t be surprised to discover that it is actually slower for your workload.
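A minimal way to test, assuming a MEMORY copy such as the hypothetical `items_mem` above already exists, is to time the same hot search query against both copies:

```sql
-- Compare the hot query on the InnoDB original vs. the MEMORY copy.
-- SQL_NO_CACHE only matters on pre-8.0 servers that still have the query cache.
SET profiling = 1;
SELECT SQL_NO_CACHE COUNT(*) FROM items     WHERE name LIKE 'foo%';
SELECT SQL_NO_CACHE COUNT(*) FROM items_mem WHERE name LIKE 'foo%';
SHOW PROFILES;
```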
