How do I handle very large data (100,000 JavaScript objects) in a webpack-react-node based website?

I have a react-node based website in which I need to work on a dataset of close to 100,000 JavaScript objects, all containing the same set of keys. Various features have to be implemented, such as showing suggestions while searching by name, sorting, and filtering, all of which need to feel instantaneous.

There are a few options I thought of:

1.) Storing all the data in a relational database (SQL), with the keys as the column names.

2.) Storing all the data in a non-relational database (MongoDB) as an array of objects.

3.) Storing all the data in a .js file on the server as `export const data={Array-Of_Objects}`

For the feature where suggestions are shown as input is typed into a search box, I don't think I can go back to the database, fetch all the objects, and process them to find the best suggestions every time a letter is typed and still keep it fast.

Hence, I decided to follow the third option, but the .js file containing all the data is close to 100 MB, and importing it with `const { data } = require('./file.js')` does not even complete, let alone the processing to find the best suggestions. It only works when I reduce the data from 100,000 objects to (say) 5,000 objects.

If I divide the data into several .js files and import them separately, I think the importing and the processing will still not be fast enough.

Please suggest the best method to follow or a free technology to use.


Answer

Things are working now. I fetch matching data from the database via the backend as soon as the user types the first 2-3 characters. The result returned is much smaller than the complete dataset, and from the next keystroke onward I filter the previous results in JavaScript itself.
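For reference, here is a minimal sketch of that approach. It assumes an Express backend with the objects stored in a MongoDB collection called `items` and a `name` field to search on; the endpoint path, the db/collection/field names, and the 3-character threshold are illustrative, not taken from the original setup.

```js
// server.js — assumed Express + MongoDB setup; db/collection/field names are illustrative
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
const client = new MongoClient('mongodb://localhost:27017');

app.get('/suggestions', async (req, res) => {
  const prefix = (req.query.q || '').trim();
  // Only hit the database once the user has typed a few characters,
  // so the result set is far smaller than the full 100,000 objects.
  if (prefix.length < 3) return res.json([]);

  // Escape regex metacharacters, then do an anchored, case-insensitive
  // prefix match on the (ideally indexed) "name" field.
  const escaped = prefix.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const items = await client
    .db('mydb')
    .collection('items')
    .find({ name: { $regex: `^${escaped}`, $options: 'i' } })
    .limit(1000)
    .toArray();

  res.json(items);
});

client.connect().then(() => app.listen(3000));
```

On the client, the first request fills a cache, and further keystrokes narrow that cache in JavaScript instead of re-querying the server:

```js
// Client side: fetch once at 3 characters, then filter the cached results locally.
let cached = [];
let cachedPrefix = '';

async function getSuggestions(query) {
  if (query.length < 3) {
    cached = [];
    cachedPrefix = '';
    return [];
  }
  // Re-fetch only when the current query is no longer covered by the cache
  // (e.g. the user deleted characters back past the cached prefix).
  if (!cachedPrefix || !query.toLowerCase().startsWith(cachedPrefix)) {
    cached = await fetch(`/suggestions?q=${encodeURIComponent(query)}`)
      .then((r) => r.json());
    cachedPrefix = query.toLowerCase();
  }
  return cached.filter((item) =>
    item.name.toLowerCase().startsWith(query.toLowerCase())
  );
}
```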
