Should I create a separate SQL Server database for each user?

I am working on an ASP.NET MVC web application; the back end is SQL Server 2012.

This application will provide billing, accounting, and inventory management. Users will create an account by signing up, just like on http://www.quickbooks.in. Each user will create some master records and various transactions. There is no limit; a user can create an unlimited number of records in the database.

I want database performance to stay stable even after a heavy data load. I am maintaining proper indexing and primary keys, but each user will put a heavy load on the database.

So, should I create a separate database for each user, or should I maintain one database, adding a UserID column to each table and partitioning the tables on UserID?
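
To illustrate the second option, here is a rough sketch of what I mean; the Invoices table and its columns are just examples:

    -- Shared-database option: every table carries a UserID column
    CREATE TABLE dbo.Invoices (
        InvoiceID   bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
        UserID      int    NOT NULL,           -- owning user
        InvoiceDate date   NOT NULL,
        Amount      decimal(18,2) NOT NULL
    );

    -- Index leading on UserID so per-user queries stay fast as data grows
    CREATE INDEX IX_Invoices_UserID ON dbo.Invoices (UserID, InvoiceDate);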

I am not an expert in SQL Server, so please provide suggestions with clear specifics.

Please let me know if any information is missing.


Answer

A DB per user is what happens when customers need to be able to pack up and leave, taking the actual database with them. Think of a self-hosted WordPress site. Or it happens when there are serious risks of one user accidentally seeing another user's data, so it's safer to rely on the server's security model than on remembering to add the UserID filter to every query. I can't imagine a scenario like that here, but who knows: if the privacy laws allowed for jail time, I would rather have the data partitioned by security boundaries than depend on carefully written WHERE clauses.
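
To make that risk concrete, this is the kind of filter the shared-database design depends on; the Invoices table and @CurrentUserID parameter are made up, but every query would need something like it:

    -- Every query in the shared database must remember this filter;
    -- omit it once and one user sees another user's data.
    SELECT InvoiceID, InvoiceDate, Amount
    FROM dbo.Invoices
    WHERE UserID = @CurrentUserID;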

If you did go database-per-user, creating a new user would be ten times more effort. While INSERT, UPDATE, and so on stay the same from version to version, the syntax for creating databases and users and granting permissions evolves enough with each SQL Server version upgrade to break your provisioning scripts.
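
As a rough sketch, provisioning one user under that model looks something like this (every name and the password are placeholders). Note that ALTER ROLE ... ADD MEMBER only replaced sp_addrolemember in SQL Server 2012, which is exactly the kind of syntax drift I mean:

    -- Hypothetical per-user provisioning; all names are placeholders
    CREATE DATABASE UserDb_12345;
    GO
    CREATE LOGIN UserLogin_12345 WITH PASSWORD = 'ChangeMe!12345';
    GO
    USE UserDb_12345;
    GO
    CREATE USER UserLogin_12345 FOR LOGIN UserLogin_12345;
    GO
    -- ALTER ROLE superseded sp_addrolemember in SQL Server 2012
    ALTER ROLE db_owner ADD MEMBER UserLogin_12345;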

Also, this will multiply your migration headaches by the number of users. Say you have 5,000 users and you need to add some new columns, change a column's data type, or update a trigger. Instead of running that change script once, you have to run it 5,000 times.
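
In practice that means a loop like the one below, assuming a purely hypothetical UserDb_ naming convention and a sample schema change:

    -- Sketch: apply one schema change to every per-user database
    DECLARE @db sysname, @sql nvarchar(max);

    DECLARE dbs CURSOR FOR
        SELECT name FROM sys.databases WHERE name LIKE N'UserDb[_]%';

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- The same ALTER runs once per database: 5,000 times for 5,000 users
        SET @sql = N'ALTER TABLE ' + QUOTENAME(@db)
                 + N'.dbo.Invoices ADD TaxCode varchar(10) NULL;';
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM dbs INTO @db;
    END
    CLOSE dbs;
    DEALLOCATE dbs;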

Per-user databases also waste disk space. Each of those databases has its own transaction log sitting idle, taking up at least the minimum log space.
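
If you want to measure that overhead, a query along these lines sums the log allocation per database (sys.master_files reports size in 8 KB pages):

    -- Approximate log space allocated per database
    SELECT d.name AS database_name,
           SUM(mf.size) * 8 / 1024 AS log_size_mb
    FROM sys.master_files AS mf
    JOIN sys.databases AS d ON d.database_id = mf.database_id
    WHERE mf.type_desc = 'LOG'
    GROUP BY d.name
    ORDER BY log_size_mb DESC;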

As for load: if your 5,000 users collectively do a billion inserts and updates per day, my intuition says it will be faster on one database, unless there is some kind of contention issue (everyone reading and writing the same pages of the same table at the same time). Each database consumes machine resources of its own (probably threads and memory) for housekeeping, so those extra databases can't be free.

Anyhow, the best thing to do is to build both architectures, use a random data generator to simulate load, and see how they perform.
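
As a starting point, something like this fills the hypothetical shared Invoices table from earlier with a million random rows spread across 5,000 users:

    -- Generate 1,000,000 random invoices spread across 5,000 users
    ;WITH n AS (
        SELECT TOP (1000000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
        FROM sys.all_objects AS a
        CROSS JOIN sys.all_objects AS b
    )
    INSERT INTO dbo.Invoices (UserID, InvoiceDate, Amount)
    SELECT ABS(CHECKSUM(NEWID()) % 5000) + 1,                            -- random user 1..5000
           DATEADD(DAY, -(ABS(CHECKSUM(NEWID()) % 365)), SYSDATETIME()), -- random date, past year
           CAST(RAND(CHECKSUM(NEWID())) * 1000 AS decimal(18,2))         -- random amount
    FROM n;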
