01-18-2008, 06:30 PM
Our app runs off of one set of PHP pages. We used to have one database set up to keep track of all people added to the system, and each person added would be associated with the client_id of the client who added it. But we have tens of thousands of people per client, so I've rebuilt the system to look at a `client` table in a separate DB that has a `db_name` entry that stores the database name for that client, and then selects that database to use to retrieve information. So for each client, I will have a completely separate DB for them, so everything doesn't have to go in one.
I guess I'm just wondering if this is a good idea. I need to know if there are any problems I may run into in the future with this solution, so I can deal with them now on the front end instead of in the middle of the project. Thanks everyone!
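For reference, here's a rough sketch of the lookup flow described above. The `client` table, its `db_name` column, and `main_db` come from the post; the helper function name and the credentials are placeholders.

```php
<?php
// Sketch of the per-request lookup: main_db.client maps client_id -> db_name.
function db_name_lookup($client_id) {
    // Casting keeps the id numeric so nothing unsafe reaches the query.
    return 'SELECT db_name FROM client WHERE client_id = ' . (int) $client_id;
}

// Usage against the mysql_* API the app is built on:
// $link = mysql_connect('localhost', 'db_user', 'db_pass');
// mysql_select_db('main_db', $link);
// $row = mysql_fetch_assoc(mysql_query(db_name_lookup($client_id), $link));
// mysql_select_db($row['db_name'], $link);
// From here on, every unqualified query (SELECT * FROM person) runs
// against that client's database.
```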
01-18-2008, 06:39 PM
From your description, this sounds like a decent idea to me. The only downsides: you'll have extra connection strings floating around per transaction, so make sure they are never exposed in the client-side implementation; and there's extra processing time and load on your SQL server itself. Making a request like (SELECT [ConnectionString] FROM [USER_MAPPINGS] WHERE [USER_ID] = 'UserName') and then creating a new connection based on that connection string, just to get access to a table based on the user name, is several requests before you even touch the table or the data you actually want. That will also slow down your page load, so make sure the processing time does not exceed common request timeout values.
01-18-2008, 06:45 PM
Hmm, ok...so doing a (SELECT * FROM $db_name.person) query is a lot slower than having already selected the DB and doing (SELECT * FROM person)? Is that what you're saying?
Here's how mine works: the user logs in, and the system finds which client he is associated with. It then takes that client's database name and plugs it into the mysql_select_db() call that is made on each page. Then, on every page, the script has already selected the client's DB, and gets all the information it needs from that client's DB. Are there any slowdowns there? And should I be doing the mysql_connect() and mysql_select_db() on every page? Because I am right now.
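For comparison, the two query styles in this exchange can be sketched like this; `person` and `$db_name` come from the posts above, and the helper function is hypothetical.

```php
<?php
// Style 1: qualify the table with the database name in each query.
function qualified_select($db_name, $table) {
    return "SELECT * FROM `$db_name`.`$table`";
}

// e.g. qualified_select('client_7_db', 'person')
//   -> SELECT * FROM `client_7_db`.`person`
//
// Style 2: call mysql_select_db($db_name) once per request, after which
// the plain form works with no extra per-query overhead:
//   SELECT * FROM person
```

Either way the server resolves the table the same; the qualified form just saves you a `mysql_select_db()` call when you need to touch a second database in the same request.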
01-18-2008, 06:55 PM
Me personally, I wouldn't go to the trouble of selecting different databases. I take it you're talking about a MySQL database(?), which I've read people report running with over a million rows and operating fine. If it ever got too big, I'd change my DB system to something on the Enterprise level, maybe MSSQL. Perhaps you could just optimize your queries.
01-18-2008, 07:00 PM
I've already got the system working where it selects the different MySQL database for each different client. But one other thing I realized is if I have to change a field on the databases, I'll have to do it to all of them somehow. So I guess it might be easier to switch back to the old way of keeping it all in one big DB, huh?
01-18-2008, 07:34 PM
From my experience of seeing this solution proposed, I've always read that it's better not to separate the tables into different databases; I think it kind of defeats the purpose. I'm pretty sure I've seen this proposed on this forum before as well. I'll try to see if I can find some threads and post back. It's just going to be hard to come up with key terms to search against. :P
01-18-2008, 08:22 PM
The following quote was just from the first link I clicked; I'm not advocating that it is better or worse than the others. I started out wanting to post several replies from different forums, but there were too many good responses. Here are some links that were good, but also check the Google link I used to find them.
Splitting the data across databases can have advantages; you have the flexibility of setting configurations and performing backups separately based on specific case usage needs. However, this also complicates the database management requirements. If you do decide to partition your data across multiple databases/servers, then I would recommend you do so by operational case usage or time (ex: FINANCES, INVOICE_HISTORY) rather than by category (ex: FLORIDA, CUSTOMER03). You want to avoid having to develop queries that span databases when possible.
Partitioning Very Large Databases
Strategies for Partitioning Relational Data Warehouses in Microsoft SQL
Scalability and Very Large Database (VLDB) Resources
It is also possible to keep all the tables in the same database and split the location of tables across multiple files and drives.
As for splitting your tables across multiple servers, consider what impact this will have in terms of SQL Server licensing costs, query performance, and distributed transactions.
01-19-2008, 12:03 AM
Thanks for the links! I didn't quite know how to word it for Google, but you did well.
After reading, it seems as if it's about 50-50 on which way is best for databases. Looks like in my situation, these facts seem to be true:
Using separate databases for each client
- If a client needs to restore, only restore one DB instead of the whole thing.
- Logically keeps each client's information separated...means less time searching through rows. Easier to manage each client's info.
- If I have to update the structure of a table, I'll have to do it to all of them. Could be complicated or time-consuming.
- Maybe some slow-downs when matching between two databases on same server? But I will rarely if ever do this...because all the client's people data and stuff is in their own DB, while all login info and such is in the main DB.
Using one database for all clients
- No slow-downs from crossing databases (are there any significant ones? Especially if I rarely do this).
- All info is contained in one database.
- Huge, bulky tables.
- Having to use client_id all over the place in tables.
- Harder to manage separate clients' info, since it's all together.
Any other thoughts on this? Am I way off? Please pick away...cuz this is a pretty significant project, and I want to think through this thoroughly on the front end.
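The one-big-DB option in the list above boils down to filtering every query on `client_id`; a minimal sketch (the helper name is made up):

```php
<?php
// Every shared table carries a client_id column; every query filters on it.
function scoped_select($table, $client_id) {
    return "SELECT * FROM `$table` WHERE client_id = " . (int) $client_id;
}

// e.g. scoped_select('person', 12)
//   -> SELECT * FROM `person` WHERE client_id = 12
// With an index whose leading column is client_id, MySQL only walks that
// client's rows, which takes most of the sting out of "huge, bulky tables".
```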
01-19-2008, 12:27 AM
You could cache the most-used or recently-used user->database associations in a non-web-accessible flat file to avoid the extra database call. It should be a little faster and less resource-intensive than the extra query. Just don't store anything like passwords or other sensitive info.
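A minimal sketch of such a flat-file cache, assuming a serialized PHP array keyed by username (the file path and layout are assumptions; keep the file outside the web root):

```php
<?php
// Load the username -> db_name map; returns an empty array on any miss.
function cache_load($file) {
    if (!is_file($file)) {
        return array();
    }
    $raw = file_get_contents($file);
    $map = ($raw === '') ? false : unserialize($raw);
    return is_array($map) ? $map : array();
}

// Remember one user's database name for later requests.
function cache_store($file, $user, $db_name) {
    $map = cache_load($file);
    $map[$user] = $db_name;
    file_put_contents($file, serialize($map));
}
```

On login you'd check `cache_load()` first and only fall back to the `main_db` query (followed by a `cache_store()`) on a miss.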
01-23-2008, 06:07 PM
Oh man, I am so lost! I could really use some other suggestions on how to accomplish this. I work well on brainstorming with people, but just don't have anyone else here to bounce ideas off of. Any other thoughts?
Here's my project, in a nutshell. It's a person information-based web app built in PHP / MySQL. We are finding clients who need to keep track of the students in their organization, the parents of those students, givers to their organization, etc. They can also manage their own website (news articles, events, look and feel, etc.) and do lots of other cool, organizational things.
So what I'm trying to figure out at this point is if I should use one main database (which I called `main_db`) to keep info about each client and all the login info, and then give each client we sign up their own database; or if I should just use only the `main_db` and put every client's info (people info, website info, etc.) in that DB and associate everything having to do with that client with their `client_id`.
One of the biggest problems I'm having with the separate-DB idea is how to upgrade each DB when I need to make changes to the tables. I'll have to go through every client's DB and make the changes, which, unless I build a script to go through them all automatically, will take forever. I also need to know if there's anything else I'm not thinking of before I continue. Any and all thoughts are appreciated!
(all databases will be located on one server, btw)
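Since everything lives on one server, the upgrade script worried about above could be as simple as generating one fully-qualified ALTER statement per client database; a sketch, with the statement template as a placeholder:

```php
<?php
// Build one fully-qualified statement per client database so a single
// connection can run the whole batch without re-selecting databases.
function build_migration($db_names, $alter_template) {
    $statements = array();
    foreach ($db_names as $db) {
        $statements[] = str_replace('{db}', "`$db`", $alter_template);
    }
    return $statements;
}

// In the real script you'd fetch the names first, e.g.:
//   SELECT db_name FROM main_db.client
// then mysql_query() each generated statement and log any failures so a
// half-finished run can be resumed.
```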