I have a database, and some of the existing queries give me a lot of headaches.
For example, I have a table called TAB1 of around 600K records (and rapidly rising) and a table called TAB2 of approx. 1,500 records (this one will not grow much further).
The query looks up an item's ID and NAME (a text field) from TAB2 in the huge table TAB1, in order to decide whether to add a new entry to TAB1.
I know these lookups really drag my server down, and sure, it isn't the ideal MySQL scenario. But how can I improve this operation? The indexes already set on the tables don't seem to make a difference any more.
Thanks in advance.
01-16-2012, 01:08 PM
1) 600,000 records is still a very small table
2) post the schemas of your tables
3) post the query
Without proper indexes, a slow query can often be improved noticeably just by indexing the right columns.
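As a first step, you can list the indexes MySQL already has on a table with SHOW INDEX (the table name below is just an example; substitute your own):

```sql
-- Lists every index on the table, including column order and cardinality
SHOW INDEX FROM inventory;
```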
Here are the structures for the 2 tables in cause:
The BIG table:
-- Table structure for table `inventory`
CREATE TABLE IF NOT EXISTS `inventory` (
`inventory_id` int(11) NOT NULL AUTO_INCREMENT,
`type_id` int(11) DEFAULT NULL,
`name` varchar(255) DEFAULT NULL,
`ignore` tinyint(1) DEFAULT '0' COMMENT '0 - dont ignore, 1 - ignore',
PRIMARY KEY (`inventory_id`),
KEY `type_id` (`type_id`,`name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=549867 ;
The smaller table:
-- Table structure for table `type`
CREATE TABLE IF NOT EXISTS `type` (
`type_id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
`alias` varchar(255) DEFAULT NULL,
`type_category` tinyint(1) DEFAULT '0' COMMENT '0 - not know, 1 - chat, 2 - document, 3 - site',
`type_second_category` tinyint(1) DEFAULT '0',
PRIMARY KEY (`type_id`),
KEY `name` (`name`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1625 ;
Here is the query:
SELECT inventory_id FROM inventory where type_id = "[number-known]" AND name="[text-known, representing the inventory item name]";
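To verify whether that statement actually uses the (type_id, name) index, you can prefix it with EXPLAIN (the literal values below are placeholders standing in for your variables):

```sql
-- If the index is used, the `key` column of the output shows `type_id`
-- and the `rows` estimate should be very small
EXPLAIN SELECT inventory_id
FROM inventory
WHERE type_id = 42 AND name = 'example item name';
```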
So basically this query is really slow because I must identify the item in the inventory table precisely, and that table is really large: I am looking for a single row in a very large table.
I have a composite index on the inventory table over (type_id, name), but it no longer seems effective now that the table is approaching 600K entries.
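If the index used to perform well and degraded as the table grew, one thing worth ruling out is stale index statistics; on MyISAM you can refresh them with ANALYZE TABLE (just a guess, but cheap to try):

```sql
-- Recomputes the key distribution statistics the optimizer relies on
ANALYZE TABLE inventory;
```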
01-16-2012, 05:09 PM
Again, 600,000 rows is not all that large, particularly since you are only matching VARCHAR(255) columns and not trying to match a MEDIUMTEXT column, for example.
Until you are talking tens of millions of rows there should be no problem. For your tables, unless you are somehow creating a CROSS JOIN effect, the query should take a couple of seconds at most.
Also, are you actually running the query above, or is that only an example and your real query is more complex?
In particular, are you using NAME = 'exact text you are searching for', or perhaps NAME LIKE '%some string of text%'? The difference matters: with a LIKE search whose pattern begins with %, MySQL can never use an index on the NAME field, so that query would indeed take much longer to run.
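To illustrate the difference (the values are made up): only patterns anchored at the start of the string can use an index on name.

```sql
-- These two can use the (type_id, name) index:
SELECT inventory_id FROM inventory WHERE type_id = 42 AND name = 'example item';
SELECT inventory_id FROM inventory WHERE type_id = 42 AND name LIKE 'example%';

-- This one cannot use the index on `name` (leading wildcard forces a scan):
SELECT inventory_id FROM inventory WHERE type_id = 42 AND name LIKE '%example%';
```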
Thanks for your reply. The theory indeed says that hundreds of thousands of rows is not challenging at all. The query runs exactly as shown (including the =, which is the fastest comparison); the only extra step is that type_id and name are filled in from variables before the query is sent. However, the server is handling at least 150 concurrent connections, and maybe that is one of the main reasons it jams so easily (the hardware resources are fairly good).
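Since the SELECT is only run to decide whether to INSERT, one option worth considering is to let the insert itself do the duplicate check. This is a sketch, assuming (type_id, name) pairs are meant to be unique in inventory, which your current non-unique key does not enforce; the index name and values are made up:

```sql
-- One-time schema change: enforce uniqueness of the pair
ALTER TABLE inventory ADD UNIQUE KEY uq_type_name (type_id, name);

-- Then a single statement replaces the SELECT-then-INSERT round trip;
-- the row is silently skipped if the pair already exists
INSERT IGNORE INTO inventory (type_id, name)
VALUES (42, 'example item name');
```

With 150 concurrent connections, it is also worth noting that MyISAM takes table-level locks on writes, so many simultaneous inserts into inventory will queue up behind each other; InnoDB's row-level locking usually holds up better under that kind of load.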