SHOW CREATE TABLE suddenly slow on InnoDB?

+1 vote
464 views

I'm trying to understand how InnoDB executes a SHOW CREATE TABLE query so I can work out what could possibly have made these statements suddenly slow down.

mysql> SHOW CREATE TABLE my_table;
...
1 row in set (37.48 sec)

We tend to execute many of these statements concurrently, but that has never been a problem until recently. I upgraded the IO subsystem, and our statistics indicate that we are not maxing out IO (at least not in IOPS).

posted Mar 17, 2014 by Jagan Mishra


1 Answer

+1 vote

What version are you using? My immediate thought is to check whether innodb_stats_on_metadata is off. If it is on, switch it off and check your timings again.
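For reference, a minimal way to check and flip that setting at runtime, assuming you have the privileges to set global variables (the table name is the one from the question):

mysql> SHOW VARIABLES LIKE 'innodb_stats_on_metadata';
mysql> SET GLOBAL innodb_stats_on_metadata = OFF;  -- stops InnoDB from re-sampling index statistics on metadata statements
mysql> SHOW CREATE TABLE my_table;                 -- re-run to compare timings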

answer Mar 17, 2014 by Amit Mishra
I'm on 5.5.27. I just flipped that setting off and I'm getting the same results. It pretty clearly seems to be InnoDB: if I create a HEAP table, I don't get this behavior.

I have (and always have had) innodb_file_per_table enabled, but my tablespace file is still gigantic (56GB)?
My guess is that it could be contention when trying to open the table. That is, when you run into this issue and run SHOW PROCESSLIST, does it show any threads waiting in the state "Opening tables"?
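A quick way to check for that, as a sketch (the PROCESSLIST table is available in information_schema on 5.1 and later):

mysql> SELECT ID, TIME, STATE, LEFT(INFO, 60) AS QUERY
    -> FROM information_schema.PROCESSLIST
    -> WHERE STATE = 'Opening tables'
    -> ORDER BY TIME DESC;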
Similar Questions
+1 vote

I've restored a MySQL backup from our MySQL server onto another server. The backup includes InnoDB tables. After the import, MySQL recognized the InnoDB tables fine, but when I try to do a CHECK TABLE it returns that the table doesn't exist.

Permissions and ownership of the table files (.frm files) are fine, since MySQL recognizes the MyISAM tables (they have the same permissions). The InnoDB engine is enabled.

What can cause the tables to appear as "non-existent" when they really do exist?
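For reference, the checks described above look roughly like this (database and table names are placeholders):

mysql> SHOW TABLE STATUS FROM mydb;        -- the restored tables are listed with Engine = InnoDB
mysql> CHECK TABLE mydb.some_innodb_table; -- yet this reports that the table doesn't exist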

0 votes

I've searched but with no luck... What exactly do these variables mean:

1343928 OS file reads, ********** OS file writes, 19976022 OS fsyncs

I am wondering if my innodb_buffer_pool setting is too low. Does 'file reads' show the number of times InnoDB files have been read into memory since the server started? What about file writes/fsyncs?
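For context, those counters come from the FILE I/O section of the engine status output; a minimal sketch for re-checking them next to the buffer pool size and its miss counters (stock variable names):

mysql> SHOW ENGINE INNODB STATUS\G
mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';  -- Innodb_buffer_pool_reads counts reads that missed the buffer pool and went to disk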

0 votes

Using Fedora 18 with KDE 4.10.2 on an x86_64 laptop with kernel 3.8.11-200 and the proprietary nvidia drivers.

I have two hard disks: an SSD with / and swap, and a hybrid drive with /home. Recently I have noticed extremely long times (>60 s) to open some programs like Firefox or Dolphin, the latter being the most critical.
I suspect disk access is slowing the machine down, as this happens when the processor is nearly idle.
What can I check? How can I check the hard disks? Could it be something else?

+1 vote

The table below contains billions of rows:

CREATE TABLE `Sample1` (
  `c1` bigint(20) NOT NULL AUTO_INCREMENT,
  `c2` varchar(45) NOT NULL,
  `c3` tinyint(4) DEFAULT NULL,
  `c4` tinyint(4) DEFAULT NULL,
  `time` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`c1`),
  KEY `varchar_time_idx` (`c2`,`Time`),
  KEY `varchar_c3_time_idx` (`c2`,`c3`,`Time`),
  KEY `varchar_c4_time_idx` (`c2`,`c4`,`Time`),
  KEY `varchar_c3_c4_time_idx` (`c2`,`c3`,`c4`,`Time`)
) ENGINE=InnoDB AUTO_INCREMENT=10093495 DEFAULT CHARSET=utf8

Four multi-column indexes were created because the WHERE clauses use the following conditions (sketched as example queries after the list):

1) c2 and time
2) c2 and c3 and time
3) c2 and c4 and time
4) c2 and c3 and c4 and time
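Roughly, the query shapes behind those four conditions look like this (the literal values are placeholders, not from the original post):

SELECT * FROM Sample1 WHERE c2 = 'abc' AND `time` > 1000;                       -- 1) c2 and time
SELECT * FROM Sample1 WHERE c2 = 'abc' AND c3 = 1 AND `time` > 1000;            -- 2) c2, c3 and time
SELECT * FROM Sample1 WHERE c2 = 'abc' AND c4 = 2 AND `time` > 1000;            -- 3) c2, c4 and time
SELECT * FROM Sample1 WHERE c2 = 'abc' AND c3 = 1 AND c4 = 2 AND `time` > 1000; -- 4) c2, c3, c4 and time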

Cardinality-wise, c2, c3 and c4 are very low (e.g. out of one million rows, c2, c3 and c4 each have about 50 unique values).

The time column contains mostly unique values.

Selects, inserts and updates happen frequently.

The table has 5 indexed fields (4 multi-column indexes). Due to this, 1) inserts and updates on indexed fields become costlier, and 2) as the table keeps growing (to nearly one billion rows), the index size also increases rapidly.

Kindly suggest a good approach in MySQL to handle this use case.

+1 vote

I was trying to optimize my InnoDB tables. I executed the following query to detect which tables are fragmented.

SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql') AND DATA_FREE > 0;

After that, I saw that there were 49 fragmented tables. On one table I executed "optimize table table_name;" and "analyze table table_name;". The result is the same: the table remains fragmented.
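For reference, a sketch of the commands involved and a way to re-check the free space afterwards (the table name is a placeholder):

mysql> OPTIMIZE TABLE my_fragmented_table;  -- for InnoDB this is mapped to a table rebuild plus ANALYZE
mysql> ANALYZE TABLE my_fragmented_table;
mysql> SELECT DATA_FREE FROM information_schema.TABLES
    -> WHERE TABLE_NAME = 'my_fragmented_table';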

Any suggestions? I have followed the mysqltuner recommendations...

...