
How is a VIEW improving performance in MySQL?

0 votes
580 views

I have a table with 10 million records in MySQL using the InnoDB engine. Using this table, I do some calculations in a stored procedure and get the results.

In the stored procedure I use the base table and process all the records in it, but it takes more than 15 minutes to execute. While the procedure runs, the process list shows three states: 'Sending data', 'Sorting Result', and 'Sending data' again.

Then I created a view over the base table and updated the procedure to use the view in place of the base table; with the view, the procedure took only 4 minutes to execute. While it runs, the process list shows only two states, 'Sorting Result' and 'Sending data'. The first 'Sending data' state does not happen with the view; it starts directly in the 'Sorting Result' state.
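For concreteness, a rough sketch of the change (table, view, and column names here are placeholders, not the real schema):

CREATE VIEW base_v AS
SELECT id, amount, created_at    -- only the columns the procedure needs
FROM base_table;

-- inside the procedure, FROM base_table was replaced with FROM base_v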

On some MySQL sites and other blogs I have read that views never improve performance, but here I see an improvement with a view.

I would like to know how the VIEW is improving performance.

posted Jul 30, 2013 by Luv Kumar


3 Answers

+1 vote

If you turn on the slow query log and enable logging of queries that use no indexes, I suspect you'll find your answer.
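A minimal sketch of doing that at runtime, using standard MySQL server variables:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;                    -- log statements slower than 1 second
SET GLOBAL log_queries_not_using_indexes = 'ON';

SHOW VARIABLES LIKE 'slow_query_log_file';         -- where the log is written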

answer Jul 30, 2013 by Jagan Mishra
0 votes

I think you're reducing the number of rows referenced throughout the proc by using the view, and that might be where you're seeing the performance difference. If you create an InnoDB table whose structure and row count match the view, maybe you'll see a similar difference?
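A sketch of that comparison, with my_view standing in for the actual view used in the procedure:

CREATE TABLE view_as_table ENGINE=InnoDB AS
SELECT * FROM my_view;

-- point the procedure at view_as_table and compare the run times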

answer Jul 30, 2013 by Jagan Mishra
0 votes

VIEWs are not well optimized. Avoid them.

The SlowLog will probably point to the worst query; we can help you improve it (post the output of SHOW CREATE TABLE, SHOW TABLE STATUS, and EXPLAIN).
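For reference, the diagnostics being asked for (my_table and the columns are placeholders for your own):

SHOW CREATE TABLE my_table;
SHOW TABLE STATUS LIKE 'my_table';
EXPLAIN SELECT c1 FROM my_table WHERE c2 = 'x';    -- substitute the worst query from the SlowLog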

Only minutes to go through 10 million records? Sounds good. It takes time to shovel through that much stuff.

"Sending data" (etc) -- yeah these states are "useless information" in my book. They merely say "you have a slow query". "Sorting results" probably implies a GROUP BY or ORDER BY. It _may_ be possible to avoid the sort (when we review the naughty query).

What kind of things are you doing? If Data Warehouse 'reports', consider Summary Tables. Non-trivial to build, but the 'minutes' will become 'seconds'.
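A minimal sketch of the summary-table idea, assuming a hypothetical fact table sales(sale_date, product_id, amount); a periodic job folds new rows into the summary, and reports read the small table instead of the multi-million-row base table:

CREATE TABLE sales_daily (
  sale_date  DATE          NOT NULL,
  product_id INT           NOT NULL,
  total_amt  DECIMAL(12,2) NOT NULL,
  row_cnt    INT           NOT NULL,
  PRIMARY KEY (sale_date, product_id)
) ENGINE=InnoDB;

INSERT INTO sales_daily (sale_date, product_id, total_amt, row_cnt)
SELECT sale_date, product_id, SUM(amount), COUNT(*)
FROM sales
GROUP BY sale_date, product_id
ON DUPLICATE KEY UPDATE
  total_amt = VALUES(total_amt),
  row_cnt   = VALUES(row_cnt);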

answer Jul 30, 2013 by Salil Agrawal
Similar Questions
+1 vote

The table below contains billions of rows:

CREATE TABLE `Sample1` (
  `c1` bigint(20) NOT NULL AUTO_INCREMENT,
  `c2` varchar(45) NOT NULL,
  `c3` tinyint(4) DEFAULT NULL,
  `c4` tinyint(4) DEFAULT NULL,
  `time` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`c1`),
  KEY `varchar_time_idx` (`c2`,`time`),
  KEY `varchar_c3_time_idx` (`c2`,`c3`,`time`),
  KEY `varchar_c4_time_idx` (`c2`,`c4`,`time`),
  KEY `varchar_c3_c4_time_idx` (`c2`,`c3`,`c4`,`time`)
) ENGINE=InnoDB AUTO_INCREMENT=10093495 DEFAULT CHARSET=utf8

Four multi-column indexes were created because the WHERE clauses take the following forms (example queries follow the list):

1) c2 and time
2) c2 and c3 and time
3) c2 and c4 and time
4) c2 and c3 and c4 and time
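Example queries for the four shapes above (the literal values are placeholders); each one can be served by the matching composite index:

SELECT * FROM Sample1 WHERE c2 = 'a' AND `time` > 1000;                        -- varchar_time_idx
SELECT * FROM Sample1 WHERE c2 = 'a' AND c3 = 1 AND `time` > 1000;             -- varchar_c3_time_idx
SELECT * FROM Sample1 WHERE c2 = 'a' AND c4 = 2 AND `time` > 1000;             -- varchar_c4_time_idx
SELECT * FROM Sample1 WHERE c2 = 'a' AND c3 = 1 AND c4 = 2 AND `time` > 1000;  -- varchar_c3_c4_time_idx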

Cardinality-wise, c2, c3, and c4 are very low (e.g., out of one million rows, c2, c3, and c4 each have about 50 unique values).

The time column contains mostly unique values.

Selects, inserts, and updates happen frequently.

The table has 5 indexed fields (4 multi-column indexes). Because of this: 1) inserts and updates on the indexed fields become costlier; 2) as the table keeps growing (toward one billion rows), the index size also grows rapidly.

Kindly suggest a good approach in MySQL to solve this use case.

+2 votes

We have a table with 254 columns in it. 80% of the time, a very small subset of these columns is queried; the other columns are rarely, if ever, queried (but they could be at any time, so we do need to maintain them). Would I get a marked performance boost if I split the table into two tables, one with the few frequently queried columns and another with the less frequently queried ones (a sketch of the split appears below)? Doing this will require a lot of code changes, so I don't want to go down this path if it won't be beneficial.

Can folks here offer their experiences and learned opinions about this?
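For reference, a sketch of the split being weighed, with hypothetical names: the few hot columns stay in t_hot; the rarely-read columns move to t_cold, a 1:1 companion table sharing the same primary key.

CREATE TABLE t_hot (
  id    BIGINT      NOT NULL PRIMARY KEY,
  col_a VARCHAR(40) NOT NULL,
  col_b INT         NOT NULL
) ENGINE=InnoDB;

CREATE TABLE t_cold (
  id    BIGINT NOT NULL PRIMARY KEY,   -- same id as in t_hot
  col_z TEXT,                          -- ...plus the other rarely-read columns
  CONSTRAINT fk_cold FOREIGN KEY (id) REFERENCES t_hot (id)
) ENGINE=InnoDB;

-- the occasional full read becomes a join:
SELECT h.col_a, c.col_z
FROM t_hot h JOIN t_cold c ON c.id = h.id
WHERE h.id = 42;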

+1 vote

I have a collection with 127,706 documents. The aggregation pipeline has two group stages, and it gives me the result in 1.5 seconds.

To optimize it further, I created indexes on the fields used in the match stages, with no success. Is there any way to optimize aggregation performance further?

I am using MongoDB 3.2.1.

0 votes

We are using git 1.8.2 for version control. It is a centralized server, and git status takes too long.

How can I improve the performance of git status?

Git repo details:
Size of the .git folder is 8.9MB
Number of commits approx 53838 (git rev-list HEAD --count)
Number of branches - 330
Number of files - 63883
Working tree clone size is 4.3GB

time git status shows
real 0m23.673s
user 0m9.432s
sys 0m3.793s

then after 5 mins
real 0m4.864s
user 0m1.417s
sys 0m4.710s

And I have experimented with the following:
- Setting core.ignorestat to true
- git gc and git clean
- Shallow clone (reducing the number of commits)
- Cloning only one branch
- Git repacking: git repack -ad && git prune
- Cold/warm cache

Could you please let me know what other ways there are to improve git performance?

...