Hack MySQL

Archive for the ‘Programming’ Category

New algorithm for calculating 95 percentile


The 95 percentile for query response times is an old concept; Peter and Roland blogged about it in 2008. Since then, MySQL tools have calculated the 95 percentile by collecting all values, either exactly or approximately, and returning all_values[int(number_of_values * 0.95)] (that’s an extreme simplification). But recently I asked myself*: must we save all values? The answer is no. I created a new algorithm** for calculating the 95 percentile that is faster, more accurate, and saves only 100 values.***
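For concreteness, the save-everything simplification above looks like this in Python (a sketch of the idea, not any particular tool’s code):

```python
def exact_95th(values):
    """Exact 95th percentile: save and sort every value, index at 95%."""
    all_values = sorted(values)
    return all_values[int(len(all_values) * 0.95)]

exact_95th(range(1, 101))  # -> 96
exact_95th([42])           # -> 42
```

The cost is obvious: every value must be stored, which is exactly the requirement the new algorithm drops.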

First, my basis of comparison is the 95 percentile algo used by mk-query-digest. That algo is fast, memory-stable, and well proven in the real world. It works well for any number of values, even hundreds of thousands. Instead of saving every value, it uses base-1.05 buckets and counts the number of values that fall within each bucket’s range. The results are not exact, but the differences are negligible: a 10ms and a 13ms response time are indistinguishable to a human. Any algo that hopes to handle very large numbers of values must approximate, because not even C can store and sort hundreds of thousands of floats (times N many attributes times N many query classes) quickly enough.
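As a rough illustration of the bucket idea (my own simplified Python sketch, not Maatkit’s actual Perl), each value is mapped to a power-of-1.05 bucket and only per-bucket counts are kept, so memory stays flat regardless of how many values arrive:

```python
import math

BASE = 1.05

def bucket_95th(values):
    """Approximate the 95th percentile with base-1.05 buckets.

    Each value is counted in bucket floor(log(value) / log(1.05));
    only one counter per bucket is stored.
    """
    counts = {}
    n = 0
    for v in values:
        idx = math.floor(math.log(v) / math.log(BASE))
        counts[idx] = counts.get(idx, 0) + 1
        n += 1
    # Walk buckets in ascending order until we pass the 95% mark.
    target = int(n * 0.95)
    seen = 0
    for idx in sorted(counts):
        seen += counts[idx]
        if seen > target:
            return BASE ** idx  # lower boundary of the bucket
```

Neighboring bucket boundaries differ by 5%, which bounds the approximation error: the kind of difference that is indiscernible to a human reading a query report.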

So when I finished the new algo, I compared it to the mk-query-digest algo and obtained the following results:

FILE                         REAL_95     OLD_95     NEW_95  OLD_DIFF NEW_DIFF  OLD_TIME NEW_TIME   FILE_SIZE  OLD_RSS  NEW_RSS
nums/500k-1-or-2               1.751      1.697      1.784    -0.054   +0.033     12.12     9.37     4500000    3.88M    2.63M
nums/100k-1-or-2               1.749      1.697      1.794    -0.052   +0.045      2.42     1.88      900000    3.88M    2.63M
nums/50k-trend-1-to-9          6.931      6.652      6.995    -0.279   +0.064      1.24     0.90      450000    3.88M    2.63M
nums/25k-trend-1-to-5          3.888      3.704      3.988    -0.184   +0.100      0.64     0.47      225000    3.88M    2.63M
nums/21k-1-spike5-1            0.997      0.992      2.002    -0.005   +1.005      0.55     0.42      189000    3.88M    2.63M
nums/10k-rand-0-to-20         19.048     18.532     19.054    -0.516   +0.006      0.29     0.21       95079    3.86M    2.62M
nums/10k-rand-0-to-10          9.511      9.360      9.525    -0.151   +0.014      0.29     0.21       90000    3.86M    2.62M
nums/4k-trend-1-to-7           5.594      5.473      6.213    -0.121   +0.619      0.14     0.09       36000    3.86M    2.63M
nums/1k-sub-sec                0.941      0.900      0.951    -0.041   +0.010      0.07     0.04        9000    3.80M    2.62M
nums/400-half-10              10.271      9.828     10.273    -0.443   +0.002      0.05     0.03        3800    3.79M    2.62M
nums/400-high-low             10.446     10.319     10.446    -0.127        0      0.05     0.03        3800    3.79M    2.62M
nums/400-low-high             10.445     10.319     10.475    -0.126   +0.030      0.05     0.03        3800    3.79M    2.63M
nums/400-quarter-10           10.254      9.828     10.254    -0.426        0      0.06     0.03        3700    3.79M    2.62M
nums/153-bias-50              88.523     88.305     88.523    -0.218        0      0.05     0.03        1500    3.79M    2.62M
nums/100-rand-0-to-100        90.491     88.305     90.491    -2.186        0      0.05     0.03         991    3.79M    2.62M
nums/105-ats                  42.000     42.000     42.000         0        0      0.05     0.03         315    3.75M    2.61M
nums/20                       19.000     18.532     19.000    -0.468        0      0.04     0.03          51    3.79M    2.62M
nums/1                        42.000     42.000     42.000         0        0      0.04     0.03           3    3.75M    2.61M

I generated random microsecond values in various files. The first number in each filename indicates the number of values, so the first file has 500k values. The rest of the filename hints at the distribution of the values. For example, “50k-trend-1-to-9” means 50k values that increase from about 1 second to 9 seconds. The number and distribution of values affect 95 percentile algorithms, so I wanted to simulate several possible combinations.

“REAL_95” is the real, exact 95 percentile; this is the control against which the “old” (i.e. the mk-query-digest) and new algos are compared. The diffs are comparisons to this control.

Each algo was timed and its memory (RSS) measured, too. The time and memory comparisons are a little biased because the mk-query-digest module that implements its 95 percentile algo does more work than my test script for the new algo.

The results show that the new algo is about 20% faster in all cases and more accurate in all but one case (“21k-1-spike5-1”). The new algo also uses less memory, though again this comparison is a little biased; the important point is that it doesn’t use more memory to achieve its gains in speed and accuracy.

The gains of the new algo are small in these comparisons, but I suspect they’ll be much larger in practice, given that the algo is used at least twice for each query. Saving 1 second in the algo can save minutes of data processing when there are tens of thousands of queries.

Instead of explaining the algorithm exhaustively, I have uploaded all my code and data so you can reproduce the results on your machine: new-95-percentile-algo.tar.gz. You’ll need to check out Maatkit, tweak the “require” lines in the Perl files, and tweak the Bash script (cmp-algos.sh), but otherwise the experiment should be straightforward. The new algo is in new-algo.pl. (new-algo.py is for another blog post.)

My ulterior motive for this blog post is to get feedback. Is the algorithm sane? Is there a critical flaw that I overlooked? Do you have a real-world example that doesn’t work well? If you’re intrepid or just curious and actually study the algo and have questions, feel free to contact me.

* By “recently asked myself” I mean that some time ago Baron and I wondered whether it was possible to calculate the 95 percentile without saving all the values. At the time I didn’t think it was feasible, but lately I’ve thought more about the problem and started coding.

** By “a new algorithm” I don’t mean that this has never been attempted or coded before, but I can’t find any examples of a similar algorithm.

*** By “saves only 100 values” I mean ultimately. At certain times, 150 values may be saved, but eventually the extra 50 should be integrated back into the base 100 values.
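The post deliberately leaves the algorithm itself to the tarball, so the following is only my guess at what the footnote’s bookkeeping could look like: a purely hypothetical Python sketch of a 100-value base plus a 50-value overflow buffer that gets folded back in. The class name, merge policy, and thinning step are all invented for illustration and are not taken from new-algo.pl:

```python
class Bounded95th:
    """Hypothetical: keep <= 100 representative values plus a temporary
    overflow buffer of <= 50 that is periodically integrated back in."""

    BASE_SIZE = 100
    BUFFER_SIZE = 50

    def __init__(self):
        self.base = []    # up to 100 representative values
        self.buffer = []  # incoming values awaiting integration

    def add(self, value):
        self.buffer.append(value)
        if len(self.buffer) >= self.BUFFER_SIZE:
            self._integrate()  # fold the extra 50 back into the base

    def _integrate(self):
        merged = sorted(self.base + self.buffer)
        self.buffer = []
        if not merged:
            return
        # Thin back down to 100 evenly spaced representatives.
        step = len(merged) / self.BASE_SIZE
        self.base = [merged[int(i * step)] for i in range(self.BASE_SIZE)]

    def p95(self):
        self._integrate()
        return self.base[int(len(self.base) * 0.95)]
```

Note that this naive merge over-weights recent values on a trending stream, which hints at why the real algorithm’s integration step is the interesting part.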

Written by Daniel Nichter

August 29th, 2011 at 9:42 pm

Posted in Programming


Down the rabbit hole


Generally I avoid going down rabbit holes, but today I decided to see how deep a particular testing rabbit hole went. This post is the third in what seems to be a continuing series of programming anecdotes. It’s not particularly MySQL-related, so you can stop reading here unless you grok code stuff.

Before beginning work on issue 720, I ran the mk-table-checksum test suite to make sure it was in working order. No sense writing new tests and code when the old tests and code aren’t reliable. I made one seemingly innocuous change to the test suite in preparation for the issue: I changed the --replicate checksum table from MyISAM to InnoDB.

Surprisingly, the test suite proved unstable. Random tests would fail at random times. Some instability was due to new tests for other issues that I had written poorly and that hadn’t yet been run a zillion times. But other instability was due to switching the checksum table to InnoDB; I knew this because when I switched it back to MyISAM the tests were ok. Thus began my descent into the rabbit hole.

In particular, the test for issue 982 was not finishing or, when it did, took forever. I discovered that the problem had to do with --lock, even though it shouldn’t have: the documentation says that --replicate and --lock are useless together because the former eliminates locking concerns. Obviously not. So for this I created issue 1027.

Then I turned my attention to test 207_issue_51.t for issue 51, which was opened two years ago and closed one year ago as WontFix because the problem could not be reproduced. Some might have thought it forgotten among the nearly 1,000 issues since its time, but it was found again at the end of this rabbit hole and revivified.

It turns out that issue 51 is, as the original reporter partially stated, caused by --wait when (and this is the part the original reporter didn’t state) the --replicate checksum table is InnoDB: --wait enables --lock, which disables AutoCommit, so writes to the checksum table may not be committed.
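The failure mode is easy to demonstrate in miniature. This sketch uses Python’s built-in sqlite3 rather than MySQL/InnoDB and Perl DBI, but the semantics that bit issue 51 are analogous: with autocommit off, a writer’s rows are invisible to other connections until an explicit commit.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)  # the sqlite3 module wraps DML in a transaction
reader = sqlite3.connect(path)  # a second connection, like a replica reading

writer.execute("CREATE TABLE checksums (tbl TEXT, crc TEXT)")
writer.commit()

writer.execute("INSERT INTO checksums VALUES ('t1', 'abc123')")
# Not committed yet: the reader sees nothing.
rows_before = reader.execute("SELECT * FROM checksums").fetchall()

writer.commit()
rows_after = reader.execute("SELECT * FROM checksums").fetchall()

print(rows_before, rows_after)  # [] [('t1', 'abc123')]
```

If the writer never commits, as when a tool disables AutoCommit and forgets about it, the checksum rows simply never appear to anyone else.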

So the rabbit hole led from issue 720 to issue 982 to issue 1027 to issue 51. The end results will be a needed bug fix and faster, more stable tests. Sometimes it pays to follow the rabbit.

Written by Daniel Nichter

May 25th, 2010 at 4:28 pm

Posted in Programming


Debugging and ripple effects


As I said earlier, every tiny change that the test suite reveals after code changes is significant. I caught a very subtle “bug” today in recent changes to mk-query-digest (a.k.a. mqd). If you like reading about subtle bugs, read on.

An mqd test on sample file slow023.txt began to differ after some pretty extensive code changes of late:

< # Query 1: 0 QPS, 0x concurrency, ID 0x8E38374648788E52 at byte 0 ________
> # Query 1: 0 QPS, 0x concurrency, ID 0x2CFD93750B99C734 at byte 0 ________

The ID, which depends on the query’s fingerprint, has changed. It’s very important that we don’t suddenly change these IDs on users because they are pivotal in trend analyses with mqd’s --review-history option. First, some background on the recent code changes, and then the little story of how I tracked down the source of this change.

mqd internals used to work like this: call a parser module (like SlowLogParser) and pass it an array of callbacks to run events through. Now there’s a single, unified pipeline of “callbacks” (technically no longer callbacks). The first process in the pipeline is usually a parser module that returns each event, and then mqd keeps pumping the events through the rest of the pipeline (in contrast to before, when the parser module did the pumping). So “obviously” this has nothing to do with query fingerprinting or ID making, which is done in code that has not changed. Thus this “bug” was very perplexing at first.
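In Python terms (an illustrative sketch of the two architectures, not mqd’s actual Perl), the change is from the parser pushing events into callbacks to the caller pulling events from the parser and pumping them itself:

```python
# Before: the parser module pushed each event through the callbacks
# it was handed.
def parse_and_push(log_lines, callbacks):
    for line in log_lines:
        event = {"query": line}  # stand-in for real event parsing
        for cb in callbacks:
            event = cb(event)
            if event is None:
                break

# After: the parser is just the first pipeline stage; the caller
# pulls events from it and pumps them through the remaining stages.
def parse(log_lines):
    for line in log_lines:
        yield {"query": line}

def run_pipeline(log_lines, stages):
    for event in parse(log_lines):
        for stage in stages:
            event = stage(event)
            if event is None:  # a stage may filter an event out
                break
```

The subtlety in the bug below is that in the “before” shape, every callback runs inside the parser’s dynamic scope; in the “after” shape, it does not.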

First step: use the Perl debugger to see what value make_checksum(), the sub that makes the query IDs, used to get and gets now:

DB<3> x $item
0 'select count(*) as a from x '

DB<12> x $item
0 'select count(*) as a from x'

The difference is the single trailing space. But why has this space suddenly disappeared in the newer rev? Something must have changed in fingerprint(), the sub that produces that string. Use the debugger again to step through fingerprint() with a watch set on the variable:

1574: $query =~ s/\A\s+//; # Chop off leading whitespace
Watchpoint 0: $query changed:
old value: ' select count(*) as A from X
new value: 'select count(*) as A from X
1575: chomp $query; # Kill trailing whitespace
1576: $query =~ tr[ \n\t\r\f][ ]s; # Collapse whitespace
Watchpoint 0: $query changed:
old value: 'select count(*) as A from X
new value: 'select count(*) as A from X '

Notice that the variable did not change when the line commented “# Kill trailing whitespace” was executed. The trailing newline was instead reduced to a single trailing space when “# Collapse whitespace” was executed. The new rev:

1585: $query =~ s/\A\s+//; # Chop off leading whitespace
Watchpoint 0: $query changed:
old value: ' select count(*) as A from X
new value: 'select count(*) as A from X
1586: chomp $query; # Kill trailing whitespace
Watchpoint 0: $query changed:
old value: 'select count(*) as A from X
new value: 'select count(*) as A from X'

Notice how chomp in the new rev removed the trailing newline: the result of chomp has changed, but why? In case you didn’t know, chomp actually chomps any trailing $INPUT_RECORD_SEPARATOR, not just newlines, and it just so happens that many of the parser modules change $INPUT_RECORD_SEPARATOR.

The root of this subtle but very important change is that the parser modules no longer call the pipeline callbacks. When they did, their changes to $INPUT_RECORD_SEPARATOR were visible to the callbacks, and operations like fingerprint() are part of the callbacks. Now that they do not, their changes to $INPUT_RECORD_SEPARATOR are “invisible”, and operations like fingerprint() see a different (i.e. the default) $INPUT_RECORD_SEPARATOR.
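The interaction is Perl-specific, but it can be mimicked in Python by making the record separator an explicit parameter (a simplified stand-in for mqd’s fingerprint(), which does much more):

```python
import re

def chomp(s, sep="\n"):
    """Perl's chomp: remove one trailing $INPUT_RECORD_SEPARATOR, if present."""
    return s[: -len(sep)] if sep and s.endswith(sep) else s

def fingerprint(query, record_sep="\n"):
    query = query.lstrip()                       # chop off leading whitespace
    query = chomp(query, record_sep)             # "kill trailing whitespace"
    query = re.sub(r"[ \n\t\r\f]+", " ", query)  # collapse whitespace
    return query.lower()

raw = "  select count(*) as A from X\n"

# Old behavior: a parser had set $/ to something else, so chomp was a
# no-op and the trailing newline collapsed into a trailing space.
fingerprint(raw, record_sep=";")   # -> 'select count(*) as a from x '

# New behavior: the default $/ = "\n" is in effect, so chomp strips
# the newline and no trailing space survives.
fingerprint(raw, record_sep="\n")  # -> 'select count(*) as a from x'
```

Same input, same function, different ambient separator: that one character of difference is what changed the checksummed ID.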

Conclusion in brief: an issue of scope at the beginning of mk-query-digest affects chomp, causing fingerprint() and make_checksum() to generate different query IDs at the end of the script.

Written by Daniel Nichter

November 18th, 2009 at 4:58 pm

Posted in Programming


Zero is a big number


I made changes to mk-query-digest yesterday that I didn’t expect to cause any adverse effects. Nevertheless, several tests began to fail because a single new but harmless line began to appear in the output: “Databases 0”. Perhaps I’m preaching to the choir, as you are all fantastic, thorough, and flawless programmers, but as for myself, I’ve learned never to take a single failed test for granted.

One time a test failed because some values differed by a millisecond or two. Being curious, I investigated and found that our standard deviation equation was just shy of perfect. I fixed it and spent hours cross-checking the myriad tiny values with my TI calculator. Probably no one cared about 0.023 vs. 0.022, but it’s the cultivation of a disposition towards perfection that matters.
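The post doesn’t say exactly what was imperfect about the equation, but for what it’s worth, a numerically robust way to compute a running standard deviation is Welford’s online update (a generic sketch, not Maatkit’s code):

```python
import math

def stddev_welford(values):
    """Welford's online algorithm: a numerically stable running stddev."""
    n = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the mean
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return math.sqrt(m2 / n) if n else 0.0  # population stddev
```

The naive shortcut, sqrt(sum(x**2)/n - mean**2), can lose precision to cancellation when values are large and close together: exactly the kind of last-digit drift a millisecond-level test would catch.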

My innocuous changes yesterday introduced a case of Perl auto-vivification. Doing:

my ($db_for_show) = $sample->{db} ? $sample->{db} : keys %{$stats->{db}->{unq}};

can auto-vivify $stats->{db}. Before yesterday, this lookup happened after the report for those stats was printed; my changes yesterday made it happen before the report. Thus the report did its job and reported the db, or “Databases 0” when there wasn’t one. It’s been fixed, and just in time, since I’m doing Maatkit’s October release today.
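Perl’s auto-vivification has a close cousin in Python’s collections.defaultdict, which makes the same class of bug easy to reproduce: merely reading a missing key creates it, so a report that runs after the lookup sees an entry that no real data ever put there (an illustrative sketch, not the mqd code):

```python
from collections import defaultdict

stats = defaultdict(dict)  # nested stats, like $stats->{db}->{unq}

def db_for_show(sample):
    # Falling through to the stats lookup *reads* stats["db"], which
    # creates it as a side effect: auto-vivification, Python-style.
    return sample.get("db") or list(stats["db"].get("unq", {}))

sample = {}          # no db was recorded for this query class
db_for_show(sample)  # returns [], and vivifies stats["db"]

# A report that iterates the stats *after* this lookup now sees a
# phantom, empty entry: the "Databases 0" line.
print("db" in stats)     # True
print(len(stats["db"]))  # 0
```

The fix, in either language, is the same: don’t let a read-only lookup mutate the structure a later report iterates over.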

Written by Daniel Nichter

October 30th, 2009 at 9:20 am

Posted in Maatkit,Programming
