Slow Log and long_query_time Microsecond Resolution

MySQL has not always supported microsecond resolution for slow query logging. The first major implementation appeared in late 2006 as a patch from the experts at MySQL Performance Blog. Since then, their patch has been officially integrated into newer versions of MySQL, and they have written patches for older versions.

Official Support in MySQL

According to the MySQL manual, intrinsic support for microsecond resolution of slow query logging is as follows:
v4.0, 4.1, 5.0: not supported; long_query_time is measured in whole seconds only (minimum value 1, default 10).

v5.1 up to and including 5.1.20: not supported; long_query_time is measured in whole seconds only (minimum value 1, default 10).

v5.1.21+: supported; for the value of long_query_time, "the minimum is 0, and a resolution of microseconds is supported when logging to a file. However, the microseconds part is ignored and only integer values are written when logging to tables." (MySQL 5.1 Reference Manual, Section 5.2.5, The Slow Query Log)

v6.0 up to and including 6.0.3: not supported; long_query_time is measured in whole seconds only (minimum value 1, default 10).

v6.0.4+: supported; for the value of long_query_time, "the minimum is 0, and a resolution of microseconds is supported when logging to a file. However, the microseconds part is ignored and only integer values are written when logging to tables." (MySQL 6.0 Reference Manual, Section 5.2.5, The Slow Query Log)
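
On a version with built-in support (v5.1.21+ or v6.0.4+), a fractional long_query_time can be set at runtime or in the configuration file, provided the slow log is written to a file rather than a table. Below is a minimal sketch, assuming a server on which the slow query log is already enabled and pointed at a file:

  -- Log any query slower than 500 ms (0.5 s):
  SET GLOBAL long_query_time = 0.5;

  -- Or be more aggressive and log anything slower than 100 µs:
  SET GLOBAL long_query_time = 0.000100;

  -- Verify the global value:
  SHOW GLOBAL VARIABLES LIKE 'long_query_time';

The same fractional value can also be placed under [mysqld] in my.cnf (long_query_time = 0.5). Note that SET GLOBAL affects only connections opened after the change; existing sessions keep their session value of long_query_time.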

Microsecond Resolution: What & Why

A microsecond, denoted by the symbol µs, is one millionth of a second; that is, 1 µs equals 0.000001 second. It is not to be confused with a millisecond (denoted by ms), which is one thousandth of a second, or 0.001 second.

Obviously, the change from second to microsecond resolution for long_query_time is huge. Microsecond resolution allows realistic logging of slow queries. In most high-end applications, even 1 full second is too long, so developers, DBAs, and hackers alike think in much finer terms: milliseconds and microseconds.

In reality, query optimization is most often done in milliseconds rather than microseconds. Whereas a 1 second query can be painfully slow, a 1 millisecond query is faster than most queries can achieve. Regardless, the maximum resolution is microseconds because that is the resolution provided by the underlying code.

1 s vs. 1 ms vs. 1 µs At A Glance

Since microsecond slow logging in MySQL is relatively new (or, since the industry never moves quickly to widely adopt new technology), I have not seen many applications that make a clear distinction between 1 second, 1 millisecond, and 1 microsecond. Some "legacy" applications, if run on a newer version of MySQL that supports microsecond resolution, may show unwieldy values, for example 0.025000 s instead of 25 ms.

At a glance, the distinction between these three resolutions is as follows:
0.000000 - 0.000999 s = 0 - 999 µs
0.001000 - 0.999999 s = 1 ms - 999.999 ms
1.000000 - n.nnnnnn s = 1 s - n.nnnnnn s
At present, mysqlreport v3.5+ formats the value of long_query_time shown on the Slow line of its report (line 16) according to this distinction, displaying the value in µs, ms or s. Eventually, mysqlsla will do the same.
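
For illustration, the same distinction can be expressed directly in SQL. The following is only a sketch of the formatting rule, not mysqlreport's actual code; it converts the current global long_query_time into a human-readable value in µs, ms or s:

  SELECT @@global.long_query_time AS seconds,
         CASE
           -- under 1 ms: show whole microseconds
           WHEN @@global.long_query_time < 0.001
             THEN CONCAT(ROUND(@@global.long_query_time * 1000000), ' µs')
           -- under 1 s: show milliseconds
           WHEN @@global.long_query_time < 1
             THEN CONCAT(ROUND(@@global.long_query_time * 1000, 3), ' ms')
           -- 1 s and above: show seconds as-is
           ELSE CONCAT(@@global.long_query_time, ' s')
         END AS display_value;

For example, with long_query_time set to 0.025000, this displays 25 ms rather than 0.025000 s.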