MySQL Backup Best Practices

Today is World Backup Day, so I thought I would use the opportunity to discuss some best practices and general considerations regarding backing up MySQL instances. While I focus on MySQL, several of these tips apply to backups in general.

Dolphin with a USB stick labeled BACKUP
Back up your data

Before heading into the gory details, let's first take a look at the best practices at a high level:

  • Make sure you can restore your backups:
    • Document and script the restore procedures. Do you know the steps required to restore a full backup – or a single table?
    • Keep copies of the backups off-site. Do you have a copy of your backup if the data center becomes unavailable, for example due to a fire?
    • Validate your backups. Does your backup method work with the features you use? Are you writing to a disk which is failing?
  • Monitor the backups. Do you know when a backup failed? How long do the backups take?
  • Use a backup method appropriate for your system and your requirements.
  • Never stop considering your backup strategy. The world changes, and so do your backup requirements.


No two systems are identical. Make sure you consider your specific requirements when you design your backup procedures.

The rest of this blog will discuss all of these things in more detail.

Make Sure You Can Restore Your Backups

It may seem obvious, but one of the more common issues I see is that backups exist, but the steps to restore them are not known. Or even worse, the backups cannot be restored at all as they are broken.


Whatever you do regarding backups, the single most important thing is that you verify you know how to restore the backups and that the restored instance is valid.

There are several things to consider: how to restore a backup, do you have access to the backup, and is the backup valid?

The Restore Procedure

If you do not know how to restore your backups, then the day you do need to restore one, a relatively standard operation can turn into a major crisis with a manager breathing down your neck.

So, make sure you practice the steps to restore your backups for all of the scenarios you can think of, for example:

  • a plain full restore of the backup
  • a point-in-time recovery, that is: restore the backup itself and apply binary logs up to a given point in time (see the sketch after this list)
  • a partial restore, for example to restore a single table or schema from a full backup
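As a sketch of the point-in-time recovery scenario – assuming a logical full backup in full_backup.sql and that the relevant binary log files and stop time have already been identified (all file names and the timestamp here are hypothetical):

shell$ # 1. restore the full backup
shell$ mysql < full_backup.sql
shell$ # 2. replay the binary logs up to just before the bad event
shell$ mysqlbinlog --stop-datetime="2019-03-31 12:00:00" \
            binlog.000042 binlog.000043 | mysql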

There are more possible scenarios. Take some time to consider which are important for your databases and regularly practice doing those kinds of restores.

When you practice a restore, document all the steps you make in detail and keep the steps in a place where they can easily be found again – for example in a knowledge base. Even better, script the restore; that both documents how the restore should be done and automates the steps, which ensures each restore is done in the same way. If you need to restore a backup in the middle of a crisis, then having all the steps scripted and documented not only helps you remember what to do, but also reduces the chance that something goes wrong.
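A minimal sketch of such a restore script, assuming a compressed mysqldump backup; the paths and file names are placeholders:

#!/bin/sh
# restore_full.sh: restore the latest logical full backup (hypothetical paths)
set -e                                  # abort on the first error
BACKUP=/backups/full_backup.sql.gz
gunzip -c "$BACKUP" | mysql             # replay the dump into the instance
mysqlcheck --all-databases --check      # basic sanity check of the restored data
echo "Restore of $BACKUP completed"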

Related to this discussion is that you should copy the backups to remote storage.

Copy the Backups Off Site

The previous section discussed how you need to consider all your restore scenarios. That should include the case where the whole server or even the whole data center is gone. What do you do in that case?

From XKCD comic 1718.

Other than the need to provision a new MySQL instance somewhere else, you also need to still have access to your backups. This means that a backup that is only stored locally (either on the same host or in the same data center) is of no use for this case.


Always copy your backups to remote storage. How remote depends on your needs.

When you decide where to store your backups, you need to consider your requirements. How long is it acceptable to wait to download the backup during a recovery, and what kinds of disasters (power outage, fire, earthquake, meteor strike, etc.) must the backup be able to survive? You can choose to have backups available on the local host and/or in the local data center, so they are quickly available, for example in case a user deletes the wrong data. Then have another storage location either at the other end of the country or even on another continent to protect against a major disaster.


Replication is also a great way to export your data to another location. Note though that unless the replication is delayed, the replica will not help you recover from bad queries.

Of course, even having the best-written instructions in the world and copies of the backups on all continents does not help you if the backup is corrupted or broken.

Verify Your Backups

A backup is only as good as your ability to restore it and bring the restored instance online. This is the reason it is so important to test your restore procedures, as discussed above. Optimally, you should restore every single backup. In the real world that is not always realistic, but it is still important that you practice a restore from time to time.


The ultimate validation of your backups is to restore them, bring the restored instance online, and verify the data is correct.

In practice it may not be possible to restore every single backup in all the restore combinations. So, you will need to add some other checks. The exact checks you should do depend on your backups, but some possibilities are:

  • MySQL Enterprise Backup (MEB) has a validate command. This will verify the InnoDB checksums for each page. This checks whether the backup is truncated, corrupted, or damaged.
  • MySQL Enterprise Backup can store the result of the backup in the mysql.backup_history table (enabled by default). This includes the overall backup status.
  • Verify the backup is created and has a minimum size.
  • If you have a logical backup, grep for some strings you know should be in the backup, such as CREATE TABLE statements (see the sketch after this list).
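As a sketch of the last two checks for a logical backup on Linux – the file name and the size threshold are arbitrary choices:

shell$ # fail if the backup is smaller than 1 MiB
shell$ [ "$(stat -c%s backup.sql)" -ge 1048576 ] || echo 'Backup suspiciously small!'
shell$ # a valid logical backup should contain CREATE TABLE statements
shell$ grep -c 'CREATE TABLE' backup.sql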

The validation of your backups is of course only useful if you realize when the validation fails, so you also need to monitor the backups.

Monitor the Backups

Monitoring is one of the most important tasks for a database administrator. That also includes monitoring the backups. Once you have verification of the backups in place, you need to ensure the validation status is monitored.

How you do this depends on the monitoring solution you use. In MySQL Enterprise Monitor (MEM) there is a built-in backup dashboard with information about your MySQL Enterprise Backup (MEB) backups; this information is based on the data logged by MySQL Enterprise Backup to the mysql.backup_history table and includes the type of backup, the backup status, how long the backup took, how long locks were held, etc. MySQL Enterprise Monitor also creates events when backups fail.

The MySQL Enterprise Monitor (MEM) dashboard for backups.
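If you do not use MySQL Enterprise Monitor, you can query the table directly. A minimal sketch – the exact column names should be verified against your MySQL Enterprise Backup version:

mysql> SELECT backup_type, exit_state, start_time, end_time
         FROM mysql.backup_history
        ORDER BY start_time DESC
        LIMIT 5;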

So far, all the advice has focused on what you should do with the backup after it has been created. What about creating the backups?

Creating Backups

When you decide how you want to create the backup, there are many things to consider. This section will look at some of them.


Remember that all systems are unique. What is the best choice for one system may not be the best for another.

First of all you need to determine what you need for your backups and what interruption of your production system is allowed when creating the backups. Some of the things to consider are:

  • How much data can you afford to lose in case of a catastrophic disaster?
  • How long may it take to restore the backup?
  • What data must be included in the backup?
  • Which other files (for example binary logs and configuration files) must be included?
  • Do you need to be able to do a point-in-time recovery?
  • Can the production system be taken offline or put into read-only mode during the backup? If so, for how long?


Since MySQL 5.6, mysqlbinlog has been able to stream binary logs to disk giving you a near real-time backup of the binary logs that can be used for a point-in-time recovery. The binary log is enabled by default in MySQL 8.0.
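A sketch of such a binary log streaming setup; the host, user, and starting binary log file are placeholders:

shell$ mysqlbinlog --read-from-remote-server --host=prod-db --user=backup_user \
            --password --raw --stop-never binlog.000001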

Answering these questions helps you determine the backup method that is optimal for your system. Some of the backup methods available are:

  • Logical Backups:
    • mysqlpump: This is available in MySQL 5.7 and later and allows for parallel backups. In most cases other than for MySQL NDB Cluster, it is preferred over mysqldump (see the examples after this list).
    • mysqldump: This is the classical program to create logical backups in MySQL.
    • Native NDB Backups: This is a bit of a hybrid between a logical backup and a raw backup, specialized for the NDBCluster storage engine. It uses a native storage format but can be converted to CSV files.
  • Binary (Raw) Backups:
    • MySQL Enterprise Backup (MEB): This copies the actual database files while the instance keeps running.
    • File system snapshots: These can be used when all database files can be captured in a single consistent snapshot (see the caveats below).
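As a sketch of how the two logical backup programs are invoked – the parallelism and file names are arbitrary choices:

shell$ # mysqlpump with four parallel dump threads
shell$ mysqlpump --default-parallelism=4 --all-databases > full_backup.sql
shell$ # mysqldump with a consistent InnoDB snapshot
shell$ mysqldump --single-transaction --all-databases > full_backup.sql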

Whichever method you choose, make sure you understand its limitations. As an example, file system snapshots can work great in many cases, but if MySQL uses more than one file system for the database files, then it may not be possible to create a consistent snapshot (FLUSH TABLES WITH READ LOCK does not stop background writes for InnoDB except for tables that have been explicitly listed).
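A sketch of the snapshot pattern; the snapshot command itself depends on your storage layer (LVM, ZFS, cloud block storage) and is not shown here:

mysql> FLUSH TABLES WITH READ LOCK;  -- block new writes while the snapshot is taken
-- take the file system snapshot from another shell session here
mysql> UNLOCK TABLES;                -- release the lock as soon as the snapshot exists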


Always do your testing with as much write activity as you can have in a worst-case scenario. Backing up an idle instance will not reveal limitations related to writes happening during the backup.

You also need to take the overhead of the backup method into consideration. At the very least it will impact MySQL by reading the data. There will also be some locking involved, even if it in some cases is very limited. In all cases, creating the backup at the quietest time of the day can help reduce the impact. Another option is to use a replica for the backups, but even then the overhead must be considered, as the replica needs to be able to keep up or catch up before the next backup.


If you create the backup from a dedicated replica, you create the backup without impacting the users. In this case, you should also monitor the replica and ensure it can keep up with the production source, so you always create up-to-date backups.
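For example, the replication lag can be checked before starting the backup (SHOW REPLICA STATUS requires MySQL 8.0.22 or later; use SHOW SLAVE STATUS and Seconds_Behind_Master on older versions):

mysql> SHOW REPLICA STATUS\G
-- inspect Seconds_Behind_Source before deciding to start the backup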

Now you have considered how to create the backups, validated them, copied them to secure off-site locations, and tested all the possible restore scenarios. So you are all set and can put the backups on auto-pilot? Not so fast.

Backups Are a Never Ending Process

The world is not a static place. Neither are your MySQL instances. The configuration changes, the application adds new features, requirements change, the amount of data grows, new MySQL instances are installed on new hardware, on different cloud shapes, or with a different cloud provider, there are updates to MySQL and the backup program, and so on.

This means that the process of working with the backup and restore processes never ends. You need to regularly evaluate whether your backup strategy still works and fulfills all requirements. Look at the bright side: you keep learning and the experience you have gathered since the last evaluation may help you implement an even better backup solution.

Happy World Backup Day.

MySQL 8: Performance Schema Digests Improvements

Since MySQL 5.6, the digest feature of the MySQL Performance Schema has provided a convenient and effective way to obtain statistics of queries based on their normalized form. The feature works so well that it has almost completely (from my experience) replaced the connector extensions and proxy for collecting query statistics for the Query Analyzer (Quan) in MySQL Enterprise Monitor (MEM).

MySQL 8 adds further improvements to the digest feature in the Performance Schema including a sample query with statistics for each digest, percentile information, and a histogram summary. This blog will explore these new features.

The MySQL Enterprise Monitor Query Analyzer
MySQL Enterprise Monitor is one of the main users of the Performance Schema digests for its Query Analyzer.

Let's start out looking at the good old summary by digest table.

Query Sample

The base table for digest summary information is the events_statements_summary_by_digest table, which has been around since MySQL 5.6. In MySQL 8.0 it has been extended with six columns, three of which contain data related to a sample query and will be examined in this section.

The three sample columns are:

  • QUERY_SAMPLE_TEXT: An actual example of a query.
  • QUERY_SAMPLE_SEEN: When the sample query was seen.
  • QUERY_SAMPLE_TIMER_WAIT: How long the sample query took to execute (in picoseconds).

As an example, consider the query SELECT * FROM world.city WHERE ID = <value>. The sample information for that query as well as the digest and digest text (normalized query) may look like:

mysql> SELECT DIGEST, DIGEST_TEXT, QUERY_SAMPLE_SEEN,
              sys.format_time(QUERY_SAMPLE_TIMER_WAIT) AS SampleTimerWait
         FROM performance_schema.events_statements_summary_by_digest
        WHERE DIGEST_TEXT LIKE '%`world` . `city`%'\G
*************************** 1. row ***************************
           DIGEST: 9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366
      DIGEST_TEXT: SELECT * FROM `world` . `city` WHERE `ID` = ?
QUERY_SAMPLE_SEEN: 2018-10-09 17:19:20.500944
  SampleTimerWait: 17.34 ms
1 row in set (0.00 sec)

There are a few things to note here:

  • The digest in MySQL 8 is a SHA-256 hash, whereas in 5.6 and 5.7 it was an MD5 hash.
  • The digest text is similar to the normalized query that the mysqldumpslow script can generate for queries in the slow query log; just that the Performance Schema uses a question mark as a placeholder.
  • The QUERY_SAMPLE_SEEN value is in the system time zone.
  • The sys.format_time() function is used in the query to convert the picoseconds to a human readable value.

The maximum length of the sample text is set with the performance_schema_max_sql_text_length option. The default is 1024 bytes. It is the same option that is used for the SQL_TEXT columns in the statement events tables. It requires a restart of MySQL to change the value. Since the query texts are stored in several contexts and some of the Performance Schema tables can have thousands of rows, do take care not to increase it beyond what you have memory for.
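Since the option is not dynamic, it must be set in the configuration file, for example (a minimal my.cnf sketch; the value is just an illustration):

[mysqld]
# applies to QUERY_SAMPLE_TEXT as well as the SQL_TEXT columns; requires a restart
performance_schema_max_sql_text_length = 2048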

How is the sample query chosen? The sample is the slowest example seen of a query with the given digest. Additionally, if the performance_schema_max_digest_sample_age option is set to a non-zero value (the default is 60 seconds) and the existing sample is older than the specified value, the sample is always replaced.

The events_statements_summary_by_digest table also has another set of new columns: percentile information.

Percentile Information

Since the beginning, the events_statements_summary_by_digest table has included some statistical information about the query times for a given digest: the minimum, average, maximum, and total query time. In MySQL 8 this has been extended with information about the 95th, 99th, and 99.9th percentiles. The information is available in the QUANTILE_95, QUANTILE_99, and QUANTILE_999 columns, respectively. All of the values are in picoseconds.

What do the new columns mean? Based on the histogram information for the query (see the next section), MySQL calculates a high estimate of the query time. For a given digest, 95% of the executed queries are expected to be faster than the query time given by QUANTILE_95. The same applies to the other two columns.

As an example consider the same digest as before:

mysql> SELECT DIGEST, DIGEST_TEXT,
              sys.format_time(SUM_TIMER_WAIT) AS SumTimerWait,
              sys.format_time(MIN_TIMER_WAIT) AS MinTimerWait,
              sys.format_time(AVG_TIMER_WAIT) AS AvgTimerWait,
              sys.format_time(MAX_TIMER_WAIT) AS MaxTimerWait,
              sys.format_time(QUANTILE_95) AS Quantile95,
              sys.format_time(QUANTILE_99) AS Quantile99,
              sys.format_time(QUANTILE_999) AS Quantile999
         FROM performance_schema.events_statements_summary_by_digest
        WHERE DIGEST_TEXT LIKE '%`world` . `city`%'\G
*************************** 1. row ***************************
           DIGEST: 9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366
      DIGEST_TEXT: SELECT * FROM `world` . `city` WHERE `ID` = ?
     SumTimerWait: 692.77 ms
     MinTimerWait: 50.32 us
     AvgTimerWait: 68.92 us
     MaxTimerWait: 17.34 ms
       Quantile95: 104.71 us
       Quantile99: 165.96 us
      Quantile999: 363.08 us
1 row in set (0.00 sec)

Having the 95th, 99th, and 99.9th percentile helps predict the performance of a query and show the spread of the query times. Even more information about the spread can be found using the new family member: histograms.


Histogram Information

Histograms are a way to put the query execution times into buckets, making it possible to see how the query execution times are spread. This can for example be useful to see how consistent the query times are. The average query time may be fine, but if that average covers some queries executing super fast and others very slow, it will still result in unhappy users and customers.

The MAX_TIMER_WAIT column of the events_statements_summary_by_digest table discussed so far shows the high watermark, but it does not say whether it is a single outlier or the result of generally varying query times. The histograms give the answer to this.

Using the query digest from earlier in the blog, the histogram information for the query can be found in the events_statements_histogram_by_digest table like:

mysql> SELECT BUCKET_NUMBER,
              sys.format_time(BUCKET_TIMER_LOW) AS TimerLow,
              sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh,
              COUNT_BUCKET, COUNT_BUCKET_AND_LOWER, BUCKET_QUANTILE
         FROM performance_schema.events_statements_histogram_by_digest
        WHERE DIGEST = '9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366'
              AND COUNT_BUCKET > 0;
+---------------+-----------+-----------+--------------+------------------------+-----------------+
| BUCKET_NUMBER | TimerLow  | TimerHigh | COUNT_BUCKET | COUNT_BUCKET_AND_LOWER | BUCKET_QUANTILE |
+---------------+-----------+-----------+--------------+------------------------+-----------------+
|            36 | 50.12 us  | 52.48 us  |          524 |                    524 |        0.052400 |
|            37 | 52.48 us  | 54.95 us  |         2641 |                   3165 |        0.316500 |
|            38 | 54.95 us  | 57.54 us  |          310 |                   3475 |        0.347500 |
|            39 | 57.54 us  | 60.26 us  |          105 |                   3580 |        0.358000 |
|            40 | 60.26 us  | 63.10 us  |           48 |                   3628 |        0.362800 |
|            41 | 63.10 us  | 66.07 us  |         3694 |                   7322 |        0.732200 |
|            42 | 66.07 us  | 69.18 us  |          611 |                   7933 |        0.793300 |
|            43 | 69.18 us  | 72.44 us  |          236 |                   8169 |        0.816900 |
|            44 | 72.44 us  | 75.86 us  |          207 |                   8376 |        0.837600 |
|            45 | 75.86 us  | 79.43 us  |          177 |                   8553 |        0.855300 |
|            46 | 79.43 us  | 83.18 us  |          236 |                   8789 |        0.878900 |
|            47 | 83.18 us  | 87.10 us  |          186 |                   8975 |        0.897500 |
|            48 | 87.10 us  | 91.20 us  |          203 |                   9178 |        0.917800 |
|            49 | 91.20 us  | 95.50 us  |          116 |                   9294 |        0.929400 |
|            50 | 95.50 us  | 100.00 us |          135 |                   9429 |        0.942900 |
|            51 | 100.00 us | 104.71 us |          105 |                   9534 |        0.953400 |
|            52 | 104.71 us | 109.65 us |           65 |                   9599 |        0.959900 |
|            53 | 109.65 us | 114.82 us |           65 |                   9664 |        0.966400 |
|            54 | 114.82 us | 120.23 us |           59 |                   9723 |        0.972300 |
|            55 | 120.23 us | 125.89 us |           40 |                   9763 |        0.976300 |
|            56 | 125.89 us | 131.83 us |           34 |                   9797 |        0.979700 |
|            57 | 131.83 us | 138.04 us |           33 |                   9830 |        0.983000 |
|            58 | 138.04 us | 144.54 us |           27 |                   9857 |        0.985700 |
|            59 | 144.54 us | 151.36 us |           16 |                   9873 |        0.987300 |
|            60 | 151.36 us | 158.49 us |           25 |                   9898 |        0.989800 |
|            61 | 158.49 us | 165.96 us |           20 |                   9918 |        0.991800 |
|            62 | 165.96 us | 173.78 us |            9 |                   9927 |        0.992700 |
|            63 | 173.78 us | 181.97 us |           11 |                   9938 |        0.993800 |
|            64 | 181.97 us | 190.55 us |           11 |                   9949 |        0.994900 |
|            65 | 190.55 us | 199.53 us |            4 |                   9953 |        0.995300 |
|            66 | 199.53 us | 208.93 us |            6 |                   9959 |        0.995900 |
|            67 | 208.93 us | 218.78 us |            6 |                   9965 |        0.996500 |
|            68 | 218.78 us | 229.09 us |            6 |                   9971 |        0.997100 |
|            69 | 229.09 us | 239.88 us |            3 |                   9974 |        0.997400 |
|            70 | 239.88 us | 251.19 us |            2 |                   9976 |        0.997600 |
|            71 | 251.19 us | 263.03 us |            2 |                   9978 |        0.997800 |
|            72 | 263.03 us | 275.42 us |            2 |                   9980 |        0.998000 |
|            73 | 275.42 us | 288.40 us |            4 |                   9984 |        0.998400 |
|            74 | 288.40 us | 302.00 us |            2 |                   9986 |        0.998600 |
|            75 | 302.00 us | 316.23 us |            2 |                   9988 |        0.998800 |
|            76 | 316.23 us | 331.13 us |            1 |                   9989 |        0.998900 |
|            78 | 346.74 us | 363.08 us |            3 |                   9992 |        0.999200 |
|            79 | 363.08 us | 380.19 us |            2 |                   9994 |        0.999400 |
|            80 | 380.19 us | 398.11 us |            1 |                   9995 |        0.999500 |
|            83 | 436.52 us | 457.09 us |            1 |                   9996 |        0.999600 |
|           100 | 954.99 us | 1.00 ms   |            1 |                   9997 |        0.999700 |
|           101 | 1.00 ms   | 1.05 ms   |            1 |                   9998 |        0.999800 |
|           121 | 2.51 ms   | 2.63 ms   |            1 |                   9999 |        0.999900 |
|           162 | 16.60 ms  | 17.38 ms  |            1 |                  10000 |        1.000000 |
+---------------+-----------+-----------+--------------+------------------------+-----------------+
49 rows in set (0.02 sec)

In this example, 3694 times (the COUNT_BUCKET column) when the query was executed, the query time was between 63.10 microseconds and 66.07 microseconds, so the execution time matched the interval of bucket number 41. There has been a total of 7322 executions (the COUNT_BUCKET_AND_LOWER column) of the query with a query time of 66.07 microseconds or less. This means that 73.22% (the BUCKET_QUANTILE column) of the queries have a query time of 66.07 microseconds or less.
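As a sanity check, the BUCKET_QUANTILE value can be derived from the counts. The total number of executions for the digest (10000 in this example) can be found by summing COUNT_BUCKET:

mysql> SELECT SUM(COUNT_BUCKET) AS TotalExecutions
         FROM performance_schema.events_statements_histogram_by_digest
        WHERE DIGEST = '9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366';

For bucket 41 that gives 7322 / 10000 = 0.7322, matching the BUCKET_QUANTILE column.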

In addition to the columns shown, there are SCHEMA_NAME and DIGEST (which together with BUCKET_NUMBER form a unique key). For each digest there are 450 buckets, with the width of the buckets (in terms of the difference between the low and high timers) gradually becoming larger and larger. The first, middle, and last five buckets are:

mysql> SELECT BUCKET_NUMBER,
              sys.format_time(BUCKET_TIMER_LOW) AS TimerLow,
              sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh
         FROM performance_schema.events_statements_histogram_by_digest
        WHERE DIGEST = '9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366'
              AND (BUCKET_NUMBER < 5
                   OR BUCKET_NUMBER BETWEEN 223 AND 227
                   OR BUCKET_NUMBER > 444);
+---------------+-----------+-----------+
| BUCKET_NUMBER | TimerLow  | TimerHigh |
+---------------+-----------+-----------+
|             0 | 0 ps      | 10.00 us  |
|             1 | 10.00 us  | 10.47 us  |
|             2 | 10.47 us  | 10.96 us  |
|             3 | 10.96 us  | 11.48 us  |
|             4 | 11.48 us  | 12.02 us  |
|           223 | 275.42 ms | 288.40 ms |
|           224 | 288.40 ms | 302.00 ms |
|           225 | 302.00 ms | 316.23 ms |
|           226 | 316.23 ms | 331.13 ms |
|           227 | 331.13 ms | 346.74 ms |
|           445 | 2.11 h    | 2.21 h    |
|           446 | 2.21 h    | 2.31 h    |
|           447 | 2.31 h    | 2.42 h    |
|           448 | 2.42 h    | 2.53 h    |
|           449 | 2.53 h    | 30.50 w   |
+---------------+-----------+-----------+
15 rows in set (0.02 sec)

The bucket thresholds are fixed and thus the same for all digests. There is also a global histogram in the events_statements_histogram_global table.
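The global histogram can be queried in the same way, just without a digest; a minimal example:

mysql> SELECT BUCKET_NUMBER,
              sys.format_time(BUCKET_TIMER_LOW) AS TimerLow,
              sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh,
              COUNT_BUCKET, BUCKET_QUANTILE
         FROM performance_schema.events_statements_histogram_global
        WHERE COUNT_BUCKET > 0
        LIMIT 5;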

This concludes the introduction to the new Performance Schema digest features. As monitoring tools start to use this information, it will help create a better monitoring experience. Particularly the histograms will benefit from being shown as graphs.

What Do I/O Latencies and Bytes Mean in the Performance and sys Schemas?

The MEM InnoDB File I/O Graphs showing a peak in latency and bytes at 12:51.

The Performance Schema and sys schema are great for investigating what is going on in MySQL, including investigating performance issues. In my work in MySQL Support, I have several times heard questions about whether a peak in the InnoDB Data File I/O – Latency graph in MySQL Enterprise Monitor (MEM) or some values from the corresponding tables and views in the Performance Schema and sys schema are cause for concern. This blog will discuss what these observations mean and how to use them.

The Tables and Views Involved

This blog will look into three sources in the Performance Schema for I/O latencies, so let’s first take a look at those tables. The three Performance Schema tables are:

  • events_waits_summary_global_by_event_name: with the event name set to wait/io/table/sql/handler or wait/io/file/%. This is the table used for the waits_global_by_latency and wait_classes_global_by_% views in the sys schema.
  • table_io_waits_summary_by_table: this is the table used for the schema_table_statistics% views in the sys schema.
  • file_summary_by_instance: this is the table used for the io_global_by_file_by% views in the sys schema.

These are also the sources used in MySQL Enterprise Monitor for the InnoDB Data File I/O graphs shown above. Let’s take a look at an example of each of the tables.


The events_waits_summary_global_by_event_name table has aggregate data for wait events grouped by the event name. For the purpose of this discussion, it is the table and file wait/io events that are of interest. An example of the data returned is:

mysql> SELECT *
         FROM performance_schema.events_waits_summary_global_by_event_name
        WHERE EVENT_NAME = 'wait/io/table/sql/handler'
              OR EVENT_NAME LIKE 'wait/io/file/%'
        LIMIT 5;
+--------------------------------------+------------+----------------+----------------+----------------+----------------+
| EVENT_NAME                           | COUNT_STAR | SUM_TIMER_WAIT | MIN_TIMER_WAIT | AVG_TIMER_WAIT | MAX_TIMER_WAIT |
+--------------------------------------+------------+----------------+----------------+----------------+----------------+
| wait/io/file/innodb/innodb_log_file  |    4003029 | 97371921796204 |        5371684 |       24324412 |   180878537324 |
| wait/io/table/sql/handler            |    4049559 | 43338753895440 |         494128 |       10702072 |   138756437188 |
| wait/io/file/innodb/innodb_data_file |      25850 | 11395061934996 |              0 |      440814508 |    43029060332 |
| wait/io/file/sql/binlog              |      20041 |  1316679917820 |              0 |       65699088 |    25816580304 |
| wait/io/file/sql/io_cache            |       2439 |    68212824648 |        1448920 |       27967360 |      628484180 |
+--------------------------------------+------------+----------------+----------------+----------------+----------------+
5 rows in set (0.00 sec)

mysql> SELECT EVENT_NAME, COUNT_STAR,
              sys.format_time(SUM_TIMER_WAIT) AS SumTimerWait,
              sys.format_time(MIN_TIMER_WAIT) AS MinTimeWait,
              sys.format_time(AVG_TIMER_WAIT) AS AvgTimerWait,
              sys.format_time(MAX_TIMER_WAIT) AS MaxTimerWait
         FROM performance_schema.events_waits_summary_global_by_event_name
        WHERE EVENT_NAME = 'wait/io/table/sql/handler'
              OR EVENT_NAME LIKE 'wait/io/file/%'
        LIMIT 5;
+--------------------------------------+------------+--------------+-------------+--------------+--------------+
| EVENT_NAME                           | COUNT_STAR | SumTimerWait | MinTimeWait | AvgTimerWait | MaxTimerWait |
+--------------------------------------+------------+--------------+-------------+--------------+--------------+
| wait/io/file/innodb/innodb_log_file  |    4003029 | 1.62 m       | 5.37 us     | 24.32 us     | 180.88 ms    |
| wait/io/table/sql/handler            |    4049559 | 43.34 s      | 494.13 ns   | 10.70 us     | 138.76 ms    |
| wait/io/file/innodb/innodb_data_file |      25853 | 11.40 s      | 0 ps        | 440.78 us    | 43.03 ms     |
| wait/io/file/sql/binlog              |      20041 | 1.32 s       | 0 ps        | 65.70 us     | 25.82 ms     |
| wait/io/file/sql/io_cache            |       2439 | 68.21 ms     | 1.45 us     | 27.97 us     | 628.48 us    |
+--------------------------------------+------------+--------------+-------------+--------------+--------------+
5 rows in set (0.01 sec)

mysql> SELECT *
         FROM sys.waits_global_by_latency
        WHERE events = 'wait/io/table/sql/handler'
              OR events LIKE 'wait/io/file/%'
        LIMIT 5;
+--------------------------------------+---------+---------------+-------------+-------------+
| events                               | total   | total_latency | avg_latency | max_latency |
+--------------------------------------+---------+---------------+-------------+-------------+
| wait/io/file/innodb/innodb_log_file  | 4003029 | 1.62 m        | 24.32 us    | 180.88 ms   |
| wait/io/table/sql/handler            | 4049559 | 43.34 s       | 10.70 us    | 138.76 ms   |
| wait/io/file/innodb/innodb_data_file |   25874 | 11.43 s       | 441.88 us   | 43.03 ms    |
| wait/io/file/sql/binlog              |   20131 | 1.32 s        | 65.66 us    | 25.82 ms    |
| wait/io/file/sql/io_cache            |    2439 | 68.21 ms      | 27.97 us    | 628.48 us   |
+--------------------------------------+---------+---------------+-------------+-------------+
5 rows in set (0.01 sec)

These three queries show the same data, just obtained and displayed in different ways.

In the result there are two groups of events: the wait/io/table events (wait/io/table/sql/handler is the only event in this group, which is why it can be listed explicitly) and the wait/io/file group.

The table events are for accessing data in tables. It does not matter whether the data is cached in the buffer pool or is accessed on disk. In this table and view, there is no distinction between different types of access (read, write, etc.).

The file events are, as the name suggests, for actually accessing files. There is one file event per file type. For example, in the output there are the wait/io/file/innodb/innodb_log_file event for accessing the InnoDB redo log files, the wait/io/file/innodb/innodb_data_file event for accessing the InnoDB data files themselves, the wait/io/file/sql/binlog event for the binary log files, etc.

In the second query, all of the timings are wrapped in the sys.format_time() function. The timings returned by the Performance Schema are in picoseconds (10^-12 second), which are somewhat hard for us humans to read. The sys.format_time() function converts the picoseconds to human readable strings. When you sort, however, make sure you sort by the original non-converted values.


Use the sys.format_time() function to convert the picoseconds to a human readable value, but only do so for the returned rows; sort and filter by the original values. The Performance Schema always returns timings in picoseconds irrespective of the timer used internally for the event.

The sys schema by default returns the timing as human readable values. If you need the values in picoseconds prefix the table name with x$, for example sys.x$waits_global_by_latency. The sys schema includes an ORDER BY clause in most views. For the waits_global_by_latency view, the default ordering is by the total latency, so there is no need to add an ORDER BY clause in this example.
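As an example of combining the two, you can query the x$ view so the ORDER BY uses the raw picosecond values while the output is still human readable (a sketch; the explicit ORDER BY is redundant for this particular view but shows the principle):

mysql> SELECT events, sys.format_time(total_latency) AS TotalLatency
         FROM sys.x$waits_global_by_latency
        ORDER BY total_latency DESC
        LIMIT 5;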


The table_io_waits_summary_by_table Performance Schema table and the schema_table_statistics% sys schema views are related to the wait/io/table/sql/handler event just discussed. These provide information about the amount of time spent per table. Unlike querying wait/io/table/sql/handler in the events_waits_% tables, they split the time spent according to whether it is used for reads, writes, fetch, insert, update, or delete. The read and write columns are the aggregates for the corresponding read and write operations, respectively. Since fetch is the only read operation, the read and fetch columns will have the same values.

The table and view show the table I/O, i.e. the access to table data irrespective of whether the data is accessed in-memory or on disk. This is similar to the wait/io/table/sql/handler event. An example of the result of querying the table and view for the employees.salaries table is:

mysql> SELECT *
         FROM performance_schema.table_io_waits_summary_by_table
        WHERE OBJECT_SCHEMA = 'employees'
              AND OBJECT_NAME = 'salaries'\G
*************************** 1. row ***************************
   OBJECT_SCHEMA: employees
     OBJECT_NAME: salaries
      COUNT_STAR: 2844047
  SUM_TIMER_WAIT: 31703741623644
  MIN_TIMER_WAIT: 4975456
  AVG_TIMER_WAIT: 11147072
  MAX_TIMER_WAIT: 138756437188
      COUNT_READ: 0
     COUNT_WRITE: 2844047
 SUM_TIMER_WRITE: 31703741623644
 AVG_TIMER_WRITE: 11147072
 MAX_TIMER_WRITE: 138756437188
    COUNT_INSERT: 2844047
SUM_TIMER_INSERT: 31703741623644
MAX_TIMER_INSERT: 138756437188
1 row in set (0.00 sec)

mysql> SELECT *
         FROM sys.schema_table_statistics
        WHERE table_schema = 'employees'
              AND table_name = 'salaries'\G
*************************** 1. row ***************************
     table_schema: employees
       table_name: salaries
    total_latency: 31.70 s
     rows_fetched: 0
    fetch_latency: 0 ps
    rows_inserted: 2844047
   insert_latency: 31.70 s
     rows_updated: 0
   update_latency: 0 ps
     rows_deleted: 0
   delete_latency: 0 ps
 io_read_requests: 493
          io_read: 7.70 MiB
  io_read_latency: 611.04 ms
io_write_requests: 8628
         io_write: 134.91 MiB
 io_write_latency: 243.19 ms
 io_misc_requests: 244
  io_misc_latency: 2.50 s
1 row in set (0.05 sec)

In this case it shows that there have been mostly writes – inserts – for the table. The sys schema view effectively joins on performance_schema.file_summary_by_instance for the io_% columns, so for the schema_table_statistics view, fetch and read are not synonyms.

So, what is it the file_summary_by_instance table shows that is different from the “table I/O” that has been the topic of the first two tables? Let's see.


Unlike the two previous tables, the file_summary_by_instance shows how much time is spent on actual file I/O and how much data is accessed. This makes the file_summary_by_instance table and the corresponding sys schema views very useful for determining where time is spent doing disk I/O and which files have the most data accesses on disk.

An example of using the Performance Schema and two of the sys schema views is:

mysql> SELECT *
         FROM performance_schema.file_summary_by_instance
        WHERE FILE_NAME LIKE '%\\\\employees\\\\salaries.ibd'\G
*************************** 1. row ***************************
                FILE_NAME: C:\ProgramData\MySQL\MySQL Server 8.0\Data\employees\salaries.ibd
               EVENT_NAME: wait/io/file/innodb/innodb_data_file
    OBJECT_INSTANCE_BEGIN: 2230271392896
               COUNT_STAR: 9365
           SUM_TIMER_WAIT: 3351594917896
           MIN_TIMER_WAIT: 11970500
           AVG_TIMER_WAIT: 357885020
           MAX_TIMER_WAIT: 40146113948
               COUNT_READ: 493
           SUM_TIMER_READ: 611040652056
           MIN_TIMER_READ: 65421052
           AVG_TIMER_READ: 1239433224
           MAX_TIMER_READ: 16340582272
              COUNT_WRITE: 8628
          SUM_TIMER_WRITE: 243186542696
          MIN_TIMER_WRITE: 11970500
          AVG_TIMER_WRITE: 28185588
          MAX_TIMER_WRITE: 1984546920
               COUNT_MISC: 244
           SUM_TIMER_MISC: 2497367723144
           MIN_TIMER_MISC: 154910196
           AVG_TIMER_MISC: 10235113564
           MAX_TIMER_MISC: 40146113948
1 row in set (0.00 sec)

mysql> SELECT *
         FROM sys.io_global_by_file_by_latency
        WHERE file = '@@datadir\\employees\\salaries.ibd'\G
*************************** 1. row ***************************
         file: @@datadir\employees\salaries.ibd
        total: 9365
total_latency: 3.35 s
   count_read: 493
 read_latency: 611.04 ms
  count_write: 8628
write_latency: 243.19 ms
   count_misc: 244
 misc_latency: 2.50 s
1 row in set (0.09 sec)

mysql> SELECT *
         FROM sys.io_global_by_file_by_bytes
         WHERE file = '@@datadir\\employees\\salaries.ibd'\G
*************************** 1. row ***************************
         file: @@datadir\employees\salaries.ibd
   count_read: 493
   total_read: 7.70 MiB
     avg_read: 16.00 KiB
  count_write: 8628
total_written: 134.91 MiB
    avg_write: 16.01 KiB
        total: 142.61 MiB
    write_pct: 94.60
1 row in set (0.10 sec)

This example is from Microsoft Windows, and as always when backslashes are in play, it is fun to try to determine the appropriate number of backslashes to use. When specifying the file name with LIKE, you need four backslashes per backslash in the file name; when using = you need two backslashes.

Again, the values are split into reads and writes (though not as detailed as before with fetch, insert, update, and delete – that is not known at the level where the file I/O happens). The miscellaneous values include everything that is not considered reads or writes; this includes opening and closing the file.

The sys schema queries have formatted not only the timings but also the path and the bytes. This has been done using the sys.format_path() and sys.format_bytes() functions, respectively.

From the output, it can be seen that even though no rows were ever fetched (read) from the employees.salaries table (as found in the previous output), there has still been some read file I/O. This is what the sys.schema_table_statistics view reflected.

So, what does all of this mean? The graph in MySQL Enterprise Monitor showed that there were six seconds of file I/O latency for the InnoDB data files. Is that bad? As often with these kinds of questions, the answer is: “it depends”.

What to Make of the Values

Here we have a case where the graphs in MySQL Enterprise Monitor show a peak for the latency and bytes used doing I/O on the InnoDB data files. This is actual disk I/O. But what exactly does that mean, and should the alarm sound?

Let’s first refresh our memory on how the graphs looks:

The MEM InnoDB File I/O Graphs showing a peak in latency and bytes at 12:51.

If you are not using MySQL Enterprise Monitor, you may have similar graphs from your monitoring solution, or you have obtained latency and bytes values from the tables and views discussed in this blog.

The latency graph shows that we have done six seconds of I/O. What does that mean? It is the aggregate I/O during the period the data was collected. In this case, the data is plotted for every minute, so in the one minute around 12:51, six out of the 60 seconds – around 10% of the time – were spent doing I/O. Now, the six seconds suddenly do not sound so bad. Similarly for the bytes: around 4.6MiB of data was read or written.

In general, the values obtained either from the monitoring graphs or from the underlying tables cannot on their own be used to conclude whether there is a problem or not. They just show how much I/O was done at different times.

The same applies to the values from the Performance Schema. On their own they do not tell much. You can almost say they are too neutral – they just state how much work was done, not whether it was too much or too little.

A more useful way to use this data is when a problem is reported. This can be a system administrator reporting that the disks are 100% utilized or end users reporting that the system is slow. Then you can go and look at what happened. If the disk I/O was unusually high at that point in time, then that is likely related, and you can continue your investigation from there.

There are more reports in MySQL Enterprise Monitor, both as time series graphs and as point-in-time snapshots. The point-in-time snapshots often use the sys schema views but allow sorting. An example is the Database File I/O reports:

MEM's Database File I/O Report

MySQL Workbench also provides performance reports based on the sys schema. The equivalent to the previous report is the Top File I/O Activity Report:

MySQL Workbench's Top File I/O Activity Report

The MySQL Workbench performance reports also allow you to export the report or copy the query used to generate it, so you can execute it manually.

With respect to the wait/io/table/sql/handler events, remember that I/O here does not mean disk I/O but table I/O. So, all it means when time accumulates for these events – including when looking at the per-table values – is that the data in the table is being used. There are also per-index values in the table_io_waits_summary_by_index_usage Performance Schema table and the schema_index_statistics sys schema view, which have the same meaning. Whether a given usage is too high depends on many things. However, it can again be useful when investigating what is causing problems.
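For example, a sketch of looking at the per-index statistics for the employees.salaries table used earlier:

mysql> SELECT index_name, rows_selected, select_latency,
              rows_inserted, insert_latency
         FROM sys.schema_index_statistics
        WHERE table_schema = 'employees'
              AND table_name = 'salaries';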

For the example data from this blog, it was triggered by loading the employees sample database. So, there was no problem. If you want to put data into your database, it must necessarily be written to the data file, otherwise it will not be persisted. However, if you think the data import took too long, you can use the data as part of your investigation on how to make the import faster.

The conclusion is that you should not panic if you see the I/O latencies peak. On their own they just mean that more work was done during that period than usual. Without context, a peak is no worse than a dip (which could indicate something is preventing work from being done or that there is a natural slow period). Instead, use the I/O latencies and bytes together with other observations to understand where MySQL is spending time and for which files data is being read and written.


If you want to understand more about the Performance Schema tables and sys schema views discussed in this blog, I recommend looking at the following references:

The GitHub repository for the sys schema includes all the definitions of the views (and other objects). Since these are written in plain SQL, they are very useful to see in more depth where the data is coming from. The GitHub website allows you to browse through each of the files. Each sys schema object is defined in its own file.

Install MySQL Enterprise Monitor (MEM) 3.0 Using Limited Resources

MySQL Enterprise Monitor (MEM) is the monitoring solution offered as part of MySQL Enterprise Edition and MySQL Cluster Carrier Grade Edition. In this blog, I will not go into details of the features of MEM, but rather focus on the case where you want to install MEM 3.0 to try it out on your personal desktop or laptop.

A trial version (the full product, but it can only be used for 30 days) is available from Oracle's Software Delivery Cloud. If you are a MySQL customer, it is recommended that you download MEM from My Oracle Support (MOS).

Once you have downloaded and extracted the installation binaries, you can start the installation. You have the choice between a GUI, a text based, and an unattended install. Here I will use the GUI install, but if you want to try one of the other install options, launch the installer with the --help option to get more information.

MEM 3.0 consists of two parts:

  • The Service Manager
  • Agents

Only the Service Manager is required, so that will be the first thing to install. As this is intended to show how you can test MEM, I will use a normal user rather than root for the installation.

It is also recommended to take a look at the MEM 3.0 Reference Manual.

Installation Wizard

When launching the installer the first screen asks which language to use – you have the choice of English and Japanese:

Installing the MEM 3.0 Service Manager - Step 1: Choose language
Step 1: Choose language

Next is an information screen that you should ensure you keep track of the usernames and passwords entered during the installation process:

Installing the MEM 3.0 Service Manager - Step 2: Remember the usernames and passwords entered during the installation process
Step 2: Remember the usernames and passwords entered during the installation process

The actual install process is now ready to start:

Installing the MEM 3.0 Service Manager - Step 3: Ready to start the actual install process
Step 3: Ready to start the actual install process

The following steps are to configure the installation – the first of which is to decide where to install the Service Manager:

Installing the MEM 3.0 Service Manager - Step 4: Choose the installation directory
Step 4: Choose the installation directory

The Service Manager will need three ports:

  • Tomcat Server Port: For the web UI when using non-SSL connections
  • Tomcat Shutdown Port: Used internally to shutdown the web server
  • Tomcat SSL Port: For the web UI when using SSL connections
Installing the MEM 3.0 Service Manager - Step 5: Choose the port numbers
Step 5: Choose the port numbers

The Service Manager uses a MySQL database to store the data collected. The next screen allows you to choose between using the MySQL database bundled with MEM or an existing one. Using the bundled instance has the advantage that MEM will configure it automatically and upgrades can be easier; however, it means running two MySQL instances if you already have MySQL installed. For a test installation, the bundled instance also has the advantage that it is easy to uninstall everything again, so we will use the bundled instance in this example:

Installing the MEM 3.0 Service Manager - Step 6: Choose whether to use the bundled MySQL database or an existing one
Step 6: Choose whether to use the bundled MySQL database or an existing one

The final pre-installation configuration step is to choose the username and password the Service Manager will use to connect to the MySQL database. This is the username and password that you were reminded of earlier to make sure you remember:

Installing the MEM 3.0 Service Manager - Step 7: Specify username and password for the Service Manager to connect to the MySQL database storing the data collected through the monitoring
Step 7: Specify username and password for the Service Manager to connect to the MySQL database storing the data collected through the monitoring

Next is a note that because we have chosen to install the Service Manager as a non-root user, it will not be started automatically when the server is restarted:

Installing the MEM 3.0 Service Manager - Step 8: Info that MEM will not start automatically when not installed as root
Step 8: Info that MEM will not start automatically when not installed as root

Ready to Install:

Installing the MEM 3.0 Service Manager - Step 9: Configuration completed
Step 9: Configuration completed – ready to install

The Service Manager is now being installed – this will take a little while as it both includes copying all the files in place as well as configuring and starting the web server and the bundled MySQL database:

Installing the MEM 3.0 Service Manager - Step 10: Installation is in progress
Step 10: Installation is in progress

MEM includes an uninstall script if you need to remove the Service Manager again:

Installing the MEM 3.0 Service Manager - Step 11: Files are now installed
Step 11: Files are now installed

To improve security, MEM 3.0 uses SSL by default. The installation process adds a self-signed certificate, but you can choose to install your own certificate later:

Installing the MEM 3.0 Service Manager - Step 12: Information that MEM is using SSL with a self-signed certificate by default
Step 12: Information that MEM is using SSL with a self-signed certificate by default

The installer is now done and it is time to launch the UI:

Installing the MEM 3.0 Service Manager - Step 13: Ready to launch the Service Manager
Step 13: Ready to launch the Service Manager

Unattended Install

If you go through the installation process often, it is more convenient to use the unattended installation as it allows you to automate the installation. To perform the same installation as above using the unattended installation you can execute:

shell$ ./mysqlmonitor- \
            --mode unattended --unattendedmodeui none \
            --installdir /home/jesper/mysql/enterprise/monitor \
            --adminuser service_manager --adminpassword mypassword \
            --dbport 13306 --mysql_installation_type bundled \
            --tomcatport 18080 --tomcatshutdownport 18005 \
            --tomcatsslport 18443

As several of the values are the default ones, you can skip some of the options, but they are included here for completeness.

When the above command completes, continue with the post-installation configuration as you would have done using the installation wizard.

Post-Installation Configuration

Once the Service Manager UI has opened in your browser there is a little post-installation configuration to take care of:

  • You need to create an admin user for the web UI
  • You need to create a user agents can use if you choose to install any agents
  • You can choose whether to have the Service Manager automatically check for updates and for how long to keep historical data
Installing the MEM 3.0 Service Manager - Step 14: Post installation configuration
Step 14: Post installation configuration

The last step is to choose your time zone and which locale to use in the user interface:

Installing the MEM 3.0 Service Manager - Step 15: Choose timezone and the locale
Step 15: Choose timezone and the locale

After this you are ready to play around with the monitor. If you choose to install agents, the steps are similar.

Reducing Memory Footprint

One thing to be aware of, though, is that the default configuration used by the Service Manager is aimed at a small to medium production installation. If you, for example, installed MEM in a virtual machine or on a computer with limited memory available, the default configuration may not be optimal.

With the installation above just completed, the memory usage of the web server (Tomcat) and the bundled MySQL database is:

15653 jesper    20   0 3794m 924m  13m S  1.0 23.4   4:46.01 java
15586 jesper    20   0 3425m 608m 9588 S  2.6 15.4   2:23.88 mysqld

So around 1.5G resident memory. That is without adding any additional agents and/or MySQL instances to monitor.

So when I use MEM for testing, the first thing I do is change a few configuration values to reduce the memory usage. The configuration options I change are located in two files (paths are given relative to the installation directory):

  • mysql/my.cnf – options related to the bundled MySQL database
  • apache-tomcat/bin/ – options related to the web server


For the MySQL database I change two settings:

  • innodb_buffer_pool_size – this is by default 768M, but if you only monitor a couple of MySQL instances, something like 100M is enough
  • table_definition_cache – this is 2048 by default. One side effect of the value being so large is that the auto-sizing of the Performance Schema considers the installation a large instance, so the Performance Schema tables are made large. Reducing the value to 800 is enough for testing and will reduce the memory usage of the Performance Schema by several hundred megabytes.

So my mysql/my.cnf looks something like this after the changes:

# Main Caches
table_definition_cache = 800

# InnoDB
innodb_buffer_pool_size = 100M


It is only necessary to edit one line in the Tomcat configuration file – the default settings are:

JAVA_OPTS="-Xmx768M -Xms768M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/jesper/mysql/enterprise/monitor/apache-tomcat/temp -XX:+UseParallelOldGC -XX:MaxPermSize=512M"

I change that to:

JAVA_OPTS="-Xmx256M -Xms256M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/jesper/mysql/enterprise/monitor/apache-tomcat/temp -XX:+UseParallelOldGC -XX:MaxPermSize=200M"

That is, the three settings I have changed are:

  • -Xmx (maximum heap size) from 768M to 256M
  • -Xms (minimum heap size) from 768M to 256M
  • -XX:MaxPermSize from 512M to 200M

Enabling the New Configurations

Restarting the Service Manager is required to make the new configuration take effect. You can do the restart by going into the installation directory and executing:

monitor$ ./ restart
Using CATALINA_BASE:   /home/jesper/mysql/enterprise/monitor/apache-tomcat
Using CATALINA_HOME:   /home/jesper/mysql/enterprise/monitor/apache-tomcat
Using CATALINA_TMPDIR: /home/jesper/mysql/enterprise/monitor/apache-tomcat/temp
Using JRE_HOME:        /home/jesper/mysql/enterprise/monitor/java
Using CLASSPATH:       /home/jesper/mysql/enterprise/monitor/apache-tomcat/bin/bootstrap.jar:/home/jesper/mysql/enterprise/monitor/apache-tomcat/bin/tomcat-juli.jar
Using CATALINA_PID:    /home/jesper/mysql/enterprise/monitor/apache-tomcat/temp/
Stopping tomcat service . [ OK ]
Stopping mysql service . [ OK ]
./ : mysql  started
131128 13:58:59 mysqld_safe Logging to '/home/jesper/mysql/enterprise/monitor/mysql/runtime/mysqld.log'.
131128 13:58:59 mysqld_safe Starting mysqld daemon with databases from /home/jesper/mysql/enterprise/monitor/mysql/data/
Using CATALINA_BASE:   /home/jesper/mysql/enterprise/monitor/apache-tomcat
Using CATALINA_HOME:   /home/jesper/mysql/enterprise/monitor/apache-tomcat
Using CATALINA_TMPDIR: /home/jesper/mysql/enterprise/monitor/apache-tomcat/temp
Using JRE_HOME:        /home/jesper/mysql/enterprise/monitor/java
Using CLASSPATH:       /home/jesper/mysql/enterprise/monitor/apache-tomcat/bin/bootstrap.jar:/home/jesper/mysql/enterprise/monitor/apache-tomcat/bin/tomcat-juli.jar
Using CATALINA_PID:    /home/jesper/mysql/enterprise/monitor/apache-tomcat/temp/

The memory usage is now somewhat smaller:

19521 jesper    20   0 2847m 544m  15m S  2.0 13.8   1:51.61 java
19484 jesper    20   0 2407m 160m 9296 S  0.3  4.1   0:09.21 mysqld