MySQL Connector/Python on iOS Using Pythonista 3

One of the nice things about MySQL Connector/Python is that it is available in a pure Python implementation. This makes it very portable. Today I have been exploring the possibility of taking advantage of that to make MySQL Connector/Python available on my iPad.

There are a few Python interpreters available for the iPad. The one I will be discussing today is Pythonista 3, which has support for both Python 2.7 and 3.6. One of the things that caught my interest is that it comes with libraries to work with iOS, such as accessing the contacts and photos, as well as UI tools. It is a commercial program (AUD 15), but so far it looks to be worth the money. There are other options, and I hope to write about those another day.

MySQL Connector/Python is available from PyPI. This makes it easy to install the module in a uniform way across platforms. Unfortunately, Pythonista 3 does not support the pip command out of the box. Instead, there is a community contribution called StaSh that provides a shell-like environment from which you can execute pip. So, our first step is to install StaSh.

Coding with MySQL Connector/Python on an iPad

Install StaSh

StaSh is a project maintained by ywangd and can be found on GitHub. It is self-installing by executing a small Python program that can be found in the README.md file on the project repository page. You copy the source code into a new file in Pythonista 3 and execute it (using the “Play” button):

StaSh is installed by executing a downloaded script.

Then you need to restart Pythonista 3. At this point StaSh is installed.

Installing a PyPI Package Using StaSh pip

In order to be able to use the pip command through StaSh, you need to launch the launch_stash.py program which was installed in the previous step. The program can be found in the This iPad folder:

Open the launch_stash.py Script in This iPad

Open the program and use the “Play” button again to execute it. This creates the shell. You can do other things than use the pip command, but for the purpose of installing MySQL Connector/Python that is all that is required. The command is:
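
    pip install mysql-connector-python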

The console output is:

Using the pip command in StaSh to install MySQL Connector/Python.

That’s it. Now you can use the MySQL Connector/Python modules on your iPad just as in any other environment.

Example

To verify it is working, let's create a simple example. For this to work, you need a MySQL instance that allows external access, which likely requires opening port 3306 in your firewall.

A simple example that queries the city table in the world database for the city with ID = 130 is:
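
    # A sketch of the example described in the text; the values in connect_args
    # are placeholders that must be edited for your MySQL instance.
    import mysql.connector

    connect_args = {
        "user": "pyuser",         # placeholder user
        "password": "password",   # placeholder - see the warning below
        "host": "192.0.2.1",      # placeholder host name or IP address
        "port": 3306,
    }

    db = mysql.connector.connect(**connect_args)
    cursor = db.cursor()
    cursor.execute(
        "SELECT ID, Name, CountryCode, District, Population"
        " FROM world.city WHERE ID = %s",
        (130,),
    )
    print(cursor.fetchone())
    cursor.close()
    db.close()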

Edit the connect_args dictionary with the connection arguments required for your MySQL instance.

Warning: The connection arguments are included inside the source code here to keep the example simple. Do not do this in real programs. It is unsafe to store the password in the source code and it makes the program hard to maintain.

When you run it, the details of Sydney, Australia are printed to the console:

Example of querying the world.city table using MySQL Connector/Python in Pythonista 3.

This all looks great, but what about the X DevAPI? Unfortunately, there are some problems there.

X DevAPI

The X DevAPI is new in MySQL 8.0. It is included in MySQL Connector/Python in the mysqlx module. However, it does not work out of the box in Pythonista 3. When you try to execute the import mysqlx command, it fails:

The error shows that the Protobuf module bundled with Pythonista 3 uses the old comma syntax for exception handling. That syntax is not allowed in Python 3 (PEP 3110).

Update: I had tried to install a newer version of Protobuf using the StaSh pip command, but what I had not realised originally is that for the change to take effect, you need to restart Pythonista 3. Once that is done, the mysqlx module works as well. To install Protobuf in StaSh, launch StaSh in the same way as when MySQL Connector/Python was installed above and execute the pip command:
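
    pip install protobuf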

 

MySQL Shell: Built-In Help

It can be hard to recall all the details of how a program and API work. The usual way to handle that is to look at the manual or a book. Another – and in my opinion – nice way is to have built-in help, so you can find the information without switching between the program and a browser. This blog discusses how to obtain help when you use MySQL Shell.

MySQL Shell is a client that allows you to execute queries and manage MySQL through SQL commands and JavaScript and Python code. It is a second generation command-line client with additional WebOps support. If you have not installed MySQL Shell yet, then you can download it from MySQL’s community downloads, Patches & Updates in My Oracle Support (MOS) (for customers), or Oracle Software Delivery Cloud (for trial downloads). You can also install it through MySQL Installer for Microsoft Windows.
MySQL Shell: Get help for a table object

MySQL Shell has a very nice and comprehensive built-in help. There is of course the help output produced using the --help option if you invoke the shell from the command line:
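
    mysqlsh --help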

However, this help is not what makes MySQL Shell special. It is the help that you can see from within the shell when working in JavaScript or Python that is worth some extra attention. There is both support for general help and for obtaining help through objects.

General Help

The first layer of help is also known from the old mysql command-line client. A command consisting of a backslash and a ?, h, or help (\?, \h or \help) will show information about the general usage of MySQL Shell:

This shows which commands and global objects are available. But there is more: you can also get help about the usage of MySQL Shell, such as how to use the Admin API (for MySQL InnoDB Cluster), how to connect, or the SQL syntax. The search for relevant help topics is context sensitive; for example, searching for the word select returns different results depending on the mode and whether you are connected:

  • In Python or JavaScript mode without a connection, it is noted that information was found in the mysqlx.Table.select and mysqlx.TableSelect.select categories.
  • In Python or JavaScript mode with a connection, the SELECT SQL statement is included as a category.
  • In SQL mode the actual help text for the SELECT SQL statement is returned (requires a connection).

For example, to get help about the select method of a table object:
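
    \? mysqlx.Table.select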

To get help for the SELECT SQL statement:
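
    \? SELECT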

Note here how it is possible to get the help for the SELECT statement both from the Python (and JavaScript) as well as SQL modes, but the search term is different.

Tip: To get information about SQL statements, you must be connected to a MySQL instance.

When you use the JavaScript or Python modes there is another way to get help based on your object. Let's look at that.

Object Based Help

If you are coding in MySQL Shell using JavaScript or Python, it may happen that you need a hint about how to use a given object, for example a table object. You can use the method described in the previous section to get help by searching for mysqlx.Table; however, you can also access the help directly from the object.

All of the X DevAPI objects in MySQL Shell have a help() method that you can invoke to have help returned for the object. For example, if you have an object named city for the city table in the world schema, then calling city.help() returns information about the table object:
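
    # One way to obtain the city object and request the help (output not shown)
    \use world
    city = db.get_table('city')
    city.help()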

As you can see, the built-in help in MySQL Shell is a powerful resource. Make sure you use it.

MySQL NDB Cluster: Never Install a Management Node on the Same Host as a Data Node

In MySQL NDB Cluster, the management node (ndb_mgmd) is a lightweight process that among other things handles the configuration of the cluster. Since it is lightweight, it can be tempting to install it on the same host as one of the other nodes. However, if you want a high-availability setup, you should never install it on the same host as a data node (ndbd or ndbmtd). If you do, it can cause a total cluster outage where the cluster could otherwise have survived.

The first sign of trouble occurs when you start the management nodes. The following warning is printed to standard output:

To understand why this setup can cause a complete cluster shutdown, it is necessary first to review how a node failure is handled in MySQL NDB Cluster.

Node Failure Handling

When a data node fails, the cluster automatically evaluates whether it is safe to continue. A node failure can in this respect be either a data node crashing or a network outage meaning one or more nodes cannot be seen by the other nodes.

A clean node shutdown (such as when using the recommended STOP command in the management client) is not subject to the evaluation as the other nodes will be notified of the shutdown.

So, how does MySQL NDB Cluster decide whether the cluster can continue after a node failure or not? And if there are two halves, which one gets to continue?

Assuming two replicas (NoOfReplicas = 2), the rules are quite simple; they are described in detail in Pro MySQL NDB Cluster.

In short, data nodes are allowed to continue if the following conditions are true:

  • The group of data nodes holds all the data.
  • Either more than half the data nodes are in the group or the group has won the arbitration process.

For the group of data nodes to hold all the data, there must be one data node from each node group available. A node group is a group of NoOfReplicas nodes that share data. The arbitration process refers to the process of contacting the arbitrator (typically a management node) – the first half to make contact will remain online.

This is all a bit abstract, so let’s take a look at a couple of examples.

Examples

Consider a cluster with two data nodes and two management nodes. Most of the examples will have a management node installed on each of the hosts with the data nodes. The last example will by contrast have the management nodes on separate hosts.

The starting point is thus a cluster using two hosts each with one data node and one management node as shown in this figure:

A healthy cluster with two data nodes and the management nodes installed on the same hosts as the data nodes.

The green colour represents that the data node is online. The blue colour indicates that the management node with Node Id 49 is the arbitrator, and the yellow management node is on standby.

This is where the problem with the setup starts to show up. The arbitrator is the node that is involved when there is exactly half the data nodes available after a node failure. In that case, the data node(s) will have to contact the arbitrator to confirm whether it is OK to continue. This avoids a split-brain scenario where there are two halves with all the data; in that case it is imperative that one half is shut down or the data can start to diverge. The half that is first to contact the arbitrator survives, the other is killed (STONITH – shoot the other node in the head).

So, let’s look at a potential split-brain scenario.

Split-Brain Scenario

A split-brain scenario can occur when the network between the two halves of the cluster is lost as shown in the next figure:

NDB Cluster with network failure but the cluster survives.

In this case the network connection between the two hosts is lost. Since both nodes have all the data, arbitration is required to decide which one can continue. The data node with Id 1 can still contact the arbitrator as it is on the same host, but Node Id 2 cannot (it would need to use the network that is down). So, Node Id 1 gets to continue whereas Node Id 2 is shut down.

So far so good. This is what is expected. A single point of failure does not lead to a cluster outage. However, what happens if we consider a complete host failure instead of a network failure?

Host Failure

Consider now a case where there is a hardware failure on Host A or someone by accident pulls the power. This causes the whole host to shut down taking both the data and management node with it. What happens in this case?

The first thought is that it will not be an issue. Node Id 2 has all the data, so surely it will continue, right? No, that is not so. The result is a total cluster outage as shown in the following figure:

The failure of the host with the arbitrator causes complete cluster outage.

Why does this happen? When Host A crashes, so does the arbitrator management node. Since Node Id 2 does not on its own constitute a majority of the data nodes, it must contact the arbitrator to be allowed to remain online.

You may think it can use the management node with Node Id 50 as the arbitrator, but that will not happen: while handling a node failure, under no circumstances can the arbitrator be changed. The nodes on Host B cannot know whether they cannot see the nodes on Host A because of a network failure (as in the previous example) or because the nodes are dead. So, they have to assume the other nodes are still alive, or there would sooner or later be a split-brain cluster with both halves online.

Important: The arbitrator will never change while the cluster handles a node failure.

So, the data node with Id 2 has no other option than to shut itself down, and there is a total cluster outage. A single point of failure has caused a total failure. That is not the idea of a high availability cluster.

What could have been done to prevent the cluster outage? Let’s reconsider the case where the arbitrator is on a third independent host.

Arbitrator on Independent Host

The picture changes completely if the management nodes are installed on Hosts C and D instead of Hosts A and B. For simplicity the management node with Node Id 50 is left out as it is anyway just a spectator while the node failure is handled. In this case the scenario is:

The failure of the host with the arbitrator on a third host ensures the cluster remains online.

Here Node Id 2 can still contact the arbitrator. Node Id 1 is dead, so it will not compete to win the arbitration, and the end result becomes that Node Id 2 remains online. The situation is back where a single point of failure does not bring down the whole cluster.

Conclusion

If you want your cluster to have the best chance of surviving a problem with one of the hosts, never install the management nodes on the same hosts as the data nodes. One of the management nodes will also act as the arbitrator. Since the arbitrator cannot change while the cluster is handling a node failure, a crash of the host with the arbitrator will cause a complete cluster shutdown if arbitration is required.

When you consider what is a host, you should look at physical hosts. Installing the management node in a different virtual machine on the same physical host offers only little extra protection compared to installing it in the same virtual machine or on the same bare-metal host as a data node.

So, to conclude: make sure your management nodes are on a completely different physical host compared to your data nodes.
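
As an illustration, a minimal cluster configuration file (config.ini) along these lines keeps the management nodes away from the data node hosts; the host names and node IDs are placeholders chosen for this sketch:

    [ndbd default]
    NoOfReplicas = 2

    # Management nodes on their own hosts (C and D)
    [ndb_mgmd]
    NodeId   = 49
    HostName = host-c.example.com

    [ndb_mgmd]
    NodeId   = 50
    HostName = host-d.example.com

    # Data nodes on hosts A and B
    [ndbd]
    NodeId   = 1
    HostName = host-a.example.com

    [ndbd]
    NodeId   = 2
    HostName = host-b.example.com

    [mysqld]
    [mysqld]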

Want to Know More?

The book Pro MySQL NDB Cluster (published by Apress) by Mikiya Okuno and me includes lots of information about designing your cluster and how MySQL NDB Cluster works.

Disclaimer: I am one of the authors of Pro MySQL NDB Cluster.

MySQL Shell: Using External Python Modules

MySQL Shell is a great tool for working with MySQL. One of the features that makes it stand out compared to the traditional mysql command-line client is the support for JavaScript and Python in addition to SQL statements. This allows you to write code you otherwise would have had to write outside the client. I showed a simple example of this in my post about the instant ALTER TABLE feature in MySQL 8.0.12, where a Python loop was used to populate a table with 1 million rows. This blog will look further into the use of Python and more specifically external modules.

Using a custom table_tools module in MySQL Shell.

Using Standard Modules

The aforementioned loop that was used to populate a test table also showed another feature of MySQL Shell: you can use the standard Python modules just as you would do in any other Python script. For example, if you need to create UUIDs you can use the uuid module:
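
    import uuid

    # Generate a random UUID just as in any other Python program
    print(uuid.uuid4())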

This on its own is great, but what about your own modules? Sure, that is supported as well. Before showing how you can access your own modules, let’s create a simple module to use as an example.

Example Module

For the purpose of this blog, the following code should be saved in the file table_tools.py. You can save it in whatever directory you keep your Python libraries. A sketch of the module (the exact column list and output widths are arbitrary choices) is:
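
    def describe(table):
        """Print a DESC-like overview of an X DevAPI Table object."""
        # Work backwards from the table to the schema name and the session
        schema_name = table.get_schema().get_name()
        table_name = table.get_name()
        session = table.get_session()

        sql = """SELECT COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE, COLUMN_KEY,
                        COLUMN_DEFAULT, EXTRA
                   FROM information_schema.COLUMNS
                  WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?
                  ORDER BY ORDINAL_POSITION"""
        result = session.sql(sql).bind(schema_name).bind(table_name).execute()

        # Fixed-width output - the widths are arbitrary choices for this sketch
        fmt = "{0:<20s} {1:<20s} {2:<9s} {3:<5s} {4:<15s} {5:<15s}"
        print(fmt.format("Field", "Type", "Null", "Key", "Default", "Extra"))
        for row in result.fetch_all():
            values = ["NULL" if row[i] is None else str(row[i]) for i in range(6)]
            print(fmt.format(*values))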

The describe function takes a Table object from which it works backwards to get the session object. It then queries the information_schema.COLUMNS view to get the same information about the table as the DESC SQL command. Both the table and schema name can be found through the table object. Finally, the information is printed.

The example is overly simplified for general usage as it does not change the width of the output based on the length of the data, and there is no error handling whatsoever. However, this is on purpose to focus on the usage of the code from within MySQL Shell rather than on the code.

Note: The same code works in a MySQL Connector/Python script except that the rows are returned as mysqlx.result.Row objects. So, the loop printing the rows looks a little different:

With the function ready, it is time to look at how you can import it into MySQL Shell.

Importing Modules Into MySQL Shell

In order to be able to import a module into MySQL Shell, it must be in the path searched by Python. If you have saved table_tools.py into a location already searched, then that is it. However, a likely more common scenario is that you have saved the file in a custom location. In that case, you need to tell Python where to look for the files.

You modify the search path in MySQL Shell just as you would in a regular Python program. If you for example have saved the file to D:\MySQL\Shell\Python, then you can add that to the path using the following code:
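
    import sys
    sys.path.append(r'D:\MySQL\Shell\Python')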

If this is something you need as a one off, then it is fine just to modify the path directly in MySQL Shell. However, if you are working on some utilities that you want to reuse, it becomes tedious. MySQL Shell has support for configuration files where commands can be executed. The one for Python is named mysqlshrc.py (and mysqlshrc.js for JavaScript).

MySQL Shell searches for the mysqlshrc.py file in four locations including global locations as well as user specific locations. You can see the full list and the search order in the MySQL Shell User Guide. The user specific file is %APPDATA%\MySQL\mysqlsh\mysqlshrc.py on Microsoft Windows and $HOME/.mysqlsh/mysqlshrc.py on Linux and macOS.

You can do more than just changing the search path in the mysqlshrc.py file. However, for this example nothing else is needed.

Using the Module

Now that MySQL Shell has been set up to search in the path where your module is saved, you can use it in MySQL Shell. For example to get the description of the world.city table, you can use the following commands:
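
    \use world
    import table_tools
    table_tools.describe(db.city)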

The \use world command sets the default schema to the world database. As a side effect, it also makes the tables in the world database available as properties of the db object. So, it is possible to pass an object for the world.city table as db.city to the table_tools.describe() function.

That is it. Now it is your turn to explore the possibilities that have been opened with MySQL Shell.

MySQL 8.0: Persisted Variables

MySQL 8.0 introduced a new feature that allows you to persist configuration changes from inside MySQL. Previously you could execute SET GLOBAL to change the configuration at runtime, but you needed to update your MySQL configuration file in order to persist the change. In MySQL 8.0 you can skip the second step. This blog discusses how this works and how to back up and restore the configuration.

Using SET PERSIST to set a variable and the persisted_variables table in the Performance Schema to get a list of persisted variables.

Persisting Variables

You persist changes with either the SET PERSIST or SET PERSIST_ONLY statement. The difference is that SET PERSIST_ONLY only updates the configuration, whereas SET PERSIST essentially combines SET GLOBAL and SET PERSIST_ONLY.

Note: Some variables such as innodb_buffer_pool_instances can only use PERSIST_ONLY, i.e. it requires a restart to make the changes take effect. Still others, such as datadir can currently not be persisted.
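
For example (the variables used here are just for illustration):

    SET PERSIST max_connections = 500;
    SET PERSIST_ONLY innodb_buffer_pool_instances = 8;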

mysqld-auto.cnf and variables_info

The persisted variables are stored in the file mysqld-auto.cnf located in the data directory using the JSON format. It includes more information than just the persisted value. It also includes information such as who made the change and when. An example file is:

Since it is JSON, it is easy to reformat to make it easier to read, for example:
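
    import json

    # Pretty print mysqld-auto.cnf (any JSON formatter will do)
    with open('mysqld-auto.cnf') as f:
        print(json.dumps(json.load(f), indent=3))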

This information is also available from the performance_schema.variables_info table:
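
    -- A sketch; filtering on the source is just one option, and variables_info
    -- also has other columns such as VARIABLE_PATH, MIN_VALUE, and MAX_VALUE.
    SELECT VARIABLE_NAME, VARIABLE_SOURCE, SET_TIME, SET_USER, SET_HOST
      FROM performance_schema.variables_info
     WHERE VARIABLE_SOURCE IN ('DYNAMIC', 'PERSISTED');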

Notice that the source for join_buffer_size is DYNAMIC whereas the two other variables have the source set to PERSISTED. Why? After all, all three existed in the mysqld-auto.cnf file. DYNAMIC means that the variable was changed since the last restart, either using SET GLOBAL or SET PERSIST. Another thing to be aware of is that variables changed with SET PERSIST_ONLY will not show up in variables_info until after the next restart. I will soon get back to a way to get the variables that have been persisted in one way or another.

Backup and Restore

A simple way to back up the configuration is to copy the mysqld-auto.cnf file to a safe location. Similarly, you can restore the configuration by copying it back.

However, what if you want most of the configuration but not everything or you want to edit some of the values? In that case you need another way of exporting the configuration as you should not manually edit mysqld-auto.cnf.

Warning: Do not edit the mysqld-auto.cnf file manually. It should only be changed with SET PERSIST and SET PERSIST_ONLY. If there are any errors in the file, MySQL will refuse to start.

Fortunately, as it turns out, it is easy to export all persisted variables. The table performance_schema.persisted_variables includes all variables that have either been read from mysqld-auto.cnf or changed with SET PERSIST or SET PERSIST_ONLY since the last restart. The table includes the persisted values. For example:
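
    SELECT VARIABLE_NAME, VARIABLE_VALUE
      FROM performance_schema.persisted_variables;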

This can be used to create SET statements that can be used to recreate the configuration on another instance. For example:
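
    -- A sketch; string values may additionally need quoting (for example with QUOTE()).
    SELECT CONCAT('SET PERSIST_ONLY ', VARIABLE_NAME, ' = ',
                  VARIABLE_VALUE, ';') AS SetStatement
      FROM performance_schema.persisted_variables;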

Using the mysql command-line client, you can avoid the column names and table format by using the --skip-column-names and --batch options:
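
    mysql --skip-column-names --batch \
          --execute="SELECT CONCAT('SET PERSIST_ONLY ', VARIABLE_NAME, ' = ', VARIABLE_VALUE, ';') FROM performance_schema.persisted_variables" \
          --user=root --password \
          > config.sql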

Note: On Microsoft Windows ensure everything is on one line and the backslashes are removed.

Now the file config.sql contains an export of the persisted variables:

This example creates SET PERSIST_ONLY statements as those will work with all persistable variables. When you replay the SET statements, a restart of MySQL is required for the changes to take effect. If you want to use SET PERSIST where possible, then you need to take into consideration whether the variable supports SET PERSIST. A list of variables that require SET PERSIST_ONLY is included at the end.

The configuration can now be restored as:
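
    # Replay the exported SET statements, then restart MySQL so the
    # PERSIST_ONLY settings take effect
    mysql --user=root --password < config.sql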

PERSIST_ONLY Variables

As promised, I will conclude with a list of persistable variables that only support SET PERSIST_ONLY. As of MySQL 8.0.12 without any plugins installed, the variables are:

 

New Book: MySQL Connector/Python Revealed

When you write programs that use a database backend, it is necessary to use a connector/API to submit the queries and retrieve the result. If you are writing Python programs that use MySQL, you can use MySQL Connector/Python – the connector developed by Oracle Corporation.

Now there is a new book dedicated to the usage of the connector: MySQL Connector/Python Revealed, which is published by Apress. It is available in a softcover edition as well as an eBook (PDF, ePub, Mobi).

MySQL Connector/Python Revealed

The book is divided into four parts spanning from the installation to error handling and troubleshooting. The four parts are:

  • Part I: Getting Ready
    This part consists of a single chapter that helps you to get up and running. The chapter includes an introduction to MySQL Connector/Python and getting the connector and MySQL Server installed.
  • Part II: The Legacy APIs
    The legacy APIs include the connector module that implements PEP249 (the Python Database API). The discussion of the mysql.connector module spans four chapters. In addition to query execution, the use of connection pools and the failover feature is covered. Finally, there is also a discussion about the C Extension.
  • Part III – The X DevAPI
    One of the big new features in MySQL 8 is the MySQL Document Store including the X DevAPI. It allows you to use MySQL through the NoSQL API as well as by executing SQL queries. The NoSQL API includes support both for working with MySQL as a document store where the data is stored in JSON documents and with SQL tables. Part III includes three chapters that are dedicated to the X DevAPI.
  • Part IV – Error Handling and Troubleshooting
    The final part of the book goes through the two important topics of error handling and troubleshooting, including several examples of common errors and how to resolve them.

The book comes with 66 code examples that are available for download from Apress’ GitHub repository. See the book’s homepage for instructions.

MySQL Connector/Python Revealed is available from several sources, including online bookshops. The following table shows some of the places where you can buy the book. (The table is current as of 13 August 2018; changes to the available formats may happen in the future.)

Shop              Softcover   eBook
Apress            Yes         PDF and ePub (both DRM free)
Amazon            Yes         Mobi (Kindle)
Barnes & Nobles   Yes         –
Saxo              Yes         –

InnoDB Progress Information

MySQL has since version 5.7 had support for progress information for some queries. As promised in my previous post, I will here discuss how you can use that to get progress information for ALTER TABLE on InnoDB tables.

Background and Setup

Progress information is implemented through the Performance Schema using the stage events. In version 8.0.12 there are currently seven stages that can provide this information for ALTER TABLE statements on InnoDB tables. In MySQL 8, it is easy to list the stages capable of reporting progress information by using the setup_instruments Performance Schema table:
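
    -- A sketch; the PROPERTIES column flags the stages that report progress.
    SELECT NAME, ENABLED, TIMED
      FROM performance_schema.setup_instruments
     WHERE NAME LIKE 'stage/innodb/alter%'
       AND PROPERTIES LIKE '%progress%';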

This also shows how the setup_instruments table in MySQL 8 has some additional information about the instruments such as properties and documentation (not included in the output). Adding this information is still work in progress.

MySQL 5.7 does not provide as easy a way to obtain the instruments providing progress information. Instead you need to consult the reference manual. However, the principle in using the feature is the same.

As you can see, all of the instruments are enabled and timed by default. What is not enabled by default, however, is the consumer that can make the information available:
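
    -- A sketch using the sys schema helper function discussed below.
    SELECT NAME, ENABLED,
           sys.ps_is_consumer_enabled(NAME) AS EnabledWithHierarchy
      FROM performance_schema.setup_consumers
     WHERE NAME LIKE 'events_stages%';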

Since the consumers form a hierarchical system, the sys schema function ps_is_consumer_enabled() is used to show whether the consumer is enabled taking the whole hierarchy into consideration.

In order to use the progress information, you need to enable the events_stages_current consumer. This is the consumer that is responsible for keeping the performance_schema.events_stages_current table up to date, i.e. recording the current (or latest, if there is no current) stage for each thread. With the default Performance Schema settings, the rest of the hierarchy is enabled. To enable events_stages_current and verify it will be consuming instruments, you can use the following queries:
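
    UPDATE performance_schema.setup_consumers
       SET ENABLED = 'YES'
     WHERE NAME = 'events_stages_current';

    SELECT NAME, ENABLED,
           sys.ps_is_consumer_enabled(NAME) AS EnabledWithHierarchy
      FROM performance_schema.setup_consumers
     WHERE NAME = 'events_stages_current';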

That is it. Now you can monitor the progress of the queries that use the stages with progress information.

Note: The more parts of the Performance Schema that are enabled and the more fine-grained the monitoring, the more overhead. Stages are not the worst with respect to overhead; nevertheless it is recommended you keep an eye on the effect of enabling the events_stages_current consumer.

Monitoring Progress

The base for monitoring the progress information is the performance_schema.events_stages_current table. There are two columns of interest for this discussion:

  • WORK_COMPLETED: The amount of work that is reported to have been completed.
  • WORK_ESTIMATED: The estimated amount of work that needs to be done.

For InnoDB ALTER TABLE the estimated amount of work is for the entire operation. That said, the estimate may be revised during the process, so it may happen that the percentage you calculate decreases as time goes. However, in general the percentage (100% * WORK_COMPLETED/WORK_ESTIMATED) will increase steadily until the operation completes at 100%.

To learn more about how the progress information works, the reference manual sections on the Performance Schema stage event tables and on monitoring ALTER TABLE progress for InnoDB tables are recommended reading.

For now, let’s look at an example.

Example

For the example, the salaries table in the employees sample database will be used. The table is sufficiently large that it will be possible to query the progress while adding a column using the INPLACE algorithm. As discussed in MySQL 8.0.12: Instant ALTER TABLE, it is possible to add a column instantly, but for the purpose of this example, the INPLACE algorithm illustrates the progress information feature better. The query that will be executed is:
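
    -- The column name is a placeholder; ALGORITHM=INPLACE forces a table rebuild
    -- rather than the new instant algorithm.
    ALTER TABLE employees.salaries
      ADD COLUMN test_column INT NOT NULL DEFAULT 0,
      ALGORITHM=INPLACE;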

The performance_schema.events_stages_current table can be joined with the performance_schema.events_statements_current to show the query and progress. For example:
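
    -- A sketch joining the statement and its current stage; the percentage column
    -- is derived from WORK_COMPLETED and WORK_ESTIMATED.
    SELECT stmt.THREAD_ID, stmt.SQL_TEXT,
           stage.EVENT_NAME AS CurrentStage,
           stage.WORK_COMPLETED, stage.WORK_ESTIMATED,
           ROUND(100 * stage.WORK_COMPLETED / stage.WORK_ESTIMATED, 2) AS CompletedPct
      FROM performance_schema.events_statements_current stmt
           INNER JOIN performance_schema.events_stages_current stage
                   ON stage.THREAD_ID = stmt.THREAD_ID
                  AND stage.NESTING_EVENT_ID = stmt.EVENT_ID
     WHERE stage.WORK_ESTIMATED > 0;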

There is another way though. Instead of using the performance_schema.events_stages_current table directly, an easier way is to use the sys.session view. This is an advanced process list that includes much more information than the usual SHOW PROCESSLIST statement, including progress information. The performance of sys.session has been improved by more than an order of magnitude in MySQL 8 through the addition of indexes to the Performance Schema tables, making it highly useful.

Querying the sys.session view for sessions showing progress information while the ALTER TABLE is in progress returns an output similar to the following example:
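
One way to query it (a sketch; the columns selected are a choice made here) is:

    SET @sys.statement_truncate_len = 85;

    SELECT thd_id, conn_id, db, current_statement, statement_latency,
           progress, current_memory, program_name
      FROM sys.session
     WHERE progress IS NOT NULL;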

In the example, the @sys.statement_truncate_len user variable is set to 85. By default the sys schema ensures the current statement is at most 64 characters long. In order to avoid truncation in this case, the truncate length is increased to 85.

The example output shows that the progress is at around 49%. It is important to note that this is an estimate and not an exact number. The ALTER TABLE is performing a merge sort at the time, and the query has been running for 4.22 seconds.

A couple of other interesting columns are included. It can be seen that the connection is using 464KiB of memory at the time. In MySQL 8 memory instrumentation is enabled by default (in MySQL 5.7 you need to enable it yourself). Additionally, the name of the program executing the query is MySQLWorkbench, that is, the query originates from MySQL Workbench.

So, next time you plan a large ALTER TABLE operation, consider enabling the events_stages_current consumer, so you can follow the progress.

Monitoring NDBCluster Copying Alter Progress

MySQL NDB Cluster has great support for online (inplace) schema changes, but it is still sometimes necessary to perform an offline (copying) ALTER TABLE. These are relatively expensive to make as the entire table is copied into a new table which eventually replaces the old table.

One example where a copying ALTER TABLE is required is when upgrading from MySQL NDB Cluster 7.2 or earlier to MySQL NDB Cluster 7.3 or later. The format used for temporal columns changed between these versions (corresponding to MySQL Server 5.5 to 5.6). In order to take advantage of the new temporal format, a table rebuild is required.

Note: Support for the old temporal format has been removed in MySQL 8.0. So, you must rebuild your tables to the new format before an upgrade is possible. There is at the time of writing no MySQL NDB Cluster release based on MySQL Server 8.0.

Schematic representation of a copying ALTER TABLE

For long running operations, it can be useful to monitor the progress. There is no built-in way to do this like there is for InnoDB in MySQL 5.7 and later (see the blog InnoDB Progress Information), however the ndbinfo schema can give some information about the progress.

The ndbinfo schema is a virtual schema with views that show information from the data nodes. You can argue it is MySQL NDB Cluster’s answer to the Performance Schema. The ndbinfo schema was introduced in MySQL NDB Cluster 7.1 more than eight years ago and has steadily seen more and more information becoming available.

One of these changes arrived in MySQL NDB Cluster 7.4 where the memory_per_fragment view was added. This view shows detailed information about the memory used per fragment (in most cases the same as partitions). This can also be used to get an estimate of the progress of a copying ALTER TABLE.

As mentioned, a copying ALTER TABLE is similar to creating a new table with the new schema (which may potentially be the same as the old schema), then inserting all of the data from the old table into the new one. At the end, the two tables are swapped and the old table is dropped.

Note: Remember that a copying ALTER TABLE is an offline operation. Any changes made to the table during the operation may be lost! Make sure the table is read-only while the ALTER TABLE is executing.

The temporary table (that later becomes the real table) is an NDBCluster table like other user-created tables. This means the table will show up in ndbinfo.memory_per_fragment as a normal table, just with a special table name.

Temporary tables are named like #sql-7f4b_4 where the part after the hyphen is generated based on the operating system process ID of the mysqld process and the connection ID of the connection executing the ALTER TABLE. The schema name for the temporary table is the same as for the original table. In the example the process ID is 32587 (7f4b in hexadecimal notation) and the connection ID is 4.

As an example consider a rebuild of the db1.t1 table. In this case the fully qualified name (the name used by NDB Cluster instead of the normal table name) is db1/def/t1, i.e. the schema name and table name with /def/ between them. You can choose to create the fully qualified name for the temporary table as described above. An alternative, if you just have one concurrent table rebuild in the schema is to just look for the fully qualified name matching db1/def/#sql-%.

So, you can use the ndbinfo.memory_per_fragment table to see how much memory is allocated per fragment of the temporary table compared to the original table. For example:
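
    -- A sketch of such a query; the node ID, block instance, and fragment number
    -- columns are left out as described below.
    SELECT fq_name, parent_fq_name, type,
           fixed_elem_alloc_bytes, fixed_elem_free_bytes,
           var_elem_alloc_bytes, var_elem_free_bytes,
           hash_index_alloc_bytes
      FROM ndbinfo.memory_per_fragment
     WHERE fq_name LIKE 'db1/def/%'
        OR parent_fq_name LIKE 'db1/def/%';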

The columns with information about the node ID, block instance, and fragment number have been left out. This is why it looks like there are duplicate rows. It is also worth noticing that there are several “child tables” for the indexes and a blob column.

There are three memory columns. The first is for the fixed size column format, the second for the variable width columns format, and the last for hash indexes.

MySQL NDB Cluster supports two storage formats for the columns. The fixed format uses less memory for columns that are fixed width in nature (such as integers), however variable (called DYNAMIC in CREATE TABLE and ALTER TABLE statements) is more flexible. The variable/dynamic column format is also the only one supported when adding a column inplace (online). See also the manual page for CREATE TABLE for more information about the column format.

The hash memory is the memory used by hash indexes (for the primary key and unique indexes).

For the fixed and variable element memory usages there are both allocated and free bytes. Here the free bytes are used as a measure of the amount of fragmentation. A copying ALTER TABLE defragments the table, so it is necessary to take the fragmentation into consideration when estimating the progress. In reality it is more complicated than the query suggests, so the memory values in the query result will not end up matching 100%; however, in most cases it should be a reasonable estimate.

You can also choose to aggregate the memory, for example:
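
    -- A sketch of an aggregate query; the "used" values subtract the free bytes to
    -- account for fragmentation as discussed above.
    SELECT IF(fq_name LIKE 'db1/def/#sql-%'
              OR parent_fq_name LIKE 'db1/def/#sql-%',
              'New (copy)', 'Old') AS WhichTable,
           sys.format_bytes(SUM(fixed_elem_alloc_bytes - fixed_elem_free_bytes)) AS FixedUsed,
           sys.format_bytes(SUM(var_elem_alloc_bytes - var_elem_free_bytes)) AS VarUsed,
           sys.format_bytes(SUM(hash_index_alloc_bytes)) AS HashIndexAlloc
      FROM ndbinfo.memory_per_fragment
     WHERE fq_name LIKE 'db1/def/%'
        OR parent_fq_name LIKE 'db1/def/%'
     GROUP BY WhichTable;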

This aggregate query also uses the sys schema function format_bytes() to convert the number of bytes into human readable numbers. The sys schema is installed by default for MySQL NDB Cluster 7.5 and later and is available from MySQL’s repository on GitHub for MySQL NDB Cluster 7.3 and 7.4.

This way of estimating the progress of a copying ALTER TABLE is not perfect, but at least it can give an idea of how the operation progresses.

What Does I/O Latencies and Bytes Mean in the Performance and sys Schemas?

The Performance Schema and sys schema are great for investigating what is going on in MySQL, including investigating performance issues. In my work in MySQL Support, I have several times heard questions about whether a peak in the InnoDB Data File I/O – Latency graph in MySQL Enterprise Monitor (MEM) or some values from the corresponding tables and views in the Performance Schema and sys schema are cause for concern. This blog will discuss what these observations mean and how to use them.
The MEM InnoDB File I/O Graphs showing a peak in latency and bytes at 12:51.

The Tables and Views Involved

This blog will look into three sources in the Performance Schema for I/O latencies, so let’s first take a look at those tables. The three Performance Schema tables are:

  • events_waits_summary_global_by_event_name: with the event name set to wait/io/table/sql/handler or wait/io/file/%. This is the table used for the waits_global_by_latency and wait_classes_global_by_% views in the sys schema.
  • table_io_waits_summary_by_table: this is the table used for the schema_table_statistics% views in the sys schema.
  • file_summary_by_instance: this is the table used for the io_global_by_file_by% views in the sys schema.

These are also the sources used in MySQL Enterprise Monitor for the InnoDB Data File I/O graphs shown above. Let’s take a look at an example of each of the tables.

events_waits_summary_global_by_event_name

The events_waits_summary_global_by_event_name table has aggregate data for wait events grouped by the event name. For the purpose of this discussion it is the table and file wait/io events that are of interest. An example of the data returned is:
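
    -- Three ways to obtain the same information (a sketch; the output is not shown).
    -- 1. The raw Performance Schema values (timings in picoseconds)
    SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
      FROM performance_schema.events_waits_summary_global_by_event_name
     WHERE EVENT_NAME = 'wait/io/table/sql/handler'
        OR EVENT_NAME LIKE 'wait/io/file/%'
     ORDER BY SUM_TIMER_WAIT DESC;

    -- 2. The same values with the timings made human readable
    SELECT EVENT_NAME, COUNT_STAR,
           sys.format_time(SUM_TIMER_WAIT) AS TotalLatency
      FROM performance_schema.events_waits_summary_global_by_event_name
     WHERE EVENT_NAME = 'wait/io/table/sql/handler'
        OR EVENT_NAME LIKE 'wait/io/file/%'
     ORDER BY SUM_TIMER_WAIT DESC;

    -- 3. The sys schema view (formatted and ordered by total latency)
    SELECT * FROM sys.waits_global_by_latency;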

These three queries show the same data, just obtained and displayed in different ways.

In the result there are two groups of events. The wait/io/table events (the wait/io/table/sql/handler is the only event of this group which is why it can be listed explicitly) and the wait/io/file group.

The table events are for accessing data in tables. It does not matter whether the data is cached in the buffer pool or is accessed on disk. In this table and view, no distinction is made between different types of access (read, write, etc.).

The file events are, as the name suggests, for actually accessing files. There is one file event per file type. For example, in the output there are the wait/io/file/innodb/innodb_log_file event for accessing the InnoDB redo log files, the wait/io/file/innodb/innodb_data_file event for accessing the InnoDB data files themselves, the wait/io/file/sql/binlog event for the binary log files, etc.

In the second query, all of the timings are wrapped in the sys.format_time() function. The timings returned by the Performance Schema are in picoseconds (10^-12 second) which are somewhat hard for us humans to read. The sys.format_time() function converts the picoseconds to human readable strings. When you sort, however, make sure you sort by the original non-converted values.

Tip: Use the sys.format_time() function to convert the picoseconds to a human readable value, but do only so for the returned row; sort and filter by the original values. The Performance Schema always returns timings in picoseconds irrespective of the timer used internally for the event.

The sys schema by default returns the timing as human readable values. If you need the values in picoseconds prefix the table name with x$, for example sys.x$waits_global_by_latency. The sys schema includes an ORDER BY clause in most views. For the waits_global_by_latency view, the default ordering is by the total latency, so there is no need to add an ORDER BY clause in this example.

table_io_waits_summary_by_table

The table_io_waits_summary_by_table Performance Schema table and the schema_table_statistics% sys schema views are related to the wait/io/table/sql/handler event just discussed. These provide information about the amount of time spent per table. Unlike querying wait/io/table/sql/handler in the events_waits_% tables, they split the time spent into whether it is used for reads, writes, fetch, insert, update, or delete. The read and write columns are the aggregates for the corresponding read and write operations, respectively. Since fetch is the only read operation, the read and fetch columns will have the same values.

The table and view show the table I/O, i.e. the access to table data irrespective of whether the data is accessed in-memory or on disk. This is similar to the wait/io/table/sql/handler event. An example of the result of querying the table and view for the employees.salaries table is:

In this case it shows that there has been mostly writes – inserts – for the table. The sys schema view effectively joins on the performance_schema.file_summary_by_instance for the read columns, so for the schema_table_statistics view fetch and read are not synonyms.

So, what is it the file_summary_by_instance table shows that is different from the “table I/O” that has been the topic of the first two tables? Let's see.

file_summary_by_instance

Unlike the two previous tables, the file_summary_by_instance shows how much time is spent on actual file I/O and how much data is accessed. This makes the file_summary_by_instance table and the corresponding sys schema views very useful for determining where time is spent doing disk I/O and which files have the most data accesses on disk.

An example of using the Performance Schema and two of the sys schema views is:
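
    -- A sketch of the queries (output not shown); the file name pattern is an example
    -- of the InnoDB data file for the employees.salaries table on Microsoft Windows.
    SELECT FILE_NAME, COUNT_READ,
           sys.format_bytes(SUM_NUMBER_OF_BYTES_READ) AS BytesRead,
           COUNT_WRITE,
           sys.format_bytes(SUM_NUMBER_OF_BYTES_WRITE) AS BytesWritten,
           sys.format_time(SUM_TIMER_WAIT) AS TotalLatency
      FROM performance_schema.file_summary_by_instance
     WHERE FILE_NAME LIKE '%\\\\employees\\\\salaries.ibd';

    SELECT * FROM sys.io_global_by_file_by_latency
     WHERE file LIKE '%salaries.ibd';

    SELECT * FROM sys.io_global_by_file_by_bytes
     WHERE file LIKE '%salaries.ibd';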

This example is from Microsoft Windows, and as always when backslashes are in play, it is fun to try to determine the appropriate number of backslashes to use. When specifying the file name with LIKE, you need four backslashes per backslash in the file name; when using = you need two backslashes.

Again, the values are split into reads and writes (though not as detailed as before with fetch, insert, update, and delete – that is not known at the level where the file I/O happens). The miscellaneous values include everything that is not considered reads or writes; this includes opening and closing the file.

The sys schema queries not only have formatted the timings, but also the path and the bytes. This has been done using the sys.format_path() and sys.format_bytes() functions, respectively.

From the output, it can be seen that even though no rows were ever fetched (read) from the employees.salaries table (as found in the previous output), there has still been some read file I/O. This is what the sys.schema_table_statistics view reflected.

So, what does all of this mean? The graph in MySQL Enterprise Monitor showed that there were six seconds of file I/O latency for the InnoDB data files. Is that bad? As often with these kinds of questions, the answer is: “it depends”.

What to Make of the Values

In this case the graphs in MySQL Enterprise Monitor show a peak for the latency and bytes used doing I/O on the InnoDB data files. This is actual disk I/O. But what exactly does that mean, and should the alarm sound?

Let’s first refresh our memory on how the graphs looked:
The MEM InnoDB File I/O Graphs showing a peak in latency and bytes at 12:51.

If you are not using MySQL Enterprise Monitor, you may have similar graphs from your monitoring solution, or you may have obtained latency and bytes values from the tables and views discussed in this blog.

The latency graph shows that we have done six seconds of I/O. What does that mean? It is the aggregate I/O during the period the data was collected. In this case, the data is plotted for every minute, so in the one minute around 12:51, six seconds out of the 60 were spent doing I/O. Now, the six seconds suddenly do not sound so bad. Similarly for the bytes: around 4.6MiB of data was read or written.

In general, the values obtained either from the monitoring graphs or from the underlying tables cannot on their own be used to conclude whether there is a problem or not. They just show how much I/O was done at different times.

The same applies to the values from the Performance Schema. On their own they do not tell much. You can almost say they are too neutral – they just state how much work was done, not whether it was too much or too little.

A more useful way to use this data is when a problem is reported. This can be that a system administrator reports the disks are 100% utilized or that end users report the system is slow. Then, you can go and look at what happened. If the disk I/O was unusually high at that point in time, then that is likely related, and you can continue your investigation from there.

There are more reports in MySQL Enterprise Monitor, both as time series graphs and as point-in-time snapshots. The point-in-time snapshots often use the sys schema views but allow sorting. An example is the Database File I/O reports:

MEM's Database File I/O Report

MySQL Workbench also provides performance reports based on the sys schema. The equivalent to the previous report is the Top File I/O Activity Report:

MySQL Workbench's Top File I/O Activity Report

The MySQL Workbench performance reports also allow you to export the report or copy the query used to generate the report, so you can execute it manually.

With respect to the wait/io/table/sql/handler events, remember that I/O here does not mean disk I/O, but table I/O. So, all it means when time accumulates for these events – including when looking at the per-table values – is that the data in the table is used. There are also per-index values in the table_io_waits_summary_by_index_usage Performance Schema table and the schema_index_statistics sys schema view, which have the same meaning. Whether a given usage is too high depends on many things. However, it can again be useful when investigating what is causing problems.

For the example data from this blog, it was triggered by loading the employees sample database. So, there was no problem. If you want to put data into your database, it must necessarily be written to the data file, otherwise it will not be persisted. However, if you think the data import took too long, you can use the data as part of your investigation on how to make the import faster.

The conclusion is that you should not panic if you see the I/O latencies peak. On their own they just mean that more work was done during that period than usual. Without context a peak is no worse than a dip (which could indicate something is preventing work from being done or that there is a natural slow period). Instead use the I/O latencies and bytes together with other observations to understand where MySQL is spending time and for which files data is being read and written.

References

I recommend you look at the following references if you want to understand more about the Performance Schema tables and sys schema views discussed in this blog:

The GitHub repository for the sys schema includes all the definitions of the views (and other objects). Since these are written in plain SQL, they are very useful to see in more depth where the data is coming from. The GitHub website allows you to browse through each of the files. Each sys schema object is defined in its own file.