MySQL NDB Cluster 7.6: Fast Import with ndb_import

A common problem when you need to set up a new database instance is to import the initial data. This can be a particular issue if you are converting an existing database, so there is a large amount of data to import. In MySQL NDB Cluster 7.6, which was just announced as generally available (GA) this week, there is a new tool to help with such imports: ndb_import.

The ndb_import utility takes a CSV file and imports the data into a table. (While C stands for comma, the tool is not limited to comma-separated files; in fact, tab is the default delimiter.) But hey, does that not sound familiar? It sure does. MySQL Server has "forever" included the mysqlimport utility and the LOAD DATA statement. So why do we need another utility, and what does ndb_import do that we cannot already do with the existing tools?

The big advantage of ndb_import is that it is a native NDB tool. It does not connect to any of the SQL nodes; instead it connects directly to the data nodes as an NDB API program. This alone allows you to bypass the overhead of connecting to an SQL node and parsing SQL statements. However, it is even better than that. The ndb_import tool can also take advantage of the parallelism offered by MySQL NDB Cluster. This means that in the end, the overall performance is expected to be quite a bit better than loading the data using mysqlimport or LOAD DATA.

Note: The speed-up will depend on your setup. A cluster with all nodes in a VM on a laptop may not experience any significant speed-up compared to other methods of importing data.

So, how does this work? Let us look at a simple example. We will import the following data from the t1.txt file:
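A hypothetical sample matching the description (tab-separated fields, a header line, and a leading id column so each row has the four values referenced later):

    id	first_name	surname	birthday
    1	John	Doe	1945-10-15
    2	Jane	Doe	1947-08-11
    3	Mary	Smith	1974-12-04
    4	Mike	Jones	1980-01-30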

The data includes an id plus the first name, surname, and birthday for four persons, separated by \t, and it will be imported into the db1.t1 table:
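A table definition along these lines matches the data (a sketch; the column names and types are assumptions):

    mysql> CREATE TABLE db1.t1 (
               id INT UNSIGNED NOT NULL PRIMARY KEY,
               first_name VARCHAR(20) NOT NULL,
               surname VARCHAR(20) NOT NULL,
               birthday DATE NOT NULL
           ) ENGINE=NDBCluster;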

Note: You still need to create the table through an SQL node.

With the data and table, the data can be imported using the following ndb_import command:
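A minimal sketch of such a command, assuming the management node listens on localhost:1186 (the option values are examples, not requirements):

    shell$ ndb_import --ndb-connectstring=localhost:1186 db1 t1.txt \
               --connections=2 --db-workers=4 \
               --fields-terminated-by="\t" --ignore-lines=1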

The arguments in the first line of the command tell how to connect to the management node and give the schema name and the name of the file to import data from. As with mysqlimport, the table name is derived from the file name by using the basename – t1 in this case – as the table name.

The --connections and --db-workers options define the parallelism of the job. The --connections option is equivalent to the ndb-cluster-connection-pool option that can be set for SQL nodes and defines how many connections the ndb_import process creates to the data nodes. There must be at least as many free API node slots available as the number of connections requested; otherwise the command will hang while waiting for enough connections to become available and eventually fail. The --db-workers option specifies the number of threads ndb_import will use per data node.

The remaining options are equivalent to the options available for mysqlimport, LOAD DATA, and SELECT … INTO OUTFILE. They specify the format of the data inside the source file and how many rows to skip – one in this case due to the header.

If you need to load data into several tables in the same schema, you can specify a file name for each table, and they will all be processed.

There are many more options available. I recommend checking out the reference manual page for ndb_import for more information.

The output of the command is:

This includes some metadata about the job, such as the table and file names, as well as the total number of rows imported and how long it took.

What happens if something goes wrong? If we had thought the data was comma-separated and had used --fields-terminated-by=",", an error would have occurred:

Notice how the fifth line of the output says "job-1 rejected 1 rows (limit 0), see t1.rej". Each input file is considered a job, so "job-1" refers to the first (and in this case only) input file. It rejected 1 row, and the limit refers to the --rejects option, which specifies how many permanent errors must be encountered before failing the whole job. The default is to fail on the first error. Finally, "see t1.rej" tells us to look at the t1.rej file (created by ndb_import) for more information about the failed job:

The t1.rej file is a tab-delimited file with information for each failed row. In this case, the error is that there are too few values: one value was found (because there are no commas in the data and we specified the wrong delimiter) but four values were expected. If --rejects had been set to 3, all four rows would have been included in t1.rej.

The ndb_import binary is included with the NDB Cluster installation and can be found in the bin directory. The next step is for you to try it out. Good luck importing data into MySQL NDB Cluster 7.6.

MySQL NDB Cluster Backups

Today – 31 March – is world backup day, so I thought I would write a little about backups in MySQL NDB Cluster.

Just because NDB Cluster offers built-in redundancy and high availability does not mean backups are not important. They are – as always and as for everything in software. The redundancy does not protect against user errors (has anyone ever executed DROP TABLE or DROP SCHEMA by accident?), nor does it protect against a natural disaster, fire, or any other disaster hitting the data center. The same applies to high availability.

In short, if the data is in any way remotely important to you, ensure you have a backup. Furthermore, a backup is not worth any more than your ability to restore it. If a fire rages through your data center, it does not help to have the best backup in the world hosted in that data center.

So, before actually creating and restoring a backup, let us look at two best practices when it comes to backups.

Best Practices

The best practices mentioned here are by no means unique to MySQL NDB Cluster, or even to databases. Nor are they exhaustive; they are meant as guidelines to keep in mind when designing your backups.

Use a Backup Method that Works with Your Product

It sounds pretty obvious – why would anyone use a backup solution that does not work? Obviously no one does that on purpose, but unfortunately it is all too common that no one has checked whether the backup solution is appropriate.

With respect to MySQL NDB Cluster, I can mention that rsync of the NDB file system will not work, nor will any other method of creating a binary backup from the file system (including MySQL Enterprise Backup). Using mysqldump does not work either, unless you keep the cluster read-only, for example by putting the cluster into "single user mode" and locking all tables.

When you test your backups, make sure that you make changes to the data while the backup is running. A backup method may work when the database is idle but not when concurrent writes are occurring.

In a little bit, I will show the recommended way to create an online backup in NDB Cluster.

Ensure You Can Restore Your Backups

There are two parts to this: can you retrieve your backups even in the worst case scenario, and do you know how to restore your backups?

You cannot assume that a backup kept locally on the same host, or even in the same data center, will be available when you need it. Think in terms of a major disaster such as the entire data center being gone. Is it likely to happen? Fortunately not, but from time to time really bad things happen: fires, earthquakes, flooding, etc. Even if it is a once-a-century event, do you want to run the risk?

So, ensure you are copying your backups off site. How far away you need to copy them depends on several factors, but at least ensure they are not in the same suburb.

The other aspect is that too often, the first time a restore is attempted is when there is a total outage and everyone is in panic mode. That is not the optimal time to learn about the restore requirements and gotchas. Make it routine to restore backups. It serves two purposes: it validates your backups – see also the previous best practice – and it validates your steps to restore a backup.

Creating a Backup

It is very easy to create an online backup of a cluster using MySQL NDB Cluster, as the functionality is built-in. In the simplest of cases, it is as trivial as executing the START BACKUP command in the ndb_mgm client, for example:
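A minimal sketch, assuming the management node listens on localhost port 1186 (ndb_mgm's -e option executes a single command and exits):

    shell$ ndb_mgm --ndb-connectstring=localhost:1186 -e "START BACKUP"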

Each backup has a backup ID. In the above example, the ID is 1 (“Backup 1 started from …”). When a backup is started without specifying a backup ID, MySQL NDB Cluster determines what the previously highest used ID is and adds one to that. However, while this is convenient, it does mean the backup ID does not carry any information other than the sequence the backups were made.

An alternative is to explicitly request a given ID. Supported IDs are 1 through 4294967294. One option is to choose the ID to be YYmmddHHMM, where YY is the year, mm the month, dd the day, HH the hours in 24-hour format, and MM the minutes. Zero-pad each number if the value is less than 10. This makes the backup ID reflect when the backup was created.

To set the backup ID explicitly, specify the requested ID as the first argument after START BACKUP, for example (using the interactive mode of ndb_mgm this time):
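A sketch of the interactive session (the connect string is again an assumption):

    shell$ ndb_mgm --ndb-connectstring=localhost:1186
    ndb_mgm> START BACKUP 1803311603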

Here the backup ID is 1803311603 meaning the backup was created on 31 March 2018 at 16:03.

There are other arguments that can be used, for example to specify whether the snapshot time (where the backup is consistent) should be at the start or the end (the default) of the backup. The HELP START BACKUP command can be used to get online help for the START BACKUP command.

Remember that START BACKUP only backs up NDBCluster tables. Use mysqldump, mysqlpump, or another backup program to back up the schema and/or non-NDBCluster tables.

Restoring a Backup

It is a little more complicated to restore a backup than to create one, but once you have tried it a few times, it should not pose any major issues.

The backups are restored using the ndb_restore program. It is an NDB API program that supports both restoring the schema and data. It is recommended to perform the restore in three steps:

  1. Restore the schema.
  2. Restore the data with indexes disabled.
  3. Rebuild the indexes.

In MySQL NDB Cluster 7.4 and earlier, restoring the schema with ndb_restore did not change the number of partitions to the default of the cluster you restore to. If you have not yet upgraded to MySQL NDB Cluster 7.5, it is recommended to restore the schema from a mysqldump or mysqlpump backup if the cluster does not have the same number of data nodes and LDM threads.

The restore examples assume you are restoring into an empty cluster. There is also support for partial restores and renaming tables, but that will not be discussed here. Let us take a look at the three steps.

Step 1: Restore the Schema

The schema is restored using the --restore_meta option, for example:
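A sketch of the command, assuming the backup with ID 1803311603 is located under /backups/cluster (the path is an assumption; NDB creates backups in a BACKUP/BACKUP-<id> directory under the configured BackupDataDir):

    shell$ ndb_restore --ndb-connectstring=localhost:1186 --nodeid=1 \
               --backupid=1803311603 \
               --backup_path=/backups/cluster/BACKUP/BACKUP-1803311603 \
               --restore_meta --disable-indexes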

The arguments used here are:

  • --ndb-connectstring=localhost:1186. The host and port number where to connect to the management node(s). This example is from a test cluster with all nodes on the same host. In general you will not be specifying localhost here (never ever have the management and data nodes on the same host or even the same physical server – a topic for another day).
  • --nodeid=1. This tells which node ID to restore from. This is based on the node ID from the cluster where the backup was created. Either data node can be used.
  • --backupid=1803311603. The backup ID to restore.
  • --backup_path=…. The location of the backup files.
  • --restore_meta. Restore the schema (the metadata).
  • --disable-indexes. Do not restore the indexes (we will rebuild them later).

You may wonder why we do not want to restore the indexes. I will get back to that after the restore has been completed.

You should only execute this command once, and only for one node ID. Before proceeding to the next step, ensure the step completed without errors. The next step is to restore the data.

Step 2: Restore the Data

The command to restore the data is very similar to the one for restoring the schema. The main differences are that --restore_meta is replaced by --restore_data and that ndb_restore should be used once for each data node that was in the cluster where the backup was created.

For example in case of two data nodes:
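A sketch of the two commands (same assumptions about connect string and backup path as above):

    shell$ ndb_restore --ndb-connectstring=localhost:1186 --nodeid=1 \
               --backupid=1803311603 \
               --backup_path=/backups/cluster/BACKUP/BACKUP-1803311603 \
               --restore_data --disable-indexes

    shell$ ndb_restore --ndb-connectstring=localhost:1186 --nodeid=2 \
               --backupid=1803311603 \
               --backup_path=/backups/cluster/BACKUP/BACKUP-1803311603 \
               --restore_data --disable-indexes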

These commands can be run in parallel as long as that does not overload the data nodes. A rule of thumb is that you can execute one ndb_restore --restore_data per host you have data nodes on. That is, if you have one data node per host, you can restore all parts in parallel. If you have two data nodes per host, it may be necessary to divide the restore into two parts.

The final step is to rebuild the indexes.

Step 3: Rebuild the Indexes

As we disabled the indexes while restoring the schema and data, it is necessary to recreate them. This is done in a similar way to restoring the data, except that it should only be done once, for one node ID, for example:
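A sketch using the --rebuild-indexes option (same assumptions as above):

    shell$ ndb_restore --ndb-connectstring=localhost:1186 --nodeid=1 \
               --backupid=1803311603 \
               --backup_path=/backups/cluster/BACKUP/BACKUP-1803311603 \
               --rebuild-indexes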

That's it. You can use the data again. But why was it that the indexes were disabled? Let me return to that.

Why Disable Indexes During the Restore?

There are two reasons to disable the indexes while restoring the schema and data:

  • Performance
  • Constraints (unique indexes and foreign keys)

As such, it is only necessary to disable the indexes while restoring the data, but there is no reason to create the indexes during the schema restore just to remove them again in the next step.

By disabling the indexes, there is no need to maintain them during the restore. This allows us to restore the data faster, but then we need to rebuild the indexes at the end. Overall this is still faster, and if BuildIndexThreads and the number of fragments per data node are greater than 1, the rebuild will happen in parallel like during a restart, as sketched below.
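For example, the rebuild parallelism can be configured in config.ini on the management node (the value here is a hypothetical example – tune it to the CPU cores available):

    [ndbd default]
    # Number of threads used when rebuilding ordered indexes
    BuildIndexThreads = 8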

The second reason is that if you have unique keys or foreign keys, it is in general not possible to restore the backup with indexes enabled. The reason is that the backup happens in parallel across the data nodes, with the changes made during the backup recorded separately. When you restore the data, it is not possible to guarantee that data and log are restored in the same order as the changes occurred during the backup. So, to avoid unique key and foreign key errors, it is necessary to disable the indexes until after the data has been restored.

Do not worry – this does not mean that the restored data will be inconsistent. At the end of the restore the constraints are fulfilled again – and rebuilding the indexes verifies this.

Want to Know More?

This blog really only scratches the surface of backups. If you want to read more, some references are:

What is MySQL NDB Cluster?

I have had the opportunity to write a blog for Apress with a brief introduction to MySQL NDB Cluster. The post gives an overview of the history and why you should consider using it. The architecture is described before some key characteristics are discussed.

If you are interested, the blog can be found at https://www.apress.com/us/blog/all-blog-posts/what-is-mysql-ndb-cluster/15454530.

Happy reading.

Overview of the MySQL NDB Cluster architecture.

Overview of the MySQL Server Architecture

Sometimes it can be useful to take a step back and look at the world from a bit larger distance than usual. So in this blog, I will take a look at the high level architecture of MySQL Server.

Info
This is meant as a simplified overview and does not include all details.

For the discussion I will be referring to the following figure, which shows some of the features and plugins of MySQL. Orange boxes are available both in the community version and in the commercial (enterprise) version, whereas red means the plugin is exclusive to the commercial version. Ellipsoid elements are plugins, whereas square elements (yes, with rounded corners) indicate other features. The figure is based on MySQL 5.7.20, where the audit log can be used to block access to tables.

Overview of the MySQL Server Architecture

At the top there are the various connectors and APIs that can be used in the application to connect to MySQL. As the mysql command-line client uses the C API, it is included in this part of the figure as well. There are more connectors and APIs than I could fit into the figure; for the complete list, see the documentation index.

The large dark box represents the server itself. At the top there is the connection handling, where there are two choices: use one thread per connection or use the commercial Thread Pool plugin. One thread per connection is the original way of handling new connections and, as the name implies, it simply creates a new thread for each new connection. The MySQL Enterprise Thread Pool limits the overall number of threads by re-using them, which leads to better scalability, particularly in the case of many connections executing short queries. After the initial thread handling, it is possible to send the connection through the optional Connection-Control plugin, which throttles connection attempts for an account after multiple attempts to log in with the wrong password.

Once the connection has been cleared to move ahead, the query is checked against the Query Cache if it is a SELECT statement and the Query Cache is enabled. For most workloads the Query Cache is an overhead that does not justify the potential gain, so in practice it is recommended to disable it completely.

Info
Note: The Query Cache has been deprecated and is removed in MySQL 8.0.3 and later.

Next is the optional Query Rewrite plugin, which allows the DBA to define which queries should be rewritten based on normalized queries like those in the Performance Schema digest summary table. For example, to change the query SELECT 130 to SELECT * FROM world.City WHERE ID = 130 – and the same for all other IDs – the following rule can be used:
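A sketch of such a rule using the Rewriter plugin's query_rewrite schema:

    mysql> INSERT INTO query_rewrite.rewrite_rules (pattern, replacement)
           VALUES ('SELECT ?', 'SELECT * FROM world.City WHERE ID = ?');
    mysql> CALL query_rewrite.flush_rewrite_rules();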

This assumes the Query Rewrite plugin has already been installed.

It is not until now that the query actually reaches the parser (this will not happen if the query has a match in the Query Cache). After the parser has processed the query, it is known which schemas, tables, columns, etc. will be required to answer the query, which means it is also possible to verify whether the user is allowed to use these objects.

After the parser, the MySQL Enterprise Firewall is invoked if it is installed and enabled. The firewall compares the query's digest to a whitelist of digests. If the query is not in the whitelist, it can be recorded or blocked (depending on the configuration).

The last plugin is the MySQL Enterprise Audit plugin. In MySQL 5.7.20 and later, the audit log can be used to block access to a schema or a table in addition to logging the access. This is the point where the plugin checks whether the access is allowed.

Finally, the optimizer determines the query plan and the query is executed. The query execution will in almost all cases involve requesting data from, or sending data to, one of the storage engines. MySQL supports several storage engines through the pluggable storage engine architecture. Nowadays the main storage engine is InnoDB, but several others exist, including the NDBCluster storage engine that is used in MySQL NDB Cluster to execute queries through the SQL nodes (MySQL Server instances).

If You Want to Learn More

One nice way to investigate the order in which plugins and features are executed is to use the Performance Schema. For example, to investigate the above features, enable the following instruments and consumers in the MySQL configuration file (or online by executing UPDATE statements against the Performance Schema setup tables):
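A sketch of one possible configuration, assuming stage-level events are what you want to capture:

    [mysqld]
    performance-schema-instrument = "stage/%=ON"
    performance-schema-consumer-events-stages-current = ON
    performance-schema-consumer-events-stages-history-long = ON
    performance-schema-consumer-events-statements-history-long = ON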

Then use two connections – one for monitoring and one for executing the query. Disable instrumentation for the monitoring connection:
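This can be done by setting INSTRUMENTED = 'NO' for the monitoring connection's own row in the threads table:

    mysql> UPDATE performance_schema.threads
              SET INSTRUMENTED = 'NO'
            WHERE PROCESSLIST_ID = CONNECTION_ID();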

Determine the Performance Schema thread ID for the connection that will be executing queries:
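For example (assuming the query connection logs in as the hypothetical user 'testuser'; the result is stored in a user variable for use below):

    mysql> SELECT THREAD_ID INTO @thread_id
             FROM performance_schema.threads
            WHERE PROCESSLIST_USER = 'testuser';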

This assumes the two connections use different usernames. If that is not the case, then you will need to adjust the above query.

Optionally, to make it easier to find the events just for the test query, truncate all tables in the Performance Schema (this will lose all historical data, so is most appropriate for a test system):
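A minimal version truncating just the history tables used below (not every Performance Schema table supports TRUNCATE):

    mysql> TRUNCATE TABLE performance_schema.events_stages_history_long;
    mysql> TRUNCATE TABLE performance_schema.events_statements_history_long;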

Now execute the query – for example one that gets rewritten by the Query Rewriter plugin using the rule above:
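In the query connection:

    mysql> SELECT 130;  -- rewritten to: SELECT * FROM world.City WHERE ID = 130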

Finally, the Performance Schema events can be retrieved:
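A sketch querying the stage events for the test connection, in the order they occurred:

    mysql> SELECT EVENT_ID, EVENT_NAME, SOURCE
             FROM performance_schema.events_stages_history_long
            WHERE THREAD_ID = @thread_id
            ORDER BY EVENT_ID;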

Have fun.

New Book: Pro MySQL NDB Cluster

It is with great pleasure that I can announce that a new book dedicated to MySQL NDB Cluster has just been released. The book Pro MySQL NDB Cluster is written by my colleague Mikiya Okuno and myself and is a nearly 700-page deep dive into the world of MySQL NDB Cluster. The book is published by Apress.

Tip: There are several ways to cluster MySQL. This book is about the product MySQL Cluster (often called MySQL NDB Cluster to clarify which cluster it is). There is also MySQL InnoDB Cluster, clustering using replication, and clustering through operating system or hardware features. Pro MySQL NDB Cluster is only about MySQL NDB Cluster.

We very much hope you will enjoy the book. Feedback and questions are most welcome, for example on Twitter (@nippondanji and @JWKrogh).

Note: At the time of writing, only the eBook is available for purchase. A softcover version will follow as soon as it has been possible to print it; this can also be pre-ordered now. – Update: The softcover version of the book is now also available.

The book is divided into five parts and 20 chapters.

Part I – The Basics

The first part provides some background information on the various parts in MySQL NDB Cluster and how it works. The chapters are:

  • Chapter 1: Architecture and Core Concepts
  • Chapter 2: The Data Nodes

Part II – Installation and Configuration

The second part focuses on installation- and configuration-related topics, including replication between clusters. The chapters are:

  • Chapter 3: System Planning
  • Chapter 4: Configuration
  • Chapter 5: Installation
  • Chapter 6: Replication

Part III – Daily Tasks and Maintenance

In the third part, the topics include tasks that are part of the daily routine of a database administrator, plus a tutorial where the tasks discussed in Parts II and III are handled through MySQL Cluster Manager. The chapters are:

  • Chapter 7: The NDB Management Client and Other NDB Utilities
  • Chapter 8: Backups and Restores
  • Chapter 9: Table Maintenance
  • Chapter 10: Restarts
  • Chapter 11: Upgrades and Downgrades
  • Chapter 12: Security Considerations
  • Chapter 13: MySQL Cluster Manager

Part IV – Monitoring and Troubleshooting

The fourth part continues with two topics that are also part of the daily routine: monitoring and troubleshooting. The chapters are:

  • Chapter 14: Monitoring Solutions and the Operating System
  • Chapter 15: Sources for Monitoring Data
  • Chapter 16: Monitoring MySQL NDB Cluster
  • Chapter 17: Typical Troubles and Solutions

Part V – Development and Performance Tuning

The final part covers topics related to development and to tuning the cluster with respect to performance. The chapters are:

  • Chapter 18: Developing Applications Using SQL with MySQL NDB Cluster
  • Chapter 19: MySQL NDB Cluster as a NoSQL Database
  • Chapter 20: MySQL NDB Cluster and Application Performance Tuning

InnoDB Locks Analysis: Why is Blocking Query NULL and How To Find More Information About the Blocking Transaction?

This post was originally published on the MySQL Support Team Blog at https://blogs.oracle.com/mysqlsupport/entry/innodb_locks_analysis_why_is on 14 April 2017.

Consider the scenario that you execute a query. You expect it to be fast – typically subsecond – but now it does not return until after 50 seconds (innodb_lock_wait_timeout seconds), and then it returns with an error:
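The error returned is the standard lock wait timeout error:

    ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction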

You continue to investigate the issue using the sys.innodb_lock_waits view or the underlying Information Schema tables (INNODB_TRX, INNODB_LOCKS and INNODB_LOCK_WAITS).

Note: The above Information Schema tables with lock and lock waits information have been moved to the Performance Schema in 8.0 as the data_locks and data_lock_waits tables. The sys schema view however works the same.

However, when you query the locks information, the blocking query is returned as NULL. What does that mean, and how do you proceed from there to get information about the blocking transaction?

Setting Up an Example

Before proceeding, let's set up an example to investigate later in the blog. The example can be set up as follows (do not disconnect Connection 1 when the queries have been executed):

  1. Connection 1 (see the sketch after this list):
  2. Connection 2 (blocks for innodb_lock_wait_timeout seconds):
  3. The following output from sys.innodb_lock_waits, captured while Connection 2 is still blocking, shows that the blocking query is NULL (slightly reformatted):

    The warnings will only occur in 5.7.14 and later, as the InnoDB lock tables are deprecated there ahead of being moved to the Performance Schema in MySQL 8.0. It is recommended to use the sys.innodb_lock_waits view, as it is updated accordingly in MySQL 8.0.
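A hypothetical sketch of the setup, using the world sample database (the table and row IDs are assumptions; the key point is that Connection 1's last statement touches a different row than the one Connection 2 waits for):

    -- Connection 1: leave the transaction open
    mysql> START TRANSACTION;
    mysql> UPDATE world.City SET Population = Population + 1 WHERE ID = 130;
    mysql> UPDATE world.City SET Population = Population + 1 WHERE ID = 131;

    -- Connection 2: blocks waiting for the row lock held by Connection 1
    mysql> UPDATE world.City SET Population = Population + 10 WHERE ID = 130;

    -- Monitoring connection: inspect the lock waits
    mysql> SELECT * FROM sys.innodb_lock_waits\G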

Investigating Idle Transactions

To investigate idle transactions, you need to use the Performance Schema. First, determine the Performance Schema thread ID for the blocking transaction. For this you need the blocking_pid; in the above example:

and use this with the performance_schema.threads table like:
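A sketch, assuming the blocking_pid found above was 3 (stored in a user variable for the queries below):

    mysql> SELECT THREAD_ID INTO @thread_id
             FROM performance_schema.threads
            WHERE PROCESSLIST_ID = 3;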

For the following queries, insert the thread ID found above in the THREAD_ID WHERE clauses; the sketches below use the @thread_id variable set above.

To get the latest query executed, use the performance_schema.events_statements_current table or the sys.session and sys.x$session views:
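Using the Performance Schema table directly:

    mysql> SELECT THREAD_ID, SQL_TEXT
             FROM performance_schema.events_statements_current
            WHERE THREAD_ID = @thread_id;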

or:
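Using the sys schema view:

    mysql> SELECT thd_id, conn_id, last_statement
             FROM sys.session
            WHERE thd_id = @thread_id;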

In this case, this does not explain why the lock is held, as the last query updated a different row than the one where the lock issue occurs. However, if the events_statements_history consumer is enabled (it is by default in MySQL 5.7 and later), the performance_schema.events_statements_history table will include the last 10 statements (by default) executed for the connection:
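A sketch of the history query:

    mysql> SELECT EVENT_ID, CURRENT_SCHEMA, SQL_TEXT
             FROM performance_schema.events_statements_history
            WHERE THREAD_ID = @thread_id
            ORDER BY EVENT_ID;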

So now the history of the blocking transaction can be seen, and it is possible to determine why the locking issue occurs.

Note: The history also includes some queries executed before the transaction started. These are not related to the locking issue.

If transaction monitoring is also enabled (it is only available in MySQL 5.7 and later), it is possible to get more information about the transaction and automatically limit the query of the history to the current transaction. Transaction monitoring is not enabled by default. To enable it, use:
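The transaction instrument and consumers are enabled like:

    mysql> UPDATE performance_schema.setup_instruments
              SET ENABLED = 'YES', TIMED = 'YES'
            WHERE NAME = 'transaction';
    mysql> UPDATE performance_schema.setup_consumers
              SET ENABLED = 'YES'
            WHERE NAME LIKE 'events_transactions%';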

Note: This must be done before either of the transactions is started. Only transactions started after transaction monitoring is enabled will be instrumented.

If the above was enabled before the blocking transaction started, you can get more details about the blocking transaction as:
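For example:

    mysql> SELECT *
             FROM performance_schema.events_transactions_current
            WHERE THREAD_ID = @thread_id\G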

And to get the statement history of the transaction:
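A sketch joining the statement history to the current transaction via the nesting event (statements executed inside a transaction have the transaction's event as their nesting event):

    mysql> SELECT s.EVENT_ID, s.SQL_TEXT
             FROM performance_schema.events_statements_history s
                  INNER JOIN performance_schema.events_transactions_current t
                          ON s.THREAD_ID = t.THREAD_ID
                         AND s.NESTING_EVENT_ID = t.EVENT_ID
                         AND s.NESTING_EVENT_TYPE = 'TRANSACTION'
            WHERE t.THREAD_ID = @thread_id
            ORDER BY s.EVENT_ID;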


Working Around MySQL Cluster Push Down Limitations Using Subqueries

This post was originally published on the MySQL Support Team Blog at https://blogs.oracle.com/mysqlsupport/entry/working_around_mysql_cluster_push on 5 August 2016.

I worked on an issue recently where a query was too slow when executed in MySQL Cluster. The issue was that Cluster has some restrictions when it comes to pushing down joins and conditions to the data nodes.

As an example of this, consider the following query using the employees sample database. The query takes a look at the average salary based on how many years the employee has been with the company. As the latest hire date in the database is in January 2000, the query uses 1 February 2000 as the reference date.
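A sketch matching the description (the column names are from the employees sample database; the alias names are assumptions, and the exact query in the original issue may have differed):

    SELECT FLOOR(DATEDIFF('2000-02-01', e.hire_date)/365) AS years_of_service,
           AVG(s.salary) AS avg_salary
      FROM salaries s
           STRAIGHT_JOIN employees e ON e.emp_no = s.emp_no
     GROUP BY years_of_service;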

Initially the query performed as follows (the performance figures are from two data nodes with all nodes in the same virtual machine on a laptop, so the timings are not necessarily representative of a production system, though the improvements should be repeatable):

The straight join is needed as the performance is better than leaving the join order up to the optimizer.

The schema for the two tables in use is:
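In the employees sample database these are defined along the following lines, here with the engine changed to NDBCluster (a sketch; the original output may have included partitioning details):

    CREATE TABLE employees (
      emp_no     INT NOT NULL,
      birth_date DATE NOT NULL,
      first_name VARCHAR(14) NOT NULL,
      last_name  VARCHAR(16) NOT NULL,
      gender     ENUM('M','F') NOT NULL,
      hire_date  DATE NOT NULL,
      PRIMARY KEY (emp_no)
    ) ENGINE=NDBCluster;

    CREATE TABLE salaries (
      emp_no    INT NOT NULL,
      salary    INT NOT NULL,
      from_date DATE NOT NULL,
      to_date   DATE NOT NULL,
      PRIMARY KEY (emp_no, from_date)
    ) ENGINE=NDBCluster;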

Why this poor performance? Looking at the EXPLAIN plan is the first step:

The EXPLAIN plan itself does not look bad – the index usage is as expected. However, note the 3 warnings – one is the usual rewritten query after the application of rewriting and optimizer rules, but the other two give more information about why the performance is not what would be expected:

Here it can be seen that the tables are not pushable, because they are involved in the GROUP BY.

This gave me an idea. A couple of years ago, I wrote a post on the MySQL Server Blog: Better Performance for JOINs Not Using Indexes. The materialized temporary tables discussed there can also include auto keys that can improve the join performance significantly. What if a similar tactic is used with the above query?

Changing the join on the employees table to join a subquery:
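Continuing the hypothetical sketch from above, the employees table is wrapped in a derived table:

    SELECT FLOOR(DATEDIFF('2000-02-01', e.hire_date)/365) AS years_of_service,
           AVG(s.salary) AS avg_salary
      FROM salaries s
           STRAIGHT_JOIN (SELECT * FROM employees) e ON e.emp_no = s.emp_no
     GROUP BY years_of_service;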

That is more than a factor 3 improvement.

The new EXPLAIN plan is:

Note how <auto_key0> is used for the join on the derived table. This is what helps make the rewrite work.

Actually, we can do a bit better. The above subquery selects all columns from the employees table even though only the emp_no and hire_date columns are used. So the next rewrite is:
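In the sketch, only the columns needed are selected in the derived table:

    SELECT FLOOR(DATEDIFF('2000-02-01', e.hire_date)/365) AS years_of_service,
           AVG(s.salary) AS avg_salary
      FROM salaries s
           STRAIGHT_JOIN (SELECT emp_no, hire_date FROM employees) e
                      ON e.emp_no = s.emp_no
     GROUP BY years_of_service;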

The improvement from only including the columns needed will vary. For example, if the extra data causes the internal temporary table to be converted into an on-disk table, or prevents a covering index from being used, the importance of only choosing the columns needed increases.

A final slight improvement can be gained by also converting the salaries table into a subquery:
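In the sketch, both sides of the join then become derived tables:

    SELECT FLOOR(DATEDIFF('2000-02-01', e.hire_date)/365) AS years_of_service,
           AVG(s.salary) AS avg_salary
      FROM (SELECT emp_no, salary FROM salaries) s
           STRAIGHT_JOIN (SELECT emp_no, hire_date FROM employees) e
                      ON e.emp_no = s.emp_no
     GROUP BY years_of_service;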

The total speed-up is from a little under 24 seconds to a little over 5 seconds, or more than a factor of 4.

Note: the above rewrite will not work out of the box in MySQL Cluster 7.5. The reason is that it is based on MySQL Server 5.7, where the optimizer can perform more advanced rewrites than was possible in MySQL 5.6. So by default, the above subqueries will be merged back into normal joins and you end up with the original query again. To use the subquery tactic in Cluster 7.5, it is first necessary to disable the optimizer switch that allows the optimizer to merge the subqueries into normal joins:
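That is the derived_merge flag of optimizer_switch:

    SET SESSION optimizer_switch = 'derived_merge=off';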


Who Holds the Metadata Lock? MySQL 5.7.3 Brings Help

In MySQL 5.5, metadata locks were introduced to ensure consistency for some operations that previously could cause problems. For example, if a transaction SELECTed from a table and another connection changed the structure of the table before the transaction was committed, the two operations would be written in reverse order to the binary log, which could prevent replaying the binary log.

However, there was no way to see who held metadata locks. SHOW PROCESSLIST would show who was waiting for metadata locks, and often you could guess who held them, but not always. Consider for example the following:

In this case it happens to be the process with Id = 3 that holds the metadata lock, but that is not obvious from the above output.

In MySQL 5.7.3 this has changed. There is a new Performance Schema table metadata_locks which will tell which metadata locks are held and which are pending. This can be used to answer the question for a case like the one above.

First, instrumentation of metadata locks must be enabled. To do that, enable the wait/lock/metadata/sql/mdl instrument in setup_instruments. Additionally, the global_instrumentation consumer must be enabled in setup_consumers.

Currently the wait/lock/metadata/sql/mdl instrument is not enabled by default. I have created a feature request to consider enabling it by default, but obviously whether that will happen also depends on the performance impact.

To enable metadata lock instrumentation:
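For example:

    mysql> UPDATE performance_schema.setup_instruments
              SET ENABLED = 'YES'
            WHERE NAME = 'wait/lock/metadata/sql/mdl';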

Note: The global_instrumentation consumer is enabled by default.

With the metadata lock instrumentation enabled, it is now easy to answer who holds the metadata lock (I've excluded the connection's own metadata locks here, as I'm only interested in the metadata lock contention going on between the other connections):
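A sketch, assuming the contended table is City in the world sample database as in the example above:

    mysql> SELECT ml.OBJECT_SCHEMA, ml.OBJECT_NAME, ml.LOCK_TYPE, ml.LOCK_STATUS,
                  t.PROCESSLIST_ID
             FROM performance_schema.metadata_locks ml
                  INNER JOIN performance_schema.threads t
                          ON t.THREAD_ID = ml.OWNER_THREAD_ID
            WHERE ml.OBJECT_TYPE = 'TABLE'
              AND ml.OBJECT_SCHEMA = 'world'
              AND ml.OBJECT_NAME = 'City'
              AND t.PROCESSLIST_ID <> CONNECTION_ID();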

So in this case there is a metadata lock GRANTED to the connection with process list ID 3, whereas the other connections have a PENDING lock status for the metadata lock on the City table.

Install MySQL Enterprise Monitor (MEM) 3.0 Using Limited Resources

MySQL Enterprise Monitor (MEM) is the monitoring solution offered as part of MySQL Enterprise Edition and MySQL Cluster Carrier Grade Edition. In this blog, I will not go into details of the features of MEM, but rather focus on the case where you want to install MEM 3.0 to try it out on your personal desktop or laptop.

A trial version (the full product, but it can only be used for 30 days) is available from Oracle's Software Delivery Cloud. If you are a MySQL customer, it is recommended that you download MEM from My Oracle Support (MOS).

Once you have downloaded and extracted the installation binaries, you can start the installation. You have the choice between a GUI, a text-based, and an unattended install. Here I will use the GUI install, but if you want to try one of the other install options, launch the installer with the --help option to get more information.

MEM 3.0 consists of two parts:

  • The Service Manager
  • Agents

Only the Service Manager is required, so that will be the first thing to install. As this is intended to show how you can test MEM, I will use a normal user rather than root for the installation.

It is also recommended to take a look at the MEM 3.0 Reference Manual.

Installation Wizard

When launching the installer the first screen asks which language to use – you have the choice of English and Japanese:


Step 1: Choose language

Next is an information screen reminding you to keep track of the usernames and passwords entered during the installation process:


Step 2: Remember the usernames and passwords entered during the installation process

The actual install process is now ready to start:


Step 3: Ready to start the actual install process

The following steps are to configure the installation – the first of which is to decide where to install the Service Manager:


Step 4: Choose the installation directory

The Service Manager will need three ports:

  • Tomcat Server Port: For the web UI when using non-SSL connections
  • Tomcat Shutdown Port: Used internally to shutdown the web server
  • Tomcat SSL Port: For the web UI when using SSL connections

Step 5: Choose the port numbers

The Service Manager uses a MySQL database to store the data collected. The next screen allows you to choose between using the MySQL database bundled with MEM or an existing one. Using the bundled instance has the advantage that MEM will configure it automatically and upgrades can be easier; however, it means running two MySQL instances if you already have MySQL installed. For a test installation, the bundled instance also has the advantage that it is easy to uninstall the whole installation again, so we will use the bundled instance in this example:


Step 6: Choose whether to use the bundled MySQL database or an existing one

The final pre-installation configuration step is to choose the username and password to use for the connection to the MySQL database. These are the credentials you were reminded of earlier to make sure you remember:


Step 7: Specify username and password for the Service Manager to connect to the MySQL database storing the data collected through the monitoring

Next a note that because we have chosen to install the Service Manager as a non-root user, it will not be started automatically when the server is restarted:


Step 8: Info that MEM will not start automatically when not installed as root

Ready to Install:


Step 9: Configuration completed – ready to install

The Service Manager is now being installed – this will take a little while as it both includes copying all the files in place as well as configuring and starting the web server and the bundled MySQL database:


Step 10: Installation is in progress

MEM includes an uninstall script if you need to remove the Service Manager again:


Step 11: Files are now installed

To improve security, MEM 3.0 uses SSL by default. The installation process adds a self-signed certificate, but you can choose to install your own certificate later:


Step 12: Information that MEM is using SSL with a self-signed certificate by default

The installer is now done and it is time to launch the UI:


Step 13: Ready to launch the Service Manager

Unattended Install

If you go through the installation process often, it is more convenient to use the unattended installation as it allows you to automate the installation. To perform the same installation as above using the unattended installation you can execute:
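A sketch of such an invocation (the installer file name and directory are hypothetical, and beyond --mode and --installdir the available options should be verified with --help):

    shell$ ./mysqlmonitor-3.0.x.y-linux-x86_64-installer.bin \
               --mode unattended \
               --installdir /opt/mysql/enterprise/monitor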

As several of the values are the default ones, you can skip some of the options; the sketch above only shows the essential ones.

When the above command completes, continue with the post-installation configuration as you would have done using the installation wizard.

Post-Installation Configuration

Once the Service Manager UI has opened in your browser there is a little post-installation configuration to take care of:

  • You need to create an admin user for the web UI
  • You need to create a user agents can use if you choose to install any agents
  • You can choose whether to have the Service Manager automatically check for updates and for how long to keep historical data

Step 14: Post installation configuration

The last step is to choose your timezone and which locale to use in the user interface:


Step 15: Choose timezone and the locale

After this you are ready to play around with the monitor. If you choose to install agents, the steps are similar.

Reducing Memory Footprint

One thing to be aware of, though, is that the default configuration used by the Service Manager is aimed at a small to medium production installation. If you, for example, installed MEM in a virtual machine or on a computer with limited memory available, the default configuration may not be optimal.

With the installation above just completed, the memory usage of the web server (Tomcat) and the bundled MySQL database is:

So around 1.5G resident memory. That is without adding any additional agents and/or MySQL instances to monitor.

So when I use MEM for testing, the first thing I do is to change a few configuration values to reduce the memory usage. The configuration options I change are located in two files (paths are given relative to the installation directory):

  • mysql/my.cnf – options related to the bundled MySQL database
  • apache-tomcat/bin/setenv.sh – options related to the web server

mysql/my.cnf

For the MySQL database I change two settings:

  • innodb_buffer_pool_size – this is by default 768M, but if you only monitor a couple of MySQL instances, something like 100M is enough
  • table_definition_cache – this is by default 2048. One side effect of the value being so large is that the auto-sizing of the Performance Schema considers the installation a large instance, so the Performance Schema tables are made large. Reducing the value to 800 is enough for testing and will reduce the memory usage of the Performance Schema by several hundred megabytes.

So my mysql/my.cnf looks something like this after the changes:
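An excerpt showing just the two changed settings (the rest of the file is left as installed):

    [mysqld]
    innodb_buffer_pool_size = 100M
    table_definition_cache  = 800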

apache-tomcat/bin/setenv.sh

It is only necessary to edit one line in the Tomcat configuration file – the default settings are:
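The relevant line sets the JVM options and looks something like this (abbreviated; the actual line contains additional options):

    JAVA_OPTS="-Xmx768M -Xms768M -XX:MaxPermSize=512M ..."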

I change that to:
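With the reduced memory settings:

    JAVA_OPTS="-Xmx256M -Xms256M -XX:MaxPermSize=200M ..."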

That is, the three settings I have changed are:

  • -Xmx (maximum heap size) from 768M to 256M
  • -Xms (minimum heap size) from 768M to 256M
  • -XX:MaxPermSize from 512M to 200M

Enabling the New Configurations

It requires a restart of the Service Manager to make the new configuration take effect. You can do the restart by going into the installation directory and executing:
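The Service Manager ships with a control script for this (path relative to the installation directory):

    shell$ ./mysqlmonitorctl.sh restart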

The memory usage is now somewhat smaller:


MySQL Connect 2013: ps_tools

During my talk and hands-on labs at MySQL Connect 2013, I used several views, functions, and procedures from ps_helper and ps_tools.

The slides from the two talks can be downloaded from Thanks For Attending MySQL Connect or from the MySQL Connect 2013 Content Catalogue.

You can read more about ps_helper on Mark Leith’s blog and you can download ps_helper from github.

To get ps_tools, download ps_tools.tgz from here. Once unpacked, there is a ps_tools_5x.sql file for each of the supported versions. The tools presented at MySQL Connect were all based on MySQL 5.6 and 5.7. Note that several of the included tools are not particularly useful on their own but are meant as utility functions for some of the other tools. The implementations are organised so that each is in the subdirectory corresponding to the earliest version it applies to. For a few tools, such as ps_reset_setup(), there are multiple versions depending on the MySQL version they apply to.

Several of the tools have a help text at the top of the source file.

The main tools are:

  • Function ps_thread_id(): returns the thread ID for a given connection. Specify NULL to get the thread ID for the current connection. For an example, see the sketch after this list.
  • View ps_setup_consumers: similar to performance_schema.setup_consumers, but includes a column to display whether the consumer is effectively enabled. See also slide 10 from CON 5282.
  • Function substr_count(): See also A Couple of Substring Functions: substr_count() and substr_by_delim().
  • Function substr_by_delim(): See also A Couple of Substring Functions: substr_count() and substr_by_delim().
  • Function ps_account_enabled(): Check whether a given account is enabled according to performance_schema.setup_actors. For an example, see the sketch after this list.
  • View ps_accounts_enabled: Lists each account and whether it is enabled according to performance_schema.setup_actors.
  • Procedure ps_setup_tree_instruments(): Creates a tree displaying whether instruments are enabled. See Easier Overview of Current Performance Schema Setting for more details.
  • Procedure ps_setup_tree_actors_by_host(): Creates a tree displaying whether actors are enabled, grouped by host. See Easier Overview of Current Performance Schema Setting for more details.
  • Procedure ps_setup_tree_actors_by_user(): Creates a tree displaying whether actors are enabled, grouped by user. See Easier Overview of Current Performance Schema Setting for more details.
  • Procedure ps_setup_tree_consumers(): Creates a tree displaying whether consumers are enabled. See Easier Overview of Current Performance Schema Setting and slide 11 from CON 5282 for more details.
  • Procedure ps_dump_thread_stack(): This is the same as in ps_helper with one bug fix. See also slides 30-31 from CON 5282.
  • Procedure ps_enable_all(): Enable all consumers and instruments.
  • Procedure ps_reset_setup(): Reset consumers, instruments, actors, and threads to the defaults (without taking my.cnf into consideration).
  • View is_innodb_lock_waits: show information about waiting locks for InnoDB.
  • View is_engine_data: summary of the amount of data and indexes grouped by storage engine.
  • View ps_memory_by_thread_by_event_raw: the amount of memory used grouped by thread and event without any formatting and ordering.
  • View ps_memory_current_by_thread_by_event: The same as above but formatted output and ordered by current usage.
  • View ps_memory_high_by_thread_by_event: The same as above but formatted output and ordered by the high watermark.
  • Procedure schema_unused_routines(): Lists unused stored procedures and functions.
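A minimal sketch of the two function examples referenced in the list above (assuming the objects were loaded into a schema named ps_tools, and that ps_account_enabled() takes host and user arguments in that order – both assumptions):

    -- Thread ID of the current connection
    mysql> SELECT ps_tools.ps_thread_id(NULL);

    -- Is the account enabled according to setup_actors?
    mysql> SELECT ps_tools.ps_account_enabled('localhost', 'myuser');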