MySQL Shell: Built-In Help

It can be hard to recall all the details of how a program and API work. The usual way to handle that is to look at the manual or a book. Another – and in my opinion nicer – way is to have built-in help, so you can find the information without switching between the program and a browser. This blog discusses how to obtain help when you use MySQL Shell.

MySQL Shell is a client that allows you to execute queries and manage MySQL through SQL commands and JavaScript and Python code. It is a second generation command-line client with additional WebOps support. If you have not installed MySQL Shell yet, then you can download it from MySQL’s community downloads, Patches & Updates in My Oracle Support (MOS) (for customers), or Oracle Software Delivery Cloud (for trial downloads). You can also install it through MySQL Installer for Microsoft Windows.
MySQL Shell: Get help for a table object

MySQL Shell has a very nice and comprehensive built-in help. There is of course the help output produced using the --help option if you invoke the shell from the command line:
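For example, from the operating system shell (output omitted here):

    shell$ mysqlsh --help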

However, this help is not what makes MySQL Shell special. It is the help that you can access from within the shell when working in JavaScript or Python that is worth some extra attention. There is support both for general help and for obtaining help through objects.

General Help

The first layer of help is the one also known from the old mysql command-line client. A command consisting of a backslash and a ?, h, or help (\?, \h or \help) will show information about the general usage of MySQL Shell:

This shows which commands and global objects are available. But there is more: you can also get help about the usage of MySQL Shell, such as how to use the Admin API (for MySQL InnoDB Cluster), how to connect, or the SQL syntax. The search for relevant help topics is context sensitive; for example, searching for the word select returns different results depending on the mode and whether you are connected:

  • In Python or JavaScript mode without a connection, it is noted that information was found in the mysqlx.Table.select and mysqlx.TableSelect.select categories.
  • In Python or JavaScript mode with a connection, the SELECT SQL statement is included as a category.
  • In SQL mode the actual help text for the SELECT SQL statement is returned (requires a connection).

For example, to get help about the select method of a table object:
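In Python mode, such a search could for example look like this (the search term follows the category names listed above):

    mysql-py> \? mysqlx.Table.select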

To get help for the SELECT SQL statement:
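For example, in SQL mode (the help text itself is omitted here):

    mysql-sql> \? SELECT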

Note here how it is possible to get the help for the SELECT statement both from the Python (and JavaScript) mode and from SQL mode, but the search term is different.

Tip: To get information about SQL statements, you must be connected to a MySQL instance.

When you use the JavaScript or Python modes there is another way to get help based on your object. Let’s look at that.

Object Based Help

If you are coding in MySQL Shell using JavaScript or Python, it may happen that you need a hint on how to use a given object, for example a table object. You can use the method described in the previous section to get help by searching for mysqlx.Table; however, you can also access the help directly from the object.

All of the X DevAPI objects in MySQL Shell have a help() method that you can invoke to have help returned for the object. For example, if you have an object named city for the city table in the world schema, then calling city.help() returns information about the table object:
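A minimal sketch in Python mode, assuming the world schema is the default schema so it is available through the db global object (the help text itself is omitted):

    mysql-py> city = db.get_table('city')
    mysql-py> city.help()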

As you can see, the built-in help in MySQL Shell is a powerful resource. Make sure you use it.

MySQL Shell: Using External Python Modules

MySQL Shell is a great tool for working with MySQL. One of the features that make it stand out compared to the traditional mysql command-line client is the support for JavaScript and Python in addition to SQL statements. This allows you to write code you otherwise would have had to write outside the client. I showed a simple example of this in my post about the instant ALTER TABLE feature in MySQL 8.0.12 where a Python loop was used to populate a table with 1 million rows. This blog will look further into the use of Python, and more specifically external modules.

Using a custom table_tools module in MySQL Shell.

Using Standard Modules

The aforementioned loop that was used to populate a test table also showed another feature of MySQL Shell: you can use the standard Python modules just as you would in any other Python script. For example, if you need to create UUIDs you can use the uuid module:
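A quick sketch in Python mode:

    mysql-py> import uuid
    mysql-py> str(uuid.uuid1())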

This on its own is great, but what about your own modules? Sure, that is supported as well. Before showing how you can access your own modules, let’s create a simple module to use as an example.

Example Module

For the purpose of this blog, the following code should be saved in the file table_tools.py. You can save it in whatever directory you keep your Python libraries. The code is:
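The original code is not reproduced here, so the following is a sketch matching the description below; the column widths and the exact set of columns selected from information_schema.COLUMNS are my assumptions:

    def describe(table):
        """Print information similar to the DESC SQL command."""
        fmt = "{0:20s} {1:20s} {2:8s} {3:4s} {4:10s} {5:s}"

        # Work backwards from the table object to the session
        session = table.get_session()

        # Query information_schema.COLUMNS for the metadata of the table
        i_s = session.get_schema("information_schema")
        i_s_columns = i_s.get_table("COLUMNS")
        result = i_s_columns.select(
            "COLUMN_NAME", "COLUMN_TYPE", "IS_NULLABLE",
            "COLUMN_KEY", "COLUMN_DEFAULT", "EXTRA"
        ).where(
            "TABLE_SCHEMA = :schema AND TABLE_NAME = :table"
        ).order_by("ORDINAL_POSITION").bind(
            "schema", table.get_schema().get_name()
        ).bind("table", table.get_name()).execute()

        # Print a header followed by one line per column
        print(fmt.format("Field", "Type", "Null", "Key", "Default", "Extra"))
        for row in result.fetch_all():
            print(fmt.format(str(row[0]), str(row[1]), str(row[2]),
                             str(row[3]), str(row[4]), str(row[5])))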

The describe function takes a Table object from which it works backwards to get the session object. It then queries the information_schema.COLUMNS view to get the same information about the table as the DESC SQL command. Both the table and schema name can be found through the table object. Finally, the information is printed.

The example is overly simplified for general usage as it does not change the width of the output based on the length of the data, and there is no error handling whatsoever. However, this is on purpose to focus on the usage of the code from within MySQL Shell rather than on the code.

Note: The same code works in a MySQL Connector/Python script, except that the rows are returned as mysqlx.result.Row objects, so the loop printing the rows looks a little different.

With the function ready, it is time to look at how you can import it into MySQL Shell.

Importing Modules Into MySQL Shell

In order to be able to import a module into MySQL Shell, it must be in the path searched by Python. If you have saved table_tools.py into a location already searched, then that is it. However, a likely more common scenario is that you have saved the file in a custom location. In that case, you need to tell Python where to look for the files.

You modify the search path in MySQL Shell just as you would in a regular Python program. If, for example, you have saved the file to D:\MySQL\Shell\Python, then you can add that to the path using the following code:
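    # Add the directory with your modules to the module search path
    import sys
    sys.path.append(r"D:\MySQL\Shell\Python")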

If this is something you need as a one-off, then it is fine just to modify the path directly in MySQL Shell. However, if you are working on some utilities that you want to reuse, it becomes tedious. MySQL Shell has support for configuration files in which commands can be executed. The one for Python is named mysqlshrc.py (and the one for JavaScript mysqlshrc.js).

MySQL Shell searches for the mysqlshrc.py file in four locations including global locations as well as user specific locations. You can see the full list and the search order in the MySQL Shell User Guide. The user specific file is %APPDATA%\MySQL\mysqlsh\mysqlshrc.py on Microsoft Windows and $HOME/.mysqlsh/mysqlshrc.py on Linux and macOS.

You can do more than just changing the search path in the mysqlshrc.py file. However, for this example nothing else is needed.

Using the Module

Now that MySQL Shell has been set up to search in the path where your module is saved, you can use it in MySQL Shell. For example to get the description of the world.city table, you can use the following commands:
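A sketch of the session (the output of describe() is omitted):

    mysql-py> \use world
    mysql-py> import table_tools
    mysql-py> table_tools.describe(db.city)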

The \use world command sets the default schema to the world database. As a side effect, it also makes the tables in the world database available as properties of the db object. So, it is possible to pass an object for the world.city table as db.city to the table_tools.describe() function.

That is it. Now it is your turn to explore the possibilities that have been opened with MySQL Shell.

MySQL 8.0: Persisted Variables

MySQL 8.0 introduced a new feature that allows you to persist configuration changes from inside MySQL. Previously you could execute SET GLOBAL to change the configuration at runtime, but you needed to update your MySQL configuration file in order to persist the change. In MySQL 8.0 you can skip the second step. This blog discusses how this works and how to back up and restore the configuration.

Using SET PERSIST to set a variable and the persisted_variables table in the Performance Schema to get a list of persisted variables.

Persisting Variables

You persist changes with either the SET PERSIST or SET PERSIST_ONLY statement. The difference is that SET PERSIST_ONLY only updates the configuration, whereas SET PERSIST essentially combines SET GLOBAL and SET PERSIST_ONLY.

Note: Some variables such as innodb_buffer_pool_instances can only use PERSIST_ONLY, i.e. a restart is required to make the change take effect. Others, such as datadir, cannot currently be persisted at all.
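For example (the values are purely illustrative):

    -- Takes effect immediately and is persisted across restarts
    SET PERSIST max_connections = 1000;

    -- Only persisted; takes effect at the next restart
    SET PERSIST_ONLY innodb_buffer_pool_instances = 8;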

mysqld-auto.cnf and variables_info

The persisted variables are stored in the file mysqld-auto.cnf, located in the data directory, using the JSON format. It includes more information than just the persisted values; it also includes information such as who made the change and when. An example file is:
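A hedged example with three illustrative variables (the real file is stored as a single line; the timestamps, users, and values will differ):

    {"Version": 1, "mysql_server": {"max_connections": {"Value": "1000", "Metadata": {"Timestamp": 1537005300791515, "User": "root", "Host": "localhost"}}, "sort_buffer_size": {"Value": "32768", "Metadata": {"Timestamp": 1537005311612107, "User": "root", "Host": "localhost"}}, "join_buffer_size": {"Value": "524288", "Metadata": {"Timestamp": 1537005323885102, "User": "root", "Host": "localhost"}}}}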

Since it is JSON, it is easy to reformat to make it easier to read, for example:
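One option is Python's bundled json.tool module (output abbreviated):

    shell$ python -m json.tool mysqld-auto.cnf
    {
        "Version": 1,
        "mysql_server": {
            "max_connections": {
                "Value": "1000",
                "Metadata": {
                    "Timestamp": 1537005300791515,
                    "User": "root",
                    "Host": "localhost"
                }
            },
            ...
        }
    }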

This information is also available from the performance_schema.variables_info table:
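For example, picking a few of the columns (the values continue the illustrative example from above):

    mysql> SELECT VARIABLE_NAME, VARIABLE_SOURCE, SET_USER, SET_HOST
             FROM performance_schema.variables_info
            WHERE VARIABLE_NAME IN ('max_connections', 'sort_buffer_size',
                                    'join_buffer_size');
    +------------------+-----------------+----------+-----------+
    | VARIABLE_NAME    | VARIABLE_SOURCE | SET_USER | SET_HOST  |
    +------------------+-----------------+----------+-----------+
    | join_buffer_size | DYNAMIC         | root     | localhost |
    | max_connections  | PERSISTED       | root     | localhost |
    | sort_buffer_size | PERSISTED       | root     | localhost |
    +------------------+-----------------+----------+-----------+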

Notice that the source for join_buffer_size is DYNAMIC whereas the two other variables have the source set to PERSISTED. Why? After all, all three existed in the mysqld-auto.cnf file. DYNAMIC means that the variable was changed since the last restart using either SET GLOBAL or SET PERSIST. Another thing to be aware of is that variables changed with SET PERSIST_ONLY will not show up in variables_info until after the next restart. I will shortly get back to a way to get the variables that have been persisted in one way or another.

Backup and Restore

A simple way to back up the configuration is to copy the mysqld-auto.cnf file to a safe location. Similarly, you can restore the configuration by copying it back.

However, what if you want most of the configuration but not everything or you want to edit some of the values? In that case you need another way of exporting the configuration as you should not manually edit mysqld-auto.cnf.

Warning: Do not edit the mysqld-auto.cnf file manually. It should only be changed with SET PERSIST and SET PERSIST_ONLY. If there are any errors in the file, MySQL will refuse to start.

Fortunately, as it turns out, it is easy to export all persisted variables. The table performance_schema.persisted_variables includes all variables that have either been read from mysqld-auto.cnf or been changed with SET PERSIST or SET PERSIST_ONLY since the last restart. The table includes the persisted values. For example:
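(The values continue the illustrative example from above.)

    mysql> SELECT * FROM performance_schema.persisted_variables;
    +------------------+----------------+
    | VARIABLE_NAME    | VARIABLE_VALUE |
    +------------------+----------------+
    | max_connections  | 1000           |
    | sort_buffer_size | 32768          |
    | join_buffer_size | 524288         |
    +------------------+----------------+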

This can be used to create SET statements that recreate the configuration on another instance. For example:
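A sketch using CONCAT() and QUOTE():

    SELECT CONCAT('SET PERSIST_ONLY `', VARIABLE_NAME, '` = ',
                  QUOTE(VARIABLE_VALUE), ';') AS SetStatement
      FROM performance_schema.persisted_variables;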

Using the mysql command-line client, you can avoid the column names and table format by using the --skip-column-names and --batch options:
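(The connection options and the output file name are examples.)

    shell$ mysql --skip-column-names --batch \
                 --user=root --password \
                 --execute="SELECT CONCAT('SET PERSIST_ONLY \`', VARIABLE_NAME, '\` = ', QUOTE(VARIABLE_VALUE), ';') FROM performance_schema.persisted_variables;" \
                 > config.sql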

Note: On Microsoft Windows ensure everything is on one line and the backslashes are removed.

Now the file config.sql contains an export of the persisted variables:
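(Continuing the illustrative values:)

    SET PERSIST_ONLY `max_connections` = '1000';
    SET PERSIST_ONLY `sort_buffer_size` = '32768';
    SET PERSIST_ONLY `join_buffer_size` = '524288';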

This example creates SET PERSIST_ONLY statements as those work with all persistable variables. When you replay the SET statements, a restart of MySQL is required for the changes to take effect. If you want to use SET PERSIST where possible, then you need to take into consideration whether the variable supports SET PERSIST. A list of variables that require SET PERSIST_ONLY is included at the end.

The configuration can now be restored as:
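For example (the connection options are again examples):

    shell$ mysql --user=root --password < config.sql
    # Restart MySQL afterwards for the SET PERSIST_ONLY settings to take effect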

PERSIST_ONLY Variables

As promised, I will conclude with a list of persistable variables that only support SET PERSIST_ONLY. As of MySQL 8.0.12 without any plugins installed, the variables are:

 

MySQL NDB Cluster 7.6: Fast Import with ndb_import

A common problem when you need to set up a new database instance is to import the initial data. This can particularly be an issue if you are converting an existing database, so there is a large amount of data to import. In MySQL NDB Cluster 7.6, which was just announced as generally available (GA) this week, there is a new tool to help with such imports: ndb_import.

The ndb_import utility takes a CSV file and imports the data into a table. (While C stands for comma, the tool is not limited to comma-separated files; in fact, tab is the default delimiter.) But hey, does that not sound familiar? It sure does. MySQL Server has “forever” included the mysqlimport utility and the LOAD DATA statement. So why do we need another utility, and what does ndb_import do that we cannot already do with the existing tools?

The big advantage of ndb_import is that it is a native NDB tool. It does not connect to any of the SQL nodes; instead it connects directly to the data nodes as an NDB API program. This alone allows you to bypass the overhead of connecting to the SQL node and parsing SQL statements. However, it is even better than that. The ndb_import tool can also take advantage of the parallelism offered by MySQL NDB Cluster. This means that in the end, the overall performance is expected to be quite a bit better than loading the data using mysqlimport or LOAD DATA.

Note: The speed-up will depend on your setup. A cluster with all nodes in a VM on a laptop may not experience any significant speed-up compared to other methods of importing data.

So, how does this work? Let us look at a simple example. We will import the following data from the t1.txt file:
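(The persons are made up for this example; the fields are separated by tabs.)

    id   FirstName   Surname   Birthday
    1    John        Doe       1945-10-15
    2    Jane        Doe       1947-08-11
    3    Mary        Smith     1974-12-04
    4    Pete        Jones     1967-01-03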

The data includes an id as well as the first name, surname, and birthday for four persons, separated by \t, and it will be imported into the db1.t1 table:
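The table could for example be defined as follows (the exact data types are my choice):

    CREATE TABLE db1.t1 (
      id INT UNSIGNED NOT NULL PRIMARY KEY,
      FirstName VARCHAR(20) NOT NULL,
      Surname VARCHAR(20) NOT NULL,
      Birthday DATE NOT NULL
    ) ENGINE=NDBCluster;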

Note: You still need to create the table through an SQL node.

With the data and table, the data can be imported using the following ndb_import command:
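(A sketch – the connect string and the parallelism settings must match your cluster.)

    shell$ ndb_import --ndb-connectstring=localhost:1186 db1 t1.txt \
                      --connections=2 --db-workers=4 \
                      --fields-terminated-by='\t' --ignore-lines=1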

The arguments in the first line of the command tell how to connect to the management node, the schema name, and the name of the file to import data from. As with mysqlimport, the table name is derived from the file name by using the basename, t1 in this case.

The --connections and --db-workers options define the parallelism of the job. The --connections option is equivalent to the ndb-cluster-connection-pool option that can be set for SQL nodes and defines how many connections the ndb_import process creates to the data nodes. There must be at least as many free API node slots available as the number of connections requested; otherwise the command will hang while waiting for enough connections to become available and eventually fail. The --db-workers option specifies the number of threads ndb_import will use per data node.

The remaining options are equivalent to the options available for mysqlimport, LOAD DATA, and SELECT … INTO OUTFILE. They specify the format of the data inside the source file and how many rows to skip – one in this case due to the header.

If you need to load data into several tables in the same schema, you can specify a file name for each table, and they will all be processed.

There are many more options available. I recommend that you check out the reference manual page for ndb_import for more information.

The output of the command is:

This includes some metadata about the job, such as the table and file names, as well as the total number of rows imported and how long the import took.

What happens if something goes wrong? If we thought the data was comma-separated and had used --fields-terminated-by=",", an error would have occurred:

Notice here how the fifth line of the output says “job-1 rejected 1 rows (limit 0), see t1.rej”. Each input file is considered a job, so “job-1” refers to the first (and in this case only) input file. It rejected 1 row, and the limit refers to the --rejects option, which specifies how many permanent errors must be encountered before failing the whole job. The default is to fail on the first error. Finally, “see t1.rej” tells us to look at the t1.rej file (created by ndb_import) for more information about the failed job:

The t1.rej file is a tab-delimited file with information for each failed row. In this case, the error is that there are too few values: one value was found (because there are no commas in the data and we specified the wrong delimiter) but four values were expected. If --rejects had been set to 3, all four rows would have been included in t1.rej.

The ndb_import binary is included with the NDB Cluster installation and can be found in the bin directory. The next step is for you to try it out. Good luck importing data into MySQL NDB Cluster 7.6.

MySQL NDB Cluster Backups

Today – 31 March – is world backup day, so I thought I would write a little about backups in MySQL NDB Cluster.

Just because NDB Cluster offers built-in redundancy and high availability does not mean backups are not important. They are – as ever and as for everything in software. The redundancy does not protect against user errors (anyone ever executed DROP TABLE or DROP SCHEMA by accident?), nor does it protect against a natural disaster, fire, or another catastrophe hitting the data center. The same applies to high availability.

In short, if the data is in any way remotely important to you, ensure you have a backup. Furthermore, a backup is not worth any more than your ability to restore it. If a fire ravages your data center, it does not help that you have the best backup in the world hosted in that data center.

So, before actually creating and restoring a backup, let us look at two best practices when it comes to backups.

Best Practices

The best practices mentioned here are by no means unique to MySQL NDB Cluster nor even to databases. They are not exhaustive either, but are meant as guidelines to have in mind when designing your backups.

Use a Backup Method that Works with Your Product

It sounds pretty obvious – why would you ever use a backup solution that does not work? Obviously no one does that on purpose, but unfortunately it is too common that it has not been checked whether the backup solution is appropriate.

With respect to MySQL NDB Cluster, I can mention that rsync of the NDB file system will not work, nor will any other method of creating a binary backup from the file system (including MySQL Enterprise Backup). Using mysqldump does not work either unless you keep the cluster read-only, for example by putting the cluster into “single user mode” and locking all tables.

When you test your backups make sure that you make changes to the data while the backup is running. A backup method may work when the database is idle, but not when concurrent writes are occurring.

In a little bit, I will show what the recommended way to create an online backup in NDB Cluster is.

Ensure You Can Restore Your Backups

There are two parts to this: can you retrieve your backups even in the worst case scenario, and do you know how to restore your backups?

You cannot assume that a backup that is kept locally on the same host or even in the same data center will be available when you need it. Think in terms of a major disaster such as the entire data center being gone. Is it likely to happen? Fortunately not, but from time to time really bad things happen: fires, earthquakes, flooding, etc. Even if it is a once-a-century event, do you want to run the risk?

So, ensure you are copying your backups off site. How far away you need to copy it depends on several factors, but at least ensure it is not in the same suburb.

The other aspect is that too often, the first time a restore is attempted is when there is a total outage and everyone is in panic mode. That is not the optimal time to learn about the restore requirements and gotchas. Make it routine to restore backups. It serves two purposes: it validates your backups – see also the previous best practice – and it validates your steps to restore a backup.

Creating a Backup

It is very easy to create an online backup of a cluster in MySQL NDB Cluster as the functionality is built-in. In the simplest of cases, it is as trivial as executing the START BACKUP command in the ndb_mgm client, for example:
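For example, non-interactively (the node IDs, GCPs, and sizes are illustrative):

    shell$ ndb_mgm -e "START BACKUP"
    Connected to Management Server at: localhost:1186
    Waiting for completed, this may take several minutes
    Node 1: Backup 1 started from node 49
    Node 1: Backup 1 started from node 49 completed
     StartGCP: 177 StopGCP: 180
     #Records: 7362 #LogRecords: 0
     Data: 453648 bytes Log: 0 bytes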

Each backup has a backup ID. In the above example, the ID is 1 (“Backup 1 started from …”). When a backup is started without specifying a backup ID, MySQL NDB Cluster determines what the previously highest used ID is and adds one to that. However, while this is convenient, it does mean the backup ID does not carry any information other than the sequence the backups were made.

An alternative is to explicitly request a given ID. Supported IDs are 1 through 4294967294. One option is to choose the ID to be YYmmddHHMM where YY is the year, mm the month, dd the day, HH the hours in 24-hour format, and MM the minutes. Zero-pad the numbers if the value is less than 10. This makes the backup ID reflect when the backup was created.

To specify the backup ID explicitly, give the requested ID as the first argument after START BACKUP, for example (using the interactive mode of ndb_mgm this time):
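(The node IDs are again illustrative.)

    shell$ ndb_mgm
    -- NDB Cluster -- Management Client --
    ndb_mgm> START BACKUP 1803311603
    Waiting for completed, this may take several minutes
    Node 1: Backup 1803311603 started from node 49
    Node 1: Backup 1803311603 started from node 49 completed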

Here the backup ID is 1803311603 meaning the backup was created on 31 March 2018 at 16:03.

There are other arguments that can be used, for example to specify whether the snapshot time (where the backup is consistent) should be at the start or the end (the default) of the backup. The HELP START BACKUP command can be used to get online help with the START BACKUP command.

Remember that START BACKUP only backs up NDBCluster tables. Use mysqldump, mysqlpump, or another backup program to back up the schema and/or non-NDBCluster tables.

Restoring a Backup

It is a little more complicated to restore a backup than to create it, but once you have tried it a few times, it should not pose any major issues.

The backups are restored using the ndb_restore program. It is an NDB API program that supports both restoring the schema and data. It is recommended to perform the restore in three steps:

  1. Restore the schema.
  2. Restore the data with indexes disabled.
  3. Rebuild the indexes.
In MySQL NDB Cluster 7.4 and earlier, restoring the schema with ndb_restore did not change the number of partitions to the default of the cluster you restore to. If you have not yet upgraded to MySQL NDB Cluster 7.5, it is recommended to restore the schema from a mysqldump or mysqlpump backup if the cluster does not have the same number of data nodes and LDM threads.

The restore examples assume you are restoring into an empty cluster. There is also support for partial restores and for renaming tables, but that will not be discussed here. Let us take a look at the three steps.

Step 1: Restore the Schema

The schema is restored using the --restore_meta option, for example:
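(The backup path is an example; by default the backup files are written to the BACKUP/BACKUP-<id> directory under each data node's file system.)

    shell$ ndb_restore --ndb-connectstring=localhost:1186 \
                       --nodeid=1 --backupid=1803311603 \
                       --backup_path=/backups/node_1/BACKUP/BACKUP-1803311603 \
                       --restore_meta --disable-indexes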

The arguments used here are:

  • --ndb-connectstring=localhost:1186. The host and port number where to connect to the management node(s). This example is from a test cluster with all nodes on the same host. In general you will not be specifying localhost here (never ever have the management and data nodes on the same host or even the same physical server – a topic for another day).
  • --nodeid=1. This tells which node ID to restore from. It is based on the node ID from the cluster where the backup was created. Either data node can be used.
  • --backupid=1803311603. The backup ID to restore.
  • --backup_path=…. The location of the backup files.
  • --restore_meta. Restore the schema (called metadata).
  • --disable-indexes. Do not restore the indexes (we will rebuild them later).

You may wonder why we do not want to restore the indexes. I will get back to that after the restore has been completed.

You should only execute this command once and only for one node ID. Before proceeding to the next step, ensure the step completed without errors. The next step is to restore the data.

Step 2: Restore the Data

The command to restore the data is very similar to the one restoring the schema. The main differences are that --restore_meta is replaced by --restore_data and that ndb_restore should be invoked once for each data node that was in the cluster where the backup was created.

For example, in the case of two data nodes:
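(One invocation per node ID from the original cluster; the paths are examples.)

    shell$ ndb_restore --ndb-connectstring=localhost:1186 \
                       --nodeid=1 --backupid=1803311603 \
                       --backup_path=/backups/node_1/BACKUP/BACKUP-1803311603 \
                       --restore_data --disable-indexes

    shell$ ndb_restore --ndb-connectstring=localhost:1186 \
                       --nodeid=2 --backupid=1803311603 \
                       --backup_path=/backups/node_2/BACKUP/BACKUP-1803311603 \
                       --restore_data --disable-indexes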

These commands can be run in parallel as long as that does not overload the data nodes. A rule of thumb is that you can execute one ndb_restore --restore_data per host you have data nodes on. I.e. if you have one data node per host, you can restore all parts in parallel. If you have two data nodes per host, it may be necessary to divide the restore into two parts.

The final step is to rebuild the indexes.

Step 3: Rebuild the Indexes

As we disabled the indexes while restoring the schema and data, it is necessary to recreate them. This is done in a similar way to restoring the data – i.e. it should only be done for one node ID, for example:
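(Using the same connection and backup options as before.)

    shell$ ndb_restore --ndb-connectstring=localhost:1186 \
                       --nodeid=1 --backupid=1803311603 \
                       --backup_path=/backups/node_1/BACKUP/BACKUP-1803311603 \
                       --rebuild-indexes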

That’s it. You can use the data again. But why was it that the indexes were disabled? Let me return to that.

Why Disable Indexes During the Restore?

There are two reasons to disable the indexes while restoring the schema and data:

  • Performance
  • Constraints (unique indexes and foreign keys)

As such, it is only necessary to disable the indexes while restoring the data, but there is no reason to create the indexes during the schema restore just to remove them again in the next step.

By disabling the indexes, there is no need to maintain the indexes during the restore. This allows us to restore the data faster, but then we need to rebuild the indexes at the end. This is still faster though, and if BuildIndexThreads and the number of fragments per data node are greater than 1, the rebuild will happen in parallel like during a restart.

The second thing is that if you have unique keys or foreign keys, it is in general not possible to restore the backup with indexes enabled. The reason is that the backup happens in parallel across the data nodes with the changes happening during the backup recorded separately. When you restore the data, it is not possible to guarantee that data and log are restored in the same order as the changes occurred during the backup. So, to avoid unique key and foreign key errors, it is necessary to disable the indexes until after the data has been restored.

Do not worry – this does not mean that the restored data will be inconsistent. At the end of the backup the constraints were fulfilled, and rebuilding the indexes verifies that this is again the case for the restored data.

Want to Know More?

This blog really only scratches the surface of backups. If you want to read more, the MySQL NDB Cluster chapter in the MySQL Reference Manual is a good place to start.

Install MySQL Enterprise Monitor (MEM) 3.0 Using Limited Resources

MySQL Enterprise Monitor (MEM) is the monitoring solution offered as part of MySQL Enterprise Edition and MySQL Cluster Carrier Grade Edition. In this blog, I will not go into details of the features of MEM, but rather focus on the case where you want to install MEM 3.0 to try it out on your personal desktop or laptop.

A trial version (the full product, but it can only be used for 30 days) is available from Oracle’s Software Delivery Cloud. If you are a MySQL customer, it is recommended that you download MEM from My Oracle Support (MOS).

Once you have downloaded and extracted the installation binaries, you can start the installation. You have the choice between a GUI, a text-based, and an unattended install. Here I will use the GUI install, but if you want to try one of the other install options, launch the installer with the --help option to get more information.

MEM 3.0 consists of two parts:

  • The Service Manager
  • Agents

Only the Service Manager is required, so that will be the first thing to install. As this is intended to show how you can test MEM, I will use a normal user rather than root for the installation.

It is also recommended to take a look at the MEM 3.0 Reference Manual.

Installation Wizard

When launching the installer the first screen asks which language to use – you have the choice of English and Japanese:

Step 1: Choose language

Next is an information screen that you should ensure you keep track of the usernames and passwords entered during the installation process:

Step 2: Remember the usernames and passwords entered during the installation process

The actual install process is now ready to start:

Step 3: Ready to start the actual install process

The following steps are to configure the installation – the first of which is to decide where to install the Service Manager:

Step 4: Choose the installation directory

The Service Manager will need three ports:

  • Tomcat Server Port: For the web UI when using non-SSL connections
  • Tomcat Shutdown Port: Used internally to shutdown the web server
  • Tomcat SSL Port: For the web UI when using SSL connections
Step 5: Choose the port numbers

The Service Manager uses a MySQL database to store the data collected. The next screen allows you to choose between using the MySQL database bundled with MEM or an existing one. Using the bundled instance has the advantage that MEM will configure it automatically and upgrades can be easier, however it will mean running two MySQL instances if you already have MySQL installed. For a test instance using the bundled instance also has the advantage that it’s easy to uninstall the whole installation again, so we will use the bundled instance in this example:

Step 6: Choose whether to use the bundled MySQL database or an existing one

The final step of the pre-installation configuration is to choose the username and password to use for the connection to the MySQL database. This is one of the usernames and passwords you were earlier reminded to keep track of:

Step 7: Specify username and password for the Service Manager to connect to the MySQL database storing the data collected through the monitoring

Next a note that because we have chosen to install the Service Manager as a non-root user, it will not be started automatically when the server is restarted:

Step 8: Info that MEM will not start automatically when not installed as root

Ready to Install:

Step 9: Configuration completed – ready to install

The Service Manager is now being installed – this will take a little while as it both includes copying all the files in place as well as configuring and starting the web server and the bundled MySQL database:

Step 10: Installation is in progress

MEM includes an uninstall script if you need to remove the Service Manager again:

Step 11: Files are now installed

To improve security, MEM 3.0 uses SSL by default. The installation process adds a self-signed certificate, but you can choose to install your own certificate later:

Step 12: Information that MEM is using SSL with a self-signed certificate by default

The installer is now done and it is time to launch the UI:

Step 13: Ready to launch the Service Manager

Unattended Install

If you go through the installation process often, it is more convenient to use the unattended installation as it allows you to automate the installation. To perform the same installation as above using the unattended installation you can execute:
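A sketch of such a command – the installer file name and all values are examples, and the option names should be verified against the output of the installer's --help option:

    shell$ ./mysqlmonitor-3.0.x.y-linux-x86-64bit-installer.bin \
               --mode unattended --unattendedmodeui minimal \
               --installdir /home/myuser/mysql/enterprise/monitor \
               --tomcatport 18080 --tomcatshutdownport 18005 \
               --tomcatsslport 18443 \
               --mysql_installation_type bundled --dbport 13306 \
               --adminuser service_manager --adminpassword mypassword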

As several of the values are the default ones, you can skip some of the options, but they are included here for completeness.

When the above command completes, continue with the post-installation configuration as you would have done using the installation wizard.

Post-Installation Configuration

Once the Service Manager UI has opened in your browser there is a little post-installation configuration to take care of:

  • You need to create an admin user for the web UI
  • You need to create a user agents can use if you choose to install any agents
  • You can choose whether to have the Service Manager automatically check for updates and for how long to keep historical data
Step 14: Post installation configuration

The last step is to choose your timezone and which locale to use in the user interface:

Step 15: Choose timezone and the locale

After this you are ready to play around with the monitor. If you choose to install agents, the steps are similar.

Reducing Memory Footprint

One thing to be aware of though is that the default configuration used by the Service Manager is aimed at a small to medium production installation. If you for example installed MEM in a virtual machine or computer with limited memory available, the default configuration may not be optimal.

With the installation above just completed, the memory usage of the web server (Tomcat) and the bundled MySQL database is:
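One way to measure it is to look at the resident set size (RSS) of the java (Tomcat) and mysqld processes, for example:

    shell$ ps -C java,mysqld -o rss,vsz,comm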

So around 1.5G resident memory. That is without adding any additional agents and/or MySQL instances to monitor.

So when I use MEM for testing, the first thing I do is to change a few configuration values to reduce the memory usage. The configuration options I change are located in two files (paths are given relative to the installation directory):

  • mysql/my.cnf – options related to the bundled MySQL database
  • apache-tomcat/bin/setenv.sh – options related to the web server

mysql/my.cnf

For the MySQL database I change two settings:

  • innodb_buffer_pool_size – this is by default 768M, but if you only monitor a couple of MySQL instances, something like 100M is enough
  • table_definition_cache – this is by default 2048. One side effect of the value being so large is that the auto-sizing of the Performance Schema considers the installation a large instance, so the Performance Schema tables are made large. Reducing the value to 800 is enough for testing and will reduce the memory usage of the Performance Schema by several hundred megabytes.

So my mysql/my.cnf looks something like this after the changes:
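(Only the two changed settings are shown; the rest of the bundled file is left as installed.)

    [mysqld]
    innodb_buffer_pool_size = 100M
    table_definition_cache  = 800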

apache-tomcat/bin/setenv.sh

It is only necessary to edit one line in the Tomcat configuration file – the default settings are:
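(Abbreviated to the relevant flags; the actual line contains additional options which are left unchanged.)

    JAVA_OPTS="-Xmx768M -Xms768M -XX:MaxPermSize=512M ..."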

I change that to:
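(Again abbreviated to the changed flags.)

    JAVA_OPTS="-Xmx256M -Xms256M -XX:MaxPermSize=200M ..."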

I.e. the three settings I have changed are:

  • -Xmx (maximum heap size) from 768M to 256M
  • -Xms (minimum heap size) from 768M to 256M
  • -XX:MaxPermSize from 512M to 200M

Enabling the New Configurations

It requires restarting the Service Manager to make the new configuration take effect. You can do the restart by going into the installation directory and executing:
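(mysqlmonitorctl.sh is the control script installed with the Service Manager; the installation directory is an example.)

    shell$ cd /home/myuser/mysql/enterprise/monitor
    shell$ ./mysqlmonitorctl.sh restart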

The memory usage is now somewhat smaller: