Who Holds the Metadata Lock? MySQL 5.7.3 Brings Help

In MySQL 5.5, metadata locks were introduced to ensure consistency for some operations that previously could cause problems. For example, if a transaction SELECTed from a table and another connection then changed the structure of the table before the transaction was committed, the two operations would be written to the binary log in reverse order, which could prevent replaying the binary log.

However there was no way to see who held metadata locks. SHOW PROCESSLIST would show who was waiting for metadata locks, and often you could guess who held them, but that was not always possible. Consider for example the following:
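
The original processlist output is not preserved here, so below is a reconstructed example of the kind of situation meant – the connection ids, times, and statements are made up for illustration:

+----+------+-----------+-------+---------+------+---------------------------------+-----------------------------------+
| Id | User | Host      | db    | Command | Time | State                           | Info                              |
+----+------+-----------+-------+---------+------+---------------------------------+-----------------------------------+
|  3 | root | localhost | world | Sleep   |   42 |                                 | NULL                              |
|  4 | root | localhost | world | Query   |   37 | Waiting for table metadata lock | ALTER TABLE City COMMENT 'test'   |
|  5 | root | localhost | world | Query   |   31 | Waiting for table metadata lock | SELECT * FROM City WHERE ID = 130 |
+----+------+-----------+-------+---------+------+---------------------------------+-----------------------------------+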

In this case it happens to be the process with Id = 3 that holds the metadata lock, but that is not obvious from the above output.

In MySQL 5.7.3 this has changed. There is a new Performance Schema table, metadata_locks, which will tell you which metadata locks are held and which are pending. This can be used to answer the question in a case like the one above.

First, instrumentation of metadata locks must be enabled. To do that you must enable the wait/lock/metadata/sql/mdl instrument in setup_instruments. Additionally the global_instrumentation consumer must be enabled in setup_consumers.

Currently the wait/lock/metadata/sql/mdl instrument is not enabled by default. I have created a feature request to consider enabling it by default, but obviously whether that will happen also depends on the performance impact.

To enable metadata lock instrumentation:
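
A minimal sketch – the instrument and consumer names are the ones given above (the consumer update is only needed if it has been disabled):

UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME = 'wait/lock/metadata/sql/mdl';

UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME = 'global_instrumentation';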

Note: The global_instrumentation consumer is enabled by default.

With the metadata lock instrumentation enabled, it is now easy to answer who holds the metadata lock (I’ve excluded the connection’s own metadata locks here as I’m only interested in the metadata lock contention between the other connections):
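
The original query and output are not preserved; a sketch of a query along those lines, using the 5.7 metadata_locks and threads tables:

SELECT ml.OBJECT_TYPE, ml.OBJECT_SCHEMA, ml.OBJECT_NAME,
       ml.LOCK_TYPE, ml.LOCK_STATUS, t.PROCESSLIST_ID
  FROM performance_schema.metadata_locks ml
       INNER JOIN performance_schema.threads t
          ON t.THREAD_ID = ml.OWNER_THREAD_ID
 WHERE t.PROCESSLIST_ID <> CONNECTION_ID();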

So in this case there is a metadata lock GRANTED to the connection with processlist id 3, whereas the other connections have a PENDING lock status for the metadata lock on the City table.

Install MySQL Enterprise Monitor (MEM) 3.0 Using Limited Resources

MySQL Enterprise Monitor (MEM) is the monitoring solution offered as part of MySQL Enterprise Edition and MySQL Cluster Carrier Grade Edition. In this blog, I will not go into details of the features of MEM, but rather focus on the case where you want to install MEM 3.0 to try it out on your personal desktop or laptop.

A trial version (the full product, but it can only be used for 30 days) is available from Oracle’s Software Delivery Cloud. If you are a MySQL customer, it is recommended that you download MEM from My Oracle Support (MOS).

Once you have downloaded and extracted the installation binaries, you can start the installation. You have the choice between a GUI, a text-based, and an unattended install. Here I will use the GUI install, but if you want to try one of the other install options, launch the installer with the --help option to get more information.

MEM 3.0 consists of two parts:

  • The Service Manager
  • Agents

Only the Service Manager is required, so that will be the first thing to install. As this is intended to show how you can test MEM, I will use a normal user rather than root for the installation.

It is also recommended to take a look at the MEM 3.0 Reference Manual.

Installation Wizard

When launching the installer the first screen asks which language to use – you have the choice between English and Japanese:

Step 1: Choose language

Next is an information screen reminding you to keep track of the usernames and passwords entered during the installation process:

Step 2: Remember the usernames and passwords entered during the installation process

The actual install process is now ready to start:

Step 3: Ready to start the actual install process

The following steps are to configure the installation – the first of which is to decide where to install the Service Manager:

Step 4: Choose the installation directory

The Service Manager will need three ports:

  • Tomcat Server Port: For the web UI when using non-SSL connections
  • Tomcat Shutdown Port: Used internally to shutdown the web server
  • Tomcat SSL Port: For the web UI when using SSL connections

Step 5: Choose the port numbers

The Service Manager uses a MySQL database to store the data collected. The next screen allows you to choose between using the MySQL database bundled with MEM and an existing one. Using the bundled instance has the advantage that MEM will configure it automatically and upgrades can be easier; however, it will mean running two MySQL instances if you already have MySQL installed. For a test installation the bundled instance also has the advantage that it is easy to uninstall the whole installation again, so we will use the bundled instance in this example:

Step 6: Choose whether to use the bundled MySQL database or an existing one

The final step of the pre-installation configuration is to choose the username and password to use for the connection to the MySQL database. This is the username and password that you were reminded earlier to make sure you remember:

Step 7: Specify username and password for the Service Manager to connect to the MySQL database storing the data collected through the monitoring

Next is a note that because we have chosen to install the Service Manager as a non-root user, it will not be started automatically when the server is restarted:

Step 8: Info that MEM will not start automatically when not installed as root

Ready to Install:

Step 9: Configuration completed – ready to install

The Service Manager is now being installed – this will take a little while as it includes both copying all the files into place and configuring and starting the web server and the bundled MySQL database:

Step 10: Installation is in progress

MEM includes an uninstall script if you need to remove the Service Manager again:

Step 11: Files are now installed

To improve security, MEM 3.0 uses SSL by default. The installation process adds a self-signed certificate, but you can choose to install your own certificate later:

Step 12: Information that MEM is using SSL with a self-signed certificate by default

The installer is now done and it is time to launch the UI:

Step 13: Ready to launch the Service Manager

Unattended Install

If you go through the installation process often, it is more convenient to use the unattended installation as it allows you to automate the installation. To perform the same installation as above using the unattended installation you can execute:
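
The original command is not preserved here; a sketch using the BitRock-style installer options (the installer file name is a placeholder, and the option names should be verified against the output of --help for your version):

./mysqlmonitor-3.0.x.y-linux-x86-64bit-installer.bin --mode unattended \
    --installdir /home/user/mmm \
    --tomcatport 18080 --tomcatshutdownport 18005 --tomcatsslport 18443 \
    --mysql_installation_type bundled --dbport 13306 \
    --adminuser service_manager --adminpassword mypassword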

As several of the values are the default ones, you can skip some of the options, but they are included here for completeness.

When the above command completes, continue with the post-installation configuration as you would have done using the installation wizard.

Post-Installation Configuration

Once the Service Manager UI has opened in your browser there is a little post-installation configuration to take care of:

  • You need to create an admin user for the web UI
  • You need to create a user agents can use if you choose to install any agents
  • You can choose whether to have the Service Manager automatically check for updates and for how long to keep historical data

Step 14: Post installation configuration

The last step is to choose your timezone and which locale to use in the user interface:

Step 15: Choose timezone and the locale

After this you are ready to play around with the monitor. If you choose to install agents, the steps are similar.

Reducing Memory Footprint

One thing to be aware of though is that the default configuration used by the Service Manager is aimed at a small to medium production installation. If you, for example, installed MEM in a virtual machine or on a computer with limited memory available, the default configuration may not be optimal.

With the installation above just completed, the memory usage of the web server (Tomcat) and the bundled MySQL database is:
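
The actual output is not reproduced here, but on Linux you can get the resident memory usage with something like (assuming GNU ps and that the two processes are the Tomcat java process and the bundled mysqld):

ps -C java,mysqld -o pid,rss,cmd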

So around 1.5G resident memory. That is without adding any additional agents and/or MySQL instances to monitor.

So when I use MEM for testing, the first thing I do is to change a few configuration values to reduce the memory usage. The configuration options I change are located in two files (paths are given relative to the installation directory):

  • mysql/my.cnf – options related to the bundled MySQL database
  • apache-tomcat/bin/setenv.sh – options related to the web server

mysql/my.cnf

For the MySQL database I change two settings:

  • innodb_buffer_pool_size – this is by default 768M, but if you only monitor a couple of MySQL instances, something like 100M is enough
  • table_definition_cache – this is by default 2048. One side effect of the value being so large is that the auto-sizing of the Performance Schema considers the installation as a large instance, so the Performance Schema tables are made large. Reducing the value to 800 is enough for testing and will reduce the memory usage of the Performance Schema by several hundred megabytes.

So my mysql/my.cnf looks something like this after the changes:
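
Only the two changed settings are shown – the rest of the file is left as generated by the installer:

[mysqld]
# Reduced from the default 768M - enough when monitoring a couple of instances
innodb_buffer_pool_size = 100M
# Reduced from the default 2048 to keep the Performance Schema auto-sizing down
table_definition_cache  = 800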

apache-tomcat/bin/setenv.sh

It is only necessary to edit one line in the Tomcat configuration file – the default settings are:
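
The line in question sets the JVM options – reconstructed here with only the three relevant settings (the real line contains additional options in place of the ellipsis):

JAVA_OPTS="-Xmx768M -Xms768M -XX:MaxPermSize=512M ..."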

I change that to:
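
Again, only the relevant part of the line is shown:

JAVA_OPTS="-Xmx256M -Xms256M -XX:MaxPermSize=200M ..."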

I.e. the three settings I have changed are:

  • -Xmx (maximum heap size) from 768M to 256M
  • -Xms (minimum heap size) from 768M to 256M
  • -XX:MaxPermSize from 512M to 200M

Enabling the New Configurations

Restarting the Service Manager is required to make the new configuration take effect. You can do the restart by going into the installation directory and executing:
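
MEM ships with a control script for this; a sketch (the script name is from the MEM installation layout – adjust the path to your installdir):

cd /home/user/mmm
./mysqlmonitorctl.sh restart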

The memory usage is now somewhat smaller:


MySQL Connect 2013: ps_tools

During my talk and hands on labs at MySQL Connect 2013 I used several views, functions, and procedures from ps_helper and ps_tools.

The slides from the two talks can be downloaded from Thanks For Attending MySQL Connect or from the MySQL Connect 2013 Content Catalogue.

You can read more about ps_helper on Mark Leith’s blog and you can download ps_helper from github.

To get ps_tools, download ps_tools.tgz from here. Once unpacked there is a ps_tools_5x.sql file for each of the versions supported. The tools presented at MySQL Connect were all based on MySQL 5.6 and 5.7. Note that several of the included tools are not particularly useful on their own but are more meant as utility functions for some of the other tools. The actual implementations are organised so they are in the subdirectory corresponding to the earliest version they apply to. For a few tools, such as ps_reset_setup(), there are multiple versions depending on the MySQL version they apply to.

Several of the tools have a help text at the top of the source file.

The main tools are:

  • Function ps_thread_id(): returns the thread id for a given connection. Specify NULL to get the thread id for the current connection. For an example, see the sketch after this list.
  • View ps_setup_consumers: similar to performance_schema.setup_consumers, but includes a column to display whether the consumer is effectively enabled. See also slide 10 from CON 5282.
  • Function substr_count(): See also A Couple of Substring Functions: substr_count() and substr_by_delim().
  • Function substr_by_delim(): See also A Couple of Substring Functions: substr_count() and substr_by_delim().
  • Function ps_account_enabled(): Check whether a given account is enabled according to performance_schema.setup_actors. For an example, see the sketch after this list.
  • View ps_accounts_enabled: Lists each account and whether it is enabled according to performance_schema.setup_actors.
  • Procedure ps_setup_tree_instruments(): Creates a tree displaying whether instruments are enabled. See Easier Overview of Current Performance Schema Setting for more details.
  • Procedure ps_setup_tree_actors_by_host(): Creates a tree displaying whether accounts are enabled, grouped by host. See Easier Overview of Current Performance Schema Setting for more details.
  • Procedure ps_setup_tree_actors_by_user(): Creates a tree displaying whether accounts are enabled, grouped by username. See Easier Overview of Current Performance Schema Setting for more details.
  • Procedure ps_setup_tree_consumers(): Creates a tree displaying whether consumers are enabled. See Easier Overview of Current Performance Schema Setting and slide 11 from CON 5282 for more details.
  • Procedure ps_dump_thread_stack(): This is the same as in ps_helper with one bug fix. See also slides 30-31 from CON 5282.
  • Procedure ps_enable_all(): Enable all consumers and instruments.
  • Procedure ps_reset_setup(): Reset consumers, instruments, actors, and threads to the defaults (without taking my.cnf into consideration).
  • View is_innodb_lock_waits: show information about waiting locks for InnoDB.
  • View is_engine_data: summary of the amount of data and indexes grouped by storage engine.
  • View ps_memory_by_thread_by_event_raw: the amount of memory used grouped by thread and event without any formatting and ordering.
  • View ps_memory_current_by_thread_by_event: The same as above but formatted output and ordered by current usage.
  • View ps_memory_high_by_thread_by_event: The same as above but formatted output and ordered by the high watermark.
  • Procedure schema_unused_routines(): Lists unused stored procedures and functions.
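
As the original examples for ps_thread_id() and ps_account_enabled() are not preserved, here is a hedged sketch – the ps_tools schema name and the (host, user) argument order are assumptions based on the descriptions above:

SELECT ps_tools.ps_thread_id(NULL) AS my_thread_id;
SELECT ps_tools.ps_account_enabled('localhost', 'root') AS is_enabled;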

Thanks For Attending MySQL Connect

MySQL Connect 2013 was held this past Saturday through Monday, and I would like to extend a big thank you to everyone who attended my sessions, talked to me, or otherwise took part in the conference.

I had two sessions as well as participated in a Birds of a Feather session with the Community and Support teams. The slides have been uploaded to the Content Catalog but they are not available for download from there yet. Until then you can download them from the links below:

The ps_helper views, procedures, and functions used in the above presentations can be downloaded from https://github.com/MarkLeith/dbahelper:

git clone https://github.com/MarkLeith/dbahelper

For ps_tools, I will follow up on this site with more information, although some of the tools can be found in Easier Overview of Current Performance Schema Setting. Note: the presentation uses the naming convention that Performance Schema tools are prefixed ps_ – that was not the case in the above blog, so e.g. ps_setup_tree_consumers is in the blog called setup_tree_consumers.

And again: thanks for attending MySQL Connect 2013 – hope to see you again next year.

Speaking at MySQL Connect 2013

It is hard to believe it is already closing in on a year since the last MySQL Connect, but it is true: it is time to start preparing again.

This year MySQL Connect will take place the weekend of 21-23 September, with the Monday dedicated to tutorials. As last year, MySQL Connect is part of Oracle OpenWorld and is hosted at the Hilton San Francisco Union Square.

I am fortunate enough this year to be taking part in three sessions:

  • Meet the MySQL Community and Support Teams [BOF2480]
    Join MySQL community and support team members in this BOF to ask all your questions as well as provide feedback, suggestions, and ideas related to the MySQL community, handling of bugs, and overall technical support at Oracle.

    This Birds-Of-a-Feather session will take place Saturday at 4:30 PM.

  • Making the Performance Schema Easier to Use [CON5282]
    The Performance Schema can seem daunting at first, with the vast amount of data available. This session focuses on tools, such as ps_helper views and stored programs, that make it easier to get started with the Performance Schema and perform common tasks. The presentation includes examples of how ps_helper can be used to simplify checking the current configuration, changing the configuration, and investigating the usage of the server.

    This talk will take place Sunday at 11:00 AM.

  • Improving Performance with MySQL Performance Schema [HOL9733]
    The Performance Schema feature of MySQL is MySQL’s gateway for looking into the engine room. It enables you not only to discover what is going on in the internals but also to get detailed information about the current connections and some historical data. MySQL 5.6, which is now GA, introduces significant enhancements to Performance Schema. This hands-on lab gives you the opportunity to use Performance Schema, going through the steps from initial configuration and high-level summaries to detailed wait events.

    This Hands-On-Labs session will take place Sunday at 2:00 PM.

However, those three sessions are just a small part of the agenda for MySQL Connect. There are a total of more than 80 sessions from Oracle developers, engineers, and staff as well as users, third-party developers, and more. See also Bertrand Matthelié’s Top 10 Reasons to Attend MySQL Connect.

See you all there.


New Member of the Cluster API Family: The Native Node.js Connector

MySQL Cluster 7.3 went GA yesterday and with it came a new member of the MySQL Cluster API family: mysql-js – a native Node.js connector. mysql-js uses the NDB API to connect directly to the data nodes, which improves performance compared to executing queries through the MySQL nodes.

For an introduction to mysql-js and installation instructions I recommend taking a look at the official API documentation and Andrew Morgan’s blog; the latter also has an overview of the new features in MySQL Cluster 7.3 in general.

To get a feel for how the new API works, I went ahead and created a small test program that will take one or more files with delimited data (e.g. similar to what you get with SELECT … INTO OUTFILE) and insert the data into a table. I have tried to keep things simple. This means that no external modules other than mysql-js are used, not very much error handling has been included, the reading and parsing of the data files could be done much better, performance has not been considered, etc. – but I would rather focus on the usage of mysql-js.

The complete example can be found in the file nodejs_tabinsert.js. The following will go through the important bits.

Preparation

The first part of the script is not really specific to mysql-js, so I will go lightly over that. A few of the arguments deserve a couple of extra words:

  • --log-level: when set to debug or detail some output with information about what happens inside mysql-js is logged. This can be useful to learn more about the module or for debugging.
  • --basedir: this is the same as the basedir option for mysqld – it sets where MySQL has been installed. It is used for loading the mysql-js module. Default is /usr/local/mysql.
  • --database and --table: which table to insert the data into. The default database is test, but the table name must always be specified.
  • --connect-string: the script connects directly to the cluster nodes, so it needs the NDB connect-string similar to other NDB programs. The default is localhost:1186.
  • --delimiter: the delimiter used in the data files. The default is a tab (\t).

Setting Up mysql-js

With all the arguments parsed, it is now possible to load the mysql-js module:
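
The original code is not preserved here; a minimal sketch of the idea, where the require path under the installation directory and the ndb_connectstring property name are assumptions (check the API documentation for your version):

// basedir and connectString come from the parsed arguments
var nosql = require(basedir + '/share/nodejs');

var properties = nosql.ConnectionProperties('ndb');
properties.ndb_connectstring = connectString;  // e.g. 'localhost:1186'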

The unified_debug class is part of mysql-js and makes it possible to get debug information from inside mysql-js logged to the console.

The nosql.ConnectionProperties() method will return an object with the default settings for the chosen adapter – in this case ndb. After that we can change the settings where we do not want the defaults. It is also possible to use an object with the settings as the argument instead of ‘ndb’; that requires setting the name of the adapter using the “implementation” property. Currently the two supported adapters are ‘ndb’ (as in this example) and ‘mysql’, which connects to mysqld instead. The ‘mysql’ adapter requires node-mysql version 2.0 and also supports InnoDB.

As the ‘ndb’ adapter connects directly to the cluster nodes, no authentication is used. This is the same as for the NDB API.

Callbacks and Table Mapping Constructor

We will load each file inside a transaction. The trxCommit() callback will verify that the transaction was committed without error and then closes the session.

The onInsert callback checks whether each insert worked correctly. When all rows for the session (file) have been inserted, it commits the transaction.

The tableRow is the constructor later used for the table mapping. It is used to set up the object with the data to be inserted for that row. tableMeta is a TableMetaData object with information about the table we are inserting into.
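
A condensed sketch of the two callbacks – how the session object and the per-session row counter (rowsLeft) reach the callbacks is glossed over here and differs from the real script:

// Verify the commit and close the session for this file
function trxCommit(err, session) {
    if (err) { console.error(err); process.exit(1); }
    session.close(function () {});
}

// Check each insert; commit once the last row for the session is in
function onInsert(err, session) {
    if (err) { console.error(err); process.exit(1); }
    rowsLeft -= 1;
    if (rowsLeft === 0) {
        session.currentTransaction().commit(trxCommit, session);
    }
}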

The Session

This is where the bulk of the work is done. Each file will have its own session.

The onSession function is a callback that is used when creating (opening) the sessions.

The first step is to get the meta data for the table. As all data is inserted into the same table, in principle we could reuse the same meta data object for all sessions, but the getTableMetaData() method is a method of the session, so it cannot be fetched until this point.

Next a transaction is started. We get the transaction with the session.currentTransaction() method. This returns an idle transaction which can then be started using the begin() method. As such there is no need to store the transaction in a variable; as can be seen in the trxCommit() and onInsert() callbacks above, it is also possible to call session.currentTransaction() repeatedly – it will keep returning the same transaction object.

The rest of the onSession function processes the actual data. The insert itself is performed with the session.persist() method.
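
A trimmed-down sketch of onSession() – reading and splitting the file is omitted, and the method and constructor usage follows the descriptions above rather than the exact original code:

function onSession(err, session) {
    if (err) { console.error(err); process.exit(1); }
    session.getTableMetaData(databaseName, tableName, function (err, tableMeta) {
        // Start the transaction for this file
        session.currentTransaction().begin();
        // Insert each row - onInsert() commits after the last one
        lines.forEach(function (line) {
            session.persist(new tableRow(tableMeta, line), onInsert, session);
        });
    });
}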

Edit: using a session this way to insert the rows one by one is obviously not very efficient as it requires a round trip to the data nodes for each row. For bulk inserts the Batch class is a better choice; however, I chose a Session to demonstrate using multiple updates inside a transaction.

Creating the Sessions

First the table mapping is defined. Then a session is opened for each file. Opening a session means connecting to the cluster, so it can be a relatively expensive step.
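
A sketch of this step – TableMapping and openSession are the entry points described in the API documentation, with the details simplified:

// Map the target table to the tableRow constructor
var mapping = new nosql.TableMapping(databaseName + '.' + tableName);
mapping.applyToClass(tableRow);

// Open one session per data file - onSession() does the actual work
files.forEach(function (file) {
    nosql.openSession(properties, mapping, onSession);
});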

Running the Script

To test the script, the table t1 in the test database will be used:
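
The original table definition is not preserved; any small table using the NDBCluster storage engine will do, for example:

CREATE TABLE test.t1 (
  id  int unsigned NOT NULL PRIMARY KEY,
  val varchar(36)
) ENGINE=NDBCluster;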

For the data files, I have been using:

t1a.txt:

t1b.txt:

Running the script:

One important observation is that even though the session for t1a.txt is created before the one for t1b.txt, the t1b.txt file ends up being inserted first. Actually, if the inserts were using auto-increments, it would be possible to see that the assignment of auto-increment values will in general alternate between rows from t1b.txt and t1a.txt. The lesson: in node.js do not count on knowing the exact order of operations.

I hope this example will spark your interest in mysql-js. Feedback is most welcome – both bug reports and feature requests can be reported at bugs.mysql.com.

Easier Overview of Current Performance Schema Setting

While I prepared for my Hands-On Lab about the Performance Schema at MySQL Connect last year, one of the things that occurred to me was how difficult it was quickly getting an overview of which consumers, instruments, actors, etc. are actually enabled. For the consumers things are made more complicated as the effective setting also depends on parents in the hierarchy. So my thought was: “How difficult can it be to write a stored procedure that outputs a tree of the hierarchies.” Well, simple enough in principle, but trying to be general ended up making it into a lengthy project and as it was a hobby project, it often ended up being put aside for more urgent tasks.

However, here around eight months later, it is starting to shape up. There is definitely still work to be done; e.g. creating the full tree and outputting it in text mode (more on modes later) takes around one minute on my test system – granted, I am using a standard laptop and MySQL is running in a VM, so it is nothing sophisticated.

The current routines can be found in ps_tools.sql.gz – it may later be merged into Mark Leith’s ps_helper to try to keep the Performance Schema tools collected in one place.

Note: So far the routines have only been tested on Linux with MySQL 5.6.11. In particular, line endings may give problems on Windows and Mac.

Description of the ps_tools Routines

The current status is two views, four stored procedures, and four functions – not including several functions and procedures that do all the hard work:

  • Views:
    • setup_consumers – Displays whether each consumer is enabled and whether data for the consumer actually will be collected, based on the hierarchy rules described in Pre-Filtering by Consumer in the Reference Manual.
    • accounts_enabled – Displays whether each account defined in the mysql.user table has instrumentation enabled based on the rows in performance_schema.setup_actors.
  • Procedures:
    • setup_tree_consumers(format, color) – Create a tree based on setup_consumers displaying whether each consumer is effectively enabled. The arguments are:
      • format is the output format and can be one of (see also below):
        • Text: Left-Right
        • Text: Top-Bottom
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the consumer names when using a Text format (ignored for Dot outputs).
    • setup_tree_instruments(format, color, type, only_enabled, regex_filter) – Create a tree based on setup_instruments displaying whether each instrument is enabled. The tree is created by splitting the instrument names at each /. The arguments are:
      • format is the output format and can be one of:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the instrument names when using a Text format (ignored for Dot outputs).
      • type – whether to base the tree on the ENABLED or TIMED column of setup_instruments.
      • only_enabled – if TRUE only the enabled instruments are included.
      • regex_filter – if set to a non-empty string only instruments that match the regex will be included.
    • setup_tree_actors_by_host(format, color) – Create a tree of each account defined in mysql.user and whether they are enabled; grouped by host. The arguments are:
      • format is the output format and can be one of:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the names when using a Text format (ignored for Dot outputs).
    • setup_tree_actors_by_user(format, color) – Create a tree of each account defined in mysql.user and whether they are enabled; grouped by username. The arguments are:
      • format is the output format and can be one of:
        • Text: Left-Right
        • Dot: Left-Right
        • Dot: Top-Bottom
      • color is whether to add bash color escape sequences around the names when using a Text format (ignored for Dot outputs).
  • Functions:
    • is_consumer_enabled(consumer_name) – Returns whether a given consumer is effectively enabled.
    • is_account_enabled(host, user) – Returns whether a given account (host, user) is enabled according to setup_actors.
    • substr_count(haystack, needle, offset, length) – The number of times a given substring occurs in a string. A port of the PHP function of the same name.
    • substr_by_delim(set, delimiter, pos) – Returns the Nth element from a delimited string.

The two functions substr_count() and substr_by_delim() were also described in an earlier blog.

The formats for the four stored procedures consist of two parts: whether it is Text or Dot and the direction. Text is a tree that can be viewed directly either in the mysql command line client (coloured output not supported) or the shell (colours supported for bash). Dot will output a DOT graph file in the same way as dump_thread_stack() in ps_helper. The direction is as defined in the DOT language, so e.g. Left-Right will have the first level furthest to the left, then add each new level to the right of the parent level.

Examples

As the source code – including comments – is more than 1600 lines, I will not discuss it here, but rather go through some examples.

setup_tree_consumers

Using the coloured output:

(Image: setup_tree_consumers_tb)

or the same using a non-coloured output:

(Image: setup_tree_consumers_lr)

setup_tree_instruments

(Image: setup_tree_instruments_lr)

Here a small part of the tree is selected using a regex.

setup_tree_actors_%

With only root@localhost and root@127.0.0.1 enabled, the outputs of setup_tree_actors_by_host and setup_tree_actors_by_user give respectively:

(Images: setup_tree_actors_by_host_lr and setup_tree_actors_by_user_lr)

DOT Graph of setup_instruments

The full tree of setup_instruments can be created using the following sequence of steps (I am using graphviz to get support for dot files):
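
The original commands are not preserved; a sketch of how the steps can look, assuming the procedures are installed in a ps_tools schema and using the argument order described above:

# Generate the DOT file with setup_tree_instruments()
mysql -N -B -e "CALL ps_tools.setup_tree_instruments('Dot: Left-Right', FALSE, 'ENABLED', FALSE, '')" > setup_tree_instruments.dot
# Render the graph to a PNG file using Graphviz
dot -Tpng setup_tree_instruments.dot -o setup_tree_instruments.png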

(Image: setup_tree_instruments_dot_lr_snip)

The full output is rather large (6.7M); if you want to see it, you can get to it at http://mysql.wisborg.dk/wp-content/uploads/2013/05/setup_tree_instruments_dot_lr.png.

Views
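
The two views can be queried like any other view – a sketch, assuming they are installed in a ps_tools schema:

SELECT * FROM ps_tools.setup_consumers;
SELECT * FROM ps_tools.accounts_enabled;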

Conclusion

There is definitely more work to do on making the Performance Schema easier to access. ps_helper and ps_tools are a great start to what I am sure will be an extensive library of useful diagnostic queries and tools.

Changing the Size of the InnoDB Log Files In MySQL 5.6

In MySQL 5.5 and earlier, the steps to resize the InnoDB log files were a bit involved and, for example, included manually moving the log files out of the way, as InnoDB would only create new files if none existed.

In MySQL 5.6, a feature that is not talked about much is the support for resizing the log files in a way much more similar to changing other settings in MySQL. Now you simply update your MySQL configuration file and restart MySQL.

Let us look at an example. In MySQL 5.5 and earlier the total size of the InnoDB log files has to be less than 4G, so one way of staying within this limit is to have two files each 2047M large:
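
For example in the MySQL configuration file:

[mysqld]
innodb_log_files_in_group = 2
innodb_log_file_size      = 2047M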

Now update the configuration file to take advantage of the fact that MySQL 5.6 allows much larger InnoDB log files; the actual limit is a total size of 512G, but here I will use two files each 4G large:
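
The updated settings:

[mysqld]
innodb_log_files_in_group = 2
innodb_log_file_size      = 4G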

Restarting MySQL will then automatically resize the log files, and the error log will show something like:
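
Paraphrased from memory of the 5.6 messages – the exact wording, LSN, and file names will differ:

InnoDB: Resizing redo log from 2*131072 to 2*262144 pages, LSN=1626017
InnoDB: Starting to delete and rewrite log files.
InnoDB: Setting log file ./ib_logfile101 size to 4096 MB
InnoDB: Setting log file ./ib_logfile1 size to 4096 MB
InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
InnoDB: New log files created, LSN=1626017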

One of the other requirements when changing the log file size in MySQL 5.5 and earlier was that innodb_fast_shutdown must be set to 0 or 1 (the default value is 1). What happens in MySQL 5.6 if you have innodb_fast_shutdown = 2 and try to change the log size? Well now InnoDB handles that as well – InnoDB will do its “crash recovery” and then resize the log files:

And a look into the error log for the restart (setting the size back to 2 times 2047M):

While it is not something that makes an impact during normal operations, it helps make the life of a DBA (or Support engineer) a little easier.

Changes to Options and Variables in MySQL 5.6

With MySQL 5.6 having just gone GA, I thought it would be good to take a look at the changes in options and variables that come with the new release.

First of all, several of the existing options have got new default values. As James Day has already written a good post about that on his blog, I will refer to that instead of going through the changes. For a general overview of the new features and improvements, the recent blogs by Rob Young and Peter Zaitsev together with the What Is New in MySQL 5.6 page in the Reference Manual are good places to start.

Instead I will focus a little on the new options that have been introduced. The first thing to note is that in the current 5.5 release (5.5.30) SHOW GLOBAL VARIABLES returns 323 rows, whereas in 5.6 GA (5.6.10) it returns 440 rows.
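
An equivalent way to get the counts, as both versions also expose the variables through the Information Schema:

SELECT COUNT(*) FROM information_schema.GLOBAL_VARIABLES;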

Note: this post is written using the Enterprise versions with the semi-synchronous replication plugins enabled in both versions plus the memcached and password validation plugins in 5.6.

Actually the number of new variables is not 117 but 129 as 12 variables have been removed in 5.6.

So what are all of these 129 new variables good for? Actually there is a good chance that you will never need to touch many of them: either the default value is good enough, they have simply been added to expose the value of options already present in 5.5 but not shown through SHOW GLOBAL VARIABLES, or they are for features you are not using. If we try to group the new variables, the distribution comes out as:

Feature                                   New Variables
----------------------------------------  -------------
Global Transaction IDs                                5
Other Replication                                    19
Memcached Plugin                                      6
Validate Password Plugin                              6
Other Security Related                                5
InnoDB                                               54
Optimizer Traces                                      5
Performance Schema                                   15
Exposing Previously Existing Variables                2
Other                                                12

New Variables in MySQL 5.6

The 54 new InnoDB variables span a number of different changes and additions such as:

  • New adaptive flushing algorithm
  • Buffer Pool dumps to disk and restore
  • Support for additional checksum algorithms
  • Improvements for compression
  • Full text indexes
  • New monitoring options (the information_schema.INNODB_METRICS table)
  • Configurable page size
  • Persistent statistics
  • Undo logs improvements
  • And more …

For reference I have added a list of the new variables with the release in which they were introduced and the default value (additionally innodb_print_all_deadlocks is also new, but that was also added to 5.5.30):

Note that while the default values are for an installation on Linux, most will also apply to other platforms. See also the Reference Manual.

For good measure here is a list of the variables that have been removed in 5.6:

  • engine_condition_pushdown – deprecated in 5.5.3, use optimizer_switch instead.
  • have_csv – use SHOW ENGINES or information_schema.ENGINES instead.
  • have_innodb – use SHOW ENGINES or information_schema.ENGINES instead.
  • have_ndbcluster – use SHOW ENGINES or information_schema.ENGINES instead.
  • have_partitioning – use SHOW ENGINES or information_schema.ENGINES instead.
  • log – deprecated in 5.1.29, use general_log instead.
  • log_slow_queries – deprecated in 5.1.29, use slow_query_log instead.
  • max_long_data_size – deprecated in 5.5.11, is now automatically controlled by max_allowed_packet.
  • rpl_recovery_rank – previously unused.
  • sql_big_tables – hasn’t really been needed since 3.23.2.
  • sql_low_priority_updates – Use low_priority_updates instead.
  • sql_max_join_size

Merry Christmas

+------------------------------------+
| Greeting                           |
+------------------------------------+
| Merry Christmas and Happy New Year |
| See you in 2013                    |
+------------------------------------+

mysql is the MySQL Command Line Client – my favourite tool when using MySQL. It is available on all platforms where you can install MySQL and is part of a standard installation. Running mysql --help shows that there are many options, including -H, which causes the output to be HTML formatted. Ok, I cheated here and created the table above manually as my blog editor escapes HTML code – the actual output looked like:
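
Reconstructed from the greeting above – the mysql client writes the -H output as one line of simple HTML, so treat the exact markup as an approximation:

<TABLE BORDER=1><TR><TH>Greeting</TH></TR><TR><TD>Merry Christmas and Happy New Year</TD></TR><TR><TD>See you in 2013</TD></TR></TABLE>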

So a Happy Christmas and Happy New Year to all MySQL users.