A Case for Write Optimizations in MySQL

Posted On November 21, 2011 | By Zardosht Kasheff | 1 comment

As a storage engine developer, I am excited for MySQL 5.6. Looking at http://dev.mysql.com/tech-resources/articles/whats-new-in-mysql-5.6.html, there has been plenty of work done to improve the performance of reads in MySQL for all storage engines (provided they take advantage of the new APIs).

What would be great to add is API improvements to increase the performance of writes, and more specifically, updates. For many applications that perform updates, such as applications that do click counting or impression counting, there are significant opportunities for improving write performance.

Take the following example of click counting (or impression counting). You have a website and want to save the number of times links on your website have been clicked. Your table may look something like:


create table num_clicks (link_id int primary key, num_clicks int);

To update the number of clicks, you do something like:


insert into num_clicks (link_id, num_clicks) values (LINK_ID, 1) on duplicate key update num_clicks = num_clicks + 1;

With MySQL as it currently works, this is slower than it needs to be, as I explained here. At a high level, the reason is that MySQL forces the storage engine to check the table for an existing row with that link_id. If a row is found, MySQL performs the increment outside the storage engine and passes the new row back to the storage engine for an update. That check incurs a disk seek, which is very costly in terms of latency: disks can do only hundreds of seeks per second. NoSQL solutions based on B-trees are similarly limited and can't be significantly accelerated, because their updates likewise incur disk I/O.
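To make the cost concrete, here is a toy C++ simulation of the read-modify-write cycle just described. This is not MySQL source; the names only mirror the handler calls discussed below, and each index lookup is modeled as one disk seek:

```cpp
#include <cassert>
#include <map>

// Toy model of the table: link_id -> num_clicks. Invented names, not
// real MySQL code.
struct ToyTable {
    std::map<int, int> rows;
    int seeks = 0;  // every index lookup is modeled as one disk seek

    bool index_read(int link_id, int *out) {
        ++seeks;  // the engine must touch the leaf page to answer the probe
        auto it = rows.find(link_id);
        if (it == rows.end()) return false;
        *out = it->second;
        return true;
    }
    void write_row(int link_id, int v)  { rows[link_id] = v; }
    void update_row(int link_id, int v) { rows[link_id] = v; }
};

// What the server does today for "insert ... on duplicate key update":
// probe for the row, increment it above the storage engine, write it back.
void click(ToyTable &t, int link_id) {
    int current;
    if (t.index_read(link_id, &current))
        t.update_row(link_id, current + 1);  // increment done by MySQL, not the engine
    else
        t.write_row(link_id, 1);
}

// Number of simulated seeks for n clicks on one link.
int seeks_for_clicks(int n, int link_id) {
    ToyTable t;
    for (int i = 0; i < n; ++i) click(t, link_id);
    return t.seeks;
}
```

Even though the new value never depends on anything the server needs to see, every single click pays for a lookup.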

However, with some changes to MySQL, a storage engine could take advantage of this knowledge to improve its algorithms. All that's needed is for the storage engine to be told that the user wants to perform this insert-or-increment as a single operation, rather than receiving the individual handler calls write_row, index_read, and update_row, as in the current design. In short, what's needed is a way for the storage engine layer to apply updates on its own.
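One way to picture the API change is the following hypothetical sketch. None of these signatures exist in MySQL; they only illustrate handing the engine the whole insert-or-increment operation in one call instead of the write_row/index_read/update_row sequence:

```cpp
#include <cassert>
#include <map>

// Hypothetical operation descriptor; invented for illustration.
struct UpsertOp {
    int key;           // the link_id
    int insert_value;  // value to insert if the row is absent (1 here)
    int increment;     // increment to apply if the row exists
};

struct ToyHandler {
    std::map<int, int> rows;
    int handler_calls = 0;  // round trips between server and engine

    // Today's protocol: three separate entry points.
    bool index_read(int k, int *v) {
        ++handler_calls;
        auto it = rows.find(k);
        if (it == rows.end()) return false;
        *v = it->second;
        return true;
    }
    void write_row(int k, int v)  { ++handler_calls; rows[k] = v; }
    void update_row(int k, int v) { ++handler_calls; rows[k] = v; }

    // Proposed: one call carrying the whole operation, leaving the
    // engine free to apply it however (and whenever) it likes.
    void upsert_row(const UpsertOp &op) {
        ++handler_calls;
        auto it = rows.find(op.key);
        if (it == rows.end()) rows[op.key] = op.insert_value;
        else it->second += op.increment;
    }
};

// Current server behavior: a probe plus a write per click.
void click_old(ToyHandler &h, int link_id) {
    int cur;
    if (h.index_read(link_id, &cur)) h.update_row(link_id, cur + 1);
    else h.write_row(link_id, 1);
}

// With the proposed API: always exactly one handler call.
void click_new(ToyHandler &h, int link_id) {
    h.upsert_row(UpsertOp{link_id, 1, 1});
}
```

Both paths produce identical counts, but the single-call path halves the handler traffic and, more importantly, never forces the engine to read the row first.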

This change can help all storage engines. Although I am not an expert in MySQL Cluster, I imagine reducing these individual handler calls would also help MySQL Cluster avoid network hops to retrieve information. For in-memory databases, performance may increase simply from the reduced number of handler calls. InnoDB can potentially use its insertion buffer to store the "insert ... on duplicate key update" operation, giving it the same boost that insertions into secondary keys get. For TokuDB, we estimate that these types of updates, aided by this additional information, could run much faster. In future posts, I will expand on how we think TokuDB can do this.
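As a sketch of the kind of deferral the paragraph above alludes to (my speculation, not TokuDB or InnoDB source), an engine that receives the operation itself can buffer increment messages in memory and merge them into the row only when that row is next read, much as InnoDB's insert buffer defers secondary-index writes:

```cpp
#include <cassert>
#include <map>

// Speculative sketch with invented names: an engine that buffers upsert
// messages instead of applying them to the on-disk row immediately.
struct BufferedEngine {
    std::map<int, int> disk;    // rows as stored on disk
    std::map<int, int> pending; // buffered increments, not yet applied
    int disk_seeks = 0;

    // The upsert is absorbed in memory: no read, no seek.
    void upsert_increment(int key) { pending[key] += 1; }

    // A query of the row pays one seek and merges pending increments.
    int read(int key) {
        ++disk_seeks;
        auto it = pending.find(key);
        if (it != pending.end()) {
            disk[key] += it->second;  // absent rows start at 0, so the
            pending.erase(it);        // first increment acts as the insert
        }
        return disk[key];
    }
};
```

Under this scheme, a burst of clicks followed by one report query costs a single seek rather than one seek per click.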

One Comment

  1. This would be a great way to expand the workloads for which MySQL is very good.
