TokuDB Fast Update Benchmark

Posted On March 28, 2013 | By Tim Callaghan

Last month my colleague Rich Prohaska covered the technical details of our “Fast Update” feature, which we added in TokuDB version 6.6.  The message-based architecture of Fractal Tree Indexes allows us to defer certain operations while still maintaining the semantics that MySQL users require.

In the case of Fast Updates, TokuDB avoids the read-before-write requirement that the existing MySQL update statement imposes on storage engines.  We simply inject an update message into the Fractal Tree Index and apply that message at a later time.  The message is applied dynamically if a user selects that specific row, and applied permanently when the message buffers overflow and push it down into the leaf node.

There are trade-offs for not performing a read for every update:

  • The developer does not get a valid “# of rows updated” response from the server
  • Triggers can’t exist on the table (we’d need to read the row to fire the trigger)
  • The updated column(s) cannot be part of any indexes (we’d need to read the row to update the indexes)
  • Replication must be off or statement-based (row-based replication requires a before-image of the row, forcing a read)
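
As a concrete illustration of these constraints, here is a minimal sketch (the table and column names are hypothetical, not from the benchmark).  The first UPDATE touches only a non-indexed column with a simple expression and locates the row by primary key, so TokuDB can inject an update message without reading the row first; the second touches an indexed column and therefore still needs the read-before-write path.

    -- Hypothetical table: no triggers, `hits` is not part of any index, `url` is.
    CREATE TABLE page_stats (
      id   INT NOT NULL PRIMARY KEY,
      url  VARCHAR(255) NOT NULL,
      hits BIGINT NOT NULL DEFAULT 0,
      KEY (url)
    ) ENGINE=TokuDB;

    -- Candidate for a fast update: an "increment hits" message can be injected
    -- for the row with id = 42 without reading the row first.
    UPDATE page_stats SET hits = hits + 1 WHERE id = 42;

    -- Not a candidate: `url` is covered by a secondary index, so the old row
    -- image is needed to maintain that index.
    UPDATE page_stats SET url = 'http://example.com/new' WHERE id = 42;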

A single Sysbench OLTP transaction performs a series of point and range queries, an indexed update, a non-indexed update, a delete, and an insert.  I ran two tests, one with a single update per transaction and a second with 25 updates per transaction.  These Sysbench tests used 16 tables, 10 million rows per table, and 64 client threads.  The server was started with 8GB of cache for each storage engine, both used Direct IO, and the InnoDB on-disk size was 36GB.  The performance numbers below are the average transactional throughput over the 15-minute benchmark run.
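
For context, one sysbench OLTP transaction corresponds roughly to the statement mix below (shown against the conventional sbtest table naming; the actual values are chosen by sysbench at run time, and this is a simplified sketch rather than the exact script).  The k = k + 1 statement is the indexed update and the c = ? statement is the non-indexed update, so per the constraints above only the latter is a candidate for TokuDB’s fast path.

    -- Approximate per-transaction statement mix (sysbench OLTP, simplified).
    SELECT c FROM sbtest1 WHERE id = ?;               -- point queries
    SELECT c FROM sbtest1 WHERE id BETWEEN ? AND ?;   -- range queries
    UPDATE sbtest1 SET k = k + 1 WHERE id = ?;        -- indexed update (k has a secondary index)
    UPDATE sbtest1 SET c = ? WHERE id = ?;            -- non-indexed update (fast-update candidate)
    DELETE FROM sbtest1 WHERE id = ?;
    INSERT INTO sbtest1 (id, k, c, pad) VALUES (?, ?, ?, ?);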

The performance of InnoDB dropped 66.5% when we increased the number of updates per transaction from 1 to 25.  This was to be expected, as each of the 25 updates is likely to perform an IO.

With TokuDB using traditional updates, transactional throughput dropped 54.9% (179.5 to 80.9) when increasing the number of updates per transaction from 1 to 25.  As with InnoDB, this is expected behavior, as each update requires an IO.  However, TokuDB’s fast updates only dropped 13.3% (179.5 to 155.5).  Fractal Tree Index messaging for the win!

TokuDB’s Fast Updates are just part of our SQL optimization story.  We have similar optimizations for INSERT … ON DUPLICATE KEY UPDATE, INSERT IGNORE, and REPLACE INTO, sketched below.  If you rely heavily on these statements and need to increase your MySQL throughput, then you should seriously consider evaluating TokuDB.
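
For reference, these are the standard MySQL statement forms involved; the table here is a hypothetical example, not from the benchmark.  Each normally requires a read to detect whether the key already exists, which is the work TokuDB can defer with its message-based approach.

    -- Hypothetical counter table keyed by name.
    CREATE TABLE counters (
      name  VARCHAR(64) NOT NULL PRIMARY KEY,
      value BIGINT NOT NULL DEFAULT 0
    ) ENGINE=TokuDB;

    -- Upsert: insert the row, or add 1 to the existing counter.
    INSERT INTO counters (name, value) VALUES ('page_views', 1)
      ON DUPLICATE KEY UPDATE value = value + 1;

    -- Insert only if the key is absent.
    INSERT IGNORE INTO counters (name, value) VALUES ('signups', 0);

    -- Replace the row unconditionally.
    REPLACE INTO counters (name, value) VALUES ('errors', 0);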

I’ll be presenting my benchmarking infrastructure at Percona Live in April.  If you are attending the show be sure to stop by our booth and learn more about TokuDB.
