As previously noted, I've been playing with XtraDB a bit at work. Over a week ago I decided to test compression on one of our larger tables and it took a bit longer than I expected.

(root@db_server) [db_name]> ALTER TABLE table_name \
Query OK, 825994826 rows affected (8 days 14 hours 23 min 47.08 sec)
Records: 825994826  Duplicates: 0  Warnings: 0
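The ALTER statement above got cut off. For context, enabling compression with the InnoDB plugin (which XtraDB is based on) generally looks something like the following sketch; the options shown are illustrative, not necessarily the ones used here:

```sql
-- Illustrative sketch only; the actual options used above were truncated.
-- Requires innodb_file_per_table=1 and innodb_file_format=Barracuda.
ALTER TABLE table_name
  ROW_FORMAT=COMPRESSED
  KEY_BLOCK_SIZE=8;  -- compressed page size in KB (1, 2, 4, 8, or 16)
```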


It's too bad we couldn't use all the cores on the machine for the ALTER TABLE, huh?

On the plus side, the file sizes aren't too bad.

Before:

-rw-rw---- 1 mysql mysql 1638056067072 2009-05-24 09:23 table_name.ibd

After:

-rw-rw---- 1 mysql mysql  587617796096 2009-05-27 07:14 table_name.ibd

I'll have more to say about XtraDB and the compression options in a later post. But given the interest that my Twitter messages about this big ALTER TABLE generated yesterday, I figured I'd share a bit more detail here.

For anyone doing the math at home, that's going from roughly 1.5TB to 500GB (the new file size is slightly inflated, since this slave managed to replicate about a week's worth of data before I caught it). I was hoping for 4:1 compression and managed about 3:1.
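The ratio is easy to recheck from the raw byte counts in the two directory listings above; a quick shell sketch (numbers copied straight from the ls output):

```shell
# Byte counts copied from the ls listings above.
before=1638056067072   # table_name.ibd before compression
after=587617796096     # table_name.ibd after compression

# Integer arithmetic, scaled by 100 for two significant digits:
# 278 means a ratio of about 2.78:1.
echo "ratio x100: $(( before * 100 / after ))"
```

The raw ratio works out to about 2.8:1; since the compressed file also picked up roughly a week of replicated data, the true ratio is a bit better, consistent with the "about 3:1" figure above.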

Posted by jzawodn at May 27, 2009 07:34 AM

Reader Comments
# Mike Malone said:

Hey Jeremy, this looks pretty interesting... what's the performance like pre/post compression? Since a large DB like this is typically IO bound it'd be interesting to know whether the reduction in IO due to compression more than offsets the increased CPU utilization to do decompression (I suspect it does).

on May 27, 2009 11:13 AM
# Jeremy Zawodny said:


Once the slave catches up, we'll have it serving live traffic so we can see. :-)

on May 27, 2009 11:17 AM
# Kevin Marks said:

Next time I hear 'mapreduce isn't as good as databases' I'll refer them to this post. Munging loads of data into a new form using lots of cores is exactly what mapreduce is good for.

on May 27, 2009 01:18 PM
# Sean said:

ls -lh is your friend. No need to do math on crazy large numbers. :)
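Sean's tip works on raw numbers too: GNU coreutils' numfmt renders byte counts in the same human-readable form that ls -lh prints (a sketch, using the sizes from the listings above):

```shell
# Convert the raw .ibd byte counts into human-readable IEC units,
# the same form that ls -lh prints.
numfmt --to=iec 1638056067072   # before compression
numfmt --to=iec 587617796096    # after compression
```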

on May 27, 2009 01:25 PM
# David Magda said:

Will you be posting benchmarks on CPU usage and quantity of bytes read/written?

The SmugMug folks run MySQL on top of Solaris. With ZFS you can enable on-the-fly compression per file system, and they found that turning on LZJB compression gave quite favourable performance in many cases.

The traditional gzip compression algorithm didn't come out that well in the comparison.

on May 27, 2009 03:30 PM
# Jeremy Zawodny said:


Yeah, I'll have more to say later.

Right now I'll just say that adding a second OS to the mix (despite how gung-ho one of our sysadmins might be about Open Solaris!) certainly has operational considerations.

I'm just doing some testing now to see how this will work out if we decide to go in this direction.

on May 27, 2009 03:58 PM
# David Magda said:

I wasn't implying you add (Open)Solaris to the mix (as much as I'm a fan of Solaris 10+), but thought you might find the data point interesting.

Given the growing number of cores available, it's becoming more attractive to have a few of them work toward reducing I/O bandwidth needs (and latency). It will probably take some experimenting to figure out where compression is best applied (file system or DB core) under different circumstances.

on May 27, 2009 06:36 PM
# Jeremy Zawodny said:


Oh, no offense taken.

I've met with Don to talk about this stuff and have read his blog posts on the topic. In fact, it was his use of XtraDB that helped give me the confidence to make the jump.

on May 27, 2009 08:10 PM
# Brian Aker said:


This should be pretty low-hanging fruit for someone to fix. I did multi-threaded support for Archive in 6.0 (which I suspect is now lost in the tree flip-flops, except in Drizzle). Even fire-and-forget threads could do this without a lot of effort.


on May 29, 2009 07:49 AM
# @kbedell said:

Working as 'root' instead of using 'sudo'? Tsk, Tsk...

on June 4, 2009 09:15 AM
# Jeremy Zawodny said:

Uh. MySQL doesn't have a sudo command.

on June 4, 2009 09:20 AM
# Yogish Baliga said:

Given that this file is 1.5TB, isn't this table a candidate for sharding, i.e. distributing it across multiple machines?

on July 2, 2009 06:50 PM