The -q option to mysqldump is poorly documented and misunderstood as a result.
Several folks at Yahoo have been having trouble with mysqldump running out of memory. At first I figured it was some odd problem, specific to one person's setup. But after it happened again, I started to look into it.
I did a bit of testing and found that when dumping really big tables, mysqldump was using up large amounts of memory. It was very good about freeing up the memory when it moved from table to table, but eventually it'd hit The Big One and die.
I reasoned that it must be buffering everything in RAM before dumping. (Of course, you can watch the long pauses in output to the dump file to notice this too.) Then I remembered that PHP has a way of fetching results without sucking up tons of memory (mysql_unbuffered_query()) and began to wonder what it would take to get mysqldump to do the same.
I ran through the PHP code to figure out how they did it and found that the difference comes down to calling mysql_use_result() instead of mysql_store_result() at the C API level. That's it, really. So I checked the mysqldump source code, only to find that the option already existed. I'd never have found it from the docs alone, though.
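For the curious, here's a rough sketch of what that difference looks like at the C API level. This is not mysqldump's actual code, and the connection details and table name are made up, but it shows the streaming style of fetch that the option turns on:

    #include <stdio.h>
    #include <mysql.h>

    int main(void)
    {
        MYSQL     *conn = mysql_init(NULL);
        MYSQL_RES *res;
        MYSQL_ROW  row;

        /* Connection parameters here are placeholders. */
        if (!mysql_real_connect(conn, "localhost", "user", "pass",
                                "test", 0, NULL, 0)) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            return 1;
        }

        if (mysql_query(conn, "SELECT * FROM big_table")) {
            fprintf(stderr, "query failed: %s\n", mysql_error(conn));
            return 1;
        }

        /*
         * mysql_store_result() would slurp the entire result set into
         * client memory before handing back the first row.  With
         * mysql_use_result() the rows stream from the server one at a
         * time, so memory use stays flat no matter how big the table is.
         */
        res = mysql_use_result(conn);
        if (res == NULL) {
            fprintf(stderr, "use_result failed: %s\n", mysql_error(conn));
            return 1;
        }

        while ((row = mysql_fetch_row(res)) != NULL)
            printf("%s\n", row[0] ? row[0] : "NULL");

        mysql_free_result(res);
        mysql_close(conn);
        return 0;
    }

Swap mysql_use_result() for mysql_store_result() in that sketch and the client pulls the whole table into memory before the first row ever comes back, which (as far as I can tell) is exactly what mysqldump does when you leave -q off.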
The docs for mysqldump's -q flag say:
Don't buffer query, dump directly to stdout. Uses mysql_use_result() to do this.
You see, unless you know the C API and understand the difference between buffered and unbuffered results from MySQL's point of view, the description in the manual is useless. I wouldn't have thought to try the -q option to solve this problem.
Sounds pretty trivial, doesn't it? It's not. The docs probably should say something about using it on larger tables and how you might run out of memory if you don't.
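For anyone running into this, the fix is simply to add -q (or its long form, --quick) when dumping anything big. Something like this, with made-up database and table names:

    mysqldump -q big_db huge_table > huge_table.sql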
In fact, I'd go so far as to argue that maybe that option should be on by default. There's little downside for folks with small tables, and there's a lot of benefit for those with really big tables.
Anyway, this ends my public service announcement on the matter.
Posted by jzawodn at April 29, 2003 12:14 PM