The -q option to mysqldump is poorly documented and misunderstood as a result.

Several folks at Yahoo have been having trouble with mysqldump running out of memory. At first I figured it was some odd problem, specific to one person's setup. But after it happened again, I started to look into it.

I did a bit of testing and found that when dumping really big tables, mysqldump was using up large amounts of memory. It was very good about freeing up the memory when it moved from table to table, but eventually it'd hit The Big One and die.

I reasoned that it must be buffering everything in RAM before dumping. (You can also notice this from the long pauses before output starts flowing to the dump file.) I then remembered that PHP has a way of fetching results without sucking up tons of memory (mysql_unbuffered_query()) and began to wonder what it'd take to get mysqldump to do the same.

I ran through the PHP code to figure out how they did it and found that it comes down to calling mysql_store_result() versus mysql_use_result() at the C API level. That's it, really. So I checked the mysqldump source code, only to find that the option already existed; I'd just never have found it on my own.
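
As an aside, the same distinction is visible from the shell: the mysql command-line client has its own -q/--quick flag that switches it from mysql_store_result() to mysql_use_result(). A quick sketch (big_db and big_table are made-up names):

# Buffered: the client calls mysql_store_result() and holds the entire
# result set in RAM before printing a single row.
mysql big_db -e "SELECT * FROM big_table" > rows.txt

# Unbuffered: --quick switches to mysql_use_result(), so each row is
# printed as it arrives from the server.
mysql --quick big_db -e "SELECT * FROM big_table" > rows.txt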

The docs for mysqldump's -q flag say:

Don't buffer query, dump directly to stdout. Uses mysql_use_result() to do this.

Huh?

You see, unless you know the C API and understand the difference between buffered and unbuffered results from MySQL's point of view, the description in the manual is useless. I wouldn't have thought to try the -q option to solve this problem.

Ugh!

Sounds pretty trivial, doesn't it? It's not. The docs should probably say something about using it on larger tables and how you might run out of memory if you don't use it.
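
For example, the difference looks like this on the command line (big_db and huge_table are hypothetical names):

# Without -q, mysqldump buffers each table's entire result set in
# client memory before writing it out; a big enough table exhausts RAM.
mysqldump big_db huge_table > huge_table.sql

# With -q (--quick), rows are fetched via mysql_use_result() and
# written as they arrive, so memory use stays flat.
mysqldump -q big_db huge_table > huge_table.sql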

In fact, I'd go so far as to argue that maybe that option should be on by default. There's little downside for folks with small tables, and there's a lot of benefit for those with really big tables.

Anyway, this ends my public service announcement on the matter.

Posted by jzawodn at April 29, 2003 12:14 PM

Reader Comments
# Ask Bjoern Hansen said:

I don't think there's much to misunderstand about: "Don't buffer query, dump directly to stdout."

?!? :-)


- ask

on April 29, 2003 02:52 PM
# Matt said:

Me neither. ;-)

Just use the --opt option and you'll get -q/--quick along with other goodies like multi-line INSERTs. :-)
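
Something like this (database name made up):

# --opt turns on -q/--quick along with --extended-insert,
# --add-drop-table, --add-locks, and --lock-tables.
mysqldump --opt big_db > big_db.sql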

on April 30, 2003 12:12 AM
# John Stanforth said:

I've found with large (100+ million row) tables that I not only need the -q option to dump without buffering, but also the -e option to significantly reduce the write output via the extended INSERT syntax. Together, these two options make it a lot easier to deal with large amounts of data. And of course, piping mysqldump output to gzip lessens the write load even further (given that I usually have a couple of free CPUs during a dump, whereas I'm almost always disk I/O bound)...
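
Roughly this, I mean (names made up):

# -q streams rows instead of buffering them, -e batches rows into
# multi-row INSERT statements, and gzip trades spare CPU for fewer writes.
mysqldump -q -e big_db | gzip > big_db.sql.gz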

on April 30, 2003 04:47 PM
# Basil Hussain said:

Yup. The --opt option is your friend. :-) Also, the help description for -q made perfect sense to me the first time I saw it.

on May 1, 2003 09:39 AM
# havoc said:

Maybe I'm just stupid, but I really appreciated this tip. I would have never thought that you'd buffer a raw data dump like that. Who'd'a thought it?

Yeah, maybe the wording is fine, but the fact that it's not the default is baffling.

on May 1, 2003 07:32 PM
# Jordan Ritter said:

He still makes a valid point -- it should probably be on by default.

on August 17, 2003 09:50 AM
# Tom P said:

Hey Z,

Thanks for this tip; it just fixed a problem I was having.

-T

on January 18, 2007 04:00 PM
# Maurice said:

A big thanks from me as well. I did see the mysqldump process growing and growing when mysqldumping a big fat table and couldn't figure out why it would do that. I did play with a lot of other options and didn't recognize the -q option as the solution to this problem either. After reading this, it's pretty obvious and I feel kind of stupid for running into this wall ;-)

I first thought that my database had gotten corrupted, since my dump script had been running daily for a year with no problems at all. Then my server somehow crashed and went away during the dump. I'll paste my exact client crash error message here so others facing the same problem might be able to find a solution:

mysqldump: Got error: 2013: Lost connection to MySQL server during query when retrieving data from server

on February 8, 2007 05:39 PM
# Don said:

Not sure if it is the same problem as Maurice's, but it is quite similar... Somehow the server crashes as well (not the MySQL server, but the HTTP server which connects to the MySQL server) and I got this error:

Lost connection to MySQL server at 'reading authorization packet', system error: 0

I'll see if the option helps.

Will check if that works for me!

on July 19, 2007 01:11 AM
# Mike said:

Maybe I too am stupid but I greatly appreciate this tip.

It certainly wasn't obvious to me that the default behaviour was to buffer in memory.

on July 20, 2007 08:57 AM
# erin said:

hmmm. I'm having problems with mysqldump on MyISAM tables. The db isn't all that large either.

I run this:
mysqldump --opt db >db.sql

fails with
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table

I got further when I ran this:
mysqldump -q -e db >db.sql
It failed on row 3,650,867 (the previous failure was on row 1,544,510; the total is 4,679,973). There are 2 other huge tables that it sometimes fails on, too.

The memory usage is fine. This is so weird.

tips? suggestions?

thanks,
erin

on October 22, 2007 12:39 PM
# dhanson said:

I get the 2013 error when I redirect the mysqldump output to an NFS mount. It doesn't consistently happen on the same table or row; it's always different. If I create the file on a local file system, it works fine every time. Any ideas?

on December 6, 2007 10:51 AM
# Sam M. said:

I've been having the same problem as dhanson. NFS mounts fail from time to time with mysqldump, while local filesystem mysqldumps work every time. Any ideas?

on October 2, 2008 12:34 PM
# D said:

I just woke up this morning to find that a mysqldump on a 190G database had failed. This is MySQL 5.0 on Windows (not by choice). Whoever set up the server didn't have logging enabled, and the command window that mysqldump was running from has been closed... so no idea why it failed.

The options passed were --verbose --single-transaction --flush-logs --master-data=2 --no-autocommit --all-databases --delete-master-logs

Anything else I should try for a massive db? I know, without an error message I'm out of luck...

I see people mentioning --opt and -q; in the past I've used --opt for other databases. The manual indicates that --opt is on by default.

Oh well... fix the logging issue and start round 2 :)

on January 30, 2009 07:02 AM
# Arch said:

Can anyone help me? I have a similar issue:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `AssetEntityTheme` at row: 2431856

I even checked my "my.cnf" file; it is as presented below:
table_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 8
query_cache_size = 64M
query_cache_limit = 2M

I come across this issue with only 2 tables, almost every day.
I am just bugged by this error. This error is killing me.
Can anyone please help me.

on June 7, 2009 11:26 PM
# Math said:

Hi,

I select the value of a table using type, and I also want the MySQL query for its next and previous type. All this has to be written in a single query:

select * from tb1 where type='All calculators',id=(all cal+1)

Is there any option to do all this in one query?

on October 16, 2009 09:00 AM
# orczhou said:

When I redirect the mysqldump output to an NFS mount, the same error comes out:
Error 2013: Lost connection to MySQL server during query when dumping table `****` at row: ****

Changing the variable net_write_timeout (e.g. to 3600) really helped me avoid the error. But it may consume more RAM (swap may get involved).

After your mysqldump finishes its job, it is necessary to change net_write_timeout back to the default value (e.g. 60).
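
A sketch of that sequence (3600 seconds and the database name are just examples):

# Raise the timeout before the dump...
mysql -e "SET GLOBAL net_write_timeout = 3600;"
mysqldump -q big_db > big_db.sql
# ...and put it back to the default afterward.
mysql -e "SET GLOBAL net_write_timeout = 60;"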

http://dev.mysql.com/doc/refman/5.0/en/error-lost-connection.html

on December 14, 2009 04:11 AM