The -q option to mysqldump is poorly documented and misunderstood as a result.
Several folks at Yahoo have been having trouble with mysqldump running out of memory. At first I figured it was some odd problem, specific to one person's setup. But after it happened again, I started to look into it.
I did a bit of testing and found that when dumping really big tables, mysqldump was using up large amounts of memory. It was very good about freeing up the memory when it moved from table to table, but eventually it'd hit The Big One and die.
I reasoned that it must be buffering everything in RAM before dumping. (Of course, you can watch the long pauses in output to the dump file to notice this too.) I then thought that PHP has a way of fetching results without sucking tons of memory (mysql_unbuffered_query()) and began to wonder what it'd take to get mysqldump to do the same.
I ran thru the PHP code to figure out how they did it and found that it comes down to the difference between calling mysql_store_result() and mysql_use_result() at the C API level. That's it, really. So I checked the source code to mysqldump, only to find that the option already existed. But I'd never have found it on my own.
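For the curious, the difference looks roughly like this at the C level. This is just a minimal sketch (the connection parameters and big_table are placeholders), not mysqldump's actual code:

    #include <stdio.h>
    #include <mysql.h>

    int main(void)
    {
        MYSQL *conn = mysql_init(NULL);

        /* placeholder connection parameters */
        if (!mysql_real_connect(conn, "localhost", "user", "pass",
                                "db", 0, NULL, 0)) {
            fprintf(stderr, "connect: %s\n", mysql_error(conn));
            return 1;
        }

        if (mysql_query(conn, "SELECT * FROM big_table")) {
            fprintf(stderr, "query: %s\n", mysql_error(conn));
            return 1;
        }

        /* mysql_store_result() would pull the entire result set into
           client memory at this point. mysql_use_result() instead
           streams one row at a time from the server on each
           mysql_fetch_row() call, so client memory use stays flat no
           matter how big the table is. */
        MYSQL_RES *res = mysql_use_result(conn);
        if (res == NULL) {
            fprintf(stderr, "use_result: %s\n", mysql_error(conn));
            return 1;
        }

        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)) != NULL)
            printf("%s\n", row[0] ? row[0] : "NULL");

        mysql_free_result(res);
        mysql_close(conn);
        return 0;
    }

The tradeoff, as I understand it, is that with mysql_use_result() you tie up the server with that query until you've fetched the last row, which is presumably why buffering is the default.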
The docs for mysqldump's -q flag say:
Don't buffer query, dump directly to stdout. Uses mysql_use_result() to do this.
Huh?
You see, unless you know the C API and understand the difference between buffered and unbuffered results from MySQL's point of view, the description in the manual is useless. I wouldn't have thought to try the -q option to solve this problem.
Ugh!
Sounds pretty trivial, doesn't it? It's not. The docs probably should say something about using it on larger tables and how you might run out of memory if you don't use it.
In fact, I'd go so far as to argue that maybe that option should be on by default. There's little downside for folks with small tables, and there's a lot of benefit for those with really big tables.
Anyway, this ends my public service announcement on the matter.
Posted by jzawodn at April 29, 2003 12:14 PM
I don't think there's much to misunderstand about: "Don't buffer query, dump directly to stdout."
?!? :-)
- ask
Me neither. ;-)
Just use the --opt option and you'll get -q/--quick along with other goodies like multi-line INSERTs. :-)
I've found with large (100+ million row) tables that I not only need the -q option to dump without buffering, but also the -e option to significantly lessen the write output via the extended INSERT syntax. With those two options, it's a lot easier to deal with large amounts of data. And of course, piping mysqldump output to gzip lessens the write load even further (given that I usually have a couple free CPUs during a dump, whereas I'm almost always disk I/O bound)...
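For example, the whole thing ends up looking something like this (big_db being a placeholder for whatever database you're dumping):

    mysqldump -q -e big_db | gzip > big_db.sql.gz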
Yup. The --opt option is your friend. :-) Also, the help description for -q made perfect sense to me the first time I saw it.
Maybe I'm just stupid, but I really appreciated this tip. I would have never thought that you'd buffer a raw data dump like that. Who'd'a thought it?
Yeah, maybe the wording is fine, but the fact that it's not the default is baffling.
He still makes a valid point -- it should probably be on by default.
Hey Z,
Thanks for this tip; it just fixed a problem that I was having.
-T
A big thanks from me as well. I did see the mysqldump process growing and growing when mysqldumping a big fat table and couldn't figure out why it would do that. I played with a lot of other options and didn't recognize the -q option as the solution to this problem either. After reading this, it's pretty obvious and I feel kind of stupid for running into this wall ;-)
I first thought that my database had gotten corrupted, since my dump script had been running on a daily basis for a year with no problems at all. Then my server somehow crashed and went away during the dump. I'll paste my exact client crash error message here so others facing the same problem might be able to find a solution:
mysqldump: Got error: 2013: Lost connection to MySQL server during query when retrieving data from server
Not sure if it is the same problem as Maurice's, but it is quite similar... Somehow the server crashes as well (not the MySQL server, but the HTTP server which connects to the MySQL server) and I got this error:
Lost connection to MySQL server at 'reading authorization packet', system error: 0
See if this helps with the option.
Will check if that works for me!
Maybe I too am stupid but I greatly appreciate this tip.
It certainly wasn't obvious to me that the default behaviour was to buffer in memory.
hmmm. I'm having problems with mysqldump on MyISAM tables. The db isn't all that large either.
I run this:
mysqldump --opt db >db.sql
fails with
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table
I got further when I ran this:
mysqldump -q -e db >db.sql
It failed on row 3,650,867 (the previous failure was on row 1,544,510; the total is 4,679,973 rows). There are 2 other huge tables that it sometimes fails on, too.
The memory usage is fine. This is so weird.
tips? suggestions?
thanks,
erin
I get the 2013 error when I redirect the mysqldump output to an NFS mount. It doesn't consistently happen on the same table or row; it's always different. If I create the file on a local file system, it works fine every time. Any ideas?
I've been having the same problem as dhanson too. NFS mounts fail from time to time with mysqldump, while local filesystem mysqldumps work every time. Any ideas?
I just woke up this morning to find that a mysqldump on a 190G database failed. This is MySQL 5.0 on Windows (not by choice). Whoever set up the server didn't have logging enabled, and the command window that mysqldump was running from has been closed... so no idea why it failed.
The options passed were --verbose --single-transaction --flush-logs --master-data=2 --no-autocommit --all-databases --delete-master-logs
Anything else I should try for a massive db? I know, without an error message I'm out of luck...
I see people mentioning --opt and -q; in the past, with other databases, I've used --opt. The manual indicates that --opt is on by default.
Oh well... fix the logging issue and start round 2 :)
Can anyone help me? I have a similar issue:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `AssetEntityTheme` at row: 2431856
I even checked; my "my.cnf" file is as presented below:
table_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 8
query_cache_size = 64M
query_cache_limit = 2M
I come across this issue with only 2 tables, almost every day.
I am just bugged by this error. This error is killing me.
Can anyone please help me.
Hi,
I select a row from a table by its type, and I also want the query to fetch the next and previous rows; all of this has to be written in a single query:
select * from tb1 where type='All calculators',id=(all cal+1)
Is there any option to do all this in one query?
When I redirect the mysqldump output to an NFS mount, the same error comes out:
Error 2013: Lost connection to MySQL server during query when dumping table `****` at row: ****
Changing the variable net_write_timeout (e.g. to 3600) really helped me avoid the error. But it may consume more RAM (swap may get involved).
After your mysqldump finishes its job, it is necessary to change net_write_timeout back to the default value (e.g. 60).
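In other words, something like this (your_db is just a placeholder, and setting a global variable requires the SUPER privilege):

    mysql -e "SET GLOBAL net_write_timeout = 3600"
    mysqldump -q your_db > your_db.sql
    mysql -e "SET GLOBAL net_write_timeout = 60"

You can check the current value first with SHOW VARIABLES LIKE 'net_write_timeout'.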
http://dev.mysql.com/doc/refman/5.0/en/error-lost-connection.html