(After having explained this for the 35th time, I decided it's time to simply put something on-line.)

When a MyISAM table grows large enough, you'll encounter the dreaded "The table is full" error. Now I could simply point at the MySQL manual page that covers this error and leave the subject alone, but there's more to this story.

When this happens, the first reaction I hear is "You never told me that MySQL has a 4GB limit! What am I going to do?" Amusingly, I usually do describe the limit when I discuss the possibility of using MySQL with various groups--they often forget or underestimate the impact it will have. Putting that aside, the problem is easily fixed, as that page explains. You simply need to run an ALTER TABLE command.

And you'll need to wait. That ALTER TABLE is going to take some time. Really.

To protect yourself in the future, use the MAX_ROWS and AVG_ROW_LENGTH options at CREATE TABLE time if the table is likely to get big.
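
For example (just a sketch; the table and column names here are made up), telling MyISAM up front that a table will hold a billion rows of roughly 270 bytes each looks like this:

mysql> CREATE TABLE hits (
    ->   url       VARCHAR(255) NOT NULL,
    ->   hit_date  DATE         NOT NULL,
    ->   hit_count INT          NOT NULL DEFAULT 0
    -> ) TYPE=MyISAM MAX_ROWS=1000000000 AVG_ROW_LENGTH=270;

MyISAM uses MAX_ROWS * AVG_ROW_LENGTH to decide how large a row pointer the table needs, so estimating on the high side is cheap insurance.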

InnoDB tables do not have this limitation because their storage model is completely different.

Where does this limit come from?

In a MyISAM table with dynamic (variable length) rows, the index file for the table (tablename.MYI) stores row locations using 32-bit pointers into the data file (tablename.MYD). That means it can address only 4GB of space.
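
That pointer is where the magic number comes from: the largest offset a 32-bit pointer can hold is 2^32 - 1 = 4,294,967,295 bytes, which is exactly the Max_data_length value you'll see in SHOW TABLE STATUS below.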

This problem is both a historical artifact and an optimization. Back when MySQL was created, it wasn't common to store that much data in a single table. Heck, for a long time 4GB was an entire hard disk, and most operating systems had trouble with files larger than 2GB. Obviously those days are gone. Modern operating systems have no trouble with large files, and hard disks larger than 100GB are quite common.

From an optimization point of view, however, the 32-bit pointer still makes sense. Why? Because most people are running MySQL on 32-bit hardware (Intel/Linux). That will change as use of AMD's Opteron becomes more widespread, but 32-bit will be the majority for the next few years. Using 32-bit pointers is the most efficient choice on 32-bit hardware. And even today, most MySQL installations don't have tables anywhere near 4GB in size. Sure, a lot of larger deployments are emerging, but they're all relatively new.

An Example

Here's a table that you might use to store weather data:

mysql> describe weather;
+-----------+--------------+------+-----+------------+-------+
| Field     | Type         | Null | Key | Default    | Extra |
+-----------+--------------+------+-----+------------+-------+
| city      | varchar(100) |      | MUL |            |       |
| high_temp | tinyint(4)   |      |     | 0          |       |
| low_temp  | tinyint(4)   |      |     | 0          |       |
| the_date  | date         |      |     | 0000-00-00 |       |
+-----------+--------------+------+-----+------------+-------+
4 rows in set (0.01 sec)
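
If you want to follow along, a CREATE TABLE statement roughly like this one (reconstructed from the DESCRIBE output above, so treat it as a sketch) produces that table:

mysql> CREATE TABLE weather (
    ->   city      VARCHAR(100) NOT NULL,
    ->   high_temp TINYINT      NOT NULL DEFAULT 0,
    ->   low_temp  TINYINT      NOT NULL DEFAULT 0,
    ->   the_date  DATE         NOT NULL DEFAULT '0000-00-00',
    ->   KEY (city)
    -> ) TYPE=MyISAM;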

To find its size limit, we'll use SHOW TABLE STATUS:

mysql> show table status like 'weather' \G
*************************** 1. row ***************************
           Name: weather
           Type: MyISAM
     Row_format: Dynamic
           Rows: 0
 Avg_row_length: 0
    Data_length: 0
Max_data_length: 4294967295
   Index_length: 1024
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2003-03-03 00:43:43
    Update_time: 2003-03-03 00:43:43
     Check_time: 2003-06-14 15:11:21
 Create_options: 
        Comment: 
1 row in set (0.00 sec)

There it is. Notice that Max_data_length is 4GB. Let's fix that.

mysql> alter table weather max_rows = 200000000000 avg_row_length = 50;
Query OK, 0 rows affected (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> show table status like 'weather' \G
*************************** 1. row ***************************
           Name: weather
           Type: MyISAM
     Row_format: Dynamic
           Rows: 0
 Avg_row_length: 0
    Data_length: 0
Max_data_length: 1099511627775
   Index_length: 1024
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2003-06-17 13:12:49
    Update_time: 2003-06-17 13:12:49
     Check_time: NULL
 Create_options: max_rows=4294967295 avg_row_length=50
        Comment: 
1 row in set (0.00 sec)

Excellent. Now MySQL will let us store a lot more data in that table.
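
In case you're wondering where the new Max_data_length comes from: MyISAM sizes its data pointer in whole bytes. The MAX_ROWS value it actually recorded (more on that in a second) times AVG_ROW_LENGTH works out to roughly 215GB, which doesn't fit in a 4-byte pointer, so MyISAM moves up to a 5-byte pointer. And 2^40 - 1 = 1,099,511,627,775 bytes (about 1TB) is the most a 5-byte pointer can address.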

Too Many Rows?

Now, the astute reader will notice that the Create_options show max_rows=4294967295: even though we asked for 200 billion rows, MySQL silently capped MAX_ROWS at roughly 4.2 billion. That's right, there's still a limit, but now it's a limit on the number of rows, not the size of the table. Even if your rows are 10 times as large, you're still limited to roughly 4.2 billion of them.

Why?

Again, this is 32-bit hardware. If you move to a 64-bit system, the limit is raised accordingly.

Posted by jzawodn at June 17, 2003 01:42 PM

Reader Comments
# chuqui said:

this, of course, assumes you're running some kind of OS that doesn't have max-file-size limits or max-filesystem-size limits. Some versions of solaris have their own file size limits that this won't fix...

chuq
(who has a table that takes 14 hours to rebuild. Encourages you to not let the database crash)

on June 17, 2003 02:54 PM
# Gerald Steffens said:

Does it make sense to use MySQL if you have to store billions of rows? Would performance be sufficient, and are there any other limits that could be a constraint? And - what's the size of the biggest database at Yahoo >:)

on June 17, 2003 03:21 PM
# Bill Mullin said:

I have production tables with 125,000,000 rows.
What's too big?

on June 17, 2003 04:48 PM
# Joao Prado Maia said:

I'm not sure if this is a bug that was already fixed in newer versions of MySQL, but we found out at work that when using mysqldump on a table that was changed to support more than 4 gigs of data, the dump file didn't have the ALTER TABLE lines in it.

Which basically means that if you are dumping a huge database into a backup file, and you have this table with more than 4 gigs in it, then when you try to restore from the backup, you will get the 'The table is full' error message :)

Pretty annoying but we cannot upgrade to a newer version to check on this problem. Anyway, just thought it would be nice to warn people here.

--Joao

on June 17, 2003 05:29 PM
# Jeremy Zawodny said:

Did you ever report the bug?

on June 17, 2003 06:10 PM
# Joao Prado Maia said:

I figured that since we are using a 3 year old version of MySQL, that it was already fixed in newer releases.

So no, I never reported the bug. But since you were talking about it, I thought you'd want to know that this feature didn't work properly with mysqldump-based backups, at least in the version we use.

--Joao

on June 18, 2003 06:51 AM
# Edmund said:

Sorry, I need this explained.

I'm not a real DBA, I just use MySQL to dump lots of network traffic accounting, which means I've tables with over 4GB already. Now, unless I did it in my sleep, I never did the alter table thing, but my Max_data_length is over 20GB.

Even if I create a new table, I get this large Max_data_length:

mysql> create table wiggle (woggle int(1));
Query OK, 0 rows affected (0.00 sec)

mysql> show table status like 'wiggle' \G
*************************** 1. row ***************************
           Name: wiggle
           Type: MyISAM
     Row_format: Fixed
           Rows: 0
 Avg_row_length: 0
    Data_length: 0
Max_data_length: 21474836479
   Index_length: 1024
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2003-06-18 10:28:19
    Update_time: 2003-06-18 10:28:19
     Check_time: NULL
 Create_options: 
        Comment: 
1 row in set (0.00 sec)

mysql>

I'm running 4.0.2-alpha on Redhat.

So, I'm wondering what changes the default size of tables.

on June 18, 2003 07:47 AM
# Harrison said:

The reason that wiggle can have so much is that it is a fixed-width table, not a dynamic one. Jeremy did mention it at the very beginning; you just need to know what the difference is. A variable-width table is any that has varchar, text, or blob columns in it.
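
If I'm doing the math right, that also explains Edmund's exact number: each row in wiggle is 5 bytes (4 for the INT plus 1 byte of record overhead), and 5 * 2^32 - 1 = 21,474,836,479, which is precisely the Max_data_length he's seeing.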

on June 18, 2003 08:31 PM
# Aron Pilhofer said:

Forgive me... I'm not a real DBA either. Just a happy user of MySQL.

I have not checked, but I know I have several enormous tables, several of which are close to the 4GB point. I have the MySQL Big_table session variable turned on. I was under the impression that would allow MyISAM tables to exceed 4GB. Yes? No?

on June 19, 2003 05:32 AM
# said:


Thanks Harrison, I missed the impact of using
those column types.

on June 19, 2003 06:52 AM
# Matt said:

The person that was saying that ALTER TABLE lines aren't in the dump file: they don't need to be. Any table options such as MAX_ROWS and/or AVG_ROW_LENGTH will be included in the CREATE TABLE statement (assuming they're there and you didn't just dump the data (-t/--no-create-info mysqldump option)). Because remember, anything done with ALTER TABLE is like it was done with CREATE. ;-)
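
For example, if you dump the weather table from the article, the dump should contain a CREATE statement along these lines (a sketch, not literal mysqldump output):

CREATE TABLE `weather` (
  `city` varchar(100) NOT NULL default '',
  ...
) TYPE=MyISAM MAX_ROWS=4294967295 AVG_ROW_LENGTH=50;

so restoring from it brings the larger limit back automatically.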


BTW, can someone tell me what the file size limit is with current versions of Linux? I've heard it's 2G without a patch or whatever, which is pathetic. Or is it 4G? Or more? I mean, even FAT32's limit is 4G. And NTFS is like virtually unlimited I think.

So it'd be nice to hear the exact limits of modern Linux and what, if anything, determines them.

on June 19, 2003 01:07 PM
# dick said:

yeah, use postgresql...

on June 20, 2003 01:16 PM
# Louie said:

Your page helped me a lot when the database log for seventeenorbust.com went down due to the 4gig limit. Thanks Jeremy.

Cheers,
Louie

on July 31, 2003 04:05 PM
# m Yam said:

Too Many Rows?

Now, the astute reader will notice the Create_options specify a limit of 4.2 billion rows. That's right, there's still a limit, but now it's a limit on number of rows, not the size of the table. Even if you have a table with rows that are 10 times as large, you're still limited to roughly 4.2 billion rows.

Hi Jeremy
I tried to see if one can increase the limit to more than 4.2B rows on a 64-bit machine, but I couldn't. Is there any settable parameter other than MAX_ROWS that will make MySQL go beyond the 4.2B row limit? Or is there any simple modification to the MySQL source that will do the trick?

on September 22, 2004 11:29 AM
# carmen said:

Hi Jeremy,

I was wondering: when does memory allocation take place for the tables? If I use the create options or the alter table statements to change the max_data_length of the table to some exaggerated size (but less than 4.2 billion records), does it try to allocate the memory for that huge table?

Also, another question out of curiosity: how does the database handle the memory addressing once we go beyond the 4GB table size?

Thanks.
Carmen

on September 24, 2004 01:41 PM
# Torsten Brumm said:

Thanks for this Tip, this saved my Life today!!!

Great to find guys with such good knowledge.

Kind Regards

Torsten

on March 4, 2005 03:52 AM
# Domas said:

What are you pumping into those tables to get higher than 4.2B rows?

on March 24, 2005 04:58 AM
# Colin Faulkingham said:

I just had the experience of trying to load a data file of 41 million rows using LOAD DATA INFILE. MySQL would stop at about 5 million rows, which was well below the 4GB limit. However, after I altered the table to handle a larger data set, LOAD DATA INFILE worked perfectly.

I can only theorize that while LOAD DATA INFILE is executing, it is constantly calculating AVG_ROW_LENGTH * the number of rows in the file, and stops when it knows it won't fit.

on May 11, 2005 10:09 AM
# Kris said:

One thing Jeremy forgot to mention in his article was that when you do an ALTER TABLE command, it not only changes the table structure, it also DELETES ALL ROWS from your table. Therefore, if you do not have a good backup, your table has just been wiped out.

This should be added to the article since it is dangerous for anyone who doesn't know that bit of information.

on June 13, 2005 04:35 AM
# Vinay Bansal said:

I would like to know if there is any limit on the size of mysql database? how big it can grow?

Also is there any thing which talks about how long it takes to perform backup of the mysql database based on the size of the database

Any reply would be appreciated.
Thanks,
Vinay

on June 22, 2005 03:42 AM
# Rich said:

Thanks for the comment by Kris, though unfortunately it was too late; the rows went 'poof'. Not all alter table commands delete rows, right? I've added fields and indices to tables with alter table statements with no visible effect on existing data... Now let's see if the db admin really does the backing up he claims to.

on July 29, 2005 10:22 AM
# Kenneth Lindh said:

What if I don't get the name of the table in the error message, like:
java.sql.SQLException: The table '' is full
I've checked Data_length on all my tables but none is near Max_data_length...
Any ideas?

BR /Kenneth

on August 23, 2005 04:26 AM
# Jeff C said:

This is no longer required for mysql 5.0.6 and greater.

Note this minor change, coming from:
http://dev.mysql.com/doc/refman/5.0/en/news-5-0-6.html

Changed default value of myisam_data_pointer_size from 4 to 6. This allows us to avoid table is full errors for most cases.
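
And if you're stuck on a version that has the variable but still defaults to 4, you can presumably raise it yourself in my.cnf (it only affects MyISAM tables created after the change):

[mysqld]
myisam_data_pointer_size = 6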

on December 7, 2005 02:05 AM
# Richard Kure said:

I came across this problem as well, because I store files in a table (yes, and no comments about how you don't think they should be in the database).

So I altered my table to allow 1T instead of 4G; this worked perfectly, and I could now store more than 4G.

Now came the problem: it now takes forever to extract from the table. I have 1477 rows in the table, and the size of the table is around 6.5G.
this sql line works like a charm:
"SELECT fileId FROM articlefile"
My guess is that it takes around 5-10 milliseconds.

But if I do one of these sql lines
"SELECT fileId, articleId, filename FROM articlefile"
"SELECT fileId, articleId FROM articlefile"
it then takes around 5 minutes, which doesn't make sense to me.

Table Info
fileId, bigint, auto increment, primary key
articleid, bigint
filename, varchar, 255
contentType, varchar, 45
fileData, longblob
summary, text

Any help is much appreciated

on February 2, 2006 12:31 AM
# Keith said:

Thanks for this post Jeremy, it certainly helped better than the MySQL Docs :)

on February 7, 2006 06:53 PM
# robert kofler said:

For those who wonder about the max number of rows in MySQL:
I'm running mysql-4.1.16-r30 + gentoo-linux on Opteron252.
Now having 120270382 (120M) rows.
Ran into the 'table is full' error. Now I try:
ALTER TABLE xyz MAX_ROWS=102003004005
but alter takes a while...

on February 12, 2006 03:28 PM
# robert kofler said:

- update -
tried to extend MAX_ROWS for MyISAM with:
"ALTER TABLE tanimoto MAX_ROWS=180000111000" and realized that the value of 180,000,111,000 resulted in a (smaller) maximum row count of only 3,906,451,864. A higher value (280,...) caused the real value (used/shown by the db) to decrease again -> seems to be a bug.

I need the table for statistical data evaluation (only SELECTs) - would InnoDB be a better choice? Or an upgrade to version 5 (which might have better 64-bit support)?

(running mysql-4.1.16-r30 + gentoo-linux on Opteron252 + 6GB Ram)

on February 13, 2006 01:40 PM
# robert kofler said:

...now using mysql-5.0.18-r30.
ALTER TABLE tbl_name MAX_ROWS=4294967295 is the maximum which is accepted by MyISAM tables.
(...but 4.3 billion rows is quite a lot ;-)

on February 13, 2006 03:16 PM
# robert kofler said:

last statement about mysql:
Had about 470,000,000 rows and about 45GB of data files (MyISAM) when the db crashed. It still shows 45GB but only 37,000 rows now (myisamchk produced nothing useful).
This means a loss of 4 days of simulation data.

...now switching to postgres.

on February 17, 2006 01:36 PM
# Sam said:

For these large size tables, what sort of server spec is required for fast processing?

on March 11, 2006 09:23 AM
# Andy said:

To ensure that the alter table process does "repair by sorting" (as opposed to the much slower "repair by key cache"), add the following to /etc/my.cnf

# /etc/my.cnf
myisam_max_sort_file_size=150G
myisam_max_extra_sort_file_size=150G

on June 19, 2006 06:36 AM
# Shiraz said:

Now, the astute reader will notice the Create_options specify a limit of 4.2 billion rows. That's right, there's still a limit, but now it's a limit on number of rows, not the size of the table. Even if you have a table with rows that are 10 times as large, you're still limited to roughly 4.2 billion rows.

So, would that mean if I'm storing rows that contain large BLOBs (~0.5MB each), I can change MAX_ROWS to 8000 to double storage size?! And what should AVG_ROW_LENGTH be?

Thanks for putting this all together

on July 12, 2006 08:41 AM
# Ryan McParland said:

I have a high-ish transaction table that creates lots of rows, but subsequently deletes the rows 15 minutes after they are created. Monitoring the memory size while creating and then deleting, the memory usage still grows. Why does that happen? Will it continue to happen? Or will some 'cache' be removed after some limit?

on August 16, 2006 05:00 AM
# Rory McCann said:

For the record most default installs of recent Linux support files up to 4GiB. However you can use the 'RAID' option when creating tables to get much more than that. (see: http://www.devshed.com/c/a/MySQL/Storage-Engine-Table-Types/12/ as well as the create table syntax: http://dev.mysql.com/doc/refman/4.1/en/create-table.html).

Unfortunately you'll have to drop the table and recreate it, so it might not be of much use.

on December 19, 2006 01:54 AM
# Ido said:

I've got a large table, and my max_data_size is big (much more than 4GB).
When the table exceeds 4GB, mysql goes crazy. It just ignores the first 4GB of data.
While queries now take a lot of time (several minutes for SELECT COUNT(*) FROM table), they ignore most of the data (the above query returns 3000, while the table contains over 100000 rows).
Did any of you encounter similar problems?

on April 25, 2007 05:42 AM
# Dan Moore said:

Regarding Kris' and Rich's comments (about the data being deleted), while it's always a good idea to backup your database before you make any major changes, I did not see data loss when I ran the 'alter table xxx max_rows ...' statement. I ran the statement on mysql 4.1.7-max-log.

on May 22, 2007 08:51 AM
# sanjay D said:

Hey all... please let me know how many entries there can be in a single table of a database in MySQL 5.0... thanks in advance.

regards
Deshwal sanjay

on October 12, 2007 12:39 AM
# cypress said:

# sanjay D said:

Hey all... please let me know how many entries there can be in a single table of a database in MySQL 5.0... thanks in advance.

regards
Deshwal sanjay
on October 12, 2007 12:39 AM

<<<

i dunno, but i'm sitting here and adding one by one, telling you when i'm done.

on December 7, 2007 04:39 AM
# Jatin said:

Hi,

I had about 38 lakh (3.8 million) records in my table, which occupied 4GB of data storage. In order to increase the storage limit I ran an alter command (max_rows = 100000000 avg_row_length = 1115). A quarter of my data got deleted. Any particular reason for MySQL 4.0 to delete random rows?

on January 2, 2008 05:53 PM
# Bor said:

I have MySQL 5.1.
mysqladmin -u root -p variables says that myisam_data_pointer_size = 6 bytes.
But MAX_ROWS=4294967295 all the same.
I think MAX_ROWS, if the pointer size = 6 bytes, must be 48^2-1.

on February 8, 2008 12:29 AM
# jcn50 said:

Thanks, you saved my ass today~.

on June 3, 2008 04:21 AM
# said:

@Bor:

Your math is a bit wrong here. If we're using a MAX_ROWS pointer size of 6 bytes, it would be 2^48 - 1, not 48^2. Though I'm confident that you had the right idea in mind.

Of course with 6 bytes, we'd have about 281 trillion rows.

on September 8, 2008 04:51 PM
# php application development said:

Nice unique post, thanks for clearing all doubts...

on January 13, 2009 12:24 AM
# Fausto said:

Thank you very much for your help! I was almost reformatting a Debian Sarge install, thinking the culprit was a file system limitation and blaming this awful limit on ext3. Now, reading your text, I discovered the solution.

on March 11, 2009 05:58 AM
# runs4real said:

Thanks for your several posts on the MySQL database. However, I just want to know whether MySQL would be a good backend for a cargo management system running on a Windows 2000 system. The volume of transactions is expected to be quite high, so in no time 4GB would be exhausted, and dropping the tables won't be an option then. Do I go for another database like Microsoft SQL Server or Oracle? Cost is a constraint, though.
Thanks

on April 10, 2009 12:45 AM
# Vividisha said:

I have a problem: basically I want to build a forum for students where they can share ideas. I want to build it on phpBB and Joomla, so I will be using MySQL 5.0. I want to ask how many users it can handle without crashing. I am planning not to store BLOB information in the database.
I will be using Windows (32-bit) with NTFS.
please help me out.

Thanks in advance.

on August 11, 2009 06:44 AM
# sourav ray said:

Hello Jeremy,
Does the MySQL Archive storage engine have a similar limitation in terms of table size or number of rows?

on August 13, 2009 01:09 AM
# kkey said:

Does MySQL have something similar to RMAN in Oracle?

on August 27, 2009 02:56 PM
# Allen said:

Now that most Intel and AMD processors support x86_64, and people have access to Windows 7 x64 and Ubuntu x86_64, the hardware and OS are easy enough to get a hold of.

(note that 1EB is 2^60 bytes)

64-bit pointers can address up to 16 EB

NTFS has a max file size of 16 EB minus 1K

Ext4 has a max file size of 16 TiB (with the default 4 KiB block size)

So if you use one of the above file systems on a 64-bit system, your database can be much, much larger than 4GB now.

on February 28, 2010 08:06 PM
# Frank said:

I wasn't looking for this -- I stumbled upon it. But I had to say that this was interesting and informative. So, thank you.

You've also explained it clearly.

on April 7, 2010 12:23 PM