After re-thinking and re-tooling some of the work I've been doing to take advantage of Gearman, I've started to wish for a big file system in the sky. I guess it's no surprise that Google uses GFS with their Map/Reduce jobs and that Hadoop has HDFS as a major piece of its infrastructure.
The Wikipedia page "List of file systems" has a section on "Distributed parallel fault tolerant file systems" that appears to be a good list of what's out there. The problem, of course, is that it's little more than a list.
Do you have any experience with one or more of those? Recommendations?
I should say that I'm only interested in something that's Open Source, and I have a minor bias against big Java things as well as anything that appears as though it would cease to exist if a single company went out of business.
I'm not too worried about POSIX compliance. The main use would be for writing large files that other machines or processes would then read all or part of. I don't need updates. The ability to append would probably be nice, but that's easy to work around.
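To make the access pattern concrete, here's roughly what I mean in plain shell terms (the file name, mount point, and offsets are all made up for illustration):

$ some_producer > /mnt/bigfs/bigfile.dat    # some_producer stands in for whatever writes the data
$ dd if=/mnt/bigfs/bigfile.dat of=slice.dat bs=1M skip=1024 count=64    # read a 64MB slice starting 1GB in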
More specifically, these three have my eye at the moment:
It's interesting that some solutions deal with blocks (often large) while others deal with files. I'm not sure I have a preference for either at the moment.
But I'm open to hearing about everything, so speak up! :-)
I was recently looking to make compressed backups of some files in a tree that's actually a set of hard links (rsnapshot or rsnap style) to a canonical set of files.
In other words, I have a data directory and a data.previous directory. I would like to make a backup of the stuff in data.previous, most of the files being unchanged from data. And I'd like to do this without using lots of disk space.
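In case the hard-link setup isn't familiar, a snapshot like that can be made with something as simple as this (using the directory names from my example):

$ # GNU cp: every file in data.previous shares an inode with its twin in data,
$ # so the "copy" costs almost nothing in disk space
$ cp -al data data.previous

rsnapshot does essentially the same thing under the hood; rsync's --link-dest option is another way to get there. Either way you end up with one copy of the data and multiple names pointing at it.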
The funny thing is that gzip is weird about hard links. If you try to gzip a file whose link count is greater than one, it complains.
I was puzzled by this and started to wonder if it actually overwrites the original input file in place, instead of simply unlinking it once it's done reading it and writing out the compressed version.
So I did a little experiment.
First I create a file with two links to it.
/tmp/gz$ touch a
/tmp/gz$ ln a b
Then I check to ensure they have the same inode.
/tmp/gz$ ls -li a b
5152839 -rw-r--r-- 2 jzawodn jzawodn 0 2008-12-03 15:38 a
5152839 -rw-r--r-- 2 jzawodn jzawodn 0 2008-12-03 15:38 b
They do. So I compress one of them.
/tmp/gz$ gzip a
gzip: a has 1 other link -- unchanged
And witness the complaint. The gzip man page says I can force it with the "-f" argument, so I do.
/tmp/gz$ gzip -f a
And, as I'd expect, the new file doesn't replace the old one. It gets a new inode instead.
/tmp/gz$ ls -li a.gz b
5152840 -rw-r--r-- 1 jzawodn jzawodn 22 2008-12-03 15:38 a.gz
5152839 -rw-r--r-- 1 jzawodn jzawodn 0 2008-12-03 15:38 b
This leads me to believe that the gzip error/warning message is really trying to say something like:
gzip: a has 1 other link and compressing it will save no space
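That reading checks out if you try the same thing with a file that isn't empty (a quick sketch; the 10MB of random data is just to make the sizes obvious):

$ dd if=/dev/urandom of=big bs=1M count=10
$ ln big big2
$ du -sh .        # about 10M: two names, one copy of the data
$ gzip -f big
$ du -sh .        # now about 20M: big2 still pins the original 10M,
                  # and big.gz (random data barely compresses) sits next to it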
But I still don't see the danger. Why can't that simply be an informational message? After all, you still need enough space to store both the original and the compressed version, since the original (in the normal case) sticks around until gzip is done writing the compressed version anyway. (I checked the source code later.)
So what's the rationale here? I don't get it.