[Linux-HA] high-available file systems
l.g.e at web.de
Fri Mar 12 03:15:55 MST 2004
/ 2004-03-12 09:38:45 +0100
\ Kristian Rink:
> Hi all,...
> ...trying to get serious with high availability in our network (and
> facing the fact that our server availability has been pretty poor so
> far), I am in search of software to solve the following problem:
> - Our servers run both GNU/Linux and (in two cases) still old
> Windows NT4; most of the data still resides on RAID drives
> attached to the NT boxes while the rest of the server site is
> slowly migrating to GNU/Linux. Either way, I need a tool to
> replicate file systems across different OS platforms.
> - The volumes themselves are rather big (more or less 500 GB on each
> of the production machines, and rapidly growing). This currently
> keeps me from using OpenAFS, after reading somewhere that OpenAFS
> volumes are limited to 8 GB.
> - The folder structure on the data volumes is quite complex; that is
> why using rsync to "mirror" file systems from Windows NT to
> GNU/Linux was eventually abandoned: rsync took more than four hours
> just to build its file list before actually copying anything.
> Currently I am browsing linux-ha.org for something appropriate to
> solve this situation, but in every case there are drawbacks keeping
> me from using a given piece of software (most of the time it is that
> I cannot use it to link Windows into my network). How are you
> handling setups like these? I would be thankful for any
> inspiration. :)
> TIA and bye,
Just some thoughts:
Pull the RAID out of the NT box and plug it into some Linux Samba
server instead.
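A minimal smb.conf sketch for such a share (share name, path, and
group below are made up, adjust to your environment):

  [global]
      workgroup = WORKGROUP
      security = user

  [data]
      path = /srv/raid/data
      read only = no
      valid users = @staff

The NT4 clients keep accessing the data over SMB as before, while the
volume itself already lives on Linux.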
Run several rsyncs, each processing only one subtree, or even just a
partial subtree of about 100,000 files, or one million, or...
The --include-from option of rsync helps here; see the sketch below.
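For example, to limit one rsync run to a single subtree (the host name
and paths below are made up), a pattern file plus --include-from could
look like this:

  # projects.rules -- transfer only /projects, skip everything else
  + /projects/
  + /projects/**
  - *

  rsync -av --include-from=projects.rules \
      /mnt/ntraid/ backupserver:/srv/mirror/

With one such pattern file per subtree, each rsync only builds a file
list for its own part of the tree instead of scanning the whole 500 GB
volume, and the individual runs can even go in parallel.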