DragonFly users List (threaded) for 2009-08
Re: is hammer for us
The I/O bottleneck is coming from the disk subsystem and the network. I
was wondering if HAMMER can do a parallel filesystem implementation
similar to GPFS or Lustre.

Also, the reads/writes are random access; there is very little
sequential streaming, but the files are large. Each file is around 30 GB.
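A rough way to confirm that the disks are seek-bound under this access
pattern is to compare sequential and random read throughput on one of the
30 GB files. The sketch below is illustrative only; the path, chunk size,
and sample count are placeholder assumptions, not details from this thread.

import os, random, time

PATH = "/data/run0001.dat"   # hypothetical placeholder; point at a real ~30 GB file
CHUNK = 1 << 20              # 1 MiB per read
SAMPLES = 512                # 512 MiB touched in each mode

def throughput(sequential):
    size = os.path.getsize(PATH)
    fd = os.open(PATH, os.O_RDONLY)
    try:
        start = time.time()
        total = 0
        for i in range(SAMPLES):
            off = i * CHUNK if sequential else random.randrange(0, size - CHUNK)
            os.lseek(fd, off, os.SEEK_SET)
            total += len(os.read(fd, CHUNK))
        return total / (time.time() - start) / 1e6   # MB/s
    finally:
        os.close(fd)

# Note: if the file fits in server RAM the page cache will inflate both
# numbers, so run this against a cold cache for a meaningful comparison.
print("sequential: %.1f MB/s" % throughput(True))
print("random:     %.1f MB/s" % throughput(False))

If the sequential figure is near the platter rate but the random figure is
a small fraction of it, the drives are seek-limited rather than
bandwidth-limited.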
On Tue, Aug 11, 2009 at 11:42 PM, Matthew Dillon <dillon@apollo.backplane.com> wrote:
>
> :I am a student doing fluid dynamics research. We generate a lot of
> :data (close to 2TB a day). We are having scalability problems with
> :NFS. We have 2 Linux servers with 64GB of RAM, and they are serving
> :the files.
> :
> :We are constantly running into I/O bottleneck problems. Would hammer
> :fix the scalability problems?
> :
> :TIA
>
> If you are hitting an I/O bottleneck you need to determine where the
> bottleneck is. Is it in the actual accesses to the disk subsystem?
> Are the disks seeking randomly or accessing data linearly? Is the
> transfer rate acceptable? Is it the network? Is it the NFS
> implementation? Is it the underlying filesystem on the server? Are
> there parallelism issues?
>
> You need to find the answer to those questions before you can determine
> a solution.
>
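One simple way to separate the disk question from the network/NFS question
is to time the same large read on the server's local filesystem and again
through the NFS mount on a client. The sketch below is a minimal example;
both paths are hypothetical placeholders for the same data file.

import os, time

def read_rate(path, chunk=1 << 20, limit=2 << 30):   # read up to 2 GiB
    fd = os.open(path, os.O_RDONLY)
    total, start = 0, time.time()
    try:
        while total < limit:
            buf = os.read(fd, chunk)
            if not buf:
                break
            total += len(buf)
    finally:
        os.close(fd)
    return total / (time.time() - start) / 1e6        # MB/s

for label, path in (("local", "/data/run0001.dat"),
                    ("nfs  ", "/mnt/nfs/run0001.dat")):
    print("%s %.1f MB/s" % (label, read_rate(path)))

A local rate near the platter rate with a much lower NFS rate points at the
network or the NFS implementation; similar rates point back at the disks.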
> Serving large files typically does not create a filesystem bottleneck.
> i.e. any filesystem, even something like ZFS, should still be able
> to serve large linear files at the platter rate. Having a lot of ram
> only helps if there is some locality of reference in the data set.
> i.e. if the data set is much larger than available memory but there
> is no locality of reference and the disk drives are hitting their seek
> limits, no amount of ram will solve the problem.
>
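To put rough numbers on that argument: the figures below are illustrative
assumptions (64 GB of RAM against a ~2 TB/day working set, a typical 7200 rpm
drive), not measurements from this setup.

# Back-of-the-envelope estimate of why more RAM does not help a random,
# larger-than-memory working set. All values are assumed for illustration.
ram_gb         = 64          # server RAM
working_set_gb = 2048        # ~2 TB generated per day, accessed randomly
hit_rate       = ram_gb / working_set_gb            # chance a random read is cached
seeks_per_sec  = 150         # rough figure for a 7200 rpm drive
read_kb        = 64          # average random read size
miss_mb_s      = seeks_per_sec * read_kb / 1024.0   # seek-limited rate per disk

print("cache hit rate: %.1f%%" % (hit_rate * 100))           # ~3%
print("per-disk random throughput: ~%.1f MB/s" % miss_mb_s)  # ~9 MB/s
# With a ~3% hit rate nearly every read still goes to disk, so adding RAM
# barely moves the effective rate; more spindles, larger or sequential
# reads, or locality in the access pattern would.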
> (DragonFly's 64 bit support isn't reliable yet, so DragonFly can't
> access that amount of ram right now anyhow).
>
> -Matt
> Matthew Dillon
> <dillon@backplane.com>
>