DragonFly commits List (threaded) for 2009-07
Re: DragonFly-2.3.1.868.g90eca master sbin/hammer cmd_expand.c sys/vfs/hammer hammer_expand.c hammer_ioctl.h
Matthew Dillon wrote:
:Yeah! But let's make expansion stable first. I think replacing the root
:volume can be quite tricky, as one has to move the layer1 array and has
:to take care that the undo log is correctly "moved" to another volume.
:Btw, is there a pointer to the undo log (i.e. can it be placed on
:any volume) or is the undo log implicitly assumed to be on the
:root volume?
:
:When I expand a hammer filesystem, upon umount I still get the following
:message:
The undo log is on the volume marked as the root volume. The final
switch-over of the root volume could be problematic. The vol_rootvol
field in the hammer_volume_ondisk structure has to be set the same
on all volumes.
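A minimal sketch of what keeping that field consistent would look like;
only the vol_rootvol field itself comes from the discussion above, the
iteration and helper usage are illustrative:

    static void
    set_new_rootvol(hammer_transaction_t trans, hammer_mount_t hmp,
                    int32_t new_rootvol)
    {
            hammer_volume_t volume;
            int vol_no;
            int error;

            /* every volume header must agree on vol_rootvol */
            for (vol_no = 0; vol_no < HAMMER_MAX_VOLUMES; ++vol_no) {
                    volume = hammer_get_volume(hmp, vol_no, &error);
                    if (volume == NULL)
                            continue;   /* volume numbers may be sparse */
                    hammer_modify_volume_field(trans, volume, vol_rootvol);
                    volume->ondisk->vol_rootvol = new_rootvol;
                    hammer_modify_volume_done(volume);
                    hammer_rel_volume(volume, 0);
            }
    }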
: HAMMER: umount flushing......... giving up
It's possible you are generating too much UNDO without flushing
in between.
You may be able to trivially avoid generating too much undo with
a trick. You are generating undo for all your layer2 formatting,
but if layer2 previously did not contain any valid data then you
are not actually overwriting anything and you do not have to
generate undo for the layer2 updates.
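In sketch form: passing a NULL/0 range to hammer_modify_buffer() marks
the buffer modified without recording UNDO. The field initialization
below mirrors how HAMMER formats a fresh layer2 big-block; treat the
exact constants as assumptions:

    static int
    format_layer2_no_undo(hammer_transaction_t trans,
                          hammer_off_t layer2_offset)
    {
            struct hammer_blockmap_layer2 *layer2;
            hammer_buffer_t buffer = NULL;
            int error = 0;

            layer2 = hammer_bread(trans->hmp, layer2_offset, &error, &buffer);
            if (layer2) {
                    /* NULL/0 range: mark modified, generate no UNDO */
                    hammer_modify_buffer(trans, buffer, NULL, 0);
                    bzero(layer2, sizeof(*layer2));
                    layer2->zone = 0;
                    layer2->append_off = 0;
                    layer2->bytes_free = HAMMER_LARGEBLOCK_SIZE;
                    layer2->entry_crc = crc32(layer2, HAMMER_LAYER2_CRCSIZE);
                    hammer_modify_buffer_done(buffer);
            }
            if (buffer)
                    hammer_rel_buffer(buffer, 0);
            return (error);
    }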
I did that in the first "version", where I used bread() functions to
completely format layer2 entries while the volume was offline and only
wrote the layer1 entry/entries while online. Despite not generating many
UNDOs (exactly one, for the single layer1 entry), the "umount flushing"
issue still appeared.
I would experiment with that first. If it doesn't prevent the
flushing problems then add hammer_flusher_meta_halflimit() checks
in the layer1 loop as appropriate (similar to what is in
hammer_rebalance.c). Remember that everything must be unlocked
before you can call hammer_flusher_wait().
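Something like the following, modeled on the loop in hammer_rebalance.c;
format_one_layer1_entry() is a hypothetical stand-in for whatever the
expand loop actually does, and any locks it holds must be dropped before
the wait:

    static void
    expand_format_layer1(hammer_transaction_t trans,
                         int entry_beg, int entry_end)
    {
            int seq;
            int i;

            seq = hammer_flusher_async_one(trans->hmp);
            for (i = entry_beg; i < entry_end; ++i) {
                    format_one_layer1_entry(trans, i);  /* hypothetical */

                    if (hammer_flusher_meta_halflimit(trans->hmp) ||
                        hammer_flusher_undo_exhausted(trans, 2)) {
                            /* everything must be unlocked here */
                            hammer_flusher_wait(trans->hmp, seq);
                            seq = hammer_flusher_async_one(trans->hmp);
                    }
            }
            hammer_flusher_wait(trans->hmp, seq);
    }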
Will try this approach.
Unfortunately there might be an issue with the layer1 formatting
as well. You have to write out undo for that, and if you flush
in the middle there will be a partially activated volume in the
filesystem even though the entire formatting has not been completed.
In case of a failure? Sure, but it's not critical, as no data loss will
happen. It's easy enough to "release" the volume (once this is
implemented) and retry.
When formatting layer2 while the volume is online, I'd better
generate UNDOs for the layer2 entries as well, otherwise really bad
things might happen, like layer1 being there while the layer2 entries
are not initialized. And that's what I am doing.
The amount of data that I am writing is fairly small, approx. 8 MB
for a volume up to 4 TB in size (excluding UNDO record overheads),
so I don't think that this matters much.
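For reference, the arithmetic behind that figure, assuming the usual
on-disk sizes (16-byte layer2 entries, 8 MB big-blocks):

    /*
     * layer2 entries per big-block: 8 MB / 16 bytes = 524288
     * coverage of one layer1 entry: 524288 * 8 MB   = 4 TB
     *
     * So a volume up to 4 TB needs one 8 MB layer2 big-block plus a
     * single layer1 entry -- about 8 MB of blockmap metadata total.
     */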
If that case occurs you have to detect the condition where the
volume has already been added, skip the bits already formatted,
and continue formatting from where you left off. You can do this
by testing whether the related layer1 entry has already been formatted
(if it hasn't been, it will still be marked HAMMER_BLOCKMAP_UNAVAIL).
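A minimal sketch of that test; hammer_bread() and
HAMMER_BLOCKMAP_UNAVAIL are the real names, the framing around them is
illustrative:

    static int
    layer1_already_formatted(hammer_mount_t hmp, hammer_off_t layer1_offset)
    {
            struct hammer_blockmap_layer1 *layer1;
            hammer_buffer_t buffer = NULL;
            int error = 0;
            int formatted;

            layer1 = hammer_bread(hmp, layer1_offset, &error, &buffer);
            formatted = (layer1 != NULL &&
                         layer1->phys_offset != HAMMER_BLOCKMAP_UNAVAIL);
            if (buffer)
                    hammer_rel_buffer(buffer, 0);
            return (formatted);     /* nonzero: already done, skip it */
    }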
Another possibility is that you are forgetting to unlock something,
but so far as I can tell the locking looks ok.
:Right now, I split the layer1/layer2 formatting into two steps.
:First I format layer2 (using bread/bwrite), then after the call to
:hammer_install_volume() I do all the layer1 formatting using
:hammer_bread (for current hard disks this is just one entry) etc.
:
:Do you think it makes sense to do all the formatting using hammer_xxx
:functions? I think it is easier to not split the layer1/layer2
:formatting into two steps.
:
:Regards,
:
: Michael
If the volume is live... that is, connected to the mount...
then layer2 must be formatted first and layer1 last, as the
layer2 buffer will become addressable the moment the related layer1
entry is formatted.
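In sketch form, with illustrative helper names:

    /* step 1: fully format the layer2 big-block (not yet reachable) */
    error = format_layer2_big_block(trans, phys_offset);

    /*
     * step 2: only then write the layer1 entry; the moment it goes
     * out, the layer2 block above becomes addressable.
     */
    if (error == 0)
            error = format_layer1_entry(trans, layer1_offset, phys_offset);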
Exactly what I am doing.
Thanks! I'll try some flushing in between writes. Hopefully this helps.
Regards,
Michael