DragonFly kernel List (threaded) for 2006-02
Re: namecache coherency, 2nd turn
:Yes, with my implementation the lock structure would reside in B.
:When you do the A -> B cut, the group is locked, so after the cut B will
:be in a locked state in a natural way. Furthermore, my code makes
:sure that the locking stuff in A is set to "locked" before the cut, so
:when the cut takes place, A will end up locked, too.
:
:Interface-wise it would be a cache_shadow_detach(A) call, which would
:then return B. If the caller doesn't need B she can just cache_put() it,
:making it accessible for contenders. (Actually, I always do it
:like "cache_put(cache_shadow_detach(ncp))", as I never wanted to do
:anything with the "B" part [maybe the cache_put() call should be integrated
:into cache_shadow_detach()...]).
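(For reference, the quoted interface boils down to the sketch below.
The prototypes are guesses made for illustration; only the
cache_put(cache_shadow_detach(ncp)) idiom itself comes from the text
above.)

struct namecache;

struct namecache *cache_shadow_detach(struct namecache *ncp);
void cache_put(struct namecache *ncp);

static void
drop_shadow(struct namecache *ncp)
{
	/*
	 * Cut ncp (the "A" record) out of its shadow group.  The call
	 * returns the record it was shadowing ("B"), still locked;
	 * cache_put() then releases B so contenders can get at it.
	 */
	cache_put(cache_shadow_detach(ncp));
}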
The problem is not what happens after you lock the group; the problem
is the locking operation itself.
Let's say you have A->B(L).
Process #2 locks B. The lock succeeds.
Process #1 tries to lock A, and blocks on the lock residing in B.
Process #2 detaches A from B, then destroys B. From its point of view,
it has clear access to the entire group when, in fact, it doesn't.
Process #1 blows up because B(L) no longer exists.
This case cannot occur pre-patch because any given namecache record
will be ref'd (nc_refs) and this prevents the namecache record from
being destroyed while a contended lock is pending on it.
But post-patch B(L)'s only refs will be from process #2 and from the
shadow association. B(L) will not have a ref from process #1 as far
as I can tell. Both refs go away when process #2 detaches A
from B and then unlocks, sees that the ref count on B is zero, and
then destroys B. Process #1 then breaks.
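To make the interleaving concrete, here is a sketch of the two code
paths.  The prototypes are placeholders based on the names used in this
thread, not the actual DragonFly interfaces.

struct namecache;

void cache_lock(struct namecache *ncp);
void cache_unlock(struct namecache *ncp);
struct namecache *cache_shadow_detach(struct namecache *ncp);

/*
 * Process #2: holds the group lock embedded in B, cuts A loose, and
 * lets B's last refs drain.
 */
static void
process2(struct namecache *A, struct namecache *B)
{
	cache_lock(B);		/* (1) lock on B(L) succeeds              */
	/* process #1 is now sleeping on B's embedded lock (step 2)      */
	cache_shadow_detach(A);	/* (3) the A -> B association is cut      */
	cache_unlock(B);	/* (4) last ref drops, B is destroyed     */
}

/*
 * Process #1: only ever held a pointer to A, so it never ref'd B.
 */
static void
process1(struct namecache *A)
{
	cache_lock(A);		/* (2) blocks on the lock embedded in B   */
	/* (5) wakes up after step 4 holding a pointer to freed memory   */
}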
Embedding the lock in B in this case greatly reduces our coding
flexibility during splits and merges. The code needs to be more
flexible. For example, having a separately allocated shadow
information structure allows us to implement a symmetric algorithm
with respect to a process attempting to lock A and a process attempting
to lock B. They would bump a ref count in the shared shadow info
structure and then simply check that the association still exists
after the lock has been successfully obtained. If the lock is
embedded in B, aka B(L), then A would have to bump B's ref count
prior to attempting to obtain the lock, to prevent B from going away.
This is highly asymmetric and could introduce hard-to-find bugs.
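For illustration, here is a minimal sketch of what that symmetric
scheme could look like.  Every identifier below (struct shadowinfo,
nc_shadowinfo, the si_lock_*() helpers) is a placeholder invented for
this example, and the ref count manipulation is shown without the
interlocking a real implementation would need.

#include <errno.h>

struct shadowinfo {
	int	si_locked;	/* stand-in for the real lock state       */
	int	si_refs;	/* records + contenders holding refs      */
};

struct namecache {
	struct shadowinfo *nc_shadowinfo;	/* NULL when not shadowed */
	/* ... */
};

void si_lock_acquire(struct shadowinfo *si);	/* may block */
void si_lock_release(struct shadowinfo *si);

/*
 * Symmetric path: whether we are locking A or B, we bump a ref on the
 * shared shadow info first, obtain the lock, and only then check that
 * the association we were chasing still exists.
 */
static int
shadow_lock(struct namecache *ncp)
{
	struct shadowinfo *si = ncp->nc_shadowinfo;

	++si->si_refs;			/* shadow info cannot be freed    */
	si_lock_acquire(si);
	if (ncp->nc_shadowinfo != si) {
		/* Association dissolved while we slept; caller retries. */
		si_lock_release(si);
		--si->si_refs;
		return (EAGAIN);
	}
	return (0);			/* ref is dropped on unlock       */
}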
--
I understand what you are trying to do by deferring tree cleanups until
re-resolution, but I think it's a mistake. It's always a bad idea to
allow inconsistent data topologies. Implementation details can come
out and bite you and tracking down the bugs becomes almost impossible
because the code that caused the bug might have run seconds or minutes
before the code that, during re-resolution, hits the bug.
Algorithmic simplicity is a lot more important than avoiding malloc()'s.
It's too easy to program yourself into a corner.
-Matt