DragonFly kernel List (threaded) for 2004-04
Re: serializing token
On Apr 25, 2004, at 1:17 AM, Matthew Dillon wrote:
:...
:>	lwkt_gettoken(..)
:>	while (someglobal) {
:>		someglobal &= ~someotherprocedurethatmightblock();
:>		call procedure that might block()
:>	}
:>	lwkt_reltoken(..)
:> }
:
:Let's assume for a minute that the above code is a consumer on a work-queue
:and that the "if (someglobal)" evaluates true if there is work to be done.
:
:Would this "gettoken" "reltoken" usage be suitable to support a consumer on
:a work queue?
    It would be suitable for manipulating the work queue to pull something
    off of it, as long as you intend to allow other consumers to also
    pull things off the work queue while your consumer is potentially
    blocked in the midst of doing the work that it had pulled off.
I think I am getting it now... basically the token is subtly different from
a mutex in that it will only allow the *scheduler to run* one of the threads
that has the token at a time. This may not give you as strong a guarantee as
a mutex but it does give you some knowledge about how things are going to be
run by the OS. This knowledge is a lot like the RCU stuff being done in Linux
to avoid using a lock and to improve SMP performance.

Is that pretty close? It seems that RCU uses knowledge of how the scheduler
works and tokens actually tell the scheduler how it is allowed to work.
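
Just to check my understanding against something concrete, here is a rough
sketch of how I picture a token-protected work-queue consumer. The names
(work_token, work_queue, struct work) are mine, and I'm using the simplified
lwkt_gettoken()/lwkt_reltoken() call form from the pseudocode above rather
than the exact kernel API, so treat this as illustration only:

    /*
     * Hypothetical sketch of a token-protected work-queue consumer.
     * Names are made up for illustration; the lwkt_gettoken() /
     * lwkt_reltoken() call form is simplified.
     */
    #include <sys/queue.h>

    struct work {
            TAILQ_ENTRY(work) w_entry;
            void (*w_func)(struct work *);  /* may block while running */
    };

    static TAILQ_HEAD(, work) work_queue = TAILQ_HEAD_INITIALIZER(work_queue);
    static struct lwkt_token work_token;

    static void
    consume_work(void)
    {
            struct work *w;

            lwkt_gettoken(&work_token);
            while ((w = TAILQ_FIRST(&work_queue)) != NULL) {
                    /* Manipulating the queue is safe while we hold the token. */
                    TAILQ_REMOVE(&work_queue, w, w_entry);

                    /*
                     * This call may block.  While we are blocked another
                     * consumer holding the same token is allowed to run and
                     * pull more work off the queue, so we re-test the queue
                     * in the while() instead of assuming it is unchanged
                     * when we wake up.
                     */
                    w->w_func(w);
            }
            lwkt_reltoken(&work_token);
    }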
:Clearly the ability to call blocking functions while holding the lock is a
:nice feature :).
    This is what the above code would look like with mutexes:

	lwkt_getmutex(&mutex)
	while (someglobal) {
		someglobal &= ~someotherprocedurethatmightblock();
		call procedure that might block(&mutex)
	}
	lwkt_relmutex(&mutex)
    You would have to pass the mutex down into the procedure that might
    block so the procedure can release the mutex prior to blocking, and
    reacquire it after waking up again, before it returns to the caller.
    == Major code pollution. In my world, procedure-that-might-block() has
    no business knowing the locking state of its caller, and with tokens
    it doesn't have to.
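
To make the code-pollution point concrete for myself, this is roughly what
procedure-that-might-block() would have to turn into under the mutex scheme.
The lwkt_getmutex()/lwkt_relmutex() names are the same hypothetical ones
used above (DragonFly has no such API), and the sleep call is just a
stand-in for any blocking operation:

    /*
     * Hypothetical: the callee now has to know about its caller's mutex.
     * lwkt_getmutex()/lwkt_relmutex() and struct mutex are invented names.
     */
    static int some_event;          /* dummy wait channel */

    static void
    procedure_that_might_block(struct mutex *callers_mutex)
    {
            /*
             * Drop the caller's mutex before doing anything that can sleep,
             * otherwise every other thread that needs the mutex waits
             * behind us for the whole sleep.
             */
            lwkt_relmutex(callers_mutex);

            tsleep(&some_event, 0, "waitev", hz);   /* the blocking part */

            /*
             * Reacquire before returning so the caller's locking state looks
             * unchanged -- but any shared state the caller was relying on
             * may have been modified while the mutex was dropped.
             */
            lwkt_getmutex(callers_mutex);
    }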
Right, I think I understand now :). Since a token can't starve another
thread that has/wants it the way a mutex can, there is no need for complex
unlocking and relocking code. You just have to program with them as tokens,
not mutexes, which requires a slightly different way of thinking about your
code.
:> The code snippet above is using a token to protect a global variable.
:> It is able to optimize itself by not bothering to get the token unless
:> it thinks it has work to do, and then integrating the recheck case in
:> the while(). The token is protecting against preemptive modifications
:> to the global variable rather than protecting against changes made to
:> the global variable by the called procedures or as a side effect if
:> either procedure blocks.
:>
:
:What would be a change to the global variable that the token doesn't protect
:against if the thread does block on one of those procedures?
    Whatever changes are allowed by the algorithm. Let's say the global
    represents a bitmap. A possible change might be that another bit in
    the global might be set or cleared while we are blocked.

    There is an example of this exact mechanism in the codebase, but
    protected by a critical section rather than a token. It's the DORETI
    code in /usr/src/sys/i386/isa/ipl.s. This code looks at interrupt
    pending bits and interrupt-adjusted request flags and processes them.
    At the same time new interrupts may set new bits in the same flags
    fields.
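
So, restating the original snippet with that in mind (the names are just the
placeholders from the pseudocode fleshed out, and again the token call form
is simplified):

    /* Sketch of the flags/bitmap pattern; everything here is illustrative. */
    static unsigned int someglobal;         /* bitmap of pending work */
    static struct lwkt_token some_token;

    static unsigned int some_procedure_that_might_block(void);
    static void another_procedure_that_might_block(void);

    static void
    drain_pending_bits(void)
    {
            lwkt_gettoken(&some_token);
            while (someglobal) {
                    /*
                     * Either call below may block.  While we are blocked,
                     * other bits may be set or cleared in someglobal (by an
                     * interrupt, or by another thread holding the token), so
                     * the while() re-tests it instead of assuming it is
                     * still the same bitmap we saw before blocking.
                     */
                    someglobal &= ~some_procedure_that_might_block();
                    another_procedure_that_might_block();
            }
            lwkt_reltoken(&some_token);
    }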
:What does the scheduler do when another thread tries to get the token and
:the original token holder is blocked? I think the answer to this question
:may help clarify to me how this works.
    The second thread will successfully acquire the token if the first
    thread is blocked, and the first thread will not be allowed to wake up
    until the second thread has released the token or has blocked itself.
So on a UP system a token really only locks out interrupts that try to
acquire a token already held by the pre-empted thread. On an SMP system the
tokens tell the scheduler not to schedule more than one thread holding the
same token at a time. In either case [SMP or UP], if a thread blocks while
holding a token the scheduler is free to run another thread that also has
the token, but no more than one other thread holding the token can run.

If a thread blocks while holding a token, and it is the only thread with the
token, can an interrupt which also wants the token then run as a normal
thread would?
:> pointers such as unreferenced vnode pointers, vm_page's, and so forth.
:> There are plenty of ways to protect such structures. The vnode case is
:> probably the most complex (and it is just as complex or even more complex
:> when implemented with mutexes), most of the rest of the structures can be
:> prevented from moving around with a ref count.
:
:Is the reference count protected by tokens or mutexes? :)
    In DragonFly there are no mutexes, but we still have the Big Giant
    Lock. Most of our major structures are still protected by the BGL and
    only need a critical section to safely increment/decrement a reference
    count.
I need to learn more about critical sections in DragonFly too, it seems. My
understanding is they only allow one thread in a section at a time... this
sounds a little like tokens but I am not sure how they are different yet.
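
For my own notes, here is the ref-count pattern as I understand the
description above. crit_enter()/crit_exit() are the DragonFly calls as far
as I know; the structure and helper names are invented, and I'm assuming the
structure itself is still covered by the BGL:

    /* Sketch of a BGL-plus-critical-section protected reference count. */
    struct thing {
            int     t_refs;
            /* ... other fields, protected by the Big Giant Lock ... */
    };

    static void thing_destroy(struct thing *tp);    /* hypothetical teardown */

    static void
    thing_ref(struct thing *tp)
    {
            /*
             * The critical section keeps us from being preempted on this cpu
             * (e.g. by an interrupt thread) in the middle of the update; the
             * BGL keeps threads on other cpus out of the structure entirely.
             */
            crit_enter();
            ++tp->t_refs;
            crit_exit();
    }

    static void
    thing_unref(struct thing *tp)
    {
            crit_enter();
            if (--tp->t_refs == 0)
                    thing_destroy(tp);      /* last reference is gone */
            crit_exit();
    }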
    As we remove the BGL we have to decide how to protect such
    operations... a token is certainly reasonable (it is in fact exactly
    how lockmgr() locks protect fields in the struct lock data structure
    while executing the higher level lockmgr functions).

    Another solution is to make the code in question lockless, with no
    locking requirements at all. This is accomplished by guaranteeing that
    all potential conflicts occur on the same cpu.

    For example, Jeff's work on the TCP stack is moving all protocol
    processing for a particular PCB to a particular cpu. Since only one
    cpu is working on a particular PCB, the protocol code can access that
    PCB without acquiring any locks, mutexes, or tokens of any sort.
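
If I follow, the shape of that is something like the sketch below: derive
the owning cpu from the PCB (for instance by hashing the connection), and
hand incoming work to a protocol thread bound to that cpu so the PCB is only
ever touched from one place. Every name here (pcb_cpu(),
proto_thread_queue(), struct pktmsg, build_pktmsg()) is invented; this is
not the real netisr/TCP code, just my mental model of it:

    /* Hypothetical sketch of per-cpu, lockless PCB processing. */
    struct pcb;                     /* protocol control block */
    struct mbuf;                    /* packet data */

    struct pktmsg {
            struct pcb      *pm_pcb;
            struct mbuf     *pm_packet;
    };

    /* Hash the PCB's addresses/ports to a cpu; every lookup must agree. */
    static int pcb_cpu(struct pcb *pcb);

    /* Queue a message to the protocol thread bound to the given cpu. */
    static void proto_thread_queue(int cpu, struct pktmsg *msg);

    /* Wrap a packet and its PCB in a message (hypothetical helper). */
    static struct pktmsg *build_pktmsg(struct pcb *pcb, struct mbuf *m);

    static void
    input_packet(struct pcb *pcb, struct mbuf *m)
    {
            /*
             * Hand the work to the one cpu that owns this PCB.  Because all
             * access to the PCB funnels through that cpu's protocol thread,
             * the protocol code can read and modify the PCB without taking
             * any lock, mutex, or token at all.
             */
            proto_thread_queue(pcb_cpu(pcb), build_pktmsg(pcb, m));
    }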
That sounds really cool too.
Thanks for taking the time to discuss this with me. I think I understand
these a bit better now.
-Matt
Matthew Dillon
<dillon@xxxxxxxxxxxxx>