DragonFly kernel List (threaded) for 2004-08
Re: VFS ROADMAP (and vfs01.patch stage 1 available for testing)
-On [20040817 07:12], Matthew Dillon (dillon@xxxxxxxxxxxxxxxxxxxx) wrote:
> Well, the intent isn't really to emulate a kernel within a kernel. It's
> more a matter of design. Either we encapsulate all the kernel data
> associated with a cluster node in a structure and then refer to it via
> that structure, sort of like how parts of a jail work now, or we
> compile a subset of the kernel into a KLD and use kernel globals
> as per normal (but in actuality they would be private variables
> within the KLD).
I guess part of this is solving the issue of the kernel being bound/bootstrapped
to one CPU, no? Right now, if you have an SMP box, the kernel always gets bound
to CPU #0. If any CPU #n with n > 0 fails, the kernel can continue to run, minus
whatever processes were bound to the failed CPU. However, if CPU #0 dies you
lose the box.
> The biggest advantage of the KLD methodology is that there is no way
> to accidentally (both from a software/compilation viewpoint and from a
> runtime viewpoint) access information that doesn't belong to the cluster
> node. The only hooks the cluster node would have to the real kernel
> would be to the LWKT subsystems (LWKT scheduler, slab allocator,
> certain block devices, and VM). So, for example, the cluster node
> would have its own private user process scheduler, its own protocol
> stacks, its own (virtual) network interfaces, its own filesystems,
> its own buffer cache, its own sysctl set, etc.
Sounds logical.
Rendering clusters would love a simple set-up like the one you proposed.
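For the KLD approach, the way I picture it is roughly the following: the node's
kernel subset compiled as its own module, its "globals" reduced to file-scope
statics private to that module, and a small hook table as the only path back
into the real kernel (LWKT scheduler, slab allocator, VM, a few block devices).
Again a made-up userland mock just to illustrate the idea, not real code:

    #include <stdio.h>
    #include <stdlib.h>

    /* the only services the host kernel exposes to a node */
    struct host_hooks {
            void *(*hk_alloc)(size_t);                 /* stands in for the slab allocator */
            void  (*hk_free)(void *);
            void  (*hk_run)(void (*)(void *), void *); /* stands in for LWKT scheduling */
    };

    /* the node's "kernel globals": file-scope statics, private to this module */
    static struct host_hooks node_hooks;
    static int               node_id;

    static void
    node_tick(void *arg)
    {
            printf("node %d: %s\n", node_id, (const char *)arg);
    }

    /* module init: the host hands over the hook table and nothing else */
    static int
    node_init(int id, const struct host_hooks *hooks)
    {
            node_id = id;
            node_hooks = *hooks;
            node_hooks.hk_run(node_tick, "hello from inside the node");
            return 0;
    }

    /* mock "host kernel" wiring the hooks to libc, just so this runs */
    static void run_now(void (*fn)(void *), void *arg) { fn(arg); }

    int
    main(void)
    {
            struct host_hooks hooks = { malloc, free, run_now };

            return node_init(1, &hooks);
    }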
--
Jeroen Ruigrok van der Werven <asmodai(at)wxs.nl> / asmodai / kita no mono
Free Tibet! http://www.savetibet.org/ | http://www.tibet.nu/
http://www.tendra.org/ | http://www.in-nomine.org/
Sorrow paid for valour is too much to recall...