DragonFly kernel List (threaded) for 2003-07
Re: just curious
>
>:> or mmap()ing it. mmap()ing it would result in the data being shared,
>:> but with mmap() one can also make things copy-on-write (MAP_PRIVATE), and
>:> so forth. We already have the VM object layering model in place in the
>:> kernel to make it all possible so it would not require any special effort.
>:
>:So you'll be able to use the normal UNIX system calls to access
>:arbitrary VM objects? So if I have a VM object that represents a buffer
>:in another user process, it'll just look like a file to me? What happens
>:if I overrun the end of the buffer, ENOSPC?
>:
>:That's got some amazingly cool possibilities.
>
> Yes. Compare it against the Mach 'I'm going to map the data into your
> address space' model. There will be some cases where having the kernel
> map the address space is more efficient (saving two system calls),
> but most of the time there is no need or reason to map the data and in
> those cases passing a descriptor is far more efficient.
>
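(To make the MAP_PRIVATE point concrete for anyone following along: the
copy-on-write behaviour described above is visible from plain userland C
today. A rough sketch, nothing DragonFly-specific; the path is made up and
error checking is omitted.)

#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/cow-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
    write(fd, "original", 8);            /* give the page backing store */

    char *p = mmap(NULL, 8, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE, fd, 0);
    p[0] = 'O';                          /* COW fault: the kernel copies the
                                          * page; the file is never touched */
    char buf[8];
    pread(fd, buf, 8, 0);
    printf("mapping: %.8s  file: %.8s\n", p, buf);
    /* prints "mapping: Original  file: original" */

    munmap(p, 8);
    close(fd);
    return 0;
}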
I should note this is exactly the sort of thing I am looking for... I want
access to the other end's data... I don't care if it's read-only or not...
After all, a send can be seen as nothing more than the sender granting
remote read permission on a region of its memory... at which point a receive
is just a copy out of that read-only memory by the receiving process.
Even if I cannot remotely "POKE" into someone else's address space [like a
Portals Put, http://www.sandiaportals.org , or an MPI_Put] I can still have
send/receive semantics but without all those damned copies lying around [in
the kernel... an mmap()ed file, etc.] :).
That would be cool in itself.
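The no-extra-copies half is already expressible with a plain shared mapping,
modulo the remote-grant machinery being discussed. A rough sketch of
"send == publish into a readable region, receive == copy out of it"; the
shm name is made up, error checking is omitted (and some systems want -lrt):

#include <sys/mman.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/dfly-demo", O_RDWR | O_CREAT, 0600);
    ftruncate(fd, 4096);

    char *send_buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
    strcpy(send_buf, "hello from the sender");    /* the "send" */

    if (fork() == 0) {
        /* receiver: a read-only view of the sender's region */
        char *rx = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        char local[64];
        memcpy(local, rx, sizeof(local));         /* the "receive" */
        printf("receiver copied: %s\n", local);
        _exit(0);
    }
    wait(NULL);
    shm_unlink("/dfly-demo");
    return 0;
}

The only copy on the data path is the receiver's own memcpy(), which is
exactly the single copy described above.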
> Not to mention the space abstraction... passing a descriptor allows you
> to abstract I/O operations of any size. You could handle ten someones
> each trying to write a 2GB buffer to your userland VFS. And people are
> always forgetting atomicity. There is no expectation of atomicity with
> memory mappings but there is one for I/O operations like read() or write().
queuing == atomic ordering of ops?
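On the atomicity contrast above: POSIX guarantees that a single write(2) of
up to PIPE_BUF bytes to a pipe never interleaves with another write, while
concurrent stores through a shared mapping come with no such guarantee. A
rough sketch, error checking omitted:

#include <sys/wait.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void writer(int fd, const char *rec)
{
    for (int i = 0; i < 100; i++)
        write(fd, rec, strlen(rec));  /* one syscall == one atomic record */
}

int main(void)
{
    int p[2];
    pipe(p);

    if (fork() == 0) { close(p[0]); writer(p[1], "AAAAAAAA\n"); _exit(0); }
    if (fork() == 0) { close(p[0]); writer(p[1], "BBBBBBBB\n"); _exit(0); }
    close(p[1]);

    char line[64];
    FILE *in = fdopen(p[0], "r");
    while (fgets(line, sizeof(line), in))
        if (strchr(line, 'A') && strchr(line, 'B'))
            printf("torn record!\n"); /* never printed: each record is
                                       * either all A's or all B's */
    wait(NULL);
    wait(NULL);
    return 0;
}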