DragonFly kernel List (threaded) for 2009-03
Re: Porting tmpfs
:But there is one problem with functions that need memory mapping. DragonFly
:filesystems usually use vop_stdgetpages() and vop_stdputpages() together with
:the buffer cache (except NFS). I think there is no need to use the buffer
:cache in my case, because it is a memory filesystem, but that makes it hard
:to implement this properly.
:
:I have drawn all of these conclusions from the source code, so there may be
:some errors. If I'm wrong, please correct me.
:
:NetBSD uses the uvm_bio* functions for read and write operations; FreeBSD
:uses its own tmpfs_mappedread() and tmpfs_mappedwrite(). Trying to use the
:FreeBSD variant causes an error, because those functions do not change the
:vnode's VM object, but vnode_pager_generic_getpages() expects some changes
:after VOP_READ().
:
:I think I must use the generic getpages/putpages functions with some changes
:in my read/write functions to get correct memory mapping. Can anyone point
:me in the right direction?
I don't think it is possible without using the buffer cache. The buffer
cache does all the necessary cache coherency handling between the VM
page cache and the filesystem.
The question is how to do this without creating duplicate storage. We
would want to be able to move VM pages between the per-vnode buffer
cache and the backing object(s) for tmpfs.
I think for now just use the buffer cache and get it working, even
though that means syncing data between the buffer cache and the backing
uobj.
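A buffer-cache-based read path along the lines described above might look roughly like this. This is a minimal, non-compilable sketch against DragonFly-style kernel APIs (bread(), bqrelse(), uiomove()); the function name tmpfs_read, the TMPFS_BLKSIZE/TMPFS_BLKMASK constants, and the tmpfs_node layout are illustrative assumptions, not code from the actual port:

```c
/*
 * Hypothetical sketch: route tmpfs reads through the buffer cache so
 * the kernel's coherency handling between the VM page cache and the
 * filesystem applies.  TMPFS_BLKSIZE, TMPFS_BLKMASK, and the
 * tmpfs_node fields are assumed for illustration.
 */
static int
tmpfs_read(struct vop_read_args *ap)
{
	struct vnode *vp = ap->a_vp;
	struct uio *uio = ap->a_uio;
	struct tmpfs_node *node = VP_TO_TMPFS_NODE(vp);
	struct buf *bp;
	off_t base_offset;
	size_t offset, len;
	int error = 0;

	while (uio->uio_resid > 0 && uio->uio_offset < node->tn_size) {
		/* Block-align the offset for the buffer cache. */
		offset = (size_t)(uio->uio_offset & TMPFS_BLKMASK);
		base_offset = uio->uio_offset - offset;

		/* bread() fills the buffer from the backing store. */
		error = bread(vp, base_offset, TMPFS_BLKSIZE, &bp);
		if (error) {
			brelse(bp);
			break;
		}

		/* Clamp the copy to the block, the request, and EOF. */
		len = TMPFS_BLKSIZE - offset;
		if (len > uio->uio_resid)
			len = uio->uio_resid;
		if (len > node->tn_size - uio->uio_offset)
			len = (size_t)(node->tn_size - uio->uio_offset);

		/* Copy out of the buffer into the caller's space. */
		error = uiomove((char *)bp->b_data + offset, len, uio);
		bqrelse(bp);
		if (error)
			break;
	}
	return (error);
}
```

A write path would mirror this with bdwrite()/bwrite(); the duplicate-storage cost mentioned above shows up because the buffer cache holds a copy of data that already lives in the tmpfs backing object.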
I think the way to fix this is to implement a feature in the real
kernel that tells it that the VM object backing the vnode should
never be cleaned (so clean pages in the object are never destroyed),
and then instead of destroying the VM object when the vnode is
reclaimed we simply remove the vnode association and keep the VM
object as the backing uobj.
-Matt