DragonFly kernel List (threaded) for 2005-02
Re: phkmalloc vs. mmap direct
Well,
These plots are meant to show a particular feature of
phkmalloc--high latency when fulfilling requests that it
cannot fit within the existing keg.
It uses an internal dynamic allocator which it builds around
the mmap system call. It uses this allocator to store a
construction that it calls the page_dir. The page_dir is
something akin to a userspace version of the page table.
Rather than simply reserving a 16MB? address region for this
table, the table is continually remade so that it fits
precisely the number of entries that correspond to the
existing allocation. Thus, when you require more address
space, the following sequence of events happens:
1) brk is called
2) it is determined that the addresses now made available via
brk do not fit in the existing page_dir structure.
3) a new empty page_dir structure is made by using mmap to
get a new region
4) memcpy is used to transfer the old page_dir structure into
the new mmap allocation
5) munmap is applied to the old region
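To make that concrete, here is a rough sketch of steps 3 through 5
(this is not phkmalloc's actual code; page_dir_resize and the
variable names are made up for illustration, and it assumes the
directory is a flat array of one pointer per page):

#include <string.h>
#include <sys/mman.h>

/*
 * Grow a directory of one entry per page from old_pages entries
 * to new_pages entries, along the lines of the sequence above.
 */
static void *
page_dir_resize(void *old_dir, size_t old_pages, size_t new_pages)
{
        size_t old_len = old_pages * sizeof(void *);
        size_t new_len = new_pages * sizeof(void *);
        void *new_dir;

        /* 3) mmap a fresh, empty region for the larger table */
        new_dir = mmap(NULL, new_len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (new_dir == MAP_FAILED)
                return (NULL);

        /* 4) memcpy the old table into the new mapping */
        if (old_dir != NULL)
                memcpy(new_dir, old_dir, old_len);

        /* 5) munmap the old region */
        if (old_dir != NULL)
                munmap(old_dir, old_len);

        return (new_dir);
}

That mmap/memcpy/munmap round trip on every growth of the heap is
the latency I am pointing at in the plots.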
This is done in the belief that it conserves the RES size
of the process--a point I'm inclined to dispute, since
phkmalloc is otherwise structured so that it never reads or
writes an address within the page_dir unless the entry there
corresponds to memory that is about to be, or was just, in
use.
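A quick way to convince yourself of that: anonymous memory only
counts toward RES once it is actually touched, so a large,
mostly-untouched page_dir mapping costs address space, not resident
pages. Something like this (illustration only, nothing to do with
phkmalloc's source) shows it under top(1):

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int
main(void)
{
        size_t len = 64 * 1024 * 1024;  /* 64MB of address space */
        char *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
                return (1);

        p[0] = 1;                       /* touch a single page */

        printf("pid %d: check RES in top(1), then hit enter\n",
            (int)getpid());
        getchar();

        munmap(p, len);
        return (0);
}

RES moves by about one page, not by 64MB, until you start writing
to the rest of the region.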
Back when phkmalloc was written ('96), the situation was
entirely different. A program was more likely to be
bogged down by paging delays, which masked this particular
behavior if the program did anything with the memory it
allocated.
Do real programs incur these penalties? For sure: any
program that engages in a pseudo-LIFO allocation
pattern--i.e., any program that allocates lots of memory
during a recursive algorithm.
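For instance, something with this shape is what I have in mind
(contrived, and the constants are arbitrary; the point is only
the allocate-on-the-way-down, free-on-the-way-up pattern):

#include <stdlib.h>

static void
descend(int depth)
{
        char *buf;

        if (depth == 0)
                return;
        buf = malloc(256 * 1024);       /* grow the heap on the way down */
        descend(depth - 1);
        free(buf);                      /* give it back on the way up */
}

int
main(void)
{
        descend(256);                   /* ~64MB live at the deepest point */
        return (0);
}

Going by the description above, each time the descent pushes brk
past what the current page_dir covers, the whole table gets remade
and copied, and the copies get bigger the deeper you go.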
-Jon