DragonFly kernel List (threaded) for 2009-12
Re: kernel leaking memory somewhere
2009/12/16 Matthew Dillon <dillon@apollo.backplane.com>
:Hi guys,
:
:This is the top output on the FBSD 7 box that runs the same software as
:the DF box. Actually, the FBSD box also runs MySQL and Ruby on Rails!
:
:...
:As you can see, the active memory here is 294MB.
:
:49 processes: 49 running
:CPU states: 0.0% user, 0.0% nice, 0.6% system, 0.0% interrupt, 99.4% idle
:Memory: 1213M Active, 1836M Inact, 320M Wired, 25M Cache, 199M Buf, 114M Free
:Swap: 4096M Total, 2692K Used, 4093M Free
:...
:
:The difference is clear. The two servers have the same amount of physical memory.
:
:I'm gonna try the memory program, but I'm telling you, all I really need to
:do is start doing some heavy I/O and get those postgres processes going
:(they usually only use about 30MB RES memory) and the box starts swapping.
:
:
:Petr
This has nothing to do with actual memory pressure. The core pageout
code is similar, but the code that manages memory pressure, and thus
controls the balance between the active and inactive queues, is
completely different between FreeBSD and DragonFly.
Also, and this is important... DragonFly maintains the active/inactive
state for VM pages cached by the filesystem via the buffer cache,
particularly when a page goes from wired to unwired or vice versa.
I have no idea whether FreeBSD did any similar work.
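To make the queue-balancing idea concrete, here is a minimal sketch in C
of what demoting pages from an active to an inactive queue can look like.
This is not the actual DragonFly or FreeBSD pageout code; every structure
and function name in it (vm_page, pagequeue, balance_queues, and so on)
is invented for illustration.

#include <stddef.h>
#include <stdio.h>

struct vm_page {
    struct vm_page *next;
    int referenced;     /* notionally set when the page is accessed */
    int act_count;      /* aging counter */
};

struct pagequeue {
    struct vm_page *head, *tail;
    int count;
};

static struct vm_page *
dequeue(struct pagequeue *q)
{
    struct vm_page *m = q->head;

    if (m != NULL) {
        q->head = m->next;
        if (q->head == NULL)
            q->tail = NULL;
        q->count--;
    }
    return m;
}

static void
enqueue(struct pagequeue *q, struct vm_page *m)
{
    m->next = NULL;
    if (q->tail != NULL)
        q->tail->next = m;
    else
        q->head = m;
    q->tail = m;
    q->count++;
}

/*
 * Demote cold pages from the active queue until the inactive queue
 * reaches its target.  Recently referenced pages are aged upward and
 * kept active; pages whose age counter runs out move to the inactive
 * queue, where a pageout daemon could launder or free them.
 */
static void
balance_queues(struct pagequeue *active, struct pagequeue *inactive,
    int inactive_target)
{
    int scans = active->count;              /* one pass over the queue */

    while (inactive->count < inactive_target && scans-- > 0) {
        struct vm_page *m = dequeue(active);

        if (m == NULL)
            break;
        if (m->referenced) {
            m->referenced = 0;
            m->act_count++;
            enqueue(active, m);             /* still hot, keep active */
        } else if (--m->act_count > 0) {
            enqueue(active, m);             /* cooling, not cold yet */
        } else {
            enqueue(inactive, m);           /* cold, demote */
        }
    }
}

int
main(void)
{
    struct vm_page pages[8] = {{0}};
    struct pagequeue active = { NULL, NULL, 0 };
    struct pagequeue inactive = { NULL, NULL, 0 };

    for (int i = 0; i < 8; i++) {
        pages[i].act_count = 1;
        pages[i].referenced = (i % 2 == 0); /* every other page "hot" */
        enqueue(&active, &pages[i]);
    }
    balance_queues(&active, &inactive, 4);
    printf("active=%d inactive=%d\n", active.count, inactive.count);
    return 0;
}

The real pageout daemons are far more involved (they launder dirty pages,
respect wiring, and so on), but the active/inactive split above is the
part the two systems balance differently.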
I also did work a few years ago on properly treating the entire
available VM as (memory + swap) instead of just (swap), which changes
how memory pressure related to dirty pages is accounted for. I.e., in
DFly, if you do not configure swap space, the system will run all the
way to the point where 90% of the VM pages in the system are dirty.
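The accounting change can be spelled out with a few lines of arithmetic.
This is only a sketch of the idea, assuming the 90% figure mentioned
above; dirty_page_limit is a made-up name, not a kernel function.

#include <stdio.h>

#define DIRTY_PCT 90   /* threshold taken from the paragraph above */

static long
dirty_page_limit(long ram_pages, long swap_pages)
{
    long total_vm = ram_pages + swap_pages;   /* VM = memory + swap */

    return total_vm * DIRTY_PCT / 100;
}

int
main(void)
{
    long ram = 2 * 1024 * 1024 / 4;   /* 2GB of RAM in 4KB pages */

    /* Swapless: the limit is 90% of RAM, not zero. */
    printf("no swap : %ld dirty pages allowed\n", dirty_page_limit(ram, 0));
    /* With swap configured, the limit grows with total VM. */
    printf("4G swap : %ld dirty pages allowed\n",
        dirty_page_limit(ram, 2 * ram));
    return 0;
}

Sizing the limit against swap alone would give a swapless machine a
dirty-page budget of zero, which is exactly the pathology the change
avoids.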
How hard would it be to change those settings at runtime, in particular
how much memory is available for disk caching and the total available
memory space?
In virtualized environments (e.g. kvm or a vkernel), it could make great
sense to "return" memory pages to the host. Think of vkernels that don't
need a fixed-size memory image but would otherwise use most of that space
for disk caching, which probably makes no sense at all in a vkernel (it
would double-cache those blocks, once in the host and a second time in
the vkernel).
Just my thoughts, as I recently stumbled across the virtio-balloon driver :)
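To illustrate the "return pages to the host" idea, here is a toy model of
what a balloon driver does conceptually. It is not the real virtio-balloon
driver; notify_host(), balloon_inflate() and balloon_deflate() are
invented stand-ins for the actual guest/hypervisor interface.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Stand-in for handing a page frame number to the hypervisor. */
static void
notify_host(const char *what, uintptr_t pfn)
{
    printf("%s pfn %#lx\n", what, (unsigned long)pfn);
}

/* Inflate: take npages out of guest use and offer them to the host. */
static void **
balloon_inflate(int npages)
{
    void **pages = calloc(npages, sizeof(void *));

    for (int i = 0; i < npages; i++) {
        /* A real driver would wire guest-physical pages here; we fake
         * the frame number from the virtual address. */
        pages[i] = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
        notify_host("host may reclaim", (uintptr_t)pages[i] / PAGE_SIZE);
    }
    return pages;
}

/* Deflate: take the pages back from the host and free them for guest use. */
static void
balloon_deflate(void **pages, int npages)
{
    for (int i = 0; i < npages; i++) {
        notify_host("guest takes back", (uintptr_t)pages[i] / PAGE_SIZE);
        free(pages[i]);
    }
    free(pages);
}

int
main(void)
{
    void **held = balloon_inflate(4);   /* give 4 pages to the host */
    balloon_deflate(held, 4);           /* ...and take them back */
    return 0;
}

In the real virtio-balloon driver the inflated pages are wired guest
pages and the frame numbers travel to the host over a virtqueue; the
point is simply that memory handed back this way stops competing with
the host's own page cache.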
Regards,
Michael