DragonFly kernel List (threaded) for 2005-02
Re: phk malloc, was (Re: ptmalloc2)
Matthew Dillon wrote:
:
:Matthew Dillon wrote:
:> program that would benefit from it. Not one. The answer is: libc
:> has no support for it because 99.9% of the programs that exist in this
:> world have no use for it and would not benefit from it. If you want
:
:I disagree with you about the benefits. If an email proxy will lose
:messages because it can't make certain memory guarantees through the OS,
:I can't use that OS with the email proxy in question.
    Nonsense.  An email proxy is not designed to use up all the machine's
    resources and then gracefully back down when the machine tells it to
    stop.  Nobody in their right mind runs a high volume service without
    putting a hard limit on the number of connections it serves
    simultaneously, especially not in the email world.
What you are basically arguing is that smart admins already hard-limit
every essential resource--but doesn't that eliminate the one benefit of
allowing overcommit (higher resource utilization)?
    All you do is tell the email service not to handle more than N (where
    N can be something like 500 or 600) connections simultaneously.  That's
    what every sane sysop in the world does when they are running heavily
    loaded services.
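
For concreteness, a minimal sketch of such a hard cap around accept(),
assuming a simple thread-per-connection server; the names and the limit
of 500 are hypothetical, not taken from any real daemon:

    #include <pthread.h>
    #include <semaphore.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_CONNS 500              /* the "N" above */

    static sem_t conn_slots;

    static void *serve(void *arg)
    {
        int fd = (int)(long)arg;
        /* ... handle one connection ... */
        close(fd);
        sem_post(&conn_slots);         /* release the slot */
        return NULL;
    }

    void accept_loop(int lsock)
    {
        sem_init(&conn_slots, 0, MAX_CONNS);
        for (;;) {
            sem_wait(&conn_slots);     /* block once N conns are active */
            int fd = accept(lsock, NULL, NULL);
            if (fd < 0) {
                sem_post(&conn_slots);
                continue;
            }
            pthread_t t;
            if (pthread_create(&t, NULL, serve, (void *)(long)fd) != 0) {
                close(fd);
                sem_post(&conn_slots);
                continue;
            }
            pthread_detach(t);
        }
    }
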
    An overcommit knob is not going to magically make an email proxy work
    better.  You can't just let that sort of software run loose and hope
    it will work efficiently.
An overcommit knob will prevent my init from getting killed: I have
intentionally instigated an overcommit situation and I have seen that
this can happen.  The gettys and inetd can also die, leaving the box
isolated from the admin.
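
A minimal sketch of how such an overcommit situation can be provoked;
the size is hypothetical and assumes a machine with less than 2GB of
RAM + swap:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t sz = (size_t)2 << 30;   /* 2GB, more than RAM + swap */
        char *p = malloc(sz);
        if (p == NULL) {
            fprintf(stderr, "malloc refused up front\n");
            return 1;
        }
        memset(p, 1, sz);              /* faulting the pages in is what
                                          triggers the out-of-memory kill,
                                          and the victim may be an
                                          unrelated process such as init */
        puts("survived");
        free(p);
        return 0;
    }
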
:In MessageWall the buffer size is user-configurable. The problem is the
:OS just won't give that physical memory to MessageWall. People don't want to
:possibly lose or delay email due to memory errors. It's just not acceptable.
    But who cares?  If you have a machine with 1GB of ram and your
    per-connection overhead is, say, 100KB, that means you can handle
    10,000 simultaneous connections.  If you have 4GB of swap, then the
    failure condition is that the machine starts slowing down rather than
    just failing, which means that it's an EASY problem for any
    sysadmin to tune.  If you have misconfigured the program and it is
    giving you trouble, don't blame overcommit.  Blame yourself for
    misconfiguring the program.
You are missing the point.  The problem is not performance, it's
reliability.  If you want to avoid SWAPPING, you have to lock the memory
in core.  If you want to avoid being KILLED because the system lied when
you malloc()'ed some memory, what can you do?
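
For the swapping half there is at least a standard answer, mlockall();
a minimal sketch (note it does nothing about the overcommit kill):

    #include <stdio.h>
    #include <sys/mman.h>

    int lock_in_core(void)
    {
        /* pin all current and future pages of this process in RAM */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");    /* typically needs root or a raised
                                      RLIMIT_MEMLOCK */
            return -1;
        }
        return 0;
    }
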
    Having the OS deny memory to any application on a whim, and hoping
    that the application will actually *handle* that denial gracefully,
    is a pipe dream.  As I said, you only see that sort of software in
    satellites, space probes, and the space shuttle.  Oh, I suppose in an
    aircraft control system too... very bounded problems, mind you, not
    open-ended like accepting connections over the internet.  You would
    have to rewrite nearly every piece of software in existence to even
    come close to making it operate gracefully in that sort of situation.
                                        -Matt
                                        Matthew Dillon
                                        <dillon@xxxxxxxxxxxxx>
Basically your argument is that applications ignore what malloc()
returns anyway, so the OS should just lie to them?  However, even systems
which allow overcommit do occasionally return NULL from malloc() (such
as when you request more pages than exist in the system), so such
applications are still buggy.  Overcommit is good because of BROKEN
applications?
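
To illustrate what checking the return actually looks like--a sketch
only; drop_connection() is a hypothetical hook, not MessageWall's API:

    #include <stdlib.h>

    extern void drop_connection(int fd);   /* hypothetical: refuse one
                                              client, keep serving */

    void *malloc_or_shed(size_t sz, int fd)
    {
        void *p = malloc(sz);
        if (p == NULL) {
            /* shed load instead of dying: the graceful handling the
               application would need whether or not the OS overcommits */
            drop_connection(fd);
            return NULL;
        }
        return p;
    }
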