Status-NetworkStack
TODO
Parallel TCP
Parallel routing
Parallel IPv6
DONE
Parallel IPv4
Selective ACKnowledgement options (SACK) RFC 2018
Advanced TCP congestion algorithms
For some basic algorithms and background, see RFC 2581.
Eifel Detection RFC 3522, RFC 4015
An algorithm that uses the TCP Timestamps option to detect unnecessary loss recovery, which is costly. Good in wireless LAN environments, where this happens more often.
Limited Transmit RFC 3042
Improves TCP's loss recovery, which helps busy web servers.
Early Retransmit IETF draft
Speeds up short transfers, like many web requests.
D-SACK (Duplicate-SACK) RFC 2883
An extension to the SACK option.
Appropriate Byte Counting RFC 3465
An algorithm that increases TCP's congestion window based on the number of bytes acknowledged rather than the number of ACKs received, which improves throughput and protects against ACK-division attacks by misbehaving receivers. A sketch of the idea follows below.
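As an illustration of the byte-counting rule from RFC 3465, here is a minimal user-space sketch. It is not DragonFly's TCP code; the segment size, the slow-start limit L, and the structure names are assumptions made for the example.

    /*
     * Illustrative sketch of Appropriate Byte Counting (RFC 3465).
     * cwnd grows by the number of bytes actually acknowledged rather
     * than by a constant per ACK, so stretch ACKs and ACK-division
     * attacks no longer distort the growth rate.
     */
    #include <stdint.h>

    #define SMSS   1460u          /* sender maximum segment size (assumed) */
    #define ABC_L  (2u * SMSS)    /* RFC 3465 limit L for slow start       */

    struct abc_state {
        uint32_t cwnd;            /* congestion window, bytes               */
        uint32_t ssthresh;        /* slow-start threshold, bytes            */
        uint32_t bytes_acked;     /* bytes ACKed since last cwnd increase   */
    };

    /* Called for every ACK that acknowledges `acked` new bytes. */
    static void
    abc_on_ack(struct abc_state *tp, uint32_t acked)
    {
        if (tp->cwnd < tp->ssthresh) {
            /* Slow start: grow by the bytes ACKed, capped at L per ACK. */
            tp->cwnd += (acked < ABC_L) ? acked : ABC_L;
            return;
        }
        /* Congestion avoidance: one SMSS per cwnd's worth of ACKed bytes. */
        tp->bytes_acked += acked;
        if (tp->bytes_acked >= tp->cwnd) {
            tp->bytes_acked -= tp->cwnd;
            tp->cwnd += SMSS;
        }
    }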
Comments
06:44 <dt_> does anyone have any idea what the maximum number of simultaneous tcp network connections df can have is?
06:45 <hsu> depends on how much memory you have in the machine
06:46 <hsu> as a rough measure, it's mem/1000
06:46 <hsu> for inactive connections
06:46 <dt_> mem in bytes?
06:46 <hsu> for active connections, it's more like mem/32K
06:46 <hsu> yes
06:47 <dt_> so more memory is allocated for accepted connections?
06:47 <hsu> most of the memory for active connections is sitting in the sockbuf
06:47 <hsu> that dwarfs everything else
06:48 <hsu> however, some large network servers have millions of inactive connections
06:48 <dt_> thanks hsu
06:48 <hsu> big difference in divisors between 1K and 32K
06:49 <hsu> note: this is kmem and oversimplifies by assuming you're using all of kmem for network connections and you don't need it for anything else, like filesystem buffers
06:49 <dt_> still much better than windows, nt 4.0 is limited to 12,800 and w2k to 25,600 active connections
06:49 <hsu> but it's accurate to a rough order of magnitude
06:49 <dt_> despite memory
06:49 <hsu> oh, you can have much more than that w/ dfly
06:49 <hsu> you do have to up some sysctl limits though
06:50 <hsu> like kern.ipc.maxsockets
06:50 <hsu> and kern.ipc.nmbufs
06:51 <hsu> also, if you decrease kern.ipc.maxsockbuf to, say, 8K, then you can have roughly kmem/8K number of active connections
06:52 <hsu> basically, you're limited by the amount of memory that each connection takes, not by cpu resources
06:52 <hsu> for inactive connections, the tcpcb state is roughly 1K
06:52 <dt_> can those sysctls be adjusted at any time? will new connections change with them?
06:52 <hsu> for active connections, the amount of data sitting in sockbufs dominates, so it depends on your setting of kern.ipc.maxsockbuf
06:52 <hsu> yes
06:52 <dt_> awesome
06:53 <hsu> it helps if you're on a 64-bit processor w/ > 4GB of memory
06:53 <hsu> before, that 4GB limit was pretty hard to overcome
06:54 <hsu> now, just stuff up your board and you can increase capacity
06:54 <hsu> oh, you might have to increase kern.maxfilesperproc too
06:54 <hsu> if your network server is all in one process
06:54 <hsu> service
06:55 <hsu> finally, if you use sendfile, then you don't have to worry about sockbuf memory, since you get to share file buffers among connections
06:56 <hsu> applicable for the case where you have a web server sending out static file content
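To make hsu's arithmetic concrete, here is a small sketch that reads kern.ipc.maxsockbuf with sysctlbyname(3) and applies the divisors quoted above (roughly 1K of tcpcb state per inactive connection, one socket buffer's worth of memory per active connection). The kmem figure is passed in by hand because the chat does not name a sysctl for it; treat the output as an order-of-magnitude estimate only.

    /*
     * Rough connection-capacity estimate following the rule of thumb
     * quoted above: inactive connections ~ kmem / 1K (tcpcb state),
     * active connections ~ kmem / kern.ipc.maxsockbuf (socket buffer
     * data dominates).
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char **argv)
    {
        unsigned long kmem, maxsockbuf;
        size_t len = sizeof(maxsockbuf);

        if (argc != 2) {
            fprintf(stderr, "usage: %s <kmem-bytes>\n", argv[0]);
            return 1;
        }
        kmem = strtoul(argv[1], NULL, 0);

        /* Per-connection socket buffer ceiling, adjustable at run time. */
        if (sysctlbyname("kern.ipc.maxsockbuf", &maxsockbuf, &len, NULL, 0) == -1) {
            perror("sysctlbyname");
            return 1;
        }

        printf("inactive connections: ~%lu  (kmem / 1K of tcpcb state)\n",
            kmem / 1024);
        printf("active connections:   ~%lu  (kmem / maxsockbuf = %lu)\n",
            kmem / maxsockbuf, maxsockbuf);
        return 0;
    }

The limits mentioned above (kern.ipc.maxsockets, kern.ipc.nmbufs, kern.ipc.maxsockbuf, kern.maxfilesperproc) can be raised with the sysctl(8) command as discussed in the log.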
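The sendfile point at the end is worth a sketch as well. DragonFly's sendfile(2) follows the FreeBSD-derived signature; the fragment below assumes the file and the connected socket are already open, and omits the accept() loop and error handling a real web server needs.

    /*
     * Sketch of serving a static file with sendfile(2): the file data
     * goes out of the buffer cache directly, so it is shared across
     * connections instead of being copied into each socket buffer.
     */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int
    send_static_file(int filefd, int sockfd, off_t filesize)
    {
        off_t sent = 0;

        /* nbytes == 0 asks the kernel to send until end of file. */
        if (sendfile(filefd, sockfd, 0, 0, NULL, &sent, 0) == -1)
            return -1;
        return (sent == filesize) ? 0 : -1;
    }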
Note that DragonFlyBSD hasn't been ported to any 64-bit architectures yet.