DragonFly Projects
This is an off-shoot of the main ProjectsPage intended to house more bizarre projects and concepts. It should be noted that if you decide to implement one of these projects, even if you do an amazing job, it is entirely likely that it will never be committed to the main source repository. If you intend your work to be committed, please discuss your plans with the DragonFly community at large on the public mailing lists before you make a significant investment in your work.
Please feel free to add projects to this page, or annotate existing project ideas with your own thoughts.
Kernel projects
Code generation hooks in the build system
- Well-defined kernel build mechanisms for code generation
- This will require discussion
On-disk / Over-the-wire structure codegen + CAPS
- Somewhat analogous to google protocol buffers / etc.
- Take a normalized definition of data, metadata or an operation and generate a structure, serialization routines and accessor routines for it/them
- Must be able to generate structs binary compatible with existing on-disk formats (including warts)
- Should magically create formats that are 32/64-bit agnostic, or fix up the serializers/deserializers (see the sketch after this list)
- Accessor routines and thread safety: do we require callers to hang these objects off a structure that also holds their synchronization objects, or do we allow the synchronization objects to be embedded in the generated structures?
- Optional versioning?
- Potential uses: ... HAMMER, UFS, HAMMER mirror streams, message passing, ...
- CAPS (DragonFly's message passing system, which exists but is not functional at the time of this writing) has an over-the-wire format with serialization/deserialization. This code-generation/serialization solution should be amenable to the purposes of CAPS or whatever future message-passing infrastructure might be implemented in DragonFly.
- CAPS References: 1 2 3
- Messaging References: 1
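As a rough illustration of the intended output, here is a minimal sketch in C, assuming a hypothetical two-field record definition; the record name, the packed little-endian wire format and the encode/decode helpers are invented for this example and do not correspond to any existing DragonFly interface.

    #include <stdint.h>

    /*
     * Hypothetical generated output for a definition such as:
     *     record example_rec { u32 flags; u64 offset; }
     * Fixed-width types keep the in-memory layout 32/64-bit agnostic.
     */
    struct example_rec {
        uint32_t    flags;
        uint64_t    offset;
    };

    /* Generated serializer: explicit little-endian byte order on the wire. */
    static void
    example_rec_encode(const struct example_rec *rec, uint8_t buf[12])
    {
        int i;

        for (i = 0; i < 4; ++i)
            buf[i] = (uint8_t)(rec->flags >> (8 * i));
        for (i = 0; i < 8; ++i)
            buf[4 + i] = (uint8_t)(rec->offset >> (8 * i));
    }

    /* Generated deserializer: rebuilds the struct regardless of host width. */
    static void
    example_rec_decode(struct example_rec *rec, const uint8_t buf[12])
    {
        int i;

        rec->flags = 0;
        rec->offset = 0;
        for (i = 0; i < 4; ++i)
            rec->flags |= (uint32_t)buf[i] << (8 * i);
        for (i = 0; i < 8; ++i)
            rec->offset |= (uint64_t)buf[4 + i] << (8 * i);
    }

A versioned variant could prepend a small header carrying a format revision, and accessor generation would wrap field reads/writes behind functions so the locking policy can be decided separately from the layout.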
Asynchronous system call framework
- Probably best implemented as a message-passing interface to the kernel: messages are passed in, kernel threads pick them up and execute them, and results are returned through kevent notifications (see the sketch after this list)
- Would require a well-considered proposal
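A minimal sketch of the userland side of such an interface, assuming a hypothetical async_submit() call that queues a request and returns a descriptor which becomes readable on completion; only the kqueue/kevent usage is real, everything named async_* is invented.

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>

    /*
     * Hypothetical request block.  async_submit() (not a real syscall) would
     * copy this in, queue it to a kernel service thread, and arrange for the
     * returned descriptor to become readable when the operation completes.
     */
    struct async_req {
        int     ar_syscall;   /* which system call to run */
        void   *ar_args;      /* packed argument block */
        long    ar_result;    /* filled in on completion */
        int     ar_error;     /* errno on completion */
    };

    int async_submit(struct async_req *req);    /* assumed entry point */

    /* Wait for one completion via kevent and return its result. */
    static long
    async_wait(int kq, int afd, struct async_req *req)
    {
        struct kevent ev;

        EV_SET(&ev, afd, EVFILT_READ, EV_ADD | EV_ONESHOT, 0, 0, req);
        kevent(kq, &ev, 1, NULL, 0, NULL);      /* register interest */
        kevent(kq, NULL, 0, &ev, 1, NULL);      /* block until completion */
        return (((struct async_req *)ev.udata)->ar_result);
    }

The payoff over synchronous system calls would come from submitting and reaping many requests per kevent() call rather than one at a time.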
Kernel VIRTUAL MACHINE
- An opcode VM in the kernel, usable for various purposes. What could be accomplished with this? (A minimal interpreter sketch follows.)
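For concreteness, here is a minimal sketch of the sort of interpreter loop this implies; the three-opcode instruction set is invented purely to show the shape of the thing, and a real in-kernel VM would need bounds checking, resource limits and a verifier.

    #include <stdint.h>
    #include <stddef.h>

    /* Invented toy instruction set: push a signed byte, add, halt. */
    enum { OP_PUSH, OP_ADD, OP_HALT };

    /* No bounds or stack-depth checking: illustration only. */
    static int64_t
    kvm_run(const uint8_t *prog, size_t len)
    {
        int64_t stack[32];
        size_t  sp = 0, pc = 0;

        while (pc < len) {
            switch (prog[pc++]) {
            case OP_PUSH:
                stack[sp++] = (int8_t)prog[pc++];
                break;
            case OP_ADD:
                sp--;
                stack[sp - 1] += stack[sp];
                break;
            case OP_HALT:
                return (stack[sp - 1]);
            }
        }
        return (0);
    }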
Allocator meta-madness
- Modern general purpose allocators usually use 2-3 or more different allocation strategies for differently sized allocations.
- Modern general purpose allocators try very hard to optimize for all circumstances, many allocations/frees, many allocations/few frees, short-running applications, long-running applications, internal and external fragmentation, etc.
- It may be possible to develop a framework that matches simple allocators to programs based on their allocation behavior and, by drawing on a variety of simple allocators, performs better all-around than complex modern general-purpose allocators.
- Profile a program's memory allocation and usage behavior to determine the optimal allocator type to use (see the sketch after this list).
- Short-running applications will simply want something really cheap to set up, etc.
- Dispatch to the most efficient allocator implementation on future execution.
- Perhaps allocators can export their size range and a series of flags indicating what they are suited for, and the profiling could build a set of flags based on application behavior.
- This could be done per-thread in a lockless fashion and coalesced with weights atexit().
- The scope of this work might be appropriate for a thesis.
- Such a framework likely would not replace the base system allocator but if it were lightweight enough could possibly augment (and/or incorporate) it.
- For applications definitely known to work better under some configuration, flags could be seeded and pkgsrc could link those apps against the best alternative meta-allocator.
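A minimal sketch of the dispatch and profiling side of such a framework; the names (meta_allocator, the MA_* flags, the per-thread profile) are all invented for illustration, the point being the shape of the interface rather than a concrete design.

    #include <stddef.h>
    #include <stdint.h>

    /* Flags a simple allocator could export to describe what it is good at. */
    #define MA_SMALL_OBJS    0x01    /* dominated by small allocations */
    #define MA_SHORT_LIVED   0x02    /* cheap setup, process exits soon */
    #define MA_LONG_RUNNING  0x04    /* fragmentation behavior matters */

    struct meta_allocator {
        const char  *name;
        unsigned     flags;               /* exported suitability flags */
        size_t       min_size, max_size;  /* size range it handles well */
        void       *(*alloc)(size_t);
        void        (*free)(void *);
    };

    /* Per-thread, lock-free counters; coalesced with weights at atexit(). */
    struct alloc_profile {
        uint64_t    nalloc;
        uint64_t    nfree;
        uint64_t    small_allocs;    /* e.g. <= 256 bytes */
    };

    static __thread struct alloc_profile prof;
    static struct meta_allocator *current;    /* selected from saved profile */

    static void *
    meta_malloc(size_t size)
    {
        prof.nalloc++;
        if (size <= 256)
            prof.small_allocs++;
        return (current->alloc(size));
    }

    static void
    meta_free(void *ptr)
    {
        prof.nfree++;
        current->free(ptr);
    }

    /* atexit() hook: fold this run into the persistent, weighted profile. */
    static void
    meta_profile_save(void)
    {
        /* e.g. derive MA_SMALL_OBJS / MA_SHORT_LIVED hints and store them */
    }

On the next execution the saved hints would be matched against each backend's exported flags and size range to choose the dispatch target before the first allocation.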
Opt-in userland thread scheduler
- A great deal of research has been done on this topic; implementations exist for other operating systems (see the sketch below for the basic mechanism).
- References: 1
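As a minimal illustration of the mechanism an opt-in userland scheduler is built on, here is a cooperative two-context switch using the portable ucontext(3) calls; a real implementation would add run queues, preemption and some form of kernel notification, none of which is shown.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, fiber_ctx;
    static char fiber_stack[64 * 1024];

    /* A "fiber" that yields back to the scheduler once before finishing. */
    static void
    fiber_main(void)
    {
        printf("fiber: running\n");
        swapcontext(&fiber_ctx, &main_ctx);    /* yield */
        printf("fiber: resumed\n");
    }

    int
    main(void)
    {
        getcontext(&fiber_ctx);
        fiber_ctx.uc_stack.ss_sp = fiber_stack;
        fiber_ctx.uc_stack.ss_size = sizeof(fiber_stack);
        fiber_ctx.uc_link = &main_ctx;         /* return here when done */
        makecontext(&fiber_ctx, fiber_main, 0);

        swapcontext(&main_ctx, &fiber_ctx);    /* "schedule" the fiber */
        printf("scheduler: fiber yielded\n");
        swapcontext(&main_ctx, &fiber_ctx);    /* resume it to completion */
        printf("scheduler: fiber finished\n");
        return (0);
    }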
Make the use of XIO pervasive
- XIO could be applied liberally to the I/O stack
- Determine the best use cases for XIO and create a list of paths to target.
- Slog through the identified kernel paths, converting existing I/O magic to XIO (see the sketch after this list).
- References: 1
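A rough sketch of the kind of conversion a targeted path would see, wrapping a user buffer in an XIO instead of bouncing it through an intermediate copy; the xio_init_ubuf()/xio_copy_xtok()/xio_release() calls come from sys/xio.h, but the argument lists shown here are approximate and must be checked against the tree.

    #include <sys/types.h>
    #include <sys/xio.h>

    /*
     * Illustrative only: describe a user buffer with an XIO so its pages can
     * be handed down the I/O stack, then copy out of it on the kernel side.
     * Signatures are assumed from sys/xio.h; verify before relying on them.
     */
    static int
    copy_user_range(void *ubase, size_t ubytes, void *kdst)
    {
        struct xio xio;
        int error;

        error = xio_init_ubuf(&xio, ubase, ubytes, 0);
        if (error)
            return (error);
        /* A driver could instead hold the xio and DMA from the wired pages. */
        error = xio_copy_xtok(&xio, 0, kdst, (int)ubytes);
        xio_release(&xio);
        return (error);
    }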
Evaluate/Improve Context Switching Performance
- When a context switch happens, both the virtually tagged L1 cache and the TLB, which carries no address-space tags, have to be flushed.
- How expensive is this flushing, taking into account the cycles spent reloading those caches afterwards? (A microbenchmark sketch follows below.)
- Small address spaces 1 could avoid this context-switch cost by multiplexing the virtual address space using segmentation.
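A first-order number can be had with the classic pipe ping-pong microbenchmark: two processes force each other to switch, and the round-trip time is divided by the two switches per round. Enlarging each process's working set between rounds exposes the cache/TLB refill component. A minimal sketch:

    #include <sys/wait.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS  100000

    int
    main(void)
    {
        int p2c[2], c2p[2], i;
        char b = 0;
        struct timespec t0, t1;
        double ns;

        if (pipe(p2c) < 0 || pipe(c2p) < 0)
            exit(1);

        if (fork() == 0) {
            /* Child: echo one byte back per round. */
            for (i = 0; i < ROUNDS; ++i) {
                read(p2c[0], &b, 1);
                write(c2p[1], &b, 1);
            }
            _exit(0);
        }

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < ROUNDS; ++i) {
            write(p2c[1], &b, 1);    /* wake the child, then block */
            read(c2p[0], &b, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        wait(NULL);

        ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        /* Each round costs two switches: parent->child and child->parent. */
        printf("~%.0f ns per context switch\n", ns / (ROUNDS * 2.0));
        return (0);
    }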