DragonFly kernel List (threaded) for 2013-03
Re: Google Summer of Code
On Fri, Mar 01, 2013 at 10:20:18PM +0530, Mohit Dhingra wrote:
>Hi All,
>
>I am a final year student, Masters by Research, at Indian Institute of
>Science Bangalore. I want to work in GSoC 2013. I did a survey of projects
>that people did earlier, and I am interested in the projects related to
>device drivers and virtualization. I looked at "porting virtio device
>drivers from NetBSD to DragonFly BSD". I have done a research project on
> virtualization and presented a paper "Resource Usage Monitoring in Clouds"
>in IEEE/ACM Grid in Beijing last year. Can someone please suggest some
>topics related to this which are yet to be implemented on DragonFly BSD?
Hi,
What area(s) of virtualization are you interested in? DFly as a guest?
As a host? What VMMs?
If you're interested in DragonFly as a virtualization guest on
qemu/KVM or another platform that exports virtio paravirtualized
devices, there is some work left on the virtio guest drivers --
* virtio-blk:
I cleaned up and brought in Tim Bisson's port of FreeBSD's virtio-blk
driver in January, based on work dating back to last April. The driver
works okay, but has a few issues:
** qemu exports a 128-entry virtqueue for virtio-blk; DragonFly as a
guest doesn't support indirect ring entries and can issue up to 64 KiB
I/Os, so we can very easily slam into the virtqueue size limit. If we
force qemu to expose a larger virtqueue, we can reach ~95% of host
throughput. Virtio supports virtqueues of up to 16k entries, but qemu+SeaBIOS
can only boot from 128-entry or smaller queues. Adding indirect ring
support would help w/ bandwidth here; this is a pretty small project.
** virtio-blk currently 'kicks' the host VMM directly from its
strategy() routine; it may make sense to defer this to a taskqueue. This
is a tiny change, but understanding the performance implications may
take a bit longer.
** virtio-blk doesn't support dumps (crash/panics). Fixing this would be
pretty straightforward and small in scope.
* virtio-net:
I have a port of virtio-net, again based on Tim's work,
but it usually tests slower than em (e1000) on netperf
TCP_STREAM/TCP_RR tests. Improving on this port, re-adding indirect
ring support (much less of a win for virtio-net compared to -blk),
and implementing multiqueue would perhaps be large enough for a SoC
project.
* Our virtio drivers don't support MSI-X; this is not a huge deal for
virtio-blk, but virtio-net would really benefit from it.
* There are other virtio devices; virtio-scsi is a paravirtualized SCSI
adapter, there is a FreeBSD driver that could serve as a good start
for a port. virtio-scsi allows multiple targets per PCI device, which
is nice, and newer versions of the adapter support multiple request
queues. Porting this and implementing multiqueue would be a nice
project too.
* When running DragonFly as a guest on KVM, we take a lot of VM exits,
particularly to our host/VMM's APIC device. We could implement a
kvmclock time source to avoid some timer exits, and add support for the
paravirtualized EOI (https://lwn.net/Articles/502176/) in our platform
code to cut down on VM exits.
---
DragonFly also has a neat mechanism to allow DFly kernels to run as user
processes on itself ('vkernels'). If you are interested in vkernels,
there is a fair bit of performance work that could be done on them:
* vkernels currently use a shadow translation scheme for virtual memory
for their guests (see http://acm.jhu.edu/~me/vpgs.c for an example of
how that interface works). The shadow translation scheme works on
every x86/x86-64 CPU, but on CPUs w/ second generation virtualization
(RVI on AMD or EPT on Intel/Via), we can do much better using those
hardware guest pagetable walkers.
* DragonFly BSD has simple process checkpointing/restore. In 2011, Irina
Presa worked on a project to enable checkpoint save/restore of a
virtual kernel
(http://leaf.dragonflybsd.org/mailarchive/kernel/2011-04/msg00008.html).
Continuing this GSoC project would be pretty neat; the ability to freeze and
thaw a virtual kernel would allow all sorts of interesting use cases.
* The virtual kernel host code has simple implementations of
paravirtualized devices, vkdisk and vknet; understanding their
performance and perhaps replacing them with host-side virtio devices
might be worthwhile.
---
A brave last project would be to re-implement the Linux KVM interface
(or some subset of it) on DragonFly. If you're interested in this kind
of project, I'd be happy to explain a bit further...
Take care,
-- vs;
http://ops101.org/