Re: HAMMER2 progress report - 07-Aug-2012
:
:On Wed, Aug 8, 2012 at 10:44 AM, Matthew Dillon
:<dillon@apollo.backplane.com> wrote:
:
:> Full graph spanning tree protocol so there can be loops, multiple ways
:> to get to the same target, and so on and so forth. The SPANs propagate
:> the best N (one or two) paths for each mount.
:>
:
:Can this be tested in development?
:What commands should be used? :-)
:
:--Thanks
It's a bit opaque, but basically you create the /etc/hammer2 directory
infrastructure and then set up some hammer2 filesystems.

In order to be able to connect to the service daemon, and to have the
daemon be able to connect to other service daemons, you need to set up
encryption on the machine:
    hammer2 rsainit
    mkdir /etc/hammer2/remote
    cd /etc/hammer2/remote
    (create <IPADDR>.pub or <IPADDR>.none files)
Essentially you copy the rsa.pub key from the source machine
into /etc/hammer2/remote/<IPADDR>.pub on the target machine.
You can then connect from the source machine to the target
machine as described below.
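For example, assuming rsainit left the public key at /etc/hammer2/rsa.pub
and the source machine's address as seen by the target is 10.0.0.2 (the
key path and both addresses here are illustrative):

    # run on the source machine (10.0.0.2)
    scp /etc/hammer2/rsa.pub root@10.0.0.3:/etc/hammer2/remote/10.0.0.2.pub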
Normally you also create a localhost link; for testing purposes it
isn't root protected, and you can tell it not to use encryption:

    touch /etc/hammer2/remote/127.0.0.1.none
Then create a hammer2 filesystem and some PFSs to test with:
    #!/bin/csh
    #
    # example disk by serial number
    set disk = "<SERNO>.s1d"

    newfs_hammer2 /dev/serno/$disk
    mount /dev/serno/$disk@ROOT /mnt
    cd /mnt

    hammer2 pfs-create TEST1
    set uuid = `hammer2 pfs-clid TEST1`
    echo "cluster uuid $uuid"

    foreach i ( TEST2 TEST3 TEST4 TEST5 TEST6 TEST7 TEST8 TEST9 )
        hammer2 -u $uuid pfs-create $i
        mkdir -p /test$i
        mount /dev/serno/$disk@TEST$i /test$i
    end
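A quick way to confirm that the test mounts are all up (this is plain
mount(8), nothing hammer2-specific):

    mount | grep hammer2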
The mounts will start up a hammer2 service daemon which connects to
each mount. You can kill the daemon and start it manually, and it
will reconnect automatically. The service daemon runs in the
background. To see all the debug output, kill it and start it in
the foreground with -d:

    killall hammer2
    hammer2 -d service
I usually do this on each test machine. Then I connect the service
daemons to each other in various ways.
    # hammer2 -s /mnt status
    # hammer2 -s /mnt connect span:test28.fu.bar.com
    # hammer2 -s /mnt status
    CPYID LABEL    STATUS  PATH
        1          ---.00  span:test28.fu.bar.com
(You can 'disconnect' a span as well. The spans will attempt to
reconnect every 5 seconds, forever, while in the table.)
(The connection table is stored on the media, and is thus persistent.)
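For example, to drop the span created above (I'm assuming here that
disconnect mirrors the connect syntax):

    # hammer2 -s /mnt disconnect span:test28.fu.bar.com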
You can also connect a 'shell' to a running service daemon, as long
as /etc/hammer2/remote allows it:
    hammer2 shell 127.0.0.1
    (do various commands)

Only the 'tree' command is really useful here, though you can also
manually connect spans. You can't kill them, though.
In any case, that's the gist of it for the moment. The 'tree' command
from the shell gives you a view of the spans from the point of view of
whatever machine you are connecting to.
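A session might look something like this (the exact prompt behavior
and output are illustrative, not verbatim):

    hammer2 shell 127.0.0.1
    tree
    (prints the span tree as seen from 127.0.0.1)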
Remember that the HAMMER2 filesystem itself is not production ready...
it can't free blocks, for example (kinda like a PROM atm). And, of
course, there are no protocols running on the links yet. I haven't
gotten routing working yet.
The core is fairly advanced... encryption, messaging transactions,
notification of media config changes, automatic reconnect on connection
failure, etc.
-Matt
Matthew Dillon
<dillon@backplane.com>