DragonFly users List (threaded) for 2009-06
DragonFly: UFS + HAMMER + mirroring setup instead of RAID - is this OK?
Hi,
Since the installer allows only one of either HAMMER or UFS, and since
RAID parity rebuilding after an unclean shutdown can take too long with
two 500 GB disks in a mirror, I followed the steps below to create a
backup server using HAMMER mirroring instead of RAID1. This is purely my
own idea, arrived at after a lot of trial and error, and since I am new
to DragonFly BSD it would be great to hear the opinions of more seasoned
users. Thanks.
First I installed the DragonFly base system on the first 500 GB hard disk
using UFS partitions, then booted into it and edited the disklabel
to add a partition h:. The disklabel now looks like this:
# /dev/ad0s1:
type: unknown
disk: amnesiac
label: fictitious
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 128
sectors/cylinder: 8064
cylinders: 130031
sectors/unit: 1048574961
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # milliseconds
track-to-track seek: 0 # milliseconds
drivedata: 0
16 partitions:
# size offset fstype
a: 2097152 0 4.2BSD # 1024.000MB
b: 2097152 2097152 swap # 1024.000MB
c: 1048574961 0 unused # 511999.493MB
d: 2097152 4194304 4.2BSD # 1024.000MB
e: 2097152 6291456 4.2BSD # 1024.000MB
f: 10485760 8388608 4.2BSD # 5120.000MB
g: 2097152 18874368 4.2BSD # 1024.000MB
h: 1027603441 20971520 unused # 501759.493MB
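For reference, the h: line can be added by opening the label in an
editor via disklabel's -e flag; a rough sketch of what I did (the
size/offset values are the ones from the listing above):

```
# disklabel -e ad0s1
  (append at the end of the partition list:)
  h: 1027603441 20971520 unused
```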
Then I installed the base system on the second disk and again edited its
disklabel to add a partition h:
# /dev/ad1s1:   (it appears as ad1s1 because it is attached while I am
booted from the first disk)
type: unknown
disk: amnesiac
label: fictitious
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 128
sectors/cylinder: 8064
cylinders: 130031
sectors/unit: 1048574961
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # milliseconds
track-to-track seek: 0 # milliseconds
drivedata: 0
16 partitions:
# size offset fstype
a: 2097152 0 4.2BSD # 1024.000MB
b: 2097152 2097152 swap # 1024.000MB
c: 1048574961 0 unused # 511999.493MB
d: 2097152 4194304 4.2BSD # 1024.000MB
e: 2097152 6291456 4.2BSD # 1024.000MB
f: 10485760 8388608 4.2BSD # 5120.000MB
g: 2097152 18874368 4.2BSD # 1024.000MB
h: 1027603441 20971520 unused # 501759.493MB
Then I created a HAMMER filesystem on the h: partition of each disk using
#newfs_hammer -L Backup1 /dev/ad0s1h
#newfs_hammer -L Backup2 /dev/ad1s1h
and mounted them using the command
#mount -a
after creating the directories /Backup1 and /Backup2 and adding the
following entries to /etc/fstab:
# Device Mountpoint FStype Options Dump Pass#
/dev/ad0s1a / ufs rw 1 1
/dev/ad0s1d /home ufs rw 2 2
/dev/ad0s1e /tmp ufs rw 2 2
/dev/ad0s1f /usr ufs rw 2 2
/dev/ad0s1g /var ufs rw 2 2
/dev/ad0s1h /Backup1 hammer rw 2 2
/dev/ad1s1h /Backup2 hammer rw 2 2
/dev/ad0s1b none swap sw 0 0
proc /proc procfs rw 0 0
Now df -h shows:
#df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 1.0G 144M 783M 16% /
/dev/ad0s1d 1.0G 20K 927M 0% /home
/dev/ad0s1e 1.0G 14K 927M 0% /tmp
/dev/ad0s1f 4.9G 346M 4.2G 7% /usr
/dev/ad0s1g 1.0G 3.1M 924M 0% /var
Backup1 488G 8.0M 488G 0% /Backup1
Backup2 488G 8.0M 488G 0% /Backup2
procfs 4.0K 4.0K 0B 100% /proc
Then I ran the command
#hammer cleanup
which created the snapshots directories
/Backup1/snapshots
/Backup2/snapshots
with the "config" file and an initial snapshot in each.
I edited the config files in both snapshots directories to read:
snapshots 0d 1m
prune 1d 5m
reblock 1d 5m
recopy 30d 10m
in order to disable snapshotting and to clean up the existing snapshots
the next time "hammer cleanup" runs.
Then I ran
#hammer cleanup
which deleted the snapshot taken earlier from both snapshots directories.
Then I created a master PFS on the first hard disk using
#hammer pfs-master /Backup1/Data
#hammer pfs-status /Backup1/Data
/Backup1/Data PFS #1 {
sync-beg-tid=0x0000000000000001
sync-end-tid=0x0000000100018290
shared-uuid=7f37a084-6188-11de-958a-535400123456
unique-uuid=7f37a583-6188-11de-958a-535400123456
label=""
operating as a MASTER
snapshots dir for master defaults to <fs>/snapshots
}
Then I created a slave PFS for the above master on the second hard
disk, with the same shared-uuid as the master PFS, using
#hammer pfs-slave /Backup2/Data shared-uuid=7f37a084-6188-11de-958a-535400123456
#hammer pfs-status /Backup2/Data
/Backup2/Data PFS #1 {
sync-beg-tid=0x0000000000000001
sync-end-tid=0x0000000100018200
shared-uuid=7f37a084-6188-11de-958a-535400123456
unique-uuid=1ed27498-618c-11de-958a-535400123456
slave
label=""
operating as a SLAVE
snapshots directory not set for slave
}
Then I created a few files in /Backup1/Data using
#touch 1 2
#ls /Backup1/Data
1 2
Then I did a mirror copy from master to slave using
#hammer mirror-copy /Backup1/Data /Backup2/Data
which copied all the files from the master to the slave:
#ls /Backup2/Data
1 2
I shut down the computer and booted with only the second disk attached,
then mounted the HAMMER partition on it using the command
#mount -a
after creating the directory /Backup2 and adding the following
entries to /etc/fstab:
# Device Mountpoint FStype Options Dump Pass#
/dev/ad0s1a / ufs rw 1 1
/dev/ad0s1d /home ufs rw 2 2
/dev/ad0s1e /tmp ufs rw 2 2
/dev/ad0s1f /usr ufs rw 2 2
/dev/ad0s1g /var ufs rw 2 2
/dev/ad0s1h /Backup2 hammer rw 2 2
/dev/ad0s1b none swap sw 0 0
proc /proc procfs rw 0 0
#df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 1.0G 144M 783M 16% /
/dev/ad0s1d 1.0G 20K 927M 0% /home
/dev/ad0s1e 1.0G 14K 927M 0% /tmp
/dev/ad0s1f 4.9G 346M 4.2G 7% /usr
/dev/ad0s1g 1.0G 3.1M 924M 0% /var
Backup2 488G 8.0M 488G 0% /Backup2
procfs 4.0K 4.0K 0B 100% /proc
I could access all the data in /Backup2/Data
#ls /Backup2/Data
1 2
Now I plan to run DragonFly with both disks attached and edit root's
crontab on the first disk to run the command
hammer mirror-copy /Backup1/Data /Backup2/Data
hourly, so that I will have the data (at most one hour old) on the
second hard disk if the first hard disk goes down.
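Root's crontab entry for the hourly sync would look roughly like this
(the minute field and the output redirection, to avoid hourly cron mail,
are my own choices):

```
# crontab -e   (as root, on the first disk)
0 * * * * hammer mirror-copy /Backup1/Data /Backup2/Data >/dev/null 2>&1
```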
If the first hard disk goes down, I can remove it, boot from the
second hard disk, and run
#hammer pfs-upgrade /Backup2/Data
to make the PFS writable (a master) and continue operation.
I can later add a replacement disk with a slave PFS on it.
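Re-attaching a replacement disk as a slave of the upgraded master
would, I believe, look something like this (/Backup3 is a hypothetical
mount point for the new disk's HAMMER partition; the shared-uuid is the
one shown above):

```
# hammer pfs-slave /Backup3/Data shared-uuid=7f37a084-6188-11de-958a-535400123456
# hammer mirror-copy /Backup2/Data /Backup3/Data
```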
I will be using BackupPC as the backup software if I can get it
compiled successfully on DragonFly BSD (it is not in pkgsrc);
otherwise I will use Box Backup, which is available as a binary package.
The only trouble comes when I update from git or install new software:
I will have to do it on the first disk, then mount the second disk's
UFS partitions under /mnt, chroot to /mnt, and repeat the updates and
installations so that they are available on the second disk as well.
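That second-disk update step would be something like the following
(device names assume the second disk appears as ad1 with the layout
shown earlier; this is an untested sketch):

```
# mount /dev/ad1s1a /mnt
# mount /dev/ad1s1f /mnt/usr
# mount /dev/ad1s1g /mnt/var
# chroot /mnt
  (repeat the git update / software installation here,
   then exit the chroot and umount the partitions)
```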
I hope this is about the best you can do with DragonFly right now to
avoid long fscks and RAID parity rebuilds.
Please let me know if somebody thinks this is a blunder :-)
And many thanks to Dillon who taught me the basics of hammer :-))))))
thanks
Siju