[Zetaback-devel] [zetaback commit] r95 - branches/replay

svn-commit at lists.omniti.com
Wed Jun 10 15:30:10 EDT 2009


Author: mark
Date: 2009-06-10 15:30:10 -0400 (Wed, 10 Jun 2009)
New Revision: 95

Added:
   branches/replay/THOUGHTS
Log:
Adding a thoughts file on development of the replay feature

Added: branches/replay/THOUGHTS
===================================================================
--- branches/replay/THOUGHTS	                        (rev 0)
+++ branches/replay/THOUGHTS	2009-06-10 19:30:10 UTC (rev 95)
@@ -0,0 +1,63 @@
+- On the client side, you have to keep just the last incremental snapshot
+
+- What do you do if the snapshot isn't there?
+    * Bail out noisily
+    * Perhaps give the user some info on how to fix this (see the sketch
+      below)
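+
+    A minimal sketch of that client-side check, assuming the agent shells
+    out to zfs and the snapshot prefix is __zb_dset_ (both are assumptions,
+    not settled design):
+
+        # Find the bookkeeping snapshot left by the previous dataset backup
+        my $fs    = "data/zones/nnn";    # hypothetical dataset name
+        my @snaps = grep { m{^\Q$fs\E\@__zb_dset_} }
+                    split /\n/, `zfs list -H -t snapshot -o name`;
+        unless (@snaps) {
+            # Bail out noisily and tell the user how to recover
+            die "No __zb_dset_ snapshot on $fs - force a full backup to\n" .
+                "re-seed the incremental chain.\n";
+        }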
+
+- Implement the filesystem backup as a different type from
+    incremental/full - type 'd' - dataset. The strings dataset/d/dset will
+    name the method in various situations (see the sketch below).
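+
+    For illustration only (the real mapping will live wherever full/incr
+    are handled today), the three spellings might show up like this:
+
+        # Hypothetical: where each spelling of the new type appears
+        my %dataset_type = (
+            char   => 'd',        # single-char type stored per backup
+            word   => 'dataset',  # long name, e.g. in config/output
+            method => 'dset',     # short form used in method names
+        );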
+
+- Looking through the plan_and_run function:
+
+    - zetaback_agent -l lists the filesystems and snapshots - that will need
+      changing.
+    - scan_for_backups() looks for backups and makes a backup dir if needed.
+      This needs changing to support zfs filesystems and to make a zfs
+      filesystem if needed.
+    - The bit after the 'should we do a backup?' comment is where the backup
+      type is determined. We should return something in backup_info for the
+      last dataset backup.
+    - Force full/force incremental need handling there - we could probably
+      just ignore them.
+    - The if($backup_type eq ....) lines:
+        - add another branch for dataset (see the sketch after this list)
+        - the backup needs an old snapshot to diff against
+        - delete that old snapshot only after the backup succeeds
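+
+    A sketch of the new branch, assuming zetaback keeps shelling out to zfs
+    and using made-up variable names ($fs, $last_ts, $now, $backup_file):
+
+        if ($backup_type eq 'dataset') {
+            my $old = "$fs\@__zb_dset_$last_ts";   # snapshot from last run
+            my $new = "$fs\@__zb_dset_$now";
+            system("zfs snapshot $new") == 0
+                or die "cannot snapshot $fs\n";
+            # Send only the delta since the previous dataset backup
+            system("zfs send -i $old $new > $backup_file") == 0
+                or die "zfs send failed for $fs\n";
+            # Keep only the newest snapshot around for the next run
+            system("zfs destroy $old") == 0
+                or warn "could not destroy $old\n";
+        }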
+
+- Does the lock file location need to be changed? It currently acquires the
+  lock in the store itself.
+  - Depends on the filesystem format. It may not need to change if nothing
+    else is kept in the filesystem itself.
+
+
+Filesystem layout:
+
+store = /data/zetaback/%h
+
+Auto-detect the zfs filesystem for store. Store _has_ to be the root of a zfs
+filesystem in this case:
+
+    zfs list -H | \
+        perl -na -e 'next unless ($F[-1] eq "/data/zetaback"); print "$F[0]\n";'
+
+    If no output is found, give up and print an error (store isn't a zfs
+    filesystem)
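+
+    Wrapped as a function inside zetaback, that check might look like the
+    following (function name and error text are made up):
+
+        sub store_zfs_fs {
+            my ($store) = @_;
+            # zfs list -H is tab-separated: name used avail refer mountpoint
+            for my $line (split /\n/, `zfs list -H`) {
+                my @f = split /\t/, $line;
+                return $f[0] if $f[-1] eq $store;
+            }
+            die "$store is not the root of a zfs filesystem\n";
+        }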
+
+Store in data/zetaback/%h/%z
+    %h == hostname
+    %z == zfs filesystem
+    Question:
+        do we store data/zones/nnn as
+            data/zetaback/%h/data/zones/nnn
+        or as
+            data/zetaback/%h/data_zones_nnn
+    Issues are:
+        What if data/zones/nnn is backed up before data/zones?
+            You can only receive into a non-existent filesystem (full) or into
+            a filesystem with an existing backup snapshot. Neither holds if
+            you store as data/zones/nnn and data/zones/nnn is backed up first:
+            its parent data/zones gets created as a blank filesystem with no
+            __zb_dset_ snapshot to work from, so the later full of data/zones
+            cannot be received.
+        If we store as data_zones_nnn however, it's not as intuitive to access.
+    Do data_zones_nnn for now - it presents fewer issues.
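+
+    A sketch of the flattening and of why the receive then always works
+    (every dataset lands directly under the per-host filesystem; $host and
+    $backup_file are made-up names):
+
+        my $zfs = "data/zones/nnn";       # %z as reported by the client
+        (my $flat = $zfs) =~ s{/}{_}g;    # -> data_zones_nnn
+        my $target = "data/zetaback/$host/$flat";
+        # Full: $target must not exist yet.  Incremental: $target must
+        # already carry the last __zb_dset_ snapshot to receive on top of.
+        system("zfs receive $target < $backup_file") == 0
+            or die "zfs receive into $target failed\n";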


