Reverting an LVM volume to a previous snapshot has been possible for over a year! How did I miss that? This has long been one of my only real criticisms of LVM, and I just discovered that the feature was quietly committed to the kernel back in 2.6.33.
The revert is done with lvconvert. From the lvconvert man page:
--merge
    Merges a snapshot into its origin volume. To check if your kernel
    supports this feature, look for 'snapshot-merge' in the output of
    'dmsetup targets'. If both the origin and snapshot volume are not
    open the merge will start immediately. Otherwise, the merge will
    start the first time either the origin or snapshot are activated
    and both are closed. Merging a snapshot into an origin that cannot
    be closed, for example a root filesystem, is deferred until the
    next time the origin volume is activated. When merging starts, the
    resulting logical volume will have the origin's name, minor number
    and UUID. While the merge is in progress, reads or writes to the
    origin appear as they were directed to the snapshot being merged.
    When the merge finishes, the merged snapshot is removed. Multiple
    snapshots may be specified on the commandline or a @tag may be
    used to specify multiple snapshots be merged to their respective
    origin.
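That last @tag form looks handy if you keep matching snapshots of several volumes. A quick sketch of how it might go, assuming hypothetical volume names and a made-up tag 'pre_upgrade' (I haven't tested this part myself):

# Tag each snapshot with a common tag at creation time
lvcreate -s /dev/vg_titanium/lv_test -L 1G -n lv_test-snap --addtag pre_upgrade
lvcreate -s /dev/vg_titanium/lv_data -L 1G -n lv_data-snap --addtag pre_upgrade
# Later, merge every snapshot carrying that tag back into its origin
lvconvert --merge @pre_upgrade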
A quick check using 'dmsetup targets' shows that the feature is definitely in my kernel, so I thought I would give it a run-through and test it properly. I created a test logical volume, put some data on it, took a snapshot, changed the data, and then reverted to the snapshot. Here is what I did.
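Step zero, the support check itself, is a one-liner (the exact version string printed will vary by kernel):

# Look for the 'snapshot-merge' target, as the man page suggests;
# a supporting kernel prints a line along the lines of:
#   snapshot-merge   v1.0.0
dmsetup targets | grep snapshot-merge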
Create a new logical volume with a file system and mount it:
[root@titanium ~]# lvcreate -L1G -n lv_test vg_titanium
Logical volume "lv_test" created
[root@titanium ~]# mke2fs -j /dev/vg_titanium/lv_test
mke2fs 1.41.14 (22-Dec-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 33 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@titanium ~]# mkdir /mnt/test
[root@titanium ~]# mount /dev/vg_titanium/lv_test /mnt/test
Create some test data, then take a snapshot and create some more test data.
[root@titanium ~]# touch /mnt/test/testdata-$(date +%Y%m%d%H%M%S)
[root@titanium ~]# ls -l /mnt/test/
total 20
drwx------. 2 root root 16384 Sep 19 00:39 lost+found
-rw-r--r--. 1 root root 0 Sep 19 00:39 testdata-20110919003936
[root@titanium ~]# SNAP_NAME="lv_test-snapshot-$(date +%Y%m%d%H%M%S)"
[root@titanium ~]# lvcreate -s /dev/vg_titanium/lv_test -L 1G -n $SNAP_NAME
Logical volume "lv_test-snapshot-20110919003936" created
[root@titanium ~]# touch /mnt/test/testdata-$(date +%Y%m%d%H%M%S)
[root@titanium ~]# ls -l /mnt/test/
total 24
drwx------. 2 root root 16384 Sep 19 00:39 lost+found
-rw-r--r--. 1 root root 0 Sep 19 00:39 testdata-20110919003936
-rw-r--r--. 1 root root 0 Sep 19 00:39 testdata-20110919003937
Here is the actual merge command.
[root@titanium ~]# lvconvert --merge /dev/vg_titanium/$SNAP_NAME
Can't merge over open origin volume
Merging of snapshot lv_test-snapshot-20110919003936 will start next activation.
You will need to deactivate and reactivate the volume to get the merge to start. You can remount the filesystem immediately, since once the merge has started your view of the volume is already that of the snapshot. Once mounted, you can check the data.
[root@titanium ~]# umount /dev/vg_titanium/lv_test
[root@titanium ~]# lvchange -an /dev/vg_titanium/lv_test
[root@titanium ~]# lvchange -ay /dev/vg_titanium/lv_test
[root@titanium ~]# mount /dev/vg_titanium/lv_test /mnt/test
[root@titanium ~]# ls -l /mnt/test/
total 20
drwx------. 2 root root 16384 Sep 19 00:39 lost+found
-rw-r--r--. 1 root root 0 Sep 19 00:39 testdata-20110919003936
As we can see, we have reverted to the filesystem as it was at the time we took the snapshot.
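With only a couple of empty files the merge finishes almost instantly, but on a volume with lots of changed blocks it will take a while. A small sketch for keeping an eye on it (assumption: on this version of lvm2 the merging snapshot stays visible in 'lvs' with its Snap% counting down until it disappears):

# Re-run lvs until the merging snapshot vanishes from the listing
watch -n 5 'lvs vg_titanium'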
Clean up :-)
[root@titanium ~]# umount /dev/vg_titanium/lv_test
[root@titanium ~]# lvremove /dev/vg_titanium/lv_test
Logical volume "lv_test-snapshot-20110919003936" successfully removed
Do you really want to remove active logical volume lv_test? [y/n]: y
Logical volume "lv_test" successfully removed
[root@titanium ~]# rmdir /mnt/test