
Distributed Duplicity

I need several machines backed up to a central machine, and Duplicity was recommended to perform said backups. I could just make a cron job, but I also want to be able to trigger automated backups manually.

I’m Not Getting Paid Enough For This Shit

sigh. All this shit: Volume was signed by key x, not y, Task ‘RESTORE’ failed with exit code ‘22’

[root@mail-relay ~]# gpg --list-keys
gpg: checking the trustdb
gpg: no ultimately trusted keys found

[root@mail-relay ~]# ls /etc/duply/weekly
conf  exclude  gpgkey.4D201731.sec.asc

[root@mail-relay ~]# cat /etc/duply/weekly/conf

[backups@mail-relay ~]$ sudo duply weekly backup
Start duply v1.11.3, time is 2017-04-13 01:28:59.
Using profile '/etc/duply/weekly'.
Using installed duplicity version 0.7.11, python 2.7.5, gpg 2.0.22 (Home: ~/.gnupg), awk 'GNU Awk 4.0.2', grep 'grep (GNU grep) 2.20', bash '4.2.46(1)-release (x86_64-redhat-
Encryption public key '4D201731' not found.
Import keyfile '/etc/duply/weekly/' to keyring (OK)
Import keyfile '/etc/duply/weekly/gpgkey.4D201731.sec.asc' to keyring (OK)
Autoset trust of key '4D201731' to ultimate (OK)
Autoset found secret key of first GPG_KEY entry '4D201731' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to '4D201731' & Sign with '4D201731' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.9723.1492061340_*'(OK)

--- Start running command PRE at 01:29:01.696 ---
Skipping n/a script '/etc/duply/weekly/pre'.
--- Finished state OK at 01:29:01.779 - Runtime 00:00:00.082 ---

--- Start running command BKP at 01:29:01.848 ---

Reading globbing filelist /etc/duply/weekly/exclude
Synchronizing remote metadata to local cache...
Deleting local /root/.cache/duplicity/duply_weekly/duplicity-full-signatures.20170413T045259Z.sigtar.gz (not authoritative at backend).
Deleting local /root/.cache/duplicity/duply_weekly/duplicity-full.20170413T045259Z.manifest (not authoritative at backend).
Last full backup date: none
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1492061342.90 (Thu Apr 13 01:29:02 2017)
EndTime 1492061345.85 (Thu Apr 13 01:29:05 2017)
ElapsedTime 2.95 (2.95 seconds)
SourceFiles 2587
SourceFileSize 26469877 (25.2 MB)
NewFiles 2587
NewFileSize 26469877 (25.2 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 2587
RawDeltaSize 26377939 (25.2 MB)
TotalDestinationSizeChange 9940271 (9.48 MB)
Errors 0

--- Finished state OK at 01:29:11.957 - Runtime 00:00:10.108 ---

--- Start running command POST at 01:29:12.029 ---
Skipping n/a script '/etc/duply/weekly/post'.
--- Finished state OK at 01:29:12.114 - Runtime 00:00:00.085 ---

[root@mail-relay ~]# gpg --list-keys
pub   2048R/4D201731 2017-04-12
uid                  backup-gpg-key <root@bu-node-webserver4.vmlab>
sub   2048R/93C10009 2017-04-12

[backups@mail-relay ~]$ sudo duply weekly restore /tmp/etc
Start duply v1.11.3, time is 2017-04-13 01:29:33.
Using profile '/etc/duply/weekly'.
Using installed duplicity version 0.7.11, python 2.7.5, gpg 2.0.22 (Home: ~/.gnupg), awk 'GNU Awk 4.0.2', grep 'grep (GNU grep) 2.20', bash '4.2.46(1)-release (x86_64-redhat-
Autoset found secret key of first GPG_KEY entry '4D201731' for signing.
Checking TEMP_DIR '/tmp' is a folder and writable (OK)
Test - Encrypt to '4D201731' & Sign with '4D201731' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.10176.1492061373_*'(OK)

--- Start running command RESTORE at 01:29:34.564 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Apr 13 01:29:02 2017
Volume was signed by key DAD6DEF64D201731, not 4D201731
01:29:38.495 Task 'RESTORE' failed with exit code '22'.
--- Finished state FAILED 'code 22' at 01:29:38.495 - Runtime 00:00:03.930 ---

Anything below here really doesn’t matter if we can’t even get this simple thing to work, now does it?

I’m doing a demo with two machines to back up, and one machine that will both be backed up and store said backups. kanboard.vmlab and mail-relay are going to be the two remotes, and bu-node-webserver4.vmlab is going to be the CnC host.



Here I mean the actual Unix account, backups, that we’re using for ssh sessions. This account is present on all systems.

# useradd -m backups
# passwd backups

SSH Auth

Just a quick rundown to set up ssh keys.

ssh-keygen -b 4096 -t rsa
ssh-copy-id backups@{bu-node-webserver4,kanboard,mail-relay}.vmlab

Let’s consider who’s going to need access to whom.

  1. root on the CnC machine (bu-node...) needs ssh access to backups on each remote machine, to initiate backups over ssh (for manually triggered runs).
  2. root on each remote machine needs ssh access to backups on the CnC machine, to push backups and pull restoration files.
  3. root on the CnC machine needs ssh access to backups on the same machine. It’s easier to use the same protocol everywhere.

Storage location

This could go any of a number of places: under /usr{,/local}/share, /var, or /srv. Personally, I prefer /srv for things that are meant to be accessible as a service. But since this is also accessed over scp, it could be considered a local file and put under /usr/local; and since it’s constantly changing, it could live under /var. Wherever it goes, as long as it’s owned by backups and not world-readable, it’ll be fine.

Under whichever directory you would like to place it, I would suggest the following layout:

└── backups/
    ├── offline/
    │   ├── kanboard.vmlab/
    │   ├── mail-relay.vmlab/
    │   └── bu-node-webserver4.vmlab/
    └── online/
        ├── kanboard.vmlab/
        ├── mail-relay.vmlab/
        └── bu-node-webserver4.vmlab/
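The layout above can be stamped out with a quick loop. This sketch builds the tree under a temp directory for safety; point base at /srv/backups (or wherever you settled) for real use:

```shell
# Build the offline/online per-host tree; swap in base=/srv/backups for real use
base=$(mktemp -d)
for state in offline online; do
  for host in kanboard.vmlab mail-relay.vmlab bu-node-webserver4.vmlab; do
    mkdir -p "$base/$state/$host"
  done
done
# Then lock it down (needs root): chown -R backups:backups "$base"; chmod -R o-rwx "$base"
ls "$base/online"
```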


Since the command to trigger backups needs to be runnable manually from the CnC host, and Duply needs root privileges to read all of the files, backups should be able to run sudo duply daily backup without entering a password. This is achieved by editing the sudoers file using visudo. As long as a line like the following exists, the user backups will be able to execute /usr/bin/duply without a password.

backups ALL=NOPASSWD: /usr/bin/duply

This doesn’t allow any other command to be executed via sudo, though; only /usr/bin/duply can run as root, and without a password.


Key generation

Generating a gpg key is pretty easy, but it takes a while. Since we only need one pair of gpg keys, we can keep the key info generic; whatever gets entered here shouldn’t affect anything later on. This should be done as root on the CnC machine.

gpg --gen-key
    Key type: (1) RSA and RSA (default)
    bits: 4096
    valid for: 0
    Real name: duply-backup-gpg-key
    Email: root@bu-node-webserver4.vmlab
    Passphrase: <keepassx>

Also, generating I/O to speed up the kernel’s entropy collection helps. Something like dd if=/dev/sda of=/dev/null or find / | xargs file should work; the latter even works without root. Definitely do one of these, because key generation can take forever otherwise.

List Keys

Later on, Duply needs the GPG key ID and passphrase. The passphrase we can get from keepassx; the key ID we can get from gpg --list-keys. The 8 hex digits after the / on the line beginning with pub are the GPG_KEY that Duply needs.

The key ID can be found by running gpg --list-keys. In the example output below, the key ID would be FFFFFFFF for the public key.

pub   1024D/FFFFFFFF 2007-12-17
uid                  duplicity
sub   2048g/899FE27F 2007-12-17

It does not need to be prepended with a 0x in the configuration file.
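If you’d rather script that lookup than eyeball it, the short ID is easy to pull out with awk — demonstrated here on the sample pub line above rather than live gpg output:

```shell
# Grab the 8-hex key ID from the 'pub' line; against a real keyring this would be:
#   gpg --list-keys | awk -F'[/ ]+' '/^pub/ {print $3}'
keyid=$(printf 'pub   1024D/FFFFFFFF 2007-12-17\n' | awk -F'[/ ]+' '/^pub/ {print $3}')
echo "$keyid"   # FFFFFFFF
```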



Duplicity’s native interface is not known for being particularly user-friendly, so there are several wrappers that make it stupidly simple; Duply is one of those. For CentOS, it’s in EPEL as duply. Since duplicity is a dependency of duply, installing duply alone will bring in everything you need.

Config Files

To generate the config files, you’ll run duply, which has a command to create a new named profile. This profile can be named something like daily, weekly, offline, full, etc. As long as it’s descriptive and not something like backups.

duply <name> create

The config files will be put under /etc/duply/<name>/ as long as the create command above is run under sudo. Otherwise, it creates a directory ~/.duply/<name> for the user running it. If the latter happens, move the created directory to /etc/duply/<name> for ease of use and permissions, and make sure the old location under ~/.duply is gone before continuing to customize the new one.

In the file conf in that directory, there are four settings that need to be set:

  • GPG_KEY - see above
  • GPG_PW - see above
  • TARGET - This is where the backup is going
    • scp://backups@bu-node-webserver4.vmlab//srv/backups/online/kanboard.vmlab
  • SOURCE - Directory to back up
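Put together, a minimal conf for one of the remotes might look like this (values are this lab’s; the GPG_PW shown is obviously a placeholder, and SOURCE here is just an example):

```shell
# /etc/duply/weekly/conf (excerpt) -- the four settings that matter
GPG_KEY='4D201731'
GPG_PW='passphrase-from-keepassx'    # placeholder; use your real passphrase
TARGET='scp://backups@bu-node-webserver4.vmlab//srv/backups/online/kanboard.vmlab'
SOURCE='/etc'                        # example; back up whatever you need
```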

Seeing as the gpg password needs to be in that file, it’s best to make that directory not world-readable.

chown -R root:root /etc/duply
chmod -R u+rwX,g-rwx,o-rwx /etc/duply




First Run

That’s the setup for the CnC machine, so let’s push these configs out to the remote machines. Before we can do so, however, we have to run duply once so that it exports our gpg keys to transfer over later.

# su backups
$ sudo duply <name> backup

Once that’s done, you should see two additional files - /etc/duply/<name>/gpgkey.FFFFFFFF.{pub,sec}.asc. Now we can transfer the directory to the remote machines.

Mongo Push

There’s no really good way to do this, since backups is the only user (in the scope of this walk-through) we have ssh access to, and it doesn’t have ownership of the /etc/duply/ directory. You can change the permissions, or change the sudo commands, or whatever - just get that directory installed so that all machines have the same copy at /etc/duply/<name>.
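One workable approach is to tar the profile up, ship it over as backups, and unpack it as root on the other end. A sketch (the profile name is this lab’s, the network steps are shown as comments, and the fallback fake-up of the source dir only exists so the sketch runs anywhere):

```shell
profile=weekly
srcdir=/etc/duply                  # where the profile lives on the CnC host
# Fake up the profile dir for this sketch if it doesn't exist on this box:
[ -d "$srcdir/$profile" ] || { srcdir=$(mktemp -d); mkdir -p "$srcdir/$profile"; touch "$srcdir/$profile/conf"; }
# Pack it as root on the CnC host
tar -C "$srcdir" -czf /tmp/"$profile".tgz "$profile"
# Ship to each remote as backups, then unpack as root on the other end:
#   scp /tmp/weekly.tgz backups@kanboard.vmlab:/tmp/
#   ssh backups@kanboard.vmlab   # then: sudo tar -C /etc/duply -xzf /tmp/weekly.tgz && rm /tmp/weekly.tgz
tar -tzf /tmp/"$profile".tgz       # sanity-check the archive contents
```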

After that, each /etc/duply/<name>/conf file needs one change: the correct TARGET directory it is being sent to. As detailed above, I usually set this to the FQDN of the host.

All together now

If you’re like me, you’ve got all of these terminals in a tmux session. If so, synchronize the next command over all of the open panes right now using <Leader>:setw synchronize-panes and type in:

sudo duply <name> backup

The same command disables pane synchronization as well. Depending on how your ssh-copy-id runs went, you may have to confirm the CnC host’s fingerprint on several remote machines. The gpg keys, however, should be imported automatically on the first run.


We’re having backups run these cron jobs, so we’ll edit the crontab with crontab -u backups -e and enter:

# minute/hour/DOM/mo./DOW/CMD
#<timing>       <command>
# Daily @ 21:00
00 21 * * * sudo /usr/bin/duply daily backup
# Weekly, Fridays @ 21:00
00 21 * * Fri sudo /usr/bin/duply weekly backup
# Monthly, first DOM @ 21:00
00 21 01 * * sudo /usr/bin/duply monthly backup
# Quarterly (Jan/Apr/Jul/Oct), first DOM @ 21:00
00 21 01 */3 * sudo /usr/bin/duply quarterly backup

It may be worth migrating this to systemd timers by the time the updates are caught up.
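For reference, the daily job above as a systemd timer might look roughly like this (unit names and layout are my guess at a translation, not something from this setup):

```ini
# /etc/systemd/system/duply-daily.service
[Unit]
Description=Duply daily backup

[Service]
Type=oneshot
User=backups
ExecStart=/usr/bin/sudo /usr/bin/duply daily backup

# /etc/systemd/system/duply-daily.timer
[Unit]
Description=Run duply daily backup at 21:00

[Timer]
OnCalendar=*-*-* 21:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enabled with something like systemctl enable --now duply-daily.timer.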


The bash script pussh, put up on GitHub by BEARSTECH (who appear to be a French DevOps team), is going to work perfectly for my needs here. Basically, it lets me parallelize ssh sessions over several machines to run a specific command. Installing it is easy enough:

git clone https://github.com/bearstech/pussh /opt/pussh
ln -sT /opt/pussh/pussh /usr/local/bin/pussh

And then all we need to do is specify (in order): which hosts to run it on, which user to use (backups), and the command to run.

/usr/local/bin/pussh -h kanboard.vmlab,mail-relay.vmlab -l backups sudo /usr/bin/duply daily backup

Our sudo trick works; all together, we’ve achieved:

  • Passwordless ssh between all hosts
  • Passwordless running of sudo for the duply command for user backups.
  • duply runs with root privileges
  • Remote machines can push and pull backup files
  • Simultaneous manually-triggered runs
  • Automatically timed jobs


Secret decoder rings were easier

First of all, this shit: Volume was signed by key x, not y, Task ‘RESTORE’ failed with exit code ‘22’. Also, what nobody specifies is that the usual fix of adding --sign-key to the profile conf file only works during a restore. It breaks backups by interactively prompting for the gpg key passphrase, and when automating this over ssh, me no likey prompts.

So the solution here is to create two profiles:

  1. <profile>-backup that doesn’t specify --sign-key
  2. <profile>-restore that does specify --sign-key

This seems like a really roundabout way of doing things, but it’s the best we’re gonna get if we want restorable backups sometime this year.
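Concretely, that means two nearly identical profile dirs whose conf files differ by one line. A sketch — profile names are made up, and whether --sign-key belongs in DUPL_PARAMS (versus elsewhere) may vary by duply version, so check your conf template:

```shell
# /etc/duply/weekly-backup/conf -- no --sign-key, so unattended backups work
GPG_KEY='4D201731'
GPG_PW='passphrase-from-keepassx'    # placeholder

# /etc/duply/weekly-restore/conf -- same settings, plus signature verification,
# using the long key ID from the restore error above
DUPL_PARAMS="$DUPL_PARAMS --sign-key DAD6DEF64D201731"
```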

How dare you try to restore what you want to back up

Even though the entirety of duply is a shell script, you still have to manually rsync the files from the restore location to where they actually belong. Yep - unless you pass --force, duply (and maybe duplicity) will not restore into the live filesystem. All it does is fall over, whining and complaining, and ultimately print out the word FAILED in all caps. Ah-yep.

So if you’re going to restore, I’d say you’re best off restoring to /tmp/XXX, and then rsyncing from there. And heaven forbid you try to restore to an existing directory - the entire computer might as well be engulfed in flames for all the good that’ll do you. Nope! No error handling here.


How do you even get that longer key in the first place? It’s not in gpg’s default output - you’d think it would be front and center. No! It’s just this magical key that happens to share its last 8 hex digits with the short key ID. Yeah, yeah, I’m sure that’s significant somehow, but c’mon, is it really that hard? gpg’s been around for decades, and we can’t even get a simple python script (duplicity) to use it correctly? Are you kidding me? What a sad, sad state these fuckers are in. This is why nobody uses encryption. And this is the bare minimum, too - there’s nothing special about it. You’re telling me you do block-level deduplication, but you can’t even give me the right freakin’ gpg key?!?! Un-fucking-believable.
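As it turns out, gpg will cough up the long ID if you ask with --keyid-format long — not that duply tells you that. And once you have a pub line, pulling it apart is trivial; the sample line below reuses the key IDs from the failed restore above:

```shell
# Real command:  gpg --list-keys --keyid-format long
# Sample 'pub' line in that format:
line='pub   2048R/DAD6DEF64D201731 2017-04-12'
longid=${line##*/}        # strip everything through the last '/'
longid=${longid%% *}      # strip the trailing date
echo "$longid"            # DAD6DEF64D201731
echo "${longid: -8}"      # 4D201731 -- the short ID is just the last 8 hex digits
```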

Look, I’m not trying to bitch and whine about FOSS, but I just don’t have the time to put up with stupid errors when I’m trying to do work for others as well. We’re all trying to get work done. If, say, your bike chain breaks on your way to work, I don’t expect you to fashion a brand new one out of spare mobos and other metal you have laying around the house. No, you’re going to go get one, and probably have it put on for you, ’cause what you’re interested in is riding the bike to work - to work on the software that the bike shop uses for inventory tracking. Just… freakin’ make it work.