Automated Backups with Ansible on Exoscale S3 (SOS)

Apart from the regular snapshots we create on Exoscale, we also wanted an automated backup of some key files to simplify data extraction and, in particular, give us easy access to old database versions.

Exoscale is a Swiss cloud provider comparable to Amazon AWS. Its S3 implementation (Simple Object Storage, SOS) is fairly mature, and the storage buckets are also accessible through a fast, pleasant web interface on exoscale.ch (each organisation has its own instances and storage).

Here is how we integrated Exoscale S3 into our deployment:

# This role should be executed as root
---

- name: create s3cmd config file
  template: src=s3cfg.j2 dest=/root/.s3cfg
  become: yes
  become_user: root

- name: Install S3 packages
  apt: pkg={{ item }} update_cache=yes cache_valid_time=3600
  become: yes
  become_user: root
  with_items:
    - s3cmd

- name: create the bucket
  command: chdir={{ project_root }} s3cmd mb s3://{{ project_name }}
  become: yes
  become_user: root
  # s3cmd mb exits non-zero if the bucket already exists, so don't
  # fail the run on reruns of this role
  ignore_errors: yes


- name: Dump postgres db
  shell: pg_dump {{ db_name }} > /tmp/db.sql
  when: database_system == "postgres"
  become: yes
  become_user: postgres


- name: Backup media directory
  command: chdir={{ project_root }} s3cmd put --recursive {{ project_media }} s3://{{ project_name }}/{{ backup_folder_name }}/
  become: yes
  become_user: root

- name: Backup locale directory
  command: chdir={{ project_root }} s3cmd put --recursive {{ project_root }}/locale s3://{{ project_name }}/{{ backup_folder_name }}/
  become: yes
  become_user: root

- name: Backup sqlite db
  command: chdir={{ project_root }} s3cmd put {{ project_root }}/db.sqlite3 s3://{{ project_name }}/{{ backup_folder_name }}/
  when: database_system == "sqlite"
  become: yes
  become_user: root

- name: Backup postgres dump
  command: chdir={{ project_root }} s3cmd put /tmp/db.sql s3://{{ project_name }}/{{ backup_folder_name }}/
  when: database_system == "postgres"
  become: yes
  become_user: root
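To make the backup truly automated, the role can be run from cron on the machine that drives Ansible. A minimal sketch, assuming the tasks above live in a hypothetical `backup.yml` playbook next to an inventory file `hosts` (adjust paths and schedule to taste):

```
# /etc/cron.d/ansible-backup (hypothetical file name and paths)
# Run the backup playbook every night at 03:30.
30 3 * * * root ansible-playbook -i /opt/deploy/hosts /opt/deploy/backup.yml >> /var/log/ansible-backup.log 2>&1
```

Redirecting stdout and stderr to a log file keeps a record of failed uploads without cron mailing the output.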

The s3cmd config file looks like this:

[default]
host_base = sos.exo.io
host_bucket = %(bucket)s.sos.exo.io
access_key = {{ exoscale_s3_key }}
secret_key = {{ exoscale_s3_secret }}
use_https = True
signature_v2 = True
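The two `{{ ... }}` placeholders are filled in by Ansible when the template is rendered. As a quick local sanity check, the rendered file can be reproduced and inspected like this (the key and secret below are placeholders for the demo, not real credentials):

```shell
# Hypothetical illustration of the rendered /root/.s3cfg.
EXO_KEY="EXOxxxxxxxxxxxxxxxxxxxx"
EXO_SECRET="secret-placeholder"

# Write the config the way the s3cfg.j2 template would render it.
cat > /tmp/s3cfg <<EOF
[default]
host_base = sos.exo.io
host_bucket = %(bucket)s.sos.exo.io
access_key = ${EXO_KEY}
secret_key = ${EXO_SECRET}
use_https = True
signature_v2 = True
EOF

# Confirm the credentials landed in the file.
grep '^access_key' /tmp/s3cfg
```

On the real host, `s3cmd --config /root/.s3cfg ls` is a straightforward way to confirm the credentials actually work against the endpoint.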

This is based on https://community.exoscale.ch/documentation/storage/quick-start/

You can get the key and secret for Exoscale S3, as well as the S3 endpoint URL, from https://portal.exoscale.ch/account/profile/api

The Ansible default variables are:

---

# can be used to create unique directory and file names
datetime_stamp: "{{ lookup('pipe', 'date +%Y%m%d-%H%M') }}"

backup_folder_name: "{{ datetime_stamp }}-{{ deploy_type }}"
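The `datetime_stamp` lookup simply shells out to `date`, so the folder name each run produces can be reproduced directly. A sketch with a hypothetical `deploy_type` of `production`:

```shell
# Reproduce the backup_folder_name that the Ansible vars build,
# assuming deploy_type=production (hypothetical value).
deploy_type="production"
datetime_stamp=$(date +%Y%m%d-%H%M)   # e.g. 20240131-0305
backup_folder_name="${datetime_stamp}-${deploy_type}"
echo "${backup_folder_name}"
```

Because the stamp is evaluated once per playbook run, every task in the role uploads into the same `s3://{{ project_name }}/<stamp>-<deploy_type>/` folder, giving one self-contained folder per backup run.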