gitlab ci – execute .gitlab-ci.yml locally with gitlab runner (gitlab-ci-multi-runner)

Local Installation on Mac OS X

  • see https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/osx.md (you don't have to register your local runner with GitLab)
  • almost certainly you will want to run jobs in a Docker container, so install Docker
  • add a public Docker image to your .gitlab-ci.yml (for example, add image: python:alpine as the first line of your .gitlab-ci.yml, or pick another image from https://store.docker.com)

Local Execution

  • go to your project’s root
  • execute gitlab-ci-multi-runner exec docker deploy_dev (where deploy_dev is a job defined in the project's .gitlab-ci.yml)

If you have variables in your .gitlab-ci.yml (and almost certainly you will) you can load values locally into these variables as follows:

gitlab-ci-multi-runner exec docker deploy_dev --env ANSIBLE_PROJECT_DEPLOY_KEY="$ANSIBLE_PROJECT_DEPLOY_KEY" --env SECRETS_YML="$SECRETS_YML" --env SERVER_SSH_KEY="$SERVER_SSH_KEY"

Hint: the whole contents of a text file can be loaded into a bash variable like this:

SECRETS_YML="$(cat ./credentials/secrets.yml)"
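
Putting the two pieces together, a minimal sketch (the secrets file here is a stand-in created just for the demo; your real files and variable names will differ):

```shell
# create a stand-in secrets file for the demo (your real file will differ)
mkdir -p ./credentials
printf 'db_password: s3cret\n' > ./credentials/secrets.yml

# $(...) is the modern form of backticks; the quotes keep multi-line content intact
SECRETS_YML="$(cat ./credentials/secrets.yml)"
echo "$SECRETS_YML"

# then hand it to the runner:
# gitlab-ci-multi-runner exec docker deploy_dev --env SECRETS_YML="$SECRETS_YML"
```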

Sample .gitlab-ci.yml

image: python:alpine

before_script:
  # install required packages
  - apk update && apk add openssh-client git python python-dev ansible
  - pip install --upgrade pip
  - pip install boto httplib2

  # trust gitlab.com
  - mkdir -p ~/.ssh/
  - touch ~/.ssh/known_hosts
  - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts

  # dump ssh key
  - echo "$ANSIBLE_PROJECT_DEPLOY_KEY" > ansible-project-deploy-key.pem
  - chmod 600 ansible-project-deploy-key.pem

  # Run ssh-agent (inside the build environment)
  - eval $(ssh-agent -s)

  # clone the ansible script
  # http://superuser.com/questions/232373/how-to-tell-git-which-private-key-to-use
  - ssh-agent sh -c "ssh-add ansible-project-deploy-key.pem; git clone git@gitlab.com:what-digital/deploy-cvcube.git deploy-script"

  # get deploy script
  - cd deploy-script
  - git submodule update --init --recursive

  # copy credentials to their place
  - mkdir credentials
  - echo "$SERVER_SSH_KEY" > credentials/server-dev.pem
  - echo "$SERVER_SSH_KEY" > credentials/server-stage.pem
  - echo "$SERVER_SSH_KEY" > credentials/server-production.pem
  - echo "$SECRETS_YML" > credentials/secrets.yml
  - chmod -R 600 credentials/*


stages:
  - deploy

deploy_production:
  script:
    - ./deploy production 0-local.yml
    - ./deploy production 3-deploy-site.yml
  stage: deploy
  only:
    - master

deploy_stage:
  script:
    - ./deploy stage 0-local.yml
    - ./deploy stage 3-deploy-site.yml
  stage: deploy
  only:
    - stage

deploy_dev:
  script:
    - ./deploy dev 0-local.yml
    - ./deploy dev 3-deploy-site.yml
  stage: deploy
  only:
    - dev

DjangoCMS Migration from sqlite3 to postgres

Big surprise: dumping from sqlite3 doesn't generate SQL that postgres can read correctly.

Thankfully, Django provides high-level export/import commands: dumpdata and loaddata.

Export everything to json from db.sqlite3:

./manage.py dumpdata > db.json

Then switch your local djangoCMS installation to postgres (don’t forget to reload your env vars in case they contain your db settings).

createdb db-name
./manage.py migrate
# open a psql shell on the new database (on a Mac this just works, possibly thanks to Postgres.app)
psql db-name
# inside psql, flush these tables: migrate populated them with default data that collides with the imported data
delete from auth_group_permissions; delete from auth_permission; delete from django_admin_log; delete from django_content_type;
# back in the shell, import the dump
./manage.py loaddata db.json
# doesn't hurt
./manage.py cms fix-tree
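
A hedged alternative to flushing those tables by hand: dumpdata can exclude the colliding apps up front (these flags exist in Django 1.7+; adjust to your version and project):

```shell
# exclude the content types and permissions that migrate re-creates,
# so loaddata no longer collides with the default rows
./manage.py dumpdata --natural-foreign --exclude contenttypes --exclude auth.permission > db.json
./manage.py loaddata db.json
```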

automatic .env sourcing with virtualenv

add this to your .virtualenvs/project-name/bin/postactivate:

#!/bin/bash
# This hook is sourced after this virtualenv is activated.

if [ -f "${PWD}/.env" ]; then
    echo "activating .env..."
    set -a
    . "${PWD}/.env"
    set +a
fi

The .env file is now sourced on activating the environment.
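
The key piece is set -a: every variable assigned between set -a and set +a is exported automatically, so child processes see it too. A minimal demonstration (the .env file here is created just for the demo):

```shell
# demo .env file
printf 'API_URL=https://example.local\n' > .env

set -a          # auto-export every variable assigned from here on
. ./.env
set +a

# API_URL is now visible to child processes as well
sh -c 'echo "$API_URL"'
```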

WordPress Shortcuts

Search & Replace the Base URL in a multi site setup

sed -i.bak "s/http:\/\/pfjournal.dev.infel.ch\//http:\/\/pfjournal.local\//g" pfj-20170112135105.sql
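
sed accepts any delimiter after the s, so a variant with | avoids escaping every slash in the URLs (shown here on a throwaway file standing in for the SQL dump):

```shell
# throwaway file standing in for the SQL dump
printf 'siteurl: http://pfjournal.dev.infel.ch/wp\n' > dump.sql

# using | as the delimiter means the slashes in the URLs need no escaping;
# -i.bak keeps a backup copy, and works with both GNU and BSD sed
sed -i.bak "s|http://pfjournal.dev.infel.ch/|http://pfjournal.local/|g" dump.sql

cat dump.sql
```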

Locally import database in MAMP on Mac OS X

/Applications/MAMP/Library/bin/mysql --host=localhost -uroot -proot postfinance < /Users/mario/Downloads/pfj-20170112135105.sql

Automated Backups with Ansible on Exoscale S3 (SOS)

Apart from the snapshots we create on Exoscale on a regular basis, we also wanted an automated backup of some key files, to simplify data extraction and, specifically, to give easy access to old database versions.

Exoscale is the Swiss Amazon AWS. Its S3 implementation is pretty advanced, and the storage buckets are also accessible through a nice and fast web interface on exoscale.ch (each organisation has its own instances and storage).

Here is how we integrated Exoscale S3 into our deployment:

# This role should be executed as root
---

- name: create s3cmd config file
  template: src=s3cfg.j2 dest=/root/.s3cfg
  become: yes
  become_user: root

- name: Install S3 packages
  apt: pkg={{ item }} update-cache=yes cache_valid_time=3600
  become: yes
  become_user: root
  with_items:
    - s3cmd

- name: create the bucket
  command: chdir={{ project_root }} s3cmd mb s3://{{ project_name }}
  become: yes
  become_user: root


- name: Dump postgres db
  shell: pg_dump {{ db_name }} > /tmp/db.sql
  when: database_system == "postgres"
  become: yes
  become_user: postgres


- name: Backup media directory
  command: chdir={{ project_root }} s3cmd put --recursive {{ project_media }} s3://{{ project_name }}/{{ backup_folder_name }}/
  become: yes
  become_user: root

- name: Backup locale directory
  command: chdir={{ project_root }} s3cmd put --recursive {{ project_root }}/locale s3://{{ project_name }}/{{ backup_folder_name }}/
  become: yes
  become_user: root

- name: Backup sqlite db
  command: chdir={{ project_root }} s3cmd put {{ project_root }}/db.sqlite3 s3://{{ project_name }}/{{ backup_folder_name }}/
  when: database_system == "sqlite"
  become: yes
  become_user: root

- name: Backup postgres dump
  command: chdir={{ project_root }} s3cmd put /tmp/db.sql s3://{{ project_name }}/{{ backup_folder_name }}/
  when: database_system == "postgres"
  become: yes
  become_user: root

The s3cmd config template (s3cfg.j2) looks like this:

[default]
host_base = sos.exo.io
host_bucket = %(bucket)s.sos.exo.io
access_key = {{ exoscale_s3_key }}
secret_key = {{ exoscale_s3_secret }}
use_https = True
signature_v2 = True

This is based on https://community.exoscale.ch/documentation/storage/quick-start/

You can get the key and secret for Exoscale S3, as well as the S3 endpoint URL, from https://portal.exoscale.ch/account/profile/api

The Ansible role's default vars are:

---

# can be used to create unique directory and file names
datetime_stamp: "{{ lookup('pipe', 'date +%Y%m%d-%H%M') }}"

backup_folder_name: "{{ datetime_stamp }}-{{ deploy_type }}"
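
To make the backups truly automated, a role like this is typically put on a schedule. A hedged sketch using Ansible's cron module (the playbook path and timing are assumptions; adjust to your setup):

```yaml
# hypothetical: run the backup playbook every night at 03:00
- name: schedule nightly backup
  cron:
    name: "nightly s3 backup"
    minute: "0"
    hour: "3"
    job: "ansible-playbook /opt/deploy/backup.yml"
  become: yes
  become_user: root
```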

Debugging Web Views on Android

  1. connect your phone via USB (and enable USB debugging in the phone's developer settings)
  2. run the application on the phone (leave it on the login screen)
  3. on the PC, run Chrome and go to chrome://inspect/devices#devices
  4. click "inspect" below your phone's entry (it should be listed by name)
  5. on the phone, log in to the system using your credentials
  6. check the Console tab in Chrome's DevTools; it should print something useful

by kamil@what.digital