DjangoCMS and ManifestStaticFilesStorage

The expires header in HTTP works like the TTL (time to live) in DNS caching – once an asset is cached, the browser won’t request it again until its individual cache-until date expires.

If the file content changes on the server, the browser will not see those changes before the expiration date of the file’s cache.

That can cause problems, for example when javascript or styles in the cached file don’t match the HTML of the document anymore.

Also, when we deploy, we want the new version to be visible right away, to everyone.

In order to have both browser caching AND full version control in the user’s browser, the idea is to add a unique hash to the filename of every file collected by django’s staticfiles, so that whenever a file’s content changes, its filename changes too. This way a new asset is fetched instantly by the browser, because caching is done by filename.
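The core idea can be sketched in a few lines of Python – a simplified version of what ManifestStaticFilesStorage does internally (it inserts the first 12 hex characters of the content’s MD5 hash before the file extension):

```python
import hashlib
from pathlib import Path

def hashed_name(path: str, content: bytes) -> str:
    """Insert a content hash into the filename, e.g. css/site.<hash>.css."""
    digest = hashlib.md5(content).hexdigest()[:12]
    p = Path(path)
    return str(p.with_name(f"{p.stem}.{digest}{p.suffix}"))

print(hashed_name("css/site.css", b"body { color: red }"))
# changed content yields a different name, so the browser re-fetches the asset
print(hashed_name("css/site.css", b"body { color: blue }"))
```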

The expiry header can then safely be set to an “infinite” date (a date far in the future) – we therefore call this “far future expiry header”.
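In nginx, such a far-future policy on the static directory might look like this (a sketch; the location and filesystem path are assumptions about your setup):

```nginx
location /static/ {
    alias /srv/app/static_collected/;
    # safe only because filenames change whenever content changes
    expires max;
    add_header Cache-Control "public, immutable";
}
```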

  • You can do this by setting a far future expiry header on the static directory where django puts the collected static files. This is done in the configuration of the webserver that serves the static files (normally nginx).
  • https://docs.djangoproject.com/en/2.1/ref/contrib/staticfiles/#ManifestStaticFilesStorage
  • Allows you to remove hashing from the webpack setup, which is convenient
  • Allows you to also hash images via {% static 'path/to/asset.jpg' %}, i.e. assets that are not bundled via webpack
  • Hashing is only active if DEBUG is False
  • On divio.com, if you are using aldryn-django then static file hashing can be activated like this: https://docs.divio.com/en/latest/reference/addons-aldryn-django.html#hash-static-file-names
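Outside of aldryn-django, activating it is a one-line settings change (this is the setting name for Django 2.x; newer Django versions configure this via the STORAGES dict instead):

```python
# settings.py -- only takes effect when DEBUG is False
STATICFILES_STORAGE = "django.contrib.staticfiles.storage.ManifestStaticFilesStorage"
```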

How to fix collectstatic if there are errors

  • collectstatic will fail if any app that includes static assets in templates is not in INSTALLED_APPS
  • On production, collectstatic will also sometimes fail; in such cases, DEBUG has to be turned on, ./manage.py collectstatic has to be run, and then DEBUG can be turned off again. See https://github.com/divio/djangocms-text-ckeditor/issues/474
  • If you use inlined data URLs and the SVG code itself uses url()s that contain url-encoded values, the collectstatic command will fail because it tries to analyse these url()s; you have to work around this as described in https://code.djangoproject.com/ticket/21080#comment:12
  • You cannot use absolute URLs that are otherwise accepted by Django: {% static "my_app/example.jpg" %} works, but {% static "/my_app/example.jpg" %} does not

How to test locally in production mode

ManifestStaticFilesStorage is only activated if DEBUG = False. However, disabling debug mode in django will stop the runserver from serving the static files. There may also be a couple of other issues that you need to resolve before you can see ManifestStaticFilesStorage in action in your local dev environment:

  • on Divio projects, set these env vars in .env-local:
DEBUG=False
STAGE=live
DIVIO_ENV=live
  • Set SECURE_SSL_REDIRECT to False in your settings.py
  • Set HTTP_PROTOCOL to http in your settings.py
  • Add the following to your urls.py:
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your URL patterns ...
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)

Now you can build your frontend (something like yarn run build), then collect the static files using ./manage.py collectstatic, and you will be running with ManifestStaticFilesStorage enabled on your local dev system!

NEO python Shortcuts

Installation

  • create a python virtualenv
  • pip install neo-python

Usage

  • np-prompt in your console

Private Network Setup

  • docker pull cityofzion/neo-privatenet (see https://github.com/CityOfZion/neo-privatenet-docker)
  • np-prompt -p connects to your docker private network
  • then you can import the wallet that is provided in the documentation. It already contains all the NEO and GAS.

Clean up Jenkins to save disk space

-> Manage Jenkins -> Script Console

from https://gist.github.com/pkouman/c987ce8cd622820cce9111ea34662c6b

MAX_BUILDS = 10 // max builds to keep

def jobs = Jenkins.instance.items;

for (job in jobs) {
    println "Job: " + job.name
    def recent = job.builds.limit(MAX_BUILDS)
    println "Recent Builds: "  + recent
    println "============================="
    for (build in job.builds) {
        if (!recent.contains(build) && !build.isBuilding()) {
            println "Deleting: " + build
            build.delete()
            println ""
        }
    }
}

replace raven with sentry-sdk

Since raven is now deprecated, this is how we should integrate Sentry into our django projects: via the new pip install sentry-sdk:

settings.py: no more messing around with LOGGING is required; the following code plugs right into django:

import sentry_sdk
import logging
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.logging import LoggingIntegration

sentry_sdk.init(
   dsn="https://1234@sentry.io/123",
   integrations=[
      DjangoIntegration(),
      LoggingIntegration(
         level=logging.INFO, # Capture info and above as breadcrumbs
         event_level=None # Send no events from log messages
      )
   ],
   environment=env('DJANGO_ENV', 'develop'),
)

django translation tips: makemessages and gettext pitfalls & blocktrans best practices

Below I share some best practices and experiences I had with Django’s gettext implementation. Please make sure that you are familiar with Django’s translation framework before applying the advice below: https://docs.djangoproject.com/en/3.0/topics/i18n/translation/

General Hints

It’s a smart thing to ignore the huge node_modules directory when collecting translatable strings. Also, specifying a language will actually create that language’s locale files if they are not already there. Not specifying the language will do absolutely nothing.

./manage.py makemessages --ignore=node_modules -l de

Fuzzy translations

Fuzzy translations are one of the bigger challenges when using gettext for translation. The following is a best practice for coping with them:

… “Fuzzy” messages do not make it into the translation. They are not considered to be translated correctly, but they are not deleted, since they are likely to be “almost correct” and just need a small update.

Part of normal translator operations is to search out any “fuzzy” messages and verify that they are either still correct (just delete the fuzzy comment, but do not delete the python-format bit) or update them and then remove the “fuzzy”.

This isn’t anything special to Django, by the way. It’s the way
gettext-backed i18n support operates.

Malcolm Tredinnick at https://groups.google.com/forum/#!msg/django-users/SjqjUhZwJzU/DaHmdwZeZLkJ

and also:

When I run manage.py makemessages I found some messages which were in the .po file like this:

msgid "Example"
msgstr "Example"

Transformed to this, after I ran the command:

#~ msgid "Example"
#~ msgstr "Example"

Django will comment out all messages that are no longer in your code. It won’t remove them, so you won’t lose them, but this way these messages won’t end up in the compiled .mo file.

https://stackoverflow.com/questions/32749854/django-makemessages-decides-to-comment-already-existing-translations/35040165#35040165
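For reference, this is roughly what a fuzzy entry looks like in a .po file (a sketch; the file path and strings are invented). The translator either confirms the translation and deletes only the `#, fuzzy` line, or updates the msgstr and then removes it:

```po
#: templates/home.html:12
#, fuzzy
#| msgid "Welcome back"
msgid "Welcome back!"
msgstr "Willkommen zurück"
```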

The Django blocktrans tag

Best practice: Always use the `trimmed` option. This results in a nicely formatted block of text in the po file which makes it simpler for translators to handle it. If the translation key is not trimmed, arbitrary line breaks make it difficult to translate properly.

{% blocktrans trimmed with business_name=business_name context "password reset email" %}
You're receiving this email because you requested a password reset for your user account at {{ business_name }}.

Another line, which will be trimmed into one single block of text inside the po file so that this can be properly formatted.
{% endblocktrans %}
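With trimmed, the extracted entry in the .po file becomes one merged line of text instead of preserving the template’s indentation and line breaks, roughly like this (a sketch; the file path is invented):

```po
#: templates/emails/password_reset.html:3
msgctxt "password reset email"
msgid ""
"You're receiving this email because you requested a password reset for your "
"user account at %(business_name)s. Another line, which will be trimmed into "
"one single block of text inside the po file so that this can be properly "
"formatted."
msgstr ""
```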

Translation Process

  1. After changes in the code and templates, the developer updates the po files with the manage.py makemessages command, reviews the updates in the re-generated po files and then commits them to the source, ideally via a Merge Request. It’s not a good idea to add new translation entries from templates or code to a po file manually; django’s makemessages command takes care of this and makes sure everything is updated correctly.
  2. manage.py compilemessages is then normally run during deployment in the deploy script. It generates the compiled .mo files that are needed by Django to display the translations.

A word about context (pgettext_lazy)

pgettext_lazy(): the p stands for particular. This means that the same translation key can occur multiple times in a .po file (with different contexts). pgettext_lazy() therefore requires two arguments: 1. the name of the context (e.g. booking-email) and 2. the translation key (e.g. First name).

from django.utils.translation import pgettext_lazy

LANGUAGES = (
    ('de', pgettext_lazy('language menu', 'German')),
    ('fr', pgettext_lazy('language menu', 'French')),
    ('it', pgettext_lazy('language menu', 'Italian')),
)

Warning: We import pgettext_lazy() without a custom name so there is no confusion between gettext_lazy() and pgettext_lazy()! Do NOT import pgettext_lazy as _; it does not work in django (it’s probably a bug)!

Warning: If the same translation key is referred to by pgettext_lazy() and gettext_lazy() in different places across the source code, things get difficult to debug. Therefore keep strict order and tidy up!

If there is no context, you need to use gettext_lazy() which is normally imported as follows:

from django.utils.translation import gettext_lazy as _
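The context mechanism is not Django-specific: Python’s stdlib gettext module (which Django builds on) exposes the same idea with the same argument order, context first, then the message. A minimal sketch with no catalogs installed, so lookups fall back to the untranslated message:

```python
import gettext

# NullTranslations has no catalog, so lookups fall back to the msgid itself,
# but the (context, message) pair is what keys the lookup in a real .po file.
t = gettext.NullTranslations()
print(t.pgettext("language menu", "German"))  # falls back to "German"
print(t.gettext("German"))                    # plain lookup, no context
```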

WIP: Django Project Dockerization

docker-compose.yml

version: "2"

services:
  web:
    build: "."
    ports:
      - "8000:80"
    volumes:
      - ".:/app:rw"
    command: python manage.py runserver 0.0.0.0:80

Dockerfile

FROM aldryn/base-project:py3-3.23

# System upgrade
RUN apt-get update && apt-get upgrade -yq

# Setup for ssh onto github
RUN mkdir -p /root/.ssh
ADD credentials/divio-deploy-key.pem /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN echo "Host gitlab.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config

# Python
# we want to keep project-specific sources in the "wee" folder
ENV PYTHONPATH=/app/wee:$PYTHONPATH
COPY requirements.txt /app/
RUN pip install -r requirements.txt

# nvm environment variables
ENV NODE_VERSION=8.11.3
RUN . $NVM_DIR/nvm.sh && nvm install $NODE_VERSION

# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
COPY package.json /app/
RUN npm install

COPY . /app

RUN npm run build

RUN DJANGO_MODE=build python manage.py collectstatic --noinput

create ethereum keystore from private key

I haven’t yet found a python way of doing this, but here is an npm solution:

https://ethereum.stackexchange.com/questions/11166/how-to-generate-a-keystore-utc-file-from-the-raw-private-key

npm install ethereumjs-wallet
node

then:

var Wallet = require('ethereumjs-wallet');
var key = Buffer.from('<private key>', 'hex');
var wallet = Wallet.fromPrivateKey(key);
wallet.toV3String('<empty string or password>');

Derive bitcoin addresses from HD xpub keys

Objectives

we want to

  • generate Bitcoin receiving addresses from an xpub key (so that no secret information needs to be shared)
  • do this independently from a service / API, in python
  • be compatible with https://iancoleman.io/bip39/
  • be compatible with wallet software such as Electrum or blockchain.info so that the received funds can be easily managed

A note in regards to blockchain.info:

  • blockchain.info allows you to export the mnemonic and also the xpub key for each wallet account created
  • the xprv key for each account can be derived via https://iancoleman.io/bip39/ by setting the account to {n}, where 0 corresponds to the first blockchain.info account, 1 to the second, and so on.
  • this xprv can then be imported into Electrum

Solution

https://github.com/primal100/pybitcointools/ appears to be the continuation of Vitalik Buterin’s abandoned https://github.com/vbuterin/pybitcointools.

Then:

from cryptos import *

words = 'word1 word2 word3 word4 word5 word6 word7 word8 word9 word10 word11 word12'
coin = Bitcoin()
private_wallet = coin.wallet(words)

xpub = private_wallet.keystore.xpub

# this is where you would start with an address calculation feature in an insecure environment like a web server
pub_keystore = keystore.from_xpub(xpub, coin, "p2pkh")
pub_wallet = wallet.HDWallet(pub_keystore)
print(pub_wallet.new_receiving_addresses(num=3))

Divio/DjangoCMS – export/import a DB dump

The following works, placing dump.sql into a mounted volume

docker-compose exec db psql --username postgres db < ./dump.sql

dumping can be achieved similarly:

docker-compose exec db pg_dump --username postgres db > ./dump.sql

This is based on http://support.divio.com/local-development/divio-shell/how-to-interact-with-your-local-projects-database

First, get rid of the old db:

docker ps
docker stop projectname_db_1
docker rm projectname_db_1
docker-compose up db

then import

cat db-dump.sql | docker exec -i projectname_db_1 psql -U postgres db