These are a few notes I made while trying to get GitLab CI working.

Fulfil the system requirements

There are some pretty insane system requirements for GitLab. You need at least 4GB of memory, which is not always so easy to come by in a VPS environment. Even when you fulfil the system requirements, GitLab will run out of memory and have to be "kicked" sometimes, in my experience. You could probably automate this with some kind of systemd configuration, but I haven't tried that yet.

Realize that things differ depending on where your packages come from

GitLab hosts Debian packages itself that are more up to date, but perhaps less integrated with the rest of the system. For various reasons, I was reluctant to use the packages from upstream. Instead, I used some backported versions for Jessie that were created by Pirate Praveen. You don't need to worry about this any more, because GitLab has made it into Debian stretch, so you just need to choose: use the upstream packages, or use the official Debian stable packages. You won't have problems unless you run across features that you need from the newer versions.

Understand the GitLab CI environment

There are several things to realize about GitLab CI. The environment can differ a lot depending on the executor. The two primary executors are 'docker' and 'shell'. If you use docker, you build up all of your infrastructure prerequisites on top of a Docker base image. If you use shell, your builds run in a regular shell environment, as a special (non-root) user.
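
As a rough sketch (the job name, image name, and build script are placeholders), a job aimed at a docker runner names the base image it wants to run in:

# Hypothetical job for a docker runner: the build environment comes from a base image.
build_in_docker:
  image: debian:stretch
  script:
    - sh do_build.sh

A job on a shell runner would omit the image: line and simply assume that the tools it needs are already installed on the host.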

Personally, I found that although docker was easier to get started with, I got benefits from moving to the shell executor, because having a preconfigured environment eliminated some run time from the integration test suite. That overhead could also have been eliminated by creating my own Docker base image, but that seems to be putting the cart before the horse, in a way: I already have all my infrastructure under configuration management, so why would I invest in a new method of ensuring that the Docker base image meets my needs when I can address it with existing tools? There's also the problem that docker can't run background processes properly, as it doesn't have a real PID 1.

Understand GitLab CI files

They're in YAML format, which is well documented. There's an existing lint tool for .gitlab-ci.yml, which enforces stricter semantic rules than plain YAML validation. If you just want to validate your YAML, you can use this snippet:

validate_yaml() {
    # Parse the file with Ruby's YAML library; a parse error means the YAML is invalid.
    ruby -e "require 'yaml'; puts YAML.load_file(ARGV[0])" "$1"
}

Assuming you have Ruby installed.

You can use "tags" to separate groups of runners. If you are trying to move from docker to shell executors or vice versa, you can set tags in a job to ensure that that job doesn't get executed on the wrong runner.

The key to understanding the execution cycle of GitLab CI jobs is that any job can essentially be executed on any runner. That means you can't make assumptions about which files will be available at any given time: if job j1 executes on host mercury and job j2 executes later on host venus, the output files produced by j1 won't be available on venus.

There are two ways to get around this.

The first is to declare that the job has build artifacts:

variables:
  OUTPUT_TAR_PATH: "mybuild.tar"

compile:
  stage: compile
  script:
    - sh do_build.sh $OUTPUT_TAR_PATH
  tags:
    - shell
  artifacts:
    paths:
      - $OUTPUT_TAR_PATH

The GitLab CI runtime will automatically make sure that $OUTPUT_TAR_PATH is copied between any runner hosts that are used to execute jobs in this pipeline.
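
A later job can then pick that artifact up. As a rough sketch (the stage, tag, and test script are placeholders), a follow-on job that unpacks the artifact might look like this:

test:
  stage: test
  script:
    - tar -xf $OUTPUT_TAR_PATH
    - sh run_tests.sh
  tags:
    - shell
  dependencies:
    - compile

As far as I know, jobs in later stages fetch artifacts from all earlier stages by default; the dependencies key narrows that down to the jobs you name.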

Another related mechanism is the cache:

cache:
  untracked: true
  key: "$CI_BUILD_REF_NAME"
  paths:
    - node_modules/

This is useful for JavaScript projects; otherwise you're going to end up downloading >1GB of npm modules at every stage of your pipeline.
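
As a hypothetical sketch of how that plays out in a single job (the job name and commands are placeholders), npm install becomes much cheaper once node_modules/ is restored from the cache:

frontend_build:
  stage: compile
  script:
    # On a cache hit, node_modules/ is restored before the script runs, so this is quick.
    - npm install
    - npm run build
  cache:
    key: "$CI_BUILD_REF_NAME"
    paths:
      - node_modules/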

One last point: commands under a script list are parsed in a special way, and line breaks or special characters may not always work as you expect. So far, it has mostly been trial and error for me. Don't assume that you need to quote unless you try it and it fails: AFAIK, the runner does its own mangling of the shell lines.
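
To illustrate the kind of thing I mean, here's a hypothetical job where a colon in an echo would otherwise trip up the YAML parser, plus a block scalar for a multi-line command:

deploy:
  script:
    # A colon followed by a space in an unquoted scalar confuses the YAML parser,
    # so the whole command is quoted here.
    - 'echo "status: deploying"'
    # A YAML block scalar (|) is one way to write a multi-line command.
    - |
      if [ -f mybuild.tar ]; then
        echo "build artifact present"
      fi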