When using CentOS 6 as a container under a Debian Sid host, you may face the following problem.

amoe@inktvis $ sudo lxc-create -n mycontainer -t centos
Host CPE ID from /etc/os-release: 
This is not a CentOS or Redhat host and release is missing, defaulting to 6 use -R|--release to specify release
Checking cache download in /var/cache/lxc/centos/x86_64/6/rootfs ... 
Downloading CentOS minimal ...

You have enabled checking of packages via GPG keys. This is a good thing. 
However, you do not have any GPG public keys installed. You need to download
the keys for packages you wish to install and install them.
You can do that by running the command:
    rpm --import public.gpg.key


Alternatively you can specify the url to the key you would like to use
for a repository in the 'gpgkey' option in a repository section and yum 
will install it for you.

For more information contact your distribution or package provider.

Problem repository: base
/usr/share/lxc/templates/lxc-centos: line 405: 24156 Segmentation fault      (core dumped) chroot $INSTALL_ROOT rpm --quiet -q yum 2> /dev/null
Reinstalling packages ...
mkdir: cannot create directory ‘/var/cache/lxc/centos/x86_64/6/partial/etc/yum.repos.disabled’: File exists
mv: cannot stat '/var/cache/lxc/centos/x86_64/6/partial/etc/yum.repos.d/*.repo': No such file or directory
mknod: /var/cache/lxc/centos/x86_64/6/partial//var/cache/lxc/centos/x86_64/6/partial/dev/null: File exists
mknod: /var/cache/lxc/centos/x86_64/6/partial//var/cache/lxc/centos/x86_64/6/partial/dev/urandom: File exists
/usr/share/lxc/templates/lxc-centos: line 405: 24168 Segmentation fault      (core dumped) chroot $INSTALL_ROOT $YUM0 install $PKG_LIST
Failed to download the rootfs, aborting.
Failed to download 'CentOS base'
failed to install CentOS
lxc-create: lxccontainer.c: create_run_template: 1427 container creation template for mycontainer failed
lxc-create: tools/lxc_create.c: main: 326 Error creating container mycontainer

This is due to vsyscall changes in recent kernels. To get this working, you need to add the vsyscall=emulate parameter to your kernel command line (to be precise, the command line of the host, since containers share the host's kernel). To do this, modify /etc/default/grub and run update-grub.
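
On Debian, that means something like the following (a sketch; the existing contents of your GRUB_CMDLINE_LINUX_DEFAULT line will differ):

# in /etc/default/grub, append vsyscall=emulate to the existing parameters:
GRUB_CMDLINE_LINUX_DEFAULT="quiet vsyscall=emulate"

# then regenerate the grub configuration and reboot the host:
sudo update-grub
sudo reboot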

Posted 2017-11-03

Update 2017-11-13: you now need to use ca-cert=/etc/ssl/certs/QuoVadis_Root_CA_2_G3.pem in the [802-1x] section below. I suppose they changed the certificate.

Here's a NetworkManager connection for eduroam. This can live under /etc/NetworkManager/system-connections. Fill in your MAC address, your username and password, plus a unique UUID; a deployment sketch follows the file.

[connection]
id=eduroam
uuid=3ef39a8f-3020-4cf3-8e07-382719a4e6f9
type=wifi
permissions=

[wifi]
mac-address=10:02:B5:ED:5D:EB
mac-address-blacklist=
mode=infrastructure
ssid=eduroam

[wifi-security]
auth-alg=open
key-mgmt=wpa-eap

[802-1x]
ca-cert=/etc/ssl/certs/AddTrust_External_Root.pem
eap=peap;
identity=db57@sussex.ac.uk
password=YOUR_PASSWORD_GOES_HERE
phase2-auth=mschapv2

[ipv4]
dns-search=
method=auto

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto
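
To deploy the file, something like this should work (a sketch; note that NetworkManager ignores keyfiles unless they are owned by root with mode 600):

uuidgen    # generate a fresh value for the uuid= field
sudo cp eduroam /etc/NetworkManager/system-connections/eduroam
sudo chmod 600 /etc/NetworkManager/system-connections/eduroam
sudo nmcli connection reload
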
Posted 2017-11-02

Gear Description & Review, Nov 2017

I have a lot of gear configured in a very specific setup that has been that way for going on 2 years now. We can call this "configuration 2". Before configuration 2, I used a Kaoss Pad KP3+ together with the MS-20 and the ESX1 sampler. I eventually found this limiting because of the sequencing capabilities of the sampler: you can arrange songs and record effect sequences, but triggering chords on the synth is very limited, and getting seamless looping of chords for sampled pads is nigh-on impossible using the ESX1.

The current setup is:

  • Output: Behringer Xenyx ZB319 mixer
  • Path 1: Juno 106 → Eventide Space → mixer ch1
  • Path 2: MS-20 mini → Tonebone Hot British tube distortion → Moog Clusterflux → mixer ch2
  • Path 3: Nord Lead → Mojo Hand Colossus fuzz → Strymon Timeline → mixer ch3
  • Path 4: Korg ESX1 with replacement JJ ECC803 vacuum tubes (IIRC)

All of this is driven by a Sequentix Cirklon. The Space & Timeline have MIDI inputs which can be driven from the Cirklon as well. The ESX1 has an audio-in port, which enables its use as an effects unit: it can receive a pre-mixed copy and apply effects using the Audio In part on the step sequencer. (Although this can easily create a feedback loop, leading to some fairly painful squealing.)

All gear is plugged into surge-protected adapters and uses shielded cables for connection. This is key; there was a lot of noise before I did this. It's worth noting that there's still a lot of noise with this setup.

  1. The MS-20 is noisy: not insanely so, but noticeably.
  2. The Juno chorus is noisy. The rest is fine.
  3. The Tonebone & Colossus are insanely noisy, but you'd expect that given that they are fuzz pedals.
  4. The ESX1 is noisy (although less noisy than before the tube replacement).

The expensive pedals (the Space, Timeline and Clusterflux) are very clean-sounding.

Path 1: 106 → Space

The Space is a bit of an enigma. It seems to be capable of a huge variety of sounds, but it's a pain in the arse to use. The BlackHole and Shimmer algorithms sound huge and are perfect for the type of music that I do. ModEchoVerb is also extremely useful. The trouble is that I tend to use the 106 for pad sounds, and in this case the 106's legendary chorus is somewhat disrupted by the Space. It's difficult to get sounds that are "in the middle": either the effects from the Space are barely noticeable (although I don't use headphones), or they completely swamp the character of the input (for instance, when using Shimmer and BlackHole, the 106's setting is nigh-on irrelevant).

I am considering moving the Space over to the MS-20.

Path 2: MS-20 → Hot British → Clusterflux

The original rationale for this setup was that I know from experience that the MS-20 excels at raw and cutting sounds; it's quite an aggressive-sounding synth. The Hot British combination is definitely great; to be honest, it would probably sound great through nearly any distortion, so I'm not totally sure that this is taking advantage of the high-end quality of the HB. And because the MS-20 is a monosynth, I tend to use it less -- given that the HB is basically an amp in a box, as I understand it, it would probably be more suited to a polysynth, where you can imagine dialing in giant stoner chords. Regardless, if you feed an arpeggio into the MS-20 and crank the resonance/cutoff, you're in acid heaven. I might replace it with some type of MIDI-controllable distortion. What I really want for this is a really quiet digital distortion (or a noise gate, I guess).

The Clusterflux -- I just don't get on with it, and I'm not sure why, because I love chorus and phasing. It would be very neat for some nice phased acid lead, but I don't find that it gives much use for what I do. It does give a fatter chorus sound that goes nicely with the MS-20's dual-VCO mode (one of the most notorious "basic settings" I've heard). The phasing sounds really great, but I hardly ever use it. It's just too extreme a sound.

Incidentally, I've found that the best route with the MS-20 is to take the time initially to find a nice sound and design the rest of the piece around it. It doesn't work very well the other way around: it's very difficult to find a matching sound for an existing composition. You don't even need to use it for melody; what can be nice is to just design little "hits" and articulations on the beat.

Path 3: Nord → Colossus → Timeline

This works the best out of the three paths. The Nord is designed for leads (hence the name) and is a treble-heavy synth. The Colossus transforms everything into trancy acid, and the Timeline -- well, I barely know how to program the Timeline; I just use presets and they sound gorgeous. Some of the Timeline's presets are a touch subtle; you need to have "space" in the sound to be able to detect them. Just the initial preset, Melt Away, will blow you away when you hear it: it makes everything sound gorgeous. All the other presets sound gorgeous too, and there's a looping feature which I never use, but it's nice to know it's there. Basically everything that comes out of this path sounds good. The Colossus is pretty tweakable as well -- perhaps a more restricted range of tones than the HB, but a much more practical range.

Future directions: I'd like to integrate the KP3 back into the setup. The ESX1 I'd like to replace with another, more basic sampler -- it's got too much functionality for what I use it for now. Search for "super basic sampler" for some suggestions. Reverb and delay on drums can sound nice, but I don't want to use the Eventide for those moments. I think the reverb could sound good on the MS-20, though the problem with the MS-20 is that the sound can be a little bit sparse. I also want to replace the MS-20 with a module. But the distorted MS-20 is a key sound, so maybe put a Nova Drive on there, and move the Hot British onto the 106? That could sharpen up the tones of the 106 and stop it sounding so 80s. Plus we can get a low-level output signal from the 106 to stop the HB blowing up.

The Clusterflux -- well, the 106 doesn't need it, and the MS-20 wouldn't need it after you added the Space. All in all, I may just sell it. Possibly if I saved some space I could add another synth and put it through that -- that would be a Waldorf FM synth -- but then we are starting to run out of connections for the Cirklon. I'd say that the Clusterflux is focused on a sort of guitar-specific sound, a kind of funky syncopated sound which we don't really use with the MS-20. It probably sounds better on something that's being played "live", where you're manually varying the note gate times to sync with the FX. I also want to replace the 106 with a 106 module and the MS-20 with an MS-20 module. I rarely play the MS-20 keyboard because the keyboard is too small; full-size keyboards are quite underrated. But it is kind of useful having two keyboards when you are working with someone else. Probably it's worth trying the Clusterflux on both the 106 and the Lead to see which sounds better.

Posted 2017-10-29

Org-Mode with Capture and Xmonad

This is a useful configuration for a distraction-free note-taking system. Often you want to quickly note something but don't want to break your flow too much.
The result will be a single key combination that pops up a frame, lets you type in a task, and files it in your default org-mode file.

Nearly all of the meat of this configuration comes from Phil Windley's blog post. I assume you already have experience with org-mode.

;; org-capture code for popping a frame, from Phil Windley
;; http://www.windley.com/archives/2010/12/capture_mode_and_emacs.shtml

;; We configure a very simple default template
(setq org-capture-templates
      '(("t" "Todo" entry (file "")  ; use the default notes files
         "* TODO %? %i %t")))

(defadvice org-capture-finalize 
    (after delete-capture-frame activate)  
  "Advise capture-finalize to close the frame"  
  (if (equal "capture" (frame-parameter nil 'name))  
      (delete-frame)))  

(defadvice org-capture-destroy 
    (after delete-capture-frame activate)  
  "Advise capture-destroy to close the frame"  
  (if (equal "capture" (frame-parameter nil 'name))  
      (delete-frame)))  

;; make the frame contain a single window. by default org-capture  
;; splits the window.  
(add-hook 'org-capture-mode-hook  
          'delete-other-windows)  

(defun make-capture-frame ()  
  "Create a new frame and run org-capture."  
  (interactive)  
  (make-frame '((name . "capture") 
                (width . 120) 
                (height . 15)))  
  (select-frame-by-name "capture") 
  (setq word-wrap 1)
  (setq truncate-lines nil)
  ;; Using the second argument to org-capture, we bypass interactive selection
  ;; and use the existing template defined above.
  (org-capture nil "t"))

Now we add the binding to your (hopefully existing) call to the xmonad additionalKeysP function, which uses emacs-style notation for the keybindings.

popCaptureFrameCommand = "emacsclient -n -e '(make-capture-frame)'"

myConfig = desktopConfig
    { 
       -- your existing config here
    }
    `additionalKeysP` [("M-o c", spawn popCaptureFrameCommand),
                       -- probably more existing bindings...
                      ]
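
Note that the emacsclient call assumes a running Emacs server. To try the whole thing out from a terminal (assuming your init file, containing the elisp above, is loaded by the daemon):

emacs --daemon
emacsclient -n -e '(make-capture-frame)'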

Now you can type M-o c, and you'll be popped into a capture buffer; C-c C-c will save and file the entry and close the frame. It'll appear as a top-level heading in the file. You can change the template definition if you are more OCD-minded, but I find that this simplistic configuration works and stays out of my way.

Posted 2017-10-29

This is now reaching the height of ridiculousness, but this soup was made with a base formed through the following process:

  1. Form an oil-based marinade with the Srichacha rub mentioned in my earlier post about the kaeng pa.
  2. Take the remnants of the marinade which weren't absorbed by the tofu.
  3. Make a chicken stock from the carcass of a whole chicken that was left over from making Sri Owen's ayam bakar, about which more later.
  4. Making the ayam bakar also yields scorched chicken skin, which melts into a mixture of rendered fat and unidentified black pieces.
  5. Combine the stock with the Srichacha marinade and the chicken skin.
  6. Raise to a rolling boil for 10 minutes for safety.
  7. Re-blend the entire mixture until smooth.
  8. Let the mixture cool; the fat will rise to the surface.
  9. Skim off all the fat; you should be able to just use a teaspoon. It will form a kind of foamy mass. You won't get rid of all of it, though.
  10. Pass the mixture through a fat separator. This will get rid of any clumps and hopefully remove the remaining fat.

First cook the veggies by sautéing them for around 5 minutes. It's good to use shallots, though you don't need any spices. Don't overcook the vegetables.

You'll now have a rather concentrated stock with a slightly bitter flavour. To turn it into something suitable for soup, simply dilute it with an equal amount of water (50% stock, 50% water). Then bring to the boil. Once boiling, add the noodles and cook them until al dente. Don't add salt.

Wait for the stock & noodle mix to cool, then add the veggies.

Postscript: it's actually better not to mix the noodles with the soup until you're ready to eat them, because they can absorb too much liquid and end up mushy. You can keep a separate container of the stock for boiling the noodles, or you can just cook them in water. Either way, keep them separate if you want to freeze the finished soup.

Posted 2017-10-17

In Emerick, Carper & Grand's 2012 O'Reilly book, Clojure Programming, they give an example of using Java interop to create JAX-RS services using a Grizzly server. These examples are now outdated and don't work with recent versions of Jersey.

Here's an updated version that works correctly, at least for this tiny example.

jaxrs_application.clj

(ns cemerick-cp.jaxrs-application
  (:gen-class :name cemerick_cp.MyApplication
              :extends javax.ws.rs.core.Application)
  (:import [java.util HashSet])
  (:require [cemerick-cp.jaxrs-annotations]))


(defn- -getClasses [this]
  (doto (HashSet.)
    (.add  cemerick_cp.jaxrs_annotations.GreetingResource)))

jaxrs_annotations.clj

(ns cemerick-cp.jaxrs-annotations
  (:import [javax.ws.rs Path PathParam Produces GET]))

(definterface Greeting
  (greet [^String visitor-name]))

(deftype ^{Path "/greet/{visitorname}"} GreetingResource []
  Greeting
  (^{GET true Produces ["text/plain"]} greet        ; annotated method
   [this ^{PathParam "visitorname"} visitor-name]   ; annotated method argument
   (format "Hello, %s!" visitor-name)))

jaxrs_server.clj

(ns cemerick-cp.jaxrs-server
  (:import [org.glassfish.jersey.grizzly2.servlet GrizzlyWebContainerFactory]))

(def myserver (atom nil))

(def properties
  {"javax.ws.rs.Application" "cemerick_cp.MyApplication"})

(defn start-server []
  (reset! myserver (GrizzlyWebContainerFactory/create "http://localhost:8080/"
                                                      properties)))

This uses the following Leiningen coordinates to run.

[org.glassfish.jersey.containers/jersey-container-grizzly2-servlet "2.26"]
[org.glassfish.jersey.inject/jersey-hk2 "2.26"]

You probably also need to AOT some of these namespaces; I used :aot :all for this example.
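
To try it out, start a REPL, start the server, and hit the resource; given the greet method above, you should see something like this:

lein repl
user=> (require 'cemerick-cp.jaxrs-server)
user=> (cemerick-cp.jaxrs-server/start-server)

$ curl http://localhost:8080/greet/world
Hello, world!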

Posted 2017-10-14

These are a few notes that I came across while trying to get GitLab CI working.

Fulfil the system requirements

There are some pretty insane system requirements for GitLab. You need at least 4GB of memory, which is not always so easy to come by in a VPS environment. Even when you fulfil the system requirements, in my experience GitLab will sometimes run out of memory and have to be "kicked". You could probably automate this with some kind of systemd configuration, but I haven't tried that yet.

Realize that things differ depending on your package

GitLab hosts Debian packages themselves that are more up to date, but perhaps less integrated with the rest of the system. For reasons, I was reluctant to use the packages from upstream. Instead, I used some backported versions for jessie that were created by Pirate Praveen. You don't need to worry about this any more, because the gitlab package has migrated to stretch, so you just need to choose: use the upstream packages, or use the official Debian stable packages. You won't have problems unless you run across features that you need from the newer versions.

Understand the GitLab CI environment

There are several things to realize about GitLab CI. The environment can differ a lot. The two primary environments are 'docker' and 'shell'. If you use docker, you build up all of your infrastructure prerequisites from a docker base image. Whereas if you use shell, your builds run under a regular shell environment, as a special (non-root) user.

Personally, I found that although docker was easier to get started with, I got benefits by moving to the shell executor, because having a preconfigured environment eliminated some run time from the integration test suite. These problems could also have been resolved by creating my own docker base image, but that seems to be putting the cart before the horse, in a way: I already have all my infrastructure under configuration management, so why would I invest in a new method of ensuring that the docker base image meets my needs when I can address it with existing tools? There's also the problem that docker can't use background processes, as it doesn't have a real PID 1.

Understand Gitlab CI files

They're in YAML format, which is well documented. There's an existing lint tool for .gitlab-ci.yml, which enforces semantic rules stricter than simple YAML validity. If you just want to validate the YAML itself, you can use this snippet:

validate_yaml() {
    ruby -e "require 'yaml';puts YAML.load_file(ARGV[0])" "$1"
}

Assuming you have Ruby, call it as validate_yaml .gitlab-ci.yml.

You can use "tags" to separate groups of runners. If you are trying to move from docker to shell executors or vice versa, you can set tags in a job to ensure that that job doesn't get executed on the wrong runner.
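
For example (a sketch; the job name and run_tests.sh script are hypothetical):

test:
  stage: test
  script:
    - sh run_tests.sh
  tags:
    - shell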

The key to understanding the execution cycle of GitLab CI jobs is that any job could essentially be executed on any runner. That means that you can't make assumptions about what files will be available at any time: if job j1 executes on host mercury and job j2 later executes on host venus, the output files that the build tool produced on mercury won't be available on venus.

There are two ways to get around this.

The first is to declare that the job produces build artifacts:

variables:
  OUTPUT_TAR_PATH: "mybuild.tar"

compile:
  stage: compile
  script:
    - sh do_build.sh $OUTPUT_TAR_PATH
  tags:
    - shell
  artifacts:
    paths:
      - $OUTPUT_TAR_PATH

The GitLab CI runtime will automatically make sure that $OUTPUT_TAR_PATH is copied between any runner hosts that are used to execute jobs in this build.
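
A later job can then consume the artifact without further ceremony, since GitLab fetches artifacts from earlier stages by default (a sketch; the deploy job and do_deploy.sh are hypothetical):

deploy:
  stage: deploy
  script:
    - sh do_deploy.sh $OUTPUT_TAR_PATH
  tags:
    - shell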

Another related mechanism is the cache:

cache:
  untracked: true
  key: "$CI_BUILD_REF_NAME"
  paths:
    - node_modules/

This is useful for JavaScript projects; otherwise you're going to end up downloading >1GB of npm modules at every phase of your project.

One last point: commands under a script list are parsed in a special way, and line breaks or special characters may not always work as you expect. So far it has mostly been trial and error for me. Don't assume that you need to quote until you try it and it fails: AFAIK, the runner does its own mangling of the shell lines.

Posted 2017-10-13

This is pretty thrown together; it's mostly adapted from the Rasa Malaysia recipe (despite being a Thai dish). I had Srichacha marinade left over, plus some yellow curry paste from an old housemate.

There's really nothing to it: stir-fry courgette and red pepper, add curry paste, add bamboo shoots and fish sauce, add coconut milk, then add previously deep-fried tofu. Now you've got something insanely yummy.

Posted 2017-10-13

This is Sri Owen's ayam bakar (grilled chicken), which is rather different from many of the other Javanese-style recipes that I have found online. This one requires roasting the whole chicken first, then grilling it later. It also includes the chicken breast, where most recipes include the leg only, and omits the kecap manis.

Start off by roasting the whole chicken. The inside and outside are vigorously rubbed with a lemon, butter and salt mix.

After this you need to joint the whole chicken, which is not shown as it's rather gory. (You can email me and I'll send you the X-rated version.)

Then create a spicy marinade, marinate the jointed chicken, and grill at the top of the oven until blackened. The most notable thing about the marinade is that it uses terasi, a shrimp paste, which you can get from Thai-focused oriental shops in the UK. This stuff is extremely pungent.

The jury's still out on the flavour of this one. Eat it with rice and cucumber slices, of course. I hope to update this post soon to give you a better idea, but there are a lot of different recipes for this widespread Indonesian dish. I think you're going to want some sambal anyway.

Posted 2017-10-12

The integration test vs unit test debate is a minefield. In particular, see J.B. Rainsberger's talk, "Integrated Tests Are A Scam". I don't want to dig into this too deeply in this post, except to say that if you are working on a greenfield project, you should definitely watch that talk first.

Now, for those who (for whatever reason) have to or want to do integration tests, there are several options. The fundamental problem is that databases are inherently stateful. Therefore, you need some way of getting back to a clean state at the beginning of each test.

We assume that the user is using some kind of database migration tool to manage database development; that is, they already have "up" and "down" migrations. In this case, it's reasonable to solve the problem by doing the following:

  • Before each test:
    1. Run the "down" migrations.
    2. Run the "up" migrations.

This is relatively trivial to implement as a fixture, assuming that you are using clojure.test.
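
Such a fixture is only a few lines (a sketch, where migrate-up! and migrate-down! are hypothetical wrappers around your migration tool's "up" and "down" operations):

(require '[clojure.test :refer [use-fixtures]])

(defn clean-db-fixture [f]
  (migrate-down!)  ; tear the schema down completely...
  (migrate-up!)    ; ...then rebuild it from the migrations
  (f))

(use-fixtures :each clean-db-fixture)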

The problem comes when this becomes intolerably slow, which can easily happen once you get above tens of migrations. Let me refer back to the Rainsberger talk now, which is a must-watch. If you want to stick with integration testing, you'll need some hack or workaround. There are several possibilities.

One possibility is using database transactional rollbacks. I found that this was impractical when interacting with real code, because of the possibility of transactions committing from within the system under test itself.

There are several guides to this method:

  • http://www.lispcast.com/clojure-database-test-faster
  • http://bobnadler.com/articles/2015/03/04/database-tests-with-rollbacks-in-clojure.html
  • https://www.reddit.com/r/Clojure/comments/376wjn/using_database_transactions_in_tests/

However, I found that this wouldn't work when the code under test was doing hairier database interactions.

Another method that you can use is the following:

  • Create a template database t that already has all "up" migrations applied.
  • Assume an ephemeral database named e.
  • For each test:
    1. Drop e.
    2. Make sure that t has all up migrations applied (should be a no-op).
    3. Create e from template database t.
    4. Run test against e.

The concept of a "template database" is PostgreSQL-specific, hence the post title. In my experience, this procedure introduces about a second of latency per test (a rough measurement on my laptop). That definitely precludes running the whole suite at once (and please refer back to the Rainsberger talk), but it can make isolated feature development much faster.
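
In raw SQL, the PostgreSQL-specific part is just this pair of statements (using the t and e names from the list above):

DROP DATABASE IF EXISTS e;
CREATE DATABASE e WITH TEMPLATE t;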

You can see the code below. Note that some-reporter is a Ragtime reporter, which has the signature [data-store op id]:

(defn some-reporter [data-store op id]
  ; the data-store argument is ignored
  (case op
    :up (debugp "migrating" id)
    :down (debugp "rolling back %s" id)
    (break)))  ; case takes a bare trailing default, not an :else clause

This is the functionality, with drop-and-create being the top-level function that the fixture should use.

; If the migration already happened, these tests will be fast in any case.
(defn migrate [db-spec]
  (tracep "using authentication details" db-spec)
  (repl/migrate {:datastore  (ragtime.jdbc/sql-database db-spec)
                 :migrations (ragtime.jdbc/load-resources "migrations")
                 :reporter some-reporter}))


;; returns a vector
(defn get-drop-commands [test-databases-conf]
  (let [leaf-db-name (get-in test-databases-conf [:leaf :database])]
    [(format "DROP DATABASE IF EXISTS %s" leaf-db-name)
     (format "CREATE DATABASE %s WITH TEMPLATE %s OWNER %s"
             leaf-db-name 
             (get-in test-databases-conf [:root :database])
             (get-in test-databases-conf [:leaf :username]))]))

;; also returns a vector
(defn get-reassign-commands [test-databases-conf]
  (tracep "about to reassign" true)
  [(format "REASSIGN OWNED BY %s TO %s"
           (get-in test-databases-conf [:root :username])
           (get-in test-databases-conf [:leaf :username]))])

(defn drop-and-create [test-databases-conf]
  (try 
    (jdbc/db-do-commands test-db/root-postgres-db false
                         (get-drop-commands test-databases-conf))
    (catch java.sql.SQLException e
      ;; SQLException is Iterable over its chained exceptions
      (doseq [e0 e]
        (println (.getMessage e0)))))

  (try
    (jdbc/db-do-commands (merge test-db/leaf-postgres-db
                                (select-keys test-db/root-postgres-db [:user :password]))
                         false
                         (get-reassign-commands test-databases-conf))

    (catch java.sql.SQLException e
      (doseq [e0 e]
        (errorf e0 "unable to reassign permissions to ephemeral test role")))))

The following function also exists to forcibly disconnect other connections, because CREATE DATABASE ... TEMPLATE has the unfortunate property that it will fail if any other connections to the template database are already open.

;; Requires account used by the template database to have the
;; superuser privilege in postgres.
(defn disconnect-other-connections []
  (try
    (jdbc/db-do-commands test-db/root-postgres-db false 
                         "SELECT pg_terminate_backend(pg_stat_activity.pid)
                      FROM pg_stat_activity
                      WHERE pg_stat_activity.datname = 'my_ephemeral_test'
                      AND pid <> pg_backend_pid()")
    (catch java.sql.SQLException e
      (doseq [e0 e]
        (println (.getMessage e0))))))

All that remains is to wrap this up in a clojure.test fixture. You can use the disconnect-other-connections functionality or not, as you choose. Bear in mind that this approach will introduce a large amount of complexity into your test environment: you now have to manage several databases, as well as role names, passwords and permissions for each one.
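
For reference, a minimal sketch of such a fixture (assuming the functions above are in scope and that test-databases-conf holds the :root/:leaf configuration they expect):

(require '[clojure.test :refer [use-fixtures]])

(defn template-db-fixture [f]
  (disconnect-other-connections)
  (drop-and-create test-databases-conf)
  (f))

(use-fixtures :each template-db-fixture)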

Posted 2017-10-12

This blog is powered by coffee and ikiwiki.