This is something of a pain in the arse; there are several main points to remember. These points apply to version 8.5.8+dfsg-5, from Ubuntu universe.

Install the packages from universe

Work around bug 1574349

You'll come across a bug: https://bugs.launchpad.net/ubuntu/+source/gitlab/+bug/1574349

The tell-tale sign of this bug is a message about a gem named devise-two-factor. As far as I can tell, there's no way to work around this and stay within the package system.

You have to work around this, but first:

Install bundler build dependencies

apt install cmake libmysqlclient-dev automake autoconf autogen libicu-dev pkg-config

Run bundler

Yes, you're going to have to install gems outside of the package system.

# cd /usr/share/gitlab
# bundler

And yes, this is a bad situation.

Unmask all gitlab services

[Masking] one or more units... [links] these unit files to /dev/null, making it impossible to start them.

For some reason the apt installation process installs all the gitlab services as masked. No idea why but you'll need to unmask them.

systemctl unmask gitlab-unicorn.service
systemctl unmask gitlab-workhorse.service
systemctl unmask gitlab-sidekiq.service
systemctl unmask gitlab-mailroom.service

Interactive authentication required

You're going to face this error, too. You need to create an override so that gitlab gets started with the correct user. You can do that with systemctl edit gitlab, which will create a local override.

Insert this in the text buffer:

[Service]
User=gitlab

Save and quit; now you need to reload and restart:

systemctl daemon-reload
systemctl start gitlab

Purging debconf answers

Since gitlab is an interactively configured package, stale answers can get stored in the debconf database, which will hinder you. To clear them out and reset them, do the following:

debconf-show gitlab
echo PURGE | debconf-communicate gitlab

This is the first time I've had to learn about this in a good 10 years of using and developing Debian-derived distributions. That's how successful an abstraction debconf is.

Posted 2018-03-09

[Originally written 2017-09-22. I don't have time to finish this post now, so I might as well just publish it while it's still not rotted.]

While coding large backend applications in Clojure I noticed a pattern that continued to pop up.

When learning FP, you initially learn the basics: your function should not rely on outside state. It should not mutate it, nor observe it, unless it's explicitly passed in as an argument to the function. This rule generally includes mutable resources in the same namespace, e.g. an atom, although constant values are still allowed. Any atom that you want to access must be passed in to the function.

Now, this makes total sense at first, and it allows us to easily implement the pattern described in Gary Bernhardt's talk "Boundaries", of "Functional Core, Imperative Shell" [FCIS]. This means that we do all I/O at the boundaries.

(defn sweep [db-spec]
  (let [all-users (get-all-users db-spec)
        expired-users (get-expired-users all-users)]
    (doseq [user expired-users]
      (send-billing-problem-email! user))))

This is a translation of Gary's example. A few notes on this implementation.

  1. sweep as a whole is considered part of the imperative shell.
  2. get-all-users and send-billing-problem-email! are what we'll loosely refer to as "boundary functions".
  3. get-expired-users is the "functional core".

The difference that Gary stresses is that the get-expired-users function contains all the decisions and no dependencies. That is, all the conditionals are in the get-expired-users function. That function purely operates on a data in, data out basis: it knows nothing about I/O.

This is a small-scale paradigm shift for most hackers, who are used to interspersing their conditionals with output; consider your typical older-school bespoke PHP system, which is bursting with DB queries that have their results spliced directly into pages. But this works very well for this simple example. It accomplishes the goal of making everything testable pretty well, and you'd be surprised how far this method can take you overall.

It formalizes like this: whenever you have a function that intersperses I/O with logic, separate the logic from the I/O, and apply them separately. This is usually harder for output than for input, but it's usually possible to construct some kind of data representation of the output operation that should in fact be effected -- what I'll call an "output command" -- and pipe that data to a "dumb" driver that just executes that command.
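
To make that concrete, here is a minimal sketch of the sweep example reworked in that style (expired? and send-email! are hypothetical helpers standing in for the real ones):

;; Functional core: decides who gets an email, expressed purely as data.
(defn billing-email-commands [users]
  (for [user users
        :when (expired? user)]
    {:op :send-email, :template :billing-problem, :to (:email user)}))

;; "Dumb" driver: executes whatever command it's handed, no decisions.
(defn execute-command! [{:keys [op to template]}]
  (case op
    :send-email (send-email! to template)))

;; Imperative shell: I/O at the edges, core in the middle.
(defn sweep [db-spec]
  (doseq [command (billing-email-commands (get-all-users db-spec))]
    (execute-command! command)))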

You can reconstruct most procedures in this way. The majority of problems, particularly in a backend REST system, break down to "do some input operation", "run some logic", "do some output operation". Here I'm referring to the database as the source and target of IO. This is the 3-tier architecture described by Fowler in PoEAA.

However, you probably noticed an inefficiency in the code above. We get all users and then decide within the language runtime whether a given user is expired or not. We've given up the ability of the database to answer this question for us. Now we're reading the entire set of users into memory, and mapping them to objects, before we make any decision about whether they're expired or not.

Realistically, this isn't likely to be a problem, depending on the number of users. Obviously Gmail is going to have a problem with this approach. But surely you're fine until perhaps 10,000 users, assuming that your mapping code is relatively efficient.

Anyway, this isn't the problem that led me to discover this. The problem appeared when I was implementing the basics of the REST API and, attempting to be as RESTfully correct as possible, I wanted to use linking. This seems easy when you only need to produce internal links, right? In JSON, we chose a certain representation (the Stormpath representation).

GET /users/1

{
   "name": "Dave",
   "pet": {"href": <url>},
   "age": 31
}

Now, assume we also have a separate resource for a user's pet. In REST, that's represented by the URL /pets/1 for a pet with identifier 1. We have the ability to indicate this pet through either relative or absolute URLs. Assume that our base URL for the API is https://cool-pet-tracker.solasistim.net/api.

  • The relative URL is /pets/1.
  • The absolute URL is https://cool-pet-tracker.solasistim.net/api/pets/1.

If you search around a bit, you'll find that, from what little consensus exists, REST URLs that get returned are always required to be absolute. This pretty much makes sense, given that a link represents a concrete resource that is available at a certain point in time, in the sense of "Cool URLs Don't Change".

Now the problem becomes: say we have a function that attempts to implement the /users/:n API. We'll write this specifically NOT in the FCIS style, so we'll entangle the I/O. (Syntax is specific to Rook.)

 (defn show [id ^:injection db-spec]
   (let [result (get-user db-spec {:user (parse-int-strict id)})]
     {:name (:name result)
      :pet nil
      :age (:age result)}))

You'll notice that I left out the formation of the link. Let's add the link.

 (defn show [id request ^:injection db-spec]
   (let [result (get-user db-spec {:user (parse-int-strict id)})]
     {:name (:name result)
      :pet (make-rest-link request "/pet" (:pet_id result))
      :age (:age result)}))

Now, we define make-rest-link naively as something like this.

 (defn make-rest-link [request endpoint id]
   (format "%s%s/%s" (get-in request [:headers "host"])
                     endpoint
                     id))

Yeah, there are some edges missed here, but that's the gist of it. The point is that we use whatever Host URI was requested to send back the linked result. [This has some issues with reverse proxy servers that sometimes call for a more complicated solution, but that's outside the scope of this document.]

Now, did you notice the issue? We had to add the request to the signature of the function. That's not a big deal in this case: the use of the request is a key part of the function's purpose, and it makes sense for every function to have knowledge of it. But just imagine that we were dealing with a deeply nested hierarchy.

(defn form-branch []
  {:something-else 44})

(defn form-tree []
  {:something-else 43
   :branch (form-branch)})

(defn form-hole [id]
   {:something 42
    :tree (form-tree)})

(defn show [id ^:injection db-spec]
  (form-hole id))

As you can see, this is a nested structure: a hole has a tree and that tree itself has a branch. That's fine so far, but we don't really want to go any deeper than 3 layers. Now, the branch gets a "limb" (this is a synonym for "bough", a large branch). But we only want to put a link to it.

(defn form-limb [request]
  (make-rest-link request "/limbs" 1))

(defn form-branch [request]
  {:something-else 44
   :limb (form-limb request)})

(defn form-tree [request]
  {:something-else 43
   :branch (form-branch request)})

(defn form-hole [id request]
   {:something 42
    :tree (form-tree request)})

(defn show [id request ^:injection db-spec]
  (form-hole id request))

Now we have a refactoring nightmare. All of the intermediate functions that mirror the structure of the entity had to be updated to know about the request, even though they themselves do not examine the request at all. This isn't bad just because of the manual work involved: it's bad because it clouds the intent of the function.

Now anyone worth their salt will be thinking of ways to improve this. We could cleverly invert control and represent links as functions.

(defn form-branch []
   {:something-else 44
    :limb #(make-rest-link % "/limbs" 1)})

Then, though, we need to run over the entire structure before coercing it to REST and specially treat any functions. This could be accomplished using clojure.walk and it would probably work OK.
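
Something like this is what I have in mind -- a minimal sketch, assuming every function value in the response tree is a link-producing function of the request:

(require '[clojure.walk :as walk])

;; Walk the response tree and call any function value with the request,
;; replacing it with the link it produces.
(defn realize-links [tree request]
  (walk/postwalk (fn [node] (if (fn? node) (node request) node)) tree))

;; e.g. (realize-links (form-hole 1) request)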

What's actually being required here? What's happened is that a function deep in the call stack needs context that's only available at the outside of the stack, but that information is really only peripheral to its purpose. As you can see, we were able to form an adequate representation of the link as a function, which by no means obscures its purpose from the reader. If anything, the purpose is clearer.

This problem can also pop up in other circumstances that seem less egregious. In general, any circumstance where you need to use I/O for a small part of the result at a deep level in the stack will result in a refactoring cascade as all intervening functions end up with added parameters. There are several ways to ameliorate this.

1. The "class" method

This method bundles up the context with the functionality as a record. The context then becomes accessible to any method of that record.

(defprotocol HoleShower
  (show [this id] "Create a JSON-able representation of the given hole.")
  (form-tree [this id]))

(defrecord SQLHoleShower [request db-spec]
  HoleShower
  (show [this id]
    {:something 42
     :tree (form-tree this id)})
  (form-tree [this id]
    {:something-else 44
     :branch (make-rest-link request "/branches" 1)}))

As you can see, we don't need to explicitly pass request because every instance of an SQLHoleShower automatically has access to the request that was used to construct it. However, it has the very large downside that these functions then become untestable outside of the context of an SQLHoleShower. They're defined, but not that useful.
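
For example, assuming form-tree is declared in the protocol as above, even poking at it in a test means conjuring up a whole record with dummy context (a hypothetical test sketch):

;; A plain function could have been called with data directly.
(def shower (->SQLHoleShower {:headers {"host" "example.org"}} nil))
(form-tree shower 1)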

2. The method of maker

This is a library by Tamas Jung that implements a kind of implicit dependency resolution algorithm. Presumably it's a topo-sort equivalent to the system logic in Component.

(ns clojure-playground.maker-demo
  (:require [maker.core :as maker]))

(def stop-fns (atom (list)))

(def stop-fn
  (partial swap! stop-fns conj))

(maker/defgoal config []
  (stop-fn #(println "stop the config"))
  "the config")

;; has the more basic 'config' as a dependency
;; You can see that 'defgoal' actually transparently manufactures the dependencies.
;; After calling (make db-conn), (count @stop-fns) = 2:
;; that means that both db-conn AND its dependency config were constructed.

(maker/defgoal db-conn [config]
  (stop-fn #(println "stop the db-conn"))
  (str "the db-conn"))

;; This will fail at runtime with 'Unknown goal', until we also defgoal `foo`
(maker/defgoal my-other-goal [foo]
  (str "somthing else"))

The macro defgoal defines a kind of second class 'goal' which is only known about by the maker machinery. When a single item anywhere in the graph is "made" using the make function, the library knows how to resolve all the intermediaries. It's kind of isomorphic to the approach taken by Claro, although it relies on more magical macrology.
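
For concreteness, usage looks roughly like this (return values inferred from the comments above, not verified output):

(maker/make db-conn)        ;; => "the db-conn"; config is built as a dependency
(count @stop-fns)           ;; => 2

(maker/make my-other-goal)  ;; fails with 'Unknown goal' until foo is defined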

https://www.niwi.nz/2016/03/05/fetching-and-aggregating-remote-data-with-urania/
https://github.com/kachayev/muse
https://github.com/facebook/Haxl
https://www.youtube.com/watch?v=VVpmMfT8aYw

See this: Retaking Rules for developers: https://www.youtube.com/watch?v=Z6oVuYmRgkk&feature=youtu.be&t=9m54s

And of course, the "Out of the Tar Pit" paper.

Posted 2018-02-27

The most interesting thing about this piece is the butter/saffron/milk mixture, pictured below. This turns out delicately spiced and exotic, with a kind of floral note from the cardamom. I don't know how you're supposed to eat it: for me, I may have failed immediately by not using sufficient rice.

I don't really buy the pastry rim, as pictured here. This is supposed to create a tighter seal on the dish while it's in the oven. But it seems like a bit of a waste; the visual presentation is wonderful, though.

I found it interesting to read, in the history of the biryani, that beef biryani is a favourite in Kerala. This would be a nice next try.

I deviated from the recipe by not including cauliflower; this was unintentional. I'd say the overall result is dominated by the chana dal. It has that kind of 'grainy' taste associated with a dal.

In a way, I can't agree that this is 'perfect': it's too subtle for me. The flavours don't quite punch through enough. I think deep-fried onions, another Cloake addition that I couldn't include, would improve it. Ironically this is kind of the opposite of Cloake's chilli, which I made previously, and which was if anything too pungent.

Actually I'd say the flavour issue with this is the sour balance. It's just a touch too sour in a way that's not mitigated by the other flavours. I suppose you have to remember that yoghurt is sour. I think I may have overdone the amount of yoghurt in this recipe. It was supposed to be 200ml of yoghurt for ~700g of main ingredient, but I probably used about 500ml for 500g instead, and then added lime juice on top of that. Cloake actually anticipates that the yoghurt will be insufficient and recommends diluting it. So the lesson here is to aim for roughly that 1:4 yoghurt-to-main-ingredient ratio.

This means that you'd normally want to buy smaller portions of yoghurt, about 250ml containers, and thin them with water or milk when you want to marinate. And perhaps leave the addition of other souring agents until later, when you've already used yoghurt in a curry.

Posted 2018-02-25

Deploying desktop applications on a Mac, for us Linux guys, can be strange. I didn't even know what the final deliverable file was.

Well, that file is the DMG, or Apple Disk Image.

To create the DMG, you first need to create an intermediate result, the App Bundle.

I assume you're using SCons as your build system. If you're not, which admittedly is quite likely, then go and read another post.

To create the App Bundle you can use an SCons tool from a long time ago which doesn't seem to have a real home at the moment. It'd be a good project to try to rehabilitate it.

In the meantime, I've created a Gist that contains the code for that. Download it and put it in the same directory as your SConstruct.

To use it, you have to bear in mind that it's going to overwrite your environment quite heavily. So I suggest using a totally new environment for it.

Your final SConstruct is going to look something like this:

# SConstruct

import os
from osxbundle import TOOL_BUNDLE

def configure_qt():
    qt5_dir = os.environ.get('QT5_DIR', "/usr")

    env = Environment(
        tools=['default', 'qt5'],
        QT5DIR=qt5_dir
    )
    env['QT5_DEBUG'] = 1
    maybe_pkg_config_path = os.environ.get('PKG_CONFIG_PATH')
    if maybe_pkg_config_path:
        env['ENV']['PKG_CONFIG_PATH'] = maybe_pkg_config_path

    env.Append(CCFLAGS=['-fPIC', '-std=c++11'])
    env.EnableQt5Modules(['QtCore', 'QtWidgets', 'QtNetwork'])

    return env

# A Qt5 env is used to build the program...
env = configure_qt()
env.Program('application', source=['application.cc'])

# ... but a different env is needed in order to bundle it.
bundle_env = Environment()
TOOL_BUNDLE(bundle_env)

bundledir = "the_bundle.app"
app = "application"   # The output object?
key = "foobar"
info_plist = "info_plist.xml"
typecode = 'APPL'

bundle_env.MakeBundle(bundledir, app, key, info_plist, typecode=typecode)

As an explanation of these arguments: bundledir is the output directory, which must always end in .app. app is the name of your executable program (the result of compiling the C++ main function). key is unclear; some other context suggests that it's used for a Java-style reverse-domain organization identifier, such as net.solasistim.myapplication.

You can also provide icon_file (a path) and resources (a list) which are then folded into the /Contents/Resources path inside the .app.

Once you've got your .app, you need to create a DMG file, like this:

$ macdeployqt the_bundle.app -dmg

You should now find the_bundle.dmg floating in your current directory. Nice.

Posted 2018-02-19

How simple can a spinner be, and still be devoid of hacks? Let's see:

Markup in a Vue component, using v-if to hide and show:

<svg v-if="inProgressCount > 0" height="3em" width="3em" class="spinner">
  <circle cx="50%"
          cy="50%"
          r="1em"
          stroke="black"
          stroke-width="0.1em"
          fill="#001f3f" />
</svg>

The CSS to create the animation, and "pin" it so that it's always visible:

svg.spinner {
    position: fixed;
    left: 0px;
    top: 0px;
}

svg.spinner circle {
    animation: pulse 1s infinite;
}

@keyframes pulse {
    0% {
        fill: #001f3f;
    }

    50% {
        fill: #ff4136;
    }

    100% {
        fill: #001f3f;
    }
}

The only thing that I don't understand here is why it's necessary to list duplicate fill values for 0% and 100%. That's needed to create a proper loop. Answers on a postcard.

Posted 2018-02-05

To record my FFXII builds pre Act 8.

Vaan - Time Battlemage / Monk

Penetrator Crossbow, Lead Shot, Giant's Helmet, Carabineer Mail, Hermes Sandals

Balthier - White Mage / Machinist

Spica, Celebrant's Miter, Cleric's Robes, Sage's Ring

Fran - Archer / Uhlan

Yoichi Bow, Parallel Arrows, Giant's Helmet, Carabineer Mail, Sash

Basch - Knight / Foebreaker

Save the Queen, Dragon Helm, Dragon Mail, Power Armlet

Ashe - Red Battlemage / Bushi

Ame-no-Murakumo, Celebrant's Miter, Cleric's Robes, Nishijin Belt

Penelo - Black Mage / Shikari

Platinum Dagger, Aegis Shield, Celebrant's Miter, Cleric's Robes, Sash

Posted 2018-02-02

Sometimes you may need to extract content from a Word document. You will need to be aware of its structure. Extremely simplified, a Word document is organized as follows:

  1. At the top level is a list of "parts".
  2. One part is the "main document part", m.
  3. The part m contains some w:p elements, represented in Docx4j as org.docx4j.wml.P objects. Semantically this represents a paragraph.
  4. Each paragraph consists of "runs" of text. These are w:r elements. I think that the purpose of these is to allow groups within paragraphs to have individual styling, roughly like span in HTML.
  5. Each run contains w:t elements, or org.docx4j.wml.Text. This contains the meat of the text.

Here's how you define a traversal against a Docx file:

import java.util.ArrayList;
import java.util.List;

import org.docx4j.TraversalUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TraversalCallback extends TraversalUtil.CallbackImpl {
    private static final Logger log = LoggerFactory.getLogger(TraversalCallback.class);

    // Accumulate the text we find; this stands in for the output-document
    // building that the larger original program did at this point.
    private final List<String> collectedText = new ArrayList<>();

    @Override
    public List<Object> apply(Object o) {
        if (o instanceof org.docx4j.wml.Text) {
            org.docx4j.wml.Text textNode = (org.docx4j.wml.Text) o;
            String textContent = textNode.getValue();

            log.debug("Found a string: " + textContent);
            collectedText.add(textContent);
        } else if (o instanceof org.docx4j.wml.Drawing) {
            log.warn("FOUND A DRAWING");
        }
        // The superclass ignores this return value.
        return null;
    }

    @Override
    public boolean shouldTraverse(Object o) {
        return true;
    }
}

Note that we inherit from TraversalUtil.CallbackImpl. This allows us to avoid implementing the walkJAXBElements() method ourselves -- although you still might need to, if your algorithm can't be defined within the scope of the apply method. It seems that the return value of apply is actually ignored by the superclass implementation of walkJAXBElements, so you can just return null.

To bootstrap it from a file, you just do the following:

URL theURL = Resources.getResource("classified/lusty.docx");

WordprocessingMLPackage opcPackage = WordprocessingMLPackage.load(theURL.openStream());
MainDocumentPart mainDocumentPart = opcPackage.getMainDocumentPart();

TraversalCallback callback = new TraversalCallback();
callback.walkJAXBElements(mainDocumentPart);

By modifying the apply method, you can special-case each type of possible element from Docx4j: paragraphs, rows, etc.

Posted 2018-01-04

Sometimes you may have a reason to deploy certain code. This normally involves something like the following: you copy some files to a certain server somewhere, and perhaps restart a server. This is all well-known territory, but due to the vagaries of SSH, automating it can often be a pain. There are existing tools for this, Fabric and Capistrano, which are fairly well known but -- it seems to me -- underused. Anyway, they're certainly far from standard, and particularly with regard to Fabric (which I like and use on a near-daily basis, I should point out) they can be tricky to get installed and configured in their own right.

I devised this simple, perhaps even simplistic, plan to handle deployments.

  1. Create a UNIX user that will be used for deployments. For this article we'll refer to this user as dply, although the name is immaterial. This user must exist on the hosts that are the target of the deployments.

  2. Distribute SSH keys to the hosts that need to initiate deployments. This will often be a worker node in a CI system but people may also manually initiate deployments.

  3. Each deployment target receives an appropriate sudoers file that allows the dply user to execute one command (and one only): the deployment processor, with the NOPASSWD specifier. (See the sketch after this list.)

  4. The deployment user dply can write to a mode 700 directory that is used to receive deployment artifacts. Artifacts are written by a simple scp process to this directory, /home/dply or whatever you like.

  5. The deployment processor script, which is distributed identically to all the nodes and lives in /usr/local/bin, knows about all existing deployments, which are hardcoded with plaintext aliases like main-site, backend, etc, and knows to look for the artifacts in /home/dply or whatever.

  6. Nodes simply scp up the deployment archive, ssh to the relevant server and invoke sudo /usr/local/bin/deployment-processor backend. The processor then looks for the files in a hardcoded location and does whatever's needed to actually deploy them. Concretely in this case every handler is just a function in Perl which can then do many tasks. The key is that it doesn't get any input from the user, thus mitigating some security issues. It's easy to do the various things you may need to do, untar an archive, perhaps chmod some files, restart a service, etc.
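
For reference, the sudoers entry mentioned in step 3 looks roughly like this (a hypothetical sketch; in practice it would be dropped into /etc/sudoers.d/):

# Allow the deployment user to run only the deployment processor, passwordless.
dply ALL=(root) NOPASSWD: /usr/local/bin/deployment-processor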

It's secure in some senses, but not in others. There's no access isolation between nodes, so any node can deploy any code. Once a CI worker node is assumed penetrated, a malicious user can indeed wipe out a production site, but they can't do damage to the existing servers (for whatever that's worth...).

It should be noted that no consensus exists around solutions in this space. This approach has some virtues over Fabric, and probably Capistrano too, by being markedly less complicated: it relies only on the presence of ssh and scp on the client boxes, which are near-universal. If you wanted to formalize it you could develop cross-platform deployment client binaries in Go or something similar, but I haven't found this necessary. Anecdotally I've had many unpleasant problems with Fabric, although it remains a very useful piece of software.

I don't like to deploy with Git because I don't see Git as something that's related to deployments; Git is about source code history, which is distinct from what I'd consider a "release artifact". FWIW, release artifacts are also built using a separate processing step, which (for me) is often just a "narrowing" of the file tree according to a set of rsync patterns and tarring up this narrowed tree.

Heroku also have an approach that involves creating "slugs" and "releases" where each release corresponds to a deployment, and "to create a release" is synonymous with "to deploy". This is much more featureful than the above approach but it's over-engineered for this case.

There's also WAR deployment which is interesting but specific to a rather small area of Java development. If you're a Java-only shop, this can probably be nice.

Something that was also on my radar in my department is the Perl-based Rex, which I never got the chance to investigate.

Posted 2017-12-09

I'm doing a project in C++ at present and experiencing mixed feelings about it. One of the worst things about using C++ is the necessity to come into contact with CMake, which is the bane of my existence. When I worked on ProjectM, I used to wrestle with this system and ended up hating it. Anyway, now that I'm starting a fresh C++ project, I've gone with the less popular (but far more ergonomic) SCons.

Anyway, like many C++ projects, GoogleMock has a bias towards CMake. So if you want to build with SCons instead, here is a tiny SConstruct that you can use.

googletest_framework_root = "/home/amoe/vcs/googletest"

googletest_include_paths = [
    googletest_framework_root + "/googletest",
    googletest_framework_root + "/googletest/include",
    googletest_framework_root + "/googlemock",
    googletest_framework_root + "/googlemock/include"
]

gtest_all_path = googletest_framework_root + "/googletest/src/gtest-all.cc"
gmock_all_path = googletest_framework_root + "/googlemock/src/gmock-all.cc"

env = Environment(CPPPATH=googletest_include_paths)

env.Program(
    target='main',
    source=["main.cc", gtest_all_path, gmock_all_path],
    LIBS=['pthread']
)

Where your main.cc is a regular test driver, like so:

#include <gmock/gmock.h>

int main(int argc, char **argv) {
    testing::InitGoogleMock(&argc, argv);
    return RUN_ALL_TESTS();
}

You'll need to find some way of actually getting the source of the test framework into your build tree -- Google Test doesn't build as a system library. That could be git submodules, a download script, or just dumping the repository inside yours.

Posted 2017-11-30

This is based on Camilla Panjabi's recipe. The only variations were not using any cloves (which she mentions in the recipe but not in the ingredients list -- a possible erratum?) and using pre-cooked lamb. I got the lamb from the butcher, a large leg joint on the bone. I stewed the entire joint for an hour and a half in a large pot, with some curry powder & balti masala for flavouring, which I presume didn't form a large part of the flavour of this dish itself, but I thought that since I plan to reuse the stock I may as well infuse something into it. The meat slid off the bone rather easily after that, with small pinker patches inside after cutting.

This curry has the singular innovation of creating the cumin-flavoured potatoes first: you saute a big handful of cumin seeds and shallow-fry whole small potatoes to give them a crispy skin on the outside. It looks very attractive when finished. The cumin clings to the outside of the potato. Then later you submerge these in curry liquid and boil them for about 10 minutes. This gives whole potatoes that are still firm to the palate. I also used large chunky sea salt on these potatoes.

The rest of it is rather standard. I didn't use a curry base for this one because I had run out, so the onions are reduced from scratch. The first thing I noticed is what a long time it takes to get the onions to the correct colour. It took nearly a whole hour. That's the real benefit of using the base, IMO -- the time differential; there's probably not any large flavour benefit from a curry base, perhaps there's even a flavour deficit.

This one strangely has garam masala formed into a paste and added relatively early in cooking, which is somewhat of a departure.

I look forward to using this meat & potatoes pattern in the future; potatoes are great cupboard stock because they're cheap and last for ages. When you can boost the bulk and variation of a curry by this addition, everybody wins.

Posted 2017-11-19

This blog is powered by coffee and ikiwiki.