Prepare for a start-points re-architecture

A change to the cyber-dojo start-points architecture is coming.
The plan is to phase out the old architecture completely in a few months' time.
Until then, both architectures will work side by side.
Here's how to try out the new architecture.

Recap: Retiring Architecture

  • cyber-dojo stores its start-points as docker volumes.
  • You create three start-point volumes by naming the sources using the --dir, --git, --list options.
    For example:
    $ ./cyber-dojo start-point create tutorials --dir=file:///Users/fred/custom-tutorials
    $ ./cyber-dojo start-point create tdd --git=https://github.com/org/repo/exercises-tdd.git
    $ ./cyber-dojo start-point create ruby --list=https://raw.githubusercontent.com/org/repo/master/langs-ruby-urls
  • You bring up your cyber-dojo server by naming (up to) three start-point volumes, one for each start-point type.
    You can omit any of the three -- settings and it will use its default.
    For example:
    $ ./cyber-dojo up \
        --custom=tutorials \
        --exercises=tdd \
        --languages=ruby

Overview: Incoming Architecture

  • cyber-dojo will store its start-points as docker images.
  • The --dir, --git, and --list options will not be supported.
  • You will create one start-points image by naming all sources as git-repo-urls in a new bash script.
    Again, you can omit any of the three -- settings and it will use its default.
    For example:
    $ ./cyber_dojo_start_points_create.sh \
        acme/my-start-points \
        --custom \
          file:///Users/fred/custom-tutorials \
        --exercises \
          https://github.com/org/repo/exercises-tdd.git \
        --languages \
          $(curl --silent https://raw.githubusercontent.com/org/repo/master/langs-ruby-urls)
  • You will bring up your cyber-dojo server by also naming the one start-points docker image.
    Leave the other three -- settings (all of which are optional) the same as in the retiring architecture.
    For example:
    $ ./cyber-dojo up \
        --starter=acme/my-start-points \
        --custom=tutorials \
        --exercises=tdd \
        --languages=ruby

Do an update

On your server, do an update:
$ ./cyber-dojo update

Install the new bash script

$ curl -O https://raw.githubusercontent.com/cyber-dojo/start-points-base/master/cyber_dojo_start_points_create.sh
$ chmod 700 cyber_dojo_start_points_create.sh

To get detailed help:
$ ./cyber_dojo_start_points_create.sh --help

Use:
  $ ./cyber_dojo_start_points_create.sh \
      <image-name> \
      [--custom <git-repo-url>...]... \
      [--exercises <git-repo-url>...]... \
      [--languages <git-repo-url>...]...
...

Create a manifest.json file for each exercise

In the retiring architecture, each --exercises start-point is simply a file called instructions. The name associated with each instructions file (when you set up a practice session) is the name of the directory where it lives (with underscores replaced by spaces).
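For example, with a hypothetical source directory laid out as:
    /Users/fred/tdd-exercises/Fizz_Buzz/instructions
the exercise appears on the set-up page as "Fizz Buzz" (the Fizz_Buzz directory name with its underscore replaced by a space).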
In the incoming architecture, each --exercises start-point is specified by a manifest.json file. Its format is a subset of the --custom and --languages manifest.json format and has only two entries:
  • You must specify a display_name
  • You must specify the visible_filenames
  • visible_filenames cannot contain a file called cyber-dojo.sh
For example:
{ "display_name": "Fizz Buzz", "visible_filenames": [ "instructions" ] }

--dir=DIR details

Suppose you currently create an (exercises) start-point docker-volume named tdd with:
$ ./cyber-dojo start-point create \
    tdd \
    --dir=file:///Users/fred/tdd-exercises
and you bring up your cyber-dojo server with:
$ ./cyber-dojo up \
    ... \
    --exercises=tdd \
    ...

--dir=DIR will not be supported.
To use the named DIR as a git-repo-url, DIR must contain a git repo.
$ cd /Users/fred/tdd-exercises
$ git init
$ git add .
$ git config --global user.email "EMAIL"
$ git config --global user.name "NAME"
$ git commit -m "initial commit"
Create a new start-point docker-image named acme/my-start-points containing these (--exercises) start-points with:
$ ./cyber_dojo_start_points_create.sh \
    acme/my-start-points \
    --custom \
      ... \
    --exercises \
      file:///Users/fred/tdd-exercises \
    --languages \
      ...
Bring up your cyber-dojo server by also naming the start-point image:
$ ./cyber-dojo up \
    --starter=acme/my-start-points \
    --custom=... \
    --exercises=tdd \
    --languages=...

--git=URL details

Suppose you currently create a (custom) start-point docker-volume named my-ruby-tutorials with:
$ ./cyber-dojo start-point create \
    my-ruby-tutorials \
    --git=https://github.com/org/repo/ruby-tutorials.git
and you bring up your cyber-dojo server with:
$ ./cyber-dojo up \
    ... \
    --custom=my-ruby-tutorials \
    ...

--git=URL will not be supported.
Simply use the URL as a git-repo-url.
Create a new start-point docker-image named acme/my-start-points containing these (--custom) start-points with:
$ ./cyber_dojo_start_points_create.sh \
    acme/my-start-points \
    --custom \
      https://github.com/org/repo/ruby-tutorials.git \
    --exercises \
      ... \
    --languages \
      ...
Bring up your cyber-dojo server by also naming the start-point image:
$ ./cyber-dojo up \
    --starter=acme/my-start-points \
    --custom=my-ruby-tutorials \
    --exercises=... \
    --languages=...


--list=LIST_URL details

Suppose you currently create a (languages) start-point docker-volume named common-langs with:
$ ./cyber-dojo start-point create \
    common-langs \
    --list=https://raw.githubusercontent.com/org/repo/master/common-langs-urls
where:
$ curl --silent https://raw.githubusercontent.com/org/repo/master/common-langs-urls
https://github.com/cyber-dojo-languages/csharp-nunit.git
https://github.com/cyber-dojo-languages/java-junit.git
https://github.com/cyber-dojo-languages/javascript-cucumber.git
https://github.com/cyber-dojo-languages/python-pytest.git
https://github.com/cyber-dojo-languages/ruby-minitest.git
and you bring up your cyber-dojo server with:
$ ./cyber-dojo up \
    ... \
    --languages=common-langs \
    ...

--list=LIST_URL will not be supported.
Simply use curl to get the URLs inside LIST_URL.
Create a new start-point docker-image named acme/my-start-points containing these (--languages) start-points with:
$ ./cyber_dojo_start_points_create.sh \
    acme/my-start-points \
    --custom \
      ... \
    --exercises \
      ... \
    --languages \
      $(curl --silent https://raw.githubusercontent.com/org/repo/master/common-langs-urls)
which expands to:
$ ./cyber_dojo_start_points_create.sh \
    acme/my-start-points \
    --custom \
      ... \
    --exercises \
      ... \
    --languages \
      https://github.com/cyber-dojo-languages/csharp-nunit.git \
      https://github.com/cyber-dojo-languages/java-junit.git \
      https://github.com/cyber-dojo-languages/javascript-cucumber.git \
      https://github.com/cyber-dojo-languages/python-pytest.git \
      https://github.com/cyber-dojo-languages/ruby-minitest.git
Bring up your cyber-dojo server by also naming the new start-point image:
$ ./cyber-dojo up \
    --starter=acme/my-start-points \
    --custom=... \
    --exercises=... \
    --languages=common-langs

Summary

  • A change to the cyber-dojo start-points architecture is coming.
  • To prepare for this change you can, for a while, use either architecture.
  • Do a [./cyber-dojo update]
  • Install the new bash script, cyber_dojo_start_points_create.sh
  • Create a manifest.json file for each exercise.
  • Note down all the --dir, --git, and --list values you use when creating your three start-point docker volumes.
  • Create your new start-point docker image using only git-repo URLs:
    • --dir=DIR will not be supported. Create a git repository in DIR. The git-repo-url will be file://DIR
    • --git=URL will not be supported. The git-repo-url will be URL.
    • --list=LIST_URL will not be supported. Use curl to get the git-repo-urls inside LIST_URL.
  • Bring up your cyber-dojo server by also naming the new start-point image.


How to access old practice sessions

There's been a major change in the way cyber-dojo stores its practice sessions.
It used to store them in a docker data-container.
It now saves them directly to a volume-mounted host directory.
To access your practice sessions you need to do a one-time port...

SSH into your server and curl the porting script:
curl -O https://raw.githubusercontent.com/cyber-dojo/porter/master/port_cyber_dojo_storer_to_saver.sh
chmod 700 port_cyber_dojo_storer_to_saver.sh

Pull the latest docker images for the required services:
docker pull cyberdojo/storer
docker pull cyberdojo/saver
docker pull cyberdojo/porter
docker pull cyberdojo/mapper

Bring down your cyber-dojo server:
./cyber-dojo down

Run the script, read what it says, and follow its instructions carefully.
You will be asked to run a couple of one-time-only mkdir and chown commands.
You do not need to create any new users.
Please be patient: the script takes several seconds to initialize.
./port_cyber_dojo_storer_to_saver.sh
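The exact mkdir and chown commands are printed by the script and depend on your setup; purely as an illustration (using the directory names and uid/gid values described in the directory details below), they will look something like:
sudo mkdir /cyber-dojo
sudo chown 19663:65533 /cyber-dojo
sudo mkdir /porter
sudo chown 19664:65533 /porter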

Once your final [--all] command has completed, update your server:
./cyber-dojo update

Finally, bring your server back up:
./cyber-dojo up ...


directory details...


After this update all practice-sessions will be available directly on the host server under the /cyber-dojo dir:
  • /cyber-dojo/groups/ holds the group practice sessions by ID.
    For example, a group practice session with an ID of 5yv7JT will live at /cyber-dojo/groups/5y/v7/JT/
  • /cyber-dojo/katas/ holds the individual practice sessions by ID.
    For example, an individual practice session with an ID of 3e9H2W will live at /cyber-dojo/katas/3e/9H/2W/
  • The /cyber-dojo dir is volume-mounted into the saver service (uid=19663, gid=65533).
Porting also creates a /porter dir on the host server:
  • For example, if an old session with an ID of 733E9E16FC (10 characters long) is ported to a new ID of 5yv7JT (6 characters long) then /porter/mapped-ids/73/3E9E16FC will be a file containing the 6 characters 5yv7JT.
  • This mapping is used to provide access to old practice sessions using both their old and their new IDs.
  • Access to practice sessions via their old (10 character) IDs is deprecated.
  • The /porter dir is volume-mounted into the mapper service (uid=19664, gid=65533).
  • The /porter/raised-ids/ dir contains information on practice sessions that raised an exception when their port was attempted.
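For example, assuming the hypothetical IDs above, you can inspect a mapping and the ported session directly on the host:
cat /porter/mapped-ids/73/3E9E16FC    # prints 5yv7JT
ls /cyber-dojo/groups/5y/v7/JT/       # where that (group) session now lives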


running your own cyber-dojo server

set up your server and install the cyber-dojo script

  • On Linux
  • On a Mac (use Docker-Toolbox to create a Linux+docker Virtual Machine and open a Docker-Quickstart-Terminal)
  • On Windows (use Docker-Toolbox to create a Linux+docker Virtual Machine and open a Docker-Quickstart-Terminal)



bring up your cyber-dojo server

If you want to install all the language+testFrameworks (takes ~15 mins the first time),
in a terminal, type:
./cyber-dojo up

If you want to install only the more common language+testFrameworks (C, C++, C#, Java, Javascript, Python),
in a terminal, type:
URL=https://raw.githubusercontent.com/cyber-dojo/start-points-languages/master/languages_list_common
./cyber-dojo start-point create common --list=${URL}
./cyber-dojo up --languages=common

Put your cyber-dojo server's IP address into your browser. That's it.
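How you find that IP address depends on your setup. For example, if you used Docker-Toolbox (Mac/Windows), the virtual machine's IP address is typically reported by:
docker-machine ip
On Linux, use the host machine's own IP address.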


bring down your cyber-dojo server

In a terminal, type:
./cyber-dojo down


ACCU C++ Countdown Pub Quiz

The ACCU conference is one of the highlights of my year. I ran a brand new session, a C++ Pub Quiz with an emphasis on fun and interaction, based loosely on the popular UK TV game show Countdown.

In the TV version, contestants play individually and have 30 seconds to find the longest word using only a small set of letters. In this version, contestants play in teams, and have ~7 minutes to write the smallest valid C++ program containing a small set of tokens.

For example, if the tokens were:
catch -> [ ; -- foobar operator

Then a winning program (53 characters long) might be:
class c { c operator->(){ foobar: try{ } catch(c x[]){ x--; } } };


This might not sound very interesting but the format works. Here's what some of the participants said:
Pub quiz countdown was enormous fun. It showed me what C++ features I don’t really use very often and allowed me to deploy some very sneaky tricks that I wouldn’t touch in production code. Everyone should play it. At least once. With their boss. (Guy Davidson, Coding Manager, Creative Assembly)
This session was one of the highlights of the conference for me: it was so much fun! It was an extremely entertaining session and I really hope to see similar ideas in the future. (Vittorio Romeo, Bloomberg)
The format was really great; it encouraged healthy competition and allowed for some pretty heroic submissions, but it also kept the barrier for entry very low so that everyone could take part. I especially enjoyed seeing the ridiculous hacks which others would employ so that I could unceremoniously steal them for the next round. (Simon Brand, Senior Software Engineer, Codeplay Software Ltd)
The “Countdown pub quiz” was a combination of excellent geeky fun, fiendish puzzle, and competitive challenge. If you have a group of C++ programmers who enjoy the language, are playful, and want to improve their skills, I’d highly recommend getting involved. (Pete Goodliffe)
Not being a fan of the original Countdown TV programme, I was dubious about its use as a C++ teaching format. I needn't have worried - the session was engaging, enjoyable and educational. (Seb Rose, Cucumber)

We used cyber-dojo with some custom C++17 start-points which automatically told you your program's size and score. The rules were as follows:
  • The judges' decision was final
  • Only non-whitespace characters were counted
  • Programs had to compile
  • Warnings were allowed
  • Extra tokens were allowed
  • Each token had to be a single whole token. For example, the . token had to be the member access token; you could not use the ... ellipsis or the 4.2 floating-point literal


The winners and the tokens were as follows (can you find smaller programs?):
Round 1: snakes, 75-character program; tokens: dynamic_cast snafu += return switch final
Round 2: wolves and koalas tied, 54-character program; tokens: catch ; foobar operator -- [
Round 3: frogs, 62-character program; tokens: else ~ default -> using foobar 0x4b
Round 4: tigers, 44-character program; tokens: string include for auto template 42
Round 5: pandas and tigers tied, 82-character program; tokens: virtual typename x reinterpret_cast static_cast 30ul
Round 6: wolves, 64-character program; tokens: constexpr override goto wibble . this
The raccoons and lions won the conundrum rounds.

The result was very close.
In 3rd place snakes with 481 points.
In 2nd place alligators with 488 points.
In 1st place tigers with 495 points.
A big thank you to my co-presenter Rob Chatley, to all the contestants for being such good sports, and to Bloomberg for sponsoring the Quiz.

large http POST == Broken pipe (Errno::EPIPE)

My friend Seb Rose recently found an interesting bug in cyber-dojo. He found it when trying to create a start-point for our upcoming ACCU pre-conference tutorial on Testable Architecture. The cyber-dojo server first http POSTs the incoming code and test files to the runner service, which runs the tests and determines the colour of the traffic-light (red, amber, or green). Then the code and test files, together with stdout and the traffic-light colour, are http POSTed to the storer service. When the POST request exceeded a certain size the storer failed with a Broken pipe (Errno::EPIPE) message. Interestingly, the runner did not fail, even though the runner and storer services both use the same Sinatra web server, each running in its own docker container controlled by Docker Compose. It turned out the settings in the docker-compose.yml file for runner and storer were slightly different...

...
services:
  runner:
    image: cyberdojo/runner
    container_name: cyber-dojo-runner
    read_only: true
    tmpfs: /tmp
    ...
  storer:
    image: cyberdojo/storer
    container_name: cyber-dojo-storer
    read_only: true
    ...

Both services were running in a read_only container; runner had a temporary file-system, storer did not. This was the difference. I'm guessing that somewhere under the hood Sinatra switches to writing to /tmp when the incoming http POST gets bigger than a certain size. Adding [tmpfs: /tmp] to storer fixed the bug! Thanks Seb.
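For reference, here's a sketch of the fixed storer entry in docker-compose.yml (other settings elided); it simply gains the same tmpfs line that runner already had:
  storer:
    image: cyberdojo/storer
    container_name: cyber-dojo-storer
    read_only: true
    tmpfs: /tmp
    ...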

de-centralized service locator pattern

All hexagonal-architecture diagrams I've seen tend to look similar to this one. The external-adaptors live in the outer-ring and the application code you're focused on lives inside. Conceptually, they are neatly separated.

One of the things I don't like about Dependency Injection is how it blurs this separation. The external-adaptors living in the outer-ring have to be injected into the application objects living on the inside. So are the external-adaptor objects still on the outside or not? The words we're using suggest we just injected them, so now they are on the inside! For this and other reasons I tend to avoid Dependency Injection.

But of course, I still need the objects on the inside to be able to communicate with the objects on the outside. So cyber-dojo uses the Service Locator pattern. But it does not use a central registry. I prefer the connections they communicate over to be obvious and explicit. And I prefer those connections to actually be connections that connect the outside and the inside together. I feel the need for a Ruby example...

Suppose I have an inside Runner class that needs access to an outside Shell service

class Runner

  def run(...)
    ...
    stdout,_ = shell.assert_exec("docker run #{args} #{image_name} sh")
    ...
  end

end

The Runner object locates the shell object using the nearest_ancestors mix-in:

require_relative 'nearest_ancestors'

class Runner

  def initialize(parent, ...)
    @parent = parent
    ...
  end

  attr_reader :parent

  private

  include NearestAncestors

  def shell
    nearest_ancestors(:shell)
  end

end

module NearestAncestors

  def nearest_ancestors(symbol)
    who = self
    loop {
      unless who.respond_to? :parent
        fail "#{who.class.name} does not have a parent"
      end
      who = who.parent
      if who.respond_to? symbol
        return who.send(symbol)
      end
    }
  end

end

All objects know their parent object and nearest_ancestors chains back parent to parent to parent until it finds an object with the required symbol or runs out of parents. I can create a root object that simply holds the external-adaptors (eg shell). Conceptually, this root object lives at the boundary between the outside and the inside.

require 'sinatra/base'
require_relative 'shell'
require_relative 'runner'

class MicroService < Sinatra::Base

  post '/run' do
    runner.run(...)
  end

  def shell
    @shell ||= Shell.new(self)
  end

  def runner
    @runner ||= Runner.new(self, ...)
  end

end

All the internal objects can access all the external-adaptors. I love how trivial moving a piece of code from one class to another class becomes. Another thing I love about this pattern is the effect it has on my tests.

require_relative 'shell_stubber'
...

class RunnerTest < MiniTest::Test

  def test_runner_run_with_stubbed_shell
    @shell = ShellStubber.new(self)
    ...
    runner.run(...)
    ...
  end

  attr_reader :shell

  def runner
    @runner ||= Runner.new(self, ...)
  end

end

  • In the MicroService class self refers to the MicroService object which becomes the parent used in nearest_ancestors. Thus in runner.run, shell resolves to shell inside the MicroService object.
  • In the RunnerTest class self refers to the RunnerTest object which becomes the parent used in nearest_ancestors. Thus in runner.run, shell resolves to shell inside the RunnerTest object.
The RunnerTest class effectively doubles as the top-level MicroService class and I create, and hold, my test doubles locally. I find it greatly improves locality of reference and habitability in general.
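For completeness, here's a minimal sketch of what ShellStubber might look like (the real class isn't shown above, so its constructor arity, the canned return value, and the two-element return from assert_exec are assumptions based on how shell is used in Runner#run):

class ShellStubber

  def initialize(parent)
    @parent = parent   # kept so nearest_ancestors can chain through it if ever needed
  end

  attr_reader :parent

  # Runner#run destructures two values from assert_exec,
  # so return a canned stdout plus a dummy second value.
  def assert_exec(_command)
    ['stubbed stdout', 0]
  end

end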