the design and implementation of cyber-dojo

At the excellent Agile on the Beach conference in Cornwall I did a presentation outlining some of the history, design and implementation of cyber-dojo. The video has just gone live on youtube.

Travis build pipeline

I've been working on the Travis build pipeline for cyber-dojo. The biggest gotcha I hit was that a failure in an after_success: section of a .travis.yml file does not fail the build. This was an issue because after successfully building and testing a docker image I wanted to do two things (and know if they failed):
  • push the docker image to dockerhub
  • trigger git repos dependent on the docker image so they in turn run their .travis.yml files.
I solved this by doing both steps at the end of the .travis.yml script: section.

...
language: node_js
script:
  - ...
  - curl -O
  - chmod +x
  - ./ [DEPENDENT-REPO...]

The downloaded script looks like this:

#!/bin/bash
set -e
...
if [ "${TRAVIS_PULL_REQUEST}" == "false" ]; then
  BRANCH=${TRAVIS_BRANCH}
else
  BRANCH=${TRAVIS_PULL_REQUEST_BRANCH}
fi
if [ "${BRANCH}" == "master" ]; then
  docker login --username "${DOCKER_USERNAME}" --password "${DOCKER_PASSWORD}"
  TAG_NAME=$(basename ${TRAVIS_REPO_SLUG})
  docker push cyberdojo/${TAG_NAME}
  echo "PUSHED cyberdojo/${TAG_NAME} to dockerhub"
  npm install travis-ci
  script=trigger-build.js
  curl -O${script}
  node ${script} ${*}
fi

trigger-build.js looks like this:

...
var Travis = require('travis-ci');
var travis = new Travis({
  version: '2.0.0',
  headers: { 'User-Agent': 'Travis/1.0' }
});

var exit = function(call, error, response) {
  console.error('ERROR:travis.' + call + 'function(error,response) { ...');
  console.error('   error:' + error);
  console.error('response:' + JSON.stringify(response, null, '\t'));
  process.exit(1);
};

travis.authenticate({
  github_token: process.env.GITHUB_TOKEN
}, function(error, response) {
  var repos = process.argv.slice(2);
  if (error) { exit('authenticate({...}, ', error, response); }
  repos.forEach(function(repo) {
    var parts = repo.split('/');
    var name = parts[0];
    var tag = parts[1];
    travis.repos(name, tag).builds.get(function(error, response) {
      if (error) { exit('repos(' + name + ',' + tag + ').builds.get(', error, response); }
      travis.requests.post({
        build_id: response.builds[0].id
      }, function(error, response) {
        if (error) { exit('{...}, ', error, response); }
        console.log(repo + ':' + response.flash[0].notice);
      });
    });
  });
});

Note the [set -e] in the shell script and the [process.exit(1)] in trigger-build.js
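The effect of [set -e] can be seen in a tiny stand-alone sketch (my own illustration, not part of the pipeline): without it a failing command does not stop the script, which is exactly how a CI stage can carry on and report success after a failure.

```shell
# With set -e the shell exits at the first failing command,
# so "not-reached" is never printed and the || branch fires.
with_e=$(sh -c 'set -e; false; echo not-reached' || echo script-failed)
# Without set -e the shell carries on past the failure.
without_e=$(sh -c 'false; echo carried-on')
echo "with set -e:    ${with_e}"
echo "without set -e: ${without_e}"
```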
Hope this proves useful!

Python + behave

A big thank you to Millard Ellingsworth @millard3 who has added Python+behave to cyber-dojo.

Woohoo :-)

cyber-dojo Raspberry Pies in action

Liam Friel, who helps to run CoderDojoBray (in Ireland), asked me for some Raspberry Pies, which I was more than happy to give him, paid for from the donations that many of you generous people have made to cyber-dojo.

Liam sent me this wonderful photo of a CoderDojoBray session and writes:

Your Raspberry Pies have been getting a lot of use... We've got 8 Pies in total. Got a reasonably steady turnout at the dojo, 75-85 kids turning up each week.

Awesome. If, like Liam, you would like some Raspberry Pies to help kids learn about coding, please email me. Thanks!

running your own cyber-dojo server on Windows

install Docker-Toolbox for Windows

From here.

Note that running Docker natively on Windows is not supported.
This is because cyber-dojo requires a case-sensitive file system.
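If you are unsure whether a given file system is case-sensitive, a quick throw-away check (my own sketch, not part of the install steps) is to create two files whose names differ only in case and count what you get:

```shell
# On a case-sensitive file system "a" and "A" are two different
# files; on a case-insensitive one the second touch reuses the first.
tmp=$(mktemp -d)
touch "${tmp}/a" "${tmp}/A"
count=$(ls "${tmp}" | wc -l)
if [ "${count}" -eq 2 ]; then
  echo "case-sensitive"
else
  echo "case-insensitive"
fi
rm -rf "${tmp}"
```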

open a Docker-Quickstart-Terminal

get your cyber-dojo server's IP address

In the Docker-Quickstart-Terminal, type:
$ docker-machine ip default
It will print something like

shell into your cyber-dojo server

In the Docker-Quickstart-Terminal, type:
$ docker-machine ssh default

get the cyber-dojo script

In your cyber-dojo server, type:
$ curl -O
$ chmod +x cyber-dojo

Now use this cyber-dojo script, from your cyber-dojo server, to run your own cyber-dojo server.

running your own cyber-dojo server on a Mac

install Docker-Toolbox for Mac

From here.

Note that running Docker natively on a Mac is not supported.
This is because cyber-dojo requires a case-sensitive file system.

open a Docker-Quickstart-Terminal

get your cyber-dojo server's IP address

In the Docker-Quickstart-Terminal, type:
$ docker-machine ip default
It will print something like

shell into your cyber-dojo server

In the Docker-Quickstart-Terminal, type:
$ docker-machine ssh default

get the cyber-dojo script

In your cyber-dojo server, type:
$ curl -O
$ chmod +x cyber-dojo

Now use this cyber-dojo script, from your cyber-dojo server, to run your own cyber-dojo server.

running your own cyber-dojo server on Linux

install docker

If docker is not already installed, install it. There are two ways to do this:
  1. follow the instructions on the docker website.
  2. curl OR wget the quick-and-easy install script at
$ curl -sSL | sh
    $ wget -qO- | sh

add your user to the docker group

Eg, something like
$ sudo usermod -aG docker YOUR_USERNAME

log out and log in again

You need to do this for the previous usermod to take effect.

install the cyber-dojo shell script

In a terminal, type:
$ curl -O
$ chmod 700 cyber-dojo

Now use this cyber-dojo script to run your own cyber-dojo server.

tar-piping a dir in/out of a docker container

The bottom of the docker cp web page has some examples of tar-piping into and out of a container. I couldn't get them to work. I guess there are different varieties of tar. The following is what worked for me on Alpine Linux. It assumes
  • ${container} is the name of the container
  • ${src_dir} is the source dir
  • ${dst_dir} is the destination dir

copying a dir out of a container

docker exec ${container} tar -cf - -C $(dirname ${src_dir}) $(basename ${src_dir}) | tar -xf - -C ${dst_dir}

copying a dir into a container

tar -cf - -C $(dirname ${src_dir}) $(basename ${src_dir}) | docker exec -i ${container} tar -xf - -C ${dst_dir}

setting files uid/gid

A tar-pipe can also set the destination files owner/uid and group/gid:
tar --owner=UID --group=GID -cf - -C $(dirname ${src_dir}) $(basename ${src_dir}) | docker exec -i ${container} tar -xf - -C ${dst_dir}
This is useful because unlike [docker run] the [docker cp] command does not have a [--user] option.
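Here is a local, docker-free sketch (my own, assuming GNU tar) showing what --owner/--group do: the archive entries claim uid 1234 and gid 5678 regardless of who actually created the files, which is what lets the extracting end set the ownership you want.

```shell
# Build a one-file archive whose entries claim uid=1234, gid=5678.
tmp=$(mktemp -d)
echo hello > "${tmp}/file.txt"
tar --owner=1234 --group=5678 -cf "${tmp}/out.tar" -C "${tmp}" file.txt
# The verbose listing shows 1234/5678 as the owner/group column.
tar -tvf "${tmp}/out.tar"
```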

Alpine tar update

The default tar on Alpine Linux does not support the --owner/--group options. You'll need to:
apk --update add tar

Hope this proves useful!

the design and evolution of cyber-dojo

I've talked about the design and evolution of cyber-dojo at two conferences this year. First at NorDevCon in Norwich and then also at Agile on the Beach in Falmouth. Here are the slides.

cyber-dojo web server default start-points

  • This page holds the choices where you select your language and test framework (eg C#, NUnit) and exercise (eg Fizz Buzz).
  • This start-point is called languages.
  • By default, the languages+testFrameworks list is created from the languages_list file, which contains a list of repo-URLs in the cyber-dojo-languages github organization, each of which contains a manifest.json file.
  • By default, the exercises list is created from the start-points-exercises github repo which contains instructions text files.

  • This page holds the customized choices.
  • This start-point is called custom
  • By default, custom is created from the start-points-custom github repo which contains manifest.json files.

creating a new default start-point

To use a different default start-point simply bring down the server, delete the one you wish to replace, create a new one with that name, and bring the server back up. For example, to create a new languages start-point:
$ ./cyber-dojo down
$ ./cyber-dojo start-point rm languages
$ ./cyber-dojo start-point create languages --dir=...
$ ./cyber-dojo up

$ ./cyber-dojo start-point ...

Start-points are controlled using the start-point command of the cyber-dojo script.
$ ./cyber-dojo start-point
Use: cyber-dojo start-point [COMMAND]

Manage cyber-dojo start-points

Commands:
    create     Creates a new start-point
    inspect    Displays details of a start-point
    latest     Updates pulled docker images named inside a start-point
    ls         Lists the names of all start-points
    pull       Pulls all the docker images named inside a start-point
    rm         Removes a start-point

Run 'cyber-dojo start-point COMMAND --help' for more information on a command

For example:
$ ./cyber-dojo start-point ls
NAME       TYPE       SRC
custom     custom
exercises  exercises
languages  languages

For example:
$ ./cyber-dojo start-point inspect languages
DISPLAY_NAME                  IMAGE_NAME                                            PULLED?
Asm, assert                   cyberdojofoundation/nasm_assert                       yes
BCPL, all_tests_passed        cyberdojofoundation/bcpl_all_tests_passed             yes
Bash, bash_unit               cyberdojofoundation/bash_unit                         yes
...
C (clang), Cgreen             cyberdojofoundation/clang_cgreen                      yes
...
C (gcc), Cgreen               cyberdojofoundation/gcc_cgreen                        yes
...
C#, Moq                       cyberdojofoundation/csharp_moq                        yes
...
C++ (clang++), Cgreen         cyberdojofoundation/clangpp_cgreen                    yes
...
C++ (g++), Boost.Test         cyberdojofoundation/gpp_boosttest                     yes
...
Clojure, Midje                cyberdojofoundation/clojure_midje                     yes
...
CoffeeScript, jasmine         cyberdojofoundation/coffeescript_jasmine              yes
D, unittest                   cyberdojofoundation/d_unittest                        yes
Erlang, eunit                 cyberdojofoundation/erlang_eunit                      yes
F#, NUnit                     cyberdojofoundation/fsharp_nunit                      yes
Fortran, FUnit                cyberdojofoundation/fortran_funit                     yes
Go, testing                   cyberdojofoundation/go_testing                        yes
Groovy, JUnit                 cyberdojofoundation/groovy_junit                      yes
...
Haskell, hunit                cyberdojofoundation/haskell_hunit                     yes
Java, Cucumber                cyberdojofoundation/java_cucumber                     yes
...
Javascript, Mocha+chai+sinon  cyberdojofoundation/javascript-node_mocha_chai_sinon  yes
...
PHP, PHPUnit                  cyberdojofoundation/php_phpunit                       yes
Perl, Test::Simple            cyberdojofoundation/perl_test_simple                  yes
Python, py.test               cyberdojofoundation/python_pytest                     yes
Python, unittest              cyberdojofoundation/python_unittest                   yes
R, RUnit                      cyberdojofoundation/r_runit                           yes
Ruby, Cucumber                cyberdojofoundation/ruby_cucumber                     yes
...
Rust, test                    cyberdojofoundation/rust_test                         yes
Swift, XCTest                 cyberdojofoundation/swift_xctest                      yes
VHDL, assert                  cyberdojofoundation/vhdl_assert                       yes
VisualBasic, NUnit            cyberdojofoundation/visual-basic_nunit                yes

$ ./cyber-dojo start-point create

Use: cyber-dojo start-point create NAME --list=URL|FILE
Creates a start-point named NAME from git-clones of all the URLs listed in URL|FILE

Use: cyber-dojo start-point create NAME --git=URL
Creates a start-point named NAME from a git clone of URL

Use: cyber-dojo start-point create NAME --dir=DIR
Creates a start-point named NAME from a copy of DIR

NAME's first letter must be [a-zA-Z0-9]
NAME's remaining letters must be [a-zA-Z0-9_.-]
NAME must be at least two letters long

cyber-dojo new release

The new release of cyber-dojo just went live :-)

creating your own server start-points

cyber-dojo's new architecture has customisable start-points.
If you want to use your own start-points you do not need to build a new web server image.

preparing your custom/languages start-point

  • Create a folder for the start-point
    $ md douglas
  • In the top-level folder create a file start_point_type.json
    $ touch douglas/start_point_type.json
    This file must specify the type of start-point.
    • languages (the type where setup will ask for an exercise)
      { "type" : "languages" }
    • custom (the type where setup will not ask for an exercise)
      { "type" : "custom" }
  • Create a sub-folder for each start-point entry
    $ md douglas/first
  • Create a manifest.json file in each folder
    $ nano douglas/first/manifest.json
  • Here's an example. Here's an explanation of the manifest.json format.
  • In each folder create the visible files named in manifest.json
    Here's an example
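The steps above can be condensed into a throw-away sketch. The folder names follow the article's example; the manifest content is a made-up minimal placeholder, not a complete manifest, and mkdir -p stands in for md:

```shell
# Layout: top-level start_point_type.json plus one entry sub-folder
# containing a manifest.json and the visible files it names.
mkdir -p douglas/first
printf '{ "type" : "custom" }\n' > douglas/start_point_type.json
cat > douglas/first/manifest.json <<'EOF'
{
  "display_name": "Example, exercise",
  "image_name": "cyberdojofoundation/example",
  "filename_extension": ".txt",
  "visible_filenames": [ "readme.txt" ]
}
EOF
touch douglas/first/readme.txt
```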

preparing your exercises start-point

  • Create a folder for the start-point
    $ md arthur
  • In the top-level folder create a file start_point_type.json
    $ touch arthur/start_point_type.json
    This file must specify the type of start-point.
    { "type" : "exercises" }
  • Create a sub-folder for each exercise
    $ md arthur/first
  • Create an instructions file in each folder
    $ nano arthur/first/instructions
    Here's an example
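The exercises steps can likewise be sketched (mkdir -p standing in for md; the instructions text is a placeholder of my own):

```shell
# One sub-folder per exercise, each holding an instructions file.
mkdir -p arthur/first
printf '{ "type" : "exercises" }\n' > arthur/start_point_type.json
printf 'Write a program that prints the numbers 1 to 100.\n' > arthur/first/instructions
```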

creating your start-point

Use the cyber-dojo script to create a new start-point. For example
$ cd douglas
$ [sudo] ./cyber-dojo start-point create adams --dir=${PWD}
which attempts to create a start-point called adams from all the files in the douglas directory. If the creation fails the cyber-dojo script will print diagnostics.

starting your server with your start-point

eg with a type=custom start-point called hiker
$ [sudo] ./cyber-dojo up --custom=hiker
eg with a type=languages start-point named adams
$ [sudo] ./cyber-dojo up --languages=adams
eg with a type=exercises start-point named arthur
$ [sudo] ./cyber-dojo up --exercises=arthur
eg with a combination
$ [sudo] ./cyber-dojo up --languages=adams --exercises=arthur

adding a new language + test-framework to cyber-dojo

This page will be updated properly soon.
Meanwhile, follow step 0 below, and then look at these examples from the cyber-dojo-languages github organization. Note how each repo's .travis.yml file simply runs the script which builds and tests the docker image and any associated start-point code. In particular, the script augments the Dockerfile commands in various ways (eg adding users for the 64 avatars). Alternatively, if you are building your docker-image using a raw [docker build] command, you must base your docker-image FROM a cyber-dojo-languages dockerhub image.

0. Install docker

1. Create a docker-image for just the language

Make this docker-image unit-test-framework agnostic.
If you are adding a new unit-test-framework to an existing language skip this step.
For example, suppose you were building Lisp
  • Create a new folder for your language
    $ md lisp
  • In your language's folder, create a file called Dockerfile
    $ cd lisp
    $ touch Dockerfile
    If you can, base your new image on Alpine-linux as this will help keep images small. To do this make the first line of Dockerfile as follows
    FROM cyberdojofoundation/language-base
    Here's one based on Alpine-linux (217 MB: C#) Dockerfile
    Here's one not based on Alpine (Ubuntu 1.26 GB: Python) Dockerfile
  • Use the Dockerfile to build a docker-image for your language.
    For example
    $ docker build -t cyberdojofoundation/lisp .
    which, if it completes, creates a new docker-image called cyberdojofoundation/lisp using the Dockerfile (and build context) in . (the current folder).

2. Create a docker-image for the language and test-framework

Repeat the same process, building FROM the docker-image you created in the previous step.
For example, suppose your Lisp unit-test framework is called lunit
  • Create a new folder underneath your language folder
    $ cd lisp
    $ md lunit
  • In your new test folder, create a file called Dockerfile
    $ cd lunit
    $ touch Dockerfile
    The first line of this file must name the language docker-image you built in the previous step.
    Add lines for all the commands needed to install your unit-test framework...
    FROM cyberdojofoundation/lisp
    RUN apt-get install -y lispy-lunit
    RUN apt-get install -y ...
  • Create a file called red_amber_green.rb
    $ touch red_amber_green.rb
  • In red_amber_green.rb write a Ruby lambda accepting three arguments. For example, here is the C#-NUnit red_amber_green.rb:
    lambda { |stdout,stderr,status|
      output = stdout + stderr
      return :red   if /^Errors and Failures:/.match(output)
      return :green if /^Tests run: (\d+), Errors: 0, Failures: 0/.match(output)
      return :amber
    }
    cyber-dojo uses this to determine the test's traffic-light colour by passing it the stdout, stderr, and status outcomes of the test run.
  • The Dockerfile for your language+testFramework must COPY red_amber_green.rb into the /usr/local/bin folder of your image. For example:
    FROM cyberdojofoundation/lisp
    RUN apt-get install -y lispy-lunit
    RUN apt-get install -y ...
    COPY red_amber_green.rb /usr/local/bin
    I usually start with a red_amber_green.rb that simply returns :red. Then, once I have a start-point using the language+testFramework docker-image, I use cyber-dojo to gather outputs which I use to build up a working red_amber_green.rb
  • Use the Dockerfile to try and build your language+testFramework docker-image.
    The name of an image takes the form hub-name/image-name. Do not include a version number in the image-name. For example
    $ docker build -t cyberdojofoundation/lisp_lunit .
    which, if it completes, creates a new docker image called cyberdojofoundation/lisp_lunit using the Dockerfile in . (the current folder).

3. Use the language+testFramework docker-image in a new start-point

Use the new image name (eg cyberdojofoundation/lisp_lunit) in a new manifest.json file in a new start-point.

cyber-dojo start-points manifest.json entries explained

Example: the manifest.json file for Java/JUnit currently looks like this:

{
  "display_name": "Java, JUnit",
  "visible_filenames": [
    "",
    "",
    ""
  ],
  "image_name": "cyberdojofoundation/java_junit",
  "runner_choice": "stateless",
  "filename_extension": ".java",
  "tab_size": 4,
  "progress_regexs": [
    "Tests run\\: (\\d)+,(\\s)+Failures\\: (\\d)+",
    "OK \\((\\d)+ test(s)?\\)"
  ]
}

Required entries

"display_name": string

The name as it appears in the start-point setup pages where you select your language-plus-test-framework. For example, "Java, JUnit" means that "Java, JUnit" will appear as a selectable entry. A single string with the major name first, then a comma, then the minor name.

"visible_filenames": [ string, string, ... ]

Filenames that will be visible in the browser's editor when an animal initially enters a cyber-dojo. Each of these files must exist in the manifest.json's directory. Filenames can be in nested sub-directories, eg "tests/". The list must include the test-run shell file, since that is the file the runner assumes is the start point for running the tests. You can write any actions inside it, but clearly any programs it tries to run must be installed in the docker image_name. For example, if it runs gcc to compile C files then gcc has to be installed. If it runs javac to compile java files then javac has to be installed.

"image_name": string

The name of the docker image used to run a container in which the tests are executed. Do not include any version numbers (eg of the compiler or test-framework). The docker image must contain a file called red_amber_green.rb in the /usr/local/bin directory. The runner uses this to determine the traffic-light colour of each test run outcome. For example, here's the one for Java-JUnit.

"runner_choice": string

The string "stateless" or "stateful". Each test run is handled by either the stateless-runner or the stateful-runner. The stateless runner does not maintain state between test runs, the stateful runner does. In other words, when using the stateless runner the binary files produced by your script (eg .o files created from .c files via a makefile) do not exist at the start of the next test run. With the stateful runner, they do. Use the stateless runner unless you can gain a significant speed up with the stateful runner (which requires extra disk-space and cpu from the host server).
This is now ignored and the only runner is the stateless runner.

"filename_extension": [ string, string, ... ]

The extensions of filenames that identify source files, which are listed above the output file(s) in the filename list. The first entry is also used when creating a new filename; for example, if set to ".java", a new filename will be given a .java extension. If you only have a single filename extension you can use a single string instead of an array containing a single string.

Optional entries

"max_seconds": int

The maximum number of seconds a test run has to complete.
An integer between 1 and 20.
Defaults to 10.

"tab_size": int

The number of spaces a tab character expands to in the browser's textarea editor.
An integer between 1 and 12.
Defaults to 4.

"hidden_filenames": [ string, string, ... ]

When the runner runs, it often creates files. All text-files that are created are returned to the browser unless their name matches any of the string regexs.
An array of strings used by cyber-dojo to create Ruby regexs. For example, to hide files ending in .d you can use the string ".*\\.d"
Defaults to [ ].

"progress_regexs": [ string, string ]

Used on the dashboard to show the test output line (which often contains the number of passing and failing tests) of each animal's most recent red/green traffic light. Useful when your practice session starts from a large number of pre-written tests and you wish to monitor the progress of each animal.
An array of two strings used to create Ruby regexs. The first one to match a red traffic light's test output, and the second one to match a green traffic light's test output.
Defaults to [ ].

"highlight_filenames": [ string, string, ... ]

Filenames whose appearance is highlighted in the browser. This can be useful if you have many "visible_filenames" and want to mark which files form the focus of the practice.
An array of strings. A strict subset of "visible_filenames".
Defaults to [ ].

running your own cyber-dojo web server

cyber-dojo traffic-lights!

My friend Byran who works at the awesome Bluefruit Software in Redruth has hooked up his cyber-dojo web server to an actual traffic-light! Fantastic. Check out the video below :-)

Byran writes
It started out as a joke between myself and Josh (one of the testers at Bluefruit). I had the traffic lights in my office as I was preparing a stand to promote the outreach events (Summer Huddle, Mission to Mars, etc...) Software Cornwall runs. The conversation went on to alternative uses for the traffic lights, I was planning to see if people would pay attention to the traffic lights if I put them in a corridor at the event; we then came up with the idea that we could use them to indicate TDD test status.
Although it started out as a joke I am going to use it at the Summer Huddle, the lights change every time anyone runs a test so it should give an idea of how the entire group are doing without highlighting an individual pair.
The software setup is very simple, there is a Python web server (using the Flask library) running on a Raspberry Pi that controls the traffic lights using GPIO Zero. When the appendTestTrafficLight() function (in run_tests.js.erb) appends the traffic light image to the webpage I made it send an http 'get' request to the Raspberry Pi web server to set the physical traffic lights at the same time. At the moment the IP address of the Raspberry Pi is hard coded in the 'run_tests.js.erb' file so I have to rebuild the web image if anything changes but it was only meant to be a joke/proof of concept. The code is on a branch called traffic_lights on my fork of the cyber-dojo web repository.
The hardware is also relatively simple, there is a converter board on the Pi; this only converts the IO pin output connector of the Raspberry Pi to the cable that attaches to the traffic lights.
The other end of the cable from the converter board attaches to the board in the top left of the inside of the traffic lights; this has some optoisolators that drive the relays in the top right, which in turn switch on and off the transformers (the red thing in the bottom left) that drive the lights.
I have to give credit to Steve Amor for building the hardware for the traffic lights. They are usually used during events we run to teach coding to children (and sometimes adults). The converter board has LEDs, switches and buzzers on it to show that there isn't a difference between writing software to toggle LEDs vs driving actual real world systems, it's just what's attached to the pin. Having something where they can run the same code to drive LEDs and drive real traffic lights helps to emphasise this point.

a new architecture is coming

Between my recent fishing trips I have been working hard on a new cyber-dojo architecture.
  • pluggable start-points so you can now use your own language/tests/exercises lists on the setup page
  • a new setup page for custom start-points
  • once I've got the output parse functions inside the start-point volume I'll be switching the public cyber-dojo server to this image and updating the running-your-own-server instructions.
  • I've switched all development to a new github repo which has instructions if you want to try it now.

nordevcon cyber-dojo presentation

It was a pleasure to speak at the recent norfolk developers conference. My talk was "cyber-dojo: executing your code for fun and not for profit". I spoke about cyber-dojo, demo'd its features, discussed its history, design, difficulties and underlying technology. Videos of the talk are now on the infoq website. The slide-sync is not right at the start of part 2 but it soon gets corrected.

Docker tar pipe

This blog post is now out of date. The docker tar-pipe is now executed from micro-services implemented in Ruby.
For example, see runner_stateless.

I've been working on re-architecting cyber-dojo so the web-server (written in Rails) runs in a Docker image...
  • the server receives its source files from the browser
  • it saves them to a temporary folder
  • it back-ticks a shell file which
  • ...puts the source files into a docker container
  • ...runs the source files (executing as user=nobody)
  • ...limits execution to 10 seconds

My shell file started like this:
#!/bin/sh
SRC_DIR=$1              # where source files are
IMAGE=$2                # the image to run them in
MAX_SECS=$3             # how long they've got to complete
TAR_FILE=`mktemp`.tgz   # source files are tarred into this
SANDBOX=/sandbox        # where tar is untarred to inside container

# - - - - - - - - - - - - - - - - - - -
# 1. Create the tar file
cd ${SRC_DIR}
tar -zcf ${TAR_FILE} .

# - - - - - - - - - - - - - - - - - - -
# 2. Start the container
CID=$(sudo docker run --detach \
                      --interactive \
                      --net=none \
                      --user=nobody \
                      ${IMAGE} sh)

# - - - - - - - - - - - - - - - - - - -
# 3. Pipe the source files into the container
cat ${TAR_FILE} \
  | sudo docker exec --interactive \
                     --user=root \
                     ${CID} \
                     sh -c "mkdir ${SANDBOX} \
                         && tar zxf - -C ${SANDBOX} \
                         && chown -R nobody ${SANDBOX}"

# - - - - - - - - - - - - - - - - - - -
# 4. After max_seconds, remove the container
(sleep ${MAX_SECS} && sudo docker rm --force ${CID}) &

# - - - - - - - - - - - - - - - - - - -
# 5. Run in the container
sudo docker exec --user=nobody \
                 ${CID} \
                 sh -c "cd ${SANDBOX} && ./ 2>&1"

# - - - - - - - - - - - - - - - - - - -
# 6. If the container isn't running, the sleep woke and removed it
RUNNING=$(sudo docker inspect --format="{{ .State.Running }}" ${CID})
if [ "${RUNNING}" != "true" ]; then
  exit 137 # (128=timed-out) + (9=killed)
else
  exit 0
fi

Things to note:
  • The container is started in detached mode. This is so I can get its CID and set up the backgrounded sleep task (4) before running (5)
  • I use [sudo docker] because I do not put the current user into the docker group. Instead I sudo the current user to run the docker binary without a password.
  • The first [docker exec] user is root but this is root inside the CID container not root where the shell file is being run.
  • I can pipe STDIN from the shell into the container
  • The sleep task (4) kills the container if it runs out of time and step (6) detects this.

I realized I could avoid creating the (physical) tar file completely by using a 'proper' tar pipe:
#!/bin/sh
...
(cd ${SRC_DIR} && tar -zcf - .) \
  | sudo docker exec --interactive \
                     --user=root \
                     $CID \
                     sh -c "mkdir ${SANDBOX} \
                         && tar -zxf - -C ${SANDBOX} \
                         && chown -R nobody ${SANDBOX}"
...

  • [tar -zcf] means create a compressed tar file
  • [-] means don't write to a named file but to STDOUT
  • [.] means tar the current directory
  • which is why there's a preceding cd
At the other end of the pipe...
  • [tar -zxf] means extract files from the compressed tar file
  • [-] means don't read from a named file but from STDIN
  • [-C ${SANDBOX}] means save the extracted files to the ${SANDBOX} directory
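The tar-pipe idea can be tried in isolation, without docker, by streaming a directory from one temporary location to another (a throw-away sketch of my own, not part of the cyber-dojo scripts):

```shell
# Create a source dir with one file, then tar-pipe it to a
# destination dir: no intermediate tar file ever touches the disk.
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "${src}/greeting.txt"
(cd "${src}" && tar -zcf - .) | tar -zxf - -C "${dst}"
cat "${dst}/greeting.txt"
```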

Then I realized I could combine the two [docker exec]s into one and drop the chown...
...
# - - - - - - - - - - - - - - - - - - -
# 1. Start the container
CID=$(sudo docker run --detach \
                      --interactive \
                      --net=none \
                      --user=nobody \
                      ${IMAGE} sh)

# - - - - - - - - - - - - - - - - - - -
# 2. After max_seconds, remove the container
(sleep ${MAX_SECS} && sudo docker rm --force ${CID}) &

# - - - - - - - - - - - - - - - - - - -
# 3. Tar pipe the source files into the container and run
(cd ${SRC_DIR} && tar -zcf - .) \
  | sudo docker exec --interactive \
                     --user=nobody \
                     ${CID} \
                     sh -c "mkdir ${SANDBOX} \
                         && cd ${SANDBOX} \
                         && tar -zxf - -C . \
                         && ./"

# - - - - - - - - - - - - - - - - - - -
# 4. If the container isn't running, the sleep woke and removed it
RUNNING=$(sudo docker inspect --format="{{ .State.Running }}" ${CID})
if [ "${RUNNING}" != "true" ]; then
  exit 137 # (128=timed-out) + (9=killed)
else
  exit 0
fi

This worked but the backgrounded sleep created a zombie process.
That's for another blog.