cyber-dojo at Bristol Docker meetup

Here's a video of a short presentation I did (at the inaugural Bristol Docker meetup) explaining cyber-dojo and how it uses Docker. The projection is mostly invisible I'm afraid. The security flaws (such as running the containers as root) have now been fixed.



the test page




  • click it to run your tests
  • the stdout or stderr file opens and displays the results
  • a new right-most traffic-light appears

  • click any traffic-light to open the history view.
  • red - the tests ran but one or more failed.
  • amber - the tests did not run, eg syntax error.
  • green - the tests ran and all passed.
  • the tests did not complete in 15 seconds. Accidentally coded an infinite loop? Too many concurrent cyber-dojos? Lost your network connection?
  • the total number of traffic-lights (in the most recent traffic-light's colour).
  • your animal, if you are in a group practice.

keyboard shortcuts

  • Alt-O toggles through the editor tabs
  • Alt-J cycles downwards through the filenames
  • Alt-K cycles upwards through the filenames
  • Alt-T runs the tests

Here is a full list of shortcuts. Some useful ones for search and replace are:

  • Start searching: Ctrl-F / Cmd-F
  • Find next: Ctrl-G / Cmd-G
  • Find previous: Shift-Ctrl-G / Shift-Cmd-G
  • Replace: Shift-Ctrl-F / Cmd-Option-F
  • Replace all: Shift-Ctrl-R / Shift-Cmd-Option-F
  • Jump to line: Alt-G


evidence for pairing effectiveness

When I run a cyber-dojo I always ask the participants to work in pairs, two people per computer. I've been doing some research on pairing and I've come across a book called Visible Learning by John Hattie. It's a synthesis of over 800 experiments and papers relating to achievement in schools. On page 225 there is a section called The use of computers is more effective when peer learning is optimized. It reads, and I quote...
  • Lou, Abrami, and d'Apollonia (2001) reported higher effects for pairs than individuals or more than two in a group.
  • Liao (2007) also found greater effects for small groups (d=0.96) than individuals (d=0.56) or larger groups (d=0.39).
  • Gordon (1991) found effects were larger for learning in pairs (d=0.54) compared to alone (d=0.25).
  • Kuchler (1998) reported d=0.69 for pairs and d=0.29 for individuals.
  • Lou, Abrami, and d'Apollonia (2001) reported that students learning in pairs had a higher frequency of positive peer interactions (d=0.33), higher frequency of using appropriate learning or task strategies (d=0.50), persevered more on tasks (d=0.48), and more students succeeded (d=0.28) than those learning individually when using computers.
What do the numbers mean? Quoting from the front of the book...
An effect size of d=1.0 indicates an increase of one standard deviation on the outcome - in this case the outcome is improving school achievement. A one standard deviation increase is typically associated with advancing children's achievement by two to three years, improving the rate of learning by 50%, or a correlation between some variable and achievement of approximately r=0.50. When implementing a new program, an effect size of 1.0 would mean that, on average, students receiving the treatment would exceed 84% of the students not receiving that treatment.
Of course, things are rarely absolutely black and white, but these are impressive numbers.
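
To see where the 84% figure comes from: assuming achievement is approximately normally distributed, d=1.0 puts the average treated student one standard deviation above the control mean, which is the 84th percentile of the normal distribution. A quick Ruby check (my sketch, not from the book):

# standard normal CDF, written using the error function
phi = ->(x) { 0.5 * (1 + Math.erf(x / Math.sqrt(2))) }
percentile = (phi.call(1.0) * 100).round
puts percentile   # => 84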

  • Lou, Y., Abrami, P.C., & d'Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71(3), 449-521.
  • Liao, Y.K.C. (2007). Effects of computer-assisted instruction on students' achievement in Taiwan: A meta-analysis. Computers and Education, 48(2), 216-233.
  • Gordon, M.B. (1991). A quantitative analysis of the relationship between computer graphics and mathematics achievement and problem-solving. Unpublished Ed.D., University of Cincinnati, OH.
  • Kuchler, J.M. (1998). The effectiveness of using computers to teach secondary school (grades 6-12) mathematics: A meta-analysis. Unpublished Ed.D., University of Massachusetts Lowell, MA.

facilitating a cyber-dojo tips

When I'm facilitating a cyber-dojo with a new group here's how I typically start:
  1. I suggest that developers' habits and thinking are strongly influenced by their development environments. If you use Eclipse to develop software, then when you use Eclipse your default mentality is one of development. Not practising. Since we're practising, we deliberately don't use a development environment.
  2. I point out that cyber-dojo is not a personal development environment, it's a shared practice environment. In a development environment it makes sense to have tools such as colour syntax highlighting and code-completion to help you go faster so you can ship sooner. In a practice environment it doesn't. When you're practising you don't want to go faster, since you're not shipping anything. You want to go slower. You want your practice to be more deliberate.
  3. I observe that since it is so different to a development environment, participants may feel some slight discomfort when first using cyber-dojo. This discomfort is also deliberate! Discomfort can bring learning opportunities.
  4. I do a short demo explaining...
    • the files on the left side
    • the initial source files bear no relation to the exercise
    • the test button
    • the output file
    • the meaning of the red, amber, green traffic lights
  5. I ask the participants to enter their dojo in pairs. Pairing is an important part of the learning. Occasionally a few choose not to pair (and that's fine) but most do.

the history page

Hovering over a traffic-light shows its diff summary in a tool-tip.
Clicking on a traffic-light opens the history-view.
For example, this is the history-view for heron's 32nd traffic light.


avatar navigator


Moves you to different animals. Only visible in a group exercise.
  • the left-arrow moves to the previous animal.
  • the right-arrow moves to the next animal.
  • when the diff checkbox is checked, moving to another animal moves to their first traffic-light.
  • when the diff checkbox is unchecked, moving to another animal moves to their last traffic-light.


traffic-light navigator


Moves you forward and backward through the traffic-lights.
  • the smaller left-arrow moves to the first traffic-light.
  • the larger left-arrow moves to the previous traffic-light.
  • the current traffic-light number, shown in its colour (eg 32 green).
  • the larger right-arrow moves to the next traffic-light.
  • the smaller right-arrow moves to the last traffic-light.


traffic lights


The scrollable traffic-light sequence.
  • hover over each traffic-light to show its diff summary in a tool-tip.
  • click on any traffic-light to navigate directly to it.
  • the current traffic-light is marked with an underbar.


file name


The currently selected filename.
  • the number of lines deleted, in red (click to toggle hide/view).
  • the number of lines added, in green (click to toggle hide/view).
  • the filename (click it to auto-scroll its next diff-chunk into view).


file content


The currently selected file.
  • deleted lines are shown in light red, with a - next to the line-number.
  • added lines are shown in light green, with a + next to the line-number.


forking


The fork button.
  • creates a brand new exercise, with its own 6-character id.
  • the new exercise's starting files will be copied from the currently displayed traffic light.
  • a dialog box will ask whether you want an individual exercise or a group exercise.


checking out


The checkout button.
  • checks out (as in git checkout) the files in the currently displayed traffic light, and submits them for test.
  • not available from a dashboard review.




the dashboard page



Each row corresponds to one avatar and displays, from left to right:
  • the avatar. Click to open a history page in non-diff mode showing their current code.
  • a pie-chart indicating the total number of red, amber, green traffic-lights so far.
  • the total number of traffic-lights (in the most recent traffic-light's colour). Click to open the history view in non-diff mode showing the animal's current code.
  • the oldest-to-newest traffic-lights. Click on any traffic-light to open a history view showing the diff for that traffic-light for that animal.

  • when checked the dashboard auto-refreshes every ten seconds.
  • turn auto-refresh on during the coding.
  • turn auto-refresh off during the review.

  • when unchecked the traffic-lights of different animals are not vertically time-aligned.
  • when checked each vertical column corresponds to one minute and contains all the traffic-lights created by all the animals in that one minute.
  • if no animals press their button during one minute the column will contain no traffic-lights at all (instead it will contain a single dot and be very thin).


If available, displays slightly more information about the most recent non-amber traffic-light of each animal, usually the number of passing and failing tests.



cyber-dojo traffic lights


Press the [test] button and stdout+stderr+status are displayed in the output tab, and you get a new traffic-light.
Each traffic-light is coloured:
  • red if the tests ran but one or more failed.
  • amber if the tests did not run, eg syntax error.
  • green if the tests ran and all passed.
  • if the tests did not complete in ~10 seconds.
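
As an aside, here's a toy Ruby sketch of the red/amber/green distinction. It is not cyber-dojo's actual classifier (each test framework's output is parsed in its own way), but it captures the idea: amber means the tests never ran, red means they ran and something failed, green means they ran and everything passed.

# classify a Ruby test-unit run from its output summary line
def traffic_light_colour(output)
  summary = output.match(/(\d+) failures, (\d+) errors/)
  return :amber if summary.nil?               # no summary line => the tests never ran
  failures, errors = summary[1].to_i, summary[2].to_i
  failures + errors == 0 ? :green : :red
end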

If test-prediction is enabled (click the cog/gear icon to open the settings dialog), traffic-lights look like this:
  • correct prediction (of green).
  • incorrect prediction (of red or amber).
  • auto-revert (back to green).

Click any traffic-light to open the history page showing:
  • diffs for any traffic-light's files, for any animal.
  • a button to checkout (git checkout) the files from any traffic light.
  • a button to fork a new exercise from any traffic light's files.


adding a new exercise

This page is out of date.
This is the page you are looking for.

cyber-dojo now runs Javascript mocha+chai+sinon

Many thanks to Steve Coffman for adding this.

cyber-dojo now runs C# Specflow

Many thanks to Seb Rose who has added C# Specflow to cyber-dojo. Seb has written a blog entry, using specflow on mono from the command line, detailing the steps involved.

breaking down the problem

One of my favourite programming problems on cyber-dojo is Print Diamond:
Given a letter print a diamond starting with 'A'
with the supplied letter at the widest point.

For example: print-diamond 'E' prints

    A
   B B
  C   C
 D     D
E       E
 D     D
  C   C
   B B
    A

For example: print-diamond 'C' prints

  A
 B B
C   C
 B B
  A
A lot of participants are surprised at how tricky this simple-looking exercise is. It's a good exercise to explore ways of working step by step. How would you break it down? I urge you to try the exercise now. On cyber-dojo naturally. Then come back here and read on.


How did you do it? Was your first test something like this (Ruby)?
def test_diamond_B
  assert_equal [" A ",
                "B B",
                " A "], diamond('B')
end
Maybe then a bit of slime:
def diamond(widest)
  [" A ",
   "B B",
   " A "
  ]
end
What then? Perhaps observe that the slime is not using the widest parameter, so write another test for diamond('C'). What then? Slime that too? Then what?

What I find really interesting is something my friend Seb Rose pointed out to me recently - almost no participants try to create steps by breaking down the problem itself.
For example:
Step 1:
def test_only_letters
  assert_equal ["A"], diamond_letters('A')
  assert_equal ["A","B","A"], diamond_letters('B')
  assert_equal ["A","B","C","B","A"], diamond_letters('C')
end
Step 2:
def test_plus_cardinality
  assert_equal ["A"], diamond_cardindality('A')
  assert_equal ["A","BB","A"], diamond_cardindality('B')
  assert_equal ["A","BB","CC","BB","A"], diamond_cardindality('C')
end
Step 3:
def test_plus_leading_space
  assert_equal ["A"], diamond_leading_space('A')
  assert_equal [" A",
                "BB",
                " A"], diamond_leading_space('B')
  assert_equal ["  A",
                " BB",
                "CC",
                " BB",
                "  A"], diamond_leading_space('C')
end
Step 4:
def test_plus_mid_space
  assert_equal ["A"], diamond('A')
  assert_equal [" A",
                "B B",
                " A"], diamond('B')
  assert_equal ["  A",
                " B B",
                "C   C",
                " B B",
                "  A"], diamond('C')
end
It's amazing how a tiny exercise like Print Diamond can so effectively mirror project-scale Waterfall-style development!
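
If you're wondering whether the step-4 tests can actually pass, here is a minimal Ruby sketch of a diamond function. It's my own illustration, not taken from any participant's session, and it matches the step-4 expectations above (which have no trailing spaces):

def diamond(widest)
  letters = ('A'..widest).to_a
  size = letters.size
  top = letters.each_with_index.map do |letter, i|
    if i == 0
      letter.rjust(size)                # the 'A' row: leading spaces only
    else
      outer = ' ' * (size - 1 - i)      # step 3: leading space
      inner = ' ' * (2 * i - 1)         # step 4: mid space
      outer + letter + inner + letter   # step 2: the letter appears twice
    end
  end
  top + top[0..-2].reverse              # mirror the top half to complete the diamond
end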

6 * 9 == 42

The starting files for all languages in cyber-dojo follow the same format:
  • A test file that asserts answer() == 42
  • A file with answer() defined to return 6 * 9
For example, in C, answer() looks like this...
int answer() { return 6 * 9; }
Thus the initial files give you a red traffic-light (indicating a failing test) since 6*9 == 54.
In a cyber-dojo the other day one of the developers (hi Ian) rewrote answer() like this:
int answer() { return SIX * NINE; }
which he made pass like this:
#define SIX 1+5
#define NINE 8+1
Thanks to operator precedence, SIX * NINE expands to 1+5 * 8+1, which evaluates to 1 + 40 + 1 == 42. Excellent!

thank yous



overview of how cyber-dojo uses git

This blog entry has been commented out.
cyber-dojo no longer uses git in its storer service or its runner service.

pulling the latest cyber-dojo github repo

This page is for an old version of cyber-dojo.
Start from here.

overview of how cyber-dojo's language docker-containers work

When you set up your cyber-dojo, cyber-dojo will only offer entries whose languages/ subfolder's manifest.json file has an image_name entry that exists. For example, if cyber-dojo/languages/Java/JUnit/manifest.json contains this...
{ "image_name": "cyberdojofoundation/java_junit" "display_name": "Java, JUnit", ... }
then [Java, JUnit] will only be offered if the docker image cyberdojofoundation/java_junit exists on the server, as determined by running
$ docker images
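
For illustration only (this is not cyber-dojo's actual code), the check amounts to something like this in Ruby:

require 'json'

# read a language's manifest and see whether its image is installed locally
manifest = JSON.parse(File.read('languages/Java/JUnit/manifest.json'))
image_name = manifest['image_name']   # "cyberdojofoundation/java_junit"
installed = `docker images`.lines.map { |line| line.split.first }
offer_it = installed.include?(image_name)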


cyber-dojo will start a container from the docker image_name to execute an animal's cyber-dojo.sh file each time the animal presses the [test] button.

cyber-dojo language's manifest.json entries explained

This page is for an old version of cyber-dojo.
This is the page you're looking for.

interviewing and cyber-dojo

Why not ask potential interview candidates to do an exercise in cyber-dojo? Candidates simply email you their cyber-dojo URL (which contains their id) when they're finished. From this you can look not only at their final solution, but also at their tests, and how they got there. We know several companies already doing this. They report that it's best to be clear and up-front about what you want, and that it's not uncommon for candidates to work on a solution outside of cyber-dojo and then paste in their final code (and maybe the test code!). This gives you no clue as to how they got there. Ask them not to do this.

Another useful idea is to provide concrete feedback to the candidate and ask them to try again.
Then look at their second submission to see if/how they adopted the feedback.

If your team sometimes pair-programs, why not mirror the pairing in your recruitment process?
Instead of interviewing five candidates sequentially, see how they fare technically (and socially) in a 10-person cyber-dojo, pairing each candidate with a developer from the team they're hoping to join. Do several iterations, swapping partners each time. Note that if your team never pair-programs it is incongruent to pair in the interview.
Teamwork characteristics ... cannot be determined if you interview ... one at a time.
W. Edwards Deming


some stats on 15,000 practice sessions

As I write this (September 2014) the number of practice dojos on cyber-dojo.org is about 15,000.

The most popular day to run a cyber-dojo is Thursday.
  • 1375 Saturday
  • 1022 Sunday
  • 2488 Monday
  • 2588 Tuesday
  • 2575 Wednesday
  • 2660 Thursday
  • 2307 Friday

The most popular language-unit-test framework is Java-JUnit (but the figures probably depend a lot on which ones have been installed the longest).
  • 3125 Java JUnit
  • 2881 C# NUnit
  • 1087 Python unittest
  • 1071 C++ assert
  • 904 C assert
  • 736 PHP PHPUnit
  • 701 Javascript assert
  • 600 C++ GoogleTest
  • 525 Ruby TestUnit
  • 384 Python pytest
  • 357 Java Approval
  • 324 Java Mockito
  • 278 C++ CppUTest
  • 219 Ruby RSpec
  • 217 Java Cucumber
  • 205 Javascript jasmine
  • 157 Haskell hunit
  • 154 Clojure .test
  • 131 Go testing
  • 125 C++ CATCH
  • 120 Perl TestSimple
  • ...

The most popular exercise is Fizz Buzz (again, some have been installed longer than others).
  • 2091 Fizz Buzz
  • 1830 100 doors
  • 924 Verbal
  • 787 Calc Stats
  • 778 Leap Years
  • 733 Roman Numerals
  • 722 Game of Life
  • 713 Anagrams
  • 644 Print Diamond
  • 549 Tennis
  • 498 Prime Factors
  • 476 LCD Digits
  • 466 Yatzy
  • 405 Bowling Game
  • 355 Count Coins
  • 329 Number Names
  • 298 Harry Potter
  • 287 Mine Field
  • 286 Phone Numbers
  • 192 Poker Hands
  • ...


terms & conditions

  • cyber-dojo is free for non-commercial use.
  • Commercial use of the public server requires a license. For example, if you work for a profit-making organization, and you're using https://cyber-dojo.org at work, you need a license.
  • The cyber-dojo Foundation issues licenses.
  • cyber-dojo is provided in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.

please email us your feedback

What did you like?
What did you dislike?
What should I add?
What should I remove?
Please email me your feedback.

how is cyber-dojo implemented?



what is cyber-dojo?

cyber-dojo at Trondheim XP meetup
  • a dojo is a place where martial artists practice martial arts.
  • cyber-dojo is where programmers practice programming!
  • cyber-dojo is not an individual development environment.
  • cyber-dojo is a shared learning environment.
  • in a cyber-dojo you focus on improving rather than finishing.
  • in a cyber-dojo you practice by going slower.

cyber-dojo tips

  • Repeat your practice. Repetition frees up mental capacity creating space for improvement.
    Don't be too concerned about finishing; think about improving.
  • cyber-dojo is designed to encourage team practice and works well with two (or more) people at each computer, periodically rotating the current keyboard drivers to different computers, where they become navigators.
  • After each practice use the dashboard to start a review. Look for evidence of...
  • Practice refactoring.
  • Allow at most N amber traffic-lights (total or in-a-row)...
    • per animal
    • per cyber-dojo
    • repeat with reduced value of N
  • Turn on traffic-light colour prediction. Allow at most N incorrect predictions...
    • per animal
    • per cyber-dojo
    • repeat with reduced value of N
  • Set social challenges...
    • Change keyboard drivers before starting each new practice
    • Change keyboard drivers during a practice
    • Change pairs before starting each new practice
    • Change pairs during a practice
  • Set technical challenges...
    • no loops
    • no conditionals
    • no division or modulus
    • immutable data structures only
    • no mouse
    • no data structures
    • maximum 5 lines per method
    • maximum 2 levels of indentation
    • the possibilities are endless!
  • When everyone is at green set a challenge to either...
    • find some code you can delete and the tests still all pass
    • find a bug, and write a failing test for it
  • Create a custom starting point...
    • with all the tests, but none of the code
    • with all the code, but none of the tests
    • with a specific fault - which pair can fix it first?
  • Play the average-time-to-green-game.