the cyber-dojo test view




  • click the test button to run your tests
  • the stdout/stderr/status files open and display the results
  • a new right-most traffic-light appears


Click any traffic-light to open the history view
Each traffic-light is coloured
  • red - the tests ran but one or more failed.
  • amber - the tests did not run, e.g. a syntax error.
  • green - the tests ran and all passed.
  • if the tests did not complete in 15 seconds - accidentally coded an infinite loop? Too many concurrent cyber-dojos? A lost network connection?
The test view also shows
  • the total number of traffic-lights (in the most recent traffic-light's colour).
  • a pie-chart indicating the number of red, amber and green traffic-lights so far.
  • your animal, if you are in a group practice session. Click it to open the dashboard view.

  • click to create a new file
  • click to rename the current file
  • click to delete the current file
  • click a filename to open it in the editor
  • the stdout/stderr/status files are read-only

shortcuts

  • Alt-T runs the tests
  • Alt-O toggles to and from the stdout/stderr/status files
  • Alt-J cycles forwards through the 'top' files (above the stdout/stderr/status files)
  • Alt-K cycles backwards through the 'top' files (above the stdout/stderr/status files)


search and replace

Start searching == Ctrl-F / Cmd-F
Find next == Ctrl-G / Cmd-G
Find previous == Shift-Ctrl-G / Shift-Cmd-G
Replace == Shift-Ctrl-F / Cmd-Option-F
Replace all == Shift-Ctrl-R / Shift-Cmd-Option-F
Jump to line == Alt-G


evidence for pairing effectiveness

When I run a cyber-dojo I always ask the participants to work in pairs, two people per computer. I've been doing some research on pairing and I've come across a book called Visible Learning by John Hattie. It's a synthesis of over 800 experiments and papers relating to achievement in schools. On page 225 there is a section called The use of computers is more effective when peer learning is optimized. It reads, and I quote...
  • Lou, Abrami, and d'Apollonia (2001) reported higher effects for pairs than individuals or more than two in a group.
  • Liao (2007) also found greater effects for small groups (d=0.96) than individuals (d=0.56) or larger groups (d=0.39).
  • Gordon (1991) found effects were larger for learning in pairs (d=0.54) compared to alone (d=0.25).
  • Kuchler (1998) reported d=0.69 for pairs and d=0.29 for individuals.
  • Lou, Abrami, and d'Apollonia (2001) reported that students learning in pairs had a higher frequency of positive peer interactions (d=0.33), higher frequency of using appropriate learning or task strategies (d=0.50), persevered more on tasks (d=0.48), and more students succeeded (d=0.28) than those learning individually when using computers.
What do the numbers mean? Quoting from the front of the book...
An effect size of d=1.0 indicates an increase of one standard deviation on the outcome - in this case the outcome is improving school achievement. A one standard deviation increase is typically associated with advancing children's achievement by two to three years, improving the rate of learning by 50%, or a correlation between some variable and achievement of approximately r=0.50. When implementing a new program, an effect size of 1.0 would mean that, on average, students receiving the treatment would exceed 84% of the students not receiving that treatment.
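As an aside, the 84% comes straight from the normal distribution: it is the proportion of a population lying below one standard deviation above the mean. A quick Ruby sanity check (my snippet, not the book's):

# cumulative standard normal distribution at z = 1.0
phi = 0.5 * (1 + Math.erf(1.0 / Math.sqrt(2)))
puts phi   # => 0.8413... i.e. roughly 84%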
Of course, things are rarely absolutely black and white, but these are impressive numbers.

  • Lou, Y., Abrami, P.C., & d'Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71(3), 449-521.
  • Liao, Y.K.C. (2007). Effects of computer-assisted instruction on students' achievement in Taiwan: A meta-analysis. Computers and Education, 48(2), 216-233.
  • Gordon, M.B. (1991). A quantitative analysis of the relationship between computer graphics and mathematics achievement and problem-solving. Unpublished Ed.D., University of Cincinnati, OH.
  • Kuchler, J.M. (1998). The effectiveness of using computers to teach secondary school (grades 6-12) mathematics: A meta-analysis. Unpublished Ed.D., University of Massachusetts Lowell, MA.

tips for facilitating a cyber-dojo

When I'm facilitating a cyber-dojo with a new group here's how I typically start:
  1. I suggest that developers' habits and thinking are strongly influenced by their development environments. If you use Eclipse to develop software then when you use Eclipse your default mentality is one of development. Not practising. Since we're practising, we deliberately don't use a development environment.
  2. I point out that cyber-dojo is not a personal development environment, it's a shared practice environment. In a development environment it makes sense to have tools such as colour syntax highlighting and code-completion to help you go faster so you can ship sooner. In a practice environment it doesn't. When you're practising you don't want to go faster, since you're not shipping anything. You want to go slower. You want your practice to be more deliberate.
  3. I observe that since it is so different to a development environment, participants may feel some slight discomfort when first using cyber-dojo. This discomfort is also deliberate! Discomfort can bring learning opportunities.
  4. I do a short demo explaining...
    • the files on the left side
    • the initial source files bear no relation to the exercise
    • the test button
    • the output file
    • the meaning of the red, amber, green traffic lights
  5. I ask the participants to enter their dojo in pairs. Pairing is an important part of the learning. Occasionally a few choose not to pair (and that's fine) but most do.

the cyber-dojo history view

The history view shows the code for any traffic-light for any animal.
For example, this is the kangaroo's 96th traffic light.





Click on any traffic-light to navigate directly to it.
The current traffic-light is marked with an underbar.

  • a button to move to the previous traffic-light.
  • the current traffic-light number (in its colour).
  • a button to move to the next traffic-light.


  • the current filename (click it to auto-scroll its next diff-chunk into view)
  • the number of lines deleted (click to toggle showing/hiding them)
  • the number of lines added (click to toggle showing/hiding them)
  • the file's diff
    deleted lines are in red
    added lines are in green

  • downloads the currently displayed traffic-light's files, together with a manifest.json file, ready to use as a custom starting point.
  • forks a brand new cyber-dojo with its own id. The new cyber-dojo's starting files are copied from the currently displayed traffic-light.
  • reverts the current files to the files in the currently displayed traffic-light.




the cyber-dojo dashboard view



Each horizontal row corresponds to one animal and displays, from left to right:
  • its oldest-to-newest traffic-lights. Click any traffic-light to open a history view showing the diff for that traffic-light for that animal.
  • the total number of traffic-lights (in the most recent traffic-light's colour). Click to open the history view in non-diff mode showing the animal's current code.
  • a pie-chart indicating the total number of red, amber and green traffic-lights so far.
  • the animal. Click to open the history view in non-diff mode showing the animal's current code.

  • when checked, the dashboard auto-refreshes every ten seconds.
  • turn auto-refresh on during the coding.
  • turn auto-refresh off during the review.

  • when unchecked, the traffic-lights of different animals are not vertically time-aligned.
  • when checked, each vertical column corresponds to one minute and contains all the traffic-lights created by all the animals in that minute.
  • if no animal presses its test button during a given minute, that column contains no traffic-lights at all (instead it contains a single dot and is very thin).


If available, displays slightly more information about the most recent non-amber traffic-light of each animal, usually the number of passing and failing tests.
Downloads a .tar.gz file of all the traffic-lights (intended for server administrators).



cyber-dojo traffic lights


The result of pressing the test button is displayed in the stdout/stderr/status 'files' and also as a new traffic-light.
Each traffic-light is coloured
  • red if the tests ran but one or more failed.
  • amber if the tests did not run, e.g. a syntax error.
  • green if the tests ran and all passed.
  • if the tests did not complete in 15 seconds
    perhaps you've accidentally coded an infinite loop?
    maybe the server is overloaded with too many concurrent cyber-dojos?
    have you lost your network connection?

Click any traffic-light to open the history view showing
  • diffs for any traffic-light for any animal
  • a button to revert to any traffic light
  • a button to fork a new cyber-dojo session from any traffic light


adding a new exercise

This page is out of date.
This is the page you are looking for.

cyber-dojo now runs JavaScript mocha+chai+sinon

Many thanks to Steve Coffman for adding this.

cyber-dojo now runs C# Specflow

Many thanks to Seb Rose who has added C# Specflow to cyber-dojo. Seb has written a blog entry, using specflow on mono from the command line, detailing the steps involved.

breaking down the problem

One of my favourite programming problems on cyber-dojo is Print Diamond:
Given a letter print a diamond starting with 'A'
with the supplied letter at the widest point.

For example: print-diamond 'E' prints

    A
   B B
  C   C
 D     D
E       E
 D     D
  C   C
   B B
    A

For example: print-diamond 'C' prints

  A
 B B
C   C
 B B
  A
A lot of participants are surprised by how tricky this simple-looking exercise is. It's a good exercise for exploring ways of working step by step. How would you break it down? I urge you to try the exercise now. On cyber-dojo naturally. Then come back here and read on.


How did you do it? Was your first test something like this (Ruby)?
def test_diamond_B
  assert_equal [" A ",
                "B B",
                " A "], diamond('B')
end
Maybe then a bit of slime:
def diamond(widest)
  [" A ",
   "B B",
   " A "
  ]
end
What then? Perhaps observe that the slime is not using the widest parameter, so write another test for diamond('C'). What then? Slime that too? Then what?
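For what it's worth, here's a sketch (mine, not a transcript of an actual session) of where that road tends to lead - the slime simply grows another hard-coded branch per letter:

def diamond(widest)
  case widest
  when 'A' then ["A"]
  when 'B' then [" A ",
                 "B B",
                 " A "]
  when 'C' then ["  A  ",
                 " B B ",
                 "C   C",
                 " B B ",
                 "  A  "]
  end
end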

What I find really interesting is something my friend Seb Rose pointed out to me recently - almost no participants try to create steps by breaking down the problem itself.
For example:
Step 1:
def test_only_letters
  assert_equal ["A"], diamond_letters('A')
  assert_equal ["A","B","A"], diamond_letters('B')
  assert_equal ["A","B","C","B","A"], diamond_letters('C')
end
Step 2:
def test_plus_cardinality
  assert_equal ["A"], diamond_cardinality('A')
  assert_equal ["A","BB","A"], diamond_cardinality('B')
  assert_equal ["A","BB","CC","BB","A"], diamond_cardinality('C')
end
Step 3:
def test_plus_leading_space
  assert_equal ["A"], diamond_leading_space('A')
  assert_equal [" A",
                "BB",
                " A"], diamond_leading_space('B')
  assert_equal ["  A",
                " BB",
                "CC",
                " BB",
                "  A"], diamond_leading_space('C')
end
Step 4:
def test_plus_mid_space
  assert_equal ["A"], diamond('A')
  assert_equal [" A",
                "B B",
                " A"], diamond('B')
  assert_equal ["  A",
                " B B",
                "C   C",
                " B B",
                "  A"], diamond('C')
end
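Here's one way (a sketch of my own, not taken from a session) the four steps could drive the implementation, each function building on the previous one:

def diamond_letters(widest)
  up = ('A'..widest).to_a
  up + up[0..-2].reverse              # A..widest then back down to A
end

def diamond_cardinality(widest)
  # every letter except A appears twice in its row
  diamond_letters(widest).map { |c| c == 'A' ? c : c * 2 }
end

def diamond_leading_space(widest)
  width = widest.ord - 'A'.ord
  diamond_cardinality(widest).map do |row|
    ' ' * (width - (row[0].ord - 'A'.ord)) + row
  end
end

def diamond(widest)
  diamond_leading_space(widest).map do |row|
    letter = row.strip[0]
    next row if letter == 'A'
    inner = 2 * (letter.ord - 'A'.ord) - 1
    row[0..-2] + ' ' * inner + letter  # push the pair apart with mid space
  end
end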
It's amazing how a tiny exercise like Print Diamond can so effectively mirror project-scale, Waterfall-style development!

6 * 9 == 42

The starting files for all languages in cyber-dojo take the same format:
  • A test file that asserts answer() == 42
  • A file with answer() defined to return 6 * 9
For example, in C, answer() looks like this...
int answer() { return 6 * 9; }
Thus the initial files give you a red traffic-light (indicating a failing test) since 6*9 == 54.
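In Ruby the same pattern might look something like this (a sketch - the exact filenames and test framework cyber-dojo ships will vary):

# hiker.rb (filename is illustrative)
def answer
  6 * 9
end

# test_hiker.rb (filename is illustrative)
require 'minitest/autorun'
require_relative 'hiker'

class TestHiker < Minitest::Test
  def test_the_answer
    assert_equal 42, answer
  end
end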
In a cyber-dojo the other day one of the developers (hi Ian) rewrote answer() like this:
int answer() { return SIX * NINE; }
which he made pass like this:
#define SIX 1+5
#define NINE 8+1
(Macro expansion turns SIX * NINE into 1+5 * 8+1, and operator precedence makes that 1 + 40 + 1, which is 42.) Excellent!