Pages

the test page




  • click it to run your tests
  • the stdout or stderr file opens and displays the results
  • a new right-most traffic-light appears

Click any traffic-light to open the history view.
  • red - the tests ran but one or more failed.
  • amber - the tests did not run, eg syntax error.
  • green - the tests ran and all passed.
  • timed-out - the tests did not complete in 15 seconds. Accidentally coded an infinite loop? Too many concurrent cyber-dojos? Lost your network connection?
Also shown:
  • the total number of traffic-lights (in the most recent traffic-light's colour).
  • your animal, if you are in a group practice.

keyboard shortcuts

  • Alt-O toggles through the editor tabs
  • Alt-J cycles downwards through the filenames
  • Alt-K cycles upwards through the filenames
  • Alt-T runs the tests

Here is a full list of shortcuts. Some useful ones for search and replace are:

  • Start searching: Ctrl-F / Cmd-F
  • Find next: Ctrl-G / Cmd-G
  • Find previous: Shift-Ctrl-G / Shift-Cmd-G
  • Replace: Shift-Ctrl-F / Cmd-Option-F
  • Replace all: Shift-Ctrl-R / Shift-Cmd-Option-F
  • Jump to line: Alt-G


evidence for pairing effectiveness

When I run a cyber-dojo I always ask the participants to work in pairs, two people per computer. I've been doing some research on pairing and I've come across a book called Visible Learning by John Hattie. It's a synthesis of over 800 experiments and papers relating to achievement in schools. On page 225 there is a section called The use of computers is more effective when peer learning is optimized. It reads, and I quote...
  • Lou, Abrami, and d'Apollonia (2001) reported higher effects for pairs than individuals or more than two in a group.
  • Liao (2007) also found greater effects for small groups (d=0.96) than individuals (d=0.56) or larger groups (d=0.39).
  • Gordon (1991) found effects were larger for learning in pairs (d=0.54) compared to alone (d=0.25).
  • Kuchler (1998) reported d=0.69 for pairs and d=0.29 for individuals.
  • Lou, Abrami, and d'Apollonia (2001) reported that students learning in pairs had a higher frequency of positive peer interactions (d=0.33), higher frequency of using appropriate learning or task strategies (d=0.50), persevered more on tasks (d=0.48), and more students succeeded (d=0.28) than those learning individually when using computers.
What do the numbers mean? Quoting from the front of the book...
An effect size of d=1.0 indicates an increase of one standard deviation on the outcome - in this case the outcome is improving school achievement. A one standard deviation increase is typically associated with advancing children's achievement by two to three years, improving the rate of learning by 50%, or a correlation between some variable and achievement of approximately r=0.50. When implementing a new program, an effect size of 1.0 would mean that, on average, students receiving the treatment would exceed 84% of the students not receiving that treatment.
Of course, things are rarely absolutely black and white, but these are impressive numbers.
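If you want to check that 84% figure yourself: assuming normally distributed outcomes, it is simply the standard normal cumulative distribution evaluated at d. A quick Ruby illustration (mine, not the book's):

# Proportion of the comparison group that an average treated student would
# exceed, assuming normally distributed outcomes: Phi(d), the standard normal CDF.
def proportion_exceeded(d)
  0.5 * (1 + Math.erf(d / Math.sqrt(2)))
end

puts proportion_exceeded(1.0)   # ~0.84, ie 84%
puts proportion_exceeded(0.5)   # ~0.69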

  • Lou, Y., Abrami, P.C., & d'Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71(3), 449-521.
  • Liao, Y.K.C. (2007). Effects of computer-assisted instruction on students' achievement in Taiwan: A meta-analysis. Computers and Education, 48(2), 216-233.
  • Gordon, M.B. (1991). A quantitative analysis of the relationship between computer graphics and mathematics achievement and problem-solving. Unpublished Ed.D., University of Cincinnati, OH.
  • Kuchler, J.M. (1998). The effectiveness of using computers to teach secondary school (grades 6-12) mathematics: A meta-analysis. Unpublished Ed.D., University of Massachusetts Lowell, MA.

facilitating a cyber-dojo tips

When I'm facilitating a cyber-dojo with a new group here's how I typically start:
  1. I suggest that developers' habits and thinking are strongly influenced by their development environment. If you use Eclipse to develop software, then when you open Eclipse your default mentality is one of development, not practising. Since we're practising, we deliberately don't use a development environment.
  2. I point out that cyber-dojo is not a personal development environment, it's a shared practice environment. In a development environment it makes sense to have tools such as colour syntax highlighting and code-completion to help you go faster so you can ship sooner. In a practice environment it doesn't. When you're practising you don't want to go faster, since you're not shipping anything. You want to go slower. You want your practice to be more deliberate.
  3. I observe that since it is so different to a development environment, participants may feel some slight discomfort when first using cyber-dojo. This discomfort is also deliberate! Discomfort can bring learning opportunities.
  4. I do a short demo explaining...
    • the files on the left side
    • the initial source files bear no relation to the exercise
    • the test button
    • the output file
    • the meaning of the red, amber, green traffic lights
  5. I ask the participants to enter their dojo in pairs. Pairing is an important part of the learning. Occasionally a few choose not to pair (and that's fine) but most do.

the history page

Hovering over a traffic-light shows its diff summary in a tool-tip.
Clicking on a traffic-light opens the history-view.
For example, this is the history-view for heron's 32nd traffic light.


avatar navigator


Moves you to different animals. Only visible in a group exercise.
  • the left-arrow moves to the previous animal.
  • the right-arrow moves to the next animal.
  • when the diff checkbox is checked, moving to another animal moves to their first traffic-light.
  • when the diff checkbox is unchecked, moving to another animal moves to their last traffic-light.


traffic-light navigator


Moves you forward and backward through the traffic-lights.
  • the smaller left-arrow moves to the first traffic-light.
  • the larger left-arrow moves to the previous traffic-light.
  • the number of the current traffic-light, shown in its colour (eg 32 in green).
  • the larger right-arrow moves to the next traffic-light.
  • the smaller right-arrow moves to the last traffic-light.


traffic lights


The scrollable traffic-light sequence.
  • hover over each traffic-light to show its diff summary in a tool-tip.
  • click on any traffic-light to navigate directly to it.
  • the current traffic-light is marked with an underbar.


file name


The currently selected filename.
  • the number of lines deleted, in red (click to toggle hiding/showing them).
  • the number of lines added, in green (click to toggle hiding/showing them).
  • the filename (click it to auto-scroll its next diff-chunk into view).


file content


The currently selected file.
  • deleted lines are shown in light red, with a - next to the line-number.
  • added lines are shown in light green, with a + next to the line-number.


forking


The fork button.
  • creates a brand new exercise, with its own 6-character id.
  • the new exercise's starting files will be copied from the currently displayed traffic light.
  • a dialog box will ask whether you want an individual exercise or a group exercise.


checking out


The checkout button.
  • checks out (as in git checkout) the files in the currently displayed traffic light, and submits them for test.
  • not available from a dashboard review.




the dashboard page



Each row corresponds to one avatar and displays, from left to right:
  • the avatar. Click it to open a history page in non-diff mode showing their current code.
  • a pie-chart indicating the total number of red, amber, and green traffic-lights so far.
  • the total number of traffic-lights (in the most recent traffic-light's colour). Click it to open the history view in non-diff mode showing the animal's current code.
  • the oldest-to-newest traffic-lights. Click on any traffic-light to open a history view showing the diff for that traffic-light for that animal.

The auto-refresh checkbox:
  • when checked the dashboard auto-refreshes every ten seconds.
  • turn auto-refresh on during the coding.
  • turn auto-refresh off during the review.

The time-aligned columns checkbox:
  • when unchecked the traffic-lights of different animals are not vertically time-aligned.
  • when checked each vertical column corresponds to one minute and contains all the traffic-lights created by all the animals in that minute.
  • if no animal presses their test button during a given minute the column contains no traffic-lights at all (instead it is very thin and contains a single dot).


If available, slightly more information about each animal's most recent non-amber traffic-light is also displayed, usually the number of passing and failing tests.



cyber-dojo traffic lights


Press the test button and stdout+stderr+status are displayed in the output tab, and you get a new traffic-light.
Each traffic-light is coloured:
  • red if the tests ran but one or more failed.
  • amber if the tests did not run, eg syntax error.
  • green if the tests ran and all passed.
  • timed-out if the tests did not complete in ~10 seconds.

If test-prediction is enabled (click the cog/gear icon to open the settings dialog), traffic-lights look like this:
  • correct prediction (of green).
  • incorrect prediction (of red or amber).
  • auto-revert (back to green).

Click any traffic-light to open the history page showing:
  • diffs for any traffic-light's files, for any animal.
  • a button to checkout (git checkout) the files from any traffic light.
  • a button to fork a new exercise from any traffic light's files.


adding a new exercise

This page is out of date.
This is the page you are looking for.

cyber-dojo now runs C# Specflow

Many thanks to Seb Rose who has added C# Specflow to cyber-dojo. Seb has written a blog entry, "using specflow on mono from the command line", detailing the steps involved.

breaking down the problem

One of my favourite programming problems on cyber-dojo is Print Diamond:
Given a letter print a diamond starting with 'A'
with the supplied letter at the widest point.

For example: print-diamond 'E' prints

    A
   B B
  C   C
 D     D
E       E
 D     D
  C   C
   B B
    A

For example: print-diamond 'C' prints

  A
 B B
C   C
 B B
  A
A lot of participants are surprised at how tricky this simple-looking exercise is. It's a good exercise to explore ways of working step by step. How would you break it down? I urge you to try the exercise now. On cyber-dojo naturally. Then come back here and read on.


How did you do it? Was your first test something like this (Ruby)?
def test_diamond_B
  assert_equal [" A ",
                "B B",
                " A "], diamond('B')
end
Maybe then a bit of slime:
def diamond(widest)
  [" A ",
   "B B",
   " A "
  ]
end
What then? Perhaps observe that the slime is not using the widest parameter, so write another test for diamond('C'). What then? Slime that too? Then what?

What I find really interesting is something my friend Seb Rose pointed out to me recently - almost no participants try to create steps by breaking down the problem itself.
For example:
Step 1:
def test_only_letters
  assert_equal ["A"], diamond_letters('A')
  assert_equal ["A","B","A"], diamond_letters('B')
  assert_equal ["A","B","C","B","A"], diamond_letters('C')
end
Step 2:
def test_plus_cardinality
  assert_equal ["A"], diamond_cardindality('A')
  assert_equal ["A","BB","A"], diamond_cardindality('B')
  assert_equal ["A","BB","CC","BB","A"], diamond_cardindality('C')
end
Step 3:
def test_plus_leading_space
  assert_equal ["A"], diamond_leading_space('A')
  assert_equal [" A",
                "BB",
                " A"], diamond_leading_space('B')
  assert_equal ["  A",
                " BB",
                "CC",
                " BB",
                "  A"], diamond_leading_space('C')
end
Step 4:
def test_plus_mid_space
  assert_equal ["A"], diamond('A')
  assert_equal [" A",
                "B B",
                " A"], diamond('B')
  assert_equal ["  A",
                " B B",
                "C   C",
                " B B",
                "  A"], diamond('C')
end
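For completeness, here is a minimal sketch (my own, not part of the original exercise) of one way the four steps could compose, each function building on the previous one and using the helper names assumed in the tests above:

# Step 1: the mirrored letter sequence, eg ["A","B","C","B","A"] for 'C'
def diamond_letters(widest)
  up = ('A'..widest).to_a
  up + up[0..-2].reverse
end

# Step 2: every letter except 'A' appears twice
def diamond_cardinality(widest)
  diamond_letters(widest).map { |letter| letter == 'A' ? letter : letter * 2 }
end

# Step 3: indent each row so the widest letter sits flush left
def diamond_leading_space(widest)
  max = widest.ord - 'A'.ord
  diamond_cardinality(widest).map do |row|
    (' ' * (max - (row[0].ord - 'A'.ord))) + row
  end
end

# Step 4: open up the middle of each doubled letter
def diamond(widest)
  diamond_leading_space(widest).map do |row|
    letter = row.strip[0]
    next row if letter == 'A'
    gap = ' ' * (2 * (letter.ord - 'A'.ord) - 1)   # inner gap: 1 for 'B', 3 for 'C', ...
    row[0..-2] + gap + letter
  end
end

With these in place the four step-tests above should pass, and diamond('E') should reproduce the example at the top of the post.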
It's amazing how a tiny exercise like Print Diamond can so effectively mirror project scale Waterfall style development!

6 * 9 == 42

The starting files for all languages in cyber-dojo take the same format:
  • A test file that asserts answer() == 42
  • A file with answer() defined to return 6 * 9
For example, in C, answer() looks like this...
int answer() { return 6 * 9; }
Thus the initial files give you a red traffic-light (indicating a failing test) since 6*9 == 54.
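For comparison, here is a sketch of what the same two-file format might look like in Ruby (my own illustration using minitest; the actual cyber-dojo starting files may differ in file names and test framework):

require 'minitest/autorun'

# the answer file: deliberately wrong, since 6 * 9 == 54
def answer
  6 * 9
end

# the test file: asserts the famous 42, so the first run is a red traffic-light
class TestAnswer < Minitest::Test
  def test_answer
    assert_equal 42, answer
  end
end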
In a cyber-dojo the other day one of the developers (hi Ian) rewrote answer() like this:
int answer() { return SIX * NINE; }
which he made pass like this:
#define SIX 1+5
#define NINE 8+1
After macro expansion SIX * NINE becomes 1+5 * 8+1 which, thanks to operator precedence, is 1 + 40 + 1 == 42. Excellent!