We have just held a competition that was judged by those attending the session, the so-called “community”. We resorted to community judging because our allocated judge did not feel comfortable using the Zoom platform, and with the various COVID restrictions we had not yet reached the point where we could hold the judging in the hall.
The reason
In fact this is not the first time that we have been caught short at the last minute without a judge. On one previous occasion we had suggested community judging when we were without a judge, but being ill-prepared we had to resort to one of our members withdrawing their own entries and judging at the last minute. This went well, apart from Mark having to withdraw his and Jenny’s images. Since then we have been toying with the idea of having a ready-made community option that can be used at a moment’s notice. This last competition offered us another opportunity, but with a bit more time to prepare.
The aims
My objective was that we should be able to emulate what a judge offers in a standard competition. We needed a score for each image, and the score needed to be seen as valid: members required confidence that it was legitimate and could be counted in the cumulative score for the year. We also wanted constructive comments about the images to help entrants understand how their mark was derived.
I also had a set of preferences in scoring. There needs to be a spread of marks; however, scores of 5 or less are demoralising and we don’t want to do that to our members. Generally judging goes well if the range of scores is from 6 to 10. I would like to see a single score of 10 in every category of the competition. Too many 10s is not a good thing, and takes away from the honour of getting the highest score. The high scores (8, 9 and 10) should make up only about 40-50% of the total scores allocated.
I think that these score ranges are what we have come to expect from a well-balanced judge.
To be avoided
I did not want the method to be difficult for people to comprehend. We wanted to avoid too many rules and needed simplicity in the execution: it should be more like draughts than chess. We needed to avoid time-consuming collation of results. We wanted to avoid bias. We did not want people grandstanding or promoting their own images. We needed people to be thoughtful and to consider the merits of other people’s work.
Our method
I spent some time discussing the method with Helen Whitford and Duart McLean and this is what we came up with.
Voting – The simplest format I could think of was to select your favourite image. Unfortunately this would give only a small spread of votes over a few images. However, if we asked people to select their favourite three images, then with enough participants we should get a broad spread of votes over most of the images. This was also not too taxing on each participant (nothing like giving a score for every image). We made one stipulation: “Don’t vote for your own images”.
This method would be easy to collate by hand; however, we chose to use a Google Form to collect the votes electronically and collate them for us. The form can be accessed from a link and opened in a separate browser window, or alternatively on a separate device such as a smartphone or tablet.
Histogram – Once the votes were collected we could generate a histogram, indicating how many votes each image had received.
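For anyone who wants to automate the collation, a few lines of Python could build the same histogram from the form responses. This is just a minimal sketch, not what we ran on the night: the function name and the sample entries are purely illustrative, and it assumes the responses have been exported as each voter’s list of three choices.

```python
from collections import Counter

def tally_votes(responses, all_images):
    """Build the vote histogram: how many of the three-choice votes
    each image received. Images nobody picked still appear with 0."""
    counts = Counter({image: 0 for image in all_images})
    for choices in responses:
        counts.update(choices)   # add one vote per chosen image
    return counts

# Illustrative example only: three voters, five images in a category.
images = ["Image 1", "Image 2", "Image 3", "Image 4", "Image 5"]
responses = [
    ["Image 1", "Image 3", "Image 4"],
    ["Image 1", "Image 2", "Image 3"],
    ["Image 1", "Image 3", "Image 2"],
]
for image, votes in tally_votes(responses, images).most_common():
    print(f"{image}: {votes} vote(s)")
```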
Allocation of scores – I suggested we tackle the histogram with a quota system. The top-scoring image would receive a 10, the next two a 9 and the next four an 8. All images with no votes would receive a 6, and the remaining images a 7. In a category of 16 images, for example, that gives seven high scores, about 44% of the total, in line with the 40-50% we were aiming for.
Of course there would sometimes be difficulties when two images received the same number of votes, and so we would adjust the quota to make a best fit. Likewise when there were fewer images we had to reduce the quota: in the four-image category our quota was one 10, one 9 and one 8. I elected to put the histogram up on the screen and discuss it with the group so that we could reach a consensus.
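We did the allocation by eye on the screen, but as a sketch of how the quota maps onto the histogram mechanically, something like the following would reproduce it (Python again, and the function name is my own invention). Note that a tie at a quota boundary is broken arbitrarily by the sort here, which is exactly where the group discussion and quota adjustment came in on the night.

```python
def allocate_scores(counts):
    """Apply the quota to a vote histogram: top image 10, next two 9,
    next four 8, zero-vote images 6, everything else 7.
    Ties at a boundary are broken arbitrarily by the sort order;
    in practice we adjusted the quota by consensus instead."""
    ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)
    scores = {}
    for position, (image, votes) in enumerate(ranked):
        if votes == 0:
            scores[image] = 6      # no votes at all
        elif position == 0:
            scores[image] = 10     # the clear favourite
        elif position <= 2:
            scores[image] = 9      # next two images
        elif position <= 6:
            scores[image] = 8      # next four images
        else:
            scores[image] = 7      # the rest of the field
    return scores

# Continuing the illustrative tally above:
# allocate_scores(tally_votes(responses, images))
# -> {"Image 1": 10, "Image 3": 9, "Image 2": 9, "Image 4": 8, "Image 5": 6}
```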
Comments – After voting we read out the scores (without names) and then asked people to give positive or constructive comments. We did this for each image, one slide at a time. We encouraged people to say why they had selected a certain image. We asked that people did not discuss their own image (unless asked to), and that they refrained from negative comments (unless they formed part of a constructive comment).
How it went
On the night everyone was able to access both the Zoom screen and the Google Form. Some opened the form in a separate window; some used their phones or tablets. We had 11 to 12 people voting on 27 images in four categories, with the number of images per category ranging from 4 to 16. We asked people to select only two images in the smallest category. We ran through the images twice before voting and then put up a page of icons to help people select their three choices.
We found the name field at the top of the form useful in identifying who had not yet submitted a vote. We also used it to ensure people did not vote twice.
Interestingly, there was a clear favourite (scoring 10) in each category. There was a good spread of votes, with only a few images receiving no votes (a score of 6). The allocation of scores was relatively straightforward and we achieved a consensus for each of the four groups. People felt that the scores were valid and equivalent to having a formal judge, and the comments were courteous and on the whole helpful.
Conclusion
Overall, participants were pleased with the judging, and I suggest that we use this format in future should we be called upon to do community judging again. It might be worth sharing this model with other clubs, should they be interested in using our method or developing their own community judging.