If using an algorithm to adjust marks is unfair, as it has been deemed to be this year, then surely this practice must cease going forward.
The last few weeks have been filled with issues surrounding exam results. One of these was how the A-Level results were adjusted from centre assessed grades using a statistical algorithm. This was deemed to be unfair because it penalised some students, or groups of students, more than others. The lack of equity was plainly visible because schools could compare their centre assessed grades with the grades finally awarded. It was therefore evident how the statistical adjustment, carried out in the interests of keeping results generally in line with previous years' results, affected individual students. The faces and lives of individual students could be attached to the grade adjustments. This was deemed unacceptable.
My worry here is that this kind of statistical adjustment has always gone on. Normally students would sit exams and their resulting scores would undergo adjustment in the form of changes to the grade boundaries. Again, this was done in the interests of keeping results generally in line with previous years' results, and again some groups of students would likely be penalised more than others. The grade boundaries changed because the exam was deemed generally easier or harder. The focus on the difficulty of the exam meant that we seldom associated the resulting grade changes with individual students; we don't generally attach faces to this change. Yet some students would have received lower grades than they would have had the adjustment not been carried out, just as happened this year. This seemed acceptable, and has been the way things have been done for decades, but I don't see how it is any fairer than what happened this year.
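To make the mechanism concrete, here is a minimal sketch of how moving grade boundaries remaps the same raw scores to different grades. The boundary values and scores are entirely invented for illustration; they are not the actual boundaries used by any exam board.

```python
def grade(score, boundaries):
    """Return the best grade whose minimum mark the score meets.

    `boundaries` maps grade -> minimum raw mark, listed best grade first.
    """
    for g, minimum in boundaries.items():
        if score >= minimum:
            return g
    return "U"  # unclassified

# Boundaries in a "normal" year (invented numbers).
baseline = {"A": 80, "B": 70, "C": 60, "D": 50}

# The paper is judged easier, so boundaries are raised to keep the
# overall grade distribution roughly in line with previous years.
adjusted = {"A": 84, "B": 74, "C": 64, "D": 54}

for score in [85, 82, 72, 62]:
    print(score, grade(score, baseline), "->", grade(score, adjusted))
```

A student on 82 marks gets an A under the baseline boundaries but a B under the raised ones; the adjustment is just as real as this year's, only hidden behind an anonymous boundary change rather than attached to a named algorithm.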
Maybe, following this year's issues, we need to take another look at how we assess and measure students' learning and achievement, including the associated processes.