The off ramp: tests, trials, and the myth of meritocracy
A series outlining my concerns with the “two-lane” approach to assessment security. Here, I explain the ways in which it exacerbates the myth of meritocracy and deepens inequity in education.
A refresher: what is the two-lane approach?
Australian universities are rolling out tough new assessment security approaches in response to the so-called “Age of AI”, which, we are told, has unleashed a tidal wave of cheating that is rendering university degrees worthless as currency on the job market.
For a brief refresher on the two-lane approach, see the University of Sydney’s explainer. In even briefer terms:
Lane 1 assessments are “secure”, meaning the conditions for student work are tightly controlled and supervised. Generative AI can be banned.
Lane 2 assessments are “open”. This means the conditions for completing them are flexible and unrestricted, and generative AI cannot be banned.
In this article I want to focus on what we’re actually trying to “secure” in the first instance, and what we risk by labelling the rest “open”.
On meritocracy, and why we’re not in one
Theoretically, education is central to a meritocratic society: a system in which people are rewarded based on their talents, and it is those talents that earn them success. In a meritocratic system, cheating is bad because it means untalented people are trying to get away with being successful.
In the context of a meritocracy, cheating on a test looks a bit like…
Of course, anyone who knows anything about education understands that meritocracy is a myth. Yes, students are assessed and graded on their test results, but in order to complete that test, they are also subjected to a battery of what I’ll call “trials”. These trials include wearing the correct uniform correctly. Staying focused in class. Getting to class at all. Getting on with the teacher, and with other students. Having time and support outside of school to finish their homework. Accessing resources to support their learning.
So, it might be more accurate to picture cheating like this…
To achieve success without cheating, students don’t just need to pass the test, but all of the trials as well. What helps? Well, resources like money, social capital, geographic location, health, and money (you know exactly why it’s there twice). These resources are, mostly unavoidably, unequally distributed. So the ability to pass these trials is not fair or equal.
Thus education does not support a pure meritocracy, but one plagued by existing biases and inequities. We try — hard — to balance these out, but true fairness will always elude us. That doesn’t mean we should stop trying.
Fairness in assessment design: impossible and important
Here’s the thing. In a meritocratic system, where people are rewarded according to their talents, inequality is positioned as justified. High ranks to the high achievers, low ranks to the low. But, as we know, the thing we call talent is really a messy assemblage of economic, cultural, social and other capitals.
The assessments we conduct are designed to measure students’ performance.1 We set tasks; they complete them. We expand or restrict the ways in which they can do this. We do this for a lot of reasons, validity being chief among them. Another vital reason is fairness.
So, let’s look at the ways in which assessment methods are expanded or restricted in the two-lane approach.
We can go “open” (unsupervised and therefore unrestricted) or “secure” (supervised and restricted). “Open” assessments are flexible, probably self-paced, permitting the free use of digital technologies. The flexibility of these assessments significantly reduces the trials a student has to grapple with in order to engage with the test. Flexible time, flexible location, flexible expectations, flexible formats, and of course a clear brief for the task, provided well in advance.
Openness also enables students to draw on their full range of resources beyond the classroom, and those resources are of course unequally distributed too. Openness reduces barriers, including some that were placed deliberately to keep things fair. Navigating that tension isn’t easy. But we need to be extremely careful that “open” assessments are not positioned as a free-for-all.
However, it’s “secure” assessments that will earn students their degrees. These are the tasks that will be mapped at a whole-course or whole-program level. These are the tasks that will make up a minimum of 50% of each subject’s grade at the University of Melbourne.
Students — who are getting less and less patient with us — will therefore recognise “open” assessments for what they are: empty pageantry that deserves very little of their attention. Their attention will go where the stakes are: the test. The “secure” test.
It’s cheap to cheat in “open” assessments. But it’s expensive to cheat in “secure” ones.
In a “secure” assessment — remember, these are high-stakes, time-constrained, in-person, supervised assessments — we close off access to the material resources we believe may enable students to cheat: time, technology, privacy. But we can’t close off everything. As we know, wearable technologies are advancing, and so it is not impossible — not even unlikely — for today’s students to present for their exams wearing smart glasses, in-ear audio devices or some other AI-integrated widget.
And to be clear, this isn’t speculative: it happened two days ago.
And look, even the body is an unreliable thing. Unless we adopt Sam Altman’s eyeball scanners for IDing students at exam centres, it’s still possible to hire someone else to sit your exam in your place.
Yes, of course it’s possible to cheat at a “secure” assessment.
But it’s far, far more likely that vast numbers of students will not. Vast numbers of students will struggle intensely without mediating resources, like time and control of their performance conditions, and will do poorly at “secure” assessments.
As I’ve noted before, securing assessments likely means rolling back educational equity. It means taking away flexibility, imposing mental health burdens, and reducing means of engagement.
We may see an avalanche of accommodation requests from students who cannot manage the burdens imposed by securing assessment: rural students who struggle to reach physical campuses; working and parenting students whose commitments clash with the scheduled assessment times; disabled students for whom the conditions are prohibitive; anxiety-prone students for whom time pressure and invigilation pose mental health risks.
But worse, we may see a great many more students who need accommodations, but do not request them. They have a far higher risk of failing under “secure” assessment conditions — and it won’t be because we prevented them from cheating; it will be because we prevented them from performing.
Let me be unambiguous about this.
Without acknowledging the inequitable trials embedded in every aspect of assessment “security”, the adoption of a two-lane approach feeds directly into the myth of meritocratic education. It claims “assurance of learning”, when what is actually being assured is that every student must endure a battery of trials in order to even attempt a test of their learning.
And the students with the resources to bypass the trials are, logically and demographically, those who can still afford to cheat.
1. We assess students’ performance, not their learning. Our assessments don’t tell us when or where or how they learned what they’re performing; only whether they are performing it.