Blue books are a bad idea
How universities want to solve their AI problem is misguided
OK. So you know the problem. AI allows students to complete assignments without doing the work or the learning. Modifying assignments to resist it is an arms race. Getting students to use AI and then critique it still lets them use AI for the critique. AI is all over assessment, and the signal assessment provides is being distorted. In the future, maybe students won’t learn at all.
Enter Clay Shirky. Clay Shirky was one of the best observers of the Internet during its social era. He wrote an excellent book, Here Comes Everybody, that shaped my thinking during those times. He is now a Vice-Provost at NYU and just wrote a New York Times piece on the impact of AI on education. My hopes were high. Surely, Clay would bring fresh thinking to the issue.
Here is what we got:
Learning is a change in long-term memory; that’s the biological correlate of what we do in the classroom. Now that most mental effort tied to writing is optional, we need new ways to require the work necessary for learning. That means moving away from take-home assignments and essays and toward in-class blue book essays, oral examinations, required office hours and other assessments that call on students to demonstrate knowledge in real time. The shift is already happening: The Wall Street Journal reported on booming sales of blue books last school year.
Yes, the solution to students using AI in assessment is to put assessment in a cocoon so that students can’t use AI at all. That is, old-style, probably handwritten, supervised exams. You know, like I had 40-odd years ago, when we didn’t have computers to help us type assignments neatly. When you crammed everything into memory for a couple of weeks, only to have it promptly fall out again. And that was before we realised that there were whole classes of students for whom the artificial pressure of all this meant they could not perform well in exams, and who dropped out of education altogether. We didn’t realise it because all of the professors supporting the existing system were its successful products. This is an example of what has been called the Shirky principle: “Institutions will try to preserve the problem for which they are the solution.” (The economist’s version is that organisations will promote the value of something scarce that they own.)
So now that he is in charge, it seems to me at least that Shirky has taken the Shirky principle to heart. We know how to do AI-free assessment, so we should do that. In so doing, we don’t get rid of the problem of AI and assessment; we preserve it and offer our solution.
Suffice it to say, I think this is a terrible direction to be heading.
Why assessment?
When faced with an attack on a thing — in this case, assessment — the first step is to remind ourselves why we assess students to begin with.
There are two broad answers. The first is that there are people, including students themselves, who would like to know if the students actually learned things. Who better to determine that than the person whose job it is to teach them? Actually, when I put it that way, that sounds like a terrible idea. You are making the defendant also the judge. Who in their right mind would set up a verification institution that way? But set it up, we have. And it is pretty clear, from the increasing barrage of tests that employers put students through these days, that the would-be recipients of that verification aren’t too trusting of it.
The second answer is that we need assessment to push students to learn. One way of viewing this is that students are basically unmotivated agents who will just laze about and not think about their future, which relies on them knowing stuff. But if you think about it, that isn’t necessarily a call for assessment but for bundling students into a monastic existence cut off from the outside world, where they have nothing better to do than learn. Shirky seems to understand that this is a bad idea: “Contrary to much popular opinion, college is not in the information transfer business; we are in the identity formation business.” We actually want students to make the choice to study in the face of competing activities. It is part of the learning process itself.
But we do have to reckon with the fact that even if students aren’t really going to laze away their time at university, the reason for assessment — tied to some consequence such as failing to continue, parental oversight or future job prospects — is that, fundamentally, we, the professors, want students to spend more time studying than they would otherwise choose to.
If that isn’t the case, why are we doing it? The students likely don’t want assessment. The professors, almost to a fault, do not want to provide assessment. But like parents forced to take their children to the park when everyone would rather stay home and watch TV, we do it anyway. The point of assessment is to induce students to study more than they would otherwise choose to do. Call it a conflict of interest or a paternalistic judgment, but the effect is the same. Without assessment, students will study only as much as they want to.
The impact of AI
Given this, the impact of an AI-driven assessment apocalypse is to move us to a world where students study as much as they want to. As a professor in a business school with adult students who are paying their own way to be here, I have to ask: why am I trying to push them into studying more than they want? They may want the verifiable signal. But in that case, why am I the one providing it? Surely it should be some independent process, and those running it can spend their time worrying about AI. We have made assessment our own problem when, stepping back, we are the wrong people to be dealing with it on multiple levels.
Which brings me to the other opportunity from AI: it could dramatically lower the cost of assessment. We already know that even if we went back to blue books, it is unlikely that the future me will actually grade them without the use of AI. But more to the point, if the goal is to push students to study more, moving to an expensive form of assessment is a terrible way to do it. Cramming hardly encourages long-term memory change, and it does nothing to encourage continual learning, which is surely a better bet for the goal of the enterprise. Forcing assessment into cocoons raises the cost of each assessment point. It is the wrong direction.
Instead, AI provides the opportunity to offer students continual assessment. And not just for verification but for mastery. We should be allowing students to redo tests or assignments until they know the material. But we don’t, because it is too costly.
A few years back, I actually offered students the opportunity to revise and resubmit assignments for a better grade. It was more costly, but the reason I abandoned it was that students who did well the first time around objected to others getting a second chance. I regret not resisting that pressure. This isn’t a fairness question but a learning one, and that is all that matters.
But with AI, we can get there. We can set tests, assignments, oral exams. Whatever. And we can keep students doing it until they know the material. We can also make the friction of using AI in doing that high enough that it is just not worth it relative to playing the game. (Yes, I know that may be fantasy, but I have good reasons to believe otherwise; a topic for another day.)
The point is that this is an opportunity to provide the assessment we would provide if it were costless to do so. Assessment needs to become the abundant factor, not the scarce one that justifies an institution. Shirky’s approach raises costs. In that regard, it is doomed to failure. Instead, we need to embrace the fact of AI in all of its dimensions and recast how we do things.


