The Value of AI for Uneven Work
AI will be valuable for work that involves long stretches of downtime punctuated by brief but large surges
There continues to be much discussion regarding the impact of AI, these days Generative AI, on jobs. This has certainly come up in relation to our launch of All Day TA, for the excellent reason that we designed that system to join the teaching team for a course. Moreover, the entire purpose of doing so was our belief that the ability to serve students well in education (in our case, higher education) is fundamentally limited by the scarcity of teacher attention. The aim is to use AI to relax the teacher-attention constraint. But teachers (including professors and their teaching assistants) are winners when attention is scarce. So it is not surprising that a product like ours would raise concerns.
However, we believe it is important not to exclude students from the equation. They are the ones who suffer when teachers have limited attention. This is why textbooks allow students to learn without the teacher's attention. Indeed, it is why we have teaching assistants: to economise on professors' scarce attention. And like those standard responses to attention constraints, AI is poised to play an important role.
Our experience thus far with All Day TA in our classrooms shows precisely why. Here is a graph showing its use in a course with 250 MBA students.
The first thing to note is the volume of questions: 8,224, which averages out to 33 per student. However, the semester isn’t quite done yet; it should bump up to an average of 50 by the end. That volume is precisely the relaxation of the teacher-attention constraint. I know of no teaching team that answers more than a fraction of the questions asked here, let alone when they are asked, since many of them come outside of normal work hours.
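For the record, here is the back-of-the-envelope arithmetic behind those figures (the projected end-of-semester total is simply my own extrapolation from the 50-question average mentioned above):

```python
# Back-of-the-envelope check of the figures quoted above.
total_questions = 8_224   # questions asked so far
students = 250            # MBA students enrolled in the course

avg_so_far = total_questions / students
print(f"Average so far: {avg_so_far:.1f} questions per student")  # ~32.9, i.e. about 33

# If the average climbs to 50 per student by the end of the semester:
projected_total = 50 * students
print(f"Projected total by semester's end: {projected_total:,}")  # 12,500 questions
```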
Our rough estimate is that 30 per cent of questions are the “normal” ones you would expect, asked to understand concepts better; 30 per cent are ones where students are searching for definitions or for places in the materials where they can learn more; and 30 per cent are questions that students are otherwise too embarrassed to ask anybody. (There are studies showing that this type of embarrassment is a real thing, including one by my Rotman colleagues showing that when online ordering for pizzas was introduced, the pizza orders became more, shall we say, “interesting.”) The final 10 per cent are mostly administrative questions. The point here is that you may have thought your teaching team was serving your students, but those students likely have many unanswered questions.
The second thing to note is the variability. You can see three “events.” First, the course began in September (squint, and you can see a blip in usage). Second and third, there were two MAEs (what we call “massive assessment events”). You guessed it: a mid-term and a final. In between, there was some regular activity, which can be put down to students learning to make use of the AI.
Herein lies something very important, not about AI per se, but about software. Imagine employing a person whose job it was to wait around all semester doing very little until a couple of days when they have to do more than would be humanly possible. This is not an unusual situation, of course, but the data here show just how extreme the work requirements of a teaching assistant are. So much so that we don’t expect them to be able to serve the majority of demand during the surge periods. My colleague, Ajay Agrawal, reacting to this graph, pointed out to me that this is precisely why software is valuable. It is built for tasks with exactly this sort of unevenness.
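To make the unevenness point concrete, here is a minimal sketch with entirely made-up numbers (not the course data above) showing how little of surge-day demand a fixed-capacity human can serve:

```python
# Illustrative only: hypothetical daily question counts for a 90-day semester,
# quiet on most days with two exam-driven surges. All numbers are invented.
daily_questions = [5] * 90
daily_questions[44] = 800    # surge around a hypothetical mid-term
daily_questions[89] = 1_200  # surge around a hypothetical final

TA_DAILY_CAPACITY = 60       # assumed questions one human TA can answer per day

answered_by_ta = sum(min(q, TA_DAILY_CAPACITY) for q in daily_questions)
total = sum(daily_questions)

print(f"Total questions: {total}")
print(f"Answered by one TA: {answered_by_ta} ({answered_by_ta / total:.0%} of demand)")
# The TA sits nearly idle most days, yet most of the semester's questions
# arrive on the two surge days and go unanswered. Software scales with the
# surge; a fixed-capacity human cannot.
```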
AI means that software can now step in for uneven work that is cognitive in nature. The way it will work is that, during these surge times, it will triage the low-hanging but important fruit of student queries. This means that those who turn up at teaching assistant office hours, or write emails directly to the teaching team, will be those whose questions really require human input. These will normally be tail questions, either from struggling students or from very advanced ones. Thus, teacher attention will be better allocated to where it is of the most value and away from where it is trivial. This is the promise of the “software eating the world” trend coming to teaching.
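As a purely hypothetical sketch of what such triage could look like (this is not a description of how All Day TA actually routes questions; the categories and threshold below are my own illustrative assumptions):

```python
# Hypothetical triage rule: the AI answers routine queries directly and flags
# only "tail" questions for scarce human attention. Categories and the
# confidence threshold are illustrative assumptions, not All Day TA's design.
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    category: str          # e.g. "definition", "admin", "concept", "open_ended"
    ai_confidence: float   # how adequate the AI judges its own answer to be

def route(query: Query, threshold: float = 0.8) -> str:
    routine = {"definition", "admin", "concept"}
    if query.category in routine and query.ai_confidence >= threshold:
        return "answered_by_ai"
    return "escalate_to_teaching_team"

print(route(Query("What does 'moral hazard' mean?", "definition", 0.95)))
# -> answered_by_ai
print(route(Query("Does my project's pricing model make sense?", "open_ended", 0.40)))
# -> escalate_to_teaching_team
```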
Of course, there is a meta-question that needs to be addressed. Is the fact that student activity is so uneven a good thing? It is a long-standing fact of educational life, but I don’t know a professor who wouldn’t want their students to study more consistently over the semester. We have that on our agenda for All Day TA. We are working on ways to nudge students into more continuous learning. This can be done by building in self-testing with feedback (another thing that teacher-attention constraints rule out) and by bringing the kind of Khan Academy focus on mastery to higher education. To be sure, that will undermine precisely the value proposition for AI that I have discussed here. But overall, it will improve the student experience.
Finally, in the spirit of “always be selling,” if you are a professor and want to try out All Day TA, please sign up for a 14-day trial here.