If a kid said this, you might ask, “Who are their parents?” And there is a strong sense in which that is precisely the right reaction. LLMs really are kids. The difference is that they have been put into the world at scale. And we all know why they have turned out this way: bad parenting. The kid has been given all of the world’s knowledge, without any guidance, real feedback, or the benefits of structured experience, and has been thrown out there. If you did that to a kid, you would expect stuff like this (and it is often what we get with Hollywood child stars). And we all get it. If I were just given the knowledge of the world without the requisite social experience, I might end up like Grok.
The response to this has been that we need to teach the AI like a kid. That led to this:
But that, too, is the result of the same cause: bad parenting. In this case, it is simplistic “Mike Brady” parenting. You know, the bits of advice the father would give in The Brady Bunch.
Cindy, you know by tattling on your friends, you’re really just tattling on yourself. By tattling on your friends, you’re just telling them that you’re a tattletale. Now is that the tale you want to tell?
That’s essentially what Google did to make its chatbot a moron.
The AI engineers who aren’t thinking of their offspring as kids are trying to solve every problem at once, treating it all as a trade-off between truthfulness and inoffensiveness. Once you take a parenting perspective, you realise that while the trade-off is real, you can’t navigate it in one big-bang effort. It takes time. Frustrating time.
Back in the day, before AI, my public writing was very much focused on parenting. It was about my attempts, as an economist, to make sense of my own parenting predicaments through that lens. It didn’t go too well, and I wrote a book about it. But there was a clear message: it takes feedback, experience, failure and a bunch of related stuff to nudge kids in ways that allow them to make the right decisions to navigate a world of other people. And all of this takes time. You can’t hope to program it in. You have to keep working at it and managing it.
I suspect the AI engineer’s solution to the problems arising from AI in the human world has been to “fix” them: either pull the product or put up a barrier to particular behaviours. The problem is that this is a fundamentally lazy response. It comes from wanting to solve problems rather than manage them.
Good parenting involves managing. If either of these AIs were my kids, it would be easy to tell them, “Don’t do that again.” But that doesn’t make for good adults. Instead, our Nazi-loving AI would need a history lesson and an explanation of why this might be offensive, so that it could learn to make the truth-versus-offensiveness trade-off for itself rather than have someone else do it. Our inaccurate Nazi-picture-drawing AI would need a history lesson to understand why that mix doesn’t make sense and is not truthful, and could instead be interpreted as offensive or clueless, and hence less trustworthy. Again, the tools for the truth-versus-offensiveness trade-off need to be given to the AI rather than hard-coded.
We have thought very hard about this at All Day TA. There, professors are the parents of their own AI children. Our process allows professors to feed the AI content and then let it loose on students. But at its heart is a process that puts the professor in the loop. We provide a set of tools: information about what students are asking (some parental oversight, if you will) and then a simple process for explaining to the AI where it might have answered better.
Here, the professor can “Add an answer” and immediately train the AI. It is early days yet, but the point is that we want professors to easily manage their AIs rather than think of them as “solved.”
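To make the idea concrete, here is a minimal, hypothetical sketch of that “Add an answer” loop. None of these names (`FeedbackStore`, `add_answer`, `answer`) come from All Day TA’s actual system; they simply illustrate the design choice described above: a professor’s correction is stored and takes priority over whatever the underlying model would have generated, so the AI is managed over time rather than “solved” up front.

```python
class FeedbackStore:
    """Hypothetical store of professor-supplied answers, keyed by question text."""

    def __init__(self):
        self._overrides = {}

    def add_answer(self, question: str, answer: str) -> None:
        # The professor's correction is saved and will override
        # the base model for this question from now on.
        self._overrides[self._normalise(question)] = answer

    def answer(self, question: str, fallback) -> str:
        # Serve the professor's answer if one exists; otherwise
        # defer to the base model (any callable taking the question).
        key = self._normalise(question)
        if key in self._overrides:
            return self._overrides[key]
        return fallback(question)

    @staticmethod
    def _normalise(text: str) -> str:
        # Crude matching: lowercase and collapse whitespace.
        return " ".join(text.lower().split())


# Usage: the professor spots a weak answer and adds a better one.
store = FeedbackStore()
store.add_answer("When is the midterm?", "The midterm is in week 6, in class.")
print(store.answer("when is  the midterm?", lambda q: "I am not sure."))
# -> The midterm is in week 6, in class.
print(store.answer("Is the exam open book?", lambda q: "I am not sure."))
# -> I am not sure.
```

A real system would match questions semantically rather than by normalised string, but the shape is the point: corrections accumulate as ongoing management, not as a one-time fix.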
This actually exposes the greatest challenge in AI at the moment: how do you get AIs to learn in a social context? My view, like my own parenting view, is that you need to supervise them over time in actual, real-world social situations. And while it is tempting to vilify AI parents for the missteps of their kids, that isn’t helpful to the process. Instead, we need to ask them what steps they are taking to ensure that AIs can actually learn to navigate the social world. We don’t want them to be “hands-off” parents, nor to just give up. We need them to be active managers, just as we expect of the rest of us.