A rather strange story appeared the other day.
Canada’s largest airline has been ordered to pay compensation after its chatbot gave a customer inaccurate information, misleading him into buying a full-price ticket.
Air Canada came under further criticism for later attempting to distance itself from the error by claiming that the bot was “responsible for its own actions”.
Well, colour me “intrigued.”
It actually gets better, because this wasn’t an ordinary customer service issue. It’s one with “PR nightmare” written all over it, which makes it all the more surprising that it (a) got to Court and (b) resulted in a Court decision.
The case, Moffatt v. Air Canada, was heard in the Civil Resolution Tribunal, essentially Small Claims Court, the sort of dispute Judge Judy might have handled had it not involved a major corporation. And you don’t have to read far to find out that the plaintiff, Jake Moffatt, had been interacting with the automated chat option Air Canada pushes on you so you can “Avoid the Wait” on the phone.
Right now, if you click one of those links, nothing happens, and seemingly for good reason. Jake Moffatt had been using the automated chat to find out whether he could obtain a bereavement fare, because his grandmother had just passed. As you might imagine, in that situation you don’t really want to spend time mucking around. He was satisfied when the chatbot said he could apply for any discount afterwards. Here is what the chatbot said:
Air Canada offers reduced bereavement fares if you need to travel because of an imminent death or a death in your immediate family.
…
If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form. (emphasis in original)
But it turned out that wasn’t Air Canada’s policy.
Mr. Moffatt later learned from Air Canada employees that Air Canada did not permit retroactive applications.
By “later learned,” what that means is that he applied for the bereavement fare within 90 days of flying (a discount of about $600) and was then told that this wasn’t Air Canada policy and that he should have applied for it beforehand. He was actually able to get a screenshot of the chatbot’s response and send it to Air Canada, who apparently conceded that the response was “misleading.” Air Canada’s main defence was that its intended policy was on its website. Indeed, it would have come up by clicking the link in the chatbot’s message. But as the Judge in this matter noted:
While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled “Bereavement travel” was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website.
Now, you could be forgiven for wondering why Air Canada decided to make a big deal of this. I mean, is it the end of the world for them to accept the discount ex post rather than ex ante? Especially when, as any sensible person would guess, the reasonable response is to make it easier for someone to get on a flight for a funeral than to force them through a bureaucratic process right away. It’s almost as if Air Canada is saying to its customers, “yeah, you can have a discount, but you have to deal with us while you are at your most bereaved rather than later, or you can just pay full fare, because you are booking a flight a day or so in advance of travelling and we don’t get many chances to make people pay full fare …” It is almost as if Air Canada wanted to fight this in Court so it could use it as an opportunity to explain to Canadians that, if you have a family tragedy, just make sure you get your affairs in order first; then you can happily go on with all the bereaving. If so, mission accomplished.
But I digress. Let’s get to the AI portion of today’s festivities.
Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.
Now, I’m not one of those fancy people who can easily interpret all the legal language of judges, but it seems to me that the Judge here was saying, “WTAF, Air Canada!” Apparently, Air Canada claimed it was all in some contract, to which the Judge commented: “Air Canada is a sophisticated litigant that should know it is not enough in a legal process to assert that a contract says something without actually providing the contract.”
To sum up: Air Canada had a chatbot whose information didn’t line up with its actual policies, yet it pushed customers to use said chatbot. Someone, in a moment of likely stress, took the chatbot at its word, only to find out it was all wrong. But instead of just finding some resolution, Air Canada forced their customer to take legal action, which dragged on for over a year, and mounted a defence of “Oh, those chatbots, what are you going to do? They aren’t really an employee but something else, and so we aren’t responsible for whatever nonsense they might spout.” And then the Judge, jaw on the floor, had to spend time on the kind of damages calculation no ordinary person could do, concluding that Air Canada had to pay Moffatt $812.02, which included the $125 in fees he had to pay for the Court case. Which, when I look at it, is way too low relative to the cost of all of this.
As a final addendum, what about the other part of the damages calculation? You know how all those fees and charges are difficult to work out? Well, the Judge had to work through all of that. Here is the relevant screenshot if you are interested.
LOL. This is a substantial part of the judgment.
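Since a screenshot doesn’t reproduce well here, a rough reconstruction of the arithmetic as I read the published decision (Moffatt v. Air Canada, 2024 BCCRT 149); the line items are my paraphrase of the award, not a quotation:

```python
# Rough reconstruction of the award in Moffatt v. Air Canada, 2024 BCCRT 149,
# as I read the published decision; line items paraphrased, not quoted.

damages = 650.88               # fare difference owed for the misrepresentation
prejudgment_interest = 36.14   # pre-judgment interest on those damages
crt_fees = 125.00              # Civil Resolution Tribunal filing fees

total = damages + prejudgment_interest + crt_fees
print(f"${total:,.2f}")        # $812.02, matching the total ordered
```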
So where are we? We now have an important precedent (not that one was needed) that businesses are responsible for what their chatbots say. When those chatbots use generative AI, that means you are going to have to be extra careful to make sure they aren’t hallucinating. The events in this case took place in November 2022, just as ChatGPT was first released, so this was likely plain old poor coding rather than generative AI. But the lesson stands: you don’t want to throw an AI-trained bot into the wild. The answers need to be checked and re-checked, as sketched below. There isn’t that much most companies will have to do but, as the Judge in this case noted, it is the least you can do.
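To make “checked and re-checked” concrete, here is a minimal sketch of one way a company might guard a chatbot’s answers against its canonical policy text. Everything here is hypothetical and illustrative: the policy store, the reply, and the matching rule are mine, not Air Canada’s or any vendor’s.

```python
# Hypothetical guardrail: flag chatbot replies that contradict the
# canonical policy before a customer sees them. Illustrative only.

CANONICAL_POLICIES = {
    "bereavement": "Bereavement fares must be requested before travel; "
                   "refunds cannot be claimed retroactively.",
}

def contradicts_policy(bot_reply: str, topic: str) -> bool:
    """Crude check: a reply that promises retroactive refunds contradicts
    a policy that rules them out."""
    policy = CANONICAL_POLICIES[topic].lower()
    reply = bot_reply.lower()
    promises_retroactive = "already travelled" in reply or "within 90 days" in reply
    rules_out_retroactive = "cannot be claimed retroactively" in policy
    return promises_retroactive and rules_out_retroactive

# The kind of answer Mr. Moffatt received:
reply = ("If you have already travelled, you may submit your ticket for a "
         "reduced bereavement rate within 90 days of the date it was issued.")

if contradicts_policy(reply, "bereavement"):
    print("Hold this answer and escalate to a human agent.")
```

A real system would use something sturdier than substring matching (retrieval against the policy corpus, a second model as a verifier, or human review of high-stakes topics), but the principle is the same: the bot’s output gets checked against the source of truth before a customer relies on it.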
The other lesson is to work out how to be reasonable with customers before the matter goes to court and becomes the PR nightmare this one now is.