> 27. Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.
> https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/202...
Real legal comedy. Since this was in small claims court maybe it was an amateur on Air Canada's side?
If they could reasonably expect to be able to hire people who would agree to accept all liability incurred during their work for the company, they absolutely would.
Same with chatbots. Even better, because once it's "trained", you don't have to pay it.
There have been a few instances in the last few years of expecting digital entities to shoulder the entirety of legal liability; DAOs are another example of this in the crypto space.
I don't like Ars Technica because they break reader mode and load articles chunk by chunk – I consider it hostile towards the user, and am glad that Wired works much better.
What browser? Just tried it out on Firefox for Android (version 122.1.0, with uBlock Origin enabled but JS still allowed on ars) and for the link above, I see the whole article after immediately switching to reader mode.
In early days of computerization, companies tried to dodge liability due to "computer errors". That didn't work, and I hope the "It was the AI, not us" never gets allowed either.
Not yet. The Inquiry is still taking evidence. They haven't taken evidence from the big-hitters yet, that begins in April.
In fact the Inquiry doesn't "hold humans accountable"; but they can compel witnesses, who testify under oath.
Gerald Barnes is the Fujitsu engineer who stated in the prosecution cases that Horizon was reliable ("robust"). His testimony to the Inquiry has been delayed, because on the morning he was supposed to testify, the Post Office "discovered" a million or so emails that they'd failed to disclose. So he'll be on the stand in April, along with the senior execs.
The Inquiry videos make quite pleasant watching; the lawyers and the judge are immaculately polite, there are no trick questions, and it's all about finding out what happened. I'm looking forward to seeing the senior execs on the stand.
The police are following the Inquiry; nobody's been charged, and my guess is they'll hold off on charging people until the Inquiry is over (that's part of the purpose of statutory inquiries). So the succession of Post Office Ministers that have overseen this disgrace will all be out of office by then.
> With the British Post Office, the issue is whether or not a software system is inscrutable
Not sure what "inscrutable" means in that context. Is it supposed to mean it can't be scrutinized?
A law was passed some years ago that says evidence obtained from a computer system should be accepted as true, unless evidence is provided that opens it to question. That means, in the Post Office case, that postmasters couldn't demand that the Post Office prove that Horizon was working correctly. They had to prove that it was defective, which was difficult; they were kicked out of their shops, and denied access to their own records, including the Horizon terminals they had been using.
Of course, if the chatbot can't be trusted, what is the point of it? I'll have to get something else to verify anything the chatbot says. Sure, the chatbot can say "hello" in 10,000 words or whatever, but it can't do anything useful.
The resolution is an amazingly clear piece of legal writing that explains the thought process involved in the decision and then awards the damages. I might end up using this pattern for writing out cause and effect.
Good. If you use a tool that does not give the correct answers, you should be held liable for the mistake. The takeaway is: you'd better vet your tool. If the amount of money you lose from mistakes with the tool is less than the money you saved using it, then you make money; if not, you may want to reconsider that cost-saving measure.
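To make that break-even concrete, here's a minimal sketch with entirely made-up numbers (nothing below is Air Canada's actual data):

```python
# Back-of-envelope break-even check for an error-prone support tool.
# Every number below is a hypothetical placeholder, not real data.

human_cost_per_year = 50_000       # fully loaded cost of one human agent
chatbot_cost_per_year = 10_000     # hosting/licensing for the bot
conversations_per_year = 40_000    # volume one agent-equivalent handles
error_rate = 0.002                 # fraction of chats with a costly mistake
avg_cost_per_error = 650           # refunds/awards per mistake (hypothetical)

savings = human_cost_per_year - chatbot_cost_per_year
expected_mistake_cost = conversations_per_year * error_rate * avg_cost_per_error

print(f"Savings from the tool:     ${savings:,.0f}")
print(f"Expected cost of mistakes: ${expected_mistake_cost:,.0f}")
print("Keep it" if savings > expected_mistake_cost else "Reconsider the cost-saving measure")
```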
I'm glad to see that cases are starting to be decided about the liability of using AI generated content. This is something the general public should not need to second-guess.
Honestly LLMs aren’t ready for customer service. If I’m talking to a company I need to have a high degree of accuracy. LLMs are less accurate than trained humans.
This is my personal perception, but I think it's important that there is a clear definition of liability so that companies are able to make their own determinations of what is ready and what isn't.
Few front-line agents have deep knowledge about their company's products or services. They trace their finger through some branches on a flowchart then dictate from a knowledgebase.
Agreed, and I think following flowchart-type logic is within today's AI capabilities. This thread is full of people getting inaccurate responses from humans. I think when it comes to accuracy, a well-trained LLM likely beats the status quo of high-churn low-paid employees following a rote diagram.
Of course there should always be a way to reach a human, a senior agent with actual knowledge that can be applied in subjective ways to solve more complex problems.
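For what it's worth, that "finger through a flowchart" model is easy to pin down in code. A toy sketch, with an invented policy tree purely for illustration (not anyone's real policy):

```python
# Toy decision tree of the kind a front-line agent (human or bot) follows.
# The policy content here is invented purely for illustration.

POLICY_TREE = {
    "question": "Is the request about a refund?",
    "yes": {
        "question": "Was the ticket bought less than 24 hours ago?",
        "yes": "Full refund is available; process it.",
        "no": "Point to the fare rules page and offer a travel credit.",
    },
    "no": "Escalate to a senior agent.",
}

def walk(tree, answers):
    """Follow a list of yes/no answers down the tree until a leaf (an action)."""
    node = tree
    for answer in answers:
        if isinstance(node, str):      # already at a leaf
            break
        node = node["yes"] if answer else node["no"]
    return node

print(walk(POLICY_TREE, [True, False]))  # refund request, bought more than 24h ago
```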
My father died in hospice the night before a flight to see him. I missed the flight because there was no longer any reason to get to the airport before dawn. I called to reschedule a few hours later.
The human on the other end rescheduled and gave me a bereavement rate. She told me it was less money, but didn't mention the reason. I didn't put that together until later. She just helped me out because she had compassion.
I am too cynical to think that an AI controlled by a corporation will do this.
Good. I hope people out there also discover chatbot holes and exploit them. Chatbots are one of the most useless, time-wasting things out there; they serve absolutely no purpose. And most of them work exactly like nested dropdowns where you select one option after the other. Oh, and when you really want to talk to a human being, in almost every scenario that option is not available. What a wonderful world powered by "AI".
Would a company be liable to uphold its promises if a rogue human customer service agent promised something ridiculous such as 1 million dollars worth of free flights?
We live in an interesting world. In the US, a corporation is legally a person, and a chatbot is not a person[0]. I'm looking forward to the first Supreme Court case involving a corporation consisting of chatbots.
[0] I'm handwaving in this lead-in to the fantasy here, so, dear reader, please give me a break for oversimplifying and ignoring technicalities.
The company would have to prove the human was knowingly acting outside of their job/training and was disciplined for that. Such discipline must be on the path to firing the employee if the behavior isn't corrected. Note that training is important here; an employee who isn't trained is assumed to have more authorization than someone who is.
Or in this case they need to take the AI out of service immediately until they can get a corrected version that does not do such a thing. I will accept that the AI can be tricked into doing such a thing and remain in service, but only if they can show the tricks are something an honest human wouldn't attempt. (I don't know what this is, but I'll allow the idea for someone else to propose in enough detail that we can debate whether honest people would ever do that.)
I don't even see how this is a big story. An expressed representation about what a product is or does, how it works, and the terms of sale in consumer contexts are binding on the seller. It is the same in America. If you go into a store and they say that you can return it if you don't like it, then you can. If you buy a TV and the guy at the store tells you it also makes pancakes, you can get your money back if it turns out that it does not make pancakes. This is true even if the representation is made by some 16-year-old kid working at Best Buy. By extension it would still be true even if it is made by an automaton.
The interest in this story comes from the fact that Air Canada tried to fight it using the argument that they aren’t liable for anything their agents say. Businesses around the world are so eager to start laying people off that it’s almost certain that they will not be the only ones to try a variation on that claim. I see the important point being reinforcing liability laws and especially not allowing them to shirk responsibility with disclaimers — otherwise we’ll just see more companies cut staffing on their phone support to force people to use a system with a “you are responsible for checking everything this chat bot says for accuracy” notice.
I mean, they're welcome to try, and this can't be the first attempt. But there's plenty of jurisprudence at least in the USA that you can't disclaim expressed representations.
I think this article's full import is not being properly processed yet by a lot of people. The stock market is in an absolute AI frenzy. But this article trashes one of the current boom's biggest supposed markets. If AIs can't be put in contact with customers without exposing the company to an expected liability cost greater than the cost of a human customer representative, one of their major supposed use cases is gone, and that means the money for that use case is gone too. There's probably halo effects in a lot of other uses as well.
Now, in the medium or long term, I expect there to be AIs that will be able to do this sort of thing just fine. As I like to say I expect future AIs will not "be" LLMs but merely use LLMs as one of their component parts, and the design as a whole will in fact be able to accurately and reliably relay corporate policies as a result. But the stock market is not currently priced based on "AIs will be pretty awesome in 2029", they're priced on "AIs are going to be pretty awesome in July".
LLMs are a huge step forward, but they really aren't suitable for a lot of uses people are trying to put them to in the near term. They don't really "know" things, they're really, really good at guessing them. Now, I don't mean this in the somewhat tedious "what is knowing anyhow" sense, I mean that they really don't have any sort of "facts" in them, just really, really good language skills. I fully expect that people are working on this and the problem will be solved in some manner and we will be able to say that there is an AI design that "knows" things. For instance, see this: https://deepmind.google/discover/blog/alphageometry-an-olymp... That's in the direction of what I'm talking about; this system does not just babble things that "look" or "sound" like geometry proofs, it "knows" it is doing geometry proofs. This is not quite ready to be fed a corporate policy document, but it is in that direction. But that's got some work to be done yet.
(And again, I'm really not interested in another rehash of what "knows" really means. In this specific case I'm speaking of the vector from "a language model" and "a language model + something else like a symbolic engine" as described in that post, where I'm simply defining the latter as "knowing" more about geometry than the former.)
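As a rough illustration of that "LLM as a component" shape: the model only drafts a reply, and a deterministic check against the actual policy text decides what reaches the customer. Everything below (draft_reply, policy_allows, the policy string) is a hypothetical stand-in for such a design, not any real system:

```python
# Sketch of a propose-then-verify loop: the language model only drafts a reply;
# a deterministic check against the policy text decides whether it can be shown.
# draft_reply() and policy_allows() are hypothetical stand-ins, not a real API.

POLICY = "Bereavement fares must be requested before travel."

def draft_reply(question: str, attempt: int) -> str:
    # Stand-in for an LLM call; here it just cycles through canned drafts.
    drafts = [
        "You can apply for the bereavement fare within 90 days after travel.",
        "Bereavement fares must be requested before travel; see the policy page.",
    ]
    return drafts[attempt % len(drafts)]

def policy_allows(reply: str, policy_text: str) -> bool:
    # Stand-in for a real verifier; crudely rejects drafts that contradict
    # the policy's "before travel" requirement.
    if "before travel" in policy_text:
        return "after travel" not in reply
    return True

def answer(question: str, max_attempts: int = 3) -> str:
    for attempt in range(max_attempts):
        draft = draft_reply(question, attempt)
        if policy_allows(draft, POLICY):
            return draft
    return "I'm not sure; let me connect you with an agent."

print(answer("Can I get the bereavement fare after my flight?"))
```

The reliability in that kind of design comes from the checker, not from the model's language skills, which is the distinction being drawn above.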
The real story here is that Air Canada's lawyers argued, among other things, that the chatbot was a separate and independent legal entity from Air Canada and therefore Air Canada was not obligated to honor the made up policy.
In other words, this was possibly the first historical argument made in a court that AIs are sentient and not automated chattel.
The real desire here is to get it to promise a lifetime of free service.
edit: arguing that the chatbot is a separate legal entity is a wild claim. It would imply to me that air canada could sue the ai company for damages if it makes bad promises; not that air canada is excused from paying the customer.
If I were the agent who made that promise what would they do to me? Fire me - then fire the chatbot. Put me through training - then train the chatbot. Stand behind me and do what I say - then stand behind their chatbot.
I bet they already have policies in place for this - while how they apply to AI may be different, they shouldn't let this slide.
The key term here is "apparent authority". You don't need to be granted authority for each and every thing you do, but if you start grossly doing things you were not supposed to do, that's mostly just fraud on your end.
The common example in textbooks is someone continuing to do business as an employee after having been fired. They can still make valid deals with other entities due to apparent authority if they're not clearly made out to be separated.
That might be too wishful thinking. The tribunal would take into account damages, and whether it was reasonable to believe that you're entitled to free service.
In this case, the chatbot promised a ~$800 discount, and the tribunal awarded ~$800. But I doubt they'd make the same decision again, or deem the lifetime service enforceable/un-cancellable.
I think it's within the bounds of reason to grant pretty much any recourse if you can cite emotional damages. It maybe won't fly if you're clearly trying to trick the AI, but I think there will be general plausible deniability.
It may feel weird, but it's utterly insane to delegate customer interactions to an agent that has nobody's interest in mind, not even their own, whom you cannot trust to abide by policy.
Similarly, if the bot negotiated any sort of special deal, I think it would be very, very difficult to argue that it lacked apparent authority to make deals or that it's not a fair consideration.
I don't really understand why generative LLM output is being presented to (and interpreted by) users as intelligence. This seems incredibly disingenuous, and I would prefer to see it characterized as fiction, creative, realism, etc. -- words that make it clear to average people that while this might be entertaining and even mimicking reality, it's completely distinct from the writing of a person. Disclaimers (often small, low contrast, or "ToS;DR"-esque) are insufficient when the UI is specifically crafted to appear like chatting with a person.
I hope there can be some reasonable middle ground. I think in this case it's good the woman got her money. But Air Canada, presumably scared of what the next case might cost them, decided to turn the chatbot off entirely. I think that's a bit unfortunate.
I don't know what the solution looks like. Maybe some combination of courts only upholding "reasonable" claims by AI. And then insurance to cover the gaps?
If the company is not confident in the output of the chatbot or the liability, then they should turn it off. All business decisions are based on assessing risk. Ultimately either insurance will need to step in or the company is willing to take the hit for mistakes their system makes because it saves them more money. This is how fraud protection is assessed with banks, credit cards, etc. as well.
I agree. If bad information is given and the customer makes a decision on it, then either the customer must bear the risk or the business. I think it's more fair for the business to be out the money.
In this case this could have just been a $650 'bug bounty' had Air Canada issued a quick refund. A reasonable QA expense to find out that your AI agent is misleading your customers.
Hmm. “Let me turn on this lead water pipeline, which may or may not poison an entire town, and then deflect the blame when it does.”
I don’t think failing to take adequate precautions is preventing AI tools from being used. I think this was plain corporate incompetence and greediness. They started using a system without properly testing it and don’t want to pay for the consequences.
What if Boeing says “Oops. We forgot to put in the bolts that keep the door in place, but we shouldn’t be held accountable for our actions”? The fact that they used a tool for it shouldn’t change the outcome unless we are going to create indemnity for big corporations.
I think that there can be an error rate that is low enough to still be very useful and a net-positive for the economy. But if there is a small chance to just end the company on the spot, then no one will use it despite the benefits. So how can we have the good parts without the bad, this is my question.
I guess the onus is on people to prove that using AI specifically is somehow a net-positive for the economy, because that's in no way a given. But I don't feel good about Air Canada trying to make that case.
I don't see why chatbots can't be kept to the same standard as human staff. If an airline support agent lied to me to sell me tickets, no shit I'd want a refund and compensation! Chatbots should be allowed to be wrong, but the company should be prepared to face the consequences of that.
Dupe https://news.ycombinator.com/item?id=39378235 (400+ comments)
Pro-tip OP: https://hn.algolia.com/?dateRange=pastWeek&page=0&prefix=fal...
Here's the real punchline:
> Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt's case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Here's a glimpse into our Kafka-esque AI-powered future: every corporate lawyer is now making sure any customer service request will be gated by a chatbot containing a disclaimer like "Warning: the information you receive may be incorrect and irrelevant." Getting correct and relevant information from a human will be impossible.
If that is such a perfect way to avoid getting sued, why don't they put that on every page of their website and train all of their customer service staff to say that when they talk to customers?
Don't know if you've had the pleasure of interacting with US health insurance, but they do this for coverage and cost estimates all the time, so there is unfortunately precedent in the States.
Well, if the information on the website is too inaccurate to trust then it is also too inaccurate for purposes of contract. The logical next step is to say that all contracts made with chatbots are unenforceable as they lack any trustworthy meeting of the minds.
Because you would choose someone else for things that really matter to you, probably the same person you would choose in that kind of scenario anyway. E.g., for my initial LLC, where I was the single employee, I didn't care about my accountant basically saying in the contract "we'll do our best but can't be liable for anything", whereas now that I have a bigger LLC with many employees, I picked (what I perceived as) the best option on the market, for a premium price. They take liability for their work and have insurance.
I used to work for a company that sold factory automation technology, and had hundreds of manuals for all the products they sold. In the front matter of every manual was a disclaimer that nothing in the manual was warranted to be true. This was automation equipment running things like steel mills, factories, petroleum plants, where failures could result in many deaths and grave financial losses.
I could see that for paper manuals or normal PDFs, since once they leave the company's premises anyone could alter the pages of the manual.
Humans aren't perfect but they're reliable enough to be trusted with customer service work. Humans (generally) have a desire and need to keep their jobs and therefore hold themselves to a high enough standard of performance to keep their job.
And maybe employment contracts have language that offloads liability to an employee if they go rogue and start giving away company resources. Chatbots aren't accountable in any way and we don't know yet if their creators ever will be either.
And disclaimers are used in lots of contexts too.
Because it is a bad look, I assume. If I'm interacting with a company that constantly disclaims everything they say as probable bullshit, I'll go find a competitor that at least pretends to try harder.
....Because an actual judge hadn't said it yet.
I still remember when Microsoft updated the 360 TOS to force arbitration the day after it was deemed legal in a completely separate case.
Rest assured there is an incoming flood of TOS updates.
That's why I get all my questions answered by CC'ing legal@example.com and privacy@example.com
IANAL, but AFAIK you can't disclaim liability that you actually have. I'd love to hear an actual lawyer who knows (not a know-it-all amateur) declaim on this, but:
A Ferris Wheel operator cannot make you sign a disclaimer that they're not responsible if it collapses and kills you. Or rather, they can, but it will not hold up in court.
Similarly, you can say in your manual, "We're not responsible for anything we say here" but you still are.
I don't know about chatbots, but I'd expect that judges will look for other precedents that are analogous.
Personal anecdote: A few years back I left my car at a dealership for some warranty work that was going to take a few days. It has a soft top and they left it in their gated lot overnight, where it got broken into (slashed the top, ripped open the glove box, stole a cheap machete I got from a white elephant exchange). They claimed that they weren't liable at all since I signed a waiver and should go through my own insurance. After a little push back, they caved and covered it under their insurance like they should have from the beginning. I don't go to that dealership for anything anymore, for that and other reasons.
much like Tesla's Autopilot which cannot be responsible for an accident because you're supposed to be hands-on-wheel and alert at all times while using it.
That issue is mostly media optics, I think. Once you sit down with it, it's simply a better cruise control.
Now, if you are talking about 'Full Self Driving' - then yea, there's a waiver and a point there.
It's not optics. It's regulatory intervention: https://static.nhtsa.gov/odi/rcl/2023/RCAK-23V838-3395.pdf
I think the recall is dumb. If you are not paying attention and have an accident, then you are at fault. You already have to click-through agree to use the feature properly, and Tesla has an interior camera to capture a photo of the person agreeing and telemetry to send it to the mothership. For that matter, Tesla could make you read the waiver on camera and capture that before enabling autopilot or FSD.
OR they could just honor what the chatbot said and forget the legal battle.
The chatbot's error cost them what, $200? And it probably replaced a $100,000/year employee?
I would assume the average salary of a customer support worker is dramatically lower than that. The point is still valid regardless.
Salary is always lower than the employer's total cost to employ, so that tracks.
Also, the chatbot proposed a very reasonable solution: book the flight, send us a death certificate when you have it, and you’ll get the discount. That’s actually what the policy should be, and it’s a quite reasonable error for a human to make too.
You think front-line support staff account for $100k in compensation?
Total overhead. Compensation + benefits + payroll taxes + all the per-employee real estate and equipment to support them on the job
Exactly. They are paid under $20/hr pre-tax ($40K), and benefits (LOL for that job) won't exceed, what, $5K? And I'm betting that's mega generous.
Phone jockeys don't need real estate, but in the event they do go to an office, an individual employee would account for what, $50 a month of that expense? And equipment is trivial, maybe a couple of hundred dollars at most (laptop and headset).
We're not talking anything near $100K
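Adding up the thread's own ballpark figures (all of them rough guesses from these comments, not data from any airline):

```python
# Rough tally of a front-line agent's annual overhead using the thread's own
# ballpark figures; none of these are actual numbers from any airline.

wage = 20 * 40 * 52            # ~$20/hr, full time            -> $41,600
benefits = 5_000               # "won't exceed, what, $5K"
payroll_taxes = 0.0765 * wage  # employer-side FICA as a rough US proxy
real_estate = 50 * 12          # "$50 a month" of office space
equipment = 300                # laptop + headset

total = wage + benefits + payroll_taxes + real_estate + equipment
print(f"Total: ~${total:,.0f} per year")  # roughly $50K, nowhere near $100K
```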
Presumably this is why there is a trend in consumer rights legislation towards being explicit that anything a buyer has been told by a seller or actively told the seller before the sale is material to the contract of sale regardless of the seller's small print that says "This contract of sale we wrote by ourselves and won't negotiate with you and only this contract means anything". Then they can't promise the world to get the sale and then wash their hands of any resulting commitments two seconds after they get your money. Which seems entirely fair and reasonable to me, whether the promise came from a real person or a chatbot.
Part of disclaimer at irs.gov for their interactive assistant: "Answers do not constitute written advice in response to a specific written request of the taxpayer within the meaning of section 6404(f) of the Internal Revenue Code."
Or it's buried on the 4th page of the disclaimer.
If you read them, there is often stuff like that; the most flagrant one I read said “everything above should be considered apocryphal”.
Can’t they link to sources? Like Perplexity.ai or Arc Search does.
I don’t even need it to tell me anything. Links are all that is relevant. Google Analytics on the Web does something similar. You can ask questions in the search box and it takes you to a relevant page.
“Can I get refund on my flight 2 hours in advance?”
“Here is a link to refund policies w.r.t time before flight”
Air Canada's chatbot returned sources along with its answer. Quotes from another article[1].
> Air Canada’s argument was that because the chatbot response included a link to a page on the site outlining the policy correctly, Moffat should’ve known better.
[1] https://techhq.com/2024/02/air-canada-refund-for-customer-wh...
Yes, but:
"The answer is yes, you may receive a refund within 2 hours of departure. More details are here: (link)"
is _entirely_ different from
"Our refund policy is here: (link)"
Choosing to accept an unnecessary, quantifiable liability and wrapping it in a disclaimer as part of a critical business process is not a recipe for sustained growth or profit.
AC and other corporations would do well to put the brakes on this instead. Identify ways to transfer risk (AI insurance, for example) or avoid risk (scrap the AI bot until the risk is lowered demonstrably).
Savvy advertisers would jump on this opportunity to show just how much AC cares about the customer and eat the loss quietly before it ever went to trial.
the standard disclaimer for fraud
[flagged]
The story is about a Canadian airline, in a Canadian civil resolution tribunal.
And? Using American technology, huffing American propaganda, voting for American political ideals. I'm Canadian and I see us as no different from Americans. We are Americans.
> "Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot," Rivers wrote. "It does not explain why it believes that is the case" or "why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot."
This is very reasonable-- AI or not, companies can't expect consumers to know which parts of their digital experience are accurate and which aren't.
Forget about digital experiences for a moment. Forget entirely about chatbots.
> Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives
That includes EMPLOYEES. So they tried to argue that their employees can lie to your face to get you to buy a ticket under false pretense and then refuse to honor the promised terms? That's absolutely fucked.
I have had this happen in a sense.
I once booked a flight to meet my then-fiancee in Florida on vacation. Work travel came up unexpectedly, and I booked my work travel from ORD > SFO > TPA.
Before I made that booking, I called the airline specifically to ask them if skipping the ORD > TPA leg of my personal travel was going to cause me problems. The agent confirmed, twice, that it would not. This was a lie.
Buried in the booking terms is language meant to discourage gaming the system by booking travel where you skip certain legs. So if you skip a leg of your booking, the whole thing is invalidated. It's not suuuuper clear, I had to read it a few times, but I guess it kinda said that.
Anyways - my itinerary was invalidated by skipping the first flight, and I got lucky enough that someone canceled at the last minute and I could buy my own seat back on the now-full flight for 4x the original ticket price I paid (which was not refunded!).
I followed up to try and get to the bottom of it, but they were insistent they had no record of my call prior, and just fell back on "It's in the terms, and I do not know why you were told wrong information". Very painful lesson to learn.
I try and make a habit of recording phone conversations with agents now, if legal where I'm physically located at the time.
In the US at least, once that notification that "the call may be recorded for quality assurance" happens, both parties have been notified and you're good to record regardless of the state you are in.
What do you use for recording your calls?
I used to use a couple of apps like Cube ACR, or Call Recorder Pro, but these no longer work, and I'm skeptical of the workarounds to get them working again.
Given the new restrictions in Android 10, probably the way forward is a passthrough device which uses 3.5mm connectors to MITM the audio. I haven't found one which is a sure bet yet.
It's ridiculous that both iOS and Android largely have no way to record calls / try to prevent it, especially when some of us live in jurisdictions where we are well within our legal rights to do that very thing.
The Motorola Android phone I have has call recording in the stock phone app, so I just use that.
Nothing like that on the Samsungs I've owned. Next phone will be a Pixel, so we'll see.
> Next phone will be a Pixel, so we'll see.
Pixels don't have call recording enabled (in most regions?). It's frustrating that I'm forced to use a third-party app to provide a feature that the phone app should have built in but that Google decided I can't be trusted with.
EDIT: It used to be geo-blocked, now it's "This feature isn't available on Pixel."
In an active call, tap the 3 dot menu up at the top right, you should see the option to record.
That just gets me an option for "Start RTT Call". Nothing like call recording.
I'm on an S10e, which is not that new, so that could be part of it. I bought an older phone, because I really like my top grain leather case, so I wanted to get a phone which fit the case more than I wanted a more cutting edge phone.
At least GrapheneOS does have a built-in recording function.
It's inconvenient, but I've observed journalists who recorded calls by putting the phone on Speaker mode to increase the volume, and then used a second device (such as a laptop or iPad) to record the call.
Is that actually true? I always thought that was one of those internet sayings that had no basis in law, ie one party, two party protections etc...
It is. There's good information linked here.
https://recordinglaw.com/party-two-party-consent-states/
> They tried to argue that their employees can lie to your face to get you to buy a ticket under false pretense and then refuse to honor the promised terms. That's fucked.
Pretty standard behavior for big companies. Airlines and telcos are the utter worst... you have agent A on the phone on Monday, who promises X to be done by Wednesday. Thursday, you call again, get agent B, who says he doesn't see anything, not even a call log, from you, but of course he apologizes and it will be done by Friday. (Experienced customers of telcos will know that the drama will unfold that way for months... until you're fed up and involve lawyers)
I had a problem with Verizon FIOS that went on for more than half a year, where they'd charge me for a service I wasn't signed up for, then I'd call in to complain and demand a refund, then they'd refund me and apologize profusely and swear up and down that they had fixed the problem for sure and that it would definitely not happen again, then it would of course happen again the next month, rinse&repeat.
Finally I filed an FCC consumer complaint which then forces a written response from the company within 30 days. I got a call a few days later from someone at Verizon's "Executive Relations" who fixed it immediately. It was such a frustrating dance, but the real trick is that when this happens don't mess around with a hostile company. Just go directly to the regulator agency they're required to answer to.
I had this happen with Rogers! Another loved and treasured bigcorp in Canada! I called to tell them I was moving in 2 months and to cancel my service, and the agent says, oh well, if you want to still have internet in the meantime, you'll have to call back in two months when you move. Ok. Great. I do that. Agent number 2: Well, you didn't cancel with X many days of notice, so there's a cancellation fee now on your account. Pay up!
I suppose agent 1 was jerking my chain so he wouldn't take the hit on his retention metrics, so I don't blame him 100%. I blame Rogers' bullshit system of incentives for their employees and the bullshit contracts they force on consumers who have little to no choice in the market here.
I bet if we removed the requirement to get a lawyer and file a lawsuit that those behaviors would vanish real quick, if all you had to do as the wronged consumer was report to an authority that Company X is doing business dishonestly.
You are probably not wrong, however take note that this is exactly the line of thinking that led to the DMCA. Beware the law of unintended consequences.
To misquote you.
"I bet if we removed the requirement to get a lawyer and file a lawsuit that those behaviors would vanish real quick, if all you had to do as the wronged copyright holder was report to an authority(the hosting service) that Company X is infringing on your copyright."
It also led to things like easy small claims court, consumer protection agencies and DPOs which protect consumers against corporations here in Europe without them having to shell out thousands of euros for lawyers and court cases.
Do all the upsides outweigh all the downsides, such as DMCA?
Point to you there, but also I would never in a thousand years trust a company, especially a publicly traded one, with that kind of power.
> I bet if we removed the requirement to get a lawyer and file a lawsuit that those behaviors would vanish real quick, if all you had to do as the wronged consumer was report to an authority that Company X is doing business dishonestly.
They can still appeal the decisions of the authority in the courts of law.
They will just get the authority disbanded altogether.
Previously, on HN: `Amazon argues that national labor board is unconstitutional (apnews.com)` https://news.ycombinator.com/item?id=39411829
In Germany, we have the Verbraucherschutz (consumer protection) as a low-level institution to handle such claims. They can and do consolidate such reports and can file lawsuits if there is evidence of systemic misbehavior.
You can file in small claims without a lawyer.
I think I could price, for a few tens of thousands of dollars, a service that creates a bunch of such wronged consumers. There are a bunch of homeless people in San Francisco we would represent. To make it easy for someone on the streets to complain, we would actually operate as a non-profit that advocates on their behalf for having been wronged by the company. You could use my service to attack another company on demand, but I advise that you do it at critical moments. Ideally, two weeks before a big launch or so should do it, so that we have enough time to stagger out a sufficient number of complaints.
This isn't a new business idea. There are law firms that specialize in consumer class action suits, and part of their skillset is finding lots of wronged consumers to represent.
Signing up homeless people by the hundred isn't exactly the gold standard of what these firms do, but it's not a million miles removed.
Fortunately, in this case, the consumer had enough proof of what happened and the court rightly told Air Canada to get fucked with that argument.
I can sort of see it. On the one hand, it's reasonable to hold them accountable when an employee gives you the wrong discount. But if an employee, on their last day at work, decides to offer the next person calling all of the seats on a single flight for just $10, I think we'd all agree that it would be unreasonable to expect the airline to honor that offer.
It's the degree of misinformation that's relevant.
> But if an employee, on their last day at work, decides to offer the next person calling all of the seats on a single flight for just $10, I think we'd all agree that it would be unreasonable to expect the airline to honor that offer.
The airline is free to go after the lying employee for compensation if they find out too late. It is never acceptable for the airline to cheat their customers.
Standard for airlines
I once ordered a gift for my father for Christmas. The order page indicated that it would arrive on time. When it didn't arrive, I requested a refund. They then pointed to their FAQ page, where they said that orders during the holidays would incur extra processing time, and refused the refund.
I wrote back that unless they issued a refund, I would issue a chargeback. You don't get to present the customer with one thing and then do otherwise because you say so on a page the customer never read when ordering.
They eventually caved, but man, the nerve.
This actually sounds like an interesting case to me because the details make a huge legal difference in my mind. (But IANAL, maybe I'm entirely off base here.)
E.g., did they tell you the shipping date after you placed the order, or before? If it was afterward, then it can't have invalidated the contract... you agreed to it without knowing when it would ship. If they told you before, then was it before they knew your shipping address, or after? If it was beforehand, then again, it should've been clear that they wouldn't be able to guarantee it without knowing the address. If it was after they got the address but before you placed the order, then that makes for a strong case, since it was specific to your order and what you agreed to before placing it.
> If it was afterward, then it can't have invalidated the contract... you agreed to it without knowing when it would ship.
> Sellers have to ship your order within the time they (or their ads) say. That goes whether they say “2-Day Shipping” or “In Stock & Ships Today.” If they don’t give a time, they must ship within 30 days of when you placed your order.
from the FTC https://consumer.ftc.gov/articles/what-do-if-youre-billed-th...
I believe that's referring to "shipment" in the sense of "when this gets mailed", not the arrival time like we were discussing? I guess it might depend on where the delays were incurred, and what exactly was promised.
Why would they tell the date after placing? Every online shop I've ever used shows the shipping date together with the shipping price and shipping options, if there are any.
If they're going to ship immediately, then they can know before you place the order. If there's another entity involved (third party seller, backorder, etc.) then they might not be able to know when it will be shipped with much certainty.
I expect employees to know the correct answers and give them to me. When an employee says something that contradicts other policy pages, I take that as a change to company policy, because they represent the company.
If the company doesn't agree with that, then they need to show the employee was trained on company policy and was disciplined for failing to follow it (on a first offense maybe just a warning, but it needs to be a clear step on the path to firing the employee). Even then they should stand by their employee if the thing said was reasonable: refunding you a million dollars may be unreasonable, but refunding the purchase price is reasonable.
This is especially true since, as it comes to refund policies, businesses make it exceedingly difficult to sort through the information.
It’s not the consumer's fault that the AI hallucinated a result (as they are known to do with high frequency).
This line of argument is crazy and infuriating. "Air Canada essentially argued, 'the chatbot is a separate legal entity that is responsible for its own actions,' a court order said." Do they expect people to sue the chatbot? Are they also implying that people have to sue individual agents if they cause a problem?
> 27. Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.
https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/202...
Real legal comedy. Since this was in small claims court maybe it was an amateur on Air Canada's side?
If they could reasonably expect to be able to hire people who would agree to accept all liability incurred during their work for the company, they absolutely would.
Same with chatbots. Even better, because once it's "trained", you don't have to pay it.
There have been a few instances of expecting digital entities to shoulder the entirety of legal liability over the last few years; DAOs are another example of this in the crypto space.
Why are we posting this when it's just rephrasing an Ars Technica article? It's even mentioned at the bottom:
"This story originally appeared on Ars Technica."
Give the clicks to the original article:
https://arstechnica.com/tech-policy/2024/02/air-canada-must-...
Both sites are owned by Conde Nast
I don't like Ars Technica because they break reader mode and load articles chunk by chunk. I consider that hostile towards the user, and I'm glad this is on Wired, which works much better.
What browser? Just tried it out on Firefox for Android (version 122.1.0, with uBlock Origin enabled but JS still allowed on ars) and for the link above, I see the whole article after immediately switching to reader mode.
Reader mode also works for me. (Firefox 122.0 for Ubuntu)
In the early days of computerization, companies tried to dodge liability due to "computer errors". That didn't work, and I hope "it was the AI, not us" never gets allowed either.
> That didn't work
It worked in the British Post Office Scandal: https://en.m.wikipedia.org/wiki/British_Post_Office_scandal
Since it's a scandal, it worked until it didn't.
And AFAICT "the computer did it" wasn't the argument, it was "the computer did it so it must be correct because the experts said so".
So did they hold any humans accountable then? This wasn't the case when I checked, but I probably missed some updates.
> So did they hold any humans accountable then?
Not yet. The Inquiry is still taking evidence. They haven't taken evidence from the big-hitters yet, that begins in April.
In fact the Inquiry doesn't "hold humans accountable"; but they can compel witnesses, who testify under oath.
Gerald Barnes is the Fujitsu engineer who stated in the prosecution cases that Horizon was reliable ("robust"). His testimony to the Inquiry has been delayed, because on the morning he was supposed to testify, the Post Office "discovered" a million or so emails that they'd failed to disclose. So he'll be on the stand in April, along with the senior execs.
The Inquiry videos make quite pleasant watching; the lawyers and the judge are immaculately polite, there are no trick questions, and it's all about finding out what happened. I'm looking forward to seeing the senior execs on the stand.
The police are following the Inquiry; nobody's been charged, and my guess is they'll hold off on charging people until the Inquiry is over (that's part of the purpose of statutory inquiries). So the succession of Post Office Ministers that have overseen this disgrace will all be out of office by then.
I think this is a very different kind of thing. IIUC:
With Air Canada, the question is whether or not a chat bot can be treated as a company representative that makes binding commitments.
With the British Post Office, the issue is whether or not a software system is inscrutable during legal proceedings.
> With the British Post Office, the issue is whether or not a software system is inscrutable
Not sure what "inscrutable" means in that context. Is it supposed to mean it can't be scrutinized?
A law was passed some years ago that says evidence obtained from a computer system should be accepted as true, unless evidence is provided that opens it to question. That means, in the Post Office case, that postmasters couldn't demand that the Post Office prove that Horizon was working correctly. They had to prove that it was defective, which was difficult; they were kicked out of their shops, and denied access to their own records, including the Horizon terminals they had been using.
Of course, if the chatbot can't be trusted, what is the point of it? I'll have to get something else to verify anything the chatbot says. Sure, the chatbot can say "hello" in 10,000 words or whatever, but it can't do anything useful.
The fallout from this isn't done yet.
They still do. Bank errors are an example.
https://decisions.civilresolutionbc.ca/crt/crtd/en/item/5254...
The resolution is an amazingly clear piece of legal writing that explains the thought process behind the decision and then awards the damages. I might end up using this pattern for writing out cause and effect.
Thanks for sharing. I really enjoyed the read and the concise decision-making and legal terms used.
Good. If you use a tool that does not give correct answers, you should be held liable for the mistake. The takeaway is: you had better vet your tool. If the amount of money you lose from the tool's mistakes is less than the money you saved by using it, then you come out ahead; if not, you may want to reconsider that cost-saving measure.
What if the company responds that they don't know how to vet the tool?
After all, we're still not 100% sure how LLMs make their decisions in what they string together as output, so the company's not _technically_ lying.
I'm glad to see that cases are starting to be decided about the liability of using AI generated content. This is something the general public should not need to second-guess.
Honestly LLMs aren’t ready for customer service. If I’m talking to a company I need to have a high degree of accuracy. LLMs are less accurate than trained humans.
Me: Chatbot, are you speaking for $COMPANY? Are you $COMPANY's agent? Can I take your statements as being $COMPANY's legal position?
Chatbot: <waffle>
Me: Please put me through to a person that can articulate $COMPANY's legal position. This conversation can serve no more purpose.
This is my personal perception, but I think it's important that there is a clear definition of liability so that companies are able to make their own determinations of what is ready and what isn't.
Few front-line agents have deep knowledge about their company's products or services. They trace their finger through some branches on a flowchart then dictate from a knowledgebase.
Flowcharts are reliable
Agreed, and I think following flowchart-type logic is within today's AI capabilities. This thread is full of people getting inaccurate responses from humans. I think when it comes to accuracy, a well-trained LLM likely beats the status quo of high-churn low-paid employees following a rote diagram.
Of course there should always be a way to reach a human, a senior agent with actual knowledge that can be applied in subjective ways to solve more complex problems.
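To make the "flowchart-type logic" point above concrete, here is a minimal sketch of the idea that the support flow itself is a vetted data structure and only approved policy text can ever be returned; the model would only map free-form user input onto the allowed options at each node. The node names, policy wording, and the `run` helper are all invented for illustration, not any real company's system.

```python
# Toy sketch: a support flowchart encoded as data. The bot can only ever
# surface the vetted "answer" strings below, never free-generated policy.
FLOW = {
    "start": {
        "question": "Is this about a refund or a booking change?",
        "options": {"refund": "refund_type", "change": "change_fee"},
    },
    "refund_type": {
        "question": "Was the fare refundable?",
        "options": {"yes": "refund_yes", "no": "refund_no"},
    },
    "refund_yes": {"answer": "Refundable fares can be refunded within 24 hours of booking."},
    "refund_no": {"answer": "Non-refundable fares are eligible for travel credit only."},
    "change_fee": {"answer": "Booking changes incur the fee shown at checkout."},
}

def run(choices):
    """Walk the flowchart with a pre-decided list of choices (stand-in for the user)."""
    node = FLOW["start"]
    for choice in choices:
        node = FLOW[node["options"][choice]]
        if "answer" in node:
            return node["answer"]  # only vetted policy text is ever returned
    return node["question"]  # still mid-flow: ask the next question

if __name__ == "__main__":
    print(run(["refund", "no"]))
```

In a real deployment the LLM's job would be limited to classifying the customer's message into one of the current node's options, which is exactly the rote-diagram work the comment describes.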
Peter Watts comments:
* https://www.rifters.com/crawl/?p=10977
My father died in hospice the night before a flight to see him. I missed the flight because there was no longer any reason to get to the airport before dawn. I called to reschedule a few hours later.
The human on the other end rescheduled and gave me a bereavement rate. She told me it was less money, but didn't mention the reason. I didn't put that together until later. She just helped me out because she had compassion.
I am too cynical to think that an AI controlled by a corporation will do this.
Good. I hope people out there also discover chatbot holes and exploit them. Chatbots are some of the most useless, time-wasting things out there; they serve absolutely no purpose. Most of them work exactly like nested dropdowns where you select one option after the other. And when you really want to talk to a human being, in almost every scenario that option is not available. What a wonderful world powered by "AI".
Would a company be liable to uphold its promises if a rogue human customer service agent promised something ridiculous such as 1 million dollars worth of free flights?
No, the specific employee would most likely be liable if there is criminal conduct (this varies obviously). But a chatbot is not a person.
> But a chatbot is not a person.
We live in an interesting world. In the US, a corporation is legally a person, and a chatbot is not a person[0]. I'm looking forward to the first Supreme Court case involving a corporation consisting of chatbots.
[0] I'm handwaving in this lead-in to the fantasy here, so, dear reader, please give me a break for oversimplifying and ignoring technicalities.
The company would have to prove the human was knowingly acting outside of their job/training and was disciplined for that. Such discipline must be on the path to firing the employee if the behavior isn't corrected. Note that training is important here: an employee who isn't trained is assumed to have more authorization than someone who is.
Or in this case they need to take the AI out of service immediately until they can get a corrected version that does not do such a thing. I will accept that the AI can be tricked into doing such a thing and remain in service, but only if they can show the tricks are something an honest person wouldn't attempt. (I don't know what that would look like, but I'll allow the idea for someone else to propose in enough detail that we can debate whether an honest person would ever do it.)
Google "too good to be true contract law" and there's some info, seems the answer is "no".
hopefully
I don't even see how this is a big story. An expressed representation about what a product is or does, how it works, and the terms of sale in consumer contexts is binding on the seller. It is the same in America. If you go into a store and they say that you can return it if you don't like it, then you can. If you buy a TV and the guy at the store tells you it also makes pancakes, you can get your money back if it turns out that it does not make pancakes. This is true even if the representation is made by some 16-year-old kid working at Best Buy. By extension, it would still be true even if it is made by an automaton.
The interest in this story comes from the fact that Air Canada tried to fight it using the argument that they aren’t liable for anything their agents say. Businesses around the world are so eager to start laying people off that it’s almost certain that they will not be the only ones to try a variation on that claim. I see the important point being reinforcing liability laws and especially not allowing them to shirk responsibility with disclaimers — otherwise we’ll just see more companies cut staffing on their phone support to force people to use a system with a “you are responsible for checking everything this chat bot says for accuracy” notice.
I mean, they're welcome to try, and this can't be the first attempt. But there's plenty of jurisprudence at least in the USA that you can't disclaim expressed representations.
I think this article's full import is not being properly processed yet by a lot of people. The stock market is in an absolute AI frenzy. But this article trashes one of the current boom's biggest supposed markets. If AIs can't be put in contact with customers without exposing the company to an expected liability cost greater than the cost of a human customer representative, one of their major supposed use cases is gone, and that means the money for that use case is gone too. There's probably halo effects in a lot of other uses as well.
Now, in the medium or long term, I expect there to be AIs that will be able to do this sort of thing just fine. As I like to say I expect future AIs will not "be" LLMs but merely use LLMs as one of their component parts, and the design as a whole will in fact be able to accurately and reliably relay corporate policies as a result. But the stock market is not currently priced based on "AIs will be pretty awesome in 2029", they're priced on "AIs are going to be pretty awesome in July".
LLMs are a huge step forward, but they really aren't suitable for a lot of uses people are trying to put them to in the near term. They don't really "know" things, they're really, really good at guessing them. Now, I don't mean this in the somewhat tedious "what is knowing anyhow" sense, I mean that they really don't have any sort of "facts" in them, just really, really good language skills. I fully expect that people are working on this and the problem will be solved in some manner and we will be able to say that there is an AI design that "knows" things. For instance, see this: https://deepmind.google/discover/blog/alphageometry-an-olymp... That's in the direction of what I'm talking about; this system does not just babble things that "look" or "sound" like geometry proofs, it "knows" it is doing geometry proofs. This is not quite ready to be fed a corporate policy document, but it is in that direction. But that's got some work to be done yet.
(And again, I'm really not interested in another rehash of what "knows" really means. In this specific case I'm speaking of the vector from "a language model" and "a language model + something else like a symbolic engine" as described in that post, where I'm simply defining the latter as "knowing" more about geometry than the former.)
Reminds me of https://www.moralmachine.net/
The real story here is that Air Canada's lawyers argued, among other things, that the chatbot was a separate and independent legal entity from Air Canada and therefore Air Canada was not obligated to honor the made up policy.
In other words, this was possibly the first historical argument made in a court that AIs are sentient and not automated chattel.
Or maybe it simply means that it's a service offered by another company
Did Air Canada use ChatGPT for their legal defense?
Also:
> Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions,"
What does this mean?
That the chatbot was provided by a third party hence they are responsible for the content provided?
Or that, literally, a chat bot can be considered a legal entity?
The real desire here is to get it to promise a lifetime of free service.
edit: arguing that the chatbot is a separate legal entity is a wild claim. It would imply to me that Air Canada could sue the AI company for damages if it makes bad promises, not that Air Canada is excused from paying the customer.
If I were the agent who made that promise what would they do to me? Fire me - then fire the chatbot. Put me through training - then train the chatbot. Stand behind me and do what I say - then stand behind their chatbot.
I bet they already have policies in place for this - while how they apply to AI may be different, they shouldn't let this slide.
The key term here is "apparent authority". You don't need to be granted authority for each and every thing you do, but if you start grossly doing things you were not supposed to do, that's mostly just fraud on your end.
The common example in textbooks is someone continuing to do business as an employee after having been fired. They can still make valid deals with other entities due to apparent authority if they're not clearly made out to be separated.
This is pretty close: https://www.msn.com/en-us/autos/news/ai-blunder-chat-bot-sel...
> promise a lifetime of free service.
That might be too wishful thinking. The tribunal would take into account damages, and whether it was reasonable to believe that you're entitled to free service.
In this case, the chatbot promised a ~$800 discount, and the tribunal awarded ~$800. But I doubt they'd make the same decision again, or deem the lifetime service enforceable/un-cancellable.
I think it's within the bounds of reason if you can cite emotional damages to grant pretty much any recourse. It maybe won't fly if you're clearly trying to trick the AI, but I think there will be general plausible deniability.
It may feel weird, but it's utterly insane to delegate customer interactions to an agent that has nobody's interest in mind, not even their own, whom you cannot trust to abide by policy.
Similarly, if the bot negotiated any sort of special deal, I think it would be very, very difficult to argue that it lacked apparent authority to make deals or that it's not a fair consideration.
I don't really understand why generative LLM output is being presented to (and interpreted by) users as intelligence. This seems incredibly disingenuous, and I would prefer to see it characterized as fiction, creative writing, realism, etc. -- words that make it clear to average people that while this might be entertaining and even mimic reality, it's completely distinct from the writing of a person. Disclaimers (often small, low contrast, or "ToS;DR"-esque) are insufficient when the UI is specifically crafted to look like chatting with a person.
This type of failure is becoming more and more common as companies roll out AI systems without robust accuracy audits & human supervision.
I'm working on something to make this easier - reach out if I can be helpful (email in bio).
This mistake could have been made by a human agent as well, and the consequences would most likely have been the same, wouldn't they?
This is why you link the source content from the RAG pipeline instead of pretending the bot knows everything.
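As a rough illustration of that point, here is a minimal sketch of a retrieval step that returns the source link alongside the generated reply, so the user (and the company) can check the claim against the authoritative policy page. The policy corpus, the URLs, the keyword-overlap scoring, and the `answer` helper are all hypothetical stand-ins, not a real RAG stack.

```python
# Minimal sketch: retrieve the most relevant policy snippet and return its
# source URL with the answer, instead of letting the bot assert policy from memory.
POLICY_DOCS = [
    {
        "url": "https://example.com/policies/bereavement-fares",  # hypothetical URL
        "text": "Bereavement fare requests must be submitted before travel. "
                "Refunds cannot be claimed retroactively after the flight.",
    },
    {
        "url": "https://example.com/policies/refunds",  # hypothetical URL
        "text": "Refund requests for refundable fares may be made within 24 hours "
                "of booking for a full refund.",
    },
]

def retrieve(question: str) -> dict:
    """Pick the policy snippet with the most word overlap with the question."""
    q_words = set(question.lower().split())
    return max(POLICY_DOCS, key=lambda d: len(q_words & set(d["text"].lower().split())))

def answer(question: str) -> dict:
    doc = retrieve(question)
    # A real pipeline would prompt the LLM with doc["text"] and instruct it to
    # answer only from that text; here we echo the snippet to keep the sketch runnable.
    return {
        "answer": doc["text"],
        "source": doc["url"],  # the link the user can verify against
    }

if __name__ == "__main__":
    reply = answer("Can I get a bereavement fare refund after the flight?")
    print(reply["answer"])
    print("Source:", reply["source"])
```

The design choice is the point: even if the generated wording is imperfect, the citation lets the customer see the actual policy, which is exactly what was missing in the Air Canada exchange.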
This should hold. If AI can remind us of what humane policies are, so be it.
I would expect something like this to severely stunt chatbot adoption.
I think the only reason this should go through is if it didn't have a proper disclaimer at the beginning of the conversation.
Sometimes the courts get it right
I hope there can be some reasonable middle ground. I think in this case it's good that the woman got her money. But Air Canada, presumably scared of what the next case might cost them, decided to turn the chatbot off entirely. I think that's a bit unfortunate.
I don't know what the solution looks like. Maybe some combination of courts only upholding "reasonable" claims by AI. And then insurance to cover the gaps?
If the company is not confident in the output of the chatbot, or in the liability, then they should turn it off. All business decisions are based on assessing risk. Ultimately either insurance will need to step in, or the company is willing to take the hit for mistakes their system makes because it saves them more money. This is how fraud protection is assessed with banks, credit cards, etc. as well.
I agree. If bad information is given and the customer makes a decision on it, then either the customer must bear the risk or the business. I think it's more fair for the business to be out the money.
In this case this could have just been a $650 'bug bounty' had Air Canada issued a quick refund. A reasonable QA expense to find out that your AI agent is misleading your customers.
Hmm. “Let me turn on this lead water pipeline, which may or may not poison an entire town, and then blame bad luck when it does.”
I don’t think failing to take adequate precautions is preventing AI tools from being used. I think this was plain corporate incompetence and greediness. They started using a system without properly testing it and don’t want to pay for the consequences.
What if Boeing says, “Oops, we forgot to put in the bolts that keep the door in place, but we shouldn’t be held accountable for our actions”? The fact that they used a tool for it shouldn’t change the outcome, unless we are going to create indemnity for big corporations.
>> "... turn the chat bot off entirely"
This sounds like an ideal outcome!
Why is that unfortunate? I honestly don't think there's any value in using AI for this if it's not guaranteed to give you correct answer.
I think that there can be an error rate that is low enough to still be very useful and a net-positive for the economy. But if there is a small chance to just end the company on the spot, then no one will use it despite the benefits. So how can we have the good parts without the bad, this is my question.
I guess the onus is on people to prove that using AI specifically is somehow a net positive for the economy, because that's in no way a given. But I don't feel good about Air Canada trying to make that case.
The "reasonable" middle ground is that chatbots should not be used unless and until their answers are reliable.
I don't see why chatbots can't be kept to the same standard as human staff. If an airline support agent lied to me to sell me tickets, no shit I'd want a refund and compensation! Chatbots should be allowed to be wrong, but the company should be prepared to face the consequences of that.
I struggle to comprehend how it's "unfortunate" that they turned off the customer support bot that lied to people.
Ideally they would fix it rather than give up on the idea entirely.
Or just hire a customer service department. That worked fine for the last hundred or so years.