Without even looking at the AI part, I have a single question: Did anybody investigate? That's it.
Whether it's AI that flagged her, a witness who saw her, or her IP address appearing in the logs: did anybody bother to ask her, "Where were you on July 10th, between 3 and 4pm?" But that's not what happened. They saw the data and said "we got her".
But this is the worst part of the story:
> And after her ordeal, she never plans to return to the state: “I’m just glad it’s over,” she told WDAY. “I’ll never go back to North Dakota.”
That's the lesson? Never go back to North Dakota. No, challenge the entire system. A few years back it was a kid accused of shoplifting [0]. Then a man dragged away while his family was crying [1]. Unless we fight back, we are all guilty until cleared.
[0]: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/
[1]: https://news.ycombinator.com/item?id=23628394
I think you missed many important points.
"The trauma, loss of liberty, and reputational damage cannot be easily fixed,” Lipps' lawyers told CNN in an email.
That sounds a LOT like a statement you make before suing for damages, not to mention they literally say "Her lawyers are exploring civil rights claims but have yet to file a lawsuit, they said."
This lady probably just wants to get back to normal life and get some money for the hell they put her through. She has never been on an airplane before; I doubt she is going to take on the entire system like you suggest. "Challenge the entire system" is easier said than done. What does that even mean, exactly?
It was worse than that; see the reporting from an earlier story [0].
There is not a jury in the country that will side against the woman. (Also, what happened to journalism? No Oxford comma?)
[0] https://news.ycombinator.com/item?id=47356968
This is a weak or misleading story about AI.
First, the detective used the FaceSketchID system, which has been around since around 2014. It is not new or uniquely tied to modern AI.
Second, the system only suggests possible matches. It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to court to issue the warrant.
The real question is why she was held in jail for four months. That is the part that I do not understand. My understanding is that there is a 30-day limit (the requesting state must pick up the defendant within 30 days). Regarding the individual involved, Angela Lipps, she has reportedly been arrested before, so it is possible she was on parole. So maybe they were holding her because of that?
Can someone clarify how that process works?
In the US there are no consequences for people in power failing to follow procedures, laws or regulations - except for being told to stop doing whatever illegal thing they're doing, and possibly getting sued way down the line, which gets paid by taxpayers.
From reading more into the case, it seems the issue may be related to how her lawyer handled the case.
They probably did an "identity challenge", arguing that she is not the right person. But from Tennessee's perspective, she was considered the correct person to arrest, so there was no "mistaken identity" in their system. In other words, North Dakota wanted person X, and here is person X.
Once a judge in North Dakota reviewed the full evidence (and found that the person they issued the arrest warrant for is not the one they want), the case was dismissed.
Money quote from someone quoted in the article:
"[I]t’s not just a technology problem, it’s a technology and people problem."
I can't. I just can't.
I've been hearing "it's not just... it's a" touted as a sign of AI writing recently. Personally I think it's an AI sign because it's a human thinking-shortcut sign, and AI copies it. But it would be funny if AI wrote the article and then hallucinated this specific money quote.
I doubt this happened here, but FWIW, AI does have a habit of "cleaning up" (read: hallucinating) interview transcript quotes if you ask it to go through a transcript and pull quotes. You have to prompt AI very specifically to get it to not "clean up" the quotes when you ask it to do that task.
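FWIW, the kind of explicit instruction that tends to help looks something like this (hypothetical wording, not a guaranteed fix):

```text
Pull direct quotes from the transcript below. Copy each quote verbatim,
character for character, including filler words, false starts, and
grammatical errors. Do not paraphrase, shorten, or "clean up" any quote.
If a quote is garbled or unclear, return it unchanged and flag it.
```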
Earlier discussion (405 comments):
https://news.ycombinator.com/item?id=47356968
AI is a liability issue waiting to happen. And this is just another example.
It's the opposite, it's absolution from liability. "The AI did it" is the ultimate excuse to avoid accepting responsibility and consequences.
Courts are already refusing to accept this excuse.
https://pub.towardsai.net/the-air-gapped-chronicles-the-cour...
... which is why the institutions that assign responsibility and consequences need to make it really clear that excuse won't fly. With illustrative examples.
It’s a tool. Used incorrectly, it will lead to errors. Just like a hammer: used incorrectly, it could hit the user's finger.
This tool, however, is specifically built for mass surveillance. It serves no other purpose. The tool is broken, and everybody knows it. The tool makers are at least as guilty as those who use it.
The tool, like Google search, is likely biased towards returning results regardless of confidence.
What kind of outcome results from misuse? Clearly a hammer's misuse has very little in common with a global, hivemind network used in high-stake campaigns.
Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.
Otherwise, I'd say it's an extremely lazy argument.
There is enormous variability in how hard a tool is to use correctly, how likely it is to go wrong, and how severe the consequences are. AI has a wide range on all those variables because its use cases vary so widely compared to a hammer.
The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.
When somebody uses a tool to hurt somebody, they need to be held accountable. If I smack you with a hammer, that needs to be prosecuted. Using AI is no different.
The problem here is incidental to the tool; it was done by the cops and therefore nobody will be held accountable.
Systems are also a tool. Whoever institutes and helps build the system that systematically results in harm is also responsible.
That would be the vendors, the system planners, and the institutions that greenlit this. It would also include the larger financial tech circle that is trying to drive large-scale AI adoption. Like Peter Thiel, who sees technology as an "alternative to politics", i.e., a way to circumvent democracy [1].
[1] https://stavroulapabst.substack.com/p/techxgeopolitics-18-te...
Used incorrectly will lead to errors.
Only one small problem --- there is no way to tell if you are using it "correctly".
The only way to be sure is to not use it.
Using it basically boils down to, "Do you feel lucky?".
The Fargo police didn't get lucky in this case. And now the liability kicks in.
Some basic investigatory police work (the kind they did before AI) would have revealed the mistake before an innocent woman’s life was destroyed.
Yes. But doing the investigation negates much of the incentive for using AI.
Look for something similar to play out elsewhere --- using unreliable tools for decision making is not a good, responsible business plan. And lawyers are just waiting to press the point.
In this case it sounds as though AI could have been used to generate preliminary leads. When someone calls a tip line with information, police don’t just take their word for it, they investigate it. They know that tips they receive may be incorrect. They should have done the exact same here, but they didn’t.
I’m very opposed to AI in general, but this one is clearly human failure.
The noteworthy AI angle is the undeserved credence police gave to AI information. But that is ultimately their failure; they should be investigating all information they receive.
> ...but this one is clearly human failure.
Absolutely.
The failure starts with tool vendors who market these statistical/probabilistic pattern searchers as "intelligent". The Fargo police failed to fully evaluate these marketing claims before applying them to their work.
So in the same way that the failure rolled down hill, liability needs to roll back up.
AI can provide leads. Someone still needs to verify them and decide.
Generating and verifying bad leads costs money. Not verifying bad leads can cost much more.
At some point, you have to decide if wasting good money on bad intel makes sense.
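To make that trade-off concrete, here is a back-of-the-envelope sketch with entirely invented numbers (verification cost, false-positive rate, settlement size are all assumptions, not figures from the case):

```python
# Invented numbers throughout -- the point is the shape of the trade-off,
# not the exact figures.
verify_cost_per_lead = 500      # assumed cost of basic follow-up police work
leads = 200                     # assumed number of AI-generated leads
false_positive_rate = 0.05      # assumed fraction of leads that are wrong
lawsuit_cost = 1_000_000        # assumed cost of one wrongful-arrest settlement

# Verifying everything: pay the follow-up cost on every lead.
cost_with_verification = leads * verify_cost_per_lead

# Skipping verification: assume (pessimistically) each unverified false
# positive becomes a wrongful arrest and a settlement.
expected_cost_without = leads * false_positive_rate * lawsuit_cost

print(cost_with_verification)   # 100000
print(expected_cost_without)    # 10000000
```

Under these (invented) assumptions, skipping verification is two orders of magnitude more expensive in expectation, which is the parent's point about bad intel.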
Now the "qualified" immunity kicks in.
We will find out. But relying on AI is likely to cost the city of Fargo in one way or another. They say they have already stopped using AI and returned to good old fashioned human investigation.
https://www.lawlegalhub.com/how-much-is-a-wrongful-arrest-la...
Dynamite is a tool. But we don't hand it out to anyone who wants to play with it.
We used to until quite recently. Anybody could buy dynamite at the hardware store. We had to end this because of criminals using it to hurt people.
I admit I was surprised to see you could buy dynamite in a hardware store until 1970.
Look for AI to follow a similar trajectory over time.
Impossible at this point. You cannot download dynamite.
Yes, regulation is inevitable.
Regulation is impossible. The AI barons literally control the federal government, so not even state regulations get tried.
Except this time the criminals are police.
They are far more often than anyone wants to admit. That's how we got 25% of the world's prison population.
AI feels closer to a firearm than a hammer when assessing law enforcement's ability to quickly do massive, unrecoverable harm.
Now cruel people wield a two-tiered shield. It's not an accident that this happened to a woman, but make no mistake they are coming for men next.
I would say much more likely that it was because she was poor and couldn't afford a good lawyer.
This. She likely had a shitty public defender who did the bare minimum because they were catering to paying clients. The state was playing hardball because they wanted to make a profit off a poor person with a shitty defense, and the public defender was sitting on the bench at a tee-ball tournament because they weren't getting paid enough and didn't want to try.
What? Women are much more sympathetic figures when it comes to crime and punishment. And there are 10x more men in prison in America than women. If you were trying to "introduce" some nefarious law enforcement system to the US, you would use it on undesirable men first (drug addicts and gang members).
You think they deliberately chose to do this to a woman? Why?
Probably just reading the room, with states like Texas making abortion illegal and allowing random citizens to enforce that.
Famously, abortions are a woman thing.
Anyway, looking through the facts, it's just some random woman. There's better evidence that these facial recognition systems are much worse on minorities than they are across genders.
One interesting bias is own-gender bias: https://pmc.ncbi.nlm.nih.gov/articles/PMC11841357/
Racial bias:
https://mitsloan.mit.edu/ideas-made-to-matter/unmasking-bias...
Miss rates:
https://par.nsf.gov/servlets/purl/10358566
Although you can probably interpret the facts differently, we've seen how any search function gets enshittified: once people get used to searching for things, they tend to select a system that returns results over one that returns nothing.
Rather than blaming themselves, users blame the search engine. As such, any search system over time will bias toward returning results (e.g., Outlook) rather than accuracy.
So if these systems more easily miss certain classes of people (women, minorities), those people are more likely to be surfaced as inaccurate matches, while men, whose matches carry higher confidence, are more likely to be screened out.
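A minimal sketch of that failure mode, with invented names and similarity scores: a results-biased system always surfaces its best guess, while a confidence-thresholded one is allowed to return nothing.

```python
# Hypothetical sketch (names and scores are invented): comparing a
# "always return the best match" policy against a confidence threshold.

def top1(scores):
    """Always surfaces a match, however weak -- what a results-biased
    search system tends to do."""
    return max(scores, key=scores.get)

def thresholded(scores, cutoff=0.90):
    """Only surfaces a match when the score clears a confidence bar;
    otherwise returns None instead of guessing."""
    name = max(scores, key=scores.get)
    return name if scores[name] >= cutoff else None

# Invented similarity scores for one probe photo against a gallery.
scores = {"Angela L.": 0.62, "Jane D.": 0.58, "Mary S.": 0.41}

print(top1(scores))          # surfaces "Angela L." despite low confidence
print(thresholded(scores))   # None -- refuses to guess
```

The results-biased version hands the detective a name either way, which is exactly the behavior that needs human verification downstream.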
That's how I interpret this 2-second comment.
This appears to be The Sort in action again. The 50% of Americans below IQ 100 also need jobs and so on. Perhaps with AI pushing out people from high-intelligence jobs, we will get a large number of intelligent people in jobs like police or retail pharmacists or so on. Currently, these guys can barely read text and follow instructions. In fact, most of them are likely functionally illiterate and are coaxed through their programs by a system that is punished if it does not pass people.
The average policeman will find his brain sorely taxed by the average incident report form. Explaining the phrase "false positive" to him is like trying to explain calculus to a mouse.