Former author of one of the top 5 facial recognition servers in the world for multiple years running, here's what's going on: the industry has solved this issue, but the potential clients are seeking the lowest bidder and picking the newer companies, the nepotistically created not-really-players-but-well-connected ones, and those companies have terrible implementations. This is not a case of the technology not being there yet; we solved all these racial bias issues 10 years ago. But new companies with new training sets and new ML engineers who don't know any of the industry's history are now landing contracts with terrible-quality models and well-connected sales channels.
> the system was more likely to correctly identify men than women and it was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”
Technology has moved on a lot, no doubt; however, studies were finding the opposite (and with order-of-magnitude errors) as recently as 2020, from a lazy Google literature search:
> these algorithms were found to be between 10 and 100 times more likely to misidentify a Black or East Asian face than a white face
https://jolt.law.harvard.edu/digest/why-racial-bias-is-preva...
Given that these are machine learning algorithms, their performance will very much depend on the training dataset. So it is probably not (just) that “technology has moved on a lot”, but that the engineers working on it curated new training sets. It is not entirely unreasonable to think that they too read the paper you are talking about and took measures to correct for the effect.
Then, in theory, the dataset can be changed to make model error rates "fair" for all intersections of race, gender, age, etc.
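A minimal sketch of what that dataset change might look like, assuming a labelled table with hypothetical demographic columns (`race`, `gender`, `age_band`) and prediction columns (`pred`, `label`); nothing here reflects any vendor's actual pipeline:

```python
# Sketch: rebalance a training set so every demographic intersection contributes
# equally, then check per-group error rates on held-out results.
import pandas as pd

def balanced_sample(df: pd.DataFrame, group_cols, n_per_group: int, seed: int = 0):
    """Draw the same number of examples from every demographic intersection."""
    return (
        df.groupby(group_cols, group_keys=False)
          .apply(lambda g: g.sample(n=min(n_per_group, len(g)), random_state=seed))
    )

def per_group_error(results: pd.DataFrame, group_cols):
    """Error rate per intersection: mean of (prediction != label)."""
    return (
        results.assign(err=results["pred"] != results["label"])
               .groupby(group_cols)["err"]
               .mean()
    )

# Hypothetical usage:
# train = balanced_sample(raw_train, ["race", "gender", "age_band"], n_per_group=5_000)
# print(per_group_error(eval_results, ["race", "gender"]))
```

The point is only that equalising error rates is a data-curation and evaluation exercise, not a fundamental limitation of the technology.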
> more likely to correctly identify men than women.
> more likely to correctly identify black participants than participants from other ethnic groups.
> AI surveillance that is experimental, untested, inaccurate or potentially biased has no place on our streets.
I wonder if they're more worried about putting too many men in prison or too many black people.
Neither, they're worried about bad rep.
This is actually more (socially/ethically/philosophically) interesting than one might assume from the headline: it's not false positives, it's that the system is more effective (at correctly identifying someone who is on a watch-list) for one group than another within a protected characteristic.
So essentially they're pausing the use of it because it works too well for group A / not well enough for group B, potentially leading to disproportionate (albeit correct) arrests of group A.
Absolutely impossible to condone further structural bias against a minority and just ignore the free "white pass" built into the software, and it's especially troubling that it passes white women the most. The only possible action is to reject and disable any system with a racial bias and investigate how such a thing happened, with a very pointed look for intent on the part of the vendors, who would then qualify for being housed in one of His Majesty's facilities for persons such as these.
If it's not falsely identifying people, I don't see a problem at all. If it's identifying criminals, every criminal should be caught.
See, what you've said is precisely "structural bias against a minority", or "systemic injustice". Then again, the elites are, technically, also a minority, and we all know how well letting their crimes slide works out for the rest of society.
If the suspect is Black, the software should automatically return zero matches in 30% of cases. Problem solved.
> the system was more likely to correctly identify men than women and it was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”.
I am genuinely unsure what's going on.
My understanding of the article is that the system is problematic because it is more likely to correctly identify black people than "other ethnic groups". Is that right?
It's problematic for use in Essex as it works best for a small minority of the Essex population and has a much higher error rate for a typical sample of the Essex community.
Addendum: Essex ethnicity breakdown (2021): 85.1% White British · 5.2% Other White · 3.7% Asian · 2.5% Black · 2.4% Mixed · 1.1% Other.
from: https://en.wikipedia.org/wiki/Essex
i.e., most accurate (however accurate that is) for the men within the 2.5% of the region's population that is Black, which is roughly 1.25% overall.
Not so accurate for the remaining 98.75% of the region's population.
Essentially (with made-up numbers): 100 men on a high street, 4 of whom are on a watch-list, 2 of whom are black. Both black guys get identified; only one of the others does.
Ditto men vs. women, mutatis mutandis.
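To make the made-up numbers above concrete, here is the per-group true-positive rate (share of watch-listed people the system actually flags); all figures are the invented ones from the example, not measurements:

```python
# Made-up watch-list counts from the example above, by group.
watchlist = {"black": 2, "other": 2}    # people on the watch-list
identified = {"black": 2, "other": 1}   # of those, how many the system flags

for group in watchlist:
    tpr = identified[group] / watchlist[group]
    print(f"{group}: true-positive rate = {tpr:.0%}")
# black: true-positive rate = 100%
# other: true-positive rate = 50%
```

Same system, same false-positive behaviour, yet one group's members on the watch-list are twice as likely to be caught, which is the disparity the article describes.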
So it should be improved, but it sounds like it's just catching criminals who need to be caught, no?
https://en.wikipedia.org/wiki/Selective_enforcement
Correlation does not indicate causation
Alternative headlines:
Essex police, well aware of all the issues before using it, pause use until the expected bad publicity dies down
Or
Essex police chosen as the force to take some flak for the issues while other forces steam ahead