Humans are inherently prone to bias, some more than others. There is increasing evidence, however, that algorithms and artificial intelligence (AI), often assumed to be objective, can be biased as well.

A White House report issued during the Obama administration warned that automated decision-making "raises difficult questions about how to ensure that discriminatory effects resulting from automated decision processes, whether intended or not, can be detected, measured, and redressed."

A growing body of research suggests that algorithms and AI have disproportionate impacts on groups that are already marginalized, especially people of color.

Ironically, the very tools we are using to move into the future may be replicating a troubling part of our past and reinforcing social inequalities.


Data and Critical Thinking

Data-driven technology is not going anywhere; in fact, it is becoming more powerful and more prevalent by the second, and evidence of its biases is growing with it. A widely used algorithm called COMPAS assesses how likely criminal defendants and convicts are to commit a crime in the future, and its risk scores carry weight in the sentences judges hand down.

On the surface, COMPAS appears unbiased: white and black defendants given higher risk scores re-offended at roughly the same rate. However, a closer analysis by ProPublica found that, when you examine the kinds of mistakes the system made, black defendants were almost twice as likely to be wrongly labeled as likely to re-offend, and were potentially treated more harshly by the justice system as a result.

On the other hand, white defendants who committed a new crime in the two years after their COMPAS assessment were twice as likely as black defendants to have been mislabeled as low-risk. (COMPAS developer Northpointe, which recently rebranded as Equivant, issued a rebuttal in response to the ProPublica analysis; ProPublica, in turn, issued a counter-rebuttal.)

Cathy O'Neil, a data scientist who wrote the National Book Award-nominated "Weapons of Math Destruction," said, "Northpointe answers the question of how accurate it is for white people and black people," adding, "but it does not ask or care about the question of how inaccurate it is for white people and black people. How many times are you mislabeling somebody as high-risk?"
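To make O'Neil's distinction concrete, the sketch below uses a handful of made-up records (not ProPublica's data or Northpointe's model) to show how a tool can look "accurate" in Northpointe's sense while still making mistakes unevenly: the re-offense rate among defendants labeled high-risk is the same for both groups, yet the false positive rate, meaning how often someone who did not re-offend was labeled high-risk, is twice as high for one group.

```python
# Toy illustration only: invented records, not real COMPAS data.
# Each record is (group, labeled_high_risk, re_offended_within_two_years).
records = [
    ("white", True, True), ("white", True, False), ("white", False, False),
    ("white", False, False), ("white", False, True),
    ("black", True, True), ("black", True, False), ("black", True, True),
    ("black", True, False), ("black", False, False),
]

def rates(group):
    rows = [r for r in records if r[0] == group]
    high_risk = [r for r in rows if r[1]]
    # "Accuracy" in Northpointe's sense: of those labeled high-risk,
    # how many actually re-offended?
    reoffense_among_high = sum(r[2] for r in high_risk) / len(high_risk)
    # O'Neil's question: of those who did NOT re-offend, how many were
    # nonetheless labeled high-risk (the false positive rate)?
    non_reoffenders = [r for r in rows if not r[2]]
    fpr = sum(r[1] for r in non_reoffenders) / len(non_reoffenders)
    return reoffense_among_high, fpr

for group in ("white", "black"):
    calib, fpr = rates(group)
    print(f"{group}: re-offense rate among high-risk = {calib:.2f}, "
          f"false positive rate = {fpr:.2f}")
```

In this invented example both groups print a re-offense rate of 0.50 among those labeled high-risk, but the false positive rate is 0.33 for one group and 0.67 for the other, which is the kind of asymmetry the ProPublica analysis highlighted.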

Another question that needs to be considered is whether the data fed into these systems reflects and reinforces societal inequality.

What to Fix

The same sorts of concerns arise with facial-recognition technology. Over 117 million American adults have had their images entered into a law-enforcement agency's face-recognition database, often without their consent or knowledge, and the technology remains largely unregulated.

An FBI technologist coauthored a 2012 study which found that the facial-recognition algorithms it examined were less accurate at identifying the faces of black people, as well as women and adults under 30.

A key finding of a 2016 study by the Georgetown Center on Privacy and Technology, which examined 15,000 pages of documentation, was that "police face recognition will disproportionately affect African Americans." (The study also provided models for policy and legislation that could be used to regulate the technology on both federal and state levels.)

"Unfortunately, it is impossible to make any absolute statements [about facial-recognition technology]," says Elke Oberg, the marketing manager at Cognitec, a company whose facial-recognition algorithms have been used by law-enforcement agencies in California, Maryland, Michigan, and Pennsylvania. Oberg also said, "Any measurements on face-recognition performance depends on the diversity of the images within the database, as well as their quality and quantity."

Big Brother and a Fix?

Last year the European Union passed a law called the General Data Protection Regulation. The legislation includes several restrictions on the automated processing of personal data and requires transparency about "the logic involved" in those systems. The United States appears to be in no hurry to follow suit, with the FCC and Congress pushing to stall or dismantle federal data-privacy protections. Some states, however, including Illinois and Texas, have passed their own biometric privacy laws to protect the kind of personal data often used by algorithmic decision-making tools.

Jonathan Frankle, a former staff technologist at the Georgetown University Law Center who has experimented with facial-recognition algorithms, says, "If we're using a predictive sentencing algorithm where we can't interrogate the factors that it is using or a credit scoring algorithm that can't tell you why you were denied credit — that's a place where good regulation is essential, [because] these are civil rights issues." He adds, "The government should be stepping in."

Existing U.S. laws do offer some protections, for instance against certain types of discrimination in credit, housing, and hiring. But these regulations have not been updated to address the ways new technologies intersect with old prejudices.

It remains to be seen whether these issues will be worked out or swept under the rug. In any case, at least some people are beginning to recognize the problem.
