Is artificial intelligence racist or biased?

US-based Wired magazine published an article revealing that AI-powered search engines spread racist content. According to the report, AI systems from companies such as Google, Microsoft and Perplexity produce answers drawn from unscientific data that purports to show the genetic superiority of white people.

These claims are based on the work of Patrik Hermansson, a researcher with the UK-based group Hope Not Hate. Hermansson found that Google's AI Overviews feature returned results drawn from an old white supremacist dataset for simple searches such as “Pakistan IQ” or “Sierra Leone IQ”: it stated Pakistan's average IQ as 80 and Sierra Leone's as 45.7. These figures came from sources with no scientific credibility, backed by Nazi sympathizers.

It was also discovered that systems such as Microsoft Copilot and Perplexity draw on the same content. This shows once again that artificial intelligence is not impartial: it learns false and biased information from the Internet and spreads it.

“It learns every piece of information it comes across without filtering for true or false. That includes racist content on the Internet. So artificial intelligence systems are not neutral; they often reflect the biases they have learned.”

Tolga Mürtezaoğlu, IT specialist

Although Google announced that it had removed the incorrect data, the reliability of AI-based search engines remains controversial. Do these systems inform us, or do they reinforce bias? Does AI Overviews spread racist data? We discussed the details with IT specialist Tolga Mürtezaoğlu.


“Artificial intelligence systems are not neutral”

Artificial intelligence is considered the future of technology, but IT expert Tolga Mürtezaoğlu draws attention to the dangers behind it. He believes the reason artificial intelligence acts in a racist or biased manner lies in fundamental problems with the way it learns. “It learns every piece of information it comes across without filtering for true or false. That includes racist content on the Internet. So artificial intelligence systems are not neutral; they often reflect the biases they have learned,” says Mürtezaoğlu.
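To make that mechanism concrete, here is a minimal, hypothetical sketch (not from the article or from any real system): a toy “model” that completes a phrase purely from corpus frequencies. It has no notion of true or false, so it simply reproduces whatever skew its training text contains.

```python
# Toy illustration only: a "model" that completes a phrase by picking the most
# frequent continuation in its (hypothetical, deliberately skewed) corpus.
from collections import Counter

corpus = [
    "the engineer is he", "the engineer is he", "the engineer is he",
    "the engineer is she",
    "the nurse is she", "the nurse is she", "the nurse is she",
    "the nurse is he",
]

def complete(prompt: str) -> str:
    """Return the most frequent continuation of `prompt` seen in the corpus."""
    continuations = Counter(
        line[len(prompt):].strip() for line in corpus if line.startswith(prompt)
    )
    return continuations.most_common(1)[0][0]

# The learner has no true/false filter; it mirrors the skew in its data.
print(complete("the engineer is"))  # prints "he"
print(complete("the nurse is"))     # prints "she"
```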

Examples of this problem are quite striking. Mürtezaoğlu cites a 2018 study by MIT researcher Joy Buolamwini. While testing facial recognition algorithms, Buolamwini noticed that black women were often not recognized correctly; she even had to put on a white mask before the system would detect her face. The finding sparked a powerful fight against algorithmic bias and spawned initiatives such as the Algorithmic Justice League (AJL).
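That work popularized the kind of disaggregated audit sketched below: error rates are reported per subgroup rather than as one aggregate number. The figures here are invented purely for illustration; the point is that an overall accuracy that looks acceptable can hide a large gap between groups.

```python
# Disaggregated evaluation: report error rates per subgroup, not just overall.
# All results below are invented for illustration.
from collections import defaultdict

# (subgroup, true_label, predicted_label)
results = (
    [("lighter_male", 1, 1)] * 95 + [("lighter_male", 1, 0)] * 5
    + [("darker_female", 1, 1)] * 65 + [("darker_female", 1, 0)] * 35
)

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in results:
    totals[group] += 1
    errors[group] += int(truth != pred)

overall = sum(errors.values()) / sum(totals.values())
print(f"overall error rate: {overall:.0%}")        # prints 20%, which looks tolerable
for group, n in totals.items():
    print(f"{group:>14}: {errors[group] / n:.0%}")  # 5% vs 35% is the real story
```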


“The symptoms of black patients may be underestimated”

However, the problem is not limited to facial recognition. Mürtezaoğlu recalls that for a time Twitter's photo processing highlighted white people, and that black women have been classified by image systems as housewives and black men as domestic servants or potential criminals. Similar problems appear in healthcare: some diagnostic algorithms may underestimate the symptoms of black patients.
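One mechanism discussed in the literature on clinical risk scores is proxy-label bias: if a model is trained to predict past healthcare spending as a stand-in for medical need, patients who had less access to care are scored as healthier than they really are. The sketch below is a deliberately simplified illustration with invented numbers, not the algorithm from any particular study.

```python
# Proxy-label bias, simplified: "risk" is learned from past spending, so equal
# illness does not produce equal scores when access to care is unequal.
# All numbers are invented for illustration.

patients = [
    # (group, illness_severity 0-10, past_spending in dollars; spending for
    #  group B is lower because of unequal access, not better health)
    ("A", 8, 9000), ("A", 5, 6000), ("A", 2, 2500),
    ("B", 8, 5500), ("B", 5, 3500), ("B", 2, 1500),
]

def risk_score_from_spending(spending: float) -> float:
    """Naive 'risk model' that treats cost as a stand-in for medical need."""
    return spending / 1000.0

for group, severity, spending in patients:
    score = risk_score_from_spending(spending)
    print(f"group {group}: severity {severity} -> score {score:.1f}")

# Equally sick patients in group B receive systematically lower scores, so any
# cutoff on the score refers fewer of them to extra care.
```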

Another major danger is that artificial intelligence systems sometimes experience “hallucinations,” meaning they present false information as if it were true. Mürtezaoğlu draws attention to this and gives a striking example:

“Someone who asks for healthy-eating advice may be told by artificial intelligence to eat a stone every day because of the minerals it contains. It has no emotions, morals or ethics. When it links data together, it can reach completely illogical conclusions.”


“It gives answers suggesting that intelligence levels in African countries are low”

Artificial intelligence draws on information that is widely available on the Internet. If the articles on a topic frequently contain incorrect or racist claims, artificial intelligence treats those claims as correct. Mürtezaoğlu notes that some AI models give answers suggesting that intelligence levels in African countries are low. He emphasizes, however, that the responsibility lies not with the artificial intelligence itself but with the systems that feed it incorrect information.
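The pattern Hermansson observed can be pictured as naive aggregation over retrieved web snippets: if an answer is chosen by how often a claim appears rather than by how reliable its sources are, a discredited figure copied across many pages wins. The snippets and reliability weights below are hypothetical, and real systems are far more complex, but the contrast shows why repetition is not evidence.

```python
# Why "widely repeated" is not "true": a naive aggregator picks the most
# frequent claim among retrieved snippets, so a discredited figure copied
# across many low-quality pages wins. Snippets and weights are hypothetical.
from collections import Counter, defaultdict

snippets = [
    {"claim": "national IQ is 80", "reliability": 0.1},  # copies of one dataset
    {"claim": "national IQ is 80", "reliability": 0.1},
    {"claim": "national IQ is 80", "reliability": 0.1},
    {"claim": "no reliable national estimate exists", "reliability": 0.9},
]

# Naive: majority vote over snippets; repetition wins.
naive_answer = Counter(s["claim"] for s in snippets).most_common(1)[0][0]

# Slightly better: weight each claim by an estimate of source reliability.
weighted = defaultdict(float)
for s in snippets:
    weighted[s["claim"]] += s["reliability"]
vetted_answer = max(weighted, key=weighted.get)

print("naive answer:   ", naive_answer)    # "national IQ is 80"
print("weighted answer:", vetted_answer)   # "no reliable national estimate exists"
```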


Bias can also make AI unsafe

Initiatives like the Algorithmic Justice League are of great importance in countering such problems. Mürtezaoğlu says these initiatives serve as a shield, pushing algorithms to work more fairly and impartially. Yet the technology carries a bigger problem: bias.

Mürtezaoğlu draws attention to situations such as censorship of Palestinian posts by social media companies and says such biased practices can make artificial intelligence even more unsafe.

According to Mürtezaoğlu, the biggest danger associated with artificial intelligence is that the technology entered our lives before it was mature. “It was launched early because of the competition,” he says, adding:

“If ethics, justice and transparency are not a priority, artificial intelligence may cause bigger problems in the future.”

Finally, Tolga Mürtezaoğlu's warnings show once again that we need a careful testing and development process to ensure that artificial intelligence works safely and without bias. It is clear that if this problem is not resolved, the trust between technology and humanity could be deeply shaken.

