Researchers from Google DeepMind have published a scientific article on the potential harm caused by artificial intelligence (AI). According to the text, there is evidence that this type of content is degrading the experience of browsing the web.

The article "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data" evaluated 200 harmful use tactics involving generative AI, catalogued between January 2023 and March 2024. It has not yet been published in a scientific journal, as it still needs to undergo peer review.

The study reinforces and provides evidence for a perception held by many users: that an increasing amount of AI-generated images and text is circulating on the internet. This is visible even in Google search results and is already part of life in this digital environment — with some very negative consequences.

How AI is making the internet worse

According to the article, this type of malicious use of the technology "blurs the boundaries between authentic and deceptive presentations". The phenomenon is already detected at a considerable, "significant" scale, even though most of the generative AI content found so far does not break any explicit rules.

The most common cases encountered involve manipulation of human likeness and activity, falsification of evidence, and use in fraud or other forms of cybercrime.

Harmful AI-generated text is the most common type of content for now. (Source: GettyImages)

These tactics are typically exploited for "reputational or financial gains", and for now they are mostly text-based, since visual generation tools became widely available only more recently.

Furthermore, this type of use of generative AI can have large-scale consequences for the "trust, authenticity and integrity of information ecosystems" — that is, it harms the environments where content circulates, such as social networks and even search engines like Google itself.

As possible solutions, the researchers suggest a "multifaceted approach" that includes collaboration between governments, industry experts, scientists and society.

Beyond that, technical advances alone are not enough: what is also needed is "a deeper understanding of social and psychological factors" involved in the creation of this fake content. The full article is available online in English.
