The persuasiveness of artificial intelligence and its possible consequences were examined in a study led by Francesco Salvi of the Swiss Federal Institute of Technology.
As part of the study, participants were paired off for debates: 300 people were matched with a human opponent, while another 300 debated with the artificial intelligence system GPT-4.
Each pair argued opposing views on various controversial questions, ranging from whether schools should require uniforms to whether fossil fuels should be banned, or whether artificial intelligence is good for society.
During the experiments, the participants recorded their views on the questions in a questionnaire before and after the debate. Half of the debaters were also given personal information about their opponent, such as age, gender, ethnicity, and political leaning.
More convincing when given personal information
The study found that artificial intelligence was more convincing than human opponents when personal data was made available to it. Without access to personal information, however, the AI's persuasiveness was on par with that of humans.
In a statement about the research, Salvi said: "Artificial intelligence is not only someone who offers good arguments, but someone who knows exactly which buttons to press."
Salvi warned that personal information shared on social media platforms could amplify the persuasive power of artificial intelligence, adding that these systems can be seriously convincing, matching humans, even when given only a little information.