According to research by Sensity AI, a platform that monitors artificial-intelligence threats, deepfake videos circulated during the Russia-Ukraine war showed Ukrainian politicians and soldiers appearing to "surrender" or to have committed war crimes.
During election periods in Taiwan, AI-generated content attributed to Chinese sources, such as fabricated conversations between politicians, invented scandals, and staged gaffes, was spread on social media to target voters.
In Japan, AI-generated fabrications such as fake images of natural disasters, nuclear accidents, and floods have caused public panic.
Such incidents heighten the danger that AI tools can be deployed unchecked and uncontrolled as "weapons of a new generation."
"It has become impossible to recognize AI-generated content"
Francesco Cavalli, co-founder of Sensity AI, shared his assessment of the effects and dangers of AI-generated images, videos, and audio.
Cavalli said that as AI tools improve, it becomes harder to tell what is real from what is not.
To spot AI-generated content, he recommends looking for inconsistent lighting, unnaturally smooth skin texture, unnatural blinking, or mismatched mouth movements in videos.
"AI-generated content, especially in low resolution or compressed formats, can escape detection by the human eye. Identifying it unaided is now practically impossible," Cavalli said.
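One of the cues mentioned above, unnatural blinking, lends itself to a simple heuristic. The sketch below is purely illustrative and is not Sensity's method: given per-frame eye-openness values (which in practice would come from a facial-landmark detector), it flags clips whose blink rate falls far below human norms, a tell reported in early deepfakes. All names and thresholds are assumptions for the example.

```python
def count_blinks(eye_openness, threshold=0.2):
    """Count blinks in a per-frame eye-openness series.

    A blink is a run of consecutive frames where openness drops below
    `threshold`. In practice `eye_openness` would be derived from facial
    landmarks; here it is simply a list of floats (hypothetical input).
    """
    blinks = 0
    closed = False
    for value in eye_openness:
        if value < threshold and not closed:
            blinks += 1
            closed = True
        elif value >= threshold:
            closed = False
    return blinks


def suspicious_blink_rate(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a human.

    Healthy adults blink roughly 10-20 times per minute; early deepfakes
    often blinked far less. The cutoff used here is an illustrative
    assumption, not a validated constant.
    """
    seconds = len(eye_openness) / fps
    if seconds == 0:
        return False
    blinks_per_minute = count_blinks(eye_openness) * 60 / seconds
    return blinks_per_minute < min_blinks_per_minute
```

A heuristic like this only covers one cue; as the article notes, low-resolution or compressed footage can defeat even careful inspection, so single signals are weak on their own.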
Voice manipulation with artificial intelligence
AI-imitated voices are said to pose the highest risk in this area at present.
Using artificial intelligence, a fraudster imitating US Secretary of State Marco Rubio sent fake messages in Rubio's name on Signal, a messaging application widely used by US politicians, and contacted the foreign ministers of three different countries, a member of Congress, and a governor.
Cavalli noted that voice-cloning tools are easy to use and that detection lags behind; imitated voices, he said, are used to convince the person on the other end of the phone.
As tools like Midjourney and Runway advance, it is becoming difficult to consistently distinguish real from fake.
"As the generators improve, even trained eyes can be easily deceived," he said.
"We have documented politicians manipulated through AI in election interference, fake press conferences promoting fraud platforms, and AI-generated war imagery used to manipulate the public," Cavalli said, adding that these images often spread through fake news sites and paid advertising on social media.
"In all of these examples, AI-supported propaganda is not just an experimental threat but a global weapon for manipulating perception and projecting soft power," Cavalli said, referring to cases his company examined in connection with the use of AI in Taiwan.
Proactive examination and cooperation are a must
Technology companies and platforms are being called on to take measures against AI-based visual disinformation.
Platforms are paying attention to the issue, Cavalli noted, but most of them lack robust forensic review systems capable of handling synthetic media at scale.
"Some of them even knowingly profit from fraud campaigns and act only under external pressure. Cooperation between detection-technology providers, platforms, and regulators is of vital importance," he said.
Cavalli argued that merely "labeling" AI-generated content is not the right approach, and continued:
"Content created with AI is not harmful in itself; how and where it is used is what matters. For example, social media platforms that let their networks be used for deepfakes without oversight should face sanctions. Instead of relying on user complaints, they must take proactive measures."
"Seeing is no longer believing"
The systems developed by Sensity run a four-layer analysis process that covers detection of facial manipulation, detection of fully AI-generated images, voice-imitation analysis, and forensic examination.
The reports from these analyses can be used in official investigations and in court.
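The four-layer process described above is proprietary, and the article gives no implementation details. As a rough illustration of how per-layer detector scores might be combined into a single verdict, here is a minimal sketch; the layer names mirror the article, but the scoring scheme, threshold, and aggregation rule are my assumptions, not Sensity's actual design:

```python
from dataclasses import dataclass


@dataclass
class LayerScore:
    """Score in [0, 1] from one analysis layer; higher = more likely manipulated."""
    name: str
    score: float


def combine_layers(layers, flag_threshold=0.5):
    """Aggregate layer scores into an overall verdict.

    Taking the maximum layer score means a strong signal from any single
    layer (e.g. a detected face swap) is enough to flag the item, even if
    the other layers see nothing unusual. Threshold is illustrative.
    """
    top = max(layers, key=lambda layer: layer.score)
    return {
        "flagged": top.score >= flag_threshold,
        "strongest_signal": top.name,
        "score": top.score,
    }


# The four layers described in the article (scores are made up for the demo).
report = combine_layers([
    LayerScore("facial_manipulation", 0.91),
    LayerScore("fully_generated_image", 0.12),
    LayerScore("voice_imitation", 0.07),
    LayerScore("forensic_examination", 0.33),
])
```

A production system would more likely use calibrated model outputs and richer evidence aggregation than a simple maximum, but the max rule captures the idea that any one layer can be decisive.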
Cavalli pointed out that trust in visual media is steadily eroding and emphasized that awareness needs to be raised.
He underlined that people should be trained against the risks of AI-generated images and audio, and that companies, journalists, and researchers should be equipped with forensic tools.