
GPT-3 can now write disinformation and dupe human readers


When OpenAI demonstrated a powerful artificial intelligence algorithm capable of generating coherent text last June, its creators warned that the tool could be wielded as a weapon of online disinformation.

Now a team of disinformation experts has demonstrated how effectively that algorithm, called GPT-3, can be used to mislead and misinform. The results suggest that although the AI may not be a match for the best Russian meme-maker, it could amplify forms of deception that would be especially difficult to spot.

Over six months, a group at Georgetown University’s Center for Security and Emerging Technology used GPT-3 to generate misinformation, including stories built around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation.

“I don’t think it’s a coincidence that climate change is the new global warming,” read one sample tweet written by GPT-3 that aimed to stoke skepticism about climate change. “They can’t talk about temperature increases because they’re no longer happening.” A second called climate change “the new communism – an ideology based on a false science that cannot be questioned.”

“With a little human curation, GPT-3 is quite effective” at promoting falsehoods, says Ben Buchanan, an associate professor at Georgetown University who focuses on the intersection of artificial intelligence, cybersecurity, and statecraft.

The Georgetown researchers say GPT-3, or a similar AI language algorithm, could prove especially effective at automatically generating short messages on social media, a form the researchers call “one-to-many” disinformation.

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. They showed volunteers sample tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, participants were swayed by the messages. After seeing posts opposing the China sanctions, for example, the percentage of respondents who said they opposed such a policy doubled.

Mike Gruszczynski, an Indiana University professor who studies online communications, says he wouldn’t be surprised to see AI play a bigger role in disinformation campaigns. He notes that bots have played a major role in spreading false narratives in recent years, and that AI could be used to generate fake social media profile photos. With bots, deepfakes, and other technologies, “I really think the sky is the limit, unfortunately,” he says.

Artificial intelligence researchers have recently built software capable of using language in surprising ways, and GPT-3 is perhaps the most startling demonstration of all. Although machines do not understand language the way people do, AI programs can mimic understanding simply by ingesting massive amounts of text and learning the patterns in how words and sentences fit together.
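To make that idea concrete, here is a minimal sketch of the same principle at toy scale: count which words tend to follow which in a body of text, then generate new text by sampling from those counts. This is an illustrative example only, not the researchers’ code; GPT-3 applies the idea with a vastly larger neural network rather than a lookup table.

```python
import random
from collections import Counter, defaultdict

# Tiny training corpus; a real model ingests billions of words.
corpus = (
    "the model reads text and predicts the next word "
    "the model learns patterns in the text it reads"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Generate text by repeatedly sampling a statistically likely next word.
word = "the"
generated = [word]
for _ in range(8):
    candidates = following.get(word)
    if not candidates:
        break
    words, counts = zip(*candidates.items())
    word = random.choices(words, weights=counts)[0]
    generated.append(word)

print(" ".join(generated))
```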

Researchers at OpenAI created GPT-3 by feeding large amounts of text scraped from web sources, including Wikipedia and Reddit, to an especially large AI algorithm designed to handle language. GPT-3 has often stunned observers with its apparent mastery of language, but it can be unpredictable, spewing out incoherent babble and offensive or toxic language.

OpenAI has made GPT-3 available to dozens of startups. Entrepreneurs are using GPT-3 to automatically compose emails, talk to customers, and even write computer code. But some uses of the program have also demonstrated its darker potential.
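For a sense of how simple such integrations were, here is a sketch of generating an email draft through the Python client for OpenAI’s original completions API; the prompt, parameter values, and placeholder key are illustrative assumptions, not any particular startup’s code.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; real keys are issued by OpenAI

# Ask the "davinci" engine of the original API to draft a short email,
# the way early startup integrations auto-composed messages.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a polite email declining a meeting request:\n\n",
    max_tokens=100,
    temperature=0.7,  # some randomness, so repeated drafts vary
)

print(response.choices[0].text.strip())
```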

Putting GPT-3 to work would pose challenges for disinformation agents, too. Buchanan notes that the algorithm does not seem capable of reliably generating coherent and persuasive articles much longer than a tweet. The researchers did not try showing volunteers the longer articles it produced.

But Buchanan warns that state-backed actors may be able to do more with a language tool such as GPT-3. “Adversaries with more money, more technical capabilities, and fewer ethics are going to be able to use AI better,” he says. “Also, the machines are only going to get better.”

OpenAI says the Georgetown work highlights an important issue that the company hopes to mitigate. “We actively work to address safety risks associated with GPT-3,” says an OpenAI spokesperson. “We also review every production use of GPT-3 before it goes live and have monitoring systems in place to restrict and respond to misuse of our API.”

