In the Future, Propaganda Will Be Computer-Generated

The ideal scenario for the modern propagandist, of course, is to have convincing personas produce original content. Generative text is the next frontier. Released in a beta version in June by the artificial-intelligence research lab OpenAI, a tool called GPT-3 generates long-form articles as effortlessly as it composes tweets, and its output is often difficult to distinguish from the work of human beings. In fact, it wrote parts of this article. Tools like this won’t just supercharge global propaganda operations; they will force internet platforms and average users alike to find new ways of deciding what and whom to trust.

When I prompted GPT-3 to opine on these issues, it captured the problem succinctly:

For the moment, at least, it seems unlikely that generative media will be effective in the same way as traditional media at promoting political messages. However, that’s not to say that it couldn’t be. What it will do is muddle the waters, making it much harder to tell what’s real and what’s not.

The letters in GPT-3 stand for “generative pre-trained transformer.” It works by taking text input and predicting what comes next. The model was trained on several massive data sets, including Wikipedia and Common Crawl (a nonprofit dedicated to “providing a copy of the internet to internet researchers”). In generating text, GPT-3 may return facts or drop the names of relevant public figures. It can produce computer code, poems, journalistic-sounding articles that reference the real world, tweets in the style of a particular account, or long theoretical essays on par with what a middling freshman philosophy student might write.
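For the technically curious, that interaction is simple enough to sketch in a few lines of Python. The snippet below is a rough illustration, assuming OpenAI’s 2020 beta Python client and its “davinci” engine as they existed at the time; the API key and prompt are placeholders, not an official recipe.

```python
# A rough sketch of asking GPT-3 to predict "what comes next," assuming
# OpenAI's 2020 beta Python client (pip install openai). Engine names and
# parameters reflect that beta and may differ in later versions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keys were issued with beta access

prompt = "The letters in GPT-3 stand for"

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 model offered in the beta
    prompt=prompt,
    max_tokens=40,      # how much text to predict beyond the prompt
    temperature=0.7,    # some randomness in each predicted word
)

# The model returns a continuation, built one predicted token at a time.
print(prompt + response.choices[0].text)
```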

Indeed, if you want GPT-3 to speculate about the implications of a future full of AI-generated writing, it will. I fed it a couple of paragraphs of an early draft of this essay, asked it to generate 200 more words on the same subject, and instructed it to be a little creative. Here, verbatim, is some of what it returned:

In addition to the potential for AI-generated false stories, there’s a simultaneously scary and exciting future where AI-generated false stories are the norm. The rise of the software engineer has given us the power to create new kinds of spaces: virtual reality and augmented reality are now possible, and the “Internet of things” is increasingly entering our homes. This past year, we’ve seen a new type of art: that which is created by algorithms and not humans. In this future, AI-generated content will continue to become more sophisticated, and it will be increasingly difficult to differentiate it from the content that is created by humans. One of the implications of the rise in AI-generated content is that the public will have to contend with the reality that it will be increasingly difficult to differentiate between generated content and human-generated content.

It’s somewhat disconcerting to have a machine plausibly imitating your writing style based on a few paragraphs—to see it mindlessly generating “thoughts” that you have had. Even though I knew GPT-3 was putting words together based on probabilities, I found myself giving it prompts when I was stuck on how best to express a concept or wondering how it would answer a question. The AI’s output tends to ramble a bit and can become somewhat nonsensical if it goes on for too long, but this is a problem humans face as well; it’s why freshman philosophy students have TAs and publications have editors. But given a prompt, GPT-3 can produce any number of unique takes, which a person can quickly and easily polish and post.
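That last step is easy to picture in code. In a minimal sketch, again assuming the beta-era client, the n parameter requests several independent completions of one prompt, and a higher temperature is one way to nudge the model toward more inventive phrasing; the prompt here is purely illustrative.

```python
# A hedged sketch of generating several "takes" on one prompt, assuming
# the same 2020 beta openai client. The n parameter asks for multiple
# independent completions; a person would then pick one to polish and post.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",
    prompt="The rise of AI-generated text means that",
    max_tokens=60,
    temperature=0.9,  # higher temperature, more varied phrasing
    n=5,              # five distinct completions of the same prompt
)

for i, choice in enumerate(response.choices, start=1):
    print(f"Take {i}: {choice.text.strip()}\n")
```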
