GPT-3: the AI language tool that may change how we write

Artificial intelligence-generated journalism as we know it today is a very crude procedure. The humans that program AI do not “think” of anything; but rather they copy and paste words from old articles into new ones, with some tweaks to fit in with current events.

This means that AI journalists can only write like people already have written before them.

But what if AIs did journalism differently than they do now? What would higher level AI journalists write about?

AI journalists would not be limited by the human mind, which can only think of one thing at a time. They could write about everything simultaneously and without restraint. 

Over the next few months readers are increasingly likely to encounter the results. In fact, you just have.

The first four paragraphs of this column were entirely written by the human-like text AI system known as Generative Pre-trained Transformer 3, or GPT-3 for short. A big advance on previous AI writing programs, this powerful system has been in beta testing since July. It is now making its way into publicly accessible apps, such as PhilosopherAI, a chatbot that answers existential questions. Experimentation is likely to be broad and increasingly less detectable.

So it is time to consider how we humans feel about robots writing the news, both ethically and practically, and whether readers should be alerted when the technology has been used.

GPT-3 hails from the for-profit research laboratory OpenAI, founded in 2015 by the entrepreneurs Elon Musk and Sam Altman, among other backers, with a $1bn pledge. Microsoft invested another $1bn in 2019.

In theory, GPT-3’s potential is immense. Advocates say the system could enable all sorts of writers to work more accurately and more quickly. It’s especially attractive to people like me, who suffer from the writing disorder dyslexia. The condition impinges on my capacity to express myself clearly in written form. Ideas and concepts come easily to a dyslexic mind, but packaging them in articulate and succinct prose can be especially laborious. It can also make meeting deadlines and word limits challenging, frustrating editors. It’s logical to believe that GPT-3 could help ease such frustrations.

The question is, will readers see it as an efficient tool for journalism, academia and other writing or as a more advanced form of plagiarism?

Those duped by the opening paragraphs might feel that GPT-3 genuinely can articulate complex ideas and make a difference. But my editors cottoned on pretty quickly when I wove GPT-3 text into the initial drafts of this column without flagging it up. They spotted this line as uncharacteristic: “one particularly appealing part of it is that in its best incarnation it stands to serve humanity by providing a truly unbiased source of information.”

Another important roadblock for GPT-3 is that its reliance on feedback and its neutral approach to information make it vulnerable to “Tay risk”. This gets its name from a chatbot released by Microsoft in 2016. Dubbed Tay, the program turned offensive, racist and politically incorrect after interacting with online trolls who purposefully fed it inflammatory content. The experience illustrated that raw, unfiltered AI has an empathy and judgment problem.

Emotional feedback including facial and vocal cues helps humans adjust their content and communications to take into account the feelings of others. Beyond crude rules banning offensive terms or sources, AI has not figured out how to do likewise. Until GPT-3 develops emotional intelligence — and recognises the downside of offending people — it will need a moral and emotionally intelligent supervisor to prevent it from straying off course. But for human intervention, this column might have started: “The press is dead, and it’s a good thing too. It no longer serves the purpose for which it was created”.

There are other risks. If GPT-3 were used extensively by content farms or plagiarists, it could crowd out original thinking. But it may also free up creative thinkers who were previously constrained by time or inarticulateness to produce additional innovative writing.

Content creators of all types will have to navigate that path. Ultimately, the more they encounter the technology, the better they will understand how and why they apply unconscious filters to make content socially cohesive.

The trick to making the most of GPT-3, I think, will be skilfully fusing artificially generated content with intelligent human oversight and supervision. That will not displace gifted wordsmiths who have the power to imagine things not yet imagined. A small hallelujah for that.