“I have seen the future and it is going to arrive in the next few years,” says Tyler Cowen, the American economist and polymath. What’s got him hyped is the rapid advance of machine-learning language programs, which he thinks will soon transform the internet — and workplaces.

Next time you have half an hour to spare, visit OpenAI’s website and access its “Playground”. Here you can experiment by asking questions or offering prompts to GPT‑3, a deep-learning language model that generates human-like text in response. I requested “a five-minute video script on the economic costs of Brexit”, “a letter of complaint to a restaurant from which I contracted food poisoning”, and “a poem about The Times in the style of Shakespeare”. The results convinced me Cowen’s assessment was correct.
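For the curious, the same model behind the Playground can be called directly from code. What follows is a minimal sketch, assuming the openai Python package (pre-1.0 interface) and an API key held in an environment variable; the model name is illustrative, as OpenAI’s naming changes over time.

```python
# A minimal sketch of prompting GPT-3 programmatically, assuming the
# "openai" Python package (pre-1.0 interface). The model name below is
# an assumption for illustration; available models change over time.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code keys

response = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT-3-era model name
    prompt="Write a poem about The Times in the style of Shakespeare.",
    max_tokens=256,            # cap the length of the reply
    temperature=0.7,           # higher values give more varied text
)

print(response.choices[0].text.strip())
```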

Sure, sometimes the content itself was dull and generic. The striking feature, however, was how well it followed the prompt and how hard it would have been to tell that its response was written by a machine. These programs have improved dramatically in just a year. It’s easy to envisage how they will eventually transform the process of writing essays, academic papers, letters, speeches, stories and scripts.

Paul Buchheit, the former Google employee who created Gmail, believes that Google search will face “total disruption” from such programs within a few years. With GPT‑3 and its successors, users won’t need to trawl through lists of links to piece together a summary of a topic. Their prompt will let the program do the digging and return the best answer.

Once third-party apps and websites integrate these models, they will alter social media too. One can imagine an app enabling you to say: “Show me tweets about Harry Kane’s performance tonight.”
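One hypothetical shape for such an app, sketched in Python under assumed credentials for both OpenAI and Twitter’s v2 search API (the model name, prompt and parameters here are illustrative, not a production design):

```python
# A hypothetical sketch of the integration described above: the model
# turns a natural-language request into a Twitter search query, then the
# app fetches matching tweets. All names and parameters are assumptions
# for illustration only.
import os
import openai
import requests

openai.api_key = os.environ["OPENAI_API_KEY"]
TWITTER_BEARER = os.environ["TWITTER_BEARER_TOKEN"]

request_text = "Show me tweets about Harry Kane's performance tonight."

# Ask the model to compress the request into search keywords.
completion = openai.Completion.create(
    model="text-davinci-003",  # assumed model name
    prompt=f"Turn this request into a short Twitter search query:\n{request_text}\nQuery:",
    max_tokens=20,
    temperature=0,             # deterministic output for a utility task
)
query = completion.choices[0].text.strip()

# Twitter API v2 recent-search endpoint (requires API access).
resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {TWITTER_BEARER}"},
    params={"query": query, "max_results": 10},
)
for tweet in resp.json().get("data", []):
    print(tweet["text"])
```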

Yes, there are fears that any automation will create substantial net job losses — a concern one sees everywhere today except in the unemployment data. In a world where these artificial intelligence programs can churn out half-decent columns, reports and customer service chats, however, surely those whose careers entail summarising existing information could find themselves at risk?

Some might. History suggests, however, that the vast majority will work with the programs. One already sees research papers where AI prompts help to write literature reviews, with humans contributing the novel analysis. Think tanks, news aggregators and firms sending out mass correspondence could likewise use these tools to ensure consistency and accuracy in their writing, but with humans adding flair and context.

As these programs go mainstream, there will be short-term backlashes. Students could use AI to generate essays that are hard for universities to detect. Hiring managers might find written applications a less reliable guide to candidates’ abilities. And because the programs draw on the existing web, some fear their outputs will be politically biased.

Such concerns, though, seem overblown, or at least to misunderstand the scale of what the technology could achieve. A job applicant or student can already outsource their work to other humans, of course. Over time, the way we assess students and employees will evolve, with the ability to use these programs well itself seen as an essential skill.

In fact, by automating the most mundane writing tasks, perhaps including emails, GPT‑3 could cut project times significantly, delivering a much-needed gain in productivity. The smart money says this will deliver a larger payoff to innovative thinkers, not least because they will enjoy more time to be creative.

So, what’s the biggest risk of this technology? Probably that bad actors will find ways to game the program to push misinformation — a problem that already exists with the web. You don’t need to take my word for this anxiety. That’s the answer the program gave me.