Why AI Won’t Replace All Writers

Davina Stanley explains why AI is not yet a match for human insight and outlines the limitations of using AI when writing.

There is much talk about how artificial intelligence (AI) can write for us.

Nikki Gemmell wrote in The Australian newspaper that ‘We scribblers and hacks are staring at the abyss in terms of the chatbot future roaring at us’.

Professional copywriter Leanne Shelton lamented its impact on her business. She expects her copywriting business to take a 35 percent hit this year thanks to OpenAI releasing ChatGPT last November.

I am seeing clients experiment with a range of AI tools to help with their work too.

Yet, like Nikki Gemmell, I am not concerned about AI taking my job.

AI can help the writing process and will stretch us to think harder and better, but it is not (yet, at least) a match for human insight.

Let me explain why.

  • AI can’t make a judgement call
  • AI relies on humans asking really good questions
  • AI can’t explain how it arrived at its answer
  • AI’s writing ability is surprisingly poor
  • AI is inherently biased

Let me unpack each of these further.

AI can’t make a judgement call

Even when organisations (eventually) set up their own private AI instances, AI can only offer limited help. This is so even after proprietary data is fed in and appropriate access permissions are set up.

Let’s imagine that we feed the past decade’s board and senior leadership team papers into a proprietary database and then add an AI engine on top. Leaders and board members could enter queries such as: ‘What is our company’s data security strategy?’ The AI engine would then ‘read’ all of its material and summarise what the papers say about our company’s data security strategy. That is useful as far as it goes.

But what if we asked it: ‘How could we improve our data security strategy?’ Again, it would summarise what the papers in its database say about the potential risks inherent in our current strategy. Again, useful as far as it goes.

Assuming the information in the papers is both accurate and complete, the summary may be helpful. What I don’t know is whether the engine would place the strategy at a point in time or give all the information equal weighting. For example, a five-year-old data security strategy would be out of date. Would the engine qualify the information from that strategy as being five years old, or merge it with all the other data security items and weight them equally? I am not sure, but for this kind of information to be useful we would need to know.

The limitations become even more obvious when we ask the question we really need answered: ‘What is the right data security strategy for our company in today’s context?’

This is where the human comes in. Deciding what the ‘right strategy’ is for a specific company relies on judgement, and so far at least, AI doesn’t have the ability to make a judgement call.


Read the full article, Can AI write your papers?, on