With more professionals relying on AI tools to help generate marketing content, we wondered: what happens when an editor commissions an article from a legal expert but then receives a piece that appears to have been written by AI?

To explore this issue, we spoke to four leading journalists who kindly shared their perspectives by answering some key questions. The journalists were:

Jenna Brown – Professional Adviser
Tim Clark – Property Week
Ima Jackson-Obot
Eduardo Reyes – the Gazette

The journalists’ answers were revealing, and we weren’t surprised that the general consensus is that legal experts should not use AI to write commissioned pieces, because the results lack originality and the authentic content that comes from the author’s own experiences. When a submission has telltale signs of AI, the author may be asked to rewrite it, or it may be rejected. As Tim Clark put it, “…good opinion pieces show a bit of the person who wrote it, and as such, the human input is hard to replace.”

Do your contributor guidelines restrict the use of AI tools such as ChatGPT in article submissions?

Ima Jackson-Obot – “Yes, we introduced a clause banning the use of AI some time ago.”

Tim Clark – “We have contributor guidelines, put in place in the last year, to limit or dissuade people from using AI-generated copy for guest columns or opinion pieces. The reason is that we received a comment piece with a number of indications that it was either written using AI prompts or that the final copy was a re-edit of previously AI-written copy.”

Jenna Brown – “Professional Adviser has not yet established rules on AI use in contributed content. From my perspective, I feel my relationships with contributors are strong, and AI wouldn’t be used without them letting me know.”

Eduardo Reyes – “We don’t have guidelines specifically referencing AI. We do, though, insist that articles accepted for publication are just for the Gazette, and haven’t appeared elsewhere. Given the way large language model GenAI works (GenAI can do ‘it’ because someone already has), a lack of originality or reuse of material from elsewhere would be a reason to reject an article.”

Have you received commissioned articles that you suspect were written using AI, and what were the telltale signs?

Jenna Brown – “I do not believe whole articles have been AI generated, but it is inevitable people are using it for content ideas/starting points. However, I also think the topic areas on Prof Adviser are usually so niche it would come across quite clearly.

Sentence structure in AI content is also sometimes a red flag. The regular contributors we use on PA have certain styles which I don’t think AI could replicate – or maybe it could and I am totally being hoodwinked?!”

Eduardo Reyes – “This has happened for a non-technical article on, from memory, ‘wellbeing’. It was simply very general, not very good, and not up to our standards. It said nothing new or particularly insightful, or specific to our readership. So we didn’t use it for those reasons. We didn’t ‘accuse’ the author, but did note the ways in which it didn’t meet our standards.”

Tim Clark – “One red flag is the use of paragraphs which are well written, but don’t provide very much depth or insight. It is a signal that an AI tool such as ChatGPT has scraped data that is usually quite high level and doesn’t have the nuance that a real expert in a particular field would have.

Another is length: many AI-generated summaries are much longer than necessary, because for an AI it is very easy to keep producing written answers. Humans don’t like to over-explain things.

Third, the use of long dashes, or of “z” spellings in American English, is also a clue. None of these on its own is an indicator, but together they build a picture.”

When you suspect an article has been generated by AI, how do you typically respond? Do you reject the piece outright, ask the contributor to revise it, or take another approach?

Jenna Brown – “If I thought AI had been used, I would check with the author and, after weighing the value of the piece, state specifically in the article that the author used AI to generate the content. I think you have to be honest with readers about authenticity and topic area knowledge – reputation in the field of financial advice is extremely important. Professional Adviser itself also needs to be transparent with its audience to maintain its reputation.”

Ima Jackson-Obot – “I’d probably ask the contributor to revise it and remind them of the contributor’s agreement they have signed to adhere to our editorial guidelines.”

Tim Clark – “We ask politely if it has been written using AI, and listen to the response. We do use AI checkers that, although not foolproof, are a useful initial screening tool. I’ll also raise it with colleagues such as my deputy editor, and we will either ask for a rewrite or will not accept the piece.

If it was a writer submitting an AI-generated piece, then it would be rejected.”

Do you see any positive role for AI in editorial?

Jenna Brown – “I think transcription services are useful to a point (I don’t use them but do use Grammarly), but you still have to be engaged and listen carefully to the conversation or what is the point of talking to people? I think there is a danger of overreliance on technology to the point of making journalists redundant. And who wants that?”

Eduardo Reyes – “It is being used (though not by us currently) to suggest headlines that will generate the greatest interest/attract the most readers. There may be something to be said for that in an open and competitive online news environment. But a brilliant headline, the type we venerate, is a very original one-off – ‘Freddie Starr ate my hamster’, ‘Squirrels addicted to Crack’, ‘Whitstable mum in custard shortage’, the (in fact made up) ‘North-east man lost at sea. 1,500 perish in Titanic disaster’, or the New York Times (2008), simply ‘OBAMA’.”

Tim Clark – “At the moment the jury is out. AI could be useful, but for me personally it makes a lot of mistakes. For example, in property, if you asked it who rents a particular building, it may give “Company X” as the answer, not because that is true, but because the AI has looked at previous examples and decided that, on probability, it is likely to be “Company X”. It assumes fact when it very well may not be. In this case it is a risk to the title’s reputation to over-rely on it, because our accuracy and fact-checking is one of the things that people subscribe to the magazine for (though humans do make mistakes too).

I do know that, compared to national newspapers, our approach may not be the norm for the industry – especially for news stories. I have talked with colleagues on national titles who have begun to use AI to form the bulk of stories, which are then ‘checked’. This checking, though, is not always very thorough, and it can take longer to check an article than to write it.

If AI can become fail-safe – as in advancing beyond the point where it simply makes facts up, or uses false information – then it may be relied upon more in the media than it is now. I do know that our publishing group is using it to draw from our titles to offer up search results – not Property Week, but other titles within Emap. In that respect, having a powerful in-house search of a 150-year-old magazine’s history can be useful for journalists and readers alike.”

What advice would you give to expert contributors who may be tempted to use AI to speed up the writing process?

Jenna Brown – “If you are doing it then be honest about it. It is essential people know where content is coming from. Then they can use their own judgement as to whether it is useful or not.”

Ima Jackson-Obot – “I would mostly advise against it. When it comes to actually writing the article, if there has been some involvement of AI, the article needs to be thoroughly checked by a human eye, to make sure it all makes sense and is written well and accurately, with everything fact-checked before it is submitted.”


Eduardo Reyes – “Don’t use it in any area in which you are not knowledgeable, as the risks increase. You need to be able to spot a fake. Ask yourself if anything generated is either too general or is unoriginal. Check facts and citations using verifiable sources – AI may still have helped speed up your work overall, but you can’t skip this phase. Authenticity counts for a lot – ask yourself if assistance you have received from AI tools has compromised that. Be ready to reread and revise.”

Tim Clark – “I would say it can be useful for background research, but you need to choose the right one and ensure it isn’t simply scraping false information. I’d say that good opinion pieces show a bit of the person who wrote it, and as such, the human input is hard to replace.”