Facts and figures
- Over one million users had registered for ChatGPT within the first five days of its launch in November 2022.
- The system can write, translate, correct and summarise texts in around 100 languages. It can also write code and generate Excel formulas.
- Over 8 million documents containing more than 10 billion words between them were used to train the system.
- The underlying database is not connected to the internet, which means the system can’t answer questions about anything that has happened since 2021.
- On 23 January 2023, it was announced that Microsoft is planning to invest over USD 10 billion into the tech company OpenAI.
I recently paid a visit to an old school friend. As a coder, he creates complex websites with integrated applications – the sort you might use to configure the high-end watch you’re about to buy. “I’ve started working with ChatGPT to generate some of the code and I can confirm that it works.” I have to say that I was amazed. But is it the same story for copywriting and translation?
Is it possible to spot AI-generated texts?
I don’t think we can escape the fact that this particular technological advancement has the potential to be very disruptive. Putting something like this out in the world for all to use for free is problematic for teachers, professors and educational institutions everywhere. How do we know that a text – say an essay or a thesis – that sounds perfectly natural hasn’t actually been written by AI? People are definitely coming up with ways to spot an AI-generated text, but will they ever be completely reliable? I’m not so sure.
Knowledge test
Let’s try out the system for ourselves. I’m wondering which is the highest mountain in the canton of Zurich, so I type into the chat window: “Which is the highest mountain in the canton of Zurich?” It literally couldn’t be easier to use. Tödi is the answer the system comes back with. However, this particular mountain actually sits on the border between the cantons of Glarus and Graubünden, and the canton of Zurich is over 50 kilometres away from its peak. Part of what ChatGPT told me is correct: the mountain does lie largely in the canton of Graubünden, as the system says. But no part of it is in the canton of Zurich at all.
This example leaves no doubt that you cannot rely on ChatGPT to deliver factual statements. The developers make no attempt to hide that fact, though: the list of the system’s limitations clearly states that there is a risk of it producing misinformation. This means that we’ll need to apply our post-editing skills to AI-generated texts going forward too.
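If you’d rather run the same knowledge test through OpenAI’s API than through the chat window, here’s a minimal sketch in Python. It assumes the openai package (version 1 or later) and an API key in your environment; the model name is simply a widely available one and not something this article prescribes.

```python
# Minimal sketch: the same knowledge test, sent via the OpenAI API.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model choice; use whichever you have access to
    messages=[
        {"role": "user", "content": "Which is the highest mountain in the canton of Zurich?"},
    ],
)

# Whatever comes back still needs a human fact-check before it goes anywhere near publication.
print(response.choices[0].message.content)
```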
Use in subject areas you’re vaguely familiar with
So when does the software come into its own as an efficient tool? If you ask me, you need to have at least a rough idea about whatever you’re writing about. You can rely on the chatbot to take care of the (time-consuming) tasks of fact-finding, drafting and writing, but on the understanding that you’ll definitely need to check the facts before publishing the output.
Translation test
The AI tool also seems to handle translations well on the face of it, but we need to proceed with caution here too. I put the system to the test with a translation from German into French, and here’s the result:
I selected a paragraph from a nurse’s thesis to use as my input. This is one of the sentences in the original German: Ich konnte anhand der Kurzzeit- und Langzeitgedächtnis-Aufgaben feststellen, wie ihre Leistungsfähigkeit im Gedächtnis aussieht. (Roughly: “Using the short-term and long-term memory tasks, I was able to determine what her memory performance looks like.”)
And this is the translation ChatGPT produced in French: J’ai pu évaluer leur capacité de mémoire en fonction des tâches de mémoire à court terme et à long terme. (Roughly: “I was able to assess their memory capacity based on the short-term and long-term memory tasks.”)
Sure, the translation is an accurate rendition of the original text in another language. But the subject of the text is a single elderly woman, and the chatbot should have been able to work that out from the sentences I’d entered before this one. In the French translation, the possessive ‘leur’ is incorrect because it means ‘their’ rather than ‘her’; the correct word here is ‘sa’. In the original German, ‘ihre’ can mean either ‘her’ or ‘their’, and the context makes it obvious which one is required in this case. In other words, translations generated using the AI tool still need to be checked by a human too.
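One practical takeaway: if you do ask a chatbot for a translation, spell the missing context out for it. Here’s a hedged Python sketch of how the same request could be sent with the key fact (the text is about one elderly woman) included in the prompt. The client library, model name and instruction wording are my own assumptions rather than part of the original test, and the output would still need a human check.

```python
# Sketch: the same German sentence, translated with the missing context made explicit.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

source = (
    "Ich konnte anhand der Kurzzeit- und Langzeitgedächtnis-Aufgaben feststellen, "
    "wie ihre Leistungsfähigkeit im Gedächtnis aussieht."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You translate German nursing texts into French. The text describes a "
                "single elderly woman, so possessives like 'ihre' refer to her "
                "('sa'), not to a group ('leur')."
            ),
        },
        {"role": "user", "content": f"Translate into French: {source}"},
    ],
)

# Even with the extra context, the translation still needs to be checked by a human.
print(response.choices[0].message.content)
```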
apoWRITER
Did you know that Apostroph has its own AI tool? Last summer, our software developers began working closely with clients to test AI tools, and the result is apoWRITER, complete with client-specific features. It can help communication departments and editorial teams to:
- Collate blog ideas
- Write blog posts
- Edit social media posts
- Write meta text
- Process text automatically (e.g. creating summaries or extracting keywords)
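apoWRITER itself isn’t something you can simply download, but to give a flavour of what ‘processing text automatically’ can look like in practice, here’s a generic Python sketch that asks a large language model for a summary and a handful of keywords. The OpenAI client and the prompt are purely illustrative assumptions on my part and say nothing about how apoWRITER is actually built.

```python
# Generic sketch of automated text processing (summary + keywords) with an LLM.
# Purely illustrative: it does not reflect apoWRITER's implementation.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def summarise_and_tag(text: str) -> str:
    """Return a short summary plus five keywords for the given text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[
            {
                "role": "user",
                "content": (
                    "Summarise the following text in two sentences, "
                    "then list five keywords:\n\n" + text
                ),
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_and_tag("Paste the blog post or article you want to process here."))
```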
We’re on hand to help our clients get started with the AI writing tool, showing them where it makes the most sense to use it and pointing out where a human touch is still essential. In other words, there’s no chance of our copywriters being replaced. Post-editing is an absolutely critical step in ensuring that the content is factually accurate. And our creative copywriters definitely don’t need to worry about any competition when it comes to writing texts that need to have an emotional impact and nail the messaging.
Did you enjoy reading this post? Would you like to share your ChatGPT experiences? Let us know by sending an e-mail to freelance@apostrophgroup.ch.