For several years, I have used writing assistants, such as Grammarly and LanguageTool. They do provide an invaluable service for those of us who think a lot faster than we can speak, let alone write.
In the first sentence of this article alone, they reminded me to add a comma after the word “years” and that LanguageTool is written as a single word. Those are helpful reminders of things I frequently slip up on and would otherwise only catch when proofreading before posting. So far, so good.
There is a darker side to these tools, however. They do their damnedest to make sure you write the shortest possible sentences. I know everyone keeps telling us not to use more words than necessary so that our text is easier to understand, but there is too long, and then there is long enough.
Sometimes, the changes they suggest not only make the text less clear but also strip away the beauty of what you wrote. Sometimes, a longer sentence is necessary to express more fully how you feel about your subject.
At first, I accepted most suggestions, as I felt they were in line with the mantra that shorter sentences, always in the active voice, were essential to improving my text. As I wrote longer and longer documents with these tools, however, I began to notice that the proposed changes didn’t always result in something easier to understand.
It was when I started translating a book written by a friend into English that I finally noticed just how bad these tools are at suggesting changes to anything other than business documents. They are great for helping you catch basic grammatical issues, but they can surely kill anything that remotely relies on your personal writing style.
In my case, since I was translating, it wasn’t so much my writing style at stake as the author’s. For one thing, why would the tool recommend corrections, beyond possible spelling errors, in quoted text? The quotation marks denote that you are reproducing someone’s speech. Even if the software decides that a particular passage is not the best way of saying something, it should respect that the passage is meant to reproduce text or speech from someone other than the author, and is thus not up for change.
Recently, I have been writing a new book about how to use Artificial Intelligence in support of your learning, essentially making it into your personal tutor. For the first time, I am using an AI to assist in the writing process, as a co-author off whom I bounce ideas. I also ask it to turn my short drafts into the final form of chapters and to create exercises, for example. This led me to worry that any portion of text the AI created could have included text from a previous work.
Though I generally only ask the AI to rework the form of the content I provide, not to create new text, I thought this was something that needed to be guarded against. What did I do? I decided to use Grammarly’s Plagiarism and AI Text detection tool.
The first part worked great and put my mind at ease. The only passages it flagged as matching previous publications were generic bits of text such as “You need to remain flexible and be open to new ideas.” On the other hand, it suggested that about 20% of my text might be what it called AI Text.
While I didn’t know precisely what constituted AI Text, I decided to explore, check, and, if appropriate, change those parts of the text. That’s when it got fascinating: the tool kept telling me that text I had written entirely by hand was possibly AI Text.
When I moved over to the suggestions tab, things got even nuttier: whenever I accepted one of its suggestions for improving the text, that portion would immediately be flagged as possible AI Text.
At that point, I simply decided that I didn’t even want to look at whatever suggestions it had for my text.
We’ve come to the point where the tools we use can not only help us a great deal but also detract from our work. We may be at a tipping point between tools that offer good suggestions about what might be wrong and tools that want to propose broader changes they are not yet ready to make effectively.
For now, I am finishing my manuscript review with my eyes, my brain, and a little help from LanguageTool, which does not attempt suggestions as ambitious as Grammarly’s, but whose changes don’t get flagged as AI Text, either.