I first tried Grammarly, the online writing editor, in middle school. Back then, it checked only for errors in spelling and punctuation, of which, in my papers, it found many — enough to offend me and keep me away from the website for years. I returned to Grammarly after hearing that it now checks for clarity, tone and style too, because I doubted a robot could recognize good writing any more than it could recognize a good singing voice or a good painting.
After testing out its new updates, I remain convinced that good writing cannot be reduced to a single formula.
I understand how programmers design software to spot lapses in grammar. The English language has concrete rules — phrases that need commas, words that need hyphens and letters that need to be capitalized — which a computer can learn. But Grammarly promises to go beyond these rules.
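The distinction the paragraph above draws — mechanical rules a computer can learn versus judgment it cannot — can be illustrated with a toy rule-based checker. This is a hypothetical sketch, not Grammarly's actual implementation; the rules and function names are invented for illustration.

```python
import re

# Toy rule-based checker in the spirit of early grammar software:
# it can only flag violations of concrete, mechanical rules.
# (Illustrative sketch — not how Grammarly actually works.)
RULES = [
    (re.compile(r"(?:^|[.!?]\s+)([a-z])"),
     "sentence should start with a capital letter"),
    (re.compile(r"\bi\b"),
     "the pronoun 'I' should be capitalized"),
    (re.compile(r"\s{2,}"),
     "multiple consecutive spaces"),
    (re.compile(r"\s[,.;:]"),
     "space before punctuation"),
]

def check(text):
    """Return (message, position) pairs for each mechanical rule violated."""
    findings = []
    for pattern, message in RULES:
        for match in pattern.finditer(text):
            findings.append((message, match.start()))
    return sorted(findings, key=lambda f: f[1])

# The checker catches surface errors but has nothing to say
# about clarity, cadence, or style:
print(check("my ladder was tall .  i kept it."))
```

A checker like this will happily pass any paragraph that obeys the surface rules, however unreadable — which is exactly the gap the experiment below exploits.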
As its website says, “Grammarly’s team of computational linguists and deep learning engineers designs cutting-edge algorithms that learn the rules and hidden patterns of good writing by analyzing millions of sentences from research corpora.”
Setting aside the horrid prose of this sentence, with its “deep learning engineers,” how did the “cutting-edge algorithms” miss that? Grammarly’s claim rests on a false premise: namely, that one can write well simply by following the rules of good writing.
There are some helpful writing pointers — otherwise William Strunk Jr. and E.B. White would not have sold so many copies of “The Elements of Style.” Yet even in that writer’s bible, White himself admits that “there is no satisfactory explanation of style, no infallible guide to good writing, no assurance that a person who thinks clearly will be able to write clearly, no key that unlocks the door.”
Grammarly may give its software the world’s most pompous name, “sophisticated artificial intelligence technology” — yet nobody, robot or human, can ever know why some sentences dissolve on the ear like butter while others crash like shards of broken glass. Or why Shakespeare’s words, drawn from the same dictionary as everyone else’s, produce an unmatched effect.
To test this machine, I wrote the most convoluted, unreadable, stroke-inducing paragraph I have ever produced, under the careful watch of Grammarly’s web extension. I aimed to make its meaning as vague, its cadence as grating and its sound as unpleasant as possible — while adhering to the basic rules of good writing — to stump my robotic editor. It worked; Grammarly failed to offer a single tip for clarity, tone or readability in this paragraph:
“My old ladder was tall and heavy, my new one small and nimble. Unsure which one to keep, I sought the opinion of my friend, who used to be a Mormon. The former Latter Day Saint replied that he preferred the former — the former latter — to the latter — the latter ladder — because the former ladder was sturdier than the latter ladder. But, he went on, we should ask the owner of the hardware store, who has built many ladders. This owner, a former of ladders, agreed with the Former Latter-Day Saint that my former ladder was better than my latter ladder. In the end, I took the advice of both the Former Latter-Day Saint and the former of ladders and kept my former latter instead of my latter ladder.”
Is that good writing? Grammarly seems to think so. Why would it not? I followed the official rules of good writing. What I broke were unofficial rules — the rule, for instance, that forbids you from giving the reader a headache. But a robot, unable to feel a headache, missed the paragraph’s main defect. How long it will be until robots can feel headaches, and other sensitivities, remains to be seen. Until then, I will go on using human eyes to edit my work and decry these online editors as false prophets.