Why Using AI for Essay Writing Is Not the Best Idea
In Brief
Generative AIs, and LLMs in particular, have taken the world by storm, but the world has started sobering up. Don’t be the last to know.
As a person who writes for a living, I could go on forever about why you should never hand your writing over to a machine, yet that is not what I want to talk about today.
Yes, I love writing: it’s a rewarding activity for me, a source of satisfaction, pride, and self-expression. Yet I understand that not everyone sees writing this way. For many students, the endless essays for English classes or general-education writing courses in college are tiresome. I get that.
However, even if you find writing less than enjoyable, that doesn’t mean you don’t benefit from it. All those essay assignments aren’t meant to make your life difficult. They foster the qualities employers today so fervently seek: critical thinking, analytical skills, synthesis, problem-solving, the ability to look at things from unexpected angles, creativity, and so on. Writing is a way of thinking, and the more you exercise this muscle, the better at thinking you become.
But you already know this. Even if you don’t believe it, I am not the first to tell you. Perhaps you only use AI to write essays you find useless, or resort to it as a last chance to submit an assignment on time and save your GPA. Life happens.
That’s why I decided to be practical about it and examine why using AI to write your essays for school or college can land you in hot water, given the inherent flaws of these tools, and why, in case of emergency, you will be better off outsourcing the assignment to a paper writing service. By the way, not all of them cost an arm and a leg: there are plenty of websites that write essays for you for free or offer tools that help you speed up writing and edit essays effortlessly. Many options are better than AI, and here is why.
AI is unreliable
This issue has been discussed many times by professors and journalists, but I’ll reiterate because it’s a big one. ChatGPT and similar generative AIs are text-generating systems trained on textual data. Text is all they know, and to be generative, they are allowed to improvise with it. They don’t know what “reality” is, and their only understanding of facts is whatever is presented as such in the text. AI cannot verify its sources, identify fakes, or assess how far from the truth it strayed in its attempts to be “original.” It only summarizes and generates coherent text to return pieces of relevant information in answer to your queries. If you train an AI on course books and encyclopedias, you’ll get a valuable educational resource. If you train it on everything there is on the vast expanses of the Internet, you’ll get… an interesting conversation partner who is often deluded, sometimes dangerously.
For several experiments, I tasked AI with writing essays on college-level topics and specifically asked it to support its claims with in-text citations. Sometimes it failed to comply, and sometimes it did provide a list of credible-looking sources that appeared to be academic articles in specialized journals. Here is the thing, though: the experts existed, the journals existed, but no such articles were ever published, either in the indicated issues or anywhere else. The AI hallucinated them based on the bibliographies it had seen. It even invented article titles loosely related to the topics each respective expert explored, but they were still fictitious.
Therefore, first and foremost, AI cannot write essays. It can imitate what an average essay looks like, but it will invent facts and mix things up. Fact-checking is a whole detective job you can only do when you have enough time, which, of course, you don’t. Otherwise, you wouldn’t be generating the essays you were supposed to write.
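If you do find yourself with that kind of time, part of the detective work can be automated. Here is a minimal sketch of the idea, assuming Python with the `requests` package and the free Crossref API; the cited title is one I invented for illustration, and a real check would also compare authors, journal, and year:

```python
# Minimal sketch: check whether a citation an AI handed you exists at all,
# by querying the public Crossref database of scholarly works.
import requests

def exists_in_crossref(title: str) -> bool:
    """Return True if Crossref lists a work whose title contains the claimed one."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    wanted = title.lower()
    # Crude match: the claimed title appears verbatim in a real record.
    return any(wanted in " ".join(item.get("title", [])).lower() for item in items)

# Hypothetical citation produced by a chatbot:
claimed_title = "Cognitive Benefits of Essay Writing in Undergraduate Education"
if exists_in_crossref(claimed_title):
    print("Found a matching record; now verify authors, journal, and year.")
else:
    print("No such title in Crossref; likely hallucinated.")
```

A hallucinated reference will typically return nothing or only vaguely related records, which is exactly the red flag you are looking for.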
AI is detectable
This is a subject of debate, but AI detectors can usually tell when a text is AI-generated. They deal in probability, of course, so no definitive answer can ever be given. The use of AI detectors in schools is controversial: there have been reports of multiple “false positives” that got students into trouble over perfectly ordinary essays the detection system found too predictable.
However, the problem was most often tied not to the use of AI detectors by the school per se, but to the failure of college officials to do right by the student: to give them the benefit of the doubt or conduct a proper investigation before penalizing the accused based on this score alone. In the most egregious cases, schools didn’t even do the due diligence of seeking a “second opinion” from other detectors. This is true and can be used in your defense if suspicions are raised about your authorship.
However, high AI-probability scores, especially from several different detectors, do look iffy and rightfully lead to investigations. If you did use AI to generate your essay, you are likely to fail other checks, such as additional questioning on the topic or requests to provide notes, drafts, or other evidence of bona fide research. If your only argument is that AI detectors aren’t 100% foolproof, consider your case lost. Even if you aren’t suspended or expelled right away, you will be given a strike and placed under close watch in the school’s system.
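To get a feel for why detectors can only ever deal in probability, consider the signal many of them build on: how predictable a text looks to a language model. Below is a toy sketch, assuming Python with the Hugging Face `transformers` and `torch` packages; it illustrates the principle and is in no way a real detector:

```python
# Toy illustration of the probabilistic signal behind many AI detectors:
# text a language model finds very predictable (low perplexity) looks
# "machine-like". Real detectors are far more elaborate than this.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' GPT-2 is by the text; lower = more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

# Bland, formulaic prose scores low; quirky human prose scores higher.
print(perplexity("In conclusion, technology has both advantages and disadvantages."))
print(perplexity("My grandmother salted her coffee and dared the day to argue."))
```

Formulaic prose scores as highly predictable whether a machine or a dutiful student wrote it, which is where those false positives come from.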
AI is repetitive
The AI’s lack of style and elegance in writing is a whole other kettle of fish, but its propensity to spew passage after passage that says the same thing in different words truly grinds my gears, and it will do the same to your professor, I can assure you. As I already mentioned, AI doesn’t differentiate between reality and the words that describe it. If something is named differently enough, it must be a different thing, at least in the AI’s perception. For example, it will list “feeling better” and “having more energy” as two distinct benefits of switching from coffee to water, or “communication,” “collaboration,” and “teamwork” as three different functions of group assignments, while describing them through the same situations and examples.
This doesn’t necessarily scream “generated,” but it does raise questions about your level of understanding, which must be poor if you fail to recognize that the three different arguments you put forward in your persuasive essay are, in fact, one argument repeated thrice in synonymous expressions. Don’t expect good marks for this kind of text.
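If you want to check a draft, generated or not, for this kind of redundancy, sentence embeddings catch it fairly well. Here is a hedged sketch, assuming Python with the `sentence-transformers` package; the three “arguments” are invented for the example:

```python
# Sketch: flag "different" arguments that are really one point restated,
# by measuring semantic similarity between their sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Invented "three arguments" from a hypothetical persuasive essay:
arguments = [
    "Group assignments improve communication between students.",
    "Working in groups helps students collaborate with one another.",
    "Team projects teach students the value of teamwork.",
]
embeddings = model.encode(arguments, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)

for i in range(len(arguments)):
    for j in range(i + 1, len(arguments)):
        score = float(similarity[i][j])
        if score > 0.75:  # the threshold is a judgment call
            print(f"Arguments {i + 1} and {j + 1} look like restatements "
                  f"(cosine similarity {score:.2f})")
```

Anything above the threshold is worth merging into a single, better-developed point.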
AI is not original
Let me be clear: I am not talking about creativity and contributing new breakthrough ideas. I am talking about plagiarism. Detectable pieces of identical text. Shocker, I know.
You see, one of the most lauded qualities of ChatGPT essays, according to proponents of AI-powered cheating, was their originality. That is to say, the algorithm minces words so effectively that the sources of its facts and statements cannot be traced. AI paraphrases so well that it cannot be caught plagiarizing.
This, however, is only partially true. Although ChatGPT’s variability seems endless, it still has a finite number of text structures and synonyms to shuffle. Of course, the first AI-generated papers had 0% plagiarism scores, but only because they were the first to pop out. Don’t expect that 100 essays on the same old topic, all generated by AI from the same datasets, will have zero overlap between them. As it turns out, AI has stock expressions and “favorite” phrases that show up in plagiarism scans.
Don’t believe me? Take any AI of your choice and ask it to write a blog article on a random topic. Then run the result through any plagiarism-detecting tool that scans the internet. There is a high chance that at least one article on the same topic published within the last year will show up with nearly identical structure and suspiciously similar phrasing. Granted, the percentage of overlap might not be very high, but you can bet that more and more similar articles with higher overlap rates will appear as the general public exploits AI to death for churning out generic blogs filled with “effortless original content.”
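For the curious, that overlap is easy to quantify without any special tool: shingle two texts into word 5-grams and compute their Jaccard similarity, which is roughly the fingerprinting idea plagiarism scanners apply at web scale. A minimal sketch in plain Python, with invented example sentences in the generic register AI tends to fall back on:

```python
# Sketch: measure verbatim overlap between two texts via shared word 5-grams.
def shingles(text: str, n: int = 5) -> set:
    """All word n-grams of a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    """Share of word n-grams the two texts have in common (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa and sb else 0.0

# Invented openings in the stock phrasing generative AI favors:
essay_1 = ("In today's fast-paced world, effective communication is more "
           "important than ever for students and professionals alike.")
essay_2 = ("In today's fast-paced world, effective communication is more "
           "important than ever, especially in the modern workplace.")
print(f"Shared 5-gram ratio: {jaccard(essay_1, essay_2):.2f}")
```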
AI writing will always be average at best
Although the eloquence of AI-generated texts might seem aspirational for newbie writers or language learners who haven’t yet gained confidence and fluency in their target language, it is, for all intents and purposes, average. Almost by definition, an AI’s output is the mean of everything the LLM was fed. No matter what the marketing copy says, it’s not “excellent.” It’s not even “good.” It’s “okay” at best. Are you sure you are content with this mediocrity ceiling? Don’t you want to do better and test your limits? Okay, maybe I’m asking the wrong way. Are you happy with getting a C- at best?
Things are highly unlikely to improve. AI is limited by what we, creative and mutable humans, can feed into it. If more and more people stop writing in favor of quick and easy grades, clicks, page views, or likes, AI’s fountain of Hippocrene will dry up very fast, and three years from now we will find ourselves hitting that “Regenerate” button in hopes of eliciting something new, something remotely original, something we haven’t already read a dozen times. It will be as hard as squeezing water from a stone.
Using AI to write papers for school isn’t the best idea because those papers are weak, repetitive, and full of erroneous information and invented sources, and they can easily be flagged by AI detectors and, quite possibly, plagiarism detectors, too. Yet getting caught shouldn’t even top the list of reasons why outsourcing your writing to AI is a no-no. Ultimately, writing is an act of communication. We write to tell something about ourselves, to make a case for our opinions, defend our standpoints, influence others, and through them reshape this world into a version we would like better than the current one. If we just throw pieces of generated text at each other that no one cares to read, nothing happens. No communication, no conflict, no progress. Wouldn’t that be a dreary world to live in?