Get the most out of Smodin.io

AI Content Detector

Feel confident that your text isn't detectable as AI-generated with our 95% accurate AI Content Detector.

Author (AI Writer)

Let our AI writer write your next essay, article, paragraph, or anything else.

Plagiarism Checker

Feel confident that your text is unique with our plagiarism checker.

Smodin’s AI Content Detection Remover

Smodin’s AI content detection remover uses sophisticated rewriting technology that intelligently analyzes AI-generated content and restructures it while preserving its original meaning and coherence. We use advanced natural language processing algorithms that paraphrase and rephrase your AI-generated content, making it more human-like and less recognizable by AI detection systems.
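Smodin's internal system isn't public, but the general technique it describes (model-based paraphrasing) is well known. Below is a minimal sketch using the Hugging Face `transformers` library; the model identifier is a hypothetical placeholder, not Smodin's actual system.

```python
# A minimal sketch of model-based paraphrasing, the general technique described
# above. This is NOT Smodin's actual pipeline; the model name below is a
# hypothetical placeholder for any seq2seq paraphrase model on the Hugging Face hub.
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="your-org/t5-paraphraser")

ai_text = "Artificial intelligence has fundamentally transformed many industries."
candidates = paraphraser(
    f"paraphrase: {ai_text}",  # many T5-style paraphrasers expect a task prefix
    num_beams=5,
    num_return_sequences=3,    # produce several rewordings to choose from
    max_length=64,
)
for candidate in candidates:
    print(candidate["generated_text"])
```

Generating several candidates and picking the most natural one mirrors the "iterative rewriting" tip below.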

Recommended Usage and Tips for AI Content Detection Remover

Quality Input: Start with well-structured, grammatically correct, and accurate prompts. This ensures that the rewriting process maintains the integrity of your original message.

Review and Edit: While our tool is highly effective, you get the best results when you review the rewritten content for accuracy, tone, and consistency. Make any adjustments you see fit.

Iterative Rewriting: In some cases, it might be beneficial to rewrite the content multiple times to reduce the chances of AI detection further. Experiment with different iterations to find the best version.

Why Should You Use an AI Content Detection Remover?

Smodin’s AI Content Detection Remover provides users with a powerful solution for maintaining the creative essence of AI-generated content while bypassing AI detection software and tools. Our tool expertly restructures and rewrites your content, making it less recognizable as AI-generated while preserving its unique insights and ideas. Moreover, manually rewriting AI-generated content can be a time-consuming and labor-intensive process. This tool streamlines this task, allowing you to focus on other essential aspects of writing, including feeding better prompts, checking for accuracy, finding the right references, etc.

How to Avoid AI Plagiarism Detection

We have the solution for avoiding AI detection: the recreate method. In the world of ChatGPT and large language models, AI writing is a must-have tool in your tool belt. However, there are ways to successfully detect AI-generated content, and the only way to automatically prevent detection is with a model trained on thousands of samples of human-written data... and that is exactly what Smodin's recreate method is. Smodin's recreate method defeats AI detection in a single click, allowing you to efficiently create any content you need. There are situations, however, when text written by AI is too generic to pass as human-written; in these situations, it is recommended to generate a new text or make more than one attempt to produce human-sounding text.

Check Text for Plagiarism

After you rewrite your text, you should make sure that the text passes plagiarism detection. Use our multi-lingual plagiarism detection application to quickly check the text for plagiarism!

What is AI Content Detection?

AI content detection determines whether a text was written by AI based on the statistical predictability of its words. AI writing models tend to produce text that follows the most frequent word orders. Is your text produced by an AI? Find out below!
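To make the "predictability" idea concrete, here is a minimal sketch of one common detection signal, perplexity: how predictable a text is under a language model. It assumes the Hugging Face `transformers` library and the small `gpt2` model; real detectors combine many more signals, so this is an illustration, not a production detector.

```python
# A minimal sketch of perplexity-based AI detection. Lower perplexity means the
# text is more predictable to a language model, one signal it may be AI-written.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    """Average per-token surprise under the model; lower = more predictable."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels yields the cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The sun rises in the east and sets in the west."))
```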


8 Proven Ways to Humanize AI Text (Using 2024 Tools)


AI still can't match our human ability to connect with other humans through writing.

We have the capacity to absorb others’ emotions and reflect them into words that can further evoke feelings from millions of readers or listeners. 

With the advent of AI, content creation has become more accessible. The question is—do you rely 100% on AI generation before publishing content?

How do you determine which AI content needs "humanizing"?

If we’re talking about generic content like definitions of words, explaining a term, or creating term glossaries, AI content works like magic. 

However, it’s a way with words that helps people connect with a brand. When we read novels like “The Valkyrie” or “The Lion Tamer Who Lost”, their chaotic yet beautifully relatable plots keep us invested throughout the book.

The authors or writers use cognitive abilities to create stories people resonate with and remember for years to come. In the same way, online content is not just for education. It’s meant to create awareness and leave an impact on lives. 

AI uses training and online data to create content based on prompts, and it’s mostly a slurry of words that make sense but don’t exhibit any intentions.

Hence, the answer to our previous question, “Do you need to change AI content to human content?”, is yes.

Today, we’ll dig deeper into this topic and understand why it’s important that feelings stay a part of online content and how AI content can exist in harmony with human content. 


AI to human text converter - Try Wordtune for free >

What is AI content?

AI, or artificial intelligence, refers to tools that use algorithms, machine learning, and online data repositories to generate results based on the user’s intent. You enter keywords, a prompt, and the specifics of your use case, and the tool produces a result within minutes.

For example, if I type: “Write an email inviting customers to the opening of a new bakery branch”


Wordtune instantly generates an email that invites people to the bakery launch, ties it to the staff’s tireless hard work, and describes the bakery’s atmosphere.


These AI tools are meant to make it easier to create content, extract online data, save time, and remove grammatical errors from great pieces. 

Some AI tools also offer rewrites to create a better version of your writing, which is a plus for content-extensive projects. 

They help enhance efficiency and scalability and overcome writer’s block!

However, the biggest downside of using AI content is that Google can treat it as spam and keep it from ranking. It doesn’t dismiss all instances of AI, though, such as using it for forecasting or for creating content that isn’t plagiarized.

Whether it’s human content that focuses on search intent or AI-assisted content, following the principles of E-E-A-T pays off:

  • Experience
  • Expertise
  • Authoritativeness
  • Trustworthiness

Google always rewards quality over spam.

In the future, its systems will eventually evolve to differentiate between content that serves user intent and content written only to rank.

But you can jump ahead of this conundrum and humanize AI content in simple steps. Because why not leave a little bit of your sparkle wherever you post?

This helps your content rank better and builds a personal brand from scratch.

How does AI content help?

The online world is on the cusp of a major transition as every possible website rolls out AI features for user convenience. 

AI has several pros for the content community; the most prominent are:

1. Imparts inspiration

AI is an endless source of information, and ideation is one of the major use cases of AI tools. Features like Wordtune’s “Read and Summarize” tool help condense long content into key highlights, making it easy to find inspiration.

Beyond that, you can spend hours on search engines and still fail to find the information you need. AI comes to the rescue because it’s trained on a vast pool of information, acting as a hub where you can search almost any topic.


For example, when I type “3 stats about unemployment in the U.S.”, Wordtune returns accurate, up-to-date stats along with their sources.


Without the tool, it would’ve taken at least 15 minutes to find the right stats, trace them back to their source, and fact-check them. The last part still remains, but it’s easier now that I know the source. 

Another way to be inspired by the automation skills of AI is to ask it for ideas. 

Ideation is a prime activity that every writer, marketer, or team lead racks their brain over. With AI, brainstorming sessions become more productive, and teams can churn out more ideas too.

For example,


It might not churn out ideas you can use instantly, but it definitely jogs the memory and saves research time on the way to better ideas.


2. Efficient and scalable

AI can create content within split seconds. 

It assesses the need and generates answers much faster than human beings. A writer can be more efficient with writing, as they can use AI to write an article and then edit the whole thing as per their need. 

This saves hours and even days when it comes to long-form content. 

It also helps the writers scale their content efforts by creating more content within an hour. 

However, writers should be cautious and refrain from copying the entire content from AI. Use it for inspiration and to extract the facts, but always add the brand voice and other essentials.

3. Language 


There are approximately 6,500 languages in the world. 

There’s a chance that every next person you meet speaks a new language. Living in this diverse world, writers need to capture audiences from different parts of the world. 

There are two ways around this. 

  • Learn multiple languages, which will take years.
  • Use AI to write content in any language within minutes.

The first option demands the investment of both money and time, which is scarce, as we know. On the other hand, the second option is mostly free, cost-effective, and time-saving. 

Multiple AI tools online are rolling out their feature to write in different languages, which helps expand the audience and cater to global needs. 

This feature helps add personalization and removes the need to learn multiple languages.

4. Overcome writer’s block

Writer's block is when your brain is just out of ideas. 

You try hard, but only gibberish comes out. 

To get out of this slump, you take a walk, engage in other activities, or take a break. 

But what if you're running against the clock?

If you need quick ideas, AI is the best way to find answers to pestering questions. To help overcome a block, AI assesses your prompt's key points and specifics and creates answers that closely match your thinking.

This sparks ideas in the mind again and helps you get out of a block and start fresh. In some cases, AI can also be used to generate the whole piece of content. All writers need to do is polish it to meet the publishing standards. 

5. Create SEO-based content

Since AI has evaluated thousands of online search results, it’s aware of how to use keywords judiciously.

Drawing on this wealth of information, AI’s results are backed by research and keywords, which helps your content rank better.

Writers can also take keyword suggestions from AI to help them write relevant and most searched content in depth. This provides a base for writers to act on and adds more context to what they publish. 

To further enhance the quality of their content, writers can also use tools to remove grammatical errors, rewrite content, shorten, expand, or add different tones to their content.

Why does AI need editing?

Every day, new AI tools are launched online. They’re still not perfect, as the results can often omit crucial information. While tools like Wordtune generally produce accurate results, missing specifics in prompts or the tool’s own limitations can affect the content.

Hence, writers should take their time to assess each word, humanize the content, and add missing content. 

Moreover, AI tools cannot determine if the content is 100% authentic, so you need to audit each result, remove redundant information, and make it more of a story than just a piece of content brimming with words. 

How to humanize AI content

To humanize AI content, you need to perceive content emotionally from the point of view of your audience. 

The best way to start is to define your audience's needs, goals, and emotional touch points. Several other aspects go into humanizing content; the most important are:

1. Select the right AI tool

There are scores of AI writing tools available online. Each offers some USP (Unique Selling Proposition), like 10 free rewrites, a meta description generator, and the “Spices” feature of Wordtune. 

However, only some tools are as efficient and accurate as others. Many are still in beta, while others keep evolving along with the online world.

In such a case, it’s crucial to find the fit that gives the best answer and adds to the project instead of complicating it. 

To determine the best AI tool, consider these:

  • Who is your target audience? 
  • Which type of content do you want to produce? [Social media captions, meta descriptions, post ideas, etc.]
  • Do you need a free tool, or are you ready to pay for more features?
  • Will it save you time?
  • Can you integrate it into your cloud?
  • Does it provide a multilingual option?

If a tool answers all these questions well, it’ll be your ideal fit.

The right AI tool will help you write better content, as it adheres to most of your commands. Moreover, polishing it will also be easier, since the tool itself fulfills your basic purpose.

2. Use an AI content detector


After generating your text, you can run it through an AI content detector to make sure it isn't flagged as AI-generated.

This extra step serves several purposes:

  • It reduces some of the manual work of humanizing your content and lets you focus on the areas that are flagged as AI-generated.
  • It prevents Google from flagging your content as AI-generated and hurting your ranking and SEO.
  • It helps if you need to submit the content for review and don't want the reviewing party to know you used AI. This might seem like a sneaky move, but it's really similar to a public speaker using a speechwriter and wanting the audience to think the speech was their own.

Check out Wordtune's AI content detector >

3. Make things more casual

Monotony kills interest.

Since AI is not yet wired to produce human-like conversation, its content can sometimes feel generic or overly verbose.

To keep the audience engaged, try to spice up any AI content with casual language the readers will resonate with.

The more you identify with a piece of writing, the more you learn from it, and a friendly tone is something all search engines love. It humanizes the content and provides the satisfaction of a conversation to the readers. 

To make it easier for you, Wordtune’s “Casual” feature allows you to liven up your content instantly.


With the Spices feature, writers can add facts, anecdotes, or jokes to the content without much hassle. It’s fine if your joke repository runs out of humor; this tool will always render inspiration.

Humor is a universal language; each person may perceive it differently, but it strikes a human chord in some way.

4. Add visuals

Visuals are inherent to great content. 

Images make a story more enchanting and engaging, adding context to the content. 

Images also make good use of white space, grabbing attention and breaking up long stretches of text. Human beings interact more with images because they aid understanding, and the added color within the text stimulates the brain.


Visuals help readers digest information faster and retain more of it compared to reading text alone.

This prompts Google’s bots to give your content more attention, since it helps the audience connect. It’s not just modern-day preferences or busy schedules; human beings also have short attention spans, which makes visuals all the more important.

Formats like videos, infographics, and memes instantly draw a larger audience because they impart information quickly. When writers humanize AI-written content, images set it apart from the rest.

Ideally, there should be an image near each major heading or screenful of scrolling.

You can add memes for a comic effect, provide GIFs to spice it up, or add infographics and carousels to impart more context to a process or a step. 

Pro Tip: Users trust images from an original source more than stock photos, which are everywhere. So, it’s best to create your own visuals rather than always using stock.

5. Add personal instances

Humans are empathic creatures.

They respond to stories, relate to personal instances, and feel emotion when they see a story similar to their own.

For example, when users watch a series, a simple backstory of the character or lead helps them form an emotional connection and keeps them invested throughout each episode. 

“We are, as a species, addicted to story. Even when the body goes to sleep, the mind stays up all night, telling itself stories.” — Jonathan Gottschall, The Storytelling Animal: How Stories Make Us Human.

AI content might use words to make the content entertaining, but it cannot generate personal stories as they are unique to each person. 

The brain places you inside a story; think of animated movies like “Home” or “Rapunzel”, which evoke feelings of homecoming, family, or personal agony. All of these etch their place in the mind and make the lesson stick, which is also the case with online content.

To add stories to AI content, follow these steps:

  • Decide the best placement for stories.
  • Define an emotion: sadness, happiness, empathy, FOMO, anger, etc.
  • Start with words like “Imagine”, “I’ve felt the same”, or “Here’s my story”; this creates a transition in the content.
  • Always relate the story to the original intent of the content.
  • Make sure it’s not offensive or hurtful to anyone’s sentiments.

To tempt your readers, factor in all these aspects and add a story with drama, emotions, and context behind them. 

We are neurologically wired to connect to stories as social creatures, and the more people relate with you, the more they remember.

6. Optimize for search engines

If you want to reach your audience, focus on SEO. 

Search Engine Optimization (SEO) is essential to content creation and marketing. It helps ensure that webpages are properly indexed and ranked in search engine results, making them easier to find and access. SEO also helps humanize content by tailoring it to users’ needs and interests, allowing content creators to target users with relevant content and provide a more personalized experience.

SEO can also improve user engagement by making content more easily discoverable and accessible, so readers connect more readily with both the content and the brand.

Search engines have specific guidelines and protocols that each content piece should follow to rank. Since AI’s data is drawn from online sources, you can expect some degree of plagiarism that you need to remove.

More than 2% plagiarism is generally considered unacceptable, and Google’s web crawlers penalize citations without sources.

Moreover, high-ranking pages get a higher CTR than those ranking on the second and third pages or beyond.

While AI can place keywords throughout the content, it’s unable to optimize for the rest of the SEO essentials, like visuals, meta descriptions, metadata, transitions, plagiarism, and more.

Moreover, you also need to add backlinks and internal links to increase the page's visibility and redirect the audience to the right links. All these factors humanize the content as you’ll focus more on search intent than just spewing online information.

Overall, editing is important because it helps ensure the content's quality, makes it more effective and engaging, and prevents misunderstandings. It is an invaluable part of content creation and should not be overlooked.

7. Give time for editing

When you use AI for content, editing plays a major role in doing the content justice.

While editing, a lot of your work will include adding a voice to the content, removing generic facts, and adding specific data-backed research. 

You must also add a human-like conversational tone to make it more engaging and unique. Next, find errors in the content and remove them one by one. Editing manually gives the satisfaction of double-checking and also makes content more human. 

8. Treat it as a first draft

Writers often have an extensive writing process. 

They often start with a rough draft and shape it by adding important information chunks here and there. 


For example, when you ask Wordtune to write “An article on the best writing practices”, this is the result.

At first glance, you can spot the obvious edits, like breaking the paragraphs and adding subheads. 

Also, the article explains the best practices, but adding some visuals and examples would greatly enhance its engagement level and reduce the bounce rate. Some links also need to be added; without them, the content feels monotonous.

It’s only 341 words, so you’ll need to expand it to at least 500-600 words to deliver useful information well.

Editing makes the content more professional and thus more likely to be taken seriously, and more effective, engaging content is shared more successfully with others. Editing also helps reduce the risk of misunderstandings or misinterpretations.

Add your human touch

It’s important to address the fact that your content is read by humans and not just by search engine bots. 

Adding your tone and human touch is essential to forge a connection with the audience and build a brand, rather than an identity as a bot. Using AI is a plus for more efficient content creation, but it’s important to remember that it’s the personal touch that every human craves.

Hence, it’s best practice to humanize AI content and proofread it meticulously to remove errors, whether of sentiment or grammar.

Wordtune is here to be your ultimate companion in this process. Use it today to increase your content production with high accuracy. 


How to edit AI-generated text content

Machine-generated language can get you started, but the qualities that make writing resonant with audiences require a human touch. That’s where you come in, content strategist.

This essay was originally published on January 5, 2023, with the email subject line "CT No.149: How to edit AI-generated content."

Disclaimer: I have had a friendly relationship with the team from Writer since 2020, primarily because they’ve supported and read The Content Technologist, and because they have an awesome online style guide builder that recently evolved to include an AI-generated writing tool. Last year I became a partner with Writer because I use their tool in my consulting work and recommend it to clients.

Some traditional news publications would consider my affiliation with Writer to be a conflict of interest. Good thing I’m not writing for them and make my own rules! As part of The Content Technologist’s mission, I want to see thoughtfully made products connected with thoughtful people. Also in full disclosure, I’ve made zero dollars from my partnership via this newsletter, and I don’t expect to make affiliate income from this content. I wrote this essay to explore editing AI at an expert level, and if I didn’t include Writer I’d be a fool. But this is it: the full disclosure. I’m friends with Writer, and if you would like to try their product, there’s a link at the end of the essay.

How to edit AI-generated content

If it hasn’t happened yet, expect the communication soon. Someone, perhaps a junior employee, perhaps an executive, will shoot you an email asking “Can you look this over? I used one of those AI tools to write this.” The copy will look good, more coherent and fleshed out than you’re used to seeing from this colleague who excels at interpersonal communication but struggles with writing.

When you’re giving feedback, you’re trying to be supportive. After all, this isn’t a 1970s newsroom filled with irate editors, red pens and bathroom sobbing; you’re trying to create a collegial environment where your coworkers enjoy their work. You also kinda like that people are now coming to you for advice on their writing. In the past you stayed late writing the whole thing from scratch. Now your colleagues are doing the writing themselves, kinda, and that’s a good thing.

So what should you look for in that machine-assisted copy? All in all, the text feels a little weird, a little off. You’re not sure which aspects of the content originated from your coworker and which came from the AI. But you’re glad they came to you first, and you’re excited to revisit the editorial skills you honed earlier in your career. So let’s get to editing.


Just as humans rely on comfortable clichés and repetition when writing and speaking, computer-generated content leans on familiar patterns and templated sequences. AI-powered language generators work like the “clone stamp” function in Photoshop: the tool perceives patterns in its training data and makes its best attempt at mimicry, filling in the gaps in the image as a whole. As with the clone stamp, the “magic” fix requires detail-focused after-stamp cleanup to professionalize the final product.

How should you edit your colleague’s AI-generated content? Let’s jump on the trend train and ask the machines.

The process of building this review

For this essay, I consulted five natural language generation tools: Writer, Copysmith, Lex, Jasper and ChatGPT.* Beginning with the prompt, “What's the best way to edit AI-generated content?” I followed each tool's templates and processes to generate about 500-1000 words each on the subject.

Their responses revealed that machine-generated content can parrot awareness of its own weaknesses to an extent. But the output universally lacks finesse, resembling student term papers instead of professional writing. These tools weren’t trained on Dorothy Parker and James Baldwin, or even on Bob Woodward, Sophie Kinsella or Alexander McCall Smith. They were trained on the morass of the web: hastily produced marketing copy, half-assed arguments, forum flame wars, academic jargon, STEM-heads who never valued rhetoric or syntax, and far too many 1,000-word-count blog posts that ultimately say nothing.

That’s not to say the output of machine-generated language is universally bad. Quite the contrary: like a recent MBA at their first internal marketing presentation, AI generators spit out a lot of words quickly, but they ultimately spout some valuable knowledge amid the clutter. I estimate 20-35% of the AI-generated language is useful, adding phrasing and ideas that may not have been considered in an initial draft or outline.

But the qualities that make writing resonant with audiences are largely absent from AI generators and require a human touch to make better. That’s where you come in, content strategist. You’re not just a copywriting monkey anymore.

*Technically I also looked at Copy.ai, but its answers were so wildly different than the other tools, including them in this essay would have added an extra 500 words and about ten more tangents. Congrats, Copy.ai, you’re the outlier.


Here’s how AI-generated text tools recommend they should be edited:

The generic suggestions written below were generated by AI-text generation tools, but I’ve edited significantly and added far more detail. The wording is mine unless otherwise indicated.

  • Proofread the content, both with grammar check and your own eyes. AI writing tools will be the first to admit they need a grammar check. Like people, computers make plenty of syntactical mistakes, use passive voice, sometimes forget words or leave fragments hanging. All the AI tools used for this exercise recommended both an automated and manual proofread. Experienced editors know proofreading is generally the last step in an editorial process, but I recommend giving AI-generated text a once-over with spell check before you begin, removing egregious errors so you can focus on the shape and meaning of the piece. (All tools tested)
  • Rigorously check facts generated by AI. Unlike shoddy researchers who think the ultimate source of truth is a Tweet or Google’s quick answers, machine-powered content generators willingly admit that they’re often factually incorrect. If you’ve ever tried to generate scratch copy about a proper name or a sensitive cultural event with an AI tool that hasn’t been vetted by a compliance team, you might be a bit shocked by the fallacies and conspiracy theories the computer mimics. Even the most benign AI content contains factual errors. One tool claimed, “because AI-generated copy is often generated from data, it can be numbers-heavy.” But I don’t think I’ve once seen a number or statistic in AI-generated copy that I did not feed it myself—for good reason. My hope is that AI-generated content inspires businesses to check facts before hitting publish, which—let’s be honest—hasn’t necessarily been a part of the rapid-fire content development process at many companies. (Writer, Jasper, ChatGPT, Lex)
  • Restructure AI content like a professional. It’s no secret that machine-generated content reads as awkward at best. Like the junior debate team, machine-generated text relies far too heavily on transition adverbs to make connections as it lists disparate ideas, rather than creating a cohesive start-to-finish argument. Jasper’s output supplied a hint to how AI content tools develop their flow: “Many AI-generated pieces are written with a particular structure in mind, such as 'problem/solution' or 'cause/effect'. While it's important to maintain the overall shape of the piece, you may want to make some tweaks here and there to improve how it flows.” Take that statement with a grain of salt—see point 2 about the believability of robot facts—but if it’s true, we can intuit how AI content generators develop long-form text. Translation: AI tools are generally trained to recreate the basic essay structures taught in elementary and high school classes. They mimic the foundational mechanics of writing instead of finding a creative way to make a persuasive argument. If you want your audience to think you’re a student working toward a B grade, use the AI text as is. If you seek professionalism, you’ll want to make more than “some tweaks here and there” and edit the content holistically for the most impactful way to present your idea to the audience. (Jasper, ChatGPT, Writer)
  • Address consistency. One tool acknowledged that its breadth of training data might lead to stylistic inconsistencies in the text it generates. As with any other edit, ensure text refers to terms and concepts consistently throughout the content it generates (or, better yet, use an AI that comes powered with a customizable style guide). (Jasper)
  • Be prepared for biases. The effects and outputs of algorithmic biases are well chronicled. Be aware that if you’re prompting the AI tools to make generalizations about large groups of people, it will do exactly as you asked. One tool claimed, “AI-generated content can sometimes contain subconscious biases,” which I would edit to say: AI-generated content parrots the biases of the people who wrote its training data. It has no consciousness; it’s a copy machine. (Jasper)
  • Remember the context in which the content will be presented. One generator provided me with a reminder: “If you're editing a blog post written by an AI, you'll need to take into account the fact that readers will likely be skimming the article rather than reading it from start to finish.” Great advice for all digital editorial! Because AI-generated content leans toward the wildly verbose and relies heavily on pattern mimicry, be prepared to edit the same way you do all other content: considering the audience and publication context. (Writer)

Additional tips for editing AI-generated content, from an experienced human editor

In reality, you’re not really editing machine-generated text; you’re co-writing with a robot partner. The output is much better when you’re using a tool that provides step-by-step templating options (Writer, Jasper, Copy.ai) and options to rewrite the text in multiple styles. If you want the text to be good, be prepared for the following steps:

  • Create your AI-generated text from an outline of existing research or concepts. Arriving at the writing process with your own subject matter expertise will save time and make for a better output. If you’re using a tool to generate a blog post, start with an outline to keep the AI focused and expert. With most generator tools, if an outline is not provided, the machine-generated content will become repetitive after a couple of sentences. Turns out computers aren’t great at original research and need to be prompted with facts and structure.
  • Strip text down to its core purpose. Why is your business publishing this content? Are you making an ask in a sales email? Trying to get journalists to peep your content with a press release? Explaining a new concept to your audience? As an editor, strip the text down to its core purpose, making sure it’s clear, concise and compelling. Then, go back and add the machine-generated detail sentences, refining as you go. Don’t add more than you need to make the content effective.
  • Develop a hook. The first sentence in a piece of AI-generated content is inevitably terrible, by any editor’s standards. Here are a few sample intro sentences from my prompt: ∙ Few advancements in modern technology have been more instrumental than artificial intelligence (AI). ∙ AI-generated content is becoming increasingly popular. ∙ One way to edit AI-generated content is to use a combination of automated and manual editing. ∙ The best way to edit AI-generated content is to start by taking a close look at the specific areas where it needs improvement. ∙ If you're a professional editor, chances are you're going to encounter AI-generated copy at some point in your career. Except the last one, which somewhat resembles the intro of this essay,** these sentences are total snoozefests, and none give your audience a reason to continue. If you want readers to pay attention, every piece of content you create, from marketing emails to internal comms to more high-profile website copy, should have some kind of hook. Not a manipulative hook that tricks people into reading (favored by so many shitty automated sales emails), but an actual interest point that compels your audience to read—an anecdote, a nugget of insight, a potential future scenario for the reader, etc. Y’know, make it original.
  • Demonstrate a breadth of vocabulary and an affinity for detail. Because it relies on predictive text generation based on a body of internet copy, natural language generation is limited in its vocabulary, even though it produces a high volume of words. Unless you’re writing plain-language instructions for an audience, consider rewriting AI-generated sentences with more specific nouns and verbs than what the AI supplies, all indicators of higher quality writing (and great for search optimization).
  • Vary sentence structure. As noted above: AI-generated copy reads as generic, often because it’s trained on school essays and internet fodder rather than high-quality writing. Break those run-ons up! Create bulleted lists! All good writers vary their sentence structure. Variety keeps the audience engaged. Don’t let the trap of unnecessarily complex sentences let your company’s content drag.
  • Train AI on your company’s house style. Create a style guide! Train your AI on house style so it knows to avoid overly verbose expressions and gets to the point. Make the machine learn from your preferences. Only one of the tools mentioned in this article provides a style guide tool that can train an AI, and it’s my friends at Writer, which is why I’ve long considered them the best tool of the bunch. Generic machine-generated copy doesn’t do much good for brand-building, but copy that’s trained on your house style and subject matter expertise is a gold mine.

Whether you think machine-generated content is the best thing since sliced bread or you think it’s a bunch of fodder with minimal originality, AI-generated text is guaranteed to give editorial enthusiasts plenty of work in the next couple of years. And keep in mind: Not everyone is as strong in writing and editing in your native language as you are, and these tools help many people who struggle with writing at the pace business demands. Let’s hope this technology makes business writing better. Let’s make it as fun and impactful as we can, and remember: the technology is supposed to make your life easier.

**Nothing is original: I wrote my intro before prompting the tool, and we all arrive at the same clichés eventually

Tools reviewed in this article

Presented in order of my preference based on output quality. Your mileage may vary.

  • Copy.ai (kind of)


The Writing Center • University of North Carolina at Chapel Hill

Generative AI in Academic Writing

What this handout is about

You’ve likely heard of AI tools such as ChatGPT, Google Bard, Microsoft Bing, or others by now. These tools fall under a broad, encompassing term called generative AI that describes technology that can create new text, images, sounds, video, etc. based on information and examples drawn from the internet. In this handout, we will focus on potential uses and pitfalls of generative AI tools that generate text.

Before we begin: Stay tuned to your instructor

Instructors’ opinions on the use of AI tools may vary dramatically from one class to the next, so don’t assume that all of your instructors will think alike on this topic. Consult each syllabus for guidance or requirements related to the use of AI tools. If you have questions about if/how/when it may be appropriate to use generative AI in your coursework, be sure to seek input from your instructor before you turn something in for a grade. You are always 100% responsible for whatever writing you choose to turn in to an instructor, so it pays to inquire early.

Note that when your instructors authorize the use of generative AI tools, they will likely assume that these tools may help you think and write—not think or write for you. Keep that principle in mind when you are drafting and revising your assignments. You can maintain your academic integrity and employ the tools with the same high ethical standards and source use practices that you use in any piece of academic writing.

What is generative AI, and how does it work?

Generative AI is an artificial intelligence tool that allows users to ask it questions or make requests and receive quick written responses. It uses Large Language Models (LLMs) to analyze vast amounts of textual data to determine patterns in words and phrases. Detecting patterns allows LLMs to predict what words may follow other words and to transform the content of its corpus (the textual data) into new sentences that respond to the questions or requests. Using complex neural network models, LLMs generate writing that mimics human intelligence and varied writing styles.
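To make the pattern-prediction idea concrete, here is a minimal sketch of next-word prediction, assuming the Hugging Face `transformers` library and the small `gpt2` model (an illustration only; commercial tools use far larger models).

```python
# A minimal sketch of next-token prediction, the core mechanism described above:
# the model scores every vocabulary token as a possible continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # scores at every position

# The final position's scores rank every candidate next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {prob:.3f}")
```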

The textual data used to train the LLM has been scraped from the internet, though it is unclear exactly which sources have been included in the corpus for each AI tool. As you can imagine, the internet has a vast array of content of variable quality and utility, and generative AI does not distinguish between accurate/inaccurate or biased/unbiased information. It can also recombine accurate source information in ways that generate inaccurate statements, so it’s important to be discerning when you use these tools and to carefully digest what’s generated for you. That said, the AI tools may spark ideas, save you time, offer models, and help you improve your writing skills. Just plan to bring your critical thinking skills to bear as you begin to experiment with and explore AI tools.

As you explore the world of generative AI tools, note that there are both free and paid versions. Some require you to create an account, while others don’t. Whatever tools you experiment with, take the time to read the terms before you proceed, especially the terms about how they will use your personal data and prompt history.

In order to generate responses from AI tools, you start by asking a question or making a request, called a “prompt.” Prompting is akin to putting words into a browser’s search bar, but you can make much more sophisticated requests from AI tools with a little practice. Just as you learned to use Google or other search engines by using keywords or strings, you will need to experiment with how you can extract responses from generative AI tools. You can experiment with brief prompts and with prompts that include as much information as possible, like information about the goal, the context, and the constraints.

You could experiment with some fun requests like “Create an itinerary for a trip to a North Carolina beach.” You may then refine your prompt to “Create an itinerary for a relaxing weekend at Topsail Beach and include restaurant recommendations” or “Create an itinerary for a summer weekend at Topsail Beach for teenagers who hate water sports.” You can experiment with style by refining the prompt to “Rephrase the itinerary in the style of a sailor shanty.” Look carefully at the results for each version of the prompt to see how your changes have shaped the answers.
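If you prefer to experiment programmatically, the same refinement loop looks something like the sketch below. It assumes the official `openai` Python library with an API key configured in your environment; the model name is only an example and may differ.

```python
# A sketch of iterative prompt refinement: run progressively more specific
# prompts and compare how the answers change.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

prompts = [
    "Create an itinerary for a trip to a North Carolina beach.",
    "Create an itinerary for a relaxing weekend at Topsail Beach and include restaurant recommendations.",
    "Rephrase the itinerary in the style of a sailor shanty.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute whatever you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content[:200]}\n")
```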

The more you experiment with generative AI for fun, the more knowledgeable and prepared you will be to use the tool responsibly if you have occasion to use it for your academic work. Here are some ways you might experiment with generative AI tools when drafting or exploring a topic for a paper.

Potential uses

Brainstorming/exploring the instructor’s prompt

Generative AI can help spark ideas or categories for brainstorming. You could try taking key words from your topic and asking questions about these ideas or concepts. As you narrow in on a topic, you can ask more specific or in-depth questions.

Based on the answers that you get from the AI tool, you may identify some topics, ideas, or areas you are interested in researching further. At this point, you can start exploring credible academic sources, visit your instructor’s office hours to discuss topic directions, meet with a research librarian for search strategies, etc.

Generating outlines

AI tools can generate outlines of writing project timelines, slide presentations, and a variety of writing tasks. You can revise the prompt to generate several versions of the outlines that include, exclude, and prioritize different information. Analyze the output to spark your own thinking about how you’d like to structure the draft you’re working on.

Models of genres or types of writing

If you are uncertain how to approach a new format or type of writing, an AI tool may quickly generate an example that may inform how you develop your draft. For example, you may never have written a literature review, a cover letter for an internship, or an abstract for a research project. With good prompting, an AI tool may show you what type of written product you are aiming to develop, including typical components of that genre and examples. You can analyze the output for the sequence of information to help you get a sense of the structure of that genre, but be cautious about relying on the actual information (see pitfalls below). You can use what you learn about the structures to develop drafts with your own content.

Summarizing longer texts

You can put longer texts into the AI tool and ask for a summary of the key points. You can use the summary as a guide to orient you to the text. After reading the summary, you can read the full text to analyze how the author has shaped the argument, to get the important details, and to capture important points that the tool may have omitted from the summary.

Editing/refining

AI tools can help you improve your text at the sentence level. While sometimes simplistic, AI-generated text is generally free of grammatical errors. You can insert text you have written into an AI tool and ask it to check for grammatical errors or offer sentence-level improvements. If this draft will be turned in to your instructor, be sure to check your instructor’s policies on using AI for coursework.

As an extension of editing and revising, you may be curious about what AI can tell you about your own writing. For example, after asking AI tools to fix grammatical and punctuation errors in your text, compare your original and the AI-edited version side by side. What do you notice about the changes that were made? Can you identify patterns in these changes? Do you agree with the changes that were made? Did AI make your writing more clear? Did it remove your unique voice? Writing is always a series of choices you make. Just because AI suggests a change doesn’t mean you need to make it, but understanding why it was suggested may help you take a different perspective on your writing.

Translation

You can prompt generative AI tools to translate text or audio into different languages for you. But similar to tools like Google Translate, these translations are not considered completely “fluent.” Generative AI can struggle with things like idiomatic phrases, context, and degree of formality.

Transactional communication

Academic writing can often involve transactional communication—messages that move the writing project forward. AI tools can quickly generate drafts of polite emails to professors or classmates, meeting agendas, project timelines, event promotions, etc. Review each of the results and refine them appropriately for your audiences and purposes.

Potential pitfalls

Information may be false

AI tools derive their responses by reassembling language in their data sets, most of which has been culled from the internet. As you learned long ago, not everything you read on the internet is true, so it follows that not everything culled and reassembled from the internet is true either. Beware of clearly written, but factually inaccurate or misleading responses from AI tools. Additionally, while they can appear to be “thinking,” they are literally assembling language–without human intelligence. They can produce information that seems plausible, but is in fact partly or entirely fabricated or fictional. The tendency for AI tools to invent information is sometimes referred to as “hallucinating.”

Citations and quotes may be invented

AI responses may include citations (especially if you prompt them to do so), but beware. While the citations may seem reasonable and look correctly formatted, they may, in fact, not exist or be incorrect. For example, the tools may invent an author, produce a book title that doesn’t exist, or incorrectly attribute language to an author who didn’t write the quote or wrote something quite different. Your instructors are conversant in the fields you are writing about and may readily identify these errors. Generative AI tools are not authoritative sources.

Responses may contain biases

Again, AI tools are drawing from vast swaths of language from their data sets–and everything and anything has been said there. Accordingly, the tools mimic and repeat distortions in ideas on any topic in which bias easily enters in. Consider and look for biases in responses generated by AI tools.

You risk violating academic integrity standards

When you prompt an AI tool, you may often receive a coherent, well-written—and sometimes tempting—response. Unless you have received explicit, written guidance from an instructor on use of AI-generated text, do not assume it is okay to copy and paste or paraphrase that language into your text—maybe at all. See your instructor’s syllabus and consult with them about how they authorize the use of AI tools and how they expect you to include citations for any content generated by the tool. The AI tools should help you to think and write, not think or write for you. You may find yourself violating the honor code if you are not thoughtful or careful in your use of any AI-generated material.

The tools consume personal or private information (text or images)

Do not input anything you prefer not to have widely shared into an AI generator. The tools take whatever you put into a prompt and incorporate it into their systems for others to use.

Your ideas may be changed unacceptably

When asked to paraphrase or polish a piece of writing, the tools can change the meaning. Be discerning and thorough in reviewing any generated responses to ensure the meaning captures and aligns with your own understanding.

A final note

Would you like to learn more about using AI in academic writing? Take a look at the modules in Carolina AI Literacy. Acquainting yourself with these tools may be important as your thinking and writing skills grow. While these tools are new and still under development, they may be essential tools for you to understand in your current academic life and in your career after you leave the university. Beginning to experiment with and develop an understanding of the tools at this stage may serve you well along the way.

Note: This tip sheet was created in July 2023. Generative AI technology is evolving quickly. We will update the document as the technology and university landscapes change.

You may reproduce it for non-commercial use if you use the entire handout and attribute the source: The Writing Center, University of North Carolina at Chapel Hill


How to Humanize AI-Generated Text

13th July 2023

The rapid advancements in AI writing tools such as Jasper and ChatGPT make it possible to generate high volumes of informative content in a matter of minutes. But does the quality match the quantity? The answer – not always. While this technology has several benefits, the content it produces doesn’t match the natural quality of writing produced by real-life writers.

So, if you want to better implement the content produced by writing tools, keep reading for strategies on how to improve upon and humanize AI-generated content.

What Is AI-Generated Content?

AI-generated content is any text produced with the assistance of AI writing technology. These tools use machine learning algorithms to analyze vast amounts of data and generate coherent and relevant content in response to users’ inquiries.

While AI-generated content has shown incredible potential and opened up new possibilities in various fields, it also raises ethical concerns, especially regarding misinformation. And since it’s written by a machine, the content usually needs a thorough edit for style and tone before it’s ready for publication.

Next, let’s review specific ways you can edit and add to machine-generated content so that it appeals to a wider audience.

Incorporate Natural Language

Think about how people naturally communicate in everyday speech and incorporate these elements into AI-generated text:

●  Use contractions or informal language where appropriate.

●  Use a mix of long and short sentences and vary the sentence structure.

●  Avoid unnecessarily technical language or complex terminology.

●  Draw on devices like analogies, metaphors, and idioms to emphasize your points and to relate to your audience.

●  Avoid excessive use of filler words like additionally, moreover, and in conclusion.

●  Eliminate repetition or redundant information.


To encourage participation and create a more conversational interaction, you can also include interactive elements in the content, such as questions or requests for feedback.

Reference Current Events

Topical references to pop culture or current events can provide context for your readers and make AI writing more relatable. If you do this, however, be mindful of your audience’s demographics (e.g., age, location, cultural background) to avoid confusion or potential misunderstandings.

Add Personality to Your Writing

One thing that technology can’t mimic is personality. Since AI-generated writing can sound somewhat monotone and robotic, give it life with a vibrant tone of voice. Depending on the type of writing and target audience, that voice could be funny, witty, friendly, professional, serious, etc. Whatever tone you use, try to implement it consistently, as an unreliable or inappropriate tone can make your writing seem unfocused and erratic.

Use Storytelling Techniques

AI tools produce straightforward answers to user input; they don’t elaborate on a subject or offer a unique perspective. To help illustrate the points produced by AI, humanize the content by incorporating narrative and storytelling elements, such as anecdotes or accounts of real-life events. And, when suitable, give your unique opinion or perspective on a topic to help build a connection with your audience.

Use the Active Voice

Use the active voice rather than the passive voice whenever possible. The active voice has a clearer, more direct, and more immediate tone and tends to be more engaging than the passive voice. It can also evoke stronger emotions in your readers. Attributing actions to specific subjects, rather than creating distance between subjects and their actions, humanizes the text and makes it easier for readers to empathize with the ideas presented.

Use Humor When Appropriate

Nothing distinguishes human writers from robots more than a subtle pun or nuanced, witty observation. While including humor in AI-generated writing can add a bit of an edge in certain contexts, be mindful of using it excessively or causing offense. For example, it’s probably fine to add a well-placed joke to a partially AI-generated blog post on your personal website, but you might want to think twice before using humor in a formal business proposal. 


Work in harmony with machines to produce relatable, natural-sounding content.

Expert Proofreading and Editing Services

If you want to incorporate AI-generated content into your writing process but want to avoid sounding robotic or unnatural, our editors can help. Our editing teams can humanize AI writing while sticking to your preferred style guide and desired tone of voice.

Learn more about our AI-generated content editing services, or send in your free sample of less than 500 words today and see for yourself.

If you’re a business needing AI-content humanizing services, schedule a call with us today so we can put together the perfect editing plan for your needs.



WriteHuman AI Detection Remover

Converting AI Writing Into Human Text: A How-To Guide

Discover the secrets to making AI text resonate with a human touch.

Introduction

With the rapid advancement of AI, the world of content creation is undergoing a significant transformation. AI-generated text, once a mere concept, is now a reality, playing a pivotal role in various industries, from marketing to customer support. However, while AI can mimic human writing, it often lacks the nuanced touch and emotional depth that characterize genuinely human writing.

This guide delves into the essential techniques for humanizing AI-generated text, transforming it from mechanical to engaging, and from generic to personalized. Whether you're a content creator, marketer, or just AI-curious, you'll find valuable insights on making your AI-generated content more relatable and authentic.

"The art of humanizing AI text lies not just in altering words, but in infusing the writing with the essence of human experience."

In the following sections, we'll explore the nature of AI-generated text, introduce a step-by-step process for humanizing it, highlight the tools that can assist in this endeavor, and discuss how to make AI writing undetectable.

Understanding the Basics of AI-Generated Text

At its core, AI-generated text is the product of machine learning algorithms, specifically natural language processing (NLP) models. These models are trained on vast datasets of human-written text, learning patterns, structures, and nuances of language. By analyzing this data, AI can generate text that mimics human writing styles.
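
To make the "learning patterns and predicting text" idea concrete, here is a toy next-word predictor in Python. It builds a bigram frequency table from a tiny sample corpus and returns the most common continuation; real models like GPT use neural networks over far longer contexts, so treat this purely as a conceptual miniature.

from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after this word."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (ties broken by first occurrence)
print(predict_next("sat"))  # -> 'on'

A model like this has no idea what a cat is; it only knows which words tended to follow which. That same limitation, at enormous scale, is what gives AI-generated text its characteristic patterns.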

Common characteristics of AI writing often include:

  • Structure: AI tends to follow standard grammatical rules with a clear beginning, middle, and end.
  • Tone: The tone of AI writing can vary based on its training, but it often lacks the subtle emotional variances present in human writing.
  • Vocabulary: AI-generated text often uses a diverse vocabulary but may lack the context-specific nuances that human writers employ.

Despite its advanced capabilities, AI-generated text often requires humanization. This is because AI, while proficient in language structure, cannot fully replicate the depth of human experience and emotional subtlety. Humanization enhances the relatability and authenticity of the text, transforming it from a mere arrangement of words into a narrative that resonates with human readers. Integrating human elements into AI text ensures that it connects with readers on a more personal and engaging level, which is crucial for effective communication in any form of writing.

The Humanization Process

The process of humanizing AI text involves a series of steps designed to infuse the text with human qualities that AI alone cannot replicate. This process is essential for creating content that is not only coherent and grammatically correct but also resonates with human readers on a deeper, more emotional level.

Step-by-Step Guide on Converting AI to Human Text

Step 1: Reading for Context and Tone

Begin by thoroughly reading the AI-generated text to understand its overall context and tone. Look for areas where the tone may not align perfectly with the intended message or audience, noting any adjustments needed.

Step 2: Adding Personal Touches and Nuances

Inject personal touches and nuanced elements that reflect human experiences and emotions. This could involve incorporating idiomatic expressions, anecdotes, or a conversational style that makes the text more relatable.

Step 3: Adjusting Vocabulary and Syntax

Alter the vocabulary and syntax to better suit the intended audience. This step involves refining word choices for appropriateness and impact, and adjusting sentence structures to enhance readability and flow.

Step 4: Implementing Emotional Depth and Relatability

Enhance the text with emotional depth and relatability. This can be achieved by adding elements that evoke empathy, humor, or other emotions relevant to the context of the content.

Step 5: Final Review and Polishing

The final step is a comprehensive review and polishing of the text. This is where the human writer fine-tunes the content, ensuring it reflects a natural, human tone and style, free from any lingering AI artifacts.


Tools and Technologies

In the journey of humanizing AI-generated text, various tools and technologies play a pivotal role. Among these, platforms like WriteHuman stand out for their efficiency and effectiveness in transforming AI text into human-like content. WriteHuman serves as the ultimate AI word humanizer.

Overview of AI humanizers like WriteHuman

WriteHuman and similar platforms are designed to convert AI to human text and apply a series of enhancements to make it more human-like. These tools utilize advanced algorithms to detect and modify elements in the text that are typically indicative of AI, such as repetitive phrasing, unusual word choices, or lack of emotional depth.

How AI Humanizers Work

These tools work by employing a combination of natural language processing techniques and human-like writing models. They analyze the text for structure, tone, and context, making adjustments to emulate the nuances of human writing. This process includes refining vocabulary, adding stylistic elements, and enhancing emotional resonance. The ultimate goal of humanizer tools is to serve as a powerful ChatGPT-to-human text converter.
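
Commercial humanizers do not publish their algorithms, so the following Python sketch is only a guess at the simplest layer of such a pipeline: swapping formulaic connectors for conversational ones and contracting stiff phrases. The substitution table is entirely hypothetical.

# A naive, hypothetical 'humanizing' pass; real tools are far more sophisticated.
SWAPS = {
    "Moreover,": "Plus,",
    "Additionally,": "Also,",
    "In conclusion,": "All in all,",
    "utilize": "use",
    "it is": "it's",
    "do not": "don't",
}

def naive_humanize(text):
    """Replace stereotypically AI-sounding phrasing with casual equivalents."""
    for formal, casual in SWAPS.items():
        text = text.replace(formal, casual)
    return text

print(naive_humanize("Moreover, it is wise to utilize short sentences."))
# -> "Plus, it's wise to use short sentences."

Even this trivial pass illustrates the principle: the tools target surface patterns that detection systems associate with machine output.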

The Role of Tools in Simplifying the Process

Tools like WriteHuman significantly simplify the humanization process. By automating the initial phases of text refinement, they allow content creators to focus on higher-level enhancements and personal touches. This not only saves time but also ensures a consistent baseline quality in the humanized text.

Benefits of Using Such Tools Over Manual Editing

Compared to manual editing, these tools offer several benefits:

  • Efficiency: They drastically reduce the time and effort required to transform AI text into human-like content.
  • Consistency: These tools maintain a consistent level of quality in humanization, which can be challenging to achieve manually.
  • Scalability: For large volumes of content, these tools provide a scalable solution that manual editing cannot match.

Overall, while manual editing has its place, the use of specialized tools like WriteHuman offers a more efficient, consistent, and scalable approach to humanizing AI-generated text.

Practical Applications and Examples

The humanization of AI-generated text finds its significance in a variety of real-world scenarios. The ability to transform AI writing into something that resonates on a human level is particularly crucial in areas such as marketing, customer service, and storytelling. Let's explore some of these applications.

Marketing

In marketing, the human touch in content can be the difference between engaging a potential customer and losing their interest. Humanized text feels more personal, trustworthy, and relatable, making it a powerful tool for crafting compelling marketing messages and campaigns.

Customer Service

Customer service interactions benefit greatly from humanization. AI-generated responses, when humanized, can provide customers with the feeling of a personalized and empathetic interaction, enhancing customer satisfaction and loyalty.

Storytelling

Storytelling, whether in literature, journalism, or other forms, relies heavily on emotional depth and relatability. Humanizing AI text ensures that the resulting narratives capture the essence of human experience, making them more engaging and impactful for the audience.

These examples illustrate just a few of the many scenarios where humanizing AI text plays a crucial role. By adding the human element to AI-generated content, it's possible to create a more meaningful and effective connection with the intended audience.

Best Practices and Tips

Effectively humanizing AI-generated text is both an art and a science. To achieve the best results, it's important to follow certain best practices and be aware of common pitfalls. Here are some expert tips to guide you through the process.

Expert Tips for Effectively Humanizing AI Text

  • Understand the Audience: Tailor the text to the preferences and expectations of your target audience. This involves using the appropriate tone, style, and vocabulary.
  • Emphasize Contextual Relevance: Ensure that the text is contextually relevant and aligns with the overall message or theme.
  • Inject Personality: Add personality to the text to make it more engaging and relatable. This could involve humor, storytelling elements, or a conversational tone.
  • Review and Revise: Always review the humanized text to catch any inconsistencies or areas that need further refinement.

Common Pitfalls to Avoid

  • Over-Editing: Avoid over-editing the text to the point where it loses its original intent or becomes too convoluted.
  • Ignoring the AI's Strengths: Leverage the strengths of AI, such as data accuracy and logical structuring, instead of completely overhauling its output.
  • Neglecting the Human Touch: Don't forget to add the human element, which is essential for creating a connection with the audience.

Maintaining a Balance Between AI Writing and Human Writing

Maintaining a balance between leveraging AI's efficiency and ensuring human authenticity is crucial. Utilize AI to handle the bulk of content generation, but always infuse the final output with human insights, emotions, and creativity. This balance ensures that the content benefits from the efficiency of AI while retaining the warmth and relatability of human writing.

Throughout this guide, we've explored the intricate process of humanizing AI-generated text. From understanding the basics of how AI creates text to delving into the specific steps of the humanization process, we've seen the vital role that human touch plays in transforming AI content. Tools and technologies like WriteHuman simplify this process, allowing for efficient and effective humanization.

The practical applications of humanized AI text in marketing, customer service, storytelling, and more, demonstrate its wide-reaching impact. By following the best practices and avoiding common pitfalls, we can harness the power of AI while maintaining the authenticity and relatability that only human writing can provide.

We encourage you to embrace the art of AI text humanization in your respective fields. Whether you're a content creator, marketer, or someone interested in the intersection of technology and language, the potential of humanized AI text is vast and largely untapped.

We invite you to try out the process of humanizing AI-generated text. Experiment with tools like WriteHuman, and discover how they can elevate your AI content to new heights of engagement and authenticity. Embrace the future of writing, where AI efficiency and human creativity combine to create truly compelling content. Convert AI to human text with WriteHuman.

How to Humanize ChatGPT Text

Humanizing ChatGPT text involves several steps. Start by reading the text to understand its context and tone. Then, add personal touches and nuances to make it sound more natural and less formal. Adjust the vocabulary and syntax to suit your audience, and infuse emotional depth to make it more relatable. Finally, review and refine the text to ensure it aligns with human writing styles. Tools like WriteHuman can also assist in this process, automating some of these steps for efficiency.

Can Humans Recognize ChatGPT's AI-Generated Text?

Yes, humans can often recognize ChatGPT's AI-generated text, especially if they are familiar with common characteristics of AI writing. These may include a certain formality in tone, structured and consistent grammar, and sometimes a lack of nuanced emotional expressions or personal anecdotes. However, with advancements in AI and the use of humanization tools, distinguishing AI-generated text from human writing is becoming increasingly challenging.

How to Make AI Writing Undetectable

Making AI writing undetectable involves a few key strategies:

  • Emphasize Natural Language: Modify the AI-generated text to use more conversational and natural language. This includes using colloquialisms and phrases that are commonly found in everyday speech.
  • Introduce Variability: Add variability in sentence structure and word choice. AI tends to follow certain patterns, so breaking these can make the text seem more human-written.
  • Inject Personality and Emotion: Incorporate personal anecdotes, opinions, and emotional expressions. AI often lacks the depth of human experience, so adding these elements can make a big difference.
  • Customize for the Audience: Tailor the text to the specific audience, including their interests, language style, and cultural references, which AI might not automatically consider.
  • Review and Revise: Carefully review and manually edit the text. This step is crucial for catching and changing any elements that might clearly indicate AI authorship.

Tools like WriteHuman can assist in this process by automatically applying some of these strategies, but a final human review is often necessary for the best results.
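
One way to check whether the variability advice above has taken effect is to measure "burstiness", i.e. how much sentence length fluctuates, since uniform sentence lengths are a typical machine tell. The Python sketch below computes the coefficient of variation of sentence lengths; there is no established threshold, so the interpretation in the comments is only indicative.

import re
import statistics

def burstiness(text):
    """Coefficient of variation (stdev / mean) of sentence lengths in words."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

flat = "AI is useful. AI is fast. AI is cheap. AI is here."
varied = "AI is useful. But speed, cost, and availability matter too, and rarely equally. It depends."
print(round(burstiness(flat), 2))    # 0.0 - every sentence is the same length
print(round(burstiness(varied), 2))  # much higher - lengths swing from short to long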


MIT Technology Review


How to spot AI-generated text

The internet is increasingly awash with text written by AI software. We need new tools to detect it.

By Melissa Heikkilä

""

This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

Since it was released in late November, ChatGPT has been used by over a million people. It has the AI community enthralled, and it is clear the internet is increasingly being flooded with AI-generated text. People are using it to come up with jokes, write children’s stories, and craft better emails. 

ChatGPT is OpenAI’s spin-off of its large language model GPT-3, which generates remarkably human-sounding answers to questions that it’s asked. The magic—and danger—of these large language models lies in the illusion of correctness. The sentences they produce look right—they use the right kinds of words in the correct order. But the AI doesn’t know what any of it means. These models work by predicting the most likely next word in a sentence. They haven’t a clue whether something is correct or false, and they confidently present information as true even when it is not.

In an already polarized, politically fraught online world, these AI tools could further distort the information we consume. If they are rolled out into the real world in real products, the consequences could be devastating. 

We’re in desperate need of ways to differentiate between human- and AI-written text in order to counter potential misuses of the technology, says Irene Solaiman, policy director at AI startup Hugging Face, who used to be an AI researcher at OpenAI and studied AI output detection for the release of GPT-3’s predecessor GPT-2. 

New tools will also be crucial to enforcing bans on AI-generated text and code, like the one recently announced by Stack Overflow, a website where coders can ask for help. ChatGPT can confidently regurgitate answers to software problems, but it’s not foolproof. Getting code wrong can lead to buggy and broken software, which is expensive and potentially chaotic to fix. 

A spokesperson for Stack Overflow says that the company’s moderators are “examining thousands of submitted community member reports via a number of tools including heuristics and detection models” but would not go into more detail. 

In reality, detecting AI-generated text is incredibly difficult, and the ban is likely almost impossible to enforce.

Today’s detection tool kit

There are various ways researchers have tried to detect AI-generated text. One common method is to use software to analyze different features of the text—for example, how fluently it reads, how frequently certain words appear, or whether there are patterns in punctuation or sentence length. 

“If you have enough text, a really easy cue is the word ‘the’ occurs too many times,” says Daphne Ippolito, a senior research scientist at Google Brain, the company’s research unit for deep learning. 

Because large language models work by predicting the next word in a sentence, they are more likely to use common words like “the,” “it,” or “is” instead of wonky, rare words. This is exactly the kind of text that automated detector systems are good at picking up, Ippolito and a team of researchers at Google found in research they published in 2019.
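
That cue is easy to approximate in code. The Python sketch below compares the share of very common function words in two snippets; the word list and any cut-off you might apply are illustrative assumptions, not the features Ippolito's detectors actually used.

COMMON = {"the", "it", "is", "a", "of", "and", "to", "in"}

def common_word_rate(text):
    """Fraction of tokens that come from a small set of high-frequency words."""
    words = text.lower().split()
    return sum(w in COMMON for w in words) / max(len(words), 1)

human = "ngl the coffee situation here is kinda tragic, brewed it myself lol"
machine = "The coffee is a beverage that is enjoyed in the morning and in the evening."
print(round(common_word_rate(human), 2))    # lower share of filler words
print(round(common_word_rate(machine), 2))  # higher share of 'the', 'is', 'in', ...

On its own this is a weak signal, which is exactly why such cues only work when there is enough text to analyze.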

But Ippolito’s study also showed something interesting: the human participants tended to think this kind of “clean” text looked better and contained fewer mistakes, and thus that it must have been written by a person. 

In reality, human-written text is riddled with typos and is incredibly variable, incorporating different styles and slang, while “language models very, very rarely make typos. They’re much better at generating perfect texts,” Ippolito says. 

“A typo in the text is actually a really good indicator that it was human written,” she adds. 

Large language models themselves can also be used to detect AI-generated text. One of the most successful ways to do this is to retrain the model on some texts written by humans, and others created by machines, so it learns to differentiate between the two, says Muhammad Abdul-Mageed, who is the Canada research chair in natural-language processing and machine learning at the University of British Columbia and has studied detection.
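
That retraining idea can be prototyped in a few lines. The scikit-learn pipeline below is my own minimal stand-in for the large neural classifiers researchers actually use, and the four-example "dataset" is a placeholder; a usable detector needs thousands of labeled texts from both sources.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: label 0 = human-written, 1 = machine-generated.
texts = [
    "honestly no clue why my code works now but im not touching it",
    "tbh the ending of that book wrecked me, didnt see it coming",
    "It is important to note that regular exercise provides numerous benefits.",
    "In conclusion, effective communication is a crucial skill in the workplace.",
]
labels = [0, 0, 1, 1]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Probability that a new text is machine-generated, according to this toy model.
print(detector.predict_proba(["It is important to note that sleep is beneficial."])[:, 1])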

Scott Aaronson, a computer scientist at the University of Texas on secondment as a researcher at OpenAI for a year, meanwhile, has been developing watermarks for longer pieces of text generated by models such as GPT-3—“an otherwise unnoticeable secret signal in its choices of words, which you can use to prove later that, yes, this came from GPT,” he writes in his blog. 

A spokesperson for OpenAI confirmed that the company is working on watermarks, and said its policies state that users should clearly indicate text generated by AI “in a way no one could reasonably miss or misunderstand.” 
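
OpenAI has not published its scheme, but the general shape of token-level watermarks, as described in academic "green list" proposals, can be sketched: a watermarking generator secretly prefers words from a pseudorandom set seeded by the preceding word, and the detector counts how often that preference shows up. Everything below (the hash seed, the 50/50 split, what would count as suspicious) is a toy assumption, not OpenAI's design.

import hashlib

def is_green(prev_word, word):
    """Pseudorandomly assign half of all word pairs to a 'green' set."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    """Share of adjacent word pairs that land in the green set."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, w) for p, w in pairs) / len(pairs)

# Ordinary text hovers near 0.5; a generator biased toward green words
# would push this fraction high enough to flag statistically.
print(green_fraction("the quick brown fox jumps over the lazy dog"))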

But these technical fixes come with big caveats. Most of them don’t stand a chance against the latest generation of AI language models, as they are built on GPT-2 or other earlier models. Many of these detection tools work best when there is a lot of text available; they will be less efficient in some concrete use cases, like chatbots or email assistants, which rely on shorter conversations and provide less data to analyze. And using large language models for detection also requires powerful computers, and access to the AI model itself, which tech companies don’t allow, Abdul-Mageed says. 

The bigger and more powerful the model, the harder it is to build AI models to detect what text is written by a human and what isn’t, says Solaiman. 

“What’s so concerning now is that [ChatGPT has] really impressive outputs. Detection models just can’t keep up. You’re playing catch-up this whole time,” she says. 

Training the human eye

There is no silver bullet for detecting AI-written text, says Solaiman. “A detection model is not going to be your answer for detecting synthetic text in the same way that a safety filter is not going to be your answer for mitigating biases,” she says. 

To have a chance of solving the problem, we’ll need improved technical fixes and more transparency around when humans are interacting with an AI, and people will need to learn to spot the signs of AI-written sentences. 

“What would be really nice to have is a plug-in to Chrome or to whatever web browser you’re using that will let you know if any text on your web page is machine generated,” Ippolito says.

Some help is already out there. Researchers at Harvard and IBM developed a tool called Giant Language Model Test Room (GLTR), which supports humans by highlighting passages that might have been generated by a computer program. 

But AI is already fooling us. Researchers at Cornell University found that people judged fake news articles generated by GPT-2 to be credible about 66% of the time.

Another study found that untrained humans were able to correctly spot text generated by GPT-3 only at a level consistent with random chance.  

The good news is that people can be trained to be better at spotting AI-generated text, Ippolito says. She built a game to test how many sentences a computer can generate before a player catches on that it’s not human, and found that people got gradually better over time. 

“If you look at lots of generative texts and you try to figure out what doesn’t make sense about it, you can get better at this task,” she says. One way is to pick up on implausible statements, like the AI saying it takes 60 minutes to make a cup of coffee.





Open access | Published: 30 October 2023

A large-scale comparison of human-written versus ChatGPT-generated essays

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch

Scientific Reports, volume 13, Article number: 18617 (2023)

Subjects: Computer science, Information technology

Abstract

ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models—both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher regarding quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.


Introduction

The massive uptake in the development and deployment of large-scale Natural Language Generation (NLG) systems in recent months has yielded an almost unprecedented worldwide discussion of the future of society. The ChatGPT service, which serves as a Web front-end to GPT-3.5 1 and GPT-4, was the fastest-growing service in history to break the 100 million user milestone in January and had 1 billion visits by February 2023 2 .

Driven by the upheaval that is particularly anticipated for education 3 and knowledge transfer for future generations, we conduct the first independent, systematic study of AI-generated language content that is typically dealt with in high-school education: argumentative essays, i.e. essays in which students discuss a position on a controversial topic by collecting and reflecting on evidence (e.g. ‘Should students be taught to cooperate or compete?’). Learning to write such essays is a crucial aspect of education, as students learn to systematically assess and reflect on a problem from different perspectives. Understanding the capability of generative AI to perform this task increases our understanding of the skills of the models, as well as of the challenges educators face when it comes to teaching this crucial skill. While there is a multitude of individual examples and anecdotal evidence for the quality of AI-generated content in this genre (e.g. 4 ) this paper is the first to systematically assess the quality of human-written and AI-generated argumentative texts across different versions of ChatGPT 5 . We use a fine-grained essay quality scoring rubric based on content and language mastery and employ a significant pool of domain experts, i.e. high school teachers across disciplines, to perform the evaluation. Using computational linguistic methods and rigorous statistical analysis, we arrive at several key findings:

AI models generate significantly higher-quality argumentative essays than the users of an essay-writing online forum frequented by German high-school students across all criteria in our scoring rubric.

ChatGPT-4 (ChatGPT web interface with the GPT-4 model) significantly outperforms ChatGPT-3 (ChatGPT web interface with the GPT-3.5 default model) with respect to logical structure, language complexity, vocabulary richness and text linking.

Writing styles between humans and generative AI models differ significantly: for instance, the GPT models use more nominalizations and have higher sentence complexity (signaling more complex, ‘scientific’, language), whereas the students make more use of modal and epistemic constructions (which tend to convey speaker attitude).

The linguistic diversity of the NLG models seems to be improving over time: while ChatGPT-3 still has a significantly lower linguistic diversity than humans, ChatGPT-4 has a significantly higher diversity than the students.

Our work goes significantly beyond existing benchmarks. While OpenAI’s technical report on GPT-4 6 presents some benchmarks, their evaluation lacks scientific rigor: it fails to provide vital information like the agreement between raters, does not report on details regarding the criteria for assessment or to what extent and how a statistical analysis was conducted for a larger sample of essays. In contrast, our benchmark provides the first (statistically) rigorous and systematic study of essay quality, paired with a computational linguistic analysis of the language employed by humans and two different versions of ChatGPT, offering a glance at how these NLG models develop over time. While our work is focused on argumentative essays in education, the genre is also relevant beyond education. In general, studying argumentative essays is one important aspect to understand how good generative AI models are at conveying arguments and, consequently, persuasive writing in general.

Related work

Natural language generation

The recent interest in generative AI models can be largely attributed to the public release of ChatGPT, a public interface in the form of an interactive chat based on the InstructGPT 1 model, more commonly referred to as GPT-3.5. In comparison to the original GPT-3 7 and other similar generative large language models based on the transformer architecture like GPT-J 8 , this model was not trained in a purely self-supervised manner (e.g. through masked language modeling). Instead, a pipeline that involved human-written content was used to fine-tune the model and improve the quality of the outputs to both mitigate biases and safety issues, as well as make the generated text more similar to text written by humans. Such models are referred to as Fine-tuned LAnguage Nets (FLANs). For details on their training, we refer to the literature 9 . Notably, this process was recently reproduced with publicly available models such as Alpaca 10 and Dolly (i.e. the complete models can be downloaded and not just accessed through an API). However, we can only assume that a similar process was used for the training of GPT-4 since the paper by OpenAI does not include any details on model training.

Testing of the language competency of large-scale NLG systems has only recently started. Cai et al. 11 show that ChatGPT reuses sentence structure, accesses the intended meaning of an ambiguous word, and identifies the thematic structure of a verb and its arguments, replicating human language use. Mahowald 12 compares ChatGPT’s acceptability judgments to human judgments on the Article + Adjective + Numeral + Noun construction in English. Dentella et al. 13 show that ChatGPT-3 fails to understand low-frequent grammatical constructions like complex nested hierarchies and self-embeddings. In another recent line of research, the structure of automatically generated language is evaluated. Guo et al. 14 show that in question-answer scenarios, ChatGPT-3 uses different linguistic devices than humans. Zhao et al. 15 show that ChatGPT generates longer and more diverse responses when the user is in an apparently negative emotional state.

Given that we aim to identify certain linguistic characteristics of human-written versus AI-generated content, we also draw on related work in the field of linguistic fingerprinting, which assumes that each human has a unique way of using language to express themselves, i.e. the linguistic means that are employed to communicate thoughts, opinions and ideas differ between humans. That these properties can be identified with computational linguistic means has been showcased across different tasks: the computation of a linguistic fingerprint allows to distinguish authors of literary works 16 , the identification of speaker profiles in large public debates 17 , 18 , 19 , 20 and the provision of data for forensic voice comparison in broadcast debates 21 , 22 . For educational purposes, linguistic features are used to measure essay readability 23 , essay cohesion 24 and language performance scores for essay grading 25 . Integrating linguistic fingerprints also yields performance advantages for classification tasks, for instance in predicting user opinion 26 , 27 and identifying individual users 28 .

Limitations of OpenAI's ChatGPT evaluations

OpenAI published a discussion of the model’s performance of several tasks, including Advanced Placement (AP) classes within the US educational system 6 . The subjects used in performance evaluation are diverse and include arts, history, English literature, calculus, statistics, physics, chemistry, economics, and US politics. While the models achieved good or very good marks in most subjects, they did not perform well in English literature. GPT-3.5 also experienced problems with chemistry, macroeconomics, physics, and statistics. While the overall results are impressive, there are several significant issues: firstly, the conflict of interest of the model’s owners poses a problem for the performance interpretation. Secondly, there are issues with the soundness of the assessment beyond the conflict of interest, which make the generalizability of the results hard to assess with respect to the models’ capability to write essays. Notably, the AP exams combine multiple-choice questions with free-text answers. Only the aggregated scores are publicly available. To the best of our knowledge, neither the generated free-text answers, their overall assessment, nor their assessment given specific criteria from the used judgment rubric are published. Thirdly, while the paper states that 1–2 qualified third-party contractors participated in the rating of the free-text answers, it is unclear how often multiple ratings were generated for the same answer and what was the agreement between them. This lack of information hinders a scientifically sound judgement regarding the capabilities of these models in general, but also specifically for essays. Lastly, the owners of the model conducted their study in a few-shot prompt setting, where they gave the models a very structured template as well as an example of a human-written high-quality essay to guide the generation of the answers. This further fine-tuning of what the models generate could have also influenced the output. The results published by the owners go beyond the AP courses which are directly comparable to our work and also consider other student assessments like Graduate Record Examinations (GREs). However, these evaluations suffer from the same problems with the scientific rigor as the AP classes.

Scientific assessment of ChatGPT

Researchers across the globe are currently assessing the individual capabilities of these models with greater scientific rigor. We note that due to the recency and speed of these developments, the hereafter discussed literature has mostly only been published as pre-prints and has not yet been peer-reviewed. In addition to the above issues concretely related to the assessment of the capabilities to generate student essays, it is also worth noting that there are likely large problems with the trustworthiness of evaluations, because of data contamination, i.e. because the benchmark tasks are part of the training of the model, which enables memorization. For example, Aiyappa et al. 29 find evidence that this is likely the case for benchmark results regarding NLP tasks. This complicates the effort by researchers to assess the capabilities of the models beyond memorization.

Nevertheless, the first assessment results are already available – though mostly focused on ChatGPT-3 and not yet ChatGPT-4. Closest to our work is a study by Yeadon et al. 30 , who also investigate ChatGPT-3 performance when writing essays. They grade essays generated by ChatGPT-3 for five physics questions based on criteria that cover academic content, appreciation of the underlying physics, grasp of subject material, addressing the topic, and writing style. For each question, ten essays were generated and rated independently by five researchers. While the sample size precludes a statistical assessment, the results demonstrate that the AI model is capable of writing high-quality physics essays, but that the quality varies in a manner similar to human-written essays.

Guo et al. 14 create a set of free-text question answering tasks based on data they collected from the internet, e.g. question answering from Reddit. The authors then sample thirty triplets of a question, a human answer, and a ChatGPT-3 generated answer and ask human raters to assess if they can detect which was written by a human, and which was written by an AI. While this approach does not directly assess the quality of the output, it serves as a Turing test 31 designed to evaluate whether humans can distinguish between human- and AI-produced output. The results indicate that humans are in fact able to distinguish between the outputs when presented with a pair of answers. Humans familiar with ChatGPT are also able to identify over 80% of AI-generated answers without seeing a human answer in comparison. However, humans who are not yet familiar with ChatGPT-3 are not capable of identifying AI-written answers about 50% of the time. Moreover, the authors also find that the AI-generated outputs are deemed to be more helpful than the human answers in slightly more than half of the cases. This suggests that the strong results from OpenAI’s own benchmarks regarding the capabilities to generate free-text answers generalize beyond the benchmarks.

There are, however, some indicators that the benchmarks may be overly optimistic in their assessment of the model’s capabilities. For example, Kortemeyer 32 conducts a case study to assess how well ChatGPT-3 would perform in a physics class, simulating the tasks that students need to complete as part of the course: answer multiple-choice questions, do homework assignments, ask questions during a lesson, complete programming exercises, and write exams with free-text questions. Notably, ChatGPT-3 was allowed to interact with the instructor for many of the tasks, allowing for multiple attempts as well as feedback on preliminary solutions. The experiment shows that ChatGPT-3’s performance is in many aspects similar to that of the beginning learners and that the model makes similar mistakes, such as omitting units or simply plugging in results from equations. Overall, the AI would have passed the course with a low score of 1.5 out of 4.0. Similarly, Kung et al. 33 study the performance of ChatGPT-3 in the United States Medical Licensing Exam (USMLE) and find that the model performs at or near the passing threshold. Their assessment is a bit more optimistic than Kortemeyer’s as they state that this level of performance, comprehensible reasoning and valid clinical insights suggest that models such as ChatGPT may potentially assist human learning in clinical decision making.

Frieder et al. 34 evaluate the capabilities of ChatGPT-3 in solving graduate-level mathematical tasks. They find that while ChatGPT-3 seems to have some mathematical understanding, its level is well below that of an average student and in most cases is not sufficient to pass exams. Yuan et al. 35 consider the arithmetic abilities of language models, including ChatGPT-3 and ChatGPT-4. They find that they exhibit the best performance among other currently available language models (incl. Llama 36 , FLAN-T5 37 , and Bloom 38 ). However, the accuracy of basic arithmetic tasks is still only at 83% when considering correctness to the degree of \(10^{-3}\) , i.e. such models are still not capable of functioning reliably as calculators. In a slightly satiric, yet insightful take, Spencer et al. 39 assess how a scientific paper on gamma-ray astrophysics would look like, if it were written largely with the assistance of ChatGPT-3. They find that while the language capabilities are good and the model is capable of generating equations, the arguments are often flawed and the references to scientific literature are full of hallucinations.

The general reasoning skills of the models may also not be at the level expected from the benchmarks. For example, Cherian et al. 40 evaluate how well ChatGPT-3 performs on eleven puzzles that second graders should be able to solve and find that ChatGPT is only able to solve them on average in 36.4% of attempts, whereas the second graders achieve a mean of 60.4%. However, their sample size is very small and the problem was posed as a multiple-choice question answering problem, which cannot be directly compared to the NLG we consider.

Research gap

Within this article, we address an important part of the current research gap regarding the capabilities of ChatGPT (and similar technologies), guided by the following research questions:

RQ1: How good is ChatGPT based on GPT-3 and GPT-4 at writing argumentative student essays?

RQ2: How do AI-generated essays compare to essays written by students?

RQ3: What are linguistic devices that are characteristic of student versus AI-generated content?

We study these aspects with the help of a large group of teaching professionals who systematically assess a large corpus of student essays. To the best of our knowledge, this is the first large-scale, independent scientific assessment of ChatGPT (or similar models) of this kind. Answering these questions is crucial to understanding the impact of ChatGPT on the future of education.

Materials and methods

The essay topics originate from a corpus of argumentative essays in the field of argument mining 41 . Argumentative essays require students to think critically about a topic and use evidence to establish a position on the topic in a concise manner. The corpus features essays for 90 topics from Essay Forum 42 , an active community that provides writing feedback on different kinds of text and is frequented by high-school students seeking feedback from native speakers on their essay-writing capabilities. Information about the age of the writers is not available, but the topics indicate that the essays were written in grades 11–13, meaning the authors were likely at least 16. Topics range from ‘Should students be taught to cooperate or to compete?’ to ‘Will newspapers become a thing of the past?’. In the corpus, each topic features one human-written essay uploaded and discussed in the forum. The students who wrote the essays are not native speakers. The average length of these essays is 19 sentences with 388 tokens (an average of 2,089 characters); these will be termed ‘student essays’ in the remainder of the paper.

For the present study, we use the topics from Stab and Gurevych 41 and prompt ChatGPT with ‘Write an essay with about 200 words on “[ topic ]”’ to receive automatically-generated essays from the ChatGPT-3 and ChatGPT-4 versions from 22 March 2023 (‘ChatGPT-3 essays’, ‘ChatGPT-4 essays’). No additional prompts for getting the responses were used, i.e. the data was created with a basic prompt in a zero-shot scenario. This is in contrast to the benchmarks by OpenAI, who used an engineered prompt in a few-shot scenario to guide the generation of essays. We note that we decided to ask for 200 words because we noticed a tendency to generate essays that are longer than the desired length by ChatGPT. A prompt asking for 300 words typically yielded essays with more than 400 words. Thus, using the shorter length of 200, we prevent a potential advantage for ChatGPT through longer essays, and instead err on the side of brevity. Similar to the evaluations of free-text answers by OpenAI, we did not consider multiple configurations of the model due to the effort required to obtain human judgments. For the same reason, our data is restricted to ChatGPT and does not include other models available at that time, e.g. Alpaca. We use the browser versions of the tools because we consider this to be a more realistic scenario than using the API. Table 1 below shows the core statistics of the resulting dataset. Supplemental material S1 shows examples for essays from the data set.
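
The authors used the browser interface rather than the API, so the following Python sketch is only an illustration of how one could script comparable zero-shot generations; the client library, model identifier, and default settings here are assumptions, and API outputs may differ from the web versions the study sampled.

from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_essay(topic, model="gpt-4"):
    """Issue the paper's zero-shot prompt for a ~200-word essay."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f'Write an essay with about 200 words on "{topic}"',
        }],
    )
    return response.choices[0].message.content

print(generate_essay("Should students be taught to cooperate or to compete?"))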

Annotation study

Study participants

The participants had registered for a two-hour online training entitled ‘ChatGPT – Challenges and Opportunities’ conducted by the authors of this paper as a means to provide teachers with some of the technological background of NLG systems in general and ChatGPT in particular. Only teachers permanently employed at secondary schools were allowed to register for this training. Focusing on these experts alone allows us to receive meaningful results as those participants have a wide range of experience in assessing students’ writing. A total of 139 teachers registered for the training, 129 of them teach at grammar schools, and only 10 teachers hold a position at other secondary schools. About half of the registered teachers (68 teachers) have been in service for many years and have successfully applied for promotion. For data protection reasons, we do not know the subject combinations of the registered teachers. We only know that a variety of subjects are represented, including languages (English, French and German), religion/ethics, and science. Supplemental material S5 provides some general information regarding German teacher qualifications.

The training began with an online lecture followed by a discussion phase. Teachers were given an overview of language models and basic information on how ChatGPT was developed. After about 45 minutes, the teachers received both a written and an oral explanation of the questionnaire at the core of our study (see Supplementary material S3) and were informed that they had 30 minutes to finish the study tasks. The explanation included information on how the data was obtained, why we collect the self-assessment, how we chose the criteria for the rating of the essays, the overall goal of our research, and a walk-through of the questionnaire. Participation in the questionnaire was voluntary and did not affect the awarding of a training certificate. We further informed participants that all data was collected anonymously and that we would have no way of identifying who participated in the questionnaire. We orally informed participants that they consent to the use of the provided ratings for our research by participating in the survey.

Once these instructions were provided orally and in writing, the link to the online form was given to the participants. The online form was running on a local server that did not log any information that could identify the participants (e.g. IP address) to ensure anonymity. As per instructions, consent for participation was given by using the online form. Due to the full anonymity, we could by definition not document who exactly provided the consent. This was implemented as further insurance that non-participation could not possibly affect being awarded the training certificate.

About 20% of the training participants did not take part in the questionnaire study, the remaining participants consented based on the information provided and participated in the rating of essays. After the questionnaire, we continued with an online lecture on the opportunities of using ChatGPT for teaching as well as AI beyond chatbots. The study protocol was reviewed and approved by the Research Ethics Committee of the University of Passau. We further confirm that our study protocol is in accordance with all relevant guidelines.

Questionnaire

The questionnaire consists of three parts: first, a brief self-assessment regarding the English skills of the participants which is based on the Common European Framework of Reference for Languages (CEFR) 43 . We have six levels ranging from ‘comparable to a native speaker’ to ‘some basic skills’ (see supplementary material S3 ). Then each participant was shown six essays. The participants were only shown the generated text and were not provided with information on whether the text was human-written or AI-generated.

The questionnaire covers the seven categories relevant for essay assessment shown below (for details see supplementary material S3 ):

Topic and completeness

Logic and composition

Expressiveness and comprehensiveness

Language mastery

Complexity

Vocabulary and text linking

Language constructs

These categories are used as guidelines for essay assessment 44 established by the Ministry for Education of Lower Saxony, Germany. For each criterion, a seven-point Likert scale with scores from zero to six is defined, where zero is the worst score (e.g. no relation to the topic) and six is the best score (e.g. addressed the topic to a special degree). The questionnaire included a written description as guidance for the scoring.

After rating each essay, the participants were also asked to self-assess their confidence in the ratings. We used a five-point Likert scale based on the criteria for the self-assessment of peer-review scores from the Association for Computational Linguistics (ACL). Once a participant finished rating the six essays, they were shown a summary of their ratings, as well as the individual ratings for each of their essays and the information on how the essay was generated.

Computational linguistic analysis

In order to further explore and compare the quality of the essays written by students and ChatGPT, we consider the six following linguistic characteristics: lexical diversity, sentence complexity, nominalization, presence of modals, epistemic and discourse markers. Those are motivated by previous work: Weiss et al. 25 observe the correlation between measures of lexical, syntactic and discourse complexities to the essay gradings of German high-school examinations while McNamara et al. 45 explore cohesion (indicated, among other things, by connectives), syntactic complexity and lexical diversity in relation to the essay scoring.

Lexical diversity

We identify vocabulary richness by using a well-established measure of textual lexical diversity (MTLD) 46 which is often used in the field of automated essay grading 25 , 45 , 47 . It takes into account the number of unique words but unlike the best-known measure of lexical diversity, the type-token ratio (TTR), it is not as sensitive to the difference in the length of the texts. In fact, Koizumi and In’nami 48 find it to be least affected by the differences in the length of the texts compared to some other measures of lexical diversity. This is relevant to us due to the difference in average length between the human-written and ChatGPT-generated essays.
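
MTLD itself is compact enough to sketch: scan the tokens, and whenever the running type-token ratio of the current segment drops to the standard 0.72 threshold, close a "factor" and start a new one; the score is tokens divided by factors, averaged over a forward and a backward pass. The Python below is a bare-bones rendering of that published procedure and skips the careful tokenization a real grading pipeline would need.

def mtld_pass(tokens, threshold=0.72):
    """One directional MTLD pass: count TTR 'factors' along the token stream."""
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:
            factors += 1
            types, count = set(), 0
    if count:  # partial credit for the unfinished final segment
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors else 0.0

def mtld(text):
    tokens = text.lower().split()
    return (mtld_pass(tokens) + mtld_pass(tokens[::-1])) / 2

print(round(mtld("the cat sat on the mat while the dog sat on the log"), 1))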

Syntactic complexity

We use two measures in order to evaluate the syntactic complexity of the essays. One is based on the maximum depth of the sentence dependency tree which is produced using the spaCy 3.4.2 dependency parser 49 (‘Syntactic complexity (depth)’). For the second measure, we adopt an approach similar in nature to the one by Weiss et al. 25 who use clause structure to evaluate syntactic complexity. In our case, we count the number of conjuncts, clausal modifiers of nouns, adverbial clause modifiers, clausal complements, clausal subjects, and parataxes (‘Syntactic complexity (clauses)’). The supplementary material in S2 shows the difference between sentence complexity based on two examples from the data.

Nominalization is a common feature of a more scientific style of writing 50 and is used as an additional measure for syntactic complexity. In order to explore this feature, we count occurrences of nouns with suffixes such as ‘-ion’, ‘-ment’, ‘-ance’ and a few others which are known to transform verbs into nouns.
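
Both syntactic measures and the nominalization count can be approximated with spaCy along the lines the authors describe. The clause-level dependency labels below follow their list; the suffix inventory beyond ‘-ion’, ‘-ment’ and ‘-ance’ is my guess, since the paper only says ‘a few others’.

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed

CLAUSE_DEPS = {"conj", "acl", "advcl", "ccomp", "csubj", "parataxis"}
NOMINAL_SUFFIXES = ("ion", "ment", "ance", "ence", "ity")  # partly assumed

def tree_depth(token):
    """Depth of the dependency subtree rooted at this token."""
    return 1 + max((tree_depth(c) for c in token.children), default=0)

def complexity_features(text):
    doc = nlp(text)
    return {
        "max_depth": max(tree_depth(sent.root) for sent in doc.sents),
        "clauses": sum(tok.dep_ in CLAUSE_DEPS for tok in doc),
        "nominalizations": sum(
            tok.pos_ == "NOUN" and tok.text.lower().endswith(NOMINAL_SUFFIXES)
            for tok in doc
        ),
    }

print(complexity_features(
    "The implementation of the assessment, which the committee proposed, caused disagreement."
))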

Semantic properties

Both modals and epistemic markers signal the commitment of the writer to their statement. We identify modals using the POS-tagging module provided by spaCy as well as a list of epistemic expressions of modality, such as ‘definitely’ and ‘potentially’, also used in other approaches to identifying semantic properties 51 . For epistemic markers we adopt an empirically-driven approach and utilize the epistemic markers identified in a corpus of dialogical argumentation by Hautli-Janisz et al. 52 . We consider expressions such as ‘I think’, ‘it is believed’ and ‘in my opinion’ to be epistemic.

Discourse properties

Discourse markers can be used to measure the coherence quality of a text. This has been explored by Somasundaran et al. 53 who use discourse markers to evaluate the story-telling aspect of student writing while Nadeem et al. 54 incorporated them in their deep learning-based approach to automated essay scoring. In the present paper, we employ the PDTB list of discourse markers 55 which we adjust to exclude words that are often used for purposes other than indicating discourse relations, such as ‘like’, ‘for’, ‘in’ etc.
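
All three marker-based measures reduce to counting matches against a POS tag or a phrase list and normalizing by text length. In the sketch below, the short epistemic and discourse lists are tiny stand-ins for the full inventories the authors take from Hautli-Janisz et al. and the PDTB; only the counting logic reflects the described method.

import spacy

nlp = spacy.load("en_core_web_sm")

# Tiny stand-ins for the full marker inventories used in the paper.
EPISTEMIC = ["i think", "in my opinion", "it is believed", "definitely", "potentially"]
DISCOURSE = ["however", "therefore", "moreover", "on the other hand", "as a result"]

def marker_rates(text):
    """Per-token rates of modals, epistemic markers, and discourse markers."""
    doc = nlp(text)
    lowered = doc.text.lower()
    n = max(len(doc), 1)
    return {
        "modals": sum(tok.tag_ == "MD" for tok in doc) / n,  # Penn tag for modals
        "epistemic": sum(lowered.count(m) for m in EPISTEMIC) / n,
        "discourse": sum(lowered.count(m) for m in DISCOURSE) / n,
    }

print(marker_rates("I think we should definitely act; however, the plan could fail."))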

Statistical methods

We use a within-subjects design for our study. Each participant was shown six randomly selected essays. Results were submitted to the survey system after each essay was completed, in case participants ran out of time and did not finish scoring all six essays. Cronbach’s \(\alpha\) 56 allows us to determine the inter-rater reliability for the rating criterion and data source (human, ChatGPT-3, ChatGPT-4) in order to understand the reliability of our data not only overall, but also for each data source and rating criterion. We use two-sided Wilcoxon-rank-sum tests 57 to confirm the significance of the differences between the data sources for each criterion. We use the same tests to determine the significance of the linguistic characteristics. This results in three comparisons (human vs. ChatGPT-3, human vs. ChatGPT-4, ChatGPT-3 vs. ChatGPT-4) for each of the seven rating criteria and each of the seven linguistic characteristics, i.e. 42 tests. We use the Holm-Bonferroni method 58 for the correction for multiple tests to achieve a family-wise error rate of 0.05. We report the effect size using Cohen’s d 59 . While our data is not perfectly normal, it also does not have severe outliers, so we prefer the clear interpretation of Cohen’s d over the slightly more appropriate, but less accessible non-parametric effect size measures. We report point plots with estimates of the mean scores for each data source and criterion, incl. the 95% confidence interval of these mean values. The confidence intervals are estimated in a non-parametric manner based on bootstrap sampling. We further visualize the distribution for each criterion using violin plots to provide a visual indicator of the spread of the data (see Supplementary material S4 ).
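
The test battery maps directly onto scipy plus a few lines of arithmetic. The sketch below runs the three pairwise comparisons for one criterion on synthetic ratings (the means loosely echo the scale of the reported scores but are not the paper's data), applies a simplified Holm-Bonferroni step, and reports Cohen's d.

import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Synthetic stand-ins for ratings of one criterion on the 0-6 scale.
human = rng.normal(3.9, 1.1, 200)
gpt3 = rng.normal(5.0, 1.0, 200)
gpt4 = rng.normal(5.3, 1.0, 200)

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation (equal group sizes)."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / pooled

pairs = {
    "human vs ChatGPT-3": (human, gpt3),
    "human vs ChatGPT-4": (human, gpt4),
    "ChatGPT-3 vs ChatGPT-4": (gpt3, gpt4),
}
pvals = {name: ranksums(a, b).pvalue for name, (a, b) in pairs.items()}

# Holm-Bonferroni: compare the i-th smallest p-value against alpha / (m - i).
# (A full Holm procedure stops at the first non-rejection; omitted for brevity.)
m, alpha = len(pvals), 0.05
for i, (name, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
    a, b = pairs[name]
    print(f"{name}: p={p:.2e}, d={cohens_d(a, b):.2f}, reject={p < alpha / (m - i)}")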

Further, we use the self-assessment of the English skills and confidence in the essay ratings as confounding variables. Through this, we determine if ratings are affected by the language skills or confidence, instead of the actual quality of the essays. We control for the impact of these by measuring Pearson’s correlation coefficient r 60 between the self-assessments and the ratings. We also determine whether the linguistic features are correlated with the ratings as expected. The sentence complexity (both tree depth and dependency clauses), as well as the nominalization, are indicators of the complexity of the language. Similarly, the use of discourse markers should signal a proper logical structure. Finally, a large lexical diversity should be correlated with the ratings for the vocabulary. Same as above, we measure Pearson’s r . We use a two-sided test for the significance based on a \(\beta\) -distribution that models the expected correlations as implemented by scipy 61 . Same as above, we use the Holm-Bonferroni method to account for multiple tests. However, we note that it is likely that all—even tiny—correlations are significant given our amount of data. Consequently, our interpretation of these results focuses on the strength of the correlations.

Our statistical analysis of the data is implemented in Python. We use pandas 1.5.3 and numpy 1.24.2 for the processing of data, pingouin 0.5.3 for the calculation of Cronbach’s \(\alpha\), scipy 1.10.1 for the Wilcoxon-rank-sum tests and Pearson’s r, and seaborn 0.12.2 for the generation of plots, incl. the calculation of error bars that visualize the confidence intervals.

Results

Out of the 111 teachers who completed the questionnaire, 108 rated all six essays, one rated five essays, one rated two essays, and one rated only one essay. This results in 656 ratings for 270 essays (90 topics for each essay type: human-, ChatGPT-3-, and ChatGPT-4-generated), with three ratings for 121 essays, two ratings for 144 essays, and one rating for five essays. The inter-rater agreement is consistently excellent ( \(\alpha >0.9\) ), with the exception of language mastery, where the agreement is still good ( \(\alpha =0.89\) , see Table 2). Further, the correlation analysis depicted in supplementary material S4 shows weak positive correlations ( \(r \in [0.11, 0.28]\) ) between the self-assessments for English skills and for confidence in the ratings on the one hand, and the actual ratings on the other. Overall, this indicates that our ratings are reliable estimates of the actual quality of the essays, with a potential small tendency for raters with higher confidence and better language skills to give higher ratings, independent of the data source.

Table 2 and supplementary material S4 characterize the distribution of the ratings for the essays, grouped by data source. We observe a clear order of the mean values for all criteria: students receive the worst ratings, ChatGPT-3 ranks in the middle, and ChatGPT-4 performs best. We further observe that the standard deviations are fairly consistent and slightly larger than one, i.e. the spread is similar for all ratings and essays. This is further supported by the visual analysis of the violin plots.

The statistical analysis of the ratings reported in Table 4 shows that the differences between the human-written essays and those generated by both ChatGPT models are significant. The effect sizes for human versus ChatGPT-3 essays are between 0.52 and 1.15, i.e. medium ( \(d \in [0.5, 0.8)\) ) to large ( \(d \in [0.8, 1.2)\) ) effects. The smallest effects are observed for expressiveness and complexity, i.e. the differences between humans and ChatGPT-3 are smallest when it comes to the overall comprehensiveness and the complexity of the sentence structures. The difference in language mastery, on the other hand, is larger than all other differences, which indicates that humans are more prone to making mistakes when writing than the NLG models. The magnitude of the differences between humans and ChatGPT-4 is larger, with effect sizes between 0.88 and 1.43, i.e. large to very large ( \(d \in [1.2, 2)\) ) effects. As with ChatGPT-3, the differences are smallest for expressiveness and complexity and largest for language mastery. Please note that the difference in language mastery between humans and both GPT models does not mean that the humans have low scores for language mastery (M=3.90), but rather that the NLG models have exceptionally high scores (M=5.03 for ChatGPT-3, M=5.25 for ChatGPT-4).

When we consider the differences between the two GPT models, we observe that while ChatGPT-4 has consistently higher mean values for all criteria, only the differences for logic and composition, vocabulary and text linking, and complexity are significant. The effect sizes are between 0.45 and 0.5, i.e. small ( \(d \in [0.2, 0.5)\) ) to medium. Thus, while GPT-4 seems to be a general improvement over GPT-3.5, the only clear indicators of this are a better and clearer logical composition and more complex writing with a more diverse vocabulary.

We also observe significant differences in the distribution of linguistic characteristics between all three groups (see Table 3). Sentence complexity (depth) is the only category without a significant difference both between humans and ChatGPT-3 and between ChatGPT-3 and ChatGPT-4. There is also no significant difference in the use of discourse markers between humans and ChatGPT-3. The magnitude of the effects varies considerably, between 0.39 and 1.93, i.e. between small ( \(d \in [0.2, 0.5)\) ) and very large. However, in contrast to the ratings, there is no clear tendency regarding the direction of the differences. For instance, while the ChatGPT models write more complex sentences and use more nominalizations, humans tend to use more modals and epistemic markers instead. The lexical diversity of humans is higher than that of ChatGPT-3 but lower than that of ChatGPT-4. And while there is no difference in the use of discourse markers between humans and ChatGPT-3, ChatGPT-4 uses significantly fewer discourse markers.

We detect the expected positive correlations between the complexity ratings and the linguistic markers for sentence complexity ( \(r=0.16\) for depth, \(r=0.19\) for clauses) and nominalizations ( \(r=0.22\) ). However, we observe a negative correlation between the logic ratings and the discourse markers ( \(r=-0.14\) ), which counters our intuition that more frequent use of discourse indicators makes a text more logically coherent. This is, however, in line with previous work: McNamara et al. 45 also find no indication that cohesion indices such as discourse connectives differ between high- and low-proficiency essays. Finally, we observe the expected positive correlation between the ratings for the vocabulary and the lexical diversity ( \(r=0.12\) ). All observed correlations are significant. However, we note that all of these correlations are weak and that the significance itself should not be over-interpreted due to the large sample size.

Discussion

Our results provide clear answers to the first two research questions, which consider the quality of the generated essays: ChatGPT performs well at writing argumentative student essays and significantly outperforms the human-written essays. The ChatGPT-4 model has (at least) a large effect and is on average about one point better than humans on a seven-point Likert scale.

Regarding the third research question, we find that there are significant linguistic differences between human- and AI-generated content. The AI-generated essays are highly structured, which is reflected, for instance, in the identical beginnings of the concluding sections of all ChatGPT essays (‘In conclusion, [...]’). The initial sentences of each essay are also very similar, starting with a general statement using the main concepts of the essay topics. Although this corresponds to the general structure that is sought after for argumentative essays, it is striking how rigidly the ChatGPT models realize it, whereas the human-written essays follow the guideline more loosely on the linguistic surface. Moreover, the linguistic fingerprint has the counter-intuitive property that the use of discourse markers is negatively correlated with logical coherence. We believe that this might be due to the rigid structure of the generated essays: instead of using discourse markers, the AI models provide a clear logical structure by separating the different arguments into paragraphs, thereby reducing the need for discourse markers.

Our data also shows that hallucinations are not a problem in the setting of argumentative essay writing: the essay topics are not really about factual correctness, but rather about argumentation and critical reflection on general concepts, which seem to be contained within the knowledge of the AI models. The stochastic nature of the language generation is well suited for this kind of task, as the different plausible arguments can be seen as samples from all available arguments for a topic. Nevertheless, a more systematic study of the argumentative structures is needed to better understand the differences in argumentation between human-written and ChatGPT-generated essays. Moreover, we cannot rule out that subtle hallucinations were overlooked during the ratings. There are also essays with low ratings for the criteria related to factual correctness, indicating that there might be cases where the AI models still have problems, even if they are, on average, better than the students.

One issue with evaluations of recent large language models is the impact of tainted data, i.e. benchmark data that was part of the training data. While it is certainly possible that the essays that Stab and Gurevych 41 sourced from the internet were part of the training data of the GPT models, the proprietary nature of the model training means that we cannot confirm this. However, we note that the generated essays did not resemble the corpus of human essays at all. Moreover, the topics of the essays are general in the sense that any human should be able to reason and write about them, just by understanding concepts like ‘cooperation’. Consequently, a taint on these general topics, i.e. the fact that they might be present in the training data, is not only possible but actually expected and unproblematic: it relates to the capability of the models to learn about concepts, rather than the memorization of specific task solutions.

Limitations

While we did everything we could to ensure a sound construct and high validity of our study, certain issues may still affect our conclusions. Most importantly, neither the writers of the essays nor their raters were native English speakers. However, the students purposefully used a forum for English writing frequented by native speakers to ensure the language and content quality of their essays. This indicates that the resulting essays are likely above average for non-native speakers, as they went through at least one round of revisions with the help of native speakers. The teachers were informed that part of the training would be in English, to prevent registrations from people without English language skills. Moreover, the self-assessment of the language skills was only weakly correlated with the ratings, indicating that this threat to the soundness of our results is low. While we cannot definitively rule out that our results would fail to reproduce with other human raters, the high inter-rater agreement indicates that this is unlikely.

However, our reliance on essays written by non-native speakers affects the external validity and the generalizability of our results. It is certainly possible that native-speaking students would perform better on the criteria related to language skills, though it is unclear by how much. However, the language skills were particular strengths of the AI models, so it is still reasonable to conclude that the AI models would perform at least comparably to native speakers, and possibly still better, just with a smaller gap. While we cannot rule out a difference for the content-related criteria, we also see no strong argument why native speakers should have better arguments than non-native speakers. Thus, while our results might not fully translate to native speakers, we see no reason why the content-related aspects should differ. Further, our results were obtained from high-school-level essays. Native and non-native speakers with higher education degrees, or experts in a field, would likely perform better, such that the performance gap between the AI models and humans would likely be smaller in such a setting.

We further note that the essay topics may not be an unbiased sample. While Stab and Gurevych 41 randomly sampled the essays from the writing feedback section of an essay forum, it is unclear whether the essays posted there are representative of the general population of essay topics. Nevertheless, we believe this threat is fairly low, because our results are consistent and do not seem to be influenced by particular topics. Further, we cannot conclude with certainty how our results generalize beyond ChatGPT-3 and ChatGPT-4 to similar models like Bard ( https://bard.google.com/?hl=en ), Alpaca, and Dolly. The results for the linguistic characteristics are especially hard to predict. However, since, to the best of our knowledge and given the proprietary nature of some of these models, the general approach behind these models is similar, the trends for essay quality should hold for models of comparable size and training procedure.

Finally, we want to note that progress in generative AI is currently extremely fast, and we are studying moving targets: ChatGPT-3.5 and ChatGPT-4 today are already not the same models we studied. Due to a lack of transparency regarding the specific incremental changes, we cannot know or predict how this might affect our results.

Conclusion

Our results provide a strong indication that the fear many teaching professionals have is warranted: the way students do homework and teachers assess it needs to change in a world of generative AI models. Our results show that non-native-speaking students who want to maximize their essay grades could easily do so by relying on AI models like ChatGPT. The very strong performance of the AI models indicates that this might also be the case for native speakers, though the difference in language skills is probably smaller. However, this is not and cannot be the goal of education. Consequently, educators need to change how they approach homework. Instead of just assigning and grading essays, we need to reflect more on the output of AI tools regarding its reasoning and correctness. AI models need to be seen as an integral part of education, but one which requires careful reflection and training of critical thinking skills.

Furthermore, teachers need to adapt their strategies for teaching writing skills: as with the use of calculators, it is necessary to reflect critically with students on when and how to use these tools. For instance, constructivists 62 argue that learning is enhanced when students actively design and create unique artifacts themselves. In the present case this means that, in the long term, educational objectives may need to be adjusted. This is analogous to teaching good arithmetic skills to younger students and then allowing and encouraging them to use calculators freely in later stages of education. Similarly, once a sound level of literacy has been achieved, strongly integrating AI models into lesson plans may no longer run counter to reasonable learning goals.

In terms of shedding light on the quality and structure of AI-generated essays, this paper makes an important contribution by offering an independent, large-scale, and statistically sound account of essay quality, comparing human-written and AI-generated texts. By comparing different versions of ChatGPT, we also offer a glimpse into the development of these models over time in terms of their linguistic properties and the quality they exhibit. Our results show that while the language generated by ChatGPT is considered very good by humans, there are also notable structural differences, e.g. in the use of discourse markers. This demonstrates that an in-depth consideration is required not only of the capabilities of generative AI models (i.e. which tasks they can be used for), but also of the language they generate. For example, if we read many AI-generated texts that use fewer discourse markers, it raises the question of whether and how this would affect our own use of discourse markers. Understanding how AI-generated texts differ from human-written ones enables us to look for these differences, to reason about their potential impact, and to study and possibly mitigate this impact.

Data availability

The datasets generated and/or analysed during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.8343644 .

Code availability

All materials are available online in the form of a replication package that contains the data and the analysis code, https://doi.org/10.5281/zenodo.8343644 .

References

Ouyang, L. et al. Training language models to follow instructions with human feedback (2022). arXiv:2203.02155.

Ruby, D. 30+ detailed ChatGPT statistics: users & facts (Sep 2023). https://www.demandsage.com/chatgpt-statistics/ (2023). Accessed 09 June 2023.

Leahy, S. & Mishra, P. TPACK and the Cambrian explosion of AI. In Society for Information Technology & Teacher Education International Conference , (ed. Langran, E.) 2465–2469 (Association for the Advancement of Computing in Education (AACE), 2023).

Ortiz, S. Need an AI essay writer? Here’s how ChatGPT (and other chatbots) can help. https://www.zdnet.com/article/how-to-use-chatgpt-to-write-an-essay/ (2023). Accessed 09 June 2023.

OpenAI chat interface. https://chat.openai.com/. Accessed 09 June 2023.

OpenAI. GPT-4 technical report (2023). arXiv:2303.08774.

Brown, T. B. et al. Language models are few-shot learners (2020). arXiv:2005.14165.

Wang, B. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https://github.com/kingoflolz/mesh-transformer-jax (2021).

Wei, J. et al. Finetuned language models are zero-shot learners. In International Conference on Learning Representations (2022).

Taori, R. et al. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca (2023).

Cai, Z. G., Haslett, D. A., Duan, X., Wang, S. & Pickering, M. J. Does ChatGPT resemble humans in language use? (2023). arXiv:2303.08014.

Mahowald, K. A discerning several thousand judgments: GPT-3 rates the article + adjective + numeral + noun construction (2023). arXiv:2301.12564.

Dentella, V., Murphy, E., Marcus, G. & Leivada, E. Testing AI performance on less frequent aspects of language reveals insensitivity to underlying meaning (2023). arXiv:2302.12313.

Guo, B. et al. How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection (2023). arXiv:2301.07597.

Zhao, W. et al. Is ChatGPT equipped with emotional dialogue capabilities? (2023). arXiv:2304.09582.

Keim, D. A. & Oelke, D. Literature fingerprinting: A new method for visual literary analysis. In 2007 IEEE Symposium on Visual Analytics Science and Technology, 115–122, https://doi.org/10.1109/VAST.2007.4389004 (IEEE, 2007).

El-Assady, M. et al. Interactive visual analysis of transcribed multi-party discourse. In Proceedings of ACL 2017, System Demonstrations , 49–54 (Association for Computational Linguistics, Vancouver, Canada, 2017).

El-Assady, M., Hautli-Janisz, A. & Butt, M. Discourse maps: Feature encoding for the analysis of verbatim conversation transcripts. In Visual Analytics for Linguistics, CSLI Lecture Notes, Number 220, 115–147 (Stanford: CSLI Publications, 2020).

Foulis, M., Visser, J. & Reed, C. Dialogical fingerprinting of debaters. In Proceedings of COMMA 2020, 465–466, https://doi.org/10.3233/FAIA200536 (Amsterdam: IOS Press, 2020).

Foulis, M., Visser, J. & Reed, C. Interactive visualisation of debater identification and characteristics. In Proceedings of the COMMA Workshop on Argument Visualisation, COMMA, 1–7 (2020).

Chatzipanagiotidis, S., Giagkou, M. & Meurers, D. Broad linguistic complexity analysis for Greek readability classification. In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications , 48–58 (Association for Computational Linguistics, Online, 2021).

Ajili, M., Bonastre, J.-F., Kahn, J., Rossato, S. & Bernard, G. FABIOLE, a speech database for forensic speaker comparison. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16) , 726–733 (European Language Resources Association (ELRA), Portorož, Slovenia, 2016).

Deutsch, T., Jasbi, M. & Shieber, S. Linguistic features for readability assessment. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, 1–17, https://doi.org/10.18653/v1/2020.bea-1.1 (Association for Computational Linguistics, Seattle, WA, USA (Online), 2020).

Fiacco, J., Jiang, S., Adamson, D. & Rosé, C. Toward automatic discourse parsing of student writing motivated by neural interpretation. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022) , 204–215, https://doi.org/10.18653/v1/2022.bea-1.25 (Association for Computational Linguistics, Seattle, Washington, 2022).

Weiss, Z., Riemenschneider, A., Schröter, P. & Meurers, D. Computationally modeling the impact of task-appropriate language complexity and accuracy on human grading of German essays. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 30–45, https://doi.org/10.18653/v1/W19-4404 (Association for Computational Linguistics, Florence, Italy, 2019).

Yang, F., Dragut, E. & Mukherjee, A. Predicting personal opinion on future events with fingerprints. In Proceedings of the 28th International Conference on Computational Linguistics , 1802–1807, https://doi.org/10.18653/v1/2020.coling-main.162 (International Committee on Computational Linguistics, Barcelona, Spain (Online), 2020).

Tumarada, K. et al. Opinion prediction with user fingerprinting. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021) , 1423–1431 (INCOMA Ltd., Held Online, 2021).

Rocca, R. & Yarkoni, T. Language as a fingerprint: Self-supervised learning of user encodings using transformers. In Findings of the Association for Computational Linguistics: EMNLP . 1701–1714 (Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 2022).

Aiyappa, R., An, J., Kwak, H. & Ahn, Y.-Y. Can we trust the evaluation on ChatGPT? (2023). arXiv:2303.12767.

Yeadon, W., Inyang, O.-O., Mizouri, A., Peach, A. & Testrow, C. The death of the short-form physics essay in the coming AI revolution (2022). arXiv:2212.11661.

Turing, A. M. Computing machinery and intelligence. Mind LIX, 433–460, https://doi.org/10.1093/mind/LIX.236.433 (1950).

Kortemeyer, G. Could an artificial-intelligence agent pass an introductory physics course? (2023). arXiv:2301.12127.

Kung, T. H. et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2, 1–12. https://doi.org/10.1371/journal.pdig.0000198 (2023).


Frieder, S. et al. Mathematical capabilities of ChatGPT (2023). arXiv:2301.13867.

Yuan, Z., Yuan, H., Tan, C., Wang, W. & Huang, S. How well do large language models perform in arithmetic tasks? (2023). arXiv:2304.02015.

Touvron, H. et al. LLaMA: Open and efficient foundation language models (2023). arXiv:2302.13971.

Chung, H. W. et al. Scaling instruction-finetuned language models (2022). arXiv:2210.11416.

BigScience Workshop et al. BLOOM: A 176B-parameter open-access multilingual language model (2023). arXiv:2211.05100.

Spencer, S. T., Joshi, V. & Mitchell, A. M. W. Can AI put gamma-ray astrophysicists out of a job? (2023). arXiv:2303.17853.

Cherian, A., Peng, K.-C., Lohit, S., Smith, K. & Tenenbaum, J. B. Are deep neural networks smarter than second graders? (2023). arXiv:2212.09993.

Stab, C. & Gurevych, I. Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers , 1501–1510 (Dublin City University and Association for Computational Linguistics, Dublin, Ireland, 2014).

Essay forum. https://essayforum.com/. Accessed 07 September 2023.

Common European Framework of Reference for Languages (CEFR). https://www.coe.int/en/web/common-european-framework-reference-languages. Accessed 09 July 2023.

KMK guidelines for essay assessment. http://www.kmk-format.de/material/Fremdsprachen/5-3-2_Bewertungsskalen_Schreiben.pdf. Accessed 09 July 2023.

McNamara, D. S., Crossley, S. A. & McCarthy, P. M. Linguistic features of writing quality. Writ. Commun. 27, 57–86 (2010).

McCarthy, P. M. & Jarvis, S. MTLD, vocd-D, and HD-D: A validation study of sophisticated approaches to lexical diversity assessment. Behav. Res. Methods 42, 381–392 (2010).


Dasgupta, T., Naskar, A., Dey, L. & Saha, R. Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications , 93–102 (2018).

Koizumi, R. & In’nami, Y. Effects of text length on lexical diversity measures: Using short texts with less than 200 tokens. System 40, 554–564 (2012).

spaCy: Industrial-strength natural language processing in Python. https://spacy.io/.

Siskou, W., Friedrich, L., Eckhard, S., Espinoza, I. & Hautli-Janisz, A. Measuring plain language in public service encounters. In Proceedings of the 2nd Workshop on Computational Linguistics for Political Text Analysis (CPSS-2022) (Potsdam, Germany, 2022).

El-Assady, M. & Hautli-Janisz, A. Discourse Maps: Feature Encoding for the Analysis of Verbatim Conversation Transcripts. CSLI Lecture Notes (CSLI Publications, Center for the Study of Language and Information, 2019).

Hautli-Janisz, A. et al. QT30: A corpus of argument and conflict in broadcast debate. In Proceedings of the Thirteenth Language Resources and Evaluation Conference , 3291–3300 (European Language Resources Association, Marseille, France, 2022).

Somasundaran, S. et al. Towards evaluating narrative quality in student writing. Trans. Assoc. Comput. Linguist. 6, 91–106 (2018).

Nadeem, F., Nguyen, H., Liu, Y. & Ostendorf, M. Automated essay scoring with discourse-aware neural models. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 484–493, https://doi.org/10.18653/v1/W19-4450 (Association for Computational Linguistics, Florence, Italy, 2019).

Prasad, R. et al. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08) (European Language Resources Association (ELRA), Marrakech, Morocco, 2008).

Cronbach, L. J. Coefficient alpha and the internal structure of tests. Psychometrika 16, 297–334. https://doi.org/10.1007/bf02310555 (1951).


Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1, 80–83 (1945).

Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 6, 65–70 (1979).


Cohen, J. Statistical Power Analysis for the Behavioral Sciences (Academic Press, 2013).

Freedman, D., Pisani, R. & Purves, R. Statistics (international student edition), 4th edn. W. W. Norton & Company, New York (2007).

SciPy documentation. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html. Accessed 09 June 2023.

Windschitl, M. Framing constructivism in practice as the negotiation of dilemmas: An analysis of the conceptual, pedagogical, cultural, and political challenges facing teachers. Rev. Educ. Res. 72 , 131–175 (2002).


Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Faculty of Computer Science and Mathematics, University of Passau, Passau, Germany

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch


Contributions

S.H., A.HJ., and U.H. conceived the experiment; S.H., A.HJ., and Z.K. collected the essays from ChatGPT; U.H. recruited the study participants; S.H., A.HJ., U.H., and A.T. conducted the training session and questionnaire; all authors contributed to the analysis of the results, the writing of the manuscript, and the review of the manuscript.

Corresponding author

Correspondence to Steffen Herbold .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information 1
Supplementary Information 2
Supplementary Information 3
Supplementary Tables
Supplementary Figures

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Herbold, S., Hautli-Janisz, A., Heuer, U. et al. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci. Rep. 13, 18617 (2023). https://doi.org/10.1038/s41598-023-45644-9


Received: 01 June 2023

Accepted: 22 October 2023

Published: 30 October 2023

DOI: https://doi.org/10.1038/s41598-023-45644-9



How to get away with AI-generated essays

Prof Paul Kleiman on putting ChatGPT to the test on his work. Plus letters from Michael Bulley and Dr Paul Flewers

No wonder Robert Topinka found himself in a quandary (The software says my student cheated using AI. They say they’re innocent. Who do I believe?, 13 February). To test ChatGPT’s abilities and weaknesses, I asked it to write a short essay on a particular topic that I specialised in. Before looking at what it produced, I wrote my own 100% original short essay on the same topic. I then submitted both pieces to ChatGPT and asked it to identify whether they were written by AI or a human. It immediately identified the first piece as AI-generated. But then it also said that my essay “was probably generated by AI”.

I concluded that if you write well, in logical, appropriate and grammatically correct English, then the chances are that it will be deemed to be AI-generated. To avoid detection, write badly.
Prof Paul Kleiman
Truro, Cornwall

Robert Topinka gets into a twist about whether his student’s essay was genuine or produced by AI. The obvious solution is for such work not to contribute to the final degree qualification. Then there would be no point in cheating.

Let there be real chat between teachers and students rather than ChatGPT, and let the degree be decided only by exams, with surprise questions, done in an exam room with pen and paper, and not a computer in sight.
Michael Bulley
Chalon-sur-Saône, France

Dr Robert Topinka overlooks a crucial factor with respect to student cheating: so long as a degree is a requirement for obtaining a reasonable job, chicanery is inevitable. When I left school at 16 in the early 1970s, an administrative job could be had with a few O-levels; when I finished my PhD two decades ago and was looking for that sort of job, each one required A-levels, and often a degree. I was a mature student, studying for my own edification, and so cheating was self-defeating. Cheating will stop being a major problem only when students attend university primarily to learn for the sake of learning and not as a means of gaining employment.
Dr Paul Flewers
London


Generative A.I. Arrives in the Gene Editing World of CRISPR

Much as ChatGPT generates poetry, a new A.I. system devises blueprints for microscopic mechanisms that can edit your DNA.

The physical structure of OpenCRISPR-1, a gene editor created by A.I. technology from Profluent. Credit: Video by Profluent Bio.


By Cade Metz, who has reported on the intersection of A.I. and health care for a decade.

April 22, 2024

Generative A.I. technologies can write poetry and computer programs or create images of teddy bears and videos of cartoon characters that look like something from a Hollywood movie.

Now, new A.I. technology is generating blueprints for microscopic biological mechanisms that can edit your DNA, pointing to a future when scientists can battle illness and diseases with even greater precision and speed than they can today.

Described in a research paper published on Monday by a Berkeley, Calif., startup called Profluent, the technology is based on the same methods that drive ChatGPT, the online chatbot that launched the A.I. boom after its release in 2022. The company is expected to present the paper next month at the annual meeting of the American Society of Gene and Cell Therapy.

Much as ChatGPT learns to generate language by analyzing Wikipedia articles, books and chat logs, Profluent’s technology creates new gene editors after analyzing enormous amounts of biological data, including microscopic mechanisms that scientists already use to edit human DNA.

These gene editors are based on Nobel Prize-winning methods involving biological mechanisms called CRISPR. Technology based on CRISPR is already changing how scientists study and fight illness and disease, providing a way of altering genes that cause hereditary conditions, such as sickle cell anemia and blindness.


Previously, CRISPR methods used mechanisms found in nature — biological material gleaned from bacteria that allows these microscopic organisms to fight off germs.

“They have never existed on Earth,” said James Fraser, a professor and chair of the department of bioengineering and therapeutic sciences at the University of California, San Francisco, who has read Profluent’s research paper. “The system has learned from nature to create them, but they are new.”

The hope is that the technology will eventually produce gene editors that are more nimble and more powerful than those that have been honed over billions of years of evolution.

On Monday, Profluent also said that it had used one of these A.I.-generated gene editors to edit human DNA and that it was “open sourcing” this editor, called OpenCRISPR-1. That means it is allowing individuals, academic labs and companies to experiment with the technology for free.

A.I. researchers often open source the underlying software that drives their A.I. systems, because it allows others to build on their work and accelerate the development of new technologies. But it is less common for biological labs and pharmaceutical companies to open source inventions like OpenCRISPR-1.

Though Profluent is open sourcing the gene editors generated by its A.I. technology, it is not open sourcing the A.I. technology itself.


The project is part of a wider effort to build A.I. technologies that can improve medical care. Scientists at the University of Washington, for instance, are using the methods behind chatbots like OpenAI’s ChatGPT and image generators like Midjourney to create entirely new proteins — the microscopic molecules that drive all human life — as they work to accelerate the development of new vaccines and medicines.

(The New York Times has sued OpenAI and its partner, Microsoft, on claims of copyright infringement involving artificial intelligence systems that generate text.)

Generative A.I. technologies are driven by what scientists call a neural network, a mathematical system that learns skills by analyzing vast amounts of data. The image creator Midjourney, for example, is underpinned by a neural network that has analyzed millions of digital images and the captions that describe each of those images. The system learned to recognize the links between the images and the words. So when you ask it for an image of a rhinoceros leaping off the Golden Gate Bridge, it knows what to do.

Profluent’s technology is driven by a similar A.I. model that learns from sequences of amino acids and nucleic acids — the chemical compounds that define the microscopic biological mechanisms that scientists use to edit genes. Essentially, it analyzes the behavior of CRISPR gene editors pulled from nature and learns how to generate entirely new gene editors.

“These A.I. models learn from sequences — whether those are sequences of characters or words or computer code or amino acids,” said Profluent’s chief executive, Ali Madani, a researcher who previously worked in the A.I. lab at the software giant Salesforce.

Profluent has not yet put these synthetic gene editors through clinical trials, so it is not clear if they can match or exceed the performance of CRISPR. But this proof of concept shows that A.I. models can produce something capable of editing the human genome.

Still, it is unlikely to affect health care in the short term. Fyodor Urnov, a gene editing pioneer and scientific director at the Innovative Genomics Institute at the University of California, Berkeley, said scientists had no shortage of naturally occurring gene editors that they could use to fight illness and disease. The bottleneck, he said, is the cost of pushing these editors through preclinical studies, such as safety, manufacturing and regulatory reviews, before they can be used on patients.

But generative A.I. systems often hold enormous potential because they tend to improve quickly as they learn from increasingly large amounts of data. If technology like Profluent’s continues to improve, it could eventually allow scientists to edit genes in far more precise ways. The hope, Dr. Urnov said, is that this could, in the long term, lead to a world where medicines and treatments are quickly tailored to individual people even faster than we can do today.

“I dream of a world where we have CRISPR on demand within weeks,” he said.

Scientists have long cautioned against using CRISPR for human enhancement because it is a relatively new technology that could potentially have undesired side effects, such as triggering cancer, and have warned against unethical uses, such as genetically modifying human embryos.

This is also a concern with synthetic gene editors. But scientists already have access to everything they need to edit embryos.

“A bad actor, someone who is unethical, is not worried about whether they use an A.I.-created editor or not,” Dr. Fraser said. “They are just going to go ahead and use what’s available.”



