
Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

Along with other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high quality work, or is the technology just not there yet to deliver on its promise? Students may also be asking themselves if they should use AI to write their essays for them and what they might be losing out on if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first century invention. Beginning in the 1950s, data scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased, and today we use AI for data analysis, finding patterns, and providing insights on the data it collects.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than just data analysis. Instead, generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. However, its intelligence is actually statistical analysis, as it can only generate “original” content based on the patterns it sees in already existing data and work.

It “hallucinates”

Generative AI models often provide false information — so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen, saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate new content that sounds like it comes from a credible source — except that source may not be credible, or may not exist at all.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.


How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may first appear. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five-paragraph essay based on the following topic?” and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”
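For readers comfortable with a little code, the three prompt patterns above (ideas, outlines, and titles) can be captured in a small reusable helper. This is a minimal sketch of our own; the function name and template wording are illustrative, not part of any official tool:

```python
def build_prompt(task, topic, n=5):
    """Return a ChatGPT prompt for one of three essay-support tasks:
    "ideas", "outline", or "titles"."""
    templates = {
        "ideas": f"Please give me {n} ideas for essays I can write on topics related to {topic}.",
        "outline": f"Can you create an outline for a five-paragraph essay based on the following topic: {topic}?",
        "titles": f"Can you suggest {n} titles that would be good for a college essay about {topic}?",
    }
    return templates[task]

print(build_prompt("titles", "WWII"))
# Can you suggest 5 titles that would be good for a college essay about WWII?
```

You could then paste the returned string into the ChatGPT interface, adjusting the topic or count as needed.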

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But outsourcing your work to ChatGPT can negatively impact not just your grades, but your ability to communicate and think critically as well. It’s always best to write your essays yourself.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine if they’re credible, and their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you’re learning

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center, where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay’s structure, and lead you through revisions.

The Harvard libraries are a great place to conduct your research, and its librarians can help you define your essay topic, plan and execute a research strategy, and locate sources. 

Finally, review “The Harvard Guide to Using Sources,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “Resources to Support Academic Integrity” webpage as well to help ensure your success.


The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.


How to Write Your Essay Using ChatGPT

2nd May 2023

It’s tempting, isn’t it? You’ve read about and probably also witnessed how quickly ChatGPT can knock up text, seemingly in any genre or style and of any length, in less time than it takes you to make a cup of tea. However, getting ChatGPT to write your essay for you would be plagiarism. Universities and colleges are alive to the issue, and you may face serious academic penalties if you’re found to have used AI in that way.

So that’s that, right? Not necessarily.

This post is not about how to get ChatGPT to write your essay. It’s about how you can use the tool to help yourself write an essay.

What Is ChatGPT?

Let’s start with the basics. ChatGPT is one of several chatbots that can answer questions in a conversational style, as if the answer were coming from a human. It provides answers based on information it received during development and in response to the prompts you provide.

In that respect, like a human, ChatGPT is limited by the information it has. Where it lacks information, it has a tendency to fill the gaps anyway. That’s dangerous if you’re relying on the accuracy of the information, and it’s another good reason not to get ChatGPT to write your essay for you.

How Can You Use ChatGPT to Help With Your Essay?

Forget about the much talked-about writing skills of ChatGPT – writing is your thing here. Instead, think of ChatGPT as your assistant. Here are some ideas for how you can make it work for you.

Essay Prompts

If your task is to come up with your own essay topic but you find yourself staring at a blank page, you can use ChatGPT for inspiration. A short prompt asking for a handful of possible essay topics in your subject area is all it takes.

ChatGPT can offer several ideas. The choice of which one to write about (and you may, of course, still come up with one of your own) will be up to you, based on what interests you and the topic’s potential for in-depth analysis.

Essay Outlines

Having decided on your essay topic – or perhaps you’ve already been given one by your instructor – you may be struggling to figure out how to structure the essay. You can use ChatGPT to suggest an outline: just tell it your topic and the type and length of essay you’re writing.

Research

Just as you should not use ChatGPT to write an essay for you, you should not use it to research one – that’s your job.

If, however, you’re struggling to understand a particular extract, you can ask ChatGPT to summarize it or explain it in simpler terms.


That said, you can’t rely on ChatGPT to be factually accurate in the information it provides, even when you think the information would be in its database, as we discovered in another post. Indeed, when we asked ChatGPT whether we should fact-check its information, it agreed that we should.

An appropriate use of ChatGPT for research would be to ask for academic resources for further reading on a particular topic. The advantage of doing this is that, in going on to locate and read the suggested resources, you will have checked that they exist and that the content is relevant and accurately set out in your essay.

Instead of researching the topic as a whole, you could use ChatGPT to suggest the occasional snippet of information, such as a relevant fact or statistic on a specific point.

Before deciding which of its suggestions – if any – to include, you should ask ChatGPT for the source of the fact or statistic so you can check it and provide the necessary citation.

Referencing

Even reading the word above has probably made you groan. As if writing the essay isn’t hard enough, you then have to not only list all the sources you used, but also make sure that you’ve formatted them in a particular style. Here’s where you can use ChatGPT. We have a separate post dealing specifically with this topic, but in brief, you can paste in your list of sources and ask ChatGPT to format them in the citation style you need.

Where information about a source is missing, ChatGPT will likely fill in the gaps. In such cases, you’ll have to ensure that the information it fills in is correct.

Proofreading

After finishing the writing and referencing, you’d be well advised to proofread your work, but you’re not always the best person to do so – you’d be tired and would likely read only what you expect to see. At least as a first step, you can copy and paste your essay into ChatGPT and ask it to check for spelling and grammatical errors.

You’ve got the message that you can’t just ask ChatGPT to write your essay, right? But in some areas, ChatGPT can help you write your essay, providing, as with any tool, you use it carefully and are alert to the risks.

We should point out that universities and colleges have different attitudes toward using AI – including whether you need to cite its use in your reference list – so always check what’s acceptable.

After using ChatGPT to help with your work, you can always ask our experts to look over it to check your references and/or improve your grammar, spelling, and tone. We’re available 24/7, and you can even try our services for free.


Student Opinion

Should Students Let ChatGPT Help Them Write Their College Essays?

If so, how? Tell us what you are thinking, and what practical and ethical questions these new A.I. tools raise for you.

Natasha Singer

Hey, ChatGPT, can you help me write my college admissions essays?

Absolutely! Please provide me with the essay prompts and any relevant information about yourself, your experiences, and your goals.

Katherine Schulten

By Katherine Schulten

Teachers: We also have a lesson plan that accompanies this Student Opinion forum.

Are you working on a college application essay? Have you sought help from an adult? How about from an A.I. chatbot like ChatGPT or Bard? Was either useful? If so, how?

The New York Times recently published two articles about the questions these new tools are raising for the college process. One explores how A.I. chatbots are upending essay-writing. The other details what happened when a reporter fed application questions from Harvard, Yale, Princeton and Dartmouth to different bots.

Here’s how the first article, “Ban or Embrace? Colleges Wrestle With A.I.-Generated Admissions Essays,” explains what’s going on:

The personal essay has long been a staple of the application process at elite colleges, not to mention a bane for generations of high school students. Admissions officers have often employed applicants’ essays as a lens into their unique character, pluck, potential and ability to handle adversity. As a result, some former students say they felt tremendous pressure to develop, or at least concoct, a singular personal writing voice. But new A.I. tools threaten to recast the college application essay as a kind of generic cake mix, which high school students may simply lard or spice up to reflect their own tastes, interests and experiences — casting doubt on the legitimacy of applicants’ writing samples as authentic, individualized admissions yardsticks.

The piece continues:

Some teachers said they were troubled by the idea of students using A.I. tools to produce college essay themes and texts for deeper reasons: Outsourcing writing to bots could hinder students from developing important critical thinking and storytelling skills.

“Part of the process of the college essay is finding your writing voice through all of that drafting and revising,” said Susan Barber, an Advanced Placement English literature teacher at Midtown High School, a public school in Atlanta. “And I think that’s something that ChatGPT would be robbing them of.”

In August, Ms. Barber assigned her 12th-grade students to write college essays. This week, she held class discussions about ChatGPT, cautioning students that using A.I. chatbots to generate ideas or writing could make their college essays sound too generic. She advised them to focus more on their personal views and voices.

Other educators said they hoped the A.I. tools might have a democratizing effect. Wealthier high school students, these experts noted, often have access to resources — alumni parents, family friends, paid writing coaches — to help them brainstorm, draft and edit their college admissions essays. ChatGPT could play a similar role for students who lack such resources, they said, especially those at large high schools where overworked college counselors have little time for individualized essay coaching.


How to Get ChatGPT to Write an Essay: Prompts, Outlines, & More

Last Updated: June 2, 2024

Getting ChatGPT to Write the Essay

Using AI to Help You Write

This article was co-authored by Bryce Warwick, JD and by wikiHow staff writer, Nicole Levine, MFA. Bryce Warwick is currently the President of Warwick Strategies, an organization based in the San Francisco Bay Area offering premium, personalized private tutoring for the GMAT, LSAT and GRE. Bryce has a JD from the George Washington University Law School. This article has been fact-checked, ensuring the accuracy of any cited facts and confirming the authority of its sources. This article has been viewed 51,864 times.

Are you curious about using ChatGPT to write an essay? While most instructors have tools that make it easy to detect AI-written essays, there are ways you can use OpenAI's ChatGPT to write papers without worrying about plagiarism or getting caught. In addition to writing essays for you, ChatGPT can also help you come up with topics, write outlines, find sources, check your grammar, and even format your citations. This wikiHow article will teach you the best ways to use ChatGPT to write essays, including helpful example prompts that will generate impressive papers.

Things You Should Know

  • To have ChatGPT write an essay, tell it your topic, word count, type of essay, and facts or viewpoints to include.
  • ChatGPT is also useful for generating essay topics, writing outlines, and checking grammar.
  • Because ChatGPT can make mistakes and trigger AI-detection alarms, it's better to use AI to assist with writing than have it do the writing.

Step 1 Create an account with ChatGPT.

  • Before using OpenAI's ChatGPT to write your essay, make sure you understand your instructor's policies on AI tools. Using ChatGPT may be against the rules, and it's easy for instructors to detect AI-written essays.
  • While you can use ChatGPT to write a polished-looking essay, there are drawbacks. Most importantly, ChatGPT cannot verify facts or provide references. This means that essays created by ChatGPT may contain made-up facts and biased content. [1] It's best to use ChatGPT for inspiration and examples instead of having it write the essay for you.

Step 2 Gather your notes.

  • The topic you want to write about.
  • Essay length, such as word or page count. Whether you're writing an essay for a class, college application, or even a cover letter, you'll want to tell ChatGPT how much to write.
  • Other assignment details, such as type of essay (e.g., personal, book report, etc.) and points to mention.
  • If you're writing an argumentative or persuasive essay, know the stance you want to take so ChatGPT can argue your point.
  • If you have notes on the topic that you want to include, you can also provide those to ChatGPT.
  • When you plan an essay, think of a thesis, a topic sentence, a body paragraph, and the examples you expect to present in each paragraph.
  • It can be like an outline and not an extensive sentence-by-sentence structure. It should be a good overview of how the points relate.

Step 3 Ask ChatGPT to write the essay.

  • "Write a 2000-word college essay that covers different approaches to gun violence prevention in the United States. Include facts about gun laws and give ideas on how to improve them."
  • "Write a 4-page college application essay about an obstacle I have overcome. I am applying to the Geography program and want to be a cartographer. The obstacle is that I have dyslexia. Explain that I have always loved maps, and that having dyslexia makes me better at making them."
  • This second prompt not only tells ChatGPT the topic and length, but also that the essay is personal. ChatGPT will write the essay in the first-person point of view.
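The assignment details listed above (topic, length, essay type, and points to mention) can also be assembled into a prompt programmatically, which makes it easy to stay consistent across revisions. A rough sketch of our own; the helper name and wording are illustrative, not part of ChatGPT or any official tool:

```python
def essay_prompt(topic, word_count, essay_type="college", points=()):
    """Build a ChatGPT essay prompt from assignment details:
    topic, word count, type of essay, and points to mention."""
    prompt = f"Write a {word_count}-word {essay_type} essay about {topic}."
    for point in points:
        prompt += f" Mention that {point}."
    return prompt

print(essay_prompt("gun violence prevention in the United States", 2000,
                   points=["gun laws vary by state"]))
```

Adding a new requirement later is then just a matter of appending another item to `points` and regenerating the prompt.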

Tyrone Showers

Be specific when using ChatGPT. Clear and concise prompts outlining your exact needs help ChatGPT tailor its response. Specify the desired outcome (e.g., creative writing, informative summary, functional resume), any length constraints (word or character count), and the preferred emotional tone (formal, humorous, etc.).

Step 4 Add to or change the essay.

  • In our essay about gun control, ChatGPT did not mention school shootings. If we want to discuss this topic in the essay, we can use the prompt, "Discuss school shootings in the essay."
  • Let's say we review our college entrance essay and realize that we forgot to mention that we grew up without parents. Add to the essay by saying, "Mention that my parents died when I was young."
  • In the Israel-Palestine essay, ChatGPT explored two options for peace: A 2-state solution and a bi-state solution. If you'd rather the essay focus on a single option, ask ChatGPT to remove one. For example, "Change my essay so that it focuses on a bi-state solution."

Step 5 Ask for sources.

Pay close attention to the content ChatGPT generates. If you use ChatGPT often, you'll start noticing its patterns, like its tendency to begin articles with phrases like "in today's digital world." Once you spot patterns, you can refine your prompts to steer ChatGPT in a better direction and avoid repetitive content.

Step 1 Generate essay topics.

  • "Give me ideas for an essay about the Israel-Palestine conflict."
  • "Ideas for a persuasive essay about a current event."
  • "Give me a list of argumentative essay topics about COVID-19 for a Political Science 101 class."

Step 2 Create an outline.

  • "Create an outline for an argumentative essay called 'The Impact of COVID-19 on the Economy.'"
  • "Write an outline for an essay about positive uses of AI chatbots in schools."
  • "Create an outline for a short 2-page essay on disinformation in the 2016 election."

Step 3 Find sources.

  • "Find peer-reviewed sources for advances in using MRNA vaccines for cancer."
  • "Give me a list of sources from academic journals about Black feminism in the movie Black Panther."
  • "Give me sources for an essay on current efforts to ban children's books in US libraries."

Step 4 Create a sample essay.

  • "Write a 4-page college paper about how global warming is changing the automotive industry in the United States."
  • "Write a 750-word personal college entrance essay about how my experience with homelessness as a child has made me more resilient."
  • You can even refer to the outline you created with ChatGPT, as the AI bot can reference up to 3000 words from the current conversation. For example: "Write a 1000 word argumentative essay called 'The Impact of COVID-19 on the United States Economy' using the outline you provided. Argue that the government should take more action to support businesses affected by the pandemic."

Step 5 Use ChatGPT to proofread and tighten grammar.

Step 6 Ask ChatGPT to format your citations.

  • One way to do this is to paste a list of the sources you've used, including URLs, book titles, authors, pages, publishers, and other details, into ChatGPT along with the instruction "Create an MLA Works Cited page for these sources."
  • You can also ask ChatGPT to provide a list of sources, and then build a Works Cited or References page that includes those sources. You can then replace sources you didn't use with the sources you did use.
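If you'd rather not risk ChatGPT inventing missing details, a basic single-author MLA book entry follows a predictable pattern (Author. Title. Publisher, Year.) and can be assembled directly from the details you've collected. A minimal sketch of our own, covering only that one simple case:

```python
def mla_book(author, title, publisher, year):
    """Format a single-author book in basic MLA style:
    Author. Title. Publisher, Year."""
    return f"{author}. {title}. {publisher}, {year}."

print(mla_book("Morrison, Toni", "Beloved", "Knopf", 1987))
# Morrison, Toni. Beloved. Knopf, 1987.
```

For anything beyond a simple book entry (journal articles, websites, multiple authors), check the details against an MLA style guide rather than trusting either this sketch or ChatGPT's output.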

Expert Q&A

  • Because it's easy for teachers, hiring managers, and college admissions offices to spot AI-written essays, it's best to use your ChatGPT-written essay as a guide to write your own essay. Using the structure and ideas from ChatGPT, write an essay in the same format, but using your own words.
  • Always double-check the facts in your essay, and make sure facts are backed up with legitimate sources.
  • If you see an error that says ChatGPT is at capacity, wait a few moments and try again.


  • Using ChatGPT to write or assist with your essay may be against your instructor's rules. Make sure you understand the consequences of using ChatGPT to write or assist with your essay.
  • ChatGPT-written essays may include factual inaccuracies, outdated information, and inadequate detail. [3]


Thanks for reading our article! If you’d like to learn more about completing school assignments, check out our in-depth interview with Bryce Warwick, JD.

  1. https://help.openai.com/en/articles/6783457-what-is-chatgpt
  2. https://platform.openai.com/examples/default-essay-outline
  3. https://www.ipl.org/div/chatgpt/

About This Article

Bryce Warwick, JD





How to use ChatGPT to do research for papers, presentations, studies, and more


ChatGPT is often thought of as a tool that will replace human work on tasks such as writing papers for students or professionals. But ChatGPT can also be used to support human work, and research is an excellent example. 

Whether you're working on a research paper for school or doing market research for your job, initiating the research process and finding the correct sources can be challenging and time-consuming. 


ChatGPT and other AI chatbots can help by curtailing the amount of time spent finding sources, allowing you to jump more quickly to the actual reading and research portion of your work.

Picking the right chatbot 

Before we get started, it's important to understand the limitations of using ChatGPT. Because the free version of ChatGPT is not connected to the internet, it cannot access information published after its 2021 training cutoff, and it cannot provide you with a direct link to the source of the information.


Being able to ask a chatbot to provide you with links for the topic you are interested in is very valuable. If you'd like to do that, I recommend using a chatbot connected to the internet, such as Bing Chat, Claude, ChatGPT Plus, or Perplexity.

This how-to guide will use ChatGPT as an example of how prompts can be used, but the principles are the same for whichever chatbot you choose.

1. Brainstorm

When you're assigned a research paper, the general topic area is usually given, but you'll be required to identify the exact topic you want to cover. ChatGPT can help with the brainstorming process by suggesting ideas or even tweaking your own.


For this sample research paper, I will use the general topic of "Monumental technological inventions that caused pivotal changes in history." If I didn't have a specific idea to write about, I would tell ChatGPT the general theme of the assignment with as much detail as possible and ask it for some proposals. 

My prompt: I have to write a research paper on "Monumental technological inventions that caused pivotal changes in history." It needs to be ten pages long and source five different primary sources. Can you help me think of a specific topic? 

As seen in the screenshot (below), ChatGPT produced 10 viable topics, including "The Printing Press and the Spread of Knowledge", "The Internet and the Digital Age", "The Telegraph and the Communication Revolution", and more.


You can then follow up with ChatGPT to ask for further information. You can even tweak these topics with an angle you like more, and continue the feedback loop until you settle on a topic.
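The brainstorming prompt above packs the assignment's constraints into a single message. If you reuse the same pattern across assignments, it can be templated; a small Python sketch (the function and its parameters are our own, not part of any ChatGPT interface):

```python
def brainstorm_prompt(theme: str, pages: int, primary_sources: int) -> str:
    """Assemble a topic-brainstorming prompt with the
    assignment's constraints spelled out explicitly."""
    return (
        f'I have to write a research paper on "{theme}". '
        f"It needs to be {pages} pages long and source {primary_sources} "
        "different primary sources. Can you help me think of a specific topic?"
    )

print(brainstorm_prompt(
    "Monumental technological inventions that caused pivotal changes in history",
    pages=10,
    primary_sources=5,
))
```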

2. Generate an outline

Once you have selected a topic, you can ask ChatGPT to generate an outline, including as much detail for your assignment as possible. For this example, I used the first topic that ChatGPT suggested in the previous step.

My prompt: Can you give me an outline for a research paper that is ten pages long and needs to use five primary sources on this topic, "The Printing Press and the Spread of Knowledge"? 

ChatGPT generated a 13-point outline that carefully described the areas I should touch on in my paper, as seen in the photo (above). You can then use this outline to structure your paper and use the points to find sources, using ChatGPT as delineated below. 

3. Tell ChatGPT your topic and ask for sources

Now that you have a topic and outline established, you can tell ChatGPT the topic of your project and ask it to deliver sources for you.

My prompt: Can you give me sources for a ten-page long paper on this topic, "The Printing Press and the Spread of Knowledge"?

ChatGPT outputs a list of five primary and five secondary sources that you can include in your paper. Remember, because ChatGPT can't give you internet links, you will need to seek out the specific resources on your own, whether that's Googling or visiting your school library. 
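Since ChatGPT can't link these sources, you'll be hunting each one down yourself. One low-tech way to stay organized is to paste its reply into a script that turns the numbered list into a checklist; an illustrative Python sketch (the `reply` text is made up):

```python
import re

def parse_source_list(reply: str) -> list:
    """Extract the items from a chatbot's numbered source list
    so each one can be ticked off as you locate it."""
    return [m.group(1).strip()
            for m in re.finditer(r"^\s*\d+\.\s*(.+)$", reply, re.MULTILINE)]

reply = """1. A fifteenth-century printed Bible (primary)
2. A scholarly history of early printing (secondary)
3. A contemporary pamphlet on the press (primary)"""

for title in parse_source_list(reply):
    print(f"[ ] {title}")
```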


When I asked Bing Chat the same question, it provided sources with clickable links that you can use to access the material you need more quickly. For that reason, I would use Bing Chat for this step.

4. Describe a specific idea and ask for sources

Instead of describing the whole topic, you can also use a chatbot to find sources for a specific aspect of your paper.


For example, I asked ChatGPT for sources for a specific bullet in the paper outline that it generated above. 

My prompt: Can you give me sources for the social and intellectual climate of when the printing press was generated?

As in the prior example, ChatGPT generated five primary and five secondary resources for the topic. 

Using this feature for smaller chunks of your essay is a good alternative because it gives you more options on sources and provides tailored insight that you can use to carefully craft your piece. 

5. Ask for examples of a specific incident

I use this prompt a lot in my workflow because I can sometimes remember that something specific happened, but can't pinpoint what it was or when it happened. 

This tool can also be used when you need to find a specific example to support your topic. 


In both cases, you can ask ChatGPT to help you identify a specific event or time period, and incorporate those details in your article. 

In our essay example, if I wanted to include a rebuttal and delineate a time when implementing technology had negative impacts, but couldn't think of an incident on my own, I could ask ChatGPT to help me identify one.

My prompt: What was a time in history when implementing technology backfired on society and had negative impacts?

Within seconds, ChatGPT generated 10 examples of incidents that I could weave into the research as a rebuttal. 

6. Generate citations

Creating a page of the works you cited, although valuable and necessary for integrity, is a pain. Now, you can ask ChatGPT to generate citations for you by simply dropping the link or the title of the work, and asking it to create a citation in the style of your paper. 


I asked ChatGPT to generate a citation for this article for ZDNET. As seen in the photo (above), the tool asked me to include the access date and the style for the citation, and then quickly generated a complete citation for the piece.

ChatGPT generated: 

Great, here's the MLA citation for the web link "How to Use ChatGPT to Write an Essay" from ZDNET, accessed on September 15: "How to Use ChatGPT to Write an Essay." ZDNET, https://www.zdnet.com/article/how-to-use-chatgpt-to-write-an-essay/. Accessed 15 Sept. 2023.

If you used something other than a website as a source, such as a book or textbook, you can still ask ChatGPT to provide a citation. The only difference is that you might have to input some information manually. 
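The citation ChatGPT produced follows a fixed pattern, which is part of why this task suits it well. For web sources, you can even reproduce the pattern locally; a Python sketch of the same simplified MLA web format (the helper is ours, and real MLA handles more edge cases):

```python
def mla_web_citation(title: str, site: str, url: str, accessed: str) -> str:
    """Format a web page in the simplified MLA pattern shown above:
    "Title." Site, URL. Accessed date."""
    return f'"{title}." {site}, {url}. Accessed {accessed}.'

print(mla_web_citation(
    "How to Use ChatGPT to Write an Essay",
    "ZDNET",
    "https://www.zdnet.com/article/how-to-use-chatgpt-to-write-an-essay/",
    "15 Sept. 2023",
))
```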


How to Use ChatGPT to Write Essays That Impress


Step 1: Use ChatGPT to Find and Refine Essay Topics

  • Log into the service and type the following prompt into ChatGPT:


  • As you can see, ChatGPT gave several good ideas for our essay. If you want to refine the idea further, you can ask the chatbot to cut out some parts of the idea and replace them. Or, you can ask for more context in certain parts. Example – “Expand more on topic number 5 and what it means.”

Step 2: Ask ChatGPT to Construct an Outline

  • With the same chat open, type out "Give me an essay outline for <selected topic>. Make sure to keep it structured as I'll use it to write my essay." In this case, I will use topic number 2 since it aligns with what I had in mind.


  • As you can see above, we now have a structured outline for our essay. We can use this to write our essay or have ChatGPT do that job. Nonetheless, it’s a good starting point. As always, you can have the AI chatbot cut out parts of the outline or specifically add new ones depending on your requirement.

Step 3: Get ChatGPT to Cite Sources for Your Essay

Even though we have the idea and the outline, we still need to do our own research for evidence supporting our essay. Thankfully, ChatGPT can be of some help here. Since the chatbot is reasonably good at surface-level research, you can get a general idea of where to look when gathering information. Let's begin doing that.

  • Let’s begin asking ChatGPT for sources. With the same chat open, type in the following prompt:


  • Now we have a list of 10 sources we can reference. However, you can also see that ChatGPT mentions the year 2021 in some of them. Therefore, it's best to treat these websites as starting points and navigate to their latest pages relevant to your essay. This applies to every topic, so always do it. Also, chatbots like ChatGPT have a habit of hallucinating and making up information, so do be careful.

Step 4: Have ChatGPT Write the Essay

  • In the same chat, type the following prompt – "With the topic and outline available to you, generate a 700-word essay. Make sure to keep it structured and concise yet informational. Also, keep in mind my target audience is <Insert target audience> so cater to that accordingly."
  • In the middle of the essay, ChatGPT might stop and not answer. Simply type "Continue," and it will finish the rest of the essay.
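The placeholder in that prompt is the part students most often forget to replace. A quick Python sketch that fills the template programmatically (the function is illustrative, not part of ChatGPT):

```python
def essay_prompt(words: int, audience: str) -> str:
    """Fill in the essay-generation prompt above, substituting
    the <Insert target audience> placeholder."""
    return (
        f"With the topic and outline available to you, generate a "
        f"{words}-word essay. Make sure to keep it structured and concise "
        f"yet informational. Also, keep in mind my target audience is "
        f"{audience} so cater to that accordingly."
    )

print(essay_prompt(700, "first-year college students"))
```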


Step 5: Edit the Essay with ChatGPT

Whether you have used ChatGPT to draft a complete essay or have written one yourself, you can use this step to make ChatGPT your co-editor and grammar checker. While your essay might still need an initial look from a human, you can definitely use the bot to hash out the tone and add little details.

  • Either open up the same chat or have your essay copied to your clipboard. With that done, type out the following prompt:


Step 6: Export the Essay for Submission

For those who want to export the essay into a more aesthetic format, we have just the thing for you. There is no shortage of ChatGPT Chrome extensions on the internet right now. We have one such selection linked in our list that can export selected chats into beautiful image formats if you want to show off your essay. Check it out and let us know how you liked it.

Bonus: ChatGPT and AI Apps to Write Essays

1. Writesonic


2. Rytr

Rytr is another helpful AI writing assistant that helps not only with essays but with all types of articles. The service is powered by a language model and comes with 40+ different use cases and 20+ writing tones for all types of written material. For those who don’t want to stick to English, it even supports 30+ languages.


Upanishad Sharma

Combining his love for Literature and Tech, Upanishad dived into the world of technology journalism with fire. Now he writes about anything and everything while keeping a keen eye on his first love of gaming. Often found chronically walking around the office.



Write an Essay From Scratch With Chat GPT: Step-by-Step Tutorial

Santiago Mallea


Chief of Content At Gradehacker

  • Updated in June 2024

How can I use ChatGPT to write an essay from scratch?

To write an essay with Chat GPT, you need to:

  • Understand your prompt
  • Choose a topic
  • Write the entire prompt in Chat GPT
  • Break down the arguments you got
  • Write one prompt at a time
  • Check the sources
  • Create your first draft
  • Edit your draft


How amazing would it be if there was a robot willing to help you write a college essay from scratch?

A few years ago, that may have sounded like something so futuristic it could only be seen in movies. But actually, we are closer than you might think.

Artificial Intelligence tools are everywhere, and college students have noticed. Among them, there is one revolutionary AI that learns over time and writes all types of content, from everyday conversations to academic texts.

But can Chat GPT write essays from scratch?

We tried it, and the answer is kind of (for now, at least).

Here at Gradehacker, we have spent years being the non-traditional adult student's #1 resource.

We have lots of experience helping people like you write their essays on time or get their college degree sooner, and we know how important it is to stay updated with the latest tools.

AI and Chat GPT are going to stay for a while, so you'd better learn how to use them properly. If you've ever wondered whether it's possible to write an essay from scratch with Chat GPT, you are about to find out!

Now, in case you aren’t familiar with Chat GPT or don’t know the basics of how it works, we recommend watching our video first!

How we Used ChatGPT to Write Essays

So, to try our experiment with ChatGPT, we created two different college assignments that any student could encounter:

  • An argumentative essay about America's healthcare system
  • A book review of George Orwell's 1984

Our main goal is to test ChatGPT’s essay-writing skills and see how much students can use it to write their academic assignments.

Now, we are pretty aware that this (or any) artificial intelligence can carry a wide range of problems such as:

  • Giving you incorrect premises and information
  • Delivering a piece of writing that is plagiarized from somewhere else
  • Not including citations or listing the sources it used
  • Not always being available to use

That’s why after receiving our first rough draft, we’ll edit the parts of the text that are necessary and run what we get through our plagiarism checker. After our revision, we’ll ask the AI to expand on the information or make the changes we need.

We’ll consider that final version after our revision as the best possible work that ChatGPT could have done to write an essay from scratch.

And to cover the lack of citations, we’ll see what academic sources the chatbot considers worthy for us to use when writing our paper.

Now, we don’t think that AIs are ready to deliver fully edited and well-written academic writing assignments that you can simply submit to your professor without reading them first.

But is it possible to speed up the writing process and save time by asking Chat GPT to write essays?

Let’s see!

Can ChatGPT Write an Argumentative Paper?

First, we’ll see how it can handle one of the most common academic essays: an argumentative paper.

We chose the American healthcare system as our topic, but since we know we need a specific subject with a wide range of sources to write a strong and persuasive essay, we are focusing on structural racism in our healthcare system and how African Americans accessed it during COVID-19.

It’s a clear and specific topic that we included in our list of best topics for your research paper. If you want similar alternatives for college papers, be sure to watch our video!

Instructions and Essay Prompt

Take a position on an issue and compose a 5-page paper that supports it.

In the introduction, establish why your topic is important and present a specific, argumentative thesis statement that previews your argument.

The body of your essay should be logical, coherent, and purposeful. It should synthesize your research and your own informed opinions in order to support your thesis.

Address other positions on the topic along with arguments and evidence that support those positions. 

Write a conclusion that restates your thesis and reminds your reader of your main points.

First Results

After giving ChatGPT this prompt, this is what we received:

The first draft we received

To begin with, after copying and pasting these paragraphs into a word document, it only covered two and a half pages.

While the introduction directly tackles the main topic, it fails to provide a clear thesis statement. And even if it’s included in a separate section, the thesis is broad and lacks factual evidence or statistics to support it.

Throughout the body of the text, the AI lists many real-life issues that contribute to the topic of the paper. Still, these are never fully explained nor supported with evidence.

For example, in the first paragraph, it says that “African Americans have long experienced poorer health outcomes compared to other racial groups.” Here it would be interesting to add statistics that prove this information is correct.

Something that really stood out to us was that ChatGPT credited a source to back up important data, even though it didn’t cite it properly. It cites a study conducted by the Kaiser Family Foundation claiming that in 2019, 11% of African Americans and 6% of non-Hispanic Whites were uninsured.

We checked the original article and found that the information was almost 100% accurate. The correct rates were 8% for White Americans and 10.9% for African Americans, but the biggest issue was that the study included more updated statistics from 2021.


Then, when addressing other issues like transportation and discrimination, each problem is clearly presented, but once again, there are no sources to support it.

Once the essay starts developing the thesis statement on how these issues could be fixed, we can see the same problem.

But even if they lack supporting evidence, the arguments listed are cohesive and make sense. These were:

  • Expanding Medicaid coverage
  • Providing incentives for healthcare providers to practice in underserved areas
  • Investing in telehealth services
  • Improving transportation infrastructure, particularly in rural areas
  • Training healthcare providers on cultural competence and anti-racism
  • Increasing diversity in the healthcare workforce
  • Implementing patient-centered care models

These are all strong ideas that could be stronger and more persuasive with specific information and statistics.

Still, the main problem is that the essay includes no counter-arguments to its own main points.

Overall, ChatGPT delivered a cohesive first draft that tackled the topic by explaining its multiple issues and listing possible solutions. However, there is a clear lack of evidence, no counter-arguments were included, and the essay we got was half the length we needed.

Changes and Final Results

In our second attempt, we asked the AI to expand on each section and subtopic of the essay. While the final result ended up repeating some parts on multiple occasions, ChatGPT wrote more extensively and even included in-text citations with their corresponding references.

By pasting all these new texts (without editing) into a new document, we get more than seven pages, which is a great starting point for writing a better essay.

Explanation of the issues and use of sources

The new introduction stayed pretty much the same, but the difference is that now the thesis statement is stronger and even has a cited statistic to back it up. Unfortunately, while the information is correct, the source isn’t.

Clicking on the link included in the references took us to a non-existent page, and after looking for that data on Google, we found that it actually belonged to a study from the National Library of Medicine.


The AI then did a solid job expanding on the issues related to the paper’s topic. But while some sources were useful, sometimes the information reflected in the text didn’t correspond to them.

For example, it cited an article posted on KFF as evidence of the importance of transportation as a critical factor in health disparities, but when we go to the site, we don’t find any mention of that issue.

Similarly, when addressing the higher rates of infection and death compared to White Americans, the AI once again cited the wrong source. The statistics came from a study conducted by the CDC , but from a different article than the one that is credited.

And sometimes, the information displayed was incorrect.

In that same section, when listing the percentages of death in specific states, we see in the cited source that the statistics don’t match.

However, what’s interesting is that if we search for that data on Google, we find a different study that backs it up. So, even if Chat GPT didn’t include inaccurate information in the text, it failed to properly acknowledge the real source.

And this problem of having correct information but citing the wrong source continued throughout the paper.


Solutions and counter-arguments

When we asked the AI to write more about the solutions it mentioned in the first draft, we received more extensive arguments with supporting evidence for each case.

As we were expecting, the statistics were real, but the source credited wasn’t the original and didn’t mention anything related to what was included in the text.

And it wasn’t any different with the counter-arguments. They made sense and had a strong point, but the sources credited weren’t correct.

For instance, regarding telehealth services, it recognized the multiple barriers it would take for low-income areas to adopt this modality. It credited an article posted on KFF mainly written by “Gillespie,” but after searching for the information, we see that the original study was conducted by other people.

Still, the fact that Chat GPT now provided us with plenty of data and information we could use to develop counter-arguments and later refute them is excellent progress.


The good news is that none of the multiple paragraphs that Chat GPT delivered had plagiarism issues.

After running them through our plagiarism checker, it only found a few parts that had duplicated content, but these were sentences composed of commonly used phrases that other articles about different topics also had.

For example, multiple times it flagged phrases like “according to the CDC” or “according to a report by the Kaiser Family Foundation” as plagiarism. And even these “plagiarism issues” could be easily solved by rearranging the order or adding new words.

Checking for plagiarism is a critical part of the essay writing process. If you are not using one yet, be sure to pick one as soon as possible. We recommend checking our list of best plagiarism checkers.
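What a plagiarism checker flags here is essentially shared word sequences. A naive Python sketch shows why stock phrases like "according to a report by the Kaiser Family Foundation" trip the alarm (a real checker is far more sophisticated, and the sample texts are made up):

```python
def shared_ngrams(text_a: str, text_b: str, n: int = 6) -> set:
    """Return word n-grams that appear in both texts, a toy
    version of the overlap a plagiarism checker detects."""
    def grams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return grams(text_a) & grams(text_b)

a = "Coverage fell sharply according to a report by the Kaiser Family Foundation"
b = "according to a report by the Kaiser Family Foundation costs kept rising"
print(len(shared_ngrams(a, b)), "shared 6-word phrases")
```

Rearranging the word order or swapping in a synonym breaks the shared n-gram, which is exactly why the fix described above works.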

Key Takeaways

So, what did we learn by asking Chat GPT to write an argumentative paper?

  • It's better if the AI writes section by section
  • It can give you accurate information on issues, solutions, and counter-arguments
  • There is a high chance the source credited won't be the right one
  • The texts, which can have duplicated content among themselves, don't appear to be plagiarized

It’s clear that we still need to do a lot of editing and writing.

However, considering that Chat GPT wrote this in less than an hour, the AI proved to be a solid tool. It gave us many strong arguments, interesting and accurate statistics, and an order that we can follow to structure our argumentative paper.

If writing these types of assignments isn’t your strength, be sure to watch our tutorial on how to write an exceptional argumentative essay!


Can Chat GPT Write a Book Review?

For our second experiment, we want to see if Chat GPT can write an essay for a literature class.

To do so, we picked one of the novels we consider one of the 5 must-read books any college student should read: 1984 by George Orwell. There is so much written and discussed about this literary classic that we thought it would be a perfect choice for an artificial intelligence chatbot like Chat GPT to write something about.

Write a book review of the book 1984 by George Orwell. The paper needs to include an introduction with the author’s title, publication information (Publisher, year, number of pages), genre, and a brief introduction to the review.

Then, write a summary of the plot with the basic parts of the plot: situation, conflict, development, climax, and resolution.

Continue by describing the setting and the point of view and discussing the book’s literary devices.

Lastly, analyze the book, and explain the particular style of writing or literary elements used.

And then write a conclusion.

This is the first draft we got:

The first draft we got

Starting with the introduction, all the information is correct, though including the number of pages is pointless, as it depends on the edition of the book.

The summary is also accurate, but it relies too heavily on the plot instead of the context and world described in the novel, which is arguably the reason 1984 transcended time. For example, there is no mention of Big Brother, the leader of the totalitarian superstate.

Now, the setting and point of view section is the poorest section written by Chat GPT. It is very short and lacks development.

The literary devices are not necessarily wrong, but it would be better to focus more on each one. For instance, talk more about the importance of symbolism, or explain how the book critiques propaganda, totalitarianism, and individual freedom.

The analysis of Orwell’s writing is simple, but the conclusion is clear and straightforward, so it might be the best piece that the AI wrote.

For the second draft, instead of submitting the entire prompt, we wrote one command per section. As a result, Chat GPT focused on each part of the review and delivered more paragraphs with more detailed information in every case.


It’s clear that this way, the AI can write better and more developed texts that are easier to edit and improve. Each section analyzes its topic more in depth, which facilitates the upcoming process of structuring the most useful paragraphs into a cohesive essay.

For example, it now added more literary devices used by Orwell and gave specific examples of the symbolism of the novel.

Of course, there are many sentences and ideas that are repeated throughout the different sections. But now, because each has more specific information, we can take these parts and structure a new paragraph that comprises the most valuable sentences.


Now, even if sometimes book reviews don’t need to include citations from external sources apart from the novel we are analyzing, Chat GPT gave us five different options for us to choose from.

The only problem was that we couldn’t find any of them on Google.

The names of the authors were real people, but the titles of the articles and essays were nowhere to be found. This made us think it’s likely the AI picked real-life writers and invented titles for fictional essays about 1984 or George Orwell.


Finally, we need to see if the texts are original or plagiarized material.

After running it through our plagiarism detection software, we found that it was mostly original content with only a few minor issues, nothing too big to worry about.

One easy-to-solve example is in the literary devices section, where it directly quotes a sentence from the book. In this case, we would just need to add the in-text citation.

The biggest plagiarism problem was with one sentence (or six words, to be more specific) from the conclusion that matched the introduction of a published summary review. But by rearranging the word order or adding synonyms, this issue can be easily solved too.

So, what are the most important tips we can take from Chat GPT writing a book review?

  • It will review each section more in-depth if you ask it one prompt at a time
  • The analysis and summary of the book were accurate
  • If you ask it to list scholarly sources, the AI will invent nonexistent sources based on real authors
  • There were very few plagiarism issues

Once again, there is still a lot of work to do.

The writing sample Chat GPT gave us is a solid start, but we need to rearrange all the paragraphs into one cohesive essay that perfectly summarizes the different aspects of the novel. Plus, we would also have to find scholarly sources on our own.

Still, the AI can do the heavy lifting and give you a great starting point.

If writing book reviews isn’t your strong suit, our tutorial and tips can help!


Save Time And Use Chat GPT to Write Your Essay

We know that writing essays can be a tedious task.

Sometimes, kicking off the process can be harder than it looks. That’s why knowing how to use a powerful tool like ChatGPT can make a real difference.

It may not match your critical thinking skills or write a high-quality essay from scratch, but with our tips, it can deliver a solid first draft to build your essay on.

But if you want to have an expert team of writers giving you personalized support or aren’t sure about editing an AI-written essay, you can trust Gradehacker to help you with your assignments.

You can also check out our related blog posts if you want to learn how to take your writing skills to the next level!

  • How To Be More Productive | Tips For Non-Traditional Students
  • Studying with Mnemonic Techniques
  • How To Nail Every Discussion Board | Tips To Improve Your Discussion Posts
  • Study Habits That Keep College Students Focused


Santiago Mallea is a curious and creative journalist who first helped many college students as a Gradehacker consultant in subjects like literature, communications, ethics, and business. Now, as a Content Creator in our blog, YouTube channel, and TikTok, he helps non-traditional students improve their college experience by sharing the best tips. You can find him on LinkedIn.


Get answers. Find inspiration. Be more productive.

Free to use. Easy to try. Just ask and ChatGPT can help with writing, learning, brainstorming, and more.

Writes, brainstorms, edits, and explores ideas with you

A conversation between a user and ChatGPT on an interface about rewriting an email to appear friendly and professional.

Summarize meetings. Find new insights. Increase productivity.

A conversation between a user and ChatGPT on an interface about summarizing meeting notes.

Generate and debug code. Automate repetitive tasks. Learn new APIs.

A conversation between a user and ChatGPT on an interface about creating CSS with specific parameters.

Learn something new. Dive into a hobby. Answer complex questions.

A conversation between a user and ChatGPT on an interface about gathering a list of things needed to start a herb garden.

Explore more features in ChatGPT

Type, talk, and use it your way.

With ChatGPT, you can type or start a voice conversation by tapping the headphone icon in the mobile app. 

Browse the web

ChatGPT can answer your questions using its vast knowledge and with information from the web.

Analyze data and create charts

Upload a file and ask ChatGPT to help analyze data, summarize information or create a chart. 

Talk about an image

Take or upload an image and ask ChatGPT about it.


Customize ChatGPT for work, daily tasks or inspiration with GPTs

Explore the GPT Store and see what others have made. ChatGPT Plus users can also create their own custom GPTs.


Create images

Ask ChatGPT to create images using a simple sentence or detailed paragraph.


Apple & ChatGPT

At WWDC in June 2024, we announced a partnership with Apple to integrate ChatGPT into experiences within iOS, iPadOS, and macOS.


Get started with ChatGPT today

Free:

  • Assistance with writing, problem solving and more
  • Access to GPT-4o mini
  • Limited access to GPT-4o
  • Limited access to advanced data analysis, file uploads, vision, web browsing, and image generation
  • Use custom GPTs

Plus ($20 / month):

  • Early access to new features
  • Access to GPT-4, GPT-4o, GPT-4o mini
  • Up to 5x more messages for GPT-4o
  • Access to advanced data analysis, file uploads, vision, and web browsing
  • DALL·E image generation
  • Create and use custom GPTs

Join hundreds of millions of users and try ChatGPT today.

PrepScholar

Can You Use ChatGPT for Your College Essay?


ChatGPT has become a popular topic of conversation since its official launch in November 2022. The artificial intelligence (AI) chatbot can be used for all sorts of things, like having conversations, answering questions, and even crafting complete pieces of writing.

If you’re applying for college, you might be wondering about ChatGPT’s potential for college admissions. Should you use a ChatGPT college essay in your application?

By the time you finish reading this article, you’ll know much more about ChatGPT, including how students can use it responsibly and if it’s a good idea to use ChatGPT on college essays . We’ll answer all your questions, like:

  • What is ChatGPT and why are schools talking about it?
  • What are the good and bad aspects of ChatGPT?
  • Should you use ChatGPT for college essays and applications?
  • Can colleges detect ChatGPT?
  • Are there other tools and strategies that students can use, instead?

We’ve got a lot to cover, so let’s get started!


Schools and colleges are worried about how new AI technology affects how students learn. (Don't worry. Robots aren't replacing your teachers...yet.)

What Is ChatGPT and Why Are Schools Talking About It?

ChatGPT (short for “Chat Generative Pre-trained Transformer”) is a chatbot created by OpenAI , an artificial intelligence research company. ChatGPT can be used for various tasks, like having human-like conversations, answering questions, giving recommendations, translating words and phrases—and writing things like essays. 

In order to do this, ChatGPT uses a neural network that’s been trained on thousands of resources to predict relationships between words. When you give ChatGPT a task, it uses that knowledge base to interpret your input or query. It then analyzes its data banks to predict the combinations of words that will best answer your question. 

So while ChatGPT might seem like it’s thinking, it’s actually pulling information from hundreds of thousands of resources , then answering your questions by looking for patterns in that data and predicting which words come next.  
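That prediction idea can be shown with a toy model. The sketch below is a hypothetical simplification (real models use neural networks trained on vastly more data): it counts which word follows which in a tiny corpus and predicts the most frequent successor.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus", split into word tokens.
corpus = (
    "the essay explores a personal experience . "
    "the essay explains what makes you unique . "
    "the essay should reflect your own voice ."
).split()

# For each word, count every word that follows it.
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))       # "essay" follows "the" every time here
print(predict_next("personal"))  # "experience" is its only successor
```

Scaled up to billions of words and a far richer model of context, this "predict what comes next from patterns in past text" loop is the core mechanism the paragraph above describes.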

Why Schools Are Concerned About ChatGPT

Unsurprisingly, schools are worried about ChatGPT and its misuse, especially in terms of academic dishonesty and plagiarism . Most schools, including colleges, require students’ work to be 100% their own. That’s because taking someone else’s ideas and passing them off as your own is stealing someone else’s intellectual property and misrepresenting your skills. 

The problem with ChatGPT from schools’ perspective is that it does the writing and research for you, then gives you the final product. In other words, you’re not doing the work it takes to complete an assignment when you’re using ChatGPT , which falls under schools’ plagiarism and dishonesty policies.  

Colleges are also concerned with how ChatGPT will negatively affect students’ critical thinking, research, and writing skills . Essays and other writing assignments are used to measure students’ mastery of the material, and if students submit ChatGPT college essays, teachers will just be giving feedback on an AI’s writing…which doesn’t help the student learn and grow. 

Beyond that, knowing how to write well is an important skill people need to be successful throughout life. Schools believe that if students rely on ChatGPT to write their essays, they’re doing more than just plagiarizing—they’re impacting their ability to succeed in their future careers. 

Many Schools Have Already Banned ChatGPT

Schools have responded surprisingly quickly to AI use, including ChatGPT. Worries about academic dishonesty, plagiarism, and mis/disinformation have led many high schools and colleges to ban the use of ChatGPT . Some schools have begun using AI-detection software for assignment submissions, and some have gone so far as to block students from using ChatGPT on their internet networks. 

It’s likely that schools will begin revising their academic honesty and plagiarism policies to address the use of AI tools like ChatGPT. You’ll want to stay up-to-date with your schools’ policies. 


ChatGPT is pretty amazing...but it's not a great tool for writing college essays. Here's why.

ChatGPT: College Admissions and Entrance Essays

College admissions essays—also called personal statements—ask students to explore important events, experiences, and ideas from their lives. A great entrance essay will explain what makes you you !  

ChatGPT is a machine that doesn’t know and can’t understand your experiences. That means using ChatGPT to write your admissions essays isn’t just unethical. It actually puts you at a disadvantage because ChatGPT can’t adequately showcase what it means to be you. 

Let’s take a look at four ways ChatGPT negatively impacts college admissions essays.

#1: ChatGPT Lacks Insight

We recommend students use unexpected or slightly unusual topics because they help admissions committees learn more about you and what makes you unique. The chatbot doesn’t know any of that, so nothing ChatGPT writes can accurately reflect your experience, passions, or goals for the future.

Because ChatGPT will make guesses about who you are, it won’t be able to share what makes you unique in a way that resonates with readers. And since that’s what admissions counselors care about, a ChatGPT college essay could negatively impact an otherwise strong application.  

#2: ChatGPT Might Plagiarize 

Writing about experiences that many other people have had isn’t a very strong approach to take for entrance essays . After all, you don’t want to blend in—you want to stand out! 

If you write your essay yourself and include key details about your past experiences and future goals, there’s little risk that you’ll write the same essay as someone else. But if you use ChatGPT—who’s to say someone else won’t, too? Since ChatGPT uses predictive guesses to write essays, there’s a good chance the text it uses in your essay already appeared in someone else’s.  

Additionally, ChatGPT learns from every single interaction it has. So even if your essay isn’t plagiarized, it’s now in the system. That means the next person who uses ChatGPT to write their essay may end up with yours. You’ll still be on the hook for submitting a ChatGPT college essay, and someone else will be in trouble, too.

#3: ChatGPT Doesn’t Understand Emotion 

Keep in mind that ChatGPT can’t experience or imitate emotions, and so its writing samples lack, well, a human touch ! 

A great entrance essay will explore experiences or topics you’re genuinely excited about or proud of . This is your chance to show your chosen schools what you’ve accomplished and how you’ll continue growing and learning, and an essay without emotion would be odd considering that these should be real, lived experiences and passions you have!

#4: ChatGPT Produced Mediocre Results

If you’re still curious what would happen if you submitted a ChatGPT college essay with your application, you’re in luck. Both Business Insider and Forbes asked ChatGPT to write a couple of college entrance essays, and then they sent them to college admissions readers to get their thoughts. 

The readers agreed that the essays would probably pass as being written by real students—assuming admissions committees didn’t use AI detection software—but that they both were about what a “very mediocre, perhaps even a middle school, student would produce.” The admissions professionals agreed that the essays probably wouldn’t perform very well with entrance committees, especially at more selective schools.  

That’s not exactly the reaction you want when an admission committee reads your application materials! So, when it comes to ChatGPT college admissions, it’s best to steer clear and write your admission materials by yourself. 


Can Colleges Detect ChatGPT?

We’ve already explained why it’s not a great idea to use ChatGPT to write your college essays and applications , but you may still be wondering: can colleges detect ChatGPT? 

In short, yes, they can! 

Software Can Detect ChatGPT

As the technology that enables academic dishonesty, plagiarism, and mis/disinformation improves, so does the software that detects it. For instance, OpenAI, the same company that built ChatGPT, is working on a text classifier that can tell the difference between AI-written text and human-written text.

Turnitin, one of the most popular plagiarism detectors used by high schools and universities, also recently developed the AI Innovation Lab, a detection tool designed to flag submissions written with AI tools like ChatGPT. Turnitin says this tool detects AI writing with 98% confidence.

Plagiarism and AI companies aren’t the only ones interested in AI-detection software. A 22-year-old computer science student at Princeton created an app to detect ChatGPT writing, called GPTZero. This software works by measuring the complexity of ideas and the variety of sentence structures.
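As a rough, hypothetical illustration of the "variety of sentence structures" signal, the sketch below scores a text by the spread of its sentence lengths. Real detectors use far more sophisticated measures, so treat this only as intuition for why uniform writing can stand out:

```python
import statistics

def burstiness(text):
    """Crude proxy for sentence variety: the population standard
    deviation of sentence lengths, in words. Human writing tends to
    mix short and long sentences; flat, uniform lengths are one weak
    signal of machine-generated text."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "This is a sentence. Here is another one. This one is similar too."
varied = ("No. But when I finally sat down to write, the words poured "
          "out of me in a way I had never expected.")
print(burstiness(uniform) < burstiness(varied))  # True
```

A low score does not prove AI authorship; it is one of many weak signals such tools combine.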

Human Readers Can Detect ChatGPT 

It’s also worth keeping in mind that teachers can spot the use of ChatGPT themselves, even if it isn’t confirmed by a software detector. For example, if you’ve already turned in one or two essays to your teacher, they’re probably familiar with your unique writing style. If you submit a college essay draft that uses totally different vocabulary, sentence structures, and figures of speech, your teacher will likely take note.

Additionally, admissions committees and readers may be able to spot ChatGPT writing, too. ChatGPT (and AI writing in general) uses simpler sentence structures with less variation, which could make it easier to tell if you’ve submitted a ChatGPT college essay. These professionals also read thousands of essays every year, which means they know what a typical essay reads like. You want your college essay to catch their attention…but not because you used AI software!


If you use ChatGPT responsibly, you can be as happy as these kids.

Pros and Cons of ChatGPT: College Admissions Edition

ChatGPT is a brand-new technology, which means we’re still learning about the ways it can benefit us. It’s important to think about the pros and cons of any new tool…and that includes artificial intelligence!

Let’s look at some of the good—and not-so-good—aspects of ChatGPT below. 

ChatGPT: The Good

It may seem like we’re focused on just the negatives of using ChatGPT in this article, but we’re willing to admit that the chatbot isn’t all bad. In fact, it can be a very useful tool for learning if used responsibly !

Like we already mentioned, students shouldn’t use ChatGPT to write entire essays or assignments. They can use it, though, as a learning tool alongside their own critical thinking and writing skills.

Students can use ChatGPT responsibly to:

  • Learn more about a topic . It’s a great place to get started for general knowledge and ideas about most subjects.
  • Find reputable and relevant sources on a topic. Students can ask ChatGPT for names and information about leading scholars, relevant websites and databases, and more. 
  • Brainstorm ideas for assignments. Students can share the ideas they already have with ChatGPT, and in return, the chatbot can suggest ideas for further exploration and even organization of their points.
  • Check work (that they’ve written themselves!) for errors or clarity. This is similar to how spell- and grammar-checking software is used. ChatGPT may be even better than some competitors for this, because students can actually ask ChatGPT to explain the errors and their solutions—not just to fix them.

Before you use ChatGPT—even for the tasks mentioned above—you should talk to your teacher or school about their AI and academic dishonesty policies. It’s also a good idea to include an acknowledgement that you used ChatGPT with an explanation of its use. 


This guy made some bad decisions using ChatGPT. Don't be this guy.

ChatGPT: The Bad

The first model of ChatGPT (GPT-3.5) was formally introduced to the public in November 2022, and the newer model (GPT-4) in March 2023. So, it’s still very new and there’s a lot of room for improvement .  

There are many misconceptions about ChatGPT. One of the most extreme is that the AI is all-knowing and can make its own decisions. Another is that ChatGPT is a search engine that, when asked a question, can just surf the web for timely, relevant resources and give you all of that information. Both of these beliefs are incorrect because ChatGPT is limited to the information it’s been given by OpenAI . 

Remember how the ‘PT’ in ChatGPT stands for “Pre-trained”? That means that every time OpenAI gives ChatGPT an update, it’s given more information to work with (and so it has more information to share with you). In other words, it’s “trained” on information so it can give you the most accurate and relevant responses possible—but that information can be limited and biased . Ultimately, humans at OpenAI decide what pieces of information to share with ChatGPT, so it’s only as accurate and reliable as the sources it has access to.

For example, if you were to ask ChatGPT-3.5 what notable headlines made the news last week, it would respond that it doesn’t have access to that information because its most recent update was in September 2021!

You’re probably already familiar with how easy it is to come across misinformation (misleading and untrue information) on the internet. Since ChatGPT can’t tell the difference between what is true and what isn’t, it’s up to the humans at OpenAI to make sure only accurate and true information is given to the chatbot. This leaves room for human error, and users of ChatGPT have to keep that in mind when using and learning from the chatbot.

These are just the most obvious problems with ChatGPT. Some other problems with the chatbot include:

  • A lack of common sense. ChatGPT can produce seemingly sensible responses to many questions and topics, but it doesn’t have common sense or complete background knowledge.
  • A lack of empathy. ChatGPT doesn’t have emotions, so it can’t understand them, either. 
  • An inability to make decisions or problem solve . While the chatbot can complete basic tasks like answering questions or giving recommendations, it can’t solve complex tasks. 

While there are some great uses for ChatGPT, it’s certainly not without its flaws.


Our bootcamp can help you put together amazing college essays that help you get into your dream schools—no AI necessary.

What Other Tools and Strategies Can Help Students Besides ChatGPT?

While it’s not a good idea to use ChatGPT for college admissions materials, it’s not the only tool available to help students with college essays and assignments.

One of the best strategies students can use to write good essays is to make sure they give themselves plenty of time for the assignment. The writing process includes much more than just drafting! Having time to brainstorm ideas, write out a draft, revise it for clarity and completeness, and polish it makes for a much stronger essay. 

Teachers are another great resource students can use, especially for college application essays. Asking a teacher (or two!) for feedback can really help students improve the focus, clarity, and correctness of an essay. It’s also a more interactive way to learn—being able to sit down with a teacher to talk about their feedback can be much more engaging than using other tools.

Using expert resources during the essay writing process can make a big difference, too. Our article outlines a complete list of strategies for students writing college admission essays. It breaks down what the Common Application essay is, gives tips for choosing the best essay topic, offers strategies for staying focused and being specific, and more.

You can also get help from people who know the college admissions process best, like former admissions counselors. PrepScholar’s Admissions Bootcamp guides you through the entire application process , and you’ll get insider tips and tricks from real-life admissions counselors that’ll make your applications stand out. Even better, our bootcamp includes step-by-step essay writing guidance, so you can get the help you need to make sure your essay is perfect.

If you’re hoping for more technological help, Grammarly is another AI tool that can check writing for correctness. It can correct things like misused and misspelled words and grammar mistakes, and it can improve your tone and style. 

It’s also widely available across multiple platforms through a Windows desktop app, an Android and iOS app, and a Google Chrome extension. And since Grammarly just checks your writing without doing any of the work for you, it’s totally safe to use on your college essays. 

The Bottom Line: ChatGPT College Admissions and Essays

ChatGPT will continue to be a popular discussion topic as it continues evolving. You can expect your chosen schools to address ChatGPT and other AI tools in their academic honesty and plagiarism policies in the near future—and maybe even to restrict or ban the use of the chatbot for school admissions and assignments.

As AI continues transforming, so will AI-detection. The goal is to make sure that AI is used responsibly by students so that they’re avoiding plagiarism and building their research, writing, and critical thinking skills. There are some great uses for ChatGPT when used responsibly, but you should always check with your teachers and schools beforehand.

ChatGPT’s “bad” aspects still need improving, and that’s going to take some time. Be aware that the chatbot isn’t even close to perfect, and it needs to be fact-checked just like other sources of information.

As with other school assignments, don’t submit a ChatGPT college essay for college applications, either. College entrance essays should outline unique and interesting personal experiences and ideas, and those can only come from you.

Just because ChatGPT isn’t a good idea doesn’t mean there aren’t resources to help you put together a great college essay. There are many other tools and strategies you can use instead of ChatGPT , many of which have been around for longer and offer better feedback. 


What’s Next?

Ready to write your college essays the old-fashioned way? Start here with our comprehensive guide to the admissions essays.  

Most students have to submit essays as part of their Common Application . Here's a complete breakdown of the Common App prompts —and how to answer them.

The most common type of essay answers the "why this college?" prompt. We've got an expert breakdown that shows you how to write a killer response , step by step. 

Want to write the perfect college application essay? We can help. Your dedicated PrepScholar Admissions counselor will help you craft your perfect college essay, from the ground up. We learn your background and interests, brainstorm essay topics, and walk you through the essay drafting process, step-by-step. At the end, you'll have a unique essay to proudly submit to colleges. Don't leave your college application to chance. Find out more about PrepScholar Admissions now:


Ashley Sufflé Robinson has a Ph.D. in 19th Century English Literature. As a content writer for PrepScholar, Ashley is passionate about giving college-bound students the in-depth information they need to get into the school of their dreams.



  • Open access
  • Published: 30 October 2023

A large-scale comparison of human-written versus ChatGPT-generated essays

  • Steffen Herbold 1 ,
  • Annette Hautli-Janisz 1 ,
  • Ute Heuer 1 ,
  • Zlata Kikteva 1 &
  • Alexander Trautsch 1  

Scientific Reports volume 13, Article number: 18617 (2023)

25k Accesses | 43 Citations | 98 Altmetric

  • Computer science
  • Information technology

ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models—both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher regarding quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.


Introduction.

The massive uptake in the development and deployment of large-scale Natural Language Generation (NLG) systems in recent months has yielded an almost unprecedented worldwide discussion of the future of society. The ChatGPT service, which serves as a Web front-end to GPT-3.5 [1] and GPT-4, was the fastest-growing service in history to break the 100-million-user milestone in January and had 1 billion visits by February 2023 [2].

Driven by the upheaval that is particularly anticipated for education [3] and knowledge transfer for future generations, we conduct the first independent, systematic study of AI-generated language content that is typically dealt with in high-school education: argumentative essays, i.e. essays in which students discuss a position on a controversial topic by collecting and reflecting on evidence (e.g. ‘Should students be taught to cooperate or compete?’). Learning to write such essays is a crucial aspect of education, as students learn to systematically assess and reflect on a problem from different perspectives. Understanding the capability of generative AI to perform this task increases our understanding of the skills of the models, as well as of the challenges educators face when it comes to teaching this crucial skill. While there is a multitude of individual examples and anecdotal evidence for the quality of AI-generated content in this genre (e.g. [4]), this paper is the first to systematically assess the quality of human-written and AI-generated argumentative texts across different versions of ChatGPT [5]. We use a fine-grained essay quality scoring rubric based on content and language mastery and employ a significant pool of domain experts, i.e. high school teachers across disciplines, to perform the evaluation. Using computational linguistic methods and rigorous statistical analysis, we arrive at several key findings:

AI models generate significantly higher-quality argumentative essays than the users of an essay-writing online forum frequented by German high-school students across all criteria in our scoring rubric.

ChatGPT-4 (ChatGPT web interface with the GPT-4 model) significantly outperforms ChatGPT-3 (ChatGPT web interface with the GPT-3.5 default model) with respect to logical structure, language complexity, vocabulary richness and text linking.

Writing styles between humans and generative AI models differ significantly: for instance, the GPT models use more nominalizations and have higher sentence complexity (signaling more complex, ‘scientific’, language), whereas the students make more use of modal and epistemic constructions (which tend to convey speaker attitude).

The linguistic diversity of the NLG models seems to be improving over time: while ChatGPT-3 still has a significantly lower linguistic diversity than humans, ChatGPT-4 has a significantly higher diversity than the students.
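One common, simple proxy for linguistic diversity is the type-token ratio; the study itself uses more sophisticated metrics, so the following is only an illustrative sketch of what such a measure captures:

```python
def type_token_ratio(text):
    """Type-token ratio: distinct words (types) divided by total words
    (tokens). Higher values indicate a more varied vocabulary. Note
    that TTR is length-sensitive, so comparisons should use texts of
    similar size."""
    tokens = [w.strip(".,;:!?\"'").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    return len(set(tokens)) / len(tokens)

repetitive = "the cat saw the cat and the cat ran"
varied = "the cat saw a dog and quickly ran away"
print(type_token_ratio(repetitive))  # 5 distinct words out of 9
print(type_token_ratio(varied))      # all 9 words distinct: 1.0
```

Under a measure like this, the finding above would correspond to ChatGPT-3 scoring below the student essays and ChatGPT-4 scoring above them, on length-matched samples.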

Our work goes significantly beyond existing benchmarks. While OpenAI’s technical report on GPT-4 [6] presents some benchmarks, their evaluation lacks scientific rigor: it fails to provide vital information like the agreement between raters, does not report on details regarding the criteria for assessment, or to what extent and how a statistical analysis was conducted for a larger sample of essays. In contrast, our benchmark provides the first (statistically) rigorous and systematic study of essay quality, paired with a computational linguistic analysis of the language employed by humans and two different versions of ChatGPT, offering a glance at how these NLG models develop over time. While our work is focused on argumentative essays in education, the genre is also relevant beyond education. In general, studying argumentative essays is one important aspect to understand how good generative AI models are at conveying arguments and, consequently, persuasive writing in general.

Related work

Natural language generation.

The recent interest in generative AI models can be largely attributed to the public release of ChatGPT, an interactive chat interface based on the InstructGPT 1 model, more commonly referred to as GPT-3.5. In comparison to the original GPT-3 7 and other similar generative large language models based on the transformer architecture, like GPT-J 8 , this model was not trained in a purely self-supervised manner (e.g. through masked language modeling). Instead, a pipeline that involved human-written content was used to fine-tune the model and improve the quality of the outputs, both to mitigate biases and safety issues and to make the generated text more similar to text written by humans. Such models are referred to as Fine-tuned LAnguage Nets (FLANs). For details on their training, we refer to the literature 9 . Notably, this process was recently reproduced with publicly available models such as Alpaca 10 and Dolly (i.e. the complete models can be downloaded and not just accessed through an API). However, we can only assume that a similar process was used for the training of GPT-4, since the paper by OpenAI does not include any details on model training.

Testing of the language competency of large-scale NLG systems has only recently started. Cai et al. 11 show that ChatGPT reuses sentence structure, accesses the intended meaning of an ambiguous word, and identifies the thematic structure of a verb and its arguments, replicating human language use. Mahowald 12 compares ChatGPT’s acceptability judgments to human judgments on the Article + Adjective + Numeral + Noun construction in English. Dentella et al. 13 show that ChatGPT-3 fails to understand low-frequency grammatical constructions like complex nested hierarchies and self-embeddings. In another recent line of research, the structure of automatically generated language is evaluated. Guo et al. 14 show that in question-answer scenarios, ChatGPT-3 uses different linguistic devices than humans. Zhao et al. 15 show that ChatGPT generates longer and more diverse responses when the user is in an apparently negative emotional state.

Given that we aim to identify certain linguistic characteristics of human-written versus AI-generated content, we also draw on related work in the field of linguistic fingerprinting, which assumes that each human has a unique way of using language to express themselves, i.e. the linguistic means that are employed to communicate thoughts, opinions and ideas differ between humans. That these properties can be identified with computational linguistic means has been showcased across different tasks: the computation of a linguistic fingerprint makes it possible to distinguish authors of literary works 16 , to identify speaker profiles in large public debates 17 , 18 , 19 , 20 and to provide data for forensic voice comparison in broadcast debates 21 , 22 . For educational purposes, linguistic features are used to measure essay readability 23 , essay cohesion 24 and language performance scores for essay grading 25 . Integrating linguistic fingerprints also yields performance advantages for classification tasks, for instance in predicting user opinion 26 , 27 and identifying individual users 28 .

Limitations of OpenAI's ChatGPT evaluations

OpenAI published a discussion of the model’s performance on several tasks, including Advanced Placement (AP) classes within the US educational system 6 . The subjects used in performance evaluation are diverse and include arts, history, English literature, calculus, statistics, physics, chemistry, economics, and US politics. While the models achieved good or very good marks in most subjects, they did not perform well in English literature. GPT-3.5 also experienced problems with chemistry, macroeconomics, physics, and statistics. While the overall results are impressive, there are several significant issues: firstly, the conflict of interest of the model’s owners poses a problem for the performance interpretation. Secondly, there are issues with the soundness of the assessment beyond the conflict of interest, which make the generalizability of the results hard to assess with respect to the models’ capability to write essays. Notably, the AP exams combine multiple-choice questions with free-text answers. Only the aggregated scores are publicly available. To the best of our knowledge, neither the generated free-text answers, their overall assessment, nor their assessment given specific criteria from the used judgment rubric are published. Thirdly, while the paper states that 1–2 qualified third-party contractors participated in the rating of the free-text answers, it is unclear how often multiple ratings were generated for the same answer and what the agreement between them was. This lack of information hinders a scientifically sound judgement regarding the capabilities of these models in general, but also specifically for essays. Lastly, the owners of the model conducted their study in a few-shot prompt setting, where they gave the models a very structured template as well as an example of a human-written high-quality essay to guide the generation of the answers. This further fine-tuning of what the models generate could have also influenced the output.
The results published by the owners go beyond the AP courses, which are directly comparable to our work, and also consider other student assessments like Graduate Record Examinations (GREs). However, these evaluations suffer from the same problems with scientific rigor as the AP classes.

Scientific assessment of ChatGPT

Researchers across the globe are currently assessing the individual capabilities of these models with greater scientific rigor. We note that due to the recency and speed of these developments, the hereafter discussed literature has mostly only been published as pre-prints and has not yet been peer-reviewed. In addition to the above issues concretely related to the assessment of the capabilities to generate student essays, it is also worth noting that there are likely large problems with the trustworthiness of evaluations because of data contamination, i.e. benchmark tasks being part of the training data of the model, which enables memorization. For example, Aiyappa et al. 29 find evidence that this is likely the case for benchmark results regarding NLP tasks. This complicates the effort by researchers to assess the capabilities of the models beyond memorization.

Nevertheless, the first assessment results are already available – though mostly focused on ChatGPT-3 and not yet ChatGPT-4. Closest to our work is a study by Yeadon et al. 30 , who also investigate ChatGPT-3 performance when writing essays. They grade essays generated by ChatGPT-3 for five physics questions based on criteria that cover academic content, appreciation of the underlying physics, grasp of subject material, addressing the topic, and writing style. For each question, ten essays were generated and rated independently by five researchers. While the sample size precludes a statistical assessment, the results demonstrate that the AI model is capable of writing high-quality physics essays, but that the quality varies in a manner similar to human-written essays.

Guo et al. 14 create a set of free-text question answering tasks based on data they collected from the internet, e.g. question answering from Reddit. The authors then sample thirty triplets of a question, a human answer, and a ChatGPT-3 generated answer and ask human raters to assess whether they can detect which was written by a human, and which was written by an AI. While this approach does not directly assess the quality of the output, it serves as a Turing test 31 designed to evaluate whether humans can distinguish between human- and AI-produced output. The results indicate that humans are in fact able to distinguish between the outputs when presented with a pair of answers. Humans familiar with ChatGPT are also able to identify over 80% of AI-generated answers without seeing a human answer in comparison. However, humans who are not yet familiar with ChatGPT-3 fail to identify AI-written answers about 50% of the time. Moreover, the authors also find that the AI-generated outputs are deemed to be more helpful than the human answers in slightly more than half of the cases. This suggests that the strong results from OpenAI’s own benchmarks regarding the capabilities to generate free-text answers generalize beyond the benchmarks.

There are, however, some indicators that the benchmarks may be overly optimistic in their assessment of the model’s capabilities. For example, Kortemeyer 32 conducts a case study to assess how well ChatGPT-3 would perform in a physics class, simulating the tasks that students need to complete as part of the course: answer multiple-choice questions, do homework assignments, ask questions during a lesson, complete programming exercises, and write exams with free-text questions. Notably, ChatGPT-3 was allowed to interact with the instructor for many of the tasks, allowing for multiple attempts as well as feedback on preliminary solutions. The experiment shows that ChatGPT-3’s performance is in many aspects similar to that of the beginning learners and that the model makes similar mistakes, such as omitting units or simply plugging in results from equations. Overall, the AI would have passed the course with a low score of 1.5 out of 4.0. Similarly, Kung et al. 33 study the performance of ChatGPT-3 in the United States Medical Licensing Exam (USMLE) and find that the model performs at or near the passing threshold. Their assessment is a bit more optimistic than Kortemeyer’s as they state that this level of performance, comprehensible reasoning and valid clinical insights suggest that models such as ChatGPT may potentially assist human learning in clinical decision making.

Frieder et al. 34 evaluate the capabilities of ChatGPT-3 in solving graduate-level mathematical tasks. They find that while ChatGPT-3 seems to have some mathematical understanding, its level is well below that of an average student and in most cases is not sufficient to pass exams. Yuan et al. 35 consider the arithmetic abilities of language models, including ChatGPT-3 and ChatGPT-4. They find that these models exhibit the best performance among currently available language models (incl. Llama 36 , FLAN-T5 37 , and Bloom 38 ). However, the accuracy of basic arithmetic tasks is still only at 83% when considering correctness to the degree of \(10^{-3}\) , i.e. such models are still not capable of functioning reliably as calculators. In a slightly satirical, yet insightful take, Spencer et al. 39 assess what a scientific paper on gamma-ray astrophysics would look like if it were written largely with the assistance of ChatGPT-3. They find that while the language capabilities are good and the model is capable of generating equations, the arguments are often flawed and the references to scientific literature are full of hallucinations.

The general reasoning skills of the models may also not be at the level expected from the benchmarks. For example, Cherian et al. 40 evaluate how well ChatGPT-3 performs on eleven puzzles that second graders should be able to solve and find that ChatGPT is only able to solve them on average in 36.4% of attempts, whereas the second graders achieve a mean of 60.4%. However, their sample size is very small and the problem was posed as a multiple-choice question answering problem, which cannot be directly compared to the NLG we consider.

Research gap

Within this article, we address an important part of the current research gap regarding the capabilities of ChatGPT (and similar technologies), guided by the following research questions:

RQ1: How good is ChatGPT based on GPT-3 and GPT-4 at writing argumentative student essays?

RQ2: How do AI-generated essays compare to essays written by students?

RQ3: What are linguistic devices that are characteristic of student versus AI-generated content?

We study these aspects with the help of a large group of teaching professionals who systematically assess a large corpus of student essays. To the best of our knowledge, this is the first large-scale, independent scientific assessment of ChatGPT (or similar models) of this kind. Answering these questions is crucial to understanding the impact of ChatGPT on the future of education.

Materials and methods

The essay topics originate from a corpus of argumentative essays in the field of argument mining 41 . Argumentative essays require students to think critically about a topic and use evidence to establish a position on the topic in a concise manner. The corpus features essays for 90 topics from Essay Forum 42 , an active community for providing writing feedback on different kinds of text that is frequented by high-school students seeking feedback from native speakers on their essay-writing capabilities. Information about the age of the writers is not available, but the topics indicate that the essays were written in grades 11–13, indicating that the authors were likely at least 16. Topics range from ‘Should students be taught to cooperate or to compete?’ to ‘Will newspapers become a thing of the past?’. In the corpus, each topic features one human-written essay uploaded and discussed in the forum. The students who wrote the essays are not native speakers. The average length of these essays is 19 sentences with 388 tokens (an average of 2,089 characters); we refer to them as ‘student essays’ in the remainder of the paper.

For the present study, we use the topics from Stab and Gurevych 41 and prompt ChatGPT with ‘Write an essay with about 200 words on “[ topic ]”’ to receive automatically-generated essays from the ChatGPT-3 and ChatGPT-4 versions from 22 March 2023 (‘ChatGPT-3 essays’, ‘ChatGPT-4 essays’). No additional prompts for getting the responses were used, i.e. the data was created with a basic prompt in a zero-shot scenario. This is in contrast to the benchmarks by OpenAI, who used an engineered prompt in a few-shot scenario to guide the generation of essays. We decided to ask for 200 words because we noticed a tendency of ChatGPT to generate essays that are longer than the requested length: a prompt asking for 300 words typically yielded essays with more than 400 words. Thus, using the shorter length of 200, we prevent a potential advantage for ChatGPT through longer essays, and instead err on the side of brevity. Similar to the evaluations of free-text answers by OpenAI, we did not consider multiple configurations of the model due to the effort required to obtain human judgments. For the same reason, our data is restricted to ChatGPT and does not include other models available at that time, e.g. Alpaca. We use the browser versions of the tools because we consider this to be a more realistic scenario than using the API. Table 1 below shows the core statistics of the resulting dataset. Supplemental material S1 shows examples for essays from the data set.
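The zero-shot prompting protocol can be sketched as follows. The two topics shown are examples from the corpus; the function name `build_prompts` is purely illustrative, and note that the study used the browser interface rather than scripted API calls:

```python
# Zero-shot prompt template as described above; no system instructions,
# no example essays (in contrast to OpenAI's few-shot benchmarks).
PROMPT_TEMPLATE = 'Write an essay with about 200 words on "{topic}"'

def build_prompts(topics):
    """Return one zero-shot prompt per essay topic."""
    return [PROMPT_TEMPLATE.format(topic=t) for t in topics]

# Two of the 90 topics from the Stab and Gurevych corpus:
topics = [
    "Should students be taught to cooperate or to compete?",
    "Will newspapers become a thing of the past?",
]
for prompt in build_prompts(topics):
    print(prompt)
```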

Annotation study

Study participants.

The participants had registered for a two-hour online training entitled ‘ChatGPT – Challenges and Opportunities’ conducted by the authors of this paper as a means to provide teachers with some of the technological background of NLG systems in general and ChatGPT in particular. Only teachers permanently employed at secondary schools were allowed to register for this training. Focusing on these experts alone allows us to receive meaningful results as those participants have a wide range of experience in assessing students’ writing. A total of 139 teachers registered for the training, 129 of them teach at grammar schools, and only 10 teachers hold a position at other secondary schools. About half of the registered teachers (68 teachers) have been in service for many years and have successfully applied for promotion. For data protection reasons, we do not know the subject combinations of the registered teachers. We only know that a variety of subjects are represented, including languages (English, French and German), religion/ethics, and science. Supplemental material S5 provides some general information regarding German teacher qualifications.

The training began with an online lecture followed by a discussion phase. Teachers were given an overview of language models and basic information on how ChatGPT was developed. After about 45 minutes, the teachers received both a written and an oral explanation of the questionnaire at the core of our study (see Supplementary material S3 ) and were informed that they had 30 minutes to finish the study tasks. The explanation included information on how the data was obtained, why we collect the self-assessment, how we chose the criteria for the rating of the essays, the overall goal of our research, and a walk-through of the questionnaire. Participation in the questionnaire was voluntary and did not affect the awarding of a training certificate. We further informed participants that all data was collected anonymously and that we would have no way of identifying who participated in the questionnaire. We orally informed participants that by participating in the survey they consent to the use of the provided ratings for our research.

Once these instructions were provided orally and in writing, the link to the online form was given to the participants. The online form was running on a local server that did not log any information that could identify the participants (e.g. IP address) to ensure anonymity. As per instructions, consent for participation was given by using the online form. Due to the full anonymity, we could by definition not document who exactly provided the consent. This was implemented as further insurance that non-participation could not possibly affect being awarded the training certificate.

About 20% of the training participants did not take part in the questionnaire study, the remaining participants consented based on the information provided and participated in the rating of essays. After the questionnaire, we continued with an online lecture on the opportunities of using ChatGPT for teaching as well as AI beyond chatbots. The study protocol was reviewed and approved by the Research Ethics Committee of the University of Passau. We further confirm that our study protocol is in accordance with all relevant guidelines.

Questionnaire

The questionnaire consists of three parts: first, a brief self-assessment regarding the English skills of the participants which is based on the Common European Framework of Reference for Languages (CEFR) 43 . We have six levels ranging from ‘comparable to a native speaker’ to ‘some basic skills’ (see supplementary material S3 ). Then each participant was shown six essays. The participants were only shown the generated text and were not provided with information on whether the text was human-written or AI-generated.

The questionnaire covers the seven categories relevant for essay assessment shown below (for details see supplementary material S3 ):

Topic and completeness

Logic and composition

Expressiveness and comprehensiveness

Language mastery

Complexity

Vocabulary and text linking

Language constructs

These categories are based on the guidelines for essay assessment 44 established by the Ministry for Education of Lower Saxony, Germany. For each criterion, a seven-point Likert scale with scores from zero to six is defined, where zero is the worst score (e.g. no relation to the topic) and six is the best score (e.g. addressed the topic to a special degree). The questionnaire included a written description as guidance for the scoring.

After rating each essay, the participants were also asked to self-assess their confidence in the ratings. We used a five-point Likert scale based on the criteria for the self-assessment of peer-review scores from the Association for Computational Linguistics (ACL). Once a participant finished rating the six essays, they were shown a summary of their ratings, as well as the individual ratings for each of their essays and the information on how the essay was generated.

Computational linguistic analysis

In order to further explore and compare the quality of the essays written by students and ChatGPT, we consider the following linguistic characteristics: lexical diversity, sentence complexity (measured in two ways), nominalization, and the presence of modals, epistemic markers and discourse markers. These are motivated by previous work: Weiss et al. 25 observe a correlation between measures of lexical, syntactic and discourse complexity and the essay gradings of German high-school examinations, while McNamara et al. 45 explore cohesion (indicated, among other things, by connectives), syntactic complexity and lexical diversity in relation to essay scoring.

Lexical diversity

We identify vocabulary richness using the measure of textual lexical diversity (MTLD) 46 , a well-established metric often used in the field of automated essay grading 25 , 45 , 47 . It takes into account the number of unique words but, unlike the best-known measure of lexical diversity, the type-token ratio (TTR), it is not as sensitive to differences in text length. In fact, Koizumi and In’nami 48 find it to be the measure of lexical diversity least affected by differences in text length. This is relevant to us due to the difference in average length between the human-written and ChatGPT-generated essays.
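To make the metric concrete, the following is a minimal plain-Python sketch of the MTLD computation: the text is traversed token by token, a new ‘factor’ is counted whenever the running TTR drops below the conventional threshold of 0.72 (with a partial factor for the remainder), and the final score averages a forward and a backward pass. This illustrates the published algorithm; it is not the exact implementation used in the study.

```python
def mtld_one_direction(tokens, threshold=0.72):
    """One directional pass of MTLD (McCarthy & Jarvis)."""
    factors = 0.0
    types = set()
    token_count = 0
    ttr = 1.0
    for tok in tokens:
        token_count += 1
        types.add(tok)
        ttr = len(types) / token_count
        if ttr < threshold:
            # TTR fell below the threshold: count a full factor and reset.
            factors += 1.0
            types = set()
            token_count = 0
            ttr = 1.0
    if token_count > 0:
        # Partial factor for the unfinished final segment.
        factors += (1.0 - ttr) / (1.0 - threshold)
    if factors == 0.0:
        # Degenerate case: TTR never dropped (e.g. all tokens unique).
        return float(len(tokens))
    return len(tokens) / factors

def mtld(tokens, threshold=0.72):
    """MTLD: mean of the forward and backward passes."""
    forward = mtld_one_direction(tokens, threshold)
    backward = mtld_one_direction(list(reversed(tokens)), threshold)
    return (forward + backward) / 2.0
```

A maximally repetitive text (one token repeated) yields a very low MTLD, while a text of entirely unique tokens yields a score equal to its length, matching the intuition that higher values mean richer vocabulary.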

Syntactic complexity

We use two measures in order to evaluate the syntactic complexity of the essays. One is based on the maximum depth of the sentence dependency tree, which is produced using the spaCy 3.4.2 dependency parser 49 (‘Syntactic complexity (depth)’). For the second measure, we adopt an approach similar in nature to the one by Weiss et al. 25 , who use clause structure to evaluate syntactic complexity. In our case, we count the number of conjuncts, clausal modifiers of nouns, adverbial clause modifiers, clausal complements, clausal subjects, and parataxes (‘Syntactic complexity (clauses)’). Supplementary material S2 shows the difference in sentence complexity based on two examples from the data.
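Both measures can be sketched independently of the parser. Assuming a parse is represented as a list of head indices (with the root pointing to itself, as a dependency parser such as spaCy can provide) and a list of dependency labels, the two scores reduce to the following; the label set mirrors the clause-level relations listed above:

```python
# Clause-level dependency relations named above (spaCy label names):
# conjuncts, clausal modifiers of nouns, adverbial clause modifiers,
# clausal complements, clausal subjects, parataxis.
CLAUSE_DEPS = {"conj", "acl", "advcl", "ccomp", "csubj", "parataxis"}

def max_dependency_depth(heads):
    """'Syntactic complexity (depth)': maximum depth of the dependency
    tree, where heads[i] is the index of token i's head and the root
    points to itself."""
    max_depth = 0
    for i in range(len(heads)):
        depth, node = 1, i
        while heads[node] != node:  # walk up to the root
            node = heads[node]
            depth += 1
        max_depth = max(max_depth, depth)
    return max_depth

def clause_complexity(dep_labels):
    """'Syntactic complexity (clauses)': count of clause-level relations."""
    return sum(1 for label in dep_labels if label in CLAUSE_DEPS)
```

For ‘She left’ with ‘left’ as root, the head list is `[1, 1]` and the depth is 2; deeper nesting (relative clauses inside complements, etc.) increases the score accordingly.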

Nominalization is a common feature of a more scientific style of writing 50 and is used as an additional measure for syntactic complexity. In order to explore this feature, we count occurrences of nouns with suffixes such as ‘-ion’, ‘-ment’, ‘-ance’ and a few others which are known to transform verbs into nouns.
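The suffix-based count can be sketched as follows. The paper names ‘-ion’, ‘-ment’ and ‘-ance’ explicitly; ‘-ence’ is an assumed member of the unspecified ‘few others’, so the exact suffix list here is illustrative:

```python
# '-ion', '-ment', '-ance' are named in the text; '-ence' is assumed.
NOMINALIZATION_SUFFIXES = ("ion", "ment", "ance", "ence")

def count_nominalizations(nouns):
    """Count nouns carrying a typical verb-to-noun suffix.
    `nouns` is assumed to be pre-filtered by POS tag (e.g. via spaCy)."""
    return sum(1 for n in nouns if n.lower().endswith(NOMINALIZATION_SUFFIXES))
```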

Semantic properties

Both modals and epistemic markers signal the commitment of the writer to their statement. We identify modals using the POS-tagging module provided by spaCy as well as a list of epistemic expressions of modality, such as ‘definitely’ and ‘potentially’, also used in other approaches to identifying semantic properties 51 . For epistemic markers we adopt an empirically-driven approach and utilize the epistemic markers identified in a corpus of dialogical argumentation by Hautli-Janisz et al. 52 . We consider expressions such as ‘I think’, ‘it is believed’ and ‘in my opinion’ to be epistemic.
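A simplified version of both counts is shown below, with the English modal verbs enumerated directly instead of relying on spaCy’s POS tags, and with only a handful of epistemic markers standing in for the empirically derived list of Hautli-Janisz et al. (which is not reproduced here):

```python
MODAL_VERBS = {"can", "could", "may", "might", "must",
               "shall", "should", "will", "would"}

# Illustrative subset; the study uses the full marker list of
# Hautli-Janisz et al. plus expressions such as 'definitely'/'potentially'.
EPISTEMIC_MARKERS = ("i think", "i believe", "in my opinion",
                     "it is believed", "definitely", "potentially")

def count_modals(tokens):
    """Count modal verbs in a tokenized essay."""
    return sum(1 for t in tokens if t.lower() in MODAL_VERBS)

def count_epistemic_markers(text):
    """Count (possibly multi-word) epistemic expressions in raw text."""
    text = text.lower()
    return sum(text.count(marker) for marker in EPISTEMIC_MARKERS)
```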

Discourse properties

Discourse markers can be used to measure the coherence quality of a text. This has been explored by Somasundaran et al. 53 who use discourse markers to evaluate the story-telling aspect of student writing while Nadeem et al. 54 incorporated them in their deep learning-based approach to automated essay scoring. In the present paper, we employ the PDTB list of discourse markers 55 which we adjust to exclude words that are often used for purposes other than indicating discourse relations, such as ‘like’, ‘for’, ‘in’ etc.
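The counting itself is straightforward once the marker list is fixed. Below, a small hypothetical subset stands in for the full adjusted PDTB list, with ambiguous words such as ‘like’, ‘for’ and ‘in’ already excluded as described:

```python
# Hypothetical subset of the (adjusted) PDTB connective list.
DISCOURSE_MARKERS = {"however", "therefore", "moreover", "because",
                     "although", "instead", "in conclusion"}

def count_discourse_markers(text):
    """Count single- and multi-word discourse connectives in raw text."""
    text = text.lower()
    # Single-word markers are matched on whitespace-split tokens,
    # multi-word markers by substring search.
    words = text.replace(",", " ").replace(".", " ").split()
    single = sum(1 for w in words if w in DISCOURSE_MARKERS)
    multi = sum(text.count(m) for m in DISCOURSE_MARKERS if " " in m)
    return single + multi
```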

Statistical methods

We use a within-subjects design for our study. Each participant was shown six randomly selected essays. Results were submitted to the survey system after each essay was completed, in case participants ran out of time and did not finish scoring all six essays. Cronbach’s \(\alpha\) 56 allows us to determine the inter-rater reliability for each rating criterion and data source (human, ChatGPT-3, ChatGPT-4), so that we understand the reliability of our data not only overall, but also for each data source and rating criterion. We use two-sided Wilcoxon rank-sum tests 57 to confirm the significance of the differences between the data sources for each criterion. We use the same tests to determine the significance of the linguistic characteristics. This results in three comparisons (human vs. ChatGPT-3, human vs. ChatGPT-4, ChatGPT-3 vs. ChatGPT-4) for each of the seven rating criteria and each of the seven linguistic characteristics, i.e. 42 tests. We use the Holm-Bonferroni method 58 to correct for multiple tests and achieve a family-wise error rate of 0.05. We report the effect size using Cohen’s d 59 . While our data is not perfectly normal, it also does not have severe outliers, so we prefer the clear interpretation of Cohen’s d over the slightly more appropriate, but less accessible, non-parametric effect size measures. We report point plots with estimates of the mean scores for each data source and criterion, including the 95% confidence interval of these mean values. The confidence intervals are estimated in a non-parametric manner based on bootstrap sampling. We further visualize the distribution for each criterion using violin plots to provide a visual indicator of the spread of the data (see Supplementary material S4 ).
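The multiple-testing correction and the effect size are simple enough to sketch directly (the rank-sum tests themselves come from scipy); this is a plain-Python illustration of the procedure, not the study’s code:

```python
import math

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation; positive when the
    mean of `b` exceeds the mean of `a`."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_b - mean_a) / pooled

def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down correction: returns which hypotheses
    are rejected at a family-wise error rate of `alpha`."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        # Compare the rank-th smallest p-value against alpha / (m - rank).
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # all remaining (larger) p-values fail as well
    return rejected
```

With 42 tests, the smallest p-value is compared against 0.05/42, the next against 0.05/41, and so on, stopping at the first failure.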

Further, we use the self-assessments of English skills and confidence in the essay ratings as confounding variables. Through this, we determine whether ratings are affected by language skills or confidence, rather than by the actual quality of the essays. We control for the impact of these by measuring Pearson’s correlation coefficient r 60 between the self-assessments and the ratings. We also determine whether the linguistic features are correlated with the ratings as expected. The sentence complexity (both tree depth and dependency clauses), as well as the nominalization, are indicators of the complexity of the language. Similarly, the use of discourse markers should signal a proper logical structure. Finally, a large lexical diversity should be correlated with the ratings for the vocabulary. As above, we measure Pearson’s r . We use a two-sided test for the significance based on a \(\beta\) -distribution that models the expected correlations, as implemented by scipy 61 . As above, we use the Holm-Bonferroni method to account for multiple tests. However, we note that it is likely that all correlations, even tiny ones, are significant given our amount of data. Consequently, our interpretation of these results focuses on the strength of the correlations.
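For completeness, the correlation coefficient itself can be sketched as follows (the significance test against the \(\beta\)-distribution is taken from scipy and not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```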

Our statistical analysis of the data is implemented in Python. We use pandas 1.5.3 and numpy 1.24.2 for the processing of data, pingouin 0.5.3 for the calculation of Cronbach’s \(\alpha\) , scipy 1.10.1 for the Wilcoxon rank-sum tests and Pearson’s r , and seaborn 0.12.2 for the generation of plots, including the calculation of error bars that visualize the confidence intervals.
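While the study computes Cronbach’s \(\alpha\) with pingouin, the underlying formula is compact enough to sketch with the standard library; here the raters play the role of the ‘items’ and the essays the role of the observations:

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Cronbach's alpha for inter-rater reliability.
    `ratings[i]` holds rater i's scores, aligned by essay."""
    k = len(ratings)                                     # number of raters
    totals = [sum(scores) for scores in zip(*ratings)]   # per-essay sums
    rater_var = sum(variance(r) for r in ratings)
    return k / (k - 1) * (1 - rater_var / variance(totals))
```

Perfect agreement between raters yields \(\alpha = 1\); values above 0.9, as reported below, are conventionally read as excellent reliability.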

Out of the 111 teachers who completed the questionnaire, 108 rated all six essays, one rated five essays, one rated two essays, and one rated only one essay. This results in 658 ratings for 270 essays (90 topics for each essay type: human-, ChatGPT-3-, ChatGPT-4-generated), with three ratings for 121 essays, two ratings for 144 essays, and one rating for five essays. The inter-rater agreement is consistently excellent ( \(\alpha >0.9\) ), with the exception of language mastery, where we have good agreement ( \(\alpha =0.89\) , see Table  2 ). Further, the correlation analysis depicted in supplementary material S4 shows weak positive correlations ( \(r \in [0.11, 0.28]\) ) between the self-assessments (English skills and confidence in ratings) and the actual ratings. Overall, this indicates that our ratings are reliable estimates of the actual quality of the essays, with a small tendency for higher confidence in ratings and better language skills to yield higher ratings, independent of the data source.

Table  2 and supplementary material S4 characterize the distribution of the ratings for the essays, grouped by the data source. We observe that for all criteria, we have a clear order of the mean values, with students having the worst ratings, ChatGPT-3 in the middle rank, and ChatGPT-4 with the best performance. We further observe that the standard deviations are fairly consistent and slightly larger than one, i.e. the spread is similar for all ratings and essays. This is further supported by the visual analysis of the violin plots.

The statistical analysis of the ratings reported in Table  4 shows that differences between the human-written essays and the ones generated by both ChatGPT models are significant. The effect sizes for human versus ChatGPT-3 essays are between 0.52 and 1.15, i.e. a medium ( \(d \in [0.5,0.8)\) ) to large ( \(d \in [0.8, 1.2)\) ) effect. On the one hand, the smallest effects are observed for the expressiveness and complexity, i.e. when it comes to the overall comprehensiveness and complexity of the sentence structures, the differences between the humans and the ChatGPT-3 model are smallest. On the other hand, the difference in language mastery is larger than all other differences, which indicates that humans are more prone to making mistakes when writing than the NLG models. The magnitude of differences between humans and ChatGPT-4 is larger with effect sizes between 0.88 and 1.43, i.e., a large to very large ( \(d \in [1.2, 2)\) ) effect. Same as for ChatGPT-3, the differences are smallest for expressiveness and complexity and largest for language mastery. Please note that the difference in language mastery between humans and both GPT models does not mean that the humans have low scores for language mastery (M=3.90), but rather that the NLG models have exceptionally high scores (M=5.03 for ChatGPT-3, M=5.25 for ChatGPT-4).

When we consider the differences between the two GPT models, we observe that while ChatGPT-4 has consistently higher mean values for all criteria, only the differences for logic and composition, vocabulary and text linking, and complexity are significant. The effect sizes are between 0.45 and 0.5, i.e. small ( \(d \in [0.2, 0.5)\) ) and medium. Thus, while GPT-4 seems to be an improvement over GPT-3.5 in general, the only clear indicator of this is a better and clearer logical composition and more complex writing with a more diverse vocabulary.

We also observe significant differences in the distribution of linguistic characteristics between all three groups (see Table  3 ). Sentence complexity (depth) is the only category without a significant difference between humans and ChatGPT-3, as well as ChatGPT-3 and ChatGPT-4. There is also no significant difference in the category of discourse markers between humans and ChatGPT-3. The magnitude of the effects varies considerably and is between 0.39 and 1.93, i.e., between small ( \(d \in [0.2, 0.5)\) ) and very large. However, in comparison to the ratings, there is no clear tendency regarding the direction of the differences. For instance, while the ChatGPT models write more complex sentences and use more nominalizations, humans tend to use more modals and epistemic markers instead. The lexical diversity of humans is higher than that of ChatGPT-3 but lower than that of ChatGPT-4. While there is no difference in the use of discourse markers between humans and ChatGPT-3, ChatGPT-4 uses significantly fewer discourse markers.

We detect the expected positive correlations between the complexity ratings and the linguistic markers for sentence complexity ( \(r=0.16\) for depth, \(r=0.19\) for clauses) and nominalizations ( \(r=0.22\) ). However, we observe a negative correlation between the logic ratings and the discourse markers ( \(r=-0.14\) ), which counters our intuition that more frequent use of discourse indicators makes a text more logically coherent. However, this is in line with previous work: McNamara et al. 45 also find no indication that the use of cohesion indices such as discourse connectives correlates with high- and low-proficiency essays. Finally, we observe the expected positive correlation between the ratings for the vocabulary and the lexical diversity ( \(r=0.12\) ). All observed correlations are significant. However, we note that the strength of all these correlations is weak and that the significance itself should not be over-interpreted due to the large sample size.

Our results provide clear answers to the first two research questions that consider the quality of the generated essays: ChatGPT performs well at writing argumentative student essays and outperforms the quality of the human-written essays significantly. The ChatGPT-4 model has (at least) a large effect and is on average about one point better than humans on a seven-point Likert scale.

Regarding the third research question, we find that there are significant linguistic differences between human-written and AI-generated content. The AI-generated essays are highly structured, which is for instance reflected by the identical beginnings of the concluding sections of all ChatGPT essays (‘In conclusion, [...]’). The initial sentences of each essay are also very similar, starting with a general statement using the main concepts of the essay topic. Although this corresponds to the general structure that is sought after for argumentative essays, it is striking that the ChatGPT models are so rigid in realizing it, whereas the human-written essays are looser in representing the guideline on the linguistic surface. Moreover, the linguistic fingerprint has the counter-intuitive property that the use of discourse markers is negatively correlated with logical coherence. We believe that this might be due to the rigid structure of the generated essays: instead of using discourse markers, the AI models provide a clear logical structure by separating the different arguments into paragraphs, thereby reducing the need for discourse markers.

Our data also shows that hallucinations are not a problem in the setting of argumentative essay writing: the essay topics are not really about factual correctness, but rather about argumentation and critical reflection on general concepts which seem to be contained within the knowledge of the AI model. The stochastic nature of the language generation is well-suited for this kind of task, as different plausible arguments can be seen as a sampling from all available arguments for a topic. Nevertheless, we need to perform a more systematic study of the argumentative structures in order to better understand the difference in argumentation between human-written and ChatGPT-generated essay content. Moreover, we also cannot rule out that subtle hallucinations may have been overlooked during the ratings. There are also essays with a low rating for the criteria related to factual correctness, indicating that there might be cases where the AI models still have problems, even if they are, on average, better than the students.

One issue with evaluations of recent large language models is that they may not account for the impact of tainted data, i.e., benchmark data that was part of a model's training set. While it is certainly possible that the essays that were sourced by Stab and Gurevych 41 from the internet were part of the training data of the GPT models, the proprietary nature of the model training means that we cannot confirm this. However, we note that the generated essays did not resemble the corpus of human essays at all. Moreover, the topics of the essays are general in the sense that any human should be able to reason and write about them, just by understanding concepts like ‘cooperation’. Consequently, a taint on these general topics, i.e., the fact that they might be present in the training data, is not only possible but actually expected and unproblematic, as it relates to the capability of the models to learn about concepts rather than the memorization of specific task solutions.

While we did everything we could to ensure a sound construct and high validity for our study, certain issues may still affect our conclusions. Most importantly, neither the writers of the essays nor their raters were native English speakers. However, the students purposefully used a forum for English writing frequented by native speakers to ensure the language and content quality of their essays. This indicates that the resulting essays are likely above average for non-native speakers, as they went through at least one round of revisions with the help of native speakers. The teachers were informed that part of the training would be in English to prevent registrations from people without English language skills. Moreover, the self-assessment of the language skills was only weakly correlated with the ratings, indicating that the threat to the soundness of our results is low. While we cannot definitively rule out that our results would not be reproducible with other human raters, the high inter-rater agreement indicates that this is unlikely.

However, our reliance on essays written by non-native speakers affects the external validity and generalizability of our results. It is certainly possible that native-speaking students would perform better in the criteria related to language skills, though it is unclear by how much. However, the language skills were a particular strength of the AI models: even if the gap were smaller for native speakers, it is still reasonable to conclude that the AI models would perform at least comparably to humans, and possibly still better. While we cannot rule out a difference for the content-related criteria, we also see no strong argument why native speakers should have better arguments than non-native speakers. Thus, while our results might not fully translate to native speakers, we see no reason why the content-related aspects should differ. Further, our results were obtained on high-school-level essays. Native and non-native speakers with higher education degrees, or experts in their fields, would likely achieve better performance, such that the difference between the AI models and humans would likely also be smaller in such a setting.

We further note that the essay topics may not be an unbiased sample. While Stab and Gurevych 41 randomly sampled the essays from the writing feedback section of an essay forum, it is unclear whether the essays posted there are representative of the general population of essay topics. Nevertheless, we believe that this threat is fairly low because our results are consistent and do not seem to be influenced by particular topics. Further, we cannot conclude with certainty how our results generalize beyond ChatGPT-3 and ChatGPT-4 to similar models like Bard ( https://bard.google.com/?hl=en ), Alpaca, and Dolly. The results for linguistic characteristics in particular are hard to predict. However, to the best of our knowledge, and given the proprietary nature of some of these models, the general approach behind these models is similar, so the trends for essay quality should hold for models with comparable size and training procedures.

Finally, we want to note that the current speed of progress with generative AI is extremely fast and we are studying moving targets: ChatGPT 3.5 and 4 today are already not the same as the models we studied. Due to a lack of transparency regarding the specific incremental changes, we cannot know or predict how this might affect our results.

Our results provide a strong indication that the fear many teaching professionals have is warranted: the way students do homework and teachers assess it needs to change in a world of generative AI models. For non-native speakers, our results show that when students want to maximize their essay grades, they could easily do so by relying on results from AI models like ChatGPT. The very strong performance of the AI models indicates that this might also be the case for native speakers, though the difference in language skills is probably smaller. However, this is not and cannot be the goal of education. Consequently, educators need to change how they approach homework. Instead of just assigning and grading essays, we need to reflect more on the output of AI tools regarding their reasoning and correctness. AI models need to be seen as an integral part of education, but one which requires careful reflection and training of critical thinking skills.

Furthermore, teachers need to adapt strategies for teaching writing skills: as with the use of calculators, it is necessary to critically reflect with the students on when and how to use those tools. For instance, constructivists 62 argue that learning is enhanced by the active design and creation of unique artifacts by students themselves. In the present case this means that, in the long term, educational objectives may need to be adjusted. This is analogous to teaching good arithmetic skills to younger students and then allowing and encouraging students to use calculators freely in later stages of education. Similarly, once a sound level of literacy has been achieved, strongly integrating AI models in lesson plans may no longer run counter to reasonable learning goals.

In terms of shedding light on the quality and structure of AI-generated essays, this paper makes an important contribution by offering an independent, large-scale and statistically sound account of essay quality, comparing human-written and AI-generated texts. By comparing different versions of ChatGPT, we also offer a glance into the development of these models over time in terms of their linguistic properties and the quality they exhibit. Our results show that while the language generated by ChatGPT is considered very good by humans, there are also notable structural differences, e.g. in the use of discourse markers. This demonstrates that an in-depth consideration is required not only of the capabilities of generative AI models (i.e., which tasks they can be used for), but also of the language they generate. For example, if we read many AI-generated texts that use fewer discourse markers, it raises the question if and how this would affect our human use of discourse markers. Understanding how AI-generated texts differ from human-written ones enables us to look for these differences, to reason about their potential impact, and to study and possibly mitigate this impact.

Data availability

The datasets generated during and/or analysed during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.8343644 .

Code availability

All materials are available online in form of a replication package that contains the data and the analysis code, https://doi.org/10.5281/zenodo.8343644 .

Ouyang, L. et al. Training language models to follow instructions with human feedback (2022). arXiv:2203.02155 .

Ruby, D. 30+ detailed ChatGPT statistics - users & facts (Sep 2023). https://www.demandsage.com/chatgpt-statistics/ (2023). Accessed 09 June 2023.

Leahy, S. & Mishra, P. TPACK and the Cambrian explosion of AI. In Society for Information Technology & Teacher Education International Conference , (ed. Langran, E.) 2465–2469 (Association for the Advancement of Computing in Education (AACE), 2023).

Ortiz, S. Need an AI essay writer? Here's how ChatGPT (and other chatbots) can help. https://www.zdnet.com/article/how-to-use-chatgpt-to-write-an-essay/ (2023). Accessed 09 June 2023.

OpenAI chat interface. https://chat.openai.com/ . Accessed 09 June 2023.

OpenAI. GPT-4 technical report (2023). arXiv:2303.08774 .

Brown, T. B. et al. Language models are few-shot learners (2020). arXiv:2005.14165 .

Wang, B. Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX. https://github.com/kingoflolz/mesh-transformer-jax (2021).

Wei, J. et al. Finetuned language models are zero-shot learners. In International Conference on Learning Representations (2022).

Taori, R. et al. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca (2023).

Cai, Z. G., Haslett, D. A., Duan, X., Wang, S. & Pickering, M. J. Does ChatGPT resemble humans in language use? (2023). arXiv:2303.08014 .

Mahowald, K. A discerning several thousand judgments: GPT-3 rates the article + adjective + numeral + noun construction (2023). arXiv:2301.12564 .

Dentella, V., Murphy, E., Marcus, G. & Leivada, E. Testing AI performance on less frequent aspects of language reveals insensitivity to underlying meaning (2023). arXiv:2302.12313 .

Guo, B. et al. How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection (2023). arXiv:2301.07597 .

Zhao, W. et al. Is ChatGPT equipped with emotional dialogue capabilities? (2023). arXiv:2304.09582 .

Keim, D. A. & Oelke, D. Literature fingerprinting: A new method for visual literary analysis. In 2007 IEEE Symposium on Visual Analytics Science and Technology, 115–122, https://doi.org/10.1109/VAST.2007.4389004 (IEEE, 2007).

El-Assady, M. et al. Interactive visual analysis of transcribed multi-party discourse. In Proceedings of ACL 2017, System Demonstrations , 49–54 (Association for Computational Linguistics, Vancouver, Canada, 2017).

El-Assady, M., Hautli-Janisz, A. & Butt, M. Discourse maps - feature encoding for the analysis of verbatim conversation transcripts. In Visual Analytics for Linguistics, CSLI Lecture Notes, Number 220, 115–147 (Stanford: CSLI Publications, 2020).

Foulis, M., Visser, J. & Reed, C. Dialogical fingerprinting of debaters. In Proceedings of COMMA 2020, 465–466, https://doi.org/10.3233/FAIA200536 (Amsterdam: IOS Press, 2020).

Foulis, M., Visser, J. & Reed, C. Interactive visualisation of debater identification and characteristics. In Proceedings of the COMMA workshop on Argument Visualisation, COMMA, 1–7 (2020).

Chatzipanagiotidis, S., Giagkou, M. & Meurers, D. Broad linguistic complexity analysis for Greek readability classification. In Proceedings of the 16th Workshop on Innovative Use of NLP for Building Educational Applications , 48–58 (Association for Computational Linguistics, Online, 2021).

Ajili, M., Bonastre, J.-F., Kahn, J., Rossato, S. & Bernard, G. FABIOLE, a speech database for forensic speaker comparison. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16) , 726–733 (European Language Resources Association (ELRA), Portorož, Slovenia, 2016).

Deutsch, T., Jasbi, M. & Shieber, S. Linguistic features for readability assessment. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, 1–17, https://doi.org/10.18653/v1/2020.bea-1.1 (Association for Computational Linguistics, Seattle, WA, USA (Online), 2020).

Fiacco, J., Jiang, S., Adamson, D. & Rosé, C. Toward automatic discourse parsing of student writing motivated by neural interpretation. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022) , 204–215, https://doi.org/10.18653/v1/2022.bea-1.25 (Association for Computational Linguistics, Seattle, Washington, 2022).

Weiss, Z., Riemenschneider, A., Schröter, P. & Meurers, D. Computationally modeling the impact of task-appropriate language complexity and accuracy on human grading of German essays. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 30–45, https://doi.org/10.18653/v1/W19-4404 (Association for Computational Linguistics, Florence, Italy, 2019).

Yang, F., Dragut, E. & Mukherjee, A. Predicting personal opinion on future events with fingerprints. In Proceedings of the 28th International Conference on Computational Linguistics , 1802–1807, https://doi.org/10.18653/v1/2020.coling-main.162 (International Committee on Computational Linguistics, Barcelona, Spain (Online), 2020).

Tumarada, K. et al. Opinion prediction with user fingerprinting. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021) , 1423–1431 (INCOMA Ltd., Held Online, 2021).

Rocca, R. & Yarkoni, T. Language as a fingerprint: Self-supervised learning of user encodings using transformers. In Findings of the Association for Computational Linguistics: EMNLP . 1701–1714 (Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 2022).

Aiyappa, R., An, J., Kwak, H. & Ahn, Y.-Y. Can we trust the evaluation on ChatGPT? (2023). arXiv:2303.12767 .

Yeadon, W., Inyang, O.-O., Mizouri, A., Peach, A. & Testrow, C. The death of the short-form physics essay in the coming AI revolution (2022). arXiv:2212.11661 .

Turing, A. M. Computing machinery and intelligence. Mind LIX, 433–460. https://doi.org/10.1093/mind/LIX.236.433 (1950).

Kortemeyer, G. Could an artificial-intelligence agent pass an introductory physics course? (2023). arXiv:2301.12127 .

Kung, T. H. et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2, 1–12. https://doi.org/10.1371/journal.pdig.0000198 (2023).


Frieder, S. et al. Mathematical capabilities of ChatGPT (2023). arXiv:2301.13867 .

Yuan, Z., Yuan, H., Tan, C., Wang, W. & Huang, S. How well do large language models perform in arithmetic tasks? (2023). arXiv:2304.02015 .

Touvron, H. et al. Llama: Open and efficient foundation language models (2023). arXiv:2302.13971 .

Chung, H. W. et al. Scaling instruction-finetuned language models (2022). arXiv:2210.11416 .

Workshop, B. et al. Bloom: A 176b-parameter open-access multilingual language model (2023). arXiv:2211.05100 .

Spencer, S. T., Joshi, V. & Mitchell, A. M. W. Can AI put gamma-ray astrophysicists out of a job? (2023). arXiv:2303.17853 .

Cherian, A., Peng, K.-C., Lohit, S., Smith, K. & Tenenbaum, J. B. Are deep neural networks smarter than second graders? (2023). arXiv:2212.09993 .

Stab, C. & Gurevych, I. Annotating argument components and relations in persuasive essays. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers , 1501–1510 (Dublin City University and Association for Computational Linguistics, Dublin, Ireland, 2014).

Essay forum. https://essayforum.com/ . Last-accessed: 2023-09-07.

Common European Framework of Reference for Languages (CEFR). https://www.coe.int/en/web/common-european-framework-reference-languages . Accessed 09 July 2023.

KMK guidelines for essay assessment. http://www.kmk-format.de/material/Fremdsprachen/5-3-2_Bewertungsskalen_Schreiben.pdf . Accessed 09 July 2023.

McNamara, D. S., Crossley, S. A. & McCarthy, P. M. Linguistic features of writing quality. Writ. Commun. 27 , 57–86 (2010).

McCarthy, P. M. & Jarvis, S. MTLD, vocd-D, and HD-D: A validation study of sophisticated approaches to lexical diversity assessment. Behav. Res. Methods 42, 381–392 (2010).


Dasgupta, T., Naskar, A., Dey, L. & Saha, R. Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications , 93–102 (2018).

Koizumi, R. & In’nami, Y. Effects of text length on lexical diversity measures: Using short texts with less than 200 tokens. System 40 , 554–564 (2012).

spaCy: industrial-strength natural language processing in Python. https://spacy.io/ .

Siskou, W., Friedrich, L., Eckhard, S., Espinoza, I. & Hautli-Janisz, A. Measuring plain language in public service encounters. In Proceedings of the 2nd Workshop on Computational Linguistics for Political Text Analysis (CPSS-2022) (Potsdam, Germany, 2022).

El-Assady, M. & Hautli-Janisz, A. Discourse Maps - Feature Encoding for the Analysis of Verbatim Conversation Transcripts. CSLI Lecture Notes (CSLI Publications, Center for the Study of Language and Information, 2019).

Hautli-Janisz, A. et al. QT30: A corpus of argument and conflict in broadcast debate. In Proceedings of the Thirteenth Language Resources and Evaluation Conference , 3291–3300 (European Language Resources Association, Marseille, France, 2022).

Somasundaran, S. et al. Towards evaluating narrative quality in student writing. Trans. Assoc. Comput. Linguist. 6 , 91–106 (2018).

Nadeem, F., Nguyen, H., Liu, Y. & Ostendorf, M. Automated essay scoring with discourse-aware neural models. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications , 484–493, https://doi.org/10.18653/v1/W19-4450 (Association for Computational Linguistics, Florence, Italy, 2019).

Prasad, R. et al. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08) (European Language Resources Association (ELRA), Marrakech, Morocco, 2008).

Cronbach, L. J. Coefficient alpha and the internal structure of tests. Psychometrika 16 , 297–334. https://doi.org/10.1007/bf02310555 (1951).


Wilcoxon, F. Individual comparisons by ranking methods. Biom. Bull. 1 , 80–83 (1945).

Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 6 , 65–70 (1979).


Cohen, J. Statistical power analysis for the behavioral sciences (Academic press, 2013).

Freedman, D., Pisani, R. & Purves, R. Statistics, 4th edn (international student edition). WW Norton & Company, New York (2007).

Scipy documentation. https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html . Accessed 09 June 2023.

Windschitl, M. Framing constructivism in practice as the negotiation of dilemmas: An analysis of the conceptual, pedagogical, cultural, and political challenges facing teachers. Rev. Educ. Res. 72 , 131–175 (2002).


Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and affiliations.

Faculty of Computer Science and Mathematics, University of Passau, Passau, Germany

Steffen Herbold, Annette Hautli-Janisz, Ute Heuer, Zlata Kikteva & Alexander Trautsch


Contributions

S.H., A.HJ., and U.H. conceived the experiment; S.H., A.HJ, and Z.K. collected the essays from ChatGPT; U.H. recruited the study participants; S.H., A.HJ., U.H. and A.T. conducted the training session and questionnaire; all authors contributed to the analysis of the results, the writing of the manuscript, and review of the manuscript.

Corresponding author

Correspondence to Steffen Herbold .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information 1. Supplementary Information 2. Supplementary Information 3. Supplementary Tables. Supplementary Figures.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Herbold, S., Hautli-Janisz, A., Heuer, U. et al. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci Rep 13 , 18617 (2023). https://doi.org/10.1038/s41598-023-45644-9


Received : 01 June 2023

Accepted : 22 October 2023

Published : 30 October 2023

DOI : https://doi.org/10.1038/s41598-023-45644-9




How to Grade Essays with ChatGPT

Introduction

The rise of large language models (LLMs) like OpenAI’s ChatGPT has opened exciting possibilities in essay grading. With its advanced natural language processing capabilities, ChatGPT offers a new dimension in assessing written work, potentially revolutionizing the grading process for educators and researchers. Let’s delve into how ChatGPT could potentially make essay grading easier, more efficient, and more accurate.

ChatGPT can analyze written content for various parameters, including content quality, argument structure, coherence, and adherence to guidelines. Whether you use a continuous scoring system (e.g., quality of writing) or a discrete one (e.g., essay positions), ChatGPT can be tailored to your specific needs, offering customized feedback for different writing styles and assignments. Literature also suggests that LLMs can significantly increase grading efficiency, alleviating some of the burden on educators (Abedi et al., 2023; Okonkwo & Ade-Ibijola, 2021; Richter et al., 2019). Imagine grading hundreds of essays and providing feedback on them – a time-consuming and tiring task. ChatGPT can automate the initial assessment, flagging essays that require further attention based on specific criteria. Additionally, ChatGPT can identify stylistic strengths and weaknesses, analyze the use of literary devices, and even point out potential inconsistencies in an argument’s logic. This could free up valuable educator time for student interaction and curriculum development.

However, caution against over-reliance on this new technology is advised, particularly in scenarios where biased or inaccurate models could unfairly impact individual students. It is essential to recognize both the potential advantages and the limitations of LLMs. This blog post aims to reflect on ChatGPT's capabilities for grading and classifying essays and to provide insights into the practical application of ChatGPT in educational settings.

In this blog, we will explore:

  • Essay grading with ChatGPT and ChatGPT API
  • Steps for essay grading with ChatGPT API
  • Steps for essay classification with ChatGPT API
  • Cost & computation times

For steps 2 and 3, we will provide detailed instructions on how to access and set up the ChatGPT API, prepare and upload your text dataset, and efficiently grade or classify numerous essays. Additionally, we will compare the outcomes of human grading to those obtained through GPT grading.

Essay Grading with ChatGPT and ChatGPT API

For a single essay, we can simply ask ChatGPT to grade as follows:

[Screenshot: the ChatGPT web interface grading a single essay]

For multiple essays, we could request ChatGPT to grade each one individually. However, when dealing with a large number of essays (e.g., 50, 100, 1000, etc.), manually grading them in this way becomes a laborious and time-consuming task. In such cases, we can leverage the ChatGPT API service to evaluate numerous essays at once, providing greater flexibility and efficiency. The ChatGPT API is a versatile tool that enables developers to integrate ChatGPT into their own applications, services, or websites. When you use the API, you also gain more control over the interaction, such as the ability to adjust the temperature, the maximum number of tokens, and the presence of system messages.

It is important to understand the distinctions between ChatGPT’s web interface and the pretrained models accessible through the OpenAI API .

ChatGPT’s web version provides a user-friendly chat interface, requiring no coding knowledge and offering features like integrated system tools. However, it is less customizable and is not designed for managing high volumes of requests. Additionally, due to its internal short-term memory span, previous conversations can influence later responses. In contrast, the OpenAI API offers pretrained models without a built-in interface, necessitating coding experience for integration. These models excel at managing large request volumes, but lack ChatGPT’s conversational memory; they process each input independently. This fundamental difference can lead to variations in the outputs generated by ChatGPT’s web interface and the OpenAI API.

Here’s an example of grading a single essay using the ChatGPT API with Python:
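As a hedged sketch of what such a call can look like, the snippet below posts directly to the Chat Completions REST endpoint using only the standard library; the model name, prompt wording, and the 1-6 scale are illustrative assumptions, not necessarily this post's exact setup.

```python
# Hedged sketch: grade one essay via the Chat Completions REST endpoint.
# Model name, prompt wording, and the 1-6 rubric are assumptions.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(essay: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the request body; the 1-6 rubric is an illustrative choice."""
    prompt = (
        "Grade the following student essay on a scale from 1 (poor) to "
        "6 (excellent). Reply with the numeric score only.\n\n" + essay
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # lower randomness for more reproducible scores
    }

def grade_essay(essay: str) -> float:
    """Send one essay to the API and parse the returned numeric score."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(essay)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return float(body["choices"][0]["message"]["content"].strip())
```

Setting `temperature` to 0 is one way to tame the response randomness discussed below, although it does not eliminate it entirely.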

Interestingly, this example produces a single score rather than the sentence generated above via the ChatGPT web interface. This difference could be attributed to the ChatGPT API interpreting the prompt more directly than the ChatGPT online service, even though they both use the same pretrained model. Alternatively, the variability in ChatGPT’s results might be due to inherent randomness in its responses.

By implementing a loop with multiple texts, we can acquire scores for an entire set of essays. Let’s see how to do that.

Steps for Essay Grading with ChatGPT API

Get and set up a ChatGPT API key

We assume that you have already installed the Python OpenAI library on your system and have an active OpenAI account. Setting up and obtaining access to the ChatGPT API involves the following steps:

Obtain an OpenAI key: Visit the OpenAI API website at https://platform.openai.com/api-keys and click the +Create new secret key button. Save your key securely, as you cannot view the same key again due to OpenAI’s security policies.

Set up the API key: In your Python script or notebook, set up the API key using the following code, replacing “YOUR-API-KEY” with your actual API key:
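One conventional way to do this, reading the key from an environment variable rather than hard-coding it in the script (the variable name `OPENAI_API_KEY` is a common convention, not something mandated by this post):

```python
# Keep the secret out of the script: read it from the environment instead.
import os

def get_api_key() -> str:
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")
    return key
```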

Load the text dataset

In this post, we will grade a series of essays about iPad usage in schools.

Text Stance_iPad Scores
0 Some people allow Ipads because some people ne… AMB 1
1 I have a tablet. But it is a lot of money. But… AMB 1
2 Do you think we should get rid of the Ipad wh… AMB 1
3 I said yes because the teacher will not be tal… AMB 2
4 Well I would like the idea . But then for it … AMB 4
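Since the post does not name the underlying file, a miniature stand-in for the loading step, built inline from the preview above, could look like this:

```python
# Hedged sketch: a tiny stand-in for the essay dataset previewed above.
import pandas as pd

df = pd.DataFrame({
    "Text": [
        "Some people allow Ipads because some people ne...",
        "I have a tablet. But it is a lot of money. But...",
    ],
    "Stance_iPad": ["AMB", "AMB"],
    "Scores": [1, 1],
})
# With the real data one would instead do something like:
# df = pd.read_csv("essays.csv")  # hypothetical file name
print(df.head())
```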

Score the multiple essays
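The batch loop can be sketched as follows; `grade_essay` here is a dummy stand-in for whatever function sends one essay to the API and parses the returned score, so that the loop structure itself is runnable offline:

```python
# Hedged sketch of the batch-scoring loop over a DataFrame of essays.
import pandas as pd

def grade_essay(text: str) -> float:
    """Stand-in: a real version would query the ChatGPT API here."""
    return 2.0

df = pd.DataFrame({"Text": ["essay one", "essay two"]})
df["Scores_GPT"] = [grade_essay(t) for t in df["Text"]]
print(df["Scores_GPT"].tolist())  # → [2.0, 2.0]
```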

Grading 50 essays takes only 25 seconds.

Text Stance_iPad Scores Scores_GPT
0 Some people allow Ipads because some people ne… AMB 1 2.0
1 I have a tablet. But it is a lot of money. But… AMB 1 2.0
2 Do you think we should get rid of the Ipad wh… AMB 1 2.0
3 I said yes because the teacher will not be tal… AMB 2 2.0
4 Well I would like the idea . But then for it … AMB 4 4.0

Compare human grading scores with GPT grading scores

For these data, we happened to have scores given by human raters as well, allowing us to examine how similar the human scores are to the scores generated by ChatGPT.

Using the code provided in the accompanying script, we get the following:


A contingency table (confusion matrix) of the scores is:

Scores_GPT 1.0 2.0 3.0 4.0 5.0
Scores
0 1 7 0 0 0
1 0 9 0 0 0
2 0 4 1 0 0
3 0 8 2 0 0
4 0 8 3 2 0
5 0 0 2 2 0
6 0 0 0 0 1

The averages and standard deviations of human grading and GPT grading scores are 2.54 (SD = 1.68) and 2.34 (SD = 0.74), respectively. The correlation between them is 0.62, indicating a fairly strong positive linear relationship. Additionally, the Root Mean Squared Error (RMSE) is 1.36, providing a measure of GPT’s prediction accuracy compared to the actual human grading scores.
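These agreement statistics are straightforward to compute with pandas and NumPy. The short vectors below are illustrative stand-ins for df["Scores"] (human) and df["Scores_GPT"]; the real analysis uses all 50 essays:

```python
import numpy as np
import pandas as pd

# Illustrative scores only, not the post's actual data.
human = pd.Series([1, 1, 2, 3, 4, 5])
gpt = pd.Series([2.0, 2.0, 2.0, 3.0, 4.0, 3.0])

corr = human.corr(gpt)                              # Pearson correlation
rmse = float(np.sqrt(((human - gpt) ** 2).mean()))  # root mean squared error
table = pd.crosstab(human, gpt)                     # contingency table

print(round(corr, 2), round(rmse, 2))
```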

Steps for Essay Classification with ChatGPT API

ChatGPT can be utilized not only for scoring essays but also for classifying essays based on some categorical variable such as writers’ opinions regarding iPad usage in schools. Here are the steps to guide you through the process, assuming you already have access to the ChatGPT API and have loaded your text dataset:

Classify multiple essays

Classifying 50 essays takes only 27 seconds.
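A classification call can follow the same pattern as the grading loop. The prompt wording below is a hypothetical sketch (adapt the label definitions to your coding scheme), and the function requires OPENAI_API_KEY to be set:

```python
LABELS = ("AFF", "NEG", "OTHER")

# Hypothetical stance-classification prompt, not the post's original wording.
STANCE_PROMPT = (
    "Classify the writer's stance on iPad usage in schools as AFF (in "
    "favor), NEG (against), or OTHER. Reply with the label only.\n\n"
    "Essay: {essay}"
)

def classify_essay(essay_text: str, model: str = "gpt-3.5-turbo-0125") -> str:
    """Return AFF, NEG, or OTHER for one essay."""
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": STANCE_PROMPT.format(essay=essay_text)}],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().upper()
    return label if label in LABELS else "OTHER"  # guard against odd replies

# df["Stance_iPad_GPT"] = [classify_essay(text) for text in df["Text"]]
```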

We create a new column, re_Stance_iPad, by mapping values from the existing Stance_iPad column. AFF and NEG express clear opinions, while AMB, BAL, and NAR do not; therefore, AMB, BAL, and NAR are combined into a single OTHER category.
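This recoding is a one-line pandas map; the five-row series below is a stand-in for the real Stance_iPad column:

```python
import pandas as pd

# Collapse the ambiguous stance codes into a single OTHER category.
mapping = {"AFF": "AFF", "NEG": "NEG",
           "AMB": "OTHER", "BAL": "OTHER", "NAR": "OTHER"}

stance = pd.Series(["AMB", "AFF", "NAR", "NEG", "BAL"])  # illustrative data
re_stance = stance.map(mapping)  # -> the new re_Stance_iPad column
print(re_stance.tolist())  # ['OTHER', 'AFF', 'OTHER', 'NEG', 'OTHER']
```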

Text Stance_iPad Scores Scores_GPT re_Stance_iPad Stance_iPad_GPT
0 Some people allow Ipads because some people ne… AMB 1 2.0 OTHER OTHER
1 I have a tablet. But it is a lot of money. But… AMB 1 2.0 OTHER OTHER
2 Do you think we should get rid of the Ipad wh… AMB 1 2.0 OTHER OTHER
3 I said yes because the teacher will not be tal… AMB 2 2.0 OTHER OTHER
4 Well I would like the idea . But then for it … AMB 4 4.0 OTHER OTHER

Compare human classification with GPT classification

Stance_iPad_GPT AFF NEG OTHER
re_Stance_iPad
AFF 7 0 3
NEG 0 9 1
OTHER 3 1 26

ChatGPT achieves an accuracy of approximately 84%, demonstrating its correctness in classification. An F1 score of 0.84, reflecting the harmonic mean of precision and recall, signifies a well-balanced performance in terms of both precision and recall. Additionally, the Cohen’s Kappa value of 0.71, which measures the agreement between predicted and actual classifications while accounting for chance, indicates substantial agreement beyond what would be expected by chance alone.
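Accuracy, F1, and Cohen's Kappa can all be computed with scikit-learn. The label lists below are illustrative stand-ins for df["re_Stance_iPad"] (human) and df["Stance_iPad_GPT"]; the real comparison uses all 50 essays:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

# Illustrative labels only, not the post's actual data.
human = ["AFF", "AFF", "NEG", "OTHER", "OTHER", "NEG"]
gpt = ["AFF", "OTHER", "NEG", "OTHER", "OTHER", "NEG"]

acc = accuracy_score(human, gpt)                 # fraction of exact matches
f1 = f1_score(human, gpt, average="weighted")    # support-weighted F1
kappa = cohen_kappa_score(human, gpt)            # chance-corrected agreement
print(acc, f1, kappa)
```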

Cost & Computation times

How long does it take to assess all essays?

Grading and classifying 50 essays took 25 and 27 seconds, respectively, a rate of about 2 essays per second.

What is the cost of assessing all essays?

In this blog, we utilized GPT-3.5-turbo-0125. According to OpenAI’s pricing page, the cost for input processing is $0.0005 per 1,000 tokens, and for output, it is $0.0015 per 1,000 tokens, indicating that the ChatGPT API charges for both tokens sent out and tokens received.

The total expenditure for all 100 requests (50 assessing essay quality and 50 for essay classification) was approximately $0.01.

What are tokens and how to count them?

Tokens can be viewed as fragments of words. When the API receives prompts, it breaks down the input into tokens. These divisions do not always align with the beginning or end of words; tokens may include spaces and even parts of words. To grasp the concept of tokens and their length equivalencies better, here are some helpful rules of thumb:

  • 1 token ≈ 4 characters in English.
  • 1 token ≈ ¾ of a word.
  • 100 tokens ≈ 75 words.
  • 1 to 2 sentences ≈ 30 tokens.
  • 1 paragraph ≈ 100 tokens.
  • 1,500 words ≈ 2,048 tokens.

To get additional context on how tokens are counted, consider this:

The prompt at the beginning of this blog, requesting that OpenAI grade an essay, contains 129 tokens, and the output contains 12 tokens.

The input cost is $0.0000645, and the output cost is $0.000018.
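The arithmetic above can be sketched as a small helper, using the gpt-3.5-turbo-0125 rates quoted earlier. The 4-characters-per-token estimator is only the rule of thumb from the list above; for exact counts, OpenAI's tiktoken library can tokenize text locally:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate using the ~4 characters-per-token rule of thumb."""
    return max(1, round(len(text) / 4))

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at gpt-3.5-turbo-0125 prices."""
    return input_tokens * 0.0005 / 1000 + output_tokens * 0.0015 / 1000

# The 129-token prompt and 12-token reply described above:
print(call_cost(129, 12))  # 0.0000645 + 0.000018 = 0.0000825
```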

ChatGPT provides an alternative approach to essay grading. This post has delved into the practical application of ChatGPT’s natural language processing capabilities, demonstrating how it can be used for efficient and accurate essay grading, with a comparison to human grading. The flexibility of ChatGPT is particularly evident when handling large volumes of essays, making it a viable alternative tool for educators and researchers. By employing the ChatGPT API key service, the grading process becomes not only streamlined but also adaptable to varying scales, from individual essays to hundreds or even thousands.

This technology has the potential to significantly enhance the efficiency of the grading process. By automating the assessment of written work, teachers and researchers can devote more time to other critical aspects of education. However, it’s important to acknowledge the limitations of current LLMs in this context. While they can assist in grading, relying solely on LLMs for final grades could be problematic, especially if LLMs are biased or inaccurate. Such scenarios could lead to unfair outcomes for individual students, highlighting the need for human oversight in the grading process. For large-scale research, where we look at averages across many essays, this is less of a concern (see, e.g., Mozer et al., 2023).

The guide in this blog has provided a step-by-step walkthrough of setting up and accessing the ChatGPT API for essay grading.

We also explored the reliability of ChatGPT’s grading as compared to human grading. The moderate positive correlation of 0.62 attests to some consistency between human grading and ChatGPT’s evaluations. The classification results reveal that the model achieves an accuracy of approximately 84%, and the Cohen’s Kappa value of 0.71 indicates substantial agreement beyond what would be expected by chance alone. See the related study (Kim et al., 2024) for more on this.

In essence, this comprehensive guide underscores the transformative potential of ChatGPT in essay grading, presenting it as a valuable approach in the ever-evolving educational field. This post gives an overview; in a follow-up, we dig in a bit more, considering prompt engineering and providing examples to improve accuracy.

Writer’s Comments

The API experience: a blend of ease and challenge.

Starting your journey with the ChatGPT API will be surprisingly smooth, especially if you have some Python experience. Copying and pasting code from this blog, followed by acquiring your own ChatGPT API key and tweaking prompts and datasets, might seem like a breeze. However, this simplicity masks the underlying complexity. Bumps along the road are inevitable, reminding us that “mostly” easy does not mean entirely challenge-free.

The biggest hurdle you will likely face is mastering the art of crafting effective prompts. While ChatGPT’s responses are impressive, they can also be unpredictably variable. Conducting multiple pilot runs with 5-10 essays is crucial. Experimenting with diverse prompts on the same essays can act as a stepping stone, refining your approach and building confidence for wider application.

When things click, the benefits are undeniable. Automating the grading process with ChatGPT can save considerable time. Human graders, myself included, can struggle with maintaining consistent standards across a mountain of essays. ChatGPT, on the other hand, might be more stable when grading large batches in a row.

It is crucial to acknowledge that this method is not a magic bullet. Continuous scoring is not quite there yet, and limitations still exist. But the good news is that LLMs like ChatGPT are constantly improving, and new options are emerging.

Overall Reflections: A Journey of Discovery

The exploration of the ChatGPT API can be a blend of innovation, learning, and the occasional frustration. While AI grading systems like ChatGPT are not perfect, their ability to save time and provide a consistent grading scheme makes them an intriguing addition to the educational toolkit. As we explore and refine these tools, the horizon for their application in educational settings seems ever-expanding, offering a glimpse into a future where AI and human educators work together to enhance the learning experience. Who knows, maybe AI will become a valuable partner in the grading process in the future!

Call to Action

Have you experimented with using ChatGPT for grading? Share your experiences and questions in the comments below! We can all learn from each other as we explore the potential of AI in education.

  • Abedi, M., Alshybani, I., Shahadat, M. R. B., & Murillo, M. (2023). Beyond traditional teaching: The potential of large language models and chatbots in graduate engineering education. Qeios. https://doi.org/10.32388/MD04B0
  • Kim, Y., Mozer, R., Miratrix, L., & Al-Ademi, S. (2024). ChatGPT vs. machine learning: Assessing the efficacy and accuracy of large language models for automated essay scoring (in preparation).
  • Mozer, R., Miratrix, L., Relyea, J. E., & Kim, J. S. (2023). Combining human and automated scoring methods in experimental assessments of writing: A case study tutorial. Journal of Educational and Behavioral Statistics, 10769986231207886. https://doi.org/10.3102/10769986231207886
  • Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence, 2, 100033. https://doi.org/10.1016/j.caeai.2021.100033
  • Pricing. (n.d.). OpenAI. Retrieved March 2, 2024, from https://openai.com/pricing#language-models
  • Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education: Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1-27. https://doi.org/10.1186/s41239-019-0171-0


Using ChatGPT to Write a College Essay | Tips & Examples

Published on June 26, 2023 by Koen Driessen. Revised on December 8, 2023.

A good college essay helps to distinguish your application from those of other candidates. It conveys your experiences, values, and character traits and indicates why you would make a good addition to the university you’re applying to.

Your college essay should be your own work. However, AI tools like ChatGPT may be helpful during the preparation and revision stages of your writing.

You can also use the essay checker, so you don’t have to revise your writing yourself.

You can use these tools to:

  • Brainstorm essay ideas
  • Develop an outline
  • Generate feedback


Your college essay topic should reveal something distinctive about your personality and have a unique angle.

Brainstorming a range of potential ideas is a good place to start. This can help to stimulate creative thinking and prompt self-reflection. It can also help you draw connections between seemingly unrelated experiences or ideas.

You can use ChatGPT to generate questions to help you brainstorm.

Example: Using ChatGPT to brainstorm ideas

  • Generate 10 questions to help me brainstorm topics for my college admission essay.
  • How has a particular life experience shaped your perspective on a specific issue or topic?
  • What challenges or obstacles have you faced, and how have they influenced your personal growth?
  • How has your involvement in extracurricular activities impacted your character development and contributed to your identity?
  • In what ways have you demonstrated leadership skills and made a positive impact on your community?
  • Describe a significant academic achievement or project that showcases your intellectual curiosity and passion.
  • How have your cultural background and family values influenced your aspirations and goals for the future?
  • Reflect on a time when you had to step outside of your comfort zone to achieve a meaningful outcome. What did you learn from this experience?
  • How do you envision using your education to make a difference in the world or contribute to a specific field or industry?
  • What do you believe sets you apart from other applicants, and how would your unique perspective enrich the college community?
  • Discuss a book, artwork, or scientific discovery that has profoundly impacted your thinking and helped shape your intellectual interests.


College admission essays don’t follow one predefined structure, so you can approach them creatively and incorporate storytelling techniques.

Two common approaches to structuring your essay are the narrative structure (a chronological narrative that shows your character development over time) and the vignette or montage structure (multiple scenes connected by a common topic or theme).

Regardless of the structure you choose, it’s important to establish a coherent narrative and ensure that your essay flows smoothly.

You can use ChatGPT as a source of inspiration and to generate potential ideas on how to structure your college essay . Do this by prompting the tool to create an outline using your chosen topic. You can make the prompt as specific as you like, mentioning any facts or viewpoints that should be incorporated into the output.

Examples: Using ChatGPT to generate an essay outline

  • Provide a very short outline for a college admission essay. The essay will be about my experience working at an animal shelter. The essay will be 500 words long.
  • Hook: Share a brief and engaging anecdote about your experience at the animal shelter.
  • Thesis statement: Working at an animal shelter has been a transformative experience, teaching me valuable lessons about compassion, resilience, and community impact.
  • Describe your first impressions and emotions upon starting work at the shelter.
  • Discuss the challenges you encountered, such as the condition of the animals or the emotional toll of their situations.
  • Highlight the relationships you formed with the animals and their impact on you.
  • Discuss specific instances where you witnessed the transformative power of love and care on the animals’ well-being.
  • Explore the collaborative efforts within the shelter community.
  • Discuss how working with others led to positive outcomes and inspired you to make a difference.
  • Reflect on how working at the shelter has shaped your character and values.
  • Discuss the skills and lessons learned that are applicable beyond the shelter environment.
  • Connect your experiences to your future goals and aspirations.
  • Summarize the main points discussed in the essay.
  • Reinforce the transformative nature of your experience and your commitment to making a positive impact in the future.

You can use ChatGPT to help revise your essay before submitting it to ensure that it’s of a high quality. However, ChatGPT cannot provide the same kind of informed feedback as a human, and it’s not able to adequately judge qualities such as vulnerability and authenticity. For this reason, it’s important to also ask for feedback from two to three people who have experience with college essays and who know you well.

Nevertheless, you can use ChatGPT to generate feedback on certain aspects, such as consistency of tone, clarity of structure, and grammar and punctuation. You can also use Scribbr’s free paraphrasing tool to help vary the tone of your writing and Scribbr’s free grammar checker to ensure your text is free of grammar, spelling, and punctuation errors.

Alternatively, you can seek the advice of a human essay coach or editor. We recommend using Scribbr’s essay editing service.

Example: Using ChatGPT to get feedback

  • Provide constructive feedback on how to improve the above essay with regard to appropriateness of tone, clarity of structure, and grammar and punctuation.
  • The tone of the essay is generally appropriate, conveying a sense of personal growth and passion. However, consider injecting more enthusiasm and emotion into certain parts of the essay to make it more engaging and captivating.
  • The overall structure of the essay is clear and follows a logical progression. However, it would benefit from a stronger introduction that hooks the reader and clearly outlines the main points to be discussed.
  • Consider using clear topic sentences at the beginning of each paragraph to provide a preview of the main idea or argument that will be presented.
  • Pay attention to subject-verb agreement and verb tenses throughout the essay to ensure consistency.
  • Proofread the essay to eliminate any repetitive phrases or sentences.
  • Check for the correct use of commas and other punctuation marks, ensuring they are used appropriately to enhance clarity and flow.

Additionally, here are a few constructive suggestions for improvement:

  • Expand on the ways in which working at the animal shelter influenced your future aspirations and academic/professional pursuits.
  • Be mindful of using diverse vocabulary and sentence structures to enhance the overall quality of the essay.

While ChatGPT can help you brainstorm, outline, and revise your essay, having it write your essay for you can negatively impact your application in several ways.

Unoriginal content

ChatGPT responses are based on a wide range of sources and sometimes resemble existing texts. Furthermore, they are not always unique, since other users may receive the same responses to similar prompts.

As the point of a college essay is to showcase your unique experiences and perspective and make you stand out from other candidates, having ChatGPT write your essay is not a good idea.

Formulaic style

ChatGPT outputs don’t reflect your personality, are highly formulaic, and contain many clichés (e.g., outputs often use hackneyed phrases like “burning desire,” “insatiable curiosity,” and “thirst for knowledge”).

Furthermore, ChatGPT outputs often “tell” rather than “show” (i.e., they state a fact instead of illustrating it using examples and vivid imagery).

Lack of personal insight

Human-written text tends to be more unpredictable and contain more creative language choices than AI-generated writing.

While the connections you make in your writing should not be nonsensical, drawing unique and unexpected connections between different experiences can help show character development and display your creativity and critical thinking skills.

ChatGPT can’t do this. Furthermore, it can’t express authentic emotion or vulnerability about specific memories that are, after all, your memories, not ChatGPT’s.

Risk of plagiarism

Passing off AI-generated text as your own work is usually considered plagiarism (or at least academic dishonesty). AI detectors may be used to detect this offense.

It’s highly unlikely that a university will accept your application if you are caught submitting an AI-generated college essay.

If you want more tips on using AI tools, understanding plagiarism, and citing sources, make sure to check out some of our other articles with explanations, examples, and formats.

  • Citing ChatGPT
  • Best grammar checker
  • Best paraphrasing tool
  • ChatGPT in your studies
  • Is ChatGPT trustworthy?
  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Best plagiarism checker

Citing sources

  • Citation styles
  • In-text citation
  • Citation examples
  • Annotated bibliography

No, having ChatGPT write your college essay can negatively impact your application in numerous ways. ChatGPT outputs are unoriginal and lack personal insight.

Furthermore, passing off AI-generated text as your own work is considered academically dishonest. AI detectors may be used to detect this offense, and it’s highly unlikely that any university will accept you if you are caught submitting an AI-generated admission essay.

However, you can use ChatGPT to help write your college essay during the preparation and revision stages (e.g., for brainstorming ideas and generating feedback).

Yes, you can use ChatGPT to help write your college essay by having it generate feedback on certain aspects of your work (consistency of tone, clarity of structure, etc.).

However, ChatGPT is not able to adequately judge qualities like vulnerability and authenticity. For this reason, it’s important to also ask for feedback from people who have experience with college essays and who know you well. Alternatively, you can get advice using Scribbr’s essay editing service.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Driessen, K. (2023, December 08). Using ChatGPT to Write a College Essay | Tips & Examples. Scribbr. Retrieved September 9, 2024, from https://www.scribbr.com/ai-tools/chatgpt-college-essay/


How to Write an Essay with ChatGPT | Tips & Examples

Published on 26 June 2023 by Koen Driessen.

Passing off AI-generated text as your own work is widely considered plagiarism. However, when used correctly, generative AI tools like ChatGPT can legitimately help guide your writing process.

These tools are especially helpful in the preparation and revision stages of your essay writing.

You can use ChatGPT to:

  • Write a research question
  • Develop an outline
  • Find relevant source information
  • Summarise or paraphrase text
  • Get feedback


You can use ChatGPT to brainstorm potential research questions or to narrow down your thesis statement . Begin by inputting a description of the research topic or assigned question. Then include a prompt like “Write 3 possible research questions on this topic”.

You can make the prompt as specific as you like. For example, you can include the writing level (e.g., high school essay, college essay), perspective (e.g., first person) and the type of essay you intend to write (e.g., argumentative , descriptive , expository , or narrative ).

You can also mention any facts or viewpoints you’ve gathered that should be incorporated into the output.

If the output doesn’t suit your topic, you can click “Regenerate response” to have the tool generate a new response. You can do this as many times as you like, and you can try making your prompt more specific if you struggle to get the results you want.

Example: Using ChatGPT to write a research question

  • Generate three possible research questions for an argumentative high school essay on the following topic: “The long-term impact of the Covid-19 pandemic.”
  • To what extent has the Covid-19 pandemic exacerbated existing social inequalities and what measures can be taken to address these disparities in the long term?
  • How has the Covid-19 pandemic transformed education systems, and what are the potential long-term implications for students’ learning outcomes and educational equity?
  • What lessons can be learned from the Covid-19 pandemic to enhance global preparedness and response to future public health crises, and how can these lessons be effectively implemented in the long term?
  • Based on the second question, develop three further research questions with a specific focus on distance learning.
  • What are the long-term effects of widespread adoption of distance learning during the Covid-19 pandemic on students’ academic performance, motivation, and overall educational outcomes?
  • How has the shift to distance learning impacted the social and emotional well-being of students, and what strategies can be implemented to address any negative consequences in the long term?
  • What are the key factors that contribute to the success of distance learning programs, and how can educational institutions and policymakers ensure equitable access to quality education in the long term, especially for marginalized and underserved student populations?


Once you’ve decided on a research question, you can use ChatGPT to develop an essay outline . This can help keep you on track by giving you a clear idea of what topics you want to discuss and in what order.

Do this by prompting ChatGPT to create an outline for a specific research question, mentioning any topics or points you want to discuss. You can also mention the writing level and the intended length of your essay so that the tool generates an appropriate outline.

You can then refine this by further prompting ChatGPT or editing the outline manually until it meets your requirements.

Examples: Using ChatGPT to generate an essay outline

  • Develop an outline for an argumentative high school essay with the following research question: “What are the long-term effects of widespread adoption of distance learning during the Covid-19 pandemic on students’ academic performance and overall educational outcomes?” The essay will be about 4 pages long.
  • Background information on the Covid-19 pandemic and the shift to distance learning
  • Thesis statement: The widespread adoption of distance learning during the Covid-19 pandemic has had both positive and negative long-term effects on students’ academic performance and overall educational outcomes.
  • Increased flexibility and personalized learning opportunities
  • Access to a wider range of resources and educational materials
  • Development of self-directed learning skills
  • Critiques and limitations of previous studies
  • Lack of face-to-face interaction and reduced social engagement
  • Increased potential for distractions and lack of structure
  • Technological barriers and unequal access to resources
  • Examination of studies and data on academic achievement during distance learning
  • Analysis of long-term effects on educational attainment and future opportunities
  • Discussion of potential implications for college admissions and career readiness
  • Strategies for promoting student engagement and motivation in distance learning
  • Importance of teacher-student communication and support
  • Addressing technological disparities and ensuring equitable access to resources
  • Recap of the main points discussed
  • Emphasize the need for further research and long-term planning to optimize distance learning experiences
  • Restate the thesis and provide a final thought on the potential long-term effects of distance learning on students’ academic performance and overall educational outcomes

Once you know the scope of your essay, you can find relevant primary and secondary sources to support your argument.

However, we don’t recommend prompting ChatGPT to generate a list of sources as it occasionally makes mistakes (like listing nonexistent sources). Instead, it’s a good idea to use ChatGPT to get suggestions for the types of sources relevant to your essay and track them down using a credible research database or your institution’s library.

When you have found relevant sources, use a specialised tool like the Scribbr Citation Generator to cite them in your essay.

Example: Using ChatGPT to find sources

  • What types of sources can I use to write an essay on the following research question? “What are the long-term effects of widespread adoption of distance learning during the Covid-19 pandemic on students’ academic performance and overall educational outcomes?”
  • Academic Journals and Research Articles: Look for scholarly articles and research studies published in peer-reviewed academic journals. These sources often provide in-depth analysis, empirical evidence, and statistical data on the long-term effects of distance learning.
  • Educational and Government Reports: Reports published by educational institutions, government agencies, or educational organizations can offer valuable insights into the impact of distance learning on academic performance and educational outcomes. These reports may include surveys, case studies, and data analysis specific to the Covid-19 pandemic and distance learning.
  • Official Websites and Publications: Explore the official websites of educational institutions, education ministries, or educational technology companies. They often publish articles, white papers, or research findings related to distance learning and its effects on students’ academic performance.
  • Expert Interviews or Opinion Pieces: Seek out interviews or opinion pieces from experts in the field of education, distance learning, or pedagogy. These sources can provide valuable perspectives and insights on the long-term effects of distance learning on students’ educational outcomes.
  • Online Databases and Digital Libraries: Utilize online databases and digital libraries such as JSTOR, Google Scholar, or academic databases provided by your educational institution. These platforms offer a wide range of scholarly articles, research papers, and publications that can support your arguments.
  • Surveys and Studies Conducted by Educational Institutions: Some schools or universities may have conducted their own surveys or studies to evaluate the impact of distance learning on students. Check if your educational institution has published any research findings or survey results that are relevant to your topic.

During your writing process, you can use ChatGPT as a summarising tool to condense text to its essential ideas or as a paraphraser to rephrase text in clear, accessible language. Using ChatGPT in these ways can help you to understand complex material, express your own ideas more clearly, and avoid repetition.

Simply input the relevant text and prompt the tool to summarise or paraphrase it. Alternatively, you can use Scribbr’s free text summariser and Scribbr’s free paraphrasing tool , which are specifically designed for these purposes.

Example: Using ChatGPT to summarise/paraphrase

  • Paraphrase the following text using clear language: “Engaging in an extensive examination of the intricate interplay between remote instructional methods and academic achievement necessitates consideration of diverse factors, including the pedagogical efficacy of virtual platforms, the socioemotional ramifications of reduced face-to-face interactions, the equitable accessibility of technological resources, and the potential long-lasting implications for students’ educational trajectories and future opportunities.”
  • To thoroughly explore the connection between remote teaching methods and academic success, it is crucial to take into account various factors. These factors include evaluating how well virtual platforms support effective teaching, understanding the impact of limited in-person interactions on students’ emotional well-being, ensuring fair access to technology resources, and recognizing the potential long-term effects on students’ educational paths and future prospects.
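The same prompt pattern can be scripted if you prefer to work with the API rather than the chat interface. A minimal sketch in Python (the helper function is my own, and the commented-out API call assumes the `openai` package and a `gpt-3.5-turbo` model name, neither of which comes from this article):

```python
def build_prompt(text, mode="summarise"):
    """Return a chat message list asking ChatGPT to condense `text`.

    `mode` selects between the two prompt styles discussed above.
    """
    instructions = {
        "summarise": "Summarise the following text in 2-3 sentences:",
        "paraphrase": "Paraphrase the following text using clear language:",
    }
    # ChatGPT-style APIs take a list of role/content messages.
    return [{"role": "user", "content": f"{instructions[mode]}\n\n{text}"}]

# Sending the prompt would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=build_prompt(passage, mode="paraphrase"),
# )
# print(reply.choices[0].message.content)
```

Keeping the prompt construction in one place makes it easy to reuse the same summarise/paraphrase instructions across many passages.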

Once you’ve written your essay, you can prompt ChatGPT to provide feedback and recommend improvements.

You can indicate how the tool should provide feedback (e.g., “Act like a university professor examining papers”) and include the specific points you want to receive feedback on (e.g., consistency of tone, clarity of argument, appropriateness of evidence).

While this is not an adequate substitute for an experienced academic supervisor, it can help you with quick preliminary feedback.
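The role-plus-criteria pattern described above can also be scripted. A hedged sketch (the function name and message layout are illustrative assumptions; only the example role and criteria wording come from this article):

```python
def build_feedback_messages(essay, criteria):
    """Build a chat prompt that sets a reviewer role and lists feedback criteria."""
    return [
        # The system message tells the tool how to provide feedback.
        {"role": "system", "content": "Act like a university professor examining papers."},
        # The user message names the specific points to receive feedback on.
        {
            "role": "user",
            "content": "Give feedback on the following essay with regard to "
            + ", ".join(criteria)
            + ":\n\n"
            + essay,
        },
    ]
```

Separating the role (system message) from the request (user message) mirrors how chat-style models distinguish standing instructions from a particular task.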

You can also use ChatGPT to check for grammar mistakes. However, ChatGPT sometimes misses errors and on rare occasions may even introduce new grammatical mistakes. We suggest using a tool like Scribbr’s free grammar checker, which is designed specifically for this purpose. Or, for more in-depth feedback, try Scribbr’s proofreading and editing service.

Example: Using ChatGPT for feedback

  • Provide feedback on the above text with regard to consistency of tone, clarity of argument, appropriateness of evidence, and coherence of structure.
  • Clarity of Argument: The argument could be further clarified by explicitly connecting the discussed factors to their impact on academic performance. For example, explicitly stating how the proficiency of virtual platforms and equitable access to technology affect academic performance would strengthen the clarity of the argument.
  • Appropriateness of Evidence: The evidence provided is generally appropriate and relevant. To enhance the strength of the evidence, consider incorporating specific examples, research studies, or statistics that directly support the discussed factors and their impact on academic performance. This would provide stronger evidence for the claims made.
  • Coherence of Structure: The structure of the text is coherent, with a clear introduction, body paragraphs addressing different factors, and a conclusion. To improve coherence, consider adding transitional phrases or sentences at the beginning or end of paragraphs to facilitate the smooth flow of ideas between paragraphs. This would enhance the overall structure and make it easier for the reader to follow the argument.

Overall, the text demonstrates a consistent tone, a clear argument, appropriate evidence, and a coherent structure. Clarifying the argument by explicitly connecting the factors to their impact, incorporating stronger evidence, and adding transitional phrases for better coherence would further enhance the text’s effectiveness.

Note: Passing off AI-generated text as your own work is generally considered plagiarism (or at least academic dishonesty) and may result in an automatic fail and other negative consequences. AI detectors may be used to detect this offence.

If you want more tips on using AI tools, understanding plagiarism, and citing sources, make sure to check out some of our other articles with explanations, examples, and formats.

  • Citing ChatGPT
  • Best grammar checker
  • Best paraphrasing tool
  • ChatGPT in your studies
  • Is ChatGPT trustworthy?
  • Types of plagiarism
  • Self-plagiarism
  • Avoiding plagiarism
  • Academic integrity
  • Best plagiarism checker

Citing sources

  • Citation styles
  • In-text citation
  • Citation examples
  • Annotated bibliography

Yes, you can use ChatGPT to summarise text. This can help you understand complex information more easily, summarise the central argument of your own paper, or clarify your research question.

You can also use Scribbr’s free text summariser, which is designed specifically for this purpose.

Yes, you can use ChatGPT to paraphrase text to help you express your ideas more clearly, explore different ways of phrasing your arguments, and avoid repetition.

However, it’s not specifically designed for this purpose. We recommend using a specialised tool like Scribbr’s free paraphrasing tool, which will provide a smoother user experience.

Using AI writing tools (like ChatGPT) to write your essay is usually considered plagiarism and may result in penalisation, unless it is allowed by your university. Text generated by AI tools is based on existing texts and therefore cannot provide unique insights. Furthermore, these outputs sometimes contain factual inaccuracies or grammar mistakes.

However, AI writing tools can be used effectively as a source of feedback and inspiration for your writing (e.g., to generate research questions). Other AI tools, like grammar checkers, can help identify and eliminate grammar and punctuation mistakes to enhance your writing.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Driessen, K. (2023, June 26). How to Write an Essay with ChatGPT | Tips & Examples. Scribbr. Retrieved 9 September 2024, from https://www.scribbr.co.uk/using-ai-tools/chatgpt-essays/

Koen Driessen

ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies.

That growth has propelled OpenAI itself into becoming one of the most-hyped companies in recent memory. And its latest partnership with Apple for its upcoming generative AI offering, Apple Intelligence, has given the company another significant bump in the AI race.

2024 also saw the release of GPT-4o, OpenAI’s new flagship omni model for ChatGPT. GPT-4o is now the default free model, complete with voice and vision capabilities. But after demoing GPT-4o, OpenAI paused one of its voices, Sky, after allegations that it was mimicking Scarlett Johansson’s voice in “Her.”

OpenAI is facing internal drama, including the sizable exit of co-founder and longtime chief scientist Ilya Sutskever as the company dissolved its Superalignment team. OpenAI is also facing a lawsuit from Alden Global Capital-owned newspapers, including the New York Daily News and the Chicago Tribune, for alleged copyright infringement, following a similar suit filed by The New York Times last year.

Here’s a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. And if you have any other questions, check out our ChatGPT FAQ here.

Timeline of the most recent ChatGPT updates


OpenAI reaches 1 million paid users of its corporate offerings

OpenAI announced it has surpassed 1 million paid users for its versions of ChatGPT intended for businesses, including ChatGPT Team, ChatGPT Enterprise and its educational offering, ChatGPT Edu. The company said that nearly half of OpenAI’s corporate users are based in the US.

Volkswagen rolls out its ChatGPT assistant to the US

Volkswagen is taking its ChatGPT voice assistant experiment to vehicles in the United States. Its ChatGPT-integrated Plus Speech voice assistant is an AI chatbot based on Cerence’s Chat Pro product and an LLM from OpenAI, and will begin rolling out on September 6 with the 2025 Jetta and Jetta GLI models.

OpenAI inks content deal with Condé Nast

As part of the new deal, OpenAI will surface stories from Condé Nast properties like The New Yorker, Vogue, Vanity Fair, Bon Appétit and Wired in ChatGPT and SearchGPT. Condé Nast CEO Roger Lynch implied that the “multi-year” deal will involve payment from OpenAI in some form and a Condé Nast spokesperson told TechCrunch that OpenAI will have permission to train on Condé Nast content.

We’re partnering with Condé Nast to deepen the integration of quality journalism into ChatGPT and our SearchGPT prototype. https://t.co/tiXqSOTNAl — OpenAI (@OpenAI) August 20, 2024

Our first impressions of ChatGPT’s Advanced Voice Mode

TechCrunch’s Maxwell Zeff has been playing around with OpenAI’s Advanced Voice Mode, in what he describes as “the most convincing taste I’ve had of an AI-powered future yet.” Compared to Siri or Alexa, Advanced Voice Mode stands out with faster response times, unique answers and the ability to answer complex questions. But the feature falls short as an effective replacement for virtual assistants.

OpenAI shuts down election influence operation that used ChatGPT

OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election. OpenAI identified five website fronts presenting as both progressive and conservative news outlets that used ChatGPT to draft several long-form articles, though it doesn’t seem that it reached much of an audience.

OpenAI finds that GPT-4o does some weird stuff sometimes

OpenAI has found that GPT-4o, which powers the recently launched alpha of Advanced Voice Mode in ChatGPT, can behave in strange ways. In a new “red teaming” report, OpenAI reveals some of GPT-4o’s weirder quirks, like mimicking the voice of the person speaking to it or randomly shouting in the middle of a conversation.

ChatGPT’s mobile app reports its biggest month yet

After a big jump following the release of OpenAI’s new GPT-4o “omni” model, the mobile version of ChatGPT has now seen its biggest month of revenue yet. The app pulled in $28 million in net revenue from the App Store and Google Play in July, according to data provided by app intelligence firm Appfigures.

OpenAI could potentially catch students who cheat with ChatGPT

OpenAI has built a watermarking tool that could potentially catch students who cheat by using ChatGPT — but The Wall Street Journal reports that the company is debating whether to actually release it. An OpenAI spokesperson confirmed to TechCrunch that the company is researching tools that can detect writing from ChatGPT, but said it’s taking a “deliberate approach” to releasing it.

ChatGPT’s advanced Voice Mode starts rolling out to some users

OpenAI is giving users their first access to GPT-4o’s updated realistic audio responses. The alpha version is now available to a small group of ChatGPT Plus users, and the company says the feature will gradually roll out to all Plus users in the fall of 2024. The rollout follows controversy over the voice’s similarity to Scarlett Johansson’s, which led OpenAI to delay the feature’s release.

We’re starting to roll out advanced Voice Mode to a small group of ChatGPT Plus users. Advanced Voice Mode offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions. pic.twitter.com/64O94EhhXK — OpenAI (@OpenAI) July 30, 2024

OpenAI announces new search prototype, SearchGPT

OpenAI is testing SearchGPT, a new AI search experience to compete with Google. SearchGPT aims to elevate search queries with “timely answers” from across the internet, as well as the ability to ask follow-up questions. The temporary prototype is currently only available to a small group of users and its publisher partners, like The Atlantic, for testing and feedback.

We’re testing SearchGPT, a temporary prototype of new AI search features that give you fast and timely answers with clear and relevant sources. We’re launching with a small group of users for feedback and plan to integrate the experience into ChatGPT. https://t.co/dRRnxXVlGh pic.twitter.com/iQpADXmllH — OpenAI (@OpenAI) July 25, 2024

OpenAI could lose $5 billion this year, report claims

A new report from The Information, based on undisclosed financial information, claims OpenAI could lose up to $5 billion due to how costly the business is to operate. The report also says the company could spend as much as $7 billion in 2024 to train and operate ChatGPT.

OpenAI unveils GPT-4o mini

OpenAI released its latest small AI model, GPT-4o mini. The company says GPT-4o mini, which is cheaper and faster than OpenAI’s current AI models, outperforms industry-leading small AI models on reasoning tasks involving text and vision. GPT-4o mini will replace GPT-3.5 Turbo as the smallest model OpenAI offers.

OpenAI partners with Los Alamos National Laboratory for bioscience research

OpenAI announced a partnership with the Los Alamos National Laboratory to study how scientists can use AI to advance research in healthcare and bioscience. This follows other health-related research collaborations at OpenAI, including with Moderna and Color Health.

OpenAI and Los Alamos National Laboratory announce partnership to study AI for bioscience research https://t.co/WV4XMZsHBA — OpenAI (@OpenAI) July 10, 2024

OpenAI makes CriticGPT to find mistakes in GPT-4

OpenAI announced it has trained a model based on GPT-4, dubbed CriticGPT, which aims to find errors in ChatGPT’s code output so that OpenAI can make improvements and better help the human “AI trainers” who rate the quality and accuracy of ChatGPT responses.

We’ve trained a model, CriticGPT, to catch bugs in GPT-4’s code. We’re starting to integrate such models into our RLHF alignment pipeline to help humans supervise AI on difficult tasks: https://t.co/5oQYfrpVBu — OpenAI (@OpenAI) June 27, 2024

OpenAI inks content deal with TIME

OpenAI and TIME announced a multi-year strategic partnership that brings the magazine’s content, both modern and archival, to ChatGPT. As part of the deal, TIME will also gain access to OpenAI’s technology in order to develop new audience-based products.

We’re partnering with TIME and its 101 years of archival content to enhance responses and provide links to stories on https://t.co/LgvmZUae9M : https://t.co/xHAYkYLxA9 — OpenAI (@OpenAI) June 27, 2024

OpenAI delays ChatGPT’s new Voice Mode

OpenAI planned to start rolling out its advanced Voice Mode feature to a small group of ChatGPT Plus users in late June, but it says lingering issues forced it to postpone the launch to July. OpenAI says Advanced Voice Mode might not launch for all ChatGPT Plus customers until the fall, depending on whether it meets certain internal safety and reliability checks.

ChatGPT releases app for Mac

ChatGPT for macOS is now available for all users. With the app, users can quickly call up ChatGPT by using the keyboard combination of Option + Space. The app allows users to upload files and other photos, as well as speak to ChatGPT from their desktop and search through their past conversations.

The ChatGPT desktop app for macOS is now available for all users. Get faster access to ChatGPT to chat about email, screenshots, and anything on your screen with the Option + Space shortcut: https://t.co/2rEx3PmMqg pic.twitter.com/x9sT8AnjDm — OpenAI (@OpenAI) June 25, 2024

Apple brings ChatGPT to its apps, including Siri

Apple announced at WWDC 2024 that it is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems. The ChatGPT integrations, powered by GPT-4o, will arrive on iOS 18, iPadOS 18 and macOS Sequoia later this year, and will be free without the need to create a ChatGPT or OpenAI account. Features exclusive to paying ChatGPT users will also be available through Apple devices.

Apple is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems #WWDC24 Read more: https://t.co/0NJipSNJoS pic.twitter.com/EjQdPBuyy4 — TechCrunch (@TechCrunch) June 10, 2024

House Oversight subcommittee invites Scarlett Johansson to testify about ‘Sky’ controversy

Scarlett Johansson has been invited to testify about the controversy surrounding OpenAI’s Sky voice at a hearing for the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation. In a letter, Rep. Nancy Mace said Johansson’s testimony could “provide a platform” for concerns around deepfakes.

ChatGPT experiences two outages in a single day

ChatGPT was down twice in one day: a multi-hour outage in the early hours of Tuesday morning, and another outage later in the day that is still ongoing. Anthropic’s Claude and Perplexity also experienced some issues.

You're not alone, ChatGPT is down once again. pic.twitter.com/Ydk2vNOOK6 — TechCrunch (@TechCrunch) June 4, 2024

The Atlantic and Vox Media ink content deals with OpenAI

The Atlantic and Vox Media have announced licensing and product partnerships with OpenAI. Both agreements allow OpenAI to use the publishers’ current content to generate responses in ChatGPT, which will feature citations to relevant articles. Vox Media says it will use OpenAI’s technology to build “audience-facing and internal applications,” while The Atlantic will build a new experimental product called Atlantic Labs.

I am delighted that @theatlantic now has a strategic content & product partnership with @openai . Our stories will be discoverable in their new products and we'll be working with them to figure out new ways that AI can help serious, independent media : https://t.co/nfSVXW9KpB — nxthompson (@nxthompson) May 29, 2024

OpenAI signs 100K PwC workers to ChatGPT’s enterprise tier

OpenAI announced a new deal with management consulting giant PwC. The company will become OpenAI’s biggest customer to date, covering 100,000 users, and will become OpenAI’s first partner for selling its enterprise offerings to other businesses.

OpenAI says it is training its GPT-4 successor

OpenAI announced in a blog post that it has recently begun training its next flagship model to succeed GPT-4. The news came in an announcement of its new safety and security committee, which is responsible for informing safety and security decisions across OpenAI’s products.

Former OpenAI director claims the board found out about ChatGPT on Twitter

On The TED AI Show podcast, former OpenAI board member Helen Toner revealed that the board did not know about ChatGPT until its launch in November 2022. Toner also said that Sam Altman gave the board inaccurate information about the safety processes the company had in place and that he didn’t disclose his involvement in the OpenAI Startup Fund.

Sharing this, recorded a few weeks ago. Most of the episode is about AI policy more broadly, but this was my first longform interview since the OpenAI investigation closed, so we also talked a bit about November. Thanks to @bilawalsidhu for a fun conversation! https://t.co/h0PtK06T0K — Helen Toner (@hlntnr) May 28, 2024

ChatGPT’s mobile app revenue saw biggest spike yet following GPT-4o launch

The launch of GPT-4o has driven the company’s biggest-ever spike in revenue on mobile, despite the model being freely available on the web. Mobile users are being pushed to upgrade to its $19.99 monthly subscription, ChatGPT Plus, if they want to experiment with OpenAI’s most recent launch.

OpenAI to remove ChatGPT’s Scarlett Johansson-like voice

After demoing its new GPT-4o model last week, OpenAI announced it is pausing one of its voices, Sky, after users found that it sounded similar to Scarlett Johansson in “Her.”

OpenAI explained in a blog post that Sky’s voice is “not an imitation” of the actress and that AI voices should not intentionally mimic the voice of a celebrity. The blog post went on to explain how the company chose its voices: Breeze, Cove, Ember, Juniper and Sky.

We’ve heard questions about how we chose the voices in ChatGPT, especially Sky. We are working to pause the use of Sky while we address them. Read more about how we chose these voices: https://t.co/R8wwZjU36L — OpenAI (@OpenAI) May 20, 2024

ChatGPT lets you add files from Google Drive and Microsoft OneDrive

OpenAI announced new updates for easier data analysis within ChatGPT. Users can now upload files directly from Google Drive and Microsoft OneDrive, interact with tables and charts, and export customized charts for presentations. The company says these improvements will be added to GPT-4o in the coming weeks.

We're rolling out interactive tables and charts along with the ability to add files directly from Google Drive and Microsoft OneDrive into ChatGPT. Available to ChatGPT Plus, Team, and Enterprise users over the coming weeks. https://t.co/Fu2bgMChXt pic.twitter.com/M9AHLx5BKr — OpenAI (@OpenAI) May 16, 2024

OpenAI inks deal to train AI on Reddit data

OpenAI announced a partnership with Reddit that will give the company access to “real-time, structured and unique content” from the social network. Content from Reddit will be incorporated into ChatGPT, and the companies will work together to bring new AI-powered features to Reddit users and moderators.

We’re partnering with Reddit to bring its content to ChatGPT and new products: https://t.co/xHgBZ8ptOE — OpenAI (@OpenAI) May 16, 2024

OpenAI debuts GPT-4o “omni” model now powering ChatGPT

OpenAI’s spring update event saw the reveal of its new omni model, GPT-4o, which has a black hole-like interface, as well as voice and vision capabilities that feel eerily like something out of “Her.” GPT-4o is set to roll out “iteratively” across its developer and consumer-facing products over the next few weeks.

OpenAI demos real-time language translation with its latest GPT-4o model. pic.twitter.com/pXtHQ9mKGc — TechCrunch (@TechCrunch) May 13, 2024

OpenAI to build a tool that lets content creators opt out of AI training

The company announced it’s building a tool, Media Manager, that will allow creators to better control how their content is being used to train generative AI models — and give them an option to opt out. The goal is to have the new tool in place and ready to use by 2025.

OpenAI explores allowing AI porn

In a new peek behind the curtain of its AI’s secret instructions, OpenAI also released a new NSFW policy. Though it’s intended to start a conversation about how it might allow explicit images and text in its AI products, it raises questions about whether OpenAI — or any generative AI vendor — can be trusted to handle sensitive content ethically.

OpenAI and Stack Overflow announce partnership

In a new partnership, OpenAI will get access to developer platform Stack Overflow’s API and will get feedback from developers to improve the performance of their AI models. In return, OpenAI will include attributions to Stack Overflow in ChatGPT. However, the deal was not favorable to some Stack Overflow users, leading some to sabotage their answers in protest.

U.S. newspapers file copyright lawsuit against OpenAI and Microsoft

Alden Global Capital-owned newspapers, including the New York Daily News, the Chicago Tribune, and the Denver Post, are suing OpenAI and Microsoft for copyright infringement. The lawsuit alleges that the companies stole millions of copyrighted articles “without permission and without payment” to bolster ChatGPT and Copilot.

OpenAI inks content licensing deal with Financial Times

OpenAI has partnered with another news publisher in Europe, London’s Financial Times, which it will pay for access to content. “Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and rich links to FT journalism in response to relevant queries,” the FT wrote in a press release.

OpenAI opens Tokyo hub, adds GPT-4 model optimized for Japanese

OpenAI is opening a new office in Tokyo and has plans for a GPT-4 model optimized specifically for the Japanese language. The move underscores how OpenAI will likely need to localize its technology to different languages as it expands.

Sam Altman pitches ChatGPT Enterprise to Fortune 500 companies

According to Reuters, OpenAI’s Sam Altman hosted hundreds of executives from Fortune 500 companies across several cities in April, pitching versions of its AI services intended for corporate use.

OpenAI releases “more direct, less verbose” version of GPT-4 Turbo

Premium ChatGPT users — customers paying for ChatGPT Plus, Team or Enterprise — can now use an updated and enhanced version of GPT-4 Turbo. The new model brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base.

Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding. Source: https://t.co/fjoXDCOnPr pic.twitter.com/I4fg4aDq1T — OpenAI (@OpenAI) April 12, 2024

ChatGPT no longer requires an account — but there’s a catch

You can now use ChatGPT without signing up for an account, but it won’t be quite the same experience. You won’t be able to save or share chats, use custom instructions, or access other features associated with a persistent account. This version of ChatGPT will have “slightly more restrictive content policies,” according to OpenAI. When TechCrunch asked for more details, however, the response was unclear:

“The signed out experience will benefit from the existing safety mitigations that are already built into the model, such as refusing to generate harmful content. In addition to these existing mitigations, we are also implementing additional safeguards specifically designed to address other forms of content that may be inappropriate for a signed out experience,” a spokesperson said.

OpenAI’s chatbot store is filling up with spam

TechCrunch found that OpenAI’s GPT Store is flooded with bizarre, potentially copyright-infringing GPTs. A cursory search pulls up GPTs that claim to generate art in the style of Disney and Marvel properties, but serve as little more than funnels to third-party paid services and advertise themselves as being able to bypass AI content detection tools.

The New York Times responds to OpenAI’s claims that it “hacked” ChatGPT for its copyright lawsuit

In a court filing opposing OpenAI’s motion to dismiss The New York Times’ lawsuit alleging copyright infringement, the newspaper asserted that “OpenAI’s attention-grabbing claim that The Times ‘hacked’ its products is as irrelevant as it is false.” The New York Times also claimed that some users of ChatGPT used the tool to bypass its paywalls.

OpenAI VP doesn’t say whether artists should be paid for training data

At a SXSW 2024 panel, Peter Deng, OpenAI’s VP of consumer product, dodged a question on whether artists whose work was used to train generative AI models should be compensated. While OpenAI lets artists “opt out” of and remove their work from the datasets that the company uses to train its image-generating models, some artists have described the tool as onerous.

A new report estimates that ChatGPT uses more than half a million kilowatt-hours of electricity per day

ChatGPT’s environmental impact appears to be massive. According to a report from The New Yorker, ChatGPT uses an estimated 17,000 times as much electricity as the average U.S. household to respond to roughly 200 million requests each day.
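Those figures are roughly self-consistent. A back-of-the-envelope check, assuming an average U.S. household uses about 29 kWh per day (a commonly cited EIA average, not a number from the report):

```python
# Back-of-the-envelope check of the reported estimate.
HOUSEHOLD_KWH_PER_DAY = 29          # assumed U.S. household average (EIA)
MULTIPLIER = 17_000                 # from the report
REQUESTS_PER_DAY = 200_000_000      # from the report

# Total daily usage: close to the headline figure of half a million kWh.
chatgpt_kwh_per_day = MULTIPLIER * HOUSEHOLD_KWH_PER_DAY   # 493,000 kWh/day

# Per request, that works out to only a few watt-hours.
wh_per_request = chatgpt_kwh_per_day * 1000 / REQUESTS_PER_DAY  # ~2.5 Wh
```

The striking total, in other words, comes from the sheer request volume rather than any single query being expensive.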

ChatGPT can now read its answers aloud

OpenAI released a new Read Aloud feature for the web version of ChatGPT as well as the iOS and Android apps. The feature allows ChatGPT to read its responses to queries in one of five voice options and can speak 37 languages, according to the company. Read aloud is available on both GPT-4 and GPT-3.5 models.

ChatGPT can now read responses to you. On iOS or Android, tap and hold the message and then tap “Read Aloud”. We’ve also started rolling on web – click the "Read Aloud" button below the message. pic.twitter.com/KevIkgAFbG — OpenAI (@OpenAI) March 4, 2024

OpenAI partners with Dublin City Council to use GPT-4 for tourism

As part of a new partnership with OpenAI, the Dublin City Council will use GPT-4 to craft personalized itineraries for travelers, including recommendations of unique and cultural destinations, in an effort to support tourism across Europe.

A law firm used ChatGPT to justify a six-figure bill for legal services

New York-based law firm Cuddy Law was criticized by a judge for using ChatGPT to calculate its hourly billing rate. The firm submitted a $113,500 bill to the court, which was then halved by District Judge Paul Engelmayer, who called the figure “well above” reasonable demands.

ChatGPT experienced a bizarre bug for several hours

ChatGPT users found that ChatGPT was giving nonsensical answers for several hours, prompting OpenAI to investigate the issue. Incidents varied from repetitive phrases to confusing and incorrect answers to queries. The issue was resolved by OpenAI the following morning.

Match Group announced deal with OpenAI with a press release co-written by ChatGPT

The dating app giant home to Tinder, Match and OkCupid announced an enterprise agreement with OpenAI in an enthusiastic press release written with the help of ChatGPT. The AI tech will be used to help employees with work-related tasks and comes as part of Match’s $20 million-plus bet on AI in 2024.

ChatGPT will now remember — and forget — things you tell it to

As part of a test, OpenAI began rolling out new “memory” controls for a small portion of ChatGPT free and paid users, with a broader rollout to follow. The controls let you tell ChatGPT explicitly to remember something, see what it remembers or turn off its memory altogether. Note that deleting a chat from chat history won’t erase ChatGPT’s or a custom GPT’s memories — you must delete the memory itself.

We’re testing ChatGPT's ability to remember things you discuss to make future chats more helpful. This feature is being rolled out to a small portion of Free and Plus users, and it's easy to turn on or off. https://t.co/1Tv355oa7V pic.twitter.com/BsFinBSTbs — OpenAI (@OpenAI) February 13, 2024

OpenAI begins rolling out “Temporary Chat” feature

Initially limited to a small subset of free and subscription users, Temporary Chat lets you have a dialogue with a blank slate. With Temporary Chat, ChatGPT won’t be aware of previous conversations or access memories but will follow custom instructions if they’re enabled.

But, OpenAI says it may keep a copy of Temporary Chat conversations for up to 30 days for “safety reasons.”

Use temporary chat for conversations in which you don’t want to use memory or appear in history. pic.twitter.com/H1U82zoXyC — OpenAI (@OpenAI) February 13, 2024

ChatGPT users can now invoke GPTs directly in chats

Paid users of ChatGPT can now bring GPTs into a conversation by typing “@” and selecting a GPT from the list. The chosen GPT will have an understanding of the full conversation, and different GPTs can be “tagged in” for different use cases and needs.

You can now bring GPTs into any conversation in ChatGPT – simply type @ and select the GPT. This allows you to add relevant GPTs with the full context of the conversation. pic.twitter.com/Pjn5uIy9NF — OpenAI (@OpenAI) January 30, 2024

ChatGPT is reportedly leaking usernames and passwords from users’ private conversations

Screenshots provided to Ars Technica suggest that ChatGPT is potentially leaking unpublished research papers, login credentials and private information from its users. An OpenAI representative told Ars Technica that the company was investigating the report.

ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI

OpenAI has been told it’s suspected of violating European Union privacy rules, following a multi-month investigation of ChatGPT by Italy’s data protection authority. Details of the draft findings haven’t been disclosed, but in a response, OpenAI said: “We want our AI to learn about the world, not about private individuals.”

OpenAI partners with Common Sense Media to collaborate on AI guidelines

In an effort to win the trust of parents and policymakers, OpenAI announced it’s partnering with Common Sense Media to collaborate on AI guidelines and education materials for parents, educators and young adults. The organization works to identify and minimize tech harms to young people and previously flagged ChatGPT as lacking in transparency and privacy.

OpenAI responds to Congressional Black Caucus about lack of diversity on its board

After a letter from the Congressional Black Caucus questioned the lack of diversity on OpenAI’s board, the company responded. The response, signed by CEO Sam Altman and Chairman of the Board Bret Taylor, said building a complete and diverse board was one of the company’s top priorities and that it was working with an executive search firm to assist it in finding talent.

OpenAI drops prices and fixes ‘lazy’ GPT-4 that refused to work

In a blog post, OpenAI announced price drops for GPT-3.5’s API, with input prices dropping by 50% and output prices by 25%, to $0.0005 per thousand tokens in and $0.0015 per thousand tokens out. GPT-4 Turbo also got a new preview model for API use, which includes an interesting fix that aims to reduce the “laziness” that users have experienced.

Expanding the platform for @OpenAIDevs : new generation of embedding models, updated GPT-4 Turbo, and lower pricing on GPT-3.5 Turbo. https://t.co/7wzCLwB1ax — OpenAI (@OpenAI) January 25, 2024

OpenAI bans developer of a bot impersonating a presidential candidate

OpenAI has suspended AI startup Delphi, which developed a bot impersonating Rep. Dean Phillips (D-Minn.) to help bolster his presidential campaign. The ban comes just weeks after OpenAI published a plan to combat election misinformation, which listed “chatbots impersonating candidates” as against its policy.

OpenAI announces partnership with Arizona State University

Beginning in February, Arizona State University will have full access to ChatGPT’s Enterprise tier, which the university plans to use to build a personalized AI tutor, develop AI avatars, bolster its prompt engineering course and more. It marks OpenAI’s first partnership with a higher education institution.

Winner of a literary prize reveals around 5% of her novel was written by ChatGPT

After receiving the prestigious Akutagawa Prize for her novel The Tokyo Tower of Sympathy, author Rie Kudan admitted that around 5% of the book quoted ChatGPT-generated sentences “verbatim.” Interestingly enough, the novel revolves around a futuristic world with a pervasive presence of AI.

Sam Altman teases video capabilities for ChatGPT and the release of GPT-5

In a conversation with Bill Gates on the Unconfuse Me podcast, Sam Altman confirmed an upcoming release of GPT-5 that will be “fully multimodal with speech, image, code, and video support.” Altman said users can expect to see GPT-5 drop sometime in 2024.

OpenAI announces team to build ‘crowdsourced’ governance ideas into its models

OpenAI is forming a Collective Alignment team of researchers and engineers to create a system for collecting and “encoding” public input on its models’ behaviors into OpenAI products and services. This comes as a part of OpenAI’s public program to award grants to fund experiments in setting up a “democratic process” for determining the rules AI systems follow.

OpenAI unveils plan to combat election misinformation

In a blog post, OpenAI announced users will not be allowed to build applications for political campaigning and lobbying until the company works out how effective its tools are for “personalized persuasion.”

Users will also be banned from creating chatbots that impersonate candidates or government institutions, and from using OpenAI tools to misrepresent the voting process or otherwise discourage voting.

The company is also testing out a tool that detects DALL-E generated images and will incorporate access to real-time news, with attribution, in ChatGPT.

Snapshot of how we’re preparing for 2024’s worldwide elections: • Working to prevent abuse, including misleading deepfakes • Providing transparency on AI-generated content • Improving access to authoritative voting information https://t.co/qsysYy5l0L — OpenAI (@OpenAI) January 15, 2024

OpenAI changes policy to allow military applications

In an unannounced update to its usage policy, OpenAI removed language previously prohibiting the use of its products for the purposes of “military and warfare.” In an additional statement, OpenAI confirmed that the language was changed in order to accommodate military customers and projects that do not violate their ban on efforts to use their tools to “harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

ChatGPT subscription aimed at small teams debuts

Aptly called ChatGPT Team, the new plan provides a dedicated workspace for teams of up to 149 people using ChatGPT, as well as admin tools for team management. In addition to gaining access to GPT-4, GPT-4 with Vision and DALL-E 3, ChatGPT Team lets teams build and share GPTs for their business needs.

OpenAI’s GPT store officially launches

After some back and forth over the last few months, OpenAI’s GPT Store is finally here. The feature lives in a new tab in the ChatGPT web client, and includes a range of GPTs developed both by OpenAI’s partners and the wider dev community.

To access the GPT Store, users must be subscribed to one of OpenAI’s premium ChatGPT plans — ChatGPT Plus, ChatGPT Enterprise or the newly launched ChatGPT Team.

the GPT store is live! https://t.co/AKg1mjlvo2 fun speculation last night about which GPTs will be doing the best by the end of today. — Sam Altman (@sama) January 10, 2024

Developing AI models would be “impossible” without copyrighted materials, OpenAI claims

Following a proposed ban on using news publications and books to train AI chatbots in the U.K., OpenAI submitted a plea to the House of Lords communications and digital committee. OpenAI argued that it would be “impossible” to train AI models without using copyrighted materials, and that they believe copyright law “does not forbid training.”

OpenAI claims The New York Times’ copyright lawsuit is without merit

OpenAI published a public response to The New York Times’s lawsuit against them and Microsoft for allegedly violating copyright law, claiming that the case is without merit.

In the response, OpenAI reiterates its view that training AI models using publicly available data from the web is fair use. It also makes the case that regurgitation is less likely to occur with training data from a single source and places the onus on users to “act responsibly.”

We build AI to empower people, including journalists. Our position on the @nytimes lawsuit: • Training is fair use, but we provide an opt-out • "Regurgitation" is a rare bug we're driving to zero • The New York Times is not telling the full story https://t.co/S6fSaDsfKb — OpenAI (@OpenAI) January 8, 2024

OpenAI’s app store for GPTs planned to launch next week

After being delayed in December, OpenAI plans to launch its GPT Store sometime in the coming week, according to an email viewed by TechCrunch. OpenAI says developers building GPTs will have to review the company’s updated usage policies and GPT brand guidelines to ensure their GPTs are compliant before they’re eligible for listing in the GPT Store. OpenAI’s update notably didn’t include any information on the expected monetization opportunities for developers listing their apps on the storefront.

GPT Store launching next week – OpenAI pic.twitter.com/I6mkZKtgZG — Manish Singh (@refsrc) January 4, 2024

OpenAI moves to shrink regulatory risk in EU around data privacy

In an email, OpenAI detailed an incoming update to its terms, including changing the OpenAI entity providing services to EEA and Swiss residents to OpenAI Ireland Limited. The move appears to be intended to shrink its regulatory risk in the European Union, where the company has been under scrutiny over ChatGPT’s impact on people’s privacy.

What is ChatGPT? How does it work?

ChatGPT is a general-purpose chatbot developed by tech startup OpenAI that uses artificial intelligence to generate text in response to a user’s prompt. The chatbot is powered by GPT-4, a large language model that uses deep learning to produce human-like text.

When did ChatGPT get released?

ChatGPT was released for public use on November 30, 2022.

What is the latest version of ChatGPT?

Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

Can I use ChatGPT for free?

Yes. In addition to the paid ChatGPT Plus, there is a free version of ChatGPT that only requires a sign-in.

Who uses ChatGPT?

Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.

What companies use ChatGPT?

Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.

Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can talk to. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help them onboard into the web3 space.

What does GPT mean in ChatGPT?

GPT stands for Generative Pre-Trained Transformer.

What is the difference between ChatGPT and a chatbot?

A chatbot can be any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, some chatbots are rules-based, in the sense that they give canned responses to questions.

ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.
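The difference can be sketched in a few lines of Python. A rules-based bot simply maps known inputs to canned replies (the questions and answers below are made up for illustration), whereas an LLM-based chatbot like ChatGPT generates novel text from a model:

```python
# A minimal rules-based chatbot: canned responses keyed on exact questions.
# The questions and replies here are invented for illustration.
CANNED = {
    "what are your hours?": "We're open 9am-5pm, Monday through Friday.",
    "where are you located?": "123 Main Street.",
}

def rules_based_reply(message):
    # No AI involved: unknown inputs fall through to a default reply.
    return CANNED.get(message.strip().lower(), "Sorry, I don't understand.")
```

An LLM-powered chatbot, by contrast, has no fixed answer table: it predicts a plausible reply token by token, which is why it can respond to questions its developers never anticipated.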

Can ChatGPT write essays?

Yes. Given a few simple prompts, ChatGPT can generate a full essay on almost any subject within seconds, though the quality and factual accuracy of the result vary.

Can ChatGPT commit libel?

Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.

We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

Does ChatGPT have an app?

Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?

ChatGPT’s character limit isn’t documented anywhere. However, users have noted that there are some character limitations after around 500 words.

Does ChatGPT have an API?

Yes, it was released March 1, 2023.

What are some sample everyday uses for ChatGPT?

Everyday uses include programming help, scripts, email replies, listicles, blog ideas, summarization and more.

What are some advanced uses for ChatGPT?

Advanced uses include debugging code, explaining programming languages and scientific concepts, complex problem solving and more.

How good is ChatGPT at writing code?

It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

Can you save a ChatGPT chat?

Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

Are there alternatives to ChatGPT?

Yes. There are multiple AI-powered chatbot competitors, such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.

How does ChatGPT handle data privacy?

OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to request deletion of AI-generated references about you, although OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”

The web form for requesting deletion of your data is titled “OpenAI Personal Data Removal Request.”

In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users toward more information about requesting an opt-out: “See here for instructions on how you can opt out of our use of your information to train our models.”

What controversies have surrounded ChatGPT?

Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde. Two users then tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported that the publication was publishing articles completely generated by AI under a mysterious byline. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.

Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.

There have also been cases of ChatGPT falsely accusing individuals of crimes.

Where can I find examples of ChatGPT prompts?

Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Can ChatGPT be detected?

Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.

Are ChatGPT chats public?

No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?

None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?

Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.


The Learning Agency


Assessing ChatGPT’s Writing Evaluation Skills Using Benchmark Data

Introduction

For many, one of ChatGPT’s most promising educational applications is as an assisted writing feedback tool (AWFT). In other words, ChatGPT could help students hone their writing skills and support teachers in evaluating student work. Many educators already use tools like ChatGPT in the classroom. 

However, for ChatGPT to be an effective feedback tool, it must understand the different components of student writing and provide targeted feedback anchored in the organization and development of these components. Research shows struggling writers can improve when receiving this formative and granular feedback. ChatGPT is less likely to improve writing outcomes by offering summative feedback alone to students, such as holistic evaluations of writing samples or letter grades. 

For ChatGPT to be an effective feedback tool, it must understand the different components of student writing and provide targeted feedback anchored in the organization and development of these components.

It is unknown how well ChatGPT can provide students with either summative or formative feedback. Thus, we tested the performance of ChatGPT’s (version 4) default model at providing summative and formative feedback on student writing, using two benchmarks for automated writing evaluation: 1) the Hewlett Foundation ASAP dataset, a corpus of student essays that pioneered innovation in automated essay scoring algorithms, and 2) the PERSUADE (Persuasive Essays for Rating, Selecting, and Understanding Argumentative and Discourse Elements) Corpus, a large-scale collection of student essays with annotated discourse elements related to individual components of student writing, along with their effectiveness in supporting a student’s ideas.

ChatGPT shows human-level performance in holistic scoring using the ASAP dataset as a benchmark, but it struggles with more granular, discourse-level evaluation using the PERSUADE dataset as a benchmark. ChatGPT’s greatest challenge was identifying the distinct elements of argumentative writing (e.g., claim, counterclaim, rebuttal, supporting evidence) and breaking an essay into meaningful and coherent segments. ChatGPT also tends to be a lenient grader when focused on smaller writing segments, like discourse elements, rating them at a higher effectiveness level than a human would. This finding isn’t entirely surprising because the chatbot comes from a family of generative language models that weren’t trained on data representing tasks like essay segmentation or classification. Put differently, generative language models are much less exposed to annotated essay segments in training than their usual use cases (text generation, sentiment analysis, etc.). Thus, they cannot be expected to be as reliable for discourse-level evaluation.

In assessing ChatGPT’s performance on the ASAP benchmark, we tasked the chatbot with assigning a holistic and analytic score for approximately 300 essays sampled from the ASAP dataset, stratified by score. These essays were sampled from the eight ASAP essay prompts, and we interacted with the chatbot via OpenAI’s API using its standard or default settings (e.g., temperature). We instructed ChatGPT via few-shot prompting, a method where you provide a few examples related to the task to demonstrate ideal performance and supplement the general knowledge of the language model. This was done deliberately to see how well the default version of ChatGPT could perform without special-purpose tuning. The API prompts were also adjusted to accommodate variations in rubrics and score ranges across these sets.
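Few-shot prompting of this kind can be sketched as follows. This is not the authors’ actual prompt; the helper function and message contents are illustrative, using the chat-message format (a list of role/content dicts) that OpenAI’s chat API expects:

```python
def build_few_shot_messages(rubric, examples, essay):
    """Assemble a chat-style message list for few-shot essay scoring.

    `rubric` is the scoring rubric text, `examples` is a list of
    (essay_text, score) pairs demonstrating ideal ratings, and `essay`
    is the new essay to score. All names and contents are illustrative.
    """
    messages = [{"role": "system",
                 "content": "You are an essay rater. Score essays "
                            "using this rubric:\n" + rubric}]
    for example_text, score in examples:
        # Each worked example is a user turn plus the ideal assistant reply,
        # so the model sees demonstrations of the desired output format.
        messages.append({"role": "user", "content": example_text})
        messages.append({"role": "assistant", "content": f"Score: {score}"})
    # The essay to be scored comes last, as the final user turn.
    messages.append({"role": "user", "content": essay})
    return messages
```

Sending such a list to a chat-completion endpoint yields a scored reply in the same "Score: N" format the examples demonstrate, which makes the output easy to parse.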

Performance was evaluated using the quadratic weighted kappa, a standard metric for comparing machine-generated holistic scores against human evaluations. This metric was computed for each essay set by comparing the final score to GPT-4’s predictions. Quadratic weighted kappa was also calculated for the machine-generated analytic scores (e.g., organization, sentence fluency, conventions) by comparing GPT-4’s predictions with the human raters.
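For reference, quadratic weighted kappa can be computed from two lists of integer scores as below. This is a generic implementation of the standard formula, not the authors’ evaluation code:

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa between two lists of integer scores.

    Disagreements are penalized by the squared distance between scores,
    so being off by two ratings costs four times as much as being off by one.
    """
    n = max_rating - min_rating + 1
    # Observed agreement matrix: O[i][j] counts essays scored i by A and j by B.
    O = [[0.0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating][b - min_rating] += 1
    total = len(rater_a)
    # Marginal score distributions, used to build the expected-by-chance matrix.
    hist_a = Counter(a - min_rating for a in rater_a)
    hist_b = Counter(b - min_rating for b in rater_b)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2)       # quadratic weight
            E = hist_a[i] * hist_b[j] / total          # expected count by chance
            num += w * O[i][j]
            den += w * E
    return 1.0 - num / den
```

A kappa of 1.0 means perfect agreement, 0.0 means agreement no better than chance given each rater’s score distribution, and negative values mean worse than chance.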

In our assessment of ChatGPT’s performance on the PERSUADE benchmark, we tasked the chatbot with three key assignments:

  • Segmenting an essay into distinct rhetorical or argumentative segments. 
  • Labeling each segment according to a discourse element type (e.g., lead, position, claim, counterclaim, rebuttal, evidence, concluding statement).
  • Rating the effectiveness of each discourse element as Ineffective, Adequate, or Effective.

We randomly sampled 300 essays from the PERSUADE dataset for ChatGPT’s evaluation and interacted with the chatbot via OpenAI’s API. Using a standardized API prompt for each essay, ChatGPT (GPT-4 version) received a scoring and segmentation rubric along with several examples for each annotation label of a discourse element type. Additionally, the prompt instructed ChatGPT to generate a rationale before assigning a label, following a chain-of-thought prompting method. We assessed accuracy by checking whether the chatbot matched the human-annotated writing elements with at least 50% text overlap across both the machine prediction and ground truth, along with correctly identifying the discourse type and effectiveness rating.
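The 50% overlap criterion can be made concrete with character offsets. The segment representation below (dicts with hypothetical 'start', 'end', 'type' and 'rating' keys) is our own illustration, not the PERSUADE schema:

```python
def spans_match(pred, gold, min_overlap=0.5):
    """Check whether a predicted segment counts as correct.

    The character overlap must cover at least `min_overlap` of BOTH the
    prediction and the ground truth, and the discourse type and
    effectiveness rating must agree. Keys are hypothetical:
    'start'/'end' are character offsets, 'type' is the discourse
    element label, 'rating' is the effectiveness level.
    """
    overlap = min(pred["end"], gold["end"]) - max(pred["start"], gold["start"])
    if overlap <= 0:
        return False  # spans don't intersect at all
    pred_len = pred["end"] - pred["start"]
    gold_len = gold["end"] - gold["start"]
    return (overlap / pred_len >= min_overlap
            and overlap / gold_len >= min_overlap
            and pred["type"] == gold["type"]
            and pred["rating"] == gold["rating"])
```

Requiring the overlap fraction on both sides prevents a trivially long prediction from "covering" a short gold segment, or vice versa, from counting as a match.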

On the ASAP benchmark, which examines overall essay quality, ChatGPT performed comparably to a human in holistic scoring. Specifically, as shown in Table 1, the quadratic weighted kappa statistic for the chatbot on final holistic scores ranged from 0.67 to 0.84, indicating an acceptable level of human agreement. However, while ChatGPT generally agreed with human raters in holistic scores, the manner in which ChatGPT’s scores differed from those of humans (when they did differ) is not consistent with how humans’ scores differed among themselves. Table 2 presents kappa agreement on score differences as another metric for comparing GPT’s agreement with each human rater to the agreement among the raters themselves. Note that essay set 2 had two holistic domains: 1) a holistic score considering ideas, voice, organization, and style, and 2) a holistic score considering only language conventions and skills. Overall, there was lower agreement (e.g., 0.13 for essay set 1, domain 1) on the patterns of score differences between ChatGPT and human rater 1 compared to the differences between the human raters. This low agreement indicates it will be difficult to predict how ChatGPT’s holistic scoring would diverge from that of a human evaluator.

On the ASAP benchmark, which examines overall essay quality, ChatGPT performed comparably to a human in holistic scoring.

ChatGPT did not perform as well as a human on analytic scoring for specific components such as ideas and conventions in the ASAP benchmark. As shown in Tables 3 and 4, kappa scores for GPT ranged from 0.18 to 0.58 on essay set 7’s analytic traits and from 0.33 to 0.73 on essay set 8’s analytic traits. Across both essay sets, the chatbot demonstrated low agreement for the analytic trait of ideas while uniquely performing poorly for scoring conventions in essay set 7. GPT also demonstrated an inconsistent pattern in how its scores differed from those of human raters, compared to the differences among the human raters themselves, as shown in Tables 5 and 6. Note that combined scores for raters were not available for the individual traits in ASAP essay sets 7 and 8, so comparisons on a “final” holistic score were not possible.

Table 1: Kappa Agreement Between ChatGPT and Human Raters in Holistic Scoring of the ASAP Dataset

Set 1, domain 1: 0.67, 0.72, 0.66, 0.67
Set 2, domain 1: 0.84, 0.81, 0.84, 0.78
Set 2, domain 2: 0.55, 0.80, 0.55, 0.47
Set 3, domain 1: 0.67, 0.77, 0.70, 0.65
Set 4, domain 1: 0.73, 0.85, 0.71, 0.71
Set 5, domain 1: 0.80, 0.75, 0.76, 0.79
Set 6, domain 1: 0.84, 0.78, 0.81, 0.86
Set 7, domain 1: 0.66, 0.72, 0.57, 0.64
Set 8, domain 1: 0.74, 0.62, 0.73, 0.50

Table 2: Kappa Agreement on Score Differences Between ChatGPT and Human Raters in Holistic Scoring of the ASAP Dataset

                   (R1 – GPT) vs (R1 – R2)   (R2 – GPT) vs (R2 – R1)
Set 1, domain 1    0.13                      0.14
Set 2, domain 1    0.30                      0.44
Set 2, domain 2    0.15                      0.15
Set 3, domain 1    0.40                      0.33
Set 4, domain 1    0.48                      0.23
Set 5, domain 1    0.52                      0.26
Set 6, domain 1    0.49                      0.27
Set 7, domain 1    0.53                      0.15
Set 8, domain 1    0.23                      0.56

*R1 = Rater 1, R2 = Rater 2

Table 3: Kappa Agreement Between ChatGPT and Human Raters in Analytic Scoring of the ASAP Dataset, Essay Set 7

Ideas          R1 vs R2    0.70
               R1 vs GPT   0.35
               R2 vs GPT   0.34
Organization   R1 vs R2    0.58
               R1 vs GPT   0.53
               R2 vs GPT   0.58
Style          R1 vs R2    0.54
               R1 vs GPT   0.55
               R2 vs GPT   0.51
Conventions    R1 vs R2    0.57
               R1 vs GPT   0.18
               R2 vs GPT   0.26

Table 4: Kappa Agreement Between ChatGPT and Human Raters in Analytic Scoring of the ASAP Dataset, Essay Set 8

Ideas and Content   R1 vs R2    0.65
                    R1 vs GPT   0.55
                    R2 vs GPT   0.42
Organization        R1 vs R2    0.54
                    R1 vs GPT   0.63
                    R2 vs GPT   0.47
Voice               R1 vs R2    0.40
                    R1 vs GPT   0.61
                    R2 vs GPT   0.33
Word Choice         R1 vs R2    0.54
                    R1 vs GPT   0.64
                    R2 vs GPT   0.37
Sentence Fluency    R1 vs R2    0.58
                    R1 vs GPT   0.68
                    R2 vs GPT   0.44
Conventions         R1 vs R2    0.64
                    R1 vs GPT   0.73
                    R2 vs GPT   0.54

Table 5: Kappa Agreement on Score Differences Between ChatGPT and Human Raters in Analytic Scoring of the ASAP Dataset, Essay Set 7

Ideas          (R1 – GPT) vs (R1 – R2)   0.32
               (R2 – GPT) vs (R2 – R1)   0.26
Organization   (R1 – GPT) vs (R1 – R2)   0.39
               (R2 – GPT) vs (R2 – R1)   0.13
Style          (R1 – GPT) vs (R1 – R2)   0.40
               (R2 – GPT) vs (R2 – R1)   0.29
Conventions    (R1 – GPT) vs (R1 – R2)   0.41
               (R2 – GPT) vs (R2 – R1)   0.10

Table 6: Kappa Agreement on Score Differences Between ChatGPT and Human Raters in Analytic Scoring of the ASAP Dataset, Essay Set 8

Ideas and Content   (R1 – GPT) vs (R1 – R2)   0.41
                    (R2 – GPT) vs (R2 – R1)   0.33
Organization        (R1 – GPT) vs (R1 – R2)   0.51
                    (R2 – GPT) vs (R2 – R1)   0.30
Voice               (R1 – GPT) vs (R1 – R2)   0.64
                    (R2 – GPT) vs (R2 – R1)   0.36
Word Choice         (R1 – GPT) vs (R1 – R2)   0.55
                    (R2 – GPT) vs (R2 – R1)   0.18
Sentence Fluency    (R1 – GPT) vs (R1 – R2)   0.55
                    (R2 – GPT) vs (R2 – R1)   0.28
Conventions         (R1 – GPT) vs (R1 – R2)   0.56
                    (R2 – GPT) vs (R2 – R1)   0.22

On the PERSUADE benchmark, ChatGPT’s predictions were poor across the board. This benchmark measures the performance of algorithms in identifying and evaluating elements of argumentative writing. Table 7 shows that the chatbot matched only 33% of the annotated writing elements, on average, and Table 8 shows it averaged 52% predictive accuracy across the three effectiveness labels (Ineffective, Adequate, Effective). One major challenge ChatGPT encountered with the PERSUADE benchmark was text segmentation, or breaking the essay into meaningful units. First, ChatGPT struggled to faithfully replicate the original essay text in its outputs because it tended to automatically correct spelling, grammar, formatting, and other elements. As a result, only 80% of ChatGPT’s generated segments faithfully reproduced the text of the original segment. This is unsurprising, however, because ChatGPT is based on a generative language model, meaning it is trained to predict and generate text one word at a time, and unintentional omissions or alterations can occur when it replicates text at its default temperature setting.
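The post does not share its matching code, but “faithfully reproduced” can be operationalized as an exact substring check after whitespace normalization. A sketch with invented example text:

```python
import re

def is_faithful(segment: str, essay: str) -> bool:
    """True if a model-generated segment reproduces the original essay
    text verbatim, ignoring only whitespace differences."""
    norm = lambda s: re.sub(r"\s+", " ", s).strip()
    return norm(segment) in norm(essay)

essay = "Shcool uniforms are a bad idea. They cost money and limit expression."
# The model silently fixed the spelling, so this segment no longer matches:
print(is_faithful("School uniforms are a bad idea.", essay))   # False
print(is_faithful("They cost  money and limit expression.", essay))  # True
```

Under a check like this, any silent spelling or grammar correction by the model disqualifies the segment, which is exactly the failure mode described above.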

Table 7: ChatGPT’s Discourse Element Matching Accuracy by Category

Lead                   76%
Position               20%
Claim                  19%
Counterclaim           26%
Rebuttal               21%
Evidence               31%
Concluding Statement   79%

Table 8: ChatGPT’s Predictive Accuracy for Discourse Effectiveness

Ineffective   31%
Adequate      49%
Effective     74%

ChatGPT also tended to overestimate segment sizes compared to the human annotations. Despite there being 2,100 annotated discourse segments to match in the sample, ChatGPT generated only 1,600 predictions, ruling out matches for roughly a quarter of the ground-truth segments from the start. This observation suggests two possible scenarios: (1) ChatGPT may be grouping adjacent writing elements or overestimating segment sizes, and/or (2) ChatGPT may be overly cautious when making predictions. In either case, the model produces a plausible number of predictions but struggles to match them accurately to the human annotations.
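One way to make the matching problem concrete: pair each predicted segment with its best-overlapping annotated segment, and count whatever is left unpaired as misses. The greedy token-overlap sketch below is purely illustrative, not the study’s actual matching procedure:

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard overlap between two text segments."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def match_predictions(predicted, annotated, threshold=0.5):
    """Greedily pair each prediction with its best unmatched annotation;
    any annotation left unpaired counts against recall."""
    matches, unused = [], list(range(len(annotated)))
    for p in predicted:
        if not unused:
            break
        best = max(unused, key=lambda i: jaccard(p, annotated[i]))
        if jaccard(p, annotated[best]) >= threshold:
            matches.append((p, annotated[best]))
            unused.remove(best)
    return matches

# With fewer predictions (2) than annotations (3), at most 2 of 3 can match,
# which mirrors the 1,600-predictions-for-2,100-segments ceiling above.
preds = ["dogs reduce stress", "uniforms cost money"]
truth = ["studies show dogs reduce stress", "school uniforms cost money", "in conclusion"]
matched = match_predictions(preds, truth)
```

However good each individual match is, generating fewer segments than the annotators did caps recall before scoring even begins.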

Additionally, ChatGPT tended to be an “easy grader” when scoring discourse elements of writing. It often rated writing elements as “Effective” when humans had rated them as “Adequate,” or it rated elements as “Adequate” when humans had assessed them as “Ineffective.” More specifically, it rated 40% of Adequate elements as Effective, and 54% of Ineffective elements as Adequate.
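The “easy grader” pattern can be quantified as an upgrade rate per human label. A small sketch with made-up labels (the real analysis would run over the annotated PERSUADE elements):

```python
ORDER = ["Ineffective", "Adequate", "Effective"]

def upgrade_rate(human, model, label):
    """Fraction of elements humans scored `label` that the model bumped
    up to a higher effectiveness level."""
    pairs = [(h, m) for h, m in zip(human, model) if h == label]
    if not pairs:
        return 0.0
    ups = sum(ORDER.index(m) > ORDER.index(h) for h, m in pairs)
    return ups / len(pairs)

# Invented toy labels showing the upgrade pattern described above.
human = ["Adequate", "Adequate", "Ineffective", "Effective", "Ineffective"]
model = ["Effective", "Adequate", "Adequate", "Effective", "Adequate"]
print(upgrade_rate(human, model, "Adequate"))     # 0.5
print(upgrade_rate(human, model, "Ineffective"))  # 1.0
```

Computed this way on the study’s data, the upgrade rates would be the 40% (Adequate to Effective) and 54% (Ineffective to Adequate) figures quoted above.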

ChatGPT’s performance is also significantly lower than that of other language models trained on the PERSUADE dataset, specifically those that won the related Feedback Prize competition series. These winning models matched upwards of 78% of the human-annotated elements, and their effectiveness labels were correct 75% of the time. It’s worth noting that the competition-winning solutions did not use GPT-4 but rather DeBERTa, which belongs to the BERT family of Transformer models and is notably different from GPT in its architecture.

ChatGPT can perform comparably to a human in assigning a final holistic score for a student essay, but it struggles to identify and evaluate the structural pieces of argumentative writing in our experimental setup. It can assess an essay’s overall quality, but it can’t meaningfully break essays into units that form the student’s argument, such as claims, counterclaims, or supporting evidence.

One likely explanation is that the model performs well on datasets that may have leaked into its pretraining or instruction-tuning data, but poorly on datasets that are not public. It can work well in a zero-shot manner when a task resembles those it was fine-tuned on, but PERSUADE’s segmentation task is less common, and simple prompting does not handle it well. Although performance would likely improve with more sophisticated prompting strategies, fine-tuning, and parameter tuning, the gap between ChatGPT and the PERSUADE baselines is large enough to suggest that foundation models have trouble generalizing to new, less common tasks.

One alternative solution is to combine ChatGPT with BERT-based models, which are designed to analyze and categorize text to improve writing evaluation and feedback systems. For instance, ChatGPT can enhance these systems by providing text augmentations, rephrasing, and stylistic improvements. Overall, these distinct families of language models can work powerfully together to provide feedback on student writing and improve learning outcomes.
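One way such a hybrid could be wired together is sketched below. The stub classifier and rephraser stand in for a fine-tuned DeBERTa-style model and ChatGPT respectively; this is a design sketch, not a published system:

```python
def feedback_pipeline(segments, classify, rephrase):
    """Two-stage feedback: a BERT-style classifier labels each discourse
    element, then a generative model drafts suggestions only for weak ones."""
    report = []
    for seg in segments:
        label = classify(seg)  # e.g., DeBERTa fine-tuned on PERSUADE labels
        note = rephrase(seg) if label == "Ineffective" else None
        report.append({"segment": seg, "label": label, "feedback": note})
    return report

# Stubs standing in for the real models.
classify = lambda s: "Ineffective" if len(s.split()) < 5 else "Adequate"
rephrase = lambda s: f"Consider adding evidence to support: {s!r}"
report = feedback_pipeline(
    ["Dogs are good.", "Dogs reduce stress because studies show lower cortisol."],
    classify, rephrase,
)
```

The division of labor follows the results above: the discriminative model handles segmentation and labeling, where ChatGPT was weakest, while the generative model writes the feedback prose, where it is strongest.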

– This article was written by Perpetual Baffour, Tor Saxberg, Ralph Abboud, Ulrich Boser, and Scott Crossley.

1 thought on “Assessing ChatGPT’s Writing Evaluation Skills Using Benchmark Data”

It would be super helpful to see examples. One link was provided to a prompt, which helped. But to understand the implications of the report, seeing examples of when ChatGPT performed well compared to a human rater vs. did not perform well would bring the research to life.



More From Forbes


Train ChatGPT to write your LinkedIn posts in 5 easy steps

LinkedIn has over 1 billion users from 200 countries. 16.2% use it daily. 49 million people look for jobs there every week. LinkedIn is where the money's at. But when you’re a busy founder, you don’t have time to mess around. Writing posts takes ages and you have other things to do.

ChatGPT can help. Here's how to make it write LinkedIn posts just like you in five simple steps. Copy, paste and edit the square brackets in ChatGPT, and keep the same chat window open so the context carries through. Be proud to publish every time.
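If you’d rather script the workflow than paste prompts by hand, the five steps can be queued as one chat transcript so the style-guide context carries through, mirroring the keep-one-window-open advice. The function and the condensed prompt text below are illustrative, not part of the article:

```python
def build_prompt_chain(example_posts, pillars, idea_number, tweaks):
    """Assemble the five-step prompt sequence as chat messages, to be sent
    in turn (with replies interleaved) via any chat-completion API."""
    prompts = [
        "Your task will be to write my LinkedIn posts. Read these posts I "
        f"wrote and create a style guide: {example_posts}",
        f"Now give me 10 punchy ideas for LinkedIn posts about: {', '.join(pillars)}",
        f"Let's go forward with idea {idea_number}. Use my writing style, start "
        "with a hook, and ask me about my audience before writing.",
        f"Change this post to make it more {tweaks}.",
        "Now review this draft and refine it to better match my style.",
    ]
    return [{"role": "user", "content": p} for p in prompts]

chain = build_prompt_chain(["post A", "post B"], ["hiring", "AI"], 3, "punchy")
```

Each message would be sent only after the previous reply arrives, so the model keeps the accumulated style-guide context, just as it would in a single chat window.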

Make AI work for your LinkedIn game: ChatGPT prompts for awesome posts

Make it sound like you.

Your posts should sound like you wrote them. Not a robot. ChatGPT needs to get your style. How you talk. What words you use. Head to LinkedIn, look at your analytics and find your top performing posts of all time, then give ChatGPT those as examples so it can copy your vibe.

"Your task will be to write my LinkedIn posts. First read these posts I wrote. Tell me how I write and create a style guide to use in the new posts. Make the style guide include what kind of words I use, my sentence length, my tone and style and structure. Include what makes my writing unique. [Include example posts]"


Read what it says. If it's right, move on. If not, give it more posts or explain what it got wrong.

Pick your topics

Your goal is to reserve a space in someone’s head for the thing that you do. Especially on LinkedIn. If a connection thinks of someone else first, you’ve lost the game. To achieve this, stick to what you know, and do it consistently. Keep going until people see you as the expert, and then don’t stop. Pick three or four main things you'll post about, which become your pillars. Your followers will know what to expect from you and this matters for showing up online.

"Now, give me 10 ideas for LinkedIn posts about these topics: [list your content pillars, based on the topic you want to own and be known for]. Present the ideas using one sentence for each one and make them punchy."

Look at the ideas and choose the best ones. Take them forward using the next few prompts.

Get the post

Good instructions make good posts. Bad ones make rubbish. Get your instructions right and ChatGPT will pump out killer content. Spend time on this bit because it pays off.

"Let’s go forward with idea [select the idea you want to go forward with first]. Use my writing style that we just described. Start the post with a hook, which should be a short, sharp, punchy line that grabs attention with my target audience but should not be a question. Then add a rehook, a short line that comes after the hook, that sets up the post and signposts the rest of the post. The main part of the post should fill a knowledge gap in my target audience, so I should help them do something in distinct steps, adding value with each one. Write new sentences on new lines, with line breaks. The penultimate line should be a compelling statement that strongly states one of my audience’s strong beliefs back to them. The final line should invite engagement on my post, inviting people to comment. Make sure the answer to this question is something they would be proud to share. Before you write this post, ask me questions about my target audience. Then ask for a personal story to incorporate in the post."

Make it better

First drafts are never perfect. That's fine. Read what ChatGPT writes. Then make it better. This is where okay posts become great ones. The ones people remember and share.

"Change this post to make it more [specify what you’d like changing, for example chatty, professional, simple, punchy]. Do not use these words [include the words used in the post that you wouldn’t use in real life]. Also don’t [anything else you’ve spotted that you don’t like]. Now give me the post without the section titles."

Keep re-prompting until you love it. The more you tell ChatGPT, the better it gets at writing like you.

Double check

ChatGPT forgets things. Chances are, over the chain of prompts you’ve just run, it has drifted from your original style guide. So here’s where you double check. Get ChatGPT to mark its own homework by comparing the draft post with its original instructions.

"Now review this draft and refine it to better match my style. Shorten any sentences that are longer than [specify, for example ten words], and simplify any complex language, including [specify sentences that are too complex]. Replace any words that don’t sound like me with ones I would use. The part that I think doesn’t flow well is [specify that here if applicable], so rewrite it to sound more natural. Add any final touches to make the post engaging and authentic. Once refined, give me the final version ready to post."

Now ask it to repeat this process for the other ideas you liked. Give ChatGPT the rest of the numbers, one by one, until you have a month’s worth of content ready to go.

“Now let’s learn from this process and repeat it to create post idea [number]. Ask me questions before creating the post in the same style.”

Level up your LinkedIn with AI power: ChatGPT prompts to grow

Getting ChatGPT to write your LinkedIn posts saves time. But it's more than that. It helps you post quality stuff that people want to read. Stuff that grows your brand. Make ChatGPT analyze your style, select your topics, then write the perfect prompt. Make it better and double check.

Tonnes of LinkedIn content could be five prompts away. Try these today and watch your likes and comments go through the roof.

Jodie Cook

