
Failed PhD: how scientists have bounced back from doctoral setbacks

  • Arnold, Carrie

Nature, 2023, vol. 620, issue 7975, pp. 911-912. Published online 21 August 2023; doi: 10.1038/d41586-023-02603-8.

Abstract: In a scientific culture that eschews admitting failure, some researchers are staring it in the face — and finding success.

Access to the full text of the articles in this series is restricted.




Daily briefing: How to bounce back from a PhD-project failure

  • PMID: 35851150
  • DOI: 10.1038/d41586-022-01977-5


Similar articles

  • How to bounce back from a PhD-project failure. Forrester N. Nature. 2022 Jul;607(7918):407-409. doi: 10.1038/d41586-022-01900-y. PMID: 35831589. No abstract available.
  • Daily briefing: How clinical trials can bounce back from COVID-19 disruption. Graham F. Nature. 2020 Sep 15. doi: 10.1038/d41586-020-02654-1. Online ahead of print. PMID: 34522044. No abstract available.
  • A Predictive Model and Risk Score for Unplanned Cardiac Surgery Intensive Care Unit Readmissions. Magruder JT, Kashiouris M, Grimm JC, Duquaine D, McGuinness B, Russell S, Orlando M, Sussman M, Whitman GJ. J Card Surg. 2015 Sep;30(9):685-90. doi: 10.1111/jocs.12589. Epub 2015 Jun 30. PMID: 26129715.
  • Shock Index and Characteristics of "Bounce-Back" Patients in the Emergency Department of King Abdullah Medical City (KAMC): A Retrospective Analysis. Alaama AO, Alsulaimani HM, Alghamdi H, Alrehaili MM, Alsaud RN, Almuqati AM, Bukhari NR, Alhassan A, Bakhsh NM, Alwadei MH. Cureus. 2022 Sep 28;14(9):e29692. doi: 10.7759/cureus.29692. eCollection 2022 Sep. PMID: 36321042. Free PMC article.
  • Frequency, Risk Factors, and Outcomes of Unplanned Readmission to the Neurological Intensive Care Unit after Spontaneous Intracerebral Hemorrhage. Tangonan R, Alvarado-Dyer R, Loggini A, Ammar FE, Kumbhani R, Lazaridis C, Kramer C, Goldenberg FD, Mansour A. Neurocrit Care. 2022 Oct;37(2):390-398. doi: 10.1007/s12028-021-01415-w. Epub 2022 Jan 24. PMID: 35072926.


‘You have to suffer for your PhD’: poor mental health among doctoral researchers – new research

Cassie Hazell, Lecturer in Social Sciences, University of Westminster

Disclosure statement

Cassie Hazell has received funding from the Office for Students.

University of Westminster provides funding as a member of The Conversation UK.

PhD students are the future of research, innovation and teaching at universities and beyond – but this future is at risk. There are already indications from previous research that there is a mental health crisis brewing among PhD researchers.

My colleagues and I studied the mental health of PhD researchers in the UK and discovered that, compared with working professionals, PhD students were more likely to meet the criteria for clinical levels of depression and anxiety. They were also more likely to have significantly more severe symptoms than the working-professional control group.

We surveyed 3,352 PhD students, as well as 1,256 working professionals who served as a matched comparison group. We used the same questionnaires that NHS mental health services use to assess several mental health symptoms.

More than 40% of PhD students met the criteria for moderate to severe depression or anxiety. In contrast, 32% of working professionals met these criteria for depression, and 26% for anxiety.

The groups reported an equally high risk of suicide. Between 33% and 35% of both PhD students and working professionals met the criteria for “suicide risk”. The figures for suicide risk might be so high because of the high rates of depression found in our sample.

We also asked PhD students what they thought about their own and their peers’ mental health. More than 40% of PhD students believed that experiencing a mental health problem during your PhD is the norm. A similar number (41%) told us that most of their PhD colleagues had mental health problems.

Just over a third of PhD students had considered ending their studies altogether for mental health reasons.


There is clearly a high prevalence of mental health problems among PhD students, beyond those rates seen in the general public. Our results indicate a problem with the current system of PhD study – or perhaps with academia more widely. Academia notoriously encourages a culture of overwork and under-appreciation.

This mindset is present among PhD students. In our focus groups and surveys for other research, PhD students reported wearing their suffering as a badge of honour and a marker that they are working hard enough rather than too much. One student told us:

“There is a common belief … you have to suffer for the sake of your PhD; if you aren’t anxious or suffering from impostor syndrome, then you aren’t doing it ‘properly’.”

We explored the potential risk factors that could lead to poor mental health among PhD students and the things that could protect their mental health.

Financial insecurity was one risk factor. Not all researchers receive funding to cover their course and personal expenses, and once their PhD is complete, there is no guarantee of a job. The number of people studying for a PhD is increasing without an equivalent increase in postdoctoral positions .

Another risk factor was conflict in their relationship with their academic supervisor. An analogy offered by one of our PhD student collaborators likened the academic supervisor to a “sword” that you can use to defeat the “PhD monster”. If your weapon is ineffective, then it makes tackling the monster a difficult – if not impossible – task. Supervisor difficulties can take many forms. These can include a supervisor being inaccessible, overly critical or lacking expertise.

A lack of interests or relationships outside PhD study, or the presence of stressors in students’ personal lives were also risk factors.

We have also found an association between poor mental health and high levels of perfectionism, impostor syndrome (feeling like you don’t belong or deserve to be studying for your PhD) and the sense of being isolated.

Better conversations

Doctoral research is not all doom and gloom. There are many students who find studying for a PhD to be both enjoyable and fulfilling, and there are many examples of cooperative and nurturing research environments across academia.

Studying for a PhD is an opportunity for researchers to spend several years learning and exploring a topic they are passionate about. It is a training programme intended to equip students with the skills and expertise to further the world’s knowledge. These examples of good practice provide opportunities for us to learn about what works well and disseminate them more widely.

The wellbeing and mental health of PhD students is a subject that we must continue to talk about and reflect on. However, these conversations need to happen in a way that considers the evidence, offers balance, and avoids perpetuating unhelpful myths.

Indeed, in our own study, we found that the percentage of PhD students who believed their peers had mental health problems and that poor mental health was the norm exceeded the rates of students who actually met diagnostic criteria for a common mental health problem. That is, PhD students may be overestimating the already high number of their peers who experienced mental health problems.

We therefore need to be careful about the messages we put out on this topic, as we may inadvertently make the situation worse. If messages are too negative, we may add to the myth that all PhD students experience mental health problems and help maintain the toxicity of academic culture.

  • Mental health
  • Academic life
  • PhD research


NBC New York

Fact Check: Olympics boxing gender testing controversy explained

Imane Khelif and Lin Yu-ting are female boxers, but they are facing attacks from anti-LGBTQ+ conservatives online who claim they're transgender. By Kiki Intarasuwan • Published August 2, 2024 • Updated on August 3, 2024 at 9:28 am

What to know

  • Two female boxers who were disqualified from the 2023 world championship after being judged to have failed gender eligibility tests were cleared to fight at the Olympic Games in Paris, and it sparked online outrage from conservatives and anti-LGBTQ+ commentators
  • The International Boxing Association's Russian president, Umar Kremlev, claimed that DNA test results showed the two athletes have XY chromosomes, but the results were never published
  • The IOC said it stands by the athletes and their eligibility to compete

Political outrage surrounding two women competing in boxing at the Paris Olympics stems from "a lot of misinformation," an International Olympics Committee spokesperson said Friday.

It all began when the IOC said Algerian boxer Imane Khelif and Lin Yu-ting of Taiwan were allowed to compete at the 2024 Olympics after the International Boxing Association (IBA) disqualified them over unspecified gender eligibility tests from the 2023 world championships, despite them having already competed.


Both boxers have long been competing as women and neither athlete identifies as transgender.

Here are some questions viewers have after conservatives like former U.S. President Donald Trump, Italian Premier Giorgia Meloni and other right-wing commentators sparked an online outcry that men shouldn't be allowed to compete with women:

Does Imane Khelif have XY chromosomes?


The IBA's Russian president, Umar Kremlev, claimed that DNA test results showed the two athletes have XY chromosomes, citing it as the reason they were disqualified from the world championships. The IBA also cited high levels of testosterone in Khelif's system.

However, the test results were never published and Khelif has never disclosed her biological markers, calling the decision a "big conspiracy." The disqualification came after Khelif defeated Russian boxer Azalia Amineva in the 2023 tournament. IBA said it stripped Lin of a bronze medal because it claimed she failed to meet unspecified eligibility requirements in a biochemical test.

The IOC has long criticized the IBA and its governance of the sport and eventually banned the Russian-run organization in 2019. In a statement Friday, the IOC said it stands by the athletes and their eligibility to compete, noting that the boxing association's own documents say the decision was made unilaterally by the IBA's secretary general.

Those documents also say the IBA went on to resolve at a meeting that it should “establish a clear procedure on gender testing” after it had already disqualified the two fighters.


What is DSD and Swyer syndrome?

DSD stands for differences in sexual development. It can involve genes, hormones and reproductive organs, but it has nothing to do with gender identity. It's false to conflate transgender people with people who were born with DSD, said GLAAD, the world’s largest lesbian, gay, bisexual, transgender and queer media advocacy organization.

Some people with DSDs are raised as female but may have sex chromosomes other than XX, or elevated testosterone levels, according to the NIH.

People with Swyer syndrome, according to the National Library of Medicine, have one X chromosome and one Y chromosome in each cell (typically found in boys and men), but they have female reproductive structures. Again, it's not known whether either of the boxers has these genetic variations.

They have both been competing as women from the start.

How did Khelif and Lin qualify for the Olympics?

The IBA still holds world championships but it did not run qualifying events for the 2024 Paris Games.

Qualification for Paris 2024 took place across three phases: continental qualification tournament, world qualification tournament 1 and world qualification tournament 2, according to NBC Olympics.

How does the Olympics test for gender qualification?

Because the IBA has been banned, the IOC used rules from 2016 to determine boxers’ gender eligibility.

It leaves regulations up to each sport's international governing body because "they know their sport and their discipline the best," IOC spokesman Mark Adams told reporters. "I hope we all agree that we're not calling for people to go back to the days of sex testing which was a terrible, terrible thing to do. This involves real people and we're talking about real people's lives here."

The history of sex testing at the Olympics is decades-long and practices such as invasive physical examination have been exposed as abusive. The IOC in recent years updated its policy to be more inclusive and doesn't require athletes to undergo "medically unnecessary" procedures or treatment, NBC News reported.

Several Olympic sports’ governing bodies have also updated their gender rules over the past three years, including World Aquatics, World Athletics and the International Cycling Union. The governing body for track and field also last year tightened rules on athletes with differences in sex development.

Can women have high levels of testosterone?

Simply put, yes, in the same way that many men can have low levels of testosterone. However, women with higher levels of testosterone have faced more criticism and questions about their gender.

Many of the rules set by governing bodies for participation in women's competitions include testing of the athlete's testosterone levels, but it's not a perfect test, Adams said as he addressed the boxing controversy.

While scientists and the IOC agree that testosterone is "an important factor shaping performance in elite athletes in certain sports, events and disciplines," it doesn't necessarily predict the performance of an individual athlete.

"Many women can have testosterone which will be called 'male levels' and still be women and still compete as women. This idea that you do one test for testosterone and that sorts everything out? Not the case I'm afraid," Adams said.

The IOC spokesman told reporters that there has been “a lot of misinformation around on social media particularly, which is damaging.”


PhD Failure Rate – A Study of 26,076 PhD Candidates

The PhD failure rate in the UK is 19.5%, with 16.2% of students leaving their PhD programme early, and 3.3% of students failing their viva. 80.5% of all students who enrol onto a PhD programme successfully complete it and are awarded a doctorate.

Introduction

One of the biggest concerns for doctoral students is the ongoing fear of failing their PhD.

After all those years of research, the long days in the lab and the endless nights in the library, it’s no surprise to find many agonising over the possibility of it all being for nothing. While this fear will always exist, it would help you to know how likely failure is, and what you can do to increase your chances of success.

Read on to learn how PhDs can be failed, what the true failure rates are based on an analysis of 26,076 PhD candidates from 14 UK universities, and what your options are if you’re unsuccessful in obtaining your PhD.

Ways You Can Fail A PhD

There are essentially two ways in which you can fail a PhD: non-completion or failing your viva (also known as your thesis defence).

Non-completion

Non-completion is when a student leaves their PhD programme before having sat their viva examination. Since vivas take place at the end of the PhD journey, typically between the 3rd and 4th year for most full-time programmes, most failed PhDs fall within the ‘non-completion’ category because of the long duration it covers.

There are many reasons why a student may decide to leave a programme early, though these can usually be grouped into two categories:

  • Motives – The individual may no longer believe undertaking a PhD is for them. This might be because it isn’t what they had imagined, or they’ve decided on an alternative path.
  • Extenuating circumstances – The student may face unforeseen problems beyond their control, such as poor health, bereavement or family difficulties, preventing them from completing their research.

In both cases, a good supervisor will always try their best to help the student continue with their studies. In the former case, this may mean considering alternative research questions or, in the latter case, encouraging you to seek academic support from the university through one of their student care policies.

Besides the student deciding to end their programme early, the university can also make this decision. On these occasions, the student’s supervisor may not believe they’ve made enough progress for the time they’ve been on the project. If the problem can’t be corrected, the supervisor may ask the university to remove the student from the programme.

Failing The Viva

Assuming you make it to the end of your programme, there are still two ways you can be unsuccessful.

The first is an unsatisfactory thesis. For whatever reason, your thesis may be deemed not good enough: it may lack originality, reliable data or conclusive findings, or be of poor overall quality. In such cases, your examiners may request an extensive rework of your thesis before agreeing to conduct your viva examination. Although this will rarely be the case, it is possible that you may exceed the permissible length of programme registration, and if you don’t have valid grounds for an extension, you may not have enough time to sit your viva.

The more common scenario, while still being uncommon itself, is that you sit and fail your viva examination. The examiners may decide that your research project is severely flawed, to the point where it can’t possibly be remedied even with major revisions. This could happen for reasons such as basing your study on an incorrect fundamental assumption; this should not happen, however, if there is a proper supervisory support system in place.

PhD Failure Rate – UK & EU Statistics

According to 2010-11 data published by the Higher Education Funding Council for England (now replaced by UK Research and Innovation), 72.9% of students enrolled in a PhD programme in the UK or EU complete their degree within seven years. Following this, 80.5% of PhD students complete their degree within 25 years.

This means that four out of every five students who register onto a PhD programme successfully complete their doctorate.

While a failure rate of one in five students may seem a little high, most of these are those who exit their programme early as opposed to those who fail at the viva stage.

Failing Doesn’t Happen Often

Although a PhD is an independent project, you will be appointed a supervisor to support you. Each university will have its own system for how your supervisor is to support you, but regardless of this, they will all require regular communication between the two of you. This could be in the form of annual reviews, quarterly interim reviews or regular meetings. The majority of students also have a secondary academic supervisor (and in some cases a thesis committee of supervisors); the role of these can vary from having a hands-on role in regular supervision, to being another useful person to bounce ideas off of.

These frequent check-ins are designed to help you stay on track with your project. For example, if any issues are identified, you and your supervisor can discuss how to rectify them in order to refocus your research. This reduces the likelihood of a problem going undetected for several years, only for it to be unearthed after it’s too late to address.

In addition, the thesis you submit to your examiners will likely be your third or fourth iteration, with your supervisor having critiqued each earlier version. As a result, your thesis will typically only be submitted to the examiners after your supervisor approves it; many UK universities require a formal, signed document to be submitted by the primary academic supervisor at the same time as the student submits the thesis, confirming that he or she has approved the submission.

Failed Viva – Outcomes of 26,076 Students

Despite what you may have heard, the PhD failure rate amongst students who sit their viva is low.

This is partly because of the ongoing guidance you receive from your supervisor, and partly because vivas don’t have a strict pass/fail outcome. You can find a detailed breakdown of all viva outcomes in our viva guide, but to summarise – the most common outcome is for you to revise your thesis in accordance with the comments from your examiners and resubmit it.

This means that as long as the review of your thesis and your viva examination uncovers no significant issues, you’re almost certain to be awarded a provisional pass on the basis you make the necessary corrections to your thesis.

To give you an indication of the viva failure rate, we’ve analysed the outcomes of 26,076 PhD candidates from 14 UK universities who sat a viva between 2006 and 2017.

The analysis shows that of the 26,076 students who sat their viva, 25,063 succeeded; this is just over 96% of the total students as shown in the chart below.


Students Who Passed

Figure: Breakdown of the extent of thesis amendments required for students who passed their viva.

The analysis shows that of the 96% of students who passed, approximately 5% required no amendments, 79% required minor amendments and the remaining 16% required major revisions. This supports our earlier discussion on how the most common outcome of a viva is a ‘pass with minor amendments’.

Students Who Failed

Figure: Percentage of students who failed their viva who were awarded an MPhil versus not awarded a degree.

Of the 4% of unsuccessful students, approximately 97% were awarded an MPhil (Master of Philosophy), and 3% weren’t awarded a degree.

Note: While the data provides each student’s overall outcome, i.e. whether they passed or failed, not all universities provided the specific outcome, i.e. whether the student had to make amendments or, in the case of a failure, whether they were awarded an MPhil. Therefore, while the breakdowns represent the currently known data, the exact breakdown may differ.

Summary of Findings

By using our data in combination with the earlier statistic provided by HEFCE, we can gain an overall picture of the PhD journey as summarised in the image below.

Figure: Breakdown of all possible outcomes for PhD candidates, based on analysis of 26,076 candidates at 14 universities between 2006 and 2017.

To summarise, based on the analysis of 26,076 PhD candidates at 14 universities between 2006 and 2017, the PhD pass rate in the UK is 80.5%. Of the 19.5% of students who fail, 3.3% is attributed to students failing their viva and the remaining 16.2% is attributed to students leaving their programme early.

The above statistics indicate that while 1 in every 5 students fail their PhD, the failure rate for the viva process itself is low. Specifically, only 4% of all students who sit their viva fail; in other words, 96% of the students pass it.
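To see how these headline figures fit together, here is a minimal arithmetic sketch in Python (not part of the original article; it simply recombines the percentages and counts quoted above). It shows why a 4% failure rate among students who actually sit their viva corresponds to only 3.3% of all enrolled students, and how the 16.2% early-exit rate and the viva figures combine into the 19.5% overall failure rate.

```python
# Minimal sketch reproducing the arithmetic quoted in this article.
# Inputs: the FOI dataset of viva outcomes plus the HEFCE early-exit figure.

viva_candidates = 26_076   # students who sat a viva (14 UK universities, 2006-2017)
viva_passes = 25_063       # students who passed their viva

viva_pass_rate = viva_passes / viva_candidates
print(f"Viva pass rate: {viva_pass_rate:.1%}")   # ~96.1%, i.e. ~4% of viva sitters fail

early_exit_rate = 0.162    # enrolled students who leave before sitting a viva
viva_sitters = 1 - early_exit_rate                     # share of enrolees reaching the viva

viva_fail_share = viva_sitters * (1 - viva_pass_rate)  # viva failures as a share of ALL enrolees
overall_fail_rate = early_exit_rate + viva_fail_share
overall_pass_rate = viva_sitters * viva_pass_rate

print(f"Viva failures as a share of all enrolees: {viva_fail_share:.1%}")  # ~3.3%
print(f"Overall failure rate: {overall_fail_rate:.1%}")                    # ~19.5%
print(f"Overall pass rate: {overall_pass_rate:.1%}")                       # ~80.5%
```

Run as-is, this reproduces the 3.3% / 16.2% / 80.5% split described above; any small discrepancies are due to rounding in the published percentages.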

What Are Your Options After an Unsuccessful PhD?

Appeal Your Outcome

If you believe you have a valid case, you can try to appeal against your outcome. The appeal process will be different for each university, so ensure you consult the guidelines published by your university before taking any action.

While making an appeal may be an option, it should only be considered if you genuinely believe you have a legitimate case. Most examiners have a lot of experience in assessing PhD candidates and follow strict guidelines when making their decisions. Therefore, your claim for appeal will need to be strong if it is to stand up in front of committee members in the adjudication process.

Downgrade to MPhil

If you are unsuccessful in being awarded a PhD, an MPhil may be awarded instead. For this to happen, your work would need to be considered worthy of an MPhil, as although it is a Master’s degree, it is still an advanced postgraduate research degree.

Unfortunately, there’s a lot of stigma around MPhil degrees, with many worrying that it will be seen as a sign of a failed PhD. While not as advanced as a PhD, an MPhil is still an advanced research degree, and being awarded one shows that you’ve successfully carried out an independent research project which is an undertaking to be admired.


Additional Resources

Hopefully, now that you know the overall picture, your mind will feel slightly more at ease. Regardless, there are several good practices you can adopt to ensure you’re always in the best possible position. The key ones include developing a good working relationship with your supervisor, working to a project schedule, having your thesis checked by several other academics besides your supervisor, and thoroughly preparing for your viva examination.

We’ve developed a number of resources which should help you in the above:

  • What to Expect from Your Supervisor – Find out what to look for in a Supervisor, how they will typically support you, and how often you should meet with them.
  • How to Write a Research Proposal – Find an outline of how you can go about putting a project plan together.
  • What is a PhD Viva? – Learn exactly what a viva is, their purpose and what you can expect on the day. We’ve also provided a full breakdown of all the possible outcomes of a viva and tips to help you prepare for your own.

Data for Statistics

  • Cardiff University – 2006/07 to 2016/17
  • Imperial College London – 2006/07 to 2016/17
  • London School of Economics (LSE) – 2006/07 to 2015/16
  • Queen Mary University of London – 2009/10 to 2015/16
  • University College London (UCL) – 2006/07 to 2016/17
  • University of Aberdeen – 2006/07 to 2016/17
  • University of Birmingham – 2006/07 to 2015/16
  • University of Bristol – 2006/07 to 2016/17
  • University of Edinburgh – 2006/07 to 2016/17
  • University of Nottingham – 2006/07 to 2015/16
  • University of Oxford – 2007/08 to 2016/17
  • University of York – 2009/10 to 2016/17
  • University of Manchester – 2008/09 to 2017/18
  • University of Sheffield – 2006/07 to 2016/17

Note : The data used for this analysis was obtained from the above universities under the Freedom of Information Act. As per the Act, the information was provided in such a way that no specific individual can be identified from the data.


My supervisor is suggesting I will fail my PhD, is this possible?

I am a final-year PhD student in Canada studying cybersecurity. During my PhD, I did not have very good supervision. I told them I wanted to defend soon. However, one of my supervisors keeps on telling me: “Don’t rush, you may fail”.

I got one first-author paper in IEEE Transactions and 3 medium-level first-author conference papers accepted. How can I fail? Is it possible? Has anyone ever failed the PhD defense?


8 Answers

There are definitely fails in PhD defenses. It may depend on the specific system and I don't know about Canada, but I know of a number of them in the UK, where the candidate was asked to rework and come back in a year or so. Also PhD examiners in the UK don't have to accept a thesis just because there are publications. I do think some published material shouldn't have been accepted, and not everything I have seen published is in my view acceptable at PhD level.

– Christian Hennig

  • 4 I'd say that "major revisions" are not necessarily a fail. If you are told to work a bit more on it, yeah, you fail in the sense that you don't get a PhD, but it's not a fail as in "bye, no PhD for you ever". –  Ander Biguri Commented Jan 13, 2023 at 11:29
  • 6 @AnderBiguri That's a fair enough objection to my use of terminology, however if you plan to apply/go for a postdoc or anything you need a PhD for directly after your defense, the immediate practical consequences are those of a fail. –  Christian Hennig Commented Jan 13, 2023 at 11:38
  • Totally agree with that :) –  Ander Biguri Commented Jan 13, 2023 at 11:42
  • 5 yep, I remember an English student at Oxford who had published an entire book with Springer and was failed, which sounded like a real scandal to my ears. A friend of mine at Cambridge was asked to completely rewrite his thesis, in Finance; he spent a year doing it, sent the revision, did not hear from anyone for months, and when he finally contacted them they said "Oh, you passed last year". Nightmare. –  PatrickT Commented Jan 13, 2023 at 19:16
  • 2 @Tom There is some variation, also within the UK. I have seen both cases, where the viva had to be repeated, and where people were just told upon resubmission that they had passed now, even with major corrections. I don't remember exactly anymore but chances are I have even seen a form in which examiners could choose between these options (on top of minor corrections). –  Christian Hennig Commented Jan 14, 2023 at 11:04

Your supervisor is aware of expectations for a PhD program. Their role is to help you understand these expectations and develop your PhD work to this standard.

Having one IEEE Trans publication and a few proceedings is good, but not necessarily indicative that your work meets the criteria of a PhD award. Normally, a PhD dissertation is a major piece of academic research, which can be compared to a manuscript (a book). A journal paper is a more narrowly scoped contribution, roughly comparable to one chapter of your PhD thesis. Having one journal paper published does not guarantee you a PhD. I am aware of some candidates with 5+ journal publications who failed their defence because they rushed and did not write an adequate PhD dissertation. It definitely happens.

Having a postdoc offer before you completed your PhD is a good sign that your work is interesting and promising. However, if your postdoc offer is conditional on you completing the PhD successfully, you still have to complete your PhD. Seeing your advisor as an obstacle is not constructive or helpful. Once again, they are trying to help you, and you should see their expertise as a resource.

– Dmitry Savostyanov

  • 22 Everything here is correct, but I'll just caution that 'your supervisor is aware of expectations for a PhD program' does not imply that supervisors have a 100% track record of being right when they predict a fail. –  Daniel Hatton Commented Jan 12, 2023 at 20:30
  • 2 @DanielHatton True. But they hedged their bets: "you may fail" is always true. –  PatrickT Commented Jan 13, 2023 at 19:17
  • This is mostly good but "they are trying to help you" is not always true. –  aquirdturtle Commented Jan 14, 2023 at 22:34

Why PhD defenses rarely fail

The main reason why PhD defenses rarely fail is that the process is structured so that, in general, people attempt their defense only when they are almost certain to pass. If there are any issues or objections, there is a strong preference to have them resolved before a defense rather than have them raised as a rejecting vote during the defense itself. No one wants to waste all the formal process effort on a failed attempt, so supervisors and committees will know when someone is likely to fail and will strongly advise them to postpone the attempt. In general, a failure should happen only if the student has been warned that they are likely to fail and disregards this advice to make the attempt anyway. This sounds suspiciously similar to what you are describing.

– Peteris

  • This answer could be seen as slightly misleading. OP is in a situation where they have been warned by the supervisor that they could fail the PhD if they submit with the current results. In that situation, the chances of actually failing the PhD are much higher than in the average case. –  lighthouse keeper Commented Jan 13, 2023 at 13:46
  • 22 @lighthousekeeper that's exactly what the answer is saying‽ –  leftaroundabout Commented Jan 13, 2023 at 14:33
  • +1 for speaking to this specific case for the OP. This is probably the most useful answer to pay attention to. Maybe add some whitespace to guide readers? –  Mike M Commented Jan 13, 2023 at 22:07

Yes it is possible to fail a PhD defence and it does happen. Thankfully this is rare.

I’m not in CS so I cannot compare with your peers but you should not make the error of thinking that you need so many publications to get a degree.

If anything, compare a situation where you have x publications as a single author with a situation where you have x publications with many co-authors. Obviously your intellectual contribution to each publication matters; your supervisors and members of your PhD committee can decide you have not done enough, even if you have 2x publications, because your contribution to each publication has been minimal. I want to emphasize I’m not talking about writing code or some other such tasks: a PhD is a research degree, so your advisor needs to convince your committee and eventually the external examiner that you have made significant and novel contributions to these publications.

I have heard of students failing at the defence stage. This is not pleasant, and it’s a situation everyone wants to avoid. It often (but not always) happens because the candidate is rushed by external events - some visa issue, some family matter, whatever.

In most systems I know, candidates will first go through a sort of “internal defence”, where the student may have to present their work to the committee, or there is some big committee meeting where the final draft of the thesis is evaluated before the thesis is sent to the external examiner. Nobody wants the student to fail so having the committee on board minimizes but does not eliminate the risk of failures. If the thesis is marginal and some committee members still have issue, but the thesis goes out anyways, there could be trouble at the defence with the external examiner.

If you think you have done enough but your advisor does not agree, it’s time to have a frank discussion with your supervisory committee to sort things out, and establish clear milestones for the completion of your degree.

– ZeroTheHero

The reason it is rare to fail a Ph.D. defense is that supervisors make sure nobody defends until they are ready. Don't push to be the exception.

– Nik

Good answers already, but I think this might also be relevant.

Have you found your institution's academic regulations relating to research degrees? If not, you should. They might be a boring read but they should lay out the exact procedure and requirements for a PhD assessment as well as all the possible outcomes. There will be "failed" outcomes in the regulations. Sadly there will also be stories of students who have failed (even with publications). There might be resit opportunities, too. The regulations might also detail the appeals process if you do fail.

There is some debate in the comments here as to whether "major revisions" are considered a fail or not. The short answer is that depends on your institution's academic regulations.

One thing that the academic regulations are very unlikely to say is "1 good journal + 3 medium conferences = pass", so although your chances of passing are good, your chances of failing are unlikely to be zero.

– Pam

From reading your other question, your supervisor isn't really saying that they think it's likely that you will fail.

I need to submit one paper to a journal and write one conference paper, then I am ready to write my thesis.
I got a postdoc position in a great research lab. The tentative start date is the beginning of May. They asked for a letter from my supervisor, stating that I am going to defend before the beginning of May. However, my supervisor keeps on telling me he can only state that I can submit my thesis before that date. He wrote a letter for that.

Your current timeline has you starting, finishing, submitting, and defending your thesis in less than 4 months (really more like 3 months), with your defense being sometime in late April. Even if you and your supervisor do everything perfectly there are still a lot of outside factors that can impact that, the biggest one being when can/will your committee get together to hear your defense. Your timeline is so tight that if you submit your thesis and the committee takes a week to review it and then says we want some minor changes, we'll be able to review those changes in another week... What are you going to do? Or if one person can only meet on Wednesday and another person is unavailable on Wednesday so they have to schedule your defense for a week later? (Or two, or three...)

If you submit your thesis but don't have time to do the changes then it's possible that you could fail. It's even more likely that you don't fail but you don't pass your defense on your timeline. Your supervisor is (wisely) unwilling to commit to other people doing things that are outside of your control.

– user3067860

If your advisor says "it's time to get ready for your defense", your odds of passing are extremely high. If you try to defend against your advisor's will, that's a different story.

I would suggest you talk with your advisor about why they think you are not ready. It could be that they are not happy with your work and need more. It could be that they think you should make the most of the free study time you have in grad school: trust me, you will miss this aspect later in life!

If you believe you are being treated unfairly by your advisor in this situation, I would suggest you discuss with the chair.

– Cliff AB


After horrible 5.5 years completely failed PhD (not even any degree awarded)

Hi, I started my PhD in 2012, and on my first day I was already warned to be careful with my supervisor. Having had quite bad supervisors in the past (the ‘fat girls are stupid’ kind and the ‘you aren’t my favourite so I do not help you’ kind), I figured at this stage I was fine, I was well used to it. Now, 1 car crash, bullying, access to the lab withdrawn, then lack of results blamed on being stupid, almost 6 years later I suffer from depression, have ongoing nightmares and have not even any kind of participation badge for all the time in the lab and, frankly, therapy.

I agree that the thesis was bad, but I do find it an odd coincidence that I was told I was going to pass until I filed an academic and dignity complaint against my supervisors (with evidence), and now I am supposed to get more experimental results (without lab access) and am supposed to improve my bad writing... I know I am not stupid, but I feel I wasn’t given the materials, the access and the support that was advertised, and had they listened when I initially discussed the lack of biological relevance and scientific depth and requested a switch of topic, I feel I would have at least gotten an MPhil.

After refusing a switch in topic or supervisors because ‘there is no time to get enough results with only 3 years left’, they then switched my topic with only a little under 2 years left, which suddenly was more than enough time. At the same time I was told I was useless unless I went part-time, but I worked on the thesis full-time and came in every weekend (while they blocked my out-of-hours access???). The thesis was bad and I said it from the beginning, but I was always told I was doing fantastic with a paper on the way (until the complaint... when oddly, suddenly, I failed and had tons of obstacles thrown my way). How do I get over this?

Are you supposed to get more results, or have they failed you outright? Has this been through the board of examiners? There are still steps you can take to rectify this if you want to. You can appeal the decision, if you have grounds to do so, e.g. if there was "material irregularity in the decision making process", such as they didn't follow the procedures properly, or there were errors made, or if your performance was affected by something you haven't disclosed or they failed to take account of it properly (maybe the latter in your case?). Seek advice from the Students' Union. If you just want to forget about it, then I suggest getting a change of scene: go on holiday, or go and stay with friends/family somewhere. Time and distance will give you some perspective. Failing that, try some counselling. It does sound like you have had a raw deal here, and this should be a lesson to anyone that is thinking about registering a complaint about supervisors - it generally does not have a good outcome and is best left until your certificate is safely in your hand.

Hi, just finished crying and reading through the notes. I was failed outright (no viva) but was given the option to resubmit in 1 year with a mandatory viva (perfectly fair enough), but they want more data. How can I possibly get more data without access to the laboratory? They blocked my access long before submission; I didn't even have library access. Thanks for the student union tip. I have raised that the internal examiner is one of my supervisor's closest friends, but I am still shattered. How is one supposed to get good data without laboratory access, with out-of-hours access revoked almost 1 year before the writing period started, and with no access to the materials needed for cloning without arguing for weeks? Such a long time, such a long gap in my CV and so much bad treatment, all for nothing :(

Hi Tigernore, you have my sympathy. Working under a bully supervisor is awful and you have not been given fair treatment. As Tree of Life has advised, seek out the Students' Union. In fact, see if your Students' Union provides legal services. You have nothing to lose anyway, and talking to a lawyer will help you determine if any rules have been broken, including your right as a student to access lab support and materials. It will also put pressure on the university, as they normally do not want anything that may damage their reputation, especially if they are in the wrong. You may even be able to fight for lab access again and another fair examination of your thesis.

Dry your tears. Now is the time for desperate actions and strategy. You must stay strong. What your supervisor and examiners want to do is force you to give up and walk away, painting you as a bad student. You must not let them win. I speak from personal experience, as I too launched a complaint against my supervisor, and the amount of backlash and soft threats (veiled as advice to maintain a good relationship with my supervisor, as I needed his letter of support) was terrible. I represented myself at institute level - failed; faculty level - failed; and finally university level - success, with a detailed portfolio of evidence and a cover letter, and with strong support from a lawyer from the Students' Union. In my case, I had very strong evidence and was advised by my lawyer that if I failed again at university level, I could go to court. Luckily I didn't have to, but the experience was traumatic. Every case is different, and I wish you the very best as you fight for yours. Don't give up without trying to fight.

I would echo what tru has said as well. This is not over yet if you don't want it to be. They have to give you lab access if they are asking for more results. Cutting your access to things doesn't seem fair - you should check if this happens to all students in your department - if it doesn't, you have a massive case for mistreatment because they have been setting you up to fail. Who has signed off on this decision? Examiners? Head of postgrads? Head of School? Faculty Dean? Take it up to a higher level if needed. Don't cry about this, get angry instead. Channel that anger into getting the access and then the results you need to get this PhD.

Hi, I know of multiple students who have had... let's say, issues in my department. Including sexual harassment and, when complaints were filed, being threatened with losing the degree, having no right to holidays, and having the same issue I have of being told to go part-time (including a part-time stipend) but working full-time in the lab - which most cannot afford. I just can't seem to get heard; everyone is just saying, well, let it go, they have the power, etc. And without access to labs, as you agree, I can't get anywhere, and I don't even have a supervisor / academic tutor at this point. I am filing my appeal this week and requesting that it await the outcome of the complaint process and a readjustment of my access. It is impossible to salvage this into a PhD with what happened, but at least an MPhil would have been nice. Thanks for your messages!!!

Quote From Tigernore: (quoted in full above) Ah, the usual discouragement... "Let go because you can't win... Why bother since your supervisors have power..." Haven't we all heard that before... This is the psychological game to break the student's spirit and get rid of "troublemakers". Don't give in. Tigernore, steel yourself. You have not been given a fair fighting chance, and you know it. Instead of talking to nonsense people who are out to discourage you (probably other academics who may or may not have ties with your uni and supervisor), talk to your Student Union, which is supposed to defend you. Talk to a legal representative from your Student Union. Fight for your PhD... All is not lost unless you give up on yourself. All the best in your appeal, and don't walk away without exhausting all avenues.

I agree with Tru. I am going through a fight of my own at the moment and have experienced the psychological games. I am very much on my own and have been working without supervision for 6 months now; I am almost 12 months in. Having no supervision is better than the situation I was in, but it can't continue this way for long, so I am hoping for a resolution soon. Dragging things out seems to be another way of trying to get rid of any student who speaks out. I have a strong case and it sounds like you have also, so as Tru also says, don't give in. My SU haven't been any help; they often don't respond and don't seem to know processes well. Your SU may be better, so I advise speaking to them. I struggled getting heard also; my department seemingly didn't want to know, so it had to go formal. I also have/had a supervisor you have to be careful with, and my project was changed after I started. So I sympathise with your situation. I see many similarities on this forum among the experiences students have concerning supervisor issues and how universities respond to such cases.

Post your reply

Postgraduate Forum

Masters Degrees

PhD Opportunities

Postgraduate Forum Copyright ©2024 All rights reserved

PostgraduateForum Is a trading name of FindAUniversity Ltd FindAUniversity Ltd, 77 Sidney St, Sheffield, S1 4RG, UK. Tel +44 (0) 114 268 4940 Fax: +44 (0) 114 268 5766

Modal image

Welcome to the world's leading Postgraduate Forum

An active and supportive community.

Support and advice from your peers.

Your postgraduate questions answered.

Use your experience to help others.

Sign Up to Postgraduate Forum

Enter your email address below to get started with your forum account

Login to your account

Enter your username below to login to your account

Reset password

Please enter your username or email address to reset your password

An email has been sent to your email account along with instructions on how to reset your password. If you do not recieve your email, or have any futher problems accessing your account, then please contact our customer support.

or continue as guest

Postgrad Forum uses cookies to create a better experience for you

To ensure all features on our website work properly, your computer, tablet or mobile needs to accept cookies. Our cookies don’t store your personal information, but provide us with anonymous information about use of the website and help us recognise you so we can offer you services more relevant to you. For more information please read our privacy policy


A subreddit dedicated to PhDs.

What to do if I fail my PhD

Hello guys.

To start this right off, I am afraid I will not be able to finish my PhD. At this point, I am not sure which option frightens me more, being able to finish or having to give it up. I have lost five years of my life in this project already, and it is... It's bad. I haven't published a paper in a journal yet. I can't get results. Before you ask, my project was one of those "will take you a while to get a paper, but when you get the first paper, the second and third will be right along". But I never did get a single paper. I have given it until the end of the school year (to finish the tuition) but it is fast approaching...

Has anyone here left their PhD to go work in industry? What issues did you face? Should I apply only to industries related to my project? What should I say when they ask me what I did during these five years? I want to say I was just doing research, with no mention of the PhD, but I am afraid they will ask about the project and why it failed... The fault was not wholly mine, but, honestly, I can't see a good way to answer this question.

I would be grateful for any resources you could point me to. I am honestly panicking here, and regretting my decision every day.



Tsung-Dao Lee, 97, Physicist Who Challenged a Law of Nature, Dies

At 31, he and a colleague won the 1957 Nobel Prize in Physics for discovering that subatomic particles, contrary to what scientists thought, are not always symmetrical.


By Dylan Loeb McClain

Tsung-Dao Lee, a Chinese American physicist who shared the Nobel Prize in Physics in 1957 for overturning what had been considered a fundamental law of nature — that particles are always symmetrical — died on Sunday at his home in San Francisco. He was 97.

His death was announced in a joint statement by the Tsung-Dao Lee Institute at the Jiao Tong University in Shanghai and the China Center for Advanced Science and Technology in Beijing. Dr. Lee was a longtime professor at Columbia University.

The theory that Dr. Lee overturned was called the law of conservation of parity, which said that every phenomenon and its mirror image should behave precisely the same. At the time he challenged the theory, in 1956, it had been widely accepted for 30 years.

Dr. Lee was then a young professor at Columbia, where he had been promoted to full professor at age 29 — the youngest in the university’s history at that point.

He had become intrigued by a problem involving the decay of so-called K mesons, which are subatomic particles. These particles decay all the time, forming electrons, neutrinos and photons. Experiments had shown that when K mesons decayed, some exhibited changes that suggested that each differed from the others. But they also had identical masses and life expectancies, indicating that they were the same.

This apparent contradiction created quite a conundrum for physicists. They had assumed that weak nuclear forces, like meson decay, obeyed the law of conservation of parity just like the two other fundamental forces that govern quantum physics: strong nuclear forces, which bind protons and neutrons together in the nucleus, and electromagnetic forces, which govern the attraction and repulsion of electric charges and the behavior of light. In other words, scientists had assumed that the orientation of weak nuclear forces could always be reversed.



Published: 06 August 2024 (open access)

AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI

Attila Dabis (ORCID: orcid.org/0000-0003-4924-7664) and Csaba Csáki (ORCID: orcid.org/0000-0002-8245-1002)

Humanities and Social Sciences Communications, volume 11, Article number: 1006 (2024)


Subject: Science, technology and society

Abstract

This article addresses the ethical challenges posed by generative artificial intelligence (AI) tools in higher education and explores the first responses of universities to these challenges globally. Drawing on five key international documents from the UN, EU, and OECD, the study used content analysis to identify key ethical dimensions related to the use of generative AI in academia, such as accountability, human oversight, transparency, and inclusiveness. Empirical evidence was compiled from 30 leading universities ranked among the top 500 in the Shanghai Ranking list from May to July 2023, covering those institutions that already had publicly available responses to these dimensions in the form of policy documents or guidelines. The paper identifies the central ethical imperative that student assignments must reflect individual knowledge acquired during their education, with human individuals retaining moral and legal responsibility for AI-related wrongdoings. This top-down requirement aligns with a bottom-up approach that allows instructors flexibility in determining how they utilise generative AI, especially large language models, in their own courses. Regarding human oversight, the typical response identified by the study involves a blend of preventive measures (e.g., course assessment modifications) and soft, dialogue-based sanctioning procedures. Regarding transparency, the first university responses examined by this study converged on the good practice of communicating the permitted use of AI clearly in course syllabi.

Similar content being viewed by others

Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications

Intersectionality of social and philosophical frameworks with technology: could ethical AI restore equality of opportunities in academia?

Research on flipped classrooms in foreign language teaching in Chinese higher education

Introduction

The competition in generative artificial intelligence (AI) ignited by the arrival of ChatGPT, the conversational platform based on a large language model (LLM) in late November 2022 (OpenAI, 2022 ) had a shocking effect even on those who are not involved in the industry (Rudolph et al. 2023 ). Within four months, on 22 March 2023, an open letter was signed by several hundred IT professionals, corporate stakeholders, and academics calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 (i.e., those that may trick a human being into believing it is conversing with a peer rather than a machine) for at least six months (Future of Life Institute, 2023 ).

Despite these concerns, competition in generative AI and LLMs does not seem to lose momentum, forcing various social systems to overcome the existential distress they might feel about the changes and the uncertainty of what the future may bring (Roose, 2023 ). Organisations and individuals from different sectors of the economy and various industries are looking for adaptive strategies to accommodate the emerging new normal. This includes lawmakers, international organisations, employers, and employees, as well as academic and higher education institutions (Ray, 2023 ; Wach et al. 2023 ). This fierce competition generates gaps in real-time in everyday and academic life, the latter of which is also trying to make sense of the rapid technological advancement and its effects on university-level education (Perkins, 2023 ). Naturally, these gaps can only be filled, and relevant questions answered much slower by academia, making AI-related research topics timely.

This article aims to reduce the magnitude of these gaps and is intended to help leaders, administrators, teachers, and students better understand the ramifications of AI tools for higher education institutions. It does so by providing a non-exhaustive snapshot of how various universities around the world responded to generative AI-induced ethical challenges in their everyday academic lives within six to eight months of the arrival of ChatGPT. Thus, the research asked what expectations and guidelines the first policies introduced into existing academic structures to ensure the informed, transparent, responsible and ethical use of the new tools of generative AI (henceforth GAI) by students and teachers. By reviewing and evaluating first responses and related difficulties, the paper helps institutional decision-makers create better policies to address AI issues specific to academia. The research reported here thus addressed actual answers to the question of what happened at the institutional (policy) level, as opposed to what should happen with the use of AI in classrooms. Based on such a descriptive overview, one may contemplate normative recommendations and their realistic implementability.

Given the global nature of the study’s subject matter, the paper presents examples from various continents. Even though it was not yet a widespread practice to adopt separate, AI-related guidelines, the research focused on universities that had already done so quite early. Furthermore, as best practices most often accrue from the highest-ranking universities, the analysis only considered higher education institutions that were represented among the top 500 universities in the Shanghai Ranking list (containing 3041 Universities at the time), a commonly used source to rank academic excellence. Footnote 1 The main sources of this content analysis are internal documents (such as Codes of Ethics, Academic Regulations, Codes of Practice and Procedure, Guidelines for Students and Teachers or similar policy documents) from those institutions whose response to the GAI challenge was publicly accessible.

The investigation is organised around AI-related ethical dilemmas distilled from relevant international documents, such as the instruments published by the UN, the EU, and the OECD (often considered soft law material). Through these sources, the study inductively identifies the primary aspects that these AI guidelines mention and that can be connected to higher education. Thus it contains only concise references to the main ethical implications of the manifold pedagogical practices in which AI tools can be utilised in the classroom. The paper starts with a review of the challenges posed by AI technology to higher education, with special focus on ethical dilemmas. Section 3 covers the research objective and the methodology followed. Section 4 analyses the selected international documents, establishes a list of key ethical principles relevant in HE contexts and, in parallel, examines the examples distilled from the institutional policy documents and guidelines along each dimension. The paper closes by drawing key conclusions and listing limitations and ideas for future research.

Generative AI and higher education: Developments in the literature

General AI-related challenges in the classroom from a historical perspective

Jacques Ellul wrote fatalistically as early as 1954 that the “infusion of some more or less vague sentiment of human welfare” cannot fundamentally alter technology’s “rigorous autonomy”, bringing him to the conclusion that “technology never observes the distinction between moral and immoral use” (Ellul, 1964 , p. 97). Footnote 2 Jumping ahead nearly six decades, the above quote comes to the fore, among other places, when evaluating the moral and ethical aspects of the services offered by specific software programs, like ChatGPT. While such systems might be trained to give ethical answers, these moral barriers can be circumvented by prompt injection (Blalock, 2022 ) or manipulated with tricks (Alberti, 2022 ), so generative AI platforms can hardly be held accountable for the inaccuracy of their responses Footnote 3 or for how the physical user who inserted a prompt will make use of the output. Indeed, the AI chatbot is now considered to be a potentially disruptive technology in higher education practices (Farazouli et al. 2024 ).

Educators and educational institution leaders have from the beginning sought solutions on how “to use a variety of the strategies and technologies of the day to help their institutions adapt to dramatically changing social needs” (Miller, 2023 , p. 3). Education in the past had always had high hopes for applying the latest technological advances (Reiser, 2001 ; Howard and Mozejko, 2015 ), including the promise of providing personalised learning or using the latest tools to create and manage courses (Crompton and Burke, 2023 ).

The most basic (and original) educational settings include three components: the blackboard with chalk, the instructor, and textbooks as elementary “educational technologies” at any level (Reiser, 2001 ). Beyond these, one may talk about “educational media” which, once digital technology had entered the picture, have progressed from Computer Based Learning to Learning Management Systems to the use of the Internet, and lately to online shared learning environments with various stages in between including intelligent tutoring system, Dialogue-based Tutoring System, and Exploratory Learning Environment and Artificial Intelligence (Paek and Kim, 2021 ). And now the latest craze is about the generative form of AI often called conversational chatbot (Rudolph et al. 2023 ).

The above-mentioned promises appear to be no different in the case of using generative AI tools in education (Baskara, 2023a ; Mhlanga, 2023 ; Yan et al. 2023 ). The general claim is that GAI chatbots have transformative potential in HE (Mollick and Mollick, 2022 ; Ilieva et al. 2023 ). It is further alleged that feedback mechanisms supposedly provided by GAI can be used to offer personalised guidance to students (Baskara, 2023b ). Some argue that “AI education should be expanded and improved, especially by presenting realistic use cases and the real limitations of the technology, so that students are able to use AI confidently and responsibly in their professional future” (Almaraz-López et al. 2023 , p. 1). It is still debated whether the hype is justified, yet the question remains how to address the issues arising in the wake of the educational application of GAI tools (Ivanov, 2023 ; Memarian and Doleck, 2023 ).

Generative AI tools, such as their best-known representative, ChatGPT, impact several areas of learning and teaching. From the point of view of students, chatbots may help with so-called Self-Regulated or Self-Determined Learning (Nicol and Macfarlane‐Dick, 2006 ; Baskara, 2023b ), where students either dialogue with chatbots or AI helps with reviewing student work, even correcting it and giving feedback (Uchiyama et al. 2023 ). There are innovative ideas on how to use AI to support peer feedback (Bauer et al. 2023 ). Some consider that GAI can provide adaptive and personalised environments (Qadir, 2023 ) and may offer personalised tutoring (see, for example, Limo et al. ( 2023 ) on ChatGPT as a virtual tutor for personalized learning experiences). Furthermore, Yan et al. ( 2023 ) list nine categories of educational tasks that prior studies have attempted to automate using LLMs: profiling and labelling (of various educational or related content), detection, assessment and grading, teaching support (in various educational and communication activities), prediction, knowledge representation, feedback, content generation (outlines, questions, cases, etc.), and recommendation.

From the lecturers’ point of view, one of the most argued impacts is that assessment practices need to be revisited (Chaudhry et al. 2023 ; Gamage et al. 2023 ; Lim et al. 2023 ). For example, ChatGPT-written responses to exam questions may not be distinguished from student-written answers (Rudolph et al. 2023 ; Farazouli et al. 2024 ). Furthermore, essay-type works are facing special challenges (Sweeney, 2023 ). On the other hand, AI may be utilised to automate a range of educational tasks, such as test question generation, including open-ended questions, test correction, or even essay grading, feedback provision, analysing student feedback surveys, and so on (Mollick and Mollick, 2022 ; Rasul et al. 2023 ; Gimpel et al. 2023 ).

There is no convincing evidence, however, that either lecturers or dedicated tools are able to distinguish AI-written and student-written text with high enough accuracy that can be used to prove unethical behaviour in all cases (Akram, 2023 ). This led to concerns regarding the practicality and ethicality of such innovations (Yan et al. 2023 ). Indeed, the appearance of ChatGPT in higher education has reignited the (inconclusive) debate on the potential and risks associated with AI technologies (Ray, 2023 ; Rudolph et al. 2023 ).

When new technologies appear in or are considered for higher education, debates about their claimed advantages and potential drawbacks heat up as they are expected to disrupt traditional practices and require teachers to adapt to their potential benefits and drawbacks (as collected by Farrokhnia et al. 2023 ). One key area of such debates is the ethical issues raised by the growing accessibility of generative AI and discursive chatbots.

Key ethical challenges posed by AI in higher education

Yan et al. ( 2023 ), while investigating the practicality of AI in education in general, also consider ethicality in the context of educational technology and point out that related debates over the last decade (pre-ChatGPT, so to say) mostly focused on algorithmic ethics, i.e. concerns related to data mining and using AI in learning analytics. At the same time, the use of AI by teachers or, especially, by students has received less attention (or only under the scope of traditional human ethics). However, with the arrival of generative AI chatbots (such as ChatGPT), the number of publications about their use in higher education grew rapidly (Rasul et al. 2023 ; Yan et al. 2023 ).

The study by Chan ( 2023 ) offers a (general) policy framework for higher education institutions, although it focuses on one location and is based on the perceptions of students and teachers. While there are studies that collect factors to be considered for the ethical use of AI in HE, they appear to be restricted to ChatGPT (see, for example, Mhlanga ( 2023 )). Mhlanga ( 2023 ) presents six factors: respect for privacy, fairness, and non-discrimination, transparency in the use of ChatGPT, responsible use of AI (including clarifying its limitations), recognition that ChatGPT is not a substitute for human teachers, and accuracy of information. The framework by Chan ( 2023 ) is aimed at creating policies to teach students about GAI and considers three dimensions: pedagogical, governance, and operational. Within those dimensions, ten key areas are identified, covering ethical concerns such as academic integrity versus academic misconduct and related ethical dilemmas (e.g. cheating or plagiarism), data privacy, transparency, accountability and security, equity in access to AI technologies, critical AI literacy, over-reliance on AI technologies (not directly ethical), responsible use of AI (in general), and competencies impeded by AI (such as leadership and teamwork). Baskara ( 2023b ), while also looking at ChatGPT only, considers the following likely danger areas: privacy, algorithmic bias issues, data security, and the potential negative impact of ChatGPT on learners’ autonomy and agency. The paper also questions the possible negative impact of GAI on social interaction and collaboration among learners. Although Yan et al. ( 2023 ) considers education in general (not HE in particular) during its review of 118 papers published since 2017 on the topic of AI ethics in education, its list of areas to look at is still relevant: transparency (of the models used), privacy (related to data collection and use by AI tools), equality (such as availability of AI tools in different languages), and beneficence (e.g. avoiding bias and avoiding biased and toxic knowledge from training data). While systematically reviewing recent publications about AI’s “morality footprint” in higher education, Memarian and Doleck ( 2023 ) consider the Fairness, Accountability, Transparency, and Ethics (FATE) approach as their framework of analysis. They note that “Ethics” appears to be the most used term as it serves as a general descriptor, while the other terms are typically only used in their descriptive sense, and their operationalisation is often lacking in related literature.

Regarding education-related data analytics, Khosravi et al. ( 2022 ) argue that educational technology that involves AI should consider accountability, explainability, fairness, interpretability and safety as key ethical concerns. Ferguson et al. ( 2016 ) also looked at learning analytics solutions using AI and warned of potential issues related to privacy, beneficence, and equality. M.A. Chaudhry et al. ( 2022 ) emphasise that enhancing stakeholders’ comprehension of a new educational AI system is the most important task, which requires making all information and decision processes available to those affected; according to their argument, the key concern is therefore transparency.

As such debates continue, it is difficult to identify an established definition of ethical AI in HE. It is clear, however, that the focus should not be on detecting academic misconduct (Rudolph et al. 2023 ). Instead, practical recommendations are required. This is especially true as even the latest studies focus mostly on issues related to assessment practices (Chan, 2023 ; Farazouli et al. 2024 ) and often limit their scope to ChatGPT (Cotton et al. 2024 ) (this specific tool still dominates discourses of LLMs despite the availability of many other solutions since its arrival). At the same time, the list of issues addressed appears to be arbitrary, and most publications do not look at actual practices on a global scale. Indeed, reviews of actual current practices of higher education institutions are rare, and this aspect is not yet the focus of recent HE AI ethics research reports.

As follows from the growing literature and the debate shaping up about the implications of using GAI tools in HE, there was a clear need for a systematic review of how first responses in actual academic policies and guidelines in practice have represented and addressed known ethical principles.

Research objective and methodology

In order to contribute to the debate on the impact of GAI on HE, this study aimed to review how leading institutions had reacted to the arrival of generative AI (such as ChatGPT) and what policies or institutional guidelines they have put in place shortly after. The research intended to understand whether key ethical principles were reflected in the first policy responses of HE institutions and, if yes, how they were handled.

As potential principles can diverge and be numerous, and early guidelines may cover wide areas, the investigation was based on a few broad categories instead of trying to manage a large set of ideals and goals. To achieve this objective, the research was executed in three steps:

The research started with identifying and collecting general ethical ideals, which were then translated and structured for the context of higher education. A thorough content analysis was performed with the intention of emphasising positive values instead of simply focusing on issues or risks and their mitigation (a rough illustration of this coding step is sketched below, after the summary of the three steps).

Given those positive ideals, this research collected actual examples of university policies and guidelines already available: this step was executed from May to July 2023 to find early responses addressing such norms and principles developed by leading HE institutions.

The documents identified were then analysed to understand how such norms and principles had been addressed by leading HE institutions.

As a result, this research managed to highlight and contrast differing practical views, and the findings raise awareness about the difficulties of creating relevant institutional policies. The research considered the ethics of using GAI and not expectations towards their development. The next two sections provide details of the two steps.
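As a rough illustration of the kind of directed content analysis described above, the sketch below tags policy texts with the ethical dimensions whose indicator terms they mention. The dimension names follow the four clusters established later in the paper, but the keyword lexicon, the tag_document helper and the sample sentence are hypothetical: the study itself relied on manual reading and annotation rather than automated matching.

# Illustrative sketch of a directed content-analysis pass: tag each policy
# text with the ethical dimensions whose indicator terms it mentions.
# The lexicon and sample text below are hypothetical; the study itself
# relied on manual reading and annotation by the two authors.
import re
from collections import defaultdict

DIMENSIONS = {
    "accountability_responsibility": ["accountab", "responsib", "plagiar", "authorship"],
    "human_agency_oversight": ["oversight", "human agency", "detection", "sanction"],
    "transparency_explainability": ["transparen", "disclos", "syllabus", "acknowledg"],
    "inclusiveness_diversity": ["inclusiv", "diversity", "equit", "accessib"],
}

def tag_document(text):
    """Return, per ethical dimension, the indicator terms found in the text."""
    lowered = text.lower()
    hits = defaultdict(list)
    for dimension, terms in DIMENSIONS.items():
        for term in terms:
            if re.search(re.escape(term), lowered):
                hits[dimension].append(term)
    return dict(hits)

sample = ("Students must disclose any use of generative AI in their submissions "
          "and remain fully responsible for the content of their own work.")
print(tag_document(sample))

A keyword pass like this can only flag candidate passages; deciding whether a document genuinely addresses a dimension still requires the kind of contextual reading the authors describe.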

Establishing ethical principles for higher education

While the review of relevant ethical and HE literature (as presented above) was not fully conclusive, it highlighted the importance and need for some ideals specific to HE. Therefore, as a first step, this study sought to find highly respected sources of such ethical dimensions by executing a directed content analysis of relevant international regulatory and policy recommendations.

In order to establish what key values and ideas drive the formation of future AI regulations in general, Corrêa et al. ( 2023 ) investigated 200 publications discussing governance policies and ethical guidelines for using AI as proposed by various organisations (including national governments and institutions, civil society and academic organisations, private companies, as well as international bodies). The authors were also interested in whether there are common patterns or missing ideals and norms in this extensive set of proposals and recommendations. As the present research was looking for key principles and normative attributes that could form a common ground for the comparison of HE policies, this vast set of documents was used to identify internationally recognised bodies with potential real influence in this arena, and the guidelines and recommendations they have put forward for the ethical governance of AI were selected for consideration. Therefore, for the purpose of this study, the following sources were chosen (some organisations, such as the EU, were represented by several bodies):

European Commission ( 2021 ): Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2021/0106 (COD)) . Footnote 4

European Parliament Committee on Culture and Education ( 2021 ): Report on artificial intelligence in education, culture and the audiovisual sector (2020/2017(INI)) . Footnote 5

High-Level Expert Group on Artificial Intelligence (EUHLEX) ( 2019 ): Ethics Guidelines for Trustworthy AI . Footnote 6

UNESCO ( 2022 ): Recommendation on the Ethics of Artificial Intelligence (SHS/BIO/PI/2021/1) . Footnote 7

OECD ( 2019 ): Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449) . Footnote 8

The ethical dilemmas established by these international documents (most of which is considered soft law material) were then used to inductively identify the primary aspects around which the investigation of educational AI principles may be organised.

Among the above documents, the EUHLEX material is the salient one as it contains a Glossary that defines and explains, among others, the two primary concepts that will be used in this paper: “artificial intelligence” and “ethics”. As this paper is, to a large extent, based on the deduced categorisation embedded in these international documents, it will follow suit in using the above terms as EUHLEX did, supporting it with the definitions contained in the other four referenced international documents. Consequently, artificial intelligence (AI) systems are referred to in this paper as software and hardware systems designed by humans that “act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal” (EUHLEX, 2019 ). With regard to ethics, the EUHLEX group defines this term, in general, as an academic discipline which is a subfield of philosophy, dealing with questions like “What is a good action?”, “What is the value of a human life?”, “What is justice?”, or “What is the good life?”. It also mentions that academia distinguishes four major fields: (i) Meta-ethics, (ii) normative ethics, (iii) descriptive ethics, and (iv) applied ethics” (EUHLEX, 2019 , p. 37). Within these, AI ethics belongs to the latter group of applied ethics that focuses on the practical issues raised by the design, development, implementation, and use of AI systems. By extension, the application of AI systems in higher education also falls under the domain of applied ethics.

The selection of sample universities

The collection of cases started with the AI guidelines compiled by the authors as members of the AI Committee at their university from May to July 2023. The AI Committee consisted of 12 members and investigated over 150 cases to gauge international best practices of GAI use in higher education when formulating a policy recommendation for their own university leadership. Given the global nature of the subject matter, examples from various continents were collected. From this initial pool, the authors narrowed the scope for this study to the top 500 higher education institutions of the Shanghai Ranking list, as best practices most often accrue from the highest-ranking universities. Finally, only those institutions were included which, at the time of data collection, indeed had publicly available policy documents or guidelines with clearly identifiable ethical considerations (such as relevant internal documents, Codes of Ethics, Academic Regulations, Codes of Practice and Procedure, or Guidelines for Students and Teachers). By the end of this selection process, 30 samples proved to be substantiated enough to be included in this study (presented in Table 1 ).

All documents were contextually analysed and annotated by both authors individually, looking for references or mentions of ideas, actions or recommendations related to the ethical principles identified during the first step of the research. These annotations were then compared, and commonalities were analysed regarding the nature and goal of each ethical recommendation.
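The paper does not report a formal inter-coder agreement statistic, but if one wanted to quantify how closely the two authors' independent annotations align, a simple Cohen's kappa over per-document yes/no judgements would be one option. The sketch below is purely illustrative; the codings shown are made-up values, not data from the study.

# Illustrative Cohen's kappa for two coders' per-document judgements of
# whether a policy addresses a given ethical dimension (1 = yes, 0 = no).
# The codings below are invented for demonstration only.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b) and coder_a, "need paired, non-empty codings"
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.6 on this made-up example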

Principles and practices of responsible use of AI in higher education

AI-related ethical codes forming the basis of this investigation

A common feature of the selected AI ethics documents issued by international organisations is that they enumerate a set of ethical principles based on fundamental human values. The referenced international documents have different geographical- and policy scopes, yet they overlap in their categorisation of the ethical dimensions relevant to this research, even though they might use discrepant language to describe the same phenomenon (a factor we took into account when establishing key categories). For example, what EUHLEX dubs as “Human agency and oversight” is addressed by UNESCO under the section called “Human oversight and determination”, yet they essentially cover the same issues and recommended requirements. Among the many principles enshrined in these documents, the research focuses on those that can be directly linked to the everyday education practices of universities in relation to AI tools, omitting those that, within this context, are less situation-dependent and should normally form the overarching basis of the functioning of universities at all times, such as: respecting human rights and fundamental freedoms, refraining from all forms of discrimination, the right to privacy and data protection, or being aware of environmental concerns and responsibilities regarding sustainable development. As pointed out by Nikolinakos ( 2023 ), such principles and values provide essential guidance not only for development but also during the deployment and use of AI systems. Synthesising the common ethical codes in these instruments has led to the following cluster of ethical principles that are directly linked to AI-related higher education practices:

Accountability and responsibility;

Human agency and oversight;

Transparency and explainability;

Inclusiveness and diversity.

The following subsections will give a comprehensive definition of these ethical areas and relate them to higher education expectations. Each subsection will first explain the corresponding ethical cluster, then present the specific university examples, concluding with a summary of the identified best practice under that particular cluster.

Accountability and responsibility

Definition in ethical codes and relevance

The most fundamental requirements, appearing in almost all relevant documents, bring forward the necessity of implementing mechanisms to ensure responsibility and accountability for AI systems and their outcomes. These cover expectations both before and after deployment, including development and use. They entail the basic requirements of auditability (i.e. enabling the assessment of algorithms), clear roles in the management of data and design processes (as a means of contributing to the trustworthiness of AI technology), the minimisation and reporting of negative impacts (focusing on the possibility of identifying, assessing, documenting and reporting on the potential negative impacts of AI systems), as well as the ability of redress (understood as the capability to utilise mechanisms that offer legal and practical remedy when unjust adverse impact occurs) (EUHLEX, 2019 , pp. 19–20).

Additionally, Points 35–36 of the UNESCO recommendations remind us that it is imperative to “attribute ethical and legal responsibility for any stage of the life cycle of AI systems, as well as in cases of remedy related to AI systems, to physical persons or to existing legal entities. AI system can never replace ultimate human responsibility and accountability” (UNESCO, 2022 , p. 22).

The fulfilment of this fundamental principle is also expected from academic authors, as per the announcements of some of the largest publishing houses in the world. Accordingly, AI is not an author or co-author, Footnote 9 and AI-assisted technologies should not be cited as authors either, Footnote 10 given that AI-generated content cannot be considered capable of initiating an original piece of research without direction from human authors. The ethical guidelines of Wiley ( 2023 ) stated that ”[AI tools] also cannot be accountable for a published work or for research design, which is a generally held requirement of authorship, nor do they have legal standing or the ability to hold or assign copyright.” Footnote 11 This research angle carries over to teaching as well since students are also expected to produce outputs that are the results of their own work. Furthermore, they also often do their own research (such as literature search and review) in support of their projects, homework, thesis, and other forms of performance evaluation.

Accountability and responsibility in university first responses

The rapidly changing nature of the subject matter makes it significantly harder for scholars to assess the state of play of human responsibility. This is well exemplified by the change of heart at some Australian universities (see Rudolph et al. ( 2023 ) quoting newspaper articles), which first disallowed the use of AI by students in assignments, only to reverse that decision a few months later and replace it with a requirement to disclose the use of AI in homework. Similarly, Indian governments adopted a non-regulatory approach to foster an “innovation-friendly environment” for their universities in the summer of 2023 (Liu, 2023 ), only to roll back on this pledge a few months later (Dhaor, 2023 ).

Beyond this regulatory entropy, a fundamental principle enshrined in university codes of ethics across the globe is that students need to meet existing rules of scientific referencing and authorship. Footnote 12 In other words, they should refrain from any form of plagiarism in all their written work (including essays, theses, term papers, or in-class presentations). Submitting any work and assessments created by someone or something else (including AI-generated content) as if it was their own usually amounts to either a violation of scientific referencing, plagiarism or is considered to be a form of cheating (or a combination of these), depending on the terminology used by the respective higher education institution.

As a course description of Johns Hopkins puts it, “academic honesty is required in all work you submit to be graded …., you must solve all homework and programming assignments without the help of outside sources (e.g., GAI tools)” (Johns Hopkins University, 2023 ).

The Tokyo Institute of Technology applies a more flexible approach, as they “trust the independence of the students and expect the best use” of AI systems from them based on good sense and ethical standards. They add, however, that submitting reports that rely almost entirely on the output of GenAI is “highly improper, and its continued use is equivalent to one’s enslavement to the technology” (Tokyo Institute of Technology, 2023 ).

In the case of York University, the Senate’s Academic Standards, Curriculum, and Pedagogy Committee clarified in February 2023 that students are not authorised to use “text-, image-, code-, or video-generating AI tools when completing their academic work unless explicitly permitted by a specific instructor in a particular course” (York University Senate, 2023 ).

In the same time frame (6 February 2023), the University of Oxford stated in a guidance material for staff members that “the unauthorised use of AI tools in exams and other assessed work is a serious disciplinary offence” not permitted for students (University of Oxford, 2023b ).

Main message and best practice: honesty and mutual trust

In essence, students are not allowed to present AI-generated content as their own, Footnote 13 and they should have full responsibility and accountability for their own papers. Footnote 14 This is in line with the most ubiquitous principle enshrined in almost all university guidelines, irrespective of AI, that students are expected to complete their tasks based on their own knowledge and skills obtained throughout their education.

Given that the main challenge here is unauthorised use of and overreliance on GAI platforms, the best-practice answer is for students to adhere to academic honesty and integrity, scientific referencing standards and existing anti-plagiarism rules, and to complete university assignments without fully relying on GAI tools, using, first and foremost, their own skills. The only exception is when their professors instruct otherwise. By extension, preventing overuse and unauthorised use of AI helps students avoid undermining their own academic capacity-building efforts.

Human agency and oversight

AI systems have the potential to manipulate and influence human behaviour in ways that are not easily detectable. AI systems must, therefore, follow human-centric design principles and leave meaningful opportunities for human choice and intervention. Such systems should not be able to unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans (EUHLEX, 2019 , p. 16).

Human oversight thus refers to the capability for human intervention in every decision cycle of the AI system and the ability of users to make informed, autonomous decisions regarding AI systems. This encompasses the ability to choose not to use an AI system in a particular situation or to halt AI-related operations via a “stop” button or a comparable procedure in case the user detects anomalies, dysfunctions and unexpected performance from AI tools (European Commission, 2021 , Art. 14).

The sheer capability of active oversight and intervention vis-à-vis GAI systems is strongly linked to ethical responsibility and legal accountability. As Liao puts it, “the sufficient condition for human beings being rightsholders is that they have a physical basis for moral agency.” (Liao, 2020 , pp. 496–497). Wagner complemented this with the essential point that entity status for non-human actors would help to shield other parties from liability, i.e., primarily manufacturers and users (Wagner, 2018 ). This, in turn, would result in risk externalisation, which serves to minimise or relativise a person’s moral accountability and legal liability associated with wrongful or unethical acts.

Users, in our case, are primarily students who, at times, might be tempted to make use of AI tools in an unethical way, hoping to fulfil their university tasks faster and more efficiently than they could without these.

Human agency and oversight in university first responses

The crucial aspect of this ethical issue is the presence of a “stop” button or a similar regulatory procedure to streamline the operation of GAI tools. Existing university guidelines on this question point clearly in the direction of soft sanctions, if any, given that there is a lack of evidence that AI detection platforms are effective and reliable tools for telling human work apart from AI-generated work. Additionally, these tools raise significant privacy and data security concerns, which is why university guidelines are particularly cautious when referring to them. Accordingly, the National Taiwan University, the University of Toronto, the University of Waterloo, the University of Miami, the National Autonomous University of Mexico, and Yale, among others, do not recommend the use of AI detection platforms in university assessments. The University of Zürich further added a moral perspective in a guidance note from 13 July 2023: “forbidding the use of undetectable tools on unsupervised assignments or demanding some sort of honour code likely ends up punishing the honest students” (University of Zürich, 2023 ). Apart from unreliability, the University of Cape Town also drew attention in its guide for staff to the risk that AI detection tools may “disproportionately flag text written by non-first language speakers as AI-generated” (University of Cape Town, 2023 , p. 8).

Macquarie University took a slightly more ambiguous stance when they informed their staff that, while it is not “proof” for anything, an AI writing detection feature was launched within Turnitin as of 5 April 2023 (Hillier, 2023 ), claiming that the software has a 97% detection rate with a 1% false positive rate in the tests that they had conducted (Turnitin, 2023 ). Apart from these, Boston University is among the few examples that recommend employing AI detection tools, but only in a restricted manner to ”evaluate the degree to which AI tools have likely been employed” and not as a source for any punitive measures against students (University of Boston, 2023 ). Remarkably, they complement the above with suggestions for a merit-based scoring system, whereby instructors shall treat work by students who declare no use of AI tools as the baseline for grading. A lower baseline is suggested for students who declare the use of AI tools (depending on how extensive the usage was), and for the bottom of this spectrum, the university suggests imposing a significant penalty for low-energy or unreflective reuse of material generated by AI tools and assigning zero points for merely reproducing the output from AI platforms.
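To put the detection figures quoted above into perspective, the short calculation below shows how the share of flagged submissions that genuinely involved AI depends on how common undeclared AI use actually is. The 97% detection rate and 1% false-positive rate are the vendor-reported figures mentioned above; the prevalence values are hypothetical assumptions for illustration only.

# Worked base-rate illustration: how much of the flagged work truly used AI,
# given the vendor-reported 97% detection rate and 1% false-positive rate.
# The prevalence values are assumptions, not data from any university.
def flagged_precision(sensitivity, false_positive_rate, prevalence):
    true_flags = sensitivity * prevalence
    false_flags = false_positive_rate * (1 - prevalence)
    return true_flags / (true_flags + false_flags)

for prevalence in (0.50, 0.10, 0.01):
    share = flagged_precision(0.97, 0.01, prevalence)
    print(f"assumed prevalence {prevalence:.0%}: "
          f"{share:.1%} of flagged submissions actually used AI")

Under these assumptions, roughly half of all flags would be false alarms if only 1% of students actually used AI without authorisation, which helps explain the cautious, non-punitive stance taken by most of the universities mentioned above.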

A discrepant approach was adopted at the University of Toronto. Here, if an instructor indicates that the use of AI tools is not permitted on an assessment, and a student is later found to have used such a tool nevertheless, then the instructor should consider meeting with the student as the first step of a dialogue-based process under the Code of Behaviour on Academic Matters (the same Code, which categorises the use of ChatGPT and other such tools as “unauthorised aid” or as “any other form of cheating” in case, an instructor specified that no outside assistance was permitted on an assignment) (University of Toronto, 2019 ).

More specifically, Imperial College London’s Guidance on the Use of Generative AI tools envisages the possibility of inviting a random selection of students to a so-called “authenticity interview” on their submitted assignments (Imperial College London, 2023b ). This entails requiring students to attend an oral examination of their submitted work to ensure its authenticity, which includes questions about the subject or how they approached their assignment.

As a rare exception, the University of Helsinki represents one of the more rigorous examples. The “Guidelines for the Use of AI in Teaching at the University of Helsinki” does not lay down any specific procedures for AI-related ethical offences. On the contrary, as para. 7 stipulates the unauthorised use of GAI in any course examination “constitutes cheating and will be treated in the same way as other cases of cheating” (University of Helsinki, 2023 ). Footnote 15

Those teachers who are reluctant to make AI tools a big part of their courses should rather aim to develop course assessment methods that can plausibly prevent the use of AI tools instead of attempting to filter these afterwards. Footnote 16 For example, the Humboldt-Universität zu Berlin instructs that, if possible, oral or practical examinations or written examinations performed on-site are recommended as alternatives to “classical” written home assignments (Humboldt-Universität zu Berlin, 2023a ).

Monash University also mentions some examples in this regard (Monash University, 2023a ), such as: asking students to create oral presentations, videos, and multimedia resources; asking them to incorporate more personal reflections tied to the concepts studied; implementing programmatic assessment that focuses on assessing broader attributes of students, using multiple methods rather than focusing on assessing individual kinds of knowledge or skills using a single assessment method (e.g., writing an essay).

Similarly, the University of Toronto suggests that instructors: ask students to respond to a specific reading that is very new and thus has a limited online footprint; assign group work to be completed in class, with each member contributing; or ask students to create a first draft of an assignment by hand, which could be complemented by a call to explain or justify certain elements of their work (University of Toronto, 2023 ).

Main message and best practice: Avoiding overreaction

In summary, the best practice that can be identified under this ethical dilemma is to secure human oversight through a blend of preventive measures (e.g. a shift in assessment methods) and soft sanctions. Given that AI detectors are unreliable and can cause a series of data privacy issues, the sanctioning of unauthorised AI use should happen on a “soft basis”, as part of a dialogue with the student concerned. Additionally, universities need to be aware of and pay due attention to potentially unwanted rebound effects of bona fide measures, such as the merit-based scoring system of Boston University. In that case, using different scoring baselines based on the self-declared use of AI could, in practice, create incentives for not declaring any use of AI at all, thereby producing counterproductive results.

Transparency and explainability

While explainability refers to providing intelligible insight into the functioning of AI tools, with a special focus on the interplay between the user’s input and the received output, transparency alludes to the requirement of providing unambiguous communication in the framework of system use.

As the European Commission’s Regulation proposal ( 2021 ) puts it under subchapter 5.2.4., transparency obligations should apply for systems that „(i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’). When persons interact with an AI system or their emotions or characteristics are recognised through automated means, people must be informed of that circumstance. If an AI system is used to generate or manipulate image, audio or video content that appreciably resembles authentic content, there should be an obligation to disclose that the content is generated through automated means, subject to exceptions for legitimate purposes (law enforcement, freedom of expression). This allows persons to make informed choices or step back from a given situation.”

People (in our case, university students and teachers) should, therefore, be fully informed when a decision is influenced by or relies on AI algorithms. In such instances, individuals should be able to ask for further explanation from the decision-maker using AI (e.g., a university body). Furthermore, individuals should be afforded the choice to present their case to a dedicated representative of the organisation in question who should have the power to revisit the decision and make corrections if necessary (UNESCO, 2022 , p. 22). Therefore, in the context of courses and other related educational events, teachers should be clear about their utilisation of AI during the preparation of the material. Furthermore, instructors must unambiguously clarify ethical AI use in the classroom. Clear communication is essential about whether students have permission to utilise AI tools during assignments and how to report actual use.

As both UN and EU sources point out, raising awareness about and promoting basic AI literacy should be fostered as a means to empower people and reduce the digital divides and digital access inequalities resulting from the broad adoption of AI systems (EUHLEX, 2019 , p. 23; UNESCO, 2022 , p. 34).

Transparency and explainability in university first responses

The implementation of this principle seems to revolve around the challenge of decentralisation of university work, including the respect for teachers’ autonomy.

Teachers’ autonomy entails that teachers can decide whether, and to what extent, they will allow their students to use AI platforms as part of their respective courses. This, however, comes with the essential corollary that they must clearly communicate their decision to both students and university management in the course syllabus. To support transparency in this respect, many universities decided to establish 3-level or 4-level admissibility frameworks (and even those that did not establish such multi-level systems, e.g., the University of Toronto, urge instructors to explicitly indicate in the course syllabus the expected use of AI) (University of Toronto, 2023 ).

The University of Auckland is among the universities that apply a fully laissez-faire approach in this respect, meaning that there is no centralised guidance or recommendation on the subject. Instead, they confer all practical decision-making about GAI use on course directors, adding that it is ultimately the student’s responsibility to correctly acknowledge the use of Gen-AI software (University of Auckland, 2023 ). Similarly, the University of Helsinki gives its staff enough manoeuvring space to change the course of action during the semester. As para. 1 of their earlier-quoted Guidelines stipulates, teachers are responsible for deciding how GAI can be used on a given course and are free to prohibit its use entirely if they think it impedes the achievement of the learning objectives.

Colorado State University, for example, provides its teachers with 3 types of syllabus statement options (Colorado State University, 2023 ): (a) the prohibitive statement, whereby any work created by, or inspired by, AI agents is considered plagiarism and will not be tolerated; (b) the use-with-permission statement, whereby generative AI can be used, but only as an exception and in line with the teacher’s further instructions; and (c) the abdication statement, where the teacher acknowledges that the course grade will also be a reflection of the student’s ability to harness AI technologies as part of their preparation for a future in a workforce that will increasingly require AI literacy.

Macquarie University applies a similar system and provides its professors with an Assessment Checklist in which AI use can be either “Not permitted”, “Some use permitted” (meaning that the scope of use is limited and the majority of the work should be written or made by the student), or “Full use permitted (with attribution)”, alluding to the adaptive use of AI tools, where the generated content is edited, mixed, adapted and integrated into the student’s final submission – with attribution of the source (Macquarie University, 2023 ).

The same approach is used at Monash University where generative AI tools can be: (a) used for all assessments in a specific unit; (b) cannot be used for any assessments; (c) some AI tools may be used selectively (Monash University, 2023b ).

The University of Cape Town (UCT) applies a 3-tier system not just in terms of the overall approach to the use or banning of GAI, but also with regard to specific assessment approaches recommended to teachers. As far as the former is concerned, they differentiate between the strategies of: (a) Avoiding (reverting to in-person assessment, where the use of AI isn’t possible); (b) Outrunning (devising an assessment that AI cannot produce); and (c) Embracing (discussing the appropriate and ethical use of AI with students to create the circumstances for authentic assessment outputs). The assessment possibilities, in turn, are categorised into easy, medium, and hard levels. Easy tasks include, e.g., generic short written assignments. The medium level might include examples such as personalised or context-based assessments (e.g. asking students to write to a particular audience whose knowledge and values must be considered, or asking questions that would require them to give a response that draws from concepts learnt in class, in a lab, on a field trip, etc.). In contrast, hard assessments include projects involving real-world applications, synchronous oral assessments, or panel assessments (University of Cape Town, 2023 ).

4-tier systems are analogous; the only difference is that they break down the “middle ground”. Accordingly, the Chinese University of Hong Kong clarifies that Approach 1 (the default) means the prohibition of all use of AI tools; Approach 2 entails using AI tools only with prior permission; Approach 3 means using AI tools only with explicit acknowledgement; and Approach 4 is reserved for courses in which the use of AI tools is freely permitted with no acknowledgement needed (Chinese University of Hong Kong, 2023 ).

Similarly, the University of Delaware provides course syllabus statement examples for teachers including: (1) Prohibiting all use of AI tools; (2) Allowing their use only with prior permission; (3) Allow their use only with explicit acknowledgement; (4) Freely allow their use (University of Delaware, 2023 ).

The Technical University of Berlin also proposes a 4-tier system but uses a very different logic, based on the practical knowledge one can obtain by using GAI. Accordingly, they categorise AI tools according to whether they are used to: (a) acquire professional competence; (b) learn to write scientifically; (c) become able to assess AI tools and compare them with scientific methods; or (d) use AI tools professionally in scientific work. Their corresponding guideline even quotes Art. 5 of the German Constitution referencing the freedom of teaching ( Freiheit der Lehre ), entailing that teachers should have the ability to decide for themselves which teaching aids they allow or prohibit. Footnote 17

This detailed approach, however, is rather the exception. According to the compilation of 6 May 2023 by Solis (2023), among the 100 largest German universities, 2% applied a general prohibition on the use of ChatGPT, 23% granted partial permission, 12% generally permitted its use, while 63% had no or only vague guidelines in this respect.

Main message and best practice: raising awareness

Overall, the best practice answer to the dilemma of transparency is the internal decentralisation of university work and the application of a “bottom-up” approach that respects the autonomy of university professors. Notwithstanding the potential existence of regulatory frameworks that set out binding rules for all citizens of an HE institution, this means providing university instructors with proper manoeuvring space to decide on their own how they would like to make AI use permissible in their courses, insofar as they communicate their decision openly.

Inclusiveness and diversity

Para. 34 of the Report by the European Parliament Committee on Culture and Education ( 2021 ) highlights that inclusive education can only be reached with the proactive presence of teachers and stresses that “AI technologies cannot be used to the detriment or at the expense of in-person education, as teachers must not be replaced by any AI or AI-related technologies”. Additionally, para. 20 of the same document highlights the need to create diverse teams of developers and engineers to work alongside the main actors in the educational, cultural, and audiovisual sectors in order to prevent gender or social bias from being inadvertently included in AI algorithms, systems, and applications.

This approach also underlines the need to consider the variety of different theories through which AI has been developed as a precursor to ensuring the application of the principle of diversity (UNESCO, 2022, pp. 33–35), and it recognises that a nuanced answer to AI-related challenges is only possible if affected stakeholders have an equal say in regulatory and design processes – an idea closely linked to the principle of fairness and the pledge to leave no one behind who might be affected by the outcome of using AI systems (EUHLEX, 2019, pp. 18–19).

Therefore, in the context of higher education, the principle of inclusiveness aims to ensure that an institution provides the same opportunities to access the benefits of AI technologies for all its students, irrespective of their background, while also considering the particular needs of various vulnerable groups potentially marginalised based on age, gender, culture, religion, language, or disabilities. Footnote 18 Inclusiveness also alludes to stakeholder participation in internal university dialogues on the use and impact of AI systems (including students, teachers, administration and leadership) as well as in the constant evaluation of how these systems evolve. On a broader scale, it implies communication with policymakers on how higher education should accommodate itself to this rapidly changing environment (EUHLEX, 2019 , p. 23; UNESCO, 2022 , p. 35).

Inclusiveness and diversity in university first responses

Universities appear to be aware of the potential disadvantages for students who are either unfamiliar with GAI or who choose not to use it or use it in an unethical manner. As a result, many universities thought that the best way to foster inclusive GAI use was to offer specific examples of how teachers could constructively incorporate these tools into their courses.

The University of Waterloo, for example, recommends various methods that instructors can apply in class, with the same set of tools for all students, which in itself mitigates the effects of any discrepancies in students' backgrounds (University of Waterloo, 2023): (a) give students a prompt during class together with the resulting text, and ask them to critique and improve it using track changes; (b) create two distinct texts and have students explain the flaws of each or combine them in some way using track changes; (c) test code and documentation accuracy with a peer; or (d) use ChatGPT to provide a preliminary summary of an issue as a jumping-off point for further research and discussion.

The University of Pittsburgh ( 2023 ) and Monash added similar recommendations to their AI guidelines (Monash University, 2023c ).

The University of Cambridge mentions, under its AI-deas initiative, a series of projects aimed at developing new AI methods to understand and address sensory, neural or linguistic challenges – such as hearing loss, brain injury or language barriers – in order to support people who find communicating a daily challenge and to improve equity and inclusion. As they put it, "with AI we can assess and diagnose common language and communication conditions at scale, and develop technologies such as intelligent hearing aids, real-time machine translation, or other language aids to support affected individuals at home, work or school." (University of Cambridge, 2023).

The homepage of the Technical University of Berlin (Technische Universität Berlin) offers ample and diverse materials, including videos Footnote 19 and other documents, as a source of inspiration for teachers on how to provide an equitable share of AI knowledge to their students (Glathe et al. 2023). More progressively, the university's Institute of Psychology offers a learning module called "Inclusive Digitalisation", available to students enrolled in various degree programmes, to help them understand inclusion and exclusion mechanisms in digitalisation. This module touches upon topics such as barrier-free software design, mechanisms and reasons for digitalised discrimination, and biases in corporate practices (their homepage specifically notes that input and output devices, such as VR glasses, have been tested exclusively with male test subjects and that the development of digital products and services is predominantly carried out by men; the practical ramification of such a bias is input and output devices that are less appropriate for women and children) (Technische Universität Berlin, 2023).

Columbia University recommends the practice of "scaffolding", which is the process of breaking down a larger assignment into subtasks (Columbia University, 2023). In their understanding, this method facilitates regular check-ins and enables students to receive timely feedback throughout the learning process. Simultaneously, scaffolding helps instructors become more familiar with students and their work as the semester progresses, allowing them to take additional steps for students who, because of vulnerable backgrounds or disabilities, might need more attention to complete the same tasks.

The Humboldt-Universität zu Berlin, in its Recommendations, clearly links the permission of GAI use with the requirement of equal accessibility. It reminds examiners that if they require students to use AI for an examination, "students must be provided with access to these technologies free of charge and in compliance with data protection regulations" (Humboldt-Universität zu Berlin, 2023b).

The University of Cape Town likewise links inclusivity to accessibility. As they put it, "there is a risk that those with poorer access to connectivity, devices, data and literacies will get unequal access to the opportunities being provided by AI", leading to the conclusion that planning the admissible use of GAI on campus should be cognizant of access inequalities (University of Cape Town, 2023). They also draw their staff's attention to a UNESCO guidance material containing useful methods for incorporating ChatGPT into courses, such as the "Socratic opponent" (AI acts as an opponent to develop an argument), the "study buddy" (AI helps the student reflect on learning material) or the "dynamic assessor" (AI provides educators with a profile of each student's current knowledge based on their interactions with ChatGPT) (UNESCO International Institute for Higher Education in Latin America and the Caribbean, 2023).

Finally, the National Autonomous University of Mexico's Recommendations suggest using GAI tools, among other things, for the purpose of community development. They suggest that such community-building activities, whether online or in live groups, kill two birds with one stone: on the one hand, they help individuals keep their knowledge up to date on a topic that is constantly evolving; on the other, they offer people from various backgrounds the opportunity to become part of communities in which they can share their experiences and build new relations (National Autonomous University of Mexico, 2023).

Main message and best practice: Proactive central support and the pledge to leave no one behind

To conclude, AI-related inclusivity for students is best fostered if the university does not leave its professors solely to their own resources to come up with diverging initiatives. The best-practice response to this dilemma thus lies in a proactive approach: the elaboration of concrete teaching materials (e.g., subscriptions to AI tools to ensure equal accessibility for all students, templates, video tutorials, open-access answers to FAQs, etc.), specific ideas and recommendations, and support for specialised programmes and collaborations with an inclusion-generating edge. With centrally offered resources and tools, institutions seem to be able to ensure accessibility irrespective of students' backgrounds and financial abilities.

Discussion of the First Responses

While artificial intelligence, and even its generative form, has been around for a while, the arrival of application-ready LLMs – most notably ChatGPT – has changed the game when it comes to grammatically correct, large-scale and content-specific text generation. This has invoked an immediate reaction from the higher education community, as the question arose of how it may affect various forms of student performance evaluation (such as essay and thesis writing) (Chaudhry et al. 2023; Yu, 2023; Farazouli et al. 2024).

Often the very first reaction (a few months after the announcement of the availability of ChatGPT) was a ban on these tools and a potential return to hand-written evaluation and oral exams. Among the institutions investigated in this research, notable examples are most Australian universities (such as Monash) or even Oxford. On the other hand, even leading institutions immediately embraced this new tool as a potentially great helper for lecturers – the top name here being Harvard. Very early responses thus ranged widely – and changed fast over the first six to eight months "post-ChatGPT".

Over time, the institutions investigated started to put out clear guidelines and even created dedicated policies, or modified existing ones, to ensure a framework of acceptable use. These early regulatory efforts were influenced by the international ethics documents reviewed in this paper: institutions were aware of and relied on those guidelines. The main goal of this research was to shed light on how much, and in what ways, they took them on board in their first responses. Most first reactions were based on "traditional" AI ethics and an understanding of AI that predates LLMs and the generative revolution. Institutions' first responses were not based on scientific literature or arguments from journal publications; instead, as our results demonstrate, they were based on publicly available ethical norms and guidelines published by well-known international organisations and professional bodies.

Conclusions, limitations and future research

The ethical dilemmas discussed in this paper were based on the conceptualisation embedded in relevant documents of various international fora. Each ethical dimension, while multifaceted in itself, forms part of a complex set of challenges that are inextricably intertwined with one another. Browsing university materials, the overall impression is that universities primarily aim to explore and harness the potential benefits of generative AI, but not with an uncritical mindset: they focus on the opportunities while simultaneously trying to address the emerging challenges in the field.

Accordingly, the main ethical imperative is that students must complete university assignments based on the knowledge and skills they acquired during their university education, unless their instructors determine otherwise. Moral and legal responsibility in this regard always rests with human individuals: AI agents possess neither the legal standing nor the physical basis for moral agency, which makes them incapable of assuming such responsibilities. This "top-down" requirement is most often complemented by the "bottom-up" approach of providing instructors with proper manoeuvring space to decide how they would like to make AI use permissible in their courses.

Good practice in human oversight could thus be achieved through a combination of preventive measures and soft, dialogue-based procedures. This latter category includes the simple act of teachers providing clear, written communications in their syllabi and engaging in a dialogue with their students to provide unambiguous and transparent instructions on the use of generative AI tools within their courses. Additionally, to prevent the unauthorised use of AI tools, changing course assessment methods by default is more effective than engaging in post-assessment review due to the unreliability of AI detection tools.

Among the many ethical dilemmas that generative AI tools pose to social systems, this paper focused on those pertaining to the pedagogical aspects of higher education. Due to this limitation, related fields, such as university research, were excluded from the scope of the analysis. However, research-related activities are certainly ripe for scientific scrutiny along the lines indicated in this study. Furthermore, only a limited set of institutions could be investigated – those that were the "first respondents" to the set of issues covered by this study. This paper thereby hopes to inspire further research on the impact of AI tools on higher education. Such research could cover more institutions, but it would also be interesting to revisit the same institutions to see how their stance and approach have changed over time, considering how fast this technology evolves and how much we learn about its capabilities and shortcomings.

Data availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. All documents referenced in this study are publicly available on the corresponding websites provided in the Bibliography or in the footnotes. No code has been developed as part of this research.

For the methodology behind the Shanghai Rankings see: https://www.shanghairanking.com/methodology/arwu/2022 . Accessed: 14 November 2023.

While the original French version was published in 1954, the first English translation is dated 1964.

As the evaluation by Bang et al. ( 2023 ) found, ChatGPT is only 63.41% accurate on average in ten different reasoning categories under logical reasoning, non-textual reasoning, and common-sense reasoning, making it an unreliable reasoner.

Source: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence . Accessed: 14 November 2023.

Source: https://www.europarl.europa.eu/doceo/document/A-9-2021-0127_EN.html . Accessed: 14 November 2023.

Source: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai . Accessed: 14 November 2023.

Source: https://unesdoc.unesco.org/ark:/48223/pf0000381137 . Accessed: 14 November 2023.

Source: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449#mainText . Accessed: 14 November 2023.

The editors-in-chief of Nature and Science stated that ChatGPT does not meet the standard for authorship: "An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. … We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism" (Stokel-Walker, 2023). See also (Nature, 2023).

While there was an initial mistake that credited ChatGPT as an author of an academic paper, Elsevier issued a Corrigendum on the subject in February 2023 (O’Connor, 2023 ). Elsevier then clarified in its “Use of AI and AI-assisted technologies in writing for Elsevier” announcement, issued in March 2023, that “Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author”. See https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier . Accessed 23 Nov 2023.

The ethical guidelines of Wiley were updated on 28 February 2023 to clarify the publishing house's stance on AI-generated content.

See e.g.: Section 2.4 of Princeton University’s Academic Regulations (Princeton University, 2023 ); the Code of Practice and Procedure regarding Misconduct in Research of the University of Oxford (University of Oxford, 2023a ); Section 2.1.1 of the Senate Guidelines on Academic Honesty of York University, enumerating cases of cheating (York University, 2011 ); Imperial College London’s Academic Misconduct Policy and Procedures document (Imperial College London, 2023a ); the Guidelines for seminar and term papers of the University of Vienna (Universität Wien, 2016 ); Para 4. § (1) - (4) of the Anti-plagiarism Regulation of the Corvinus University of Budapest (Corvinus University of Budapest, 2018 ), to name a few.

Art. 2 (c)(v) of the early Terms of Use of OpenAI Products (including ChatGPT), dated 14 March 2023, clarified the restrictions on the use of their products. Accordingly, users may not represent output from their services as human-generated when it was not ( https://openai.com/policies/mar-2023-terms/ . Accessed 14 Nov 2023). Higher education institutions tend to follow suit with this policy. For example, the List of Student Responsibilities under the "Policies and Regulations" of the Harvard Summer School from 2023 reminds students that their "academic integrity policy forbids students to represent work as their own that they did not write, code, or create" (Harvard University, 2023).

A similar view was communicated by Taylor & Francis in a press release issued on 17 February 2023, in which they clarified that: “Authors are accountable for the originality, validity and integrity of the content of their submissions. In choosing to use AI tools, authors are expected to do so responsibly and in accordance with our editorial policies on authorship and principles of publishing ethics” (Taylor and Francis, 2023 ).

This is one of the rare examples where the guideline was adopted by the university’s senior management, in this case, the Academic Affairs Council.

It should be noted that abundant sources recommend harnessing AI tools’ opportunities to improve education instead of attempting to ban them. Heaven, among others, advocated on the pages of the MIT Technology Review the use of advanced chatbots such as ChatGPT as these could be used as “powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalised lesson plans, save teachers time on admin” (Heaven, 2023 ).

This university based its policies on the recommendations of the German Association for University Didactics (Deutsche Gesellschaft für Hochschuldidaktik). Consequently, they draw their students’ attention to the corresponding material, see: (Glathe et al. 2023 ).

For a detailed review of such groups affected by AI see the Artificial Intelligence and Democratic Values Index by the Center for AI and Digital Policy at https://www.caidp.org/reports/aidv-2023/ . Accessed 20 Nov 2023.

See for example: https://www.youtube.com/watch?v=J9W2Pd9GnpQ . Accessed: 14 November 2023.

Akram A (2023) An empirical study of AI generated text detection tools. arXiv preprint arXiv:2310.01423. https://doi.org/10.48550/arXiv.2310.01423

Alberti S (2022) Silas Alberti on X: ChatGPT is trained to not be evil. X Formerly Twitter, 1 December 2022. https://t.co/ZMFdqPs17i . Accessed 23 Nov 2023

Almaraz-López C, Almaraz-Menéndez F, López-Esteban C (2023) Comparative study of the attitudes and perceptions of university students in business administration and management and in education toward Artificial Intelligence. Educ. Sci. 13(6):609. https://doi.org/10.3390/educsci13060609

Bang Y, Cahyawijaya S, Lee N et al. (2023) A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv. https://doi.org/10.48550/arXiv.2302.04023

Baskara FXR (2023a) ChatGPT as a virtual learning environment: multidisciplinary simulations. In: Proceeding of the 3rd International Conference on Innovations in Social Sciences Education and Engineering, Paper 017. https://conference.loupiasconference.orag/index.php/icoissee3/index

Baskara FXR (2023b) The promises and pitfalls of using ChatGPT for self-determined learning in higher education: An argumentative review. Pros. Semin. Nas. Fakultas Tarb. dan. Ilmu Kegur. IAIM Sinjai 2:95–101. https://doi.org/10.47435/sentikjar.v2i0.1825

Bauer E, Greisel M, Kuznetsov I et al. (2023) Using natural language processing to support peer‐feedback in the age of artificial intelligence: A cross‐disciplinary framework and a research agenda. Br. J. Educ. Technol. 54(5):1222–1245. https://doi.org/10.1111/bjet.13336

Blalock D (2022) Here are all the ways to get around ChatGPT’s safeguards: [1/n]. X Formerly Twitter, 13 December 2022. https://twitter.com/davisblalock/status/1602600453555961856 . Accessed 23 Nov 2023

Chan CKY (2023) A comprehensive AI policy education framework for university teaching and learning. Int J. Educ. Technol. High. Educ. 20(1):1–25. https://doi.org/10.1186/s41239-023-00408-3

Chaudhry IS, Sarwary SAM, El Refae GA, Chabchoub H (2023) Time to revisit existing student’s performance evaluation approach in higher education sector in a new era of ChatGPT—A case study. Cogent Educ. 10(1):2210461. https://doi.org/10.1080/2331186x.2023.2210461

Chaudhry MA, Cukurova M, Luckin R (2022) A transparency index framework for AI in education. In: International Conference on Artificial Intelligence in Education. Springer, Cham, Switzerland, pp 195–198. https://doi.org/10.35542/osf.io/bstcf

Chinese University of Hong Kong (2023) Use of Artificial Intelligence tools in teaching, learning and assessments - A guide for students. https://www.aqs.cuhk.edu.hk/documents/A-guide-for-students_use-of-AI-tools.pdf . Accessed 23 Nov 2023

Colorado State University (2023) What should a syllabus statement on AI look like? https://tilt.colostate.edu/what-should-a-syllabus-statement-on-ai-look-like/ . Accessed 23 Nov 2023

Columbia University (2023) Considerations for AI tools in the classroom. https://ctl.columbia.edu/resources-and-technology/resources/ai-tools/ . Accessed 23 Nov 2023

Corrêa NK, Galvão C, Santos JW et al. (2023) Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns 4(10):100857. https://doi.org/10.1016/j.patter.2023.100857

Corvinus University of Budapest (2018) Anti-Plagiarism rules. https://www.uni-corvinus.hu/contents/uploads/2020/11/I.20_Plagiumszabalyzat_2018_junius_19_EN.6b1.pdf . Accessed 23 Nov 2023

Cotton DR, Cotton PA, Shipway JR (2024) Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int 61(2):228–239. https://doi.org/10.1080/14703297.2023.2190148

Crompton H, Burke D (2023) Artificial intelligence in higher education: the state of the field. Int J. Educ. Technol. High. Educ. 20(1):1–22. https://doi.org/10.1186/s41239-023-00392-8

Dhaor A (2023) India will regulate AI, ensure data privacy, says Rajeev Chandrasekhar. Hindustan Times, 12 October 2023. https://www.hindustantimes.com/cities/noida-news/india-will-regulate-ai-ensure-data-privacy-says-rajeev-chandrasekhar-101697131022456.html . Accessed 23 Nov 2023

Ellul J (1964) The technological society. Vintage Books

EUHLEX (2019) Ethics guidelines for trustworthy AI | Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai . Accessed 23 Nov 2023

European Commission (2021) Proposal for a Regulation laying down harmonised rules on artificial intelligence | Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence . Accessed 23 Nov 2023

European Parliament - Committee on Culture and Education (2021) Report on artificial intelligence in education, culture and the audiovisual sector | A9-0127/2021. https://www.europarl.europa.eu/doceo/document/A-9-2021-0127_EN.html . Accessed 23 Nov 2023

Farazouli A, Cerratto-Pargman T, Bolander-Laksov K, McGrath C (2024) Hello GPT! Goodbye home examination? An exploratory study of AI chatbots impact on university teachers’ assessment practices. Assess. Eval. High. Educ. 49(3):363–375. https://doi.org/10.1080/02602938.2023.2241676

Farrokhnia M, Banihashem SK, Noroozi O, Wals A (2023) A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int 61(3):460–474. https://doi.org/10.1080/14703297.2023.2195846

Ferguson R, Hoel T, Scheffel M, Drachsler H (2016) Guest editorial: Ethics and privacy in learning analytics. J. Learn Anal. 3(1):5–15. https://doi.org/10.18608/jla.2016.31.2

Future of Life Institute (2023) Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ . Accessed 15 Nov 2023

Gamage KA, Dehideniya SC, Xu Z, Tang X (2023) ChatGPT and higher education assessments: more opportunities than concerns? J Appl Learn Teach 6(2). https://doi.org/10.37074/jalt.2023.6.2.32

Gimpel H, Hall K, Decker S, et al. (2023) Unlocking the power of generative AI models and systems such as GPT-4 and ChatGPT for higher education: A guide for students and lecturers. Hohenheim Discussion Papers in Business, Economics and Social Sciences 2023, 02:2146. http://opus.uni-hohenheim.de/frontdoor.php?source_opus=2146&la=en

Glathe A, Mörth M, Riedel A (2023) Vorschläge für Eigenständigkeitserklärungen bei möglicher Nutzung von KI-Tools. European University Viadrina. https://opus4.kobv.de/opus4-euv/files/1326/Forschendes-Lernen-mit-KI_SKILL.pdf . Accessed 23 Nov 2023

Harvard University (2023) Student Responsibilities. Harvard Summer School 2023. https://summer.harvard.edu/academic-opportunities-support/policies-and-regulations/student-responsibilities/ . Accessed 23 Nov 2023

Heaven WD (2023) ChatGPT is going to change education, not destroy it. MIT Technology Review. https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/ . Accessed 14 Nov 2023

Hillier M (2023) Turnitin Artificial Intelligence writing detection. https://teche.mq.edu.au/2023/03/turnitin-artificial-intelligence-writing-detection/ . Accessed 23 Nov 2023

Howard SK, Mozejko A (2015) Considering the history of digital technologies in education. In: Henderson M, Romeo G (eds) Teaching and digital technologies: Big issues and critical questions. Cambridge University Press, Port Melbourne, Australia, pp 157–168. https://doi.org/10.1017/cbo9781316091968.017

Humboldt-Universität zu Berlin (2023a) ChatGPT & Co: Empfehlungen für das Umgehen mit Künstlicher Intelligenz in Prüfungen. https://www.hu-berlin.de/de/pr/nachrichten/september-2023/nr-2397-1 . Accessed 23 Nov 2023

Humboldt-Universität zu Berlin (2023b) Empfehlungen zur Nutzung von Künstlicher Intelligenz in Studienleistungen und Prüfungen an der Humboldt-Universität zu Berlin. https://www.hu-berlin.de/de/pr/nachrichten/september-2023/hu_empfehlungen_ki-in-pruefungen_20230905.pdf . Accessed 23 Nov 2023

Ilieva G, Yankova T, Klisarova-Belcheva S et al. (2023) Effects of generative chatbots in higher education. Information 14(9):492. https://doi.org/10.3390/info14090492

Imperial College London (2023a) Academic misconduct policy and procedure. https://www.imperial.ac.uk/media/imperial-college/administration-and-support-services/registry/academic-governance/public/academic-policy/academic-integrity/Academic-Misconduct-Policy-and-Procedure-v1.3-15.03.23.pdf . Accessed 14 Nov 2023

Imperial College London (2023b) College guidance on the use of generative AI tools. https://www.imperial.ac.uk/about/leadership-and-strategy/provost/vice-provost-education/generative-ai-tools-guidance/ . Accessed 23 Nov 2023

Ivanov S (2023) The dark side of artificial intelligence in higher education. Serv. Ind. J. 43(15–16):1055–1082. https://doi.org/10.1080/02642069.2023.2258799

Johns Hopkins University (2023) CSCI 601.771: Self-supervised Models. https://self-supervised.cs.jhu.edu/sp2023/ . Accessed 23 Nov 2023

Khosravi H, Shum SB, Chen G et al. (2022) Explainable artificial intelligence in education. Comput Educ. Artif. Intell. 3:100074. https://doi.org/10.1016/j.caeai.2022.100074

Liao SM (2020) The moral status and rights of Artificial Intelligence. In: Liao SM (ed) Ethics of Artificial Intelligence. Oxford University Press, pp 480–503. https://doi.org/10.1093/oso/9780190905033.003.0018

Lim T, Gottipati S, Cheong M (2023) Artificial Intelligence in today’s education landscape: Understanding and managing ethical issues for educational assessment. Research Square Preprint. https://doi.org/10.21203/rs.3.rs-2696273/v1

Limo FAF, Tiza DRH, Roque MM et al. (2023) Personalized tutoring: ChatGPT as a virtual tutor for personalized learning experiences. Soc. Space 23(1):293–312. https://socialspacejournal.eu/article-page/?id=176

Liu S (2023) India’s AI Regulation Dilemma. The Diplomat, 27 October 2023. https://thediplomat.com/2023/10/indias-ai-regulation-dilemma/ . Accessed 23 Nov 2023

Macquarie University (2023) Academic integrity vs the other AI (Generative Artificial Intelligence). https://teche.mq.edu.au/2023/03/academic-integrity-vs-the-other-ai-generative-artificial-intelligence/ . Accessed 14 Nov 2023

Memarian B, Doleck T (2023) Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI), and higher education: A systematic review. Comput Educ Artif Intell 100152. https://doi.org/10.1016/j.caeai.2023.100152

Mhlanga D (2023) Open AI in Education, the Responsible and Ethical Use of ChatGPT Towards Lifelong Learning. SSRN Electron J 4354422. https://doi.org/10.2139/ssrn.4354422

Miller GE (2023) eLearning and the Transformation of Higher Education. In: Miller GE, Ives K (eds) Leading the eLearning Transformation of Higher Education. Routledge, pp 3–23. https://doi.org/10.4324/9781003445623-3

Mollick ER, Mollick L (2022) New modes of learning enabled by AI chatbots: Three methods and assignments. SSRN Electron J 4300783. https://doi.org/10.2139/ssrn.4300783

Monash University (2023a) Generative AI and assessment: Designing assessment for achievement and demonstration of learning outcomes. https://www.monash.edu/learning-teaching/teachhq/Teaching-practices/artificial-intelligence/generative-ai-and-assessment . Accessed 23 Nov 2023

Monash University (2023b) Policy and practice guidance around acceptable and responsible use of AI technologies. https://www.monash.edu/learning-teaching/teachhq/Teaching-practices/artificial-intelligence/policy-and-practice-guidance-around-acceptable-and-responsible-use-of-ai-technologies . Accessed 23 Nov 2023

Monash University (2023c) Choosing assessment tasks. https://www.monash.edu/learning-teaching/teachhq/Assessment/choosing-assessment-tasks . Accessed 23 Nov 2023

National Autonomous University of Mexico (2023) Recomendaciones para el uso de Inteligencia Artificial Generativa en la docencia. https://cuaed.unam.mx/descargas/recomendaciones-uso-iagen-docencia-unam-2023.pdf . Accessed 14 Oct 2023

Nature (2023) Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 613:612. https://doi.org/10.1038/d41586-023-00191-1 . Editorial

Nicol DJ, Macfarlane‐Dick D (2006) Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice. Stud. High. Educ. 31(2):199–218. https://doi.org/10.1080/03075070600572090

Nikolinakos NT (2023) Ethical Principles for Trustworthy AI. In: Nikolinakos NT (ed) EU Policy and Legal Framework for Artificial Intelligence, Robotics and Related Technologies -The AI Act. Springer International Publishing, Cham, Switzerland, pp 101–166. https://doi.org/10.1007/978-3-031-27953-9

O’Connor S (2023) Corrigendum to “Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?” [Nurse Educ. Pract. 66 (2023) 103537]. Nurse Educ. Pr. 67:103572. https://doi.org/10.1016/j.nepr.2023.103572

OECD (2019) Recommendation of the Council on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449#mainText . Accessed 23 Nov 2023

OpenAI (2022) Introducing ChatGPT. https://openai.com/blog/chatgpt . Accessed 14 Nov 2022

Paek S, Kim N (2021) Analysis of worldwide research trends on the impact of artificial intelligence in education. Sustainability 13(14):7941. https://doi.org/10.3390/su13147941

Perkins M (2023) Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. J. Univ. Teach. Learn Pr. 20(2):07. https://doi.org/10.53761/1.20.02.07

Princeton University (2023) Academic Regulations: Rights, rules, responsibilities. https://rrr.princeton.edu/2023/students-and-university/24-academic-regulations . Accessed 23 Nov 2023

Qadir J (2023) Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. In: 2023 IEEE Global Engineering Education Conference (EDUCON). IEEE, pp 1–9. https://doi.org/10.1109/educon54358.2023.10125121

Rasul T, Nair S, Kalendra D et al. (2023) The role of ChatGPT in higher education: Benefits, challenges, and future research directions. J. Appl Learn Teach. 6(1):41–56. https://doi.org/10.37074/jalt.2023.6.1.29

Ray PP (2023) ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Phys. Syst. 3:121–154. https://doi.org/10.1016/j.iotcps.2023.04.003

Reiser RA (2001) A history of instructional design and technology: Part I: A history of instructional media. Educ. Technol. Res Dev. 49(1):53–64. https://doi.org/10.1007/BF02504506

Roose K (2023) GPT-4 is exciting and scary. New York Times, 15 March 2023. https://www.nytimes.com/2023/03/15/technology/gpt-4-artificial-intelligence-openai.html . Accessed 23 Nov 2023

Rudolph J, Tan S, Tan S (2023) War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. J. Appl Learn Teach. 6(1):364–389. https://doi.org/10.37074/jalt.2023.6.1.23

Solis T (2023) Die ChatGPT-Richtlinien der 100 größten deutschen Universitäten. Scribbr, 6 May 2023. https://www.scribbr.de/ki-tools-nutzen/chatgpt-universitaere-richtlinien/ . Accessed 23 Nov 2023

Stokel-Walker C (2023) ChatGPT listed as author on research papers: Many scientists disapprove. Nature 613:620–621. https://doi.org/10.1038/d41586-023-00107-z

Sweeney S (2023) Who wrote this? Essay mills and assessment – Considerations regarding contract cheating and AI in higher education. Int J. Manag Educ. 21(2):100818. https://doi.org/10.1016/j.ijme.2023.100818

Taylor and Francis (2023) Taylor & Francis clarifies the responsible use of AI tools in academic content creation. Taylor Francis Newsroom, 17 February 2023. https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/ . Accessed 23 Nov 2023

Technische Universität Berlin (2023) Inklusive Digitalisierung Modul. https://moseskonto.tu-berlin.de/moses/modultransfersystem/bolognamodule/beschreibung/anzeigen.html?nummer=51021&version=2&sprache=1 . Accessed 05 Aug 2024

Tokyo Institute of Technology (2023) Policy on Use of Generative Artificial Intelligence in Learning. https://www.titech.ac.jp/english/student/students/news/2023/066592.html . Accessed 23 Nov 2023

Turnitin (2023) Turnitin announces AI writing detector and AI writing resource center for educators. https://www.turnitin.com/press/turnitin-announces-ai-writing-detector-and-ai-writing-resource-center-for-educators . Accessed 14 Nov 2023

Uchiyama S, Umemura K, Morita Y (2023) Large Language Model-based system to provide immediate feedback to students in flipped classroom preparation learning. arXiv preprint arXiv:2307.11388. https://doi.org/10.48550/arXiv.2307.11388

UNESCO (2022) Recommendation on the ethics of Artificial Intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137 . Accessed 23 Nov 2023

UNESCO International Institute for Higher Education in Latin America and the Caribbean (2023) ChatGPT and Artificial Intelligence in higher education. https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf . Accessed 14 Nov 2023

Universität Wien (2016) Guidelines for seminar and term papers. https://bda.univie.ac.at/fileadmin/user_upload/p_bda/Teaching/PaperGuidlines.pdf . Accessed 23 Nov 2023

University of Auckland (2023) Advice for students on using Generative Artificial Intelligence in coursework. https://www.auckland.ac.nz/en/students/forms-policies-and-guidelines/student-policies-and-guidelines/academic-integrity-copyright/advice-for-student-on-using-generative-ai.html . Accessed 24 Nov 2023

University of Boston (2023) Using Generative AI in coursework. https://www.bu.edu/cds-faculty/culture-community/gaia-policy/ . Accessed 23 Nov 2023

University of Cambridge (2023) Artificial Intelligence and teaching, learning and assessment. https://www.cambridgeinternational.org/support-and-training-for-schools/artificial-intelligence/ . Accessed 23 Nov 2023

University of Cape Town (2023) Staff Guide - Assessment and academic integrity in the age of AI. https://docs.google.com/document/u/0/d/1o5ZIOBjPsP6Nh2VIlM56_kcuqB-Y7xTf/edit?pli=1&usp=embed_facebook . Accessed 14 Nov 2023

University of Delaware (2023) Considerations for using and addressing advanced automated tools in coursework and assignments. https://ctal.udel.edu/advanced-automated-tools/ . Accessed 14 Nov 2023

University of Helsinki (2023) Using AI to support learning | Instructions for students. https://studies.helsinki.fi/instructions/article/using-ai-support-learning . Accessed 24 Nov 2023

University of Oxford (2023a) Code of practice and procedure on academic integrity in research. https://hr.admin.ox.ac.uk/academic-integrity-in-research . Accessed 23 Nov 2023

University of Oxford (2023b) Unauthorised use of AI in exams and assessment. https://academic.admin.ox.ac.uk/article/unauthorised-use-of-ai-in-exams-and-assessment . Accessed 23 Nov 2023

University of Pittsburgh (2023) Generative AI Resources for Faculty. https://teaching.pitt.edu/generative-ai-resources-for-faculty/ . Accessed 23 Nov 2023

University of Toronto (2019) Code of behaviour on academic matters. https://governingcouncil.utoronto.ca/secretariat/policies/code-behaviour-academic-matters-july-1-2019 . Accessed 23 Nov 2023

University of Toronto (2023) ChatGPT and Generative AI in the classroom. https://www.viceprovostundergrad.utoronto.ca/strategic-priorities/digital-learning/special-initiative-artificial-intelligence/ . Accessed 20 Nov 2023

University of Waterloo (2023) Artificial Intelligence at UW. https://uwaterloo.ca/associate-vice-president-academic/artificial-intelligence-uw . Accessed 23 Nov 2023

University of Zürich (2023) ChatGPT. https://ethz.ch/en/the-eth-zurich/education/educational-development/ai-in-education/chatgpt.html . Accessed 23 Nov 2023

Wach K, Duong CD, Ejdys J et al. (2023) The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT. Entrep. Bus. Econ. Rev. 11(2):7–24. https://doi.org/10.15678/eber.2023.110201

Wagner G (2018) Robot liability. SSRN Electron J 3198764. https://doi.org/10.2139/ssrn.3198764

Wiley (2023) Best practice guidelines on research integrity and publishing ethics. https://authorservices.wiley.com/ethics-guidelines/index.html . Accessed 20 Nov 2023

Yan L, Sha L, Zhao L et al. (2023) Practical and ethical challenges of large language models in education: A systematic scoping review. Br. J. Educ. Technol. 55(1):90–112. https://doi.org/10.1111/bjet.13370

York University (2011) Senate Policy on Academic Honesty. https://www.yorku.ca/secretariat/policies/policies/academic-honesty-senate-policy-on/ . Accessed 23 Nov 2023

York University Senate (2023) Academic Integrity and Generative Artificial Intelligence Technology. https://www.yorku.ca/unit/vpacad/academic-integrity/wp-content/uploads/sites/576/2023/03/Senate-ASCStatement_Academic-Integrity-and-AI-Technology.pdf . Accessed 23 Nov 2023

Yu H (2023) Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Front Psychol. 14:1181712. https://doi.org/10.3389/fpsyg.2023.1181712

The authors have received no funding, grants, or other support for the research reported here. Open access funding provided by Corvinus University of Budapest.

Author information

Authors and affiliations

Corvinus University of Budapest, Budapest, Hungary

Attila Dabis & Csaba Csáki

Contributions

AD established the initial idea and contributed to the collection of ethical standards as well as to the collection of university policy documents, and also contributed to writing the initial draft and the final version. CsCs reviewed and clarified the initial concept and developed the first structure, including methodological considerations, and also contributed to the collection of university policy documents as well as to writing the second draft and the final version.

Corresponding author

Correspondence to Attila Dabis.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This research did not involve any human participants or animals and required no ethical approval.

Informed consent

This article does not contain any studies with human participants performed by any of the authors. No consent was required as no private data was collected or utilized.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Dabis, A., Csáki, C. AI and ethics: Investigating the first policy responses of higher education institutions to the challenge of generative AI. Humanit Soc Sci Commun 11, 1006 (2024). https://doi.org/10.1057/s41599-024-03526-z

Received: 21 February 2024

Accepted: 29 July 2024

Published: 06 August 2024

DOI: https://doi.org/10.1057/s41599-024-03526-z
