Mobile Navigation

Research index, filter and sort, filter selections, sort options, research papers.

Young Tiger

The present and future of AI

Finale doshi-velez on how ai is shaping our lives and how we can shape ai.

image of Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University, that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is  increasingly touching people’s lives in settings that range from  movie recommendations  and  voice assistants  to  autonomous driving  and  automated medical diagnoses .

Barbara Grosz , the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) is a member of the standing committee overseeing the AI100 project and Finale Doshi-Velez , Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report. 

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

There's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.

I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc. 

Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI,  it's tricky to nurture both innovation and basic protections.  Perhaps the most important innovation will be in approaches for AI accountability.

Topics: AI / Machine Learning , Computer Science

Cutting-edge science delivered direct to your inbox.

Join the Harvard SEAS mailing list.

Scientist Profiles

Finale Doshi-Velez

Finale Doshi-Velez

Herchel Smith Professor of Computer Science

Press Contact

Leah Burrows | 617-496-1351 | [email protected]

Related News

Four people standing in a line, one wearing a Harvard sweatshirt, two holding second-place statues

A new phase for Harvard Quantum Computing Club

SEAS students place second at MIT quantum hackathon

Computer Science , Quantum Engineering , Undergraduate Student Profile

H.T. Kung

Kung elected to ACM

Computer scientist honored for transformative contributions

Awards , Computer Science

Fantasy figures on top of a blue title screen for a recent SEAS Industry Insights seminar

All gamers welcome

Riot Games explains range of roles in video game design

Computer Science

A free, AI-powered research tool for scientific literature

  • Charles E. Jones
  • Coil Spring
  • Brownian Motion

New & Improved API for Developers

Introducing semantic reader in beta.

Stay Connected With Semantic Scholar Sign Up What Is Semantic Scholar? Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI.

research papers on ai

The Journal of Artificial Intelligence Research (JAIR) is dedicated to the rapid dissemination of important research results to the global artificial intelligence (AI) community. The journal’s scope encompasses all areas of AI, including agents and multi-agent systems, automated reasoning, constraint processing and search, knowledge representation, machine learning, natural language, planning and scheduling, robotics and vision, and uncertainty in AI.

Current Issue

Vol. 79 (2024)

Published: 2024-01-10

Multi-Modal Attentive Prompt Learning for Few-shot Emotion Recognition in Conversations

Condense: conditional density estimation for time series anomaly detection, performative ethics from within the ivory tower: how cs practitioners uphold systems of oppression, learning logic specifications for policy guidance in pomdps: an inductive logic programming approach, multi-objective reinforcement learning based on decomposition: a taxonomy and framework, can fairness be automated guidelines and opportunities for fairness-aware automl, practical and parallelizable algorithms for non-monotone submodular maximization with size constraint, exploring the tradeoff between system profit and income equality among ride-hailing drivers, on mitigating the utility-loss in differentially private learning: a new perspective by a geometrically inspired kernel approach, an algorithm with improved complexity for pebble motion/multi-agent path finding on trees, weighted, circular and semi-algebraic proofs, reinforcement learning for generative ai: state of the art, opportunities and open research challenges, human-in-the-loop reinforcement learning: a survey and position on requirements, challenges, and opportunities, boolean observation games, detecting change intervals with isolation distributional kernel, query-driven qualitative constraint acquisition, visually grounded language learning: a review of language games, datasets, tasks, and models, right place, right time: proactive multi-robot task allocation under spatiotemporal uncertainty, principles and their computational consequences for argumentation frameworks with collective attacks, the ai race: why current neural network-based architectures are a poor basis for artificial general intelligence, undesirable biases in nlp: addressing challenges of measurement.

Tackling the most challenging problems in computer science

Our teams aspire to make discoveries that positively impact society. Core to our approach is sharing our research and tools to fuel progress in the field, to help more people more quickly. We regularly publish in academic journals, release projects as open source, and apply research to Google products to benefit users at scale.

Featured research developments

research papers on ai

Mitigating aviation’s climate impact with Project Contrails

research papers on ai

Consensus and subjectivity of skin tone annotation for ML fairness

research papers on ai

A toolkit for transparency in AI dataset documentation

research papers on ai

Building better pangenomes to improve the equity of genomics

research papers on ai

A set of methods, best practices, and examples for designing with AI

research papers on ai

Learn more from our research

Researchers across Google are innovating across many domains. We challenge conventions and reimagine technology so that everyone can benefit.

research papers on ai

Publications

Google publishes over 1,000 papers annually. Publishing our work enables us to collaborate and share ideas with, as well as learn from, the broader scientific community.

research papers on ai

Research areas

From conducting fundamental research to influencing product development, our research teams have the opportunity to impact technology used by billions of people every day.

research papers on ai

Tools and datasets

We make tools and datasets available to the broader research community with the goal of building a more collaborative ecosystem.

research papers on ai

Meet the people behind our innovations

research papers on ai

Our teams collaborate with the research and academic communities across the world

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings
  • Advanced Search
  • Journal List
  • Future Healthc J
  • v.8(2); 2021 Jul

Logo of futhealthcj

Artificial intelligence in healthcare: transforming the practice of medicine

Junaid bajwa.

A Microsoft Research, Cambridge, UK

Usman Munir

B Microsoft Research, Cambridge, UK

Aditya Nori

C Microsoft Research, Cambridge, UK

Bryan Williams

D University College London, London, UK and director, NIHR UCLH Biomedical Research Centre, London, UK

Artificial intelligence (AI) is a powerful and disruptive area of computer science, with the potential to fundamentally transform the practice of medicine and the delivery of healthcare. In this review article, we outline recent breakthroughs in the application of AI in healthcare, describe a roadmap to building effective, reliable and safe AI systems, and discuss the possible future direction of AI augmented healthcare systems.

Introduction

Healthcare systems around the world face significant challenges in achieving the ‘quadruple aim’ for healthcare: improve population health, improve the patient's experience of care, enhance caregiver experience and reduce the rising cost of care. 1–3 Ageing populations, growing burden of chronic diseases and rising costs of healthcare globally are challenging governments, payers, regulators and providers to innovate and transform models of healthcare delivery. Moreover, against a backdrop now catalysed by the global pandemic, healthcare systems find themselves challenged to ‘perform’ (deliver effective, high-quality care) and ‘transform’ care at scale by leveraging real-world data driven insights directly into patient care. The pandemic has also highlighted the shortages in healthcare workforce and inequities in the access to care, previously articulated by The King's Fund and the World Health Organization (Box ​ (Box1 1 ). 4,5

Workforce challenges in the next decade

The application of technology and artificial intelligence (AI) in healthcare has the potential to address some of these supply-and-demand challenges. The increasing availability of multi-modal data (genomics, economic, demographic, clinical and phenotypic) coupled with technology innovations in mobile, internet of things (IoT), computing power and data security herald a moment of convergence between healthcare and technology to fundamentally transform models of healthcare delivery through AI-augmented healthcare systems.

In particular, cloud computing is enabling the transition of effective and safe AI systems into mainstream healthcare delivery. Cloud computing is providing the computing capacity for the analysis of considerably large amounts of data, at higher speeds and lower costs compared with historic ‘on premises’ infrastructure of healthcare organisations. Indeed, we observe that many technology providers are increasingly seeking to partner with healthcare organisations to drive AI-driven medical innovation enabled by cloud computing and technology-related transformation (Box ​ (Box2 2 ). 6–8

Quotes from technology leaders

Here, we summarise recent breakthroughs in the application of AI in healthcare, describe a roadmap to building effective AI systems and discuss the possible future direction of AI augmented healthcare systems.

What is artificial intelligence?

Simply put, AI refers to the science and engineering of making intelligent machines, through algorithms or a set of rules, which the machine follows to mimic human cognitive functions, such as learning and problem solving. 9 AI systems have the potential to anticipate problems or deal with issues as they come up and, as such, operate in an intentional, intelligent and adaptive manner. 10 AI's strength is in its ability to learn and recognise patterns and relationships from large multidimensional and multimodal datasets; for example, AI systems could translate a patient's entire medical record into a single number that represents a likely diagnosis. 11,12 Moreover, AI systems are dynamic and autonomous, learning and adapting as more data become available. 13

AI is not one ubiquitous, universal technology, rather, it represents several subfields (such as machine learning and deep learning) that, individually or in combination, add intelligence to applications. Machine learning (ML) refers to the study of algorithms that allow computer programs to automatically improve through experience. 14 ML itself may be categorised as ‘supervised’, ‘unsupervised’ and ‘reinforcement learning’ (RL), and there is ongoing research in various sub-fields including ‘semi-supervised’, ‘self-supervised’ and ‘multi-instance’ ML.

  • Supervised learning leverages labelled data (annotated information); for example, using labelled X-ray images of known tumours to detect tumours in new images. 15
  • ‘Unsupervised learning’ attempts to extract information from data without labels; for example, categorising groups of patients with similar symptoms to identify a common cause. 16
  • In RL, computational agents learn by trial and error, or by expert demonstration. The algorithm learns by developing a strategy to maximise rewards. Of note, major breakthroughs in AI in recent years have been based on RL.
  • Deep learning (DL) is a class of algorithms that learns by using a large, many-layered collection of connected processes and exposing these processors to a vast set of examples. DL has emerged as the predominant method in AI today driving improvements in areas such as image and speech recognition. 17,18

How to build effective and trusted AI-augmented healthcare systems?

Despite more than a decade of significant focus, the use and adoption of AI in clinical practice remains limited, with many AI products for healthcare still at the design and develop stage. 19–22 While there are different ways to build AI systems for healthcare, far too often there are attempts to force square pegs into round holes ie find healthcare problems to apply AI solutions to without due consideration to local context (such as clinical workflows, user needs, trust, safety and ethical implications).

We hold the view that AI amplifies and augments, rather than replaces, human intelligence. Hence, when building AI systems in healthcare, it is key to not replace the important elements of the human interaction in medicine but to focus it, and improve the efficiency and effectiveness of that interaction. Moreover, AI innovations in healthcare will come through an in-depth, human-centred understanding of the complexity of patient journeys and care pathways.

In Fig ​ Fig1, 1 , we describe a problem-driven, human-centred approach, adapted from frameworks by Wiens et al , Care and Sendak to building effective and reliable AI-augmented healthcare systems. 23–25

An external file that holds a picture, illustration, etc.
Object name is futurehealth-8-2-e188fig1.jpg

Multi-step, iterative approach to build effective and reliable AI-augmented systems in healthcare.

Design and develop

The first stage is to design and develop AI solutions for the right problems using a human-centred AI and experimentation approach and engaging appropriate stakeholders, especially the healthcare users themselves.

Stakeholder engagement and co-creation

Build a multidisciplinary team including computer and social scientists, operational and research leadership, and clinical stakeholders (physician, caregivers and patients) and subject experts (eg for biomedical scientists) that would include authorisers, motivators, financiers, conveners, connectors, implementers and champions. 26 A multi-stakeholder team brings the technical, strategic, operational expertise to define problems, goals, success metrics and intermediate milestones.

Human-centred AI

A human-centred AI approach combines an ethnographic understanding of health systems, with AI. Through user-designed research, first understand the key problems (we suggest using a qualitative study design to understand ‘what is the problem’, ‘why is it a problem’, ‘to whom does it matter’, ‘why has it not been addressed before’ and ‘why is it not getting attention’) including the needs, constraints and workflows in healthcare organisations, and the facilitators and barriers to the integration of AI within the clinical context. After defining key problems, the next step is to identify which problems are appropriate for AI to solve, whether there is availability of applicable datasets to build and later evaluate AI. By contextualising algorithms in an existing workflow, AI systems would operate within existing norms and practices to ensure adoption, providing appropriate solutions to existing problems for the end user.

Experimentation

The focus should be on piloting of new stepwise experiments to build AI tools, using tight feedback loops from stakeholders to facilitate rapid experiential learning and incremental changes. 27 The experiments would allow the trying out of new ideas simultaneously, exploring to see which one works, learn what works and what doesn't, and why. 28 Experimentation and feedback will help to elucidate the purpose and intended uses for the AI system: the likely end users and the potential harm and ethical implications of AI system to them (for instance, data privacy, security, equity and safety).

Evaluate and validate

Next, we must iteratively evaluate and validate the predictions made by the AI tool to test how well it is functioning. This is critical, and evaluation is based on three dimensions: statistical validity, clinical utility and economic utility.

  • Statistical validity is understanding the performance of AI on metrics of accuracy, reliability, robustness, stability and calibration. High model performance on retrospective, in silico settings is not sufficient to demonstrate clinical utility or impact.
  • To determine clinical utility, evaluate the algorithm in a real-time environment on a hold-out and temporal validation set (eg longitudinal and external geographic datasets) to demonstrate clinical effectiveness and generalisability. 25
  • Economic utility quantifies the net benefit relative to the cost from the investment in the AI system.

Scale and diffuse

Many AI systems are initially designed to solve a problem at one healthcare system based on the patient population specific to that location and context. Scale up of AI systems requires special attention to deployment modalities, model updates, the regulatory system, variation between systems and reimbursement environment.

Monitor and maintain

Even after an AI system has been deployed clinically, it must be continually monitored and maintained to monitor for risks and adverse events using effective post-market surveillance. Healthcare organisations, regulatory bodies and AI developers should cooperate to collate and analyse the relevant datasets for AI performance, clinical and safety-related risks, and adverse events. 29

What are the current and future use cases of AI in healthcare?

AI can enable healthcare systems to achieve their ‘quadruple aim’ by democratising and standardising a future of connected and AI augmented care, precision diagnostics, precision therapeutics and, ultimately, precision medicine (Table ​ (Table1 1 ). 30 Research in the application of AI healthcare continues to accelerate rapidly, with potential use cases being demonstrated across the healthcare sector (both physical and mental health) including drug discovery, virtual clinical consultation, disease diagnosis, prognosis, medication management and health monitoring.

Widescale adoption and application of artificial intelligence in healthcare

Timings are illustrative to widescale adoption of the proposed innovation taking into account challenges / regulatory environment / use at scale.

We describe a non-exhaustive suite of AI applications in healthcare in the near term, medium term and longer term, for the potential capabilities of AI to augment, automate and transform medicine.

AI today (and in the near future)

Currently, AI systems are not reasoning engines ie cannot reason the same way as human physicians, who can draw upon ‘common sense’ or ‘clinical intuition and experience’. 12 Instead, AI resembles a signal translator, translating patterns from datasets. AI systems today are beginning to be adopted by healthcare organisations to automate time consuming, high volume repetitive tasks. Moreover, there is considerable progress in demonstrating the use of AI in precision diagnostics (eg diabetic retinopathy and radiotherapy planning).

AI in the medium term (the next 5–10 years)

In the medium term, we propose that there will be significant progress in the development of powerful algorithms that are efficient (eg require less data to train), able to use unlabelled data, and can combine disparate structured and unstructured data including imaging, electronic health data, multi-omic, behavioural and pharmacological data. In addition, healthcare organisations and medical practices will evolve from being adopters of AI platforms, to becoming co-innovators with technology partners in the development of novel AI systems for precision therapeutics.

AI in the long term (>10 years)

In the long term, AI systems will become more intelligent , enabling AI healthcare systems achieve a state of precision medicine through AI-augmented healthcare and connected care. Healthcare will shift from the traditional one-size-fits-all form of medicine to a preventative, personalised, data-driven disease management model that achieves improved patient outcomes (improved patient and clinical experiences of care) in a more cost-effective delivery system.

Connected/augmented care

AI could significantly reduce inefficiency in healthcare, improve patient flow and experience, and enhance caregiver experience and patient safety through the care pathway; for example, AI could be applied to the remote monitoring of patients (eg intelligent telehealth through wearables/sensors) to identify and provide timely care of patients at risk of deterioration.

In the long term, we expect that healthcare clinics, hospitals, social care services, patients and caregivers to be all connected to a single, interoperable digital infrastructure using passive sensors in combination with ambient intelligence. 31 Following are two AI applications in connected care.

Virtual assistants and AI chatbots

AI chatbots (such as those used in Babylon ( www.babylonhealth.com ) and Ada ( https://ada.com )) are being used by patients to identify symptoms and recommend further actions in community and primary care settings. AI chatbots can be integrated with wearable devices such as smartwatches to provide insights to both patients and caregivers in improving their behaviour, sleep and general wellness.

Ambient and intelligent care

We also note the emergence of ambient sensing without the need for any peripherals.

  • Emerald ( www.emeraldinno.com ): a wireless, touchless sensor and machine learning platform for remote monitoring of sleep, breathing and behaviour, founded by Massachusetts Institute of Technology faculty and researchers.
  • Google nest: claiming to monitor sleep (including sleep disturbances like cough) using motion and sound sensors. 32
  • A recently published article exploring the ability to use smart speakers to contactlessly monitor heart rhythms. 33
  • Automation and ambient clinical intelligence: AI systems leveraging natural language processing (NLP) technology have the potential to automate administrative tasks such as documenting patient visits in electronic health records, optimising clinical workflow and enabling clinicians to focus more time on caring for patients (eg Nuance Dragon Ambient eXperience ( www.nuance.com/healthcare/ambient-clinical-intelligence.html )).

Precision diagnostics

Diagnostic imaging.

The automated classification of medical images is the leading AI application today. A recent review of AI/ML-based medical devices approved in the USA and Europe from 2015–2020 found that more than half (129 (58%) devices in the USA and 126 (53%) devices in Europe) were approved or CE marked for radiological use. 34 Studies have demonstrated AI's ability to meet or exceed the performance of human experts in image-based diagnoses from several medical specialties including pneumonia in radiology (a convolutional neural network trained with labelled frontal chest X-ray images outperformed radiologists in detecting pneumonia), dermatology (a convolutional neural network was trained with clinical images and was found to classify skin lesions accurately), pathology (one study trained AI algorithms with whole-slide pathology images to detect lymph node metastases of breast cancer and compared the results with those of pathologists) and cardiology (a deep learning algorithm diagnosed heart attack with a performance comparable with that of cardiologists). 35–38

We recognise that there are some exemplars in this area in the NHS (eg University of Leeds Virtual Pathology Project and the National Pathology Imaging Co-operative) and expect widescale adoption and scaleup of AI-based diagnostic imaging in the medium term. 39 We provide two use cases of such technologies.

Diabetic retinopathy screening

Key to reducing preventable, diabetes-related vision loss worldwide is screening individuals for detection and the prompt treatment of diabetic retinopathy. However, screening is costly given the substantial number of diabetes patients and limited manpower for eye care worldwide. 40 Research studies on automated AI algorithms for diabetic retinopathy in the USA, Singapore, Thailand and India have demonstrated robust diagnostic performance and cost effectiveness. 41–44 Moreover, Centers for Medicare & Medicaid Services approved Medicare reimbursement for the use of Food and Drug Administration approved AI algorithm ‘IDx-DR’, which demonstrated 87% sensitivity and 90% specificity for detecting more-than-mild diabetic retinopathy. 45

Improving the precision and reducing waiting timings for radiotherapy planning

An important AI application is to assist clinicians for image preparation and planning tasks for radiotherapy cancer treatment. Currently, segmentation of the images is time consuming and laborious task, performed manually by an oncologist using specially designed software to draw contours around the regions of interest. The AI-based InnerEye open-source technology can cut this preparation time for head and neck, and prostate cancer by up to 90%, meaning that waiting times for starting potentially life-saving radiotherapy treatment can be dramatically reduced (Fig ​ (Fig2 2 ). 46,47

An external file that holds a picture, illustration, etc.
Object name is futurehealth-8-2-e188fig2.jpg

Potential applications for the InnerEye deep learning toolkit include quantitative radiology for monitoring tumour progression, planning for surgery and radiotherapy planning. 47

Precision therapeutics

To make progress towards precision therapeutics, we need to considerably improve our understanding of disease. Researchers globally are exploring the cellular and molecular basis of disease, collecting a range of multimodal datasets that can lead to digital and biological biomarkers for diagnosis, severity and progression. Two important future AI applications include immunomics / synthetic biology and drug discovery.

Immunomics and synthetic biology

Through the application of AI tools on multimodal datasets in the future, we may be able to better understand the cellular basis of disease and the clustering of diseases and patient populations to provide more targeted preventive strategies, for example, using immunomics to diagnose and better predict care and treatment options. This will be revolutionary for multiple standards of care, with particular impact in the cancer, neurological and rare disease space, personalising the experience of care for the individual.

AI-driven drug discovery

AI will drive significant improvement in clinical trial design and optimisation of drug manufacturing processes, and, in general, any combinatorial optimisation process in healthcare could be replaced by AI. We have already seen the beginnings of this with the recent announcements by DeepMind and AlphaFold, which now sets the stage for better understanding disease processes, predicting protein structures and developing more targeted therapeutics (for both rare and more common diseases; Fig ​ Fig3 3 ). 48,49

An external file that holds a picture, illustration, etc.
Object name is futurehealth-8-2-e188fig3.jpg

An overview of the main neural network model architecture for AlphaFold. 49 MSA = multiple sequence alignment.

Precision medicine

New curative therapies.

Over the past decade, synthetic biology has produced developments like CRISPR gene editing and some personalised cancer therapies. However, the life cycle for developing such advanced therapies is still extremely inefficient and expensive.

In future, with better access to data (genomic, proteomic, glycomic, metabolomic and bioinformatic), AI will allow us to handle far more systematic complexity and, in turn, help us transform the way we understand, discover and affect biology. This will improve the efficiency of the drug discovery process by helping better predict early which agents are more likely to be effective and also better anticipate adverse drug effects, which have often thwarted the further development of otherwise effective drugs at a costly late stage in the development process. This, in turn will democratise access to novel advanced therapies at a lower cost.

AI empowered healthcare professionals

In the longer term, healthcare professionals will leverage AI in augmenting the care they provide, allowing them to provide safer, standardised and more effective care at the top of their licence; for example, clinicians could use an ‘AI digital consult’ to examine ‘digital twin’ models of their patients (a truly ‘digital and biomedical’ version of a patient), allowing them to ‘test’ the effectiveness, safety and experience of an intervention (such as a cancer drug) in the digital environment prior to delivering the intervention to the patient in the real world.

We recognise that there are significant challenges related to the wider adoption and deployment of AI into healthcare systems. These challenges include, but are not limited to, data quality and access, technical infrastructure, organisational capacity, and ethical and responsible practices in addition to aspects related to safety and regulation. Some of these issues have been covered, but others go beyond the scope of this current article.

Conclusion and key recommendations

Advances in AI have the potential to transform many aspects of healthcare, enabling a future that is more personalised, precise, predictive and portable. It is unclear if we will see an incremental adoption of new technologies or radical adoption of these technological innovations, but the impact of such technologies and the digital renaissance they bring requires health systems to consider how best they will adapt to the changing landscape. For the NHS, the application of such technologies truly has the potential to release time for care back to healthcare professionals, enabling them to focus on what matters to their patients and, in the future, leveraging a globally democratised set of data assets comprising the ‘highest levels of human knowledge’ to ‘work at the limits of science’ to deliver a common high standard of care, wherever and whenever it is delivered, and by whoever. 50 Globally, AI could become a key tool for improving health equity around the world.

As much as the last 10 years have been about the roll out of digitisation of health records for the purposes of efficiency (and in some healthcare systems, billing/reimbursement), the next 10 years will be about the insight and value society can gain from these digital assets, and how these can be translated into driving better clinical outcomes with the assistance of AI, and the subsequent creation of novel data assets and tools. It is clear that we are at an turning point as it relates to the convergence of the practice of medicine and the application of technology, and although there are multiple opportunities, there are formidable challenges that need to be overcome as it relates to the real world and the scale of implementation of such innovation. A key to delivering this vision will be an expansion of translational research in the field of healthcare applications of artificial intelligence. Alongside this, we need investment into the upskilling of a healthcare workforce and future leaders that are digitally enabled, and to understand and embrace, rather than being intimidated by, the potential of an AI-augmented healthcare system.

Healthcare leaders should consider (as a minimum) these issues when planning to leverage AI for health:

  • processes for ethical and responsible access to data: healthcare data is highly sensitive, inconsistent, siloed and not optimised for the purposes of machine learning development, evaluation, implementation and adoption
  • access to domain expertise / prior knowledge to make sense and create some of the rules which need to be applied to the datasets (to generate the necessary insight)
  • access to sufficient computing power to generate decisions in real time, which is being transformed exponentially with the advent of cloud computing
  • research into implementation: critically, we must consider, explore and research issues which arise when you take the algorithm and put it in the real world, building ‘trusted’ AI algorithms embedded into appropriate workflows.

Search form

Doing more but learning less: addressing the risks of ai in research.

Abstract illustration of data

(© stock.adobe.com)

Artificial intelligence (AI) is widely heralded for its potential to enhance productivity in scientific research. But with that promise come risks that could narrow scientists’ ability to better understand the world, according to a new paper co-authored by a Yale anthropologist.

Some future AI approaches, the authors argue, could constrict the questions researchers ask, the experiments they perform, and the perspectives that come to bear on scientific data and theories.

All told, these factors could leave people vulnerable to “illusions of understanding” in which they believe they comprehend the world better than they do.

The paper published March 7 in Nature .

“ There is a risk that scientists will use AI to produce more while understanding less,” said co-author Lisa Messeri, an anthropologist in Yale’s Faculty of Arts and Sciences. “We’re not arguing that scientists shouldn’t use AI tools, but we’re advocating for a conversation about how scientists will use them and suggesting that we shouldn’t automatically assume that all uses of the technology, or the ubiquitous use of it, will benefit science.”

The paper, co-authored by Princeton cognitive scientist M. J. Crockett, sets a framework for discussing the risks involved in using AI tools throughout the scientific research process, from study design through peer review.

“ We hope this paper offers a vocabulary for talking about AI’s potential epistemic risks,” Messeri said.

Added Crockett: “To understand these risks, scientists can benefit from work in the humanities and qualitative social sciences.”

Messeri and Crockett classified proposed visions of AI spanning the scientific process that are currently creating buzz among researchers into four archetypes:

  • In study design, they argue, “AI as Oracle” tools are imagined as being able to objectively and efficiently search, evaluate, and summarize massive scientific literatures, helping researchers to formulate questions in their project’s design stage.
  • In data collection, “AI as Surrogate” applications, it is hoped, allow scientists to generate accurate stand-in data points, including as a replacement for human study participants, when data is otherwise too difficult or expensive to obtain.
  • In data analysis, “AI as Quant” tools seek to surpass the human intellect’s ability to analyze vast and complex datasets.
  • And “AI as Arbiter” applications aim to objectively evaluate scientific studies for merit and replicability, thereby replacing humans in the peer-review process.   

The authors warn against treating AI applications from these four archetypes as trusted partners, rather than simply tools , in the production of scientific knowledge. Doing so, they say, could make scientists susceptible to illusions of understanding, which can crimp their perspectives and convince them that they know more than they do.

The efficiencies and insights that AI tools promise can weaken the production of scientific knowledge by creating “monocultures of knowing,” in which researchers prioritize the questions and methods best suited to AI over other modes of inquiry, Messeri and Crockett state. A scholarly environment of that kind leaves researchers vulnerable to what they call “illusions of exploratory breadth,” where scientists wrongly believe that they are exploring all testable hypotheses, when they are only examining the narrower range of questions that can be tested through AI.

For example, “Surrogate” AI tools that seem to accurately mimic human survey responses could make experiments that require measurements of physical behavior or face-to-face interactions increasingly unpopular because they are slower and more expensive to conduct, Crockett said.

The authors also describe the possibility that AI tools become viewed as more objective and reliable than human scientists, creating a “monoculture of knowers” in which AI systems are treated as a singular, authoritative, and objective knower in place of a diverse scientific community of scientists with varied backgrounds, training, and expertise. A monoculture, they say, invites “illusions of objectivity” where scientists falsely believe that AI tools have no perspective or represent all perspectives when, in truth, they represent the standpoints of the computer scientists who developed and trained them.

“ There is a belief around science that the objective observer is the ideal creator of knowledge about the world,” Messeri said. “But this is a myth. There has never been an objective ‘knower,’ there can never be one, and continuing to pursue this myth only weakens science.”  

There is substantial evidence that human diversity makes science more robust and creative, the authors add.

“ Acknowledging that science is a social practice that benefits from including diverse standpoints will help us realize its full potential,” Crockett said. “Replacing diverse standpoints with AI tools will set back the clock on the progress we’ve made toward including more perspectives in scientific work.”

It is important to remember AI’s social implications, which extend far beyond the laboratories where it is being used in research, Messeri said.

“ We train scientists to think about technical aspects of new technology,” she said. “We don’t train them nearly as well to consider the social aspects, which is vital to future work in this domain.”

Science & Technology

Social Sciences

Media Contact

Bess Connolly : [email protected] ,

research papers on ai

Pulitzer Prize-winning authors on the art of writing biographies

research papers on ai

PODCAST: Harnessing the immune system to detect and eradicate disease

Two people shaking hands

New Yale faculty receive honorary Master of Arts degrees

research papers on ai

Greening global trade: A reform agenda for a sustainable future

  • Show More Articles

Academia Insider

The best AI tools for research papers and academic research (Literature review, grants, PDFs and more)

As our collective understanding and application of artificial intelligence (AI) continues to evolve, so too does the realm of academic research. Some people are scared by it while others are openly embracing the change. 

Make no mistake, AI is here to stay!

Instead of tirelessly scrolling through hundreds of PDFs, a powerful AI tool comes to your rescue, summarizing key information in your research papers. Instead of manually combing through citations and conducting literature reviews, an AI research assistant proficiently handles these tasks.

These aren’t futuristic dreams, but today’s reality. Welcome to the transformative world of AI-powered research tools!

The influence of AI in scientific and academic research is an exciting development, opening the doors to more efficient, comprehensive, and rigorous exploration.

This blog post will dive deeper into these tools, providing a detailed review of how AI is revolutionizing academic research. We’ll look at the tools that can make your literature review process less tedious, your search for relevant papers more precise, and your overall research process more efficient and fruitful.

I know that I wish these were around during my time in academia. It can be quite confronting when trying to work out what ones you should and shouldn’t use. A new one seems to be coming out every day!

Here is everything you need to know about AI for academic research and the ones I have personally trialed on my Youtube channel.

Best ChatGPT interface – Chat with PDFs/websites and more

I get more out of ChatGPT with HeyGPT . It can do things that ChatGPT cannot which makes it really valuable for researchers.

Use your own OpenAI API key ( h e re ). No login required. Access ChatGPT anytime, including peak periods. Faster response time. Unlock advanced functionalities with HeyGPT Ultra for a one-time lifetime subscription

AI literature search and mapping – best AI tools for a literature review – elicit and more

Harnessing AI tools for literature reviews and mapping brings a new level of efficiency and precision to academic research. No longer do you have to spend hours looking in obscure research databases to find what you need!

AI-powered tools like Semantic Scholar and elicit.org use sophisticated search engines to quickly identify relevant papers.

They can mine key information from countless PDFs, drastically reducing research time. You can even search with semantic questions, rather than having to deal with key words etc.

With AI as your research assistant, you can navigate the vast sea of scientific research with ease, uncovering citations and focusing on academic writing. It’s a revolutionary way to take on literature reviews.

  • Elicit –  https://elicit.org
  • Supersymmetry.ai: https://www.supersymmetry.ai
  • Semantic Scholar: https://www.semanticscholar.org
  • Connected Papers –  https://www.connectedpapers.com/
  • Research rabbit – https://www.researchrabbit.ai/
  • Laser AI –  https://laser.ai/
  • Litmaps –  https://www.litmaps.com
  • Inciteful –  https://inciteful.xyz/
  • Scite –  https://scite.ai/
  • System –  https://www.system.com

If you like AI tools you may want to check out this article:

  • How to get ChatGPT to write an essay [The prompts you need]

AI-powered research tools and AI for academic research

AI research tools, like Concensus, offer immense benefits in scientific research. Here are the general AI-powered tools for academic research. 

These AI-powered tools can efficiently summarize PDFs, extract key information, and perform AI-powered searches, and much more. Some are even working towards adding your own data base of files to ask questions from. 

Tools like scite even analyze citations in depth, while AI models like ChatGPT elicit new perspectives.

The result? The research process, previously a grueling endeavor, becomes significantly streamlined, offering you time for deeper exploration and understanding. Say goodbye to traditional struggles, and hello to your new AI research assistant!

  • Bit AI –  https://bit.ai/
  • Consensus –  https://consensus.app/
  • Exper AI –  https://www.experai.com/
  • Hey Science (in development) –  https://www.heyscience.ai/
  • Iris AI –  https://iris.ai/
  • PapersGPT (currently in development) –  https://jessezhang.org/llmdemo
  • Research Buddy –  https://researchbuddy.app/
  • Mirror Think – https://mirrorthink.ai

AI for reading peer-reviewed papers easily

Using AI tools like Explain paper and Humata can significantly enhance your engagement with peer-reviewed papers. I always used to skip over the details of the papers because I had reached saturation point with the information coming in. 

These AI-powered research tools provide succinct summaries, saving you from sifting through extensive PDFs – no more boring nights trying to figure out which papers are the most important ones for you to read!

They not only facilitate efficient literature reviews by presenting key information, but also find overlooked insights.

With AI, deciphering complex citations and accelerating research has never been easier.

  • Open Read –  https://www.openread.academy
  • Chat PDF – https://www.chatpdf.com
  • Explain Paper – https://www.explainpaper.com
  • Humata – https://www.humata.ai/
  • Lateral AI –  https://www.lateral.io/
  • Paper Brain –  https://www.paperbrain.study/
  • Scholarcy – https://www.scholarcy.com/
  • SciSpace Copilot –  https://typeset.io/
  • Unriddle – https://www.unriddle.ai/
  • Sharly.ai – https://www.sharly.ai/

AI for scientific writing and research papers

In the ever-evolving realm of academic research, AI tools are increasingly taking center stage.

Enter Paper Wizard, Jenny.AI, and Wisio – these groundbreaking platforms are set to revolutionize the way we approach scientific writing.

Together, these AI tools are pioneering a new era of efficient, streamlined scientific writing.

  • Paper Wizard –  https://paperwizard.ai/
  • Jenny.AI https://jenni.ai/ (20% off with code ANDY20)
  • Wisio – https://www.wisio.app

AI academic editing tools

In the realm of scientific writing and editing, artificial intelligence (AI) tools are making a world of difference, offering precision and efficiency like never before. Consider tools such as Paper Pal, Writefull, and Trinka.

Together, these tools usher in a new era of scientific writing, where AI is your dedicated partner in the quest for impeccable composition.

  • Paper Pal –  https://paperpal.com/
  • Writefull –  https://www.writefull.com/
  • Trinka –  https://www.trinka.ai/

AI tools for grant writing

In the challenging realm of science grant writing, two innovative AI tools are making waves: Granted AI and Grantable.

These platforms are game-changers, leveraging the power of artificial intelligence to streamline and enhance the grant application process.

Granted AI, an intelligent tool, uses AI algorithms to simplify the process of finding, applying, and managing grants. Meanwhile, Grantable offers a platform that automates and organizes grant application processes, making it easier than ever to secure funding.

Together, these tools are transforming the way we approach grant writing, using the power of AI to turn a complex, often arduous task into a more manageable, efficient, and successful endeavor.

  • Granted AI – https://grantedai.com/
  • Grantable – https://grantable.co/

Free AI research tools

There are many different tools online that are emerging for researchers to be able to streamline their research processes. There’s no need for convience to come at a massive cost and break the bank.

The best free ones at time of writing are:

  • Elicit – https://elicit.org
  • Connected Papers – https://www.connectedpapers.com/
  • Litmaps – https://www.litmaps.com ( 10% off Pro subscription using the code “STAPLETON” )
  • Consensus – https://consensus.app/

Wrapping up

The integration of artificial intelligence in the world of academic research is nothing short of revolutionary.

With the array of AI tools we’ve explored today – from research and mapping, literature review, peer-reviewed papers reading, scientific writing, to academic editing and grant writing – the landscape of research is significantly transformed.

The advantages that AI-powered research tools bring to the table – efficiency, precision, time saving, and a more streamlined process – cannot be overstated.

These AI research tools aren’t just about convenience; they are transforming the way we conduct and comprehend research.

They liberate researchers from the clutches of tedium and overwhelm, allowing for more space for deep exploration, innovative thinking, and in-depth comprehension.

Whether you’re an experienced academic researcher or a student just starting out, these tools provide indispensable aid in your research journey.

And with a suite of free AI tools also available, there is no reason to not explore and embrace this AI revolution in academic research.

We are on the precipice of a new era of academic research, one where AI and human ingenuity work in tandem for richer, more profound scientific exploration. The future of research is here, and it is smart, efficient, and AI-powered.

Before we get too excited however, let us remember that AI tools are meant to be our assistants, not our masters. As we engage with these advanced technologies, let’s not lose sight of the human intellect, intuition, and imagination that form the heart of all meaningful research. Happy researching!

Thank you to Ivan Aguilar – Ph.D. Student at SFU (Simon Fraser University), for starting this list for me!

research papers on ai

Dr Andrew Stapleton has a Masters and PhD in Chemistry from the UK and Australia. He has many years of research experience and has worked as a Postdoctoral Fellow and Associate at a number of Universities. Although having secured funding for his own research, he left academia to help others with his YouTube channel all about the inner workings of academia and how to make it work for you.

Thank you for visiting Academia Insider.

We are here to help you navigate Academia as painlessly as possible. We are supported by our readers and by visiting you are helping us earn a small amount through ads and affiliate revenue - Thank you!

research papers on ai

2024 © Academia Insider

research papers on ai

share this!

March 8, 2024

This article has been reviewed according to Science X's editorial process and policies . Editors have highlighted the following attributes while ensuring the content's credibility:

fact-checked

peer-reviewed publication

trusted source

Doing more but learning less: Addressing the risks of AI in research

by Mike Cummings, Yale University

ai and research

Artificial intelligence (AI) is widely heralded for its potential to enhance productivity in scientific research. But with that promise come risks that could narrow scientists' ability to better understand the world, according to a new paper co-authored by a Yale anthropologist.

Some future AI approaches, the authors argue, could constrict the questions researchers ask, the experiments they perform, and the perspectives that come to bear on scientific data and theories.

All told, these factors could leave people vulnerable to "illusions of understanding" in which they believe they comprehend the world better than they do.

The Perspective article is published in Nature .

"There is a risk that scientists will use AI to produce more while understanding less," said co-author Lisa Messeri, an anthropologist in Yale's Faculty of Arts and Sciences. "We're not arguing that scientists shouldn't use AI tools, but we're advocating for a conversation about how scientists will use them and suggesting that we shouldn't automatically assume that all uses of the technology, or the ubiquitous use of it, will benefit science."

The paper, co-authored by Princeton cognitive scientist M. J. Crockett, sets a framework for discussing the risks involved in using AI tools throughout the scientific research process, from study design through peer review.

"We hope this paper offers a vocabulary for talking about AI's potential epistemic risks," Messeri said.

Added Crockett, "To understand these risks, scientists can benefit from work in the humanities and qualitative social sciences."

Messeri and Crockett classified proposed visions of AI spanning the scientific process that are currently creating buzz among researchers into four archetypes:

  • In study design , they argue, "AI as Oracle" tools are imagined as being able to objectively and efficiently search, evaluate, and summarize massive scientific literatures, helping researchers to formulate questions in their project's design stage.
  • In data collection, "AI as Surrogate" applications, it is hoped, allow scientists to generate accurate stand-in data points, including as a replacement for human study participants, when data is otherwise too difficult or expensive to obtain.
  • In data analysis, "AI as Quant" tools seek to surpass the human intellect's ability to analyze vast and complex datasets.
  • And "AI as Arbiter" applications aim to objectively evaluate scientific studies for merit and replicability, thereby replacing humans in the peer-review process.

The authors warn against treating AI applications from these four archetypes as trusted partners, rather than simply tools, in the production of scientific knowledge. Doing so, they say, could make scientists susceptible to illusions of understanding, which can crimp their perspectives and convince them that they know more than they do.

The efficiencies and insights that AI tools promise can weaken the production of scientific knowledge by creating "monocultures of knowing," in which researchers prioritize the questions and methods best suited to AI over other modes of inquiry, Messeri and Crockett state. A scholarly environment of that kind leaves researchers vulnerable to what they call "illusions of exploratory breadth," where scientists wrongly believe that they are exploring all testable hypotheses, when they are only examining the narrower range of questions that can be tested through AI.

For example, "Surrogate" AI tools that seem to accurately mimic human survey responses could make experiments that require measurements of physical behavior or face-to-face interactions increasingly unpopular because they are slower and more expensive to conduct, Crockett said.

The authors also describe the possibility that AI tools become viewed as more objective and reliable than human scientists, creating a "monoculture of knowers" in which AI systems are treated as a singular, authoritative, and objective knower in place of a diverse scientific community of scientists with varied backgrounds, training, and expertise. A monoculture, they say, invites "illusions of objectivity" where scientists falsely believe that AI tools have no perspective or represent all perspectives when, in truth, they represent the standpoints of the computer scientists who developed and trained them.

"There is a belief around science that the objective observer is the ideal creator of knowledge about the world," Messeri said. "But this is a myth. There has never been an objective 'knower,' there can never be one, and continuing to pursue this myth only weakens science."

There is substantial evidence that human diversity makes science more robust and creative, the authors add.

"Acknowledging that science is a social practice that benefits from including diverse standpoints will help us realize its full potential," Crockett said. "Replacing diverse standpoints with AI tools will set back the clock on the progress we've made toward including more perspectives in scientific work."

It is important to remember AI's social implications, which extend far beyond the laboratories where it is being used in research, Messeri said.

"We train scientists to think about technical aspects of new technology," she said. "We don't train them nearly as well to consider the social aspects, which is vital to future work in this domain."

Journal information: Nature

Provided by Yale University

Explore further

Feedback to editors

research papers on ai

Harnessing the mechanisms of fungal bioluminescence to confer autonomous luminescence in plant and animal cells

32 minutes ago

research papers on ai

New study discovers how altered protein folding drives multicellular evolution

research papers on ai

Designing nanoparticles for pregnancy-safe treatments

59 minutes ago

research papers on ai

Bald eagles eat prairie dogs? Researchers underscore relationship between raptors and rodents in the Great Plains

research papers on ai

NASA unveils design for message heading to Jupiter's moon Europa

research papers on ai

Pushing the boundary on ultralow frequency gravitational waves

research papers on ai

Primatologist observes how monkeys change behavior to survive deforestation

2 hours ago

research papers on ai

Research unveils effective STEM program models for high school students from historically marginalized communities

research papers on ai

CSI in space: Analyzing bloodstain patterns in microgravity

3 hours ago

research papers on ai

Transcription factors that regulate development of light organs and bioluminescence in firefly identified

Relevant physicsforums posts, interesting anecdotes in the history of physics, berta karlik -- the grande dame of the vienna radium institute.

5 hours ago

History of Railroad Safety - Spotlight on current derailments

16 hours ago

Cover songs versus the original track, which ones are better?

Mar 7, 2024

Interesting Words

Mar 6, 2024

Which ancient civilizations are you most interested in?

Mar 4, 2024

More from Art, Music, History, and Linguistics

Related Stories

research papers on ai

Study argues that large language models can reveal breakthroughs in neuroscience that humans alone cannot

Feb 9, 2024

research papers on ai

Time to rethink how we define scientific expertise and authority, argue psychologists

Jan 29, 2024

research papers on ai

Researchers discuss how AI could change the nature of social science research

Jun 16, 2023

research papers on ai

Unleashing the power of AI to track animal behavior

Sep 26, 2023

research papers on ai

Large language models pose risk to science with false answers, says study

Nov 20, 2023

research papers on ai

Guiding vaccine development with machine learning

Aug 10, 2023

Recommended for you

research papers on ai

Keep the change: Scientists analyze the attitudes of shop assistants

research papers on ai

Standing together against hate: A collective responsibility

research papers on ai

AI model trained with images can recognize visual indicators of gentrification

Mar 5, 2024

research papers on ai

Although trust in science remains high, the public questions scientists' adherence to science's norms

research papers on ai

New model evaluates how reputation and indirect reciprocity affect cooperative behaviors

Let us know if there is a problem with our content.

Use this form if you have come across a typo, inaccuracy or would like to send an edit request for the content on this page. For general inquiries, please use our contact form . For general feedback, use the public comments section below (please adhere to guidelines ).

Please select the most appropriate category to facilitate processing of your request

Thank you for taking time to provide your feedback to the editors.

Your feedback is important to us. However, we do not guarantee individual replies due to the high volume of messages.

E-mail the story

Your email address is used only to let the recipient know who sent the email. Neither your address nor the recipient's address will be used for any other purpose. The information you enter will appear in your e-mail message and is not retained by Phys.org in any form.

Newsletter sign up

Get weekly and/or daily updates delivered to your inbox. You can unsubscribe at any time and we'll never share your details to third parties.

More information Privacy policy

Donate and enjoy an ad-free experience

We keep our content available to everyone. Consider supporting Science X's mission by getting a premium account.

E-mail newsletter

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts
  • Perspective
  • Published: 06 March 2024

Artificial intelligence and illusions of understanding in scientific research

  • Lisa Messeri   ORCID: orcid.org/0000-0002-0964-123X 1   na1 &
  • M. J. Crockett   ORCID: orcid.org/0000-0001-8800-410X 2 , 3   na1  

Nature volume  627 ,  pages 49–58 ( 2024 ) Cite this article

457 Altmetric

Metrics details

  • Human behaviour
  • Interdisciplinary studies
  • Research management
  • Social anthropology

Scientists are enthusiastically imagining ways in which artificial intelligence (AI) tools might improve research. Why are AI tools so attractive and what are the risks of implementing them across the research pipeline? Here we develop a taxonomy of scientists’ visions for AI, observing that their appeal comes from promises to improve productivity and objectivity by overcoming human shortcomings. But proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do. Such illusions obscure the scientific community’s ability to see the formation of scientific monocultures, in which some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors. The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less. By analysing the appeal of these tools, we provide a framework for advancing discussions of responsible knowledge production in the age of AI.

This is a preview of subscription content, access via your institution

Access options

Access Nature and 54 other Nature Portfolio journals

Get Nature+, our best-value online-access subscription

24,99 € / 30 days

cancel any time

Subscribe to this journal

Receive 51 print issues and online access

185,98 € per year

only 3,65 € per issue

Rent or buy this article

Prices vary by article type

Prices may be subject to local taxes which are calculated during checkout

research papers on ai

Crabtree, G. Self-driving laboratories coming of age. Joule 4 , 2538–2541 (2020).

Article   CAS   Google Scholar  

Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620 , 47–60 (2023). This review explores how AI can be incorporated across the research pipeline, drawing from a wide range of scientific disciplines .

Article   CAS   PubMed   Google Scholar  

Dillion, D., Tandon, N., Gu, Y. & Gray, K. Can AI language models replace human participants? Trends Cogn. Sci. 27 , 597–600 (2023).

Article   PubMed   Google Scholar  

Grossmann, I. et al. AI and the transformation of social science research. Science 380 , 1108–1109 (2023). This forward-looking article proposes a variety of ways to incorporate generative AI into social-sciences research .

Gil, Y. Will AI write scientific papers in the future? AI Mag. 42 , 3–15 (2022).

Google Scholar  

Kitano, H. Nobel Turing Challenge: creating the engine for scientific discovery. npj Syst. Biol. Appl. 7 , 29 (2021).

Article   PubMed   PubMed Central   Google Scholar  

Benjamin, R. Race After Technology: Abolitionist Tools for the New Jim Code (Oxford Univ. Press, 2020). This book examines how social norms about race become embedded in technologies, even those that are focused on providing good societal outcomes .

Broussard, M. More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (MIT Press, 2023).

Noble, S. U. Algorithms of Oppression: How Search Engines Reinforce Racism (New York Univ. Press, 2018).

Bender, E. M., Gebru, T., McMillan-Major, A. & Shmitchell, S. On the dangers of stochastic parrots: can language models be too big? in Proc. 2021 ACM Conference on Fairness, Accountability, and Transparency 610–623 (Association for Computing Machinery, 2021). One of the first comprehensive critiques of large language models, this article draws attention to a host of issues that ought to be considered before taking up such tools .

Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale Univ. Press, 2021).

Johnson, D. G. & Verdicchio, M. Reframing AI discourse. Minds Mach. 27 , 575–590 (2017).

Article   Google Scholar  

Atanasoski, N. & Vora, K. Surrogate Humanity: Race, Robots, and the Politics of Technological Futures (Duke Univ. Press, 2019).

Mitchell, M. & Krakauer, D. C. The debate over understanding in AI’s large language models. Proc. Natl Acad. Sci. USA 120 , e2215907120 (2023).

Kidd, C. & Birhane, A. How AI can distort human beliefs. Science 380 , 1222–1223 (2023).

Birhane, A., Kasirzadeh, A., Leslie, D. & Wachter, S. Science in the age of large language models. Nat. Rev. Phys. 5 , 277–280 (2023).

Kapoor, S. & Narayanan, A. Leakage and the reproducibility crisis in machine-learning-based science. Patterns 4 , 100804 (2023).

Hullman, J., Kapoor, S., Nanayakkara, P., Gelman, A. & Narayanan, A. The worst of both worlds: a comparative analysis of errors in learning from data in psychology and machine learning. In Proc. 2022 AAAI/ACM Conference on AI, Ethics, and Society (eds Conitzer, V. et al.) 335–348 (Association for Computing Machinery, 2022).

Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1 , 206–215 (2019). This paper articulates the problems with attempting to explain AI systems that lack interpretability, and advocates for building interpretable models instead .

Crockett, M. J., Bai, X., Kapoor, S., Messeri, L. & Narayanan, A. The limitations of machine learning models for predicting scientific replicability. Proc. Natl Acad. Sci. USA 120 , e2307596120 (2023).

Article   CAS   PubMed   PubMed Central   Google Scholar  

Lazar, S. & Nelson, A. AI safety on whose terms? Science 381 , 138 (2023).

Collingridge, D. The Social Control of Technology (St Martin’s Press, 1980).

Wagner, G., Lukyanenko, R. & Paré, G. Artificial intelligence and the conduct of literature reviews. J. Inf. Technol. 37 , 209–226 (2022).

Hutson, M. Artificial-intelligence tools aim to tame the coronavirus literature. Nature https://doi.org/10.1038/d41586-020-01733-7 (2020).

Haas, Q. et al. Utilizing artificial intelligence to manage COVID-19 scientific evidence torrent with Risklick AI: a critical tool for pharmacology and therapy development. Pharmacology 106 , 244–253 (2021).

Müller, H., Pachnanda, S., Pahl, F. & Rosenqvist, C. The application of artificial intelligence on different types of literature reviews – a comparative study. In 2022 International Conference on Applied Artificial Intelligence (ICAPAI) https://doi.org/10.1109/ICAPAI55158.2022.9801564 (Institute of Electrical and Electronics Engineers, 2022).

van Dinter, R., Tekinerdogan, B. & Catal, C. Automation of systematic literature reviews: a systematic literature review. Inf. Softw. Technol. 136 , 106589 (2021).

Aydın, Ö. & Karaarslan, E. OpenAI ChatGPT generated literature review: digital twin in healthcare. In Emerging Computer Technologies 2 (ed. Aydın, Ö.) 22–31 (İzmir Akademi Dernegi, 2022).

AlQuraishi, M. AlphaFold at CASP13. Bioinformatics 35 , 4862–4865 (2019).

Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596 , 583–589 (2021).

Lee, J. S., Kim, J. & Kim, P. M. Score-based generative modeling for de novo protein design. Nat. Computat. Sci. 3 , 382–392 (2023).

Gómez-Bombarelli, R. et al. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach. Nat. Mater. 15 , 1120–1127 (2016).

Krenn, M. et al. On scientific understanding with artificial intelligence. Nat. Rev. Phys. 4 , 761–769 (2022).

Extance, A. How AI technology can tame the scientific literature. Nature 561 , 273–274 (2018).

Hastings, J. AI for Scientific Discovery (CRC Press, 2023). This book reviews current and future incorporation of AI into the scientific research pipeline .

Ahmed, A. et al. The future of academic publishing. Nat. Hum. Behav. 7 , 1021–1026 (2023).

Gray, K., Yam, K. C., Zhen’An, A. E., Wilbanks, D. & Waytz, A. The psychology of robots and artificial intelligence. In The Handbook of Social Psychology (eds Gilbert, D. et al.) (in the press).

Argyle, L. P. et al. Out of one, many: using language models to simulate human samples. Polit. Anal. 31 , 337–351 (2023).

Aher, G., Arriaga, R. I. & Kalai, A. T. Using large language models to simulate multiple humans and replicate human subject studies. In Proc. 40th International Conference on Machine Learning (eds Krause, A. et al.) 337–371 (JMLR.org, 2023).

Binz, M. & Schulz, E. Using cognitive psychology to understand GPT-3. Proc. Natl Acad. Sci. USA 120 , e2218523120 (2023).

Ornstein, J. T., Blasingame, E. N. & Truscott, J. S. How to train your stochastic parrot: large language models for political texts. Github , https://joeornstein.github.io/publications/ornstein-blasingame-truscott.pdf (2023).

He, S. et al. Learning to predict the cosmological structure formation. Proc. Natl Acad. Sci. USA 116 , 13825–13832 (2019).

Article   MathSciNet   CAS   PubMed   PubMed Central   Google Scholar  

Mahmood, F. et al. Deep adversarial training for multi-organ nuclei segmentation in histopathology images. IEEE Trans. Med. Imaging 39 , 3257–3267 (2020).

Teixeira, B. et al. Generating synthetic X-ray images of a person from the surface geometry. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 9059–9067 (Institute of Electrical and Electronics Engineers, 2018).

Marouf, M. et al. Realistic in silico generation and augmentation of single-cell RNA-seq data using generative adversarial networks. Nat. Commun. 11 , 166 (2020).

Watts, D. J. A twenty-first century science. Nature 445 , 489 (2007).

boyd, d. & Crawford, K. Critical questions for big data. Inf. Commun. Soc. 15 , 662–679 (2012). This article assesses the ethical and epistemic implications of scientific and societal moves towards big data and provides a parallel case study for thinking about the risks of artificial intelligence .

Jolly, E. & Chang, L. J. The Flatland fallacy: moving beyond low–dimensional thinking. Top. Cogn. Sci. 11 , 433–454 (2019).

Yarkoni, T. & Westfall, J. Choosing prediction over explanation in psychology: lessons from machine learning. Perspect. Psychol. Sci. 12 , 1100–1122 (2017).

Radivojac, P. et al. A large-scale evaluation of computational protein function prediction. Nat. Methods 10 , 221–227 (2013).

Bileschi, M. L. et al. Using deep learning to annotate the protein universe. Nat. Biotechnol. 40 , 932–937 (2022).

Barkas, N. et al. Joint analysis of heterogeneous single-cell RNA-seq dataset collections. Nat. Methods 16 , 695–698 (2019).

Demszky, D. et al. Using large language models in psychology. Nat. Rev. Psychol. 2 , 688–701 (2023).

Karjus, A. Machine-assisted mixed methods: augmenting humanities and social sciences with artificial intelligence. Preprint at https://arxiv.org/abs/2309.14379 (2023).

Davies, A. et al. Advancing mathematics by guiding human intuition with AI. Nature 600 , 70–74 (2021).

Peterson, J. C., Bourgin, D. D., Agrawal, M., Reichman, D. & Griffiths, T. L. Using large-scale experiments and machine learning to discover theories of human decision-making. Science 372 , 1209–1214 (2021).

Ilyas, A. et al. Adversarial examples are not bugs, they are features. Preprint at https://doi.org/10.48550/arXiv.1905.02175 (2019)

Semel, B. M. Listening like a computer: attentional tensions and mechanized care in psychiatric digital phenotyping. Sci. Technol. Hum. Values 47 , 266–290 (2022).

Gil, Y. Thoughtful artificial intelligence: forging a new partnership for data science and scientific discovery. Data Sci. 1 , 119–129 (2017).

Checco, A., Bracciale, L., Loreti, P., Pinfield, S. & Bianchi, G. AI-assisted peer review. Humanit. Soc. Sci. Commun. 8 , 25 (2021).

Thelwall, M. Can the quality of published academic journal articles be assessed with machine learning? Quant. Sci. Stud. 3 , 208–226 (2022).

Dhar, P. Peer review of scholarly research gets an AI boost. IEEE Spectrum spectrum.ieee.org/peer-review-of-scholarly-research-gets-an-ai-boost (2020).

Heaven, D. AI peer reviewers unleashed to ease publishing grind. Nature 563 , 609–610 (2018).

Conroy, G. How ChatGPT and other AI tools could disrupt scientific publishing. Nature 622 , 234–236 (2023).

Nosek, B. A. et al. Replicability, robustness, and reproducibility in psychological science. Annu. Rev. Psychol. 73 , 719–748 (2022).

Altmejd, A. et al. Predicting the replicability of social science lab experiments. PLoS ONE 14 , e0225826 (2019).

Yang, Y., Youyou, W. & Uzzi, B. Estimating the deep replicability of scientific findings using human and artificial intelligence. Proc. Natl Acad. Sci. USA 117 , 10762–10768 (2020).

Youyou, W., Yang, Y. & Uzzi, B. A discipline-wide investigation of the replicability of psychology papers over the past two decades. Proc. Natl Acad. Sci. USA 120 , e2208863120 (2023).

Rabb, N., Fernbach, P. M. & Sloman, S. A. Individual representation in a community of knowledge. Trends Cogn. Sci. 23 , 891–902 (2019). This comprehensive review paper documents the empirical evidence for distributed cognition in communities of knowledge and the resultant vulnerabilities to illusions of understanding .

Rozenblit, L. & Keil, F. The misunderstood limits of folk science: an illusion of explanatory depth. Cogn. Sci. 26 , 521–562 (2002). This paper provided an empirical demonstration of the illusion of explanatory depth, and inspired a programme of research in cognitive science on communities of knowledge .

Hutchins, E. Cognition in the Wild (MIT Press, 1995).

Lave, J. & Wenger, E. Situated Learning: Legitimate Peripheral Participation (Cambridge Univ. Press, 1991).

Kitcher, P. The division of cognitive labor. J. Philos. 87 , 5–22 (1990).

Hardwig, J. Epistemic dependence. J. Philos. 82 , 335–349 (1985).

Keil, F. in Oxford Studies In Epistemology (eds Gendler, T. S. & Hawthorne, J.) 143–166 (Oxford Academic, 2005).

Weisberg, M. & Muldoon, R. Epistemic landscapes and the division of cognitive labor. Philos. Sci. 76 , 225–252 (2009).

Sloman, S. A. & Rabb, N. Your understanding is my understanding: evidence for a community of knowledge. Psychol. Sci. 27 , 1451–1460 (2016).

Wilson, R. A. & Keil, F. The shadows and shallows of explanation. Minds Mach. 8 , 137–159 (1998).

Keil, F. C., Stein, C., Webb, L., Billings, V. D. & Rozenblit, L. Discerning the division of cognitive labor: an emerging understanding of how knowledge is clustered in other minds. Cogn. Sci. 32 , 259–300 (2008).

Sperber, D. et al. Epistemic vigilance. Mind Lang. 25 , 359–393 (2010).

Wilkenfeld, D. A., Plunkett, D. & Lombrozo, T. Depth and deference: when and why we attribute understanding. Philos. Stud. 173 , 373–393 (2016).

Sparrow, B., Liu, J. & Wegner, D. M. Google effects on memory: cognitive consequences of having information at our fingertips. Science 333 , 776–778 (2011).

Fisher, M., Goddu, M. K. & Keil, F. C. Searching for explanations: how the internet inflates estimates of internal knowledge. J. Exp. Psychol. Gen. 144 , 674–687 (2015).

De Freitas, J., Agarwal, S., Schmitt, B. & Haslam, N. Psychological factors underlying attitudes toward AI tools. Nat. Hum. Behav. 7 , 1845–1854 (2023).

Castelo, N., Bos, M. W. & Lehmann, D. R. Task-dependent algorithm aversion. J. Mark. Res. 56 , 809–825 (2019).

Cadario, R., Longoni, C. & Morewedge, C. K. Understanding, explaining, and utilizing medical artificial intelligence. Nat. Hum. Behav. 5 , 1636–1642 (2021).

Oktar, K. & Lombrozo, T. Deciding to be authentic: intuition is favored over deliberation when authenticity matters. Cognition 223 , 105021 (2022).

Bigman, Y. E., Yam, K. C., Marciano, D., Reynolds, S. J. & Gray, K. Threat of racial and economic inequality increases preference for algorithm decision-making. Comput. Hum. Behav. 122 , 106859 (2021).

Claudy, M. C., Aquino, K. & Graso, M. Artificial intelligence can’t be charmed: the effects of impartiality on laypeople’s algorithmic preferences. Front. Psychol. 13 , 898027 (2022).

Snyder, C., Keppler, S. & Leider, S. Algorithm reliance under pressure: the effect of customer load on service workers. Preprint at SSRN https://doi.org/10.2139/ssrn.4066823 (2022).

Bogert, E., Schecter, A. & Watson, R. T. Humans rely more on algorithms than social influence as a task becomes more difficult. Sci Rep. 11 , 8028 (2021).

Raviv, A., Bar‐Tal, D., Raviv, A. & Abin, R. Measuring epistemic authority: studies of politicians and professors. Eur. J. Personal. 7 , 119–138 (1993).

Cummings, L. The “trust” heuristic: arguments from authority in public health. Health Commun. 29 , 1043–1056 (2014).

Lee, M. K. Understanding perception of algorithmic decisions: fairness, trust, and emotion in response to algorithmic management. Big Data Soc. 5 , https://doi.org/10.1177/2053951718756684 (2018).

Kissinger, H. A., Schmidt, E. & Huttenlocher, D. The Age of A.I. And Our Human Future (Little, Brown, 2021).

Lombrozo, T. Explanatory preferences shape learning and inference. Trends Cogn. Sci. 20 , 748–759 (2016). This paper provides an overview of philosophical theories of explanatory virtues and reviews empirical evidence on the sorts of explanations people find satisfying .

Vrantsidis, T. H. & Lombrozo, T. Simplicity as a cue to probability: multiple roles for simplicity in evaluating explanations. Cogn. Sci. 46 , e13169 (2022).

Johnson, S. G. B., Johnston, A. M., Toig, A. E. & Keil, F. C. Explanatory scope informs causal strength inferences. In Proc. 36th Annual Meeting of the Cognitive Science Society 2453–2458 (Cognitive Science Society, 2014).

Khemlani, S. S., Sussman, A. B. & Oppenheimer, D. M. Harry Potter and the sorcerer’s scope: latent scope biases in explanatory reasoning. Mem. Cognit. 39 , 527–535 (2011).

Liquin, E. G. & Lombrozo, T. Motivated to learn: an account of explanatory satisfaction. Cogn. Psychol. 132 , 101453 (2022).

Hopkins, E. J., Weisberg, D. S. & Taylor, J. C. V. The seductive allure is a reductive allure: people prefer scientific explanations that contain logically irrelevant reductive information. Cognition 155 , 67–76 (2016).

Weisberg, D. S., Hopkins, E. J. & Taylor, J. C. V. People’s explanatory preferences for scientific phenomena. Cogn. Res. Princ. Implic. 3 , 44 (2018).

Jerez-Fernandez, A., Angulo, A. N. & Oppenheimer, D. M. Show me the numbers: precision as a cue to others’ confidence. Psychol. Sci. 25 , 633–635 (2014).

Kim, J., Giroux, M. & Lee, J. C. When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychol. Mark. 38 , 1140–1155 (2021).

Nguyen, C. T. The seductions of clarity. R. Inst. Philos. Suppl. 89 , 227–255 (2021). This article describes how reductive and quantitative explanations can generate a sense of understanding that is not necessarily correlated with actual understanding .

Fisher, M., Smiley, A. H. & Grillo, T. L. H. Information without knowledge: the effects of internet search on learning. Memory 30 , 375–387 (2022).

Eliseev, E. D. & Marsh, E. J. Understanding why searching the internet inflates confidence in explanatory ability. Appl. Cogn. Psychol. 37 , 711–720 (2023).

Fisher, M. & Oppenheimer, D. M. Who knows what? Knowledge misattribution in the division of cognitive labor. J. Exp. Psychol. Appl. 27 , 292–306 (2021).

Chromik, M., Eiband, M., Buchner, F., Krüger, A. & Butz, A. I think I get your point, AI! The illusion of explanatory depth in explainable AI. In 26th International Conference on Intelligent User Interfaces (eds Hammond, T. et al.) 307–317 (Association for Computing Machinery, 2021).

Strevens, M. No understanding without explanation. Stud. Hist. Philos. Sci. A 44 , 510–515 (2013).

Ylikoski, P. in Scientific Understanding: Philosophical Perspectives (eds De Regt, H. et al.) 100–119 (Univ. Pittsburgh Press, 2009).

Giudice, M. D. The prediction–explanation fallacy: a pervasive problem in scientific applications of machine learning. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/4vq8f (2021).

Hofman, J. M. et al. Integrating explanation and prediction in computational social science. Nature 595 , 181–188 (2021). This paper highlights the advantages and disadvantages of explanatory versus predictive approaches to modelling, with a focus on applications to computational social science .

Shmueli, G. To explain or to predict? Stat. Sci. 25 , 289–310 (2010).

Article   MathSciNet   Google Scholar  

Hofman, J. M., Sharma, A. & Watts, D. J. Prediction and explanation in social systems. Science 355 , 486–488 (2017).

Logg, J. M., Minson, J. A. & Moore, D. A. Algorithm appreciation: people prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 151 , 90–103 (2019).

Nguyen, C. T. Cognitive islands and runaway echo chambers: problems for epistemic dependence on experts. Synthese 197 , 2803–2821 (2020).

Breiman, L. Statistical modeling: the two cultures. Stat. Sci. 16 , 199–215 (2001).

Gao, J. & Wang, D. Quantifying the benefit of artificial intelligence for scientific research. Preprint at arxiv.org/abs/2304.10578 (2023).

Hanson, B. et al. Garbage in, garbage out: mitigating risks and maximizing benefits of AI in research. Nature 623 , 28–31 (2023).

Kleinberg, J. & Raghavan, M. Algorithmic monoculture and social welfare. Proc. Natl Acad. Sci. USA 118 , e2018340118 (2021). This paper uses formal modelling methods to demonstrate that when companies all rely on the same algorithm to make decisions (an algorithmic monoculture), the overall quality of those decisions is reduced because valuable options can slip through the cracks, even when the algorithm performs accurately for individual companies .

Hofstra, B. et al. The diversity–innovation paradox in science. Proc. Natl Acad. Sci. USA 117 , 9284–9291 (2020).

Hong, L. & Page, S. E. Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proc. Natl Acad. Sci. USA 101 , 16385–16389 (2004).

Page, S. E. Where diversity comes from and why it matters? Eur. J. Soc. Psychol. 44 , 267–279 (2014). This article reviews research demonstrating the benefits of cognitive diversity and diversity in methodological approaches for problem solving and innovation .

Clarke, A. E. & Fujimura, J. H. (eds) The Right Tools for the Job: At Work in Twentieth-Century Life Sciences (Princeton Univ. Press, 2014).

Silva, V. J., Bonacelli, M. B. M. & Pacheco, C. A. Framing the effects of machine learning on science. AI Soc. https://doi.org/10.1007/s00146-022-01515-x (2022).

Sassenberg, K. & Ditrich, L. Research in social psychology changed between 2011 and 2016: larger sample sizes, more self-report measures, and more online studies. Adv. Methods Pract. Psychol. Sci. 2 , 107–114 (2019).

Simon, A. F. & Wilder, D. Methods and measures in social and personality psychology: a comparison of JPSP publications in 1982 and 2016. J. Soc. Psychol. https://doi.org/10.1080/00224545.2022.2135088 (2022).

Anderson, C. A. et al. The MTurkification of social and personality psychology. Pers. Soc. Psychol. Bull. 45 , 842–850 (2019).

Latour, B. in The Social After Gabriel Tarde: Debates and Assessments (ed. Candea, M.) 145–162 (Routledge, 2010).

Porter, T. M. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton Univ. Press, 1996).

Lazer, D. et al. Meaningful measures of human society in the twenty-first century. Nature 595 , 189–196 (2021).

Knox, D., Lucas, C. & Cho, W. K. T. Testing causal theories with learned proxies. Annu. Rev. Polit. Sci. 25 , 419–441 (2022).

Barberá, P. Birds of the same feather tweet together: Bayesian ideal point estimation using Twitter data. Polit. Anal. 23 , 76–91 (2015).

Brady, W. J., McLoughlin, K., Doan, T. N. & Crockett, M. J. How social learning amplifies moral outrage expression in online social networks. Sci. Adv. 7 , eabe5641 (2021).

Barnes, J., Klinger, R. & im Walde, S. S. Assessing state-of-the-art sentiment models on state-of-the-art sentiment datasets. In Proc. 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (eds Balahur, A. et al.) 2–12 (Association for Computational Linguistics, 2017).

Gitelman, L. (ed.) “Raw Data” is an Oxymoron (MIT Press, 2013).

Breznau, N. et al. Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty. Proc. Natl Acad. Sci. USA 119 , e2203150119 (2022). This study demonstrates how 73 research teams analysing the same dataset reached different conclusions about the relationship between immigration and public support for social policies, highlighting the subjectivity and uncertainty involved in analysing complex datasets .

Gillespie, T. in Media Technologies: Essays on Communication, Materiality, and Society (eds Gillespie, T. et al.) 167–194 (MIT Press, 2014).

Leonelli, S. Data-Centric Biology: A Philosophical Study (Univ. Chicago Press, 2016).

Wang, A., Kapoor, S., Barocas, S. & Narayanan, A. Against predictive optimization: on the legitimacy of decision-making algorithms that optimize predictive accuracy. ACM J. Responsib. Comput. , https://doi.org/10.1145/3636509 (2023).

Athey, S. Beyond prediction: using big data for policy problems. Science 355 , 483–485 (2017).

del Rosario Martínez-Ordaz, R. Scientific understanding through big data: from ignorance to insights to understanding. Possibility Stud. Soc. 1 , 279–299 (2023).

Nussberger, A.-M., Luo, L., Celis, L. E. & Crockett, M. J. Public attitudes value interpretability but prioritize accuracy in artificial intelligence. Nat. Commun. 13 , 5821 (2022).

Zittrain, J. in The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives (eds. Voeneky, S. et al.) 176–184 (Cambridge Univ. Press, 2022). This article articulates the epistemic risks of prioritizing predictive accuracy over explanatory understanding when AI tools are interacting in complex systems.

Shumailov, I. et al. The curse of recursion: training on generated data makes models forget. Preprint at arxiv.org/abs/2305.17493 (2023).

Latour, B. Science In Action: How to Follow Scientists and Engineers Through Society (Harvard Univ. Press, 1987). This book provides strategies and approaches for thinking about science as a social endeavour .

Franklin, S. Science as culture, cultures of science. Annu. Rev. Anthropol. 24 , 163–184 (1995).

Haraway, D. Situated knowledges: the science question in feminism and the privilege of partial perspective. Fem. Stud. 14 , 575–599 (1988). This article acknowledges that the objective ‘view from nowhere’ is unobtainable: knowledge, it argues, is always situated .

Harding, S. Objectivity and Diversity: Another Logic of Scientific Research (Univ. Chicago Press, 2015).

Longino, H. E. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry (Princeton Univ. Press, 1990).

Daston, L. & Galison, P. Objectivity (Princeton Univ. Press, 2007). This book is a historical analysis of the shifting modes of ‘objectivity’ that scientists have pursued, arguing that objectivity is not a universal concept but that it shifts alongside scientific techniques and ambitions .

Prescod-Weinstein, C. Making Black women scientists under white empiricism: the racialization of epistemology in physics. Signs J. Women Cult. Soc. 45 , 421–447 (2020).

Mavhunga, C. What Do Science, Technology, and Innovation Mean From Africa? (MIT Press, 2017).

Schiebinger, L. The Mind Has No Sex? Women in the Origins of Modern Science (Harvard Univ. Press, 1991).

Martin, E. The egg and the sperm: how science has constructed a romance based on stereotypical male–female roles. Signs J. Women Cult. Soc. 16 , 485–501 (1991). This case study shows how assumptions about gender affect scientific theories, sometimes delaying the articulation of what might be considered to be more accurate descriptions of scientific phenomena .

Harding, S. Rethinking standpoint epistemology: What is “strong objectivity”? Centen. Rev. 36 , 437–470 (1992). In this article, Harding outlines her position on ‘strong objectivity’, by which clearly articulating one’s standpoint can lead to more robust knowledge claims .

Oreskes, N. Why Trust Science? (Princeton Univ. Press, 2019). This book introduces the reader to 20 years of scholarship in science and technology studies, arguing that the tools the discipline has for understanding science can help to reinstate public trust in the institution .

Rolin, K., Koskinen, I., Kuorikoski, J. & Reijula, S. Social and cognitive diversity in science: introduction. Synthese 202 , 36 (2023).

Hong, L. & Page, S. E. Problem solving by heterogeneous agents. J. Econ. Theory 97 , 123–163 (2001).

Sulik, J., Bahrami, B. & Deroy, O. The diversity gap: when diversity matters for knowledge. Perspect. Psychol. Sci. 17 , 752–767 (2022).

Lungeanu, A., Whalen, R., Wu, Y. J., DeChurch, L. A. & Contractor, N. S. Diversity, networks, and innovation: a text analytic approach to measuring expertise diversity. Netw. Sci. 11 , 36–64 (2023).

AlShebli, B. K., Rahwan, T. & Woon, W. L. The preeminence of ethnic diversity in scientific collaboration. Nat. Commun. 9 , 5163 (2018).

Campbell, L. G., Mehtani, S., Dozier, M. E. & Rinehart, J. Gender-heterogeneous working groups produce higher quality science. PLoS ONE 8 , e79147 (2013).

Nielsen, M. W., Bloch, C. W. & Schiebinger, L. Making gender diversity work for scientific discovery and innovation. Nat. Hum. Behav. 2 , 726–734 (2018).

Yang, Y., Tian, T. Y., Woodruff, T. K., Jones, B. F. & Uzzi, B. Gender-diverse teams produce more novel and higher-impact scientific ideas. Proc. Natl Acad. Sci. USA 119 , e2200841119 (2022).

Kozlowski, D., Larivière, V., Sugimoto, C. R. & Monroe-White, T. Intersectional inequalities in science. Proc. Natl Acad. Sci. USA 119 , e2113067119 (2022).

Fehr, C. & Jones, J. M. Culture, exploitation, and epistemic approaches to diversity. Synthese 200 , 465 (2022).

Nakadai, R., Nakawake, Y. & Shibasaki, S. AI language tools risk scientific diversity and innovation. Nat. Hum. Behav. 7 , 1804–1805 (2023).

National Academies of Sciences, Engineering, and Medicine et al. Advancing Antiracism, Diversity, Equity, and Inclusion in STEMM Organizations: Beyond Broadening Participation (National Academies Press, 2023).

Winner, L. Do artifacts have politics? Daedalus 109 , 121–136 (1980).

Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, 2018).

Littmann, M. et al. Validity of machine learning in biology and medicine increased through collaborations across fields of expertise. Nat. Mach. Intell. 2 , 18–24 (2020).

Carusi, A. et al. Medical artificial intelligence is as much social as it is technological. Nat. Mach. Intell. 5 , 98–100 (2023).

Raghu, M. & Schmidt, E. A survey of deep learning for scientific discovery. Preprint at arxiv.org/abs/2003.11755 (2020).

Bishop, C. AI4Science to empower the fifth paradigm of scientific discovery. Microsoft Research Blog www.microsoft.com/en-us/research/blog/ai4science-to-empower-the-fifth-paradigm-of-scientific-discovery/ (2022).

Whittaker, M. The steep cost of capture. Interactions 28 , 50–55 (2021).

Liesenfeld, A., Lopez, A. & Dingemanse, M. Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators. In Proc. 5th International Conference on Conversational User Interfaces 1–6 (Association for Computing Machinery, 2023).

Chu, J. S. G. & Evans, J. A. Slowed canonical progress in large fields of science. Proc. Natl Acad. Sci. USA 118 , e2021636118 (2021).

Park, M., Leahey, E. & Funk, R. J. Papers and patents are becoming less disruptive over time. Nature 613 , 138–144 (2023).

Frith, U. Fast lane to slow science. Trends Cogn. Sci. 24 , 1–2 (2020). This article explains the epistemic risks of a hyperfocus on scientific productivity and explores possible avenues for incentivizing the production of higher-quality science on a slower timescale .

Stengers, I. Another Science is Possible: A Manifesto for Slow Science (Wiley, 2018).

Lake, B. M. & Baroni, M. Human-like systematic generalization through a meta-learning neural network. Nature 623 , 115–121 (2023).

Feinman, R. & Lake, B. M. Learning task-general representations with generative neuro-symbolic modeling. Preprint at arxiv.org/abs/2006.14448 (2021).

Schölkopf, B. et al. Toward causal representation learning. Proc. IEEE 109 , 612–634 (2021).

Mitchell, M. AI’s challenge of understanding the world. Science 382 , eadm8175 (2023).

Sartori, L. & Bocca, G. Minding the gap(s): public perceptions of AI and socio-technical imaginaries. AI Soc. 38 , 443–458 (2023).

Download references

Acknowledgements

We thank D. S. Bassett, W. J. Brady, S. Helmreich, S. Kapoor, T. Lombrozo, A. Narayanan, M. Salganik and A. J. te Velthuis for comments. We also thank C. Buckner and P. Winter for their feedback and suggestions.

Author information

These authors contributed equally: Lisa Messeri, M. J. Crockett

Authors and Affiliations

Department of Anthropology, Yale University, New Haven, CT, USA

Lisa Messeri

Department of Psychology, Princeton University, Princeton, NJ, USA

M. J. Crockett

University Center for Human Values, Princeton University, Princeton, NJ, USA

You can also search for this author in PubMed   Google Scholar

Contributions

The authors contributed equally to the research and writing of the paper.

Corresponding authors

Correspondence to Lisa Messeri or M. J. Crockett .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature thanks Cameron Buckner, Peter Winter and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Cite this article.

Messeri, L., Crockett, M.J. Artificial intelligence and illusions of understanding in scientific research. Nature 627 , 49–58 (2024). https://doi.org/10.1038/s41586-024-07146-0

Download citation

Received : 31 July 2023

Accepted : 31 January 2024

Published : 06 March 2024

Issue Date : 07 March 2024

DOI : https://doi.org/10.1038/s41586-024-07146-0

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

By submitting a comment you agree to abide by our Terms and Community Guidelines . If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.

research papers on ai

A generative AI reset: Rewiring to turn potential into value in 2024

It’s time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realize that capturing gen AI’s enormous potential value is harder than expected .

With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI transformations: competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale—in effect, rewiring the business  for distributed digital and AI innovation.

About QuantumBlack, AI by McKinsey

QuantumBlack, AI by McKinsey, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.

Companies looking to score early wins with gen AI should move quickly. But those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard because they require a broad set of changes to the way work actually gets done.

Let’s briefly look at what this has meant for one Pacific region telecommunications company. The company hired a chief data and AI officer with a mandate to “enable the organization to create value with data and AI.” The chief data and AI officer worked with the business to develop the strategic vision and implement the road map for the use cases. After a scan of domains (that is, customer journeys or functions) and use case opportunities across the enterprise, leadership prioritized the home-servicing/maintenance domain to pilot and then scale as part of a larger sequencing of initiatives. They targeted, in particular, the development of a gen AI tool to help dispatchers and service operators better predict the types of calls and parts needed when servicing homes.

Leadership put in place cross-functional product teams with shared objectives and incentives to build the gen AI tool. As part of an effort to upskill the entire enterprise to better work with data and gen AI tools, they also set up a data and AI academy, which the dispatchers and service operators enrolled in as part of their training. To provide the technology and data underpinnings for gen AI, the chief data and AI officer also selected a large language model (LLM) and cloud provider that could meet the needs of the domain as well as serve other parts of the enterprise. The chief data and AI officer also oversaw the implementation of a data architecture so that the clean and reliable data (including service histories and inventory databases) needed to build the gen AI tool could be delivered quickly and responsibly.

Our book Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI (Wiley, June 2023) provides a detailed manual on the six capabilities needed to deliver the kind of broad change that harnesses digital and AI technology. In this article, we will explore how to extend each of those capabilities to implement a successful gen AI program at scale. While recognizing that these are still early days and that there is much more to learn, our experience has shown that breaking open the gen AI opportunity requires companies to rewire how they work in the following ways.

Figure out where gen AI copilots can give you a real competitive advantage

The broad excitement around gen AI and its relative ease of use has led to a burst of experimentation across organizations. Most of these initiatives, however, won’t generate a competitive advantage. One bank, for example, bought tens of thousands of GitHub Copilot licenses, but since it didn’t have a clear sense of how to work with the technology, progress was slow. Another unfocused effort we often see is when companies move to incorporate gen AI into their customer service capabilities. Customer service is a commodity capability, not part of the core business, for most companies. While gen AI might help with productivity in such cases, it won’t create a competitive advantage.

To create competitive advantage, companies should first understand the difference between being a “taker” (a user of available tools, often via APIs and subscription services), a “shaper” (an integrator of available models with proprietary data), and a “maker” (a builder of LLMs). For now, the maker approach is too expensive for most companies, so the sweet spot for businesses is implementing a taker model for productivity improvements while building shaper applications for competitive advantage.

Much of gen AI’s near-term value is closely tied to its ability to help people do their current jobs better. In this way, gen AI tools act as copilots that work side by side with an employee, creating an initial block of code that a developer can adapt, for example, or drafting a requisition order for a new part that a maintenance worker in the field can review and submit (see sidebar “Copilot examples across three generative AI archetypes”). This means companies should be focusing on where copilot technology can have the biggest impact on their priority programs.

Copilot examples across three generative AI archetypes

  • “Taker” copilots help real estate customers sift through property options and find the most promising one, write code for a developer, and summarize investor transcripts.
  • “Shaper” copilots provide recommendations to sales reps for upselling customers by connecting generative AI tools to customer relationship management systems, financial systems, and customer behavior histories; create virtual assistants to personalize treatments for patients; and recommend solutions for maintenance workers based on historical data.
  • “Maker” copilots are foundation models that lab scientists at pharmaceutical companies can use to find and test new and better drugs more quickly.

Some industrial companies, for example, have identified maintenance as a critical domain for their business. Reviewing maintenance reports and spending time with workers on the front lines can help determine where a gen AI copilot could make a big difference, such as in identifying issues with equipment failures quickly and early on. A gen AI copilot can also help identify root causes of truck breakdowns and recommend resolutions much more quickly than usual, as well as act as an ongoing source for best practices or standard operating procedures.

The challenge with copilots is figuring out how to generate revenue from increased productivity. In the case of customer service centers, for example, companies can stop recruiting new agents and use attrition to potentially achieve real financial gains. Defining the plans for how to generate revenue from the increased productivity up front, therefore, is crucial to capturing the value.

Upskill the talent you have but be clear about the gen-AI-specific skills you need

By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop. Those with existing AI and machine learning (ML) capabilities have a strong head start. Data engineers, for example, can learn multimodal processing and vector database management, MLOps (ML operations) engineers can extend their skills to LLMOps (LLM operations), and data scientists can develop prompt engineering, bias detection, and fine-tuning skills.

A sample of new generative AI skills needed

The following are examples of new skills needed for the successful deployment of generative AI tools:

  • data scientist:
  • prompt engineering
  • in-context learning
  • bias detection
  • pattern identification
  • reinforcement learning from human feedback
  • hyperparameter/large language model fine-tuning; transfer learning
  • data engineer:
  • data wrangling and data warehousing
  • data pipeline construction
  • multimodal processing
  • vector database management

The learning process can take two to three months to get to a decent level of competence because of the complexities in learning what various LLMs can and can’t do and how best to use them. The coders need to gain experience building software, testing, and validating answers, for example. It took one financial-services company three months to train its best data scientists to a high level of competence. While courses and documentation are available—many LLM providers have boot camps for developers—we have found that the most effective way to build capabilities at scale is through apprenticeship, training people to then train others, and building communities of practitioners. Rotating experts through teams to train others, scheduling regular sessions for people to share learnings, and hosting biweekly documentation review sessions are practices that have proven successful in building communities of practitioners (see sidebar “A sample of new generative AI skills needed”).

It’s important to bear in mind that successful gen AI skills are about more than coding proficiency. Our experience in developing our own gen AI platform, Lilli , showed us that the best gen AI technical talent has design skills to uncover where to focus solutions, contextual understanding to ensure the most relevant and high-quality answers are generated, collaboration skills to work well with knowledge experts (to test and validate answers and develop an appropriate curation approach), strong forensic skills to figure out causes of breakdowns (is the issue the data, the interpretation of the user’s intent, the quality of metadata on embeddings, or something else?), and anticipation skills to conceive of and plan for possible outcomes and to put the right kind of tracking into their code. A pure coder who doesn’t intrinsically have these skills may not be as useful a team member.

While current upskilling is largely based on a “learn on the job” approach, we see a rapid market emerging for people who have learned these skills over the past year. That skill growth is moving quickly. GitHub reported that developers were working on gen AI projects “in big numbers,” and that 65,000 public gen AI projects were created on its platform in 2023—a jump of almost 250 percent over the previous year. If your company is just starting its gen AI journey, you could consider hiring two or three senior engineers who have built a gen AI shaper product for their companies. This could greatly accelerate your efforts.

Form a centralized team to establish standards that enable responsible scaling

To ensure that all parts of the business can scale gen AI capabilities, centralizing competencies is a natural first move. The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimizing risk and containing costs. The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources.

While developing Lilli, our team had its mind on scale when it created an open plug-in architecture and setting standards for how APIs should function and be built.  They developed standardized tooling and infrastructure where teams could securely experiment and access a GPT LLM , a gateway with preapproved APIs that teams could access, and a self-serve developer portal. Our goal is that this approach, over time, can help shift “Lilli as a product” (that a handful of teams use to build specific solutions) to “Lilli as a platform” (that teams across the enterprise can access to build other products).

For teams developing gen AI solutions, squad composition will be similar to AI teams but with data engineers and data scientists with gen AI experience and more contributors from risk management, compliance, and legal functions. The general idea of staffing squads with resources that are federated from the different expertise areas will not change, but the skill composition of a gen-AI-intensive squad will.

Set up the technology architecture to scale

Building a gen AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely. We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months. That’s why, our experience shows, the actual model costs may be less than 10 to 15 percent of the total costs of the solution.

Building for scale doesn’t mean building a new technology architecture. But it does mean focusing on a few core decisions that simplify and speed up processes without breaking the bank. Three such decisions stand out:

  • Focus on reusing your technology. Reusing code can increase the development speed of gen AI use cases by 30 to 50 percent. One good approach is simply creating a source for approved tools, code, and components. A financial-services company, for example, created a library of production-grade tools, which had been approved by both the security and legal teams, and made them available in a library for teams to use. More important is taking the time to identify and build those capabilities that are common across the most priority use cases. The same financial-services company, for example, identified three components that could be reused for more than 100 identified use cases. By building those first, they were able to generate a significant portion of the code base for all the identified use cases—essentially giving every application a big head start.
  • Focus the architecture on enabling efficient connections between gen AI models and internal systems. For gen AI models to work effectively in the shaper archetype, they need access to a business’s data and applications. Advances in integration and orchestration frameworks have significantly reduced the effort required to make those connections. But laying out what those integrations are and how to enable them is critical to ensure these models work efficiently and to avoid the complexity that creates technical debt  (the “tax” a company pays in terms of time and resources needed to redress existing technology issues). Chief information officers and chief technology officers can define reference architectures and integration standards for their organizations. Key elements should include a model hub, which contains trained and approved models that can be provisioned on demand; standard APIs that act as bridges connecting gen AI models to applications or data; and context management and caching, which speed up processing by providing models with relevant information from enterprise data sources.
  • Build up your testing and quality assurance capabilities. Our own experience building Lilli taught us to prioritize testing over development. Our team invested in not only developing testing protocols for each stage of development but also aligning the entire team so that, for example, it was clear who specifically needed to sign off on each stage of the process. This slowed down initial development but sped up the overall delivery pace and quality by cutting back on errors and the time needed to fix mistakes.

Ensure data quality and focus on unstructured data to fuel your models

The ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its own data. As with technology, targeted upgrades to existing data architecture  are needed to maximize the future strategic benefits of gen AI:

  • Be targeted in ramping up your data quality and data augmentation efforts. While data quality has always been an important issue, the scale and scope of data that gen AI models can use—especially unstructured data—has made this issue much more consequential. For this reason, it’s critical to get the data foundations right, from clarifying decision rights to defining clear data processes to establishing taxonomies so models can access the data they need. The companies that do this well tie their data quality and augmentation efforts to the specific AI/gen AI application and use case—you don’t need this data foundation to extend to every corner of the enterprise. This could mean, for example, developing a new data repository for all equipment specifications and reported issues to better support maintenance copilot applications.
  • Understand what value is locked into your unstructured data. Most organizations have traditionally focused their data efforts on structured data (values that can be organized in tables, such as prices and features). But the real value from LLMs comes from their ability to work with unstructured data (for example, PowerPoint slides, videos, and text). Companies can map out which unstructured data sources are most valuable and establish metadata tagging standards so models can process the data and teams can find what they need (tagging is particularly important to help companies remove data from models as well, if necessary). Be creative in thinking about data opportunities. Some companies, for example, are interviewing senior employees as they retire and feeding that captured institutional knowledge into an LLM to help improve their copilot performance.
  • Optimize to lower costs at scale. There is often as much as a tenfold difference between what companies pay for data and what they could be paying if they optimized their data infrastructure and underlying costs. This issue often stems from companies scaling their proofs of concept without optimizing their data approach. Two costs generally stand out. One is storage costs arising from companies uploading terabytes of data into the cloud and wanting that data available 24/7. In practice, companies rarely need more than 10 percent of their data to have that level of availability, and accessing the rest over a 24- or 48-hour period is a much cheaper option. The other costs relate to computation with models that require on-call access to thousands of processors to run. This is especially the case when companies are building their own models (the maker archetype) but also when they are using pretrained models and running them with their own data and use cases (the shaper archetype). Companies could take a close look at how they can optimize computation costs on cloud platforms—for instance, putting some models in a queue to run when processors aren’t being used (such as when Americans go to bed and consumption of computing services like Netflix decreases) is a much cheaper option.

Build trust and reusability to drive adoption and scale

Because many people have concerns about gen AI, the bar on explaining how these tools work is much higher than for most solutions. People who use the tools want to know how they work, not just what they do. So it’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers.

One insurance company, for example, created a gen AI tool to help manage claims. As part of the tool, it listed all the guardrails that had been put in place, and for each answer provided a link to the sentence or page of the relevant policy documents. The company also used an LLM to generate many variations of the same question to ensure answer consistency. These steps, among others, were critical to helping end users build trust in the tool.

Part of the training for maintenance teams using a gen AI tool should be to help them understand the limitations of models and how best to get the right answers. That includes teaching workers strategies to get to the best answer as fast as possible by starting with broad questions then narrowing them down. This provides the model with more context, and it also helps remove any bias of the people who might think they know the answer already. Having model interfaces that look and feel the same as existing tools also helps users feel less pressured to learn something new each time a new application is introduced.

Getting to scale means that businesses will need to stop building one-off solutions that are hard to use for other similar use cases. One global energy and materials company, for example, has established ease of reuse as a key requirement for all gen AI models, and has found in early iterations that 50 to 60 percent of its components can be reused. This means setting standards for developing gen AI assets (for example, prompts and context) that can be easily reused for other cases.

While many of the risk issues relating to gen AI are evolutions of discussions that were already brewing—for instance, data privacy, security, bias risk, job displacement, and intellectual property protection—gen AI has greatly expanded that risk landscape. Just 21 percent of companies reporting AI adoption say they have established policies governing employees’ use of gen AI technologies.

Similarly, a set of tests for AI/gen AI solutions should be established to demonstrate that data privacy, debiasing, and intellectual property protection are respected. Some organizations, in fact, are proposing to release models accompanied with documentation that details their performance characteristics. Documenting your decisions and rationales can be particularly helpful in conversations with regulators.

In some ways, this article is premature—so much is changing that we’ll likely have a profoundly different understanding of gen AI and its capabilities in a year’s time. But the core truths of finding value and driving change will still apply. How well companies have learned those lessons may largely determine how successful they’ll be in capturing that value.

Eric Lamarre

The authors wish to thank Michael Chui, Juan Couto, Ben Ellencweig, Josh Gartner, Bryce Hall, Holger Harreis, Phil Hudelson, Suzana Iacob, Sid Kamath, Neerav Kingsland, Kitti Lakner, Robert Levin, Matej Macak, Lapo Mori, Alex Peluffo, Aldo Rosales, Erik Roth, Abdul Wahab Shaikh, and Stephen Xu for their contributions to this article.

This article was edited by Barr Seitz, an editorial director in the New York office.

Explore a career with us

Related articles.

Light dots and lines evolve into a pattern of a human face and continue to stream off the the side in a moving grid pattern.

The economic potential of generative AI: The next productivity frontier

A yellow wire shaped into a butterfly

Rewired to outcompete

A digital construction of a human face consisting of blocks

Meet Lilli, our generative AI tool that’s a researcher, a time saver, and an inspiration

logo

AI Research Tools

research papers on ai

scite is an AI-powered research tool that helps researchers discover and evaluate scientific articles. It analyzes millions of citations and shows how each article has

research papers on ai

HyperWrite is an AI-powered writing assistant that helps you create high-quality content quickly and easily. It can also provide personalized suggestions as you write to

research papers on ai

ChatPDF allows you to talk to your PDF documents as if they were human. It’s perfect for quickly extracting information or answering questions from large

research papers on ai

Qonqur is an innovative software that allows you to control your computer and digital content using hand gestures, without the need for expensive virtual reality

research papers on ai

Avidnote is an AI-powered research tool that helps you organize, write, and analyze your academic work more efficiently. With this tool, you can easily upload

research papers on ai

Consensus is an AI-powered search engine that helps you find evidence-based answers to your research questions. It intelligently searches through over 200 million scientific papers

research papers on ai

SciSpace is an AI research assistant that simplifies researching papers through AI-generated explanations and a network showing connections between relevant papers. It aims to automate

research papers on ai

Samwell AI is an AI writing assistant that’s specifically designed to help students and academics effortlessly write essays, research papers, and other academic content. Its

research papers on ai

Julius is an AI data analysis tool that helps you visualize, analyze, and get insights from all kinds of data. With Julius, you can simply

research papers on ai

Genei is a research tool that automates the process of summarizing background reading and can also generate blogs, articles, and reports. It allows you to

research papers on ai

Grok AI is a large language model chatbot developed by xAI that is currently in early access. It’s designed to be a resourceful AI assistant

research papers on ai

Instabooks AI

Instabooks AI instantly generates customized textbooks on any topic you want to explore in depth. Simply type a detailed description of the information you want

Discover the latest AI research tools to accelerate your studies and academic research. Search through millions of research papers, summarize articles, view citations, and more.

  • Privacy Policy
  • Terms & Conditions

Copyright © 2024 EasyWithAI.com

Top AI Tools

  • Best Free AI Image Generators
  • Best AI Video Editors
  • Best AI Meeting Assistants
  • Best AI Tools for Students
  • Top 5 Free AI Text Generators
  • Top 5 AI Image Upscalers

Readers like you help support Easy With AI. When you make a purchase using links on our site, we may earn an affiliate commission at no extra cost to you.

Subscribe to our weekly newsletter for the latest AI tools !

We don’t spam! Read our privacy policy for more info.

This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.

Please check your inbox or spam folder to confirm your subscription. Thank you!

Help | Advanced Search

Computer Science > Computer Vision and Pattern Recognition

Title: emo: emote portrait alive -- generating expressive portrait videos with audio2video diffusion model under weak conditions.

Abstract: In this work, we tackle the challenge of enhancing the realism and expressiveness in talking head video generation by focusing on the dynamic and nuanced relationship between audio cues and facial movements. We identify the limitations of traditional techniques that often fail to capture the full spectrum of human expressions and the uniqueness of individual facial styles. To address these issues, we propose EMO, a novel framework that utilizes a direct audio-to-video synthesis approach, bypassing the need for intermediate 3D models or facial landmarks. Our method ensures seamless frame transitions and consistent identity preservation throughout the video, resulting in highly expressive and lifelike animations. Experimental results demonsrate that EMO is able to produce not only convincing speaking videos but also singing videos in various styles, significantly outperforming existing state-of-the-art methodologies in terms of expressiveness and realism.

Submission history

Access paper:.

  • Download PDF
  • HTML (experimental)
  • Other Formats

license icon

References & Citations

  • Google Scholar
  • Semantic Scholar

BibTeX formatted citation

BibSonomy logo

Bibliographic and Citation Tools

Code, data and media associated with this article, recommenders and search tools.

  • Institution

arXivLabs: experimental projects with community collaborators

arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website.

Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.

Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs .

Experts call for legal ‘safe harbor’ so researchers, journalists and artists can evaluate AI tools

  • Share on Facebook
  • Share on LinkedIn

Join leaders in Boston on March 27 for an exclusive night of networking, insights, and conversation. Request an invite here.

According to a new paper published by 23 AI researchers, academics and creatives, ‘safe harbor’ legal and technical protections are essential to allow researchers, journalists and artists to do “good-faith” evaluations of AI products and services.

Despite the need for independent evaluation, the paper says, conducting research related to these vulnerabilities is often legally prohibited by the terms of service for popular AI models, including those of OpenAI , Google, Anthropic, Inflection, Meta, and Midjourney. The paper’s authors called on tech companies to indemnify public interest AI research and protect it from account suspensions or legal reprisal.

“While these terms are intended as a deterrent against malicious actors, they also inadvertently restrict AI safety and trustworthiness research; companies forbid the research and may enforce their policies with account suspensions,” said a blog post accompanying the paper.

Two of the paper’s co-authors, Shayne Longpre of MIT Media Lab and Sayash Kapoor of Princeton University, explained to VentureBeat that this is particularly important when, for example, in a recent effort to dismiss parts of the New York Times’ lawsuit, OpenAI characterized the Times’ evaluation of ChatGPT as “hacking.” The Times’ lead counsel responded by saying, “What OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced the Times’s copyrighted works.”

The AI Impact Tour – Boston

Longpre said that the idea of a ‘safe harbor’ was first proposed by the Knight First Amendment Institute for social media platform research in 2022 . “They asked social media platforms not to ban journalists from trying to investigate the harms of social media, and then similarly for researcher protections as well,” he said, pointing out that there had been a history of academics and journalists being like sued, or even spending time in jail, as they fought to expose weaknesses in platforms.

“We tried to learn as much as we could from this past effort to propose a safe harbor for AI research,” he said. “With AI, we essentially have no information about how people are using these systems, what sorts of harms are happening, and one of the only tools we have is research access to these platforms.”

Independent evaluation and red teaming are ‘critical’

The paper, A Safe Harbor for AI Evaluation and Red Teaming , says that to the authors’ knowledge, “account suspensions in the course of public interest research” have taken place at companies including OpenAI, Anthropic, Inflection, and Midjourney, with “Midjourney being the most prolific.” They cited artist Reid Southen , who is listed as one of the paper’s co-authors and whose Midjourney account was suspended after sharing Midjourney images that seemed nearly identical to original copyrighted versions. His investigation found that Midjourney could infringe on owner copyright without the user explicitly intending to with simple prompts.

“Midjourney has banned me three times now at a personal expense approaching $300,” Southen told VentureBeat by email. “The first ban happened within 8 hours of my investigation and posting of results, and shortly thereafter they updated their ToS without informing their users to pass the blame for any infringing imagery onto the end user.”

The type of model behavior he found, he continued, “is exactly why independent evaluation and red teaming should be permitted, because [the companies have] shown they won’t do it themselves, to the detriment of rights owners everywhere.”

Transparency is key

Ultimately, said Longpre, the issues around safe harbor protections have to do with transparency.

“Do independent researchers have the right where, if they can prove that they’re not doing any misuse or harm, to investigate the capabilities and or flaws of a product?” he said. But he added that, in general, “we want to send a message that we want to work with companies, because we believe that there’s also a path where they can be more transparent and use the community to their advantage to help seek out these flaws and improve them.”

Kapoor added that companies may have good reasons to ban some types of use of their services. However, it shouldn’t be a “one-size-fits-all” policy, “with the terms of the service the same whether you are a malicious user versus a researcher conducting safety-critical research,” he said.

Kapoor also said that the paper’s authors have been in conversation with some of the companies whose terms of use are at issue. “Most of them have just been looking at the proposal, but our approach was very much to start this dialogue with companies,” he said. “So far most of the people we’ve reached out to have been willing to sort of start that conversation with us, even though as of yet I don’t think we have any firm commitments from any companies on introducing the safe harbor,” although he pointed out that After OpenAI read the first draft of the paper, they modified the language in their terms of service to accommodate certain types of safe harbor.

“So to some extent, that gave us a signal that companies might actually be willing to go some of the way with us,” he said.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

Analyze research papers at superhuman speed

Search for research papers, get one sentence abstract summaries, select relevant papers and search for more like them, extract details from papers into an organized table.

research papers on ai

Find themes and concepts across many papers

Don't just take our word for it.

research papers on ai

Tons of features to speed up your research

Upload your own pdfs, orient with a quick summary, view sources for every answer, ask questions to papers, research for the machine intelligence age, pick a plan that's right for you, get in touch, enterprise and institutions, custom pricing, common questions. great answers., how do researchers use elicit.

Over 800,000 researchers have tried Elicit already. Researchers commonly use Elicit to:

  • Speed up literature review
  • Find papers they couldn’t find elsewhere
  • Automate systematic reviews and meta-analyses
  • Learn about a new domain

Elicit tends to work best for empirical domains that involve experiments and concrete results. This type of research is common in biomedicine and machine learning.

What is Elicit not a good fit for?

Elicit does not currently answer questions or surface information that is not written about in an academic paper. It tends to work less well for identifying facts (e.g. “How many cars were sold in Malaysia last year?”) and theoretical or non-empirical domains.

What types of data can Elicit search over?

Elicit searches across 200 million academic papers from the Semantic Scholar corpus, which covers all academic disciplines. When you extract data from papers in Elicit, Elicit will use the full text if available or the abstract if not.

How accurate are the answers in Elicit?

A good rule of thumb is to assume that around 90% of the information you see in Elicit is accurate. While we do our best to increase accuracy without skyrocketing costs, it’s very important for you to check the work in Elicit closely. We try to make this easier for you by identifying all of the sources for information generated with language models.

What is Elicit Plus?

Elicit Plus is Elicit's subscription offering, which comes with a set of features, as well as monthly credits. On Elicit Plus, you may use up to 12,000 credits a month. Unused monthly credits do not carry forward into the next month. Plus subscriptions auto-renew every month.

What are credits?

Elicit uses a credit system to pay for the costs of running our app. When you run workflows and add columns to tables it will cost you credits. When you sign up you get 5,000 credits to use. Once those run out, you'll need to subscribe to Elicit Plus to get more. Credits are non-transferable.

How can you get in contact with the team?

Please email us at [email protected] or post in our Slack community if you have feedback or general comments! We log and incorporate all user comments. If you have a problem, please email [email protected] and we will try to help you as soon as possible.

What happens to papers uploaded to Elicit?

When you upload papers to analyze in Elicit, those papers will remain private to you and will not be shared with anyone else.

How accurate is Elicit?

Training our models on specific tasks, searching over academic papers, making it easy to double-check answers, save time, think more. try elicit for free..

Supercharge Your Next Research Paper

Jenni's AI-powered text editor helps you write, edit, and cite with confidence. Save hours on your next paper.

research papers on ai

Loved by over 3 million academics

Auto in-text citations

10x Writing speed

Reference library

Trusted by Universities and businesses across the world

meta logo

Write, cite, and edit

Features built to enhance your research and writing capabilities

Jenni's Auto Complete Feature

AI Autocomplete

Autocomplete will write alongside you to beat writer's block whenever you need a helping hand

Jenni's Citations Feature

In-text Citations

Jenni consults the latest research and your PDF uploads. Cite in APA, MLA, IEEE, Chicago, or Harvard style

Image Showing Jenni's Paraphrase Feature

Paraphrase & Rewrite

Paraphrase any text in any tone. Rewrite the internet customized to you

Jenni's PDF Upload Feature

Generate From Your Files

Bring your research papers to life with source-based generation

Chat to your PDFs

Quickly understand and summarize your research papers with our AI chat assistant

Bulk Import Sources via .bib

Already saved papers ready to cite? Import a .bib to populate your library in seconds

LaTeX and Word Export

Export your draft to LaTeX, .docx, or HTML without any formatting loss

Outline Builder

Enter your prompt and get a list of section headings ready for you to flesh out

Multilingual Support

Jenni can generate in US or British English, Spanish, German, French, or Chinese

Research Library

Save and manage research in your library. Easily cite research in any document, fast

Never write alone

Get suggestions whenever you are stuck or expand your notes into full paragraphs

Jenni's Writing Interface - Suggestions

Join 2 million empowered writers

Jenni has helped write over 970 million words. From academic essays, journals, to top-ranking blog posts

research papers on ai

@Hadeel_Naily

A major shoutout to Jenni Ai for straight up saving my life ❤️

research papers on ai

@sonofgorkhali

I started with Jenni-who & Jenni-what. But now I can't write without Jenni. I love Jenni AI and am amazed to see how far Jenni has come. Kudos to Jenni.AI team

research papers on ai

@Mushtaq Bilal

Jenni, the AI assistant for academic writing, just got BETTER and SMARTER.

research papers on ai

@gachoki_munene

This one is a game changer, Doc, especially on that small matter of lacking words or writer's block. I am definitely introducing it to my students asap.

research papers on ai

@angrytomtweets

I thought ChatGPT was a good writing assistant. But when I found Jenni AI - It blew my mind. It's 10x more advanced than I thought.

research papers on ai

I thought AI writing was useless. Then I found Jenni AI. It turned out to be much more advanced than I ever could have imagined. Jenni AI = ChatGPT x 10.

research papers on ai

Oscar Duran

@duranoscarf

herramienta de auto completado de textos. Usando inteligencia artificial te permite escribir de manera rápida y mas eficiente (hay que revisar igual)

research papers on ai

Jenni is perfect for writing research docs, SOPs, study projects presentations 👌🏽

research papers on ai

@xaviercaffrey13

Copyai is alright but have you tried @whoisjenniai?

Team & institutional plans

Collaborate with your research team and speed up your workflow.

Enquire now

You're in control

Types of content Jenni can help you with

Save hours writing your essay with our AI essay writing tool.

Literature reviews

Discover, write, and cite relevant research.

Research Papers

Polish your writing to increase submission success.

Personal statements

Create a compelling college motivation letter.

Write blogs & articles faster with the help of AI.

Write your next compelling speech in less time.

Frequently asked questions

Does jenni use gpt-4, what are citations, is jenni multilingual, is there mobile support.

Does Jenni plagiarize?

Try Jenni for free today

Write your first paper with Jenni today and never look back

VIDEO

  1. Free AI Tools

  2. 3 ai tools for students #research #researchpaper #short #researchmethods #aitools #research

  3. 🤖 How to get AI to Write your Assignments?

  4. Using AI for Academic Writing

  5. Free AI Tools

  6. A100% guaranteed method to bypass ai detection

COMMENTS

  1. Research index

    Research Papers. Feb 15, 2024 February 15, 2024. Video generation models as world simulators. ... Safety & Alignment. Read paper. Dec 14, 2023 December 14, 2023. Practices for Governing Agentic AI Systems. Responsible AI, ... Aug 1, 2023 August 1, 2023. Confidence-Building Measures for Artificial Intelligence: Workshop proceedings.

  2. 578339 PDFs

    Artificial Intelligence | Explore the latest full-text research PDFs, articles, conference papers, preprints and more on ARTIFICIAL INTELLIGENCE. Find methods information, sources, references or ...

  3. The present and future of AI

    The 2021 report is the second in a series that will be released every five years until 2116. Titled "Gathering Strength, Gathering Storms," the report explores the various ways AI is increasingly touching people's lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical ...

  4. Semantic Scholar

    Try it for select papers. Learn More. G r een AI R o y Schwa r tz, Jesse Dodge, N. A. Smith, ... Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI. Learn More. About About Us Meet the Team Publishers Blog (opens in a new tab) ...

  5. Artificial Intelligence authors/titles recent submissions

    Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Human-Computer Interaction (cs.HC) ... A Case Study on Utilizing ChatGPT Intelligence for Research Paper Analysis Authors: Anjalee De Silva, Janaka L. Wijekoon, Rashini Liyanarachchi, Rrubaa Panchendrarajan, Weranga Rajapaksha.

  6. AIJ

    The journal of Artificial Intelligence (AIJ) welcomes papers on broad aspects of AI that constitute advances in the overall field including, but not limited to, cognition and AI, automated reasoning and inference, case-based reasoning, commonsense reasoning, computer vision, constraint …. View full aims & scope. Visit IJCAI.

  7. AI and science: what 1,600 researchers think

    Artificial-intelligence (AI) tools are becoming increasingly common in science, and many scientists anticipate that they will soon be central to the practice of research, suggests a Nature survey ...

  8. Top-10 Research Papers in AI

    Mar 8, 2021. 5. Each year scientists from around the world publish thousands of research papers in AI but only a few of them reach wide audiences and make a global impact in the world. Below are the top-10 most impactful research papers published in top AI conferences during the last 5 years. The ranking is based on the number of citations and ...

  9. Journal of Artificial Intelligence Research

    The Journal of Artificial Intelligence Research (JAIR) is dedicated to the rapid dissemination of important research results to the global artificial intelligence (AI) community. The journal's scope encompasses all areas of AI, including agents and multi-agent systems, automated reasoning, constraint processing and search, knowledge ...

  10. Research

    FEATURED RESEARCH. AI for the benefit of humanity. EXAMPLES OF OUR WORK. Improving skin tone evaluation in machine learning to uphold our AI principles. Discover Discover. Generative AI. ... Google publishes over 1,000 papers annually. Publishing our work enables us to collaborate and share ideas with, as well as learn from, the broader ...

  11. Growth in AI and robotics research accelerates

    The number of AI and robotics papers published in the 82 high-quality science journals in the Nature Index (Count) has been rising year-on-year — so rapidly that it resembles an exponential ...

  12. AI in health and medicine

    Artificial intelligence (AI) is poised to broadly reshape medicine, potentially improving the experiences of both clinicians and patients. We discuss key findings from a 2-year weekly effort to ...

  13. The impact of artificial intelligence on human society and bioethics

    This article will first examine what AI is, discuss its impact on industrial, social, and economic changes on humankind in the 21 st century, and then propose a set of principles for AI bioethics. The IR1.0, the IR of the 18 th century, impelled a huge social change without directly complicating human relationships.

  14. AI technologies for education: Recent research & future directions

    AI was implemented and examined in a wide variety of subject areas, such as science, medicine, arts, sports, engineering, mathematics, technologies, foreign language, business, history and more (See Table 3).The largest number of AIEd research studies (n = 14) were in engineering, computer science, information technology (IT), or informatics, followed by mathematics (n = 8), foreign language ...

  15. Artificial intelligence in healthcare: transforming the practice of

    Artificial intelligence in healthcare: transforming the practice of medicine is a review article that explores the current and potential applications of AI in various domains of medicine, such as diagnosis, treatment, research, and education. The article also discusses the challenges and ethical issues of implementing AI in healthcare, and provides some recommendations for future directions ...

  16. IEEE Transactions on Artificial Intelligence

    IEEE Transactions on Artificial Intelligence. null | IEEE Xplore. Need Help? US & Canada: +1 800 678 4333 Worldwide: +1 732 981 0060 Contact & Support

  17. [2303.12712] Sparks of Artificial General Intelligence: Early

    Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an ...

  18. AI Research Papers

    Make decisions with confidence. Qualcomm contributes impactful artificial intelligence research covering computer vision, machine learning, generative modeling, and more. Explore our AI research papers.

  19. Doing more but learning less: addressing the risks of AI in research

    The paper, co-authored by Princeton cognitive scientist M. J. Crockett, sets a framework for discussing the risks involved in using AI tools throughout the scientific research process, from study design through peer review. " We hope this paper offers a vocabulary for talking about AI's potential epistemic risks," Messeri said.

  20. The best AI tools for research papers and academic research (Literature

    AI for scientific writing and research papers. In the ever-evolving realm of academic research, AI tools are increasingly taking center stage. Enter Paper Wizard, Jenny.AI, and Wisio - these groundbreaking platforms are set to revolutionize the way we approach scientific writing.

  21. Doing more but learning less: Addressing the risks of AI in research

    The paper, co-authored by Princeton cognitive scientist M. J. Crockett, sets a framework for discussing the risks involved in using AI tools throughout the scientific research process, from study ...

  22. (PDF) Research paper on Artificial Intelligence

    "Best Paper Award Second Prize" ICGECD 2020 -2nd International Conference on General Education and Contemporary Development, October 23-24, 2020 with our research paper Artificial intelligence ...

  23. Stable Diffusion 3: Research Paper

    Key Takeaways: Today, we're publishing our research paper that dives into the underlying technology powering Stable Diffusion 3.. Stable Diffusion 3 outperforms state-of-the-art text-to-image generation systems such as DALL·E 3, Midjourney v6, and Ideogram v1 in typography and prompt adherence, based on human preference evaluations.

  24. Artificial intelligence and illusions of understanding in scientific

    The visions of AI Oracles, Surrogates, Quants and Arbiters appear in papers authored both by scientists without AI expertise and by researchers who study AI, either through interdisciplinary ...

  25. A generative AI reset: Rewiring to turn potential into value in 2024

    It's time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realize that capturing gen AI's enormous potential value is harder than expected.. With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI ...

  26. AI Research Tools

    Consensus is an AI-powered search engine that helps you find evidence-based answers to your research questions. It intelligently searches through over 200 million scientific papers. Discover the latest AI research tools to accelerate your studies and academic research. Analyze research papers, summarize articles, citations, and more.

  27. [2402.17485] EMO: Emote Portrait Alive -- Generating Expressive

    In this work, we tackle the challenge of enhancing the realism and expressiveness in talking head video generation by focusing on the dynamic and nuanced relationship between audio cues and facial movements. We identify the limitations of traditional techniques that often fail to capture the full spectrum of human expressions and the uniqueness of individual facial styles. To address these ...

  28. Experts call for legal 'safe harbor' so researchers, journalists and

    The paper, A Safe Harbor for AI Evaluation and Red Teaming, says that to the authors' knowledge, "account suspensions in the course of public interest research" have taken place at companies ...

  29. Elicit

    Researchers commonly use Elicit to: Speed up literature review. Find papers they couldn't find elsewhere. Automate systematic reviews and meta-analyses. Learn about a new domain. Elicit tends to work best for empirical domains that involve experiments and concrete results. This type of research is common in biomedicine and machine learning.

  30. Jenni AI

    Jenni is your AI assistant for all things in your academic journey. We specialise in developing AI that helps you make your writing more efficient, while still keeping control. ... Bring your research papers to life with source-based generation. Chat to your PDFs. Quickly understand and summarize your research papers with our AI chat assistant ...