Scientific Research – Types, Purpose and Guide

Scientific Research

Definition:

Scientific research is the systematic and empirical investigation of phenomena, theories, or hypotheses, using various methods and techniques in order to acquire new knowledge or to validate existing knowledge.

It involves the collection, analysis, interpretation, and presentation of data, as well as the formulation and testing of hypotheses. Scientific research can be conducted in various fields, such as natural sciences, social sciences, and engineering, and may involve experiments, observations, surveys, or other forms of data collection. The goal of scientific research is to advance knowledge, improve understanding, and contribute to the development of solutions to practical problems.

Types of Scientific Research

There are different types of scientific research, which can be classified based on their purpose, method, and application. Below, we discuss four main types of scientific research.

Descriptive Research

Descriptive research aims to describe or document a particular phenomenon or situation, without altering it in any way. This type of research is usually done through observation, surveys, or case studies. Descriptive research is useful in generating ideas, understanding complex phenomena, and providing a foundation for future research. However, it does not provide explanations or causal relationships between variables.

Exploratory Research

Exploratory research aims to explore a new area of inquiry or develop initial ideas for future research. This type of research is usually conducted through observation, interviews, or focus groups. Exploratory research is useful in generating hypotheses, identifying research questions, and determining the feasibility of a larger study. However, it does not provide conclusive evidence or establish cause-and-effect relationships.

Experimental Research

Experimental research aims to test cause-and-effect relationships between variables by manipulating one variable and observing the effects on another variable. This type of research involves the use of an experimental group, which receives a treatment, and a control group, which does not receive the treatment. Experimental research is useful in establishing causal relationships, replicating results, and controlling extraneous variables. However, it may not be feasible or ethical to manipulate certain variables in some contexts.

Correlational Research

Correlational research aims to examine the relationship between two or more variables without manipulating them. This type of research involves the use of statistical techniques to determine the strength and direction of the relationship between variables. Correlational research is useful in identifying patterns, predicting outcomes, and testing theories. However, it does not establish causation or control for confounding variables.
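
To make this concrete, here is a minimal sketch (not part of the original text) of how a correlational analysis might be run in Python. The variables and data are hypothetical, and the SciPy library is assumed to be available.

```python
# Hypothetical example: is time spent studying related to exam performance?
# Pearson's r measures the strength and direction of a linear relationship.
from scipy import stats

study_hours = [2, 4, 5, 7, 8, 10, 11, 13]       # made-up predictor values
exam_scores = [55, 60, 62, 70, 72, 80, 83, 90]  # made-up outcome values

r, p_value = stats.pearsonr(study_hours, exam_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# A large positive r means the variables rise together; it does not show
# causation, because a confounding variable could drive both.
```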

Scientific Research Methods

Scientific research methods are used to investigate phenomena, acquire knowledge, and answer questions using empirical evidence. Here are some commonly used scientific research methods:

Observational Studies

This method involves observing and recording phenomena as they occur in their natural setting. It can be done through direct observation or by using tools such as cameras, microscopes, or sensors.

Experimental Studies

This method involves manipulating one or more variables to determine the effect on the outcome. This type of study is often used to establish cause-and-effect relationships.

Survey Research

This method involves collecting data from a large number of people by asking them a set of standardized questions. Surveys can be conducted in person, over the phone, or online.

Case Studies

This method involves in-depth analysis of a single individual, group, or organization. Case studies are often used to gain insights into complex or unusual phenomena.

Meta-analysis

This method involves combining data from multiple studies to arrive at a more reliable conclusion. This technique can be used to identify patterns and trends across a large number of studies.
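
As an illustration only, the sketch below shows one common way of combining studies: a fixed-effect, inverse-variance weighted average. The effect estimates and standard errors are invented for demonstration, and this is a simplification of real meta-analytic practice.

```python
# Fixed-effect meta-analysis sketch: weight each study's effect estimate by the
# inverse of its variance, so more precise studies contribute more to the pooled result.
import math

# (effect_estimate, standard_error) for three hypothetical studies
studies = [(0.30, 0.10), (0.45, 0.15), (0.25, 0.08)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect = {pooled:.3f}, 95% CI = "
      f"({pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f})")
```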

Qualitative Research

This method involves collecting and analyzing non-numerical data, such as interviews, focus groups, or observations. This type of research is often used to explore complex phenomena and to gain an understanding of people’s experiences and perspectives.

Quantitative Research

This method involves collecting and analyzing numerical data using statistical techniques. This type of research is often used to test hypotheses and to establish cause-and-effect relationships.
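
For example, the following sketch (not from the original text) fits a simple linear regression to hypothetical numerical data; the variable names and values are invented, and NumPy is assumed to be installed.

```python
# Does fertilizer amount predict crop yield? A simple least-squares fit.
import numpy as np

fertilizer_kg = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # hypothetical predictor
yield_tonnes = np.array([2.1, 2.6, 3.0, 3.5, 3.9, 4.6])   # hypothetical outcome

slope, intercept = np.polyfit(fertilizer_kg, yield_tonnes, deg=1)
predicted = slope * fertilizer_kg + intercept
ss_res = np.sum((yield_tonnes - predicted) ** 2)
ss_tot = np.sum((yield_tonnes - yield_tonnes.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"yield = {slope:.2f} * fertilizer + {intercept:.2f}  (R^2 = {r_squared:.3f})")
```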

Longitudinal Studies

This method involves following a group of individuals over a period of time to observe changes and to identify patterns and trends. This type of study can be used to investigate the long-term effects of a particular intervention or exposure.

Data Analysis Methods

There are many different data analysis methods used in scientific research, and the choice of method depends on the type of data being collected and the research question. Here are some commonly used data analysis methods:

  • Descriptive statistics: This involves using summary statistics such as mean, median, mode, standard deviation, and range to describe the basic features of the data.
  • Inferential statistics: This involves using statistical tests to make inferences about a population based on a sample of data. Examples of inferential statistics include t-tests, ANOVA, and regression analysis (a brief code sketch illustrating these two methods follows this list).
  • Qualitative analysis: This involves analyzing non-numerical data such as interviews, focus groups, and observations. Qualitative analysis may involve identifying themes, patterns, or categories in the data.
  • Content analysis: This involves analyzing the content of written or visual materials such as articles, speeches, or images. Content analysis may involve identifying themes, patterns, or categories in the content.
  • Data mining: This involves using automated methods to analyze large datasets to identify patterns, trends, or relationships in the data.
  • Machine learning: This involves using algorithms to analyze data and make predictions or classifications based on the patterns identified in the data.
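
As a minimal illustration of the first two methods above (not part of the original text), the sketch below computes descriptive statistics for one group and runs an independent-samples t-test comparing two groups. The data are made up, and NumPy and SciPy are assumed to be available.

```python
# Descriptive statistics summarize the data; the t-test draws an inference
# about whether two hypothetical groups plausibly share the same mean.
import numpy as np
from scipy import stats

group_a = np.array([12.1, 13.4, 11.8, 14.2, 12.9, 13.7])
group_b = np.array([10.9, 11.5, 12.0, 10.4, 11.8, 11.1])

print("mean =", round(group_a.mean(), 2),
      "median =", round(float(np.median(group_a)), 2),
      "std =", round(group_a.std(ddof=1), 2),
      "range =", round(group_a.max() - group_a.min(), 2))

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```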

Application of Scientific Research

Scientific research has numerous applications in many fields, including:

  • Medicine and healthcare: Scientific research is used to develop new drugs, medical treatments, and vaccines. It is also used to understand the causes and risk factors of diseases, as well as to develop new diagnostic tools and medical devices.
  • Agriculture: Scientific research is used to develop new crop varieties, to improve crop yields, and to develop more sustainable farming practices.
  • Technology and engineering: Scientific research is used to develop new technologies and engineering solutions, such as renewable energy systems, new materials, and advanced manufacturing techniques.
  • Environmental science: Scientific research is used to understand the impacts of human activity on the environment and to develop solutions for mitigating those impacts. It is also used to monitor and manage natural resources, such as water and air quality.
  • Education: Scientific research is used to develop new teaching methods and educational materials, as well as to understand how people learn and develop.
  • Business and economics: Scientific research is used to understand consumer behavior, to develop new products and services, and to analyze economic trends and policies.
  • Social sciences: Scientific research is used to understand human behavior, attitudes, and social dynamics. It is also used to develop interventions to improve social welfare and to inform public policy.

How to Conduct Scientific Research

Conducting scientific research involves several steps, including:

  • Identify a research question: Start by identifying a question or problem that you want to investigate. This question should be clear, specific, and relevant to your field of study.
  • Conduct a literature review: Before starting your research, conduct a thorough review of existing research in your field. This will help you identify gaps in knowledge and develop hypotheses or research questions.
  • Develop a research plan: Once you have a research question, develop a plan for how you will collect and analyze data to answer that question. This plan should include a detailed methodology, a timeline, and a budget.
  • Collect data: Depending on your research question and methodology, you may collect data through surveys, experiments, observations, or other methods.
  • Analyze data: Once you have collected your data, analyze it using appropriate statistical or qualitative methods. This will help you draw conclusions about your research question.
  • Interpret results: Based on your analysis, interpret your results and draw conclusions about your research question. Discuss any limitations or implications of your findings.
  • Communicate results: Finally, communicate your findings to others in your field through presentations, publications, or other means.

Purpose of Scientific Research

The purpose of scientific research is to systematically investigate phenomena, acquire new knowledge, and advance our understanding of the world around us. Scientific research has several key goals, including:

  • Exploring the unknown: Scientific research is often driven by curiosity and the desire to explore uncharted territory. Scientists investigate phenomena that are not well understood, in order to discover new insights and develop new theories.
  • Testing hypotheses: Scientific research involves developing hypotheses or research questions, and then testing them through observation and experimentation. This allows scientists to evaluate the validity of their ideas and refine their understanding of the phenomena they are studying.
  • Solving problems: Scientific research is often motivated by the desire to solve practical problems or address real-world challenges. For example, researchers may investigate the causes of a disease in order to develop new treatments, or explore ways to make renewable energy more affordable and accessible.
  • Advancing knowledge: Scientific research is a collective effort to advance our understanding of the world around us. By building on existing knowledge and developing new insights, scientists contribute to a growing body of knowledge that can be used to inform decision-making, solve problems, and improve our lives.

Examples of Scientific Research

Here are some examples of scientific research that are currently ongoing or have recently been completed:

  • Clinical trials for new treatments: Scientific research in the medical field often involves clinical trials to test new treatments for diseases and conditions. For example, clinical trials may be conducted to evaluate the safety and efficacy of new drugs or medical devices.
  • Genomics research: Scientists are conducting research to better understand the human genome and its role in health and disease. This includes research on genetic mutations that can cause diseases such as cancer, as well as the development of personalized medicine based on an individual’s genetic makeup.
  • Climate change: Scientific research is being conducted to understand the causes and impacts of climate change, as well as to develop solutions for mitigating its effects. This includes research on renewable energy technologies, carbon capture and storage, and sustainable land use practices.
  • Neuroscience: Scientists are conducting research to understand the workings of the brain and the nervous system, with the goal of developing new treatments for neurological disorders such as Alzheimer’s disease and Parkinson’s disease.
  • Artificial intelligence: Researchers are working to develop new algorithms and technologies to improve the capabilities of artificial intelligence systems. This includes research on machine learning, computer vision, and natural language processing.
  • Space exploration: Scientific research is being conducted to explore the cosmos and learn more about the origins of the universe. This includes research on exoplanets, black holes, and the search for extraterrestrial life.

When to use Scientific Research

Some specific situations where scientific research may be particularly useful include:

  • Solving problems: Scientific research can be used to investigate practical problems or address real-world challenges. For example, scientists may investigate the causes of a disease in order to develop new treatments, or explore ways to make renewable energy more affordable and accessible.
  • Decision-making: Scientific research can provide evidence-based information to inform decision-making. For example, policymakers may use scientific research to evaluate the effectiveness of different policy options or to make decisions about public health and safety.
  • Innovation: Scientific research can be used to develop new technologies, products, and processes. For example, research on materials science can lead to the development of new materials with unique properties that can be used in a range of applications.
  • Knowledge creation: Scientific research is an important way of generating new knowledge and advancing our understanding of the world around us. This can lead to new theories, insights, and discoveries that can benefit society.

Advantages of Scientific Research

There are many advantages of scientific research, including:

  • Improved understanding: Scientific research allows us to gain a deeper understanding of the world around us, from the smallest subatomic particles to the largest celestial bodies.
  • Evidence-based decision making: Scientific research provides evidence-based information that can inform decision-making in many fields, from public policy to medicine.
  • Technological advancements: Scientific research drives technological advancements in fields such as medicine, engineering, and materials science. These advancements can improve quality of life, increase efficiency, and reduce costs.
  • New discoveries: Scientific research can lead to new discoveries and breakthroughs that can advance our knowledge in many fields. These discoveries can lead to new theories, technologies, and products.
  • Economic benefits: Scientific research can stimulate economic growth by creating new industries and jobs, and by generating new technologies and products.
  • Improved health outcomes: Scientific research can lead to the development of new medical treatments and technologies that can improve health outcomes and quality of life for people around the world.
  • Increased innovation: Scientific research encourages innovation by promoting collaboration, creativity, and curiosity. This can lead to new and unexpected discoveries that can benefit society.

Limitations of Scientific Research

Scientific research has some limitations that researchers should be aware of. These limitations can include:

  • Research design limitations: The design of a research study can impact the reliability and validity of the results. Poorly designed studies can lead to inaccurate or inconclusive results. Researchers must carefully consider the study design to ensure that it is appropriate for the research question and the population being studied.
  • Sample size limitations: The size of the sample being studied can impact the generalizability of the results. Small sample sizes may not be representative of the larger population, and may lead to incorrect conclusions.
  • Time and resource limitations: Scientific research can be costly and time-consuming. Researchers may not have the resources necessary to conduct a large-scale study, or may not have sufficient time to complete a study with appropriate controls and analysis.
  • Ethical limitations: Certain types of research may raise ethical concerns, such as studies involving human or animal subjects. Ethical concerns may limit the scope of the research that can be conducted, or require additional protocols and procedures to ensure the safety and well-being of participants.
  • Limitations of technology: Technology may limit the types of research that can be conducted, or the accuracy of the data collected. For example, certain types of research may require advanced technology that is not yet available, or may be limited by the accuracy of current measurement tools.
  • Limitations of existing knowledge: Existing knowledge may limit the types of research that can be conducted. For example, if there is limited knowledge in a particular field, it may be difficult to design a study that can provide meaningful results.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Research Methods--Quantitative, Qualitative, and More: Overview

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and the support and resources available at UC Berkeley.

As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the 'how' for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge... Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more.  This guide is an introduction, but if you don't see what you need here, always contact your subject librarian, and/or take a look to see if there's a library research guide that will answer your question. 

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. An online guide to this one-stop shopping collection is available, and some helpful links are below:

  • SAGE Research Methods
  • Little Green Books  (Quantitative Methods)
  • Little Blue Books  (Qualitative Methods)
  • Dictionaries and Encyclopedias  
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video--see methods come to life
  • Methodspace -- a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! From this link, check out pages for each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab: Supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad: A simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF): Provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management: A UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data, and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA): a program for easy web-based analysis of survey data.

Consultants

  • D-Lab/Data Science Discovery Consultants: Request help with your research project from peer consultants.
  • Research Data Management (RDM) consulting: Meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services: A service in which advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB / CPHS: Qualitative research projects with human subjects often require that you go through an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships): OURS supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects: Sponsored Projects works with researchers applying for major external grants.
  • URL: https://guides.lib.berkeley.edu/researchmethods



Chapter 1 Science and Scientific Research

What is research? Depending on who you ask, you will likely get very different answers to this seemingly innocuous question. Some people will say that they routinely research different online websites to find the best place to buy goods or services they want. Television news channels supposedly conduct research in the form of viewer polls on topics of public interest such as forthcoming elections or government-funded projects. Undergraduate students research the Internet to find the information they need to complete assigned projects or term papers. Graduate students working on research projects for a professor may see research as collecting or analyzing data related to their project. Businesses and consultants research different potential solutions to remedy organizational problems such as a supply chain bottleneck or to identify customer purchase patterns. However, none of the above can be considered “scientific research” unless: (1) it contributes to a body of science, and (2) it follows the scientific method. This chapter will examine what these terms mean.

What is science? To some, science refers to difficult high school or college-level courses such as physics, chemistry, and biology meant only for the brightest students. To others, science is a craft practiced by scientists in white coats using specialized equipment in their laboratories. Etymologically, the word “science” is derived from the Latin word scientia meaning knowledge. Science refers to a systematic and organized body of knowledge in any area of inquiry that is acquired using “the scientific method” (the scientific method is described further below). Science can be grouped into two broad categories: natural science and social science. Natural science is the science of naturally occurring objects or phenomena, such as light, objects, matter, earth, celestial bodies, or the human body. Natural sciences can be further classified into physical sciences, earth sciences, life sciences, and others. Physical sciences consist of disciplines such as physics (the science of physical objects), chemistry (the science of matter), and astronomy (the science of celestial objects). Earth sciences consist of disciplines such as geology (the science of the earth). Life sciences include disciplines such as biology (the science of living organisms) and botany (the science of plants). In contrast, social science is the science of people or collections of people, such as groups, firms, societies, or economies, and their individual or collective behaviors. Social sciences can be classified into disciplines such as psychology (the science of human behaviors), sociology (the science of social groups), and economics (the science of firms, markets, and economies).

The natural sciences are different from the social sciences in several respects. The natural sciences are very precise, accurate, deterministic, and independent of the person making the scientific observations. For instance, a scientific experiment in physics, such as measuring the speed of sound through a certain medium or the refractive index of water, should always yield the exact same results, irrespective of the time or place of the experiment, or the person conducting the experiment. If two students conducting the same physics experiment obtain two different values of these physical properties, then it generally means that one or both of those students must be in error. However, the same cannot be said for the social sciences, which tend to be less accurate, deterministic, or unambiguous. For instance, if you measure a person’s happiness using a hypothetical instrument, you may find that the same person is more happy or less happy (or sad) on different days and sometimes, at different times on the same day. One’s happiness may vary depending on the news that person received that day or on the events that transpired earlier during that day. Furthermore, there is not a single instrument or metric that can accurately measure a person’s happiness. Hence, one instrument may calibrate a person as being “more happy” while a second instrument may find that the same person is “less happy” at the same instant in time. In other words, there is a high degree of measurement error in the social sciences and there is considerable uncertainty and little agreement on social science policy decisions. For instance, you will not find many disagreements among natural scientists on the speed of light or the speed of the earth around the sun, but you will find numerous disagreements among social scientists on how to solve a social problem such as reducing global terrorism or rescuing an economy from a recession. Any student studying the social sciences must be cognizant of and comfortable with handling higher levels of ambiguity, uncertainty, and error that come with such sciences, which merely reflects the high variability of social objects.

Sciences can also be classified based on their purpose. Basic sciences, also called pure sciences, are those that explain the most basic objects and forces, relationships between them, and laws governing them. Examples include physics, mathematics, and biology. Applied sciences, also called practical sciences, are sciences that apply scientific knowledge from basic sciences in a physical environment. For instance, engineering is an applied science that applies the laws of physics and chemistry for practical applications such as building stronger bridges or fuel-efficient combustion engines, while medicine is an applied science that applies the laws of biology for solving human ailments. Both basic and applied sciences are required for human development. However, the applied sciences cannot stand on their own; they rely on the basic sciences for their progress. Of course, industry and private enterprises tend to focus more on applied sciences given their practical value, while universities study both basic and applied sciences.

Scientific Knowledge

The purpose of science is to create scientific knowledge. Scientific knowledge refers to a generalized body of laws and theories to explain a phenomenon or behavior of interest that are acquired using the scientific method. Laws are observed patterns of phenomena or behaviors, while theories are systematic explanations of the underlying phenomenon or behavior. For instance, in physics, the Newtonian Laws of Motion describe what happens when an object is in a state of rest or motion (Newton’s First Law), what force is needed to move a stationary object or stop a moving object (Newton’s Second Law), and what happens when two objects collide (Newton’s Third Law). Collectively, the three laws constitute the basis of classical mechanics – a theory of moving objects. Likewise, the theory of optics explains the properties of light and how it behaves in different media, electromagnetic theory explains the properties of electricity and how to generate it, quantum mechanics explains the properties of subatomic particles, and thermodynamics explains the properties of energy and mechanical work. An introductory college-level textbook in physics will likely contain separate chapters devoted to each of these theories. Similar theories are also available in the social sciences. For instance, cognitive dissonance theory in psychology explains how people react when their observations of an event are different from what they expected of that event, general deterrence theory explains why some people engage in improper or criminal behaviors, such as illegally downloading music or committing software piracy, and the theory of planned behavior explains how people make conscious reasoned choices in their everyday lives.

The goal of scientific research is to discover laws and postulate theories that can explain natural or social phenomena, or in other words, build scientific knowledge. It is important to understand that this knowledge may be imperfect or even quite far from the truth. Sometimes, there may not be a single universal truth, but rather an equilibrium of “multiple truths.” We must understand that the theories, upon which scientific knowledge is based, are only explanations of a particular phenomenon, as suggested by a scientist. As such, there may be good or poor explanations, depending on the extent to which those explanations fit well with reality, and consequently, there may be good or poor theories. The progress of science is marked by our progression over time from poorer theories to better theories, through better observations using more accurate instruments and more informed logical reasoning.

We arrive at scientific laws or theories through a process of logic and evidence. Logic (theory) and evidence (observations) are the two, and only two, pillars upon which scientific knowledge is based. In science, theories and observations are interrelated and cannot exist without each other. Theories provide meaning and significance to what we observe, and observations help validate or refine existing theory or construct new theory. Any other means of knowledge acquisition, such as faith or authority cannot be considered science.

Scientific Research

Given that theories and observations are the two pillars of science, scientific research operates at two levels: a theoretical level and an empirical level. The theoretical level is concerned with developing abstract concepts about a natural or social phenomenon and relationships between those concepts (i.e., build “theories”), while the empirical level is concerned with testing the theoretical concepts and relationships to see how well they reflect our observations of reality, with the goal of ultimately building better theories. Over time, a theory becomes more and more refined (i.e., fits the observed reality better), and the science gains maturity. Scientific research involves continually moving back and forth between theory and observations. Both theory and observations are essential components of scientific research. For instance, relying solely on observations for making inferences and ignoring theory is not considered valid scientific research.

Depending on a researcher’s training and interest, scientific inquiry may take one of two possible forms: inductive or deductive. In inductive research, the goal of a researcher is to infer theoretical concepts and patterns from observed data. In deductive research, the goal of the researcher is to test concepts and patterns known from theory using new empirical data. Hence, inductive research is also called theory-building research, and deductive research is theory-testing research. Note here that the goal of theory-testing is not just to test a theory, but possibly to refine, improve, and extend it. Figure 1.1 depicts the complementary nature of inductive and deductive research. Note that inductive and deductive research are two halves of the research cycle that constantly iterates between theory and observations. You cannot do inductive or deductive research if you are not familiar with both the theory and data components of research. Naturally, a complete researcher is one who can traverse the entire research cycle and can handle both inductive and deductive research.

It is important to understand that theory-building (inductive research) and theory-testing (deductive research) are both critical for the advancement of science. Elegant theories are not valuable if they do not match with reality. Likewise, mountains of data are also useless until they can contribute to the construction of meaningful theories. Rather than viewing these two processes in a circular relationship, as shown in Figure 1.1, perhaps they can be better viewed as a helix, with each iteration between theory and data contributing to better explanations of the phenomenon of interest and better theories. Though both inductive and deductive research are important for the advancement of science, it appears that inductive (theory-building) research is more valuable when there are few prior theories or explanations, while deductive (theory-testing) research is more productive when there are many competing theories of the same phenomenon and researchers are interested in knowing which theory works best and under what circumstances.

Figure 1.1. The Cycle of Research: theories lead to hypothesis testing, which leads to observations; generalization from those observations leads back to theories.

Theory building and theory testing are particularly difficult in the social sciences, given the imprecise nature of the theoretical concepts, inadequate tools to measure them, and the presence of many unaccounted factors that can also influence the phenomenon of interest. It is also very difficult to refute theories that do not work. For instance, Karl Marx’s theory of communism as an effective means of economic production withstood for decades, before it was finally discredited as being inferior to capitalism in promoting economic growth and social welfare. Erstwhile communist economies like the Soviet Union and China eventually moved toward more capitalistic economies characterized by profit-maximizing private enterprises. However, the recent collapse of the mortgage and financial industries in the United States demonstrates that capitalism also has its flaws and is not as effective in fostering economic growth and social welfare as previously presumed. Unlike theories in the natural sciences, social science theories are rarely perfect, which provides numerous opportunities for researchers to improve those theories or build their own alternative theories.

Conducting scientific research, therefore, requires two sets of skills – theoretical and methodological – needed to operate in the theoretical and empirical levels respectively. Methodological skills (“know-how”) are relatively standard, invariant across disciplines, and easily acquired through doctoral programs. However, theoretical skills (“know-what”) are considerably harder to master, require years of observation and reflection, and are tacit skills that cannot be “taught” but rather learned through experience. All of the greatest scientists in the history of mankind, such as Galileo, Newton, Einstein, Niels Bohr, Adam Smith, Charles Darwin, and Herbert Simon, were master theoreticians, and they are remembered for the theories they postulated that transformed the course of science. Methodological skills are needed to be an ordinary researcher, but theoretical skills are needed to be an extraordinary researcher!

Scientific Method

In the preceding sections, we described science as knowledge acquired through a scientific method. So what exactly is the “scientific method”? Scientific method refers to a standardized set of techniques for building scientific knowledge, such as how to make valid observations, how to interpret results, and how to generalize those results. The scientific method allows researchers to independently and impartially test preexisting theories and prior findings, and subject them to open debate, modifications, or enhancements. The scientific method must satisfy four characteristics:

  • Replicability: Others should be able to independently replicate or repeat a scientific study and obtain similar, if not identical, results.
  • Precision: Theoretical concepts, which are often hard to measure, must be defined with such precision that others can use those definitions to measure those concepts and test that theory.
  • Falsifiability: A theory must be stated in a way that it can be disproven. Theories that cannot be tested or falsified are not scientific theories, and any such knowledge is not scientific knowledge. A theory that is specified in imprecise terms or whose concepts are not accurately measurable cannot be tested and is therefore not scientific. Sigmund Freud’s ideas on psychoanalysis fall into this category and are therefore not considered a “theory,” even though psychoanalysis may have practical utility in treating certain types of ailments.
  • Parsimony: When there are multiple explanations of a phenomenon, scientists must always accept the simplest or logically most economical explanation. This concept is called parsimony or “Occam’s razor.” Parsimony prevents scientists from pursuing overly complex or outlandish theories with an endless number of concepts and relationships that may explain a little bit of everything but nothing in particular.

Any branch of inquiry that does not allow the scientific method to test its basic laws or theories cannot be called “science.” For instance, theology (the study of religion) is not science because theological ideas (such as the presence of God) cannot be tested by independent observers using a replicable, precise, falsifiable, and parsimonious method. Similarly, arts, music, literature, humanities, and law are also not considered science, even though they are creative and worthwhile endeavors in their own right.

The scientific method, as applied to social sciences, includes a variety of research approaches, tools, and techniques, such as qualitative and quantitative data, statistical analysis, experiments, field surveys, case research, and so forth. Most of this book is devoted to learning about these different methods. However, recognize that the scientific method operates primarily at the empirical level of research, i.e., how to make observations and analyze and interpret these observations. Very little of this method is directly pertinent to the theoretical level, which is really the more challenging part of scientific research.

Types of Scientific Research

Depending on the purpose of research, scientific research projects can be grouped into three types: exploratory, descriptive, and explanatory. Exploratory research is often conducted in new areas of inquiry, where the goals of the research are: (1) to scope out the magnitude or extent of a particular phenomenon, problem, or behavior, (2) to generate some initial ideas (or “hunches”) about that phenomenon, or (3) to test the feasibility of undertaking a more extensive study regarding that phenomenon. For instance, if the citizens of a country are generally dissatisfied with governmental policies during an economic recession, exploratory research may be directed at measuring the extent of citizens’ dissatisfaction, understanding how such dissatisfaction is manifested, such as the frequency of public protests, and the presumed causes of such dissatisfaction, such as ineffective government policies in dealing with inflation, interest rates, unemployment, or higher taxes. Such research may include examination of publicly reported figures, such as estimates of economic indicators like gross domestic product (GDP), unemployment, and the consumer price index, as archived by third-party sources, obtained through interviews of experts, eminent economists, or key government officials, and/or derived from studying historical examples of dealing with similar problems. This research may not lead to a very accurate understanding of the target problem, but may be worthwhile in scoping out the nature and extent of the problem and serve as a useful precursor to more in-depth research.

Descriptive research is directed at making careful observations and detailed documentation of a phenomenon of interest. These observations must be based on the scientific method (i.e., must be replicable, precise, etc.), and therefore, are more reliable than casual observations by untrained people. Examples of descriptive research are the tabulation of demographic statistics by the United States Census Bureau or employment statistics by the Bureau of Labor Statistics, which use the same or similar instruments for estimating employment by sector or population growth by ethnicity over multiple employment surveys or censuses. If any changes are made to the measuring instruments, estimates are provided with and without the changed instrumentation to allow the readers to make a fair before-and-after comparison regarding population or employment trends. Other descriptive research may include chronicling ethnographic reports of gang activities among adolescent youth in urban populations, the persistence or evolution of religious, cultural, or ethnic practices in select communities, and the role of technologies such as Twitter and instant messaging in the spread of democracy movements in Middle Eastern countries.

Explanatory research seeks explanations of observed phenomena, problems, or behaviors. While descriptive research examines the what, where, and when of a phenomenon, explanatory research seeks answers to why and how types of questions. It attempts to “connect the dots” in research, by identifying causal factors and outcomes of the target phenomenon. Examples include understanding the reasons behind adolescent crime or gang violence, with the goal of prescribing strategies to overcome such societal ailments. Most academic or doctoral research belongs to the explanation category, though some amount of exploratory and/or descriptive research may also be needed during initial phases of academic research. Seeking explanations for observed events requires strong theoretical and interpretation skills, along with intuition, insights, and personal experience. Those who can do it well are also the most prized scientists in their disciplines.

History of Scientific Thought

Before closing this chapter, it may be interesting to go back in history and see how science has evolved over time and identify the key scientific minds in this evolution. Although instances of scientific progress have been documented over many centuries, the terms “science,” “scientists,” and the “scientific method” were coined only in the 19th century. Prior to this time, science was viewed as a part of philosophy, and coexisted with other branches of philosophy such as logic, metaphysics, ethics, and aesthetics, although the boundaries between some of these branches were blurred.

In the earliest days of human inquiry, knowledge was usually recognized in terms of theological precepts based on faith. This was challenged by Greek philosophers such as Plato, Aristotle, and Socrates during the 5th and 4th centuries BC, who suggested that the fundamental nature of being and the world can be understood more accurately through a process of systematic logical reasoning called rationalism. In particular, Aristotle’s classic work Metaphysics (literally meaning “beyond physical [existence]”) separated theology (the study of Gods) from ontology (the study of being and existence) and universal science (the study of first principles, upon which logic is based). Rationalism (not to be confused with “rationality”) views reason as the source of knowledge or justification, and suggests that the criterion of truth is not sensory but rather intellectual and deductive, often derived from a set of first principles or axioms (such as Aristotle’s “law of non-contradiction”).

The next major shift in scientific thought occurred during the 16th century, when British philosopher Francis Bacon (1561-1626) suggested that knowledge can only be derived from observations in the real world. Based on this premise, Bacon emphasized knowledge acquisition as an empirical activity (rather than as a reasoning activity), and developed empiricism as an influential branch of philosophy. Bacon’s works led to the popularization of inductive methods of scientific inquiry, the development of the “scientific method” (originally called the “Baconian method”), consisting of systematic observation, measurement, and experimentation, and may have even sowed the seeds of atheism or the rejection of theological precepts as “unobservable.”

Empiricism continued to clash with rationalism throughout the early modern period, as philosophers sought the most effective way of gaining valid knowledge. French philosopher Rene Descartes sided with the rationalists, while British philosophers John Locke and David Hume sided with the empiricists. Other scientists, such as Galileo Galilei and Sir Isaac Newton, attempted to fuse the two ideas into natural philosophy (the philosophy of nature), to focus specifically on understanding nature and the physical universe, which is considered to be the precursor of the natural sciences. Galileo (1564-1642) was perhaps the first to state that the laws of nature are mathematical, and contributed to the field of astronomy through an innovative combination of experimentation and mathematics.

In the 18th century, German philosopher Immanuel Kant sought to resolve the dispute between empiricism and rationalism in his book Critique of Pure Reason, by arguing that experience is purely subjective, and that processing it using pure reason without first delving into its subjective nature will lead to theoretical illusions. Kant’s ideas led to the development of German idealism, which inspired later development of interpretive techniques such as phenomenology, hermeneutics, and critical social theory.

At about the same time, French philosopher Auguste Comte (1798–1857), founder of the discipline of sociology, attempted to blend rationalism and empiricism in a new doctrine called positivism. He suggested that theory and observations have circular dependence on each other. While theories may be created via reasoning, they are only authentic if they can be verified through observations. The emphasis on verification started the separation of modern science from philosophy and metaphysics and further development of the “scientific method” as the primary means of validating scientific claims. Comte’s ideas were expanded by Emile Durkheim in his development of sociological positivism (positivism as a foundation for social research) and by Ludwig Wittgenstein in logical positivism.

In the early 20th century, strong accounts of positivism were rejected by interpretive sociologists (antipositivists) belonging to the German idealism school of thought. Positivism was typically equated with quantitative research methods such as experiments and surveys and with no explicit philosophical commitments, while antipositivism employed qualitative methods such as unstructured interviews and participant observation. Even practitioners of positivism, such as American sociologist Paul Lazarsfeld who pioneered large-scale survey research and statistical techniques for analyzing survey data, acknowledged potential problems of observer bias and structural limitations in positivist inquiry. In response, antipositivists emphasized that social actions must be studied through interpretive means based upon an understanding of the meaning and purpose that individuals attach to their personal actions, which inspired Georg Simmel’s work on symbolic interactionism, Max Weber’s work on ideal types, and Edmund Husserl’s work on phenomenology.

In the mid-to-late 20th century, both positivist and antipositivist schools of thought were subjected to criticisms and modifications. British philosopher Sir Karl Popper suggested that human knowledge is based not on unchallengeable, rock-solid foundations, but rather on a set of tentative conjectures that can never be proven conclusively, but only disproven. Empirical evidence is the basis for disproving these conjectures or “theories.” This metatheoretical stance, called postpositivism (or postempiricism), amends positivism by suggesting that it is impossible to verify the truth although it is possible to reject false beliefs, though it retains the positivist notion of an objective truth and its emphasis on the scientific method.

Likewise, antipositivists have also been criticized for trying only to understand society but not critiquing and changing society for the better. The roots of this thought lie in Das Kapital, written by German philosophers Karl Marx and Friedrich Engels, which critiqued capitalistic societies as being socially inequitable and inefficient, and recommended resolving this inequity through class conflict and proletarian revolutions. Marxism inspired social revolutions in countries such as Germany, Italy, Russia, and China, but generally failed to accomplish the social equality that it aspired to. Critical research (also called critical theory), propounded by Max Horkheimer and Jurgen Habermas in the 20th century, retains similar ideas of critiquing and resolving social inequality, and adds that people can and should consciously act to change their social and economic circumstances, although their ability to do so is constrained by various forms of social, cultural and political domination. Critical research attempts to uncover and critique the restrictive and alienating conditions of the status quo by analyzing the oppositions, conflicts and contradictions in contemporary society, and seeks to eliminate the causes of alienation and domination (i.e., emancipate the oppressed class). More on these different research philosophies and approaches will be covered in future chapters of this book.

  • Source: Social Science Research: Principles, Methods, and Practices, by Anol Bhattacherjee, University of South Florida. http://scholarcommons.usf.edu/oa_textbooks/3/. License: CC BY-NC-SA (Attribution-NonCommercial-ShareAlike).

Research Process

  • Select a Topic
  • Find Background Info
  • Focus Topic
  • List Keywords
  • Search for Sources
  • Evaluate & Integrate Sources
  • Cite and Track Sources

What is Scientific Research?

This guide covers research study design, natural vs. social science, qualitative vs. quantitative research, and further information on qualitative research in the social sciences.

Thank you to Julie Miller, reference intern, for helping to create this page.

Some people use the term research loosely, for example:

  • People will say they are researching different websites to find the best place to buy a new appliance or to locate a lawn care service.
  • TV news may talk about conducting research when it runs a viewer poll on a current-event topic, such as an upcoming election.
  • Undergraduate students working on a term paper or project may say they are researching the internet to find information.
  • Private sector companies may say they are conducting research to find a solution for a supply chain holdup.

However, none of the above is considered “scientific research” unless:

  • The research contributes to a body of science by providing new information through ethical study design or
  • The research follows the scientific method, an iterative process of observation and inquiry.

The Scientific Method

  • Make an observation: notice a phenomenon in your life or in society or find a gap in the already published literature.
  • Ask a question about what you have observed.
  • Hypothesize about a potential answer or explanation.
  • Make predictions about what should happen if your hypothesis is correct.
  • Design an experiment or study that will test your prediction.
  • Test the prediction by conducting an experiment or study; report the outcomes of your study.
  • Iterate! Was your prediction correct? Was the outcome unexpected? Did it lead to new observations?

The scientific method is not separate from the Research Process described in the rest of this guide; in fact, the Research Process corresponds directly to the observation stage of the scientific method. Understanding what other scientists and researchers have already studied will help you focus your area of study and build on their knowledge.
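
Because these steps form a loop rather than a straight line, it can help to see them written as one. The Python sketch below is purely illustrative and built around an assumed toy example (a simulated coin whose bias is hidden from the "researcher"); none of the names or numbers come from this guide. It shows a hypothesis generating a prediction, a study testing it, and an unexpected outcome sending the process back for revision.

```python
import random

# A toy, self-contained illustration of the iterative loop described above.
# The simulated coin, the tolerance, and every name here are hypothetical,
# introduced only for illustration; they are not part of this guide.

random.seed(0)
TRUE_HEADS_PROBABILITY = 0.7   # the hidden "reality" being investigated


def run_study(n_flips=500):
    """Collect data: flip the simulated coin n_flips times, return the observed heads rate."""
    heads = sum(random.random() < TRUE_HEADS_PROBABILITY for _ in range(n_flips))
    return heads / n_flips


hypothesis = 0.5               # initial hypothesis: the coin is fair
for iteration in range(1, 6):
    prediction = hypothesis                     # what we expect to see if the hypothesis holds
    observation = run_study()                   # design and run the study
    print(f"iteration {iteration}: predicted {prediction:.2f}, observed {observation:.2f}")
    if abs(observation - prediction) < 0.05:    # prediction held within a pre-set tolerance
        print("Prediction supported; hypothesis provisionally retained, never proven.")
        break
    hypothesis = observation                    # unexpected outcome: revise and iterate
else:
    print("Hypothesis still unsettled after the planned number of studies.")
```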

Designing your experiment or study is important for both natural and social scientists. Sage Research Methods (SRM) has an excellent "Project Planner" that guides you through the basic stages of research design. SRM also has excellent explanations of qualitative and quantitative research methods for the social sciences.

For the natural sciences, Springer Nature Experiments and Protocol Exchange have guidance on quantitative research methods.

U-M login required

Books, journals, reference books, videos, podcasts, data-sets, and case studies on social science research methods.

Sage Research Methods includes over 2,000 books, reference books, journal articles, videos, datasets, and case studies on all aspects of social science research methodology. Browse the methods map or the list of methods to identify a social science method to pursue further. Includes a project planning tool and the "Which Stats Test" tool to identify the best statistical method for your project. Includes the notable "little green book" series (Quantitative Applications in the Social Sciences) and the "little blue book" series (Qualitative Research Methods).

Platform connecting researchers with protocols and methods.

Springer Nature Experiments has been designed to help users/researchers find and evaluate relevant protocols and methods across the whole Springer Nature protocols and methods portfolio using one search. This database includes:

  • Nature Protocols
  • Nature Reviews Methods Primers
  • Nature Methods
  • Springer Protocols

Open access for all users

Open repository for sharing scientific research protocols. These protocols are posted directly on the Protocol Exchange by authors and are made freely available to the scientific community for use and comment.

Includes these topics:

  • Biochemistry
  • Biological techniques
  • Chemical biology
  • Chemical engineering
  • Cheminformatics
  • Climate science
  • Computational biology and bioinformatics
  • Drug discovery
  • Electronics
  • Energy sciences
  • Environmental sciences
  • Materials science
  • Molecular biology
  • Molecular medicine
  • Neuroscience
  • Organic chemistry
  • Planetary science

Qualitative research is primarily exploratory. It is used to gain an understanding of underlying reasons, opinions, and motivations. Qualitative research is also used to uncover trends in thought and opinions and to dive deeper into a problem by studying an individual or a group.

Qualitative methods usually use unstructured or semi-structured techniques. The sample size is typically smaller than in quantitative research.

Example: interviews and focus groups.

Quantitative research is characterized by the gathering of data with the aim of testing a hypothesis. The data generated are numerical or, if not numerical, can be transformed into usable statistics.

Quantitative data collection methods are more structured than qualitative data collection methods and sample sizes are usually larger.

Example: surveys.

Note: The above descriptions of qualitative and quantitative research apply mainly to research in the social sciences; most natural sciences rely on quantitative methods for their experiments.

Qualitative research approaches the world in its natural setting, in ways that reveal its particularities, rather than studying it in a controlled setting. It aims to understand, describe, and sometimes explain social phenomena in a number of different ways:

  • Experiences of individuals or groups
  • Interactions and communications
  • Documents (texts, images, film, or sounds, and digital documents)

Qualitative researchers seek to understand how people conceptualize the world around them, what they are doing, how they are doing it, or what is happening to them, in terms that are significant and that offer meaningful insight.

Qualitative researchers develop and refine concepts (or hypotheses, if they are used) in the process of conducting research and collecting data. Cases (their history and complexity) are an important context for understanding the issue that is studied. A major part of qualitative research is based on text and writing – from field notes and transcripts to descriptions and interpretations and, finally, to the presentation of the findings and of the research as a whole.

  • Source: https://libguides.umflint.edu/research (last updated March 1, 2024)

Principles of Scientific Research

Robert V. Smith, Graduate Research, pp. 55–78

Scientific research has provided knowledge and understanding that has freed humankind from the ignorance that once promoted fear, mysticism, superstition, and illness. Developments in science and scientific methods, however, did not come easily. Many of our ancestors faced persecution, even death, from religious and political groups because they dared to advance the notion that knowledge and understanding could be gained through systematic study and practice. Today, the benefits of scientific research are understood. We appreciate the advances in the biological and physical sciences that allow the control of environment, the probing of the universe, and communications around the globe. We also appreciate the advances in biochemistry and molecular biology that have led to curative drugs, to genetic counseling, and to an unparalleled understanding of structure–function relationships in living organisms. We look hopefully to the development of life itself and, in concert with social-behavioral scientists, to the unraveling of the relationship between mind and brain. Despite the potential moral issues raised by the latter advances, the history of science gives us faith that knowledge and understanding can be advanced for the benefit of humanity.

... ever since the dawn of civilization, people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying order in the world.... Humanity’s deepest desire for knowledge is justification enough for our continuing quest. — Stephen Hawking

Smith, R.V. (1990). Principles of Scientific Research. In: Graduate Research. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7410-5_5

Articles on Scientific research

Displaying 1–20 of 88 articles:

  • Early COVID-19 research is riddled with poor methods and low-quality results − a problem for science the pandemic worsened but didn’t create (Dennis M. Gorman, Texas A&M University)
  • Netflix’s You Are What You Eat uses a twin study. Here’s why studying twins is so important for science (Nathan Kettlewell, University of Technology Sydney)
  • Fact-bombing by experts doesn’t change hearts and minds. But good science communication can (Tom Carruthers, The University of Western Australia; Heather Bray, The University of Western Australia; and Matthew Nurse, Australian National University)
  • Talking about science and technology has positive impacts on research and society (Ashley Rose Mehlenbacher, University of Waterloo; Donna Strickland, University of Waterloo; and Mary Wells, University of Waterloo)
  • Tenacious curiosity in the lab can lead to a Nobel Prize – mRNA research exemplifies the unpredictable value of basic scientific research (André O. Hudson, Rochester Institute of Technology)
  • Pigs with human brain cells and biological chips: how lab-grown hybrid lifeforms bamboozle scientific ethics (Julian Koplin, Monash University)
  • When Greenland was green: Ancient soil from beneath a mile of ice offers warnings for the future (Paul Bierman, University of Vermont, and Tammy Rittenour, Utah State University)
  • 10 reasons humans kill animals – and why we can’t avoid it (Benjamin Allen, University of Southern Queensland)
  • Hurricanes push heat deeper into the ocean than scientists realized, boosting long-term ocean warming, new research shows (Noel Gutiérrez Brizuela, University of California, San Diego, and Sally Warner, Brandeis University)
  • Colonialism has shaped scientific plant collections around the world – here’s why that matters (Daniel Park, Purdue University)
  • You shed DNA everywhere you go – trace samples in the water, sand and air are enough to identify who you are, raising ethical questions about privacy (Jenny Whilde, University of Florida, and Jessica Alice Farrell, University of Florida)
  • Nigeria needs to take science more seriously - an agenda for the new president (Oyewale Tomori, Nigerian Academy of Science)
  • Two decades of stagnant funding have rendered Canada uncompetitive in biomedical research. Here’s why it matters, and how to fix it. (Stephen L Archer, Queen's University, Ontario)
  • How tracking technology is transforming our understanding of animal behaviour (Louise Gentle, Nottingham Trent University)
  • What the world would lose with the demise of Twitter: Valuable eyewitness accounts and raw data on human behavior, as well as a habitat for trolls (Anjana Susarla, Michigan State University)
  • There are 8 years left to meet the UN Sustainable Development Goals, but is it enough time? (Rees Kassen, L’Université d’Ottawa/University of Ottawa, and Ruth Morgan, UCL)
  • ‘Gain of function’ research can create experimental viruses. In light of COVID, it should be more strictly regulated – or banned (Colin D. Butler, Australian National University)
  • By fact-checking Thoreau’s observations at Walden Pond, we showed how old diaries and specimens can inform modern research (Tara K. Miller, Boston University; Abe Miller-Rushing, National Park Service; and Richard B. Primack, Boston University)
  • New ‘ethics guidance’ for top science journals aims to root out harmful research – but can it succeed? (Cordelia Fine, The University of Melbourne)
  • Expanding Alzheimer’s research with primates could overcome the problem with treatments that show promise in mice but don’t help humans (Agnès Lacreuse, UMass Amherst; Allyson J. Bennett, University of Wisconsin-Madison; and Amanda M. Dettmer, Yale University)

Scientific Method Steps in Psychology Research

Steps, Uses, and Key Terms

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

How do researchers investigate psychological phenomena? They utilize a process known as the scientific method to study different aspects of how people think and behave.

When conducting research, the scientific method steps to follow are:

  • Observe what you want to investigate
  • Ask a research question and make predictions
  • Test the hypothesis and collect data
  • Examine the results and draw conclusions
  • Report and share the results 

This process not only allows scientists to investigate and understand different psychological phenomena but also provides researchers and others a way to share and discuss the results of their studies.

Generally, there are five main steps in the scientific method, although some may break down this process into six or seven steps. An additional step in the process can also include developing new research questions based on your findings.

What Is the Scientific Method?

What is the scientific method and how is it used in psychology?

The scientific method consists of five steps. It is essentially a step-by-step process that researchers can follow to determine if there is some type of relationship between two or more variables.

By knowing the steps of the scientific method, you can better understand the process researchers go through to arrive at conclusions about human behavior.

Scientific Method Steps

While research studies can vary, these are the basic steps that psychologists and scientists use when investigating human behavior.

The following are the scientific method steps:

Step 1. Make an Observation

Before a researcher can begin, they must choose a topic to study. Once an area of interest has been chosen, the researchers must then conduct a thorough review of the existing literature on the subject. This review will provide valuable information about what has already been learned about the topic and what questions remain to be answered.

A literature review might involve looking at a considerable amount of written material from both books and academic journals dating back decades.

The relevant information collected by the researcher will be presented in the introduction section of the final published study results. This background material will also help the researcher with the first major step in conducting a psychology study: formulating a hypothesis.

Step 2. Ask a Question

Once a researcher has observed something and gained some background information on the topic, the next step is to ask a question. The researcher will form a hypothesis, which is an educated guess about the relationship between two or more variables.

For example, a researcher might ask a question about the relationship between sleep and academic performance: Do students who get more sleep perform better on tests at school?

In order to formulate a good hypothesis, it is important to think about different questions you might have about a particular topic.

You should also consider how you could investigate the causes. Falsifiability is an important part of any valid hypothesis: if a hypothesis is false, there needs to be a way for scientists to demonstrate that it is false.

Step 3. Test Your Hypothesis and Collect Data

Once you have a solid hypothesis, the next step of the scientific method is to put this hunch to the test by collecting data. The exact methods used to investigate a hypothesis depend on exactly what is being studied. There are two basic forms of research that a psychologist might utilize: descriptive research or experimental research.

Descriptive research is typically used when it would be difficult or even impossible to manipulate the variables in question. Examples of descriptive research include case studies, naturalistic observation, and correlational studies. Phone surveys that are often used by marketers are one example of descriptive research.

Correlational studies are quite common in psychology research. While they do not allow researchers to determine cause-and-effect, they do make it possible to spot relationships between different variables and to measure the strength of those relationships. 
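
As a brief, hypothetical illustration of measuring the strength and direction of a relationship, the sketch below runs a Pearson correlation on invented data for the sleep-and-test-scores question mentioned earlier. SciPy is assumed to be installed, and nothing here comes from the original article; the point is only that the statistic quantifies association, not causation.

```python
# Illustrative only: invented data for the sleep-and-test-scores question.
# pearsonr quantifies the strength and direction of a linear association.
from scipy import stats

hours_of_sleep = [5.0, 6.5, 7.0, 8.0, 6.0, 7.5, 5.5, 8.5, 7.0, 6.0]
test_scores = [62, 70, 75, 84, 68, 80, 65, 88, 74, 69]

r, p_value = stats.pearsonr(hours_of_sleep, test_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")

# An r near +1 indicates a strong positive association in this sample, but a
# correlational design cannot rule out confounding variables or reverse causation.
```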

Experimental research is used to explore cause-and-effect relationships between two or more variables. This type of research involves systematically manipulating an independent variable and then measuring the effect that it has on a defined dependent variable.

One of the major advantages of this method is that it allows researchers to determine whether changes in one variable actually cause changes in another.

While psychology experiments are often quite complex, a simple experiment is fairly basic but does allow researchers to determine cause-and-effect relationships between variables. Most simple experiments use a control group (those who do not receive the treatment) and an experimental group (those who do receive the treatment).

Step 4. Examine the Results and Draw Conclusions

Once a researcher has designed the study and collected the data, it is time to examine this information and draw conclusions about what has been found. Using statistics, researchers can summarize the data, analyze the results, and draw conclusions based on this evidence.

So how does a researcher decide what the results of a study mean? Not only can statistical analysis support (or refute) the researcher’s hypothesis; it can also be used to determine if the findings are statistically significant.

When results are said to be statistically significant, it means that it is unlikely that these results are due to chance.
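
To make "statistically significant" concrete, here is a minimal, hypothetical sketch comparing an experimental group with a control group using an independent-samples t-test. The scores are invented, SciPy is assumed to be installed, and the 0.05 cutoff is the common convention rather than anything prescribed by this article.

```python
# Invented improvement scores for an experimental group (received the treatment)
# and a control group (did not); used only to illustrate significance testing.
from scipy import stats

experimental = [14, 11, 9, 12, 13, 10, 15, 12, 11, 13]
control = [8, 9, 7, 10, 6, 9, 8, 7, 10, 8]

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

alpha = 0.05  # the conventional cutoff for statistical significance
if p_value < alpha:
    print("Statistically significant: a difference this large is unlikely to be due to chance alone.")
else:
    print("Not statistically significant: the data do not justify rejecting the null hypothesis.")
```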

Based on these observations, researchers must then determine what the results mean. In some cases, an experiment will support a hypothesis, but in other cases, it will fail to support the hypothesis.

So what happens if the results of a psychology experiment do not support the researcher's hypothesis? Does this mean that the study was worthless?

Just because the findings fail to support the hypothesis does not mean that the research is not useful or informative. In fact, such research plays an important role in helping scientists develop new questions and hypotheses to explore in the future.

After conclusions have been drawn, the next step is to share the results with the rest of the scientific community. This is an important part of the process because it contributes to the overall knowledge base and can help other scientists find new research avenues to explore.

Step 5. Report the Results

The final step in a psychology study is to report the findings. This is often done by writing up a description of the study and publishing the article in an academic or professional journal. The results of psychological studies can be seen in peer-reviewed journals such as Psychological Bulletin, the Journal of Social Psychology, Developmental Psychology, and many others.

The structure of a journal article follows a specified format that has been outlined by the American Psychological Association (APA). In these articles, researchers:

  • Provide a brief history and background on previous research
  • Present their hypothesis
  • Identify who participated in the study and how they were selected
  • Provide operational definitions for each variable
  • Describe the measures and procedures that were used to collect data
  • Explain how the information collected was analyzed
  • Discuss what the results mean

Why is such a detailed record of a psychological study so important? By clearly explaining the steps and procedures used throughout the study, other researchers can then replicate the results. The editorial process employed by academic and professional journals ensures that each article that is submitted undergoes a thorough peer review, which helps ensure that the study is scientifically sound.

Once published, the study becomes another piece of the existing puzzle of our knowledge base on that topic.

Before you begin exploring the scientific method steps, here's a review of some key terms and definitions that you should be familiar with:

  • Falsifiable: The variables can be measured so that if a hypothesis is false, it can be proven false
  • Hypothesis: An educated guess about the possible relationship between two or more variables
  • Variable: A factor or element that can change in observable and measurable ways
  • Operational definition: A full description of exactly how variables are defined, how they will be manipulated, and how they will be measured

Uses for the Scientific Method

The goals of psychological studies are to describe, explain, predict, and perhaps influence mental processes or behaviors. In order to do this, psychologists utilize the scientific method to conduct psychological research. The scientific method is a set of principles and procedures that are used by researchers to develop questions, collect data, and reach conclusions.

Goals of Scientific Research in Psychology

Researchers seek not only to describe behaviors and explain why these behaviors occur; they also strive to create research that can be used to predict and even change human behavior.

Psychologists and other social scientists regularly propose explanations for human behavior. On a more informal level, people make judgments about the intentions, motivations, and actions of others on a daily basis.

While the everyday judgments we make about human behavior are subjective and anecdotal, researchers use the scientific method to study psychology in an objective and systematic way. The results of these studies are often reported in popular media, which leads many to wonder just how or why researchers arrived at the conclusions they did.

Examples of the Scientific Method

Now that you're familiar with the scientific method steps, it's useful to see how each step could work with a real-life example.

Say, for instance, that researchers set out to discover what the relationship is between psychotherapy and anxiety.

  • Step 1. Make an observation: The researchers choose to focus their study on adults ages 25 to 40 with generalized anxiety disorder.
  • Step 2. Ask a question: The question they want to answer in their study is: Do weekly psychotherapy sessions reduce symptoms in adults ages 25 to 40 with generalized anxiety disorder?
  • Step 3. Test your hypothesis: Researchers collect data on participants' anxiety symptoms. They work with therapists to create a consistent program that all participants undergo. Group 1 may attend therapy once per week, whereas group 2 does not attend therapy.
  • Step 4. Examine the results: Participants record their symptoms and any changes over a period of three months. After this period, people in group 1 report significant improvements in their anxiety symptoms, whereas those in group 2 report no significant changes.
  • Step 5. Report the results: Researchers write a report that includes their hypothesis, information on participants, variables, procedure, and conclusions drawn from the study. In this case, they say that "Weekly therapy sessions are shown to reduce anxiety symptoms in adults ages 25 to 40."

Of course, there are many details that go into planning and executing a study such as this. But this general outline gives you an idea of how an idea is formulated and tested, and how researchers arrive at results using the scientific method.

Revolutionizing the Study of Mental Disorders

March 27, 2024 • Feature Story • 75th Anniversary

At a Glance:

  • The Research Domain Criteria framework (RDoC) was created in 2010 by the National Institute of Mental Health.
  • The framework encourages researchers to examine functional processes that are implemented by the brain on a continuum from normal to abnormal.
  • This way of researching mental disorders can help overcome inherent limitations in using all-or-nothing diagnostic systems for research.
  • Researchers worldwide have taken up the principles of RDoC.
  • The framework continues to evolve and update as new information becomes available.

President George H. W. Bush proclaimed the 1990s “The Decade of the Brain,” urging the National Institutes of Health, the National Institute of Mental Health (NIMH), and others to raise awareness about the benefits of brain research.

“Over the years, our understanding of the brain—how it works, what goes wrong when it is injured or diseased—has increased dramatically. However, we still have much more to learn,” read the president’s proclamation. “The need for continued study of the brain is compelling: millions of Americans are affected each year by disorders of the brain…Today, these individuals and their families are justifiably hopeful, for a new era of discovery is dawning in brain research.”

Still, despite the explosion of new techniques and tools for studying the brain, such as functional magnetic resonance imaging (fMRI), many mental health researchers were growing frustrated that their field was not progressing as quickly as they had hoped.

For decades, researchers have studied mental disorders using diagnoses based on the Diagnostic and Statistical Manual of Mental Disorders (DSM)—a handbook that lists the symptoms of mental disorders and the criteria for diagnosing a person with a disorder. But, among many researchers, suspicion was growing that the system used to diagnose mental disorders may not be the best way to study them.

“There are many benefits to using the DSM in medical settings—it provides reliability and ease of diagnosis. It also provides a clear-cut diagnosis for patients, which can be necessary to request insurance-based coverage of healthcare or job- or school-based accommodations,” said Bruce Cuthbert, Ph.D., who headed the workgroup that developed NIMH’s Research Domain Criteria Initiative. “However, when used in research, this approach is not always ideal.”

Researchers would often test people with a specific diagnosed DSM disorder against those with a different disorder or with no disorder and see how the groups differed. However, different mental disorders can have similar symptoms, and people can be diagnosed with several different disorders simultaneously. In addition, a diagnosis using the DSM is all or none—patients either qualify for the disorder based on their number of symptoms, or they don’t. This black-and-white approach means there may be people who experience symptoms of a mental disorder but just miss the cutoff for diagnosis.

Dr. Cuthbert, who is now the senior member of the RDoC Unit, which orchestrates RDoC work, stated that “Diagnostic systems are based on clinical signs and symptoms, but signs and symptoms can’t really tell us much about what is going on in the brain or the underlying causes of a disorder. With modern neuroscience, we were seeing that information on genetic, pathophysiological, and psychological causes of mental disorders did not line up well with the current diagnostic disorder categories, suggesting that there were central processes that relate to mental disorders that were not being reflected in DSM-based research.”

Road to evolution

Concerned about the limits of using the DSM for research, Dr. Cuthbert, a professor of clinical psychology at the University of Minnesota at the time, approached Dr. Thomas Insel (then NIMH director) during a conference in the autumn of 2008. Dr. Cuthbert recalled saying, “I think it’s really important that we start looking at dimensions of functions related to mental disorders such as fear, working memory, and reward systems because we know that these dimensions cut across various disorders. I think NIMH really needs to think about mental disorders in this new way.”

Dr. Cuthbert didn’t know it then, but he was suggesting something similar to ideas that NIMH was considering. Just months earlier, Dr. Insel had spearheaded the inclusion of a goal in NIMH’s 2008 Strategic Plan for Research to “develop, for research purposes, new ways of classifying mental disorders based on dimensions of observable behavior and neurobiological measures.”

Unaware of the new strategic goal, Dr. Cuthbert was surprised when Dr. Insel's senior advisor, Marlene Guzman, called a few weeks later to ask if he’d be interested in taking a sabbatical to help lead this new effort. Dr. Cuthbert soon transitioned into a full-time NIMH employee, joining the Institute at an exciting time to lead the development of what became known as the Research Domain Criteria (RDoC) Framework. The effort began in 2009 with the creation of an internal working group of interdisciplinary NIMH staff who identified core functional areas that could be used as examples of what research using this new conceptual framework looked like.

The workgroup members conceived a bold change in how investigators studied mental disorders.

“We wanted researchers to transition from looking at mental disorders as all or none diagnoses based on groups of symptoms. Instead, we wanted to encourage researchers to understand how basic core functions of the brain—like fear processing and reward processing—work at a biological and behavioral level and how these core functions contribute to mental disorders,” said Dr. Cuthbert.

This approach would incorporate biological and behavioral measures of mental disorders and examine processes that cut across and apply to all mental disorders. From Dr. Cuthbert’s standpoint, this could help remedy some of the frustrations mental health researchers were experiencing.

Around the same time the workgroup was sharing its plans and organizing the first steps, Sarah Morris, Ph.D., was a researcher focusing on schizophrenia at the University of Maryland School of Medicine in Baltimore. When she first read these papers, she wondered what this new approach would mean for her research, her grants, and her lab.

She also remembered feeling that this new approach reflected what she was seeing in her data.

“When I grouped my participants by those with and without schizophrenia, there was a lot of overlap, and there was a lot of variability across the board, and so it felt like RDoC provided the pathway forward to dissect that and sort it out,” said Dr. Morris.

Later that year, Dr. Morris joined NIMH and the RDoC workgroup, saying, “I was bumping up against a wall every day in my own work and in the data in front of me. And the idea that someone would give the field permission to try something new—that was super exciting.”

The five original RDoC domains of functioning were introduced to the broader scientific community in a series of articles published in 2010.

To establish the new framework, the RDoC workgroup (including Drs. Cuthbert and Morris) began a series of workshops in 2011 to collect feedback from experts in various areas from the larger scientific community. Five workshops were held over the next two years, each with a different broad domain of functioning based upon prior basic behavioral neuroscience. The five domains were called:

  • Negative valence (which included processes related to things like fear, threat, and loss)
  • Positive valence (which included processes related to working for rewards and appreciating rewards)
  • Cognitive processes
  • Social processes
  • Arousal and regulation processes (including arousal systems for the body and sleep).

At each workshop, experts defined several specific functions, termed constructs, that fell within the domain of interest. For instance, constructs in the cognitive processes domain included attention, memory, cognitive control, and others.

The result of these feedback sessions was a framework that described mental disorders as the interaction between different functional processes—processes that could occur on a continuum from normal to abnormal. Researchers could measure these functional processes in a variety of complementary ways—for example, by looking at genes associated with these processes, the brain circuits that implement these processes, tests or observations of behaviors that represent these functional processes, and what patients report about their concerns. Also included in the framework was an understanding that functional processes associated with mental disorders are impacted and altered by the environment and a person’s developmental stage.
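
One way to picture the framework described above is as a small matrix of domains, example constructs, and units of analysis. The Python sketch below is schematic and unofficial: the data structure is an assumption made for illustration, and the construct names are examples drawn from the description above rather than NIMH's complete RDoC matrix.

```python
# Schematic, unofficial sketch of the RDoC idea: domains contain constructs, and
# each construct can be measured with complementary units of analysis.
rdoc_domains = {
    "Negative valence": ["fear", "threat", "loss"],
    "Positive valence": ["working for rewards", "appreciating rewards"],
    "Cognitive processes": ["attention", "memory", "cognitive control"],
    "Social processes": ["social communication"],            # illustrative placeholder
    "Arousal and regulation processes": ["arousal", "sleep"],
    "Sensorimotor processes": ["motor actions"],              # domain added in 2018
}

# Constructs are studied on a continuum from normal to abnormal rather than by
# all-or-none diagnostic category, using several complementary units of analysis.
units_of_analysis = ["genes", "circuits", "behavior", "self-report"]

for domain, constructs in rdoc_domains.items():
    for construct in constructs:
        print(f"{domain} / {construct}: assessed via {', '.join(units_of_analysis)}")
```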

Preserving momentum

[Figure: The RDoC Framework, depicted as four overlapping circles labeled Lifespan, Domains, Units of Analysis, and Environment.]

Over time, the Framework continued evolving and adapting to the changing science. In 2018, a sixth functional area called sensorimotor processes was added to the Framework, and in 2019, a workshop was held to better incorporate developmental and environmental processes into the framework.

Since its creation, the use of RDoC principles in mental health research has spread across the U.S. and the rest of the world. For example, the Psychiatric Ratings using Intermediate Stratified Markers project (PRISM), which receives funding from the European Union’s Innovative Medicines Initiative, is seeking to link biological markers of social withdrawal with clinical diagnoses using RDoC-style principles. Similarly, the Roadmap for Mental Health Research in Europe (ROAMER) project by the European Commission sought to integrate mental health research across Europe using principles similar to those in the RDoC Framework.

Dr. Morris, who is now Head of the RDoC Unit, commented: “The fact that investigators and science funders outside the United States are also pursuing similar approaches gives me confidence that we’ve been on the right pathway. I just think that this has got to be how nature works and that we are in better alignment with the basic fundamental processes that are of interest to understanding mental disorders.”

The RDoC framework will continue to adapt and change with emerging science to remain relevant as a resource for researchers now and in the future. For instance, NIMH continues to work toward the development and optimization of tools to assess RDoC constructs and supports data-driven efforts to measure function within and across domains.

“For the millions of people impacted by mental disorders, research means hope. The RDoC framework helps us study mental disorders in a different way and has already driven considerable change in the field over the past decade,” said Joshua A. Gordon, M.D., Ph.D., director of NIMH. “We hope this and other innovative approaches will continue to accelerate research progress, paving the way for prevention, recovery, and cure.”

Publications

Cuthbert, B. N., & Insel, T. R. (2013). Toward the future of psychiatric diagnosis: The seven pillars of RDoC. BMC Medicine, 11, 126. https://doi.org/10.1186/1741-7015-11-126

Cuthbert, B. N. (2014). Translating intermediate phenotypes to psychopathology: The NIMH Research Domain Criteria. Psychophysiology, 51(12), 1205–1206. https://doi.org/10.1111/psyp.12342

Cuthbert, B., & Insel, T. (2010). The data of diagnosis: New approaches to psychiatric classification. Psychiatry, 73(4), 311–314. https://doi.org/10.1521/psyc.2010.73.4.311

Cuthbert, B. N., & Kozak, M. J. (2013). Constructing constructs for psychopathology: The NIMH research domain criteria. Journal of Abnormal Psychology, 122(3), 928–937. https://doi.org/10.1037/a0034028

Garvey, M. A., & Cuthbert, B. N. (2017). Developing a motor systems domain for the NIMH RDoC program. Schizophrenia Bulletin, 43(5), 935–936. https://doi.org/10.1093/schbul/sbx095

Insel, T. (2013). Transforming diagnosis. http://www.nimh.nih.gov/about/director/2013/transforming-diagnosis.shtml

Kozak, M. J., & Cuthbert, B. N. (2016). The NIMH Research Domain Criteria initiative: Background, issues, and pragmatics. Psychophysiology, 53(3), 286–297. https://doi.org/10.1111/psyp.12518

Morris, S. E., & Cuthbert, B. N. (2012). Research Domain Criteria: Cognitive systems, neural circuits, and dimensions of behavior. Dialogues in Clinical Neuroscience, 14(1), 29–37. https://doi.org/10.31887/DCNS.2012.14.1/smorris

Sanislow, C. A., Pine, D. S., Quinn, K. J., Kozak, M. J., Garvey, M. A., Heinssen, R. K., Wang, P. S., & Cuthbert, B. N. (2010). Developing constructs for psychopathology research: Research domain criteria. Journal of Abnormal Psychology, 119(4), 631–639. https://doi.org/10.1037/a0020909

  • Presidential Proclamation 6158 (The Decade of the Brain) 
  • Research Domain Criteria Initiative website
  • Psychiatric Ratings using Intermediate Stratified Markers (PRISM)  
  • Roadmap for Mental Health Research in Europe (ROAMER)  

Scientific Consensus

It’s important to remember that scientists always focus on the evidence, not on opinions. Scientific evidence continues to show that human activities (primarily the burning of fossil fuels) have warmed Earth’s surface and its ocean basins, which in turn have continued to impact Earth’s climate. This is based on over a century of scientific evidence forming the structural backbone of today's civilization.

NASA Global Climate Change presents the state of scientific knowledge about climate change while highlighting the role NASA plays in better understanding our home planet. This effort includes citing multiple peer-reviewed studies from research groups across the world, 1 illustrating the accuracy and consensus of research results (in this case, the scientific consensus on climate change) consistent with NASA’s scientific research portfolio.

With that said, multiple studies published in peer-reviewed scientific journals 1 show that climate-warming trends over the past century are extremely likely due to human activities. In addition, most of the leading scientific organizations worldwide have issued public statements endorsing this position. The following is a partial list of these organizations, along with links to their published statements and a selection of related resources.

American Scientific Societies

Statement on climate change from 18 scientific associations.

"Observations throughout the world make it clear that climate change is occurring, and rigorous scientific research demonstrates that the greenhouse gases emitted by human activities are the primary driver." (2009) 2

American Association for the Advancement of Science

"Based on well-established evidence, about 97% of climate scientists have concluded that human-caused climate change is happening." (2014) 3

American Chemical Society

"The Earth’s climate is changing in response to increasing concentrations of greenhouse gases (GHGs) and particulate matter in the atmosphere, largely as the result of human activities." (2016-2019) 4

American Geophysical Union

"Based on extensive scientific evidence, it is extremely likely that human activities, especially emissions of greenhouse gases, are the dominant cause of the observed warming since the mid-20th century. There is no alterative explanation supported by convincing evidence." (2019) 5

American Medical Association

"Our AMA ... supports the findings of the Intergovernmental Panel on Climate Change’s fourth assessment report and concurs with the scientific consensus that the Earth is undergoing adverse global climate change and that anthropogenic contributions are significant." (2019) 6

American Meteorological Society

"Research has found a human influence on the climate of the past several decades ... The IPCC (2013), USGCRP (2017), and USGCRP (2018) indicate that it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-twentieth century." (2019) 7

American Physical Society

"Earth's changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe. While natural sources of climate variability are significant, multiple lines of evidence indicate that human influences have had an increasingly dominant effect on global climate warming observed since the mid-twentieth century." (2015) 8

The Geological Society of America

"The Geological Society of America (GSA) concurs with assessments by the National Academies of Science (2005), the National Research Council (2011), the Intergovernmental Panel on Climate Change (IPCC, 2013) and the U.S. Global Change Research Program (Melillo et al., 2014) that global climate has warmed in response to increasing concentrations of carbon dioxide (CO2) and other greenhouse gases ... Human activities (mainly greenhouse-gas emissions) are the dominant cause of the rapid warming since the middle 1900s (IPCC, 2013)." (2015) 9

Science Academies

International Academies: Joint Statement

"Climate change is real. There will always be uncertainty in understanding a system as complex as the world’s climate. However there is now strong evidence that significant global warming is occurring. The evidence comes from direct measurements of rising surface air temperatures and subsurface ocean temperatures and from phenomena such as increases in average global sea levels, retreating glaciers, and changes to many physical and biological systems. It is likely that most of the warming in recent decades can be attributed to human activities (IPCC 2001)." (2005, 11 international science academies) 1 0

U.S. National Academy of Sciences

"Scientists have known for some time, from multiple lines of evidence, that humans are changing Earth’s climate, primarily through greenhouse gas emissions." 1 1

U.S. Government Agencies

U.S. Global Change Research Program

"Earth’s climate is now changing faster than at any point in the history of modern civilization, primarily as a result of human activities." (2018, 13 U.S. government departments and agencies) 12

Intergovernmental Bodies

Intergovernmental Panel on Climate Change

“It is unequivocal that the increase of CO2, methane, and nitrous oxide in the atmosphere over the industrial era is the result of human activities and that human influence is the principal driver of many changes observed across the atmosphere, ocean, cryosphere, and biosphere. Since systematic scientific assessments began in the 1970s, the influence of human activity on the warming of the climate system has evolved from theory to established fact.” 13-17

Other Resources

List of worldwide scientific organizations.

The following page lists the nearly 200 worldwide scientific organizations that hold the position that climate change has been caused by human action. http://www.opr.ca.gov/facts/list-of-scientific-organizations.html

U.S. Agencies

The following page contains information on what federal agencies are doing to adapt to climate change. https://www.c2es.org/site/assets/uploads/2012/02/climate-change-adaptation-what-federal-agencies-are-doing.pdf

Technically, a “consensus” is a general agreement of opinion, but the scientific method steers us away from this to an objective framework. In science, facts or observations are explained by a hypothesis (a statement of a possible explanation for some natural phenomenon), which can then be tested and retested until it is refuted (or disproved).

As scientists gather more observations, they will build off one explanation and add details to complete the picture. Eventually, a group of hypotheses might be integrated and generalized into a scientific theory, a scientifically acceptable general principle or body of principles offered to explain phenomena.

1. K. Myers, et al., "Consensus revisited: quantifying scientific agreement on climate change and climate expertise among Earth scientists 10 years later", Environmental Research Letters Vol. 16 No. 10, 104030 (20 October 2021); DOI:10.1088/1748-9326/ac2774
M. Lynas, et al., "Greater than 99% consensus on human caused climate change in the peer-reviewed scientific literature", Environmental Research Letters Vol. 16 No. 11, 114005 (19 October 2021); DOI:10.1088/1748-9326/ac2966
J. Cook et al., "Consensus on consensus: a synthesis of consensus estimates on human-caused global warming", Environmental Research Letters Vol. 11 No. 4 (13 April 2016); DOI:10.1088/1748-9326/11/4/048002
J. Cook et al., "Quantifying the consensus on anthropogenic global warming in the scientific literature", Environmental Research Letters Vol. 8 No. 2 (15 May 2013); DOI:10.1088/1748-9326/8/2/024024
W. R. L. Anderegg, "Expert Credibility in Climate Change", Proceedings of the National Academy of Sciences Vol. 107 No. 27, 12107–12109 (21 June 2010); DOI:10.1073/pnas.1003187107
P. T. Doran & M. K. Zimmerman, "Examining the Scientific Consensus on Climate Change", Eos Transactions American Geophysical Union Vol. 90 Issue 3 (2009), 22; DOI:10.1029/2009EO030002
N. Oreskes, "Beyond the Ivory Tower: The Scientific Consensus on Climate Change", Science Vol. 306 No. 5702, p. 1686 (3 December 2004); DOI:10.1126/science.1103618

2. Statement on climate change from 18 scientific associations (2009)

3. AAAS Board Statement on Climate Change (2014)

4. ACS Public Policy Statement: Climate Change (2016-2019)

5. Society Must Address the Growing Climate Crisis Now (2019)

6. Global Climate Change and Human Health (2019)

7. Climate Change: An Information Statement of the American Meteorological Society (2019)

8. American Physical Society (2021)

9. GSA Position Statement on Climate Change (2015)

10. Joint science academies' statement: Global response to climate change (2005)

11. Climate at the National Academies

12. Fourth National Climate Assessment: Volume II (2018)

13. IPCC Fifth Assessment Report, Summary for Policymakers, SPM 1.1 (2014)

14. IPCC Fifth Assessment Report, Summary for Policymakers, SPM 1 (2014)

15. IPCC Sixth Assessment Report, Working Group 1 (2021)

16. IPCC Sixth Assessment Report, Working Group 2 (2022)

17. IPCC Sixth Assessment Report, Working Group 3 (2022)

  • Share full article

Advertisement

Supported by

More Studies by Columbia Cancer Researchers Are Retracted

The studies, pulled because of copied data, illustrate the sluggishness of scientific publishers to address serious errors, experts said.

By Benjamin Mueller

Scientists in a prominent cancer lab at Columbia University have now had four studies retracted and a stern note added to a fifth accusing it of “severe abuse of the scientific publishing system,” the latest fallout from research misconduct allegations recently leveled against several leading cancer scientists.

A scientific sleuth in Britain last year uncovered discrepancies in data published by the Columbia lab, including the reuse of photos and other images across different papers. The New York Times reported last month that a medical journal in 2022 had quietly taken down a stomach cancer study by the researchers after an internal inquiry by the journal found ethics violations.

Despite that study’s removal, the researchers — Dr. Sam Yoon, chief of a cancer surgery division at Columbia University’s medical center, and Changhwan Yoon, a more junior biologist there — continued publishing studies with suspicious data. Since 2008, the two scientists have collaborated with other researchers on 26 articles that the sleuth, Sholto David, publicly flagged for misrepresenting experiments’ results.

One of those articles was retracted last month after The Times asked publishers about the allegations. In recent weeks, medical journals have retracted three additional studies, which described new strategies for treating cancers of the stomach, head and neck. Other labs had cited the articles in roughly 90 papers.

A major scientific publisher also appended a blunt note to the article that it had originally taken down without explanation in 2022. “This reuse (and in part, misrepresentation) of data without appropriate attribution represents a severe abuse of the scientific publishing system,” it said.

Still, those measures addressed only a small fraction of the lab’s suspect papers. Experts said the episode illustrated not only the extent of unreliable research by top labs, but also the tendency of scientific publishers to respond slowly, if at all, to significant problems once they are detected. As a result, other labs keep relying on questionable work as they pour federal research money into studies, allowing errors to accumulate in the scientific record.

“For every one paper that is retracted, there are probably 10 that should be,” said Dr. Ivan Oransky, co-founder of Retraction Watch, which keeps a database of 47,000-plus retracted studies. “Journals are not particularly interested in correcting the record.”

Columbia’s medical center declined to comment on allegations facing Dr. Yoon’s lab. It said the two scientists remained at Columbia and the hospital “is fully committed to upholding the highest standards of ethics and to rigorously maintaining the integrity of our research.”

The lab’s web page was recently taken offline. Columbia declined to say why. Neither Dr. Yoon nor Changhwan Yoon could be reached for comment. (They are not related.)

Memorial Sloan Kettering Cancer Center, where the scientists worked when much of the research was done, is investigating their work.

The Columbia scientists’ retractions come amid growing attention to the suspicious data that undergirds some medical research. Since late February, medical journals have retracted seven papers by scientists at Harvard’s Dana-Farber Cancer Institute. That followed investigations into data problems publicized by Dr. David, an independent molecular biologist who looks for irregularities in published images of cells, tumors and mice, sometimes with help from A.I. software.

The spate of misconduct allegations has drawn attention to the pressures on academic scientists — even those, like Dr. Yoon, who also work as doctors — to produce heaps of research.

Strong images of experiments’ results are often needed for those studies. Publishing them helps scientists win prestigious academic appointments and attract federal research grants that can pay dividends for themselves and their universities.

Dr. Yoon, a robotic surgery specialist noted for his treatment of stomach cancers, has helped bring in nearly $5 million in federal research money over his career.

The latest retractions from his lab included articles from 2020 and 2021 that Dr. David said contained glaring irregularities. Their results appeared to include identical images of tumor-stricken mice, despite those mice supposedly having been subjected to different experiments involving separate treatments and types of cancer cells.

The medical journal Cell Death & Disease retracted two of the latest studies, and Oncogene retracted the third. The journals found that the studies had also reused other images, like identical pictures of constellations of cancer cells.

The studies Dr. David flagged as containing image problems were largely overseen by the more senior Dr. Yoon. Changhwan Yoon, an associate research scientist who has worked alongside Dr. Yoon for a decade, was often a first author, which generally designates the scientist who ran the bulk of the experiments.

Kun Huang, a scientist in China who oversaw one of the recently retracted studies, a 2020 paper that did not include the more senior Dr. Yoon, attributed that study’s problematic sections to Changhwan Yoon. Dr. Huang, who made those comments this month on PubPeer, a website where scientists post about studies, did not respond to an email seeking comment.

But the more senior Dr. Yoon has long been made aware of problems in research he published alongside Changhwan Yoon: The two scientists were notified of the removal in January 2022 of their stomach cancer study that was found to have violated ethics guidelines.

Research misconduct is often pinned on the more junior researchers who conduct experiments. Other scientists, though, assign greater responsibility to the senior researchers who run labs and oversee studies, even as they juggle jobs as doctors or administrators.

“The research world’s coming to realize that with great power comes great responsibility and, in fact, you are responsible not just for what one of your direct reports in the lab has done, but for the environment you create,” Dr. Oransky said.

In their latest public retraction notices, medical journals said that they had lost faith in the results and conclusions. Imaging experts said some irregularities identified by Dr. David bore signs of deliberate manipulation, like flipped or rotated images, while others could have been sloppy copy-and-paste errors.

The little-noticed removal by a journal of the stomach cancer study in January 2022 highlighted some scientific publishers’ policy of not disclosing the reasons for withdrawing papers as long as they have not yet formally appeared in print. That study had appeared only online.

Roland Herzog, the editor of the journal Molecular Therapy, said that editors had drafted an explanation that they intended to publish at the time of the article’s removal. But Elsevier, the journal’s parent publisher, advised them that such a note was unnecessary, he said.

Only after the Times article last month did Elsevier agree to explain the article’s removal publicly with the stern note. In an editorial this week, the Molecular Therapy editors said that in the future, they would explain the removal of any articles that had been published only online.

But Elsevier said in a statement that it did not consider online articles “to be the final published articles of record.” As a result, company policy continues to advise that such articles be removed without an explanation when they are found to contain problems. The company said it allowed editors to provide additional information where needed.

Elsevier, which publishes nearly 3,000 journals and generates billions of dollars in annual revenue, has long been criticized for its opaque removals of online articles.

Articles by the Columbia scientists with data discrepancies that remain unaddressed were largely distributed by three major publishers: Elsevier, Springer Nature and the American Association for Cancer Research. Dr. David alerted many journals to the data discrepancies in October.

Each publisher said it was investigating the concerns. Springer Nature said investigations take time because they can involve consulting experts, waiting for author responses and analyzing raw data.

Dr. David has also raised concerns about studies published independently by scientists who collaborated with the Columbia researchers on some of their recently retracted papers. For example, Sandra Ryeom, an associate professor of surgical sciences at Columbia, published an article in 2003 while at Harvard that Dr. David said contained a duplicated image. As of 2021, she was married to the more senior Dr. Yoon, according to a mortgage document from that year.

A medical journal appended a formal notice to the article last week saying “appropriate editorial action will be taken” once data concerns had been resolved. Dr. Ryeom said in a statement that she was working with the paper’s senior author on “correcting the error.”

Columbia has sought to reinforce the importance of sound research practices. Hours after the Times article appeared last month, Dr. Michael Shelanski, the medical school’s senior vice dean for research, sent an email to faculty members titled “Research Fraud Accusations — How to Protect Yourself.” It warned that such allegations, whatever their merits, could take a toll on the university.

“In the months that it can take to investigate an allegation,” Dr. Shelanski wrote, “funding can be suspended, and donors can feel that their trust has been betrayed.”

Benjamin Mueller reports on health and medicine. He was previously a U.K. correspondent in London and a police reporter in New York. More about Benjamin Mueller

Scientists tend to inflate how ethical they are in doing their research.

We have known for a long time that people tend to paint a rosy picture of how good they are. Now we know that scientists are no exception, at least when it comes to conducting their own research. This is especially surprising since scientists are regularly thought to be objective.

This new discovery emerged from a massive survey of 11,050 scientific researchers in Sweden, conducted by Amanda M. Lindkvist, Lina Koppel, and Gustav Tinghög at Linköping University and published in the journal Scientific Reports. The survey was very simple, with only two questions. Here was the first:

Question One: In your role as a researcher, to what extent do you perceive yourself as following good research practices—compared to other researchers in your field?

Rather than allowing the survey participants to each define what ‘good research practice’ is, the researchers gave them these criteria:

(1) Tell the truth about one’s research.

(2) Consciously review and report the basic premises of one’s studies.

(3) Openly account for one’s methods and results.

(4) Openly account for one’s commercial interests and other associations.

(5) Not make unauthorized use of the research results of others.

(6) Keep one’s research organized, for example through documentation and filing.

(7) Strive to conduct one’s research without doing harm to people, animals or the environment.

(8) Be fair in one’s judgement of others’ research.

Note that many of these criteria have to do with honesty, but there are also ones on conscientiousness, non-malevolence, and fairness.

What were the results? Participants used a scale to rate themselves from 1 to 7, with 1 = Much less than other researchers, 4 = As much as other researchers, and 7 = Much more than other researchers. This is what the responses revealed:

44% rated themselves as more ethical in their research practices than other researchers in their field.

55% rated themselves as the same as their peers.

Not even 1% rated themselves as less ethical than their peers.

Of course, these results can’t reflect reality, since mathematically more than 1% of scientists have to be below average in this area of their lives.

The other question that Lindkvist and colleagues asked these scientific researchers was this:

Question Two: To what extent do you perceive researchers within your field as following good research practices—compared to researchers within other fields?

Here too the results were very skewed. 29% said their field followed good research practices to a greater extent than did scientists in other fields. Only 8% said it was the other way around.

These results should surprise us for a couple of reasons. One is that they go against the popular narrative of scientists as objective and neutral. When it comes to their own ethical behavior in conducting their research, they appear as a whole to be biased and overconfident. Another reason these results are surprising is that many scientists are likely aware of the existence of scientific research on how people in general tend to have an inflated view of their own virtue. So you’d expect that they would be on guard against such a tendency in their own case.

There are dangers that come with scientists having an overly positive view of their own research ethics. Lindkvist helpfully explains one of them: it “may lead researchers to underestimate the ethical implications of the decisions they make and to sometimes be blind to their own ethical failures. For example, researchers may downplay their own questionable practices but exaggerate those of other researchers, perhaps especially researchers outside their field.” Another danger that Lindkvist notes is a greater tendency to ignore warnings and ethical safeguards, if they are dismissed by a scientist as applying to others but not to her since she thinks she is above average.

It would be interesting in future work to see if similar patterns emerge with researchers in other countries besides Sweden. It would also be interesting to look at researchers anonymously rating the research ethics of their colleagues in their own departments and schools.

If these results hold up, it will be important to find ways to encourage scientific researchers to correct their inflated perceptions. As Lindkvist urges, “To restore science’s credibility, we need to create incentive structures, institutions, and communities that foster ethical humility and encourage us to be our most ethical selves in an academic system that otherwise incentivizes us to be bad.”

Christian B. Miller

Scientists gear up to study solar eclipse with high-altitude planes and sun-orbiting probes

For the millions of people across North America who will be treated to a total solar eclipse on April 8, it will be a spectacular show — a chance to see the moon fully obscure the sun’s face.

But for scientists, it is a rare opportunity to study Earth, the moon and the sun “in entirely different ways than we usually do,” said Pam Melroy, NASA’s deputy administrator.

One of the agency’s main priorities will be to observe the sun’s outer atmosphere, or the corona, which normally can’t be seen because the star is too bright. During a total solar eclipse, the corona comes into view as faint wisps around a glowing halo when the moon blocks light from the sun’s surface.

“Things are happening with the corona that we don’t fully understand, and the eclipse gives us a unique opportunity to collect data that may give insights into the future of our star,” Melroy said in a news briefing last week.

Scientists are interested in the corona because it plays a key role in transferring heat and energy into the solar wind, the constant stream of charged particles released from the sun’s outer atmosphere. The solar wind ebbs and flows, occasionally shooting high-powered solar flares into space. These can hit Earth with electromagnetic radiation, which can cause radio blackouts and knock out power grids.

Amir Caspi, a solar astrophysicist at the Southwest Research Institute in Boulder, Colorado, has an instrument installed in the nose of a WB-57 aircraft that will study the sun’s atmosphere as the plane chases the eclipse.

It’s a golden opportunity, he said, since even the special telescopes that can block out a star’s light, known as coronagraphs, have limitations.

“A total solar eclipse is like nature’s perfect coronagraph,” he said. “The moon comes between us and the sun, and it’s exactly the right size in the sky to block out the disc of the sun but not too much more.”

Caspi will focus on trying to understand the origin of the solar wind. He also hopes to gather clues about a long-standing mystery: why the corona is millions of degrees hotter than the surface of the sun.

He pioneered this method of imaging the sun’s corona in 2017, during the last total solar eclipse to cross the continental U.S.

“We didn’t know what we would get,” he said. “It was nail-biting for quite some time, and then we got amazing data. I could see it coming down off the live satellite feed.”

The WB-57 plane can fly at an altitude of 60,000 feet, well above any clouds and high enough that Earth’s atmosphere won’t interfere as much with the observations.

Many researchers plan to gather data about the sun’s atmosphere from other vantage points during the eclipse, including from space.

Several spacecraft, including NASA’s Parker Solar Probe, will have their eyes trained on the sun throughout the celestial event. The probe launched in 2018, so it wasn’t available to study the 2017 solar eclipse.

In 2021, the Parker probe became the first spacecraft to fly through the corona, and it has since flown more than a dozen close approaches to “touch” the sun. Due to the timing of its orbit, the probe will not be on a close encounter on April 8. But it will be near enough to the sun to measure and image solar wind as the charged particles stream by, according to Nour Raouafi, the Parker Solar Probe project scientist and an astrophysicist at the Johns Hopkins Applied Physics Laboratory.

Additionally, a spacecraft from the European Space Agency, known as Solar Orbiter, will be circling almost directly above the Parker Solar Probe at the time of the eclipse. Together, the observatories will tag-team to capture details of the sun’s atmosphere and the solar wind.

“It’s one of the rare occasions that these two spacecraft come so close together,” Raouafi said. “So, we will have a lot of synergies between them, in between all the observation we will do during the eclipse from Earth, which is something totally, totally unprecedented.”

The sun has been ramping up toward a peak in its roughly 11-year cycle of activity, expected in 2025. That means the Parker Solar Probe will have a front-row seat should any eruptions belch from the sun.

There are no guarantees that such outbursts will happen during the eclipse, but Raouafi said measurements of the solar wind from space will still be crucial to understanding the effects of the sun’s activity on Earth.

“These are the drivers of space weather, and the probe is probably the best tool we have out there, the best spacecraft mission we have out there, to help us understand that,” he said. “And the way to do it? Let’s hope for the sun to give us the biggest show it can produce.”

Even for nonscientists, the darkness that will temporarily take hold of afternoon skies along the so-called path of totality will be an extraordinary experience.

“I remember the first time that I learned that it’s kind of a very rare thing — that it just so happens that our moon is the right size and distance to cause this effect here on Earth,” Melroy said. “It’s really a miracle of our universe.”

Denise Chow is a reporter for NBC News Science focused on general science and climate change.

Chase Cain is a national climate reporter for NBC News.

  • Open access
  • Published: 26 March 2024

Predicting and improving complex beer flavor through machine learning

Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni, Lloyd Cool, Beatriz Herrera-Malaver, Christophe Vanderaa, Florian A. Theßeling, Łukasz Kreft, Alexander Botzki, Philippe Malcorps, Luk Daenen, Tom Wenseleers & Kevin J. Verstrepen

Nature Communications, Volume 15, Article number: 2368 (2024)

  • Chemical engineering
  • Gas chromatography
  • Machine learning
  • Metabolomics
  • Taste receptors

The perception and appreciation of food flavor depends on many interacting chemical compounds and external factors, and therefore proves challenging to understand and predict. Here, we combine extensive chemical and sensory analyses of 250 different beers to train machine learning models that allow predicting flavor and consumer appreciation. For each beer, we measure over 200 chemical properties, perform quantitative descriptive sensory analysis with a trained tasting panel and map data from over 180,000 consumer reviews to train 10 different machine learning models. The best-performing algorithm, Gradient Boosting, yields models that significantly outperform predictions based on conventional statistics and accurately predict complex food features and consumer appreciation from chemical profiles. Model dissection allows identifying specific and unexpected compounds as drivers of beer flavor and appreciation. Adding these compounds results in variants of commercial alcoholic and non-alcoholic beers with improved consumer appreciation. Together, our study reveals how big data and machine learning uncover complex links between food chemistry, flavor and consumer perception, and lays the foundation to develop novel, tailored foods with superior flavors.

Introduction

Predicting and understanding food perception and appreciation is one of the major challenges in food science. Accurate modeling of food flavor and appreciation could yield important opportunities for both producers and consumers, including quality control, product fingerprinting, counterfeit detection, spoilage detection, and the development of new products and product combinations (food pairing) 1 , 2 , 3 , 4 , 5 , 6 . Accurate models for flavor and consumer appreciation would contribute greatly to our scientific understanding of how humans perceive and appreciate flavor. Moreover, accurate predictive models would also facilitate and standardize existing food assessment methods and could supplement or replace assessments by trained and consumer tasting panels, which are variable, expensive and time-consuming 7 , 8 , 9 . Lastly, apart from providing objective, quantitative, accurate and contextual information that can help producers, models can also guide consumers in understanding their personal preferences 10 .

Despite the myriad of applications, predicting food flavor and appreciation from its chemical properties remains a largely elusive goal in sensory science, especially for complex food and beverages 11 , 12 . A key obstacle is the immense number of flavor-active chemicals underlying food flavor. Flavor compounds can vary widely in chemical structure and concentration, making them technically challenging and labor-intensive to quantify, even in the face of innovations in metabolomics, such as non-targeted metabolic fingerprinting 13 , 14 . Moreover, sensory analysis is perhaps even more complicated. Flavor perception is highly complex, resulting from hundreds of different molecules interacting at the physiochemical and sensorial level. Sensory perception is often non-linear, characterized by complex and concentration-dependent synergistic and antagonistic effects 15 , 16 , 17 , 18 , 19 , 20 , 21 that are further convoluted by the genetics, environment, culture and psychology of consumers 22 , 23 , 24 . Perceived flavor is therefore difficult to measure, with problems of sensitivity, accuracy, and reproducibility that can only be resolved by gathering sufficiently large datasets 25 . Trained tasting panels are considered the prime source of quality sensory data, but require meticulous training, are low throughput and high cost. Public databases containing consumer reviews of food products could provide a valuable alternative, especially for studying appreciation scores, which do not require formal training 25 . Public databases offer the advantage of amassing large amounts of data, increasing the statistical power to identify potential drivers of appreciation. However, public datasets suffer from biases, including a bias in the volunteers that contribute to the database, as well as confounding factors such as price, cult status and psychological conformity towards previous ratings of the product.

Classical multivariate statistics and machine learning methods have been used to predict flavor of specific compounds by, for example, linking structural properties of a compound to its potential biological activities or linking concentrations of specific compounds to sensory profiles 1 , 26 . Importantly, most previous studies focused on predicting organoleptic properties of single compounds (often based on their chemical structure) 27 , 28 , 29 , 30 , 31 , 32 , 33 , thus ignoring the fact that these compounds are present in a complex matrix in food or beverages and excluding complex interactions between compounds. Moreover, the classical statistics commonly used in sensory science 34 , 35 , 36 , 37 , 38 , 39 require a large sample size and sufficient variance amongst predictors to create accurate models. They are not fit for studying an extensive set of hundreds of interacting flavor compounds, since they are sensitive to outliers, have a high tendency to overfit and are less suited for non-linear and discontinuous relationships 40 .

In this study, we combine extensive chemical analyses and sensory data of a set of different commercial beers with machine learning approaches to develop models that predict taste, smell, mouthfeel and appreciation from compound concentrations. Beer is particularly suited to model the relationship between chemistry, flavor and appreciation. First, beer is a complex product, consisting of thousands of flavor compounds that partake in complex sensory interactions 41 , 42 , 43 . This chemical diversity arises from the raw materials (malt, yeast, hops, water and spices) and biochemical conversions during the brewing process (kilning, mashing, boiling, fermentation, maturation and aging) 44 , 45 . Second, the advent of the internet saw beer consumers embrace online review platforms, such as RateBeer (ZX Ventures, Anheuser-Busch InBev SA/NV) and BeerAdvocate (Next Glass, inc.). In this way, the beer community provides massive data sets of beer flavor and appreciation scores, creating extraordinarily large sensory databases to complement the analyses of our professional sensory panel. Specifically, we characterize over 200 chemical properties of 250 commercial beers, spread across 22 beer styles, and link these to the descriptive sensory profiling data of a 16-person in-house trained tasting panel and data acquired from over 180,000 public consumer reviews. These unique and extensive datasets enable us to train a suite of machine learning models to predict flavor and appreciation from a beer’s chemical profile. Dissection of the best-performing models allows us to pinpoint specific compounds as potential drivers of beer flavor and appreciation. Follow-up experiments confirm the importance of these compounds and ultimately allow us to significantly improve the flavor and appreciation of selected commercial beers. Together, our study represents a significant step towards understanding complex flavors and reinforces the value of machine learning to develop and refine complex foods. In this way, it represents a stepping stone for further computer-aided food engineering applications 46 .

To generate a comprehensive dataset on beer flavor, we selected 250 commercial Belgian beers across 22 different beer styles (Supplementary Fig.  S1 ). Beers with ≤ 4.2% alcohol by volume (ABV) were classified as non-alcoholic and low-alcoholic. Blonds and Tripels constitute a significant portion of the dataset (12.4% and 11.2%, respectively) reflecting their presence on the Belgian beer market and the heterogeneity of beers within these styles. By contrast, lager beers are less diverse and dominated by a handful of brands. Rare styles such as Brut or Faro make up only a small fraction of the dataset (2% and 1%, respectively) because fewer of these beers are produced and because they are dominated by distinct characteristics in terms of flavor and chemical composition.

Extensive analysis identifies relationships between chemical compounds in beer

For each beer, we measured 226 different chemical properties, including common brewing parameters such as alcohol content, iso-alpha acids, pH, sugar concentration 47 , and over 200 flavor compounds (Methods, Supplementary Table  S1 ). A large portion (37.2%) are terpenoids arising from hopping, responsible for herbal and fruity flavors 16 , 48 . A second major category are yeast metabolites, such as esters and alcohols, that result in fruity and solvent notes 48 , 49 , 50 . Other measured compounds are primarily derived from malt, or other microbes such as non- Saccharomyces yeasts and bacteria (‘wild flora’). Compounds that arise from spices or staling are labeled under ‘Others’. Five attributes (caloric value, total acids and total ester, hop aroma and sulfur compounds) are calculated from multiple individually measured compounds.

As a first step in identifying relationships between chemical properties, we determined correlations between the concentrations of the compounds (Fig.  1 , upper panel, Supplementary Data  1 and 2 , and Supplementary Fig.  S2 . For the sake of clarity, only a subset of the measured compounds is shown in Fig.  1 ). Compounds of the same origin typically show a positive correlation, while absence of correlation hints at parameters varying independently. For example, the hop aroma compounds citronellol, and alpha-terpineol show moderate correlations with each other (Spearman’s rho=0.39 and 0.57), but not with the bittering hop component iso-alpha acids (Spearman’s rho=0.16 and −0.07). This illustrates how brewers can independently modify hop aroma and bitterness by selecting hop varieties and dosage time. If hops are added early in the boiling phase, chemical conversions increase bitterness while aromas evaporate, conversely, late addition of hops preserves aroma but limits bitterness 51 . Similarly, hop-derived iso-alpha acids show a strong anti-correlation with lactic acid and acetic acid, likely reflecting growth inhibition of lactic acid and acetic acid bacteria, or the consequent use of fewer hops in sour beer styles, such as West Flanders ales and Fruit beers, that rely on these bacteria for their distinct flavors 52 . Finally, yeast-derived esters (ethyl acetate, ethyl decanoate, ethyl hexanoate, ethyl octanoate) and alcohols (ethanol, isoamyl alcohol, isobutanol, and glycerol), correlate with Spearman coefficients above 0.5, suggesting that these secondary metabolites are correlated with the yeast genetic background and/or fermentation parameters and may be difficult to influence individually, although the choice of yeast strain may offer some control 53 .
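
This kind of pairwise correlation analysis can be reproduced in outline with standard tools. The sketch below is illustrative only, not the authors' code; the file name and compound column names are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical input: one row per beer, one column per measured chemical property.
chem = pd.read_csv("beer_chemistry.csv", index_col=0)

# Pairwise Spearman rank correlations between a few example compounds
# (placeholder column names, not the study's exact identifiers).
subset = ["citronellol", "alpha_terpineol", "iso_alpha_acids",
          "ethyl_acetate", "ethanol", "glycerol"]
corr = chem[subset].corr(method="spearman")
print(corr.round(2))
```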

figure 1

Spearman rank correlations are shown. Descriptors are grouped according to their origin (malt (blue), hops (green), yeast (red), wild flora (yellow), Others (black)), and sensory aspect (aroma, taste, palate, and overall appreciation). Please note that for the chemical compounds, for the sake of clarity, only a subset of the total number of measured compounds is shown, with an emphasis on the key compounds for each source. For more details, see the main text and Methods section. Chemical data can be found in Supplementary Data  1 , correlations between all chemical compounds are depicted in Supplementary Fig.  S2 and correlation values can be found in Supplementary Data  2 . See Supplementary Data  4 for sensory panel assessments and Supplementary Data  5 for correlation values between all sensory descriptors.

Interestingly, different beer styles show distinct patterns for some flavor compounds (Supplementary Fig.  S3 ). These observations agree with expectations for key beer styles, and serve as a control for our measurements. For instance, Stouts generally show high values for color (darker), while hoppy beers contain elevated levels of iso-alpha acids, compounds associated with bitter hop taste. Acetic and lactic acid are not prevalent in most beers, with notable exceptions such as Kriek, Lambic, Faro, West Flanders ales and Flanders Old Brown, which use acid-producing bacteria ( Lactobacillus and Pediococcus ) or unconventional yeast ( Brettanomyces ) 54 , 55 . Glycerol, ethanol and esters show similar distributions across all beer styles, reflecting their common origin as products of yeast metabolism during fermentation 45 , 53 . Finally, low/no-alcohol beers contain low concentrations of glycerol and esters. This is in line with the production process for most of the low/no-alcohol beers in our dataset, which are produced through limiting fermentation or by stripping away alcohol via evaporation or dialysis, with both methods having the unintended side-effect of reducing the amount of flavor compounds in the final beer 56 , 57 .

Besides expected associations, our data also reveals less trivial associations between beer styles and specific parameters. For example, geraniol and citronellol, two monoterpenoids responsible for citrus, floral and rose flavors and characteristic of Citra hops, are found in relatively high amounts in Christmas, Saison, and Brett/co-fermented beers, where they may originate from terpenoid-rich spices such as coriander seeds instead of hops 58 .

Tasting panel assessments reveal sensorial relationships in beer

To assess the sensory profile of each beer, a trained tasting panel evaluated each of the 250 beers for 50 sensory attributes, including different hop, malt and yeast flavors, off-flavors and spices. Panelists used a tasting sheet (Supplementary Data  3 ) to score the different attributes. Panel consistency was evaluated by repeating 12 samples across different sessions and performing ANOVA. In 95% of cases no significant difference was found across sessions ( p  > 0.05), indicating good panel consistency (Supplementary Table  S2 ).
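
As a rough illustration of such a consistency check, the sketch below runs a one-way ANOVA per beer and attribute across repeated sessions. The long-format table and column names are assumptions, not the panel's actual analysis pipeline.

```python
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical long-format table: one row per panelist score,
# with columns beer, session, attribute, score.
ratings = pd.read_csv("repeated_tastings.csv")

for (beer, attribute), grp in ratings.groupby(["beer", "attribute"]):
    # Scores for this beer/attribute, split by tasting session
    # (each session is assumed to contain scores from several panelists).
    per_session = [g["score"].to_numpy() for _, g in grp.groupby("session")]
    if len(per_session) < 2:
        continue
    stat, p = f_oneway(*per_session)
    flag = "consistent" if p > 0.05 else "session effect"
    print(f"{beer} / {attribute}: F={stat:.2f}, p={p:.3f} ({flag})")
```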

Aroma and taste perception reported by the trained panel are often linked (Fig.  1 , bottom left panel and Supplementary Data  4 and 5 ), with high correlations between hops aroma and taste (Spearman’s rho=0.83). Bitter taste was found to correlate with hop aroma and taste in general (Spearman’s rho=0.80 and 0.69), and particularly with “grassy” noble hops (Spearman’s rho=0.75). Barnyard flavor, most often associated with sour beers, is identified together with stale hops (Spearman’s rho=0.97) that are used in these beers. Lactic and acetic acid, which often co-occur, are correlated (Spearman’s rho=0.66). Interestingly, sweetness and bitterness are anti-correlated (Spearman’s rho = −0.48), confirming the hypothesis that they mask each other 59 , 60 . Beer body is highly correlated with alcohol (Spearman’s rho = 0.79), and overall appreciation is found to correlate with multiple aspects that describe beer mouthfeel (alcohol, carbonation; Spearman’s rho= 0.32, 0.39), as well as with hop and ester aroma intensity (Spearman’s rho=0.39 and 0.35).

Similar to the chemical analyses, sensorial analyses confirmed typical features of specific beer styles (Supplementary Fig.  S4 ). For example, sour beers (Faro, Flanders Old Brown, Fruit beer, Kriek, Lambic, West Flanders ale) were rated acidic, with flavors of both acetic and lactic acid. Hoppy beers were found to be bitter and showed hop-associated aromas like citrus and tropical fruit. Malt taste is most detected among scotch, stout/porters, and strong ales, while low/no-alcohol beers, which often have a reputation for being ‘worty’ (reminiscent of unfermented, sweet malt extract) appear in the middle. Unsurprisingly, hop aromas are most strongly detected among hoppy beers. Like its chemical counterpart (Supplementary Fig.  S3 ), acidity shows a right-skewed distribution, with the most acidic beers being Krieks, Lambics, and West Flanders ales.

Tasting panel assessments of specific flavors correlate with chemical composition

We find that the concentrations of several chemical compounds strongly correlate with specific aroma or taste, as evaluated by the tasting panel (Fig.  2 , Supplementary Fig.  S5 , Supplementary Data  6 ). In some cases, these correlations confirm expectations and serve as a useful control for data quality. For example, iso-alpha acids, the bittering compounds in hops, strongly correlate with bitterness (Spearman’s rho=0.68), while ethanol and glycerol correlate with tasters’ perceptions of alcohol and body, the mouthfeel sensation of fullness (Spearman’s rho=0.82/0.62 and 0.72/0.57 respectively) and darker color from roasted malts is a good indication of malt perception (Spearman’s rho=0.54).

figure 2

Heatmap colors indicate Spearman’s Rho. Axes are organized according to sensory categories (aroma, taste, mouthfeel, overall), chemical categories and chemical sources in beer (malt (blue), hops (green), yeast (red), wild flora (yellow), Others (black)). See Supplementary Data  6 for all correlation values.

Interestingly, for some relationships between chemical compounds and perceived flavor, correlations are weaker than expected. For example, the rose-smelling phenethyl acetate only weakly correlates with floral aroma. This hints at more complex relationships and interactions between compounds and suggests a need for a more complex model than simple correlations. Lastly, we uncovered unexpected correlations. For instance, the esters ethyl decanoate and ethyl octanoate appear to correlate slightly with hop perception and bitterness, possibly due to their fruity flavor. Iron is anti-correlated with hop aromas and bitterness, most likely because it is also anti-correlated with iso-alpha acids. This could be a sign of metal chelation of hop acids 61 , given that our analyses measure unbound hop acids and total iron content, or could result from the higher iron content in dark and Fruit beers, which typically have less hoppy and bitter flavors 62 .

Public consumer reviews complement expert panel data

To complement and expand the sensory data of our trained tasting panel, we collected 180,000 reviews of our 250 beers from the online consumer review platform RateBeer. This provided numerical scores for beer appearance, aroma, taste, palate, overall quality as well as the average overall score.

Public datasets are known to suffer from biases, such as price, cult status and psychological conformity towards previous ratings of a product. For example, prices correlate with appreciation scores for these online consumer reviews (rho=0.49, Supplementary Fig.  S6 ), but not for our trained tasting panel (rho=0.19). This suggests that prices affect consumer appreciation, which has been reported in wine 63 , while blind tastings are unaffected. Moreover, we observe that some beer styles, like lagers and non-alcoholic beers, generally receive lower scores, reflecting that online reviewers are mostly beer aficionados with a preference for specialty beers over lager beers. In general, we find a modest correlation between our trained panel’s overall appreciation score and the online consumer appreciation scores (Fig.  3 , rho=0.29). Apart from the aforementioned biases in the online datasets, serving temperature, sample freshness and surroundings, which are all tightly controlled during the tasting panel sessions, can vary tremendously across online consumers and can further contribute to (among others, appreciation) differences between the two categories of tasters. Importantly, in contrast to the overall appreciation scores, for many sensory aspects the results from the professional panel correlated well with results obtained from RateBeer reviews. Correlations were highest for features that are relatively easy to recognize even for untrained tasters, like bitterness, sweetness, alcohol and malt aroma (Fig.  3 and below).

figure 3

RateBeer text mining results can be found in Supplementary Data  7 . Rho values shown are Spearman correlation values, with asterisks indicating significant correlations ( p  < 0.05, two-sided). All p values were smaller than 0.001, except for Esters aroma (0.0553), Esters taste (0.3275), Esters aroma—banana (0.0019), Coriander (0.0508) and Diacetyl (0.0134).

Besides collecting consumer appreciation from these online reviews, we developed automated text analysis tools to gather additional data from review texts (Supplementary Data  7 ). Processing review texts on the RateBeer database yielded comparable results to the scores given by the trained panel for many common sensory aspects, including acidity, bitterness, sweetness, alcohol, malt, and hop tastes (Fig.  3 ). This is in line with what would be expected, since these attributes require less training for accurate assessment and are less influenced by environmental factors such as temperature, serving glass and odors in the environment. Consumer reviews also correlate well with our trained panel for 4-vinyl guaiacol, a compound associated with a very characteristic aroma. By contrast, correlations for more specific aromas like ester, coriander or diacetyl are underrepresented in the online reviews, underscoring the importance of using a trained tasting panel and standardized tasting sheets with explicit factors to be scored for evaluating specific aspects of a beer. Taken together, our results suggest that public reviews are trustworthy for some, but not all, flavor features and can complement or substitute taste panel data for these sensory aspects.
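
To illustrate the general idea of mining sensory mentions from free-text reviews, here is a deliberately simple keyword-matching sketch. It is not the authors' text-analysis tool, and the file, column name, and tiny lexicon are assumptions.

```python
import re
import pandas as pd

# Hypothetical input: one row per consumer review, free text in a "text" column.
reviews = pd.read_csv("ratebeer_reviews.csv")

# Very small illustrative lexicon; a real attribute dictionary would be richer.
lexicon = {
    "bitter": ["bitter"],
    "sweet": ["sweet"],
    "malt": ["malt", "malty"],
    "hoppy": ["hop", "hoppy", "hops"],
    "sour": ["sour", "tart", "acidic"],
}

def mentions(text, terms):
    text = str(text).lower()
    return any(re.search(rf"\b{re.escape(t)}\w*", text) for t in terms)

# Fraction of all reviews mentioning each attribute.
for attr, terms in lexicon.items():
    hits = reviews["text"].apply(lambda t: mentions(t, terms))
    print(f"{attr}: mentioned in {hits.mean():.1%} of reviews")
```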

Models can predict beer sensory profiles from chemical data

The rich datasets of chemical analyses, tasting panel assessments and public reviews gathered in the first part of this study provided us with a unique opportunity to develop predictive models that link chemical data to sensorial features. Given the complexity of beer flavor, basic statistical tools such as correlations or linear regression may not always be the most suitable for making accurate predictions. Instead, we applied different machine learning models that can model both simple linear and complex interactive relationships. Specifically, we constructed a set of regression models to predict (a) trained panel scores for beer flavor and quality and (b) public reviews’ appreciation scores from beer chemical profiles. We trained and tested 10 different models (Methods), 3 linear regression-based models (simple linear regression with first-order interactions (LR), lasso regression with first-order interactions (Lasso), partial least squares regressor (PLSR)), 5 decision tree models (AdaBoost regressor (ABR), extra trees (ET), gradient boosting regressor (GBR), random forest (RF) and XGBoost regressor (XGBR)), 1 support vector regression (SVR), and 1 artificial neural network (ANN) model.

To compare the performance of our machine learning models, the dataset was randomly split into a training and test set, stratified by beer style. After a model was trained on data in the training set, its performance was evaluated on its ability to predict the test dataset obtained from multi-output models (based on the coefficient of determination, see Methods). Additionally, individual-attribute models were ranked per descriptor and the average rank was calculated, as proposed by Korneva et al. 64 . Importantly, both ways of evaluating the models’ performance agreed in general. Performance of the different models varied (Table  1 ). It should be noted that all models perform better at predicting RateBeer results than results from our trained tasting panel. One reason could be that sensory data is inherently variable, and this variability is averaged out with the large number of public reviews from RateBeer. Additionally, all tree-based models perform better at predicting taste than aroma. Linear models (LR) performed particularly poorly, with negative R 2 values, due to severe overfitting (training set R 2  = 1). Overfitting is a common issue in linear models with many parameters and limited samples, especially with interaction terms further amplifying the number of parameters. L1 regularization (Lasso) successfully overcomes this overfitting, out-competing multiple tree-based models on the RateBeer dataset. Similarly, the dimensionality reduction of PLSR avoids overfitting and improves performance, to some extent. Still, tree-based models (ABR, ET, GBR, RF and XGBR) show the best performance, out-competing the linear models (LR, Lasso, PLSR) commonly used in sensory science 65 .
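
A minimal sketch of this kind of benchmarking setup follows: a train/test split stratified by beer style and test-set coefficients of determination for a handful of regressors. File names, the single target, and default hyperparameters are assumptions; the authors' full model list, interaction terms and multi-output setup are not reproduced here.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              ExtraTreesRegressor)
from sklearn.metrics import r2_score

# Hypothetical inputs: chemical features, one sensory target, and beer styles.
X = pd.read_csv("chemical_features.csv")
y = pd.read_csv("targets.csv")["ratebeer_overall"]
styles = pd.read_csv("targets.csv")["style"]

# Train/test split stratified by beer style, as described in the text.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=styles, random_state=0)

models = {
    "Linear regression": LinearRegression(),
    "Lasso": LassoCV(),
    "Random forest": RandomForestRegressor(random_state=0),
    "Extra trees": ExtraTreesRegressor(random_state=0),
    "Gradient boosting": GradientBoostingRegressor(random_state=0),
}

# Fit each model on the training split and report its test-set R^2.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.2f}")
```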

GBR models showed the best overall performance in predicting sensory responses from chemical information, with R 2 values up to 0.75 depending on the predicted sensory feature (Supplementary Table  S4 ). The GBR models predict consumer appreciation (RateBeer) better than our trained panel’s appreciation (R 2 value of 0.67 compared to R 2 value of 0.09) (Supplementary Table  S3 and Supplementary Table  S4 ). ANN models showed intermediate performance, likely because neural networks typically perform best with larger datasets 66 . The SVR shows intermediate performance, mostly due to the weak predictions of specific attributes that lower the overall performance (Supplementary Table  S4 ).

Model dissection identifies specific, unexpected compounds as drivers of consumer appreciation

Next, we leveraged our models to infer important contributors to sensory perception and consumer appreciation. Consumer preference is a crucial sensory aspect, because a product that shows low consumer appreciation scores often does not succeed commercially 25 . Additionally, the requirement for a large number of representative evaluators makes consumer trials one of the more costly and time-consuming aspects of product development. Hence, a model for predicting chemical drivers of overall appreciation would be a welcome addition to the available toolbox for food development and optimization.

Since GBR models on our RateBeer dataset showed the best overall performance, we focused on these models. Specifically, we used two approaches to identify important contributors. First, rankings of the most important predictors for each sensorial trait in the GBR models were obtained based on impurity-based feature importance (mean decrease in impurity). High-ranked parameters were hypothesized to be either the true causal chemical properties underlying the trait, to correlate with the actual causal properties, or to take part in sensory interactions affecting the trait 67 (Fig.  4A ). In a second approach, we used SHAP 68 to determine which parameters contributed most to the model for making predictions of consumer appreciation (Fig.  4B ). SHAP calculates parameter contributions to model predictions on a per-sample basis, which can be aggregated into an importance score.
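
The two importance analyses described above can be sketched with scikit-learn's impurity-based feature importances and the shap package's TreeExplainer. This is a simplified, hypothetical reconstruction (placeholder files, default hyperparameters), not the authors' exact workflow.

```python
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical inputs (same placeholder files as the sketch above).
X = pd.read_csv("chemical_features.csv")
y = pd.read_csv("targets.csv")["ratebeer_overall"]

gbr = GradientBoostingRegressor(random_state=0).fit(X, y)

# 1) Impurity-based importance (mean decrease in impurity) from the fitted trees.
mdi = pd.Series(gbr.feature_importances_, index=X.columns).sort_values(ascending=False)
print(mdi.head(15))

# 2) SHAP values: per-sample contributions, summarized into a global ranking/plot.
explainer = shap.TreeExplainer(gbr)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, max_display=15)
```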

figure 4

A The impurity-based feature importance (mean decrease in impurity, MDI) calculated from the Gradient Boosting Regression (GBR) model predicting RateBeer appreciation scores. The top 15 highest ranked chemical properties are shown. B SHAP summary plot for the top 15 parameters contributing to our GBR model. Each point on the graph represents a sample from our dataset. The color represents the concentration of that parameter, with bluer colors representing low values and redder colors representing higher values. Greater absolute values on the horizontal axis indicate a higher impact of the parameter on the prediction of the model. C Spearman correlations between the 15 most important chemical properties and consumer overall appreciation. Numbers indicate the Spearman Rho correlation coefficient, and the rank of this correlation compared to all other correlations. The top 15 important compounds were determined using SHAP (panel B).

Both approaches identified ethyl acetate as the most predictive parameter for beer appreciation (Fig.  4 ). Ethyl acetate is the most abundant ester in beer with a typical ‘fruity’, ‘solvent’ and ‘alcoholic’ flavor, but is often considered less important than other esters like isoamyl acetate. The second most important parameter identified by SHAP is ethanol, the most abundant beer compound after water. Apart from directly contributing to beer flavor and mouthfeel, ethanol drastically influences the physical properties of beer, dictating how easily volatile compounds escape the beer matrix to contribute to beer aroma 69 . Importantly, it should also be noted that the importance of ethanol for appreciation is likely inflated by the very low appreciation scores of non-alcoholic beers (Supplementary Fig.  S4 ). Despite not often being considered a driver of beer appreciation, protein level also ranks highly in both approaches, possibly due to its effect on mouthfeel and body 70 . Lactic acid, which contributes to the tart taste of sour beers, is the fourth most important parameter identified by SHAP, possibly due to the generally high appreciation of sour beers in our dataset.

Interestingly, some of the most important predictive parameters for our model are not well-established as beer flavors or are even commonly regarded as being negative for beer quality. For example, our models identify methanethiol and ethyl phenyl acetate, an ester commonly linked to beer staling 71 , as key factors contributing to beer appreciation. Although there is no doubt that high concentrations of these compounds are considered unpleasant, the positive effects of modest concentrations are not yet known 72 , 73 .

To compare our approach to conventional statistics, we evaluated how well the 15 most important SHAP-derived parameters correlate with consumer appreciation (Fig.  4C ). Interestingly, only 6 of the properties derived by SHAP rank amongst the top 15 most correlated parameters. For some chemical compounds, the correlations are so low that they would have likely been considered unimportant. For example, lactic acid, the fourth most important parameter, shows a bimodal distribution for appreciation, with sour beers forming a separate cluster, that is missed entirely by the Spearman correlation. Additionally, the correlation plots reveal outliers, emphasizing the need for robust analysis tools. Together, this highlights the need for alternative models, like the Gradient Boosting model, that better grasp the complexity of (beer) flavor.

Finally, to observe the relationships between these chemical properties and their predicted targets, partial dependence plots were constructed for the six most important predictors of consumer appreciation 74 , 75 , 76 (Supplementary Fig.  S7 ). One-way partial dependence plots show how a change in concentration affects the predicted appreciation. These plots reveal an important limitation of our models: appreciation predictions remain constant at ever-increasing concentrations. This implies that once a threshold concentration is reached, further increasing the concentration does not affect appreciation. This is false, as it is well-documented that certain compounds become unpleasant at high concentrations, including ethyl acetate (‘nail polish’) 77 and methanethiol (‘sulfury’ and ‘rotten cabbage’) 78 . The inability of our models to grasp that flavor compounds have optimal levels, above which they become negative, is a consequence of working with commercial beer brands where (off-)flavors are rarely too high to negatively impact the product. The two-way partial dependence plots show how changing the concentration of two compounds influences predicted appreciation, visualizing their interactions (Supplementary Fig.  S7 ). In our case, the top 5 parameters are dominated by additive or synergistic interactions, with high concentrations for both compounds resulting in the highest predicted appreciation.
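
For reference, one- and two-way partial dependence plots of this kind can be generated with scikit-learn's PartialDependenceDisplay, as in the hedged sketch below; the feature names are placeholders standing in for the top predictors discussed above.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical inputs and placeholder feature names.
X = pd.read_csv("chemical_features.csv")
y = pd.read_csv("targets.csv")["ratebeer_overall"]

gbr = GradientBoostingRegressor(random_state=0).fit(X, y)

# One-way partial dependence for two predictors, plus their two-way interaction.
PartialDependenceDisplay.from_estimator(
    gbr, X, features=["ethyl_acetate", "ethanol", ("ethyl_acetate", "ethanol")])
plt.show()
```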

To assess the robustness of our best-performing models and model predictions, we performed 100 iterations of the GBR, RF and ET models. In general, all iterations of the models yielded similar performance (Supplementary Fig.  S8 ). Moreover, the main predictors (including the top predictors ethanol and ethyl acetate) remained virtually the same, especially for GBR and RF. For the iterations of the ET model, we did observe more variation in the top predictors, which is likely a consequence of the model’s inherent random architecture in combination with co-correlations between certain predictors. However, even in this case, several of the top predictors (ethanol and ethyl acetate) remain unchanged, although their rank in importance changes (Supplementary Fig.  S8 ).

Next, we investigated if a combination of RateBeer and trained panel data into one consolidated dataset would lead to stronger models, under the hypothesis that such a model would suffer less from bias in the datasets. A GBR model was trained to predict appreciation on the combined dataset. This model underperformed compared to the RateBeer model, both in the native case and when including a dataset identifier (R 2  = 0.67, 0.26 and 0.42 respectively). For the latter, the dataset identifier is the most important feature (Supplementary Fig.  S9 ), while most of the feature importance remains unchanged, with ethyl acetate and ethanol ranking highest, like in the original model trained only on RateBeer data. It seems that the large variation in the panel dataset introduces noise, weakening the models’ performances and reliability. In addition, it seems reasonable to assume that both datasets are fundamentally different, with the panel dataset obtained by blind tastings by a trained professional panel.

Lastly, we evaluated whether beer style identifiers would further enhance the model’s performance. A GBR model was trained with parameters that explicitly encoded the styles of the samples. This did not improve model performance (R2 = 0.66 with style information vs R2 = 0.67 without). The most important chemical features are consistent with the model trained without style information (e.g. ethanol and ethyl acetate), and with the exception of the most preferred (strong ale) and least preferred (low/no-alcohol) styles, none of the styles were among the most important features (Supplementary Fig. S9, Supplementary Table S5 and S6). This is likely due to a combination of style-specific chemical signatures, such as iso-alpha acids and lactic acid, that implicitly convey style information to the original models, as well as the low number of samples belonging to some styles, making it difficult for the model to learn style-specific patterns. Moreover, beer styles are not rigorously defined, with some styles overlapping in features and some beers being misattributed to a specific style, all of which leads to more noise in models that use style parameters.

Model validation

To test if our predictive models give insight into beer appreciation, we set up experiments aimed at improving existing commercial beers. We specifically selected overall appreciation as the trait to be examined because of its complexity and commercial relevance. Beer flavor comprises a complex bouquet rather than single aromas and tastes 53 . Hence, adding a single compound to the extent that a difference is noticeable may lead to an unbalanced, artificial flavor. Therefore, we evaluated the effect of combinations of compounds. Because Blond beers are the most extensively represented style in our dataset, we selected a beer from this style as the starting material for these experiments (Beer 64 in Supplementary Data 1).

In the first set of experiments, we adjusted the concentrations of the compounds that made up the most important predictors of overall appreciation (ethyl acetate, ethanol, lactic acid, ethyl phenyl acetate), together with correlated compounds (ethyl hexanoate, isoamyl acetate, glycerol), bringing them up to 95th-percentile ethanol-normalized concentrations (Methods) within the Blond group (‘Spiked’ concentration in Fig. 5A). Compared to controls, the spiked beers showed significantly improved overall appreciation among trained panelists, with panelists noting increased intensity of ester flavors, sweetness, alcohol, and body fullness (Fig. 5B). To disentangle the contribution of ethanol to these results, a second experiment was performed without the addition of ethanol. This resulted in a similar outcome, including increased perception of alcohol and overall appreciation.

Figure 5

Adding the top chemical compounds, identified as the best predictors of appreciation by our model, to poorly appreciated beers results in increased appreciation from our trained panel. Results of sensory tests between base beers and those spiked with compounds identified as the best predictors by the model. A Blond and Non/Low-alcohol (0.0% ABV) base beers were brought up to 95th-percentile ethanol-normalized concentrations within each style. B For each sensory attribute, tasters indicated the more intense sample and selected the sample they preferred. The numbers above the bars correspond to the p values indicating significant changes in perceived flavor (two-sided binomial test: alpha 0.05, n = 20 or 13).

In a last experiment, we tested whether using the model’s predictions can boost the appreciation of a non-alcoholic beer (beer 223 in Supplementary Data  1 ). Again, the addition of a mixture of predicted compounds (omitting ethanol, in this case) resulted in a significant increase in appreciation, body, ester flavor and sweetness.

Predicting flavor and consumer appreciation from chemical composition is one of the ultimate goals of sensory science. A reliable, systematic and unbiased way to link chemical profiles to flavor and food appreciation would be a significant asset to the food and beverage industry. Such tools would substantially aid quality control and recipe development, offer an efficient and cost-effective alternative to pilot studies and consumer trials, and ultimately allow food manufacturers to produce superior, tailor-made products that better meet the demands of specific consumer groups.

A limited number of studies have previously tried, with varying degrees of success, to predict beer flavor and beer popularity based on (a limited set of) chemical compounds and flavors 79 , 80 . Current sensitive, high-throughput technologies allow measuring an unprecedented number of chemical compounds and properties in a large set of samples, yielding a dataset that can train models to help close the gap between chemistry and flavor, even for a complex natural product like beer. To our knowledge, no previous research has gathered data at this scale (250 samples, 226 chemical parameters, 50 sensory attributes and 5 consumer scores) to disentangle and validate the chemical aspects driving beer preference using various machine-learning techniques. We find that modern machine learning models outperform conventional statistical tools, such as correlations and linear models, and can successfully predict flavor appreciation from chemical composition. This could be attributed to the natural incorporation of interactions and non-linear or discontinuous effects in machine learning models, which are not easily captured by linear model architectures. While linear models and partial least squares regression represent the most widespread statistical approaches in sensory science, in part because they allow interpretation 65 , 81 , 82 , modern machine learning methods allow building better predictive models while preserving the possibility to dissect and exploit the underlying patterns. Of the 10 different models we trained, tree-based models, such as our best-performing GBR, showed the best overall performance in predicting sensory responses from chemical information, outcompeting artificial neural networks. This agrees with previous reports for models trained on tabular data 83 . Our results are in line with the findings of Colantonio et al., who also identified the gradient boosting architecture as performing best at predicting appreciation and flavor (of tomatoes and blueberries, in their specific study) 26 . Importantly, besides our larger experimental scale, we were able to directly confirm our models' predictions in vivo.

Our study confirms that flavor compound concentration does not always correlate with perception, suggesting complex interactions that are often missed by more conventional statistics and simple models. Specifically, we find that tree-based algorithms may perform best in developing models that link complex food chemistry with aroma. Furthermore, we show that massive datasets of untrained consumer reviews provide a valuable source of data that can complement or even replace trained tasting panels, especially for appreciation and basic flavors, such as sweetness and bitterness. This holds despite biases that are known to occur in such datasets, such as price or conformity bias. Moreover, GBR models predict taste better than aroma. This is likely because taste (e.g., bitterness) often relates directly to the corresponding chemical measurements (e.g., iso-alpha acids), whereas such a link is less clear for aromas, which often result from the interplay between multiple volatile compounds. We also find that our models are best at predicting acidity and alcohol, likely because there is a direct relation between the measured chemical compounds (acids and ethanol) and the corresponding perceived sensorial attributes (acidity and alcohol), and because even untrained consumers are generally able to recognize these flavors and aromas.

The predictions of our final models, trained on review data, hold even for blind tastings with small groups of trained tasters, as demonstrated by our ability to validate specific compounds as drivers of beer flavor and appreciation. Since adding a single compound to the extent of a noticeable difference may result in an unbalanced flavor profile, we specifically tested our identified key drivers as a combination of compounds. While this approach does not allow us to validate if a particular single compound would affect flavor and/or appreciation, our experiments do show that this combination of compounds increases consumer appreciation.

It is important to stress that, while it represents an important step forward, our approach still has several major limitations. A key weakness of the GBR model architecture is that, amongst co-correlating variables, the variable with the largest main effect is consistently preferred for model building. As a result, co-correlating variables often have artificially low importance scores, both for impurity- and SHAP-based methods, as we observed in the comparison to the more randomized Extra Trees models. This implies that chemicals identified as key drivers of a specific sensory feature by GBR might not be the true causative compounds, but rather co-correlate with the actual causative chemical. For example, the high importance of ethyl acetate could be (partially) attributed to the total ester content, ethanol or ethyl hexanoate (rho = 0.77, 0.72 and 0.68, respectively), while ethyl phenylacetate could hide the importance of prenyl isobutyrate and ethyl benzoate (rho = 0.77 and 0.76). Expanding our GBR model to include beer style as a parameter did not yield additional power or insight. This is likely due to style-specific chemical signatures, such as iso-alpha acids and lactic acid, that implicitly convey style information to the original model, as well as the smaller sample size per style, which limits the power to uncover style-specific patterns. This can be partly attributed to the curse of dimensionality, where the high number of parameters results in the models mainly incorporating single-parameter effects rather than complex interactions such as style-dependent effects 67 . A larger number of samples may overcome some of these limitations and offer more insight into style-specific effects. On the other hand, beer style is not a rigid scientific classification, and beers within one style often differ substantially, which further complicates the analysis of style as a model factor.

Our study is limited to beers from Belgian breweries. Although these beers cover a large portion of the beer styles available globally, some beer styles and consumer patterns may be missing, while other features might be overrepresented. For example, many Belgian ales exhibit yeast-driven flavor profiles, which is reflected in the chemical drivers of appreciation discovered by this study. In future work, expanding the scope to include diverse markets and beer styles could lead to the identification of even more drivers of appreciation and better models for special niche products that were not present in our beer set.

In addition to the inherent limitations of GBR models, there are also some limitations associated with studying food aroma. Even though our chemical analyses measured most of the known aroma compounds, the total number of flavor compounds in complex foods like beer is still larger than the subset we were able to measure in this study. For example, hop-derived thiols, which influence flavor at very low concentrations, are notoriously difficult to measure in a high-throughput experiment. Moreover, consumer perception remains subjective and prone to biases that are difficult to avoid. It is also important to stress that the models are still immature and that more extensive datasets will be crucial for developing more complete models in the future. Besides more samples and parameters, our dataset does not include any demographic information about the tasters. Including such data could lead to better models that capture external factors like age and culture. Another limitation is that our set of beers consists of high-quality end-products and lacks beers that are unfit for sale, which limits the current models' ability to accurately predict products that are very poorly appreciated. Finally, while the models could readily be applied in quality control, their use in sensory science and product development is restrained by their inability to discern causal relationships. Given that the models cannot distinguish compounds that genuinely drive consumer perception from those that merely correlate, validation experiments remain essential to identify true causative compounds.

Despite the inherent limitations, dissection of our models enabled us to pinpoint specific molecules as potential drivers of beer aroma and consumer appreciation, including compounds that were unexpected and would not have been identified using standard approaches. Important drivers of beer appreciation uncovered by our models include protein levels, ethyl acetate, ethyl phenyl acetate and lactic acid. Currently, many brewers already use lactic acid to acidify their brewing water and ensure optimal pH for enzymatic activity during the mashing process. Our results suggest that adding lactic acid can also improve beer appreciation, although its individual effect remains to be tested. Interestingly, ethanol appears to be unnecessary to improve beer appreciation, both for blond beer and alcohol-free beer. Given the growing consumer interest in alcohol-free beer, with a predicted annual market growth of >7% 84 , it is relevant for brewers to know what compounds can further increase consumer appreciation of these beers. Hence, our model may readily provide avenues to further improve the flavor and consumer appreciation of both alcoholic and non-alcoholic beers, which is generally considered one of the key challenges for future beer production.

Whereas we see a direct implementation of our results for the development of superior alcohol-free beverages and other food products, our study can also serve as a stepping stone for the development of novel alcohol-containing beverages. We want to echo the growing body of scientific evidence for the negative effects of alcohol consumption, both on the individual level by the mutagenic, teratogenic and carcinogenic effects of ethanol 85 , 86 , as well as the burden on society caused by alcohol abuse and addiction. We encourage the use of our results for the production of healthier, tastier products, including novel and improved beverages with lower alcohol contents. Furthermore, we strongly discourage the use of these technologies to improve the appreciation or addictive properties of harmful substances.

The present work demonstrates that despite some important remaining hurdles, combining the latest developments in chemical analyses, sensory analysis and modern machine learning methods offers exciting avenues for food chemistry and engineering. Soon, these tools may provide solutions in quality control and recipe development, as well as new approaches to sensory science and flavor research.

Beer selection

A total of 250 commercial Belgian beers were selected to cover the broad diversity of beer styles and the corresponding diversity in chemical composition and aroma (see Supplementary Fig. S1).

Chemical dataset

Sample preparation.

Beers within their expiration date were purchased from commercial retailers. Samples were prepared in biological duplicates at room temperature, unless explicitly stated otherwise. Bottle pressure was measured with a manual pressure device (Steinfurth Mess-Systeme GmbH) and used to calculate the CO2 concentration. The beer was poured through two filter papers (Macherey-Nagel, 500713032 MN 713 ¼) to remove carbon dioxide and prevent spontaneous foaming. Samples were then prepared for measurement by targeted Headspace-Gas Chromatography-Flame Ionization Detector/Flame Photometric Detector (HS-GC-FID/FPD), Headspace-Solid Phase Microextraction-Gas Chromatography-Mass Spectrometry (HS-SPME-GC-MS), colorimetric analysis, enzymatic analysis and Near-Infrared (NIR) analysis, as described in the sections below. The mean values of biological duplicates are reported for each compound.

HS-GC-FID/FPD

HS-GC-FID/FPD (Shimadzu GC 2010 Plus) was used to measure higher alcohols, acetaldehyde, esters, 4-vinyl guaiacol, and sulfur compounds. Each measurement comprised 5 ml of sample pipetted into a 20 ml glass vial containing 1.75 g NaCl (VWR, 27810.295). 100 µl of a 2-heptanol (Sigma-Aldrich, H3003) internal standard solution in ethanol (Fisher Chemical, E/0650DF/C17) was added for a final concentration of 2.44 mg/L. Samples were flushed with nitrogen for 10 s, sealed with a silicone septum, stored at −80 °C and analyzed in batches of 20.

The GC was equipped with a DB-WAXetr column (length, 30 m; internal diameter, 0.32 mm; layer thickness, 0.50 µm; Agilent Technologies, Santa Clara, CA, USA) connected to the FID and an HP-5 column (length, 30 m; internal diameter, 0.25 mm; layer thickness, 0.25 µm; Agilent Technologies, Santa Clara, CA, USA) connected to the FPD. N2 was used as the carrier gas. Samples were incubated for 20 min at 70 °C in the headspace autosampler (flow rate, 35 cm/s; injection volume, 1000 µL; injection mode, split; Combi PAL autosampler, CTC Analytics, Switzerland). The injector, FID and FPD temperatures were kept at 250 °C. The GC oven temperature was first held at 50 °C for 5 min, then raised to 80 °C at 5 °C/min, followed by a second ramp of 4 °C/min to 200 °C (held for 3 min) and a final ramp of 4 °C/min to 230 °C (held for 1 min). Results were analyzed with the GCSolution software version 2.4 (Shimadzu, Kyoto, Japan). The GC was calibrated with a 5% EtOH solution (VWR International) containing the volatiles under study (Supplementary Table S7).

HS-SPME-GC-MS

HS-SPME-GC-MS (Shimadzu GCMS-QP-2010 Ultra) was used to measure additional volatile compounds, mainly comprising terpenoids and esters. Samples were analyzed by HS-SPME using a triphase DVB/Carboxen/PDMS 50/30 μm SPME fiber (Supelco Co., Bellefonte, PA, USA) followed by gas chromatography (Thermo Fisher Scientific Trace 1300 series, USA) coupled to a mass spectrometer (Thermo Fisher Scientific ISQ series MS) equipped with a TriPlus RSH autosampler. 5 ml of degassed beer sample was placed in 20 ml vials containing 1.75 g NaCl (VWR, 27810.295). 5 µl internal standard mix was added, containing 2-heptanol (1 g/L) (Sigma-Aldrich, H3003), 4-fluorobenzaldehyde (1 g/L) (Sigma-Aldrich, 128376), 2,3-hexanedione (1 g/L) (Sigma-Aldrich, 144169) and guaiacol (1 g/L) (Sigma-Aldrich, W253200) in ethanol (Fisher Chemical, E/0650DF/C17). Each sample was incubated at 60 °C in the autosampler oven with constant agitation. After 5 min equilibration, the SPME fiber was exposed to the sample headspace for 30 min. The compounds trapped on the fiber were thermally desorbed in the injection port of the chromatograph by heating the fiber for 15 min at 270 °C.

The GC-MS was equipped with a low-polarity RXi-5Sil MS column (length, 20 m; internal diameter, 0.18 mm; layer thickness, 0.18 µm; Restek, Bellefonte, PA, USA). Injection was performed in splitless mode at 320 °C, with a split flow of 9 ml/min, a purge flow of 5 ml/min and an open valve time of 3 min. To obtain a pulsed injection, a programmed gas flow was used whereby the helium gas flow was set at 2.7 mL/min for 0.1 min, followed by a decrease in flow of 20 ml/min to the normal 0.9 mL/min. The oven temperature was first held at 30 °C for 3 min, then raised to 80 °C at 7 °C/min, followed by a second ramp of 2 °C/min to 125 °C and a final ramp of 8 °C/min to a final temperature of 270 °C.

The mass acquisition range was 33 to 550 amu at a scan rate of 5 scans/s. The electron impact ionization energy was 70 eV. The interface and ion source were kept at 275 °C and 250 °C, respectively. A mix of linear n-alkanes (from C7 to C40, Supelco Co.) was injected into the GC-MS under identical conditions to serve as external retention index markers. Identification and quantification of the compounds were performed using an in-house developed R script as described in Goelen et al. and Reher et al. 87 , 88 (for package information, see Supplementary Table S8). Briefly, chromatograms were analyzed using AMDIS (v2.71) 89 to separate overlapping peaks and obtain pure compound spectra. The NIST MS Search software (v2.0 g) in combination with the NIST2017, FFNSC3 and Adams4 libraries was used to manually identify the empirical spectra, taking into account the expected retention time. After background subtraction and correction for retention time shifts between samples run on different days, based on the alkane ladders, compound elution profiles were extracted and integrated using a file with 284 target compounds of interest, which were either recovered in our identified AMDIS list of spectra or were known to occur in beer. Compound elution profiles were estimated for every peak in every chromatogram over a time-restricted window using weighted non-negative least squares analysis, after which peak areas were integrated 87 , 88 . Batch effect correction was performed by normalizing against the most stable internal standard compound, 4-fluorobenzaldehyde. Of all 284 target compounds analyzed, 167 were visually judged to have reliable elution profiles and were used for the final analysis.
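
The sketch below illustrates the general idea of this deconvolution step, not the in-house R script itself: within a retention-time window, each scan's mass spectrum is modeled as a non-negative combination of pure reference spectra using SciPy's nnls solver, and the fitted contributions are integrated over time to yield a peak area per compound. Per-channel weights are omitted for brevity.

```python
# Simplified, unweighted sketch of non-negative least squares deconvolution of
# co-eluting compounds within a retention-time window.
import numpy as np
from scipy.optimize import nnls

def deconvolve_window(scans, reference_spectra, scan_interval_s):
    """scans: (n_scans, n_mz) observed intensities within the time window.
    reference_spectra: (n_compounds, n_mz) pure-compound spectra."""
    A = reference_spectra.T                                      # (n_mz, n_compounds) design matrix
    profiles = np.array([nnls(A, scan)[0] for scan in scans])    # elution profile per compound
    areas = profiles.sum(axis=0) * scan_interval_s               # integrate profiles over time
    return profiles, areas
```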

Discrete photometric and enzymatic analysis

Discrete photometric and enzymatic analysis (Thermo Scientific™ Gallery™ Plus Beermaster Discrete Analyzer) was used to measure acetic acid, ammonia, beta-glucan, iso-alpha acids, color, sugars, glycerol, iron, pH, protein, and sulfite. 2 ml of sample volume was used for the analyses. Information regarding the reagents and standard solutions used for the analyses and calibrations is included in Supplementary Table S7 and Supplementary Table S9.

NIR analyses

NIR analysis (Anton Paar Alcolyzer Beer ME System) was used to measure ethanol. Measurements comprised 50 ml of sample, and a 10% EtOH solution was used for calibration.

Correlation calculations

Pairwise Spearman Rank correlations were calculated between all chemical properties.
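
Assuming the chemical measurements are stored in a pandas DataFrame with one row per beer and one column per compound, the full pairwise Spearman correlation matrix can be computed in a single call, as sketched below.

```python
# Minimal sketch: pairwise Spearman rank correlations between all chemical properties.
import pandas as pd

def spearman_matrix(chemical_df: pd.DataFrame) -> pd.DataFrame:
    # Rows = beers, columns = measured compounds; returns a square correlation matrix
    return chemical_df.corr(method="spearman")
```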

Sensory dataset

Trained panel.

Our trained tasting panel consisted of volunteers who gave prior verbal informed consent. All compounds used for the validation experiment were of food-grade quality. The tasting sessions were approved by the Social and Societal Ethics Committee of the KU Leuven (G-2022-5677-R2(MAR)). All online reviewers agreed to the Terms and Conditions of the RateBeer website.

Sensory analysis was performed according to the American Society of Brewing Chemists (ASBC) Sensory Analysis Methods 90 . 30 volunteers were screened through a series of triangle tests. The sixteen most sensitive and consistent tasters were retained as taste panel members. The resulting panel was diverse in age [22–42, mean: 29], sex [56% male] and nationality [7 different countries]. The panel developed a consensus vocabulary to describe beer aroma, taste and mouthfeel. Panelists were trained to identify and score 50 different attributes, using a 7-point scale to rate attribute intensity. The scoring sheet is included as Supplementary Data 3. Sensory assessments took place between 10 a.m. and 12 noon. The beers were served in black-colored glasses. Per session, between 5 and 12 beers of the same style were tasted at 12 °C to 16 °C. Two reference beers were added to each set and indicated as ‘Reference 1 & 2’, allowing panel members to calibrate their ratings. Not all panelists were present at every tasting. Scores were scaled by standard deviation and mean-centered per taster. Values are represented as z-scores and clustered by Euclidean distance. Pairwise Spearman correlations were calculated between taste and aroma sensory attributes. Panel consistency was evaluated by repeating samples in different sessions and performing ANOVA to identify differences, using the ‘stats’ package (v4.2.2) in R (for package information, see Supplementary Table S8).
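
The per-taster normalization can be illustrated with the short pandas sketch below, in which each panelist's scores are mean-centered and scaled by that panelist's own standard deviation. The column names ('taster', 'score') are assumptions for illustration, not the study's actual schema.

```python
# Hedged sketch: convert raw panel scores into per-taster z-scores so that
# ratings become comparable across panelists.
import pandas as pd

def zscore_per_taster(scores: pd.DataFrame) -> pd.DataFrame:
    def zscore(group: pd.Series) -> pd.Series:
        return (group - group.mean()) / group.std(ddof=0)
    scores = scores.copy()
    # Mean-center and scale within each taster's own set of ratings
    scores["z"] = scores.groupby("taster")["score"].transform(zscore)
    return scores
```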

Online reviews from a public database

The ‘scrapy’ package in Python (v3.6) (for package information, see Supplementary Table S8) was used to collect 232,288 online reviews (mean = 922, min = 6, max = 5343) from RateBeer, an online beer review database. Each review entry comprised 5 numerical scores (appearance, aroma, taste, palate and overall quality) and an optional review text. The total number of reviews per reviewer was collected separately. Numerical scores were scaled and centered per rater, and mean scores were calculated per beer.

For the review texts, the language was estimated using the packages ‘langdetect’ and ‘langid’ in Python. Reviews that were classified as English by both packages were kept. Reviewers with fewer than 100 entries overall were discarded. 181,025 reviews from >6000 reviewers from >40 countries remained. Text processing was done using the ‘nltk’ package in Python. Texts were corrected for slang and misspellings; proper nouns and rare words that are relevant to the beer context were specified and kept as-is (‘Chimay’, ‘Lambic’, etc.). A dictionary of semantically similar sensorial terms, for example ‘floral’ and ‘flower’, was created, and such terms were collapsed into a single term. Words were stemmed and lemmatized to avoid identifying words such as ‘acid’ and ‘acidity’ as separate terms. Numbers and punctuation were removed.
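
A simplified sketch of this preprocessing is shown below: it keeps reviews detected as English, strips numbers and punctuation, and stems and lemmatizes the remaining tokens. The study's dictionaries of slang corrections, protected beer terms and collapsed synonyms are not reproduced here, and only one of the two language detectors is used for brevity.

```python
# Hedged sketch of the review-text preprocessing: language filtering plus
# basic normalization with nltk (requires the 'punkt' and 'wordnet' nltk data).
import re
from langdetect import detect
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize

stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()

def preprocess_review(text: str):
    if detect(text) != "en":                            # keep only reviews detected as English
        return None
    text = re.sub(r"[^a-zA-Z\s]", " ", text.lower())    # drop numbers and punctuation
    tokens = word_tokenize(text)
    # Lemmatize then stem so variants such as 'acid' and 'acidity' collapse together
    return [stemmer.stem(lemmatizer.lemmatize(tok)) for tok in tokens]
```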

Sentences from up to 50 randomly chosen reviews per beer were manually categorized according to the aspect of beer they describe (appearance, aroma, taste, palate, overall quality—not to be confused with the 5 numerical scores described above) or flagged as irrelevant if they contained no useful information. If a beer contained fewer than 50 reviews, all reviews were manually classified. This labeled data set was used to train a model that classified the rest of the sentences for all beers 91 . Sentences describing taste and aroma were extracted, and term frequency–inverse document frequency (TFIDF) was implemented to calculate enrichment scores for sensorial words per beer.
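
As a minimal illustration of the TF-IDF step, the sketch below assumes the taste- and aroma-related sentences have already been pooled into one text per beer and computes per-beer enrichment scores with scikit-learn's TfidfVectorizer. The input dictionary is an assumption for illustration.

```python
# Minimal TF-IDF sketch: enrichment scores for sensorial words per beer.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_per_beer(docs_per_beer: dict) -> pd.DataFrame:
    """docs_per_beer: beer ID -> pooled taste/aroma sentences for that beer."""
    beers, docs = zip(*docs_per_beer.items())
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(docs)
    # One row per beer, one column per term, values are TF-IDF enrichment scores
    return pd.DataFrame(tfidf.toarray(), index=list(beers),
                        columns=vec.get_feature_names_out())
```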

The sex of the tasting subject was not considered when building our sensory database. Instead, results from different panelists were averaged, both for our trained panel (56% male, 44% female) and the RateBeer reviews (70% male, 30% female for RateBeer as a whole).

Beer price collection and processing

Beer prices were collected from the following stores: Colruyt, Delhaize, Total Wine, BeerHawk, The Belgian Beer Shop, The Belgian Shop, and Beer of Belgium. Where applicable, prices were converted to Euros and normalized per liter. Spearman correlations were calculated between these prices and mean overall appreciation scores from RateBeer and the taste panel, respectively.

Pairwise Spearman Rank correlations were calculated between all sensory properties.

Machine learning models

Predictive modeling of sensory profiles from chemical data.

Regression models were constructed to predict (a) trained panel scores for beer flavors and quality from beer chemical profiles and (b) public reviews' appreciation scores from beer chemical profiles. Z-scores were used to represent sensory attributes in both data sets. Chemical properties with log-normal distributions (Shapiro-Wilk test, p < 0.05) were log-transformed. Missing chemical measurements (0.1% of all data) were replaced with mean values per attribute. Observations from 250 beers were randomly separated into a training set (70%, 175 beers) and a test set (30%, 75 beers), stratified per beer style. Chemical measurements (p = 231) were normalized based on the training set average and standard deviation. In total, ten models were trained: three linear regression-based models, namely linear regression with first-order interaction terms (LR), lasso regression with first-order interaction terms (Lasso) and partial least squares regression (PLSR); five decision tree models, namely the AdaBoost regressor (ABR), Extra Trees (ET), the Gradient Boosting regressor (GBR), Random Forest (RF) and the XGBoost regressor (XGBR); one support vector machine model (SVR); and one artificial neural network model (ANN). The models were implemented using the ‘scikit-learn’ package (v1.2.2) and the ‘xgboost’ package (v1.7.3) in Python (v3.9.16). Models were trained, and hyperparameters optimized, using five-fold cross-validated grid search with the coefficient of determination (R2) as the evaluation metric. The ANN (scikit-learn's MLPRegressor) was optimized using Bayesian Tree-structured Parzen Estimator optimization with the ‘Optuna’ Python package (v3.2.0). Individual models were trained per attribute, and a multi-output model was trained on all attributes simultaneously.
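
A condensed sketch of this pipeline for the GBR case is given below: a style-stratified train/test split, scaling fitted on the training set only, and a five-fold cross-validated grid search scored by R2. The hyperparameter grid shown is illustrative and not the one used in the study.

```python
# Hedged sketch of the modeling pipeline for a single sensory attribute.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor

def train_gbr(X, y, styles):
    # 70/30 split, stratified by beer style
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=styles, random_state=0)

    # Normalize using the training set mean and standard deviation only
    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    # Five-fold cross-validated grid search with R2 as the evaluation metric
    grid = {"n_estimators": [200, 500], "learning_rate": [0.01, 0.1], "max_depth": [2, 4]}
    search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                          grid, cv=5, scoring="r2")
    search.fit(X_tr, y_tr)
    return search.best_estimator_, search.best_estimator_.score(X_te, y_te)
```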

Model dissection

GBR was found to outperform other methods, resulting in models with the highest average R 2 values in both trained panel and public review data sets. Impurity-based rankings of the most important predictors for each predicted sensorial trait were obtained using the ‘scikit-learn’ package. To observe the relationships between these chemical properties and their predicted targets, partial dependence plots (PDP) were constructed for the six most important predictors of consumer appreciation 74 , 75 .
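
A minimal sketch of this dissection step is shown below, assuming 'model' is a fitted gradient boosting regressor and 'X' is the chemical feature matrix as a DataFrame: the predictors are ranked by impurity-based importance and one-way partial dependence plots are drawn for the top six with scikit-learn.

```python
# Hedged sketch: rank predictors by impurity-based importance and plot
# one-way partial dependence for the top six.
import numpy as np
from sklearn.inspection import PartialDependenceDisplay

def plot_top_partial_dependence(model, X, n_top=6):
    order = np.argsort(model.feature_importances_)[::-1][:n_top]  # most important first
    top_features = X.columns[order].tolist()
    return PartialDependenceDisplay.from_estimator(model, X, features=top_features)
```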

The ‘SHAP’ package in Python (v0.41.0) was implemented to provide an alternative ranking of predictor importance and to visualize the predictors’ effects as a function of their concentration 68 .
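
In sketch form, this corresponds to the following use of the SHAP package, assuming the same fitted tree-based model and feature matrix as above.

```python
# Minimal SHAP sketch: per-sample SHAP values and a summary plot showing each
# predictor's effect as a function of its (scaled) value.
import shap

def shap_summary(model, X):
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    shap.summary_plot(shap_values, X)
    return shap_values
```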

Validation of causal chemical properties

To validate the effects of the most important model features on predicted sensory attributes, beers were spiked with the chemical compounds identified by the models and descriptive sensory analyses were carried out according to the American Society of Brewing Chemists (ASBC) protocol 90 .

Compound spiking was done 30 min before tasting. Compounds were spiked into fresh beer bottles, which were immediately resealed and inverted three times. Fresh bottles of beer were opened for the same duration, resealed and inverted three times to serve as controls. Pairs of spiked samples and controls were served simultaneously, chilled and in dark glasses as outlined in the Trained panel section above. Tasters were instructed to select the glass with the higher flavor intensity for each attribute (directional difference test 92 ) and to select the glass they preferred.

The final concentration after spiking was equal to the within-style average, after normalizing by ethanol concentration. This was done to ensure balanced flavor profiles in the final spiked beer. The same methods were applied to improve a non-alcoholic beer. The following compounds were used: ethyl acetate (Merck KGaA, W241415), ethyl hexanoate (Merck KGaA, W243906), isoamyl acetate (Merck KGaA, W205508), phenethyl acetate (Merck KGaA, W285706), ethanol (96%, Colruyt), glycerol (Merck KGaA, W252506) and lactic acid (Merck KGaA, 261106).

Significant differences in preference or perceived intensity were determined by performing the two-sided binomial test on each attribute.
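
For reference, the sketch below shows how such a test can be computed with SciPy: the number of tasters choosing the spiked sample is compared against chance (p = 0.5) with a two-sided binomial test.

```python
# Hedged sketch of the two-sided binomial test used per attribute.
from scipy.stats import binomtest

def preference_p_value(n_chose_spiked: int, n_tasters: int) -> float:
    return binomtest(n_chose_spiked, n_tasters, p=0.5, alternative="two-sided").pvalue

# Example: 16 of 20 tasters preferring the spiked beer gives p ≈ 0.012.
```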

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

The data that support the findings of this work are available in the Supplementary Data files and have been deposited to Zenodo under accession code 10653704 93 . The RateBeer scores are under restricted access; they are not publicly available because they are the property of RateBeer (ZX Ventures, USA). Access can be obtained from the authors upon reasonable request and with permission of RateBeer (ZX Ventures, USA). Source data are provided with this paper.

Code availability

The code for training the machine learning models, analyzing the models, and generating the figures has been deposited to Zenodo under accession code 10653704 93 .

References

Tieman, D. et al. A chemical genetic roadmap to improved tomato flavor. Science 355 , 391–394 (2017).

Plutowska, B. & Wardencki, W. Application of gas chromatography–olfactometry (GC–O) in analysis and quality assessment of alcoholic beverages – A review. Food Chem. 107 , 449–463 (2008).

Legin, A., Rudnitskaya, A., Seleznev, B. & Vlasov, Y. Electronic tongue for quality assessment of ethanol, vodka and eau-de-vie. Anal. Chim. Acta 534 , 129–135 (2005).

Loutfi, A., Coradeschi, S., Mani, G. K., Shankar, P. & Rayappan, J. B. B. Electronic noses for food quality: A review. J. Food Eng. 144 , 103–111 (2015).

Ahn, Y.-Y., Ahnert, S. E., Bagrow, J. P. & Barabási, A.-L. Flavor network and the principles of food pairing. Sci. Rep. 1 , 196 (2011).

Bartoshuk, L. M. & Klee, H. J. Better fruits and vegetables through sensory analysis. Curr. Biol. 23 , R374–R378 (2013).

Piggott, J. R. Design questions in sensory and consumer science. Food Qual. Prefer. 3293 , 217–220 (1995).

Kermit, M. & Lengard, V. Assessing the performance of a sensory panel-panellist monitoring and tracking. J. Chemom. 19 , 154–161 (2005).

Cook, D. J., Hollowood, T. A., Linforth, R. S. T. & Taylor, A. J. Correlating instrumental measurements of texture and flavour release with human perception. Int. J. Food Sci. Technol. 40 , 631–641 (2005).

Chinchanachokchai, S., Thontirawong, P. & Chinchanachokchai, P. A tale of two recommender systems: The moderating role of consumer expertise on artificial intelligence based product recommendations. J. Retail. Consum. Serv. 61 , 1–12 (2021).

Ross, C. F. Sensory science at the human-machine interface. Trends Food Sci. Technol. 20 , 63–72 (2009).

Chambers, E. IV & Koppel, K. Associations of volatile compounds with sensory aroma and flavor: The complex nature of flavor. Molecules 18 , 4887–4905 (2013).

Pinu, F. R. Metabolomics—The new frontier in food safety and quality research. Food Res. Int. 72 , 80–81 (2015).

Danezis, G. P., Tsagkaris, A. S., Brusic, V. & Georgiou, C. A. Food authentication: state of the art and prospects. Curr. Opin. Food Sci. 10 , 22–31 (2016).

Shepherd, G. M. Smell images and the flavour system in the human brain. Nature 444 , 316–321 (2006).

Meilgaard, M. C. Prediction of flavor differences between beers from their chemical composition. J. Agric. Food Chem. 30 , 1009–1017 (1982).

Xu, L. et al. Widespread receptor-driven modulation in peripheral olfactory coding. Science 368 , eaaz5390 (2020).

Kupferschmidt, K. Following the flavor. Science 340 , 808–809 (2013).

Billesbølle, C. B. et al. Structural basis of odorant recognition by a human odorant receptor. Nature 615 , 742–749 (2023).

Smith, B. Perspective: Complexities of flavour. Nature 486 , S6–S6 (2012).

Pfister, P. et al. Odorant receptor inhibition is fundamental to odor encoding. Curr. Biol. 30 , 2574–2587 (2020).

Moskowitz, H. W., Kumaraiah, V., Sharma, K. N., Jacobs, H. L. & Sharma, S. D. Cross-cultural differences in simple taste preferences. Science 190 , 1217–1218 (1975).

Eriksson, N. et al. A genetic variant near olfactory receptor genes influences cilantro preference. Flavour 1 , 22 (2012).

Ferdenzi, C. et al. Variability of affective responses to odors: Culture, gender, and olfactory knowledge. Chem. Senses 38 , 175–186 (2013).

Lawless, H. T. & Heymann, H. Sensory evaluation of food: Principles and practices. (Springer, New York, NY). https://doi.org/10.1007/978-1-4419-6488-5 (2010).

Colantonio, V. et al. Metabolomic selection for enhanced fruit flavor. Proc. Natl. Acad. Sci. 119 , e2115865119 (2022).

Fritz, F., Preissner, R. & Banerjee, P. VirtualTaste: a web server for the prediction of organoleptic properties of chemical compounds. Nucleic Acids Res 49 , W679–W684 (2021).

Tuwani, R., Wadhwa, S. & Bagler, G. BitterSweet: Building machine learning models for predicting the bitter and sweet taste of small molecules. Sci. Rep. 9 , 1–13 (2019).

Dagan-Wiener, A. et al. Bitter or not? BitterPredict, a tool for predicting taste from chemical structure. Sci. Rep. 7 , 1–13 (2017).

Pallante, L. et al. Toward a general and interpretable umami taste predictor using a multi-objective machine learning approach. Sci. Rep. 12 , 1–11 (2022).

Malavolta, M. et al. A survey on computational taste predictors. Eur. Food Res. Technol. 248 , 2215–2235 (2022).

Lee, B. K. et al. A principal odor map unifies diverse tasks in olfactory perception. Science 381 , 999–1006 (2023).

Mayhew, E. J. et al. Transport features predict if a molecule is odorous. Proc. Natl. Acad. Sci. 119 , e2116576119 (2022).

Niu, Y. et al. Sensory evaluation of the synergism among ester odorants in light aroma-type liquor by odor threshold, aroma intensity and flash GC electronic nose. Food Res. Int. 113 , 102–114 (2018).

Yu, P., Low, M. Y. & Zhou, W. Design of experiments and regression modelling in food flavour and sensory analysis: A review. Trends Food Sci. Technol. 71 , 202–215 (2018).

Oladokun, O. et al. The impact of hop bitter acid and polyphenol profiles on the perceived bitterness of beer. Food Chem. 205 , 212–220 (2016).

Linforth, R., Cabannes, M., Hewson, L., Yang, N. & Taylor, A. Effect of fat content on flavor delivery during consumption: An in vivo model. J. Agric. Food Chem. 58 , 6905–6911 (2010).

Guo, S., Na Jom, K. & Ge, Y. Influence of roasting condition on flavor profile of sunflower seeds: A flavoromics approach. Sci. Rep. 9 , 11295 (2019).

Ren, Q. et al. The changes of microbial community and flavor compound in the fermentation process of Chinese rice wine using Fagopyrum tataricum grain as feedstock. Sci. Rep. 9 , 3365 (2019).

Hastie, T., Friedman, J. & Tibshirani, R. The Elements of Statistical Learning. (Springer, New York, NY). https://doi.org/10.1007/978-0-387-21606-5 (2001).

Dietz, C., Cook, D., Huismann, M., Wilson, C. & Ford, R. The multisensory perception of hop essential oil: a review. J. Inst. Brew. 126 , 320–342 (2020).

Roncoroni, M. & Verstrepen, K. J. Belgian Beer: Tested and Tasted. (Lannoo, 2018).

Meilgaard, M. C. Flavor chemistry of beer: Part II: Flavor and threshold of 239 aroma volatiles. Master Brew. Assoc. Am. Tech. Q. 12 (1975).

Bokulich, N. A. & Bamforth, C. W. The microbiology of malting and brewing. Microbiol. Mol. Biol. Rev. MMBR 77 , 157–172 (2013).

Dzialo, M. C., Park, R., Steensels, J., Lievens, B. & Verstrepen, K. J. Physiology, ecology and industrial applications of aroma formation in yeast. FEMS Microbiol. Rev. 41 , S95–S128 (2017).

Datta, A. et al. Computer-aided food engineering. Nat. Food 3 , 894–904 (2022).

American Society of Brewing Chemists. Beer Methods. (American Society of Brewing Chemists, St. Paul, MN, U.S.A.).

Olaniran, A. O., Hiralal, L., Mokoena, M. P. & Pillay, B. Flavour-active volatile compounds in beer: production, regulation and control. J. Inst. Brew. 123 , 13–23 (2017).

Verstrepen, K. J. et al. Flavor-active esters: Adding fruitiness to beer. J. Biosci. Bioeng. 96 , 110–118 (2003).

Meilgaard, M. C. Flavour chemistry of beer. Part I: Flavour interaction between principal volatiles. Master Brew. Assoc. Am. Tech. Q. 12, 107–117 (1975).

Briggs, D. E., Boulton, C. A., Brookes, P. A. & Stevens, R. Brewing 227–254. (Woodhead Publishing). https://doi.org/10.1533/9781855739062.227 (2004).

Bossaert, S., Crauwels, S., De Rouck, G. & Lievens, B. The power of sour - A review: Old traditions, new opportunities. BrewingScience 72 , 78–88 (2019).

Verstrepen, K. J. et al. Flavor active esters: Adding fruitiness to beer. J. Biosci. Bioeng. 96 , 110–118 (2003).

Snauwaert, I. et al. Microbial diversity and metabolite composition of Belgian red-brown acidic ales. Int. J. Food Microbiol. 221 , 1–11 (2016).

Spitaels, F. et al. The microbial diversity of traditional spontaneously fermented lambic beer. PLoS ONE 9 , e95384 (2014).

Blanco, C. A., Andrés-Iglesias, C. & Montero, O. Low-alcohol Beers: Flavor Compounds, Defects, and Improvement Strategies. Crit. Rev. Food Sci. Nutr. 56 , 1379–1388 (2016).

Jackowski, M. & Trusek, A. Non-alcoholic beer production – an overview. Pol. J. Chem. Technol. 20, 32–38 (2018).

Takoi, K. et al. The contribution of geraniol metabolism to the citrus flavour of beer: Synergy of geraniol and β-citronellol under coexistence with excess linalool. J. Inst. Brew. 116 , 251–260 (2010).

Kroeze, J. H. & Bartoshuk, L. M. Bitterness suppression as revealed by split-tongue taste stimulation in humans. Physiol. Behav. 35 , 779–783 (1985).

Mennella, J. A. et al. “A spoonful of sugar helps the medicine go down”: Bitter masking by sucrose among children and adults. Chem. Senses 40, 17–25 (2015).

Wietstock, P., Kunz, T., Perreira, F. & Methner, F.-J. Metal chelation behavior of hop acids in buffered model systems. BrewingScience 69 , 56–63 (2016).

Sancho, D., Blanco, C. A., Caballero, I. & Pascual, A. Free iron in pale, dark and alcohol-free commercial lager beers. J. Sci. Food Agric. 91 , 1142–1147 (2011).

Rodrigues, H. & Parr, W. V. Contribution of cross-cultural studies to understanding wine appreciation: A review. Food Res. Int. 115 , 251–258 (2019).

Korneva, E. & Blockeel, H. Towards better evaluation of multi-target regression models. in ECML PKDD 2020 Workshops (eds. Koprinska, I. et al.) 353–362 (Springer International Publishing, Cham, 2020). https://doi.org/10.1007/978-3-030-65965-3_23 .

Ares, G. Mathematical and Statistical Methods in Food Science and Technology. (Wiley, 2013).

Grinsztajn, L., Oyallon, E. & Varoquaux, G. Why do tree-based models still outperform deep learning on tabular data? Preprint at http://arxiv.org/abs/2207.08815 (2022).

Gries, S. T. Statistics for Linguistics with R: A Practical Introduction. (De Gruyter Mouton, 2021). https://doi.org/10.1515/9783110718256.

Lundberg, S. M. et al. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2 , 56–67 (2020).

Ickes, C. M. & Cadwallader, K. R. Effects of ethanol on flavor perception in alcoholic beverages. Chemosens. Percept. 10 , 119–134 (2017).

Kato, M. et al. Influence of high molecular weight polypeptides on the mouthfeel of commercial beer. J. Inst. Brew. 127 , 27–40 (2021).

Wauters, R. et al. Novel Saccharomyces cerevisiae variants slow down the accumulation of staling aldehydes and improve beer shelf-life. Food Chem. 398 , 1–11 (2023).

Li, H., Jia, S. & Zhang, W. Rapid determination of low-level sulfur compounds in beer by headspace gas chromatography with a pulsed flame photometric detector. J. Am. Soc. Brew. Chem. 66 , 188–191 (2008).

Dercksen, A., Laurens, J., Torline, P., Axcell, B. C. & Rohwer, E. Quantitative analysis of volatile sulfur compounds in beer using a membrane extraction interface. J. Am. Soc. Brew. Chem. 54 , 228–233 (1996).

Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. (2020).

Zhao, Q. & Hastie, T. Causal interpretations of black-box models. J. Bus. Econ. Stat. Publ. Am. Stat. Assoc. 39 , 272–281 (2019).

Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning. (Springer, 2019).

Labrado, D. et al. Identification by NMR of key compounds present in beer distillates and residual phases after dealcoholization by vacuum distillation. J. Sci. Food Agric. 100 , 3971–3978 (2020).

Lusk, L. T., Kay, S. B., Porubcan, A. & Ryder, D. S. Key olfactory cues for beer oxidation. J. Am. Soc. Brew. Chem. 70 , 257–261 (2012).

Gonzalez Viejo, C., Torrico, D. D., Dunshea, F. R. & Fuentes, S. Development of artificial neural network models to assess beer acceptability based on sensory properties using a robotic pourer: A comparative model approach to achieve an artificial intelligence system. Beverages 5 , 33 (2019).

Gonzalez Viejo, C., Fuentes, S., Torrico, D. D., Godbole, A. & Dunshea, F. R. Chemical characterization of aromas in beer and their effect on consumers liking. Food Chem. 293 , 479–485 (2019).

Gilbert, J. L. et al. Identifying breeding priorities for blueberry flavor using biochemical, sensory, and genotype by environment analyses. PLOS ONE 10 , 1–21 (2015).

Goulet, C. et al. Role of an esterase in flavor volatile variation within the tomato clade. Proc. Natl. Acad. Sci. 109 , 19009–19014 (2012).

Borisov, V. et al. Deep Neural Networks and Tabular Data: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 1–21 https://doi.org/10.1109/TNNLS.2022.3229161 (2022).

Statista. Statista Consumer Market Outlook: Beer - Worldwide.

Seitz, H. K. & Stickel, F. Molecular mechanisms of alcohol-mediated carcinogenesis. Nat. Rev. Cancer 7, 599–612 (2007).

Voordeckers, K. et al. Ethanol exposure increases mutation rate through error-prone polymerases. Nat. Commun. 11 , 3664 (2020).

Goelen, T. et al. Bacterial phylogeny predicts volatile organic compound composition and olfactory response of an aphid parasitoid. Oikos 129 , 1415–1428 (2020).

Reher, T. et al. Evaluation of hop (Humulus lupulus) as a repellent for the management of Drosophila suzukii. Crop Prot. 124 , 104839 (2019).

Stein, S. E. An integrated method for spectrum extraction and compound identification from gas chromatography/mass spectrometry data. J. Am. Soc. Mass Spectrom. 10 , 770–781 (1999).

American Society of Brewing Chemists. Sensory Analysis Methods. (American Society of Brewing Chemists, St. Paul, MN, U.S.A., 1992).

McAuley, J., Leskovec, J. & Jurafsky, D. Learning Attitudes and Attributes from Multi-Aspect Reviews. Preprint at https://doi.org/10.48550/arXiv.1210.3926 (2012).

Meilgaard, M. C., Civille, G. V. & Carr, B. T. Sensory Evaluation Techniques. (CRC Press, Boca Raton). https://doi.org/10.1201/b16452 (2014).

Schreurs, M. et al. Data from: Predicting and improving complex beer flavor through machine learning. Zenodo https://doi.org/10.5281/zenodo.10653704 (2024).

Acknowledgements

We thank all lab members for their discussions and thank all tasting panel members for their contributions. Special thanks go out to Dr. Karin Voordeckers for her tremendous help in proofreading and improving the manuscript. M.S. was supported by a Baillet-Latour fellowship, L.C. acknowledges financial support from KU Leuven (C16/17/006), F.A.T. was supported by a PhD fellowship from FWO (1S08821N). Research in the lab of K.J.V. is supported by KU Leuven, FWO, VIB, VLAIO and the Brewing Science Serves Health Fund. Research in the lab of T.W. is supported by FWO (G.0A51.15) and KU Leuven (C16/17/006).

Author information

These authors contributed equally: Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni.

Authors and Affiliations

VIB—KU Leuven Center for Microbiology, Gaston Geenslaan 1, B-3001, Leuven, Belgium

Michiel Schreurs, Supinya Piampongsant, Miguel Roncoroni, Lloyd Cool, Beatriz Herrera-Malaver, Florian A. Theßeling & Kevin J. Verstrepen

CMPG Laboratory of Genetics and Genomics, KU Leuven, Gaston Geenslaan 1, B-3001, Leuven, Belgium

Leuven Institute for Beer Research (LIBR), Gaston Geenslaan 1, B-3001, Leuven, Belgium

Laboratory of Socioecology and Social Evolution, KU Leuven, Naamsestraat 59, B-3000, Leuven, Belgium

Lloyd Cool, Christophe Vanderaa & Tom Wenseleers

VIB Bioinformatics Core, VIB, Rijvisschestraat 120, B-9052, Ghent, Belgium

Łukasz Kreft & Alexander Botzki

AB InBev SA/NV, Brouwerijplein 1, B-3000, Leuven, Belgium

Philippe Malcorps & Luk Daenen

Contributions

S.P., M.S. and K.J.V. conceived the experiments. S.P., M.S. and K.J.V. designed the experiments. S.P., M.S., M.R., B.H. and F.A.T. performed the experiments. S.P., M.S., L.C., C.V., L.K., A.B., P.M., L.D., T.W. and K.J.V. contributed analysis ideas. S.P., M.S., L.C., C.V., T.W. and K.J.V. analyzed the data. All authors contributed to writing the manuscript.

Corresponding author

Correspondence to Kevin J. Verstrepen .

Ethics declarations

Competing interests.

K.J.V. is affiliated with bar.on. The other authors declare no competing interests.

Peer review

Peer review information.

Nature Communications thanks Florian Bauer, Andrew John Macintosh and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information, Peer Review File, Description of Additional Supplementary Files, Supplementary Data 1–7, Reporting Summary and Source Data.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Schreurs, M., Piampongsant, S., Roncoroni, M. et al. Predicting and improving complex beer flavor through machine learning. Nat Commun 15 , 2368 (2024). https://doi.org/10.1038/s41467-024-46346-0

Received: 30 October 2023

Accepted: 21 February 2024

Published: 26 March 2024

DOI: https://doi.org/10.1038/s41467-024-46346-0

Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide

Jacalyn Kelly

1 Clinical Biochemistry, Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada

Tara Sadeghieh

Khosrow Adeli

2 Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada

3 Chair, Communications and Publications Division (CPD), International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), Milan, Italy

The authors declare no conflicts of interest regarding publication of this article.

Peer review has been defined as a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field. It functions to encourage authors to meet the accepted high standards of their discipline and to control the dissemination of research data to ensure that unwarranted claims, unacceptable interpretations or personal views are not published without prior expert review. Despite its widespread use by most journals, the peer review process has also been widely criticised due to the slowness of the process to publish new findings and due to perceived bias by the editors and/or reviewers. Within the scientific community, peer review has become an essential component of the academic writing process. It helps ensure that papers published in scientific journals answer meaningful research questions and draw accurate conclusions based on professionally executed experimentation. Submission of low-quality manuscripts has become increasingly prevalent, and peer review acts as a filter to prevent this work from reaching the scientific community. The major advantage of a peer review process is that peer-reviewed articles provide a trusted form of scientific communication. Since scientific knowledge is cumulative and builds on itself, this trust is particularly important. Despite the positive impacts of peer review, critics argue that the peer review process stifles innovation in experimentation and acts as a poor screen against plagiarism. Despite its drawbacks, no foolproof system has yet been developed to take the place of peer review; however, researchers have been looking into electronic means of improving the peer review process. Unfortunately, the recent explosion in online-only/electronic journals has led to mass publication of a large number of scientific articles with little or no peer review. This poses a significant risk to advances in scientific knowledge and its future potential. The current article summarizes the peer review process, highlights the pros and cons associated with different types of peer review, and describes new methods for improving peer review.

WHAT IS PEER REVIEW AND WHAT IS ITS PURPOSE?

Peer Review is defined as “a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field” ( 1 ). Peer review is intended to serve two primary purposes. Firstly, it acts as a filter to ensure that only high quality research is published, especially in reputable journals, by determining the validity, significance and originality of the study. Secondly, peer review is intended to improve the quality of manuscripts that are deemed suitable for publication. Peer reviewers provide suggestions to authors on how to improve the quality of their manuscripts, and also identify any errors that need correcting before publication.

HISTORY OF PEER REVIEW

The concept of peer review was developed long before the scholarly journal. In fact, the peer review process is thought to have been used as a method of evaluating written work since ancient Greece ( 2 ). The peer review process was first described by a physician named Ishaq bin Ali al-Rahwi of Syria, who lived from 854-931 CE, in his book Ethics of the Physician ( 2 ). There, he stated that physicians must take notes describing the state of their patients’ medical conditions upon each visit. Following treatment, the notes were scrutinized by a local medical council to determine whether the physician had met the required standards of medical care. If the medical council deemed that the appropriate standards were not met, the physician in question could receive a lawsuit from the maltreated patient ( 2 ).

The invention of the printing press in 1453 allowed written documents to be distributed to the general public ( 3 ). At this time, it became more important to regulate the quality of the written material that became publicly available, and editing by peers increased in prevalence. In 1620, Francis Bacon wrote the work Novum Organum, where he described what eventually became known as the first universal method for generating and assessing new science ( 3 ). His work was instrumental in shaping the Scientific Method ( 3 ). In 1665, the French Journal des sçavans and the English Philosophical Transactions of the Royal Society were the first scientific journals to systematically publish research results ( 4 ). Philosophical Transactions of the Royal Society is thought to be the first journal to formalize the peer review process in 1665 ( 5 ), however, it is important to note that peer review was initially introduced to help editors decide which manuscripts to publish in their journals, and at that time it did not serve to ensure the validity of the research ( 6 ). It did not take long for the peer review process to evolve, and shortly thereafter papers were distributed to reviewers with the intent of authenticating the integrity of the research study before publication. The Royal Society of Edinburgh adhered to the following peer review process, published in their Medical Essays and Observations in 1731: “Memoirs sent by correspondence are distributed according to the subject matter to those members who are most versed in these matters. The report of their identity is not known to the author.” ( 7 ). The Royal Society of London adopted this review procedure in 1752 and developed the “Committee on Papers” to review manuscripts before they were published in Philosophical Transactions ( 6 ).

Peer review in the systematized and institutionalized form has developed immensely since the Second World War, at least partly due to the large increase in scientific research during this period ( 7 ). It is now used not only to ensure that a scientific manuscript is experimentally and ethically sound, but also to determine which papers sufficiently meet the journal’s standards of quality and originality before publication. Peer review is now standard practice by most credible scientific journals, and is an essential part of determining the credibility and quality of work submitted.

IMPACT OF THE PEER REVIEW PROCESS

Peer review has become the foundation of the scholarly publication system because it effectively subjects an author’s work to the scrutiny of other experts in the field. Thus, it encourages authors to strive to produce high quality research that will advance the field. Peer review also supports and maintains integrity and authenticity in the advancement of science. A scientific hypothesis or statement is generally not accepted by the academic community unless it has been published in a peer-reviewed journal ( 8 ). The Institute for Scientific Information ( ISI ) only considers journals that are peer-reviewed as candidates to receive Impact Factors. Peer review is a well-established process which has been a formal part of scientific communication for over 300 years.

OVERVIEW OF THE PEER REVIEW PROCESS

The peer review process begins when a scientist completes a research study and writes a manuscript that describes the purpose, experimental design, results, and conclusions of the study. The scientist then submits this paper to a suitable journal that specializes in a relevant research field. The editors of the journal review the paper to ensure that the subject matter is in line with that of the journal and that it fits the editorial platform; papers that fail this initial screening are rejected without being sent out for external review. If the journal editors feel the paper sufficiently meets these requirements and is written by a credible source, they will send the paper to accomplished researchers in the field for a formal peer review. Peer reviewers are also known as referees (this process is summarized in Figure 1). The role of the editor is to select the most appropriate manuscripts for the journal, and to implement and monitor the peer review process. Editors must ensure that peer reviews are conducted fairly, and in an effective and timely manner. They must also ensure that there are no conflicts of interest involved in the peer review process.

Figure 1. Overview of the review process.

When a reviewer is provided with a paper, he or she scrutinizes it carefully to evaluate the validity of the science, the quality of the experimental design, and the appropriateness of the methods used. The reviewer also assesses the significance of the research, and judges whether the work will contribute to advancement in the field by evaluating the importance of the findings and determining the originality of the research. Additionally, reviewers identify any scientific errors and references that are missing or incorrect. Peer reviewers give recommendations to the editor regarding whether the paper should be accepted, rejected, or improved before publication in the journal. The editor will mediate author-referee discussion in order to clarify the priority of certain referee requests, suggest areas that can be strengthened, and overrule reviewer recommendations that are beyond the study’s scope ( 9 ). If the paper is accepted on the reviewers’ recommendation, it goes into the production stage, where it is copy-edited and formatted by the editors, and finally published in the scientific journal. An overview of the review process is presented in Figure 1.
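The workflow just described, an editorial screen followed by referee recommendations and an editorial decision, can be sketched in a few lines of Python. This is only an illustrative model of the steps in the text; the function names, manuscript fields and decision rules below are assumptions made for the example, not any journal’s actual procedure.

def editorial_screen(manuscript, journal_scope):
    # The editor first checks that the subject matter fits the journal.
    return manuscript["topic"] in journal_scope

def referee_recommendation(manuscript):
    # Each referee weighs validity, design and originality and returns
    # "accept", "revise" or "reject".
    checks = [
        manuscript["methods_sound"],
        manuscript["results_support_conclusions"],
        manuscript["original_contribution"],
    ]
    if all(checks):
        return "accept"
    if any(checks):
        return "revise"
    return "reject"

def peer_review_process(manuscript, journal_scope, n_referees=3):
    if not editorial_screen(manuscript, journal_scope):
        return "rejected without external review"
    recommendations = [referee_recommendation(manuscript) for _ in range(n_referees)]
    # The editor mediates the referees' recommendations and makes the final call.
    if recommendations.count("reject") > n_referees / 2:
        return "rejected after review"
    if "revise" in recommendations:
        return "revise and resubmit"
    return "accepted for production"

example = {
    "topic": "clinical chemistry",
    "methods_sound": True,
    "results_support_conclusions": True,
    "original_contribution": False,
}
print(peer_review_process(example, journal_scope={"clinical chemistry"}))
# prints "revise and resubmit"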

WHO CONDUCTS REVIEWS?

Peer reviews are conducted by scientific experts with specialized knowledge on the content of the manuscript, as well as by scientists with a more general knowledge base. Peer reviewers can be anyone who has competence and expertise in the subject areas that the journal covers. Reviewers can range from young and up-and-coming researchers to old masters in the field. Often, the young reviewers are the most responsive and deliver the best quality reviews, though this is not always the case. On average, a reviewer will conduct approximately eight reviews per year, according to a study on peer review by the Publishing Research Consortium (PRC) ( 7 ). Journals will often have a pool of reviewers with diverse backgrounds to allow for many different perspectives. They will also keep a rather large reviewer bank, so that reviewers do not get burnt out, overwhelmed or time constrained from reviewing multiple articles simultaneously.

WHY DO REVIEWERS REVIEW?

Referees are typically not paid to conduct peer reviews and the process takes considerable effort, so the question is raised as to what incentive referees have to review at all. Some feel an academic duty to perform reviews, and are of the mentality that if their peers are expected to review their papers, then they should review the work of their peers as well. Reviewers may also have personal contacts with editors, and may want to assist as much as possible. Others review to keep up-to-date with the latest developments in their field, and reading new scientific papers is an effective way to do so. Some scientists use peer review as an opportunity to advance their own research as it stimulates new ideas and allows them to read about new experimental techniques. Other reviewers are keen on building associations with prestigious journals and editors and becoming part of their community, as sometimes reviewers who show dedication to the journal are later hired as editors. Some scientists see peer review as a chance to become aware of the latest research before their peers, and thus be first to develop new insights from the material. Finally, in terms of career development, peer reviewing can be desirable as it is often noted on one’s resume or CV. Many institutions consider a researcher’s involvement in peer review when assessing their performance for promotions ( 11 ). Peer reviewing can also be an effective way for a scientist to show their superiors that they are committed to their scientific field ( 5 ).

ARE REVIEWERS KEEN TO REVIEW?

A 2009 international survey of 4000 peer reviewers conducted by the charity Sense About Science at the British Science Festival at the University of Surrey, found that 90% of reviewers were keen to peer review ( 12 ). One third of respondents to the survey said they were happy to review up to five papers per year, and an additional one third of respondents were happy to review up to ten.

HOW LONG DOES IT TAKE TO REVIEW ONE PAPER?

On average, it takes approximately six hours to review one paper ( 12 ), however, this number may vary greatly depending on the content of the paper and the nature of the peer reviewer. One in every 100 participants in the “Sense About Science” survey claims to have taken more than 100 hours to review their last paper ( 12 ).

HOW TO DETERMINE IF A JOURNAL IS PEER REVIEWED

Ulrichsweb is a directory that provides information on over 300,000 periodicals, including information regarding which journals are peer reviewed ( 13 ). After logging into the system using an institutional login (eg. from the University of Toronto), search terms, journal titles or ISSN numbers can be entered into the search bar. The database provides the title, publisher, and country of origin of the journal, and indicates whether the journal is still actively publishing. The black book symbol (labelled ‘refereed’) reveals that the journal is peer reviewed.
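For readers working from an exported list of journal records rather than the web interface, the same check can be sketched in a few lines of Python. The file name and column names below (“issn”, “refereed”) are illustrative assumptions for the example, not Ulrichsweb’s actual schema.

import csv

def is_peer_reviewed(issn, path="journal_directory.csv"):
    # Look up a journal by ISSN in a hypothetical exported directory and
    # report whether its 'refereed' flag is set.
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["issn"] == issn:
                return row["refereed"].strip().lower() == "yes"
    return None  # journal not found in the export

status = is_peer_reviewed("1234-5678")
print({True: "refereed", False: "not refereed", None: "not found"}[status])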

THE EVALUATION CRITERIA FOR PEER REVIEW OF SCIENTIFIC PAPERS

As previously mentioned, when a reviewer receives a scientific manuscript, he/she will first determine if the subject matter is well suited for the content of the journal. The reviewer will then consider whether the research question is important and original, a process which may be aided by a literature scan of review articles.

Scientific papers submitted for peer review usually follow a specific structure that begins with the title, followed by the abstract, introduction, methodology, results, discussion, conclusions, and references. The title must be descriptive and include the concept and organism investigated, and potentially the variable manipulated and the systems used in the study. The peer reviewer evaluates if the title is descriptive enough, and ensures that it is clear and concise. A readership study by the journal Nucleic Acids Research (NAR), published by Oxford University Press in 2006, indicated that the title of a manuscript plays a significant role in determining reader interest, as 72% of respondents said they could usually judge whether an article will be of interest to them based on the title and the author, while 13% of respondents claimed to always be able to do so ( 14 ).

The abstract is a summary of the paper, which briefly mentions the background or purpose, methods, key results, and major conclusions of the study. The peer reviewer assesses whether the abstract is sufficiently informative and if the content of the abstract is consistent with the rest of the paper. The NAR study indicated that 40% of respondents could determine whether an article would be of interest to them based on the abstract alone 60-80% of the time, while 32% could judge an article based on the abstract 80-100% of the time ( 14 ). This demonstrates that the abstract alone is often used to assess the value of an article.

The introduction of a scientific paper presents the research question in the context of what is already known about the topic, in order to identify why the question being studied is of interest to the scientific community, and what gap in knowledge the study aims to fill ( 15 ). The introduction identifies the study’s purpose and scope, briefly describes the general methods of investigation, and outlines the hypothesis and predictions ( 15 ). The peer reviewer determines whether the introduction provides sufficient background information on the research topic, and ensures that the research question and hypothesis are clearly identifiable.

The methods section describes the experimental procedures, and explains why each experiment was conducted. The methods section also includes the equipment and reagents used in the investigation. The methods section should be detailed enough that other researchers can use it to repeat the experiment ( 15 ). Methods are written in the past tense and in the active voice. The peer reviewer assesses whether the appropriate methods were used to answer the research question, and if they were written with sufficient detail. If information is missing from the methods section, it is the peer reviewer’s job to identify what details need to be added.

The results section is where the outcomes of the experiment and trends in the data are explained without judgement, bias or interpretation ( 15 ). This section can include statistical tests performed on the data, as well as figures and tables in addition to the text. The peer reviewer ensures that the results are described with sufficient detail, and determines their credibility. Reviewers also confirm that the text is consistent with the information presented in tables and figures, and that all figures and tables included are important and relevant ( 15 ). The peer reviewer will also make sure that table and figure captions are appropriate both contextually and in length, and that tables and figures present the data accurately.

The discussion section is where the data is analyzed. Here, the results are interpreted and related to past studies ( 15 ). The discussion describes the meaning and significance of the results in terms of the research question and hypothesis, and states whether the hypothesis was supported or rejected. This section may also provide possible explanations for unusual results and suggestions for future research ( 15 ). The discussion should end with a conclusions section that summarizes the major findings of the investigation. The peer reviewer determines whether the discussion is clear and focused, and whether the conclusions are an appropriate interpretation of the results. Reviewers also ensure that the discussion addresses the limitations of the study, any anomalies in the results, the relationship of the study to previous research, and the theoretical implications and practical applications of the study.

The references are found at the end of the paper, and list all of the information sources cited in the text to describe the background, methods, and/or interpret results. Depending on the citation method used, the references are listed in alphabetical order according to author last name, or numbered according to the order in which they appear in the paper. The peer reviewer ensures that references are used appropriately, cited accurately, formatted correctly, and that none are missing.

Finally, the peer reviewer determines whether the paper is clearly written and if the content seems logical. After thoroughly reading the entire manuscript, they judge whether it meets the journal’s standards for publication, and whether it falls within the top 25% of papers in its field ( 16 ), which establishes its priority for publication. An overview of what a peer reviewer looks for when evaluating a manuscript, in order of importance, is presented in Figure 2.
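The section-by-section criteria described above can be condensed into a simple checklist. The structure below is an illustrative summary written for this article, not a formal journal rubric.

REVIEW_CHECKLIST = {
    "title": ["descriptive", "clear and concise"],
    "abstract": ["summarizes purpose, methods, key results and conclusions",
                 "consistent with the rest of the paper"],
    "introduction": ["sufficient background",
                     "research question and hypothesis clearly identifiable"],
    "methods": ["appropriate for the research question",
                "detailed enough to be repeated"],
    "results": ["described without interpretation",
                "text consistent with tables and figures"],
    "discussion": ["conclusions follow from the results",
                   "limitations and relation to prior work addressed"],
    "references": ["used appropriately", "cited accurately", "none missing"],
    "overall": ["clearly written", "logically organized",
                "meets the journal's standards"],
}

def format_review_notes(unmet):
    # unmet maps a section name to the criteria the reviewer found lacking.
    return "\n".join(f"{section}: please address '{item}'"
                     for section, items in unmet.items() for item in items)

print(format_review_notes({"methods": ["detailed enough to be repeated"]}))
# prints "methods: please address 'detailed enough to be repeated'"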

Figure 2. How a peer reviewer evaluates a manuscript.

To increase the chance of success in the peer review process, the author must ensure that the paper fully complies with the journal guidelines before submission. The author must also be open to criticism and suggested revisions, and learn from mistakes made in previous submissions.

ADVANTAGES AND DISADVANTAGES OF THE DIFFERENT TYPES OF PEER REVIEW

The peer review process is generally conducted in one of three ways: open review, single-blind review, or double-blind review. In an open review, both the author of the paper and the peer reviewer know one another’s identity. Alternatively, in single-blind review, the reviewer’s identity is kept private, but the author’s identity is revealed to the reviewer. In double-blind review, the identities of both the reviewer and author are kept anonymous. Open peer review is advantageous in that it prevents the reviewer from leaving malicious comments, being careless, or procrastinating completion of the review ( 2 ). It encourages reviewers to be open and honest without being disrespectful. Open reviewing also discourages plagiarism amongst authors ( 2 ). On the other hand, open peer review can also prevent reviewers from being honest for fear of developing bad rapport with the author. The reviewer may withhold or tone down their criticisms in order to be polite ( 2 ). This is especially true when younger reviewers are given a more esteemed author’s work, in which case the reviewer may be hesitant to provide criticism for fear that it will damage their relationship with a superior ( 2 ). According to the Sense About Science survey, editors find that completely open reviewing decreases the number of people willing to participate, and leads to reviews of little value ( 12 ). In the aforementioned study by the PRC, only 23% of authors surveyed had experience with open peer review ( 7 ).

Single-blind peer review is by far the most common. In the PRC study, 85% of authors surveyed had experience with single-blind peer review ( 7 ). This method is advantageous as the reviewer is more likely to provide honest feedback when their identity is concealed ( 2 ). This allows the reviewer to make independent decisions without the influence of the author ( 2 ). The main disadvantage of reviewer anonymity, however, is that reviewers who receive manuscripts on subjects similar to their own research may be tempted to delay completing the review in order to publish their own data first ( 2 ).

Double-blind peer review is advantageous as it prevents the reviewer from being biased against the author based on their country of origin or previous work ( 2 ). This allows the paper to be judged based on the quality of the content, rather than the reputation of the author. The Sense About Science survey indicates that 76% of researchers think double-blind peer review is a good idea ( 12 ), and the PRC survey indicates that 45% of authors have had experience with double-blind peer review ( 7 ). The disadvantage of double-blind peer review is that, especially in niche areas of research, it can sometimes be easy for the reviewer to determine the identity of the author based on writing style, subject matter or self-citation, and thus, impart bias ( 2 ).

Masking the author’s identity from peer reviewers, as is the case in double-blind review, is generally thought to minimize bias and maintain review quality. A study by Justice et al. in 1998 investigated whether masking author identity affected the quality of the review ( 17 ). One hundred and eighteen manuscripts were randomized; 26 were peer reviewed as normal, and 92 were moved into the ‘intervention’ arm, where editor quality assessments were completed for 77 manuscripts and author quality assessments were completed for 40 manuscripts ( 17 ). There was no perceived difference in quality between the masked and unmasked reviews. Additionally, the masking itself was often unsuccessful, especially with well-known authors ( 17 ). However, a previous study conducted by McNutt et al. had different results ( 18 ). In this case, blinding was successful 73% of the time, and they found that when author identity was masked, the quality of review was slightly higher ( 18 ). Although Justice et al. argued that this difference was too small to be consequential, their study targeted only biomedical journals, and the results cannot be generalized to journals of a different subject matter ( 17 ). Additionally, there were problems masking the identities of well-known authors, introducing a flaw in the methods. Regardless, Justice et al. concluded that masking author identity from reviewers may not improve review quality ( 17 ).

In addition to open, single-blind and double-blind peer review, there are two experimental forms of peer review. In some cases, following publication, papers may be subjected to post-publication peer review. As many papers are now published online, the scientific community has the opportunity to comment on these papers, engage in online discussions and post a formal review. For example, online publishers PLOS and BioMed Central have enabled scientists to post comments on published papers if they are registered users of the site ( 10 ). Philica is another journal launched with this experimental form of peer review. Only 8% of authors surveyed in the PRC study had experience with post-publication review ( 7 ). Another experimental form of peer review called Dynamic Peer Review has also emerged. Dynamic peer review is conducted on websites such as Naboj, which allow scientists to conduct peer reviews on articles in the preprint media ( 19 ). The peer review is conducted on repositories and is a continuous process, which allows the public to see both the article and the reviews as the article is being developed ( 19 ). Dynamic peer review helps prevent plagiarism as the scientific community will already be familiar with the work before the peer reviewed version appears in print ( 19 ). Dynamic review also reduces the time lag between manuscript submission and publishing. An example of a preprint server is the ‘arXiv’ developed by Paul Ginsparg in 1991, which is used primarily by physicists ( 19 ). These alternative forms of peer review are still experimental and not yet established. Traditional peer review is time-tested and still highly utilized. All methods of peer review have their advantages and deficiencies, and all are prone to error.

PEER REVIEW OF OPEN ACCESS JOURNALS

Open access (OA) journals are becoming increasingly popular as they allow the potential for widespread distribution of publications in a timely manner ( 20 ). Nevertheless, there can be issues regarding the peer review process of open access journals. In a study published in Science in 2013, John Bohannon submitted 304 slightly different versions of a fictional scientific paper (written by a fake author, working out of a non-existent institution) to a selected group of OA journals. This study was performed in order to determine whether papers submitted to OA journals are properly reviewed before publication in comparison to subscription-based journals. The journals in this study were selected from the Directory of Open Access Journals (DOAJ) and Beall’s List, a list of journals that are potentially predatory, and all required a fee for publishing ( 21 ). Of the 304 journals, 157 accepted the fake paper, suggesting that acceptance was based on financial interest rather than the quality of the article itself, while 98 journals promptly rejected the fakes ( 21 ). Although this study highlights useful information on the problems associated with lower quality publishers that do not have an effective peer review system in place, the article also generalizes the study results to all OA journals, which can be detrimental to the general perception of OA journals. There were two limitations of the study that made it impossible to accurately determine the relationship between peer review and OA journals: 1) there was no control group (subscription-based journals), and 2) the fake papers were sent to a non-randomized selection of journals, introducing selection bias.

JOURNAL ACCEPTANCE RATES

Based on a recent survey, the average acceptance rate for papers submitted to scientific journals is about 50% ( 7 ). Of all submitted manuscripts, roughly 20% are rejected prior to review and 30% are rejected following review ( 7 ). Of the 50% that are accepted, most (41% of all submissions) are accepted on the condition of revision, while only 9% are accepted without any request for revision ( 7 ).
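Read as shares of all submissions, these figures are internally consistent:

\[ \underbrace{41\% + 9\%}_{\text{accepted, with and without revision}} \;+\; \underbrace{20\% + 30\%}_{\text{rejected, before and after review}} \;=\; 100\%. \]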

SATISFACTION WITH THE PEER REVIEW SYSTEM

Based on a recent survey by the PRC, 64% of academics are satisfied with the current system of peer review, and only 12% claimed to be ‘dissatisfied’ ( 7 ). The large majority, 85%, agreed with the statement that ‘scientific communication is greatly helped by peer review’ ( 7 ). There was a similarly high level of support (83%) for the idea that peer review ‘provides control in scientific communication’ ( 7 ).

HOW TO PEER REVIEW EFFECTIVELY

The following are ten tips on how to be an effective peer reviewer as indicated by Brian Lucey, an expert on the subject ( 22 ):

1) Be professional

Peer review is a mutual responsibility among fellow scientists, and scientists are expected, as part of the academic community, to take part in peer review. If one is to expect others to review their work, they should commit to reviewing the work of others as well, and put effort into it.

2) Be pleasant

If the paper is of low quality, suggest that it be rejected, but do not leave ad hominem comments. There is no benefit to being ruthless.

3) Read the invite

When a journal emails a scientist to ask them to conduct a peer review, it will usually provide a link to either accept or decline the invitation. Do not respond to the email; respond through the link.

4) Be helpful

Suggest how the authors can overcome the shortcomings in their paper. A review should guide the author on what is good and what needs work from the reviewer’s perspective.

5) Be scientific

The peer reviewer plays the role of a scientific peer, not an editor for proofreading or decision-making. Don’t fill a review with comments on editorial and typographic issues. Instead, focus on adding value with scientific knowledge and commenting on the credibility of the research conducted and conclusions drawn. If the paper has a lot of typographical errors, suggest that it be professionally proof edited as part of the review.

6) Be timely

Stick to the timeline given when conducting a peer review. Editors track who is reviewing what and when and will know if someone is late on completing a review. It is important to be timely both out of respect for the journal and the author, as well as to not develop a reputation of being late for review deadlines.

7) Be realistic

The peer reviewer must be realistic about the work presented, the changes they suggest and their role. Peer reviewers may set the bar too high for the paper they are reviewing by proposing changes that are too ambitious, forcing editors to override them.

8) Be empathetic

Ensure that the review is scientific, helpful and courteous. Be sensitive and respectful with word choice and tone in a review.

9) Be open

Remember that both specialists and generalists can provide valuable insight when peer reviewing. Editors will try to get both specialised and general reviewers for any particular paper to allow for different perspectives. If someone is asked to review, the editor has determined they have a valid and useful role to play, even if the paper is not in their area of expertise.

10) Be organised

A review requires structure and logical flow. A reviewer should proofread their review for structural, grammatical and spelling errors, as well as for clarity, before submitting it. Most publishers provide short guides on structuring a peer review on their website. Begin with an overview of the proposed improvements; then provide feedback on the paper structure, the quality of data sources and methods of investigation used, the logical flow of argument, and the validity of conclusions drawn. Then provide feedback on style, voice and lexical concerns, with suggestions on how to improve.

In addition, the American Physiological Society (APS) recommends in its Peer Review 101 Handout that peer reviewers should put themselves in both the editor’s and author’s shoes to ensure that they provide what both the editor and the author need and expect ( 11 ). To please the editor, the reviewer should ensure that the peer review is completed on time, and that it provides clear explanations to back up recommendations. To be helpful to the author, the reviewer must ensure that their feedback is constructive. It is suggested that the reviewer take time to think about the paper; they should read it once, wait at least a day, and then re-read it before writing the review ( 11 ). The APS also suggests that graduate students and researchers pay attention to how peer reviewers edit their work, as well as to what edits they find helpful, in order to learn how to peer review effectively ( 11 ). Additionally, it is suggested that graduate students practice reviewing by editing their peers’ papers and asking a faculty member for feedback on their efforts. It is recommended that young scientists offer to peer review as often as possible in order to become skilled at the process ( 11 ). The majority of students, fellows and trainees do not get formal training in peer review, but rather learn by observing their mentors. According to the APS, one acquires experience through networking and referrals, and should therefore try to strengthen relationships with journal editors by offering to review manuscripts ( 11 ). The APS also suggests that experienced reviewers provide constructive feedback to students and junior colleagues on their peer review efforts, and encourages them to peer review to demonstrate the importance of this process in improving science ( 11 ).

The peer reviewer should only comment on areas of the manuscript that they are knowledgeable about ( 23 ). If there is any section of the manuscript they feel they are not qualified to review, they should mention this in their comments and not provide further feedback on that section. The peer reviewer is not permitted to share any part of the manuscript with a colleague (even if they may be more knowledgeable in the subject matter) without first obtaining permission from the editor ( 23 ). If a peer reviewer comes across something they are unsure of in the paper, they can consult the literature to try and gain insight. It is important for scientists to remember that if a paper can be improved by the expertise of one of their colleagues, the journal must be informed of the colleague’s help, and approval must be obtained for their colleague to read the protected document. Additionally, the colleague must be identified in the confidential comments to the editor, in order to ensure that he/she is appropriately credited for any contributions ( 23 ). It is the job of the reviewer to make sure that the colleague assisting is aware of the confidentiality of the peer review process ( 23 ). Once the review is complete, the manuscript must be destroyed and cannot be saved electronically by the reviewers ( 23 ).

COMMON ERRORS IN SCIENTIFIC PAPERS

When performing a peer review, there are some common scientific errors to look out for. Most of these errors are violations of logic and common sense: these may include contradicting statements, unwarranted conclusions, suggestion of causation when there is only support for correlation, inappropriate extrapolation, circular reasoning, or pursuit of a trivial question ( 24 ). It is also common for authors to suggest that two variables are different because the effects of one variable are statistically significant while the effects of the other variable are not, rather than directly comparing the two variables ( 24 ). Authors sometimes overlook a confounding variable and do not control for it, or forget to include important details on how their experiments were controlled or the physical state of the organisms studied ( 24 ). Another common fault is the author’s failure to define terms or use words with precision, as these practices can mislead readers ( 24 ). Jargon and/or misused terms can be a serious problem in papers. Inaccurate statements about specific citations are also a common occurrence ( 24 ). Additionally, many studies produce knowledge that can be applied to areas of science outside the scope of the original study; therefore, it is better for reviewers to look at the novelty of the idea, conclusions, data, and methodology, rather than scrutinize whether or not the paper answered the specific question at hand ( 24 ). Although it is important to recognize these points, when performing a review it is generally better practice for the peer reviewer to not focus on a checklist of things that could be wrong, but rather carefully identify the problems specific to each paper and continuously ask themselves if anything is missing ( 24 ). An extremely detailed description of how to conduct peer review effectively is presented in the paper How I Review an Original Scientific Article written by Frederic G. Hoppin, Jr. It can be accessed through the American Physiological Society website under the Peer Review Resources section.

CRITICISM OF PEER REVIEW

A major criticism of peer review is that there is little evidence that the process actually works: that it is an effective screen for good-quality scientific work, and that it improves the quality of the scientific literature. As a 2002 study published in the Journal of the American Medical Association concluded, ‘Editorial peer review, although widely used, is largely untested and its effects are uncertain’ ( 25 ). Critics also argue that peer review is not effective at detecting errors. Highlighting this point, an experiment by Godlee et al. published in the British Medical Journal (BMJ) inserted eight deliberate errors into a paper that was nearly ready for publication, and then sent the paper to 420 potential reviewers ( 7 ). Of the 420 reviewers who received the paper, 221 (53%) responded; the average number of errors spotted by reviewers was two, no reviewer spotted more than five errors, and 35 reviewers (16%) did not spot any.
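These percentages line up with the raw counts, assuming the 16% figure is expressed relative to the 221 responding reviewers rather than all 420 invited:

\[ \frac{221}{420} \approx 53\%, \qquad \frac{35}{221} \approx 16\%, \qquad \frac{2}{8} = 25\% \text{ of the inserted errors spotted by the average respondent.} \]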

Another criticism of peer review is that the process is not conducted thoroughly by scientific conferences whose goal is to obtain large numbers of submitted papers. Such conferences often accept any paper sent in, regardless of its credibility or the prevalence of errors, because the more papers they accept, the more money they can make from author registration fees ( 26 ). This misconduct was exposed by three MIT graduate students, Jeremy Stribling, Dan Aguayo and Maxwell Krohn, who developed a simple computer program called SCIgen that generates nonsense papers and presents them as scientific papers ( 26 ). Subsequently, a nonsense SCIgen paper submitted to a conference was promptly accepted. In 2014, Nature reported that French researcher Cyril Labbé had discovered sixteen SCIgen nonsense papers published by the German academic publisher Springer ( 26 ). Over 100 nonsense papers generated by SCIgen were published by the US Institute of Electrical and Electronic Engineers (IEEE) ( 26 ). Both organisations have been working to remove the papers. Labbé developed a program to detect SCIgen papers and has made it freely available to help publishers and conference organizers avoid accepting nonsense work in the future. It is available at this link: http://scigendetect.on.imag.fr/main.php ( 26 ).

Additionally, peer review is often criticized for being unable to accurately detect plagiarism. However, many believe that detecting plagiarism cannot practically be included as a component of peer review. As explained by Alice Tuff, development manager at Sense About Science, ‘The vast majority of authors and reviewers think peer review should detect plagiarism (81%) but only a minority (38%) think it is capable. The academic time involved in detecting plagiarism through peer review would cause the system to grind to a halt’ ( 27 ). Publishing house Elsevier began developing electronic plagiarism tools with the help of journal editors in 2009 to help improve this issue ( 27 ).

It has also been argued that peer review has lowered research quality by limiting creativity amongst researchers. Proponents of this view claim that peer review has repressed scientists from pursuing innovative research ideas and bold research questions that have the potential to make major advances and paradigm shifts in the field, as they believe that this work will likely be rejected by their peers upon review ( 28 ). Indeed, in some cases peer review may result in rejection of innovative research, as some studies may not seem particularly strong initially, yet may be capable of yielding very interesting and useful developments when examined under different circumstances, or in the light of new information ( 28 ). Scientists that do not believe in peer review argue that the process stifles the development of ingenious ideas, and thus the release of fresh knowledge and new developments into the scientific community.

Another issue for which peer review is criticized is that there are a limited number of people competent to conduct peer review compared to the vast number of papers that need reviewing. An enormous number of papers are published each year (1.3 million papers in 23,750 journals in 2006), far more than the available pool of competent peer reviewers could review ( 29 ). Thus, people who lack the required expertise to analyze the quality of a research paper are conducting reviews, and weak papers are being accepted as a result. It is now possible to publish any paper in an obscure journal that claims to be peer-reviewed, though the paper or journal itself could be substandard ( 29 ). On a similar note, the US National Library of Medicine indexes 39 journals that specialize in alternative medicine, and though they all identify themselves as “peer-reviewed”, they rarely publish any high quality research ( 29 ). This highlights the fact that peer review of more controversial or specialized work is typically performed by people who are interested and hold similar views or opinions as the author, which can cause bias in their review. For instance, a paper on homeopathy is likely to be reviewed by fellow practicing homeopaths, and thus is likely to be accepted as credible, though other scientists may find the paper to be nonsense ( 29 ). In some cases, papers are initially published, but their credibility is challenged at a later date and they are subsequently retracted. Retraction Watch is a website dedicated to revealing papers that have been retracted after publishing, potentially due to improper peer review ( 30 ).

Additionally, despite its many positive outcomes, peer review is also criticized for delaying the dissemination of new knowledge into the scientific community, and for being an unpaid activity that takes scientists’ time away from activities that they would otherwise prioritize, such as research and teaching, for which they are paid ( 31 ). As described by Eva Amsen, Outreach Director for F1000Research, peer review was originally developed as a means of helping editors choose which papers to publish when journals had to limit the number of papers they could print in one issue ( 32 ). However, nowadays most journals are available online, either exclusively or in addition to print, and many journals have very limited printing runs ( 32 ). Since there are no longer page limits to journals, any good work can and should be published. Consequently, being selective for the purpose of saving space in a journal is no longer a valid excuse that peer reviewers can use to reject a paper ( 32 ). However, some reviewers have used this excuse when they have personal ulterior motives, such as getting their own research published first.

RECENT INITIATIVES TOWARDS IMPROVING PEER REVIEW

F1000Research was launched in January 2013 by Faculty of 1000 as an open access journal that immediately publishes papers (after an initial check to ensure that the paper is in fact produced by a scientist and has not been plagiarised), and then conducts transparent post-publication peer review ( 32 ). F1000Research aims to prevent delays in new science reaching the academic community that are caused by prolonged publication times ( 32 ). It also aims to make peer reviewing more fair by eliminating any anonymity, which prevents reviewers from delaying the completion of a review so they can publish their own similar work first ( 32 ). F1000Research offers completely open peer review, where everything is published, including the name of the reviewers, their review reports, and the editorial decision letters ( 32 ).

PeerJ was founded by Jason Hoyt and Peter Binfield in June 2012 as an open access, peer reviewed scholarly journal for the Biological and Medical Sciences ( 33 ). PeerJ selects articles to publish based only on scientific and methodological soundness, not on subjective determinants of ‘impact’, ‘novelty’ or ‘interest’ ( 34 ). It works on a “lifetime publishing plan” model which charges scientists for publishing plans that give them lifetime rights to publish with PeerJ, rather than charging them per publication ( 34 ). PeerJ also encourages open peer review, and authors are given the option to post the full peer review history of their submission with their published article ( 34 ). PeerJ also offers a pre-print review service called PeerJ Pre-prints, in which paper drafts are reviewed before being sent to PeerJ to publish ( 34 ).

Rubriq is an independent peer review service designed by Shashi Mudunuri and Keith Collier to improve the peer review system ( 35 ). Rubriq is intended to decrease redundancy in the peer review process so that the time lost in redundant reviewing can be put back into research ( 35 ). According to Keith Collier, over 15 million hours are lost each year to redundant peer review, as papers get rejected from one journal and are subsequently submitted to a less prestigious journal where they are reviewed again ( 35 ). Authors often have to submit their manuscript to multiple journals, and are often rejected multiple times before they find the right match. This process could take months or even years ( 35 ). Rubriq makes peer review portable in order to help authors choose the journal that is best suited for their manuscript from the beginning, thus reducing the time before their paper is published ( 35 ). Rubriq operates under an author-pay model, in which the author pays a fee and their manuscript undergoes double-blind peer review by three expert academic reviewers using a standardized scorecard ( 35 ). The majority of the author’s fee goes towards a reviewer honorarium ( 35 ). The papers are also screened for plagiarism using iThenticate ( 35 ). Once the manuscript has been reviewed by the three experts, the most appropriate journal for submission is determined based on the topic and quality of the paper ( 35 ). The paper is returned to the author in 1-2 weeks with the Rubriq Report ( 35 ). The author can then submit their paper to the suggested journal with the Rubriq Report attached. The Rubriq Report will give the journal editors a much stronger incentive to consider the paper as it shows that three experts have recommended the paper to them ( 35 ). Rubriq also has its benefits for reviewers; the Rubriq scorecard gives structure to the peer review process, and thus makes it consistent and efficient, which decreases time and stress for the reviewer. Reviewers also receive feedback on their reviews and most significantly, they are compensated for their time ( 35 ). Journals also benefit, as they receive pre-screened papers, reducing the number of papers sent to their own reviewers, which often end up rejected ( 35 ). This can reduce reviewer fatigue, and allow only higher-quality articles to be sent to their peer reviewers ( 35 ).

According to Eva Amsen, peer review and scientific publishing are moving in a new direction, in which all papers will be posted online, and a post-publication peer review will take place that is independent of specific journal criteria and solely focused on improving paper quality ( 32 ). Journals will then choose papers that they find relevant based on the peer reviews and publish those papers as a collection ( 32 ). In this process, peer review and individual journals are uncoupled ( 32 ). In Keith Collier’s opinion, post-publication peer review is likely to become more prevalent as a complement to pre-publication peer review, but not as a replacement ( 35 ). Post-publication peer review will not serve to identify errors and fraud but will provide an additional measurement of impact ( 35 ). Collier also believes that as journals and publishers consolidate into larger systems, there will be stronger potential for “cascading” and shared peer review ( 35 ).

CONCLUDING REMARKS

Peer review has become fundamental in assisting editors in selecting credible, high quality, novel and interesting research papers to publish in scientific journals, and in ensuring the correction of any errors or issues present in submitted papers. Though the peer review process still has some flaws and deficiencies, a more suitable screening method for scientific papers has not yet been proposed or developed. Researchers have begun, and must continue, to look for ways of addressing the current issues with peer review so that it becomes as close to a foolproof system as possible, one that allows only quality research papers to be released into the scientific community.
