
Downloadable Templates and Tools for Clinical Research

Welcome to Global Health Trials' tools and templates library. Please note that this page was updated for 2015 following a quality check and review of the templates, and many new ones have been added. Please click on the orange text to download each template. The templates below have been shared by other groups and are free to use and adapt for your research studies. Please ensure that you read and adapt them carefully for your own setting, and that you reference Global Health Trials and The Global Health Network when you use them. To share your own templates and SOPs, or to comment on these, please email [email protected]. We look forward to hearing from you.

These templates and tools are ordered by category, so please scroll down to find what you need.





shewitdege

This is Degena Bahrey Tadesse from Tigray, Ethiopia. I am new to this website. I am an assistant professor in Adult Health Nursing. Could you share with me a sample/template research proposal for the Global Research Nurses Pump-priming Grants 2023: Research Project Award?

jo8281968517

I have learned a lot. Thanks.

yfarzi

I was wondering why there is no SOP on laboratory procedures?

kirannn14

Hi, can you provide me with the SOP for electronic signatures in clinical trials?

anupambendre

Do you have an "SOP for Telephonic Site Selection Visits"? Kindly share it to my registered email ID.

sguteta

Thank you for sharing the resources. It is very kind of you.

ericdortenzio

Hi, these tools are very useful! Thank you.

Do you have a task and responsibility matrix template for clinical trial management? Best

abdulkamara1986

I am very much happy to find myself here as a clinician

GHN_Editors

Dear Getrude

We have a free 14-module course on research ethics on our training centre; you'll receive a certificate if you complete all the modules and quizzes. You can take it in your own time. Just visit 'Training centre' in the tabs above, then 'short courses'.

Kind regards The Editorial Team

gamanyagg

I need modules for the free online GCP course on research ethics.

antropmcdiaz

Dear all: I think the contribution you have made is excellent, since it helps both to improve the transparency of the work and to facilitate its follow-up and supervision. Thank you very much for that.

We also have an up-to-date list of global health events available here: https://globalhealthtrials.tghn.org/community/training-events/

Dear Nazish

Thank you, I am glad you found the seminars and the training courses useful. We list many training events (all relevant to global health, and as many of them as possible either free or subsidised) on the 'community' web pages above. Keep an eye on those for events and activities you can get involved with. Also, if you post an 'introduction' in the introductions group stating where you are from and your research interests, we can keep you updated on relevant local events.

ndurran

Thanks so much. These are very helpful seminars. Please let me know of any other websites/links that provide free or inexpensive lectures on clinical research. I appreciate your help.

Hi Nazish, and welcome to the Network. The items here are downloadable templates for you to use; it sounds like you may be seeking lectures and eLearning courses? If so - no problem! You can find free seminars with sound and slides here: https://globalhealthtrainingcentre.tghn.org/webinars/ , and you can find free, certified eLearning courses here: https://globalhealthtrials.tghn.org/elearning . Certificates are awarded for the eLearning courses for those scoring over 80% in the quiz at the end of each course. If you need anything else, do ask! Kind regards The Editorial Team

Hi, I am new to this website and also to the clinical research industry, for that matter. I am only able to see the PDFs of these courses; I just wanted to know, are these audio lectures? And do you happen to have audio clips that go with the PDFs?

amanirak

This site is impeccable and very useful for my job!

Thank you for your kind comments.

shailajadr

Fantastic resources

dralinn

I am delighted you found this website. I earlier introduced it to you because of your prolific interest in health care information and resource sharing....


Useful Resources

Related articles.

  • PRISMA for Abstracts: Reporting Systematic Reviews in Journal and Conference Abstracts by Jai K Das
  • 5 ways statistics can fool you—Tips for practicing clinicians by Jai K Das
  • How to prepare for a job interview and predict the questions you'll be asked by The Editorial Team
  • Preparing for and Executing a Randomised Controlled Trial of Podoconiosis Treatment in Northern Ethiopia by Henok Negussie, Thomas Addissie, Adamu Addissie, Gail Davey
  • Dengue: Guidelines for Diagnosis, Treatment, Prevention and Control by WHO/TDR

20 Amazing health survey questions for questionnaires

Surveys are an excellent approach to acquiring data that isn't revealed by lab results or voiced in casual conversation. Patients can be reluctant to offer personal feedback face to face, but surveys allow them to do so confidently. Online surveys encourage communication by collecting opinions from patients, clients, and employees.

A person's health assessment plays a significant role in determining and evaluating their health status. Healthcare organizations frequently use health assessment survey questions to gather patient data more effectively, quickly, and conveniently. This article will explain what a health survey is, how to create one quickly on forms.app, and give examples of health survey questions you can use in your own survey.

  • What is a health survey?

Health surveys are a crucial and practical decision-making tool when creating a health plan. Health studies provide detailed information about the chronic illnesses that patients have, as well as about patient perspectives on health trends, lifestyle, and use of healthcare services.

A patient satisfaction survey is a collection of questions designed to get feedback from patients and gauge their satisfaction with the service and quality of their healthcare provider . The patient satisfaction survey questionnaire assists in identifying critical indicators for patient care that aid medical institutions in understanding the quality of treatment offered and potential service issues.


  • How to write better questions in your health survey

The proper application of a health survey is its most crucial component, and timing is critical. Patients in the hospital rarely have uninterrupted time to complete survey questions during their stay; instead, they should complete the surveys after their visit. Here are some tips on how to create good health survey questions:

1. Ask clear questions

In general, people avoid long and obscure surveys. Patients want to understand the questions clearly when sharing their views and ideas. If you keep the health survey questions clear and short , you can increase the number of respondents and get more effective results. To make your questions clear, you can add descriptions under question titles.


2. Use visual power

The use of visuals in surveys positively affects the number of participants. By using images in health surveys, you can enable patients to respond more quickly and accurately . For example, in a health survey question asking patients in which region they feel pain, visuals can make it easier for them to answer.


3. Reserve a section in the questionnaire for patient suggestions

Opinions and suggestions of patients are essential to improving the treatment, health, and hospital systems. In the last part of the questionnaire, you can ask the patients to present their opinions and suggestions . In this way, the patient can feel more important , and you can reach the patient's views more effectively .


4. Include the 'other' option in the answer choices

There may not be a suitable option for the patient among the answer choices. This may cause the patient to leave the question blank or give an incorrect answer. In this case, you can let patients write their own reply by adding an ' other ' option to the answer choices.


  • 20 excellent health survey question examples

A health survey question asks respondents about their general health and condition. Researchers can use these questions to gather data about a patient's overall health, disease risk factors, feelings about their medical care, and other relevant information.

A health survey effectively gathers information from a large population or a specific target group. You can collect critical data from the patient by asking the appropriate questions at the right time. Below are 20 great health survey question examples:

1  - How healthy do you feel on a scale of 1 to 10?

2  - How often do you go to the hospital?

  a) Once a week

  b) Once every two weeks

  c) Once a month

  d) Once every three months

  e) Once a year

  f) Other (Please write your answer)

3  - Do you have any chronic diseases?

  a) Yes 

  b) No 

4  - Do you have any genetic diseases?

  a) Diabetes

  b) High blood pressure

  c) Huntington

  d) Thalassemia

  e) Hemophilia

  f) Other (Please specify)

5  - Do you regularly use alcohol and/or drugs?

  a) Yes to both

  b) Only drugs

  c) Only alcohol

  d) No

6  - How frequently do you get your health checkup?

  a) Once in 2 months

  b) Once in 6 months

  c) Once a year

  d) Only when needed

  e) Never get it done

7  - Does anyone in your family have a hereditary disease?

  a) Yes

  b) No

8  - How often do you exercise?

  a) Every day

  b) Once in two days

  c) Once a week

  d) Once a month

  e) Never

9  - Have you had an allergic reaction or received treatment for it?

  a) Yes, I did. I also received treatment.

  b) I had it but did not receive treatment

  c) I've never had one.

10  - At what level of function can you carry out routine tasks?

  a) Excellent level

  b) Good level

  c) Intermediate level

  d) Bad level

  e) Terrible level

11  - Have you experienced depression or psychological distress in the last four weeks?

  a) Yes, very much

  b) Sometimes

  c) Never

12  - How much have your emotional issues impacted your interactions with friends and family over the past four weeks?

  a) It didn't affect me at all

  b) Very little

  c) Moderate

  d) Quite a bit

  e) Too much

13  - How would you rate your treatment process?

  a) Wonderful

  b) Above average

  c) Average

  d) Below average

  e) Very poor

14  - Do you use any medication regularly?

15  - Which medications have you used over the last 24 hours?

16  - How was the doctor's attitude towards you on a scale of 1 to 10?

17  - How do you rate the local hospitals in your area?

  a) Excellent

  b) Good

  c) Poor

18  - Please rate (1-10) your agreement with the following: Health insurance is affordable.

19  - Which of the following have you experienced pain in the past month?

  a) Heart

  b) Kidney

  c) Lung

  d) Stomach

  e) Other (Please specify)

20  - Do you recommend this health facility to your family and friends?

  a) Definitely yes

  b) Yes

  c) No  

  d) Definitely not
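Question lists like the one above can also be held in a simple data structure, so that a script can render the survey or validate responses. A minimal sketch in Python (the field names and the validate_answer helper are illustrative, not part of any form builder's API):

```python
# A survey as a list of question dicts: the question text, a type, and,
# for closed-ended questions, the allowed options. These field names are
# invented for illustration, not any particular form builder's API.

health_survey = [
    {"text": "How healthy do you feel on a scale of 1 to 10?",
     "type": "scale", "range": (1, 10)},
    {"text": "Do you have any chronic diseases?",
     "type": "single_choice", "options": ["Yes", "No"]},
    {"text": "Which of the following have you experienced pain in the past month?",
     "type": "multiple_choice",
     "options": ["Heart", "Kidney", "Lung", "Stomach", "Other"]},
]

def validate_answer(question, answer):
    """Check one response against its question definition."""
    if question["type"] == "scale":
        low, high = question["range"]
        return isinstance(answer, int) and low <= answer <= high
    if question["type"] == "single_choice":
        return answer in question["options"]
    if question["type"] == "multiple_choice":
        return all(a in question["options"] for a in answer)
    return False
```

Keeping the question text and the allowed answers in one place means the same definition can drive both the rendered form and the validation of returned responses.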

  • How to create a health survey on forms.app

forms.app is one of the best survey makers . It offers its users a wide variety of ready-to-use forms, surveys, and quizzes. The free health survey templates on forms.app are easy to use. Below is a step-by-step explanation of how to use forms.app to create a health questionnaire.

1  - Sign up or log in to forms.app : To create a health survey quickly and easily on forms.app, you must first log in. If you do not have an existing account, you can register quickly and for free.


2  - Choose a sample or start from scratch :  On forms.app, you can select from a wide selection of templates covering a wide range of topics. You can edit an existing survey template on forms.app by selecting it and making the necessary changes, or you can start with a blank form and add fields as you see fit.


3  - Select a theme or manually customize your form : You can also select a different theme from the many options offered by forms.app.


4  - Complete the settings : Finish the settings and save. After completing all the settings, the survey is ready to use! It can be saved and shared with respondents.


Free health survey templates

A hospital or health center can gather patients' feedback on their care and services by conducting a health survey. You can quickly and efficiently get answers from patients using the forms.app questionnaire you created. This survey tool enables medical professionals to pinpoint risk factors in the community surrounding hospitals or healthcare facilities, including prevalent health practices such as drug use, smoking, poor dietary choices, and inactivity.

Hospitals can determine whether patients' diagnoses are accurate and whether their medications are sufficient to treat them. If each patient is asked the right questions, these surveys will move more quickly and contribute to improving health services. You can get started using the free templates below.

Mental Health Quiz

Mental Health Evaluation Form

Telemental Health Consent Form Template

Sena is a content writer at forms.app. She likes to read and write articles on different topics. Sena also likes to learn about different cultures and travel. She likes to study and learn different languages. Her specialty is linguistics, surveys, survey questions, and sampling methods.


  • Open access
  • Published: 11 January 2010

Questionnaires in clinical trials: guidelines for optimal design and administration

  • Phil Edwards

Trials volume 11, Article number: 2 (2010)


A good questionnaire design for a clinical trial will minimise bias and maximise precision in the estimates of treatment effect within budget. Attempts to collect more data than will be analysed may risk reducing recruitment (reducing power) and increasing losses to follow-up (possibly introducing bias). The mode of administration can also impact on the cost, quality and completeness of data collected. There is good evidence for design features that improve data completeness but further research is required to evaluate strategies in clinical trials. Theory-based guidelines for style, appearance, and layout of self-administered questionnaires have been proposed but require evaluation.


Introduction

With fixed trial resources there will usually be a trade-off between the number of participants that can be recruited into a trial and the quality and quantity of information that can be collected from each participant [ 1 ]. Although half a century ago there was little empirical evidence for optimal questionnaire design, Bradford Hill suggested that for every question asked of a study participant the investigator should be required to answer three himself, perhaps to encourage the investigator to keep the number of questions to a minimum [ 2 ].

To assess the empirical evidence for how questionnaire length and other design features might influence data completeness in a clinical trial, a systematic review of randomised controlled trials (RCTs) was conducted, and has recently been updated [ 3 ]. The strategies found to be effective in increasing response to postal and electronic questionnaires are summarised in the section on increasing data completeness below.

Clinical trial investigators have also relied on principles of questionnaire design that do not have an established empirical basis, but which are nonetheless considered to present 'good practice', based on expert opinion. The section on questionnaire development below includes some of that advice and presents general guidelines for questionnaire development which may help investigators who are about to design a questionnaire for a clinical trial.

As this paper concerns the collection of outcome data by questionnaire from trial participants (patients, carers, relatives or healthcare professionals) it begins by introducing the regulatory guidelines for data collection in clinical trials. It does not address the parallel (and equally important) needs of data management, cleaning, validation or processing required in the creation of the final clinical database.

Regulatory guidelines

The International Conference on Harmonisation (ICH) of technical requirements for registration of pharmaceuticals for human use states:

'The collection of data and transfer of data from the investigator to the sponsor can take place through a variety of media, including paper case record forms, remote site monitoring systems, medical computer systems and electronic transfer. Whatever data capture instrument is used, the form and content of the information collected should be in full accordance with the protocol and should be established in advance of the conduct of the clinical trial. It should focus on the data necessary to implement the planned analysis, including the context information (such as timing assessments relative to dosing) necessary to confirm protocol compliance or identify important protocol deviations. 'Missing values' should be distinguishable from the 'value zero' or 'characteristic absent'...' [ 4 ].

This suggests that the choice of variables that are to be measured by the questionnaire (or case report form) is constrained by the trial protocol, but that the mode of data collection is not. The trial protocol is unlikely, however, to list all of the variables that may be required to evaluate the safety of the experimental treatment. The choice of variables to assess safety will depend on the possible consequences of treatment, on current knowledge of possible adverse effects of related treatments, and on the duration of the trial [ 5 ]. In drug trials there may be many possible reactions due to the pharmacodynamic properties of the drug. The Council for International Organisations of Medical Sciences (CIOMS) advises that:

'Safety data that cannot be categorized and succinctly collected in predefined data fields should be recorded in the comment section of the case report form when deemed important in the clinical judgement of the investigator' [ 5 ].

Safety data can therefore initially be captured on a questionnaire as text responses to open-ended questions that will subsequently be coded using a common adverse event dictionary, such as the Medical Dictionary for Regulatory Activities (MedDRA). The coding of text responses should be performed by personnel who are blinded to treatment allocation. Both ICH and CIOMS warn against investigators collecting too much data that will not be analysed, potentially wasting time and resources, reducing the rate of recruitment, and increasing losses to follow-up.
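The coding step described above, in which verbatim text is mapped to a preferred term from an adverse event dictionary, can be sketched as a toy lookup in Python. The mini-dictionary below is invented for illustration; real coding uses the licensed MedDRA terminology, is far larger, and is performed by trained staff blinded to treatment allocation:

```python
# Toy illustration of coding free-text adverse-event reports against a
# dictionary of preferred terms. This mini-dictionary is invented; real
# coding uses MedDRA, which is licensed and far more comprehensive.

MINI_DICTIONARY = {
    "headache": "Headache",
    "head ache": "Headache",
    "nausea": "Nausea",
    "felt sick": "Nausea",
    "rash": "Rash",
}

def code_event(free_text):
    """Return the preferred term for a verbatim report, or None."""
    text = free_text.strip().lower()
    for verbatim, preferred in MINI_DICTIONARY.items():
        if verbatim in text:
            return preferred
    return None  # uncoded reports go back to the (blinded) coder for review
```

Any report that fails to match would be returned for manual review rather than guessed at, keeping the coded dataset consistent across trials.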

Before questionnaire design begins, the trial protocol should be available at least in draft. This will state which outcomes are to be measured and which parameters are of interest (for example, percentage, mean, and so on). Preferably, a statistical analysis plan will also be available that makes explicit how each variable will be analysed, including how precisely each is to be measured and how each variable will be categorised in analysis. If these requirements are known in advance, the questionnaire can be designed in such a way that will reduce the need for data to be coded once questionnaires have been completed and returned.

Questionnaire development

If a questionnaire has previously been used in similar trials to the one planned, its use will bring the added advantage that the results will be comparable and may be combined in a meta-analysis. However, if the mode of administration of the questionnaire will change (for example, questions developed for administration by personal interview are to be included in a self-administered questionnaire), the questionnaire should be piloted before it is used (see section on piloting below). To encourage the consistent reporting of serious adverse events across trials, the CIOMS Working Group has prepared an example of the format and content of a possible questionnaire [ 5 ].

If a new questionnaire is to be developed, testing will establish that it measures what is intended to be measured, and that it does so reliably. The validity of a questionnaire may be assessed in a validation study that assesses the agreement (or correlation) between the outcome measured using the questionnaire and that measured using the 'gold standard'. However, this will not be possible if there is no recognised gold standard measurement for the outcome. The reliability of a questionnaire may be assessed by quantifying the strength of agreement between the outcomes measured using the questionnaire on the same patients at different times. The methods for conducting studies of validity and reliability are covered in depth elsewhere [ 6 ]. If new questions are to be developed, the reading ease of the questions can be assessed using the Flesch reading ease score. This score assesses the number of words in sentences, and the number of syllables in words. Higher Flesch reading ease scores indicate material that is easier to read [ 7 ].
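The Flesch reading ease score mentioned above has a standard formula: 206.835 minus 1.015 times the average sentence length (words per sentence), minus 84.6 times the average word length (syllables per word). A rough sketch in Python; note that the syllable counter here is a crude vowel-group heuristic, not a dictionary-based count:

```python
import re

def count_syllables(word):
    """Rough syllable count: runs of vowels, minus a trailing silent 'e'.
    A crude heuristic, not a pronunciation dictionary."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch reading ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Higher scores indicate easier reading."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))
```

Short sentences of short words score high; dense technical prose scores low or even negative, which is the signal to simplify a question's wording.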

Types of questions

Open-ended questions offer participants a space into which they can answer by writing text. These can be used when there are a large number of possible answers and it is important to capture all of the detail in the information provided. If answers are not factual, open-ended questions might increase the burden on participants. The text responses will subsequently need to be reviewed by the investigator, who will (whilst remaining blind to treatment allocation) assign one or more codes that categorise the response (for example, applying an adverse event dictionary) before analysis. Participants will need sufficient space so that full and accurate information can be provided.

Closed-ended questions contain either mutually exclusive response options only, or must include a clear instruction that participants may select more than one response option (for example, 'tick all that apply'). There is some evidence that answers to closed questions are influenced by the values chosen by investigators for each response category offered and that respondents may avoid extreme categories [ 8 ]. Closed-ended questions where participants are asked to 'tick all that apply' can alternatively be presented as separate questions, each with a 'yes' or 'no' response option (this design may be suitable if the analysis planned will treat each response category as a binary variable).
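The alternative design described above, treating each 'tick all that apply' category as a separate yes/no variable, amounts to one-hot style encoding at analysis time. A small sketch in Python (the pain-site categories and variable names are invented for illustration):

```python
# Expanding 'tick all that apply' responses into one binary (0/1)
# variable per category, so each category can be analysed as a binary
# variable. Categories and responses are invented for illustration.

PAIN_SITES = ["Heart", "Kidney", "Lung", "Stomach"]

def expand_multiselect(selected, categories=PAIN_SITES):
    """Turn a list of ticked options into one 0/1 column per category."""
    return {f"pain_{c.lower()}": int(c in selected) for c in categories}

responses = [["Heart", "Lung"], [], ["Stomach"]]
coded = [expand_multiselect(r) for r in responses]
```

An empty selection becomes all zeros rather than a missing record, which is exactly the distinction between 'characteristic absent' and a missing value that the ICH guideline quoted earlier requires.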

Asking participants subsidiary questions (that is, 'branching off') depending on their answers to core questions will provide further detail about outcomes, but will increase questionnaire length and could make a questionnaire harder to follow. Similarly 'matrix' style questions (that is, multiple questions with common response option categories) might seem complicated to some participants, adding to the data collection burden [ 9 ].

Style, appearance and layout

The way that a self-administered questionnaire looks is considered to be as important as the questions that are asked [ 9 , 10 ]. There is good evidence that in addition to the words that appear on the page (verbal language) the questionnaire communicates meaning and instructions to participants via symbols and graphical features (non-verbal language). The evidence from several RCTs of alternative question response styles and layouts suggests that participants view the middle (central) response option as the one that represents the midpoint of an outcome scale. Participants then expect response options to appear in an order of increasing or decreasing progression, beginning with the leftmost or uppermost category; and they expect response options that are closer to each other to also have values that are 'conceptually closer'. The order, spacing and grouping of response options are therefore important design features, as they will affect the quality of data provided on the questionnaire, and the time taken by participants to provide it [ 10 ].

Some attempts have been made to develop theory-based guidelines for self-administered questionnaire design [ 11 ]. Based on a review of psychological and sociological theories about graphic language, cognition, visual perception and motivation, five principles have been derived:

'Use the visual elements of brightness, colour, shape, and location in a consistent manner to define the desired navigational path for respondents to follow when answering the questionnaire;

When established format conventions are changed in the midst of a questionnaire use prominent visual guides to redirect respondents;

Place directions [instructions] where they are to be used and where they can be seen;

Present information in a manner that does not require respondents to connect information from separate locations in order to comprehend it;

Ask people to answer only one question at a time' [ 11 ].

Adherence to these principles may help to ensure that when participants complete a questionnaire they understand what is being asked, how to give their response, and which question to answer next. This will help participants to give all the information being sought and reduce the chances that they become confused or frustrated when completing the questionnaire. These principles require evaluation in RCTs.

Font size and colour may further affect the legibility of a questionnaire, which may also impact on data quality and completeness. Questionnaires for trials that enrol older participants may therefore require the use of a larger font (for example, 11 or 12 point minimum) than those for trials including younger participants. The legibility and comprehension of the questionnaire can be assessed during the pilot phase (see section on piloting below).

Perhaps most difficult to define are the factors that make a questionnaire more aesthetically pleasing to participants, and that may potentially increase compliance. The use of space, graphics, underlining, bold type, colour and shading, and other qualities of design may affect how participants react and engage with a questionnaire. Edward Tufte's advice for achieving graphical excellence [ 12 ] might be adapted to consider how to achieve excellence in questionnaire design, viz : ask the participant the simplest, clearest questions in the shortest time using the fewest words on the fewest pages; above all else ask only what you need to know.

Further research is therefore needed (as will be seen in the section on increasing data completeness) into the types of question and the aspects of style, appearance and layout of questionnaires that are effective in increasing data quality and completeness.

Mode of administration

Self-administered questionnaires are usually cheaper to use as they require no investigator input other than that for their distribution. Mailed questionnaires require correct addresses to be available for each participant, and resources to cover the costs of delivery. Electronically distributed questionnaires require correct email addresses as well as access to computers and the internet. Mailed and electronically distributed questionnaires have the advantage that they give participants time to think about their responses to questions, but they may require assistance to be available for participants (for example, a telephone helpline).

As self-administered questionnaires have the least investigator involvement, they are less susceptible to information bias (for example, social desirability bias) and interviewer effects, but are more susceptible to item non-response [ 8 ]. Evidence from a systematic review of 57 studies comparing self-reported versus clinically verified compliance with treatment suggests that questionnaires and diaries may be more reliable than interviews [ 13 ].

In-person administration allows a rapport with participants to be developed, for example through eye contact, active listening and body language. It also allows interviewers to clarify questions and to check answers. Telephone administration may still provide the aural dimension (active listening) of an in-person interview. A possible disadvantage of telephone interviews is that participants may become distracted by other things going on around them, or decide to end the call [ 9 ].

A mixture of modes of administration may also be considered: for example, participant follow-up might commence with postal or email administration of the questionnaire, with subsequent telephone calls to non-respondents. The offer of an in-person interview may also be necessary, particularly if translation to a second language is required, or if participants are not sufficiently literate. Such approaches may risk introducing selection bias if participants in one treatment group are more or less likely than the other group to respond to one mode of administration used (for example, telephone follow-up in patients randomised to a new type of hearing aid) [ 14 ].

An advantage of electronic and web-based questionnaires is that they can be designed automatically to screen and filter participant responses. Movement from one question to the next can then appear seamless, reducing the data collection burden on participants who are only asked questions relevant to previous answers. Embedded algorithms can also check the internal consistency of participant responses so that data are internally valid when submitted, reducing the need for data queries to be resolved later. However, collection of data from participants using electronic means may discriminate against participants without access to a computer or the internet. Choice of mode of administration must therefore take into account its acceptability to participants and any potential for exclusion of eligible participants that may result.
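The skip logic and internal consistency checks described above can be sketched in a few lines. The field names, questions and plausibility limits below are purely illustrative assumptions, not taken from any real trial instrument:

```python
def questions_to_ask(responses):
    """Return the questions to show given answers so far (skip logic)."""
    questions = ["smoker"]
    if responses.get("smoker") == "yes":
        # Only smokers are asked the follow-on question, so movement
        # between questions appears seamless for non-smokers.
        questions.append("cigarettes_per_day")
    return questions

def consistency_errors(responses):
    """Check internal consistency before the form can be submitted."""
    errors = []
    if responses.get("smoker") == "no" and responses.get("cigarettes_per_day"):
        errors.append("Non-smokers should not report cigarettes per day.")
    n = responses.get("cigarettes_per_day")
    if n is not None and not (0 < n <= 200):  # illustrative plausibility range
        errors.append("Cigarettes per day outside plausible range.")
    return errors
```

Checks of this kind resolve inconsistencies at the point of data entry, reducing the number of data queries raised later.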

Piloting

Piloting is a process whereby new questionnaires are tested, revised and tested further before they are used in the main trial. It is an iterative process that usually begins by asking other researchers who have some knowledge and experience in a similar field to comment on the first draft of the questionnaire. Once the questionnaire has been revised, it can then be piloted in a non-expert group, such as among colleagues. A further revision of the questionnaire can be piloted with individuals who are representative of the population who will complete it in the main trial. In-depth 'cognitive interviewing' might also provide insights into how participants comprehend questions, process and recall information, and decide what answers to give [ 15 ]. Here participants are read each question and are either asked to 'think aloud' as they consider what their answer will be, or are asked further 'probing' questions by the interviewer.

For international multicentre trials it will be necessary to translate a questionnaire. Although a simple translation to, and translation back from the second language might be sufficient, further piloting and cognitive interviews may be required to identify and correct for any cultural differences in interpretation of the translated questionnaire. Translation into other languages may alter the layout and formatting of words on the page from the original design and so further redesign of the questionnaire may be required. If a questionnaire is to be developed for a clinical trial, sufficient resources are therefore required for its design, piloting and revision.

Increasing data completeness

Loss to follow-up will reduce statistical power by reducing the effective sample size. Losses may also introduce bias if the trial treatment is an effect modifier for the association between outcome and participation at follow-up [ 16 ].
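The erosion of power by losses to follow-up can be made concrete with a normal-approximation sketch. The group size, effect size (0.3 standard deviations) and 20% loss below are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

def approx_power(n_per_group, delta, sd, alpha=0.05):
    """Approximate power of a two-sample comparison of means
    (normal approximation, two-sided test at level alpha)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = delta / (sd * sqrt(2 / n_per_group))  # non-centrality parameter
    return NormalDist().cdf(ncp - z_crit)

full = approx_power(200, delta=0.3, sd=1.0)     # all followed up: ~0.85
reduced = approx_power(160, delta=0.3, sd=1.0)  # 20% lost: ~0.77
```

Under these assumptions a 20% loss to follow-up cuts power from roughly 85% to roughly 77%, before any bias from differential loss is considered.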

There may be exceptional circumstances for allowing participants to skip certain questions (for example, sensitive questions on sexual lifestyle) to ensure that the remainder of the questionnaire is still collected; the data that are provided may then be used to impute the values of variables that were not provided. Although the impact of missing outcome data and missing covariates on study results can be reduced through the use of multiple imputation techniques, no method of analysis can be expected to overcome them completely [ 17 ].
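As a toy illustration of filling in skipped items from the data that were provided, here is single mean imputation. This is deliberately far cruder than the multiple imputation methods cited above, which draw several plausible values per missing item and combine the resulting analyses:

```python
# Single mean imputation: fill each skipped item with the mean of the
# observed values (invented data; None marks a skipped question).
scores = [4, 5, None, 3, None, 4]
observed = [s for s in scores if s is not None]
mean_score = sum(observed) / len(observed)  # 16 / 4 = 4.0
imputed = [mean_score if s is None else s for s in scores]
```

Single imputation like this understates uncertainty because every filled-in value is treated as if it had been observed; that is one motivation for multiple imputation.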

Longer and more demanding tasks might be expected to have fewer volunteers than shorter, easier tasks. The evidence from randomised trials of questionnaire length in a range of settings seems to support the notion that when it comes to questionnaire design 'shorter is better' [ 18 ]. Recent evidence that a longer questionnaire achieved the same high response proportion as that of a shorter alternative might cast doubt on the importance of the number of questions included in a questionnaire [ 19 ]. However, under closer scrutiny the results of this study (96.09% versus 96.74%) are compatible with an average 2% reduction in odds of response for each additional page added to the shorter version [ 18 ]. The main lesson seems to be that when the baseline response proportion is very high (for example, over 95%) then few interventions are likely to have effects large enough to increase it further.
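To see why two proportions that look almost identical can still be compatible with a per-page effect, convert them to odds. Taking 96.09% for the longer version and 96.74% for the shorter, per the order quoted above (an assumption; the number of extra pages is not restated here):

```python
# Odds corresponding to the response proportions quoted above.
def odds(p):
    return p / (1 - p)

shorter = odds(0.9674)         # ~29.7
longer = odds(0.9609)          # ~24.6
odds_ratio = longer / shorter  # ~0.83, i.e. lower odds for the longer form
```

Although the proportions differ by well under one percentage point, the odds of response for the longer questionnaire are about 17% lower, which is why a per-page reduction in odds remains plausible.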

There is a trade-off between increased measurement error from using a simplified outcome scale and increased power from achieving measurement on a larger sample of participants (from fewer losses to follow-up). If a shorter version of an outcome scale provides measures of an outcome that are highly correlated with the longer version, then it will be more efficient for the trial to use the shorter version [ 1 ]. A moderate reduction to the length of a shorter questionnaire will be more effective in reducing losses to follow-up than a moderate change to the length of a longer questionnaire [ 18 ].

In studies that seek to collect information on many outcomes, questionnaire length will necessarily be determined by the number of items required from each participant. In very compliant populations there may be little lost by using a longer questionnaire. However, using a longer questionnaire to measure more outcomes may also increase the risk of false positive findings that result from multiple testing (for example, measuring 100 outcomes may produce 5 that are significantly associated with treatment by chance alone) [ 4 , 20 ].

Other strategies to increase completeness

A recently updated Cochrane systematic review presents evidence from RCTs of methods to increase response to postal and electronic questionnaires in a range of health and non-health settings [ 3 ]. The review includes 481 trials that evaluated 110 different methods for increasing response to postal questionnaires and 32 trials that evaluated 27 methods for increasing response to electronic questionnaires. The trials evaluate aspects of questionnaire design, the introductory letter, packaging and methods of delivery that might influence the tendency for participants to open the envelope (or email) and to engage with its contents. A summary of the results follows.

What participants are offered

Postal questionnaires

The evidence favours offering monetary incentives and suggests that money is more effective than other types of incentive (for example, tokens, lottery tickets, pens, and so on). The relationship between the amount of monetary incentive offered and questionnaire response is non-linear with diminishing marginal returns for each additional amount offered [ 21 ]. Unconditional incentives appear to be more effective, as are incentives offered with the first rather than a subsequent mailing. There is less evidence for the effects of offering the results of the study (when complete) or offering larger non-monetary incentives.

Electronic questionnaires

The evidence favours non-monetary incentives (for example, Amazon.com gift cards), immediate notification of lottery results, and offering study results. Less evidence exists for the effect of offering monetary rather than non-monetary incentives.

How questionnaires look

For postal questionnaires, the evidence favours using personalised materials, a handwritten address, and printing single sided rather than double sided. There is also evidence that inclusion of a participant's name in the salutation at the start of the cover letter increases response and that the addition of a handwritten signature on letters will further increase response [ 22 ]. There is less evidence for positive effects of using coloured or higher quality paper, identifying features (for example, identity number), study logos, brown envelopes, coloured ink, coloured letterhead, booklets, larger paper, larger fonts, pictures in the questionnaire, matrix style questions, or questions that require recall in order of time period.

For electronic questionnaires, the evidence favours using a personalised approach, a picture in emails, a white background for emails, a simple header, and a textual rather than a visual presentation of response categories. Response may be reduced when 'survey' is mentioned in the subject line. Less evidence exists for sending emails in text format or HTML, including a topic in email subject lines, or including a header in emails.

How questionnaires are received or returned

The evidence favours sending questionnaires by first class or recorded delivery, using stamped return envelopes, and using several stamps. There is less evidence for effects of mailing soon after discharge from hospital, mailing or delivering on a Monday, sending to work addresses, using stamped outgoing envelopes (rather than franked), using commemorative or first class stamps on return envelopes, including a prepaid return envelope, using window or larger envelopes, or offering the option of response by internet.

Methods and number of requests for participation

The evidence favours contacting participants before sending questionnaires, follow-up contact with non-responders, providing another copy of the questionnaire at follow-up and sending text message reminders rather than postcards. There is less evidence for effects of precontact by telephone rather than by mail, telephone follow-up rather than by mail, and follow-up within a month rather than later.

Nature and style of questions included

For postal questionnaires, the evidence favours placing more relevant questions and easier questions first, user friendly and more interesting or salient questionnaires, horizontal orientation of response options rather than vertical, factual questions only, and including a 'teaser'. Response may be reduced when sensitive questions are included or when a questionnaire for carers or relatives is included. There is less evidence for asking general questions or asking for demographic information first, using open-ended rather than closed questions, using open-ended questions first, including 'don't know' boxes, asking participants to 'circle answer' rather than 'tick box', presenting response options in increasing order, using a response scale with 5 levels rather than 10 levels, or including a supplemental questionnaire or a consent form.

For electronic questionnaires, the evidence favours using a more interesting or salient e-questionnaire.

Who sent the questionnaire

For postal questionnaires, the evidence favours those that originate from a university rather than a government department or commercial organisation. Less evidence exists for the effects of precontact by a medical researcher (compared to non-medical), letters signed by more senior or well known people, sending questionnaires in university-printed envelopes, questionnaires that originate from a doctor rather than a research group, names that are ethnically identifiable, or questionnaires that originate from male rather than female investigators.

For electronic questionnaires, the evidence suggests that response is reduced when they are signed by male rather than female investigators. There is less evidence for the effectiveness of e-questionnaires originating from a university or when sent by more senior or well known people.

What participants are told

For postal questionnaires, the evidence favours assuring confidentiality and mentioning an obligation to respond in follow-up letters. Response may be reduced when the questionnaire is endorsed by an 'eminent professional' or when participants are requested not to remove ID codes. Less evidence exists for the effects of stating that others have responded, a choice to opt out of the study, providing instructions, giving a deadline, providing an estimate of completion time, requesting a telephone number, stating that participants will be contacted if they do not respond, requesting an explanation for non-participation, an appeal or plea, requesting a signature, stressing benefits to sponsor, participants or society, or assuring anonymity rather than participants being identifiable.

For electronic questionnaires, the evidence favours stating that others have responded and giving a deadline. There is less evidence for the effect of an appeal (for example, 'request for help') in the subject line of an email.

Although uncertainty remains about whether some strategies increase data completeness, there is sufficient evidence to produce some guidelines. Where there is a choice, a shorter questionnaire will reduce the size of the task and the burden on respondents. Begin a questionnaire with the easier and most relevant questions, and make it user friendly and interesting for participants. A monetary incentive can be included as a small, unexpected 'thank you for your time'. Participants are more likely to respond if given advance warning (a letter, email or phone call before the questionnaire is sent); this simple courtesy tells participants that they will soon have a task to do and may need to set aside time to complete it. The relevance and importance of participation in the trial can be emphasised by addressing participants by name, signing letters by hand, and using first class postage or recorded delivery. University sponsorship may add credibility, as may the assurance of confidentiality. Follow-up contact and reminders to non-responders are likely to be beneficial, but include another copy of the questionnaire to save participants having to remember where they put the original, or in case they have thrown it away.

The effects of some strategies to increase questionnaire response may differ when used in a clinical trial compared with a non-health setting. Around half of trials included in the Cochrane review were health related (patient groups, population health surveys and surveys of healthcare professionals). The other included trials were conducted among business professionals, consumers, and the general population. To assess whether the size of the effects of each strategy on questionnaire response differ in health settings will require a sufficiently sophisticated analysis that controls for covariates (for example, number of pages in the questionnaire, use of incentives, and so on). Unfortunately, these details are seldom included by investigators in the published reports [ 3 ].

However, a review of 15 RCTs of methods to increase response in healthcare professionals and patients found evidence for using some strategies (for example, shorter questionnaires and sending reminders) in the health-related setting [ 23 ]. There is also evidence that incentives do improve questionnaire response in clinical trials [ 24 , 25 ]. The offer of monetary incentives to participants for completion of a questionnaire may, however, be unacceptable to some ethics committees if they are deemed likely to exert pressure on individuals to participate [ 26 ]. Until further studies establish whether other strategies are also effective in the clinical trial setting, the results of the Cochrane review may be used as guidelines for improving data completeness. More discussion on the design and administration of questionnaires is available elsewhere [ 27 ].

Risk factors for loss to follow-up

Irrespective of questionnaire design it is possible that some participants will not respond because: (a) they have never received the questionnaire or (b) they no longer wish to participate in the study. An analysis of the information collected at randomisation can be used to identify any factors (for example, gender, severity of condition) that are predictive of loss to follow-up [ 28 ]. Follow-up strategies can then be tailored for those participants most at risk of becoming lost (for example, additional incentives for 'at risk' participants). Interviews with a sample of responders and non-responders may also identify potential improvements to the questionnaire design, or to participant information. The need for improved questionnaire saliency, explanations of trial procedures, and stressing the importance of responding have all been identified using this method [ 29 ].
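A minimal sketch of the analysis described above compares follow-up response proportions across levels of a baseline factor to flag subgroups at higher risk of loss. The records and the factor (gender) below are invented for illustration:

```python
# Baseline records with follow-up outcome (invented data).
participants = [
    {"gender": "male", "responded": False},
    {"gender": "male", "responded": True},
    {"gender": "male", "responded": False},
    {"gender": "female", "responded": True},
    {"gender": "female", "responded": True},
    {"gender": "female", "responded": False},
]

def response_proportion(records, factor, level):
    """Proportion responding at follow-up within one level of a factor."""
    group = [r for r in records if r[factor] == level]
    return sum(r["responded"] for r in group) / len(group)

# Levels with low response could be targeted with tailored follow-up
# (the 0.5 threshold here is arbitrary).
at_risk = [
    level for level in {r["gender"] for r in participants}
    if response_proportion(participants, "gender", level) < 0.5
]
```

In practice a multivariable model (for example, logistic regression on several baseline factors) would be used rather than one-factor-at-a-time comparisons, but the principle of tailoring follow-up to predicted risk is the same.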

Further research

Few clinical trials appear to have nested trials of methods that might increase the quality and quantity of the data collected by questionnaire, and of participation in trials more generally. Trials of alternative strategies that may increase the quality and quantity of data collected by questionnaire in clinical trials are needed. Reports of these trials must include details of the alternative instruments used (for example, number of items, number of pages, opportunity to save data electronically and resume completion at another time), mean or median time to completion of electronic questionnaires, material costs and the amount of staff time required. Data collection in clinical trials is costly, and so care is needed to design data collection instruments that will provide sufficiently reliable measures of outcomes whilst ensuring high levels of follow-up. Whether shorter 'quick and dirty' outcome measures (for example, a few simple questions) are better than more sophisticated questionnaires will require assessment of the costs in terms of their impact on bias, precision, trial completion time, and overall costs.

A good questionnaire design for a clinical trial will minimise bias and maximise precision in the estimates of treatment effect within budget. Attempts to collect more data than will be analysed may risk reducing recruitment (reducing power) and increasing losses to follow-up (possibly introducing bias). Questionnaire design remains as much an art as a science, but the evidence base for improving the quality and completeness of data collection in clinical trials is growing.

Armstrong BG: Optimizing power in allocating resources to exposure assessment in an epidemiologic study. Am J Epidemiol. 1996, 144: 192-197.


Hill AB: Observation and experiment. N Engl J Med. 1953, 248: 995-1001. 10.1056/NEJM195306112482401.

Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, Pratap S: Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009, 3: MR000008.


International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use: ICH harmonised tripartite guideline, statistical principles for clinical trials E9. http://www.ich.org/LOB/media/MEDIA485.pdf

CIOMS: Management of safety information from clinical trials: report of CIOMS working group VI. 2005, Geneva, Switzerland: Council for International Organisations of Medical Sciences (CIOMS)


Streiner DL, Norman GR: Health measurement scales: a practical guide to their development and use. 2004, Oxford: Oxford University Press, 3rd edition.

Farr JN, Jenkins JJ, Paterson DG: Simplification of Flesch reading ease formula. J Appl Psychol. 1951, 35: 333-337. 10.1037/h0062427.


Armstrong BK, White E, Saracci R: Principles of exposure measurement in epidemiology. Monographs in Epidemiology and Biostatistics, vol. 21. 1995, New York, NY: Oxford University Press.

Nieuwenhuijsen M: Design of exposure questionnaires for epidemiological studies. Occup Environ Med. 2005, 62: 272-280. 10.1136/oem.2004.015206.


Tourangeau R, Couper MP, Conrad F: Spacing, position, and order: interpretive heuristics for visual features of survey questions. Pub Opin Quart. 2004, 68: 368-393. 10.1093/poq/nfh035.

Jenkins CR, Dillman DA: Towards a theory of self-administered questionnaire design. http://www.census.gov/srd/papers/pdf/sm95-06.pdf

Tufte E: The visual display of quantitative information. 1999, Cheshire, CT: Graphics Press

Garber MC, Nau DP, Erickson SR, Aikens JE, Lawrence JB: The concordance of self-report with other measures of medication adherence: a summary of the literature. Med Care. 2004, 42: 649-652. 10.1097/01.mlr.0000129496.05898.02.


Heerwegh D: Mode differences between face-to-face and web surveys: an experimental investigation of data quality and social desirability effects. Int J Pub Opin Res. 2009, 21: 111-121. 10.1093/ijpor/edn054.

Willis GB: Cognitive interviewing: a how-to guide. http://www.appliedresearch.cancer.gov/areas/cognitive/interview.pdf

Greenland S: Response and follow-up bias in cohort studies. Am J Epidemiol. 1977, 106: 184-187.


Kenward MG, Carpenter J: Multiple imputation: current perspectives. Stat Methods Med Res. 2007, 16: 199-218. 10.1177/0962280206075304.

Edwards P, Roberts I, Sandercock P, Frost C: Follow-up by mail in clinical trials: does questionnaire length matter?. Contr Clin Trials. 2004, 25: 31-52. 10.1016/j.cct.2003.08.013.

Rothman K, Mikkelsen EM, Riis A, Sørensen HT, Wise LA, Hatch EE: Randomized trial of questionnaire length. Epidemiology. 2009, 20: 154. 10.1097/EDE.0b013e31818f2e96.

Sterne JAC, Davey Smith G: Sifting the evidence - what's wrong with significance tests?. BMJ. 2001, 322: 226-231. 10.1136/bmj.322.7280.226.

Edwards P, Cooper R, Roberts I, Frost C: Meta-analysis of randomised trials of monetary incentives and response to mailed questionnaires. J Epidemiol Comm Health. 2005, 59: 987-999. 10.1136/jech.2005.034397.

Scott P, Edwards P: Personally addressed hand-signed letters increase questionnaire response: a meta-analysis of randomised controlled trials. BMC Health Serv Res. 2006, 6: 111. 10.1186/1472-6963-6-111.


Nakash RA, Hutton JL, Jørstad-Stein EC, Gates S, Lamb SE: Maximising response to postal questionnaires - a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006, 6: 5. 10.1186/1471-2288-6-5.

Kenyon S, Pike K, Jones D, Taylor D, Salt A, Marlow N, Brocklehurst P: The effect of a monetary incentive on return of a postal health and development questionnaire: a randomised trial. BMC Health Serv Res. 2005, 5: 55. 10.1186/1472-6963-5-55.

Gates S, Williams MA, Withers E, Williamson E, Mt-Isa S, Lamb SE: Does a monetary incentive improve the response to a postal questionnaire in a randomised controlled trial? The MINT incentive study. Trials. 2009, 10: 44. 10.1186/1745-6215-10-44.

McColl E: Commentary: methods to increase response rates to postal questionnaires. Int J Epidemiol. 2007, 36: 968.

McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, Thomas R, Harvey E, Garratt A, Bond J: Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technol Assess. 2001, 5: 1-256.

Edwards P, Fernandes J, Roberts I, Kuppermann N: Young men were at risk of becoming lost to follow-up in a cohort of head-injured adults. J Clin Epidemiol. 2007, 60: 417-424. 10.1016/j.jclinepi.2006.06.021.

Nakash R, Hutton JL, Lamb SE, Gates S, Fisher J: Response and non-response to postal questionnaire follow-up in a clinical trial - a qualitative study of the patient's perspective. J Eval Clin Prac. 2008, 14: 226-235. 10.1111/j.1365-2753.2007.00838.x.


Acknowledgements

I would like to thank Lambert Felix for his help with updating the Cochrane review summarised in this article, and Graham Try for his comments on earlier drafts of the manuscript.

Author information

Authors and affiliations

Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK

Phil Edwards


Corresponding author

Correspondence to Phil Edwards .

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Edwards, P. Questionnaires in clinical trials: guidelines for optimal design and administration. Trials 11 , 2 (2010). https://doi.org/10.1186/1745-6215-11-2


Received : 29 July 2009

Accepted : 11 January 2010

Published : 11 January 2010

DOI : https://doi.org/10.1186/1745-6215-11-2


  • Monetary Incentive
  • Questionnaire Design
  • Electronic Questionnaire
  • Text Response
  • Flesch Reading Ease

ISSN: 1745-6215



Medical Health Questionnaire

Streamline health assessments with our Medical Health Questionnaire to ensure accurate and efficient patient information gathering.


By Joshua Napilay on Aug 08, 2024.

Fact Checked by Nate Lacson.

Medical Health Questionnaire PDF Example

Why is the Medical Health Questionnaire essential for accurate patient assessment?

The Medical Health Questionnaire is indispensable for precise patient evaluation, particularly with regard to high blood pressure, general health, and diabetes. This form helps hospitals assess the overall health of clients, capturing accurate data on blood pressure, diabetes status, and the presence of conditions such as depression. Respondents, including employees and regular clients, are prompted to complete the form regularly, providing a comprehensive snapshot of their health.

By incorporating fields such as age, birth date, tobacco use, and job-related details, the form helps gauge users' average health levels. The data collected aids in the early detection of conditions like high blood pressure and diabetes, enabling hospitals to offer timely interventions. Moreover, assessing the number of days people experience conditions like depression or heart-related issues allows medical professionals to tailor care plans accordingly.

Efficiently organized sections save time for respondents and medical staff, promoting ease of use. Users can easily share relevant information about their health status, ensuring that hospitals can read and interpret the data promptly. This form is an essential tool in the healthcare system, offering a systematic approach to gathering crucial health data, ultimately contributing to better patient outcomes.


What key details should be gathered to establish a comprehensive medical profile?

To establish a comprehensive medical profile, it is crucial to gather critical details systematically. The process begins by recording the person's basic information, such as name, age, and address. The next step involves eliciting their medical history, starting with the year they began seeking medical attention regularly. This information is essential for assessing the progression of their health over time.

The person's present condition is a focal point, requiring a detailed exploration of ongoing health issues or concerns. Understanding the day-to-day impact of these conditions is essential, as it provides insights into their daily life and activities. Additionally, inquiring about the week-to-week variations in their health allows for a more nuanced understanding of their overall well-being.

In the section dedicated to work, it's essential to determine the level of physical activity and any occupational hazards that may contribute to their medical profile. Adding details about their home environment further enriches the understanding of factors influencing their health.

As you gather information, consider incorporating a section on lifestyle factors, including habits, diet, and exercise routines. This holistic approach ensures that the medical profile is comprehensive and offers a more nuanced understanding of the person's overall health. Starting with basic information and systematically adding pertinent details, this approach allows healthcare professionals to create a thorough and accurate medical profile.

Have there been any significant medical events or conditions in the patient's history?

When delving into a patient's medical history, the quest for significant events or conditions is paramount for a comprehensive understanding of their health trajectory. The inquiry begins with meticulously examining the patient's past, aiming to identify any noteworthy occurrences that may have left a lasting impact. This involves exploring major medical events, such as surgeries, hospitalizations, or significant illnesses that have shaped the patient's health narrative.

Chronic conditions are central to this investigation, as they often play a defining role in a patient's overall well-being. Uncovering conditions like diabetes, hypertension, or cardiovascular issues provides essential context for current health concerns. A detailed exploration of significant injuries, accidents, or allergic reactions also contributes valuable insights into the patient's medical history.

It is crucial to inquire about hereditary factors that might influence the patient's health, as a family history of certain conditions can significantly contribute to the overall risk assessment. Moreover, lifestyle-related events, such as changes in habits, diet, or exercise routines, are vital to understanding the patient's holistic health approach.

Pursuing significant medical events or conditions in a patient's history is integral to crafting a thorough medical profile. This comprehensive exploration enables healthcare professionals to tailor their approach, offering personalized care and interventions based on the patient's unique health journey.

How do the patient's daily habits, such as diet and exercise, impact their health?

The patient's daily habits, including diet and exercise, profoundly influence their health and well-being. Diet, as a cornerstone of health, significantly shapes the body's nutritional intake, playing a pivotal role in various physiological functions.

A balanced and nutritious diet fosters optimal organ function, immune system strength, and energy levels. Conversely, poor dietary choices may contribute to nutritional deficiencies, obesity, or the development of chronic conditions such as diabetes and cardiovascular diseases.

Exercise, another crucial component, contributes not only to physical fitness but also to mental health. Regular physical activity promotes cardiovascular health, muscular strength, and flexibility. It aids in weight management, reduces the risk of chronic diseases, and enhances overall mood by releasing endorphins. Conversely, a sedentary lifestyle may lead to weight gain, muscle atrophy, and an increased susceptibility to health issues.

The symbiotic relationship between diet and exercise further underscores the importance of a holistic approach to health. Healthy dietary choices synergize with regular exercise to create a robust foundation for overall wellness.

Healthcare professionals consider these daily habits when crafting personalized care plans, emphasizing the significance of lifestyle modifications in preventive healthcare. Recognizing the impact of diet and exercise empowers individuals to make informed choices, fostering a proactive approach to maintaining and enhancing their health.

What medications is the patient currently taking, and are there any allergies?

Gaining insight into a patient's current medication regimen and any existing allergies is pivotal for a comprehensive understanding of their health profile. The medications a patient takes offer a snapshot of their ongoing medical management.

This includes prescription medications, over-the-counter drugs, and any supplements. Understanding these medications' names, dosages, and frequency allows healthcare professionals to assess their efficacy, potential interactions, and impact on the patient's overall health.

Equally vital is the exploration of any allergies the patient may have. Allergic reactions to medications can range from mild to severe and may manifest as skin rashes, respiratory distress, or more serious systemic responses. Inquiring about allergies extends beyond medications to include allergies to substances such as latex or specific foods, ensuring a comprehensive understanding of potential risks.

This information is crucial for preventing adverse reactions, prescribing new medications, or recommending treatments. It forms the basis for creating a safe, tailored healthcare plan that aligns with the patient's unique medical circumstances.

Timely and accurate knowledge of a patient's medication and allergy history is fundamental for providing effective and safe healthcare interventions, emphasizing the importance of thorough communication between healthcare providers and patients.

How does the family's medical history contribute to understanding the patient's health?

The family's medical history is a valuable lens through which healthcare professionals gain insights into the patient's health predispositions, potential risks, and genetic susceptibilities. Examining the health trajectory of close relatives aids in understanding familial patterns of certain conditions, offering crucial information for risk assessment and preventive care.

Genetic factors play a significant role in determining an individual's susceptibility to various illnesses. A family history of diabetes, cardiovascular diseases, or certain cancers can highlight potential genetic links, informing healthcare providers about the patient's inherent risk factors. This knowledge enables a proactive approach to preventive measures, early screenings, and targeted interventions.

Furthermore, understanding the family's medical history aids in identifying hereditary conditions or genetic disorders that may affect the patient's health. This information allows healthcare professionals to tailor their care plans, screenings, and diagnostic approaches to account for the familial context.

A comprehensive understanding of the family's medical history contributes to a more holistic and personalized approach to healthcare. It empowers healthcare providers to proactively anticipate and address potential health issues, emphasizing the importance of integrating genetic and familial factors into assessing a patient's health and well-being.

What role do habits like smoking and alcohol consumption play in the patient's health?

Habits such as smoking and alcohol consumption play a pivotal role in shaping the patient's overall health, exerting both immediate and long-term impacts. Smoking, a well-established health risk, is linked to a myriad of detrimental health outcomes.

It significantly increases the risk of respiratory conditions such as chronic obstructive pulmonary disease (COPD) and lung cancer, raises the likelihood of cardiovascular disease, and compromises overall lung function. The harmful effects extend beyond the respiratory system, affecting nearly every organ in the body.

Alcohol consumption similarly influences health outcomes. While moderate alcohol intake may have certain cardiovascular benefits, excessive or chronic consumption poses serious health risks. Long-term alcohol abuse is associated with liver diseases, cardiovascular issues, increased susceptibility to infections, and mental health disorders.

Both smoking and excessive alcohol consumption contribute to the development of chronic conditions, compromising the immune system and overall well-being. These habits are often intertwined, amplifying their collective impact on health. Moreover, they can exacerbate existing health conditions and hinder the efficacy of medical treatments.

Understanding the role of these habits is crucial for healthcare providers to develop targeted interventions and counseling strategies. Addressing smoking and alcohol consumption within the context of a patient's health allows for a more comprehensive and tailored approach to preventive care and health management. Encouraging lifestyle modifications forms an integral part of promoting overall well-being and preventing the onset of severe health conditions.

Who should be contacted in case of a medical emergency, and what are their details?

In the event of a medical emergency, it is imperative to have immediate access to individuals who can provide crucial information and make decisions on behalf of the patient. The primary contact is typically the patient's designated emergency contact person. This individual should be someone close to the patient, aware of their medical history, and capable of making prompt decisions in critical situations. It is vital to provide the emergency contact person's name, relationship to the patient, and a reachable phone number.

Additionally, if the patient has a designated healthcare proxy or power of attorney, their details should be included. These individuals can make healthcare decisions for the patient if they cannot do so themselves. Providing the healthcare proxy's name, relationship, and contact information ensures a seamless emergency communication channel.

For minors, it is essential to list the contact details of parents or legal guardians. Schools, childcare providers, or relevant institutions should have this information readily available.

Ensuring that emergency contacts are well-informed and easily reachable is paramount for swift and effective medical intervention. These details are crucial in facilitating communication between healthcare providers and the patient's support network during critical moments.

Research and evidence

The Medical Health Questionnaire has a rich history rooted in the evolution of healthcare practices and the growing recognition of the importance of comprehensive patient assessments (Bhat, 2023). Over the years, medical professionals and researchers have continually refined and expanded the scope of health questionnaires to enhance diagnostic accuracy, treatment planning, and overall patient care (Akman, 2023).

The development of medical questionnaires can be traced back to early efforts to systematize patient information. As medical science progressed, clinicians recognized the need for standardized tools to collect relevant data efficiently. This led to the creation of the first health questionnaires, designed to capture a holistic view of an individual's health status, medical history, and lifestyle factors.

The evolution of these questionnaires has been heavily influenced by ongoing research in various medical disciplines (Cowley et al., 2022). Evidence-based practices and clinical studies have played a crucial role in shaping the questions included in health assessments, ensuring that they align with the latest medical knowledge and diagnostic criteria. The constant feedback loop between research findings and questionnaire refinement has resulted in more accurate and insightful tools for healthcare practitioners.

Today, the Medical Health Questionnaire is a testament to the collaboration between medical professionals, researchers, and technological advancements. Incorporating evidence-based elements ensures that the questionnaire remains a dynamic and reliable resource, adapting to the ever-changing healthcare landscape. As a result, healthcare providers can trust the historical foundation and ongoing research supporting the Medical Health Questionnaire as an invaluable instrument in promoting proactive and personalized patient care.

Why use Carepatron as your Medical Health Questionnaire software?

Elevate your healthcare practice with Carepatron, the ultimate solution for streamlined practice management. Our Medical Health Questionnaire software boasts a user-friendly interface, ensuring accessibility for all, regardless of technical expertise.

Benefit from customizable templates catering to various health questionnaires, including daily symptom surveys, medical assessments, and health risk evaluations. The platform's robust features, such as Electronic Health Record (EHR) integration, secure messaging, and automated appointment reminders, enhance practice efficiency and compliance with HIPAA regulations.

Experience unparalleled support and guidance from the team, assisting healthcare providers in interpreting results and developing personalized care plans. Revolutionize your clinical workflows, improve care delivery efficiency, and boost patient engagement.

Choose Carepatron as your go-to Medical Health Questionnaire software and embark on a journey towards optimized care and proactive patient well-being.

References

Akman, S. (2023, May 25). 35+ essential questions to ask in a health history questionnaire. forms.app. https://forms.app/en/blog/health-history-questionnaire-questions

Bhat, A. (2023, June 30). Health history questionnaire: 15 must-have questions. QuestionPro. https://www.questionpro.com/blog/health-history-questionnaire/

Cowley, D. S., Burke, A., & Lentz, G. M. (2022). Additional considerations in gynecologic care. In Elsevier eBooks (pp. 148-187.e6). https://doi.org/10.1016/b978-0-323-65399-2.00018-8

Commonly asked questions

The specific questions on a health questionnaire vary but generally cover medical history, lifestyle, and current health status.

A medical questionnaire is a document that gathers information about an individual's medical history, conditions, and lifestyle for healthcare assessment.

Health assessment questions typically inquire about an individual's overall health, symptoms, lifestyle choices, and any relevant medical history.



Selecting, designing, and developing your questionnaire


Data supplement

Posted as supplied by author

Further illustrative examples

Table A Examples of research questions for which a questionnaire may not be the most appropriate design

Table B Pros and cons of open and closed-ended questions

Table C Checklist for developing a questionnaire

Table D Types of sampling techniques for questionnaire research

Table E Critical appraisal checklist for a questionnaire study

  • Bowling A. Constructing and evaluating questionnaires for health services research. In: Research methods in health: investigating health and health services . Buckingham: Open University Press, 1997.
  • Fox C. Questionnaire development. J Health Soc Policy 1996;8:39-48.
  • Joyce CR. Use, misuse and abuse of questionnaires on quality of life. Patient Educ Counsel 1995;26:319-23.
  • Murray P. Fundamental issues in questionnaire design. Accid Emerg Nurs 1999;7:148-53.
  • Robson C. Real world research: a resource for social science and practitioner-researchers . Oxford: Blackwell Press, 1993.
  • Sudman S, Bradburn N. Asking questions: a practical guide to questionnaire design . San Francisco: Jossey Bass, 1983.
  • Wolfe F. Practical issues in psychosocial measures. J Rheumatol 1997;24:990-3.
  • Labaw PJ. Advanced questionnaire design . Cambridge, MA: Art Books, 1980.
  • Brooks R. EuroQol: the current state of play. Health Policy 1996;37:53-72.
  • Anderson RT, Aaronson NK, Bullinger M, McBee WL. A review of the progress towards developing health-related quality-of-life instruments for international clinical studies and outcomes research. Pharmacoeconomics 1996;10:336-55.
  • Beurskens AJ, de Vet HC, Koke AJ, van der Heijden GJ, Knipschild PG. Measuring the functional status of patients with low back pain. Assessment of the quality of four disease-specific questionnaires. Spine 1995;20:1017-28.
  • Bouchard S, Pelletier MH, Gauthier JG, Cote G, Laberge B. The assessment of panic using self-report: a comprehensive survey of validated instruments. J Anxiety Disord 1997;11:89-111.
  • Adams AS, Soumerai SB, Lomas J, Ross-Degnan D. Evidence of self-report bias in assessing adherence to guidelines. Int J Qual Health Care 1999;11:187-92.
  • Bradburn NM, Miles C. Vague quantifiers. Public Opin Q 1979;43:92-101.
  • Gariti P, Alterman AI, Ehrman R, Mulvaney FD, O’Brien CP. Detecting smoking following smoking cessation treatment. Drug Alcohol Depend 2002;65:191-6.
  • Little P, Margetts B. Dietary and exercise assessment in general practice. Fam Pract 1996;13:477-82.
  • Ware JE, Kosinski M, Keller SD. A 12-Item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity. Medical Care 1996;34:220-33.

Cleo wants to measure nurses’ awareness of the risk factors for falls in older people. She completes a thorough review of existing measures, but cannot find one to suit her needs. After running two focus groups with nurses of varying levels of awareness, Cleo creates a list of ten key questions, and scores the instrument so that those with good falls-awareness should know all the answers while those with (say) good awareness of health issues in general will not. She then pilots the questionnaire on a sample of nurses to assess its legibility and comprehensibility. Two questions are consistently misunderstood so she alters their wording. Cleo then asks a second sample of 200 nurses to complete the questionnaire on two occasions a week apart, and compares their answers. This exercise in test-retest reliability shows that participants respond to the questionnaire in a consistent manner. She standardises her instrument by paying meticulous attention to layout and inserting clear instructions for participants.

Ade is undertaking a survey of uptake in cervical screening. He identifies a measure for use in this area, as well as a validated measure of general health status (the SF12). Ade feels that the SF12 doesn’t quite ask the questions the way he would have phrased them himself, so he alters some of the questions but keeps the scoring the same. He also finds that the instrument is a bit too long to fit on the paper, so he crosses off the last two questions. In doing this, Ade has inadvertently invalidated the measure, since validity depends on full and faithful use of the original format.

Burden of disease

Research question: What is the prevalence of asthma in schoolchildren?

Why a questionnaire may mislead: A child may have asthma but the parent does not know it; a parent may think incorrectly that their child has asthma; or they may withhold information that is perceived as stigmatizing.

More appropriate design: Cross-sectional survey using standardised diagnostic criteria and/or systematic analysis of medical records.

Professional behaviour

Research question: How do general practitioners manage low back pain?

Why a questionnaire may mislead: What doctors say they do is not the same as what they actually do, especially when they think their practice is being judged by others.

More appropriate design: Direct observation or video recording of consultations; use of simulated patients; systematic analysis of medical records.

Health-related lifestyle

Research question: What proportion of people in smoking cessation studies quit successfully?

Why a questionnaire may mislead: The proportion of true quitters is less than the proportion who say they have quit. A similar pattern is seen in studies of dietary choices, exercise, and other lifestyle factors.

More appropriate design: ‘Gold standard’ diagnostic test (in this example, urinary cotinine).

Needs assessment in ‘special needs’ groups

Research question: What are the unmet needs of refugees and asylum seekers for health and social care services?

Why a questionnaire may mislead: A questionnaire is likely to reflect the preconceptions of researchers (e.g. it may take existing services and/or the needs of more ‘visible’ groups as its starting point), and fail to tap into important areas of need.

More appropriate design: Range of exploratory qualitative methods designed to build up a ‘rich picture’ of the problem, e.g. semi-structured interviews of users, health professionals and the voluntary sector; focus groups; and in-depth studies of critical events.

 

Closed-ended questions: advantages

  • Appear easy and quick to complete (which may encourage participants to fill them in).
  • Participants don't have to think up an answer.
  • Socially less desirable responses can be included as an option.
  • Responses are usually clear and complete.
  • Easy to standardise, code and analyse.
  • Suitable for either self-completion or completion with researcher help.

Closed-ended questions: disadvantages

  • Depend on participants understanding what is required of them and the concept of a preference or rating scale.
  • Participants may just guess, or tick any response at random.
  • Participants or researchers may make errors (e.g. tick the wrong box by mistake).
  • Don't allow participants to expand on their responses or offer alternative views.

Open-ended questions: advantages

  • Allow for participant creativity and free expression.
  • Capture responses, feelings and ideas that researchers may not have thought of.
  • Participants may write as much or as little as they wish.

Open-ended questions: disadvantages

  • Take longer to complete (which can dissuade people from responding).
  • Responses can be extremely laborious (and expensive) to analyse; coding and interpretation are needed.
  • If handwriting is not clear, data are lost.
  • Rely on participants wanting to be expressive and having writing skills.

Title

Is it clear and unambiguous?

Does it indicate accurately what the study is about?

Is it likely to mislead or distress participants?

Introductory letter or information sheet

Does it provide an outline of what the study is about and what the overall purpose of the research is?

Does it say how long the questionnaire should take to complete?

Does it adequately address issues of anonymity and confidentiality?

Does it inform participants that they can ask for help or stop completing the questionnaire at any time without having to give a reason?

Does it give clear and accurate contact details of whom to approach for further information?

If a postal questionnaire, do participants know what they need to send back?

Overall layout

Is the font size clear and legible to an individual with 6/12 vision? (Retype rather than photocopy if necessary)

Are graphics, illustrations and colour used judiciously to provide a clear and professional overall effect?

Are the pages numbered clearly and stapled securely?

Are there adequate instructions on how to complete each item, with examples where necessary?

Demographic information

Has all information necessary for developing a profile of participants been sought?

Are any questions in this section irrelevant, misleading or superfluous?

Are any questions offensive or otherwise inappropriate?

Will respondents know the answers to the questions?

Measures (main body of questionnaire)

Are the measures valid and reliable?

Are any items unnecessary or repetitive?

Is the questionnaire of an appropriate length?

Could the order of items bias replies or affect participation rates (in general, put sensitive questions towards the end)?

Closing comments

Is there a clear message that the end of the questionnaire has been reached?

Have participants been thanked for their co-operation?

Accompanying materials

If the questionnaire is to be returned by post, has a stamped addressed envelope (with return address on it) been included?

If an insert (eg leaflet), gift (eg book token) or honorarium is part of the study protocol, has this been included?

Convenience sampling: Participants are selected from a group who are available at the time of study (e.g. GPs attending a practice meeting). Good for canvassing a known group of participants; should be avoided if you are trying to complete a random study, or one where you wish to generalise results to a wider population.

Random sampling: A sample group is identified, and a selection of people from that group is invited to participate; for example, every fourth practice on a list of GPs in the whole of Scotland is contacted. Use in studies where you wish to reflect the viewpoints of a wider population. Random samples can be ‘simple’ (every nth person is contacted, and all have an equal chance of selection), ‘systematic’ (participants don’t have an equal chance of selection), or ‘stratified’ (the sample is broken down into groups and a subsample within each is approached).

Cluster sampling: Subsections of groups are identified and a selection of those are randomly approached to participate. For example, in the case above, it wouldn’t be feasible to contact GPs across the whole of Scotland, but a cluster group within Lanarkshire could be approached. Use in studies where you wish to maintain random selection but are limited in the number of people you can contact.

Quota sampling: Participants who match the wider population are identified (e.g. grouped by social class, gender, age, etc.), and researchers are given a set number within each group to interview (e.g. so many young middle-class women). For studies where you want outcomes to be as closely representative of the wider population as possible; frequently used in political opinion polls.

Snowball sampling: Participants are recruited and asked to identify other similar people to take part in the research. Helpful when working with hard-to-reach groups (e.g. lesbian mothers).
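A few of these selection schemes can be sketched with Python's standard `random` module. The sampling frame, region names, and sample sizes below are all invented for illustration; a real study would draw from an actual list of practices.

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

regions = ["Highland", "Lanarkshire", "Fife", "Borders"]
# Hypothetical sampling frame: 400 GP practices, each tagged with a region.
practices = [{"id": i, "region": random.choice(regions)} for i in range(400)]

# Random sample: draw 40 practices, each with an equal chance of selection.
simple = random.sample(practices, k=40)

# Every-nth selection: every fourth practice on the list, as in the GP example.
every_fourth = practices[::4]

# Cluster-style sample: pick one region at random, then approach the
# practices within that cluster only.
cluster_region = random.choice(regions)
cluster = [p for p in practices if p["region"] == cluster_region]

print(len(simple), len(every_fourth), cluster_region, len(cluster))
```

Convenience, quota, and snowball sampling have no random element to simulate; they depend on who is available, who fits the quota, and who participants nominate.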

What information did the researchers seek to obtain?

 

Was a questionnaire the most appropriate method and if not, what design might have been more appropriate?

 

Were there any existing measures (questionnaires) that the researchers could have used? If so, why was a new one developed and was this justified?

 

Were the views of consumers sought about the design, distribution, and administration of the questionnaire?

 

What claims for validity have been made, and are they justified? (In other words, what evidence is there that the instrument measures what it sets out to measure?)

 

What claims for reliability have been made, and are they justified? (In other words, what evidence is there that the instrument provides stable responses over time and between researchers?)

 

Was the title of the questionnaire appropriate and if not, what were its limitations?

 

What format did the questionnaire take, and were open and closed questions used appropriately?

 

Were easy, non-threatening questions placed at the beginning of the measure and sensitive ones near the end?

 

Was the questionnaire kept as brief as the study allowed?

 

Did the questions make sense, and could the participants in the sample understand them? Were any questions ambiguous or overly complicated?

 

Did the questionnaire contain adequate instructions for completion—eg example answers, or an explanation of whether a ticked or written response was required?

 

Were participants told how to return the questionnaire once completed?

 

Did the questionnaire contain an explanation of the research, a summary of what would happen to the data, and a thank you message?

 

Was the questionnaire adequately piloted in terms of the method and means of administration, on people who were representative of the study population?

 

How was the piloting exercise undertaken—what details are given?

 

In what ways was the definitive instrument changed as a result of piloting?

 

What was the sampling frame for the definitive study and was it sufficiently large and representative?

 

Was the instrument suitable for all participants and potential participants? In particular, did it take account of the likely range of physical/mental/cognitive abilities, language/literacy, understanding of numbers/scaling, and perceived threat of questions or questioner?

 

How was the questionnaire distributed?

 

How was the questionnaire administered?

 

Were the response rates reported fully, including details of participants who were unsuitable for the research or refused to take part?

 

Have any potential response biases been discussed?

 

What sort of analysis was carried out and was this appropriate? (eg correct statistical tests for quantitative answers, qualitative analysis for open ended questions)

 

What measures were in place to maintain the accuracy of the data, and were these adequate?

 

Is there any evidence of data dredging—that is, analyses that were not hypothesis driven?

 

What were the results and were all relevant data reported?

 

Are quantitative results definitive (significant), and are relevant non-significant results also reported?

 

Have qualitative results been adequately interpreted (e.g. using an explicit theoretical framework), and have any quotes been properly justified and contextualised?

 

What do the results mean and have the researchers drawn an appropriate link between the data and their conclusions?

 

Have the findings been placed within the wider body of knowledge in the field (eg via a comprehensive literature review), and are any recommendations justified?

 


Quantitative Research Questionnaire – Types & Examples

Published by Alvin Nicolas on August 20th, 2024; revised on August 21st, 2024

Research is usually done to provide solutions to an ongoing problem. Wherever researchers see a gap, they launch research to fill it, enhancing their own knowledge and meeting the needs of others. When they approach a question from a subjective point of view, they consider qualitative research; when they approach it from an objective point of view, they consider quantitative research.

There’s a fine line between subjectivity and objectivity. Qualitative research, rooted in subjectivity, assesses individuals’ personal opinions and experiences, while quantitative research, associated with objectivity, collects numerical data to derive results. The most common instrument for collecting data in quantitative research is the questionnaire.

Let’s discuss what a quantitative research questionnaire is, its types, methods of writing questions, and types of survey questions. By thoroughly understanding these key essential terms, you can efficiently create a professional and well-organised quantitative research questionnaire.

What is a Quantitative Research Questionnaire?

Quantitative research questionnaires are well-structured sets of closed-ended questions designed to gather specific, measurable responses from participants. They allow researchers to collect numerical data and build a clear picture of a particular event or problem.

Qualitative research questionnaires contain open-ended questions that allow participants to express themselves freely, while quantitative research questionnaires use closed-ended, specific formats, such as multiple-choice items and Likert scales, to measure individuals’ behaviour.

Quantitative research questionnaires are usually used in research in various fields, such as psychology, medicine, chemistry, and economics.

Let’s see how you can write quantitative research questions by going through some examples:

  • How much do British people consume fast food per week?
  • What is the percentage of students living in hostels in London?

Types of Quantitative Research Questions With Examples

After learning what a quantitative research questionnaire is and what quantitative research questions look like, it’s time to thoroughly discuss the different types of quantitative research questions to explore this topic more.

Dichotomous Questions

Dichotomous questions are those with a margin for only two possible answers. They are usually used when the answers are “Yes/No” or “True/False.” These questions significantly simplify the research process and help collect simple responses.

Example: Have you ever visited Istanbul?

Multiple Choice Questions

Multiple-choice questions have a list of possible answers for the participants to choose from. They help assess people’s general knowledge, and the data gathered by multiple-choice questions can be easily analysed.

Example: Which of the following is the capital of France?

Multiple Answer Questions

Multiple-answer questions are similar to multiple-choice questions, except that participants may select more than one of the listed answers. They are used when a question doesn’t have a single, specific answer.

Example: Which of the following movie genres are your favourite?

Likert Scale Questions

Likert scale questions are used when participants’ preferences and emotions are measured from one extreme to another. The scales are usually applied to measure likelihood, frequency, satisfaction, and agreement. A Likert scale typically offers five options (seven-point variants are also common).

Example: How satisfied are you with your job?

Semantic Differential Questions

Similar to Likert scales, semantic differential questions are used to measure the emotions and attitudes of participants. The difference is that, instead of labelled extremes such as “strongly agree” and “strongly disagree”, a pair of opposite adjectives anchors the two ends of the scale, which helps reduce bias.

Example: Please rate the services of our company.

Rank Order Questions

Rank-order questions are used to measure the preferences and choices of participants efficiently. Multiple options are given, and participants are asked to rank them from their own perspective. This helps build a good participant profile.

Example: Rank the given books according to your interest.

Matrix Questions

Matrix questions are similar to Likert scales. With Likert scales, participants’ responses are measured through separate questions; a matrix question compiles multiple questions into a single grid, which makes data collection more efficient.

Example: Rate the following activities that you do in daily life.
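As a minimal illustration of how responses to Likert and matrix items are typically prepared for analysis, the labels can be mapped to numbers and summarised. The 1–5 coding below is a common convention, not something prescribed above:

```python
# Hypothetical sketch: coding five-point Likert responses numerically
# (1 = Strongly Disagree ... 5 = Strongly Agree) and summarising them.
LIKERT = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

responses = ["Agree", "Neutral", "Strongly Agree", "Agree"]  # made-up data
scores = [LIKERT[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(mean_score)  # 4.0
```

The same coding works for a matrix question: each row of the grid simply becomes one list of responses.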

How To Write Quantitative Research Questions?

Quantitative research questions allow researchers to gather empirical data to answer their research problems. As we have discussed the different types of quantitative research questions above, it’s time to learn how to write the perfect quantitative research questions for a questionnaire and streamline your research process.

Here are the steps to follow to write quantitative research questions efficiently.

Step 1: Determine the Research Goals

The first step in writing quantitative research questions is to determine your research goals. Determining and confirming your research goals helps you understand what kind of questions you need to create and for which audience. Clearly defined goals also reduce the need for later modifications to the questionnaire.

Step 2: Be Mindful About the Variables

Questions involve two kinds of variables: independent and dependent. It is essential to decide which variable in your questions is dependent and which is independent; this clarifies where to place the emphasis and reduces the likelihood of redundant or vague questions.

Step 3: Choose the Right Type of Question

It is also important to determine the right type of questions to add to your questionnaire. Whether you want Likert scales, rank-order questions, or multiple-answer questions, choosing the right type of questions will help you measure individuals’ responses efficiently and accurately.

Step 4: Use Easy and Clear Language

Another thing to keep in mind while writing questions for a quantitative research questionnaire is to use easy and clear language. As you know, quantitative research is done to measure specific and simple responses in empirical form, and using easy and understandable language in questions makes a huge difference.

Step 5: Be Specific About The Topic

Always be mindful and specific about your topic. Avoid writing questions that divert from your topic because they can cause participants to lose interest. Use the basic terms of your selected topic and gradually go deep. Also, remember to align your topic and questions with your research objectives and goals.

Step 6: Appropriately Write Your Questions

When you have considered all the points discussed above, it’s time to write your questions. Don’t rush the writing: think about what each question will yield before adding it to the questionnaire. Be precise, and avoid overwriting.

Step 7: Gather Feedback From Peers

When you have finished writing questions, gather feedback from your researcher peers. Write down all the suggestions and feedback given by your peers. Don’t panic over the criticism of your questions. Remember that it’s still time to make necessary changes to the questionnaire before launching your campaign.

Step 8: Refine and Finalise the Questions

After gathering peer feedback, make necessary and appropriate changes to your questions. Be mindful of your research goals and topic. Try to modify your questions according to them. Also, be mindful of the theme and colour scheme of the questionnaire that you decided on. After refining the questions, finalise your questionnaire.

Types of Survey Questionnaires in Quantitative Research

Quantitative research questionnaires use close-ended questions that allow researchers to measure accurate, specific responses from participants. They do not contain open-ended questions as qualitative research does, where responses are gathered through interviews and focus groups. A good quantitative survey combines several question types.

Here are the main types of surveys used in quantitative research:

Descriptive Survey

The descriptive survey is used to obtain information about, and quantify, a particular research variable. Questions in descriptive surveys mostly start with “What is” and “How much”.

Example: A descriptive survey to measure how much money children spend to buy toys.

Comparative Survey

A comparative survey is used to compare one or more dependent variables across two or more comparison groups. This survey aims to establish a comparative relation between the variables under study. The question structure in a comparative survey is: “What is the difference in [dependent variable] between [two or more groups]?”.

Example: A comparative survey on the difference in political awareness between Eastern and Western citizens.

Relationship-Based Survey

A relationship-based survey is used to understand the relationship or association between two or more independent and dependent variables; it measures cause and effect between the variables. The question structure in a relationship-based survey is: “What is the relation [between or among] [independent variable] and [dependent variable]?”.

Example: What is the relationship between education and lifestyle in America?
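To make the comparative idea concrete, here is a small sketch (with made-up scores, purely for illustration) that computes the difference in a dependent variable between two comparison groups:

```python
# Hypothetical data: awareness scores (0-100) for two groups, illustrating
# "What is the difference in [dependent variable] between [two groups]?"
eastern = [62, 70, 58, 66]
western = [55, 61, 59, 65]

def mean(xs):
    return sum(xs) / len(xs)

difference = mean(eastern) - mean(western)
print(difference)  # 4.0
```

A real analysis would follow this with a significance test, but the survey question itself is answered by exactly this kind of group comparison.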

Advantages & Disadvantages of Questionnaires in Quantitative Research

Quantitative research questionnaires are an excellent tool to collect data and information about the responses of individuals. Quantitative research comes with various advantages, but along with advantages, it also has its disadvantages. Check the table below to learn about the advantages and disadvantages of a quantitative research questionnaire.

Advantages:

  • It is an efficient way to collect data quickly.
  • There is less risk of subjectivity and research bias.
  • It helps collect extensive insights into a population.
  • It focuses on simplicity and specificity.
  • Research objectives are clear and achievable.

Disadvantages:

  • It restricts the depth of the topic during collection.
  • There is a risk of artificial or unrealistic expectations in research questions.
  • It overemphasises empirical data, leaving out personal opinions.
  • There is a risk of over-simplification.
  • There is a risk of additional amendments and modifications.

Quantitative Research Questionnaire Example

Here is an example of a quantitative research questionnaire to help you get the idea and create an efficient and well-developed questionnaire for your research:

Welcome, and thank you for participating in our survey. Please respond to the questions below. Your responses will significantly help us achieve our research goals and provide effective solutions to society.

Part 1: Demographic information.

i) What is your age?

17-20

21-24

25-28

29-32

ii) What is your gender?

Male

Female

Other

Prefer not to say

iii) Have you graduated?

Yes

No

iv) Are you employed?

Yes

No

v) Are you married?

Yes

No

 

Part 2: Provide your honest response. 

Question 1: I have tried online shopping.

Strongly Disagree

Disagree

Neutral 

Agree

Strongly Agree

Question 2: I have had a good experience with online shopping.

Strongly Disagree

Disagree

Neutral

Agree

Strongly Agree

Question 3: I have had a bad experience with online shopping.

Question 4: I received my order on time.

Question 5: I prefer physical shopping.

(Questions 3–5 use the same five-point scale as above.)
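Once a questionnaire like this has been administered, the close-ended answers can be tallied directly. A minimal sketch, using invented responses to the dichotomous question “Have you graduated?”:

```python
from collections import Counter

# Hypothetical responses to a Yes/No (dichotomous) question.
answers = ["Yes", "No", "Yes", "Yes", "No"]
counts = Counter(answers)
pct_yes = 100 * counts["Yes"] / len(answers)
print(counts["Yes"], counts["No"], pct_yes)  # 3 2 60.0
```

The same counting approach applies to each option of a multiple-choice or Likert item.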

Frequently Asked Questions

What is a quantitative research questionnaire?

A quantitative research questionnaire is a well-structured set of questions designed specifically to gather specific and close-ended participant responses.

What is the difference between qualitative and quantitative research?

The core difference between qualitative and quantitative research is subjectivity versus objectivity: subjectivity is associated with qualitative research, while objectivity is associated with quantitative research.

What are the advantages of a quantitative research questionnaire?

  • It is quick and efficient.
  • There is less risk of research bias and subjectivity.
  • It is particular and simple.



21 Questionnaire Templates: Examples and Samples


Questionnaire: Definition

A questionnaire is defined as a market research instrument consisting of questions or prompts used to elicit and collect responses from a sample of respondents. A questionnaire is typically a mix of open-ended questions and close-ended questions; the former allow respondents to express their views in detail.

A questionnaire can be used in both qualitative and quantitative market research, with different types of questions.


Types of Questionnaires

We have learnt that a questionnaire can be either structured or free-flowing. To explain this better:

  • Structured Questionnaires: A structured questionnaire helps collect quantitative data. In this case, the questionnaire is designed to collect a very specific type of information. It can be used to initiate a formal enquiry or to collect data to prove or disprove a prior hypothesis.
  • Unstructured Questionnaires: An unstructured questionnaire collects qualitative data. The questionnaire in this case has a basic structure and some branching questions, but nothing that limits a respondent’s answers. The questions are more open-ended.


Types of Questions used in a Questionnaire

A questionnaire can consist of many types of questions. Some of the most commonly and widely used question types are:

  • Open-Ended Questions: One of the most commonly used question types in a questionnaire is the open-ended question. These questions help collect in-depth data from a respondent, as there is broad scope to respond in detail.
  • Dichotomous Questions: The dichotomous question is a “yes/no” close-ended question. It is generally used when basic validation is needed, and it is the easiest question type in a questionnaire.
  • Multiple-Choice Questions: An easy question type to administer and respond to is the multiple-choice question. These are close-ended questions in either single-select or multiple-select form. Each multiple-choice question consists of a stem (the question), the right answer or answers, close alternatives, and distractors (incorrect answers). Depending on the objective of the research, a mix of these option types can be used.
  • Net Promoter Score (NPS) Question: Another commonly used question type is the Net Promoter Score (NPS) question, where a single question collects data on how likely respondents are to recommend the topic in question.
  • Scaling Questions: Scaling questions are widely used in questionnaires because they make responding very easy. These questions are based on the four measurement scales: nominal, ordinal, interval, and ratio.
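For the NPS question above, the standard scoring convention (stated here as background, since the text does not spell it out) is that respondents rate likelihood to recommend on a 0–10 scale, and NPS = % promoters (9–10) minus % detractors (0–6). A minimal sketch with invented ratings:

```python
# Hypothetical ratings on the 0-10 "likelihood to recommend" scale.
ratings = [10, 9, 8, 7, 6, 10, 3, 9, 8, 10]
promoters = sum(1 for r in ratings if r >= 9)    # scores of 9 or 10
detractors = sum(1 for r in ratings if r <= 6)   # scores of 0 through 6
nps = 100 * (promoters - detractors) / len(ratings)
print(nps)  # 30.0
```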

Questionnaires help enterprises collect valuable data for making well-informed business decisions. There are powerful tools on the market that allow multiple question types, ready-to-use survey templates, robust analytics, and many more features for conducting comprehensive market research.


For example, suppose an enterprise wants to conduct market research to understand what pricing would best capture a higher market share for its new product. In such a case, a questionnaire for competitor analysis can be sent to the target audience using market research survey software, enabling the enterprise to conduct 360-degree market research and make strategic business decisions.

Now that we have learned what a questionnaire is and how it is used in market research, here are some examples and samples of widely used questionnaire templates on the QuestionPro platform:


Customer Questionnaire Templates: Examples and Samples

QuestionPro specializes in end-to-end customer questionnaire templates that can be used to evaluate the customer journey, from first engaging with a brand to continued use and willingness to recommend it. These templates are excellent samples from which to build your own questionnaire and begin testing customer satisfaction and experience based on customer feedback.


Employee & Human Resource (HR) Questionnaire Templates: Examples and Samples

QuestionPro has built a huge repository of employee questionnaires and HR questionnaires that can be readily deployed to collect feedback from an organization’s workforce on multiple parameters such as employee satisfaction, benefits evaluation, manager evaluation, and exit formalities. These templates provide a holistic approach to collecting actionable data from employees.

Community Questionnaire Templates: Examples and Samples

The QuestionPro repository of community questionnaires helps collect varied data on all community aspects. This template library includes popular questionnaires such as community service, demographic questionnaires, psychographic questionnaires, personal questionnaires and much more.

Academic Evaluation Questionnaire Templates: Examples and Samples

Another widely used section of the QuestionPro questionnaire templates is the academic evaluation questionnaires. These questionnaires are crafted to collect in-depth data about academic institutions: the quality of teaching, extra-curricular activities, and other educational matters.


Examples of Research Questions

PhD in Nursing Science Program

Examples of broad clinical research questions include:

  • Does the administration of pain medication at time of surgical incision reduce the need for pain medication twenty-four hours after surgery?
  • What maternal factors are associated with obesity in toddlers?
  • What elements of a peer support intervention prevent suicide in high school females?
  • What is the most accurate and comprehensive way to determine men’s experience of physical assault?
  • Is yoga as effective as traditional physical therapy in reducing lymphedema in patients who have had head and neck cancer treatment?
  • In the third stage of labor, what is the effect of cord cutting within the first three minutes on placenta separation?
  • Do teenagers with Type 1 diabetes who receive phone tweet reminders maintain lower blood sugars than those who do not?
  • Do the elderly diagnosed with dementia experience pain?
  •  How can siblings’ risk of depression be predicted after the death of a child?
  •  How can cachexia be prevented in cancer patients receiving aggressive protocols involving radiation and chemotherapy?

Examples of some general health services research questions are:

  • Does the organization of renal transplant nurse coordinators’ responsibilities influence live donor rates?
  • What activities of nurse managers are associated with nurse turnover? With 30-day readmission rates?
  • What effect does the Nurse Faculty Loan program have on the nurse researcher workforce?  What effect would a 20% decrease in funds have?
  • How do psychiatric hospital unit designs influence the incidence of patients’ aggression?
  • What are Native American patient preferences regarding the timing, location and costs for weight management counseling and how will meeting these preferences influence participation?
  •  What predicts registered nurse retention in the US Army?
  • How, if at all, are the timing and location of suicide prevention appointments linked to veterans’ suicide rates?
  • What predicts the sustainability of quality improvement programs in operating rooms?
  • Do integrated computerized nursing records across points of care improve patient outcomes?
  • How many nurse practitioners will the US need in 2020?



Health Questionnaire


A questionnaire is a systematic tool that uses a series of questions to gather information. Survey questionnaires are usually administered to a large group of people in a particular area; these people are known as the respondents of the survey.

Questionnaires are especially useful in conducting surveys for research of any kind. Many researchers prefer questionnaires for their convenience: they often do not require face-to-face interviews with respondents and can be sent simultaneously to a large number of respondents, making it easier and quicker to obtain the data researchers need.

Health Questionnaire Templates

  • Health Questionnaire Template (Google Docs, Apple Pages)
  • New Patient Health Questionnaire
  • Patient Health History Questionnaire
  • Confidential Occupational Health Questionnaire
  • Occupational Health Questionnaire (Doc and PDF versions)

What Is a Health Questionnaire?

A health questionnaire can be defined as a questionnaire asking a set of questions about an individual’s health.

In a health questionnaire, the respondent is asked to answer questions regarding his/her overall health condition; health history, including previous or current illnesses and medications or treatments; alcohol consumption and cigarette use; physical activity and diet; and family medical history.

Health questionnaires also include basic personal information (e.g. name, age, gender, weight, height, etc.) including the respondent’s address and contact information.

Health questionnaires (e.g. a physical activity questionnaire or a personality questionnaire) are usually administered when a medical checkup is requested, or when an employer or school head wishes to administer them to employees or students.

Also, healthcare professionals or providers may use health questionnaires in conducting healthcare surveys. Results of such surveys are usually interpreted in order to provide a better service to patients or clients. All information written in a health questionnaire must remain confidential, especially sensitive details or anything the patient wants kept confidential.

Child Health, Assessment, and Screening Questionnaire Templates

  • Childhood Health Assessment Questionnaire
  • Children’s Health Questionnaire
  • Student Health Assessment Questionnaire
  • Health Risk Assessment Questionnaire
  • Personal Health Assessment Questionnaire
  • General Health Screening Questionnaire
  • Pre-Employment Health Screening Questionnaire

Purpose of a Health Questionnaire

Health questionnaires are often used to screen a person’s physical and mental health. A health questionnaire asks a series of questions whose answers are interpreted to identify any disorder or problem with a person’s overall health.

A patient must always answer the questions honestly and, of course, must include everything he/she knows about his/her health history, including his/her family’s medical history.

Health questionnaires are useful in healthcare surveys and medical research, where researchers use them to gather patient information. Patient respondents often prefer health questionnaires to personal interviews because questionnaires give them the privacy to answer sensitive questions about their health history; others may simply be too shy to answer sensitive questions in the presence of another person.

In some cases, results from health assessment questionnaires are used to determine how certain diseases affect a specific community or population. They also help researchers analyze factors affecting the spread of certain diseases, or how these factors affect treatments and treatment availability. Others use the aid of healthcare questionnaires to determine what treatment or procedure is most effective in treating diseases or what most patients prefer to use in treating particular diseases.

Healthcare questionnaires may also be used by healthcare providers, especially when formulating new treatments or procedures. Providers analyze respondents’ answers to identify what kind of treatment or procedure would best fit patient or client preferences.

Health questionnaires are equally useful to doctors and patients. Using health questionnaires, doctors can evaluate the overall health status of a patient, including any treatments the patient needs if a health problem is found. At the same time, patients are given the chance to evaluate their own overall health status.

Employee, Survey, History, Physical, Behavior, and Adult Health Questionnaire Templates

  • Pre-Employment Health Questionnaire
  • New Employee Questionnaire
  • Mental Health Survey Questionnaire
  • Community Survey Questionnaire
  • Family Health History Questionnaire
  • Confidential History Questionnaire
  • Adult Physical Health Questionnaire
  • General Physical Questionnaire
  • Health-Related Behavior Questionnaire
  • Adult New Patient Questionnaire
  • Adult Health History Questionnaire

Importance of Health History Questionnaire

A patient’s health history contains details of all the factors that affect, or may affect, his/her health status.

This usually includes lifestyle information such as

  • cigarette/tobacco or substance use and alcohol consumption,
  • previous illnesses,
  • social and cultural aspects,

or any other factor affecting his/her present health condition.

Health history questionnaires may also include questions about a respondent’s family medical history: any history of alcohol or substance use and abuse, hereditary diseases (diseases that run in the family) or diseases that caused the hospitalization or death of a family member, and family traditions or culture that may have affected the patient’s present overall health status.

Information gathered from a health history questionnaire is essential in determining the factors that may have caused problems (if any) in a patient’s health condition.

Any history of diseases within the family will also be taken into consideration when examining the patient’s health. Results from interpreting a patient’s health history will indicate whether he/she is prone to any diseases, especially considering his/her present lifestyle.

Based on these results, doctors or health professionals will advise the patient to change his/her lifestyle to a healthier one in order to avoid contracting such diseases.

Pediatric, Insurance, and Other Health Questionnaire Templates

  • Pediatric Health History Questionnaire
  • Pediatric Health Questionnaire (PDF)
  • Private Health Insurance Questionnaire
  • Group Health Questionnaire
  • Family Health Questionnaire
  • Mental Health Questionnaire (PDF)
  • Holistic Health Questionnaire
  • Health Evaluation Questionnaire (PDF)
  • Standard Health Questionnaire
  • Driver Health Questionnaire (PDF)
  • Health Literacy Questionnaire
  • Health Anxiety Questionnaire (Doc)
  • General Health Questionnaire

Tips in Writing a Health Questionnaire

A health questionnaire is not something to improvise. If you are planning to construct one, be careful about the questions you include: some patients are sensitive about many aspects of their health, so crafting a health questionnaire is never easy. Here are some things to consider when constructing one:

1. Identify your purpose.

What is the purpose of conducting this survey questionnaire ?

2. Choose a question type.

Questions may be open-ended or close-ended depending on what results you desire to have.

3. Make a list of possible questions.

What do you want to know about your respondent’s health?

4. Organize.

Create categories to differentiate your questions. You may group sensitive questions into one category and place it in the last part of the survey.

5. Determine the method of administration.

You can choose to administer your survey by post, email, or in person.

6. Guide respondents.

Health questionnaires contain quite a lot of sensitive questions for the patient to work through, so be considerate and provide guidance, especially if respondents find something confusing.

7. Keep it short.

You might want to keep your health questionnaire short (two pages at most will do).

8. Review and revise as needed.

Remove inappropriate questions, revise other questions and polish your questionnaire.



AHRQ: Agency for Healthcare Research and Quality



Questionnaire/Survey

National survey of physicians on practice experience.

This is a questionnaire designed to be completed by physicians in ambulatory and inpatient settings. The tool includes questions to assess the current state of clinical decision support systems, electronic health records, practice management systems, and secure messaging.

2009 International Survey of Primary Care Doctors

This is a questionnaire designed to be completed by physicians in an ambulatory setting. The tool includes questions to assess the usability of electronic health records and electronic prescribing.

Massachusetts Survey of Physicians and Computer Technology

This is a questionnaire designed to be completed by physicians in an ambulatory setting. The tool includes questions to assess users' perceptions of electronic health records.

Sharing Electronic Behavior Health Records: A Nebraska Perspective

This is a questionnaire designed to be completed by physicians, implementers, and nurses across a health care system setting. The tool includes questions to assess the benefits, current state, usability, and users' perceptions of and attitudes toward electronic health records and health information exchange.

Canada Health Infoway System And Use Assessment Survey

This is a questionnaire designed to be completed by administrators, clinical staff, and pharmacists across a health care system. The tool includes questions to assess the usability of clinical decision support systems, electronic health records, and enterprise systems.

Community Chronic Care Network (CCCN) Online Publication and Education: User Needs Survey

This is a questionnaire designed to be completed by clinical staff in an ambulatory setting. The tool includes questions to assess the usability of disease registries.

Community Chronic Care Network (CCCN) Stakeholder Survey

This is a questionnaire designed to be completed by administrators, clinical staff, and office staff in an ambulatory setting. The tool includes questions to assess users' perceptions of disease registries.

Community Chronic Care Network (CCCN) Online Registry: User Interviews and Survey Questions

This is a questionnaire designed to be completed by administrators, clinical staff, and office staff in an ambulatory setting. The tool includes questions to assess the functionality of disease registries.

Clinical Portal Survey: Mt. Ascutney Hospital and Health Center

This is a questionnaire designed to be completed by nurses, physicians, and hospital staff in an inpatient setting. The tool includes questions to assess users' needs regarding electronic health records.

Clinician Survey on Quality Improvement, Best Practice Guidelines and Information Technology

This is a questionnaire designed to be completed by physicians, clinical staff, and nurses across a health care system. The tool includes questions to assess users' perceptions and the current state of electronic health records.


50+ SAMPLE Medical Questionnaires in PDF | MS Word


  • Complete Medical Questionnaire
  • Basic Medical Questionnaire Template
  • Health and Medical Questionnaire
  • General Medical Questionnaire
  • Medical Group Questionnaire
  • Medical History Questionnaire Template
  • Physician Medical Questionnaire
  • Initial Medical Questionnaire
  • Medical Exam Questionnaire
  • Workers Medical Status Questionnaire
  • Diabetes Medical Questionnaire
  • Detailed Medical Questionnaire
  • Employee and Family Medical Questionnaire
  • Travel Guard Medical Questionnaire
  • Pre-Employment Medical Questionnaire
  • Confidential Medical Questionnaire
  • Corporate Medical Questionnaire
  • Medical Screening Questionnaire
  • Driver Health Questionnaire Template
  • Individual Medical Questionnaire Template
  • Employment Medical Evaluation Questionnaire
  • Medical Questionnaire Format
  • New Employee Medical Questionnaire
  • International Travel Medical Questionnaire
  • Simple Medical Questionnaire Template
  • Occupational Medical Health Questionnaire
  • Student Medical Questionnaire
  • Transat Medical Questionnaire
  • Medical Business Questionnaire
  • Trail Medical Questionnaire
  • Respirator Medical Evaluation Questionnaire
  • Exposure and Medical Questionnaire
  • Adventures Medical Questionnaire
  • Confidential Medical Questionnaire Template
  • Group Employer Medical Questionnaire
  • Formal Medical Questionnaire Template
  • Respirator Medical Questionnaire Template
  • Medical and Dental Questionnaire
  • Adult New Patient Questionnaire
  • Adult New Patient Questionnaire Example
  • Post Offer Medical Questionnaire
  • Medical Questionnaire for Respiratory Equipment
  • Family Medical History Questionnaire
  • Respiratory Medical Evaluation Questionnaire
  • Coronavirus Medical Questionnaire
  • Pre-Admission Medical Questionnaire
  • Accessibility and Medical Questionnaire
  • Staff Medical Questionnaire
  • Medical and Disability Questionnaire
  • Medical Surveillance Questionnaire
  • Medical Information Questionnaire

How to Create a Medical Questionnaire

1. Consider your purpose.

2. Insert the medical questionnaire's parts.

3. Prepare clear and direct questions.

4. Use an easy-to-answer questionnaire.


Open access | Published: 20 August 2024

Assessment of multi-professional primary healthcare center quality by patients with multimorbidity

  • Antoine Dany
  • Paul Aujoulat
  • Floriane Colin
  • Jean-Yves Le Reste
  • Delphine Le Goff

BMC Health Services Research, volume 24, article number 954 (2024)


The main aim of this study was to build an item bank for assessing the care quality of multi-professional healthcare centers (MPHCC) from the perspective of patients with multimorbidity. This study was part of the QUALSOPRIM (QUALité des SOins PRIMaires; primary healthcare quality) research project to create a psychometrically robust self-administered questionnaire to assess healthcare quality.

First, twelve experts built an item bank using data from a previous qualitative work and a systematic literature review. Second, the validity of each item was assessed in a sample of patients. Adult patients with multimorbidity were recruited from six French MPHCC. Items were assessed based on ceiling effects, the level of missing or neutral responses, and patient feedback. Patient feedback was recorded after item bank completion. Based on these results, items were validated, improved, or removed during expert meetings. In case of disagreement, the Delphi method was used to reach consensus.

The study sample included 209 outpatients. The most frequent medical conditions were cardiovascular risk factors, cardiovascular diseases and rheumatological conditions. In total, a bank of 109 items classified in nine domains was built. The validity assessment led to the removal of 34 items. Retained items explored a variety of topics related to care quality: availability, accessibility, premises’ layout and building, technical care, expertise, organization, relationships with caregivers and communication, involvement and personal relationships.

Conclusions

This study allowed cross-validation of a bank of 75 items, leading to a complete picture of the patient perception of care quality items. Overall, patients were generally satisfied with their care at the MPHCC. Nonetheless, there were still numerous items on subjects for which patients’ satisfaction could be improved.

Peer Review reports

Introduction

Patients with multimorbidity have one chronic disease plus at least one other (acute or chronic) disease, a biopsychosocial risk factor and/or a somatic risk factor. They often experience complex healthcare interactions [ 1 ]. To meet their increasing care needs, healthcare systems are shifting to a more patient-centered and comprehensive approach with increasing numbers of multi-professional healthcare centers (MPHCC) [ 2 , 3 , 4 , 5 ]. Patient-centered care, including user experience, quality of care and outcomes, can help to produce high-quality healthcare systems [ 6 ]. Measuring healthcare quality can help improve patient-centered care [ 7 , 8 ].

Primary healthcare provides integrated, accessible healthcare services by clinicians who address most healthcare needs by developing a partnership with patients, and by practicing in the context of family and community [ 9 ]. Healthcare quality can be evaluated from the patients' and healthcare professionals' (HCP) perspectives. Independent medical evaluation is the gold standard approach to assess healthcare quality from the HCP perspective. To assess healthcare quality from the patients' perspective, two approaches can be used: (i) patient experience, which reflects the patients' perception of the received care, and (ii) patient satisfaction, which reflects the gap between the quality of the received care and personal expectations [ 10 , 11 ]. The patient perspective allows assessing the patient-centeredness of care, which is a feature of high-quality care, like safety and efficacy [ 12 ].

A recent systematic review revealed that many self-assessment instruments are available to measure care quality at MPHCC from the perspective of patients with multimorbidity. These instruments capture many patient experiences, but few have strong psychometric properties. This review highlighted the need for a valid, responsive, reliable and robust instrument to assess and improve the quality of primary care [ 13 ]. Therefore, the aim of the QUALSOPRIM (QUALité des SOins PRIMaires; primary healthcare quality) project was to build a new evaluation tool with robust psychometric properties. First, a qualitative study was performed using in-depth, face-to-face interviews with 26 patients, 23 informal caregivers and 57 HCPs from five MPHCC in France. This study showed that patients, informal caregivers, and HCPs shared a common vision of how to improve primary care quality. Nine core domains of care quality were identified [ 14 ].

The main aim of the present study was to build an item bank for MPHCC care quality assessment by patients with multimorbidity.

Item bank construction

Twelve experts [two general practitioners (GP), a psychometrician, and seven GP trainees] developed the bank of items, which were formulated according to the Question Appraisal System-1999 guideline [ 15 ]. Response options were phrased during expert meetings. To formulate the response options, experts were instructed to assign an objective quantity to each response option (e.g. numbers or common situations that patients could easily relate to). Item response options were designed to ensure the possible range of responses would be captured. The maximum number of response options was set at five.

The item bank included questions on patient experience and satisfaction (available as supplementary file). As healthcare quality is a multi-dimensional concept, items were classified into several domains on the basis of previous phases of the QUALSOPRIM project, clinicians’ experience at a MPHCC and psychometric model requirements.

Validity assessment of the item bank

Validity assessment was conducted in Brittany, a region in northwest France. It started in the summer of 2019 and was planned to last 1 year. The assessment goal was to validate, improve or remove items. Face validity and psychometric dysfunction were assessed to reduce the item bank size, as a smaller item bank would facilitate all subsequent work.

Inclusion criteria

Patients (≥ 18 years of age) with at least two chronic conditions and followed by at least two HCPs were recruited. Purposive sampling was used to ensure that enough patients had home care and an informal caregiver so that related items could be assessed: at least 25% of recruited patients needed to have an informal caregiver and at least 50% needed to have home care. An investigating physician identified the eligible patients at the MPHCC. Patients were included if they were able to express their informed consent and signed a written consent form. Patients were anonymized and received an ID number. The experimental protocol was approved by the ethical research committee (Comités de Protection des Personnes SUD-EST IV) and was categorized as observational.

GP trainees underwent training on the item bank construction methodology and the required administrative tasks. A GP trainee met the included patients at their MPHCC or their home to explain how to assess the item bank. Patient data were collected: age, sex, care details (duration, frequency, HCP type, place of care), and current medical conditions. Each patient completed a paper version of the item bank. The GP trainee could provide help for the first ten items, if needed. Patients with an informal caregiver could complete the related domain items with their help. Each answer was scored from 1 (total disagreement) to 4 (total agreement); neutral answers (unconcerned or did not mind) were scored zero. After completing the item bank assessment, patients participated in an open discussion with the GP trainee who included them in the study. The aim was to evaluate how well they understood the item bank and to detect problematic items (e.g. unknown words or irrelevant answers). Patients were encouraged to be uncompromisingly honest and to highlight all difficulties, inconsistencies, or misunderstandings. Patients could suggest improvements and judge the overall item bank acceptability.
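The scoring rule described above (1 to 4 on the agreement scale, 0 for neutral answers) can be sketched in a few lines of Python. This is purely illustrative; the answer labels and the mapping function are assumptions, not the study's actual coding sheet.

```python
# Hypothetical mapping from answer labels to the study's numeric scores:
# 1 (total disagreement) to 4 (total agreement), 0 for neutral answers.
SCORES = {
    "total disagreement": 1,
    "disagreement": 2,
    "agreement": 3,
    "total agreement": 4,
    "unconcerned": 0,   # neutral
    "did not mind": 0,  # neutral
}

def score_answer(answer: str) -> int:
    """Map a raw answer label to its numeric score (0 for neutral)."""
    return SCORES[answer.strip().lower()]

print(score_answer("Total agreement"))  # 4
print(score_answer("Unconcerned"))      # 0
```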

Patient feedback was recorded in an Excel spreadsheet and was used to propose adaptations or improvements for each item in a meeting with the same expert group that constructed the item bank. In case of disagreement during these meetings, the Delphi method was later used to reach a consensus (> 80% agreement); one to four rounds were planned [ 16 ].
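The consensus rule above reduces to a simple threshold check per Delphi round. A minimal sketch, assuming each expert casts a yes/no vote on a proposal (the vote representation is an assumption, not the authors' procedure):

```python
# Consensus check for one Delphi round: a proposal is accepted when
# strictly more than 80% of experts agree; otherwise another round
# is run (the study planned one to four rounds).
def has_consensus(votes: list[bool], threshold: float = 0.80) -> bool:
    """True when the share of agreeing experts strictly exceeds the threshold."""
    return sum(votes) / len(votes) > threshold

round_1 = [True] * 8 + [False] * 2  # 8/10 = 80% agreement
round_2 = [True] * 9 + [False]      # 9/10 = 90% agreement
print(has_consensus(round_1))  # False: 80% is not strictly above 80%
print(has_consensus(round_2))  # True
```

Note the strict inequality: exactly 80% agreement triggers another round under a "> 80%" rule.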

The scores for each item were recorded manually and independently in two Excel spreadsheets by two GP trainees. The two spreadsheets were automatically compared to identify mismatches. For each mismatch, the two GP trainees entered the definitive value in a third spreadsheet, after discussion. After the last validation by all investigators, the final data file was frozen.
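The double data entry step above can be sketched as a cell-by-cell comparison of the two transcriptions. This is an illustrative sketch of the technique, not the authors' actual spreadsheet tooling; the item IDs and dictionary representation are assumptions.

```python
# Double data entry: two GP trainees independently transcribe the same
# patient's item scores; mismatching cells are flagged for discussion,
# and the agreed value goes into a third, definitive file.
def find_mismatches(entry_a: dict[str, int], entry_b: dict[str, int]) -> list[str]:
    """Return the item IDs whose two transcribed scores disagree."""
    return sorted(item for item in entry_a if entry_a[item] != entry_b[item])

entry_a = {"Q1": 4, "Q2": 3, "Q3": 0, "Q4": 2}
entry_b = {"Q1": 4, "Q2": 2, "Q3": 0, "Q4": 2}  # transcription error in Q2
print(find_mismatches(entry_a, entry_b))  # ['Q2']
```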

Items were considered to have a poor face validity if they had either a low response rate (missing answer rate > 10%) or relevant negative patient feedback. Items were considered to have psychometric dysfunction if they had either a high percentage (> 75%) of neutral/unconcerned responses or a floor/ceiling effect (> 90% of most negative or most positive answers).

Items with negative feedback, poor face validity and/or psychometric dysfunction were improved if possible or removed from the item bank.

The main results and their integration in the whole research project are presented in Fig.  1 .

Figure 1. The main results of the present paper are within the dashed rectangle. Results from previous QUALSOPRIM studies used in the present paper are above it. The next phase of the QUALSOPRIM project is below. QUALSOPRIM: QUALité des SOins PRIMaires (primary healthcare quality).

Item drafting

In total, 109 items were included in the item bank in the form of a self-report questionnaire. Each item was given 3 to 5 ordered response categories (Likert scale).

Classification in domains

The initial version of the item bank had ten domains: nine core domains and a general satisfaction domain. The nine core domains were: (i) “HCP availability”, (ii) “Care accessibility”, (iii) “MPHCC layout”, (iv) “Medical-technical care”, (v) “GP’s expertise”, (vi) “Care organization within the MPHCC”, (vii) “Patient-HCP relationship and communication”, (viii) “Patient’s role in their care”, and (ix) “Main informal caregiver’s role in the care pathway”. Each domain included 6 to 18 items. The general satisfaction domain included three items and was at the end of the item bank. It will serve as a point of comparison for later-stage psychometric assessments.

The item bank is available as supplementary material.

In total, 209 patients (60% women) were included at six MPHCC from July 26, 2019 to October 29, 2020. Their characteristics are presented in Table 1. Their age ranged from 29 to 99 years (median = 74 years) and was between 67 and 83 years in 53% of patients. Almost half of the patients had been followed by their current GP for > 10 years, either exclusively at the MPHCC or both at home and at the MPHCC. Most patients were seen every 3 months, mainly by their GP and nurses. The most frequent medical conditions were cardiovascular risk factors, cardiovascular diseases and rheumatological conditions.

The time needed to complete the item bank assessment ranged between 45 min and 1 h. Both patients and GP trainees repeatedly stressed that, because the item bank was very long, it was difficult to maintain concentration and questions were missed. Therefore, several items and answers were simplified, and terminology was changed to improve clarity. The important terms in each item were underlined. Neutral answers were added for some items to avoid missing answers when patients felt unconcerned/indifferent. The layout was improved to facilitate reading. Overall, 69% of items were retained, resulting in an adapted 75-item bank. Domain-specific results are summarized in Table 2. Thirty-four items had psychometric dysfunction and/or poor face validity and were removed from the item bank. For detailed results, see the additional table provided as supplementary material.

Overall, 75 items of the item bank had suitable validity for future development. Despite our purposive sampling technique, the study sample adequately represented the rural and semi-rural French population demographics. Indeed, most patients were women (60%) and in the 60–80 years age group (56%), in line with French national epidemiological data [ 17 ]. The patient feedback and quantitative results on validity allowed cross-validation, leading to a complete picture of the patient perception of care quality items. A satisfactory outcome from this study was that, overall, patients were generally satisfied with care at the MPHCC. This is supported by the infrequency of low-rated items, the many ceiling effects, and the absence of any floor effect. Although many items on subjects considered important by patients had to be removed because of ceiling effects, there were still numerous items for which patients' satisfaction could be improved, which is essential for the future questionnaire.

Almost all items assessing HCP availability had proper validity. This probably comes from the fact that they raised important issues for many patients in the context of growing primary care needs (e.g. choosing or changing GP). Indeed, the region has experienced a decline in HCP numbers. For instance, the number of GPs in Brittany decreased by 5.6% from 2012 to 2021 [ 18 ]. The situation was probably more acute in the rural and semi-urban areas that participated in this study. Removed items were on subsidiary matters which may depend very much on the situation. For example, most patients were not concerned about the possibility of grouping appointments with different HCPs, possibly because the chosen MPHCC were small with a limited number of HCPs. However, this item may be relevant in larger MPHCC.

In the Care accessibility domain, the four items that showed acceptable validity covered two subtopics: accessibility for people with reduced mobility and ease of reaching HCPs. Although building compliance with accessibility requirements for people with reduced mobility is mandatory in France, there were still a few cases where exemptions or delays had been granted. Ease of reaching the GP was a frequent expectation of patients with multimorbidity, reinforced by the increasing number of contact modalities available through mobile phones and internet software. However, HCPs often perceived this as overburdening and preferred to be contacted by patients outside consultations only rarely. The items on consultation costs and receiving care without upfront payment were both removed because of ceiling effects. These items were not relevant for the French healthcare system, where patients with multimorbidity receive care without upfront costs and 100% of their medical expenses are covered by the national health insurance system. However, these items can be important in other countries where patients can face high costs [ 19 ]. Moreover, as economic pressure on the French healthcare system increases, they might become relevant in France as well.

Few items on the MPHCC layout and building were considered important by patients. They were related to overall comfort, building sound insulation, things to pass the time in the waiting room and information displayed in the waiting room. Conversely, there were strong ceiling effects for several items (e.g., building temperature, ventilation), which probably reflected patients' limited interest in these factors. Indeed, although some MPHCC included in this study were not built recently and their layout was not always optimal and ergonomic, these were not priorities for patients.

Concerning the medical-technical and GP's expertise domains, although most patients felt they had little ability or legitimacy to judge these aspects, most items had proper validity. The ceiling effect on the GP's capacity to focus on the health condition that led to the consultation may be related to the fact that patient-centered care is one of the GP's main skills [ 20 , 21 ]. Most patients rated their GP's skills highly. This is consistent with data from the National French Directorate of Research, Studies, Evaluation and Statistics (DREES; Direction de la Recherche, des Études, de l'Évaluation et des Statistiques) showing that 88% of French people are satisfied with the quality of care and information provided by the GP on their health status [ 22 ].

Most items on care organization within the MPHCC had proper validity. The item on consultations with specialists had the lowest score, possibly due to the scarcity of some specialists in MPHCC in France. Recently, this issue has slightly improved because young specialists are more attracted by interprofessional cooperation in primary care centers, besides their role in hospitals. For instance, in the Finistère department, three MPHCC offer consultations with a cardiologist, neurosurgeon, orthopedic surgeon, psychiatrist, gastroenterologist, urologist, and dermatologist. This domain is thus particularly valuable for the item bank because it contained a large number of high-rated items and lacked low-rated items.

Items on the Patient-HCP relationship and communication showed adequate face validity. Only over-specific items such as medical record sharing, communications among professionals, and openness on complementary medicine were removed because of poor validity.

The validity of most items on the patient's role in their care domain is promising because patient involvement in their care (e.g. therapeutic patient education) will be developed at these MPHCC in the coming years [ 23 ]. Again, only over-specific items assessing formal interventions such as focus groups or dedicated therapeutic education sessions were removed because of poor validity. Indeed, patients were not familiar with these expert terms or had participated in therapeutic education only informally. Interestingly, although advance care directives were introduced in the French public health code in 2005, most patients (62.21%) said that they had never discussed them with their HCPs. This indicator needs to be measured to facilitate approaching this topic because HCPs must support patients in drafting their advance care directives [ 24 , 25 ].

Most items on the main informal caregiver's role were hard to evaluate due to the smaller amount of available data, (i) by design (only 25% of recruited patients had an informal caregiver), and (ii) because this domain was at the end of the questionnaire and patients might have lost interest. Thus, for conservative reasons, very few items were removed from this domain compared with the other domains. The results for the item on informal caregiver resources at the MPHCC showed that 68% of patients reported that few or no resources were available for the informal caregiver. This confirms the poor recognition of informal caregivers in France as well as in other developed countries [ 26 , 27 ].

The present study had two important limitations. First, the length of the item bank negatively affected the study's feasibility. Indeed, almost all patients lost motivation and became uninterested toward the end of the questionnaire. The adapted version with fewer items should address this major limitation. Second, as this was a pilot study, the sample size was small. Therefore, only major item psychometric dysfunctions were detected, and items with poor psychometric properties might still be present.

The next study phase will analyze the item bank psychometric properties using the item response theory [ 28 , 29 ] in a larger patient sample. This will allow further reducing the item number and assessing the domain structure and psychometric properties.

A bank of 109 items was built and then reduced to 75 items on the basis of their validity assessed in a sample of 209 patients. Retained items explored a large variety of topics related to care quality: availability, accessibility, premises' layout and building, technical care, expertise, organization, relationships with caregivers and communication, involvement and personal relationships. The analysis of item validity provided valuable insight into how patients with multimorbidity evaluated their MPHCC care. Most removed items were about topics that were either subsidiary (e.g., possibility to refuse the presence of a student during care), fully satisfactory (e.g., consultation cost), over-specific (e.g., possibility to group several appointments together) or ones which patients felt unable to answer (e.g., communication effectiveness between professionals for home care).

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

DREES: National French Directorate of Research, Studies, Evaluation and Statistics (Direction de la Recherche, des Études, de l'Évaluation et des Statistiques)

GP: General practitioner

HCP: Healthcare professional

MPHCC: Multi-professional healthcare center

QUALSOPRIM: QUALité des SOins PRIMaires (primary healthcare quality)

Le Reste JY, Nabbe P, Manceau B, Lygidakis C, Doerr C, Lingner H, et al. The European General Practice Research Network presents a comprehensive definition of Multimorbidity in Family Medicine and Long Term Care, following a systematic review of relevant Literature. J Am Med Dir Assoc. 2013;14(5):319–25.


Moffat K, Mercer SW. Challenges of managing people with multimorbidity in today’s healthcare systems. BMC Fam Pract. 2015;16(1):129.


Stumm J, Thierbach C, Peter L, Schnitzer S, Dini L, Heintze C, et al. Coordination of care for multimorbid patients from the perspective of general practitioners – a qualitative study. BMC Fam Pract. 2019;20(1):160.

Schuttner L, Parchman M. Team-based primary care for the multimorbid patient: matching complexity with complexity. Am J Med. 2019;132(4):404–6.

Skou ST, Mair FS, Fortin M, Guthrie B, Nunes BP, Miranda JJ, et al. Multimorbidity. Nat Rev Dis Primers. 2022;8(1):48.

Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, Roder-DeWan S, et al. High-quality health systems in the Sustainable Development goals era: time for a revolution. Lancet Glob Health. 2018;6(11):e1196–252.

Anhang Price R, Elliott MN, Zaslavsky AM, Hays RD, Lehrman WG, Rybowski L, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014 Oct;71(5):522–54.

Kingsley C, Patel S. Patient-reported outcome measures and patient-reported experience measures. BJA Education. 2017 Apr 1;17(4):137–44.

Vanselow NA, Donaldson MS, Yordy KD. A New Definition of Primary Care. JAMA. 1995 Jan 18;273(3):192–192.

Bull C, Byrnes J, Hettiarachchi R, Downes M. A systematic review of the validity and reliability of patient-reported experience measures. Health Serv Res. 2019;54(5):1023–35.

Anufriyeva V, Pavlova M, Stepurko T, Groot W. The validity and reliability of self-reported satisfaction with healthcare as a measure of quality: a systematic literature review. Int J Qual Health Care. 2021 Jan 1;33(1):mzaa152.

Ruseckaite R, Maharaj AD, Dean J, Krysinska K, Ackerman IN, Brennan AL, et al. Preliminary development of recommendations for the inclusion of patient-reported outcome measures in clinical quality registries. BMC Health Serv Res. 2022 Dec;22(1):276.

Derriennic J, Nabbe P, Barais M, Le Goff D, Pourtau T, Penpennic B, et al. A systematic literature review of patient self-assessment instruments concerning quality of primary care in multiprofessional clinics. Fam Pract. 2022 Sep 24;39(5):951–963.

Derriennic J, Barais M, Goff DL, Fernandez G, Borne FL, Reste JYL. Patient, carer and healthcare professional experiences of complex care quality in multidisciplinary primary healthcare centres: qualitative study with face-to-face, in-depth interviews and focus groups in five French multidisciplinary primary healthcare centres. BMJ Open. 2021 Dec 1;11(12):e050165.

Willis GB, Lessler JT. Question Appraisal System: QAS 99 [Internet]. National Cancer Institute; 1999. Available from: http://appliedresearch.cancer.gov/areas/cognitive/qas99.pdf .

Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, et al. Defining consensus: A systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014 Apr;67(4):401–9.

Coste J, Valderas JM, Carcaillon-Bentata L. The epidemiology of multimorbidity in France: Variations by gender, age and socioeconomic factors, and implications for surveillance and prevention. PLoS One. 2022;17(4):e0265842

Anguis M, Bergeat M, Pisarik J, Vergier N, Chaput H, Laffeter Q, et al. Quelle démographie récente et à venir pour les professions médicales et pharmaceutique? [Internet]. Drees; 2021 [cited 2023 Jun 22]. Available from: https://drees.solidarites-sante.gouv.fr/sites/default/files/2021-03/DD76.pdf .

Sloane PD, Eleazer GP, Phillips SL, Batchelor F. Removing the Financial Barriers to Home-Based Medical Care for Frail Older Persons. J Am Med Dir Assoc. 2022 Oct;23(10):1611–1613.

Michielsen L, Bischoff EWMA, Schermer T, Laurant M. Primary healthcare competencies needed in the management of person-centred integrated care for chronic illness and multimorbidity: Results of a scoping review. BMC Prim Care. 2023 Apr 12;24(1):98.

WONCA Europe. WONCA_European_Definitions_2_v7.pdf [Internet]. 2023 [cited 2023 Nov 2]. Available from: https://www.woncaeurope.org/file/41f61fb9-47d5-4721-884e-603f4afa6588/WONCA_European_Definitions_2_v7.pdf .

Castell L, Dennevault C. Qualité et accès aux soins : que pensent les Français de leurs médecins Direction de la recherche, des études, de l’évaluation et des statistiques [Internet]. Drees; 2017 [cited 2023 Jun 22]. Available from: https://drees.solidarites-sante.gouv.fr/publications/etudes-et-resultats/qualite-et-acces-aux-soins-que-pensent-les-francais-de-leurs .

Haute Autorité de Santé [Internet]. 2015 [cited 2023 Jun 22]. Démarche centrée sur le patient: information, conseil, éducation thérapeutique, suivi. Available from: https://www.has-sante.fr/jcms/c_2040144/fr/demarche-centree-sur-le-patient-information-conseil-education-therapeutique-suivi .

Xu C, Yan S, Chee J, Lee EPY, Lim HW, Lim SWC, et al. Increasing the completion rate of the advance directives in primary care setting – a randomized controlled trial. BMC Fam Pract. 2021 Dec;22(1):115.

Van Der Plas AGM, Pasman HRW, Kox RMK, Ponstein M, Dame B, Onwuteaka-Philipsen BD. Information meetings on end-of-life care for older people by the general practitioner to stimulate advance care planning: a pre-post evaluation study. BMC Fam Pract. 2021 Dec;22(1):109

Chan CY, De Roza JG, Ding GTY, Koh HL, Lee ES. Psychosocial factors and caregiver burden among primary family caregivers of frail older adults with multimorbidity. BMC Prim Care. 2023 Jan 30;24(1):36.

Cronin M, McLoughlin K, Foley T, McGilloway S. Supporting family carers in general practice: a scoping review of clinical guidelines and recommendations. BMC Prim Care. 2023 Nov 6;24(1):234

Masters GN. A rasch model for partial credit scoring. Psychometrika. 1982 Jun;47(2):149–174.

Sijtsma K, Molenaar I. Introduction to Nonparametric Item Response Theory [Internet]. Thousand Oaks, California; 2023. Available from: https://methods.sagepub.com/book/introduction-to-nonparametric-item-response-theory .

Download references

Acknowledgements

This paper is dedicated to the memory of Dr. Jérémy Derriennic. We thank all the GP trainees who contributed greatly to this project: Quentin Beauvillard, Vincent Abiliou, Igor-Nicolas Oilleau, Aude Konkuyt, Floriane Colin, Élise Clédic, Lucie Daniel, Léa Robin, and Marion Lastennet.

This study was funded by a grant from the French national health ministry (PREPS15-0472). The authors declare no conflicts of interest.

Author information

Authors and Affiliations

ER 7479 SPURBO, University of Western Brittany, Brest, France

Antoine Dany, Paul Aujoulat, Floriane Colin, Jean-Yves Le Reste & Delphine Le Goff

Department of Human Sciences, University of Western Brittany, Brest, France

Antoine Dany

Department of General Practice, University of Western Brittany, Brest, France

Paul Aujoulat, Floriane Colin, Jean-Yves Le Reste & Delphine Le Goff


Contributions

The authors declare that all authors meet the criteria for authorship as stated in the Uniform Requirements for Manuscripts Submitted to Biomedical Journals. Author contributions: AD, JYLR and DLG designed the study; JYLR coordinated the ethics approval process; PA, FC, JYLR and DLG participated in patient recruitment; AD analyzed the data and drafted the manuscript. All authors reviewed the manuscript and approved the decision to submit for publication.

Corresponding author

Correspondence to Antoine Dany .

Ethics declarations

Ethics approval and consent to participate

The experimental protocol was approved by the ethical research committee (Comité de Protection des Personnes SUD-EST IV, n°19.03.19.67407) and was categorized as observational (n°ID RCB 2019-A00199-48).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Dany, A., Aujoulat, P., Colin, F. et al. Assessment of multi-professional primary healthcare center quality by patients with multimorbidity. BMC Health Serv Res 24, 954 (2024). https://doi.org/10.1186/s12913-024-11315-2

Download citation

Received : 22 March 2024

Accepted : 15 July 2024

Published : 20 August 2024

DOI : https://doi.org/10.1186/s12913-024-11315-2


Keywords

  • Multimorbidity
  • Primary health care
  • Quality of health care
  • Patient satisfaction
  • Face validity

BMC Health Services Research

ISSN: 1472-6963



Developing questionnaires for educational research: AMEE Guide No. 87

Anthony R. Artino, Jr.

1 Uniformed Services University of the Health Sciences, USA

Jeffrey S. La Rochelle

Kent J. DeZee

Hunter Gehlbach

2 Harvard Graduate School of Education, USA

In this AMEE Guide, we consider the design and development of self-administered surveys, commonly called questionnaires. Questionnaires are widely employed in medical education research. Unfortunately, the processes used to develop such questionnaires vary in quality and lack consistent, rigorous standards. Consequently, the quality of the questionnaires used in medical education research is highly variable. To address this problem, this AMEE Guide presents a systematic, seven-step process for designing high-quality questionnaires, with particular emphasis on developing survey scales. These seven steps do not address all aspects of survey design, nor do they represent the only way to develop a high-quality questionnaire. Instead, these steps synthesize multiple survey design techniques and organize them into a cohesive process for questionnaire developers of all levels. Addressing each of these steps systematically will improve the probability that survey designers will accurately measure what they intend to measure.

Introduction: Questionnaires in medical education research

Surveys are used throughout medical education. Examples include the ubiquitous student evaluation of medical school courses and clerkships, as well as patient satisfaction and student self-assessment surveys. In addition, survey instruments are widely employed in medical education research. In our recent review of original research articles published in Medical Teacher in 2011 and 2012, we found that 37 articles (24%) included surveys as part of the study design. Similarly, surveys are commonly used in graduate medical education research. Across the same two-year period (2011–2012), 75% of the research articles published in the Journal of Graduate Medical Education used surveys.

Despite the widespread use of surveys in medical education, the medical education literature provides limited guidance on the best way to design a survey (Gehlbach et al. 2010 ). Consequently, many surveys fail to use rigorous methodologies or “best practices” in survey design. As a result, the reliability of the scores that emerge from surveys is often inadequate, as is the validity of the scores’ intended interpretation and use. Stated another way, when surveys are poorly designed, they may fail to capture the essence of what the survey developer is attempting to measure due to different types of measurement error. For example, poor question wording, confusing question layout and inadequate response options can all affect the reliability and validity of the data from surveys, making it extremely difficult to draw useful conclusions (Sullivan 2011 ). With these problems as a backdrop, our purpose in this AMEE Guide is to describe a systematic process for developing and collecting reliability and validity evidence for survey instruments used in medical education and medical education research. In doing so, we hope to provide medical educators with a practical guide for improving the quality of the surveys they design for evaluation and research purposes.

A systematic, seven-step process for survey scale design

The term “survey” is quite broad and could include the questions used in a phone interview, the set of items employed in a focus group and the questions on a self-administered patient survey (Dillman et al. 2009 ). Although the processes described in this AMEE Guide can be used to improve all of the above, we focus primarily on self-administered surveys, which are often referred to as questionnaires. For most questionnaires, the overarching goals are to develop a set of items that every respondent will interpret the same way, respond to accurately and be willing and motivated to answer. The seven steps depicted in Table 1 , and described below, do not address all aspects of survey design nor do they represent the only way to develop a high-quality questionnaire. Rather, these steps consolidate and organize the plethora of survey design techniques that exist in the social sciences and guide questionnaire developers through a cohesive process. Addressing each step systematically will optimize the quality of medical education questionnaires and improve the chances of collecting high-quality survey data.

A seven-step, survey scale design process for medical education researchers.

Step 1. Conduct a literature review
Purpose: To ensure that the construct definition aligns with relevant prior research and theory and to identify existing survey scales or items that might be used or adapted

Step 2. Conduct interviews and/or focus groups
Purpose: To learn how the population of interest conceptualizes and describes the construct of interest

Step 3. Synthesize the literature review and interviews/focus groups
Purpose: To ensure that the conceptualization of the construct makes theoretical sense to scholars in the field and uses language that the population of interest understands

Step 4. Develop items
Purpose: To ensure items are clear, understandable and written in accordance with current best practices in survey design

Step 5. Conduct expert validation
Purpose: To assess how clear and relevant the items are with respect to the construct of interest

Step 6. Conduct cognitive interviews
Purpose: To ensure that respondents interpret items in the manner that the survey designer intends

Step 7. Conduct pilot testing
Purpose: To check for adequate item variance, reliability and convergent/discriminant validity with respect to other measures

Adapted with permission from Lippincott Williams and Wilkins/Wolters Kluwer Health: Gehlbach et al. ( 2010 ). AM last page: Survey development guidance for medical education researchers. Acad Med 85:925.
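Step 7's pilot-testing checks (adequate item variance and reliability) are usually run in standard statistics software. As a rough illustration only, the snippet below computes Cronbach's alpha, a common internal-consistency estimate, on made-up pilot data; the data and variable names are hypothetical and not taken from the Guide.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scale responses."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 5 respondents answering a 4-item scale
# on a 5-point response scale.
pilot = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
])

print(round(cronbach_alpha(pilot), 2))  # alpha near 0.9 for these made-up data
```

At this stage, items with little variance or a scale with low alpha would typically be revised or dropped before the questionnaire is fielded.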

Questionnaires are good for gathering data about abstract ideas or concepts that are otherwise difficult to quantify, such as opinions, attitudes and beliefs. In addition, questionnaires can be useful for collecting information about behaviors that are not directly observable (e.g. studying at home), assuming respondents are willing and able to report on those behaviors. Before creating a questionnaire, however, it is imperative to first decide if a survey is the best method to address the research question or construct of interest. A construct is the model, idea or theory that the researcher is attempting to assess. In medical education, many constructs of interest are not directly observable – student satisfaction with a new curriculum, patients’ ratings of their physical discomfort, etc. Because documenting these phenomena requires measuring people’s perceptions, questionnaires are often the most pragmatic approach to assessing these constructs.

In medical education, many constructs are well suited for assessment using questionnaires. However, because psychological, non-observable constructs such as teacher motivation, physician confidence and student satisfaction do not have a commonly agreed upon metric, they are difficult to measure with a single item on a questionnaire. In other words, for some constructs such as weight or distance, most everyone agrees upon the units and the approach to measurement, and so a single measurement may be adequate. However, for non-observable, psychological constructs, a survey scale is often required for more accurate measurement. Survey scales are groups of similar items on a questionnaire designed to assess the same underlying construct (DeVellis 2003). Although scales are more difficult to develop and take longer to complete, they offer researchers many advantages. In particular, scales more completely, precisely and consistently assess the underlying construct (McIver & Carmines 1981). As an example, consider a medical education researcher interested in assessing medical student satisfaction. One approach would be to simply ask one question about satisfaction (e.g. How satisfied were you with medical school?). A better approach, however, would be to ask a series of questions designed to capture the different facets of this satisfaction construct (e.g. How satisfied were you with the teaching facilities? How effective were your instructors? How easy was the scheduling process?). Using this approach, a mean score of all the items within a particular scale can be calculated and used in the research study. Because of these advantages, scales are commonly used in many fields, including medical education, psychology and political science.
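The scoring approach just described, averaging each respondent's answers across the items of a scale, is simple to compute. A minimal sketch, using entirely hypothetical satisfaction data:

```python
import numpy as np

# Hypothetical data: 4 students answering a 3-item satisfaction scale,
# each item rated on a 5-point response scale (1 = low, 5 = high).
responses = np.array([
    [4, 5, 3],   # student 1
    [2, 2, 3],   # student 2
    [5, 4, 4],   # student 3
    [3, 3, 2],   # student 4
])

# One scale score per respondent: the mean of that respondent's item answers.
scale_scores = responses.mean(axis=1)
print(scale_scores)  # one satisfaction score per student
```

The resulting per-respondent means, rather than any single item, would then serve as the satisfaction variable in the analysis.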

Because of the benefits of assessing these types of psychological constructs through scales, the survey design process that we now turn to will focus particularly on the development of scales.

Step 1: Conduct a literature review

The first step to developing a questionnaire is to perform a literature review. There are two primary purposes for the literature review: (1) to clearly define the construct and (2) to determine if measures of the construct (or related constructs) already exist. A review of the literature helps to ensure the construct definition aligns with related theory and research in the field, while at the same time helping the researcher identify survey scales or items that could be used or adapted for the current purpose (Gehlbach et al. 2010 ).

Formulating a clear definition of the construct is an indispensable first step in any validity study (Cook & Beckman 2006 ). A good definition will clarify how the construct is positioned within the existing literature, how it relates to other constructs and how it is different from related constructs (Gehlbach & Brinkworth 2011 ). A well-formulated definition also helps to determine the level of abstraction at which to measure a given construct (the so-called “grain size”, as defined by Gehlbach & Brinkworth 2011 ). For example, to examine medical trainees’ confidence to perform essential clinical skills, one could develop scales to assess their confidence to auscultate the heart (at the small-grain end of the spectrum), to conduct a physical exam (at the medium-grain end of the spectrum) or to perform the clinical skills essential to a given medical specialty (at the large-grain end of the spectrum).

Although many medical education researchers prefer to develop their own surveys independently, it may be more efficient to adapt an existing questionnaire – particularly if the authors of the existing questionnaire have collected validity evidence in previous work – than it is to start from scratch. When this is the case, a request to the authors to adapt their questionnaire will usually suffice. It is important to note, however, that the term “previously validated survey” is a misnomer. The validity of the scores that emerge from a given questionnaire or survey scale is sensitive to the survey’s target population, the local context and the intended use of the scale scores, among other factors. Thus, survey developers collect reliability and validity evidence for their survey scales in a specified context, with a particular sample, and for a particular purpose.

As described in the Standards for Educational and Psychological Testing , validity refers to the degree to which evidence and theory support a measure’s intended use (AERA, APA, & NCME 1999 ). The process of validation is the most fundamental consideration in developing and evaluating a measurement tool, and the process involves the accumulation of evidence across time, settings and samples to build a scientifically sound validity argument. Thus, establishing validity is an ongoing process of gathering evidence (Kane 2006 ). Furthermore, it is important to acknowledge that reliability and validity are not properties of the survey instrument, per se , but of the survey’s scores and their interpretations (AERA, APA, & NCME 1999 ). For example, a survey of trainee satisfaction might be appropriate for assessing aspects of student well-being, but such a survey would be inappropriate for selecting the most knowledgeable medical students. In this example, the survey did not change, only the score interpretation changed (Cook & Beckman 2006 ).

Many good reasons exist to use, or slightly adapt, an existing questionnaire. By way of analogy, we can compare this practice to a physician who needs to decide on the best medical treatment. The vast majority of clinicians do not perform their own comparative research trials to determine the best treatments to use for their patients. Rather, they rely on the published research, as it would obviously be impractical for clinicians to perform such studies to address every disease process. Similarly, medical educators cannot develop their own questionnaires for every research question or educational intervention. Just like clinical trials, questionnaire development requires time, knowledge, skill and a fair amount of resources to accomplish correctly. Thus, an existing, well-designed questionnaire can often permit medical educators to put their limited resources elsewhere.

Continuing with the clinical research analogy, when clinicians identify a research report that is relevant to their clinical question, they must decide if it applies to their patient. Typically, this includes determining if the relationships identified in the study are causal (internal validity) and if the results apply to the clinician’s patient population (external validity). In a similar way, questionnaires identified in a literature search must be reviewed critically for validity evidence and then analyzed to determine if the questionnaire could be applied to the educator’s target audience. If survey designers find scales that closely match their construct, context and proposed use, such scales might be useable with only minor modification. In some cases, the items themselves might not be well written, but the content of the items might be helpful in writing new items (Gehlbach & Brinkworth 2011 ). Making such determinations will be easier the more the survey designer knows about the construct (through the literature review) and the best practices in item writing (as described in Step 4).

Step 2: Conduct interviews and/or focus groups

Once the literature review has shown that it is necessary to develop a new questionnaire, and helped to define the construct, the next step is to ascertain whether the conceptualization of the construct matches how prospective respondents think about it (Gehlbach & Brinkworth 2011 ). In other words, do respondents include and exclude the same features of the construct as those described in the literature? What language do respondents use when describing the construct? To answer these questions and ensure the construct is defined from multiple perspectives, researchers will usually want to collect data directly from individuals who closely resemble their population of interest.

To illustrate this step, another clinical analogy might be helpful. Many clinicians have had the experience of spending considerable time developing a medically appropriate treatment regimen but have poor patient compliance with that treatment (e.g. too expensive). The clinician and patient then must develop a new plan that is acceptable to both. Had the patient’s perspective been considered earlier, the original plan would likely have been more effective. Many clinicians have also experienced difficulty treating a patient, only to have a peer reframe the problem, which subsequently results in a better approach to treatment. A construct is no different. To this point, the researcher developing the questionnaire, like the clinician treating the patient, has given a great deal of thought to defining the construct. However, the researcher unavoidably brings his/her perspectives and biases to this definition, and the language used in the literature may be technical and difficult to understand. Thus, other perspectives are needed. Most importantly, how does the target population (the patient from the previous example) conceptualize and understand the construct? Just like the patient example, these perspectives are sometimes critical to the success of the project. For example, in reviewing the literature on student satisfaction with medical school instruction, a researcher may find no mention of the instructional practice of providing students with video or audio recordings of lectures (as these practices are fairly new). However, in talking with students, the researcher may find that today’s students are accustomed to such practices and consider them when forming their opinions about medical school instruction.

In order to accomplish Step 2 of the design process, the survey designer will need input from prospective respondents. Interviews and/or focus groups provide a sensible way to get this input. Irrespective of the approach taken, this step should be guided by two main objectives. First, researchers need to hear how participants talk about the construct in their own words, with little to no prompting from the researcher. Following the collection of unprompted information from participants, the survey designers can then ask more focused questions to evaluate if respondents agree with the way the construct has been characterized in the literature. This procedure should be repeated until saturation is reached; this occurs when the researcher is no longer hearing new information about how potential respondents conceptualize the construct (Gehlbach & Brinkworth 2011 ). The end result of these interviews and/or focus groups should be a detailed description of how potential respondents conceptualize and understand the construct. These data will then be used in Steps 3 and 4.

Step 3: Synthesize the literature review and interviews/focus groups

At this point, the definition of the construct has been shaped by the medical educator developing the questionnaire, the literature and the target audience. Step 3 seeks to reconcile these definitions. Because the construct definition directs all subsequent steps (e.g. development of items), the survey designer must take care to perform this step properly.

One suitable way to conduct Step 3 is to develop a comprehensive list of indicators for the construct by merging the results of the literature review and interviews/focus groups (Gehlbach & Brinkworth 2011 ). When these data sources produce similar lists, the process is uncomplicated. When these data are similar conceptually, but the literature and potential respondents describe the construct using different terminology, it makes sense to use the vocabulary of the potential respondents. For example, when assessing teacher confidence (sometimes referred to as teacher self-efficacy), it is probably more appropriate to ask teachers about their “confidence in trying out new teaching techniques” than to ask them about their “efficaciousness in experimenting with novel pedagogies” (Gehlbach et al. 2010 ). Finally, if an indicator is included from one source but not the other, most questionnaire designers will want to keep the item, at least initially. In later steps, designers will have opportunities to determine, through expert reviews (Step 5) and cognitive interviews (Step 6), if these items are still appropriate to the construct. Whatever the technique used to consolidate the data from Steps 1 and 2, the final definition and list of indicators should be comprehensive, reflecting both the literature and the opinions of the target audience.

It is worth noting that scholars may have good reasons to settle on a final construct definition that differs from what is found in the literature. However, when this occurs, it should be clear exactly how and why the construct definition is different. For example, is the target audiences’ perception different from previous work? Does a new educational theory apply? Whatever the reason, this justification will be needed for publication of the questionnaire. Having an explicit definition of the construct, with an explanation of how it is different from other versions of the construct, will help peers and researchers alike decide how to best use the questionnaire both in comparison with previous studies and with the development of new areas of research.

Step 4: Develop items

The goal of this step is to write survey items that adequately represent the construct of interest in a language that respondents can easily understand. One important design consideration is the number of items needed to adequately assess the construct. There is no easy answer to this question. The ideal number of items depends on several factors, including the complexity of the construct and the level at which one intends to assess it (i.e. the grain size). In general, it is good practice to develop more items than will ultimately be needed in the final scale (e.g. developing 15 potential items in the hopes of ultimately creating an eight-item scale), because some items will likely be deleted or revised later in the design process (Gehlbach & Brinkworth 2011 ). Ultimately, deciding on the number of items is a matter of professional judgment, but for most narrowly defined constructs, scales containing from 6 to 10 items will usually suffice in reliably capturing the essence of the phenomenon in question.

The next challenge is to write a set of clear, unambiguous items using the vocabulary of the target population. Although some aspects of item-writing remain an art form, an increasingly robust science and an accumulation of best practices should guide this process. For example, writing questions rather than statements, avoiding negatively worded items and biased language, matching the item stem to the response anchors and using response anchors that emphasize the construct being measured rather than employing general agreement response anchors (Artino et al. 2011 ) are all well-documented best practices. Although some medical education researchers may see these principles as “common sense”, experience tells us that these best practices are often violated.

Reviewing all the guidelines for how best to write items, construct response anchors and visually design individual survey items and entire questionnaires is beyond the scope of this AMEE Guide. As noted above, however, there are many excellent resources on the topic (e.g. DeVellis 2003; Dillman et al. 2009; Fowler 2009). To assist readers in grasping some of the more important and frequently ignored best practices, Table 2 presents several item-writing pitfalls and offers solutions.

Item-writing “best practices” based on scientific evidence from questionnaire design research.

Pitfall: Creating a double-barreled item
Survey example:
– How often do you talk to your nurses and administrative staff when you have a problem?
Why it’s a problem: Respondents have trouble answering survey items that contain more than one question (and thus could have more than one answer). In this example, the respondent may talk to his nurses often but talk to administrative staff much less frequently. If this were the case, the respondent would have a difficult time answering the question. Survey items should address one idea at a time.
Solution(s): When you have multiple questions/premises within a given item, either (1) create multiple items for each question that is important or (2) include only the more important question. Be especially wary of conjunctions in your items.
Improved example(s):
– How often do you talk to your nurses when you have a problem?
– How often do you talk to your administrative staff when you have a problem?
References: Tourangeau et al.; Dillman et al.

Pitfall: Creating a negatively worded item
Survey example(s):
– In an average week, how many times are you unable to start class on time?
– The chief resident should not be responsible for denying admission to patients.
Why it’s a problem: Negatively worded survey items are challenging for respondents to comprehend and answer accurately. Double negatives are particularly problematic and increase measurement error. If a respondent has to say “yes” in order to mean “no” (or “agree” in order to “disagree”), the item is flawed.
Solution(s): Make sure “yes” means yes and “no” means no. This generally means wording items positively.
Improved example(s):
– In an average week, how many times do you start class on time?
– Should the chief resident be responsible for admitting patients?
References: Dillman et al.

Pitfall: Using statements instead of questions
Survey example:
I am confident I can do well in this course.
• Not at all true
• A little bit true
• Somewhat true
• Mostly true
• Completely true
Why it’s a problem: A survey represents a conversation between the surveyor and the respondents. To make sense of survey items, respondents rely on “the tacit assumptions that govern the conduct of conversation in everyday life” (Schwarz 1999). Only rarely do people engage in rating statements in their everyday conversations.
Solution(s): Formulate survey items as questions. Questions are more conversational, more straightforward and easier to process mentally. People are more practiced at responding to them.
Improved example:
How confident are you that you can do well in this course?
• Not at all confident
• Slightly confident
• Moderately confident
• Quite confident
• Extremely confident
References: Krosnick 1999; Schwarz 1999; Tourangeau et al.; Dillman et al.

Pitfall: Using agreement response anchors
Survey example:
The high cost of health care is the most important issue in America today.
• Strongly disagree
• Disagree
• Neutral
• Agree
• Strongly agree
Why it’s a problem: Agreement response anchors do not emphasize the construct being measured and are prone to acquiescence (i.e. the tendency to endorse any assertion made in an item, regardless of its content). In addition, agreement response options may encourage respondents to think through their responses less thoroughly while completing the survey.
Solution(s): Use construct-specific response anchors that emphasize the construct of interest. Doing so reduces acquiescence and keeps respondents focused on the construct in question; this results in less measurement error.
Improved example:
How important is the issue of high healthcare costs in America today?
• Not at all important
• Slightly important
• Moderately important
• Quite important
• Extremely important
References: Krosnick 1999; Tourangeau et al.; Dillman et al.

Pitfall: Using too few or too many response anchors
Survey example:
How useful was your medical school training in clinical decision making?
• Not at all useful
• Somewhat useful
• Very useful
Why it’s a problem: The number of response anchors influences the reliability of a set of survey items. Using too few response anchors generally reduces reliability. There is, however, a point of diminishing returns beyond which more response anchors do not enhance reliability.
Solution(s): Use five or more response anchors to achieve stable participant responses. In most cases, using more than seven to nine anchors is unlikely to be meaningful to most respondents and will not improve reliability.
Improved example:
How useful was your medical school training in clinical decision making?
• Not at all useful
• Slightly useful
• Moderately useful
• Quite useful
• Extremely useful
Reference: Weng 2004

Adapted with permission from Lippincott Williams and Wilkins/Wolters Kluwer Health: Artino et al. 2011 . AM last page: Avoiding five common pitfalls in survey design. Acad Med 86:1327.

Another important part of the questionnaire design process is selecting the response options that will be used for each item. Closed-ended survey items can have unordered (nominal) response options with no natural order or ordered (ordinal) response options. Moreover, survey items can ask respondents to complete a ranking task (e.g. “rank the following items, where 1 = best and 6 = worst”) or a rating task that asks them to select an answer on a Likert-type response scale. Although it is outside the scope of this AMEE Guide to review all of the response options available, questionnaire designers are encouraged to tailor these options to the construct(s) they are attempting to assess (and to consult one of the many outstanding resources on the topic; e.g. Dillman et al. 2009; McCoach et al. 2013). To help readers understand some frequently ignored best practices, Table 2 and Figure 1 present several common mistakes designers commit when writing and formatting their response options. In addition, because Likert-type response scales are by far the most popular way of collecting survey responses – due, in large part, to their ease of use and adaptability for measuring many different constructs (McCoach et al. 2013) – Table 3 provides several examples of five- and seven-point response scales that can be used when developing Likert-scaled survey instruments.


Visual-design “best practices” based on scientific evidence from questionnaire design research.

Examples of various Likert-type response options.

Confidence
Five-point, unipolar response scale:
• Not at all confident
• Slightly confident
• Moderately confident
• Quite confident
• Extremely confident
Seven-point, bipolar response scale:
• Completely unconfident
• Moderately unconfident
• Slightly unconfident
• Neither confident nor unconfident (or neutral)
• Slightly confident
• Moderately confident
• Completely confident

Interest
Five-point, unipolar response scale:
• Not at all interested
• Slightly interested
• Moderately interested
• Quite interested
• Extremely interested
Seven-point, bipolar response scale:
• Very uninterested
• Moderately uninterested
• Slightly uninterested
• Neither interested nor uninterested (or neutral)
• Slightly interested
• Moderately interested
• Very interested

Effort
Five-point, unipolar response scale:
• Almost no effort
• A little bit of effort
• Some effort
• Quite a bit of effort
• A great deal of effort

Importance
Five-point, unipolar response scale:
• Not important
• Slightly important
• Moderately important
• Quite important
• Essential

Satisfaction
Five-point, unipolar response scale:
• Not at all satisfied
• Slightly satisfied
• Moderately satisfied
• Quite satisfied
• Extremely satisfied
Seven-point, bipolar response scale:
• Completely dissatisfied
• Moderately dissatisfied
• Slightly dissatisfied
• Neither satisfied nor dissatisfied (or neutral)
• Slightly satisfied
• Moderately satisfied
• Completely satisfied

Frequency
Five-point, unipolar response scale:
• Almost never
• Once in a while
• Sometimes
• Often
• Almost always

Once survey designers finish drafting their items and selecting their response anchors, there are various sources of evidence that might be used to evaluate the validity of the questionnaire and its intended use. These sources of validity have been described in the Standards for Educational and Psychological Testing as evidence based on the following: (1) content, (2) response process, (3) internal structure, (4) relationships with other variables and (5) consequences (AERA, APA & NCME 1999 ). The next three steps of the design process fit nicely into this taxonomy and are described below.

Step 5: Conduct expert validation

Once the construct has been defined and draft items have been written, an important step in the development of a new questionnaire is to begin collecting validity evidence based on the survey’s content (so-called content validity ) (AERA, APA & NCME 1999 ). This step involves collecting data from content experts to establish that individual survey items are relevant to the construct being measured and that key items or indicators have not been omitted (Polit & Beck 2004 ; Waltz et al. 2005 ). Using experts to systematically review the survey’s content can substantially improve the overall quality and representativeness of the scale items (Polit & Beck 2006 ).

Steps for establishing content validity for a new survey instrument can be found throughout the literature (e.g. McKenzie et al. 1999; Rubio et al. 2003). Below, we summarize several of the more important steps. First, before selecting a panel of experts to evaluate the content of a new questionnaire, specific criteria should be developed to determine who qualifies as an expert. These criteria are often based on experience or knowledge of the construct being measured but, practically speaking, they also depend on the willingness and availability of the individuals being asked to participate (McKenzie et al. 1999). One useful approach to finding experts is to identify authors from the reference lists of the articles reviewed during the literature search. There is no consensus in the literature regarding the number of experts that should be used for content validation; however, many of the quantitative techniques used to analyze expert input are affected by the number of experts employed. Rubio et al. (2003) recommend using 6–10 experts, while acknowledging that more experts (up to 20) may generate a clearer consensus about the construct being assessed, as well as the quality and relevance of the proposed scale items.

In general, the key domains to assess through an expert validation process are representativeness, clarity, relevance and distribution. Representativeness is defined as how completely the items (as a whole) encompass the construct, clarity is how clearly the items are worded and relevance refers to the extent each item actually relates to specific aspects of the construct. The distribution of an item is not always measured during expert validation as it refers to the more subtle aspect of how “difficult” it would be for a respondent to select a high score on a particular item. In other words, an average medical student may find it very difficult to endorse the self-confidence item, “How confident are you that you can get a 100% on your anatomy exam”, but that same student may find it easier to strongly endorse the item, “How confident are you that you can pass the anatomy exam”. In general, survey developers should attempt to have a range of items of varying difficulty (Tourangeau et al. 2000 ).

Once a panel of experts has been identified, a content validation form can be created that defines the construct and gives experts the opportunity to provide feedback on any or all of the aforementioned topics. Each survey designer’s priorities for a content validation may differ; as such, designers are encouraged to customize their content validation forms to reflect those priorities.

There are a variety of methods for analyzing the quantitative data collected on an expert validation form, but regardless of the method used, criteria for the acceptability of an item or scale should be determined in advance (Beck & Gable 2001). Common metrics used to make inclusion and exclusion decisions for individual items are the content validity ratio, the content validity index and the factorial validity index. For details on how to calculate and interpret these indices, see McKenzie et al. (1999) and Rubio et al. (2003). For a sample content validation form, see Gehlbach & Brinkworth (2011).
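As an illustration, the item-level content validity index (I-CVI) is simply the proportion of experts who rate an item as relevant. The sketch below is a minimal, hypothetical example, assuming six experts rate each draft item on a 1–4 relevance scale; the ratings and names are illustrative, not prescriptive:

```python
# Item-level content validity index (I-CVI): the proportion of experts
# who rate an item as relevant (3 or 4 on a 4-point relevance scale).
# The scale-level index (S-CVI/Ave) is the mean of the item-level indices.

def item_cvi(ratings):
    """Proportion of experts rating the item 3 or 4 (relevant)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi(all_ratings):
    """Average of the item-level CVIs across the scale."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

# Six experts rate three draft items on a 1-4 relevance scale.
ratings_by_item = [
    [4, 4, 3, 4, 3, 4],  # item 1: all experts rate it relevant
    [4, 3, 2, 4, 3, 3],  # item 2: one expert rates it not relevant
    [2, 1, 3, 2, 2, 4],  # item 3: most experts rate it not relevant
]

for i, ratings in enumerate(ratings_by_item, start=1):
    print(f"Item {i} I-CVI: {item_cvi(ratings):.2f}")
print(f"S-CVI/Ave: {scale_cvi(ratings_by_item):.2f}")
```

Whatever cut-off is chosen for retaining items, it should be set in advance, consistent with the guidance above.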

In addition to collecting quantitative data, questionnaire designers should provide their experts with an opportunity to provide free-text comments. This approach can be particularly effective for learning what indicators or aspects of the construct are not well-represented by the existing items. The data gathered from the free-text comments and subsequent qualitative analysis often reveal information not identified by the quantitative data and may lead to meaningful additions (or subtractions) to items and scales (McKenzie et al. 1999 ).

There are many ways to analyze the content validity of a new survey through expert validation. The best approach examines the domains of greatest concern to the researchers (relevance, clarity, etc.) both for each individual item and for each set of items or scale. Combining quantitative data with qualitative input from experts serves to improve the content validity of the new questionnaire or survey scale and, ultimately, the overall functioning of the survey instrument.

Step 6: Conduct cognitive interviews

After the experts have helped refine the scale items, it is important to collect evidence of response process validity to assess how prospective participants interpret your items and response anchors (AERA, APA & NCME 1999). One means of collecting such evidence is a process known as cognitive interviewing or cognitive pre-testing (Willis 2005). Just as experts are used to establish the content validity of a new survey, it is equally important to determine how potential respondents interpret the items and whether their interpretation matches what the survey designer has in mind (Willis 2005; Karabenick et al. 2007). Results from cognitive interviews can help identify mistakes respondents make in interpreting the item or response options (Napoles-Springer et al. 2006; Karabenick et al. 2007). As a qualitative technique, the analysis relies not on statistical tests of numeric data but on coding and interpretation of written notes from the interview. Thus, the sample sizes used for cognitive interviewing are normally small and may involve just 10–30 participants (Willis & Artino 2013). For small-scale medical education research projects, as few as five to six participants may suffice, as long as the survey designer is sensitive to the potential for bias in very small samples (Willis & Artino 2013).

Cognitive interviewing employs techniques from psychology and has traditionally assumed that respondents go through a series of cognitive processes when responding to a survey: comprehension of an item stem and answer choices, retrieval of appropriate information from long-term memory, judgment based on comprehension of the item and that memory and, finally, selection of a response (Tourangeau et al. 2000). Because respondents can have difficulty at any stage, a cognitive interview should be designed and scripted to address all of these potential problems. An important first step in the cognitive interview process is to create coding criteria that reflect the survey creator’s intended meaning for each item (Karabenick et al. 2007), which can then be used to help interpret the responses gathered during the cognitive interview.

The two major techniques for conducting a cognitive interview are the think-aloud technique and verbal probing. The think-aloud technique requires respondents to verbalize every thought they have while answering each item. Here, the interviewer simply supports this activity by encouraging the respondent to keep talking and recording what is said for later analysis (Willis & Artino 2013). This technique can provide valuable information, but it tends to be unnatural and difficult for most respondents, and it can produce reams of free-response data that the survey designer must then sift through.

A complementary procedure, verbal probing, is a more active form of data collection in which the interviewer administers a series of probe questions designed to elicit specific information (Willis & Artino 2013; see Table 4 for a list of commonly used verbal probes). Verbal probing is classically divided into concurrent and retrospective probing. In concurrent probing, the interviewer asks the respondent specific questions about their thought processes as the respondent answers each item. Although disruptive, concurrent probing has the advantage of allowing participants to respond to questions while their thoughts are fresh. Retrospective probing, on the other hand, occurs after the participant has completed the entire survey (or a section of it) and is generally less disruptive than concurrent probing. The downside of retrospective probing is the risk of recall bias and hindsight effects (Drennan 2003). A hybrid of the two, immediate retrospective probing, allows the interviewer to probe at natural break points in the survey rather than after each item (Watt et al. 2008). This approach has the potential to reduce recall bias and hindsight effects while limiting interviewer interruptions and decreasing the artificiality of the process. In practice, many cognitive interviews use a mixture of think-aloud and verbal probing techniques to better identify potential errors.

Examples of commonly used verbal probes.

Comprehension/interpretation: “What does the term ‘continuing medical education’ mean to you?”
Paraphrasing: “Can you restate the question in your own words?”
Confidence judgment: “How sure are you that you have participated in 3 formal educational programs?”
Recall: “How do you remember that you have participated in 3 formal educational programs?” “How did you come up with your answer?”
Specific: “Why do you say that you think it is very important that physicians participate in continuing medical education?”
General: “How did you arrive at that answer?” “Was that easy or hard to answer?” “I noticed that you hesitated. Tell me what you were thinking.” “Tell me more about that.”

Adapted with permission from the Journal of Graduate Medical Education : Willis & Artino 2013 . What do our respondents think we’re asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ 5:353–356.

Once a cognitive interview has been completed, there are several methods for analyzing the qualitative data obtained. One way to quantitatively analyze results from a cognitive interview is through coding. With this method, pre-determined codes are established for common respondent errors (e.g. respondent requests clarification), and the frequency of each type of error is tabulated for each item (Napoles-Springer et al. 2006 ). In addition, codes may be ranked according to the pre-determined severity of the error. Although the quantitative results of this analysis are often easily interpretable, this method may miss errors not readily predicted and may not fully explain why the error is occurring (Napoles-Springer et al. 2006 ). As such, a qualitative approach to the cognitive interview can also be employed through an interaction analysis. Typically, an interaction analysis attempts to describe and explain the ways in which people interpret and interact during a conversation, and this method can be applied during the administration of a cognitive interview to determine the meaning of responses (Napoles-Springer et al. 2006 ). Studies have demonstrated that the combination of coding and interaction analysis can be quite effective, providing more information about the “cognitive validity” of a new questionnaire (Napoles-Springer et al. 2006 ).
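As a minimal sketch of the coding approach described above, pre-determined error codes can be tallied per item to reveal which items cause respondents the most trouble. The codebook entries and observations below are hypothetical:

```python
from collections import Counter

# Hypothetical coded observations from cognitive interviews: each tuple is
# (item_id, error_code) recorded against a pre-determined codebook
# (e.g. "clarification" = respondent requested clarification).
observations = [
    (1, "clarification"), (1, "clarification"), (1, "misread_anchor"),
    (2, "clarification"),
    (3, "misinterpreted_stem"), (3, "misinterpreted_stem"), (3, "clarification"),
]

# Tabulate error frequencies per item and per error type; items with many
# coded errors are candidates for revision before pilot testing.
errors_per_item = Counter(item for item, _ in observations)
errors_per_type = Counter(code for _, code in observations)

print(errors_per_item.most_common())  # which items caused trouble
print(errors_per_type.most_common())  # which kinds of trouble
```

A tabulation like this supports the quantitative side of the analysis; as noted above, it should be paired with qualitative interpretation to explain why the errors occur.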

How similarly respondents understand each item is inherently related to the overall reliability of the scores from any new questionnaire. Likewise, whether respondents understand each item as the survey creator intended is integrally related to the validity of the survey and the inferences that can be made from the resulting data. Taken together, these two factors are critically important to creating a high-quality questionnaire, and both can be addressed through a well-designed cognitive interview. Ultimately, regardless of the methods used to conduct the cognitive interviews and analyze the data, the information gathered should be used to modify and improve the overall questionnaire and individual survey items.

Step 7: Conduct pilot testing

Despite the best efforts of medical education researchers during the aforementioned survey design process, some survey items may still be problematic (Gehlbach & Brinkworth 2011 ). Thus, the next step is to pilot test the questionnaire and continue collecting validity evidence. Two of the most common approaches are based on internal structure and relationships with other variables (AERA, APA & NCME 1999 ). During pilot testing, members of the target population complete the survey in the planned delivery mode (e.g. web-based or paper-based format). The data obtained from the pilot test is then reviewed to evaluate item range and variance, assess score reliability of the whole scale and review item and composite score correlations. During this step, survey designers should also review descriptive statistics (e.g. means and standard deviations) and histograms, which demonstrate the distribution of responses by item. This analysis can aid in identifying items that may not be functioning in the way the designer intended.
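As a concrete sketch of this review step, the snippet below computes per-item means and standard deviations from a small, hypothetical pilot dataset and flags items whose responses barely vary. The data, the 1–5 scale and the variance cut-off are all illustrative:

```python
import statistics

# Hypothetical pilot responses on a 1-5 Likert-type scale:
# one row per respondent, one column per item.
responses = [
    [4, 5, 2, 5],
    [3, 5, 1, 4],
    [4, 4, 2, 5],
    [5, 5, 1, 4],
    [4, 5, 2, 5],
]

n_items = len(responses[0])
for j in range(n_items):
    item_scores = [row[j] for row in responses]
    mean = statistics.mean(item_scores)
    sd = statistics.stdev(item_scores)
    # Flag items with very little response variance for closer review.
    flag = "  <- little variance?" if sd < 0.5 else ""
    print(f"Item {j + 1}: mean={mean:.2f}, sd={sd:.2f}{flag}")
```

An item everyone answers the same way (a ceiling or floor effect) contributes little information and may need rewording, exactly the kind of problem this descriptive review is meant to surface.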

To ascertain the internal structure of the questionnaire and to evaluate the extent to which items within a particular scale measure a single underlying construct (i.e. the scale’s uni-dimensionality), survey designers should consider using advanced statistical techniques such as factor analysis. Factor analysis is a statistical procedure designed to evaluate “the number of distinct constructs needed to account for the pattern of correlations among a set of measures” (Fabrigar & Wegener 2012, p. 3). To assess the dimensionality of a survey scale that has been deliberately constructed to assess a single construct (e.g. using the processes described in this Guide), we recommend using confirmatory factor analysis techniques; that said, other scholars have argued that exploratory factor analysis is more appropriate when analyzing new scales (McCoach et al. 2013). Regardless of the specific analysis employed, researchers should know that factor analysis techniques are often poorly understood and poorly implemented; fortunately, the literature is replete with many helpful guides (see, for example, Pett et al. 2003; McCoach et al. 2013).
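A full confirmatory factor analysis requires dedicated software, but a quick informal check on uni-dimensionality is to inspect the eigenvalues of the inter-item correlation matrix: when the items reflect a single construct, the first eigenvalue dwarfs the rest. The sketch below simulates such data; the dataset, item count and cut-offs are illustrative only and no substitute for a proper factor analysis:

```python
import numpy as np

# Simulate pilot responses (rows = respondents, cols = items) for a
# six-item scale driven by a single underlying trait plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)  # one underlying trait per respondent
items = np.column_stack(
    [latent + rng.normal(scale=0.5, size=200) for _ in range(6)]
)

# Eigenvalues of the inter-item correlation matrix: a first eigenvalue
# that dominates the rest is consistent with a single factor.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(np.round(eigenvalues, 2))
```

Real survey data are noisier than this simulation, which is why the Guide recommends consulting the factor analysis literature before drawing conclusions about a scale's structure.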

Conducting a reliability analysis is another critical step in the pilot testing phase. The most common means of assessing scale reliability is by calculating a Cronbach’s alpha coefficient. Cronbach’s alpha is a measure of the internal consistency of the item scores (i.e. the extent to which the scores for the items on a scale correlate with one another). It is a function of the inter-item correlations and the total number of items on a particular scale. It is important to note that Cronbach’s alpha is not a good measure of a scale’s uni-dimensionality (measuring a single concept) as is often assumed (Schmitt 1996 ). Thus, in most cases, survey designers should first run a factor analysis, to assess the scale’s uni-dimensionality and then proceed with a reliability analysis, to assess the internal consistency of the item scores on the scale (Schmitt 1996 ). Because Cronbach’s alpha is sensitive to scale length, all other things being equal, a longer scale will generally have a higher Cronbach’s alpha. Of course, scale length and the associated increase in internal consistency reliability must be balanced with over-burdening respondents and the concomitant response errors that can occur when questionnaires become too long and respondents become fatigued. Finally, it is critical to recognize that reliability is a necessary but insufficient condition for validity (AERA, APA & NCME 1999 ). That is, to be considered valid, survey scores must first be reliable. However, scores that are reliable are not necessarily valid for a given purpose.
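Cronbach's alpha can be computed directly from its definition as a function of the item variances and the variance of the total score. The sketch below uses hypothetical pilot data; the scores are invented for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 1-5 Likert responses from eight pilot participants
# on a four-item scale.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Note how the formula depends on k, the number of items: adding items tends to raise alpha, which is exactly the scale-length sensitivity cautioned about above.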

Once a scale’s uni-dimensionality and internal consistency have been assessed, survey designers often create composite scores for each scale. Depending on the research question being addressed, these composite scores can then be used as independent or dependent variables. When attempting to assess hard-to-measure educational constructs such as motivation, confidence and satisfaction, it usually makes more sense to create a composite score for each survey scale than to use individual survey items as variables (Sullivan & Artino 2013). A composite score is simply a mean score (either weighted or unweighted) of all the items within a particular scale. Using mean scores has several distinct advantages over summing the items within a particular scale or subscale. First, mean scores are usually reported using the same response scale as the individual items; this approach facilitates more direct interpretation of the mean scores in terms of the response anchors. Second, the use of mean scores makes it clear how big (or small) measured differences really are when comparing individuals or groups. As Colliver et al. (2010) warned, “the sums of ratings reflect both the ratings and the number of items, which magnifies differences between scores and makes differences appear more important than they are” (p. 591).
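The point about sums magnifying differences can be seen numerically. In the hypothetical sketch below, the same group difference looks five times larger when items are summed rather than averaged, even though the underlying responses are identical:

```python
import numpy as np

# Hypothetical responses on a five-item, 1-5 confidence scale
# for two small groups (rows = respondents, cols = items).
group_a = np.array([[4, 4, 5, 4, 4], [3, 4, 4, 3, 4]])
group_b = np.array([[3, 3, 4, 3, 3], [3, 3, 3, 2, 3]])

# Composite as a MEAN keeps scores on the original 1-5 response metric...
mean_a, mean_b = group_a.mean(axis=1).mean(), group_b.mean(axis=1).mean()

# ...whereas a SUM ranges from 5 to 25 and magnifies apparent differences.
sum_a, sum_b = group_a.sum(axis=1).mean(), group_b.sum(axis=1).mean()

print(f"mean composites: {mean_a:.1f} vs {mean_b:.1f} "
      f"(difference {mean_a - mean_b:.1f})")
print(f"sum composites:  {sum_a:.1f} vs {sum_b:.1f} "
      f"(difference {sum_a - sum_b:.1f})")
```

The mean-based difference can be read directly against the response anchors (e.g. "about one anchor apart"), which is the interpretability advantage described above.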

After composite scores have been created for each survey scale, the resulting variables can be examined to determine their relations to other variables that have been collected. The goal in this step is to determine if these associations are consistent with theory and previous research. So, for example, one might expect the composite scores from a scale designed to assess trainee confidence for suturing to be positively correlated with the number of successful suture procedures performed (since practice builds confidence) and negatively correlated with procedure-related anxiety (as more confident trainees also tend to be less anxious). In this way, survey designers are assessing the validity of the scales they have created in terms of their relationships to other variables (AERA, APA & NCME 1999 ). It is worth noting that in the aforementioned example, the survey designer is evaluating the correlations between the newly developed scale scores and both an objective measure (number of procedures) and a subjective measure (scores on an anxiety scale). Both of these are reasonable approaches to assessing a new scale’s relationships with other variables.
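Continuing the suturing example, the sketch below checks whether a composite confidence score correlates positively with procedures performed and negatively with anxiety, as theory predicts. All data are invented for illustration:

```python
import numpy as np

# Hypothetical data for ten trainees: composite confidence-for-suturing
# scores (1-5 scale), number of successful suture procedures performed,
# and composite procedure-anxiety scores (1-5 scale).
confidence = np.array([2.2, 2.8, 3.0, 3.4, 3.6, 3.8, 4.0, 4.2, 4.6, 4.8])
procedures = np.array([1, 3, 2, 5, 6, 8, 7, 10, 12, 15])
anxiety = np.array([4.6, 4.0, 4.2, 3.6, 3.2, 3.0, 2.8, 2.2, 2.0, 1.6])

# Theory predicts confidence correlates positively with practice and
# negatively with anxiety; large departures would call the new scale
# into question.
r_practice = np.corrcoef(confidence, procedures)[0, 1]
r_anxiety = np.corrcoef(confidence, anxiety)[0, 1]
print(f"confidence vs procedures: r = {r_practice:.2f}")
print(f"confidence vs anxiety:    r = {r_anxiety:.2f}")
```

Correlations in the predicted directions are one strand of validity evidence based on relationships with other variables; they do not by themselves establish that the scale is valid for a given use.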

Concluding thoughts

In this AMEE Guide, we described a systematic, seven-step design process for developing survey scales. It should be noted that many important topics related to survey implementation and administration fall outside our focus on scale design and thus were not discussed in this guide. These topics include, but are not limited to, ethical approval for research questionnaires, administration format (paper vs. electronic), sampling techniques, obtaining high response rates, providing incentives and data management. These topics, and many more, are reviewed in detail elsewhere (e.g. Dillman et al. 2009 ). We also acknowledge that the survey design methodology presented here is not the only way to design and develop a high-quality questionnaire. In reading this Guide, however, we hope medical education researchers will come to appreciate the importance of following a systematic, evidence-based approach to questionnaire design. Doing so not only improves the questionnaires used in medical education but it also has the potential to positively impact the overall quality of medical education research, a large proportion of which employs questionnaires.

Closed-ended question – A survey question with a finite number of response categories from which the respondent can choose.

Cognitive interviewing (or cognitive pre-testing) – An evidence-based qualitative method specifically designed to investigate whether a survey question satisfies its intended purpose.

Concurrent probing – A verbal probing technique wherein the interviewer administers the probe question immediately after the respondent has read aloud and answered each survey item.

Construct – A hypothesized concept or characteristic (something “constructed”) that a survey or test is designed to measure. Historically, the term “construct” has been reserved for characteristics that are not directly observable. Recently, however, the term has been more broadly defined.

Content validity – Evidence obtained from an analysis of the relationship between a survey instrument’s content and the construct it is intended to measure.

Factor analysis – A set of statistical procedures designed to evaluate the number of distinct constructs needed to account for the pattern of correlations among a set of measures.

Open-ended question – A survey question that asks respondents to provide an answer in an open space (e.g. a number, a list or a longer, in-depth answer).

Reliability – The extent to which the scores produced by a particular measurement procedure or instrument (e.g. a survey) are consistent and reproducible. Reliability is a necessary but insufficient condition for validity.

Response anchors – The named points along a set of answer options (e.g. not at all important, slightly important, moderately important, quite important and extremely important ).

Response process validity – Evidence of validity obtained from an analysis of how respondents interpret the meaning of a survey scale’s specific survey items.

Retrospective probing – A verbal probing technique wherein the interviewer administers the probe questions after the respondent has completed the entire survey (or a portion of the survey).

Scale – Two or more items intended to measure a construct.

Think-aloud interviewing – A cognitive interviewing technique wherein survey respondents are asked to actively verbalize their thoughts as they attempt to answer the evaluated survey items.

Validity – The degree to which evidence and theory support the proposed interpretations of an instrument’s scores.

Validity argument – The process of accumulating evidence to provide a sound scientific basis for the proposed uses of an instrument’s scores.

Verbal probing – A cognitive interviewing technique wherein the interviewer administers a series of probe questions specifically designed to elicit detailed information beyond that normally provided by respondents.

Notes on contributors

ANTHONY R. ARTINO, Jr., PhD, is an Associate Professor of Preventive Medicine and Biometrics. He is the Principal Investigator on several funded research projects and co-directs the Long-Term Career Outcome Study (LTCOS) of Uniformed Services University (USU) trainees. His research focuses on understanding the role of academic motivation, emotion and self-regulation in a variety of settings. He earned his PhD in educational psychology from the University of Connecticut.

JEFFREY S. LA ROCHELLE, MD, MPH, is an Associate Program Director for the Internal Medicine residency at Walter Reed National Military Medical Center and is the Director of Integrated Clinical Skills at USU where he is an Associate Professor of Medicine. His research focuses on the application of theory-based educational methods and assessments and the development of observed structured clinical examinations (OSCE). He earned his MD and MPH from USU.

KENT J. DEZEE, MD, MPH, is the General Medicine Fellowship Director and an Associate Professor of Medicine at USU. His research focuses on understanding the predictors of medical student success in medical school, residency training and beyond. He earned his MD from The Ohio State University and his MPH from USU.

HUNTER GEHLBACH, PhD, is an Associate Professor at Harvard’s Graduate School of Education. He teaches a course on the construction of survey scales, and his research includes experimental work on how to design better scales as well as scale development projects to develop better measures of parents’ and students’ perceptions of schools. In addition, he has a substantive interest in bringing social psychological principles to bear on educational problems. He earned his PhD from Stanford’s Psychological Studies in Education program.

Declaration of interest: Several of the authors are military service members. Title 17 U.S.C. 105 provides that “Copyright protection under this title is not available for any work of the United States Government”. Title 17 U.S.C. 101 defines a United States Government work as a work prepared by a military service member or employee of the United States Government as part of that person’s official duties.

The views expressed in this article are those of the authors and do not necessarily reflect the official views of the Uniformed Services University of the Health Sciences, the U.S. Navy, the U.S. Army, the U.S. Air Force, or the Department of Defense.

Portions of this AMEE Guide were previously published in the Journal of Graduate Medical Education and Academic Medicine and are used with the express permission of the publishers (Gehlbach et al. 2010; Artino et al. 2011; Artino & Gehlbach 2012; Rickards et al. 2012; Magee et al. 2013; Willis & Artino 2013).

  • American Educational Research Association (AERA), American Psychological Association (APA), National Council on Measurement in Education (NCME). Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 1999.
  • Artino AR, Gehlbach H, Durning SJ. AM last page: Avoiding five common pitfalls of survey design. Acad Med. 2011;86:1327.
  • Artino AR, Gehlbach H. AM last page: Avoiding four visual-design pitfalls in survey development. Acad Med. 2012;87:1452.
  • Beck CT, Gable RK. Ensuring content validity: An illustration of the process. J Nurs Meas. 2001;9:201–215.
  • Christian LM, Parsons NL, Dillman DA. Designing scalar questions for web surveys. Sociol Methods Res. 2009;37:393–425.
  • Colliver JA, Conlee MJ, Verhulst SJ, Dorsey JK. Reports of the decline of empathy during medical education are greatly exaggerated: A reexamination of the research. Acad Med. 2010;85:588–593.
  • Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: Theory and application. Am J Med. 2006;119:166.e7–166.e16.
  • DeVellis RF. Scale development: Theory and applications. 2nd ed. Newbury Park, CA: Sage; 2003.
  • Dillman D, Smyth J, Christian L. Internet, mail, and mixed-mode surveys: The tailored design method. 3rd ed. Hoboken, NJ: Wiley; 2009.
  • Drennan J. Cognitive interviewing: Verbal data in the design and pretesting of questionnaires. J Adv Nurs. 2003;42(1):57–63.
  • Fabrigar LR, Wegener DT. Exploratory factor analysis. New York: Oxford University Press; 2012.
  • Fowler FJ. Survey research methods. 4th ed. Thousand Oaks, CA: Sage; 2009.
  • Gehlbach H, Artino AR, Durning S. AM last page: Survey development guidance for medical education researchers. Acad Med. 2010;85:925.
  • Gehlbach H, Brinkworth ME. Measure twice, cut down error: A process for enhancing the validity of survey scales. Rev Gen Psychol. 2011;15:380–387.
  • Kane MT. Validation. In: Brennan RL, editor. Educational measurement. 4th ed. Westport, CT: American Council on Education/Praeger; 2006.
  • Karabenick SA, Woolley ME, Friedel JM, Ammon BV, Blazevski J, Bonney CR, De Groot E, Gilbert MC, Musu L, Kempler TM, Kelly KL. Cognitive processing of self-report items in educational research: Do they think what we mean? Educ Psychol. 2007;42(3):139–151.
  • Krosnick JA. Survey research. Annu Rev Psychol. 1999;50:537–567.
  • Magee C, Byars L, Rickards G, Artino AR. Tracing the steps of survey design: A graduate medical education research example. J Grad Med Educ. 2013;5(1):1–5.
  • McCoach DB, Gable RK, Madura JP. Instrument development in the affective domain: School and corporate applications. 3rd ed. New York: Springer; 2013.
  • McIver JP, Carmines EG. Unidimensional scaling. Beverly Hills, CA: Sage; 1981.
  • McKenzie JF, Wood ML, Kotecki JE, Clark JK, Brey RA. Establishing content validity: Using qualitative and quantitative steps. Am J Health Behav. 1999;23(4):311–318.
  • Napoles-Springer AM, Santoyo-Olsson J, O’Brien H, Stewart AL. Using cognitive interviews to develop surveys in diverse populations. Med Care. 2006;44(11):S21–S30.
  • Pett MA, Lackey NR, Sullivan JJ. Making sense of factor analysis: The use of factor analysis for instrument development in health care research. Thousand Oaks, CA: Sage; 2003.
  • Polit DF, Beck CT. Nursing research: Principles and methods. 7th ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2004.
  • Polit DF, Beck CT. The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health. 2006;29:489–497.
  • Rickards G, Magee C, Artino AR. You can’t fix by analysis what you’ve spoiled by design: Developing survey instruments and collecting validity evidence. J Grad Med Educ. 2012;4(4):407–410.
  • Rubio DM, Berg-Weger M, Tebb SS, Lee ES, Rauch S. Objectifying content validity: Conducting a content validity study in social work research. Soc Work Res. 2003;27(2):94–104.
  • Schmitt N. Uses and abuses of coefficient alpha. Psychol Assess. 1996;8:350–353.
  • Schwarz N. Self-reports: How the questions shape the answers. Am Psychol. 1999;54:93–105.
  • Sullivan G. A primer on the validity of assessment instruments. J Grad Med Educ. 2011;3(2):119–120.
  • Sullivan GM, Artino AR. Analyzing and interpreting data from Likert-type scales. J Grad Med Educ. 2013;5(4):541–542.
  • Tourangeau R, Rips LJ, Rasinski KA. The psychology of survey response. New York: Cambridge University Press; 2000.
  • Waltz CF, Strickland OL, Lenz ER. Measurement in nursing and health research. 3rd ed. New York: Springer Publishing Co; 2005.
  • Watt T, Rasmussen AK, Groenvold M, Bjorner JB, Watt SH, Bonnema SJ, Hegedus L, Feldt-Rasmussen U. Improving a newly developed patient-reported outcome for thyroid patients, using cognitive interviewing. Qual Life Res. 2008;17:1009–1017.
  • Weng LJ. Impact of the number of response categories and anchor labels on coefficient alpha and test-retest reliability. Educ Psychol Meas. 2004;64:956–972.
  • Willis GB, Artino AR. What do our respondents think we’re asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ. 2013;5(3):353–356.
  • Willis GB. Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage; 2005.
