
Improving Care by Using Patient Feedback

Health and Social Care Services Research

doi: 10.3310/themedreview-04237

Both staff and patients want feedback from patients about their care to be heard and acted upon, and the NHS has clear policies to encourage this. Doing this in practice is, however, complex and challenging. This report features nine new research studies about using patient experience data in the NHS. These show what organisations are doing now and what could be done better. Evidence ranges from hospital wards to general practice to mental health settings. There are also insights into new ways of mining and analysing big data, using online feedback, and approaches to involving patients in making sense of feedback and driving improvements.

Large amounts of patient feedback are currently collected in the NHS, particularly data from surveys and the NHS Friends and Family Test. Less attention is paid to other sources of patient feedback. A lot of resource and energy goes into collecting feedback data but less into analysing it in ways that can lead to change, or into sharing the feedback with staff who see patients on a day-to-day basis. Patients' intentions in giving feedback are sometimes misunderstood. Many want to give praise, support staff and have two-way conversations about care, but the focus of healthcare providers can be on complaints and concerns, meaning they unwittingly disregard useful feedback.

There are many different reasons for looking at patient experience feedback data. Data is most often used for performance assessment and benchmarking in line with regulatory body requirements, making comparisons with other healthcare providers or assessing progress over time. Staff are sometimes unaware of the feedback or, when they are aware, they struggle to make sense of it in a way that can lead to improvements. They are not always aware of unsolicited feedback, such as that received online, and when they are, they are often uncertain how to respond.

Staff need the time, skills and resources to make changes in practice. In many organisations, feedback about patient experience is managed in different departments from those that lead quality improvement. Whilst most organisations have a standardised method for quality improvement, there is less clarity and consistency in relation to using patient experience data.

Staff act on informal feedback in ways that are not always recognised as improvement. Where change does happen, it tends to be on transactional tasks rather than relationships and the way patients feel.

The research featured in this review shows that these challenges can be overcome and provides recommendations and links to practical resources for services and staff.

What can healthcare providers change as a result of these findings?

Organisations should embrace all forms of feedback (including complaints and unsolicited feedback) as an opportunity to review and improve care. While the monitoring of performance and compliance needs to conform to measures of reliability and validity, not all patient experience data needs to be numerical and representative: there can still be value in qualitative and unrepresentative intelligence to provide rich insights on specific issues and to highlight opportunities for improvement. Organisations sometimes try to triangulate different types of information into a single view, but it is important to respect different sources; sometimes the outliers are more useful data than the average. Organisations should also learn from positive as well as negative feedback.

Organisations should collect, collate and analyse feedback in ways that remain recognisable to the people who provide it whilst offering staff actionable findings. In many areas, including general practice, one of the major blocks to getting to grips with feedback is not having dedicated staff time allotted to it. Where there are dedicated staff leading patient experience, specialist training is likely to be needed, particularly in relation to improvement science. They also need to understand the strengths and weaknesses of different sources of feedback and be given the resources to use a broad range of collection systems.

The UK has led the world in the use of patient surveys, but staff are not always aware of them. Other forms of feedback, including both structured and unstructured online feedback, are emerging faster than the NHS's ability to respond. Staff want to engage but need more understanding of, and confidence in, the use of different methods. As well as improving the transactional aspects of care (things like appointments and waiting times), organisations need to consider how data on relational experience (how staff made patients feel) is presented to staff. Summaries and infographics, together with patient stories, can motivate patient-facing staff to explore the relational aspects of the feedback. Patient experience data should be presented alongside safety and clinical effectiveness data and the associations between them made explicit.

Leaders need to ensure that staff have the authority and resources, together with the confidence and skill, to act on both formal and informal feedback. They may also need expert facilitation to help them decide what action to take in response to feedback and to integrate this with patient safety and effectiveness programmes. Engaging staff and patients in co-design to analyse feedback is likely to result in sustainable improvements at a local level. A number of the featured studies have produced and tested toolkits that can assist with this. These can be found in the body of the review and in the study summaries.

Online feedback is a growing field, but staff are often uneasy with it and many organisations do not have systems and processes for responding to it. Organisations need to think about how they respond to unsolicited feedback, including complaints.

Looking to the future

Our review of the evidence on the use and usefulness of patient experience feedback shows that whilst there is a growing interest in using feedback for both accountability and service improvement, there are gaps in healthcare providers' capacity to analyse and use it. These studies have added to our understanding of what helps and hinders staff and services to use patient experience feedback to drive improvement. But there are still areas where we do not know enough.

Further research is needed to determine methods of easily capturing patient experience that can meet multiple purposes, including performance monitoring and system redesign, and how to present this to staff in an easy-to-use way.

Understanding the patient experience is part of the wider evaluation of healthcare services and research is needed to consider how to integrate it with clinical outcomes and safety evaluations and with the ‘soft intelligence’ that staff and patients have about delivering and receiving care.

Research examining the ways in which patient safety, patient experience and clinical outcomes data overlap and interact in the everyday practices of hospital work (e.g. care on the wards, meetings, reports) would provide useful insights to inform improvement. More research is needed on how organisations and teams use different types of patient experience data for improving services and what support they need to do this well, including how best to present the data to staff.

Observational studies are needed that take a longitudinal perspective to understand how staff and organisations deal with patient feedback over time. These should consider comments on acute care as well as feedback from people with chronic conditions. As we move into an era where services become more integrated, it will become essential to view feedback across organisational boundaries.

Why we wrote this review

Research into patient experience feedback is relatively recent, gathering pace in the 1990s. Research into how healthcare providers then use this data to improve services is at an early stage. We wrote this review to bring together emergent themes and to provide practitioners, policy makers and the public with an overview of the findings of current NIHR funded research and to influence debate, policy and practice on the use of patient experience data.

Introduction to patient experience feedback

Healthcare is increasingly understood as an experience as well as an outcome. In addition, in a publicly funded service, patient experience feedback is a form of holding those services to account. The NHS Constitution for England enshrines the focus on patient experience in principle 4, which states that:

The patient will be at the heart of everything the NHS does. It should support individuals to promote and manage their own health. NHS services must reflect, and should be coordinated around and tailored to, the needs and preferences of patients, their families and their carers.

Evidence shows that patient experience feedback can shape services to better meet patient needs. We also know that better patient experience is associated with the efficient use of services. It results in the patient being better able to use the clinical advice given, and to use primary care more effectively. It has also been shown to affect hospital length of stay (Doyle et al 2013).

Good patient experience is therefore seen as a central outcome for the NHS, alongside clinical effectiveness and safety; however, people have different ideas about what constitutes patient experience and feedback is collected in different ways. Patients and healthcare providers do not come to this on an equal footing and the power to determine the question, the nature of the feedback, the interpretation and action remains largely with the professionals. The public inquiry into Mid Staffordshire NHS Foundation Trust showed that healthcare providers have some way to go in ensuring that patient experience becomes central to care management.

What feedback does the NHS collect?

Learning from what patients think about the care they have received is widely accepted as a key to improving healthcare services. However, collecting actionable data across the breadth of patient experience and making change based on it remains a challenge.

The NHS excels in collecting certain kinds of patient experience data, with the national inpatient survey in England being one of the first of its kind in the world when introduced in 2001. The Friends and Family Test (FFT) is a further mandatory method of data collection used in the NHS in England (see box X). Scotland introduced a national survey in 2010, Wales in 2013 and Northern Ireland in 2014. Robert and Cornwell (2013) reported that the first similar international public reporting was in 2008 in the USA. Australia, Canada, New Zealand and most European countries have not developed measures of patients' experience at national level. It is less clear whether the national survey has triggered improvement.

Feedback is also collected through complaint systems but Liu et al (2019) found that an emphasis on ‘putting out fires’ may detract from using the feedback within them to improve care for future patients.

Other types of feedback are collected in the NHS in more ad hoc ways, including online feedback such as Care Opinion and the former website, NHS Choices. It is also collected through local unit level surveys, patient forums, informal feedback to Patient Advice and Liaison Services, and quality improvement projects. Data is collected in a variety of ways with local organisations using different methods, from feedback kiosks to narratives and patient stories.

There has been widespread acceptance that good patient experience is an important outcome of care in its own right… patient experience is a domain of quality that is distinct from, but complementary to, the quality of clinical care. Although an increasing number of surveys have been developed to measure patient experience, there has been equally widespread acceptance that these measures have not been very effective at actually improving care. Study H 

Research featured in this review

What do we mean by patient experience data?

We define patient experience data in this review as what individuals say about the care they have received. This is different from patients evaluating their care and treatment, such as through patient-reported outcome data. Patient experience is wider than the data collected about it, and this review focuses on documented forms of experience feedback, including unsolicited feedback. We acknowledge that this does not fully represent patient experience, nor all the ways it is provided or the ways in which it influences change.

Studies we have included

This review focuses on research funded by the National Institute for Health Research (NIHR), which has made a substantial investment in new research in this area. In 2014 the NIHR called for research on the use and usefulness of patient experience data. Eight large, multi-method bids were successful, seven of which have been published in 2019 or will be published early in 2020. Prior to this, two other large studies looking at patient experience data were funded. Together these studies cover a range of care settings including general practice and mental health, although most are set in acute hospitals. These nine studies form the core of this themed review. We do not consider them individually in the main text but use them to illustrate our main themes, and we refer to them as studies A-I (listed below). For more details about how they were undertaken and some of their key findings, please see the study summaries and references to full text reports.

The featured studies found some remarkably consistent themes. While we have used particular studies to illustrate each theme, this does not mean other studies didn’t find similar things.

In addition to the main nine studies, we also mention some other important evidence funded by NIHR, and some research funded by other bodies, to add context and supporting information; these are referenced in the text.

Featured studies in this review

A - Sheard, L. Using patient experience data to develop a patient experience toolkit to improve hospital care: a mixed-methods study (published October 2019)
B - Weich, S. Evaluating the Use of Patient Experience Data to Improve the Quality of Inpatient Mental Health Care (estimated publication March 2020)
C - Donetto, S. Organisational strategies and practices to improve care using patient experience data in acute NHS hospital trusts: an ethnographic study (published October 2019)
D - Locock, L. Understanding how frontline staff use patient experience data for service improvement – an exploratory case study evaluation (published June 2019)
E - Powell, J. Using online patient feedback to improve NHS services: the INQUIRE multimethod study (published October 2019)
F - Sanders, M. Enhancing the credibility, usefulness and relevance of patient experience data in services for people with long-term physical and mental health conditions using digital data capture and improved analysis of narrative data (estimated publication January 2020)
G - Rivas, C. PRESENT: Patient Reported Experience Survey Engineering of Natural Text: developing practical automated analysis and dashboard representations of cancer survey free text answers (published July 2019)
H - Burt, J. Improving patient experience in primary care: a multimethod programme of research on the measurement and improvement of patient experience (published May 2017)
I - Graham, C. An evaluation of a near real-time survey for improving patients' experiences of the relational aspects of care: a mixed-methods evaluation (published March 2018)

A summary of these studies is included in the Study Summaries section at the end of this review.

This review seeks to explore the complexity and ambiguities around understanding and learning from patient experience. We also offer some solutions and recommendations. We hope that by shining a light on the tensions and assumptions uncovered in the featured studies, the review will help to start conversations between different parts of the system and with users of healthcare services. We anticipate that the audience will be broad, including policy makers, healthcare provider boards (including non-executive directors and governors) and clinical staff, as well as service users and the public at large.

A few words on research methods

Patient experience is highly personal and not all aspects of experience lend themselves to quantitative measurement. This makes research into patient experience complex.

The most common methods in this review were surveys, case studies, action research and improvement science approaches. Our featured studies contain a mixture of these methods, and each method has to be judged separately on its own merits rather than applying a universal set of criteria to all.

  • Surveys obtain structured information, although they can also provide the opportunity for free-text comments.
  • Case studies provide rich contextual data that can enable deeper understanding of how things work. They rely on multiple sources of evidence and often use mixed methods. This might include routine data collected from local sites combined with observation methods (researchers spending time shadowing staff and services) and interviews or focus groups around practices, processes and relationships.
  • Action research involves researchers working alongside staff and patients to plan and study change as it happens. The researcher is involved in constant assessment and redesign of a process as it moves toward a desired outcome.
  • Improvement science approaches explore how successful strategies to improve healthcare services are, and why.

Structure of this review

In this review, we ask three key questions about collecting and using patient experience feedback (why, what and how?) before considering how healthcare service providers make sense of and act on the feedback. We then reflect on the gaps in our knowledge and offer some immediate and longer-term recommendations.

Why should people share their experience?

Different approaches to collecting patient experience feedback to some extent reflect underlying assumptions about why we might be interested in it and therefore how we respond to it. For some healthcare staff and policy makers, feedback helps to assess service performance against expectations. For others, its primary purpose is to understand and respect individual experiences; for others still, it is to improve services.

Table: Different purposes of patient experience feedback data for healthcare organisations or practitioners

For performance or comparison

The distinction between different purposes directs the type of information collected, the way it is analysed and how it is subsequently used. Where the focus is on performance or comparison, quantitative data such as those obtained from surveys are the most common method, and patients largely report their experience against a pre-determined set of criteria. Individual responses to core questions are aggregated into a score for the organisation or the individual member of staff. Surveys often attempt to address wider issues by including free text boxes that allow people to discuss what is important to them. We discuss some of the challenges around analysing free text data later in this review.

For sharing with other patients and public

For patients and the public, there may also be other purposes, for example sharing with other potential service users. Ziebland et al (2016) reviewed the literature and identified that hearing about other people's experiences of a health condition could influence a person's own health through seven domains: finding information, feeling supported, maintaining relationships, using health services, changing behaviours, learning to tell the story and visualising illness. Study E found that some people see giving feedback as a form of public accountability for the service and part of a sense of 'caring for care' (although it is not always used as such by healthcare providers). There can be tension between these purposes, and sometimes the intended purpose of the person giving the feedback is not matched by its use by care providers. Study F found that the purpose of providing feedback was not clear to most patients. The lack of organisational response to their survey feedback meant they perceived it as a 'tick box exercise' and thought that their comments would not be used.

Study D notes that although survey data are collected from patients, patients are not usually involved in analysing the data or in deciding how to act on it.

Patients' understanding of the purpose of feedback

A number of studies found that patients value giving feedback as 'conversations', using their own words to focus on the aspects of care important to them rather than what is important to the organisation. Most importantly, they want a response, so that feedback can be a two-way street. This can be a powerful way of understanding the difference between an ideal service imagined by planners and the lived experience of how the service works in practice. Patient stories often identify parts of the process that have previously been overlooked, or the impact of local context on how services are experienced.

Patients felt that their feedback could serve different purposes. Study E found that patients distinguish between generalised feedback that is intended for other patients, carers and their families, and feedback that raises a specific concern with the service provider. When providing online feedback for other patients, people said that the public and anonymised nature of the feedback is key. This sort of feedback was seen by patients as a way of publicly thanking staff, boosting morale, encouraging best practice and providing other patients with a positive signal about good care. Significantly, they reported that they saw their feedback as something staff might use to promote and maintain the service at times of increased financial pressure and cuts. This sort of feedback was shared over social media, especially health forums and Facebook. Twitter, and to a lesser extent blogs, were often used to communicate with healthcare professionals and service providers, including policy-makers and opinion leaders, while at the same time being accessible to the wider public. Patients said that concerns and complaints about their care require focused feedback, and they use different online routes for this, such as local websites (these differed depending on the Trust and/or service in question) or third-party platforms such as Care Opinion or iWantGreatCare.

Frontline staff perception of the purpose

The central collection of some types of feedback means that hospital staff are often unaware that the data are even being collected. Study B found that patient experience data were often viewed (particularly by frontline staff) as necessary only for regulatory compliance. Concerns about 'reliability' meant that vulnerable people, such as those with acute mental health problems, were sometimes excluded from giving the formal feedback that feeds into compliance and assurance programmes.

Matching purpose and interpretation

Understanding the purpose of feedback is critical not only to decisions about how to collect it but also to interpreting it. Study H notes a mismatch between what patients say about their experience in response to survey questionnaires and what they say at interview, and Gallan et al. (2017) report a difference between survey scores and free text. It is not possible, or even helpful, to triangulate patient experience data from different sources down to a single 'truth'. Instead, comparing different sources elicits common themes and provides a rounded picture. Study G found that negative comments can be sandwiched between positive comments and vice versa, and that staff felt it was important to consider this context rather than separate out the positive from the negative.

What sort of feedback?

Patient experience is multi-dimensional. On one hand, it can be the 'transactional' experience of the process, e.g. waiting times, information provided and the immediate outcomes. On the other hand, it can be how people feel about interactions with healthcare staff and whether they felt treated with dignity and respect. Different types of information are needed to explore different aspects of experience. Whilst many surveys ask about relational as well as transactional experience, other methods, such as patient narratives, might provide richer information. This difference is reflected in Study A's observation that there are at least thirty-eight different types of data collection on patient experience. Healthcare providers don't always know how to make the best of this diverse range of data, or of the myriad of unsolicited feedback. This can leave both the public and healthcare providers confused and uncertain about how best to collect patient experience and how to act on it.

Graph describing types of patient feedback on a scale from more generalisable (e.g. surveys, comment cards) to less generalisable (e.g. complaints and compliments, public meetings), and from less descriptive (e.g. SNS questions, kiosk questions) to more descriptive (e.g. patient stories, focus groups/panels, in-depth interviews)

Image taken from Measuring Patient Experience, by the Health Foundation

Who gives the feedback?

Study D asks whether data have to come directly from patients to 'count' as patient experience data. This raises the question of whether families and friends, who often have a ringside seat to the care, might be legitimate sources of patient experience feedback. Study D also found that staff feedback and observations of their patients' experience, as well as their own experience, had potential as a source of improvement ideas and motivation to achieve change in care.

Credibility of the data

Some healthcare staff feel more comfortable with feedback that meets traditional measures of objectivity and are sceptical about the methods by which patient experience data is collected. Study H reported that general medical practitioners express strong personal commitments to incorporating patient feedback in quality improvement efforts. At the same time, they express strong negative views about the credibility of survey findings and patients’ motivations and competence in providing feedback, leading on balance to negative engagements with patient feedback.

Despite Trusts routinely collecting patient experience data, this data is often felt to be of limited value, because of methodological problems (including poor or unknown psychometric properties or missing data) or because the measures used lack granular detail necessary to produce meaningful action plans to address concerns raised. Study B

The suggestion that patient experience does not need to be representative of the whole population or collected in a standardised way has led some to question the quasi-research use of the term ‘data’, with its assumptions about which data are acceptable.

Objectivity or richness?

In contrast to objective data, Martin et al (2015) discuss the importance of 'soft' intelligence: the informal data that is hard to classify and quantify but which can provide holistic understanding and form the basis for local interventions. This applies well to patient experience, and Locock et al. (2014), in another NIHR-funded study, discuss the use of narratives as a powerful way of understanding human experience. Bate and Robert (2007) note that narratives are not intended to be objective or verifiable but celebrate the uniquely human and subjective recalled experience or set of experiences. One form of narrative-based approach to service improvement is experience-based co-design.

Experience-Based Co-Design (EBCD)

EBCD involves gathering experiences from patients and staff through in-depth interviewing, observations and group discussions, identifying key 'touch points' (emotionally significant points) and assigning positive or negative feelings. A short edited film is created from the patient interviews. This is shown to staff and patients, conveying in an impactful way how patients experience the service. Staff and patients are then brought together to explore the findings and to work in small groups to identify and implement activities that will improve the service or the care pathway. Accelerated EBCD, which replaces the individual videos with existing videos from an archive, has been found to generate a similar response.

The Point of Care Foundation has a toolkit which includes short videos from staff and patients involved in experience-based co-design (EBCD) projects.

How is feedback on patient experience collected and used?

Currently the NHS expends a lot of energy on collecting data, with less attention paid to whether this provides the information necessary to act. Study B's economic modelling revealed that the costs of collecting patient feedback (i.e. staff time) currently far outweigh the effort put into using the findings to drive improvement in practice. This may go some way to explaining DeCourcy et al's (2012) finding that results of the national NHS patient survey in England have been remarkably stable over time, with the only significant improvements seen in areas where there have been coordinated government-led campaigns, targets and incentives. Study G found that staff consider presentation of the data to be important. They want to be able to navigate it in ways that answer questions specific to their service or to particular patients.

Study C found variations in how data are generated and processed at different Trusts and describes differences in how Friends and Family Test data are collected. The 2015 guidance from NHS England states that 'Patients should have the opportunity to provide their feedback via the FFT on the day of discharge, or within 48 hours after discharge'. However, discharge is a complex process, and ward managers and matrons frequently said that this was an inappropriate point at which to ask patients for feedback.

The Friends and Family Test

Launched in April 2013, the Friends and Family Test (FFT) has been rolled out in phases to most NHS funded services in England. FFT asks people if they would recommend the services they have used and offers supplementary follow-up questions. A review of the test and the way it is used has led to revised guidance for implementing the new FFT from 1 April 2020.

The changes announced already mean that:

  • There will be a new FFT mandatory question and six new response options
  • Mandatory timescales where some services are currently required to seek feedback from users within a specific period will be removed to allow more local flexibility and enable people to give feedback at any time, in line with other services
  • There will be greater emphasis on use of the FFT feedback to drive improvement
  • New, more flexible, arrangements for ambulance services where the FFT has proved difficult to implement in practice

Hearing stories (not just counting them)

Entwistle et al (2012) have argued that existing healthcare quality frameworks do not cover all aspects that patients want to feed back, and that procedure-driven, standardised approaches such as surveys and checklists are too narrow. For many of the public, patient experience feedback is about being heard as a unique individual and not just as part of a group. This requires their experience to be considered as a whole, rather than reduced to a series of categories. Patient stories are also powerful ways of connecting with healthcare staff; however, they are often seen as too informal to be considered legitimate data.

Study A suggests that hearing stories means more than simply collecting patient stories; it also means including the patient voice in interpreting feedback. Shifting away from the idea of patient feedback as objective before-and-after measures for improvement, the authors used participative techniques, including critical, collective reflection, to consider what changes should take place. The researchers suggest that this approach has similarities with the principles of experience-based co-design and other participatory improvement frameworks, and that this is an area ripe for further exploration.

Asking about patient experience can appear straightforward; however, Study B observed that the quality of relationships between staff and patients affects the nature of feedback patients or carers give to staff. In their study of people in mental health units, patients or carers would only offer feedback selectively, and only about particular issues at the end of their stay, if they had experienced good relationships with staff during the admission.

Positive experience feedback

Many patients provide feedback about good experience, but staff don't always recognise and value it. Study A observed that most wards had plenty of generic positive feedback. However, this feedback is not probed, and therefore specific elements of positive practice that should be illuminated and encouraged are rarely identified. Study G found that positive feedback tends to be shorter, often a single word like 'fantastic'. There is a danger of giving less weight to this type of feedback. Study B described how patients in mental health settings spent time thinking about the way to frame and phrase praise; however, positive feedback was often treated in an (unintentionally) dismissive way by staff.

Learning from positive experience feedback

Vanessa Sweeney, Deputy Chief Nurse and Head of Nursing for the Surgery and Cancer Board at University College London Hospitals NHS FT, decided to share an example of positive feedback from a patient with staff. The impact on the staff was immediate, and Vanessa decided to share their reaction with the patient who provided the feedback. The letter she sent and the patient's response are reproduced here:

Dear XXXXX,

Thank you for your kind and thoughtful letter, it has been shared widely with the teams and the named individuals and has had such a positive impact.

I’m the head of nursing for the Surgery and Cancer Board and the wards and departments where you received care. I’m also one of the four deputy chief nurses for UCLH and one of my responsibilities is to lead the trust-wide Sisters Forum. It is attended by more than 40 senior nurses and midwives every month who lead wards and departments across our various sites. Last week I took your letter to this forum and shared it with the sisters and charge nurses. I removed your name but kept the details about the staff. I read your letter verbatim and then gave the sisters and charge nurses the opportunity in groups to discuss in more detail. I asked them to think about the words you used, the impact of care, their reflections and how it will influence their practice. Your letter had a very powerful impact on us as a group and really made us think about how we pay attention to compliments but especially the detail of your experience and what really matters. I should also share that this large room of ward sisters were so moved by your kindness, compassion and thoughtfulness for others.

We are now making this a regular feature of our Trust Sisters Forum and will be introducing this to the Matrons Forum – sharing a compliment letter and paying attention to the narrative, what matters most to a person.

Thank you again for taking the time to write this letter and by doing so, having such a wide lasting impact on the teams, individuals and now senior nurses from across UCLH. We have taken a lot from it and will have a lasting impact on the care we give.

The patient replied: Thank you so much for your email and feedback. As a family we were truly moved on hearing what impact the compliment has had. My son said – “really uplifting”. I would just like to add that if you ever need any input from a user of your services please do not hesitate to contact me again.

Informal feedback

Study F describes how staff recognised that sometimes experience is shared naturally in day-to-day discussions with service users but does not get formally captured. Staff expressed a wish for more opportunities to capture verbal feedback, especially in mental health services. Study D found that staff do use informal feedback and patient stories to inform quality improvements at ward level, but this was not considered 'data'. This made the patient contribution invisible, and staff could not always reference where the intelligence informing a proposed change came from.

So, the ‘big ticket’ items like clinical outcomes, Never Events, tend to be subject to QI [Quality Improvement] methodology. Patient experience on the other hand tends to get addressed through ‘actions’, which isn’t necessarily a formal method as such and not in line with QI methodology. So, for instance, you get a set of complaints or comments about a particular thing on a ward. They act to change it, that’s an action. They just change that. It’s not formal and it’s not following a method. That’s not to say it’s not a quality improvement, because it is: the action was based on feedback and it’s led to a change. But it is informal as opposed to formal. It’s because we don’t know how to deal with the feedback that is informal. Study C

Online feedback

A new and developing area of patient experience feedback is through digital platforms. UK and US data show that online feedback about healthcare is increasing and likely to continue to grow fast, but this presents its own specific challenges to healthcare providers.

Who writes and reads it?

Study E surveyed 1824 internet users: 42% of people surveyed read online patient experience feedback, even though only 8% write it. Younger people and those with higher incomes are more likely to read feedback, particularly women, and readers were more likely to be experiencing a health challenge, live in urban areas and be frequent internet users.

What are they writing/reading?

The majority of online reviews are positive, and numeric ratings for healthcare providers tend to be high. Members of the public who had left or read online feedback framed it as a means of improving healthcare services and supporting staff and other patients, describing it as 'caring for care': intended to be supportive and to help the NHS learn. Respondents said they would like more of a 'conversation', although in practice they often struggled to achieve this. In contrast to the positive intent expressed by the public, many healthcare professionals are cautious about online feedback, believing it to be mainly critical and unrepresentative, and rarely encourage it. This reflects a lack of value given to different types of feedback data. In Study E, medical staff were more likely than nurses to believe online feedback is unrepresentative and generally negative in tone, with primary care professionals being more cautious than their secondary care counterparts.

Healthcare providers' response to online feedback

Baines et al. (2018) found that adult mental health patients leaving feedback in an online environment expected a response within seven days, but healthcare professionals are unsure how to respond to online feedback. Ramsey et al (2019) report five response types to online feedback: non-responses, generic responses, appreciative responses, offline responses and transparent, conversational responses. The different response types reflect the organisational approach to patient experience data, which in itself may reflect deeper cultures. As yet unpublished work by Gillespie (2019) suggests that there is an association between defensiveness in staff responses to online feedback and the Summary Hospital-level Mortality Indicator. This suggests that staff responses to online feedback might reveal a broader hospital culture which blocks critical, but potentially important, information moving from patients to staff.

The idea of a digitally sophisticated health consumer at the centre of a technology-enabled health system, actively engaged in managing their own care, which elsewhere we have characterised as the “digital health citizen”, has caught the imagination of policy makers seeking to address the challenges of twenty-first century health care. At the same time…providing online feedback is a minority activity, there is professional scepticism and a lack of organisational preparedness. Study E 

Study E notes that a vast amount of feedback left online is largely unseen by Trusts, either because they are not looking in those places or because they do not think of those avenues as legitimate feedback channels. Organisations often state that they can only respond to individual, named cases. In general, only sanctioned channels get monitored and responded to, with feedback from other channels ignored. Staff are often unsure where the responsibility to respond to online feedback lies, or feel powerless to do so because anonymous comments are perceived to restrict what response can be made. The authors recommend that the NHS improve the culture around receiving unsolicited feedback and consider its response-ability (the ability to respond specifically to online feedback) as well as its responsivity (ensuring responses are timely, as well as visible).

Organisational attitudes to online feedback influence the ways in which individual staff respond. Study D suggests that staff find unstructured and unsolicited online feedback interesting and potentially useful, but they do not feel it has organisational endorsement, and so it is rarely used proactively. Study E reports that a quarter of nurses and over half of doctors surveyed said they had never changed practices as a result of online feedback.

Who, where and when?

Feedback on general practice care

Much of the research into using patient feedback has been on inpatient, acute hospital care. However, Study H looked at how people using primary care services provide feedback through patient surveys and how staff in GP practices used the findings. The study was particularly interested in practices with low scores on the GP Patient Survey and whether patient feedback reflected actual GP behaviours. The findings were similar to those from studies in hospitals. GP practice staff neither believed nor trusted patient surveys and expressed concerns about their validity and reliability and the likely representativeness of respondents. They were also more comfortable with addressing transactional experience, such as appointment systems and telephone answering. Addressing relational aspects, such as an individual doctor's communication skills, was seen to be much more difficult.

The researchers videoed a number of patient/GP consultations and then asked the patients to complete a questionnaire about the GP's communication. The researchers interviewed a sample of these patients, showing them the video and asking them to reflect on how they completed the questionnaire. The patients readily criticised the care verbally when reviewing the videos and acknowledged that they had been reluctant to be critical when completing the questionnaire, both because of the need to maintain a relationship with the GP and because they were grateful for NHS care they had received in the past. Patients' ratings of the videos were similar to those of trained raters when communication was good. But when the raters judged communication in a consultation to be poor, patients' assessments were highly variable, from 'poor' to 'very good'. The authors concluded that patients' reluctance to give negative feedback on surveys means that when scores for a GP are lower than those in comparable practices, there is likely to be a significant concern.

Although the GP Patient Survey is available in 15 languages, fewer than 0.2% of surveys are completed in languages other than English, and feedback from people with minority ethnic backgrounds tends to be low. Study H explored the feedback of patients from South Asian backgrounds. These respondents tend to be registered in practices with generally low scores, explaining about half of the difference between South Asian and white British patients in their reported experience of care. In fact, when people from both white and South Asian backgrounds were shown videos of simulated consultations with GPs, people with South Asian backgrounds gave scores that were much higher, when adjusted for sociodemographic characteristics, than white respondents. This suggests that low patient experience scores from South Asian communities reflect care that is worse than for their white counterparts.

Feedback from vulnerable people

Healthcare staff often express concerns about asking vulnerable people and those who have had a traumatic experience to give feedback, but are reluctant to accept anonymised feedback. Speed et al (2016) describe this as the 'anonymity paradox': some patients feel anonymity is a prerequisite for effective use of feedback processes, to ensure their future care is not compromised, but professionals see it as a major barrier because they are concerned about reputational damage if they cannot fact-check the account.

Study B studied patient experience feedback in mental health settings. Some staff felt that inpatient settings were an inappropriate place to obtain feedback or that the feedback would be unhelpful. This was partly because staff felt they did not have enough time to spend with people who were very unwell and to make sense of their feedback. However, there was also a belief in some units that feedback from those who were acutely unwell (especially if psychotic) was not reliable. The researchers found that people in mental health settings are able to provide feedback about their experiences even when unwell, but detailed and specific feedback was only available near to or after discharge. Some patients were wary of giving formal feedback before discharge for fear of the consequences, an anxiety shared by carers. However, patients wanted their feedback gathered informally at different points during their stay, including their day-to-day experience, irrespective of wellness. The researchers also found that where patients were not listened to in the early part of their admission, they were less likely to provide feedback when asked at the end of their stay.

Feedback from people with long term conditions

Study F explored patient experience data in services for people with long-term musculoskeletal conditions and people with mental health conditions across inpatient, outpatient and general practice settings. Patients felt that there should be more opportunities to capture verbal feedback, especially in mental health services. Gaining feedback required considerable sensitivity given the complexity that some people live with. People with mental health problems often said they would be unlikely to use digital methods to give feedback, especially when unwell, when they might feel unable to write and would prefer to give verbal feedback. Some older respondents with experience of musculoskeletal conditions expressed a concern that people with painful or swollen hands or mobility restrictions may find feedback kiosks difficult to use. The study also highlighted the issues for people for whom English is not a first language.

When to give feedback

The use of feedback relates to the timing of both its collection and reporting. There can be a tension between the different needs of people providing feedback and of those acting on it. Patients often value the anonymity and reflection space of giving feedback after the care episode has been completed; however, Study A reported that staff want real-time feedback rather than the delayed feedback from surveys, which can be months old. In contrast, Study I notes the concerns of some that near real-time feedback surveys have potential for sampling bias from staff, who select which patients are most suitable to provide feedback. However, Davies et al (2008) argue that the aims of real-time data collection are not about representative samples but about feeding data back quickly to staff so that the necessary changes can be identified and acted on. Study I explored the use of real-time patient experience data collection on older people's wards and in A&E departments and found that it was associated with a small but statistically significant improvement (p = 0.044) in measured relational aspects of care over the course of the study.

Ensuring patient feedback is collected from a diverse range of people and places, and at different points in their journey, is important, and the evidence suggests that this will require multiple routes, tailored to the specific circumstances of different groups of service users and different settings.

What do healthcare providers do with patient experience feedback?

Numerous studies point to an appetite amongst healthcare staff for 'credible' feedback. However, despite the rhetoric and good intentions, healthcare providers appear to struggle to use patient experience data to change practice. Study B's national survey found that few English mental health NHS Trusts were able to describe how patient experience data were analysed and used to drive service improvements or change. Only 27% of Trusts were able to collect, analyse and use patient experience data to support change; 51% were collecting feedback but experiencing difficulty in using it to create change, whilst 22% were struggling to collect patient experience feedback routinely. The researchers report it was clear that data analysis was the weakest point in the cycle. Study D found only half of Trusts responding to their survey had a specific plan or strategy for the collection and use of patient experience data, although 60% said their quality improvement (QI) strategy included how they would use patient experience data.

Study B produced a short video explaining their research:

A still from a study video. The image reads 'Systematic review, national survey, case studies, consensus conference, economic evaluation'

How does the data get analysed?

Understanding the feedback depends on how it is analysed and by whom. It is rare that patients are invited to participate in or to confirm the analysis. Data for performance assessment is reduced to categories and stripped of its context. Whilst many healthcare staff express a wish to have an overview or average figure, evidence shows that people who are either very pleased or very unhappy are more likely to respond. This produces a U-shaped distribution of responses, and using averages can therefore be misleading. This is echoed in aggregated organisational scores, which can mask significant variation between different teams and units.

Data collection and analysis of surveys is often outsourced and individual organisations may not receive support to make sense of survey findings and to translate that into improvement actions (Flott et al 2016).

Study D found that staff look at multiple feedback sources plus their own ideas of what needs to change, using a sense-making process akin to the 'clinical mindlines' described by Gabbay and le May (2004), where understanding is informed by a combination of evidence and experience, resulting in socially constructed 'knowledge in practice'.

Despite the desire for patients to tell their stories in their own words, the challenge of managing and integrating large volumes of free text feedback prevents its widespread use. Two of the NIHR studies featured in this review sought to address this by developing automated tools to analyse free text feedback. Study F and Study G both applied data mining techniques to free-text comments to identify themes and associated sentiments (positive, negative or neutral), although they used slightly different techniques to do so. In Study F, the sentiments identified by text mining compared well against those produced by qualitative researchers working on the same datasets in both general hospital and mental health facilities, although for some themes, e.g. care quality, the qualitative researchers appeared to identify a higher number of positive sentiments than the text mining. The researchers produced an electronic tool that allowed the rapid automated processing of free-text comments to give an overview of comments for particular themes, whilst still providing an opportunity to drill down into specific or unusual comments for further manual analysis to gain additional insight. The study highlighted the challenges of dealing with the informal and complex language that frequently appears in patient feedback, which meant that many comments were automatically excluded from analysis by the text mining programs. Whilst text mining can provide useful analysis for reporting on large datasets and within large organisations, qualitative analysis may be more useful for small datasets or small teams.

Managing big data

Large amounts of free-text data on patient experience are collected in surveys. There comes a point where it is impossible to manage and analyse this data manually. Using automated qualitative analysis techniques, whilst still allowing drill-down to individual feedback, is a promising approach to enabling these data to be used.

Study G sought to involve patients and carers as well as NHS staff and the third sector (the stakeholders) in the development of their approach. The aim was to improve the use of patient experience free-text comments in the National Cancer Patient Experience Survey. They developed a toolkit to process raw survey free text and sort it into themes; quantitative summaries of the themes were provided in graphs, with local, regional and national benchmarks. The rule-based data mining approach used was 86% accurate when compared with a human coder. Data could be sorted and filtered in bespoke ways, then drilled down to the original comments. Data were also sorted by sentiment, with the weighting towards positive or negative shown on a graphic, so that staff could see at a glance which areas might need improving and which could be highlighted to boost staff morale. The software was designed specifically for the Cancer Patient Experience Survey, but the researchers believe it will be possible to develop the software to be as accurate on other clinical aspects of care.

How does the data get used?

Study C set out to describe the journey from data to impact but found that 'journey' was not a useful way of looking at what happened to the data, and that the processing did not follow a linear path. They found that the transformation of the data into action is partly dependent on whether the process is independent from other concerns and whether the people involved have the authority to act in a meaningful way. For example, Clinical Nurse Specialists in cancer care have a formal responsibility for patient experience and have the authority to act on data in ways that clearly lead to improvements in care. Similarly, organisationally recognised and validated mechanisms, such as ward accreditation schemes, are seen as producing recognised data which can lead directly to change. Where there is no recognised system or person to act, change can falter.

Clinical staff are busy and need information in quick-access presentations. Study I and Study G found that the response of patient-facing staff to formal feedback (e.g. surveys) was influenced by the format of the feedback: accessible reporting such as infographics was particularly helpful. Study A found senior ward staff were often sent spreadsheets of unfiltered and unanalysed feedback, but a lack of skills meant they could not interrogate it. This was compounded by a lack of time, as staffing calculations did not factor in any time for reflecting and acting on patient feedback. Short summaries (e.g. dashboards and graphs) were essential tools to help staff understand areas for improvement quickly.

Who uses patient experience data?

Patient experience data is most widely used to assess performance. Study A observed that formal sources of patient feedback (such as surveys or the Friends and Family Test) are used by hospital management for assurance and benchmarking purposes. Study C noted that patient feedback in national surveys is frequently presented at the  corporate level rather than at individual unit level which hinders local ownership. It is rarely linked to other indicators of quality (such as safety and clinical effectiveness) and this is compounded by the delay between data collection and receiving the reports. They also identified a frequent disconnect between the data generation and management work carried out by Patient Experience teams and the action for care improvement resulting from that data, which is more often the responsibility of nursing teams.

What gets improved?

There is a potential tension between quick wins and more complex improvement. Study A reported that ward teams want to get information from patients about things that can be quickly fixed, but they also want to understand how their patients feel so they can develop more appropriate ways of relating to them. This is reflected in the observations by many studies that actions taken in response to feedback are largely to improve transactional experience. Study B contrasts 'environmental' change (changes to the physical environment of the ward or tangibles such as diet, seating areas and temperature control) with 'cultural' change (changes related to relationships with patients, including feelings of respect and dignity and staff attitudes). This resonates with Gleeson et al.'s (2016) systematic review, which found that patient experience data were largely used to identify small areas of incremental change to services that do not require a change to staff behaviour.

The Yorkshire Patient Experience Toolkit (PET), developed by Study A, helps frontline staff to make changes based on patient experience feedback.

Are service providers ready for patient feedback?

Gkeredakis et al (2011) point out that simply presenting NHS staff with raw data will not lead to change. An organisation's capacity to collect and act on patient experience is related to its systems and processes and the way staff work. Study A, for example, found it was difficult to establish multi-disciplinary involvement in patient experience initiatives unless teams already worked in this way. Study A referred to an earlier study (Sheard et al, 2017) to help explain some of the challenges observed. Two conditions need to be in place for effective use of patient experience feedback: firstly, staff need to believe that listening to patients is worthwhile; secondly, they need organisational permission and resources to make changes. Staff in most (but not all) of the wards studied believed patient experience feedback was worthwhile but did not have the resources to act. Even where staff expressed strong belief in the importance of listening to patient feedback, they did not always have confidence in their ability or freedom to make change. Study A sought to address some of these barriers as they arose through its action research approach but found them to be persistent and hard to shift. In addition to organisational and resource issues, Study A also revealed that staff found responding to patient experience emotionally difficult and needed sensitive support to respond constructively.

Study I found that buy-in from senior staff was a key factor in both the collection and use of feedback; for example, directors of nursing or ward leaders revisiting action plans and monitoring progress during regularly scheduled meetings.

Patient Experience and Quality Improvement

Whilst patient experience is often talked about as one of the cornerstones of quality, patient experience feedback was seen by a number of researchers to have an ambiguous relationship with quality improvement systems. Study C noted that informal feedback gets acted on, but the improvements are also seen as informal and not captured. This illustrates a theme running through many of the featured studies that patient experience data is seen as separate from other quality processes and that it is often collected and considered outside organisational quality improvement structures.

…patient experience is almost [our emphasis] an indicator of something but it's not used as a direct measure in any improvement project […] I like things in black and white, I don't like things that are grey. Patient experience is grey. (Head of Quality Improvement, Study C)

Lee et al (2018) studied how two NHS Trust boards used patient experience feedback. They found that although patient survey findings were presented to the boards, they were not used as a form of quality assurance. The discussion of surveys and other kinds of feedback did not of itself lead to action or explicit assurance, and external pressures were equally important in determining whether and how boards use feedback.

Study A found that Quality Improvement teams were rarely involved in managing and acting on patient experience feedback, or if they were, they focused on strategy at an organisational level rather than practice change at local level. Study D observed that in most organisations 'experience' and 'complaints' are dealt with separately, by different teams with different levels of authority. There was a strong feeling that there needs to be a formal process for managing experience data, with sufficient resources to ensure specific action can be taken.

The Point of Care Foundation website hosts a guide developed as part of Study D. It helps clinical, patient experience and quality teams to draw on patient experience data to improve quality in healthcare, and covers gathering data, getting started and improvement methods.

Study C observed complex relationships between institutionally recognised quality improvement efforts (formal QI) and the vast amount of unsystematised improvement work that takes place in response to patient experience data in less well-documented ways (everyday QI). They found that when  frontline staff (often nurses) had the right skills, they were able to use imperfect data, set it into context and search for further data to fill the gaps and use it to improve services.

Study C created a video explaining what they found about how staff can use patient experience feedback to improve care.

NHS Improvement Patient Experience Improvement Framework

The framework was developed to help NHS organisations to achieve good and outstanding ratings in their Care Quality Commission (CQC) inspections. The framework enables organisations to carry out an organisational diagnostic to establish how far patient experience is embedded in its leadership, culture and operational processes. It is divided into six sections, each sub-divided and listing the characteristics and processes of organisations that are effective in continuously improving the experience of patients. The framework integrates policy guidance with the most frequent reasons CQC gives for rating acute trusts ‘outstanding’.

Conclusions

Our review of the evidence shows that there is much work in NHS organisations exploring how to collect and use data about patient experience. This complements the 'soft intelligence' acquired through experience and informal inquiry by staff and patients. However, we found that this work can be disjointed and can stand apart from other quality improvement work and the management of complaints.

The research we feature highlights that patients are often motivated to give praise, or to be constructively critical and suggest improvements, because they want to help the NHS. NHS England has developed a programme to pilot and test Always Events: those aspects of the patient and family experience that should always occur when patients interact with healthcare professionals and the healthcare delivery system. However, the research featured here suggests a managerial focus on 'bad' experiences, which means the rich information about what goes right, and what can be learnt from it, can be overlooked. Positive comments often arrive as unsolicited feedback, and the NHS needs to think about how to use this well.

Our featured studies show that staff need time and skills to collect, consider and act on patient feedback, and that patients often want to be actively involved in all stages.

The NHS has made important strides towards partnering with patients to improve services and the research featured in this review can help direct the next steps.

Acknowledgements

This report was written by Elaine Maxwell with Tara Lamont of the NIHR Dissemination Centre.

We acknowledge the input of the following experts who formed our Steering Group:

Jocelyn Cornwell - CEO, Point of Care Foundation
Dr Sara Donetto - Lecturer, King's College London
Chris Graham - CEO, Picker Institute
Julia Holding - Head of Patient Experience, NHS Improvement
Professor Louise Locock - Professor in Health Service Research, University of Aberdeen
Dr Claire Marsh - Senior Research Fellow/Patient and Public Engagement Lead, Bradford Institute for Health Research
David McNally - Head of Experience of Care, NHS England
James Munro - Chief Executive, Care Opinion
Laurie Olivia - Head of Public Engagement and Involvement, NIHR Clinical Research Network
Professor John Powell - Associate Professor, University of Oxford
Professor Glenn Robert - Professor, King's College London
Neil Tester - Director, Richmond Group
Professor Scott Weich - Professor of Mental Health, University of Sheffield

We are also grateful to the following staff in NHS organisations who reviewed the final draft:

Melanie Gager - Nurse Consultant in Critical Care, Royal Berkshire NHS FT
Lara Harwood - Service Experience Lead, Hertfordshire Partnership NHS FT
Lisa Anderton - Head of Patient Experience, University College London Hospitals NHS FT

Study summaries

Study A - Using patient experience data to develop a patient experience toolkit to improve hospital care: a mixed-methods study

Principal investigator Lawton, R. (Sheard et al, 2019)

This study aimed to understand and enhance how hospital staff learn from and act on patient experience feedback. A scoping review, qualitative exploratory interviews and focus groups with 50 NHS staff found use of patient feedback is hindered at both micro and macro levels. These findings fed into a co-design process with staff and patients to produce a Patient Experience Toolkit that could overcome these challenges. The toolkit was trialled, tested and refined in six wards across three NHS Trusts (chosen to reflect diversity in size and patient population) over a 12-month period using an action research methodology. Its critical components were open-ended conversational interviews with patients by volunteers to elicit key topics of importance, facilitated team discussions around these topics, and coached quality improvement cycles to enact changes. A large, mixed-methods evaluation was conducted over the same 12-month period to understand what aspects of the toolkit worked or did not work, how and why, with a view to highlighting critical success factors. Ethnographic observations of key meetings were collected, together with in-depth interviews at the halfway and end points with key stakeholders, and detailed reflective diaries kept by the action researchers. Ritchie and Spencer's Framework approach was used to analyse these data. A 12-item patient experience survey was completed by around 15-20 patients per week in total (across the six wards), beginning four weeks before the action research formally started and ending four weeks after it formally ceased.

Sheard L, Marsh C, Mills T, Peacock R, Langley J, Partridge R, et al. Using patient experience data to develop a patient experience toolkit to improve hospital care: a mixed-methods study. Health Serv Deliv Res 2019;7(36)

Sheard L, Peacock R, Marsh C, Lawton R. (2019). What's the problem with patient experience feedback? A macro and micro understanding, based on findings from a three-site UK qualitative study. Health Expectations, 22(1), 46-53

Study B - Evaluating the Use of Patient Experience Data to Improve the Quality of Inpatient Mental Health Care (EURIPIDES)

Principal investigator Weich, S. (2020)

This study looked at the way patient experience data are collected and used in inpatient mental health services. Using a realist research design, it had five work packages: (1) a systematic review of 116 papers to identify patient experience themes relevant to inpatient mental health care; (2) a national survey of patient experience leads in inpatient mental health Trusts in England; (3) six in-depth case studies of what works, for whom, in what circumstances and why, including which types of patient experience measures and organisational processes facilitate effective translation of these data into service improvement actions; (4) a consensus conference with forty-four participants to agree recommendations about best practice in the collection and use of mental health inpatient experience data; and (5) health economic modelling to estimate resource requirements and barriers to adoption of best practice.

Patient experience work was well regarded but vulnerable to cost improvement pressure, and only 27% of Trusts were able to collect and analyse patient experience data to support change. Few Trusts had robust or extensive processes for analysing data in any detail, and there was little evidence that patient feedback led to service change. A key finding was that patients can provide feedback about their experiences even when unwell, and that there is a loss of trust when staff are unwilling to listen at these times. The researchers described a set of conditions necessary for effective collection and use of data, derived from the programme theories tested in the case studies. These were refined by the consensus conference and provide a series of recommendations to support people at the most vulnerable point in their mental health care.

The researchers have produced a video of their findings.

Study C - Organisational strategies and practices to improve care using patient experience data in acute NHS hospital trusts: an ethnographic study

Principal investigator Donetto, S. (2019)

The main aim of this study was to explore the strategies and practices organisations use to collect and interpret patient experience data, and to translate the findings into quality improvements, in five purposively sampled acute NHS hospital trusts in England. A secondary aim was to understand and optimise the involvement and responsibilities of nurses in senior managerial and frontline roles with respect to such data. An ethnographic study of the 'journeys' of patient experience data, in cancer and dementia services in particular, guided by Actor-Network Theory, was undertaken. This was followed by workshops (one cross-site, and one at each Trust) bringing together different stakeholders (members of staff, national policymakers, patient/carer representatives) to consider how to use patient experience data. The researchers observed that each type of data takes multiple forms and can generate improvements in care at different stages in its complex 'journey' through an organisation. Some improvements are part of formal quality improvement systems, but many are informal and therefore not identified as quality improvement. Action is dependent on the context of the patient experience data collection and on the people or systems interacting with the data having the autonomy and authority to act. The responsibility for acting on patient experience data falls largely on nurses, but other professionals also have important roles. The researchers found that sense-making exercises to understand and reflect on the findings can support organisational learning. The authors conclude that it is not sufficient to focus solely on improving the quantity and quality of data NHS Trusts collect. Attention should also be paid to how these data are made meaningful to staff, and to ensuring systems are in place that enable these data to trigger action for improvement.

Donetto S, Desai A, Zoccatelli G, Robert G, Allen D, Brearley S, Rafferty AM. Organisational strategies and practices to improve care using patient experience data in acute NHS hospital trusts: an ethnographic study. Health Serv Deliv Res 2019;7(34)

Study D - Understanding how frontline staff use patient experience data for service improvement - an exploratory case study evaluation and national survey (US-PEx)

Principal investigator Locock, L. (2019)

This mixed methods study used national staff survey results and a new national survey of patient experience leads together with case study research in six carefully selected hospital medical wards to explore what action staff took in relation to particular quality improvement projects prompted by concerns raised by patient experience data.  The effects of the projects were measured by surveys of experience of medical patients before and after the changes and observation on the wards.

Over a third of patient experience leads responded to the survey, and they reported that the biggest barrier to making good use of patient experience data was lack of time. Responses to the before and after patient surveys were received from about a third of patients in total, although this varied between sites (1134 survey responses before, 1318 after), and these showed little significant change in patient experience ratings following quality improvement projects. Insights from observations and interviews showed that staff were often unsure about how to use patient experience data. Ward-specific feedback, as generated for this study, appeared helpful. Staff drew on a range of informal and formal sources of intelligence about patient experience, not all of which they recognised as 'data'. This included their own informal observations of care and conversations with patients and families. Some focused on improving staff experience as a route to improving patient experience. The research showed that teams with people from different backgrounds, including a mix of disciplines and NHS Band levels, brought a wider range of perspectives, organisational networks and resources ('team capital'). This meant they were more likely to be able to make changes on the ground and therefore more likely to succeed in improving care.

Insights from this study were used to develop an online guide to using patient experience data for improvement.

Study E - Online patient feedback: a multimethod study to understand how to Improve NHS Quality Using Internet Ratings and Experiences (INQUIRE)

Principal investigator Powell, J. (2019)

This study explored what patients feed back online, how the public and healthcare professionals in the United Kingdom feel about it, and how it can be used by the NHS to improve the quality of services. It comprised five work programmes, starting with a scoping review of what was already known. A questionnaire survey of the public was used to find out who reads and writes online feedback, and the reasons they choose to comment on health services in this way: 8% had written and 42% had read online healthcare feedback in the last year. This was followed up by face-to-face interviews with patients about their experiences of giving feedback to the NHS. A further questionnaire explored the views and experiences of doctors and nurses. Finally, the researchers spent time in four NHS trusts to learn more about the approaches that NHS organisations take to receiving and dealing with online feedback from patients. A key finding was that people who leave feedback online are motivated primarily to improve healthcare services and want their feedback to form part of a conversation. However, many professionals are cautious about online patient feedback and rarely encourage it. Doctors were more likely than nurses to believe online feedback is unrepresentative and generally negative in tone. NHS trusts do not monitor all feedback routes, and staff are often unsure where the responsibility to respond lies. It is important that NHS staff have the ability to respond and can do so in a timely and visible way.

Study F - Developing and Enhancing the Usefulness of Patient Experience Data using digital methods in services for long term conditions (the DEPEND mixed methods study)

Principal investigator Sanders, C. (2020)

This mixed-methods study explored digital collection and use of patient experience data in services for people with long-term severe mental health or musculoskeletal conditions in an acute NHS Trust, a mental health NHS Trust and two general practices. The study had four parts. Firstly, semi-structured interviews and focus groups were held with staff (n=66), patients (n=41) and carers (n=13) about the timing, form and method of providing feedback. Secondly, computer science text analytics methods were used to analyse two datasets containing free-text comments extracted from various patient experience surveys (e.g. FFT, Picker Survey); the raw dataset from Site A contained 110,854 comments (2,114,726 words), whilst the Site B dataset contained 1,653 comments. This was compared with a qualitative thematic analysis. Thirdly, workshops were conducted with patients, carers and staff to co-design new ways of collecting and presenting patient experience data; a survey was administered via a digital kiosk, online, or by pen and paper. Fourthly, interviews and focus groups with 51 staff, 24 patients and 8 carers, combined with 41 focused observations, were analysed using Normalisation Process Theory to evaluate how successful the different methods were. The study found that staff and patients were largely positive about using digital methods but wanted more meaningful and informal feedback to suit local contexts. Text mining could analyse detailed patient comments, although there were challenges, e.g. informal and complex language. Text mining can provide useful analysis for reporting on large datasets and within large organisations; however, qualitative analysis may be more useful for small datasets or teams. Staff thought new ways of analysing and reporting feedback gave some new insights, but there was limited time for embedding new tools, and changes in service provision were not observed during the testing period. Observations showed that patients were apprehensive about the digital kiosks but were more likely to participate if given support. The authors have developed a video describing their findings.

The text mining programmes and user manual are available via this link

Study G - Practical automated analysis and dashboard representations of cancer survey free-text answers (PRESENT: Patient Reported Experience Survey Engineering of Natural Text)

Principal investigator Rivas, C. (2019)

The aim of this research was to improve the use of free-text comments collected in surveys. Template-based machine learning has been tried in the past but is time consuming, as data first need to be analysed manually for themes that then act as templates for the software to use. The study had three parts. A scoping review of 43 studies, along with surveys of 32 different stakeholders in healthcare on clinical digital toolkit design, informed the development of draft rule-based machine learning and prototype toolkit dashboards. Co-design, consensus-forming mixed stakeholder concept mapping workshops with 34 participants, together with interviews, reached consensus on a shortlist of 19 themes, six of which were core. Finally, a discrete choice experiment (DCE) explored which toolkit features were preferred, with a simple cost-benefit analysis through a survey. An automated approach to analysing free-text comments, based on the themes, was developed using rule-based machine learning or 'text engineering'. The researchers tested the themes on the 2013 Welsh Cancer Patient Experience Survey. Rule-based machine learning is much more flexible and easier to modify than templates for transferable use. The approach achieved an accuracy of 86%, precision of 88%, sensitivity of 96% and an F-score (which shows the balance between precision and sensitivity) of 92%, similar to human coding levels. A website that summarises the number of comments patients make on each theme was created. The toolkit was tested with 13 staff in 3 UK NHS Trusts (Leeds, London, Wessex), who considered it very useful. This multidisciplinary, mixed stakeholder co-design demonstrated proof of concept for automated display of patient experience free-text comments in a way that could drive healthcare improvements in real time, although the machine learning needs further refinement, especially for transferability to other services. The discrete choice experiment suggested the toolkit would be well accepted, with a favourable cost-benefit ratio, if implemented into practice with appropriate infrastructure support.
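
For readers unfamiliar with these metrics, the reported F-score follows from the standard formula combining precision and sensitivity, as this short check shows; the underlying counts are not given in the report, so only the published rates are used.

```python
# Check of the reported F-score from the published precision and sensitivity.
precision = 0.88    # reported: correct theme assignments / all assignments made
sensitivity = 0.96  # reported: correct assignments / all assignments that should be made
f_score = 2 * precision * sensitivity / (precision + sensitivity)
print(f"F-score = {f_score:.2f}")  # 0.92, matching the 92% reported
```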

Rivas C, Tkacz D, Antao L, Mentzakis E, Gordon M, Anstee S & Giordano R. Automated analysis of free-text comments and dashboard representations in patient experience surveys: a multimethod co-design study. Health Serv Deliv Res 2019;7(23)

Study H - Improving patient experience in primary care: a multimethod programme of research on the measurement and improvement of patient experience

Principal investigators Burt, J., Campbell, J. and Roland, M. (2017)

This programme of research explored how different patients record their experiences of general practice and out-of-hours services, and how primary care staff respond to feedback. It also considered how best to engage primary care staff in responding to feedback. The programme featured a range of different studies using surveys, focus groups, interviews and some experimental designs. Qualitative research suggested that patients can effectively critique their care but are reluctant to be critical when completing questionnaires. General practice and out-of-hours centre staff were sceptical about the value of patient surveys and their ability to support service reconfiguration and quality improvement. Staff expressed a preference for free-text comments, as these provided more tangible, actionable data. Whilst people from South Asian backgrounds are more likely to give low survey scores, this study showed that they are also more likely to be registered with poorly performing practices. The authors asked 1120 people, stratified by age and ethnicity (half white British, half Pakistani), to score the quality of communication in filmed simulated consultations showing various combinations of white and Asian doctors and patients. When viewing the same consultation, Pakistani respondents gave scores that were much higher than those of white participants. This suggests that low patient experience scores from South Asian communities reflect care that is genuinely worse than that received by their white British counterparts. An exploratory trial of real-time feedback (RTF) in practices found that only 2.5% of patients left feedback using touch screens in the waiting room, although more did so when reminded. Staff were broadly positive about using RTF and valued the ability to include their own questions.

Burt J, Campbell J, Abel G, Aboulghate A, Ahmed F, Asprey A, et al. Improving patient experience in primary care: a multimethod programme of research on the measurement and improvement of patient experience. Programme Grants Appl Res 2017;5(9).

Study I - An evaluation of a near real-time survey for improving patients’ experiences of the relational aspects of care: a mixed-methods evaluation  

Principal investigator Graham, C. (2018)

This mixed-methods research evaluated whether near real-time feedback can measure relational aspects of care and whether it can be used to improve them. Factor analysis of national patient experience survey data was used to identify composite indicators to measure NHS Trusts' performance on relational aspects of care. This was used to recruit six case study NHS hospitals with varying patient experience survey results. A real-time survey tool was developed through a review of existing instruments, patient focus groups and interviews. The survey was administered by volunteers, using a tablet computer-based methodology, to 3928 participants on elderly care wards and in accident and emergency departments in the six Trusts over a ten-month period. A small, but statistically significant, improvement in overall patient experiences of relational care over the course of the study was demonstrated. Staff and volunteer surveys (n = 274) and interviews (n = 82) highlighted several factors which influenced the use of near real-time feedback, including the reporting format, free-text comments, buy-in from senior staff, volunteer engagement and initial start-up challenges.

Graham C, Käsbauer S, Cooper R, King J, Sizmur S, Jenkinson C, et al. An evaluation of a near real-time survey for improving patients' experiences of the relational aspects of care: a mixed-methods evaluation. Health Serv Deliv Res 2018;6(15)

Baines R, Donovan J, Regan de Bere S, Archer J, Jones R. (2018) Responding effectively to adult mental health patient feedback in an online environment: A coproduced framework. Health Expectations 21(5), pp.887-898

Bate P, Robert G. (2007) Bringing user experience to healthcare improvement: the concepts, methods and practices of experience-based design. Oxford: Radcliffe Publishing

Davies E, Shaller D, Edgman‐Levitan S, Safran DG, Oftedahl G, Sakowski J, Cleary PD. (2008) Evaluating the use of a modified CAHPS® survey to support improvements in patient‐centred care: lessons from a quality improvement collaborative. Health Expectations 11(2): 160-176.

DeCourcy A, West E, Barron D. (2012) The National Adult Inpatient Survey conducted in the English National Health Service from 2002 to 2009: how have the data been used and what do we know as a result? BMC Health Serv Res 12:71.

Doyle C, Lennox L, Bell D. (2013) A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open 2013;3.

Entwistle V, Firnigl D, Ryan M, Francis J, Kinghorn P. (2012) Which experiences of health care delivery matter to service users and why? A critical interpretive synthesis and conceptual map. J Health Serv Res Policy 17:70-8

Flott KM, Graham C, Darzi A, Mayer E. (2016) Can we use patient-reported feedback to drive change? The challenges of using patient-reported feedback and how they might be addressed. BMJ Quality & Safety. doi: 10.1136/bmjqs-2016-005223

Gabbay, J. and Le May, A. (2004) Evidence based guidelines or collectively constructed “mindlines?” Ethnographic study of knowledge management in primary care. BMJ, 329(7473), p.1013.

Gallan AS, Girju M, Girju R. (2017) Perfect ratings with negative comments: Learning from contradictory patient survey responses. Patient Experience Journal 4:15-28.

Gkeredakis E, Swan J, Powell J, Nicolini D, Scarbrough H, Roginski C, et al (2011) Mind the gap: Understanding utilisation of evidence and policy in health care management practice. Journal of Health Organization and Management 25(3):298-314.

Gillespie, A. (2019). Online feedback, hospital defensiveness and the possibilities for organizational learning. Paper presented at the conference Improving Patient Safety New Horizons | New Perspectives, Leeds UK.

Gleeson, H., Calderon, A., Swami, V., Deighton, J., Wolpert, M. and Edbrooke-Childs, J., (2016) Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ Open, 6(8)

Lee, R., Baeza, J. and Fulop, N. (2018) The use of patient feedback by hospital boards of directors: a qualitative study of two NHS hospitals in England. BMJ Quality & Safety 27(2), pp.103-109.

Locock, L., Robert, G., Boaz, A., Vougioukalou, S., Shuldham, C., Fielden, J., Ziebland, S., Gager, M., Tollyfield, R. and Pearcey, J. (2014) Using a national archive of patient experience narratives to promote local patient-centered quality improvement: an ethnographic process evaluation of ‘accelerated’ experience-based co-design. Journal of Health Services Research & Policy, 19(4), pp.200-207.

Liu JJ, Rotteau L, Bell CM, et al (2019) Putting out fires: a qualitative study exploring the use of patient complaints to drive improvement at three academic hospitals BMJ Quality & Safety Published Online First: 23 May 2019. doi: 10.1136/bmjqs-2018-008801

Martin, G.P., McKee, L. and Dixon-Woods, M. (2015) Beyond metrics? Utilizing ‘soft intelligence’ for healthcare quality and safety. Social Science & Medicine, 142, pp.19-26.

NHS England (2015) The Friends and Family Test: Guidance https://www.england.nhs.uk/wp-content/uploads/2015/07/fft-guidance-160615.pdf

Ramsey, L.P., Sheard, L. and O'Hara, J. (2019) How do healthcare staff respond to patient experience feedback online? A typology of responses published on Care Opinion. Patient Experience Journal, 6(2), pp.42-50.

Robert, G., & Cornwell, J. (2013). Rethinking policy approaches to measuring and improving patient experience. Journal of Health Services Research & Policy, 18(2), 67–69.

Ryan, S. (2019) NHS Inquiries and Investigations; an Exemplar in Peculiarity and Assumption. The Political Quarterly 90:2 pp 224-228

Sheard, L., Marsh, C., O'Hara, J., Armitage, G., Wright, J. and Lawton, R. (2017) The patient feedback response framework–understanding why UK hospital staff find it difficult to make improvements based on patient feedback: a qualitative study. Social Science & Medicine, 178, pp.19-27

Speed, E., Davison, C. and Gunnell, C. (2016). The anonymity paradox in patient engagement: reputation, risk and web-based public feedback. Medical humanities, 42(2), pp.135-140.

Ziebland S, Powell J, Briggs P, Jenkinson C, Wyke S, Sillence E, et al. (2016) Examining the role of patients’ experiences as a resource for choice and decision-making in health care: a creative, interdisciplinary mixed-method study in digital health. Programme Grants Appl Res 2016;4(17).

Produced by the University of Southampton on behalf of NIHR through the NIHR Dissemination Centre


Standard 6 Evaluation and Assessment

Overview of evaluation and assessment.

Evaluation and assessment processes typically start at the outset of the training lifecycle, as early as at the point of training service strategy development, where evaluation/assessment methodologies and tools are identified.

As the planned training proceeds through the training lifecycle, the evaluation and assessment activity becomes more focused on the measurement of effectiveness, impact and outcome, and the quality of work being delivered by the training service.

Evaluation and assessment results should be fed back into the training lifecycle and detailed in the training plan.

There are different reasons why training should be evaluated:

  • Find out if the learning is actually being applied in the workplace
  • Check the impact of the training on job performance
  • Confirm that the training delivered is the right training
  • Continuously improve training delivery using feedback
  • Demonstrate that training is adding value, has made a difference or has had an impact
  • Prove the training has delivered what was agreed with stakeholders.

All training must have some form of assessment to measure the level of learning that has taken place. The training service could consider the following:

  • Assessment conducted during the training, for example, learners undertake exercises during the training which are observed and results are recorded. Learners who are not able to complete certain exercises can be identified and followed up.
  • Post course assessment, for example, an online competency assessment.

To request a copy of the Standards

To request a copy of the Standards, please email us at [email protected] with your full name, name of organisation, where your organisation is based, and if it is an NHS organisation. You do not have to work in a health or social care organisation to request a copy of the Standards.

Note: Adobe version 7 or above is required for the standards.

Further information

Standard 1 Strategy and Service Improvement

Standard 2 Planning and Learning Needs Analysis

Standard 3 Design and Delivery

Standard 4 Administration, Facilities and Equipment

Standard 5 Team Management and Development

This section provides further useful resources for training professionals, including all trainers and training managers. A set of resources and guidance is available to support trainers and managers in planning their career progression and professional development.


Open access. Published: 26 April 2022

Development and validation of the oral presentation evaluation scale (OPES) for nursing students

Yi-Chien Chiang, Hsiang-Chun Lee, Tsung-Lan Chu, Chia-Ling Wu & Ya-Chu Hsiao
BMC Medical Education volume 22, Article number: 318 (2022)


Oral presentations are an important educational component for nursing students, and nursing educators need to provide students with an assessment of presentations as feedback for improving this skill. However, there are no reliable, validated tools available for objective evaluations of presentations. We aimed to develop and validate an oral presentation evaluation scale (OPES) that nursing students could use to self-rate their own performance while learning effective oral presentation skills, and that educators could potentially use in the future to assess student presentations.

The self-report OPES was developed using 28 items generated from a review of the literature about oral presentations and from qualitative face-to-face interviews with university oral presentation tutors and nursing students. Evidence for the internal structure of the 28-item scale was examined with exploratory and confirmatory factor analysis (EFA and CFA, respectively) and internal consistency. Relationships with the Personal Report of Communication Apprehension and Self-Perceived Communication Competence scales provided evidence of relationships with other variables.

Nursing students' (n = 325) responses to the scale provided the data for the EFA, which resulted in three factors: accuracy of content, effective communication, and clarity of speech. These factors explained 64.75% of the total variance. Eight items were dropped from the original item pool. The Cronbach's α value was .94 for the total scale and ranged from .84 to .93 for the three factors. The internal structure evidence was examined with CFA using data from a second group of 325 students, and an additional five items were deleted. Fit indices of the model were acceptable, except for the adjusted goodness of fit, which fell below the minimum criterion. The final 15-item OPES was significantly correlated with the students' scores on the Personal Report of Communication Apprehension scale (r = −.51, p < .001) and the Self-Perceived Communication Competence Scale (r = .45, p < .001), providing evidence of relationships with other self-report assessments of communication.

Conclusions

The OPES could be adopted as a self-assessment instrument for nursing students when learning oral presentation skills. Further studies are needed to determine if the OPES is a valid instrument for nursing educators’ objective evaluations of student presentations across nursing programs.


Competence in oral presentations is important for medical professionals to communicate an idea to others, including those in the nursing professions. Delivering concise oral presentations is a useful and necessary skill for nurses [ 1 , 2 ]. Strong oral presentation skills not only impact the quality of nurse-client communications and the effectiveness of teamwork among groups of healthcare professionals, but also promotion, leadership, and professional development [ 2 ]. Nurses are also responsible for delivering health-related knowledge to patients and the community. Therefore, one important part of the curriculum for nursing students is the delivery of oral presentations related to healthcare issues. A self-assessment instrument for oral presentations could provide students with insight into what skills need improvement.

Three components have been identified as important for improving communication. First, a presenter's self-esteem can influence the physio-psychological reaction towards the presentation; presenters with low self-esteem experience greater levels of anxiety during presentations [3]. Therefore, increasing a student's self-efficacy can increase confidence in their ability to effectively communicate, which can reduce anxiety [3, 4]. Second, Liao (2014) reported that improving speaking efficacy can improve oral communication, and that collaborative learning among students could improve speech efficacy and decrease speech anxiety [5]. A study by De Grez et al. provided students with a list of skills to practice, which allowed them to feel more comfortable when a formal presentation was required, increased presentation skills, and improved communication by improving self-regulation [6]. Third, Carlson and Smith-Howell (1995) determined that the quality and accuracy of the information presented was also an important aspect of public speaking performances [7]. Therefore, all three above-mentioned components are important skills for effective communication during an oral presentation.

Instruments that provide an assessment of a public speaking performance are critical for helping students improve oral presentation skills [7]. One study found peer evaluations were higher than those of university tutors for student presentations, using a student-developed assessment form [8]. The assessment criteria included content (40%), presentation (40%), and structure (20%); the maximum percentage in each domain was given for "excellence", which was relative to a minimum "threshold". Multiple "excellence" and "threshold" benchmarks were described for each domain. For example, benchmarks included the use of clear and appropriate language, enthusiasm, and keeping the audience interested. However, the percentage score did not provide any information about what specific benchmarks were met. Thus, these quantitative scores did not include feedback on specific criteria that could enhance future presentations.

At the other extreme is an assessment that is limited to one aspect of the presentation and is too detailed to evaluate the performance efficiently. An example of this is the 40-item tool developed by Tsang (2018) [ 6 ] to evaluate oral presentation skills, which measured several domains: voice (volume and speed), facial expressions, passion, and control of time. An assessment tool developed by De Grez et al. (2009) includes several domains: three subcategories for content (quality of introduction, structure, and conclusion), five subcategories of expression (eye-contact, vocal delivery, enthusiasm, interaction with audience, and body-language), and a general quality [ 9 ]. Many items overlap, making it hard to distinguish specific qualities. Other evaluation tools include criteria that are difficult to objectively measure, such as body language, eye-contact, and interactions with the audience [ 10 ]. Finally, most of the previous tools were developed without testing the reliability and validity of the instrument.

Nurses have the responsibility of providing not only medical care, but also medical information to other healthcare professionals, patients, and members of the community. Therefore, improving nursing students' speaking skills is an important part of the curriculum. A self-report instrument for measuring nursing students' subjective assessment of their presentation skills could help increase competence in oral communication. However, to date, there is no reliable and valid instrument for evaluating oral presentation performance in nursing education. Therefore, the aim of this study was to develop a self-assessment instrument for nursing students that could guide them in understanding their strengths and development areas in aspects of oral presentations. A scale shown to be valid and reliable for nursing students could then be examined for use in objective evaluations of oral presentations by peers and nurse educators.

Study design

This study developed and validated an oral presentation evaluation scale (OPES) that could be employed as a self-assessment instrument for students when learning skills for effective oral presentations. The instrument was developed in two phases: Phase I (item generation and revision) and Phase II (scale development) [11]. Phase I aimed to generate items using a qualitative method and to collect content evidence for the OPES. Phase II focused on scale development, establishing internal structure evidence for the scale, including EFA, CFA, and internal consistency. In addition, Phase II collected evidence of the OPES's relationships with other variables. Because we hope to also use the instrument as an aid for nurse educators in objective evaluations of nursing students' oral presentations, both students and educators were involved in item generation and revision. Only nursing students participated in Phase II.

Approval was obtained from Chang Gung Medical Foundation institutional review board (ID: 201702148B0) prior to initiation of the study. Informed consent was obtained from all participants prior to data collection. All participants being interviewed for item generation in phase I provided signed informed consent indicating willingness to be audiotaped during the interview. All the study methods were carried out in accordance with relevant guidelines and regulations.

Phase I: item generation and item revision

Participants.

A sample of nurse educators (n = 8) and nursing students (n = 11) participated in the interviews for item generation. Nursing students give oral presentations to meet curriculum requirements, therefore the educators were university tutors experienced in coaching nursing students preparing to give an oral presentation. Nurse educators specialising in various areas of nursing, such as acute care, psychology, and community care, were recruited if they had at least 10 years' experience coaching university students. The mean age of the educators was 52.1 years (SD = 4.26), 75% were female, and the mean amount of teaching experience was 22.6 years (SD = 4.07). Students were included if they had given at least one oral presentation and were willing to share their experiences of oral presentation. The mean age of the students was 20.7 years (SD = 1.90) and 81.8% were female; four were second-year students, three were third-year students, and four were in their fourth year.

An additional eight educators participated in the evaluation of content evidence of the OPES. All had over 10 years' experience in coaching students in giving an oral presentation that would be evaluated for a grade.

Item generation

Development of item domains involved deductive evaluations of the literature about oral presentations [2, 3, 6, 7, 8, 12, 13, 14]. Three domains were determined to be important components of an oral presentation: accuracy of content, effective communication, and clarity of speech. Inductive qualitative data from face-to-face semi-structured interviews with nurse educator and nursing student participants were used to identify domain items [11]. Details of interview participants are described in the section above. The interviews with nurse educators and students followed an interview guide (Table 1) and lasted approximately 30-50 min for educators and 20-30 min for students. Deduction from the literature and induction from the interview data were used to determine categories considered important for the objective evaluation of oral presentations.

Analysis of interview data. Audio recordings of the interviews were transcribed verbatim at the conclusion of each interview. Interview data were analyzed by the first, second, and corresponding authors, all experts in qualitative studies. The first and second authors coded the interview data to identify items educators and students described as being important to the experience of an oral presentation [11]. The corresponding author grouped the coded items into constructs important for oral presentations. Meetings with the three researchers were conducted to discuss the findings; if there were differences in interpretation, an outside expert in qualitative studies was included in the discussions until consensus was reached among the three researchers.

Analysis of the interview data indicated that items involved in preparation, presentation, and post-presentation were important to the three domains of accuracy of content, effective communication, and clarity of speech. Items for accuracy of content involved preparation (being well-prepared before the presentation; preparing materials suitable for the target audience; practicing the presentation in advance) and post-presentation reflection, including discussing the content of the presentation with classmates and teachers. Items for effective communication involved the presentation itself: obtaining the attention of the audience; providing materials that are reliable and valuable; expressing confidence and enthusiasm; interacting with the audience; and responding to questions from the audience. The third domain, clarity of speech, involved post-presentation items: a student's ability to reflect on the content and performance of their presentation, and their willingness to obtain feedback from peers and teachers.

Item revision: content evidence

Based on themes that emerged during the interviews, 28 items were generated. Content evidence for the 28 items of the OPES was established with a panel of eight experts, who were educators that had not participated in the face-to-face interviews. The experts were provided with a description of the research purpose and a list of the proposed items, and were asked to rate each item on a 4-point Likert scale (1 = not representative, 2 = item needs major revision, 3 = representative but needs minor revision, 4 = representative). The item-level content validity index (I-CVI) was determined by dividing the number of experts rating an item 3 or 4 by the total number of experts; the scale-level content validity index (S-CVI) was determined by dividing the number of items rated 3 or 4 by the total number of items.
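
As a worked illustration of this CVI arithmetic, the sketch below computes I-CVI for two hypothetical items rated by eight experts; the ratings are invented.

```python
# Worked example of the I-CVI arithmetic described above, with invented ratings:
# 8 experts rate each item 1-4; ratings of 3 or 4 count as "relevant".
ratings = {  # item -> ratings from the 8 experts (hypothetical values)
    "item 1": [4, 4, 3, 4, 3, 4, 4, 3],
    "item 2": [4, 3, 2, 4, 4, 3, 4, 4],
}
n_experts = 8
i_cvi = {item: sum(r >= 3 for r in rs) / n_experts for item, rs in ratings.items()}
print(i_cvi)  # item 1: 8/8 = 1.0, item 2: 7/8 = 0.875
```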

Based on the suggestions of the experts, six items of the OPES were reworded for clarity: for example, item 12 was revised from "The presentation is riveting" to "The presenter's performance is brilliant; it resonates with the audience and arouses their interest". Two items were deleted because they duplicated other items: "demonstrates confidence" and "presents enthusiasm" were combined, and item 22 became "demonstrates confidence and enthusiasm properly". The items "the presentation allows for proper timing and sequencing" and "the length of time of the presentation is well controlled" were also combined, into item 9, "The content of presentation follows the rules, allowing for the proper timing and sequence". Thus, a total of 26 items were included in the OPES at this phase. The I-CVI values ranged from .88 to 1 and the scale-level CVI/universal agreement was .75, indicating that the OPES was an acceptable instrument for measuring an oral presentation [11].

Phase II: scale development

Phase II, scale development, aimed to establish the internal structure evidence for the OPES. The evidence of relationships to other variables was also evaluated in this phase. More specifically, the internal structure evidence for the OPES was evaluated by exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). The evidence of relationships to other variables was determined by examining the relationships between the OPES and the PRCA and SPCC [15].

A sample of nursing students was recruited purposively from a university in Taiwan. Students were included if they were: (a) full-time students; (b) had declared nursing as their major; and (c) were in their sophomore, junior, or senior year. First-year university students (freshmen) were excluded. A bulletin about the survey study was posted outside of classrooms; 707 students attended these classes. The bulletin included a description of the inclusion criteria and instructions to appear at the classroom on a given day and time if students were interested in participating in the study. Students who appeared at the classroom on the scheduled day (N = 650) were given a packet containing a demographic questionnaire (age, gender, year in school), a consent form, the OPES instrument, and two scales for measuring aspects of communication, the Personal Report of Communication Apprehension (PRCA) and the Self-Perceived Communication Competence (SPCC); the documents were labeled with an identification number to anonymize the data. The 650 students were divided into two groups, based on the demographic data, using the SPSS random case selection procedure (Version 23.0; SPSS Inc., Chicago, IL, USA). The selection procedure was performed repeatedly until homogeneity of the baseline characteristics was established between the two groups (p > .05). The mean age of the participants was 20.5 years (SD = 0.98) and 87.1% were female (n = 566). Participants comprised third-year students (40.6%, n = 274), fourth-year students (37.9%, n = 246) and second-year students (21.5%, n = 93). The survey data for half the group (the calibration sample, n = 325) were used for EFA; the survey data from the other half (the validation sample, n = 325) were used for CFA. Scores from the PRCA and SPCC instruments were used for evaluating the evidence of relationships to other variables.
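
A rough stand-in for that repeated-selection procedure, assuming scipy is available and using simulated year-of-study data with a chi-square homogeneity check on a single characteristic, might look like this; it is not the SPSS procedure itself.

```python
# Sketch of repeated random splitting until baseline homogeneity (p > .05),
# using a chi-square test on year-of-study as one example characteristic.
# Data are simulated; the real procedure checked all baseline characteristics.
import random
from scipy.stats import chi2_contingency

students = [random.choice(["year2", "year3", "year4"]) for _ in range(650)]

while True:
    random.shuffle(students)
    a, b = students[:325], students[325:]
    table = [[a.count(y) for y in ("year2", "year3", "year4")],
             [b.count(y) for y in ("year2", "year3", "year4")]]
    _, p, _, _ = chi2_contingency(table)
    if p > 0.05:  # groups comparable on this characteristic
        break
print(p)
```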

The aims of Phase II were to collect internal structure evidence for the scale: to identify the items that nursing students perceived as important during an oral presentation and to determine the domains that fit the set of items. The 325 nursing students designated for EFA (described above) completed the data collection. We used EFA to evaluate the internal structure of the scale. The items were presented in random order and were not nested according to constructs. Internal consistency of the scale was determined by calculating Cronbach's alpha.

The next step involved determining whether the newly developed OPES was a reliable and valid self-report scale for subjective assessments of nursing students' previous oral presentations. Participants (the second group of 325 students) were asked, "How often do you incorporate each item into your oral presentations?". Responses were scored on a 5-point Likert scale with 1 = never to 5 = always; higher scores indicated a better performance. The latent structure of the scale was examined with CFA.

Finally, the evidence of relationships with other variables of the OPES was determined by examining the relationships between the OPES and the PRCA and SPCC, described below.

The 24-item PRCA scale

The PRCA scale is a self-report instrument for measuring communication apprehension, an individual's level of fear or anxiety associated with either real or anticipated communication with a person or persons [12]. The 24 scale items are comprised of statements concerning feelings about communicating with others. Four subscales are used for different situations: group discussions, interpersonal communications, meetings, and public speaking. Each item is scored on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree); scores range from 24 to 120, with higher scores indicating greater communication anxiety. The PRCA has been demonstrated to be a reliable and valid scale across a wide range of related studies [5, 13, 14, 16, 17]. The Cronbach's alpha for the scale is .90 [18]. We received permission from the owner of the copyright to translate the scale into Chinese. Translation of the scale into Chinese by a member of the research team who was fluent in English was followed by back-translation by a different bilingual member of the team to ensure semantic validity of the translated PRCA scale. The Cronbach's alpha value in the present study was .93.
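
A minimal sketch of this scoring scheme follows; note that the real PRCA reverse-scores some items before summing, which is omitted from this illustration.

```python
# Hypothetical PRCA-style scoring: 24 items on a 1-5 Likert scale summed to a
# 24-120 total, with higher totals indicating greater communication apprehension.
# (The real PRCA reverse-scores some items first; that step is omitted here.)
responses = [3] * 24    # a respondent answering "neutral" throughout
total = sum(responses)  # 72, the midpoint of the 24-120 range
print(total)
```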

The 12-item SPCC scale

The SPCC scale evaluates a person’s self-perceived competence in a variety of communication contexts and with a variety of types of receivers. Each item is a situation that requires communication, such as “Present a talk to a group of strangers” or “Talk with a friend”. Participants respond to each situation by rating their level of competence from 0 (completely incompetent) to 100 (completely competent). The Cronbach’s alpha for reliability of the scale is .85. The SPCC has been used in similar studies [13, 19]. We received permission from the copyright owner to translate the scale into Chinese. Translation of the SPCC scale into Chinese by a member of the research team who was fluent in English was followed by back-translation by a different bilingual member of the team to ensure semantic validity of the translated scale. The Cronbach’s alpha value in the present study was .941.

Statistical analysis

Data were analyzed using SPSS for Windows 23 (SPSS Inc., Chicago, IL, USA). Data from the 325 students designated for EFA were used to determine the internal structure evidence of the OPES. The Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity demonstrated that factor analysis was appropriate [20]. Principal component analysis (PCA) was performed on the 26 items to extract the major contributing factors; varimax rotation determined relationships between the items and contributing factors. Factors with an eigenvalue > 1 were further inspected. A factor loading greater than .50 was regarded as significantly relevant [21].
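
For readers working outside SPSS, the same sequence of checks (Bartlett's test, the KMO measure, principal-component extraction with varimax rotation, and a .50 loading threshold) can be sketched in Python with the factor_analyzer package. The 26-item DataFrame below is simulated and the item names are hypothetical; this illustrates the method and is not a reproduction of the paper's analysis.

```python
# EFA sketch with `factor_analyzer`; the item responses are simulated placeholders.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(325, 26)).astype(float),
                     columns=[f"item{i}" for i in range(1, 27)])

chi2, p = calculate_bartlett_sphericity(items)   # p < .05 -> factorable
kmo_per_item, kmo_total = calculate_kmo(items)   # sampling adequacy

# Unrotated principal-component extraction to count eigenvalues > 1 ...
fa = FactorAnalyzer(rotation=None, method="principal")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# ... then re-extract with varimax rotation and flag weak loadings (< .50).
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
weak_items = loadings.abs().max(axis=1) < 0.50   # candidates for deletion
```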

Items were deleted one at a time, and the EFA model was respecified after each deletion, reducing the number of items in accordance with the a priori criteria. In the EFA phase, the internal consistency of each construct was examined using Cronbach’s alpha, with a value of .70 or higher considered acceptable [22].
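
Cronbach's alpha itself is a closed-form function of the item variances, so the internal-consistency check is easy to verify directly. A plain-NumPy sketch, using the same .70 criterion, might look like this (the simulated responses are placeholders):

```python
# Cronbach's alpha for one construct: rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(scores):
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]                                # number of items
    item_variances = x.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = x.sum(axis=1).var(ddof=1)    # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(325, 5))          # simulated, uncorrelated items,
print(round(cronbach_alpha(demo), 2))             # so alpha will be low here
```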

Data from the 325 students designated for CFA were used to validate the factor structure of the OPES. In this phase, items with a factor loading less than .50 were deleted [21]. Goodness of model fit was assessed using the following: absolute fit indices, including the goodness of fit index (GFI), adjusted goodness of fit index (AGFI), standardized root mean squared residual (SRMR), and root mean square error of approximation (RMSEA); relative fit indices, the normed and non-normed fit indices (NFI and NNFI, respectively) and comparative fit index (CFI); and the parsimony NFI, parsimony CFI, and likelihood ratio (χ²/df) [23].
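
As a rough Python counterpart to this CFA step, the semopy package accepts a lavaan-style measurement model and reports most of these indices through semopy.calc_stats. The item-to-factor assignments and the simulated data below are purely illustrative, not the paper's final model.

```python
# CFA sketch with `semopy`; item assignments and data are illustrative only.
import numpy as np
import pandas as pd
import semopy

model_desc = """
AccuracyOfContent      =~ item7 + item9 + item14
EffectiveCommunication =~ item21 + item22 + item24
ClarityOfSpeech        =~ item17 + item18 + item19
"""

rng = np.random.default_rng(0)
cols = ["item7", "item9", "item14", "item21", "item22",
        "item24", "item17", "item18", "item19"]
data = pd.DataFrame(rng.integers(1, 6, size=(325, 9)).astype(float), columns=cols)

model = semopy.Model(model_desc)
model.fit(data)                    # the validation sample would go here
fit = semopy.calc_stats(model)     # chi2, GFI, AGFI, NFI, TLI, CFI, RMSEA, ...
print(fit.T)
```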

In addition to the validity testing, a research team, which included a statistician, determined the appropriateness of deleting or retaining each item. Convergent validity (the internal quality of the items and factor structures) was further verified using standardized factor loadings, with values of .50 or higher considered acceptable, and average variance extracted (AVE), with values of .5 or higher considered acceptable [21]. Construct reliability (CR) was assessed from the CFA, with values of .7 or higher considered acceptable [24]. The AVE and the correlation matrix among the latent constructs were used to establish discriminant validity of the instrument: the square root of the AVE of each construct was required to be larger than the correlation coefficients between that construct and the other constructs [24].
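
Because AVE and CR are closed-form functions of the standardized loadings, they are also easy to verify by hand; the loadings in this sketch are invented for illustration.

```python
# AVE and construct reliability (CR) from one construct's standardized loadings.
import numpy as np

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def construct_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1.0 - lam ** 2)))

lam = [0.72, 0.68, 0.81, 0.75]                    # hypothetical loadings
print(average_variance_extracted(lam))            # accept if >= .5
print(construct_reliability(lam))                 # accept if >= .7
# Fornell-Larcker check: sqrt(AVE) of each construct must exceed that
# construct's correlations with every other construct.
```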

Evidence of relationships with other variables was determined by examining the relationship of nursing students’ scores (N = 650) on the newly developed OPES with scores on the constructs of communication measured by the translated PRCA and SPCC. We hypothesized that stronger self-reported presentation competence would be associated with lower communication anxiety (PRCA) and greater communication competence (SPCC).
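
These hypotheses amount to the signs of two Pearson correlations. A toy check with simulated totals (all names and numbers invented) might look like:

```python
# Pearson correlations of OPES totals with PRCA and SPCC totals (simulated).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
opes = rng.normal(55, 10, 650)                    # simulated OPES total scores
prca = 120 - 0.5 * opes + rng.normal(0, 8, 650)   # built to correlate negatively
spcc = 400 + 4.0 * opes + rng.normal(0, 80, 650)  # built to correlate positively

print(pearsonr(opes, prca))                       # expect r < 0, as hypothesized
print(pearsonr(opes, spcc))                       # expect r > 0, as hypothesized
```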

Development of the OPES: internal structure evidence

EFA was performed sequentially six times until there were no items with a factor loading < .50 or with cross-loadings; six items were deleted (Table 2). The EFA resulted in 20 items with a three-factor solution, which accounted for 64.75% of the variance of the OPES. The Cronbach’s alpha estimate for the total scale was .94, indicating the scale had sound internal reliability (Table 2). The three factors were labeled in accordance with the item content via a panel discussion and had Cronbach’s alpha values of .93, .89, and .84 for factors 1, 2, and 3, respectively.

Factor 1, Accuracy of Content, comprised 11 items and explained 30.03% of the variance. Items in Accuracy of Content evaluated agreement between the topic (theme) and content of the presentation, use of presentation aids to highlight the key points of the presentation, and adherence to time limitations. These items included statements such as: “The content of the presentation matches the theme” (item 7), “Presentation aids, such as PowerPoint and posters, highlight key points of the report” (item 14), and “The organization of the presentation is structured to provide the necessary information, while also adhering to time limitations” (item 9). Factor 2, Effective Communication, comprised five items, which explained 21.72% of the total variance. Effective Communication evaluated the attitude and expression of the presenter. Statements included “Demonstrates confidence and an appropriate level of enthusiasm” (item 22), “Uses body language in a manner that increases the audience’s interest in learning” (item 21), and “Interacts with the audience using eye contact and a question and answer session” (item 24). Factor 3, Clarity of Speech, comprised four items, which explained 13.00% of the total variance. Factor 3 evaluated the presenter’s pronunciation with statements such as “The words and phrases of the presenter are smooth and fluent” (item 19).

The factor structure of the 20 items from the EFA was examined with CFA. We sequentially removed items 1, 4, 20, 15, and 16, based on modification indices. The resultant 15-item scale had acceptable fit indices for the 3-factor model of the OPES: χ²/df = 2.851, RMSEA = .076, NNFI = .933, and CFI = .945. However, the AGFI, at .876, was below the acceptable criterion of .90. A panel discussion among the researchers determined that items 4, 15, and 16 were similar in meaning to item 14, and item 1 was similar in meaning to item 7. Therefore, the panel accepted the modified CFA model of the OPES with 15 items and 3 factors.

As illustrated in Table  3 and Fig.  1 , all standardized factor loadings exceeded the threshold of .50, and the AVE for each construct ranged from .517 to .676, indicating acceptable convergent validity. In addition, the CR was greater than .70 for the three constructs (range = .862 to .901), providing further evidence for the reliability of the instrument [ 25 ]. As shown in Table  4 , all square roots of the AVE for each construct (values in the diagonal elements) were greater than the corresponding inter-construct correlations (values below the diagonal) [ 24 , 25 ]. These findings provide further support for the validity of the OPES.

Figure 1. The standardized estimates of the CFA model for the validation sample.

Development of the OPES: relationships with other variables

Evidence of relationships with other variables was examined with correlation coefficients between the total and subscale scores of the OPES and the total and subscale scores of the PRCA and SPCC (Table 5), using data from all nursing students who participated in the study and completed all three scales (N = 650). Correlation coefficients for the total score of the OPES with total scores for the PRCA and SPCC were −.51 and .45, respectively (both p < .001). Correlation coefficients for subscale scores of the OPES with the subscale scores of the PRCA and SPCC were all significant (p < .001), supporting the validity of the scale as a self-assessment of effective communication.

The 15-item OPES was found to be a reliable and valid instrument for nursing students’ self-assessments of their performance during previous oral presentations. A strength of this study is that the initial items were developed using both a literature review and interviews with nurse educators, who were university tutors in oral presentation skills, and with nursing students at different stages of the educational process. Another strength is the multiple methods used to establish the validity and reliability of the OPES, including internal structure evidence (both EFA and CFA) and relationships with other variables [15, 26].

Similar to other oral presentation instruments, content analysis of the OPES items generated from the interviews with educators and students indicated that accuracy of the content of a presentation and effective communication were important factors for a good performance [3, 4, 5, 6, 8]. Other studies have also included self-esteem as a factor that can influence the impact of an oral presentation [3]; in the OPES, the Effective Communication subscale includes the item “Demonstrates confidence and an appropriate level of enthusiasm”, which is a quality related to self-esteem. The third domain, clarity of speech, is unique to our study.

Constructs that focus on a person’s ability to deliver accurate content are important components of evaluations of classroom speaking because they have been shown to be fundamental elements of public speaking [7]. Accuracy of content as it applies to oral presentations by nurses is important not only for communicating information in healthcare education for patients, but also for communicating with team members providing medical care in a clinical setting.

The two other factors identified in the OPES, effective communication and clarity of speech, are similar to constructs for delivery of a presentation, which include interacting with the audience through body language, eye contact, and question and answer sessions. These behaviors indicate the presenter is confident and enthusiastic, which engages and captures the attention of an audience. It seems logical that voice, pronunciation, and fluency of speech were not independent factors, because the presenter’s voice qualities are all key to effectively delivering a presentation. Clear and correct pronunciation and an appropriate tone and volume assist audiences in receiving and understanding the content more easily.

Our 15-item OPES evaluates performance based on outcomes. The original scale was composed of 26 items that were derived from qualitative interviews with nursing students and university tutors in oral presentations. These items were the result of asking about important qualities at three timepoints of a presentation: before, during, and after. However, most of the deleted items were those about the period before the presentation (items 1 to 6); two items (25 and 26) were about the period after the presentation. The final scale therefore did not reflect the qualitative interview data expressed by educators and students regarding the importance of preparing with practice and rehearsal, and the importance of peer and teacher evaluations. Other studies have suggested that preparation and self-reflection are important for a good presentation, including awareness of the audience receiving the presentation, meeting the needs of the audience, defining the purpose of the presentation, use of appropriate technology to augment information, and repeated practice to reduce anxiety [2, 5, 27]. However, these items were deleted in the scale validation stage, possibly because it is not possible to objectively evaluate how much time and effort the presenter has devoted to the oral presentation.

The deletion of item 20, “The clothing worn by the presenter is appropriate”, was also not surprising. During the interviews, educators and students expressed different opinions about the importance of clothing for a presentation. Many of the educators believed the presenter should be dressed formally; students believed the presenter should be neatly dressed. These two perspectives might reflect generational differences. However, these results are reminders that assessments should be based on a structured and objective scale, rather than on one’s personal attitudes and stereotypes about what should be important in an oral presentation.

The OPES may be useful not only for educators but also for students. The OPES could be used as a checklist to help students determine how well their presentation matches the 15 items, which could draw attention to deficiencies in their speech before the presentation is given. Once the presentation has been given, the OPES could be used as a self-evaluation form to help students make modifications to improve the next presentation. Educators could use the OPES to evaluate a performance during tutoring sessions with students, which could help identify specific areas needing improvement prior to the oral presentation. Although analysis of the scale was based on data from nursing students, additional assessments with other populations of healthcare students should be conducted to determine if the OPES is applicable for evaluating oral presentations by students in general.

Limitations

This study had several limitations. Participants were selected by non-random sampling; therefore, additional studies with nursing students from other nursing schools would strengthen the validity and reliability of the scale. In addition, the OPES was developed using empirical data rather than a theoretical framework, such as one relating anxiety and public speaking. Therefore, the validity of the OPES for use in other types of student populations, or in cultures that differ significantly from our sample population, should be established in future studies. Finally, in this study the OPES was examined as a self-assessment instrument for nursing students who rated themselves based on their perceived abilities in previous oral presentations, rather than through peer or nurse educator evaluations. Therefore, the applicability of the scale as an assessment instrument for educators providing an objective score of nursing students’ real-life oral presentations needs to be validated in future studies.

This newly developed 15-item OPES is the first reported valid self-assessment instrument for providing nursing students with feedback about whether the necessary targets for a successful oral presentation have been reached. It could therefore be adopted as a self-assessment instrument for nursing students when learning which oral presentation skills require strengthening. However, further studies are needed to determine if the OPES is a valid instrument for use by student peers or nursing educators evaluating student presentations across nursing programs.

Availability of data and materials

The datasets and materials of this study are available from the corresponding author on request.

Hadfield-Law L. Presentation skills for nurses: how to prepare more effectively. Br J Nurs. 2001;10(18):1208–11.


Longo A, Tierney C. Presentation skills for the nurse educator. J Nurses Staff Dev. 2012;28(1):16–23.

Elfering A, Grebner S. Getting used to academic public speaking: global self-esteem predicts habituation in blood pressure response to repeated thesis presentations. Appl Psychophysiol Biofeedback. 2012;37(2):109–20.

Turner K, Roberts L, Heal C, Wright L. Oral presentation as a form of summative assessment in a master’s level PGCE module: the student perspective. Assess Eval High Educ. 2013;38(6):662–73.

Liao H-A. Examining the role of collaborative learning in a public speaking course. Coll Teach. 2014;62(2):47–54.

Tsang A. Positive effects of a programme on oral presentation skills: high- and low-proficient learners' self-evaluations and perspectives. Assess Eval High Educ. 2018;43(5):760–71.

Carlson RE, Smith-Howell D. Classroom public speaking assessment: reliability and validity of selected evaluation instruments. Commun Educ. 1995;44:87–97.

Langan AM, Wheater CP, Shaw EM, Haines BJ, Cullen WR, Boyle JC, et al. Peer assessment of oral presentations: effects of student gender, university affiliation and participation in the development of assessment criteria. Assess Eval High Educ. 2005;30(1):21–34.

De Grez L, Valcke M, Roozen I. The impact of an innovative instructional intervention on the acquisition of oral presentation skills in higher education. Comput Educ. 2009;53(1):112–20.

Murillo-Zamorano LR, Montanero M. Oral presentations in higher education: a comparison of the impact of peer and teacher feedback. Assess Eval High Educ. 2018;43(1):138–50.

Polit DF, Beck CT. The content validity index: are you sure you know what’s being reported? Critique and recommendations. Res Nurs Health. 2006;29(5):489–97.

McCroskey JC. Oral communication apprehension: a summary of recent theory and research. Hum Commun Res. 1977;4(1):78–96.

Dupagne M, Stacks DW, Giroux VM. Effects of video streaming technology on public speaking Students' communication apprehension and competence. J Educ Technol Syst. 2007;35(4):479–90.

Kim JY. The effect of personality, situational factors, and communication apprehension on a blended communication course. Indian J Sci Technol. 2015;8(S1):528–34.

Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166.e7–16.

Pearson JC, Child JT, DeGreeff BL, Semlak JL, Burnett A. The influence of biological sex, self-esteem, and communication apprehension on unwillingness to communicate. Atl J Commun. 2011;19(4):216–27.

Degner RK. Prevalence of communication apprehension at a community college. Int J Interdiscip Soc Sci. 2010;5(6):183–91.


McCroskey JC. An introduction to rhetorical communication. 4th ed. Englewood Cliffs, NJ: Prentice-Hall; 1982.

Hancock AB, Stone MD, Brundage SB, Zeigler MT. Public speaking attitudes: does curriculum make a difference? J Voice. 2010;24(3):302–7.

Nunnally JC, Bernstein IH. Psychometric theory. New York: McGraw-Hill; 1994.

Hair JF, Black B, Babin B, Anderson RE, Tatham RL. Multivariate data analysis. 6th ed. Upper Saddle River, NJ: Prentice-Hall; 2006.

DeVellis RF. Scale development: theory and applications. 2nd ed. Thousand Oaks, CA: SAGE; 2003.

Bentler PM. On the fit of models to covariances and methodology to the bulletin. Psychol Bull. 1992;112(3):400–4.

Fornell C, Larcker D. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res. 1981;18:39–50.

Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis: a global perspective. 7th ed. Upper Saddle River, NJ: Pearson Prentice Hall; 2009.

Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ. 2003;37(9):830–7.

Foulkes M. Presentation skills for nurses. Nurs Stand. 2015;29(25):52–8.


Acknowledgements

The authors thank all the participants for their kind cooperation and contribution to the study.

This study was supported by grants from the Ministry of Science and Technology Taiwan (MOST 107–2511-H-255-007), Ministry of Education (PSR1090283), and the Chang Gung Medical Research Fund (CMRPF3K0021, BMRP704, BMRPA63).

Author information

Authors and affiliations

Department of Nursing, Chang Gung University of Science and Technology, Division of Pediatric Hematology and Oncology, Linkou Chang Gung Memorial Hospital, Taoyuan City, Taiwan, Republic of China

Yi-Chien Chiang

Department of Nursing, Chang Gung University of Science and Technology, Taoyuan City, Taiwan, Republic of China

Hsiang-Chun Lee & Chia-Ling Wu

Administration Center of Quality Management Department, Chang Gung Medical Foundation, Taoyuan City, Taiwan, Republic of China

Tsung-Lan Chu

Department of Nursing, Chang Gung University of Science and Technology; Administration Center of Quality Management Department, Linkou Chang Gung Memorial Hospital, No. 261, Wenhua 1st Rd., Guishan Dist., Taoyuan City 33303, Taiwan, Republic of China

Ya-Chu Hsiao


Contributions

All authors conceptualized and designed the study. Data were collected by Y-CH and H-CL. Data analysis was conducted by Y-CH and Y-CC. The first draft of the manuscript was written by Y-CH, Y-CC, and all authors contributed to subsequent revisions. All authors read and approved the final submission.

Corresponding author

Correspondence to Ya-Chu Hsiao .

Ethics declarations

Ethics approval and consent to participate

All the study methods and materials were performed in accordance with the Declaration of Helsinki. The study protocol and procedures were approved by the Chang Gung Medical Foundation institutional review board (number: 201702148B0), which protects participants’ confidentiality. All of the participants received oral and written explanations of the study and its procedures, and informed consent was obtained from all subjects.

Consent for publication

Not applicable.

Competing interests

No conflict of interest has been declared by the authors.



About this article

Cite this article

Chiang, YC., Lee, HC., Chu, TL. et al. Development and validation of the oral presentation evaluation scale (OPES) for nursing students. BMC Med Educ 22 , 318 (2022). https://doi.org/10.1186/s12909-022-03376-w


Received : 25 February 2021

Accepted : 14 April 2022

Published : 26 April 2022

DOI : https://doi.org/10.1186/s12909-022-03376-w


  • Nurse educators
  • Nursing students
  • Oral presentation
  • Scale development



How to present patient cases

  • Mary Ni Lochlainn, foundation year 2 doctor 1
  • Ibrahim Balogun, healthcare of older people/stroke medicine consultant 1
  • 1 East Kent Foundation Trust, UK

A guide on how to structure a case presentation

This article contains:

-Presenting problem

-History of presenting problem

-Medical and surgical history

-Drugs, including allergies to drugs

-Family history

-Social history

-Review of systems

-Findings on examination, including vital signs and observations

-Differential diagnosis/impression

-Investigations

-Management

Presenting patient cases is a key part of everyday clinical practice. A well delivered presentation has the potential to facilitate patient care and improve efficiency on ward rounds, and it also serves as a means of teaching and assessing clinical competence. 1

The purpose of a case presentation is to communicate your diagnostic reasoning to the listener, so that he or she has a clear picture of the patient’s condition and further management can be planned accordingly. 2 To give a high quality presentation you need to take a thorough history. Consultants make decisions about patient care based on information presented to them by junior members of the team, so the importance of accurately presenting your patient cannot be overemphasised.

As a medical student, you are likely to be asked to present in numerous settings. A formal case presentation may take place at a teaching session or even at a conference or scientific meeting. These presentations are usually thorough and have an accompanying PowerPoint presentation or poster. More often, case presentations take place on the wards or over the phone and tend to be brief, using only memory or short, handwritten notes as an aid.

Everyone has their own presenting style, and the context of the presentation will determine how much detail you need to put in. You should anticipate what information your senior colleagues will need to know about the patient’s history and the care he or she has received since admission, to enable them to make further management decisions. In this article, I use a fictitious case to show how you can structure case presentations, which can be adapted to different clinical and teaching settings (box 1).

Box 1: Structure for presenting patient cases

Presenting problem

History of presenting problem

Medical and surgical history

Drugs, including allergies to drugs

Family history

Social history

Review of systems

Findings on examination, including vital signs and observations

Differential diagnosis/impression

Investigations

Management

Case: Tom Murphy

You should start with a sentence that includes the patient’s name, sex (Mr/Ms), age, and presenting symptoms. In your presentation, you may want to include the patient’s main diagnosis if known—for example, “admitted with shortness of breath on a background of COPD [chronic obstructive pulmonary disease].” You should include any additional information that might give the presentation of symptoms further context, such as the patient’s profession, ethnic origin, recent travel, or chronic conditions.

“Mr Tom Murphy is a 56 year old ex-smoker admitted with sudden onset central crushing chest pain that radiated down his left arm.”

In this section you should expand on the presenting problem. Use the SOCRATES mnemonic to help describe the pain (see box 2). If the patient has multiple problems, describe each in turn, covering one system at a time.

Box 2: SOCRATES—mnemonic for pain

Site

Onset

Character

Radiation

Associations

Time course

Exacerbating/relieving factors

Severity

“The pain started suddenly at 1 pm, when Mr Murphy was at his desk. The pain was dull in nature, and radiated down his left arm. He experienced shortness of breath and felt sweaty and clammy. His colleague phoned an ambulance. He rated the pain 9/10 in severity. In the ambulance he was given GTN [glyceryl trinitrate] spray under the tongue, which relieved the pain to 5/10. The pain lasted 30 minutes in total. No exacerbating factors were noted. Of note: Mr Murphy is an ex-smoker with a 20 pack year history.”

Some patients have multiple comorbidities, and the most life threatening conditions should be mentioned first. They can also be categorised by organ system—for example, “has a long history of cardiovascular disease, having had a stroke, two TIAs [transient ischaemic attacks], and previous ACS [acute coronary syndrome].” For some conditions it can be worth stating whether a general practitioner or a specialist manages it, as this gives an indication of its severity.

In a surgical case, colleagues will be interested in exercise tolerance and any comorbidity that could affect the patient’s fitness for surgery and anaesthesia. If the patient has had any previous surgical procedures, mention whether there were any complications or reactions to anaesthesia.

“Mr Murphy has a history of type 2 diabetes, well controlled on metformin. He also has hypertension, managed with ramipril, and gout. Of note: he has no history of ischaemic heart disease (relevant negative) (see box 3).”

Box 3: Relevant negatives

Mention any relevant negatives that will help narrow down the differential diagnosis or could be important in the management of the patient, 3 such as any risk factors you know for the condition and any associations that you are aware of. For example, if the differential diagnosis includes a condition that you know can be hereditary, a relevant negative could be the lack of a family history. If the differential diagnosis includes cardiovascular disease, mention the cardiovascular risk factors such as body mass index, smoking, and high cholesterol.

Highlight any recent changes to the patient’s drugs because these could be a factor in the presenting problem. Mention any allergies to drugs or the patient’s non-compliance to a previously prescribed drug regimen.

To link the medical history and the drugs you might comment on them together, either here or in the medical history. “Mrs Walsh’s drugs include regular azathioprine for her rheumatoid arthritis.” Or, “His regular drugs are ramipril 5 mg once a day, metformin 1 g three times a day, and allopurinol 200 mg once a day. He has no known drug allergies.”

If the family history is unrelated to the presenting problem, it is sufficient to say “no relevant family history noted.” For hereditary conditions more detail is needed.

“Mr Murphy’s father experienced a fatal myocardial infarction aged 50.”

Social history should include the patient’s occupation; their smoking, alcohol, and illicit drug status; who they live with; their relationship status; and their sexual history, baseline mobility, and travel history. In an older patient, more detail is usually required, including whether or not they have carers, how often the carers help, and if they need to use walking aids.

“He works as an accountant and is an ex-smoker, having quit five years ago, with a 20 pack year history. He drinks about 14 units of alcohol a week. He denies any illicit drug use. He lives with his wife in a two storey house and is independent in all activities of daily living.”

Do not dwell on this section. If something comes up that is relevant to the presenting problem, it should be mentioned in the history of the presenting problem rather than here.

“Systems review showed long standing occasional lower back pain, responsive to paracetamol.”

Findings on examination

Initially, it can be useful to practise presenting the full examination to make sure you don’t leave anything out, but it is rare that you would need to present all the normal findings. Instead, focus on the most important main findings and any abnormalities.

“On examination the patient was comfortable at rest, heart sounds one and two were heard with no additional murmurs, heaves, or thrills. Jugular venous pressure was not raised. No peripheral oedema was noted and calves were soft and non-tender. Chest was clear on auscultation. Abdomen was soft and non-tender and normal bowel sounds were heard. GCS [Glasgow coma scale] was 15, pupils were equal and reactive to light [PEARL], cranial nerves 1-12 were intact, and he was moving all four limbs. Observations showed an early warning score of 1 for a tachycardia of 105 beats/ min. Blood pressure was 150/90 mm Hg, respiratory rate 18 breaths/min, saturations were 98% on room air, and he was apyrexial with a temperature of 36.8 ºC.”

Differential diagnoses

Mentioning one or two of the most likely diagnoses is sufficient. A useful phrase you can use is, “I would like to rule out,” especially when you suspect a more serious cause is in the differential diagnosis. “History and examination were in keeping with diverticular disease; however, I would like to rule out colorectal cancer in this patient.”

Remember common things are common, so try not to mention rare conditions first. Sometimes it is acceptable to report investigations you would do first, and then base your differential diagnosis on what the history and investigation findings tell you.

“My impression is acute coronary syndrome. The differential diagnosis includes other cardiovascular causes such as acute pericarditis, myocarditis, aortic stenosis, aortic dissection, and pulmonary embolism. Possible respiratory causes include pneumonia or pneumothorax. Gastrointestinal causes include oesophageal spasm, oesophagitis, gastro-oesophageal reflux disease, gastritis, cholecystitis, and acute pancreatitis. I would also consider a musculoskeletal cause for the pain.”

This section can include a summary of the investigations already performed and further investigations that you would like to request. “On the basis of these differentials, I would like to carry out the following investigations: 12 lead electrocardiography and blood tests, including full blood count, urea and electrolytes, clotting screen, troponin levels, lipid profile, and glycated haemoglobin levels. I would also book a chest radiograph and check the patient’s point of care blood glucose level.”

You should consider recommending investigations in a structured way, prioritising them by how long they take to perform, how easy they are to arrange, and how long the results take to come back. Put the quickest and easiest first: bedside tests and electrocardiography, followed by blood tests, plain radiology, then special tests. You should always be able to explain why you would like to request a test. Mention the patient’s baseline test values if they are available, especially if the patient has a chronic condition—for example, give the patient’s creatinine levels if he or she has chronic kidney disease. This shows the change over time and indicates the severity of the patient’s current condition.

“To further investigate these differentials, 12 lead electrocardiography was carried out, which showed ST segment depression in the anterior leads. Results of laboratory tests showed an initial troponin level of 85 µg/L, which increased to 1250 µg/L when repeated at six hours. Blood test results showed raised total cholesterol at 7.6 mmol/L and nil else. A chest radiograph showed clear lung fields. Blood glucose level was 6.3 mmol/L; a glycated haemoglobin test result is pending.”

Dependent on the case, you may need to describe the management plan so far or what further management you would recommend. “My management plan for this patient includes ACS [acute coronary syndrome] protocol, echocardiography, cardiology review, and treatment with high dose statins.” If you are unsure what the management should be, you should say that you would discuss further with senior colleagues and the patient. At this point, check to see if there is a treatment escalation plan or a “do not attempt to resuscitate” order in place.

“Mr Murphy was given ACS protocol in the emergency department. An echocardiogram has been requested and he has been discussed with cardiology, who are going to come and see him. He has also been started on atorvastatin 80 mg nightly. Mr Murphy and his family are happy with this plan.”

The summary can be a concise recap of what you have presented beforehand or it can sometimes form a standalone presentation. Pick out salient points, such as positive findings—but also draw conclusions from what you highlight. Finish with a brief synopsis of the current situation (“currently pain free”) and next step (“awaiting cardiology review”). Do not trail off at the end, and state the diagnosis if you are confident you know what it is. If you are not sure what the diagnosis is then communicate this uncertainty and do not pretend to be more confident than you are. When possible, you should include the patient’s thoughts about the diagnosis, how they are feeling generally, and if they are happy with the management plan.

“In summary, Mr Murphy is a 56 year old man admitted with central crushing chest pain, radiating down his left arm, of 30 minutes’ duration. His cardiac risk factors include a 20 pack year smoking history, positive family history, type 2 diabetes, and hypertension. Examination was normal other than tachycardia. However, 12 lead electrocardiography showed ST segment depression in the anterior leads and a troponin rise from 85 to 1250 µg/L. Acute coronary syndrome protocol was initiated and a diagnosis of NSTEMI [non-ST elevation myocardial infarction] was made. Mr Murphy is currently pain free and awaiting cardiology review.”

Originally published as: Student BMJ 2017;25:i4406

Competing interests: None declared.

Provenance and peer review: Not commissioned; externally peer reviewed

  • 1. Green EH, Durning SJ, DeCherrie L, Fagan MJ, Sharpe B, Hershman W. Expectations for oral case presentations for clinical clerks: opinions of internal medicine clerkship directors. J Gen Intern Med 2009;24:370-3. doi:10.1007/s11606-008-0900-x pmid:19139965.
  • 2. Olaitan A, Okunade O, Corne J. How to present clinical cases. Student BMJ 2010;18:c1539.
  • 3. Gaillard F. The secret art of relevant negatives. Radiopaedia 2016. http://radiopaedia.org/blog/the-secret-art-of-relevant-negatives.


30 presentation feedback examples


We’re all learning as we go. 

And that’s perfectly OK — that’s part of being human. On my own personal growth journey, I know I need to get better at public speaking and presenting. It’s one of those things that doesn’t necessarily come naturally to me. 

And I know there are plenty of people in my shoes. So when it comes to presenting in the workplace, it can be intimidating. But there’s one thing that can help people continue to get better at presentations: feedback . 

The following examples not only relate to presentations. They can also be helpful for public speaking and captivating your audience. 

You’re doing great 

  • You really have the natural ability to hand out presentation material in a very organized way! Good job!
  • Your presentations are often compelling and visually stunning. You really know how to effectively captivate the audience. Well done!
  • You often allow your colleagues to make presentations on your behalf. This is a great learning opportunity for them and they often thrive at the challenge.
  • Keeping presentations focused on key agenda items can be tough, but you’re really good at it. You effectively outline exactly what it is that you will be discussing and you make sure you keep to it. Well done!
  • You created downloadable visual presentations and bound them for the client. Excellent way to portray the company! Well done!
  • Your content was relevant and your format was visually appealing and easy to follow and understand. Great job! You’re a real designer at heart!
  • You always remain consistent with the way you present and often your presentations have the same style and layout. This is great for continuity. Well done!
  • You always remain consistent with every presentation, whether it be one on ones, small group chats, with peers, direct reports, and the company bosses. You have no problem presenting in any one of these situations. Well done!
  • You are an effective presenter both to employees and to potential clients. When controversial topics come up, you deal with them in a timely manner and you make sure these topics are fully dealt with before moving on. Well done!
  • You effectively command attention and you have no problem managing groups during the presentation.


You should think of improving 

  • You’re a great presenter in certain situations, but you struggle to present in others. Try to be more consistent when presenting so that you get one single-minded message across. This will also help you broaden your presentation skills by being able to portray one single idea or message.
  • You tend to be a little shy when making presentations. You have the self-confidence in one-on-one conversations , so you definitely have the ability to make compelling presentations. Come on! You can do it!
  • During presentations, there seems to be quite a lack of focus . I know it can be difficult to stick to the subject matter, however you need to in order for people to understand what the presentation is about and what is trying to be achieved.
  • To engage with your audience and make them attentively listen to what you have to say, you need to be able to use your voice effectively. Try to focus on certain words that require extra attention and emphasize them during your presentation.
  • Knowing your audience is critical to the success of any presentation. Learn to pick up on their body language and social cues to gauge your style and tone. Listen to what your audience has to say and adjust your presentation accordingly.


  • During presentations, it’s expected that there will be tough questions . Try to prepare at least a couple of days before the time so that you can handle these questions in an effective manner.
  • To be an effective presenter you need to be able to adjust to varying audiences and circumstances. Try learning about who will be in the room at the time of the presentation and adjust accordingly.
  • Remember not to take debate as a personal attack. You tend to lose your cool a little too often, which hinders the discussion and people feel alienated. You can disagree without conflict .
  • The only way you are going to get better at public speaking is by practicing, practicing, practicing. Learn your speech by heart and practice in front of the mirror. Eventually, you’ll become a natural and you won't be afraid of public speaking any longer.
  • Your presentations are beautiful and I have no doubt you have strong presentation software skills. However, your content tends to be a bit weak and often you lack the substance. Without important content, the presentation is empty.

Tips to improve 

  • Remember it’s always good to present about the things you are passionate about . When you speak to people about your passions they can sense it. The same goes for presentations. Identify what it is that excites you and somehow bring it into every presentation. It’ll make it easier to present, and your audience will feel the energy you portray.
  • Sometimes it can be easier to plan with the end result in mind. Try visualizing what it is you are exactly expecting your audience to come away with and develop your presentation around that.
  • Simplicity is a beautiful thing. Try to keep your presentations as simple as possible. Make them visually appealing with as few words as possible. Try interactive pictures and videos to fully immerse your audience in the presentation.
  • It’s a fine balance between winging the presentation and memorizing the presentation. If you wing it too much it may come across as if you didn't prepare. If you memorize it, the presentation may come off a bit robotic. Try to find the sweet spot, if you can.
  • When presenting, try to present in a way that is cause for curiosity . Make people interested in what you have to say to really captivate them. Have a look at some TED talks to get some tips on how you can go about doing this.
  • Remember presentations should be about quality, not quantity. Presentations that are text-heavy and go on for longer than they should bore your audience and people are less likely to remember them.
  • Try to arrive at every staff meeting on time and always be well prepared. This will ensure that meetings will go smoothly in the future.
  • Remember to respect other people's time by always arriving on time or five minutes before the presentation.
  • Remember to ask the others in the meeting for their points of view during presentations.
  • If you notice presentations are deviating off-topic, try to steer it back to the important topic being discussed.

Presentation feedback can be intimidating. It’s likely the presenter has spent a good deal of time and energy on creating the presentation.

As an audience member, you can hone in on a few aspects of the presentation to help frame your feedback. If it's an oral presentation, you should also consider audience attention and visual aids.

It’s important to keep in mind three key aspects of the presentation when giving feedback. 


Communication

  • Were the key messages clear? 
  • Was the speaker clear and concise in their language?
  • Did the presenter clearly communicate the key objectives? 
  • Did the presenter give the audience clear takeaways? 
  • How well did the presenter’s voice carry in the presentation space? 

Delivery 

  • Was the presentation engaging? 
  • How well did the presenter capture their audience? 
  • Did the presenter engage employees in fun or innovative ways? 
  • How interactive was the presentation? 
  • How approachable did the presenter appear? 
  • Was the presentation accessible to all? 

Body language and presence 

  • How did the presenter carry themselves? 
  • Did the presenter make eye contact with the audience? 
  • How confident did the presenter appear based on nonverbal communication? 
  • Were there any nonverbal distractions to the presentation? (e.g. too many hand gestures, facial expressions)

There are plenty of benefits of feedback . But giving effective feedback isn’t an easy task. Here are some tips for giving effective feedback. 

1. Prepare what you’d like to say 

I’m willing to bet we’ve all felt like we’ve put our foot in our mouth at one point or another. Knee-jerk, emotional reactions are rarely helpful. In fact, they can do quite the opposite of help. 

Make sure you prepare thoughtfully. Think through what feedback would be most impactful and helpful for the recipient. How will you word certain phrases? What’s most important to communicate? What feedback isn’t helpful to the recipient? 

You can always do practice runs with your coach. Your coach will serve as a guide and consultant. You can practice how you’ll give feedback and get feedback … on your feedback. Sounds like a big loop, but it can be immensely helpful. 

2. Be direct and clear (but lead with empathy) 

Have you ever received feedback from someone where you’re not quite sure what they’re trying to say? Me, too. 

I’ve been in roundabout conversations where I walk away even more confused than I was before. This is where clear, direct, and concise communication comes into play. 

Be clear and direct in your message. But still, lead with empathy and kindness . Feedback doesn’t need to be harsh or cruel. If it’s coming from a place of care, the recipient should feel that care from you. 

3. Create dialogue (and listen carefully) 

Feedback is never a one-way street. Without the opportunity for dialogue, you’re already shutting down and not listening to the other person. Make sure you’re creating space for dialogue and active listening . Invite questions — or, even better, feedback. You should make the person feel safe, secure, and trusted . You should also make sure the person feels heard and valued. 

Your point of view is just that: it's one perspective. Invite team members to share their perspectives, including positive feedback . 

You might also offer the recipient the opportunity for self-evaluation . By doing a self-evaluation, you can reflect on things like communication skills and confidence. They might come to some of the same important points you did — all on their own.

Now, let’s go practice that feedback 

We're all learners in life.

It's OK to not be perfect . In fact, we shouldn't be. We're perfectly imperfect human beings, constantly learning , evolving, and bettering ourselves. 

The same goes for tough things like presentations. You might be working on perfecting your students' presentation. Or you might want to get better at capturing your audience's attention. No matter what, feedback is critical to that learning journey . 

Even a good presentation has the opportunity for improvement . Don't forget the role a coach can play in your feedback journey.

Your coach will be able to provide a unique point of view to help you better communicate key points. Your coach can also help with things like performance reviews , presentation evaluations, and even how to communicate with others.


Madeline Miles

Madeline is a writer, communicator, and storyteller who is passionate about using words to help drive positive change. She holds a bachelor's in English Creative Writing and Communication Studies and lives in Denver, Colorado. In her spare time, she's usually somewhere outside (preferably in the mountains) — and enjoys poetry and fiction.


Giving and Receiving Feedback

Feedback is an essential part of every developmental and performance journey. Ongoing feedback is part of a healthy and transparent organisational environment. Well delivered, it can provide opportunities to build skills, improve communications, enhance relationships, and improve patient safety. Yet, for many, giving and receiving feedback can prove challenging. What if we cause offence? What might the reaction be? This course examines the key themes and benefits of creating a feedback culture within your Trust or ICS to genuinely improve teamworking and the patient experience.

Who is it for?

Anyone in the organisation who wants to develop their skills and practice. Team managers who manage change and inter-disciplinary relationships.

Learning Objectives

By the end of this course, delegates will be able to:

  • Understand the benefits of feedback in an individual and team setting
  • Provide informal and formal feedback to colleagues in a 360-degree manner
  • Manage emotion and conflict when giving and receiving feedback
  • Plan and structure feedback conversations to attain better outcomes and maintain relationships

Learning Content:

  • Feedback as part of the developmental cycle
  • Understanding the difference between submissive, aggressive, and assertive behaviours
  • Managing emotion – dealing with the facts
  • Delivering feedback using the SNIPP model
  • Facilitation and questioning techniques
  • Managing conflict
  • Developing objectives and action planning
  • Practice sessions

Indicative Duration

Get in touch

Contact our Academy team today for a consultation to discuss your specific education and training needs.

Contact Academy team


Presentation Feedback

More templates like this

Peer Evaluation Form Template

A Peer Evaluation form is a form template designed to streamline the process of collecting feedback and evaluations from peers in the workplace. With this peer feedback form, Human Resources departments can eliminate paperwork and gather all evaluations online. The form includes questions about feedback for success in the job, the person's strengths and weaknesses, and their ability to collaborate with other team members. This form is essential for HR professionals looking to gather comprehensive feedback on employees' performance and foster a culture of continuous improvement within the organization.

Jotform, a user-friendly drag-and-drop online form builder, provides a seamless experience for creating and customizing the Peer Evaluation form. With Jotform's extensive field options and widgets, HR professionals can easily tailor the form to their specific requirements. Additionally, Jotform offers integration capabilities with popular apps and services like Google Drive, Salesforce, and Dropbox, allowing for seamless data transfer and automation. The platform also provides the Jotform Sign feature, which enables users to collect electronic signatures on forms and documents, ensuring enhanced security and compliance. With Jotform's ease of use, ease of customization, and ease of collecting e-signatures, HR professionals can streamline their evaluation processes and make more informed decisions based on comprehensive feedback.

Employee Peer Review Template

An employee peer review lets employees evaluate their coworkers’ performance and behavior in the workplace. Use our free, online Employee Peer Review Template to simplify and speed up the evaluation process at your company. Once you’ve customized it to meet your needs, publish the form on your company site or send a direct form link to employees. Your staff will be able to name the employee they’re reviewing, describe the review period, and rate their coworkers on a scale from exceptional to unsatisfactory. Using our drag-and-drop Form Builder, you’re free to change the rating scale however you like. You can view submissions from your Jotform account on any device, even offline with Jotform Mobile Forms.

Need to make some changes to our Employee Peer Review Template? With our drag-and-drop Form Builder, you can easily customize this template to perfectly align with your needs — no coding necessary! Feel free to replace the input table with questions or slider rating scales. You can even upload your company’s logo for a more professional look! And while you’re at it, sync your Employee Peer Review Form to apps like Google Drive, Dropbox, and Airtable to store evaluations in your other online accounts too. Boost employee performance with a custom Employee Peer Review Template that makes it easier for employees to evaluate their coworkers!

Student Peer Evaluation Form Template

A student peer evaluation form is a tool used by teachers to collect feedback about students from their peers. Whether you teach at a middle school, high school, or college, collect peer evaluations from your students with a free Student Peer Evaluation Form! Embed this form in your online class website, or share it with students to complete with a link to keep communication between you and your students fast and easy. You can easily customize this form to match your classroom and grading scheme.

You can even add questions, edit the layout, or choose a new background image, making your Student Peer Evaluation Form match your needs exactly. Integrate with online storage services like Google Drive or Dropbox to make the most of your data. You can even send students’ evaluations and others’ submissions to CRM platforms like Salesforce (also available on Salesforce AppExchange). Save yourself time and effort by using a free Student Peer Evaluation Form to get peer evaluations from your students.


Peer Feedback Forms


Presentation Feedback Form Template

Whether you just gave a presentation or were a viewer at a seminar, a presentation feedback form is a great way to collect constructive feedback. Customize the presentation feedback form template to include the presenter's name, commentary fields, and grading rubrics. Additionally, presentation feedback templates have access to Jotform's collection of themes, apps, and widgets to boost user engagement. Use our presentation feedback form sample as a guide for creating your own, customizing it to fit your needs.


Presentation Peer Feedback Form Template

A presentation peer feedback form is used by students to give feedback on presentations that their peers have created in the classroom. Customize and share online.


About Peer Feedback Forms

Whether you need feedback on employee performance reviews or a group project, gather the data you need with Jotform’s free online Peer Feedback Forms. Start by choosing a free template below and customizing it with no coding required — then embed the form in your website or share it with a link to start collecting feedback from your peers on any device. All responses are stored in your secure Jotform account.

Feel free to add more questions, choose new fonts and colors, or upload photos with our drag-and-drop builder. If you’d like to analyze feedback to reveal important insights, create reports instantly with Jotform Report Builder — or send feedback to other accounts automatically with 100+ readymade integrations! Switch from time-consuming paper forms or emails and collect feedback more efficiently with free Peer Feedback Forms from Jotform.



BMJ Open 2017;7(4)

What does patient feedback reveal about the NHS? A mixed methods study of comments posted to the NHS Choices online service

Gavin Brookes and Paul Baker

1 School of English, University of Nottingham, Nottingham, UK

2 Department of Linguistics and English Language, Lancaster University, Lancaster, UK

Objective

To examine the key themes of positive and negative feedback in patients’ online feedback on NHS (National Health Service) services in England and to understand the specific issues within these themes and how they drive positive and negative evaluation.

Design

Computer-assisted quantitative and qualitative studies of 228 113 comments (28 971 142 words) of online feedback posted to the NHS Choices website. Comments containing the most frequent positive and negative evaluative words are qualitatively examined to determine the key drivers of positive and negative feedback.

Participants

Contributors posting comments about the NHS between March 2013 and September 2015.

Results

Overall, NHS services were evaluated positively approximately three times more often than negatively. The four key areas of focus were: treatment, communication, interpersonal skills and system/organisation. Treatment exhibited the highest proportion of positive evaluative comments (87%), followed by communication (77%), interpersonal skills (44%) and, finally, system/organisation (41%). Qualitative analysis revealed that reference to staff interpersonal skills featured prominently, even in comments relating to treatment and system/organisational issues. Positive feedback was elicited in cases of staff being caring, compassionate and knowing patients’ names, while rudeness, apathy and not listening were frequent drivers of negative feedback.

Conclusions

Although technical competence constitutes an undoubtedly fundamental aspect of healthcare provision, staff members were much more likely to be evaluated both positively and negatively according to their interpersonal skills. Therefore, the findings reported in this study highlight the salience of such ‘soft’ skills to patients and emphasise the need for these to be focused upon and developed in staff training programmes, as well as ensuring that decisions around NHS funding do not result in demotivated and rushed staff. The findings also reveal a significant overlap between the four key themes in the ways that care is evaluated by patients.

Strengths and limitations of this study

  • This study examines the largest (228 113 comments and approximately 29 million words) and most recent (March 2013 to September 2015) collection of online patient comments on NHS services analysed to date.
  • Building on previous research, the feedback data examined relate to a wider range of areas of healthcare service provision, including: acute trusts, care organisations, care providers, clinical commissioning groups, clinics, dentists, general practitioner practices, hospitals, mental health trusts, opticians and pharmacies. Although the comments relating to these various areas of provision are not compared in the analysis, this nonetheless makes for a more widely representative dataset.
  • The use of quantitative computer-assisted linguistic techniques produces large-scale, generalisable insights into this vast dataset, while more fine-grained, qualitative analysis helps elucidate nuances and areas of difference and overlap that have been overlooked by research employing solely quantitative approaches.
  • Further data and research are required to assess possible demographic trends in the feedback given.

Introduction

Since the 1980s, patient feedback exercises have been undertaken by an increasing number of healthcare providers worldwide in order to monitor the quality of the services they provide and stimulate improvements where needed. 1 Although the reliability of patient feedback as an indicator of the technical quality of care remains a topic of debate, 2 patient feedback exercises have nonetheless become a staple way of measuring and regulating healthcare standards, 3 4 as well as ensuring public involvement in the design and improvement of healthcare provision. 5 Patient empowerment is, as Gann puts it, ‘here to stay’ (p. 150), 6 and policy makers around the world have come to recognise the potential of active patient involvement to drive service improvements, improve self-care and ultimately improve the affordability and sustainability of the services they provide. 7 In England, since 2002, patient feedback has played an increasingly significant role in the way that care quality is assessed, with all National Health Service (NHS) trusts required to collect and report the results of feedback on their services to the regulatory body, the Healthcare Commission. The importance of the insights gained from patient feedback exercises is all the more pronounced in this context, where reductions in government expenditure in areas of social provision have required healthcare providers to constantly demonstrate both the quality and financial viability of the services they provide.

Healthcare providers can obtain feedback from their patients using a range of methods which can be implemented in different settings and at differing times following an episode of treatment. Ziebland and Coulter 8 provide a list of such methods, which include (but are not limited to): face-to-face interviews, postal questionnaires, telephone interviews (using automated and live interviewers), web-based online questionnaires, diaries, questions on handheld portable devices, touch screen kiosks and bedside consoles. Moreover, feedback can be collected on-site, at the point of service contact or at patients’ homes, some days, weeks or months later. 8 The analysis reported in this study focuses on feedback given in the form of online patient comments. In recent years, increasing attention has been paid by researchers and healthcare providers to the internet as a site for patients to recount their experiences of healthcare services and to draw attention to what was good and bad about those experiences. 9–11 One such recent study was undertaken by Greaves et al, 12 who compared patients’ ratings of care posted to the NHS Choices online service with the results of non-experiential measures of service performance, such as mortality rates. The researchers reported that, overall, patients’ ratings tended to correlate with the non-experiential measures. For example, hospitals that were poorly evaluated by patients were found to have higher mortality rates. This research therefore supports the value of online forms of patient feedback for assessing care quality and targeting areas for improvement.

Given the increasing significance of patient feedback to the ways that healthcare services are designed, delivered and regulated, there is a pressing need for research that accounts for the concerns expressed by patients in their feedback. However, rather than explore the content of patient feedback itself, the majority of existing research in this area is concerned chiefly with: reviewing the suitability of instruments and methods of collecting and analysing feedback 13 ; considering the reliability of feedback data for assessing healthcare quality 14 ; reflecting on the extent to which insights gained from such exercises have actually improved service provision 15  and recommending how such insights might be translated into positive clinical outcomes in the future. 16 The comparatively few studies that have examined the content of patient feedback (even fewer of which relate to healthcare in England) have reported recurring drivers of feedback to include the technical quality of care, accessibility to care, and the interpersonal and communication skills of practitioners (with the latter two often conflated). 17–19

The present study identifies and examines the key drivers of positive and negative feedback on healthcare services given in patients’ online comments posted to the NHS Choices website between March 2013 and September 2015. Our findings build on existing patient feedback research in several important ways. At 228 113 comments and approximately 29 million words, the feedback data we analyse are considerably larger than those examined in previous research on this topic, which have mainly accounted for hundreds of comments 20–22 and at the most in the tens of thousands. 23 Moreover, the data we analyse represent feedback relating to a wider range of healthcare services than considered in previous research, which has often focused on specific areas of healthcare provision. 24 The lion’s share of research on patient feedback was conducted using data collected in the 1990s and early 2000s, while our dataset contains comments made as recently as September 2015, making this dataset the most up-to-date of its kind. More broadly, given the ever changing landscape of healthcare provision in England and the UK, the present study responds to the need for regular and up-to-date research that assesses patient attitudes towards the NHS and specifically identifies the key drivers of feedback about the particular services they access.

Methods

We studied the written feedback posted to the NHS Choices online service (http://www.nhs.uk/pages/home.aspx) between March 2013 and September 2015 (data made available to the researchers). The comments were collected from the NHS Choices service’s comprehensive RSS feed for posted comments, using a developer key provided to us for this purpose, and then converted from RSS/XML to a suitable structured corpus/database format for analysis. The data comprise a total of 228 113 comments, amounting to 28 971 142 words. The comments relate to a variety of healthcare organisations, including acute trusts, care organisations, care providers, clinical commissioning groups, clinics, dentists, general practitioner (GP) practices, hospitals, mental health trusts, opticians and pharmacies. However, the majority of the comments (27 005 715 words; 93.21%) relate to three types of service: GP practices, hospitals and dentists. A numerical breakdown of the data is provided in table 1.

Table 1: Breakdown of the NHS comments database. GP, general practitioner; NHS, National Health Service; CCGs, Clinical Commissioning Groups.
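The paper does not detail the RSS/XML-to-corpus conversion step. Purely as an illustrative sketch (not the authors' pipeline), the following Python code shows one way a feed of comments might be flattened into a one-comment-per-row database; the element names used (item, pubDate, category, description) are assumptions, not the actual NHS Choices schema.

import csv
import xml.etree.ElementTree as ET

def rss_to_rows(xml_path):
    # Parse each <item> in the feed and flatten it to one tuple per comment.
    tree = ET.parse(xml_path)
    for item in tree.getroot().iter("item"):
        yield (
            item.findtext("pubDate", default=""),      # date posted
            item.findtext("category", default=""),     # e.g. 'GP practice' (assumed field)
            item.findtext("description", default=""),  # the comment text (assumed field)
        )

def write_corpus(xml_path, csv_path):
    # Write the feed out as a CSV database, one comment per row.
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "organisation", "comment"])
        writer.writerows(rss_to_rows(xml_path))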

We examined the comments using computer-assisted methods of linguistic analysis afforded by CQPweb, 25 an online tool that offers a range of techniques for quantitatively and qualitatively analysing large collections of digitised language data. We began by identifying the 10 most frequently occurring linguistic markers of positive and negative evaluation across the comments. These words were manually identified from a list of all the words occurring in the data provided by the ‘frequency’ function of CQPweb. Evaluation is a complex linguistic phenomenon, and can be made according to a variety of parameters, including the extent to which things are important, expected, comprehensible, possible and reliable. To ensure that our analysis captured the broadest range of themes concerning the positive and negative evaluation in the comments, we focused on the most generic evaluative items, that is, words that were broadly used to describe something as either being good or bad.
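CQPweb's 'frequency' function is a feature of that tool; for readers without access to it, an equivalent word-frequency list can be produced in a few lines of Python. A minimal sketch, assuming the comments are already loaded as a list of strings:

import re
from collections import Counter

def frequency_list(comments):
    # Count lower-cased word tokens across all comments.
    counts = Counter()
    for text in comments:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

comments = ["The staff were very good and caring.",
            "Terrible service, very bad waiting times."]
for word, n in frequency_list(comments).most_common(5):
    print(word, n)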

Using CQPweb, we then generated a list of those words that tend to occur frequently alongside the positive and negative evaluative words in the comments, that is, their most frequent ‘collocates’. Collocation refers to ‘the characteristic co-occurrence patterns of words’. 26 By analysing the collocates of the evaluative words, we were able to get a sense of what tended to be the target of the evaluation in the feedback—that is, of what was evaluated as ‘good’ and ‘bad’ in the comments. These words therefore reflect the key themes of positive and negative feedback in the data.

Building on this, the next more qualitative step in our procedure involved closely reading a randomly selected sample of comments in which each theme was evaluated positively and negatively to determine the more specific reasons or ‘drivers’ of the evaluation. Each sample consisted of 100 comments and contained comments relating to all organisations represented in the data. To ensure that 100 comments provided a sufficiently representative sample for this stage in our analysis, we adopted a saturation point procedure, well established in such qualitative linguistic research, 27 of randomly selecting 30 comments, analysing the emergent patterns, proceeding to examine another 30 randomly selected comments and continuing the process until saturation point was reached and new patterns had ceased to emerge. New patterns, or drivers, were no longer emergent by the time we analysed the 100th comment (positive and negative) for each theme, and so this sample size was deemed sufficiently large to account for the common drivers of positive and negative feedback, yet small enough to facilitate fine-grained qualitative examination.
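The saturation procedure itself can be expressed as a simple loop. In the sketch below, extract_drivers is a hypothetical stand-in for the manual coding of a comment into drivers; the actual coding was done by human readers and cannot be automated.

import random

def sample_to_saturation(comments, extract_drivers, batch_size=30, seed=0):
    # Examine random batches until a batch yields no new drivers.
    pool = comments[:]
    random.Random(seed).shuffle(pool)
    seen, examined = set(), []
    while pool:
        batch, pool = pool[:batch_size], pool[batch_size:]
        examined.extend(batch)
        new = {d for c in batch for d in extract_drivers(c)} - seen
        seen |= new
        if not new:  # saturation: no new patterns emerged from this batch
            break
    return examined, seen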

Quantitative findings

Table 2 displays the 10 most frequent positive and negative evaluative words used in the comments. The comparatively higher frequencies of the positive words (total: 223 439) compared with the negative words (total: 73 363) provide a quantitative indication that the patients are more likely to evaluate the services they access positively than negatively. The positive evaluation words occur, on average, across almost three times as many comments as the negative words.

Table 2: Ten most frequent positive and negative evaluative words in the comments.

CQPweb was then used to generate lists of those words occurring frequently within the three words preceding and following the positive and negative evaluation words in table 2 (ie, their collocates). The collocational span of 3 is the default value in CQPweb and is fairly standard in collocation analyses, which tend to operate with spans ranging from three to five words. 28 As we mentioned earlier, evaluation is a complex linguistic phenomenon, and the positive words below could, in some circumstances, be used to evaluate something negatively (‘not good’) and vice versa (eg, ‘isn’t bad’). Such cases comprised a tiny proportion (under 1%) of cases and were removed from the remainder of the analysis below.
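As an illustration of window-based collocation, the following sketch counts words occurring within three tokens either side of a target evaluative word. The simple check for a preceding negator is a crude stand-in for the manual removal of negated cases ('not good') described above; the NEGATORS set is illustrative.

import re
from collections import Counter

NEGATORS = {"not", "never", "hardly", "isn't", "wasn't"}

def collocates(comments, target, span=3):
    # Count words within `span` tokens either side of each occurrence of `target`.
    counts = Counter()
    for text in comments:
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok != target:
                continue
            if i > 0 and tokens[i - 1] in NEGATORS:
                continue  # skip negated uses such as 'not good'
            window = tokens[max(0, i - span):i] + tokens[i + 1:i + span + 1]
            counts.update(window)
    return counts

print(collocates(["The care was really good.", "Not good at all."], "good"))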

The 100 most frequent words occurring alongside the positive and negative words were thematically coded to reflect the most frequently evaluated areas of concern for patients giving feedback. Four areas emerged as frequent across the comments (corresponding words in brackets): (1) treatment (care, treatment, dental); (2) communication (communication, attention, listener(s), advice); (3) interpersonal skills (atmosphere, attitude(s), manner(s)) and (4) system/organisation (system, appointment, management, waiting time(s)). As the forthcoming qualitative analysis shall demonstrate, feedback concerning communication and interpersonal skills related to a mixture of medical and non-medical staff groups, with the latter including staff members such as receptionists and managers. Note that we combined waiting and time(s) together into one linguistic item, as references to time(s) by itself often appeared in statements like ‘I had a really bad time’, which were too vague to be categorised. Based on the corresponding words (in brackets), we then examined how often each concern featured alongside the positive versus negative evaluative words (figure 1).

Figure 1: Collocation of most frequent feedback themes with positive and negative evaluation words.

Of the four key themes displayed above, treatment exhibited the highest proportion of positive feedback, occurring alongside the positive evaluation words 87% of the time. Communication was also evaluated positively overall (77%), while interpersonal skills were only evaluated positively 44% of the time and system/organisational issues fared worst of all with only a 41% positive evaluation.
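In computational terms, each percentage is the share of a theme's evaluative co-occurrences that involve a positive rather than a negative word. A minimal sketch, using illustrative word subsets from the lists above and toy collocate counts (the real figures come from the full dataset):

from collections import Counter

THEMES = {
    "treatment": {"care", "treatment", "dental"},
    "communication": {"communication", "attention", "advice"},
    "interpersonal skills": {"atmosphere", "attitude", "manner"},
    "system/organisation": {"system", "appointment", "management"},
}

def positive_share(theme_words, pos_colloc, neg_colloc):
    # pos_colloc / neg_colloc map each evaluative word to a Counter of its
    # collocates (e.g. as produced by the collocation sketch above).
    # Counters return 0 for absent keys, so missing pairs contribute nothing.
    pos = sum(c[w] for c in pos_colloc.values() for w in theme_words)
    neg = sum(c[w] for c in neg_colloc.values() for w in theme_words)
    return 100 * pos / (pos + neg) if pos + neg else 0.0

# Toy counts: 87 positive vs 13 negative co-occurrences for 'care'.
pos_colloc = {"good": Counter({"care": 87})}
neg_colloc = {"bad": Counter({"care": 13})}
print(positive_share(THEMES["treatment"], pos_colloc, neg_colloc))  # 87.0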

Qualitative findings

To understand why the four key themes identified in our quantitative analysis were evaluated positively and negatively across the patients’ comments, we examined a sample of comments (n=100) in which each occurred alongside the positive and then negative evaluation words. Tables 3–10 report the reasons each theme was positively and negatively evaluated in our sample. This section deals with each theme in turn, starting with the theme that fared best in the patients’ comments (treatment), and concluding with the theme that fared worst (system/organisation).

Reasons treatment was positively evaluated

Reasons treatment was negatively evaluated

Reasons communication was positively evaluated

Reasons communication was negatively evaluated

Reasons interpersonal skills were positively evaluated

Reasons interpersonal skills were negatively evaluated

Reasons system and organisational issues were positively evaluated

Reasons system and organisational issues were negatively evaluated

Communication

Interpersonal skills

System and organisation

Statement of principal findings

The online comments analysed in this study paint a generally positive picture of healthcare services provided by the NHS in England. Our quantitative analysis of the patient comments revealed that the most commonly used linguistic markers of positive evaluation occurred approximately three times as often as markers of negative evaluation and, on average, across approximately three times as many comments. Patients’ experiences and impressions of their treatment, communication, staff members’ interpersonal skills and system/organisational issues were identified as key to the ways that healthcare services were both positively and negatively evaluated. Of these key themes, treatment exhibited the highest proportion of positive feedback, occurring alongside the positive evaluation words 87% of the time. Communication was also evaluated positively overall (77%), while interpersonal skills were only evaluated positively 44% of the time and system/organisational issues fared worst of all with only a 41% positive evaluation.

Qualitative examination of the data was able to reveal the more precise nature of the evaluation made in the comments, as well as uncover overlaps and nuances between the key themes of positive and negative feedback. This part of the analysis suggested that as well as constituting a key theme in its own right, staff members’ interpersonal skills also emerged as a frequent driver of both positive and negative feedback in relation to how treatment and staff communication skills were evaluated. Other frequently cited drivers of positive and negative feedback included accessibility to care, patient centredness, and staff-to-staff and staff-to-patient communication. Staff technical competence was a less prominent driver of feedback, cited only in relation to the evaluation of treatment itself, and accounting for a relative minority of these comments. Our findings therefore support the notion that there is a discord between the significance that practitioners and patients place on technical competence when judging the overall quality of care. 29

Strengths and weaknesses of the study

This study has examined the largest and most up-to-date collection of patient feedback on NHS services in England in any format. Our use of quantitative computer-assisted linguistic techniques has produced large-scale, generalisable insights into this dataset. Yet at the same time, the more fine-grained, qualitative analysis was able to elucidate areas of difference and overlap that have been overlooked by research employing solely quantitative approaches in the past.

Although this dataset has proven to be a valuable resource for learning about individuals’ perspectives on the healthcare services they access, its lack of metadata regarding the demographic information of individual contributors meant that it was not possible to attribute particular types of comment or concern to any demographic group. It is also worth bearing in mind that the majority of the comments we analysed (93.21%) relate to the primary care services of GPs, hospitals and dentists. While this is unlikely to present issues respecting the general trends examined, more specific comments relating to these areas might be said to be over-represented compared with other areas of service provision, such as care providers and mental health trusts. Moreover, the data analysed in this study represent feedback given in one specific form (online comments), posted to one particular website (NHS Choices), about organisations based in one country (England). This raises issues surrounding representativeness; for those who choose to share their experiences online are not necessarily representative of the general population. It is now well documented that, compared with non-internet users, internet users tend to be younger, are more educated and are from higher income brackets. 30 Although this digital divide is estimated to have narrowed over time, 11 the perspectives of people from these so-called ‘hard-to-reach’ or ‘seldom-heard’ groups are still likely to be under-represented in our data. 8 31

Strengths and weaknesses in relation to other studies and important differences in results

Where a great deal of existing research has explored patient feedback in terms of predetermined themes, the data-driven approach adopted in the present study has allowed drivers of feedback to emerge from the comments themselves throughout the course of the analysis. As a consequence, system and organisation issues, which have remained largely unexplored in existing research, have emerged here as significant drivers of positive and negative evaluation with respect to various other aspects of care, including quality of treatment and staff communication skills.

As well as providing fresh insight into the perspectives of patients accessing contemporary healthcare services in England, the findings reported in this study also provide more substantive quantitative evidence to support the findings reported in existing studies of patient feedback that are based on comparatively smaller and older datasets. 32 33 However, our findings highlight the centrality of interpersonal skills as a key area of concern in its own right and as significant to the ways that treatment quality and staff communication are evaluated.

Meaning of the study: possible explanations and implications for clinicians and policy makers

While the majority of research into patient feedback has focused principally—in most cases exclusively—on what motivates negative feedback, the present study has elucidated the drivers of positive and negative feedback equally. Accordingly, while the reported drivers of negative feedback might flag up areas that require attention, the specific drivers of positive feedback outlined over the course of the analysis offer insight that can be used to stimulate and guide quality improvement efforts. 34–36

The quantitative section of our analysis suggested system and organisational issues to be a prominent theme in negative feedback. This often relates to issues surrounding accessibility of care, such as (emergency) appointment availability, waiting times, technical difficulties experienced with online booking systems, telephone waiting times and practice opening times. Tightening government expenditure in healthcare provision and resultant constraints on practitioner time and availability mean that these issues are unlikely to abate. Such issues arguably lie within the remit of policy makers and governing bodies. However, practitioners and other staff can improve patient feedback in this area by making an effort to ensure that appointments run on time and informing and updating patients and their families in the case of cancellations or delays.

The qualitative section of our analysis suggests that staff interpersonal skills are central to improving care, as these were shown to motivate negative (and positive) feedback in relation to a variety of areas of concern. On the surface, the findings of this study suggest that developing the interpersonal skills of staff should be a priority in staff training. The interpersonal skills of both medical and non-medical staff were evaluated positively for qualities such as being friendly and approachable, empathetic, for smiling, and not being afraid to laugh and joke with patients. Allied to this, practitioners were frequently positively evaluated for providing care that was patient centred and involved discussing treatment options with patients, as well as explaining treatment plans and listening to their concerns. Conversely, staff were negatively evaluated when they were perceived as being rude, dismissive, lazy, not listening to patients’ concerns, as well as for not smiling and appearing unhappy.

Most professional medical training (eg, in medicine and nursing) includes the development of communication skills as a key element at the undergraduate level and onwards. This kind of interpersonal training is often focused on developing skills such as  information gathering and shared decision making. Our findings suggest that, as far as patients are concerned, the interpersonal aspect of interaction is given a high premium. Such skills might be developed more effectively through greater opportunity for hands-on human engagement, rather than instruction alone, at the early stages of training. In terms of developing the interpersonal skills of other non-medical staff groups, it is likely that many staff working in administrative capacities, such as receptionists, will not have received formal training in interpersonal skills (although healthcare providers are increasingly running courses in ‘customer service’ to address this). Likewise, there is an intermediate group, which includes staff working in healthcare assistance, who may have received limited or no formal training, but who nonetheless engage with patients at very significant levels and may benefit from some form of interpersonal skills training.

However, many of these specific interpersonal or ‘soft’ skills can be linked to the concept of emotional labour, 37 which involves the regulation of emotion to create a publicly visible facial and bodily display within the workplace. These kinds of attributes might seem more like individual character traits, and so incorporating these into training poses challenges, especially as members of the public are often unimpressed by ‘scripted’ interactions which are rightly seen as inauthentic. 38 Soft skills training, while clearly helpful in some areas, may sometimes be a ‘sticking plaster’ solution to cover for wider structural problems involving overstretched systems. Other positively evaluated interpersonal aspects of care, such as involving patients in communication and decision making and ensuring that patients have sufficient time to interact with medical staff and are not made to feel as though they are ‘rushed,’ might constitute more tangible and attainable targets for skills development programmes.

In an effort to stimulate such improvements, the findings from the project from which this research derives, including the results reported in this article, have been presented to the Insight and Feedback Team at NHS England as well as the Care Quality Commission in the UK. However, in reality, translating such findings into practice is seldom straightforward. After all, although most major public sector healthcare providers collect feedback on their services at least annually, this information is not always used to improve service quality. 3 Suggested reasons for this include a lack of attention to patients’ experience at senior levels, 39 as well as feedback data not being specific to particular wards or teams. 40

As well as gesturing towards areas for improvement in healthcare provision, the findings reported here also provide insights into patient feedback more generally; insights that likely bear implications for how such feedback should be interpreted in the future. Our quantitative examination of the data revealed significant overlap between the drivers of positive and negative feedback. As an example, although treatment fared best of these four themes of feedback in terms of positive evaluation, it was only by examining the comments relating to treatment that we were able to show that 47% of the positive comments relating to this theme actually praised interpersonal aspects of care, rather than the technical competence of staff, which accounted for only 10% of these comments. It is therefore beneficial, where possible, to gauge feedback at a granular level. This is where combining quantitative and qualitative approaches can bear significant advantages for researchers, allowing us to deal with large datasets in a way that is sensitive to subtle nuances and overlaps and to  point to specific areas for praise or improvement that only become apparent at the more granular level.

Unanswered questions and future research

Although the data contain feedback relating to a variety of healthcare organisations, such distinctions have not figured in the analysis undertaken in this study. Future studies on this particular dataset should therefore take a modular, even comparative approach, to ascertain similarities or differences in the ways that care is evaluated in each of these areas. Furthermore, we did not have access to the demographic information of the comment posters. Future research should endeavour to collect and examine data that comprise this kind of demographic metadata in order to determine whether particular concerns are attributable to people living in certain locations or belonging to particular age, ethnic or sex-related groups. Future research should also assess feedback given in other, particularly non-digital, mediums in order to help account for the perspectives of patients from ‘hard-to-reach’, ‘seldom-heard’ groups who might be less likely to give feedback online. Ziebland and Coulter 8 recommend that non-traditional methods of data collection, such as pictures, stories and drama, might be used to incorporate the views of such groups in future feedback collection exercises.

Supplementary Material

Acknowledgments

We wish to thank Dick Churchill for his advice concerning the pedagogical implications of our findings.

Contributors: GB co-planned and co-conducted the research, and took charge of writing the paper. He is responsible for the content as guarantor. PB co-planned and co-conducted the research and contributed to the writing of the paper. Both authors had access to the data.

Funding: Economic and Social Research Council; grant number: ES/K002155/1.

Disclaimer: All authors have completed the ICMJE uniform disclosure form and declare: all authors had financial support from the Economic and Social Research Council for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous 3 years; no other relationships or activities that could appear to have influenced the submitted work.

Competing interests: None declared.

Ethics approval: Research ethics approval for the study was obtained from Lancaster University, Lancaster, UK.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data sharing statement: No additional data available.

Word & Excel Templates

Presentation Evaluation & Feedback Forms

Opinion matters. If you work for an institute or a company, your opinion will matter to them if they want to make progress in their field. No company suppresses its workers' views about its policies, presentations and the like. Presentations are part and parcel of every office and organization. Whether it is a worker delivering a presentation to win a tender, a speaker at an event giving a clearer insight into a topic, or a teacher conducting a lecture in class, the main aim is to give a presentation that contains all the necessary information for the audience.

What is a presentation evaluation form?

A presentation evaluation form is a form given to the audience to evaluate and assess the performance and credibility of the presentation. It ensures that the presentation was relevant to the topic of discussion and that the speaker was successful in imparting the desired knowledge to the audience.

Why do we need a presentation evaluation form?

We need this form to ensure that the presentation was successful: that the agenda of the discussion was met and the audience gained the desired knowledge. It is also essential to evaluate the performance of the speaker, since sometimes the material is not delivered properly and the goals of the presentation are not met.

It is impossible to satisfy everyone, so you might get some mixed reviews about the presentation. Do not worry about them; use them to improve.

What to ask in a presentation evaluation form?

  • Ask about the timing and location of the event in which the presentation is conducted.
  • Ask about the quality of the presentation.
  • Ask about the communication skills of the presenter.
  • Was the presentation relevant to the topic of discussion or not?
  • Was the presenter able to answer all your questions and queries?
  • Were the audio/visual slides and handouts helpful?

Here is a sample designed by our team for your convenience.

Preview & Details of Template

Presentation Evaluation & Feedback Form Template

File: Word (.doc), 2003+ and iPad | Size: 46 KB | Download

File: OpenOffice Writer (.odt) | Size: 18 KB | Download

License: ENERGY (Personal Use Only) | Distribution by Kate Elizabeth (CEO)

