• Open access
  • Published: 23 May 2016

Research impact: a narrative review

  • Trisha Greenhalgh 1,
  • James Raftery 2,
  • Steve Hanney 3 &
  • Matthew Glover 3

BMC Medicine volume 14, Article number: 78 (2016)


Impact occurs when research generates benefits (health, economic, cultural) in addition to building the academic knowledge base. Its mechanisms are complex and reflect the multiple ways in which knowledge is generated and utilised. Much progress has been made in measuring both the outcomes of research and the processes and activities through which these are achieved, though the measurement of impact is not without its critics. We review the strengths and limitations of six established approaches (Payback, Research Impact Framework, Canadian Academy of Health Sciences, monetisation, societal impact assessment, UK Research Excellence Framework) plus recently developed and largely untested ones (including metrics and electronic databases). We conclude that (1) different approaches to impact assessment are appropriate in different circumstances; (2) the most robust and sophisticated approaches are labour-intensive and not always feasible or affordable; (3) whilst most metrics tend to capture direct and proximate impacts, more indirect and diffuse elements of the research-impact link can and should be measured; and (4) research on research impact is a rapidly developing field with new methodologies on the horizon.


This paper addresses the question: ‘What is research impact and how might we measure it?’ It has two main aims: first, to introduce the general reader to a new and somewhat specialised literature on the science of research impact assessment and, second, to contribute to the development of theory and the taxonomy of method in this complex and rapidly growing field of inquiry. Summarising evidence from previous systematic and narrative reviews [ 1 – 7 ], including new reviews from our own team [ 1 , 5 ], we consider definitions of impact and its conceptual and philosophical basis before reviewing the strengths and limitations of different approaches to its assessment. We conclude by suggesting where future research on research impact might be directed.

Research impact has many definitions (Box 1). Its measurement is important considering that researchers are increasingly expected to be accountable and produce value for money, especially when their work is funded from the public purse [ 8 ]. Further, funders seek to demonstrate the benefits from their research spending [ 9 ] and there is pressure to reduce waste in research [ 10 ]. By highlighting how (and how effectively) resources are being used, impact assessment can inform strategic planning by both funding bodies and research institutions [ 1 , 11 ].

We draw in particular on a recent meta-synthesis of studies of research impact funded by the UK Health Technology Assessment Programme (HTA review) covering literature mainly published between 2005 and 2014 [ 1 ]. The HTA review was based on a systematic search of eight databases (including grey literature) plus hand searching and reference checking; it identified over 20 different impact models and frameworks and 110 studies describing their empirical applications (as single or multiple case studies), although only a handful had proven robust and flexible across a range of examples. The material presented in this summary paper, based on much more extensive work, is inevitably somewhat eclectic. Four of the six approaches we selected as ‘established’ were the ones most widely used in the 110 published empirical studies. Additionally, we included societal impact assessment, despite it being less widely used, because it has recently been the subject of a major EU-funded workstream (across a range of fields), and the UK Research Excellence Framework (REF; empirical work on which post-dated our review) because of the size and uniqueness of its dataset and its significant international interest. The approaches we selected as showing promise for the future were chosen more subjectively, on the grounds that there is currently considerable academic and/or policy interest in them.

Different approaches to assessing research impact make different assumptions about the nature of research knowledge, the purpose of research, the definition of research quality, the role of values in research and its implementation, the mechanisms by which impact is achieved, and the implications for how impact is measured (Table  1 ). Short-term, proximate impacts are easier to attribute, whereas benefits from complementary assets (such as the development of research infrastructure, political support or key partnerships [ 8 ]) may accumulate over the longer term and are more difficult – sometimes impossible – to capture fully.

Knowledge is intertwined with politics and persuasion. If stakeholders agree on what the problem is and what a solution would look like, the research-impact link will tend to turn on the strength of research evidence in favour of each potential decision option, as depicted in column 2 of Table  1 [ 12 ]. However, in many fields – for example, public policymaking, social sciences, applied public health and the study of how knowledge is distributed and negotiated in multi-stakeholder collaborations – the links between research and impact are complex, indirect and hard to attribute (for an example, see Kogan and Henkel’s rich ethnographic study of the Rothschild experiment in the 1970s, which sought – and failed – to rationalize the links between research and policy [ 13 ]). In policymaking, research evidence is rather more often used conceptually (for general enlightenment) or symbolically (to justify a chosen course of action) than instrumentally (feeding directly into a particular policy decision) [ 12 , 14 ], as shown empirically by Amara et al.’s large quantitative survey of how US government agencies drew on university research [ 15 ]. Social science research is more likely to illuminate the complexity of a phenomenon than produce a simple, ‘implementable’ solution that can be driven into practice by incorporation into a guideline or protocol [ 16 , 17 ], as was shown by Dopson and Fitzgerald’s detailed ethnographic case studies of the implementation of evidence-based healthcare in healthcare organisations [ 18 ]. In such situations, the research-impact relationship may be productively explored using approaches that emphasise the fluidity of knowledge and the multiple ways in which it may be generated, assigned more or less credibility and value, and utilised (columns 3 to 6 in Table  1 ) [ 12 , 19 ].

Many approaches to assessing research impact combine a logic model (to depict input-activities-output-impact links) with a ‘case study’ description to capture the often complex processes and interactions through which knowledge is produced (perhaps collaboratively and/or with end-user input to study design), interpreted and shared (for example, through engagement activities, audience targeting and the use of champions, boundary spanners and knowledge brokers [ 20 – 24 ]). A nuanced narrative may be essential to depict the non-linear links between upstream research and distal outcomes and/or help explain why research findings were not taken up and implemented despite investment in knowledge translation efforts [ 4 , 6 ].

Below, we describe six approaches that have proved robust and useful for measuring research impact and some additional ones introduced more recently. Table  2 lists examples of applications of the main approaches reviewed in this paper.

Established approaches to measuring research impact

The Payback Framework

Developed by Buxton and Hanney in 1996 [ 25 ], the Payback Framework (Fig.  1 ) remains the most widely used approach. It was used by 27 of the 110 empirical application studies in the recent HTA review [ 1 ]. Despite its name, it does not measure impact in monetary terms. It consists of two elements: a logic model of the seven stages of research from conceptualisation to impact, and five categories to classify the paybacks – knowledge (e.g. academic publications), benefits to future research (e.g. training new researchers), benefits to policy (e.g. information base for clinical policies), benefits to health and the health system (including cost savings and greater equity), and broader economic benefits (e.g. commercial spin-outs). Two interfaces for interaction between researchers and potential users of research (‘project specification, selection and commissioning’ and ‘dissemination’) and various feedback loops connecting the stages are seen as crucial.

The Payback Framework developed by Buxton and Hanney (reproduced under Creative Commons Licence from Hanney et al [ 70 ])

The elements and categories in the Payback Framework were designed to capture the diverse ways in which impact may arise, notably the bidirectional interactions between researchers and users at all stages in the research process from agenda setting to dissemination and implementation. The Payback Framework encourages an assessment of the knowledge base at the time a piece of research is commissioned – data that might help with issues of attribution (did research A cause impact B?) and/or reveal a counterfactual (what other work was occurring in the relevant field at the time?).

Applying the Payback Framework through case studies is labour intensive: researcher interviews are combined with document analysis and verification of claimed impacts to prepare a detailed case study containing both qualitative and quantitative information. Not all research groups or funders will be sufficiently well resourced to produce this level of detail for every project – nor is it always necessary to do so. Some authors have adapted the Payback Framework methodology to reduce the workload of impact assessment (for example, a recent European Commission evaluation populated the categories mainly by analysis of published documents [ 26 ]); nevertheless, it is not known how or to what extent such changes would compromise the data. Impacts may be short or long term [ 27 ], so (as with any approach) the time window covered by data collection will be critical.

Another potential limitation of the Payback Framework is that it is generally project-focused (commencing with a particular funded study) and is therefore less able to explore the impact of the sum total of activities of a research group that attracted funding from a number of sources. As Meagher et al. concluded in their study of ESRC-funded responsive mode psychology projects, “ In most cases it was extremely difficult to attribute with certainty a particular impact to a particular project’s research findings. It was often more feasible to attach an impact to a particular researcher’s full body of research, as it seemed to be the depth and credibility of an ongoing body of research that registered with users ” [ 28 ] (p. 170).

Similarly, the impact of programmes of research may be greater than the sum of their parts due to economic and intellectual synergies, and therefore project-focused impact models may systematically underestimate impact. Application of the Payback Framework may include supplementary approaches such as targeted stakeholder interviews to fully capture the synergies of programme-level funding [ 29 , 30 ].

Research Impact Framework

The Research Impact Framework was the second most widely used approach in the HTA review of impact assessment, accounting for seven out of 110 applications [ 1 ], but in these studies it was mostly used in combination with other frameworks (especially Payback) rather than as a stand-alone approach. It was originally developed by and for academics who were interested in measuring and monitoring the impact of their own research. As such, it is a ‘light touch’ checklist intended for use by individual researchers who seek to identify and select impacts from their work “ without requiring specialist skill in the field of research impact assessment ” [ 31 ] (p. 136). The checklist, designed to prompt reflection and discussion, includes research-related impacts, policy and practice impacts, service (including health) impacts, and an additional ‘societal impact’ category with seven sub-categories. In a pilot study, its authors found that participating researchers engaged readily with the Research Impact Framework and were able to use it to identify and reflect on different kinds of impact from their research [ 31 , 32 ]. Because of its (intentional) trade-off between comprehensiveness and practicality, it generally produces a less thorough assessment than the Payback Framework and was not designed to be used in formal impact assessment studies by third parties.

Canadian Academy of Health Sciences (CAHS) Framework

The most widely used adaptation of the Payback Framework is the CAHS Framework (Fig.  2 ), which informed six of the 110 application studies in the HTA review [ 33 ]. Its architects claim to have shaped the Payback Framework into a ‘systems approach’ that takes greater account of the various non-linear influences at play in contemporary health research systems. CAHS was constructed collaboratively by a panel of international experts (academics, policymakers, university heads), endorsed by 28 stakeholder bodies across Canada (including research funders, policymakers, professional organisations and government) and refined through public consultation [ 33 ]. The authors emphasise that the consensus-building process that generated the model was as important as the model itself.

Simplified Canadian Academy of Health Sciences (CAHS) Framework (reproduced with permission of Canadian Academy of Health Sciences [ 33 ])

CAHS encourages a careful assessment of context and the subsequent consideration of impacts under five categories: advancing knowledge (measures of research quality, activity, outreach and structure), capacity-building (developing researchers and research infrastructure), informing decision-making (decisions about health and healthcare, including public health and social care, decisions about future research investment, and decisions by public and citizens), health impacts (including health status, determinants of health – including individual risk factors and environmental and social determinants – and health system changes), and economic and social benefits (including commercialization, cultural outcomes, socioeconomic implications and public understanding of science).

For each category, a menu of metrics and measures (66 in total) is offered, and users are encouraged to draw on these flexibly to suit their circumstances. By choosing appropriate sets of indicators, CAHS can be used to track impacts within any of the four ‘pillars’ of health research (basic biomedical, applied clinical, health services and systems, and population health – or within domains that cut across these pillars) and at various levels (individual, institutional, regional, national or international).

Despite their differences, Payback and CAHS have much in common, especially in how they define impact and their proposed categories for assessing it. Whilst CAHS appears broader in scope and emphasises ‘complex system’ elements, both frameworks are designed as a pragmatic and flexible adaptation of the research-into-practice logic model. One key difference is that CAHS’ category ‘decision-making’ incorporates both policy-level decisions and the behaviour of individual clinicians, whereas Payback collects data separately on individual clinical decisions on the grounds that, if they are measurable, decisions by clinicians to change behaviour feed indirectly into the improved health category.

As with Payback (but perhaps even more so, since CAHS is in many ways more comprehensive), the application of CAHS is a complex and specialist task that is likely to be highly labour-intensive and hence prohibitively expensive in some circumstances.

Monetisation models

A significant innovation in recent years has been the development of logic models to monetise (that is, express in terms of currency) both the health and the non-health returns from research. Of the 110 empirical applications of impact assessment approaches in our HTA review, six used monetisation. Such models tend to operate at a much higher level of aggregation than Payback or CAHS – typically seeking to track all the outputs of a research council [ 34 , 35 ], national research into a broad disease area (e.g. cardiovascular disease, cancer) [ 36 – 38 ], or even an entire national medical research budget [ 39 ].

Monetisation models express returns in various ways, including as cost savings, the monetary value of net health gains via cost per quality-adjusted life year (QALY) using the willingness-to-pay or opportunity cost established by NICE or similar bodies [ 40 ], and internal rates of return (return on investment as an annual percentage yield). These models draw largely from the economic evaluation literature and differ principally in which costs and benefits (health and non-health) they include and in how they value the seemingly non-monetary components of the estimation. A national research call, for example, may fund several programmes of work in different universities and industry partnerships, subsequently producing net health gains (monetised as the value of QALYs or disability-adjusted life-years), cost savings to the health service (and to patients), commercialisation (patents, spin-outs, intellectual property), leveraging of research funds from other sources, and so on.

A major challenge in monetisation studies is that, in order to produce a quantitative measure of economic impact or rate of return, a number of simplifying assumptions must be made, especially in relation to the appropriate time lag between research and impact and what proportion of a particular benefit should be attributed to the funded research programme as opposed to all the other factors involved (e.g. social trends, emergence of new interventions, other research programmes occurring in parallel). Methods are being developed to address some of these issues [ 27 ]; however, whilst the estimates produced in monetised models are quantitative, those figures depend on subjective, qualitative judgements.
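The arithmetic behind such estimates can be sketched in a few lines. The sketch below is illustrative only: every number (QALY gain, willingness-to-pay threshold, cost savings, attribution share, 15-year time lag) is hypothetical, and real monetisation studies derive these inputs from empirical data and the methods cited above. It shows how the simplifying assumptions about attribution and lag feed directly into a headline rate of return:

```python
# Illustrative 'bottom-up' monetisation calculation. All figures are
# hypothetical, chosen only to show the mechanics of the estimate.

def monetised_return(qalys_gained, wtp_per_qaly, health_system_savings,
                     attribution_share):
    """Value of net health gain attributable to the funded research:
    (QALYs x willingness-to-pay + cost savings) x attribution share."""
    gross_benefit = qalys_gained * wtp_per_qaly + health_system_savings
    return gross_benefit * attribution_share

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection: the rate at which NPV = 0.
    Valid for an initial outlay followed by non-negative returns."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(cash_flows, mid) > 0:
            lo = mid          # NPV still positive: true rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical programme: £10m spent in year 0; the assumed 15-year time
# lag is modelled crudely as zero return until the attributable benefit
# arrives as a lump sum in year 15.
investment = 10e6
benefit = monetised_return(qalys_gained=2000, wtp_per_qaly=25000,
                           health_system_savings=5e6,
                           attribution_share=0.25)  # 25% credited to this programme
cash_flows = [-investment] + [0.0] * 14 + [benefit]
print(f"attributable benefit: £{benefit:,.0f}")
print(f"annual rate of return: {irr(cash_flows):.1%}")
```

Note how sensitive the result is to the judgement calls the text describes: halving the attribution share, or lengthening the assumed lag, changes the headline percentage substantially even though the underlying health gain is unchanged.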

A key debate in the literature on monetisation of research impact addresses the level of aggregation. First applied to major research budgets in a ‘top-down’ or macro approach [ 39 ], whereby total health gains are apportioned to a particular research investment, the principles of monetisation are increasingly being used in a ‘bottom-up’ [ 34 , 36 – 38 ] manner to collect data on specific project or programme research outputs. The benefits of new treatments and their usage in clinical practice can be built up to estimate returns from a body of research. By including only research-driven interventions and using cost-effectiveness or cost-utility data to estimate incremental benefits, this method goes some way to dealing with the issue of attribution. Some impact assessment models combine a monetisation component alongside an assessment of processes and/or non-monetised impacts, such as environmental impacts and an expanded knowledge base [ 41 ].

Societal impact assessment

Societal impact assessment, used in social sciences and public health, emphasises impacts beyond health and is built on constructivist and performative philosophical assumptions (columns 3 and 6 in Table  1 ). Some form of societal impact assessment was used in three of the 110 empirical studies identified in our HTA review. Its protagonists distinguish the social relevance of knowledge from its monetised impacts, arguing that the intrinsic value of knowledge may be less significant than the varied and changing social configurations that enable its production, transformation and use [ 42 ].

An early approach to measuring societal impact was developed by Spaapen and Sylvain in the early 1990s [ 43 ], and subsequently refined by the Royal Netherlands Academy of Arts and Science [ 44 ]. An important component is self-evaluation by a research team of the relationships, interactions and interdependencies that link it to other elements of the research ecosystem (e.g. nature and strength of links with clinicians, policymakers and industry), as well as external peer review of these links. Spaapen et al. subsequently conducted a research programme, Evaluating Research in Context (ERiC) [ 45 ], which produced the Sci-Quest model [ 46 ]. Later, they collaborated with researchers (who had led a major UK ESRC-funded study on societal impact [ 47 ]) to produce the EU-funded SIAMPI (Social Impact Assessment Methods through the study of Productive Interactions) Framework [ 48 ].

Sci-Quest was described by its authors as a ‘fourth-generation’ approach to impact assessment – the previous three generations having been characterised, respectively, by measurement (e.g. an unenhanced logic model), description (e.g. the narrative accompanying a logic model) and judgement (e.g. an assessment of whether the impact was socially useful or not). Fourth-generation impact assessment, they suggest, is fundamentally a social, political and value-oriented activity and involves reflexivity on the part of researchers to identify and evaluate their own research goals and key relationships [ 46 ].

Sci-Quest methodology requires a detailed assessment of the research programme in context and the development of bespoke metrics (both qualitative and quantitative) to assess its interactions, outputs and outcomes, which are presented in a unique Research Embedment and Performance Profile, visualised in a radar chart. SIAMPI uses a mixed-methods case study approach to map three categories of productive interaction: direct personal contacts, indirect contacts such as publications, and financial or material links. These approaches have theoretical elegance, and some detailed empirical analyses were published as part of the SIAMPI final report [ 48 ]. However, neither approach has had significant uptake elsewhere in health research – perhaps because both are complex, resource-intensive and do not allow easy comparison across projects or programmes.

Whilst extending impact to include broader societal categories is appealing, the range of societal impacts described in different publications, and the weights assigned to them, vary widely; much depends on the researchers’ own subjective ratings. A planned attempt to capture societal impact in Australia in the mid-2000s (the Research Quality Framework) was abandoned following a change of government [ 49 ].

UK Research Excellence Framework

The 2014 REF – an extensive exercise to assess UK universities’ research performance – allocated 20% of the total score to research impact [ 50 ]. Each institution submitted an impact template describing its strategy and infrastructure for achieving impact, along with several four-page impact case studies, each of which described a programme of research, claimed impacts and supporting evidence. These narratives, which were required to follow a linear and time-bound structure (describing research undertaken between 1993 and 2013, followed by a description of impact occurring between 2008 and 2013), were peer-reviewed by an intersectoral assessment panel representing academia and research users (industry and policymakers) [ 50 ]. Other countries are looking to emulate the REF model [ 51 ].

An independent evaluation of the REF impact assessment process by RAND Europe (based on focus groups, interviews, a survey and documentary analysis) concluded that panel members perceived it as fair and robust and valued the intersectoral discussions, though many felt the somewhat crude scoring system (in which most case studies were awarded 3, 3.5 or 4 points) lacked granularity [ 52 ]. The 6679 non-redacted impact case studies submitted to the REF (1594 in medically-related fields) were placed in the public domain ( http://results.ref.ac.uk ) and provide a unique dataset for further analysis.

In its review of the REF, the members of Main Panel A, which covered biomedical and health research, noted that “ International MPA [Main Panel A] members cautioned against attempts to ‘metricise’ the evaluation of the many superb and well-told narrations describing the evolution of basic discovery to health, economic and societal impact ” [ 50 ].

Approaches with potential for the future

The approaches in this section, most of which have been recently developed, have not been widely tested but may hold promise for the future.

Electronic databases

Research funders increasingly require principal investigators to provide an annual return of impact data on an online third-party database. In the UK, for example, Researchfish® (formerly MRC e-Val but now described as a ‘federated system’ with over 100 participating organisations) allows funders to connect outputs to awards, thereby allowing aggregation of all outputs and impacts from an entire funding stream. The software contains 11 categories: publications, collaborations, further funding, next destination (career progression), engagement activities, influence on policy and practice, research materials, intellectual property, development of products or interventions, impacts on the private sector, and awards and recognition.

Provided that researchers complete the annual return consistently and accurately, such databases may overcome some of the limitations of one-off, resource-intensive case study approaches. However, the design (and business model) of Researchfish® is such that the only funding streams captured are from organisations prepared to pay the membership fee, thereby potentially distorting the picture of whose input accounts for a research team’s outputs.

Researchfish® collects data both ‘top-down’ (from funders) and ‘bottom-up’ (from individual research teams). A comparable US model is the High Impacts Tracking System, a web-based software tool developed by the National Institute of Environmental Health Sciences; it imports data from existing National Institutes of Health databases of grant information as well as the texts of progress reports and notes of programme managers [ 53 ].

Whilst electronic databases are increasingly mainstreamed in national research policy (Researchfish® was used, for example, to populate the Framework on Economic Impacts described by the UK Department of Business, Innovation and Skills [ 54 ]), we were unable to identify any published independent evaluations of their use.

Realist evaluation

Realist evaluation, designed to address the question “what works for whom in what circumstances”, rests on the assumption that different research inputs and processes in different contexts may generate different outcomes (column 4 in Table  1 ) [ 55 ]. A new approach, developed to assess and summarise impact in the national evaluation of UK Collaborations for Leadership in Applied Health Research and Care, is shown in Fig.  3 [ 56 ]. Whilst considered useful in that evaluation, it was resource-intensive to apply.

Realist model of research-service links and impacts in CLAHRCs (reproduced under UK non-commercial government licence from [ 56 ])

Contribution mapping

Kok and Schuit describe the research ecosystem as a complex and unstable network of people and technologies [ 57 ]. They depict the achievement of impact as shifting and stabilising the network’s configuration by mobilising people and resources (including knowledge in material forms, such as guidelines or software) and enrolling them in changing ‘actor scenarios’. In this model, the focus is shifted from attribution to contribution – that is, on the activities and alignment efforts of different actors (linked to the research and, more distantly, unlinked to it) in the three phases of the research process (formulation, production and extension; Fig.  4 ). Contribution mapping, which can be thought of as a variation on the Dutch approaches to societal impact assessment described above, uses in-depth case study methods but differs from more mainstream approaches in its philosophical and theoretical basis (column 6 in Table  1 ), in its focus on processes and activities, and in its goal of producing an account of how the network of actors and artefacts shifts and stabilises (or not). Its empirical application to date has been limited.

Kok and Schuit’s ‘contribution mapping’ model (reproduced under Creative Commons Attribution Licence 4.0 from [ 57 ])

The SPIRIT Action Framework

The SPIRIT Action Framework, recently published by Australia’s Sax Institute [ 58 ], retains a logic model structure but places more emphasis on engagement and capacity-building activities in organisations and acknowledges the messiness of, and multiple influences on, the policy process (Fig.  5 ). Unusually, the ‘logic model’ focuses not on the research but on the receiving organisation’s need for research. We understand that it is currently being empirically tested but evaluations have not yet been published.

The SPIRIT Action Framework (reproduced under Creative Commons Attribution Licence from [ 58 ] Fig.  1 , p. 151)

Participatory research impact model

Community-based participatory research is predicated on a critical philosophy that emphasises social justice and the value of knowledge in liberating the disadvantaged from oppression (column 5 in Table  1 ) [ 59 ]. Cacari-Stone et al.’s model depicts the complex and contingent relationship between a community-campus partnership and the policymaking process [ 60 ]. Research impact is depicted in synergistic terms as progressive strengthening of the partnership and its consequent ability to influence policy decisions. The paper introducing the model includes a detailed account of its application (Table  2 ), but beyond these examples it has not yet been empirically tested.

This review of research impact assessment, which has sought to supplement rather than duplicate more extended overviews [ 1 – 7 ], prompts four main conclusions.

First, one size does not fit all. Different approaches to measuring research impact are designed for different purposes. Logic models can be very useful for tracking the impacts of a funding stream from award to quantified (and perhaps monetised) impacts. However, when exploring less directly attributable aspects of the research-impact link, narrative accounts of how these links emerged and developed are invariably needed.

Second, the perfect is the enemy of the good. Producing detailed and validated case studies, with a full assessment of context and all major claims independently verified, takes work and skill. There is a trade-off between the quality, completeness and timeliness of the data informing an impact assessment, on the one hand, and the cost and feasibility of generating such data on the other. It is no accident that some of the most theoretically elegant approaches to impact assessment have (ironically) had limited influence on the assessment of impact in practice.

Third, warnings from critics that focusing on short-term, proximal impacts (however accurately measured) could create a perverse incentive against more complex and/or politically sensitive research whose impacts are likely to be indirect and hard to measure [ 61 – 63 ] should be taken seriously. However, as the science of how to measure intervening processes and activities advances, it may be possible to use such metrics creatively to support and incentivise the development of complementary assets of various kinds.

Fourth, change is afoot. Driven by both technological advances and the mounting economic pressures on the research community, labour-intensive impact models that require manual assessment of documents, researcher interviews and a bespoke narrative may be overtaken in the future by more automated approaches. The potential for ‘big data’ linkage (for example, supplementing Researchfish® entries with bibliometrics on research citations) may be considerable, though its benefits are currently speculative (and the risks unknown).

Conclusions

As the studies presented in this review illustrate, research on research impact is a rapidly growing interdisciplinary field, spanning evidence-based medicine (via sub-fields such as knowledge translation and implementation science), health services research, economics, informatics, sociology of science and higher education studies. One priority for research in this field is an assessment of how far the newer approaches that rely on regular updating of electronic databases are able to provide the breadth of understanding about the nature of the impacts, and how they arise, that can come from the more established and more 'manual' approaches. Future research should also address the topical question of whether research impact tools could be used to help target resources and reduce waste in research (for example, to decide whether to commission a new clinical trial or a meta-analysis of existing trials); we note, for example, the efforts of the UK National Institute for Health Research in this regard [64].

Once methods for assessing research impact have been developed, it is likely that they will be used. As the range of approaches grows, the challenge is to ensure that the most appropriate one is selected for each of the many different circumstances in which (and the different purposes for which) people may seek to measure impact. It is also worth noting that existing empirical studies have been undertaken primarily in high-income countries and relate to health research systems in North America, Europe and Australasia. The extent to which these frameworks are transferable to low- or middle-income countries or to the Asian setting should be explored further.

Box 1: Definitions of research impact

Impact is the effect research has beyond academia. It consists of "…benefits to one or more areas of the economy, society, culture, public policy and services, health, production, environment, international development or quality of life, whether locally, regionally, nationally or internationally" (paragraph 62) and is "…manifested in a wide variety of ways including, but not limited to: the many types of beneficiary (individuals, organisations, communities, regions and other entities); impacts on products, processes, behaviours, policies, practices; and avoidance of harm or the waste of resources" (paragraph 63). UK 2014 Research Excellence Framework [65]
"'Health impacts' can be defined as changes in the healthy functioning of individuals (physical, psychological, and social aspects of their health), changes to health services, or changes to the broader determinants of health. 'Social impacts' are changes that are broader than simply those to health noted above, and include changes to working systems, ethical understanding of health interventions, or population interactions. 'Economic impacts' can be regarded as the benefits from commercialization, the net monetary value of improved health, and the benefits from performing health research." Canadian Academy of Health Sciences [33] (p. 51)
Academic impact is "The demonstrable contribution that excellent research makes to academic advances, across and within disciplines, including significant advances in understanding, methods, theory and application." Economic and societal impact is "fostering global economic performance, and specifically the economic competitiveness of the UK, increasing the effectiveness of public services and policy, [and] enhancing quality of life, health and creative output." Research Councils UK Pathways to Impact ( http://www.rcuk.ac.uk/innovation/impacts/ )
"A research impact is a recorded or otherwise auditable occasion of influence from academic research on another actor or organization. […] It is not the same thing as a change in outputs or activities as a result of that influence, still less a change in social outcomes. Changes in organizational outputs and social outcomes are always attributable to multiple forces and influences. Consequently, verified causal links from one author or piece of work to output changes or to social outcomes cannot realistically be made or measured in the current state of knowledge. […] However, secondary impacts from research can sometimes be traced at a much more aggregate level, and some macro-evaluations of the economic net benefits of university research are feasible. Improving our knowledge of primary impacts as occasions of influence is the best route to expanding what can be achieved here." London School of Economics Impact Handbook for Social Scientists [66]

References

1. Raftery J, Hanney S, Greenhalgh T, Glover M, Young A. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment Programme. Health Technol Assess. 2016 (in press).

2. Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: a review. Res Eval. 2013:21–32.

3. Milat AJ, Bauman AE, Redman S. A narrative review of research impact assessment models and methods. Health Res Policy Syst. 2015;13:18.

4. Grant J, Brutscher P-B, Kirk SE, Butler L, Wooding S. Capturing research impacts: a review of international practice. Documented briefing. RAND Corporation; 2010.

5. Greenhalgh T. Research impact in the community based health sciences: what would good look like? (MBA dissertation). London: UCL Institute of Education; 2015.

6. Boaz A, Fitzpatrick S, Shaw B. Assessing the impact of research on policy: a literature review. Sci Public Policy. 2009;36(4):255–70.

7. Hanney S, Buxton M, Green C, Coulson D, Raftery J. An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess. 2007;11(53).

8. Hughes A, Martin B. Enhancing impact: the value of public sector R&D. CIHE and UK-IRC; 2012. Available at www.cbr.cam.ac.uk/pdf/Impact%20Report.

9. Anonymous. Rates of return to investment in science and innovation: a report prepared for the Department of Business, Innovation and Skills. London: Frontier Economics; 2014. Accessed 17.12.14 at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/333006/bis-14-990-rates-of-return-to-investment-in-science-and-innovation-revised-final-report.pdf.

10. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383(9913):267–76.

11. Guthrie S, Wamae W, Diepeveen S, Wooding S, Grant J. Measuring research: a guide to research evaluation frameworks and tools. Arlington, VA: RAND Corporation; 2013.

12. Weiss CH. The many meanings of research utilization. Public Adm Rev. 1979:426–31.

13. Kogan M, Henkel M. Government and research: the Rothschild experiment in a government department. London: Heinemann Educational Books; 1983.

14. Smith K. Beyond evidence-based policy in public health: the interplay of ideas. Palgrave Macmillan; 2013.

15. Amara N, Ouimet M, Landry R. New evidence on instrumental, conceptual, and symbolic utilization of university research in government agencies. Sci Commun. 2004;26(1):75–106.

16. Swan J, Bresnen M, Robertson M, Newell S, Dopson S. When policy meets practice: colliding logics and the challenges of 'mode 2' initiatives in the translation of academic knowledge. Organ Stud. 2010;31(9-10):1311–40.

17. Davies H, Nutley S, Walter I. Why 'knowledge transfer' is misconceived for applied social research. J Health Serv Res Policy. 2008;13(3):188–90.

18. Dopson S, Fitzgerald L. Knowledge to action? Evidence-based health care in context. Oxford University Press; 2005.

19. Gabbay J, Le May A. Practice-based evidence for healthcare: clinical mindlines. London: Routledge; 2010.

20. Lomas J. Using 'linkage and exchange' to move research into policy at a Canadian foundation. Health Affairs (Project Hope). 2000;19(3):236–40.

21. Lomas J. The in-between world of knowledge brokering. BMJ. 2007;334(7585):129–32.

22. Bero LA, Grilli R, Grimshaw JM, Harvey E, Oxman AD, Thomson MA. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. BMJ. 1998;317(7156):465–8.

23. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14:2.

24. Long JC, Cunningham FC, Braithwaite J. Bridges, brokers and boundary spanners in collaborative networks: a systematic review. BMC Health Serv Res. 2013;13:158.

25. Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1(1):35–43.

26. Expert Panel for Health Directorate of the European Commission's Research Innovation Directorate General. Review of public health research projects financed under the Commission's framework programmes for health research. Brussels: European Commission; 2013. Downloaded from https://ec.europa.eu/research/health/pdf/review-of-public-health-research-projects-subgoup1_en.pdf on 12.8.15.

27. Hanney SR, Castle-Clarke S, Grant J, Guthrie S, Henshall C, Mestre-Ferrandiz J, et al. How long does biomedical research take? Studying the time taken between biomedical and health research and its translation into products, policy, and practice. Health Res Policy Syst. 2015;13.

28. Meagher L, Lyall C, Nutley S. Flows of knowledge, expertise and influence: a method for assessing policy and practice impacts from social science research. Res Eval. 2008;17(3):163–73.

29. Guthrie S, Bienkowska-Gibbs T, Manville C, Pollitt A, Kirtley A, Wooding S. The impact of the National Institute for Health Research Health Technology Assessment programme, 2003–13: a multimethod evaluation. 2015.

30. Klautzer L, Hanney S, Nason E, Rubin J, Grant J, Wooding S. Assessing policy and practice impacts of social science research: the application of the Payback Framework to assess the Future of Work programme. Res Eval. 2011;20(3):201–9.

31. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv Res. 2006;6:134.

32. Kuruvilla S, Mays N, Walt G. Describing the impact of health services and policy research. J Health Serv Res Policy. 2007;12 Suppl 1:23–31.

33. Canadian Academy of Health Sciences. Making an impact: a preferred framework and indicators to measure returns on investment in health research. Ottawa: CAHS; 2009. Downloadable from http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf.

34. Johnston SC, Rootenberg JD, Katrak S, Smith WS, Elkins JS. Effect of a US National Institutes of Health programme of clinical trials on public health and costs. Lancet. 2006;367(9519):1319–27.

35. Deloitte Access Economics. Returns on NHMRC funded research and development. Commissioned by the Australian Society for Medical Research. Sydney, Australia; 2011.

36. de Oliveira C, Nguyen HV, Wijeysundera HC, Wong WW, Woo G, Grootendorst P, et al. Estimating the payoffs from cardiovascular disease research in Canada: an economic analysis. CMAJ Open. 2013;1(2):E83–90.

37. Glover M, Buxton M, Guthrie S, Hanney S, Pollitt A, Grant J. Estimating the returns to UK publicly funded cancer-related research in terms of the net value of improved health outcomes. BMC Med. 2014;12:99.

38. Buxton M, Hanney S, Morris S, Sundmacher L, Mestre-Ferrandiz J, Garau M, et al. Medical research: what's it worth? Estimating the economic benefits from medical research in the UK. London: UK Evaluation Forum (Academy of Medical Sciences, MRC, Wellcome Trust); 2008.

39. Access Economics. Exceptional returns: the value of investing in health R&D in Australia. Australian Society for Medical Research; 2008.

40. National Institute for Health and Care Excellence (NICE). Guide to the methods of technology appraisal. London: NICE; 2013. Accessed at https://www.nice.org.uk/article/pmg9/resources/non-guidance-guide-to-the-methods-of-technology-appraisal-2013-pdf on 21.4.16.

41. Roback K, Dalal K, Carlsson P. Evaluation of health research: measuring costs and socioeconomic effects. Int J Preventive Med. 2011;2(4):203.

42. Bozeman B, Rogers JD. A churn model of scientific knowledge value: Internet researchers as a knowledge value collective. Res Policy. 2002;31(5):769–94.

43. Spaapen J, Sylvain C. Societal quality of research: toward a method for the assessment of the potential value of research for society. Science Policy Support Group; 1994.

44. Royal Netherlands Academy of Arts and Sciences. The societal impact of applied research: towards a quality assessment system. Amsterdam: Royal Netherlands Academy of Arts and Sciences; 2002.

45. ERiC (Evaluating Research in Context). Evaluating the societal relevance of academic research: a guide. Den Haag: Science System Assessment Department, Rathenau Instituut; 2010.

46. Spaapen J, Dijstelbloem H, Wamelink F. Evaluating research in context: a method for comprehensive assessment. 2nd ed. The Hague: COS; 2007.

47. Molas-Gallart J, Tang P, Morrow S. Assessing the non-academic impact of grant-funded socio-economic research: results from a pilot study. Res Eval. 2000;9(3):171–82.

48. Spaapen J. Social Impact Assessment Methods for research and funding instruments Through the study of Productive Interactions (SIAMPI): final report on social impacts of research. Amsterdam: Royal Netherlands Academy of Arts and Sciences; 2011.

49. Donovan C. The Australian Research Quality Framework: a live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research. N Dir Eval. 2008;118:47–60.

50. Higher Education Funding Council for England. Research Excellence Framework 2014: overview report by Main Panel A and Sub-panels 1 to 6. London: HEFCE; 2015. Accessed 1.2.15 at http://www.ref.ac.uk/media/ref/content/expanel/member/Main Panel A overview report.pdf.

51. Morgan B. Research impact: income for outcome. Nature. 2014;511(7510):S72–5.

52. Manville C, Guthrie S, Henham M-L, Garrod B, Sousa S, Kirtley A, et al. Assessing impact submissions for REF 2014: an evaluation. Cambridge: RAND Europe; 2015. Downloaded from http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/REF,impact,submissions/REF_assessing_impact_submissions.pdf on 11.8.15.

53. Drew CH, Pettibone KG, Ruben E. Greatest 'HITS': a new tool for tracking impacts at the National Institute of Environmental Health Sciences. Res Eval. 2013;22(5):307–15.

54. Medical Research Council. Economic impact report 2013–14. Swindon: MRC; 2015. Downloaded from http://www.mrc.ac.uk/documents/pdf/economic-impact-report-2013-14/ on 18.8.15.

55. Pawson R. The science of evaluation: a realist manifesto. Sage; 2013.

56. Rycroft-Malone J, Burton C, Wilkinson J, Harvey G, McCormack B, Baker R, et al. Collective action for knowledge mobilisation: a realist evaluation of the Collaborations for Leadership in Applied Health Research and Care. Health Services and Delivery Research, vol. 3, no. 44. Southampton (UK): NIHR Journals Library; 2015.

57. Kok MO, Schuit AJ. Contribution mapping: a method for mapping the contribution of research to enhance its impact. Health Res Policy Syst. 2012;10:21.

58. Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, et al. The SPIRIT Action Framework: a structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015;136-137c:147–55.

59. Jagosh J, Macaulay AC, Pluye P, Salsberg J, Bush PL, Henderson J, et al. Uncovering the benefits of participatory research: implications of a realist review for health research and practice. Milbank Q. 2012;90(2):311–46.

60. Cacari-Stone L, Wallerstein N, Garcia AP, Minkler M. The promise of community-based participatory research for health equity: a conceptual model for bridging evidence with policy. Am J Public Health. 2014:e1–9.

61. Kelly U, McNicoll I. Through a glass, darkly: measuring the social value of universities. 2011. Downloaded from http://www.campusengage.ie/sites/default/files/resources/80096 NCCPE Social Value Report (2).pdf on 11.8.15.

62. Hazelkorn E. Rankings and the reshaping of higher education: the battle for world-class excellence. Palgrave Macmillan; 2015.

63. Nowotny H. Engaging with the political imaginaries of science: near misses and future targets. Public Underst Sci. 2014;23(1):16–20.

64. Anonymous. Adding value in research. London: National Institute for Health Research; 2016. Accessed 4.4.16 at http://www.nets.nihr.ac.uk/about/adding-value-in-research.

65. Higher Education Funding Council for England. 2014 REF: assessment framework and guidance on submissions. Panel A criteria. London (REF 01/2012): HEFCE; 2012.

66. LSE Public Policy Group. Maximizing the impacts of your research: a handbook for social scientists. London: LSE; 2011. http://www.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf.

67. Kwan P, Johnston J, Fung AY, Chong DS, Collins RA, Lo SV. A systematic evaluation of payback of publicly funded health and health services research in Hong Kong. BMC Health Serv Res. 2007;7:121.

68. Scott JE, Blasinsky M, Dufour M, Mandai RJ, Philogene GS. An evaluation of the Mind-Body Interactions and Health Program: assessing the impact of an NIH program using the Payback Framework. Res Eval. 2011;20(3):185–92.

69. The Madrillon Group. The Mind-Body Interactions and Health Program outcome evaluation. Final report. Bethesda, Maryland: report prepared for Office of Behavioral and Social Sciences Research, National Institutes of Health; 2011.

70. Hanney SR, Watt A, Jones TH, Metcalf L. Conducting retrospective impact analysis to inform a medical research charity's funding strategies: the case of Asthma UK. Allergy Asthma Clin Immunol. 2013;9:17.

71. Donovan C, Butler L, Butt AJ, Jones TH, Hanney SR. Evaluation of the impact of National Breast Cancer Foundation-funded research. Med J Aust. 2014;200(4):214–8.

72. Wooding S, Hanney SR, Pollitt A, Grant J, Buxton MJ. Understanding factors associated with the translation of cardiovascular research: a multinational case study approach. Implement Sci. 2014;9:47.

73. Montague S, Valentim R. Evaluation of RT&D: from 'prescriptions for justifying' to 'user-oriented guidance for learning'. Res Eval. 2010;19(4):251–61.

74. Adam P, Solans-Domènech M, Pons JM, Aymerich M, Berra S, Guillamon I, et al. Assessment of the impact of a clinical and health services research call in Catalonia. Res Eval. 2012;21(4):319–28.

75. Graham KER, Chorzempa HL, Valentine PA, Magnan J. Evaluating health research impact: development and implementation of the Alberta Innovates – Health Solutions impact framework. Res Eval. 2012;21:354–67.

76. Cohen G, Schroeder J, Newson R, King L, Rychetnik L, Milat AJ, et al. Does health intervention research have real world policy and practice impacts: testing a new impact assessment tool. Health Res Policy Syst. 2015;13:3.

77. Molas-Gallart J, Tang P. Tracing 'productive interactions' to identify social impacts: an example from the social sciences. Res Eval. 2011;20(3):219–26.

78. Hinrichs S, Grant J. A new resource for identifying and assessing the impacts of research. BMC Med. 2015;13:148.

79. Greenhalgh T, Fahy N. Research impact in the community based health sciences: an analysis of 162 case studies from the 2014 UK Research Excellence Framework. BMC Med. 2015;13:232.


Acknowledgements

This paper is largely but not entirely based on a systematic review funded by the NIHR HTA Programme, grant number 14/72/01, with additional material from TG’s dissertation from the MBA in Higher Education Management at UCL Institute of Education, supervised by Sir Peter Scott. We thank Amanda Young for project management support to the original HTA review and Alison Price for assistance with database searches.

Author information

Authors and affiliations

Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Primary Care Building, Woodstock Rd, Oxford, OX2 6GG, UK

Trisha Greenhalgh

Primary Care and Population Sciences, Faculty of Medicine, University of Southampton, Southampton General Hospital, Southampton, SO16 6YD, UK

James Raftery

Health Economics Research Group (HERG), Institute of Environment, Health and Societies, Brunel University London, UB8 3PH, UK

Steve Hanney & Matthew Glover


Corresponding author

Correspondence to Trisha Greenhalgh .

Additional information

Competing interests

TG was Deputy Chair of the 2014 Research Excellence Framework Main Panel A from 2012 to 2014, for which she received an honorarium for days worked (in common with all others on REF panels). SH received grants from various health research funding bodies to help develop and test the Payback Framework. JR is a member of the NIHR HTA Editorial Board, on paid secondment. He was principal investigator in a study funded by the NIHR HTA programme which reviewed methods for measuring the impact of the health research programmes and was director of the NIHR Evaluation, Trials and Studies Coordinating Centre to 2012. MG declares no conflict of interest.

All authors have completed the unified competing interest form at http://www.spp.pt/UserFiles/file/APP_2015/Declaracao_ICMJE_nao_editavel.pdf (available on request from the corresponding author) and declare (1) no financial support for the submitted work from anyone other than their employer; (2) no financial relationships with commercial entities that might have an interest in the submitted work; (3) no spouses, partners, or children with relationships with commercial entities that might have an interest in the submitted work; and (4) no non-financial interests that may be relevant to the submitted work.

Authors’ contributions

JR was principal investigator on the original systematic literature review and led the research and writing for the HTA report (see Acknowledgements), to which all authors contributed by bringing different areas of expertise to an interdisciplinary synthesis. TG wrote the initial draft of this paper and all co-authors contributed to its refinement. All authors have read and approved the final draft.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Greenhalgh, T., Raftery, J., Hanney, S. et al. Research impact: a narrative review. BMC Med 14, 78 (2016). https://doi.org/10.1186/s12916-016-0620-8


Received : 26 February 2016

Accepted : 27 April 2016

Published : 23 May 2016

DOI : https://doi.org/10.1186/s12916-016-0620-8


Keywords

  • Research impact
  • Knowledge translation
  • Implementation science
  • Research utilization
  • Payback Framework
  • Monetisation
  • Research accountability
  • Health gains

BMC Medicine

ISSN: 1741-7015

research impact a narrative review

Cookies on this website

We use cookies to ensure that we give you the best experience on our website. If you click 'Accept all cookies' we'll assume that you are happy to receive all cookies and you won't see this message again. If you click 'Reject all non-essential cookies' only necessary cookies providing core functionality such as security, network management, and accessibility will be enabled. Click 'Find out more' for information on how to change your cookie settings.

Nuffield Department of Primary Care Health Sciences, University of Oxford

  • Publications

Research impact: a narrative review

Greenhalgh t., raftery j., hanney s., glover m..

Impact occurs when research generates benefits (health, economic, cultural) in addition to building the academic knowledge base. Its mechanisms are complex and reflect the multiple ways in which knowledge is generated and utilised. Much progress has been made in measuring both the outcomes of research and the processes and activities through which these are achieved, though the measurement of impact is not without its critics. We review the strengths and limitations of six established approaches (Payback, Research Impact Framework, Canadian Academy of Health Sciences, monetisation, societal impact assessment, UK Research Excellence Framework) plus recently developed and largely untested ones (including metrics and electronic databases). We conclude that (1) different approaches to impact assessment are appropriate in different circumstances; (2) the most robust and sophisticated approaches are labour-intensive and not always feasible or affordable; (3) whilst most metrics tend to capture direct and proximate impacts, more indirect and diffuse elements of the research-impact link can and should be measured; and (4) research on research impact is a rapidly developing field with new methodologies on the horizon.

BioMed Central

Publication Date

research impact, knowledge translation, implementation science, monetisation, payback framework, health gains , research accountability, research utilization

Logo image

Research impact: a narrative review

Files and links (2), usage policy.

Usage details for all content viewed and downloaded in this site are shared with IRUS-UK (Institutional Repository Usage Statistics UK). Cookies are used to remember your decision. Click Accept to accept usage details sharing and the cookies.

Company Logo

Cookie Preference Center

Your preferences, strictly necessary cookies.

As described in our Corporate Privacy Notice and Cookie Policy , we use cookies (including pixels or other similar technologies) on our websites, mobile applications and related products (the “services”). The types of cookies we use are described below.​

These are cookies necessary for the services to function and are always active. They are usually only set in response to actions made by the user which amount to a request for services, such as setting privacy preferences, logging in, or filling in forms. ​

Cookie List

Research Impact: A Narrative Review

Greenhalgh, T., Raferty, J., Hanney, S., & Glover, M. (2016). Research impact: A narrative review. BMC Medicine, 14 (78), 1-16. https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-016-0620-8 Abstract Impact occurs when research generates benefits (health, economic, cultural) in addition to building the academic knowledge base. Its mechanisms are complex and reflect the multiple ways in which knowledge is generated and utilised. Much progress has been made in measuring both the outcomes of research and the processes and activities through which these are achieved, though the measurement of impact is not without its critics. We review the strengths and limitations of six established approaches (Payback, Research Impact Framework, Canadian Academy of Health Sciences, monetisation, societal impact assessment, UK Research Excellence Framework) plus recently developed and largely untested ones (including metrics and electronic databases). We conclude that (1) different approaches to impact assessment are appropriate in different circumstances; (2) the most robust and sophisticated approaches are labour-intensive and not always feasible or affordable; (3) whilst most metrics tend to capture direct and proximate impacts, more indirect and diffuse elements of the research-impact link can and should be measured; and (4) research on research impact is a rapidly developing field with new methodologies on the horizon. This article is an important review of some of the dominant frameworks for research impact assessment (RIA) and points the way to some new arrivals on the scene. Anything by Trish Greenhalgh is good so this was an easy pick for this month’s journal club. The frameworks review Payback, Research Impact Framework, Canadian Academy of Health Sciences (CAHS), monetisation, societal impact assessment and UK Research Excellence Framework. 
Context is interesting here as I had never heard about monetisation or societal impact assessment and this review is lacking the Knowledge to Action Cycle which is dominant in Canada. Bottom line: if you want to do it well it really doesn’t matter what framework you choose. It will be time consuming, require dedicated skills and, therefore, be expensive. If you don’t want to dive into the deep end of the RIA pool, the research impact framework was developed by and for academics interested in measuring themselves. “ As such, it is a ‘light touch’ checklist intended for use by individual researchers who seek to identify and select impacts from their work “without requiring specialist skill in the field of research impact assessment” . Apart from this, all others were labour intensive. A couple of things that stand out that aren’t in the abstract (which is a great summary of the take home information): • The SPRIT Action Framework (a newer model) employs a logic model (as most do), but unusually, the ‘logic model’ focuses not on the research but on the receiving organisation’s need for research . I find this compelling because impact is a function of our industry partners making products, our government partners developing policies and our community partners delivering social services. It is true that clinical research or education research in the classroom can make an impact on immediate patients or students but the opportunity for this research is to scale beyond the single clinic or classroom. To assess the impact of research you *must* engage those who are using it as well as those producing it. This is the basis of the co-produced pathway to impact which would be interesting to compare against these models ( as I have already done ). • Contribution mapping is a new approach in RIA. 
“ In this model, the focus is shifted from attribution to contribution – that is, on the activities and alignment efforts of different actors (linked to the research and, more distantly, unlinked to it) in the three phases of the research process (formulation, production and extension ”. Contribution mapping aligns with a performative paradigm (column 6 in table 1 in the article). For more on contribution analysis see a previous journal club post . Questions for brokers: 1. What framework is the basis for your research impact assessment approach? Does it focus on the producers of research or the users of research? 2. Are you a researcher (or a research administrator) using Researchfish to capture the evidence of impact? Why are you capturing the evidence of impact from a researcher during or at the end of a research study when the researcher isn’t the one making the impact (see above) and the impact hasn’t usually occurred by the end of the study? 3. Are you an experience research impact assessor? If not who are collaborating with that has these skills? Where can you go to build your skill set in research impact assessment? Research Impact Canada is producing this journal club series to make evidence on knowledge mobilization more accessible to knowledge brokers and to create on line discussion about research on knowledge mobilization. It is designed for knowledge brokers and other knowledge mobilization stakeholders. Read this open access article. Then come back to this post and join the journal club by posting your comments.





Int J Prev Med

How to Write a Systematic Review: A Narrative Review

Ali Hasanpour Dehkordi

Social Determinants of Health Research Center, Shahrekord University of Medical Sciences, Shahrekord, Iran

Elaheh Mazaheri

1 Health Information Technology Research Center, Student Research Committee, Department of Medical Library and Information Sciences, School of Management and Medical Information Sciences, Isfahan University of Medical Sciences, Isfahan, Iran

Hanan A. Ibrahim

2 Department of International Relations, College of Law, Bayan University, Erbil, Kurdistan, Iraq

Sahar Dalvand

3 MSc in Biostatistics, Health Promotion Research Center, Iran University of Medical Sciences, Tehran, Iran

Reza Ghanei Gheshlagh

4 Spiritual Health Research Center, Research Institute for Health Development, Kurdistan University of Medical Sciences, Sanandaj, Iran

In recent years, published systematic reviews in the world and in Iran have been increasing. These studies are an important resource for answering evidence-based clinical questions and assisting health policy-makers and students who want to identify evidence gaps in published research. Systematic review studies, with or without meta-analysis, synthesize all available evidence from studies focused on the same research question. In this study, the steps of a systematic review, such as designing and identifying the research question, searching for qualified published studies, extracting and synthesizing the information that pertains to the research question, and interpreting the results, are presented in detail. This will be helpful to all interested researchers.

A systematic review, as its name suggests, is a systematic way of collecting, evaluating, integrating, and presenting findings from several studies on a specific question or topic.[ 1 ] A systematic review is research that, by identifying and combining evidence, is tailored to and answers the research question, based on an assessment of all relevant studies.[ 2 , 3 ] Identifying, assessing, and interpreting available research; identifying effective and ineffective health-care interventions; providing integrated documentation to support decision-making; and identifying gaps between studies are among the most important reasons for conducting systematic reviews.[ 4 ]

In review studies, the latest scientific information about a particular topic is critically appraised, and the terms review, systematic review, and meta-analysis are often used interchangeably. A systematic review is done in one of two ways, quantitative (meta-analysis) or qualitative. In a meta-analysis, the results of two or more studies evaluating, say, a health intervention are combined to measure the effect of treatment, while in the qualitative method, the findings of other studies are combined without using statistical methods.[ 5 ]

Since 1999, various guidelines, including QUORUM, MOOSE, STROBE, CONSORT, and QUADAS, have been introduced for reporting meta-analyses, but recently the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement has gained widespread popularity.[ 6 , 7 , 8 , 9 ] The systematic review process based on the PRISMA statement includes four steps: formulating research questions and defining the eligibility criteria, identifying all relevant studies, extracting and synthesizing data, and interpreting and presenting results (answers to the research questions).[ 2 ]

Systematic Review Protocol

Systematic reviews start with a protocol. The protocol is the researcher's road map, outlining the goals, methodology, and outcomes of the research. Many journals advise authors to use the PRISMA statement when writing the protocol.[ 10 ] The PRISMA checklist includes 27 items related to the content of a systematic review and meta-analysis, covering the abstract, methods, results, discussion, and funding sources.[ 11 ] PRISMA helps authors improve their systematic review and meta-analysis reports. Reviewers and editors of medical journals acknowledge that, while PRISMA may not be used as a tool to assess methodological quality, it does help them publish a better article [ Figure 1 ].[ 12 ]


Screening process and articles selection according to the PRISMA guidelines

The main step in designing the protocol is to define the main objectives of the study and provide background information. Before starting a systematic review, it is important to check that your study is not a duplicate; therefore, it is necessary to review PROSPERO and the Cochrane Database of Systematic Reviews for published research. It can be useful to search four sources: related systematic reviews that have already been published (PubMed, Web of Science, Scopus, Cochrane), published systematic review protocols (PubMed, Web of Science, Scopus, Cochrane), systematic review protocols that have been registered but not yet published (PROSPERO, Cochrane), and finally related published articles (PubMed, Web of Science, Scopus, Cochrane). The goal is to reduce duplicate research and keep systematic reviews up to date.[ 13 ]

Research questions

Writing a research question is the first step in a systematic review and summarizes the main goal of the study.[ 14 ] The research question determines which types of studies should be included in the analysis (quantitative, qualitative, mixed methods, overviews of reviews, or other studies). Sometimes a research question may be broken down into several more detailed questions.[ 15 ] A vague question (such as: is walking helpful?) prevents the researcher from focusing on the collected studies or analyzing them appropriately.[ 16 ] On the other hand, if the research question is rigid and restrictive (e.g., is walking for 43 min 3 times a week better than walking for 38 min 4 times a week?), there may not be enough studies in this area to answer it, and the generalizability of the findings to other populations will be reduced.[ 16 , 17 ] A good question in a systematic review should follow the PICOS format: population (P), intervention (I), comparison (C), outcome (O), and setting (S).[ 18 ] Depending on the purpose of the study, the control group in clinical trials or pre-post studies can take the place of C.[ 19 ]
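As a small illustration, the PICOS components can be held in a simple structure so that each part of the question is made explicit before searching; all field values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PICOSQuestion:
    """A PICOS-style research question: population, intervention,
    comparison, outcome, and setting."""
    population: str
    intervention: str
    comparison: str
    outcome: str
    setting: str

    def as_sentence(self) -> str:
        # Render the structured question as a readable sentence.
        return (f"In {self.population}, does {self.intervention} compared with "
                f"{self.comparison} improve {self.outcome} in {self.setting}?")

q = PICOSQuestion(
    population="adults with mild hypertension",
    intervention="brisk walking 30 min/day",
    comparison="usual activity",
    outcome="systolic blood pressure",
    setting="primary care",
)
print(q.as_sentence())
```

Making each component an explicit field also makes it easy to spot a question that is too vague (an empty comparison, say) before the search begins.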

Search and identify eligible texts

After clarifying the research question and before searching the databases, it is necessary to specify the search methods, article screening, eligibility checks, checking of the references in eligible studies, data extraction, and data analysis. This helps researchers ensure that potential biases in the selection of studies are minimized.[ 14 , 17 ] The protocol should also specify details such as which published and unpublished literature will be searched, how and through which mechanisms, and what the inclusion and exclusion criteria are.[ 4 ] First, all studies are searched and collected according to predefined keywords; then the title, abstract, and full text are screened for relevance by the authors.[ 13 ] By screening articles based on their titles, researchers can quickly decide whether to retain or remove an article. If more information is needed, the abstracts are also reviewed. In the next step, the full text of the articles is reviewed to identify the relevant ones, and the reasons for excluding articles are reported.[ 20 ] Finally, it is recommended that the process of searching, selecting, and screening articles be reported as a flowchart.[ 21 ] As the volume of published research grows, finding up-to-date and relevant information becomes more difficult.[ 22 ]
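The counts that feed such a flowchart can be tallied in a few lines. The sketch below tracks records from identification through screening to inclusion; every number is purely illustrative:

```python
# Tallies for a PRISMA-style flow diagram (all counts are invented).
records = {"PubMed": 412, "Scopus": 388, "Web of Science": 305, "Cochrane": 94}

identified = sum(records.values())                   # records from all databases
duplicates_removed = 317
screened = identified - duplicates_removed           # title/abstract screening
excluded_on_title_abstract = 748
full_text_assessed = screened - excluded_on_title_abstract
# Reasons for full-text exclusion must be reported, e.g.:
full_text_excluded = {"wrong population": 41, "no control group": 28,
                      "not in English": 9}
included = full_text_assessed - sum(full_text_excluded.values())

print(f"Identified: {identified}")
print(f"After de-duplication: {screened}")
print(f"Full texts assessed: {full_text_assessed}")
print(f"Included in synthesis: {included}")
```

Keeping the tallies as code (rather than hand-counted) makes the flowchart reproducible when the search is re-run to update the review.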

Currently, there is no specific guideline as to which databases should be searched, which database is best, or how many should be searched, but overall it is advisable to search broadly. Because no database covers all health topics, it is recommended to use several databases.[ 23 ] According to A MeaSurement Tool to Assess Systematic Reviews (AMSTAR), at least two databases should be searched for a systematic review or meta-analysis, although more comprehensive and accurate results can be obtained by increasing the number of databases searched.[ 24 ] The type of database to search depends on the systematic review question. For example, for a clinical trial study, it is recommended that Cochrane, multi-regional clinical trials (mRCTs), and the International Clinical Trials Registry Platform be searched.[ 25 ]

For example, MEDLINE, a product of the National Library of Medicine in the United States of America, focuses on peer-reviewed articles on biomedical and health issues, while Embase covers the broad field of pharmacology and conference abstracts. CINAHL is a great resource for nursing and health research, and PsycINFO is a great database for psychology, psychiatry, counseling, addiction, and behavioral problems. National and regional databases can also be used to find related articles.[ 26 , 27 ] In addition, searching conference proceedings and gray literature helps address the file-drawer problem (negative studies that may not have been published).[ 26 ] If a systematic review is carried out on articles from a particular country or region, the databases of that region or country should also be investigated; for example, Iranian researchers can use national databases such as the Scientific Information Database and MagIran. A comprehensive search that identifies the maximum number of existing studies minimizes selection bias. In the search process, the available databases should be used as much as possible, since many databases overlap.[ 17 ] Searching 12 databases (PubMed, Scopus, Web of Science, EMBASE, GHL, VHL, Cochrane, Google Scholar, ClinicalTrials.gov, mRCTs, POPLINE, and SIGLE) covers all articles published in the field of medicine and health.[ 25 ] Some have suggested using reference management software for easier identification and removal of duplicate articles retrieved from several different databases.[ 20 ] At least one full search strategy should be presented in the article.[ 21 ]
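The de-duplication step that reference managers perform can be sketched as follows: match on DOI where one exists, otherwise on a normalised title. Both the matching rule and the example records are assumptions for illustration, not the behaviour of any particular tool:

```python
import re

def norm_title(title: str) -> str:
    """Normalise a title for fuzzy duplicate matching (lowercase,
    punctuation stripped, whitespace collapsed)."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep one record per DOI (preferred) or normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or norm_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Research impact: a narrative review",
     "doi": "10.1186/s12916-016-0620-8"},
    {"title": "Research Impact: A Narrative Review.",   # same study, other DB
     "doi": "10.1186/s12916-016-0620-8"},
    {"title": "An unrelated study", "doi": None},
]
print(len(deduplicate(records)))  # 2
```

In practice title normalisation needs to be more forgiving (subtitles, accents, journal-added suffixes), which is why dedicated reference managers are recommended.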

Quality assessment

The methodological quality assessment of articles is a key step in a systematic review that helps identify systematic errors (bias) in results and interpretations. In systematic review studies, unlike other review studies, a quality assessment or risk-of-bias assessment is required. Several tools are currently available to assess the quality of articles, although the overall score of these tools may not provide sufficient information on the strengths and weaknesses of the studies.[ 28 ] At least two reviewers should independently evaluate the quality of the articles; in case of disagreement, a third author should examine the article, or the two researchers should reach agreement through discussion. Some believe that the quality of studies should be assessed in a blinded fashion, with the name of the journal, title, authors, and institutions removed.[ 29 ]

There are several tools for quality assessment, such as Sacks' quality assessment (1988),[ 30 ] the overview quality assessment questionnaire (1991),[ 31 ] CASP (Critical Appraisal Skills Program),[ 32 ] and AMSTAR (2007),[ 33 ] as well as the CASP,[ 34 ] National Institute for Health and Care Excellence,[ 35 ] and Joanna Briggs Institute System for the Unified Management, Assessment and Review of Information checklists.[ 30 , 36 ] However, it is worth mentioning that there is no single tool for assessing the quality of all types of reviews; each is more applicable to some types than others. Often, the STROBE tool is used to check the quality of articles. It reviews the title and abstract (item 1), introduction (items 2 and 3), methods (items 4–12), findings (items 13–17), discussion (items 18–21), and funding (item 22). Eighteen items are used to review all articles, but four items (6, 12, 14, and 15) apply only in certain situations.[ 9 ] The quality of interventional articles is often evaluated with the Jadad tool, which consists of three sections: randomization (2 points), blinding (2 points), and accounting of patients (1 point).[ 29 ]
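The additive Jadad scoring just described (up to 2 points for randomization, up to 2 for blinding, 1 for accounting of withdrawals) can be sketched as below. This is the basic additive variant implied by the text; some versions of the scale also deduct points for inappropriate methods:

```python
def jadad_score(randomized, randomization_appropriate,
                blinded, blinding_appropriate,
                withdrawals_described):
    """Basic additive Jadad score (0-5): randomization (up to 2),
    blinding (up to 2), accounting of withdrawals/dropouts (1)."""
    score = 0
    if randomized:
        score += 1
        if randomization_appropriate:   # e.g. computer-generated sequence
            score += 1
    if blinded:
        score += 1
        if blinding_appropriate:        # e.g. identical placebo
            score += 1
    if withdrawals_described:
        score += 1
    return score

# Randomized with an appropriate method, blinded but method not described,
# withdrawals accounted for:
print(jadad_score(True, True, True, False, True))  # 4
```

Scoring like this, done independently by two reviewers, makes disagreements easy to locate item by item.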

Data extraction

At this stage, the researchers extract the necessary information from the selected articles. Elamin believes that reviewing the titles and abstracts and extracting the data is a key step in the review process, which is often carried out independently by two members of the research team, whose results are then compared.[ 37 ] This step aims to prevent selection bias, and it is recommended that the level of agreement between the two researchers (kappa coefficient) be reported at the end.[ 26 ] Although data collection forms may differ between systematic reviews, they all record information such as first author, year of publication, sample size, target community, region, and outcome. The purpose of data synthesis is to collect the findings of eligible studies, evaluate the strength of those findings, and summarize the results. In data synthesis, different analysis frameworks can be used, such as meta-ethnography, meta-analysis, or thematic synthesis.[ 38 ] Finally, after quality assessment, data analysis is conducted. The first step in this section is to provide a descriptive evaluation of each study and present the findings in tabular form; reviewing this table can determine how to combine and analyze the various studies.[ 28 ] The data synthesis approach depends on the nature of the research question and of the initial research studies.[ 39 ] After assessing bias and summarizing the data, it is decided whether the synthesis will be quantitative or qualitative. In the case of conceptual heterogeneity (systematic differences in study design, population, and interventions), the generalizability of the findings is reduced and a meta-analysis should not be performed. A meta-analysis allows estimation of the effect size, which is reported as an odds ratio, relative risk, hazard ratio, prevalence, correlation, sensitivity, specificity, or incidence, with a confidence interval.[ 26 ]

Estimation of the effect size in systematic review and meta-analysis studies varies according to the type of studies entered into the analysis. Unlike the mean, prevalence, or incidence, for the odds ratio, relative risk, and hazard ratio it is necessary to combine the logarithm of these statistics and the standard error of that logarithm [ Table 1 ].

Effect size in systematic review and meta-analysis

OR = odds ratio; RR = relative risk; RCT = randomized controlled trial; PPV = positive predictive value; NPV = negative predictive value; PLR = positive likelihood ratio; NLR = negative likelihood ratio; DOR = diagnostic odds ratio
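The log-scale pooling of ratio measures described above can be sketched with a fixed-effect, inverse-variance weighted average of log odds ratios, recovering each standard error from the reported 95% CI. The study values are invented for illustration:

```python
import math

# Per-study odds ratios with 95% CIs (illustrative numbers only).
studies = [(1.8, 1.1, 2.9), (1.4, 0.9, 2.2), (2.1, 1.3, 3.4)]

log_or, weights = [], []
for or_, lo, hi in studies:
    y = math.log(or_)                                  # work on the log scale
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)    # SE from CI width
    log_or.append(y)
    weights.append(1 / se**2)                          # inverse-variance weight

pooled_log = sum(w * y for w, y in zip(weights, log_or)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Exponentiate back to the OR scale for reporting.
print(f"Pooled OR = {math.exp(pooled_log):.2f} "
      f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f}"
      f" to {math.exp(pooled_log + 1.96 * pooled_se):.2f})")
```

Working on the log scale is what makes the weighted average valid: the sampling distribution of log OR is approximately normal, whereas the OR itself is not.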

Interpreting and presenting results (answers to research questions)

A systematic review ends with the interpretation of results. At this stage, the results of the study are summarized and conclusions are presented to improve clinical and therapeutic decision-making. A systematic review, with or without meta-analysis, provides the best available evidence in the hierarchy of evidence-based practice.[ 14 ] Using meta-analysis can provide explicit conclusions. Conceptually, meta-analysis combines the results of two or more studies of a similar specific intervention with similar outcomes. In a meta-analysis, instead of a simple average of the results of the various studies, a weighted average is reported, meaning that studies with larger sample sizes carry more weight. To combine the results of various studies, two models can be used: fixed effect and random effects. In the fixed-effect model, it is assumed that the parameter studied is constant across all studies; in the random-effects model, the measured parameter is assumed to be distributed between the studies, with each study measuring part of it. The random-effects model offers a more conservative estimate.[ 40 ]

Three types of homogeneity checks can be used: (1) the forest plot, (2) Cochran's Q test (Chi-squared), and (3) the Higgins I 2 statistic. In the forest plot, more overlap between confidence intervals indicates more homogeneity. For the Q statistic, a P value less than 0.1 indicates that heterogeneity exists and a random-effects model should be used.[ 41 ] The I 2 index takes values between 0 and 100%; values around 25%, 50%, and 75% indicate low, moderate, and high levels of heterogeneity, respectively.[ 26 , 42 ] The results of a meta-analysis are presented graphically using the forest plot, which shows the statistical weight of each study with a 95% confidence interval and the standard error of the mean.[ 40 ]
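A minimal sketch of these statistics, computing Cochran's Q, Higgins' I², and a DerSimonian-Laird random-effects pooled estimate from illustrative effects (log scale) and standard errors; with homogeneous inputs like these, Q is small, I² is zero, and the random-effects result coincides with the fixed-effect one:

```python
import math

def dersimonian_laird(effects, ses):
    """Cochran's Q, Higgins' I^2 (%), and a DerSimonian-Laird
    random-effects pooled estimate with its standard error."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    k = len(effects)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    # Between-study variance (tau^2), DerSimonian-Laird estimator.
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1 / (se**2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return q, i2, pooled, se_pooled

q, i2, pooled, se = dersimonian_laird([0.59, 0.34, 0.74], [0.25, 0.23, 0.25])
print(f"Q={q:.2f}, I2={i2:.0f}%, pooled={pooled:.2f} (SE {se:.2f})")
```

When tau² is zero the random-effects weights reduce to the fixed-effect ones, which is exactly the conservative behaviour described in the text: extra between-study variance widens the interval only when heterogeneity is present.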

The importance of meta-analyses and systematic reviews in providing evidence for clinical and policy decisions is ever-increasing. Nevertheless, they are prone to publication bias, which occurs when positive or significant results are preferred for publication.[ 43 ] Song maintains that studies reporting results in a particular direction, or strong associations, may be more likely to be published than studies that do not.[ 44 ] In addition, when searching for meta-analyses, gray literature (e.g., dissertations, conference abstracts, or book chapters) and unpublished studies may be missed. Moreover, meta-analyses based only on published studies may exaggerate estimates of effect sizes; as a result, patients may be exposed to harmful or ineffective treatments.[ 44 , 45 ] However, there are tests that can help detect expected negative results missing from a review because of publication bias.[ 46 ] Publication bias can also be reduced by searching for unpublished data.
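The text does not name the tests it alludes to; one widely used option is Egger's regression for funnel-plot asymmetry, sketched here with invented data. It regresses the standardized effect on precision, and an intercept far from zero suggests possible publication bias:

```python
def eggers_test(effects, ses):
    """Egger's regression: standardized effect (y/se) on precision (1/se),
    by ordinary least squares. A non-zero intercept suggests
    funnel-plot asymmetry."""
    x = [1 / se for se in ses]                          # precision
    y = [e / se for e, se in zip(effects, ses)]         # standardized effect
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Illustrative per-study effects (log scale) and standard errors.
effects = [0.59, 0.34, 0.74, 0.41, 0.66]
ses = [0.25, 0.23, 0.25, 0.30, 0.21]
intercept, slope = eggers_test(effects, ses)
print(f"Egger intercept: {intercept:.2f}")
```

In real use the intercept's significance is judged with a t-test (usually at P < 0.1), and the test is unreliable with very few studies; this sketch only shows the regression itself.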

Systematic reviews and meta-analyses have certain advantages; some of the most important ones are as follows: examining differences in the findings of different studies, summarizing results from various studies, increased accuracy of estimating effects, increased statistical power, overcoming problems related to small sample sizes, resolving controversies from disagreeing studies, increased generalizability of results, determining the possible need for new studies, overcoming the limitations of narrative reviews, and making new hypotheses for further research.[ 47 , 48 ]

Despite the importance of systematic reviews, authors may face numerous problems in searching, screening, and synthesizing data during this process. A systematic review requires extensive access to databases and journals, which can be costly for nonacademic researchers.[ 13 ] Also, when applying the inclusion and exclusion criteria, the reviewers' subjective judgments are inevitably involved, and the criteria may be interpreted differently by different reviewers.[ 49 ] Lee refers to some disadvantages of these studies, the most significant being: a research field cannot be summarized by one number, publication bias, heterogeneity, combining unrelated things, vulnerability to subjectivity, failing to account for all confounders, comparing variables that are not comparable, focusing only on main effects, and possible inconsistency with the results of randomized trials.[ 47 ] Different types of programs are available to perform meta-analysis. Some of the most commonly used are general statistical packages, including SAS, SPSS, R, and Stata. Using flexible commands in these programs, meta-analyses can be run easily and the results readily plotted, although several of these packages are expensive. An alternative is to use programs designed for meta-analysis, including Metawin, RevMan, and Comprehensive Meta-analysis. However, these programs may have limitations, including that they accept few data formats and provide little opportunity to adjust the graphical display of findings. Another alternative is Microsoft Excel; although it is not free software, it is available on many computers.[ 20 , 50 ]

A systematic review study is a powerful and valuable tool for answering research questions, generating new hypotheses, and identifying areas where there is a lack of tangible knowledge. A systematic review study provides an excellent opportunity for researchers to improve critical assessment and evidence synthesis skills.

Authors' contributions

All authors contributed equally to this work.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

Dietary impact on fasting and stimulated GLP-1 secretion in different metabolic conditions - a narrative review

Affiliations

  • 1 The Sahlgrenska Academy at the University of Gothenburg, Institute of Neuroscience and Physiology, Department of Psychiatry and Neurochemistry, Mölndal, Sweden.
  • 2 University of Bonn, Institute of Nutrition and Food Science, Nutrition and Microbiota, Bonn, Germany.
  • 3 University of Copenhagen, Faculty of Health and Medical Sciences, Department of Biomedical Sciences, Copenhagen, Denmark; University of Copenhagen, Faculty of Health and Medical Sciences, The Novo Nordisk Foundation Center for Basic Metabolic Research, Copenhagen, Denmark.
  • 4 University of Bonn, Institute of Nutrition and Food Science, Nutrition and Microbiota, Bonn, Germany. Electronic address: [email protected].
  • PMID: 38218319
  • DOI: 10.1016/j.ajcnut.2024.01.007

Introduction: Glucagon-like peptide 1 (GLP-1), a gastrointestinal peptide and central mediator of glucose metabolism, is secreted by L cells in the intestine in response to food intake. Postprandial secretion of GLP-1 is triggered by nutrient sensing via transporters and G protein-coupled receptors (GPCRs). GLP-1 secretion may be lower in adults with obesity (OW) or type 2 diabetes mellitus (T2DM) than in those with normal glucose tolerance (NGT), but these findings are inconsistent. Because of the actions of GLP-1 on stimulating insulin secretion and promoting weight loss, GLP-1 and its analogs are used in pharmacological preparations for the treatment of T2DM. However, physiologically stimulated GLP-1 secretion through the diet might be a preventive or synergistic method for improving glucose metabolism in individuals who are OW, or have impaired glucose tolerance (IGT) or T2DM.

Rationale: This narrative review focuses on fasting and postprandial GLP-1 secretion in individuals with different metabolic conditions and degrees of glucose intolerance. Further, the influence of relevant diet-related factors (e.g., specific diets, meal composition and size, phytochemical content, and gut microbiome) that could affect fasting and postprandial GLP-1 secretion are discussed.

Results: Some studies showed a diminished glucose- or meal-stimulated GLP-1 response in participants with T2DM, IGT, or OW compared to those with NGT, whereas other studies have reported an elevated or unchanged GLP-1 response in T2DM or IGT. Meal composition, especially the relationship between macronutrients and interventions targeting the microbiome can impact postprandial GLP-1 secretion, although it is not clear which macronutrients are strong stimulants of GLP-1. Moreover, glucose tolerance, antidiabetic treatment, grade of overweight/obesity, and sex were important factors influencing GLP-1 secretion.

Conclusion: The results presented in this review highlight the potential of nutritional and physiological stimulation of GLP-1 secretion. Further research on fasting and postprandial GLP-1 levels and the resulting metabolic consequences under different metabolic conditions is needed.

Keywords: Glucagon-like peptide 1; glucose tolerance; human; meal challenge; postprandial metabolism; type 2 diabetes mellitus.

Copyright © 2024 The Author(s). Published by Elsevier Inc. All rights reserved.


Research impact: a narrative review.

Author information

  • Greenhalgh T 1
  • Raftery J 2

ORCIDs linked to this article

  • Glover M | 0000-0001-9454-2668
  • Hanney S | 0000-0002-7415-5932
  • Greenhalgh T | 0000-0003-2369-8088

BMC Medicine , 23 May 2016 , 14: 78 https://doi.org/10.1186/s12916-016-0620-8   PMID: 27211576  PMCID: PMC4876557


Research impact: a narrative review

Trisha Greenhalgh

Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Primary Care Building, Woodstock Rd, Oxford, OX2 6GG UK

James Raftery

Primary Care and Population Sciences, Faculty of Medicine, University of Southampton, Southampton General Hospital, Southampton, SO16 6YD UK

Steve Hanney

Health Economics Research Group (HERG), Institute of Environment, Health and Societies, Brunel University London, UB8 3PH UK

Matthew Glover



Research impact has many definitions (Box 1). Its measurement is important considering that researchers are increasingly expected to be accountable and produce value for money, especially when their work is funded from the public purse [ 8 ]. Further, funders seek to demonstrate the benefits from their research spending [ 9 ] and there is pressure to reduce waste in research [ 10 ]. By highlighting how (and how effectively) resources are being used, impact assessment can inform strategic planning by both funding bodies and research institutions [ 1 , 11 ].

We draw in particular on a recent meta-synthesis of studies of research impact funded by the UK Health Technology Assessment Programme (HTA review) covering literature mainly published between 2005 and 2014 [ 1 ]. The HTA review was based on a systematic search of eight databases (including grey literature) plus hand searching and reference checking, and identified over 20 different impact models and frameworks and 110 studies describing their empirical applications (as single or multiple case studies), although only a handful had proven robust and flexible across a range of examples. The material presented in this summary paper, based on much more extensive work, is inevitably somewhat eclectic. Four of the six approaches we selected as ‘established’ were the ones most widely used in the 110 published empirical studies. Additionally, we included the Societal Impact Assessment, despite it being less widely used, since it has recently been the subject of a major EU-funded workstream (across a range of fields), and the UK Research Excellence Framework (REF; on which empirical work post-dated our review) because of the size and uniqueness of its dataset and its significant international interest. The approaches we selected as showing promise for the future were chosen more subjectively on the grounds that there is currently considerable academic and/or policy interest in them.

Different approaches to assessing research impact make different assumptions about the nature of research knowledge, the purpose of research, the definition of research quality, the role of values in research and its implementation, the mechanisms by which impact is achieved, and the implications for how impact is measured (Table  1 ). Short-term proximate impacts are easier to attribute, but benefits from complementary assets (such as the development of research infrastructure, political support or key partnerships [ 8 ]) may accumulate in the longer term but are more difficult – and sometimes impossible – to fully capture.

Philosophical assumptions underpinning approaches to research impact

Knowledge is intertwined with politics and persuasion. If stakeholders agree on what the problem is and what a solution would look like, the research-impact link will tend to turn on the strength of research evidence in favour of each potential decision option, as depicted in column 2 of Table  1 [ 12 ]. However, in many fields – for example, public policymaking, social sciences, applied public health and the study of how knowledge is distributed and negotiated in multi-stakeholder collaborations – the links between research and impact are complex, indirect and hard to attribute (for an example, see Kogan and Henkel’s rich ethnographic study of the Rothschild experiment in the 1970s, which sought – and failed – to rationalize the links between research and policy [ 13 ]). In policymaking, research evidence is rather more often used conceptually (for general enlightenment) or symbolically (to justify a chosen course of action) than instrumentally (feeding directly into a particular policy decision) [ 12 , 14 ], as shown empirically by Amara et al.’s large quantitative survey of how US government agencies drew on university research [ 15 ]. Social science research is more likely to illuminate the complexity of a phenomenon than produce a simple, ‘implementable’ solution that can be driven into practice by incorporation into a guideline or protocol [ 16 , 17 ], as was shown by Dopson and Fitzgerald’s detailed ethnographic case studies of the implementation of evidence-based healthcare in healthcare organisations [ 18 ]. In such situations, the research-impact relationship may be productively explored using approaches that emphasise the fluidity of knowledge and the multiple ways in which it may be generated, assigned more or less credibility and value, and utilised (columns 3 to 6 in Table  1 ) [ 12 , 19 ].

Many approaches to assessing research impact combine a logic model (to depict input-activities-output-impact links) with a ‘case study’ description to capture the often complex processes and interactions through which knowledge is produced (perhaps collaboratively and/or with end-user input to study design), interpreted and shared (for example, through engagement activities, audience targeting and the use of champions, boundary spanners and knowledge brokers [ 20 – 24 ]). A nuanced narrative may be essential to depict the non-linear links between upstream research and distal outcomes and/or help explain why research findings were not taken up and implemented despite investment in knowledge translation efforts [ 4 , 6 ].

Below, we describe six approaches that have proved robust and useful for measuring research impact and some additional ones introduced more recently. Table  2 lists examples of applications of the main approaches reviewed in this paper.

Examples of applications of research impact assessment frameworks

Established approaches to measuring research impact

The Payback Framework

Developed by Buxton and Hanney in 1996 [ 25 ], the Payback Framework (Fig.  1 ) remains the most widely used approach. It was used by 27 of the 110 empirical application studies in the recent HTA review [ 1 ]. Despite its name, it does not measure impact in monetary terms. It consists of two elements: a logic model of the seven stages of research from conceptualisation to impact, and five categories to classify the paybacks – knowledge (e.g. academic publications), benefits to future research (e.g. training new researchers), benefits to policy (e.g. information base for clinical policies), benefits to health and the health system (including cost savings and greater equity), and broader economic benefits (e.g. commercial spin-outs). Two interfaces for interaction between researchers and potential users of research (‘project specification, selection and commissioning’ and ‘dissemination’) and various feedback loops connecting the stages are seen as crucial.


The Payback Framework developed by Buxton and Hanney (reproduced under Creative Commons Licence from Hanney et al. [ 70 ])

The elements and categories in the Payback Framework were designed to capture the diverse ways in which impact may arise, notably the bidirectional interactions between researchers and users at all stages in the research process from agenda setting to dissemination and implementation. The Payback Framework encourages an assessment of the knowledge base at the time a piece of research is commissioned – data that might help with issues of attribution (did research A cause impact B?) and/or reveal a counterfactual (what other work was occurring in the relevant field at the time?).

Applying the Payback Framework through case studies is labour intensive: researcher interviews are combined with document analysis and verification of claimed impacts to prepare a detailed case study containing both qualitative and quantitative information. Not all research groups or funders will be sufficiently well resourced to produce this level of detail for every project – nor is it always necessary to do so. Some authors have adapted the Payback Framework methodology to reduce the workload of impact assessment (for example, a recent European Commission evaluation populated the categories mainly by analysis of published documents [ 26 ]); nevertheless, it is not known how or to what extent such changes would compromise the data. Impacts may be short or long term [ 27 ], so (as with any approach) the time window covered by data collection will be critical.

Another potential limitation of the Payback Framework is that it is generally project-focused (commencing with a particular funded study) and is therefore less able to explore the impact of the sum total of activities of a research group that attracted funding from a number of sources. As Meagher et al. concluded in their study of ESRC-funded responsive mode psychology projects, “In most cases it was extremely difficult to attribute with certainty a particular impact to a particular project’s research findings. It was often more feasible to attach an impact to a particular researcher’s full body of research, as it seemed to be the depth and credibility of an ongoing body of research that registered with users” [ 28 ] (p. 170).

Similarly, the impact of programmes of research may be greater than the sum of their parts due to economic and intellectual synergies, and therefore project-focused impact models may systematically underestimate impact. Application of the Payback Framework may include supplementary approaches such as targeted stakeholder interviews to fully capture the synergies of programme-level funding [ 29 , 30 ].

Research Impact Framework

The Research Impact Framework was the second most widely used approach in the HTA review of impact assessment, accounting for seven out of 110 applications [ 1 ], but in these studies it was mostly used in combination with other frameworks (especially Payback) rather than as a stand-alone approach. It was originally developed by and for academics who were interested in measuring and monitoring the impact of their own research. As such, it is a ‘light touch’ checklist intended for use by individual researchers who seek to identify and select impacts from their work “without requiring specialist skill in the field of research impact assessment” [ 31 ] (p. 136). The checklist, designed to prompt reflection and discussion, includes research-related impacts, policy and practice impacts, service (including health) impacts, and an additional ‘societal impact’ category with seven sub-categories. In a pilot study, its authors found that participating researchers engaged readily with the Research Impact Framework and were able to use it to identify and reflect on different kinds of impact from their research [ 31 , 32 ]. Because of its (intentional) trade-off between comprehensiveness and practicality, it generally produces a less thorough assessment than the Payback Framework and was not designed to be used in formal impact assessment studies by third parties.

Canadian Academy of Health Sciences (CAHS) Framework

The most widely used adaptation of the Payback Framework is the CAHS Framework (Fig.  2 ), which informed six of the 110 application studies in the HTA review [ 33 ]. Its architects claim to have shaped the Payback Framework into a ‘systems approach’ that takes greater account of the various non-linear influences at play in contemporary health research systems. CAHS was constructed collaboratively by a panel of international experts (academics, policymakers, university heads), endorsed by 28 stakeholder bodies across Canada (including research funders, policymakers, professional organisations and government) and refined through public consultation [ 33 ]. The authors emphasise that the consensus-building process that generated the model was as important as the model itself.


Simplified Canadian Academy of Health Sciences (CAHS) Framework (reproduced with permission of Canadian Academy of Health Sciences [ 33 ])

CAHS encourages a careful assessment of context and the subsequent consideration of impacts under five categories: advancing knowledge (measures of research quality, activity, outreach and structure), capacity-building (developing researchers and research infrastructure), informing decision-making (decisions about health and healthcare, including public health and social care, decisions about future research investment, and decisions by public and citizens), health impacts (including health status, determinants of health – including individual risk factors and environmental and social determinants – and health system changes), and economic and social benefits (including commercialisation, cultural outcomes, socioeconomic implications and public understanding of science).

For each category, a menu of metrics and measures (66 in total) is offered, and users are encouraged to draw on these flexibly to suit their circumstances. By choosing appropriate sets of indicators, CAHS can be used to track impacts within any of the four ‘pillars’ of health research (basic biomedical, applied clinical, health services and systems, and population health – or within domains that cut across these pillars) and at various levels (individual, institutional, regional, national or international).

Despite their differences, Payback and CAHS have much in common, especially in how they define impact and in their proposed categories for assessing it. Whilst CAHS appears broader in scope and emphasises ‘complex system’ elements, both frameworks are designed as pragmatic and flexible adaptations of the research-into-practice logic model. One key difference is that the CAHS category ‘decision-making’ incorporates both policy-level decisions and the behaviour of individual clinicians, whereas Payback collects data separately on individual clinical decisions, on the grounds that measurable changes in clinician behaviour feed indirectly into the improved health category.

As with Payback (but perhaps even more so, since CAHS is in many ways more comprehensive), the application of CAHS is a complex and specialist task that is likely to be highly labour-intensive and hence prohibitively expensive in some circumstances.

Monetisation models

A significant innovation in recent years has been the development of logic models to monetise (that is, express in terms of currency) both the health and the non-health returns from research. Of the 110 empirical applications of impact assessment approaches in our HTA review, six used monetisation. Such models tend to operate at a much higher level of aggregation than Payback or CAHS – typically seeking to track all the outputs of a research council [ 34 , 35 ], national research into a broad disease area (e.g. cardiovascular disease, cancer) [ 36 – 38 ], or even an entire national medical research budget [ 39 ].

Monetisation models express returns in various ways, including as cost savings, the money value of net health gains via cost per quality-adjusted life year (QALY) using the willingness-to-pay or opportunity cost established by NICE or similar bodies [ 40 ], and internal rates of return (return on investment as an annual percentage yield). These models draw largely from the economic evaluation literature and differ principally in terms of which costs and benefits (health and non-health) they include and in the valuation of seemingly non-monetary components of the estimation. A national research call, for example, may fund several programmes of work in different universities and industry partnerships, subsequently producing net health gains (monetised as the value of QALYs or disability-adjusted life-years), cost savings to the health service (and to patients), commercialisation (patents, spin-outs, intellectual property), leveraging of research funds from other sources, and so on.

A major challenge in monetisation studies is that, in order to produce a quantitative measure of economic impact or rate of return, a number of simplifying assumptions must be made, especially in relation to the appropriate time lag between research and impact and what proportion of a particular benefit should be attributed to the funded research programme as opposed to all the other factors involved (e.g. social trends, emergence of new interventions, other research programmes occurring in parallel). Methods are being developed to address some of these issues [ 27 ]; however, whilst the estimates produced in monetised models are quantitative, those figures depend on subjective, qualitative judgements.

A key debate in the literature on monetisation of research impact addresses the level of aggregation. First applied to major research budgets in a ‘top-down’ or macro approach [ 39 ], whereby total health gains are apportioned to a particular research investment, the principles of monetisation are increasingly being used in a ‘bottom-up’ [ 34 , 36 – 38 ] manner to collect data on specific project or programme research outputs. The benefits of new treatments and their usage in clinical practice can be built up to estimate returns from a body of research. By including only research-driven interventions and using cost-effectiveness or cost-utility data to estimate incremental benefits, this method goes some way to dealing with the issue of attribution. Some impact assessment models combine a monetisation component alongside an assessment of processes and/or non-monetised impacts, such as environmental impacts and an expanded knowledge base [ 41 ].
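To make the monetisation arithmetic concrete, the sketch below computes a net monetary benefit for a research programme (health gains valued at a willingness-to-pay threshold, in the spirit of the NICE-style valuation mentioned above) and an internal rate of return on the research spend. All figures, function names and the bisection solver are illustrative assumptions for this review, not taken from any of the cited models:

```python
def net_monetary_benefit(qalys_gained, wtp_per_qaly,
                         health_system_savings, research_cost):
    """Value health gains at a willingness-to-pay threshold per QALY,
    add cost savings to the health service, subtract the research spend.
    All inputs are hypothetical currency amounts."""
    return qalys_gained * wtp_per_qaly + health_system_savings - research_cost


def internal_rate_of_return(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Annual rate r at which the net present value of the yearly cash
    flows is zero, found by bisection. cash_flows[0] is the upfront
    research cost, expressed as a negative number."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))

    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:  # root lies in the lower half
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2


# Hypothetical programme: £10m spend, then £2.5m of monetised benefit per
# year for 15 years, after an assumed 10-year lag between research and impact.
flows = [-10_000_000] + [0] * 9 + [2_500_000] * 15
rate = internal_rate_of_return(flows)
```

Varying the assumed lag between spend and benefit (ten years in the hypothetical cash flows above) shows how strongly the headline rate of return depends on exactly the simplifying assumptions about time lag and attribution discussed in the text.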

Societal impact assessment

Societal impact assessment, used in social sciences and public health, emphasises impacts beyond health and is built on constructivist and performative philosophical assumptions (columns 3 and 6 in Table  1 ). Some form of societal impact assessment was used in three of the 110 empirical studies identified in our HTA review. Its protagonists distinguish the social relevance of knowledge from its monetised impacts, arguing that the intrinsic value of knowledge may be less significant than the varied and changing social configurations that enable its production, transformation and use [ 42 ].

An early approach to measuring societal impact was developed by Spaapen and Sylvain in the early 1990s [ 43 ] and subsequently refined by the Royal Netherlands Academy of Arts and Sciences [ 44 ]. An important component is self-evaluation by a research team of the relationships, interactions and interdependencies that link it to other elements of the research ecosystem (e.g. the nature and strength of links with clinicians, policymakers and industry), as well as external peer review of these links. Spaapen et al. subsequently conducted a research programme, Evaluating Research in Context (ERiC) [ 45 ], which produced the Sci-Quest model [ 46 ]. Later, they collaborated with researchers who had led a major UK ESRC-funded study on societal impact [ 47 ] to produce the EU-funded SIAMPI (Social Impact Assessment Methods through the study of Productive Interactions) Framework [ 48 ].

Sci-Quest was described by its authors as a ‘fourth-generation’ approach to impact assessment – the previous three generations having been characterised, respectively, by measurement (e.g. an unenhanced logic model), description (e.g. the narrative accompanying a logic model) and judgement (e.g. an assessment of whether the impact was socially useful or not). Fourth-generation impact assessment, they suggest, is fundamentally a social, political and value-oriented activity and involves reflexivity on the part of researchers to identify and evaluate their own research goals and key relationships [ 46 ].

Sci-Quest methodology requires a detailed assessment of the research programme in context and the development of bespoke metrics (both qualitative and quantitative) to assess its interactions, outputs and outcomes, which are presented in a unique Research Embedment and Performance Profile, visualised in a radar chart. SIAMPI uses a mixed-methods case study approach to map three categories of productive interaction: direct personal contacts, indirect contacts such as publications, and financial or material links. These approaches have theoretical elegance, and some detailed empirical analyses were published as part of the SIAMPI final report [ 48 ]. However, neither approach has had significant uptake elsewhere in health research – perhaps because both are complex, resource-intensive and do not allow easy comparison across projects or programmes.

Whilst extending impact to include broader societal categories is appealing, the range of societal impacts described in different publications, and the weights assigned to them, vary widely; much depends on the researchers’ own subjective ratings. In Australia, a national exercise to capture societal impact (the Research Quality Framework) was developed in the mid-2000s but abandoned following a change of government [ 49 ].

UK Research Excellence Framework

The 2014 REF – an extensive exercise to assess UK universities’ research performance – allocated 20% of the total score to research impact [ 50 ]. Each institution submitted an impact template describing its strategy and infrastructure for achieving impact, along with several four-page impact case studies, each of which described a programme of research, its claimed impacts and the supporting evidence. These narratives, which were required to follow a linear and time-bound structure (describing research undertaken between 1993 and 2013, followed by a description of impact occurring between 2008 and 2013), were peer-reviewed by an intersectoral assessment panel representing academia and research users (industry and policymakers) [ 50 ]. Other countries are looking to emulate the REF model [ 51 ].

An independent evaluation of the REF impact assessment process by RAND Europe (based on focus groups, interviews, a survey and documentary analysis) concluded that panel members perceived it as fair and robust and valued the intersectoral discussions, though many felt the somewhat crude scoring system (in which most case studies were awarded 3, 3.5 or 4 points) lacked granularity [ 52 ]. The 6679 non-redacted impact case studies submitted to the REF (1594 in medically related fields) were placed in the public domain ( http://results.ref.ac.uk ) and provide a unique dataset for further analysis.

In its review of the REF, the members of Main Panel A, which covered biomedical and health research, noted that “International MPA [Main Panel A] members cautioned against attempts to ‘metricise’ the evaluation of the many superb and well-told narrations describing the evolution of basic discovery to health, economic and societal impact” [ 50 ].

Approaches with potential for the future

The approaches in this section, most of which have been recently developed, have not been widely tested but may hold promise for the future.

Electronic databases

Research funders increasingly require principal investigators to provide an annual return of impact data on an online third-party database. In the UK, for example, Researchfish® (formerly MRC e-Val but now described as a ‘federated system’ with over 100 participating organisations) allows funders to connect outputs to awards, thereby allowing aggregation of all outputs and impacts from an entire funding stream. The software contains 11 categories: publications, collaborations, further funding, next destination (career progression), engagement activities, influence on policy and practice, research materials, intellectual property, development of products or interventions, impacts on the private sector, and awards and recognition.

Provided that researchers complete the annual return consistently and accurately, such databases may overcome some of the limitations of one-off, resource-intensive case study approaches. However, the design (and business model) of Researchfish® is such that the only funding streams captured are from organisations prepared to pay the membership fee, thereby potentially distorting the picture of whose input accounts for a research team’s outputs.

Researchfish® collects data both ‘top-down’ (from funders) and ‘bottom-up’ (from individual research teams). A comparable US model is the High Impacts Tracking System, a web-based software tool developed by the National Institute of Environmental Health Sciences; it imports data from existing National Institutes of Health databases of grant information as well as the texts of progress reports and notes of programme managers [ 53 ].

Whilst electronic databases are increasingly mainstreamed in national research policy (Researchfish® was used, for example, to populate the Framework on Economic Impacts described by the UK Department of Business, Innovation and Skills [ 54 ]), we were unable to identify any published independent evaluations of their use.

Realist evaluation

Realist evaluation, designed to address the question “what works for whom in what circumstances”, rests on the assumption that different research inputs and processes in different contexts may generate different outcomes (column 4 in Table  1 ) [ 55 ]. A new approach, developed to assess and summarise impact in the national evaluation of UK Collaborations for Leadership in Applied Health Research and Care, is shown in Fig.  3 [ 56 ]. Whilst considered useful in that evaluation, it was resource-intensive to apply.


Realist model of research-service links and impacts in CLAHRCs (reproduced under UK non-commercial government licence from [ 56 ])

Contribution mapping

Kok and Schuit describe the research ecosystem as a complex and unstable network of people and technologies [ 57 ]. They depict the achievement of impact as shifting and stabilising the network’s configuration by mobilising people and resources (including knowledge in material forms, such as guidelines or software) and enrolling them in changing ‘actor scenarios’. In this model, the focus shifts from attribution to contribution – that is, to the activities and alignment efforts of different actors (both those linked to the research and those more distant from it) in the three phases of the research process (formulation, production and extension; Fig.  4 ). Contribution mapping, which can be thought of as a variation on the Dutch approaches to societal impact assessment described above, uses in-depth case study methods but differs from more mainstream approaches in its philosophical and theoretical basis (column 6 in Table  1 ), in its focus on processes and activities, and in its goal of producing an account of how the network of actors and artefacts shifts and stabilises (or not). Its empirical application to date has been limited.


Kok and Schuit’s ‘contribution mapping’ model (reproduced under Creative Commons Attribution Licence 4.0 from [ 57 ])

The SPIRIT Action Framework

The SPIRIT Action Framework, recently published by Australia’s Sax Institute [ 58 ], retains a logic model structure but places more emphasis on engagement and capacity-building activities in organisations and acknowledges the messiness of, and multiple influences on, the policy process (Fig.  5 ). Unusually, the ‘logic model’ focuses not on the research but on the receiving organisation’s need for research. We understand that it is currently being empirically tested but evaluations have not yet been published.


The SPIRIT Action Framework (reproduced under Creative Commons Attribution Licence from [ 58 ] Fig.  1 , p. 151)

Participatory research impact model

Community-based participatory research is predicated on a critical philosophy that emphasises social justice and the value of knowledge in liberating the disadvantaged from oppression (column 5 in Table  1 ) [ 59 ]. Cacari-Stone et al.’s model depicts the complex and contingent relationship between a community-campus partnership and the policymaking process [ 60 ]. Research impact is depicted in synergistic terms as the progressive strengthening of the partnership and its consequent ability to influence policy decisions. The paper introducing the model includes a detailed account of its application (Table  2 ), but beyond this it has not yet been empirically tested.

This review of research impact assessment, which has sought to supplement rather than duplicate more extended overviews [ 1 – 7 ], prompts four main conclusions.

First, one size does not fit all. Different approaches to measuring research impact are designed for different purposes. Logic models can be very useful for tracking the impacts of a funding stream from award to quantified (and perhaps monetised) impacts. However, when exploring less directly attributable aspects of the research-impact link, narrative accounts of how these links emerged and developed are invariably needed.

Second, the perfect is the enemy of the good. Producing detailed and validated case studies, with a full assessment of context and all major claims independently verified, takes work and skill. There is a trade-off between the quality, completeness and timeliness of the data informing an impact assessment, on the one hand, and the cost and feasibility of generating such data on the other. It is perhaps no accident that some of the most theoretically elegant approaches to impact assessment have had limited influence on how impact is assessed in practice.

Third, warnings from critics that focusing on short-term, proximal impacts (however accurately measured) could create a perverse incentive against more complex and/or politically sensitive research whose impacts are likely to be indirect and hard to measure [ 61 – 63 ] should be taken seriously. However, as the science of how to measure intervening processes and activities advances, it may be possible to use such metrics creatively to support and incentivise the development of complementary assets of various kinds.

Fourth, change is afoot. Driven by both technological advances and the mounting economic pressures on the research community, labour-intensive impact models that require manual assessment of documents, researcher interviews and a bespoke narrative may be overtaken in the future by more automated approaches. The potential for ‘big data’ linkage (for example, supplementing Researchfish® entries with bibliometrics on research citations) may be considerable, though its benefits are currently speculative (and the risks unknown).

Conclusions

As the studies presented in this review illustrate, research on research impact is a rapidly growing interdisciplinary field, spanning evidence-based medicine (via sub-fields such as knowledge translation and implementation science), health services research, economics, informatics, sociology of science and higher education studies. One priority for research in this field is to assess how far the newer approaches that rely on regular updating of electronic databases can provide the breadth of understanding, about the nature of the impacts and how they arise, that can come from the more established and more ‘manual’ approaches. Future research should also address the topical question of whether research impact tools could be used to help target resources and reduce waste in research (for example, to decide whether to commission a new clinical trial or a meta-analysis of existing trials); we note, for example, the efforts of the UK National Institute for Health Research in this regard [ 64 ].

Once methods for assessing research impact have been developed, it is likely that they will be used. As the range of approaches grows, the challenge is to ensure that the most appropriate one is selected for each of the many different circumstances in which (and the different purposes for which) people may seek to measure impact. It is also worth noting that existing empirical studies have been undertaken primarily in high-income countries and relate to health research systems in North America, Europe and Australasia. The extent to which these frameworks are transferable to low- or middle-income countries or to the Asian setting should be explored further.

Box 1: Definitions of research impact
Impact is the effect research has beyond academia, consisting of “…benefits to one or more areas of the economy, society, culture, public policy and services, health, production, environment, international development or quality of life, whether locally, regionally, nationally or internationally” (paragraph 62) and “…manifested in a wide variety of ways including, but not limited to: the many types of beneficiary (individuals, organisations, communities, regions and other entities); impacts on products, processes, behaviours, policies, practices; and avoidance of harm or the waste of resources” (paragraph 63). UK 2014 Research Excellence Framework [ 65 ]
“‘Health impacts’ can be defined as changes in the healthy functioning of individuals (physical, psychological, and social aspects of their health), changes to health services, or changes to the broader determinants of health. ‘Social impacts’ are changes that are broader than simply those to health noted above, and include changes to working systems, ethical understanding of health interventions, or population interactions. ‘Economic impacts’ can be regarded as the benefits from commercialization, the net monetary value of improved health, and the benefits from performing health research.” Canadian Academy of Health Sciences [ 33 ] (p. 51)
Academic impact is “The demonstrable contribution that excellent research makes to academic advances, across and within disciplines, including significant advances in understanding, methods, theory and application.” Economic and societal impact is “fostering global economic performance, and specifically the economic competitiveness of the UK, increasing the effectiveness of public services and policy, [and] enhancing quality of life, health and creative output.” Research Councils UK Pathways to Impact ( http://www.rcuk.ac.uk/innovation/impacts/ )
“A research impact is a recorded or otherwise auditable occasion of influence from academic research on another actor or organization. […] It is not the same thing as a change in outputs or activities as a result of that influence, still less a change in social outcomes. Changes in organizational outputs and social outcomes are always attributable to multiple forces and influences. Consequently, verified causal links from one author or piece of work to output changes or to social outcomes cannot realistically be made or measured in the current state of knowledge. […] However, secondary impacts from research can sometimes be traced at a much more aggregate level, and some macro-evaluations of the economic net benefits of university research are feasible. Improving our knowledge of primary impacts as occasions of influence is the best route to expanding what can be achieved here.” London School of Economics Impact Handbook for Social Scientists [ 66 ]
  • Acknowledgements

This paper is largely but not entirely based on a systematic review funded by the NIHR HTA Programme, grant number 14/72/01, with additional material from TG’s dissertation from the MBA in Higher Education Management at UCL Institute of Education, supervised by Sir Peter Scott. We thank Amanda Young for project management support to the original HTA review and Alison Price for assistance with database searches.

Competing interests

TG was Deputy Chair of the 2014 Research Excellence Framework Main Panel A from 2012 to 2014, for which she received an honorarium for days worked (in common with all others on REF panels). SH received grants from various health research funding bodies to help develop and test the Payback Framework. JR is a member of the NIHR HTA Editorial Board, on paid secondment. He was principal investigator in a study funded by the NIHR HTA programme which reviewed methods for measuring the impact of the health research programmes and was director of the NIHR Evaluation, Trials and Studies Coordinating Centre to 2012. MG declares no conflict of interest.

All authors have completed the unified competing interest form at http://www.spp.pt/UserFiles/file/APP_2015/Declaracao_ICMJE_nao_editavel.pdf (available on request from the corresponding author) and declare (1) no financial support for the submitted work from anyone other than their employer; (2) no financial relationships with commercial entities that might have an interest in the submitted work; (3) no spouses, partners, or children with relationships with commercial entities that might have an interest in the submitted work; and (4) no non-financial interests that may be relevant to the submitted work.

Authors’ contributions

JR was principal investigator on the original systematic literature review and led the research and writing for the HTA report (see Acknowledgements), to which all authors contributed by bringing different areas of expertise to an interdisciplinary synthesis. TG wrote the initial draft of this paper and all co-authors contributed to its refinement. All authors have read and approved the final draft.




  • Open access
  • Published: 27 November 2023

A systematic review and narrative synthesis of physical activity referral schemes’ components

  • Eriselda Mino   ORCID: orcid.org/0000-0002-1885-0009 1 ,
  • Coral L. Hanson 2 ,
  • Inga Naber 1 ,
  • Anja Weissenfels 1 ,
  • Sheona McHale 2 ,
  • Jane Saftig 1 ,
  • Sarah Klamroth 1 ,
  • Peter Gelius 1 ,
  • Karim Abu-Omar 1 ,
  • Stephen Whiting 3 ,
  • Kremlin Wickramasinghe 3 ,
  • Gauden Galea 3 ,
  • Klaus Pfeifer 1 &
  • Wolfgang Geidl 1  

International Journal of Behavioral Nutrition and Physical Activity volume  20 , Article number:  140 ( 2023 ) Cite this article


Physical activity referral schemes (PARS) are complex multicomponent interventions that represent a promising healthcare-based concept for physical activity (PA) promotion. This systematic review and narrative synthesis aimed to identify the constitutive components of PARS and provide an overview of their effectiveness.

Following a published protocol, we conducted a systematic search of PubMed, Scopus, Web of Science, CINAHL, ScienceDirect, SpringerLink, HTA, Wiley Online Library, SAGE Journals, Taylor & Francis, Google Scholar, OpenGrey, and CORE from 1990 to January 2023. We included experimental, quasi-experimental, and observational studies that targeted adults participating in PARS and reported PA outcomes, scheme uptake, or adherence rates. We performed an intervention components analysis using the PARS taxonomy to identify scheme components and extracted data related to uptake, adherence, and PA behavior change. We combined these to provide a narrative summary of PARS effectiveness.

We included 57 studies reporting on 36 PARS models from twelve countries. We identified 19 PARS components: a patient-centered approach, individualized content, behavior change theory and techniques, screening, brief advice, written materials, a written prescription, referral, baseline and exit consultation, counselling support session(s), PA sessions, education session(s), action for non-attendance, structured follow-up, a PA network, feedback for the referrer, and exit strategies/routes. The PARS models contained a mean of 7 ± 2.9 components (range = 2–13). Forty-five studies reported PA outcome data, 28 reported uptake, and 34 reported adherence rates. Of these, approximately two-thirds of studies reported a positive effect on participant PA levels, with a wide range of uptake (5.7–100.0%) and adherence rates (8.5–95.0%).

Conclusions

Physical activity referral scheme components are an important source of complexity. Despite the heterogeneous nature of scheme designs, our synthesis identified 19 components. Further research is required to determine the influence of these components on PARS uptake, adherence, and PA behavior change. To facilitate this, researchers and scheme providers must report PARS designs in more detail. Process evaluations are also needed to examine implementation and increase our understanding of which components lead to which outcomes. This will facilitate future comparisons between PARS and enable the development of models that maximize impact.

Chronic non-communicable diseases (NCDs) present a challenge to public health and modern healthcare systems [ 1 ]. Physical activity (PA) interventions offer a window of opportunity for NCD prevention and management [ 2 ], particularly in primary care [ 3 , 4 ]. This is because healthcare professionals are considered to be a credible source of information about the well-established health-enhancing benefits of PA [ 5 ]. In 2016, 39 billion outpatient healthcare visits were made globally [ 6 ], which, if utilized concurrently for PA promotion, might have reached an estimated 1.4 billion insufficiently active adults [ 7 ]. Physical activity healthcare interventions, such as brief advice and physical activity referral schemes (PARS), are considered viable approaches that enable healthcare professionals to encourage patients to be more active [ 4 , 8 , 9 ]. At the system level, PARS offer a practical way for healthcare professionals to harness the role of PA in reducing the burden of NCDs and help overcome fragmented efforts in PA promotion. At the individual level, referral schemes have been suggested to improve not only participants’ PA, but also their depression levels [ 10 , 11 ], insulin sensitivity [ 12 ], body composition, and cardiometabolic risk factors [ 13 ]. Additionally, participants have reported a sense of belonging and social inclusion [ 14 ].

Physical activity referral schemes are widespread, complex interventions that involve the coordinated efforts of healthcare and exercise professionals in an individual’s journey to achieve PA behavior change. They are comparable to other healthcare referrals, which are defined as “the direction of an individual to the appropriate facility or specialist in a health system or network of service providers to address the relevant health needs” [ 4 ]. In PARS, individuals who have or are at risk of NCDs and have a health need in terms of insufficient PA are directed to appropriate PA specialists, facilities, or activities. These types of interventions offer an opportunity to bridge the gap between PA provision and inactive patients. As such, the World Health Organization advocates offering brief PA interventions, including referral pathways, in primary care to support PA behavior change [ 7 ]. Despite this endorsement, PARS have demonstrated only a modest impact on PA levels [ 15 ]. Current understanding of effectiveness is limited by the dominance of UK-based studies, which are characterized by high heterogeneity [ 16 ]. This has resulted in a lack of understanding about what works [ 17 ]. There is a need to better define different PARS models so that reviews of evidence can distinguish between distinct designs (e.g., UK versus Swedish models). However, even with small individual-level effects, great benefits can be seen at the population level when interventions are disseminated at scale [ 18 ]. Thus, attention has been directed to embedding PARS into healthcare systems; for example, the European Physical Activity on Prescription (EUPAP) project aims to establish the Swedish model in Belgium, Denmark, Germany, Italy, Lithuania, Malta, Portugal, Romania, and Spain [ 19 ].

Physical activity referral schemes incorporate various components to elicit behavior change [ 8 , 20 ]. The Swedish model includes five components: a patient-centered approach, evidence-based PA recommendations, a written prescription, follow-up, and a community-based network [ 20 , 21 ]. Schemes that incorporate these components are known to be effective, but it is unclear whether some components produce more favorable results than others [ 11 ]. Previous systematic reviews have called attention to PARS components [ 15 ], especially the component-effectiveness relationship [ 11 ] that is recognized as a researchable link in the complex intervention field [ 22 , 23 ]. Complex intervention understanding and research can be approached by treating an intervention as a uniform package, “downplaying complexity,” or as an intervention composed of components, “recognizing complexity” [ 22 ]. At the systematic review level, PARS effectiveness has been examined as a complete package [ 10 , 11 , 15 , 24 ], pooling only effect sizes and discounting intervention components. Other systematic reviews have explored PARS effectiveness in terms of scheme characteristics (referral reason and follow-up) [ 25 , 26 ], but this is different from examining components. Components are single, active parts that comprise the entire PARS [ 22 , 27 ] or guiding operational principles at scheme level [ 28 ], such as counseling using a patient-centered approach [ 20 ]. In contrast, PARS characteristics include setting, scheme length, and provider profession. While we acknowledge that complexity is multifaceted [ 29 ] and PARS characteristics may impact effectiveness [ 25 ], in this review, we have focused only on components as a source of complexity. The identification of components can enable the future investigation of their relative impact on effectiveness, creating useful knowledge for program developers and decision-makers [ 22 , 29 ].

Review question

As per our previously published protocol [ 28 ], we planned to examine PARS by reviewing the design of interventions to identify their constitutive components (Review Question 1) and further analyze their impact on effectiveness in terms of PA, uptake, and adherence (Review Question 2). In this paper, we focus on the first question by providing an overview of components that make up PARS models and information on their characteristics. Additionally, we present a narrative summary of the evidence of effectiveness.

This systematic review was conducted following the Cochrane Handbook for Systematic Reviews of Interventions [ 30 ] and reported in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 31 ] and Synthesis Without Meta-analysis (SWiM) [ 32 ] guidelines. The methods were pre-registered in the protocol [ 28 ] and are briefly described here.

Eligibility criteria

Eligible studies were those that investigated PARS initiated in a primary or secondary healthcare setting; targeted a population aged ≥ 16 years; and reported PA, uptake, or adherence outcomes. We considered all interventions labeled as PARS, exercise referral schemes, or exercise on prescription, as well as any similar intervention (such as PA counselling) that included at least some form of documentation, such as a prescription or referral form. Advice-only, exercise/PA-only, or combined lifestyle intervention studies that included other health behaviors in addition to PA were excluded. We included experimental, quasi-experimental, and observational studies that were published in English or German and reported the outcomes of interest, irrespective of the type of outcome measurement, methodological quality, comparison group, and follow-up duration.

Search and study selection

We conducted systematic searches in Scopus, PubMed, Web of Science, CINAHL, ScienceDirect, SpringerLink, HTA, Wiley Online Library, SAGE Journals, Taylor & Francis, Google Scholar, OpenGrey, and CORE for articles published since 1990 (Additional file 1 ), combined with search methods such as citation and hand searching. The initial search was conducted by one author (EM) in June 2020 and updated on January 31, 2023 (Additional file 1 ). Duplicates were removed, and the remaining articles were downloaded into Citavi V.6 (Swiss Academic Software). Titles and abstracts were screened independently by one reviewer (EM) and a pair of reviewers (IN, AW). One reviewer (EM) screened all full texts. An independent second full-text screening was distributed among the team (AW, IN, JS). The extent of agreement was measured using Cohen’s kappa, and divergences were resolved via discussion.

Data extraction and items

Reports on the same study were grouped together, and data on study characteristics, PARS content (characteristics and components), and effectiveness outcomes (PA, uptake, and adherence) were extracted. A single reviewer (EM) extracted the data into a customized Microsoft Excel spreadsheet (Microsoft Corporation, Washington, USA), with a second reviewer (JS) extracting 15% of included studies to check for accuracy.

Scheme content

Data were extracted at the scheme level using the PARS taxonomy, a classification system to document, audit, monitor, and report such programs [ 16 ]. We contacted twelve primary investigators to clarify questions or ask for support in the form of additional information, and half of them replied.

Effectiveness outcomes

We extracted total PA and, when available, moderate to vigorous PA, leisure time PA, and walking. Additionally, we extracted scheme uptake and adherence rates. When the primary investigators did not explicitly define uptake or adherence, we extracted data that fit our predefined definitions [ 28 ]: uptake, that is, attendance at the first PARS activity after receiving a referral or prescription; and adherence, that is, the extent to which the prescribed activities or enrolled programs were completed.
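As an illustration only (the review extracted these rates from individual studies rather than computing them), the two definitions reduce to simple proportions. All numbers below are hypothetical:

```python
def uptake_rate(attended_first_activity: int, referred: int) -> float:
    # Uptake: attendance at the first PARS activity after referral/prescription,
    # expressed as a percentage of all referred patients.
    return 100 * attended_first_activity / referred

def adherence_rate(sessions_attended: int, sessions_prescribed: int) -> float:
    # Adherence: extent to which the prescribed activities were completed.
    return 100 * sessions_attended / sessions_prescribed

# Hypothetical scheme: 80 of 140 referred patients attended a first session,
# and attendees completed on average 9 of 12 prescribed sessions.
print(f"uptake = {uptake_rate(80, 140):.1f}%, adherence = {adherence_rate(9, 12):.1f}%")
```

Because included studies operationalized adherence differently (scheme completion, mean sessions attended, and so on), rates computed this way are only comparable within a single definition.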

Risk of bias in individual studies

This systematic review was solely focused on content analysis to identify PARS components (first review question [ 28 ]) and did not include a meta-analysis of the effects of components. A risk-of-bias assessment is not included in this review but is being prepared for a subsequent analysis related to the second review question, that is, which of the identified components has the potential to maximize scheme effectiveness in terms of PA level, uptake, and adherence rates [ 28 ].

Synthesis methods

Data were synthesized following the principles of the first stage of intervention component analysis (ICA), which is intended to compare interventions in terms of their similarities and differences [ 33 ]. The first stage of ICA involves two parallel processes: (a) content analysis and (b) narrative effectiveness synthesis.

We combined the inductive ICA approach to content analysis with a deductive approach using levels one and two of the PARS taxonomy (scheme classification and characteristics) [ 16 ]. The use of this taxonomy reduced the chance of arbitrary component identification, given that at least 43 experts from research, PARS provision, healthcare, and policy-making backgrounds were involved in its creation.

Two authors (CLH and SM) conducted the content analysis, using NVIVO20 (QSR International, Melbourne, Australia) to organize the data. The analysis was checked by a third reviewer (EM). Given that PARS do not follow a standard design, we mapped the referral routes using cross-functional flowcharts in Lucidchart software [ 30 ] to aid comparison and identify patterns and structural components, as per our protocol [ 28 ].

Along with the identified components, effectiveness data were synthesized and presented in a tabular format. Physical activity outcomes were displayed by employing vote counting; that is, for each included study, we indicated the direction of the effect regardless of statistical significance [ 34 ]. Scheme uptake and adherence are given as percentages, as reported in the individual studies.
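The vote-counting step described above can be sketched as follows. This is a minimal illustration, not the review's software (the review presented results in tables); the study labels and effect codes are hypothetical:

```python
# Vote counting by direction of effect: each study contributes only the sign
# of its PA effect (regardless of statistical significance), not its size.
from collections import Counter

# Hypothetical extracted data: +1 = PA increased, -1 = PA decreased,
# 0 = no change or mixed results.
effect_directions = {
    "study_A": +1,
    "study_B": +1,
    "study_C": 0,
    "study_D": -1,
    "study_E": +1,
}

votes = Counter(effect_directions.values())
print(f"positive: {votes[+1]}, null/mixed: {votes[0]}, negative: {votes[-1]}")
```

Note that vote counting deliberately discards effect size and precision, so it summarizes only the balance of directions across studies.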

Studies included

The systematic search of the databases yielded 6,211 unique records, and an additional seven were found through snowball searching (Fig.  1 ). We examined 243 full texts, and 74 met this study’s eligibility criteria. Using the study as the unit of analysis [ 30 ], we conflated multiple reports of a single study, leading to 57 unique studies as the sample size for this systematic review. Reports of the same study presenting different outcomes (e.g., one reporting PA data and another reporting adherence data) were included as separate study units [ 35 , 36 , 37 , 38 ]. The extent of agreement between reviewers for the inclusion of studies was strong (Cohen’s kappa = 0.804, 95% CI = 0.797–0.809).
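For readers unfamiliar with the agreement statistic reported here, a minimal sketch of Cohen's kappa for two screeners follows. The decisions below are made up for illustration, so the value does not reproduce the 0.804 reported in the review:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items (here: include/exclude)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed agreement: proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions (1 = include, 0 = exclude)
r1 = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
r2 = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
print(round(cohens_kappa(r1, r2), 3))  # 0.8
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which is why it is preferred over simple agreement for screening decisions.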

Figure 1. Study selection process

Study characteristics

Almost half of the studies ( n  = 28, 49.0%) used an experimental design (randomized controlled trial [RCT], pragmatic or cluster RCT) [ 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 ]. Sample sizes ranged from 14 [ 68 ] to 6,610 [ 69 ]. Studies were spread across four continents, and the most common location was Europe ( n  = 42, 73.7%) [ 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 50 , 51 , 57 , 58 , 60 , 61 , 63 , 64 , 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 , 75 , 76 , 77 , 78 , 79 , 80 , 81 , 82 , 83 , 84 , 85 , 86 , 87 , 88 ]. Table 1 summarizes the study characteristics, and Additional file 2 describes them in more detail.

Scheme characteristics

Table 2 summarizes the characteristics of the PARS models investigated in the included studies. More detailed information about each scheme (e.g., the content of the PA sessions) can be found in Additional file 4 . The studies collectively investigated 36 PARS models, and seven schemes were researched by multiple studies. The Swedish Physical Activity on Prescription (PAP) model was investigated the most [ 35 , 36 , 37 , 38 , 40 , 41 , 42 , 43 , 44 , 58 , 70 , 71 , 72 , 89 , 90 , 91 ], with some studies examining schemes with the standard core components of this model [ 35 , 36 , 37 , 38 , 41 , 58 , 72 ] and others focusing on enhanced variations [ 40 , 42 , 43 , 70 , 71 ]. The second most investigated model was the Green Prescription (GRx), originating from New Zealand, including the standard scheme [ 48 , 62 , 92 ] and variations [ 47 , 53 , 54 ]. This scheme was also replicated in the US [ 59 ]. Eighteen different schemes were included from the UK [ 45 , 46 , 50 , 51 , 63 , 64 , 68 , 69 , 73 , 74 , 75 , 76 , 77 , 78 , 79 , 80 , 81 , 82 , 83 , 88 ]. These ranged from a simple referral to a PA program [ 81 ] to more complex referral systems [ 46 ].

PARS components

The component analysis revealed 19 components that make up PARS (Table 3 ). While there was some inconsistency in the use of terms to designate intervention components, the definitions that were established during the analysis can be found in Additional file 3 .

The identified components fall into four groups:

  • the theoretical basis (person-centered approach, individualized content, and behavior change theory and techniques);

  • scheme entry, transitioning, and exit (screening, brief advice, written prescription, referral, exit routes/strategies, and feedback to the referrer);

  • behavioral support (baseline consultation, final consultation, counseling support session(s), structured follow-up, action for non-attendance, education session(s), and written materials);

  • PA opportunities (PA sessions and a PA network).

For some of the components, we were able to identify specific elements that are listed in Table 3 , together with frequencies.

There was substantial variation in the number of components included within the design of various PARS. The PARS models contained a mean of 7 ± 2.9 components (range = 2–13).
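The descriptive statistics above can be derived directly from a scheme-by-component presence matrix. The sketch below is purely illustrative; the scheme names and component sets are hypothetical, so the printed values do not reproduce the review's 7 ± 2.9 (range = 2–13):

```python
import statistics

# Hypothetical presence matrix: scheme -> set of components identified in it
schemes = {
    "scheme_1": {"screening", "brief advice", "written prescription"},
    "scheme_2": {"screening", "referral", "PA sessions", "structured follow-up",
                 "baseline consultation", "exit consultation", "PA network"},
    "scheme_3": {"referral", "PA sessions"},
}

counts = [len(components) for components in schemes.values()]
mean = statistics.mean(counts)   # average number of components per scheme
sd = statistics.stdev(counts)    # sample standard deviation
lo, hi = min(counts), max(counts)
print(f"{mean:.1f} ± {sd:.1f} components (range = {lo}-{hi})")
```

Representing each scheme as a set of components also makes later questions (e.g., which components co-occur, or which appear in effective schemes) straightforward set operations.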

Narrative effectiveness synthesis

Table 4 summarizes the distribution of the 19 components across the 57 studies. For each study, the components are indicated as present or not and mapped against the effect direction on PA level (regardless of significance level), uptake rate, and adherence rate. These data are solely descriptive and are not intended to indicate the effectiveness of specific components.

The majority of studies reported positive effects of PARS on PA levels [ 39 , 41 , 46 , 47 , 48 , 52 , 57 , 67 , 99 , 100 ] compared with usual care, while four RCTs reported no group difference [ 58 , 61 ] or mixed results [ 56 , 63 ]. In contrast, only one randomized trial reported an additional benefit of PARS on PA level [ 62 ] when compared with PA advice alone, while three trials did not detect any additional benefit [ 40 , 49 , 59 ], and one reported mixed results [ 51 ]. The offer of a PARS program was shown to be more beneficial in terms of increasing PA than prescription only [ 50 , 100 ], with inconsistent results found in one study [ 60 ]. Approximately one-fifth of the included studies compared different versions of PARS regarding intensity and the activities offered. Most of these studies did not report added benefits for an enhanced intervention over standard provision [ 42 , 43 , 44 , 45 , 73 , 85 ]. However, two trials [ 64 , 65 ] and one observational study [ 70 ] reported that more intensive PARS offer added benefits for participants, and one study reported inconsistent results [ 55 ]. Observational and pre-post studies consistently reported an increase in PA levels for PARS participants [ 35 , 37 , 68 , 71 , 72 , 75 , 76 , 78 , 79 , 80 , 84 , 86 ], with the exception of one study [ 77 ].

Among the 28 studies that reported uptake, rates ranged from 5.7% [ 87 ] to 100.0% [ 44 , 54 ]. Although not always explicitly stated, the uptake definition was consistent among studies, i.e., the number of participants who entered the scheme after being referred; in other words, those who participated in at least one scheme activity after the referral. The adherence or attendance rate was reported in 34 studies, with variation in definitions: for example, adherence to the prescribed PA, adherence to the allocated PARS intervention, scheme completion, or the average number of PA sessions attended. Adherence rates varied from 8.5% [ 92 ] to 95.0% for completion of the entire PARS [ 47 ].

This is the first review to examine the components that are included in PARS. We identified 19 components: a person-centered approach, individualized content, a basis in behavior change theory, the use of behavior change techniques (BCTs), screening, brief advice, the provision of written materials, written prescriptions, referral to a PARS program/professional, a baseline consultation, an exit consultation, counseling support session(s), PA sessions, education session(s), action for non-attendance, structured follow-up, PA networks, feedback to the referrer, and exit routes/strategies. The PARS models we examined contained a mean of 7 ± 2.9 components (range = 2–13). The level of detail provided in studies of PARS content varied, making it difficult to ensure that all components were identified. In our narrative effectiveness synthesis, approximately two-thirds of studies reported a positive effect on participant PA levels, with wide ranges of uptake (5.7–100.0%) and adherence rates (8.5–95.0%). The large cross-country and within-country (for example, UK) differences in the number and arrangement of components included in the PARS models in this review highlight the complexity of understanding which components affect which outcomes. This is not only because these differences might affect effect sizes (changes in PA) and participant engagement with the scheme (uptake and adherence); the inclusion of different components also creates differing implementation demands, which must be adequately resourced. Implementation fidelity will be reflected in scheme outcomes, adding another layer of complexity.

The complexity of the role of components within PARS has played a limited role in evidence synthesis to date. Existing PARS meta-analyses have synthesized the effects of PARS interventions as a uniform package [ 10 , 15 ], without any consideration of differences in design and delivery. Thus, the true heterogeneity of PARS models, as a function of their components, has not been incorporated into the effectiveness equation. Previous reviews have considered the potential influence on uptake and adherence rates, as well as PA behavior change, of demographics (e.g., age, sex, and socio-economic status) [ 109 , 110 ], personal factors (e.g., referral reasons, medical conditions, and psychological factors) [ 14 , 109 , 110 ], healthcare system/team-related factors (e.g., adequacy of health services and the participant-provider relationship) [ 110 ], and scheme characteristics (e.g., scheme length, number of exercise sessions, and scheme setting) [ 25 , 109 ]. Our findings advance the prior understanding of PARS complexity by highlighting specific scheme components (e.g., brief advice and PA sessions), in addition to other relevant demographic or personal factors.

The reviewed evidence demonstrates that single PARS components are a subject of growing interest, but they have not been included in meta-analyses. Many of the included studies have examined the potential added effect of certain components, such as behavior change theory [ 45 , 65 ], a written prescription [ 59 , 62 ], written materials [ 99 ], counseling support [ 42 , 55 , 64 , 70 ], and PA sessions [ 49 ], on PA and health outcomes. Additionally, components such as individualization [ 40 , 42 , 44 ], exit routes and strategies [ 74 , 75 ], measures to keep scheme participation high [ 77 ], a baseline consultation [ 77 ], and structured follow-up [ 51 , 75 , 99 ] have been suggested to be important to scheme success. This growing attention to the role of components in individual studies, in combination with heterogeneous scheme designs, risks producing research that is difficult to combine for synthesis. Our review highlights the fact that there is not yet a standard terminology for understanding these differences between PARS designs. Our analysis adds value because it distinguishes between PARS components and provides a basis for a future standardized terminology. This will aid scheme comparison and allow for evidence harmonization and synthesis. To enable better differentiation between PARS and an examination of which components add value, researchers and providers must improve the reporting of scheme content.

A lack of detailed information on intervention content and other study-relevant items is a known problem, despite the widespread endorsement of reporting guidelines [ 111 ], and this is reflected in the findings of this review. The incomplete reporting of behavioral interventions directly hampers the identification and understanding of how intervention characteristics actually influence behavior [ 112 ]. Therefore, we suggest using the PARS checklist [ 16 ] to report sufficient and clear information in a standardized way. Given the review findings, it may be beneficial to extend the checklist with a section on counseling support session(s) and how these are offered. The PARS checklist [ 16 ] can be employed directly at the protocol stage, as in one of our projects [ 113 ], or as a compass when designing interventions. Differentiating between scheme components strengthens comparability at the scheme level and can facilitate future research endeavors.

Studies show that individual components may have the potential to maximize PARS effectiveness [ 62 , 65 , 70 , 99 ]. This is important given the ambiguity in the existing evidence regarding the effect of PARS on PA and other health outcomes [ 8 , 18 ]; thus, we strongly recommend further investigation of the role of components in order to strengthen the case for investment in PARS. While we have identified potential components, their role in the effectiveness equation depends on their successful implementation. Only if a component under study is delivered as intended can its relevance to scheme success be determined. Thus, process evaluations of PARS [ 97 , 104 ] are essential to understanding components.

Strengths and weaknesses

The strength of this systematic review lies in the prior publication of the protocol [ 28 ], which reduced the risk of bias. We used a comprehensive search strategy, involved independent reviewers in the selection of studies for inclusion, and applied a standardized synthesis process in the identification of components. Additionally, the use of ICA [ 33 ] in combination with the PARS taxonomy [ 16 ] allowed for a systematic assessment of the intervention content of 36 models.

The results of the component analysis are, however, subject to two limitations, both closely related to the identification of the components and the rating of schemes as having or not having them. Firstly, because the identification of PARS components was partially subjective, confirmation bias cannot be ruled out; thus, the component list is by no means exhaustive, and we may have overlooked other potentially relevant components. Secondly, poor reporting may have compromised our ability to detect certain components within a PARS when they were, in fact, present. The reporting level of the included studies varied substantially, from very detailed (e.g., [ 60 , 73 ]) to a scant description of PARS content (e.g., [ 79 ]).

The terminology used to label components was inconsistent. Thus, during the ICA, the rating of a component as present or absent was based on its content rather than on the original label provided by the primary investigators. The identified components might also overlap with one another. For instance, individualization can be an inherent part of a person-centered approach, but one can also individualize the content of PA sessions in an arbitrary way, without actively involving the participant in the process. We therefore separated the concepts of person-centeredness and individualization, although they were often conflated in individual studies. One can also argue that a specific BCT, such as goal setting, could be considered a separate component [ 114 ] of PA interventions. However, we focused on scheme-level components, that is, on whether BCTs were incorporated at all. We applied the same reasoning to behavior change theory: while a particular type of theory can influence intervention effects, the question of whether a PARS being theory-based affects its outcomes is more relevant to this review.

Conclusions

Physical activity referral scheme components are an important source of complexity, and this review identified 19 components across 36 PARS models delivered in twelve countries. Further research is required to determine the influence of these components on PARS uptake, adherence, and PA behavior change. To facilitate this, we recommend that researchers and scheme providers report PARS designs in more detail. We also suggest process evaluations to examine the implementation of PARS designs and the role of components. This will increase our understanding of what works and enable scheme optimization.

Availability of data and materials

All data relevant to the results of this systematic review are available in the main tables and additional files. The data collected, including data extraction forms, can be made available upon reasonable request.

Abbreviations

BCT: Behavior change techniques

ICA: Intervention component analysis

NCD: Non-communicable diseases

PA: Physical activity

PAP: Physical activity on prescription

PARS: Physical activity referral scheme(s)

PRISMA: Preferred reporting items for systematic review and meta-analysis

RCT: Randomized controlled trial

SWiM: Synthesis without meta-analysis

References

Holman HR. The relation of the chronic disease epidemic to the health care crisis. ACR Open Rheumatol. 2020;2:167–73. https://doi.org/10.1002/acr2.11114 .

World Health Organization. Global action plan on physical activity 2018–2030: more active people for a healthier world. Geneva: World Health Organization; 2018.

World Health Organization. WHO package of essential noncommunicable (PEN) disease interventions for primary health care. Geneva: World Health Organization; 2020.

World Health Organization. Promoting physical activity through primary health care: a toolkit. Geneva: World Health Organization; 2021.

Warburton DER, Bredin SSD. Health benefits of physical activity: a systematic review of current systematic reviews. Curr Opin Cardiol. 2017;32:541–56. https://doi.org/10.1097/HCO.0000000000000437 .

Moses MW, Pedroza P, Baral R, Bloom S, Brown J, Chapin A, et al. Funding and services needed to achieve universal health coverage: applications of global, regional, and national estimates of utilisation of outpatient visits and inpatient admissions from 1990 to 2016, and unit costs from 1995 to 2016. Lancet Public Health. 2019;4:e49–73. https://doi.org/10.1016/S2468-2667(18)30213-5 .

World Health Organization. Global status report on physical activity 2022. Geneva: World Health Organization; 2022.

National Institute for Health and Care Excellence. Physical activity: exercise referral schemes (PH54). London: NICE; 2014.

WHO Regional Office for Europe. Integrated brief interventions for noncommunicable disease risk factors in primary care: the manual. Copenhagen: BRIEF project; 2022.

Pavey TG, Taylor AH, Fox KR, Hillsdon M, Anokye N, Campbell JL, et al. Effect of exercise referral schemes in primary care on physical activity and improving health outcomes: systematic review and meta-analysis. BMJ. 2011;343: d6462. https://doi.org/10.1136/bmj.d6462 .

Onerup A, Arvidsson D, Blomqvist Å, Daxberg E-L, Jivegård L, Jonsdottir IH, et al. Physical activity on prescription in accordance with the Swedish model increases physical activity: a systematic review. Br J Sports Med. 2019;53:383–8. https://doi.org/10.1136/bjsports-2018-099598 .

Hellgren MI, Jansson P-A, Wedel H, Lindblad U. A lifestyle intervention in primary care prevents deterioration of insulin resistance in patients with impaired glucose tolerance: a randomised controlled trial. Scand J Public Health. 2016;44:718–25. https://doi.org/10.1177/1403494816663539 .

Kallings LV, Sierra Johnson J, Fisher RM, Faire Ud, Ståhle A, Hemmingsson E, Hellénius M-L. Beneficial effects of individualized physical activity on prescription on body composition and cardiometabolic risk factors: results from a randomized controlled trial. Eur J Cardiovasc Prev Rehabil. 2009;16:80–4. https://doi.org/10.1097/HJR.0b013e32831e953a .

Eynon M, Foad J, Downey J, Bowmer Y, Mills H. Assessing the psychosocial factors associated with adherence to exercise referral schemes: a systematic review. Scand J Med Sci Sports. 2019;29:638–50. https://doi.org/10.1111/sms.13403 .

Campbell F, Holmes M, Everson-Hock E, Davis S, Buckley Woods H, Anokye N, et al. A systematic review and economic evaluation of exercise referral schemes in primary care: a short report. Health Technol Assess. 2015;19:1–110. https://doi.org/10.3310/hta19600 .

Hanson CL, Oliver EJ, Dodd-Reynolds CJ, Pearsons A, Kelly P. A modified Delphi study to gain consensus for a taxonomy to report and classify physical activity referral schemes (PARS). Int J Behav Nutr Phys Act. 2020;17:158. https://doi.org/10.1186/s12966-020-01050-2 .

Oliver EJ, Hanson CL, Lindsey IA, Dodd-Reynolds CJ. Exercise on referral: evidence and complexity at the nexus of public health and sport policy. Int J Sport Policy Politics. 2016;8:731–6. https://doi.org/10.1080/19406940.2016.1182048 .

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7. https://doi.org/10.2105/ajph.89.9.1322 .

European Physical Activity on Prescription model (EUPAP). https://www.eupap.org/ . Accessed 1 December 2022.

Kallings L. The Swedish approach on physical activity on prescription. Clin Health Promot. 2016;6:31–3.

Kallings L. Physical activity on prescription in the Nordic region: experiences and recommendations. 2010.

Clark AM. What are the components of complex interventions in healthcare? Theorizing approaches to parts, powers and the whole intervention. Soc Sci Med. 2013;93:185–93. https://doi.org/10.1016/j.socscimed.2012.03.035 .

Petticrew M, Anderson L, Elder R, Grimshaw J, Hopkins D, Hahn R, et al. Complex interventions and their implications for systematic reviews: a pragmatic approach. J Clin Epidemiol. 2013;66:1209–14. https://doi.org/10.1016/j.jclinepi.2013.06.004 .

Laake J-P, Fleming J. Effectiveness of physical activity promotion and exercise referral in primary care: protocol for a systematic review and meta-analysis of randomised controlled trials. Syst Rev. 2019;8:303. https://doi.org/10.1186/s13643-019-1198-y .

Arsenijevic J, Groot W. Physical activity on prescription schemes (PARS): do programme characteristics influence effectiveness? Results of a systematic review and meta-analyses. BMJ Open. 2017;7: e012156. https://doi.org/10.1136/bmjopen-2016-012156 .

Rowley N, Mann S, Steele J, Horton E, Jimenez A. The effects of exercise referral schemes in the United Kingdom in those with cardiovascular, mental health, and musculoskeletal disorders: a preliminary systematic review. BMC Public Health. 2018;18:949. https://doi.org/10.1186/s12889-018-5868-9 .

Lewin S, Hendry M, Chandler J, Oxman AD, Michie S, Shepperd S, et al. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR). BMC Med Res Methodol. 2017;17:76. https://doi.org/10.1186/s12874-017-0349-x .

Mino E, Geidl W, Naber I, Weissenfels A, Klamroth S, Gelius P, et al. Physical activity referral scheme components: a study protocol for systematic review and meta-regression. BMJ Open. 2021;11: e049549. https://doi.org/10.1136/bmjopen-2021-049549 .

Skivington K, Matthews L, Simpson SA, Craig P, Baird J, Blazeby JM, et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. BMJ. 2021;374: n2061.

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated August 2022): Cochrane; 2022. https://training.cochrane.org/handbook .

Page MJ, Mckenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372: n71. https://doi.org/10.1136/bmj.n71 .

Campbell M, Mckenzie JE, Sowden A, Katikireddi SV, Brennan SE, Ellis S, et al. Synthesis without meta-analysis (SWiM) in systematic reviews: reporting guideline. BMJ. 2020;368: l6890. https://doi.org/10.1136/bmj.l6890 .

Sutcliffe K, Thomas J, Stokes G, Hinds K, Bangpan M. Intervention Component Analysis (ICA): a pragmatic approach for identifying the critical features of complex interventions. Syst Rev. 2015;4:140. https://doi.org/10.1186/s13643-015-0126-z .

McKenzie JE, Brennan SE. Chapter 12: Synthesizing and presenting findings using other methods. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.3 (updated August 2022): Cochrane; 2022. https://training.cochrane.org/handbook/current/chapter-12 .

Kallings LV, Leijon M, Hellenius ML, Stahle A. Physical activity on prescription in primary health care: a follow-up of physical activity level and quality of life. Scand J Med Sci Sports. 2008;18:154–61. https://doi.org/10.1111/j.1600-0838.2007.00678.x .

Kallings LV, Leijon ME, Kowalski J, Hellénius M-L, Ståhle A. Self-reported adherence: a method for evaluating prescribed physical activity in primary health care patients. J Phys Act Health. 2009;6:483–92. https://doi.org/10.1123/jpah.6.4.483 .

Leijon ME, Bendtsen P, Nilsen P, Festin K, Ståhle A. Does a physical activity referral scheme improve the physical activity among routine primary health care patients? Scand J Med Sci Sports. 2009;19:627–36. https://doi.org/10.1111/j.1600-0838.2008.00820.x .

Leijon ME, Bendtsen P, Ståhle A, Ekberg K, Festin K, Nilsen P. Factors associated with patients self-reported adherence to prescribed physical activity in routine primary health care. BMC Fam Pract. 2010;11:38. https://doi.org/10.1186/1471-2296-11-38 .

Aittasalo M, Miilunpalo S, Kukkonen-Harjula K, Pasanen M. A randomized intervention of physical activity promotion and patient self-monitoring in primary health care. Prev Med. 2006;42:40–6. https://doi.org/10.1016/j.ypmed.2005.10.003 .

Bendrik R, Kallings LV, Bröms K, Kunanusornchai W, Emtner M. Physical activity on prescription in patients with hip or knee osteoarthritis: a randomized controlled trial. Clin Rehabil. 2021;35:1465–77. https://doi.org/10.1177/02692155211008807 .

Kallings LV, Johnson JS, Fisher RM, Faire Ud, Ståhle A, Hemmingsson E, Hellénius M-L. Beneficial effects of individualized physical activity on prescription on body composition and cardiometabolic risk factors: results from a randomized controlled trial. Eur J Cardiovasc Prev Rehabil. 2009b;16:80–4. https://doi.org/10.1097/HJR.0b013e32831e953a .

Lundqvist S, Börjesson M, Cider Å, Hagberg L, Ottehall CB, Sjöström J, Larsson MEH. Long-term physical activity on prescription intervention for patients with insufficient physical activity level—a randomized controlled trial. Trials. 2020;21:793. https://doi.org/10.1186/s13063-020-04727-y .

Romé A, Persson U, Ekdahl C, Gard G. Physical activity on prescription (PAP): costs and consequences of a randomized, controlled trial in primary healthcare. Scand J Prim Health Care. 2009;27:216–22. https://doi.org/10.3109/02813430903438734 .

Sørensen JB, Kragstrup J, Skovgaard T, Puggaard L. Exercise on prescription: a randomized study on the effect of counseling vs counseling and supervised exercise. Scand J Med Sci Sports. 2008;18:288–97. https://doi.org/10.1111/j.1600-0838.2008.00811.x .

Duda JL, Williams GC, Ntoumanis N, Daley A, Eves FF, Mutrie N, et al. Effects of a standard provision versus an autonomy supportive exercise referral programme on physical activity, quality of life and well-being indicators: a cluster randomised controlled trial. Int J Behav Nutr Phys Act. 2014;11:10. https://doi.org/10.1186/1479-5868-11-10 .

Murphy SM, Edwards RT, Williams N, Raisanen L, Moore G, Linck P, et al. An evaluation of the effectiveness and cost effectiveness of the National Exercise Referral Scheme in Wales, UK: a randomised controlled trial of a public health policy initiative. J Epidemiol Community Health. 2012;66:745–53. https://doi.org/10.1136/jech-2011-200689 .

Lawton BA, Rose SB, Elley CR, Dowell AC, Fenton A, Moyes SA. Exercise on prescription for women aged 40–74 recruited through primary care: two year randomised controlled trial. BMJ. 2008;337: a2509. https://doi.org/10.1136/bmj.a2509 .

Elley CR, Kerse N, Arroll B, Robinson E. Effectiveness of counselling patients on physical activity in general practice: cluster randomised controlled trial. BMJ. 2003;326:793. https://doi.org/10.1136/bmj.326.7393.793 .

Gallegos-Carrillo K, García-Peña C, Salmerón J, Salgado-de-Snyder N, Lobelo F. Brief counseling and exercise referral scheme: a pragmatic trial in Mexico. Am J Prev Med. 2017;52:249–59. https://doi.org/10.1016/j.amepre.2016.10.021 .

Harrison RA, Roberts C, Elton PJ. Does primary care referral to an exercise programme increase physical activity one year later? A randomized controlled trial. J Public Health (Oxf). 2005a;27:25–32. https://doi.org/10.1093/pubmed/fdh197 .

Isaacs AJ, Critchley JA, Tai SS, Buckingham K, Westley D, Harridge SDR, et al. Exercise Evaluation Randomised Trial (EXERT): a randomised trial comparing GP referral for leisure centre-based exercise, community-based walking and advice only. Health Technol Assess. 2007;11(1–165):iii–iv. https://doi.org/10.3310/hta11100 .

James EL, Ewald BD, Johnson NA, Stacey FG, Brown WJ, Holliday EG, et al. Referral for expert physical activity counseling: a pragmatic RCT. Am J Prev Med. 2017;53:490–9. https://doi.org/10.1016/j.amepre.2017.06.016 .

Kolt GS, Schofield GM, Kerse N, Garrett N, Ashton T, Patel A. Healthy Steps trial: pedometer-based advice and physical activity for low-active older adults. Ann Fam Med. 2012;10:206–12. https://doi.org/10.1370/afm.1345 .

Williams MH, Cairns SP, Simmons D, Rush EC. Face-to-face versus telephone delivery of the Green Prescription for Maori and New Zealand Europeans with type-2 diabetes mellitus: influence on participation and health outcomes. N Z Med J. 2017;130:71–9.

Fortier MS, Hogg W, O’Sullivan TL, Blanchard C, Sigal RJ, Reid RD, et al. Impact of integrating a physical activity counsellor into the primary health care team: physical activity and health outcomes of the Physical Activity Counselling randomized controlled trial. Appl Physiol Nutr Metab. 2011;36:503–14. https://doi.org/10.1139/h11-040 .

Livingston PM, Craike MJ, Salmon J, Courneya KS, Gaskin CJ, Fraser SF, et al. Effects of a clinician referral and exercise program for men who have completed active treatment for prostate cancer: a multicenter cluster randomized controlled trial (ENGAGE). Cancer. 2015;121:2646–54. https://doi.org/10.1002/cncr.29385 .

Martín-Borràs C, Giné-Garriga M, Puig-Ribera A, Martín C, Solà M, Cuesta-Vargas AI. A new model of exercise referral scheme in primary care: is the effect on adherence to physical activity sustainable in the long term? A 15-month randomised controlled trial. BMJ Open. 2018;8: e017211. https://doi.org/10.1136/bmjopen-2017-017211 .

Morén C, Welmer A-K, Hagströmer M, Karlsson E, Sommerfeld DK. The effects of “physical activity on prescription” in persons with transient ischemic attack: a randomized controlled study. J Neurol Phys Ther. 2016;40:176–83. https://doi.org/10.1097/NPT.0000000000000134 .

Pfeiffer BA, Clay SW, Conatser JRRR. A green prescription study: does written exercise prescribed by a physician result in increased physical activity among older adults? J Aging Health. 2001;13:527–38. https://doi.org/10.1177/089826430101300405 .

Riera-Sampol A, Bennasar-Veny M, Tauler P, Aguilo A. Effectiveness of physical activity prescription by primary care nurses using health assets: a randomized controlled trial. J Adv Nurs. 2020. https://doi.org/10.1111/jan.14649 .

Samdal GB, Meland E, Eide GE, Berntsen S, Abildsnes E, Stea TH, Mildestvedt T. The Norwegian Healthy Life Centre Study: a pragmatic RCT of physical activity in primary care. Scand J Public Health. 2019;47:18–27. https://doi.org/10.1177/1403494818785260 .

Swinburn BA, Walter LG, Arroll B, Tilyard MW, Russell DG. The green prescription study: a randomized controlled trial of written exercise advice provided by general practitioners. Am J Public Health. 1998;88:288–91. https://doi.org/10.2105/ajph.88.2.288 .

Taylor AH, Doust J, Webborn N. Randomised controlled trial to examine the effects of a GP exercise referral programme in Hailsham, East Sussex, on modifiable coronary heart disease risk factors. J Epidemiol Community Health. 1998;52:595–601. https://doi.org/10.1136/jech.52.9.595 .

Taylor AH, Taylor RS, Ingram WM, Anokye N, Dean S, Jolly K, et al. Adding web-based behavioural support to exercise referral schemes for inactive adults with chronic health conditions: the e-coachER RCT. Health Technol Assess. 2020;24:1–106. https://doi.org/10.3310/hta24630 .

Petrella RJ, Lattanzio CN, Shapiro S, Overend T. Improving aerobic fitness in older adults: effects of a physician-based exercise counseling and prescription program. Can Fam Physician. 2010;56:e191–200.

Shepich J, Slowiak JM, Keniston A. Do subsidization and monitoring enhance adherence to prescribed exercise? Am J Health Promot. 2007;22:2–5. https://doi.org/10.4278/0890-1171-22.1.2 .

Gademan MGJ, Deutekom M, Hosper K, Stronks K. The effect of exercise on prescription on physical activity and wellbeing in a multi-ethnic female population: a controlled trial. BMC Public Health. 2012;12:758. https://doi.org/10.1186/1471-2458-12-758 .

Webb R, Thompson JES, Ruffino J-S, Davies NA, Watkeys L, Hooper S, et al. Evaluation of cardiovascular risk-lowering health benefits accruing from laboratory-based, community-based and exercise-referral exercise programmes. BMJ Open Sport Exerc Med. 2016;2: e000089. https://doi.org/10.1136/bmjsem-2015-000089 .

Harrison RA, McNair F, Dugdill L. Access to exercise referral schemes – a population based analysis. J Public Health (Oxf). 2005b;27:326–30. https://doi.org/10.1093/pubmed/fdi048 .

Andersen P, Holmberg S, Årestedt K, Lendahls L, Nilsen P. Physical activity on prescription in routine health care: 1-year follow-up of patients with and without counsellor support. Int J Environ Res Public Health. 2020;17:5679. https://doi.org/10.3390/ijerph17165679 .

Sjöling M, Lundberg K, Englund E, Westman A, Jong MC. Effectiveness of motivational interviewing and physical activity on prescription on leisure exercise time in subjects suffering from mild to moderate hypertension. BMC Res Notes. 2011;4:1–7. https://doi.org/10.1186/1756-0500-4-352 .

Rödjer L, Jonsdottir IH, Börjesson M. Physical activity on prescription (PAP): self-reported physical activity and quality of life in a Swedish primary care population, 2-year follow-up. Scand J Prim Health Care. 2016;34:443–52. https://doi.org/10.1080/02813432.2016.1253820 .

Buckley B, Thijssen DH, Murphy RC, Graves LE, Cochrane M, Gillison F, et al. Pragmatic evaluation of a coproduced physical activity referral scheme: a UK quasi-experimental study. BMJ Open. 2020;10: e034580. https://doi.org/10.1136/bmjopen-2019-034580 .

Edmunds J, Ntoumanis N, Duda JL. Adherence and well-being in overweight and obese patients referred to an exercise on prescription scheme: a self-determination theory perspective. Psychol Sport Exerc. 2007;8:722–40. https://doi.org/10.1016/j.psychsport.2006.07.006 .

Dodd-Reynolds CJ, Vallis D, Kasim A, Akhter N, Hanson CL. The Northumberland Exercise Referral Scheme as a universal community weight management programme: a mixed methods exploration of outcomes, expectations and experiences across a social gradient. Int J Environ Res Public Health. 2020;17:5297. https://doi.org/10.3390/ijerph17155297 .

Hanson CL, Allin LJ, Ellis JG, Dodd-Reynolds CJ. An evaluation of the efficacy of the exercise on referral scheme in Northumberland, UK: association with physical activity and predictors of engagement. A naturalistic observation study. BMJ Open. 2013;3: e002849. https://doi.org/10.1136/bmjopen-2013-002849 .

Hanson CL, Neubeck L, Kyle RG, Brown N, Gallagher R, Clark RA, et al. Gender differences in uptake, adherence and experiences: a longitudinal, mixed-methods study of a physical activity referral scheme in Scotland, UK. Int J Environ Res Public Health. 2021;18:1700. https://doi.org/10.3390/ijerph18041700 .

Prior F, Coffey M, Robins A, Cook P. Long-term health outcomes associated with an exercise referral scheme: an observational longitudinal follow-up study. J Phys Act Health. 2019;16:288–93. https://doi.org/10.1123/jpah.2018-0442 .

Stewart L, Dolan E, Carver P, Swinton PA. Per-protocol investigation of a best practice exercise referral scheme. Public Health. 2017;150:26–33. https://doi.org/10.1016/j.puhe.2017.04.023 .

Ward M, Phillips CJ, Farr A, Harries D. Heartlinks—A real world approach to effective exercise referral. Int J Health Promot Educ. 2010;48:20–7. https://doi.org/10.1080/14635240.2010.10708176 .

Crone D, Johnston LH, Gidlow C, Henley C, James DVB. Uptake and participation in physical activity referral schemes in the UK: an investigation of patients referred with mental health problems. Issues Ment Health Nurs. 2008;29:1088–97. https://doi.org/10.1080/01612840802319837 .

Lord JC, Green F. Exercise on prescription: does it work? Health Educ J. 1995;54:453–64. https://doi.org/10.1177/001789699505400408 .

Dinan S, Lenihan P, Tenn T, Iliffe S. Is the promotion of physical activity in vulnerable older people feasible and effective in general practice? Br J Gen Pract. 2006;56:791–3.

Sørensen J, Sørensen JB, Skovgaard T, Bredahl T, Puggaard L. Exercise on prescription: changes in physical activity and health-related quality of life in five Danish programmes. Eur J Public Health. 2011;21:56–62. https://doi.org/10.1093/eurpub/ckq003 .

Bredahl T, Singhammer J, Roessler K. "Is intensity decisive?" Changes in levels of self-efficacy, stages of change and physical activity for two different forms of prescribed exercise. Sport Sci Rev. 2011;20:85. https://doi.org/10.2478/v10237-011-0056-1 .

Pardo A, Violán M, Cabezas C, García J, Miñarro C, Rubinat M, et al. Effectiveness of a supervised physical activity programme on physical activity adherence in patients with cardiovascular risk factors. Apunts Medicina de l’Esport. 2014;49:37–44. https://doi.org/10.1016/j.apunts.2014.02.001 .

van de Vijver PL, Schalkwijk FH, Numans ME, Slaets JPJ, van Bodegom D. Linking a peer coach physical activity intervention for older adults to a primary care referral scheme. BMC Prim Care. 2022;23:118. https://doi.org/10.1186/s12875-022-01729-4 .

Hesketh K, Jones H, Kinnafick F, Shepherd SO, Wagenmakers AJM, Strauss JA, Cocks M. Home-Based HIIT and traditional MICT prescriptions improve cardiorespiratory fitness to a similar extent within an exercise referral scheme for at-risk individuals. Front Physiol. 2021;12: 750283. https://doi.org/10.3389/fphys.2021.750283 .

Leijon ME, Faskunger J, Bendtsen P, Festin K, Nilsen P. Who is not adhering to physical activity referrals, and why? Scand J Prim Health Care. 2011;29:234–40. https://doi.org/10.3109/02813432.2011.628238 .

Lundqvist S, Börjesson M, Larsson MEH, Hagberg L, Cider Å. Physical Activity on Prescription (PAP), in patients with metabolic risk factors. A 6-month follow-up study in primary health care. PLoS ONE. 2017;12:e0175190. https://doi.org/10.1371/journal.pone.0175190 .

Romé Å, Persson U, Ekdahl C, Gard G. Costs and outcomes of an exercise referral programme – A 1-year follow-up study. Eur J Physiother. 2014;16:82–92. https://doi.org/10.3109/21679169.2014.886291 .

Foley L, Maddison R, Jones Z, Brown P, Davys A. Comparison of two modes of delivery of an exercise prescription scheme. N Z Med J. 2011;124:44–54.

Elley CR, Garrett S, Rose SB, O'Dea D, Lawton BA, Moyes SA, Dowell AC. Cost-effectiveness of exercise on prescription with telephone support among women in general practice over 2 years. Br J Sports Med. 2011;45:1223–9.

Elley R, Kerse N, Arroll B, Swinburn B, Ashton T, Robinson E. Cost-effectiveness of physical activity counselling in general practice. N Z Med J. 2004;117:U1216.

Kerse N, Elley CR, Robinson E, Arroll B. Is physical activity counseling effective for older people? A cluster randomized, controlled trial in primary care. J Am Geriatr Soc. 2005;53:1951–6. https://doi.org/10.1111/j.1532-5415.2005.00466.x .

Edwards RT, Linck P, Hounsome N, Raisanen L, Williams N, Moore L, Murphy S. Cost-effectiveness of a national exercise referral programme for primary care patients in Wales: results of a randomised controlled trial. BMC Public Health. 2013;13:1021. https://doi.org/10.1186/1471-2458-13-1021 .

Moore GF, Raisanen L, Moore L, Din NU, Murphy S. Mixed-method process evaluation of the Welsh National Exercise Referral Scheme. Health Educ. 2013;113:476–501. https://doi.org/10.1108/HE-08-2012-0046 .

Johnson NA, Ewald B, Plotnikoff RC, Stacey FG, Brown WJ, Jones M, et al. Predictors of adherence to a physical activity counseling intervention delivered by exercise physiologists: secondary analysis of the NewCOACH trial data. Patient Prefer Adherence. 2018;12:2537–43. https://doi.org/10.2147/PPA.S183938 .

Smith BJ, Bauman AE, Bull FC, Booth ML, Harris MF. Promoting physical activity in general practice: a controlled trial of written advice and information materials. Br J Sports Med. 2000;34:262–7. https://doi.org/10.1136/bjsm.34.4.262 .

Galaviz K, Lévesque L, Kotecha J. Evaluating the effectiveness of a physical activity referral scheme among women. J Prim Care Community Health. 2013;4:167–71. https://doi.org/10.1177/2150131912463243 .

Gallegos-Carrillo K, Reyes-Morales H, Pelcastre-Villafuerte B, García-Peña C, Lobelo F, Salmeron J, Salgado-de-Snyder N. Understanding adherence of hypertensive patients in Mexico to an exercise-referral scheme for increasing physical activity. Health Promot Int. 2020. https://doi.org/10.1093/heapro/daaa110 .

Gallegos-Carrillo K, Garcia-Peña C, Salgado-de-Snyder N, Salmerón J, Lobelo F. Levels of adherence of an exercise referral scheme in primary health care: effects on clinical and anthropometric variables and depressive symptoms of hypertensive patients. Front Physiol. 2021;12.

Lundqvist S, Cider Å, Larsson MEH, Hagberg L, Björk MP, Börjesson M. The effects of a 5-year physical activity on prescription (PAP) intervention in patients with metabolic risk factors. PLoS ONE. 2022;17:e0276868. https://doi.org/10.1371/journal.pone.0276868 .

Lambert J, Taylor A, Streeter A, Greaves C, Ingram WM, Dean S, et al. A process evaluation, with mediation analysis, of a web-based intervention to augment primary care exercise referral schemes: the e-coachER randomised controlled trial. Int J Behav Nutr Phys Act. 2022;19:128. https://doi.org/10.1186/s12966-022-01360-7 .

Taylor A, Taylor RS, Ingram W, Dean SG, Jolly K, Mutrie N, et al. Randomised controlled trial of an augmented exercise referral scheme using web-based behavioural support for inactive adults with chronic health conditions: the e-coachER trial. Br J Sports Med. 2021;55:444. https://doi.org/10.1136/bjsports-2020-103121 .

Bredahl T, Singhammer J. The influence of self-rated health on the development of change in the level of physical activity for participants in prescribed exercise. Sport Sci Rev. 2011;20:73–94. https://doi.org/10.2478/v10237-011-0065-0

Patel A, Keogh JWL, Kolt GS, Schofield GM. The long-term effects of a primary care physical activity intervention on mental health in low-active, community-dwelling older adults. Aging Ment Health. 2013;17:766–72. https://doi.org/10.1080/13607863.2013.781118 .

Gidlow C, Johnston LH, Crone D, Morris C, Smith A, Foster C, James DVB. Socio-demographic patterning of referral, uptake and attendance in physical activity referral schemes. J Public Health (Oxf). 2007;29:107–13.

Pavey T, Taylor A, Hillsdon M, Fox K, Campbell J, Foster C, et al. Levels and predictors of exercise referral scheme uptake and adherence: a systematic review. J Epidemiol Community Health. 2012;66:737–44. https://doi.org/10.1136/jech-2011-200354 .

Calonge Pascual S, Casajús Mallén JA, González-Gross M. Adherence factors related to exercise prescriptions in healthcare settings: a review of the scientific literature. Res Q Exerc Sport. 2020:1–10. https://doi.org/10.1080/02701367.2020.1788699 .

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. https://doi.org/10.1136/bmj.g1687 .

Michie S, Abraham C. Advancing the science of behaviour change: a plea for scientific reporting. Addiction. 2008;103:1409–10. https://doi.org/10.1111/j.1360-0443.2008.02291.x .

Weissenfels A, Klamroth S, Carl J, Naber I, Mino E, Geidl W, et al. Effectiveness and implementation success of a co-produced physical activity referral scheme in Germany: study protocol of a pragmatic cluster randomised trial. BMC Public Health. 2022;22:1545. https://doi.org/10.1186/s12889-022-13833-2 .

McEwan D, Harden SM, Zumbo BD, Sylvester BD, Kaulius M, Ruissen GR, et al. The effectiveness of multi-component goal setting interventions for changing physical activity behaviour: a systematic review and meta-analysis. Health Psychol Rev. 2016;10:67–88. https://doi.org/10.1080/17437199.2015.1104258 .


Acknowledgements

Not applicable.

Funding

Open Access funding enabled and organized by Projekt DEAL. This systematic review was conducted within the BewegtVersorgt project, which is supported by the Federal Ministry of Health on the basis of a resolution of the German Bundestag (ZMV I 1—2519FSB109). No direct funding was sought for this paper. The Federal Ministry of Health had no input into any of the review stages, including review design, data extraction, analysis, results, and writing.

Author information

Authors and Affiliations

Department of Sport Science and Sport, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Gebbertstraße 123B, 91058, Erlangen, Germany

Eriselda Mino, Inga Naber, Anja Weissenfels, Jane Saftig, Sarah Klamroth, Peter Gelius, Karim Abu-Omar, Klaus Pfeifer & Wolfgang Geidl

School of Health and Social Care, Edinburgh Napier University, Sighthill Campus, Edinburgh, EH11 4DN, UK

Coral L. Hanson & Sheona McHale

WHO European Office for Prevention and Control of Noncommunicable Diseases (NCD Office), Copenhagen, Denmark

Stephen Whiting, Kremlin Wickramasinghe & Gauden Galea


Contributions

WG and KP supervised EM, who coordinated the systematic review process and conducted the literature search, collection, and screening, as well as data extraction, synthesis, and table and manuscript preparation. AW, IN, and JS participated in the literature screening. JS assisted with data extraction. CLH and SM conducted the content analysis. EM drafted the manuscript, tables, and supplementary materials. CLH edited the manuscript. All authors (WG, KP, CLH, AW, IN, SK, SM, JS, SW, KW, GG, KA, and PG) critically reviewed and approved the final manuscript.

Corresponding author

Correspondence to Eriselda Mino .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors have no interests to declare.

SW, KW and GG are staff members of the WHO. The authors alone are responsible for the views expressed in this publication, and they do not necessarily represent the views, decisions, or policies of the institutions with which they are affiliated.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Search strategy results. This file contains the systematic search strategy and results for all the literature databases.

Additional file 2. Overview of included studies sorted by comparison group. This file contains the characteristics of all the studies included in the systematic review, including the main results.

Additional file 3. Description of PARS components. This file contains the description of the nineteen components identified through the content analysis.

Additional file 4. Overview of PARS characteristics sorted by country. This file contains the characteristics of PARS models included in the systematic review.

Additional file 5. PARS identified worldwide. This file provides an overview of all the PARS models identified during the screening for eligible articles, including those that we were not able to include in this review.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Mino, E., Hanson, C.L., Naber, I. et al. A systematic review and narrative synthesis of physical activity referral schemes’ components. Int J Behav Nutr Phys Act 20 , 140 (2023). https://doi.org/10.1186/s12966-023-01518-x


Received: 21 April 2023

Accepted: 20 September 2023

Published: 27 November 2023

DOI: https://doi.org/10.1186/s12966-023-01518-x


Keywords

  • Physical activity referral scheme
  • Physical activity prescription
  • Exercise prescription

International Journal of Behavioral Nutrition and Physical Activity

ISSN: 1479-5868




