OPINION article

Guiding undergraduates through the process of first authorship.

Traci A. Giuliano*

  • Department of Psychology, Southwestern University, Georgetown, TX, United States

Introduction

Dozens of excellent papers have recently been written that describe best practices for publishing journal articles with undergraduates (see “Engaging Undergraduates in Publishable Research: Best Practices,” Frontiers in Psychology); for the most part, these involve students as co-authors in general rather than as lead authors. In this paper, I specifically focus on how to guide undergraduates through the process of first authorship. After describing potential barriers, I discuss issues of authorship contribution before outlining several successful strategies I've developed during my 24 years of collaborating with undergraduates. Although mentoring students to be first authors can be challenging, the rewards can also be immense—for both the students and the faculty mentors who are up to the challenge.

The Undergraduate First Author: A Unicorn?

A literature search revealed not a single article on the topic of undergraduates publishing as first author. Without any data, it's hard to know for certain how common it is for undergraduates to publish as first authors, but informal discussions with psychology colleagues around the world who collaborate with undergraduates (and examinations of faculty vitae) suggest that it is far less common than undergraduates publishing as non-lead authors.

Barriers (Real or Perceived) to Undergraduate First Authorship

Because it is rare to see undergraduate first authors, many faculty are likely unaware that at least some undergraduates can—with proper training, encouragement, and careful mentoring—be capable of serving as first authors on papers in refereed journals. Even if faculty members are made aware of this fact (as I hope to accomplish with this article), other barriers exist. For example, many faculty work under a reward system in which publications (and first-author publications in particular) determine tenure, promotion, pay, likelihood of securing grants, and job security (e.g., Costa and Gatz, 1992; Fine and Kurdek, 1993; Wilcox, 1998). The primary tradeoff is that the time it takes to mentor undergraduates through first authorship is generally much longer than the time it would take for the faculty member to be the lead author. The great experience provided to the student (see Matthews and Rosa, 2018), therefore, can come at the cost of decreased productivity (e.g., fewer publications overall, fewer first-author publications, publications in lower-tier journals), which could be problematic for faculty at institutions that don't highly value faculty-undergraduate research. Finally, recent trends in psychological science, such as the difficulty of publishing single-study papers in some subfields and the “open science” movement calling for large sample sizes, pre-registration, and replication (see Chambers, 2017; Nelson et al., 2018), can seem like roadblocks to publishing with undergraduates. Fortunately, faculty from diverse subfields have come up with creative solutions involving high-quality replications (e.g., McKelvie and Standing, 2018; Wagge et al., 2019), preregistered projects (e.g., Strand and Brown, 2019), large-scale single-experiment class projects designed for publication (e.g., LoSchiavo, 2018; Mickley Steinmetz and Reid, 2019), and multi-study projects involving student coauthors across years (e.g., Grysman and Lodi-Smith, 2019; Holmes and Roberts, 2019).

Authorship Contribution and Order of Authorship

Much has been written about the ethics of assigning authorship credit in the sciences and social sciences (see Maurer, 2017, for a review), and attempts have been made to fairly determine authorship order by (a) surveying past authors about their experiences (e.g., Wagner et al., 1994; Sandler and Russell, 2005; Moore and Griffin, 2006; Geelhoed et al., 2007), (b) assessing reactions to hypothetical authorship scenarios (e.g., Costa and Gatz, 1992; Bartle et al., 2000; Apgar and Congress, 2005), (c) proposing step-by-step decision-making models (Fine and Kurdek, 1993; Foster and Ray, 2012; Maurer, 2017), and (d) outlining quantitative systems that assign weighted points to tasks associated with publishing (e.g., Winston, 1985; Kosslyn, 2015). The consensus seems to be that writing the manuscript is either the most important factor in determining first authorship (e.g., Winston, 1985; Bartle et al., 2000; Apgar and Congress, 2005) or at least tied with idea origination as the most important factor (Wagner et al., 1994; Kosslyn, 2015). The “authorship determination scorecard” on the American Psychological Association's website (https://www.apa.org/science/leadership/students/authorship-paper.aspx), for example, allots 170 of 1,040 points (16%) for idea generation/refinement; 110 points (11%) for design/measures; 160 points (15%) for statistical analysis; and 600 points (58%) for writing/revision.
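As a quick check on the scorecard arithmetic, the point allocation above can be converted into the percentages cited in the text (a minimal sketch; the shortened task labels are paraphrases of the scorecard categories):

```python
# Point allocation from the APA authorship determination scorecard,
# as summarized above (1,040 points total).
points = {
    "idea generation/refinement": 170,
    "design/measures": 110,
    "statistical analysis": 160,
    "writing/revision": 600,
}

total = sum(points.values())
assert total == 1040  # matches the scorecard's stated total

# Each task's share of the total, rounded to a whole percentage:
# 16%, 11%, 15%, and 58%, as reported in the text.
for task, pts in points.items():
    print(f"{task}: {pts} points ({round(100 * pts / total)}%)")
```

Writing and revision alone account for well over half of the available credit, which is why taking responsibility for the manuscript is such a defensible basis for first authorship.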

Given the clear importance of writing as a factor in first authorship, and because students' contributions to idea generation, design, and analysis are often similar to those of their collaborators up to this point, I always require students to take responsibility for the manuscript drafts and revisions (with my feedback and editing help) to earn their first authorship. I am typically second author (consistent with the “order of contribution” norm in social psychology) because I play a significant role in the publication process, but a smaller one than the first author. The remaining student authors tend to be less involved (consistent with Geelhoed et al., 2007) because of lack of time or interest, or geographical distance. Nonetheless, all authors are always asked to read and approve the final manuscript before submission.

Paths to Undergraduate First Authorship

My mentor, the late Dan Wegner (a social psychologist who ended his career at Harvard but started at a small liberal arts university doing research with undergraduates), advised me as I began my career at an undergraduate-only institution that “the best undergraduates are often better than graduate students” because they are “not only very bright, but often are more intrinsically motivated—if you hold them to high standards, they will meet or exceed them, and you can publish great work with them.” I followed his advice, and indeed have published the vast majority of my papers with undergraduates as co-authors, and especially as first authors: Of my 33 post-graduate school publications, 29 involve a total of 68 undergraduate co-authors, and 24 of the 29 are first-authored by undergraduates.1

In my experience, there have been three primary paths to undergraduate first authorship, each representing approximately one-third of my publications with students. First, during our one-semester research methods course with a lab (capped at 12 students), sophomores and sometimes juniors complete two original projects and manuscript write-ups, and conducting high-quality, original projects is a big factor (see LoSchiavo, 2018); about 10% of my class projects lead to publication. Second, each faculty member has a capstone course in which they work with 5 to 6 seniors (or sometimes juniors) for two consecutive semesters; about 90% of my capstone projects lead to publication.2 Third, I occasionally accept projects for individual honors theses or independent studies (independent research outside of capstone is rare in our department, perhaps one senior every several years) if I think they are publishable; about 90% of these projects lead to publication.

Best Practices

Here are some of the strategies I've developed over the years to successfully mentor students to first authorship:

1. Provide good writing instruction throughout the curriculum. It is crucial to teach good writing skills throughout the curriculum (Soysa et al., 2013) so that the largest possible number of students has a strong background and the potential capacity to be first author. (My university has 1,400 students, and we graduate 25–30 psychology majors annually, so with 4–5 faculty members striving to publish with students each year, this step is crucial.) Our department places a strong emphasis on students learning APA style as well as proper grammar (see Giuliano, 2019), and all instructors provide copious feedback on student drafts. Although group writing is popular elsewhere (e.g., small groups of students who write APA-style papers together on their research methods project), instructors in our department require individual writing (as well as peer review) in both research methods and capstone courses so that every student improves and gets the maximum amount of practice.

2. Select the most “first-author-ready” students. I've found that it is important to select students with certain characteristics—those who not only have the strongest writing skills, but who are hardworking, independent, intellectually curious, and intrinsically motivated.3 The process starts when I read a paper (e.g., a research methods final paper, a senior capstone paper, or an honors thesis) that has good results, that is “close enough” that I can envision grooming it into a publishable paper, and that has been written by a student with the characteristics described above.

3. Explain what authorship entails. At that point, I ask the student if she or he would like to first-author a publication under my supervision (virtually every invitee will have already first-authored a conference presentation with me, so I know that we are a “good fit” and that they know exactly what to expect when working with me). As recommended by Foster and Ray (2012), I explain which contributions determine first authorship: I tell them they have already earned authorship by making significant contributions in the idea, design, and analysis stages, as have their student collaborators, so they will earn first authorship by being responsible for writing the manuscript, with plenty of feedback and supervision from me. To provide “informed consent” about this decision (Fine and Kurdek, 1993), I outline clear expectations (i.e., that they can expect to write 10–15 drafts or more over a period of several months, that their writing will be held to a much higher standard than ever before, and that at times the process could get frustrating and tedious) and let them know that they are free to accept or decline without any adverse consequences (about 95% of students accept). I also tell them that first authorship is not guaranteed and that authorship order may need to be revised if contributions change (only once or twice in 24 years has first authorship changed; my students have generally been excellent at following through with their commitments).

4. Get them ready to write. Once students agree to be first author, the next step is to provide them with exemplar articles (I use past publications from my own students). I then set an initial calendar of deadlines (e.g., when their drafts are due to me, when my feedback is due to them); I usually draft this first and then allow students to make modifications according to their schedule. Finally, I have students research and take notes on potential target journals (we then discuss the pros and cons together and decide where to send the paper once finished).

5. Find time to write. Finding time to write can be tricky, because students are often either busy with other courses or have moved on to jobs or graduate school. Summers are usually optimal for both students and me. For research methods class projects, I usually suggest writing during the summer after the course is over (setting the final deadline before the new semester starts). If students are in town, we meet in person occasionally but generally trade drafts over email and have in-person or phone meetings when necessary. Writing with students who have graduated is often more difficult because those with jobs are busy working during the day and no longer in “academic mode,” so I find that it takes more patience and encouragement to get them back into the writing. If they are in graduate school, they are already immersed in research, which is helpful, but projects with their graduate advisor compete for their attention. Students who have graduated are also more likely to be out of town, which is only a problem if in-person meetings (e.g., to re-analyze data) are necessary, although online meeting applications (e.g., FaceTime, Skype) work fine. Ultimately, it may take some creativity to find the time and space for writing, as in “writing weekends” (see Scherman, under review), but in the end, it is worth it.

Publishing with students is truly my favorite part of being a professor—the thrill I get upon seeing a student's name in print (especially in the lead position) is often greater than the thrill I get from seeing my own name. As others have argued (e.g., Malachowski, 2012 ; Maurer, 2017 ), when working with students, it is best to treat them as equals and true partners in the collaboration process, with high levels of autonomy and a strong focus on student learning. In doing so, the rewards—for both students and faculty alike—are incredibly worthwhile.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

I'd like to thank Sarah Matthews, Carin Perilloux, Abby Riggs, Marissa Rosa, and Toni Wegner for their helpful comments on earlier drafts of this paper.

1. Four are in the Psi Chi Journal of Undergraduate Research; the remainder are in professional, peer-reviewed journals.

2. It should be noted that our department recently switched from an informal system, in which either faculty or students approached the other about the possibility of research collaboration, to a more formal capstone assignment process in which all students (during their required research methods course) complete a written application describing their interest in conducting a research-based capstone and rank their preferences among faculty labs. This process not only improved transparency, but also provided more equitable information, access, and opportunity for all students, who are assigned to labs by fit and interest. Recent articles have addressed both the benefits of increasing diversity and inclusion in undergraduate research and publication (e.g., Peifer, 2019) and specific strategies for doing so (e.g., Ahmad et al., under review; Chan, 2019; Scisco et al., 2019) and are highly recommended.

3. Approximately half of my first authors went on to Ph.D. programs in psychology; the other half went to law school, medical school, or master's programs, or did not pursue a graduate degree.

Apgar, D., and Congress, E. (2005). Authorship credit: a national study of social work educators' beliefs. J. Soc. Work Educ. 41, 101–112. doi: 10.5175/JSWE.2005.200300356

Bartle, S. A., Fink, A. A., and Hayes, B. C. (2000). Psychology of the scientist: LXXX. Attitudes regarding authorship issues in psychological publications. Psychol. Rep. 86, 771–788. doi: 10.2466/pr0.2000.86.3.771

Chambers, C. D. (2017). The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice . Princeton, NJ: Princeton University Press.

Chan, E. (2019). Student research and publication: strategic planning for inclusion using a systems mapping approach. Front. Psychol. 10:6. doi: 10.3389/fpsyg.2019.00006

Costa, M. M., and Gatz, M. (1992). Determination of authorship credit in published dissertations. Psychol. Sci. 3, 354–357. doi: 10.1111/j.1467-9280.1992.tb00046.x

Fine, M. A., and Kurdek, L. A. (1993). Reflections on determining authorship credit and authorship order on faculty-student collaborations. Am. Psychol. 48, 1141–1147. doi: 10.1037/0003-066X.48.11.1141

Foster, R. D., and Ray, D. C. (2012). An ethical decision-making model to determine authorship credit in published faculty-student collaborations. Counsel. Values 57, 214–228. doi: 10.1002/j.2161-007X.2012.00018.x

Geelhoed, R. J., Phillips, J. C., Rischer, A. R., Shpungin, E., and Gong, Y. (2007). Authorship decision making: an empirical investigation. Ethics Behav. 17, 95–115. doi: 10.1080/10508420701378057

Giuliano, T. (2019). The “Writing Spiral”: a handy tool for training undergraduates to write publishable-quality manuscripts. Front. Psychol. 10:915. doi: 10.3389/fpsyg.2019.00915

Grysman, A., and Lodi-Smith, J. (2019). Methods for conducting and publishing narrative research with undergraduates. Front. Psychol. 9:2771. doi: 10.3389/fpsyg.2018.02771

Holmes, K. J., and Roberts, T. (2019). Mentor as sculptor, makeover artist, coach or CEO: evaluating contrasting models for mentoring undergraduates toward publishable research. Front. Psychol. 10:231. doi: 10.3389/fpsyg.2019.00231

Kosslyn, S. M. (2015). “Authorship: credit where credit is due,” in Ethical Challenges in the Behavioral and Brain Sciences: Case Studies and Commentaries , eds R. J. Sternberg and S. T. Fiske (New York, NY: Cambridge University Press), 50–52. doi: 10.1017/CBO9781139626491.021

LoSchiavo, F. M. (2018). Incorporating a professional-grade all-class project into a research methods course. Front. Psychol. 9:2143. doi: 10.3389/fpsyg.2018.02143

Malachowski, M. R. (2012). “Living in parallel universes: the great faculty divide between product-oriented and process-oriented scholarship,” in Faculty Support and Undergraduate Research: Innovations in Faculty Role Definition, Workload, and Reward , eds N. H. Hensel and E. L. Paul (Washington, DC: Council on Undergraduate Research), 7–18.

Matthews, S. J., and Rosa, M. N (2018). Trials, tribulations, and triumphs: research and publishing from the undergraduate perspective. Front. Psychol. 9:2411. doi: 10.3389/fpsyg.2018.02411

Maurer, T. (2017). Guidelines for authorship credit, order, and co-inquirer learning in collaborative faculty-student SoTL projects. Teach. Learn. Inquiry 5, 1–17. doi: 10.20343/teachlearninqu.5.1.9

McKelvie, S., and Standing, L. G. (2018). Teaching psychology research methodology across the curriculum to promote undergraduate publication: an eight-course structure and two helpful practices. Front. Psychol. 9:2295. doi: 10.3389/fpsyg.2018.02295

Mickley Steinmetz, K. R., and Reid, A. K. (2019). Providing outstanding undergraduate research experiences and sustainable faculty development in load. Front. Psychol. 10:196. doi: 10.3389/fpsyg.2019.00196

Moore, M. T., and Griffin, B. W. (2006). Identification of factors that influence authorship name placement and decision to collaborate in peer-reviewed, education-related publications. Stud. Educ. Eval. 32, 125–135. doi: 10.1016/j.stueduc.2006.04.004

Nelson, L. D., Simmons, J., and Simonsohn, U. (2018). Psychology's renaissance. Annu. Rev. Psychol. 69, 511–534. doi: 10.1146/annurev-psych-122216-011836

Peifer, J. S. (2019). Context and reasons for bolstering diversity in undergraduate research. Front. Psychol. 10:336. doi: 10.3389/fpsyg.2019.00336

Sandler, J. C., and Russell, B. L. (2005). Faculty-student collaborations: ethics and satisfaction in authorship credit. Ethics Behav. 15, 65–80. doi: 10.1207/s15327019eb1501_5

Scisco, J. L., McCabe, J. A., Mendoza, A. T. O., Fallon, M., and Domenech Rodríguez, M. M. (2019). Strategies for selecting, managing, and engaging undergraduate co-authors: a multi-site perspective. Front. Psychol. 10:325. doi: 10.3389/fpsyg.2019.00325

Soysa, C. K., Dunn, D. S., Dottolo, A. L., Burns-Glover, A. L., and Gurung, R. A. R. (2013). Orchestrating authorship: teaching writing across the psychology curriculum. Teach. Psychol. 40, 88–97. doi: 10.1177/0098628312475027

Strand, J. F., and Brown, V. A. (2019). Publishing open, reproducible research with undergraduates. Front. Psychol. 10:564. doi: 10.3389/fpsyg.2019.00564

Wagge, J. R., Brandt, M. J., Lazarevic, L. B., Legate, N., Christopherson, C., Wiggins, B., et al. (2019). Publishing research with undergraduate students via replication work: the collaborative replications and extensions project. Front. Psychol. 10:247. doi: 10.3389/fpsyg.2019.00247

Wagner, M. K., Dodds, A., and Bundy, M. B. (1994). Psychology of the scientist: LXVII. Assignment of authorship credit in psychological research. Psychol. Rep. 74, 179–187. doi: 10.2466/pr0.1994.74.1.179

Wilcox, L. J. (1998). Authorship: the coin of the realm, the source of complaints. J. Am. Med. Assoc. 280, 216–217. doi: 10.1001/jama.280.3.216

Winston, R. B. Jr. (1985). A suggested procedure for determining order of authorship in research publications. J. Counsel. Dev. 63, 515–518. doi: 10.1002/j.1556-6676.1985.tb02749.x

Undergraduate* First-Author Publications

*Butterworth, S. E., Giuliano, T. A., *White, J. R., *Cantu, L., & *Fraser, K. C. (in press). Is he flirting with me? How sender gender influences emoji interpretation. Frontiers in Psychology.

*Matthews, S. J., Giuliano, T. A., *Rosa, M. N., *Thomas, K. H., *Swift, B. A., *Ahearn, N. D., *Garcia, A. G., *Smith, S. R., *Niblett, C. M., & *Mills, M. M. (2018). The battle against bedroom boredom: Development and validation of a brief measure of sexual novelty in relationships. Canadian Journal of Human Sexuality, 27, 277-287.

*Matthews, S. J., Giuliano, T. A., *Thomas, K. H., *Straup, M. L., & *Martinez, M. A. (2018). Not cool, dude: Perceptions of solicited vs. unsolicited sext messages from men and women. Computers in Human Behavior, 88, 1-4. 10.1016/j.chb.2018.06.14

*Matthews, S. J., Giuliano, T. A., *Rosa, M. N., *Thomas, K. H., & *Swift, B. A. (2018). Sexual Novelty Scale. In Handbook of Sexuality-Related Measures. Thousand Oaks, CA: Sage Publications.

*Hutzler, K. T., Giuliano, T. A., *Herselman, J. R., & *Johnson, S. M. (2015). Three's a crowd: Public awareness and (mis)perceptions of polyamory. Psychology & Sexuality, 7, 69-87. 10.1080/19419899.2015.1004102

*Johnson, S. M., Giuliano, T. A., *Herselman, J. R., & *Hutzler, K. T. (2015). Development of a brief measure of attitudes towards polyamory. Psychology & Sexuality, 6, 325-339. 10.1080/19419899.2014.1001774

*Blomquist, B. A., & Giuliano, T. A. (2012). “Do you love me, too?” Perceptions of responses to “I love you.” North American Journal of Psychology, 14, 407-418.

*Gomillion, S. C., & Giuliano, T. A. (2011). The influence of media role models on gay, lesbian, and bisexual identity. Journal of Homosexuality, 58, 330-354.

*Howell, J., & Giuliano, T. A. (2011). The effects of expletive use and team gender on perceptions of coaching effectiveness. Journal of Sport Behavior, 34, 69-81.

*Howell, J., *Egan, P., *Ackley, B., & Giuliano, T. A. (2011). The reverse double standard in perceptions of student-teacher sexual relationships: The role of gender, initiation, and power. Journal of Social Psychology, 151(2), 180-200.

*Egan, P., & Giuliano, T. A. (2009). Unaccommodating attitudes: Perceptions of students with learning disabilities as a function of accommodation use and test performance. North American Journal of Psychology, 11, 487-500.

*Osborne, R. L., *Ackley, B. D., & Giuliano, T. A. (2008). The “skinny” on coffee drinkers: Gender differences in healthy beverage choice. Psi Chi Journal of Undergraduate Research, 13(4), 159-163.

*Riggs, A. L., & Giuliano, T. A. (2007). Running in the family or swimming in the gene pool: The role of family history and genetic risk in individuals' illness perceptions. Journal of Health Psychology, 12, 883-894.

*Stanzer, M., Guarraci, F., Giuliano, T. A., & Sims, A. (2007). Paramedic or EMT-basic partner? Study evaluates preferred partner types & the effect of partners on work-related stress levels. Journal of Emergency Medical Services, 32, 72-74.

*Knight, J. L., & Giuliano, T. A. (2003). Blood, sweat, and jeers: The impact of the media's heterosexist portrayals on perceptions of male and female athletes. Journal of Sport Behavior, 26, 272-284.

*Wilke, K. M., *Turner, K. L., & Giuliano, T. A. (2003). Smoke screens: Cross-cultural effectiveness of anti-smoking messages. North American Journal of Psychology, 5, 431-442.

*Dodd, E. H., Giuliano, T. A., *Boutell, J. M., & *Moran, B. E. (2001). Respected or rejected: Perceptions of women who confront sexist remarks. Sex Roles, 45, 567-577.

*Knight, J. L., & Giuliano, T. A. (2001). She's a “looker”; he's a Laker: The consequences of gender-stereotypical portrayals of male and female athletes by the print media. Sex Roles, 45, 217-229.

*Knight, J. L., Giuliano, T. A., & *Sanchez-Ross, M. G. (2001). Famous or infamous? The influence of celebrity status and race on perceptions of responsibility for rape. Basic and Applied Social Psychology, 23, 183-190.

*Dickson, A., Giuliano, T. A., *Morris, J. C., & *Cass, K. L. (2001). Eminem versus Charley Pride: Race, stereotypes, and perceptions of rap and country music performers. Psi Chi Journal of Undergraduate Research, 6, 175-179.

*Kirkendall, K. D., *Dixon, D. P., Giuliano, T. A., & *Raney, A. E. (2001). The bold and the beautiful: The effect of physical attractiveness and extraversion on desirability. Psi Chi Journal of Undergraduate Research, 6, 180-186.

*Cohorn, C. A., & Giuliano, T. A. (1999). Predictors of adjustment and institutional attachment in first-year college students. Psi Chi Journal of Undergraduate Research, 4, 47-56.

*Cox, C. B., & Giuliano, T. A. (1999). Constructing obstacles vs. making excuses: Examining perceivers' reactions to behavioral and self-reported self-handicapping. Journal of Social Behavior and Personality, 14, 419-432.

*Fiala, S. E., Giuliano, T. A., *Remlinger, N. M., & *Braithwaite, L. C. (1999). Lending a helping hand: The effects of sex stereotypes and gender on likelihood of helping. Journal of Applied Social Psychology, 29, 2164-2176.

Keywords: undergraduate research, undergraduate publication, publishing, first authorship, faculty-student collaboration

Citation: Giuliano TA (2019) Guiding Undergraduates Through the Process of First Authorship. Front. Psychol . 10:857. doi: 10.3389/fpsyg.2019.00857

Received: 05 February 2019; Accepted: 01 April 2019; Published: 18 April 2019.

Copyright © 2019 Giuliano. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Traci A. Giuliano, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Author Insights - What to expect when publishing your first research paper.

Advice and tips from influential researchers who have been in the exact same starting position as you.

Are you considering publishing your research? Do you want to understand what to expect and learn some tips and tricks? Do you know the benefits and opportunities publishing your paper will bring?

In this case study, we talk to some influential researchers and dig down into what to expect when publishing your first paper, including what publishing was like for them and what they did after publishing their first paper.

Beyond these interviews, we highlight the key tips for researchers new to publishing, and identify the best publishing support tools researchers should be aware of, such as guidance on writing your journal article, the submission checklist, and sites such as Editage.

Q&A with Dr Eden Morales-Narváez

We chat with Dr Eden Morales-Narváez, winner of the Emerging Leaders 2020 award in JPhys Photonics, who has published 20 papers, starting with his first paper on “Plasmonic colored nanopaper: a potential preventive healthcare tool against threats emerging from uncontrolled UV exposure,” published in 2019. Eden is now an editorial board member for the JPhys Photonics journal.

What made you decide to publish your first research paper?

“A great mentor combined with surprising results led me to publish my first paper.

I joined Professor Arben Merkoçi’s team (Catalan Institute of Nanoscience and Nanotechnology) in early 2011. In those times, our research team was very motivated by graphene: the wonder material. So, I performed some experiments with graphene oxide and discovered that we were able to quench the photoluminescence of quantum dots with an almost 100% efficiency.

At the beginning, I was not convinced by these surprising results (I believed that perhaps my results were wrong), but a chat with my mentor was really encouraging, and then my mind changed completely. I remember that he said: “Yes, Eden, people in science get surprising results, people do that!” So, my mentor vitalized my self-confidence, and this was simply the starting point of my first research paper and somehow my scientific journey. Nowadays, I still publish papers taking advantage of the wonder material and its photoluminescence quenching capabilities.”

What do you know now that you wish you knew when you were starting the process to publish your paper?

“I have to confess that at the beginning I was looking for a publication related to the biomedical field, because my thesis was expected to focus on that field. However, when I published my first paper in the materials field, I realized that early career researchers can modify the scope of their thesis to eventually unveil new steps and future opportunities in their career.

Besides, I am now aware that writing skills are your Swiss Army knife for succeeding in the process of publishing your paper. Editors and reviewers demand high-quality papers, but they also enjoy manuscripts that are nicely and clearly written, as do readers. Professor Osvaldo Oliveira (University of São Paulo) says that writing skills are your best investment as a scientist, and he also points out that a scientist with good writing skills is much better equipped than a scientist with other kinds of skills or resources. I could not agree more with Professor Oliveira.”

What did you enjoy/not-enjoy about publishing your research?

“I enjoyed sharing and discussing my published results in conferences and presentations. Seeing that my peers were reading and citing my research was also very satisfying. But no one enjoys rejection of their manuscripts, which is also part of the journey. Rejection is discouraging, but it is also an opportunity to change the scope of your research and/or improve the quality of your manuscript.”

How can IOP Publishing help early career researchers who are starting out in their publishing journey?

“Offer webinars on writing skills, promote all types of tools that are valuable on such a journey, and explain their particularities and usefulness; for example, scientific search engines, journal suggesters/finders, plagiarism detectors, journal citation reports, research metrics, etc.”

Are there any tips, tools or websites that you would recommend?

  • Feel passionate about your field or research topic. Mix that passion with patience and resilience, which are crucial abilities to develop in a scientific career.
  • Seek a mentor whose results are inspirational and motivating for you. Mentors shape not only your current career but also its future.
  • Invest in your writing skills (as highlighted by Professor Oliveira).
  • To publish innovative literature, you have to be aware of the state of the art in your field.
  • Read, read and read more, especially the journals you would like to publish in.
  • Be critical: spot agreements, gaps and controversies in your field.
  • One of your goals should be to write and publish a review article related to your thesis/research topic.
  • Avoid plagiarism; this type of misconduct can easily be spotted by peers using tools like iThenticate.
  • Promote your research on social media using messages that are easy to understand. Social media is a perfect way to reach society, decision-makers, colleagues and stakeholders. Follow and interact with inspirational colleagues on social media.
  • If you are not sure about the target journal for your manuscript, I recommend the Master Journal List. This fantastic tool helps you find suitable journals for your manuscript based on its title and abstract.

What did you do after you published your paper? Did you promote it? How?

“The acceptance and publication of your first paper is a very special moment. Nausea, by Beck, was playing on my computer when I received the news of the acceptance of my first publication (I will never forget it). I happily jumped from my seat and celebrated the good news with my wife. At that time, I was not particularly active on social media, but I immediately had the opportunity to share my results at NanoSpain 2012 (Santander), where I received valuable feedback on my research. It was really useful for planning new experiments and future work.”

How has publishing your paper influenced your career and networking?

“As I previously mentioned, I still publish papers taking advantage of the wonder material and its photoluminescence quenching capabilities. I am also the inventor of two related patents, and several of my post-graduate students are developing their theses taking advantage of the wonder material, even in translational settings. My networking opportunities were also enhanced; for instance, together with prestigious colleagues, I have organized special issues dealing with 2D materials in reputable journals; I have several collaborations related to 2D materials; and I have been invited by many editors to review countless manuscripts related to graphene derivatives. My first paper is also one of the most cited on my list of research papers. Definitely, that first publication represents a cornerstone in my career and networking opportunities.”

What would you say to an early career researcher who is asking the question “Should I consider publishing my research?”

“Absolutely! It will boost your career!”

Q&A with Professor Caterina Cocchi

We talk to Professor Caterina Cocchi, who is heavily involved in Electronic Structure (EST)’s Emerging Leaders issues (2020 and 2021) as well as past events, and who has also joined EST as a guest editor.

How can IOP Publishing help early career researchers who are starting out in their publishing journey?

“IOP Publishing and publishers in general could offer more resources to train young scientists to write papers and to act as peer reviewers. For some unknown reason, academic education does not typically include official seminars or training about scientific writing and publishing. Both activities are typically passed on from mentor to mentee, naturally generating big gaps among scientists, which may ultimately affect their career. The ability to write a clear and convincing scientific text is not only key for publishing good papers but also to win grants, positions and, ultimately, to be visible in the community.”

Are there any tips, tools or websites that you would recommend?

“When writing a paper, it is important to communicate a clear message and to give the manuscript a clear structure. Also, using only essential words is much more effective than diluting the content in endless prose. During the peer-review process, it is important to always consider the referees’ comments on a factual level. Never take them personally.

I follow a few blogs about scientific writing. I can definitely suggest the one by Anna Clemens . It is regularly updated and offers a broad spectrum of suggestions and hints about scientific writing and about the whole publication process.”

What did you do after you published your paper? Did you promote it? How?

“I published my first paper in 2010, and back then social media was not very much used by the scientific community. To disseminate, I attended a number of conferences and workshops in which I presented the results of that paper.”

What would you say to an early career researcher who is asking the question “Should I consider publishing my research?”

“Publishing is the essence of scientific work. Any piece of work that is not published or disseminated to the community simply does not exist. Hence, if you want to give visibility to your work, you have to publish it. Very often, I see in young scientists the fear of submitting something that is not perfect, and this is usually the cause of big delays in publications. My motto is “published is better than perfect” and I encourage my young co-workers to wrap up their work effectively and disseminate it in a timely manner. Should the results be disproved later, well, this is how science works, right?”

Author Insights Summary

We hope you enjoyed reading these inspiring interviews and have gathered some useful knowledge to help you with your publishing journey. Below are some of our key take-aways from both interviews for early career researchers publishing their first article.

Alongside this, we also have an extremely useful Publishing Support hub for both authors and reviewers, which includes free resources such as:

  • Article templates – both double- and single-anonymous templates, which may help speed the publication of accepted articles.
  • Editage – language and figure editing services to help you prepare your paper for submission.
  • Track my article – a platform that shows where your paper is in its journey.
  • Paperpal Preflight – a free pre-submission feedback service that checks for and highlights issues before you submit your paper.
  • IOP Academy resources and events – workshops, webinars and online training covering various aspects of publishing in journals.
  • Submission checklist – check that you have covered everything before submitting your paper.

Key findings:

Promoting and networking is important:

  • Take part in discussions and presentations at conferences and workshops to present the results of your paper.
  • Use social media to get your messages across in an accessible way.

Rejection isn’t bad:

  • Rejection is part of the journey – it’s an opportunity to change the scope of your research, potentially unveiling new steps and future opportunities.
  • Always consider the referees’ comments on a factual level. Never take them personally.

Writing skills are key:

  • Writing skills are your “Swiss Army knife” to succeed in publishing your paper.
  • Make sure you have clear and well written manuscripts.
  • There are useful blogs and websites about scientific writing.

Keep on top of the research in your area:

  • Be aware of the state of the art in your field.
  • Read more, especially the journals you would like to publish in.

Pros of publishing

  • “Any piece of work that is not published or disseminated to the community simply does not exist.”
  • Having published work helps networking and other opportunities for your career.
  • “The ability to write a clear and convincing scientific text is not only key for publishing good papers but also to win grants, positions and, ultimately, to be visible in the community.”

Published: 03 September 2021

What makes an author

Nature Methods volume 18, page 983 (2021)


Constructing a fair and accurate author list can be one of the most fraught aspects of manuscript publication. We provide some advice and resources for authors at all career levels.

The acknowledgement of scientific contributions in the form of manuscript authorship is vital at all stages of a researcher’s career, from the well-established principal investigator applying for million-dollar grants to the undergraduate student applying to PhD programs. It’s essential that authorship lists are constructed with utmost care.

The variety of authorship practices across the scientific literature, however, is vast. Different fields, different countries, even different labs have different norms. Some practices are troubling: lab technicians not included for their major contributions to a study because they are not on an academic track; contributors removed from author lists due to personal disputes; researchers who have not substantially contributed added to papers (in a misguided attempt to increase ‘impact’) without their consent; senior scientists taking advantage of power imbalance to undeservedly gain publications.

Even researchers with the best intentions can struggle with finalizing a fair and accurate author list. Here, we provide some best practice guidelines and explain how Nature Methods handles authorship issues.

First of all, community guidelines for authorship are available. Nature Portfolio’s authorship policies are based on guidelines developed by McNutt et al. (Proc. Natl Acad. Sci. USA 115, 2557–2560, 2018). Other guidelines in common use include those from the International Committee of Medical Journal Editors. As defined by Nature Portfolio, an author listed on a paper should have made a substantial contribution to the design of the work, the collection or analysis of data, the creation of a software tool, or the writing of the paper. This policy is meant to be broad and flexible, leaving “substantial contribution” open to quite a bit of interpretation.

In our view, job title or rank should never exclude a potential author. The lab technician or core facility scientist who developed a custom experimental workflow for the study should be included as an author. The first-year rotation student who spent several weeks collecting data should be included as an author. The software engineer who made substantial developments to an existing algorithm to analyze the data should be included as an author.

That said, not just any kind of assistance justifies authorship. People who provided routine services or basic technical help, contributed resources (such as by giving plasmids), proofread the manuscript, or gave general advice but did not otherwise significantly contribute to the scientific content of the paper should be thanked in the Acknowledgements. If previously published datasets or software tools are utilized in a new study without further development from their generators, there is no need to name their generators as authors. Even the person who secured funding need not necessarily be an author on a paper—they too ought to have scientifically contributed in a meaningful way. Such cases are almost unheard of in lab-based science, where a principal investigator typically supervises the design of experiments and the analysis of the resulting data, but they are relatively common in, for example, computer science, where grad students may publish sole-author papers.

Though different research fields have different traditions, the custom in life sciences research is to name the person or people who did the bulk of the research first, followed by other contributors in descending order of the significance of their contributions, with the principal investigator(s) named at the very end of the list. Disputes often arise over who is named first on a paper. Most journals allow co-first-authorship designations to recognize cases of equal contribution, but one name must necessarily come first; the research community should take care to recognize these equal contributions. Those listed second should not feel that their contributions are minimized in any way.

Project managers should make defining authorship and authorship order a priority of a new study. Students and postdocs, collaborators, and service providers should speak up if authorship is not discussed early on. Setting clear parameters and communicating openly from the outset of a research study—in some cases even by signing formal authorship agreements—can go a long way toward preventing disputes and hurt feelings down the line.

All authors on a paper have a responsibility for at least part of its content. Nature Portfolio journals require author contribution statements, which in our view are crucial to clarify each author’s role and responsibilities, to assign credit where it is due, to discourage the practice of including authors who did not significantly contribute to the study, and to assign accountability in (rare) cases of misconduct. The corresponding author, the main point of contact with a journal, has extra responsibilities. They are tasked with communicating with all coauthors at the submission, revision and final acceptance stages, including ensuring that all are satisfied with the manuscript text and content. The corresponding author must also check that all coauthors agree with changes to the author list, that any competing interests are declared, and that the paper complies with all of the journal’s policies regarding data, materials and code sharing. Note that the journal corresponding author need not be the same person as the corresponding author(s) listed on the published paper, who take responsibility for post-publication inquiries.

We encourage our authors to speak up to let us know when best practices for authorship are not being followed. However, our editorial power is limited to delaying review or publication until disputes can be resolved, making corrections to papers, adding an ‘editorial expression of concern’ or, in very rare cases, retracting a paper. We rely on authors to behave responsibly and we cannot investigate or adjudicate authorship disputes. We advise those embroiled in disputes to seek help from their department head, university or other employer. We also recommend speaking to an experienced neutral party familiar with the study for advice—it’s human nature to often overestimate our own contributions, but it’s right to speak up about unfair treatment.

Unfortunately, we do not have the space to cover all possible authorship scenarios in this short piece. We look forward to answering your questions and perhaps sparking some lively discussion on Twitter, where you can follow us at @naturemethods .


What makes an author. Nat Methods 18, 983 (2021). https://doi.org/10.1038/s41592-021-01271-8



Science Editor

  • A Publication of the Council of Science Editors

Ten Tips for Research Scholars Aiming for Their First Significant Publication

In many scientific disciplines, research scholars not only hope to get published but aspire for a prominent authorship position on a journal paper. There is nothing vain about this; for a research scholar, a first-author publication opens doors to advancement because it shows that they can contribute substantially to a research project.

But first, who is a research scholar? Definitions and notions vary. For this article, I define a research scholar as a student in a PhD program or a research-intensive Master’s program, or a research assistant in a similar league. If you are an early-stage research scholar in a scientific discipline, you might have some experience with research and even gained authorship credit, but have you played a major role in a collaborative project? Such a role is essential if you hope to see your name as the first author on a published research paper or obtain weighty authorship credit in another form.

In this primer, I concentrate on the outcome—which I call “first significant publication”—and provide a set of tips to help you navigate your way.

1. Leadership: Take the Initiative to Lead

By “lead,” I mean leading a research project, or a particular aspect of a project, under the direction of your research advisor.

Perhaps everything has been agreed on and you find yourself playing a lead role without much deliberation. But sometimes—especially in research groups with many students, scholars, and postdocs—you should put yourself forward. You might have to be competitive.

The first step is to discuss the lead role you are hoping for with your research advisor. Do they have an opportunity for you? Or can you think of an opportunity?

Research scholars in many scientific disciplines often work within a team. Some research colleagues could be less experienced than you and some could be senior (e.g., faculty members). How are you going to convince your team that you are the right leader (or one of the leaders)? Having a positive attitude and a collaborative spirit will go a long way.

2. Teamwork: Agree on Research Questions, Goals, and Authorship

Research scholars do not usually form their project team; this is the responsibility of the research advisor (along with research funding, assistantships, and other matters). However, as a lead project member, you should have a say in the following matters and reach a consensus with your team:

  • Research questions or hypotheses.   What exactly are you going to investigate in your research project? Why does it matter? Reading relevant publications (covered in Tip 6) will help you come up with good research questions.
  • Goals.   This primer is focused on one of the typical goals of a scientific research project: publication in a scholarly journal. In which journal do you hope to publish the paper (or the first paper) emerging from your project? And why? Identify a few target journals at the start of your project, recognizing the many factors to consider. 1 Read the instructions for authors given by your target journals to find out if you need to be aware of anything that concerns the research process. These instructions often cover a lot more than how to format your paper.
  • Authorship.   Use authorship guidelines, such as the International Committee of Medical Journal Editors guidelines, 2 to determine who is eligible to be an author. Then comes the tricky question of the order of authors. Often, the biggest question is who should be the first author. There are many viewpoints and norms on this matter (and much vexation), which you can find online. Authorship pieces in Science and Nature are particularly instructive (e.g., Google "site:science.org authorship"). Discussing the order of authors, at least provisionally, at the start of the project can prevent authorship disputes later.

3. Timeline: Be Realistic and Avoid Haste

When you take up a significant role in a collaborative research project, it can be unreasonable to expect that you will reach your publication goal by a certain date. Master’s students should be especially careful because they tend to have a graduation date in mind and may not have much flexibility.

Working with your team members and your research advisor, draft a project schedule in the form of a Gantt chart 3 or some other approach. Clarify who is responsible for what. It is likely that not everything will go according to plan. Update the project schedule as needed; don’t give up on it! When a research project seems to drag on indefinitely, it is not necessarily because the research is difficult. Perhaps the research was not treated as a project.

4. Support: Seek Fellowship and Mentoring

By “fellowship,” I mean collegiality and a sense of belonging. You will hopefully find this within your own research group but look for it elsewhere too. Join scientific societies in your field and see how you can become an active member. Are there networking or volunteering opportunities?

A mentor is different from your research advisor. A mentor is not formally responsible for your work, and you do not report to your mentor. A mentor is someone who can capably and willingly guide you when you need support and might even be proactive in helping you progress. Is there someone in your research group who can mentor you?

Finally, take care of your health. Burnout is real. 4 Speak with your advisor or mentor if you are struggling to cope. Check if your institution provides support for wellness and mental health.

5. Ethics: Practice Responsible Conduct of Research

Responsible conduct of research (RCR) is a big deal. It encompasses protections for research participants, handling research data, writing and publication practices, dealing with conflicts of interest, and other matters. Another name for RCR is responsible and ethical conduct of research.

Consider this: From July 2023 onward, all researchers named on proposals submitted to the US National Science Foundation—not just students as before—must complete RCR training. 5 This shows that RCR is not just something for research scholars to learn. After all, high-profile cases of research misconduct are in the news every now and then.

Even if you are not required to undergo RCR training, it is your responsibility to learn about RCR and practice it. The US Office of Research Integrity provides a detailed introduction to RCR. 6 Your university might provide further guidelines, and you should look up guidelines given by your target journals as well.

6. Reading: Learn How to Read Science

A basic goal of a scientific research project is to advance knowledge. So early on, you need to have a strong grasp of the knowledge that already exists—in other words, the knowledge and findings disseminated through publications (and other sources perhaps, but primarily scholarly publications). With this foundation, you can develop your research questions, make a case for why your research is relevant, design your research methodology, compare your research findings with previous findings, and so on.

Good reading skills are essential to carry out a research project. But it can be hard work to make sense of scientific research papers. What’s more, a study in 2017 found that the readability of scientific texts is decreasing. 7

Instead of assuming you somehow need to get smarter to make sense of papers, look for resources online: there are many; for example, a PLOS “ten simple rules” article 8 and a guide published in Science . 9

Upstream from the reading stage is the research access stage. To read research publications, you need to be able to access them. How can you locate the most relevant publications in the ocean of scientific literature? And how can you read publications that are behind a paywall or available only to subscribers? A subject librarian or reference librarian at your university can help you with both. You can also look for online library guides (commonly called “libguides”) published by your university or others.

While accessing research, build your own collection of research publications using a reference management tool. Compare the popular tools 10 and find one that works for you. When you write your paper, use the same tool to cite the publications you have used.

7. Focus: Keep the Research Questions in Mind

Recall Tip 2: agreeing on the research questions before you start your research project. Exploratory research has its place, but to orient a research project toward an outcome, such as a publication, you need to set up research questions or hypotheses in advance. The questionable practice of HARKing—hypothesizing after the results are known—was described in 1998, 11 and you can find more recent commentaries online.

A research project can go on for months or even years, and during this journey with all its ups and downs, it is possible to lose track of the research questions. Make it a point to always keep the research questions at the center of your project. One easy way to do this is to include them in status updates about your work to your team. This way, you can also gauge how far you have progressed toward answering the questions.

8. Writing: Don’t Wait Until the End

“A paper is not just an archival device for storing a completed research program; it is also a structure for planning your research in progress,” is the advice in a classic essay on scientific writing. 12 Consider using the “outline method” described in that essay to develop your paper.

If you want to learn the nuts and bolts of scientific writing, such as how to write the different sections of a paper or how to write effective sentences and paragraphs, you can find books, library guides, and online courses on this topic. But it can be difficult to get feedback on your own writing beyond what your research advisor offers. If your university has a writing center, do they offer feedback for research scholars? If you have a research mentor (see Tip 4), ask if they can give you feedback on your writing. If you do not have much support locally to improve your writing skills, look at the training or mentoring offerings from the AuthorAID project. 13 (Disclaimer: I work for this nonprofit project.)

When developing your paper, your target journal’s instructions for authors can be your friend or your enemy, depending on when you refer to these instructions and how seriously you follow them. Referring to these instructions only after you finish writing your paper will likely result in much frustration and lost time. Keep the instructions handy throughout your research-and-writing journey, and do not ignore anything in the instructions.

Finally, do not forget to cite as you write, making use of a reference management tool (see Tip 6). A good citation habit can be a bulwark against plagiarism. 14

9. Visuals: Master Tables and Figures

Tables and figures are crucial elements of scientific communication in many disciplines. Tables, well, present data in a tabular format. They can be complex, but figures are certainly more diverse: images obtained from scientific processes or equipment, diagrams of experimental setups, charts showing data, etc. A sustained reading of scientific papers (see Tip 6) will help you get a sense of how tables and figures are presented and how they are referred to in the text.

Designing effective figures can be especially challenging. Researchers in the biological sciences (and possibly other disciplines) will benefit from the large collection of articles on data visualization available on the Nature website. 15 Guidelines can also be found in style guides such as Scientific Style and Format 16 and in the instructions for authors given by journals.

Be careful to avoid image manipulation, a matter of increasing concern 17 and related to RCR (see Tip 5).

10. Peer Review: Keep Calm and Carry On

Once you have completed your research-and-writing journey and double-checked that your paper complies with your target journal’s instructions for authors, you will submit the paper and wait for the peer review process to run its course. (Rejection before peer review can happen, but let’s stay positive.) You should then be prepared to get a variety of comments from multiple peer reviewers. These comments can be minor, major, constructive, affirming, annoying, upsetting, disheartening … as you go through your research career, you will likely attach further adjectives.

Dealing with reviewers’ comments can sometimes be easy, and it can sometimes require not just much revision to your paper but also much tact in responding to the comments. The first thing to recognize is that the first author should not automatically be saddled with the bulk of the work. All the authors should be involved in this effort, taking responsibility for the part of the paper (and the related part of the research) in which they had a role to play.

Responding to reviewers’ comments is often the last mile to publication, and the going can get tough. You are not alone: the complexity and emotional toll of this stage are well understood, and you can find advice online. 18

Closing Words

Your first significant publication is a milestone—and something you want to be proud of. I hope this primer is of some use in helping you get there.

References and Links

  • Suiter AM, Sarli CC. Selecting a journal for publication: criteria to consider. Mo Med. 2019;116:461–465. [accessed October 25, 2023]. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6913840/ .
  • International Committee of Medical Journal Editors. Defining the role of authors and contributors. [accessed October 25, 2023]. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html .
  • Levin SP, Levin M. Managing ideas, people, and projects: organizational tools and strategies for researchers. iScience. 2019;20:278–291. https://doi.org/10.1016/j.isci.2019.09.017 . 
  • Nagy GA, Fang CM, Hish AJ, Kelly L, Nicchitta CV, Dzirasa K, Rosenthal MZ. Burnout and mental health problems in biomedical doctoral students. CBE Life Sci Educ. 2019;18:ar27. https://doi.org/10.1187/cbe.18-09-0198 .
  • Wing JN, Schrag NJ, Smith AP. New NSF training requirement for faculty and senior researchers. Columbia Research. 2023. [accessed October 25, 2023]. https://research.columbia.edu/news/new-nsf-training-requirement-faculty-and-senior-researchers .
  • Steneck NH. Introduction to the responsible conduct of research. The Office of Research Integrity; 2007. [accessed October 25, 2023]. https://ori.hhs.gov/ori-introduction-responsible-conduct-research .
  • Plavén-Sigray P, Matheson GJ, Schiffler BC, Thompson WH. The readability of scientific texts is decreasing over time. eLife. 2017;6:e27725. https://doi.org/10.7554/eLife.27725 .
  • Carey MA, Steiner KL, Petri WA Jr. Ten simple rules for reading a scientific paper. PLoS Comput Biol. 2020;16:e1008032. https://doi.org/10.1371/journal.pcbi.1008032 .
  • Pain E. How to (seriously) read a scientific paper. Science. 2016. [accessed October 25, 2023]. https://www.science.org/content/article/how-seriously-read-scientific-paper .
  • Lane Medical Library, Stanford Medicine. Why use reference management software? [accessed October 25, 2023]. https://laneguides.stanford.edu/reference-management/software .
  • Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev. 1998;2:196–217. https://doi.org/10.1207/s15327957pspr0203_4 .
  • Whitesides GM. Whitesides’ group: writing a paper. Adv Mater. 2004;16:1375–1377. https://doi.org/10.1002/adma.200400767 .
  • Gastel B. AuthorAID and editors: collaborating to assist authors in developing countries. Sci Ed. 2015;38:103–105. [accessed October 25, 2023]. https://www.csescienceeditor.org/article/authoraid-and-editors-collaborating-to-assist-authors-in-developing-countries/ .
  • Penn Libraries, University of Pennsylvania. Citation practices and avoiding plagiarism: getting started. [accessed October 25, 2023]. https://guides.library.upenn.edu/citationpractices/gettingstarted .
  • Evanko D. Data visualization: a view of every Points of View column. Methagora: A Blog from Nature Methods. 2013. [accessed October 25, 2023]. https://blogs.nature.com/methagora/2013/07/data-visualization-points-of-view.html .
  • [CSE] Council of Science Editors. Scientific style and format. 8th ed. CSE; 2014. [accessed October 25, 2023]. https://www.scientificstyleandformat.org .
  • Marcus A, Oransky I. Image manipulation in science is suddenly in the news. But these cases are hardly rare. STAT. 2022. [accessed October 25, 2023]. https://www.statnews.com/2022/12/02/image-manipulation-in-science-is-suddenly-in-the-news-but-these-cases-are-hardly-rare/
  • Parletta N. How to respond to difficult or negative peer-reviewer feedback. Nature Index. 2021. [accessed October 25, 2023]. https://www.nature.com/nature-index/news/how-to-respond-difficult-negative-peer-reviewer-feedback

Ravi Murugesan ( https://orcid.org/0000-0002-1898-0559 ) was once a bewildered research scholar, and he is now a freelance science editor with training responsibilities in the AuthorAID project.

Opinions expressed are those of the authors and do not necessarily reflect the opinions or policies of the Council of Science Editors or the Editorial Board of Science Editor.


Search Help

Get the most out of Google Scholar with some helpful tips on searches, email alerts, citation export, and more.

Finding recent papers

Your search results are normally sorted by relevance, not by date. To find newer articles, try the following options in the left sidebar:

  • click "Since Year" to show only recently published papers, sorted by relevance;
  • click "Sort by date" to show just the new additions, sorted by date;
  • click the envelope icon to have new results periodically delivered by email.

Locating the full text of an article

Abstracts are freely available for most of the articles. Alas, reading the entire article may require a subscription. Here are a few things to try:

  • click a library link, e.g., "FindIt@Harvard", to the right of the search result;
  • click a link labeled [PDF] to the right of the search result;
  • click "All versions" under the search result and check out the alternative sources;
  • click "Related articles" or "Cited by" under the search result to explore similar articles.

If you're affiliated with a university, but don't see links such as "FindIt@Harvard", please check with your local library about the best way to access their online subscriptions. You may need to search from a computer on campus, or to configure your browser to use a library proxy.

Getting better answers

If you're new to the subject, it may be helpful to pick up the terminology from secondary sources. E.g., a Wikipedia article for "overweight" might suggest a Scholar search for "pediatric hyperalimentation".

If the search results are too specific for your needs, check out what they're citing in their "References" sections. Referenced works are often more general in nature.

Similarly, if the search results are too basic for you, click "Cited by" to see newer papers that referenced them. These newer papers will often be more specific.

Explore! There's rarely a single answer to a research question. Click "Related articles" or "Cited by" to see closely related work, or search for the author's name to see what else they have written.

Searching Google Scholar

To search by author, use the "author:" operator, e.g., author:"d knuth" or author:"donald e knuth".

To find a specific paper, put its title in quotations: "A History of the China Sea".

You'll often get better results if you search only recent articles, but still sort them by relevance, not by date. E.g., click "Since 2018" in the left sidebar of the search results page.

To see the absolutely newest articles first, click "Sort by date" in the sidebar. If you use this feature a lot, you may also find it useful to set up email alerts to have new results automatically sent to you.

Note: On smaller screens that don't show the sidebar, these options are available in the dropdown menu labelled "Year" right below the search button.
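The sidebar options described above correspond to URL query parameters that the Scholar web interface itself generates. A minimal sketch of building such a URL; the parameter names `q`, `as_ylo` ("Since Year"), and `scisbd` ("Sort by date") are observed from the interface, not a documented public API, and may change without notice:

```python
from urllib.parse import urlencode

def scholar_url(query, since_year=None, sort_by_date=False):
    """Build a Google Scholar results URL from a query and sidebar options.

    Assumes the undocumented parameter names q, as_ylo, and scisbd
    observed in URLs generated by the Scholar web interface.
    """
    params = {"q": query}
    if since_year is not None:
        params["as_ylo"] = since_year  # the "Since Year" sidebar filter
    if sort_by_date:
        params["scisbd"] = 1           # the "Sort by date" sidebar option
    return "https://scholar.google.com/scholar?" + urlencode(params)

# The search operators above work inside the query string itself:
print(scholar_url('author:"d knuth" "art of computer programming"', since_year=2018))
```

Note that the operators (`author:`, quoted titles) are simply part of the query text; only the year filter and sort order are separate parameters.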

To search for court opinions, select the "Case law" option on the homepage or in the side drawer on the search results page.

The "Related articles" link finds documents similar to the given search result.

Advanced search is in the side drawer. The advanced search window lets you search in the author, title, and publication fields, as well as limit your search results by date.

To search specific courts, select the "Case law" option and do a keyword search over all jurisdictions. Then, click the "Select courts" link in the left sidebar on the search results page.

Tip: To quickly search a frequently used selection of courts, bookmark a search results page with the desired selection.

Access to articles

For each Scholar search result, we try to find a version of the article that you can read. These access links are labelled [PDF] or [HTML] and appear to the right of the search result. For example:

A paper that you need to read

Access links cover a wide variety of ways in which articles may be available to you - articles that your library subscribes to, open access articles, free-to-read articles from publishers, preprints, articles in repositories, etc.

When you are on a campus network, access links automatically include your library subscriptions and direct you to subscribed versions of articles. On-campus access links cover subscriptions from primary publishers as well as aggregators.

Off-campus access

Off-campus access links let you take your library subscriptions with you when you are at home or traveling. You can read subscribed articles when you are off-campus just as easily as when you are on-campus. Off-campus access links work by recording your subscriptions when you visit Scholar while on-campus, and looking up the recorded subscriptions later when you are off-campus.

We use the recorded subscriptions to provide you with the same subscribed access links as you see on campus. We also indicate your subscription access to participating publishers so that they can allow you to read the full-text of these articles without logging in or using a proxy. The recorded subscription information expires after 30 days and is automatically deleted.

In addition to Google Scholar search results, off-campus access links can also appear on articles from publishers participating in the off-campus subscription access program. Look for links labeled [PDF] or [HTML] on the right hand side of article pages.


You can disable off-campus access links on the Scholar settings page . Disabling off-campus access links will turn off recording of your library subscriptions. It will also turn off indicating subscription access to participating publishers. Once off-campus access links are disabled, you may need to identify and configure an alternate mechanism (e.g., an institutional proxy or VPN) to access your library subscriptions while off-campus.

Email Alerts

Do a search for the topic of interest, e.g., "M Theory"; click the envelope icon in the sidebar of the search results page; enter your email address, and click "Create alert". We'll then periodically email you newly published papers that match your search criteria.

You don't need a Google email address; you can enter any email address of your choice. If the email address isn't a Google account or doesn't match your Google account, then we'll email you a verification link, which you'll need to click to start receiving alerts.

To get alerts when others cite your articles, it works best to create a public profile , which is free and quick to do. Once you get to the homepage with your photo, click "Follow" next to your name, select "New citations to my articles", and click "Done". We will then email you when we find new articles that cite yours.

To follow citations to a specific paper, search for the title of your paper, e.g., "Anti de Sitter space and holography"; click on the "Cited by" link at the bottom of the search result; and then click on the envelope icon in the left sidebar of the search results page.

To follow new articles by a colleague, first do a search for your colleague's name, and see if they have a Scholar profile. If they do, click on it, click the "Follow" button next to their name, select "New articles by this author", and click "Done".

If they don't have a profile, do a search by author, e.g., [author:s-hawking], and click on the mighty envelope in the left sidebar of the search results page. If you find that several different people share the same name, you may need to add co-author names or topical keywords to limit results to the author you wish to follow.

We send the alerts right after we add new papers to Google Scholar. This usually happens several times a week, except that our search robots meticulously observe holidays.

There's a link to cancel the alert at the bottom of every notification email.

If you created alerts using a Google account, you can manage them all here . If you're not using a Google account, you'll need to unsubscribe from the individual alerts and subscribe to the new ones.

Google Scholar library

Google Scholar library is your personal collection of articles. You can save articles right off the search page, organize them by adding labels, and use the power of Scholar search to quickly find just the one you want - at any time and from anywhere. You decide what goes into your library, and we’ll keep the links up to date.

You get all the goodies that come with Scholar search results - links to PDF and to your university's subscriptions, formatted citations, citing articles, and more!

Library help

Find the article you want to add in Google Scholar and click the “Save” button under the search result.

Click “My library” at the top of the page or in the side drawer to view all articles in your library. To search the full text of these articles, enter your query as usual in the search box.

Find the article you want to remove, and then click the “Delete” button under it.

  • To add a label to an article, find the article in your library, click the “Label” button under it, select the label you want to apply, and click “Done”.
  • To view all the articles with a specific label, click the label name in the left sidebar of your library page.
  • To remove a label from an article, click the “Label” button under it, deselect the label you want to remove, and click “Done”.
  • To add, edit, or delete labels, click “Manage labels” in the left column of your library page.

Only you can see the articles in your library. If you create a Scholar profile and make it public, then the articles in your public profile (and only those articles) will be visible to everyone.

Your profile contains all the articles you have written yourself. It’s a way to present your work to others, as well as to keep track of citations to it. Your library is a way to organize the articles that you’d like to read or cite, not necessarily the ones you’ve written.

Citation Export

Click the "Cite" button under the search result and then select your bibliography manager at the bottom of the popup. We currently support BibTeX, EndNote, RefMan, and RefWorks.

If you're thinking of crawling Google Scholar with automated software: err, no, please respect our robots.txt. As the wearers of crawler's shoes and webmaster's hat, we cannot recommend adherence to web standards highly enough.
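For anyone writing automated software for other sites, Python's standard library can check a robots.txt policy before fetching. A minimal sketch; the rules below are an invented example for illustration, not Scholar's actual robots.txt (a real crawler would load the live file via `RobotFileParser.set_url` and `.read`):

```python
from urllib.robotparser import RobotFileParser

# Invented example rules, hard-coded so the sketch runs offline.
rules = [
    "User-agent: *",
    "Disallow: /scholar",
]

rp = RobotFileParser()
rp.parse(rules)

# Under these example rules, fetching a /scholar path is not allowed:
print(rp.can_fetch("MyCrawler", "https://scholar.google.com/scholar?q=m+theory"))  # False
```

`can_fetch(useragent, url)` returns whether the parsed rules permit that user agent to retrieve the URL, which makes it easy to gate every request a crawler issues.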

Sorry, we're unable to provide bulk access. You'll need to make an arrangement directly with the source of the data you're interested in. Keep in mind that a lot of the records in Google Scholar come from commercial subscription services.

Sorry, we can only show up to 1,000 results for any particular search query. Try a different query to get more results.

Content Coverage

Google Scholar includes journal and conference papers, theses and dissertations, academic books, pre-prints, abstracts, technical reports and other scholarly literature from all broad areas of research. You'll find works from a wide variety of academic publishers, professional societies and university repositories, as well as scholarly articles available anywhere across the web. Google Scholar also includes court opinions and patents.

We index research articles and abstracts from most major academic publishers and repositories worldwide, including both free and subscription sources. To check current coverage of a specific source in Google Scholar, search for a sample of their article titles in quotes.

While we try to be comprehensive, it isn't possible to guarantee uninterrupted coverage of any particular source. We index articles from sources all over the web and link to these websites in our search results. If one of these websites becomes unavailable to our search robots or to a large number of web users, we have to remove it from Google Scholar until it becomes available again.

Our meticulous search robots generally try to index every paper from every website they visit, including most major sources and also many lesser known ones.

That said, Google Scholar is primarily a search of academic papers. Shorter articles, such as book reviews, news sections, editorials, announcements and letters, may or may not be included. Untitled documents and documents without authors are usually not included. Website URLs that aren't available to our search robots or to the majority of web users are, obviously, not included either. Nor do we include websites that require you to sign up for an account, install a browser plugin, watch four colorful ads, and turn around three times and say coo-coo before you can read the listing of titles scanned at 10 DPI... You get the idea, we cover academic papers from sensible websites.

If a "site:" search seems to miss papers you know are on a website, that's usually because we index many of these papers from other websites, such as the websites of their primary publishers. The "site:" operator currently only searches the primary version of each paper.

It could also be that the papers are located on examplejournals.gov, not on example.gov. Please make sure you're searching for the "right" website.

That said, the best way to check coverage of a specific source is to search for a sample of their papers using the title of the paper.

As for coverage of a specific journal: ahem, we index papers, not journals. You might as well ask about our coverage of universities, research groups, proteins, seminal breakthroughs, and other dimensions that are of interest to users. All such questions are best answered by searching for a statistical sample of papers that has the property of interest - journal, author, protein, etc. Many coverage comparisons are available if you search for [allintitle:"google scholar"], but some of them are more statistically valid than others.

Currently, Google Scholar allows you to search and read published opinions of US state appellate and supreme court cases since 1950, US federal district, appellate, tax and bankruptcy courts since 1923 and US Supreme Court cases since 1791. In addition, it includes citations for cases cited by indexed opinions or journal articles which allows you to find influential cases (usually older or international) which are not yet online or publicly available.

Legal opinions in Google Scholar are provided for informational purposes only and should not be relied on as a substitute for legal advice from a licensed lawyer. Google does not warrant that the information is complete or accurate.

We normally add new papers several times a week. However, updates to existing records take 6-9 months to a year or longer, because in order to update our records, we need to first recrawl them from the source website. For many larger websites, the speed at which we can update their records is limited by the crawl rate that they allow.

Inclusion and Corrections

If you've found a search result with incorrect bibliographic data, we apologize, and we assure you the error was unintentional. Automated extraction of information from articles in diverse fields can be tricky, so an error sometimes sneaks through.

Please write to the owner of the website where the erroneous search result is coming from, and encourage them to provide correct bibliographic data to us, as described in the technical guidelines . Once the data is corrected on their website, it usually takes 6-9 months to a year or longer for it to be updated in Google Scholar. We appreciate your help and your patience.

If you can't find your papers when you search for them by title and by author, please refer your publisher to our technical guidelines .

You can also deposit your papers into your institutional repository or put their PDF versions on your personal website, but please follow your publisher's requirements when you do so. See our technical guidelines for more details on the inclusion process.
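The technical guidelines referenced above ask websites to expose bibliographic data in machine-readable meta tags; Google Scholar's inclusion documentation recommends Highwire Press-style tags. A sketch of what such tags look like in a page's head, with invented placeholder values:

```html
<head>
  <meta name="citation_title" content="An Example Paper Title">
  <meta name="citation_author" content="Author, Anne">
  <meta name="citation_author" content="Doe, John">
  <meta name="citation_publication_date" content="2023/05/14">
  <meta name="citation_journal_title" content="Journal of Examples">
  <meta name="citation_pdf_url" content="https://example.org/example.pdf">
</head>
```

Repeating `citation_author` once per author, and pointing `citation_pdf_url` at a crawlable full-text file, is what lets the indexer attribute the paper correctly.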

We normally add new papers several times a week; however, it might take us some time to crawl larger websites, and corrections to already included papers can take 6-9 months to a year or longer.

Google Scholar generally reflects the state of the web as it is currently visible to our search robots and to the majority of users. When you're searching for relevant papers to read, you wouldn't want it any other way!

If your citation counts have gone down, chances are that either your paper or papers that cite it have either disappeared from the web entirely, or have become unavailable to our search robots, or, perhaps, have been reformatted in a way that made it difficult for our automated software to identify their bibliographic data and references. If you wish to correct this, you'll need to identify the specific documents with indexing problems and ask your publisher to fix them. Please refer to the technical guidelines .

If you've spotted an error in a court opinion, please do let us know . Please include the URL for the opinion, the corrected information, and a source where we can verify the correction.

We're only able to make corrections to court opinions that are hosted on our own website. For corrections to academic papers, books, dissertations and other third-party material, click on the search result in question and contact the owner of the website where the document came from. For corrections to books from Google Book Search, click on the book's title and locate the link to provide feedback at the bottom of the book's page.

General Questions

Results marked [CITATION] are articles which other scholarly articles have referred to, but which we haven't found online. To exclude them from your search results, uncheck the "include citations" box on the left sidebar.

To read an article in full, first click on links labeled [PDF] or [HTML] to the right of the search result's title. Also, check out the "All versions" link at the bottom of the search result.

Second, if you're affiliated with a university, using a computer on campus will often let you access your library's online subscriptions. Look for links labeled with your library's name to the right of the search result's title. Also, see if there's a link to the full text on the publisher's page with the abstract.

Keep in mind that final published versions are often only available to subscribers, and that some articles are not available online at all. Good luck!

If Scholar keeps forgetting your settings: technically, your web browser remembers your settings in a "cookie" on your computer's disk, and sends this cookie to our website along with every search. Check that your browser isn't configured to discard our cookies. Also, check if disabling various proxies or overly helpful privacy settings does the trick. Either way, your settings are stored on your computer, not on our servers, so a long hard look at your browser's preferences or internet options should help cure the machine's forgetfulness.

The phrase "Stand on the shoulders of giants" is our acknowledgement that much of scholarly research involves building on what others have already discovered. It's taken from Sir Isaac Newton's famous quote, "If I have seen further, it is by standing on the shoulders of giants."



Computer Science > Digital Libraries

Title: Every Author as First Author

Abstract: We propose a new standard for writing author names on papers and in bibliographies, which places every author as a first author -- superimposed. This approach enables authors to write papers as true equals, without any advantage given to whoever's name happens to come first alphabetically (for example). We develop the technology for implementing this standard in LaTeX, BibTeX, and HTML; show several examples; and discuss further advantages.



Published on 10.4.2024 in Vol 26 (2024)

Methodological Frameworks and Dimensions to Be Considered in Digital Health Technology Assessment: Scoping Review and Thematic Analysis

Authors of this article:


  • Joan Segur-Ferrer, BSS, PT, MSc;
  • Carolina Moltó-Puigmartí, BScPharm, PhD;
  • Roland Pastells-Peiró, BA, MA, MSc;
  • Rosa Maria Vivanco-Hidalgo, MD, MPH, PhD

Agency for Health Quality and Assessment of Catalonia, Barcelona, Spain

Corresponding Author:

Joan Segur-Ferrer, BSS, PT, MSc

Agency for Health Quality and Assessment of Catalonia

Roc Boronat Street, 81-95, 2nd Fl

Barcelona, 08005

Phone: +34 935 513 900

Fax: +34 935 517 510

Email: [email protected]

Background: Digital health technologies (dHTs) offer a unique opportunity to address some of the major challenges facing health care systems worldwide. However, the implementation of dHTs raises some concerns, such as the limited understanding of their real impact on health systems and people’s well-being or the potential risks derived from their use. In this context, health technology assessment (HTA) is 1 of the main tools that health systems can use to appraise evidence and determine the value of a given dHT. Nevertheless, due to the nature of dHTs, experts highlight the need to reconsider the frameworks used in traditional HTA.

Objective: This scoping review (ScR) aimed to identify the methodological frameworks used worldwide for digital health technology assessment (dHTA); determine what domains are being considered; and generate, through a thematic analysis, a proposal for a methodological framework based on the most frequently described domains in the literature.

Methods: The ScR was performed in accordance with the PRISMA-ScR guidelines. We searched 7 databases for peer-reviewed and gray literature published between January 2011 and December 2021. The retrieved studies were screened using Rayyan in a single-blind manner by 2 independent authors, and data were extracted using ATLAS.ti software. The same software was used for thematic analysis.

Results: The systematic search retrieved 3061 studies (n=2238, 73.1%, unique), of which 26 (0.8%) studies were included. From these, we identified 102 methodological frameworks designed for dHTA. These frameworks revealed great heterogeneity between them due to their different structures, approaches, and items to be considered in dHTA. In addition, we identified different wording used to refer to similar concepts. Through thematic analysis, we reduced this heterogeneity. In the first phase of the analysis, 176 provisional codes related to different assessment items emerged. In the second phase, these codes were clustered into 86 descriptive themes, which, in turn, were grouped in the third phase into 61 analytical themes and organized through a vertical hierarchy of 3 levels: level 1 formed by 13 domains, level 2 formed by 38 dimensions, and level 3 formed by 11 subdimensions. From these 61 analytical themes, we developed a proposal for a methodological framework for dHTA.

Conclusions: There is a need to adapt the existing frameworks used for dHTA or create new ones to more comprehensively assess different kinds of dHTs. Through this ScR, we identified 26 studies including 102 methodological frameworks and tools for dHTA. The thematic analysis of those 26 studies led to the definition of 12 domains, 38 dimensions, and 11 subdimensions that should be considered in dHTA.

Introduction

Digital health technologies (dHTs) are driving the transformation of health care systems. They are changing the way in which health services are delivered and showing great potential to address some of the major challenges that European health systems, including the Spanish National Health System (SNS), are facing, such as the progressive aging of the population [ 1 , 2 ]; the growing demand for health and long-term care services [ 2 ]; the rise in health care costs, increasing financial pressures on health and welfare systems [ 1 , 3 ]; and the unequal distribution of health services across different geographical regions [ 4 , 5 ]. In addition, dHTs can improve the accessibility, sustainability, efficiency, and quality of health care systems [ 6 , 7 ], to the point of becoming a determinant of health in their own right [ 6 , 8 ].

However, the digital transformation of health care systems and the implementation of dHT (eg, artificial intelligence [AI]–based solutions, data-driven health care services, or the internet of things) are slow and unequal across different European regions [ 9 , 10 ]. Some of the reasons for this are (1) the immaturity of regulatory frameworks for the use of dHTs [ 9 ], (2) the lack of funding and investment for the implementation of dHTs [ 9 ], (3) the lack of sufficient and appropriate infrastructures and common standards for data management [ 6 , 9 ], (4) the absence of skills and expertise of professionals and users [ 10 ], and (5) the scarcity of strong evidence regarding the real benefits and effects of dHTs on health systems and people’s well-being, as well as the cost-effectiveness of these technologies. This makes decision-making difficult, potentially leading to the development and reproduction of low-value and short-lived dHTs [ 6 , 11 ].

To overcome these challenges, harness the potential of dHTs, and avoid nonintended consequences, the World Health Organization (WHO) [ 4 , 11 ] states that dHTs should be developed under the principles of transparency, accessibility, scalability, privacy, security, and confidentiality. Their implementation should be led by robust strategies that bring together leadership, financial, organizational, human, and technological resources, and decisions should be guided by the best-available evidence [ 4 , 11 ].

Regarding this last aspect, health technology assessment (HTA), defined as a “multidisciplinary process that uses explicit methods to determine the value of a health technology at different points in its life cycle,” is a widely accepted tool to inform decision-making and promote equitable, efficient, and high-quality health systems [ 12 , 13 ].

Generally, HTA is conducted according to specific methodological frameworks, such as the HTA Core Model of the European Network for Health Technology Assessment (EUnetHTA) [ 14 ] and the guidelines for the development and adaptation of rapid HTA reports of the Spanish Network of Agencies for Assessing National Health System Technologies and Performance (RedETS) [ 15 ]. These frameworks establish the methodologies to follow and the elements to evaluate. Although these frameworks are helpful instruments for evaluating various health technologies, they have certain limitations in comprehensively assessing dHTs. For this reason, in the past few years, different initiatives have emerged to adapt existing methodological frameworks or develop new ones. The objective is to consider additional domains (eg, interoperability, scalability) to cover the intrinsic characteristics of dHTs [ 16 - 18 ]. Examples of these initiatives are the Evidence Standards Framework (ESF) of the National Institute for Health and Care Excellence (NICE) [ 19 ] and the Digi-HTA Framework of the Finnish Coordinating Center for Health Technology Assessment (FinCCHTA) [ 16 ]. Nonetheless, the majority of these frameworks have certain constraints, such as being designed for a particular socioeconomic or national setting, which restricts their transferability or suitability for use in other countries; the specificity or exclusion of certain dHTs, resulting in limitations in their application; or the limited evidence regarding their actual usefulness.

In this context, we performed a scoping review (ScR) with the aim of identifying the methodological frameworks that are used worldwide for the evaluation of dHTs; determining what dimensions and aspects are considered for each type of dHT; and generating, through a thematic analysis, a proposal for a methodological framework that is based on the most frequently described dimensions in the literature. This research focused mainly on mobile health (mHealth), non–face-to-face care models, and medical devices that integrate AI, as these particular dHTs are the ones most frequently assessed by HTA agencies and units of RedETS.

Identifying Research Questions

This ScR, followed by a thematic analysis, answered the following research questions:

  • What methodological frameworks currently exist for digital health technology assessment (dHTA)?
  • What domains and dimensions are considered in dHTA?
  • Do the different domains and dimensions considered depend on whether the dHT addressed is a non–face-to-face care model of health care provision, a mobile device (mHealth), or a device that incorporates AI?

Overview of Methods for Conducting the Scoping Review

We conducted an ScR of the literature and a thematic analysis of the studies included according to the published protocol [ 20 ]. The ScR aimed to answer the first research question, while the thematic analysis aimed to answer the second and third research questions. Spanish experts from various domains of HTA and dHT collaborated throughout the study design and development.

The ScR of the available scientific literature was carried out in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analysis extension for Scoping Reviews) guidelines ( Multimedia Appendix 1 ) [ 21 ] and following the recommendations of Peters et al [ 22 ] and Pollock et al [ 23 ].

Ethical Considerations

As this work was an ScR, no ethical board approval was required.

Search Strategy

The search strategy ( Multimedia Appendix 2 ) was designed by an experienced information specialist (author RP-P) in accordance with the research questions and using the validated filter of Ayiku et al [ 24 ] for health apps, adding the terms for concepts related to mHealth, remote care models, AI, digital health, methodological frameworks, and HTA. The strategy was peer-reviewed according to the “Peer Review of Electronic Search Strategies Statement” [ 25 ] by authors JS-F and CM-P and was executed in the following 7 databases, considering the characteristics of each in terms of syntax, controlled vocabulary, and proximity operators: Medline (OVID), CINAHL Plus, Embase, Cochrane Library, Scopus, Web of Science, and TripDatabase. Note that no time, language, or other filters were used.

The identification of relevant studies was complemented with a manual search based on the references in the included studies, as well as the websites of the HTA agencies identified through the web pages of EUnetHTA, the International Network for Agencies for Health Technology Assessment (INAHTA), and Health Technology Assessment International (HTAi). Additionally, a search was conducted in Google Scholar, limiting the results to the first 250 items in order to guarantee the inclusion of all pertinent studies [ 26 ].

Inclusion and Exclusion Criteria

The inclusion criteria used in the reference-screening process were based on the previously detailed research questions and are outlined in Textbox 1 using the Population/Problem, Phenomenon of Interest, Context and Design (PICo-D) format [ 27 , 28 ]. The PICo-D format was used instead of the traditional Population/Problem, Intervention, Comparator, Outcomes, Design (PICO-D) format due to the qualitative nature of the research questions and the characteristics of the phenomenon of interest.

Studies were excluded if they were published before 2011 (given the rapid evolution of dHTs in recent years), did not describe dimensions or evaluation criteria, or were based on methodological frameworks not intended for the assessment of dHTs (eg, EUnetHTA Core Model 3.0). Likewise, we excluded comments, editorials, letters, and conference abstracts; frameworks or tools focusing on the evaluation of dHTs by users (eg, the User version of the Mobile App Rating Scale [uMARS]); and documents in languages other than English, Spanish, or Catalan.

Textbox 1. Inclusion criteria in the PICo-D format.

Population/problem: Digital health technology assessment (dHTA)

Phenomenon of interest: Specific methodological frameworks for the evaluation of digital health (with a special focus on mobile health [mHealth], non–face-to-face care models, and medical devices that integrate artificial intelligence [AI], due to the types of technologies most often assessed in the Spanish National Health System [SNS]) that describe the domains to be evaluated in dHTA

Context: Health technology assessment (HTA)

Design: Methodological guidelines and frameworks, scoping reviews (ScRs), systematic reviews (SRs), consensus documents, and qualitative studies

Reference Screening and Data Extraction

The screening of studies was carried out by authors CM-P and JS-F in 2 phases, in accordance with the selection criteria detailed earlier ( Textbox 1 ), with the 2 reviewers working independently and blinded to each other's decisions. The first phase consisted of screening the titles and abstracts of the studies identified in the bibliographic search. The second phase consisted of full-text screening of the studies included in the previous phase.

Data extraction was performed by 3 authors (CM-P, RP-P, and JS-F) using the web and desktop versions of ATLAS.ti version 22.0 (Scientific Software Development GmbH) [ 29 ] and the data extraction sheets designed ad hoc for this purpose following the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions [ 30 ].

When disagreements emerged in either of the 2 processes (screening or data extraction), consensus was sought among the 3 reviewers (CM-P, RP-P, and JS-F). When consensus was not possible, a fourth reviewer (author RMV-H) was consulted.

Collecting, Summarizing, and Reporting the Results

A descriptive analysis was carried out to evaluate and report the existing methodological frameworks and their characteristics.

Overview of Methods for Thematic Analysis

The thematic analysis was performed following the recommendations and phases described by Thomas and Harden [ 31 ] to determine HTA dimensions for dHTs: (1) line-by-line text coding, (2) development of descriptive topics, and (3) generation of analytical themes. Both analyses were carried out by 3 authors (CM-P, RP-P, and JS-F) using the web and desktop versions of ATLAS.ti version 22.0 [ 29 ].

Dimensions identified from SRs that were derived from primary studies also retrieved by our systematic search were counted only once in order to avoid duplication of data and risk of bias. It is worth mentioning that the primary studies included in the SRs were not analyzed directly but through the findings reported in the SRs.

Study Selection and Characteristics

A total of 3042 studies were retrieved through the systematic (n=3023, 99.4%) and manual (n=19, 0.6%) searches. Of these, 2238 (73.6%) were identified as unique after removing duplicates.

After title and abstract review, 81 (3.6%) studies were selected for full-text review, of which 26 (32.1%) were finally included in the analysis. The excluded studies and reasons for exclusion are detailed in Multimedia Appendix 3 ; in brief, the reasons for exclusion were phenomenon of interest (n=30, 37%), type of publication (n=15, 18.5%), purpose (n=6, 7.4%), language (n=2, 2.5%), and duplicated information (n=2, 2.5%). The study selection process is outlined in Figure 1 [ 32 ].
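The selection counts reported above are internally consistent; as a quick illustration, a short script (all figures copied from the text; the variable names are ours) can cross-check the subtotals and rounded percentages:

```python
# Cross-check of the study-selection counts reported in the Results.
# The script only verifies that subtotals and rounded percentages agree.

retrieved = {"systematic": 3023, "manual": 19}
total_retrieved = sum(retrieved.values())  # 3042 records identified
unique = 2238      # after removing duplicates
full_text = 81     # selected after title/abstract review
included = 26      # included after full-text review

excluded_reasons = {
    "phenomenon of interest": 30,
    "type of publication": 15,
    "purpose": 6,
    "language": 2,
    "duplicated information": 2,
}

def pct(part, whole):
    """Percentage rounded to 1 decimal place, as reported in the text."""
    return round(100 * part / whole, 1)

assert total_retrieved == 3042
assert sum(excluded_reasons.values()) == full_text - included  # 55 exclusions
assert pct(full_text, unique) == 3.6    # 81/2238
assert pct(included, full_text) == 32.1  # 26/81
```
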

Of the 26 (32.1%) studies included in this ScR, 19 (73.1%) were designed as specific methodological frameworks for dHTA [ 16 , 17 , 33 - 47 ], 4 (15.4%) were SRs [ 48 - 51 ], 1 (3.8%) was a report from the European mHealth Hub’s working group on mHealth assessment guidelines [ 52 ], 1 (3.8%) was a qualitative study [ 53 ], and 1 (3.8%) was a viewpoint [ 54 ]. In addition, 3 (11.5%) focused on the assessment of non–face-to-face care models [ 33 - 35 ], 8 (30.8%) on mHealth assessment [ 36 - 40 , 52 , 53 , 55 ], 2 (7.7%) on the assessment of AI technology [ 41 , 54 ], 4 (15.4%) on eHealth [ 42 , 43 , 48 , 50 ], and 9 (34.6%) on the overall assessment of digital health [ 16 , 17 , 44 - 47 , 49 , 51 , 56 ].


Research Question 1: Description of Identified Frameworks for dHTA

The 19 methodological frameworks for dHTA [ 16 , 17 , 33 - 47 ] were from various countries: the largest share (n=5, 26.3%) originated in Australia [ 17 , 34 , 38 , 41 , 46 ], followed by 3 (15.8%) from the United States [ 43 , 45 , 56 ] and 2 (10.5%) from Switzerland [ 47 , 55 ]; the remaining 9 (47.4%) frameworks were developed in Afghanistan [ 42 ], Denmark [ 33 ], Scotland [ 35 ], Finland [ 16 ], Ireland [ 36 ], Israel [ 40 ], the United Kingdom [ 37 ], Spain [ 39 ], and Sweden [ 44 ].

The 19 methodological frameworks focused on evaluating various types of technologies. Specifically, 3 (15.8%) of them were designed for assessing non–face-to-face care models [ 33 - 35 ], 6 (31.6%) for mHealth [ 36 - 40 ], and 1 (5.3%) for AI solutions [ 41 ]. The other 9 (47.4%) frameworks addressed eHealth [ 42 , 43 , 56 ] or digital health in general [ 16 , 17 , 44 - 47 ], which encompasses non–face-to-face care models, mHealth, and occasionally AI-based solutions [ 18 ] within its scope. It is pertinent to mention that the differentiation between the methodological frameworks designed for the evaluation of eHealth and those designed for dHTA was based on the specific terminology and descriptions used by the authors of those frameworks.

The structures and characteristics of the analyzed methodological frameworks were considered heterogeneous in terms of evaluation specificity (whether they focused on a global evaluation that encompassed more than 1 domain or dimension or on a specific assessment that addressed only 1 domain or dimension), assessment approach (whether they adopted a phased evaluation, a domain evaluation, or a hybrid of both), and number of domains included. Regarding evaluation specificity, 17 (89.5%) methodological frameworks were classified as global as they covered various aspects or domains within their scope [ 16 , 17 , 33 - 36 , 38 - 47 , 55 , 56 ], while 2 (10.5%) were classified as specific as they concentrated exclusively on 1 element or domain of assessment [ 37 , 46 ]. Regarding the assessment approach, 14 (73.7%) methodological frameworks proposed a domain-based evaluation [ 16 , 17 , 33 , 35 , 36 , 38 - 40 , 43 , 44 , 46 , 55 , 56 ], while 4 (21.1%) proposed a hybrid one (phased and domain based) [ 41 , 42 , 45 , 47 ]; the remaining methodological framework did not fit into any of the previous categories, as it was not structured by domains or phases but by types of risk [ 37 ]. Finally, the number of evaluation domains considered ranged from 1 to 14, with an average of 7. Table 1 outlines the primary features of the included methodological frameworks and provides a thorough breakdown of the domains and dimensions they address.

In addition, from 3 (75%) [ 49 - 51 ] of the 4 SRs [ 48 - 51 ] and the report from the working group on guidelines for the evaluation of mHealth solutions from the European mHealth Hub [ 52 ], we identified other methodological frameworks and tools focusing on the assessment of dHTs. Specifically, we identified 16 methodological frameworks or tools focusing on the evaluation of non–face-to-face care models [ 57 - 72 ], along with 37 for the evaluation of mHealth [ 10 , 52 , 73 - 95 ], 11 for the evaluation of eHealth [ 96 - 107 ], and 17 for the evaluation of dHTs in general [ 108 - 124 ]. Additionally, 5 (26.3%) [ 33 , 34 , 36 , 37 , 42 ] of the 19 methodological frameworks included in this ScR were also identified and analyzed in 1 or more of the 4 literature synthesis documents [ 49 - 52 ]. It is important to note that the difference between the frameworks we retrieved through our systematic search and those identified in the 4 SRs is the result of the narrower perspective we adopted, focusing exclusively on frameworks directly relevant to the HTA field, in line with the aims of our study. In Multimedia Appendix 4 , we provide a more detailed explanation of the methodological frameworks included in the studies mentioned earlier [ 19 , 49 - 52 , 57 - 73 , 75 - 135 ].


Research Question 2: Domains and Dimensions Being Considered in dHTA

The 26 included studies encompassed a broad range of items to consider in dHTA and often used diverse expressions for analogous concepts. We reduced this heterogeneity through our thematic analysis, following the recommendations and phases described by Thomas and Harden [ 31 ].

In this sense, in the first phase of thematic analysis, we identified and coded 176 units of meaning (coded as provisional codes) that represented different items (domains or dimensions) of the assessment. These units were then grouped into 86 descriptive themes (second phase), which were further refined into 61 analytical themes that captured the key concepts and relationships between them (third phase). Lastly, the 61 analytical themes were arranged in a 3-level vertical hierarchy based on the evidence: level 1 (12 domains), level 2 (38 dimensions), and level 3 (11 subdimensions). We used the term “domain” to refer to a distinct area or topic of evaluation that is integral to the assessment of the technology in question. A domain may encompass multiple related concepts or dimensions that are relevant to the evaluation. Each dimension, in turn, represents a specific aspect of evaluation that belongs to the domain and contributes to an understanding of its overall significance. Finally, a subdimension refers to a partial element of a dimension that facilitates its analysis. By using these terms, we aimed to provide a clear, rigorous, and comprehensive framework for conducting HTA.
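To make the 3-level arrangement concrete, the hierarchy can be pictured as a nested mapping from domains to dimensions to subdimensions. The sketch below is illustrative only: the entry names are examples drawn from the surrounding text, not the full set of 61 published themes, which are listed in Table 2:

```python
# Illustrative sketch of the 3-level hierarchy produced by the thematic
# analysis: level 1 (domains) -> level 2 (dimensions) -> level 3
# (subdimensions). The full published framework has 12/38/11 items.
hierarchy = {
    "Technical aspects": {                 # level 1: domain
        "Technical stability": [],         # level 2: dimension, no subdimensions
        "Accessibility": [                 # level 2: dimension
            "Adequacy of accessibility functionalities",  # level 3: subdimension
        ],
    },
    "Human and sociocultural aspects": {   # level 1: domain
        "Accessibility": [
            "Access by users with functional diversity",
        ],
    },
}

def count_levels(tree):
    """Return (domains, dimensions, subdimensions) counts for a hierarchy."""
    domains = len(tree)
    dimensions = sum(len(dims) for dims in tree.values())
    subdimensions = sum(len(subs) for dims in tree.values() for subs in dims.values())
    return domains, dimensions, subdimensions

# For this toy example: 2 domains, 3 dimensions, 2 subdimensions.
```

Note that, as discussed later, a dimension such as accessibility can legitimately appear under more than one domain, which a nested mapping represents naturally.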

Table 2 displays the 61 analytical themes in descending order of coding frequency, aligned with the hierarchy derived from the data analysis. Additionally, the table specifies the intervention modalities or dHTs that correspond to each code and lists the studies from which each code originated. The network of relationships among the codes can be found in Multimedia Appendix 5 .


Research Question 3: Variability of Domains and Dimensions Among Technologies

Our thematic analysis revealed a significant degree of variability and heterogeneity in the number and type of domains and dimensions considered by the methodological frameworks.

In terms of numbers, the variability was quite pronounced when we compared frameworks addressing different types of dHTs. For instance, the thematic analysis of frameworks for assessing telemedicine identified only 9 (75%) domains and 6 (15.8%) dimensions; in contrast, in frameworks for assessing mHealth, we identified 10 (83.3%) domains, 20 (52.6%) dimensions, and 6 (54.5%) subdimensions, and in frameworks for assessing AI, we identified 8 (66.7%) different domains, 7 (18.4%) different dimensions, and 6 (54.5%) subdimensions.
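Note that the percentages in this paragraph are computed against different denominators at each level of the hierarchy: the 12 domains, 38 dimensions, and 11 subdimensions identified in the thematic analysis. A brief check (figures taken from the paragraph above):

```python
# Each percentage is relative to the total number of items at its
# hierarchy level: 12 domains, 38 dimensions, 11 subdimensions.
TOTALS = {"domains": 12, "dimensions": 38, "subdimensions": 11}

def share(count, level):
    """Percentage of all items at a hierarchy level, rounded to 1 decimal."""
    return round(100 * count / TOTALS[level], 1)

# Telemedicine frameworks: 9 domains, 6 dimensions
assert share(9, "domains") == 75.0
assert share(6, "dimensions") == 15.8
# mHealth frameworks: 10 domains, 20 dimensions, 6 subdimensions
assert share(10, "domains") == 83.3
assert share(20, "dimensions") == 52.6
assert share(6, "subdimensions") == 54.5
# AI frameworks: 8 domains, 7 dimensions, 6 subdimensions
assert share(8, "domains") == 66.7
assert share(7, "dimensions") == 18.4
```
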

In terms of the types of domains considered, certain dimensions and domains were identified as more distinctive for one kind of dHT than for another. For instance, clinical efficacy and effectiveness, technical safety, economic evaluation, and user experience were relevant for the evaluation of non–face-to-face care models and mHealth but not for AI. In contrast, there were specific dimensions and domains of mHealth that were not considered in the evaluation of non–face-to-face health care or AI, such as postmarketing monitoring, scientific basis, technical evaluation and validation, user control and self-determination, accessibility, content and adequacy of information, and data interoperability and integration. Finally, specific methodological frameworks for the evaluation of AI included dimensions such as technical aspects, adoption, use, integration, generalizability, reproducibility, and interpretability, which were not considered in the evaluation of telemedicine or mHealth. Taken together, these differences underscore the need for a clearer and more structured organization of evaluation items across technology types.

Proposal for Domains, Dimensions, and Subdimensions for dHTA

These findings led to the development of a proposed methodological framework for dHTA, which comprises domains, dimensions, and subdimensions. These evaluation items were established objectively based on thematically analyzed evidence, without incorporating the researcher’s perspective. Consequently, the proposal for domains, dimensions, and subdimensions emerged from the literature and represents the entirety of identified evaluation domains, dimensions, and subdimensions (n=61). Figure 2 presents a visual representation of the proposed framework comprising 12 domains, 38 dimensions, and their corresponding 11 subdimensions. Notably, the figure highlights certain domains, dimensions, and subdimensions that are particularly relevant to the evaluation of non–face-to-face care models, mHealth, and AI according to the evidence.


Principal Findings

In recent years, the interest in digital health has increased significantly, giving rise to a myriad of available technologies. This has brought about a profound transformation in health care systems, fundamentally changing the provision and consumption of health care services [ 9 ]. However, despite these advancements, the shift toward digital health has been accompanied by challenges. One such challenge is the emergence of a plethora of short-lived implementations and an overwhelming diversity of digital tools, which has created a need for careful evaluation and analysis of the benefits and drawbacks of these technologies [ 4 ].

In this context, our ScR aimed to identify the methodological frameworks used worldwide for the assessment of dHTs; determine what domains are considered; and generate, through a thematic analysis, a proposal for a methodological framework based on the most frequently described domains in the literature.

Throughout the ScR, we identified a total of 95 methodological frameworks and tools, of which 19 [ 16 , 17 , 33 - 47 ] were directly identified through a systematic search and 75 were indirectly identified through 4 SRs [ 49 - 52 ]. The difference in the number of methodological frameworks identified through the ScR and the 4 evidence synthesis documents [ 49 - 52 ] is attributed to the inclusion of keywords related to the concept of HTA in the search syntax, the exclusion of methodological frameworks published prior to 2011 during the screening process, and the differences in perspectives used for the development of this paper compared to the 4 evidence synthesis documents mentioned earlier. In this sense, these 4 documents [ 49 - 52 ] analyzed methodological frameworks and tools aimed at evaluating digital health that were not developed from an HTA perspective, despite the authors treating them as such. For example, von Huben et al [ 51 ] included in their analysis the Consolidated Standards of Reporting Trials (CONSORT)-EHEALTH tool [ 97 ], which aims to describe the information that should be reported in papers and reports that focus on evaluating web- and mHealth-based interventions; Koladas et al [ 49 ] included the mobile health evidence reporting and assessment (mERA) checklist [ 73 ], which aims to determine the information that should be reported in trials evaluating mHealth solutions; and the European mHealth Hub document [ 52 ] included the Isys Score, which is used for cataloguing smartphone apps.

However, as detailed in the Results section, some of the methodological frameworks identified through the ScR were characterized by the authors themselves as being specific for evaluating certain types of dHTs (eg, non–face-to-face care models, mHealth), presenting certain differences according to each typology. It is important to note that the differentiation among various types of dHTs, as described throughout this paper and commonly used in the field of digital health, cannot always be made in a precise and exclusive manner [ 136 ]. This is because a technology can often be classified in more than 1 category. For instance, an mHealth solution may use AI algorithms, while simultaneously being integrated into a non–face-to-face care model [ 137 ]. In this context, future research should consider using alternative taxonomies or classification methods that are based on the intended purpose of the technology, such as those proposed by NICE in the updated version of the Evidence Standards Framework [ 18 ] or the new digital health interventions system classification put forward by WHO [ 138 ].

After conducting a thematic analysis of the 26 included studies, we observed that various methodological frameworks include a set of evaluation items, referred to as domains, dimensions, or criteria. These items primarily focus on the safety; effectiveness; technical aspects; economic impact; and ethical, legal, and social consequences of dHTs. However, there is significant heterogeneity among these frameworks in terms of the way they refer to the evaluation items, the quantity and depth of their description, the degree of granularity, and the proposed evaluation methods, especially when comparing frameworks that focus on different types of dHTs. Despite this heterogeneity, most methodological frameworks consider evaluation items related to the 9 domains described by the HTA Core Model of EUnetHTA, while some frameworks propose additional evaluation elements, such as usability [ 16 , 44 , 45 , 47 , 49 , 56 ], privacy [ 39 - 41 , 44 , 52 , 55 ], and technical stability [ 16 , 38 , 47 , 49 , 52 ], among others. These findings are consistent with earlier research [ 50 , 51 ].

In addition, through the thematic analysis, the heterogeneity identified among the different methodological frameworks included in this ScR was reduced to a total of 61 analytical themes related to various evaluation elements that were arranged in a 3-level vertical hierarchy based on the evidence: level 1 (12 domains), level 2 (38 dimensions), and level 3 (11 subdimensions). At this point, it is pertinent to note that although from the researchers’ perspective, some dimensions could have been classified under different domains (eg, responsibility under ethical aspects) or seen as essential for other kinds of dHTs, an effort was made to maintain the highest degree of objectivity possible. It is for this reason that privacy issues were not described as essential for non–face-to-face care models and why the dimension of accessibility was categorized within the domains of human and sociocultural aspects and technical aspects. This categorization was made because some of the methodological frameworks analyzed associated it with sociocultural elements (eg, evaluating whether users with functional diversity can access the technology and have sufficient ability to use it as expected), while others linked it to technical elements (eg, adequacy of the elements, options, or accessibility functionalities that the system incorporates according to the target audience) [ 16 , 52 ].

The ScR and thematic analysis conducted in this study led to a proposal for a methodological framework for dHTA. This framework was further developed using additional methodologies, such as consensus workshops by the Agency for Health Quality and Assessment of Catalonia (AQuAS), in collaboration with all agencies of RedETS, commissioned by the Ministry of Health of Spain. The final framework is a specific methodological tool for the assessment of dHTs, aimed at describing the domains and dimensions to be considered in dHTA and defining the evidence standards that such technologies must meet based on their associated risk level. The proposed methodological framework enables the assessment of a wide range of dHTs, mainly those classified as medical devices according to the Regulation (EU) 2017/745 for medical devices [ 139 ] and Regulation (EU) 2017/746 for in vitro diagnostic medical devices, although it can be adapted to assess dHTs not classified as medical devices [ 140 ]. Unlike existing frameworks, it establishes a clear link between the identified domains and dimensions and the evidence standards that dHTs are required to meet. This approach will enhance the transparency and consistency of dHTAs and support evidence-based decision-making. The final document was published in November 2023 and is available in Spanish on the RedETS website as well as on the main web page of AQuAS [ 141 ]. Since the first week of February, the respective websites have also hosted an English version of the document [ 141 ], which is additionally accessible in the INAHTA database. The Spanish and English versions of the document will be periodically reviewed and, if necessary, adapted to align with emerging technologies and changes in legislation.

Limitations

Although this ScR was conducted in accordance with the PRISMA-ScR guidelines ( Multimedia Appendix 1 ) and following the recommendations of Peters et al [ 22 ] and Pollock et al [ 23 ], there were some limitations. First, due to the perspective of our ScR, the search incorporated a block of keywords related to the concept of HTA (see Multimedia Appendix 2 ), which may have prevented some relevant studies from being retrieved. However, this limitation was compensated for by the analysis of the 3 SRs and the report of the working group on guidelines for the evaluation of mHealth solutions of the European mHealth Hub. Second, much of the literature related to HTA is gray literature and only published on the websites of the authoring agencies. Despite efforts to address this limitation through expert input and a comprehensive search of the websites of the world’s leading agencies, it is possible that certain studies were not identified. Third, the quality and limitations of the analysis conducted by the authors of methodological frameworks and tools included in SRs may have had an impact on the indirect thematic analysis. Therefore, it is possible that some data could have been omitted or not considered during this process. Fourth, the focus on dHTs encompassed within the 3 previously mentioned categories (mHealth, non–face-to-face care models, and medical devices that integrate AI) may have influenced the outcomes of the thematic analysis conducted. Fifth, only methodological frameworks written in Catalan, Spanish, and English were included.

Comparison With Prior Work

To the best of our knowledge, this is the first ScR to examine the methodological frameworks for dHTA, followed by a thematic analysis with the aim of proposing a new comprehensive framework that incorporates the existing literature in an objective manner and enables the assessment of various technologies included under the concept of digital health. In this sense, existing SRs and other evidence synthesis documents have only analyzed the literature and reported the results in a descriptive manner [ 36 , 48 , 49 , 51 , 56 , 125 , 126 ]. Furthermore, this ScR also considered, in addition to scientific literature, gray literature identified by searching the websites of the agencies, thus addressing some limitations of previous reviews [ 50 ]. Moreover, this review was carried out from the perspective of HTA, addressing a clear need expressed by HTA agencies [ 16 ].

Future research should aim to identify what domains and dimensions are relevant at the different stages of the technology life cycle, to establish or develop a standardized set of outcomes for assessing or reporting each domain, and to evaluate the effectiveness and usefulness of the existing methodological frameworks for the different intended users [ 50 , 142 ]. Moreover, future research should aim to determine the specific evaluation criteria that ought to be considered based on the level of risk associated with different types of technologies [ 51 ].

Conclusions

Our ScR revealed a total of 102 methodological frameworks and tools designed for evaluating dHTs, with 19 identified directly through a systematic search and 83 indirectly through 4 evidence synthesis documents. Only 19 of all the identified frameworks were developed from the perspective of HTA. These frameworks vary in assessment items, structure, and specificity, and evidence of their usefulness in practice remains scarce.

The thematic analysis of the 26 studies that met the inclusion criteria led to the identification and definition of 12 domains, 38 dimensions, and 11 subdimensions that should be considered when evaluating dHTs. Building on our results, a methodological framework for dHTA was proposed.

Acknowledgments

We acknowledge Benigno Rosón Calvo (Servicio Gallego de Salud [SERGAS]), Carme Carrion (Universitat Oberta de Catalunya [UOC]), Carlos A Molina Carrón (Dirección General de Salud Digital y Sistemas de Información para el SNS. Ministerio de Sanidad, Gobierno de España), Carme Pratdepadua (Fundació Tic Salut i Social [FTSS]), Celia Muñoz (Instituto Aragonés de Ciencias de la Salud [IACS]), David Pijoan (Biocat, BioRegió de Catalunya), Felip Miralles (Eurecat – Centre Tecnològic de Catalunya), Iñaki Guiterrez Ibarluzea (Osasun Teknologien Ebaluazioko Zerbitzua [Osteba]), Janet Puñal Riobóo (Unidad de Asesoramiento Científico-técnico [avalia-t], Agencia Gallega para la Gestión del Conocimiento en Salud [ACIS]), Jordi Piera-Jiménez (Àrea de Sistemes d’Informació del Servei Català de la Salut [CatSalut]), Juan Antonio Blasco (Evaluación de Tecnologías Sanitarias de Andalucía [AETSA]), Liliana Arroyo Moliner (Direcció General de Societat Digital, Departament d’Empresa i Treball de la Generalitat de Catalunya), Lilisbeth Perestelo-Perez (Servicio de Evaluación del Servicio Canario de la Salud [SESCS]), Lucía Prieto Remón (IACS), Marifé Lapeña (Dirección General de Salud Digital y Sistemas de Información para el SNS. Ministerio de Sanidad, Gobierno de España), Mario Cárdaba (Insituto de Salud Carlos III [ISCIII]), Montserrat Daban (Biocat, BioRegió de Catalunya), Montserrat Moharra Frances (Agència de Qualitat i Avaluació Sanitàries de Catalunya), and Oscar Solans (CatSalut) for reviewing the protocol of this scoping review (ScR) and the ScR.

This research was framed within the budget of the work plan of the Spanish Network of Health Technology Assessment Agencies, commissioned by the General Directorate of Common Portfolio of Services of the National Health System and Pharmacy.

Authors' Contributions

JS-F and CM-P were responsible for conceptualization, methodology, formal analysis, investigation, data curation, writing—original draft, and visualization. RP-P handled conceptualization, methodology, formal analysis, investigation, resources, and writing—original draft. RMV-H handled conceptualization, writing—review and editing, supervision, and project administration.

Conflicts of Interest

None declared.

Multimedia Appendix 1: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist [ 21 ].

Multimedia Appendix 2: Search strategies for each database.

Multimedia Appendix 3: References excluded at the full-text screening stage.

Multimedia Appendix 4: Methodological frameworks included in systematic reviews.

Multimedia Appendix 5: Network of relationships among the codes.

Multimedia Appendix 6: High-resolution image of Figure 2.

  • Avanzas P, Pascual I, Moris C. The great challenge of the public health system in Spain. J Thorac Dis. May 2017;9(Suppl 6):S430-S433. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Grubanov BS, Ghio D, Goujon A, Kalantaryan S, Belmonte M, Scipioni M. Health and long-term care workforce: demographic challenges and the potential contribution of migration and digital technology. Luxembourg. European Commission; 2021.
  • World Health Organization (WHO). Health 2020: a European policy framework and strategy for the 21st century. Denmark. WHO; 2013.
  • World Health Organization (WHO). WHO guideline: recommendations on digital interventions for health system strengthening. Geneva. WHO; Jun 6, 2019.
  • Scholz N. Addressing health inequalities in the European Union. Luxembourg. European Parliamentary Research Service; 2020.
  • Kickbusch I, Agrawal A, Jack A, Lee N, Horton R. Governing health futures 2030: growing up in a digital world—a joint The Lancet and Financial Times Commission. Lancet. Oct 2019;394(10206):1309. [ CrossRef ]
  • Reeves JJ, Ayers JW, Longhurst CA. Telehealth in the COVID-19 era: a balancing act to avoid harm. J Med Internet Res. Feb 01, 2021;23(2):e24785. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Richardson S, Lawrence K, Schoenthaler AM, Mann D. A framework for digital health equity. NPJ Digit Med. Aug 18, 2022;5(1):119. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • European Union. State of health in the EU: companion report 2021. Luxembourg. European Union; 2022.
  • LATITUD: Anàlisi comparativa de models d'atenció no presencial en l'àmbit de la salut. TIC Salut Social. 2020. URL: https://ticsalutsocial.cat/wp-content/uploads/2021/07/latitud.pdf [accessed 2024-03-28]
  • Global strategy on digital health 2020-2025. World Health Organization. Aug 18, 2021. URL: https://www.who.int/publications/i/item/9789240020924 [accessed 2024-03-28]
  • Ming J, He Y, Yang Y, Hu M, Zhao X, Liu J, et al. Health technology assessment of medical devices: current landscape, challenges, and a way forward. Cost Eff Resour Alloc. Oct 05, 2022;20(1):54. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • O'Rourke B, Oortwijn W, Schuller T. The new definition of health technology assessment: a milestone in international collaboration. Int J Technol Assess Health Care. May 13, 2020;36(3):187-190. [ CrossRef ]
  • EUnetHTA Joint Action 2 Work Package 8. HTA Core Model ® version 3.0 2016. EUnetHTA. 2016. URL: https://www.eunethta.eu/wp-content/uploads/2018/03/HTACoreModel3.0-1.pdf [accessed 2024-03-28]
  • Puñal-Riobóo J, Baños, Varela LL, Castillo MM, Atienza MG, Ubago PR. Guía para la elaboración y adaptación de informes rápidos. Santiago de Compostela, Madrid. Agencia Gallega para la Gestión del Conocimiento en Salud. Unidad de Asesoramiento Científico-técnico, avalia-t. Ministerio de Sanidad, Servicios Sociales e Igualdad; 2016.
  • Haverinen J, Keränen N, Falkenbach P, Maijala A, Kolehmainen T, Reponen J. Digi-HTA: health technology assessment framework for digital healthcare services. FinJeHeW. Nov 02, 2019;11(4):326-341. [ CrossRef ]
  • Hussain MS, Silvera-Tawil D, Farr-Wharton G. Technology assessment framework for precision health applications. Int J Technol Assess Health Care. May 26, 2021;37(1):e67. [ CrossRef ]
  • National Institute for Health and Care Excellence (NICE). Evidence standards framework (ESF) for digital health technologies. Contract no.: 30 de septiembre. London. NICE; 2022.
  • Evidence standards framework (ESF) for digital health technologies. National Institute for Health and Care Excellence. URL: https:/​/www.​nice.org.uk/​about/​what-we-do/​our-programmes/​evidence-standards-framework-for-digital-health-technologies [accessed 2024-03-28]
  • Segur-Ferrer J, Moltó-Puigmartí C, Pastells-Peiró R, Vivanco-Hidalgo RM. Methodological frameworks and dimensions to be taken into consideration in digital health technology assessment: protocol for a scoping review. JMIR Res Protoc. Oct 11, 2022;11(10):e39905. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. Oct 02, 2018;169(7):467-473. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Peters M, Marnie C, Tricco A, Pollock D, Munn Z, Alexander L. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth. 2020;18(10):2119-2126. [ CrossRef ]
  • Pollock D, Davies EL, Peters MDJ, Tricco AC, Alexander L, McInerney P, et al. Undertaking a scoping review: a practical guide for nursing and midwifery students, clinicians, researchers, and academics. J Adv Nurs. Apr 04, 2021;77(4):2102-2113. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ayiku L, Hudson T, Glover S, Walsh N, Adams R, Deane J, et al. The NICE MEDLINE and Embase (Ovid) health apps search filters: development of validated filters to retrieve evidence about health apps. Int J Technol Assess Health Care. Oct 27, 2020;37(1):e16. [ CrossRef ]
  • McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. Jul 2016;75:40-46. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bramer WM, Rethlefsen ML, Kleijnen J, Franco OH. Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study. Syst Rev. Dec 06, 2017;6(1):245. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based Healthc. 2015;13(3):179-187. [ CrossRef ]
  • Systematic reviews - research guide. Defining your review question. Murdoch University. 2023. URL: https://libguides.murdoch.edu.au/systematic [accessed 2024-03-28]
  • Scientific Software Development GmbH. Atlas.ti qualitative data analysis. 22.0 ed. Berlin. Scientific Software Development GmbH; 2021.
  • Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al. Cochrane handbook for systematic reviews of interventions version 6.3. London. Cochrane; 2022.
  • Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. Jul 10, 2008;8(1):45. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Page M, McKenzie J, Bossuyt P, Boutron I, Hoffmann T, Mulrow C, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. PLoS Med. Mar 2021;18(3):e1003583. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kidholm K, Ekeland AG, Jensen LK, Rasmussen J, Pedersen CD, Bowes A, et al. A model for assessment of telemedicine applications: MAST. Int J Technol Assess Health Care. Jan 23, 2012;28(1):44-51. [ CrossRef ]
  • Nepal S, Li J, Jang-Jaccard J, Alem L. A framework for telehealth program evaluation. Telemed J E Health. Apr 2014;20(4):393-404. [ CrossRef ] [ Medline ]
  • Measuring the impact of telehealth and telecare: SCTT toolkit. Glasgow. SCTT; 2013.
  • Caulfield B, Reginatto B, Slevin P. Not all sensors are created equal: a framework for evaluating human performance measurement technologies. NPJ Digit Med. Feb 14, 2019;2(1):7. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lewis TL, Wyatt JC. mHealth and mobile medical apps: a framework to assess risk and promote safer use. J Med Internet Res. Sep 15, 2014;16(9):e210. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Moshi MR, Tooher R, Merlin T. Development of a health technology assessment module for evaluating mobile medical applications. Int J Technol Assess Health Care. May 18, 2020;36(3):252-261. [ CrossRef ]
  • Puigdomènech E, Poses-Ferrer E, Espallargues CM, Blasco AJ, Varela LL, Paz VL. Evaluación de tecnología basada en mSalud para aplicaciones móviles. Madrid. Ministerio de Sanidad; 2020.
  • Henson P, David G, Albright K, Torous J. Deriving a practical framework for the evaluation of health apps. Lancet Digit Health. Jun 2019;1(2):e52-e54. [ CrossRef ]
  • Reddy S, Rogers W, Makinen V, Coiera E, Brown P, Wenzel M, et al. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform. Oct 12, 2021;28(1):e100444. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Khoja S, Durrani H, Scott RE, Sajwani A, Piryani U. Conceptual framework for development of comprehensive e-health evaluation tool. Telemed J E Health. Jan 2013;19(1):48-53. [ CrossRef ] [ Medline ]
  • Sockolow P, Bowles K, Rogers M. Health information technology evaluation framework (HITREF) comprehensiveness as assessed in electronic point-of-care documentation systems evaluations. Amsterdam. IOS Press; 2015.
  • Kowatsch T, Otto L, Harperink S, Cotti A, Schlieter H. A design and evaluation framework for digital health interventions. it - Inf Technol. Nov 2019;61(5):253-263. [ CrossRef ]
  • Mathews SC, McShea MJ, Hanley CL, Ravitz A, Labrique AB, Cohen AB. Digital health: a path to validation. NPJ Digit Med. May 13, 2019;2(1):38. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Milosevic Z. Ethics in digital health: a deontic accountability framework. 2019. Presented at: IEEE 23rd International Enterprise Distributed Object Computing Conference (EDOC); October 28-31, 2019; Paris, France. [ CrossRef ]
  • World Health Organization (WHO). Monitoring and evaluating digital health interventions: a practical guide to conducting research and assessment. Geneva. WHO; 2016.
  • Enam A, Torres-Bonilla J, Eriksson H. Evidence-based evaluation of eHealth interventions: systematic literature review. J Med Internet Res. Nov 23, 2018;20(11):e10971. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kolasa K, Kozinski G. How to value digital health interventions? A systematic literature review. Int J Environ Res Public Health. Mar 23, 2020;17(6):2119. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vis C, Bührmann L, Riper H, Ossebaard HC. Health technology assessment frameworks for eHealth: a systematic review. Int J Technol Assess Health Care. Apr 16, 2020;36(3):204-216. [ CrossRef ]
  • von Huben A, Howell M, Howard K, Carrello J, Norris S. Health technology assessment for digital technologies that manage chronic disease: a systematic review. Int J Technol Assess Health Care. May 26, 2021;37(1):e66. [ CrossRef ]
  • Report of the Working Group on mhealth assessment guidelines. European Commission. 2021. URL: https://digital-strategy.ec.europa.eu/en/library/report-working-group-mhealth-assessment-guidelines [accessed 2024-03-28]
  • Kumar S, Nilsen WJ, Abernethy A, Atienza A, Patrick K, Pavel M, et al. Mobile health technology evaluation: the mhealth evidence workshop. Am J Prev Med. Aug 2013;45(2):228-236. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Alami H, Lehoux P, Auclair Y, de Guise M, Gagnon M, Shaw J, et al. Artificial intelligence and health technology assessment: anticipating a new level of complexity. J Med Internet Res. Jul 07, 2020;22(7):e17707. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vokinger KN, Nittas V, Witt CM, Fabrikant SI, von Wyl V. Digital health and the COVID-19 epidemic: an assessment framework for apps from an epidemiological and legal perspective. Swiss Med Wkly. May 04, 2020;150(1920):w20282. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Baumel A, Muench F. Heuristic evaluation of ehealth interventions: establishing standards that relate to the therapeutic process perspective. JMIR Ment Health. Jan 13, 2016;3(1):e5. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Alfonzo A, Huerta M, Wong S, Passariello G, Diaz M, La CA, et al. Design of a methodology for assessing an electrocardiographic telemonitoring system. Annu Int Conf IEEE Eng Med Biol Soc. 2007;2007:3729-3732. [ CrossRef ]
  • Bashshur R, Shannon G, Sapci H. Telemedicine evaluation. Telemed J E Health. Jun 2005;11(3):296-316. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Beintner I, Vollert B, Zarski A, Bolinski F, Musiat P, Görlich D, et al. Adherence reporting in randomized controlled trials examining manualized multisession online interventions: systematic review of practices and proposal for reporting standards. J Med Internet Res. Aug 15, 2019;21(8):e14181. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Brear M. Evaluating telemedicine: lessons and challenges. Health Inf Manag. Jul 21, 2006;35(2):23-31. [ CrossRef ] [ Medline ]
  • DeChant HK, Tohme WG, Mun SK, Hayes WS, Schulman KA. Health systems evaluation of telemedicine: a staged approach. Telemed J. Jan 1996;2(4):303-312. [ CrossRef ] [ Medline ]
  • Giansanti D, Morelli S, Macellari V. Telemedicine technology assessment part II: tools for a quality control system. Telemed J E Health. Apr 2007;13(2):130-140. [ CrossRef ] [ Medline ]
  • Giansanti D, Morelli S, Macellari V. Telemedicine technology assessment part I: setup and validation of a quality control system. Telemed J E Health. Apr 2007;13(2):118-129. [ CrossRef ] [ Medline ]
  • Grigsby J, Brega AG, Devore PA. The evaluation of telemedicine and health services research. Telemed J E Health. Jun 2005;11(3):317-328. [ CrossRef ] [ Medline ]
  • Hailey D, Jacobs P, Simpson J, Doze S. An assessment framework for telemedicine applications. J Telemed Telecare. Jun 23, 1999;5(3):162-170. [ CrossRef ] [ Medline ]
  • Ohinmaa A, Hailey D, Roine R. Elements for assessment of telemedicine applications. Int J Technol Assess Health Care. Jun 30, 2001;17(2):190-202. [ CrossRef ] [ Medline ]
  • Rajan B, Tezcan T, Seidmann A. Service systems with heterogeneous customers: investigating the effect of telemedicine on chronic care. Manag Sci. Mar 2019;65(3):1236-1267. [ CrossRef ]
  • Sisk JE, Sanders JH. A proposed framework for economic evaluation of telemedicine. Telemed J. Jan 1998;4(1):31-37. [ CrossRef ] [ Medline ]
  • Zissman K, Lejbkowicz I, Miller A. Telemedicine for multiple sclerosis patients: assessment using Health Value Compass. Mult Scler. Apr 30, 2012;18(4):472-480. [ CrossRef ] [ Medline ]
  • Grustam AS, Vrijhoef HJM, Koymans R, Hukal P, Severens JL. Assessment of a business-to-consumer (B2C) model for telemonitoring patients with chronic heart failure (CHF). BMC Med Inform Decis Mak. Oct 11, 2017;17(1):145. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hebert M. Telehealth success: evaluation framework development. Stud Health Technol Inform. 2001;84(Pt 2):1145-1149. [ Medline ]
  • Rojahn K, Laplante S, Sloand J, Main C, Ibrahim A, Wild J, et al. Remote monitoring of chronic diseases: a landscape assessment of policies in four European countries. PLoS One. May 19, 2016;11(5):e0155738. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Agarwal S, LeFevre AE, Lee J, L'Engle K, Mehl G, Sinha C, et al. WHO mHealth Technical Evidence Review Group. Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist. BMJ. Mar 17, 2016;352:i1174. [ CrossRef ] [ Medline ]
  • Safety and quality strategy in mobile health apps. Complete list of recommendations on design, use and assessment of health apps. Agencia de Calidad Sanitaria de Andalucía. 2012. URL: http://www.calidadappsalud.com/en/listado-completo-recomendaciones-app-salud/ [accessed 2023-07-17]
  • The app evaluation model. American Psychiatric Association Initiative. 2022. URL: https://www.psychiatry.org/psychiatrists/practice/mental-health-apps/the-app-evaluation-model [accessed 2024-03-28]
  • Gdd AppStore. Association of Regional Public Health Services (GGD) and Regional Medical Emergency Preparedness and Planning (GHOR). 2016. URL: https://tinyurl.com/58th4p4w [accessed 2023-07-18]
  • AppQ: quality criteria core set for more quality transparency in digital health applications. Bertelsmann Stiftung. Oct 29, 2019. URL: https://www.bertelsmann-stiftung.de/de/publikationen/publikation/did/appq [accessed 2024-03-28]
  • Bradway M, Carrion C, Vallespin B, Saadatfard O, Puigdomènech E, Espallargues M, et al. mHealth assessment: conceptualization of a global framework. JMIR Mhealth Uhealth. May 02, 2017;5(5):e60. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • PAS 277:2015 - health and wellness apps. Quality criteria across the life cycle. British Standards Institution. URL: https:/​/knowledge.​bsigroup.com/​products/​health-and-wellness-apps-quality-criteria-across-the-life-cycle-code-of-practice/​standard [accessed 2023-07-18]
  • MindApps. Centre for Telepsychiatry in the Region of Southern Denmark. 2017. URL: https://mindapps.dk/ [accessed 2023-07-18]
  • Dick S, O'Connor Y, Thompson MJ, O'Donoghue J, Hardy V, Wu TJ, et al. Considerations for improved mobile health evaluation: retrospective qualitative investigation. JMIR Mhealth Uhealth. Jan 22, 2020;8(1):e12424. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Federal Institute for Drugs and Medical Devices (BfArM). The fast-track process for digital health applications (diga) according to section 139e sgb v. A guide for manufacturers, service providers and users. Bonn, Germany. Federal Institute for Drugs and Medical Devices; 2020.
  • Gorski I, Bram JT, Sutermaster S, Eckman M, Mehta K. Value propositions of mHealth projects. J Med Eng Technol. Aug 12, 2016;40(7-8):400-421. [ CrossRef ] [ Medline ]
  • Hogaboam L, Daim T. Technology adoption potential of medical devices: the case of wearable sensor products for pervasive care in neurosurgery and orthopedics. Health Policy Technol. Dec 2018;7(4):409-419. [ CrossRef ]
  • Huckvale K, Torous J, Larsen ME. Assessment of the data sharing and privacy practices of smartphone apps for depression and smoking cessation. JAMA Netw Open. Apr 05, 2019;2(4):e192542. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Maar MA, Yeates K, Perkins N, Boesch L, Hua-Stewart D, Liu P, et al. A framework for the study of complex mhealth interventions in diverse cultural settings. JMIR Mhealth Uhealth. Apr 20, 2017;5(4):e47. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McMillan B, Hickey E, Patel MG, Mitchell C. Quality assessment of a sample of mobile app-based health behavior change interventions using a tool based on the National Institute of Health and Care Excellence behavior change guidance. Patient Educ Couns. Mar 2016;99(3):429-435. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Health and welness apps: new international guidelines to help to sort the best form the rest. NEN. URL: https://www.nen.nl/en/health-and-welness-apps [accessed 2024-03-28]
  • Continua design guidelines. Personal Connected Health Alliance. 2019. URL: https://www.pchalliance.org/continua-design-guidelines [accessed 2023-07-18]
  • Philpott D, Guergachi A, Keshavjee K. Design and validation of a platform to evaluate mhealth apps. Stud Health Technol Inform. 2017;235:3-7. [ Medline ]
  • Ruck A, Wagner BS, Lowe C. Second draft of guidelines. EU guidelines on assessment of the reliability of mobile health applications. European Commission, Directorate-General of Communications Networks, Content & Technology. Luxembourg; 2016.
  • Sax M, Helberger N, Bol N. Health as a means towards profitable ends: mhealth apps, user autonomy, and unfair commercial practices. J Consum Policy. May 22, 2018;41(2):103-134. [ CrossRef ]
  • Wyatt JC. How can clinicians, specialty societies and others evaluate and improve the quality of apps for patient use? BMC Med. Dec 03, 2018;16(1):225. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • IRBs could address ethical issues related to tracking devices: mobile devices raise new concerns. IRB Advisor. Nov 1, 2017. URL: https:/​/www.​reliasmedia.com/​articles/​141589-irbs-could-address-ethical-issues-related-to-tracking-devices [accessed 2024-03-28]
  • App check. Zentrum für Telematik und Telemedizin. URL: https://ztg-nrw.de/ [accessed 2024-03-28]
  • Bergmo TS. How to measure costs and benefits of ehealth interventions: an overview of methods and frameworks. J Med Internet Res. Nov 09, 2015;17(11):e254. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Eysenbach G, CONSORT-EHEALTH Group. CONSORT-EHEALTH: improving and standardizing evaluation reports of web-based and mobile health interventions. J Med Internet Res. Dec 31, 2011;13(4):e126. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Shaw NT. 'CHEATS': a generic information communication technology (ICT) evaluation framework. Comput Biol Med. May 2002;32(3):209-220. [ CrossRef ] [ Medline ]
  • Brown M, Shaw N. Evaluation practices of a major Canadian telehealth provider: lessons and future directions for the field. Telemed J E Health. Oct 2008;14(8):769-774. [ CrossRef ] [ Medline ]
  • Casper GR, Kenron DA. A framework for technology assessment: approaches for the selection of a home technology device. Clin Nurse Spec. 2005;19(4):170-174. [ CrossRef ] [ Medline ]
  • Sitting D, Kahol K, Singh H. Sociotechnical evaluation of the safety and effectiveness of point-of-care mobile computing devices: a case study conducted in India. Stud Health Technol Inform. 2013;192:515-519. [ CrossRef ]
  • Haute Autorité de Santé. Good practice guidelines on health apps and smart devices (mobile health or mhealth). Paris. Haute Autorité de Santé; Nov 7, 2016.
  • Health Information and Quality Authority. International review of consent models for the collection, use and sharing of health information. Dublin. Health Information and Quality Authority; 2020.
  • Jurkeviciute M. Planning of a holistic summative ehealth evaluation: the interplay between standards and reality PhD Thesis. Gotherburg. Chalmers University of Technology; 2018.
  • Vimarlund V, Davoody N, Koch S. Steps to consider for effective decision making when selecting and prioritizing eHealth services. Stud Health Technol Inform. 2013;192:239-243. [ Medline ]
  • Currie WL. TEMPEST: an integrative model for health technology assessment. Health Policy Technol. Mar 2012;1(1):35-49. [ CrossRef ]
  • Eivazzadeh S, Anderberg P, Larsson TC, Fricker SA, Berglund J. Evaluating health information systems using ontologies. JMIR Med Inform. Jun 16, 2016;4(2):e20. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Our data-driven future in healthcare: people and partnerships at the heart of health related technologies. Academy of Medical Sciences. Nov 2018. URL: https://acmedsci.ac.uk/file-download/74634438 [accessed 2024-03-28]
  • Australian Commission on Safety and Quality in Healthcare. National safety and quality digital mental health standards - consultation draft. Australia. Australian Commission on Safety and Quality in Healthcare; 2020.
  • A guide to good practice for digital and data-driven health technologies. London. Department of Health & Social Care; 2021.
  • Drury P, Roth S, Jones T, Stahl M, Medeiros D. Guidance for investing in digital health. In: Sustainable Development Working Papers. Mandaluyong, the Philippines. Asian Development Bank; May 2018.
  • European Commission. Synospis report. Consultation: transformation health and care in the digital single market. Luxembourg. European Commission; 2018.
  • Federal Ministry of Health. Regulation on the procedure and requirements for testing the eligibility for reimbursement of digital health applications in the statutory public health insurance (Digital Health Applications Ordinance - DiGAV). Alemania. Federal Ministry of Health; 2020.
  • Haute Autorité de Santé. Guide to the specific features of clinical evaluation of a connected medical device (CMD) in view of its application for reimbursement. Paris. Haute Autorité de Santé; 2019.
  • How we assess health apps and digital tools. NHS Digital. 2019. URL: https:/​/digital.​nhs.uk/​services/​nhs-apps-library/​guidance-forhealth-app-developers-commissioners-and-assessors/​how-we-assess-healthapps-and-digital-tools [accessed 2023-07-17]
  • Lennon MR, Bouamrane M, Devlin AM, O'Connor S, O'Donnell C, Chetty U, et al. Readiness for delivering digital health at scale: lessons from a longitudinal qualitative evaluation of a national digital health innovation program in the United Kingdom. J Med Internet Res. Feb 16, 2017;19(2):e42. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McNamee P, Murray E, Kelly MP, Bojke L, Chilcott J, Fischer A, et al. Designing and undertaking a health economics study of digital health interventions. Am J Prev Med. Nov 2016;51(5):852-860. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Haute Autorité de Santé. Medical Device and Health Technology Evaluation Committee (CNEDiMTS*). Paris. Haute Autorité de Santé; 2019.
  • Draft guidelines for preparing assessment reports for the Medical Services Advisory Committee: draft version 4.0. Australian Government Department of Health and Aged Care. Aug 2020. URL: https:/​/consultations.​health.gov.au/​technology-assessment-access-division/​msac-guidelines-review-consultation/​user_uploads/​draft-msac-guidelines---clean-version---28-august-2020-3.​pdf [accessed 2024-03-28]
  • Haute Autorité de Santé. Methodological choices for the clinical development of medical devices. Paris. Haute Autorité de Santé; 2013.
  • Michie S, Yardley L, West R, Patrick K, Greaves F. Developing and evaluating digital interventions to promote behavior change in health and health care: recommendations resulting from an international workshop. J Med Internet Res. Jun 29, 2017;19(6):e232. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mohr DC, Schueller SM, Riley WT, Brown CH, Cuijpers P, Duan N, et al. Trials of intervention principles: evaluation methods for evolving behavioral intervention technologies. J Med Internet Res. Jul 08, 2015;17(7):e166. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Murray E, Hekler EB, Andersson G, Collins LM, Doherty A, Hollis C, et al. Evaluating digital health interventions: key questions and approaches. Am J Prev Med. Nov 2016;51(5):843-851. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Steventon A, Grieve R, Bardsley M. An approach to assess generalizability in comparative effectiveness research: a case study of the whole systems demonstrator cluster randomized trial comparing telehealth with usual care for patients with chronic health conditions. Med Decis Making. May 18, 2015;35(8):1023-1036. [ CrossRef ]
  • D2.1 Knowledge Tool 1. Health apps assessment frameworks. European mHealth Hub. 2020. URL: https://mhealth-hub.org/download/d2-1-knowledge-tool-1-health-apps-assessment-frameworks [accessed 2024-03-28]
  • Moshi MR, Tooher R, Merlin T. Suitability of current evaluation frameworks for use in the health technology assessment of mobile medical applications: a systematic review. Int J Technol Assess Health Care. Sep 11, 2018;34(5):464-475. [ CrossRef ]
  • Stensgaard T, Sørensen T. Telemedicine in Greenland — the creation of an evaluation plan. J Telemed Telecare. Jun 22, 2016;7(1_suppl):37-38. [ CrossRef ]
  • HL7 Consumer Mobile Health Application Functional Framework (cMHAFF). HL7 International. 2018. URL: http://www.hl7.org/implement/standards/product_brief.cfm?product_id=476 [accessed 2024-03-28]
  • mHealth. Kantonen K-uKvBu. 2017. URL: https://www.e-health-suisse.ch/gemeinschaften-umsetzung/ehealth-aktivitaeten/mhealth.html [accessed 2023-07-18]
  • mHealthBelgium. mHealthBELGIUM. URL: https://mhealthbelgium.be/ [accessed 2024-03-28]
  • Mookherji S, Mehl G, Kaonga N, Mechael P. Unmet need: improving mhealth evaluation rigor to build the evidence base. J Health Commun. Jun 04, 2015;20(10):1224-1229. [ CrossRef ] [ Medline ]
  • MySNS Selecção. Serviços Partilhados do Minisério da Saúde. URL: https://mysns.min-saude.pt/mysns-seleccao-processo-de-avaliacao/ [accessed 2023-07-18]
  • Nielsen S, Rimpiläinen S. Report on international practice on digital apps. Glasgow. Digital Health & Care Institute; 2018.
  • Accreditation of Digital Health solutions is a fundamental foundation for their safe adoption, equipping healthcare providers and practitioners with access to health apps assured to your standards Internet. Organisation for the Review of Care and Health Applications (ORCHA). URL: https://orchahealth.com/services/ [accessed 2023-07-18]
  • mHealth. TIC Salut Social. 2022. URL: https://ticsalutsocial.cat/projecte/mhealth/ [accessed 2024-03-28]
  • Wienert J, Jahnel T, Maaß L. What are digital public health interventions? First steps toward a definition and an intervention classification framework. J Med Internet Res. Jun 28, 2022;24(6):e31921. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Deniz-Garcia A, Fabelo H, Rodriguez-Almeida AJ, Zamora-Zamorano G, Castro-Fernandez M, Alberiche Ruano MDP, et al. WARIFA Consortium. Quality, usability, and effectiveness of mhealth apps and the role of artificial intelligence: current scenario and challenges. J Med Internet Res. May 04, 2023;25:e44030. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • World Health Organization (WHO). Classification of digital interventions, services and applications in health: a shared language to describe the uses of digital technology for health, 2nd ed. Geneva. WHO; Oct 24, 2023.
  • European Parliament, Council of the European Union. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC. EUR-Lex. 2017. URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32017R0745 [accessed 2024-03-28]
  • European Parliament, Council of the European Union. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU. EUR-Lex. 2017. URL: https://eur-lex.europa.eu/eli/reg/2017/746/oj [accessed 2024-03-28]
  • Segur-Ferrer J, Moltó-Puigmartí C, Pastells-Peiró R, Vivanco-Hidalgo R. Marco de evaluación de tecnologías sanitarias: adaptación para la evaluación de tecnologías de salud digital. Madrid, Barcelona. Ministerio de Sanidad, Agència de Qualitat i Avaluació Sanitàries de Catalunya; 2023.
  • Benedetto V, Filipe L, Harris C, Spencer J, Hickson C, Clegg A. Analytical frameworks and outcome measures in economic evaluations of digital health interventions: a methodological systematic review. Med Decis Making. Oct 19, 2022;43(1):125-138. [ CrossRef ]

Publication ethics: Role and responsibility of authors

Shubha Singhal

Department of Pharmacology, Maulana Azad Medical College, New Delhi 110002, India

Bhupinder Singh Kalra

Publication of scientific papers is critical to the evolution of modern science and to professional advancement, but it comes with many responsibilities. Authors must be aware of good publication practices: they should adhere to Good Publication Practice (GPP) and refrain from scientific misconduct and research fraud. Publications that draw conclusions from manipulated or fabricated data can prove detrimental to society and to health care research. Good science can blossom only when research is conducted and documented with complete honesty and ethics. Unfortunately, the publish-or-perish attitude has led to unethical practices in scientific research and publication. There is a need to identify and acknowledge these practices and to generate awareness among junior researchers and postgraduate students, so that scientific misconduct is curbed and GPP is adopted. This article discusses various unethical publication practices in research, as well as the roles and responsibilities of authors in maintaining the credibility and objectivity of publication.

Introduction

Need to publish

A scientific paper is an organized description of hypotheses, data, and conclusions, intended to instruct the reader. Research that is not published or documented is, in effect, considered not to have been done. Publication is critical to the evolution of modern science, in which the work of one scientist builds upon that of others [1]. The roots of scholarly scientific publishing can be traced to 1665, when Henry Oldenburg of the British Royal Society established the journal Philosophical Transactions of the Royal Society. The aim of the journal was to create a public record of original contributions to knowledge and to encourage scientists to "speak" directly to one another [2]. Documentation of research work followed by publication helps disseminate observations and findings, and this flow of knowledge guides and contributes to research collaboration. Established and budding researchers alike benefit from the published literature, which consolidates their research.

Publication of research in a peer-reviewed journal not only validates the research and boosts the confidence of the authors but also brings national and international recognition to the author, department, university, and institution [3]. Unfortunately, in some establishments the most compelling reason for publication is to fulfill specific job requirements set by employers, such as promotion to an academic position or improving the prospects of success in research grant applications. The importance of publication to a career is further emphasized by the adage "publish or perish": publish your research or lose your identity.

Ethics-related organizations and their role

Good research involves many coordinated steps: it starts from a hypothesis and proceeds through selection of an appropriate study design, study execution, data collection, and analysis, and finally to publication. Not only the conduct of the study but also the process of publication falls under the purview of ethics. Any publication that reports results and draws conclusions from data that have been manipulated constitutes research fraud or scientific misconduct [4]. Recently, The Lancet retracted a study entitled "Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis" because the veracity of the data underlying this observational study could not be assured by the study authors [5].

Several organizations make recommendations and develop guidelines to assist authors, editors, and reviewers. Their purpose is to foster accurate, clear, reproducible, and unbiased research papers. The principal organizations involved with publication ethics are:

  • International Committee of Medical Journal Editors (ICMJE)
  • World Association of Medical Editors (WAME)
  • Committee on Publication Ethics (COPE)

The ICMJE was established in 1978 in Vancouver, British Columbia, Canada, by a group of medical journal editors. The ICMJE develops recommendations primarily for authors who want to submit their work to ICMJE member journals. These recommendations discuss the roles and responsibilities of authors, contributors, reviewers, and editors, as well as the steps of manuscript preparation and submission and the editorial issues related to publication in medical journals. The Uniform Requirements for Manuscripts Submitted to Biomedical Journals, which most journals follow, were drafted by the ICMJE [ 6 ].

The WAME is a nonprofit voluntary association established in 1995 by a group of members of the ICMJE. Its goal is to improve editorial standards, promote professionalism in medical editing, and encourage research on the principles and practice of medical editing. The role of WAME is to facilitate worldwide cooperation and communication among editors of peer-reviewed medical journals. Membership in WAME is free, and all decision-making editors of peer-reviewed journals are eligible to join. WAME has more than 1830 members representing more than 1000 journals from 92 countries [ 7 ].

COPE also promotes ethical publication. It was founded in 1997 by a small number of UK medical editors as a self-help group to discuss troubling ethical cases in the publication process. Membership is paid, and COPE currently has more than 7000 members across various disciplines from all parts of the world. The purpose of COPE is to find practical ways of dealing with misconduct cases and to develop codes of conduct for good publication practice. It also funds research on issues related to publication misconduct [ 8 ].

Process of publication

Scientific publication is a team effort. Transforming research findings and observations into a published article is an art as well as a science, and it involves multiple steps. The first step is preparation of the manuscript according to the journal’s requirements. The language in which the manuscript is drafted matters: it should be checked by an expert or native speaker and by the senior authors, because clear and concise language helps editors and reviewers concentrate on the content. For up-to-date information, recent references should be cited. The final manuscript must be shared with, and approved by, all the authors. The copyright transfer form should be signed by all the authors before submission to the journal; signing the copyright form brings responsibility.

Submitted manuscripts are first screened by the editors for suitability, content, novelty, and what they add to existing knowledge. The subject of the research should match the scope of the target journal, and the manuscript should comply with the journal’s drafting guidelines. If the editorial screening identifies technical issues or non-adherence to the manuscript guidelines, the manuscript is sent back to the author for technical modifications. Once the technical modifications are acceptable, the peer review process begins; it may take anywhere from a couple of weeks to several months.

In light of the reviewers’ recommendations, the editor sends the author a decision letter stating the status of the manuscript, i.e. accepted, rejected, or requiring revision. In the case of revision, the author(s) reply in detail to all of the reviewers’ comments and resubmit to the journal within the stipulated time. After deliberating on the replies and the revised manuscript, the editor decides whether it is suitable for publication or needs to be sent out for review again. These steps are repeated until the manuscript is accepted or rejected. Once accepted, it goes through the proofreading stage and is finally published. The author is never in direct communication with the reviewers; all communication passes through the editorial board. Reviewers should declare any conflicts of interest (COI) before reviewing the manuscript. Manuscripts are usually sent to reviewers without the names and affiliations of the authors; hence, reviewers are blinded.

What is publishable or not publishable?

Writing for publication is an important yet challenging form of knowledge dissemination. Journals like to publish articles that present thorough, meaningful research that contributes to knowledge building and to the awareness of readers. At the very minimum, a publishable article needs to be original. It should be conducted and drafted with robust methodology and significant findings, and it should be well organized, well written, and concise yet clear, with a clear explanation of how it addresses an existing knowledge gap. The conclusions drawn should be relevant to the audience, supported by a comprehensive list of up-to-date references. Papers that are poorly organized, cluttered with unnecessary information, or that consist of routine extensions of previous reports or fragmentary reports of research results are not accepted for publication. Violation of ethical or legal norms, including plagiarism and duplicate publication, leads to immediate rejection of the paper [ 9 ].

Scientific misconduct

Scientific misconduct is the violation of the standard codes of scholarly conduct and ethical behavior in the publication of scientific research [ 10 ]. Misconduct by authors in the scientific publication process is detrimental to the integrity of the whole system and is considered unethical. Falsification or fabrication of data is the gravest form of scientific misconduct, wherein authors either manipulate data to make them look favorable or generate data where none exist. Other forms of scientific misconduct include plagiarism or misappropriation of the ideas of others, improprieties of authorship, simultaneous publication, duplicate publication, salami slicing, and non-declaration of COI. Conducting research without informed consent or ethics approval, or failing to maintain data confidentiality, is also a form of scientific misconduct. Editors and publication houses take disciplinary action against scientific misconduct as per COPE recommendations; authors may be blacklisted or banned from submitting articles to the respective journal in the future [ 11 ].

Criteria of authorship

Academic life revolves around publications. Publication adds to the credibility of the research and brings fame and recognition. An author is an individual who fulfills all of the following criteria: (1) substantial contributions to conception and design, or to acquisition of data, or to analysis and interpretation of data; (2) drafting the article or revising it critically for important intellectual content; and (3) final approval of the version to be published. Individuals who have only provided technical services, translated text, identified patients for the study, supplied material, provided funds, applied statistics, or served as medical writers are not eligible for authorship. However, all contributors who do not meet the criteria for authorship should be listed in the acknowledgement section [ 12 , 13 ]. Because of the important role of publication in clinical practice and academic settings, the authorship of articles must be honest, reliable, trustworthy, and transparent.

Types of authors

Since authorship is sought after, many unethical practices are also prevalent; ghost, guest, and gift authorship are examples of such practices. A ghost author is a person who has made a substantial contribution to the research or writing of a manuscript but is not listed as an author. A ghost author might be a direct or contracted employee of a pharmaceutical company, whose involvement, if listed, would amount to a COI [ 14 ]. It is dishonest to omit an author who has made significant contributions. In contrast to a ghost author, a guest or gift/honorary author is someone who is named as an author but who did not contribute in a meaningful way to the design, research, analysis, or writing of the paper. Often guest or gift authors are well known and well respected in the field of research, and the inclusion of their names in the author list might increase the chances of acceptance for publication.

However, sometimes senior investigators may also give honorary authorship to their colleagues to encourage collaboration and maintain good working relations, or as repayment of favors. Whatever the cause, gift or guest authorship is an unacceptable practice in publication. The presence of a well-known guest author in the author list can influence the opinion of clinicians, academicians, and politicians about a particular drug or device. Moreover, gift authorship leads the recipient to be perceived as more skilled than colleagues who have not published [ 12 , 13 ]. In multicenter trials, investigators from different sites who have contributed qualify for authorship, and all those who qualify should be listed [ 15 ]. One should always remember that authorship brings responsibility: authors are accountable for the data and results that are published.

Authorship issues/disputes

Authorship issues or disputes account for 2% to 11% of all disagreements in the scientific community. Disputes can concern the order of authorship, the inclusion or exclusion of authors, the number of authors, and so on. Requests to add authors after submission, or even after publication, are quite common. In contrast, there are examples where a co-author denies being part of a manuscript once scientific misconduct, including plagiarism, is detected [ 16 ].

The order of authorship should be mutually decided before taking up the study; it has to be a joint decision of all co-authors. In multicenter trials, the research group includes a large number of researchers. Hence, the corresponding author specifies and registers the group name and clearly identifies the group members who can take credit and responsibility for the work as authors.

The ICMJE and other organizations have issued guidelines on group authorship, stating that the byline of the article identifies who is directly responsible for the manuscript, and MEDLINE lists as authors whichever names appear in the byline. If the byline includes a group name, MEDLINE will list the names of individual group members who are authors or collaborators [ 17 ]. Despite these guidelines, authorship battles over inappropriate attribution of credit are witnessed in this area as well.

Usually, the dispute is over the first-author position, because most articles are cited by the name of the first author. Conventionally, the extent of involvement decides the order of authorship; for example, the person who has done the majority of the groundwork (often a junior researcher) is considered eligible to be the first author, while the person who planned and conceived the study (the supervisor) is the last author. There is, however, no general consensus on the order of authorship, and there are different schools of thought [ 16 ]. During submission of a revised manuscript, the order of authorship should not be altered without justification; any change requires the approval of all authors, as it affects the credibility of the manuscript.

How to resolve authorship issues

The best way to prevent authorship disputes is to generate awareness of the authorship criteria among research groups and to develop a Standard Operating Procedure (SOP) for the conduct and publication of research. COPE guidelines should be referred to in cases of authorship disputes or conflicts [ 18 ]. The next best option is to hold an open discussion among all the authors involved in multidisciplinary research before initiating the research, i.e. at the time of protocol drafting. Defining the role and responsibility of each author further reduces the chances of disputes within the research team. Editors now routinely ask for each author’s individual contribution to the manuscript, and journals can blacklist guest or ghost authors [ 12 ].

Plagiarism: do’s and don’ts

The word plagiarism was first used in the English language in 1601 by the dramatist Ben Jonson to describe someone guilty of literary theft. It is derived from the Latin “plagiare,” which means “to kidnap”; a plagiarist is a person who commits plagiarism [ 19 ]. By definition, plagiarism is the use of previously published work by another author in one’s own manuscript without consent, credit, or acknowledgement. It is the most common form of scientific misconduct [ 4 ]. Plagiarism can be intentional or unintentional. Unintentional plagiarism is usually seen in articles written by students or junior researchers, and stems from lack of awareness and ignorance. Intentional plagiarism happens when an author deliberately copies documented or published work and presents it as his or her own. Both types are unethical and illegal and can ruin the career and reputation of the writer [ 19 ].

Plagiarism of ideas occurs when a plagiarist copies or steals someone else’s idea or thought and presents it as his or her own. This type of plagiarism is difficult to detect; however, once detected, it is considered a serious offense. An example is documenting or presenting, without proper citation, an idea that someone else discussed at a conference or seminar. Plagiarism of text, or direct plagiarism, i.e. word-for-word copying, occurs when a researcher takes a large section of an article from another source and pastes it into his or her own work without providing proper citation. A hybrid variety is mosaic plagiarism, in which the author takes ideas, opinions, words, and phrases from different sources and merges them without acknowledging the original authors.

Self-plagiarism is the practice of an author reusing portions of their previous writings on the same topic in subsequent publications without formally citing them in quotes. There is no consensus on whether this is a form of scientific misconduct, or on how many of one’s own words one can reuse before it is truly “plagiarism.” To be on the safe side, authors should cite the source of, or give a reference to, their previous publications. There are examples in which plagiarism engulfed the entire careers of authors and writers, and it remains a common reason for article retraction or rejection [ 20 ].

The publish-or-perish culture is one of the important causes of plagiarism: researchers feel pressure to publish a large number of papers in a limited time period to gain more opportunities in their careers and research. In addition, lack of knowledge, laziness, fear of failure, and the desire for recognition also lead to plagiarism. Many software tools that can detect plagiarism are available online, and it is the responsibility of authors to run their manuscripts through such software before submitting to a journal [ 19 , 21 ].
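At their core, such tools flag passages whose textual overlap with existing sources exceeds a threshold. A minimal sketch of that idea, using Python’s standard-library SequenceMatcher, is shown below; real detectors additionally compare against large indexed corpora and handle paraphrase, which this toy function does not.

```python
from difflib import SequenceMatcher

def overlap_ratio(passage: str, source: str) -> float:
    """Return the fraction (0.0-1.0) of matching text between two passages."""
    return SequenceMatcher(None, passage.lower(), source.lower()).ratio()

source = "Plagiarism is the use of previously published work without credit."
suspect = "Plagiarism is the use of previously published work without any credit."
# A high ratio suggests the passage should be quoted and cited properly.
print(round(overlap_ratio(suspect, source), 2))
```

A screening tool would then mark any passage above a chosen threshold (say, 0.8 against some source) for manual review rather than declare misconduct automatically.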

The very first step in preventing plagiarism is awareness of what plagiarism is, of its consequences, and of how to avoid it. Authors can avoid plagiarism by acknowledging the original source of an idea or wording and enclosing quoted material within quotation marks. In the case of paraphrasing, where the writer restates the text in his or her own words, the original source must still be properly cited. Authors must always obtain permission to reuse published illustrations, and should avoid writing multiple separate articles when a large, complex study can be presented cohesively in a single article [ 21 ].

Conflict of interest

Conflict of interest is an attribute that is invisible to the reader or editor but may affect or influence his or her judgment or objectivity. Academicians, physicians, and researchers often work in collaboration with pharmaceutical and biotechnology companies to develop products for the well-being of society. However, there are examples where the financial and non-financial ties of researchers or physicians with a company have compromised the integrity of the research [ 22 ].

Conflict of interest describes situations in which the impartiality of the research may be compromised because the researcher stands to profit in some way from the conclusions drawn [ 23 ]. Examples of potential conflicts of interest directly or indirectly related to research include research grants from funding agencies, honoraria for speaking at symposia, financial support for educational programs, employment, and multiple affiliations. In addition, non-financial benefits, including recognition, career advancement, advocacy for a strongly held position, and support for friends and colleagues, can also affect the research and result in bias. These biases, when hidden, can affect clinical decision-making by making interventions appear safer or more effective than they really are [ 24 ].

Disclosure of COI is the basic requirement for preventing attribution-related bias in research. The ICMJE has produced a common disclosure form that must be individually signed by each co-author and uploaded along with the manuscript files. The intent of the disclosure form is not to prevent authors with a potential COI from publishing; it is merely intended that any potential conflict be declared, so that readers may form their own judgment about the findings and observations. It is for the readers to determine whether the authors’ outside interests may reflect a possible bias in either the exposition or the conclusions presented [ 25 ]. Authors are also expected to declare COI in the manuscript text itself, which is meant for readers.

Duplicate publication

Duplicate or redundant publication is the publication of a paper that substantially overlaps with one already published, without clear, visible reference to the previous publication [ 26 ]. Under copyright law and publication ethics, whatever appears in a journal is presumed to be original unless there is a clear statement that the author and editor are intentionally republishing an article. Hence, duplicate publication is a breach of copyright law and of ethical conduct. In addition, it wastes limited resources and leads to inappropriate weighting of the results of a single study: duplicate publications of ondansetron trials, for example, led to a 23% overestimation of its efficacy in one meta-analysis [ 26 , 27 ].

COPE classifies duplicate publication into major and minor offenses. A major offense is one in which the duplicate publication is based on the same data set and findings that have already been published; it is also considered major if there is evidence that the author tried to hide the duplication by changing the title or the order of authorship, or by not referring to the previous publication [ 28 ]. A minor offense, or salami slicing, is the segmental or partial publication of results, or of reanalyses, derived from a single study. Authors do this to increase their number of publications and citations, but it is considered unethical because it may distort the conclusions a reader draws. Publishing the results of a single study piecemeal in different journals can lead to overestimation of the evidence: wrong conclusions may be drawn when a study conducted on a fixed number of subjects has its data presented in fragments across different journals.

When an author needs to submit a report that has already been published, or that is closely related to another paper submitted elsewhere, the letter of submission should clearly say so, and the authors should provide copies of the related material to help the editor decide how to handle the submission. Authors who attempt duplicate publication without such notification can expect prompt rejection of the submitted manuscript. If the editor was unaware of the violation and the article has already been published, the article may warrant retraction with or without the author’s explanation or approval.

Restrictions on duplicate publication do not prevent authors from disseminating important information during a public health emergency; in fact, the ICMJE encourages editors to give priority to authors who have made crucial data publicly available without delay [ 26 ]. Duplicate publication is also justified for joint editorials, clinical guidelines, and translations of archival articles.

Predatory publishing

Predatory publishing is the publication of an article in a journal that lacks the usual features of editorial oversight, transparent policies, and the operating procedures of legitimate peer-reviewed journals. Predatory journals exploit authors by charging publication fees and deceiving them with false claims about the journal’s impact factor, indexing, and peer review [ 29 ].

Predatory publishing is harmful to both the author and the community. It may tarnish the image of the author, since articles published in predatory journals are usually not valued by subject experts. It can misinform readers and propagate wrong science because of poor quality control. Sometimes genuine information also gets lost, because most predatory journals are not indexed in databases, so their papers are not easily traceable [ 30 ].

Predatory publishing can be avoided by educating researchers, supervisors, and administrators about fake journals. Authors should also learn how to identify trustworthy journals. If a journal’s website mentions indexing, it is important to cross-check the journal’s inclusion in the databases mentioned. For an open-access journal, inclusion in the Directory of Open Access Journals (DOAJ) can be checked on the DOAJ website. A journal’s claim of a Journal Citation Reports (JCR) impact factor can be verified by looking up its International Standard Serial Number (ISSN) in the JCR master list. Another approach is to self-assess the journal through websites like https://thinkchecksubmit.org/ [ 30 ].
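Before searching a database by ISSN, it can help to confirm that the ISSN itself is well formed. The last character of an ISSN is a check digit defined by the ISSN standard (the first seven digits are weighted 8 down to 2 and summed modulo 11, with “X” representing 10). A small, hypothetical helper to validate it might look like:

```python
def validate_issn(issn: str) -> bool:
    """Check the mod-11 check digit of an ISSN such as '0378-5955'."""
    digits = issn.replace("-", "").upper()
    if len(digits) != 8 or not digits[:7].isdigit():
        return False
    # Weight the first seven digits 8, 7, ..., 2 and sum.
    total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
    check = (11 - total % 11) % 11          # 10 is written as 'X'
    return digits[7] == ("X" if check == 10 else str(check))

print(validate_issn("0378-5955"))  # a valid ISSN passes the check
```

A valid check digit only proves the number is well formed, not that the journal is legitimate; that still requires consulting the DOAJ or JCR listings themselves.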

Responsibility of author

Authorship is not just a list of names; it is a matter of pride that has to be deserved, earned, and declared [ 15 ]. To maintain the integrity and credibility of medical research and to nourish public trust in scientific endeavors, all authors must follow the rules of good scientific publication practice and adhere to the following responsibilities (Table 1):

  • Do not fabricate or manipulate the data
  • Avoid plagiarism and give proper acknowledgment to other works
  • Decide the order of authorship prior to writing the paper to avoid future conflicts
  • Declare whether research work has been published or presented before
  • Declare COI
  • Avoid ghost/gift/guest authorship
  • Do not submit the manuscript to more than one journal for simultaneous consideration
  • Take approval from the Institutional Ethics Committee before conducting research
  • Last but not least, take direct responsibility for appropriate portions of the content.

Table 1 Role and responsibilities of the author

COI conflict of interest, CONSORT Consolidated Standards of Reporting Trials, ARRIVE Animal Research: Reporting of In Vivo Experiments, CTRI Clinical Trials Registry - India

Awareness of good publication practices should be generated among novice authors to prevent unethical practices in the publication of scientific research. Each institute or department should refer to the COPE or ICMJE recommendations for publication and draft its own SOP for authors actively involved in research. Unethical practices or scientific misconduct on the part of authors should be discouraged and addressed through appropriate training and guidance.

Compliance with ethical standards

SS and BSK declare that they have no conflict of interest.

The authors are solely responsible for the data and the contents of the paper. In no way are the Honorary Editor-in-Chief, the Editorial Board Members, the Indian Society of Gastroenterology, or the printer/publishers responsible for the results/findings and content of this article.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


    The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest. ... and quality unclear. In the current published literature, there is dearth of transparent data on protein supplement quality analysis from a proactive healthcare ...

  26. How to Write Your First Research Paper

    After you get enough feedback and decide on the journal you will submit to, the process of real writing begins. Copy your outline into a separate file and expand on each of the points, adding data and elaborating on the details. When you create the first draft, do not succumb to the temptation of editing.

  27. PDF Nber Working Paper Series Working From Home, Worker Sorting and Development

    The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research. NBER working papers are circulated for discussion and comment purposes. ... (WFH) or from the office. We first find that the productivity of workers randomly assigned to WFH is 18%lower than those in the ...

  28. Empagliflozin after Acute Myocardial Infarction

    During a median follow-up of 17.9 months, a first hospitalization for heart failure or death from any cause occurred in 267 patients (8.2%) in the empagliflozin group and in 298 patients (9.1%) in ...

  29. Taxing the Ten Percent by Alex Raskolnikov :: SSRN

    The first part of this paper (pp 8-26) is a shorter version of the first part of Pay More (pp 7-41). The second part of this paper (pp 26-46) is new. It discusses multiple means of implementing a tax increase on the top ten percent and it critiques the optimal tax theory case for focusing only on the top one percent.***

  30. Publication ethics: Role and responsibility of authors

    Publication of scientific paper is critical for modern science evolution, and professional advancement. However, it comes with many responsibilities. An author must be aware of good publication practices. While refraining from scientific misconduct or research frauds, authors should adhere to Good Publication Practices (GPP).