Reporting quality of submissions to the National Conferences on Electronic Learning in Medical Education: implications from Iranian research performance

Document Type: Original Article

Authors

1 Medical Education Research Centre, Tabriz University of Medical Sciences, Tabriz, Iran

2 Health Education & Promotion Department, Faculty of Health & Nutrition, Tabriz University of Medical Sciences, Tabriz, Iran

Abstract

Background: The reporting quality of research on medical education has come under scrutiny in recent years in the wake of empirical evidence. Poor reporting quality of published abstracts may distract readers from careful reading of the research evidence or, in the worst case, mislead scientists. The main objective of this study was to evaluate the extent and quality of the abstracts submitted to the 3rd and 4th National Conference on Electronic Learning in Medical Education, held in Mashad (2010) and Tabriz (2011), Iran.
Methods: A stratified random sample of abstracts (n=188) representing quantitative and review studies was selected from a total of 366 accepted submissions. Their quality was assessed independently by the authors using the criteria described by Reed et al and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline.
Results: Findings from primary studies were described in 60.1% of abstracts, narrative reviews in 39.4%, and a systematic review in only 0.5%. Sampling methods were reported in 58.4% and participation rates in 25.7% of abstracts from primary studies. Among the abstracts presenting findings of narrative reviews, the main aim was not stated in 14.9%.
Conclusions: The varied reporting quality of the submitted abstracts may reflect the gaps that stand in the way of a more robust national research performance in the field of medical education. It may also reflect shortcomings in our research methodology training, as well as national-level challenges we face in ensuring evidence-based research output. To improve our national research productivity, we recommend working on the infrastructural prerequisites.

Keywords


INTRODUCTION

The reporting quality of research on medical education has come under scrutiny in recent years in the wake of evidence obtained from relevant studies (1-4). The main concern raised is that insufficient detail is provided for readers to make an unbiased judgment when critically appraising the scientific literature. Abstracts are windows into their corresponding scientific work and must be written in a way that attracts interested readers to learn more about the study and its results. Poor reporting quality of published abstracts, however, may distract readers from careful reading of the research evidence or, in the worst case, mislead scientists. Despite the development and adoption of standard research reporting guidelines (5-7), suboptimal research reports are still seen in academia internationally (8, 9). The main objective of this study was to evaluate the extent and quality of the abstracts submitted to the 3rd and 4th National Conference on Electronic Learning in Medical Education, held on 17-18 February 2010 in Mashad and 15-17 February 2011 in Tabriz, Iran. Our particular focus was on the accuracy of reporting as well as the validity of the conclusions.

 

METHODS

A stratified random sample of abstracts (n=188) representing quantitative and review (narrative and systematic) studies, excluding qualitative studies, was selected from a total of 366 accepted submissions to the 3rd (N=181) and 4th (N=185) National Conference on E-learning in Medical Education in Iran. We assessed the quality of the selected abstracts using the criteria described by Reed et al (9) for experimental, quasi-experimental, and observational studies. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline (6) was also used to assess abstracts from systematic reviews. For abstracts from primary studies, external validity criteria were checked on the basis of the reported sampling method, sample size, and response rate, in order to judge the likelihood of sampling bias and hence the generalisability of the study findings to the wider population. Internal validity criteria were verified on the basis of the reported numeric results for the studied outcome variables and their precision, which underpin the reliability of the study results. For abstracts from review studies, the assessment criteria were a statement of the main aim of the study, the names of the databases searched, the study selection criteria, the quality assessment procedures for the retrieved studies, the provision of summary results, and the presence of a conclusion. The authors independently examined the abstracts against all of these quality criteria, and any disagreement was resolved by discussion until consensus was reached.
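To make the procedure concrete, the following is a minimal sketch (in Python) of stratified sampling by conference and study design followed by a tally of how many sampled abstracts meet each reporting criterion. The record layout, field names, and the two example records are hypothetical illustrations, not the authors' actual data or software.

```python
# Minimal sketch of stratified sampling and criteria tallying, assuming a
# hypothetical record layout; this is illustrative, not the authors' tooling.
import random
from collections import Counter

def stratified_sample(abstracts, stratum_of, fraction, seed=0):
    """Draw a simple random sample of the given fraction within each stratum
    (e.g. conference x study design)."""
    rng = random.Random(seed)
    strata = {}
    for record in abstracts:
        strata.setdefault(stratum_of(record), []).append(record)
    sample = []
    for members in strata.values():
        k = min(len(members), max(1, round(len(members) * fraction)))
        sample.extend(rng.sample(members, k))
    return sample

def tally_criteria(sample, criteria):
    """Return, for each criterion, the count and percentage of sampled
    abstracts that report it."""
    counts = Counter(c for record in sample for c in criteria if record.get(c))
    return {c: (counts[c], round(100.0 * counts[c] / len(sample), 1)) for c in criteria}

# Hypothetical usage with two fabricated records, for illustration only.
abstracts = [
    {"conference": "3rd", "design": "quantitative", "aim": True, "sampling_method": False},
    {"conference": "4th", "design": "narrative_review", "aim": True, "sampling_method": False},
]
sample = stratified_sample(abstracts, lambda r: (r["conference"], r["design"]), fraction=1.0)
print(tally_criteria(sample, ["aim", "sampling_method"]))
```

The criteria actually tallied in the study are those listed in Tables 1 and 2.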

 

RESULTS

Table 1. Abstracts from primary studies that met quality criteria among submissions to the 3rd and 4th National Conference on E-learning in Medical Education, Iran (n=113)

Assessed quality criteria                    Frequency    Percent
Aim of the study                             104          92.0
Sampling method                              66           58.4
Sample size                                  83           73.5
Statistical analysis method                  89           78.8
Outcome variable                             72           63.7
Participation rate                           29           25.7
Numeric description of outcome variables     63           55.8
Precision of outcome variables               12           10.6


The name of the statistical software used to analyse the data was given in 47 abstracts (42.0%) from primary studies, but in 5 abstracts (20.8%) it was given without any explanation of the statistical analysis method.

Among the abstracts presenting findings of narrative reviews (n=74), the main aim of the review was not stated in 11 cases (14.9%), as shown in Table 2.

Table 2. Abstracts of narrative reviews that met quality criteria among submissions to the 3rd and 4th National Conference on E-learning in Medical Education, Iran (n=74)

Assessed quality criteria                    Frequency    Percent
Aim of the study                             63           85.1
Name of databases searched                   12           16.2
Search strategy                              24           32.4
Study selection criteria                     4            5.4
Quality assessment of retrieved studies      1            1.4
Summary results                              59           79.7
Conclusion                                   67           90.5

Among the abstracts from narrative reviews, 12 (17.9%) provided a conclusion without explaining the study results. In the only abstract that summarised the findings of a systematic review, the names of the databases searched, the search strategy, the study selection criteria, and the quality assessment procedures were not mentioned.

DISCUSSION

The results of this study revealed that 74.3% of abstracts from primary studies and 99.0% of abstracts from review studies did not report all of the essential elements of informative reporting. Important information needed to judge the accuracy of the reviewed abstracts was frequently absent, which severely limits referees' ability to critically appraise the submissions. Our study indicated varied reporting quality among the submitted abstracts and pointed to the gaps that stand in the way of a more robust national research performance in the field of medical education. About 40% of the submitted abstracts reported findings of narrative reviews, yet the current debates (10-13) around the credibility of such reviews in the hierarchy of research evidence merit a revision of how national conference scientific committees currently handle submitted abstracts. In the absence of internationally agreed standards for reporting study findings (14-18), reliance on less rigorous research reports can be problematic.

To the best of our knowledge, this is the first study in Iran to focus on the quality of abstracts submitted to national medical education conferences. To understand the rationale behind the study procedures and conclusions in the reviewed abstracts, access to the full text of the corresponding studies would have been highly desirable and would have strengthened our judgment about the quality of the research evidence. A number of abstracts were written prospectively, describing what the author(s) intended to do in the future; such abstracts should not be accepted merely because their titles are striking. Moreover, the titles of a sizeable number of abstracts were repetitive, for example studies of learners' satisfaction after applying blended educational methods in medical education instead of traditional teaching, or investigations of the acquisition of knowledge and skills immediately after implementing an e-learning educational programme. Such a quality pattern among abstracts submitted to a scientific event is not unique to Iran; similar insufficiencies have been reported for other national and international conferences (19-22).

We did not assess the titles of the reviewed abstracts for multiple presentations, but duplicate presentation has been shown to be not uncommon in other countries (23-28). The overall quality of a research report is judged not only on its reporting quality but also on its contribution to the scientific world. Therefore, to improve our national research productivity, we recommend working on the infrastructural prerequisites needed to prevent research misconduct in this field. Although we selected a representative sample of abstracts submitted to only two national conferences, so the findings cannot be assumed to extend beyond these conferences to all research performed in this field or in other medical fields, they may nevertheless reflect shortcomings in our research methodology training and the national-level challenges we face in ensuring evidence-based research output. Such a conclusion is in line with the constraints reported for countries across the Asian continent (29).

Given the impact that good-quality research findings on different aspects of medical teaching-learning processes may have on the attainment of graduates from academic institutions, and therefore on the enhancement of health care quality (30), further investigation of our national research performance in the field of medical education is highly recommended.

ACKNOWLEDGEMENT

The authors would like to thank the Iranian researchers who submitted their work to the 3rd and 4th National Conference on E-learning in Medical Education.

Conflict of interest: There are no conflicts of interest regarding the submission and publication of this article and its potential implications. The authors agree to transfer the copyright of this article to the Future of Medical Education Journal and not to publish the manuscript elsewhere, in any other language, without the consent of the Journal.

Funding and support: This study was supported by Tabriz University of Medical Sciences.

REFERENCES

1. Cook DA, Beckman TJ, Bordage G. Quality of reporting of experimental studies in medical education: A systematic review. Med Educ 2007; 41: 737-45.
2. Dauphinee WD, Wood-Dauphinee S. The need for evidence in medical education: The development of best evidence medical education as an opportunity to inform, guide, and sustain medical education research. Acad Med 2004; 79(10): 925-30.
3. Cook DA, Bordage G, Schmidt HG. Description, justification and clarification: A framework for classifying the purposes of research in medical education. Med Educ 2008; 42: 128-33.
4. Cook DA, Beckman TJ, Bordage G. A systematic review of titles and abstracts of experimental studies in medical education: Many informative elements missing. Med Educ 2007; 41: 1074-81.
5. Moher D, Schulz KF, Altman DG. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Lancet 2001; 357(9263): 1191-4.
6. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 2009; 151(4): 264-9.
7. Shea JA, Arnold L, Mann KV. A RIME perspective on the quality and relevance of current and future medical education research. Acad Med 2004; 79(10): 931-8.
8. Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med 2004; 79(10): 955-60.
9. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA 2007; 298(9): 1002-9.
10. Mykhalovskiy E, Weir L. The problem of evidence-based medicine: Directions for social science. Soc Sci Med 2004; 59(5): 1059-69.
11. Lewis S. Toward a general theory of indifference to research-based evidence. J Health Serv Res Policy 2007; 12(3): 166-72.
12. Landry MD, Sibbald WJ. From data to evidence: Evaluative methods in evidence-based medicine. Respir Care 2001; 46(11): 1226-35.
13. Lambert H. Accounting for EBM: Notions of evidence in medicine. Soc Sci Med 2006; 62(11): 2633-45.
14. APA Publications and Communications Board Working Group on Journal Article Reporting Standards. Reporting standards for research in psychology: Why do we need them? What might they be? Am Psychol 2008; 63(9): 839-51.
15. Altman DG, Moher D. Developing guidelines for reporting healthcare research: Scientific rationale and procedures. Med Clin (Barc) 2005; 125 (Suppl 1): 8-13. [In Spanish].
16. Meerpohl JJ, Blümle A, Antes G, Elm E. Reporting guidelines are also useful for readers of medical research publications: CONSORT, STARD, STROBE and others. Dtsch Med Wochenschr 2009; 134(41): 2078-83. [In German].
17. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and elaboration. PLoS Med 2007; 4(10): e297.
18. Howley L, Szauter K, Perkowski L, Clifton M, McNaughton N. Association of Standardized Patient Educators (ASPE). Quality of standardized patient research reports in the medical education literature: Review and recommendations. Med Educ 2008; 42(4): 350-8.
19. Hopewell S, Clarke M. Abstracts presented at the American Society of Clinical Oncology conference: How completely are trials reported? Clin Trials 2005; 2(3): 265-8.
20. Fesperman SF, West CS, Bischoff CJ, Algood CB, Vieweg J, Dahm P. Study characteristics of abstracts presented at the annual meetings of the southeastern section of the American Urological Association (1996-2005). J Urol 2008; 179(2): 667-71.
21. Wang L, Li Y, Li J, Zhang M, Xu L, Yuan W, et al. Quality of reporting of trial abstracts needs to be improved: Using the CONSORT for abstracts to assess the four leading Chinese medical journals of traditional Chinese medicine. Trials 2010; 11: 75.
22. Turpen RM, Fesperman SF, Smith WA, Vieweg J, Dahm P. Reporting quality and information consistency of randomized, controlled trials presented as abstracts at the American Urological Association annual meetings. J Urol 2010; 184(1): 249-53.
23. Pop GH, Fesperman SF, Ball DA, Yeung LL, Vieweg J, Dahm P. Duplicate presentations on prostate cancer at American Urological Association and European Association of Urology annual meetings. J Urol 2009; 182(2): 674-8.
24. Fesperman SF, West CS, Bischoff CJ, Algood CB, Vieweg J, Dahm P. Study characteristics of abstracts presented at the annual meetings of the southeastern section of the American Urological Association (1996-2005). J Urol 2008; 179(2): 667-71.
25. Hoag CC, Elterman DS, McNeily AE. Abstracts presented at the American Urological Association Annual Meeting: Determinants of subsequent peer reviewed publication. J Urol 2006; 176(6 Pt 1): 2624-9.
26. Bhandari M, Patenall V, Devereaux PJ, Tornetta P, Dirschl D, Leece P, et al. An observational study of duplicate presentation rates between two national orthopedic meetings. Can J Surg 2005; 48(2): 117-22.
27. Kim S, Braga LH, Pemberton J, Demaria J, Lorenzo AJ. Analysis of duplicate presentations accepted at two top international pediatric urology meetings. J Pediatr Urol 2011; 7: 203-8.
28. Autorino R, Quarto G, Di Lorenzo G, Giugliano F, Quattrone C, Neri F, et al. What happens to the abstracts presented at the Société Internationale d'Urologie meeting? Urology 2008; 71(3): 367-71.
29. Majumder MAA. Issues and priorities of medical education research in Asia. Ann Acad Med Singapore 2004; 33(2): 257-63.
30. Norman G. Research in medical education: Three decades of progress. BMJ 2002; 324: 1560-2.