An Investigation of the Application of Different Methods of Student Evaluation by Clinical Education Groups of Birjand University of Medical Sciences

Document Type: Original Article

Authors

1 Infectious Diseases Research Center, Birjand University of Medical Sciences, Birjand, Iran

2 Family Medicine Department, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran

3 Department of Medical Education, School of Medicine, Birjand University of Medical Sciences, Birjand, Iran

4 Medical Education Research Center, Birjand University of Medical Sciences, Birjand, Iran

5 PhD Student of Curriculum Planning, Birjand University, Birjand, Iran

Abstract

Introduction: Evaluation is an essential factor in assuring the quality of education. Assessing the evaluation methods used in clinical education, the most important stage in training competent physicians, is essential for identifying strengths and eliminating defects. This study was therefore conducted to assess the methods used to evaluate students by the clinical education groups of Birjand University of Medical Sciences.
Method: This descriptive cross-sectional study was conducted in the 2016-2017 academic year. The statistical population consisted of interns in the clinical departments of Birjand University of Medical Sciences, selected by purposeful sampling. The data collection tool was a researcher-made checklist based on the ACGME model, whose content and face validity were confirmed by experts. The collected data were recorded and analyzed in Excel.
Results: Most of the tests administered in the internship and externship courses were written tests, chiefly the key points and history matching test (76.66% of the clinical groups). In evaluation by the supervising physician, the oral exam was the most common method (40%); in multi-source (360-degree) evaluation, only the work folder (portfolio) was used (33%); and in the clinical simulation domain, no test was used at all.
Conclusion: Given that the tests used assessed mainly knowledge and addressed skills only in limited cases, it is suggested that, alongside raising professors' awareness of other tests, a directive be developed requiring the evaluation of all learning domains in all clinical departments.


Problem statement:

The aim of medical education is to develop students' competence in accordance with educational programs (1), and assessment of educational progress is a systematic process for measuring the achievement of these educational goals (2); it is one of the essential steps of the educational process. Assessment helps professors make decisions about educational activities and makes students aware of their weak points, which ultimately improves their learning (3). Assessment affects not only the outcomes of student learning but also the methods students use to learn (4, 5). Professors also need evaluation tools to differentiate among students according to their level of learning and to set a standard for learning in the different domains. Academic achievement in higher education is assessed with a variety of methods and tools, but the essential point is that the tools used must be able to measure the knowledge and skills targeted in the curriculum; only then can it be determined whether the education has led to learning (6).

In Iran, medical students complete a seven-year classical program consisting of four stages (basic sciences, physiopathology, externship, and internship), with all grades evaluated internally; nationwide examinations must be passed to enter the physiopathology and internship stages. The assessment method in these national examinations, as in the basic sciences and physiopathology courses, is the multiple-choice question. In the externship and internship, evaluation is also conducted mainly with multiple-choice tests, along with OSCE or DOPS examinations (7).

Given the community's increasing demand for accountability and responsibility from physicians, curricula must be implemented rigorously to empower students, and students' capabilities must be assessed accurately; only then will graduates be able to fulfill their professional duties. To this end, different evaluation methods appropriate to each competency are necessary (8). Based on Miller's pyramid, different methods are required to evaluate the four domains of learning (cognition, attitude, performance, and clinical reasoning) (9). A review of the studies in this field by Mesrabadi (2011) (10), Mousavi and Maghami (2012) (11), and Komeyli and Rezaei (2002) (12) shows that multiple-choice and descriptive tests are the most widely used evaluation methods. Kajouri et al. (2014) likewise found that more than 99% of educational groups used multiple-choice tests for evaluation and that, to a lesser extent, a variety of other methods such as the logbook, OSCE, DOPS, practical test, anatomical test, oral test, and Mini-CEX were used (13). Research also shows the impact of evaluation methods on the development of professional skills (14). Each test must therefore be applied in a coherent way, as each type of test evaluates a different aspect of learning (15, 16). In this regard, an assessment blueprint can help match tests to the domains they are meant to measure (17).
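To make the blueprint idea concrete, the following is a minimal sketch in Python, assuming the four learning domains named above and the test types examined in this study; the domain-to-test mapping and the helper function are illustrative, not a validated blueprint.

```python
# A minimal sketch of an assessment blueprint: a mapping from learning
# domains to the test types that can measure them. The mapping below is
# illustrative, not a validated blueprint.
BLUEPRINT = {
    "knowledge": ["multiple-choice questions", "short-answer questions", "oral test"],
    "clinical reasoning": ["key points test", "history matching", "structured questions"],
    "performance": ["DOPS", "OSCE", "simulation with clinical technology"],
    "attitude": ["360-degree evaluation", "peer evaluation", "self-evaluation"],
}

def uncovered_domains(tests_used):
    """Return the learning domains not covered by any administered test."""
    return [domain for domain, tests in BLUEPRINT.items()
            if not any(test in tests_used for test in tests)]

# Example: a department relying only on written and oral tests leaves
# clinical reasoning, performance, and attitude unassessed.
print(uncovered_domains({"multiple-choice questions", "oral test"}))
# ['clinical reasoning', 'performance', 'attitude']
```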
As studies in medical education show, the challenges of assessment, the problem of choosing an appropriate evaluation method, and the effects of choosing methods that match the educational goals of each department make this issue one of the current priorities of medical education. Any change in the learning process requires monitoring the current status, so that the results can inform decisions and plans for the next steps in improving the quality of education. Since no such research had been conducted at Birjand University of Medical Sciences, the aim of this study was to determine the current use of the various educational progress tests in the university's clinical groups.

 Methods:

This descriptive cross-sectional study was conducted in the 2016-2017 academic year. All clinical departments of Birjand University of Medical Sciences (obstetrics and gynecology, internal medicine, cardiology, pediatrics, neurology, ENT, surgery, psychiatry, orthopedics, ophthalmology, dermatology, radiology, anesthesiology, emergency medicine, and urology) were assessed for the types of student evaluation they used. Purposeful sampling was applied: one interested student was selected from each department, because the student is the best, and the only tangible, source of information about evaluation (18, 19), and because the checklist requires a respondent who can report clearly on the assessments actually administered. The inclusion criterion was having passed the final test of the department; the exclusion criterion was unwillingness to continue. A researcher-made checklist was used to collect data. To create the checklist, the widely used methods of assessing medical students were first identified from a review of the literature and aligned with the methods mentioned in the clinical curricula. The final checklist covered 15 types of tests in four domains (written tests, evaluation by a supervising physician, clinical simulation, and multi-source or 360-degree assessment), consistent with the Accreditation Council for Graduate Medical Education (ACGME) model (20). The face and content validity of the checklist were confirmed by five medical education specialists. All selected students were trained on the various evaluation methods in a meeting aimed at acquainting them with the names of the methods, and the checklist was then given to them to record the types of assessment used in their departments. To comply with ethical principles, the information was reported without naming the training groups, and the students in the sample participated with informed consent and awareness of the project. Finally, the data were entered into Excel 2010 and analyzed, and the results are reported as frequencies and percentages per department.
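As an illustration of the analysis described above, the following is a minimal sketch in Python of the frequency-and-percentage tabulation that was performed in Excel; the department names and test entries shown are hypothetical placeholders, not the study's data.

```python
# A minimal sketch of the frequency/percentage tabulation described above.
# Each department maps to the set of tests its sampled student reported;
# the entries shown are hypothetical placeholders.
from collections import Counter

N_DEPARTMENTS = 15  # clinical departments surveyed in this study

reported_tests = {
    "internal medicine": {"multiple-choice questions", "oral test"},
    "pediatrics": {"key points and history matching", "oral test"},
    # ... one entry per department, 15 in total
}

# Count how many departments reported each test, then express the count
# as a percentage of all surveyed departments.
counts = Counter(test for tests in reported_tests.values() for test in tests)
for test, freq in counts.most_common():
    print(f"{test}: {freq}/{N_DEPARTMENTS} ({freq / N_DEPARTMENTS:.2%})")
```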

Results: The results of this study showed that most of the tests conducted in the internship and externship courses were written tests, chiefly the key points and history matching test, which was used for interns in 80% of the clinical groups and for externs in 73.33% (Table 1). In assessment by the supervising physician, the oral test was the most common method, used by 40% of the groups (Table 2). In multi-source (360-degree) assessment, only the work folder (portfolio) was used, by 40% of the groups for interns and 26.67% for externs (Table 4), and in the clinical simulation domain none of the tests was used by any group (Table 3). Tables 1 to 4 show the frequency of each type of test in the internship and externship courses across the clinical groups.
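Percentages in the tables appear to be computed over the 15 clinical departments listed in the Methods, i.e. percentage = (number of departments using the test ÷ 15) × 100; for example, the 12 departments using the key points and history matching test for interns correspond to 12/15 × 100 = 80%.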

Table 1: Frequency of written tests used in clinical groups

Type of question                   Course       Frequency   Percentage
Multiple-choice questions          Internship   6           40%
                                   Externship   8           53.33%
Matching questions                 Internship   0           0%
                                   Externship   0           0%
Key points and history matching    Internship   12          80%
                                   Externship   11          73.33%
Short-answer questions             Internship   8           53.33%
                                   Externship   9           60%
Structured questions               Internship   5           33.33%
                                   Externship   5           33.33%

Table 2: Frequency of tests used in assessment by the supervising physician in clinical groups

Type of test                                                     Course       Frequency   Percentage
Overall scoring with comments at the end of the course           Internship   5           33.33%
                                                                 Externship   2           20%
Direct Observed Procedural Skills (DOPS) with score checklist    Internship   2           13.33%
                                                                 Externship   2           13.33%
Oral test                                                        Internship   6           40%
                                                                 Externship   6           40%

Table 3: Frequency of tests used in clinical simulation in clinical groups

Type of test                           Course       Frequency   Percentage
OSCE with standardized patients        Internship   0           0%
                                       Externship   0           0%
Unannounced standardized patients      Internship   0           0%
                                       Externship   0           0%
Simulation with clinical technology    Internship   0           0%
                                       Externship   0           0%
Table 4: Frequency of tests used in multi-source (360-degree) evaluation in clinical groups

Type of test                 Course       Frequency   Percentage
Peer evaluation              Internship   0           0%
                             Externship   0           0%
Evaluation by the patient    Internship   0           0%
                             Externship   0           0%
Self-evaluation              Internship   0           0%
                             Externship   0           0%
Work folder (portfolio)      Internship   6           40%
                             Externship   4           26.67%

Discussion and conclusion:

The results of this study showed that the methods most used to evaluate medical students at Birjand University of Medical Sciences were written tests, chiefly the key points and history matching tests, which assess clinical reasoning, together with multiple-choice tests, which mainly assess knowledge. This is consistent with the results of Mirzaei et al. (21), Mousavi et al. (11), and Komeyli and Rezaei (12), in which multiple-choice tests were the most widely used. Administering the key points test alongside the multiple-choice tests of the written series evaluates the two competencies of knowledge and clinical reasoning, but the multiple-choice questions must be set at a high taxonomic level, since most studies, such as those of Rasolinejad et al. (2007) (22) and Haghshenas et al. (23), report that the taxonomic level of multiple-choice questions is low. Attention to the structural rules of question writing is also necessary alongside attention to taxonomy, as multiple-choice questions often have structural problems as well (24, 25).

This study also showed that, among the tests used in assessment by the supervising physician, oral tests were the most widely used, which is consistent with the study by Mirzaei et al. (21). Structured tests such as the OSCE were used less in this domain, although studies show the importance of the OSCE (26). Clinical simulation tests, such as the OSCE with standardized patients, unannounced standardized patients, and simulation with clinical technology, were not used at all, despite their importance in increasing students' self-confidence in dealing with patients and enhancing their practical skills (27, 28); this may reflect professors' lack of knowledge of, and skill in, these tests.

In multi-source (360-degree) evaluation, only the work folder (portfolio) was used, an effective model that provides a basis for collecting information about learning outcomes. The work folder has been endorsed in the study of Jarahi et al. as a performance-based method (29), in the study of Hekmatpou for more realistic assessment based on written evidence and for effective learning in clinical settings (30), and in the study of Bahreini et al. as a tool for skill development (31). Given the spread of technology in education and learning, the electronic portfolio appears to be an even more appropriate way to evaluate learning (32). This study also showed that other tests, such as peer evaluation, evaluation by the patient, and self-evaluation, were not used at all, although other studies show that peer evaluation increases students' sense of responsibility for their own and their peers' learning (33), and self-evaluation is emphasized as an effective tool for assessing and enhancing self-directed learning (34). Overall, 360-degree evaluation is a work-based evaluation that assesses the student's relationships with patients and staff, managerial and teamwork skills, and behavior consistent with medical ethics, aspects that other tests address less. This evaluation method therefore reaches different dimensions of learning: in addition to knowledge and skills, it evaluates students' attitudes and behaviors (35).

Overall Conclusion: As these findings show, in most universities only the cognitive aspect of student learning is evaluated in the clinical groups, whereas a physician's professional competence requires skill and attitude in addition to knowledge. Based on the results of this study, it is therefore suggested that policy makers make it possible to conduct such tests, and that the different evaluation methods be taught to learners as well as to professors.

Future studies should examine the adequacy of the facilities and equipment needed to conduct these tests, and professors should be trained to become more familiar with the benefits and the methodology of the newer evaluation formats.

This article is extracted from a professional doctorate thesis. We are grateful to our colleagues in the research deputy of Birjand University of Medical Sciences.

References
1.Malakooti N, Bahadoran P, Ehsanpoor S. Assessment of the midwifery students' clinical competency before internship program in the field based on the objective structured clinical examination. Iranian Journal of Nursing and Midwifery Research. 2018 Jan;23(1):31.
2.Owston R. Models and Methods for Evaluation. Handbook of Research on Educational Communications and Technology. Routledge, New York, NY; 2008.
3.Alizadeh M, Mazouchian H. Letter to editor. Iranian Journal of Medical Education. 2015;15(0):505-7.
4.Al Kadri HM, Al-Moamary MS, van der Vleuten C. Students' and teachers' perceptions of clinical assessment program: A qualitative study in a PBL curriculum. BMC research notes. 2009;2(1):263.
5.Norton L. Using assessment criteria as learning criteria: a case study in psychology. Assessment & Evaluation in Higher Education. 2004;29(6):687-702.
6.Khademi Zare H, Fakhrzad MB. Integration of collaborative management and fuzzy systems for evaluating of students’ educational performance. Quarterly Journal of Research and Planning in Higher Education. 2013;19(3):23-40.
7.Javadinia SA. Training and Evaluation in Medicine Effective or Ineffective: Establishing an Educational Culture Based on Community Health Priorities. Iranian Journal of Medical Education. 2014;14(2):187-8.
8.Cushing A. Developments in attitude and professional behaviour assessment, oral presentation given at the 9th International Ottawa Conference on Medical Education. Cape Town, South Africa. 2000;28.
9.Shumway JM, Harden RM. AMEE Guide No. 25: The assessment of learning outcomes for the competent and reflective physician. Medical teacher. 2003;25(6):569-84.
10.Mesrabady J. Introduce and accreditation of concept map evaluation in learning progress and academic performance evaluation. J Educ Innov. 2011;10(38):7-24. [Persian]
11.Mousavi M, Maghami H. Comparison of new and old educational evaluation methods' efficacy on student's attitudes to innovation and academic achievement in elementary schools students. Inven Creat Hum J. 2012;2(6):125-46. [Persian]
12.Komeili G, Rezaei G. Study of student evaluation by basic sciences` instructors in Zahedan University of medical sciences in 2001. Iran J Med Edu. 2002;2(8):36. [Persian]
13.Kojury J, Rivaz S, Amini M, Rivaz M. Assessment of educational group's status based on types of evaluation methods of medical students at the Shiraz University of Medical Sciences 2014. 2017;5(1):7-13.
14.Khodayarian M, Vanaki Z, Navipour H, Vaezi A. The impact on clinical competence in nursing management, nursing care cardiac rehabilitation program. Journal of Kermanshah University of Medical Sciences. 2011;15(1):40-50. [Persian]
15.O’Neill G. Choice of assessment methods within a module: students’ experiences and staff recommendations for practice. 2010.
16.Francis RA. An investigation into the receptivity of undergraduate students to assessment empowerment. Assessment & Evaluation in Higher Education. 2008;33(5):547-57.
17.O'Shaughnessy SM, Joyce P. Summative and formative assessment in medicine: The experience of an anaesthesia trainee. International Journal of Higher Education. 2015;4(2):198.
18.Emery C, Kramer K, Tian R. Return to academic standards: challenge the student evaluation of teaching effectiveness. [cited 2006 Jul 29]. Available from: http://www.bus.lsu.edu/academics/accounting/faculty/lcrumbley/stu_rat_of_%20instr.htm
19.Sproule P. Student evaluation of teaching: a methodological critique of conventional practices. [cited 2006 Jul 29]. Available from: http://trc.ucdavis.edu/TRC/ta/TAdevel/seldin.pdf
20.Rose SH, Long TR. Accreditation council for graduate medical education (ACGME) annual anesthesiology residency and fellowship program review: a" report card" model for continuous improvement. BMC medical education. 2010 Dec;10(1):13.
21.Mirzaei A, Kawarizadeh F, Lohrabian V, Yegane Z. Evaluation Methods of the Academic Achievement of Students Ilam University of Medical Sciences. Education Strategies in Medical Sciences. 2015;8(2):91-7.
22.Rasolinejad SA, Vakihi Z, Fakharion E, Mosayebi Z, Moniri R. Comparative survey of taxonomies of residents' promotion examination, Kashan Medical University 2006. The 8th National Congress of Medical Education. Kerman: Kerman University of Medical Sciences; 2007: 68. [Persian]
23.Haghshenas M, Vahidshahi K, Mahmudi M, Shahbaznejad L, Parvinnejad N, Emadi A. Evaluation of Multiple Choice Questions in the School of Medicine, Mazandaran University of Medical Sciences, the First Semester of 2007. Strides Dev Med Educ. 2009; 5 (2) :120-127. URL: http://sdmej.ir/article-1-256-fa.html.
24.McCoubrie P. Improving the fairness of multiple-choice questions: a literature review. Medical teacher. 2004;26(8):709-12.
25.Shakoornia A, Khosravi A, Shariati A, Zarei A. Survey on multiple choice questions of faculty members of Jondi Shapor Medical University of Ahwaz. The 8th National Congress of Medical Education. Kerman: Kerman University of Medical Sciences; 2007: 44. [Persian]
26.Du Y, Yu K, Li X, Wang F, Wang T. Brief analysis of application of objective structured clinical examination (OSCE) in graduation exams of clinical medical students. Higher Education Studies. 2016;1(2):92.
27.Howley LD. Performance assessment in medical education: where we’ve been and where we’re going. Evaluation & the health professions. 2004;27(3):285-303.
28.Newble DI, Jaeger K. The effect of assessments and examinations on the learning of medical students. Medical education. 1983;17(3):165-71.
29.Jarahi L, Shojaghalehdokhtar L, Mousavibazaz M, Erfanian M. Educational evaluation of medical student in health centers using portfolios: a pilot study. Strides in Development of Medical Education. 2015;12(1):277-280. [Persian]
30.Hekmatpou D. Effect of portfolio based evaluation on accuracy of clinical evaluation of nursing students during internship in Arak University of Medical Sciences, Iran. Strides Dev Med Educ. 2013;10(1):60-69. [Persian]
31.Bahreini M, Shahamat S, Moattari M, Akaberian S, Sharifi S, Yazdankhah Fard M. Development of reflective skills among nurses through portfolio: a qualitative study. Iranian Journal of Medical Education. 2012;12(2):120-130. [Persian]
32.Ahmadi A, Alian negad MR. The effect of e-portfolio on learning of emergency medical students in pharmacology courses. Bimonthly of Education Strategies in Medical Sciences. 2017;10(1):15-22.
33.Kamali F, Shakour M, Yousefy A. Peer assessment in evaluation of medical sciences students. Iranian Journal of Medical Education. 2012;11(9):1443-52.
34.Kokeyo CA, Oluoch J. Self Evaluation: A Case Study of a School in Dar Es Salaam, Tanzania. Journal of Education and Practice. 2015;6(21):50-4.
35.Sahebalzamani M, Farahani H, Mehrabani E, Shahbazi M. Validity and reliability of 360-degree evaluation in the assessment of clinical nursing students. Medical Sciences Journal. 2016;26(4):264-70.