Original Article
DOI : https://doi.org/10.3946/kjme.2003.15.2.141
Korean J Med Educ. 2003; 15(2): 141-150.
Published online 2003 August 31.
The Agreement of Checklist Recordings Between Faculties and Standardized Patients in an Objective Structured Clinical Examination (OSCE)
Hoonki Park1, Jungkwon Lee2, Hwansik Hwang1, Jaeung Lee3, Yunyoung Choi4, Hyuck Kim5, and Dong Hyun Ahn6
1Department of Family Medicine, Hanyang University College of Medicine, Korea.
2Department of Internal Medicine, Hanyang University College of Medicine, Korea.
3Department of Nuclear Medicine, Hanyang University College of Medicine, Korea.
4Department of Thoracic Surgery, Hanyang University College of Medicine, Korea.
5Department of Neuropsychiatry, Hanyang University College of Medicine, Korea.
6Department of Family Medicine, Sungkyunkwan University School of Medicine, Korea.
Corresponding Author: Email: jkwonl@smc.samsung.co.kr
ABSTRACT
PURPOSE: A high degree of agreement between the checklist recordings of standardized patients (SPs) and those of faculty examiners is necessary if SPs are eventually to replace faculty in the OSCE evaluation process. This study was conducted to determine the degree to which SPs' checklist recordings agree with those of faculty during an OSCE. METHODS: One hundred twenty-one fourth-year medical students at Hanyang University College of Medicine took an OSCE. In each of two study stations, a student saw an SP for four minutes; during the following fifty seconds, the SP completed the same checklist as a faculty examiner. RESULTS: At the 'bad news delivery' station, SP ratings were more lenient than those of faculty (56 vs. 45, p < 0.01); at the 'chest pain' station there was no significant difference. Pearson correlation coefficients were 0.60 for the 'bad news delivery' station and 0.65 for the 'chest pain' station. The mean percentages of agreement for the 'bad news delivery' and 'chest pain' checklists were 71% and 82%, respectively, and the mean kappa statistics were 0.19 and 0.49, respectively. CONCLUSION: SP ratings were only moderately consistent with those of faculty. More precise scoring criteria and optimal SP training are prerequisites for replacing faculty with SPs in OSCE checklist recording.
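The abstract reports two inter-rater agreement measures: the percentage of checklist items on which both raters agree, and Cohen's kappa, which corrects that agreement for chance. A minimal sketch of how these are computed, using hypothetical binary checklist recordings (not the study's actual data):

```python
# Illustrative sketch only: the checklist recordings below are hypothetical,
# not data from the study. Computes percent agreement and Cohen's kappa
# for paired binary (performed / not performed) checklist ratings.

def percent_agreement(rater_a, rater_b):
    """Proportion of items on which the two raters gave the same rating."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    # Expected chance agreement from each rater's marginal rates per category
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical recordings for a 10-item checklist (1 = performed, 0 = not)
faculty = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
sp      = [1, 0, 0, 1, 1, 1, 1, 0, 1, 0]

print(f"agreement = {percent_agreement(faculty, sp):.0%}")  # 70%
print(f"kappa     = {cohens_kappa(faculty, sp):.2f}")       # 0.35
```

Note how the two measures can diverge: 70% raw agreement yields a kappa of only 0.35 here, because much of the agreement is expected by chance alone. This is why the study's 'bad news delivery' checklist could show 71% agreement yet a kappa of only 0.19.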
Keywords: Clinical competence; Observer variation; Educational measurement; Patient simulation