The utility of multiple mini-interviews: experience of a medical school

Article information

Korean J Med Educ. 2017;29(1):7-14
Publication date (electronic) : 2017 February 28
doi : https://doi.org/10.3946/kjme.2017.48
1Department of Medical Education, Dongguk University School of Medicine, Goyang, Korea
2Department of Pharmacology, Dongguk University School of Medicine, Gyeongju, Korea
3Department of Rehabilitation Medicine, Dongguk University School of Medicine, Goyang, Korea
Corresponding Author: Bum Sun Kwon (http://orcid.org/0000-0001-7755-435X) Department of Rehabilitation Medicine, Dongguk University School of Medicine, 27 Dongguk-ro, Ilsandong-gu, Goyang 10326, Korea Tel: +82.31.961.7480 Fax: +82.31.961.7488 email: bskwon@dumc.or.kr
Received 2016 September 6; Revised 2016 November 1; Accepted 2016 December 8.

Abstract

Purpose

This paper aims to introduce the design of multiple mini-interviews (MMIs) as a tool to assess medical school applicants’ attributes in alignment with the school’s educational goals and to evaluate its utility.

Methods

In this MMI, candidates rotated through six stations (10 minutes per station) whose interview topics were derived by mapping the school’s educational goals onto the core competencies for entering medical students. We conducted post-MMI surveys of all interviewers and candidates to investigate their experiences of the MMI. The G-coefficient and intraclass correlation coefficients were calculated to examine the reliability of the test. Additionally, the candidates’ MMI scores were compared across different backgrounds, and univariate analyses were used to estimate correlations between their MMI scores and prior academic achievements.

Results

A total of 164 candidates (a 98.8% response rate) and 19 interviewers (a 100% response rate) completed the surveys in 2014 and 2015. Both candidates and assessors responded positively to the MMI. The G-coefficient of the MMI scores was 0.88 and the intraclass correlation coefficients ranged from 0.58 to 0.75. The participants’ total MMI scores did not differ by gender or undergraduate background and were not associated with age, undergraduate grade point averages, or scores on the Korean medical school admission test (Medical Education Eligibility Test).

Conclusion

Our study illustrates the utility of an MMI that uses the institution’s educational goals to identify the attributes to be assessed in admission interviews, in alignment with the institution’s core values. Future research on the predictive validity of this MMI is warranted.

Introduction

Multiple mini-interviews (MMIs) have recently been introduced in Korean medical schools. Since the inception of graduate-entry programs in the basic medical education system, schools have gained more autonomy in student selection, and there has been a growing need to transform the traditional interview to better evaluate applicants’ noncognitive attributes expected of competent doctors. MMIs began at McMaster University over a decade ago by adapting the format of objective structured clinical examinations (OSCEs) to the admissions interview [1], thereby incorporating a structured multi-sampling method. Previous studies have shown that MMIs offer greater validity and reliability than traditional interviews [1,2,3,4,5,6,7,8,9,10,11,12,13].

Admission interviews at most Korean medical schools follow the traditional format, in which the candidate attributes the interview intends to assess are not clearly defined and the interview is relatively short with a small number of cases, raising concerns about reliability [14]. In particular, the blueprinting process, which maps the content domains to be assessed in the MMI to the qualities of candidates that the admissions committee wishes to pursue, is important for ensuring content validity [1]. MMIs can be designed to reflect the values of the individual medical school [15]. Moreover, it has been argued that aligning the medical school’s mission and goals with the criteria used to evaluate candidates is fundamental to the holistic review process in student selection [16].

Still, only a few Korean medical schools have adopted MMIs thus far, which is also the case in the United States [17]. Such limited adoption suggests that there are challenges in implementing them. Moreover, previous studies of MMIs in Korean medical schools did not describe in detail the blueprinting process that led to the development of their MMI stations. This paper describes how Dongguk University School of Medicine (DUSM) in South Korea transformed its admissions interviews from the traditional format to an MMI, using the school’s educational goals as a framework for the blueprinting process in order to select students who are better aligned with those goals. In addition, we present an evaluation of the utility of the MMI in terms of its validity, reliability, and acceptability to assessors and candidates. We also discuss some lessons learned from our experience.

Subjects and methods

1. Case development and interview procedures

DUSM had used a traditional interview format for more than 10 years, in which candidates were given two interview topics, one in the cognitive domain and one in the noncognitive domain, over 40 minutes and were assessed by a panel of interviewers. Results from previous years had shown that these interviews had weak predictive validity for students’ academic performance. In particular, candidates’ interview scores on cognitive-domain topics had been associated with their prior academic achievements, suggesting that the two were measuring the same attributes. Moreover, there had been validity concerns about the noncognitive-domain interviews because members of the admissions committee had no clear consensus on which noncognitive traits to measure. Accordingly, discussions began on reforming the interview process to enhance its validity and reliability.

As the school’s educational goals had been renewed in 2013 as part of transforming its curriculum into an outcomes-based one, discussions began on using them as a framework for the blueprinting process for admission interviews. The renewed educational goals comprise four domains founded on the institution’s core values: to educate future doctors who have competency, wisdom, mercy, and practice. Each of the four domains contains three specific competencies, and because some of these 12 competencies may not be suitable for prospective students, we mapped them onto the core personal competencies for entering medical students recommended by the Association of American Medical Colleges (AAMC) [18]. We thereby identified six constructs that overlapped between the 12 competencies in the school’s educational goals and the AAMC’s 15 core competencies for entering medical students: knowledge of basic science, problem-solving, critical thinking, ethical decision-making, interpersonal skills, and self-regulation.
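
As a purely illustrative sketch of this mapping step, the blueprint can be viewed as the intersection of two competency sets. The six shared constructs listed above are taken from the text; the remaining entries are placeholders, not the official wording of the school’s goals or the AAMC list.

```python
# Illustrative only: the blueprinting step viewed as a set intersection.
school_competencies = {
    "knowledge of basic science", "problem-solving", "critical thinking",
    "ethical decision-making", "interpersonal skills", "self-regulation",
    # ... six further school-specific competencies (12 in total, wording omitted)
}
aamc_core_competencies = {
    "knowledge of basic science", "problem-solving", "critical thinking",
    "ethical decision-making", "interpersonal skills", "self-regulation",
    # ... nine further AAMC core competencies (15 in total, wording omitted)
}
# The overlap yields the six constructs around which the MMI stations were designed.
mmi_constructs = sorted(school_competencies & aamc_core_competencies)
print(mmi_constructs)
```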

Some of the six constructs identified in the blueprinting process were more related to cognitive abilities than to noncognitive attributes. We chose to include these cognitive constructs so that the MMI stations would encompass all of the domains of the school’s educational goals. The stations that assessed cognitive abilities (i.e., knowledge of basic science and problem-solving skills) were designed so that they did not require specialized knowledge beyond the scope of the Medical Education Eligibility Test (MEET).

After this blueprinting process, it became clear that the traditional interview format, in which only two interview topics were given, was no longer suitable. Members of the admissions committee therefore decided to adopt the MMI and were trained through a series of educational programs. About 10 faculty members were selected from the admissions committee to develop MMI cases, half of whom had prior experience in case development for traditional interviews. The committee members developed scenarios matching the six constructs to be assessed in the MMI stations described earlier. Each member drafted scenarios individually, which were then revised through feedback from the other members; this cycle was repeated several times to reach the final versions.

The admission interviews using the MMI format were first implemented at DUSM for the entering class of 2015, in November 2014. The MMI comprised six stations of 10 minutes each, including 5 minutes for the candidate to read the case information and prepare for the interview. The interviews were conducted in two sessions on a single day, one in the morning and one in the afternoon, and candidates were randomly assigned to one of the two sessions. Each station was assessed by one interviewer, except for two stations that were assessed by two interviewers for analysis of interrater reliability. The interviewers rated the candidates’ performance on two to three items per station, presented on a scoresheet, using a 5-point scale from 1 (“unsuitable”) to 5 (“outstanding”).

2. Study procedures

The sample of this study consisted of candidates for the medical program at DUSM who participated in the admission interviews in 2014 and 2015 (n=166; 97 in 2014 and 69 in 2015). All candidates who participated in the admission interviews were invited to complete a post-MMI survey assessing their perceptions of the MMI. The questionnaires were administered at the end of each interview session, that is, when the morning and afternoon sessions ended. The questionnaire comprised 22 items: nine items on respondent demographics, 11 items on perceptions of the MMI experience, and two open-ended questions for general comments. The items on participant perceptions were adapted from Eva et al. [1] to allow comparison of our findings with those from other institutions. Two items were added to the 2015 questionnaire to measure participant perceptions of the alignment between the MMI cases and the school’s educational goals.

In addition to the candidate survey, a questionnaire was administered in December 2014 to faculty members who were either interviewers or members of the admissions committee. In this questionnaire, the interviewers responded to nine items on their perceptions of the MMI. Both questionnaires used a self-report format, and participants rated their responses on a 5-point Likert scale, where 1=“strongly disagree” and 5=“strongly agree.”

Candidates’ MMI scores and their prior academic achievements, i.e., undergraduate grade point averages (GPAs) and MEET scores, were obtained for the 2014 cohort (n=97) to compare their performance in the MMI across different backgrounds and to estimate correlations between their MMI scores and their prior academic achievements.

The requirements for ethical review and informed consent were waived by the Institutional Review Board of Dongguk University, Gyeongju (IRB number: DGU IRB 2016001-01). Participation was voluntary, and consent was implied by the return of the survey, as responses were collected anonymously.

3. Data analysis

Descriptive statistics were computed for the data collected from the candidate and interviewer surveys. Student’s t-test and one-way analysis of variance were performed to compare the candidates’ MMI scores across different backgrounds. Pearson correlation analysis was used to test associations between the candidates’ MMI scores and their prior academic achievements. In addition, a G-coefficient was calculated from the MMI scores to investigate the reliability of the test, and interrater reliability was analyzed using intraclass correlation coefficients. All data were analyzed with SPSS version 20.0 for Windows (IBM Corp., Armonk, USA). The significance level was set at 0.05.
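
The original analyses were run in SPSS. As a rough illustration of the generalizability analysis only, the sketch below estimates a relative G-coefficient for a fully crossed candidate × station design from a score matrix using standard expected mean squares; the function name and the randomly generated data are hypothetical and are not the authors’ code or data. Intraclass correlations for the two double-rated stations would be computed separately from a two-way model and are not shown.

```python
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    """Relative G-coefficient for a fully crossed candidate (p) x station (s) design.

    scores: 2-D array with rows = candidates and columns = stations.
    """
    n_p, n_s = scores.shape
    grand = scores.mean()
    # Mean square for candidates and for the residual (p x s interaction plus error).
    ms_p = n_s * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    resid = scores - scores.mean(axis=1, keepdims=True) - scores.mean(axis=0, keepdims=True) + grand
    ms_ps = (resid ** 2).sum() / ((n_p - 1) * (n_s - 1))
    # Estimated variance components (a negative candidate estimate is truncated at zero).
    var_p = max((ms_p - ms_ps) / n_s, 0.0)
    var_ps = ms_ps
    # G for decisions based on the mean score over n_s stations per candidate.
    return var_p / (var_p + var_ps / n_s)

# Hypothetical example: 97 candidates rated at 6 stations with random scores.
rng = np.random.default_rng(0)
demo_scores = rng.normal(loc=20.0, scale=3.0, size=(97, 6))
print(round(g_coefficient(demo_scores), 2))
```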

Results

1. Participant characteristics

A total of 164 candidates completed the questionnaires (a 98.8% response rate). Of those, 55 (34%) were female and 109 (66%) were male, and they were between 23 and 37 years old with a mean age of 27.2 years (standard deviation [SD], 3.22). The candidates had diverse undergraduate backgrounds: science and engineering (n=72, 44%), life sciences and biotechnology (n=69, 42%), healthcare professions (n=11, 6%), humanities and social sciences (n=4, 3%), and others (n=8, 5%).

Nineteen faculty members responded to the interviewer survey, for a response rate of 100%. Of those, 16% (n=6) were female and 84% (n=13) were male; seven (37%) were professors, 10 (53%) were associate professors, and two (10%) were assistant professors. About half of the faculty members (n=10) had experience with admission interviews in the traditional format.

2. Faculty and candidate perceptions of MMI

Table 1 shows the results of the candidate post-MMI survey. The participants agreed that the MMI cases were interesting and slightly agreed that they were able to present an accurate portrayal of their abilities in the MMI. Sixty percent of the participants had previously experienced interviews in the traditional format, and they were neutral about the statement that they felt more anxious about the MMI than the traditional interview. The participants disagreed with the statement that the use of the MMI would stop them from applying to DUSM. Moreover, the participants agreed with the statement that they were generally satisfied with the MMI.


The candidates slightly agreed with the statements that they were able to better understand the school’s educational goals through the MMI and that they were able to understand the candidate attributes pursued by the school through the MMI. Some candidates also commented in response to the open-ended questions that they felt they were being evaluated on a variety of traits and that the MMI allowed them to recognize the attributes DUSM pursues in selecting students.

In terms of faculty perceptions of the MMI, 62% of the faculty participants agreed or strongly agreed with the statement that they were able to obtain an accurate portrayal of the candidate’s ability in the MMI (mean [M], 3.68; SD, 0.75), and 63% of them disagreed or strongly disagreed with the statement that the MMI was more difficult to implement than the traditional interview (M, 2.32; SD, 0.58). Additionally, 84% of the faculty participants agreed or strongly agreed with the statement that they were generally satisfied with the MMI (M, 3.87; SD, 0.66). Still, some faculty members commented in response to the open-ended question that the quality of the cases needed to be improved.

3. Reliability analyses

Table 2 shows the results of the reliability analysis using the variance components method. The G-coefficient of the MMI scores was 0.88, which is at an acceptable level. In addition to the G-coefficient analysis, interrater reliability was analyzed using intraclass correlations for the two stations that were assessed by a panel of two interviewers. The intraclass correlation coefficients ranged from 0.58 to 0.75. Most of the interviewers who participated in the double-rated stations also commented in the post-MMI survey that they felt there was agreement between the interviewers in their assessments of the candidates’ performance.


4. Relationships between participants’ MMI scores, their backgrounds, and prior academic achievements

The mean total MMI score was 279.2 (SD, 5.36) out of a possible 300. Participants’ MMI scores did not differ by gender (t=0.35, p=0.72) or by undergraduate background (F=2.15, p=0.08) and were not associated with age (r=0.01, p=0.97). Furthermore, there was no difference in the participants’ total MMI scores by the time of day at which the interview took place (t=0.90, p=0.37).

Table 3 shows the relationships between the participants’ MMI scores and their prior academic achievements. The participants’ total MMI scores were not associated with their undergraduate GPAs or MEET scores. The station-level MMI scores were not associated with MEET scores in five of the six cases, but there was a weak association in the one case that required knowledge of basic science. There were no associations between the participants’ MMI scores and their undergraduate GPAs in any of the cases.


Discussion

Our study indicates that the candidates responded positively to the MMI and that its reliability was at an acceptable level. These findings are consistent with those of other Korean medical schools that have adopted MMIs [9,10,14], as well as with studies from medical schools in North America and Europe [1,5,19,20]. In addition, our study shows that candidates recognized the alignment between the criteria used to evaluate them in the MMI and the school’s educational goals, even though we did not make this alignment explicit to them. These findings indicate that the MMI cases were well aligned with the educational goals.

Furthermore, our findings show that the MMI cases were not biased with regard to the candidates’ backgrounds, which is also regarded as evidence of validity [12]. In addition, we found that the candidates’ total MMI scores were not associated with their prior academic achievements, such as undergraduate GPAs and MEET scores. These findings suggest that the MMI assessed candidate attributes distinct from their cognitive abilities, which suits the purpose of admission interviews. Still, the candidates’ MMI scores were associated with their MEET scores in one station, which assessed knowledge of basic science. We offered an MMI station on this topic because it was one of the competencies in the school’s educational goals, yet it appears that this station assessed attributes similar to those measured by the MEET. Additionally, the candidates’ scores in the basic science station had no association with their undergraduate GPAs, which appears to reflect the diverse undergraduate backgrounds of our candidates.

Feasibility and cost-effectiveness are important in implementing an admission tool, in addition to its validity and reliability [1]. Our experience shows that MMIs are acceptable in medical schools. In particular, our study indicates that the school’s educational goals offer members of the admissions committee a framework for identifying the qualities of candidates they wish to pursue in student selection, and that the MMI is an effective tool for designing and implementing an interview process suited to that purpose. In our reflection, an institution’s effort to transform its educational goals into ones that focus on students’ learning outcomes can be readily translated into content domains for MMI cases. Moreover, in our experience the MMI did not require more resources than the traditional interview; thus, MMIs seem highly feasible in medical schools.

Our experience with OSCEs for over a decade made it easy for those involved in the interview process to understand the structure and process of the MMI. This seemed to help us transition smoothly to the new interview format, which resulted in positive responses from both interviewers and candidates. Still, faculty members had mixed reactions to the quality of the MMI cases, which appears to be due to their lack of experience in developing such cases. Efforts need to be made to train faculty members in MMI case development.

Although our study has practical implications for implementing MMIs, it is limited in that it is an initial study of the validity and reliability of an MMI aligned with the school’s educational goals. Future research should follow the candidates’ academic performance over their years in medical school to examine the predictive validity of the MMI. Because the MMI topics match the school’s exit outcomes, future research on the relationship between students’ MMI scores and their performance on those exit outcomes could shed more light on the effectiveness of aligning MMI case topics with the institution’s educational goals.

Acknowledgements

None.

Notes

Funding

None.

Conflicts of interest

None.

References

1. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: the multiple mini-interview. Med Educ 2004;38:314–326.
2. Eva KW, Reiter HI, Trinh K, Wasi P, Rosenfeld J, Norman GR. Predictive validity of the multiple mini-interview for selecting medical trainees. Med Educ 2009;43:767–775.
3. Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ 2007;41:378–384.
4. Knorr M, Hissbach J. Multiple mini-interviews: same concept, different approaches. Med Educ 2014;48:1157–1175.
5. Pau A, Jeevaratnam K, Chen YS, Fall AA, Khoo C, Nadarajah VD. The multiple mini-interview (MMI) for student selection in health professions training: a systematic review. Med Teach 2013;35:1027–1041.
6. Sebok SS, Luu K, Klinger DA. Psychometric properties of the multiple mini-interview used for medical admissions: findings from generalizability and Rasch analyses. Adv Health Sci Educ Theory Pract 2014;19:71–84.
7. Lee HJ, Park SB, Park SC, Park WS, Ryu SW, Yang JH, Na S, Won JY, Chae GB. Multiple mini-interviews as a predictor of academic achievements during the first 2 years of medical school. BMC Res Notes 2016;9:93.
8. Pau A, Chen YS, Lee VK, Sow CF, De Alwis R. What does the multiple mini interview have to offer over the panel interview? Med Educ Online 2016;21:29874.
9. Kim JK, Kang SH, Lee HJ, Yang J. Can the multiple mini-interview predict academic achievement in medical school? Korean J Med Educ 2014;26:223–229.
10. Kim DH, Hwang J, Kim EJ, Yoon HB, Shin JS, Lee S. How different are premedical freshmen who enter after introducing a multiple mini-interview in a medical school? Korean J Med Educ 2014;26:87–98.
11. Eva KW, Reiter HI, Rosenfeld J, Trinh K, Wood TJ, Norman GR. Association between a medical school admission process using the multiple mini-interview and national licensing examination scores. JAMA 2012;308:2233–2240.
12. Rees EL, Hawarden AW, Dent G, Hays R, Bates J, Hassell AB. Evidence regarding the utility of multiple mini-interview (MMI) for selection to undergraduate health programs. A BEME systematic review: BEME guide no. 37. Med Teach 2016;38:443–455.
13. Uijtdehaage S, Doyle L, Parker N. Enhancing the reliability of the multiple mini-interview for selecting prospective health care leaders. Acad Med 2011;86:1032–1039.
14. Roh H, Lee HJ, Park SB, Yang JH, Kim DJ, Kim SH, Lee SJ, Chae G. Multiple mini-interview in selecting medical students. Korean J Med Educ 2009;21:103–115.
15. Reiter HI, Eva KW. Reflecting the relative values of community, faculty, and students in the admissions tools of medical school. Teach Learn Med 2005;17:4–8.
16. Witzburg RA, Sondheimer HM. Holistic review: shaping the medical profession one applicant at a time. N Engl J Med 2013;368:1565–1567.
17. Glazer G, Startsman LF, Bankston K, Michaels J, Danek JC, Fair M. How many schools adopt interviews during the student admission process across the health professions in the United States of America? J Educ Eval Health Prof 2016;13:12.
18. Koenig TW, Parrish SK, Terregino CA, Williams JP, Dunleavy DM, Volsch JM. Core personal competencies important to entering students' success in medical school: what are they and how could they be assessed early in the admission process? Acad Med 2013;88:603–613.
19. Husbands A, Dowell J. Predictive validity of the Dundee multiple mini-interview. Med Educ 2013;47:717–725.
20. Lemay JF, Lockyer JM, Collin VT, Brownell AK. Assessment of non-cognitive traits through the admissions multiple mini-interview. Med Educ 2007;41:573–579.


Table 1.

Candidate Responses to Post-MMI Survey (n=164)

Question Response
1. I felt I was able to present an accurate portrayal of my ability. 3.56±0.78
2. Compared to the traditional interview, I think the MMI would cause candidates more anxiety. 2.96±1.08
3. The use of the MMI would stop me from applying to DUSM. 1.21±0.45
4. The instructions given on the MMI were clear enough. 4.64±0.62
5. The MMI cases were interesting. 4.03±0.64
6. I felt comfortable during the MMI. 4.13±0.66
7. I am generally satisfied with the MMI. 4.13±0.63
8. I was able to better understand DUSM's educational goals through the MMI experience. 3.67±0.91
9. I was able to better understand the candidate attributes that DUSM pursues through the MMI experience. 3.70±0.95

Data are presented as mean±standard deviation.

MMI: Multiple mini-interview, DUSM: Dongguk University School of Medicine.

Table 2.

Summary of Effects, Estimated Variance Components, and the G-Coefficient Analysis

Effect Degrees of freedom Mean square Estimated variance
Candidate 96 198.614 0.930
Station 5 362.861 3.596
Candidate × station 480 14.033 14.033

G-coefficient = σ²(candidate)/(σ²(candidate) + σ²(candidate × station)/6) = 0.88.

Table 3.

Correlations between Participants’ MMI Scores and Their Prior Academic Achievements (n=97)

Case topic Correlations with MMI Score
MEET score, r (p) Undergraduate GPA, r (p)
1. Self-regulation 0.04 (0.69) 0.07 (0.46)
2. Knowledge in basic science 0.27 (<0.01) 0.11 (0.29)
3. Problem-solving ability -0.09 (0.36) 0.05 (0.63)
4. Ethical decision-making -0.03 (0.80) 0.12 (0.25)
5. Critical thinking -0.01 (0.92) -0.06 (0.57)
6. Interpersonal skills 0.11 (0.30) 0.02 (0.85)
Total score 0.11 (0.27) 0.11 (0.29)

MMI: Multiple mini-interview, MEET: Medical Education Eligibility Test, GPA: Grade point average.