Kim: Course-embedded assessment in Korean nursing baccalaureate education

Abstract

Purpose:

This study aimed to evaluate program outcomes in baccalaureate nursing education in Korea. The analysis was based on course-embedded assessment. The concrete objectives were to establish program outcome assessment, to confirm student competency through weighting of program outcomes, and to use the evaluation results in a circular feedback process at a Korean nursing school.

Methods:

This study was conducted with a nursing education curriculum in a Korean nursing school. Data were collected through program outcome measurements of 28 students from January 2013 to December 2014. Data were analyzed using a pairwise comparison method and an analytic hierarchy process.

Results:

There were one to three direct assessment tools and one indirect assessment tool for each program outcome, and each tool had measurable rubrics. The model ranked the program outcomes from “care integration” (highest weight) to “global perception” (lowest weight) through weight calculation. All direct assessment results were over 70%. The indirect assessment results were over the cutoff except for program outcomes 4 and 7.

Conclusion:

Each step of course-embedded assessment was adaptable to nursing program outcome measurement, and it provided faculty and students with reasonable tools for confirming the achievement of learning outcomes.

Introduction

The educational paradigm has changed from a provider-centered to a learner-centered curriculum. An outcome-based curriculum means focusing not on what is taught but on the process of developing the learner’s abilities [1]. The outcome-based curriculum originated in engineering education accreditation in the 1930s in America, and program outcome measures and continuous quality improvement systems became common in colleges and universities beginning in 2001 [2].
Outcome-based curriculum has been appropriate for practical study because nursing education has borne the responsibility for nurses’ competency and patients’ safety [3]. The Korean Accreditation Board of Nursing Education (KABONE) proposed 12 program outcomes of nursing education [4]. Each Korean nursing education institution modified the proposed program outcomes according to the mission and philosophy of the nursing school. Nursing schools must ensure nursing competency by analyzing the educational environment, and developing, managing, and evaluating the curriculum [3].
Assessment is not an easy process because faculty must evaluate not at the lesson level but at the program level [5]. Program outcome assessment involves diverse approaches employing various methods such as direct and indirect measurement. Thus, program outcome measurement is larger than the sum of individual subjects’ achievement [6].
Individual course outcome assessments are not appropriate for program outcome measurement because learners’ experiences progress and are continuously integrated across courses. Course-embedded assessments (CEA) can evaluate program outcomes in a diverse way from the perspectives of planning, organization, and appraisal [2]. CEA was introduced in 1995 as a systemic method of learning outcome measurement in university education [7]. CEA involves cutting-edge assessment including program outcome creation, course alignment, evaluation tool development, program outcome analysis, achievement confirmation, and feedback for improvement [8,9]. It enhances the ease and effectiveness of program outcome evaluations, as has been shown in some Korean studies in the field of engineering [9]. In addition, CEA has been applied in the areas of business [10], marketing [11], arts and science education [12], and nursing [13], although there has been no previous research specifically regarding the introduction of CEA within an applied nursing curriculum. This study aimed to fill this research gap.
The theoretical framework of this study was CEA, which was proposed by Walvoord and Anderson in 1995 [7] and was utilized from 1998 by Gerretson and Golson [2] in general university education. CEA involves macro evaluation at the level of the program or institute, rather than a micro evaluation of the course or lesson. While past evaluations have measured individual student achievement, CEA focuses on the total student group. The CEA process comprises determining the learning aim and goal, developing an evaluation tool and rubric, program outcome assessment, and feedback [13,14].
CEA has the advantage that the teacher has an educational perspective in terms of the learning goal. Active interaction and communication with professors is another advantage. Learning outcomes can be improved because students understand the learning goals and perceive the criterion in the process of CEA. Through this circular process, educational content, methods, and media can change rapidly based on assessment results, resulting in better education. Furthermore, CEA is easy, effective, and efficient [2,8,14].
Disadvantages of CEA include the perception that the assessment is an additional burden, inconvenience, fear, and resistance among faculty. Discussion and deliberation within the teaching community is necessary to eliminate resistance. In addition, it is important to include teachers in the process of choosing an assessment tool [13,14].
The aim of this study was to assess nursing program outcomes in nursing baccalaureate education in Korea. The concrete objectives were (1) establishment of program outcome assessment, (2) confirmation of student competency through weighting of program outcomes, and (3) using the results of the evaluation in the circular feedback process in a nursing school in Korea.

Subjects and methods

This study was conducted to assess the program outcomes of students graduating in February 2015 using CEA. Data collection was conducted between January 2013 and December 2014. The setting was the nursing department at Doowon Technical University, located in Gyeonggi Province, Korea. A total of 28 senior students participated in both direct and indirect program outcome achievement assessments, which were based on 12 program outcomes selected by faculty and by students in all grades, from freshmen to seniors. The assessment results were fed back into the curriculum. For the direct assessment, an analytic hierarchy process was performed; for the indirect assessment, a previously developed self-assessment questionnaire was administered. The researchers explained the study purpose, the method, and the required duration of 10 to 15 minutes. Participants were assured that their data would be kept confidential and informed of the possibility of withdrawal from the study.
Before the CEA, a literature review was performed with the keyword “course-embedded assessment” using RISS, ERIC, EBSCOhost, and PubMed. Eleven of 281 articles (261 in English and 20 in Korean) were chosen as relevant based on the researcher’s qualitative judgment, and three articles were added through a manual search. The literature review evaluated the feasibility of adapting CEA for the measurement of nursing education program outcomes.
The first step of CEA was the development of evaluation tools and refinement of rubrics [8]. Rubrics should be validated and designed to be calculated numerically [2]. Direct instruments were based on various assessment methods such as exams, research projects, paper assignments, presentations, and class assignments. The indirect tool was a self-assessment questionnaire with 79 items in 12 subcategories corresponding to the 12 program outcomes: care integration, core practice, communication skills, explanation of cooperation, coordination of roles, critical thinking, professional standards, legal and ethical considerations, leadership analysis, leadership exercise, research practice, and global perception. A 5-point Likert rating scale was used (1, never; 5, absolutely yes), with a total score range of 79 to 395. Construct validity was supported by exploratory factor analysis, with an explained variance of 69.16%. Internal consistency (Cronbach’s α) was 0.91 and split-half reliability was 0.84 and 0.85 in the development study; Cronbach’s α was 0.93 in this study. Each factor’s cutoff was determined through sensitivity and specificity analysis of the receiver operating characteristic (ROC) curve, which showed an optimal total-score cut point of 227 [15].
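As a minimal sketch of how such a cut point can be located, the following Python fragment maximizes Youden’s J statistic (sensitivity + specificity − 1) along the ROC curve. The use of scikit-learn, the variable names, and the simulated data are illustrative assumptions; reference [15] describes the actual procedure and data.

import numpy as np
from sklearn.metrics import roc_curve

def optimal_cutoff(labels, scores):
    # labels: 1 if the student met the external criterion, else 0 (assumed)
    # scores: total questionnaire scores (possible range 79 to 395)
    fpr, tpr, thresholds = roc_curve(labels, scores)
    j = tpr - fpr  # Youden's J = sensitivity - (1 - specificity)
    return thresholds[np.argmax(j)]

# Demonstration with simulated data; the study reports an optimal
# cut point of 227 for the real questionnaire data.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 100)
scores = rng.uniform(79, 335, 100) + 60 * labels
print(optimal_cutoff(labels, scores))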
The second step was program outcome assessment and evaluation of students’ competency [13]. If at least 60% of students met the desired goal, a program outcome was considered accomplished; however, if even one program outcome fell below a 50% success rate, the learning goal was considered not accomplished [2]. Every program outcome was weighted relative to the other program outcomes based on importance [12]. Weighted program outcomes were calculated and students’ competencies were confirmed.
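The two-level criterion can be stated compactly as below; this is a sketch under the 60%/50% thresholds quoted above, and the function and variable names are illustrative rather than taken from the study.

def program_accomplished(pass_rates):
    # pass_rates: for each program outcome, the fraction of students
    # who met the desired goal (assumed input format)
    if any(rate < 0.50 for rate in pass_rates):
        return False  # even one outcome below 50%: goal not accomplished
    return all(rate >= 0.60 for rate in pass_rates)

print(program_accomplished([0.92, 0.75, 0.61]))  # True
print(program_accomplished([0.92, 0.45, 0.80]))  # False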
The importance of the program outcomes was rated by eight nursing professors and three registered nurses through a pairwise comparison method. Faculty were included if they had (1) more than 3 years of experience in nursing education, (2) more than 2 years of nursing career, and (3) participation in more than five program outcome assessment workshops. The nurse inclusion criteria were (1) more than 10 years of nursing career and (2) more than 5 years of clinical education for nursing students. The scale was rated as “1=equal importance,” “2=weak or slight importance,” “3=moderate importance,” “4=moderate plus,” “5=strong importance,” “6=strong plus,” “7=very strong or demonstrated importance,” “8=very, very strong importance,” and “9=extreme importance” [16].
A pairwise comparison method was used to assess the relative importance of the program outcomes. This method supports judgments that rank criteria, choices, and priorities in economic and governmental arenas. The relative importance of each program outcome over the other 11 was expressed in a matrix. Eleven professionals ranked importance in terms of fractions such as 1/12 (the most important among 12) and then converted these fractions to decimals. The matrix was then squared, and the row sums were normalized by dividing each row sum by the total of all row sums. The result was an eigenvector indicator for the sample. An analytic hierarchy process was used as the logical process to determine the weights (Table 1) [17].
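A minimal sketch of this weight calculation is given below, assuming the consistent pairwise matrix of Table 1 and using Python with numpy in place of the spreadsheet computation used in the study; the variable names and the iteration limit are illustrative.

import numpy as np

# Converted rater scores; entry a[i][j] of the Table 1 matrix is the
# importance of program outcome i relative to program outcome j.
scores = np.array([1, 2, 4, 5, 6, 3, 6, 4, 6, 7, 8, 9], dtype=float)
A = scores[np.newaxis, :] / scores[:, np.newaxis]  # a_ij = s_j / s_i

w = np.ones(len(A)) / len(A)
for _ in range(10):
    A = A @ A     # square the matrix
    A /= A.max()  # rescale to avoid overflow; normalized row sums are unchanged
    row_sums = A.sum(axis=1)
    w_new = row_sums / row_sums.sum()
    if np.allclose(w, w_new, atol=1e-9):
        break     # the eigenvector has stabilized
    w = w_new

print(np.round(w, 3))  # approx. 0.293, 0.147, 0.073, ...

The leading values are close to the weights reported in Table 3 (0.292, 0.146, ...); small differences reflect rounding in the published figures.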
The third step was refinement of the evaluation and reflection in the future curriculum. This circular process grounds systematic modification of the curriculum in the assessment results [2,12]. Evaluation results were confirmed, incorporated into the refined evaluation, and reflected in the next curriculum. Students’ competency achievements were confirmed with criterion weights and reflected in the next semester’s curriculum. Departments with relatively low achievement were reinforced and the curriculum was modified. The assessment tools and rubrics were refined, and course completion times and credit hours were adjusted. Individual students not meeting the criterion were followed up within the program to enhance their competency before graduation.

1. Statistical analysis

Data were analyzed with Microsoft Excel 10.0 (Microsoft, Redmond, WA, USA). The relative importance of the program outcomes was evaluated with a pairwise comparison method and an analytic hierarchy process. The eigenvectors of the program outcomes were calculated through the analytic hierarchy process by repeated matrix multiplication using the MMULT (matrix multiplication) function in Excel. The relative ranking of the program outcomes was obtained from the computed eigenvectors. Program outcome achievement was analyzed using means, weighted means, percentages, and standard deviations (SDs).

Results

1. Evaluation tools

There were one to three direct assessment tools for each program outcome, and each tool had measurable rubrics. The tools for program outcomes 1 through 12 were as follows: (1) integrated simulation achievement and five clinical case study reports; (2) achievement of a fundamental nursing practicum and simulation practicum; (3) achievement of communication and a psychiatric nursing practicum; (4) achievement of human relationships and a communication role play in nursing administration; (5) a job description report for nursing administration and health program participation; (6) a graduate accreditation assessment and nursing process report; (7) a standard nursing report and participation in a special lecture by a nursing leader; (8) an ethics case report and achievement of medical law; (9) a nursing leadership report and achievement of nursing administration; (10) an autobiography in psychology and a clinical performance examination in nursing leadership; (11) a nursing research report and research presentation in nursing statistics; and (12) the Test of English for International Communication (TOEIC), political forum participation in community nursing, and a nursing politics report. The indirect assessment tool comprised 12 subscales as follows: (1) care integration; (2) core practice; (3) communication skills; (4) explanation of cooperation; (5) coordination of roles; (6) critical thinking; (7) professional standards; (8) legal/ethical understanding; (9) leadership analysis; (10) leadership exercise; (11) research practice; and (12) global perception (Table 2).

2. Weighting program outcomes and confirming competency

The weights of the program outcomes ranged from 0.032 to 0.292. The importance hierarchy of the program outcomes was as follows: 0.292 (program outcome 1), 0.146 (2), 0.096 (6), 0.077 (8), 0.073 (3), 0.058 (4), 0.049 (5, 7, 9), 0.042 (10), 0.037 (11), and 0.032 (12). The direct assessment was calculated as the percentage of achievement multiplied by the weight. The weighted mean±SD by program outcome was as follows: 25.6±2.1 (1), 12.3±1.0 (2), 6.9±0.2 (3), 6.1±0.3 (7), 6.1±0.8 (8), 4.7±0.8 (6), 4.5±1.4 (4), 4.3±0.7 (5), 4.1±0.5 (9), 3.9±0.5 (10), 1.6±1.8 (11), and 2.5±0.5 (12) (Table 3).
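As a small worked check of these figures, each weighted mean follows directly from multiplying the achievement percentage by the corresponding weight; the values below are taken from Table 3, and the dictionary names are illustrative.

weights = {1: 0.292, 2: 0.146, 3: 0.073}    # eigenvector weights (Table 3)
achievement = {1: 87.6, 2: 84.1, 3: 94.3}   # direct assessment, % (Table 3)

weighted = {po: round(weights[po] * achievement[po], 1) for po in weights}
print(weighted)  # {1: 25.6, 2: 12.3, 3: 6.9}, matching the weighted means above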

3. Evaluation of the circular process

All direct assessment results were over 70%. The results of program outcomes 3, 6, 10, and 11 were over 90%; those of 1, 2, 5, and 9 were over 80%; and those of 4, 7, 8, and 12 were over 70%. The indirect assessment results were over the cutoff except for program outcomes 4 and 7. The indirect measurement of program outcome 4 (14.2±2.1) was lower than its cutoff of 23.5, and that of program outcome 7 (14.5±1.9) was lower than its cutoff of 24.5 (Table 3). Program outcomes with low measurements were combined (4 with 5, and 7 with 8) in the next curriculum modification.

Discussion

This study demonstrated a systemic approach to program outcome evaluation through CEA. The results highlight the advantages of CEA, such as inducing faculty and students to achieve learning goals, reflecting the real importance of program outcomes, and ease of use.
In a competency-based curriculum, educational institutions should manage measurable assessment systems at the level of learning outcomes. At the end of the formal curriculum, all students were evaluated with a summative assessment [18]. Students experienced the integration of lesson activities with learning goals when the course alignment according to program outcomes was announced [17].
The creation of direct and indirect tools was a difficult stage of CEA. In developing the instruments, the researchers found redundancy and many tasks that belonged to no program outcome [12]. Assessment from a program perspective involved removing useless tasks and including only essentials. The key to assessment was alignment of goals and evaluation. This study’s tools demonstrated validity in terms of relationship, accuracy, and usefulness. Relationship referred to direct evaluation of the program outcome, accuracy to the maximum reflection of achievement, and usefulness to providing insight for quality improvement [14].
CEA was useful to judge the current status of achievement and improve the next course iteration [2]. Two or three direct tools were used in this study for evaluating improvement for each program outcome from the initial to final stage. More than one direct tool was used because measurement of a specific course’s achievement was not enough to evaluate the program outcome [14]. Indirect tools were useful for self-reflection by students, oriented to the goal, and appropriate for a student-centered curriculum [19]. However, indirect measurement could not ensure competency [14]; therefore, direct and indirect tools were used as the accreditation index and organized well in this study [17].
Students’ competency was measured with weighted importance in this study. The simple sum of a tool’s results did not reflect students’ strengths and weaknesses [14]. The relative importance of program outcome 1 was 29.2% because it had many related courses, while the relative importance of program outcomes 11 and 12 was 3.7% and 3.2%, respectively, because they had minor courses. Weighted importance protected against measurement bias. The analytic hierarchy process could systematically identify complex criteria of importance [16]. Therefore, this method enhanced the validity of the competency measurement.
The final step of CEA was to input the evaluation result into the circular feedback process [2]. All results of the direct program outcome were over 70%, and program outcomes 4 and 7, which were under the cutoff point for indirect measurement, were integrated in the next curriculum after analysis of importance and similarity. The advantage of CEA was powerful progress through consistent assessment using rubrics [10].
The significance of CEA from the faculty’s perspective was that they had the opportunity to make lessons effective and efficient. They can obtain information about what the student knows and how well they have learned the content [9]. However, assessment would be a burden on the faculty if there was no system. To overcome resistance, a learning outcome committee should share the faculty’s opinion and obtain agreement on the CEA process [13]. The system construction should involve a circular self-improvement structure for program outcome assessment. Furthermore, a circular system requires organization, operation regulation, administration, and communication [1].
The significance of CEA from the student’s perspective was that assessment involved an absolute grade evaluating whether or not they reached a criterion, rather than a relative grade according to ranking [20]. Students can become aware of their strengths and weaknesses in the process of assessment, expect improvement, and be motivated to put in effort [21]. CEA leads students to discover potential competencies and to connect learning experiences in new contexts [6].
This study was limited in that it was conducted with a single sample within Korean nursing baccalaureate education. However, the CEA methodology used can serve as an example for assessment in a competency-based curriculum.
This study evaluated program outcomes for students graduating in February 2015 at a Korean baccalaureate nursing school using CEA. The evaluation tools included one to three direct tools and one indirect questionnaire subscale for each program outcome. The tools quantitatively measured relationship, accuracy, usefulness, and diversity using rubrics. Relative importance was rated by educational professionals through a pairwise comparison method, and competency was calculated through an analytic hierarchy process. The results were reflected in the next curriculum, and CEA enhanced the systematic and scientific evaluation of the competency-based curriculum.

Acknowledgments

None.

Notes

Funding
None.
Conflicts of interest
None.

Table 1.
Relative Importance of 12 Program Outcomes Using Pairwise Comparisons
Program outcome   1     2     3     4     5     6     7     8     9     10    11    12
1                 1     2/1   4/1   5/1   6/1   3/1   6/1   4/1   6/1   7/1   8/1   9/1
2                 1/2   1     4/2   5/2   6/2   3/2   6/2   4/2   6/2   7/2   8/2   9/2
3                 1/4   2/4   1     5/4   6/4   3/4   6/4   4/4   6/4   7/4   8/4   9/4
4                 1/5   2/5   4/5   1     6/5   3/5   6/5   4/5   6/5   7/5   8/5   9/5
5                 1/6   2/6   4/6   5/6   1     3/6   6/6   4/6   6/6   7/6   8/6   9/6
6                 1/3   2/3   4/3   5/3   6/3   1     6/3   4/3   6/3   7/3   8/3   9/3
7                 1/6   2/6   4/6   5/6   6/6   3/6   1     4/6   6/6   7/6   8/6   9/6
8                 1/4   2/4   4/4   5/4   6/4   3/4   6/4   1     6/4   7/4   8/4   9/4
9                 1/6   2/6   4/6   5/6   6/6   3/6   6/6   4/6   1     7/6   8/6   9/6
10                1/7   2/7   4/7   5/7   6/7   3/7   6/7   4/7   6/7   1     8/7   9/7
11                1/8   2/8   4/8   5/8   6/8   3/8   6/8   4/8   6/8   7/8   1     9/8
12                1/9   2/9   4/9   5/9   6/9   3/9   6/9   4/9   6/9   7/9   8/9   1

Each entry is the importance of the row program outcome relative to the column program outcome.
Table 2.
Direct and Indirect Assessment Tools for Program Outcomes
Program outcome 1 (care integration)
 Direct: Simulation (4-1); Case study (4-2)
 Indirect: Care integration
Program outcome 2 (core practice)
 Direct: Fundamental nursing 1 (1-2); Fundamental nursing 2 (2-1); Simulation practicum (4-1)
 Indirect: Core practice
Program outcome 3 (communication)
 Direct: Communication (1-2); Psychiatric nursing practicum (4-2)
 Indirect: Communication skills
Program outcome 4 (cooperation)
 Direct: Human relationships (1-1); Nursing administration role play (4-2)
 Indirect: Explanation of cooperation
Program outcome 5 (coordination)
 Direct: Nursing administration job analysis (4-2); Health program participation (4-2)
 Indirect: Coordination of roles
Program outcome 6 (critical thinking)
 Direct: Graduate accreditation (4-2); Nursing process (4-2)
 Indirect: Critical thinking
Program outcome 7 (professional standard)
 Direct: Nursing administration standards (4-2); Special lecture participation (4-2)
 Indirect: Professional standards
Program outcome 8 (ethics)
 Direct: Ethics case report (1-1); Medical law (4-2)
 Indirect: Legal/ethical understanding
Program outcome 9 (leadership analysis)
 Direct: Nursing leadership report (3-2); Nursing administration (4-2)
 Indirect: Leadership analysis
Program outcome 10 (exercise leadership)
 Direct: Autobiography writing (2-1); Nursing leadership CPX (3-2)
 Indirect: Leadership exercise
Program outcome 11 (research)
 Direct: Nursing research report (3-1); Nursing statistics research presentation (3-2)
 Indirect: Research practice
Program outcome 12 (global perception)
 Direct: Nursing politics report (3-2); Community nursing political forum participation (4-1); International nursing TOEIC (4-2)
 Indirect: Global perception

Values in parentheses represent grade-semester.

CPX: Clinical performance examination, TOEIC: Test of English for International Communication.

Table 3.
Rank and Achievement of Program Outcome through Indirect and Direct Assessment (n=28)
Program outcome        Weight (eigenvector)  Rank  Indirect cutoff  Indirect mean±SD (exit profile)  Direct mean±SD  Weighted mean±SD
1. Care integration 0.292 1 23.5 26.4±2.8 87.6±7.0 25.6±2.1
2. Core practice 0.146 2 10.5 16.5±2.1 84.1±6.5 12.3±1.0
3. Communication 0.073 5 11.5 28.9±4.3 94.3±2.6 6.9±0.2
4. Cooperation 0.058 6 23.5 14.2±2.1 77.1±23.4 4.5±1.4
5. Coordination 0.049 7 14.5 32.5±4.9 88.6±13.5 4.3±0.7
6. Critical thinking 0.096 3 11.5 16.6±2.1 96.1±8.1 4.7±0.8
7. Professional standard 0.049 7 24.5 14.5±1.9 79.5±6.7 6.1±0.3
8. Ethic 0.077 4 12.5 31.4±5.0 79.5±10.3 6.1±0.8
9. Leadership analysis 0.049 7 21.5 27.5±3.9 82.7±10.4 4.1±0.5
10. Exercise leadership 0.042 8 22.5 26.8±3.3 92.2±11.3 3.9±0.5
11. Research 0.037 9 17.5 23.4±3.3 94.8±2.8 1.6±1.8

SD: Standard deviation.

References

1. Kim CS. A study of strategic approach for course and program outcomes assessment. J Eng Educ Res 2007;10:73-86.
2. Gerretson H, Golson E. Introducing and evaluating course-embedded assessment in general education. Assess Update 2004;16:4-6.

3. Taleff J, Salstrom J, Newton ER. Pioneering a universal curriculum: a look at six disciplines involved in women’s health care. J Midwifery Womens Health 2009;54:306-313.
4. Korean Accreditation Board of Nursing Education. Nurses’ core competency and program outcome. Seoul, Korea: Korean Nurses Association; 2012.

5. Lee HW, Kim SH, Park K, Kim JY. A study of the assessment of program outcomes based on capstone design course. J Eng Educ Res 2010;13:143-151.
6. DeCoux Hampton M. Constructivism applied to psychiatric-mental health nursing: an alternative to supplement traditional clinical education. Int J Ment Health Nurs 2012;21:60-68.
7. Walvoord BE, Anderson V. An assessment riddle. Assess Update 1995;7:8-11.
8. Gerretson H, Golson E. Synopsis of the use of course-embedded assessment in a medium sized public university’s general education program. J Gen Educ 2005;54:139-149.
9. Han J. The review on adaptation of course-embedded assessment for program outcome assessment in engineering education. J Eng Educ Res 2009;12:96-106.

10. Barboza GA, Pesek J. Linking course-embedded assessment measures and performance on the educational testing service major field test in business. J Educ Bus 2012;87:102-111.
11. LaFleur EK, Babin LA, Lopez TB. Assurance of learning for principles of marketing students: a longitudinal study of a course-embedded direct assessment. J Mark Educ 2009;31:131-141.
12. Galle JK, Galle J. Building an integrated student learning outcomes assessment for general education: three case studies. New Dir Teach Learn 2010;2010:79-87.
13. King J. Beyond the grade: developing opportunities for course-embedded assessment. Assess Update 2011;23:9-10.

14. Kim JG. A development on the course embedded assessment model for program outcome assessment in engineering education. J Comput Commun 2014;10:67-79.

15. Kim HK. Development of program outcome self- assessment tool in Korean nursing baccalaureate education. J Korean Acad Soc Nurs Educ 2015;21:215-226.
16. Saaty TL. Relative measurement and its generalization in decision making why pairwise comparisons are central in mathematics for the measurement of intangible factors: the analytic hierarchy/network process. Rev R Acad Cien Serie A Mat 2008;102:251-318.
17. Price BA, Randall CH. Assessing learning outcomes in quantitative courses: using embedded questions for direct assessment. J Educ Bus 2008;83:288-294.
18. Ulfvarson J, Oxelmark L. Developing an assessment tool for intended learning outcomes in clinical practice for nursing students. Nurse Educ Today 2012;32:703-708.
19. Seo MW, Chi E, Hwang CI, Ju U. Developing and validating the measurement instrument of higher education learning outcomes. J Educ Eval 2013;26:275-296.

20. Kim S. The concept and necessity of learning outcome. Korean J Med Educ 2012;24:89-92.
21. Kerby D, Romine J. Develop oral presentation skills through accounting curriculum design and course-embedded assessment. J Educ Bus 2009;85:172-179.