This project used a new clinical judgment measurement model to create an opportunity to evaluate nursing students’ ability to identify and prioritize clinical problems for patients in a complex, high-fidelity simulation. An iterative process is being employed to refine the evaluation instrument.
Declines in clinical performance and first-time licensure examination pass rates among new graduate nurses have prompted the professional organization that monitors entry-level nurse practice to call attention to the issue and encourage nurse educators to make changes to nursing student education. The National Council of State Boards of Nursing has designed a clinical judgment measurement model to measure clinical judgment and has encouraged educators to use the model as a framework for creating and evaluating new teaching methods. Focusing on the model’s elements of analyzing cues and prioritizing hypotheses, a clinical problem list was constructed for nursing students to complete during a complex, high-fidelity simulation. Additionally, the effect of a five-minute time-out during the simulation, in which students focus on their problem lists, was tested. Work on the problem lists continues through an iterative process.
Focusing on Prelicensure Nursing Students’ Ability to Prioritize Clinical Problems
The goal of nursing education is to help students develop into compassionate and clinically sound nurses who are able to apply clinical judgment to manage complex patient situations. Unfortunately, the assessed clinical performance of new graduate nurses has declined over the past decade (Kavanagh & Sharpnack, 2021), an alarming finding considering the continued increase in the complexity of patient illness and nursing care. Of concern, fewer than 25% of newly graduated nurses perform at an acceptably safe level of practice, and almost 50% have difficulty recognizing the urgency of, or a change in, a patient’s condition (del Bueno, 2005; Kavanagh & Szweda, 2017). More recently, first-time pass rates for the National Council Licensure Examination (NCLEX) have exhibited a progressive decline, from 91.2% in 2019 to 82.3% in 2022 (NCSBN, 2022). Rates were significantly impacted by the COVID-19 pandemic, which disrupted both nursing education and clinical practice.
In Spring 2023, the National Council of State Boards of Nursing (NCSBN), the organization that manages and administers the licensure exam, will start incorporating new, more-complex test items to better measure new graduate nurses’ ability to apply clinical judgment. The latest version of NCLEX is referred to as Next Generation NCLEX, or NGN, and is based on NCSBN’s Clinical Judgment Measurement Model (CJMM), which was first published in 2019 to introduce educators to the framework and encourage preparations to modify teaching and testing methods to best support student success (NCSBN, 2019). NCSBN has emphasized the need to make the steps of clinical judgment more explicit to nursing students throughout their learning experience, and educators have started to do that through classroom case study activities (Harden & Prochnow, 2023) and simulation (Jang et al., 2020). The six steps of clinical judgment are the following: recognize cues, analyze cues, prioritize hypotheses, generate solutions, take actions, and evaluate outcomes.
To begin integrating CJMM concepts into the clinical capstone course that prelicensure Bachelor of Science in Nursing students take prior to graduation, I incorporated two specific elements, analyze cues and prioritize hypotheses, into a complex, high-fidelity simulation that regularly runs within the course; these elements fit well into the existing activity. Analyzing cues entails organizing and linking data gathered through physical assessment and chart review to identify areas of concern and develop hypotheses about the patient’s clinical presentation (NCSBN, 2019). Developing hypotheses requires naming the identified clinical problem. Students are prompted to ask themselves, “What do vital signs or laboratory results indicate is going on with the patient?” Prioritizing hypotheses is the ability to evaluate and rank clinical problems by likelihood and urgency: “What is most important? Which clinical problem should be addressed first?”
The Priority Setting (PS) simulation centers on the application of priority setting frameworks commonly used in nursing practice, including the nursing process and ABC (airway-breathing-circulation), in a fast-paced, multi-patient simulated clinical experience. The context of the simulation is that a wildfire is heading toward an acute-care facility, which has received orders to evacuate. Students attend the simulation in groups of 15, are divided into smaller groups of five, and are assigned to three different open-ward rooms that each have three patients in them.
To address the CJMM elements, I collaborated with colleagues to create customized problem lists for each of the nine patients involved in the PS simulation. First, two course faculty members who were familiar with the PS simulation reviewed the patient information provided to students at the start of the simulation and created a list of individualized clinical problems. The same two faculty members then reviewed the PS simulation scripts that contained the unfolding patient scenarios and added problems to the lists that captured newer clinical information. Irrelevant clinical problems were then added to each list as distractors. Once completed, each list contained 20 clinical problems, including high-priority, low-priority, and irrelevant problems.
To validate the problem lists, four other faculty members were recruited to rate the priority of each of the 20 clinical problems for all nine patients on a four-point Likert-type scale ranging from not a priority to high priority. Three of the faculty members each had over 20 years of clinical experience, and one had 11 years. Three had experience caring for adults, two in critical care and one in home health, and the remaining faculty member practiced in women’s health.
The Friedman test, a nonparametric analogue of one-way repeated-measures ANOVA, was run on the faculty responses to calculate mean ranks and identify the three clinical problems faculty rated as highest priority for each patient. After the simulation, student group responses were compared with the faculty responses and were considered correct if they matched the faculty’s choice of three high-priority problems.
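The rating analysis described above can be sketched briefly. The snippet below is a hypothetical illustration, not the project’s actual analysis code: rows are four faculty raters, columns are clinical problems rated on the 1–4 Likert-type scale (1 = not a priority, 4 = high priority), and all ratings are invented for demonstration (a real list would have 20 problems per patient).

```python
# Hypothetical sketch of the faculty-rating analysis. Invented data:
# 4 faculty raters (rows) score 6 clinical problems (columns) on a
# 1-4 Likert-type scale (1 = not a priority, 4 = high priority).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

ratings = np.array([
    [4, 4, 3, 2, 1, 1],
    [4, 3, 4, 1, 2, 1],
    [3, 4, 4, 2, 1, 2],
    [4, 4, 3, 1, 1, 2],
])

# Friedman test: each rater is a block, each problem a repeated measure.
# A small p-value indicates raters systematically distinguish the problems.
stat, p = friedmanchisquare(*ratings.T)

# Rank problems within each rater, then average the ranks across raters.
mean_ranks = np.mean([rankdata(row) for row in ratings], axis=0)

# The three problems with the highest mean ranks form the answer key
# against which student group selections would be scored.
top_three = np.sort(np.argsort(mean_ranks)[-3:]).tolist()
print(top_three)  # -> [0, 1, 2]
```

With these invented ratings, problems 0, 1, and 2 emerge as the high-priority key; a student group’s Time 1 or Time 2 selection for this patient would score one point for each chosen problem appearing in that key.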
At the start of the PS simulation (Time 1), during a prebrief session that included review of common priority setting frameworks, student groups received their room assignments and were provided with written handoff reports for their patients and access to patients’ charts in the simulated electronic medical record (sim-EMR) regularly utilized by the school of nursing. In addition, student groups were given customized clinical problem lists for their patients and instructed to select the top three highest priority problems for each patient based on information available at the moment. The groups were given 15 minutes to work together to complete the lists and formulate a care approach. Time 1 lists were collected at the end of prebrief, and student groups transitioned to the patient care rooms.
For the simulation, each student group was charged with providing care to the patients in their assigned room and deciding which one was most acute and should be evacuated first. Students were told that a medical transport vehicle capable of carrying three patients was en route to our facility to take patients to a facility outside of the fire zone. Patient scenarios unfolded quickly, with changing vital signs and patient needs. Midway through the simulation (Time 2), students were given the same customized clinical problem lists and directed to choose the top three highest-priority problems for each patient based on the newest information they had about the patients. Students were not reminded of the problems they chose at Time 1.
In addition to adding the clinical problem lists to this simulation-based learning experience (SBLE), I implemented a five-minute time-out for students to complete the Time 2 clinical problem lists. This intervention was used with half of the student groups to see whether a quiet time-out would be a useful strategy to help students see the value of pausing amidst the busyness of nursing practice to reassess clinical situations. In past iterations of the SBLE, I had observed students getting caught up in a reactive state, responding to one patient event at a time without pausing to gain perspective. Time-outs are commonly used in surgical settings and are widely recognized as a safeguard for patient safety. Time-outs have also been used to reduce medical errors related to improper medical diagnosis and medication prescribing at the time of hospital discharge (Beardsley et al., 2013; Yale et al., 2022).
For the groups that received the time-out (intervention groups), students were told that they had five minutes to complete the clinical problem lists; during this time, patients became quiet, and providers did not call into the room. In the rooms of the other groups (control groups), patients still called for their nurses and providers called into the room by phone asking questions. Control groups were not limited in the time they had to complete the Time 2 clinical problem lists. Time 2 problem lists were collected as students entered the debriefing room. At each SBLE session, the time-out intervention was either applied to or withheld from all three participating groups, alternating between sessions.
A timed cue was provided to let students know when they needed to make the final decision on which patient to evacuate first and transport that patient to the debriefing room. All students gathered together in the debriefing room with the three chosen patients, one from each group, and were then told that only one patient could be transported from the facility at that time. Student groups were charged with deciding which one of the three patients was most acute and should be evacuated first.
During the Spring 2022 semester, a total of 141 students participated in the PS simulation in 27 groups. The Emory University Institutional Review Board did not require full review of this course-focused project. At Time 1, correct student group selection of high-priority problems ranged from 3 to 8 out of 9 possible (one point per high-priority problem for each of three patients), with a mean of 5.27 and a mode of 5. Thirteen groups received the five-minute time-out intervention at Time 2, and 14 did not. Scores could not be tallied for three intervention groups and 10 control groups at Time 2 because those groups did not complete the problem lists; some turned in blank lists, and some did not turn in all of their lists. For the remaining groups, correct selection of high-priority problems at Time 2 ranged from 2 to 7 for the intervention groups, with a mean of 3.8 and a mode of 4, and from 3 to 8 for the control groups, with a median of 5.5 and no mode.
Intervention groups completed the problem list task more frequently but with less accuracy. Because so few control groups completed the task, it is not possible to make a true determination of control groups’ accuracy at Time 2.
There have been several challenges and limitations for this ongoing project. First, despite repeated attempts, I was only able to recruit four faculty colleagues to complete the problem lists. Second, the patient scenarios are designed at varying degrees of difficulty, which may impact student accuracy with identifying priority problems. Third, the low completion rates at Time 2 for the control groups have affected the project’s outcomes. Fourth, some of the problem lists may have redundancies or nuances that students are unable to discern.
I have run the PS simulation without the problem lists many times, and students have reported that they enjoy it. However, it has lacked an element of evaluation. Using it as a platform for this project has brought a better understanding of student ability. Further development of the problem lists shows promise as a mechanism for assessing students’ clinical reasoning and for identifying specific ways to support their development of clinical judgment. Next steps in this project include continuing to refine the clinical problem lists to reduce redundancies and overlapping terms. In addition, I will build in a review of completed problem lists with students during debriefing so that they can receive feedback and improve their clinical judgment. This step was not initiated previously because validation against faculty responses had not yet been completed.
Finally, I need to consider how best to leverage the value of the time-out intervention. The problem list completion rates suggest the time-out was beneficial, but it is not clear whether students appreciate its benefit without having experienced completing the task without it. The benefits of time-out interventions have been demonstrated in a few different settings, particularly for surgical procedures, and could be expanded to other areas and disciplines.
I would like to acknowledge and express gratitude to my Woodruff Health Educators Academy mentors Melissa “Moose” Alperin and T. J. Murphy; my school of nursing mentor Laura Kimble; and faculty colleagues Joanna Jardina, Michael Garbett, Priscilla Hall, Sandy Rosedale, and Jaime Young for contributing to this project. I have no conflicts of interest to disclose. I received no grant support for this work.
Beardsley, J. R., Schomberg, R. H., Heatherly, S. J., & Williams, B. S. (2013). Implementation of standardized discharge time-out process to reduce prescribing errors at discharge. Hospital Pharmacy, 48(1), 39-47. https://doi.org/10.1310/hpj4801-39
Craven, R. F., Hirnle, C. J., & Henshaw, C. M. (2021). Fundamentals of nursing: Concepts and competencies for practice (9th ed.). Wolters Kluwer.
del Bueno, D. (2005). A crisis in critical thinking. Nursing Education Perspectives, 26(5), 278-282.
Harden, K., & Prochnow, L. (2023). Clinical Judgment Measurement Model helps maximize case-based didactic and clinical learning. Nurse Educator. Advance online publication. https://doi.org/10.1097/NNI.00000000000001380
Jang, A., Kim, S., & Song, M. O. (2020). Development of a Clinical Judgment Measurement Model-based simulation module for ileus: A mixed-methods study. Journal of Nursing Education, 59(7), 382-387. https://doi.org/10.3928/01484834-20200617-05
Kavanagh, J. M., & Sharpnack, P. A. (2021). Crisis in competency: A defining moment in education. The Online Journal of Issues in Nursing, 26(1), Manuscript 2. https://doi.org/10.3912/OJIN.Vol26No01Man02
Kavanagh, J. M., & Szweda, C. (2017). A crisis in competency: The strategic and ethical imperative to assessing new graduate nurses’ clinical reasoning. Nursing Education Perspectives, 38(2), 57-62. https://doi.org/10.1097/01.NEP.0000000000000112
NCSBN. (2019, Winter). Next generation NCLEX news: Clinical judgment measurement model. https://www.ncsbn.org/public-files/NGN_Winter19.pdf
NCSBN. (2022). NCLEX pass rates. https://www.ncsbn.org/exams/exam-statistics-and-publications/nclex-pass-rates.page
Yale, S., Cohen, S., & Bordini, B. J. (2022). Diagnostic time-outs to improve diagnosis. Critical Care Clinics, 38(2), 185-194. https://doi.org/10.1016/j.ccc.2021.11.008