Using Automated Essay Scoring to Assess Higher-Level Thinking Skills in Nursing Education
Abstract
Automated essay scoring (AES) is a developing technology increasingly recognized as a potential strategy for managing the challenges of administering and scoring written assessments. Several studies have shown the importance of open-ended writing assessments in facilitating higher-level thinking, including making connections and critical thinking. Health sciences education fields increasingly recognize the value of essay-type examinations for assessing learner performance, particularly in critical thinking, clinical reasoning, and clinical judgement. These three abilities are recognized as critical aspects of clinical practice affecting patient safety, yet they are difficult to assess accurately using examinations composed only of selected-response items (such as multiple-choice questions). The complexity of assessing these abilities in patient situations supports the inclusion of constructed-response items (such as short-answer essay questions) in assessments. However, essay-type examinations pose several challenges, including the time and cost of scoring, consistency in scoring, marker fatigue, timely feedback, and the impact of subjectivity. The following research uses AES to score a constructed-response item assessing critical thinking, clinical reasoning, and clinical judgement in nursing students. The focus of this study is limited to scoring written assessments, and its primary purpose is to evaluate the effectiveness of using AES to score constructed-response items that assess higher-level thinking skills in nursing education.
