To our knowledge, this is the first study to explore theoretical knowledge specifically concerning the ABCDE approach. Healthcare professionals working with critically ill patients scored on average 80% on a validated multiple-choice test covering the contents of the ABCDE approach. Type of department, profession category and age had a significant influence on the test score. Participants from the NICU and ICU scored lower than their colleagues from the PICU, Emergency Department and Anaesthesiology; residents and medical specialists outperformed nurses and NPs/PAs; and younger participants scored higher than more senior professionals. These findings indicate that the level of knowledge of the ABCDE approach varies between profession categories and departments.
This study is the first to assess theoretical knowledge of the ABCDE approach among different disciplines and professions at a random moment. Previous studies have evaluated knowledge of the primary survey, but only in the context of life support courses and not with regard to the ABCDE approach in particular [30,31,32]. Although comparison with previous research is therefore difficult, all studies seem to support the importance of sufficient knowledge. Multiple studies corroborate that life support courses improve both theoretical knowledge and skills [30,31,32]. However, knowledge and skills deteriorate within 3–6 months without regular rehearsal [31, 33]. Since knowledge is a prerequisite for algorithm adherence, insufficiently acquired or retained knowledge may partially explain incomplete or incorrect application of the ABCDE approach [7, 10].
Although the ABCDE approach is well known and can be used in all patient categories, alternatives to and additions to the ABCDE approach have been developed. An example of an alternative is the CAB (circulation-airway-breathing) approach, in which the circulation is assessed first. This approach is important in cardiopulmonary resuscitation of patients in cardiac arrest; the literature shows that it decreases, for example, the time to commencement of chest compressions [34]. However, the CAB approach is not recommended for the assessment of critically ill patients without cardiac arrest. An example of an addition is the paediatric assessment triangle (PAT) [35]. The PAT is a widely accepted tool for the rapid, initial assessment of a child to establish the severity of illness and to determine the urgency of treatment. It precedes the ABCDE approach and does not replace it.
A study by Linders et al. assessed adherence to the approach during neonatal advanced life support scenarios and, in line with the present study, found lower adherence among nurses compared to residents and specialists [6]. Possible explanations are differences between the in-hospital courses for the different profession categories, or fewer accredited courses for nurses compared to physicians. Lastly, the amount of exposure could play a role, since more exposure facilitates retention of knowledge and skills [33, 36, 37]. Although all profession categories are expected to be familiar with the contents of the ABCDE approach, it is usually the resident who performs the assessment when all profession categories are present.
This study shows that, besides profession category, test scores differed between departments. One could argue that this discrepancy is partly attributable to the distribution of profession categories or to differences in age, but no significant interaction between these factors and department was present. The amount of exposure might play a role, although it is unlikely to contribute considerably, since all departments care for critically ill patients. Another possible explanation is that knowledge of the separate domains of the ABCDE approach might differ depending on the patient population. For example, in Anaesthesiology, airway aspects might be more relevant or more frequently encountered than neurological findings, and the NICU does not treat trauma patients. Although we tried to cover all parts of the ABCDE approach in the assessment tool, such differences in exposure might still result in differences in test scores. Ongoing research focusing on adherence to the ABCDE approach in clinical practice might shed more light on this subject.
The finding that younger participants scored higher on the test than more senior participants was surprising. One would assume that more senior participants generally have more experience in clinical practice, have used the ABCDE approach more frequently and would therefore score higher on the test. The fact that our results show otherwise might be related to differences in the frequency, intensity and type of education, or to younger professionals having a more hands-on role in clinical practice. It is also possible that younger participants were educated in a culture in which the ABCDE approach is more universally acknowledged, but our data did not permit analysis of differences in education. Lastly, a score on a test of theoretical knowledge cannot be directly equated with adherence in clinical practice until proven otherwise.
Setting a cut-off score for passing or failing the test was difficult, since the optimal level of knowledge of the ABCDE approach cannot easily be determined [38,39,40]. It is unknown how the level of knowledge relates to clinical performance. The test was therefore used as a formative assessment tool without a threshold. Nevertheless, based on the variation in test scores and the large standard deviations, knowledge of the ABCDE approach appears suboptimal among various healthcare professionals caring for critically ill patients.
Strengths and limitations
The multidisciplinary approach of this study makes it fairly unique. It gives a general insight into the level of knowledge of the ABCDE approach among healthcare professionals of various profession categories and departments. Furthermore, it is the first study to have specifically assessed theoretical knowledge of the ABCDE approach at a random moment, instead of in the context of life support courses, providing a more realistic view of the situation in clinical practice. The assessment tool was constructed using an evidence-based method for reaching consensus, with representatives from every department, an external expert on the subject, an educationalist and an expert on test development. The assessment tool was tested for feasibility by healthcare professionals of multiple participating professions and departments, and can therefore be considered applicable to every participant. Lastly, the knowledge test was validated against multiple sources of validity evidence, including test-item statistics and expert-novice comparison.
Some limitations arose while conducting this study. First, this was a single-centre study. Although the results might not be completely generalizable, we think they can be applied to healthcare professionals working in the same departments of similar hospitals in countries with a comparable healthcare system. Furthermore, the knowledge test that was developed, validated and used for this study can be used by researchers, educationalists and others with an interest in this topic. Second, the expert panel consisted of medical specialists only. Since the expert panel developed the assessment tool, the other profession categories might theoretically be disadvantaged. However, some members of the expert panel are instructors of courses for nurses, and the MCQ was tested by participants of all profession categories, including nurses. Third, the questionnaire could be filled out at any time, without supervision, since a controlled setting could not be created for logistical reasons. Although participants were asked to refrain from consulting information sources, and it was emphasized that test scores were processed anonymously and without individual consequences, this theoretically gave participants the opportunity to study, look up answers, or ask colleagues for help. It is unknown to what extent this might have affected the results. We hypothesized that any such effect would increase test scores, supporting the conclusion that test scores can be improved. Fourth, participants had attended different types of accredited and unaccredited life support courses, making it infeasible to differentiate between individual types of training.
Lastly, the overall response rate of 25.5% can be considered a limitation. A meta-analysis estimated the average survey response rate among healthcare professionals at 53%, although response rates below 30% are not uncommon [41,42,43,44]. In this study, the distribution of profession categories and departments among the participants was comparable to that of all approached healthcare providers. The results therefore seem to be an adequate reflection of reality, although the response rate could affect the generalizability of the results. Possible explanations for the lower response rate are the length of the questionnaire, lack of interest in the subject matter, insufficient time, or the fact that it was a ‘test’. Although the number of non-responders is only weakly related to the probability of nonresponse bias, such bias cannot be excluded [45]. However, if primarily the most motivated healthcare professionals participated, the average test score across all approached professionals would likely have been even lower, indicating an even greater need for education.