The present study analyzed the scores of contestants in a hybrid CPR skill competition comprising online and offline components. We found no statistically significant difference between online and offline evaluations of CPR quality as measured by the sensors and software of a digital simulator. Specialist scoring, however, differed significantly between the two modes, with offline contestants tending to receive higher scores. We also found that the simulator scores had a higher standard deviation, which may reflect contestants' condition on the day of the competition as well as the simulator's finer scoring resolution; this further suggests that simulator scoring may be more objective than specialist scoring.
The maneuvers most critical to a patient's prognosis in CPR are compression depth and rate, both of which are quantifiable indicators [5]. An instrumented direct-feedback device measures compression rate, depth, hand position, recoil, and chest compression fraction and provides real-time audio or visual feedback (or both) on these critical CPR skills [22]. Instructor-guided chest compression training based on real-time feedback software (Laerdal QCPR) is superior to instructor-guided feedback training alone in terms of overall technical skill acquisition [11]. The CPR quality evaluation system is therefore effective for simulator scoring. However, the simulator is highly sensitive: a slight change in the subject's movement may cause the results to fluctuate, so it is not sufficient to rely solely on machines.
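To illustrate how such a feedback device might score compressions objectively, the sketch below checks each compression against commonly cited guideline targets (depth of roughly 5–6 cm, rate of 100–120 per minute, full chest recoil). The thresholds, function names, and data layout here are illustrative assumptions, not the implementation of any real device:

```python
# Illustrative thresholds based on commonly cited guideline targets;
# a real device's internal criteria may differ.
GUIDELINE = {"depth_cm": (5.0, 6.0), "rate_per_min": (100, 120)}

def evaluate_compression(depth_cm, rate_per_min, full_recoil):
    """Return a list of quality flags for one compression (empty = OK)."""
    flags = []
    lo, hi = GUIDELINE["depth_cm"]
    if not (lo <= depth_cm <= hi):
        flags.append("depth")
    lo, hi = GUIDELINE["rate_per_min"]
    if not (lo <= rate_per_min <= hi):
        flags.append("rate")
    if not full_recoil:
        flags.append("recoil")
    return flags

def quality_fraction(compressions):
    """Fraction of compressions with no quality flags (0.0-1.0)."""
    ok = sum(1 for c in compressions if not evaluate_compression(*c))
    return ok / len(compressions)

# Example: four compressions as (depth_cm, rate_per_min, full_recoil)
session = [(5.5, 110, True), (4.0, 110, True),
           (5.5, 130, False), (5.8, 105, True)]
print(quality_fraction(session))  # 0.5 (two of four compressions pass)
```

Because every criterion is a numeric threshold, such a device applies identical standards to every contestant, which is the objectivity the simulator scores reflect.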
In our study, whether practice quality was evaluated online or offline did not affect the objective results obtained from the simulator. This supports evaluating the quality of CPR on an objective basis. The more common specialist scoring approach is also more subjective: even when the average score of five specialists is adopted, differences remain. The subjective scores given by the judges should therefore be considered a supplement to objective measures such as compression depth and rate.
Medical simulations hold great promise for providing training at lower cost and without risk to patients, but these advantages alone are not sufficient to conclude effectiveness [23]. One study showed that simulator assessment can be sufficient on its own but suggested that a combination of assessment tools would be preferable [24]. Analyzing two competitions with the same contestants, this study found offline specialist scores to be significantly higher than online specialist scores. Possible explanations involve both the contestants and the specialists. Contestants perceived the offline finals as more important than the online round, so the onsite competition felt more urgent, their preparation was more extensive, and the competition format was more familiar; their onsite performances were more stable than in the preliminary round, and their results were slightly higher. The specialists applied the same scoring standards in the preliminary and final rounds, and the judges were all senior professors. Nevertheless, although three screens were available for online viewing, the details may not have been clear enough, and the atmosphere of the finals and the neatness of the contestants' attire may have led the specialists to subjectively award extra points.
This is consistent with the conclusion of Camilla Hansen and colleagues, published in 2019, who found that even instructors certified in basic life support (BLS) assess the quality of CPR poorly, with compression depth and rescue breathing being particularly difficult to judge [25]. Machines can ensure quality through quantifiable indices, while specialists can integrate many factors to ensure the stability of the score. We need to combine the two and establish a new quality evaluation system for online training.
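A minimal sketch of such a combined system, under assumed illustrative weights and a hypothetical 0–100 scale, could blend the objective simulator score with the mean of the specialist panel's scores:

```python
from statistics import mean

def composite_score(simulator_score, specialist_scores, w_sim=0.6):
    """Weighted blend of an objective simulator score and the mean of a
    specialist panel's scores. The weight w_sim is an illustrative
    assumption, not a validated parameter from this study."""
    return w_sim * simulator_score + (1 - w_sim) * mean(specialist_scores)

# Example: simulator gives 82.0; five specialists give 88, 90, 85, 87, 90
print(composite_score(82.0, [88, 90, 85, 87, 90]))  # 84.4
```

The weighting would need to be validated empirically; the point of the sketch is only that the objective component anchors the score while the panel average contributes the factors a machine cannot quantify.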
The results of this study suggest that the current CPR quality evaluation system should not depend on specialist scoring alone, and that differences in competition settings significantly affect specialist scores. Human judgments are susceptible to error due to a host of factors, from fatigue to the various biases of judges [26]. Specialist scoring can serve as a supplement to simulator scoring, covering indicators that cannot be quantified and monitored, such as whether the practice posture is standardized or whether the arms are completely perpendicular to the ground during compressions. Because current simulators with evaluation functions are expensive, it is difficult to train enough people in CPR; adopting new approaches and technologies, together with further research, is therefore necessary.
CPR is the most effective rescue method for patients with CA, yet the quality of CPR is low in both in-hospital and out-of-hospital rescue [27]. The penetration rate of CPR training is lower in China than in other countries [28], and as the population ages, the prevalence of cardiovascular disease is increasing. To rapidly increase the penetration rate in the short term and improve the quality of CPR, online training with feedback devices is the most promising path.
The American Heart Association (AHA) statement on resuscitation education science emphasizes the importance of using objective data to improve BLS and advanced life support skills, and the 2015 European CPR Guidelines also recommend the use of feedback devices to improve the quality of CPR [11]. However, feedback devices may be prohibitively expensive and are not available in all settings, and there is currently no other objective method to assess the quality of CPR [25].
Studies have shown that training with assistive devices can improve the quality of CPR [20]. Online training will be an inevitable trend in the post-epidemic era. CPR requires regular retraining to maintain the quality of performance, and online training can facilitate this. The AHA guidelines also suggest that continuous reinforcement of this skill at short intervals can improve the quality of practice [17]. Even doctors who have received training multiple times should retrain regularly to improve the quality of their practice and ensure clinical safety.
This study has several limitations. Because of the influence of COVID-19, it was not possible to organize large-scale activities for comparison, so the sample size was limited. The participants came from different medical institutions and had differing educational backgrounds, which may have influenced comparisons. As this study involved only medical staff, the results apply only to that occupation.