
Grading and assessment of clinical predictive tools for paediatric head injury: a new evidence-based approach

Abstract

Background

Many clinical predictive tools have been developed to diagnose traumatic brain injury among children and guide the use of computed tomography in the emergency department. It is not always feasible to compare tools due to the diversity of their development methodologies, clinical variables, target populations, and predictive performances. The objectives of this study are to grade and assess paediatric head injury predictive tools, using a new evidence-based approach, and to provide emergency clinicians with standardised objective information on predictive tools to support their search for and selection of effective tools.

Methods

Paediatric head injury predictive tools were identified through a focused review of literature. Based on the critical appraisal of published evidence about predictive performance, usability, potential effect, and post-implementation impact, tools were evaluated using a new framework for grading and assessment of predictive tools (GRASP). A comprehensive analysis was conducted to explain why certain tools were more successful.

Results

Fourteen tools were identified and evaluated. The highest-graded tool is PECARN, the only tool evaluated in post-implementation impact studies. PECARN and CHALICE were evaluated for their potential effect on healthcare, while the remaining 12 tools were evaluated only for predictive performance. Three tools (CATCH, NEXUS II, and Palchak) were externally validated, and three (Haydel, Atabaki, and Buchanich) were only internally validated. The remaining six tools (Da Dalt, Greenes, Klemetti, Quayle, Dietrich, and Güzel) did not show sufficient internal validity for use in clinical practice.

Conclusions

The GRASP framework provides clinicians with a high-level, evidence-based, comprehensive, yet simple and feasible approach to grade, compare, and select effective predictive tools. Comparing the three tools assigned the highest grades (PECARN, CHALICE, and CATCH) with the remaining 11, we find that the quality of a tool's development study, the experience and credibility of its authors, and support from well-funded research programs were correlated with the tool's evidence-based grade, and were more influential than high predictive performance alone on the wide acceptance and successful implementation of the tools. A tool's simplicity and feasibility, in terms of resources needed, technical requirements, and training, are also crucial factors for its success.

Background

Clinical decision support (CDS) systems have been shown to enhance evidence-based clinical practice and improve healthcare cost-effectiveness [1,2,3,4,5,6]. Based on Shortliffe's three-level classification, clinical predictive tools, referred to here simply as predictive tools, belong to the highest CDS level: providing patient-specific recommendations based on clinical scenarios, usually following clinical rules and algorithms, cost-benefit analyses, or clinical pathways [7, 8]. These research-based applications quantify the contributions of relevant patient characteristics to derive the likelihood of diseases, predict their courses and possible outcomes, or support decision making on their management [9,10,11]. The emergency department (ED) is among the healthcare areas increasingly utilising predictive tools [11, 12]. Some of these tools have been shown to help EDs overcome common challenges, such as overcrowding, lack of resources, and the variable acuity and diversity of clinical conditions [13, 14]. They also have the potential to help clinicians improve effectiveness by achieving better clinical outcomes, improve efficiency by reducing costs, and improve patient safety by minimising complications and unintended consequences [15,16,17].

Traumatic brain injury (TBI) is one of the most common emergency presentations and the leading cause of death and disability among trauma patients [18, 19]. In 2017, the Centers for Disease Control and Prevention (CDC) estimated annual TBI-related ED visits in the United States (US) at 2.5 million [20]. Approximately one third of these occurred among children aged 0 to 14 years [21]. Over the last 25 years, many predictive tools have been developed to support the diagnosis of TBI among children and guide the use of computed tomography (CT) in the ED [22, 23]. By predicting TBI and identifying children at low risk of clinically important events, these tools are designed to decrease CT over-utilisation, save time and money, and minimise children's exposure to harmful ionising radiation, without compromising their safety or missing clinically significant events [24,25,26,27,28].

When selecting a predictive tool, whether for implementation in their clinical practice or for recommendation in clinical practice guidelines, clinicians involved in the decision making face an overwhelming and ever-growing number of tools. Many of these tools have never been implemented or assessed for comparative effectiveness or post-implementation impact [29,30,31]. Currently, clinicians rely on previous experience, subjective evaluation, or recent exposure to predictive tools when making selection decisions; objective methods and evidence-based approaches are rarely used [32, 33]. Some clinicians, especially those developing clinical guidelines, search the literature for the best available published evidence, commonly looking for studies that describe the development, implementation, or evaluation of predictive tools. More specifically, some look for systematic reviews comparing the tools' development processes or predictive performances. However, there are no available methods to objectively and comprehensively summarise and interpret such evidence [34, 35].

While many predictive tools have been developed to help clinicians rule out TBI among children in the ED, only a few have been considered for use in clinical practice [22,23,24]. We therefore need to understand what makes certain tools more widely accepted and successfully implemented than others. This will help clinicians who develop national and institutional guidelines make better decisions when selecting and incorporating effective predictive tools into their clinical guidelines, and will help expert clinicians develop better predictive tools in the future. In addition to predictive performance measures, such as sensitivity and specificity, many other quantitative and qualitative measures can be considered in the analysis. The country and year of a tool's development could influence its acceptance and success. The number of citations, and of studies reporting a tool's validation, evaluation, or implementation, could indicate attention and acceptance. Furthermore, the quality of a tool's development study and the effort invested in its development, reflected in the sample size of patients or records used and in the number and experience of the authors, could support the tool's wide acceptance and successful implementation.

The primary objective of this study is to grade and assess paediatric head injury predictive tools using a new evidence-based framework for grading and assessment of predictive tools (The GRASP Framework). The secondary objective is to provide emergency clinicians with standardised objective information on clinical predictive tools to support their search for and selection of effective tools.

Methods

Our study comprises three parts. The first identifies paediatric head injury predictive tools proposed in the literature, together with their related published evidence. The second grades these predictive tools using our new evidence-based approach and the eligible published evidence. The third conducts a comprehensive and objective analysis to answer the research question.

Identifying predictive tools

We conducted a focused review of the literature on paediatric head injury predictive tools. The concepts used in the literature search included “paediatrics”, “head”, “injury”, “clinical prediction”, “tools”, “rules”, “models”, “development”, “validation”, “implementation”, and “evaluation”. The search covered studies published in English, with no restriction on time frame, using MEDLINE, EMBASE, CINAHL, and Google Scholar. The default time range of each database was used, covering available publications since 1879, 1950, 1947, and 1937 respectively, up to January 2019. The search followed five steps. 1) Systematic reviews on paediatric head injury predictive tools were identified and retrieved. 2) By examining the systematic reviews, the primary studies describing the development of the tools were identified and retrieved. 3) All secondary studies that cited the primary studies, or that referred to the tools' names or any of their authors anywhere in the text, were retrieved. 4) All tertiary studies that cited the secondary studies, or that were used as references by the secondary studies, were retrieved. 5) Secondary and tertiary studies were examined to exclude non-relevant studies and those not reporting the validation, implementation, or evaluation of the tools. Additional file 1: Figure S2 shows the process of searching the literature for the paediatric head injury predictive tools and their related published evidence.
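
The exact query strings used in each database were not reported. As a minimal illustration only, assuming the concepts above are expanded with hypothetical synonym groups and joined with Boolean operators, a search string might be composed as follows:

```python
# Illustrative sketch only: the study does not report its exact query
# strings, and the synonym groups below are hypothetical.
concept_groups = [
    ["paediatric", "pediatric", "child"],
    ["head injury", "traumatic brain injury"],
    ["clinical prediction tool", "clinical prediction rule", "prediction model"],
    ["development", "validation", "implementation", "evaluation"],
]

# Combine: OR within each concept group, AND across groups.
query = " AND ".join(
    "(" + " OR ".join(f'"{term}"' for term in group) + ")"
    for group in concept_groups
)
print(query)
```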

Grading predictive tools

Each paediatric head injury predictive tool was evaluated using our newly developed framework for grading and assessment of predictive tools (abbreviated as GRASP) [36]. Eligible studies were examined in detail for the reported evaluations of the predictive tools. Based on the critical appraisal of the published evidence on predictive tools, the GRASP framework uses three dimensions to grade predictive tools: 1) Phase of Evaluation, 2) Level of Evidence and 3) Direction of Evidence.

Phase of evaluation

This dimension assigns A, B, or C based on the highest phase of evaluation a tool has reached. If a tool's predictive performance, as reported in the literature, has been tested for validity, it is assigned phase C. If a tool's usability and/or potential effect have been tested, it is assigned phase B. Finally, if a tool has been implemented in clinical practice, and there is published evidence evaluating its post-implementation impact, it is assigned phase A.

Level of evidence

A numerical score, within each phase, is assigned based on the level of evidence associated with each tool. A tool is assigned grade C1 if it has been tested for external validity multiple times, grade C2 if it has been tested for external validity only once, and grade C3 if it has been tested only for internal validity. Grade C0 means that the tool did not show sufficient internal validity to be used in clinical practice. Grade B1 is assigned to a predictive tool that has been evaluated, during planning for implementation, for both its usability and its potential effect on clinical effectiveness, patient safety, or healthcare efficiency. Grade B2 is assigned to a tool evaluated only for its potential effect, and grade B3 to a tool studied only for its usability. Finally, if a predictive tool has been implemented and then evaluated for its post-implementation impact on clinical effectiveness, patient safety, or healthcare efficiency, it is assigned grade A1 if there is at least one experimental study of good quality evaluating its post-implementation impact, grade A2 if there are observational studies evaluating its impact, and grade A3 if the post-implementation impact has been evaluated only through subjective studies, such as expert panel reports.
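
To make the hierarchy concrete, the following is a minimal sketch of how the level of evidence within each phase could be encoded; the function and the structure of the evidence summary are our own illustration, not part of any published GRASP tooling:

```python
def level_of_evidence(phase: str, evidence: dict) -> str:
    """Map a tool's evidence summary to a GRASP level within a phase.

    `evidence` is a hypothetical summary of the published studies, e.g.
    {"internal_validity": True, "external_validations": 2,
     "potential_effect": True, "usability": False,
     "experimental_impact": False, "observational_impact": True}.
    """
    if phase == "C":  # predictive performance (validity)
        if not evidence.get("internal_validity", False):
            return "C0"  # insufficient internal validity
        n_ext = evidence.get("external_validations", 0)
        return "C1" if n_ext > 1 else "C2" if n_ext == 1 else "C3"
    if phase == "B":  # planning for implementation
        effect = evidence.get("potential_effect", False)
        usability = evidence.get("usability", False)
        if effect and usability:
            return "B1"
        return "B2" if effect else "B3"
    if phase == "A":  # post-implementation impact
        if evidence.get("experimental_impact", False):
            return "A1"
        if evidence.get("observational_impact", False):
            return "A2"
        return "A3"  # subjective studies only, e.g. expert panel reports
    raise ValueError(f"unknown phase: {phase}")
```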

Direction of evidence

For each phase and level of evidence, a direction of evidence is assigned based on the collective conclusions reported in the studies. The evidence is considered positive if all studies about a predictive tool reported positive conclusions, and negative if all studies reported negative or equivocal conclusions. The evidence is considered mixed if some studies reported positive and others reported negative or equivocal conclusions. To decide an overall direction, a protocol is used to sort the mixed evidence into 1) mixed evidence that supports an overall positive conclusion or 2) mixed evidence that supports an overall negative conclusion. This protocol is based on two main criteria: 1) the degree of matching between the evaluation study conditions and the original tool specifications, and 2) the quality of the evaluation study. Studies evaluating predictive tools under conditions closely matching the tool specifications and providing high-quality evidence are considered first, and their conclusions weigh most heavily in deciding the overall direction of evidence.
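
A sketch of this mixed-evidence protocol is shown below; treating “considered first” as a simple majority among the closely matching, high-quality studies is our own simplifying assumption:

```python
def direction_of_evidence(studies: list) -> str:
    """Resolve an overall direction from per-study conclusions.

    Each study is a hypothetical record such as:
    {"conclusion": "positive" | "negative" | "equivocal",
     "matches_spec": True, "high_quality": True}
    """
    conclusions = {s["conclusion"] for s in studies}
    if conclusions == {"positive"}:
        return "positive"
    if "positive" not in conclusions:
        return "negative"  # all studies negative or equivocal
    # Mixed evidence: weigh closely matching, high-quality studies first
    # (assumption: resolved here by majority vote within that subset).
    preferred = [s for s in studies if s["matches_spec"] and s["high_quality"]]
    pool = preferred or studies
    positives = sum(s["conclusion"] == "positive" for s in pool)
    return "mixed-positive" if positives * 2 > len(pool) else "mixed-negative"
```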

The final grade assigned to a tool is based on the highest phase of evaluation, supported by the highest level of positive evidence, or mixed evidence that supports a positive conclusion. The GRASP framework concept is shown in Fig. 1 and the GRASP framework detailed report is presented in Additional file 1: Table S3.
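
Combining the three dimensions, the following sketch selects the final grade under the same assumptions: the tool receives the grade of the highest evaluation phase (A over B over C) whose evidence is positive, or mixed but supporting a positive conclusion.

```python
def grasp_grade(evidence_by_phase: dict) -> str:
    """Final GRASP grade: highest phase backed by positive or
    mixed-positive evidence, using the sketches above.

    `evidence_by_phase` is a hypothetical mapping such as
    {"A": {"studies": [...], "summary": {...}}, "C": {...}}.
    """
    for phase in ("A", "B", "C"):  # highest phase first
        entry = evidence_by_phase.get(phase)
        if not entry:
            continue
        direction = direction_of_evidence(entry["studies"])
        if direction in ("positive", "mixed-positive"):
            return level_of_evidence(phase, entry["summary"])
    return "C0"  # no phase with supportive evidence
```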

Fig. 1 The GRASP Framework Concept [36]

Results

Identifying predictive tools

We identified five systematic reviews [22,23,24, 27, 28] and two literature reviews [37, 38] discussing paediatric head injury predictive tools. Through these seven reviews, we identified 16 studies describing the development and internal validation of 14 distinct predictive tools [39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54]. After development and internal validation, the PECARN rule (Paediatric Emergency Care Applied Research Network) [49] was evaluated in 23 studies [55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77]. The CHALICE rule (Children’s Head injury ALgorithm for the prediction of Important Clinical Events) [43] was evaluated in 13 studies [24, 48, 58,59,60,61,62, 66, 69, 72, 77,78,79,80]. The CATCH rule (Canadian Assessment of Tomography for Childhood Head injury) [51] was evaluated in 11 studies [48, 58,59,60,61, 63, 66, 72, 81,82,83]. The NEXUS II rule (National Emergency X-Radiography Utilization Study) [50, 54] was evaluated in four studies [48, 84,85,86]. The Palchak rule [52] was evaluated in two studies [48, 87]. None of the remaining nine rules (Haydel [47], Atabaki [39], Buchanich [40], Da Dalt [41], Greenes [44, 45], Klemetti [48], Quayle [53], Dietrich [42], and Güzel [46]) was evaluated in published studies after its initial development.

Grading predictive tools

Using the GRASP framework and the eligible evidence, we assigned grades to the 14 paediatric head injury predictive tools. The PECARN rule was developed by Dr. Nathan Kuppermann in the US in 2009 and was tested successfully for internal validity [49]. The rule was tested multiple times for external validity and proved externally valid in all the reported studies [56, 58,59,60,61, 63, 66, 67, 70,71,72,73,74, 76, 77]. This qualifies the PECARN rule for grade C1. Four economic analysis studies discussed the positive potential effects of using the PECARN rule on lowering healthcare costs, decreasing the frequency of CT scanning, and minimising children's exposure to harmful ionising radiation [62, 68, 69, 75]. This qualifies the PECARN rule for grade B2. Three observational post-implementation impact studies were conducted. One study concluded that the PECARN intermediate-risk predictors did not play a major role in physicians' decisions to perform a CT scan [65]. However, the other two studies concluded that implementing and using the PECARN rule was associated with a statistically significant decrease in CT utilisation without safety or effectiveness issues [57, 64]. Using the protocol, the mixed evidence here supports a positive conclusion on the post-implementation impact of the PECARN rule. Accordingly, the final grade assigned to the PECARN rule is A2.

The CHALICE rule was developed by Dr. Joel Dunning in the United Kingdom in 2006 and was tested successfully for internal validity [43]. The rule was tested multiple times for external validity and proved externally valid in all the reported studies [48, 58,59,60,61, 66, 72, 77]. This qualifies the CHALICE rule for grade C1. Six cost-effectiveness studies discussed the potential effects of implementing the rule: whether it would increase or decrease the number and costs of CT scans, and its potential effect on children's exposure to radiation. Two of the six studies, in 2010, reported that implementing the CHALICE rule would increase the number of CT scans performed and increase children's exposure to harmful ionising radiation [79, 80]. However, four subsequent studies, in 2011, 2013, 2015, and 2016, reported that implementing the CHALICE rule would be a cost-effective strategy to safely reduce unnecessary head CT scans [24, 62, 69, 78]. Using the protocol, the mixed evidence here supports a positive conclusion on the cost-effectiveness and potential effects of implementing the CHALICE rule. The rule was not evaluated for usability or post-implementation impact. Accordingly, the final grade assigned to the CHALICE rule is B2.

The CATCH rule was developed by Dr. Martin Osmond in Canada in 2010 and was tested successfully for internal validity [51]. The rule was tested multiple times for external validity and proved externally valid in all the reported studies [48, 58,59,60,61, 63, 66, 72, 81]. The rule was not evaluated for usability, potential effect, or post-implementation impact. Accordingly, the final grade assigned to the CATCH rule is C1.

The NEXUS II rule was developed by Dr. William Mower in the US in 2005, primarily for the diagnosis of adult head injury [88, 89]. The rule was later validated for paediatric patients by Dr. Jennifer Oman in the US in 2006 [50]. The tool was then tested multiple times for external validity. One study used a modified version of the rule, which did not show external validity, and therefore failed to properly evaluate the original rule [54]. Two studies found the rule externally valid for children younger than 14 and 16 years respectively [48, 85], and one study found it externally valid for children over 10 years [86]. Using the protocol, the mixed evidence here supports a positive conclusion on external validity. The rule was not evaluated for usability, potential effect, or post-implementation impact. Accordingly, the final grade assigned to the NEXUS II rule is C1.

The Palchak rule was developed by Dr. Michael Palchak and Dr. Nathan Kuppermann in the US in 2003 and was tested successfully for internal validity [52]. A 2009 study by the same authors compared the rule with clinicians' judgement using the same dataset used for the rule's development, so it is still considered an internal validation [87]. One external validation study reported that the predictive performance of the Palchak rule was acceptable [48]. The rule was not evaluated for usability, potential effect, or post-implementation impact. Accordingly, the final grade assigned to the Palchak rule is C2.

The Haydel rule was developed by Dr. Micelle Haydel in the US in 2003 [47], the Atabaki rule by Dr. Shireen Atabaki in the US in 2008 [39], and the Buchanich rule by Dr. Jeanine Buchanich in the US in 2007 [40]. All three rules were tested successfully for internal validity. However, they were not tested for external validity, nor were they evaluated for usability, potential effect, or post-implementation impact. Accordingly, the final grade assigned to these three rules is C3.

The Da Dalt rule was developed by Dr. Liviana Da Dalt in Italy in 2006 [41], the Greenes rule by Dr. David Greenes in the US in 2001 [44, 45], and the Klemetti rule by Dr. Sanna Klemetti in Finland in 2009 [48]. The studies conducted by these three researchers followed correct development methods for their proposed tools; however, the internal validation processes of the tools were not clearly reported. Accordingly, the final grade assigned to these three rules is C0.

Dr. Kimberly Quayle in the US in 1997 [53], Dr. Ann Dietrich in the US in 1993 [42], and Dr. Ahmet Güzel in Turkey in 2009 [46] each tried to develop a clinical prediction rule to identify children at low risk for traumatic brain injury after head trauma. Their studies discussed clinical risk factors, symptoms, and signs that could reliably predict abnormalities on cranial CT scans. Even though each used a different mix of common clinical variables, none of the three studies could demonstrate sufficient correlation between the clinical variables, symptoms, and signs of significant TBI and the subsequent CT findings. Therefore, they could not produce predictive rules with sufficient internal validity. Accordingly, the final grade assigned to these three rules is C0. A summary of the results of grading the 14 paediatric head injury predictive tools, using the GRASP framework, is presented in Table 1. The GRASP framework detailed reports for each of the 14 paediatric head injury predictive tools are presented in Additional file 1: Tables S4 to S17.

Table 1 Summary of Grading Paediatric Head Injury Predictive Tools

Findings of the tools’ analysis

The PECARN rule was the only tool evaluated in post-implementation impact studies. The PECARN and CHALICE rules were evaluated for potential effect on healthcare, while the remaining 12 tools were evaluated only for predictive performance. Of these 12, three (the CATCH, NEXUS II, and Palchak rules) were externally validated, three (the Haydel, Atabaki, and Buchanich rules) were only internally validated, and the remaining six (the Da Dalt, Greenes, Klemetti, Quayle, Dietrich, and Güzel rules) did not show sufficient internal validity to be used in clinical practice.

Using statistical analysis, we explored possible correlations between different criteria of the predictive tools and their evidence-based assigned grades. There is no correlation between the country of a tool's development and its assigned grade; for example, the nine tools developed in the US received some of the highest and some of the lowest grades. There is a weak correlation between the year of a tool's development and its assigned grade: more recently developed tools tend to receive slightly higher grades. There is a strong correlation between the number of citations of a tool in the literature and its assigned grade: more highly cited tools tend to receive higher grades. There is a very strong correlation between the number of studies discussing a tool and its assigned grade: tools discussed and reported in more studies receive higher grades.
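
As an illustration of how such correlations can be explored, assuming grades are encoded as ordinal ranks (C0 lowest through A1 highest), a rank correlation such as Spearman's rho can be computed between each criterion and the assigned grades. The sketch below uses placeholder values, not the study's data (which are summarised in Table 2):

```python
from scipy.stats import spearmanr

# Ordinal encoding of GRASP grades, lowest to highest.
GRADE_RANK = {"C0": 0, "C3": 1, "C2": 2, "C1": 3,
              "B3": 4, "B2": 5, "B1": 6,
              "A3": 7, "A2": 8, "A1": 9}

# Placeholder example values only; see Table 2 for the study's data.
grades = ["A2", "B2", "C1", "C1", "C2", "C3", "C0"]
citations = [900, 300, 300, 150, 60, 40, 10]

rho, p = spearmanr([GRADE_RANK[g] for g in grades], citations)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```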

To provide clinicians with a few more objective measures to compare the tools, in addition to the citations and the published studies, we developed three derived values: the citation index, the publication index, and the literature index. The PECARN, CHALICE, and CATCH rules were cited in the literature 885, 309, and 319 times respectively. To make these figures comparable, we calculated the citation index as the average annual citations for each tool, dividing the tool's total citations by its age in years. Similarly, the publication index is the average annual number of studies discussing each tool. We also calculated a literature index by multiplying the total number of citations by the total number of studies on each tool, divided by 1000 for simplification; this figure reflects each tool's overall footprint in the literature. Like the citations and publications, the three indices are strongly correlated with the tools' assigned grades.
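
A worked sketch of the three derived indices follows, using the citation counts reported above and the study counts from the Results; computing each tool's age from its development year to the January 2019 search date is our own assumption about the exact denominators:

```python
# (total citations, evaluation studies, year developed), from the Results.
tools = {
    "PECARN":  (885, 23, 2009),
    "CHALICE": (309, 13, 2006),
    "CATCH":   (319, 11, 2010),
}
SEARCH_YEAR = 2019  # literature search closed January 2019

for name, (citations, studies, year) in tools.items():
    age = SEARCH_YEAR - year
    citation_index = citations / age               # average annual citations
    publication_index = studies / age              # average annual studies
    literature_index = citations * studies / 1000  # scaled literature footprint
    print(f"{name}: citation index {citation_index:.1f}, "
          f"publication index {publication_index:.1f}, "
          f"literature index {literature_index:.1f}")
```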

Looking at more detailed objective measures reported in the development studies of the 14 paediatric head injury predictive tools, we find notable patterns. The predictive tools were developed using two main methodologies: recursive partitioning was used to develop the PECARN, CHALICE, CATCH, NEXUS II, Palchak, Haydel, Atabaki, and Buchanich rules, while multivariate logistic regression analysis was used to develop the Greenes, Da Dalt, Klemetti, Quayle, Dietrich, and Güzel rules. Many clinical variables were used in developing the tools, such as altered mental status, amnesia, focal neurological signs, seizure after injury, presence of skull fractures, loss of consciousness, and history of headache and/or vomiting. The mixes of clinical variables used to build the tools' predictive models and outcome scores were similar but not identical across tools. Moreover, the development studies used different paediatric populations and sample sizes. Consequently, the predictive performances of the tools, such as their sensitivities and specificities, varied. Most of the tools showed high sensitivities, with the majority ranging from 90 to 100%, while their specificities differed widely, ranging from 15 to 87%.
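
For readers unfamiliar with the two methodologies, the sketch below contrasts them on synthetic data; the variables, data, and model settings are invented for illustration and bear no relation to the tools' actual development datasets:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic illustration only: three binary "clinical variables" and an outcome.
X = rng.integers(0, 2, size=(500, 3))  # e.g. altered mental status, vomiting, LOC
y = ((X[:, 0] & X[:, 2]) | (rng.random(500) < 0.05)).astype(int)

# Recursive partitioning: a decision tree yields rule-like branches.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["ams", "vomiting", "loc"]))

# Multivariate logistic regression: coefficients weight each variable.
logit = LogisticRegression().fit(X, y)
print("coefficients:", logit.coef_, "intercept:", logit.intercept_)
```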

There is no correlation between the tools' development methodologies and their predictive performances, although most of the tools developed using recursive partitioning showed relatively higher sensitivities but not necessarily better specificities. There is also no correlation between the tools' development methodologies and their assigned grades; however, the six tools that used multivariate logistic regression analysis were all assigned grade C0, reporting no internal validity, while the eight tools that used recursive partitioning received higher, variable grades. Furthermore, there is no correlation between the predictive performances of the tools and their assigned grades. For example, the Da Dalt rule is assigned grade C0, yet it has the highest sensitivity (100%) and the highest specificity (87%) among all the tools. This can be explained by the fact that the Da Dalt rule was not internally validated, which makes it unqualified for external validation or implementation. While the CHALICE rule, assigned grade B2, has a sensitivity of 98% and a specificity of 86%, the PECARN rule, the highest-graded tool at A2, has a similar sensitivity of 97% but a lower specificity of 59%.

On the other hand, there is a strong correlation between the size of the patient samples used in the tools' development and internal validation studies and their assigned grades. The three main tools had the largest numbers of patients contributing to their development studies: 42,412 patients were enrolled and analysed to develop the PECARN rule, 22,772 for the CHALICE rule, and 3866 for the CATCH rule. The remaining 11 tools were developed using relatively smaller patient samples, ranging from about 3000 down to only around a hundred patients. In addition, there is a strong correlation between the number of researchers developing a tool and its assigned grade. Two of the three main tools were developed by large teams: the PECARN rule by 32 researchers and the CATCH rule by 14. The remaining tools were developed by fewer researchers, ranging from 10 for the Palchak rule to a single researcher for the Buchanich rule.

Moreover, there is a correlation between the impact factor of the journal that published a tool's development study and the tool's assigned grade. The PECARN rule, for example, was published in the Lancet, a highly ranked journal with an impact factor of 53.3. Furthermore, the three main tools (the PECARN, CHALICE, and CATCH rules), in addition to the NEXUS II rule, were all supported by dedicated and well-funded research networks, programs, and professional groups: the Paediatric Emergency Care Applied Research Network for the PECARN rule, the Children's Head Injury Algorithm for the Prediction of Important Clinical Events study group for the CHALICE rule, the Paediatric Emergency Research Canada (PERC) Head Injury Study Group for the CATCH rule, and the National Emergency X-Radiography Utilization Study II for the NEXUS II rule. Tools supported by dedicated research programs tend to receive higher grades. A summary of the tools' information, development study indices, predictive performance, and quality indicators for the 14 paediatric head injury predictive tools is presented in Table 2.

Table 2 Summary of tools’ information, indices, predictive performance and quality

Additional file 1: Figures S3 to S11 show, respectively, the tools' distribution by assigned grade, by country of development, and by year of development; the number of citations of each tool; the number of studies reporting each tool; the size of the patient samples used for development; the number of authors contributing to each tool; the impact factor of the journal publishing each tool; and the percentage of tools developed with and without dedicated support.

Discussion

This study presents a new evidence-based approach to grade and assess predictive tools. Based on the critical appraisal of the published evidence on predictive tools, the GRASP framework uses three dimensions to grade the tools: 1) phase of evaluation (before implementation, during planning for implementation, and after implementation), 2) level of evidence (a numerical score within each phase), and 3) direction of evidence (positive, negative, or mixed). The final grade is based on the highest phase of evaluation, supported by the highest level of positive evidence or mixed evidence that supports a positive conclusion. Among the 14 paediatric head injury predictive tools, the PECARN rule stands out clearly as the only tool evaluated in post-implementation impact studies, which warrants some explanation.

The 14 predictive tools targeted different paediatric age groups. Most focused on children less than 16 years of age; however, some tools, such as Atabaki, extended their coverage to patients under 21 years, while others, such as Buchanich and Greenes, limited their populations to children less than 2 or 3 years. The tools used different development methodologies, and their prediction models used different mixes of clinical variables. Furthermore, the predictive performances of the tools, such as their sensitivities and specificities, differed. However, the predictive performances were not correlated with the assigned grades, indicating that the technical specifications of the predictive tools did not primarily influence their successful validation, acceptance, or implementation. The country and year of a tool's development likewise had no significant influence on its path from validation to implementation. On the other hand, the number of citations of the studies describing the tools' development, and the number of studies reporting them, are clearly correlated with the tools' success. These two indicators are secondary to the main quality indicators of the development studies, such as the patient sample size used in developing a tool and the number of researchers involved.

In addition, the researchers' experience plays an important role in leading better-quality studies. Three of the researchers who developed the PECARN rule had already contributed to older but less successful tools. Before leading the team that developed the PECARN rule in 2009, Dr. Kuppermann contributed to developing the Quayle rule in 1997 and the Palchak rule in 2003. Dr. Quayle and Dr. Atabaki developed their own rules in 1997 and 2008 respectively, before joining the team that developed the PECARN rule in 2009. The researchers' affiliations with highly ranked institutes, and the support of the studies by dedicated and well-funded research networks, programs, and professional groups, added to the tools' credibility among clinicians and organisations. As a result of this better quality and higher credibility, the PECARN rule's development study was published in a top-ranked, high-impact journal, the Lancet. In addition, the three main tools (the PECARN, CHALICE, and CATCH rules) were endorsed by professional organisations and recommended in clinical practice guidelines, such as the paediatric head trauma clinical guidelines developed by the Royal Australian and New Zealand College of Radiologists [90].

Many studies have compared paediatric head injury predictive tools; nine of them compared the three main tools, the PECARN, CHALICE, and CATCH rules. Although most of these studies reported PECARN as the highest-quality tool, they found that all three tools had excellent sensitivities and performed well in assessing the outcome of clinically important TBI, suggesting that all were appropriate for assessing mild head injury in the ED [58, 91]. However, each tool applies to a different proportion of children with head injury, which makes direct comparison of the three tools difficult [72]. The CHALICE rule applies to a broad population of head injuries of any severity, the PECARN rule was developed for minor head injuries only, and the CATCH rule focused on a group of patients with specific signs or symptoms [59]. The PECARN rule is the most validated [37] and has the best sensitivity, while the CHALICE rule has the best specificity [66, 91, 92]. Compared with senior, experienced, and highly accurate emergency physicians, implementing the PECARN, CATCH, or CHALICE rules has the potential to increase CT rates with limited potential to increase the accuracy of detecting clinically important TBI [93]. In addition, the three tools were not more cost-effective than usual care in some ED settings [94]. Although CT is the imaging modality of choice in the ED because of its availability and speed, magnetic resonance imaging is becoming the preferred modality in children; this would change the tools' comparability and priority for recommendation, and further research is required [92].

Some predictive tools in other clinical areas gained widespread acceptance and successful implementation through their simplicity and feasibility. The Ottawa ankle and Ottawa knee rules are good examples: simple, paper-based, five-item checklists designed to exclude the need for an X-ray for possible bone fracture in adult patients in the ED [95, 96]. The resources needed to implement such tools are minimal; no technical requirements, special training, or financial support are needed. Both tools were implemented within two years of their development and demonstrated positive post-implementation impact on the efficiency of ED healthcare services in large-scale, high-quality experimental studies [97,98,99,100].

Accordingly, selecting effective predictive tools remains a major challenge for most clinicians, who usually lack the time and experience required to evaluate such tools by assessing their quality or grading their level of evidence, especially as the number and complexity of tools have increased tremendously in recent years. This is made worse by the complex nature of the evaluation process itself and the variability in the quality of published evidence. Furthermore, it is not always feasible to compare tools, even those designed for the same predictive tasks, due to the diversity of their development methodologies, clinical variables, target populations, conditions of application, and predictive performances. Therefore, we chose not to examine the details of every single validation or implementation study. Instead, the GRASP framework provides users with a higher-level, evidence-based approach to grade predictive tools through the critical appraisal of published evidence on their development and validation before implementation, their usability and potential effect during planning for implementation, and their post-implementation impact on clinical effectiveness, patient safety, and healthcare efficiency. Based on the available evidence, the framework identifies tools that are more trusted by clinicians and researchers and can consequently be more successful. Using the GRASP framework may require some training for the expert healthcare professionals and researchers who grade predictive tools, and some awareness among the end-user clinicians who use GRASP output to select them.

The main limitation of this study is the possibility of missing some predictive tools that have been developed by clinicians but not yet published, because the GRASP framework grades predictive tools based on their published evidence. Similarly, some published predictive tools could have been implemented in clinical practice without any studies reporting their implementation or evaluating their post-implementation impact having been published yet. Furthermore, while this study is in press, or soon after it is published, new evidence on some tools may become available that could influence their assigned grades.

Conclusion

Comparing the three main tools, which were assigned the highest GRASP grades (PECARN, CHALICE, and CATCH), with the remaining 11, we find that three main factors are crucial and indicate better tools. The first is the quality of the predictive tools, indicated by the development methodology, the patient sample size used for development, and the number of contributing authors, and reflected in the number of citations and the number of studies discussing each tool. The second is the experience and credibility of the tools' authors, reflected in their clinical specialties and affiliated organisations. The third is support by dedicated and well-funded research programs. These three factors were more influential than high predictive performance alone on the wide acceptance and successful implementation of the tools. In addition, a tool's simplicity and feasibility, in terms of resources needed, financial support, technical requirements, complexity and number of predictors, and training, are crucial factors for its success. It is important to select the tools that best fit the intended tasks, clinical conditions, healthcare settings, and patient populations. Based on detailed specifications, a group of best predictive tools can be recommended for use in clinical practice. Through evidence-based grading of predictive tools, the GRASP framework confirmed the PECARN rule as the highest-quality tool, compared with the other tools, which have variable levels of supporting evidence. The online availability of the GRASP framework will enable clinicians and clinical guideline developers to access detailed information, reported evidence, and assigned grades of predictive tools. However, keeping such information up to date requires continuous updating of the tools' reports as new evidence becomes available.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Abbreviations

CATCH:

Canadian Assessment of Tomography for Childhood Head injury

CDC:

Centers for Disease Control and Prevention

CDS:

Clinical Decision Support

CHALICE:

Children’s Head injury ALgorithm for the prediction of Important Clinical Events

CINAHL:

Cumulative Index to Nursing and Allied Health Literature

CT:

Computed Tomography

ED:

Emergency Department

EMBASE:

Excerpta Medica Abstract Journals Database

GRASP:

Grading and Assessment of Predictive Tools for Clinical Decision Support

MEDLINE:

Medical Literature Analysis and Retrieval System Online

NEXUS:

National Emergency X-Radiography Utilization Study

PECARN:

Paediatric Emergency Care Applied Research Network

PERC:

Paediatric Emergency Research Canada

TBI:

Traumatic Brain Injury

US:

United States

References

  1. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742–52.

  2. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux P, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293(10):1223–38.

  3. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330(7494):765.

  4. Oman KS. Evidence-based practice: an implementation guide for healthcare organizations. Burlington: Jones & Bartlett Publishers; 2010.

  5. Osheroff JA. Editor improving outcomes with clinical decision support: an implementer’s guide. Chicago: Himss; 2012.

  6. Osheroff JA, Teich JM, Middleton B, Steen EB, Wright A, Detmer DE. A roadmap for national action on clinical decision support. J Am Med Inform Assoc. 2007;14(2):141–5.

  7. Musen MA, Middleton B, Greenes RA. Clinical decision-support systems. Biomed Inform. 2014;1:643–74. Springer.

  8. Shortliffe EH, Cimino JJ. Biomedical informatics: computer applications in health care and biomedicine. Berlin: Springer Science & Business Media; 2013.

  9. Adams ST, Leveson SH. Clinical prediction rules. BMJ. 2012;344:d8312.

  10. Beattie P, Nelson R. Clinical prediction rules: what are they and what do they tell us? Aust J Physiother. 2006;52(3):157–63.

  11. Steyerberg EW. Clinical prediction models: a practical approach to development, validation, and updating. Berlin: Springer Science & Business Media; 2008.

  12. Steyerberg EW, Vergouwe Y. Towards better clinical prediction models: seven steps for development and an ABCD for validation. Eur Heart J. 2014;35(29):1925–31.

  13. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.

  14. Romano MJ, Stafford RS. Electronic health records and clinical decision support systems: impact on national ambulatory care quality. Arch Intern Med. 2011;171(10):897–903.

  15. Bennett P, Hardiker NR. The use of computerized clinical decision support systems in emergency care: a substantive review of the literature. J Am Med Inform Assoc. 2016;24(3):655–68.

  16. Sahota N, Lloyd R, Ramakrishna A, Mackay JA, Prorok JC, Weise-Kelly L, et al. Computerized clinical decision support systems for acute care management: a decision-maker-researcher partnership systematic review of effects on process of care and patient outcomes. Implement Sci. 2011;6(1):91.

  17. Wilk S, Michalowski W, O’Sullivan D, Farion K, Sayyad-Shirabad J, Kuziemsky C, et al. A task-based support architecture for developing point-of-care clinical decision support systems for the emergency department. Methods Inf Med. 2013;52(01):18–32.

  18. Greve MW, Zink BJ. Pathophysiology of traumatic brain injury. Mt Sinai J Med. 2009;76(2):97–104.

  19. Azim A, Joseph B. Traumatic brain injury. Surgical critical care therapy. Berlin: Springer; 2018. p. 1–10.

  20. Taylor CA, Bell JM, Breiding MJ, Xu L. Traumatic brain injury-related emergency department visits, hospitalizations, and deaths-United States, 2007 and 2013. MMWR Surveill Summ (Washington, DC: 2002). 2017;66(9):1–16.

  21. Langlois JA, Rutland-Brown W, Thomas KE. Traumatic brain injury in the United States: emergency department visits, hospitalizations, and deaths; 2006.

  22. Maguire JL, Boutis K, Uleryk EM, Laupacis A, Parkin PC. Should a head-injured child receive a head CT scan? A systematic review of clinical prediction rules. Pediatrics. 2009;124(1):e145–e54.

  23. Maguire JL, Kulik DM, Laupacis A, Kuppermann N, Uleryk EM, Parkin PC. Clinical prediction rules for children: a systematic review. Pediatrics. 2011. https://doi.org/10.1542/peds.2011-0043.

  24. Pandor A, Goodacre S, Harnan S, Holmes M, Pickering A, Fitzgerald P, et al. Diagnostic management strategies for adults and children with minor head injury: a systematic review and an economic evaluation. Health Technol Assess (Winch. Eng.). 2011;15(27):1.

  25. Mueller DL, Hatab M, Al-Senan R, Cohn SM, Corneille MG, Dent DL, et al. Pediatric radiation exposure during the initial evaluation for blunt trauma. J Trauma Acute Care Surg. 2011;70(3):724–31.

  26. Bregstein JS, Lubell TR, Ruscica AM, Roskind CG. Nuking the radiation: minimizing radiation exposure in the evaluation of pediatric blunt trauma. Curr Opin Pediatr. 2014;26(3):272–8.

  27. Pandor A, Harnan S, Goodacre S, Pickering A, Fitzgerald P, Rees A. Diagnostic accuracy of clinical characteristics for identifying CT abnormality after minor brain injury: a systematic review and meta-analysis. J Neurotrauma. 2012;29(5):707–18.

  28. Pickering A, Harnan S, Fitzgerald P, Pandor A, Goodacre S. Clinical decision rules for children with minor head injury: a systematic review. Arch Dis Child. 2011;96(5):414–21.

  29. Ebell MH. Evidence-based diagnosis: a handbook of clinical prediction rules. Berlin: Springer Science & Business Media; 2001.

  30. Kappen T, van Klei W, van Wolfswinkel L, Kalkman C, Vergouwe Y, Moons K. General discussion I: evaluating the impact of the use of prediction models in clinical practice: challenges and recommendations. Prediction models and decision support; 2015. p. 89.

  31. Taljaard M, Tuna M, Bennett C, Perez R, Rosella L, Tu JV, et al. Cardiovascular disease population risk tool (CVDPoRT): predictive algorithm for assessing CVD risk in the community setting. a study protocol. BMJ Open. 2014;4(10):e006701.

  32. Ansari S, Rashidian A. Guidelines for guidelines: are they up to the task? A comparative assessment of clinical practice guideline development handbooks. PLoS One. 2012;7(11):e49864.

  33. Kish MA. Guide to development of practice guidelines. Clin Infect Dis. 2001;32(6):851–4.

  34. Shekelle PG, Woolf SH, Eccles M, Grimshaw J. Developing clinical guidelines. West J Med. 1999;170(6):348.

  35. Tranfield D, Denyer D, Smart P. Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br J Manag. 2003;14(3):207–22.

  36. Khalifa M. Developing an evidence-based framework for grading & assessment of predictive tools for clinical decision support. Presented at: Health Data Analytics 2018 digital health conference; Melbourne; 2018. Available from: https://www.hisa.org.au/slides/hda/18/MohamedKhalifa.pdf

  37. Lyttle MD, Crowe L, Oakley E, Dunning J, Babl FE. Comparing CATCH, CHALICE and PECARN clinical decision rules for paediatric head injuries. Emerg Med J. 2012. https://doi.org/10.1136/emermed-2011-200225.

  38. Sempértegui Cárdenas PX. Validación de una escala de predicción de lesiones intracraneales para trauma cráneo-encefálico en niños de 0 a 5 años del Hospital Vicente Corral Moscoso Enero-Diciembre 2014 [Validation of an intracranial injury prediction scale for head trauma in children aged 0 to 5 years at Hospital Vicente Corral Moscoso, January to December 2014]. Cuenca: Estudio de test diagnóstico; 2016.

  39. Atabaki SM, Stiell IG, Bazarian JJ, Sadow KE, Vu TT, Camarca MA, et al. A clinical decision rule for cranial computed tomography in minor pediatric head trauma. Arch Pediatr Adolesc Med. 2008;162(5):439–45.

  40. Buchanich JM. A clinical decision-making rule for mild head injury in children less than three years old. Pittsburgh: University of Pittsburgh; 2007.

  41. Da Dalt L, Marchi AG, Laudizi L, Crichiutti G, Messi G, Pavanello L, et al. Predictors of intracranial injuries in children after blunt head trauma. Eur J Pediatr. 2006;165(3):142–8.

  42. Dietrich AM, Bowman MJ, Ginn-Pease ME, Kosnik E, King DR. Pediatric head injuries: can clinical factors reliably predict an abnormality on computed tomography? Ann Emerg Med. 1993;22(10):1535–40.

  43. Dunning J, Daly JP, Lomas J, Lecky F, Batchelor J, Mackway-Jones K. Derivation of the children’s head injury algorithm for the prediction of important clinical events decision rule for head injury in children. Arch Dis Child. 2006;91(11):885–91.

  44. Greenes DS, Schutzman SA. Clinical indicators of intracranial injury in head-injured infants. Pediatrics. 1999;104(4):861–7.

  45. Greenes DS, Schutzman SA. Clinical significance of scalp abnormalities in asymptomatic head-injured infants. Pediatr Emerg Care. 2001;17(2):88–92.

  46. Güzel A, Hiçdönmez T, Temizöz O, Aksu B, Aylanç H, Karasalihoglu S. Indications for brain computed tomography and hospital admission in pediatric patients with minor head injury: how much can we rely upon clinical findings? Pediatr Neurosurg. 2009;45(4):262–70.

  47. Haydel MJ, Shembekar AD. Prediction of intracranial injury in children aged five years and older with loss of consciousness after minor head injury due to nontrivial mechanisms. Ann Emerg Med. 2003;42(4):507–14.

  48. Klemetti S, Uhari M, Pokka T, Rantala H. Evaluation of decision rules for identifying serious consequences of traumatic head injuries in pediatric patients. Pediatr Emerg Care. 2009;25(12):811–5.

  49. Kuppermann N, Holmes JF, Dayan PS, Hoyle JD, Atabaki SM, Holubkov R, et al. Identification of children at very low risk of clinically-important brain injuries after head trauma: a prospective cohort study. Lancet. 2009;374(9696):1160–70.

  50. Oman JA, Cooper RJ, Holmes JF, Viccellio P, Nyce A, Ross SE, et al. Performance of a decision rule to predict need for computed tomography among children with blunt head trauma. Pediatrics. 2006;117(2):e238–e46.

  51. Osmond MH, Klassen TP, Wells GA, Correll R, Jarvis A, Joubert G, et al. CATCH: a clinical decision rule for the use of computed tomography in children with minor head injury. Can Med Assoc J. 2010;182(4):341–8.

  52. Palchak MJ, Holmes JF, Vance CW, Gelber RE, Schauer BA, Harrison MJ, et al. A decision rule for identifying children at low risk for brain injuries after blunt head trauma. Ann Emerg Med. 2003;42(4):492–506.

  53. Quayle KS, Jaffe DM, Kuppermann N, Kaufman BA, Lee BC, Park T, et al. Diagnostic testing for acute head injury in children: when are head computed tomography and skull radiographs indicated? Pediatrics. 1997;99(5):e11.

  54. Sun BC, Hoffman JR, Mower WR. Evaluation of a modified prediction instrument to identify significant pediatric intracranial injury after blunt head trauma. Ann Emerg Med. 2007;49(3):325–32. e1.

  55. Ahmadi S, Yousefifard M. Accuracy of pediatric emergency care applied research network rules in prediction of clinically important head injuries; a systematic review and meta-analysis. Int J Pediatr. 2017;5(12):6285–300.

  56. Atabaki SM, Hoyle JD Jr, Schunk JE, Monroe DJ, Alpern ER, Quayle KS, et al. Comparison of prediction rules and clinician suspicion for identifying children with clinically important brain injuries after blunt head trauma. Acad Emerg Med. 2016;23(5):566–75.

  57. Atabaki SM, Jacobs BR, Brown KM, Shahzeidi S, Heard-Garris NJ, Chamberlain MB, et al. Quality improvement in pediatric head trauma with PECARN rules implementation as computerized decision support. Pediatr Qual Saf. 2017;2(3):e019.

  58. Babl FE, Borland ML, Phillips N, Kochar A, Dalton S, McCaskill M, et al. Accuracy of PECARN, CATCH, and CHALICE head injury decision rules in children: a prospective cohort study. Lancet. 2017;389(10087):2393–402.

  59. Babl FE, Bressan S. Physician practice and PECARN rule outperform CATCH and CHALICE rules based on the detection of traumatic brain injury as defined by PECARN. BMJ Evid Based Med. 2015;20(1):33–4.

  60. Babl FE, Lyttle MD, Bressan S, Borland M, Phillips N, Kochar A, et al. A prospective observational study to assess the diagnostic accuracy of clinical decision rules for children presenting to emergency departments after head injuries (protocol): the Australasian Paediatric head injury rules study (APHIRST). BMC Pediatr. 2014;14(1):148.

  61. Babl FE, Oakley E, Dalziel SR, Borland ML, Phillips N, Kochar A, et al. Accuracy of clinician practice compared with three head injury decision rules in children: a prospective cohort study. Ann Emerg Med. 2018;71(6):703–10.

  62. Barrett J. The use of clinical decision rules to reduce unnecessary head CT scans in pediatric populations. Tucson: The University of Arizona; 2016.

  63. Bozan Ö, Aksel G, Kahraman H, Giritli Ö, Eroğlu S. Comparison of PECARN and CATCH clinical decision rules in children with minor blunt head trauma. Eur J Trauma Emerg Surg. 2017;43:1–7. https://doi.org/10.1007/s00068-017-0865-8.

  64. Bressan S, Romanato S, Mion T, Zanconato S, Da Dalt L. Implementation of adapted PECARN decision rule for children with minor head injury in the pediatric emergency department. Acad Emerg Med. 2012;19(7):801–7.

  65. Bressan S, Steiner IP, Mion T, Berlese P, Romanato S, Da Dalt L. The pediatric emergency care applied research network intermediate-risk predictors were not associated with scanning decisions for minor head injuries. Acta Paediatr. 2015;104(1):47–52.

  66. Easter JS, Bakes K, Dhaliwal J, Miller M, Caruso E, Haukoos JS. Comparison of PECARN, CATCH, and CHALICE rules for children with minor head injury: a prospective cohort study. Ann Emerg Med. 2014;64(2):145–52. e5.

  67. Fuller G, Dunning J, Batchelor J, Lecky F, editors. An external validation of the PECARN clinical decision rule for CT head imaging of infants with minor head injury. Brain Injury. London: Informa Healthcare; 2012.

  68. Gökharman FD, Aydın S, Fatihoğlu E, Koşar PN. Pediatric emergency care applied research network head injury prediction rules: on the basis of cost and effectiveness. Turk J Med Sci. 2017;47(6):1770–7.

  69. Holmes M, Goodacre S, Stevenson M, Pandor A, Pickering A. The cost-effectiveness of diagnostic management strategies for children with minor head injury. Arch Dis Child. 2013;98(12):939–44.

  70. Ide K, Uematsu S, Tetsuhara K, Yoshimura S, Kato T, Kobayashi T. External validation of the PECARN head trauma prediction rules in Japan. Acad Emerg Med. 2017;24(3):308–14.

  71. Lorton F, Poullaouec C, Legallais E, Simon-Pimmel J, Chêne M, Leroy H, et al. Validation of the PECARN clinical decision rule for children with minor head trauma: a French multicenter prospective study. Scand J Trauma Resusc Emerg Med. 2016;24(1):98.

  72. Lyttle MD, Cheek JA, Blackburn C, Oakley E, Ward B, Fry A, et al. Applicability of the CATCH, CHALICE and PECARN paediatric head injury clinical decision rules: pilot data from a single Australian Centre. Emerg Med J. 2013;30(10):790–4.

  73. Mihindu E, Bhullar I, Tepas J, Kerwin A. Computed tomography of the head in children with mild traumatic brain injury. Am Surg. 2014;80(9):841–3.

  74. Nakhjavan-Shahraki B, Yousefifard M, Hajighanbari M, Oraii A, Safari S, Hosseini M. Pediatric emergency care applied research network (PECARN) prediction rules in identifying high risk children with mild traumatic brain injury. Eur J Trauma Emerg Surg. 2017;43(6):755–62.

  75. Nishijima DK, Yang Z, Urbich M, Holmes JF, Zwienenberg-Lee M, Melnikow J, et al. Cost-effectiveness of the PECARN rules in children with minor head trauma. Ann Emerg Med. 2015;65(1):72–80. e6.

  76. Schonfeld D, Bressan S, Da Dalt L, Henien MN, Winnett JA, Nigrovic LE. Pediatric emergency care applied research network head injury clinical prediction rules are reliable in practice. Arch Dis Child. 2014. https://doi.org/10.1136/archdischild-2013-305004.

  77. Thiam DW, Yap SH, Chong SL. Clinical decision rules for paediatric minor head injury: are CT scans a necessary evil? Ann Acad Med Singap. 2015;44(9):335–41.

  78. Alali AS, Burton K, Fowler RA, Naimark DM, Scales DC, Mainprize TG, et al. Economic evaluations in the diagnosis and management of traumatic brain injury: a systematic review and analysis of quality. Value Health. 2015;18(5):721–34.

  79. Crowe L, Anderson V, Babl FE. Application of the CHALICE clinical prediction rule for intracranial injury in children outside the UK: impact on head CT rate. Arch Dis Child. 2010. https://doi.org/10.1136/adc.2009.174854.

  80. Harty E, Bellis F. CHALICE head injury rule: an implementation study. Emerg Med J. 2010. https://doi.org/10.1136/emj.2009.077644.

  81. Gerdung C, Dowling S, Lang E. Review of the CATCH study: a clinical decision rule for the use of computed tomography in children with minor head injury. Can J Emerg Med. 2012;14(4):247–51.

  82. Osmond M, Stiell I. Canadian assessment of tomography for childhood head injuries. Ontario: University of Ottawa, Trauma, Division of Pediatric Emergency Medicine, Children’s Hospital of Eastern Ontario; personal communication; 2002.

  83. Osmond MH, Klassen TP, Stiell IG, Correll R. The CATCH rule: a clinical decision rule for the use of computed tomography of the head in children with minor head injury. Acad Emerg Med. 2006;13(5 Supplement 1):S11.

  84. Gupta M, Mower WR, Rodriguez RM, Hendey GW. Validation of the pediatric NEXUS II head computed tomography decision instrument for selective imaging of pediatric patients with blunt head trauma. Acad Emerg Med. 2018;25(7):729–37.

  85. Schachar JL, Zampolin RL, Miller TS, Farinhas JM, Freeman K, Taragin BH. External validation of the New Orleans criteria (NOC), the Canadian CT head rule (CCHR) and the National Emergency X-radiography utilization study II (NEXUS II) for CT scanning in pediatric patients with minor head injury in a non-trauma center. Pediatr Radiol. 2011;41(8):971.

  86. Stein SC, Fabbri A, Servadei F, Glick HA. A critical comparison of clinical decision instruments for computed tomographic scanning in mild closed traumatic brain injury in adolescents and adults. Ann Emerg Med. 2009;53(2):180–8.

  87. Palchak MJ, Holmes JF, Kuppermann N. Clinician judgment versus a decision rule for identifying children at risk of traumatic brain injury on computed tomography after blunt head trauma. Pediatr Emerg Care. 2009;25(2):61–5.

  88. Mower WR, Hoffman JR, Herbert M, Wolfson AB, Pollack CV Jr, Zucker MI, et al. Developing a clinical decision instrument to rule out intracranial injuries in patients with minor head trauma: methodology of the NEXUS II investigation. Ann Emerg Med. 2002;40(5):505–15.

  89. Mower WR, Hoffman JR, Herbert M, Wolfson AB, Pollack CV Jr, Zucker MI, et al. Developing a decision instrument to guide computed tomographic imaging of blunt head injury patients. J Trauma Acute Care Surg. 2005;59(4):954–9.

  90. The Royal Australian and New Zealand College of Radiologists. Appropriate imaging referrals: clinical guidelines for paediatric head trauma. 2015. Available from: https://www.ranzcr.com/documents/3839-print-version-paediatric-head-trauma/file.

  91. McGraw M, Way T. Comparison of PECARN, CATCH, and CHALICE clinical decision rules for pediatric head injury in the emergency department. Can J Emerg Med. 2019;21(1):120–4. https://doi.org/10.1017/cem.2018.44.

  92. Kadom N, Alvarado E, Medina LS. Pediatric accidental traumatic brain injury: evidence-based emergency imaging. In: Evidence-based emergency imaging. Berlin: Springer; 2018. p. 65–77.

  93. Lyttle M, Borland M, Phillips N, Kochar A, Cheek J, Gilhotra Y, et al. G273 Accuracy of physician practice as compared with PECARN, CATCH and CHALICE head injury clinical decision rules in children: a PREDICT prospective cohort study. London: BMJ Publishing Group Ltd; 2017.

  94. Dalziel K, Cheek JA, Fanning L, Borland ML, Phillips N, Kochar A, et al. A cost-effectiveness analysis comparing clinical decision rules PECARN, CATCH, and CHALICE with usual care for the management of pediatric head injury. Ann Emerg Med. 2019;73(5):429–39.

  95. Stiell IG, Greenberg GH, McKnight RD, Nair RC, McDowell I, Worthington JR. A study to develop clinical decision rules for the use of radiography in acute ankle injuries. Ann Emerg Med. 1992;21(4):384–90.

  96. Stiell IG, Greenberg GH, Wells GA, McKnight RD, Cwinn AA, Cacciotti T, et al. Derivation of a decision rule for the use of radiography in acute knee injuries. Ann Emerg Med. 1995;26(4):405–13.

  97. Stiell I, Wells G, Laupacis A, Brison R, Verbeek R, Vandemheen K, et al. Multicentre trial to introduce the Ottawa ankle rules for use of radiography in acute ankle injuries. BMJ. 1995;311(7005):594–7.

  98. Stiell IG, McKnight RD, Greenberg GH, McDowell I, Nair RC, Wells GA, et al. Implementation of the Ottawa ankle rules. JAMA. 1994;271(11):827–32.

  99. Stiell IG, Wells GA, Hoag RH, Sivilotti ML, Cacciotti TF, Verbeek PR, et al. Implementation of the Ottawa knee rule for the use of radiography in acute knee injuries. JAMA. 1997;278(23):2075–9.

  100. Nichol G, Stiell IG, Wells GA, Juergensen LS, Laupacis A. An economic analysis of the Ottawa knee rule. Ann Emerg Med. 1999;34(4):438–47.

Acknowledgments

Not applicable.

Funding

This work was supported by the Commonwealth Government Funded Research Training Program, Australia. The funding body had no role in the design of the study; in the collection, analysis, or interpretation of data; or in the writing of the manuscript. These tasks were the sole responsibility of the study researchers.

Author information

Contributions

MK was primarily responsible for the conception of the study and the detailed analysis of the tools. BG supervised the work and verified and validated the analysis, results, and discussion. Both authors were involved in drafting and revising the manuscript, approved the manuscript for publication, and agreed to be accountable for all aspects of the work.

Corresponding author

Correspondence to Mohamed Khalifa.

Ethics declarations

Ethics approval and consent to participate

No ethics approval was required for any element of this study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Figure S2. Searching the literature for predictive tools and related published evidence. Figures S3 to S11. Statistical figures describing the fourteen paediatric head injury clinical predictive tools. Table S3. The GRASP Framework Detailed Report template. Tables S4 to S17. The GRASP Framework Detailed Report on each of the fourteen paediatric head injury clinical predictive tools. (PDF 958 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Khalifa, M., Gallego, B. Grading and assessment of clinical predictive tools for paediatric head injury: a new evidence-based approach. BMC Emerg Med 19, 35 (2019). https://doi.org/10.1186/s12873-019-0249-y


Keywords