In this study, we designed and evaluated an implicit review instrument to assess the quality of care provided to children in the ED. This instrument measures four aspects of care, as well as overall quality of care. When applied by two pediatric emergency medicine physicians to 178 acutely ill and injured pediatric patients seen at four rural EDs, the instrument had high internal consistency reliability and fair to good inter-rater reliability. The validity of the instrument is supported by the fact that the mean total summary score was associated with the incidence of medication errors (an explicit measure). Furthermore, each reviewer's total summary score correlated with the other reviewer's "validation question" score (a separate measure of validity), and the mean total summary score was correlated with the mean "validation question" score for the two reviewers.
We also found that, in the majority of visits, the quality of care provided to critically ill pediatric patients in this sample of four rural EDs was considered acceptable by experts in pediatric emergency medicine. This finding that most of the care was considered acceptable using implicit and explicit review is similar to previously published reports [10–12, 15]. Our instrument's high face and construct validity, fair inter-rater reliability for the individual items, and good inter-rater reliability for the total summary score (as measured by ICC) are also consistent with the findings of several previous studies using implicit review [7–9, 11–15, 30]. Together, these findings suggest a tendency for multiple reviewers to rank quality of care similarly, but not necessarily with the same numerical ratings (e.g., some reviewers tend to assign higher scores than others, while preserving a similar rank order).
With regard to pediatric medication errors, our study identified errors among 26.4% of patients who had medications ordered, which is higher than the previously published range of 5.7% to 14.7% [31–34]. However, most of these studies relied on incident report data or voluntary error reports [33, 34], which would tend to underestimate actual medication error rates. Medication error rates may also have been higher in our study because the hospitals studied had less pediatric experience, or because the EDs were not all staffed by emergency medicine trained physicians with pediatric experience. Furthermore, our study focused on the most ill pediatric patients presenting emergently to the ED, which would likely increase the prevalence of medication errors in our sample.
Peer review plays a central role in many quality assurance strategies, both for the evaluation of physician performance and for the evaluation of program performance [10, 22]. The implicit peer review method used in this study has face validity to physicians. Because of the diversity of diagnoses and heterogeneous severity of illness among children presenting to the ED, no explicit measures of quality of care are available that could be applied to a consecutive cohort of unselected ED patients. Implicit review allows assessments that consider the unique characteristics of each patient, taking into account the latest trends and developments in patient management. The structured implicit review approach adopted in this study is designed to capture the strengths of both implicit review (e.g., allowing the reviewer to consider the nuances of the case, which enhances validity) and explicit review (e.g., requiring all reviewers to examine certain elements of care, which enhances reliability).
There are several limitations to our study. First, our instrument was tested only on the most ill pediatric patients presenting to four rural EDs. However, it is for these patients that quality of care is of greatest concern. Second, we used only two reviewers for the assessment of quality of care, which could limit the generalizability of the instrument if other reviewers score charts differently. We recommend further validation of this instrument using more reviewers. The extent to which this instrument is valid and reliable in other settings, when applied by other reviewers, and with less ill patients requires further study. Third, the ability of our instrument to measure quality is somewhat dependent upon the detail of documentation in the medical record. While the quality of the documentation may affect measurement of the physician's "integration of information," it is less likely to affect measurement of the physician's "initial data gathering," "initial treatment plan and orders," and "plan for disposition and follow-up," which are documented through orders or laboratory reports as well as physician notes. Fourth, medication errors may have partly influenced the physicians' assessment of quality, making medication errors a less than ideal validation measure. However, many aspects of the review for medication errors could be appreciated only by pharmacist review of pharmacy records, not by physicians' review of the ED record. Blinding charts to hospital identity may not have been completely successful because charts differ in format between hospitals; however, this limitation should not affect the reliability or construct validity of the instrument. Finally, despite steps taken to increase inter-rater agreement, our ICC suggests only fair agreement between physician evaluators for individual items on our instrument, but good agreement for the total summary score.
We are not discouraged by this finding, however, because we devised the instrument to measure variation in quality of care across different cohorts of patients, expecting that different reviewers may have different overall mean scores. Furthermore, we did not want to artificially increase reviewer agreement by providing a priori explicit instructions on how to score individual quality items. Our high Spearman rank correlation suggests that the reviewers tended to rank quality of care similarly, albeit with different mean scores. Previous studies indicate that the reliability of peer review increases with the number of reviewers; hence, using more than two reviewers would probably further increase inter-rater reliability [13, 29, 37].
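The distinction between rank agreement and exact numeric agreement can be illustrated with a small sketch. The ratings below are hypothetical, not data from this study: when one reviewer's scores are consistently offset from another's, the Spearman rank correlation is perfect even though the two reviewers never assign the same numeric score.

```python
def ranks(values):
    # Assign average ranks (1-based), handling ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical quality scores on a 1-5 scale for seven charts.
reviewer_a = [3, 5, 4, 2, 5, 3, 4]
# Reviewer B consistently scores one point higher than reviewer A.
reviewer_b = [s + 1 for s in reviewer_a]

print(round(spearman(reviewer_a, reviewer_b), 6))           # 1.0: identical rank order
print(sum(a == b for a, b in zip(reviewer_a, reviewer_b)))  # 0: no exact score agreement
```

This is why a rank-based statistic can remain high while item-by-item agreement (as captured by ICC) is only fair: a systematic offset between reviewers lowers exact agreement without disturbing the ordering of charts by quality.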
By investigating processes of care in EDs and comparing implicit quality of care across sites, it is our goal to better understand the factors that need to be addressed to improve care. Our implicit review instrument could be used to assess whether differences in quality of care exist between different types of EDs, including rural, suburban, urban, or Children's Hospital EDs. Similarly, it could be used to investigate whether the presence of specialty trained or board certified Emergency Medicine physicians is associated with higher quality of care [38–40].