Table 1 Adopted domains and (key) items of the CHARMS [15] checklist used, accompanied by the reporting and methodological scores per item

From: Models to predict length of stay in the emergency department: a systematic literature review and appraisal

All studies [reference], part 1 of 2. Studies (table columns, identified by reference number): Lee S. et al. (2023) [16]; Zeleke AJ. et al. (2023) [17]; Lee H. et al. (2023) [18]; Kadri F. et al. (2023) [19]; Lee KS. et al. (2022) [20]; Srivastava S. et al. (2022) [21]; Etu EE. et al. (2022) [22]; Chang YH. et al. (2022) [23]; d'Etienne JP. et al. (2021) [24]; Laher AE. et al. (2021) [25]; Bacchi S. et al. (2020) [15]; Sweeny A. et al. (2020) [26]; Sricharoen P. et al. (2020) [27]; Rahman MA. et al. (2020) [28]; Curiati PK. et al. (2020) [29]; Chen C-H. et al. (2020) [30]; Street, M. et al. (2018) [31]; Gill, S. D. et al. (2018) [32].

| Item | [16] | [17] | [18] | [19] | [20] | [21] | [22] | [23] | [24] | [25] | [15] | [26] | [27] | [28] | [29] | [30] | [31] | [32] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Key items** | | | | | | | | | | | | | | | | | | |
| *Source of data* | | | | | | | | | | | | | | | | | | |
| Source of data (e.g., cohort, case–control, randomized trial participants, or registry data)^a | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| *Participants* | | | | | | | | | | | | | | | | | | |
| Participant eligibility and recruitment method (e.g., consecutive participants, location, number of centers, setting, country, inclusion and exclusion criteria)^a | y | y | y | y | y | y | y | y | p | y | y | y | y | p | y | y | y | y |
| Participant description | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Study dates | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| *Outcome(s) to be predicted* | | | | | | | | | | | | | | | | | | |
| Definition and method for measurement of outcome | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | p | p |
| Was the same outcome definition (and measurement method) used in all patients? | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Type of outcome (e.g., single or combined endpoints) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Were candidate predictors part of the outcome (e.g., in panel or consensus diagnosis)? | n | n | n | y | n | n | n | y | n | n | n | n | n | n | n | n | n | n |
| *Candidate predictor* | | | | | | | | | | | | | | | | | | |
| Number and type of predictors (e.g., demographics, patient history, physical examination, additional testing, disease characteristics) | y | y | y | y | y | y | y | y | y | y | y | y | p | p | p | y | y | y |
| Definition and method for measurement of candidate predictors | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Timing of predictor measurement (e.g., at patient presentation, at diagnosis, at treatment initiation) | y | y | y | y | y | y | y | y | y | y | y | y | p | y | p | y | y | y |
| Handling of predictors in the modeling (e.g., continuous, linear, non-linear transformations or categorized) | y | y | y | y | y | y | y | y | n | n | y | n | n | n | n | n | n | n |
| *Sample size* | | | | | | | | | | | | | | | | | | |
| Number of participants and number of outcomes/events | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Number of outcomes/events in relation to the number of candidate predictors (Events Per Variable)^a | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| *Missing data* | | | | | | | | | | | | | | | | | | |
| Number of participants with any missing value (including predictors and outcomes) | n | y | n | n | n | y | n | y | n | n | n | n | n | n | n | n | n | n |
| Number of participants with missing data for each predictor | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Handling of missing data (e.g., complete-case analysis, imputation, or other methods) | n | n | n | y | n | n | y | n | n | n | n | n | n | y | n | n | n | n |
| *Model development* | | | | | | | | | | | | | | | | | | |
| Modeling method (e.g., logistic, survival or machine learning techniques) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Modelling assumptions satisfied | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Method for selection of predictors for inclusion in multivariable modeling (e.g., all candidate predictors, pre-selection based on unadjusted association with the outcome) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Initial predictors/variables are reported such that the results are reproducible^b | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Method for selection of predictors during multivariable modeling (e.g., full model approach, backward or forward selection) and criteria used (e.g., p-value, Akaike Information Criterion) | y | y | y | y | y | y | y | y | y | y | n | n | y | y | y | n | y | y |
| Shrinkage of predictor weights or regression coefficients (e.g., no shrinkage, uniform shrinkage, penalized estimation) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | y | n | n | n |
| Reporting of model derivation and calibration process is sufficient for the results to be reproduced^b | n | y | y | n | n | n | n | n | n | n | n | n | n | y | y | n | n | n |
| *Handling specific patient subgroups* | | | | | | | | | | | | | | | | | | |
| Readmissions^a | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Transfers^a | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Non-survivors^a | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Cardiac surgery^a | n | n | n | n | n | n | n | n | n | n | n | n | y | n | n | n | n | n |
| *Model performance* | | | | | | | | | | | | | | | | | | |
| Calibration (calibration plot, calibration slope, Hosmer–Lemeshow test) and discrimination | n | y | y | n | n | n | n | n | n | n | n | n | n | n | y | n | y | n |
| (C-statistic, D-statistic, log-rank) measures with confidence intervals | n | n | n | n | y | y | n | y | y | y | n | y | n | n | y | n | n | n |
| Classification measures (e.g., sensitivity, specificity, predictive values, net reclassification improvement) and whether a priori cut points were used | y | y | n | n | n | n | y | y | y | n | y | n | y | y | y | n | n | n |
| *Model evaluation* | | | | | | | | | | | | | | | | | | |
| Method used for testing model performance: development dataset only (random split of data, resampling methods, e.g., bootstrap or cross-validation, none) or separate external validation (e.g., temporal, geographical, different setting, different investigators)^a | y | y | y | y | n | n | y | y | y | n | y | n | y | y | y | y | y | y |
| In case of poor validation, whether model was adjusted or updated (e.g., intercept recalibrated, predictor effects adjusted, or new predictors added) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| *Publication of the developed models (Results)* | | | | | | | | | | | | | | | | | | |
| Final and other multivariable models (e.g., basic, extended, simplified) presented, including predictor weights or regression coefficients, intercept, baseline survival, model performance measures (with standard errors or confidence intervals)^a | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Any alternative presentation of the final prediction models, e.g., sum score, nomogram, score chart, predictions for specific risk subgroups with performance | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Comparison of the distribution of predictors (including missing data) for development and validation datasets | n | n | n | n | n | n | n | n | n | n | n | n | n | p | y | n | y | n |
| *Interpretation and discussion* | | | | | | | | | | | | | | | | | | |
| Interpretation of presented models (confirmatory, i.e., model useful for practice versus exploratory, i.e., more research needed) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Comparison with other studies, discussion of generalizability | y | y | y | n | y | y | y | p | y | y | y | y | y | y | y | y | y | y |
| Strengths, weaknesses, limitations and future challenges | y | y | p | p | y | p | p | y | p | p | y | y | p | y | y | p | p | p |
| **Methodological quality items** | | | | | | | | | | | | | | | | | | |
| Study consists of a cohort study or registry instead of a randomized design (source of data) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Study consists of a prospective study design (source of data) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Patients are excluded based on outcome variable (participants) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Selective inclusion based on data availability took place (participants) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Sample size (n) in development set is sufficient relative to the number of variables in the final model (sample size) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y |
| Specific treatment for this subgroup took place: readmissions | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Specific treatment for this subgroup took place: transfers | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Specific treatment for this subgroup took place: non-survivors | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n |
| Specific treatment for this subgroup took place: cardiac surgery | n | n | n | n | n | n | n | n | n | n | n | n | y | n | n | n | n | n |
| Validation took place using an independent validation dataset (model evaluation) | n | n | n | n | n | n | n | y | y | n | n | n | n | n | y | n | n | n |
| Model is reproducible (results of the developed models) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | n | n |
| **Total score** | | | | | | | | | | | | | | | | | | |
| Reporting score | 54 | 60 | 55 | 53 | 52 | 53 | 55 | 61 | 54 | 49 | 52 | 48 | 53 | 55 | 62 | 47 | 50 | 46 |
| Reporting score (%) | 54 | 60 | 55 | 53 | 52 | 53 | 55 | 61 | 54 | 49 | 52 | 48 | 53 | 55 | 62 | 47 | 50 | 46 |
| Methodological score | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 8 | 8 | 6 | 6 | 6 | 8 | 6 | 8 | 6 | 4 | 4 |
| Methodological score (%) | 27 | 27 | 27 | 27 | 27 | 27 | 27 | 36 | 36 | 27 | 27 | 27 | 36 | 27 | 36 | 27 | 18 | 18 |

All studies [reference], part 2 of 2. Studies (table columns, identified by reference number): Zhu, T. et al. (2017) [33]; Chaou C-H. et al. (2017) [34]; Mark B. Warren (2016) [35]; Prisk D. et al. (2016) [36]; Launay CP. et al. (2015) [37]; Stephens R. et al. (2014) [38]; Casalino, E. et al. (2014) [39]; Green N. et al. (2012) [40]; van der Linden C. et al. (2012) [41]; Nejtek V. A. et al. (2011) [42]; Ru Ding (2010) [43]; Chi, C. H. et al. (2006) [44]; Walsh P. et al. (2004) [45]; Tanabe P. et al. (2004) [46]; Jimenez, J. G. et al. (2003) [47]; Tandberg D. et al. (1994) [48]. The two final columns give the total score per key item and the percentage of the score per key item.

| Item | [33] | [34] | [35] | [36] | [37] | [38] | [39] | [40] | [41] | [42] | [43] | [44] | [45] | [46] | [47] | [48] | Total score key item | Percentage of score key item (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Key items** | | | | | | | | | | | | | | | | | | |
| *Source of data* | | | | | | | | | | | | | | | | | | |
| Source of data (e.g., cohort, case–control, randomized trial participants, or registry data)^a | y | y | y | y | y | y | y | y | y | y | y | y | y | y | p | y | 67 | 99 |
| *Participants* | | | | | | | | | | | | | | | | | | |
| Participant eligibility and recruitment method (e.g., consecutive participants, location, number of centers, setting, country, inclusion and exclusion criteria)^a | y | y | y | y | y | y | p | p | y | p | p | y | n | p | p | p | 57 | 84 |
| Participant description | y | y | p | y | y | y | p | y | y | y | n | y | n | n | n | n | 56 | 82 |
| Study dates | y | y | y | y | y | y | y | y | p | y | y | y | p | y | y | y | 66 | 97 |
| *Outcome(s) to be predicted* | | | | | | | | | | | | | | | | | | |
| Definition and method for measurement of outcome | y | y | p | y | y | p | y | y | y | n | y | n | n | y | n | y | 56 | 82 |
| Was the same outcome definition (and measurement method) used in all patients? | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | 68 | 100 |
| Type of outcome (e.g., single or combined endpoints) | y | y | y | y | y | y | y | y | y | y | y | n | y | y | y | y | 66 | 97 |
| Were candidate predictors part of the outcome (e.g., in panel or consensus diagnosis)? | n | n | n | n | y | y | n | n | n | n | n | n | n | n | n | n | 8 | 12 |
| *Candidate predictor* | | | | | | | | | | | | | | | | | | |
| Number and type of predictors (e.g., demographics, patient history, physical examination, additional testing, disease characteristics) | y | y | y | p | y | y | y | y | y | y | y | y | y | y | y | y | 64 | 94 |
| Definition and method for measurement of candidate predictors | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | 68 | 100 |
| Timing of predictor measurement (e.g., at patient presentation, at diagnosis, at treatment initiation) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | 66 | 97 |
| Handling of predictors in the modeling (e.g., continuous, linear, non-linear transformations or categorized) | n | y | y | p | y | y | p | p | n | p | y | y | n | n | n | n | 44 | 65 |
| *Sample size* | | | | | | | | | | | | | | | | | | |
| Number of participants and number of outcomes/events | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | 68 | 100 |
| Number of outcomes/events in relation to the number of candidate predictors (Events Per Variable)^a | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | 68 | 100 |
| *Missing data* | | | | | | | | | | | | | | | | | | |
| Number of participants with any missing value (including predictors and outcomes) | n | n | n | n | n | n | n | n | n | y | y | n | n | n | y | n | 12 | 18 |
| Number of participants with missing data for each predictor | n | n | n | n | n | n | n | n | n | y | y | n | n | n | y | n | 6 | 9 |
| Handling of missing data (e.g., complete-case analysis, imputation, or other methods) | n | p | n | n | n | y | n | n | n | n | y | n | n | n | y | n | 13 | 19 |
| *Model development* | | | | | | | | | | | | | | | | | | |
| Modeling method (e.g., logistic, survival or machine learning techniques) | y | y | y | y | y | y | y | p | p | y | y | p | y | p | p | y | 63 | 93 |
| Modelling assumptions satisfied | y | y | y | y | y | y | y | y | p | y | y | y | y | y | y | y | 67 | 98 |
| Method for selection of predictors for inclusion in multivariable modeling (e.g., all candidate predictors, pre-selection based on unadjusted association with the outcome) | n | y | y | y | y | y | n | n | y | y | y | n | y | n | n | n | 54 | 79 |
| Initial predictors/variables are reported such that the results are reproducible^b | n | y | y | p | y | y | y | y | y | n | n | n | y | n | n | n | 53 | 78 |
| Method for selection of predictors during multivariable modeling (e.g., full model approach, backward or forward selection) and criteria used (e.g., p-value, Akaike Information Criterion) | n | y | y | y | y | y | n | n | y | y | y | n | y | n | n | n | 48 | 71 |
| Shrinkage of predictor weights or regression coefficients (e.g., no shrinkage, uniform shrinkage, penalized estimation) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 2 | 3 |
| Reporting of model derivation and calibration process is sufficient for the results to be reproduced^b | n | y | n | n | n | n | n | n | n | n | n | n | y | n | n | y | 14 | 21 |
| *Handling specific patient subgroups* | | | | | | | | | | | | | | | | | | |
| Readmissions^a | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| Transfers^a | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| Non-survivors^a | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| Cardiac surgery^a | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 2 | 3 |
| *Model performance* | | | | | | | | | | | | | | | | | | |
| Calibration (calibration plot, calibration slope, Hosmer–Lemeshow test) and discrimination | n | y | y | n | n | y | n | y | y | y | y | n | y | y | n | y | 28 | 41 |
| (C-statistic, D-statistic, log-rank) measures with confidence intervals | y | n | n | n | n | y | p | n | n | p | n | p | n | n | p | n | 22 | 32 |
| Classification measures (e.g., sensitivity, specificity, predictive values, net reclassification improvement) and whether a priori cut points were used | n | n | n | n | y | n | n | n | n | n | n | n | y | n | n | n | 22 | 32 |
| *Model evaluation* | | | | | | | | | | | | | | | | | | |
| Method used for testing model performance: development dataset only (random split of data, resampling methods, e.g., bootstrap or cross-validation, none) or separate external validation (e.g., temporal, geographical, different setting, different investigators)^a | n | n | n | n | n | y | y | n | y | p | y | n | y | y | n | y | 43 | 63 |
| In case of poor validation, whether model was adjusted or updated (e.g., intercept recalibrated, predictor effects adjusted, or new predictors added) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| *Publication of the developed models (Results)* | | | | | | | | | | | | | | | | | | |
| Final and other multivariable models (e.g., basic, extended, simplified) presented, including predictor weights or regression coefficients, intercept, baseline survival, model performance measures (with standard errors or confidence intervals)^a | y | y | p | p | y | y | y | p | p | y | y | y | y | y | p | y | 63 | 93 |
| Any alternative presentation of the final prediction models, e.g., sum score, nomogram, score chart, predictions for specific risk subgroups with performance | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| Comparison of the distribution of predictors (including missing data) for development and validation datasets | y | y | n | n | n | y | y | y | p | y | y | y | n | n | p | n | 23 | 34 |
| *Interpretation and discussion* | | | | | | | | | | | | | | | | | | |
| Interpretation of presented models (confirmatory, i.e., model useful for practice versus exploratory, i.e., more research needed) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | 68 | 100 |
| Comparison with other studies, discussion of generalizability | y | y | n | y | y | y | n | y | n | n | n | n | n | y | n | n | 47 | 69 |
| Strengths, weaknesses, limitations and future challenges | p | y | y | y | p | p | y | y | p | p | n | y | y | y | y | y | 51 | 75 |
| **Methodological quality items** | | | | | | | | | | | | | | | | | | |
| Study consists of a cohort study or registry instead of a randomized design (source of data) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | 68 | 100 |
| Study consists of a prospective study design (source of data) | n | n | n | n | y | n | y | y | n | n | n | y | n | n | n | y | 10 | 14 |
| Patients are excluded based on outcome variable (participants) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| Selective inclusion based on data availability took place (participants) | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| Sample size (n) in development set is sufficient relative to the number of variables in the final model (sample size) | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | y | 68 | 100 |
| Specific treatment for this subgroup took place: readmissions | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| Specific treatment for this subgroup took place: transfers | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 0 | 0 |
| Specific treatment for this subgroup took place: non-survivors | n | n | n | n | n | n | n | n | n | n | y | n | n | n | n | n | 2 | 3 |
| Specific treatment for this subgroup took place: cardiac surgery | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | n | 2 | 3 |
| Validation took place using an independent validation dataset (model evaluation) | n | n | n | n | n | y | n | n | n | n | n | n | y | n | n | y | 12 | 18 |
| Model is reproducible (results of the developed models) | y | y | n | y | y | y | y | n | n | n | n | n | y | n | n | n | 46 | 68 |
| **Total score** | | | | | | | | | | | | | | | | | | |
| Reporting score | 45 | 57 | 45 | 46 | 55 | 62 | 48 | 46 | 44 | 49 | 53 | 40 | 49 | 40 | 38 | 45 | 1731 | |
| Reporting score (%) | 45 | 57 | 45 | 46 | 55 | 62 | 48 | 46 | 44 | 49 | 53 | 40 | 49 | 40 | 38 | 45 | 50 | |
| Methodological score | 6 | 6 | 4 | 6 | 8 | 8 | 8 | 6 | 4 | 4 | 6 | 6 | 8 | 4 | 4 | 8 | 208 | |
| Methodological score (%) | 27 | 27 | 18 | 27 | 36 | 36 | 36 | 27 | 18 | 18 | 27 | 27 | 36 | 18 | 18 | 36 | 28 | |
^a One or more methodological scores are given to this item.
^b Additional items were added to the checklist from a scoring framework developed for reviewing models to predict mortality in very premature infants [14].
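
The methodological scores and percentages in the table are consistent with a simple tally over the 11 methodological quality items in which y appears to count 2 points, p 1 point and n 0 points, with a maximum of 22 points per study (e.g., a score of 6 corresponds to 6/22 ≈ 27%, 8 to ≈ 36%, 4 to ≈ 18%). The Python sketch below illustrates this inferred scheme; the point values and the reading of y/p/n as yes/partially/no are assumptions derived from the table rather than definitions stated in it.

```python
# Minimal sketch of the tally that appears to underlie the methodological
# score and percentage columns. Assumed scheme (inferred from the table):
# y = 2 points, p = 1 point, n = 0 points over 11 methodological quality
# items, giving a maximum of 22 points per study.

POINTS = {"y": 2, "p": 1, "n": 0}
N_METHODOLOGICAL_ITEMS = 11
MAX_SCORE = 2 * N_METHODOLOGICAL_ITEMS  # 22


def methodological_score(ratings: list[str]) -> tuple[int, int]:
    """Return (score, rounded percentage) for one study's 11 ratings."""
    if len(ratings) != N_METHODOLOGICAL_ITEMS:
        raise ValueError(
            f"expected {N_METHODOLOGICAL_ITEMS} ratings, got {len(ratings)}"
        )
    score = sum(POINTS[r] for r in ratings)
    return score, round(100 * score / MAX_SCORE)


# Example: methodological ratings for Street, M. et al. (2018) [31],
# read column-wise from the table above.
street_2018 = ["y", "n", "n", "n", "y", "n", "n", "n", "n", "n", "n"]
print(methodological_score(street_2018))  # (4, 18), matching the table
```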