CDC's Short Version of the ICECI - International Classification of External Causes of Injury - A Pilot Test (Centers for Disease Control and Prevention, 2000, 30 p.)

Methods and Procedures

Massachusetts ED-SCIP and NEISS Substudies

The short ICECI Pilot Test consisted of two independent substudies involving hospital EDs in two different injury surveillance systems: the Massachusetts (MA) Emergency Department Surveillance and Coordination Project (ED-SCIP) and CPSC’s National Electronic Injury Surveillance System (NEISS). The Massachusetts Department of Public Health is developing ED-SCIP as a statewide surveillance system based on voluntary reporting from a representative sample of hospitals. The reporting system is being implemented in a stratified random sample of 20 of the state’s 79 hospital EDs. The NEISS comprises 100 hospital EDs drawn as a stratified probability sample of U.S. hospitals that have at least six beds and provide 24-hour emergency care.

The ED-SCIP substudy involved 4 public health professionals from the MA Department of Public Health who coded medical records in 16 of the 20 ED-SCIP hospital EDs. The NEISS substudy involved 7 on-site data abstractors who routinely code product-related injury data from ED records at 7 NEISS hospital EDs. Sample hospitals in both substudies were located in rural, suburban, urban, and inner-city areas.

Training of Coders and Quality Assurance Methods

The training of coders was conducted independently for the two substudies. For the ED-SCIP substudy, coders were mailed a coding manual and the coding rules and definitions for their review approximately 10 days in advance of the training session. The ED-SCIP coders attended an 8-hour training session with a detailed presentation and discussion of the study protocol, coding rules and definitions, and specific training exercises. For the NEISS substudy, coders were not given training material prior to attending a 2-hour orientation to the study protocol and coding rules and definitions.

For the ED-SCIP substudy, coders were asked to conduct a prepilot test at two study hospitals using the short ICECI data collection form and to provide feedback to the CDC investigators. A conference call was conducted to clarify how to interpret and apply coding rules and definitions. For the NEISS substudy, coders were asked to proceed without prepilot testing. Both ED-SCIP and NEISS substudy coders then followed similar study protocols for the case scenario test and the field test. Completed data collection forms were reviewed for completeness and mailed to CDC for data processing and analysis. At CDC, data were key-entered, checked electronically for valid codes, and visually reviewed for accuracy of data entry. In some cases, coders did not follow the skip patterns correctly in filling out the data collection form. All forms were key-entered as they were completed by the coder; therefore, these errors were included in the analysis of percent agreement.
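As a minimal sketch of the kind of electronic code checking and skip-pattern review described above, the example below shows one way such edit checks might be implemented. The field names, code values, and skip rule are hypothetical; the actual short ICECI data elements and edit specifications are not reproduced here.

```python
# Hypothetical field names and code sets, for illustration only; the
# actual short ICECI data elements and skip rules are not shown here.
VALID_CODES = {
    "intent": {"1", "2", "3", "9"},                   # e.g., unintentional, self-harm, assault, unknown
    "mechanism": {f"{i:02d}" for i in range(1, 16)},  # a 15-category mechanism variable
}

def check_record(record):
    """Return a list of edit-check failures for one data collection form."""
    errors = []
    # Electronic check for valid codes in each field.
    for field, valid in VALID_CODES.items():
        if record.get(field) not in valid:
            errors.append(f"invalid code in {field!r}: {record.get(field)!r}")
    # Example skip rule: a perpetrator code is expected only for assaults ("3").
    if record.get("intent") != "3" and record.get("perpetrator"):
        errors.append("skip pattern not followed: perpetrator coded for a non-assault case")
    return errors

# A form with a skip-pattern error, as sometimes occurred in the pilot.
print(check_record({"intent": "1", "mechanism": "05", "perpetrator": "2"}))
```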

Protocol for the Case Scenario Test

One hundred case scenarios and “gold standard” code sets were prepared for this test. (Note: These same case scenarios were also used by Malinda Steenkamp and James Harrison, Australia’s National Injury Surveillance Unit, to conduct a pilot study of inter-coder reliability using the full ICECI.) Case scenarios represented a wide variety of injury-related circumstances including unintentional injuries, intentionally self-inflicted injuries, assaults, and legal interventions. Seven of the case scenarios did not involve injuries at all. Gold standard code sets were established by having three coinvestigators code each case independently and then arrive at consensus on the appropriate codes based on the coding rules and definitions.

Coders participating in the short ICECI pilot study were asked to code 50 case scenarios prior to the field test and 50 case scenarios at the end of the field test. About two weeks after the coders completed the second set of 50 case scenarios, they were asked to code an additional 20 cases. Coders were not told that these additional cases represented recodes of cases selected from the second set of 50 case scenarios. These data were then used to measure validity and reliability by estimating percent agreement for (1) codes assigned by each coder with the gold standard (to measure accuracy in applying coding rules and definitions), (2) codes assigned by multiple coders (to measure inter-coder reliability), and (3) codes assigned by the same coder (to measure intra-coder reliability).
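As a sketch of how percent agreement might be estimated for a single data element, the following assumes each coder's codes and the gold-standard codes are aligned case by case; the code values shown are hypothetical.

```python
def percent_agreement(coder_codes, gold_codes):
    """Percent of cases on which one coder's codes match the gold standard
    for a single data element; the same form applies to coder-to-coder
    (inter-coder) and test-retest (intra-coder) comparisons."""
    matches = sum(c == g for c, g in zip(coder_codes, gold_codes))
    return 100.0 * matches / len(gold_codes)

# Hypothetical intent codes for five case scenarios.
coder = ["unintentional", "assault", "self-harm", "unintentional", "assault"]
gold  = ["unintentional", "assault", "self-harm", "assault", "assault"]
print(f"{percent_agreement(coder, gold):.0f}% agreement")  # 80% agreement
```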

Protocol for the Field Test

For the field test, coders were asked to review medical records of approximately 100 injury-related ED cases and to code all pertinent data elements using the short ICECI data collection form. Adverse effects of therapeutic use of drugs and adverse effects of medical and surgical care were excluded. For the ED-SCIP substudy, ED cases were randomly selected from each sample hospital’s preexisting ED-SCIP injury surveillance database, among cases with a principal diagnosis of an injury or poisoning (ICD-9-CM diagnosis codes 800–999). For one of the larger inner-city hospitals, assaults were oversampled to increase the number of intentional injuries in our study. In addition, 5–10 medical records of injury-related cases at each hospital were randomly chosen to be coded independently by at least two coders for use in measuring inter-coder reliability. Because the ED-SCIP hospital ED data obtained in this field study are not representative of all ED injury-related visits in the state, they cannot be used to project statewide injury incidence by external cause of injury.
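A minimal sketch of the ED-SCIP case-selection step just described is shown below, assuming each database record carries a numeric ICD-9-CM principal diagnosis code and an intent field; the field names and the oversampling rule are illustrative, not the study’s actual specifications.

```python
import random

def select_cases(records, n_cases, n_assault_extra=0, seed=1):
    """Randomly select injury/poisoning cases (principal diagnosis
    800-999), optionally drawing extra assault cases to oversample
    intentional injuries. Field names are hypothetical."""
    rng = random.Random(seed)
    injuries = [r for r in records if 800 <= r["principal_dx"] <= 999]
    sample = rng.sample(injuries, min(n_cases, len(injuries)))
    if n_assault_extra:
        remaining = [r for r in injuries
                     if r not in sample and r["intent"] == "assault"]
        sample += rng.sample(remaining, min(n_assault_extra, len(remaining)))
    return sample
```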

For the NEISS substudy, ED cases were selected to represent a broad spectrum of external causes of injury. This was done to examine the use of the short ICECI for capturing information on a wide variety of injury-related circumstances for injured persons treated in hospital EDs.

Statistical Methods Used to Assess Gold Standard, Inter-coder, and Intra-coder Comparisons

The kappa statistic, expressed as a percent, was used as a measure of agreement to test for validity and reliability. This statistic adds a dimension to percent observed agreement by assuming that, except in the most extreme cases, some degree of agreement is to be expected by chance alone. The estimated kappa was calculated as [(po – pe)/(1 – pe)] × 100, where po is the observed agreement and pe is the expected agreement based on chance.4 Their difference, (po – pe), represents the obtained excess agreement beyond chance, while the maximum possible excess agreement beyond chance is represented by the quantity (1 – pe). The ratio of these two quantities, the kappa statistic, can be interpreted as the percent agreement among coders beyond that which is expected by chance. The kappa statistic was used to compare (1) each individual coder’s ratings with the gold standard (validity), (2) all substudy coders’ ratings simultaneously (inter-coder reliability), and (3) each coder’s ratings with his or her own ratings for repeated cases (intra-coder reliability). Landis and Koch5 suggest that values greater than 75% may be taken to represent excellent agreement beyond chance, values below 40% poor agreement beyond chance, and values between 40% and 75% fair to good agreement beyond chance. Standard errors of kappas were calculated using a method, described by Fleiss et al.,6 that accounts for different sets of coders for different cases. Standard errors were used to compute 95% confidence intervals (CIs) for all kappa statistics.
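The two-coder (Cohen) form of this calculation can be sketched as follows; the study’s simultaneous multi-coder comparisons and standard errors used the generalization described by Fleiss et al.,6 which is not reproduced here. The example codes are hypothetical.

```python
from collections import Counter

def kappa_percent(ratings_a, ratings_b):
    """Two-coder kappa, expressed as a percent:
    kappa = (po - pe) / (1 - pe) x 100."""
    n = len(ratings_a)
    # po: observed agreement (proportion of cases where codes match).
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # pe: chance-expected agreement, from each coder's marginal proportions.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    pe = sum((freq_a[c] / n) * (freq_b[c] / n)
             for c in set(freq_a) | set(freq_b))
    return (po - pe) / (1 - pe) * 100

# Hypothetical mechanism codes assigned by two coders to five cases.
coder1 = ["fall", "struck", "fall", "cut/pierce", "fall"]
coder2 = ["fall", "struck", "burn", "cut/pierce", "fall"]
print(f"kappa = {kappa_percent(coder1, coder2):.1f}%")  # about 70.6%
```

Here the observed agreement is 80%, but because both coders assign “fall” frequently, 32% agreement is expected by chance alone, leaving a kappa of about 70.6%.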

For the analysis of mechanism of injury, we created a 15-category analytic variable comprising the 14 major mechanisms of injury plus an “other specified” category; adverse effects of drugs and adverse effects of surgical and medical care were excluded. This variable was constructed to reflect the immediate or most direct cause of the most severe injury being treated. The most severe injury was determined by the physician’s principal diagnosis at the time of the ED visit. Only first-time visits for an injury were included in the study.
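One way such an analytic variable might be derived from more detailed mechanism codes is sketched below. The code values and category names are illustrative (the partial list follows common external-cause groupings), not the short ICECI’s actual values.

```python
# Illustrative mapping only; the short ICECI's actual mechanism codes
# and category labels are not reproduced here.
MAJOR_MECHANISMS = {
    "01": "cut/pierce",
    "02": "drowning/submersion",
    "03": "fall",
    "04": "fire/burn",
    "05": "firearm",
    # ... the remaining major mechanism categories would be listed here ...
}

def analytic_mechanism(code):
    """Collapse a detailed mechanism code into the 15-category analytic
    variable: one of the 14 major mechanisms, else 'other specified'."""
    return MAJOR_MECHANISMS.get(code, "other specified")

print(analytic_mechanism("03"))  # fall
print(analytic_mechanism("77"))  # other specified
```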

Pilot Study Participant Survey

Each coder was asked to complete a survey to provide coinvestigators with feedback on his or her experience in using the short ICECI.