Accuracy of the Charlson Index Comorbidities Derived from a Hospital Electronic Database in a Teaching Hospital in Saudi Arabia

by Adel Youssef, MD, PhD; and Hana Alharthi, PhD

Abstract

Hospital management and researchers are increasingly using electronic databases to study utilization, effectiveness, and outcomes of healthcare provision. Although several studies have examined the accuracy of electronic databases developed for general administrative purposes, few studies have examined electronic databases created to document the care provided by individual hospitals. In this study, we assessed the accuracy of an electronic database in a major teaching hospital in Eastern Province, Saudi Arabia, in documenting the 17 comorbidities constituting the Charlson index as recorded in paper charts by care providers. Using the hospital electronic database, the researchers randomly selected the data for 1,019 patients admitted to the hospital and compared the data for accuracy with the corresponding paper charts. Compared with the paper charts, the hospital electronic database did not differ significantly in prevalence for 9 conditions but differed from the paper charts for 8 conditions. The kappa (κ) values of agreement ranged from a high of 0.91 to a low of 0.09. Of the 17 comorbidities, the electronic database had substantial or excellent agreement for 10 comorbidities relative to paper chart data, and only one showed poor agreement. Sensitivity ranged from a high of 100.0 percent to a low of 6.0 percent. Specificity for all comorbidities was greater than 93 percent. The results suggest that the hospital electronic database reasonably agrees with patient chart data and can have a role in healthcare planning and research. The analysis conducted in this study could be performed in individual institutions to assess the accuracy of an electronic database before deciding on its utility in planning or research.

Keywords: accuracy, agreement, Charlson index, hospital electronic database, Saudi Arabia

Introduction

Traditionally, a manual detailed chart review is used to abstract data from medical records when patient information is needed after discharge. However, hospital electronic databases are being increasingly used by policy makers and planners to measure healthcare demands and needs and to plan provision of care. These databases are also being used by researchers to conduct studies on healthcare quality and outcomes and on utilization of healthcare services.1 Among the factors contributing to the increasing use of electronic databases are their easy availability for analysis, their relatively low cost, and the large quantities of clinical information they offer regarding care provided during patient contact with the healthcare system.2–4

The presence of comorbid conditions has a major influence on utilization and outcomes of care. As a result, researchers have developed scoring systems to account for the number of comorbidities, which can be used to adjust for the patient mix when measuring outcomes or care utilization.5, 6 One of the most widely used and validated scoring systems was developed by Charlson and colleagues.7 This system was originally developed to predict one-year mortality in a cohort of medical patients while taking into consideration the number and severity of comorbid diseases.

National surveys in the United States have shown that the level of information technology (IT) adoption, including that of electronic medical records (EMRs), is still limited in most clinical settings.8 The Healthcare Information and Management Systems Society (HIMSS) analytic database indicated that about 65 percent of hospitals are at stage 3 of EMR implementation or below, and only 1.8 percent have adopted the complete use of EMRs.9 As a result, many hospitals are still using combined paper and EMR systems. Some information about patient care (such as laboratory orders and results, medications received, and procedures performed) is entered into the electronic system during the patient’s hospital stay. Other information about the episode of care, such as details of patient diagnoses and comorbidities, which are usually coded using the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)10 and now in some countries the International Classification of Diseases, Tenth Revision (ICD-10), is transferred from the paper chart to the electronic database following the patient’s discharge.11, 12

With the wider use of electronic databases in healthcare research, the accuracy and completeness of such data have become an important issue given the potential for error in the process of coding the diagnoses for entry into these databases. Studies from North America, Europe, and Australia have investigated the accuracy of diagnostic coding of the Charlson comorbidity conditions in administrative databases compared with diagnoses obtained through paper medical records.13–15 However, studies from other countries or from different types of administrative databases are still lacking. As the quality of administrative data varies across hospitals, regions, and countries, more studies are needed from different countries to exchange information, to allow for the development of analytic tools that could be standardized and adopted across countries, and to help understand the strengths and weaknesses of various healthcare systems.16

In Saudi Arabia, despite a government push and a large budget, hospitals have lagged behind their US counterparts in the use of EMR systems.17 Meanwhile, in a process similar to the upcoming US transition to the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) and the International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS), Saudi Arabia is transitioning to ICD-10-CM/PCS; however, ICD-9-CM is still the predominant coding system in both countries. In the United States, the Department of Health and Human Services (HHS) recently announced a final rule that delays the required ICD-10-CM/PCS compliance date from October 1, 2013, to October 1, 2014.18

We conducted this study to assess the accuracy of the hospital electronic database in a university hospital in Saudi Arabia. To achieve that aim, we evaluated the extent to which ICD-9-CM diagnostic codes constituting the Charlson index used in the hospital electronic database accurately reflected the patients’ comorbid conditions as documented in the patients’ paper medical records.

Methods

Study Design

This was a cross-sectional study comparing the agreement of comorbidities for the same patient obtained from two data sources: paper medical charts and the hospital electronic database. These data sources contained records for all patients discharged from the hospital, with all diagnoses and procedures recorded for each patient. The study was approved and funded by the Deanship of Scientific Research at the University of Dammam.

Study Setting and Population

The study was conducted using the records of patients admitted from January 1, 2008, to December 31, 2010, to the Department of General Medicine at the teaching hospital affiliated with the University of Dammam in Eastern Province, Saudi Arabia. In 1998, the hospital introduced a QuadraMed system that is integrated with a clinical decision support system.19 Since the implementation of this system, all physicians have been required to enter medication orders and review lab results electronically. Some information is still being entered manually in paper charts.

Data Sources

We identified the electronic medical records of patients discharged from the Department of General Medicine during the three-year period from January 1, 2008, to December 31, 2010. To have at least 1,000 study participants for the final analysis, a random number generator was used to select 1,050 patients from the electronic database, compensating for the anticipated unavailability of the paper charts for some of the patients. Abstracted information included patient demographic characteristics (such as date of birth and nationality), dates of admission and discharge, discharge status (including in-hospital death), and all ICD-9-CM diagnosis and procedure codes. Patients younger than 18 years were excluded. Medical conditions and procedures were coded into the hospital electronic database using ICD-9-CM by experienced coders who read through the paper charts. The corresponding paper charts were requested from the medical record department of the hospital. All diagnoses in both the paper charts and the electronic database were abstracted and included in the analysis.
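The random selection step described above can be sketched as follows. The pool size and seed here are illustrative assumptions, not details reported in the study; only the sample size of 1,050 comes from the text.

```python
import random

# Hypothetical pool of eligible discharge record IDs; the actual number of
# eligible 2008-2010 discharges is not reported in the study.
eligible_ids = list(range(1, 12001))

random.seed(2010)  # fixed seed so the draw is reproducible
sampled_ids = random.sample(eligible_ids, 1050)  # sampling without replacement

# Sampling without replacement guarantees 1,050 distinct patients,
# whose charts can then be requested from the medical record department.
assert len(sampled_ids) == len(set(sampled_ids)) == 1050
```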

Chart Review

Two health information management graduates with experience in medical record review and data extraction were assigned to review the selected medical charts in their entirety. For each patient, the principal diagnosis and all 17 comorbidities included in the Charlson index (see Table 1) were searched for and collected from the chart. The reviewers abstracted information from the cover page, discharge summary, physician notes, consultation reports, laboratory results, and physician orders. Other patient information, such as demographic characteristics and occurrence of death during hospitalization, was also abstracted. To ensure consistency in the abstraction process, 20 charts other than those included in the final study analysis were cross-abstracted by both reviewers and were checked by the author for agreement (κ = 0.82). The chart reviewers were blinded to the contents of the electronic database for the study patients.

Statistical Analysis

Descriptive statistics were used to calculate the prevalence of the 17 comorbidities in both the electronic database and the paper chart data, and the results were then compared using McNemar’s test. To assess the accuracy of the electronic database in reproducing the chart data, sensitivity, specificity, positive predictive value, and negative predictive value were calculated using the chart data as the gold standard. Sensitivity was calculated as a measure of the accuracy of recording comorbidities in the electronic database when they were present in the paper chart. Specificity was calculated to determine the accuracy of reporting the absence of the condition in the electronic database when it was also absent from the paper chart. Positive predictive values and negative predictive values were also calculated to determine the extent to which a comorbidity present in or absent from the electronic database was also present in or absent from, respectively, the paper chart. Furthermore, to test the agreement between the two databases, we calculated kappa (κ) statistics for individual comorbidities. To interpret the extent of agreement greater than chance, kappa values were categorized into five categories according to Landis and Koch’s method:20 ≤0.20 (poor agreement), 0.21–0.40 (fair agreement), 0.41–0.60 (moderate agreement), 0.61–0.80 (substantial agreement), and 0.81–1.00 (excellent agreement). Analysis was conducted using Stata software (Stata Corporation, College Station, Texas). A p-value less than or equal to .05 was considered statistically significant.
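All of the indices above can be derived from a single 2×2 table cross-classifying each comorbidity's presence in the electronic database against the paper chart (gold standard). A minimal sketch in Python, using illustrative counts rather than the study's actual data:

```python
# 2x2 table for one comorbidity: electronic database (test) vs.
# paper chart (gold standard). Counts are illustrative only.
a = 90   # present in both database and chart (true positive)
b = 10   # present in database only (false positive)
c = 30   # present in chart only (false negative)
d = 889  # absent from both (true negative)
n = a + b + c + d

sensitivity = a / (a + c)  # 0.75 with these counts
specificity = d / (b + d)
ppv = a / (a + b)          # positive predictive value
npv = d / (c + d)          # negative predictive value

# Cohen's kappa: observed agreement relative to agreement expected by chance.
p_observed = (a + d) / n
p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (p_observed - p_expected) / (1 - p_expected)  # ~0.80 here

# McNemar's test for a difference in prevalence between the two sources
# uses only the discordant cells (b and c); compare to chi-square, 1 df.
mcnemar_chi2 = (b - c) ** 2 / (b + c)
```

With these hypothetical counts the database would underreport the condition (b < c), which is the direction of discrepancy the study most often observed.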

Results

Of the 1,050 randomly selected charts, 31 were not available in the medical record department because the patients were still in the hospital at the time of the request. The data for the remaining 1,019 patient paper charts were manually reviewed and successfully linked to the corresponding records in the electronic database and were included in the analysis of this study.

Table 2 shows the characteristics of the 1,019 study participants. The average patient age was 49.5 years, and about 64 percent were men. The majority of the patients were Saudi (81 percent), followed by non-Arab (10 percent), and Arab non-Saudi (9 percent). Five percent died during their hospital stay, 88 percent were discharged alive, and 7 percent were discharged against medical advice (DAMA). The highest percentage of patients (36 percent) had 5 to 10 comorbidities. Only 10 percent had more than 10 comorbidities.

Table 3 shows the prevalence of the 17 comorbidities included in the Charlson index according to data source (electronic database and paper patient chart). The prevalence of 9 of the 17 comorbidities did not differ significantly between the two databases (p > .05). The electronic database underreported the prevalence for six conditions (myocardial infarction, 12.8 percent vs. 40.4 percent; hemiplegia/paraplegia, 1.5 percent vs. 5.2 percent; diabetes, 32.8 percent vs. 35.5 percent; diabetes with chronic complication, 7.3 percent vs. 19.3 percent; mild liver disease, 0.9 percent vs. 4.9 percent; and renal disease, 8.9 percent vs. 10.5 percent; p < .01) and overreported the prevalence for two conditions (cerebrovascular disease, 14.3 percent vs. 12.0 percent; rheumatologic disease, 1.7 percent vs. 0.7 percent; p < .01).

Five quantitative indices to assess the extent of accuracy of the electronic database in reproducing the comorbidities included in the paper charts are presented in Table 4. The kappa value indicated excellent agreement (κ = 0.81–1.00) between the electronic data and the paper chart for three conditions (cerebrovascular disease, metastatic solid tumor, and AIDS/HIV), substantial agreement (κ = 0.61–0.80) for seven comorbidities, moderate agreement (κ = 0.41–0.60) for three comorbidities, and fair agreement (κ = 0.21–0.40) for three comorbidities. Only mild liver disease had poor agreement (κ ≤ 0.20). Sensitivity also varied according to the comorbidity, from a high of 100 percent for rheumatologic disease to a low of 6 percent for mild liver disease. Of the 17 comorbidities included in the Charlson index, six comorbidities had sensitivity above 80 percent as recorded in the electronic data (cerebrovascular disease, chronic pulmonary disease, rheumatologic diseases, diabetes, metastatic solid tumor, and AIDS/HIV). On the other hand, six comorbidities had a sensitivity of less than 50 percent (myocardial infarction, peripheral vascular disease, hemiplegia/paraplegia, diabetes with chronic complication, mild liver disease, and moderate liver disease). The specificity values for all 17 comorbidities were greater than 93.0 percent, indicating that the electronic database performed very accurately when the condition was not present in the paper chart.

Table 4 also presents positive predictive values and negative predictive values, which indicate the extent to which a comorbidity present in or absent from the electronic database was also present in or absent from, respectively, the paper chart. Positive predictive values were low (≤50 percent) for four comorbidities (peripheral vascular disease, rheumatologic diseases, mild liver disease, and moderate liver disease). On the other hand, the positive predictive values were 80 percent or greater for seven comorbidities (myocardial infarction, chronic pulmonary disease, diabetes, renal disease, any malignancy, metastatic solid tumor, and AIDS/HIV). All but one of the 17 conditions (myocardial infarction) had a high negative predictive value (≥85.0 percent), indicating that their absence in the electronic database also indicated their absence in the paper chart.

When we compared our study findings with those reported by Quan et al.21 and Kieszak et al.,22 we found that the kappa values for four conditions (myocardial infarction, hemiplegia/paraplegia, diabetes with chronic complication, and mild liver disease) were in higher kappa categories in the study by Quan et al. than in ours (see Table 5). On the other hand, the kappa values of five other conditions (peripheral vascular disease, cerebrovascular disease, dementia, renal disease, and AIDS/HIV) were in higher kappa categories in our study than in that of Quan et al. The kappa values calculated for all conditions in the study by Kieszak et al. were lower than the corresponding kappa values in our study, although three were in the same kappa value category as our study.

Discussion

Our study examined the accuracy of a hospital electronic database in capturing comorbidities documented in paper charts. Using the 17 comorbidities included in the Charlson index, we found that the overall accuracy of the electronic database was reasonably good. Although the electronic database reported a prevalence for the majority of the comorbidities included in the Charlson index that did not differ significantly from that reported by the paper charts, it tended to underreport the prevalence of the comorbidities when there was a discrepancy between the two data sources. Of the 17 comorbidities, the electronic database had substantial or excellent agreement for 10 conditions relative to paper chart data, and only one showed poor agreement.

Our study found that the difference in prevalence (whether higher or lower) was greater than 5 percent for only two conditions among the 17 included in the Charlson index when the hospital electronic database was compared with the corresponding paper charts. When the results of the accuracy of the 17 conditions from our study were compared with those in the study by Quan et al.,23 four kappa values reported by Quan et al. were in higher categories than in our study, and five were in higher categories in our study. These results suggest that our electronic database coding accuracy is probably similar to or may be better than the Canadian administrative data and more accurate than the US administrative data, as demonstrated by higher kappa value categories in our study in nine out of 12 conditions studied by Kieszak et al.24 These variations in accuracy between studies possibly reflect different types of administrative data. In our study, the data are created as part of the normal hospital operations and are intended for internal use by the hospital for quality assurance, utilization studies, and research purposes, in contrast to other studies’ use of claims data, which could contain information intended to maximize reimbursement.25 Other possible explanations include variation in the clarity and completeness of documentation by physicians and variation in the experience and knowledge of the coders. Explanations could also include the lack of standardized guidelines for the coding of comorbidities across institutions or across countries. Quan et al.26 suggested that the better accuracy for comorbidities in the Canadian administrative data compared with the US data in that study could be the result, at least partially, of having a medical chart coding department with a single coordinator who supervises coding practices. Interestingly, this management structure is the same as that used for the administration of medical chart coding in our study hospital. Having a dedicated department for coding probably creates an environment that changes the attitude toward documentation, allows for closer supervision and continued training,27 and ensures the hiring of qualified coders.

This study found that specificity was high (>93 percent) for all comorbidities, indicating that the electronic data did not include conditions that were not actually present in the patients as reflected in the paper charts. The low rate of false positives associated with high specificity may explain the generally high positive predictive value for most of the conditions except those with low prevalence rates.28
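The dependence of predictive values on prevalence noted above follows directly from Bayes' rule. A brief illustration, with hypothetical sensitivity, specificity, and prevalence values chosen for demonstration rather than taken from Table 4:

```python
# Positive predictive value from sensitivity, specificity, and prevalence
# (Bayes' rule). Values below are hypothetical, chosen only to show how
# PPV falls for rarer conditions even when specificity is fixed and high.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same test characteristics applied at two prevalence levels:
common = ppv(0.80, 0.95, 0.30)  # ~0.87 for a common comorbidity
rare = ppv(0.80, 0.95, 0.02)    # ~0.25 for a rare one
```

This is consistent with the pattern in the study, where low positive predictive values clustered among the low-prevalence comorbidities.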

Similar to other studies, our study found low sensitivity for chronic diseases including past myocardial infarction, peripheral vascular disease, and hemiplegia/paraplegia. As suggested by others, physicians may overlook documenting patients’ established chronic conditions that do not require diagnostic investigations. As a result, coders may not code these conditions because coders tend to code only conditions that are clearly noted by physicians.29

Previous research has found that an increase in the number of codes on the discharge abstract has an inverse relation to coding completeness.30, 31 Our study supported the same finding: we found that a discrepancy between the electronic database and the paper chart was more likely with an increase in the number of comorbidities (data not shown). Having a higher number of comorbidities may lead coders to consider some to be of less importance to enter into the database.32

There are several potential limitations to this study. First, this study was conducted in a single university-affiliated hospital in Eastern Province, Saudi Arabia. Other studies have reported that the accuracy of administrative data varies between teaching and nonteaching hospitals.33, 34 The results may therefore not generalize to other medical institutions or other countries. Second, drawing the study patients from the Department of General Medicine may have reduced the applicability of the results to other specialties. Other studies have found that documentation in paper charts depends on physician specialty and the type of medical condition.35 Third, this study was conducted on a hospital electronic database created for internal use by the hospital rather than for reimbursement claims; databases created for different purposes may vary in data quality.36 In addition, the chart coding used for this study was conducted in a medical chart coding department that uses standardized and professionalized coding rules and methods.37 The extent to which this organizational structure affects the quality of coding remains unknown. Fourth, we evaluated our database using ICD-9-CM despite its replacement by ICD-10 in several countries. Several studies have demonstrated that ICD-9-CM and ICD-10 administrative data were coded reasonably well and had similar validity in recording information on clinical conditions.38 Fifth, data abstraction by our research assistants was considered the gold standard in all validity analyses. We assumed that the research assistants abstracted all available information with complete accuracy and that physicians were perfectly accurate in documenting patients’ histories and in making diagnoses. As a result, this study does not take into consideration the possibility of errors in physician documentation or in information abstraction by our researchers.

In summary, our study demonstrated that individual comorbidities in the electronic database of the University of Dammam teaching hospital coded according to ICD-9-CM are, on average, accurate for most but not all of the comorbidities in the Charlson index. Researchers and management can utilize the electronic database for research and general administrative purposes, although they must account for the degree of inaccuracy of some of the comorbidities.

 

Acknowledgment

Many thanks are due to Atheer Al-Saif and Lolwa Al-Mukhailid for abstracting data from the paper charts. We also would like to thank Mr. Mohamed H. Fahmy for his technical support with the electronic database. This study was supported by a grant from the Deanship of Scientific Research at the University of Dammam.

 

Adel Youssef, MD, PhD, is an assistant professor in the Department of Health Information Management and Technology in the College of Applied Medical Sciences at the University of Dammam in Saudi Arabia.

Hana Alharthi, PhD, is an assistant professor in the Department of Health Information Management and Technology in the College of Applied Medical Sciences at the University of Dammam in Saudi Arabia.

 

Notes

  1. Dean, B. B., J. Lam, J. L. Natoli, Q. Butler, D. Aguilar, and R. J. Nordyke. “Review: Use of Electronic Medical Records for Health Outcomes Research: A Literature Review.” Medical Care Research and Review 66 (2009): 611–38.
  2. Deyo, R. A., V. M. Taylor, P. Diehr, et al. “Analysis of Automated Administrative and Survey Databases to Study Patterns and Outcomes of Care.” Spine 19 (1994): 2083S–2091S.
  3. Nuttall, M., J. van der Meulen, and M. Emberton. “Charlson Scores Based on ICD-10 Administrative Data Were Valid in Assessing Comorbidity in Patients Undergoing Urological Cancer Surgery.” Journal of Clinical Epidemiology 59 (2006): 265–73.
  4. Mitiku, T. F., and K. Tu. “Using Data from Electronic Medical Records: Theory versus Practice.” Healthcare Quarterly 11 (2008): 23–25.
  5. McGregor, J. C., P. W. Kim, E. N. Perencevich, et al. “Utility of the Chronic Disease Score and Charlson Comorbidity Index as Comorbidity Measures for Use in Epidemiologic Studies of Antibiotic-Resistant Organisms.” American Journal of Epidemiology 161 (2005): 483–93.
  6. Ghali, W. A., R. E. Hall, A. K. Rosen, A. S. Ash, and M. A. Moskowitz. “Searching for an Improved Clinical Comorbidity Index for Use with ICD-9-CM Administrative Data.” Journal of Clinical Epidemiology 49 (1996): 273–78.
  7. Charlson, M. E., P. Pompei, K. L. Ales, and C. R. MacKenzie. “A New Method of Classifying Prognostic Comorbidity in Longitudinal Studies: Development and Validation.” Journal of Chronic Diseases 40 (1987): 373–83.
  8. Jha, A. K., C. M. DesRoches, P. D. Kralovec, and M. S. Joshi. “A Progress Report on Electronic Health Records in U.S. Hospitals.” Health Affairs 29 (2010): 1951–57.
  9. Healthcare Information and Management Systems Society (HIMSS). HIMSS Analytics: EMR Adoption Model. 3rd quarter 2012. http://www.himssanalytics.org/hc_providers/emr_adoption.asp (accessed November 2012).
  10. Deyo, R. A., D. C. Cherkin, and M. A. Ciol. “Adapting a Clinical Comorbidity Index for Use with ICD-9-CM Administrative Databases.” Journal of Clinical Epidemiology 45 (1992): 613–19.
  11. Sundararajan, V., T. Henderson, C. Perry, A. Muggivan, H. Quan, and W. A. Ghali. “New ICD-10 Version of the Charlson Comorbidity Index Predicted In-Hospital Mortality.” Journal of Clinical Epidemiology 57 (2004): 1288–94.
  12. Quan, H., V. Sundararajan, P. Halfon, et al. “Coding Algorithms for Defining Comorbidities in ICD-9-CM and ICD-10 Administrative Data.” Medical Care 43 (2005): 1130–39.
  13. Quan, H., G. A. Parsons, and W. A. Ghali. “Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data.” Medical Care 40 (2002): 675–85.
  14. Henderson, T., J. Shepheard, and V. Sundararajan. “Quality of Diagnosis and Procedure Coding in ICD-10 Administrative Data.” Medical Care 44 (2006): 1011–19.
  15. Thygesen, S. K., C. F. Christiansen, S. Christensen, T. L. Lash, and H. T. Sorensen. “The Predictive Value of ICD-10 Diagnostic Coding Used to Assess Charlson Comorbidity Index Conditions in the Population-based Danish National Registry of Patients.” BMC Medical Research Methodology 11 (2011): 83.
  16. De, C. C., H. Quan, A. Finlayson, et al. “Identifying Priorities in Methodological Research Using ICD-9-CM and ICD-10 Administrative Data: Report from an International Consortium.” BMC Health Services Research 6 (2006): 77.
  17. Almutairi, M. S., R. M. Alseghayyir, A. A. Al-Alshikh, H. M. Arafah, and M. S. Househ. “Implementation of Computerized Physician Order Entry (CPOE) with Clinical Decision Support (CDS) Features in Riyadh Hospitals to Improve Quality of Information.” Studies in Health Technology and Informatics 180 (2012): 776–80.
  18. Department of Health and Human Services. “Administrative Simplification: Adoption of a Standard for a Unique Health Plan Identifier; Addition to the National Provider Identifier Requirements; and a Change to the Compliance Date for the International Classification of Diseases, 10th Edition (ICD-10-CM and ICD-10-PCS) Medical Data Code Sets; Final Rule.” 45 CFR Part 162. Federal Register 77, no. 172 (September 5, 2012): 54664–720. Available at http://www.gpo.gov/fdsys/pkg/FR-2012-09-05/pdf/2012-21238.pdf.
  19. QuadraMed. “QuadraMed® Provides Healthcare IT and Services That Transform Quality Care into Financial Health.” 2012. Available at http://www.quadramed.com/ (accessed October 1, 2012).
  20. Landis, J. R., and G. G. Koch. “The Measurement of Observer Agreement for Categorical Data.” Biometrics 33 (1977): 159–74.
  21. Quan, H., G. A. Parsons, and W. A. Ghali. “Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data.”
  22. Kieszak, S. M., W. D. Flanders, A. S. Kosinski, C. C. Shipp, and H. Karp. “A Comparison of the Charlson Comorbidity Index Derived from Medical Record Data and Administrative Billing Data.” Journal of Clinical Epidemiology 52 (1999): 137–42.
  23. Quan, H., G. A. Parsons, and W. A. Ghali. “Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data.”
  24. Kieszak, S. M., W. D. Flanders, A. S. Kosinski, C. C. Shipp, and H. Karp. “A Comparison of the Charlson Comorbidity Index Derived from Medical Record Data and Administrative Billing Data.”
  25. Lash, T. L., V. Mor, D. Wieland, L. Ferrucci, W. Satariano, and R. A. Silliman. “Methodology, Design, and Analytic Techniques to Address Measurement of Comorbid Disease.” Journals of Gerontology Series A: Biological Sciences and Medical Sciences 62 (2007): 281–85.
  26. Quan, H., G. A. Parsons, and W. A. Ghali. “Validity of Information on Comorbidity Derived from ICD-9-CM Administrative Data.”
  27. Hassey, A., D. Gerrett, and A. Wilson. “A Survey of Validity and Utility of Electronic Patient Records in a General Practice.” BMJ 322 (2001): 1401–5.
  28. Brenner, H., and O. Gefeller. “Variation of Sensitivity, Specificity, Likelihood Ratios and Predictive Values with Disease Prevalence.” Statistics in Medicine 16 (1997): 981–91.
  29. Chong, W. F., Y. Y. Ding, and B. H. Heng. “A Comparison of Comorbidities Obtained from Hospital Administrative Data and Medical Charts in Older Patients with Pneumonia.” BMC Health Services Research 11 (2011): 105.
  30. Powell, H., L. L. Lim, and R. F. Heller. “Accuracy of Administrative Data to Assess Comorbidity in Patients with Heart Disease: An Australian Perspective.” Journal of Clinical Epidemiology 54 (2001): 687–93.
  31. Iezzoni, L. I., S. M. Foley, J. Daley, J. Hughes, E. S. Fisher, and T. Heeren. “Comorbidities, Complications, and Coding Bias: Does the Number of Diagnosis Codes Matter in Predicting In-Hospital Mortality?” JAMA 267 (1992): 2197–2203.
  32. Chong, W. F., Y. Y. Ding, and B. H. Heng. “A Comparison of Comorbidities Obtained from Hospital Administrative Data and Medical Charts in Older Patients with Pneumonia.”
  33. Iezzoni, L. I., M. Shwartz, M. A. Moskowitz, A. S. Ash, E. Sawitz, and S. Burnside. “Illness Severity and Costs of Admissions at Teaching and Nonteaching Hospitals.” JAMA 264 (1990): 1426–31.
  34. Iezzoni, L. I., S. Burnside, L. Sickles, M. A. Moskowitz, E. Sawitz, and P. A. Levine. “Coding of Acute Myocardial Infarction: Clinical and Policy Implications.” Annals of Internal Medicine 109 (1988): 745–51.
  35. Khan, N. F., S. E. Harrison, and P. W. Rose. “Validity of Diagnostic Coding within the General Practice Research Database: A Systematic Review.” British Journal of General Practice 60 (2010): e128–e136.
  36. Lash, T. L., V. Mor, D. Wieland, L. Ferrucci, W. Satariano, and R. A. Silliman. “Methodology, Design, and Analytic Techniques to Address Measurement of Comorbid Disease.”
  37. Luthi, J. C., N. Troillet, M. C. Eisenring, et al. “Administrative Data Outperformed Single-Day Chart Review for Comorbidity Measure.” International Journal for Quality in Health Care 19 (2007): 225–31.
  38. Quan, H., B. Li, L. D. Saunders, et al. “Assessing Validity of ICD-9-CM and ICD-10 Administrative Data in Recording Clinical Conditions in a Unique Dually Coded Database.” Health Services Research 43 (2008): 1424–41.


Adel Youssef, MD, PhD; and Hana Alharthi, PhD. “Accuracy of the Charlson Index Comorbidities Derived from a Hospital Electronic Database in a Teaching Hospital in Saudi Arabia.” Perspectives in Health Information Management (Summer 2013): 1-14.
