A Framework for Performance Comparison among Major Electronic Health Record Systems

By Tiankai Wang, PhD, and David Gibbs, PhD, CPHI, CHPS, CPHIMS, CISSP

Abstract

While nearly all hospitals have adopted electronic health record (EHR) systems, some are dissatisfied and considering replacement systems to better address unique organizational needs and priorities. With more than 4,000 certified health information technology products available, comparing the vast number of EHR options is complex. This study tested the hypothesis that various EHR systems demonstrate different financial and quality performance and presented a framework for comparison. Using a subscribed database containing US hospitals’ observations from 2011 to 2016, we estimated an ordinary least squares regression model with robust standard errors clustered by year. We regressed the selected finance and quality measures on indicator variables for the top EHR vendors, along with control variables. This study demonstrated an approach for analyzing performance data to help hospitals distinguish EHR systems on the basis of several organizational outcomes: return on assets, bed utilization rate, Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) summary star rating, and value-based purchasing Total Performance Score. This framework will help EHR acquisition teams make informed decisions.

Keywords: electronic health records; health information technology; financial and quality performance; comparison

Introduction

According to the Office of the National Coordinator for Health Information Technology (ONC) Certified Health IT Product List, there were 4,279 certified health information technology (IT) products in April 2018.1 Many healthcare organizations face the challenge of selecting a new or replacement electronic health record (EHR) system from the variety of choices available. While nearly all hospitals have adopted EHR systems, some are dissatisfied and considering replacing their systems with other products to better address unique organizational needs and priorities.2 Comparing the vast number of EHR options is complex, with numerous features and benefits to be considered. To help address the complexity, tools have been developed to identify objective criteria for comparison among EHR systems.3 In this paper we introduce an approach, based on our study, which may be of interest to those involved in comparing EHR systems from different vendors.

Background

One of the most important motivations for implementing EHR systems is to improve patient outcomes. The US federal government deemed the positive impact on patient outcomes to be worthy of financial incentives to hospitals adopting any of the available certified EHRs. While patient outcomes are important, other outcomes that must be considered include the long-term impact on financial performance and standardized quality scores to ensure sustainability and competitiveness of the healthcare delivery organization.

Other researchers have related a hospital’s investment in an EHR to positive financial outcomes, without distinguishing between competing EHR vendors.4 In prior research, theoretical analyses suggested that EHR systems can increase hospital financial performance, with the assumption that EHR systems would be interconnected, interoperable, adopted widely, and used effectively.5, 6 However, the empirical research had mixed results. Menachemi et al. looked at 82 Florida hospitals’ financial statements and an IT survey, with results showing a positive association between case mix and clinical IT use.7 A research team in Israel studied the use of customized EHRs to achieve cost savings by classifying specialty clinics for referrals into four classes based on average cost per visit. Clinics in class 1 had the lowest cost, while class 4 clinics were most expensive. Classes 1, 2, and 3 were represented in the EHR systems of 243 primary care facilities and available for referral; class 4 clinics required additional preauthorization. Primary care physicians initiating referrals to specialty clinics were presented with the classified options and tended to refer to those providers with lower cost. Having relative cost information available in the EHR at the time of the referral order resulted in lower overall costs during the four-year study. To become more competitive, the class 4 clinics eventually reduced their costs. The researchers concluded that EHR systems have a positive impact on finances.8 Subsequently, Collum et al. examined the levels of adoption of 32 specific clinical functions and found no long-term financial benefits of adoption or expanded use of EHR functions, based on their criteria.9 They highlighted that their findings were inconsistent with earlier studies that did show reduced operating costs associated with EHR adoption. These studies considered EHR systems in general without distinguishing between EHR vendors.

In addition to financial performance, quality performance is also an important consideration, especially for not-for-profit hospitals. Each healthcare organization must identify the appropriate balance of financial and quality performance representing its mission, vision, and value proposition. One cross-sectional study of 41 urban Texas hospitals across 10 metropolitan statistical areas examined discharge data covering more than 167,000 patients and concluded that hospitals with electronic records, order entry, and clinical decision support had lower mortality rates and fewer complications; these findings are indicators of quality care. The study used the Clinical Information Technology Assessment Tool (CITAT), which provides a numerical score indicating the level of automation used by a hospital. Four domains of automation are considered: test results, notes and records, order entry, and decision support. Indicators of quality for the study included complications, mortality, and length of stay (LOS). Higher levels of automation were associated with lower mortality rates and complications, in most instances. There was no clear relationship between the automation scores and LOS.10 Adler-Milstein et al. studied how hospital outcomes of process adherence, patient satisfaction, and efficiency were related to EHR adoption. Their results showed that increased EHR adoption was associated with improved process adherence and patient satisfaction, but not efficiency. The relationships were stronger in more recent years.11

Since the Hospital Value-Based Purchasing program was enacted as part of the Patient Protection and Affordable Care Act in 2010, hospitals have been incentivized to achieve high scores on quality measurements such as the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS). The HCAHPS score is an important measure of hospital quality because it represents data collected from patients regarding their hospital experiences and how well their expectations were met. The HCAHPS scores, along with other factors, are used by the Centers for Medicare and Medicaid Services (CMS) to calculate individual and overall five-star quality ratings and the Total Performance Score (TPS). These quality scores are reported to the public on the Hospital Compare website to enable consumer comparison and research. Carter and Silverman provide an effective summary of the complex formula used by CMS to determine HCAHPS scores.12 Herrin et al. analyzed how HCAHPS scores were affected by sociodemographic factors, cultural factors, and level of access to care. They concluded that a number of community factors other than direct hospital experience affect HCAHPS scores.13 To determine the overall rating score, CMS combines the HCAHPS scores from patients with objective data collected from hospitals. Quality measures collected include complication, mortality, and readmission rates and specific patient safety indicators.14 The TPS is based on the same HCAHPS scores with different quality measures. The TPS formula has been refined over time. In 2014, TPS was based on clinical care processes, patient experience, and outcomes.15 The broad use and acceptance of HCAHPS scores as quality indicators influenced their inclusion in this research study.

Researchers have previously examined the concept of EHR systems and sought evidence that EHRs, in general, could meet expectations to improve efficiency and productivity in hospitals,16 but they did not attempt to distinguish among the various certified EHR vendors. Some of these studies focused on patient outcomes, while others analyzed EHR usability or financial outcomes.17–20 Ratwani et al. explored the barriers associated with comparing usability of EHRs, highlighting the need for robust comparison tools. They pointed out that although user-centered design was required by the ONC for products to be included on the public certified health IT products list, there was a lack of consistency among vendors.21 These studies confirmed the need for standardized comparison tools to enable purchasers of EHR systems to be more fully informed.22 They also highlighted the shortage of objective assessments for EHR systems that have already been implemented. The ONC proposed ways to improve comparison among vendors by validating certified and ancillary uncertified features of health IT.23

In 2016, the ONC published a report to Congress, Report on the Feasibility of Mechanisms to Assist Providers in Comparing and Selecting Certified EHR Technology Products.24 The report outlines four mechanisms to help health IT decision makers compare options and describes how the mechanisms are applicable to all providers and especially organizations lacking resources to independently complete a thorough scan of available technology options. The ONC cited a previous nationwide study, which concluded that EHR adoption challenges vary by organization and were affected by factors such as practice setting.25 The study found that selecting vendors for health IT was the second most frequently reported challenge to technology adoption, underscoring the complexity of comparing options. Neither the ONC report nor the national study included an analysis of how vendor selection affected finance and quality outcomes.

This study analyzed a collection of available data to identify objective measurements related to the financial and quality score impacts of specific EHR implementations. The result was a model that may be useful to organizations desiring to compare EHR solutions on the basis of the organization’s unique mission, values, and priorities.

Objective

Currently, thousands of EHR products are available in the market. Naturally, practitioners question how well individual EHR systems perform. While many have studied EHRs in general, to our knowledge, no one has studied the differences among the various EHR system vendors regarding the impact of their individual products on finance and quality performance indicators. This leads to our hypothesis that various EHR systems demonstrate different finance and quality performance.

Methods

Sample Selection

In our study, we used a subscription-based health data provider, Definitive Healthcare, because its dataset covered more than 8,000 hospitals in the United States offering a broad range of healthcare services. Because Definitive Healthcare updates its dataset frequently, the data were current.

The original dataset contained 8,825 unique hospital observations from 2011 to 2016. The dataset contained historical financial data and quality measures back to 2009. The sample data used in this paper were downloaded on October 21, 2016. We dropped observations with unreasonable values, including 64 observations with total assets less than or equal to zero and 513 observations with total uncompensated care (unreimbursed cost) less than zero. We further dropped observations lacking the information required to calculate dependent, independent, and control variables. After data cleaning, our final sample contained 2,463 observations. This sample allowed us to examine hospital financial and quality performance measures over multiple years.
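The cleaning rules above can be sketched in pandas; the column names and toy values below are hypothetical stand-ins, not the actual Definitive Healthcare field names:

```python
import pandas as pd

# Hypothetical columns standing in for the Definitive Healthcare fields.
hospitals = pd.DataFrame({
    "total_assets": [250.0, -5.0, 0.0, 180.0],        # $ millions
    "uncompensated_care": [12.0, 3.0, 8.0, -1.0],     # $ millions
    "roa": [0.07, 0.05, 0.02, 0.09],
})

# Drop observations with unreasonable values, mirroring the study's rules:
# total assets <= 0, and total uncompensated care < 0.
clean = hospitals[(hospitals["total_assets"] > 0) &
                  (hospitals["uncompensated_care"] >= 0)]

# Drop observations missing any variable required by the model.
clean = clean.dropna(subset=["roa", "total_assets", "uncompensated_care"])
```

Applied to real data, the same two filters plus the missing-value screen would reduce the 8,825 original observations to the final analysis sample.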

Empirical Model

To test our hypothesis, we estimated the following ordinary least squares (OLS) regression model with robust standard errors clustered by year:

Performance = β0 + EHR vendor indicators + ∑Controls + Year fixed effects + ε

We selected several hospital performance measures as the dependent variables, including financial performance and quality performance. Consistent with prior literature,28, 29 in this paper we used return on assets (ROA) as the dependent variable measuring hospital financial performance. The quality measures selected were found in relevant literature and consisted of the bed utilization rate (BUR),30 HCAHPS summary star rating,31 Hospital Compare Overall Rating,32 and value-based purchasing TPS.33 These quality measures were available in the Definitive Healthcare dataset.

We created EHR vendor indicator variables as the independent variables. We identified the top five EHR products listed in the Definitive Healthcare database. Figure 1 represents the relative market share of each included vendor. To avoid having this study be misunderstood as EHR vendor promotion, we assigned the letters A to E to the top five EHR vendors rather than disclose their true names. Next, we created an indicator variable for each vendor. For example, the variable for vendor A is set equal to 1 if the hospital’s EHR system is vendor A’s product and 0 otherwise. Thus, the baseline was the use of an EHR system not manufactured by one of the top five vendors. By examining the coefficient estimates of the independent variables, we could investigate whether each top vendor’s EHR system performed better or worse than EHR systems from outside the top five.
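The indicator construction described above can be sketched as follows; the vendor labels mirror the study's anonymized A–E scheme, and the data are hypothetical:

```python
import pandas as pd

# Toy data: each hospital's EHR vendor, with "other" as the baseline
# (an EHR system not made by one of the top five vendors).
df = pd.DataFrame({"vendor": ["A", "C", "other", "E", "A"]})

top_five = ["A", "B", "C", "D", "E"]
for v in top_five:
    # vendor_A = 1 if the hospital runs vendor A's product, else 0, and so on.
    df[f"vendor_{v}"] = (df["vendor"] == v).astype(int)
```

Hospitals whose EHR is not from a top-five vendor receive 0 on all five indicators, so the estimated vendor coefficients are interpreted relative to that baseline group.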

Following Wang et al., we included the following control variables in our model: hospital size; market concentration index (MCI); payer mix for Medicare and for Medicaid (i.e., the percentage of revenue coming from each of those programs); uncompensated care cost; ownership (governmental, proprietary, and nonprofit, with nonprofit used as the reference group); teaching status; geographic classification; and year fixed effects.34 The definitions of the variables are specified in Table 1.
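A minimal sketch of this specification using statsmodels is shown below; the variable names and synthetic data are placeholders, not the study's actual dataset, and only a subset of the controls is included for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Synthetic panel with placeholder names standing in for the study's measures.
df = pd.DataFrame({
    "roa": rng.normal(0.07, 0.05, n),          # dependent variable
    "vendor_A": rng.integers(0, 2, n),         # EHR vendor indicators
    "vendor_B": rng.integers(0, 2, n),
    "size": rng.normal(177.7, 50.0, n),        # total assets, $ millions
    "year": rng.integers(2011, 2017, n),
})

# OLS with year fixed effects (C(year)) and standard errors clustered by
# year, mirroring the paper's estimation strategy.
model = smf.ols("roa ~ vendor_A + vendor_B + size + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["year"]})
```

In the full model, each quality measure (BUR, HCAHPS, overall rating, TPS) would replace `roa` in turn as the dependent variable, with all five vendor indicators and the complete set of controls on the right-hand side.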

Table 2 displays descriptive statistics for the full sample (2,463 observations). In the sample data, the mean hospital ROA was 0.069, consistent with the findings of Collum et al.35 The mean bed utilization rate was 0.53, the mean HCAHPS score was 3.04, the mean overall rating was 2.99, and the mean TPS was 38.32.

The remainder of Table 2 presents descriptive statistics for the control variables used throughout the study. The mean hospital size was $177.68 million in total assets. The average market concentration index was 0.34, and the mean Medicare payer mix, Medicaid payer mix, and uncompensated care cost were 0.37, 0.11, and 19.67, respectively, comparable to the values found in prior studies.

Results

Table 3 reports correlation coefficients for the full sample. We found that vendors C and E were not significantly correlated with the financial measure (ROA), while the other three vendors were significantly correlated with ROA. Regarding quality measures, vendors A and C were significantly correlated with all quality measures, while the other three vendors were significantly correlated with only some quality measures.

To test our hypothesis, we regressed the selected finance and quality measures on the vendor indicator variables, along with the control variables. Table 4 presents the multiple regression results of our model.

The explanatory power of our models, as evidenced by the adjusted R2, varied from 8.5 percent in the ROA model to 38.2 percent in the BUR model, suggesting that 8.5 to 38.2 percent of the variation was explained by the independent variables across the different models. The magnitude of the adjusted R2 in our results was consistent with a prior study.36
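For reference, the adjusted R2 reported here penalizes R2 for the number of predictors; a minimal computation of the standard formula (the example values are illustrative, not the study's):

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjusted R^2 for n observations and k predictors (excluding intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative: an R^2 of 0.10 with 2,463 observations and 20 predictors
# shrinks slightly once the predictor count is penalized.
adj = adjusted_r2(0.10, 2463, 20)
```

With a sample of 2,463 observations and a modest number of predictors, the adjustment is small, so the adjusted R2 values closely track the unadjusted fit of each model.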

Discussion

Table 4 shows that various EHR systems performed differently. In the ROA model, vendors B and C showed positive associations with ROA, but only the coefficient of vendor C was significant (p = 0.044), suggesting that using vendor C’s system would significantly raise a hospital’s return on assets; that is, hospitals using vendor C had an ROA 0.0226 higher than hospitals not using one of the top five vendors’ EHR systems. On average, hospitals’ total assets were $177.68 million. These results suggest that, on average, a hospital using vendor C’s EHR system could see roughly a $4.02 million increase in net income (0.0226 × $177.68 million), compared with hospitals not using an EHR system from one of the top five vendors. With the sample data having an average net income of $22.1 million, this reflects an 18.17 percent increase in hospital net income associated with the adoption and utilization of vendor C’s EHR system. However, using an EHR did not guarantee an increase in ROA. Table 4 shows that use of vendor A’s EHR system was associated with a statistically significant ROA decrease of 0.0111 (p = 0.014).

The BUR model showed mixed results across the vendors. Vendors A, C, and E showed positive and significant associations with the BUR, indicating that the use of their EHRs would significantly increase the hospital’s bed utilization rate. Among these three products, vendor E had the largest coefficient at 0.0321 (p = 0.008), suggesting a 0.0321 increase in the bed utilization rate with adoption of vendor E’s system. With an average BUR of 0.53 in the sample data, vendor E’s increase represented a 6.06 percent rise in the bed utilization rate. Vendor B also had a positive coefficient, but the finding was not significant (p = 0.310). Vendor D showed the only negative association with BUR at −0.0361, which was statistically significant (p = 0.002), indicating that the use of vendor D’s system would reduce the average bed utilization rate by 0.0361 or 6.81 percent, compared with not using an EHR system from the top five vendors.

The HCAHPS model showed statistically significant results for four of the five vendors. Vendors A, B, C, and D all showed statistically significant and positive associations with HCAHPS scores. Vendor C’s coefficient was the highest at 0.2779 (p = 0.001), suggesting an HCAHPS score increase of 0.2779 or 9.14 percent based on the average HCAHPS score of 3.04 in the sample data. Vendor E had a positive but not significant association with HCAHPS at 0.0141 (p = 0.769).

The model for overall rating showed significant positive associations for all five vendors. The highest coefficient was 0.2966 (p = 0.001) for vendor C, with an average overall rating score for the sample data of 2.99. This suggested that hospitals that adopt an EHR system from vendor C would have an overall score 9.92 percent higher than those without an EHR system from one of the top five vendors.

The TPS model had mixed results. Vendors A, B, and C showed positive and significant associations with the TPS, indicating that hospitals using one of these EHR systems would have higher TPSs than those not using any of these EHR systems. Among these three vendors, vendor C had the highest coefficient at 2.6266, and this finding was statistically significant (p = 0.001). With the average TPS in the sample data being 38.32, this suggests that hospitals with an EHR system from vendor C would have a TPS 6.85 percent higher than hospitals not using an EHR from one of these vendors. Vendor A’s coefficient of 0.2451 and vendor B’s coefficient of 0.7800 were much lower than the coefficient for vendor C. Vendor D had a positive coefficient of 0.3245, but this result was not statistically significant (p = 0.255). Vendor E had the only negative coefficient for TPS at −0.4534, which also was not statistically significant (p = 0.469).
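The percentage effects quoted in the paragraphs above are each coefficient divided by the corresponding sample mean from Table 2; a quick arithmetic check:

```python
# (coefficient, sample mean) pairs taken from the Discussion and Table 2.
effects = {
    "BUR, vendor E":     (0.0321, 0.53),
    "HCAHPS, vendor C":  (0.2779, 3.04),
    "Overall, vendor C": (0.2966, 2.99),
    "TPS, vendor C":     (2.6266, 38.32),
}

# Express each coefficient as a percentage of its sample mean.
percent = {name: round(100 * coef / mean, 2)
           for name, (coef, mean) in effects.items()}
# percent -> {"BUR, vendor E": 6.06, "HCAHPS, vendor C": 9.14,
#             "Overall, vendor C": 9.92, "TPS, vendor C": 6.85}
```

These match the 6.06, 9.14, 9.92, and 6.85 percent figures reported in the text.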

These results show that the EHR systems performed differently on financial and quality measures. Vendor A’s system significantly improved quality performance but not financial performance. Vendor B’s system significantly improved most quality performance measures, with the exception of BUR. Vendor C’s system had the best results in our models, with significant improvement on all performance measures; it had the highest coefficient in four of the five models and the second-highest coefficient in the BUR model. Given these results, it was not a surprise that vendor C was the market-leading manufacturer. Vendor D’s system was the only one that lowered both ROA and BUR. Vendor E’s system significantly improved only two quality measures, but it had the highest positive coefficient in the BUR model.

These results shed some light on EHR system selection. If a for-profit hospital focuses on financial performance, administrators might prefer vendor C’s system, whereas if the bed utilization rate is the key performance indicator in a nonprofit hospital, vendor E’s system might be the best choice. Thus, hospital administrators need to make decisions on EHR adoption according to their hospital’s mission and goals.

Limitations

This study leveraged available data to test a hypothesis. The available data were collected from specific real-world implementations of commercially available EHR systems. One limitation of this study was the absence of details about how each EHR system was configured. EHR systems are often customized during implementation to integrate with other local systems and address local needs.37 Future research in this area would benefit from including a variety of source data as well as incorporating additional quantitative metrics (e.g., readmission rates) that may be a priority for some healthcare organizations.

Because of the data cleaning, our final sample contained only short-term acute care hospitals, without other types of hospitals such as VA hospitals or psychiatric hospitals. Thus, the sample is representative of short-term acute care hospitals only, and the results should not be generalized to other types of hospitals.

Conclusion

This study demonstrated how a variety of financial and quality factors associated with candidate EHR systems can be analyzed to measure how well the systems would meet organizational priorities. Combining this approach with others that focus on patient outcomes would provide EHR acquisition teams with additional information during the decision-making process.

 

Tiankai Wang, PhD, is an associate professor at Texas State University in San Marcos, TX.

David Gibbs, PhD, CPHI, CHPS, CPHIMS, CISSP, is an assistant professor at Texas State University in San Marcos, TX.

 

Notes

  1. Office of the National Coordinator for Health Information Technology. “Certified Health IT Products List.” Available at https://chpl.healthit.gov (accessed April 22, 2018).
  2. Ratwani, Raj M., A. Zachary Hettinger, and Rollin J. Fairbanks. “Barriers to Comparing the Usability of Electronic Health Records.” Journal of the American Medical Informatics Association 24, no. e1 (2017): e191–e193.
  3. Office of the National Coordinator for Health Information Technology (ONC). Report to Congress: Report on the Feasibility of Mechanisms to Assist Providers in Comparing and Selecting Certified EHR Technology Products. Washington, DC: Department of Health and Human Services, April 2016. Available at https://www.healthit.gov/sites/default/files/macraehrpct_final_4-2016.pdf.
  4. Schmitt, K. F., and D. A. Wofford. “Financial Analysis Projects Clear Returns from Electronic Medical Records.” Healthcare Financial Management 56, no. 1 (2002): 52–57.
  5. Grieger, D. L., S. H. Cohen, and D. A. Krusch. “A Pilot Study to Document the Return on Investment for Implementing an Ambulatory Electronic Health Record at an Academic Medical Center.” Journal of the American College of Surgeons 205, no. 1 (2007): 89–96.
  6. Wang, S. J., B. Middleton, L. A. Prosser, C. G. Bardon, C. D. Spurr, P. J. Carchidi, A. F. Kittler, R. C. Goldszer, D. G. Fairchild, A. J. Sussman, G. J. Kuperman, and D. W. Bates. “A Cost-Benefit Analysis of Electronic Medical Records in Primary Care.” American Journal of Medicine 114, no. 5 (2003): 397–403.
  7. Menachemi, N., J. Burkhardt, R. Shewchuk, D. Burke, and R. G. Brooks. “Hospital Information Technology and Positive Financial Performance: A Different Approach to Finding an ROI.” Journal of Healthcare Management 51, no. 1 (2006): 40–58, discussion 58–59.
  8. Bar-Dayan, Y., H. Saed, M. Boaz, Y. Misch, T. Shahar, I. Husiascky, and O. Blumenfeld. “Using Electronic Health Records to Save Money.” Journal of the American Medical Informatics Association 20, no. e1 (2013): e17–e20.
  9. Collum, T. H., N. Menachemi, and B. Sen. “Does Electronic Health Record Use Improve Hospital Financial Performance? Evidence from Panel Data.” Health Care Management Review 41, no. 3 (2016): 267–74.
  10. Amarasingham, Ruben, Laura Plantinga, Marie Diener-West, Darrell J. Gaskin, and Neil R. Powe. “Clinical Information Technologies and Inpatient Outcomes: A Multiple Hospital Study.” Archives of Internal Medicine 169, no. 2 (2009): 108–14.
  11. Adler-Milstein, Julia, Jordan Everson, and Shoou-Yih D. Lee. “EHR Adoption and Hospital Performance: Time-related Effects.” Health Services Research 50, no. 6 (2015): 1751–71.
  12. Carter, J. C., and F. N. Silverman. “Using HCAHPS Data to Improve Hospital Care Quality.” TQM Journal 28, no. 6 (2016): 974–90.
  13. Herrin, J., K. G. Mockaitis and S. Hines. “HCAHPS Scores and Community Factors.” American Journal of Medical Quality 33, no. 5 (2018): 461–71.
  14. Hu, J., J. Jordan, I. Rubinfeld, M. Schreiber, D. Nerenz, and B. Waterman. “Correlations among Hospital Quality Measures: What ‘Hospital Compare’ Data Tell Us.” American Journal of Medical Quality 32, no. 6 (2017): 605–10.
  15. Haley, Donald Robert, Zhao Mei, Aaron Spaulding, Hanadi Hamadi, Xu Jing, and Katelyn Yeomans. “The Influence of Hospital Market Competition on Patient Mortality and Total Performance Score.” Health Care Manager 35, no. 3 (2016): 220–30.
  16. Bar-Dayan, Y., H. Saed, M. Boaz, Y. Misch, T. Shahar, I. Husiascky, and O. Blumenfeld. “Using Electronic Health Records to Save Money.”
  17. Ratwani, Raj M., A. Zachary Hettinger, and Rollin J. Fairbanks. “Barriers to Comparing the Usability of Electronic Health Records.”
  18. Bar-Dayan, Y., H. Saed, M. Boaz, Y. Misch, T. Shahar, I. Husiascky, and O. Blumenfeld. “Using Electronic Health Records to Save Money.”
  19. Collum, T. H., N. Menachemi, and B. Sen. “Does Electronic Health Record Use Improve Hospital Financial Performance? Evidence from Panel Data.”
  20. Adler-Milstein, J., A. J. Holmgren, P. Kralovec, C. Worzala, T. Searcy, and V. Patel. “Electronic Health Record Adoption in US Hospitals: The Emergence of a Digital ‘Advanced Use’ Divide.” Journal of the American Medical Informatics Association 24, no. 6 (2017): 1142–48.
  21. Ratwani, Raj M., Natalie C. Benda, A. Zachary Hettinger, and Rollin J. Fairbanks. “Electronic Health Record Vendor Adherence to Usability Certification Requirements and Testing Standards.” JAMA 314, no. 10 (2015): 1070–71.
  22. Ratwani, Raj M., A. Zachary Hettinger, and Rollin J. Fairbanks. “Barriers to Comparing the Usability of Electronic Health Records.”
  23. Department of Health and Human Services. “ONC Health IT Certification Program: Enhanced Oversight and Accountability; Final Rule.” 45 CFR Part 170. Federal Register 81, no. 202 (October 19, 2016): 72404–71. Available at https://www.govinfo.gov/content/pkg/FR-2016-10-19/pdf/2016-24908.pdf.
  24. Office of the National Coordinator for Health Information Technology (ONC). Report to Congress: Report on the Feasibility of Mechanisms to Assist Providers in Comparing and Selecting Certified EHR Technology Products.
  25. Heisey-Grove, Dawn, Lisa-Nicole Danehy, Michelle Consolazio, Kimberly Lynch, and Farzad Mostashari. “A National Study of Challenges to Electronic Health Record Adoption and Meaningful Use.” Medical Care 52, no. 2 (2014): 144–48.
  26. Menachemi, N., J. Burkhardt, R. Shewchuk, D. Burke, and R. G. Brooks. “Hospital Information Technology and Positive Financial Performance: A Different Approach to Finding an ROI.”
  27. Collum, T. H., N. Menachemi, and B. Sen. “Does Electronic Health Record Use Improve Hospital Financial Performance? Evidence from Panel Data.”
  28. Gunny, Katherine A. “The Relation between Earnings Management Using Real Activities Manipulation and Future Performance: Evidence from Meeting Earnings Benchmarks.” Contemporary Accounting Research 27, no. 3 (2010): 855–88.
  29. Wang, Tiankai, Yangmei Wang, and Alexander McLeod. “Do Health Information Technology Investments Impact Hospital Financial Performance and Productivity?” International Journal of Accounting Information Systems 28 (2018): 1–13.
  30. Belciug, Smaranda, and Florin Gorunescu. “Improving Hospital Bed Occupancy and Resource Utilization through Queuing Modeling and Evolutionary Computation.” Journal of Biomedical Informatics 53 (2015): 261–69.
  31. Elliott, Marc N., Christopher W. Cohea, William G. Lehrman, Elizabeth H. Goldstein, Paul D. Cleary, Laura A. Giordano, Megan K. Beckett, and Alan M. Zaslavsky. “Accelerating Improvement and Narrowing Gaps: Trends in Patients’ Experiences with Hospital Care Reflected in HCAHPS Public Reporting.” Health Services Research 50, no. 6 (2015): 1850–67.
  32. Hu, J., J. Jordan, I. Rubinfeld, M. Schreiber, D. Nerenz, and B. Waterman. “Correlations among Hospital Quality Measures: What ‘Hospital Compare’ Data Tell Us.”
  33. Haley, Donald Robert, Zhao Mei, Aaron Spaulding, Hanadi Hamadi, Xu Jing, and Katelyn Yeomans. “The Influence of Hospital Market Competition on Patient Mortality and Total Performance Score.”
  34. Wang, Tiankai, Yangmei Wang, and Alexander McLeod. “Do Health Information Technology Investments Impact Hospital Financial Performance and Productivity?”
  35. Collum, T. H., N. Menachemi, and B. Sen. “Does Electronic Health Record Use Improve Hospital Financial Performance? Evidence from Panel Data.”
  36. Wang, Tiankai, Yangmei Wang, and Alexander McLeod. “Do Health Information Technology Investments Impact Hospital Financial Performance and Productivity?”
  37. Ratwani, Raj M., A. Zachary Hettinger, and Rollin J. Fairbanks. “Barriers to Comparing the Usability of Electronic Health Records.”

 
