Abstract
Background: Inadequate electronic health record (EHR) interface design hinders the physician-EHR experience, which may lead to increased physician frustration and fatigue.
Objectives: The objective of this study was to examine the physician EHR experience by evaluating the congruency between actual and perceived measures among physicians with different EHR expertise and utilization levels.
Methods: We conducted a cross-sectional EHR usability study of intensive care unit (ICU) physicians at a major Southeastern medical center. We used eye-tracking glasses to measure provider EHR-related fatigue and three surveys to measure the perceived EHR experience.
Results: Of the 25 ICU physicians, 11 were residents, nine were fellows, and five were attending physicians. No significant differences were found between actual fatigue levels and physicians' perceived EHR usability (p=0.159), workload (p=0.753), and satisfaction (p=0.773).
Conclusion: We found low congruency between physicians' EHR-related fatigue and their perceived ratings of usability, satisfaction, and workload, which suggests using both actual and perceived measures for a comprehensive assessment of the user experience. EHR-related fatigue may not be felt immediately by some physicians, which may explain why physicians with different fatigue levels rated their perceived EHR experience similarly.
Keywords: EHR, fatigue, usability, workload, satisfaction
Introduction
Use of electronic health records (EHRs) contributes to physician fatigue.1,2 The reported effect of EHR use on physician fatigue and burnout dramatically increased from 46 percent to 70 percent within a short time frame.3,4 Stringent documentation policies, prolonged time and effort searching for information, and work outside of work hours are the main contributors to EHR-related fatigue.5-7 Additionally, inadequate EHR interface design hinders the physician-EHR experience, which leads to exhaustion.8
Evaluations of the physician EHR experience have relied heavily on objective and subjective responses.9-11 User experience is defined as the perceptions and responses that result from using a product or tool.12 User responses to EHR use can be measured using physiologic data such as blink rates, pupillometry, and heart rate.13,14 User perceptions of EHR use can be measured through subjective instruments, including surveys and interviews.15
Popular survey instruments used to assess EHR-related fatigue include the NASA Task Load Index (TLX),16 the System Usability Scale (SUS),17 and the Questionnaire for User Interface Satisfaction (QUIS). The NASA-TLX measures the perceived workload of using a given system based on six factors.18 Recently, clinical researchers have adopted the SUS to measure the perceived usability of EHR systems by assessing 10 usability criteria.19 The QUIS assesses user satisfaction with a given information system through 27 satisfaction items.20 Although the QUIS was not designed for healthcare systems, researchers have used it to assess provider and patient satisfaction with health information systems such as EHRs.21,22 It is unclear whether survey-based assessments accurately measure the relationship between EHR usability and physician fatigue.
Only recently was the effect of EHR use on physicians' fatigue levels objectively quantified using physiologic measures, namely pupillometry.23,24 By measuring changes in pupil size, researchers reported that it takes approximately 20 minutes of continuous EHR use for 80 percent of physicians to experience fatigue at least once. Eye-tracking methods can precisely measure the impact of EHR use on physicians' cognition, providing context that subjective methods cannot. To our knowledge, no study has examined the accuracy of both subjective and objective measures as indicators of EHR-related fatigue levels.
The limited knowledge about the consistency of subjective and objective measurements in assessing physician fatigue levels drove our research question: Is there congruency between perceived and actual EHR experiences for physicians? We hypothesized that physicians' perceived EHR experiences would not be congruent with their actual experiences. Our goal was to understand which types of measurement are more appropriate for evaluating EHR experiences such as usability, workload, and satisfaction. Therefore, the objective of this study was to examine the physician EHR experience by evaluating the congruency between actual and perceived EHR experience measurements among physicians with different EHR expertise and utilization levels.
Methods
We conducted a cross-sectional EHR usability study of physicians in a 30-bed medical intensive care unit (MICU) at a major Southeastern medical center. The MICU used Epic as the institutional EHR system. We created a usability framework for this study that included four ICU patient cases, administration of three EHR surveys, and the use of an eye-tracking device.25 We recruited a random sample of 25 MICU physicians, including residents, fellows, and attendings, through departmental emails and flyers. For each physician, we collected the number of years of Epic experience and the estimated average number of hours spent using Epic per week.
A summary of the four simulation patients and tasks is provided below:26
Case 1: A 44-year-old female patient with multisystem organ failure. Participants were asked to manage medication orders and determine input from consulting clinical teams.
Case 2: A 60-year-old female patient with acute hypoxic respiratory failure. Participants were asked to review clinical documentation and flowsheets, evaluate changes related to the patient's condition and mechanical ventilation, and analyze microbiology data.
Case 3: A 25-year-old male patient with severe sepsis. Participants were asked to assess the clinical flowsheet, assess laboratory data, evaluate antibiotics and intravenous fluid management, and manage laboratory studies.
Case 4: A 56-year-old male trauma patient with postoperative heart failure and volume overload. Participants were asked to identify trends in the patient's weight during previous clinical encounters and to manage orders for IV fluids and other medications.
We defined the actual EHR experience as the fatigue level assessed by eye-tracking glasses, using pupil size as the measurement. We defined the perceived EHR experience as the levels of workload, usability, and satisfaction measured by three surveys: the NASA-TLX, SUS, and QUIS, respectively.
For Epic experience, we defined three groups: novice (one to three years of Epic experience), intermediate (three to five years), and expert (over five years).
For Epic user types, we also created three groups: low use (zero to 20 hours of Epic use per week), intermediate use (20-40 hours per week), and high use (over 40 hours per week).
Physicians reviewed four ICU patient records, and after reviewing each case, a research assistant (RA) asked a series of clinical questions to assess the level of comprehension. After completing all four cases, physicians completed three validated surveys—NASA-TLX, SUS, and QUIS—to assess the perceived workload, usability, and satisfaction of the EHR. The study was conducted in a private office space to avoid disruption to the MICU environment. Institutional Review Board approval was obtained prior to conducting the study.
Study Materials
After the study, we collected and analyzed pupillometry data for each physician from the eye-tracking device. The device records the pupil size of the right and left eye for each millisecond of the study. EHR-related fatigue was defined as an instance when the average pupil size of both eyes fell below the baseline average pupil size of both eyes. We previously reported that all 25 physicians experienced fatigue while using the EHR.23 Physicians experienced between one and four fatigue instances during the study. Physicians who experienced a single fatigue instance were defined as "low fatigue," two or three instances as "medium fatigue," and four instances as "high fatigue."
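To make this definition concrete, the sketch below counts fatigue instances from per-millisecond pupillometry samples. It is a minimal illustration under our own assumptions, not the study's actual pipeline: the baseline-window length, the minimum-duration filter, and all names are hypothetical.

```python
import numpy as np

def count_fatigue_instances(left, right, baseline_ms=5000, min_ms=1000):
    """Count spans where the two-eye average pupil size stays below baseline.

    left, right: per-millisecond pupil sizes for each eye.
    baseline_ms: hypothetical calibration window used to estimate the baseline.
    min_ms: minimum span length counted as a fatigue instance (a smoothing
            assumption on our part, not a detail reported in the paper).
    """
    mean_size = (np.asarray(left, float) + np.asarray(right, float)) / 2.0
    baseline = mean_size[:baseline_ms].mean()
    below = mean_size < baseline
    # Rising and falling edges delimit each contiguous below-baseline span.
    starts = np.flatnonzero(np.diff(np.r_[False, below].astype(int)) == 1)
    ends = np.flatnonzero(np.diff(np.r_[below, False].astype(int)) == -1)
    return int(np.sum((ends - starts + 1) >= min_ms))
```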
For each physician, we calculated perceived workload, usability, and satisfaction scores. For the NASA-TLX, we calculated a total workload score using raw TLX scores from 0 to 100, without applying weights.27 For the TLX, a score above 55 represents overload, while a score under 55 is considered normal workload.
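As an illustration, raw ("unweighted") TLX scoring reduces to averaging the six subscale ratings. The subscale names and functions below are ours, under the assumption that each subscale is rated on a 0-100 scale:

```python
TLX_SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings: dict) -> float:
    """Raw TLX: the unweighted mean of the six subscale ratings (each 0-100)."""
    return sum(ratings[s] for s in TLX_SUBSCALES) / len(TLX_SUBSCALES)

def tlx_band(score: float) -> str:
    # Cutoff used in this study: above 55 is overload, otherwise normal.
    return "overload" if score > 55 else "normal"
```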
For the SUS, we computed a total usability score using the standard method. For odd-numbered items, we subtracted one from the user response; for even-numbered items, we subtracted the user response from five. We then summed the converted responses for each physician and multiplied the total by 2.5, which converts the range of possible values to zero to 100. A SUS score of 0-50 was defined as unacceptable, 51-80 as acceptable, and 81-100 as excellent usability.28,29
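The SUS arithmetic just described translates directly into code; a minimal sketch (function names are ours):

```python
def sus_score(responses):
    """responses: ten 1-5 Likert ratings for SUS items 1-10, in order."""
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)  # odd: r-1; even: 5-r
    return total * 2.5  # rescales the 0-40 raw total to 0-100

def sus_band(score):
    # Bands used in this study: 0-50 unacceptable, 51-80 acceptable, 81-100 excellent.
    return "unacceptable" if score <= 50 else "acceptable" if score <= 80 else "excellent"
```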
For the QUIS, we calculated a total satisfaction score by averaging the scores of the five "overall" items. Each item is rated on a 10-point scale; therefore, QUIS scores ranged from 0 to 10. Scores of 1-5 were defined as low satisfaction, 6-7 as medium satisfaction, and 8-10 as high satisfaction (Table 1).
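A corresponding sketch for the QUIS score, again with hypothetical names; how fractional averages map onto the bands is our assumption:

```python
def quis_overall(overall_items):
    """overall_items: the five 'overall' QUIS ratings on a 10-point scale."""
    score = sum(overall_items) / len(overall_items)
    # Bands from this study: 1-5 low, 6-7 medium, 8-10 high; we assume
    # fractional averages fall into the band below the next cutoff.
    band = "low" if score <= 5 else "medium" if score <= 7 else "high"
    return score, band
```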
Outcomes
The primary endpoints of this study were the actual experience, measured by eye-tracking fatigue scores, and the perceived experience, measured by workload (NASA-TLX), usability (SUS), and satisfaction (QUIS). Secondary endpoints were physician EHR expertise (number of years using the EHR system) and physician EHR utilization (self-reported number of hours using the EHR per week).
Statistical Analysis
We aggregated all eye-tracking and survey data into a single file and analyzed it using SPSS® and MS Excel®. We used descriptive analysis to quantify differences between objective fatigue scores and subjective usability, workload, and satisfaction scores. We also conducted Pearson correlation tests to examine whether the objective and subjective measures were associated.
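The two tests reported in the Results can be expressed in a few lines. The sketch below uses SciPy in place of SPSS, and the function and variable names are illustrative:

```python
from scipy import stats

# One-way ANOVA: does a perceived score (e.g., SUS) differ across the
# low/medium/high fatigue groups identified by pupillometry?
def compare_fatigue_groups(low_scores, medium_scores, high_scores):
    return stats.f_oneway(low_scores, medium_scores, high_scores)  # (F, p)

# Pearson correlation: is the objective measure (fatigue instances) related
# to a subjective measure (e.g., satisfaction scores)?
def correlate_measures(fatigue_counts, perceived_scores):
    return stats.pearsonr(fatigue_counts, perceived_scores)  # (r, p)
```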
Results
Of the 25 ICU physicians, 11 were residents, nine were fellows, and five were attending physicians. Twelve (48 percent) were male; the mean age was 33 years (range: 28-55 years), and the mean weekly number of hours of current Epic use was 31.1 hours (IQR: 7.95-52.1 hours). Mean prior experience with Epic was four years (IQR: 2.0-5.5 years).
Among all 25 participants, the average total time to complete the four patient cases was 31:04 minutes. Residents spent the most time (40:44 minutes), followed by attendings (26:45 minutes); fellows were the quickest (17:58 minutes). The average chart review time was 4:42 minutes. Attendings spent the most time reviewing the records (5:25 minutes), followed by residents (5:05 minutes) and fellows (3:27 minutes). The average time to answer questions and complete tasks was 11:32 minutes. Residents spent the most time answering questions (20:21 minutes), followed by attendings (3:59 minutes) and fellows (3:13 minutes).
Actual and Perceived EHR Experience
All physicians experienced at least one fatigue instance: four (16 percent) experienced high fatigue levels, 11 (44 percent) experienced medium fatigue, and 10 (40 percent) experienced low fatigue levels. The overall rating for EHR satisfaction was "medium," with a score of 6 out of 10. The overall rating for EHR usability was "acceptable," with a score of 61 out of 100. The overall EHR workload level was "normal," with a score of 44 out of 100.
Using the QUIS, one (4 percent) physician rated their overall EHR satisfaction as "high," 17 (68 percent) reported "medium" EHR satisfaction, and seven (28 percent) reported "low" EHR satisfaction. Using the SUS, one (4 percent) physician reported "excellent" EHR usability, 19 (76 percent) reported "acceptable" EHR usability, and five (20 percent) reported "unacceptable" EHR usability. Using the NASA-TLX, 21 (84 percent) physicians indicated "normal" EHR workload, and four (16 percent) reported "overload."
No significant differences in EHR satisfaction were found between physicians experiencing low, medium, and high fatigue levels (Figure 1). The median (IQR) EHR satisfaction rating for physicians who experienced low fatigue was 5.7 (5.1-6.3); medium fatigue, 5.7 (5.5-6.1); and high fatigue, 5.4 (5.2-5.8). A one-way ANOVA showed no significant difference in EHR satisfaction across the three fatigue categories (p=0.773).
Similarly, no significant differences in perceived EHR usability ratings were found between physicians experiencing varying levels of fatigue (Figure 2). The median (IQR) EHR usability rating for physicians who experienced low fatigue was 55 (45.6-64.4); medium fatigue, 67.5 (62.5-71.3); and high fatigue, 63.8 (57.5-68.1). A one-way ANOVA showed no significant difference in EHR usability across the three fatigue categories (p=0.159).
Likewise, no significant differences in perceived EHR workload were found between physicians experiencing varying levels of fatigue (Figure 3). The median (IQR) EHR workload rating for physicians who experienced low fatigue was 44.6 (38.1-51); medium fatigue, 45.8 (34-51.3); and high fatigue, 52.1 (44.8-54.2). A one-way ANOVA showed no significant difference in EHR workload across the three fatigue categories (p=0.753).
Effect of EHR Expertise and Use
Based on self-reported years of EHR experience, there were four (16 percent) "novice" EHR users, 14 (56 percent) "intermediate" EHR users, and seven (28 percent) "expert" EHR users. EHR expertise was significantly associated with EHR satisfaction, such that more years of EHR expertise correlated with higher EHR satisfaction ratings (p<0.05) (Table 2). No significant correlation was found between EHR expertise and perceived usability or workload. Additionally, actual fatigue was not significantly associated with EHR expertise.
Based on self-reported hours spent in the EHR per week, there were seven (28 percent) physicians with "low" EHR use, eight (32 percent) with "intermediate" EHR use, and 10 (40 percent) with "high" EHR use. EHR usability was significantly associated with the level of EHR use, such that more hours spent using the EHR correlated with higher EHR usability ratings (p<0.05) (Table 2). Additionally, the level of EHR use was marginally correlated with both perceived EHR workload and satisfaction: EHR use was negatively correlated with perceived EHR workload, such that greater EHR use was associated with lower perceived workload (p<0.1), and positively correlated with EHR satisfaction, such that more hours spent in the EHR were associated with higher satisfaction ratings (p<0.1). No significant relationship was found between actual fatigue and EHR expertise (p=0.3) or EHR use (p=0.5).
Discussion
We conducted a cross-sectional study to investigate the differences between the actual and perceived EHR experiences of 25 ICU physicians with different EHR expertise and utilization levels while using a prominent EHR system. Our findings show no significant associations between the actual and perceived EHR experiences, which may suggest that the varying levels of EHR-related fatigue experienced by physicians are not reflected in their subjective evaluations of EHR usability, workload, and satisfaction. Physicians who experienced low, medium, or high fatigue levels gave relatively similar ratings for perceived usability, workload, and satisfaction. This suggests that actual fatigue levels measured by physiologic data from eye-tracking devices may be more accurate for assessing EHR-related fatigue.
EHR-related fatigue was not significantly associated with EHR expertise or EHR use. Physicians experienced similar fatigue levels regardless of the number of years they had used an EHR or the number of hours they spent in the EHR. This may suggest that EHR-related fatigue is less a user issue and more a design issue. For instance, if a physician with 10 years of EHR experience reaches the same fatigue levels as a physician with one year of experience, the problem likely lies in the interface design. Common interface design flaws we observed during the study included information overload in some EHR screens, such as the Flowsheet; heavy reliance on users' memory (recall rather than recognition); and challenges finding the latest and most accurate information.
All physicians experienced varying degrees of EHR-related fatigue, which supports the theory that EHRs contribute to physician fatigue.30-33 We expected that fatigued physicians would report poor EHR ratings: low usability, high workload, and low satisfaction. However, physicians in general reported "acceptable" EHR usability, "normal" EHR workload, and "medium" EHR satisfaction. This suggests that EHR-related fatigue is not felt immediately by some physicians; therefore, two physicians who experience different actual fatigue levels may rate their EHR experience similarly, which may not reflect the actual EHR experience. Future studies should examine the long-term effect of EHR-related fatigue on physician burnout.34,35
Limitations
This study had limitations. It was a single-site, single-EHR study; however, the EHR we examined is the most prominent EHR system in the US. The study focused only on ICU physicians, so the findings may not generalize to other specialties. We collected continuous pupil size data throughout the experiment for each physician but collected perceived (survey) data only once, at the end of the experiment. We did not administer the surveys after each patient case in order to avoid disrupting continuous EHR use, which would have jeopardized the reliability of the eye-tracking data.
Conclusion
We investigated the physician-EHR experience by evaluating the degree of congruency between actual EHR-related fatigue levels and perceived EHR usability, workload, and satisfaction. We found no relationship between changes in EHR-related fatigue and the perceived ratings, which suggests using both actual and perceived measures for a comprehensive assessment of the user experience. Subjective data may provide a baseline for EHR perceptions; however, this study demonstrated that physiologic data (i.e., pupil size) provide a more in-depth evaluation of the actual impact of the EHR on physician fatigue and well-being.
Declarations
Funding: This study was supported by NIH/NLM grants 1R01LM013606-01 and 1T15LM012500-01.
Conflicts of interest/Competing interests: The authors disclose no conflicts of interest.
Availability of data and material: Data is available upon request.
Code availability: No code is available for distribution.
Ethics approval: Institutional Review Board approval was obtained before conducting this study.
Consent to participate: All participants provided consent prior to the study.
Consent for publication: The authors consent to the publication of this work.
Notes
1. Kapoor M. “Physician Burnout in the Electronic Health Record Era.” Annals of Internal Medicine. 2019. 170(3): 216-216. doi:10.7326/l18-0601
2. Collier R. “Rethinking EHR interfaces to reduce click fatigue and physician burnout.” CMAJ. 2018. 190(33): E994-E995. doi:10.1503/cmaj.109-5644
3. Gardner RL, Cooper E, Haskell J, et al. “Physician stress and burnout: the impact of health information technology.” Journal of the American Medical Informatics Association. 2018. 26(2): 106-114. doi:10.1093/jamia/ocy145
4. Shanafelt TD, Hasan O, Dyrbye LN, et al. “Changes in Burnout and Satisfaction With Work-Life Balance in Physicians and the General US Working Population Between 2011 and 2014.” Mayo Clin Proc. Dec 2015. 90(12):1600-13. doi:10.1016/j.mayocp.2015.08.023
5. Bates DW, Landman AB. “Use of medical scribes to reduce documentation burden: Are they where we need to go with clinical documentation?” JAMA Internal Medicine. 2018. 178(11):1472-1473. doi:10.1001/jamainternmed.2018.3945
6. Khairat S, Coleman C, Ottmar P, Bice T, Koppel R, Carson SS. “Physicians’ gender and their use of electronic health records: findings from a mixed-methods usability study.” Journal of the American Medical Informatics Association: JAMIA. Dec 1 2019. 26(12):1505-1514. doi:10.1093/jamia/ocz126
7. Overhage JM, McCallie D. "Physician Time Spent Using the Electronic Health Record During Outpatient Encounters." Annals of Internal Medicine. 2020. 172(3):169-174. doi:10.7326/m18-3684
8. Ratwani RM, Hettinger AZ, Fairbanks RJ. "Barriers to comparing the usability of electronic health records." Journal of the American Medical Informatics Association: JAMIA. Apr 1 2017. 24(e1):e191-e193. doi:10.1093/jamia/ocw117
9. Tran B, Lenhart A, Ross R, Dorr DA. “Burnout and EHR use among academic primary care physicians with varied clinical workloads.” AMIA Jt Summits Transl Sci Proc. 2019. 2019:136-144.
10. Arndt BG, Beasley JW, Watkinson MD, et al. “Tethered to the EHR: Primary Care Physician Workload Assessment Using EHR Event Log Data and Time-Motion Observations.” Ann Fam Med. Sep 2017. 15(5):419-426. doi:10.1370/afm.2121
11. Sieja A, Markley K, Pell J, et al. "Optimization Sprints: Improving Clinician Satisfaction and Teamwork by Rapidly Reducing Electronic Health Record Burden." Mayo Clinic Proceedings. 2019. 94(5):793-802. doi:10.1016/j.mayocp.2018.08.036
12. International Organization for Standardization. Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems. ISO 9241-210:2010. 2010.
13. van der Wel P, van Steenbergen H. “Pupil dilation as an index of effort in cognitive control tasks: A review.” Psychon Bull Rev. Dec 2018. 25(6):2005-2015. doi:10.3758/s13423-018-1432-y
14. Mohamad Ali N, Abdullah S, Salim J, et al. "Exploring User Experience in Game Using Heart Rate Device." Asia-Pacific Journal of Information Technology and Multimedia. 2012. 1(2). doi:10.17576/apjitm-2012-0102-03
15. Rauschenberger M, Schrepp M, Cota M, Olschner S, Thomaschewski J. "Efficient Measurement of the User Experience of Interactive Products. How to use the User Experience Questionnaire (UEQ). Example: Spanish Language Version." International Journal of Interactive Multimedia and Artificial Intelligence. 2013. 2:39-45. doi:10.9781/ijimai.2013.215
16. Yen P-Y, Pearl N, Jethro C, et al. "Nurses' Stress Associated with Nursing Activities and Electronic Health Records: Data Triangulation from Continuous Stress Monitoring, Perceived Workload, and a Time Motion Study." AMIA Annual Symposium Proceedings. 2020. 2019:952-961.
17. Melnick ER, Dyrbye LN, Sinsky CA, et al. “The Association Between Perceived Electronic Health Record Usability and Professional Burnout Among US Physicians.” Mayo Clin Proc. Mar 2020. 95(3):476-487. doi:10.1016/j.mayocp.2019.09.024
18. Sharek D. "A Useable, Online NASA-TLX Tool." Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2011. 55(1):1375-1379. doi:10.1177/1071181311551286
19. Lewis JR, Sauro J. “The Factor Structure of the System Usability Scale.” Springer Berlin Heidelberg; 2009:94-103.
20. Slaughter L, Norman K, Shneiderman B. "Assessing Users' Subjective Satisfaction with the Information System for Youth Services (ISYS)." 2001.
21. Chiapponi C, Witt M, Dlugosch GE, Gülberg V, Siebeck M. “The Perception of Physician Empathy by Patients with Inflammatory Bowel Disease.” PLoS One. 2016. 11(11):e0167113. doi:10.1371/journal.pone.0167113
22. Hajesmaeel-Gohari S, Bahaadinbeigy K. “The most used questionnaires for evaluating telemedicine services.” BMC medical informatics and decision making. 2021. 21(1):36-36. doi:10.1186/s12911-021-01407-y
23. Khairat S, Coleman C, Ottmar P, Jayachander D, Bice T, Carson S. "Association of Electronic Health Records Use with Physician Fatigue and Efficiency." JAMA Network Open. 2020.
24. Mazur LM, Mosaly PR, Moore C, Marks L. “Association of the Usability of Electronic Health Records With Cognitive Workload and Performance Levels Among Physicians.” JAMA Network Open. 2019. 2(4):e191709-e191709. doi:10.1001/jamanetworkopen.2019.1709
25. Khairat S, Coleman C, Newlin T, et al. "A Mixed-Methods Evaluation Framework for Electronic Health Records Usability Studies." Journal of Biomedical Informatics. 2019. 94:103175. doi:10.1016/j.jbi.2019.103175
26. Khairat S, Coleman C, Newlin T, Rand V, Ottmar P, Bice T. "A Mixed-Methods Evaluation Framework for Electronic Health Records Usability Studies." J Biomed Inform. Apr 11 2019. 94:103175. doi:10.1016/j.jbi.2019.103175
27. Hart SG. "NASA-Task Load Index (NASA-TLX); 20 Years Later." Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2006. 50(9):904-908.
28. Khairat S, Coleman C, Ottmar P, Bice T, Koppel R, Carson SS. “Physicians’ gender and their use of electronic health records: findings from a mixed-methods usability study.” Journal of the American Medical Informatics Association: JAMIA. Dec 1 2019. 26(12):1505-1514. doi:10.1093/jamia/ocz126
29. Barnum CM. 6 - Preparing for usability testing. In: Barnum CM, ed. Usability Testing Essentials (Second Edition). Morgan Kaufmann; 2021. 197-248.
30. Kapoor M. "Physician Burnout in the Electronic Health Record Era." Annals of Internal Medicine. 2019. 170(3):216-216. doi:10.7326/l18-0601
31. Melnick ER, Dyrbye LN, Sinsky CA, et al. “The Association Between Perceived Electronic Health Record Usability and Professional Burnout Among US Physicians.” Mayo Clin Proc. Mar 2020. 95(3):476-487. doi:10.1016/j.mayocp.2019.09.024
32. Khairat S, Xi L, Liu S, Shrestha S, Austin C. "Understanding the Association Between Electronic Health Record Satisfaction and the Well-Being of Nurses: Survey Study." JMIR Nursing. 2020. 3(1):e13996. doi:10.2196/13996
33. Kroth PJ, Morioka-Douglas N, Veres S, et al. “Association of Electronic Health Record Design and Use Factors With Clinician Stress and Burnout.” JAMA Netw Open. Aug 2 2019. 2(8):e199609. doi:10.1001/jamanetworkopen.2019.9609
34. Robertson SL, Robinson MD, Reid A. “Electronic Health Record Effects on Work-Life Balance and Burnout Within the I(3) Population Collaborative.” J Grad Med Educ. 2017. 9(4):479-484. doi:10.4300/JGME-D-16-00123.1
35. Tajirian T, Stergiopoulos V, Strudwick G, et al. “The Influence of Electronic Health Record Use on Physician Burnout: Cross-Sectional Survey.” Journal of Medical Internet Research. 2020. 22(7):e19274-e19274. doi:10.2196/19274
Author Biographies
Saif Khairat is an associate professor in the Carolina Health Informatics Program and the School of Nursing at the University of North Carolina at Chapel Hill.
Cameron Coleman works in the Carolina Health Informatics Program at the University of North Carolina at Chapel Hill.
Paige Ottmar works in the Gillings School of Global Public Health at the University of North Carolina at Chapel Hill.
Thomas Bice is an adjunct professor of pulmonary diseases and critical care medicine at the University of North Carolina at Chapel Hill.
Shannon S. Carson works in pulmonary diseases and critical care medicine at the University of North Carolina at Chapel Hill.