Evaluating the Usability of a Free Electronic Health Record for Training


by Robert Hoyt, MD, FACP; Kenneth Adler, MD, MMM; Brandy Ziesemer, RHIA; and Georgina Palombo, MBA

Abstract

The United States will need to train a large workforce of skilled health information technology (HIT) professionals in order to meet the US government’s goal of universal electronic health records (EHRs) for all patients and widespread health information exchange. The Health Information Technology for Economic and Clinical Health (HITECH) Act established several HIT workforce educational programs to accomplish this goal. Recent studies have shown that EHR usability is a significant concern of physicians and is a potential obstacle to EHR adoption. It is important to have a highly usable EHR to train both clinicians and students. In this article, we report a qualitative-quantitative usability analysis of a web-based EHR for training health informatics and health information management students.

Keywords: health informatics, medical informatics, clinical informatics, health information management, electronic health records, usability, students, education

Introduction

Electronic health records (EHRs) have been adopted by multiple countries in the past two decades.1 In the United States, progress has been slow despite the 2004 executive order that set the goal of universal EHRs by 2014.2 EHR adoption in the United States has been accelerated by the Health Information Technology for Economic and Clinical Health (HITECH) Act, which is Title XIII of the American Recovery and Reinvestment Act of 2009 (ARRA). This act created multiple programs to promote the adoption of health information technology (HIT), including financial incentives for the adoption of certified EHRs that meet the criteria for meaningful use and the creation of HIT regional extension centers to assist practices with EHR adoption.3 As of June 2012, more than 110,000 eligible clinicians and 2,400 hospitals had received reimbursement.4

To support provider adoption of HIT, the HITECH Act also funded workforce training programs to prepare skilled HIT workers to meet anticipated demand. The Office of the National Coordinator for Health Information Technology (ONC) estimated that the United States would need approximately 51,000 skilled workers over the next five years. The HITECH Act established community college and university workforce programs to fast-track the education required by new workers to support EHR adoption, workflow redesign, and technical support. Federal funding has also created the Curriculum Development Centers Program and the Community College Consortia to Educate Health IT Professionals in Health Care Program, in addition to the Program of Assistance for University-based Training.5

In addition to training a new HIT workforce, current and prospective healthcare professionals (e.g., medical and nursing students, physicians, nurses, and pharmacists in outpatient and inpatient settings) also require training. Unfortunately, many professional education programs have not kept up with the rapid advances in technology.6 For hands-on training purposes, an EHR should be readily available in all clinical and HIT educational programs. Training clinicians and HIT professionals in the use of EHRs faces numerous obstacles, such as the lack of a uniform training strategy, constrained training time, and the availability and cost of an EHR suitable for educational purposes. Furthermore, to maximize training, an EHR should have good usability characteristics; uniform usability standards for commercial EHRs, however, are lacking.

Usability is defined by the US National Institute of Standards and Technology (NIST) as “the effectiveness, efficiency and satisfaction with which the intended users can achieve their tasks in the intended context of product use.”7 Effectiveness is usually measured as the completion of tasks without errors and the potential of the system to cause errors. Efficiency is measured as the time it takes to complete a task. Satisfaction is generally considered a subjective evaluation using tools such as surveys. Others have included ease of learning (learnability) and retention (memorability) as important additional aspects of usability.8 The Healthcare Information and Management Systems Society (HIMSS) EHR Usability Task Force listed these usability principles: simplicity (uncluttered application), naturalness (workflow matches practice), consistency (application parts have the same look and feel), forgiveness and feedback (mistakes don’t result in lost time or data), effective use of language (the application uses the same words a clinician uses), efficient interactions (minimized steps), effective information presentation (easily readable), preservation of context (the application keeps screen changes to a minimum), and minimized cognitive load (information for tasks is on one screen).9

Few studies have reported usability comparison ratings among different EHRs. In a study by Murff and Kannry, a large commercial EHR and the EHR used by the Veterans Health Administration (VHA) were rated using the Questionnaire for User Interaction Satisfaction (QUIS). They demonstrated that the VHA EHR was rated in the range of 7.08–7.68 for the five main survey categories, compared to a range of 3.27–4.41 for the commercial system.10 A 2011 survey of 2,719 family medicine physicians rated 30 of the more common EHR systems on 17 dimensions. Participants were asked to reply to multiple statements, such as “overall this EHR is easy and intuitive to use,” with a scale from “strongly agree” to “strongly disagree.” Twenty-five percent of respondents disagreed or strongly disagreed that their EHR was easy and intuitive to use. Approximately 30 percent of respondents disagreed or strongly disagreed with the statement “I am highly satisfied with this EHR.” Only 38 percent of survey participants agreed or strongly agreed that they would purchase the same system again.11

Two of the coauthors have used a free web-based EHR (Practice Fusion) to teach informatics courses for four years. The informal subjective feedback on this EHR by students was very positive, prompting further study of this EHR with qualitative and quantitative tools. The research questions were as follows:

  1. Would a validated survey instrument confirm high satisfaction and usability?
  2. Would a system audit log that is part of the EHR provide a valuable, objective time-motion tool to evaluate EHR effectiveness?
  3. Does this free web-based EHR provide an overall satisfactory experience for students?

Materials and Methods

Materials

The University of West Florida has used a free web-based EHR since 2008 to teach online Introduction to Medical Informatics and Electronic Health Record courses at the undergraduate and graduate levels.12 Lake-Sumter Community College has used the same EHR since 2008 as a hands-on training model for health information management (HIM) students.13 With simple instructions, assisted by YouTube tutorials, students have been able to access and use multiple EHR features. Using fictitious patients, all students are asked to create new appointments, new patient encounters, new diagnoses with associated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes, and new prescriptions. Undergraduates follow a set script to create these items and then take screen shots and upload them to a learning management system to confirm competency.

The study EHR is a free web-based ambulatory EHR that launched in 2008. The user interface is based on Adobe Flex 2, and the data are archived in a central data repository with bank-level security. The target audience is primarily small primary care practices. The application began with basic functionality but has added multiple features since its inception, largely from user input. The EHR was certified as meeting the criteria for meaningful use in mid-2011. As of 2012, approximately 20 academic information science programs use this EHR for training. Enrollment is straightforward, and the EHR provides multiple test patients. EHR features are listed in Appendix 1.14

Participants

Students from the University of West Florida and Lake-Sumter Community College were recruited by a research assistant to volunteer to take a usability survey and perform a time-motion study for the 2011 summer semester and the 2012 spring and summer semesters. In 2011, they were not given any incentives and were told their participation would not affect their grades. All contact related to the research study was by a graduate student and not the instructors. Student participation in the study overall was approximately 20 percent. In order to increase participation in 2012, students entered a drawing for a free informatics-related textbook, but participation rates remained the same. Students had to read an informed consent statement and sign it electronically prior to participating. The research proposal was approved by the institutional review boards of both institutions.

Survey Instrument

To measure EHR satisfaction, students were first asked by a research assistant to volunteer to take the Questionnaire for User Interaction Satisfaction (QUIS), version 7.0. QUIS is a licensed, validated survey tool created at the University of Maryland15 and used to evaluate EHRs16–17 and other technologies.18–19 QUIS version 7.0 has 11 sections; those used in this study measure user demographics (six questions), overall reaction ratings (six questions), screen design and layout (four questions), terminology and system information (six questions), learning (four questions), and system capabilities (five questions). Sections 8–11, which are optional and deal with evaluating technical manuals, online help, and so forth, were not included in this study. The following questions were also excluded because they were not pertinent to this evaluation: overall reaction (dull–stimulating); overall reaction (inadequate power–adequate power); highlighting on screen helpful (not at all–very much); supplemental reference materials (confusing–clear); and system tendency (noisy–quiet). We modified the demographics section to add questions about undergraduate/graduate status, gender, age group (30 and under, 31–50, and 51 and older), whether this was the participant’s first experience with an EHR, approximate time spent with this EHR, and prior technology experience (with eight common technologies to choose from). We added questions about the ease of creating a SOAP (subjective, objective, assessment, plan) note, creating new appointments, and using YouTube videos as a means of training. Each question was accompanied by a free-text box enabling the participant to provide comments. Satisfaction ratings were recorded using a Likert scale ranging from 1 (lowest) to 9 (highest). The reliability of this survey tool is excellent (Cronbach’s alpha of .95).20
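For readers who wish to check internal consistency on their own survey data, the sketch below shows how Cronbach’s alpha (the reliability statistic cited above) can be computed from a respondents-by-items matrix of Likert ratings. It is a minimal illustration in Python with hypothetical ratings, not the licensed QUIS scoring procedure.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of ratings."""
    k = scores.shape[1]                              # number of survey items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings: five respondents, four items on the 1-9 Likert scale.
ratings = np.array([
    [8, 7, 9, 8],
    [6, 6, 7, 6],
    [9, 8, 9, 9],
    [5, 6, 5, 6],
    [7, 7, 8, 7],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```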

Time-Motion Study

In order to test performance (time on task and error rates), we evaluated the students’ ability to complete routine EHR tasks. Student volunteers were given instructions via an e-mail message that outlined a series of routine EHR tasks usually accomplished by an office nurse or doctor. They were told to complete the tasks without interruption. After a test patient had been created, students assumed the role of the office nurse and updated the past medical history. Points at which the system audit log recorded a time stamp are indicated in parentheses below.

  1. Under surgery add “appendectomy 2008”
  2. Under family history add “sister with breast cancer”
  3. Under preventative medicine add “colonoscopy 2010”
  4. Under social history add “smoking cessation 1 year prior”
  5. Under diagnosis add “asthma (ICD-9-CM code 493)” and “backache, unspecified (ICD-9-CM code 724.5)” with start dates of “today” (time stamp)
  6. Under medications enter “Motrin 600 mg” (time stamp)
  7. Under allergies enter “sulfadiazine oral tablet, location skin, reaction rash localized, severity moderate” with today’s date (time stamp)
  8. Under immunization enter “pneumococcal vaccine, injected right deltoid” (time stamp)
  9. Under patient dashboard, patient actions, start a new chart note; select SOAP note; and enter vital signs: height 66 inches, weight 150 lbs, blood pressure 130/80, temperature 98 degrees, pulse 80, and respirations 16
  10. Enter chief complaint of “cough” (time stamp)

 

Students were then told to assume the role of office physician and perform the following:

  1. With a SOAP note created, under subjective (S) find the template for “URI, bronchitis, sinusitis, otitis media and pharyngitis”; enter the following symptoms by selecting existing text “associated with mildly productive cough, acute onset”
  2. Under objective (O) select “pharynx clear” and “lungs: scattered inspiratory wheezes, no rales, rhonchi”
  3. Under assessment (A) add acute bronchitis with start date of one week ago and add comment “patient has symptoms consistent with acute bronchitis” (time stamp)
  4. Under plan (P) create a new electronic prescription and enter albuterol sulfate inhaler, inhale QID as needed (time stamp)
  5. Schedule patient for return appointment in two weeks at 8 a.m. (time stamp)
  6. Log off EHR

 

Times were recorded by the minute and not by the second because of the design of the EHR system audit log. Tasks that could be completed in less than one minute were combined so that task completion times would not be listed as zero.
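To illustrate how minute-resolution audit-log time stamps translate into task times, the sketch below parses a hypothetical log extract and computes the elapsed whole minutes between consecutive stamps. The study EHR’s actual audit-log format is not published, so the task labels and field layout here are assumptions.

```python
from datetime import datetime

# Hypothetical audit-log extract: (task label, minute-resolution time stamp).
# The real log format of the study EHR is not published; these are assumptions.
audit_log = [
    ("diagnoses entered",    "2012-06-04 10:02"),
    ("medication entered",   "2012-06-04 10:04"),
    ("allergy entered",      "2012-06-04 10:05"),
    ("immunization entered", "2012-06-04 10:05"),  # same minute as prior task
    ("chief complaint",      "2012-06-04 10:09"),
]

FMT = "%Y-%m-%d %H:%M"
stamps = [(task, datetime.strptime(ts, FMT)) for task, ts in audit_log]

# Elapsed whole minutes between consecutive time stamps; tasks whose
# difference is zero would be combined with a neighbor, as in the study.
for (prev_task, prev_ts), (task, ts) in zip(stamps, stamps[1:]):
    minutes = int((ts - prev_ts).total_seconds() // 60)
    print(f"{prev_task} -> {task}: {minutes} min")
```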

Error Rates

Errors were determined by an experienced graduate research assistant who accessed the EHR to review every test patient created for the time-motion study to look for errors of omission and commission. The time-motion study included 18 tasks requested of the nurse and 15 tasks requested of the physician, for a total of 33 tasks. With 23 participants each performing these 33 tasks, the total number of tasks performed was 759.

In order to measure error rates, we accessed each test patient’s chart in the EHR. Data were retrieved from the summary section to find the participant’s entries on the date of the study for sections such as past medical history, diagnosis history, medication history, allergies, and immunizations. Also, SOAP notes were reviewed in the events and appointments sections. If any entry did not match instructions in the script, an error was recorded.

The two types of errors that we considered were errors of omission and errors of commission. When the information retrieved from the EHR did not match the information the participant was instructed to enter for a task, an error of commission was recorded. For example, if the student was instructed to set up a follow-up appointment at the end of the visit “two weeks from today at 8 a.m.” and the appointment was scheduled in the wrong week and/or at a different time, an error of commission was recorded. If the EHR did not show any follow-up appointment, an error of omission was recorded. Mean errors on the nurse instructions and on the physician instructions were calculated, as well as mean total errors. Error rates were calculated by dividing the number of errors by the number of tasks completed, separately for the nurse tasks, the physician tasks, and all tasks combined. Confidence intervals were included with each calculation.
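As a worked example of this calculation, the sketch below divides error counts by task counts and attaches 95 percent confidence intervals. The article does not state which interval method was used, so a Wilson score interval is assumed here; the error counts are reconstructed from the per-student means reported in the Results and are illustrative only.

```python
import math

def error_rate_ci(errors: int, tasks: int, z: float = 1.96):
    """Error rate with a 95% Wilson score interval (interval method assumed;
    the article does not state which method was used)."""
    p = errors / tasks
    denom = 1 + z**2 / tasks
    center = (p + z**2 / (2 * tasks)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / tasks + z**2 / (4 * tasks**2))
    return p, center - half, center + half

# Error counts reconstructed from the reported per-student means
# (0.83 x 23 = ~19 nurse errors; 1.13 x 23 = ~26 physician errors);
# task totals are 23 participants x 18 nurse tasks and 15 physician tasks.
for label, errors, tasks in [("nurse", 19, 23 * 18),
                             ("physician", 26, 23 * 15),
                             ("total", 45, 23 * 33)]:
    p, low, high = error_rate_ci(errors, tasks)
    print(f"{label}: {p:.1%} (95% CI {low:.1%}-{high:.1%})")
```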

Statistical Analysis

Descriptive statistics were used to evaluate the QUIS survey data. Mean scores and 95 percent confidence intervals were calculated for the five categories, and a mean for all categories was calculated as the “overall satisfaction” score. Demographic data were converted to dummy codes and analyzed as categorical data. Descriptive statistics were also used to evaluate the time-motion study: means and 95 percent confidence intervals were calculated for nurse and physician task completion times, along with a total time score for both. Time-motion data were normally distributed but violated the assumption of equal standard deviations, so they were analyzed with nonparametric tests. Mann-Whitney tests were used to analyze data between any two demographic groups, and Kruskal-Wallis tests were used for comparisons among more than two groups. A Spearman rank test was used to study the correlation between total time-motion scores and total satisfaction scores. Effect size for nonparametric data was calculated using a probabilistic index.21 All calculations were made using GraphPad InStat (version 3.10).
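The study performed these calculations in GraphPad InStat; a rough Python equivalent of the nonparametric comparisons is sketched below using simulated data. The group sizes, times, and satisfaction scores are hypothetical, and the probabilistic index is derived from the Mann-Whitney U statistic, following Acion et al.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical total completion times (minutes) for two demographic groups,
# e.g., participants with vs. without prior EHR experience.
with_ehr = rng.normal(15, 3, size=12).round()
without_ehr = rng.normal(21, 6, size=11).round()

# Mann-Whitney test between two groups (for three or more groups, e.g.,
# the three age bands, stats.kruskal would be used instead).
u_stat, p_value = stats.mannwhitneyu(with_ehr, without_ehr,
                                     alternative="two-sided")

# Probabilistic-index effect size: the probability that a randomly chosen
# member of one group outscores one from the other; equals U / (n1 * n2).
prob_index = u_stat / (len(with_ehr) * len(without_ehr))

# Spearman rank correlation between total times and satisfaction scores.
times = np.concatenate([with_ehr, without_ehr])
satisfaction = rng.uniform(6, 9, size=times.size)  # hypothetical 1-9 ratings
rho, rho_p = stats.spearmanr(times, satisfaction)

print(f"Mann-Whitney p = {p_value:.4f}, probabilistic index = {prob_index:.2f}")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.2f})")
```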

Results

Out of a potential pool of 152 students from the two schools, 44 students completed the QUIS survey, and 23 completed both the survey and the time-motion study, reflecting a participation rate of 27 percent for the survey and 15 percent for the time-motion study. Two time-motion studies were excluded from the analysis because they were started but not finished. Sixty-eight percent of study completers were from the University of West Florida, and approximately half of the participants enrolled in each study year. Of the students completing the survey, 65 percent were female, 77 percent were undergraduate students, 87 percent were over the age of 31, 53 percent were medical informatics certificate students, 50 percent had no prior EHR experience, 75 percent claimed experience with five or more of the eight common software packages, devices, or systems listed, and 85 percent had less than 10 hours of hands-on experience with the EHR before becoming a study participant. Participants in the time-motion study included one physician and five nurses.

Table 1 summarizes the mean scores and confidence intervals for the five sections of the QUIS survey instrument and the four non-QUIS questions. Scores were high in all categories, with slightly lower scores noted in the QUIS question sections. There was no statistical difference between overall satisfaction scores in 2011 and in 2012, when an incentive was offered (not displayed in the table). Survey free-text comments were positive and primarily related to the intuitive nature of the EHR software.

Table 2 summarizes mean times and confidence intervals to complete time-motion tasks in the roles of nurse and physician. The total time to complete all tasks varied from a low of 11 minutes to a high of 36 minutes. Most students completed all tasks in about 19 minutes. Not surprisingly, clinicians (physicians and nurses) were significantly faster than nonclinicians, p < .0025 (large effect size of .93), calculated by a probabilistic index (not displayed in the table).

Table 3 summarizes the observed errors for the role of nurse and physician and the total errors recorded. Two students performed all tasks with no errors; most students made fewer than three errors. The average number of errors per student was 0.83 for nurse tasks, 1.13 for physician tasks, and 1.88 for total average errors. The error rate (errors divided by tasks) was 4.6 percent for nurse tasks, 7.5 percent for physician tasks, and 5.9 percent for total tasks. The majority of errors made were those of commission and not omission. Because of the small sample size and the inability to record task times by the second, we chose not to categorize errors in more detail.

Table 4 summarizes the multiple comparisons between overall satisfaction scores and subject demographics, between total time-motion scores and subject demographics, and between total time-motion scores and overall satisfaction scores. No subject demographic was significantly related to overall satisfaction with the study EHR. Prior use of an EHR and male gender were associated with significantly shorter times to complete the time-motion tasks (large effect sizes of .92 and .83, respectively), calculated using the probabilistic index.

The total time to complete the time-motion study had no significant correlation with overall satisfaction scores.

Discussion

Our study of an ambulatory web-based EHR used for training demonstrated high usability based on satisfaction scores (survey results), efficiency (time-motion results), and effectiveness (low error rates). Satisfaction scores did not correlate with common demographics such as age, gender, graduate status, technology familiarity, and prior EHR experience, suggesting that the study EHR was easy for a diverse group of students to use. The results suggest that students found the computer interface to be intuitive and user friendly, even with minimal training. The graphical user interface was familiar and associated with a logical workflow. The satisfaction ratings by the students were consistent with the scores reported by clinicians in the study by Murff and Kannry.22 Additionally, the EHR evaluated in our study (Practice Fusion) was ranked fourth in a large survey of family medicine physicians, with about 95 percent agreeing that it was easy and intuitive to use.23 EHR surveys are limited by a variety of biases, but they do provide an overall perception by actual users.

Our unobserved time-motion study recorded the times to complete multiple common EHR tasks. As anticipated, prior EHR experience or status as a clinician predicted shorter completion times. There was no significant correlation between EHR satisfaction and the time to complete tasks. Our time-motion study was unique in that it used the EHR system audit log to automatically time routine tasks, rather than a trained observer. To our knowledge, this is the first study to report time-motion data using an intrinsic ambulatory EHR feature. Most time-motion studies of EHRs in the literature used human observers to compare the total time taken by a physician to complete a patient encounter using paper compared to an EHR.24–26 The goal of these published studies was not to time standard EHR tasks such as electronic prescribing or documentation.

The overall error rates for task completion were low, with an average of 1.9 total errors per student, or a 5.9 percent error rate for the 33 tasks completed. Most errors noted were those of commission and not omission. Our low error rates also suggest that routine tasks were easily and accurately completed by our students, regardless of prior use of EHRs, technology expertise, age, or graduate student status. These error rates are low for students, but they did have the advantage of following a written script during task completion.

Currently, there is no standardized usability rating for EHRs, but limited usability testing will be part of EHR certification for the second stage of meaningful use in 2014.27 The Certification Commission for Health Information Technology is the only EHR-certifying organization that currently offers additional usability testing as part of the certification process. The inspection process consists of a team of jurors who observe task performance and rate usability through a series of questionnaires. The juror panel consists of three clinical jurors, including at least one practicing physician, and an information technology security evaluator. The questionnaires used are the After Scenario Questionnaire, the Perceived Usability Questionnaire, and the System Usability Survey. However, these methods have been reported to provide subjective usability measures based on perceived satisfaction.28 Of the 75 certified EHRs rated using these measures, 98 percent were given four or five stars (the highest ratings). This seems optimistic compared with other surveys of the same EHRs by actual clinicians.29

EHR vendors are beginning to address usability issues, but historically, because of the competitive nature of the field, there has been little sharing of best practices. EHRs are often selected on the basis of the number of features, not usability, and there are no enforceable usability standards at this time.30 Multiple organizations, such as NIST, are contributing to our knowledge of EHR usability and testing. In 2012, NIST released an EHR Usability Protocol (EUP) that outlines formal procedures for evaluating EHR usability for developers and evaluators. The EUP is a three-step process consisting of EHR Application Analysis, EHR User Interface Expert Review, and EHR User Interface Validation Testing.31

Multiple usability questions remain, such as which tests should be used to ascertain usability. Common usability approaches include heuristic evaluation, cognitive task analysis, usability tests, surveys, and focus groups. Current evidence suggests that usability testing should include multiple evaluation methods.32 Schumacher and others have suggested that usability testing be a routine part of the request-for-proposal process commonly used to purchase an EHR.33

Our study has several limitations that should be noted and could limit the generalizability of our results. Our participation rates were low, and the lack of an association between student demographics and satisfaction scores may have been due to a type II error or recruitment bias. We evaluated students using an EHR for training purposes, so our results may not pertain to clinical staff. The time-motion study was easy to conduct with online students due to internal EHR time stamps but was limited by the fact that tasks could be recorded only by minutes and not by seconds. We are uncertain as to whether EHR system audit logs are routinely used and available to nontechnical staff. Because the time-motion portion of the study was unobserved, we do not know if the variation in performance times could have been due to inattentiveness and not usability issues. According to Zheng et al., observed time-motion studies are the standard, but they are not without problems, so alternatives should be investigated.34

Conclusions

The adoption of EHRs, particularly by small or rural primary care practices, is a key component of healthcare reform in the United States. In order to accomplish this immense task, EHR training will need to occur at multiple levels in our healthcare system. EHR usability is important for all users, particularly clinical staff who are most interested in efficiency (productivity) and effectiveness (patient safety). Healthcare workers and HIM students should ideally train on EHRs that have high usability ratings to prevent frustration and facilitate education. Our usability study suggests that the EHR evaluated is efficient, effective, and associated with high satisfaction levels for informatics and HIM student training. The study also suggests that educational programs do not have to make a substantial investment to have a highly usable EHR model for training purposes. User-centered EHR design and comprehensive usability testing are likely to become the norm in the next few years in order to improve EHR adoption, clinician productivity, and patient safety.

 

Acknowledgments

We would like to thank Steven Linnville, PhD, and Nora Bailey, MSP, for their review of the manuscript.

 

Robert Hoyt, MD, FACP, is the director of the Medical Informatics Program at the University of West Florida’s School of Allied Health and Life Sciences in Pensacola, FL.

Kenneth Adler, MD, MMM, is the medical director of information technology at Arizona Community Physicians in Tucson, AZ.

Brandy Ziesemer, RHIA, is a health information manager and associate professor at Lake-Sumter Community College in Leesburg, FL.

Georgina Palombo, MBA, is a graduate research assistant at the University of West Florida in Pensacola, FL.


Notes

  1. Schoen, Cathy, Robin Osborn, Michelle Doty, David Squires, Jordan Peugh, and Sandra Applebaum. “A Survey of Primary Care Physicians in Eleven Countries, 2009: Perspectives on Care, Cost and Experiences.” Health Affairs 28, no. 6 (2009): w1171. Available at doi:10.1377/hlthaff.28.6.w1171 (accessed March 3, 2011).
  2. White House Office of Chief Information Officer. “Incentives for the Use of Health Information Technology and Establishing the Position of the National Health Information Technology Coordinator” (Executive Order 13335). April 27, 2004. Available at http://nodis3.gsfc.nasa.gov/displayEO.cfm?id=EO_13335_ (accessed January 5, 2011).
  3. “American Recovery and Reinvestment Act of 2009.” Public Law 111-5. February 17, 2009. Available at http://www.gpo.gov/fdsys/pkg/PLAW-111publ5/content-detail.html (accessed January 3, 2011).
  4. US Department of Health and Human Services. “More Than 100,000 Health Care Providers Paid for Using Electronic Health Records.” News Release, June 19, 2012. Available at http://www.hhs.gov/news/press/2012pres/06/20120619a.html (accessed June 19, 2012).
  5. Office of the National Coordinator for Health Information Technology. http://www.healthit.hhs.gov (accessed January 8, 2011).
  6. Hammond, M. M., K. Margo, J. G. Christner, J. Fisher, S. H. Fischer, and L. N. Pangaro. “Opportunities and Challenges in Integrating Electronic Health Records into Undergraduate Medical Education: A National Survey of Clerkship Directors.” Teaching and Learning in Medicine 24, no. 3 (2012): 219–24.
  7. Usability Net. “Usability Definitions: International Standards Organization (ISO) 9241-11: Guidance on Usability (1998).” Available at http://www.usabilitynet.org/tools/r_international.htm (accessed August 4, 2011).
  8. Nielsen, Jakob. “Usability 101: Introduction to Usability.” Nielsen Norman Group. January 4, 2012. Available at http://www.useit.com/alertbox/20030825.html (accessed July 25, 2012).
  9. HIMSS EHR Usability Task Force. “Selecting an EMR for Your Practice: Evaluating Usability.” August 2010. Available at http://www.himss.org/content/files/Selecting_EMR_Eval_Usability.pdf (accessed August 24, 2011).
  10. Murff, Harvey J., and Joseph Kannry. “Physician Satisfaction with Two Order Entry Systems.” Journal of the American Medical Informatics Association 8, no. 5 (2001): 499–509.
  11. Edsall, Robert L., and Kenneth G. Adler. “The 2011 EHR User Satisfaction Survey.” Family Practice Management 18, no. 4 (July–August 2011): 23–30.
  12. Certificate in Medical Informatics courses. University of West Florida. Available at http://uwf.edu/sahls/certificate-informatics/ (accessed February 21, 2011).
  13. “Health Information Management and Technology.” Lake-Sumter Community College. Available at http://www.lscc.edu/academics/him/Pages/Home.aspx (accessed July 6, 2011).
  14. Practice Fusion electronic health record. Available at http://www.practicefusion.com/ (accessed January 19, 2011).
  15. University of Maryland Human-Computer Interaction Lab. “Questionnaire for User Interaction Satisfaction.” Available at http://lap.umd.edu/quis/ (accessed February 12, 2011).
  16. Sittig, Dean F., Gilad Kuperman, and Julie Fiskio. “Evaluating Physician Satisfaction Regarding User Interactions with an Electronic Medical Record System.” AMIA Annual Symposium Proceedings (1999): 400–404.
  17. Jaspers, Monique W. M., Linda W. P. Peute, Arnaud Lauteslager, and Piet J. M. Bakker. “Pre-Post Evaluation of Physicians’ Satisfaction with a Redesigned Electronic Medical Record System.” In S. K. Andersen et al. (Editors), eHealth beyond the Horizon: Get IT There. Amsterdam: IOS Press, 2008, 303–8.
  18. Johnson, Todd R., Jhang Zhang, Zhihua Tang, Constance Johnson, and James P. Turley. “Assessing Informatics Students’ Satisfaction With a Web-based Courseware System.” International Journal of Medical Informatics 73 (2004): 181–87.
  19. Hortman, Patricia A., and Cheryl B. Thompson. “Evaluation of User Interface Satisfaction of a Clinical Outcomes Database.” CIN: Computers, Informatics, Nursing 23, no. 6 (2005): 301–7.
  20. University of Maryland Human-Computer Interaction Lab. “Questionnaire for User Interaction Satisfaction.”
  21. Acion, L., J. J. Peterson, S. Temple, and S. Arndt. “Probabilistic Index: An Intuitive Non-parametric Approach to Measuring the Size of Treatment Effects.” Statistics in Medicine 25 (2006): 591–602.
  22. Murff, Harvey J., and Joseph Kannry. “Physician Satisfaction with Two Order Entry Systems.”
  23. Edsall, Robert L., and Kenneth G. Adler. “The 2011 EHR User Satisfaction Survey.”
  24. Lo, Helen G., Lisa P. Newmark, Catherine Yoon, Lynn A. Volk, Virginia L. Carlson, Anne F. Kittler, Margaret Lippincott, Tiffany Wang, and David W. Bates. “Electronic Health Records in Specialty Care: A Time-Motion Study.” Journal of the American Medical Informatics Association 14, no. 5 (2007): 609–15.
  25. Pizziferri, Lisa, Anne F. Kittler, Lynn A. Volk, Melissa M. Honour, Sameer Gupta, Samuel Wang, Tiffany Wang, Margaret Lippincott, Li Qi, and David W. Bates. “Primary Care Physician Time Utilization Before and After Implementation of an Electronic Health Record: A Time-Motion Study.” Journal of Biomedical Informatics 38 (2005): 176–188.
  26. Overhage, J. Marc, Susan Perkins, William M. Tierney, and Clement J. McDonald. “Controlled Trial of Direct Physician Order Entry: Effects on Physicians’ Time Utilization in Ambulatory Primary Care Internal Medicine Practices.” Journal of the American Medical Informatics Association 8, no. 4 (2001): 361–71.
  27. US Department of Health and Human Services. “Health Information Technology: Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology, 2014 Edition; Revisions to the Permanent Certification Program for Health Information Technology.” 45 CFR Part 170. Federal Register 77, no. 171 (September 4, 2012): 54163. Available at http://www.gpo.gov/fdsys/pkg/FR-2012-09-04/pdf/2012-20982.pdf (accessed October 20, 2012).
  28. Certification Commission for Health Information Technology. CCHIT 2011 Usability Testing Guide for Ambulatory EHRs. Available at http://www.himss.org/content/files/cchit_usability.pdf (accessed August 12, 2011).
  29. Edsall, Robert L., and Kenneth G. Adler. “The 2011 EHR User Satisfaction Survey.”
  30. McDonnell, Cheryl, Kristen Werner, and Lauren Wendel. Electronic Health Records Usability: Vendor Practices and Perspectives (AHRQ Publication No. 09(10)-0091-3-EF). Rockville, MD: Agency for Healthcare Research and Quality, May 2010.
  31. Lowry, Svetlana Z., Matthew T. Quinn, Mala Ramaiah, Robert M. Schumacher, Emily S. Patterson, Robert North, Jiajie Zhang, Michael C. Gibbons, and Patricia Abbott. Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records (NISTIR 7804). National Institute of Standards and Technology, February 2012. Available at http://www.nist.gov/healthcare/usability/upload/EUP_WERB_Version_2_23_12-Final-2.pdf (accessed June 2, 2012).
  32. Horsky, Jan, Kerry McColgan, Justin E. Pang, Andreas J. Melnikas, Jeffrey A. Linder, Jeffrey L. Schnipper, and Blackford Middleton. “Complementary Methods of System Usability Evaluation: Surveys and Observations During Software Design and Development Cycles.” Journal of Biomedical Informatics 43 (2010): 782–90.
  33. Schumacher, R. M., J. M. Webb, and K. R. Johnson. “How to Select an Electronic Health Record System That Healthcare Professionals Can Use.” User Centric, Inc., February 2009. Available at http://www.usercentric.com/sites/usercentric.com/files/usercentric-ehr-white-paper.pdf (accessed March 3, 2011).
  34. Zheng, Kai, Michael H. Guo, and David A. Hanauer. “Using the Time and Motion Method to Study Clinical Work Processes and Workflow: Methodological Inconsistencies and a Call for Standardized Research.” Journal of the American Medical Informatics Association 18 (2011): 704–10.

 


 

Robert Hoyt, MD, FACP; Kenneth Adler, MD, MMM; Brandy Ziesemer, RHIA; and Georgina Palombo, MBA. “Evaluating the Usability of a Free Electronic Health Record for Training.” Perspectives in Health Information Management (Spring 2013): 1-14.
