In 2008, a clinical information tool was developed and embedded in the electronic health record system of an academic medical center. In 2009, the initial information tool, Clinical-e, was superseded by a portal called Clinical Focus, with a single search box enabling a federated search of selected online information resources.
To measure the usefulness and impact of Clinical Focus, a survey was used to gather feedback about users’ experience with this clinical resource. The survey determined what type of clinicians were using this tool and assessed user satisfaction and perceived impact on patient care decision making. Initial survey results suggest the majority of respondents found Clinical Focus easy to navigate, the content easy to read, and the retrieved information relevant and complete. The majority would recommend Clinical Focus to their colleagues. Results indicate that this tool is a promising area for future development.
Key words: electronic health record, user satisfaction, knowledge-based library resources, clinical decision support
The University of Pittsburgh Health Sciences Library System (HSLS) supports the educational, research, clinical, and service activities of the health sciences community of the University of Pittsburgh and the University of Pittsburgh Medical Center (UPMC) through development and provision of innovative information resources and services. UPMC includes 15 tertiary, specialty, and community hospitals; 400 outpatient sites and doctors’ offices; and rehabilitation and long-term care facilities. HSLS receives budget support from UPMC to provide systemwide access to licensed online resources. Linking these knowledge-based information resources into the electronic health record has been described as a long-standing informatics goal.1
In 2004, the library director was invited to participate in the UPMC-wide Physician Advisory Committee (PAC) for the eRecord, as the medical center’s electronic health record system is known. The PAC is chaired by the chief medical information officer, and its membership includes physicians and other health professionals, together with representatives from Information Services and Health Information Management. The eRecord, at the time of this study, had over 3.8 million unique patient records and more than 32,000 active users, including more than 5,000 physicians employed by or affiliated with UPMC. It also offers 200 clinical applications from more than 120 vendors, including Epic and Cerner.
By 2006, a “key physician” was identified who had a vision that the eRecord could improve practice by delivering not only patient-specific data directly to the clinician, but also context-specific knowledge-based information. A library development team was formed to work with this physician to learn more about the eRecord and clinical information needs at the point of care, and to identify full-text information resources and appropriate technology, in order to design a clinical tool to be placed within the eRecord.
The result of this collaboration was the development of a tool named Clinical-e, which consisted of a search box with subject tabs to provide users with quick access to designated knowledge-based information resources to be used at the point of care through the eRecord.2 Each of the five subject tabs (diagnosis, diseases, drugs, evidence-based medicine, and patient education) offered a federated search of a different pool of full-text information resources. Clinical-e used the Velocity search platform from Vivisimo as its search interface. Search results were organized “on the fly” into meaningful categories using clustering technology and were directly accessible from the results page. Structured usability testing and focus groups with PAC physicians aided development, and in 2008 Clinical-e was embedded in the eRecord. The decision to place Clinical-e in the eRecord was made by the PAC. The development of Clinical-e was the beginning of an evolutionary approach in the design of an eRecord clinical information tool.
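The federated search and “on the fly” clustering described above can be illustrated with a minimal sketch. The Velocity platform itself is proprietary, so the function names, the placeholder per-source search, and the simple keyword-based clustering below are hypothetical stand-ins, not the actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def search_source(source, query):
    # Placeholder for querying one licensed resource; a real federated
    # search would call each vendor's search API and parse its results.
    return [(f"{source}: result for {query}", f"snippet about {query}")]

def federated_search(query, sources):
    # Query all resources in parallel and merge the hits into one list.
    with ThreadPoolExecutor() as pool:
        hit_lists = pool.map(lambda s: search_source(s, query), sources)
    return [hit for hits in hit_lists for hit in hits]

def cluster_results(hits, topics):
    # "On the fly" clustering, crudely approximated by keyword matching:
    # each hit is filed under every topic word that appears in its text.
    clusters = defaultdict(list)
    for title, snippet in hits:
        text = (title + " " + snippet).lower()
        for topic in topics:
            if topic in text:
                clusters[topic].append(title)
    return dict(clusters)

hits = federated_search("warfarin", ["Micromedex", "UpToDate", "Cochrane"])
clusters = cluster_results(hits, ["warfarin", "dosing"])
```

The design point is that the user issues one query, the system fans it out to several sources at once, and the merged results are grouped into browsable categories on the results page rather than presented as a flat list.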
In the fall of 2009, the embedded tab-based search box was replaced by a Web portal accessible via links from the eRecord and also from the library’s public Web page. This updated tool, renamed Clinical Focus, enabled users to look for information on a disease, symptom, drug, procedure, or test using one simple search box. Vivisimo’s Velocity remained the federated search engine, but the selected full-text knowledge-based resources were changed to include ACP PIER (American College of Physicians’ Physicians Information and Education Resource), BMJ Clinical Evidence, Cochrane Reviews, Micromedex, and UpToDate. Figure 1 shows the search page for Clinical Focus.
As part of the redesign process, usage patterns and user satisfaction related to Clinical Focus were assessed. Information system research focuses on user satisfaction as a measure of information system success.3–6 Rather than measuring the system’s performance by assessing the retrieval of information and the relevance of the results to the questions, user satisfaction research attempts to quantify the user’s experience.7 A common measurement tool is a user satisfaction survey. Delic and Lenz designed and validated a questionnaire based on the DeLone and McLean model, evaluating four areas of an information system: information quality, system quality, user satisfaction, and individual impact.8, 9 Information quality evaluates the relevance and completeness of the information in a system. System quality assesses the organization of the system and ease of use. User satisfaction attempts to measure the users’ likes or dislikes of the system. Impact is meant to evaluate the effect the system has on individual users.
This article describes the use of Clinical Focus during the first six months of implementation. The results of a user satisfaction survey designed to determine who is using Clinical Focus and to measure the perceived impact of Clinical Focus are also reported.
A validated questionnaire was adapted to gather information about the usefulness and impact of Clinical Focus.10 The Web-based survey included 18 questions with a five-point rating scale ranging from “strongly disagree” to “strongly agree.” As described above, the survey was intended to measure four areas related to Clinical Focus: user satisfaction, information quality, system quality, and individual impact. Items to identify the clinical role of the participants were also included in the survey. The survey contained an introduction informing users of the purpose of the study and the name of the person conducting the research. The university’s Institutional Review Board designated this project exempt from formal review.
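Scoring such an instrument is straightforward. The sketch below aggregates five-point responses into the four areas using the question groupings reported in the Results section (questions 1–4 information quality, 5–10 system quality, 11–12 user satisfaction, 13–18 individual impact); the sample respondents are purely illustrative, not the study’s actual data or analysis:

```python
# Map each survey item to its DeLone-McLean dimension, following the
# question groupings described in the Results section.
DIMENSIONS = {
    "information quality": range(1, 5),
    "system quality": range(5, 11),
    "user satisfaction": range(11, 13),
    "individual impact": range(13, 19),
}

def percent_agree(responses, items):
    """Share of ratings of 4 ('agree') or 5 ('strongly agree'), on the
    five-point scale where 1 = strongly disagree and 5 = strongly agree."""
    ratings = [r[q] for r in responses for q in items]
    return 100.0 * sum(1 for v in ratings if v >= 4) / len(ratings)

# Two illustrative respondents, keyed by question number (fabricated data).
sample = [
    {q: 4 for q in range(1, 19)},
    {q: 5 if q <= 10 else 3 for q in range(1, 19)},
]
scores = {dim: percent_agree(sample, items) for dim, items in DIMENSIONS.items()}
```

Reporting the combined “agree or strongly agree” percentage per dimension matches how the results are summarized in this article.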
The survey was available from January through April 2010, and various marketing techniques to encourage participation were employed. A news item on the front page of the HSLS Web site and on the Clinical Focus Web page linked to the Web-based survey. The survey was promoted in HSLS and UPMC newsletters. The library director made presentations about Clinical Focus at three UPMC hospitals and promoted the survey during the presentations. A note about the survey was posted on the internal Web site for UPMC residents and fellows. HSLS also offered an incentive to participate: after the survey was closed, a drawing for two gift cards was held for respondents who listed their contact information.
Server logs were accessed to determine the number of Clinical Focus search queries for the six-month period from the date of initial implementation in November 2009 through the end of the survey.
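The article does not describe how the server logs were processed; a minimal sketch of tallying monthly search queries, assuming Apache-style access-log lines and a hypothetical `/search?q=` request path, might look like:

```python
import re
from collections import Counter

# Assumed format: a bracketed timestamp like [15/Nov/2009:10:01:02 -0500]
# followed by a GET request to a /search?q= path. Both are illustrative.
LINE = re.compile(r'\[(\d{2})/(\w{3})/(\d{4})[^\]]*\] "GET /search\?q=')

def queries_per_month(log_lines):
    # Tally search queries by month, ignoring non-search requests.
    counts = Counter()
    for line in log_lines:
        m = LINE.search(line)
        if m:
            day, month, year = m.groups()
            counts[f"{month} {year}"] += 1
    return counts

logs = [
    '1.2.3.4 - - [15/Nov/2009:10:01:02 -0500] "GET /search?q=warfarin HTTP/1.1" 200 512',
    '1.2.3.4 - - [16/Nov/2009:11:00:00 -0500] "GET /search?q=sepsis HTTP/1.1" 200 480',
    '5.6.7.8 - - [02/Dec/2009:09:30:00 -0500] "GET /index.html HTTP/1.1" 200 1024',
]
monthly = queries_per_month(logs)
```

Counting only query submissions, rather than all page hits, gives the per-month figures of the kind reported below.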
As seen in Figure 2, there was steady use of Clinical Focus in the first six months it was available. Clinical Focus averaged more than 1,100 search queries per month.
Ninety-two clinicians completed the survey. As shown in Table 1, 16 respondents identified themselves as physicians, 30 as residents, 30 as nurses, and 4 as medical students. The 12 respondents in the “Other” category included physician assistants, nurse practitioners, and respiratory therapists. The majority of the respondents specialized in internal medicine, followed by family practice and surgery.
The survey results can be found in Table 2. The first four questions were designed to determine the quality of information found in Clinical Focus. More than 90 percent of the respondents agreed or strongly agreed that the information in Clinical Focus is up-to-date/current. More than 80 percent of the respondents agreed or strongly agreed the information in Clinical Focus is both relevant and complete. When asked about the exclusiveness of the information in Clinical Focus, approximately 58 percent agreed or strongly agreed. There was a larger undecided response to this question (31.8 percent) than for the previous questions.
Questions 5–10 were designed to assess the system quality of Clinical Focus. Approximately 80 percent or more of the respondents agreed or strongly agreed that Clinical Focus was organized and easy to navigate, with pages loading quickly, valid hyperlinks, and a helpful search function. When asked about the content in Clinical Focus, 88 percent agreed or strongly agreed that the content was easy to read.
The survey contained two items, questions 11 and 12, designed to gauge user satisfaction. Again, a high percentage of the respondents (80 percent) agreed or strongly agreed that Clinical Focus met their expectations, and more than 80 percent were satisfied using Clinical Focus.
Questions 13–18 evaluated the impact of using Clinical Focus on the individual. Questions 13 and 14 assessed knowledge gained from Clinical Focus. More than 70 percent of respondents agreed or strongly agreed they were better informed after using Clinical Focus. The majority (approximately 65 percent) of respondents agreed or strongly agreed they made better decisions because of the information in Clinical Focus. Question 15 was designed to measure trust, and approximately 78 percent of respondents agreed or strongly agreed that Clinical Focus is competent in fulfilling its task. Question 16 addressed the perceived risk of using Clinical Focus. Approximately 38 percent strongly disagreed or disagreed that there was a risk involved in using Clinical Focus, with 30 percent undecided. The last two questions were designed to ascertain loyalty. Most respondents (approximately 77 percent) agreed or strongly agreed they would use Clinical Focus regularly in the future, and 80 percent agreed or strongly agreed they would recommend it to others.
The survey measured multiple factors in order to determine the overall success of Clinical Focus. The results of this survey suggest that users of Clinical Focus found the information current and relevant. Users agreed that Clinical Focus was well organized and easy to read and navigate. They felt they were better informed and made better clinical decisions because of their use of Clinical Focus. Respondents generally did not feel there was a risk in using Clinical Focus. The results of this survey also imply that these Clinical Focus users plan to return to the site regularly and recommend it to their colleagues. Clinical Focus was highly rated in all four areas: user satisfaction, information quality, system quality, and individual impact.
Previously published studies have attempted to assess electronic clinical information systems using similar survey tools.11–13 Several had sample sizes of approximately 200, while another study had a sample size of 18. Each of these studies sought to determine user satisfaction and system and information quality. The authors could not, however, locate evaluation studies of knowledge-based tools embedded in an electronic health record.
The growing usage of Clinical Focus supports the survey findings. On average, more than 1,100 search queries were performed per month during the first six months. This usage was four times higher than that of the previous information tool, Clinical-e. At the time this article was written, the use of Clinical Focus had grown to an average of 5,000 queries per month.
The study was limited by the small sample size. Unlike targeted surveys in which the number of potential respondents is known, the survey in this study was posted as part of the clinical portal. It is unknown how many potential respondents saw the survey link on the portal and chose not to complete it. Even with the incentive offered to increase participation, the response rate was small. Some research suggests, however, that a low response rate does not always imply a large nonresponse error.14 Voluntary surveys are prone to bias from a self-selected group of respondents: the characteristics of those who responded could differ from those of people who chose not to respond.15 The study was also limited by the lack of a control group. The clinical roles of the respondents do, however, represent the major user populations of the eRecord.
The survey results support the library’s objective of designing a clinical information tool embedded in the electronic health record that can be useful to clinical users.
Future research will continue to explore the impact of Clinical Focus on users of the eRecord. This research could include follow-up studies using focus groups or interviews to gather more detailed information about usefulness. It might also include comparing Clinical Focus to other similar tools. Future research could also include an evaluation of the location of Clinical Focus in eRecord. For example, would access within an order set lead to differences in usage rate or user satisfaction? Finally, additional studies should be conducted to determine continued usefulness and impact over time.
Barbara A. Epstein, MSLS, is director of the Health Sciences Library System at the University of Pittsburgh in Pittsburgh, PA.
Charles B. Wessel, MLS, is head of hospital services in the Health Sciences Library System at the University of Pittsburgh in Pittsburgh, PA.
Frances Yarger, MA, MAEd, is assistant director for computing services in the Health Sciences Library System at the University of Pittsburgh in Pittsburgh, PA.
John LaDue, MLIS, is the knowledge integration librarian in the Health Sciences Library System at the University of Pittsburgh in Pittsburgh, PA.
Mary Lou Klem, PhD, MLS, is a reference librarian in the Health Sciences Library System at the University of Pittsburgh in Pittsburgh, PA.
1. Humphreys, B. L. “Electronic Health Record Meets Digital Library: A New Environment for Achieving an Old Goal.” Journal of the American Medical Informatics Association 7, no. 5 (2000): 444–52.
2. Epstein, B. A., N. H. Tannery, C. B. Wessel, F. Yarger, J. LaDue, and A. B. Fiorillo. “Development of a Clinical Information Tool for the Electronic Health Record: A Case Study.” Journal of the Medical Library Association 98, no. 3 (2010): 223–27.
3. Gatian, A. W. “Is User Satisfaction a Valid Measure of System Effectiveness?” Information and Management 26, no. 3 (1994): 119–31.
4. Gelderman, M. “The Relation between User Satisfaction, Usage of Information Systems and Performance.” Information and Management 34, no. 1 (1998): 11–18.
5. Gluck, Myke. “Exploring the Relationship between User Satisfaction and Relevance in Information Systems.” Information Processing & Management 32, no. 1 (1996): 89–104.
6. Huffman, S. B., and M. Hochster. “How Well Does Result Relevance Predict Session Satisfaction?” SIGIR ’07: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2007.
7. Muylle, S., R. Moenaert, and M. Despontin. “The Conceptualization and Empirical Validation of Web Site User Satisfaction.” Information and Management 41, no. 5 (2004): 543–60.
8. Delic, Daniel, and Hans-J. Lenz. “Benchmarking User Perceived Impact for Web Portal Success Evaluation.” Journal of Information and Organizational Sciences 32, no. 1 (2008): 1–14.
9. DeLone, W. H., and E. R. McLean. “Information Systems Success: The Quest for the Dependent Variable.” Information Systems Research 3, no. 1 (1992): 60–95.
10. Delic, Daniel, and Hans-J. Lenz. “Benchmarking User Perceived Impact for Web Portal Success Evaluation.” Journal of Information and Organizational Sciences 32, no. 1 (2008): 1–14.
11. Golob, J. F., Jr., A. M. Fadlalla, J. A. Kan, N. P. Patel, C. J. Yowler, and J. A. Claridge. “Validation of Surgical Intensive Care-Infection Registry: A Medical Informatics System for Intensive Care Unit Research, Quality of Care Improvement, and Daily Patient Care.” Journal of the American College of Surgeons 207, no. 2 (2008): 164–73.
12. Palm, J. M., I. Colombet, C. Sicotte, and P. Degoulet. “Determinants of User Satisfaction with a Clinical Information System.” AMIA Annual Symposium Proceedings (2006): 614–18.
13. Sicotte, C., G. Pare, M. P. Moreault, A. Lemay, L. Valiquette, and J. Barkun. “Replacing an Inpatient Electronic Medical Record: Lessons Learned from User Satisfaction with the Former System.” Methods of Information in Medicine 48, no. 1 (2009): 92–100.
14. Krosnick, J. A. “Survey Research.” Annual Review of Psychology 50 (1999): 537–67.
15. Fowler, F. J., Jr. Survey Research Methods. Beverly Hills, CA: Sage, 1988.
Nancy H. Tannery, MLS; Barbara A. Epstein, MSLS; Charles B. Wessel, MLS; Frances Yarger, MA, MAEd; John LaDue, MLIS; and Mary Lou Klem, PhD, MLS. “Impact and User Satisfaction of a Clinical Information Portal Embedded in an Electronic Health Record.” Perspectives in Health Information Management (Fall 2011): 1-10.