Employing Kirkpatrick’s Evaluation Framework to Determine the Effectiveness of Health Information Management Courses and Programs

Abstract

Evaluation of the impact and effectiveness of courses is necessary so that strengths and weaknesses can be identified and improvements made. This article uses Kirkpatrick’s evaluation framework to present a model that health information management (HIM) instructors can use to improve upon the standard course evaluation form. Kirkpatrick’s model stresses evaluation on the levels of reaction, learning, behavior, and results.

The proposed course evaluation model addresses the first three of these levels and focuses on the conditions necessary for transfer of learned knowledge and skills into on-the-job application. The article provides concrete tips that HIM instructors can apply in the process of evaluating the effectiveness of their courses and programs.

Key words: HIM; evaluation; Kirkpatrick; education

Evaluation of the impact and effectiveness of courses is necessary so that strengths and weaknesses can be identified and improvements made. As in most academic fields, health information management (HIM) instructors typically use the course evaluation tool their organization suggests or requires, commonly referred to as a smile sheet. Smile sheets are not without value: they can identify whether the course was a negative experience likely to inhibit learning and application back on the job.1 However, they do not measure learning or on-the-job application.

Now is the time to employ a more comprehensive model that incorporates the desires of all relevant stakeholders: the student, the instructor, and the employer. The following model is based on the seminal work of Kirkpatrick.

Developed more than 50 years ago as his dissertation, Kirkpatrick’s framework for evaluation has been used as a basic model for the identification and targeting of training-specific interventions in business, government, the military, and industry alike.2,3 It has been successfully used by Duke Energy, Arthur Andersen and Company, Intel University, and St. Luke’s Hospital, among thousands of others.4,5 An example of its use in education is Baskin’s exploration of student-learning outcomes related to online group work.6

Kirkpatrick stressed that an evaluation should go beyond the immediate reactions of the attendees. Evaluation can be carried out on four different levels:

  1. reaction,
  2. learning,
  3. behavior, and
  4. results.7

For HIM instructors’ purposes, the first three levels will suffice, as the fourth level looks at systemwide or organizational impact. This level seeks to determine if an increase in company profits, customer satisfaction, or similar measures occurred as a result of the training. A longer-term evaluation to measure holistic improvement in the workplace after the learning experience would be highly beneficial but is beyond the scope of most HIM programs.

The Model

Level one evaluation assists an organization in assessing a participant’s reactions to a course’s instructor, setting, materials, and learning activities. This level of evaluation is essential because it gathers direct feedback from participants, and many organizations use it as their sole means of evaluation.8 The strength of this level of evaluation is the ease of obtaining the information. However, positive satisfaction ratings do not ensure learning and subsequent application of program content.9,10

From an instructor’s point of view, it is important to get good satisfaction ratings. An instructor must earn favorable ratings to attract new participants and to get current participants to return for other courses. Also, if participants are not satisfied, they probably will not be motivated to learn. So while good satisfaction ratings do not guarantee learning, bad ratings most likely decrease the probability of it occurring.11,12 To keep things simple, I would suggest starting with the organization’s existing course evaluation form. Review it to determine whether it will elicit all the information you wish to obtain; you may need to add questions. The form should be administered immediately after the learning event.
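Tallying the ratings from such a form is straightforward to automate. The sketch below is a minimal, hypothetical illustration: the questions, the 1–5 rating scale, and the 3.5 review threshold are assumptions for the example, not part of any standard instrument.

```python
from statistics import mean

# Hypothetical level one (reaction) responses: each question rated 1-5
# by five participants.
responses = {
    "Instructor was well prepared": [5, 4, 5, 4, 5],
    "Materials were useful":        [3, 2, 3, 4, 2],
    "Activities aided learning":    [4, 4, 5, 3, 4],
}

# Flag any question whose mean rating falls below an arbitrary threshold
# of 3.5, signaling an aspect of the course that may need attention.
for question, ratings in responses.items():
    avg = mean(ratings)
    flag = "  <- review" if avg < 3.5 else ""
    print(f"{question}: {avg:.1f}{flag}")
```

A summary like this makes weak items visible at a glance, but, as noted above, it still measures satisfaction only, not learning.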

The second level of evaluation involves determining the extent to which learning has occurred.13 A variety of techniques can be used to determine whether the learning objectives have been met. Types of level two assessments include performance testing, simulations, case studies, role plays, and exercises.14,15 My suggestion would be to develop a pretest and a posttest, so that students demonstrate actual knowledge of the subject before and after instruction, and to quantify the results, most likely using t-tests. A t-test is a statistical test used to determine whether the difference between two sets of results is statistically significant.16 It should be noted that although a participant may possess the knowledge, skills, and attitudes taught in the course, there is still no guarantee of application on the job.17–19 Of critical concern to any HIM department and employing organization is the actual on-the-job application of acquired knowledge and skills.
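The pretest/posttest comparison described above can be sketched as a paired-samples t-test. The scores below are hypothetical, and the critical value is hard-coded for this example’s sample size; in practice an instructor would look up (or compute) the critical value for the actual number of students.

```python
from statistics import mean, stdev

def paired_t_statistic(pretest, posttest):
    """Paired-samples t statistic: mean score difference divided by the
    standard error of the differences."""
    diffs = [post - pre for pre, post in zip(pretest, posttest)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)

# Hypothetical scores for 10 students before and after instruction.
pretest  = [55, 60, 48, 70, 62, 58, 65, 50, 72, 61]
posttest = [70, 74, 63, 80, 75, 72, 78, 66, 85, 73]

t = paired_t_statistic(pretest, posttest)
# Two-tailed critical value for df = 9 at alpha = 0.05 is 2.262; a |t|
# above this suggests the pre-to-post gain is statistically significant.
print(f"t = {t:.2f}, significant: {abs(t) > 2.262}")
```

A statistically significant gain supports the claim that the learning objectives were met, though, as the text notes, it still says nothing about on-the-job application.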

The third level of evaluation attempts to determine the extent to which new skills and knowledge have been applied on the job.20–22 Level three evaluation should not be conducted before completing level one and level two evaluations.23,24 Even when satisfaction ratings are good and the learning objectives are met, transfer of knowledge into behavior may not occur.25,26

Kirkpatrick lists four conditions that must be met for change to occur:

  1. the person must have a desire to change,
  2. the person must know what to do and how to do it,27
  3. the person must work in the right climate, and
  4. the person must be rewarded for changing.28

The first two conditions can be met by fostering a positive disposition toward the sought-after change and by instilling the requisite knowledge and skills to be successful.29

The third condition, the right climate, refers to the participant’s work environment. For knowledge and skills to be transferred, the environment must be receptive to the transfer.30,31 Swanson and Holton maintain that the learner’s work environment has more influence on whether learning is applied than the learning experience itself.32

Barriers to using knowledge and skills on the job include:

  1. the lack of the opportunity to use one’s learning,33,34
  2. the lack of the personal capacity to try out the learning,
  3. a belief that the effort exerted will not change performance,35
  4. a belief that the desired performance will not lead to outcomes the learner values,36,37
  5. the extent to which the supervisor or manager actively inhibits the use of the new knowledge and skills,38–40 and
  6. the resistance that peers offer when new approaches are used.41,42

The fourth condition, rewards, can be either intrinsic or extrinsic.43 Intrinsic rewards are the psychological compensation an individual gets from work. When an individual is intrinsically rewarded, he or she is energized and fulfilled by doing a job well. Extrinsic rewards are the economic rewards received from others; they include pay increases, bonuses, and benefits.44

Many organizations avoid level three evaluations because they take time, add cost to the training and development process, and are often disruptive. For these reasons, only training programs that are critical to organizational success, that represent significant investments, or for which skill application is critical to the goals of the organization should be evaluated at this level.45 HIM skills meet all three criteria.

Level three evaluations can be obtained in numerous ways. I would suggest a survey of the direct supervisor of the former student. It could be conducted by phone, e-mail, letter, or other means.

The fourth level of evaluation involves measuring systemwide or organizational impact of training and is generally beyond the scope of HIM course evaluations.46 Most organizational leaders are interested in knowing how education actually improved the business in terms they understand. This is often quite difficult because some needs assessment processes do not link skills and knowledge deficiencies to business performance problems or opportunities. Only a small percentage of programs should be evaluated at level four because of the increased time requirements, additional cost, and the complexity of measuring business impact.47

Conclusion

Although Kirkpatrick’s model was never intended to describe exactly what to evaluate and how to do it, it does provide an overview of how to proceed.48 The model is still in widespread use.49 Also, it is the standard to which other techniques are compared.50 Finally, adult education practitioners generally hold this approach to be efficacious.51–54 Given the proliferation of HIM courses and programs, Kirkpatrick’s model provides an excellent framework for determining the strengths and weaknesses of HIM instruction.

Donald (Nick) Rouse, MS, MBA, EdD, RHIA, is a program director of health information technology at Durham Technical Community College in Durham, NC.

Notes

1. Lopker, G., and R. Askeland. “More Than a Smile Sheet: Using Level 1 Evaluation Effectively.” T+D, September 2009, 74–75.
2. Kirkpatrick, D. “50 Years of Evaluation.” T+D, January 2010, 14.
3. Watkins, R., D. Leigh, R. Foshay, and R. Kaufman. “Kirkpatrick Plus: Evaluation and Continuous Improvement with a Community Focus.” Educational Technology Research and Development 46, no. 4 (1998): 90.
4. Kirkpatrick, D. Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler, 1998.
5. Mathison, S. Encyclopedia of Evaluation. Thousand Oaks, CA: Sage, 2004.
6. Baskin, C. “Using Kirkpatrick’s Four-Level-Evaluation Model to Explore the Effectiveness of Collaborative Online Group Work.” In Proceedings of the Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE 2001), Melbourne, Australia, December 9–12, 2001, 37–44.
7. Kirkpatrick, J., and W. Kirkpatrick. The Kirkpatrick Four Levels: A Fresh Look After 50 Years, 1959–2009. April 2009. Available at http://www.managesmarter.com/managesmarter/images/pdfs/trg_20090417_kirkpatrickwhitepaper.pdf.
8. Morgan, R., and W. Casper. “Examining the Factor Structure of Participant Reactions to Training: A Multidimensional Approach.” Human Resource Development Quarterly 11, no. 3 (2000): 301–17.
9. Baldwin, T., and J. Ford. “Transfer of Training: A Review and Directions of Future Research.” Personnel Psychology 41, no. 1 (1988): 63–105.
10. Phillips, J. Handbook of Training Evaluation and Measurement Methods. Houston, TX: Gulf, 1997.
11. Kirkpatrick, D. Evaluating Training Programs: The Four Levels.
12. Kirkpatrick, D. “The Four Levels of Evaluation.” In S. Brown and C. Seidner (Editors), Evaluating Corporate Training: Models and Issues. Norwell, MA: Kluwer Academic, 1998, 95–112.
13. Phillips, J. Handbook of Training Evaluation and Measurement Methods.
14. Schriver, R. “Testing Employee Performance: A Review of the Milestones.” In D. Kirkpatrick (Editor), Another Look at Evaluating Training Programs. Alexandria, VA: American Society of Training and Development, 1998, 33–35.
15. Phillips, J. Handbook of Training Evaluation and Measurement Methods.
16. Horton, L. Calculating and Reporting Healthcare Statistics. Chicago, IL: AHIMA Press, 2010.
17. Baldwin, T., and J. Ford. “Transfer of Training: A Review and Directions of Future Research.”
18. Bates, R., E. Holton, and D. Seyler. “Validation of a Transfer Climate Instrument.” In Academy of Human Resource Development Conference Proceedings, Minneapolis, MN, February 29–March 3, 1996.
19. Phillips, J. Handbook of Training Evaluation and Measurement Methods.
20. Baldwin, T., and J. Ford. “Transfer of Training: A Review and Directions of Future Research.”
21. Bates, R., E. Holton, and D. Seyler. “Validation of a Transfer Climate Instrument.”
22. Phillips, J. Handbook of Training Evaluation and Measurement Methods.
23. Kirkpatrick, D. Evaluating Training Programs: The Four Levels.
24. Kirkpatrick, D. “The Four Levels of Evaluation.”
25. Baldwin, T., and J. Ford. “Transfer of Training: A Review and Directions of Future Research.”
26. Broad, M., and J. Newstrom. Transfer of Training. Reading, MA: Addison-Wesley, 1992.
27. Gielen, E. “Transfer of Training in a Corporate Setting: Testing a Model.” In Academy of Human Resource Development Conference Proceedings, Minneapolis, MN, February 29–March 3, 1996.
28. Kirkpatrick, D. Evaluating Training Programs: The Four Levels, 21.
29. Kirkpatrick, D. Evaluating Training Programs: The Four Levels.
30. Bates, R., E. Holton, and D. Seyler. “Validation of a Transfer Climate Instrument.”
31. Gielen, E. “Transfer of Training in a Corporate Setting: Testing a Model.”
32. Swanson, R., and E. Holton. Results: How to Assess Performance, Learning, and Perceptions in Organizations. San Francisco: Berrett-Koehler, 1999.
33. Gielen, E. “Transfer of Training in a Corporate Setting: Testing a Model.”
34. Swanson, R., and E. Holton. Results: How to Assess Performance, Learning, and Perceptions in Organizations.
35. Ibid.
36. Ibid.
37. Vroom, V. Work and Motivation. New York: Wiley, 1964.
38. Kirkpatrick, D. Evaluating Training Programs: The Four Levels.
39. Rouiller, J., and L. Goldstein. “The Relationship between Organizational Transfer Climate and Positive Transfer of Training.” Human Resource Development Quarterly 4, no. 4 (1993): 377–90.
40. Swanson, R., and E. Holton. Results: How to Assess Performance, Learning, and Perceptions in Organizations.
41. Rouiller, J., and L. Goldstein. “The Relationship between Organizational Transfer Climate and Positive Transfer of Training.”
42. Swanson, R., and E. Holton. Results: How to Assess Performance, Learning, and Perceptions in Organizations.
43. Kirkpatrick, D. Evaluating Training Programs: The Four Levels.
44. Thomas, K. “Intrinsic Motivation and How It Works.” Training 37, no. 10 (2000): 130–35.
45. Phillips, J. Handbook of Training Evaluation and Measurement Methods.
46. Ibid.
47. Ibid.
48. Kirkpatrick, D. “Great Ideas Revisited: Revisiting Kirkpatrick’s Four-Level Model.” Training and Development 50, no. 1 (1996): 54–57.
49. Broad, M. “Transfer Concepts and Research Overview.” In M. Broad (Editor), Transferring Learning to the Workplace. Alexandria, VA: American Society for Training and Development, 1997, 1–18.
50. Hanson, J. “Common Ground, Is It Reachable? A Program Evaluation of a Cross-Cultural Training Program.” Doctoral diss., Purdue University, 1997. Dissertation Abstracts International, 58-12A: 4530.
51. Alliger, G., and E. Janak. “Kirkpatrick’s Levels of Training Criteria: Thirty Years Later.” Personnel Psychology 42, no. 2 (1989): 331–41.
52. Broad, M. “Transfer Concepts and Research Overview.”
53. Hanson, J. “Common Ground, Is It Reachable? A Program Evaluation of a Cross-Cultural Training Program.”
54. Pine, J., and J. Tingley. “ROI of Soft-Skills Training.” Training 30, no. 2 (1993): 55–60.

Donald Rouse MS, MBA, EdD, RHIA. “Employing Kirkpatrick’s Evaluation Framework to Determine the Effectiveness of Health Information Management Courses and Programs.” Perspectives in Health Information Management (Spring 2011): 1-5.
