The purpose of this research was to a) determine what assessment methods are being used in undergraduate health information administration programs to assess student learning and how useful those methods are, and b) determine to what extent programs have incorporated good student learning assessment practices.
Programs use a variety of assessment tools to measure student learning; the most useful include assessments by the professional practice supervisor, course tests, assignments, presentations, capstone experiences, comprehensive projects, analyses of the students’ academic progress, and the results of the RHIA credentialing examination. The greatest challenge in using the results of student outcomes assessment is making changes in a timely manner. Satisfying accreditation requirements and program improvement were identified as the primary reasons for conducting outcomes assessment.
The results signify a need to increase program directors’ knowledge regarding the essentials of outcomes assessment.
Key Words: student learning assessment, educational outcomes, health information management, health information administration, assessment tools
Colleges and universities find themselves reacting to a myriad of pressures from state and federal governments, society, employers, accrediting agencies, and in some instances professional organizations demanding educational accountability and quality.1-6 In response, institutions of higher learning implemented assessment programs to meet expectations for more efficient educational programs and more effective student learning by documenting the knowledge and skill levels students achieved as a result of the educational process.7-8 The framework assisting colleges and universities in establishing effective outcomes assessment programs for student learning identifies characteristics of good practice, such as those specified in the American Association for Higher Education’s Nine Principles of Good Practice for Assessing Student Learning.9 Effective assessment programs are faculty driven, making the faculty responsible for involvement in all phases, from defining the purpose of assessment to analyzing data and making recommendations. Institutions should demonstrate their commitment to assessment by linking departmental results to planning, budget development, and personnel decisions.10-12 A list of the characteristics of good practice can be found in Figure 1.
The desire for a quality education is the driving force behind the student assessment movement. Assessment methods examine activities including course examinations, comprehensive examinations, capstone courses, performance on professional examinations, portfolio assessments, classroom tests and assignments, exit interviews of students, assessments by supervisors for students on practicum, projects, and poster sessions. Other methods of assessing student learning involve surveying or interviewing students in the program, alumni, and employers.13-15
Assessment in higher education requires institutions to use a systematic approach to gather, analyze, and use information about educational programs for the purpose of improving student learning and development.16 Although a large body of literature exists on outcomes assessment in higher education, professional education has received little attention in the literature due to the belief that professional education is self-regulating through the discipline’s professional organization and the specialized accreditation process.17
Health information administration (HIA) programs are regulated by the Commission on Accreditation for Health Informatics and Information Management Education (CAHIIM). CAHIIM incorporated assessing student learning in the 2005 accreditation standards for undergraduate HIA programs. These standards reflect a paradigm shift to an outcomes-based accreditation process that systematically addresses the issue of quality by linking accreditation to program goals and student learning outcomes. Standards II.A.3 and II.A.3a, Students and Graduates, require programs to assure a) that the educational needs of students are met, b) that graduates demonstrate entry-level competencies, and c) that student progress toward achieving the competencies identified in the curriculum is frequently evaluated.18 Most higher education institutions have been involved in assessing student learning outcomes for many years; therefore, CAHIIM does not prescribe a particular assessment method to comply with the standards. The absence of outcomes literature for professional education, coupled with the lack of specificity by CAHIIM, presents challenges for the HIA program directors who are responsible for conducting outcomes assessment.
Literature specific to assessment of student outcomes in undergraduate HIA education is nonexistent; no research has yet documented what undergraduate HIA programs are doing to assess student outcomes. Programs are struggling with what to assess and how to use assessment results for enhancing learning. Insight into the current practice of outcomes assessment in undergraduate HIA programs throughout the country would assist schools in developing meaningful assessment programs. This research asks a) what assessment methods are being used in undergraduate HIA programs to assess student learning, b) how program directors perceive the usefulness of these assessment methods, and c) to what extent programs have incorporated attributes associated with good practice of assessing student learning.
A five-part, self-administered survey was mailed to the 45 directors of accredited HIA bachelor degree programs. To assure face validity of the instrument, two directors of HIA educational programs and one senior faculty member were asked to review the document and give feedback regarding content, structure, design, and language. Changes were made to the instrument reflecting their feedback. Permission to conduct this study was granted by the Institutional Review Board prior to initiating the research study. Participation in the study was voluntary, meaning subjects did not have to complete or return the survey. Consent was implied by completing and returning the survey to the investigator.
Initially, the researcher confirmed the names and addresses of all of the HIA program directors with the American Health Information Management Association. The survey, along with a cover letter explaining how the data would be collected and how the information would be used, was sent to each director, and a reminder postcard was sent to all individuals who did not respond within a month.
Data were entered into the Statistical Package for the Social Sciences (SPSS). Frequency distributions, measures of central tendency, variance, and correlation were used to analyze the research questions. Responses to open-ended questions were transcribed and analyzed to identify themes.
A total of 39 surveys were returned for an 87 percent response rate.
Demographics of Participating HIA Programs
The descriptive characteristics of the HIA programs responding to the questionnaire are presented in Table 1, along with the reasons for assessment. The majority of respondents were from public institutions (72 percent). Thirty-one programs (80 percent) were over 15 years old. Thirty-two programs (82 percent) indicated that assessment had been used in the program for five or more years. Satisfying accreditation requirements (100 percent) and program improvement (100 percent) were the top two reasons for conducting assessment; resource allocation was the least common reason, cited by 41 percent.
Methods Used by Programs to Assess Student Learning
Methods used by programs to assess student learning are presented in Table 2. All programs used assessments by professional practice supervisors, course tests, assignments, and presentations. More than 85 percent of the respondents used the following methods to measure student learning:
- a capstone experience at the conclusion of the senior year
- a comprehensive project
- a periodic analysis of the student’s academic progress
- the results of the RHIA credentialing examination for student outcome assessment
It is interesting to note that pre- and post-testing, exit interviews, focus groups at the completion of the program, and student portfolios are used less often than other assessment methods.
Further analysis of the less frequently used assessment methods appears in Figure 2. These data suggest a considerable difference exists between HIA programs located in public institutions and those in private institutions in the use of exit examinations at the conclusion of the program, focus groups, and student portfolios for assessing student outcomes. The data also show that a higher percentage of programs less than 15 years old use pre- and post-testing, exit examinations at the conclusion of the program, focus groups, and student portfolios compared to programs 15 years old or older.
Usefulness of Assessment Methods
Table 3 indicates the level of usefulness for the various assessment methods. Those rated as useful include capstone experiences, course tests, course assignments, exit examinations at the conclusion of course work, analyses of academic progress, and student portfolios. Each assessment method will be discussed individually.
Capstone: Thirty-seven programs find the capstone experience useful in assessing student learning. Two programs do not use a capstone experience for assessing student learning.
Course tests, assignments, and presentations: Course tests, assignments, and presentations are used by all programs and found to be useful assessment methods.
Assessment by professional practice supervisors: Thirty-eight program directors rated this assessment method as useful; one program director did not.
Pretest and posttest: Mature programs (those 15 years or older) were more likely to find pre- and post-testing useful than younger programs (those less than 15 years old).
Exit examination at conclusion of coursework: Twenty-eight programs utilize an exit examination to assess student learning.
Exit interview: This assessment method is not used by 51 percent of the programs. Of those programs using exit interviews, 53 percent found this method useful in assessing student learning.
Focus groups: Sixty-seven percent of the respondents do not use focus groups to gather data to assess student outcomes. Sixty-two percent of the programs that use focus groups found them to be a very useful assessment method.
Analysis of academic progress: Ninety-seven percent of the programs rate the analysis of academic progress as useful in assessing student learning.
Student portfolios: Student portfolios are used by 49 percent of the programs. Ten programs find student portfolios to be very useful in assessing student learning outcomes.
RHIA credentialing examination results: Sixty-two percent of the programs indicate the results of the credentialing examination are very useful in assessing student learning. One program does not use the results and two programs find the results not useful.
Characteristics of Good Practice for Assessing Student Learning
The survey queried the directors about the incorporation of these characteristics in their assessment practices. Analysis of the responses indicates the level of incorporation varies by program, as revealed in Table 4. It is of some concern that a few program directors indicated the educational values and goals of the program are only “somewhat” or “minimally” reflected in the program’s assessment process for student outcomes. Sixteen percent of the program directors noted that stakeholders are either “somewhat” or “minimally” involved in the assessment process for the program. Eighty percent of the respondents said that assessment is faculty driven, which was lower than expected. The majority of programs do not fully incorporate the collection of data on teaching techniques, classroom experiences, or environmental components, or the use of outcome data for institutional planning, budget development, and personnel decisions.
Greatest Challenge for Using Results of Student Outcomes Assessment
Program directors were asked to identify the greatest challenge in using results of the student outcomes assessment process. Responses included:
- being able to make indicated changes in a timely manner
- resistance of faculty when outcome results indicate a potential problem with their course or its delivery
- linking the results to one course
- avoiding the “politics” of critical outcomes
- finding time to sit and contemplate the results
- finding time to implement change
- inability to make change swiftly
- determining whether a poor outcome is due to an individual student’s success (or lack thereof), a “flaw” in the program, or both
- separating valid from frivolous student comments
- using national certification examination results when a low number of students sit for the credentialing exam
Like many allied health programs, health information administration programs are required by their accrediting agency to assess student learning and program outcomes. For this reason, it is not surprising that HIA program directors indicated that satisfying accreditation requirements and program improvement are the primary reasons for conducting assessment activities. This result is consistent with recent literature affirming a growing demand for student learning accountability.19-20 Accrediting agencies, such as CAHIIM, are responsible for establishing the expectations for institutions and programs to assess student learning outcomes.21
Analysis of the data revealed that undergraduate HIA programs use a variety of methods to assess student learning outcomes; some methods are more useful than others. Test results were identified as the most valuable assessment method used to measure student learning. Test scores are effective in measuring student learning if linked to a course learning goal.22 Focus groups, student portfolios, and exit interviews were less frequently used; however, programs applying these methods found them useful. These findings indicate a possible need to educate program directors on the various assessment tools available for measuring student learning. The RHIA credentialing examination allows an HIM program to compare the performance of its graduates to that of HIM graduates nationwide. Standardized tests are sometimes criticized for providing minimal feedback to faculty.23 Although the majority of the program directors indicated the results of the credentialing examination were very useful in assessing student learning outcomes, further investigation is needed to determine why some program directors disagree.
The “Characteristics of Good Practice for Assessing Student Learning” was compiled as a resource for the development of assessment programs in colleges and universities.24-25 As seen in Table 4, not all characteristics were incorporated into the assessment practice of all programs. Responses to characteristics 4 and 5 reveal that some program directors do not include data on teaching techniques and classroom experiences in the assessment process. This is somewhat puzzling since improvement in teaching and learning in the classroom cannot occur without obtaining feedback on faculty and courses.26
Characteristic 6 brings attention to the belief that the environment, along with the institution’s resources, assists students in achieving their learning outcomes.27-28 CAHIIM requires institutions to provide resources to support the program’s goals and outcomes. These resources include financial support; physical resources such as classrooms, laboratory space, Internet access, computer hardware, software, and specific HIM software applications; and sufficient faculty and staff to achieve the program’s goals and outcomes.29 Feedback from current students, new graduates, faculty, and staff addressing these issues can be obtained through surveys or focus groups. It seems appropriate for program directors to seek assistance in data collection methodology if needed. This assistance may come from within the institution, from another program director, or from a professional organization.
Characteristics 10, 11, and 12 seek to link the results of student learning outcomes to program planning, budgeting, and personnel decisions.30 Linking student outcomes assessment results to planning, budgeting, and personnel decisions may not occur in some institutions, which could explain the responses to these items. Program directors faced with justifying the program’s existence might find such linking beneficial.
Program directors recognize the need to assess student learning as evidenced by the variety of assessment methods being used and the number of years programs have conducted outcomes assessment. However, using the results becomes a challenge. Involving faculty, students, alumni, employers, and other stakeholders as appropriate in the assessment process might resolve or at least minimize the challenges.
Conclusion and Recommendations
This research documents that ongoing assessment of student learning is occurring in undergraduate HIA programs. Although program directors noted a variety of assessment methods being used, test results were identified as the most valuable. With the wide array of assessment tools available, program directors and faculty might benefit from increased knowledge of the subject. For this reason, program directors and faculty must seek out educational opportunities both internal and external to the institutions in which they work.
AHIMA, through the Assembly on Education, could take a leadership position by offering affordable seminars and workshops addressing the needs of the educators on the topic of outcomes assessment.
Program directors indicated satisfying accreditation requirements as a reason for performing assessment activities. CAHIIM, the accrediting agency for HIA programs, could provide formalized training to educators for student and program assessment. Knowledge on how to link assessment results to program goals, curriculum, staffing, physical resources, and budget is instrumental in maintaining a quality program.
Program directors must persevere in obtaining funding for faculty to participate in educational opportunities to broaden their knowledge of the assessment process. Ideally, the institution should demonstrate commitment to the assessment process by allocating adequate funding for faculty participation and education in each school, department, or program budget.
To assist in meeting the challenges associated with using assessment results, stakeholders can be a valuable resource. Program directors should, at a minimum, involve faculty, current students, graduates of the program, professional practice supervisors, and employers in assessing the data obtained through the assessment process. Involving a wide range of stakeholders in the process facilitates acceptance of the results.
This study provides baseline information that can be used to further the knowledge of assessing student learning. Further studies could investigate the different perceptions of students’ learning between the clinical supervisors, program faculty, and program director; investigate in more detail the effectiveness of the various assessment methods used to assess student learning; and investigate the usefulness of student outcomes assessment in program development, program quality, and faculty development.
Jody Smith, PhD, RHIA, FAHIMA, is an Associate Professor and Chair of the Health Informatics and Information Management Department of Saint Louis University in St. Louis, MO.
1. Bilder, A.E. and D.F. Conrad. “Challenges in Assessing Outcomes in Graduate and Professional Education.” In Haworth, J.G. (Editor). New Directions for Institutional Research. San Francisco: Jossey-Bass, 1996, 92: 5-15.
2. Eckel, P., B. Hill, and M. Green. American Council on Education. On Change I—En route to transformation. (1998). Retrieved March 14, 2002. Available at http://www.acenet.edu/bookstore/pubInfo.cfm?pubID=62.
3. Kolb, C.E.M. Accountability in postsecondary education. (1995). Retrieved November 3, 2001. Available at http://www.ed.gov/offices/OPE/PPI/FinPostSecEd/kolb.html
4. Lewis, D.R. “Costs and Benefits of Assessment: A Paradigm.” In Banta, T.W. (Editor). New Directions for Institutional Research. San Francisco: Jossey-Bass, 1988, p. 59.
5. Peterson, M.W., et al. National Center for Postsecondary Improvement. Designing student assessment to strengthen institutional performance in baccalaureate institutions. (1999). Retrieved June 21, 2002. Available at http://www.stanford.edu/group/ncpi/documents/pdfs/5-08_baccalaureate.pdf.
6. Council for Higher Education Accreditation. “Footsteps and Footprints: Emerging National Accountability for Higher Education Performance.” Inside Accreditation. Washington, D.C.: CHEA, January 2006. Retrieved July 30, 2006. Available at http://www.chea.org/ia/IA_010406.htm.
7. Gray, P.J. “Viewing assessment as an innovation: Leadership and the change process.” In P.J. Gray and T.W. Banta, (Editors). New directions for higher education. San Francisco: Jossey-Bass, 1997, p. 100.
8. Barr, R.B. and J. Tagg. “From Teaching to Learning—A New Paradigm for Undergraduate Education.” Change 27, no. 6 (1995): 12-25.
9. American Association for Higher Education. Nine Principles of Good Practice for Assessing Student Learning. Retrieved July 12, 2005. Available at http://www.fctel.uncc.edu/pedagogy/assessment/9Principles.html.
10. Nichols, J.O. Assessment case studies: Common issues in implementation with various campus approaches to resolution. New York: Agathon Press, 1995.
11. Palomba, C.A. and T.W. Banta. Assessment Essentials. San Francisco: Jossey-Bass, 1999.
12. Erwin, T.D. Assessing Student Learning and Development. San Francisco: Jossey-Bass, 1991.
13. Terenzini, P. T. “Assessment with open eyes: Pitfalls in studying student outcomes.” Journal of Higher Education 60 no. 5 (1989): 644-664.
14. Nichols, J.O. Assessment case studies: Common issues in implementation with various campus approaches to resolution.
15. Palomba, C.A. and T.W. Banta. Assessment Essentials.
17. Bilder, A.E. and D.F. Conrad. “Challenges in Assessing Outcomes in Graduate and Professional Education.”
18. Commission on Accreditation for Health Informatics and Information Management Education. 2005 Standards for Health Information Management Education. (2005) Retrieved February 20, 2006. Available at http://www.cahiim.org/standards.
19. American Council on Education. Public Accountability for Student Learning in Higher Education: Issues and Options. (2004) Retrieved July 30, 2006. Available at http://www.bhef.com/includes/pdf/2004_public_accountability.pdf.
20. Council for Higher Education Accreditation. “Footsteps and Footprints: Emerging National Accountability for Higher Education Performance.”
21. Council for Higher Education Accreditation. Statement of Mutual Responsibilities for Student Learning Outcomes: Accreditation, Institutions, and Programs. (2003) Retrieved July 30, 2006. Available at http://www.chea.org/pdf/StmntStudentLearningOutcomes9-03.pdf.
22. Walvoord, B.E. and V.J. Anderson. Effective Grading: A Tool for Learning and Assessment. San Francisco: Jossey-Bass, 1998.
23. Erwin, T.D. Assessing Student Learning and Development.
24. Banta, T., et al. Assessment in Practice: Putting Principles to Work on College Campuses. San Francisco: Jossey-Bass, 1996.
25. Banta, T.W. “Moving Assessment Forward: Enabling Conditions and Stumbling Blocks.” In Gray, P.J. and T.W. Banta (Editors). New Directions for Higher Education. San Francisco: Jossey-Bass, 1997, p. 100.
26. Banta, T., et al. Assessment in Practice: Putting Principles to Work on College Campuses.
27. Astin, A.W. Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. Phoenix: Oryx Press, 1993.
28. Erwin, T.D. Assessing Student Learning and Development.
29. Commission on Accreditation for Health Informatics and Information Management Education. 2005 Standards for Health Information Management Education. Retrieved February 20, 2006. Available at http://www.cahiim.org/standards.
30. Palomba, C.A. and T.W. Banta. Assessment Essentials.
Article citation: Perspectives in Health Information Management 3;6, Summer 2006