Development of an Instrument to Measure Students’ Perceptions of Information Technology Fluency Skills: Establishing Content Validity

Introduction

This article describes the process undertaken to develop and validate a tool to measure students’ perceptions of their information technology (IT) fluency skills. Why is this important? There is a growing concern that students today are not prepared to live, learn, and work in a technology-rich society.1–3 Today’s college students do not have the necessary IT fluency skills despite their widespread use of the Internet.4–6

Studies that assess students’ IT fluency show gaps between students’ perceived and actual skills.7, 8 These studies employ a variety of assessments and instruments; however, no tool has been developed specifically to assess allied health students’ perceptions of their IT fluency skills.

The purpose of this study is to establish the content validity of an instrument to measure students’ perceptions of their IT fluency skills using a rigorous judgment-quantification process. The IT fluency instrument developed and validated herein will be used for future studies comparing allied health students’ perceptions of their IT fluency skills with their actual IT fluency skills.

Review of the Literature

The assessment of students’ perceptions of IT fluency skills derives from the National Research Council’s IT fluency report of 1999. The report challenges the use of the term computer literacy, which implies merely possessing a particular skill or basic knowledge, whereas fluency involves deep understanding, critical thinking, and the ability to adapt to changes in technology.9 The Computer Science and Telecommunications Board of the National Research Council (NRC) devised the concept of fluency with information technology (FITness) to describe an individual’s ability to handle information technology. While computer literacy is defined with a focus on computer skills, specifically the ability to use a few computer applications, FITness requires that people understand information technology well enough to apply it productively in work situations and in their daily lives, to recognize when information technology may assist or hinder the achievement of goals, and to adapt to changes in and the advancement of information technology.10, 11

FITness requires three kinds of knowledge: contemporary skills, foundational concepts, and intellectual capabilities.12 Contemporary skills, the ability to use today’s computer applications, enable an individual to apply information technology immediately.13, 14 Contemporary skills are an essential component of job readiness. Foundational concepts explain the how and why of information technology. Foundational concepts are defined as the ability to understand the basic principles of computers, networks, and information systems.15, 16 Intellectual capabilities are the higher-level thinking skills needed to apply information technology in complex and sustained situations. For instance, the ability to identify where errors exist in a database and solve such problems requires more than just the ability to enter data into a database. Also, the ability to understand the changing technology industry allows an intellectually capable individual to investigate alternatives to antiquated products and processes. Intellectual capabilities empower people to manipulate the medium to their advantage and to handle unintended and unexpected problems.17, 18 Because foundational concepts, intellectual capabilities, and contemporary skills are essential to the IT fluency concept, they serve as the three constructs from which this tool was developed. Although many assessment instruments exist to measure students’ IT fluency skills, no studies have been undertaken in the field of allied health, more specifically health information management.

Methodology

Overview

Content validity is an essential step in the development of new empirical measuring devices because it represents a beginning mechanism for linking abstract concepts with observable and measurable indicators.19 Lynn (1986) describes content validation as a two-stage process, beginning with a development stage and ending with a judgment-quantification stage.20 Stage one, the development stage, requires a comprehensive review of the literature to identify content for the instrument and establish relevant domains. In this study, the literature review identified approximately 30 to 40 articles on the subject of information technology fluency. After the literature was reviewed and the items were constructed, the entire instrument was assembled with instructions and scoring guidelines.

The second stage, judgment-quantification, occurs when a panel of experts, working independently, evaluates the instrument and rates each item’s relevance to the content domain.21 In addition, item content and clarity, as well as overall instrument comprehensiveness, are evaluated in this stage. Berk (1990) suggests that expert panel members evaluate how representative the items are of the content domain.22 As part of this process, expert panel members should be asked to provide revisions for items that are not consistent with conceptual definitions.23 Clarity of items is another element for content experts to evaluate.24 Finally, the instrument should be evaluated, as a whole, for overall comprehensiveness. As Grant and Davis (1997) note, “This step is necessary because an instrument may have acceptable interrater agreement, but still not cover the content domain.”25

When measuring content validity, it is necessary to use a quantitative measure, the content validity index (CVI).26–28 The CVI is calculated by tallying the expert reviewers’ ratings; the degree to which the panelists agree on an item’s relevance determines whether the item is judged relevant or irrelevant. A four-point Likert-type scale is used to rate relevance: irrelevant items are scored 1, somewhat relevant items are scored 2, quite relevant items are scored 3, and highly relevant items are scored 4. Only ratings of 3 and 4 are considered relevant; the CVI for an item is thus the proportion of experts who rate it 3 or 4.
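To make the calculation concrete, the following brief sketch (in Python; it was not part of the study, and the item labels, ratings, and script itself are purely illustrative) computes each item’s CVI as the proportion of a seven-member panel rating the item 3 or 4 and flags items that fall below the 0.78 acceptance threshold discussed under Findings.

# Illustrative item-level content validity index (CVI) calculation.
# Each value is a hypothetical rating on the 1-to-4 relevance scale described above;
# each list holds the ratings of a seven-member expert panel for one draft item.

def item_cvi(ratings):
    """Return the proportion of experts who rated the item 3 or 4 (relevant)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

panel_ratings = {
    "CS-1": [4, 4, 3, 4, 3, 4, 4],  # all seven experts rate the item relevant
    "CS-5": [3, 2, 4, 2, 3, 2, 3],  # only four of seven rate it relevant
    "FC-2": [4, 3, 4, 4, 3, 3, 4],
}

THRESHOLD = 0.78  # Lynn's minimum acceptable item CVI when there are six or more judges

for item, ratings in panel_ratings.items():
    cvi = item_cvi(ratings)
    status = "retain" if cvi >= THRESHOLD else "flag for deletion"
    print(f"{item}: CVI = {cvi:.2f} -> {status}")

Running the sketch would report, for example, a CVI of 1.00 for the first hypothetical item and 0.57 for the second, which would be flagged for deletion.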

Instrument

This research required the drafting of a Perceptual IT Fluency Skills Student Survey for use with allied health students. Before information was gathered, local Institutional Review Board (IRB) approval was obtained from the University of Memphis and the University of Tennessee Health Science Center. During the summer and fall of 2009, information on IT fluency and on establishing content validity was gathered. Instruments and information from various sources were reviewed, and draft survey items were created. The draft survey included measures of students’ perceptions of their IT fluency skills based on their contemporary skills, foundational concepts, and intellectual capabilities. The contemporary skills section was composed of eight multiple-choice questions related to the student’s ability to set up a computer, use a word processor to create a document, use technology to find information, and create a spreadsheet. The foundational concepts section contained six multiple-choice questions focusing on the student’s knowledge of computer operations, networks, and e-mail. The intellectual capabilities section included five multiple-choice questions eliciting the student’s ability to manage computer problems, adapt to new technology, and communicate concepts.

Sample

A panel of experts was used to validate the draft Perceptual IT Fluency Skills Student Survey, following the content validation process described by Lynn (1986).29 A panel of experts including allied health educators and health information managers was asked to participate in validating the content of the instrument, which is intended to measure allied health students’ perceptions of their IT fluency skills based on the NRC definition of IT fluency. The panel of experts was selected based on their knowledge of information technology, their chosen profession within the health information field, and their having at least five years of experience monitoring and assessing students’ IT fluency skills. The members consisted of individuals from education and private healthcare entities and were selected because of their involvement in developing programs for teaching information technology skills to allied health students.

According to Lynn (1986), no more than 10 panel members should be used.30 This panel consisted of seven members: three educators and four healthcare professionals. Educators held the rank of assistant professor or above, and the healthcare professionals were a director of clinical information systems, a director of health information management, a manager of veteran services, and a senior systems analyst. All panel members contacted agreed to evaluate the instrument and provide feedback. All feedback from the experts was received within two months of initial contact.

Data Collection

A cover letter explaining the purpose of the instrument; literature defining IT fluency concepts such as contemporary skills, foundational concepts, and intellectual capabilities; and instructions on how to complete the rating form were e-mailed to the panel of experts in November 2009. The researcher made a follow-up phone call to the experts to verbally explain the process and to ensure understanding of the process. The panel was asked to judge the items for clarity, relevance, and item content using a 1-to-4 scale as described above. The members were asked to provide suggestions for any revisions or changes needed. The Content Validity Setup designed by Lynn (1986) was used as a model for this task.31

After all correspondence was received regarding content validity for each item, a focus group was held to evaluate the instrument for overall comprehensiveness. Six of the seven panel members participated in the focus group. The objective of the focus group was to reach consensus on the overall comprehensiveness of the instrument, that is, to determine whether the experts felt the instrument measured what it was intended to measure.

Findings

The literature was examined to determine what proportion of agreement is sufficient to establish content validity. A CVI of 0.70 represents average agreement; 0.80, adequate agreement; and 0.90, good agreement.32, 33 According to Lynn (1986), when there are six or more judges, an item’s CVI should be no lower than 0.78 for the item to be judged acceptable.34 A CVI of 1.00 indicates 100 percent agreement among raters. A CVI was calculated for each item (see Table 1) and for the overall instrument.
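As a worked illustration with a seven-member panel like the one used here: if six of the seven experts rate an item 3 or 4, the item’s CVI is 6/7 ≈ 0.86, which exceeds 0.78, so the item is acceptable; if only five do so, the CVI is 5/7 ≈ 0.71 and the item falls below the threshold.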

Results from the panel of experts yielded an overall content validity index of 0.87. Six items had a CVI below 0.78 and were deleted from the instrument. Two experts suggested minor revisions regarding the clarity or wording of items, and those revisions were incorporated into the instrument. One expert suggested that the word “connected” be defined in question 5 under contemporary skills (this question was subsequently deleted from the survey draft because its CVI was below 0.78). Another expert suggested that the word “system” be changed to “application” in question 7 of the contemporary skills category (also subsequently deleted). Once all items had been evaluated and all changes were made, the revised instrument was sent to the panel of experts for evaluation of the instrument as a whole.

The focus group discussed the instrument for overall comprehensiveness. None of the experts suggested additional content or changes at this time. The CVI for the revised instrument (which can be found in Appendix A) was 1.00. Based on the CVI for each item as well as that for the overall instrument, it is believed that the instrument contains questions relevant to students’ perceptions of their IT fluency skills.

Conclusion

Content validity is a critical step in the selection and administration of an instrument. The two-stage method used in this study, consisting of a development stage and a judgment-quantification stage, required a comprehensive literature review, item creation, and agreement from a specified number of experts about the validity of the items and of the entire instrument. Seven experts were asked to identify omitted areas and to suggest areas for improvement, and these revisions were made. The process used to determine content validity proved to offer consistency, rigor, and structure to the instrument development. High CVI scores were generated for the items judged relevant to the content domain as well as for the overall instrument. The results support the content validity of this instrument as a tool for measuring students’ perceived IT fluency skills.

Marcia Sharp, MBA, RHIA, is an assistant professor in the College of Allied Health Sciences at the University of Tennessee Health Science Center in Memphis, TN.

 

Notes

1. Dougherty, J., N. Kock, C. Sandas, and R. Aiken. “Teaching the Use of Complex IT in Specific Domains: Developing, Assessing and Refining a Curriculum Development Framework.” Education and Information Technologies 7, no. 2 (2002): 137–54.
2. Educational Testing Service. Succeeding in the 21st Century: What Higher Education Must Do to Address the Gap in Information and Communication Technology Proficiencies. 2003. Available at http://www.ets.org/ictliteracy/succeeding1.html (accessed October 12, 2009).
3. Salaway, G., and J. B. Caruso. “Students and Information Technology in Higher Education.” EDUCAUSE Center for Applied Research 6 (2007): 1–124.
4. Hilberg, S. “Fluency with Information and Communication Technology: Assessing Undergraduate Students.” Doctoral dissertation, Wilmington College, 2007.
5. Katz, I. “Beyond Technical Competence: Literacy in Information and Communication Technology.” Educational Technology 45, no. 6 (2005): 44–47.
6. Resnick, M. “Rethinking Learning in the Digital Age.” In G. Kirkman (Editor), The Global Information Technology Report 2001–2002: Readiness for the Networked World. New York: Oxford University Press, 2002.
7. McEuen, S. F. “How Fluent with Information Technology Are Our Students?” Educause Quarterly 4 (2001): 8–17. Available at http://www.educause.edu/ir/library/pdf/eqm0140.pdf (accessed October 18, 2009).
8. Stone, J., and E. Madigan. “Inconsistencies and Disconnects.” Communications of the ACM 50, no. 4 (2007): 76–79.
9. National Research Council Computer Science and Telecommunications Board. Being Fluent with Information Technology. Washington, DC: National Academy Press, 1999.
10. Lin, H. “Fluency with Information Technology.” Government Information Quarterly 17, no. 1 (2000): 69–76.
11. National Research Council Computer Science and Telecommunications Board. Being Fluent with Information Technology.
12. Ibid.
13. Dougherty, J., N. Kock, C. Sandas, and R. Aiken. “Teaching the Use of Complex IT in Specific Domains: Developing, Assessing and Refining a Curriculum Development Framework.”
14. National Research Council Computer Science and Telecommunications Board. Being Fluent with Information Technology.
15. Dougherty, J., N. Kock, C. Sandas, and R. Aiken. “Teaching the Use of Complex IT in Specific Domains: Developing, Assessing and Refining a Curriculum Development Framework.”
16. National Research Council Computer Science and Telecommunications Board. Being Fluent with Information Technology.
17. Dougherty, J., N. Kock, C. Sandas, and R. Aiken. “Teaching the Use of Complex IT in Specific Domains: Developing, Assessing and Refining a Curriculum Development Framework.”
18. National Research Council Computer Science and Telecommunications Board. Being Fluent with Information Technology.
19. Wynd, C. A., B. A. Schmidt, and M. A. Schaefer. “Two Quantitative Approaches for Estimating Content Validity.” Western Journal of Nursing Research 25, no. 5 (2003): 508–18.
20. Lynn, M. “Determination and Quantification of Content Validity.” Nursing Research 35 (1986): 382–85.
21. Ibid.
22. Berk, R. “Importance of Expert Judgment in Content-Related Validity Evidence.” Western Journal of Nursing Research 12 (1990): 659–71.
23. Lynn, M. “Determination and Quantification of Content Validity.”
24. DeVellis, R. F. Scale Development: Theory and Applications. Newbury Park, CA: Sage, 1991.
25. Grant, J. S., and L. Davis. “Selection and Use of Content Experts for Instrument Development.” Research in Nursing & Health 20 (1997): 271.
26. Anders, R. L., J. S. Tomai, R. M. Clute, and T. Olson. “Development of a Scientifically Valid Coordinated Care Path.” Journal of Nursing Administration 27 (1997): 45–52.
27. Summers, S. “Establishing the Reliability and Validity of a New Instrument: Pilot Testing.” Journal of Post Anesthesia Nursing 8 (1993): 124–27.
28. Wynd, C. A., B. A. Schmidt, and M. A. Schaefer. “Two Quantitative Approaches for Estimating Content Validity.”
29. Lynn, M. “Determination and Quantification of Content Validity.”
30. Ibid.
31. Ibid.
32. Ibid.
33. Wynd, C. A., B. A. Schmidt, and M. A. Schaefer. “Two Quantitative Approaches for Estimating Content Validity.”
34. Lynn, M. “Determination and Quantification of Content Validity.”

Marcia Sharp, MBA, RHIA. “Development of an Instrument to Measure Students’ Perceptions of Information Technology Fluency Skills: Establishing Content Validity.” Perspectives in Health Information Management (Summer 2010): 1–10.

 
