2006 CAC Conference Introduction

Computer-assisted Coding Software Standards Workshop
September 2006
Summary

Introduction

AHIMA and its Foundation for Research and Education (FORE) convened the Computer-Assisted Coding Software Standards Workshop in Arlington, VA, on September 6-8, 2006.

The workshop was supported by grants from 3M Health Information Systems, CodeRyte and MedQuist. It was attended by representatives from computer-assisted coding (CAC) software vendors, health information system vendors, government representatives, HIM professionals, and other end users.

Purpose

The stated intention of the program planning committee (see Appendix A) was, and remains, to promote a future state in which automated tools capture discrete coded data that can be mutually understood and shared for multiple uses within the healthcare industry. However, accomplishing this requires collaboration among the various segments of the industry that use coded data. Thus, AHIMA and FORE, with support from corporate sponsors, organized this meeting to begin a dialogue among interested CAC stakeholders. See Appendix B for the complete meeting agenda.

Keynote

AHIMA’s executive vice president and chief operating officer Sandy Fuller, MA, RHIA, delivered the keynote address to the attendees. After welcoming the participants to the workshop, Fuller thanked the corporate donors (3M, CodeRyte and MedQuist) for their sponsorship and recognized the workshop planning committee (see Appendix A for a complete list).

The keynote address set the tone for the meeting by articulating AHIMA’s goals in organizing this conference. These goals included:

  • raising the awareness of the need for computer-assisted coding (CAC) software standards,
  • delineating the scope of the challenge,
  • gathering input from a variety of stakeholders,
  • recommending a framework for CAC software standards, and
  • determining the necessary and desired next steps.

AHIMA began planning for alternate coding futures in 1998 by sponsoring the Coding Futures task force. More recently, it published a practice brief entitled “Delving into CAC.” AHIMA has also been involved in various industry activities as classification, terminology, EHR content, and related standards evolve, all of which are relevant to CAC software.

Some of the CAC software standard drivers were highlighted including reimbursement efficiency, the detection and prevention of fraud and abuse, current and future use related to patient care and patient safety, post-market surveillance, and disaster preparedness and recovery. The need for structured and organized data is common to all of these uses. Different CAC stakeholders may expect a variety of outcomes from standards related to CAC. Clarification of expectations and priority setting were a key theme of the discussion for this conference.

The CAC industry faces a number of challenges and opportunities as it begins working on CAC software standards. The current low level of EHR adoption is a challenge because organizations need an EHR to fully utilize the capabilities of CAC software. It is also an opportunity: the minimal level of CAC adoption makes this the optimal time to develop standards, since the utility and functionality of the applications are still evolving. All of the challenges for CAC software standards are also opportunities to create new and different, hopefully better, methods of secondary data capture for the healthcare industry. AHIMA thanked the participants and asked them to listen to the presentations of the technical papers on the first day (visit http://www.ahima.org/perspectives/conference_papers.asp to view a complete list) and to come prepared to work in the breakout sessions on the second day.

Presentation of Technical Papers

As stated previously, the first day of the workshop consisted of presentations based on selected technical papers. The technical papers were solicited via a call for papers open to all interested parties. Fifteen papers were submitted and eight were chosen for presentation at the workshop. The papers were selected by the program committee using an online abstract rating system where the rater is blinded to any author or company information. The following is a list of the technical papers (visit http://www.ahima.org/perspectives/conference_papers.asp to view the complete archive of papers). Each presentation lasted approximately 25 minutes with an additional 10 minutes allotted for questions.

Assessing Coder Change Rates as an Evaluation Metric
by Michael Nossal, MA; Philip Resnik, PhD; and Jean Stoner, CPC, RCC, from CodeRyte, Inc.

Computer-Assisted Auditing For High Volume Medical Coding
by Daniel T. Heinze, PhD; Peter Feller, MS; Jerry McCorkle; and Mark Morsch, MS from A-Life Medical, Inc.

Using Intrinsic and Extrinsic Metrics to Evaluate Accuracy and Facilitation in Computer Assisted Coding
by Philip Resnik, PhD; Michael Niv, PhD; Michael Nossal, MA; Gregory Schnitzer, RN, CCS, CCS-P, CPC, CPC-H, RCC, CHC; Jean Stoner, CPC, RCC; Andrew Kapit; and Richard Toren from CodeRyte, Inc.

Software Engineering of NLP-Based Computer Assisted Coding Applications
by Mark Morsch, MS; Carol Stoyla, CLA; and Ronald Sheffer, Jr., MA; from A-Life Medical, Inc.

Computer Assisted Coding For Inpatients—A Case Study
by Cheryl Servais, MPH, RHIA, from Precyse Solutions, LLC

How Does the System Know It’s Right? Automated Confidence Assessment for Compliant Coding
by Yuankai Jiang, PhD; Michael Nossal, MA; and Philip Resnik, PhD, from CodeRyte, Inc.

Computer Assisted Coding Software Improves Documentation, Coding, Compliance and Revenue
by Sean Benson from ProVation Medical

Automated Coding and Fraud Prevention
by Jennifer Garvin, PhD, RHIA, CPHQ, CCS, CTR, FAHIMA, medical informatics postdoctoral fellow at the Center for Health Equity Research and Promotion; Valerie Watzlaf, PhD, RHIA, FAHIMA, associate professor at the University of Pittsburgh; and Sohrab Moeini, MS, HIM graduate student at the University of Pittsburgh.

Breakout Group Summary

The second day of the workshop was designed as an opportunity for the participants to engage in group discussions on specific topics. The topics outlined in the program were formulated by the program planning committee; however, the participants were encouraged to enhance or change them according to the needs of the breakout groups. This approach was intended to encourage open dialogue between the facilitator and the group’s members, enabling them to contribute key issues rather than necessarily coming up with answers. The five work group breakout sessions were the following:

  1. software management and use
  2. audit mechanisms
  3. software metrics and evaluation
  4. software certification
  5. software test suite

1. CAC software management and use—The purpose of this group was to discuss the implementation and management of CAC software.

The conclusions of the participants for this session were:

  1. For CAC to be desirable, it has to interface with other systems that store clinical data used in code assignment, such as lab and radiology.
  2. CAC could be used for data abstracting (Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) quality measures and/or Agency for Healthcare Research and Quality (AHRQ) patient safety indicators), eliminating this time-consuming task for coders and utilization review nurses. Other data abstracting needs could include tumor registry, Centers for Disease Control and Prevention (CDC) reporting, quality measures, and syndromic surveillance for public health.
  3. CAC could assist with a documentation improvement program as part of the continuous quality improvement of healthcare.
  4. CAC education, training, and certification of HIM professionals are needed.
  5. CAC requires auditing to ensure the validity of reported codes compared to the source documentation.

This group felt that CAC was not yet ready to help with all coding needs due to the underutilization of electronic health record systems and the complexity of acute care documentation. They described the use of CAC as still carrying significant financial risks, as well as potential fraud and abuse risks, because the use of templates or automated linking of documentation to codes might overstate the intensity of or reason for services, resulting in incorrect health plan reimbursement. They also stated that the return on investment will have to be proven before many organizations are willing to purchase CAC products.

2. CAC Audit Mechanisms—The purpose of this group was to describe the potential standards for the use of audit trails and/or other technologies and mechanisms to serve this purpose.

The participants felt that “audit mechanisms” was a misleading term, so the topic was renamed quality assurance (QA) mechanisms. It was decided that these mechanisms are not unique to computer-assisted coding but are applicable to all current coding processes and are the responsibility of the end user. Important concepts presented by the group included:

  1. Data quality is important regardless of where the data is maintained. Some organizations will maintain the data in the CAC system, others in other databases. These concepts apply no matter where the data is maintained.
  2. Data must be consistent across all systems; otherwise it cannot be trusted.
  3. All changes must be tracked (who, when, why).
  4. The tracking of changes is for education as well as systems maintenance. The tracking helps to develop a standard list of transactions requiring additional monitoring, evaluation and follow up.
  5. Many systems have audit mechanisms; however, they are not used. The tools have no value if they are not utilized.
  6. Any QA performed for CAC or coding does not replace QA/compliance outside of coding.
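The change-tracking concepts above (who, when, why, plus a watch list of transactions requiring additional monitoring) can be sketched as a minimal data structure. This is purely an illustrative assumption, not a schema any CAC product prescribes; all field names and codes are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Set

@dataclass
class CodeChange:
    """One tracked edit to an assigned code: who changed it, when, and why."""
    record_id: str
    old_code: str
    new_code: str
    changed_by: str
    changed_at: datetime
    reason: str

@dataclass
class AuditLog:
    """Accumulates tracked changes so they can be reviewed for QA follow-up."""
    changes: List[CodeChange] = field(default_factory=list)

    def log(self, change: CodeChange) -> None:
        self.changes.append(change)

    def changes_for_monitoring(self, watch_list: Set[str]) -> List[CodeChange]:
        """Return changes touching codes flagged for additional monitoring."""
        return [c for c in self.changes
                if c.old_code in watch_list or c.new_code in watch_list]
```

A coding manager could then pull only the changes involving watch-listed codes for monthly review, supporting the education and follow-up uses the group identified.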

Specific recommended best practices supporting these concepts included:

  • Acceptable accuracy measures must be user specified, based on institutional risk assessment (100 percent is an eventual goal, but probably not achievable with the current systems and guidelines). The risk assessment should consider:

    • frequency of code use
    • documentation reliability
    • revenue impact
    • the current Office of Inspector General (OIG) work plan/focus
  • Determine and justify the sample size, whether it is a simple choice, a statistical sample, or a judgment decision. Methods for performing sampling are needed as continuing education for HIM professionals.
  • These practices are ongoing (monthly, quarterly). This quality assurance will not be a single event. It is a part of the business process and must be done continually.
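As a concrete illustration of the sample-size point, the standard formula for a proportion with a finite-population correction can be sketched as follows. The 95 percent confidence level and 5 percent margin of error are common illustrative defaults, not values the workshop endorsed; each organization would set these from its own risk assessment:

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                confidence_z: float = 1.96, p: float = 0.5) -> int:
    """Records to audit when estimating an error rate in a finite population.

    Uses the standard sample-size formula for a proportion with a
    finite-population correction. z = 1.96 corresponds to 95 percent
    confidence, and p = 0.5 is the most conservative assumption about
    the underlying error rate.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)
```

For example, auditing a quarter's worth of 10,000 coded records at these defaults yields a sample of 370; smaller populations need proportionally larger fractions, which is why the sample size should be justified rather than picked by habit.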

This data quality assurance is integral to the integrity of all healthcare data, not just coded data.

3. CAC Metrics and Evaluation—A discussion of how and what to measure to determine CAC accuracy was held by this group.

The metrics and evaluation group determined that there are many characteristics which can be measured for coding and the coding process. Their list included:

Intrinsic—Intrinsic evaluations measure the performance of an NLP component on its defined subtask, usually against a defined standard in a reproducible laboratory setting. This definition is taken from “Using Intrinsic and Extrinsic Metrics to Evaluate Accuracy and Facilitation in Computer Assisted Coding” by Philip Resnik, PhD, Michael Niv, PhD; Michael Nossal, MA; Gregory Schnitzer, RN, CCS, CCS-P, CPC, CPC-H, RCC, CHC; Jean Stoner, CPC, RCC; Andrew Kapit; and Richard Toren from CodeRyte, Inc.

  • Accuracy—condition of being true or correct (definition from dictionary.com)
  • Variability—the extent to which something changes versus consistency—the extent to which something stays the same. There is a need to balance variability with accuracy. (definition from dictionary.com)
  • Predictability—the ability to foretell with precision (definition from dictionary.com)
  • Accuracy of prediction—The component needs to be able to evaluate and communicate its own ability to accurately predict an outcome, in this case code assignment.
  • Information capture—The component needs to fully annotate all of the information utilized in predicting an outcome
  • Adaptability—ability to adjust to different conditions (definition from dictionary.com) The component needs to move easily between different terminologies, including code set changes such as going from ICD-9-CM to ICD-10-CM.

Extrinsic—Extrinsic evaluations focus on the component’s contribution to the performance of a complete application, which often involves the participation of a human in the loop. This definition is taken from the same paper cited above.

  • Clean, non-fraudulent claims—Results from the entire process, whether or not CAC is used.
  • Correct coding—According to the guidelines of the system in use.
  • Standard data/clean data—The data meet any necessary requirements and only valid values are output.
  • Usable information—The results of the evaluation are able to be used.
  • Adaptability—As defined above, however, this time for the entire process.
  • Productivity—This is the return on investment question. The evaluation must be able to be done with the resources available.
  • Accuracy—As defined above. If the component is accurate, but the process is not, sources of inaccuracy must be determined.
  • Useful data related to the process—For process management in real time. Users and managers need to know as soon as practicable if problems are arising in the process.
  • Ad hoc reporting—The data to create new reports or to ascertain new information on an as needed basis is required.
  • Compliance—The process must accommodate code changes, coding guidelines, etc., and remain in compliance with all applicable laws and regulations.

There was agreement that a “gold standard” is needed for coding, especially with CAC. In order to quantify performance on a task, it is common to create a “gold” (reference) standard by having independent experts perform the task and resolving cases where they differ (Hripcsak, G., and A. Wilcox. “Reference standards, judges, and comparison subjects: roles for experts in evaluating system performance.” J Am Med Inform Assoc 9 (2002): 1-15). Discussion was lively around the question of whether a centralized standard is better or whether each organization should create its own. Given the current and anticipated changes to healthcare coding (severity, pay-for-performance, quality reporting), gold standard flexibility and adaptability were considered extremely important.
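One common way to operationalize the expert-agreement step that Hripcsak and Wilcox describe is an inter-rater agreement statistic such as Cohen's kappa; cases where the experts disagree are the ones sent to adjudication. The statistic is standard practice in evaluation studies, though the workshop did not mandate any particular measure, and the labels below are hypothetical:

```python
from typing import List

def cohens_kappa(labels_a: List[str], labels_b: List[str]) -> float:
    """Cohen's kappa: agreement between two coders beyond chance.

    labels_a and labels_b are parallel lists giving the code each expert
    assigned to the same sequence of cases.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of cases where the two experts match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each expert's marginal label frequencies.
    categories = set(labels_a) | set(labels_b)
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)
```

Kappa of 1.0 indicates perfect agreement and 0.0 indicates agreement no better than chance; a low kappa on a candidate gold standard signals that the reference set itself needs further adjudication before being used to score CAC output.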

4. CAC Certification—This group discussed the need for formal validation or certification of CAC software performance to foster acceptability and buyer confidence in systems.

The group began by discussing the value of certification and other pertinent issues. Certification was felt to be beneficial because it lends credibility to products, confers public relations benefits, and removes barriers to the market. Issues for consideration included ensuring that certification does not suppress innovation, separating structured input from natural language processing (NLP), determining what to test, and questioning whether there is a “right” answer in coding. It should be noted that the group felt the central issue might be change resistance rather than software reliability.

When the group was asked if there was a difference between certification and validation, they indicated that validation might be a first step toward certification, with transparency in criteria development removing some of the market barriers. Further, they felt that internal software validation processes (e.g., FDA, CMMI) might be a criterion for certification.

In discussing whether the Certification Commission for Health Information Technology (CCHIT) is the right model, the group felt it had advantages: it is an established process, it is part of the EHR certification process (garnering that market awareness and reusing some applicable criteria), and it carries HHS sanction via ONC. Concerns included the fact that CCHIT already has projects in a three-year queue, the cost, and the possibility that functional testing may require a different approach.

The group developed alternative certification ideas, such as testing against a standard sample by an acknowledged authority, prototype client auditing, and efficiency testing/return on investment.

Finally, the group put together the following list of criteria for certification:

  • the product must be developed using sound methods
  • any testing process needs to show the product works, is monitored, and continues to work
  • the processes and controls are in place
  • it is a reasonable test and is survivable
  • recognize that the IT criteria are different than HIM criteria
  • certification priority for functionality should be set
  • test the product to its design scope, not to all possible functionality
  • baseline functionality should be tested first and then specialty functionality
  • structured input would require a different test than natural language processing
  • there needs to be an authoritative “test” on a corpus of data with defined outputs
  • vendors need a return on investment for testing
  • market acceptance of testing needs to be explored

5. CAC Test Suite—The issues involved in developing a standard source for testing CAC functionality and/or accuracy were discussed by this group.

This group had a very detailed discussion about the needs and requirements of such a source. The development of a CAC test suite would be an extremely complex project. The issues of gold standard development and of metrics and evaluation would have to be addressed before something of this nature could be attempted. The content and administration of the suite would depend upon its purpose. Acceptance by the Department of Health and Human Services (HHS) and other payers would need to be guaranteed. In short, development would be exceedingly difficult.

The group felt that development of a test suite would not only require a high level of government involvement and cooperation, but would also be very expensive. Overall, while this might be a desirable objective, the group felt the industry needed to understand the issue more thoroughly before beginning on a project like this.

Review of Workgroups and Next Steps

The work groups reviewed their output on Thursday afternoon. When the meeting reconvened on Friday morning Susan Fenton, MBA, RHIA, quickly summarized the work group output (similar to the above) and discussed the following next steps gleaned from the work group detail:

Short-term Next Steps

  • Establish a forum for continued dialogue and development—AHIMA will be doing this via its CAC Community of Practice (CoP). First efforts will center on developing a CAC glossary and delineating expected CAC functionality.
  • Benchmark current coding process to determine current levels of productivity, technology used, etc. Responsibility for this was not assigned. AHIMA will investigate the possibility of pursuing this.
  • Begin PR process—Other stakeholders such as the government, more health information management professionals and EHR vendors will not only need to be involved, they need to be educated about the issues.
  • Plan to reconvene—AHIMA is exploring the possibility of sponsoring another CAC software standards meeting in 2007.
  • Develop a standard coding process—The healthcare industry needs to identify a best practice method for code assignment. AHIMA will lead this process.

Long-term Next Steps

These items are in the long-term section not just because they need to be further defined and will take longer to accomplish, but also because they build on some of the items in the “Short-term Next Steps” section.

  • Fully define a coding gold standard for accuracy—This may be easier to accomplish once a standard coding process is established.
  • Testing, certification, conformance—What are the best, correct choices and methods for achieving some level of standardization?
  • Education—Education will be needed in the form of continuing education for current professionals in the field, as well as for the academic programs to ensure newly trained HIM professionals can work effectively with CAC software.
  • Continued pursuit of the “right” code—In order to have high-quality data our methods and processes for structuring and organizing healthcare data must continue to evolve until the day we can assign a right code with certainty.

Overall, AHIMA is committed to the continued improvement of the coding processes and systems.

Reactor Panel

The purpose of the reactor panel was to present particular stakeholder perspectives on the issue of CAC software standards. These perspectives, as well as others, will be encountered as the effort to standardize CAC continues. The reactor panel comprised the following members:

Keith Campbell, software developer, Chief Technology Officer, Informatics, Inc.—Campbell’s main reaction as a software developer was that he did not have enough information about the coding process; understanding why human coders disagree would help him program software more effectively. He recommended establishing a standard coding process as a starting point for CAC software standards.

Kelly Mann, vendor, national marketing director for 3M Health Information Systems—Mann’s main reaction as a vendor was that, in the end, organizations will have to be able to demonstrate a return on their investment. Any standards will have to support increased efficiency in order for the software to be salable.

Jim Bowman, MD, medical director for the Centers for Medicare & Medicaid Services—physician and payer. From the physician perspective, Dr. Bowman encouraged the participants to remember that the majority of current medical residents have grown up with computers and will not only accept but expect computers to help them with their work. From the payer point of view, he stated that more consistent, higher-quality data should be welcomed by payers, since having humans handle claims with questionable data is very expensive.

Mary Stanfill, RHIA, CCS, CCS-P; HIM professional, AHIMA practice resource manager—Stanfill spoke from the perspective of a coding manager or HIM director. Stanfill noted that CAC had made a lot of progress in the last few years, but it still needed to mature. She wanted CAC to help her with all of her coding needs (including emerging data collection tasks such as quality indicators and pay-for-performance), not just in a specialty area or with certain tasks.

Following the reactor panel the participants were thanked for their attention and for a very productive workshop. The workshop was adjourned.

Conclusion

HIM professionals are facing the challenge of helping to develop a technology that has the potential to fundamentally change the way they perform a key HIM function—coding. As described in this workshop, CAC will become a cyber worker: it will work in conjunction with coders, doing the things computers do well and freeing humans to do the things they do well. AHIMA will continue to champion the incorporation of CAC into the industry in a way that benefits all stakeholders.

Appendix A

Computer-Assisted Coding Software Standards Workshop
Program Planning Committee

  • Amy Wang, MD, Intelligent Medical Objects
  • Andrew Kanter, MD, MPH, Intelligent Medical Objects
  • Brian Levy, MD, Health Language
  • Dee Lang, CodeRyte
  • James Flanagan, MD, PhD, Language and Computing
  • Jennifer Garvin, PhD, RHIA, Center for Health Equity Research and Promotion, Veterans Health Administration
  • Keith Campbell, MD, PhD, Informatics, Inc.
  • Lee Min Lau, MD, PhD, 3M, Inc.
  • Linda Crossley, MedQuist
  • Mark Tuttle, Apelon
  • Pat Wilson, 3M, Inc.
  • Peter Elkin, MD, Mayo Clinic
  • Shaun Shakib, MPH, 3M, Inc.
  • Valerie Watzlaf, PhD, RHIA, University of Pittsburgh
  • Win Carus, PhD, Information Extraction, Inc.

 

Appendix B
Meeting Agenda

COMPUTER-ASSISTED CODING SOFTWARE
STANDARDS WORKSHOP
September 6-8, 2006
Arlington, VA

AGENDA
September 6, 2006

7:30 – 8:30 am  Registration & Continental Breakfast
Sign up for workgroup breakouts

8:30 – 8:45 am  Welcome/Introductions/Orientation
Location: Galaxy Ballroom

8:45 – 9:30 am  Keynote Speaker
Sandy Fuller, Executive Vice President and Chief Operating Officer, AHIMA

9:30 – 10:00 am Networking Break

10:00 – 10:35 am Assessing Coder Change Rates as an Evaluation Metric 
Michael Nossal, MA; Philip Resnik, PhD; and Jean Stoner, CPC, RCC -CodeRyte, Inc.

10:35 – 11:10 am Computer-Assisted Auditing For High Volume Medical Coding
Daniel T. Heinze, Peter Feller, Jerry McCorkle, Mark Morsch – A-Life Medical, Inc.

11:10 – 11:45 am Using Intrinsic and Extrinsic Metrics to Evaluate Accuracy And Facilitation In Computer Assisted Coding  
Michael Nossal, MA; Andrew Kapit; Michael Niv, PhD; Philip Resnik, PhD; Gregory Schnitzer, RN, CCS, CCS-P, CPC, CPC-H, RCC, CHC; Jean Stoner, CPC, RCC; and Richard Toren – CodeRyte, Inc.

11:45 am – 1:00 pm Lunch
Location: Cavalier Ballroom

1:00 – 1:35 pm Software Engineering of NLP-Based Computer Assisted Coding Applications
Mark Morsch; Carol Stoyla; and Ronald Sheffer, Jr. – A-Life Medical, Inc.

1:35 – 2:10 pm  Computer Assisted Coding For Inpatients – A Case Study   
Cheryl Servais, MPH, RHIA – Precyse Solutions, LLC

2:10 – 2:45 pm How Does the System Know It’s Right? Automated Confidence Assessment for Compliant Coding   
Yuankai Jiang, PhD; Michael Nossal, MA; and Philip Resnik, PhD – CodeRyte, Inc.

2:45 – 3:00 pm  Networking Break

3:00 – 3:35 pm Computer Assisted Coding Software Improves Documentation, Coding, Compliance and Revenue  
Sean Benson – ProVation Medical

3:35 – 4:10 pm  Automated Coding and Fraud Prevention
Jennifer Garvin, PhD – Medical Informatics Postdoctoral Fellow, Center for Health Equity Research and Promotion; and Valerie Watzlaf, PhD, RHIA, FAHIMA – University of Pittsburgh

4:10 – 4:45 pm Overview of Audit Mechanisms (Fraud and Abuse Prevention) and CAC Test Suite

4:45 – 5:00 pm  Closing Comments

AGENDA
September 7, 2006

7:30 – 8:30 am  Registration & Continental Breakfast
Location: Galaxy Ballroom

8:30 – 9:15 am           Opening plenary session
Linda Kloss, MA, RHIA, CAE; AHIMA

9:15 – 9:45 am          Presentation of the Workgroup Topics and Process

9:45 – 10:00 am        Networking Break

10:00 am – 12 Noon      Workgroup Breakouts

  1. Audit Mechanisms
  2. Software Metrics
  3. Software Test Suite
  4. Software Validation Tests
  5. Software Management and Use

12 Noon – 1:00 pm        Lunch
Location: Cavalier Ballroom

1:00 – 3:00 pm          Workgroup Breakouts – continued

3:00 – 3:30 pm          Networking Break

3:30 – 5:00 pm           Workgroup Reports

AGENDA
September 8, 2006

7:30 – 8:30 am  Registration & Continental Breakfast
Location: Galaxy Ballroom

8:30 – 9:30 am     Review of Breakout Workshop Reports

9:30 – 11:00 am    Plenary Reactor Panel Representing the Following Stakeholders

  1. Clinicians – Physicians and Nurses
  2. HIM Professionals
  3. IT Vendors/Developers
  4. Regulators
  5. Payers

11:00 – 11:30 am    Closing Comments

 
