Value of Investment as a Key Driver for Prioritization and Implementation of Healthcare Software

by Seth A. Bata, MS, and Terry Richardson, MS


Health systems across the nation are recovering from massive financial and resource investments in electronic health record applications. In the midst of these recovery efforts, implementations of new care models, including accountable care organizations and population health initiatives, are underway. The shift from fee-for-service to fee-for-outcomes and fee-for-value payment models calls for care providers to work in new ways. It also changes how physicians are compensated and reimbursed. These changes necessitate that healthcare systems further invest in information technology solutions. Selecting which information technology (IT) projects are of most value is vital, especially in light of recent expenditures. Return-on-investment analysis is a powerful tool used in various industries to select the most appropriate IT investments and has proven valuable in selecting, justifying, and implementing software projects. Other financial metrics, such as net present value, economic value added, and total economic impact, also quantify the success of expenditures on information systems. This paper extends the concept of quantifying project value to include clinical outcomes and nonfinancial value as investment returns, applying a systematic approach to healthcare software projects. We term this inclusive approach Value of Investment. It offers a necessary extension for application in clinical settings, where a strictly financial view may fall short of providing a complete picture of important benefits. This paper outlines the Value of Investment process and its attributes and uses illustrative examples to explore the efficacy of this methodology within a midsized health system.

Key words: value of investment; benefits estimator; healthcare software; project prioritization; ROI; SDLC


Health systems across the nation are recovering from massive financial and resource investments in electronic health record (EHR) applications. In 2009, the American Recovery and Reinvestment Act (ARRA) mandated that physicians, clinics, and hospitals implement EHR technology. This provision of the act invigorated the languishing EHR business and forced healthcare entities to spend billions of dollars nationwide on adopting this technology. Within the ARRA's overall budget of nearly $800 billion, the investment put forth to accelerate the adoption of health IT was staggering and unprecedented. The ARRA also provides grants and loans to help establish utilization of a certified EHR for each person in the United States.1 To push the health information systems agenda even further, beginning in 2015, physicians and hospitals that did not use certified information systems in a meaningful way were penalized. The ARRA has thus taken far-reaching steps to increase the use of information systems.

These mandates have changed the healthcare industry. Never before has patient care been so inextricably bound to data entered into information systems such as EHRs. The power of hundreds of EHR implementations and thousands of clinicians producing meaningful EHR data cannot be overstated. Compared with the years when patient information was kept on paper, the significance of the EHR mandate is evident. The cost of implementing information systems, the accessibility of clinical data, the shift to new reimbursement models, and more changes on the horizon underline the need for improved information systems and an improved process by which associated projects are selected and implemented.

Despite the substantial investments in information systems by physicians, hospitals, and government, the expected effects of reducing overall costs and improving outcomes through this technology remain elusive.2–4 Rather than continuing to implement applications in the hope of helping patient care, healthcare systems should tightly associate these projects with needed outcomes, require demonstration of their value, and explore new ideas to justify spending, improve financial outcomes, reduce healthcare costs, and improve clinical efficiency and care quality.

Return on investment (ROI) is a powerful concept used to justify spending and select the most appropriate investment projects. When ROI is coupled with other tools, projects in various industries have realized significant success in justification, prioritization, and measurement of results.5 ROI models have proven vital in improving the process of selecting, justifying, and designing application projects. Although ROI can have multiple interpretations, within this research ROI is an indicator used to measure the financial value and organizational impact of a project in relation to its cost. This definition suits the operational, real-world approach taken here: real-world examples lend insight into more than numbers and are instrumental in ascertaining impact, outcomes, and mechanisms. ROI is used as part of the process of assessing whether a project will yield financial returns for the organization. However, ROI alone is not sufficient to obtain an accurate assessment of the value and impact of information systems in healthcare. The long-term commitments made when information systems are chosen suggest the need for improved quantitative approaches to assessing both financial and nonfinancial returns.

Assessing the justification of expenditures is common practice in business. Companies and researchers engage in intensive discussions regarding how to measure whether goals or gains were achieved through implementation of information systems. Metrics such as net present value (NPV), economic value added, and total economic impact are commonly used to provide greater clarity in measuring the success or failure of information systems projects. Understanding the gains that can result from implementing information systems in healthcare is more nuanced, because many benefits are not financial in nature. Healthcare expenditures are typically worthwhile when process improvement, cost avoidance, patient-experience improvement, and better healthcare outcomes are achieved. These factors must be considered for inclusion when measuring project success. This thinking is more than a notion: Green and Young explored four methods of deriving value propositions for healthcare IT, including ROI.6 The researchers noted that assigning a financial value to a purchase would ease much of the debate regarding decisions that compare investment in IT, for instance, to investments in extra staff, new drugs, or other technology.

Although reducing IT project selection to a single financial figure has obvious allure, this research argues against that draw. The focus of ROI is the financial benefit of an expenditure. This approach is sensible, but it loses its usefulness in the realm of healthcare IT spending, where the value of a purchase is more than financial. The healthcare industry is moving away from fee-for-service and toward fee-for-outcomes. Healthcare IT initiatives would do well to recognize that the outcomes of the work performed, systems implemented, and IT projects managed have a significant impact on healthcare. The goal has to be more than assigning a financial value to a purchase. Thus, using ROI alone to justify the selection of an IT project has a glaring weakness. Whether an IT project implements a system to support changes in reimbursement models, addresses the impact of coordinated care, or helps determine the next location for a community clinic, the numbers alone do not demonstrate the value of the IT investment. Modern healthcare brings levels of complexity that none of the metrics mentioned above are able to address. For this reason, ROI has been coupled with an understanding that includes value and impact, summarized throughout this research as Value of Investment (VOI). Other researchers have attempted similar work but were unable to effectively demonstrate value or impact, or when these effects occurred.7–9 We develop a theoretical framework for applying VOI in conjunction with a typical project life cycle to create and implement a novel approach for assessing project viability, building stakeholder involvement and support, and informing project selection and prioritization.

This paper outlines the VOI methodology in concert with project management in healthcare IT. We applied a systematic approach to understanding the efficacy of proposed software projects, coupling a systematic VOI process with a typical project life cycle. While previous research has emphasized the need to quantify value,10, 11 we applied a novel approach to assessing proposed and existing software development projects. This paper outlines this systematic approach and the tools necessary to demonstrate the impact of using VOI in the selection, justification, and design of application projects in healthcare analytics. Using illustrative examples, we explore the efficacy of this methodology within a midsized health system.

Theoretical Framework

We systematically apply VOI to project prioritization and implementation through sequential steps that progressively yield improved accuracy in estimating the project’s quantified value. The VOI approach is most effective when integrated with a robust software implementation process, complementing the software delivery with measurement of its impact. The illustrative examples used in this research demonstrate the integration of software implementation and measurement of impact. Figure 1 illustrates the overall VOI process flow and its integration with the software implementation process. The VOI process can be integrated into various software implementation methodologies, such as the Agile, Waterfall, and Spiral models.12

Integrating VOI with the project life cycle lends itself to a practical, stepwise application. VOI's actionable insight yields benefits at several stages of the project:

  • Data-driven project prioritization and selection, at the beginning of the project life cycle
  • Focused scope refinement and alignment on highest-priority deliverables, early in the project life cycle
  • Improved guidance of implementation, in the middle of the project life cycle
  • Increased likelihood of successful impact, at the end of the project life cycle
  • Improved stakeholder engagement, throughout the project life cycle
  • Ability to learn why the VOI goal was or was not achieved and to incorporate these lessons into the next project life cycle

The VOI approach consists of the following steps:

1. Research—Gathering and analyzing available information to inform whether a potential project is feasible and beneficial is the first step in assessing its viability. The requesting team can lead this part of the process with assistance from IT, business development, finance, or another consulting entity. Fundamental questions such as “Does a software solution exist in the market today? Has anyone tried this concept before? If so, what were their results? What value did they achieve?” can often be efficiently answered by conducting a benchmark analysis with other internal departments or other companies, performing a literature review, or receiving proposals from potential vendors. The goal is to establish a well-informed initial opinion on whether the request is sufficiently viable to move forward. Many ideas do not make it past this step. For example, a barrier such as insufficient technology, prohibitively high cost, or lack of other necessary resources may be revealed.

2. Benefit Scoring—After an informed understanding of potential solutions and their associated benefits, costs, and feasibility has been established, the next step is to use this information to score benefits in a systematic manner. A simple yet effective tool to facilitate this scoring is what we term the Benefits Estimator, adapted from Six Sigma’s Cause and Effect Matrix.13 Using the Benefits Estimator, a score is assigned for each benefit attribute for each project, and the composite score is subsequently calculated for each project (see Figure 2). These composite scores inform prioritization based on relative value across projects. As an optional enhancement, higher-priority criteria can be emphasized by multiplying the associated scores by a weighting factor in the composite formula.

Preconfiguring the scoring criteria, scales, and levels ensures consistency and repeatability when using the Benefits Estimator. A scoring key (see Figure 3) guides structured discussions on how to score each potential project. Quantifying benefit scores can reveal gaps in knowledge about the potential project. It encourages mature discussions regarding project value, removes ambiguity, provides a mechanism for gaining consensus, and minimizes subjectivity in informing critical go/no-go investment decisions. The intent is to facilitate productive conversations about quantifying the project’s viability.

Using the information obtained through the research in Step 1 as the basis for scoring, the output from this step is a quantified estimate of value. Although still a rudimentary approximation, this process yields enough information to prioritize projects based on their relative ranking. For projects requiring only a small investment of resources, this may be enough information to make the go/no-go decision on whether to move forward. For any project, it may become apparent that sufficient upside simply does not exist to justify moving forward.
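To make the composite calculation concrete, the following minimal sketch shows how weighted benefit scores might be combined and used to rank candidate projects. The criteria names, weights, and nine-point scale are illustrative assumptions, not the actual scoring key from Figure 3.

```python
# Illustrative Benefits Estimator composite calculation. The criteria,
# weights, and 1-9 scale below are hypothetical; an organization's own
# scoring key defines the real criteria and levels.

WEIGHTS = {
    "clinical_outcomes": 3.0,          # higher-priority criteria get
    "financial_return": 2.0,           # larger weighting factors
    "patient_experience": 2.0,
    "implementation_feasibility": 1.0,
}

def composite_score(scores):
    """Weighted sum of benefit scores across all criteria."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

# Score candidate projects against the same preconfigured criteria, then rank.
projects = {
    "Project A": {"clinical_outcomes": 7, "financial_return": 5,
                  "patient_experience": 5, "implementation_feasibility": 7},
    "Project B": {"clinical_outcomes": 9, "financial_return": 3,
                  "patient_experience": 7, "implementation_feasibility": 5},
}

for name, scores in sorted(projects.items(),
                           key=lambda kv: composite_score(kv[1]), reverse=True):
    print(f"{name}: {composite_score(scores):.0f}")
```

Because every project is scored against the same criteria and scale, the resulting composites support the relative ranking described above even when the absolute values remain rough approximations.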

3. Proof of Concept—For projects requiring a large investment of money or time, typically including capital funds, further due diligence is warranted; it can reduce uncertainty about the investment and lower the risk of the decision on whether to move forward. Building a small-scale prototype to test and verify that the concept will work can also improve the accuracy of the estimate of the project’s worth. This prototyping step can often be accomplished through data analysis and extrapolation to a full-scope implementation. Additionally, a sensitivity analysis can yield insight into the risk of the underlying assumptions.

Some concepts lend themselves to small-scale prototyping better than others do. Considerations on whether to pursue a proof of concept may include (1) scalability (i.e., can we model the concept at a small scale and cost?) and (2) learning opportunity (i.e., will the proof of concept cost-effectively yield useful information not already learned through Steps 1 and 2?).

Key outputs from this step are (1) proof that the concept is feasible,14 (2) reduced uncertainty of the value estimate, cost, and resource requirements, and (3) first-pass learnings that can serve as early guidance for the eventual full-scale project plan.
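As a simple illustration of such a sensitivity analysis, the sketch below varies a single assumption, the achievable reduction in average length of stay, across a plausible range to expose how strongly the value estimate depends on it. Every volume and cost figure here is hypothetical.

```python
# Hypothetical one-way sensitivity analysis for a proof of concept.
# Every figure below is an assumption for illustration only.

BASELINE_STAY_DAYS = 6.0     # assumed average length of stay
ANNUAL_CASES = 500           # assumed annual case volume
COST_PER_BED_DAY = 2_000.0   # assumed marginal cost of one bed-day

def annual_value(stay_reduction):
    """Estimated annual cost avoidance for a fractional length-of-stay reduction."""
    days_saved_per_case = BASELINE_STAY_DAYS * stay_reduction
    return days_saved_per_case * ANNUAL_CASES * COST_PER_BED_DAY

# Sweep the key assumption across a plausible range; a wide spread in the
# output signals that this assumption deserves validation via prototyping.
for reduction in (0.01, 0.03, 0.05, 0.07):
    print(f"Stay reduction {reduction:.0%}: ${annual_value(reduction):,.0f} per year")
```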

4. Financial Pro Forma—A pro forma financial statement15 enables preinvestment analysis of cost and revenue impacts, yielding insight into summary metrics such as ROI, NPV, and the internal rate of return (IRR). As in Step 3, this step is most relevant for projects requiring substantial investments of money and time; otherwise, Step 2’s more rudimentary range-based approximations are typically sufficient to justify proceeding. Because of the pro forma’s financial focus, this step does not address nonfinancial impacts, such as improved clinical outcomes, safety, and patient experience. However, insight into resource requirements and cost impacts can be useful for planning purposes, even on projects not aiming to drive financial value.
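These summary metrics can be computed directly from the pro forma’s projected cash flows. The following is a minimal sketch under a hypothetical cash-flow series and discount rate; a real pro forma would, of course, be far more detailed.

```python
# Minimal sketch of pro forma summary metrics. The cash flows and
# discount rate are hypothetical, not figures from any real project.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal rate of return via bisection: the rate at which NPV = 0."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid        # NPV still positive: the IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Assumed three-year project: $250,000 up-front cost, growing net benefits.
flows = [-250_000, 80_000, 120_000, 160_000]
investment = -flows[0]
simple_roi = (sum(flows[1:]) - investment) / investment

print(f"Simple ROI: {simple_roi:.1%}")      # (total benefit - cost) / cost
print(f"NPV at 8%:  ${npv(0.08, flows):,.0f}")
print(f"IRR:        {irr(flows):.1%}")
```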

5. Goal and Aim Statements—Articulating well-specified goals and aims ties the vision of the project to actionable steps toward achieving the anticipated value. The goal and aim statements should be based on the value logic defined in Steps 1 to 4, with greater levels of detail.

With roots in the traditional quality improvement toolbox,16, 17 goal and aim statements explicitly describe the desired outcome and steps toward achievement to be driven or enabled by the project, respectively. Groups of aim statements are sequenced toward a cohesive purpose, each contributing to the larger goal, and so the framework is a two-level hierarchy. Subdividing the goal into smaller aim statements encourages the process owner to define real-world action steps, leading to an achievable execution plan.

McCannon, Schall, and Perla state, “a good aim should feel simultaneously ambitious and achievable, and must be measurable.”18 To be most compelling, goal and aim statements should exhibit Doran’s “S.M.A.R.T.” characteristics (specific, measurable, attainable, realistic, and time-bound).19 This is a key milestone in the project: to achieve a targeted outcome improvement, process change must be tightly coupled with software design and associated with the software release. What will the process owner do differently and better through use of this software? How will the process change result in a measurable outcome improvement? The process change should be complemented by the software’s functionality.

6. Software Implementation Process—At this stage, the software implementation process begins. Virtually any project management methodology (e.g., Agile, Waterfall, Spiral) can be used to guide the execution.

Equipped with strategic purpose and guided by tactical goal and aim statements, the project’s scope and requirements become clear. Any request to change the scope must be evaluated against the project’s purpose, goal, and aim statements. If the change request does not improve the likelihood of achieving the VOI, or if it distracts from the immediate tactical aims, it is best deferred until after the initial delivery, because it is likely to add noise, erode focus, and drain resources unnecessarily.

7. Organizational Change—Organizational change through process improvement and cultural evolution is a critical step in realizing the benefits of software development. As the software nears deployment, the stakeholders drive the transformation enabled by the new software. Requiring intentional planning, diligence, and a champion from within the user community, this step is critical for realizing value.20–22 Various process- or quality-improvement methods, such as plan-do-study-act (PDSA),23 the Six Sigma DMAIC (define, measure, analyze, improve, control) method,24 and others, may be employed and should be closely coupled with the software development. This tight integration facilitates definition of the scope of the software, adoption by users, realization of the impact, and attribution of value.

8. Measure Actual Value—Initial value can usually be expected after implementation of the software, a sufficient period of use by a well-trained and engaged user community, and a period of intentional organizational change (in many cases, three to six months after the project goes live). Measuring the realized value is a critical step in the VOI process because it confirms the efficacy of the software and the users’ ability to draw insight and take informed action.

The key output of this step is a quantified actual value, which can be compared to the anticipated estimates and commitments yielded in Steps 1 through 5. By confirming or disproving the accuracy of those estimates, it offers the opportunity to learn. Were the estimates accurate? Did the estimation methods succeed? Were the sources of information upon which the estimates were based reliable? If the actual value differed from the estimates, why? Were the assumptions sound? Answering these questions and others helps to provide useful feedback to inform the next set of goal and aim statements (i.e., the next version of the software) as well as the beginning of the VOI process (i.e., the next project).
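A minimal sketch of such a goal-versus-actual comparison follows. The figures anticipate Example 1 below, in which a 3 percent length-of-stay (LOS) reduction goal was exceeded by a measured 7 percent reduction; the single-metric structure is an illustrative simplification.

```python
# Sketch of comparing committed goals with measured results (Step 8).
# Figures match Example 1 below: a 3 percent LOS reduction goal and a
# measured 7 percent actual reduction.

goals   = {"LOS reduction": 0.03}   # committed in the goal statement (Step 5)
actuals = {"LOS reduction": 0.07}   # measured after go-live

for metric, goal in goals.items():
    actual = actuals[metric]
    status = "met" if actual >= goal else "missed"
    print(f"{metric}: goal {goal:.0%}, actual {actual:.0%}, "
          f"variance {actual - goal:+.0%} ({status})")
```

Quantifying the variance in this way turns the learning questions above into concrete inputs for the next set of goal and aim statements.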

Measurement of value can be a relatively new concept for some organizations and is often not broadly practiced in a systematic fashion. Determining how to quantify alleged intangibles can present a daunting but necessary challenge. By applying a systematic approach and having a diverse, well-rounded tool set of measurement methods25, 26 at the decision maker’s disposal, value can often be observed and quantified.

Illustrative Examples

We examine two examples where this methodology was applied at Mission Health System, yielding measurable impact. Mission Health is North Carolina’s sixth-largest health system, comprising six hospitals, numerous ambulatory practices, and more than 11,000 employees.27 Mission Health also owns the state’s largest accountable care organization.28

Example 1: Bowel Surgery Analytics Application

The first example examines implementation of an analytics application focused on a bowel surgery care process model at Mission Health. Care process models are patient-cohort-centered building blocks for clinical process standardization and implementation of evidence-based best practice and are used at several healthcare systems, including Mission Health and Intermountain Healthcare.29

The analytics software application supporting this initiative was first researched by a multidisciplinary project team including clinicians, quality improvement advisors, clinical informaticists, administrators, and data architects. Vendors were then evaluated, and best-practice benchmarks were gathered from other healthcare providers. The benefits were quantified within the Benefits Estimator’s multicriteria framework, and a small-scale proof of concept was built in the form of a rough, nonautomated version of the software at relatively low cost. The team sought to answer the question of whether they could decrease the number of days that bowel surgery patients spent in the healthcare system. The unit of analysis was the length of stay (LOS), measured in days. The project team consisted of a panel of operational and clinical subject-matter experts who helped define and remove extraneous variables as well as identify any reasonable rival explanations of outcomes.

The consolidated pro forma financial statement improved organizational support for the project by clarifying its anticipated financial impacts, enabling the executive committee to review and approve the project efficiently and with relatively low risk. The project team then documented their initial goal and aim statements, articulating the specific process and outcome improvements that the software would enable. The initial goal was defined as reducing total LOS for bowel surgery patients by 3 percent by a specified date. This goal was based on research, analysis, and prioritization of potential focus areas within the bowel surgery process, championed by physician leadership.30

The team prioritized its initial improvement efforts on standardization of preoperative procedures, having identified this area as a key driver toward accomplishing the LOS goal, and the software application complemented this effort by measuring caregivers’ compliance. The team documented two aim statements to correspond with measurable process improvement toward this goal, specifying target levels of compliance with physician and nurse protocols, respectively. Each aim statement corresponded with an iterative release of the software, giving helpful context to the software developer. The developer worked hand in hand with a physician champion and a nurse champion to custom-design the software in a manner that would enable them to accomplish their goals. Because of the intentionality and diligence in quantifying targets, focused effort, and cross-functional, teamwide alignment of purpose, the investment yielded measurable VOI, reducing the LOS by 7 percent by the target date and exceeding the original goal of 3 percent. The LOS reduction enabled patients to return home sooner, reduced hospital utilization and costs,31 and increased the number of patients discharged per day. Other benefits that the team may not have anticipated were also associated with decreased LOS, such as improved patient and family satisfaction and decreased overtime labor hours.

The cost reduction in this example enabled a traditional measurement of financial ROI. In the next example, we examine the yield of a nonfinancial value, illustrating the expansion of the ROI concept to the more inclusive VOI.

Example 2: Sepsis Analytics Application

The second example considers Mission Health’s implementation of an analytics application focused on a sepsis care process model. As in the first example, the VOI process was followed during the project’s planning phases and implementation. Goal and aim statements were articulated, prioritizing initial improvement efforts on the standardization of a treatment bundle to occur within the critical first three hours of the patient’s arrival in the emergency room. The goal focused on improving compliance with this three-hour procedural bundle, and the complementary aim statements set targets for the more granular components of this bundle. Each statement specified a target compliance level and date. Unlike in the previous example, the project team in this case decided to focus on improving the process alone, assuming that improvement in outcomes would follow. The team sought to answer the question of the impact of measuring compliance with the three-hour procedural bundle and the effects of increased compliance with the bundle. The unit of analysis involved the total number of opportunities to use the bundle and the number of times the bundle was used. The measure also tracked the number and type of infections as compliance increased and decreased. As in the first example, the project team consisted of a panel of operational and clinical subject-matter experts who helped define and remove extraneous variables and identify any reasonable alternative explanations of outcomes.

The software component of the project supported this initiative by enabling measurement and tracking of process metrics and outcome metrics, and by offering visibility into potential opportunities for improvement based on variations in these metrics. As a result, this project also exceeded its goal, improving compliance with the bundle by more than 12 percentage points. In addition, the project realized reductions in the mortality rate and LOS for sepsis patients by 3 percentage points and 3.3 percent, respectively.32 These benefits were undoubtedly important for Mission Health and its patients even though they were nonfinancial in nature. This example therefore emphasizes the nonfinancial value component of VOI.

These examples illustrate real-world successes in applying the VOI process, demonstrating how value can be effectively planned and achieved. While these specific examples highlighted LOS, mortality rate, and associated secondary benefits, the VOI process can be applied to various projects in a similar fashion in order to achieve positive effects on numerous metrics, such as patient satisfaction, readmission rate, patient safety incident rate, patient wait time, claims denial rate, and others.

Future Research

An area for future research may be a larger-scale empirical study to statistically measure the effectiveness of applying the VOI method or another approach that systematizes the anticipation and measurement of project value. An empirical study comparing outcomes of an experimental group versus a control group of healthcare software projects could quantify the benefits of the VOI approach with a high degree of statistical rigor. Measures of success between these two groups may include a set of financial outcome metrics (e.g., ROI, IRR, incremental profit, time to value), a set of nonfinancial outcome metrics (e.g., incremental improvement in quality indicators, incremental improvement in patient satisfaction, incremental LOS), and a set of project-centric metrics (e.g., project delivery timeline, project stakeholder satisfaction, project budget).


Seth A. Bata, MS, is the director of analytics at Mission Health System in Asheville, NC.

Terry Richardson, MS, is the director of clinical and business analytics at Mission Health System in Asheville, NC.



  1. Steinbrook, Robert. “Health Care and the American Recovery and Reinvestment Act.” New England Journal of Medicine 360, no. 11 (2009): 1057–60.
  2. Mamlin, Burke W., and William M. Tierney. “The Promise of Information and Communication Technology in Healthcare: Extracting Value from the Chaos.” American Journal of the Medical Sciences 351, no. 1 (2016): 59–68.
  3. Rahurkar, Saurabh, Joshua R. Vest, and Nir Menachemi. “Despite the Spread of Health Information Exchange, There Is Little Evidence of Its Impact on Cost, Use, and Quality of Care.” Health Affairs 34, no. 3 (2015): 477–83.
  4. Jeansson, John. “Perception of EHR Value.” 4th European Conference on Information Management and Evaluation (2010): 160.
  5. Berander, Patrick, and Anneliese Andrews. “Requirements Prioritization.” In Aybüke Aurum and Claes Wohlin (Editors), Engineering and Managing Software Requirements. Berlin: Springer-Verlag, 2005, 69–94.
  6. Green, D., and T. Young. “Value Propositions for Information Systems in Healthcare.” Proceedings of the 41st Annual Hawaii International Conference on System Sciences (2008): 257.
  7. Brynjolfsson, Erik, and Lorin Hitt. “Information Technology as a Factor of Production: The Role of Differences Among Firms.” Working Paper Series 201. MIT Center for Coordination Science. 1997. Available at (accessed on May 24, 2016).
  8. Dewan, Sanjeev, and Chung-ki Min. “The Substitution of Information Technology for Other Factors of Production: A Firm Level Analysis.” Management Science 43, no. 12 (1997): 1660–75.
  9. Stratopoulos, Theophanis, and Bruce Dehning. “Does Successful Investment in Information Technology Solve the Productivity Paradox?” Information & Management 38, no. 2 (2000): 103–17.
  10. Nelson, Eugene C., Mark E. Splaine, Stephen K. Plume, and Paul Batalden. “Good Measurement for Good Improvement Work.” Quality Management in Health Care 13, no. 1 (2004): 1–16.
  11. Jeansson, John. “Perception of EHR Value.”
  12. Isaias, Pedro, and Tomayess Issa. High Level Models and Methodologies for Information Systems. New York, NY: Springer, 2015.
  13. Breyfogle, Forrest W., III. Implementing Six Sigma: Smarter Solutions Using Statistical Methods. 2nd ed. Hoboken, NJ: Wiley, 2003.
  14. Eeles, Peter, and Peter Cripps. The Process of Software Architecting. Upper Saddle River, NJ: Addison-Wesley Professional, 2009.
  15. Smith, Richard L., and Janet Kiholm Smith. Entrepreneurial Finance. 2nd ed. New York, NY: Wiley, 2003.
  16. US Department of Health & Human Services, Health Resources & Services Administration. “Readiness Assessment & Developing Project Aims.” 2011. Available at (accessed on November 5, 2017).
  17. Nelson, Eugene C., Mark E. Splaine, Stephen K. Plume, and Paul Batalden. “Good Measurement for Good Improvement Work.”
  18. McCannon, C. Joseph, Marie W. Schall, and Rocco J. Perla. Planning for Scale: A Guide for Designing Large-Scale Improvement Initiatives. IHI Innovation Series White Paper. Cambridge, MA: Institute for Healthcare Improvement, 2008.
  19. Doran, George T. “There’s a S.M.A.R.T. Way to Write Management’s Goals and Objectives.” Management Review 70, no. 11 (1981): 35–36.
  20. de Mast, Jeroen, Benjamin Kemper, Ronald J. M. M. Does, Michel R. H. Mandjes, and Yohan van der Bijl. “Process Improvement in Healthcare: Overall Resource Efficiency.” Quality and Reliability Engineering International 27, no. 8 (2011): 1095–1110.
  21. McFadden, Kathleen L., Gregory N. Stock, and Charles R. Gowen III. “Leadership, Safety Climate, and Continuous Quality Improvement: Impact on Process Quality and Patient Safety.” Health Care Management Review 40, no. 1 (2015): 24–34.
  22. Henderson, John C., and H. Venkatraman. “Strategic Alignment: Leveraging Information Technology for Transforming Organizations.” IBM Systems Journal 32, no. 1 (1993): 4–16.
  23. Speroff, Theodore, and Gerald T. O’Connor. “Study Designs for PDSA Quality Improvement Research.” Quality Management in Health Care 13, no. 1 (2004): 17–32.
  24. de Koning, Henk, and Jeroen de Mast. “A Rational Reconstruction of Six-Sigma’s Breakthrough Cookbook.” International Journal of Quality & Reliability Management 23, no. 7 (2006): 766–87.
  25. Jones, Cheryl, Beth Layman, Elizabeth Clark, Joseph Dean, Fred Hall, John McGarry, and David Card. Practical Software Measurement: Objective Information for Decision Makers. Boston, MA: Addison-Wesley Professional, 2001.
  26. Hubbard, Douglas W. How to Measure Anything: Finding the Value of Intangibles in Business. 3rd ed. Hoboken, NJ: Wiley, 2014.
  27. Mission Health System. “Mission Health: About Us.” 2016. Available at (accessed May 16, 2016).
  28. US Department of Health & Human Services, Centers for Medicare & Medicaid Services. “2016 Medicare Shared Savings Program Organizations.” 2016. Available at (accessed on November 5, 2017).
  29. Intermountain Healthcare. Guidebook for Implementation of a Clinical Work Process-Based Organizational Structure. Institute for Health Care Delivery Research, 2008. Available at (accessed on November 5, 2017).
  30. Hendy, Jane, and James Barlow. “The Role of the Organizational Champion in Achieving Health System Change.” Social Science & Medicine 74, no. 3 (2012): 348–55.
  31. Polverejan, Elena, Joseph C. Gardiner, Cathy J. Bradley, Margaret Holmes-Rovner, and David Rovner. “Estimating Mean Hospital Cost as a Function of Length of Stay and Patient Characteristics.” Health Economics 12, no. 11 (2003): 935–47.
  32. Health Catalyst. “Sepsis Mortality and Length of Stay: One Hospital System’s Story.” Available at (accessed May 24, 2016).


Seth A. Bata, MS, and Terry Richardson, MS. “Value of Investment as a Key Driver for Prioritization and Implementation of Healthcare Software.” Perspectives in Health Information Management (Winter 2018): 1-13.
