New Technologies in Economic Evaluation




Introduction

The overarching issue addressed by the discipline of economics is resource scarcity. In one sense or another, all economists work on questions that have some connection to scarcity and limits. Thus, the primary purpose of economic analysis, and of cost-benefit and cost-effectiveness analysis (CEA) in particular, is to support decision-making necessitated by the scarcity problem. Economic evaluation information is therefore generated with the direct intention of influencing policy – but is that objective achieved? This is the central question addressed in this article.

The policy frame here relates to decisions on coverage of medical interventions. A decision to ‘cover’ a technology indicates that its cost will be reimbursed as part of an insurance package, and so it involves setting limits on the health care services that can be accessed or provided. Coverage decisions are taken both in health systems where private insurance predominates and in systems dominated by publicly funded insurance programs.




This article initially provides a definition of economic evaluation typically undertaken to inform coverage decisions and then introduces a case study, the UK’s National Institute for Health and Clinical Excellence (NICE). The problem, reflected in the lack of use of such information, is then outlined, with supporting evidence from the published literature presented. The article then provides a discussion of how some of the barriers and obstacles to use might be overcome.

Normative Economic Evaluation

Much economic evaluation work in health care, seeking to support coverage decision making, has a ‘normative’ bent. That is, the role of the economist has been to indicate the nature of the resource allocation decision that ought to be followed if certain objectives are to be achieved. An important prerequisite for such a normative stance is that the analyst has a good understanding of the objective function (i.e., what should the health service be seeking to achieve?) and the decision rules to be applied. As Culyer (1973) points out, the process of agreeing objectives is not necessarily straightforward:

In the real world … policy makers and most other people who seek economic advice do not have well-articulated ideas of their objectives. One of the first tasks of a cost-benefit analyst, for example, is usually to seek to clarify the objectives – even to suggest some.

Culyer (1973, p. 254)

Many health economists have taken Culyer at his word, proposing an objective of maximising population health benefits and, although there are those who argue for a broader set of objectives, the proposition does receive some support from policy makers and the public more generally. The difficulties and disputes arise primarily around attempts to measure health. Over the course of the past 20 years or so the subdiscipline of health economics has had a methodological focus on health measurement and valuation. The result is a measure of health that can be operationalized for use in policy making, that is, the quality-adjusted life-year (QALY). The decision rule, therefore, is to invest in those technologies that produce the largest QALY gains for a given level of cost. To inform such decisions, normative analyses tend to provide results in the form of an incremental cost-effectiveness ratio (ICER), a net-benefit statistic and a cost-effectiveness acceptability curve (CEAC).

  • The ICER reports the ratio of additional costs to additional health effects associated with a new intervention (e.g., cost per QALY gained).
  • The net-benefit statistic expresses the additional health effects in monetary units by using an estimate of the ‘maximum willingness to pay’ per unit of health gain, where available.
  • The CEAC plots the probability that the intervention in question is cost-effective against a range of possible threshold values to define cost-effectiveness.
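These three summary measures can be computed directly from simulated incremental costs and effects. The sketch below is illustrative only: the bootstrap replicates and the willingness-to-pay threshold are hypothetical numbers chosen for demonstration, not values from any actual appraisal. It shows how the ICER, the net-benefit statistic, and the CEAC relate to one another:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation output: 1000 bootstrap replicates of the
# incremental cost (£) and incremental effect (QALYs) of a new
# technology versus current care.
delta_cost = rng.normal(4000, 800, 1000)   # additional cost per patient
delta_qaly = rng.normal(0.25, 0.08, 1000)  # additional QALYs per patient

# ICER: ratio of mean additional cost to mean additional health effect.
icer = delta_cost.mean() / delta_qaly.mean()

# Net monetary benefit at an assumed willingness to pay (wtp) per QALY:
# NMB = wtp * delta_E - delta_C; positive NMB implies cost-effective.
wtp = 20_000  # assumed threshold, £ per QALY
nmb = wtp * delta_qaly - delta_cost

# CEAC: probability of a positive NMB across a range of threshold values.
thresholds = np.arange(0, 50_001, 1_000)
ceac = [(t * delta_qaly - delta_cost > 0).mean() for t in thresholds]

print(f"ICER: £{icer:,.0f} per QALY gained")
print(f"P(cost-effective at £{wtp:,}/QALY): {(nmb > 0).mean():.2f}")
```

Plotting `ceac` against `thresholds` yields the acceptability curve itself; the probability rises as the assumed maximum willingness to pay per QALY increases.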

A National Institute For Health And Clinical Excellence Case Study

Perhaps the most researched example of use of economic evaluation in coverage decision making is the UK’s NICE. In many respects, NICE has set the standard for evidence informed coverage decision making and openness to the application of economic analyses.

The Institute, established in 1999, has as one of its functions the appraisal of new and existing health technologies. Coverage decisions made by NICE are based on explicit criteria and are informed by evidence, including an economic evaluation. The evidence is interpreted and considered by the Technology Appraisal Committee, and that Committee formulates recommendations and guidance on the use of the technology in the National Health Service (NHS) in England and Wales.

There can be no doubt that the technology appraisal decisions at NICE are driven in large part by the results of economic analyses. The Institute’s Chairman, Sir Michael Rawlins, stated explicitly that in determining its guidance, NICE would take six matters into account, including both clinical effectiveness and cost-effectiveness (Rawlins and Culyer, 2004). Further, the Secretary of State’s Direction to NICE on its establishment in 1999 clearly stated the intent: NICE should consider the broad balance of clinical benefits and costs.

As a crude example of cost-effectiveness driving decisions: in the appraisal of statin therapy for secondary prevention of coronary heart disease, the ICER ranged from £10 000 to £16 000 per QALY gained, and the guidance from NICE states: ‘Statin therapy is recommended for adults with clinical evidence of coronary vascular disease’ (NICE, 2006). By contrast, when the ICER is much less favorable the guidance tends to be negative: in the case of anakinra for rheumatoid arthritis, the ICER was in the region of £105 000 per QALY gained, and the guidance states: ‘Anakinra should not normally be used as a treatment for rheumatoid arthritis. It should only be given to people who are taking part in a study on how well it works in the long term’ (NICE, 2003).
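The implicit decision rule in these two appraisals can be sketched as a simple threshold comparison. The code below is a hypothetical illustration: the £30 000 threshold is an assumed figure chosen for demonstration (the basis for NICE’s actual threshold value or range is, as discussed later, unclear), and the ICERs are the upper bounds of the ranges cited above:

```python
# Hypothetical sketch of the implicit coverage decision rule: compare
# an intervention's ICER (£ per QALY gained) with a threshold.
# The £30,000 figure is an illustrative assumption, not a published
# NICE value.

THRESHOLD = 30_000  # assumed maximum willingness to pay per QALY (£)

def recommend(icer: float, threshold: float = THRESHOLD) -> str:
    """Crude coverage recommendation based solely on the ICER."""
    return "recommend" if icer <= threshold else "do not recommend"

# ICERs cited in the text (upper bounds of the reported ranges):
print(recommend(16_000))   # statins, secondary prevention -> "recommend"
print(recommend(105_000))  # anakinra, rheumatoid arthritis -> "do not recommend"
```

In practice, of course, the Committee weighs several other considerations alongside the ICER, so this one-line rule is a caricature of the process rather than a description of it.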

This general picture is supported by the analyses of decisions taken by NICE and other agencies presented by Clement et al. (2009, p. 1437): agencies such as NICE make “recommendations that are consistent with evidence on effectiveness and cost-effectiveness but that other factors are often important.” Qualitative work by Bryan et al. (2007, p. 41) tells a very similar story – examples of quotes from NICE committee members:

I think economic evaluation was regarded as being important from day one.

It [the CEA] seems to me to be the clincher really. If it’s too high then it’s not going to get funded.

The Problem

The NICE story is positive but it is important to understand that it is an outlier in terms of policy use of economic evaluation in health care. The broader literature on this topic has a consistent refrain: concern over the limited usefulness of CEAs when applied in decision-making processes. Responses to this concern have tended to centre on questions of how evaluation research by health economists can be made more useful and accessible to policy makers.

As a framework for considering these issues, the authors have previously grouped barriers to the use of economic analyses in health care decision-making under two headings: accessibility and acceptability. The accessibility concern includes issues such as interpretation difficulties, the aggregation of results, difficulties in accessing information, shortage of relevant skills, etc. Under an acceptability or relevance banner, a whole range of barriers might be considered relating to the timeliness of information provision, and the quality and nature of the information.

Thus, if one accepts this framework, the necessary requirements for economic evaluation evidence to be used in decision-making, relate both to accessibility and to acceptability. For the information to be accessible, it is required that the results of the economic analyses can readily be understood and interpreted by end-users. This is mainly concerned with issues of the presentation of information. For the information to be acceptable, it is necessary that economic analyses provide information that is seen by end-users to be relevant (i.e., providing data on parameters that are likely to influence the decision of the policy maker), information that is appropriate to the decisions they face, taking into account relevant contextual factors (e.g., budgetary arrangements commonly seen in the NHS), and that such analyses are seen as providing information in a timely fashion.

This article will now summarize the main themes that emerge from the published literature on this topic. The authors will then return to NICE and reflect further on its use of economic evaluation in light of these accessibility and acceptability criteria. The article will conclude with reflections on the way forward, drawing on contributions from a more ‘positive’ approach to economics.

Empirical Work

This part of the article discusses the work of others who have researched the use of economic evaluation in health care decision making. A formal review of literature in this area has been published by Williams et al. (2008) and this article draws, in part, from that work.

The vast majority of empirical work in this field was conducted from the mid-1990s onwards. In terms of method, there are three strands to the empirical literature:

  • Surveys and questionnaires.
  • Studies specifically of the NICE appraisals process, drawing solely on secondary sources.
  • A prospective, case study approach, represented by a single study.

One of the most innovative pieces of research, going beyond surveys and interviews, was conducted by McDonald (2002). Based within an English Health Authority, she offered health economics support as a participant observer of a Coronary Heart Disease Strategy. She found that CEA was not geared toward assisting in the decision making processes prevalent at local levels of the NHS in England. This work highlighted barriers beyond those identified in previous UK studies. These are discussed below.

In a US context, use of formal CEA in technology coverage decisions is, if anything, even less commonly seen.

Successful application of CEA to policy has thus proved to be a challenge to decision makers across a range of health care systems. This low level of use occurs despite evidence suggesting that decision makers appreciate the potential value of cost-effectiveness information to policy.

Studies of NICE have largely relied on data collected from secondary sources. Although these vary in approach to data analysis, each identifies CEA as a prominent feature in the Institute’s work, in contrast to the decision-making settings examined in the other studies.

Barriers To The Use Of Economic Evaluation

Research indicates a plethora of active barriers to use of CEA. In relation to accessibility, there are three dimensions reported as significant within the literature. The first relates to the shortage of relevant analyses. Early studies in particular emphasize the difficulties decision makers face in obtaining economic evaluations. The second barrier derives from uncertainty or ignorance over how and from where existing studies can be accessed. This is compounded by the funding and access difficulties inherent in commissioning a new CEA that can be delivered in a timely manner. Finally, and – within this category of barriers – most consistently, studies demonstrated a lack of expertise in comprehension and interpretation. It is clear from studies at local levels that decision makers struggle to understand health economic analyses including the concepts and language used, and the presentational styles adopted.

These problems of accessibility are compounded by barriers relating to the perceived acceptability and ease of implementation of CEA. A small number of studies indicated that perceived methodological flaws were a major impediment to utilization. More commonly, studies found that decision makers did not always consider the source of CEAs to be independent. The pharmaceutical industry has been active in using CEAs to promote their products and studies repeatedly emphasize the distrust this engenders in decision makers.

Studies employing qualitative methods have uncovered factors relating to the complexity and interactive nature of the decision making environment, and therefore the competing drivers of decisions. Far from reflecting a problem-solving research-led model, health care decision making is subject to multiple influencing factors including: political considerations, administrative arrangements, equity concerns, societal opinion and the values and attitudes of decision makers. Interestingly, this multiplicity of competing considerations was also indicated in more recent quantitative analysis of NICE decisions.

The study by McDonald (2002) uncovered fundamental value conflict between decision makers’ guiding principles and those underpinning normative health economics. She reinforces the assertion that single objectives are not routinely present in decision making and details instances of decision making which could not be said to be following any single maximization principle. As a participant observer, her attempts to introduce a rational, problem-solving approach to resource allocation resulted in a ‘paralysis’ caused, in part, by complex funding constraints. Rational approaches to policy formulation were considered by decision makers to be less satisfactory than standard nonrational practices of ‘muddling through’ in a context of resource scarcity.

Finally, studies from across the range of methodological types suggest that decision makers perceive recommendations from CEAs to be difficult to implement. For example, budget holders operating within short-term budgeting cycles may be under pressure to contain costs rather than to promote efficiency, and others experience difficulties redirecting resources across inflexible financial structures. Such barriers have been expressed in terms of the savings identified in economic evaluations being unrealisable in practice. Health economists are then accused of being ill informed about structural aspects of health systems.

Overall, the literature reveals a growing realization that interventions by health economists in the area of research utilization have neither addressed the totality of factors which influence policy makers nor accounted for the complexity of health care decision making processes.

Prescriptions For Improvement

Typically, the published research draws on a similar range of potential solutions to the problem of low levels of usage. These include the need to standardize and improve methods of CEA and to increase the available evidence base for decision makers both in terms of volume and timeliness. A strong strand within prescriptions for greater usage focused on education and training for decision makers so that CEA can be better accessed, understood and applied.

Overall, responses to reported barriers tended to centre on questions of how research by health economists can be made more useful and accessible to policy makers. Prescriptions for overcoming accessibility barriers usually involve a combination of increasing resources, improving the means of communication with decision makers, and providing decision makers with training in interpreting health economics.

However, it is less clear from the literature how barriers relating to organizational and political context are to be addressed. There is little, for example, by way of prescriptions for shaping the health care system in order to incentivize and facilitate the use of CEA. Indeed, one study author, McDonald (2002), is pessimistic as to the appropriateness of seeking to increase the use of CEA. Her argument is that, as a result of the complex and sometimes perverse structures of the English NHS, it is unhelpful to prescribe rational frameworks for NHS decision makers because this serves only to highlight to decision makers the gap between the rationalist ideal and the structural and political reality of the system.

Further National Institute For Health And Clinical Excellence Reflections

This part of the article draws on the authors’ qualitative empirical work looking at the challenges for NICE in making full use of economic evaluations. Although issues of accessibility, broadly speaking, are not acute at the national level in the UK, organizations like NICE still have some important issues to address in this field. The NICE Appraisals Committee is in the highly unusual situation of having, for every topic they consider, an economic analysis undertaken specifically for their purposes. Thus, they avoid the frequently cited problems encountered by those working at a local level in the NHS of not being able to access cost-effectiveness (CE) information in a timely manner.

In terms of the challenge of interpreting CEAs, the qualitative study uncovered poor levels of understanding of CE information. The extent to which this is a serious barrier depends, to some extent, on the role NICE Committee members are expected to play and the overall approach to decision making being adopted. If all Committee members have a vote on the policy decision then they all need to understand all relevant information presented, including the CEA. A failing on the part of analysts that was revealed from the authors’ research concerned the presentational style of CE studies. The highly technical nature of the CE studies being undertaken for NICE, and their presentational style, make for difficulties in understanding for the noneconomist. The need for improvements in the presentation of CE studies was a strong message from the authors’ work.

A commonly cited acceptability concern with the CEAs is that they fail explicitly to consider the opportunity costs of the decisions being made. In the authors’ research this was raised by a number of committee members including both health economists and health care managers. The CEA at NICE typically presents the problem in terms of a one-off decision concerning the coverage of a given health technology, commonly a new drug. No explicit consideration is therefore given to the sacrifice that would be required in order for the additional resources to be made available (assuming that the incremental cost is positive). An attempt to mitigate this problem involves use of a CE threshold, defining technologies with ICERs that fall below the threshold as cost-effective uses of NHS resources (regardless of their true opportunity cost). This issue has been highlighted by other commentators. However, although the necessity of using a CE threshold was acknowledged by most of the authors’ research subjects, it was also viewed as problematic because the basis for the threshold value or range is very unclear.

In summary, the data from the authors’ qualitative work with NICE suggest that for analyses to be viewed as acceptable, it is necessary that they provide information: (1) that end-users see as relevant (i.e., providing data on parameters that are likely to influence the decision of the policy maker), (2) that is appropriate to the decisions being faced, taking into account relevant contextual factors (e.g., budgetary arrangements commonly seen in the NHS), and (3) that can inform implementation of decisions in a complex decision making environment.

The Research-Practice Divide

This article has explored some of the reasons for the moderate impact of economic evaluation on health policy. There is little dispute that such findings are a source of concern to the discipline of health economics and that, for such analyses to be valuable decision-making tools, change of some form is required. Commentators have identified weaknesses in the methodologies adopted in economic analyses and there have been concerted attempts to improve their quality through, for example, the development of methodological standards. Difficulties in implementation may also derive from limits to the generalizability of studies, resulting from factors such as variations in disease epidemiology, relative prices, levels of health care resources, organizational arrangements, and clinical practice patterns.

However, one of the most challenging issues is contextual and relates to the difficulty in implementing hypothetical savings predicted by CEAs. It has been noted that the erroneous assumption of incremental divisibility of interventions and their benefits underpins many CEAs. Adang et al. (2005) have developed checklists to address the issue of reallocating resources within a real world context in order to get better information as to whether savings can indeed be made.

Important as these developments undoubtedly are, they also need to be accompanied by a concerted attempt to understand the differences in the respective domains of ‘research’ and ‘practice’. Much valuable work has been done on techniques for reducing or bridging the gap between the ‘two communities’ of researchers and decision makers. A review of studies by Innvaer et al. (2002) suggests that ‘personal contact’ between researchers and decision makers is one of the most commonly reported facilitators of evidence-based decision making. Lavis et al. (2003) argue that such interaction enables researchers to improve the production of analyses while simultaneously enhancing their adoption by policy makers. However, these prescriptions for closer contact between researchers and decision makers also need to avoid naivety: it has been seen that other barriers exist. Also, incentives and rewards for researchers are less likely to recognize the value of incremental influence than they are outcomes that have a more direct influence on policy formation. In other words, the academic institutional environment in which economic evaluations are produced is not always conducive to such an interactive approach.

Much of the health economics literature to date has concentrated on barriers of accessibility of CEA results. This suggests a view that improvement in the process by which evaluations are communicated to decision makers, and the latter’s capacity to understand their recommendations, ought to be the focus of attention and activity if impact is to be maximized. In other words, the emphasis is on tweaking the process at both ends in order to support rational implementation of research findings. A focus on barriers to the acceptability of economic evaluation directs us away from such an approach. Instead, it is seen that there is substantive disjuncture between researchers and decision makers in terms of objective functions, institutional contexts and professional value systems. The literature in this area charts a growing realization of the conditions and contingencies of the health decision making environment. There has been a move away from an assumption of policy involving simple, rational choices to a realization of an interactive process with competing aims and considerations. Issues such as system rigidities, value conflict and competing objectives are difficult to overcome as this requires broader changes to the macropolitical and institutional environment of health care policy making.

A More ‘Positive’ Approach?

In contrast to the default normative approach taken in economic evaluation in health care, a positive analysis would simply generate information on the likely costs and benefits associated with alternative courses of action. Dowie (1996) describes such research as knowledge-generating, as opposed to decision-making. A distinguishing feature of positive analyses is that there is no a priori objective specified. Such analyses might involve the use of profile or cost consequence approaches to reporting results. This is where the predicted impacts of the intervention in question are detailed, possibly in a tabular form, without any attempt to summarize or aggregate across different dimensions. Kernick (2000) is a strong advocate of such an approach:

Cost consequence analysis emphasises the importance of presenting data on costs and benefits in disaggregated form, implying a recognition of the value judgement from decision makers and an acceptance that benefits and disadvantages cannot always be condensed into a single output measure.

Kernick (2000, p.314)

Traditional economic evaluation work evokes a conception of research utilization defined by Weiss (1979) as the ‘problem-solving model’. In this model empirical and analytical evidence is applied directly to a policy problem and supplies the information required to enable the optimal solution to be identified and implemented. For the problem-solving model to apply, the recommendations of a normative economic analysis, for example, would need to be implemented directly by the relevant policy maker and would be seen as the driving force behind the decision reached. As Weiss (1979) indicates:

when this imagery of research utilisation prevails, the usual prescription for improving the use of research is to improve the means of communication to policy makers.

Weiss (1979, p.428)

However, there are a number of weaknesses with the problem-solving model. For example, some have called into question the likelihood of establishing a single, agreed objective. Although many economists may adopt a normative view that the problem-solving model has much to recommend it, it has to be recognized that the real world rarely lives up to that aspiration. For example, in a review of UK studies into factors affecting evidence-based policy-making, Elliott and Popay (2000) conclude that many policy problems are often intractable or not clearly enough delineated to be tackled directly and comprehensively. They also find that research evidence is frequently unlikely to be sufficiently clear-cut and unambiguous to translate directly into policy. They further call into question the assumption of a straightforward policy process in the problem-solving model and conclude that dissemination of health services research results has been hampered by a preoccupation with the rational, problem-solving model. In these circumstances, Weiss’s ‘interactive’ model of research utilization, in which policy formulation is understood as a nonlinear process involving multiple agents and influences, has far greater descriptive validity.

The distinction between problem-solving and interactive models of research utilization correlates, to some extent, with the binary of normative and positive approaches to health economic analyses. The requirement for agreement of purpose and objectives between researcher and decision maker is a defining premise of both normative economic evaluation and problem-solving conceptions of policy research utilization. Positive approaches to evaluation, however, may be seen as more helpful to decision makers involved in policy processes that are marked by interaction and competing or multiple objectives. An understanding by the analyst of the nature of the policy environment into which the analyses are being placed is required. This will allow more informed choice to be made concerning the appropriate approaches to analysis and presentation of results.

In highlighting the failure of health economists to consider issues of the acceptability of the data they generate, Kernick (2000) argues that:

The history of any movement determines its structure and the way in which meaning is generated within it. Health economists tend to adopt a straightforward view … Just as the NHS was configured in part to reflect the needs of doctors and not patients, the development of health economics was set to reflect the requirements of the academic discipline and not the realities of the emerging healthcare environment.

Kernick (2000, p.312)

Conclusions

The driving force behind the push to make more use of economic analyses in health care resource allocation decisions is the desire to make decision processes, and the decisions themselves, more rational. In turn, greater rationality in the system contributes to openness and transparency, and so necessitates that the information on which decisions are based is accessible to a wide audience – the more accessible the information used in decision-making, the easier it is to be inclusive in the decision-making process and the more transparent is the basis on which the decision is made.

This accessibility concern represents one of the challenges to the health economics community in terms of producing evidence that is more reflective of real world practices but also highlights a potential training agenda: clinical and managerial decision makers in health care require some level of expertise and understanding of economic evaluation in order to provide input into the decision making process. Additional areas of focus for health economists include the need to overcome perceived weaknesses in the methods of their analyses, and the need to work with those at the front-line in health care to ensure alignment between the health maximization objectives often assumed in economic analyses and the broad range of other objectives facing decision-makers in reality. That is not to suggest that the decision-maker always ‘knows best’ but analyses based on false assumptions regarding objectives serve no purpose.

Bibliography:

  1. Adang, E., Voordijk, L., van der Wilt, G. and Ament, A. (2005). Cost-effectiveness analysis in relation to budgetary constraints and reallocative restrictions. Health Policy 74, 146–156.
  2. Bryan, S., Williams, I. and McIver, S. (2007). Seeing the NICE side of cost-effectiveness analysis: A qualitative investigation of the use of CEA in NICE technology appraisals. Health Economics 16, 179–193.
  3. Clement, F. M., Harris, A., Li, J. J., et al. (2009). Using effectiveness and cost- effectiveness to make drug coverage decisions. Journal of the American Medical Association 302(13), 1437–1443.
  4. Culyer, A. J. (1973). The economics of social policy. London: Martin Robertson and Company Ltd.
  5. Dowie, J. (1996). The research-practice gap and the role of decision analysis in closing it. Health Care Analysis 4, 5–18.
  6. Elliott, H. and Popay, J. (2000). How are policy makers using evidence? Models of research utilisation and local NHS policy making. Journal of Epidemiology and Community Health 54, 461–468.
  7. Innvaer, S., Vist, G., Trommald, M. and Oxman, A. D. (2002). Health policy-makers’ perceptions of their use of evidence: A systematic review. Journal of Health Services Research and Policy 7(4), 239–245.
  8. Kernick, D. P. (2000). The impact of health economics on healthcare delivery. PharmacoEconomics 18(4), 311–315.
  9. Lavis, J. N., Robertson, D., Woodside, J. M., McLeod, B. and Abelson, J. (2003). How can research organizations more effectively transfer research knowledge to decision makers? Milbank Quarterly 81(2), 221–248.
  10. McDonald, R. (2002). Using health economics in health services: Rationing rationally? 1st ed. Buckingham: Open University Press.
  11. National Institute for Health & Clinical Excellence (2003). The clinical effectiveness and cost effectiveness of anakinra for rheumatoid arthritis. London, UK: NICE.
  12. National Institute for Health & Clinical Excellence (2006). Statins for the prevention of cardiovascular events in patients at increased risk of developing cardiovascular disease or those with established cardiovascular disease. London, UK: NICE.
  13. Rawlins, M. D. and Culyer, A. J. (2004). National Institute for Clinical Excellence and its value judgments. British Medical Journal 329, 224–227.
  14. Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review 39(5), 426–431.
  15. Williams, I., McIver, S., Moore, D. and Bryan, S. (2008). The use of economic evaluations in NHS decision-making: A review and empirical investigation. Health Technology Assessment 12(7), 1–193.
  16. Devlin, N. and Parkin, D. (2004). Does NICE have a cost-effectiveness threshold and what other factors influence its decisions? A binary choice analysis. Health Economics 13, 437–452.
  17. Hoffmann, C., Stoykova, B. A., Nixon, J., et al. (2002). Do health-care decision makers find economic evaluations useful? The findings of Focus Group Research in UK Health Authorities. Value in Health 5(2), 71–78.
  18. Schlander, M. (2008). The use of cost-effectiveness by the National Institute for Health and Clinical Excellence (NICE): No(t yet an) exemplar of a deliberative process. Journal of Medical Ethics 34, 534–539.
  19. von der Schulenburg, J. M. G. (2000). The influence of economic evaluation studies on health care decision-making. Oxford: IOS Press.
  20. Spath, H. M., Allenet, B. and Carrere, M. O. (2000). Using economic information in the health sector: The choice of which treatments to include in hospital treatment portfolios. Journal d’Economie Medicale 18(3–4), 147–161.
  21. Williams, I. and Bryan, S. (2007). Understanding the limited impact of economic evaluation in healthcare resource allocation: A conceptual framework. Health Policy 80, 135–143.
  22. Williams, I., Bryan, S. and McIver, S. (2007). Health technology coverage decisions: Evidence From The N.I.C.E. ‘experiment’ in the use of cost-effectiveness analysis. Journal of Health Services Research and Policy 12(2), 73–79.
  23. https://www.cadth.ca/ The Canadian Agency for Drugs and Technologies in Health.
  24. https://www.york.ac.uk/crd/ The Centre for Reviews and Dissemination at the University of York.
  25. https://www.tuftsmedicalcenter.org/research-clinical-trials/institutes-centers-labs/center-for-evaluation-of-value-and-risk-in-health The Center for the Evaluation of Value and Risk in Health at Tufts University Medical Center.
  26. https://www.nihr.ac.uk/explore-nihr/funding-programmes/health-technology-assessment.htm The Health Technology Assessment Programme of the National Institute for Health Research.
  27. https://www.nice.org.uk/ The National Institute for Health & Clinical Excellence.