Value of Information Analysis

Policy Relevance

The general issue of balancing the value of evidence about the performance of a technology against the value of providing patients with access to it can be seen as central to a number of policy questions in many different types of healthcare systems (HCS). For example, decisions about approval or reimbursement of new drugs are increasingly being made close to their launch, when the evidence base to support their use is least mature and when there may be substantial uncertainty surrounding their cost effectiveness. In these circumstances, further evidence may be particularly valuable as it will lead to better decisions about the use of the technology, which would improve patient outcomes and/or reduce resource costs. Therefore, it is useful to establish the key principles of what assessments are needed to decide whether there is sufficient evidence to support reimbursement or recommending the use of a new drug, whether it should be approved with additional evidence sought, or whether its widespread use should be restricted until the additional evidence is available. Such assessments can help to inform the questions posed by coverage with evidence development and managed entry in many health-care systems, including restricting approval to ‘only in research’, which is part of the remit of the UK National Institute for Health and Clinical Excellence (NICE).

If there are constraints on the growth of health-care expenditure, then approving a more costly technology will displace other activities that would have otherwise generated improvements in health for other patients, as well as other socially valuable activities outside health care. If the objective of a HCS is to improve health outcomes across the population it serves then, even if a technology is expected to be more effective, the health gained must be compared to the health expected to be forgone elsewhere as a consequence of additional costs, i.e., whether the technology is expected to be cost effective and offer positive net health benefits (NHB) (other effects, e.g., on consumption, can also be expressed as their health equivalent). An assessment of expected cost effectiveness or NHB relies on evidence about effectiveness, impact on long-term overall health and potential harms, as well as additional health-care costs together with some assessment of what health is likely to be forgone as a consequence (the cost-effectiveness threshold).

Such assessments are inevitably uncertain and, without sufficient and good quality evidence, decisions about the use of technologies will also be uncertain. There will be a chance that the resources committed by the approval of a new technology may be wasted if the expected positive net health effects are not realized. Equally, rejecting a new technology will risk failing to provide access to a valuable intervention if the net health effects prove to be greater than expected. Therefore, if the social objective is to improve overall health for both current and future patients then the need for and the value of additional evidence is an important consideration when making decisions about the use of technologies. This is even more critical once it is recognized that the approval of a technology for widespread use might reduce the prospects of conducting the type of research that would provide the evidence needed. In these circumstances there will be a trade-off between the net health effects for current patients from early access to a cost-effective technology and the health benefits for future patients from withholding approval until valuable research has been conducted.

Research also consumes valuable resources which could have been devoted to patient care or to other more valuable research priorities. Also, uncertain events in the near or distant future may change the value of the technology and the need for evidence (e.g., prices of existing technologies, the entry of new technologies, and other evidence about the performance of technologies as well as the natural history of disease). In addition, implementing a decision to approve a new technology may commit resources which cannot subsequently be recovered if a decision to approve or reimburse changes in the future (e.g., due to research reporting). Therefore, appropriate research and coverage decisions will depend on whether the expected benefits of research are likely to exceed the costs and whether any benefits of early approval or reimbursement are greater than those of withholding approval until additional research is conducted or other sources of uncertainty are resolved. Methods of analysis which provide a quantitative assessment of the potential benefits of acquiring further evidence allow research and reimbursement decisions to be addressed explicitly and accountably.

The Value Of Additional Evidence

The principles of value of information analysis have a firm foundation in statistical decision theory, with closely related concepts and methods in mathematics and financial economics and diverse applications in business decisions, engineering, environmental risk analysis, and financial and environmental economics. There are now many applications in health, some commissioned to directly inform policy and others published in specialist as well as general medical and health policy journals. Most commonly these methods of analysis have been applied in the context of probabilistic decision analytic models used to estimate the expected cost effectiveness of alternative interventions. However, the same type of analysis can also be used to extend standard methods of systematic review and meta-analysis. Indeed, the principles of value of information analysis can also be used as a conceptual framework for qualitative assessment of how important uncertainty might be and of the relative priority of alternative research topics and proposals.

Additional evidence is valuable because it can improve patient outcomes by resolving existing uncertainty about the cost effectiveness of the interventions available, thereby informing treatment choice for subsequent patients. For example, the balance of existing evidence might suggest that a particular intervention is expected to be cost effective and offer the greatest NHB, but there will be a chance that others are in fact more cost effective, offering higher NHB to the HCS. If treatment choice is based on existing evidence then there will be a chance that other interventions would have improved overall health outcomes to a greater extent, i.e., there are adverse net health consequences associated with uncertainty. The scale of uncertainty can be indicated by the results of probabilistic analysis of a decision analytic model and/or based on the results of a meta-analysis of the evidence relevant to the choice between interventions. The expected consequences of this uncertainty can be expressed in terms of NHB or the equivalent HCS resources that would be required to generate the same net health effects. These expected consequences can be interpreted as an estimate of the NHB that could potentially be gained per patient if the uncertainty surrounding their treatment choice could be resolved, i.e., it indicates an upper bound on the expected NHB of further research.

Expected Value Of Perfect Information

More formally, if there are alternative interventions (j), where the NHB of each depends on uncertain parameters that may take a range of possible values (θ), the best decision based on the information currently available would be to choose the intervention that is expected to offer the maximum net benefit (i.e., max_j E_θ NHB(j, θ)). If the uncertainty could be fully resolved (with perfect information), the decision maker would know which value θ would take before choosing between the alternative interventions. They would be able to select the intervention that provides the maximum NHB for each particular value of θ (i.e., max_j NHB(j, θ)). However, when a decision about whether further research should be undertaken is made, the results (the true values of θ) are necessarily unknown. Therefore, the expected NHB of a decision taken when uncertainties are fully resolved (with perfect information) is found by averaging these maximum net benefits over all the possible results of research that would provide perfect information (over the joint distribution of θ): E_θ max_j NHB(j, θ). The expected value of perfect information (EVPI) for an individual patient is simply the difference between the expected value of the decision made with perfect information about the uncertain parameters θ and the value of the decision made on the basis of existing evidence (EVPI = E_θ max_j NHB(j, θ) − max_j E_θ NHB(j, θ)).
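
To make this calculation concrete, the following minimal Monte Carlo sketch (in Python) computes per-patient EVPI for a hypothetical two-intervention decision. The distributions for the incremental effect and cost, and the cost-effectiveness threshold, are illustrative assumptions rather than values from any particular study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 100_000

# Hypothetical example: the NHB (in QALYs) of a new technology relative to
# current care depends on an uncertain incremental effect and cost.
# All parameter values below are illustrative assumptions.
incr_effect = rng.normal(0.5, 0.3, size=n_sim)     # incremental QALYs
incr_cost = rng.normal(10_000, 2_000, size=n_sim)  # incremental cost (GBP)
threshold = 20_000                                 # assumed threshold (GBP/QALY)

# NHB(j, theta): column 0 is current care (reference, NHB = 0),
# column 1 is the new technology.
nhb = np.column_stack([np.zeros(n_sim), incr_effect - incr_cost / threshold])

# Decision with current information: max_j E_theta NHB(j, theta)
value_current = nhb.mean(axis=0).max()
# Decision with perfect information: E_theta max_j NHB(j, theta)
value_perfect = nhb.max(axis=1).mean()

evpi = value_perfect - value_current
print(f"Per-patient EVPI: {evpi:.4f} QALYs")
```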

Once the results of research are available they can be used to inform treatment choice for all subsequent patients. Therefore, the potential expected benefit of research (EVPI) needs to be expressed for the population of patients that can benefit from it. The population EVPI will increase with the size of the patient population whose treatment choice can be informed by additional evidence and the time over which evidence about the cost effectiveness of these interventions is expected to be useful, but will tend to decline with the time that research is likely to take to be commissioned, conducted and report.
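
A simple sketch of how per-patient EVPI might be scaled to the relevant population follows; the incidence, time horizon, discount rate, and time to report in the example call are all assumptions made purely for illustration.

```python
import numpy as np

def population_evpi(evpi_per_patient, annual_incidence, horizon_years,
                    discount_rate=0.035, years_until_report=0):
    """Scale per-patient EVPI to the population research can inform.

    Cohorts treated before the research reports cannot benefit from its
    results and are excluded; later cohorts are discounted back to today.
    """
    years = np.arange(horizon_years)
    informed = years >= years_until_report     # cohorts the evidence can reach
    weights = (1.0 + discount_rate) ** -years  # discount future cohorts
    return evpi_per_patient * annual_incidence * (weights * informed).sum()

# e.g., 0.02 QALYs per patient, 10,000 patients per year, a 10-year horizon,
# and research that takes 3 years to commission, conduct, and report
print(f"{population_evpi(0.02, 10_000, 10, years_until_report=3):.0f} QALYs")
```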

Time Horizons For Research Decisions

The information generated by research will not be valuable indefinitely, because other changes occur over time which will have an impact on the future value of the information generated by research that can be commissioned today. For example, over time the prices of the alternative technologies are likely to change (e.g., patent expiry of branded drugs and the entry of generic versions) and new and more effective interventions will become available which will eventually make current comparators obsolete, so information about their effectiveness will no longer be relevant to future clinical practice. Other information may also become available in the future which will also impact on the value of the evidence generated by research that can be commissioned today. For example, other evaluative research might be (or may already have been) commissioned by other bodies or HCS that may resolve much of the uncertainty anyway. Also, this research or other more basic science may fundamentally change our understanding of disease processes and mechanisms of effect. Finally, as more information about individual effects is acquired through greater understanding of the reasons for variability in patient outcomes, the value of evidence that can resolve uncertainty in expected or average effects for the patient population and/or its subpopulations will decline (see Section Uncertainty, Variability, and Individualized Care). For all these reasons there will be a finite time horizon for the expected benefits of additional evidence, i.e., there will be a point at which the additional evidence that can be acquired by commissioning research today will no longer be valuable.

The actual time horizon for a particular research decision is unknown, because it is a proxy for a complex and uncertain process of future changes. Nonetheless, some judgment, whether made implicitly or explicitly, is unavoidable when making decisions about research priorities. Some assessment is possible based on historical evidence and judgments about whether a particular area is more likely to see earlier patent expiration, future innovations, other evaluative research, and the development of individualized care (e.g., where diagnostic technologies, applications of genomics, and the development of evidence-based algorithms are rapidly developing). Information can also be acquired about trials that are already planned and underway around the world (e.g., various trial registries) and about future innovations from registered patents and/or phase I and II trials, as well as licensing applications, combined with historical evidence on the probability of approval and diffusion. For these reasons, an assessment of an appropriate time horizon may differ across different clinical areas and specific research proposals. The incidence of patients who can benefit from the additional evidence may also change over time, although not necessarily decline, as other types of effective health care change competing risks. However, in some areas recent innovations might suggest a predictable decline, e.g., the decline in the incidence of cervical cancer following the development of the HPV vaccine.

Research Prioritization Decisions

Two questions are posed when considering whether further research should be prioritized and commissioned: Are the potential expected NHB of additional evidence (population EVPI) sufficient to regard the type of research likely to be required as potentially worthwhile; and should it be prioritized over other research that could be commissioned with the same resources? Of course, these assessments require some consideration of the period of time over which the additional evidence generated by research is likely to be relevant; as well as the time likely to be taken for proposed research to be commissioned, conducted and report.

One way to address the question is to ask whether the HCS could generate similar expected NHB more effectively elsewhere, or equivalently whether the costs of the research would generate more NHB if these resources were made available to the HCS to provide health care. Very recent work in the UK has estimated the relationship between changes in NHS expenditure and health outcomes. This work suggests that the NHS spends approximately £75,000 to avoid one premature death, £25,000 to gain one life-year, and somewhat less than £20,000 to gain one quality-adjusted life-year (QALY). Using these estimates, proposed research that, for example, costs £2 million could instead have been used to avoid 27 deaths and generate more than 100 QALYs elsewhere in the NHS. If these opportunity costs of research are substantially less than the expected benefits (population EVPI), then it would suggest that the proposed research is potentially worthwhile.
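
The arithmetic behind this illustration is straightforward and can be reproduced directly (figures as quoted above):

```python
research_cost = 2_000_000    # proposed research budget (GBP)
per_death_averted = 75_000   # approximate NHS spend to avoid one premature death
per_qaly = 20_000            # NHS spend per QALY gained ("somewhat less" in text)

print(f"{research_cost / per_death_averted:.1f} deaths")  # ~26.7, i.e., about 27
print(f"{research_cost / per_qaly:.0f} QALYs")            # 100 at GBP 20,000, so
                                                          # more at a lower cost/QALY
```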

However, most research funders have limited resources (with constraints relevant to a budgetary period) and cannot draw directly on the other (or future) resources of the HCS. Therefore, even if the population EVPI of proposed research exceeds the opportunity costs it is possible that other research may be even more valuable. If similar analysis is conducted for all proposals competing for limited research resources it does become possible to identify a short list of those which are likely to be worthwhile and then select from these those that are likely to offer the greatest value.

Research And Reimbursement Decisions

It should be noted that the population EVPI represents only the potential or maximum expected benefits of actual research that could be conducted, for two reasons: no research, no matter how large the sample size or how assiduously conducted, can resolve all uncertainty and provide perfect information; and there are usually a large number of uncertain parameters that contribute to θ and are relevant to differences in the NHB of the alternative interventions, and most research designs will not provide information about all of them. Nonetheless, EVPI does provide an upper bound to the value of conducting further research, so when compared with the opportunity cost of conducting research (e.g., the health equivalent of the resources required) it can provide a necessary condition for a decision to conduct further research while the intervention is approved for widespread use. It also provides a sufficient condition for early approval when approval would mean that the type of further research needed would not be possible or would be too costly to be worthwhile (e.g., because there would be a lack of incentives for manufacturers, or further randomized trials would not be regarded as ethical and/or would be unable to recruit). In these circumstances the population EVPI represents an upper bound on the benefits to future patients that would be forgone, or the opportunity costs of early approval based on existing evidence.

What Type Of Evidence?

The type of analysis described above indicates the potential value of resolving all the uncertainty surrounding the choice between the alternative interventions. However, it would be useful to have an indication of which sources of uncertainty are most important and what type of additional evidence would be most valuable. This can start to indicate the type of research design that is likely to be required, whether the type of research required will be possible once a new technology is approved for widespread use, as well as the sequence in which different studies might be conducted.

Expected Value Of Perfect Parameter Information

The potential expected benefits of resolving the different sources of uncertainty that determine the NHB of the alternative interventions can be established using the same principles. For example, if the NHB of each intervention (j) depends on two (groups of) uncertain parameters (θ1 and θ2) that may take a range of possible values, the best decision based on current information is still to choose the intervention that is expected to offer the maximum net benefit (i.e., max_j E_θ1,θ2 NHB(j, θ1, θ2)). If the uncertainty associated with only one of these groups of parameters (θ1) could be fully resolved (i.e., with perfect parameter information), the decision maker would know which value θ1 would take before choosing between the alternative interventions. However, the values of the other parameters (θ2) remain uncertain, so the best they can do is to select the intervention that provides the maximum expected NHB for each value of θ1 (i.e., max_j E_θ2|θ1 NHB(j, θ1, θ2)). Which particular value θ1 will take is unknown before research is conducted, so the expected NHB when the uncertainty associated with θ1 is fully resolved is the average of these maximum net benefits over all the possible values of θ1 (i.e., E_θ1 max_j E_θ2|θ1 NHB(j, θ1, θ2)). The expected value of perfect parameter information about θ1 (EVPPI_θ1) is simply the difference between the expected value of the decisions made with perfect information about θ1 and a decision based on existing evidence (EVPPI_θ1 = E_θ1 max_j E_θ2|θ1 NHB(j, θ1, θ2) − max_j E_θ1,θ2 NHB(j, θ1, θ2)).

It should be noted that this describes a general solution for nonlinear models. However, it is computationally intensive because it requires an inner loop of simulation to estimate the expected NHB for each value of θ1 (E_θ2|θ1 NHB(j, θ1, θ2)), as well as an outer loop of simulation to sample the possible values θ1 could take. The computational requirements can be somewhat simplified if there is a multilinear relationship between the parameters and net benefit. If the model is multilinear in θ2, the parameters in θ2 are uncorrelated with each other, and θ1 and θ2 are independent, then the inner loop of simulation is unnecessary (using the mean values of θ2 will return the correct estimate of E_θ2|θ1 NHB(j, θ1, θ2)).
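
A minimal sketch of the nested (inner and outer loop) Monte Carlo calculation is given below, reusing the hypothetical example above, with θ1 taken to be the incremental effect and θ2 the incremental cost, assumed independent. Because this toy model is linear in θ2, the inner loop is shown only for exposition and could, as just noted, be replaced by the mean value of θ2.

```python
import numpy as np

rng = np.random.default_rng(1)
n_outer, n_inner = 1_000, 1_000
threshold = 20_000  # assumed cost-effectiveness threshold (GBP/QALY)

def nhb(effect, cost):
    """NHB of two alternatives; column 0 is current care (reference)."""
    return np.stack([np.zeros_like(effect), effect - cost / threshold], axis=-1)

# Expected NHB with current information, from one large probabilistic sample
effect_all = rng.normal(0.5, 0.3, size=n_outer * n_inner)
cost_all = rng.normal(10_000, 2_000, size=n_outer * n_inner)
value_current = nhb(effect_all, cost_all).mean(axis=0).max()

# Outer loop samples theta1 (the incremental effect, the target of research);
# the inner loop integrates out theta2 (the incremental cost), which remains
# uncertain. Independence of theta1 and theta2 is assumed throughout.
outer_values = np.empty(n_outer)
for i in range(n_outer):
    theta1 = rng.normal(0.5, 0.3)                     # one possible resolved value
    theta2 = rng.normal(10_000, 2_000, size=n_inner)  # residual uncertainty
    outer_values[i] = nhb(np.full(n_inner, theta1), theta2).mean(axis=0).max()

evppi = outer_values.mean() - value_current
print(f"Per-patient EVPPI for the incremental effect: {evppi:.4f} QALYs")
```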

Sequence Of Research

This type of analysis can be used to focus research on the type of evidence that will be most important by identifying those parameters for which more precise estimates would be most valuable. In some circumstances, this will indicate which endpoints should be included in further experimental research. In other circumstances, it may focus research on getting more precise estimates of particular parameters that may not necessarily require an experimental design and can be provided relatively quickly. This type of analysis can be extended to consider the sequence in which different types of study might be conducted, e.g., whether no research, research on θ1 and θ2 simultaneously, research on θ1 first and then θ2 depending on the results of the θ1 research, or research on θ2 first and then θ1 depending on the results of the θ2 research, would be the most valuable research decision.

Informing Research Design

Identifying which sources of uncertainty are most important and what type of evidence is likely to be most valuable is useful in two respects. It can help to identify the type of research design that is likely to be required (e.g., a randomized controlled trial (RCT) may be needed to avoid the risk of selection bias if additional evidence about the relative effect of an intervention is required) and identify the most important endpoints to include in any particular research design. It can also be used to consider whether there are other types of research that could be conducted relatively quickly (and cheaply) before more lengthy and expensive research (e.g., a large RCT) is really needed (i.e., the sequence of research that might be most effective).

Estimates of EVPI and EVPPI only provide a necessary condition for conducting further research. To establish a sufficient condition to decide if further research will be worthwhile and identify efficient research design, estimates of the expected benefits and the cost of sample information are required.

The same value of information analysis framework can be extended to establish the expected value of sample information (EVSI) for particular research designs.

Expected Value Of Sample Information

For example, a sample of n observations on θ will provide a sample result D. If the sample result were known, the best decision would be to choose the alternative with the maximum expected net benefit when the estimates of the NHB of each alternative are based on the sample result (averaged over the posterior distribution of net benefit given the sample result D). However, which particular sample result will be realized when the research reports is unknown. The expected value of acquiring a sample of n on θ is then found by averaging these maximum expected net benefits over the distribution of possible sample results, D, i.e., the expectation over the predictive distribution of the sample results D conditional on θ, averaged over the possible values of θ (the prior distribution of θ). The additional expected benefit of sample information (EVSI) is simply the difference between the expected value of a decision made with sample information and the expected value with current information.

The EVSI calculations require the likelihood for the data to be conjugate with the prior so that there is an analytic solution to combining the prior distribution of θ with the predicted sample result (D) to form a predicted posterior. If the prior and likelihood are not conjugate, the computational burden of using numerical methods to form predicted posteriors is considerable. Even with conjugacy, EVSI still requires intensive computation if the relationship between the sampled parameters (endpoints in the research design) and differences in the NHB of the alternatives is nonlinear.
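
The sketch below illustrates the conjugate normal-normal case, where preposterior analysis gives the predictive distribution of the posterior mean in closed form; the prior, the sampling variance, and the proposed sample size are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sim = 50_000

# Prior on the incremental NHB per patient (QALYs), assumed normal so that
# normally distributed sample data are conjugate with the prior.
mu0, sd0 = 0.05, 0.20   # assumed prior mean and standard deviation
sigma = 1.0             # assumed individual-level SD of the sampled endpoint
n = 200                 # proposed sample size (illustrative)

# Preposterior analysis: for the normal-normal case the predictive
# distribution of the posterior mean is normal with mean mu0 and variance
# equal to the prior variance minus the posterior variance.
post_var = 1.0 / (1.0 / sd0**2 + n / sigma**2)
pred_sd = np.sqrt(sd0**2 - post_var)
posterior_means = rng.normal(mu0, pred_sd, size=n_sim)

# With current information the comparator (NHB = 0) sets the hurdle: adopt
# the new intervention only if its expected incremental NHB is positive.
value_current = max(mu0, 0.0)
# With sample information the decision is made after each predicted result.
value_sample = np.maximum(posterior_means, 0.0).mean()

print(f"Per-patient EVSI for n = {n}: {value_sample - value_current:.4f} QALYs")
```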

Optimal Sample Size And Other Aspects Of Research Design

To establish the optimal sample size for a particular type of study, these calculations need to be repeated for a range of sample sizes. The difference between the EVSI and the costs of acquiring the sample information is the expected net benefit of sample information (ENBS), or the societal payoff to research. The optimal sample size is simply the value of n that generates the maximum ENBS. As well as sample size, the same type of analysis can be used to evaluate a range of different dimensions of research design, such as which endpoints to include, which interventions should be compared, and the length of follow-up. The best design is the one that provides the greatest ENBS. The same type of analysis can also be used to identify whether a combination of different types of study might be required (an optimal portfolio of research). It should be recognized that the costs of research include not only the resources consumed in conducting it but also the opportunity costs (NHB forgone) falling on those patients enrolled in the research and those whose treatment choice can be informed once the research reports. Therefore, optimal research design will depend, among other things, on whether or not patients have access to the new technology while the research is being conducted and how long it will take before it reports (determined by length of follow-up and recruitment rates). It is also possible to take account of the likely implementation of research findings in research design, e.g., if an impact on clinical practice depends on the trial reporting a statistically significant result for a particular effect size (and there are no other effective ways to ensure implementation), this will influence optimal sample size as well.
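
A sketch of this search over sample sizes, continuing the assumed normal-normal example above, is given below; the population size and the research costs (expressed here in QALY equivalents) are again purely illustrative.

```python
import numpy as np

def evsi_per_patient(n, mu0=0.05, sd0=0.20, sigma=1.0, n_sim=50_000, seed=7):
    """EVSI for a sample of size n under the assumed normal-normal model.

    The same seed is reused across sample sizes (common random numbers)
    so that the ENBS curve is smooth.
    """
    rng = np.random.default_rng(seed)
    post_var = 1.0 / (1.0 / sd0**2 + n / sigma**2)
    pred_sd = np.sqrt(sd0**2 - post_var)
    posterior_means = rng.normal(mu0, pred_sd, size=n_sim)
    return np.maximum(posterior_means, 0.0).mean() - max(mu0, 0.0)

population = 50_000      # discounted patients the evidence can inform (assumed)
fixed_cost = 25.0        # fixed cost of the study, in QALY equivalents (assumed)
cost_per_patient = 0.1   # marginal cost per enrolled patient, QALYs (assumed)

sample_sizes = np.array([50, 100, 200, 400, 800, 1_600])
enbs = np.array([population * evsi_per_patient(n)
                 - fixed_cost - cost_per_patient * n for n in sample_sizes])
print(f"ENBS-maximizing sample size: {sample_sizes[enbs.argmax()]}")
```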

The Value Of Commissioned Research

Research decisions require an assessment of the expected potential value of future research before the actual results that will be reported in the future are known. Therefore, using hindsight to inform research prioritization decisions is inappropriate for two reasons: (1) such an (ex post) assessment cannot directly address the (ex ante) question posed in research prioritization decisions; and (2) assessing the (ex post) value of research with hindsight is potentially misleading if used to judge whether or not the original (ex ante) decision to prioritize and commission it was appropriate. This is because the findings of research are only one realization of the uncertainty about potential results that could have been found when the decision to prioritize and commission research must be taken.

It is useful and instructive, however, to reconsider the analysis set out above once the results of research become available: updating the synthesis of evidence, reestimating the NHB of the alternative interventions, and updating the value of information analysis to consider whether the research was indeed definitive (the potential benefits of acquiring additional evidence do not justify the costs of further research) or whether more or different types of evidence might be required. Therefore, value of information analysis can also provide the analytic framework to consider when to stop a clinical trial, how to allocate patients between the arms of a trial as evidence accumulates (sequential and group sequential designs), and when other types of evidence might become more important as the results of research are realized over time.

Value Of Implementation

Overall health outcomes can also be improved by ensuring that the accumulating findings of research are implemented and have an impact on clinical practice. Indeed, the potential improvements in health outcome by encouraging the implementation of what existing evidence suggests is the most cost-effective intervention may well exceed the potential improvements in NHB through conducting further research.

The distinction between these two very different ways to improve overall health outcomes is important because, although the results of additional research may influence clinical practice and may contribute to the implementation of research findings, it is certainly not the only, or necessarily the most effective, way to do so. Insofar as there are other more effective mechanisms (e.g., more effective dissemination of existing evidence) or policies (e.g., those that offer incentives and/or sanctions), then continuing to conduct research in order to influence clinical practice, rather than because there is real value in acquiring additional evidence itself, would seem inappropriate, because research resources could have been used elsewhere to acquire additional evidence in areas where it would have offered greater potential NHB.

Clearly, the potential health benefits of conducting further research will only be realized (health outcomes actually improve and/or resources are saved) if the findings of the research do indeed have an impact on clinical practice. Recognizing that there are very many ways to influence the implementation of what current evidence suggests, other than by conducting more research, is important when considering other policies to improve implementation of research findings instead of, or in combination with, conducting further research. However, the importance of implementing the findings of proposed research might influence consideration of its priority and research design in a number of ways. If it is very unlikely that the findings of proposed research will be implemented and other mechanisms are unlikely to be effective or used, then other areas of research where smaller potential benefits are more likely to be realized might be prioritized. If the impact of research on clinical practice is likely to require highly statistically significant results, this will influence the design, cost, and time taken for research to report, and therefore its relative priority. It may be that a larger clinical difference in effectiveness would need to be demonstrated before research would have an impact on clinical practice. This will also tend to reduce the potential benefits of further research, because large differences are less likely to be found than small ones.

Decisions Based On The Balance Of Existing Evidence?

It should be recognized that restricting attention to whether or not the result of a clinical trial, a meta-analysis of existing trials, or the results of a cost-effectiveness analysis offer statistically significant results is unhelpful for a number of reasons: it provides only a partial summary of the uncertainty associated with the cost effectiveness of an intervention, and it indicates neither the importance of the uncertainty for overall patient outcomes nor the potential gains in NHB that might be expected from acquiring additional evidence that could resolve it. Of course, failing to implement an intervention which is expected to offer the greatest NHB will impose unnecessary opportunity costs. This suggests that always waiting to implement research findings until the traditional rules of statistical significance are satisfied (whether based on frequentist hypothesis testing or on Bayesian benchmark error probabilities) may well come at some considerable cost to patient outcomes and HCS resources.

However, once uncertainty and the value of additional evidence is recognized there are a number of issues that need to be considered before decisions to approve or reimburse a new technology can be based on the balance of accumulated evidence, i.e., expected cost effectiveness and expected NHB:

  1. As already discussed, if early approval or reimbursement means that the type of research required to generate the evidence needed is impossible or more difficult to conduct then the expected value of additional evidence that will be forgone by approval needs to be considered alongside the expected benefits of early implementation.
  2. Insofar as widespread use of an intervention will be difficult to reverse if subsequent research demonstrates that it is not cost effective (e.g., where reversal would require resources and effort as well as take time to achieve), then account must be taken of the consequences of this possibility (i.e., the opportunity costs associated with the chance that research finds the intervention is not cost effective but that these findings cannot be implemented immediately by withdrawing its use).
  3. If an intervention offers longer-term benefits which will ultimately justify initial treatment costs (e.g., any effect on mortality risk), its approval or reimbursement is likely to commit initial losses of NHB compensated by later expected gains. In these circumstances its approval or reimbursement commits irrecoverable opportunity costs for each patient treated. If the uncertainty about its cost-effectiveness might be resolved in the future (e.g., due to commissioned research reporting), then it may be better to withhold approval or reimbursement until the research findings are available, even if the research could be conducted while the technology is in widespread use. This is more likely to be the case when a decision to delay initiation of treatment is possible and associated with more limited health impacts (e.g., in chronic and stable conditions).
  4. There is a common and quite natural aversion to iatrogenic effects, i.e., health lost through adopting an intervention not in widespread use tends to be regarded as of greater concern than the same health lost through continuing to use existing interventions that are less effective than others available. However, it should be noted that the consequences for patients are symmetrical and this ‘aversion’ also depends entirely on which intervention happened to have diffused into common clinical practice first.

These considerations can inform an assessment of whether more health might be gained through efforts to implement the findings of existing research or by acquiring more evidence to inform which intervention is most cost effective. Although there are many circumstances where approval or reimbursement should not be simply based on the balance of evidence (i.e., expected cost effectiveness or expected NHB), it should be noted that these considerations are likely to differ between decisions and certainly do not lead to a single ‘rule’ based on notions of the statistical significance of the results of a particular study, a meta-analysis of existing studies, or the results of a cost-effectiveness analysis. They can be, and have been, dealt with explicitly and quantitatively within well conducted value of information analysis.

Uncertainty, Variability, And Individualized Care

It is important to make a clear distinction between uncertainty, variability, and heterogeneity. Uncertainty refers to the fact that we do not know what the expected effects will be of using an intervention in a particular population of patients (i.e., the NHB of an intervention on average). This remains the case even if all patients within this population have the same observed characteristics. Additional evidence can reduce uncertainty and provide a more precise estimate of the expected effects in the whole population or within subpopulations that might be defined based on different observed characteristics. Variability refers to the fact that individual responses to an intervention will differ within the population or even in a subpopulation of patients with the same observed characteristics. Therefore, this natural variation in responses cannot be reduced by acquiring additional evidence about the expected or average effect. Heterogeneity refers to those individual differences in response that can be associated with differences in observed characteristics, i.e., where the sources of natural variability can be identified and understood. As more becomes known about the sources of variability (as variability is turned into heterogeneity) the patient population can be partitioned into subpopulations or subgroups, each with a different estimate of the expected effect of the intervention and the uncertainty associated with it. Ultimately, as more sources of variability become known the subpopulations become individual patients, i.e., individualized care.

Overall patient outcomes can be improved by either acquiring additional evidence to resolve the uncertainty in the expected effects of an intervention, and/or by understanding the sources of variability and dividing the population into finer subgroups where the intervention will be expected to be cost effective in some but not in others. However, a greater understanding of heterogeneity also has an impact on the value of additional evidence. As more subgroups can be defined the precision of the estimates of effect is necessarily reduced (the same amount of evidence offers fewer observations in each subgroup). However, the uncertainty about which intervention is most cost effective may be reduced in some (e.g., where it is particularly effective or positively harmful), but increase in others. Therefore, the expected consequences of uncertainty per patient, or value of additional evidence per patient may be higher or lower in particular subgroups. The expected value of evidence across the whole population (the sum across all subgroups of the population) may rise or fall. However, in the limit as more sources of variability are observed the value of additional evidence will fall. Indeed, if all sources of variability could be observed then there would be no uncertainty at all.

Value of information analysis can be applied within each subgroup identified based on existing evidence. Conducting an analysis of the expected health benefits of additional evidence by subgroup is useful because it can indicate which types of patient need to be included in any future research design and which could be excluded. Although the potential value of additional evidence about the whole population is simply the sum of the values for each of its subpopulations, the value of acquiring evidence within only one subgroup depends on whether that evidence can inform decisions in others. For example, if subgroups are identified based on differing baseline risks, then evidence about the relative effect of an intervention in one subgroup might also inform the relative effect in others, so the value of research conducted in one of the subgroups should take account of the value it will generate in others. However, evidence about a subgroup-specific baseline risk might not be relevant or offer value in others. In principle, these questions of the exchangeability of evidence can be informed by existing evidence and ought to be reflected in how it is synthesized and the uncertainties characterized.
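
As an illustration, the sketch below computes per-patient EVPI separately in two hypothetical subgroups that share an uncertain relative effect (assumed exchangeable across subgroups) but differ in baseline risk; every number is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 100_000
threshold = 20_000                                # assumed GBP/QALY

rel_risk = rng.normal(0.7, 0.1, size=n_sim)       # shared, uncertain relative risk
baseline = {"high risk": 0.30, "low risk": 0.10}  # subgroup baseline event risks
qaly_per_event, incr_cost = 2.0, 3_000            # assumed event burden and cost

for name, p in baseline.items():
    # NHB of treating relative to not treating in this subgroup
    nhb_treat = p * (1 - rel_risk) * qaly_per_event - incr_cost / threshold
    nhb = np.column_stack([np.zeros(n_sim), nhb_treat])
    evpi = nhb.max(axis=1).mean() - nhb.mean(axis=0).max()
    print(f"{name}: per-patient EVPI = {evpi:.4f} QALYs")
```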

Therefore, there is potential value in research which might not resolve uncertainty but instead reveal the reasons for variability in outcome, informing which subgroups could benefit most from an intervention, or the choice of the physician-patient dyad in selecting care given symptoms, history, and preferences (i.e., individualized care). This type of research may be very different from the type of evaluative research that reduces uncertainty about estimates of effect. For example, it might include diagnostic procedures and technologies, pharmacogenetics, analysis of observational data and treatment selection, as well as novel trial designs which can reveal something of the joint distribution of effects. Much methodological and applied work has been conducted in this rapidly developing area. There is an opportunity to explore ways of estimating the potential value of such research (the expected benefits of heterogeneity) based only on existing evidence. This would provide a very useful complement to estimates of EVPI and EVPPI. It would allow policy makers to consider whether HCS resources should be invested in: providing early access to new technologies; ensuring the findings of existing (or commissioned) research are (or will be) implemented; conducting research to provide additional evidence about particular sources of uncertainty in some (or all) subgroups; or conducting research which can lead to a better understanding of variability in effects. Of course, some combination of these policy choices may well offer the greatest impact on overall health outcomes.

Value Of Information And Cost-Effectiveness Analysis

The discussion of value of information analysis has been founded on a HCS which faces some constraints on the growth of health-care expenditure, so additional HCS costs displace other care that would have otherwise generated improvements in health. In the UK, recent estimates of the rate at which health-care costs displace health elsewhere (the cost-effectiveness threshold) are now available. However, in all HCS new technologies impose costs (or offer benefits) which fall outside health care and displace private consumption rather than health. If some consumption value of health is specified, then these other effects can also be expressed as their health equivalent and included in the expression for NHB. Impacts on health, HCS resources, and consumption can also be expressed in terms of the equivalent net private consumption effects or the equivalent HCS resources (these monetary values will only be the same if the estimate of the threshold is the same as the consumption value of health). Therefore, the methods of analysis outlined above are not restricted to cost-effectiveness analysis applied in HCS which have administrative budget constraints and/or where decision making bodies disregard effects outside the HCS. They are just as relevant to an appropriately conducted cost-benefit analysis (one which accounts for the shadow price of any constraints on health-care expenditure).

Equally, the principles of value of information analysis can be usefully applied even in circumstances where decision making bodies are unwilling or unable to explicitly include any form of economic analysis in their decision making process. For example, a quantitative assessment of the expected health (rather than net health) benefits of additional evidence is possible by applying value of information analysis to the results of standard methods of systematic review and meta-analysis. Insofar as there are additional costs associated with more effective interventions, this will tend to overestimate the expected NHB of additional evidence. Also, the endpoints included in the meta-analysis of previous trials may not capture all valuable aspects of health outcome. For example, although mortality following acute myocardial infarction may be the appropriate primary outcome in the evaluation of early thrombolysis, it is not necessarily the only relevant outcome. Stroke and its consequences are also very relevant, as are length of survival and the type of health experienced in the additional years of life associated with mortality effects.

Specifying a minimum clinical difference required to change clinical practice is one way to incorporate concerns about potential adverse events and other consequences of recommending a more effective intervention, including the additional costs, albeit implicitly. This concept of an effect size has been central to the design of clinical research and determines the sample size in most clinical trials. The effect size does not represent what is expected to be found by the research, but the difference in outcomes that would need to be detected for the results to be regarded as clinically significant and have an impact on clinical practice. The same concept can be used to report estimates of the expected health benefits of additional evidence for a range of minimum clinical differences (MCD) in outcomes. The value of additional evidence and the need for further research depend on the clinical difference in key aspects of outcome that would need to be demonstrated before clinical practice ‘should’ or is likely to change. There are a number of circumstances where a larger MCD might be required. For example: (1) where the quantitative analysis is restricted to the primary endpoint reported in existing clinical trials but there are other important aspects of outcome that are not captured in this endpoint (e.g., adverse events or quality of life impacts that have not been accounted for in the meta-analysis); (2) when there is an impact on HCS costs, out of pocket expenses for patients, or the wider economy; and (3) where a larger clinical difference in effectiveness would need to be demonstrated before research would have an impact on practice and the findings of proposed research would be widely implemented.
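
One simple way this can be operationalized, sketched below under purely illustrative assumptions, is to treat the MCD as the hurdle the uncertain pooled effect from a meta-analysis must clear before practice changes, and to report the expected benefit of evidence over a range of MCDs.

```python
import numpy as np

rng = np.random.default_rng(11)
n_sim = 100_000

# Hypothetical pooled health benefit per patient from a meta-analysis
# (arbitrary outcome units); mean and SD are illustrative assumptions.
benefit = rng.normal(0.15, 0.10, size=n_sim)

# Treating the MCD as the demonstrated difference required before practice
# changes, the expected benefit of (perfect) evidence at each hurdle is
# E[max(benefit - MCD, 0)] - max(E[benefit] - MCD, 0).
for mcd in [0.0, 0.1, 0.2]:
    value_perfect = np.maximum(benefit - mcd, 0.0).mean()
    value_current = max(benefit.mean() - mcd, 0.0)
    print(f"MCD = {mcd:.1f}: expected benefit of evidence = "
          f"{value_perfect - value_current:.4f}")
```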

Requiring that further research must demonstrate larger differences in effect will tend to reduce its expected potential benefits, because large differences are less likely to be found than smaller ones. Specifying an MCD through some form of deliberative process would implicitly account for the other unquantified aspects of outcome, HCS costs, and other nonhealth effects. Of course, decision makers would need to consider whether proposed research is still a priority at an MCD that is regarded as sufficient to account for these other effects. Importantly, whatever the policy context, the principles and established methods of value of information analysis are relevant to a wide range of different types of HCS and decision making contexts, and should not be regarded as restricted to situations where probabilistic decision analytic models, estimating cost effectiveness with QALYs as the measure of health, are available and routinely used within the decision making process.
