The best known ‘facts’ about the macroeconomics of health are that rich nations are healthier and spend more on medical care than poor nations, but that additional wealth or spending may not add much to life expectancy after some threshold level has been exceeded (Figure 1(a)–(c)). A fact that receives insufficient attention is that any major macroeconomic change takes time, often quite a long time. An often repeated but generally incorrect ‘fact’ is that population aging and health risks (obesity and cancer) are major drivers of aggregate spending growth.
Macroeconomists focus on large-scale issues at the national or global level – growth, distribution, business cycles, money, and finances – rather than the micro individual rational choice decisions examined by most health economists. Macroeconomists tend to use time series methods and address dynamics rather than the cross-sectional methods and comparative statics of micro studies. Analyzing when and how change occurs forces more explicit consideration of lags, heterogeneity, and variance – and of the differences between micro and macro processes that might superficially appear to be the same.
Some notable disparities addressed in this article are the contrast between the quick, anticipatory movements of financial markets and the slow inertial flow of complex health care systems (smoothing that renders regular business cycles almost invisible); discrepancies in the determinants of spending between the individual micro level (illness) and the national macro level (per capita gross domestic product (GDP) – with a lag); and divergences in sustainable rates of growth.
Mortality And GDP
During the past 200 years, many parts of the world experienced unprecedented growth in material well-being and human health. In the UK, real income per capita rose 10-fold while life expectancy doubled. Demographic transition and the industrial revolution brought similar improvements in the US, France, Germany, Sweden, Japan, and most developed nations. The massive effect of modern economic development on human conditions is well known and beyond dispute. The timing and uneven distribution of such gains are less well recognized. What has become increasingly evident in recent research is that the relationship between ‘GDP’ and ‘Health,’ although quite strong and clearly causal, is far from simple.
Any major social change takes time and rests on many preconditions, making a precise dating of a ‘starting point’ at best imprecise, and possibly misleading. That said, a reasonable consensus among the economic historians and macroeconomists who study growth is that the industrial revolution began around 1775 (±75 years) and was well established by 1850, although wider diffusion and follow-on benefits continued through much of the twentieth century. Therein lies the rub. Although the surge of innovation and economic development was manifestly evident in nineteenth-century Dickensian England, the industrial revolution in 1850 – and for a long time thereafter – was associated with widespread misery and substantial declines in life expectancy. The data presented by Angus Maddison are consistent with the following rather loose and lengthy causal chain: A burst of productivity-enhancing innovations (steam engine and factory work) starting around 1780 allowed rapid growth in population and trade, which eventually (20–50 years later) led to rising average incomes and material well-being of individuals, which in turn (after another 20–50 years) led to a rise in human life expectancy. Some details of timing, paths, and dynamics of this process are discussed in section Growth, Business Cycles, and the Long Run below.
Business Cycles And Employment
Figure 2 compares ‘total’ and ‘health’ employment in the US 1990–2010 and reveals two major macro conclusions: The health sector is growing much faster than the rest of the economy (rising share), and that growth is much steadier (lower variance). The jagged seasonal variation very evident in total employment is almost nonexistent in health care. The significant deviations from trend due to recessions in 1990–91, 2001, and 2007–09, readily discernible in total employment, are also missing. Instead, health employment shows an almost steady upward incline throughout this 20-year period and for earlier decades as well.
The health sector’s lack of response to recession is evident in Figure 2(b). The ‘great recession,’ officially dated as beginning in the fourth quarter of 2007, appears here as a slowdown in the rate of job growth starting after a peak (2.1%) in March 2006, which then went below the long-run sustainable rate of increase (0.9%) in November 2007 and turned negative in May 2008, finally reaching a trough in August 2009 when jobs were disappearing at a 5% annual rate. Only after June 2010 did job growth turn positive, and it will still be a number of years before overall US employment again reaches the previous level (139 million) and even longer (perhaps 5–7 years) to compensate for the intervening population growth. In contrast, growth in health employment continued to increase throughout 2007 and decelerated only moderately after that. The great recession, to the extent that it is visible at all in health care, shows up as a slight dampening of a continuing high rate of growth 2 years after the most massive economic downturn since the depression.
Unemployment And Mortality
Employment (and its obverse, unemployment) is a main indicator of economic growth. Hence, it seems reasonable that more employment (unemployment) should be associated with higher survival (mortality). Indeed, economic historians have traced the path of medieval economic fluctuations by correlating the price of grain with mortality rates. Twentieth-century policy makers often pointed to the adverse effects of unemployment on population health as a justification for countercyclical monetary and fiscal interventions. The research and legislative testimony of M. Harvey Brenner quantifying the expected number of lives lost for each additional percent of unemployment became so well known that the association of unemployment with mortality was widely referred to as the ‘Brenner Hypothesis.’ The strong long-run and cross-sectional connection between GDP and mortality made it seem like ‘common sense’ that a similar short-run relationship should hold. However, Jose Granados, Hugh Gravelle, Audrey Laporte, Jes Sogaard, Adam Wagstaff, and others attempting to empirically verify the Brenner hypothesis reported great difficulty in doing so. In a seminal paper in 2000, Christopher Ruhm reported compelling evidence that recessions were in fact associated with lower, rather than greater, mortality – and was able to explain why. Briefly and incompletely put, Ruhm and others have shown that unemployment and the concomitant reduction in general economic activity are associated with changes in behavior and consumption (less driving, more exercise, etc.) that reduce contemporaneous mortality without affecting long-run mortality very much. This is especially true for deaths due to accidents, cardiovascular disease, births, and some other medical conditions, whereas the converse holds for suicide and some other causes of death where acute stress may play a greater role.
The conclusion that unemployment lowers mortality rates, although considered counterintuitive 20 years ago, has been so frequently confirmed empirically that most informed researchers would now consider it conventional – even though much of the public still thinks unemployment causes mortality rather than the reverse.
Some of the public confusion arises because these macro results apply to aggregate population mortality rates rather than the typical individual results that people ‘see for themselves.’ A negative macro correlation between unemployment and mortality does not imply that unemployment is healthy for the individual who loses a job. Indeed, there is compelling research showing that unemployment is highly damaging to the individual who is laid off. Daniel Sullivan and Til von Wachter report that involuntarily unemployed workers suffer a 10–15% increase in annual mortality rates that persists for at least 20 years, reducing average life expectancy by 1–1.5 years. Jason Lindo reports that parental job loss substantially reduces birthweight and child health, while Gerard Van den Berg, Maarten Lindeboom, and France Portrait show that infants born during economic crises in the nineteenth century had reduced life expectancies. These results make it clear that the impact of job loss on individual health (micro effect) is quite different from the macro effect on population rates.
Expenditures on health care have increased rapidly in all developed (Organization for Economic Co-operation and Development (OECD)) countries over the past five decades, with total spending rising more than 1000% in most countries due to inflation, demography, technology, income, and other factors. However, the relative contribution of each factor is often uncertain, variable over time and across countries, and subject to inertia and lags of varying lengths.
Inflation And ‘Real’ Expenditures
Differences in the nominal value of money over time and across countries cause large yet presumably unimportant differences in measured spending. If medical transactions were simple spot exchanges and price indexes were perfect, adjustment using deflators and exchange rates would not be an econometric problem. Instead, medical transactions are usually complex, involving group contracts and institutional interactions extending over years or decades. In such a context, inflation and purchasing power parity discrepancies will often distort measures of ‘real’ health expenditures.
To sidestep real versus nominal issues, quantifying resource use within a country, region, township, or household by share of GDP (or of consumption, income, employment, etc.) may sometimes be preferable. However, the inertial response of health care systems to macroeconomic forces means that short-run shifts in shares are more apt to come from delays and measurement errors than substantial changes in real resource use. This is shown below with data from Canada during a spike of inflation in 1974. The measured health share of GDP fell from 0.073 to 0.069, whereas the share of employment in health increased. It is most likely that the real share of economic activity devoted to health was rising rather than falling in 1974.
One way such systematic errors are generated is by delaying wage increases. In a study by Getzen and Kendix, wages of health care workers were estimated to have a secular increase of 0.6% above that of other workers and to respond essentially 1:1 to inflation – but with a lag. When inflation goes up (or down), less than half of that change in the rate of inflation is reflected in health care wages in the current year, and even after 2 years, about one-fourth of any shift is still waiting to trickle into the health sector.
If it takes 18 or 24 months for changes in the general level of prices to be reflected in the wages of health care workers, significantly longer than for most employees, then measured labor will appear to be significantly below (above) real employment whenever inflation is rising (falling), even though the long-run effect of general price inflation is neutral. Similar distortions arise when purchasing power parities (PPPs) deviate widely from exchange rates. Internationally traded items, such as pharmaceuticals, are priced in international currency units, whereas wages and physician services reflect domestic (PPP) equivalents.
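The delayed pass-through of inflation into health-sector wages described above can be sketched as a simple partial-adjustment process. This is an illustrative simulation, not the published Getzen–Kendix model: the 45% annual pass-through rate is an assumed value chosen so that less than half of an inflation shift appears in the first year and roughly a quarter is still pending after two years, as the text describes.

```python
# Sketch (not the published model): partial-adjustment pass-through of
# general inflation into health-sector wages. Assumed parameters: 45%
# of the unabsorbed inflation backlog passes through each year, plus a
# 0.6% secular drift in health wages relative to other workers.

def health_wage_growth(inflation, passthrough=0.45, drift=0.006):
    """Simulated annual health-wage growth for a list of inflation rates."""
    wages = []
    carried = 0.0          # inflation not yet reflected in wages
    for pi in inflation:
        carried += pi      # new inflation joins the backlog
        realized = passthrough * carried
        carried -= realized
        wages.append(realized + drift)
    return wages

# A one-year inflation spike shows up in wages only gradually,
# spread over several subsequent years:
spike = [0.02, 0.10, 0.02, 0.02, 0.02]
print([round(w, 4) for w in health_wage_growth(spike)])
```

Because wage growth peaks below the spike and then decays slowly, measured real health employment is understated while inflation is rising and overstated as it falls, which is the distortion discussed above.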
Measurement and estimation of income effects are even more affected by lags and inertial response. In Figure 3(a), the 1990–93 recession in Finland and the subsequent decline in national health expenditures occurring after a lag of 2 years is clearly visible. Yet when the same data are presented as a scattergram in Figure 3(b), the correlation between GDP growth and health expenditure growth is almost entirely obscured. Only once allowances are made for delayed response spread out over several years does the correlation again become clear, as in Figure 3(c), which plots annual expenditure increases against a lagging 3-year moving average of GDP growth.
Health care spending depends on permanent income, which changes slowly over time. Even after a decision to spend more (or less) has been made, the rigidity of budgets and licensed professions delays implementation. With slowly evolving expectations regarding permanent income and complex institutional inertia, the impact of current changes in GDP is barely apparent in contemporaneous spending. Estimation across a panel of OECD countries from 1961 to 2008 indicates an average lag of 2 or more years before changes in per capita GDP affect health care spending.
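The smoothing used in Figure 3(c) can be sketched as follows: instead of correlating health spending growth with current GDP growth, one correlates it with a lagging multi-year moving average. The GDP growth figures below are synthetic, for illustration only; the window and lag lengths are the 3-year/1-year values mentioned in the text.

```python
# Sketch of the Figure 3(c) transformation: replace current GDP growth
# with a lagging moving average, reflecting delayed response spread
# over several years. Data are synthetic, for illustration only.

def lagged_moving_average(series, window=3, lag=1):
    """Average of the `window` values ending `lag` periods before t."""
    out = []
    for t in range(len(series)):
        lo, hi = t - lag - window + 1, t - lag + 1
        out.append(sum(series[lo:hi]) / window if lo >= 0 else None)
    return out

gdp_growth = [0.03, 0.04, -0.02, -0.06, -0.01, 0.02, 0.04, 0.04]
smoothed = lagged_moving_average(gdp_growth, window=3, lag=1)
for year, (g, s) in enumerate(zip(gdp_growth, smoothed)):
    print(year, g, None if s is None else round(s, 4))
```

A sharp one-year recession in the raw series becomes a shallow, delayed dip in the smoothed series, which is why contemporaneous scattergrams such as Figure 3(b) obscure a relationship that the lagged specification recovers.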
What is not so apparent in this single-equation estimate on panel data encompassing 17 countries and 46 years is that the lags between measured current income and changes in spending vary substantially from country to country, by the particular type of health spending (research, physician, hospital, and dental), and even from one time period to the next. Table 1 provides coefficients estimated for several categories of health expenditures within the US for 1960–2009. The average lag for all categories of spending combined is a little over 3 years, but varies from less than 2 years for personal out-of-pocket spending and drugs to more than 4 years for hospitals. Presumably, more detailed accounting would reveal an even greater range, perhaps just a month or two for bandages and over-the-counter medicines but close to a decade or more for construction of new buildings. Five hundred observations can robustly establish that lags occur and that they vary, but are hardly able to specify the range and shape of those variations or to identify the many institutional features that cause responses to be delayed. Kenneth Arrow’s classic 1963 paper focuses on uncertainty with regard to incidence (risk) and quality (effectiveness) as the cause of the special characteristics of the health care market. The first, risk, is dealt with primarily through pooled financing. Third-party insurance, whether government or private, builds a structural lag into the link between income and expenditure. Premiums are set well in advance and based on expectations – that is, on a form of permanent rather than temporary income.
Although demand side buffering causes some lags, the more significant institutional rigidities that Arrow identifies occur on the supply side – licensure, cost shifting, nonprofit community organization, and other barriers. Adjustment of physician supply is enmeshed in traditional educational institutions that are resistant to change, so much so that any equilibrium must be considered heavily punctuated if not ossified. From 1980 to 2005, US population grew 23% and real health expenditures per capita grew 187%, but just one new medical school was built and the number of US medical graduates rose by only 2% (from 15 632 to 15 962). The shift to produce more graduates carried out in the early 1960s reverberated for more than 30 years (the average length of professional practice), but the grudging and belated accommodation of growth through the professional supply chain indicates how inertial the medical care system can be.
Expectations prevalent during creation tend to get built in, embedded in the processes and organizations by which a medical financing system operates. The Medicare and Medicaid programs in the US were conceived during a period of endless growth and bright technological promise and thus were designed to increase the wages of health workers, to subsidize the construction of hospitals, and to support experimental treatments through generous funding. Enactment in 1965 promoted the rise of Academic Medical Centers, sophisticated subspecialty practice, and rapid increases in health spending. Only after the OPEC oil crisis and recession of 1974 dimmed the once rosy economic outlook was a serious attempt made to control (rather than expand) the growth of medical spending. Yet grafting cost controls onto an expansionary system has proven difficult. Decades later, these two government payment programs, originally just 2% of GDP, are projected to rise above 10% and threaten the entire budget process. Conversely, the UK National Health Service was established in a context of postwar austerity in 1948. Although growth in UK spending has been substantial, sometimes more than desired, it has usually been below the OECD average and certainly well below the excessive rates in the US.
Institutional forces are hard to quantify. Empirical estimates of long-run trends (or curvature in trends) are difficult to make and seldom compelling. That said, economic historians and theorists such as Daron Acemoglu, Philippe Aghion, David Landes, Joel Mokyr, Douglass North, Mancur Olson, Dani Rodrik, James Robinson, Paul Romer, Oliver Williamson, and others have concluded that institutions are a primary factor in economic growth and development. In the case of health care, it seems apparent that macroeconomic factors prevalent when the foundations are laid can continue to exert an influence on spending for at least as long as the doctors and politicians then present continue to exist and perhaps as long as the defining institutional structures (licensure, voluntary nonprofit hospitals, and insurance pools) endure.
Population Demographics And Aging
Population can be a neutral denominator by which costs or mortality are scaled. There is little evidence to contradict the simple notion that a group or nation two (or twenty) times the size of another has similar costs and mortality per person (or per thousand, or per million), holding other factors constant. Growth (or decline) makes the situation more complex, as the dynamics of changing dependency ratios, disability, aging, and time-to-death come into play. Births and deaths are the basic building blocks of demographics, and both events are expensive. Although now discredited, the impression that population aging itself was the important factor accounting for rising national health expenditures probably arose because: (1) health care spending on the elderly was rising rapidly, (2) health care spending was higher in nations with a more elderly population, and (3) there was growing fiscal concern regarding how governments could pay for expected increases in pensions and health care services. During the 1970s and 1980s, a number of ‘demographic’ models were constructed that projected future health expenditures using a linear matrix that mimicked the format used for projections of future pension payouts, summing population times cost per person across categories (the subscripts ‘i’ being ‘age–sex’ categories, or ‘age–sex–disease–disability’ categories if more detail is desired).
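The linear-matrix projection format can be sketched as a sum over cells, E = Σ_i n_i × c_i, with n_i the population and c_i the cost per person in category i. The numbers below are hypothetical, chosen only to illustrate why a shift toward an older age structure moves total spending far less than across-the-board growth in cost per person.

```python
# Sketch of the 'demographic model' projection format: total spending
# is a sum over age-sex cells of population times cost per person,
# E = sum_i n_i * c_i. All numbers are hypothetical.

def total_spending(pop, cost):
    return sum(n * c for n, c in zip(pop, cost))

cost = [1000, 2000, 6000]     # per-capita cost: young, middle-aged, old
pop_now = [40, 40, 20]        # millions of people in each cell
pop_older = [35, 40, 25]      # a markedly older structure, same total

aging_only = total_spending(pop_older, cost) / total_spending(pop_now, cost)
cost_growth = total_spending(pop_now, [c * 1.5 for c in cost]) / total_spending(pop_now, cost)
print(f"aging alone: +{(aging_only - 1):.0%}; 50% excess cost growth: +{(cost_growth - 1):.0%}")
```

Even a large shift in age structure raises total spending by only about a tenth in this sketch, while uniform excess cost growth passes straight through, consistent with the empirical findings discussed next.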
Empirical investigation quickly showed that changes in the percentage of population aged 65 years or more, or in the number of elderly, accounted for only a small portion of total cost increases, with most attributable to increased cost per person (holding age and sex constant). As government and employer financing of health care expanded, the personal budget constraints that had prevented many people, especially the elderly, from spending much on medical care in the 1950s were largely removed during subsequent decades.
The main reason for a rising health share of GDP is secular ‘excess cost growth’ per person (i.e., medical costs for every age–sex category have grown more rapidly than per capita income). A secondary factor is the extra ‘excess’ among the elderly (again, holding age and sex constant). In the US, the ratio of spending on those over age 65 to those under has moved from 168% in 1953 to 345% in 1970 and above 500% in the 1980s before falling back below 400% after 2001, with similar changes in relative spending ratios occurring in most OECD countries. Governments were spending more, a lot more, on elderly people, who had been significantly relieved of the financial burden of paying for that care themselves. A false impression of causality was created as economic development led to concurrent rises in both average age and per capita spending for most nations. A panel study of OECD data by Getzen demonstrated that the cross-sectional association between age (% aged 65+) and expenditures at a point in time tends to disappear once income effects are accounted for, and also that more rapid growth in the elderly population of a country during the decades 1960–1990 was not correlated with that country’s rate of growth in real health spending per capita (illustrated below in Figures 4(a) and (b)).
Most health economists now agree (even when arguing details, estimation procedures, and causes) that it is more (excess) spending per person, and not population aging, that threatens the fiscal health of nations. Why then were commentators so convinced three and four decades ago that ‘aging causes higher health care costs’ – and why was that mistaken impression so persistent when it could so easily be overturned by empirical investigation? Confusion arose from a failure to distinguish between micro and macro phenomena as well as from the facile but misleading association of concurrently rising trends. At the individual micro level, older persons do spend more than younger persons because older people are usually sicker and stand to benefit more from therapy. Pooled government financing strengthens the connection between an individual’s age and medical expenditures by removing the personal budget constraint. However, the system also disconnects total (and hence average per capita) financing from need. At the national macro level, spending decisions (total funds available) are driven by budgets, not by need or illness. A nation populated only by poor old people suffering from diabetes, dementia, and other illnesses would have to spend less, not more, on health.
Macro (National) And Micro (Individual) Expenditures: Budgets And Allocation
Asked why health care spending is so much higher in Germany than in Ghana, most respondents quickly offer the answer that Germany is much richer. When pressed, they acknowledge that need and potential benefits from medical care are likely to be much higher in Ghana, but ‘the funds are not available there.’ The connection between purchasing power and spending, so obvious at the national level, is often obscured in microeconomic analyses of aging, disability, or time-to-death. Clarification comes from recognizing that on one hand the use (allocation) of available medical resources is determined by clinicians on the spot immediately responding to the health of the patient, while on the other hand the total amount of national medical resources available (budget) to treat patients is determined through the political process, shaped most strongly by fiscal policies that respond slowly, and with a lag, to changes in GDP (national permanent income).
Finland, like most European countries, is steadily aging. From Figure 3(a) above, it is evident that spending on health care was severely restricted in Finland after the deep recession of the early 1990s, and also that the response was slow, delayed by 2 or 3 years. One searches in vain for evidence that per capita spending for Finland, or any other country, rose or fell in response to changes in health status – or that differences in rates of death, disability, or aging were matched by differences in the rate of growth in spending. Pooling of funds through insurance and tax financing removes the budget constraint from the individual, so that personal income is no longer a major factor determining the amount of care used. However, the budget constraint still applies for the pool as a whole (in the case of Finland, the nation), so in aggregate the sum of spending on all individuals is constrained by the average contribution paid in (which, in turn, is usually strongly correlated with per capita GDP). Of course, some medical spending is made by patients from their own budgets or by subnational entities (kin, employee groups, neighborhoods, counties, and provinces) constrained by their own budgets – hence more related to the per capita income of that particular group than of the nation as a whole.
Spending depends on who makes the decision and how. For food, housing, transportation, and most other consumption, total spending is the sum of many individual decisions. Medical decisions, conversely, are made by professional agents (physicians) operating within a highly structured system dominated by third-party insurance or tax financing, divorcing spending on an individual from that individual’s budget.
Growth, Business Cycles, And The Long Run
A Tale Of Two Necessities
Housing and health care are both generally considered ‘necessities,’ although neither conforms to Engel’s law or meets the technical definition used by most economists (income elasticity <1.0). What they have in common is that both are considered vital and are sufficiently expensive as to require external financing for mass consumption. Housing needs are financed through the mortgage and rental markets, which pool the resources of investors, banks, and other intermediaries. Health care financing needs are met through broad-based taxation and employer insurance pools.
What sharply distinguishes the two sectors is dynamics – different, almost opposing, responses to macroeconomic fluctuations. Housing swings wildly with the business cycle, anticipating and amplifying the ebb and flow of money, employment, and interest rates. Health care plods along, an inertial stabilizer that muffles shocks, only belatedly registering the effects of booms and busts, and then with such long and variable lags as to smooth business cycles into near invisibility (Figures 2–4). Differences in financing mechanisms account for much of the differential in dynamics. As pointed out by Robert E. Hall and legions of other macroeconomists, it is the flow of money that links regions, sectors, and countries – and puts the economy at risk of business cycles, with interest rates as the key transmitter. Slack and contraction make adjustment to financial frictions problematic, sometimes sufficiently so that the equilibrating mechanisms are seriously compromised. The use of money to facilitate transactions, allocate capital, provide credit, and spread risks comes at a cost that rises exponentially during a systemic crisis, with savings, interest rates, and employment unhinged. Housing is highly leveraged with debt financing and bears the brunt of adjustment. Health care is not. It is, in contrast, routinely financed within a pay-as-you-go framework by government or employers. Medicine proceeds with blithe indifference to financial markets. Doctors, nurses, administrators, and even pharmaceutical companies are often unaware of and relatively unaffected by interest rates. Stock markets soar and crash with little more effect on the operation of hospitals than sunspots or tidal waves. The only oscillation that seems to be generated within the health care financing system is the ‘underwriting cycle’ of alternating hard and soft markets for private insurance premiums, pushing quoted rate increases slightly above or below the rise in medical costs.
The private health insurance underwriting cycle, however, has little power and is often offset by countervailing trends in government tax financing. Probably the only way to get a real and significant financial disruption of the medical sector would be to put corporate and government financing under such stress that the entire structure was threatened. Fortunately, this has not yet happened, or at least not with sufficient force to be evident in the modern national economies and health systems characteristic of most OECD countries since 1960 (although continuing fallout from the 2008–09 recession and subsequent bank collapses in Iceland, Ireland, and Portugal may put that to the test).
Limits To Growth And ‘The Great Inflection(S)’
No matter how vital or necessary, there is a limit to the amount of spending on any sector. Expenditures cannot logically exceed 100% of income (at least, not for any extended period of time). Currently, most wealthy OECD economies seem resistant to spending much more than 10% of GDP on health care, with the US at 16% a notable exception. The rise in health spending was very rapid during the 1960s and 1970s, moderated a bit during the 1980s and 1990s before bumping up around the turn of the millennium and then becoming somewhat restrained over the past 5 years (an uptick in share for 2008 and 2009 is mainly because of countries having a temporary decline in GDP rather than a more rapid rise in real health expenditure). Figure 5 was constructed using historical data from the UK and US, but the shape would be similar for other OECD economies – and is remarkably like the typical inflected logistic S-curve that characterizes most growth processes. A long period of slow growth (incubation) builds toward an explosive spurt (exponential growth) that is bent back (inflected) as it approaches some constraint that limits growth in the long run (upper bound/stability).
Many aspects of health and economics appear to follow a typical growth process during the nineteenth and twentieth centuries: Medical costs, life expectancy, population, urbanization, industrialization, trade, workforce participation, and GDP all trace recognizable S-curves, although each differs somewhat in shape and timing. The growth of per capita income slid along at a very low rate for millennia before rising abruptly after 1850, soaring for a century, and appearing to stabilize (?) near an upper bound of 1–2% within the next century. Life expectancy fluctuated from 20 to 40 years before making tremendous gains during the twentieth century and is expected to face diminishing returns as genetic and social factors impose an upper limit (110, 125, or ?). Global population took hundreds or thousands of years for each doubling from prehistoric eons through the middle ages before reaching half a billion around 1550, quickly climbed to one billion by 1820, two billion by 1925, three billion in 1960, six billion in 2000 – and is projected to stabilize after reaching a peak of 10 billion within the foreseeable future. The process of ‘demographic transition’ traces a similar curve, reversed and displaced in time.
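The inflected logistic S-curve mentioned above has the standard form L / (1 + exp(-k(t - t0))): slow incubation, an exponential spurt that is fastest at the inflection point t0, and saturation toward the upper bound L. The parameters below are illustrative, not fitted to the UK or US data behind Figure 5.

```python
import math

# The logistic S-curve that characterizes most growth processes:
# incubation, exponential spurt, inflection, and an upper bound L.
# Parameters here are illustrative, not fitted to any series.

def logistic(t, L=1.0, k=1.0, t0=0.0):
    """S-curve value at time t: L / (1 + exp(-k * (t - t0)))."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Sampling symmetric points around the inflection shows the shape:
# small values early, half the bound at t0, then saturation.
ts = [-6, -3, 0, 3, 6]
print([round(logistic(t, L=100, k=1.0, t0=0), 2) for t in ts])
```

Each of the series named above (population, life expectancy, health spending share) would need its own L, k, and t0, which is what "each differs somewhat in shape and timing" amounts to in this notation.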
The coincidence of so many dramatic changes in human society could hardly be attributed to chance, yet the causality and order is much debated – and has only recently (and partially) been illuminated with empirical data through the efforts of cliometric economic historians such as Gregory Clark, Dora Costa, Robert Fogel, Angus Maddison, and others, and placed within a conceptual framework by development theorists and macroeconomists such as Daron Acemoglu, Oded Galor, Chad Jones, Michael Kremer, Rodrigo Soares, and others. The tentative consensus among these scholars is that the gradual increase in knowledge and technology over millennia brought about an end to the Malthusian era, appearing first as a dramatic increase in total population and urbanization, a shift from agriculture to industry, a decline in mortality, and a steady increase in income per capita – transformations that were well underway toward the end of the nineteenth century, clearly before the rise of modern (expensive) scientific medicine or the modern increase in life expectancy beyond the biblical three score and ten. This cluster of transformations by which humanity escaped the era of Malthusian constraints in a burst of exponential growth is variously known as ‘development,’ ‘demographic transition,’ ‘the industrial revolution,’ the ‘modern era,’ or, most misleadingly, as ‘normal times.’
In 2001, macroeconomists were discussing ‘the great moderation,’ showcasing compelling results from rational expectations models and financial forecasts based on recent time series data. By 2011, such discussions were supplemented or supplanted by observations on the Great Depression and the financial panic of 1873. The postwar ‘normal’ should be seen as an aberration – calm at the eye of a storm that transformed human society and is not yet finished.
Complexities Of Measurement And Specification
The reason that business cycles are mostly invisible in health care spending and employment is not that inflation and GDP growth have no effect, but that the relationships are misspecified. To estimate the effect of one variable on another, the units of observation must match the span of action in both time and space. One hundred or even one thousand observations cannot capture the effect of a change in GDP on life expectancy or health expenditures if each observation is one minute long. A minute-by-minute time frame is, however, quite useful for determining the effect of a 10.00 a.m. announcement of clinical results on the price of an exchange-traded biotech stock. Using minute-by-minute, hourly, or even daily measures will tend to reduce the signal-to-noise ratio and obscure the long-term, low-frequency effects of a recession on health employment or mortality rates. Note also that observations on the price of a specific biotech stock are unlikely to reveal the broad effects of macroeconomic factors on the market as a whole, just as investigating the determinants of one individual’s medical costs during an illness episode is unlikely to reveal the factors that cause national health spending to rise over years or decades.
Inequality And Nonlinearity
Not long after Samuel Preston published his analysis in 1975 showing that the positive effect of income on longevity became progressively smaller at higher levels of income, G. B. Rodgers used a multivariate regression to show that, for any given level of per capita GDP, greater income inequality (Gini coefficient) was associated with lower life expectancy. This led to suggestions that inequality and social stress could be an underlying cause of disparities in mortality by ethnicity and occupational status. However, work by Gravelle and others subsequently made it clear that what appeared to be an independent factor was instead an artifact of nonlinearity: any mean-preserving spread would necessarily cause the estimated coefficient of ‘inequality’ to be negative, because the mortality reduction obtained by the higher income group from gaining $1000 would, given diminishing returns, be smaller than the mortality increase imposed on the lower income group losing $1000. Subsequent studies have supported this explanation. An extensive review of the literature by Deaton in 2003 concluded that there is still no compelling evidence that inequality in itself is a major cause of population mortality rates once sufficient care is taken to consider the effects of nonlinearity and other contributing factors.
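The nonlinearity argument can be made concrete with a stylized (entirely hypothetical) concave income-to-longevity curve: a mean-preserving transfer from a poorer group to a richer group leaves mean income unchanged but lowers average life expectancy, so a regression would attribute the loss to ‘inequality.’

```python
# Illustrative sketch; the curve and dollar figures are assumptions,
# not estimates. Concavity (diminishing returns) is the only
# property doing the work.
import math

def life_expectancy(income):
    """Stylized concave income-longevity curve (hypothetical)."""
    return 35 + 10 * math.log(income / 1000)

# Two equal-sized groups; a $1,000 mean-preserving spread.
before = [20_000, 40_000]
after = [19_000, 41_000]   # poor lose $1,000, rich gain $1,000

avg_before = sum(life_expectancy(y) for y in before) / 2
avg_after = sum(life_expectancy(y) for y in after) / 2

print(avg_before > avg_after)  # True: the spread lowers average longevity
```

Mean income is identical before and after the transfer, so any difference in average longevity loads onto the inequality measure even though ‘inequality’ has no causal channel of its own in this toy setup.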
Income, Education, Wealth, Or Socio-Economic Status (SES)?
Why spending depends on broader measures such as ‘permanent’ or ‘shared’ income rather than current individual earnings is fairly evident. Categories and concepts, like temporal boundaries, may also be indistinct. Demographers, sociologists, epidemiologists, and public health researchers examining the connection between income and health at the individual micro level are apt to use a broad concept of resource availability such as ‘socioeconomic status,’ of which ‘household income’ is just one aspect or indicator. For macroeconomists, the catchall term is ‘level of development’ or technology. Occupation, assets, poverty, and malnutrition are all associated with income levels – and with mortality. Ethnicity, education, urbanization, and social status may not be so directly related to income but are rarely independent of it – and are sometimes even stronger predictors of morbidity. The black/white differential appears to be larger in the US than the UK, but UK occupational status disparities seem to be greater. The strength and relative importance of factors varies so much across places and periods that it is unlikely that the determinants of health are constant or fixed, even though almost every region has ethnic (Inuit, Sami, Maori, Romani, etc.) or other groups (widows, orphans, albinos, and refugees) for which health outcomes are persistently worse than average.
Measuring income, SES, or economic development is difficult but less problematic than quantifying ‘health.’ Longevity and mortality rates are clear but crude measures, and neither is applicable to individuals. More detailed, specific, or nuanced assessments (activities of daily living, EuroQol and SF-36 quality-of-life instruments, diabetes prevalence, antidepressant drug expenditures, disability days, cancer survival, hospital utilization, psychiatric visits, etc.) are all sufficiently incomplete or ambiguous that none can be satisfactorily aggregated to macro measures of ‘real’ health outcomes. Analysis of system effects is further complicated by reverse causality between health and income – and also by interactions between marital status and occupation, education, and family size, or almost any set of contributory factors. Although each variable has a distinct connotation that is important in certain contexts, they are almost always acting together in related ways that make it difficult, if not impossible, to decompose a compound total network effect into shares, or to reliably estimate an independent coefficient for each variable.
Empirical analysis of macro determinants is often quite limited by the time frame and number of large-scale long-run observations available to discern diffuse and low-frequency responses. Temporal, spatial, and organizational boundaries must be carefully specified to distinguish and reveal micro and macro effects. Changes in coefficients as the unit of observation expands or contracts can be a key to understanding the underlying structure of the process – opening up the institutional black box of a firm, a hospital, the medical profession, or pharmaceutical discovery. The fact that health care employment adjusts quite slowly to inflation tells us something about wage formation within this industry; a mismatch between price indexes and expenditure patterns suggests that little significance should be attached to publicly listed prices; the fact that pharmaceutical research and development is more strongly correlated with prior firm profits than future prospects suggests something about capital allocation within the industry; disparity between individual cross-sectional expenditure estimates and national time series results may be a useful indicator of the likelihood that a specific policy will be able to ‘bend the (national) cost curve.’
Conclusion: Structure And Lags In The Macroeconomics Of Health
The health sector is technologically dynamic but fiscally inertial. Major change often takes decades rather than months or years. Responses to macroeconomic shocks are delayed and damped by organizational rigidity so that ordinary business cycles are mostly smoothed away. Price changes, whether physician fees, hospital charges, medical care price index, consumer price index, inflation, or interest rates, can make appropriate measurement difficult but appear to have little effect on aggregate real health resources or outcomes. The process is subject to highly variable lags and complicated by interactions and feedback among variables to such an extent that almost any broadly correct generalization has one or more counterexamples that can be named. Coefficients are difficult to estimate with precision, and parsing total network effects into linear combinations or shares for each factor is not very meaningful.
With regard to the current state of the literature, it may be said that since 1960 the development of national health accounting and a host of econometric studies have allowed us to become more precise about what we know and do not know, and considerably more humble about how difficult it is to decompose and discern the relative contributions of various factors. Despite the humbling lack of progress in specifying many of the mechanisms and magnitudes involved, several popular hypotheses (aging, unemployment, and inequalities) have been rejected by repeated empirical tests. Some tentative conclusions are probably justified by the extensive research to date:
- The relationship of national GDP to mortality and health expenditures is strong, but not simple or constant.
- Responses are usually delayed, subject to long and variable lags. The inertial smoothing means that most effects of ordinary business cycles are rendered nearly invisible.
- The spatial and temporal boundaries of observations must be matched to the decision process of the phenomena to be estimated. Often the long-run effects are not the same as short run and may even have the opposite sign: for example, unemployment is associated with decreases in short-run mortality but increases decades later. Macro effects on national outcomes and measures are not the same as micro effects on individuals: for example, getting older greatly increases personal risk and individual medical spending, but population aging has little, if any, effect on average per-capita expenditures.
- The main determinants of individual medical costs (illness) have almost no effect on national health expenditures, which are largely shaped by budget and political pressures. Institutional factors (licensure, nonprofit hospitals, and government financing schemes) seem to dominate with prices, including interest rates, playing a much smaller role in health than other sectors.
- Income is intertwined with social organization, ethnicity, education, and other factors in a complex way that precludes any clear decomposition or reliable estimates of independent or relative importance. The magnitudes and interactions of these effects are demonstrably different for different causes of death, for different countries, and for different time periods.
- Nonlinear flattening of the income-mortality curve at the upper end implies that a mean-preserving spread will harm the poor more than it helps the rich, thereby raising average mortality; income inequality per se, however, does not have much, if any, independent effect on aggregate mortality rates.
- Demographic transition, industrialization, urbanization, education, life expectancy, increases in health expenditure growth, and other aspects of modern development all appear as typical logistic S-shaped growth curves during the twentieth century. This suggests that the postwar span of rapid growth, rather than being a new normal equilibrium, was more like the inflection point in a centuries-long turbulent process of global development that has not yet achieved a long-run steady state.
- Abel-Smith, B. (1967). An international study of health expenditure. Public Health Papers No. 32, Geneva: WHO.
- Cutler, D., Deaton, A. and Lleras-Muney, A. (2006). The determinants of mortality. Journal of Economic Perspectives 20, 97–120.
- Fogel, R. (2004). The escape from hunger and premature death, 1700–2100. New York: Cambridge University Press.
- Galor, O. (2011). Unified growth theory. Princeton, NJ: Princeton University Press.
- Getzen, T. E. (2000a). Health care is an individual necessity and a national luxury: Applying multilevel decision models to analysis of health care expenditures. Journal of Health Economics 19, 259–270.
- Getzen, T. E. (2000b). Forecasting health expenditures: Short, medium, and long (long) term. Journal of Health Care Finance 26, 56–72.
- Hall, R. E. (2010). Why does the economy fall to pieces after a financial crisis? Journal of Economic Perspectives 24, 3–20.
- Newhouse, J. P. (1977). Medical care expenditure: A cross-national survey. Journal of Human Resources 12, 115–125.
- Porter, R. (1999). The greatest benefit to mankind: A medical history of humanity. New York: Norton.
- Preston, S. H. (1975). The changing relation between mortality and level of economic development. Population Studies 29, 231–248.
- Smith, J. P. (1999). Healthy bodies and thick wallets: The dual relation between health and economic status. Journal of Economic Perspectives 13, 145–166.
- Swift, R. (2011). The relationship between health and GDP in OECD countries in the very long run. Health Economics 20, 306–322.
- Weil, D. N. (2007). Accounting for the effect of health on economic growth. Quarterly Journal of Economics 122, 1265–1305.
- https://www.rug.nl/ggdc/ Groningen Growth and Development Centre.
- https://eurohealthobservatory.who.int/ European Observatory on Health Systems and Policies.
- https://www.cms.gov/research-statistics-data-and-systems/statistics-trends-and-reports/nationalhealthexpenddata National Health Expenditure Data.
- https://data.oecd.org/health.htm OECD Health Data.
- https://data.worldbank.org/ World Bank Open Data.