Acting to reduce health inequity: How much evidence is enough?

It is often asserted that more evidence is needed to take action on the social determinants of health. In this guest post Ted Schrecker identifies such claims as a key obstacle to achieving health equity. He argues that to overcome this obstacle, we must recognize that decisions about how much evidence is enough are irrevocably bound together with important ethical and political choices. Ted is a Professor of Global Health Policy at Durham University.

 We should now be familiar with many hard facts about health equity.  In the United Kingdom, for example, despite rhetorical commitments by the previous government to reduce health disparities between rich and poor, by 2007 such disparities were on many measures greater than at any point since the 1930s.  This was before the economic crisis and subsequent austerity measures, which have disproportionately affected the UK’s poorest regions, including the one where I live and work.

Yet in discussions of policy responses, a frequent refrain is that the evidence is not strong enough to justify addressing the “inequitable distribution of power, money, and resources” that was one of the foci of the WHO’s Commission on Social Determinants of Health.   Tobacco control initiatives and encouraging people to eat a healthy diet are fine, but not so challenges to “the inequality machine [that] is reshaping the whole planet,” in the words of the editor of Le Monde Diplomatique.  Since the Canadian experience shows that a healthy diet is often unaffordable for benefit recipients or the working poor, and more than 47 million people in the United States are relying on the government vouchers known as food stamps, that would seem to be a major omission.

Debates about the strength of evidence are hardly new: think about tobacco, or climate change, or any number of environmental and workplace exposures whose lethality is now widely acknowledged.  The role of ethical and political choices about standards of proof (how much evidence is enough) in these debates is often neglected.  I began a recent article on epidemiology and social determinants of health with an analogy to the case of former professional athlete O.J. Simpson.  Acquitted of the murder of his estranged wife and her friend in a criminal trial, he was nevertheless found liable for damages in a civil proceeding initiated by the survivors of his alleged victims.  The difference simply reflects the much higher standard of proof that must be met, in common law countries, in criminal proceedings.

My points were that (a) the concept of a standard of proof is crucial for public health policy; (b) the choice of a standard of proof with respect to social determinants of health, as for environmental exposures, is a matter of public health ethics with respect to which scientists qua scientists have no special competence; and (c) unreflective insistence on a definition of scientific rigour organized around avoiding false positives, or Type I errors, can be highly destructive of health, and in particular health equity, under conditions of uncertainty.

The complexity of the causal pathways that connect macro-scale economic and social processes with health disparities means that some degree of uncertainty is inescapable.  A recent report on structural influences on obesity from the Scottish Collaboration for Public Health Research and Policy makes this point effectively, noting that “many strategies aimed at obesity prevention may not be expected to have a direct impact on BMI [Body Mass Index], but rather on pathways that will alter the context in which eating, physical activity and weight control occur.  Any restriction on the concept of a successful outcome … is therefore likely to overlook many possible intervention measures that could contribute to obesity prevention.”  Conversely, if the evidentiary bar is set high enough, it can always be claimed that nothing works, or that more research is needed … but waiting for more evidence is itself a decision about risks and benefits.  This point has been understood for decades, yet it continues to be either ignored or willfully misunderstood by (for example) some protagonists in the current debate over European policy toward endocrine-disrupting chemicals in the environment.
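The trade-off described above can be made concrete with a small simulation (an illustrative sketch, not part of the original argument; the effect size, sample size, and thresholds are arbitrary assumptions). When a real but modest harm exists, raising the evidentiary bar to avoid false positives (Type I errors) mechanically increases the rate of false negatives (Type II errors), i.e. real harms that go undetected.

```python
import random
from statistics import mean, stdev

def detection_rate(effect=0.3, n=50, z_threshold=1.96, trials=1000, seed=42):
    """Simulate repeated studies of an exposure with a real but modest
    harmful effect, and count how often the evidence clears the bar."""
    rng = random.Random(seed)
    detections = 0
    for _ in range(trials):
        exposed = [rng.gauss(effect, 1.0) for _ in range(n)]
        control = [rng.gauss(0.0, 1.0) for _ in range(n)]
        diff = mean(exposed) - mean(control)
        se = ((stdev(exposed) ** 2 + stdev(control) ** 2) / n) ** 0.5
        if diff / se > z_threshold:
            detections += 1
    return detections / trials

# A stricter standard of proof (higher z threshold, fewer false positives)
# misses the same real effect more often (more false negatives).
lenient = detection_rate(z_threshold=1.645)  # roughly a one-sided 5% test
strict = detection_rate(z_threshold=2.326)   # roughly a one-sided 1% test
print(lenient, strict)
```

Under these assumed parameters the stricter threshold detects the real harm in a substantially smaller fraction of studies, which is the sense in which a demand for "more evidence" is itself a choice about who bears the risk of error.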

Choice of a standard of proof is one of a larger class of issues and choices at the interface of science, values and politics.  Understanding that interface, and in particular its political dimension, is critical to reducing health inequity.  Thus, when I read an article that exhorts social epidemiologists to concentrate on narrowly defined questions amenable to experimental or quasi-experimental study designs that will generate “the kind of evidence wanted by policymakers,” my immediate reaction is one of revulsion.  The quality of evidence that is demanded by “policymakers” – and the term is itself curiously decontextualized – depends entirely on what those in power have at stake.  Often, no evidence or imaginary evidence is sufficient; think about the weapons of mass destruction that Iraq was confidently declared to possess, or the nonexistent jobs into which George Osborne wants to herd poor under-25s.  Producing research findings that are not “wanted” by Osborne and his kind should be viewed as an ethical imperative.  When public health practitioners and the organisations in which they work are sincerely committed to reducing health inequities in a hostile environment, progressive health researchers should provide all the support we can.  But we must choose allies and audiences with care, and often the most appropriate algorithm for our interactions with those in power is the three R’s:  Resist, Ridicule, and Replace.  More about that in my next posting.

No test, no visa: How mandatory immigration HIV testing makes Canada—and HIV—stand out as exceptional

In this guest post, Dr. Laura Bisaillon critiques the inner workings of the Canadian immigration system. She explores whether mandatory HIV testing is justified for prospective immigrants and challenges us to consider how broader socio-political relations shape such practices.

What logic prevails that sees prospective immigrants and refugees to Canada subjected to a test that would be unlawful to impose on Canadians or permanent residents except by court order? Exploring answers to this troubling question is part of the puzzle that I unravelled in a recently completed social scientific study of the Canadian immigration system and analysis of state decision-making about the admissibility of applicants with HIV within this system. During 18 months of multilingual fieldwork in Montreal, Ottawa and Toronto, I met with over 60 people who described their experiences relating to mandatory HIV testing in the Canadian immigration process. This included HIV-positive applicant immigrants and refugees, nurses and other health providers, and lawyers and civil servants, among other actors.

When I present my research findings, I find that audiences are generally unaware that the Canadian state obliges applicants to submit to HIV testing as a pre-condition for applying to immigrate. Since 2002, Canada has required HIV testing of all persons aged 15 years and above who request Canadian permanent resident status (such as immigrant and refugee persons) and temporary resident status (such as migrant workers, students, and long-term visitors from certain countries; generally countries of the Global South). Citizenship and Immigration Canada manages testing within the immigration medical examination, and tuberculosis and syphilis are the two other conditions for which diagnostics are mandatory. It is not entirely surprising that the general population is unaware of HIV testing or other details of the immigration process. The immigration system is large and opaque, organized as a perplexing, hard-to-navigate bureaucracy, and these features make it difficult, time-consuming and expensive for people to grasp and traverse. Canadian-born Canadians do not have to interact with the immigration system as immigrant hopefuls, and so its workings remain largely out of view and beyond the likelihood of critical assessment.

HIV testing is normalized as a good and necessary practice within biomedical ways of knowing and thinking about the world. The dominance of this position makes any problematizing of HIV testing difficult, and it also elides the socio-political and embodied contexts in which HIV testing happens. Approximately a half million Canadian immigration medical examinations are conducted annually worldwide. The vast majority of these take place outside of Canada, and the Canadian government actually knows little about the empirical conditions in which HIV testing occurs in these overseas locations. Findings from my research reveal problems, exclusions, and inequities within the practices associated with immigration HIV testing. Testing and practices to which it gives rise are routinized as something that ‘just must happen’ to people who aspire to be Canadians.

Empirical research findings show that there are practices associated with Canadian immigration HIV testing that are problematic. Problems occur for applicants with HIV, for contract physicians conducting HIV testing, and for the immigration system more broadly. For example, prospective immigrants are not necessarily aware they are being tested for HIV; there is a general absence of or inadequate personalized care at diagnosis with HIV; and, persons with HIV report not understanding that the immigration medical encounter is actually a filter where physicians are working within relations that are not therapeutic or in their subjective best interests.

The point is, mandatory immigration HIV testing makes Canada, as a nation—and HIV as a health condition—stand out as exceptional. Testing as a pre-condition for immigration is not inevitable. Unlike Canada, the vast majority of countries within the Organisation for Economic Co-operation and Development do not operate a policy designed to screen out applicants with HIV or that excludes persons who already take antiretroviral medication from the possibility of immigrating. Refugees and spouses are immigrant applicants who are not inadmissible on health grounds. Although in Canada HIV is increasingly compared to diabetes for its chronicity and manageability, this comparison holds only under specific conditions, and, furthermore, refugees and immigrants interviewed for this research do not experience HIV according to such classification. Empirical findings show that in practice, HIV is, in fact, ‘othered’ and made exceptional within the Canadian immigration program. The addition of HIV testing to the immigration medical examination was the first change to the exam in about fifty years. No other condition attracts the degree of institutional scrutiny and surveillance that HIV does within the Canadian immigration process.

What role is there for the citizenry, health providers, and other actors? I suggest that legal reform work on the Canadian immigration system, with a specific focus on the place of HIV within this system, is a high priority. Research evidence such as the results generated through my study can provide an important empirical basis for this work. Second, persons who work with immigrant and refugee applicants with HIV are well placed to query the latter people about their experiences with immigration medical examination HIV testing as it occurs in Canada and overseas. Individual citizens and health providers can report irregularities and problems to Canadian-based civil society organizations with specific expertise in assessing and acting on unjust practices as these are directed to persons living with HIV, including the Canadian HIV/AIDS Legal Network and the HIV and AIDS Legal Clinic Ontario. Last, we might bear in mind that the rationale for what we are asked to do in our work is not always readily evident, and sometimes what we are asked to accomplish is not defensible. In the case discussed here, it is not clear that mandatory HIV testing within the Canadian immigration medical examination is justified. By seeing our immediate workplaces and practices as parts of broader, complex socio-political relations, we are challenged to rethink how and in whose interests these places and practices work.

Fraser Institute on Health Care in Canada and Sweden: Selective Evidence, Even More Selective Conclusions

In this guest post, Ronald Labonté discusses a recent report from the Fraser Institute which compares the healthcare systems of Sweden and Canada. While the report aims to promote the privatization of the Canadian healthcare system, Labonté argues that its conclusions are ideologically driven and that the evidence it draws on must be considered in the wider sociopolitical context of both countries.  Labonté holds a Canada Research Chair in Globalization and Health Equity at the Institute of Population Health, and is Professor in the Faculty of Medicine, University of Ottawa; and in the Faculty of Health Sciences, Flinders University of South Australia. 

The May 22, 2013 report from the Fraser Institute comparing Swedish and Canadian health systems is interesting, provocative, and another example of ideology trumping evidence.

The Fraser Institute is a well-known Canadian conservative think tank that emphasizes small government, market fundamentalism and individual choice in its policy arguments. This does not detract from its report’s findings that Sweden’s health system generally performs better than Canada’s while spending a lower share of GDP; or that Sweden allows some private insurance to co-exist with its public system (between 2% and 4% of Swedes opt for such coverage), some small co-payments in its public system (with exclusions for those who find it difficult to pay), and a few privately managed hospitals. On this evidence, the report robustly concludes that Canada should therefore increase private provision of hospital and surgical services, allow private insurance to compete with its public system, and introduce co-payments (user fees) for all health care.

In doing so, the report ignores that Canada already has a large private health insurance system for non-publicly insured health care. Its private health care expenditure (about 30% of total spending) far exceeds that paid by Swedish citizens (about 15%), partly because Sweden provides free or heavily subsidized dental care and prescription pharmaceuticals, which most of Canada does not. The report also ignores the context in which Sweden’s small medical and hospital co-payment policy exists: a high-tax/transfer, high-public-spending social welfare state, still far outperforming Canada in health, poverty, unemployment and other key social indicators. In this context small out-of-pocket payments do not pose the same barrier to health care access that they might if transferred to a country like Canada, which ranks very low in the OECD league table for tax/transfers and social spending. One cannot cherry-pick an ideologically convenient public policy out of a total social welfare package.

Finally, that Sweden spends significantly less of its GDP on health care than Canada while outperforming on many health system and health outcome measures, may well be related to its physicians being salaried and much primary care being delivered by nurses. Though noting this, the Fraser Institute report simply concludes that these policies ‘will not work in Canada’ due to ‘a lack of physicians and an independent practitioner model of delivery.’ Whether Canada has a substantial lack of physicians is moot; but the report is silent on the nurse-centered, team-oriented approach to primary care that helps keep Sweden’s health care costs low and which would obviate much of the claimed doctor shortage in Canada. Although shifting some of the budgetary measures for hospitals (from global to activity-based financing) may merit consideration, how increasing private sector provision, private financing and user fees would reduce Canada’s annual health care spending, and not launch us further along the American pathway of excess costs for limited returns, is never explained. In sum, the Fraser Institute’s recent report may make for some interesting reading, but it should be read with eyes critically wide open.


Thatcher’s Trickle-Up Economics Made Us Sick

In this guest post, Dr. Roberto DeVogli discusses the relationship between the policy agenda of Margaret Thatcher and important social determinants of health. A parallel is drawn between Thatcher’s economic reforms and the austerity policies plaguing Europe today. Dr. DeVogli is an Associate Professor at the Department of Public Health Sciences, University of California Davis and a Senior Lecturer at the Department of Epidemiology and Public Health, University College London.

Last week Baroness Margaret Thatcher died. Since her death, media outlets have been discussing her achievements and legacy as a political leader. How will she be remembered by history?

It depends. Her death provoked very different reactions. While conservatives still praise her as one of the best political leaders in British history, liberals remember her for the heartless, cynical economic and social policies she implemented.

This polarization of emotional and political opinion is due to a simple fact: Thatcher’s policies produced large effects on the distribution of power and wealth in society. Her reforms, as a whole, acted as a sort of Robin Hood in reverse. During her administration, inequalities in income and wealth sharply increased: while the top 1% share of total income in England rose from 5.93% in 1979 to 9.8% in 1990, the worst-off members of society saw their real income decline or stagnate. Over the same years, the Gini index, one of the most popular indicators of income inequality in society, increased by 26%: from 31 in 1981 to 39 in 1991.

These economic changes also affected the health of British people. Although the effects of Thatcher’s policies on population health have not been studied in detail, there are plausible mechanisms explaining how her socioeconomically divisive reforms harmed the health of the British population. Evidence on the effects of inequality on health has shown that health conditions are generally worse in societies where income gaps are larger. Why? A highly unequal distribution of wealth is correlated with higher poverty and poorer social, health and education services. Compared to more egalitarian nations, more unequal societies have poorer outcomes in terms of mental health, child wellbeing, social mobility, teenage pregnancy, stress, drug abuse, obesity, anti-social behavior, crime and imprisonment.

Thatcher’s policies were particularly deleterious for the health and social conditions of people living in the most deprived areas of England. Indeed, during the years in which she was in power, inequalities in health widened considerably: research shows that the mortality differentials between the most deprived and least deprived electoral wards of England increased significantly during the years of her leadership. Similar findings were reported in Glasgow, where the health gap between rich and poor widened over the period in which Thatcher governed the country.

What were the specific policies adopted by Thatcher that made Britain more unequal from an economic and health standpoint? What types of reforms did she implement? Probably, the most important reason for remembering Margaret Thatcher is her political leadership in promoting a paradigm shift in political economy: the decline of the Keynesian welfare state and the rise of neoliberalism. Neoliberalism is an economic ideology that advocates indiscriminate privatization, liberalization, deregulation of labor and finance, austerity, and free markets (for everybody but the rich). As I wrote in my book Progress or Collapse: the Crises of Market Greed, the neoliberal model of political economy is based on two “articles of faith”: the “greed creed” and the belief in “the market God.” The greed creed states that people are nothing more than selfish profiteers in perpetual competition for profit and wealth. The belief in the market God is the conviction that all social and human affairs are best regulated as “free” market exchanges. Joseph Stiglitz, author of Globalization and Its Discontents, once defined neoliberalism as “market fundamentalism” because of the dogmatic assumption that free markets are essentially self-correcting and that government regulations and interventions are unfair interferences with the optimal functioning of the economy.

For sure, Thatcher’s faith in the “market greed” doctrine was well reflected in her brutal policy agenda. As a political leader, she was unusually bold and dogmatic: during a British Conservative Party policy meeting, while listening to a presentation, she interrupted the speaker by slamming down onto the table one of the books of the founder of neoliberalism, Friedrich von Hayek, and screaming: “This is what we believe!” Thatcher probably knew that the neoliberal market revolution generated a deterioration of socioeconomic conditions among the most unfortunate sectors of society. Yet, she believed that it was a price worth paying. In her opinion, “market fundamentalism” was not only desirable, but also inevitable. As she once put it: “I do not know whether [neoliberalism] will work or not. Why is there no critique of it? Because no one has an alternative.”

As mentioned before, the major casualties of Margaret Thatcher’s neoliberal revolution were manual workers and the labor unions, which she once defined as “the enemy within.” Her crusade against organized labor, one of the most important democratizing forces in society, resulted in a sharp decline of trade union membership – from 13.5 million in 1979 to fewer than 10 million by the time she left office in 1990. Her vitriolic assault on coal miners is “memorable”: her policies contributed to the closure of about 150 mines and the loss of thousands of jobs, with devastating economic and social effects for a large number of marginalized communities. Thatcher’s class war against workers and labor unions was complemented by drastic cuts to social security and austerity policies, the same failed reforms that are causing unnecessary human and economic chaos in Europe these days.

But perhaps the worst reforms that Thatcher promoted were the policies which deregulated and liberalized the banking sector. These policies advanced the power of financial elites and encouraged large-scale speculation at the expense of the real economy. With the approval of the UK Banking Act in 1979 and the 1986 Big Bang Day, which together deregulated the London Stock Exchange, Thatcher basically helped to transform the City of London into an offshore haven. Her policies created the conditions for the proliferation of the infamous mortgage-backed securities that led to the collapse of UK banks such as Northern Rock. Of course, the financialization of the UK economy has been a bipartisan project, and the Blair administration is equally responsible for the failed policies that led to the 2008 financial meltdown. Yet, make no mistake: Margaret Thatcher championed deregulation and the neoliberal market revolution with a religious fervor hardly seen before.

The long-term effects of her policies are now becoming more apparent as the 2008 financial crisis generates its victims. Since 2008, in both Europe and the US, there has been a sharp rise in unemployment, widespread economic distress and mental health deterioration, especially among those people who lost their jobs, houses, savings and businesses because of the financial downturn. Thousands of people in Europe and in the US paid the ultimate cost of the financial downturn: a recent study by Barr and colleagues published in the British Medical Journal estimated that, between 2008 and 2010, there were about 1,000 excess suicides attributable to the Great Recession in the UK alone.

It is obviously impossible to ascertain with precision and rigor whether Thatcher’s policies were really responsible for these deaths and increases in mortality gaps between rich and poor in England. Nevertheless, the net impression is that her policies produced adverse health conditions among the very workers and low-income communities that she fought against.

But there is another reason why Lady Thatcher’s devotion to neoliberalism made us sick: by promoting the idea that “there is no such thing as society”, but only families and individuals whose only aim in life is to maximize profits, she justified the greed of the very rich to accumulate money “beyond the dream of avarice” and pardoned their lack of compassion toward the most unfortunate sectors of society.

Trade and Public Health: What’s missing?

In a piece published in the Lancet last Friday, public health researchers warn of the negative public health impacts of the Trans Pacific Partnership Agreement (TPP), also known as ‘the biggest trade deal you’ve never heard of’.

The TPP is a large regional trade agreement being negotiated by 11 countries around the Pacific Rim—Australia, Brunei, Canada, Chile, Malaysia, Mexico, New Zealand, Peru, Singapore, USA, and Vietnam.

The authors draw attention to two major ways the Agreement is likely to negatively impact public health. First, it is argued that the TPP will reduce access to medicines via strong protections on intellectual property rights. Second, the authors note that one of the Agreement’s major clauses (related to investor–state dispute settlement provisions) will limit the ability of governments to regulate industries with major health impacts, such as those producing tobacco, alcohol and highly processed foods.

The outlined arguments are compelling and worth a read. They are also supported by similar warnings being cast across the public health sphere.

However, I wonder if there isn’t more to the picture. The pathways identified as linking the TPP to health are largely about contextualizing risk: that is, they concern the contexts in which people are exposed to individual-based risk factors, such as access to medicines and unhealthy behaviours like smoking, alcohol consumption and the consumption of unhealthy foods. In this way, these pathways can largely be characterized as operating within a bio-medical paradigm.

However, as Link and Phelan importantly acknowledged, even if we change the contexts within which people are exposed to individual-based risk factors, fundamental determinants of health will continue to shape population health profiles. This is because fundamental determinants of health—things like income, power, knowledge, prestige—are associated with a range of diseases and health outcomes. Moreover, we live in a world where new diseases and risk factors are always presenting themselves, and those with greater resources will always be better positioned to protect themselves. This idea is similar to perspectives which highlight the health importance of factors outside the bio-medical domain, factors for instance related to people’s social position, like income and employment, and the distribution of wealth across populations. The idea here is that these social determinants have impacts on health outside of their role in shaping individual health behaviours.

So while the pathways thus far identified as linking the TPP to health are important, are they the only ways through which the Agreement might impact health? Specifically, are there ways in which it may also impact these fundamental, social determinants of health?

In a Wall Street Journal piece, author Philip Stevens argues that lamenting the TPP’s influence on access to medicines is short-sighted, since it ignores the historically important role trade has played in generating the wealth of many of the negotiating countries, such as Singapore, and thereby their subsequent gains in health via increased spending capacity on important health-promoting policies like water and sanitation programs. However, this notion was forcefully rebuked in a piece by Public Citizen, which among other arguments illustrates that economic growth in Singapore contracted during periods of increased trade liberalization, as it also did in other countries undergoing liberalization policies. Therefore free trade can be categorically credited neither with higher growth nor with improvements in health outcomes.

But saying that trade liberalization does not uniformly lead to growth or that economic growth does not uniformly lead to improved health, does not mean that trade does not impact health through economic pathways.

The TPP, for instance, is expected to have wide implications for the textile and clothing sector, which many middle- and low-income countries rely on as an important source of employment. For example, by removing tariffs on textile imports from much of Asia, the Agreement is likely to negatively impact textile-producing countries in the Caribbean and Central America. El Salvador alone is expected to lose 22,000 jobs in its textile market (and another 15,000 indirectly). On the flip side, textile markets in Asia, and in Vietnam particularly, are expected to gain.

Employment, as a key factor shaping people’s social position, is a fundamental and social determinant of health. But how these shifts in employment will impact health will largely depend on countries’ labour market conditions as well as the level of protection offered by policies like unemployment insurance.

Current acknowledgements of the links between TPP and health call for the incorporation of health impact assessments within international agreements, strengthened representation of public health within economic negotiations and greater coherence between trade and health policy.  Expanding our understanding of the links between this agreement and health not only strengthens these calls, but is crucial to the success of these undertakings.


The Real American Exceptionalism: Our Lives Are Stressful, Unhealthy and Short

In this guest post Dr. Mark Santow discusses American Exceptionalism in the context of a recently released report from the Institute of Medicine. This report shows the relatively poor health status of Americans in relation to their international peers. Dr. Santow is an Associate Professor and Chair of the History Department at the University of Massachusetts-Dartmouth and blogs at Chants Democratic.

Back in October 2012, New York Times writer Scott Shane described the idea of ‘American exceptionalism’ as an “opiate,” inducing a kind of hubristic national stupor that prevents us from seeing things as they truly are.

The idea that God or history has uniquely blessed the United States, justifying a proselytizing posture toward the rest of the world, is an old one.  It returned to our national discourse in the last couple of years, when conservatives accused President Obama of backsliding in his belief that Americans are the ‘chosen people.’

As Shane argued, the idea that American identity is a kind of calling can have positive consequences:  “this national characteristic…may inspire some people and politicians to perform heroically, rising to the level of our self-image.”  But it can also be deeply dysfunctional, as politicians of all stripes trip over one another to reassure Americans “that their country, their achievements and their values are extraordinary,” while profound problems are left unaddressed.

American patriotism has always — and uniquely — had this ‘Stuart Smalley‘ taint to it, but as we collectively whistle through the graveyard of apparent national decline, it seems to have over-ripened a bit.  If John Winthrop’s idea of America as the ‘city on a hill’ drawing the “eyes of all people” was to be more than just an expression of jingoism, it demanded (and demands) a delicate balance between description and aspiration.  When it morphs into mere self-affirmation, however, we Americans become a danger to ourselves — and to others, who increasingly keep their eyes on us for fear of what might happen if they don’t.

Sadly, as a recent international health study bracingly reminds us, most of the ways in which the United States is exceptional today are negative.

The Institute of Medicine just released a study comparing American health outcomes with those of other industrialized countries.  And all rhetoric about the U.S. having the ‘best health care system in the world’ aside, the realities are shocking.  Despite spending far more per capita on health care than any other nation, the data make it abundantly clear that the American way of life has become nasty, increasingly brutish, and comparatively short.

The optimistic takeaway here is that almost everything described below is attributable to (and capable of being ameliorated by) public policy — in health, but in much broader areas as well.  In other words, if we can once again rediscover our aspirational identity as Americans, change is possible.

Physician Steven Woolf, a professor at Virginia Commonwealth University, chaired the panel that wrote the report.  He and his co-authors were “stunned by the findings.”  Americans “have a long-standing pattern of poorer health that is strikingly consistent and pervasive” over a person’s lifetime, the study found.

The U.S. ranks at or near the bottom on virtually every health measure:  life expectancy, obesity, diabetes, heart disease, and homicide.  We have much higher rates of death before the age of 50, which account for most of the gap between the United States and our peer nations.

According to the report, most of these poor health outcomes are attributable to poor childhood health.

The USA has had the highest infant mortality rate of any developed country for several decades, due partly to a high rate of premature birth and low birth weight.  Dr. Woolf and his colleagues also note that the U.S. has by far the highest rate of child poverty, though they don’t really touch on its role as a possible cause.

Here is a graphic representation of the infant mortality gap:

This gap, interestingly, has a history.

The gap between the United States and its peer nations was relatively wide in the 1950s and early 1960s, and then rapidly closed.  Why did it close, and so quickly?  Policy — the ‘War on Poverty,’ including Medicare and Medicaid, among other programs, which for the first time began to connect millions of Americans to the health care system (and to food) on a relatively consistent basis.

And then, right around 1980, the gap re-emerged.  This coincided, not surprisingly, with Reagan-era cuts in public health and social programs; but it also coincided with an ongoing increase in economic inequality and insecurity.  Since 2000, the gap between the U.S. and its peers has expanded.  While infant mortality rates among African-Americans and Hispanics are high, this doesn’t explain the gap — white Americans also fare worse than their counterparts abroad.

In short, while we have continued to make progress over the past two decades, our peers have had much greater success:  “although U.S. infant mortality declined by 20 percent between 1990 and 2010,” the report notes, “other high-income countries experienced much steeper declines and halved their infant mortality rates over those two decades.”

The report doesn’t really offer any explanations for the infant mortality gap, or for the poor health performance of the U.S. more generally, beyond behavioral factors like drug abuse, calorie consumption, not wearing seat belts, and the ubiquity of handgun violence.

I’m not a health expert, but I do think we need to consider one factor:  inequality.

While economic inequality within nations wasn’t the focus of the Institute of Medicine report, we know from research by epidemiologists Richard Wilkinson and Kate Pickett that it strongly correlates with (and is reproduced by) poor health outcomes — such as infant mortality (see above).

This is true in two senses.

First, nations with higher rates of economic inequality tend to have poorer health outcomes across the class structure — in other words, while health outcomes are better the richer or more educated one is, they will still be worse than those of comparably placed people in more equal countries.  The Institute of Medicine report confirms this.

Second, economic (and educational) inequality and health outcomes are strongly correlated within societies.  In the U.S., life expectancy for white women without a high school diploma is 73.5 years, compared with 83.9 years for white women with a college degree or more.  For white men, the gap is even larger: 67.5 years for the least educated white men compared with 80.4 for those with a college degree or better.

Because the United States is drastically more unequal than any other comparable nation, the socio-economic gradient is much sharper here — and it’s getting worse.  Indeed, we now have evidence that the life span of the least educated white Americans has actually contracted, falling a full four years since 1990.  The numbers are worse for women.  Some of this is attributable to changes in the labor market:  the share of working-age adults with less than a high school diploma who did not have health insurance rose to 43 percent in 2006, up from 35 percent in 1993.  While full implementation of the Affordable Care Act in the coming years may help somewhat, the deeper problems of rising inequality and economic insecurity — and the debilitating stress and anxiety that accompany them — remain.

In other words, our unequal and insecure American way of life is making us sick.

While the mechanisms that connect inequality with poor health outcomes are many and hard to disentangle, it seems clear that stress and insecurity are critical.  Both affect the cardiovascular and immune systems, both are found in greater abundance in unequal societies — and their effects on the young are particularly devastating.

Thomas McInerny, president of the American Academy of Pediatrics, reacted to the Institute of Medicine study by pointing to recent research on the long-term impact of “toxic stress” on the health and cognitive development of babies and toddlers.   “It’s becoming increasingly clear that the first 1,000 days of life are critically important for children’s development, and can determine the course of their life span from then on,” McInerny says. “Investing in children in the first three years of life provides higher returns, for improving their productivity as adults, compared to intervening later.”

Back in 2007, UNICEF put together an index of child well-being.  It measured material and educational factors, health and safety, peer and family relationships, surveys of subjective well-being, and behavioral risks.  When Wilkinson and Pickett lined this index up with rates of income inequality, they found something striking:  the more unequal a society is, the worse its rates of child well-being — not just among poor children, but overall:

These correlations and comparisons make one thing clear:  America’s poor health outcomes, particularly for our children, can be ameliorated.  Why?  Because the differences between the U.S. and its peers, ultimately, are policy differences — and thus are amenable to collective action.  We can make America healthier (and more productive) by making it less unequal, and by investing in pre-natal care, early childhood health, and high quality and universal pre-school.  “We already know what to do,” Dr. Woolf says. “It’s more a matter of having the resolve and resources to actually do it.”

Notwithstanding the false scarcity of our current austerity politics, we have the resources.

Notwithstanding the libertarian and narcissistic braying of the privileged, our well-being is ultimately inseparable from that of our fellow Americans.

Whether we have the resolve to see this will ultimately determine whether the term ‘American exceptionalism’ serves to damn or praise our national experiment.

Curb the Spread of the Flu: don’t eat at restaurants that don’t provide paid sick leave

Photo by Flickr member Mcfarlandmo, available under a Creative Commons Attribution-Noncommercial license.

According to the Centers for Disease Control, the US is in the midst of the worst flu season it has seen in a decade. A state of emergency has been declared in Boston, where at least 18 people have died because of the flu. The CDC recommends that people with flu-like symptoms stay home and avoid contact with others, except to receive medical care. But as Think Progress reports,

“for a huge number of American workers, that option doesn’t exist due to a lack of paid sick days. 40 percent of private sector workers and a whopping 80 percent of low-income workers do not have a single paid sick day. One in five workers reports losing their job or being threatened with dismissal for wanting to take time off while sick.”

So what’s a country to do?

One idea is to avoid restaurants that don’t provide paid sick leave — in the food industry the potential for spreading disease is high, especially with 79% of workers reporting that they are unable to take a paid sick day.

While this would do little for sick workers this season, it does have the potential to reduce the spread of disease, and perhaps it could act as an incentive for employers to provide better benefits to their workers (although there are other reasons why providing paid sick leave is good for business).

Which restaurants don’t provide paid sick leave to their employees?

Each year the Restaurant Opportunities Centers (ROC) United publishes a guide on the working conditions in popular American restaurants. In their latest guide, the following restaurants are noted for not providing paid sick leave:

7-Eleven
Applebee’s
BJ’s Restaurants
Bob Evans Restaurants
Bojangles’ Famous Chicken ’n Biscuits
Bonefish Grill
Boston Market
Buca di Beppo
Buffalo Wild Wings
California Pizza Kitchen
Capital Grille
Captain D’s
Carino’s Italian
Carl’s Jr.
Carrabba’s Italian Grill
Cheddar’s Casual Café
Chick-fil-A
Chili’s Grill & Bar
Church’s Chicken
Cold Stone Creamery
Corner Bakery Café
Cracker Barrel Old Country Store
Dave & Buster’s
Denny’s
Dunkin’ Donuts
Famous Dave’s
Firehouse Subs
Five Guys Burgers & Fries
Fleming’s Prime Steakhouse & Wine Bar
Fuddruckers
Godfather’s Pizza
Golden Corral
Hardee’s
Houlihan’s
Huddle House
Jason’s Deli
Jersey Mike’s Subs
Johnny Rockets
Krystal
Little Caesars Pizza
Logan’s Roadhouse
Long John Silver’s
LongHorn Steakhouse
Luby’s Cafeteria
Maggiano’s Little Italy
McCormick & Schmick’s
McDonald’s
Moe’s Southwest Grill
Morton’s The Steakhouse
Noodles & Company
O’Charley’s
Old Chicago
On the Border
Outback Steakhouse
Papa Murphy’s
Perkins Restaurant & Bakery
Popeyes Louisiana Kitchen
Portillo’s Hot Dogs
Potbelly Sandwich Works
Qdoba Mexican Grill
Quiznos
Rainforest Café
Rally’s Hamburgers
Red Robin Gourmet Burgers
Ruby Tuesday
Ruth’s Chris Steak House
Sbarro
Sheetz
Starbucks
Steak ’n Shake
Subway
T.G.I. Friday’s
Taco Bell
The Melting Pot
Uno Chicago Grill
Whataburger
Wienerschnitzel
Wingstop
Yard House

A top 5 list of the best public health top 10 lists

Commemorating each New Year is an endless supply of top 10 lists. Typically these lists recount and rank some preoccupation of the twelve months prior (books, movies, innovations, photos, you-name-it), while others offer predictions or resolutions for the year to come. Public health is no exception.

Lists in general can tell us a lot about ourselves—our pursuits, anxieties, desires—but when it comes to well-being, the majority of top ten lists portray very individualistic, very bio-medically skewed notions of health.  For this reason I’ve compiled a list of the top 5 public health top 10 lists that approach health with a greater consideration of the social determinants of health (SDOH). I’m only listing five because, for all the top 10 health-related lists out there, I couldn’t find many that took this wider notion of health into account. If you know of any others please leave a note in the comments section.

Here goes:

5. The 2×2 Project’s “Public Health 2012: The Top 10 Stories of the Year”—This list discusses some of the major public health stories of the year, including US health reform, the NYC soda ban, and the health impacts of Hurricane Sandy. It made the cut for approaching these stories with a consideration of their political dimensions.

4. Surround Health’s “Public Health Top 10 List for 2013”—This is a list that looks into the future and presents a ranking of “public health milestones to look forward to in the coming year”. While the previous list highlights major public health stories and their political dimensions, I’ve included this list because it highlights a range of political issues and in turn discusses how they influence health. Examples include the health impacts of US immigration reform, gun control, and cuts to social security.

3. Croakey’s “Recommended reading for your summer holidays, from Croakey contributors”—OK, so I’ve bent the inclusion criteria a little for this one, since it’s not technically a “top 10,” but like I said it was a struggle to find SDOH-friendly lists, and you’ll thank me anyway because this has some really great reading recommendations from some very SDOH-inclined public health folk.

2. Corporations & Health Watch’s “Books on Corporations and Health from 2012”—This book list comes from an organization concerned with how corporate industry practices influence health—if you didn’t find enough reading material for 2013 in the list prior, look no further.

1. Globalization and Health’s “Most viewed articles in the past year”—OK, I bent the rules again. This unofficial list comes from Globalization and Health, an open access journal and a great resource for keeping up to date on SDOH research.

And don’t forget, Healthy Policies is in the running for its own “best of” list over at Healthline’s Best Health Blog of 2012 contest. We’re the only SDOH blog in the running and just barely keeping up with the current top 10 contenders—mostly lifestyle and weight-loss blogs—so make sure you vote every day until the 15th of February here!


Tackling Obesity: Should the UK take public health cues from the US?

A new report from the Royal College of Physicians (RCP) notes that nearly a quarter of the UK population is obese, a figure of expanding waistlines that trails only the US. Given the RCP’s objective of improving clinical conditions, it’s not surprising that the report focuses on the medical aspects of addressing obesity, concluding that “the healthcare system in Britain must adapt to the demands of an increasingly obese nation”.

However, in its recommendations for action, the report also directs attention beyond the boundaries of clinical care by calling for a national leader to spearhead obesity prevention efforts and to maintain a spotlight on the growing obesity crisis in the UK. According to Professor John Wass, chair of the working party that produced the report, this national leader should be someone like First Lady Michelle Obama or New York City Mayor Michael Bloomberg. Wass says,

“I think we could have a senior figure in London, rather like the mayor of New York, who has led on having smaller measures of Coca-Cola in cups and other things. Michelle Obama has had a huge effect on obesity and getting things labelled.”


Leadership is certainly a crucial aspect of achieving public health goals; but are US leaders focused on the right messages when it comes to addressing obesity?

It is now widely acknowledged among public health professionals that efforts aimed at addressing obesity must account for the broader context within which personal choices are made. As Professor Lindsey Davies, president of the Faculty of Public Health, notes:

“Obesity is not only caused by how much we each eat or drink: if tackling it were as simple as telling people to eat less and move more, we would have solved it by now. Our chances of being obese are also affected by factors like whether we have easy access to affordable fruit, veg and other healthy foods, and if it is safe to let our kids play outside.”

More fundamentally, obesity is also influenced by the broader structural determinants of health that create socio-economic inequities. The structural determinants of health can be thought of as the policies which create unequal distributions of resources important for health, resources which influence people’s social position, like income, employment, education, knowledge, and power.  In rich countries, for example, compelling evidence links obesity to the distribution of income across populations: in countries where the gap between the rich and the poor is wider, we find higher rates of obesity.

Gore and Khotari (2012) differentiate between three types of policy initiatives whose aim is to improve nutrition and increase physical activity. Initiatives can be:

  1. Lifestyle-based: where the focus is on changing people’s behaviour, for example through advocacy campaigns to encourage healthier eating habits;
  2. Environment-based: where efforts are aimed at influencing the environment in which personal choices are made, for example by banning the sale of unhealthy foods at schools; or
  3. Structural-based: where the aim is to improve the structural determinants of health directly, for example by broadening “the distribution of power, income, goods and services across the population”.

The authors argue that the most effective initiatives are those based at the structural level, and that public health “should not settle for programs that bring about changes in lifestyle and the immediate environment”.  In fact, they note that individual- and environment-based initiatives may even harm health equity by differentially benefiting those who are better positioned to take advantage of them. This has been the case with many smoking cessation campaigns that target services at the individual level. For example, in a smoking cessation program run by the NHS, the quit rate for the most disadvantaged smokers was only half that achieved in the highest socio-economic group, despite equal access to the services. A potential explanation is higher nicotine dependence among the most socially disadvantaged. Other studies have similarly shown that “anti-smoking messages have been more successful with better off people”.

So where do Michelle Obama’s and Mayor Bloomberg’s obesity initiatives fall within Gore and Khotari’s (2012) characterization of healthy living initiatives?

Michelle Obama gained her leadership role in the US with the creation of Let’s Move: a campaign that aims to end childhood obesity through better nutrition and increased physical activity. Let’s Move conceptually recognizes obesity as a multi-faceted socio-economic issue but is predominantly an environment-based initiative, one that aims to improve the contexts in which personal choices surrounding nutrition and physical activity are made, for example, by improving the quality of food within schools and building playgrounds to promote greater physical activity among children.

Mayor Bloomberg poses with different sizes of soft drink containers. Photo: Chad Rachman/New York Post

Mayor Bloomberg is famed in certain public health circles for instituting a ban on large-sized sugary drinks in New York City. This initiative is also, categorically, environment-based since it aims to alter the context in which people make their decisions about what they drink.

While both Michelle Obama’s and Mayor Bloomberg’s initiatives are sensitive to the fact that people’s choices are shaped by their environments, neither tackles the fundamental, structural determinants of health. It is also worth noting, as discussed in a previous Healthy Policies post, that the Mayor’s initiative should be considered within his broader health agenda, one in which he is noted to have denied the links between income inequality and health.

Indeed, if leadership is sought to improve obesity rates, the UK should be taking cues not from the US but from health and nutrition proponents within its own borders.

The UK is much more advanced in terms of the attention given to the structural determinants of health. This is especially true when compared to the US, where public health efforts continue to compensate for the negative impacts of public policies rather than identify them as the sources of health problems. Sir Michael Marmot, professor of epidemiology at University College London, is a well-recognized leader on the structural determinants of health; UK-based academics Kate Pickett and Richard Wilkinson authored the widely discussed The Spirit Level, a book which very much directs attention to the structural determinants of health.  In terms of nutrition, UK food campaigners and experts have drawn attention to the role of rising food prices and shrinking incomes in both increasing people’s consumption of fatty foods and reducing their intake of fruit and vegetables, especially for those with the lowest incomes.

Therefore, while the RCP’s call for an obesity-focused figurehead seems appropriate, the UK would be better served by leaders whose messages align with the health proponents found within its own borders rather than those across the pond.

On a semi-related note:

Healthy Policies is in the running for Best Health Blog of the year, but we need your votes! In addition to bragging rights, there are monetary prizes involved which would help cover the annual costs of maintaining the website. While we held the lead for a few days, lifestyle and, coincidentally, weight-loss blogs are bringing in a lot of votes. We are the only social determinants of health blog in the running! You can vote once every 24 hours here until the 15th of February.


Walmart’s free healthcare plan and why strikers shouldn’t care

Photo by Flickr member peoplesworld. Available under a Creative Commons Attribution-Noncommercial license.

Mammoth retail giant Walmart announced last week that it will cover 100% of healthcare costs for its US employees needing specialty heart, spine and transplant surgeries. And that’s not all.  For those needing treatment in what it calls its ‘Centers of Excellence’ program, the company is also offering an all-expenses-paid trip to some of the nation’s most prestigious hospitals. Coincidentally (but perhaps not), this news comes amid the first strike ever launched against the retailer in its 50-year history, with protests currently spreading across 28 stores in 12 states.

What are the health implications of this program and what do they mean in the context of worker strife? Examining the initiative on its own suggests that any beneficial impact the program might have on worker health is severely limited.

Sally Welborn, senior vice president of the chain’s global benefits says, “We devoted extensive time developing Centers of Excellence in order to improve the quality of care our associates receive”.

But how many of Walmart’s ‘associates’ will actually be able to benefit from this program? To benefit, workers must first be covered under the retailer’s healthcare plan, a group whose size is not only elusive but most likely dwindling. In 2009, the company claimed that 52% of its 1.4m employees were covered; however, this was before it eliminated health benefits for its part-time employees and hiked premiums for its full-time workers. Since then, the retail giant has declined to give figures on those covered. Making Change at Walmart, a Walmart watch group, estimates that for an average employee who earns $8.81/hr and works 34 hours per week, some of Walmart’s 2012 healthcare plans would cost between 77% and 104% of the employee’s annual gross income. This perhaps explains studies which show that Walmart workers are more likely than others in the industry to rely on government benefits, as well as criticisms that taxpayers are subsidizing the company by paying the healthcare costs of workers who are not insured on the company’s plan.
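To see just how striking that estimate is, here is a quick back-of-the-envelope calculation. This is a sketch only: the wage, hours, and percentage figures come from the Making Change at Walmart estimate cited above, while the assumption of 52 paid weeks per year (and the resulting dollar figures) is mine, for illustration.

```python
# Back-of-the-envelope arithmetic for the healthcare-cost estimate above.
# Assumption (not from the watch group's report): 52 paid weeks per year.

hourly_wage = 8.81       # dollars per hour, per the estimate
hours_per_week = 34      # hours worked per week, per the estimate
weeks_per_year = 52      # assumed: year-round work, no unpaid weeks

annual_gross = hourly_wage * hours_per_week * weeks_per_year
low_estimate = 0.77 * annual_gross    # plan costing 77% of gross income
high_estimate = 1.04 * annual_gross   # plan costing 104% of gross income

print(f"Annual gross income: ${annual_gross:,.2f}")
print(f"Estimated annual plan cost: ${low_estimate:,.2f} to ${high_estimate:,.2f}")
```

On these assumptions, an average employee grosses roughly $15,600 a year, so some 2012 plans would consume nearly all of it, or more than all of it, which makes the reliance on government benefits noted above unsurprising.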

The Centers of Excellence program is problematic enough in that its impact on worker health will be severely limited; even more problematic is the false sense of concern for workers’ lives and health that it conveys. For while the program certainly is a great initiative for those who can afford the company’s insurance, most likely its upper managers and executives, the well-being of the majority of its workers is clearly not the company’s primary concern.  Moreover, while the program’s stated aim is to improve the quality of care for its employees, Walmart consistently disregards workers’ quality of life. Adverse working conditions have become a hallmark of Walmart’s employment model, brought more sharply into the public’s focus by various corporate watch groups, journalistic first-hand accounts like Barbara Ehrenreich’s Nickel and Dimed, and now by the growing display of Walmart workers walking off the job.

In this context, the announcement of the new program, whether deliberate in its timing or not, has the potential to direct attention away from the concerns being voiced by striking workers. Not only does it generate publicity that distracts from legitimate calls for living wages and better working conditions; it also implicitly challenges workers’ valid attempts to improve the conditions of their lives, covertly throwing into question the basis on which they are protesting. In other words, it provokes the question: why are workers striking if Walmart is so obviously concerned with worker well-being? Yet any improvement in workers’ health is much more likely to come from the demands being voiced in opposition to the company’s current modus operandi. Indeed, if striking workers are able to secure higher wages and better working conditions, the health improvements are likely to ripple beyond individual workers to their families and their broader communities.