14 Jan 2020
Blogs - Evidence-Informed Decision-Making
One year ago today, the Foundations for Evidence-Based Policymaking Act (Evidence Act) became law. Building on the recommendations of the U.S. Commission on Evidence-Based Policymaking, the Evidence Act and complementary guidance from the White House Office of Management and Budget on federal data policy have set work well underway to create a growing appetite for research evidence within the federal government’s Executive Branch agencies. These efforts also aim to reduce hurdles to the use of government data within agencies, between agencies, and with the external research community, where much of the relevant evidence is generated.
I am Dean of a school populated by faculty who conduct policy research using such government data and teach students who will graduate and staff many federal agencies. How should our role, as external researchers and educators, be affected by implementation of the Evidence Act? One way that we can help with implementation is to focus on… implementation.
Executive agencies spend most of their time implementing policies legislated by Congress. This implementation process is wide-ranging – from decisions as visible and charged as changing fuel economy requirements for automobiles to nearly invisible and apparently innocuous choices, like how to arrange the paragraphs in an informational letter to welfare program recipients to promote effective use of benefits. Researchers outside government, however, often have access to information about decision-making and relevant data only at the legislative policy level. Their research, therefore, tends to evaluate the impact of an overall policy decision rather than the nuances of its implementation.
One promising outgrowth of the Evidence Act is the chance to narrow this mismatch. Agency learning agendas, mandated by the new law, can alert researchers to potentially compelling implementation choices at the agency level. Improved access to administrative data can give researchers the granularity of information needed to assess such decisions. As a corollary benefit, if external researchers focus attention on areas where agencies really need their insight, they can build partnerships with agency staff that enable them to use agency data to better inform analyses at the policy level as well.
The potential for this kind of synergy – simultaneously informing agency micro-implementation decisions and generating broad new knowledge – struck me in reading a recent study in my own area of health policy research. In 2015, some 6.1 million U.S. tax returns were subject to individual mandate penalties because someone on the return had failed to obtain health insurance. The Treasury Department was interested in learning how best to design outreach letters encouraging this group to buy coverage in 2017 – a typical agency micro-implementation decision. In this case, though, there wasn’t enough funding available to conduct outreach to all eligible returns. Ithai Lurie and Janet McCubbin of the Treasury Department, together with Jacob Goldin, a Stanford law professor, designed an experiment in which they randomly assigned returns to one of several outreach letter formats or to a control group that received no outreach letter at all. They then used Treasury Department data on 2017 returns to see which outreach strategies worked best to encourage health insurance take-up.
But the team didn’t stop there. They merged the Internal Revenue Service (IRS) data with the Social Security death file to see whether the changes in health insurance take-up induced by their experiment affected mortality outcomes. That is, they built on their implementation research to evaluate the effects of the underlying program itself. Goldin, Lurie, and McCubbin found that the additional coverage induced by their more effective intervention letters actually reduced mortality among middle-aged adults over a two-year follow-up period.
The Goldin, Lurie, and McCubbin study is a great illustration of rigorous evidence development on implementation effectiveness. It’s also the first large-scale experimental study to show that health insurance reduces mortality. And it couldn’t have happened without collaboration between external researchers and agency staff, and without access to large-scale administrative data.
There are lessons in that collaboration for us as educators as well. We need to encourage our faculty to examine agency learning agendas, to put our insights to work where they are most needed. We need to educate our students — future agency staffers — about the value of data and about the array of approaches and perspectives available in the external research community, so that they actively welcome and seek out research partners.
Finally, as external researchers, we should take the same approach to the Evidence Act as we would to other important federal legislation – evaluate it! At this stage, the right focus is on evaluating the implementation choices made at the agency level, so that we develop an evidence base to make better use of evidence in policymaking.
As agencies proceed in implementing the Evidence Act during its second year, there is much promise for changing the relationship between the government and research community for the better. Success will require strong collaborations and partnerships, promising tremendous gains for producing and using meaningful evidence in coming years.
About the Author:
In 2013, Sherry Glied was named Dean of New York University’s Robert F. Wagner Graduate School of Public Service. From 1989 to 2013, she was Professor of Health Policy and Management at Columbia University’s Mailman School of Public Health, where she chaired the Department of Health Policy and Management from 1998 to 2009. On June 22, 2010, Glied was confirmed by the U.S. Senate as Assistant Secretary for Planning and Evaluation at the Department of Health and Human Services, and she served in that capacity from July 2010 through August 2012. She had previously served as Senior Economist for health care and labor market policy on the President’s Council of Economic Advisers in 1992-1993, under Presidents Bush and Clinton, and participated in the Clinton Health Care Task Force. She has been elected to the National Academy of Medicine and the National Academy of Social Insurance, and she served as a member of the Commission on Evidence-Based Policymaking.
Glied’s principal areas of research are health policy reform and mental health care policy. Her book on health care reform, Chronic Condition, was published by Harvard University Press in January 1998. Her book with Richard Frank, Better But Not Well: Mental Health Policy in the U.S. since 1950, was published by The Johns Hopkins University Press in 2006. She is co-editor, with Peter C. Smith, of The Oxford Handbook of Health Economics, published by Oxford University Press in 2011.
Glied holds a B.A. in economics from Yale University, an M.A. in economics from the University of Toronto, and a Ph.D. in economics from Harvard University.