Federal agency officials, especially those in newly created leadership positions established by the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act), are engaging in activities to facilitate building and using evidence to inform decision-making. The Data Foundation’s 2022 Virtual Symposium, in partnership with The George Washington University’s Trachtenberg School of Public Policy and Public Administration, convened experts who described real-world experiences and research projects that focused on efforts to build federal agency evidence-building capacity. In the call for submissions, the Data Foundation asked experts to submit research related to four areas: 

  • Analysis of agency capacity assessments 

  • Reviews of agency evidence-building plans and learning agendas 

  • Workforce for evidence-building activities 

  • Novel uses of data for making decisions and informing policy. 

This compendium offers an overview of the 16 presentations and key takeaways.


Day 1: Understanding Capacity for Evidence-Building Across Government 

The first day focused on efforts to assess evidence-building capacity and establish evidence-building processes. The presenters from both government and non-governmental organizations covered topics including capacity assessments, data silos and data hubs, workforce management, and how agencies can work with contractors and collaborate across different levels of government to build capacity for effective evidence-based policymaking.

Informing Infrastructure Improvement Decision-Making with the Best Available Science

Presented by Ed Kearns, Chief Data Officer, First Street Foundation and former Acting Chief Data Officer at the U.S. Department of Commerce

Risk Factor is a free tool created by the nonprofit First Street Foundation to make it easy to understand risks from a changing environment. Risk Factor (available at riskfactor.com) allows users to search an address to see that location's flood, wildfire, or heat risk today and 30 years into the future. The tool can determine various climate risk factors down to the level of a street, property, house, or 100-meter segment of roadway. It was developed using an open source hazard model driven by open government data from various agencies – including the Federal Emergency Management Agency, the National Oceanic and Atmospheric Administration, and the U.S. Geological Survey at the Department of the Interior – and First Street Foundation further expanded the tool's capabilities using outputs from Intergovernmental Panel on Climate Change (IPCC) climate models as well as spatial and economic analysis models. The project demonstrates how federal government data can enable the development of innovative tools that inform public understanding and decision-making. By using open science and bringing together varied data sources, the public and private sectors can take a more climate-informed approach to preparing for the effects of climate change.
Key Takeaways & Recommendations:
  • Agencies can learn from demonstration projects like First Street’s risk model and leverage the wealth of government data to develop evidence-informed approaches to complex issues such as climate change.

  • Cultivating public-private partnerships is essential. There is a critical need to improve federal government engagement with the private sector to develop information products.

  • To facilitate greater data capacity within the federal government, officials need to identify existing obstacles to using data, including potential restrictions on agencies that may want to use outside products, and consider incentives for public-private sector collaboration.


An Iterative Approach to Developing, Conducting, and Using the Department of Homeland Security Capacity Assessment

Presented by Rebecca Kruse, Assistant Director for Evaluation, Department of Homeland Security; Brodi Kotila, Senior Political Scientist, RAND Homeland Security Operational Analysis Center (HSOAC); and Coreen Farris, Senior Behavioral Scientist, RAND HSOAC

The Evidence Act requires agencies to assess their capacity to produce and use evidence for better decision-making and policymaking. The Department of Homeland Security (DHS) has conducted two iterations of its capacity assessment since 2020 – an initial assessment in fiscal year 2020 and an official, independent assessment in fiscal year 2021. The presentation covered methodology, findings and recommendations, lessons learned, and an update on DHS evaluation activities since the assessment was published.

Key Takeaways & Recommendations: 

  • Cross-department collaboration is critical for successful capacity assessments.

  • Agencies can foster continued collaboration and learning to strengthen capacity by providing educational resources, hosting workshops and lunch-and-learn sessions, and holding office hours.


Understanding Human Capital Needs for Expanding Data and Evidence Culture Using a Federal Data and Digital Maturity Survey

Presented by Maddie Powder, Research Associate, Partnership for Public Service

The Partnership for Public Service and the Boston Consulting Group conducted a data maturity assessment at six federal agencies using the Federal Data and Digital Maturity Index (FDDMI) survey. To gauge their agency's data and digital maturity, respondents rated its current maturity on a sliding scale of 0 to 100 points, along with its target maturity five years out. Based on the survey responses, agencies were sorted into four categories – starter, literate, performer, and leader. The survey found that human capital – the component of the overall maturity score that reflects an agency's workforce – ranked lower than the overall maturity score. Target maturity scores were higher than current maturity scores in every sub-category, revealing a consistent ambition to improve.

Key Takeaways & Recommendations: 

  • Federal agencies need to retain and recruit a skilled workforce to carry out evidence-building activities.

  • There are several strategies to build a skilled data workforce in the federal government, including: 

      • Utilizing creative hiring authorities;

      • Making government jobs accessible to young people and giving them the space to be creative to support retention;

      • Promoting the government’s mission;

      • Creating experiential onboarding programs; and

      • Investing in the current workforce through upskilling and reskilling programs.


An FAA Experience: Applying Intervention Research as a Change Management Approach to Implement Evidence-Based Management

Presented by Robert Young, Senior Advisor, Strategy, Risk & Engagement, Federal Aviation Administration Security & Hazardous Materials Safety Organization

The Federal Aviation Administration (FAA) Security and Hazardous Materials Safety Organization developed a three-year strategic plan to implement evidence-based management, with the objectives of delivering goods and services with greater rigor and scientific integrity, managing data better, and building evidence-building capabilities. To execute this strategy, the agency used intervention research as a change management approach. The research team engaged participants through a “reflexive experiential lens” to gather a range of perspectives, from the senior executive level to employees, and to bridge the gap between academic theory and the realities of implementing an evidence-based management system in a federal agency.

Key Takeaways & Recommendations: 

  • Integrating evidence-based management into organizations requires ongoing, long-term effort. It is important to identify specific, practical examples to demonstrate how evidence-based management can improve an organization’s work. 

  • Trust building is a key component of intervention research. Applying concepts to day-to-day activities is more effective than formal instruction. 


Advancing Equity through Evidence-Building, Data Integration, and Research Partnerships: A Local Government’s View from “The Other Washington”

Presented by Claire Evans, Research Specialist, King County Metro Transit; Truong Hoang, Deputy Regional Administrator, Region 2, Community Services Division, Economic Services Administration, Washington State Department of Social and Health Services; Maria Jimenez-Zepeda, ORCA Reduced Fare Project Program Manager, King County Metro Transit; Christina McHugh, Housing and Adult Services Evaluation Manager, King County Department of Community and Human Services, Performance Measurement and Evaluation Unit; and David Phillips, Associate Research Professor, Wilson Sheehan Lab for Economic Opportunities, University of Notre Dame

Representatives from King County and Washington State highlighted challenges, successes, and lessons learned as their organizations work to adopt and integrate evidence-based policymaking into their programs and day-to-day operations. Panelists described how partnerships and data sharing helped center equity and improve program decisions about transportation assistance and housing for low-income populations. They cited leadership buy-in, funding, and partnerships among local governments, state agencies, and nonprofits to conduct evaluations as critical for developing local evaluation capacity.

Key Takeaways & Recommendations:  

  • Sustained local funding and non-government financial investment are central to improving business processes.  

  • Interdisciplinary partnerships with various levels of government and other external partners enable collaboration across local and state governments and make findings applicable in multiple contexts.

  • Panelists suggested three ways the federal government can help strengthen state and local capacity for data and evidence-building: 

      • Focus investment on bolstering overall local capacity rather than program-specific funding.

      • Incorporate federal carve-outs to provide sustained overhead to build and modernize data systems.

      • Improve access to data and dissemination of research, and provide legal clarity and guidance around when and how data should be accessible for research purposes.


Assessing the Quality of Impact Evaluations at USAID

Presented by Irene Velez, Director, Monitoring Evaluation Research Learning and Adaptation (MERLA), Panagora Group

The United States Agency for International Development’s (USAID) Bureau of Policy Planning and Learning commissioned a study assessing the quality of the agency’s impact evaluations from 2011 to 2019. The study was intended to identify the strengths and shortcomings of USAID’s impact evaluation reports and to inform the agency’s update to its guidance for conducting impact evaluations. The research group compared impact evaluation reports against the definition outlined in USAID’s evaluation policy and developed a review instrument to assess their quality across six domains: sample size, conceptual framing, treatment characteristics and outcome measurements, data collection and analysis, threats to validity, and reporting. The researchers found that about half of the impact evaluation reports met the quality criteria, that USAID’s existing evaluation guidance was not tailored toward impact evaluations, and that the existing guidance lacked alignment with the organization’s broader scientific research policy.

Key Takeaways & Recommendations:  

  • Third-party reviews can identify misalignment or missed opportunities to build evaluation capacity at an organization. 

  • USAID updated its operational policy by expanding the definition of impact evaluation, specifying the elements required for evaluation reports, and explicitly calling for cost analysis in impact evaluations – demonstrating how the organization made a decision and took action based on evidence.


Approaches to Assessing Agency Capacity for Evidence-Building

Presented by Tania Alfonso, Senior Evaluation Specialist, USAID; Danielle Berman, Senior Evidence Analyst, Office of Management and Budget (Moderator); Susan Jenkins, Evaluation Officer, Department of Health and Human Services; and Christina Yancey, Chief Evaluation Officer, Department of Labor

Panelists from the Department of Health and Human Services, the Department of Labor, and the U.S. Agency for International Development demonstrated various approaches to conducting capacity assessments by sharing the models their agencies used to complete department-wide assessments.

Agencies tailored their capacity assessments to fit their unique organizational needs: some used an evaluation council framework, while others started from a more structured, centralized evaluation authority. Based on the first agency capacity assessments, the evaluation officers are planning activities to address capacity needs. Future efforts include developing a four-year plan with each year focusing on a different category of capacity, providing training and tools, focusing on knowledge management, and improving communication about the value of evidence.

Key Takeaways & Recommendations:  

  • Agency leaders should emphasize both evidence producers and evidence users when communicating the value of capacity assessments.

  • Frame capacity assessment conversations to emphasize continuous learning. Identify achievable short-, medium-, and long-term goals, and recognize agency strengths and promising practices.

  • Use maturity models to set goals and measure progress. 

  • Allow assessments to reflect department-wide capacity strengths and weaknesses while also capturing the nuances of different sub-agencies and offices.


Opportunity for Partnership: A Budget and Program Perspective on the Learning Agenda and Evidence-Building Activities 

Presented by Courtney Timberlake, President, American Association for Budget and Program Analysis (AABPA); Ed Brigham, Executive Consultant, Federal Consulting Alliance & AABPA Board Member; Jon Stehle, Councilmember, City of Fairfax, VA; and Darreisha Bates, Federal Portfolio Manager, Tyler Technologies & Former Director of Intergovernmental Relations, U.S. Government Accountability Office

The panel covered how the budget process has evolved as analysts adopted more advanced technology and discussed how to build on that recent progress. The panel divided data use in the budgeting process into two areas: how computerization and data have made the federal budget more accurate, and where and how data can be used to support budget and policy decisions.

Key Takeaways & Recommendations: 

  • Since the enactment of milestone data laws like the Digital Accountability and Transparency Act of 2014 (DATA Act) and the Evidence Act, the budget process has become more focused on data quality and reporting. 

  • Executive Branch leaders need to continue communicating the value of data, collaboration, and training to improve data use in the budget process and to inform programmatic decisions.

  • Federal agencies need to recruit and train data talent to improve data generation, communication, and use.


Day 2: Innovations in Evidence-Building Activities

The second day of the symposium focused on innovation. Presentations offered a glimpse into areas where government agencies and other organizations have expanded their capacity to build and use evidence – and the strategies, tools, and methods that have enabled them to do so. Panelists’ innovative approaches to evidence across multiple issue areas emphasized the importance of fostering partnerships, leveraging existing data sources, and engaging organizational leadership.


Education Research-Practice Partnerships: Innovative Structures to Build and Use Evidence

Presented by Rachel Anderson, Director, Policy and Research Strategy, Data Quality Campaign