
Applying Systematic Regulatory Learning: Post-Loper Bright Opportunities

On July 30, former White House OIRA Administrator Susan Dudley testified before the Senate Homeland Security and Governmental Affairs Committee's subcommittee on regulatory affairs.
12 Aug 2025
Written by Nick Hart

On July 30, former White House Office of Information and Regulatory Affairs (OIRA) Administrator Susan Dudley provided compelling testimony at a hearing of the Senate Homeland Security and Governmental Affairs subcommittee on regulatory affairs. She reinforced a longstanding need that is often overlooked in federal regulatory practice: systematically learning from and evaluating regulations after they are implemented.

As someone whose PhD dissertation focused precisely on this challenge, and who has facilitated discussions on retrospective regulatory review in the years since, I could not agree more strongly with Ms. Dudley's call for Congress to "embrace a learning agenda" for regulation. It mirrors recommendations I have offered in related testimony to the Select Committee on the Modernization of Congress about improving congressional learning and capacity.

The Post-Loper Moment Creates New Urgency

The Supreme Court's Loper Bright decision fundamentally changes the regulatory landscape by eliminating judicial deference to agency interpretations of ambiguous statutes. As Ms. Dudley noted in her testimony, this shift places "greater responsibility on Congress to write less ambiguous statutes" while requiring agencies to be more transparent about their assumptions and judgments.

This new reality makes retrospective review not just valuable, but essential. When agencies can no longer rely on broad statutory interpretation to justify regulatory choices, evidence about what actually works, when, and where is no longer optional. We need systematic, robust, valid, and reliable evaluation to inform better regulatory design and implementation.

The Persistent Gap Between Promise and Practice

Despite decades of recognition that retrospective review can help design more effective regulation, agencies continue to struggle with implementation. As I documented in our 2024 retrospective review session for the IBM Center for the Business of Government, agencies face two major challenges: a lack of incentives to conduct reviews, and technical difficulties in designing evaluations after regulations are already in place.

The Environmental Protection Agency's use of regional rules as quasi-experiments for gasoline standards, and the Labor Department's Occupational Safety and Health Administration (OSHA) evaluating actual vaccine uptake against the assumptions in its regulatory impact analysis for the COVID-19 mandate, demonstrate what is possible when agencies plan for evaluation from the outset and make decisions based on the results. But these evidence-informed decisions grounded in ex-post analysis unfortunately remain the exception rather than the rule.

Evidence Building as a Foundation for Better Regulation

In our book Evidence Building and Evaluation in Government, Data Foundation board member Kathy Newcomer and I explored how government agencies can build systematic approaches to learning from their interventions. The principles we outline and teach apply directly to the evaluation of federal regulations:

  • Design for learning from the start: Regulations should include clear performance metrics and evaluation plans, as Ms. Dudley recommends
  • Build in variation: Different compliance schedules or pilot programs create natural experiments for evaluation
  • Invest in data infrastructure: Agencies need systematic administrative data collection capabilities that support future research and analysis alongside operational decision-making
  • Create institutional incentives: Evaluation must be embedded within existing regulatory processes; without structural reforms and incentives, it will remain sporadic and superficial

Building the Learning Infrastructure

Ms. Dudley's testimony offers a reasonable approach for Congress, including requiring agencies to "test the validity of their ex-ante regulatory impact predictions before commencing new regulation" and reallocating resources from prospective analysis to retrospective review. This would create a virtuous cycle in which better evaluation improves future regulatory design. The approach aligns with the longstanding theory and intended practice of a cycle of learning.

The Supreme Court's Loper Bright decision may have inadvertently created the perfect moment for this reform, and an opportunity for Congress to reinforce the vision of the Foundations for Evidence-Based Policymaking Act as applied to the regulatory environment. With agencies needing stronger evidence to justify their regulatory choices, systematic retrospective review becomes not just good policy but a strategic necessity.

As Ms. Dudley concluded, regulatory decisions must be "more transparent, accountable, and grounded in both sound science and legitimate policy judgment." Systematic learning and evaluation is one way to get there.

The convergence of legal necessity, available methodology, and growing recognition of the problem creates an opportunity. Congress should seize the moment, align the resources, and use the Evidence Act to reinforce evaluation capacity in regulatory agencies so that it is robust for the regulatory environment in the years ahead.

Nick Hart is President and CEO of the Data Foundation and co-author with Kathryn Newcomer of "Evidence Building and Evaluation in Government."

 
