Recently, I participated in a panel discussion at the National Academy of Sciences on "Principles and Practices for a Federal Statistical Agency," the foundational document guiding our federal statistical system. As an advocate for strengthening the quality and utility of federal data, and as a peer reviewer of the 8th edition, I want to share reflections on why these principles matter, what we must do to ensure they remain both timeless and timely, and where we need to go from here.
The Principles and Practices (P&P) represent an extraordinary success story spanning three decades. Since 1992, this document has been cited by agency heads in Senate confirmation hearings, referenced in OMB Statistical Policy Directives (SPDs) including SPD #1, incorporated into the Foundations for Evidence-Based Policymaking Act (Evidence Act) and the Confidential Information Protection and Statistical Efficiency Act (CIPSEA) of 2018, and used by the Government Accountability Office to evaluate agency performance. This record demonstrates that P&P is not merely aspirational; it is operational across administrations and circumstances. What strikes me most is that these principles work precisely because they transcend political boundaries. Statistics are neither Republican nor Democratic. They are the foundation of evidence-informed governance in our democratic republic.
The Federal Data Strategy offers a compelling example of P&P principles in action. In 2019, then-Acting OMB Director Russ Vought issued the Federal Data Strategy following enactment of the Evidence Act, signed by President Trump. That strategy embodied many P&P principles, demonstrating that commitment to statistical excellence truly transcends administrations. During the COVID-19 pandemic, the Census Bureau's Household Pulse Survey and Business Pulse Survey exemplified Principle 1 on providing “objective, accurate, and timely information that is relevant to important public policy issues.” The agency collected key data during a national emergency, with P&P providing the framework for mission-driven decision-making. Perhaps most importantly, P&P serves as a communication tool across the entire system, providing shared vocabulary for the crucial distinction between policy choices, which rightfully belong to appointed and elected officials, and methodological choices, which must remain within the purview of career statisticians to maintain credibility and trust.
During our panel discussion, I raised what some might consider a controversial point: the statistical system should not be exempt from evaluating itself. In a world where resources are genuinely zero-sum, we must be willing to ask whether continuing a particular data collection is as valuable as something else we could invest in. This is not about diminishing the importance of statistics; quite the opposite. It is about ensuring we can make the strongest possible case for the resources these critical functions require.
We need evaluative approaches that go beyond simplistic metrics. The current system often relies heavily on comment counts during Paperwork Reduction Act reviews, a measure that tells us remarkably little about actual value or use. Instead, we should examine multiple dimensions: citations in peer-reviewed research, documented use in policy decisions, systematic feedback from diverse user communities, and demonstrated impact on decision-making across sectors. Between 2012 and 2022, the government-derived statistics sector of our economy doubled its revenue from $400 billion to $800 billion, driven by industries built on repackaging and repurposing federal statistics. That economic multiplier effect is substantial, but we need frameworks to articulate such value clearly and consistently across the entire federal statistical portfolio.
Statistical functions cannot and should not be exempt from the program evaluation expectations we apply to other government activities. If we can demonstrate value systematically and rigorously, we strengthen rather than weaken the case for adequate resources. The alternative—treating statistical programs as somehow beyond evaluation—ultimately undermines the very principles of evidence-based governance that the statistical system exists to support.
Federal statistical agencies face a fundamental mismatch between expanding expectations and constrained resources. This situation is not sustainable, yet the traditional approach of simply advocating for more money within the broken annual appropriations process has proven insufficient. We need to fundamentally rethink funding mechanisms for statistical infrastructure. Some statistical programs are so critical to democratic governance and economic functioning that they perhaps warrant mandatory funding streams, insulated from annual appropriations battles. Multi-year commitments for statistical infrastructure—similar to how we fund major IT modernization—could provide the stability needed for genuine innovation rather than perpetual crisis management.
The Evidence Act created infrastructure across government through Chief Data Officers, yet we have not fully realized the potential efficiency gains from improving data quality at the source. If we strengthen data governance for administrative records where they originate, statistical agencies benefit tremendously without having to duplicate quality improvement efforts. This is not just about statistical agencies—it is about improving the entire data ecosystem in ways that create multiple benefits. The Bureau of Labor Statistics could supplement traditional store-visit data collection with administrative data from retail firms. Statistical agencies can better validate and verify private sector data sources, as agencies like the Energy Information Administration already do extensively. We need innovation funding mechanisms—perhaps through the National Science Foundation connected to emerging infrastructure like the National Secure Data Service—that allow for methodological experimentation without jeopardizing production systems.
One of the most frustrating aspects of our current system is how difficult we make it for legitimate stakeholders to engage with statistical agencies about data needs and improvements. A major philanthropic organization recently asked me how they could suggest a minor but valuable improvement to the American Community Survey. The answer involved navigating the Paperwork Reduction Act comment process during a narrow Federal Register notice period—a mechanism designed for compliance oversight, not genuine stakeholder engagement. This is absurd. We operate with twentieth-century administrative procedures in a twenty-first-century information environment.
Statistical agencies need ongoing mechanisms for meaningful dialogue with diverse user communities—not just researchers and contractors, but also policymakers, advocacy organizations, businesses, and the general public who provide data. Federal advisory committees represent one model, but we need to think more creatively about how to gather systematic input on priorities, understand emerging needs, and communicate effectively about how data are collected, protected, and used. The current system fails both statistical agencies and the public they serve.
Can and should the public trust federal statistics? Absolutely—but trust is not self-sustaining. It requires continuous demonstration and protection. Principle 4 on impartiality speaks to protecting the objectivity of statistical products and methods. This does not exempt agencies from executive branch authority over administrative matters, but statistical integrity cannot be compromised by political interference or even the appearance of such interference. These principles, stable since 1992, have built enormous trust over decades. When that trust is challenged by unfounded allegations or when personnel decisions create the appearance of political motivation, those of us with knowledge of how the system actually operates have a responsibility to speak clearly about the integrity with which statistical agencies aspire to operate.
The tension between relevance and independence that my co-panelist Kevin Corinth articulated is real and requires constant attention. Statistical agencies must serve policy needs without allowing policy preferences to distort objective measurement. This distinction between value judgments—which belong to elected officials—and methodological judgments—which must remain with career professionals—is foundational to maintaining credibility. When that line blurs, as Kevin illustrated with challenges in poverty measurement, we undermine the very utility that makes statistics valuable in the first place.
While the 8th edition of Principles and Practices provides essential guidance, our panel discussion and my ongoing work suggest several areas where future editions might evolve to address emerging challenges more directly.
Federal statistical agencies must strengthen their role as authoritative information sources in an increasingly crowded and often chaotic data landscape. This requires more explicit frameworks for how statistical agencies coordinate with and evaluate non-federal data producers. We need clearer principles for when and how to incorporate alternative data sources while maintaining quality standards, and better mechanisms for assessing whether statistical products actually influence the decisions they are meant to inform. The current emphasis on dissemination is necessary but insufficient—we need to think more rigorously about impact and utility.
The principle of innovation deserves deeper examination beyond generic calls for improvement. Innovation should not be pursued for its own sake or as mere technological novelty, but rather to better fulfill agency missions in ways that achieve genuine efficiencies and improvements in effectiveness. This means developing structured approaches to evaluate new methods and technologies before adoption, establishing responsible frameworks for artificial intelligence in statistical production (as the Census Bureau is beginning to model), and ensuring that innovation includes accessibility improvements so diverse users can benefit from statistical products. The National Secure Data Service's work on AI-enabled interfaces for trusted data access represents exactly this kind of purposeful innovation.
We need better articulation of how statistical agencies function within the broader evidence ecosystem, not just in isolation. Future principles should address more explicitly how statistical agencies coordinate with program evaluation offices, how to facilitate appropriate data sharing while preserving privacy protections, and how to build stronger partnerships with academic researchers and responsible private sector data scientists. The silos between statistical agencies and other parts of the evidence infrastructure represent missed opportunities for both efficiency and effectiveness.
Public engagement and statistical literacy deserve elevation beyond current practice guidance. Statistical agencies struggle with public communication, often limiting engagement to research communities and contractors. We need principles for how to communicate statistical uncertainty to non-expert audiences, how to gather meaningful public input on statistical priorities, and how to make statistical literacy a genuine priority in agency outreach. The gap between technical excellence and public understanding represents a vulnerability we cannot afford to ignore.
Finally, we need explicit attention to measuring and demonstrating the value statistical work creates. This includes developing better frameworks for assessing return on investment, methods to track how statistical information influences policy outcomes and private sector decisions, and evidence of whether products truly meet diverse user needs. The economic multiplier effect I mentioned earlier—$800 billion in private sector revenue built on federal statistics—represents just one dimension of value that we should be articulating more systematically.
The 8th edition of Principles and Practices arrives at a moment demanding both continuity and evolution. We face resource constraints, workforce pressures, technological disruption, and heightened scrutiny. Yet we also have tools our predecessors did not: the Evidence Act framework, new data sources, advanced analytical methods, and growing recognition of data's value to democracy and the economy. P&P does not tell us exactly what to do—it provides principles to guide decisions and practices to implement those principles. That is precisely what we need in uncertain times.
The question before us is not whether to uphold these principles but how to uphold them while addressing challenges the original authors could not have anticipated. That requires continued leadership from the National Academies’ Committee on National Statistics (CNSTAT), courage from agency heads navigating unprecedented pressures, support from OMB and Congress for adequate and appropriate resources, and sustained advocacy from all who believe in evidence-informed governance. At times of great change, it is essential to begin with shared culture—common values, norms, and accepted priorities. The Principles and Practices provide that foundation. They serve as both a north star when things go well and an anchor when they do not. We must ensure these principles remain not just timeless, but timely—guiding federal statistics toward a future worthy of the public's trust and capable of serving our democratic republic, our scientific enterprise, our economy, and our people for decades to come.