21 Feb 2026
Written by Julia Cherashore
Blogs
It’s 2026 and the conversation about artificial intelligence has not waned. In fact, innovation at the intersection of data and artificial intelligence (AI) continues to compound at a breathtaking pace. Building on observations from conversations, panels, and work I engaged with in 2025, this piece outlines key themes in data and AI at the onset of 2026 and highlights emerging trends to watch in the months ahead as innovation takes on new shapes and phases.
Data & Infrastructure:
Over the course of 2025, “getting data AI-ready” became a buzz phrase at many technology, data, and AI events. It would be impossible to separate data from the AI journey last year. But embedded in this buzz phrase are a number of enablers for the “AI journey,” including technology and data infrastructure, appropriate organizational skillsets, and data governance.
Towards the end of 2025, the White House issued the Mission Genesis Executive Order (EO). Within the EO, the White House elevates the use of AI across government and the implementation of “America’s AI Action Plan.” As such, 2026 will likely see, among other things, further coordination on the integration of data and infrastructure across U.S. federal government agencies to be able to implement and deploy the use of AI.
Data and technology architecture are poised for further transformation, in part as foundational enablers of AI. According to industry surveys, including Harvard Business Review’s (HBR) “Survey: How Business Executives are Thinking about AI in 2026,” most respondents expect leaders to prioritize investment in data and AI infrastructure this year. In the HBR survey, for example, 90% of respondents indicated that investing in data and AI is an organizational priority. A notable majority of industry respondents also reported having a Chief Data Officer (CDO), with 70% of companies indicating that the CDO role was well established. As AI adoption and use increase, workforce leadership, including support for CDOs, and the socialization of data literacy will become imperative.
Part of the task ahead for leaders and CDOs will be reskilling and upskilling the workforce within their organizations, with a particular focus on data literacy. Data literacy and the ability to use AI cannot be limited to data professionals. If the current skills mismatch persists, it will likely undermine the success of both data and AI initiatives.
CDOs and other leaders responsible for AI adoption in their organizations will also need to continue addressing data quality, which remains a central challenge for data governance across enterprises and sectors. A growing trend has been to apply AI both to improve data quality and to maintain high-quality data going forward. Interest in data governance policy, as a way to drive data consistency and progress toward data standards, remains high. (Check out this blog for an insightful look at the real cost of not having data standards.)
State governments took a more active approach to privacy regulation in 2025, with a growing number of state privacy laws passed, many coming into effect in 2026, and expanding state-level enforcement efforts. In many places, company leaders have already begun preparing for these new requirements. This area is especially interesting to watch this year, given the intersection of privacy and AI, as discussed later in this blog.
Finally, questions about where data is created, who owns it, and how it is used, coupled with the application of AI, have become fundamental to both data sovereignty and AI sovereignty, topics that gained momentum in 2025 and will continue in 2026. For organizations operating across state and international borders, or using vendors who do, data lineage and vendor management have come into focus as the means to answer where the organization’s data resides, where the compute takes place, and where the output is stored. All three will remain focal points for decisions related to data and AI sovereignty.
Artificial Intelligence:
Over the course of 2025, the conversation broadly evolved to creating with and around AI. The broad experimentation with AI has taken on many forms and iterations, including, but not limited to, agentic AI, which dominated many headlines last year.
Looking ahead, the AI-agent-as-collaborator model, wherein people and AI agents collaboratively execute standard business workflows, is poised to continue gaining traction this year. Further, some are doubling down on multi-agent orchestration, which enables AI agents to interact with other AI agents, workflows and/or data sources, sometimes even with those outside the organizational boundaries. Another trend to watch in 2026 is whether agentic AI broadly moves up the chain from lower-risk, non-personally identifiable information (PII) use cases, into more complex ones involving sensitive or personal data.
The AI maturation journey has encountered some challenges. These include escalating costs, privacy, cyber, and sovereignty concerns. Agentic AI, in particular, has been cited as a large cost contributor due to the high compute needs. This year, technology and AI leaders will be exploring and partnering on solutions, such as federated computing, increasing data density, and others, to unlock more computing capability and to manage the rising computing bills.
With the push for AI present at every turn, the insatiable appetite for the data that feeds AI technologies means that the intersection of privacy and AI is more complex than ever, in part due to data and AI sovereignty and in part due to the proliferation of chatbots (and emerging chatbot regulation). One approach recommended by the Data Foundation is to expand access to insights from data while ensuring privacy protections for confidential, personal, and proprietary information.
Cybersecurity is another domain that has felt the double-edged sword of AI. With the acceleration of AI-empowered innovations, like agentic AI, and AI-fueled threats, like AI-led hacks in 2025, it became evident that to fight cybercrime in the age of AI effectively, one must employ the very same technology to do so.
Whether it’s leveraging AI to sift through threat alerts at scale, marrying cyber and transaction data to unlock better intelligence, enhancing vulnerability scanning, or adopting AI security frameworks, AI can play a greater role in 2026 in enhancing security teams’ activities and areas of impact.
Comprehensive AI governance is essential to manage the sharp edges of rapid innovation, ensure AI reliability and auditability, and address ethical considerations and other complexities that may present themselves in the future. This has not been lost on policymakers at various levels of government in the U.S.
The intersection of federal and state AI regulation has hit various snags in policy development and implementation over the past few years. Early-adopter states reached their effective dates at the onset of 2026: California’s Transparency in Frontier Artificial Intelligence Act (TFAIA) and the Texas Responsible Artificial Intelligence Governance Act both took effect on January 1, 2026, while AI regulations in other states, such as Colorado and New York, have future effective dates.
However, the President’s Executive Order (EO) on “Ensuring a National Policy Framework for Artificial Intelligence” presently halts the implementation of state AI regulatory policies while they are under review by the AI Litigation Task Force. According to the EO, the Task Force will review established state AI regulations to ensure the laws are not overly burdensome and do not “stymie innovation.” The review may not fully halt state AI regulation, but it will likely introduce changes to state regulatory policies. As more and more states crafted their own rules, the need for a federal framework became more imperative.
It would seem that 2026 could see the needle move on national AI regulation.
Despite the challenges noted above and an increasingly complex global landscape, most foresee continued growth of AI in 2026, in part as a result of maturity and learning, and in part driven by the desire to realize the anticipated value along the pillars of growth, efficiency and cost improvement, and risk reduction. How will the technology itself evolve this year? Will rising tensions add fuel to the data and AI sovereignty trend? And how will the public conversation on AI shift, if at all? While these questions are up for debate, the Data Foundation is your trusted thought leader to help navigate this rapidly evolving landscape.
Additional resources:
“2025 in Review: Insights from the Evidence Capacity Pulse Report Series”, a Data Foundation Report.
Data Foundation’s Evidence Act Hub: a digital repository and knowledge hub that brings together U.S. federal data and evaluation resources in a single, searchable location.
“Practicing What We Preach—We Have an LEI and You Should Too” blog by Nick Hart, Data Foundation President and CEO.
“How Privacy-Preserving Infrastructure Helps Transition from Data Silos to Evidence-Based Solutions” blog by Nick Hart, Data Foundation President and CEO.
“Scaling Quality in the Age of AI” blog by Julia Cherashore, Data Foundation Senior Fellow.