

Better Evaluation: EvalAssist, an Open-Source Tool for Evaluation Design

Attendees joined us to hear directly from the team behind EvalAssist — how it works, how it was built and assessed, and what its development revealed about the current state of GenAI evaluation.
14 Mar 2026
Written by Carrie Myers
Past Events

Overview

Designing rigorous program evaluations takes time, expertise, and access to resources that early-career evaluators don't always have. EvalAssist was built to change that — an open-source generative AI tool that serves as a thought partner throughout the evaluation design process.

Developed by Data Foundation Senior Fellow Lauren Damme, Ph.D., EvalAssist draws on best practices from the social sciences and input from more than 45 experienced evaluators across 15 countries, the Data Foundation's Data Coalition, and graduate students from George Washington University. The result is a practical support tool that helps evaluators build ethical, rigorous evaluation designs without requiring deep AI expertise.

The project also advanced new approaches to a persistent challenge in the field: how to measure the human and social impacts of GenAI use. Evaluators and researchers exploring GenAI applications will be particularly interested in the measurement rubric developed as part of this work, a process that surfaced critical gaps and underlying issues in how the field currently approaches AI evaluation.

A Technical Paper with further detail on the model and rubric is forthcoming in March 2026.


Speakers

  • Sarah Cheney, GW MPP Candidate, Health Care & Regulatory Policy, The George Washington University
  • Lauren K. Damme, Ph.D., Senior Fellow, Data Foundation
  • Lauren Decker-Woodrow, Ph.D., Principal Research Associate, Westat
  • Gursimer Jeet, Ph.D., Independent Consultant
  • Sara Stefanik, Director, Center for Evidence Capacity, Data Foundation

Download EvalAssist Presentation


DATA FOUNDATION
1100 13TH STREET NORTHWEST
SUITE 800, WASHINGTON, DC
20005, UNITED STATES

INFO@DATAFOUNDATION.ORG
