CVPR 2026 Workshop
Date: June 3 or June 4 (TBD)
Location: Denver, Colorado
Overview
The design of generative AI and computer vision systems is often guided by technical benchmarks and in-lab evaluations that can diverge from real-world use. At best, this misalignment leads to inefficiencies; at worst, it causes unintended harms in unforeseen contexts.
“Humans of Generative AI” recenters attention on the people who use and are affected by these systems. We invite talks and posters on human-centric research that informs the design or evaluation of generative AI and computer vision systems. Through this workshop, we aim to foster cross-disciplinary collaboration between computer vision and human-centric researchers, two often disconnected communities.
Topic 1. Social Science Findings That Inform the Technical Design of AI Systems.
As AI systems see widespread adoption by an increasing number of real users, they will inevitably be used in unanticipated scenarios. A core focus of this workshop is human-centered research that informs the design of technical systems. From these discussions, we aim to connect human-centric insights with the technical limitations and design choices of current systems, identifying directions for future research.
Topic 2. Technical Designs That Address Unmet Needs of Real-World Users.
Complementing Topic 1, this topic focuses on technical designs that fulfill unmet needs or guard against overlooked harms in existing AI systems. We will broadly discuss how socio-technical findings can be translated into concrete design goals, evaluation protocols, and system architectures.
(Tentative) Schedule
Times are listed in MDT (local time in Denver)
| Time | Activity |
|---|---|
| 8:00a | Welcome |
| 8:10a | Keynote |
| Topic 1: Social Science Findings That Inform the Technical Design of AI Systems | |
| 8:30a | Lightning Talk 1 |
| 8:45a | Lightning Talk 2 |
| 9:00a | Lightning Talk 3 |
| 9:15a | Lightning Talk 4 |
| 9:30a | Breakout |
| 9:50a | Panel Discussion: Humans of Generative AI |
| Topic 2: Technical Designs That Address Unmet Needs of Real-World Users | |
| 10:35a | Lightning Talk 1 |
| 10:50a | Lightning Talk 2 |
| 11:05a | Lightning Talk 3 |
| 11:20a | Lightning Talk 4 |
| 11:35a | Posters & Breakout |
| 11:55a | Closing Statements |
Call for Participation
- Feb. 2 — Call for papers released
- Mar. 20 — Submission deadline
- Mar. 30 — Notification to authors
Submission details will be announced.
Organizers
Jaron Mink
Arizona State University
Assistant Professor at Arizona State University studying human factors in the security, safety, and trustworthiness of machine learning systems.
jaron.mink@asu.edu
David A. Forsyth
University of Illinois Urbana-Champaign
Professor at the University of Illinois Urbana-Champaign and former Editor-in-Chief of IEEE TPAMI, with foundational contributions to computer vision.
daf@illinois.edu
Elissa M. Redmiles
Georgetown University
Assistant Professor at Georgetown University using computational and social science methods to study user safety and decision-making in digital systems.
elissa.redmiles@georgetown.edu
Sarah Adel Bargal
Georgetown University
Assistant Professor at Georgetown University working at the intersection of computer vision, machine learning, and explainable AI.
sarah.bargal@georgetown.edu
Shawn Shan
Dartmouth College
Assistant Professor at Dartmouth College researching security and machine learning, including protections for artists against generative model misuse.
shawn.shan@dartmouth.edu
Lucy Qin
Georgetown University
Postdoctoral researcher at Georgetown University studying online abuse and digital intimacy using qualitative methods.
lucy.qin@georgetown.edu
Anand Bhattad
Johns Hopkins University
Assistant Professor at Johns Hopkins University focusing on understanding and evaluating knowledge in generative models.
bhattad@jhu.edu
Shiry Ginosar
Toyota Technological Institute at Chicago
Assistant Professor at TTIC whose research spans grounded vision, social behavior understanding, and video synthesis.
shiry@ttic.edu
Eunice Yiu
University of California, Berkeley
Postdoctoral researcher at UC Berkeley studying how humans and AI systems build world models through analogy and exploration.
ey242@berkeley.edu