CVPR 2026 Workshop

Date: June 3 or June 4 (TBD)
Location: Denver, Colorado

Overview

The design of generative AI and computer vision systems is often guided by technical benchmarks and in-lab evaluations that may not reflect real-world use. At best, this misalignment leads to inefficiencies; at worst, it causes unintended harms in unforeseen contexts.

“Humans of Generative AI” recenters attention on the people who use and are affected by these systems. We invite talks and posters on human-centric research that informs the design or evaluation of generative AI and computer vision systems. Through this workshop, we aim to foster cross-disciplinary collaboration between the computer vision and human-centric research communities, which are often disconnected.

Topic 1. Social Science Findings That Inform the Technical Design of AI Systems.

As AI systems see widespread adoption by a growing number of real users, they will inevitably be used in unanticipated scenarios. A core focus of this workshop is human-centric research that informs the design of technical systems. From these discussions, we aim to connect human-centric insights with the technical limitations and design choices of current systems, identifying directions for future research.

Topic 2. Technical Designs That Address Unmet Needs of Real-World Users.

As a direct response to Topic 1, this topic focuses on technical designs that fulfill unmet needs or protect against unaddressed harms in existing AI systems. We will broadly discuss how socio-technical findings can be translated into concrete design goals, evaluation protocols, and system architectures.

(Tentative) Schedule

Times are listed in MDT (Denver local time)

Time      Activity
8:00a     Welcome
8:10a     Keynote
Topic 1: Social Science Findings That Inform the Technical Design of AI Systems
8:30a     Lightning Talk 1
8:45a     Lightning Talk 2
9:00a     Lightning Talk 3
9:15a     Lightning Talk 4
9:30a     Breakout
9:50a     Panel Discussion: Humans of Generative AI
Topic 2: Technical Designs That Address Unmet Needs of Real-World Users
10:35a    Lightning Talk 1
10:50a    Lightning Talk 2
11:05a    Lightning Talk 3
11:20a    Lightning Talk 4
11:35a    Posters & Breakout
11:55a    Closing Statements

Call for Participation

Submission details will be announced.

Organizers

Jaron Mink

Arizona State University

Assistant Professor at Arizona State University studying human factors in the security, safety, and trustworthiness of machine learning systems.

jaron.mink@asu.edu

David A. Forsyth

University of Illinois Urbana-Champaign

Professor at the University of Illinois Urbana-Champaign and former Editor-in-Chief of IEEE TPAMI, with foundational contributions to computer vision.

daf@illinois.edu

Elissa M. Redmiles

Georgetown University

Assistant Professor at Georgetown University using computational and social science methods to study user safety and decision-making in digital systems.

elissa.redmiles@georgetown.edu

Sarah Adel Bargal

Georgetown University

Assistant Professor at Georgetown University working at the intersection of computer vision, machine learning, and explainable AI.

sarah.bargal@georgetown.edu

Shawn Shan

Dartmouth College

Assistant Professor at Dartmouth College researching security and machine learning, including protections for artists against generative model misuse.

shawn.shan@dartmouth.edu

Lucy Qin

Georgetown University

Postdoctoral researcher at Georgetown University studying online abuse and digital intimacy using qualitative methods.

lucy.qin@georgetown.edu

Anand Bhattad

Johns Hopkins University

Assistant Professor at Johns Hopkins University focusing on understanding and evaluating knowledge in generative models.

bhattad@jhu.edu

Shiry Ginosar

Toyota Technological Institute at Chicago

Assistant Professor at TTIC whose research spans grounded vision, social behavior understanding, and video synthesis.

shiry@ttic.edu

Eunice Yiu

University of California, Berkeley

Postdoctoral researcher at UC Berkeley studying how humans and AI systems build world models through analogy and exploration.

ey242@berkeley.edu