Are We Closing the Loop Yet? Gaps in the Generalizability of VIS4ML Research

Hariharan Subramonyam, Jessica Hullman

IEEE Trans. Visualization & Comp. Graphics (Proc. VIS) 2023

Banner figure: Examples of the forms of human-generated insights in visualization for machine learning (VIS4ML) research. We identify gaps between researchers' claims about what a given VIS4ML system accomplishes and how those claims are evaluated.

Abstract

Visualization for machine learning (VIS4ML) research aims to help experts apply their prior knowledge to develop, understand, and improve the performance of machine learning models. In conceiving VIS4ML systems, researchers characterize the nature of human knowledge needed to support human-in-the-loop tasks, design interactive visualizations that make ML components interpretable and elicit that knowledge, and evaluate the effectiveness of the human-model interchange. We survey recent VIS4ML papers to assess the generalizability of research contributions and claims in enabling human-in-the-loop ML. Our results show potential gaps between the current scope of VIS4ML research and aspirations for its use in practice. We find that while papers motivate VIS4ML systems as being applicable beyond the specific conditions studied, conclusions are often overfitted to non-representative scenarios, are based on interactions with a small set of ML experts and well-understood datasets, fail to acknowledge crucial dependencies, and hinge on decisions that lack justification. We discuss approaches for closing the gap between aspirations and research claims, and we suggest documentation practices for reporting generality constraints that better acknowledge the exploratory nature of VIS4ML research.