Towards a Software Framework for Evaluating the Visualization Literacy of Large Language Models

Date
2025
Publisher
The Eurographics Association
Abstract
Large Language Models (LLMs) are increasingly integrated into Natural Language Interfaces (NLIs) for visualizations, enabling users to inquire about visualizations through natural language. This work introduces a software framework for evaluating LLMs' visualization literacy, i.e., their ability to interpret and answer questions about visualizations. Our framework generates a set of data points across different LLMs, prompts, and question types, allowing for in-depth analysis. We demonstrate its utility through two experiments examining the impact of the temperature parameter and of predefined answer choices.
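The evaluation loop described in the abstract (generating data points across LLMs, temperatures, and question types) could be sketched as follows. This is a minimal illustration, not the authors' implementation: the `ask` callable, the `Question` fields, and the dummy model are all hypothetical stand-ins for a real LLM client and a real visualization question bank.

```python
from dataclasses import dataclass, field
from itertools import product

@dataclass
class Question:
    text: str
    correct: str
    choices: list = field(default_factory=list)  # empty = open-ended question

def evaluate(ask, models, temperatures, questions):
    """Run every (model, temperature, question) combination and record
    whether the model's answer matches the ground-truth answer."""
    records = []
    for model, temp, q in product(models, temperatures, questions):
        answer = ask(model, temp, q)
        records.append({
            "model": model,
            "temperature": temp,
            "question": q.text,
            "correct": answer.strip().lower() == q.correct.lower(),
        })
    return records

# Dummy 'ask' standing in for a real LLM API call.
def dummy_ask(model, temp, q):
    return q.choices[0] if q.choices else "10"

questions = [
    Question("What is the highest bar's value?", "10", ["10", "20"]),
    Question("In which month does the line chart peak?", "July"),
]
results = evaluate(dummy_ask, ["model-a"], [0.0, 1.0], questions)
accuracy = sum(r["correct"] for r in results) / len(results)
```

Each record is one data point; aggregating them by model, temperature, or question type enables the kind of in-depth analysis the framework targets.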
CCS Concepts: Human-centered computing → Visualization systems and tools; Information visualization; Natural language interfaces; Accessibility technologies; Visualization theory, concepts and paradigms; User centered design

        
@inproceedings{10.2312:evp.20251134,
  booktitle = {EuroVis 2025 - Posters},
  editor    = {Diehl, Alexandra and Kucher, Kostiantyn and M{\'e}doc, Nicolas},
  title     = {{Towards a Software Framework for Evaluating the Visualization Literacy of Large Language Models}},
  author    = {Jobst, Adrian and Atzberger, Daniel and Scheibel, Willy and D{\"o}llner, J{\"u}rgen},
  year      = {2025},
  publisher = {The Eurographics Association},
  isbn      = {978-3-03868-286-8},
  doi       = {10.2312/evp.20251134}
}