Members

Researchers across disciplines who make up TextGroup

Core Leadership

Andrew Elfenbein – Faculty Lead

UMN Dept. of English

I work on bridging the worlds of empirical psychology and the humanities, especially in relation to reading and learning.

Selected works

  • The Gist of Reading — Stanford UP, 2018
  • “Mental Representation” in Further Reading — Oxford UP, 2020
  • Rhyme as resonance in poetry comprehension: An expert-novice study — Memory & Cognition, 2021
  • How feelings matter for reading — in TXT: The Art of Reading, 2019
  • Text structure and the online processing of expository prose — Reader, 2016
  • Sweet silent thought: Alliteration and resonance in poetry comprehension — Psychological Science, 2008
Andreas Schramm – Faculty Lead

Hamline University – Professor Emeritus

Researches the cognitive processing and acquisition of time in language — the system of linguistic expressions that articulate the temporal properties of events.

Selected works

  • How LLMs comprehend temporal meaning in narratives — ACL 2025
  • Implicit textually enhanced processing of aspectual meanings in English learners — Discourse Processes, 2022
  • It is time to tackle aspect! — MinneTESOL Journal, 2017
  • Processing of aspectual meanings by non-native and native English speakers — Benjamins, 2016
Varun Athilat – Student Lead

University of Minnesota – Twin Cities

Although people perceive text as inherently less emotional than speech, written language still affects our emotions — and, funnily enough, how we feel can also influence the way we read. This interplay between text and emotion is what I love studying.

Members

Dongyeop Kang

University of Minnesota

Works in Natural Language Processing at the intersection of language and cognition, building human-centric NLP systems through cognitively aligned models and interactive AI.

Selected works

  • Mary, the Cheeseburger-Eating Vegetarian: Do LLMs Recognize Incoherence in Narratives? — EACL 2026 (Oral)
  • Strong Memory, Weak Control: Executive Functioning in LLMs — EACL 2026 (Oral)
  • How LLMs Comprehend Temporal Meaning in Narratives — ACL 2025
  • Tracing How Annotators Think: Augmenting Preference Judgments with Reading Processes — LREC 2026
  • A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models — CoNLL 2023
Sashank Varma

Georgia Tech – School of Interactive Computing

Investigates the alignment between how humans and large language models understand language and, more generally, perform cognitive tasks.

Selected works

  • Modeling understanding of story-based analogies using LLMs — CogSci 2025
  • When visuals aren’t the problem: Evaluating vision-language models on misleading data visualizations — 2026
  • Incremental comprehension of garden-path sentences by LLMs — CogSci 2024
  • Development of cognitive intelligence in pre-trained language models — EMNLP 2024
  • Recruitment of magnitude representations to understand graded words — Cognitive Psychology, 2024
Evelyn Milburn

North Dakota State University

Investigates how we use knowledge above and beyond words to quickly and flexibly comprehend complicated real-life language use, including figurative language, language learning, and cognitive aging.

Selected works

  • Native speakers kick buckets but learners kick doors: A comparison of native and non-native idiom comprehension — Memory & Cognition, 2026
  • In the native speaker’s eye: Online processing of anomalous learner syntax — Applied Psycholinguistics, 2023
  • Idioms show effects of meaning relatedness and dominance similar to ambiguous words — Psychonomic Bulletin & Review, 2019
  • Comprehending the impossible: What role do selectional restriction violations play? — Language, Cognition and Neuroscience, 2015
Püren Öncel

University of Valencia

Examines how individuals differ in their phenomenological experiences, particularly during reading. Her current work focuses on understanding how variations in language relate to fluctuations in visual imagery and inner speech, leveraging methodological techniques from cognitive psychology, linguistics, and NLP.

Selected works

  • Mary, the cheeseburger-eating vegetarian: Do LLMs recognize incoherence in narratives? — EACL 2026
  • Investigating the impact of linguistic features of text on readers’ phenomenological experiences — Technology, Mind, and Behavior, 2025
  • Exploring the affordances of text and picture stories — Discourse Processes, 2024
  • Seeing through the character’s eyes: Examining phenomenological experiences of perspective-taking during reading — Discourse Processes, 2022
Amanda Jensen

University of Minnesota

Studies adolescent reading comprehension and text readability.

YooJeong Son

University of Minnesota

Research interests include literacy and instruction, print and online reading comprehension, and learners’ interactions with AI during reading and writing. Focuses on effective instructional approaches for students from linguistically, culturally, and economically diverse backgrounds.

Michael C. Mensink

University of Wisconsin-Stout

Studies reader misconceptions and inaccurate scientific information, the effects of seductive details on cognitive and emotional processes, and metacognition and knowledge calibration.

Selected works

  • Confidence and knowledge calibrations after reading an introductory text on a complex topic — Discourse Processes, 2025
  • Emotional responses to seductive scientific texts during online and offline reading tasks — Discourse Processes, 2022
  • Implicit textually enhanced processing of aspectual meanings in English learners — Discourse Processes, 2022
  • Do different kinds of introductions influence comprehension and memory for scientific explanations? — Discourse Processes, 2021
  • Prereading questions and online text processing — Discourse Processes, 2012
Zae Myung Kim

University of Minnesota

Developing a “meta-scaffolding paradigm” that integrates discourse structures, dataset metadata, and metacognitive feedback into the training loop of large language models to stabilize learning and produce interpretable, coherent long-form text.

Selected works

  • Improving Iterative Text Revision by Learning Where To Edit From Other Revision Tasks
  • Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision
  • Threads of Subtlety: Detecting Machine-Generated Texts Through Discourse Motifs
  • Toward Evaluative Thinking: Meta Policy Optimization with Evolving Reward Models
  • Align to Structure: Aligning Large Language Models with Structural Information

Want to be listed here? Contact athil003@umn.edu or the organizers.