DIC Seminar: "Articulating the Ineffable: The Analytic Turn in Generative AI" by Ari Holtzman

Seminar held as part of the Doctorat en informatique cognitive (PhD program in cognitive computer science), in collaboration with the CRIA research centre

 

TITLE: Articulating the Ineffable: The Analytic Turn in Generative AI

 

Ari HOLTZMAN

Thursday, November 13, 2025, at 10:30 a.m.

Room PK-5115 (you can also attend online by registering here)

 

ABSTRACT

Generative AI has taken an analytic turn: we now cultivate models from objectives and data, then try to understand what we've grown. Current approaches to studying LLMs, focused on engineering progress or on mechanistic explanations at the implementation level, are insufficient for grasping their emergent behaviors. I will discuss what it means for interpretability approaches to be predictive rather than mechanistic, the changing landscape of machine communication, and efforts to identify fundamental laws that govern LLM behavior. I will argue that developing a precise behavioral vocabulary and conceptual frameworks is the only way to turn the 'fieldwork' of finding surface regularities in LLMs into a science of LLMs. The guiding questions are basic, empirical, and exploratory: what do models consistently do, what do they reliably miss, and how do they incorporate and store new information? Along the way, we'll discover that AI has been given a new mandate: to articulate the ineffable, by describing aspects of communication and computation that we previously had no words for because they were stuck too deep inside human cognition to be easily referenced.

 

BIOGRAPHY

Ari HOLTZMAN is an Assistant Professor of Computer Science and Data Science at the University of Chicago, where he directs the Conceptualization Lab. His research focuses on developing new conceptual frameworks for understanding generative models, treating them as complex systems rather than traditional engineering artifacts. He introduced nucleus sampling, a text-generation algorithm used in deployed systems including the OpenAI API.
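For context on nucleus sampling (Holtzman et al., 2020, in the references below): at each generation step, the model samples only from the smallest set of tokens whose cumulative probability exceeds a threshold p, truncating the unreliable low-probability tail of the distribution. A minimal illustrative sketch in Python/NumPy follows; the function name and toy distribution are hypothetical, not drawn from any particular implementation.

    import numpy as np

    def nucleus_sample(probs, p=0.9, rng=None):
        """Sample a token index from the smallest set of tokens whose
        cumulative probability exceeds p (top-p / nucleus sampling)."""
        if rng is None:
            rng = np.random.default_rng()
        # Sort token probabilities from most to least likely.
        order = np.argsort(probs)[::-1]
        sorted_probs = probs[order]
        # Smallest prefix (the "nucleus") whose cumulative mass reaches p.
        cutoff = int(np.searchsorted(np.cumsum(sorted_probs), p)) + 1
        # Renormalize within the nucleus and sample from it.
        nucleus = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()
        return int(order[rng.choice(cutoff, p=nucleus)])

    # Toy next-token distribution over a five-token vocabulary (illustrative).
    probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
    print(nucleus_sample(probs, p=0.9))  # samples from the top tokens covering at least 90% of the mass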

 

REFERENCES

Holtzman, A., et al. (2023). Generative models as a complex systems science. arXiv:2308.00189.

Holtzman, A., Buys, J., Du, L., Forbes, M., & Choi, Y. (2020). The curious case of neural text degeneration. International Conference on Learning Representations (ICLR).

West, P., Holtzman, A., Hessel, J., Chandu, K., & Choi, Y. (2022). Symbolic knowledge distillation: From general language models to commonsense models. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).

Holtzman, A., West, P., Shwartz, V., Choi, Y., & Zettlemoyer, L. (2021). Surface form competition: Why the highest probability answer isn't always right. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Date / time

Thursday, November 20, 2025
10:30 a.m.

Location

UQAM - Pavillon Président-Kennedy (PK)
PK-5115 and online
201, avenue du Président-Kennedy
Montréal (QC)

Price

Free

Information

Visit the website
