DIC Seminar: "Symbols and Grounding in LLMs" by Ellie Pavlick

Seminar held as part of the PhD program in cognitive computer science (DIC), in collaboration with the CRIA research centre and the ISC

 

Ellie PAVLICK

Thursday, October 5, 2023, at 10:30 a.m.

PK-5115 (remote attendance is possible; to do so, you must register here)

 

Title: Symbols and Grounding in LLMs

 

Abstract

Large language models (LLMs) appear to exhibit human-level abilities on a range of tasks, yet they are notoriously regarded as "black boxes", and little is known about the internal representations and mechanisms that underlie their behavior. This talk will discuss recent work that seeks to illuminate the processing taking place under the hood. I will focus in particular on questions related to LLMs' ability to represent abstract, compositional, and content-independent operations of the type assumed to be necessary for advanced cognitive functioning in humans.

 

Biography

Ellie Pavlick is an Assistant Professor of Computer Science at Brown University. She received her PhD from the University of Pennsylvania in 2017, where her focus was on paraphrasing and lexical semantics. Her research takes cognitively inspired approaches to language acquisition, focusing on grounded language learning and on the emergence of structure (or lack thereof) in neural language models. She leads the Language Understanding and Representation (LUNAR) Lab, which collaborates with Brown's Robotics and Visual Computing labs and with the Department of Cognitive, Linguistic, and Psychological Sciences.

 

References

Tenney, Ian, Dipanjan Das, and Ellie Pavlick. "BERT Rediscovers the Classical NLP Pipeline." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. https://arxiv.org/pdf/1905.05950.pdf

Pavlick, Ellie. "Symbols and grounding in large language models." Philosophical Transactions of the Royal Society A 381.2251 (2023): 20220041. https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2022.0041

Lepori, Michael A., Thomas Serre, and Ellie Pavlick. "Break it down: evidence for structural compositionality in neural networks." arXiv preprint arXiv:2301.10884 (2023). https://arxiv.org/pdf/2301.10884.pdf

Merullo, Jack, Carsten Eickhoff, and Ellie Pavlick. "Language Models Implement Simple Word2Vec-style Vector Arithmetic." arXiv preprint arXiv:2305.16130 (2023). https://arxiv.org/pdf/2305.16130.pdf


Date / time

Thursday, October 5, 2023
10:30 a.m.

Location

UQAM - Pavillon Président-Kennedy (PK)
PK-5115 and online
201, avenue du Président-Kennedy
Montréal (QC)

Price

Free
