BEGIN:VCALENDAR
VERSION:2.0
PRODID:https://evenements.uqam.ca
X-PUBLISHED-TTL:P1W
BEGIN:VEVENT
UID:32495@https://evenements.uqam.ca
DTSTART:20260122T153000Z
SEQUENCE:4
TRANSP:OPAQUE
URL:https://evenements.uqam.ca/evenements/seminaire-au-dic-systematicity-in
 -language-models-knowledge-and-self-knowledge-par-jacob-andreas/32495?date
 =2026-01-22_10-30-00
LOCATION:UQAM - Pavillon Président-Kennedy (PK) (201\, avenue du Présiden
 t-Kennedy\, Montréal)
SUMMARY:Seminar at the DIC: "Systematicity in language models' knowledg
 e and self-knowledge" by Jacob Andreas
CLASS:PUBLIC
DESCRIPTION:Seminar held as part of the PhD program in cognitive compu
 ter science\, in collaboration with the CRIA research centre\n\nTITLE
 : Systematicity in language models' knowledge and self-knowledge\n\nJ
 acob ANDREAS\n\nThursday\, January 22\, 2026\, at 10:30 a.m.\n\nRoo
 m PK-5115 (it is also possible to attend virtually by registering onl
 ine)\n\nABSTRACT\n\nCurrent language models (LMs) can converse knowle
 dgeably\, and in remarkable depth\, about a wide range of topics. Bu
 t these same LMs often generate confident-but-incorrect outputs\, con
 tradict themselves\, and generally behave in ways that appear surpris
 ing and unnatural to human users. Increasingly\, researchers attribu
 te these failures not to surface-level statistical errors\, but inst
 ead to mistakes and inconsistencies in LMs' \"knowledge\" or \"belie
 fs\" about the outside world. To what extent should we understand LM
 s as possessing beliefs at all? How should this understanding influe
 nce the procedures we use to train them? This talk will describe a f
 amily of training objectives that optimize language models for *inte
 rnal systematicity* rather than predictive accuracy on some externa
 l dataset\, showing that such objectives can improve models' linguis
 tic and factual generalization\, as well as the reliability of thei
 r explanations of their own behavior.\n\nBIOGRAPHY\n\nJacob ANDREA
 S is Associate Professor in the Department of Electrical Engineerin
 g and Computer Science at MIT and a member of CSAIL\, where he direc
 ts the Language & Intelligence Group. His research focuses on unders
 tanding the computational foundations of language learning and on bu
 ilding intelligent systems that communicate effectively with humans
 . Andreas earned his PhD from UC Berkeley\, his MPhil from Cambridg
 e as a Churchill Scholar\, and his BS from Columbia. He has receive
 d the Samsung AI Researcher of the Year award\, MIT's Kolokotrones t
 eaching award\, and paper awards at NAACL and ICML. His work bridge
 s machine learning and natural language processing\, with particula
 r expertise in compositional generalization\, neural module networks
 \, and systematic reasoning in language models.\n\nREFERENCES\n\nAky
 ürek\, A. F.\, Akyürek\, E.\, Choshen\, L.\, Wijaya\, D. T.\, & Andr
 eas\, J. (2024\, August). Deductive closure training of language mod
 els for coherence\, accuracy\, and updatability. In Findings of th
 e Association for Computational Linguistics: ACL 2024 (pp. 9802-981
 8).\n\nLi\, B. Z.\, Guo\, Z. C.\, Huang\, V.\, Steinhardt\, J.\, & A
 ndreas\, J. (2025). Training language models to explain their own co
 mputations. arXiv preprint arXiv:2511.08579.\n\nDamani\, M.\, Puri
 \, I.\, Slocum\, S.\, Shenfeld\, I.\, Choshen\, L.\, Kim\, Y.\, & And
 reas\, J. (2025). Beyond binary rewards: Training LMs to reason abou
 t their uncertainty.\n\nKeywords: LLMs\, LATECE\, CRIA\, philosophy
 \, cognitive sciences\, School of Languages\, Department of Philosoph
 y\, cognitive neuroscience\, Department of Neuroscience\, Institut
 e of Cognitive Sciences\, natural language learning\, language learn
 ing\, language sciences\, machine learning\, deep learning\, human c
 ognition\, artificial intelligence\, AI\, ChatGPT\, higher education
 \, society\, Department of Linguistics\, master's in psychology\, do
 ctorate in psychology\, Faculty of Human Sciences\, UQAM Faculty o
 f Science\, Department of Psychology\, Department of Computer Scienc
 e\, doctorate in cognitive computer science\, doctorate in compute
 r science\n\nPrice: Free\n\n
CATEGORIES:Seminar,Conference
DTSTAMP:20260309T062523Z
CREATED:20260115T160540Z
LAST-MODIFIED:20260119T124658Z
END:VEVENT
END:VCALENDAR