Emily M. Bender — Language Models and Linguistics



In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.

Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender


Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.



0:00 Sneak peek, intro

1:03 Stochastic Parrots

9:57 The societal impact of big language models

16:49 How language models can be harmful

26:00 The important difference between linguistic form and meaning

34:40 The octopus thought experiment

42:11 Language acquisition and the future of language models

49:47 Why benchmarks are limited

54:38 Ways of complementing benchmarks

1:01:20 The #BenderRule

1:03:50 Language diversity and linguistics

1:12:49 Outro
