Bachelor Seminar

About the Seminar:

Organizational Information:

Kickoff: 07.02.2025, 9:30-11:00

Interim presentation: 03.05.2025, from 9:00

Submission deadline: 25.06.2025

Final presentation: 27.06.2025, from 9:00

List of Topics:
  1. Linguistic Form vs. Language Understanding in Large Language Models: A Critical Analysis of the "Stochastic Parrots" Debate
  2. Explainable AI: Crafting and Applying Explanations in Moral and Non-Moral Decision-Making

Linguistic Form vs. Language Understanding in Large Language Models:
A Critical Analysis of the "Stochastic Parrots" Debate

Description:
The rise of large language models (LLMs) like GPT-3 raises questions about whether these models truly "understand" language or merely replicate statistical patterns. The paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" highlights risks such as biases, ethical concerns, and the lack of genuine language comprehension. This seminar explores the distinction between linguistic form and actual language understanding in LLMs.
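
To make this distinction concrete, here is a minimal, illustrative Python sketch (an illustration added here, not part of the seminar materials): a bigram model that generates locally fluent text purely from word co-occurrence counts. It is a drastic simplification of an LLM's next-token prediction, but it shows how plausible linguistic form can arise from statistics alone, with no representation of meaning.

  # Minimal bigram "language model": an illustrative sketch, not an LLM.
  # It only counts which word follows which in a toy corpus.
  import random
  from collections import defaultdict

  corpus = (
      "the model predicts the next word . "
      "the model repeats patterns from the data . "
      "the data shapes what the model predicts ."
  ).split()

  # Record, for every word, the words observed to follow it.
  bigrams = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      bigrams[prev].append(nxt)

  def generate(start, length=12):
      """Sample a continuation word by word from the co-occurrence counts."""
      words = [start]
      for _ in range(length):
          followers = bigrams.get(words[-1])
          if not followers:
              break
          words.append(random.choice(followers))
      return " ".join(words)

  print(generate("the"))

The output reads as locally coherent English, yet the program manipulates nothing but co-occurrence statistics; whether scaling this principle up to billions of parameters amounts to understanding is precisely what the "stochastic parrots" debate contests.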

Learning & Objectives of the Thesis:
The paper will critically assess how well large language models replicate linguistic structures and whether this equates to real language understanding. Key objectives include:

  • Differentiating linguistic form from language understanding.
  • Evaluating whether current LLMs go beyond simple pattern recognition.
  • Analyzing the ethical and societal implications of LLMs.
  • Understanding the technical limitations of current AI language models.

Requirements:

  • Basic knowledge of linguistics or computational linguistics.
  • Familiarity with AI and natural language processing (NLP) concepts.
  • Ability to conduct literature reviews and critical analysis.
  • Awareness of ethical issues in AI development.

Sources:

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). https://doi.org/10.1145/3442188.3445922