Semantics — the study of meaning — is a profoundly complex topic that has challenged linguists, philosophers, and AI researchers for decades.
With the recent explosion in large language models, interest has surged in understanding these models’ capabilities and limitations when it comes to learning meaning.
Let’s untangle the web of meaning, evaluate the strengths and shortcomings of LLMs in semantic understanding, and take a look at the evolving strategies to make these models more human-like in their semantic grasp.
I. What Exactly Constitutes Meaning?
Breaking it down, linguists often categorize meaning into distinct types:
Lexical Meaning: The straightforward definition of individual words. For instance, the word “table” signifies a flat-topped piece of furniture with legs.
Compositional Meaning: This deals with how words combine, and in what order, to give a sentence its meaning. Structure matters: “The table is brown” asserts that a table has a certain property, while “the brown table” merely picks out a particular object (the embedding sketch after this list makes the contrast concrete).
Pragmatic Meaning: Context is king here. The words in a sentence often carry nuances and implications that aren’t directly stated but are understood through context. “Can you pass the salt?”, for instance, is literally a question about ability, yet in context it is understood as a request.
Associative Meaning: This is a more personal form of meaning. It involves the memories or feelings that a word might stir up in someone due to their unique experiences. The word “winter”, for example, might evoke cozy holidays for one reader and long, dark commutes for another.
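To ground the first two categories in the LLM setting, here is a minimal sketch of how embedding similarity can probe lexical and compositional meaning. It assumes the sentence-transformers library and its all-MiniLM-L6-v2 model, neither of which is prescribed by the discussion above; any sentence-embedding model would illustrate the same point.

```python
# Minimal sketch: probing lexical and compositional meaning with
# embedding similarity. Assumes `pip install sentence-transformers`
# and the all-MiniLM-L6-v2 model (an illustrative choice, not the
# only option).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Lexical meaning: related words tend to land near each other
# in embedding space.
table, desk, justice = model.encode(["table", "desk", "justice"])
print(f"table vs. desk:    {cosine(table, desk):.3f}")    # expect higher
print(f"table vs. justice: {cosine(table, justice):.3f}")  # expect lower

# Compositional meaning: the same content words arranged differently
# yield different vectors, so their similarity falls short of 1.0.
sent, phrase = model.encode(["The table is brown.", "the brown table"])
print(f"sentence vs. phrase: {cosine(sent, phrase):.3f}")
```

Pragmatic and associative meaning resist this kind of probe: a static embedding carries no situational context and no personal history, which previews the shortcomings we evaluate later on.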