What can we learn from bound learners?
In this talk, I discuss an often neglected perspective on understanding and modeling the acquisition of syntax: the limited capacities of the infant acquiring a language. Modern computational cognitive science typically treats learners as 'Laplacean demons', that is, supercalculators who can process enormous hypothesis spaces and keep track of innumerable statistics.
This ideal learner has no cognitive limitations: it has an infinite capacity for searching, storing, and calculating. However, across various domains of cognitive science, including language, it has been shown that acknowledging cognitive limitations (e.g. in working memory and hypothesis generation mechanisms) and understanding their nature allows us to model observed behavior better, and thus furthers our understanding of the underlying mechanisms. I present an extension of this perspective to the acquisition of syntax and discuss the possible sources of such cognitive limitations.