An integrated semantic and cognitive model of presuppositional dependencies
Language does not reach us instantaneously, but as a stream of input (a stream of written words or spoken sounds). When we make sense of linguistic input, we build an interpretation incrementally. This project studies the processes that underlie incremental interpretation; in particular, it aims to build computational models of the working memory processes involved. To this end, the focus is on so-called presuppositions, a phenomenon whereby sentences signal that some piece of information was established earlier in the discourse. Presuppositions allow us to use semantic theories of the dynamics of interpretation at the linguistic end of our models, letting theory-internal concepts become predictive factors in the cognitive architecture used to model the comprehension process. The models will be based on a series of eye-tracking and self-paced reading experiments conducted to identify the retrieval processes at play in presupposition resolution, with both theory-internal and theory-external features as manipulations. To test the models, we will annotate a large eye-tracking corpus with information about presuppositional content.
This research aims to pioneer the study of how the linguistically controlled information flow in discourse is processed. The results will be connected both to existing work on retrieval in syntactic dependency processing and to semantic theories of presupposition resolution. By combining semantics and processing in a single model, the project will deliver a vital baseline for future research into the incremental comprehension of language.
More details about the project are available here.