The interplay between syntax and semantics is one of the most complex and challenging problems one must face in building a plausible model of human language understanding. In the current work, we took as our starting point the fact that the main verb of a sentence, by imposing both syntactic and semantic constraints on its arguments, can provide important insights in this direction. Following this line of thought, we have designed and implemented TRANNS (Thematic Role Assignment Neural Network System), which succeeds in connecting the computation of a sentence's structure with that of its semantics. TRANNS is also the first neural network (NN) model of language understanding that provides a link between its own structural organization and that of the human language processing system. By making a clear-cut distinction between purely syntactic/semantic formation rules and syntax-semantics interface constraints, TRANNS is in line with a range of linguistic processing principles. Furthermore, the basic assumptions behind the model are strongly supported by experimental data.
In TRANNS, thematic role labels are assigned to constituent phrases as a function of both their position within the structural configuration of the input sentence and their semantic features. At the end of processing, the system outputs a complete thematic role description of the input sentences, i.e., their associated predicate-argument structure.
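To make the idea concrete, the following is a minimal sketch (not taken from the thesis) of thematic role assignment as a function of structural position and semantic features; the role inventory, feature names, and rule table are illustrative assumptions only.

```python
# Illustrative sketch: map a constituent's structural position and semantic
# features to a thematic role, then assemble a predicate-argument structure.
# All role/feature labels here are hypothetical, not the system's actual inventory.

def assign_role(position: str, features: frozenset) -> str:
    """Assign a thematic role from position + semantic features."""
    if position == "subject":
        return "Agent" if "animate" in features else "Instrument"
    if position == "direct_object":
        return "Patient"
    if position == "oblique":
        return "Location" if "place" in features else "Instrument"
    raise ValueError(f"unknown position: {position}")

# Example: "The girl opened the door with the key."
constituents = [
    ("the girl", "subject", frozenset({"animate", "human"})),
    ("the door", "direct_object", frozenset({"concrete"})),
    ("the key", "oblique", frozenset({"concrete", "tool"})),
]

# Output: the predicate plus its role-labelled arguments.
pas = {
    "predicate": "open",
    "arguments": {phrase: assign_role(pos, feats)
                  for phrase, pos, feats in constituents},
}
print(pas)
# {'predicate': 'open', 'arguments': {'the girl': 'Agent',
#  'the door': 'Patient', 'the key': 'Instrument'}}
```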
There are two different implementations of our model. TRANNS(I) is a localist neural network system, in that individual words, semantic and syntactic features, and thematic roles correspond to individual neurons in the network. The proper functioning of TRANNS(I) served as a proof of concept for a later implementation, TRANNS(II), which uses distributed representations and a large-scale neural network simulator.
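A minimal sketch of the difference between the two coding schemes, under assumed toy inventories (the vocabularies, features, and vector sizes below are illustrative, not the model's):

```python
import numpy as np

# Localist coding (as in TRANNS(I)): each word, feature, and thematic role
# gets its own dedicated unit, so each item is a one-hot vector.
vocab = ["girl", "door", "key", "open"]
roles = ["Agent", "Patient", "Instrument"]

def one_hot(item, inventory):
    vec = np.zeros(len(inventory))
    vec[inventory.index(item)] = 1.0
    return vec

print(one_hot("girl", vocab))   # [1. 0. 0. 0.] -> one active unit per word
print(one_hot("Agent", roles))  # [1. 0. 0.]    -> one active unit per role

# Distributed coding (as in TRANNS(II)) instead spreads each item over many
# units; here, a dense random vector per word stands in for learned patterns.
rng = np.random.default_rng(0)
distributed = {w: rng.standard_normal(16) for w in vocab}
```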