Generative Grammar

The motivating idea of generative grammar is that we can gain insight into human language through the construction of explicit grammars. A language is taken to be a collection of structured symbolic expressions, and a generative grammar is simply an explicit theoretical account of one such collection.

Simple finite sets of rules can describe infinite languages with interesting structure. For example, the following instruction describes all the even-length palindromes over the alphabet {A, B}:

Start with S and recursively replace S by ASA, BSB, or nothing.

We show that ABBA is generated by constructing the sequence S → ASA → ABSBA → ABBA, which represents a derivation of the string. The instruction, or rule, constitutes a generative grammar for the language comprising such palindromes; for every even-length palindrome over the alphabet {A, B} the grammar provides a derivation. S can be regarded as a grammatical category analogous to a sentence, and A and B are analogous to words in a language.
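
To make the procedural character of such a grammar concrete, the rule can be rendered as a short program; the following Python sketch (an informal illustration, not part of the grammar itself) decides membership in the language and reconstructs derivations like the one just given.

    # Illustrative sketch of the grammar "start with S and recursively
    # replace S by ASA, BSB, or nothing", treated as a recognizer: a string
    # belongs to the language iff undoing the rules reduces it to nothing.

    def derives(s: str) -> bool:
        """True iff the grammar can derive the string s over {A, B}."""
        if s == "":
            return True                      # S -> nothing
        if len(s) >= 2 and s[0] == s[-1] and s[0] in "AB":
            return derives(s[1:-1])          # undo S -> ASA or S -> BSB
        return False

    def derivation(s: str) -> list:
        """Build the derivation sequence, e.g. S, ASA, ABSBA, ABBA."""
        assert derives(s)
        steps, left, right = ["S"], "", ""
        while s:
            left, s, right = left + s[0], s[1:-1], s[-1] + right
            steps.append(left + "S" + right)
        steps.append(left + right)           # final step: S -> nothing
        return steps

    print(derivation("ABBA"))   # ['S', 'ASA', 'ABSBA', 'ABBA']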

The following condition on nodes in trees turns out to be equivalent to the above grammar:

Nodes labeled S have left and right daughters either both labeled A or both labeled B; optionally these are separated by a node labeled S; nodes labeled A or B have no daughters.

If a tree satisfies this condition, then its frontier (the left-to-right sequence of its daughterless nodes) is an even-length palindrome of A's and B's; and every nonempty even-length palindrome over {A, B} is the frontier of some such tree. This is a declarative definition of a set of trees, not a procedure for deriving strings, but it provides an alternative way of defining essentially the same language of strings -- a different type of generative grammar.
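
The declarative formulation can be made equally concrete; in the following sketch (again merely illustrative) a tree is a pair consisting of a label and a list of subtrees, the stated condition is checked node by node, and the frontier is read off.

    # Illustrative check of the tree condition: S nodes dominate either
    # A ... A or B ... B, optionally with an S node in between; A and B
    # nodes are terminal.  frontier() yields the string the tree defines.

    def well_formed(tree) -> bool:
        label, kids = tree
        if label in ("A", "B"):
            return kids == []                          # no daughters
        if label == "S":
            if len(kids) == 2:                         # [A A] or [B B]
                return (kids[0][0] == kids[1][0] and kids[0][0] in "AB"
                        and all(well_formed(k) for k in kids))
            if len(kids) == 3:                         # [A S A] or [B S B]
                return (kids[0][0] == kids[2][0] and kids[0][0] in "AB"
                        and kids[1][0] == "S"
                        and all(well_formed(k) for k in kids))
        return False

    def frontier(tree) -> str:
        label, kids = tree
        return label if not kids else "".join(frontier(k) for k in kids)

    abba = ("S", [("A", []), ("S", [("B", []), ("B", [])]), ("A", [])])
    print(well_formed(abba), frontier(abba))   # True ABBA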

Studying a language like English in generative terms involves trying to determine whether some finite set of rules could generate the entire set of strings of words that a native speaker would find acceptable. Presumably this is possible, because speakers, despite their finite mental capacities, seem to have a full grasp of what is in their language.

A fundamental early insight was that limitations on the format of rules or constraints set limits on the languages that can be defined, and some limits are too tight to permit description of languages like English. String-rewriting rules of the form "rewrite X as wY," where w is a string of words and X and Y are grammatical categories, are too weak. (Grammars restricted to such rules are exactly equivalent to finite automata, so the claim that they are too weak amounts to the claim that, strictly speaking, English cannot be recognized by any finite automaton.) If the limits are too slack, however, the opposite problem emerges: the theory may be Turing-equivalent, meaning that it provides a grammar for every recursively enumerable set of strings and hence makes no formal distinction between natural languages and arbitrary recursively enumerable sets.

Building on work by Zellig Harris, Chomsky (1957) argued that a grammar for English needed to go beyond rules of the sort seen so far and must employ transformational rules that convert one structural representation into another. For example, using transformations, passive sentences might be described in terms of a rearrangement of the structural constituents of corresponding active sentences:

Take a subjectless active sentence structure with root label S and move the postverbal noun phrase leftward to become the subject (left branch) of that S.
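
The character of such an instruction can be conveyed by a toy simulation (an illustration added here, not a piece of any actual transformational grammar) in which clause structures are labeled bracketings and the missing subject is represented as an empty NP slot:

    # Toy simulation of the movement described above: structures are
    # (label, list-of-daughters); the transformation moves the first
    # postverbal NP of a subjectless S into the empty subject position.

    def move_np_to_subject(clause):
        label, kids = clause
        assert label == "S" and kids[0] == ("NP", [])   # empty subject slot
        for i, node in enumerate(kids[1:], start=1):
            if node[0] == "NP":                         # first postverbal NP
                new_kids = list(kids)
                new_kids[0] = node                      # NP becomes the subject
                new_kids[i] = ("NP", [])                # a gap is left behind
                return ("S", new_kids)
        return clause                                   # nothing to move

    underlying = ("S", [("NP", []),
                        ("V", ["made"]),
                        ("NP", ["mistakes"]),
                        ("PP", ["by", "people"])])
    print(move_np_to_subject(underlying))
    # ('S', [('NP', ['mistakes']), ('V', ['made']), ('NP', []), ('PP', ['by', 'people'])])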

Several researchers (beginning with Putnam 1961) have noted that transformational grammars introduce the undesirable property of Turing-equivalence. Others, including Chomsky, have argued that this is not a vitiating result.

Generative grammatical study and the investigation of human mental capacities have been related via widely discussed claims, including the following:

  1. People tacitly know (and learn in infancy) which sentences are grammatical and meaningful in their language.
  2. They possess (and acquire) such knowledge even about novel sentences.
  3. Therefore they must be relying not on memorization but on mentally represented rules.
  4. Generative grammars can be interpreted as models of mentally represented rule sets.
  5. The ability to have (and acquire) such rules must be a significant (probably species-defining) feature of human minds.

Critics have challenged all these claims. (It has been a key contribution of generative grammar to cognitive science to have stimulated enormous amounts of interesting critical discussion on issues of this sort.) Psycholinguistic studies of speakers' reactions to novel strings have been held to undercut (1). One response to (2) is that speakers might simply generalize or analogize from familiar cases. Regarding (3), some philosophers object that one cannot draw conclusions about brain inscriptions (which are concrete objects) from properties of sentences (arguably abstract objects). In response to (4), it has been noted that the most compact and nonredundant set of rules for a language will not necessarily be identical with the sets people actually use, and that the Turing-equivalence results mean that (4) does not imply any distinction between being able to learn a natural language and being able to learn any arbitrary finitely representable subject matter. And primatologists have tried to demonstrate language learning in apes to challenge (5).

Two major rifts appeared in the history of generative grammar, one in the years following 1968 and the other about ten years later. The first rift developed when a group of syntacticians who became known as the generative semanticists argued that syntactic investigation showed transformations must act directly on semantic structures that look nothing like superficial ones (in particular, structures that respect neither basic constituent order nor even the integrity of words), so that a sentence such as They persuaded Mike to fell the tree might be represented thus (a predicate-initial representation of clauses is adopted here, though this is not essential):

[S PAST [S CAUSE they [S AGREE Mike [S CAUSE Mike [S FALL the tree]]]]].

The late 1960s and early 1970s saw much dispute about the generative semantics program (Harris 1993), which ultimately dissolved, though many of its insights have been revived in recent work. Some adherents became interested in the semantic program of logician Richard Montague (1973), bringing the apparatus of model-theoretic semantics into the core of theoretical linguistics; others participated in the development of RELATIONAL GRAMMAR (Perlmutter and Postal 1983).

The second split in generative grammar, which developed in the late 1970s and persists to the present, is between those who continue to employ transformational analyses (especially movement transformations) and hence derivations, and those who by 1980 had completely abandoned them in favor of formulating grammars in constraint-based terms. The transformationalists currently predominate. The constraint-based minority is a heterogeneous group exemplified by the proponents of, inter alia, relational grammar (especially the formalized version in Johnson and Postal 1980), HEAD-DRIVEN PHRASE STRUCTURE GRAMMAR (Pollard and Sag 1994), LEXICAL FUNCTIONAL GRAMMAR (Bresnan and Kaplan 1982), and earlier frameworks including tree adjoining grammar (Joshi, Levy, and Takahashi 1975) and generalized phrase structure grammar (Gazdar et al. 1985).

In general, transformationalist grammarians have been less interested, and constraint-based grammarians more interested, in such topics as COMPUTATIONAL LINGUISTICS and the mathematical analysis of properties of languages (sets of expressions) and classes of formalized grammars.

Transformationalists formulate grammars procedurally, defining processes for deriving a sentence in a series of steps. The constraint-based minority states them declaratively, as sets of constraints satisfied by the correct structures. The derivational account of passives alluded to above involves a movement transformation: a representation like

[ [NP __ ] PAST made [NP mistakes] ([PP by people]) ]

is changed by transformation into something more like

[ [NP mistakes] were made [NP __ ] ([PP by people]) ].

The same facts might be described declaratively by means of constraints ensuring that for every transitive verb V there is a corresponding intransitive verb V* used in such a way that for B to be V*-ed by A means the same as for A to V B. This is vague and informal, but it may serve to indicate how passive sentences can be described without giving instructions for deriving them from active structures, yet without missing the systematic synonymy of actives and their related passives.
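
To give the flavor of this constraint-based alternative, the informal statement above can be rendered as a small sketch in which each transitive verb is paired with its passive counterpart in the lexicon and the two constructions are required to receive the same interpretation; the lexicon and the "semantics" below are, of course, hypothetical toys added for illustration.

    # Toy rendering of the constraint: "B is V*-ed by A" and "A Vs B"
    # must be assigned the same interpretation.  No derivation relates
    # the two structures; the lexicon records the V / V* correspondence.

    PASSIVE_FORM = {"make": "made", "see": "seen", "admire": "admired"}

    def active_semantics(subj, verb, obj):
        """Toy 'semantics': a predicate-argument triple."""
        return (verb, subj, obj)

    def passive_semantics(subj, verb_star, by_obj):
        """Interpret 'subj is V*-ed by by_obj' via the corresponding active verb."""
        verb = next(v for v, p in PASSIVE_FORM.items() if p == verb_star)
        return (verb, by_obj, subj)          # same relation, arguments restored

    # The constraint: actives and their related passives are synonymous.
    assert active_semantics("people", "make", "mistakes") == \
           passive_semantics("mistakes", "made", "people")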

There is little explicit debate between transformationalists and constraint-based theorists, though there have been a few interchanges on such topics as (1) whether a concern with formal exactitude in constructing grammars (much stressed in the constraint-based literature) is premature at the present stage of knowledge; (2) whether claims about grammars can sensibly be held to have neurophysiological relevance (as transformationalists have maintained); and (3) whether progress in the computational implementation of nontransformational grammars provides a theoretical argument in their favor.

At least two topics can be identified that have been of interest to both factions within generative grammar. First, broadening the range of languages from which data are drawn is regarded as most important. The influence of relational grammar during the 1980s helped expand considerably the number of languages studied by generative grammarians, and so did the growth of the community of generative grammarians in European countries after 1975. Second, the theoretical study of how natural languages might be learned by infants -- particularly how SYNTAX is learned -- has been growing in prominence and deserves some further discussion here.

Transformationalists argue that what makes languages learnable is that grammars differ only in a finite number of parameter settings triggered by crucial pieces of evidence. For example, encountering a clause in which the verb follows its object might trigger the "head-final" value of the head-position parameter, as in Japanese (where verbs come at the ends of clauses), rather than the "head-initial" value that characterizes English. (In some recent transformationalist work, the head-final setting actually corresponds to obligatory leftward movement of postverbal constituents into preverbal positions; the point is simply that universal grammar is conjectured to make available only finitely many distinct alternatives.) Theories of learning based on this idea face significant computational problems, because even quite simple parameter systems can define local blind alleys from which a learning algorithm cannot escape regardless of further input data (see Gibson and Wexler 1994).
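
The flavor of trigger-based parameter setting can be conveyed by a deliberately oversimplified sketch -- not the Gibson and Wexler algorithm itself, and involving only a single parameter, which cannot by itself exhibit the blind-alley problem (that arises from interactions among several parameters):

    # Oversimplified, hypothetical sketch of triggering: one binary
    # parameter (head-initial vs. head-final) is flipped whenever the
    # current setting cannot analyze an input clause.

    def analyzes(setting: str, clause: tuple) -> bool:
        """Can a grammar with this head-direction setting parse the clause?
        Clauses are toy (first, second) pairs in surface order."""
        first, second = clause
        if setting == "head-initial":
            return first == "V"              # verb precedes its object
        return second == "V"                 # head-final: verb follows its object

    def learn(corpus, setting="head-initial"):
        for clause in corpus:
            if not analyzes(setting, clause):
                # the clause is a trigger: switch to the other setting
                setting = ("head-final" if setting == "head-initial"
                           else "head-initial")
        return setting

    # Japanese-like input (object before verb) triggers the head-final setting.
    print(learn([("O", "V"), ("O", "V")]))   # head-final
    print(learn([("V", "O")]))               # head-initial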

The general goal is seen as that of overcoming the problem of the POVERTY OF THE STIMULUS: the putative lack of an adequate evidential basis to support induction of a grammar via general knowledge acquisition procedures (see LEARNING SYSTEMS and ACQUISITION, FORMAL THEORIES OF).

Nontransformationalists are somewhat more receptive than transformationalists to the view that the infant's experience might not be all that poverty-stricken: once one takes account of the vast amount of statistical information contained in the corpus of observed utterances (see STATISTICAL TECHNIQUES IN NATURAL LANGUAGE PROCESSING), it can be seen that the child's input might be rich enough to account for language acquisition through processes of gradual generalization and statistical approximation. This idea, familiar from pre-1957 structuralism, is reemerging in a form that is cognizant of the past four decades of generative grammatical research without being wholly antagonistic in spirit to such trends as CONNECTIONIST APPROACHES TO LANGUAGE. The research on OPTIMALITY THEORY that has emerged under the influence of connectionism meshes well both with constraint-based approaches to grammar and with work on how exposure to data can facilitate identification of constraint systems.
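
As a small illustration (a toy, not a model of acquisition), even a tiny corpus yields distributional statistics -- here, bigram counts -- of the kind such a gradual, statistically driven learner could exploit; the corpus below is invented:

    # Toy example of distributional information in a corpus of utterances:
    # bigram transition counts over an invented three-sentence corpus.

    from collections import Counter

    corpus = ["the dog barked", "the cat slept", "a dog slept"]
    bigrams = Counter()
    for sentence in corpus:
        words = sentence.split()
        bigrams.update(zip(words, words[1:]))

    # e.g. 'dog' and 'cat' are both followed by 'slept' -- the sort of
    # shared distribution a statistical learner could generalize over.
    print(bigrams.most_common(3))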

There is no reason to see such developments as being at odds with the original motivating idea of generative grammar, inasmuch as rigorous and exact description of human languages and what they have in common -- that is, a thorough understanding of what is acquired -- is surely a prerequisite to any kind of language acquisition research, whatever its conclusions.

-- Geoffrey K. Pullum

References

Bresnan, J., and R. Kaplan. (1982). The Mental Representation of Grammatical Relations. Cambridge, MA: MIT Press.

Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton.

Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.

Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.

Gazdar, G., E. Klein, G. K. Pullum, and I. A. Sag. (1985). Generalized Phrase Structure Grammar. Cambridge, MA: Harvard University Press.

Gibson, E., and K. Wexler. (1994). Triggers. Linguistic Inquiry 25:407-454.

Harris, R. A. (1993). The Linguistic Wars. New York: Oxford University Press.

Johnson, D. E., and P. M. Postal. (1980). Arc Pair Grammar. Princeton: Princeton University Press.

Joshi, A., L. S. Levy, and M. Takahashi. (1975). Tree adjunct grammars. Journal of Computer and System Sciences 10:136-163.

Montague, R. (1973). The proper treatment of quantification in ordinary English. Reprinted in R. H. Thomason, Ed., Formal Philosophy: Selected Papers of Richard Montague. New Haven: Yale University Press, 1974.

Perlmutter, D. M., and P. M. Postal. (1983). Studies in Relational Grammar 1. Chicago: University of Chicago Press.

Pollard, C., and I. A. Sag. (1994). Head-driven Phrase Structure Grammar. Chicago: University of Chicago Press.

Putnam, H. (1961). Some issues in the theory of grammar. Proceedings of the Twelfth Symposium in Applied Mathematics. Providence: American Mathematical Society. Reprinted in Philosophical Papers, vol. 2: Mind, Language and Reality. New York: Cambridge University Press, 1975, pp. 85-106; and in G. Harman, Ed., On Noam Chomsky. New York: Anchor, 1974, pp. 80-103.