A Concise Introduction to Syntactic Theory: The Government-Binding Approach

by Elizabeth A. Cowper

eBook: $27.99 (original price $36.99)

Overview

This textbook is intended to give students a quick start in using theory to address syntactic questions. At each stage, Cowper is careful to introduce a theoretical apparatus that is no more complex than is required to deal with the phenomenon under consideration. Comprehensive and up-to-date, this accessible volume will also provide an excellent refresher for linguists returning to the study of Government-Binding theory.

"Cowper exhibits the analytical devices of current principles-and-parameters approaches, takes readers carefully through the central elements of grammatical theory (including very recent work), and ushers them selectively into the technical literature. . . . A serious introduction for those who want to know the nuts and bolts of syntactic theory and to see why linguists are so excited these days."—David Lightfoot, University of Maryland

"An excellent short introduction to the Government and Binding model of syntactic theory. . . . Cowper's work succeeds in teaching syntactic argumentation and in showing the conceptual reasons behind specific proposals in modern syntactic theory."—Jaklin Kornfilt, Syracuse University


Product Details

ISBN-13: 9780226160221
Publisher: University of Chicago Press
Publication date: 06/15/2009
Sold by: Barnes & Noble
Format: eBook
Pages: 220
File size: 5 MB


Read an Excerpt

A Concise Introduction to Syntactic Theory

The Government-Binding Approach


By Elizabeth A. Cowper

The University of Chicago Press

Copyright © 1992 The University of Chicago
All rights reserved.
ISBN: 978-0-226-16022-1



CHAPTER 1

The Theory in Context


The theory of Government and Binding has developed out of a tradition going back to the 1950s, known variously as transformational grammar (TG), generative grammar, or generative-transformational grammar. While it is not the purpose of this textbook to give a detailed, or even thorough, history of the theory, I will give a short outline of how the theory has changed and developed over the years, so as to provide a context for what follows.

In section 1.1, some of the most fundamental goals and assumptions of the theory of generative grammar will be identified. These are the things which have remained constant throughout the history of the theory, even though just about every aspect of how the theory is structured and implemented has changed. In section 1.2, a number of major stages of the theory will be identified, and the major properties of each stage will be described. For a more comprehensive discussion of the development of the theory, see Newmeyer (1980).


1.1 Goals and Assumptions

The fundamental problem of linguistic theory, according to Chomsky, is that of "determining how it is possible for a child to acquire knowledge of a language" (Chomsky 1973:12). This question has remained at the core of work in generative grammar since its inception. In order to answer this question, we must first examine it and make sure we understand it precisely enough that it can guide our investigations in a meaningful way.

First, what is meant by "knowledge of a language"? I am not speaking here of the kind of explicit, conscious knowledge taught in elementary-school grammar classes. Rather, I mean the largely unconscious knowledge that makes us speakers of a language—the knowledge we use when we judge that (1) is a grammatical sentence of English while (2) is not.

(1) Mary is dancing on the stage.

(2) *Mary are danced the stage on.


Before we can begin to answer the question of how knowledge of a language can be acquired, we must have some notion of exactly what it is that is being acquired.

Knowledge, however, is not something we can observe directly. This is especially true in the case of language. Every normal human being is a native speaker of (at least) one language and thus by definition has acquired knowledge of that language. Most people, however, never study their native language in any conscious way. Just as people know how to walk without consciously knowing which muscles, nerves, and parts of the brain are involved, people know their native language without consciously knowing its structure. In contrast, people who know predicate logic, or chess, for example, normally do have an awareness of the structure of the system of rules governing what can be done in logic or in a game of chess.

Since we cannot observe knowledge of language directly, how then can we study it? What we can do is observe people as they use this knowledge in various ways—as they speak and understand their native language. In other words, we can observe the linguistic behavior of native speakers. Another thing we can do is ask people, including ourselves, to use their knowledge in judging whether particular sentences are acceptable sentences of their native language. From these types of linguistic behavior, we can then try to deduce the knowledge that enables them to perform the behavior. S. Jay Keyser, in class lectures in the late 1960s, put it very well: We are trying to figure out what it is that people act as if they know. Our job is therefore not merely to describe what people say, but, rather, to figure out what might be the knowledge which permits them to perform their linguistic behavior. We shall henceforth refer to this knowledge as the speaker's linguistic competence and to the behavior which we can observe as the speaker's linguistic performance.

The problem, of course, is that linguistic competence is not the only factor which influences linguistic performance. For this reason, not everything a native speaker of English says is an equally reliable indicator of that speaker's linguistic competence. A rather blatant example is given in (3).

(3) Please don't shut the window on my [loud scream].


External events, such as a window shutting on someone's hand, can interrupt a speaker and force the abandonment of a sentence in midstream. No one would seriously propose that the sentence in (3) as it stands constitutes a grammatical sentence of English. Rather it is a sentence fragment, or an incomplete sentence, which happened to be uttered by someone on a particular occasion.

While it is fairly clear that (3) can be discarded as contaminated data, many cases are far less obvious. Consider, for example, the sentences in (4) and (5).

(4) a. They talked to Sue and I about the accident.

b. Me and Sue saw the accident.

(5) a. They talked to I about the accident.

b. Me saw the accident.


Sentences like those in (4) are produced by speakers of English fairly frequently, while sentences like those in (5) are almost never observed. Nonetheless, most speakers of English would say that all four of the sentences are ungrammatical. The problem with all of these sentences has to do with the form of the pronoun I/me. Normally, when this pronoun occurs as the object of a verb or of a preposition, it takes the so-called objective form, me. When it occurs as the subject of a clause, it normally takes the so-called nominative form, I. Confusion tends to arise when this pronoun occurs in a coordinate structure containing the conjunction and. The question is, in constructing a grammar of English which is supposed to reflect the competence of native speakers of English, do we consider the sentences in (4) grammatical or ungrammatical? If we consider them grammatical, then the rule governing the choice of pronoun form will have to have in it a special subclause saying that in coordinate structures, the choice of form is freer. If we consider them ungrammatical, then we must explain why speakers often produce sentences like (4) and almost never produce sentences like (5).

The point here is that in order to develop a theory of competence, or a model of a native speaker's linguistic knowledge, we will, at every step of the way, be making judgments about the relevance of the data. These judgments are possible only in the context of the theory itself. The theory of linguistic competence will ultimately interact with other theories of memory, production, and comprehension, as well as with an understanding of external events (see (3) above) to account for particular instances of linguistic behavior. For the moment, we will simply say that generative grammar has been and continues to be primarily concerned with linguistic competence.

The next question that arises in our examination of Chomsky's question has to do with the model of knowledge, or grammar, we are constructing. What should the grammar do? According to Chomsky, the grammar must explicitly account for all of the grammatical sentences of the language under consideration. In other words, every grammatical sentence of the language must conform to all the requirements of the grammar, and every ungrammatical sentence must violate some requirement of the grammar. In this, generative grammars differ from the traditional and structuralist grammars that preceded them. Those grammars, again according to Chomsky,

do not attempt to determine explicitly the sentences of a language or the structural descriptions of these sentences. Rather, such grammars describe elements and categories of various types, and provide examples and hints to enable the intelligent reader to determine the form and structure of sentences not actually presented in the grammar. Such grammars are written for the intelligent reader. To determine what they say about sentences one must have an intuitive grasp of certain principles of linguistic structure. These principles, which remain implicit and unexpressed, are presupposed in the construction and interpretation of such grammars. While perhaps perfectly adequate for their particular purposes, such grammars do not attempt to account for the ability of the intelligent reader to understand the grammar. The theory of generative grammar, in contrast, is concerned precisely to make explicit the contribution of the intelligent reader. (Chomsky 1973:8)


When a particular sentence conforms to all the requirements, or rules, of a generative grammar, we say that the grammar generates that sentence. If a sentence violates one or more requirements or rules of the grammar, then we say that the grammar fails to generate that sentence. If a grammar generates all the grammatical sentences of a language, and fails to generate any ungrammatical sentences, then we say that the grammar is observationally adequate—it successfully distinguishes between grammatical and ungrammatical sentences.
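The notion of a grammar "generating" a sentence can be made concrete with a toy model. The sketch below (a minimal context-free grammar and recognizer; the rules, lexicon, and Python representation are invented for illustration and are not Cowper's) generates sentence (1) and fails to generate sentence (2):

```python
# A toy context-free grammar and recognizer.  A string is "generated"
# if and only if it is derivable from S by the rules; the rules and
# lexicon here are invented for illustration only.
RULES = {
    "S":  [["NP", "AUX", "VP"]],
    "NP": [["N"], ["DET", "N"]],
    "VP": [["V"], ["V", "PP"]],
    "PP": [["P", "NP"]],
}
LEXICON = {
    "N": {"Mary", "stage"},
    "AUX": {"is"},
    "V": {"dancing"},
    "P": {"on"},
    "DET": {"the"},
}

def derives(symbol, words):
    """True if `symbol` can derive exactly the word sequence `words`."""
    if symbol in LEXICON:
        return len(words) == 1 and words[0] in LEXICON[symbol]
    return any(matches(expansion, words) for expansion in RULES.get(symbol, []))

def matches(symbols, words):
    """True if the symbol sequence can jointly derive the word sequence."""
    if not symbols:
        return not words
    head, rest = symbols[0], symbols[1:]
    # Try every split point for the first symbol.
    return any(
        derives(head, words[:i]) and matches(rest, words[i:])
        for i in range(1, len(words) + 1)
    )

def generates(sentence):
    return derives("S", sentence.split())

print(generates("Mary is dancing on the stage"))   # grammatical: True
print(generates("Mary are danced the stage on"))   # ungrammatical: False
```

This toy grammar is observationally adequate only for its tiny fragment of English; the point is the all-or-nothing character of generation, not linguistic coverage.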

It is entirely possible that several very different observationally adequate grammars could be written for the same language. The goal of linguistic theory, however, goes beyond simply describing which sentences are grammatical and which are not. What we are trying to understand is not the language at all, but knowledge of language and how it can be acquired. Our grammar must therefore achieve more than observational adequacy. In addition to accounting for grammatical versus ungrammatical sentences, it must capture linguistically significant generalizations. In other words, it must provide the correct analysis for the sentences of the language—the analysis which corresponds to the native speaker's (unconscious) knowledge. For example, if two sentences are related to each other in a systematic way, as are the sentences in (6), then the grammar must explicitly account for that relation.

(6) a. Sue started the car.

b. The car started.


A grammar which accurately reflects the native speaker's knowledge is called a descriptively adequate grammar. It provides a model of the native speaker's linguistic competence. But recall that we are not simply trying to understand what knowledge of language is; we are trying to understand how knowledge of language can be acquired. The process of language acquisition can, for our purposes, be represented as in (7).

(7)

[ILLUSTRATION OMITTED]

When a child acquires its native language, its knowledge, or competence, begins in an initial state (note that we have not yet said anything about what that initial state might be). As a result of the child's being exposed to language spoken in his/her environment, the competence gradually develops into an adult competence, which we call the final state.

A descriptively adequate grammar is a representation of the final state depicted in (7). A theory which also accounts for the initial state, and how it develops into the final state through exposure to linguistic data, is an explanatorily adequate theory—a theory which explains how it is possible to acquire knowledge of a language. The problem here is that, as far as we can tell, the linguistic data to which children are exposed are not by themselves sufficient to account for the development of the adult competence. Native speakers, in other words, know things which they could not possibly have learned from the language spoken around them. An example is given in (8).

(8) a. *Which book did Mary hire the person that _____ wrote _____?

b. Who did Mary think that Anna saw _____?

The fact illustrated in (8) is stated informally in (9).

(9) In English, when a question word corresponds to a gap in a relative clause, the sentence is ungrammatical. If the gap is in a complement clause, the sentence is grammatical.


It is difficult to imagine how someone could learn this. Children do not make errors such as (8a), which means that there is no evidence of a stage of not knowing (9), preceding a stage of knowing (9). See Lightfoot (1982) for discussion of this issue.

There are many other aspects of linguistic competence which do not seem to be learnable from ordinary linguistic data in the child's environment. The conclusion to be drawn from this is that the child does not come empty-handed to the task of language acquisition. In other words, the initial state of linguistic competence has a role to play as well. In order to understand how language can be acquired, then, we must investigate the nature of the initial state of linguistic competence.

This initial state has been called the biological endowment for language. It must be common to all human beings, since all (normal) human beings are equally capable of acquiring any language. For this reason, it has also been called universal grammar, where by grammar we mean linguistic competence.

Recall that linguistic competence can only be investigated indirectly, by observing linguistic performance. Universal grammar is even more difficult to investigate. It is impractical to try first to develop a competence model for every language in the world and then look at what these grammars have in common. In addition, it is counterproductive. What we need to do is to develop models of specific languages at the same time as we are developing a model of universal grammar. The model of universal grammar will then inform us as we examine each language and can be revised where necessary. Conclusions based on the analysis of one language will narrow the possibilities for the analysis of other languages.


1.2 Stages in the Development of Generative Grammar

1.2.1 The Standard Theory

Chomsky's Aspects of the Theory of Syntax, published in 1965, defined the framework within which much syntactic research took place for the following decade. The structure of the model is shown in (10).

(10)

[ILLUSTRATION OMITTED]

To illustrate how the grammar works, let us consider how the sentence in (11) would be derived.

(11) The car was stolen by a young woman.

Phrase structure rules, such as those given in (12), apply to give the tree in (13).

(12) S -> NP AUX VP

NP -> (DET) (ADJ) N

VP -> V (NP)

(13)

[ILLUSTRATION OMITTED]

Rules of lexical insertion, which take account of the context in which a word occurs, apply to insert the lexical items into the tree in the appropriate places, giving (14) as the deep structure for the sentence.

(14)

[ILLUSTRATION OMITTED]
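Since the tree in (14) is omitted here, the deep structure can be sketched as a nested data structure built by the rules in (12) plus lexical insertion. The representation below is a simplification invented for illustration (AUX is collapsed to a tense marker, and the passive marker is left out); it is not the book's actual phrase marker:

```python
# Hand-built phrase marker approximating the deep structure underlying (11),
# following the phrase structure rules in (12).  A node is a (label, children)
# pair; AUX and the passive marker are simplified for illustration.
def tree(label, *children):
    return (label, list(children))

deep_structure = tree(
    "S",
    tree("NP", tree("DET", "a"), tree("ADJ", "young"), tree("N", "woman")),
    tree("AUX", "PAST"),
    tree("VP", tree("V", "steal"),
         tree("NP", tree("DET", "the"), tree("N", "car"))),
)

def terminals(node):
    """Read the terminal string off a phrase marker, left to right."""
    label, children = node
    out = []
    for child in children:
        out.extend(terminals(child) if isinstance(child, tuple) else [child])
    return out

print(" ".join(terminals(deep_structure)))
# a young woman PAST steal the car
```

Note that before any transformations apply, the logical subject (a young woman) precedes the verb, and the car is the object of steal.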

This deep structure serves as the input to the transformational component and to the semantic component. The transformational component contains transformational rules such as passive formation, affix-hopping and subject-verb agreement. Passive formation gives the intermediate representation shown in (15).

(15)

[ILLUSTRATION OMITTED]

Affix-hopping and subject-verb agreement apply to give (16) as the surface structure.

(16)

[ILLUSTRATION OMITTED]

The structure in (16) serves as the input to the phonological component, which realizes the auxiliary verb as was, the main verb as stolen, and applies any phonological rules, giving the phonetic output.
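The passive-formation step described above can be sketched as a structure-changing operation. The flattened four-part representation and the rule's details below are simplified and invented for illustration (a real transformation operates on full phrase markers):

```python
# A toy passive-formation transformation over a flattened clause:
#   NP1 - AUX - V - NP2  =>  NP2 - AUX+be+en - V - by+NP1
# Affix-hopping would subsequently attach -en to the verb (steal -> stolen)
# and realize PAST+be as "was".  Representation simplified for illustration.
def passive(deep):
    np1, aux, v, np2 = deep
    return [np2, aux + ["be", "-en"], v, ["by"] + np1]

active = [["a", "young", "woman"], ["PAST"], ["steal"], ["the", "car"]]
print(passive(active))
# [['the', 'car'], ['PAST', 'be', '-en'], ['steal'], ['by', 'a', 'young', 'woman']]
```

The object NP moves to subject position, be + -en is added to the auxiliary, and the logical subject is demoted into a by-phrase, mirroring the derivation of (11).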

Another example involves the transformations of reflexivization and equi-NP deletion, as well as affix-hopping and subject-verb agreement. The identical subscripts on the three instances of Carol indicate that all three noun phrases refer to the same person.

(17)

[ILLUSTRATION OMITTED]

Meanwhile, the deep structure is interpreted by the projection rules of the semantic component. These rules combine the semantic representation of the various parts of the sentence to construct a semantic representation for the sentence as a whole. The important thing to note about this model is that the semantic representation is constructed entirely on the basis of the deep syntactic structure. It follows as an implicit claim, then, that the application of a transformation can have no effect on the meaning of a sentence. This implicit claim was articulated in Katz and Postal (1964) and came to be known as the Katz-Postal hypothesis.

What was universal about this model was not any specific phrase structure rules or transformations, but rather the structure of the model and the various categories (NP, V, etc.) that it made use of. The rules in each of the components were seen as language-particular and therefore to be learned by the native speaker in the course of language acquisition.


1.2.2 The Problem of Meaning

The Katz-Postal hypothesis ran into empirical problems almost immediately. Assuming for the purposes of this discussion that there is a transformation of passive formation which operates roughly as shown above, consider the sentences in (18).

(18) a. The editor didn't find many mistakes.

b. Many mistakes weren't found by the editor.


The only difference between (18a) and (18b) is that passive has applied in the derivation of (18b) and not in (18a). The presence of the verb do in (18a) is accounted for by another transformation which inserts do under certain circumstances. This means that the deep structures of (18a) and (18b) are the same and therefore that the semantic representations will also be the same. Unfortunately, the sentences are not synonymous. Sentence (18a) is true if there were, in fact, very few mistakes for the editor to find and the editor found all the mistakes there were. Sentence (18b) states that there were many mistakes that the editor failed to find. See Partee (1971) for a thorough discussion of this problem.


(Continues...)

Excerpted from A Concise Introduction to Syntactic Theory by Elizabeth A. Cowper. Copyright © 1992 The University of Chicago. Excerpted by permission of The University of Chicago Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

1 The Theory in Context
2 Categories and Phrase Structure
3 Thematic Relations and Theta Roles
4 Predicting Phrase Structure
5 NP-Movement
6 Government and Case
7 WH-Movement
8 Move Alpha and the Theory of Movement
9 The Empty Category Principle
10 Interpretation of Nominals
11 Clauses and Categories
12 A Unified Approach to Locality Constraints

References
Index