Phonology addresses the question of how the words, phrases, and sentences of a language are transmitted from speaker to hearer through the medium of speech. It is easy to observe that languages differ considerably from one another in their choice of speech sounds and in the rhythmic and melodic patterns that bind them together into units of structure and sense. Less evident to casual observation, but equally important, is the fact that languages differ greatly in the way their basic sounds can be combined to form sound patterns. The phonological system of a given language is the part of its grammar that determines what its basic phonic units are and how they are put together to create intelligible and natural-sounding spoken utterances.

Let us consider what goes into making up the sound system of a language. One ingredient, obviously enough, is its choice of speech sounds. All languages deploy a small set of consonants and vowels, called phonemes, as the basic sequential units from which the minimal units of word structure are constructed. The phonemes of a language typically number around 30, although many languages have considerably more or fewer. The Rotokas language of Papua New Guinea has just 11 phonemes, for example, whereas the !Xũ language of Namibia has 141. English has about 43, depending on how we count and what variety of English we are describing. Although this number may seem relatively small, it is sufficient to distinguish the 50,000 or so items that make up the normal adult LEXICON. This is due to the distinctive role of order: thus, for example, the word step is linked to a sequence of phonemes that we can represent as /step/, whereas pest is composed of the same phonemes in a different order, /pest/.
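The distinctive role of order can be illustrated with a small sketch. The phoneme symbols below are illustrative placeholders, not a serious phonemic analysis of English:

```python
# Words modeled as ordered sequences drawn from a small phoneme inventory.
step = ("s", "t", "e", "p")
pest = ("p", "e", "s", "t")

# Both words draw on exactly the same set of phonemes ...
assert set(step) == set(pest)
# ... but the ordering differs, which is what keeps the words distinct.
assert step != pest

# Order also explains why a small inventory suffices for a large lexicon:
# 43 phonemes already permit 43**4 distinct four-phoneme sequences,
# far more than the 50,000 or so words of an adult lexicon.
print(43 ** 4)  # 3418801
```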

Phonemes are not freely combinable as a maximally efficient system would require but are sequenced according to strict patterns that are largely specific to each language. One important organizing principle is syllabification. In most languages, all words can be exhaustively analyzed into syllables. Furthermore, many languages require all their syllables to have vowels. The reason why a fictional patronymic like Btfsplk is hard for most English speakers to pronounce is that it violates these principles -- it has no vowels, and so cannot be syllabified. In contrast, in one variety of the Berber language spoken in Morocco, syllables need not have vowels, and utterances like tsqssft stt ("you shrank it") are quite unexceptional. This is a typical, if extreme, example of how sound patterns can differ among languages.
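The English-like vowel requirement can be sketched as a toy check. This is a deliberate simplification: real syllabification operates on phonemes rather than letters, and the vowel list below is only illustrative:

```python
# Toy model of an English-like constraint: every syllable needs a vowel,
# so a word with no vowels at all cannot be syllabified.
VOWELS = set("aeiou")

def has_vowel(word: str) -> bool:
    """Return True if the word contains at least one (orthographic) vowel."""
    return any(ch in VOWELS for ch in word.lower())

print(has_vowel("Btfsplk"))  # False: unpronounceable for English speakers
print(has_vowel("tsqssft"))  # False: yet fine in the Berber variety described,
                             # whose syllables need not contain vowels
```

The point of the sketch is that the constraint is language-specific: the same vowelless string that fails the English-like check is well formed in Berber.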

Speech sounds themselves are made up of smaller components called DISTINCTIVE FEATURES, which recur in one sound after another. For example, a feature of tongue-front ARTICULATION (or coronality, to use the technical term) characterizes the initial phoneme in words like tie, do, see, zoo, though, lie, new, shoe, chow, and jay, all of which are made by raising the tip or front of the tongue. This feature minimally distinguishes the initial phoneme of tie from that of pie, which has the feature of labiality (lip articulation). Features play an important role in defining the permissible sound sequences of a language. In English, for instance, only coronal sounds like those just mentioned may occur after the diphthong spelled ou or ow: we consequently find words like out, loud, house, owl, gown, and ouch, all ending in coronal sounds, but no words ending in sound sequences like owb, owf, owp, owk, or owg. All speech sounds and their regular patterns can be described in terms of a small set of such features.
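The coronal restriction just described can be sketched as a feature-based check. The feature sets below are illustrative fragments keyed to spelling, not a complete feature analysis of English:

```python
# Simplified feature classes (illustrative, spelling-based, incomplete).
CORONAL = {"t", "d", "s", "z", "l", "n", "ch"}  # tongue-front articulation
LABIAL = {"p", "b", "f", "v", "m"}              # lip articulation

def licit_after_ou(final_sound: str) -> bool:
    """English permits only coronal consonants after the ou/ow diphthong."""
    return final_sound in CORONAL

print(licit_after_ou("t"))  # True  -- as in "out"
print(licit_after_ou("b"))  # False -- *"owb" is not a possible English word
```

Stating the constraint over the feature class, rather than listing phonemes one by one, captures the generalization that all and only the coronals are admitted in this position.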

A further essential component of a sound system is its choice of "suprasegmental" or prosodic features such as LINGUISTIC STRESS, by which certain syllables are highlighted with extra force or prominence; TONE, by which vowels or syllables bear contrastive pitches; and intonation, the overall "tune" aligned with phrases and sentences. Stress and tone may be used to distinguish different words. In some varieties of Cantonese, for example, only tone distinguishes si "poem" (with high pitch), si "cause" (with rising pitch), and si "silk" (with falling pitch). Prosodic features may also play an important role in distinguishing different sentence types, as in conversational French where only intonation distinguishes the statement tu viens "you come" (with falling intonation) from the corresponding question tu viens? (with rising intonation). In many languages, stress is used to highlight the part of the sentence that answers a question or provides new information (cf. FOCUS). Thus in English, an appropriate reply to the question "Where did Calvin go?" is He went to the STORE, with main stress on the part of the sentence providing the answer, whereas an appropriate reply to the question "Did you see Calvin with Laura?" might be No, I saw FRED with her, where the new information is emphasized. Though this use of stress seems natural enough to the English speaker, it is by no means universal, and Korean and Yoruba, to take two examples, make the same distinctions with differences in word order.

Although phonological systems make speech communication possible, there is often no straightforward correspondence between underlying phoneme sequences and their phonetic realization. This is due to the cumulative effects of sound changes on a language, many of them ongoing, that show up not only in systematic gaps such as the restriction on vowel + consonant sequences in English noted above but also in regular alternations between different forms of the same word or morpheme. For example, many English speakers commonly pronounce fields the same way as feels, while keeping field distinct from feel. This is not a matter of sloppy pronunciation but of a regular principle of English phonology that disallows the sound [d] between [l] and [z]. Many speakers of American English pronounce sense in the same way as cents, following another principle requiring the sound [t] to appear between [n] and [s]. These principles are fully productive in the sense that they apply to any word that contains the underlying phonological sequence in question. Hosts of PHONOLOGICAL RULES AND PROCESSES such as these, some easily detected by the untrained ear and others much more subtle, make up the phonological component of English grammar, and taken together may create a significant "mismatch" between mentally represented phoneme sequences and their actual pronunciation. As a result, the speech signal often provides an imperfect or misleading cue to the lexical identity of spoken words. One of the major goals of speech analysis -- one that has driven much research over the past few decades -- is to work out the complex patterns of interacting rules and constraints that define the full set of mappings between the underlying phonemic forms of a language and the way these forms are realized in actual speech.
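The two alternations just described behave like rewrite rules mapping underlying forms to surface pronunciations. A minimal sketch, using ad hoc (non-IPA) transcriptions:

```python
import re

def apply_rules(underlying: str) -> str:
    """Apply two English rules described above to a simplified transcription."""
    # Rule 1: d -> 0 / l _ z  (no [d] between [l] and [z]): fields ~ feels
    surface = re.sub(r"(?<=l)d(?=z)", "", underlying)
    # Rule 2: 0 -> t / n _ s  ([t] inserted between [n] and [s]): sense ~ cents
    surface = re.sub(r"(?<=n)(?=s)", "t", surface)
    return surface

print(apply_rules("fildz"))  # "filz"  -- fields pronounced like feels
print(apply_rules("sens"))   # "sents" -- sense pronounced like cents
```

Because the rules are stated over sound contexts rather than particular words, they apply to any form containing the relevant sequence, which is what makes them fully productive.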

Why should phonological systems include principles that are so obviously dysfunctional from the point of view of the hearer (not to mention the language learner)? The answer appears to lie in the constraints imposed "from above" by the brain and "from below" by the size, shape, and muscular structure of the speech-producing apparatus (the lungs, the larynx, the lips, and the tongue). The fact that languages so commonly group their phonemes into syllables and their syllables into higher-level prosodic groupings (metrical feet, phrases, etc.) may reflect a higher-order disposition to group serially ordered units into hierarchically organized structures, reflected in many other complex activities such as memorization, versification (see METER AND POETRY), and jazz improvisation. On the other hand, human biology imposes quite different demands, often requiring that complex phonemes and phoneme sequences be simplified to forms that are more readily articulated or that can be more easily distinguished by the ear.

Research on phonology dates back to the ancient Sanskrit, Greek, and Roman grammarians, but it received its modern foundations in the work of Henry Sweet, Jan Baudouin de Courtenay, Ferdinand de SAUSSURE, and others in the late nineteenth century. Principles of phonemic analysis were subsequently worked out in detail by linguists such as BLOOMFIELD, SAPIR, Harris, Pike, and Hockett in the United States and Trubetzkoy, Hjelmslev, and Martinet in Europe. Feature theory was elaborated principally by Roman JAKOBSON and his associates in the United States, and the study of suprasegmental and prosodic features by Kenneth Pike as well as by J. R. Firth and his associates in London. Since mid-century, linguists have increasingly attempted to develop explicit formal models of phonological structure, including patterns of phonologically conditioned morpheme alternation. In their watershed work The Sound Pattern of English (1968), Noam Chomsky and Morris Halle proposed to characterize the phonological competence of English speakers in terms of an ordered set of rewrite rules, applying in strict order to transform underlying representations into surface realizations. More recent trends taking such an approach as their point of departure have included the development of so-called nonlinear (autosegmental, metrical, prosodic) models for the representation of tone, stress, syllables, feature structure, and prosodic organization, and the study of the interfaces between phonology and other areas of language, including SYNTAX, MORPHOLOGY, and PHONETICS. At the present time, newer phonological models emphasizing the role of constraints over rewrite rules have become especially prominent, and include principles-and-parameters models, constraint-and-repair phonology, declarative phonology, connectionist-inspired approaches, and most recently OPTIMALITY THEORY.

Viewed from a cognitive perspective, the task of phonology is to find the mental representations that underlie the production and perception of speech and the principles that relate these representations to the physical events of speech. This task is addressed hand-in-hand with research in related areas such as LANGUAGE ACQUISITION and language pathology, acoustic and articulatory phonetics, PSYCHOLINGUISTICS, neurology, and computational modeling. The next decades are likely to witness increased cross-disciplinary collaboration in these areas.

As one of the basic areas of grammar, phonology lies at the heart of all linguistic description. Practical applications of phonology include the development of orthographies for unwritten languages, literacy projects, foreign language teaching, speech therapy, and man-machine communication (see SPEECH SYNTHESIS and SPEECH RECOGNITION IN MACHINES).


-- G. N. Clements


Anderson, S. R. (1985). Phonology in the Twentieth Century. Chicago: University of Chicago Press.

Chomsky, N., and M. Halle. (1968). The Sound Pattern of English. New York: Harper and Row.

Durand, J. (1990). Generative and Non-linear Phonology. London: Longman.

Goldsmith, J. A., Ed. (1995). The Handbook of Phonological Theory. Oxford: Blackwell.

Hockett, C. F. (1974/1955). A Manual of Phonology. Chicago: University of Chicago Press.

Jakobson, R. (1971). Selected Writings, vol. 1: Phonological Studies. The Hague: Mouton.

Kenstowicz, M. (1994). Phonology in Generative Grammar. Oxford: Blackwell.

Spencer, A. (1995). Phonology: Theory and Description. Oxford: Blackwell.

Trubetzkoy, N. S. (1969/1939). Principles of Phonology. Berkeley and Los Angeles: University of California Press.

Further Readings

Archangeli, D., and D. Pulleyblank. (1994). Grounded Phonology. Cambridge, MA: MIT Press.

Bloomfield, L. (1933). Language. New York: Holt.

Clark, J., and C. Yallop. (1995). Introduction to Phonetics and Phonology. 2nd ed. Oxford: Blackwell.

Clements, G. N., and S. J. Keyser. (1983). CV Phonology: A Generative Theory of the Syllable. Cambridge, MA: MIT Press.

Dell, F. (1980). Generative Phonology and French Phonology. Cambridge: Cambridge University Press.

Dressler, W. (1985). Morphophonology. Ann Arbor, MI: Karoma.

Ferguson, C., L. Menn, and C. Stoel-Gammon, Eds. (1992). Phonological Development. Timonium, MD: York Press.

Fischer-Jørgensen, E. (1975). Trends in Phonological Theory: A Historical Introduction. Copenhagen: Akademisk Forlag.

Goldsmith, J. A. (1990). Autosegmental and Metrical Phonology. Oxford: Blackwell.

Greenberg, J. H., Ed. (1978). Universals of Human Language, vol. 2: Phonology. Stanford, CA: Stanford University Press.

Hyman, L. (1975). Phonology: Theory and Analysis. New York: Holt, Rinehart, and Winston.

Inkelas, S., and D. Zec. (1990). The Phonology-Syntax Connection. Chicago: University of Chicago Press.

Jakobson, R. (1941). Child Language, Aphasia, and Phonological Universals. The Hague: Mouton.

Kenstowicz, M., and C. W. Kisseberth. (1979). Generative Phonology: Description and Theory. New York: Academic Press.

Kiparsky, P. (1995). Phonological basis of sound change. In J. Goldsmith, Ed., The Handbook of Phonological Theory. Oxford: Blackwell, pp. 640-670.

Labov, W. (1994). Principles of Linguistic Change: Internal Factors. Oxford: Blackwell.

Maddieson, I. (1984). Patterns of Sounds. Cambridge: Cambridge University Press.

Makkai, V. B., Ed. (1972). Phonological Theory: Evolution and Current Practice. New York: Holt, Rinehart, and Winston.

Martinet, A. (1955). Economie des Changements Phonétiques: Traité de Phonologie Diachronique. Bern: Francke.

Ohala, J. (1983). The origin of sound patterns in vocal tract constraints. In P. F. MacNeilage, Ed., The Production of Speech. New York: Springer, pp. 189-216.

Palmer, F. R., Ed. (1970). Prosodic Analysis. London: Oxford University Press.

Phonology. Journal published three times a year. Cambridge: Cambridge University Press.

Sapir, E. (1925). Sound patterns in language. Language 1:37-51.

Vihman, M. (1995). Phonological Development: The Origins of Language in the Child. Oxford: Blackwell.