A basic introduction to what I work on
I do linguistics, which is standardly (albeit vaguely) defined as "the scientific study of
language." Linguistics is commonly broken down into the following core sub-fields:
Semantics: linguistic meaning.
Syntax: sentence structure.
Morphology: word structure (for example, rules for combining
stems with suffixes).
Phonology: (my specialty) the study of linguistic sound patterns.
Phonetics: the study of speech sounds, closely related to phonology,
but traditionally focussing more on experimentation and instrumental measurement
of speech, and consequently paying more attention to quantitative differences.
Confused about the phonetics/phonology distinction? Me too.
I find this traditional distinction to be arbitrary and unhelpful; I argue
for a unified approach in my own work.
Other sub-fields (e.g. psycholinguistics, sociolinguistics,
historical linguistics) concern the application of linguistic theory to questions
in other disciplines, and vice-versa.
What do I mean by "sound patterns"?
Every language has rules about (a) what speech sounds occur in that language,
and (b) how those sounds can be sequenced to form words and phrases.
For example, there happens not to be a word splick (pronounced
[splIk]) in English, but English speakers would generally agree that it
could be a word (for example, you could name a new toy, or a sub-atomic particle, splick).
On the other hand, something like [txznt] could not possibly
be a word of English -- firstly because English does not have the sound [x]
(a voiceless velar fricative) -- but even if we replace the [x] with a [k],
the result, [tkznt], is still not a possible English word. It's not that [txznt] is physically
unpronounceable: in fact, it's an actual word in Tashlhiyt Berber (it means
'you stored'). It's just that English and Berber have different sound-pattern rules.
These rules are often surprisingly intricate and subtle. And
yet native speakers readily recognise that something is "off" when they hear
speech which violates the phonological rules of their language (e.g. speech
with a foreign accent, or computer-synthesised speech).
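The two kinds of rules above — (a) an inventory of permitted sounds and (b) restrictions on how they may be sequenced — can be caricatured in a few lines of code. This is only an illustrative sketch: the segment inventory and the handful of onset clusters below are my own drastic simplifications, not an actual grammar of English.

```python
from itertools import takewhile

# Crude ASCII stand-ins for some English segments (assumption: a toy inventory).
ENGLISH_SOUNDS = set("ptkbdgfvszmnlrwjh") | {"I", "E"}
VOWELS = {"I", "E", "a", "e", "i", "o", "u"}

# A few licit English word-initial consonant clusters (deliberately incomplete).
LICIT_ONSETS = {"s", "sp", "st", "sk", "spl", "spr", "str", "skr",
                "pl", "pr", "tr", "kl", "kr", "bl", "br"}

def possible_english_word(segments):
    """Toy phonotactic check: inventory rule (a) plus onset rule (b)."""
    # Rule (a): every segment must belong to the language's sound inventory.
    if any(seg not in ENGLISH_SOUNDS for seg in segments):
        return False
    # Rule (b): the consonants before the first vowel must form a licit onset.
    onset = "".join(takewhile(lambda seg: seg not in VOWELS, segments))
    return onset in LICIT_ONSETS

print(possible_english_word(list("splIk")))   # True: splick could be a word
print(possible_english_word(list("txznt")))   # False: [x] is not in the inventory
print(possible_english_word(list("tkznt")))   # False: [tk-] is not a licit onset
```

Even this caricature captures the distinction in the text: [txznt] fails on rule (a), while [tkznt] passes the inventory check but fails on rule (b).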
A theory of sound patterns:
Despite the sorts of differences noted above, linguists have discovered numerous
generalizations about phonological systems, through comparison across languages.
No language has only consonants, or only vowels.
Every language has, among its consonants, a set of "stops"
(consonants made by completely blocking the airflow).
Every language has a set of high vowels (made with the tongue
body in a relatively high position), distinct from its non-high vowels.
In addition, language sound patterns reflect many strong tendencies.
For example, languages commonly forbid long vowels in syllables which are
closed by a consonant, or forbid distinctly oral (i.e. non-nasalized) vowels
before a nasal consonant, though there are exceptions to both generalizations.
It is the basic task of phonological theory to:
- state the rules governing the sound patterns of each of the world's
languages, and
- identify the universal principles and tendencies to which these rule systems conform.
The theory is further responsible for, inter alia:
- explaining how children are able to learn the sound patterns
of their native language, and
- delimiting the ways in which the pronunciation of a language
can change from one historical stage to the next (at least to the extent
that these changes are systematic and phonetically conditioned).
Where do these phonological principles come from? Why do languages
have sound-pattern rules at all?
One possible answer is simply to assume that phonology is part
of our innate endowment, through some inscrutable quirk of evolution.
A more plausible hypothesis, I believe, is that natural language
sound patterns emerge from trade-offs between functional pressures on speech
as a system of communication: principally, the demands of the articulatory
system (the imperative to use minimally effortful movements of vocal tract
organs in speech production) and the perceptual
system (the imperative to avoid confusion, by producing words which are
sufficiently distinct from other words).
Phonological universals, under this view, follow from the fact
that all humans have roughly the same articulatory and auditory physiology.
Language-specific differences follow from the freedom of a given speech community
to develop its own particular set of tradeoffs.
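The trade-off idea can be given a toy numerical form, in the spirit of dispersion-style models. All the specifics here are my own assumptions for illustration: vowels are points on a one-dimensional acoustic scale, and "distinctness" is the smallest pairwise distance in an inventory (a real model would also weigh articulatory effort against this).

```python
from itertools import combinations

# Candidate vowel positions on a crude 0-to-1 acoustic scale (assumption).
candidates = [i / 10 for i in range(11)]

def distinctness(inventory):
    """Smallest pairwise distance: the hearer's worst-case confusability."""
    return min(abs(a - b) for a, b in combinations(inventory, 2))

# The best three-vowel inventory, under this toy criterion, is the most
# dispersed one -- the vowels spread as far apart as the scale allows.
best = max(combinations(candidates, 3), key=distinctness)
print(best)  # (0.0, 0.5, 1.0)
```

The point of the caricature is that maximally spread inventories fall out of the pressure for distinctness alone; different communities could settle on different trade-off points once effort and other pressures enter the objective.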
From this perspective,
the phonological rule system is central to the question of how humans perceive
and produce speech.
Speech processing is a much more difficult task than initially
meets the eye. Consider a token of, say, the word potato.
The actual speech signal that hits the hearer's ear can be significantly
different, depending on the age, gender, and dialect of the speaker, background
noise, speech rate, the position within the sentence where the word occurs,
etc. But, except in extreme cases, we are able to instantly recognize
the speech signal as containing a token of the word potato; indeed,
we are generally not even aware of the variation.
Presumably, it is the phonological rule system which guides
the hearer in zeroing in on the particular cues which are crucial to distinguishing
potato from similar-sounding words (e.g. tomato or petunia
) in English, without getting sidetracked by differences in the lexically
irrelevant aspects of the speech signal. (Note, by the way, that the
identity of these particular cues depends upon the language in question;
therefore this knowledge must be part of the grammar of the language.)
Moreover, the phonological rule system constrains the set of words which
the hearer might be called upon to recognize, or a speaker to produce, by
ruling out sounds or sound sequences which are relatively difficult to perceive or produce.