Thursday, May 15, 2008

Vowel harmony in Turkish

[Possessive suffixes]
In Turkish, the possessive suffixes attach directly to the noun in a fixed form, e.g.:

el hand (singular)
elim my hand el-im
elin your hand el-in
eli his/her hand el-i
elimiz our hand el-imiz
eliniz your (pl.) hand el-iniz
elleri their hands el-leri


But the vowel of the suffix changes with the word itself to achieve "harmony", e.g.:

göz eye (singular)
gözüm my eye göz-üm
gözün your eye göz-ün
gözü his/her eye göz-ü
gözümüz our eye göz-ümüz
gözünüz your (pl.) eye göz-ünüz
gözleri their eyes göz-leri


The consonants of the suffixes never change, but the vowel does. There are four alternations in all, summarized below:

If the last vowel of the word is a or ı, the suffix vowel becomes ı
If the last vowel of the word is e or i, the suffix vowel becomes i
If the last vowel of the word is o or u, the suffix vowel becomes u
If the last vowel of the word is ö or ü, the suffix vowel becomes ü


Examples:
top ball top-um top-un top-u top-umuz top-unuz top-ları
diş tooth diş-im diş-in diş-i diş-imiz diş-iniz diş-leri
çay tea çay-ım çay-ın çay-ı çay-ımız çay-ınız çay-ları
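The four-way alternation above is mechanical enough to express in a few lines of code. The sketch below is mine, not from the post: the `possessed` helper and its suffix templates are illustrative names, and it ignores the third-person plural -leri/-ları pattern as well as Turkish consonant alternations.

```python
# Sketch of the fourfold vowel harmony above for the possessive
# suffixes. "V" in a template marks the harmonizing vowel slot.
# The helper name and suffix templates are illustrative.

HARMONY = {
    "a": "ı", "ı": "ı",
    "e": "i", "i": "i",
    "o": "u", "u": "u",
    "ö": "ü", "ü": "ü",
}
VOWELS = set(HARMONY)

SUFFIXES = {
    "my": "Vm", "your": "Vn", "his/her": "V",
    "our": "VmVz", "your(pl)": "VnVz",
}

def possessed(stem, person):
    """Attach a possessive suffix, harmonizing its vowel with the stem."""
    last_vowel = next(c for c in reversed(stem) if c in VOWELS)
    v = HARMONY[last_vowel]
    return stem + SUFFIXES[person].replace("V", v)
```

For example, `possessed("göz", "our")` gives "gözümüz", matching the table above.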

Friday, May 9, 2008

questions and sub-questions in phonology

An outline of the hierarchy of these 16 questions. (Arabic numerals 1, 2, 3 … mark my own questions; Roman numerals I, II, III … mark those in the book; question VIII is split into a, b, and c, as three separate questions.)

Important questions of phonology

(7) How do we define the border of phonology?

(2) What is the structure of a sound system?
(I) How are language and its parts, including words and morphemes, represented in the mind of the speaker, and how is this representation accessed and used? How can we account for the variation in the phonetic shape of these elements as a function of context and speaking style?
(V) How can the functions of speech be enhanced and amplified, for example, to give permanency to ephemeral speech, to permit communication over great distances, and to permit communication with machines using speech?

(6) How are speech sounds formed?
(VIII b) Why is the vocal apparatus different as a function of the age and sex of the speaker?
(II) How, physically and physiologically, does speech work—the phonetic mechanisms of speech production and perception, including the structures and units it is built on?

(5) How do we acquire/learn the meaning of sounds?
(VII) How is sound associated with meaning?
(VI) How is speech acquired as a first language and as a subsequent language?
(IV) How can we ameliorate communication disorders?

(3) How does language change affect phonology?
(III) How and why does pronunciation change over time, thus giving rise to different dialects and languages, and different forms of the same word or morpheme in different contexts? How can we account for common patterns in diverse languages, such as segment inventories and phonotactics?
(VIII a&b) How did language and speech arise or evolve in our species? What is the relation, if any, between human speech and non-human communication?

(1) What is the current methodology for phonological research?

(8) What is sign language phonology about?

important questions in phonology for me

My 8 questions

1. What are the current methods for phonological research?
2. What is a sound system?
3. How does language change?
4. What are phonological features in language family?
5. How do we know the meaning of sounds?
6. How are speech sounds formed?
7. How do we define the border of phonology?
8. What is sign language phonology?

How many overlaps are there between my questions and the questions listed in our textbook?
The overlaps are questions 2, 3, 5, and 6.

Thursday, May 8, 2008

Part III

9. applying perceptual methods to the study of phonetic variation and sound change

10. interpreting misperception

11. coarticulatory nasalization and phonological developments

12. a perceptual bridge between coronal and dorsal /r/

13. Danish stød



CHAPTER 9 presented by: 鎮妃、怡萱、勝芬

9.2.2 Testing perception of co-variation
Hypothesis 1: listeners formulate equivalence categories in which the two sites of a lowered velum, N and V(nasal), are perceptually equivalent
Hypothesis 2: the range of variants of V(nasal) and N that listeners treat as perceptually equivalent will differ depending on the voicing of the coda consonant
9.2.2.1 Methodological approach
1. co-varying acoustic properties: trading with each other is taken as evidence of the coherence among parts of the acoustic signal that belong together
2. waveform-editing techniques (bed, bend, bet, bent): three groups of pairs
a. N-only pairs: /n/ duration was the only difference between pair members
b. cooperating pairs: the stimulus with the shorter /n/ had less vowel nasalization than did the stimulus with the longer /n/
c. conflicting pairs: the stimulus with the shorter /n/ had more vowel nasalization than did the one with the longer /n/
9.2.2.2 Predictions
1. conflicting pairs, despite large acoustic differences between pair members, should be difficult to discriminate, possibly more difficult than the acoustically less distinct N-only pairs
2. cooperating pairs, whose members have large acoustic differences and large differences in total nasalization, should be correctly judged as different
3. the expected influence of coda voicing is that listeners' perceptual judgments will broadly reflect the distribution of V(nasal)N measures found for the production of VNC(voiced) and VNC(voiceless) words, such that vowel nasalization will have a greater influence on judgments in the voiceless than in the voiced context
9.2.2.3 Results
(expected)
1. discrimination was most accurate for cooperating pairs, whose members differed substantially in total nasalization across the V(nasal)N sequence
2. listeners also showed the expected greater sensitivity to vowel nasalization in the [t] than in the [d] context
(unexpected)
3. some listeners consistently discriminated the conflicting trials more poorly than the acoustically less distinct N-only trials, while other listeners' overall accuracy on conflicting trials was similar to that on cooperating trials
4. nasal murmurs are more likely to be detected when followed by silence (the voiceless closure) than when followed by glottal pulsing (the voiced closure)
5. different listeners respond to the stimuli to different degrees

(勝芬↑)
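The three stimulus-pair types in 9.2.2.1 amount to a simple classification over the two edited parameters, /n/ duration and vowel nasalization. The sketch below is my own illustration, not the chapter's stimulus-construction code; `classify_pair` and the toy (duration, nasalization) tuples are hypothetical.

```python
# Illustrative sketch of the three stimulus-pair types in 9.2.2.1.
# Each stimulus is a (n_duration, vowel_nasalization) tuple; the
# function name and toy values are hypothetical, not the chapter's
# actual waveform-editing procedure.

def classify_pair(a, b):
    """Return 'N-only', 'cooperating', or 'conflicting' for two stimuli."""
    if a[1] == b[1]:
        return "N-only"        # /n/ duration is the only difference
    # Order the pair by /n/ duration so "short" has the shorter /n/.
    (n_short, v_short), (n_long, v_long) = sorted([a, b])
    if v_short < v_long:
        return "cooperating"   # shorter /n/ also has less nasalization
    return "conflicting"       # shorter /n/ but more nasalization
```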


Sound change includes any processes of language change that affect pronunciation (phonetic change) or word structures (phonemic change). Sound change can consist of the replacement of one speech sound (or, more generally, one phonetic feature) by another, the complete loss of the affected sound, and (rarely) even the introduction of a new sound in a place where there previously was none. Sound changes can be environmentally conditioned, meaning that the change in question occurs only in a defined sound environment, whereas in other environments the same speech sound is not affected by the change.

Sound change is assumed to be usually regular, which means that it is expected to apply mechanically whenever its structural condition is met, irrespective of any non-phonological factors (such as the meaning of the words affected). On the other hand, sound changes can sometimes be sporadic, affecting only one particular word or a few words, without any apparent regularity.

For regular sound changes, the somewhat hyperbolic term "sound law" is also sometimes used. This term was introduced by the Neogrammarian school in the 19th century and is still commonly applied to some historically important sound changes, such as Grimm's law. While real-world sound changes often admit exceptions (for a variety of known reasons, and sometimes without a known reason), the expectation of their regularity or "exceptionlessness" is of great heuristic value, since it allows historical linguists to define the notion of regular correspondence (see: comparative method).

Each sound change is limited in space and time: it functions within a specified area (only in some dialects) and within a specified period of time. These limitations are among the reasons some scholars refuse to use the term "sound law" (asserting that laws should not have such spatial and temporal limitations) and replace it with "phonetic rule".

(怡萱↑)

CHAPTER 10 presented by: 珮驊、義仁

Ohala gave substance to Baudouin's insight:
a. Misperception as a significant source of sound change
b. Investigation of the nature of such misperceptions by experimental methods
Two fundamental implications of Ohala's research:
a. Innocent misperception can lead directly to attested recurrent sound patterns
b. Sound change is non-teleological
The sources of resistance to non-teleological models:
a. Experimental results are simply ignored
b. Interpretations of perception experiments are not empirically motivated, and fail to recognize lexical effects
c. Simplification of the model

(義仁↑)

CHAPTER 11 presented by: 惠珍、怡君、晟維

11.1 Introduction
Vowel-nasal-fricative nasalization
Velum movement during nasalization
Sound changes
Nasal loss and preceding vowel lengthening
Stop epenthesis
The unclear status of nasals following voiceless fricatives
Vowel types do matter for the ease of nasalization
11.2 previous investigations of nasal-obstruent sequences in Italian and English
Vowel nasalization
In Northern Italian
Long vowel duration
Voiceless post-nasal consonants (fricative)
Complete nasal consonant loss and longer vowel nasalization before fricatives than stops (Busà, 2003)
In Central Italian
No extensive nasalization nor complete nasal consonant loss
In American English
80-100% nasalization, esp. the vowel before a tautosyllabic nasal and before a voiceless stop
AE vowel nasalization is an intrinsic property of the vowel rather than a coarticulation effect
Stop epenthesis
Reason of occurrence: when the oral constriction is released, it causes a burst at the same place of articulation as the nasal consonant
In Central (-Southern) Italian
In AE
2 cases of stop epenthesis
The velum raising before the beginning of the oral constriction (for the fricative)
The velum raising after the release for the fricative
Favored environments for occurrence: word-final position and following a stressed vowel

(晟維↑)

11.3 Method
Previous findings on oral air emission for the production of oral sounds and the extent of closure of the VP opening: positive correlation (Lubker and Moll 1995)
Current method:
11.3.1 Speech material
Table 11.1 Words used in the experiment
The words are placed in carrier phrases as below
Italian: Dico X di
English: I said X again
and read five times by each subject
11.3.2 Procedure
Oral and nasal flows were transduced by a two-chamber Glottal Enterprises Rothenberg mask
Audio signals were recorded by a high-quality microphone attached to the exterior of the mask
11.3.3 Analysis
First analyzed with PCquirer: display of acoustic waveform, spectrogram, and oral and nasal flow
Cases examined:
when the vowel is oral and the nasal is fully articulated
when the vowel is nasalized before a fully articulated nasal consonant
when the vowel is nasalized before a weakly articulated nasal consonant
when a fully nasalized vowel co-occurs with nasal flow
11.3.4 Measures
11.3.4.1 Acoustic analysis
Duration measures were taken of the test and control Vs, nasalized portions of pre-nasal Vs, Ns, and Fs
11.3.4.2 Nasal airflow
Difference at the nasal onset and offset (Figure 11.2)
The intersections of the nasal flow with the thresholds were labeled tN1, tN2, tN3, tN4
The peak time of nasal flow was labeled tNpeak
11.3.4.3 Oral airflow
Oral movement with a piecewise linear envelope
The envelope was used to compute the time lag from the maximum of oral closure to the nasal peak
11.3.4.5 Statistical analysis
One-way ANOVAs on the acoustic data
Two-way ANOVAs: within-group and between-group (by averaging, by group)
Same for the aerodynamic data, averaging values across the VNF and VNTS contexts
11.4 Results
11.4.1 Acoustic analysis
Figure 11.1 (left panel): typical case in the N1 data; the vowel is heavily nasalized and the nasal consonant is weakly articulated before the following voiceless fricative
Figure 11.1 (right panel): the release of the oral occlusion for the nasal consonant before the velic closure (nasal peak)
Results of the acoustic analysis: two-way ANOVAs (Tables 11.2-11.4)
As expected, there is an effect of vowel quality on vowel duration and on the duration of vowel nasalization
Table 11.5: N1 has the longest vowels and the shortest oral consonants (F, TS) in VNF and VNTS sequences

(惠珍↑)
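The airflow labeling in 11.3.4.2 above (threshold crossings tN1-tN4 and the flow peak tNpeak) can be sketched as a simple crossing detector. This is an illustrative reconstruction, not the authors' analysis code; the toy `flow` values and the threshold are hypothetical.

```python
# Sketch of the threshold labeling in 11.3.4.2: find where a flow
# signal crosses a threshold rising and falling, and locate its peak.
# The toy flow values and the threshold are hypothetical.

def crossings(signal, threshold):
    """Return (rising, falling) sample indices where signal crosses threshold."""
    rises, falls = [], []
    for i in range(1, len(signal)):
        if signal[i - 1] < threshold <= signal[i]:
            rises.append(i)
        elif signal[i - 1] >= threshold > signal[i]:
            falls.append(i)
    return rises, falls

# A toy nasal-flow bump: rise, plateau, fall.
flow = [0, 0, 2, 5, 8, 9, 8, 5, 2, 0, 0]
rises, falls = crossings(flow, 4)                    # analogues of tN1, tN2, ...
peak = max(range(len(flow)), key=flow.__getitem__)   # analogue of tNpeak
```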

CHAPTER 12 presented by: 威鈴、伊津、宜珊
12.1 Introduction
Phonetic variation of rhotics /r/ in Swedish dialects:
(1) front (coronal) /r/
(2) back (dorsal) /r/
Regions of back /r/: western European; English, Italian, Czech, Estonian; working-class varieties of rural communities
The complementary distribution between [R] and [r] in southern Swedish dialects:
/r/ is back only in initial position and after a short stressed vowel
Front and back /r/ have provided a basis for lexical contrast in Occitan
Why would [r] change into [R] (or vice versa)?
How does sound change begin?
Purpose:
(1) to establish an articulatory-acoustic reference for /r/ types
(2) to evaluate the articulatory-acoustic relationship
(3) to synthesize an /r/ continuum situated in the F2-F3 area in question

(宜珊↑)


12.1
Why would [r] change into [R] (or vice versa)?
How does sound change begin?
Perception affects place of articulation
Purpose: examine the perceptual preconditions for reinterpretation of place of articulation
Establish an articulatory-acoustic reference system
Evaluate the articulatory-acoustic relationships
Synthesize an /r/ continuum

(伊津↑)


12.1 Introduction
The rhotics (r-sounds) are known for having a particularly wide range of phonetic variation
Why would [r] change into [R]?
How does sound change begin?
The purpose: to examine the perceptual preconditions for reinterpretations of place of articulation
1) Establish an articulatory-acoustic reference system for a number of /r/ types
2) Evaluate the articulatory-acoustic relationships using articulatory modeling
3) Synthesize an /r/ continuum situated in the F2-F3 area in question
12.2 Formant frequencies for places of /r/ articulation
12.2.1 Data
We recorded reference material to obtain formant frequencies for various approximant rhotics
12.2.2 Comments
The pharyngeals, uvulars, and back velars form separate but adjacent clusters
12.3 APEX simulations
12.3.1 The APEX model
1) An implementation of a framework previously developed for vowels
2) Subsequently augmented with tongue tip and blade parameters
3) APEX is a tool for going from articulatory positions to sound in four steps
4) From specifications for lips, tongue tip, tongue body, jaw opening, and larynx height, APEX constructs an articulatory profile
12.3.2 Simulations
APEX was used to help answer two questions:
What are the acoustic consequences of varying the place of articulation in /r/-like coronal articulations?
What are the acoustic consequences of varying the place of articulation in /r/-like dorsal articulations?
12.3.3 Conclusions
By and large, APEX corroborates the articulatory properties exhibited by speaker O. It would therefore seem justified to assume that they are descriptively valid not only for him but, at least qualitatively, also more generally.

(威鈴↑)

CHAPTER 13 presented by: Aleksandra、洋吉


13.3 PHONOLOGICAL AND MORPHOLOGICAL FACTORS IN THE DISTRIBUTION OF STØD
Appearance of stød where it did not belong originally (e.g. Mozart)
Productiveness of stød
13.3.1 Stød and word structure
General principles of stød
Stød vs. non-stød
13.3.1.1 Stød in non-inflected, non-derived words (lexical items)
13.3.1.2 Inflection and derivation
Suffixes: fully productive, semi-productive, non-productive
Dependency of stød
13.3.2 Stød in new and unexpected contexts
Principles of stød in the process of change
Unexpected examples:
Simple nouns in the plural
Compound nouns in the plural
Verbal adjectives
Non-inflected lexical items
Non-inflected compound stems
CONCLUSION
Aim: acoustic and perceptual evidence on Danish stød
Characteristics of Danish stød:
Phonetic nature
Distribution
Governing principles

(Aleksandra ↑)

psycholinguistic method

Bruce Derwing
He is Professor Emeritus in the Department of Linguistics at the University of Alberta (Edmonton, Canada). He was a pioneer in the development of non-chronometric psycholinguistic techniques for the cross-linguistic investigation of phonological units, involving languages as disparate as Arabic, Blackfoot, Korean, Minnan, and Swiss German. His current research focuses on the phonological and morphological aspects of the form, structure, and organization of the mental lexicon, with a special interest in the role of orthographic knowledge on the perceived segmentation of speech.

Nina Grønnum and Hans Basbøll
Nina Grønnum

She works at the Department of Nordic Studies and Linguistics, University of Copenhagen. She received her M.A. in phonetics in 1972, her Ph.D. in 1981 (Studies in Danish Intonation), and her Danish Doctorate in 1992 (The Groundworks of Danish Intonation). From 1972 to 1976 she was an Assistant Professor, between 1976 and 1993 she was an Associate Professor, and since 1993 she has been an Associate Professor with Special Qualifications. She is a Fellow of The Royal Danish Academy of Sciences and Letters.
Hans Basbøll
He is Professor of Scandinavian Linguistics at the Institute of Language and Communication, University of Southern Denmark. He has directed projects on Danish language acquisition. Among his recent publications is Phonology of Danish (2005, Oxford University Press). Hans Basbøll is a fellow of The Royal Danish Academy of Sciences and Letters.

Sieb Nooteboom and Hugo Quené
Sieb Nooteboom
He received his Ph.D. in 1972 from Utrecht University. He has had positions as Researcher in Philips Research in Eindhoven (1966-1988), part-time Professor of Phonetics in Leyden University (1980-1988), part-time Professor of Experimental Linguistics in Eindhoven Technical University (1986-1988), full-time Professor of Phonetics in Utrecht University (1988-2004), and part-time Professor of Phonetics in Utrecht University (2004-2006).
bibliography of Sieb G. Nooteboom

homepage of Sieb Nooteboom - Universiteit Utrecht

Hugo Quené
He received his Ph.D. from Utrecht University in 1989. He has held positions as Research Scientist in Utrecht University (1986-1989), Assistant Professor of Phonetics at Leyden University (1989-1990), Assistant Professor of Phonetics (1989-2004), and Associate Professor of Phonetics at Utrecht (2004- ). He was Fulbright Visiting Scholar at Indiana University, Bloomington, in 2001 and 2002.

Manjari Ohala
She received her Ph.D. from UCLA in 1972. She is now Professor, and currently Chair, of the Department of Linguistics and Language Development at San Jose State University, where she has taught since 1974. She has also taught at the University of Maryland (Linguistic Society of America Summer Institute), UC Davis, UC Berkeley, and the University of Alberta, Edmonton. Her research interests in phonetics and phonology include experimental phonology and the phonetics and phonology of Indo-Aryan languages. She is the author of numerous articles on Hindi phonetics and phonology, and of Aspects of Hindi Phonology (1983, Motilal Banarsidass).

Danny D. Steinberg, Psycholinguistics: Language, Mind, and World

MPG (Max Planck Gesellschaft)

Speech Prosody 2002



Thursday, May 1, 2008

textbook reading

Chapter 5
5.2.3 some key notions of French phonemics
cardinal primary vowels → French vowels
e.g. /i/, /e/, /a/, /u/, /o/, /ε/, /α/
IPA is inadequate: American English is pronounced differently from British English
analyses of these vowels
/i/
/y/
/u/
nasalized vowels
palatalized consonants


5.3.5 investigating the effects of states of the glottis and of supraglottic constriction
/R/(upside down)
compare to /X/


vocabulary:
articulatory modeling (AM)
vocal tract(VT)
F-pattern
Maeda’s articulatory model
Guided Principal Component Analysis (PCA)
midsagittal

MRI


Chapter 6 The Control and Regulation of Speech
6.4.3.3 sentences
Research goal: to investigate the relationship between f0 and Ps
Research subjects: two males
Research method: sentences reading with no instructions about speed and loudness.
Sentence types: declarative statements, yes-no questions, sentences with complete or incomplete information.
Research findings: it was never possible to establish a clear correlation between f0 and Ps.
6.4.3.4 the effects of changes in Ps and intensity on f0
Research goal: to investigate the effect of changes in intensity and Ps on the f0 of sentences.
Research subjects: VL♀ and DD♂
Research method: produce 14 sentences at 3 levels of intensity with no other instructions.
Research findings:
f0 declination does not entirely correspond to declining Ps
Ps and intensity seem to be correlated.




Chapter 11 Coarticulatory Nasalization and phonological Developments
11.1 Introduction
Vowel-nasal-fricative nasalization
Velum movement during nasalization
Sound changes
Nasal loss and preceding vowel lengthening
Stop epenthesis
The unclear status of nasals following voiceless fricatives
Vowel types do matter for the ease of nasalization
11.2 previous investigations of nasal-obstruent sequences in Italian and English
Vowel nasalization
In Northern Italian
Long vowel duration
Voiceless post-nasal consonants (fricative)
Complete nasal consonant loss and longer vowel nasalization before fricatives than stops (Busà, 2003)
In Central Italian
No extensive nasalization nor complete nasal consonant loss
In American English
80-100% nasalization, esp. the vowel before a tautosyllabic nasal and before a voiceless stop
AE vowel nasalization is an intrinsic property of the vowel rather than a coarticulation effect
Stop epenthesis
Reason of occurrence: when the oral constriction is released it causes a burst at the same place of articulation as the nasal consonant
In Central (-Southern) Italian
In AE
2 cases of stop epenthesis
The velum raising before the beginning of the oral constriction (for the fricative)
The velum raising after the release for the fricative
Favored environments for occurrence: Word-final position and following a stressed vowel




Chapter 15 Physiological and Physical Bases of the Command-Response Model for Generating Fundamental Frequency Contours in Tone Languages
15.3 mathematical representation of F0 contours of tone languages with positive and negative local components
Introduction of mathematical formulation and relative settings
15.4 application of the model to the analysis of F0 contours of several tone languages
Research goal:
Analyze F0 contour
Estimate the underlying commands
Research procedure:
Procedure: analysis-by-synthesis
Input data: Mandarin, Thai, Cantonese, Shanghainese, Vietnamese, Lanqi
Tone & Positive/Negative output:
In Mandarin
In Thai
In Cantonese
Conclusion: the advantage of the command-response model is that it provides a method for representing quantitative aspects of tonal features with a higher degree of accuracy.
15.5 conventional descriptions of tone systems
Systems
Chinese traditional tone names
Western phonological tone names
趙元任 (Y. R. Chao)’s five-level tone-code system
The weakness: they all lack accuracy and generality, depend heavily on auditory perception, and are subjective.
The limitation of five-level tone code system
It risks being confounded with non-distinctive phonetic variations.
It is valid only for representing tones in citation form.
It’s discrete and semi-quantitative, and cannot characterize the continuous and dynamic nature of F0 contours.
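The command-response (Fujisaki) model summarized in 15.3-15.4 can be sketched from its standard equations: log F0 is a baseline plus phrase components and tone commands, where tone commands may take negative amplitudes (the "positive and negative local components"). The parameter values below are typical illustrative choices, not those estimated in the chapter.

```python
import math

# Minimal sketch of the Fujisaki command-response model: log F0 is a
# baseline plus phrase components (impulse responses) and tone/accent
# components (step responses). Constants here are illustrative.

ALPHA, BETA, GAMMA = 3.0, 20.0, 0.9   # typical control constants
FB = 100.0                            # baseline frequency (Hz)

def Gp(t):
    """Phrase control: response to an impulse command at t = 0."""
    return ALPHA**2 * t * math.exp(-ALPHA * t) if t >= 0 else 0.0

def Ga(t):
    """Tone/accent control: response to a step command at t = 0."""
    if t < 0:
        return 0.0
    return min(1.0 - (1.0 + BETA * t) * math.exp(-BETA * t), GAMMA)

def f0(t, phrases, tones):
    """phrases: (onset, Ap) pairs; tones: (onset, offset, At), At may be < 0."""
    log_f0 = math.log(FB)
    log_f0 += sum(ap * Gp(t - t0) for t0, ap in phrases)
    log_f0 += sum(at * (Ga(t - t1) - Ga(t - t2)) for t1, t2, at in tones)
    return math.exp(log_f0)

# A falling tone modeled with a negative tone command:
contour = [f0(t / 100, [(0.0, 1.0)], [(0.2, 0.4, -0.5)]) for t in range(100)]
```

During the negative tone command (0.2-0.4 s) the contour dips below the phrase-only baseline, which is how the model renders low or falling tones.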




Chapter 22 Morphophonemics and the Lexicon
22.1 introduction
Try to find a way to explain the stem-final alternation in Turkish
22.2 the problem
Size (length) as a categorizer
Wedel: neighborhood density & alternation rate
Conclusion: Wedel’s findings cannot be meaningfully evaluated because they are based on statistics from a dictionary (a single-speaker corpus is a better choice)
22.3 methodology: TELL and a frequency corpus
22.3.1 the Turkish electronic living lexicon (TELL)
Maker: University of California, Berkeley
Content: 30,000 words (25,000 headwords, 5,000 place names)
Voice producer: 63-year-old standard Istanbul Turkish speaker
Morphological context:
NOM. case
ACC. case
1st person predicative
Possessive case
Professional suffix
22.3.2 stem-final alternations: a snapshot from TELL
22.3.3 frequency corpus
Maker: Kemal Oflazer, at Sabancı University in Istanbul, Turkey
Content: 12,000,000 words
22.4 frequency
Rhodes’ AE Flapping and Bybee’s coronal deletion
Gradient and semi-regular alternation
Findings
In velar deletion: more frequent, more alternation
In voicing: less frequent, more alternation
22.5 neighborhood density
22.5.1 neighborhood density with a single-speaker corpus
22.5.2 frequency-weighted neighborhood density
22.6 cohorts
22.7 etymology
22.8 conclusions
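Neighborhood density (22.5) is conventionally the number of lexical items one segment edit (substitution, insertion, or deletion) away from a target. The sketch below is an illustrative implementation over a hypothetical toy lexicon, not the chapter's actual procedure or the TELL data.

```python
# Illustrative sketch: neighborhood density as the count of lexicon
# words exactly one edit (substitution, insertion, deletion) away
# from a target word. The toy lexicon is hypothetical.

def edit_distance_is_one(a, b):
    """True iff b differs from a by exactly one segment edit."""
    if abs(len(a) - len(b)) > 1 or a == b:
        return False
    if len(a) == len(b):                      # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)    # one insertion/deletion
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood_density(word, lexicon):
    return sum(edit_distance_is_one(word, w) for w in lexicon)

lexicon = {"kap", "kat", "tat", "ka", "kapı", "sat"}
```

For the toy lexicon, "kap" has three neighbors: "kat" (substitution), "ka" (deletion), and "kapı" (insertion).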

homepages of some linguists

John S. Coleman

Steven Bird , other information

Paul Houseman , other information

Dan Slobin

harmonic phonology

definitions from David Crystal, A Dictionary of Linguistics & Phonetics:
harmonic phonology
In phonology, an approach which recognizes three levels of representation working in parallel: morphophonemic (‘M-level’), word/syllabic tactics (‘W-level’), and phonetic (‘P-level’). Each level is characterized by a set of well-formedness statements (‘tactics’) and a set of unordered ‘intra-level’ rules which collectively define the paths an input representation has to follow in order to achieve maximum conformity to the tactics. This maximal well-formedness is called ‘harmony’. The levels are related by ‘inter-level’ rules. The approach avoids the traditional conception of the organization of a generative grammar in which each level of representation is seen to precede or follow another (as would be found in the ordered steps within a derivation).

harmony
(1) A term used in phonology to refer to the way the articulation of one phonological unit is influenced by (is ‘in harmony’ with) another unit in the same word or phrase. An analogous notion is that of assimilation. The two main processes are consonant harmony and vowel harmony. In the typical case of vowel harmony, for example, such as is found in Turkish or Hungarian, all the vowels in a word share certain features – for instance, they are all articulated with the front of the tongue, or all are rounded. The subsets of vowels which are affected differently by harmonic processes are harmonic sets. Disharmony (or disharmonicity) occurs when a vowel from set A is used (e.g. by suffixation) in words which otherwise have set B, thus forming a harmonic island (if transparent) or a new harmonic span (if opaque). The span within which harmony operates (usually the word) is the harmonic domain
(2) In optimality theory, the measurement of the overall goodness of a form given a constraint ranking.

Vowel Harmony
Vowel harmony is a type of long-distance assimilatory phonological process involving vowels in some languages. In languages with vowel harmony, there are constraints on what vowels may be found near each other.
Harmony processes are "long-distance" in the sense that the assimilation involves sounds that are separated by intervening segments (usually consonant segments). In other words, harmony refers to the assimilation of sounds that are not adjacent to each other. For example, a vowel at the beginning of a word can trigger assimilation in a vowel at the end of a word. The assimilation sometimes occurs across the entire word. This is represented schematically in the following diagram:
before assimilation: VaCVbCVbC
after assimilation: VaCVaCVaC
(Va = type-a vowel, Vb = type-b vowel, C = consonant)
In the diagram above, the Va (type-a vowel) causes the following Vb (type-b vowel) to assimilate and become the same type of vowel (and thus they become, metaphorically, "in harmony").
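The spreading in the diagram can be sketched as a single left-to-right pass in which the first vowel's class rewrites every later vowel. The toy below uses two abstract vowel symbols standing in for feature classes; it is a schematic illustration, not a model of any particular language.

```python
# Toy sketch of progressive (left-to-right) vowel harmony: the first
# vowel's class spreads to all later vowels. 'a' and 'b' stand for
# abstract type-a / type-b vowels; 'C' stands for any consonant.

VOWELS = {"a", "b"}

def harmonize(word):
    """Spread the first vowel's type rightward over all later vowels."""
    trigger = next((c for c in word if c in VOWELS), None)
    if trigger is None:
        return word                  # no vowels, nothing to harmonize
    return "".join(trigger if c in VOWELS else c for c in word)
```

Applied to the schematic string from the diagram, `harmonize("aCbCbC")` yields "aCaCaC".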
The vowel that causes the vowel assimilation is frequently termed the trigger while the vowels that assimilate (or harmonize) are termed targets. In most languages, the vowel triggers lie within the root of a word while the affixes added to the roots contain the targets. This may be seen in the Hungarian dative suffix:
Root
Dative
Gloss
város
város-nak
"city"
öröm
öröm-nek
"joy"
The dative suffix has two different forms, -nak/-nek. The -nak form appears after roots with back vowels (a and o are both back vowels). The -nek form appears after roots with front vowels (ö and e are front vowels).
Another example: Turkish ev-ler-imiz "our houses" (house-{PL}-{1PL POSS}) vs. dam-lar-ımız "our roofs" (roof-{PL}-{1PL POSS}).
Harmony assimilation may spread either from the beginning of the word to the end or from the end to the beginning. Progressive harmony (a.k.a. left-to-right harmony) proceeds from beginning to end; regressive harmony (a.k.a. right-to-left harmony) proceeds from end to beginning. Languages that have both prefixes and suffixes often have both progressive and regressive harmony. Languages that primarily have prefixes (and no suffixes) usually have only regressive harmony — and vice versa for primarily suffixing languages.
Vowel harmony often involves dimensions such as
Vowel height (i.e. high, mid, or low vowels)
Vowel backness (i.e. front, central, or back vowels)
Vowel roundedness (i.e. rounded or unrounded)
tongue root position (i.e. advanced or retracted tongue root, abbrev.: ±ATR)
Nasalization (i.e. oral or nasal) (in this case, a nasal consonant is usually the trigger)
In many languages, vowels can be said to belong to particular classes, such as back vowels or rounded vowels, etc. Some languages have more than one system of harmony. For instance, Altaic languages have a rounding harmony superimposed over a backness harmony.
In some languages, not all vowels participate in the vowel conversions — these vowels are termed either neutral or transparent. Intervening consonants are also often transparent. In addition to these transparent segments, many languages have opaque vowels that block vowel harmony processes.
Finally, languages that do have vowel harmony sometimes have words that fail to harmonize. This is known as disharmony. Many loanwords exhibit disharmony, either within a root (e.g., Turkish/Turkic vakit/waqit, "time" [from Arabic waqt], where °vakıt/°waqıt would have been expected) or in suffixes (e.g., Turkish saat-ler "(the) hours" [hour-PL, from Arabic sâ`a], where saat-lar would have been expected). In Turkish, disharmony tends to disappear through analogy, especially within loanwords. Suffixes drop disharmony to a lesser extent, e.g. Hüsnü (a man's name) <>



(go to the page of FJU)


papers about harmonic phonology

jers or floaters in the phonology of bulgarian

ghost vowels and syllabification

Syllabification and Rule Application in Harmonic Phonology

Harmonic Phonology

Harmonic Grammar with Harmonic Serialism




declarative phonology

DECLARATIVE PHONOLOGY
Steven Bird, John Coleman, Janet Pierrehumbert and James Scobbie
University of Edinburgh, AT&T Bell Laboratories,
Northwestern University, Stanford University
1 INTRODUCTION

Declarative phonology is a program of research that was motivated in part by the need for theories of phonology that can be implemented on a computer. While it is clear that such a development would be beneficial for both theoretical and field phonology, it is not immediately obvious how one should go about implementing phonological models. The so-called ‘declarative’ approach draws on a key insight from theoretical computer science, where there has been a long tradition of distinguishing between the declaration of a problem and a procedure which computes the solution to that problem. Paradoxically, the kind of problem specifications that are frequently the most useful for computational implementation are those which make the fewest procedural commitments.
The declarative phonology programme is, at its heart, an attempt to do away with the ordered derivations and the concomitant feature-changing rules of traditional generative phonology. In this respect, declarative phonology ties in with some recent developments in theoretical phonology where feature-changing rules have been criticized or explicitly avoided (Rice, 1989; McCarthy, 1991). However, it is also possible to find precedents in the literature on American Structuralist phonemic phonology (Hockett, 1954), Firthian Prosodic Phonology, Natural Generative Phonology (Hooper, 1976; Hudson, 1980) and Montague Phonology (Wheeler, 1981; Bach, 1983). More recently, ‘harmonic’ approaches to phonology arising from work in connectionism (Smolensky, 1986) have also questioned the procedural paradigm but from a perspective which does not clearly differentiate the declaration of grammar from the means of its implementation (Goldsmith, ta; Prince & Smolensky, 1992). Despite this difference, the declarative and connectionist approaches are alike as regards their incorporation of various kinds of constraint satisfaction.
With increasing interest in the interaction between phonology and syntax being expressed in the literature, declarative phonology has something to contribute here too. Constraint-based grammar frameworks such as HPSG (Pollard & Sag, 1987) (manifesting good linguistic coverage and attractive computational properties) have the same metatheoretical commitments as declarative phonology. The prospect for having a computational theory of phonology that is fully integrated with a computational theory of syntax and semantics is now imminent.
A final area of concern is the phonology-phonetics interface. In the declarative framework it makes sense to view the relationship between phonology and phonetics as being one of denotation. Under this view, phonological representations are descriptions of phonetic reality and a particular phonological construct is said to denote a phonetic event (Bird & Klein, 1990; Pierrehumbert, 1990; Scobbie, 1991a; Coleman, 1992).
This article consists of four sections, where each section has been contributed by a different author. The first three sections present reanalyses of phenomena that have previously been thought to require the ability to destructively modify phonological structures. In 2, James Scobbie discusses syllabification in Tashlhiyt Berber and presents a declarative analysis couched in a feature-structure based framework. In 3, Steven Bird investigates vowel harmony in Montañes Spanish and consonant harmony in Chumash and proposes a non-feature-changing account using a finite-state model. In 4, John Coleman presents a brief overview of his reconstruction of Lexical Phonology in a declarative framework. The final section contains a commentary by Janet Pierrehumbert, discussing the achievements and prospects of declarative phonology in relation to generative phonology, Lexical Phonology and laboratory phonology.
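The core idea of the introduction can be sketched in code: in a declarative grammar, a surface form is well-formed just in case it simultaneously satisfies a set of constraints, with no ordered, feature-changing derivation. The following minimal Python sketch is purely illustrative — the toy two-way harmony and hiatus constraints are invented for this example, not taken from the paper.

```python
# A hypothetical declarative grammar as constraint conjunction.
FRONT = set("ei")   # toy front-vowel class
BACK = set("au")    # toy back-vowel class
VOWELS = FRONT | BACK

def harmony(form: str) -> bool:
    """All vowels in the form agree in backness (toy constraint)."""
    vs = [c for c in form if c in VOWELS]
    return all(v in FRONT for v in vs) or all(v in BACK for v in vs)

def no_hiatus(form: str) -> bool:
    """No two adjacent vowels (toy constraint)."""
    return not any(a in VOWELS and b in VOWELS
                   for a, b in zip(form, form[1:]))

CONSTRAINTS = [harmony, no_hiatus]

def well_formed(form: str) -> bool:
    # The order in which constraints are checked is irrelevant:
    # this is the declarative point -- there are no derivational
    # stages and no intermediate representations.
    return all(c(form) for c in CONSTRAINTS)

print(well_formed("gelin"))  # True: all front vowels, no hiatus
print(well_formed("gelun"))  # False: mixes front e with back u
```

Note that rule ordering has no analogue here: removing or permuting constraints changes which forms are admitted, but never introduces hidden intermediate forms.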


2 CONSTRAINT CONFLICT (James M. Scobbie)
2.1 The Phonotactic: General Tendency or Hard Constraint?
It is well-known that rewrite rules fail to capture some generalisations about the level of representation they derive (Kisseberth, 1970; Shibatani, 1973). Defining the well-formed structures of that level using phonotactics enables those patterns to be addressed. Moreover, insofar as the patterns that exist in a language trigger its alternations, the alternations are explained.
If well-formedness constraints are used, it is necessary to decide whether or not to use rewrite rules also. When a grammar employs both formal techniques, their interaction is necessarily an area of concern (Scobbie, 1991b). Some work (e.g. (Singh, 1987; Paradis, 1988)) replaces structural descriptions with phonotactics and structural changes with repair-strategies. Whenever a structure known to be ill-formed at the surface level of representation can be generated during the derivation, it is indeed generated, only to be destructively modified. Therefore one can state general tendencies of distribution directly in the grammar and ‘repair’ those forms generated by the tendency which happen to be in conflict with empirical considerations. Though in these theories phonological representations are intended to be models of aspects of competence, the derivation and the intermediate forms are uninterpreted aspects of the theory. Such hidden elements imbue the theory with greater abstractness, and they decrease the modularity of the theory with respect to the procedures that can be employed to implement it.
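The phonotactic-plus-repair architecture that Scobbie criticizes can be made concrete with a small sketch: an ill-formed intermediate structure is generated during the derivation and then destructively modified. The phonotactic (no word-final voiced obstruent) and the repair (devoicing) below are invented examples for illustration only, not drawn from the paper.

```python
# Hypothetical repair-strategy derivation: generate, then repair.
VOICED_TO_VOICELESS = {"b": "p", "d": "t", "g": "k"}

def violates_phonotactic(form: str) -> bool:
    """Toy phonotactic: no word-final voiced obstruent."""
    return bool(form) and form[-1] in VOICED_TO_VOICELESS

def repair(form: str) -> str:
    # Destructive modification: the ill-formed intermediate form
    # exists in the derivation but is never surface-interpreted --
    # exactly the kind of hidden element the text objects to.
    return form[:-1] + VOICED_TO_VOICELESS[form[-1]]

def derive(underlying: str) -> str:
    form = underlying                # the ill-formed stage is generated...
    if violates_phonotactic(form):
        form = repair(form)          # ...only to be repaired
    return form

print(derive("rad"))  # "rat": final d devoiced by the repair
print(derive("rat"))  # "rat": already well-formed, untouched
```

The intermediate form `"rad"` is precisely the uninterpreted, abstract object that a purely declarative grammar would never posit.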
Another line of research is to use only constraints acting in concert to describe a level of representation. If the constraints are broad-stroke general tendencies (such as a syllable’s disdain for a coda or love of an onset) they will of course sometimes clash in their demands. Some means must be found of resolving such inconsistencies.

We can avoid an inconsistent grammar by using formal statements of distribution which fail to clash by virtue of their precision, and by using familiar conventions such as the Elsewhere Condition. Formalising the universal tendencies with an appropriate amount of detail dispels constraint conflict. The interaction of these hard constraints is therefore declarative and compositional. This is the approach advocated here.

Other approaches adopt optimisation techniques which provide a metric capable of determining the best-formed structures possible in the contradictory circumstances. The optimal solution is the one in which the fewest important constraints are violated. Tendencies are in fact soft constraints in these theories and are carefully prioritised in a derivational architecture familiar from connectionism.

In the next section I examine data which has been argued to be ideally suited to optimisation. I will show that once the tendencies are formalised, they include enough detail to allow them to be implemented as hard constraints in a derivationally neutral, declarative way.
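The two architectures contrasted above can be sketched side by side: hard constraints filter candidates outright, while soft, ranked constraints select the candidate with the fewest important violations. The constraints below (ONSET favours an onset consonant, NOCODA penalises a final consonant) are standard textbook tendencies, but their encoding here is a hypothetical illustration, not the paper's formalism.

```python
# Toy soft-constraint (optimisation) resolution of a constraint clash.
VOWELS = set("aeiou")

def onset_violations(syll: str) -> int:
    """ONSET: a syllable should begin with a consonant."""
    return 0 if syll and syll[0] not in VOWELS else 1

def nocoda_violations(syll: str) -> int:
    """NOCODA: a syllable should end in a vowel."""
    return 0 if syll and syll[-1] in VOWELS else 1

# Ranking ONSET above NOCODA: compare violation profiles
# lexicographically and pick the best-formed candidate.
RANKED = [onset_violations, nocoda_violations]

def optimal(candidates):
    return min(candidates, key=lambda s: [c(s) for c in RANKED])

def well_formed(syll: str) -> bool:
    # The hard-constraint alternative: no metric, just filtering.
    return all(c(syll) == 0 for c in RANKED)

print(optimal(["ta", "at", "tat"]))  # "ta": zero violations
print(well_formed("at"))             # False: no onset
```

The difference is visible in the types: `optimal` always returns some candidate, however ill-formed, whereas `well_formed` simply rejects forms — which is why precise hard constraints, as advocated here, never need a derivational tie-breaking mechanism.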


(To read the whole paper, please click on the title, Declarative Phonology.)

Go to FJU-Lin and see these pages:


papers about declarative phonology

key aspects of declarative phonology

declarative phonology



Recommended books:
declarative phonology in Phonology
Author: Charles W. Kreidler
declarative phonology in Asymmetry in Grammar
Author: Anne-Marie Di Sciullo
declarative phonology in Linguistics Today: Facing a Greater Challenge
Author: Piet Van Sterkenburg, International Congress of Linguists
declarative phonology in Phonetics, Phonology, and Cognition
Authors: Jacques Durand, Bernard Laks