
Teach English in Xujiahui Jiedao - Shanghai Shi

Do you want to be TEFL- or TESOL-certified and teach in Xujiahui Jiedao? Are you interested in teaching English in Shanghai Shi? Check out ITTT's online and in-class courses, become certified to teach English as a foreign language, and start teaching English online or abroad! ITTT offers a wide variety of online TEFL courses and a great number of opportunities for English teachers and for teachers of English as a second language.

Phonetics-Phonology

I have long been curious about the structural and physiological aspects of human speech, partly as a result of being struck by how different language traditions, and the differing exercises and demands they impose on the muscles of the face, actually tend to produce slightly, but still recognizably, different facial features in the speakers of different languages. I find this to be particularly evident in comparing Americans to people from other countries. For example, the French tend to have noticeably developed lines showing muscular focus and exertion around their lips and mouths and in the lower parts of their faces generally; Germans tend to have some of the same, though not quite as much. Although they speak an extremely different language from French or German, the Japanese tend to have somewhat similar visible lower facial development that I have not encountered to the same extent among the Chinese or other Asian nationalities. Interestingly, Japanese is significantly more consonant-based than largely vowel-based Chinese, which might help to account for the difference. [All these patterns tend to be more visible in people above a certain age, perhaps 40 and certainly 50, as the characteristic facial lines of the language tradition gradually get etched deeper. However, working at UCLA as I do, where the student body is about half East Asian in background and includes people whose families have been here for generations, first- and second-generation immigrants, and recently arrived visiting foreign students, it is interesting to note that one can often guess fairly accurately at which stage of acculturation even a relatively young person is by studying his or her face. This is partly related to cultural differences regarding which emotional states or facial expressions are normal or acceptable, but it also appears to be related to predominant English use versus use of one Asian mother tongue or another.]

I have recently been studying Spanish, and while coping with my frustration over pronouncing words with more than one trilled "r" in close succession, I realized that moving the lower jaw slightly forward tends to place the tongue in a better position for trilling "r"s, and a better position for producing various other characteristic sounds of Spanish, including the dental "d" (which shades into a "th" sound and differs from the English alveolar "d"). The ideal position for producing the sounds of Spanish, lower jaw slightly forward but held close to the upper jaw, in turn helps explain a facial formation I have seen and found curious in Spanish speakers: the tendency of the face to wear a permanent slight semi-smile. Americans, by comparison, seem to have the slackest faces of all the peoples I've encountered, and they articulate speech sounds noticeably less precisely than most others; even the various British nationalities use their facial muscles more and develop different configurations, notwithstanding that they too speak English. But all that is merely by way of explaining and justifying my interest in phonetics, the topic of the following paragraphs.

Phonetics is the branch of linguistics that studies the sounds of human speech (or the corresponding aspects of gesture and movement in sign language).
Phonetics includes three sub-branches: (1) articulatory phonetics, which concerns the physiological aspects of the production of speech sounds by speakers; (2) acoustic phonetics, which studies how speech sounds are transmitted from speaker to listener; and (3) auditory phonetics, which studies the processes by which listeners receive and perceive speakers' speech sounds. All three involve the physics of sound and sound waves, including the changes of wavelength and amplitude that affect tone, pitch, harmonics, and other sonic properties.

Perhaps the first known explorations of phonetics took place in ancient India around 500 BCE, roughly contemporaneous with the Golden Age of ancient Greece. The ancient Greeks, among their many other firsts, appear to have been the first to develop a writing system based upon a phonetic alphabet. Modern phonetics begins with the 1867 publication of Alexander Melville Bell's book Visible Speech, which offered a new system for precisely notating speech sounds. Not long thereafter, the early phonetician Ludimar Hermann used the new technology of the early Edison phonograph to record human vowel sounds and play them back at slower speeds in order to study characteristics lost to the ear at normal speed. [Such equipment and study techniques appear in Professor Henry Higgins's laboratory in the classic film My Fair Lady, the musical adaptation of George Bernard Shaw's play Pygmalion; apparently the set designers did their homework.]

Phonetics is related to, but distinct from, phonology, another branch of linguistics, which studies how sounds and gestures form patterns both within individual languages and across languages, and how these patterns relate to other aspects of language, including meaning. Phonetics tends to be less concerned with meaning and other communicative aspects of language, and more narrowly focused on the physical and physiological aspects of sound production and reception.

Given its concern with physiology, phonetics traces the different organs and structures in the human body associated with sound production, from the lungs and diaphragm through the nose and mouth, including the trachea, larynx, glottis, pharynx, uvula, velum, hard palate, alveolar ridge, tongue, oral cavity, nasal cavity, teeth, and lips, each of which plays a different role in the production of speech sounds. The places of articulation of speech sounds, corresponding to these structures, are categorized as glottal, pharyngeal, uvular, velar, palatal, alveopalatal, alveolar, dental, interdental, labiodental, and bilabial (adding a few categories to those included and explained in Unit 13). The categories describing the manner of articulation include plosives, fricatives, affricates, nasals, approximants, glides, taps, and trills, most of which are listed and explained in Unit 13. Approximants include what the unit listed as semi-vowels (the "w" and "j" sounds) and laterals (the "l" sound); they are created by constriction of the vocal tract without obstruction and thus lack the explosive air release of plosives (from full obstruction) or the turbulent air release of fricatives (from partial obstruction). The manner-of-articulation categories mostly concern consonants; vowel sounds are instead categorized by tongue height, tongue backness (position in the mouth), lip rounding, and tenseness or laxness. [The umlauted u and o sounds in German, for instance, involve a higher degree of tenseness and lip rounding than almost any sounds in English, certainly American English, as do similar vowel sounds in French, which probably helps account for the different facial muscle development around the lips and mouth.]
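Because the place and manner labels form a simple two-dimensional classification, a short sketch can make the scheme concrete. The following Python fragment is purely illustrative: the sample symbols, the CONSONANTS table, and the describe function are my own hypothetical choices for demonstration, not part of Unit 13 or of any standard phonetics library.

# Illustrative sketch: a handful of consonants classified by place and
# manner of articulation. The inventory is a hypothetical sample, not a
# complete or authoritative IPA chart.
CONSONANTS = {
    # symbol: (place of articulation, manner of articulation)
    "p":  ("bilabial", "plosive"),
    "f":  ("labiodental", "fricative"),
    "th": ("interdental", "fricative"),
    "d":  ("alveolar", "plosive"),
    "ch": ("alveopalatal", "affricate"),
    "m":  ("bilabial", "nasal"),
    "l":  ("alveolar", "approximant"),   # a lateral
    "j":  ("palatal", "approximant"),    # a semi-vowel, the glide in "yes"
    "r":  ("alveolar", "trill"),         # e.g. the trilled Spanish "r"
}

def describe(symbol: str) -> str:
    """Return the place/manner classification of one consonant symbol."""
    place, manner = CONSONANTS[symbol]
    return f"'{symbol}': {place} {manner}"

if __name__ == "__main__":
    for sound in ("p", "th", "j", "r"):
        print(describe(sound))   # e.g. 'p': bilabial plosive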
Sound itself, whether from a human voice, a musical instrument, or any other source, is produced by vibration disturbing the air molecules (a mixture of nitrogen and oxygen gas molecules) at the point where the sound is generated. The vibration compresses the usual spatial relations between nearby air molecules, and the reaction against that compression, as the molecules try to restore their usual spacing, ripples outward through neighboring molecules and forms the wave patterns we call sound waves. The human auditory apparatus (the eardrum and its associated structures and nerve connections) is sensitive to such sound waves and sends signals to the brain, which processes them for meaning and significance.
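As a rough numerical illustration of these waves, here is a minimal sketch, assuming the standard relationship wavelength = speed / frequency and a nominal speed of sound of about 343 m/s in air at roughly 20 degrees Celsius; the figures are textbook values chosen for demonstration, not drawn from the sources below.

# Minimal sketch of the relationship wavelength = speed of sound / frequency.
# 343 m/s is the approximate speed of sound in dry air at about 20 C; the
# sample frequencies roughly span the range relevant to human speech.
SPEED_OF_SOUND = 343.0  # metres per second

def wavelength(frequency_hz: float) -> float:
    """Wavelength in metres of a tone of the given frequency, in air."""
    return SPEED_OF_SOUND / frequency_hz

if __name__ == "__main__":
    for hz in (100.0, 440.0, 3000.0):  # low voice pitch, concert A, upper speech range
        print(f"{hz:6.0f} Hz -> {wavelength(hz):.3f} m")
        # 100 Hz -> 3.430 m; 440 Hz -> 0.780 m; 3000 Hz -> 0.114 m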
Here are three useful and interesting websites I consulted for information in preparing this report:

http://en.wikipedia.org/wiki/Phonetics
http://www.ic.arizona.edu/~lsp/Phonetics.html
http://webspace.ship.edu/cgboer/phonetics.html