Neural dynamics of variable-rate speech categorization
Author(s): Grossberg S
Journal/Book: J Exp Psychol Hum Percept Perform. 1997; 23: 481-503. American Psychological Association, 750 First St NE, Washington, DC 20002-4242.
Abstract: What is the neural representation of a speech code as it evolves in time? A neural model simulates data concerning segregation and integration of phonetic percepts. Hearing two phonetically related stops in a VC-CV pair (V = vowel; C = consonant) requires 150 ms more closure time than hearing two phonetically different stops in a VC1-C2V pair. Closure time also varies with long-term stimulus rate. The model simulates rate-dependent category boundaries that emerge from feedback interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech code is a resonant wave. It emerges after bottom-up signals from the working memory select list chunks which read out top-down expectations that amplify and focus attention on consistent working memory items. In VC1-C2V pairs, resonance is reset by mismatch of C2 with the C1 expectation. In VC-CV pairs, resonance prolongs a repeated C.
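To make the reset-versus-resonance contrast in the abstract concrete, the following is a minimal illustrative sketch in Python. It is not the article's model or its equations: the function `simulate`, its parameter values, and the binary `expectation_matches` flag are assumptions introduced here purely to show how a mismatch-driven reset can shorten, and a matched repetition prolong, a feedback resonance between an item layer and a chunk layer.

```python
import numpy as np

# Illustrative sketch only (NOT the article's ARTPHONE equations):
# a working-memory item activation x and a list-chunk activation y excite
# each other; when the next item mismatches the active chunk's top-down
# expectation, the chunk is reset, terminating the resonance.

def simulate(input_seq, expectation_matches, dt=0.01, steps_per_item=300):
    x, y = 0.0, 0.0          # item (working memory) and chunk activations
    trace = []
    for inp, match in zip(input_seq, expectation_matches):
        if not match:
            y = 0.0          # mismatch reset: chunk activity is cleared
        for _ in range(steps_per_item):
            # leaky integration with bottom-up input and top-down support
            dx = -2.0 * x + inp + 1.5 * y * match
            dy = -2.0 * y + 2.0 * x
            x += dt * dx
            y += dt * dy
            trace.append((x, y))
    return np.array(trace)

# A repeated consonant (match = 1) keeps feeding the same chunk, so the
# resonance builds and is prolonged; a different consonant (match = 0)
# resets the chunk before the new item can establish its own resonance.
same = simulate([1.0, 1.0], [1, 1])
diff = simulate([1.0, 1.0], [1, 0])
print("peak chunk activity, repeated C :", same[:, 1].max())
print("peak chunk activity, different C:", diff[:, 1].max())
```

Under these assumed parameters, the repeated-consonant run reaches a higher and longer-lasting chunk activation than the different-consonant run, mirroring the qualitative contrast between VC-CV and VC1-C2V pairs described in the abstract.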
Note: Grossberg S, Boston Univ, Dept Cognit & Neural Syst, 677 Beacon St, Boston, MA 02215 USA
Keyword(s): ADAPTIVE PATTERN-CLASSIFICATION; INTERACTIVE ACTIVATION MODEL; WORKING-MEMORY NETWORKS; FUZZY LOGICAL MODEL; SHORT-TERM-MEMORY; PERCEPTUAL INTEGRATION; CIRCADIAN-RHYTHMS; TEMPORAL-ORDER; TRACE MODEL; RECOGNITION