Feindel Brain and Mind Seminar Series: Neural Dynamics and Computations Constraining Speech and Music Processing
The Feindel Brain and Mind Seminar Series will advance the vision of Dr. William Feindel (1918–2014), former Director of The Neuro (1972–1984), of constantly bridging the clinical and research realms. The talks will highlight the latest advances and discoveries in neuropsychology, cognitive neuroscience, and neuroimaging.
Speakers will include scientists from across The Neuro, as well as colleagues and collaborators both locally and from around the world. The series is intended to provide a virtual forum for scientists and trainees to continue to foster interdisciplinary exchanges on the mechanisms, diagnosis, and treatment of brain and cognitive disorders.
To attend in person, register here.
To watch via Vimeo, click here.
Benjamin Morillon
Research Director, Aix-Marseille Université, INSERM, Institut de Neurosciences des Systèmes (INS), Marseille, France
Host: Robert Zatorre (robert.zatorre [at] mcgill.ca)
Abstract: Benjamin Morillon will describe the neural dynamics underlying music perception and speech comprehension, emphasizing time scales and adaptive processes. First, he will explore why humans spontaneously dance to music, presenting behavioral and neuroimaging evidence that motor dynamics reflect predictive timing during music listening. While auditory regions track the rhythm of melodies, intrinsic neural dynamics at delta (1.4 Hz) and beta (20-30 Hz) frequencies in the dorsal auditory pathway encode the wanting-to-move experience, or "groove." These neural dynamics are organized along the pathway in a spectral gradient, with the left sensorimotor cortex coordinating groove-related delta and beta activity. Predictions from a neurodynamic model suggest that spontaneous motor engagement during music listening arises from predictive timing, driven by interactions of neural dynamics along the dorsal auditory pathway. Second, he will present a framework for investigating speech comprehension built on the concept of channel capacity, examining how various acoustic and linguistic features influence the comprehension of compressed speech. Results demonstrate that each feature independently affects comprehension, to varying degrees, with the syllabic rate clearly dominant. Complementing this framework, human intracranial recordings reveal how neural dynamics in the auditory cortex adapt to different acoustic features, enabling parallel processing of speech at syllabic and phonemic time scales. Together, these findings underscore how neural processes dynamically adapt to the temporal characteristics of speech and music, enhancing our understanding of language and music perception.
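For context, "channel capacity" borrows from information theory. As an illustrative reference point only (the talk's framework applies the concept to speech comprehension and need not reduce to this expression), Shannon's capacity of a noisy channel with bandwidth B (in Hz) and signal-to-noise ratio S/N is:

C = B log2(1 + S/N)  [bits per second]

Read in this spirit, compressing speech pushes the incoming information rate toward a listener's capacity limit, which offers one way to understand why the syllabic rate dominates comprehension of compressed speech.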