BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ARNI - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:ARNI
X-ORIGINAL-URL:https://arni-institute.org
X-WR-CALDESC:Events for ARNI
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20240419T113000
DTEND;TZID=UTC:20240419T130000
DTSTAMP:20260514T020711Z
CREATED:20240410T234910Z
LAST-MODIFIED:20240416T184148Z
UID:792-1713526200-1713531600@arni-institute.org
SUMMARY:Shihab Shamma
DESCRIPTION:Title: The auditory cortex: A sensorimotor fulcrum for speech and music perception \nAbstract: The auditory cortex sits at the center of all auditory-motor tasks and percepts\, from listening to our voice as we speak\, to the music that we play\, and to the complex sound mixtures that we seek to perceive. The auditory cortex orchestrates all these demands by segregating the sound sources and attending to a few\, and then directing them to be semantically decoded in the language or music areas of the brain. It also sends collateral signals to motor areas where the sounds could be produced and controlled (e.g.\, the vocal tract or hands). All these regions in turn reflect back to the auditory cortex their expectations and predictions of the activations due to the incoming sound streams. In this talk I shall review computational models of several such phenomena\, and discuss the experimental findings that test their underlying assumptions in humans and ferrets while they segregate speech mixtures\, imagine music\, or listen to songs.
URL:https://arni-institute.org/event/shihab-shamma/
LOCATION:Zuckerman Institute – L5-084\, 3227 Broadway\, New York\, NY\, United States
END:VEVENT
END:VCALENDAR