Object Recognition Using Images, Sounds, and Sentences: Normative Dynamic Tractography Atlases
Abstract number: 1.256
Submission category: 5. Neuro Imaging / 5B. Functional Imaging
Year: 2022
Submission ID: 2203935
Source: www.aesnet.org
Presentation date: 12/3/2022 12:00:00 PM
Published date: Nov 22, 2022, 05:22 AM
Authors: Yu Kitazawa, MD, PhD – Wayne State University; Masaki Sonoda, MD, PhD – Neurosurgery – Yokohama City University; Kazuki Sakakura, MD – Pediatrics – Wayne State University; Takumi Mitsuhashi, MD, PhD – Neurosurgery – Juntendo University; Ethan Firestone, MD/PhD candidate, MS – Wayne State University; Toshimune Kambara, PhD – Psychology – Hiroshima University; Hirotaka Iwaki, MD, PhD – Psychiatry – Hachinohe City Hospital; Aimee Luat, MD – Pediatrics – Central Michigan University; Neena Marupudi, MD – Neurosurgery – Wayne State University; Sandeep Sood, MD – Neurosurgery – Wayne State University; Eishi Asano, MD, PhD, MS (CRDSA) – Pediatrics – Wayne State University
This abstract is a recipient of the Grass Young Investigator Award
Rationale: One can readily recognize an object from its visual appearance, its characteristic environmental sound, or a spoken description. However, it remains unknown whether each of these modalities engages distinct functional pathways in the brain.
Methods: To fill this knowledge gap, we built cortical atlases visualizing the spatiotemporal dynamics of high-gamma activity (70-100 Hz) elicited by different naming tasks. We analyzed 1,114 nonepileptic electrode sites sampled from 13 children with drug-resistant focal epilepsy who underwent extraoperative intracranial EEG (iEEG) recording. The iEEG data were combined with diffusion-weighted imaging tractography to animate signal propagation along white matter pathways connecting cortices that were simultaneously modulated beyond chance level during a given task. Using this approach, we compared cortical and white matter network dynamics across naming tasks cued by pictures, environmental sounds, or spoken descriptions; only the last required syntactic processing.
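For readers unfamiliar with how event-related high-gamma amplitude is commonly quantified from intracranial EEG, the sketch below illustrates one generic approach (band-pass filtering to 70-100 Hz, a Hilbert envelope, and baseline normalization) applied to simulated data. The sampling rate, burst parameters, and baseline window are hypothetical, and this is an illustrative sketch rather than the authors' actual analysis pipeline.

# Illustrative sketch only: high-gamma (70-100 Hz) amplitude extraction from a
# simulated single-channel iEEG trace. Not the authors' pipeline; all numeric
# parameters below are assumptions chosen for demonstration.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                       # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of simulated data

# Simulated iEEG: Gaussian background plus a brief 85 Hz burst at 0.8-1.2 s
rng = np.random.default_rng(0)
signal = rng.normal(scale=5.0, size=t.size)
burst = (t > 0.8) & (t < 1.2)
signal[burst] += 10.0 * np.sin(2 * np.pi * 85.0 * t[burst])

# Band-pass filter to the high-gamma band reported in the abstract (70-100 Hz)
b, a = butter(4, [70.0, 100.0], btype="bandpass", fs=fs)
high_gamma = filtfilt(b, a, signal)

# Hilbert transform yields the instantaneous amplitude (envelope)
envelope = np.abs(hilbert(high_gamma))

# Express amplitude as percent change relative to a pre-stimulus baseline
baseline = envelope[t < 0.5].mean()
percent_change = 100.0 * (envelope - baseline) / baseline
print(f"Peak high-gamma amplitude change: {percent_change.max():.1f}%")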
Results: Picture and environmental sound naming tasks revealed a double dissociation in network activation immediately after stimulus onset. Pictures activated callosal white matter pathways between the occipital lobes, whereas environmental sounds activated those between the superior-temporal gyri. A second double dissociation emerged 200-600 ms before overt response (Figures 1, 2): pictures activated the left superior longitudinal fasciculus between the inferior-temporal and precentral gyri, whereas environmental sounds activated the left inferior fronto-occipital fasciculus between the posterior inferior-frontal and medial-occipital gyri. A similar dissociation at 200-600 ms before response was also noted between the environmental sound and spoken sentence naming tasks (Figures 1, 2). The former activated posterior-dominant networks, including the left middle longitudinal fasciculus between the medial-occipital and superior-temporal gyri. In contrast, the latter activated anterior-dominant networks, including left frontal aslant U-fibers involving the posterior middle-frontal gyrus.
Conclusions: The spatiotemporal dynamics of naming-related network modulations differ across stimulus types. The left superior longitudinal fasciculus arising from the inferior-temporal gyrus may be engaged more heavily during image-based naming, whereas the left-hemispheric pathways from the medial-occipital to the superior-temporal and posterior inferior-frontal gyri may support processing of environmental sounds. The left frontal aslant U-fibers involving the posterior middle-frontal gyrus may support auditory syntactic processing.
Funding: NIH grant NS64033 (to E. Asano)