Authors:
Presenting Author: Edwina Tran, BA – UCSF
Jet M.J. Vonk, PhD – Assistant Professor, Neurology, UCSF; Kaitlin Casaletto, PhD – Neuropsychologist, Neurology, UCSF; Da Zhang, PhD – Postdoctoral Scholar, Neurology, UCSF; Raphael Christin, BS – Research Associate, Neurology, UCSF; Jonathan Kleen, MD, PhD – Assistant Professor, Neurology, UCSF
Rationale:
Semantic memory refers to long-term general knowledge in the form of facts, concepts, and associations. Prior non-invasive studies using fMRI, EEG, and MEG suggest that semantic processing is supported by distributed regions (“hubs”) across the cortex bilaterally. Some of these areas, such as the temporal lobe, can be disrupted by neurodegenerative conditions that affect semantic memory (Alzheimer’s disease and semantic variant primary progressive aphasia) and can also be affected in epilepsy. However, it is challenging to delineate semantic deficits from other cognitive elements that may be affected in epilepsy, including episodic memory and linguistic/verbal abilities. We probed semantic memory through non-verbal means by developing a new Visual-based Semantic Association Task (ViSAT) to determine whether semantic processing is impaired in patients with epilepsy (PWE).
Methods:
We obtained data from 17 participants with epilepsy (PWE) and 19 control participants (CP), along with crowdsourced normative data from 55 USA-based Amazon Mechanical Turk (MTurk) workers online. Participants across all groups ranged from 19 to 73 years of age (median 38). The ViSAT was adapted from the concepts and stimuli of the Pyramids and Palm Trees Test, in which a person matches a target picture (line drawing) on top with one of two pictures below based on a semantic association. We adapted and expanded this design by creating 100 trials with four answer choices using real-life color pictures, balancing visual features and improving generalizability. Performance metrics included the percent consensus for the most popular answer choice among MTurk workers (PCons) on each trial, along with accuracy (relative to the consensus choice) and reaction time (RT) among in-person participants.
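The PCons and consensus-referenced accuracy metrics described above can be sketched as follows. This is a minimal illustration, not the study's actual analysis code; the function names and data layout (per-trial lists of crowd answer choices, a dict of one participant's answers) are hypothetical.

```python
from collections import Counter

def percent_consensus(responses):
    """Percent of crowd raters choosing the most popular answer for one trial.

    `responses` is a list of answer choices (e.g. 'A'-'D') from the MTurk
    sample; returns (modal_answer, PCons as a percentage).
    """
    counts = Counter(responses)
    answer, n = counts.most_common(1)[0]
    return answer, 100.0 * n / len(responses)

def consensus_accuracy(participant_answers, crowd_responses_by_trial):
    """Fraction of trials on which a participant matched the crowd's modal
    (consensus) answer -- accuracy 'relative to PCons'."""
    hits = 0
    for trial, answer in participant_answers.items():
        modal, _ = percent_consensus(crowd_responses_by_trial[trial])
        if answer == modal:
            hits += 1
    return hits / len(participant_answers)
```

For example, a trial where 9 of 10 crowd workers chose 'A' would have PCons = 90%, and a participant choosing 'A' on that trial would be scored as correct.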
Results:
Based on the MTurk normative data, the task was refined to achieve high consensus rates for each trial: 91.2% of trials had a PCons >= 90% (34% of trials reached PCons = 100%). We ran a deep neural network (ResNet-18) with an image feature similarity embedding model (image2vec), which verified that visual features alone (color, shape, texture, etc.) could not explain the similarity between the target stimulus and the consensus answer choice, suggesting participants' choices were semantically based. The PWE and CP groups showed lower overall accuracy than the MTurk group (p < 0.001, Wilcoxon), though trial-by-trial accuracy was correlated across groups (p < 0.001, Spearman). The PWE group had longer RT than the CP group (p < 0.001), and an interaction between group and RT indicated that PWE took more time to choose an answer (p < 0.001, mixed effects model) than to click the stimulus (p < 0.001, mixed effects model).
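The logic of the visual-confound check can be sketched as below: if the consensus answer were simply the option most visually similar to the target (in ResNet-18 embedding space, e.g. penultimate-layer features as produced by tools like img2vec), visual features could explain the choices. Here the embeddings are represented as plain vectors and the function names are hypothetical, assuming embeddings have already been extracted; this is an illustration of the comparison, not the study's pipeline.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def consensus_not_visually_driven(target_vec, choice_vecs, consensus_idx):
    """Return True if the consensus answer is NOT the most visually similar
    option to the target, i.e. visual similarity alone does not predict the
    crowd's choice on this trial."""
    sims = [cosine_similarity(target_vec, v) for v in choice_vecs]
    most_similar = max(range(len(sims)), key=sims.__getitem__)
    return most_similar != consensus_idx
```

Trials where this check returns True support the interpretation that the consensus association is semantic rather than driven by low-level visual features.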