5 abstracts accepted to CCN
Our lab will present 5 abstracts at the Cognitive Computational Neuroscience conference at MIT in Boston this fall!
The projects cover new models of vision and language, new methods for evaluating how well these models align with brain data, and ideas for putting the best-aligned models to use.
A Simple Untrained Recurrent Attention Architecture Aligns to the Human Language Network
Scaling Laws for Task-Optimized Models of the Primate Visual Ventral Stream
Current DNNs are Unable to Integrate Visual Information Across Object Discontinuities
Topographic Deep ANN Models Predict the Perceptual Effects of Direct IT Cortical Interventions