To learn more about my professional experience, check out my CV (updated June 2021).


The way that we perceive the world is fundamentally shaped by our knowledge of it. In the domain of speech perception, for instance, our interpretation of the speech signal can be shaped by our knowledge of what is or isn’t a word (lexical knowledge) as well as by our knowledge of the idiosyncratic ways that different talkers produce their speech sounds.

I am particularly interested in the mental computations that underlie the process of spoken word recognition as well as how these computations are achieved in the brain, and I'm keen to draw connections between computational and neurobiological accounts. As a graduate researcher, I've focused largely on how listeners leverage their knowledge of who is talking, as well as their knowledge of what that person is likely to say, in order to comprehend the speech signal. My research has employed a wide range of methodological approaches, including behavioral and eye-tracking experiments, simulations with computational models, functional MRI, machine learning, and transcranial magnetic stimulation. I'm a big believer in #openscience and interdisciplinary #scicomm, and I see diversity, equity, and inclusion as essential components of any endeavor. Follow me on Twitter and/or send me an email at sahil [dot] bamba [dot] luthra [at] gmail [dot] com.

Selected Recent Work:


  1. Luthra, S., Saltzman, D., Myers, E. B., & Magnuson, J. S. (2021). Listener expectations and the perceptual accommodation of talker variability: A pre-registered replication. Attention, Perception, & Psychophysics. In press. [GitHub].
  2. Luthra, S., Li, M. Y. C., You, H., Brodbeck, C., & Magnuson, J. S. (2021). Does signal reduction imply predictive coding in models of spoken word recognition? Psychonomic Bulletin & Review. Advance online publication. [Supplementary Materials]. [GitHub].
  3. Luthra, S., Peraza-Santiago, G., Beeson, K., Saltzman, D., Crinnion, A. M., & Magnuson, J. S. (2021). Robust lexically-mediated compensation for coarticulation: Christmash time is here again. Cognitive Science, 45(4), 1-20. [Video Summary]. [OSF Preregistration]. [GitHub].
  4. Luthra, S., Mechtenberg, H., & Myers, E. B. (2021). Perceptual learning of multiple talkers requires additional exposure. Attention, Perception, & Psychophysics. Advance online publication. [Video Summary]. [OSF Repository].
  5. Luthra, S. (2021). The role of the right hemisphere in processing phonetic variability between talkers. Neurobiology of Language, 2(1), 138-151.
  6. Luthra, S., You, H., Rueckl, J. G., & Magnuson, J. S. (2020). Friends in low-entropy places: Orthographic neighbor effects on visual word identification differ across letter positions. Cognitive Science, 44(12), 1-31. [GitHub].
  7. Luthra, S., Magnuson, J. S., & Myers, E. B. (2020). Boosting lexical support does not enhance lexically guided perceptual learning. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. [OSF Repository].
  8. Luthra, S., Correia, J. M., Kleinschmidt, D. F., Mesite, L., & Myers, E. B. (2020). Lexical information guides retuning of neural patterns in perceptual learning for speech. Journal of Cognitive Neuroscience, 32(10), 2001-2012. [OSF Repository].
  9. Magnuson, J. S., You, H., Luthra, S., Li, M. Y. C., Nam, H., Escabí, M., Brown, K., Allopenna, P. D., Theodore, R. M., Monto, N., & Rueckl, J. G. (2020). EARSHOT: A minimal neural network model of incremental human speech recognition. Cognitive Science, 44(4), 1-17. [Supplementary Materials]. [GitHub].
  10. Luthra, S., Fuhrmeister, P., Molfese, P. J., Guediche, S., Blumstein, S. E., & Myers, E. B. (2019). Brain-behavior relationships in incidental learning of non-native phonetic categories. Brain & Language, 198.
  11. Luthra, S., Guediche, S., Blumstein, S. E., & Myers, E. B. (2019). Neural substrates of subphonemic variation and lexical competition in spoken word recognition. Language, Cognition and Neuroscience, 34(2), 151-169. [Supplementary Materials].
  12. Luthra, S., Fox, N. P., & Blumstein, S. E. (2018). Speaker information affects false recognition of unstudied lexical-semantic associates. Attention, Perception & Psychophysics, 80(4), 894-912. [OSF Repository].
  13. Magnuson, J. S., Mirman, D., Luthra, S., Strauss, T., & Harris, H. D. (2018). Interaction in spoken word recognition models: Feedback helps. Frontiers in Psychology, 9, 1-18.
  14. Theodore, R. M., Blumstein, S. E., & Luthra, S. (2015). Attention modulates specificity effects in spoken word recognition: Challenges to the time-course hypothesis. Attention, Perception & Psychophysics, 77(5), 1674-1684.

Selected Recent Presentations:
  1. Grubb, S., Dalal, P., Daniel, J., Peraza-Santiago, G., Luthra, S., Saltzman, D., Xie, B., Crinnion, A.M., & Magnuson, J. S. Talkers, time, tasks, and similarity in spoken word recognition. Psychonomic Society, Virtual Conference, November 2020.
  2. Saltzman, D., Luthra, S., Myers, E. B., & Magnuson, J. S. Multi-talker processing costs in monitoring reflect task demands, not normalization. Psychonomic Society, Virtual Conference, November 2020.