To learn more about my professional experience, check out my CV (updated April 2023).


The way that we perceive the world is fundamentally shaped by our knowledge of it as well as what we are paying attention to. In the domain of speech perception, for instance, our interpretation of the speech signal can be shaped by our knowledge of what is or isn’t a word (lexical knowledge) as well as by our knowledge of the idiosyncratic ways that different talkers produce their speech sounds. At the same time, our ability to selectively attend to certain parts of an auditory environment — as well as to flexibly shift the focus of our attention — can influence whose speech we perceive as well as how we perceive their speech sounds.

I am particularly interested in the mental computations that underlie the process of spoken word recognition as well as how these computations are achieved in the brain, and I’m keen to draw connections between computational and neurobiological accounts. During my PhD training, I focused largely on how listeners leverage their knowledge of who is talking as well as their knowledge of what that person is likely to say in order to comprehend the speech signal. In my postdoctoral work, I am also considering how listeners’ ability to selectively pay attention to relevant acoustic dimensions influences auditory processing. My research has employed a wide range of methodological approaches, including behavioral / eye tracking experiments, simulations with computational models, functional MRI, machine learning, and transcranial magnetic stimulation. I’m a big believer in #openscience and interdisciplinary #scicomm, and I see diversity, equity and inclusion as essential components of any endeavor. Follow me on Twitter, check out my projects on GitHub and the Open Science Framework, and/or send me an email at


Publications

  1. Obasih, C. O., Luthra, S., Dick, F., & Holt, L. L. (in press). Auditory category learning is robust across training regimes. Cognition. [Preprint].
  2. Luthra, S., Mechtenberg, H., Giorio, C., Theodore, R. M., Magnuson, J. S., & Myers, E. B. (2023). Using TMS to evaluate a causal role for right posterior temporal cortex in talker-specific phonetic processing. Brain and Language, 240, 105264. [Supplementary materials]. [Video summary]. [OSF].
  3. Luthra, S., Magnuson, J. S., & Myers, E. B. (2023). Right posterior temporal cortex supports integration of phonetic and talker information. Neurobiology of Language, 4(1), 145-177. [Video summary]. [OSF].
  4. Heffner, C. C., Fuhrmeister, P., Luthra, S., Mechtenberg, H., Saltzman, D., & Myers, E. B. (2022). Reliability for perceptual flexibility in speech: Identification, learning, and adaptation. Brain and Language, 226, 1-11.
  5. Saltzman, D., Luthra, S., Myers, E. B., & Magnuson, J. S. (2021). Attention, task demands, and multi-talker processing costs in speech perception. Journal of Experimental Psychology: Human Perception and Performance, 47(12), 1673-1680. [Video summary]. [GitHub].
  6. Luthra, S., Saltzman, D., Myers, E. B., & Magnuson, J. S. (2021). Listener expectations and the perceptual accommodation of talker variability: A pre-registered replication. Attention, Perception, & Psychophysics, 83(6), 2367-2376. [GitHub].
  7. Luthra, S., Li, M. Y. C., You, H., Brodbeck, C., & Magnuson, J. S. (2021). Does signal reduction imply predictive coding in models of spoken word recognition? Psychonomic Bulletin & Review, 28(4), 1381-1389. [Supplementary materials]. [Poster presentation]. [GitHub].
  8. Luthra, S., Peraza-Santiago, G., Beeson, K., Saltzman, D., Crinnion, A. M., & Magnuson, J. S. (2021). Robust lexically-mediated compensation for coarticulation: Christmash time is here again. Cognitive Science, 45(4), 1-20. [Video summary]. [OSF preregistration]. [GitHub].
  9. Luthra, S., Mechtenberg, H., & Myers, E. B. (2021). Perceptual learning of multiple talkers requires additional exposure. Attention, Perception, & Psychophysics, 83(5), 2217-2228. [Video summary]. [OSF repository].
  10. Luthra, S. (2021). The role of the right hemisphere in processing phonetic variability between talkers. Neurobiology of Language, 2(1), 138-151.
  11. Luthra, S., Magnuson, J. S., & Myers, E. B. (2021). Boosting lexical support does not enhance lexically guided perceptual learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(4), 685-704. [Poster presentation]. [OSF repository].
  12. Luthra, S., You, H., Rueckl, J. G., & Magnuson, J. S. (2020). Friends in low-entropy places: Orthographic neighbor effects on visual word identification differ across letter positions. Cognitive Science, 44(12), 1-31. [Poster presentation]. [GitHub].
  13. Luthra, S., Correia, J. M., Kleinschmidt, D. F., Mesite, L., & Myers, E. B. (2020). Lexical information guides retuning of neural patterns in perceptual learning for speech. Journal of Cognitive Neuroscience, 32(10), 2001-2012. [Poster presentation]. [OSF repository].
  14. Magnuson, J. S., You, H., Luthra, S., Li, M. Y. C., Nam, H., Escabí, M., Brown, K., Allopenna, P. D., Theodore, R. M., Monto, N., & Rueckl, J. G. (2020). EARSHOT: A minimal neural network model of incremental human speech recognition. Cognitive Science, 44(4), 1-17. [Supplementary materials]. [Poster presentation]. [GitHub].
  15. Luthra, S., Fuhrmeister, P., Molfese, P. J., Guediche, S., Blumstein, S. E., & Myers, E. B. (2019). Brain-behavior relationships in incidental learning of non-native phonetic categories. Brain and Language, 198. [Poster presentation].
  16. Luthra, S., Guediche, S., Blumstein, S. E., & Myers, E. B. (2019). Neural substrates of subphonemic variation and lexical competition in spoken word recognition. Language, Cognition and Neuroscience, 34(2), 151-169. [Supplementary materials]. [Poster presentation].
  17. Luthra, S., Fox, N. P., & Blumstein, S. E. (2018). Speaker information affects false recognition of unstudied lexical-semantic associates. Attention, Perception, & Psychophysics, 80(4), 894-912. [Poster presentation]. [OSF repository].
  18. Magnuson, J. S., Mirman, D., Luthra, S., Strauss, T., & Harris, H. D. (2018). Interaction in spoken word recognition models: Feedback helps. Frontiers in Psychology, 9, 1-18.
  19. Theodore, R. M., Blumstein, S. E., & Luthra, S. (2015). Attention modulates specificity effects in spoken word recognition: Challenges to the time-course hypothesis. Attention, Perception, & Psychophysics, 77(5), 1674-1684.

Recent Presentations

  1. Luthra, S., Tierney, A. T., Dick, F., & Holt, L. L. Neural systems underlying source- and dimension-based auditory selective attention to naturalistic speech. Poster presentation at Society for Neurobiology of Language, Philadelphia, PA, October 2022.
  2. Crinnion, A. M., Luthra, S., Gaston, P., & Magnuson, J. S. Resolving competing predictions in speech perception. Oral presentation at Psychonomic Society, Virtual Conference, November 2021.
  3. Mechtenberg, H., Luthra, S., & Myers, E. B. Cents and Shenshibility: The role of reward in talker-specific phonetic recalibration. Poster presentation at Psychonomic Society, Virtual Conference, November 2021.
  4. Luthra, S., Rueckl, J. G., & Magnuson, J. S. A computational investigation of the transformation from talker-specific detail to talker-invariant lexical representations. Poster presentation at Society for Neurobiology of Language, Virtual Conference, October 2021.
  5. Brodbeck, C., Luthra, S., Gaston, P., & Magnuson, J. S. Discovering computational principles in models and brains. Poster presentation at Cognitive Science Society, Vienna, Austria (In-Person/Virtual Hybrid), July 2021.
  6. Luthra, S., Peraza-Santiago, G., Saltzman, D., Crinnion, A. M., & Magnuson, J. S. Lexically-mediated compensation for coarticulation in older adults. Oral presentation at Cognitive Science Society, Vienna, Austria (In-Person/Virtual Hybrid), July 2021. [Conference Proceedings].
  7. Grubb, S., Dalal, P., Daniel, J., Peraza-Santiago, G., Luthra, S., Saltzman, D., Xie, B., Crinnion, A.M., & Magnuson, J. S. Talkers, time, tasks, and similarity in spoken word recognition. Poster presentation at Psychonomic Society, Virtual Conference, November 2020.