To learn more about my research experiences, check out my CV (updated October 2020).
To understand speech, we have to figure out how the acoustic signal relates to the speech sounds we know (a task made difficult by the considerable variability in how individuals talk), find the word breaks in the speech stream (which aren’t always easy to identify), and retrieve the correct words from our mental dictionaries (and not, for instance, similar-sounding ones). It should be a daunting task, and yet listeners seem to perform it optimally.
As a graduate researcher, I am particularly interested in the mental computations that underlie spoken word recognition as well as how these computations are achieved in the brain. How are we able to achieve such good perception, especially given the limited cognitive resources at our disposal? How do listeners leverage their knowledge of who is talking, as well as their knowledge of what that person is likely to say, in order to comprehend the speech signal?
Selected Recent Work:
- Luthra, S., You, H., Rueckl, J. G., & Magnuson, J. S. (in press). Friends in low-entropy places: Orthographic neighbor effects on visual word identification differ across letter positions. Cognitive Science.
- Luthra, S., Magnuson, J. S., & Myers, E. B. (in press). Boosting lexical support does not enhance lexically guided perceptual learning. Journal of Experimental Psychology: Learning, Memory, and Cognition.
- Luthra, S., Correia, J. M., Kleinschmidt, D. F., Mesite, L., & Myers, E. B. (2020). Lexical information guides retuning of neural patterns in perceptual learning for speech. Journal of Cognitive Neuroscience, 32(10), 2001-2012.
- Magnuson, J. S., You, H., Luthra, S., Li, M., Nam, H., Escabí, M., Brown, K., Allopenna, P. D., Theodore, R. M., Monto, N., & Rueckl, J. G. (2020). EARSHOT: A minimal neural network model of incremental human speech recognition. Cognitive Science, 44(4).
- Luthra, S., Fuhrmeister, P., Molfese, P. J., Guediche, S., Blumstein, S. E., & Myers, E. B. (2019). Brain-behavior relationships in incidental learning of non-native phonetic categories. Brain & Language, 198.
- Luthra, S., Guediche, S., Blumstein, S. E., & Myers, E. B. (2019). Neural substrates of subphonemic variation and lexical competition in spoken word recognition. Language, Cognition and Neuroscience, 34(2), 151-169.
- Luthra, S., Fox, N. P., & Blumstein, S. E. (2018). Speaker information affects false recognition of unstudied lexical-semantic associates. Attention, Perception & Psychophysics, 80(4), 894-912.
- Magnuson, J. S., Mirman, D., Luthra, S., Strauss, T., & Harris, H. D. (2018). Interaction in spoken word recognition models: Feedback helps. Frontiers in Psychology, 9, 1-18.
- Theodore, R. M., Blumstein, S. E., & Luthra, S. (2015). Attention modulates specificity effects in spoken word recognition: Challenges to the time-course hypothesis. Attention, Perception & Psychophysics, 77(5), 1674-1684.
- Luthra, S., Steiner, R. J., Magnuson, J. S., & Myers, E. B. The influence of sentence context on lexically guided perceptual learning. Psychonomic Society. Montréal, QC, November 2019.
- Luthra, S., Correia, J. M., Kleinschmidt, D. F., Mesite, L., & Myers, E. B. Lexical information guides retuning of neural patterns in perceptual learning of speech. Society for Neurobiology of Language. Helsinki, Finland, August 2019.
- Luthra, S., You, H., & Magnuson, J. S. Orthographic neighbor effects on visual word identification differ across letter positions. Psychonomic Society. New Orleans, LA, November 2018.
- Li, M. Y. C., You, H., Luthra, S., Steiner, R. J., & Magnuson, J. S. Predictive processing in computational models of spoken word recognition. Psychonomic Society. New Orleans, LA, November 2018.