Though listeners typically recognize spoken words without much difficulty, the underlying mental processes are remarkably complex. To extract meaning from sound waves, we have to figure out how acoustics relate to the speech sounds we know (a task made more difficult by the considerable variability in how individuals talk), find the word breaks in the speech stream (which aren't always easy to identify), and retrieve the correct words from our mental dictionaries (and not, for instance, a word that sounds similar or has a similar definition). It should be a daunting task, and yet listeners seem to perform it with remarkable ease, even when encountering a relatively unfamiliar talker.
As a graduate researcher, I am particularly interested in the mental computations that underlie spoken word recognition, as well as how these computations are achieved in the brain. How do we achieve such robust perception, especially given the limited cognitive resources at our disposal? How might we rely on other sources of information, like sentence context, to make the word recognition process easier?
Please watch this space to learn more about how my interests develop throughout graduate school. To learn more about my academic experiences, check out my CV (current as of October 2018). Some of my most recent work is highlighted below.
- Luthra, S., Guediche, S., Blumstein, S. E., & Myers, E. B. Neural substrates of subphonemic variation and lexical competition in spoken word recognition. Language, Cognition and Neuroscience.
- Luthra, S., Fox, N. P., & Blumstein, S. E. (2018). Speaker information affects false recognition of unstudied lexical-semantic associates. Attention, Perception, & Psychophysics, 80(4), 894-912.
- Magnuson, J. S., Mirman, D., Luthra, S., Strauss, T., & Harris, H. D. (2018). Interaction in spoken word recognition models: Feedback helps. Frontiers in Psychology, 9, 1-18.
- Theodore, R. M., Blumstein, S. E., & Luthra, S. (2015). Attention modulates specificity effects in spoken word recognition: Challenges to the time-course hypothesis. Attention, Perception, & Psychophysics, 77(5), 1674-1684.
- Luthra, S., You, H., & Magnuson, J. S. Orthographic neighbor effects on visual word identification differ across letter positions. Psychonomic Society, New Orleans, LA, November 2018.
- Li, M. Y. C., You, H., Luthra, S., Steiner, R., & Magnuson, J. S. Predictive processing in computational models of spoken word recognition. Psychonomic Society, New Orleans, LA, November 2018.
- Luthra, S., & Magnuson, J. S. Friends in low entropy places: Letter position influences orthographic neighbor effects in visual word identification. Cognitive Science Society, Madison, WI, July 2018.
- Luthra, S., Fuhrmeister, P., Molfese, P. J., Guediche, S., Blumstein, S. E., & Myers, E. B. Brain-behavior relationships in implicit learning of non-native phonetic categories. Society for Neurobiology of Language, Baltimore, MD, November 2017.
- Luthra, S., & Magnuson, J. S. Cumulative response probabilities: Estimating time course of lexical activation from single-point response times. Cognitive Science Society, London, UK, July 2017.
- Luthra, S., Fuhrmeister, P., Guediche, S., Blumstein, S. E., & Myers, E. B. Neural correlates of task-irrelevant perceptual learning of non-native speech sounds. Psychonomic Society, Boston, MA, November 2016.