(* first-authored)

*E. Sohoglu and M. Chait (2016) Detecting and representing predictable structure during auditory scene analysis. eLife. 5: e19113. [PDF]

*E. Sohoglu and M. Davis (2016) Perceptual learning of degraded speech by minimizing prediction error. PNAS. 113(12): E1747-56. [PDF]

*E. Sohoglu and M. Chait (2016) Neural dynamics of change detection in crowded acoustic scenes. NeuroImage. 126: 164-172. [PDF]

*E. Sohoglu, J. Peelle, R. Carlyon, M. Davis (2014) Top-down influences of written text on perceived clarity of degraded speech. Journal of Experimental Psychology: Human Perception and Performance. 40(1): 186-99. [PDF]

S. Amitay, J. Guiraud, E. Sohoglu, O. Zobay, B. Edmonds, Y.-X. Zhang, D. Moore (2013) Human decision making based on variations in internal noise: An EEG study. PLOS ONE. 8(7): e68928. [PDF]

*E. Sohoglu, J. Peelle, R. Carlyon, M. Davis (2012) Predictive top-down integration of prior knowledge during speech perception. Journal of Neuroscience. 32(25): 8443-53. [PDF]

K. Molloy, D. Moore, E. Sohoglu, S. Amitay (2012) Less is more: Latent learning is maximized by shorter training sessions in auditory perceptual learning. PLOS ONE. 7(5): e36929. [PDF]

S. Amitay, L. Halliday, J. Taylor, E. Sohoglu, D. Moore (2010) Motivation and intelligence drive auditory perceptual learning. PLOS ONE. 5(3): e9816. [PDF]