Pradeep D
I am Pradeep, a postdoctoral research associate in the Auditory Cognition Group at Newcastle University, UK. I obtained my PhD from Newcastle University, where I studied auditory scene analysis using fMRI in monkeys. My research interests include auditory perception and cognition. I have employed fMRI, MEG and EEG techniques in humans and non-human primates to address questions on timbral analysis, auditory segregation and auditory working memory. I am currently working on understanding auditory scene analysis using EEG and the analysis of auditory objects using MEG. If my projects interest you and you would like to collaborate, please get in touch via email.
PubMed: My papers
ORCID: My research profile
Google Scholar: My citations
My Personal website: www.pradeepd.com
Project: Auditory Scene Analysis
More than half of the world's population over the age of 75 develops age-related hearing loss. These people have difficulty understanding speech amidst background noise, for example when listening to someone speak in a noisy cafe. Colloquially this is known as the ‘cocktail party problem’, which most animals can solve but computers cannot. However, how our brains solve this challenge is not well understood.
Here is a visual summary of this project.
I explored whether monkeys are a good model of the human brain mechanisms underlying auditory segregation. Unlike humans, monkeys allow systematic invasive brain recordings that can characterise how single neurons achieve this feat. However, before one can record from a monkey brain and generalise the results to humans, it is essential to show that the underlying mechanisms are similar in both species.
I employed synthetic auditory stimuli rather than speech because they are free of semantic confounds and lend themselves to animal models. Our behavioural experiments showed that rhesus macaques are able to perform auditory segregation based on the simultaneous onset of spectral elements (temporal coherence). I then conducted functional magnetic resonance imaging (fMRI) in awake behaving macaques to show that the underlying brain network is similar to that seen in humans. This is the first study to show such evidence in any animal model.
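For readers who want a concrete picture of this class of stimulus, below is a minimal sketch of one way a figure-ground stimulus of this kind can be generated; all parameter values and names are illustrative assumptions, not the exact stimuli used in the study. The "figure" is simply a subset of frequency components that repeats across successive chords, which is what creates the temporal coherence cue.

```python
# Minimal sketch (illustrative parameters, not the study's stimulus code) of a
# stochastic figure-ground stimulus: a sequence of short chords of random pure
# tones, plus an optional "figure" whose components repeat coherently across chords.
import numpy as np

fs = 44100                      # sample rate (Hz)
chord_dur = 0.05                # 50 ms chords
n_chords = 40                   # 2 s stimulus
n_background = 10               # random components per chord
n_figure = 4                    # coherent components (0 = ground only)
freq_pool = np.logspace(np.log10(180), np.log10(7000), 120)

rng = np.random.default_rng(0)
t = np.arange(int(fs * chord_dur)) / fs
ramp = np.hanning(200)          # short on/off ramps to avoid clicks
env = np.ones_like(t)
env[:100], env[-100:] = ramp[:100], ramp[100:]

figure_freqs = rng.choice(freq_pool, n_figure, replace=False)
chords = []
for _ in range(n_chords):
    freqs = rng.choice(freq_pool, n_background, replace=False)
    if n_figure:
        freqs = np.concatenate([freqs, figure_freqs])   # temporally coherent set
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs) * env
    chords.append(chord / len(freqs))
stimulus = np.concatenate(chords)
```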
Here is my 3 minute video summarising this work
Here is a poster summarizing this project
Here is a peer reviewed publication:
- Schneider F, Dheerendra P, Balezeau F, Ortiz-Rios M, Kikuchi Y, Petkov CI, Thiele A, Griffiths TD. Auditory figure-ground analysis in rostral belt and parabelt of the macaque monkey. Scientific Reports 2018, 8, 17948.
Now, my experiments aim to address the dynamics underlying auditory figure-ground and speech-in-noise segregation using electroencephalography (EEG) in normal-hearing humans.
EEG responses
People with hearing loss commonly have difficulty understanding speech in noisy situations such as social gatherings, e.g. cocktail parties, cafés, or restaurants. Earlier studies have established that the ability to segregate and group overlapping generic artificial sounds (not words in any language) is related to the ability to understand speech in social chatter.
In this study, I non-invasively recorded electrical activity from the brain (using electroencephalography, or EEG) while subjects performed either a relevant task (sound segregation and grouping) or an irrelevant task (a visual task). I compared the electrical signals evoked from the brain during segregation and grouping of non-linguistic artificial sounds against those evoked when trying to understand speech amidst babble noise.
I established that the electrical brain activity generated during real-world listening is similar to that generated during passive segregation of artificial sounds, without attention. Thus the brain's response to listening in noisy situations can be studied without using language, and even while subjects are not performing a relevant task.
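As a rough illustration of the kind of comparison involved, here is a minimal sketch of epoch averaging and contrasting of evoked responses; the sampling rate, array shapes, and variable names are assumptions made for illustration, not the actual analysis pipeline.

```python
# Hypothetical sketch: average EEG epochs time-locked to two conditions
# (figure-ground segregation vs. speech-in-babble) and contrast the evoked responses.
import numpy as np

fs = 250                                        # assumed EEG sample rate (Hz)

def evoked(eeg, onsets, tmin=-0.2, tmax=0.8):
    """eeg: channels x samples array; onsets: event times in samples."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[:, o - pre:o + post] for o in onsets])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)     # baseline-corrected average

# erp_fg  = evoked(eeg, figure_onsets)          # figure-ground condition (hypothetical data)
# erp_sin = evoked(eeg, speech_onsets)          # speech-in-noise condition
# difference_wave = erp_fg - erp_sin
```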
Here is a visual summary of the project
Here is the peer reviewed publication:
- Xiaoxuan Guo, Pradeep D, Ester Benzaquen, William Sedley, Timothy D Griffiths, "EEG responses to auditory figure ground perception", Hearing Research, vol. 422, pp. 108524, 2022
Project: Time Window
Sounds differ in the duration over which information is conveyed. For instance, phonemes are short in duration while syllables are much longer. So the optimal duration of the time window that the brain must employ depends on the kind of acoustic feature being analysed. But how does the primate brain organise the processing of sounds that require time windows of different durations? And is the anatomical organisation of time-window processing seen in humans consistent with that in other primates?
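To make the idea of an analysis time window concrete, here is a purely illustrative sketch (not the method used in this project) contrasting a short and a long analysis window applied to the same signal: the short window resolves rapid, phoneme-scale events, while the long window integrates over slower, syllable-scale structure.

```python
# Illustrative only: short vs. long analysis windows over the same signal.
import numpy as np
from scipy.signal import stft

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# A 300 Hz tone gated on and off at a slow (4 Hz, roughly syllable-like) rate
signal = np.sin(2 * np.pi * 300 * t) * (np.sin(2 * np.pi * 4 * t) > 0)

# Short (~20 ms) window: fine temporal resolution, resolves the rapid on/off edges
f_s, t_s, S_short = stft(signal, fs=fs, nperseg=int(0.020 * fs))
# Long (~200 ms) window: integrates over the slower envelope structure
f_l, t_l, S_long = stft(signal, fs=fs, nperseg=int(0.200 * fs))
```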
Here is a visual summary of this project.
I explored whether monkeys are a good model of the human mechanisms underlying the processing of time windows. I conducted fMRI in awake behaving macaques using synthetic stimuli to show that their anatomical organisation of time-window processing is similar to that of humans. However, monkeys exhibit reduced sensitivity to longer time windows compared with humans.
This difference is surprising given the phylogenetic proximity of the two species. The reduced sensitivity to long time windows in monkeys might be due to the specialisation of the human brain for processing speech, which requires greater sensitivity to longer time windows. My study highlights brain mechanisms that might be unique to humans, possibly an outcome of divergent evolution alongside the development of speech.
Here is a poster summarizing the project
Here is the peer reviewed paper
- Pradeep Dheerendra, Simon Baumann, Olivier Joly, Fabien Balezeau, Christopher I Petkov, Alexander Thiele, Timothy D Griffiths, "The representation of time windows in primate auditory cortex", Cerebral Cortex, 2021
Project: Auditory object boundary detection
To perceive the world around us through hearing, our brain must recreate it in our mind. This requires that the brain has a way of identifying any new sound as it appears, separating it from the other sounds that are present, and finally representing it as a distinct object in our mind. An important question in understanding how the brain recreates an 'auditory scene' is therefore: how does the brain detect the appearance of new sounds?
Since sounds may contain multiple components that vary in time and frequency, this process requires that our brains detect changes in the statistics of the time-frequency space. However, how the brain identifies these discontinuities at object boundaries is not well understood.
Here is a visual summary of this project which aims to answer this question.
To understand how the emergence of a new sound in an ongoing acoustic scene is detected, I used magnetoencephalography (MEG), a technique that non-invasively records the magnetic activity of the brain. I employed artificially created sounds in which I intentionally formed boundaries by changing the underlying regularity in time-frequency space. I recorded MEG in volunteers who were asked to report any change in the sound structure they could detect as they listened to these artificial sounds. All subjects were able to detect and report these changes in the sound structure very well.
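As a rough illustration of what "changing the regularity in time-frequency space" can mean, here is a minimal sketch of a sound whose tone-pip frequencies are drawn at random before a boundary and then cycle through a fixed repeating pattern after it; the parameters are illustrative assumptions, not the exact stimuli used.

```python
# Minimal, illustrative sketch of a sound with a regularity boundary:
# random tone pips before the boundary, a repeating frequency cycle after it.
# (On/off ramps between pips are omitted for brevity.)
import numpy as np

fs = 44100
pip_dur = 0.05                                   # 50 ms tone pips
n_pips_random, n_pips_regular = 30, 30
freq_pool = np.logspace(np.log10(200), np.log10(2000), 20)

rng = np.random.default_rng(1)
t = np.arange(int(fs * pip_dur)) / fs
regular_cycle = rng.choice(freq_pool, 5, replace=False)         # repeating pattern

freqs = list(rng.choice(freq_pool, n_pips_random))              # irregular segment
freqs += [regular_cycle[i % 5] for i in range(n_pips_regular)]  # regular segment
stimulus = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
```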
I observed a very low-frequency drift (orange trace in the visual summary) in the magnetic response of the brain recorded from the scalp. I estimated the location of the brain activity that could evoke such a response at the scalp. This activity was localised to a particular region, the primary auditory cortex, which is known to be the first cortical region to receive auditory input from the ears. A previous study using fMRI (a technique that non-invasively measures the metabolic demand created by brain activity) had shown that this same region of the brain is involved in this same task!
There is an emerging school of thought that our brains are not just passively reacting to the world around us but are constantly predicting it. This "predictive-coding" idea suggests that our brains accomplish this by creating a model of the world and predicting how it will change. The brain then compares this prediction with the actual sensations received, updates its model of the world, and makes further predictions. This predict-compare-update cycle is a continuous, ongoing process, and within it temporally regular sensory inputs carry more weight than temporally irregular ones. In this framework, the 'precision' signal, a long-term second-order statistic, represents the level of regularity of the sensory inputs: higher precision implies that the corresponding sound source is very regular, so it is assigned a higher weight in the prediction process, and vice versa. In our case, the drift signal (shown in orange) encodes the level of regularity of the sound structure in time-frequency space.
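A toy numerical example of precision weighting (a conceptual sketch only, not a model fitted in this study): the prediction is corrected by the prediction error, weighted by the relative precision (inverse variance) of the sensory input versus the prior prediction.

```python
# Toy illustration of precision-weighted updating in a predictive-coding scheme.
def update(prediction, observation, precision_obs, precision_pred):
    error = observation - prediction                      # prediction error
    gain = precision_obs / (precision_obs + precision_pred)
    return prediction + gain * error                      # precision-weighted update

# A very regular source (high observation precision) pulls the prediction strongly:
print(update(prediction=440.0, observation=450.0, precision_obs=10.0, precision_pred=1.0))
# An irregular source (low observation precision) is largely discounted:
print(update(prediction=440.0, observation=450.0, precision_obs=0.1, precision_pred=1.0))
```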
So I conclude that the primary auditory cortex in the human brain detects the appearance of new sounds as they emerge in the acoustic environment by continually monitoring the regularity of the time-frequency space.
Here is a poster summarising this project.
Relevant publication:
- Pradeep Dheerendra*, Nicolas Barascud*, Sukhbinder Kumar, Tobias Overath, Tim Griffiths, "Dynamics underlying auditory object boundary detection in primary auditory cortex", European Journal of Neuroscience, pp. 1-15, 2021
Link: https://onlinelibrary.wiley.com/doi/10.1111/ejn.15471
Project: Auditory Working Memory
Auditory working memory (AWM) is the process of keeping representations of auditory objects in mind for a short duration when the sounds are no longer present in the environment. This is different from phonological WM because these sounds cannot be assigned a semantic label.
A recent fMRI study of AWM in humans showed a network of activation in auditory cortex, hippocampus, and inferior frontal gyrus. It proposed a system for AWM in which sound-specific representations in auditory cortex are kept active by projections from the hippocampus and inferior frontal cortex.
My MEG project aims to understand the dynamics underlying this proposed system. What mechanisms underlie neural activity during retention? What is the role of the hippocampus in AWM?
In my first experiment, I contrasted working memory for the pitch of a tone against working memory for the spacing of a visual sinusoidal grating. Source localisation of the induced response during the first second of maintenance of the pitch of a tone, against the pre-stimulus baseline, showed medial prefrontal theta enhancement, cerebellar beta enhancement, auditory cortex alpha suppression, and left supramarginal gyrus theta and beta suppression. Further, I found theta phase coupling between the medial prefrontal cortex and both the left posterior hippocampus and the right inferior frontal gyrus, in addition to beta phase coupling between the cerebellum and the left inferior frontal gyrus (Broca's area), which was correlated with subject performance. So I speculate that representations of the auditory stimulus are kept active in the auditory cortex through covert rehearsal by Broca's area in tandem with the cerebellum, and are consolidated by the medial prefrontal-hippocampal network.
In my second experiment, in which the pitch of one of two tones had to be retained for 12 s, the neuro-magnetic response showed excitation throughout the maintenance phase when compared with the silent pre-stimulus baseline, which is consistent with the existing literature. However, when compared with a control condition that required no memorisation, the response during maintenance was not sustained over the entire duration but instead decayed to the control level after an initial excitation. Source localisation of the evoked response during maintenance, against the pre-stimulus baseline, showed activity in the auditory cortex similar to that seen during the encoding phase. Similarly, source localisation of the induced response during the first second of maintenance against the pre-stimulus baseline showed suppressed alpha oscillations in the auditory cortex, enhanced theta oscillations in the medial prefrontal cortex, enhanced beta in the cerebellum, and suppressed beta in the left supramarginal gyrus. Further, I observed enhanced theta phase coupling between the medial prefrontal cortex and the left anterior STG in the temporal pole. So I speculate that, for the retention of a single tone in memory, representations of the acoustic stimulus are maintained by activation of the auditory cortex at the start of the retention phase, but this activation does not persist throughout the delay. Further, this activity in the auditory cortex is consolidated by the medial prefrontal cortex and by covert rehearsal involving the cerebellum.
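The phase coupling referred to in both experiments is a standard phase-synchronisation measure; below is a minimal sketch of one common form, the phase-locking value between two band-limited source time courses. The function, band edges, and variable names are illustrative assumptions, not the exact pipeline used.

```python
# Minimal sketch of a phase-locking value (PLV) between two signals in a band
# (e.g. theta, 4-8 Hz). Names and parameters are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(4, 8)):
    """PLV between 1-D signals x and y within the given frequency band."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# plv_theta = phase_locking_value(mpfc_timecourse, hippocampus_timecourse, fs=600)
```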
Here is a video summary of this work
Here is a poster summary of this work
Project: Auditory Spatial Perception
In this project, I aimed to validate the percept induced by virtual motion of auditory stimuli, and I characterised the stimuli psychophysically in humans. I built an apparatus capable of delivering static or moving sound stimuli in the free field in a soundproof chamber. It used an electric motor with adjustable speed and an attached rotor arm to which a small speaker was fixed, achieving rotatory sound-source movement in the azimuthal plane through the subject’s ear canals. I replicated, in 3 participants, the intra-auricular recording approach using an amplitude-modulated noise stimulus: I recorded from the ear canal while static sounds were delivered from azimuthal positions at 10° intervals from zero (midline, front), and I also recorded from the ear canal while the speaker moved around the head at an angular speed of 100°/s or 50°/s. Static recordings from adjacent positions were then concatenated to create stimuli virtually moving at speeds of 100°/s or 50°/s.
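The concatenation step follows directly from the geometry: at 100°/s a real source spends 100 ms traversing each 10° step, and at 50°/s it spends 200 ms, so each static recording is trimmed to that duration before joining. Here is a minimal sketch of that logic; the variable names are illustrative, and in practice the segment boundaries would also need ramping or cross-fading to avoid clicks.

```python
# Illustrative sketch of building a virtually moving stimulus from static
# ear-canal recordings made at fixed azimuthal steps.
import numpy as np

def virtual_motion(static_recordings, fs, speed_deg_per_s, step_deg=10):
    """static_recordings: list of 1-D arrays, one per azimuthal position (in order)."""
    seg_dur = step_deg / speed_deg_per_s        # e.g. 10 deg / 100 deg/s = 100 ms
    n = int(round(seg_dur * fs))
    return np.concatenate([rec[:n] for rec in static_recordings])

# moving_100 = virtual_motion(recordings, fs=48000, speed_deg_per_s=100)  # 100 ms per step
# moving_50  = virtual_motion(recordings, fs=48000, speed_deg_per_s=50)   # 200 ms per step
```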
Here is the visual summary of the project:
I tested each participant’s perception of these stimuli using criterion-free psychophysics: an AXB paradigm, in which X was always a moving stimulus and A and B were one moving and one concatenated stimulus. Participants were asked to identify which of stimuli A or B was different from X.
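For illustration, here is a minimal sketch of how such an AXB trial can be constructed and scored; the function and variable names are hypothetical, not the actual experiment code.

```python
# Hypothetical AXB trial logic: X is always true motion; A and B are one true-motion
# and one concatenated stimulus in counterbalanced order; guessing gives 50% correct.
import random

def make_axb_trial(motion_stimuli, concatenated_stimuli):
    x = random.choice(motion_stimuli)
    a_is_odd = random.random() < 0.5            # counterbalance the odd-one-out position
    a = random.choice(concatenated_stimuli) if a_is_odd else random.choice(motion_stimuli)
    b = random.choice(motion_stimuli) if a_is_odd else random.choice(concatenated_stimuli)
    correct = "A" if a_is_odd else "B"
    return (a, x, b), correct
```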
The results confirmed that none of the three participants was able to distinguish concatenated from motion stimuli at 100°/s (performance was at chance level in each participant). However, at 50°/s, two of the three participants were able to discriminate concatenated stimuli from motion stimuli. I concluded that the percept of virtual motion is speed dependent: at high speeds (100°/s or greater) virtual motion is indistinguishable from true motion, while at lower speeds (50°/s or less) it can be distinguished.
Relevant publication:
- Poirier C, Baumann S, Dheerendra P, Joly O, Hunter D, Balezeau F, Sun L, Rees A, Petkov CI, Thiele A, Griffiths TD. Auditory motion-specific mechanisms in the primate brain. PLoS Biology 2017, 15(5), 1-24.