Based on recent neuroimaging findings in music neuroscience, we have proposed that the human mirror neuron system may play a central role in the perception of emotion in music (Molnar-Szakacs & Overy, 2006). A range of neuroscientific evidence indicates that musical sound is perceived not only in terms of the auditory signal, but also in terms of the temporally synchronous, hierarchically organised sequences of expressive motor acts behind the signal. Music may evoke, in the mind of the listener, a bodily movement or motor act, allowing the listener to experience the music in an embodied way. It is these embodied motor representations that recruit the human mirror neuron system. Since posture, gesture and facial expression are important implicit cues in social communication, one can readily imagine that music communicates emotion through similar mechanisms.
Emotion, especially as communicated by the face, the body and the voice, is an active motor process. Deficits in the attribution of emotion from static facial expressions have been reported in ASD and have recently been acknowledged as a potential cause of social communication impairment in this population. Although there have been numerous studies of face processing in ASD, these have predominantly examined static faces and have focused on facial identity and gender. In real social situations, however, social information is conveyed by the unfolding of facial expressions over time, and emotion is actually attributed from dynamic facial expressions. The use of static facial expression stimuli in fMRI research has accordingly produced inconclusive findings. The task-dependent nature of emotion attribution deficits requires consideration of the different attentional and perceptual processing demands of the tasks, and supports the need for more naturalistic facial expression processing paradigms that require continuous attention and necessitate the configural processing of face percepts. The Dynamic Facial Expression Paradigm (DFEP) was developed to address these issues and to provide a more naturalistic paradigm for quantifying the attribution of emotion from dynamic facial expressions in ASD.
This study will be the first to investigate the fundamental neural components of emotion understanding through music and face perception in both typically developing children and children with ASD.
Katie Overy, PhD, Co-Director, Institute for Music in Human and Social Development (IMHSD), University of Edinburgh
This collaborative project takes place under the auspices of The Help Group – UCLA Autism Research Alliance. Financial support is provided by a grant from The GRAMMY Foundation®.