Recognizing emotional signals and responding appropriately to them is vital for survival. It has therefore been claimed that the processing of emotional information has priority in the human brain. Experiments showing that emotional information is processed quickly, and even without awareness, lend some support to this idea.
Both prosody and music can convey emotional information through the structuring of sound.
How does the human brain extract emotional meaning from acoustic signals? And does it extract emotional meaning from music and speech in a similar way, or are there separate circuits?
A recent study claims to have found evidence that emotional information in music is indeed extracted very quickly and that this emotional information can interact with the semantic system.
In this talk, I would like to discuss a collaborative project that aims to replicate and extend this study. Two experiments are proposed to address the following questions:
- Is the fast extraction of emotional meaning from music a reliable effect?
- Can fast extraction of emotional meaning also be found for emotional prosody?
- Does emotional information extracted from music and prosody indeed interact with the semantic system?
In addition to answering these questions, the proposed experiments may shed some light on the automaticity of emotional processing in general, and on the extent to which the two acoustic media for emotional expression share a neural representation.