Holding a guitar case can increase your success with women… if you’re attractive (and live in France)
I came across a fun little study published in Psychology of Music. The experimenters designed a simple experiment to test whether music plays a role in sexual selection. To be honest, I am not sure that is what they actually tested, but I'll let you decide. What they seem to have tested is whether holding a guitar case increases women's receptivity to a courtship solicitation, at least if you are attractive, male and live in France. The participants were 300 young women, estimated to be between 18 and 22 years old, who were walking alone in the shopping streets of a medium-sized French city. The experiment was conducted on a sunny Saturday afternoon at the beginning of summer. A 20-year-old man, previously rated as highly physically attractive, acted as the confederate.
The participants were selected following a random assignment in which the confederate was instructed to approach the first young woman in the age group (18–22 years) who appeared alone on the pedestrian walkway. He was instructed not to select a participant according to her physical attractiveness, the way she was dressed, her height, etc. He was instructed to wait until a young woman between approximately 18 and 22 years of age passed by him in the street, and then to approach her… The confederate was instructed to approach the young women with a smile and to say, “Hello. My name’s Antoine. I just want to say that I think you’re really pretty. I have to go to work this afternoon, and I was wondering if you would give me your phone number. I’ll phone you later and we can have a drink together someplace.” According to the experimental conditions, the confederate held in his hands a black acoustic guitar case (guitar case condition), a large black sports bag (sports bag control condition), or nothing (no bag control condition). After testing 10 women in one condition, the confederate was instructed to move to another area and to select a new experimental condition according to a random distribution.
The confederate talked to 300 women in one afternoon. Impressive. The results? In the guitar case condition, 31% (!) of the women gave their phone number to the confederate, compared to 9% in the sports bag condition and 14% in the no bag control condition. It seems that holding a guitar case is an effective strategy if you're attractive, while holding a sports bag seems to have the opposite effect. There are many problems with this study, which the authors recognise. First of all, they used only one confederate, whose physical attractiveness was high: he had been rated the best-looking man on a list of candidates in a previous study. As a result, it is difficult to generalise the effect to men of varying attractiveness levels. It'd be interesting to see whether holding a guitar case would increase women's receptivity to a courtship solicitation by an average-looking male. Also, only one instrument was manipulated in this study. Could the effect be limited to the guitar?
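As a quick sanity check, the reported percentages can be run through a chi-square test. The paper does not publish its analysis script, so this is just my own sketch, assuming 100 approaches per condition (300 women split evenly across three conditions) and using scipy:

```python
# Chi-square test on the guitar-case results.
# Assumption: 100 women per condition, so 31%/9%/14% map to raw counts.
from scipy.stats import chi2_contingency

# rows: guitar case, sports bag, no bag; columns: gave number, declined
counts = [[31, 69],
          [9, 91],
          [14, 86]]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```

Under that assumption the difference between conditions is comfortably significant, which is consistent with the authors' conclusions.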
Why do some people like popular music while others prefer less popular genres? A new study published in the journal Psychology of Music proposes a possible explanation: handedness. After examining the musical preferences and handedness scores of 92 undergraduate students, S. D. Christman found that strength of handedness is an important factor in individual differences in musical preferences.
More specifically, strong right-handers reported significantly lower liking of unpopular music genres than mixed-handers, and marginally higher liking of popular genres. These differences do not appear to reflect differences in musical training or experience. According to the author of the study, handedness is associated with differences in cognitive flexibility: previous studies suggest that strong right-handedness is associated with decreased interaction between the left and right cerebral hemispheres, which in turn is associated with decreased cognitive flexibility across various domains. The author concludes:
A number of studies report differences between conservatives and liberals in musical preferences (e.g., Glasgow & Cartier, 1985; North & Hargreaves, 2007). For example, Glasgow and Cartier (1985) reported that conservatives prefer music that is safe and familiar, presumably reflecting preference for popular, not unpopular, genres. Given evidence that strong right-handedness is associated with increased conservative attitudes (Christman, 2008), this suggests a possible three-way connection between strong right-handedness, conservative views, and a lack of open-earedness. Accordingly, future research on individual differences in musical preferences would be well advised to include strength of handedness as a variable.
Finally, in case you’re curious, here are some of the genres included in each category: (a) popular: modern rock, classic rock, heavy metal, alternative rock, modern pop, 80s pop, R&B, Rap, Hip-hop, country, (b) unpopular: soul, funk, jazz, blues, folk, avant-garde, world, electronica, reggae, ambient, house. The categorisation of popular and unpopular genres was based on record sales (conventional music was defined as popular genres with high numbers of sales, while unconventional music was defined as less popular genres with lower numbers of sales). Even though the proposed idea is interesting, handedness is probably only one of the factors that might explain individual differences in musical preference. For a different approach see a recently published study by Chamorro-Premuzic et al. (2011) who found that individual differences in music consumption are predicted by uses of music and age rather than emotional intelligence, neuroticism, extraversion or openness.
Christman, S. D. (2011). Handedness and ‘open-earedness’: Strong right-handers are less likely to prefer less popular musical genres. Psychology of Music. doi:10.1177/0305735611415751
Chamorro-Premuzic, T., Swami, V., & Cermakova, B. (2011). Individual differences in music consumption are predicted by uses of music and age rather than emotional intelligence, neuroticism, extraversion or openness. Psychology of Music. doi:10.1177/0305735610381591
Music is a powerful tool for expressing and inducing emotions. Lima and colleagues investigated whether and how emotion recognition in music changes as a function of ageing. Their study revealed that older participants showed decreased responses to music expressing negative emotions, while their perception of happy music remained stable.
Emotion plays an important role in music. Even infants have been found to be capable of identifying emotions in musical excerpts (Nawrot, 2003). However, recognition of emotion in music has received little attention so far. A new study by Lima and Castro published in Cognition and Emotion examined the effects of ageing on the recognition of emotions in music. Previous studies looking at emotion recognition in other modalities have revealed that increasing age is associated with a decline in the recognition of some emotions but not others (for more information see the meta-analysis by Ruffman et al. (2008)). Laukka and Juslin (2007) examined the effects of ageing on emotion recognition in music, comparing young adults (around 24 years old) and older adults (older than 65). Their results identified that older adults had more difficulty recognising fear and sadness in both music and speech prosody, whereas no differences were observed for anger, happiness and neutrality.
The sample used by Lima et al. consisted of 114 healthy adults (67 female), aged between 17 and 84 years, divided into three groups of 38 participants each: younger (mean age = 21.8 years), middle-aged (mean age = 44.5 years) and older adults (mean age = 67.2 years). Each group listened to 56 short musical excerpts expressing happiness, sadness, fear/threat and peacefulness; each category consisted of 14 stimuli.
The results revealed significant age-related changes for specific emotions. More specifically, the authors identified a progressive decline in responsiveness to sad and scary music, while no difference was found for happy music. Differences between age groups were also observed in the pattern of misclassifications for sad and peaceful music: younger participants perceived more sadness in peaceful music, whereas older participants perceived more peacefulness. This could be due to structural features shared by peaceful and sad songs, which are both characterised by slow tempo; future studies could investigate this further. In addition, Lima et al. took into account the participants' years of musical training. This analysis revealed a positive association between musical training and the categorisation of musical emotions.
One possible explanation for the main findings is that the decline in the recognition of particular emotions reflects age-related neuropsychological decline in brain regions (such as the amygdala) involved in emotion processing. Previous studies have shown that distinct brain regions are involved in the perception of different emotions (Mitterschiffthaler et al., 2007). Another possible explanation is the age-related positivity bias (Mather & Carstensen, 2005; Carstensen & Mikels, 2005), which suggests that as people get older, they experience fewer negative emotions.
Future studies could attempt to identify particular brain regions involved in emotion recognition at different ages. Furthermore, since the age-related positivity bias might not be universal (older Chinese participants looked away from happy facial expressions and not from negative ones, see Fung et al., 2008), it’d be very interesting to investigate the effects of ageing on emotion recognition in music in participants from different cultures.
Lima, C. F., & Castro, S. L. (2011). Emotion recognition in music changes across the adult life span. Cognition & Emotion, 25(4), 585-598. PMID: 21547762
Carstensen, L., & Mikels, J. (2005). At the intersection of emotion and cognition: Aging and the positivity effect. Current Directions in Psychological Science, 14(3), 117-121. doi:10.1111/j.0963-7214.2005.00348.x
Ruffman, T., Henry, J. D., Livingstone, V., & Phillips, L. H. (2008). A meta-analytic review of emotion recognition and aging: Implications for neuropsychological models of aging. Neuroscience and Biobehavioral Reviews, 32(4), 863-881. PMID: 18276008
Laukka, P., & Juslin, P. (2007). Similar patterns of age-related differences in emotion recognition from speech and music. Motivation and Emotion, 31(3), 182-191. doi:10.1007/s11031-007-9063-z
Mather, M., & Carstensen, L. L. (2005). Aging and motivated cognition: The positivity effect in attention and memory. Trends in Cognitive Sciences, 9(10), 496-502. PMID: 16154382
Mitterschiffthaler, M., Fu, C., Dalton, J., Andrew, C., & Williams, S. (2007). A functional MRI study of happy and sad affective states induced by classical music. Human Brain Mapping, 28(11), 1150-1162. doi:10.1002/hbm.20337
Nawrot, E. (2003). The perception of emotional expression in music: Evidence from infants, children and adults. Psychology of Music, 31(1), 75-92. doi:10.1177/0305735603031001325
Fung, H. H., Lu, A. Y., Goren, D., Isaacowitz, D. M., Wadlinger, H. A., & Wilson, H. R. (2008). Age-related positivity enhancement is not universal: Older Chinese look away from positive stimuli. Psychology and Aging, 23(2), 440-446. PMID: 18573017
Charles Limb is a surgeon and musician who is investigating the neural correlates of musical creativity. You might remember his very cool fMRI study of jazz improvisation. You can read it here. He talks about this and other projects he’s working on in his recent TED talk. We need more studies like these!
Many studies have shown that media with violent or aggressive content (such as violent video games) may increase aggressive behaviour and thoughts (Bushman & Huesmann, 2006). Music and lyrics can also influence people's behaviour: prosocial songs were found to be associated with a significant increase in tipping (Jacob, Guéguen & Boulbry, 2010), and male customers exposed to romantic songs spent more money than when no music or non-romantic pop music was played (Jacob, Guéguen, Boulbry & Selmi, 2009).
Guéguen, Jacob and Lamy (2010) investigated whether exposure to romantic songs could affect behaviour; in particular, whether background romantic music would influence the dating behaviour of young single female participants. The stimuli were a romantic song, ‘Je l’aime à mourir’ by the French songwriter Francis Cabrel (selected after a pilot study), and a neutral song, ‘L’heure du thé’ by Vincent Delerm.
183 single female participants were exposed to romantic or neutral lyrics while waiting for the experiment to start. Five minutes later, each participant interacted with a young male confederate in a marketing survey. During a break, the confederate asked the participant for her phone number.
In the romantic song lyrics condition, 52.2% (23/44) of participants complied with the confederate’s request, compared to 27.9% (12/43) in the neutral song lyrics condition. The difference was significant (χ2(1, N = 87) = 5.37, p = .02, r = .24).
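The reported statistic can be reproduced from the raw counts (23/44 vs. 12/43). The scipy call below is my own sketch, not the authors' analysis; the reported value of 5.37 matches the chi-square statistic without a continuity correction:

```python
# Reproducing the reported chi-square from the raw compliance counts.
from scipy.stats import chi2_contingency

counts = [[23, 44 - 23],   # romantic lyrics: complied / declined
          [12, 43 - 12]]   # neutral lyrics: complied / declined
chi2, p, dof, _ = chi2_contingency(counts, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```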
According to the authors these results support the General Learning Model (GLM), which was initially proposed by Buckley and Anderson (2006) to explain the influence of aggressive media (i.e. videogames) on behaviour, but was updated recently by Greitemeyer (2009) to include media exposure in general. The GLM proposes that exposure to media affects the internal states of individuals (aggressive media increase aggressive behaviour/thoughts, prosocial media promote prosocial behaviour/thoughts).
Guéguen, Jacob and Lamy (2010) suggest that the results of this particular experiment could be explained by music’s ability to induce positive affect (Lenton & Martin, 1991), and by the finding that positive affect is related to receptivity to a courtship request (Guéguen, 2008). Thus, it is possible that the romantic song lyrics induced positive affect which, in turn, made the participants more receptive to a request for a date. It is also possible that the romantic lyrics acted as a prime that led to the display of behaviour associated with that prime (Bargh, Chen & Burrows, 1996).
Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effect of trait construct and stereotype activation on action. Journal of Personality and Social Psychology, 71(2), 230–244.
Buckley, K. E., & Anderson, C. A. (2006). A theoretical model of the effects and consequences of playing video games. In P. Vorderer and J. Bryant (Eds.), Playing video games: Motives, responses, and consequences (pp. 363–378). Mahwah NJ: Lawrence Erlbaum.
Bushman, B. J., & Huesmann, L. R. (2006). Short-term and long-term effects of violent media on aggression in children and adults. Archives of Pediatrics & Adolescent Medicine, 160, 348-352.
Greitemeyer, T. (2009). Effects of songs with prosocial lyrics on prosocial thoughts, affect, and behavior. Journal of Experimental Social Psychology, 45, 186–190.
Jacob, C., Guéguen, N., & Boulbry, G. (2010). Effects of songs with prosocial lyrics on tipping behavior in a restaurant. International Journal of Hospitality Management.
Jacob, C., Guéguen, N., Boulbry, G., & Selmi, S. (2009). ‘Love is in the air’: Congruency between background music and goods in a flower shop. International Review of Retail, Distribution and Consumer Research, 19, 75–79.
Lenton, S. R. & Martin, P. R. (1991). The contribution of music vs. instructions in the musical mood induction procedure. Behavioral Research Therapy, 29, 623–625.
Guéguen, N., Jacob, C., & Lamy, L. (2010). ‘Love is in the air’: Effects of songs with romantic lyrics on compliance with a courtship request. Psychology of Music, 38(3), 303-307. doi:10.1177/0305735609360428
Infants interact with their mothers through music from the first months of their lives (Fridman, 1980). The main feature of these interactions is a well-sustained rhythm (Schogler, 2000). According to a recent study by Zentner and Eerola (2010) published in PNAS, infants respond more to the tempo and rhythm of music than to other sounds such as speech. Here’s the abstract of the paper:
Humans have a unique ability to coordinate their motor movements to an external auditory stimulus, as in music-induced foot tapping or dancing. This behaviour currently engages the attention of scholars across a number of disciplines. However, very little is known about its earliest manifestations. The aim of the current research was to examine whether preverbal infants engage in rhythmic behaviour to music. To this end, we carried out two experiments in which we tested 120 infants (aged 5–24 months). Infants were exposed to various excerpts of musical and rhythmic stimuli, including isochronous drumbeats. Control stimuli consisted of adult- and infant-directed speech. Infants’ rhythmic movements were assessed by multiple methods involving manual coding from video excerpts and innovative 3D motion-capture technology. The results show that (i) infants engage in significantly more rhythmic movement to music and other rhythmically regular sounds than to speech; (ii) infants exhibit tempo flexibility to some extent (e.g., faster auditory tempo is associated with faster movement tempo); and (iii) the degree of rhythmic coordination with music is positively related to displays of positive affect. The findings are suggestive of a predisposition for rhythmic movement in response to music and other metrically…
You can read the rest here.
Zentner, M. and Eerola, T. (2010). Rhythmic engagement with music in infancy. Proceedings of the National Academy of Sciences.
One of my favourite music & mind talks. Dr. Aniruddh D. Patel of the Neurosciences Institute discusses what music can teach us about the brain, and what brain science, in turn, can reveal about music.
You can find many of his interesting papers here.
Music has the power to induce specific emotions in the listener. Just think of its use in films. Try to imagine how the classic shower scene from Alfred Hitchcock’s “Psycho” would play without music, or with a different tune. If the wrong song had been used, it could easily have ruined the scene (the famous cue was composed by Bernard Herrmann, by the way).
Even back in ancient Greece, philosophers like Aristotle and Plato had realised that music can affect human emotion. According to them, certain structural factors determine whether a song evokes pleasant, unpleasant or other moods in the listener.
Nowadays, music is one of the most hyped subjects in psychology and cognitive neuroscience, and it seems the ancient Greeks were half right. Scherer and Zentner (2001) suggested that the relationship between music and emotion is determined by four factors: structural features, performance features, listener features, and contextual features.
In this post I’m going to talk about the structural features and ignore the other three factors. Most studies suggest that the emotional valence of music depends mostly on the mode (major/minor) and the tempo (slow/fast) of the tune. Mode refers to the specific subset of pitches used to write a given musical excerpt, and tempo to the number of beats per minute in a song. Major mode and fast tempo are associated with “happy” songs, and minor mode and slow tempo with “sad” songs. Other emotions, such as fear and anger, are less easily studied but can also be recognised by listeners. These seem to be induced by features other than mode and tempo: fearful excerpts tend to have fast tempo, dissonant harmonies and wide variations in dynamics and pitch.
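To make the mode distinction concrete: in equal temperament, each semitone step multiplies frequency by 2^(1/12), and a major and a minor chord on the same root differ only in one pitch, the third. A minimal sketch (the note choices and MIDI numbering are just illustrative, not from any of the studies above):

```python
# Equal-temperament pitch: f = 440 * 2**((midi - 69) / 12), with A4 = MIDI 69.
def freq(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# A C major and C minor triad share root (C4) and fifth (G4);
# only the third changes (E4 vs. E-flat 4), one semitone apart.
c_major = [60, 64, 67]  # C4, E4, G4
c_minor = [60, 63, 67]  # C4, Eb4, G4
for name, triad in [("major", c_major), ("minor", c_minor)]:
    print(name, [round(freq(n), 2) for n in triad])
```

That single-semitone shift in the third is the acoustic difference listeners hear as the characteristic “happy” versus “sad” colour of the two modes.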
Interestingly, it seems that children use the same properties as adults (i.e., tempo and mode) to determine whether music sounds “happy” or “sad”. Dalla Bella, Peretz, Rousseau and Gosselin showed that children can judge the emotional content of music from the age of 5, using tempo as the sole cue. Older children (from 6 to 8 years old) use mode as well as tempo, just like adults. Recent studies show that even newborn babies can discriminate between happy and sad songs. These findings suggest that infants are born with the ability to perceive music. However, it is hard to tell whether this ability is truly innate or learned during the last months of pregnancy, during which the fetus can perceive certain stimuli from the external environment.
Scherer, K. R., & Zentner, M. R. (2001). Emotional effects of music: Production rules. In P. N. Juslin & J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 361-392). Oxford: Oxford University Press.
Khalfa, S., Roy, M., Rainville, P., Dalla Bella, S., & Peretz, I. (2008). Role of tempo entrainment in psychophysiological differentiation of happy and sad music? International Journal of Psychophysiology, 68, 17-26.
Sugimoto, T., & Hashiya, K. (2006, June). The recognition of affective values of the music in infants: Infants’ motoric response to music. Paper presented at the XVth Biennial International Conference on Infant Studies, Westin Miyako, Kyoto, Japan.