I’m planning to write a blog post about comedy in the near future. For now, you can enjoy this interesting TED talk by Peter McGraw, one of the researchers who developed the Benign Violation Theory. You can read more about it here.
Brain training games claim to boost your mental skills. But while practicing a game might make you better at it, research in young people has shown it doesn’t improve how well you perform other cognitive tasks in everyday life. Now a new study suggests the case may be different for adults above the age of 60. Researchers at the University of California have designed a driving game called NeuroRacer. In this Nature Video, we see how the game can improve an older player’s short-term memory and attention, skills which decline with age.
Read the original research paper here: http://dx.doi.org/10.1038/nature12486 (from Nature)
I’ve always been interested in science communication – perhaps because that’s how I got interested in science in the first place: documentaries, public talks, popular science books. So, whenever I get the chance, I try to give something back. No, I’m not hoping to inspire people to become scientists – there’s enough competition already (joking)! Challenging people’s views and getting them to come up with interesting questions are probably the most important things.
Taking part in the Street Wonder Fair in the first week of April during BNA 2013 was an excellent opportunity for public engagement. So you can imagine how excited I was when our project “A sixth sense and beyond” was accepted! Brains AND Barbican (if you ever visit London, you must visit this place); perfect.
Here’s a brief description of what we attempted to do:
Have you ever wanted an extra sense? How about using sonar to see in the dark, or always keeping your bearings with an internal compass? Explore the mysterious world of sensory augmentation, and decide what extra sense you would have.
(official page here)
You might be a bit disappointed to learn that we didn’t really give people “a sixth sense” – remember, this is a cognitive psychology/neuroscience blog. It did work, though: it got people’s attention. Festival attendees were given the chance to experience how it feels to have an “extra sense” by wearing a hat connected to a small device which gave them an “internal compass”, indicating the direction of magnetic north through vibration. The device was inspired by a similar gadget developed at the University of Osnabrück.
See a photo of the unit below:
Attendees were asked to close their eyes and point to the direction of magnetic north following the signals from the hat. Kids were given the opportunity to play a modified version of the pin-the-tail-on-the-donkey game (see pic below). There was an ongoing debate about whether the animal was a donkey or a giraffe – to me it’s clearly a giraffe.
It was rewarding to see both adults and children having fun with the device. Here are a few pics from our last day:
We also asked participants what their answer would be to the question ‘If you could have any extra sense, what would it be?’ and encouraged them to write it down on a board. People came up with a few interesting ideas. We’re still debating whether some of them are actually senses!
Here are some of the answers people gave. Some of those choices prompted further discussions about the senses with various attendees.
I have some experience in public engagement (ScienceBrainwaves), but I’d never taken part in such a big event. Even though standing in front of the stall talking to people for hours, for 3 consecutive days, was exhausting (there were only 3 of us, so we could only take (very) short breaks), it was a fascinating experience! I’m really sorry I didn’t have enough time to check out all the other stalls – there were so many researchers with interesting demonstrations, from knitting neurons to virtual brain surgery! I hope I get the chance to be part of a similar event in the future.
This is not a proper post; it is more like a long tweet. Having done a similar study last year and found no significant results, I felt I had to share this with you.
You have probably heard that right-handed people look up to their right when they are telling a lie, and up to their left when they are telling the truth. Surprisingly, even though many people believe this to be scientifically established, a quick Google search comes up with no relevant peer-reviewed papers. Richard Wiseman and colleagues investigated this notion in three different studies, none of which provided evidence to support it. So it seems that these patterns of eye movements do not aid lie detection.
Why did this myth survive for such a long time? Probably thanks to psychologists’ reluctance to publish negative results…
Here is the abstract:
Proponents of Neuro-Linguistic Programming (NLP) claim that certain eye-movements are reliable indicators of lying. According to this notion, a person looking up to their right suggests a lie whereas looking up to their left is indicative of truth telling. Despite widespread belief in this claim, no previous research has examined its validity. In Study 1 the eye movements of participants who were lying or telling the truth were coded, but did not match the NLP patterning. In Study 2 one group of participants were told about the NLP eye-movement hypothesis whilst a second control group were not. Both groups then undertook a lie detection test. No significant differences emerged between the two groups. Study 3 involved coding the eye movements of both liars and truth tellers taking part in high profile press conferences. Once again, no significant differences were discovered. Taken together the results of the three studies fail to support the claims of NLP. The theoretical and practical implications of these findings are discussed.
The rest of the article can be found on PLoS ONE.
We spend a lot of time mind wandering, and cognitive neuroscience has recently started investigating this phenomenon. However, the subjective nature of mind wandering makes capturing and measuring it exceptionally difficult; as a result, there is still no way to measure it objectively. In the majority of published studies, researchers ask participants at random intervals how focused they are on a given task. In a recently published paper, Uzzaman and Joordens explored the use of eye movements as an objective measure of mind wandering while participants performed a reading task.
Eye movements are thought to reflect (to some degree) cognitive processes (for a brief overview of eye movement research, see the Scholarpedia entry). Uzzaman and Joordens’s study was based on an earlier paper by Reichle, Reineberg, and Schooler (2010), who suggested that eye movements may provide an objective measure of mind wandering. Reichle et al. investigated this hypothesis by comparing fixation durations during mind wandering and normal reading episodes. The results were very encouraging and suggested that participants’ eye movements became progressively decoupled from the ongoing task (i.e., text processing) during mind wandering episodes.
Uzzaman and Joordens used a reading task coupled with a self-classified, probe-caught mind wandering paradigm to obtain a subjective account of mind wandering episodes. They recruited 30 participants, who were explicitly informed of the definition of mind wandering prior to the start of the experiment and were instructed that they would be asked to report their mind state at random intervals. The authors explicitly defined mind wandering “as reading without text comprehension, or thinking about anything other than the text on hand”, and provided several examples to make sure the participants fully understood the concept.
The participants read sixteen pages of Tolstoy’s “War and Peace” on a computer screen while their eye movements were tracked and recorded. At random intervals of 2–3 minutes, a probe would appear on top of the text asking about the participant’s mind state at that specific point, and participants had to answer to continue the experiment. On average, participants received 10 probes in total, and mind wandering was reported on 49% of them.
The eye movement behaviours of the participants were categorised into mind wandering or reading conditions based on their self-reports. This analysis was conducted on the 5 s interval preceding each probe, for the reading and wandering conditions within each participant. Nine eye movement variables were analysed (e.g., blink count, fixation count, saccade count, fixation duration, within-word regression count), and they displayed different degrees of sensitivity to mind wandering.
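As a rough illustration of this windowing step, here is a minimal Python sketch; the data structures, field names and numbers are hypothetical, not taken from the paper:

```python
# Hypothetical sketch: collect the fixations falling in the 5 s before each
# probe and label that window with the probe's self-report.
WINDOW_MS = 5000

def preprobe_windows(fixations, probes):
    """fixations: list of (onset_ms, duration_ms, word_index) tuples.
    probes: list of (probe_time_ms, report), report in {'wandering', 'reading'}.
    Returns a dict mapping each report type to its list of windows."""
    windows = {"wandering": [], "reading": []}
    for probe_time, report in probes:
        window = [f for f in fixations
                  if probe_time - WINDOW_MS <= f[0] < probe_time]
        windows[report].append(window)
    return windows

# toy example
fixations = [(100, 250, 0), (400, 200, 1), (4800, 300, 2), (9200, 220, 3)]
probes = [(5000, "reading"), (10000, "wandering")]
w = preprobe_windows(fixations, probes)
print(len(w["reading"][0]))  # fixations in the 5 s before the first probe
```

Per-window counts (fixations, blinks, regressions, etc.) would then be computed from these labelled windows before any statistical comparison.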
Statistically significant differences were found for two of the eye movement variables: run count and within-word regression count. Run count was defined as “the total number of runs, where a run is two consecutive fixations within the same interest-area” and within-word regression count as “the sum of all fixation durations from when the word was first fixated upon, till the last fixation”.
Specifically, there were fewer within-word regressions in the periods before mind wandering reports than in the periods before reading reports (z = −2.305, p = 0.021). The total run count was also lower before mind wandering reports (z = −1.997, p = 0.046). In addition, fixation count, saccade count and the total number of saccades within the interest-area were lower before mind wandering reports, although these variables fell slightly short of the conventional significance criterion (all z < −1.755, p > 0.079).
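The z values above come from nonparametric paired comparisons. As a sketch of what such a test looks like, here is a pure-Python normal-approximation Wilcoxon signed-rank statistic on made-up per-participant counts; this is my reconstruction for illustration, not the authors’ analysis code:

```python
import math

def wilcoxon_z(before_wandering, before_reading):
    """Normal-approximation z for a Wilcoxon signed-rank test on paired
    per-participant measures (e.g., within-word regression counts in the
    5 s before wandering vs. reading probes). Zero differences are
    dropped; tied |differences| receive average ranks."""
    diffs = [a - b for a, b in zip(before_wandering, before_reading) if a != b]
    n = len(diffs)
    ranked = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average rank over the tied block
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mu) / sigma

# hypothetical counts for six participants
z = wilcoxon_z([2, 1, 3, 0, 1, 2], [5, 4, 3, 2, 6, 4])
print(round(z, 2))  # → -2.02: fewer events before wandering probes
```

A published analysis would typically use a library routine (e.g., SciPy’s `wilcoxon`) rather than hand-rolled ranking, but the logic is the same.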
During attentive reading, all the words were processed deeply and cognitive effort was being invested. A different pattern emerged during mind wandering episodes: the lower number and duration of within-word regressions suggest that the text was not being processed deeply and that only limited lexical information was being extracted. In other words, reading became less effortful and more automatic.
The current study revealed a correlation between subjective reports of mind wandering and objective ocular behaviour. These findings could be exploited in future studies and lead to the development of algorithms that predict the likelihood of mind wandering from eye movements. Such a development might also provide valuable insights into the neural correlates of mind wandering.
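As a hedged sketch of what such a predictive algorithm might look like, here is a tiny logistic-regression classifier in pure Python; the features, toy data and modelling choice are all my own assumptions, not from the paper:

```python
import math

def sigmoid(z):
    # numerically safe logistic function
    if z >= 0:
        return 1 / (1 + math.exp(-z))
    e = math.exp(z)
    return e / (1 + e)

def train_logistic(samples, labels, lr=0.1, epochs=200):
    """Tiny logistic-regression trainer (stochastic gradient descent).
    samples: one feature vector per pre-probe window (here: run count and
    within-word regression count); labels: 1 = mind wandering reported."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a window reflects mind wandering."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# toy data: wandering windows (label 1) show fewer runs and regressions
X = [[12, 5], [11, 4], [4, 1], [5, 0], [10, 5], [3, 1]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)
print(predict(w, b, [4, 1]) > 0.5)  # True: classified as wandering
```

A real system would be trained on many participants’ windows and validated on held-out data; this only shows the shape of the idea.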
Uzzaman, S., & Joordens, S. (2011). The eyes know what you are thinking: Eye movements as an objective measure of mind wandering. Consciousness and Cognition, 20(4), 1882-1886. DOI: 10.1016/j.concog.2011.09.010
Reichle, E. D., Reineberg, A. E., & Schooler, J. W. (2010). Eye movements during mindless reading. Psychological Science, 21(9), 1300-1310. PMID: 20679524
ADHD is the most common neurodevelopmental disorder (Faraone et al., 2003) and affects about 3–6% of children (Tannock 1998). ADHD is defined by attentional dysfunction, hyperactive/impulsive behaviour, or both (DSM-IV; American Psychiatric Association, 1994). Accordingly, the diagnosis of ADHD has three subtypes: the Inattentive subtype (ADHD/IA), characterised by significant levels of inattention but subthreshold levels of hyperactive/impulsive symptoms; the Hyperactive/Impulsive subtype (ADHD/HI), defined by hyperactivity/impulsivity but not inattention symptoms; and the Combined Inattentive-Hyperactive/Impulsive subtype (ADHD/C), characterised by maladaptive levels of both symptom clusters.
Morningness is a stable characteristic that reflects the phase of the circadian system. It is a continuum with evening types at one end and morning types at the other. Previous studies have found that an evening orientation might be a risk factor for various disorders, including depression and personality disorders. Morningness is also a heritable trait (Vink, Groot, Kerkhof, & Boomsma, 2001) determined by genetic factors (Mishima, Tozawa, Satoh, Saitoh, & Mishima, 2005). Impulsivity and novelty seeking, two characteristics associated with particular ADHD subtypes, are negatively related to morningness; evening-oriented individuals often score higher on tests assessing these traits. In addition, there is evidence that morningness is implicated in variability of performance (Natale, Alzani, & Cicogna, 2003), and variability in cognitive tasks is a common finding in studies of individuals with ADHD. Individuals with ADHD have also been found to experience a number of sleep-related problems, such as sleep-onset difficulties, agitated sleep, and a higher number of nocturnal awakenings.
Caci et al. examined the relationship between morningness and ADHD. Their hypothesis was that adults suspected of having ADHD are more evening oriented than adults without ADHD. They recruited 354 participants and assessed their scores on the Composite Scale of Morningness (CSM), a measure of morningness, and the Adult Self-Report Scale v1.1 (ASRS), a self-report questionnaire used to screen for ADHD in adults. The ASRS includes two subscales, for inattention and hyperactivity symptoms, which allowed Caci et al. to examine the relationship between possible ADHD subtypes and morningness.
The results confirmed the hypothesis: participants with higher scores on the ASRS reported an evening orientation. The effect was stronger in participants with higher scores on the inattention subscale, and no correlation was found between hyperactivity and morningness. This provides evidence for the existence of different endophenotypes in ADHD. Since the sample consisted of healthy volunteers, it would be interesting to try to replicate this finding in individuals diagnosed with ADHD.
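To make the kind of analysis behind such a finding concrete, here is a minimal Pearson-correlation sketch in Python; the scores below are invented for illustration, and the real study used more sophisticated modelling:

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical scores: lower CSM = more evening-oriented
csm = [22, 30, 41, 35, 27, 48, 33, 25]
asrs_inattention = [28, 22, 14, 18, 25, 10, 19, 26]
r = pearson_r(csm, asrs_inattention)
print(round(r, 2))  # strongly negative: evening types report more inattention
```

A negative r here mirrors the reported pattern: lower morningness scores go with higher self-reported inattention.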
PS: After writing this post, I realised there’s a new study published in Nature by Baird et al. (2011) that examines endocrine and molecular levels of circadian rhythms in ADHD and seems to confirm the morningness hypothesis proposed by Caci et al. According to this paper, adult ADHD is accompanied by significant changes in the circadian system. I might write a post about it in the near future.
Caci, H., Bouchez, J., & Baylé, F. J. (2009). Inattentive symptoms of ADHD are related to evening orientation. Journal of Attention Disorders, 13(1), 36-41. PMID: 19387003
Music is a powerful tool for expressing and inducing emotions. Lima and Castro investigated whether, and how, emotion recognition in music changes as a function of ageing. Their study revealed that older participants showed decreased responses to music expressing negative emotions, while their perception of happy emotions remained stable.
Emotion plays an important role in music. Even infants have been found to be capable of identifying emotions in musical excerpts (Nawrot, 2003). However, recognition of emotion in music has received little attention so far. A new study by Lima and Castro published in Cognition and Emotion examined the effects of ageing on the recognition of emotions in music. Previous studies of emotion recognition in other modalities have revealed that increasing age is associated with a decline in the recognition of some emotions but not others (for more information, see the meta-analysis by Ruffman et al. (2008)). Laukka and Juslin (2007) examined the effects of ageing on emotion recognition in music by comparing young adults (around 24) with older adults (over 65). They found that older adults had more difficulty recognising fear and sadness in both music and speech prosody, whereas no differences were observed for anger, happiness and neutrality.
The sample used by Lima and Castro consisted of 114 healthy adults (67 female), aged between 17 and 84 years, divided into three groups of 38 participants each: younger (mean age = 21.8 years), middle-aged (mean age = 44.5 years) and older adults (mean age = 67.2 years). Each group listened to 56 short musical excerpts expressing happiness, sadness, fear/threat and peacefulness; each category consisted of 14 stimuli.
The results revealed significant age-related changes for specific emotions. More specifically, the authors identified a progressive decline in responsiveness to sad and scary music, while no difference was found for happy music. Differences between age groups were also observed in the pattern of misclassifications for sad and peaceful music: younger participants perceived more sadness in peaceful music, whereas older participants perceived more peacefulness. This could be due to the structural features of peaceful and sad songs, which are both characterised by slow tempo; future studies could investigate this further. In addition, Lima and Castro took into account the years of musical training the participants had, and this analysis revealed a positive association between musical training and the categorisation of musical emotions.
One possible explanation for the main findings is that the decline in the recognition of particular emotions reflects age-related neuropsychological decline in brain regions (such as the amygdala) involved in emotion processing; previous studies have shown that distinct brain regions are involved in the perception of different emotions (Mitterschiffthaler et al., 2007). Another possible explanation is the age-related positivity bias (Mather & Carstensen, 2005; Carstensen & Mikels, 2005), which suggests that as people get older, they experience fewer negative emotions.
Future studies could attempt to identify particular brain regions involved in emotion recognition at different ages. Furthermore, since the age-related positivity bias might not be universal (older Chinese participants looked away from happy facial expressions and not from negative ones, see Fung et al., 2008), it’d be very interesting to investigate the effects of ageing on emotion recognition in music in participants from different cultures.
Lima, C. F., & Castro, S. L. (2011). Emotion recognition in music changes across the adult life span. Cognition & Emotion, 25(4), 585-598. PMID: 21547762
Carstensen, L., & Mikels, J. (2005). At the intersection of emotion and cognition: Aging and the positivity effect. Current Directions in Psychological Science, 14(3), 117-121. DOI: 10.1111/j.0963-7214.2005.00348.x
Ruffman, T., Henry, J. D., Livingstone, V., & Phillips, L. H. (2008). A meta-analytic review of emotion recognition and aging: Implications for neuropsychological models of aging. Neuroscience and Biobehavioral Reviews, 32(4), 863-881. PMID: 18276008
Laukka, P., & Juslin, P. (2007). Similar patterns of age-related differences in emotion recognition from speech and music. Motivation and Emotion, 31(3), 182-191. DOI: 10.1007/s11031-007-9063-z
Mather, M., & Carstensen, L. L. (2005). Aging and motivated cognition: The positivity effect in attention and memory. Trends in Cognitive Sciences, 9(10), 496-502. PMID: 16154382
Mitterschiffthaler, M., Fu, C., Dalton, J., Andrew, C., & Williams, S. (2007). A functional MRI study of happy and sad affective states induced by classical music. Human Brain Mapping, 28(11), 1150-1162. DOI: 10.1002/hbm.20337
Nawrot, E. (2003). The perception of emotional expression in music: Evidence from infants, children and adults. Psychology of Music, 31(1), 75-92. DOI: 10.1177/0305735603031001325
Fung, H. H., Lu, A. Y., Goren, D., Isaacowitz, D. M., Wadlinger, H. A., & Wilson, H. R. (2008). Age-related positivity enhancement is not universal: Older Chinese look away from positive stimuli. Psychology and Aging, 23(2), 440-446. PMID: 18573017
Sustaining attention and blocking goal-irrelevant information is a crucial function in everyday life. Kanai and colleagues, combining neuroimaging, self-report judgements and TMS, found evidence that a region of the left superior parietal cortex mediates this function.
The ability to avoid distraction varies across individuals, as measured by the Cognitive Failures Questionnaire (CFQ) (Broadbent et al., 1982). Studies of twins and families have shown that the ability to maintain attention in the presence of distractors is highly heritable (Boomsma, 1998). A high degree of heritability suggests that the variability might be mediated by genetic influences on the brain, which may be expressed via variability in brain structure.
This hypothesis was tested by Kanai et al., who scanned 145 healthy adults and obtained their CFQ scores. They used voxel-based morphometry (VBM) to examine whether distractibility scores predicted brain structure. Their results revealed that an individual’s level of distractibility in everyday life was predicted by variability in regional grey matter density of the left superior parietal lobe (SPL): highly distractible individuals had greater grey matter density in the left SPL. This particular region has been implicated in top-down attentional control in previous studies (Mevorach et al., 2009). To examine whether there is a causal relationship between this region and distractibility, Kanai et al. applied transcranial magnetic stimulation (TMS) over the left SPL while participants performed an attentional capture paradigm. The results suggest that the left SPL plays a role in suppressing distraction from task-irrelevant salient distractors in both visual fields.
Kanai, R., Dong, M. Y., Bahrami, B., & Rees, G. (2011). Distractibility in daily life is reflected in the structure and function of human parietal cortex. The Journal of Neuroscience, 31(18), 6620-6626. PMID: 21543590
Boomsma, D. I. (1998). Genetic analysis of cognitive failures (CFQ): A study of Dutch adolescent twins and their parents. European Journal of Personality, 12(5), 321-330.
Broadbent, D. E., Cooper, P. F., FitzGerald, P., & Parkes, K. R. (1982). The Cognitive Failures Questionnaire (CFQ) and its correlates. The British Journal of Clinical Psychology, 21(1), 1-16.
Mevorach, C., Shalev, L., Allen, H. A., & Humphreys, G. W. (2009). The left intraparietal sulcus modulates the selection of low salient stimuli. Journal of Cognitive Neuroscience, 21(2), 303-315.
Trying to kill time, and not my neighbour who enjoys listening to loud music after midnight, I found myself wondering why most GPs have bad handwriting. Or is it a myth? Naturally, Google came up with some very interesting results, including some actual studies! It seems there are peer-reviewed papers on almost any possible topic nowadays. What a joy for bloggers and curious people.
Here’s what I found:
1) According to Sokol and Hettige (2006) doctors’ bad handwriting is still a problem in medicine.
In centuries past, doctors scribbled notes to keep a personal record of the patient’s medical history. The notes were generally seen only by the doctor. Today, doctors are no longer one-man bands. With dozens of other professionals, doctors are but one element of a large, multidisciplinary health care team. A consequence of this expansion is that illegible scrawls, hurriedly composed by rushed doctors, are now presented to colleagues with no qualifications in cryptology.
2) Rodríguez-Vera and colleagues (2002) looked at clinical histories and case notes from a Spanish general hospital. To do this, they asked two independent observers to assign legibility scores to the notes. They found that legibility defects severe enough to make the whole note unclear were present in 18 (15%) of 117 reports. Furthermore, their findings suggest that these defects were particularly frequent in records from surgical departments…
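As an aside, agreement between two independent observers like these is often quantified with Cohen’s kappa; here is a minimal pure-Python sketch (the ratings below are made up, not from the study):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance from each rater's category frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    cats = sorted(set(rater_a) | set(rater_b))
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats
    )
    return (observed - expected) / (1 - expected)

# hypothetical legibility ratings from two observers
a = ["good", "poor", "fair", "poor", "good", "fair", "poor", "good"]
b = ["good", "poor", "fair", "fair", "good", "fair", "poor", "poor"]
print(round(cohens_kappa(a, b), 2))  # → 0.63
```

Values around 0.6–0.8 are conventionally read as substantial agreement, which is the kind of check a legibility-scoring study would want to report.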
3) A 1998 study examined the handwriting of doctors, nurses and other healthcare professionals, and administrative staff. They recruited staff from three main settings – the health authority headquarters, an accident and emergency department, and various departments in another hospital. They report:
This study suggests that doctors, even when asked to be as neat as possible, produce handwriting that is worse than that of other professions. This provides supportive evidence for the commonly held belief that the legibility of doctors’ handwriting is unusually poor. A small prospective study in the United States reported no difference between the legibility of doctors’ handwriting and that of other healthcare professionals, but this study used a subjective assessment of readability and the comparison group was confined to senior non-medical staff.
A surprising finding of our study is that the poor legibility was confined to letters of the alphabet rather than numbers. This may reflect the importance attached by doctors to the legibility of drug doses.
4) Schneider and colleagues (2006) compared doctors’ handwriting to that of engineers, accountants, and lawyers. Their results suggest that physicians’ handwriting is no better or worse than that of other professionals with comparable education. These findings provided support for an earlier study conducted by Berwick and Winickoff (1996) that found that “the handwriting of doctors was no less legible than that of non-doctors”.
5) A study by Gupta and colleagues (2003) investigating differences in handwriting between residents and medical students found that more experienced doctors had increasingly illegible handwriting compared with their younger colleagues and with medical students. One could therefore propose that bad handwriting is a product of the profession of medicine.
Sokol, D. K., & Hettige, S. (2006). Poor handwriting remains a significant problem in medicine. Journal of the Royal Society of Medicine, 99(12), 645-646. PMID: 17139073
Rodríguez-Vera, F., Marin, Y., Sanchez, A., Borrachero, C., & Pujol, E. (2002). Illegible handwriting in medical records. Journal of the Royal Society of Medicine, 95(11), 545-546. DOI: 10.1258/jrsm.95.11.545
Schneider, K. A., Murray, C. W., Shadduck, R. D., & Meyers, D. G. (2006). Legibility of doctors’ handwriting is as good (or bad) as everyone else’s. Quality & Safety in Health Care, 15(6). PMID: 17142598
Berwick, D. M., & Winickoff, D. E. (1996). The truth about doctors’ handwriting: A prospective study. BMJ, 313(7072), 1657-1658. PMID: 8991021
Gupta, A. K., Cooper, E. A., Feldman, S. R., Fleischer, A. B., Jr., & Balkrishnan, R. (2003). Analysis of factors associated with increased prescription illegibility: Results from the National Ambulatory Medical Care Survey, 1990-1998. The American Journal of Managed Care, 9(8), 548-552. PMID: 12921232