Tuesday 31 January 2017

A cognitive model of psychological resilience - current thinking and future directions

When our recent paper was made open access in the Journal of Experimental Psychopathology, it got me thinking about what was next for the line of research we discussed in it. "A cognitive model of psychological resilience" (available here or on my Researchgate here) was my attempt to wrestle with two research domains that have the potential for an integrated approach to psychological resilience and wellbeing. On the one hand, the cognitive-experimental approach to emotion dysfunction investigates the biased cognitive processes that influence mental ill-health; on the other, positive-psychology approaches (such as resilience approaches and positive mental health continua) inform our understanding of what it is to be healthy. While there is some crossover between these fields, for me that overlap was not sufficient to really investigate the cognitive mechanisms of resilience, well-being and positive mental health.

As this is the general topic of my DPhil, it left a bit of a gap to fill. My initial attempt to do so involved several measures of cognitive bias and self-reports of positive mental health and resilience. This is exactly the approach that we advise against in our paper, although we do acknowledge that it has several benefits. The main rationale is that resilience is something that is demonstrated, rather than reported. The majority of the prior research I came across attempting to investigate the cognitive basis of psychological resilience took a similar approach, which left me increasingly unsatisfied with it.

On the flip side, the gap between these fields was further suggested to me as I read the resilience and positive mental health literature, where there was a consistent lack of cognitive methods. When cognitive aspects were mentioned, they typically referred to self-report measures or the more conscious and complex biases known from social psychology, rather than the automatic selective biases that I was interested in.

I presented this paper in the form of posters and oral presentations to both audiences and ended up with very different responses from each. This first-hand experience gave me more of an appreciation for the rift between the two fields, and even for their different understandings of the terminology used. I now begin these presentations with a quick, audience-specific summary, with the aim of ensuring that whichever group I talk to really "gets" the purpose of our cognitive model of resilience.

Back to the present: I will be presenting a poster of our model on Thursday at the MQ Science Meeting. This seemed a perfect opportunity to do a bit more thinking about some of the ongoing research that supports the model, and about future research directions. The rest of this post is intended as a more in-depth version of the poster that I will present - partly to flesh out the material in the poster, but also to get more of this thought process down in a much more usable form than the few scribbled notes I make while creating it. Here goes...

A cognitive model of psychological resilience

In our paper, we noted that certain biases thought to be detrimental (e.g. attention bias towards threat) have adaptive roles in certain circumstances. Therefore, the ability to flexibly switch between strategies must be important for promoting adaptive responses across multiple circumstances and life events. However, flexibility is not enough; some amount of rigidity or inflexibility is also called for, for example when the current strategy is effective or will be more effective in the long term. Therefore, we hypothesised the presence of an overarching system that guides the flexible and directed application of cognitive processing strategies depending on the present circumstances, future-oriented goals and affective states, and that integrates prior knowledge and experience in order to adaptively align information processing strategies. We decided to term this the mapping system, and went on in the paper to detail how it might be used to guide future research in developing more detailed cognitive accounts of psychological resilience.

Recent and ongoing research

In a novel task assessing the alignment of attentional bias (developed by Dr Lies Notebaert), participants successfully aligned their processing biases to the blocked circumstances: for example, they more accurately attended to controllable threats and ignored uncontrollable threats. This suggests that healthy individuals do successfully align, or map, cognitive bias to dynamically respond to differing circumstances - in this case the controllability of potentially threat-related cues. Current follow-up research is investigating whether this alignment is impaired in anxious individuals.

Ongoing research in the OCEAN lab has investigated psychological flexibility in relation to resilience. Task-switching measures conceptually capture the nature of psychological flexibility as an increased capacity to switch between task sets. With a more specific focus on emotional health, these paradigms have been adapted to include emotional components. In one such task, participants respond to emotional and neutral faces based on whether a face is the odd-one-out according to one of several rules. Task switching can therefore be examined both when emotion is the focus of the switch and when emotion is present but irrelevant to the current task. In another task, working memory capacity is incorporated with emotional and non-emotional internal set-switches, in order to examine task-switching capability when the switch is generated internally rather than determined by trial cues. At this stage the analyses are preliminary, but they potentially indicate that trait resilience is influenced by task-switching capability in the presence of emotion.
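For anyone less familiar with these paradigms, task-switching ability is typically quantified as a switch cost: the slowing (or drop in accuracy) on trials where the rule changes relative to trials where it repeats, which can then be compared between emotional and neutral trials. The sketch below is a minimal, hypothetical illustration of that scoring in Python - the made-up trial data, column names and rules are my own assumptions for the example, not the OCEAN lab's actual tasks or analysis pipeline.

```python
import pandas as pd

# Hypothetical trial-level data: the rule in force on each trial, whether the
# face shown was emotional or neutral, and the response time in milliseconds.
trials = pd.DataFrame({
    "rule":  ["colour", "colour", "emotion", "emotion", "colour", "emotion"],
    "face":  ["neutral", "angry", "angry", "neutral", "angry", "neutral"],
    "rt_ms": [612, 655, 743, 690, 701, 668],
})

# A trial counts as a "switch" trial if the rule differs from the previous trial.
trials["switch"] = trials["rule"].ne(trials["rule"].shift())
valid = trials.iloc[1:]  # the first trial is neither a switch nor a repeat

# Switch cost = mean RT on switch trials minus mean RT on repeat trials,
# computed separately for emotional and neutral faces.
switch_cost = (valid[valid["switch"]].groupby("face")["rt_ms"].mean()
               - valid[~valid["switch"]].groupby("face")["rt_ms"].mean())
print(switch_cost)  # larger values = greater difficulty switching in that condition
```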

Future research directions

Much of the following comes from an upcoming grant application, which has taken up most of my thinking time for the past few months. Naturally, these are areas of research that I believe should be addressed to further our understanding of the cognitive basis of promoting psychological resilience to emotion dysfunction and the development of mental ill-health.

Developmental approaches: It is almost too common an observation that we need longitudinal and developmental approaches to mental health research. It is especially pertinent here, as cognitive approaches are largely missing from this literature, and even more so for adolescent populations. This is unfortunate given that adolescence is a period of emotional vulnerability and that many cases of adult mental ill-health develop from adolescent emotional disorders. Thus, more research is needed that takes a longitudinal approach. Of particular interest would be employing large cohorts of adolescents and following them up regularly to investigate how developmental changes in cognitive processing predict future emotional pathology, and resilience to the development of such pathology.

Repetitive Negative Thinking induction procedures: Worry and rumination take up valuable cognitive resources and are a contributing characteristic of many emotional disorders. Examining this maladaptive thinking style goes beyond specific diagnoses (between which there is a high rate of co-morbidity anyway). Additionally, these states can be experimentally induced, which gives us the ability to directly examine the extent to which they impact processing capability, and any knock-on effects on emotional information processing.

Cognitive-affective flexibility training: To examine the causal influence of a particular process on an outcome, you must modify the process in question. In the cognitive-experimental approach to emotion dysfunction, this typically comes in the form of cognitive bias modification. If the modification procedure is successful in changing the process in question, then we can reasonably address the question of whether that process causally influences our desired outcome, for example the tendency or susceptibility to engage in repetitive negative thinking. There has been research aimed at improving executive functioning, with promising results; however, training flexibility in the context of emotion remains to be explored. Longitudinal designs can then be used to examine whether short-term transient changes, or changes resulting from longer training programs, go on to influence vulnerability factors and potentially boost resilience to the development of emotional disorders.

Some conclusions

A cognitive approach to psychological resilience offers many benefits, and I hope that our model will serve as a catalyst for research in this area. I've briefly covered several ongoing and planned research projects with the aim of highlighting how our model may be used to integrate two important fields of mental health research. In the future I will post an update on these projects and what they have taught us. Fingers crossed that we will also get funding to pursue some of the research that is vitally needed to investigate cognitive approaches to improving adolescent psychological resilience. In the meantime, please do have a read of our paper. I am looking forward to presenting the poster version of this post at the MQ Science Meeting, and I'll also make the poster available on my Researchgate page.





Tuesday 3 January 2017

"Improving your statistical inferences" by Daniel Lakens - a review

The key message I remember from my undergraduate statistics modules is that you want your p-value to be less than .05. In December I joined Daniel Lakens on his "Improving your statistical inferences" online course (free, unless you would like to buy the certificate). It became glaringly obvious that my ability to actually interpret statistics was extremely limited during, and even after, my undergraduate degree. In this review, I want to highlight a few of the lessons I gained from the course, and why everybody should complete it (especially current undergraduate and postgraduate students).

Undergraduate Psychology degrees in the UK require the teaching of research methods and statistics. Usually, this is the module that gets a lot of hate for being dry (or hard), yet in order to actually understand research, I believe it may be the most important part of an undergraduate psychology course. In fairness to the lecturers at the universities I have attended, and I believe at most others, it is made pretty clear that statistics are important: of course we need a working understanding of statistics to understand and conduct research. Unfortunately, I believe that somewhere around week 2 or 3 of running yet another analysis in SPSS that is not fully understood, some of that motivation gets lost. The result is students lacking a full appreciation of the most important aspect of statistical analysis - interpreting the statistics. Thankfully, this is where 'Improving your statistical inferences' comes in.

There has definitely been improvement since my undergraduate. In the lab practicals that I have demonstrated in (and marked many reports from), the students seem to grasp the interpretation of statistics better than I did at that stage. Note: I was not number averse; my strongest subject was always maths and there was plenty of statistics at A-level. These students even know to report effect sizes, which I had barely any comprehension of until my Masters. This is great progress, though judging from some of the reports I'm not completely convinced that these effect sizes have been interpreted or fully understood, beyond knowing that they should be reported.

But it is not the place of this post to rip on undergraduate statistics modules. I want to talk about the course "Improving your statistical inferences", taught by Daniel Lakens. In particular I want to highlight a few of the important lessons from this course that I wish I had discovered during my undergraduate. Side note for undergraduates: if you don't understand something that you think is important, then it is also your responsibility to find that information - just because it is not in the lecture does not mean that it is not important, or even essential. On to two of the lessons.


Type 1 and Type 2 Errors

[Image: "Science is a liar sometimes"]
My wife didn't quite appreciate the reference. Any other It's Always Sunny fans out there?

We all learned about Type 1 and Type 2 errors. What I now realise is that I did not quite appreciate how important they are. To recap, the Type 1 error rate (alpha) is the probability of finding a significant result when there is no real effect. The Type 2 error rate (beta, or 1 - power) is the probability of finding no significant result even when there is a real effect. Or, in the next picture...


[Image: "pregnant" - an illustration of Type 1 (false positive) and Type 2 (false negative) errors]
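To make those two error rates concrete, here is a rough simulation sketch (my own illustration in Python, not material from the course): run many two-group t-tests when there is no true difference to estimate the Type 1 error rate, and many more when there is a real difference to estimate power, i.e. 1 minus the Type 2 error rate. The sample size and effect size are arbitrary choices for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)
n_sims, n_per_group, alpha = 5_000, 30, .05

def significant_rate(true_effect):
    """Proportion of simulated two-sample t-tests that come out p < alpha."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(true_effect, 1, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

# With no real effect, "significant" results are false positives: the Type 1 error rate.
print("Type 1 error rate (no effect):   ", significant_rate(0.0))  # ~ .05
# With a real effect (d = 0.5 here), the significant-result rate is the power;
# the Type 2 error rate is 1 minus this.
print("Power when d = 0.5, n = 30/group:", significant_rate(0.5))  # roughly .4-.5
```

Even in this toy example the asymmetry is striking: the false positive rate sits at 5% by construction, while the miss rate is above 50% with only 30 participants per group.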

Why is this important?

"We know this", everybody is screaming. But it underlies a key point in the teaching of statistics.
In psychology we are trained that p < .05 is good, without a complete appreciation of the actual effect in question. These lessons reinforce the notion that the effect size is the important factor, rather than the p-value itself. This is of the utmost importance for those beginning their research journey, and I wish that I had appreciated it during my undergraduate. Across the course, we learn about several methods to reduce these errors. Amongst other things, a priori methods such as upping your sample size help to control Type 2 errors, and a posteriori methods such as correcting for multiple comparisons help to control Type 1 errors.
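As a hedged sketch of those two levers (again my own Python illustration, not the course's R code): the first loop shows power climbing as the per-group sample size grows for the same true effect, and the second shows how running ten null tests per "study" inflates the family-wise Type 1 error rate, and how a simple Bonferroni correction pulls it back towards 5%. The numbers of simulations, tests and sample sizes are arbitrary choices for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sims, alpha, d = 2_000, .05, 0.5

# 1) A priori: bigger samples mean fewer Type 2 errors (more power) for the same effect.
for n in (20, 50, 100):
    power = np.mean([
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(d, 1, n)).pvalue < alpha
        for _ in range(sims)
    ])
    print(f"n = {n:3d} per group: power ~ {power:.2f}")

# 2) Multiple comparisons: with 10 independent null tests per study, the chance of at
#    least one false positive is far above 5%; dividing alpha by the number of tests
#    (a Bonferroni correction) brings the family-wise Type 1 error rate back down.
k, uncorrected, corrected = 10, 0, 0
for _ in range(sims):
    pvals = [stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue
             for _ in range(k)]
    uncorrected += int(min(pvals) < alpha)
    corrected += int(min(pvals) < alpha / k)
print(f"Family-wise Type 1 error, uncorrected: {uncorrected / sims:.2f}")  # ~ .40
print(f"Family-wise Type 1 error, Bonferroni:  {corrected / sims:.2f}")    # ~ .05
```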


P-curve analysis

P-curve analysis is a meta-analytic technique for exploring distributions of p-values in the literature. It allows the analyst to draw inferences about whether a collection of statistical results is likely to contain p-hacking. Here is one question I did not know the answer to until taking this course: what p-value would we expect if there is no actual effect in an analysis? Answer - any p-value; the distribution is uniform, so there is the same chance of p = .789 as of p = .002, for example. Now, what about if there is a real effect? In this case the distribution shifts towards lower p-values (highly significant, or whichever terminology one prefers). If we plot this as a distribution, we would expect something like the right-hand graph in the figure below.

We may instead see a different pattern, with a bump in the distribution of p-values at the high end, say .04-.05, the "just significant" range. This may be suggestive of p-hacking or optional stopping (running more participants until a significant result is found), amongst other things. See the left-hand graph in the image below. So, in short, if a literature is filled mainly with p-values at the high end of significance, e.g. lots of .04 and above, then we might want to examine that research further before being confident in the results.

Taken from Simonsohn et al. 
http://www.haas.berkeley.edu/groups/online_marketing/facultyCV/papers/nelson_p-curve.pdf
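It is easy to convince yourself of these two shapes with a few lines of simulation. The sketch below is my own rough illustration (not the course materials): simulate many two-sample t-tests with and without a true effect and look at how the resulting p-values spread across the 0-1 range. The sample size, effect size and number of simulations are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def p_values(true_effect, n=30, sims=5_000):
    """p-values from many simulated two-sample t-tests with a given true effect size."""
    return np.array([
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(true_effect, 1, n)).pvalue
        for _ in range(sims)
    ])

bins = np.arange(0, 1.05, .05)  # 20 bins of width .05 across the 0-1 range
flat = np.histogram(p_values(0.0), bins=bins)[0] / 5_000
skewed = np.histogram(p_values(0.6), bins=bins)[0] / 5_000

# Under the null, every bin holds roughly 5% of the p-values (a uniform distribution);
# with a real effect, the lowest bins hold most of them.
print("No effect, first four bins (p < .20):  ", np.round(flat[:4], 3))
print("Real effect, first four bins (p < .20):", np.round(skewed[:4], 3))
```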

Why is this awesome?

The p-curve analysis lecture supports the view that there is nothing inherently special about p < .05, especially at the higher end of these values. Indeed, a high proportion of such results within a research literature suggests that the effect may not be as robust as believed. Note, however, that p-curve analysis is a relatively new technique and cannot definitively diagnose p-hacking. There is a more in-depth discussion of p-curve analysis in the course.
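Optional stopping, one of the practices that can produce that pile-up of just-significant results, is also easy to demonstrate by simulation. In this hypothetical sketch of my own (not from the course), a "researcher" peeks at the p-value after every extra 10 participants per group and stops as soon as p < .05; even with no true effect, far more than 5% of such studies end up significant, and a disproportionate share of them stop only just under the threshold. The peeking schedule and sample sizes are arbitrary choices for the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sims, alpha = 2_000, .05
significant, just_significant = 0, 0

for _ in range(sims):
    a, b = np.empty(0), np.empty(0)
    # Start with 10 participants per group, check the p-value, add 10 more,
    # and stop either when p < .05 or at 100 per group. There is no true effect.
    for _ in range(10):
        a = np.append(a, rng.normal(0, 1, 10))
        b = np.append(b, rng.normal(0, 1, 10))
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha:
            significant += 1
            if .04 <= p < .05:
                just_significant += 1
            break

print(f"Studies reaching 'significance' under the null: {significant / sims:.2f}")  # well above .05
print(f"Share of those stopping in the .04-.05 range:   {just_significant / significant:.2f}")
```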


About the course as a whole

The format of the course is engaging. The lectures are short and include quick multiple-choice checks of understanding. This was a nice touch for me in particular, as it led me to revisit a few ideas straight away rather than waiting for the exams to realise my error. The quizzes and exams are exactly what they need to be to ensure that those taking the course have a good overall understanding of the lessons. The practical assignments are a stand-out feature, as they give students the opportunity to engage with the analyses and simulate lots of data to really drive the lessons home. They may also be a nice way to get an initial exposure to R, without any pressure to learn the language. The final assignment requires students to pre-register a small experiment and report it. This was very useful to me, especially as the course will hopefully instil in everybody the benefits of pre-registration and open science. Each week can be completed with a commitment of 2-3 hours, depending on the topic, so the time commitment is not huge.

There are many other useful and interesting insights offered by this course - too many to discuss here. It covers controlling your alpha, increasing power, effect sizes, confidence intervals, and the differences between frequentist, likelihood and Bayesian statistical approaches. We learn about the importance of open science, pre-registration, equivalence testing and much more. Most importantly, the course develops your ability to make inferences from your statistics, and it has left me with a desire to implement these lessons in my research. This course will help me do better science.

I hope that, at the least, this post is appreciated as a positive review and a long-winded thank you to Prof. Lakens for the lessons in the course. More importantly, if it convinces one person in psychology to take the course and learn to better understand and make inferences from our own and others' statistical analyses, then the time spent writing this post has been well spent. Anybody interested, click here for the course homepage.



Any and all comments on this first post of my return to blogging are appreciated. Correct me if I have misquoted anything or misrepresented the information I have discussed.