After surviving writing my dissertation, I knew I wanted to reflect on the whole process. At first, I was thinking about blogging about what I learned and what I would do differently, but to be honest, I am not sure I can do that before I actually get the dissertation back. As a colleague of mine said, I mainly looked shell-shocked the day I handed it in. The featured image is also definitely a lie; the majority of the dissertation process looked more like this:

Of course, with a full cup of tea at hand. Extra kudos if you recognise the programme at the bottom.

My dear colleague, Lorraine, suggested that I could write about why I chose my topic. In a field as interesting as psychology, my topic may pale a bit at first. But here’s why you should care. (Buckle in, this is going to be a long one.)

The main thing I set out to research was whether different approaches to learning had an effect on how well psychology students did on a statistical reasoning test. This final research question was definitely not what I initially intended.

I knew when it was time to pick dissertation topics that what interested me the most was the critical analysis of psychology papers and theoretical issues in using statistics in the 21st century. Long story very short: it turns out that psychology academics have, one time too many, used incorrect statistical methods, misunderstood what a significant result means, and ignored the problems of having small sample sizes. This paper by Button et al. (2013) really highlighted the issue for me. These issues have led to a replication crisis, where as few as one-third to one-half of replication studies reproduce the original findings.

This is hugely problematic.

Why?

Because psychology considers itself a science based on hypothesis testing, where every finding should potentially lead us closer to discovering more of the real world. If we can’t trust published findings in peer-reviewed journals, how can we really trust any finding in psychology? If we base future studies on previous findings (which we should, to avoid constantly reinventing the wheel) but those findings are not a true reflection of reality, are we just fumbling around in the dark, getting no closer to real answers?

The importance of not only replicating findings, but also being okay with your results being challenged, is more widely accepted in harder sciences, such as physics.

Hence, there needs to be massive change in academic psychology if we, as a field, want to be taken seriously. As much as I wanted to challenge the capability of academics, my supervisor said it was a tad early in my career to do so. Perhaps later.

Instead, I decided to focus on those who will hopefully be the next generation of academics: psychology students.

My degree has changed massively since I started, mainly thanks to my two statistics lecturers, who have worked tirelessly, and most of the time thanklessly, to help us become capable data analysts. They have taught us statistical methods usually not encountered before master’s level to help us become more statistically literate. While we will not all become academics, the understanding and ability to critically assess findings is an important and transferable skill.

Lecturers and course organisers need a way to ensure that students are actually learning what they intend them to learn. Things like exams may not always be a true reflection of skills, and certainly if the content has been crammed, it may be forgotten a few months later. Therefore, you’d like students to be able to apply their thinking and reasoning skills in general situations, and not only in an assessment setting. This is what I tested using a statistical reasoning assessment, which measures both correct and incorrect reasoning. I then used a measure that divides students into three different approaches to learning.

What I found was that students too caught up in following the syllabus, having a high fear of failure, and always feeling behind were less likely to correctly apply statistical reasoning. Hence, while they did not seem to have a lot of misconceptions, they were not as good as the two other groups at applying their knowledge in ambiguous situations.

What does this mean? I’m not completely sure. My sample size was small. I couldn’t compare groups as well as I wanted. I couldn’t control for demographics. It was interesting that my model did not have a better fit for misconceptions. Perhaps our statistics courses are already great at ensuring we at least understand some statistics.

But I do think it would be worth doing more work on how we can suit up psychology graduates in the best possible way for the future. I hope I’ve convinced you to (sort of) believe the same.


It was both a big relief and, at the same time, strange to hand in two printed copies of my dissertation.