The Significance of Significance

So earlier this year, with hope in my heart, I entered the Wellcome Science Writing Prize for 2013. Yesterday, like many others, I got the email saying that my piece had not been shortlisted this year, so I decided to put it up here so you can have a read if you’re curious/bored :)

I’m not disappointed. Though I am shockingly thin-skinned in some areas of my life, I’ve always been a complete and utter hard arse when it comes to my writing. No doubt I’ll enter again next year! Commiserations to my friends who also didn’t make the cut, and congratulations to any reading this who did (though I’m not aware of any up to this point).

Bear in mind that I could have written a much longer piece on P values, and no doubt will at some point in future, but this was a strict 800 words. Without further ado….

The Significance of Significance

 

“Scientists have found a significant link…”  

“A significant new study shows….” 

Don’t scientists drive you mad sometimes? With every report on a new study they seem full of bluster. We get excited that there’s a cure for cancer, and it turns out to be a study on the colour of seals’ noses. That’s because there’s a difference between what a journalist might mean by “significant” and what a scientist intends, which is statistical significance.

Statistical significance is a way of showing how big the role of chance is in our findings. The smaller the role of chance, the more we can be certain that the “Thing We Did” affected the “Result”. The bigger the role of chance, the less sure we can be of our theory (hypothesis). We can figure it all out with calculations that give us something called a probability, or “p”-value, which shows the role of chance. But not all scientists are happy with how p-values are used, and some think that using p-values gives us a skewed view of reality.

Surprisingly, it all started with beer. In the early 1900s, Guinness brewer William Sealy Gosset wanted to find the best barley to make his pints. But that meant testing lots of barley crops against each other. There are lots of reasons why crops may vary – where and when they were planted, the sowing method, what the weather was like that year and the number of pests, for example. To eliminate the role of these chance factors, Gosset needed lots of barley, land and time – not a very economical way of finding out about economy. Gosset needed to find a significant result from as few harvests as he could.

Luckily, Guinness employed the best graduates and Gosset was no exception. A gifted mathematician and chemist, he worked with renowned statistician Karl Pearson, and finally published his calculations for small sample significance testing, “t-tables”, in Pearson’s journal Biometrika in 1908.

But it was statistician Ronald Fisher who took p-values from brewing into mainstream science. He wanted to reproduce Gosset’s tables in his 1925 book, Statistical Methods for Research Workers, but due to copyright disputes with Pearson he had to rework Gosset’s t-tables. One of Fisher’s tables gets the blame for science’s fixation with a magic number:

“P” is less than 0.05

It means that pure chance would produce a result like yours less than 5% of the time, or 1 in 20. If you toss a coin 1000 times you won’t expect exactly 500 heads, and something like 510 heads and 490 tails is easily put down to chance. But 550 heads and 450 tails is the sort of imbalance a fair coin would almost never produce, and at that point you might suspect your coin was unevenly weighted.
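
(A quick aside that wasn’t in the 800-word version: if you want to see how those head counts shake out, here’s a little Python sketch using an off-the-shelf binomial test. The numbers are just examples I picked.)

```python
# A toy check of the coin example (my illustration, not part of the essay):
# how surprising is a given head count out of 1000 tosses if the coin is fair?
from scipy import stats

for heads in (510, 530, 550):
    result = stats.binomtest(heads, n=1000, p=0.5)  # two-sided by default
    print(f"{heads} heads out of 1000: p = {result.pvalue:.4f}")

# 510 heads is easily chance (p well above 0.05), 530 is borderline,
# and 550 gives a tiny p-value, so you might start eyeing the coin suspiciously.
```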

Scientists quickly latched onto small-scale significance testing and the power of “p”, particularly in psychology. In his article Negativland, Prof Keith Laws says that in the 1920s only 17% of psychology papers used significance tests, but by the 1960s it had risen to 90%. And the figure still rises.

So what’s the problem? Why the controversy? The 0.05 means too much. Publication bias is a common problem in science journals – we’re more likely to get published if our study supports our initial theory, preferably with a big “wow”. If our study shows the role of chance was too big to call it a clear result, we might not even bother trying to get published, and just shut it in a drawer. In a study of 609 American Psychological Association members, 82% said they’d submit a paper that supports their hypothesis, but only 43% would try with a non-significant finding. This mirrors the views of journal editors. Publication in a major journal is a career high, so some scientists try hard to get those big results and tiny p-values. “Data dredging” programs exist to nudge your p-value down as far as possible. While falsification is thankfully rare, these “makeovers” are common practice and skew visible science towards the sensational when, in reality, science is much more commonplace and uncertain.

What’s more, p-values place too much emphasis on certainty rather than the actual effect. Economist Deirdre McCloskey explains this well. Imagine you have two diet pills. One promises to lose you around 20lbs, but with high variance – you might lose 10lbs, or you might lose 30lbs. The other pill promises only 5lbs, but with a lower variance, maybe between 4.8 and 5.2lbs. Which would you choose? The one with the bigger effect, of course, the one that says you might lose more. The impact of the result matters more to most of us than how sure we are of it.
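
(A second aside that didn’t fit the word count: this toy Python simulation, with numbers I made up, shows how a huge trial of a feeble pill can come out with a far smaller p-value than a small trial of a pill with a genuinely big effect.)

```python
# A toy simulation (numbers invented by me) of significance versus effect size:
# a huge trial of a feeble pill against a tiny trial of a genuinely useful one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Feeble pill: average loss of 0.5 lb, tested on 20,000 people per group.
feeble_treated = rng.normal(0.5, 5.0, 20_000)
feeble_control = rng.normal(0.0, 5.0, 20_000)

# Promising pill: average loss of 15 lb, but only 8 people per group.
promising_treated = rng.normal(15.0, 10.0, 8)
promising_control = rng.normal(0.0, 10.0, 8)

for name, treated, control in [("feeble pill", feeble_treated, feeble_control),
                               ("promising pill", promising_treated, promising_control)]:
    _, p = stats.ttest_ind(treated, control)
    print(f"{name}: mean difference = {treated.mean() - control.mean():.1f} lb, p = {p:.2g}")

# The feeble pill gets by far the smaller p-value; the promising pill has by far
# the bigger effect. The p-value alone tells you nothing about the size of the effect.
```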

We may talk of changing the p-value culture, but since Edwin Boring first criticised p-values in 1919, how much longer until it changes? The misguided emphasis on p-values is too significant to ignore.

 


A Truly EPIC Blog Post

Not long ago I was with my hairdresser, and as the rubber gloves came out and the bleach went on, she asked me in passing what I do for a living.

“Well, I’m a chef,” I said, “But I’m hoping to go back to university full-time this year, and when I’m done I want to be an epidemiologist.”

To my surprise – because mostly people glaze over at this point – my hairdresser’s face lit up. She pointed out some brightly coloured jars on the shelf and told me they’d just switched brand supplier, and she was hoping it would go well. She pointed out that my skin was a bit dry in places but on the whole I was doing pretty well for my 40+ years. Yes, you guessed it. She thought epidemiology was the study of skincare, and that I wanted to be a beauty therapist.

To be fair to my hairdresser, it’s a very common misconception. Most people don’t even notice that epidemiologists exist. Which is odd, considering how pervasive their work is throughout our lives.

Epidemiology is the study of health in defined populations of people, and what might affect it one way or the other. It’s the base science of public health, and it looks to see what might harm or help the health of a large number of people in the same way that a doctor wants to know what might harm or help an individual.

To me, epidemiology is a wonderful thing precisely because it covers so many areas of life. It straddles both the social sciences and the very cutting edge of medicine. You have vaccinations as a young child thanks to public health interventions. That’s epidemiology, that is. You’re given milk at primary school because an epidemiologist somewhere figured out it would improve your calcium intake, and more calcium in young children is a good thing. If your school or workplace suffers an outbreak of norovirus, it’s an epidemiologist who has figured out how the damn thing spreads and will tell you to wash your hands and sanitise contact surfaces such as door handles. Epidemiologists figured out that SARS came from one bloke feeling a bit poorly who took a flight to Hong Kong, how he got it, and where it went from there in time to quell it. And I’m sorry to have to tell you, but epidemiologists are also the annoying people who tell you that smoking causes lung cancer, that you should only drink so many units of alcohol a week and that red meat isn’t that great for you. Epidemiologists, eh? Sucking all the fun out of those little pleasures in life.

A perfect example of this kind of headline came out with a fanfare last Thursday, and was widely reported. The news that processed meat seems to cause a significant increase in risk to our health was not a surprise to many in these days of horse meat scandals and intensive farming, but all the same journalists seemed caught in a panic as they urged us to drop the bacon butties and pick up the salad instead.

But I’m not writing this to tell you that bacon is bad.  Frankly, I’m in no position to preach. What I want to celebrate here is a tremendous monument to international relations and human achievement on a grand scale…. the study that produced those results.

The study was called the European Prospective Investigation into Cancer and Nutrition, or EPIC for short. And it truly was. There are many kinds of study in epidemiology, and EPIC was what we call a Prospective Cohort Study. A cohort study is an observational study where you pick your population, figure out which of them have a particular characteristic and which don’t, then follow them over time – we’re talking years or even decades, here. At the end of it all, you might find that the people with one characteristic fare better or worse than those without it. Simple.

Cohort studies come in two types – a retrospective (historic) cohort, where the result you’re looking at has already happened and you’re looking at medical records to seek correlations, and a prospective cohort. This is where you choose your people first then follow them up over time to see what happens to them.

Recipe for an EPIC Cohort Study

1) Hello, Is It Meat You’re Looking For? Defining Your Objectives

As with any other form of scientific investigation, your hypothesis needs to be clear before you start. The EPIC study worked on the basis that while meat is high in iron and folate, which is good, it’s also high in cholesterol and saturated fat, which isn’t. The meat proportion of diets across the world has been rising steadily since the end of World War 2, and so have coronary heart disease and some forms of cancer. There was clearly a correlation between the rise in consumption of meat and poor health, but nobody knew if one caused the other directly. We can’t just assume; that’s not terribly scientific.

The EPIC report cited a study that took place in Oxford to see if vegetarians were healthier than meat eaters, but it found that in a sample of both vegetarians and moderate meat eaters who led a healthy lifestyle, there wasn’t a great deal of difference. In other words, vegetarians were healthier than the general population, but that might be due to other factors in their lifestyle – jogging and doing yoga and all that lovely healthy stuff, which meat eaters could do as well. Whether you let meat pass your lips or not didn’t seem to be significant.

But EPIC also cited some large American cohorts where there was a clear risk for those who ate high levels of red meat and processed meat, compared to people who still ate meat but not much of it, or mainly poultry. So is it the amount and type of meat that’s making the difference? Or would the lovely people of Europe prove far less prone to this stuff than the USA, due to differences in their lifestyle?

2) Choose your Population

The EPIC study comprised almost 500,000 people, based in 23 different towns in 10 different European countries. Let me say that again: half a million people, twenty-three towns, ten nations. If that isn’t epic then I don’t know what is.

You can recruit your population, or “cohort”, from many different places depending on what you’re looking at. The main thing is that you make sure your cohort represents the people you want to study. It’s no good studying heart disease in, say, 7-year-old girls. You want to look at the people in whom the incidence of heart disease is already highest.

Sometimes your cohort will be quite specific. For instance, if your aim is to study the benefits/harms of radiotherapy as a course of treatment in people with specific cancers, then your sample must come from the total population of people who have that kind of cancer, and you’ll look to specialist clinics and  the like to find them. EPIC had a much wider remit, as, let’s face it, most people in Europe eat meat.

Participants in the EPIC study were mostly taken from the general population. They were recruited from the blood donor programme, from particular companies and health insurance schemes, from the civil service and even a mammogram screening scheme. In the UK, some of the participants were those “health conscious” people from the Oxford study we cited earlier. It’s important to have people you can keep track of over time, so you can follow as many of them up as possible at a later date. This is why things like health insurance schemes are handy – they effectively keep track of your cohort for you.

All participants in EPIC were recruited between 1992 and 2000, and all were between 40 and 70 (for the chaps) and 35 and 70 (for the ladies). In total 511,781 people were recruited. But because cohort studies focus on the development of disease, EPIC needed to ensure that people started off without any of those diseases already – we want to make sure we start from a baseline healthy population. So the next job was to weed out anyone who reported they’d had cancer, had suffered a stroke or a myocardial infarction, and those who hadn’t reported whether they smoked or not. While they were at it, EPIC also cut out the health freaks and couch potatoes, or, as they put it, the top and bottom 1% of the ratio of energy intake to energy expenditure (I prefer my way). This left them with 448,568 participants. Not much to be going on with, then.

3) Confound it all! Or not, preferably…

It would be a marvellous thing if you could put 448,568 people in a sealed room and stop them getting on with their everyday lives. You could force them to stop doing annoying things like drinking too much and going to parties, or having babies, or….

OK, no it wouldn’t. It would be unethical and horrific. But still, cohort studies need some way to cut out as many confounders as possible. Confounders are other things that people might do which could affect the results you’re looking at. People smoke, they drive to work when they could walk instead, they go on diets, they change their eating habits and activity levels when they have children, become unemployed…. the list is almost endless, and any one of these factors might scupper your ability to say your result is significant – definitely down to the one or two factors you’re actually studying.

In short, life goes on. This is especially a problem with cohorts as they study people for such a long time and, let’s face it, life throws up some pretty vexing confounders. What you can do, however, is think up as many of those confounders as you can in advance and allow for them in your study. You don’t treat all your 448,000 people as an amorphous mass; you divide them into sub-groups called strata and try to group similar people together as much as possible. EPIC stratified people according to their age, their weight and height, which town they were in, their smoking and drinking habits, their overall food intake, and their exercise and education levels. These strata could then be analysed separately to try and cancel out the confounding effects. It’s by no means ideal and you’ll never cancel out confounders entirely, but it’s a start.
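
To see why those strata help, here’s a tiny made-up example in Python – a dozen imaginary participants, nothing to do with EPIC’s real data. Compared crudely, the high-processed-meat group looks worse off, but compare like with like within each smoking stratum and the difference vanishes: the apparent effect was really just the smoking.

```python
# A tiny invented example (not EPIC's data) of why strata matter: compare death
# rates crudely, then again within groups of similar people.
import pandas as pd

df = pd.DataFrame({
    "smoker":         [0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "high_proc_meat": [0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1],
    "died":           [0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
})

# Crude comparison, with smokers and non-smokers all lumped in together:
print(df.groupby("high_proc_meat")["died"].mean())

# Stratified comparison, like with like within each smoking stratum:
print(df.groupby(["smoker", "high_proc_meat"])["died"].mean())

# Here the crude figures make high processed meat intake look risky, but within
# each stratum the death rates are identical: the apparent effect was all smoking.
```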

4) Food, Glorious Food… Measuring the “Unmeasurable”

The methods varied a bit from country to country across the EPIC cohort, but there are various ways you can measure people’s food intake if you really want to. The majority of information was taken from Food Frequency Questionnaires, where people respond to specific questions then submit their answers for computerised scoring. For EPIC purposes they were particularly interested in three categories of food intake – red meat, processed meat, and white meat/poultry.

A significant chunk of the EPIC study also went through an occasional 24-hour Recall Interview – this is where an investigator contacts you and asks you to remember what you ate in the last day, and it can provide a lot more detail and nuance than a questionnaire. Ideally this method would be used a lot more, but it’s pretty impractical to ask 20,000 people or more at your particular centre to tell you what they ate every single day. You could in theory do, say, a week’s recall interview, but it’s quite surprising how quickly our memory of what we ate fades, and most people vastly underestimate their true food intake.

In the UK and Sweden participants usually kept a Food Diary, where the food they ate each day was recorded and they were asked to estimate their portion size using a set of standard photos. It’s a reasonably good method, but under-reporting is still a problem.

The absolute gold standard of nutrition studies is called the Weighed Inventory, but it is much more useful in smaller groups of people than a massive cohort like EPIC. In this kind of study all food for a week is weighed and recorded, and the waste left on your plate at the end of a meal is also taken into consideration. Food intake is then calculated from Food Composition Tables. But even in this kind of study, there is a tendency to under-record those sneaky custard creams you had with your cuppa.

5) Add some HSS (Highly Scientific Stalking)…

When you have a study like EPIC, where almost half a million people are studied on average for 12½ years, keeping track of them all is almost a bigger problem than recruiting them in the first place. People move house, change jobs, some will emigrate and some will even die of something that has nothing at all to do with what you’re investigating. You can sign up to a cancer study and still die in a car crash; there’s no guarantee.

Normally, investigators will phone people, write to them, check their medical records and regional health departments, find their change of address through the professional association or university they were recruited through – there’s a reason for choosing participants from professional bodies and the like.

It’s important that as high a proportion of participants as possible is followed up, and every effort should be made to trace the outcomes of all people. EPIC managed to trace the outcomes of 98.4% of their participants, which, considering the size of the cohort, is pretty darned impressive. In all, 26,344 EPIC participants died during the study – about 6%. 37% of the deaths were from cancer, 21% from heart disease, and 3% from diseases of the digestive tract, but that leaves a remaining 39% of deaths not caused by anything that EPIC was investigating.

6) Here Comes The “Smug Boffins” Bit…

My dad, a plumber and dedicated tabloid reader, gave a little snort when I told him I was going back to university to study science. “A scientist? Why do you want to be a scientist? They’re always on the news being smug and everything, claiming some study of monkeys’ sex habits is significant. Significant is the economy! Significant is pensions!”

My dad, like many non-sciencey types, had a fundamental misunderstanding of what “significance” means when scientists refer to results. They’re not making a personal judgement about the value of their work; they’re talking statistics. And statistics take time: for about 5 years after the last outcome of the last person had been sent in, the EPIC team were processing all that information.

Statistics is kind of complicated, but suffice to say the EPIC investigators would have sorted out their strata, cancelled out confounders as best they could, divided meat consumption into categories according to grams consumed, used the Cox regression model, then decided whether the correlation between processed meat consumption and death was significant enough to suggest that processed meat may be a cause.
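
If you’re curious what “using the Cox regression model” actually looks like, here’s a bare-bones sketch in Python using the lifelines library. To be clear, the data, the column names and the effect sizes below are all invented by me – this is not the EPIC analysis, it just shows the shape of the thing: follow-up time, whether the person died, and the factors you want hazard ratios for.

```python
# A minimal sketch of a Cox proportional hazards model with the "lifelines"
# library. Everything below is invented for illustration; it is not EPIC's data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 1000
meat_g = rng.uniform(0, 160, n)        # grams of processed meat per day
age = rng.uniform(35, 70, n)
smoker = rng.integers(0, 2, n)

# Simulate time to death, with meat, age and smoking all (by construction)
# raising the hazard, then censor anyone still alive at 12.5 years of follow-up.
hazard = np.exp(0.004 * meat_g + 0.03 * (age - 50) + 0.5 * smoker)
time_to_death = rng.exponential(60.0 / hazard)
followup_years = np.minimum(time_to_death, 12.5)
died = (time_to_death <= 12.5).astype(int)

df = pd.DataFrame({"followup_years": followup_years, "died": died,
                   "processed_meat_g": meat_g, "age": age, "smoker": smoker})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()   # hazard ratios, confidence intervals and p-values
```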

Scientists are quite finickity about what they consider a probable cause, and what might be just chance. The widely accepted numerical indicator of this is the P-Value. If your P-Value is 0.01, that means there’s only a one in a hundred (1%) chance that pure chance would throw up a finding like yours. The P-Value most scientists are desperate for when crunching their numbers is one of less than 0.05. That means there is just a small risk – 5 in 100 – that something completely random would produce findings like yours. A P-Value of less than 0.05 means you’re really onto something.

The EPIC cohort found an increased risk of death in people who ate more than 20g per day of processed meat, but concluded it was a moderate risk. Their findings on red meat were less conclusive. If you want to find out more about the results you can do so on Henry Scowcroft’s fantastic blog for CRUK here.

As is so often the case, a balanced and varied diet is the key and nobody’s telling you that you should give up your once-a-week-Sunday-morning bacon toastie. The true significance of EPIC to me is that cohort studies like this are going on over years and years all around us, and we have no idea unless either we’re taking part, or the results are published in the future. On my 50th birthday I might take a look at the newspaper projected onto my corneas as I jetpack into work, and see the result of a study that’s taking place right now and I just didn’t know it.

So next time you read a headline saying a new study finds something is bad for your health and that moderation is all-important, maybe go to the link (if there is one) and look at the (hopefully Open Access) paper behind it. Because, much as some of us would like to think otherwise, this isn’t about busybodies and the nanny state making sure we don’t enjoy stuff, just for their own fun. There is some truly amazing methodology behind these headlines, and it’s worth taking a moment to gaze in organisational wonder.


Interview: Dr Chris Chambers

Following on from my post about scientific misconduct yesterday, I spoke to Dr Chris Chambers, psychologist at Cardiff University and Associate Editor on the journal “Cortex”, about his background and his dreams of a scientific revolution.


1) Do you come from a scientific background? Is science in your blood?

I was the first in my family to become a research scientist, although my sister is a medical doctor and definitely has the bug. My dad was a surveyor and draftsman and is a very analytical person. My mum was a taxation accountant. My wife’s side of the family is a completely different story, almost everyone has a PhD in something!

2) Was there any particular thing that made you think “That’s it, I want to be a scientist”?

For me it was a gradual thing. Growing up I was influenced mostly by what I read and what I saw on TV. David Attenborough. Carl Sagan. Jacques Cousteau. I loved books on astronomy and the planets. I struggled to get my head around how big everything was, and I loved the sense of mystery it evoked. I was intrigued by special relativity, which is mathematically simple but conceptually mindblowing. In reading fiction I was also drawn toward characters with introverted academic streaks. I read a lot of David Eddings books, which are in the fantasy genre, and I loved how his main protagonists were researchers first and heroes second. I even noticed how Rupert Bear would solve problems by doing careful research! And, of course, there was Star Trek, a vision of what our future might be if the science geeks win. It conveyed the important message that there was hope after all for dorks like me.

3) Who were your scientific inspirations as a younger person? Who were your teachers and mentors?

I had some hilariously dreadful teachers, but there were three who stood out as positive inspirations. One was Mr Richards, a maths teacher, who made maths entertaining by explaining his method for winning at the dog track and how dividing by zero equals infinity. Another was Mr Dwyer, my fifth form chemistry teacher. He was possibly the tallest, skinniest and beardiest person I’ve ever seen. As well as making stoichiometry slightly less boring than it really is, he let me off one time for bringing in a stack of porno magazines to class. He confiscated them, naturally, but I’m convinced he kept them for himself. The final inspiration at school was my English teacher, whose name escapes me, but he was utterly brilliant and deranged in equal measure. He inspired me to write and, for a very short time, act.

4) Pardon me, but I couldn’t help but notice your antipodean origins, how did you wind up here in the UK?

I won two lotteries. The first was meeting Jemma in 2002. She’s English and was doing her PhD at the University of Melbourne, where I was working at the time as a post-doc. The second was receiving a research fellowship from the BBSRC, which is the major UK research council for biological research. I applied for that fellowship from Australia and we moved to the UK in early 2006. I then set up my lab at UCL before moving to Cardiff a couple of years later. Since then I’ve felt very much at home on this miserably grey, drunken little island.

5) What do you do now? What’s the day job?

It varies greatly from day to day, which is one of the things I love most about science. A typical week can be hard to predict at the outset. Often it will involve meetings with students, staff and collaborators about specific research projects, editing a manuscript that we’re preparing to submit to a journal, working on a grant application, and perhaps giving a lecture or attending one. I also spend several hours each week reading and editing manuscripts submitted to PLOS ONE, where I’m an academic editor. And lately I’ve also been spending a lot of time developing our new pre-registration article at Cortex, and on different forms of public engagement and media work. When I want to procrastinate or avoid humans, I also quite enjoy getting stuck into data analyses or making pretty figures for publications.

6) You’re given an hour of television or radio to talk about a topic you love, to show the world something you’re really passionate about – what would it be about?

At the moment, the issue I would choose is the importance of evidence-based policy, and how the media and public need to think rationally and critically to ensure that we live in an intelligent democracy. We have so many challenges facing us, both locally and globally, and I really think that scientific literacy is a huge part of the solution. So when people watch a Brian Cox documentary I don’t want them to say ‘wow!’ and then forget about it. I want them to ask ‘how?’ and do something about it. When people see a politician use statistics to make a point, I want them to know what the politician means and also what they are trying to avoid. I want people to know about statistical uncertainty, statistical significance, and logical fallacies. We set our own pace in society and education is everything.

7) Harsh, but… give me your top three science tweeters and why. Who do you find yourself reading or linking up to again and again?

That is tough! Ok here goes.

First, Ed Yong, because he’s a prodigious and immensely talented science writer who’s not afraid to shit all over bad journalism or bad science, but equally loves to celebrate the best of both. He’s the kind of writer and communicator I wish we could clone a few hundred times, package in bubble wrap and post to every tabloid in the land. Oh, and David Attenborough once made him a cup of tea*, which makes me want to either kill Ed or be Ed.

Second, Neuroskeptic, because of his keenly rational eye, which often casts things into focus for me and saves me thinking for myself. Also, he was the main inspiration for me pursuing Registered Reports at Cortex. Mind you, I still have no idea who he is. For all I know, he’s already someone I know.

Third, Rebekah Higgitt, a science historian and museum curator who blogs at the Guardian. I find her articles supremely intelligent and provocative, and they often have the jarring effect of challenging my positivist assumptions. Sometimes they’re like eating Brussels sprouts, but I’m convinced they’re good for me.

Finally, I feel Ananyo Bhattacharya, Nature online editor, deserves an honorable mention because he has a unique ability to get a rise out of me, and we’ve had some brilliant debates in the past about things he knows he is completely wrong about.

8) So, the relaunch of Cortex is soon, are you excited?

Yes! This will sound grandiose, but with this new article format I feel we are rediscovering what scientific publishing was meant to be all along. We’re shifting the value from getting ‘good results’ to doing good science. I feel hugely privileged to be at the forefront of this initiative.

9) Did you think Elsevier would accept your proposals for pre-registration, were you nervous?

Actually I didn’t even think the journal editorial board would like the idea, let alone the publisher. But I take my hat off to both Sergio della Sala, the editor in chief of Cortex, and to Elsevier. It takes vision and courage to boldly go where no journal has gone before.

10) Do you think that scientific misconduct is more of a problem with psychology, or do you think you’re just the first field to really hold your hands up and say “we have a problem here”?

Psychology is worse than some and better than others, but overall we seem to be comparable to other biomedical sciences. Actually, though, I think this question takes us off point. For me it doesn’t matter whether we’re more or less prone to misconduct. All sciences should be judging their own behaviour by the objective standards of best scientific practice. This isn’t a relativist argument: there’s a good way to do science and there’s a bad way. Psychology is bogged down in the bad way and we need to change. I admire those psychologists, like E.J. Wagenmakers and Hal Pashler, who are courageous and thick-skinned enough to admit that our discipline needs a revolution. I’m trying to help in my own small way by writing a book at the moment for Princeton University Press about the problems facing psychology and possible solutions. If we’re proactive then we can lead the way for many sciences.

11) The pre-registration process and the conditions for submission of data are quite…. strict! But this is science, that’s a good thing, right?

I hear this a lot, and it always interests me because there are only two really strict aspects to pre-registration. The first is that the scientist follows the experimental procedures as stated. So if you say you’re going to present images of faces on a computer screen for half a second, you stick to it and don’t change your mind half way through the experiment. That’s not really strict, it’s science 101, and to do good science you have to be that strict anyway. The second is that you don’t pretend you found an effect when, deep down, you know you didn’t. As we’ve seen lately, there are many ways to cherry pick complex data sets to find effects that are barely publishable. All pre-registration requires is honesty, that you don’t reinvent your hypothesis to predict something you didn’t expect to find, and that if you analysed your data 100 times and found one statistically significant effect, that you report all 100 analyses rather than picking out the one that ‘worked’. Is it really so strict to ask scientists to plan their methods ahead and to be honest? Pre-registration rewards honesty with a publication, regardless of what the results look like. Everybody wins.
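
(A quick aside from me, not Chris: here’s a toy simulation in Python of that “100 analyses” problem. Run tests on pure noise 100 times and a handful will come up “significant” at p < 0.05 anyway, which is exactly why pre-registration asks you to report the lot.)

```python
# A toy simulation of the "100 analyses" problem: test pure noise 100 times
# and count how many results look "significant" at p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
significant = 0
for _ in range(100):
    group_a = rng.normal(0, 1, 30)   # no real effect anywhere
    group_b = rng.normal(0, 1, 30)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        significant += 1

print(f"{significant} of 100 analyses came out 'significant' by chance alone")
# Typically around 5. Reporting only the ones that "worked" is how a null
# result gets dressed up as a discovery.
```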

12) You say you’d like to see replication encouraged and a new system where researchers are rewarded, not for how many citations they get but how easy they are to replicate… how would you see this working? Any wild ideas or is this just a nice idea at the moment?

At the moment it’s just a theory but we need to sit down and hammer out ways to quantify the reliability of scientific results. We know it can be done, otherwise science would wander randomly and we’d still be in the stone age. But it isn’t, and over time I believe we are converging on a truer representation of the universe and ourselves. The problem we face is that this learning process takes time, often many decades. This is a problem because the timescale of science is far greater than the short-term whims of governments and the media. Aligning these priorities is as much a sociological challenge as a scientific one.

13) You also said, in The Guardian last September, that you’d love to see a new system where the old hierarchy of Prestige Journals is wiped away and replaced by topic-based journals that are more equal and democratic. Don’t you think that’s a bit harsh on the old journals? They’ve brought us some great stuff over the years…

Yes, they have. But I like to think that, one day, we can part company with corporate journals on mostly amicable terms. Mike Taylor has a brilliant Guardian piece about the parable of the farmers and the teleporting duplicators, which shows how corporate publishers add nothing more than illusory value to science. The fact is that everything brilliant ever brought to us by Nature or Science, or any journal ever, was in fact brought to us by a brilliant scientist. There is no such thing as a brilliant journal. Science doesn’t need journals. One day, as an older and slightly balder man, I hope I will be able to look back on the current system and smile tolerantly at how foolish we were, and how far we’ve come.

*Correction: Ed Yong insists that it was coffee, and these scientists can’t get their flippin’ facts straight. David Attenborough made him COFFEE. As if that makes us less jealous. Hmph.


When Science isn’t Scientific

When I fell in love with science again after a while away, I was besotted. I tripped through theories and definitions, misty eyed with possibilities. But the more I hung around with scientific types, the more I realised that not everything in the field was fresh and rosy.

For all the idealism associated with the scientific method, scientists are human. As in any long-term relationship, they can lose sight of what made them fall in love in the first place. They can feel like drudges at the lab workbench, watching colleagues get that grant they wanted while they try to swallow down the jealousy. They can be driven by ego and petty feuds just like anyone. Though the scientific method is pure, those who practise it have to work at staying on track sometimes.

And sometimes scientists need to take a long, hard look in the mirror.

Retractions

Let’s revisit my Mole Nose Hair paper. My paper was peer reviewed, accepted and published in the Nasal Hair Journal. It’s been cited a few times, first by friends and colleagues but increasingly by people around the world who I’ve never met. I’m really excited about this; Mole Nose Hair Studies have taken off in a big way and my career prospects are looking up. Hopefully in the next year or two I’ll get a paper published in the International Journal of Mole Studies.

Say, for instance, my hypothesis was that moles who live in areas with clay soil have longer, denser nose hairs because clay is denser and they need the sensitivity – and that, what’s more, the nose hairs of moles living in clay soil are getting longer with each generation to adapt to this environment. So every week for a year I went out to three different fields around my university, caught moles in a big net and measured their nose hairs with a tape measure (I know, I know. You all see the glamour of science now, don’t you?)

But what if my mole nose hair measurements weren’t always accurate? It’s very hard to hold a struggling mole still with one hand and a tape measure with the other, after all. So maybe some of the readings were more of a guesstimate. Maybe on some especially cold, dark nights I really couldn’t be bothered to get up and go out with my Giant Mole Catching Net, so I made something up that seemed in line with the other findings in my notebook. Just once or twice, mind, and I went out 47 other times, honest. What if I wanted my hypothesis that mole nose hairs are longer and stronger to be published so much that, when I did my number crunching and found no real evidence for it, I gave my results a tiny nudge, just enough to make things a significant “yes” to my theory? It was close anyway. I’m sure if I’d had more time and money it would have shown my hypothesis was true.

Nobody might notice that I didn’t measure very well, or made up a few of my findings, for several years. In that time I’ve been cited in other papers and invited to speak at conferences. But then one day, a keen young researcher in my lab starts looking at my original data for her own paper, and notices something odd. All my measurements seem a bit… tidy. She’d expect a lot more random ones. She pops my final figures into Stata and sees how I used statistical techniques to make them more significant. I have falsified my data.
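
For the curious, here’s one crude way my keen young researcher might have flagged “too tidy” numbers – a toy last-digit check in Python on measurements I’ve made up. It isn’t proof of fraud on its own (lazy rounding produces the same pattern), but it’s the sort of thing that makes you look closer.

```python
# A toy last-digit check on made-up measurements: values recorded honestly to
# the nearest 0.1 mm should have tenths digits spread fairly evenly, whereas
# invented or heavily rounded figures tend to pile up on 0 and 5.
from collections import Counter
from scipy import stats

nose_hairs_mm = [12.5, 13.0, 12.5, 14.0, 13.5, 12.0, 13.0, 12.5,
                 14.5, 13.0, 12.0, 13.5, 12.5, 14.0, 13.0, 12.5]

tenths = [int(round(m * 10)) % 10 for m in nose_hairs_mm]
counts = Counter(tenths)
observed = [counts.get(d, 0) for d in range(10)]
expected = [len(tenths) / 10] * 10

chi2, p = stats.chisquare(observed, expected)
print(f"chi-square = {chi2:.1f}, p = {p:.4f}")
# A tiny p-value flags a suspicious pile-up on certain digits. On its own it
# only says "look closer": honest rounding gives the same pattern as fabrication.
```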

And I wouldn’t be alone. In a study conducted by Daniele Fanelli at the University of Edinburgh, 1.97% of scientists surveyed admitted that they have made up or falsified findings at least once, and 14.7% reckoned that they knew a colleague who had falsified data. A total of 72% had witnessed other dodgy practices from colleagues. So why do these issues not get reported? Most institutions don’t have a whistleblower policy – if you’re a junior post-doctoral researcher, you’re the most likely to see falsification and yet the least likely to report something that may adversely affect your future career. To put it frankly, you don’t want to piss off your supervisors. Which makes the keen young researcher looking at my mole nose hair data all the more brave.

The Nasal Hair Journal would investigate and issue a retraction of my paper – a statement that my paper has been withdrawn from the Mole Nose Hair literature. There are many reasons a paper might be retracted; falsification is only one. The authors themselves might discover a genuine error in their data or conclusions. But whatever the reasons, retractions are seriously on the increase.

According to a recent study by Nature, a decade ago retractions averaged around 30 a year; now they are running at around 400 a year, even though the number of papers published has risen by only 44% in that time. As I say, not all of these will be due to scientific misconduct and some will be for perfectly valid reasons, but when journals vary hugely on the details they release it’s hard to tell. There is an air of mystery about retractions which creates a stigma – if a journal retracts you, you must be a fraud. This culture discourages researchers who discover genuine cock-ups in their own work from coming forward, and the cycle of incorrect data continues….

Some see the rapid rise in retractions as a positive thing, evidence that the procedures and software for detecting fraudulent results are improving, and that after a peak the number of retractions will decline just as steeply. But for others, it’s a sign that Science is not such a self-correcting process as we think. After all, my mole nose hair paper was published in the first place. It’s been cited in other papers, and has tainted them as a result. How on earth did it manage to get through? And more importantly, why would I falsify my data in the first place?

Oh, the humanity! Or, the Prevailing Culture of Journals

We’ve talked about the prevailing culture of big journals before. People want to publish in big journals because the rewards in terms of career prospects and eminence are huge. It’s a major career goal for the average research scientist, and as such they do what they can to write a paper the editors want.

And as a journal editor, what do you want? You want something with a swanky new paradigm shift that will shake up science; you want an attention grabber, something that will be cited forever and back. Most of all, you want a positive result. The null hypothesis – a statement that your results didn’t seem to be related to what you did in any way – is the complete norm in science, yet the enemy of interestingness.

As a journal editor, if you have 50 papers you need to whittle down to 12, you’re going to choose the ones with attention grabbing positive results because they’re the ones that will most grab your readers’ attention, and the ones most likely to be cited in other papers. You need a “wow” factor. And scientists want to give you that wow factor, because the rewards of being published in one of the big prestige journals are immense.

Yet as Keith Laws says in his excellent article in the BMC Psychology journal, “Negativland”…

“Negative findings and replications are science’s road signs telling us how to moderate our journey – we may like to ignore them or find them frustrating but they are vital to progress - and contrary to existing trends, journals must allocate more space and importance to both null findings and replications.”

It’s telling that Laws is a psychologist. This form of publication bias is more prevalent in some areas of science than others and, in the last few years, the field of psychology has done a great deal of soul searching – particularly over bias against studies that attempt to replicate the findings of other studies.

Replications (Not Replicants, that’s Blade Runner)

Replication studies are vital in science; the ability of others to repeat your work and obtain similar data is a cornerstone of the scientific method. It shows that there was a definite relationship between what you did and the results you got – it’s much easier to show your result wasn’t random if others can get it too. Replication also helps prevent fraud, because if others can’t even get close to your results they’re much more likely to realise something fishy is going on.

But replication studies are implicitly discouraged because they lack the pizazz of a new finding. Why would a researcher choose to replicate someone else’s work when they could be doing something more… well, glamorous? In a survey of journal editors, Laws reports, 94% were not in favour of encouraging replications. Even 54% of reviewers thought they were a waste of time. So if you were a young researcher trying to get published to establish your career, why would you even bother with replications in the first place?

So the null hypothesis and the replication study, which are the bread and butter of science, are pushed to one side in favour of the champagne and caviar of spectacular new findings – what can we do about it?

So What’s the Solution?

There are hopeful signs that things are on the turn, and social psychologists are holding up their hands to their problems and taking the lead. Brian Nosek, a psychologist at the University of Virginia, yesterday launched The Center for Open Science in Charlottesville. The aim is to shift science away from the end product – the scientific paper – and remind people that 99% of the stuff of science is in the doing before that. The Open Science Framework is the linchpin, a place for scientists to upload their notes and “show their working”, as Ed Yong succinctly put it in his article yesterday. And with his plans for a 100-strong team dedicated to replications, Nosek is hoping to remind both his fellow scientists and the public that science is all about the process, not just the occasional sensational headlines that come at the end of it.

And in Cardiff, Chris Chambers is doing his part to shake up the journals and make the actual processes of science much more open. A long-time reviewer for Elsevier’s journal Cortex, Chambers was appointed to the board on October 3rd last year and wasted no time at all in setting out his plans. Five days later he published an open letter to his colleagues on the board laying out his ideas for Registered Reports – a process by which researchers would submit their background rationale, hypothesis and methods for approval before their study had even started. Only then would they commence their investigation. Chambers believes this is the best way to prevent publication bias, and he may well have a point. The newly revamped Cortex will launch very soon.

Chambers believes that the real measure of a good scientific career should be based not on how many times you are published or cited, but on how easily your studies can be replicated. A system where replication is celebrated, not yawned at, and where the old prestigious journals give way to titles based on topic and subject area alone. Last September, in an article for The Guardian written with Petroc Sumner, he said:

“Imagine a science in which actively replicating the research of other scientists was rewarded with personal success and funding rather than being derided as dull. Imagine a science in which quality and certainty of findings surpassed quantity in every respect, so much so that the amount of individual input was nearly irrelevant. And imagine a science in which “career-making” journals like Nature and Science simply didn’t exist – a world in which research was categorised by topic before being peer reviewed and published in an open forum, freely accessible to all”

As a call to scientific revolution, it takes some beating. What remains to be seen is whether the rest of science follows the psychologists’ soul-searching example, and if scientists like Nosek and Chambers become the rule rather than the exception.

There will be an interview with Chris Chambers on Science Groupie tomorrow.


A Brief History of Time (Management)

When I was in my final year of A Levels, I made the terrible mistake of falling in love.

Well I fell in love twice, actually. Once with a boy and once with a girl, but if you want that sort of story I’ve written about it plenty of times on my other blog. I asked both of them out and, probably because there weren’t so many lesbians around and she couldn’t afford to be fussy, the girl accepted. Her name was Debbie. We first got together at a Rocky Horror party and I basically popped my cherry while wearing a lab coat, which probably explains more than you’d like about the title of this blog.

Because Debbie and I met in the early spring and were due to sit our A Levels in a few short months, we made the biggest mistake of our academic lives. We uttered those fatal words that fall from the lips of besotted teens everywhere.

“It’s OK, we can do our revising together!”

It still seems odd to me that nations across the world expect their young people to knuckle down and do the kind of studying that will affect their future just as their hormones are quietly exploding, but I suppose if we waited until after that time we’d never get anything done (particularly me, as I’m 42 and my hormones haven’t stopped exploding yet). So Debbie and I dutifully made our revision timetables, and equally dutifully ignored them as we strolled hand in hand and drank cheap cider on Brighton beach.

In compulsory education, in the UK at least, we’re not taught to manage our time. In fact, none of my lessons at school or college were given over to study skills at all. We were simply left to sink or swim. Nobody told me how to research, plan, and write all those essays that were suddenly expected. As long as I handed them in on time, I could fudge it through any way I liked. Although I didn’t go to university straight away I had no reason to believe, from speaking to friends, that anything changed once they entered academia’s hallowed halls. Fresher’s week is more about persuading students to organise their social time than their study time.

Which is why, when I started studying with the Open University, I scoffed and snorted when my first courses seemed to be as much about how I was going to study as what I was studying. I’m reasonably sharp, surely it was up to me to sort that out? To cram for that test at the last minute, to squeeze in a few paragraphs of required reading while the children’s baked beans were warming up? What, you mean the study diary is part of my required work this week?

The idea of learning how to learn seemed almost like an insult to my intelligence; I vowed to skip it and fudge it in favour of the much more interesting stuff, and I suffered for it. Because learning, just like many success stories, is about acquired skill as well as natural intelligence. You can be highly intelligent, but so disorganised it counts for very little when you’re sitting in front of an exam paper. You can spend your time plugging away diligently and, while not shining as a beacon of genius, do OK.

Could you imagine entering a tennis tournament when you haven’t lifted a racket in 25 years? A marathon when your only previous experience is a 5k fun run? No. You’d expect to do some training. Why should embarking on the rigours of a science degree after twenty years away from education be any different for me? Luckily I quickly saw that I was going to struggle if I didn’t listen to the advice, and I now devote some time each week to…. well, learning how to learn.

The hardest thing about returning to the world of education for me has been keeping away from the massive time-hole that is the internet. Which is kind of difficult when we rely on the web so much for research, and Twitter for sources of information on our special interests. Sometimes, when I just can’t seem to concentrate, it takes a far stronger person than me to step away from gossiping on Twitter with my friends and get down to that reading on quantum electron pathways.

But through self-examination, tracking my own habits and jotting down times when I’m at my most alert I’m getting there. For instance, I’ve realised that the best time for me to read and absorb information about quantum electron pathways is about 6am, before my children get out of bed and request cornflakes. After about 2 in the afternoon my brain activity seems to slow right down (a little something called post-lunch-slump), and although I have a brief recovery around 7-8pm I really am best off using all my brain power in the morning.

There are various levels of concentration you use for student life, of course. That concentration you need for the Krebs Cycle isn’t necessarily required for sorting out your notes or figuring out what the hell your timetable is going to be for the next week (working alone has its own dangers). I can fit these small organisation tasks in either in short bursts, to let my brain recover from trying to list amino acids from memory, for instance, or in longer stretches when I know I’ll be frequently interrupted by family life. In this way, somehow, most of the things I need to do get done.

Having a vast amount of unstructured time during which I’m just supposed to…. well, gain knowledge has been disconcerting, and even more so because none of my time is really unstructured. I have school picking up times, times for my job, times when I’m supposed to be cooking dinner because it’s not socially acceptable to be living on instant noodles at 11pm when you have young children… I have a new life of learning to fit in with my existing life and they spend a lot of time rubbing against each other uncomfortably (for instance, I’m writing this post while I can hear my children arguing over whose turn it is to use the PC. I’m doing my best to ignore it). Everyone in my family has had to make some concession to my studies and it takes effort on their part as well as my own.

But the point is, it can be done. And if you really want to, and you have the right support, it will be done. But it won’t always be easy, and I’d be lying if I didn’t say there are some times I wonder why the hell I’m doing it.

By the way, I managed to pass my 4 A levels and was offered a place at university, as was Debbie a hundred miles in the opposite direction. And that, sadly, was where the romance ended. Distance relationships are always tough. I sometimes wonder if we would have stayed together, drinking cider and learning Michelle Shocked songs on the guitar, if we’d both failed our A levels miserably and she’d stayed in the same town. But I’m glad she didn’t. Our time management was obviously better than we thought.


Open Access for the Deeply Confused

On Friday my Twitter feed was awash with scientists and academics both rejoicing and despairing. Two big announcements had been released from either side of the Atlantic regarding the complex issue of Open Access to scientific literature. There was enough talk of Green and Gold to make you feel like Bob Marley had never gone away, and enough back-story to confuse even a hardened devotee of EastEnders.

To non-scientists (and even some scientists) Open Access can feel like a confusing issue. You’ve walked into a room halfway through a debate, and the participants are using language that was agreed long before you even knew there was an issue to be solved. So here’s a very basic guide to Open Access for non-academics. It’s by no means comprehensive and I welcome clarification comments…. but it’s a start if you don’t know where to begin.

How the process works

If I were a research scientist, a lot of my time would be taken up writing proposals – applications to a grant funding body to do some research. This money would usually be for a set period of time, to pay my rent and keep me in pot noodles and clean pants, employ assistants, rent equipment, pay lab fees etc. Say, for instance, I want to look at the evolutionary significance of nose hair development in moles. I write a proposal to get funds for my research and it’s granted – yippee! Sometimes this money is from private institutions, but much more often it is money from government organisations (a hypothetical Nasal Hair Research Council, say), and is, basically, part of your taxes.

So far, so good. I spend a couple of years researching my Mole Nose Hair theory and when I’m ready with my findings I look to publish a paper. As if the evolution of moles’ noses wasn’t enough, this is where it gets really interesting.

You see, the scientific publishing world is dominated by the big science journals – volumes that are published, for instance, quarterly and whose main content is papers from people like me. Journals are the basic scientific currency, the way ideas are communicated. But these aren’t the kind of journals you buy from the newsagent; they’re generally available only by subscription, and most subscribers are academic institutions and libraries. Subscription is pricey and readers usually rely on being part of an organisation that subscribes to them. In fact, the subscription price of journals has risen at nearly 4 times the rate of inflation since 1986, so it’s hard to keep up any other way.

As if all that wasn’t enough, in the “currency” of journals some hold much more weight than others. So I might try and submit my paper to a very prestigious journal first, then work my way further down a ladder until I find a rung on which my Mole Nose Hair theory is accepted. I may try the International Journal of Mole Studies, but that would be living the dream, my friends. Because I’m an early career researcher I stand a better chance with Nasal Hair Journal. In science, who you get published with really matters, and we’ll come back to why a bit later.

But being accepted by a journal is hardly the end of the story. Because science is rigorous and the scientific method is important, my paper will then go through the process of Peer Review. My Mole Nose Hair paper will be given out to several of my colleagues in the mole research field, and some experts who have written other papers on nasal hair evolution. Their job at this stage is, basically, to take it apart and see if it holds up. If my Mole Nose Hair paper survives the process and my peers deem it good enough for publication, I will get my paper published in the journal.

Brilliant, you’re published! So now’s the part where you get rich, right?

Erm… no. Did I not mention? Journals don’t pay you to publish your papers. Authors of the papers within see none of the money that journals make from subscriptions; all the author’s money comes in the form of the grants we’ve already talked about. What publication in a journal brings is prestige. It oils the cogs of your scientific career, and maybe makes your next grant proposal easier. While a banker might ask another banker what their bonus was last year, or an actor might ask “are you working?”, a scientist will ask another scientist “How many papers have you published?”, or “Where are you published?”, and what you answer will either add to or subtract from your professional clout.

The other currency that carries great weight with scientists is citations. This is a way of measuring how influential your paper is on other research that comes after it. Because all scientific papers and articles cite their sources and where their information came from, the more citations you receive, the more influential you are perceived as being. This is partly why a very prestigious journal, which is more widely read, is seen as a good goal. I’ll probably get more citations if my paper is published in The International Journal of Mole Studies than I will in the Nasal Hair Journal.

So this is where we stood until about 15 years ago: a world where scientific research, while not deliberately secretive, was at least closed off and difficult to obtain unless you were part of that research world. Journals were the gatekeepers of knowledge and debate, but their price put them out of reach of people like you and me who may have only a passing interest. But then, of course, as with so many other areas…

The Internet changed things.

As use of the internet spread, some science organisations realised that they no longer had to rely on print media and the gateway journals to disseminate their ideas. Science is a collaborative process, and the more open the process, the more we can collaborate and advance our knowledge in a particular area. Simple. The idea of open access and online publishing as an aid to scientific discourse grew, and the OA movement now encompasses many fields, including scientific research.

So let’s talk a bit about what we mean by open access. PLoS, a series of open access science journals celebrating its tenth year this year, defines open access as “free availability and unrestricted use.” This means that not only paywalls and price barriers disappear, but permission barriers too. Not only can anybody read my Mole Nose Hair paper because it’s freely available on the web, but, as long as they use it for legitimate scholarship, maintain my paper’s integrity and acknowledge me as the author, they’re also free to use my Mole Nose Hair paper as part of their own work into, say, vole nose hairs.

Because I’m not paid any money for my paper, as long as I’m acknowledged and cited where I need to be, it makes no difference to me. I lose no royalties, as happens with, for instance, pirated music. The only things I have to gain or lose are clout and reputation, and the whole “legitimate scholarship” condition pretty much covers that. So I have no real reason not to give up the copyright of my paper and publish under, say, a Creative Commons license.

The whole idea of open access in science is pretty uncontroversial. Across the world, governments are bringing in legislation to make open access the norm. It’s grown arm in arm with the idea that the public have the right to see the research their taxes paid for, and that the more open the scientific method is, the faster we might find that fabled cure for cancer which is so much more important than moles’ noses in the public mind (harrumph!). Open access gives authors a worldwide audience, reduces the expense of journal subscriptions for financially constrained institutions, and increases citations, among many other things. But there is still some reluctance among many scientists to pull away from the prestige security blanket a big journal offers them and publish with some of the new kids on the block. And there is still, in the UK at least, confusion over what form open access should take.

True Colours – Green and Gold

Open access comes in two forms, and this is where the crux of the matter lies. With Green open access, I publish my Mole Nose Hair paper in a subscription journal as per the old method. Keen mole nose enthusiasts, and those doing their own research into the subject, can still lay their hands on my research straight away if their institution pays the subscription, and they will still be at the cutting edge of Mole Nose Studies. What’s more, the old-style journals still have their subscription fee and scientists don’t lose their sense of prestige and security. The difference is that, after an agreed time period (say, 12 months), my Mole Nose Hair paper becomes freely available on the web and is placed in something called an Institutional Repository – for instance, the website of the university I did my mole research with – or in a central repository such as the USA’s PubMed Central. This way, those for whom mole nose research is not vital can still take a look at my research if they wish.

The alternative to Green is Gold open access, where authors publish their paper in an open access journal. Here, the publisher makes the papers freely available straight away and we bypass the old subscription-style journals altogether. Because there is no subscription fee for readers, such journals have to find other ways to pay for themselves, and a number of different business models have sprung up. Some advertise on their sites or sell branded products. Others are crowdfunded or ask for donations. Some offer add-ons, such as the ability to customise your viewing experience and be alerted to papers of special interest. The most common way, however, is for open access journals to charge the author a fee, either when the paper is submitted or when it is published after the peer review process. Often a researcher’s institution will cover these fees on their behalf, and where authors would have to find the money themselves, journals will often waive the fee in cases of financial hardship.

A major problem with open access journals, for some scientists at least, is that there seems to be some snobbery attached to the idea of paying to have your work published. Some fear that the peer review process at open access journals is weaker than at the more “prestigious” routes to publication, and others feel that if your paper were really ground-breaking then you shouldn’t have to pay for it. However, there isn’t any evidence that this is the case, and although it suits the status quo nicely to have these prejudices around, they are slowly changing.

A Tale of Two Countries

Both Britain and the United States are keen to move their open access policies along in the near future. Just over a week ago, the US Congress saw the introduction of the Fair Access to Science and Technology Research (FASTR) Act, which would require agencies with large research budgets to make their results publicly available within six months. The Act has strong support on both sides and, after three previous efforts, there is a great deal of optimism among advocates that now is the right time.

This sense was compounded on Friday, when the US government announced that, separately from the FASTR Act, all publications arising from taxpayer-funded research should be made free to read after a year – a policy that had previously applied only to biomedical research. The White House’s Office of Science and Technology Policy has also asked federal agencies to provide it with a draft policy for how they’re going to do this within six months. Both aspects of the US open access announcements are important – in theory the next government could undo Friday’s statement with a change of policy, but FASTR would bring open access into the realm of legislation, which is much more difficult to reverse.

The USA’s juggernaut Green access policies look to be in a healthy state: if not perfect, they’re at least a good move in the right direction. The rest of the world is looking to the US policies with interest for a lead on how to manage its own open access affairs, and if it’s Green, they seem pretty keen.

And that’s why, in Friday’s other report on open access, the UK seemed a little… out of line. We seem to be going for Gold, and some worry that it’s somewhat bold.

In July of last year, Research Councils UK published a new open access policy which, the House of Lords announced on Friday, seemed a little over-enthusiastic about introducing a Gold policy in the UK. RCUK protested that its new policy was, in fact, even-handed and left the choice of Gold or Green to the authors concerned, but the Lords retorted that the wording of the document clearly presents Gold as the preference and Green as second-best.

Why is this a problem? Gold access should be the ideal, right? Smash the old system to smithereens and create a new model of academic freedom, I hear you cry. That’s all well and good, some say, as long as the rest of the world follows us along the Gold path. But it’s not looking likely. If the UK follows a Gold access policy while everyone else takes the Green path, then those pesky subscription fees for seeing cutting-edge research are still there, and they still have to be paid for the 94% of academic papers that are published outside the UK. On top of that, researchers still face submission or publication fees for publishing in open access journals in the UK. It’s a financial double whammy that most cash-strapped institutions would not be looking forward to, and the fear is that publication fees for UK journals, on top of subscription fees for Green routes worldwide, will take money away from any actual research being done. A lot of open access advocates in the UK, along with the House of Lords, see the Green route as the preferred way – it may be less bold, but it’s more likely to fall in line with what everyone else is doing.

It will be interesting to see how RCUK responds to the Lords’ criticism. The current open access policy is due to be brought in gradually over the next five years at a cost of £50m, £10m of which has been devoted to a budget for paying submission and publication fees in open access journals. The Lords also pointed out that RCUK didn’t do a great deal of consultation before its new policy was drafted, and the fact that the UK seems alone in its enthusiasm for Gold open access may reflect that. Britain has a chance to be bold, to set an example for the rest of the world and be at the forefront of open access, but it will come at a potential price to the scientists and institutions on the front line. And if they have to pick and choose which research to back because even less money is available, moles’ noses may have to take a back seat for a while.


Interview – Suzi Gage


As “up-and-coming” early career scientists go, Suzi Gage is about as up-and-coming as you can get. Suzi won joint-first prize at The Good Thinking Society’s Science Blog awards in December (presented to her in a beautiful, Whitney Houston-fuelled moment by none other than Ben Goldacre). Last week she announced her move from scilogs.com to a regular blog with The Guardian. We had a bit of a chat about her plans for future world domination.

  • Do you come from a scientific background? Is science in your blood?

  • There’s no science really in my family, no. In fact, I’ll be the first person I know of to get a Master’s degree or a doctorate, so I’m blazing a new trail for the family!

  • Was there any one thing that made you think “That’s it, I want to be a scientist”?

  • Hmm, I was always pretty inquisitive as a child, so I guess that’s a surefire sign of scientific leanings. But I can’t think of a specific moment where I decided ‘this is it’. Maybe not until I was at university. I really didn’t enjoy A-Levels, but when I started studying psychology at UCL I wrote my first essay about autism. I found it really exciting and interesting to write, and it came back with a first, which was just the motivation I needed. I realised I was on the right path, and I’ve been in academia ever since (apart from a year in the wilderness, aka NHS commissioning).

  • So who were your scientific inspirations as a younger person? Who were your teachers and mentors?

  • I wouldn’t say I was particularly inspired by science teachers at school, but nor was I put off. When I got to University College there were a few who were amazing: Cecilia Heyes, Gabriella Vigliocco and Jamie Ward particularly spring to mind. Really excellent lecturers, and encouraging in seminars. I worked for Jamie over the summer of my 2nd year, on a synaesthesia project, and it was doing this that opened my eyes to the possibility of research as a career.

  • And what do you do now?

  • I’m currently based at the University of Bristol, doing a 4-year PhD looking at associations of cannabis and tobacco use with later mental health (you can read a short summary of the background rationale for my PhD here). This sits somewhere between psychology and epidemiology, which is basically the study of population health. I’m part of the most awesome lab group ever – TARG (the Tobacco and Alcohol Research Group). The group is based in the School of Psychology, but there are a few of us in the School of Social and Community Medicine too.

  • You’re given an hour of television or radio to talk about one topic that you love, to show the world something you’re really passionate about – what would it be about?

  • Easy! It’d be about the misconceptions that fly around whenever recreational drugs are mentioned. In fact, I’m currently making podcasts about this very issue – not quite a TV show, but I’m getting there! :)

  • Harsh, but… give me your top three science tweeters – who do you find yourself reading or linking to again and again?

  • Obviously my top pick is @edyong209 – not only is his writing exceptional, but he is also a great source of other people’s writing.

  • @garwboy – Dean Burnett’s column in the Guardian is brilliant, and one I always make sure I read.

  • Any current plans you can talk about that you’re excited about? What do you see yourself doing in the long term?

  • I don’t think I have any secrets I can reveal, I’m afraid – ah, if only. PhD World is kind of a long haul, with lots of tiny steps forward rather than big breakthroughs. I’m writing up a few papers for publication at the moment, so hopefully they will emerge some time soon. And outside of the PhD, last week my blog moved to the Guardian website, which is really exciting!

  • Anything I’ve missed? What do you get up to in your free time?

  • Well, when I’m not writing or staring at ones and zeros I’m probably running. I’m in training for the London Marathon, which I’m running to raise money for the CF Trust, a charity of personal importance to me as my cousin’s daughters have CF. I might also be onstage flicking my hair about, as I play synths and sing in Glis Glis, Bristol purveyors of glorious tunes! We’re releasing our first EP on 18th March, and having a launch party in Bristol on 23rd March. Hopefully we might organise a London show too!

Thanks very much to Suzi for taking the time to answer some questions. You can hear a preview of a track from the upcoming Glis Glis EP here.

 
