Posted by: mattamorphosis | January 31, 2012

Reflections Following SPSP in San Diego

I’m sitting on a plane flying back from the big annual social psychology conference, exhausted but more inspired than I have been in a long time. After a couple of years of attending and feeling somewhat apathetic and disenchanted about it all, I found myself bouncing from session to session, attending something in almost every time slot. The session or two that I did miss were productive anyway, spent catching up with colleagues whom I seldom get to see. So, as I sit here on this cross-country flight, I am creating stimuli for experiments, as best I can in-flight sans internet, eager to get back into the lab to push on some of my ideas.

Conferences that bring together some of the best and brightest minds in a given field should inspire the next great ideas. If they don’t, you’re probably in the wrong field. For the past couple of years, I’ve had this concern. This year, much like years ago when I first started going to these conferences, I feel like I am mostly at home in social psychology… which is interesting considering that most social psychologists listen to my ideas and are quick to quip, “well, you have certainly left the psychology world.” I don’t know to what extent this is true, but it’s an interesting thought to contemplate — in order to feel at home in social psychology, I had to break from the field’s traditional research practices and topics, to some extent.

This may provide some commentary on a field that has been struggling in recent years, as it has had to cope with scientists (see Marc Hauser of Harvard University) so motivated to be seen as distinguished that they decided they could doctor their data to fit their a priori predictions (or perhaps, more appropriately, convictions). Following in his footsteps, and likely those of others yet to be discovered, Diederik Stapel decided he didn’t even need to run experiments to test his hypotheses, as if he already knew what the data would show. Stapel would simply fabricate his own data and then publish papers in highly esteemed empirical journals, as if he had run the studies. Tragically, Daryl Bem then published a paper in our field’s top journal, using questionable research and statistical practices, purporting to show that people can predict the future. This work was quickly attacked by many scientists who recognized how damaging a paper so rife with flaws could be to the reputation of the journal and of the field more broadly. These critiques highlighted the many flaws in the studies purported to show the existence of psychic powers, but the damage was already done.

The New York Times took the above-mentioned scientific transgressions and published a scathing critique of the field. Some called for the cessation of all public funding for social psychological research. Others were less punitive, calling for the retraction of certain papers from the public record. All, however, recognized the profound problems facing a field that *should* have so much to contribute to our understanding of social living.

In an effort to understand how widespread these problems are, my adviser Brian Nosek, a social psychologist at the University of Virginia, launched the Reproducibility Project. This project is a large-scale collaborative endeavor in which more than 75 social psychologists randomly select experiments from social psychology journals and attempt to run exact replications of those studies. This *should* provide a sense of how many false positives (Type I errors: effects that come out statistically significant by chance even though no true effect exists) are in different journals and help the field correct for years of loosey-goosey scientific practices.
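
For readers who want a concrete feel for what a Type I error rate means, here is a minimal simulation sketch (my own illustration, not part of the Reproducibility Project’s methods): it runs many two-group experiments in which no true effect exists and counts how often a test still comes out significant at p < .05.

```python
# A false positive (Type I error) is a study that finds a statistically
# significant effect even though no true effect exists. This sketch
# simulates many two-sample experiments under the null hypothesis and
# counts how often p < .05 by chance alone.
import math
import random
import statistics

def two_sample_p(a, b):
    """Welch's t statistic with a normal approximation for the two-sided
    p-value (adequate for the large samples used here)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(0)
n_studies, n_per_group, alpha = 2000, 50, 0.05
false_positives = 0
for _ in range(n_studies):
    # both groups are drawn from the SAME distribution: no real effect
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(0, 1) for _ in range(n_per_group)]
    if two_sample_p(a, b) < alpha:
        false_positives += 1

# the observed rate is typically close to alpha (0.05)
print(f"false positive rate: {false_positives / n_studies:.3f}")
```

The point of the exercise: even a lab doing nothing wrong will occasionally hit significance by chance, which is exactly why a one-off counterintuitive result deserves replication before it enters the literature.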

This project is an important one, but also somewhat scary. What if a high proportion of studies in these well-respected journals do not replicate? What are the ramifications for the field? What will we do?

Brian has also been at the forefront of creating an Open-Science Framework, aiming to make the path forward a better one by opening the scientific process up. Within the Open-Science Framework, scientists will be required to register their theories, hypotheses, predictions, and exact materials on a public website before running studies. Scientists will be asked to submit their analysis scripts before analyzing their data. And scientists will be asked to acknowledge whether a finding actually supported an a priori hypothesis, rather than merely implying in a manuscript that it did (see also Cracking Open the Scientific Process).
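
The registration workflow described above can be sketched as a tiny data structure: a study plan that is hashed before data collection, so any later change to hypotheses, materials, or analysis scripts is detectable. The field names and workflow here are my own illustrative assumptions, not the actual Open-Science Framework interface.

```python
# Hypothetical sketch of a preregistration record. Freezing the plan and
# publishing its hash before running the study makes after-the-fact
# changes to hypotheses or analyses detectable.
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class Preregistration:
    theory: str
    hypotheses: list
    materials: str        # exact stimuli / procedure text
    analysis_script: str  # analysis plan, submitted before seeing data

    def fingerprint(self) -> str:
        """Stable hash of the plan, published alongside the registration."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# illustrative placeholder content, not a real registration
plan = Preregistration(
    theory="Example theory",
    hypotheses=["H1: manipulation X increases outcome Y"],
    materials="Exact instructions and stimuli shown to participants",
    analysis_script="Two-sample t-test on mean Y between conditions",
)
print(plan.fingerprint()[:12])
```

Any edit to the plan, however small, produces a different fingerprint, which is the property that makes "we predicted this all along" a checkable claim rather than a trust-me claim.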

If done right, this initiative will change the face of the field in dramatic fashion. It will also make scientists more efficient. Every social psychologist I know has run multiple studies trying to use experimental manipulations from published work and failed to find the supposed effect. Most of the time, I presume, this is due to subtle parts of the method that scholars simply do not think to include in their manuscripts. Some of the time, however, these effects are just not real. They are spurious findings that worked out in some very interesting, counterintuitive way a time or two in one lab. In the former case, registering the exact materials for public consumption reduces the likelihood of these failed replications. In the latter case, scholars can see how often their colleagues find effects using certain materials. If some materials elicit significant effects in fewer than 5% of the studies that use them, those effects are likely spurious, or at least highly qualified. Other scholars would take note and stop using those materials expecting a certain effect, rather than wasting tens, or even hundreds, of thousands of dollars every year on studies trying to find those effects.

At this year’s conference, I heard many people say that they just don’t trust what they read in journals. Some of the graduate students saying this told me that their advisers instruct them not to read journals because the stuff in them is crap (and reading many of the field’s journals these days means reading about ideas so narrow, and so far removed from the real world that many of us are interested in studying, that it is hard to develop ideas that pass the theoretical-novelty threshold our top journals require; see Tim Wilson’s presidential address on making our world a better place). I tend to agree with much of what I heard. I do not trust what I read in journals, so I read lots of books and theoretical pieces in journals, talk to smart people, and devise my own ideas. As a young scholar, I have had more success with this strategy than with the journal-article-minutiae strategy. It is my hope, though, that through the Open-Science Framework and related projects (especially the Reproducibility Project), we can restore faith in our journals and not only repair the reputation of the field that I love, but emerge better and more widely respected than ever before.

