From: Daniel Kahneman
Sent: Wednesday, September 26, 2012 9:32 AM
Subject: A proposal to deal with questions about priming effects
I write this letter to a collection of people who were described to me (mostly by John Bargh) as students of social priming. There were names on the list that I could not match to an email. Please pass it on to anyone else you think might be relevant.
As all of you know, of course, questions have been raised about the robustness of priming results. The storm of doubts is fed by several sources, including the recent exposure of fraudulent researchers, general concerns with replicability that affect many disciplines, multiple reported failures to replicate salient results in the priming literature, and the growing belief in the existence of a pervasive file drawer problem that undermines two methodological pillars of your field: the preference for conceptual over literal replication and the use of meta-analysis. Objective observers will point out that the problem could well be more severe in your field than in other branches of experimental psychology, because every priming study involves the invention of a new experimental situation.
For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research. Your problem is not with the few people who have actively challenged the validity of some priming results. It is with the much larger population of colleagues who in the past accepted your surprising results as facts when they were published. These people have now attached a question mark to the field, and it is your responsibility to remove it.
I am not a member of your community, and all I have personally at stake is that I recently wrote a book that emphasizes priming research as a new approach to the study of associative memory – the core of what dual-system theorists call System 1. Count me as a general believer. I also believe in a point that John Bargh made in his response to Cleeremans, that priming effects are subtle and that their design requires high-level skills. I am skeptical about replications by investigators new to priming research, who may not be attuned to the subtlety of the conditions under which priming effects are observed, or to the ease with which these effects can be undermined.
My reason for writing this letter is that I see a train wreck looming. I expect the first victims to be young people on the job market. Being associated with a controversial and suspicious field will put them at a severe disadvantage in the competition for positions. Because of the high visibility of the issue, you may already expect the coming crop of graduates to encounter problems. Another reason for writing is that I am old enough to remember two fields that went into a prolonged eclipse after similar outsider attacks on the replicability of findings: subliminal perception and dissonance reduction.
I believe that you should collectively do something about this mess. To deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating. Specifically, I believe that you should have an association, with a board that might include prominent social psychologists from other fields. The first mission of the board would be to organize an effort to examine the replicability of priming results, following a protocol that avoids the questions that have been raised and guarantees credibility among colleagues outside the field.
The following is just an example of such a protocol:
· Assemble a group of five labs, where the leading investigators have an established reputation (tenure should perhaps be a requirement). Substantial labs with several students are the most desirable participants.
· Each lab selects a recent demonstration of a priming effect, which they consider robust and most likely to replicate.
· The board makes a public commitment to these five specific effects.
· Set up a daisy chain of labs A-B-C-D-E-A, where each lab will replicate the study selected by its neighbor: B replicates A, C replicates B, etc.
· Have the replicating lab send someone to see how subjects are run (hence the emphasis on recency – the experiments should be in the active repertoire of the original lab, so that additional subjects can be run with confidence that the same procedure is followed).
· Have the replicated lab send someone to vet the procedure of the replicating lab as it starts its work.
· Run enough subjects to guarantee power (probably more than in the original study).
· Use technology (e.g. video) to ensure that every detail of the method is documented and can be copied by others.
· Pre-commit to publish the results, letting the chips fall where they may, and make all data available for analysis by others.
This is something you could do quickly, and relatively cheaply. The main costs are 10 trips, and funds to cover these costs would be easy to get (I have checked). You would have to be careful in selecting laboratories and results to maximize credibility, and every step of the procedure should be open and documented. The unusually high openness to scrutiny may be annoying and even offensive, but it is a small price to pay for the big prize of restored credibility.
Success (say, replication of four of the five positive priming results) would immediately rehabilitate the field. Importantly, success would also provide an effective challenge to the adequacy of outsiders’ replications. A publicly announced and open effort would be credible among colleagues at large, because it would show that you are sufficiently confident in your results to take a risk.
More ambiguous results would be painful, of course, but they would still protect the reputations of scholars who sincerely believe in their work – even if they are sometimes wrong.
The protocol I outlined is just an example of something you might do. The main point of my letter is that you should do something, and that you must do it collectively. No single individual will be able to overcome the doubts, but if you act as a group and avoid defensiveness you will be credible. All best,
Response to Ed Yong’s Questions
2 Oct 2012
Ed Yong, a science journalist (http://notexactlyrocketscience.wordpress.com/), emailed questions in response to a call from Danny Kahneman that priming researchers should engage in a concerted effort to replicate findings. Ed's questions are in italics. I share the answers in the form of a public Google Document because they are longer than Ed can possibly use, and I anticipate that there will be questions down the road about what I may or may not have said once parts of my response are quoted.
Questions & Answers
EY: Do you think that suspicions about social priming are as strong as he suggests?
NS: Experiments are conducted to test theoretical predictions. No theoretical proposal stands or falls on the basis of a single, isolated finding. Instead, theoretical proposals are evaluated on the basis of a body of convergent findings and their compatibility with what else we know. Individual findings can provoke a rethinking of assumptions, but they are just one building block in a research program.
In his book “Thinking, fast and slow” Danny Kahneman has done a masterful job of reviewing and integrating the diverse findings that some people loosely refer to as “priming research” (knowledge accessibility effects, automaticity, fluency, and so on). As his book shows, there is a large body of converging findings from labs around the world, accumulated over almost four decades of peer-reviewed research published in an array of different journals. This work paints a coherent picture of the underlying processes that does not ride on any single individual finding. Researchers familiar with this literature are also familiar with the large number of conceptual, and sometimes exact, replications and the convergence documented in meta-analyses.
There is no empirical evidence that work in this area is more or less replicable than work in other areas. What distinguishes this work from other areas is solely that some of the findings are more surprising to lay people than findings in other domains. Unfortunately, the surprise value of the findings has sometimes been in the foreground of the publications (and has always been in the foreground of popular reports). This gave some particularly surprising individual findings an iconic status that far exceeds their empirical contribution to theory testing. It also focused the popular discussion on individual results and away from the convergence of a large body of evidence, including many findings that are not eye-catching, and the rather straightforward processes that underlie the surprising effects.
This created a context in which the concerns of a few sceptics, focused on one or two iconic findings, received more attention than either the critics’ slim empirical evidence or the relevance of the iconic findings warrants. You can think of this as psychology’s version of the climate change debate: Much as the consensus among the vast majority of climate researchers gets drowned out by a debate created by poorly supported and narrowly focused claims of a few persistent climate sceptics, the consensus of the vast majority of psychologists closely familiar with work in this area gets drowned out by claims of a few persistent “priming” sceptics. Their scepticism is based on isolated nonreplications of individual findings combined with a refusal to acknowledge the results of meta-analyses that count as conclusive evidence in any other area. Their critiques find attention because the findings they doubt are counterintuitive and of interest to a wide audience -- a failure to replicate a ten millisecond difference in a standard attention experiment would never be covered by you, Ed, or your colleagues. Hence, nonreplications in other domains of psychology rarely become the topic of public debate -- that people care in the case of “priming” studies is a tribute to those who put these phenomena on the map in the first place. While much remains to be learned about these phenomena, a response of broad doubt is incompatible with the available body of consistent evidence and its compatibility with related domains of knowledge (as Kahneman’s “Thinking, fast and slow” documents).
EY: Would you agree with him that there is a "train wreck looming" and that priming researchers must take action to address the suspicions?
NS: If there is a “train wreck” looming, it is one of public perception, not one of the quality of the vast majority of the scientific work. The perceived “suspicion” far exceeds what critics’ supporting evidence might warrant. But as the climate change debate illustrates, the perceptions created by such debates are difficult to change through scientific evidence. Obviously, Danny Kahneman is more optimistic on this count than I am -- he thinks that the “suspicions” are unwarranted and that the perceptions can be corrected by the daisy-chain replications he suggests.
EY: Does his suggestion of a daisy-chain of labs carrying out replications make sense? Are you willing to take up the suggestion, and would others in the field do the same? If so/not, why?
NS: A daisy-chain of replications is an interesting idea and could provide information about the reliability of new results that is more quickly available than the results of meta-analyses. I will participate in such a daisy-chain if the field decides that it is something that should be implemented in a broader way. I will not participate in it when it is merely directed at one single area of research that happens to be the target of poorly supported “suspicions” voiced by critics who find a few isolated individual results implausible and ignore the majority of the available research. (Independent of this, I will obviously provide what is needed for others to replicate findings from my lab, but that’s not the point of the Kahneman proposal.)