Amid the recent controversy surrounding UCLA political science graduate student Michael LaCour, a professor at Emory University reported similar discrepancies for another one of LaCour’s studies.
Gregory Martin, an assistant professor in political science, attempted last summer to replicate LaCour’s study “The Echo Chambers are Empty,” which claimed that the vast majority of both conservatives and liberals consume predominantly centrist news. The study was relatively influential, cited more than 30 times in the academic community, Martin said.
The Daily Bruin’s Alejandra Reyes spoke to Martin on Wednesday about the discrepancies he found in replicating LaCour’s study and the consequences of erroneous research.
Daily Bruin: What discrepancies did you find between LaCour’s study and your attempted replication?
Gregory Martin: I was using a similar method (for a different paper) and my results were much messier than what LaCour was showing. So I was curious and I asked him for his codes. I thought maybe he had a better method for doing it than I did.
He sent me a computer program that he claimed generated these results, which I’m fairly sure was actually taken from a class he took with (political science professor) Jeff Lewis. The code is something totally different than what he described in the paper. My sense was that he never actually ran it.
When he’s describing his method, he has almost a page (where) he took a verbatim quote from (economists) Matt Gentzkow and Jesse Shapiro. He cites the paper, but he’s got an excerpt about a page long which is not in quotes, as if he had written it himself.
DB: Why exactly do you think the results are different? Do you think LaCour falsified data?
GM: I think it’s completely falsified. It would be very hard to reverse engineer (this part of his research), so I think he created (it) from nowhere and decided where he wanted the TV shows to be. There’s no actual analysis.
DB: Why didn’t you report these differences initially?
GM: In retrospect, I wish I had looked into it more deeply. I didn’t have enough information to match what he was doing and so I couldn’t be sure that it was (falsified). It didn’t seem possible that it would just be completely fabricated. It was (more) likely to me that there was something different in the data … that would cause us to get different results.
DB: Why did you decide to investigate this study and your findings after the recent accusations against LaCour?
GM: Researchers are always building on past work to determine where the new research goes. That’s how the original fabrication in the canvassing study (came about). Science builds on past results and (if) it turns out it’s based on faulty data, it’s important to correct that. I would hope that other people working in the field, if they had cited (it) in their own work, would go back and remove citations and not make any claims based on LaCour’s results and his paper.
DB: Do you think there is pressure to publish noteworthy research for graduate students?
GM: Certainly. The academic job market as a professor is very competitive. There are many more grad students coming out with Ph.D.s than there are jobs. There’s certainly a pressure to have publications before you go on the job market.
DB: What does producing a fraudulent study mean for a career in research or academia?
GM: Producing falsified results in the sense of actually having faked data, which is what it seems like (LaCour) did, is a career-ending move. There’s not many examples of it because it’s so rare, but it’s not something that you recover from. It’s pretty much impossible to recover the trust of other researchers.
Compiled by Alejandra Reyes-Velarde, Bruin reporter.
“There’s not many examples of it because it’s so rare ...”
My question is this – is it so rare because people don’t fudge their data, or is it so rare because people are rarely *caught* fudging their data?
Unfortunately, it is not as rare as it once was. The number of retractions in peer-reviewed publications — including the top of the line like Science and Nature — seems to have climbed lately. The pressure to publish in order to get tenure, or even to get some sort of decent non-postdoc job, is enormous.
Retractions of articles do not necessarily mean the underlying data is faked. It could mean that there was a flaw in the experimental design or data analysis (molestation) that led the authors astray. To falsify data and get away with it is surprisingly difficult. What often happens is that someone tries to use the faked data and determines it is faked because they cannot reproduce the results as described.
Okay, at this point I’m just liking this whole thread.
Fakery is not rare at all. And it can be enormously consequential. Remember Dr. Andrew Wakefield? Or anyone want to google Michael Bellesiles? I just did and found that he actually published again! https://www.insidehighered.com/views/mclemee/mclemee290
It’s pretty pathetic when “faked data” is done so badly
Yes. I can see the temptation to cut corners, but to fake all the data, to have no backup story, no plausible deniability, and to put so little effort into making the fake data look real — all that speaks to a problem more significant than “mere” moral turpitude or other character disorder. Either he wanted to get caught at some level, or he has or had some more serious mental issues.
Or simply intellectual arrogance.
I bet you are right that arrogance is a big part of it. But for him to perform so badly, wouldn’t such arrogance have to have been paired with a level of delusion that we would consider a mental illness (such as, for example, the manic phase of bipolar disorder)? (Note that I’m not defending him with such speculation; people with mental illness still have free will, and I don’t see how “hire me, I was crazy” would get one a job anywhere.)
It is certainly possible. I guess there is no real way to know. However, in any discipline, the act of fraud needs to be accompanied by a fearsome penalty.
I think you’re right that such brazen fabrication means that either he wanted to get caught or he had some kind of mental issue that put him in a very irrational state. Given the stuff he did (claiming to have received non-existent grant money, making up fake emails, verbatim copying of text from other research articles, etc.), it’s amazing that he got this far in a PhD program, let alone published in Science, without someone noticing. It’s also amazing that he thought his recently posted “rebuttal” would do anything to help his case.
“Producing falsified results in the sense of actually having faked data, which is what it seems like (LaCour) did, is a career-ending move.”
Maybe he could find a job with Rolling Stone, reporting on sexual assault.
…..or become a climate scientist.
Well, the temperature data the climate scientists are working on is all real. Their manipulations of data and graphics are a second-order problem which is found in all the social sciences, and which doesn’t end careers, both because the results are pleasing to their peers and funders, and because every data manipulation has an arguable purpose.
So, the ends justify the means. Brilliant.
The ends justified more cash!
How did you read that into my comment? At any rate, no, I am not in favor of manipulating data to come to preferred conclusions.
Seems to me I remember a problem where the raw data from the initial round of “adjustments,” which had been housed at East Anglia Univ., had been “lost.” I might be mistaken or it might have ultimately been found, though. Anybody with an update?
I wrote up something on this years ago. I think you are right. I might have a link in here: http://americanhousewifeinlondon.blogspot.com/2010/08/on-environmentalism.html Not an update but I do have a bunch of background links on Climategate in the Climategate section.
Awesome blog! Thanks for the link!
You are obviously looking for sites that preach to the choir. Environmentalists claimed x and y happened (sans links of course).
I found a scientist’s response to questions about the connection between the recent floods in Texas and climate change illuminating. In Newsweek he points out that the incidence of intense rainfall has increased and that can be connected to climate change. But with respect to the unprecedented flooding the connection is less clear because there are other variables in play:
Studies have shown the odds of very intense rainfall in this part of the country [Texas] have gone up substantially over the last century. The cause and effect with climate change and surface temperature is fairly direct. There’s definitely a connection there.
In terms of the overall weather pattern, we do not know if El Niño will be more frequent or less frequent because of climate change. Overall, we can’t say that the weather patterns that led to the wet conditions this month have had any relationship to climate change that we know of.
And more rain in the Southwest is bad because…
One could attribute the Texas floods to former Governor Perry’s “Pray for Rain” days. Be careful what you wish for.
While rain is valuable, intense rainfall is the least desirable type. It causes flooding and mudslides, both of which affected residents of Texas and caused enormous damage. While those who survived are no doubt grateful, given that others perished and some bodies have yet to be located, their homes may be ruined due to water damage and piles of mud.
I can tell you’re a lib–because NONE is the least desirable type. And we’ve had plenty of that. And while tragic losses have occurred, to try to imply this is “global warming” is just more of the hocus pocus crap you people throw out there. Tell me, Yili–you got any idea of how much atmospheric CO2 mankind is responsible for? How about in the US? How about US coal? Break that down for us, will you? And when you get the answer, you’ll see how foolish this goose chase is. It’s 99% based on fear and guilt.
Thank you. That’s an old one. I’ve happily learned to write shorter these days (although I am currently trimming a third of a family law and custody piece.) I have chatty fingers.
“because every data manipulation has an arguable purpose”
To illuminate a relationship of interest, be it Saddam Hussein’s WMDs or trends in home ownership, etc.
Since Yili doesn’t seem too interested in presenting the numbers, I’ll do it for anyone who cares to know the truth about mankind’s contribution to CO2. To get to the number that, say, coal use in the US contributed to atmospheric CO2 – it’s the most egregious, after all – multiply this string of numbers: 0.0004 x 0.04 x 0.14 x 0.32 x 0.75. That’s 400 ppm total CO2, 4% manmade, 14% US-apportioned (explained later), 32% for electrical generation, and 75% of that from coal. So how much does coal use in the US contribute to atmospheric CO2? A whopping 0.0000005376 – that’s 5,376 TEN BILLIONTHS. Round it up – it’s 5.4 TEN MILLIONTHS. It’s the equivalent of a snipe hunt. There’s no equipment that can read that small an amount on a global scale, so it’s all GUESSES. Add to that the canard that the CO2 mankind contributes to the atmosphere has an outsized effect on global climate change – then why, in the whole time that CO2 has climbed from 385 ppm to 400 ppm, has mankind’s contribution remained level at 4%? It’s a scam, and anyone willing to do the math – which, like observed temperatures, is NOT adjusted or meddled with – will come to the conclusion that this is nothing more than a politically motivated ruse by leftists (read DEMS) to separate you from your hard-earned money.
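For anyone who wants to check that multiplication, here is a minimal sketch in Python that simply reproduces the arithmetic above from the commenter’s own stated inputs; the percentages are the commenter’s claims, not independently verified values.

# Reproduce the arithmetic in the comment above. The inputs are the
# commenter's own claimed figures, not vetted numbers.
total_co2     = 0.0004  # 400 ppm atmospheric CO2, as a fraction of the atmosphere
manmade_share = 0.04    # claimed human share of that CO2
us_share      = 0.14    # claimed US share of the human portion
electricity   = 0.32    # claimed share of US emissions from electrical generation
coal_share    = 0.75    # claimed share of that generation from coal

product = total_co2 * manmade_share * us_share * electricity * coal_share
print(product)  # about 5.376e-07, i.e. roughly 5.4 ten-millionths, as stated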
Career ending move? Unlikely. LaCour’s results pleased all the right people. He still has a shot. Fraud didn’t kill Bellesiles’ career.
Bellesiles was found guilty of academic misconduct by Emory University, had to resign his professorship and was widely denounced as a traitor to his profession (history). He then got an adjunct (part-time) job at Central Connecticut State University, which does not reflect well on that institution, but that’s several steps down from his position at Emory.
Bellesiles has an academic job. Might not be a great career, but it’s a real career. And don’t be surprised if LaCour finagles one too.
LaCour has to get his doctorate before he can secure an academic position. I would presume granting him the degree is unlikely given the misrepresentations he has admitted to, in addition to his inability to document that he ever carried out any surveys associated with the retracted article. Bellesiles was an established scholar when the problems with his book were discovered. His position at CCSU is part-time, so it is unlikely to be his only job.
“There’s not many examples of it because it’s so rare, but it’s not something that you recover from.”
Well, Michael Mann has done OK so far.
Amen–a whole field of charlatans.
You know what bothers me about this? They’re being caught because there are techniques that spot bogus data. But given a little time, these cheats can come up with techniques that create fake data that looks real.
Therein resides the problem. Anyone who is in the academic business knows that you need to look very closely at papers arising from certain places and from certain labs. However, the really good frauds are rarely caught, for two reasons: first, they are good at it, and their data could be just a different enough sample that the results seem possible; and second, once discovered, there is a tendency to cover up the sin quietly to avoid tarnishing an institution or discipline. This last was more common years ago but still exists.
The very best frauds, though, are the ones working in fields where data collection is difficult and subject to very broad variance. Particularly, any field that requires obscure and obtuse modeling that is subject to investigator bias is liable to attract fraud. “Climate Science”, something of an oxymoron, is exactly this.
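To make the “techniques that spot bogus data” mentioned a couple of comments up a bit more concrete, here is one heavily simplified illustration; the thread itself doesn’t name any particular method, so this is just an example of the genre: screening the leading digits of reported figures against Benford’s law. It is a rough Python sketch of a screening heuristic, not an actual forensic tool, and a mismatch is at most a weak signal.

import math
from collections import Counter

def benford_check(values):
    # Compare leading-digit frequencies of the values against Benford's law.
    # Large, systematic deviations can be one weak hint that numbers were
    # invented rather than measured; it is a screening heuristic, not proof.
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    # For each digit 1-9, return (observed frequency, Benford expectation).
    return {d: (counts.get(d, 0) / n, math.log10(1 + 1 / d)) for d in range(1, 10)}

# Purely illustrative, made-up numbers:
print(benford_check([123.4, 18.2, 1.07, 305.0, 26.9, 41.3, 7.7, 190.0]))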
In fact LaCour was revealed to be a fraud rather quickly in contrast to a cancer researcher at Duke. http://www.theverge.com/2015/6/9/8749841/science-frauds-potti-lacour
“Social Science” is neither social nor science.
Except that it is. Maybe go take that ignorance and shove it back in your pocket.
It’s only science if you believe in fairy dust and unicorns. What do you think we are talking about here? A social scientist created bogus “research” cut from whole cloth. A lack of rigor (or, to be charitable, a lack of understanding of the analytic techniques used), a lack of decent peer review and a total lack of control over investigator bias mean that these fields are typically not even one step above opinion.
Or if you believe in the scientific method, probability theory, statistical sampling, and quantitative analysis.
You look like a fool trying to condemn entire disciplines simply because of one charlatan. Don’t be obtuse.
I am not trying to be obtuse. Look at the data generated by the legions of semi-competents in these fields. Some are decent, somewhere, I suppose. I have seen too many abuses of MANOVA, too many multiple t-tests, too much argumentation in the literature about why all of this statistical rigor is useless and why small-N experiments analyzed by guess and by golly are more relevant.
No, vast fields of these oxymoron “disciplines” need to be burned and a fresh start made.
Again, your statistically meaningless anecdotes about some dubious studies (as if those don’t exist in the earth or biological sciences too) are not grounds to condemn entire disciplines. There are many scientific studies – social science as well – that are scientifically rigorous and sound.
The social sciences often lack a careful scientific theory of what they are talking about. In much of social science, the data are mostly surveys. A major exception is economics, where there is a large body of theories grounded in algebra and calculus, and where the data is generated by financial markets or accounting systems or demographics. Economics has also mastered game theory and worked it hard. In much of social science, there is no careful reflection on human motivation. The only well understood and well articulated theory of human action is the theory of rational self-interested actors that permeates economics, and that has made some headway in sociology and political science.
But even economists usually have a flawed understanding of their data and of statistical methods. I know economists whose careers are based on small experiments where the subjects are undergrads. These economists have never studied experimental design. When working with small data sets, statistical methods based on ranks are valuable. Economists do not learn such methods, and software designed for economists does not automate them. Empirical economics is very much grounded in the fitting of equations to data. Most economists do not have a sophisticated, up-to-date understanding of regression diagnostics. Economists fail to appreciate that statistical methods work best with experimental data, and nearly all data sets in economics are made up of observational data. Observational data can be very treacherous.
The rational self-interested actor is a myth. I thought even the dismal science had gotten that far.