The New England Journal of Medicine bills itself as “the world’s most influential medical journal,” and it unquestionably publishes groundbreaking articles about medicine. But all too often in recent years the NEJM has strayed from what it knows – medicine – into what it doesn’t – law and public policy, particularly tort policy. No longer content with editorials encouraging litigation against anyone but doctors, the NEJM now publishes public policy advocacy pieces dressed up as scientific studies, with the implicit suggestion that those studies should get the benefit of the NEJM’s good name in public policy debates.
The problem with straying so far from home is that you end up in places with unfamiliar customs and language. If you pretend to be an expert in unfamiliar territory, you say dumb things that show you don’t know what you are talking about. The NEJM did that when it published its recent “Special Report” on “Whistle-Blowers’ Experiences in Fraud Litigation against Pharmaceutical Companies.”
The purported “study” is really a summary of conversations with 26 people identified as whistleblowers in 17 federal qui tam cases against pharmaceutical manufacturers that were settled between January 2001 and March 2009. Talk about a biased sample! No prosecutors, no pharmaceutical manufacturers, not even an unsuccessful whistleblower. The 26 qui tam relators received a median net payment of $3 million after taxes from their suits. We imagine that might have colored just a bit what they had to say.
And the methodology – well, there doesn’t appear to be much. The study reads like a Hite Report for qui tam litigators. The authors’ method sounds like what any journalist would do to write a magazine article – you call up people and listen to their stories. The whistleblowers complained that it ain’t easy being a whistleblower: relationships are strained; retaliation happens. This sounds like lots of newspaper and magazine articles we have read over the years, or the plot of movies like “The Insider” and “The Informant!”
If this article had appeared in a regular magazine, one might read it and then move on to an in-depth interview with the latest reality television star. But the authors of this study published in the NEJM try to pass anecdotes off as science. They start with a section called “Study Methods,” which throws around a lot of jargon to make it sound scientific. The authors solemnly report that they “conducted individual, semistructured interviews with 26 (62%) of” 42 identified whistleblowers. We don’t know what a “semistructured interview” is, but it sounds, at best, like a semiscientific method. The authors then report that “the interviews had a median duration of 40 minutes (interquartile range, 31 to 49 minutes . . . .” Using terms like “median” and “interquartile” sure makes this all sound precise and scientific, doesn’t it? It sounds to us like a fancy way of saying that the authors talked to people for about half an hour to 45 minutes – but we aren’t scientists.
The study authors then say that they “analyzed the interview transcripts using the constant comparative method of qualitative analysis” and direct interested readers to “a detailed description of the study methods . . . in the Supplementary Appendix.” We’ll spare you the trouble of poring over that document. What it means is that somebody asked 26 people questions based loosely on a script (remember, they are only semistructured interviews) and counted up what they said. If the editors of the NEJM who published this article think that this is science, then we should not be surprised if they nominate Andrew Wakefield for the Nobel Prize in Medicine.
The Special Report then provides some familiar anecdotes of the difficulties of being a whistleblower. Again, this is the stuff we’ve read in numerous magazine articles, although a good magazine editor would omit some of their selected details: “a typical day could be meeting an FBI agent in a parkway rest stop. Sitting in his car with the windows rolled up. Neither heat nor air conditioning.” Special Report at 1836. Wow.
More problems come when the authors move from telling occasionally banal stories to trying to analyze the “data” to come up with “findings.” Despite their pretensions to scientific methods, there is no real statistical analysis of the data. They simply count up how many of the 26 people said various things. We could have done that.
The authors conclude that only six relators “specifically intended” to bring qui tam lawsuits, and the other 20 “fell into the qui tam process” after talking to lawyers on other issues or after being encouraged to file suit by family or friends. Id. at 1834. The authors then make a “finding” that many relators became qui tam litigators “accidentally.” Id. at 1837. As lawyers, we know it’s pretty hard to file a lawsuit accidentally. And the study authors must have a curious view of intent to conclude that an action taken voluntarily and after careful consideration is “accidental” because the actor was urged to take the action by family, friends, or lawyers. If so, then millions of people went to school, took jobs, and got married “accidentally.” Try that argument at home to excuse some dumb thing you did – “But it was accidental, dear, my friends urged me to do it” – and see how far that gets you.
Then there’s the authors’ credulity. They report: “Every relator we interviewed stated that the financial bounty offered under the federal statute had not motivated their participation in the qui tam lawsuit.” Id. at 1834. Right. Anyone with an ounce of common sense would not believe that the relators’ true motivations are so pure. This conclusion demonstrates the pitfalls of taking everything an already biased sample produces at face value. Garbage in, garbage out, as they say.
But the authors don’t even draw the logical conclusion from what their data is telling them. Assume (solely for the sake of argument) that these 26 successful qui tam plaintiffs are being honest. Then the qui tam system urgently needs revision. We’re wasting millions that could be benefiting us as taxpayers. Obviously, these relators didn’t need the millions of dollars they received to motivate them to bring qui tam lawsuits. All that money can go back to the taxpayers or, better yet, our clients, the pharmaceutical companies.
Speaking of money, the authors’ discussion of “financial bounty” is rather ironic because their financial disclosure form states that the lead author, Aaron S. Kesselheim, served as an expert witness against Merck in the Vioxx litigation and received money from a grant funded by the settlement of a fraud case regarding the promotion of Neurontin. We are sure that Dr. Kesselheim would state as convincingly as his 26 subjects that “the financial bounty offered [by his expert fees and grant] had not motivated [his] participation in this [article],” but others might think that he has an axe to grind against Big Pharma. We’ve warned before that plaintiffs’ experts and peer review don’t mix, and this is a good example.
If this article had simply stopped with recounting the subjective experiences of 26 successful qui tam relators, we might have dismissed the article as a harmless collection of anecdotes and statistically insignificant conclusions, a bad but largely inconsequential lapse in editorial judgment by the NEJM. But the Special Report then goes on to offer policy recommendations about changing the entire qui tam system based solely on what that biased sample has to say.
What a great idea – deciding important public policy questions based only on a few interviews of one of many interested groups. Maybe someone will follow this NEJM-approved “scientific” approach to public policymaking and make serious proposals to reform the nation’s regulation of doctors based solely on 30-to-45-minute interviews of 26 successful medical malpractice plaintiffs. Somehow we doubt that the NEJM would publish that article, even if its science was every bit as good as the “science” in the Special Report.
In addition to being based only on the views of a small sample of extremely interested parties, the policy recommendations also betray a serious lack of knowledge of the qui tam system. It is fundamental to public policy analysis that you have to understand a system before recommending changes to it, but the Special Report authors apparently don’t understand the qui tam system. Nor, apparently, do the NEJM editors or peer reviewers, or else they wouldn’t have published this piece. The Special Report states: “[T]he FCA does not distinguish between relators outside and inside the defendant company, whereas we found that insiders tended to contribute much more to the Justice Department investigation and suffered more for their involvement. Such factors should be taken into account in determining compensation.” Special Report at 1838. But you can’t judge what the FCA takes into account solely by interviewing relators; you have to analyze the laws, Department of Justice guidelines, and case law. Although we spend most of our time on product liability cases, we know enough about the qui tam system to understand that the existing system clearly and unequivocally takes these factors into account.
Specifically, a key factor in determining compensation is the relator’s contribution to the investigation, and if insiders contribute more to an investigation, that factor will be considered in awarding compensation. The FCA specifically provides that “the extent to which the [relator] substantially contributed to the prosecution of the action” is a factor in awarding compensation. 31 U.S.C. § 3730(d)(1). And the Department of Justice’s Relator’s Share Guidelines, which the DOJ has used since 1996 to determine compensation, state that compensation should be higher if the relator “provided extensive, first-hand details of the fraud to the Government” and “provided substantial assistance during investigation and/or pretrial phases of the case.” Far from ignoring whether insider relators “suffered more for their involvement,” the Relator’s Share Guidelines state that compensation should be higher if “the filing of the complaint had a substantial adverse impact on the relator.” And courts also have considered whether relators suffered from their involvement in awarding compensation. See, e.g., United States ex rel. Anderson v. Quorum Health Group, Inc., 171 F. Supp. 2d 1323, 1337-38 (M.D. Fla. 2001); United States ex rel. Burr v. Blue Cross & Blue Shield of Fla., Inc., 882 F. Supp. 166, 169 (M.D. Fla. 1995) (“A relator may be entitled to the statutory maximum percentage in situations where the relator has suffered personal or professional hardship.”). Somehow, this whole subject strikes us as beyond the NEJM editorial board’s sphere of professional competence.
It gets worse. Based on interviews with participants in cases that settled from 2001 to 2009 (and whose relevant experiences with retaliation must stretch back into the 1990s), the Special Report further recommends “broadening the scope, or strengthening the penalties, of the antiretaliation provisions,” particularly for insiders and those who make internal complaints. Id. at 1839. These recommendations ignore existing antiretaliation provisions and how Congress broadened the scope, and strengthened the penalties, of the antiretaliation provisions during the time period studied by the authors.
Again, here are some of the specific facts ignored by the study authors. In 2002, Sarbanes-Oxley augmented existing antiretaliation protections by adding 18 U.S.C. § 1513(e), which punishes criminally “interference with the lawful employment or livelihood of any person, for providing to a law enforcement officer any truthful information relating to the commission or possible commission of any Federal offense.” Sarbanes-Oxley also added extensive private civil remedies for whistleblowers in publicly traded companies, which allow a whistleblower to file a complaint with and receive full relief from the Secretary of Labor and to file a complaint in federal court. 18 U.S.C. § 1514A. Contrary to the Special Report’s suggestion that there are inadequate protections for whistleblowers who make internal reports, section 1514A specifically protects from retaliation employees who report misconduct within the company.
It would seem to be pretty elementary science that if you are going to draw any conclusions about something you are studying, you need to consider and account for changes to the system during the study period. Public policy analysis would tell you the same thing. Although we might expect that the NEJM wouldn’t care much about public policy principles, you would think that the NEJM would insist that its articles show some fidelity to scientific principles. But no; the NEJM article ignores this fundamental change to antiretaliation protections smack in the middle of the period studied and never considers whether retaliation problems lessened after the antiretaliation provisions were strengthened. This is like studying the liver disease of a patient population without accounting for half of them becoming alcoholics in the middle of the study period – or like surveying patients about medical records privacy experiences during the past 30 years while ignoring HIPAA. No one would or should take the recommendations of either study seriously.
The Special Report concludes by acknowledging some limitations. “Response to some queries, such as motivations and the role played by the prospect of financial gain, may reflect a socially desirable response bias.” Special Report at 1839. Duh. In other words, the responses may not be true. The authors further acknowledge that their “findings represent the subjective experiences that whistle-blowers were willing to report in interviews.” Id. In other words, their responses don’t give us the whole truth, either. After admitting that (1) their subjects could have lied in what they said, (2) their subjects could just as easily be concealing things, and (3) only those biased 26 subjects – and therefore not any other stakeholder in the qui tam process – were consulted, the authors nonetheless conclude: “Notwithstanding these limitations, our findings suggest that changes to the FCA and qui tam process that mitigate relators’ hardships may help promote responsible whistle-blowing and enhance the effectiveness of this integral component of efforts to combat health care fraud.” Special Report at 1839. No, no, no, no. Any good doctor, social scientist, or public policy expert will tell you that if a study has such severe limitations, you can’t make credible “findings” about the phenomenon being observed, nor should you recommend a course of action. It certainly makes no sense to suggest changing an entire system based only on the subjective views of a biased sample and without understanding either the system or the views of all the other interested stakeholders.
And that’s the problem with this venture by the NEJM into an unfamiliar public policy arena. The NEJM has published an article cloaked in the trappings and jargon of scientific research, and most readers would conclude that an article in the NEJM is based on science. The Special Report isn’t science or even intelligent design; it is simply a collection of anecdotal interviews and deserves no more weight than that – and probably less, given the blatant selection bias. Our real problem is that the NEJM is misusing the credibility it has quite legitimately obtained as “the world’s most influential medical journal” by putting its good name on public policy pieces that are neither scientific nor remotely about medicine. The NEJM should stick to what it does best – medicine – or risk tarnishing its reputation as a respected medical journal.