
The only FDA-approved drug for morning sickness, taken by some 33 million women worldwide since the 1950s, has had a history of ups and downs. A new study adds further uncertainty about the drug.

Diclegis, approved by the FDA in 2013, is the rebranded version of an earlier medication called Bendectin. That pill was widely prescribed for more than 50 years, but in the late 1970s, lawsuits began calling into doubt its safety, alleging that the drug caused birth defects.

Merrell Dow Pharmaceuticals pulled the product from the US market in 1983. But further studies indicated the birth defect link was unfounded. In its approval of Diclegis, the FDA gave the drug its highest safety rating.

But only now coming to light are the data on which the agency based its decision, data stemming primarily from an early-1970s study of Bendectin that was never published.

A pair of researchers from the University of Toronto got hold of 36,000 pages of information on Bendectin from the FDA, as well as documents from the agency’s northern sister, Health Canada. Their analysis, which appears in PLOS ONE, suggests that the original “Bendectin Antinauseant 8-way” trial was hardly a slam dunk in favor of the treatment.

Although the results don’t indicate Bendectin was linked to serious side effects, the trial was marred by several methodological defects. These include the loss of massive amounts of data, widespread inconsistencies in how physicians recorded symptoms of morning sickness, a high rate of dropouts from the study, and, most damning of all, the unavailability today—or in 2013, for that matter—of the raw data of the study.

Bottom line, according to the Toronto researchers: “the prescribing of this medication should not be based on this trial.”

And, the authors of the study say, “regulatory decisions that are based on this trial should be revisited.” We agree.

Bringing data to light

This data trove is only the latest unearthed by the “restoring invisible and abandoned trials” (RIAT) initiative. Launched in 2013 by the BMJ, PLOS, and a group of scientists, RIAT aims to give researchers, regulators, clinicians, and patients access to data that have long been hidden from sight and which might have been prone to bias.

Many people are surprised to learn that researchers who conduct clinical trials do not post or publish all of their study data. Or even most of it. Sometimes, none is publicly available. RIAT is among several recent efforts to bring those data to light, but the process has been slow, to say the least.

The first RIAT project looked at whether a particular protein was useful in figuring out which people with colon cancer needed another surgery. It found that the original study, broadly speaking, had arrived at the correct results: The protein wasn’t useful.

Another was a 2015 review of Paxil, an antidepressant commonly prescribed to children on the strength of a controversial 2001 paper. The analysis found that the paper, which was tainted by ghostwriting, overstated the benefits of the drug. In this new light, Paxil now appears not only to be ineffective in adolescents; it may also be more harmful than the original publication reported.

Studies like these speak to the importance of erring on the side of publishing more rather than less data. We recognize that publishing studies—or even posting raw data—takes time and resources. Although some researchers deliberately hold onto their data to keep them away from “research parasites”—a terrible term—many may simply not prioritize writing papers, for innocent reasons. A survey of clinical trialists published in early 2015, for example, found that “it was apparent from trialists’ accounts that they were often genuinely unaware of the potential problems their decisions not to publish could cause.” The authors of that survey called this “scientific naivety,” rather than “intentional wrongdoing.”

Another factor, of course, is money—or, more precisely, the lack of it. Efforts like RIAT are noble ideas, but without adequate funding, they are at risk of becoming curiosities and not the sort of essential research they deserve to be.

So here’s a thought: Perhaps the FDA could ask drug companies seeking approval for a new medication to set aside money for an independent re-analysis of all of their trial data. The agency could then grant those funds to interested scientists.

Of course, that only helps with future drug approvals. For drugs already on the market, let’s hope that RIAT and efforts like it can persevere in freeing the data that show whether our medications are truly safe and effective.

Republished with permission from STAT. This article originally appeared on January 4, 2016.