Caitlin K. Moynihan, BA, Panne A. Burke, MS, Sarah A. Evans, PhD, Amie C. O’Donoghue, PhD and Helen W. Sullivan, PhD, MPH

First published July 2018

Prescription drug promotion to health care professionals is prevalent, with billions of dollars spent on it yearly.1 Exposure to this promotion has been correlated with increased prescribing frequency among physicians.2 Some studies have found an association with lower prescribing quality, but this finding is not consistent.2 At the same time, several studies indicate that the way clinical trial results are reported can influence physicians, including their intent to prescribe.3-6 This may be a function of physicians' knowledge of clinical trial design or their experience with and skill in interpreting statistics. Surveys have found that physicians believe knowledge of biostatistics is important, but that they have less knowledge than is needed to understand all clinical trial results.7,8 Instead, they tend to rely on informational framing when making prescribing decisions. For example, physicians tend to be more likely to prescribe a drug when results are framed as a relative risk reduction rather than an absolute risk reduction.6 However, little is known about physicians' actual reactions to and evaluations of clinical trial data presented in professional prescription drug promotion. To this end, we conducted in-depth interviews with physicians to examine their understanding of clinical trial data as presented in prescription drug promotional materials.
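The framing contrast described above can be made concrete with a small illustrative calculation. The event rates below are hypothetical, not drawn from any cited trial; they simply show how the same result can be reported two ways:

```python
def absolute_risk_reduction(control_rate, treatment_rate):
    """Absolute risk reduction (ARR): the raw difference in event rates."""
    return control_rate - treatment_rate

def relative_risk_reduction(control_rate, treatment_rate):
    """Relative risk reduction (RRR): the ARR as a fraction of the control rate."""
    return (control_rate - treatment_rate) / control_rate

# Hypothetical trial: an adverse event occurs in 4% of control patients
# and 2% of treated patients.
arr = absolute_risk_reduction(0.04, 0.02)
rrr = relative_risk_reduction(0.04, 0.02)

# The same trial result framed two ways:
print(f"ARR: {arr:.0%}")  # "a 2 percentage-point reduction"
print(f"RRR: {rrr:.0%}")  # "a 50% reduction"
```

A "50% reduction" sounds far more impressive than "2 percentage points," even though both statements describe the identical result, which is why framing can shift prescribing intent.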

Methods

This study was approved by the Food and Drug Administration's Institutional Review Board. We conducted 60-minute interviews with 72 practicing physicians across the United States via telephone and computer. Participants were recruited through Doctor Directory, a former public-use search engine that connected researchers to health care providers who had opted in to be contacted for potential studies. The final sample included primary care physicians (n = 50) and endocrinologists (n = 22) who wrote at least 50 prescriptions per week, allowing the sample to reach data saturation for reliable inferences.9 We included both primary care physicians and endocrinologists to allow for potential comparisons based on the drug promotional materials chosen as study stimuli.

We used quotas to ensure geographic diversity and to reflect the demographics of the American Medical Association's physician population (see Table 1 for final participant demographics). A trained moderator skilled in leading semistructured discussions conducted the interviews remotely via telephone and a web platform, which allowed stimuli to be shared on screen. Physicians were asked about their experience with and understanding of clinical trial data, as well as about any training they may have received in biostatistics and interpreting clinical trial data.

Following the initial background discussion, physicians viewed promotional materials for a prescription drug indicated for weight loss and another indicated for glycemic control in adults with diabetes mellitus, and answered follow-up questions. The stimuli were promotional materials for actual prescription drugs (a slide deck and a sales aid). Due to time limitations, the stimuli were abbreviated; the content provided included the indication, common adverse reactions, contraindications, precautions and warnings, and clinical trial data. Exposure order was randomized, and each participant initially saw only one stimulus set. For the first set, each participant was instructed to read the material as he or she normally would in order to evaluate the information; the moderator then discussed the first set with the participant and asked specific questions. For the second set, the moderator asked the specific questions only. The materials contained clinical trial concepts such as noninferiority and rerandomization. Four researchers coded interview transcripts into an organizational scheme for emergent themes using NVivo (κ = 0.83-0.90). Following organizational coding, responses about specific terminology were extracted for further coding on comprehension, and 2 coders categorized responses as accurate, inaccurate, or did not know (κ = 0.70; see Table 2).
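The inter-rater agreement statistic reported above is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch of the calculation follows; the two coders' labels here are hypothetical (the study's actual coding was done in NVivo), and are shown only to illustrate how a kappa value like those reported is derived:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters: (observed - expected) / (1 - expected),
    where expected agreement is computed from each rater's label frequencies."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical comprehension codes from two raters over ten responses,
# using the study's three categories.
coder1 = ["accurate", "accurate", "inaccurate", "did not know", "accurate",
          "inaccurate", "accurate", "accurate", "did not know", "inaccurate"]
coder2 = ["accurate", "accurate", "inaccurate", "did not know", "inaccurate",
          "inaccurate", "accurate", "accurate", "accurate", "inaccurate"]

print(round(cohens_kappa(coder1, coder2), 2))  # → 0.67
```

Values in the 0.70-0.90 range, as reported in the study, are conventionally read as substantial to near-perfect agreement.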