Today the New York Times ran the second article in a series on a new treatment drug called "buprenorphine," which comes with its own set of problems regarding the doctors administering it and monitoring those in need. In other words, the doctor was doing what he was prescribing. Sounds great. The article neglected to mention who was funding his work, but I suspect that aside from private insurance, the good doctor was treating largely Medicare patients. We supported not one but two addictions. But Big Pharma still wins big.
Then we have the whole debate over the new statin issue, and of course there is concern that the sudden uptick in demand may be biased by race and gender, and again the question of who is responsible for payment. More fraud, more bias. The article about that is here.
Then I found an article on NCBI that discusses fraud and duplicity with regard to drug and alcohol assessments and educational programs. Remember "Just Say No"? The idea is that the very programs put in place to prevent early use are simply using it to fund their operations. There is a great deal of concern right now about privacy rights and student access, and now we find a decade-old study showing that many of these facilities are simply in it for profit; actual prevention and education, not so much. SHOCKING, I KNOW.
So when anyone is required to be screened for drugs or alcohol, be sure there will be a diagnosis that of course allows the participant to partake in expensive and likely unnecessary programs and treatments. And that does what, exactly? Oh, it keeps the bottom line going but does nothing to actually help people recover, rebuild or simply move forward.
Corrupt, venal and incompetent is our current state of America. A state of chaos united.
Conflict of Interest in the Evaluation and Dissemination of “Model” School-based Drug and Violence Prevention Programs
Dennis M. Gorman and Eugenia Conde
1.1 Evaluation of Drug Prevention Programs
One and a half decades ago, Joel Moskowitz (1993) published a paper in this journal that raised serious questions about the quality of outcome research conducted in the field of drug prevention program evaluation. He concluded that the shortcomings present in the design, implementation and data analysis of evaluations were not simply the result of limited resources but rather stemmed from the broader structural and institutional context within which the research was conducted. Among these institutional pressures, Moskowitz included conflicts of interest, noting that:
“Unfortunately, much of the drug abuse prevention research conducted to date suffers from real or apparent conflicts of interest. Evaluations are often conducted to prove that a program merits funding or to market the program on a broader scale. Many investigators evaluate programs that they, or their institutions, have developed and intend to market. Thus, the financial interests of the investigators and their institutions may be directly affected by the outcomes of the research, increasing the likelihood of bias in reporting methods and results” (Moskowitz, 1993, p. 7).
Since the publication of Moskowitz's paper, the types of drug education programs he discussed have become the mainstay of prevention policy in the United States (National Institute on Drug Abuse, 2003; Schinke, Brounstein & Gardner, 2002). While concerns about the quality of evaluations of these programs continue to be raised, these are never mentioned in reviews of the literature written by program developers or in documents that describe so-called “model” or “research-based” programs. Indeed, drug prevention evaluation has become a field of research in which critical debate about issues pertaining to the design, implementation and analysis of the most widely advocated programs is almost entirely absent. It is therefore hardly surprising that the issue of conflict of interest is almost never raised in the drug prevention literature.
1.2 Conflict of Interest
As Tobin (2003) observes, the term “conflict of interest” refers to “a set of conditions in which professional judgment concerning a primary interest, such as the validity of research, might be influenced by a secondary interest, such as financial gain”. Tobin further draws a distinction between conflict of interest and bias. The latter occurs when a researcher's judgment concerning his/her primary interest (i.e., the production of objective knowledge) has been clearly influenced by some secondary and competing interest. In contrast, a conflict of interest exists irrespective of whether the researcher's judgment and behavior can be demonstrated to have been adversely influenced – that is, it exists simply as a condition of the researcher having two competing interests.
The competing interest that has received most attention in the literature addressing the prevalence and effects of such conflicts on the practice of empirical research has been that of financial relationships between investigators and research sponsors. The primary focus of studies that have addressed this issue has been on the scope and influence of the pharmaceutical industry's funding of biomedical research. These studies show that financial relationships between the pharmaceutical industry and researchers are widespread (about 25% of researchers have industry affiliations) and that there exists a systematic bias in the reporting of study outcomes favoring the products of those companies sponsoring the research. Industry sponsorship has also been found to be associated with the use of inappropriate study designs, irregular data analysis and reporting practices (e.g., selective reporting among numerous outcome variables, multiple subgroup analysis), and publication delay.
Recently these types of analyses have been extended beyond the examination of biomedical research. Cosgrove and colleagues (2006), for example, assessed the pharmaceutical industry's relationship with members of the advisory panels that recommend changes in the Diagnostic and Statistical Manual of Mental Disorders, and found that 56% of these individuals had financial ties with drug companies. And although the literature is less comprehensive, studies have also been conducted that examine the influence of other large industries on the quality of empirical research that they sponsor, notably the tobacco and food industries.
1.3 The Institute of Medicine's Open Systems Model of Conflict of Interest
It is easy to be skeptical about research funded by multinational industries whose profit motive clearly outweighs concern for public health. However, from a purely fiscal standpoint there is nothing unique to the industries mentioned above when it comes to having a vested interest in the outcome of studies into the effectiveness of their products. As Tobin (2003) observes, the primary obligation of any manufacturer is to deliver a sound financial return on an investment, and hence there is an inevitable vested interest in the products that are manufactured being perceived as effectively performing the functions for which they were intended with minimal adverse side effects. This is as true of a school-based prevention curriculum as it is of a pharmaceutical pain-killer, a soft drink or a cigarette. It is this interest in the success of the product that is fundamentally at odds with the disinterested orientation that is so basic to the norms of the practice of science.
The divergence between these two norms and their influence upon the practice of research can best be understood using the open-systems model employed in the Institute of Medicine's (2002) recent report on research integrity. The model highlights the fact that conflict of interest typically occurs within a complex organizational system that has a structure and culture, requires inputs of financial and human resources in order to function, and exists in order to produce outputs in the form of goods, products, services, and activities. Using this model, we argue that the following basic differences exist between the organizational culture of the typical institution that functions solely to develop drug and violence prevention intervention programs and the organizational culture of the typical institution that functions solely to evaluate such programs.1
Program Developer Organizational Culture
- Mission: develop and disseminate programs
- Audience: consumers, practitioners, policy-makers
- Commitment to belief system and/or financial return from product
- Culture: advocacy and promoting of products
- Norms: interest, commitment, belief
Program Evaluator Organizational Culture
- Mission: produce and disseminate knowledge
- Audience: researchers, scientists, scholars, practitioners, policy-makers
- Commitment to science, objectivity, “truth”
- Norms: skeptical, critical, rational, inquisitorial
It can be seen that while there is some overlap in the audience to which their work is targeted, there are also fundamental differences between the two organizational cultures. Specifically, the researcher's skepticism and commitment to unearthing the truth is likely to conflict, at least on some occasions, with the program developer's belief in, and advocacy of, his/her program. Indeed, if one accepts the principle of equipoise (that is, that evaluations should only be conducted when there is genuine doubt about the efficacy of an intervention), then one would expect the results of an evaluation to conflict with the developer's positive expectations in about half of the studies conducted. Thus, given the context within which the evaluation occurs, the potential for conflicts of interest is high when either the program evaluator and program developer are employed by the same organization or when the program developer and program evaluator are one and the same person.
The present study examined the relationship between program developers and program evaluators using a sample of “model” school-based drug and violence prevention interventions. Specifically, we addressed the following two questions. First, what is the nature of the relationship between the developers of these “model” programs and the organizations that distribute them? Second, what proportion of the evaluations of these programs published in peer-reviewed journals has been conducted by the developers of the programs, as compared to independent evaluation teams?
2.1 Sample of School-Based Drug Prevention Programs
The sample of “model” drug and violence prevention programs used in the analysis was taken from the Substance Abuse and Mental Health Services Administration's (SAMHSA) National Registry of Effective and Promising Programs (NREPP). We chose the NREPP list as this is the most comprehensive of the best practice drug prevention lists available and has been influential within the field given its sponsoring agency. The research reported here is part of a larger study, commenced in late 2005, designed to assess the types of data analysis and presentation practices used in drug prevention research. Thus, the NREPP model program list used in this study was the one available on the SAMHSA website at that time. While the NREPP rating criteria and selection procedure have subsequently been revised (Substance Abuse and Mental Health Services Administration, 2006), the list of 66 model programs used in our larger study is still accessible on the SAMHSA website, as are the materials (such as program fact-sheets) produced for each.
According to the fact-sheets that appear on the SAMHSA webpage for each program, eight were designated “treatment”. Of the remaining 58 prevention programs, 34 were designated “school-based” or “school-based/community” (hereafter referred to as “school-based”) and 16 were designated just “community” on their fact sheets. Of the remaining eight programs, three were designated “environmental”, four “workplace”, and one was a secondary prevention program targeted at heavy drinking college students. Here we focused on the 34 school-based interventions since these are the most extensively evaluated group of programs and the ones most often packaged in a form that can be sold commercially (typically as curricula).
2.2 Data Collection: Identification of Evaluation Studies
Two types of data pertaining to the 34 school-based NREPP programs were collected for the present study. First we sought to identify all evaluations of the programs that had been published in peer-reviewed journals. The initial step in this process was to conduct a literature search using the general search engine of the Texas A&M University library system, which searches the following databases: the University's Library catalog; the Medical Sciences Library Catalog; Academic Search Premier (EBSCO); MLA Bibliography (EBSCO); PsycINFO 1872-current (CSA); Science Direct; ERIC (EBSCO); and CAB Abstracts (Ovid). The name of the program was first searched for by itself, followed by the name of the program developer. In the next step in identifying appropriate publications to review, the abstracts obtained through step 1 were reviewed by one of the authors (DMG) to ensure that the publication directly pertained to the NREPP program. The lists that resulted from this process (which included journal articles, book chapters and books) were then sent to the program developers, who were informed that we were interested in the material that had been used in establishing the program as a NREPP model intervention. Since it was difficult to establish exactly when this status had been conferred, we sent each developer the complete list of evaluation papers that we had identified through steps 1 and 2 described above. We asked each to review the list and amend it as necessary. Thirty-two of the program developers agreed to review the list and make appropriate edits, one refused, and one stated that she thought we had missed some relevant papers upon her initial review but failed to provide additional information upon subsequent requests.
For the present analysis, we limited our focus to publications that had appeared in peer reviewed journals. The revised lists that we received from the program developers (along with the two lists that were not reviewed by the developers) were further reviewed to ensure that the publication described an evaluation of the program. In order to be included as an “evaluation”, the publication had to include a description of the study design and data pertaining to a process or implementation evaluation and/or an outcome evaluation (including studies focused just on mediational analysis or cost-effectiveness data). The definition was broad in the sense that it set no restrictions on the type of study design, type of data reported, or the length of follow-up. Multiple reports from a single study were also included. Review articles summarizing findings from a series of published evaluations were excluded, as were those publications that only described the conceptual basis of the program or its development and components. Those that just described the design of the evaluation study or that used study data to test etiological models were also excluded.
2.3 Data Collection: Program Developer-Program Distributor Relations
The second type of data we collected pertained to the relationship between the program developer and program distributor. Each SAMHSA fact-sheet lists the program developer, and either this or the SAMHSA website states how the program can be obtained. In many cases it was clear what type of institution or agency distributed the program (for example, a university). In those instances in which it was not clear, we searched ReferenceUSA and LexisNexis using the program's name in order to obtain more details. Finally, for those programs that were distributed by a third party (i.e., not by an organization that the developer owned or directed, or by which he/she was employed), we reviewed publications to identify disclosure statements that specified the type of financial relationship between the developer and distributor. In those instances where this information could not be found in the public domain, we contacted the developer to ask what type of financial arrangement he/she had with the distributor (e.g., licensing agreement, royalty payment).
3.1 Distribution of Programs
Table 1 presents details of the relationships between the developers of the 34 school-based NREPP model programs and the distributors of the programs. We grouped the relationships into five broad categories (rows a-e), with two of the programs being placed in two categories (row f) since the developers both directed companies that provided training in the use of the program and distributed the programs through a publishing company. Two additional points should also be noted. First, one developer had three programs on the list (all of which were distributed by the university at which he worked) and another had two (each of which was distributed by a publishing company). Second, the relationships described in the table are those that existed in late 2006. This is especially important in the case of those programs that are distributed by a third party, as most of these relationships were established some time after the program was initially developed (indeed, in some instances many years after). Publishers typically purchase and distribute established programs that have been evaluated in one or more studies. Thus, the distribution mechanisms for some programs may well have changed over time (e.g., from a university to a publishing company).
The most direct financial relationships between the program developer and distributor exist in those cases where the former owns or directs the company that distributes the program (or provides training in it) or receives remuneration from a third party (typically a publishing company) that sells the program. The distribution of 17 of the 34 programs involved such relationships. The majority of the remaining programs (15) were distributed by the organization for which the developer worked either as an employee or a consultant. Nine of these were universities (row d) and six were private companies (row c). The remaining two programs were distributed by a third party from whom the developers received no royalty payment. In one case this was a charitable foundation and in the other a voluntary health organization.
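The distributor counts reported above can be tallied as follows; this is a minimal sketch whose category labels are paraphrased from the text, not taken verbatim from Table 1.

```python
# Developer-distributor relationships for the 34 school-based NREPP
# model programs, as reported in the text (labels paraphrased).
relationships = {
    "developer owns/directs distributor or receives royalties": 17,
    "developer employed by distributing university": 9,
    "developer employed by distributing private company": 6,
    "third-party distributor, no payment to developer": 2,
}

total = sum(relationships.values())
assert total == 34  # matches the 34 school-based programs

for label, count in relationships.items():
    print(f"{label}: {count} ({count / total:.0%})")
```

Half of the programs (17 of 34) thus involved the most direct form of financial relationship.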
3.2 Evaluation of Programs
The search procedures described in the methods section produced a total of 246 evaluation studies. For two of the programs (both in the category Developer Employed by Private Company that Distributes Program) there were no evaluations published in peer reviewed journals. In addition, there was one other program that was part of a multi-component intervention that was also on the NREPP list. These two programs (both in the category Developer Distributes Program through Third Party with which he/she has a Financial Relationship) were treated as one in the present analysis, since we did not want to double count the two papers that pertained to both programs. The range of published evaluations across the remaining 31 programs was 1 to 37.
The majority (193 of 246) of the published evaluation reports included the program developer as an author. Only 27 of the 246 publications were totally independent, in the sense that the program developer was not one of the authors. These 27 publications came from evaluations of just nine of the programs. In the case of the remaining 26 publications, while the program developer was not an author, there was some association between him/her and the authors. Specifically, the developer had either published previously with at least one of the authors of the publication, or worked in the same organization as the author(s), or was a co-investigator on the project from which the publication came, or was acknowledged by the authors in the publication for contributions to the project.
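The authorship breakdown reported above can be expressed as proportions of the 246 publications; this is a minimal sketch using only the counts stated in the text.

```python
# Authorship breakdown of the 246 published evaluations, per the text.
total = 246
developer_author = 193  # developer listed as an author
independent = 27        # no developer authorship and no association

# The rest had no developer author but some developer-author association.
associated = total - developer_author - independent
assert associated == 26  # matches the count reported in the text

print(f"developer-authored: {developer_author / total:.1%}")  # 78.5%
print(f"fully independent:  {independent / total:.1%}")       # 11.0%
```

In other words, fewer than one in nine published evaluations was fully independent of the program developer.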
This examination of the 34 school-based programs that appear on the NREPP list of model drug prevention programs suggests that little has been done to address Moskowitz's concern that “much of the drug abuse prevention research conducted to date suffers from real or apparent conflicts of interest”. The data presented indicate that there are relatively few published evaluations of these programs that do not involve program developers and that there are few instances in which there is complete separation between the program developer and program distributor.
With regard to the first of these issues, it was argued in the introduction that given the difference between the organizational culture of an agency that develops intervention programs and one that evaluates these, separation of the roles of program developer and program evaluator is preferable in the assessment of the effectiveness of interventions, at least if one's primary goal is to limit conflicts of interest and reduce the potential for bias or distortion that can result from advocacy of the intervention. Examples from the drug prevention field of such separation of roles include most of the evaluations of the DARE program and the Hutchinson Smoking Prevention Project. These studies produced little evidence of program effectiveness. This is consistent with other areas of evaluation research, which show that studies in which program evaluators were significantly involved in program delivery report substantially larger effect sizes than independent evaluations.
Lipsey (2005) argues that the most plausible explanation of the association between developer involvement in an evaluation and increased effect size is implementation integrity; program developers are likely to ensure that the program is delivered in the appropriate manner and with sufficient intensity. He contrasts this idea with a “cynical view” that attributes this finding to biasing or “wish fulfilling” emanating from the developer's vested interest in the outcome of the evaluation. This is clearly an area that requires further study, and we hope to be able to shed some light on this issue in our larger study by examining the types of data analysis and presentation practices used in evaluations that include program developers and those that are conducted by independent evaluation teams.
As for the relationship between program developers and program distributors, we found that in 32 of 34 cases the developer had a financial relationship with the distributor. The nature of this relationship varied: in some cases the developer owned the distribution company, in some he/she received royalty or consulting payments from the distributor, and in some he/she was the distributor's employee. The latter case included developers who distribute their programs through a university. We did not examine the nature of the financial relationships here, for example whether the revenue goes into salary savings or other discretionary accounts of the developer. In addition, even when there is no financial conflict in such relationships, there is the potential for what might be termed “ideological conflicts of interest”, which arise from a different set of institutional pressures. This type of conflict is especially relevant to the evaluation of intervention policies and programs intended to prevent undesirable behaviors such as drug use, and results from adherence to a specific set of beliefs, values or theories that are resistant to rejection or modification when faced with conflicting evidence. A number of the programs on the NREPP list are part of a much broader theoretical or conceptual model that the developer/evaluator has also developed and built a research career upon. This type of potential conflict is probably unavoidable, since it is desirable that interventions are theory-based, and researchers who are knowledgeable about drug use are likely to produce a better intervention than those who know little or nothing about this behavior. However, psychological theories are at times very resistant to modification, and so independent evaluations of all prevention programs – not just those for which there exists a financial conflict of interest – are desirable.
The analysis described herein is exploratory in nature and limited by its focus on just 34 programs and 246 publications. Since conflict of interest is an important issue, it deserves further empirical analysis, especially in light of the emphasis now placed on the identification and dissemination of evidence-based interventions in the field of drug and violence prevention. While the organizations that develop and disseminate these programs obviously have a different mission from businesses such as the food and pharmaceutical industries, it is likely that many of the same institutional pressures will arise as the marketplace for these interventions becomes more lucrative. As noted in one of the independent evaluations that we reviewed: “In 21st century America, education materials are a significant business.”