THE PARTNERSHIP FOR A DRUG-FREE AMERICA STRIKES BACK!
IN RESPONSE:
Drug Issues - Letters (Brandweek, June 8, 1998)
To The Editor:
During
the course of my professional career, I have worked with hundreds of
reporters from nearly all national media, but Dan Hill's article "Drug
Money," in Brandweek, represents one of the most slanted pieces
of reporting I have encountered. It is filled with distortions, misquotes,
innuendo and one-sided coverage of the facts. As one of the major sources
for this article, I freely provided Mr. Hill significant interview time
and am disturbed at how my contributions were distorted and misrepresented.
Senior Editor David Kiley's diatribe about how misguided a media campaign
against drugs is compounds the problem, because it is based on many
of the fallacious assumptions in the article itself. It also raises
serious questions, in my mind, whether he or another member of the editorial
staff hired freelance writer Hill specifically to conduct a "hatchet
job," rather than a balanced piece of reporting.
One of
the particular ironies about this article is that it asserts that the
Partnership for a Drug-Free America (PDFA) ad campaign, and the forthcoming
federally-funded campaign that builds upon it, are proceeding with a
minimally adequate research base. In fact, more research probably has
been used in both the development and the evaluation of PDFA campaigns
than in any other ad campaign in history. The PDFA's first task in the
mid-1980s was an intensive review of the scientific knowledge on youth
drug abuse behavior, with an evaluation of the literature and consultations
with leading scientists in the field. They have continued to keep in
close touch with the scientific community in the years since, and also
have carried out their own ongoing programs of research, conducting
both annual national surveys and a great many focus groups of children,
adolescents, and parents--the major target audiences of the campaign.
Mr. Hill dismisses the earlier national PDFA surveys because the youth
samples were obtained in shopping malls. What he fails to say is that
the samples and the field procedures were carried out in a rigorous
and systematic way over time, generating quite valid results for many
of the purposes for which they were conducted.
David Kiley
says ". . . my curiosity stemmed from realizing that ad agencies
seldom expend a fraction of the sweat and research over such [pro bono]
advertising that they do for their paying clients." That may be
true of the ad agencies, but the PDFA itself spends an exceptional amount
on research, which is made available to the ad agencies and used in
the selection of audience, subject matter and ad strategies.
Kiley also
states that, "The truth is that entities like the PDFA and the
Ad Council have had to be content with what they can get from agencies
and media companies." That may or may not be true for the Ad Council,
but he forgets that the PDFA is a creation of the American Association
of Advertising Agencies, and that the individual agencies often put
their best talent forward so as to look good in the eyes of their peers,
a number of whom are on the Creative Review Committee. Numerous high
industry awards for this advertising strongly attest to the quality
of their work, directly contradicting Mr. Kiley's facile observations.
People
can certainly disagree with the need for a media campaign to prevent
drug abuse, but that does not justify this kind of reporting. Both Kiley
and Hill fault the government for mounting a media ad campaign without
conclusive proof of the success of such projects. Two points are worth
noting. First, we seldom have conclusive proof of anything--witness
the decades-long debate over whether smoking causes lung cancer, even
after 5,000 published studies. Thus, they hold up the ad campaigns to
a ridiculous and unattainable standard. Second, the evidence they spend
so much effort discounting is pretty convincing overall and it comes
from unbiased and independent sources--that is, independent from both
the PDFA and each other.
I believe
that Mr. Hill's article repeatedly misrepresented what I said to him
and that the misrepresentations were biased in the direction of helping
him make his point. In a long article such as this, filled with shadings
of the truth up to outright falsehoods and misrepresentations, it is
difficult to set the record straight. Let me go part of the way by addressing
several specifics:
(1) In
the fourth and fifth paragraphs of the article, Hill says that the PDFA
and ONDCP "have jointly embarked on a nearly $2 billion anti-drug
campaign backed by good intentions and self-interested partisan politics,
but also by flimsy research that would hardly justify launching a new
stain remover, let alone a program meant to help keep children sober
and alive. At least that's what the authors of the research say."
As one
of the three authors to whom he refers, I can say flatly that I neither
said, nor do I think, anything of the sort, and Mr. Hill knows it. His
standards of proof for research prior to action are absurd.
The work
of the PDFA campaign has been based on an extraordinary use of research,
both from the larger field and from the research specifically conducted
by the PDFA. Mr. Hill did not share with his readers my comments on
the fact that, when we first asked questions of students about anti-drug
ads, I never expected to see the high degree to which adolescents said
that the ads affected their attitudes and behavior. Knowing that adolescents
do not like to admit that anyone influences them, especially those who
are trying, and especially adults, I thought it quite possible that
we would get a finding of no effect, even if an effect existed.
He quoted
another researcher as stating that the kids might be telling her what
she wanted to hear, but chose not to quote me saying that I thought
that the response bias might go in the other direction.
(2) Mr.
Hill seems to want very badly to discredit our findings from the Monitoring
the Future study--findings that showed a very high degree of recalled
exposure to the PDFA ads, high credibility with the audience, and high
judged impact on their behavior--by emphasizing that they are not yet
published in a journal article. When asked why not, I told Mr. Hill
that we have a great many important things for which we are responsible
on this study, and that this one had not yet risen to the top of the
pile. Mr. Hill took the considerable liberty of summarizing this by
quoting me as saying that I "just have more important things to
do." This quote captures the meaning of my original statement but
connotes a derogatory implication that was never there. I am sure this
was not accidental. I consider that to be biased and dishonest reporting;
and adding quotations to statements that I never made is particularly
indefensible.
But to
return to the substance of Mr. Hill's critique, the reality is quite
straightforward. I have published the data on anti-drug ads in various
chapters and reference volumes and shared them with professional audiences
(as he reports), but they have not been the primary focus of a journal
article to date. The reason is quite simple: The Monitoring the Future
study is a very large study, with over 2,000 variables measured annually,
dealing with dozens of subjects. Evaluating the media campaign was not
among our primary original objectives--we simply added some questions
about it to the surveys when it was launched. However, that does not
make the results any less valid, nor is there any evidence of our having
restricted others' access to the findings.
(3) Other
misquotes. Mr. Hill quotes me as saying, "But it's very difficult to
measure the effect of the ads, drugs are such a feature of the culture."
This nonsensical
statement, which has the effect of undermining my credibility, should
have read "...But it's very difficult to measure the effects of
the ads, because the ads are such a ubiquitous feature in the culture."
Another quote was, "I'd like to see a ubiquitous campaign, the
voice of society [regain that tenor.]" This mangled quote was further
emphasized by featuring it next to my picture in bold without the brackets.
It follows, "Society was speaking with a single negative voice
about drugs in the late '80s." I believe what I said was that I
would like to see that unified voice regained. In answer to a separate
question, I had said that I thought the ad campaigns important because
they tended to convey a unified norm about drug use to young people.
(4) Finally,
I would like to address Mr. Hill's criticism of self-report data that
underlies his facile dismissal of a lot of very good research. Most
of our information about drug use among our young people is based on
self-report data, and there is a considerable body of evidence that
shows that when young people feel that the researchers have a good reason
for asking, and can provide sufficient protection of their confidentiality,
they give valid information on quite sensitive subjects like drug use.
In our own surveys, we have had up to two-thirds of a high school class
admit illicit drug use by the end of secondary school and up to 80%
of them admit such use by the time they reach their late 20s. Questions
about the effects of advertising are of a far less sensitive nature,
and in our own surveys the respondents have no one to impress (or please)
but the computer, because the self-administered questionnaires are quite
obviously to be optically scanned by machine.
(5) Ironically,
Mr. Hill quotes a pro-drug crusader who himself uses self-report error
to make a fallacious point. Hill writes, "First, [Steven] Donziger
[of Partnership for Responsible Drug Information] quotes a 1996 U.S.
Health and Human Services survey that only 5.9% of children aged 12-17
had ever tried an inhalant." This "fact" is used by both
Donziger and Hill to show that a PDFA newspaper ad claiming that "20%
of all children had ever tried an inhalant by eighth grade," was
inaccurate and therefore sent a dangerous "everybody's doing it"
message. In fact, the PDFA statistic came from our study of eighth graders,
and was accurate. Because the National Household Study is conducted
in the home setting, it routinely gets under-reporting of drug use by
teenagers--a fact that was used strategically, and quite possibly dishonestly,
to trash the work of the PDFA.
(6) On
the last page of the article, in setting up his charge that the PDFA's
focus on marijuana is misguided, Mr. Hill states, "Public health
experts, including Dr. Johnston's research, say that approximately 18%
of youth who try marijuana go on to more serious drugs." Well,
this "fact" is simply grossly in error. What we have repeatedly
found is that from half to two-thirds of young people who use marijuana
go on to use other drugs--in other words, far from being the exception,
it is the norm.
In conclusion,
I found myself wondering what could be the motivation for writing, or
publishing, such a misleading, inaccurate and one-sided tract on a subject
of such importance to our society. Perhaps you would be able to edify
me on that point.
Lloyd D.
Johnston, PhD
Senior
Research Scientist and Program Director
University
of Michigan
I want
to congratulate you on an intelligent opening of the discussion of the
effectiveness of the Partnership for a Drug-Free America's massive new
advertising campaign.
Your reporter,
Dan Hill, prepared a well-researched, accurate, cautious and thoughtful
evaluation of an emotionally charged and poorly understood issue: the
impact of anti-drug advertising on drug use by children. This article
will become a classic because it is the first serious examination of
PDFA's effectiveness in a major journal with no axe to grind.
The PDFA
has the capacity to be a great institution with its efforts to reduce
drug use amongst our youth. One of its major problems is its fear of
criticism. If it senses an inquiry might not support its program, it
cuts off all information about its activities. Many reporters have indicated
that to me.
This is
reminiscent of the Army Corps of Engineers when they were building the
dams, levees and dikes that they promised would hold back the floods
in the Mississippi valley. They quelled criticism vigorously. All America
supported their effort to stem the river's floods. They ignored the
criticism from what they called "flakes." Stemming the use
of drugs by youth is supported by all, but not at the expense of being
dishonest with youth about the facts.
It is predictable
that they will vigorously resent your suggestion that the data is not
in on their approach, or on the strategy of their methods. They will
demand a rejoinder rather than ask for your help in examining the situation
to see if they can improve their efforts to reduce drug use by adolescents.
If they are wrong in their assumptions, it may take many more increases
in youth drug use--at the expense of our youth.
As a scientist,
I praise the work done in the article. As a scientist, I would like
to find out from you if the above prediction is correct. Congratulations.
Thomas
Haines
Chair,
Executive Committee
Partnership
for Responsible Drug Information
Chair,
Biochemistry, CUNY Medical School
Your column
was right about approaching the drug problem on a professional and strategic
basis as opposed to a "creative basis." Making it an agency
creative exercise is at the heart of the problem. As you indicated,
it should be treated as a marketing (or un-marketing) problem.
Jack Trout
Trout &
Partners
Greenwich,
Conn.
Congratulations
on the article. It is astonishing that it should draw instant criticism
when it is merely saying such expenditures should be outcome and research
based. I've been having a similar war over here, and wonder if you can
point me in the direction of answers. I have yet to see any substantial
research on the efficacy of the anti-smoking adverts that we are bombarded
with. These ads focus on scare tactics; tumors and brain bleeds, etc.
My understanding of scare tactics for non-users is that they tend to
have the opposite effect (the success of the 'Death Cigarette' range
in the U.K. being testimony to this), and for smokers, my understanding
from the research on motivational counselling is that telling dependent
users what to do will also often have the opposite effect (the notion
of 'cognitive dissonance'). No one has been able to provide me with
even the flimsiest of evidence that such campaigns work, other than
statements that because Australia's smoking rate has fallen from 33%
[to] 24%, they must be. No accounting for extraneous variables of course.
David Wray
David.Wray@health.wa.gov.au
I applaud
you for your courageous article about the ineffectiveness of the current
strategies of the PDFA. It is unbelievable that there has not been a
word of criticism about these campaigns until now, and it is all the
more admirable of you to have offered it.
No one
wants kids smoking pot, but pretending that marijuana is a monster that
destroys anyone that touches it only ruins our credibility with kids.
They aren't stupid. Let's focus instead on cocaine and heroin (although
in fact kids are much more likely to harm themselves with alcohol and
tobacco).
Scott Kurz
New Paltz,
NY
It was
with a great deal of dismay that we read your article "Drug Money."
The article contained a number of inaccuracies related to our research,
"Does Anti-Drug Advertising Work?"
For example,
we found it very paradoxical that the first portion of the article derides
the use of self-reported data and intentions measured via anonymous
pencil-and-paper tasks, and then the second portion ("Desperately
Seeking Solutions") relies on focus groups to ascertain what form
of message is most effective in reaching adolescents! Clearly, teens
will say that advertising doesn't affect them--the real issue is: does
it? We believe that a mounting body of evidence suggests that it can
indeed reduce the likelihood of trial.
While our
paper is indeed in the process of being revised, to say that the paper
"has been withdrawn from consideration" with the implication
that we did so because of problems with the research badly mischaracterizes
the peer review process and distorts the conversation that Dr. Block
had with Dan Hill. As Dr. Block told your reporter, our reason for withdrawing
the paper was to replicate the results using a related econometric modeling
approach. Our research has now been conducted using two different methodologies,
and each produced essentially the same result. Further, we offer a host
of corroborating evidence consistent with our findings. Much of this
evidence is based on actual behavioral data, as opposed to self-reported
data. All of this suggests that our findings are extremely robust. Since
Dan Hill spent hours on the phone asking specific, detailed questions
about our study, it is highly unlikely that this mischaracterization
can be construed as anything but intentional.
In our
research, we use state-of-the-art econometric techniques and data from
the Partnership Attitude Tracking Study (PATS). Note that this self-reported
data is more likely to understate drug use, suggesting that if anything
we understate the impact of PDFA advertising. In general, if one is
able to obtain significant results when the data are biased toward a
no-result (i.e., a finding that advertising does not have an impact),
then the results are clearly stronger. This is the case in our research
using the PATS and related data.
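The statistical logic here--that systematic under-reporting pushes a
measured effect toward zero, so a significant effect that survives such
data understates the true one--can be illustrated with a minimal
simulation. The Python sketch below is not drawn from the authors'
econometric models; all rates and sample sizes are hypothetical
assumptions chosen only to show the attenuation.

    import random

    random.seed(0)

    N = 200_000        # respondents per group (hypothetical)
    p_control = 0.20   # assumed true trial rate without ad exposure
    p_exposed = 0.15   # assumed true trial rate with heavy ad exposure
    deny_rate = 0.25   # assumed share of true users who deny use

    def observed_rate(p_true):
        # Count respondents who *report* drug trial: true users admit
        # use only with probability (1 - deny_rate), independent of
        # whether they were exposed to the ads.
        admitted = 0
        for _ in range(N):
            if random.random() < p_true and random.random() >= deny_rate:
                admitted += 1
        return admitted / N

    true_effect = p_control - p_exposed
    measured_effect = observed_rate(p_control) - observed_rate(p_exposed)
    print(f"true ad effect:     {true_effect:.3f}")
    print(f"measured ad effect: {measured_effect:.3f}")
    # Expected: measured effect ~ (1 - deny_rate) * true effect, i.e.
    # about 0.038 versus a true 0.050 -- under-reporting attenuates
    # the estimate toward zero.

Under these assumptions the measured gap comes out near 0.038 against
a true gap of 0.050, which is the sense in which a significant result
obtained from under-reported data is, if anything, conservative.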
In maligning
our research, Dan Hill again demonstrates blatant disregard for the
nature of research by quoting out of context, "Since this quasi-experiment
has neither a control group, nor random assignment, it is open to selection
biases, history effects and other sources of error." This quasi-experiment
was conducted as a secondary analysis to corroborate and provide additional
support for our previous findings using mathematical modeling techniques.
A quasi-experiment simply means that it was not a controlled laboratory
experiment, and as such, is always open to the biases we report in our
study. Using this supporting technique, we find that drug consumption
levels were lower after the advertisements were aired than before they
were aired.
The results
of our current econometric models indicate that anti-drug advertising
had a significant impact on the probability of both marijuana and cocaine
trial by adolescents. However, anti-drug advertising did not generally
affect the decision regarding how much marijuana or cocaine to use for
existing users. Further, we test and reject the hypothesis that marijuana
use increases the probability of trying cocaine. Thus, although we find
strong evidence that anti-drug advertising decreases drug trial, its
impact on the volume of consumption of harder drugs by existing drug
users appears to be minimal.
Our research
and the related results represent a balanced perspective regarding anti-drug
advertising. We do not state categorically that anti-drug advertising
works. Additionally, our conclusions are based on solid scientific methods.
The results were obtained without prior expectations regarding the effectiveness
of anti-drug advertising. While we were fortunate to obtain the PATS
data, we were working independently, and were not funded, sponsored
or commissioned by the Partnership. In all these regards, the characterization
of our research in the article is extremely inaccurate.
We sincerely
hope that you take some action to correct what appears to be deliberate
mischaracterization of our research.
Dr. Lauren
Block
Dr. Vicki
Morwitz
Dr. Bill
Putsis
Dr. Subrata
Sen
NYU Stern
School of Business
As someone
who has worked in Australia for many years at preventing the transmission
of HIV among injecting drug users, I must write in support of your stance
on drug education. After years of running "tribes"-based campaigns
targeting harm reduction and HIV transmission information to sub-cultures
of drug users, I am acutely aware of the types of research, support
and efforts required to make programs effective. Most of the public
education campaigns on drugs in this country suffer acutely from reinforcing
those things that the parents or producers of the ads would most like
to be true, but very rarely are.
Every time
a person smokes a joint, or has a line or a shot for the first time
and doesn't die, millions of dollars of drug education disappear in
a puff of logic. When drug users are actively involved in the discussions
around drug education the education is inherently more effective and
the complexities become evident. I wish you luck in your attempts to
improve the efficacy of drug education, but I suspect the bucks will
go to the agencies that tell the funders what they want to hear.
Timothy
Moore
tmoore@afao.org.au
I am writing
to express my outrage at writer Dan Hill's misrepresentation of myself
and my research in his article, "Drug Money," published in
Brandweek. In no way do I "now cast(s) grave doubts on the research
techniques that support my (1994) paper," as Mr. Hill claims in
the first page of the article. In my interview, I did not say that I
believed that our study respondents "were telling us what they
thought we wanted to hear." I did discuss the limitations inherent
in any study based on self report, such as the possibility of respondents
giving "socially acceptable answers," including the facts
that our study was completely anonymous and voluntary, and that school
personnel were not involved in collecting the surveys. Furthermore,
the students' self-reported drug use rates were similar to the rates
reported in a statewide survey. It is imperative to note that all studies
of drug use by teens are based on self report. While Mr. Hill repeatedly
referred to the pressing problem of rising rates of youth drug use,
he did not explain that these figures are obtained by the very research
technique he maligned, i.e. self report.
I also
want to bring to your attention the fact that I spoke with senior editor
David Kiley. I expressed my concern about Mr. Hill's apparent bias against
the anti-drug media campaign. Mr. Kiley assured me that I would not
be portrayed in the article as refuting my study findings. It is now
quite clear that Mr. Kiley shares Mr. Hill's bias. Or, in fact, Mr.
Kiley may be the source of such bias, as is illustrated by his editorial,
with denigrating comments such as "that research . . . hardly
stands up to the slightest breeze of inquiry."
I strongly
urge you to correct the erroneous statements made by Mr. Hill and supported
by Mr. Kiley. It must be made clear that I unequivocally support our
published study findings. I fully expect that, as a member of the media,
you will uphold your responsibility to report the facts, and not knowingly
distort them to create a story.
Evelyn
C. Reis, MD
Assistant
Professor of Pediatrics
University
of Pittsburgh School of Medicine
Bravo!
Your more-than-courageous analysis of the Partnership for a Drug-Free
America was long overdue. For more than a decade, this dubious organization
has run its skull-and-crossbones up the flagpole and the entire advertising
industry has saluted--at attention, with honor guard and cannon fire
. . . it has extorted pro bono ads from agencies and commandeered half
the public service spots for the past 11 years--$3.3 billion worth--at
the expense of all the other less-influential charities vying for precious
public service time.
Peter McWilliams
peter@mcwilliams.com
Los Angeles
I congratulate
you and your publication for running an article which reminds people
that being objective is important, particularly when the issues involved
are emotive and the decisions made have far-reaching implications. I
hope that you will continue to encourage organizations, both private
and public, to act in a professional, ethical and conscientious manner
when deciding how to conduct an ad campaign. I believe that teaching
by example, as you have done (by producing a rational analysis), is
ultimately the most productive thing to do. Good luck in your future
efforts.
Michael
Nendick, Tokyo
micnen@tkb.att.ne.jp
Congratulations
on Daniel Hill's excellent article "Drug Money." We need to
be certain that resources used in the drug war are really being used
to reduce drug use among our youth and not for other much less admirable
purposes. Hill's article is an important step in the right direction.
Thank you.
Dawn Day,
PhD
Director
Dogwood
Center
Princeton,
N.J.
IN RESPONSE:
Drug Issues (Brandweek, June 8, 1998)
On April
27, Brandweek published a story entitled, "Drug Money." The
article was about the National Youth Anti-Drug Media Campaign, a program
of anti-drug advertising being directed by the White House's Office
of National Drug Control Policy in conjunction with The Partnership for a Drug-Free
America. We set out with a basic premise, asking, with so much, including
taxpayer dollars, now riding on the effort, does anti-drug advertising
work? As freelance writer Daniel Hill pursued the story, there initially
seemed plenty of evidence to support anti-drug advertising. But after
Hill began reading the three bodies of research specifically offered
up as evidence of anti-drug advertising's efficacy by the PDFA, a different
story began to materialize. Hill found that the research was in large
part reliant on self-reported data--surveys filled out by kids about
how they feel about the ads; whether the ads have had an impact; their
own experience with drugs, etc. Interviews with some of those behind
the research led to further scrutiny on Hill's part, not of whether
the researchers' work was valid, but of whether it was strong enough
to be the foundation of so massive an enterprise. An accompanying column
by senior editor David Kiley suggested that if taxpayers are going to
foot the bill and media companies match ad time and space dollar-for-dollar
with the government, then it is incumbent on the ONDCP and PDFA to present
a better case that the anti-drug ad crusade is money well spent. Was
Brandweek's story tough? Yes. Did we enter into the story with some
dark political agenda, with a bias against anti-drug advertising or
the PDFA? Categorically, no. As our story plainly stated, we think advertising
can play a role in "unselling" drugs to America's youth. But,
as a journal of marketing issues, we also suggest that if there is a
question as to whether kids are being communicated to effectively in
such a monumental effort, it should be addressed. The ONDCP and PDFA
are already endeavoring to introduce more tools of behavioral science
to the program, which we applaud. And we recognize that there is a gray
area between the disciplines of market research and academic research
and that there are no quick fixes in bridging the two.
That said,
what we examined was the inventory of ads the program started out with
and the research upon which that effort was based. Questions were raised,
and we followed the reporting where it led. Because we respect the effort
and intentions of the PDFA, we have allocated this editorial package
of responses to our story, both pro and con, including a generous allotment
of rebuttal space to the ONDCP and PDFA themselves, in what we hope
will prove to be a thought-provoking forum on this issue.
I would
like to correct some of the more unfortunate errors in your April 27
editorial and articles on the National Youth Anti-Drug Media Campaign
and provide some context regarding important information omitted. The
government's largest media campaign represents a significant public
investment. Your readers deserve a more accurate accounting of the campaign
so that they can form their opinions based on the facts.
Perhaps
most disappointing is your editorial concluding that the campaign "lowers,
not raises the bar" on public service advertising. Because of its
scope and magnitude--and the fact that the campaign uses public funds
to pay for time and space--the government has taken significant measures
to raise the bar substantially higher than any public service campaign
in history.
We are
particularly proud of the eight-month planning process we undertook to shape
the design of the campaign, including consultation with numerous experts
and organizations in both the private and public sectors. Many of the
nation's most esteemed authorities in social and commercial marketing,
teen and youth marketing, advertising, media, behavioral science, substance
abuse, public health communications research and other fields were involved.
Three expert panels were convened to advise on the communications strategy
of the campaign and its implementation.
This consultation
process underscored the need to incorporate three new measures that
greatly increase the chances of success. In addition to a process PDFA
has long employed to ensure its ads are on the mark and reviewed for
scientific accuracy, an independent panel of behavioral scientists has
been established to provide further input into the creative briefs that
PDFA provides its ad agencies. And although the PDFA has a solid history
of developing creative briefs that guide its participating ad agencies,
each new ad it produces with public funds will now also be rigorously
tested through an independent process. Finally, the campaign's impact
will be evaluated by the National Institutes of Health through a long-term,
scientifically rigorous research program. No public service campaign
has ever before had this degree of planning, accountability and evaluation.
Lost in
your reporting was the perspective that this is a "social marketing"
campaign aimed at changing the norms of behavior of young teens, not
selling them a shoe. It is not just an ad campaign. Your writer completely
missed this point and appears to be unaware of the most current understanding
of how to conduct health communications campaigns that are so vital
to protecting kids in the years ahead from illicit drugs, underage smoking
and drinking, drug-related AIDS, violence and other serious threats
to our children. This campaign includes other major components to work
in concert with the ads and which were not brought out in Brandweek,
including: a range of projects and collaborations with the entertainment
industry to ensure honest depiction of drugs in film, television and
music; a major Internet and new media initiative; a corporate sponsorship
effort (already receiving enthusiastic interest); partnerships with
myriad organizations and associations that reach kids, teachers, coaches,
pediatricians, professional sports, civic associations, community anti-drug
coalitions, media, etc.
Mr. Hill
also chose to ignore the solid empirical evidence that advertising campaigns
have been proven to prevent teen smoking and reduce drinking and driving
among teen drivers.
Another
critical omission is the phenomenal impact the campaign has so far had
in our $19 million six-month test in 12 cities. Phone calls from the
public, particularly parents, to local anti-drug coalitions are up five-fold.
Orders for a publication that offers effective strategies for parents
on how to deal with this issue are up 335% from a national clearinghouse.
Anti-drug coalitions in the 12 cities are experiencing increased demands
for drug presentations from corporations and schools, increased volunteerism,
more calls for treatment services, increased funding, more partnerships,
etc. The use of media to mobilize parents and support community anti-drug
coalitions are two critical objectives of the campaign.
This summer,
Barry McCaffrey, director of the White House Drug Policy Office, will
announce the launch of the national advertising phase of the campaign.
We believe your readers, not only as marketing professionals but also
as parents and aunts and uncles, will experience first hand how this
effort can make a difference in the lives of their children and communities.
We are
well aware of the concerns raised in Brandweek about expenditure of
public funds. However, the National Youth Anti-Drug Campaign is an historic
public-private partnership that benefits from the planning and expertise
from some of the nation's most talented communications professionals.
We are particularly pleased with how the campaign was developed, how
it is being implemented, and the initial feedback we are receiving about
its impact.
Readers
should look at our Web site, whitehousedrugpolicy.gov, if they really
want to understand this campaign. It contains much of the information
omitted from the reporting on the issue.
Alan Levitt
Director
National
Youth Anti-Drug Media Campaign
Office of National Drug Control Policy
To say
we were disappointed by your April 27 cover story about the Partnership
for a Drug-Free America is an understatement. The Partnership has been
critiqued and analyzed publicly before. That's part of being in the
public eye. But rarely have we seen journalism so sloppy, so biased,
so malicious in intent and so overwhelmingly inaccurate.
Most glaringly
absent from your "editorial package" about the Partnership
are these undeniable, extremely relevant facts, given the context of
your story: Today in America, there are 10 million fewer adult drug
users than in 1985. The country has cut regular drug use by 50% and cocaine
use by an astonishing 70%. Crime rates have come down to 20-year lows,
and experts believe the decline in drug use has contributed significantly
to this encouraging trend.
We do not
claim that our advertising is single-handedly responsible for these
remarkable developments. But experts agree that the Partnership's anti-drug
campaign contributed significantly to the decline in drug use in America.
If you examine the data honestly and objectively you will see what others
have seen: Drug use decreased as Partnership advertising aired heavily
during the late 1980s. When fewer anti-drug ads aired throughout the
1990s, drug use began increasing among teens and pre-teens. These powerful,
correlative data do not prove causality, nor do we suggest they should.
But completely ignoring these facts, given the context of your story,
is a true disservice to your readers and shows an utter disregard for
objectivity and fairness.
The main
charges your freelance reporter levies against the Partnership include
the following:
1. PDFA
advertising is not built on solid research.
All PDFA
advertising is based on extensive qualitative and quantitative research.
We have more research on attitudes about drugs than any organization
in this field, and all of it is used in the development of our advertising
strategies. We conduct the largest, on-going study in the country that
tracks drug-related attitudes among pre-teens, teens and parents. We
have conducted 10 waves of the Partnership Attitude Tracking Study (PATS),
since 1993 via Audits and Surveys Worldwide, one of the most respected
research organizations in the field. The latest installment of PATS
sampled close to 10,000 pre-teens, teenagers and parents. With this
survey alone, we have conducted more than 82,000 interviews cumulatively.
Your reporter
categorically dismisses all of this valuable learning. He does not acknowledge
how this research is used in our creative development, nor does he recognize
how these data track consistently with the most respected studies on
drug use in the country.
2. Research
data demonstrating PDFA's effectiveness are flimsy at best.
In his
assessment of our advertising's effectiveness, your reporter completely
ignores the massive reductions in drug use we've seen since the 1980s,
and fails to recognize the parallels between our media weight and fluctuations
in drug use. How can a journalist ignore reductions of 50% in overall
drug use, 70% in cocaine use and 10 million fewer drug users in the
context of this story? More important than individual case studies about
PDFA, these data represent real change in the marketplace.
In an attempt
to refute the specific case studies on our advertising, your freelance
reporter harps endlessly on the shortcomings of self-reported data without
ever acknowledging the acceptability of this methodology in analysis
of a wide range of social issues, nor does he mention the strengths
and value of this type of data.
3. Because
PDFA advertising is donated, it is not research-based, nor is it top-quality
work.
Each and
every Partnership message is research based. Advertising agencies that
create our messages are briefed extensively on the target audience with
more research (from PDFA and independent sources) than most agencies
ever encounter on an average advertising assignment.
To ensure
that PDFA has only the highest quality advertising, each Partnership
ad must pass through and be approved by our Creative Review Committee,
comprised of some of the brightest minds in advertising, which may explain
why upwards of 50% of the advertising concepts that come before the
committee are never approved.
Advertising
will not, in and of itself, solve the drug problem. The Partnership
is not perfect, by any stretch of the imagination. But we have decided
to enter into a historic public-private partnership with the White House
Office of National Drug Control Policy because we believe our campaign,
when executed with the right creative and the right exposure, can influence
the way children, teenagers and parents think and feel about drugs. We
believe our advertising can indeed unsell drugs. And we've got solid
research that backs our conviction.
By entering
into this new federally-funded, public-private sector campaign, our
campaign will come under more intense scrutiny than, perhaps, any other
advertising effort to date. From the beginning, we understood this would
be the case. We welcome this analysis because, at the end of the day,
this will only improve our campaign. If fairness and objectivity are
brought to bear in public critiques of our work, we're confident the
Partnership will live up to the toughest possible analysis.
Richard
D. Bonnette, President and CEO
Partnership
for a Drug-Free America
Specific
points raised by PDFA:
Funding.
Describing this new advertising effort as either a $2 billion or $1
billion effort is misleading. Both figures are speculative and based
on aggressive forecasting. Coordinated by the White House ONDCP, the
National Youth Anti-Drug Media Campaign will spend $178 million in fiscal
year 1998. Current plans are to request $195 million per year for the
next five years. Research on the progress of the ONDCP-PDFA media campaign
will play a major role in the re-funding of this particular program.
Agency
Participation. Vendors, such as ad agencies, will not collect commissions.
Rather, they will be paid in full for agreed amounts of compensation.
Advertising agencies will be reimbursed only for out-of-pocket costs
for PDFA advertising used in this campaign. All agency services will
be donated to the effort for free. Agencies will not receive commissions.
PDFA Research
Practices and Background. Criticized repeatedly throughout the story,
the Partnership's research efforts are inaccurately described as "thin
and overly-determined," and as solely focus-group based. In fact,
our advertising development is mainly based on the battery of attitudes
tracked for more than a decade via the Partnership Attitude Tracking
Study, conducted first for PDFA by the Gordon Black Corporation and
since 1993 by Audits & Surveys Worldwide. Over the last 10 years,
PDFA has conducted more quantitative research on children's and parents'
attitudes about drugs than any other organization in the country. Further,
focus groups have been used as one of many research tools for enriching
understanding of what the usage and attitude data trends mean in terms
of the potential consumer. The sheer amount of research PDFA conducts
ensures that the Partnership's anti-drug ads communicate with target audiences
as effectively as possible. Since our overall mission is to reduce demand
for drugs via media communication, we do indeed advocate that children
and teenagers not use illegal drugs--or "zero tolerance,"
as your reporter puts it (although this term has somewhat of a political
connotation for your reporter). Suggesting that there is another "acceptable
paradigm," or objective regarding mass media communication targeting
children/teenagers regarding drugs (i.e., moderate use of marijuana,
perhaps?) is something PDFA would disagree with strongly.
Brandweek
wrote: "The PDFA and ONDCP cling steadfastly to all three pieces
of research, the only work the organization cites among the hundreds
of academic articles extant on teens and drugs."
We regard
our up-front quantitative research--research that helps us understand
our consumers and their attitudes about drugs--as research that "ground(s)
our entire enterprise." PDFA has conducted more research in this
area than any other organization in the country. Among the studies we
have conducted since 1987:
- The Partnership
Attitude Tracking Study (PATS). An annual tracking study of drug-related
attitudes and usage among children, teens and parents, the 10th installment
of PATS was completed in 1997 and released April 13, 1998. PATS has
had samples of 7,000 to 12,000 in each installment, with a cumulative
82,000 interviews to date. This Partnership study has been conducted
by Audits & Surveys Worldwide since 1993, and with the Gordon S.
Black Corporation previously. PATS is the largest drug-related attitudinal
study in the U.S. and the only drug-related study that tracks children
as young as 9.
- The New
York City In-School Study surveyed drug-related attitudes and drug use
among 42,000 inner city school children between 1992 and 1995. This
sample explored the unique attitudinal makeup of below-poverty urban
children, and includes an unprecedented sample of 6-8 year olds, as
well as preteen, 8th and 10th grade children. It has yielded surprising
data on the inner city drug problem. Conducted by Audits & Surveys
Worldwide for PDFA.
- The Los
Angeles Study. Two waves of research (1995 and 1997) that measure preteen
and teen attitudes toward use of illegal drugs. It is projectable to
Los Angeles County. Total sample size: 11,000. Conducted by Audits &
Surveys Worldwide for PDFA.
- Teen Segmentation Study (1994). Using the Partnership Attitude Tracking
Study, statistical analyses were conducted. The technique, a modified
discriminant analysis, predicts the group of non-users most likely to
use--"at risk" or potential users--and separates this group from the
non-users who have a lower probability of use. Conducted by Ken Warwick
for PDFA.
Brandweek
quoted Lawrence Wallack, professor of public health at the University
of California, Berkeley, saying, "There's no solid data that show
the media campaigns create meaningful changes in behavior." This
may, indeed, be true if one is looking for scientific data to document
a cause-and-effect relationship between advertising and behavior. Another
perspective to consider: The same can be said for advertising campaigns
undertaken to drive sales of a particular product. In other words, Ford
may spend millions of dollars on advertising to sell its cars, trucks
and other vehicles. Yet Ford, like most commercial advertisers, does
not have the type of data that Professor Wallack is looking for. In
the commercial marketplace, "meaningful changes in behavior"
are tracked by correlative data--i.e., tracking inventory of product
sales during the course of an advertising campaign. Evidence of sales
is usually enough to persuade marketing managers that ad campaigns might
have spurred sales. Proving that beyond an academic's doubt, however,
would be difficult.
Brandweek
quotes William DeJong of Harvard saying, "My fondest wish is to
get these campaigns rigorously evaluated." A major component of
the ONDCP-PDFA media campaign will be evaluative research on the advertising.
ONDCP will be conducting a large quantitative study of youth (9-12 year-olds),
teens (13-17 year-olds) and parents, in all of the 12 test markets with
matching control markets. Larger studies will be conducted, pre and
post, when the campaign goes national in July 1998. The context of the
story suggests such research is not even under consideration, nor is
it a major component of the campaign.
Brandweek
wrote: "The most glaring inherent weakness of the case is self-reporting,
or drawing conclusions based on what kids say they react to and say
they do, rather than measuring what they actually do and actually react
to."
Many academic/government
institutions use self-reporting when researching sensitive issues, e.g., the
Centers for Disease Control (Youth-At-Risk), University of Michigan
(Monitoring the Future), U.S. Health and Human Services (National Household
Survey on Drug Abuse). As long as researchers make it clear that there
are potential limitations in self-reporting, it is considered an acceptable,
valuable means of surveying.
In an attempt
to refute the specific case studies on our advertising, your reporter
focuses narrowly on the shortcomings of self-reported data without ever
acknowledging the acceptability of this methodology in analysis of a
wide range of social issues, nor does he mention the strengths and value
of this type of data. In fact, self-reported data is, by far, the dominant
methodology used in the marketplace. Even if it were not, what are the
realistic alternatives for gathering large samples of data on problems
like drug abuse? Drug-testing thousands of kids in America? Monitoring
those studied by video camera? Testing hair samples for traces of drugs?
Brandweek
wrote: "While ad agencies good intentions are true as any other
partner in the mix, most are also too taxed to put the same of rigorous
research and account planning into a PDFA ad that they might for a paying
client . . . a lack of checks and balances proper research can provide
may lead to work that, while creative, can hinder the desired effect."
PDFA provides
ad agencies more research, prior to ad development, than many creative
teams receive on commercial accounts. PDFA provides Partnership and
independent research to ad teams. Sources of independent research include,
but are not limited to, the National Institute on Drug Abuse, the U.S.
Department of Health and Human Services, the Substance Abuse and Mental
Health Services Administration and, of course, the University of Michigan's
Monitoring the Future.
(Harvard's
DeJong) feels that PDFA needs more input from behavioral scientists
who know how to translate public health theory into messages that produce
behavior change. "But PDFA is resistant, they want to restrict
it to advertising folks . . ." As part of the new ONDCP-PDFA anti-drug
campaign, behavioral scientists will review advertising strategies to
ensure ads are in concert with the latest behavioral research. From
its beginning, PDFA has reviewed many of its ads with a wide variety
of experts--scientists, child psychologists, drug experts, etc.--to ensure
the accuracy of its copy.
On the
subject of the program's focus on marijuana as a gateway drug: The National
Center on Addiction and Substance Abuse at Columbia University published
a superb report on the gateway theory--i.e., alcohol, tobacco and marijuana
leading to other drug use--two years ago. The study analyzes data from
several different perspectives to substantiate a pattern of drug use
among children who begin with the so-called softer drugs and then proceed
to use more dangerous substances.
DANIEL
HILL RESPONDS:
The PDFA's
donated time and space totaled $361 million in 1990; $367 million in
1991; $323 million in 1992; $305 million in '93 and $295 million in
'94. The PDFA's Mike Townsend and the ONDCP's Alan Levitt both told
me that behavior lags advertising by two or three years, and, since
according to Lloyd Johnston's own surveys, drug use started going up
in 1992 (among 8th graders) and in 1993 among older students, the effect
of the ads on overall national trends is murky at best. Note that '91
was the peak year, and the donated advertising did not fall off a cliff
subsequently. That is why we wanted to examine peer-reviewed, published
work. The PDFA cited only Evelyn Reis, Lauren Block and Johnston. They
neither mentioned nor offered any other research, despite my request,
and the ONDCP's deputy director [and Reis' co-author] could cite no
other work but theirs to support the advertising.
As to self-reported
data, I refer again to the quote cited in the story--from 1998, not
1988 as stated--from the American Journal of Psychiatry on self-reported
drug use. And The New York Times noted on May 8, 1998 that new research
"appears to call into question much of the data that has been gathered
on sensitive subjects like drug use . . ." Finally, Block cites
a 1991 article by Sickles and Taubman in the American Economic Review
as supportive of self-reported data. Yet, three of the four articles
discussed therein raise questions about it. One refers to adults already
in drug treatment, who have nothing more to lose by disclosure; another
refers to usage reports as being too low "because of shame associated
with admitting to partaking in an immoral [not to mention illegal] activity
. . ." And this is the supportive article.
Regarding
Johnston's letter: He refers to the level of research on the PDFA. We
were looking for published research that addressed the ads' ability
to change behavior. Industry awards operate in their own political arena.
Laudatory or not, they do not speak to the ads' effect on behavior in
any rigorous fashion. Per Johnston's numbered points: (1), we did quote
his unpublished finding, the actual percentage he refers to; we did
not mention his surprise at the findings. (2), in the world Johnston
operates in, unpublished studies are considered weaker than those that
have passed professional scrutiny. (3), I stand by my quotes (see below).
(4), see discussion on self-reporting above. (5), I don't believe the
Partnership for Responsible Drug Information would categorize itself
as "pro-drug." Otherwise, it probably would not have the likes
of two former U.S. attorneys general and a past president of the American
Bar Association on its board. The Department of Health and Human Services
and Johnston came up with different figures. Who is to say who's right?
Per Johnston's point (6), John Morgan, a professor of pharmacology at
City University, provided me with the 18% figure; he also cited a National
Institute on Drug Abuse finding that two-thirds of marijuana smokers
have used no other drugs. Block's letter also states, "[We] test
and reject the hypothesis that marijuana use increases the probability
of trying cocaine."
As to the
letter from Block, et al.: Our focus group was for perspective only,
an exercise that will not inform any marketing efforts. Mentioning that
her paper had been rejected for publication, Block told me her team,
with input from an additional author, is "revamping the modeling
to a more sophisticated econometric approach." That is not replication.
As to corroborating evidence, Block's paper led me to the Sickles and
Taubman article. I was confused by her statement that "self-reported
data is more likely to understate drug use, suggesting that if anything,
we understate the impact of PDFA advertising." As I read this,
if they've understated drug use, then haven't they overstated the impact
of PDFA advertising? As to the "quasi-experiment," I quoted
a footnote referring to their written statement that "the initiation
of the PDFA advertising campaign formed a natural, quasi-experiment."
This is not a secondary analysis, but analysis of the PDFA data itself
from 1987, the subject of her paper. We did not discuss Block's funding,
etc.
Since the
article by Reis, et al., was the only one published, it's appropriate
to address her letter last. In our first sentence discussing Reis' work,
we stated she stood behind her paper. "I think [they] were telling
us what they thought we wanted to hear," "You can't tell,
based on the paper, that it actually works," and "My concern
is the kids think they're supposed to say the ads work, the younger
kids more so"--these statements, among other elements of the reporting,
led us to use the term "grave doubts."
To this
point, I wish to state that all statements within quotation marks, as my
notes indicate, were made by the sources cited.