PUBLIC-OPINION POLLING
By F. Christopher Arterton
From the viewpoint of those running for public office, election campaigns are mostly composed
of an extensive effort to communicate with divergent audiences. Candidates must get their
message across to party officials, party members, potential contributors, supporters, volunteers,
journalists, and, of course, voters. Ultimately, all campaign activities are secondary to a
candidate's efforts to communicate with voters. Accordingly, it is not surprising to learn
that the largest share of campaign resources is poured into this two-way communication:
advertising to send persuasive messages to voters, and polling to learn the concerns that
voters have and the opinions they hold.
Over the past three decades, polling has become a principal research tool for developing
campaign strategy in American elections. The major elements of that strategy consist of the
answers to two simple questions: (1) what are the target audiences that a campaign must
reach? (2) what messages does it need to deliver to these audiences? Polling is essential for
answering both of these questions.
SURVEYING VOTERS' ATTITUDES
By and large, the technique most frequently employed for these purposes is the
cross-sectional, random-sample survey in which the campaign's polling firm telephones a
random sample of citizens and asks them an inventory of standard questions. Sampling theory
dictates that if the citizens are selected at random and are sufficiently numerous, their answers
to these questions will deviate only slightly from the answers that would have been given if
every eligible voter had been asked. Completing such a survey before new, major events
change the attitudes of voters can also be very important, so most polls are conducted over a
three- or four-day period. That means that a large number of interviewers, either paid or
volunteer, have to be used to reach several hundred voters each evening, between the
hours of 5:00 and 10:00 P.M. They ask the same questions in the same way to all potential
voters.
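To make the claim about sampling error concrete, the short Python sketch below computes the conventional 95 percent margin of error for a simple random sample. The sample sizes are illustrative only, and real campaign polls would also adjust for weighting and design effects, which are ignored here.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    Uses the worst-case proportion of 0.5 by default; the weighting and
    design effects found in real polls are ignored in this sketch.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Illustrative sample sizes only.
for n in (400, 800, 1200):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} points")
```

With roughly 1,200 interviews the error band narrows to about three percentage points, which is why the answers from a well-drawn sample deviate only slightly from those of the full electorate.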
Surprisingly, most campaign pollsters do not base their sample upon the population of all
citizens of voting age. As is widely known, in the United States substantial numbers of eligible
voters do not actually cast their ballots on election day. Campaigns have learned through much
hard experience that it is more efficient to concentrate their efforts on likely voters. Accordingly,
the first few questions on most survey instruments try to ascertain how likely it is that the citizen
being questioned will actually vote. The interviewer will thank the unlikely voters and move on to
other calls. As a result, campaign communications strategy is built around the interests of likely
voters, and campaigns rarely make major efforts to attract votes among hard-core
nonvoters.
After identifying likely voters, the first task of the survey is to divide them into three groups:
confirmed supporters of the candidate in question, confirmed supporters of the opponent, and
the "undecideds." Then, the basic principle of American election campaigns can be
reduced to three simple rules: (1) reinforce your base of support, (2) ignore the
opponent's base, and (3) concentrate most attention upon the undecideds. That is, in the
United States most of the energy of election campaigns is directed at the approximately 20 to
30 percent of the voters who may potentially change their votes from Democratic to Republican
or vice versa.
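As a rough illustration of this sorting step, the sketch below tallies hypothetical respondent records into the three groups after screening out unlikely voters; the records and field names are invented for the example.

```python
from collections import Counter

# Invented respondent records; field names are hypothetical.
respondents = [
    {"likely_voter": True,  "support": "ours"},
    {"likely_voter": True,  "support": "opponent"},
    {"likely_voter": True,  "support": "undecided"},
    {"likely_voter": True,  "support": "ours"},
    {"likely_voter": False, "support": "undecided"},  # screened out early in the interview
]

# Keep only likely voters, as the survey's opening questions are meant to do.
likely = [r for r in respondents if r["likely_voter"]]
counts = Counter(r["support"] for r in likely)
total = len(likely)

for group in ("ours", "opponent", "undecided"):
    print(f"{group:9s}: {counts[group] / total:.0%}")
```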
Though most candidates are desperately interested in who is more popular with voters, the
usefulness of the cross-sectional survey goes far beyond simply measuring the closeness of
the election contest. Campaigns need an accurate measurement of voter opinions, but they
also need to know how to change (or preserve) these opinions. The term
"cross-sectional" refers to the differences among groups of citizens; the survey
technique is designed to record opinion among the various subsections that differentiate the
pool of voters. If there are gender differences in the way voters look at the election, for
example, the survey will be able to measure these distinctive attitudes. The campaign that
discovers itself doing better with male voters, among all those who have already decided how
they will vote, will begin to concentrate its efforts upon men who are still undecided, because
those voters are likely to be easier to win over.
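The kind of subgroup reading described here amounts to a cross-tabulation. The sketch below, again on invented records, breaks candidate support down by gender among voters who have already decided, so a gender gap of the sort described would show up directly.

```python
from collections import defaultdict, Counter

# Invented records of voters who have already decided.
decided = [
    {"gender": "male",   "support": "ours"},
    {"gender": "male",   "support": "ours"},
    {"gender": "male",   "support": "opponent"},
    {"gender": "female", "support": "opponent"},
    {"gender": "female", "support": "ours"},
    {"gender": "female", "support": "opponent"},
]

# Cross-tabulate support within each gender subgroup.
table = defaultdict(Counter)
for r in decided:
    table[r["gender"]][r["support"]] += 1

for gender, counts in sorted(table.items()):
    share = counts["ours"] / sum(counts.values())
    print(f"{gender:6s}: {share:.0%} support our candidate among decided voters")
```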
DETERMINING APPROPRIATE MESSAGES
By asking many questions about voters' preferences for different public policies, the
political poll also provides candidates with insights about the messages they need to deliver to
critical groups of voters. Late in an election race, for example, undecided voters may be those
who are more cynical about election politics. This result may tempt the candidate to attack his
opponent for a poor attendance record or some action that can be pictured as favoring a
particular interest group over the general public. In the case of gender differences, a campaign
that is doing poorly among female voters may discover through polling some special concerns
held by women and attempt to devise a message specifically for them.
Normally, the process of creating the messages that will move critical groups relies on statistical
methods; the answers of supporters, opponents, and the undecideds are analyzed to determine
the strength of the association between candidate support and public-policy attitudes. A strong
association is a good indication that the policy area in question may be driving
the choice of candidates. Other questions will give the campaign an idea of how to deliver the
appropriate message to the target group. Voters are asked about their radio-listening habits,
the organizations they belong to, the television programs they watch and the newspapers they
normally read.
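A minimal sketch of the statistical step described above: it computes a plain Pearson correlation between a candidate-support indicator and a five-point policy-attitude scale on invented answers, standing in for whatever association measure a polling firm actually uses.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: 1 = supports our candidate, 0 = does not, paired with
# agreement (1-5) on a given policy question.
support  = [1, 1, 1, 0, 0, 0, 1, 0]
attitude = [5, 4, 5, 2, 1, 2, 4, 3]

r = pearson(support, attitude)
print(f"association between support and this policy attitude: r = {r:.2f}")
```

A correlation near zero would suggest the issue is not moving the vote, while a strong one flags a policy area worth building a message around.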
CONSTRUCTING THE SURVEY
Polling is both science and art. Constructing a random sample, designing the questionnaire,
fielding the survey instrument and analyzing the results constitute the science of public-opinion
research. All these aspects rely upon well-established, validated techniques. The art comes in
writing the questions. Question wording can markedly affect the results obtained. Consider, for
example, two different questions: "Do you support sending U.S. troops to Kosovo to
enforce the recent peace accord?" versus "Do you support President
Clinton's plan to send U.S. troops to Kosovo to enforce the recent peace accord?"
Voters are likely to react differently to these questions; some opinions will be altered either in
favor of or against the proposal simply by the association with the president. Which of these
wordings is more appropriate depends upon the judgment of the pollster and the purposes of
the survey.
In general, when polls are to be used to develop strategy, the consultants labor to write
questions that are fair and impartial so they can achieve an accurate measurement of public
opinion. Lately, however, campaigns have been resorting to so-called "push" questions to test
possible campaign themes. In these questions, voters are asked to react to questions that have
been deliberately worded in very strong language. Consider the following example: "If
you knew that one of these candidates had voted to cut welfare payments to the poor, would
that increase or decrease the chances that you would vote for that candidate?" When the
poll data reveal that many undecided voters back away from a candidate when confronted with
this information, then the candidate sponsoring the poll is likely to use this approach in
attacking his or her opponent.
At times this technique has been carried too far, and some unscrupulous campaigns have
conducted surveys with the sole intention of planting negative information about their opponent.
Though it is difficult to prove a campaign's real intent, the American Association of Political Consultants
has recently condemned "push polling" as unethical. Nevertheless, within
appropriate bounds, a few push questions normally are used in most campaign polls to test
possible messages.
Increasingly, political pollsters combine focus-group research with random-sample surveys in
order to develop campaign messages. In the typical focus group, voters are telephoned at
random and asked to participate in a collective discussion on a given evening. In these group
sessions involving between eight and 15 voters, pollsters are able to gather a qualitative,
in-depth
view of citizen thinking. Often focus-group discussions will provide a more detailed
interpretation of the survey results. Knowing how voters reach their conclusions can be just as
important as the quantitative distribution of opinion gathered by surveys. Focus groups can also
provide pollsters with question wording that captures the thought processes of citizens, so that
the influential messages they work into campaign advertising will have maximum impact.
TRACKING THE CAMPAIGN
Behind the scenes, most major political campaigns rely on polling from the beginning to the end
of the election race. The typical candidacy will be formulated on the basis of a
"benchmark" poll taken about eight months before the election. This expensive
survey may take as much as 30 minutes to complete over the phone and will include a large
enough sample (usually around 1,000 to 1,500) so that inferences can be drawn about
important subgroups of voters. Once the campaign has begun and voters are being bombarded
with competing campaign messages, the pollster returns to the field, often several times, using
much shorter questionnaires in order to get an idea of how the opinions have changed from the
original benchmark.
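The follow-up surveys are read against the benchmark. A rough sketch of that comparison, using invented wave results standing in for real poll numbers, might look like this.

```python
# Invented shares of likely voters from a benchmark poll and a later,
# shorter follow-up survey; real waves would carry many more questions.
benchmark = {"ours": 0.42, "opponent": 0.39, "undecided": 0.19}
follow_up = {"ours": 0.40, "opponent": 0.44, "undecided": 0.16}

# The figure the campaign watches is the movement since the benchmark.
for group in benchmark:
    shift = (follow_up[group] - benchmark[group]) * 100
    print(f"{group:9s}: {follow_up[group]:.0%}  ({shift:+.1f} points since the benchmark)")
```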
A number of well-funded campaigns, usually those for president or for senator or
governor in larger states, recently have begun using "tracking" surveys to
follow the impact of campaign events. The pollster completes, say, 400 interviews on each of
three nights. The resulting 1,200 voters constitute an adequate sample with an error rate of
about 3 percent. On the fourth night, the pollster calls another 400 voters and adds their answers to the
database, dropping the answers of those voters reached on the first night. And this process
continues, sometimes for six months of campaigning, so that the sample rolls along at a
constant 1,200 drawn from the previous three nights. Over time, the resulting database will
allow the pollster to observe the effect of campaign events (such as televised debates, a
major news story or the start of a new advertising theme) upon voter attitudes and
preferences. If, for example, the lines indicating support for two candidates are roughly parallel
until the point at which the opponent started attacking on the basis of character rather than
policies, and after that point the two lines start to diverge as the opponent's support
increases, then the pollster had better figure out a way of countering the character message
being used by the opponent or the race will be lost.
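The rolling procedure can be sketched roughly as follows. The nightly counts are invented, while the three-night window and the 400 interviews per night follow the figures above.

```python
from collections import deque

WINDOW_NIGHTS = 3           # three-night rolling sample, per the description above
INTERVIEWS_PER_NIGHT = 400  # roughly 1,200 interviews in the window

# Invented counts of respondents backing "our" candidate out of each night's 400 calls.
nightly_support = [172, 168, 175, 160, 151, 149, 145]

window = deque(maxlen=WINDOW_NIGHTS)  # the oldest night drops off automatically
for night, supporters in enumerate(nightly_support, start=1):
    window.append(supporters)
    if len(window) == WINDOW_NIGHTS:
        sample = WINDOW_NIGHTS * INTERVIEWS_PER_NIGHT
        print(f"night {night}: rolling support = {sum(window) / sample:.1%} (n = {sample})")
```

A sustained slide in that rolling share beginning the night after an opponent's attack starts airing is exactly the kind of divergence the pollster would be watching for.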
Figuring out how to counter the opponent's attack may involve examining particular
subgroups in the electorate, or it may call for a new message from the injured campaign, but in
either case, the response will be based on survey research. Polling, American politicians would
agree, has become an essential ingredient of campaign strategy.
F. Christopher Arterton is dean
of the Graduate School of Political Management at the George Washington University in
Washington, D.C.
Issues of Democracy, October 2000