An Analysis of the Survey on TV Advertising
Friday, August 31, 2007 at 12:42AM
J. Mark Jordan


Now that the results of the survey about advertising on television have been published, we are free to talk about the implications they hold, both in the overall survey and in the responses to individual questions. As a proponent of advertising, and as a member of the committee that drafted the survey, I think everyone had some surprises in the outcome. The project itself, the controversy notwithstanding, was very revealing. We learned some things about ourselves and our proclivities that many of us did not suspect. Although the scope of my discussion here will be a bit more comprehensive than just a reaction to the raw data, I think some deeper, more insightful analysis may be forthcoming in the months ahead. I am intrigued by the notion that twenty questions could tell so much, yet so little, about an organization.

The first thing that strikes me about the survey is the low percentage of participants. My calculations tell me that an average of 2,353 responses was registered across the sixteen questions. That means that of the 9,076 surveys mailed out, 6,723 ministers did not respond in any way; a full 74% of our constituency did not fill out the form and send it in. This is a very significant number, because it means that all the questions answered on the survey represent only 26% of the total number of ministers. Any extrapolation from the data must keep this in mind to avoid broad, inaccurate generalizations.
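The arithmetic behind these percentages can be checked directly. Here is a minimal sketch in Python, using the 9,076 mailed surveys and the average of 2,353 responses cited above:

```python
# Figures taken from the survey discussion above.
total_mailed = 9076        # surveys mailed to ministers
avg_responses = 2353       # average responses per question

non_respondents = total_mailed - avg_responses
response_rate = avg_responses / total_mailed

print(non_respondents)             # 6723 ministers did not respond
print(f"{response_rate:.1%}")      # 25.9%, i.e. roughly 26%
print(f"{1 - response_rate:.1%}")  # 74.1%, the non-response rate
```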

Some point to polling averages compiled by national organizations such as Gallup, Rasmussen, or Zogby, which base their conclusions on a small sampling of the population, and contend that the same assumptions can be made about the figures in this survey. This direct correlation does not hold. First, we must account for the margin of error.

“There is no concept as confusing as ‘Margin of Error.’ It is used a lot but few people understand it. Suppose a polling company calls 1000 randomly selected people in a state that is truly divided 50-50 (say Missouri), they may, simply by accident, happen to call 520 Democrats and 480 Republicans and announce that Claire McCaskill is ahead 52% to 48%. But another company on the same day may happen to get 510 Republicans and 490 Democrats and announce that Jim Talent is ahead 51% to 49%. The variation caused by having such a small sample is called the margin of error and is usually between 2% and 4% for the sample sizes used in state polling. With a margin of error of, say, 3%, a reported 51% really means that there is a 95% chance that the true (unknown) percentage of voters favoring McCaskill falls between 48% and 54% [and a 5% chance that it is outside this (two sigma) range].

“In the first above example, with a 3% MoE, the 95% confidence interval for McCaskill is 49% to 55% and for Talent 45% to 51%. Since these overlap, we cannot be 95% certain that McCaskill is really ahead, so this is called a statistical tie. When the ranges of the candidates do not overlap (i.e., the difference between them is at least twice the margin of error), then we can be 95% certain the leader is really ahead. The margin of error deals ONLY with sampling error—the fact that when you ask 1000 people in an evenly divided state, you are probably not going to get exactly 500 Democrats and 500 Republicans. However, a far greater source of error in all polls is methodological error.” (www.votefromabroad.org)
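The two-sigma margin of error described in the quote follows from the standard formula for a sampled proportion. A minimal sketch (the 1,000-person poll and the "statistical tie" test are taken from the quoted example):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% (two-sigma) sampling margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)  # the 1,000-person poll from the quote
print(f"{moe:.1%}")          # ~3.1%, within the quoted 2%-4% range

# The "statistical tie" test: a lead smaller than twice the margin
# of error means the two confidence intervals overlap.
mccaskill, talent = 0.52, 0.48
print(abs(mccaskill - talent) < 2 * moe)  # True: a 4-point lead is a tie here
```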

In addition to the margin of error, there is a vast difference between polling the general public and taking an in-house survey of members who have a vested interest in the issue and will definitely be impacted by the results. When only a quarter of the members respond to an in-house survey, the level of apathy, lack of knowledge or negative feelings about conducting such a survey in the first place must be considered very high.

Why did the survey elicit such a low response? One can only speculate here. A certain percentage may not have received the survey due to incorrect addresses or postal problems. Others may have completed it but mailed it back too late. It would be a stretch, however, to say that these situations constituted the major reason for the small response. It would also be difficult to say that the ministerial body was uninformed. It seems to me that the overwhelming majority of ministers were either not interested or deliberately did not respond because they were against the idea of a survey. The number of respondents was somewhat higher than the number of ministers who attend general conference; that may mean that ministers who have no plans to go to Tampa didn’t bother to participate. The prevailing attitude among them may be that their opinions would make little or no difference in the outcome. Whatever the reason that only 26% of the ministers responded, we must look at all the data with that in mind.

The first question asked whether we want to allow UPCI ministers to advertise on television. It elicited 1,175 yes votes and 1,195 no votes, a margin of 20 ministers, with 47 not responding. This is a difference of 0.8% of the 2,370 who answered it. Figured against all 9,076 UPCI ministers, the difference shrinks to 0.22%, a statistically insignificant number. No one who attempts to make this a mandate for or against TV advertising should be afforded credibility. If a meteorologist predicted a 0.22% (or even 0.8%) chance of precipitation, few people would take their umbrellas with them. Statisticians would say that we are basically split down the middle on this question. It is true that, strictly speaking, more ministers voted no than yes; any significance given to this fact, however, is more than offset by the responses to other questions later in the survey.
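The margins in question one can be recomputed directly from the vote counts above; a quick check:

```python
yes_votes, no_votes, no_answer = 1175, 1195, 47
answered = yes_votes + no_votes     # 2,370 ministers answered
all_ministers = 9076

gap = no_votes - yes_votes          # a margin of 20 ministers
print(f"{gap / answered:.1%}")      # 0.8% of those who answered
print(f"{gap / all_ministers:.2%}") # 0.22% of the whole constituency
```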

The second question addressed ministering on television, as opposed to simple advertising. A wider margin (14.6%) said no to this question than to the previous one. Apparently, we believe that televising preaching services constitutes more of a threat to area churches than mere advertising does. It has been expressed in various articles that other Apostolic people who view such programming would be drawn to a church with a better choir, a nicer building or a more charismatic preacher. Also, since televised services may include appeals for offerings, some feel that they would lose income. These issues may have merit, but they have never been vetted in a business meeting, because the total ban that has been in place for thirty years has precluded any such discussion.

The next two questions have little or no bearing on this issue. They do seem to verify that the ministerial constituency wants to continue to ban televisions from the homes of church members and ministers. The margin between continuing or lifting the ban is much smaller than many would have suspected, or at least hoped for. Had this question been asked twenty years ago, how much greater would the margin have been? Much greater, in my estimation. These questions have been impacted by the rise of the internet. Several articles have pointed out that television signals may now be received on computers via the internet as easily as by turning on an actual television set. More and more mainstream broadcasters are making the latest episodes of their popular programming available on their home pages. While the older generation thinks that viewing TV programs on the internet is too inconvenient—thus putting it safely out of reach—the younger, more computer-savvy generation thinks little or nothing of the difficulties. The point is that these two media continue to edge closer to each other. Soon (yesterday, in some cases), most technologies that people use from day to day will merge into a single device. At that point, look for another huge discussion to take place among us.

Question five addresses the heart of the survey’s original intent: how will adopting the resolution affect the relationship of ministers to the UPCI? Over sixty percent of the respondents said a limited use of TV (for advertising) would not have a negative effect on their relationship with the organization. The “negative effect” is not defined. One might assume that it means relinquishing their license, but no proof exists that this worst-case scenario would necessarily follow. At any rate, the 935 ministers who said they would be negatively affected represent only ten percent of the entire constituency. Of course, this assumes that the 74% who failed to respond would not be hurt by the change; as explained earlier, we cannot presume that to be the case. But neither can we validate the inference that nearly forty percent would leave. The huge fly in the ointment continues to be the low number of respondents, which makes it simply too difficult to draw solid conclusions.
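The two competing percentages in this paragraph come from dividing the same count by two different denominators; a quick sketch, using the 935 negative responses, the average of 2,353 respondents, and the 9,076 total ministers cited in the article:

```python
negative = 935        # ministers reporting a negative effect
respondents = 2353    # average respondents, from earlier in the article
all_ministers = 9076

print(f"{negative / respondents:.0%}")    # ~40% of those who responded
print(f"{negative / all_ministers:.0%}")  # ~10% of the entire constituency
```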

Questions seven and eight make sense if we compare apples to apples. That is, if we think the other data are valid based on a quarter of the constituency responding, then we have something to talk about. Why would 49.6% say they would grant liberty to a brother to advertise on TV, while 40.6% say they themselves would not advertise on TV? (Nearly 70% say they would not minister on television.) The disparity between the two positions cannot be based on a conviction that TV advertising is wrong; if it were, the two questions should have produced nearly identical results. What other reasons would explain the difference? There could be several. For some, the expense would be prohibitive. For others, TV advertising wouldn’t make good demographic sense; much of our constituency is rural, and other forms of advertising may work better for them. Still others may feel that their fellowship with certain friends would be jeopardized should they opt for TV advertising. The point is that merely because a minister says he would not advertise on television does not mean that he thinks it is wrong to do so.

The ninth and tenth questions were, unfortunately, written in a way that could possibly skew the responses. The questions do not explain our traditional position on television. Is it the ban on televisions in the homes of UPCI people? Or is it the ban on advertising on television? If the respondent took the question to mean the former, the answer could have gone one way. If he took it to mean the latter, the answer could well have been the opposite. The wording of a question is vital to a coherent response. Consider the following:

What are methodological errors?

First, there are many ways a polling can give misleading results due to poor methodology. For example, the phrasing of the questions is known to influence the results. Consider the following options:

If the Senate election were held today, would you vote for the Democrat or the Republican?
If the Senate election were held today, would you vote for the Republican or the Democrat?
If the Senate election were held today, would you vote for Sherrod Brown or Mike DeWine?
If the Senate election were held today, would you vote for Mike DeWine or Sherrod Brown?
If the Senate election were held today, would you vote for Democrat Sherrod Brown or Republican Mike DeWine?
If the Senate election were held today, would you vote for Republican Mike DeWine or Democrat Sherrod Brown?
If the Senate election were held today, for whom would you vote?
“These questions will all get different responses.

“Second, the sample may not be random. The most famous blunder in polling history occurred during the 1936 presidential election, in which Literary Digest magazine took a telephone poll and concluded that Alf Landon was going to beat Franklin D. Roosevelt in a landslide. Didn’t happen. What went wrong? At the height of the Depression, only rich people had telephones, and they were overwhelmingly Republican. But there weren’t very many of them. Even now, telephone polls miss people too poor to have telephones, people who don’t speak English (well), and people scared of strangers or too busy to deal with pollsters. These effects could systematically bias the results if not corrected for.” (www.votefromabroad.org)

Moreover, questions nine and ten set up a non sequitur: they make it appear that the use of TV for advertising will usher in a decline in holiness standards. They make TV advertising take the fall for a condition that already runs rampant in our ranks. It would be grossly unfair for such an argument to be advanced. Even to suggest that TV advertising would exacerbate the present struggle with holiness is speculation. Anecdotal evidence may exist here and there, but there is also anecdotal evidence that the opposite is true. If TV advertising is considered a holiness issue, it is only because some have declared it to be so, or because “we’ve always believed it was a holiness issue,” not because anyone possesses empirical evidence to that end.

The results of questions eleven and twelve must be very difficult to understand for those who have tried valiantly to demonstrate a difference between advertising on TV and advertising on the internet. It would appear that the respondents did not buy the argument. In the interest of consistency, they evidently recognized that if we disallow advertising or ministering on TV, we certainly shouldn’t allow advertising or ministering on the wild and woolly internet. The logic says that if we are against television, we should be a thousand-fold more against the internet.

The two kinds of advertising, TV and internet, are virtually identical: the same programming format, the same cameras, lights, video clips, still pictures, text, music, and sounds. The only real differences are the way the signal is conveyed to the consumer and the device on which the consumer receives it. Indeed, most pre-recorded clips that are played on television now are generated by computers in the same way clips are made for the internet. Ministers and churches who advertise on the internet make their product just as they would if they were to advertise on TV. The question is not how the footage will be prepared, but how it will be disseminated. Any remaining differences are of a minor or technical nature that would not appreciably affect this contention.

The response to question thirteen delivered the main shock to most of us. The respondents said, 52% to 48%, that the UPCI should produce a commercial to be broadcast on television. After all this controversy, the majority still said they would favor a commercial from the UPCI as an organization. To me, this means that the majority is not against the principle of TV advertising. Despite the dire warnings of opponents of TV advertising that it is going to lead us to destruction, the respondents still spoke for it. They are for a UPCI commercial; they just don’t want the brother across town or in the next county to make one. This may go back to the fear issue spoken of earlier—fear of a better choir, a nicer building, and so on. I consider this the most significant response in the entire survey.

The questions and answers about pastors or evangelists have little meaning. All they tell us is that 1,638 pastors and 118 evangelists responded. We don’t know how each ministry type responded. Even if we did, it would all be academic.

My overall impression of the survey is that it is inconclusive. The small response, the statistically insignificant spreads, the unfortunate wording of some questions, and the illusion that it tells us where we are when, in reality, it does not: all these factors render the questionnaire very weak in terms of its professed purpose. Ironically, the survey itself may end up being a catalyst for change. Some may be emboldened by certain responses and change their vote in Tampa accordingly. To summarize, this is what I think the survey tells us:

Much more can be said about this survey. I predict that there will be much said as time goes on. And, as I have said before, I am with the UPCI, regardless of the vote in Tampa.

Article originally appeared on ThoughtShades (http://www.jmarkjordan.com/).