Piredeu Open Forum

Vote-recall question proposal
Moderators: Marcel, W.vanderBrug

 
levi
Posted 18/10/2008 02:16 (#133)
Subject: Vote-recall question proposal


New user

Posts: 1

For the voting question I propose the following:

The next question is about the elections in [Month]. In talking to people about elections, we often find that a lot of people were not able to vote because they were sick, they forgot, or they just didn’t have time. We also sometimes find that people who thought that they had voted actually did not vote. Also, people who usually vote may have trouble saying for sure whether they voted in a particular election. In a moment, I’m going to ask you whether you voted on [Day (like Sunday)], [Month and day], which was _____ [time fill] ago. Before you answer, think of a number of different things that will likely come to mind if you actually did vote this past election day; things like whether you walked, drove, or were driven by another person to your polling place [pause], what the weather was like on the way [pause], the time of day it was [pause], and people you went with, saw, or met while there [pause]. After thinking about it, you may realize that you did not vote in this particular election [pause]. Now that you’ve thought about it, which of these statements best describes you?

[READ STATEMENTS 1 to 4]

1. I did not vote in the [Month and day] election.
2. I thought about voting this time but didn’t.
3. I usually vote but didn’t this time.
4. I am sure I voted in the [Month and day] election.

---

I understand that this question takes additional time to ask, but it has been shown to perform significantly better. Moreover, I am convinced that this question would “get it right” (i.e., what Mark was asking for). Also, with this question, backward compatibility is significantly less of an issue than with past questions.

There are two theories of why people overreport participation: one is social desirability bias; the other is memory effects.

About social desirability bias: to minimize it, face-saving opportunities need to be offered to the respondent, ideally both in the question (or its preface) and in the response options. The original EES question only uses a statement that many people do not vote. The newly proposed question offers legitimate reasons people could have for not voting, making it that much more acceptable to give a negative response. In addition to the legitimate reasons offered, the response categories also allow respondents to tell the truth while still showing that they are good citizens. This should diminish intentional misreporting due to social desirability bias.

More interestingly, the other theory of misreporting focuses on unintentional memory failures. Theoretically speaking, people who intend to vote and think about voting can easily produce false memories as time passes after the election. A person who cares about voting probably plays out the voting process in their head: how they go to the polling place, walk up to the booth, check the candidate of choice, and put the ballot in the box, producing make-believe episodic memories that are difficult to distinguish from real ones. This process could be played out even hundreds of times during the election campaign. Research on memory (reviewed in the cited articles) identifies this exact process as a main cause of false memories. This theory is in line with the findings of the original study of validated misreporting, which found that people who misreport tend to be more interested in politics and have a strong preference (Granberg and Holmberg 1991).

Three published works explore both social desirability and memory effects (Belli et al. 1999, 2001, and 2006). The 1999 and 2006 studies proposed the above-cited question wording as a more effective and more accurate way of assessing turnout. (The question was slightly modified to fit the non-US context: the originally proposed question included “weren’t registered” instead of “forgot”.) The EES could be the first to use this improved question in a multi-cultural context (a 2003 AAPOR conference paper by McCutcheon et al., to which I do not have access, did a bi-cultural UK-US comparison).

Brief review of the three articles (I also posted the cited articles at: http://levente.littvay.hu/belli/ ):

Summary of the Belli et al 1999 article

The authors use two split-ballot telephone survey experiments. One was a national survey conducted during the three months following the November 1996 presidential election. In this survey, roughly half of the sample received the standard NES question (which included no memory cues, allowed no extra time with a pause, and offered face saving in the question but not in the response options, expecting only a yes/no answer), and the rest of the sample received an extensive form of the question (similar to the one proposed) containing both memory cues and face-saving descriptions, along with the four response options.

The second survey was conducted in the state of Oregon after the special Senate election of January 1998; the turnout question was asked of a randomly selected third of the respondents. For the Oregon sample the authors were able to obtain the addresses of 94.6% of the respondents and validate the self-reports.

In the national sample, fewer people reported voting in the experimental condition (a significant 8.9 percentage point difference). The result was robust to the inclusion of control variables. In the Oregon sample, vote validation allowed for testing the actual accuracy of the questions: the short form produced an accurate response 79.9% of the time, while the extended form was accurate 87.2% of the time. This difference was significant.
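As a rough illustration of how such an accuracy gap is tested, here is a standard pooled two-proportion z-test. The per-condition sample sizes are not reported in this summary, so the group sizes below are hypothetical; only the two accuracy rates come from the article.

```python
import math

# Accuracy rates from the validated Oregon sample: 79.9% (short form)
# vs. 87.2% (extended form). The per-condition n is NOT given here;
# n = 300 per group is a hypothetical figure for illustration only.
n1, p1 = 300, 0.799   # short form (hypothetical n)
n2, p2 = 300, 0.872   # extended form (hypothetical n)

# Pooled two-proportion z-test for the difference in accuracy.
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

print(round(z, 2))  # |z| > 1.96 indicates significance at the .05 level
```

With these assumed group sizes the difference clears the conventional .05 threshold; the actual test in the article of course uses the real sample sizes.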

When testing the interaction between experimental condition (question asked) and the length of time between the election and the interview, in the national sample those who received the standard question reported having voted more often in later months. There was no significant change among those answering the experimental question. The same effect is visible in the Oregon sample: reported voting, overreporting, and accuracy did not change significantly in later months for those who answered the experimental question. The authors systematically show that the difference between the experimental and control groups increases as time passes. This suggests that the value of the extended-form question increases with the time between the election and the interview. The study shows no improvement when the data was collected in November, suggesting the payoff from the extended form only materializes after 3-4 weeks. This should be considered in our specific context (though, as cited later, there could be cross-cultural differences here, so testing is worthwhile either way). If our procedure calls for collecting the data less than three weeks after the election, memory cues could have no impact. In this case the following question would probably suffice:

The next question is about the elections in [Month]. In talking to people about elections, we often find that a lot of people were not able to vote because they were sick, they forgot, or they just didn’t have time. Which of these statements best describes you?

[READ STATEMENTS 1 to 4]

1. I did not vote in the [Month and day] election.
2. I thought about voting this time but didn’t.
3. I usually vote but didn’t this time.
4. I am sure I voted in the [Month and day] election.

In the end, for Belli et al. 1999 the authors conclude that the experimental question was superior to the standard one, especially in later interviews, but admit that this may be caused not necessarily by the direct memory or source-monitoring cues but by the fact that the experimental question makes respondents try harder to remember what they actually did. Consequently, the same effect could be obtained by a question concentrating on notifying respondents that overreporting is a problem and that they should try to remember whether they voted. Either way, the proposed question should provide improvements in accuracy.


Summary of the Belli et al 2001 study. (Probably the least relevant for us.)

This study explored the predictors of overreporting. It used the American National Election Study datasets from 1964, 1978, 1980, 1984, 1986, 1988, and 1990. (The ANES has validated turnout data.) Overreporters in these samples range from 7.9% to 14.2%.

Of importance is that the study also confirmed that the time passed between the election and the interview is a significant predictor of overreporting (which provides additional empirical evidence for the existence of memory effects). It also showed that favorable political attitudes, education, and ethnicity significantly predict overreporting. There is no difference in overreporting between presidential and non-presidential election years. (Initially I was going to argue that people lose inconsequential episodic memories quicker than more important ones, which would further increase the importance of using the long form of the question, as European Parliamentary elections are as low profile as it gets in most countries; but it appears that the type of election in the US has no impact. Though I would argue that a Senate election is still a more important election than an EP election, at least as far as most countries are concerned. There could be cross-cultural differences here too.)


Summary of the Belli et al 2006 study.

This study is a replication of the previous studies. This time they test the effectiveness of three question formats using a random-digit-dialing national probability sample. The three questions were the short form with and without the four response options (in the latter case no response options were offered and a yes/no answer was expected) and the long form presented with the four response options. The survey was conducted after the congressional elections of 1998; data collection took place in December 1998 and in January and February of 1999. Logistic regression is used; the results show that the long format fares better than all other question formats at reducing the rate of self-reported voting, regardless of when the measurement is taken. However, the authors admit that in this study the question was not tested at time intervals closer to the election.

Other analyses reveal that the timing of the measurement plays a role in the effectiveness of the question. At time intervals closer to the election, the short form of the question (only with the four response options) is more effective; as time passes, the long format becomes better at reducing the rate of self-reported voting. The authors believe this is because the loss of episodic detail accelerates as time passes after the election; as long as the measurement is taken before this happens, the short form is effective. After this period (a few weeks) the long format better reduces the rate of self-reported voting.

---

To add to the evidence, I put together a turnout-question research group of CEU PhD and MA students with an interest in the topic. So far we have only very preliminary results. Using the CSES, it appears that the time between the election and the interview significantly predicts vote report in a pooled logistic regression for cases where the independent variable was readily available (for some countries it will have to be calculated manually). A standard deviation increase in the time variable leads to a 5% increase in the probability of reported voting. Of course, this is very preliminary. We will have to calculate the independent variable for all countries and add controls and random effects that correct for the autocorrelation due to non-independent cases. We plan to do the same with the EES data and a pooled dataset of national election studies. I should have more by the November meeting, but the preliminary evidence suggests that something is leading to increased reporting as time passes. If we believe the Belli et al. articles, it is probably memory failure. We probably should do everything we can to correct for it. (Especially when the recipe for the solution is readily available.)
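To make the "standard deviation increase leads to a 5% increase in probability" reading concrete, here is a minimal sketch of how a logistic-regression coefficient translates to the probability scale. The intercept and slope below are made-up illustrative values, not our estimates.

```python
import math

def logistic(x: float) -> float:
    """Inverse logit: maps log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical coefficients for illustration only: log-odds of
# reporting a vote as a function of standardized days-since-election
# (z-scored, so +1 means interviewing one SD later).
intercept = 1.10   # baseline log-odds at the mean delay (assumed)
beta_time = 0.28   # log-odds shift per SD of extra delay (assumed)

p_at_mean = logistic(intercept)                  # probability at the mean delay
p_one_sd_later = logistic(intercept + beta_time) # one SD later

print(round(p_one_sd_later - p_at_mean, 3))  # ~0.049, i.e. roughly 5 points
```

Note the translation is nonlinear: the same log-odds coefficient produces a larger or smaller probability shift depending on the baseline, which is why marginal effects are usually evaluated at the mean of the covariates.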

To sum up, if data collection will span more than three weeks after the election, the above-proposed question is expected to be more accurate, though it comes at the cost of additional interview time. Finally, please note that I am not arguing for a complete change of question. I am asking for a split half between the proposed question and the old EES question (or, even better, the above-cited short form). Doing a split half cuts the additional interviewing time requirement in half (or close to it). It would also allow for a systematic assessment of the improved question’s effectiveness, giving us valuable information for future studies. (Multi-way splits could be considered between the original, short, and long forms with and without face-saving response options, but I am wary of splitting the sample four or more ways.)

Finally, a split-sample experiment would also allow for the systematic cross-cultural assessment of memory effects. This is important, as the McCutcheon et al. 2003 AAPOR paper found that memory effects were significantly larger in the US than in the UK, which is an interesting and important scientific contribution. (The UK did not show significant memory effects, but I do not know the details; I only had access to the abstract.) It goes without saying that if we decided to collect the data, I would conduct the analysis and report back with recommendations for the 2014 study.

(I would like to acknowledge the contributions of Sebastian Popa, Csaba Zsolt Kiss, and Constantin Manuel Bosancianu, who did the first drafts of the article reviews, and the turnout question research group: Istvan Gergo Szekely, Zoltan Fazekas, Csaba Gallfy, and once again Sebastian Popa.)

Works Cited:

Belli, R., Moore, S., and Van Hoewyk, J. (2006) “An Experimental Comparison of Question Formats Used to Reduce Vote Overreporting” Electoral Studies 25

Belli, R., Traugott, M., and Beckmann, M. (2001) “What Leads to Voting Overreports? Contrasts of Overreporters to Validated Voters and Admitted Nonvoters in the American National Election Studies” Journal of Official Statistics 17

Belli, R., Traugott, M., Young, M., and McGonagle, K. (1999) “Reducing Vote Overreporting in Surveys: Social Desirability, Memory Failure, and Source Monitoring” Public Opinion Quarterly 63

Granberg, D., Holmberg, S. (1991) "Self-reported turnout and voter validation" American Journal of Political Science 35

McCutcheon, A. L., Belli, R. F., and Tian, Y. (2003) "Social Desirability and Faulty Memory in Vote Over-Reporting: A Cross-National Comparison of the American and British Electorates" Paper presented at the annual meeting of the American Association for Public Opinion Research, Sheraton Music City, Nashville, TN http://www.allacademic.com/meta/p_mla_apa_research_citation/1/1/6/2...

Rosema
Posted 11/11/2008 11:38 (#136 - in reply to #133)
Subject: RE: Vote-recall question proposal


Member

Posts: 6

Location: University of Twente
Although I am sympathetic towards initiatives aimed at decreasing misreported voting behaviour, I am not convinced this suggestion would improve the questionnaire. The proposal actually has two elements: a different question and different answer categories. My primary concern is that the question is rather long. Questions should be as short as possible, not only to limit reading time for interviewers but also to facilitate understanding of the question by interviewees. An additional argument against it is that the answer categories do not provide useful information: yes/no is the only thing that really matters here, so if the question were used, the answer categories 'yes', 'no', and 'don't know' would still suffice. Finally, I am afraid that some respondents might find this question a bit paternalistic and get mildly annoyed. But I admit this latter point is merely an intuition, not something I can support with facts of whatever kind. (I would, however, like to encourage experimental research on these matters outside the EES.)

Edited by Rosema 11/11/2008 11:45
amy.corning
Posted 14/11/2008 16:50 (#142 - in reply to #133)
Subject: RE: Vote-recall question proposal


New user

Posts: 1

I’d like to endorse Dr. Littvay’s suggestion for using a different question to ask about voting in recent elections. As Dr. Littvay indicates, both memory and social desirability are influences that can increase vote overreporting, and modified question wording will legitimize not voting as well as help respondents to remember their behavior.

Moreover, and apart from concerns about social desirability, etc., the question included in the current version of the questionnaire seems likely to contribute to vote overreporting through its use of the word “abstain,” which the Compact OED defines as “formally choose not to vote,” and Webster’s defines as “to refrain deliberately...from an action.” Thus the question as currently phrased seems to allow for only two limited alternatives: voting, and making a deliberate decision not to vote. Respondents may be even less likely to report their behavior accurately than if the question were phrased more neutrally, allowing, e.g., for those who just didn’t get around to voting or didn’t manage to vote.

The shorter form introduction as employed in the ANES (“In talking to people about elections, we often find that a lot of people were not able to vote because they weren’t registered, they were sick, or they didn’t have time.”) seems a desirable change, helping to reduce overreporting by legitimizing not voting without greatly increasing the interviewer’s burden. And using a split-ballot experimental design to ask both the existing question and the new version will permit the study to make an important contribution to the literature on vote overreporting as well as to make methodologically informed decisions for the next round.

On the question of which response options to use, see Duff et al. (2007), for an experimental comparison of two question forms using ANES data. The introductory text was the shorter version in both forms. The “standard” form then asked, “How about you—did you vote in the election this November?” and offered only two response options (“yes, voted”; and “no, didn’t vote”). The experimental form replicated the response options used by Belli et al. (1999) – but not the more extended introduction with memory cues – asking respondents “Which of these statements best describes you?” and offering the four response options:

1. I did not vote in the [Month and day] election.
2. I thought about voting this time but didn’t.
3. I usually vote but didn’t this time.
4. I am sure I voted in the [Month and day] election.

Thus, the authors are able to isolate the effect of the response options, since introduction/question wording is held constant; they find that the additional response options reduce overreporting by about 8 percentage points. They also find that the reduction in overreporting is not distributed evenly across respondents, but is concentrated among those who are younger, have the fewest resources, who are the least politically interested and knowledgeable, and who feel the least efficacious – a conclusion that contrasts with Belli et al.’s (2001) results. Examining such patterns in a cross-cultural context would contribute to the further understanding of who does and does not vote, as well as generate additional insights into susceptibility to social desirability in vote reporting. (In this regard, see also Karp and Brockington 2005, who consider cross-national and contextual differences in the degree of social desirability pressures and their relationship to vote overreporting.)

References

Duff, Brian, Michael J. Hanmer, Won-Ho Park and Ismail K. White. 2007. “Good Excuses: Understanding Who Votes with an Improved Turnout Question.” Public Opinion Quarterly 71:67-90.

Karp, Jeffrey A. and David Brockington. 2005. “Social Desirability and Response Validity: A Comparative Analysis of Overreporting Voter Turnout in Five Countries.” Journal of Politics 67: 825-40.

