7 Comments
hypnosifl

"First, if there are relatively broad correlations, ties become less likely, and so the likelihood that my vote will be pivotal decreases"

If I am considering a form of evidential decision theory where my own vote influences the probability I assign to many others voting similarly, wouldn't I no longer be interested in the question of whether my individual vote is "pivotal" (in the sense of being a tie-breaker) but just the total probability that blue vs. red wins given the shift in probability distribution by my voting either red or blue?

"Combined, these two effects might raise the impact of my vote by a couple of orders of magnitude. While that’s a relatively substantial difference, the chance that my voting blue will make a positive difference is still negligible, something around 10-8 or 10-9."

Is this number premised on the idea that my vote must be the tie-breaker to "make a positive difference"?

"On the other hand, if I’m updating on expected correlations between myself and others, I should think that there are a lot more people who will reason similarly to me who will ultimately vote red than there are in the bloc that would correlate with me from before. If this is right, then updating on this broader correlative data will lead me to expect more total red voters conditional on my voting blue than I would have otherwise expected without that evidence as background."

I don't understand this part, can you elaborate on why, if my vote is correlated with others, my voting blue could be associated with believing there are more total red voters?

Friction

"If I am considering a form of evidential decision theory where my own vote influences the probability I assign to many others voting similarly, wouldn't I no longer be interested in the question of whether my individual vote is "pivotal" (in the sense of being a tie-breaker) but just the total probability that blue vs. red wins given the shift in probability distribution by my voting either red or blue?"

Yes, and I cover this in §4.1.

"Is this number premised on the idea that my vote must be the tie-breaker to "make a positive difference"?"

Yes, since if it's not the tie-breaker and red wins, then red would win regardless of what I do. And if it's not the tie-breaker and blue wins, then blue would win regardless of what I do. Of course, as I discuss, there are some other considerations relevant to the choice, especially as discussed in §3.3 and §3.4.
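To see why that tie-breaking chance is so small, here's a quick binomial sketch (the electorate size and vote shares are invented for illustration, not taken from the article). My vote breaks a tie only if the other voters split exactly evenly, and that probability is extremely sensitive to the expected vote share:

```python
from math import exp, lgamma, log

def log_prob_tie(n_others: int, p_blue: float) -> float:
    """Log-probability that n_others other voters (n_others even)
    split exactly evenly, so that one more vote breaks the tie."""
    m = n_others // 2
    return (lgamma(n_others + 1) - 2 * lgamma(m + 1)
            + m * log(p_blue) + m * log(1 - p_blue))

# A million other voters, each voting blue with probability exactly 0.5:
print(exp(log_prob_tie(1_000_000, 0.5)))   # ~8e-4
# Nudge the expected vote share to 51% and a tie becomes astronomically rare:
print(exp(log_prob_tie(1_000_000, 0.51)))  # ~1e-90
```

The computation runs in log-space because 0.5 raised to the 500,000th power underflows an ordinary float; the qualitative point is just that any confident lean away from a 50/50 split crushes the pivotal probability.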

"I don't understand this part, can you elaborate on why, if my vote is correlated with others, my voting blue could be associated with believing there are more total red voters?"

Sure. Imagine I haven't yet considered what my own reasoning style implies about how similar people will vote. If I vote blue, that somewhat increases the expected number of blue votes among similar reasoners. However, I also know that I'm the sort of reasoner who would generally favor red, so I have good reason to think the reference class is mostly red voters even if I vote blue. On the whole, updating on this correlative data lowers the expected number of blue voters, though it lowers it somewhat less when I vote blue.
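To put toy numbers on this (all of them invented for illustration): suppose 100 reasoners similar to me share my unknown "type", red-leaning types vote blue with probability 0.2, and blue-leaning types vote blue with probability 0.8. Conditioning on my voting blue shifts my beliefs about my type, but learning that my reasoning style favors red lowers the expected blue count even so:

```python
K = 100  # reasoners in my reference class (invented)
p_blue = {"red-leaning": 0.2, "blue-leaning": 0.8}

def expected_blue_given_i_vote_blue(prior_red: float) -> float:
    # Bayes: update my type on the evidence that I voted blue,
    # then take the expected number of blue votes among the K others.
    joint = {t: (prior_red if t == "red-leaning" else 1 - prior_red) * p_blue[t]
             for t in p_blue}
    z = sum(joint.values())
    post = {t: joint[t] / z for t in joint}
    return sum(post[t] * p_blue[t] * K for t in post)

# Before introspection I'm 50/50 on my own type:
print(expected_blue_given_i_vote_blue(0.5))  # 68.0
# After noticing my reasoning style strongly favors red (say 90% red-leaning):
print(expected_blue_given_i_vote_blue(0.9))  # ~38.5
```

Even conditional on my voting blue, updating on the fact that I'm probably a red-leaning reasoner drops the expected number of blue voters from 68 to about 38.5.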

hypnosifl

"Yes, since if it's not the tie-breaker and red wins, then red would win regardless of what I do."

Are you speaking in terms of causal decision theory rather than evidential decision theory here? In terms of evidential decision theory, I could model my vote as statistically tied to many other votes and then it might be the case that in such a model P(red wins | I vote red and I am not a tie-breaker) is high (say >0.9), but P(red wins | I vote blue and I am not a tie-breaker) is low (say <0.1). Do you disagree?
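A common-cause toy model (all parameters invented) shows how these two conditional probabilities can come apart: if a shared latent parameter drives everyone's vote, then my own vote is evidence about everyone else's, even in trials where I'm not the tie-breaker.

```python
import random
random.seed(0)

N, TRIALS = 101, 200_000  # 100 other voters plus me (invented numbers)
red_win = {"red": [0, 0], "blue": [0, 0]}  # [red wins, samples] given my vote

for _ in range(TRIALS):
    p = random.random()                 # shared latent lean toward red
    my_vote = "red" if random.random() < p else "blue"
    others_red = sum(random.random() < p for _ in range(N - 1))
    if others_red * 2 == N - 1:         # exact tie among others: I'd be pivotal, skip
        continue
    red_win[my_vote][1] += 1
    red_win[my_vote][0] += others_red * 2 > N - 1

print("P(red wins | I vote red, not pivotal) ~",
      round(red_win["red"][0] / red_win["red"][1], 2))
print("P(red wins | I vote blue, not pivotal) ~",
      round(red_win["blue"][0] / red_win["blue"][1], 2))
```

In this model the two frequencies come apart sharply, because conditioning on my vote shifts the posterior over the shared parameter; a causal reading, by contrast, would treat the two as equal since my vote changes nothing about the others.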

"However, I also know that I'm the sort of reasoner who would generally favor red."

Do you mean your own personal preferences are for red, or is "I" a hypothetical generic person here? If the latter, are you just making the point that, all else being equal, almost everyone would prefer guaranteed survival to risking their life, even if this first-order preference might be altered by other facts, like the potential to save lives by taking a risk?

Luke P

This post is very interesting; however, I personally don't find the expected-value analysis of a single vote very convincing for this particular problem. Your expected values show that on average ~1.5 billion people will die, so by pressing red you are taking part in permitting an expected ~1.5 billion deaths. Personally, I could just not live with that on my conscience.

In your article you mention: "Imagine you have a young child, and you could tell them what to do. You know that if they vote blue, there’s about a 50% chance that they’ll die, and about a 0% chance that they’ll save anyone else. If you told them to vote blue, I would think that you either didn’t fully understand the situation or that you’re some sort of moral monster." But by telling them to vote red you've violated your assumption that the votes are independent. By your own assumptions, a significant portion of your friends and family are expected to die if blue loses. Even if the optimal choice is to press red, would you truly want to live with the fact that you were part of the population that allowed those deaths?

Eugene Earnshaw

It makes me sad that anyone would even consider pressing blue.

Luke P

Why? You don't like people not wanting others to die?

Eugene Earnshaw

Because if everyone presses red nobody dies. And there is no reason for people to press blue once that is clear. I don’t want anyone to die so I want everyone to press red.