Pur Autre Vie

I'm not wrong, I'm just an asshole

Thursday, February 17, 2011

This Post Can Make Your Life Happier Than Any Other On The Subway

So Eileen told me about a paper by Rawls ("Two Concepts of Rules," The Philosophical Review, Vol. 64, No. 1, pp. 3-32, Jan. 1955) that I think has changed my view of rule utilitarianism. Caveat: I haven't read the paper, so I could easily be butchering the argument.

But here goes. "Act utilitarianism" is the belief that at each moment in time, each individual should do whatever produces the most expected utility going forward. "Rule utilitarianism" involves specifying a set of rules that, if followed, lead to maximum global utility. Then, in rule utilitarianism, the moral requirement is not necessarily to maximize utility in every instance, but to follow the utility-maximizing rules.

I used to think that rule utilitarianism was silly. Why? Because it seemed no different from sophisticated act utilitarianism. Of course you should keep your promises, even when a particular promise leads to unfortunate results, because the ability to make credible commitments is important. But an act utilitarian can seemingly take that into account by asking, "What is the global effect of breaking this promise? Will it increase utility in the short term but, in the long term, contribute to the erosion of norms that are hugely beneficial? What is the likelihood that it will have that effect, and what is the size of the effect?" So then you balance the good and the bad. Not all promises should be kept; that's why we have personal bankruptcy. But keeping promises doesn't seem to require layering rules on top of utilitarianism.

But I was wrong - it is relatively easy to come up with models in which rule utilitarianism leads to more aggregate utility than even the most sophisticated act utilitarianism. The logic is very similar to the 2-box strategy in Newcomb's problem. Imagine that everyone gains a certain amount of utility from knowing that they live in a society in which innocent people are never tortured. And imagine that every once in a while there is a large utility boost from torturing an innocent person (perhaps you can extract the location of a hidden bomb by torturing a terrorist's innocent wife). Imagine that the police can, every once in a while, torture someone with literally 0% chance that anyone will find out beyond a small circle of people (the victim, the terrorist, a few police officers). And imagine that everyone in this society is both rational and a principled utilitarian (the terrorist just has radically different factual beliefs about how to maximize utility). And finally, everyone knows that the police can torture people secretly, but the public can never discover whether it has happened in a particular case.

Okay, so, if the math works out right, then sometimes the police will torture the innocent victim if they are act utilitarians, but not if they are rule utilitarians. And the world in which they are rule utilitarians (perhaps a world in which everyone is a rule utilitarian) is a world of greater aggregate utility than the world in which they are act utilitarians. This is because the torture is secret - the only way people will know that it doesn't happen is if they know the police officers are rule utilitarians. If the police officers are act utilitarians, then everyone will assume that the torture happens, because everyone knows that it will be utility-maximizing in some particular instances.
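
For concreteness, here's a toy back-of-the-envelope version of "if the math works out right." Every number and name below is invented purely for illustration; the only point is that the assurance utility, summed over the whole population, can swamp the case-by-case gains from torture even when each individual act of torture is utility-positive in isolation.

    # Toy model of the torture hypothetical; all numbers are made up.
    # n citizens each get `assurance` utils from the common knowledge that the
    # police never torture; a few times a year, torturing an innocent person
    # would yield `bomb_gain` utils (bomb found) at a cost of `victim_cost`
    # utils to the victim.

    n = 1_000_000          # citizens
    assurance = 0.5        # utils per citizen from knowing torture never happens
    episodes = 3           # ticking-bomb cases per year
    bomb_gain = 50_000     # utils from finding the bomb
    victim_cost = 10_000   # utils lost by the tortured innocent

    # Act-utilitarian police torture whenever bomb_gain > victim_cost. Since
    # each act is unobservable, the assurance term never enters their
    # calculation, and the public, knowing how they reason, never enjoys it.
    act_total = episodes * max(bomb_gain - victim_cost, 0)

    # Rule-utilitarian police forgo those gains, but the assurance utility
    # accrues to everyone.
    rule_total = n * assurance

    print(f"act utilitarians:  {act_total:,.0f} utils per year")   # 120,000
    print(f"rule utilitarians: {rule_total:,.0f} utils per year")  # 500,000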

It's like taking the second box in Newcomb's problem. You might as well - it can only increase your winnings. What you ideally want is to be the kind of person who only takes one box, but you never actually want to take only one box. Your decision to take two boxes doesn't affect the amount of money in the opaque box (your decision itself, as opposed to your tendencies, was unobservable to the person who arranged the boxes). Similarly, if you're an act utilitarian police officer, you go ahead and perform torture (if you believe it will increase utility) because your actions don't affect people's beliefs (your decision itself, as opposed to your moral belief system, is unobservable to the public). What you want is to be the kind of person (a rule utilitarian) who doesn't torture anyone.
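
The same tension can be put in expected-value terms. This is just a sketch with the standard payoffs and an assumed 99%-accurate predictor (the accuracy figure is mine, not part of the canonical problem):

    # Newcomb's problem with the standard payoffs ($1,000 in the transparent
    # box; $1,000,000 in the opaque box iff one-boxing was predicted) and an
    # assumed predictor accuracy of 0.99.

    accuracy = 0.99
    small, big = 1_000, 1_000_000

    # If your choice is treated as evidence about what was predicted (i.e.,
    # about what kind of person you are), one-boxing looks far better:
    ev_one_box = accuracy * big
    ev_two_box = accuracy * small + (1 - accuracy) * (big + small)
    print(f"one-box EV: ${ev_one_box:,.0f}, two-box EV: ${ev_two_box:,.0f}")
    # one-box EV: $990,000, two-box EV: $11,000

    # But once the prediction is fixed, taking the second box adds $1,000 no
    # matter what the opaque box holds -- the dominance argument for
    # two-boxing, and the analogue of the act-utilitarian officer's reasoning.
    for opaque in (0, big):
        print(f"opaque box holds ${opaque:,}: "
              f"one-box gets ${opaque:,}, two-box gets ${opaque + small:,}")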

Okay, so, I think that makes sense. And although this is a somewhat exotic hypothetical, it's not hard to imagine that lots of decisions are effectively unobservable to the public. We make plenty of promises that are unverifiable by the general public, or even by specialized institutions like credit bureaus. And yet the tendency to keep those promises is hugely valuable, even if in particular cases (1) the debtor gets more utility from the money than the creditor, and (2) the promise is unverifiable, so there is no (net) utility to be gained by keeping it.

So now I think rule utilitarianism is almost self-evidently better than act utilitarianism, though before tonight I thought the exact opposite (to be precise, I thought rule utilitarianism was incoherent or a mere refinement of act utilitarianism). So, philosophy works! I think! (Haven't actually read the paper.) Too bad normativity is dead.

9 Comments:

Blogger Alan said...

I agree with your distinction between act and rule utilitarianism. However, I don't think your analogy to Newcomb's problem is useful.

To recap, and to make sure we're on the same page, you're imagining a society whose utility function is such that a rule prohibiting all torture of suspects would yield more utility than a series of act-utilitarian decisions whether or not to torture each suspect. This is because under the rule-utilitarian system, the society would gain more utility from knowing that it never tortured than it would lose by foregoing the (assumedly) utile torture of suspects who know valuable information. You're also assuming that under the act-utilitarian system, the government might as well torture suspects when it's confident enough that they would cough up valuable information, because society already knows that the government probably tortures -- the cat's out of the bag; the damage (or at least enough of it) has been done. Obviously, ceteris paribus, this society would be better served by adopting the rule.

Then you bring up Newcomb's problem. You make the point that under the classic 2-boxer account, you obviously should take the second box because what's done is done; if you didn't take it you'd be leaving money on the table. Then you compare this to the government's decision whether or not to torture an obviously guilty and knowledgeable suspect under the act-utilitarian regime -- the government might as well, because if it didn't it would be leaving valuable information on the table.

You go on to note that what someone faced with the choice in Newcomb's problem (hereinafter a "boxer") wants is to be the kind of person who is predicted to take only one box. Unfortunately, it seems that people have no control over whether this is the case, hence the aforementioned argument for taking both boxes. But "A-ha!" you say: the society has the power to do what the boxer can't: the society can adopt the no-torture rule and thereby gain utility just like a boxer who, before choosing, somehow made himself the kind of person who would be predicted to take only one box.

I get it, but I don't see what the analogy adds. Analogies, to be valuable, must enhance the clarity of a point and/or facilitate valuable insights. I don't see how the above analogy does either. Regarding clarity, I think the analogy actually confuses, if anything. What's interesting about Newcomb's problem -- where the rubber hits the road -- is precisely that the boxer seems to have no control over the prediction, even though he has control over his choice and one-boxers have vastly higher historical returns. In your torture example, though, the society can simply decide to adopt the no-torture rule if doing so would yield more utility than the act-utilitarian alternative. So sure, your analogy technically works -- it makes sense -- but I don't think it sharpens my understanding of why the rule-utilitarian option is better for your society. As for the facilitating-valuable-insights prong, well, I haven't come up with any yet!

One other thing: your conclusion shouldn't be that "rule utilitarianism is almost self-evidently better than act utilitarianism." As your torture hypothetical illustrates, it depends on the utility function in question. Rather, your conclusion should be that having the option of rule utilitarianism is self-evidently better than being stuck with only act-utilitarianism. After all, it's not as if act-utilitarianism is obsolete -- there can't be a rule for every decision; at the very least one must make an act-utilitarian decision about whether to adopt the first rule and, if adopted, what its content should be.

I don't mean to be harsh, just clear.

1:03 AM  
Blogger Zed said...

I think the analogy is quite helpful because it gets at why rule utilitarianism cannot be reduced to act utilitarianism + foresight. If a.u. + foresight gave you r.u., there would probably be an analogous argument for taking one box in the Newcomb problem. There isn't; therefore it's best to take the r.u. position as fundamental.

12:59 PM  
Blogger Alan said...

SG, I don't get this: "If a.u. + foresight gave you r.u., there would probably be an analogous argument for taking one box in the Newcomb problem." Please explain.

I don't know if this is relevant, but I'm assuming that each boxer makes a single act-utilitarian decision: one box or two? This may be part of the reason why I don't see a good parallel between Newcomb's problem and the foresight vs. precommitment issue.

1:34 PM  
Blogger Tarun Menon said...

Alan,

I think you're missing the point when you talk about society adopting the rule. I don't even know what it means for society to adopt a rule in this sense. James' point is about individuals being act vs. rule utilitarians. The societal angle is just that there is common knowledge that all members of this society are individually rule utilitarians.

The analogy with the Newcomb problem is that you want to forestall deliberation when faced with the actual decision. Here's another analogous case: getting healthier would make me much better off, and the way for me to do that is to not eat cake all the time. So I decide to stop eating cake. Now someone offers me cake and I begin evaluating whether I should eat it. It's true that in the aggregate I should not be eating a bunch of cake, but the marginal cost of eating this cake is practically negligible. Refraining from eating this one cake will barely contribute to my health. And it is a delicious cake! So my deliberation at the moment of choice should, if I am rational, lead me to eat the cake.

But this will happen every time I'm offered a cake, and before you know it I'm a bloated mess. What I want to do is just not deliberate at the moment the cake is offered, but have an automatic response: "No thanks." So a good strategy when you're dieting is to make a commitment to a standard response when faced with a particular choice, rather than deliberating anew each time the choice is presented.

The same holds for the Newcomb case. If I'm in an actual Newcomb situation and I begin deliberating what I should do, I'm going to two-box. But that's a bad way for me to behave in the long run. Instead I want to be able to make a pre-commitment not to deliberate but just mechanically one-box every time I'm presented with the choice. You seem to think making a commitment of this sort is impossible, but I don't see why. Individuals do seem to make analogous commitments when they're dieting or when they quit smoking.

Anyway, this is the crucial difference between act and rule utilitarianism. Being an act utilitarian requires you to deliberate whenever you're faced with an ethical choice. And the torture example shows that this can be counterproductive. In some cases maximizing utility requires refraining from deliberation about utility when faced with a particular case. This is what rule utilitarianism does: once the rule has been established, you no longer deliberate about individual choices. You just have an immediate mechanical response.

2:24 PM  
Blogger Zed said...

A more direct analogy is the iterated prisoner's dilemma after Axelrod. One could ask whether any foresighted act-utilitarian scheme would give you tit-for-tat (if one stipulates that you can't assume the other player is following a rule). I think the answer has to be no, because on act-utilitarian grounds the history (prev. move) should be irrelevant. Of course a small population of act utilitarians in a sea of rule utilitarians might follow the rules, but this is a different question.
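
To make that concrete, here's a toy simulation (standard payoff values; the strategies and numbers are illustrative, not anything from Axelrod's actual tournaments) in which a player who ignores history, and so plays the round-by-round dominant move, faces tit-for-tat:

    # A minimal iterated prisoner's dilemma sketch with the usual payoff values.
    # tit_for_tat conditions on the opponent's previous move; history_blind
    # ignores history and plays the move that dominates in any single round.

    PAYOFFS = {  # (my move, opponent's move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        return "C" if not history else history[-1][1]  # copy opponent's last move

    def history_blind(history):
        return "D"  # in any round taken in isolation, defection pays more

    def play(strat_a, strat_b, rounds=100):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_a), strat_b(hist_b)
            score_a += PAYOFFS[(a, b)]
            score_b += PAYOFFS[(b, a)]
            hist_a.append((a, b))  # each history is kept from that player's view
            hist_b.append((b, a))
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
    print(play(history_blind, history_blind))  # (100, 100): mutual defection
    print(play(tit_for_tat, history_blind))    # (99, 104)

The history-blind player comes out ahead in any single pairing against tit-for-tat, but the rule-followers do far better among themselves, which is the sense in which round-by-round reasoning doesn't recover the rule.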

I don't really want to press the point re one-boxing, but I think the iterated Newcomb problem has the same structure in the sense that a set of _independent_ statistical decisions by the machine is a useful stylized version of what an act utilitarian might look like to another act utilitarian.

2:35 PM  
Blogger Alan said...

Mr. Menon,

When I talked about "society" adopting the rule, I basically envisioned something like a universal agreement in favor of the rule or, more realistically, the government credibly committing to never torturing suspects. I'm not sure if this is relevant in this case, though, because I think James's hypothetical assumes that everyone has the same anti-torture preference.

Anyway, I understand the point of rule utilitarianism; I know it entails, well, adopting a _rule_ about how to make a set of decisions as opposed to deliberating about each one. What I was confused about is the alleged analogy between it and the decision in Newcomb's problem. I didn't know that James -- presumably -- was analogizing the precommitment to one-box with the precommitment to never torture. I don't think -- and I don't see why I seemed to think -- such precommitments are impossible.

-- A. Bloated Mess

2:52 PM  
Blogger Tarun Menon said...

I interpreted this as saying that pre-commitment to one-boxing is impossible:

"You go on to note that what someone faced with the choice in Newcomb's problem (hereinafter a "boxer") wants is to be the kind of person who is predicted to take only one box. Unfortunately, it seems that people have no control over whether this is the case, hence the aforementioned argument for taking both boxes."

I assumed you meant people have no control over whether they are the kind of person who one-boxes. But I was wrong! I guess you meant that at the moment of choice people have no control over the prediction?

3:04 PM  
Blogger Alan said...

To follow up, it seems that I overlooked the argument that someone could "pre-commit" to being a one-boxer, as is evident from my initial comment:

"What's interesting about Newcomb's problem -- where the rubber hits the road -- is precisely that the boxer seems to have no control over the prediction, even though he has control over his choice and one-boxers have vastly higher historical returns. In your torture example, though, the society can simply decide to adopt the no-torture rule if doing so would yield more utility than the act-utilitarian alternative. So sure, your analogy technically works -- it makes sense -- but I don't think it sharpens my understanding of why the rule-utilitarian option is better for your society."

So yeah, maybe I should've been thinking outside the two-box; I probably should've refreshed my understanding of Nukem's problem. This all seems pretty simple now...

3:05 PM  
Blogger Alan said...

Heh, no, Mr. Menon, I think -- insofar as I know what was going on in my head when I made my initial comment -- I was assuming that people have no control over whether they will be predicted to take one box or two. I think I was assuming that the argument for one-boxing necessarily relies on backwards-causation or something -- that the predictor made the prediction before the boxer even knew that he would have to make the one- vs. two-box decision.

3:12 PM  
