Pur Autre Vie

I'm not wrong, I'm just an asshole

Thursday, February 24, 2011

Not the Idea of Bullshit but the Thing Itself

My mind has been in turmoil since I made my last foray into the dark, vaguely alimentary passages of normative philosophy. [Is this a gay joke or a normative-philosophy-is-shit joke? -Ed. Why can't it be both?]

Here's the problem. Once you embrace rule utilitarianism, it seems as though you have abandoned consequentialism. And so it has become nearly impossible for me to draw a line between rule utilitarianism and other non-consequentialist normative philosophies. I mean, sure, the literal distinction is that those philosophies don't purport to maximize utility on average, though many of them may beat act utilitarianism when it comes to utility-maximization.

But so imagine that you are trying to use rule utilitarianism to convince someone to keep a promise when the consequences will be unobservable (and therefore will have no impact on future promise-making and -keeping). Try it! I, for one, almost immediately start invoking words like "fairness" and "duty" and "advantage-taking."

But it doesn't stop there. Rule utilitarianism must take into account the world inhabited by the moral actor. [Paul Newman? -Ed.] Lots of moral rules are utility-maximizing only if everyone else is following them. Should you keep promises when no one else does? Or imagine a world in which everyone keeps promises made on the 5th day of each month, but otherwise cheats with abandon (it's kind of fun to imagine a world like that, actually - busy day for lawyers, I think). You probably shouldn't take advantage by keeping only promises made on the 10th, even though a priori that would be just as good a moral rule.
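
To see the coordination point concretely, here's a toy Python sketch (all numbers made up). A priori the 5th-rule world and the 10th-rule world are symmetric, but a lone 10th-rule follower in a 5th-convention world just breaks the promises people rely on and keeps the ones nobody does:

```python
# Toy model of promise-keeping conventions (all numbers made up).
# A promise creates utility only if it is both relied on and kept;
# people rely only on promises the prevailing convention says get kept.

KEPT_AND_RELIED_ON = 10   # surplus from a relied-on promise that is kept
BROKEN_RELIANCE = -15     # loss when a relied-on promise is broken
IGNORED = 0               # nobody relies on it, so nothing is at stake

def promise_utility(day, my_rule, convention):
    relied_on = (day == convention)  # others rely only on convention-day promises
    kept = (day == my_rule)          # I keep only my-rule-day promises
    if relied_on and kept:
        return KEPT_AND_RELIED_ON
    if relied_on:
        return BROKEN_RELIANCE
    return IGNORED

days = range(1, 31)

# A priori symmetry: a world coordinated on the 5th does exactly as well
# as a world coordinated on the 10th.
print(sum(promise_utility(d, my_rule=5, convention=5) for d in days))    # 10
print(sum(promise_utility(d, my_rule=10, convention=10) for d in days))  # 10

# But unilaterally following the 10th-rule in a 5th-convention world
# breaks the promises people rely on and keeps the ones nobody does.
print(sum(promise_utility(d, my_rule=10, convention=5) for d in days))   # -15
```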

And but so! Rule utilitarianism now has to deal with the interaction between ideal, a priori utility-maximizing rules and the real world. Whereas act utilitarianism merely has to take laws and social customs into account, rule utilitarianism has to re-run the whole simulation. And I'm pretty sure you get a coastline-length problem - you have to decide the scope of your analysis before you can ask meaningful questions about the ideal rules.

Luckily this can all be safely ignored because it is bullshit. Interesting to think about, though.

Brilliant Passages

The first of an occasional series. From The Memory Chalet, by Tony Judt:

And yet New York remains a world city. It is not the great American city - that will always be Chicago.

Thursday, February 17, 2011

This Post Can Make Your Life Happier Than Any Other On The Subway

So Eileen told me about a paper by Rawls ("Two Concepts of Rules," The Philosophical Review, Vol. 64, No. 1, pp. 3-32, Jan. 1955) that I think has changed my view of rule utilitarianism. Caveat: I haven't read the paper, so I could easily be butchering the argument.

But here goes. "Act utilitarianism" is the belief that at each moment in time, each individual should do whatever produces the most expected utility going forward. "Rule utilitarianism" involves specifying a set of rules that, if generally followed, would lead to maximum global utility. Then, in rule utilitarianism, the moral requirement is not necessarily to maximize utility in every instance, but to follow the utility-maximizing rules.
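
Schematically (my gloss, not Rawls's), the difference between the two decision procedures looks like this:

```python
# The two decision procedures, schematically (my gloss, not Rawls's).
# A "rule" here is just a function from situations to actions.

def act_utilitarian_choice(actions, expected_utility):
    # At each moment, pick whichever action maximizes expected utility.
    return max(actions, key=expected_utility)

def rule_utilitarian_choice(rules, utility_if_generally_followed, situation):
    # Pick the rule that maximizes utility when generally followed,
    # then apply it here, even if deviating would score higher
    # in this particular instance.
    best_rule = max(rules, key=utility_if_generally_followed)
    return best_rule(situation)
```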

I used to think that rule utilitarianism was silly. Why? Because it seemed no different from sophisticated act utilitarianism. Of course you should keep your promises, even when a particular promise leads to unfortunate results, because the ability to make credible commitments is important. But an act utilitarian can seemingly take that into account by asking, "What is the global effect of breaking this promise? Will it increase utility in the short term but, in the long term, contribute to the erosion of norms that are hugely beneficial? What is the likelihood that it will have that effect, and what is the size of the effect?" So then you balance the good and the bad. Not all promises should be kept; that's why we have personal bankruptcy. But keeping promises doesn't seem to require layering rules on top of utilitarianism.
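
That balancing is just an expected-value comparison. A quick sketch with invented numbers:

```python
# One promise, act-utilitarian style (all numbers invented): weigh the
# immediate gain from breaking it against the expected long-run damage
# to the norm of promise-keeping.

gain_from_breaking = 10    # short-term utility of breaking the promise
p_norm_erosion = 0.01      # chance the breach visibly erodes trust
cost_of_erosion = 5_000    # utility lost if promising becomes less credible

expected_cost = p_norm_erosion * cost_of_erosion  # = 50
print("break" if gain_from_breaking > expected_cost else "keep")  # keep
```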

But I was wrong - it is relatively easy to come up with models in which rule utilitarianism leads to more aggregate utility than even the most sophisticated act utilitarianism. The logic is very similar to the 2-box strategy in Newcomb's problem. Imagine that everyone gains a certain amount of utility from knowing that they live in a society in which innocent people are never tortured. And imagine that every once in a while there is a large utility boost from torturing an innocent person (perhaps you can extract the location of a hidden bomb by torturing a terrorist's innocent wife). Imagine that the police can, every once in a while, torture someone with literally 0% chance that anyone will find out beyond a small circle of people (the victim, the terrorist, a few police officers). And imagine that everyone in this society is both rational and a principled utilitarian (the terrorist just has radically different factual beliefs about how to maximize utility). And finally, everyone knows that the police can torture people secretly, but the public can never discover whether it has happened in a particular case.

Okay, so, if the math works out right, then sometimes the police will torture the innocent victim if they are act utilitarians, but not if they are rule utilitarians, and the world in which they are rule utilitarians (perhaps a world in which everyone is a rule utilitarian) is a world of greater aggregate utility than the world in which they are act utilitarians. This is because the torture is secret - the only way people will know that it doesn't happen is if they know the police officers are rule utilitarians. If the police officers are act utilitarians, then everyone will assume that the torture happens, because everyone knows that it will be utility-maximizing in some particular instances.
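
To see how the math could work out, here's a toy version with invented numbers. The assurance benefit is small per person but accrues to everyone; the ticking-bomb gains are large per case but rare; and the public's assurance tracks the police's moral type, not their (unobservable) acts:

```python
# Toy numbers for the secret-torture society (all invented).
population = 1_000_000
assurance_per_person = 1   # utility each person gets from knowing that
                           # innocent people are never tortured
bomb_cases = 2             # occasional ticking-bomb situations
gain_per_torture = 100_000 # utility from defusing a bomb via torture
victim_cost = 50_000       # utility lost by the innocent victim

# Act-utilitarian police: in each bomb case torture pays
# (100,000 > 50,000), so they do it. And because everyone knows act
# utilitarians would torture, nobody gets the assurance benefit.
act_world = bomb_cases * (gain_per_torture - victim_cost)   # 100,000

# Rule-utilitarian police: the no-torture rule forfeits the bomb gains,
# but everyone knows the rule holds, so everyone gets the assurance.
rule_world = population * assurance_per_person              # 1,000,000

print(act_world < rule_world)   # True: rule utilitarianism wins in aggregate
```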

It's like taking the second box in Newcomb's problem. You might as well - it can only increase your winnings. What you ideally want is to be the kind of person who only takes one box, but you never actually want to take only one box. Your decision to take two boxes doesn't affect the amount of money in the boxes (your decision itself, as opposed to your tendencies, was unobservable to the person who arranged the boxes). Similarly, if you're an act utilitarian police officer, you go ahead and perform torture (if you believe it will increase utility) because your actions don't affect people's beliefs (your decision itself, as opposed to your moral belief system, is unobservable to the public). What you want is to be the kind of person (a rule utilitarian) who doesn't torture anyone.
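
With the standard Newcomb payoffs ($1,000 in the transparent box; $1,000,000 in the opaque box iff the predictor expected you to one-box), the dominance argument and the predictor's conditioning sit side by side:

```python
# Newcomb's problem with the standard payoffs.
TRANSPARENT = 1_000   # always in the clear box
OPAQUE = 1_000_000    # in the opaque box iff the predictor
                      # expected you to take only that box

def winnings(predicted_one_boxer, take_two):
    opaque = OPAQUE if predicted_one_boxer else 0
    return opaque + (TRANSPARENT if take_two else 0)

# Holding the prediction fixed, two-boxing dominates:
assert winnings(True, take_two=True) > winnings(True, take_two=False)
assert winnings(False, take_two=True) > winnings(False, take_two=False)

# But an accurate predictor conditions on your disposition, so the
# dispositional one-boxer walks away richer:
print(winnings(predicted_one_boxer=True, take_two=False))   # 1,000,000
print(winnings(predicted_one_boxer=False, take_two=True))   # 1,000
```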

Okay, so, I think that makes sense. And although this is a somewhat exotic hypothetical, it's not hard to imagine that lots of decisions are effectively unobservable to the public. We make plenty of promises that are unverifiable by the general public, or even by specialized institutions like credit bureaus. And yet the tendency to keep those promises is hugely valuable, even if in particular cases (1) the debtor gets more utility from the money than the creditor, and (2) the promise is unverifiable, so there is no (net) utility to be gained by keeping it.

So now I think rule utilitarianism is almost self-evidently better than act utilitarianism, though before tonight I thought the exact opposite (to be precise, I thought rule utilitarianism was incoherent or a mere refinement of act utilitarianism). So, philosophy works! I think! (Haven't actually read the paper.) Too bad normativity is dead.