Pur Autre Vie

I'm not wrong, I'm just an asshole

Wednesday, August 22, 2012

Half a Thought from the Vat

I may be inadvertently plagiarizing Hilary Putnam here, but I don't think I am.

There are two worlds.  In each world there is a society composed of brains in vats.  In one world, the sensory input into the brains is simply fabricated  (though of course it is fabricated with sufficient care that the brains don't detect any irregularities—the inputs obey "physical laws").

In the other world, a more complicated system is used.  Robots representing each brain's "body" roam the earth, and the sensory input into the brains is completely "genuine."  That is, imagine that each robot has cameras, microphones, etc. and pipes the resulting data back to the vat.

If you hold a correspondence theory of truth, then the brains in the first world are basically incapable of making true statements about the world around them.  On the other hand, the brains in the second world have no trouble speaking truthfully of mountains, roads, buildings, etc.  (They may have trouble describing neurosurgery, as that part would still have to be fabricated.  But maybe the brains could somehow be inserted into little vats inside the robots...)

And so one society would be truth-prone and the other wouldn't, even if the sensory input were exactly the same in both worlds.  That is, the brains could be making the same statements, using the same models to come up with the same predictions, etc.

Actually, I suppose you would be free to argue that the brains can't successfully refer to anything real in either world.  But certainly the brains in the first world can't make true statements about the world around them.

Anyway, I'm just getting the thought down; I had a much more cogent point in mind before I fully woke up and wrote this.  Like MacArthur, I shall return.

UPDATE:  Oh, I think one of my points was, how much should these brains worry about which kind of world they are in?  Assume, again, that the sensory input is the same in both worlds.  How much should it matter to the brains whether they are in the fabricated world or the robot world?  I would say, it doesn't matter at all.  And this is my way of making the traditional point that truth shouldn't be viewed as correspondence to some noumenal world.  The idea is definitely not original to me, and I doubt the example is original either, but it helped me clarify my thoughts.  More soon.

FURTHER UPDATE:  I mean, in a sense this is nothing more than the classic brain-in-a-vat hypothetical.  Because after all, what is the "robot world" other than life as we know it?  Stick the brains inside the robots, and you've got the non-vat world we think we live in.  And yet if the inputs are the same, why should the brains care?  So yeah, I'm somewhat embarrassed to have taken this long to internalize the brain-in-a-vat hypothetical.  But, you know, better late than never.

Sunday, August 12, 2012

Why You Won't Have Children

I read somewhere recently that an upper-middle-class worker needs to have around $2 million in savings when he retires, less if he owns a house.

I ran some numbers, and here is roughly how much you would have to save each year to hit $2 million, assuming you get the specified rate of return.  I am assuming you simply save $x every year from your 25th birthday to your 65th birthday, and that your rate of return is constant across those years.  (Needless to say, these are unrealistic assumptions, but I don't think they affect my point.)  These should be viewed as real rates of return, since the point is to have $2 million in purchasing power when you turn 65.

If your rate of return is:

10%, you must save a little over $4,500/year
7%, you must save a little over $10,000/year
4%, you must save a little over $21,000/year
0%, you must save $50,000/year
-2%, you must save a little over $72,000/year

Bear in mind these are real returns, so if inflation runs at 2% then each of these rates would need to be about 2 percentage points higher in nominal terms.
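
For anyone who wants to check my arithmetic, here is the calculation as a quick Python sketch.  I'm assuming one deposit at the end of each of the 40 years, i.e., a plain future-value-of-an-annuity formula; a different timing convention would nudge the numbers a little but wouldn't change the story.

```python
# Required annual savings to reach a $2 million nest egg after 40 years
# (ages 25 to 65), assuming one deposit at the end of each year and a
# constant real rate of return: the future value of an ordinary annuity.

TARGET = 2_000_000
YEARS = 40

def annual_savings(rate):
    """Deposit x such that x * ((1 + rate)**YEARS - 1) / rate == TARGET."""
    if rate == 0:
        return TARGET / YEARS  # no compounding: just divide
    return TARGET * rate / ((1 + rate) ** YEARS - 1)

for r in (0.10, 0.07, 0.04, 0.00, -0.02):
    print(f"{r:+.0%}: ${annual_savings(r):,.0f}/year")
```

It prints roughly $4,519, $10,018, $21,047, $50,000, and $72,163 per year, which is where the rounded figures above come from.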

So what I think this shows is how changing perceptions of the economic environment can have huge consequences for personal behavior.  If you earn $100,000/year, then a 3-percentage-point drop in your expected rate of return (from 7% to 4%) would require you to save an extra 11% of your income, about $11,000 more per year, to end up with the same amount at retirement.  That is a huge reduction in spending, and if you extrapolate it across the economy, it could help explain the big swings in aggregate demand we have seen.

Meanwhile, if a saver earning $100,000/year thought his real rate of return were -2%, he would have to devote over 72% of his income to saving in order to hit $2 million at retirement.  Assuming he pays 15% tax on his income, that leaves about $13,000 to live on.  It would be very tough to raise a family in NYC on $13,000.  In fact, I think it would be tough to raise a family on $13,000 in most cities.  And yet $100,000 is well above median income, even in Manhattan.  (Manhattan seems to have median household income in the neighborhood of $65,000.)
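
To spell out that budget arithmetic, using rounded numbers and my assumed flat 15% tax:

```python
income = 100_000
savings = 72_000         # required at -2%, rounded from the list above
taxes = 0.15 * income    # flat 15% tax assumption
print(income - taxes - savings)  # 13000.0 left to live on
```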

Of course, historically many investments have yielded more than -2% returns, but there is no guarantee that this will be the case in the future.  And some people think that Chinese savers get a rate of return even worse than -2%. This all makes me think that we have built a society that is too expensive to sustain itself, and that we will all go the way of the Japanese. That is, we won't be able to afford to raise children (I probably can't, and I earn a pretty good salary—though of course there are other reasons I can't/shouldn't have children), and so our population will inexorably shrink and our society will fade away.