Pur Autre Vie

I'm not wrong, I'm just an asshole

Wednesday, August 22, 2012

Half a Thought from the Vat

I may be inadvertently plagiarizing Hilary Putnam here, but I don't think I am.

There are two worlds.  In each world there is a society composed of brains in vats.  In one world, the sensory input into the brains is simply fabricated  (though of course it is fabricated with sufficient care that the brains don't detect any irregularities—the inputs obey "physical laws").

In the other world, a more complicated system is used.  Robots representing each brain's "body" roam the earth, and the sensory input into the brains is completely "genuine."  That is, imagine that each robot has cameras, microphones, etc. and pipes the resulting data back to the vat.

If you hold a correspondence theory of truth, then the brains in the first world are basically incapable of making true statements about the world around them.  On the other hand, the brains in the second world have no trouble speaking truthfully of mountains, roads, buildings, etc.  (They may have trouble describing neurosurgery, as that part would still have to be fabricated.  But maybe the brains could somehow be inserted into little vats inside the robots...)

And so one society would be truth-prone and the other wouldn't, even if the sensory input were exactly the same in both worlds.  That is, the brains could be making the same statements, using the same models to come up with the same predictions, etc.

Actually, I suppose you would be free to argue that the brains can't successfully refer to anything real in either world.  But certainly the brains in the first world can't make true statements about it.

Anyway, I'm just getting the thought down; I had a much more cogent point to make before I fully woke up and wrote this.  Like MacArthur, I shall return.

UPDATE:  Oh, I think one of my points was, how much should these brains worry about which kind of world they are in?  Assume, again, that the sensory input is the same in both worlds.  How much should it matter to the brains whether they are in the fabricated world or the robot world?  I would say, it doesn't matter at all.  And this is my way of making the traditional point that truth shouldn't be viewed as correspondence to some noumenal world.  The idea is definitely not original to me, and I doubt the example is original either, but it helped me clarify my thoughts.  More soon.

FURTHER UPDATE:  I mean, in a sense this is nothing more than the classic brain-in-a-vat hypothetical.  Because after all, what is the "robot world" other than life as we know it?  Stick the brains inside the robots, and you've got the non-vat world we think we live in.  And yet if the inputs are the same, why should the brains care?  So yeah, I'm somewhat embarrassed to have taken this long to internalize the brains-in-a-vat hypothetical.  But, you know, better late than never.