Pur Autre Vie

I'm not wrong, I'm just an asshole

Tuesday, November 18, 2014

How We Know What We Know

Here's one way to think about knowledge, truth, etc.  It's an artificial thought experiment, but it's intended to focus attention on what I think are important issues.  It relies on two analogies, which are really variants of the same thought.

For the first analogy, imagine that we have dozens (or hundreds or thousands, doesn't really matter) of machines.  Each machine is like a slot machine:  you pull a lever, and a variable amount of money comes out.  However, these machines have a few unique features:

1.  Each machine has two levers.  You can pull either of these levers (but not both) once per hour (or whatever - assume we have the capacity to use each machine in such a way that they never "go to waste," that is, go unused when they could have given a payout).  It doesn't cost anything to play.

2.  The machines are completely opaque, in the sense that you can't tell what is going on inside by direct observation.  The only information you can obtain about each machine is a list of its historical payouts.  The list indicates which lever was pulled for each payout.  It is impossible to determine what the payout would have been if the other lever had been pulled.

Now let's assume that we always want to pull the lever that results in a higher payout.  At first, we will have no choice but to pull levers more or less at random.  (We have no basis for predicting which lever has a higher payout.)  As we gather data, we can consider how to use it.  For instance, one machine might always pay $1 if the left lever is pulled, but $2 if the right lever is pulled.  We should pull the lever on the right side, not the left.  Of course I am using "always" in a limited sense.  We can't rule out the possibility that in the next round this machine will pay $100 from the left or right lever.  "Always" is a backward-looking statement.
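The backward-looking decision rule described above can be sketched in code. This is a minimal illustration, not anything from the post itself: it just takes the list of historical payouts (the only information the machines give us) and picks the lever with the best observed mean.

```python
from collections import defaultdict

def best_lever(history):
    """Pick the lever with the highest mean historical payout.

    history: a list of (lever, payout) pairs, e.g. ("left", 1.0),
    mirroring the machine's list of historical payouts.
    Returns the lever name, or None if there is no data yet
    (in which case we have no choice but to pull at random).
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for lever, payout in history:
        totals[lever] += payout
        counts[lever] += 1
    if not counts:
        return None
    return max(counts, key=lambda lever: totals[lever] / counts[lever])
```

Note that this rule is "always" in exactly the limited sense above: it says nothing about what the machine will do on the next pull.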

Now assume that everyone agrees on the state of affairs I've described, so there are no radical skeptics or anything when it comes to the basic situation.  (No one is saying things like, "How do we know there are really machines?")  However, people take different attitudes toward what we can know about the machines' payouts.

Some people hypothesize an "underlying reality" that is built into the machines and that we can model using quantitative tools, giving us access to "the truth" about the function determining the payout stream.  Many of our theories may be wrong or imprecise, but there is a truth "out there" that we are capable of discovering through empirical investigation.

Other people think that any knowledge derived in this way is contingent at best and relies on unfounded assumptions about the degree to which the future will resemble the past.  To these people, "the truth" never comprises an absolute grasp of what is inside the machine (which is fundamentally inaccessible to us), but rather is a more complicated function of usefulness and "fit" with observational data.

Still others might think that there is no basis for any prediction, or for any knowledge about future payouts, because it is impossible to tell whether a machine will diverge from its historical pattern (as machines frequently do, even if they had previously been stable for years).  On this view, there is a good reason to pull a lever, but no good reason to pull any particular lever at any time.  We have no access to "the truth" and maybe it doesn't exist.  It is true that some machines seem to be utterly reliable.  But on the other hand, sometimes machines that seem utterly reliable start behaving weirdly, and we have no demonstrably effective way of sorting reliable machines from unreliable ones.

The second analogy is basically the same thing.  We are playing a video game.  We have access only to inputs and outputs, not the source code.  So in other words, we can make our characters jump across the screen, and we can propose a sort of theory of the physics underlying the video game world.  But the physics might change in ways that are unpredictable from level to level.  (Suddenly the coefficient of friction on the ground is much lower - it is an "ice" level.)  And the physics might even change on replaying a level.

You can imagine the same attitudes forming as in the previous example.  (Really the two examples are just about identical.)

Now I think one thing to note is that it seems respectable to deny that we have access to some kind of absolute "truth" and nevertheless to believe that we can do "better than random" when pulling the levers.  We can doubt whether we will ever reverse engineer the "one true source code," and yet we can navigate the video game world.  I don't think our only choices are at the extremes.  In fact, I think the extremes are more or less untenable, although there's room for disagreement.


Blogger Grobstein said...

A lot depends on the strength of your claim. We often think that "to know" P is to be able to furnish an unconditional guarantee of P. If we hold ourselves to this standard, then we never know anything about the distribution of payouts.

BTW, as you know but some readers may not, this scenario resembles a well-studied problem in computer science, the multi-armed bandit. A related problem is the German tank problem. If you start with very little information about the underlying distributions, you can devise a good learning strategy.

But in your case, we never have any information about the underlying distributions, unless the trials we perform give us such information. But they don't, because the universe of possible distributions is too large or something like that.

Concretely, if we pull one lever 1,000 times, and it pays out $1 each time, we don't seem to have any basis for preferring wooops got sleepy

Insofar as this is an analogy to real life, I think the best thing we can say in defense of our actual behavior (in which we do learn from experience) is something like: we find it psychologically necessary to adopt some assumptions about distributions that can't be (otherwise) justified.

11:18 PM  
Blogger Sarang said...

I tend to think the extremes are more tenable than the middle. If we postulate a real world out there that behaves in a regular way, then (as a deductive matter) our usual inferential methods can be justified as giving us information about this world. If we do not want to make this assumption we can "justify" these methods as psychological tics. But I'm not seeing what the middle ground is: "usefulness" is either inductive usefulness (in which case you are implicitly swallowing the entire realist hog) or psychological usefulness (in which case you are a skeptic).

8:54 AM  
Blogger Tarun Menon said...

Hey, dude!

8:26 AM  
Blogger James said...


10:26 AM  
