A Strange Kind of Courage
The older I get, the more I think that everyone should take a decision-making class at some point, probably early in college. To use one of my favorite examples (which I actually did learn in college, but not very well): Imagine that you have a machine, like a slot machine, with two different levers. You can pull one of the levers each day (each day you choose which lever to pull). Each time you pull a lever, you get a payout of some kind. Assuming the payout is always desirable (you can never get too much of whatever the machine is dispensing), you should choose the lever that you expect will result in the higher payout.
But there is no transparency about how payouts are determined. All you can do is observe past payouts and try to extrapolate into the future. Are the payouts predictable according to an orderly function? Does lever A always pay out 5 and lever B always pay out 10? (Should you check lever A every once in a while just to make sure it is still paying only 5?)
So you can work through different approaches, and I think AI/machine learning classes deal with examples like this. But what is great about the example is that it brings up deeper issues about what we can learn about the world. For instance, it's a pretty neat illustration of the classic Hume point: even if there is a strong empirical regularity in the payouts, who is to say that the next payout will behave according to that regularity? You have no direct access to the machine's algorithm, so it is always a matter of assuming, on some level, that the future will resemble the past. And even if you don't want to embrace extreme skepticism on this point, you still have to make assumptions about the validity of extrapolating from one set of data to another. Models stop being true, and they don't always do it in predictable ways. (There are also infinitely many models that would generate any observed set of payouts, so you have the issue of choosing which of those models is better or worse.)
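To make the example concrete, here is a minimal sketch of one standard approach to the two-lever problem, the "epsilon-greedy" strategy: mostly pull whichever lever has the best observed average, but occasionally try the other one in case things have changed. The payout distributions here are invented for illustration; in the thought experiment, of course, the machine's internals are exactly what you cannot see.

```python
import random

random.seed(0)  # for a reproducible run

def pull(lever):
    # Hypothetical hidden payout mechanism -- noisy payouts around a
    # fixed mean per lever. The decision-maker never sees these means.
    means = {"A": 5.0, "B": 10.0}
    return random.gauss(means[lever], 1.0)

def epsilon_greedy(n_days=1000, epsilon=0.1):
    """Each day, with probability epsilon explore a random lever;
    otherwise exploit the lever with the best observed average payout."""
    totals = {"A": 0.0, "B": 0.0}
    counts = {"A": 0, "B": 0}
    for _ in range(n_days):
        if random.random() < epsilon or 0 in counts.values():
            # Explore (or pull a never-tried lever).
            lever = random.choice(["A", "B"])
        else:
            # Exploit the empirically better lever.
            lever = max(counts, key=lambda l: totals[l] / counts[l])
        payout = pull(lever)
        totals[lever] += payout
        counts[lever] += 1
    return counts

counts = epsilon_greedy()
# Lever B, with the higher hidden mean, ends up pulled far more often.
```

Note that the strategy's occasional exploration is exactly the "check lever A every once in a while" impulse from above, and that its success silently depends on the assumption that past payouts predict future ones.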
And so it's a great introduction to empirical methods, and the limits of those methods. And I think that would be true of a bunch of other decision-making examples (although it's something I'm still not very knowledgeable about). For instance, a related problem is the bounded-rationality (or limited-processing-power) problem. The basic idea is, say you want to optimize something, but your time and effort are themselves limited. So you might ask, "How much time should I spend investigating the optimal solution to problem A vs. problem B?" But solving that problem (which you could call problem C) itself requires time, so really you have to allocate among A, B, and C. But that allocation problem (allocating time among A, B, and C) is itself a distinct allocation problem, and so now you have problems A, B, C, and D. And so on.
In real life you just make some kind of ad hoc decision based on your intuition. And so it becomes clear that the basic tools of decision-making, like quantitative analysis, only get us so far, and we are left with ineradicable subjectivity and imperfection. And that is a crucial lesson, because it empowers us to be decisive even in the face of uncertainty and doubt. In other words, the intellectual journey goes something like this:
1. Comprehending the difficulty of making choices under uncertainty.
2. Learning some quantitative approaches that have promise.
3. Learning some of the limitations of those approaches.
4. Realizing that decisions have to be made notwithstanding the recognized limitations, and therefore adopting appropriate standards for what counts as a good basis for a decision. Something better than coin-flipping and worse than perfection.
And hopefully then comes the realization that this is where we live our lives, somewhere between coin-flipping and perfection. The idea that decision-making can be reduced to a "science" can be discarded, and so can the idea that "you can't do better than coin-flipping." (I guess there are corner cases where coin-flipping is as good as anything else, but there are plenty of cases that aren't like that.) Decision-making is just a messy business, but you have to go forward with it anyway, with the best tools you can bring to bear. You may be making the wrong decision, but what does that have to do with the price of tea in China? You are always running the risk of making bad decisions. You are always relying on imperfect, ad hoc judgments. Once this is internalized, there is no reason to shy away from a possibly bad decision in any particular instance. It's a strange kind of courage, since it comes from the recognition that you can never be very confident in your decision-making, but what matters is that it emboldens you to make decisions when it is necessary.