Nihilists Aren't Believers in a River in Egypt
I don't have the time [or the capacity - ed.] for a fully thought-out post, but I want to put in a placeholder to be elaborated on in the future. (And as always I should disclaim any originality; I am quite sure that in this case nothing here is novel.) Here's the basic idea. You can think of knowledge as justified true belief (though of course you don't have to). So when nihilists say that you can't know anything, they could be attacking a number of different aspects of knowledge.
But focus on the "justified" part. I've written before about the difficulty of deciding what the threshold is for belief. I will probably have to drill down a little on that concept to make all of this work. But for now, recall that part of the problem is that people have limited mental resources and can't run all of the optimization calculations that one might theoretically want. So for instance, if I have 10 minutes to solve a problem, I want to use that time optimally. But solving that optimization problem is not costless: perhaps I have to spend 2 minutes figuring out how to optimize my time, and then I can spend 8 minutes using the time optimally. Well, fine, but how do I know whether I'm better off using a sub-optimal calculation method for 10 minutes rather than an optimal one for 8? (And how do I know that I will have 8 minutes left once I've run the optimization calculation?) To answer that question, maybe I need to spend a few minutes thinking about it. But wait! How do I know how much time to think about it? And so on.
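To make the regress concrete, here's a toy sketch in Python. Everything in it is invented for illustration (the work rates, the two-minute costs, the depth cutoff); it's meant to show the shape of the problem, not to model real cognition:

```python
# Toy model of the planning regress. All rates and costs are invented
# purely to illustrate the shape of the problem.

RATES = {"naive": 1.0, "optimized": 1.7}  # assumed work per minute
PLAN_COST = 2.0    # minutes spent computing the optimal plan
THINK_COST = 2.0   # minutes spent deciding whether to plan at all

def value(minutes: float, method: str) -> float:
    """Stipulated payoff of working for `minutes` with a given method."""
    return RATES[method] * max(minutes, 0.0)

def choose(budget: float, meta_depth: int) -> float:
    """Spend `budget` minutes, with `meta_depth` rounds of
    thinking-about-how-to-think still allowed."""
    if meta_depth == 0:
        # The regress has to stop somewhere, and this cutoff is not
        # itself the product of a calculation: it is the "retreat to
        # intuition" described below.
        return value(budget, "naive")
    # Deliberating between "just work" and "plan first" eats THINK_COST
    # minutes, and whether *that* was worth it is the same question one
    # level up.
    remaining = budget - THINK_COST
    # (Even this max() is a comparison someone should have to pay for;
    # the model quietly gets it for free, which is exactly the regress.)
    return max(value(budget, "naive"),
               value(remaining - PLAN_COST, "optimized"),
               choose(remaining, meta_depth - 1))

for depth in range(4):
    print(depth, choose(10.0, depth))
```

The point is that the answer depends on where you cut off the recursion, and nothing inside the calculation tells you where to cut it off.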
This is exhausting. And at some point, on some level of analysis, it involves a brute force "retreat to intuition" (I am using "brute force" in its non-technical sense, basically meaning arational.) And that "retreat to intuition," that abandonment of rationality, is often (always?) outcome-determinative, as a lawyer might say. In other words, the decision to cut off the analysis and accept a belief can never, itself, be rational. (And neither can the decision not to cut off the analysis.)
And so once we've recognized that the rationality involved in justifying a belief is sharply limited, maybe we come to feel that beliefs simply can't be justified by humans. It depends on what kind of standards we want to maintain in the "justification" department.
And so now we have maybe a way of understanding nihilism as a somewhat attractive view of the world. A nihilist is just someone who maintains high standards for justification, even in the face of our inability to meet those standards. A pragmatist is someone who relaxes the standards in light of our limited capacities. It's not clear to me that the nihilist's approach is inferior. We don't relax the definition of "immortal" in such a way that long-lived humans can meet it. Why should we lower our standards for justification simply so that we can call something "justified"?
And it strikes me that a nihilist need not live his life according to formal principles of justification. A nihilist could consider it highly advisable to get out of the way of a moving car, though he does not accept that this action is based on beliefs that are formally justified or justifiable (by anyone subject to bounded rationality). In other words, a nihilist might take a sort of Humean approach to life, distinguishing between formal justification and practical reasons for action/belief.
I emphasize that I have not fully thought this through. Just a placeholder. Thoughts welcome. (The most obvious objection, I think, is that if the nihilist embraces some lower standard that renders a belief usable, but refuses to call it "justification," then he is merely playing word games. I haven't fully considered how telling this objection is. I am okay with nihilism being a much more subtle and technical rejection of the possibility of knowledge than is traditionally assumed.)
[UPDATE: I'm actually having pretty serious doubts about whether this way of thinking about it makes sense. In particular, the connection between rationality and justification is one that I think bears more thought than I've given it. I'm actually probably going to write on another topic before I return to this one, hopefully with a more worked-out theory.]
2 Comments:
Interesting post. I think I probably disagree with this post in a lot of ways and on a lot of levels (more than 5 at least) but I’ll try to limit myself here.
One: I don’t love “knowledge as justified true belief.” We don’t have a way of judging the truth of non-analytic/non-mathematical propositions other than on the basis of evidence, and our basis for forming beliefs operates on that same evidential basis. Knowledge can at best be formalized as belief with a confidence greater than X, where X is some arbitrarily chosen value (97%, for example) [warning to the impressionable: don’t actually use 97%]. (A one-line sketch of this appears after point 5.)
ii: [redacted]
3: I question your regress argument on optimization as grounding everything in fundamental irrationality. It grounds everything in statistical learning. In our lives we encounter various problems and try various levels of pre-optimization, and of going meta on pre-optimization. Sometimes we get better outcomes than others, and future choices are informed by those outcomes. I don’t see why the process is, at a fundamental level, dissimilar to the one that leads to our belief that flipping a switch will turn on the lights. And that process works pretty well; not perfectly, but well enough that we can assign some confidence to it. (A toy sketch of what I mean also appears after point 5.)
IV) I would also maintain that, as a factual matter, the process mentioned in 3 is outcome-determinative in less than 50% of cases within reasonable bounds. (Reasonable bounds exclude weird cases where someone decides to go all the way meta and do all optimizing calculations and no object-level work.) In practice, a 9-1 work/optimization split and a 7-3 split will lead to similar end results for most decisions. Note this isn’t actually an argument; I’m just asserting a different factual claim than the one you accept as a premise, but contesting a premise contests a conclusion.
5) I think the end result produces a use of ‘justified’ that is different from the way the word is commonly used. We can split the term: ‘justified-subC’ means the word as commonly used, and ‘justified-subN’ (N for nihilist) means a function that always returns false. If you use ‘justified-subC’ to make decisions for yourself (cross the street or not) but apply ‘justified-subN’ to your opponent’s arguments, that doesn’t really seem sporting to me.
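Re point One, a throwaway sketch of the formalization I have in mind (the function name and the 97% are placeholders, exactly as arbitrary in code as in prose):

```python
def counts_as_knowledge(confidence: float, threshold: float = 0.97) -> bool:
    """'Knowledge' as belief held with confidence above an arbitrary bar.
    The threshold does all the work, and nothing in the analysis fixes it."""
    return confidence > threshold
```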
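And re point 3, the kind of statistical learning I mean, as a toy bandit problem (the payoff numbers and the epsilon-greedy rule are invented for illustration, not a model of anyone's psychology):

```python
import random

# Treat "how much meta-level pre-optimization to do" as an arm of a
# bandit; observed outcomes update our estimates, the same way repeated
# switch-flipping grounds the belief that the lights will come on.

TRUE_PAYOFF = [1.0, 1.3, 1.1, 0.6]  # assumed: a little meta helps, a lot hurts

def outcome(meta_level: int) -> float:
    """Noisy payoff of attacking a task with a given level of
    pre-optimization (unknown to the agent)."""
    return TRUE_PAYOFF[meta_level] + random.gauss(0, 0.2)

estimates = [0.0] * len(TRUE_PAYOFF)
counts = [0] * len(TRUE_PAYOFF)
for _ in range(2000):
    if random.random() < 0.1:    # occasionally explore a random level
        arm = random.randrange(len(TRUE_PAYOFF))
    else:                        # otherwise exploit what has worked
        arm = max(range(len(TRUE_PAYOFF)), key=lambda a: estimates[a])
    counts[arm] += 1
    estimates[arm] += (outcome(arm) - estimates[arm]) / counts[arm]

print([round(e, 2) for e in estimates])  # drifts toward TRUE_PAYOFF
```

No infinite regress required; just feedback.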
I'll have a response. Short version is that statistical learning depends on an assessment of the extent to which the future will resemble the past. That assessment is subject to the same problem of infinite regress (or it is simply unanswerable). The physical laws appear to be stable (though I'm told they were different in the earliest moments of time). But how do I know they won't change tomorrow? Or if you prefer something a little less extreme, how do I know whether the traditional correlation between coal burning and national wealth will still apply 100 years from now? How do I know whether Moore's Law will continue to function 100 years from now?