An Externalist Internalism
The debate between internalist and externalist conceptions of epistemic justification might with justice be called the most important issue in contemporary epistemology. As is the case with other important philosophical debates, not a minor part of the energy is spent in trying to clarify what the opposing views amount to, and one has the feeling that many arguments, on both sides, are addressed against positions that no one holds. (Not that those arguments aren't interesting; a philosophical argument need not be addressed against a view that someone actually holds in order to be interesting.)
In recent years the conceptual situation has, I think, been greatly clarified. But the doctrinal question, to continue with the Quinean idiom, is still unanswered. We know what it would be like to be an internalist and what it would be like to be an externalist (or at least we know different ways of being either an internalist or an externalist), but we don't know what we should be. Some authors have been tempted to say that we should be both. One might be ecumenical regarding the internalism/externalism debate in two main ways. First, one might think that there are two concepts of epistemic justification: an externalist one, closely related to the notion of warrant (understood as whatever has to be added to true belief to get knowledge), and an internalist one, closely related to the idea of doing one's best by one's own lights. Second, one might hold that the one and only notion of justification has both internalist and externalist components.
My main aim in this paper is to advance a form of the second variety of ecumenicalism. To do so, I present a general framework in which to give accounts of epistemic justification, where the central idea is that for a belief to be epistemically justified is for it to be explained in a certain way—in a way in which the same belief might be explained for a theoretically rational agent. The notion of epistemic justification can then be (partly) accounted for in terms of a list of explanatory principles. I then argue that the correct notion of epistemic justification is one where the main component of the antecedents of those explanatory principles is a non-factive mental state—hence the internalism; and that the (or at least a main) reason why the principles are correct is that their antecedents are reliably connected to the truth of the proposition mentioned in their consequents—hence the externalism.
2. Explanatory epistemic principles
What is it to believe rationally? We might try to begin to answer that question by saying that to believe rationally under certain circumstances is to believe what a rational agent would under those same circumstances (where the "circumstances" include aspects of the subject's psychology). Now, maybe there are situations that no rational agent will ever get himself into, and so you might think that it doesn't make sense to ask what a rational agent would believe under those circumstances. But even if there are circumstances that no rational agent would ever get himself into (or that every rational agent would do whatever is in his power to stay out of), it doesn't follow that it doesn't make sense to ask what a rational agent would believe in precisely that situation.
But there are other problems with the idea that to believe rationally under certain circumstances is to believe what a rational agent would believe under those same circumstances. To begin with, there are circumstances where a rational agent could believe any of a number of propositions, or even believe nothing at all (the immensely rich character of experience, together with our limited capacity to absorb the information provided, marks perceptual beliefs as a clear case), and so there is no one belief that a rational agent would believe. So we should instead say that to believe rationally under certain circumstances is to believe what a rational agent might believe under those same circumstances.
But even that would be incorrect, for your belief could be the belief that a rational agent might believe under the circumstances but still be irrational—e.g., you might believe that the sky is falling when (i.e., while it would be correct to characterize your psychological state by saying that) you see that the sky is falling, but you do it because you would like it to be true (i.e., it is that wish that causes your belief).
We can try to get a sufficient condition by adding that the reason why you believed what a rational agent would have believed under your circumstances is the same as the rational agent's reason. This is, I think, on the right track, but much depends on how we understand what it is for something to be the reason why you believed a certain way. If we interpret the condition as saying that the rational agent's reason must be what causes your belief, we will still not get a sufficient condition, for your belief can be linked to a reason by a deviant causal chain, in which case we wouldn't say that it was rational—even if the reason was the same as the rational agent's. (Your seeing the sky falling causes a psychological disorder in you, which in turn causes you to believe that the sky is falling.) We could say that the causal relation between the reason and the belief must be of the appropriate kind, and leave for later the task of specifying what it is for a causal chain to be of the appropriate kind. But there is a more direct route: we can say that what has to happen for your belief to be rational—over and above its being the belief that a rational agent might believe under the circumstances—is that its explanation must be the same as the explanation of the rational agent's belief. This amounts to no more than the claim that what accounts for the rationality of a belief is that it is explained in a certain way. All beliefs must have some explanation—and this can be seen as stipulative of the sense of "belief" that I am interested in. But rational beliefs will have a specific kind of explanation.
We thus arrive at a general thesis regarding theoretical rationality:
Explanatory thesis: the rationality of a belief has to do with what explains the belief.
Thus, an agent is rational insofar as his beliefs are explained in certain ways. One gets specific epistemic theories by giving an account of which ways those are.
In arriving at the explanatory thesis, we have made free use of the notion of a rational agent. Isn't this a viciously circular procedure, or at the very least a radically incomplete one, given that we want to clarify what it is for a belief to be rational? Incomplete it certainly is, for we still don't have (and, I'm sad to report, we will never have here) a complete list of the ways in which a belief can be explained for it to be rational. But the form of the envisaged final theory is certainly not circular: it is a specification of the ways in which a rational belief might be explained, and nothing in what was said so far suggests that we will have to make an uneliminable use of the notion of a rational agent in the specification of such ways. My purpose in using the notion of a rational agent as a first step in clarifying the notion of a rational belief is to give a more or less direct route from our initial intuitive considerations to the explanatory thesis.
Is the explanatory thesis unfair to any particular epistemological theory? "Evidentialist" theories hold that the central epistemological notion is that of a proposition being justified for an agent, whether the agent believes the proposition in question or not. "Reliabilist" theories, on the other hand, hold that the central epistemological notion is that of a belief having been reliably produced. Evidentialists can still make room for the notion of a justified belief (in the sense of a justified episode of believing), via the notion of "properly grounded" belief. Thus, the proposition that there are fruits in the refrigerator may be justified for you because you believe that there are bananas and pears in the refrigerator. But if you believe that there are fruits in the refrigerator because your guru told you so, then your so believing is not properly grounded on your evidence. And reliabilists can still make room for the notion of a justified proposition, by abstracting from concrete episodes of belief formation or sustaining and saying that a proposition is justified for you if and only if, if you were to believe it, it would be a reliable believing.
Both projects—the evidentialist project of making room for the notion of a properly held belief and the reliabilist project of making room for the notion of a justified proposition—are to a certain extent artificial, forcing the theories to extend beyond the realm where they are conceptually "at ease". Still, the fact that the evidentialist project is in principle feasible within the framework of the explanatory thesis means that we are not loading the dice against the evidentialist from the beginning, by adopting a framework in whose terms the theory is not even statable.
I will address the question of which those ways are within a rule framework. That is, I will assume that a theory of rationality can, on a first approximation, take the following general form: an agent is rational if and only if she follows the rules of rationality. You get specific theories by giving a list of rules. There are several issues that arise regarding this framework. First, is it really enough, to have a theory of rationality, to give a list of rules and then say that rationality consists in following those rules? Second, is there a general form that the rules should take? Third, how is this rule framework related to the explanatory thesis defended in the previous section? Fourth, what is it to follow a rule? All of these issues are more or less related to each other. I take them in the order presented.
First, then, do we really have a theory of rationality if all we have is a list of rules and the claim that being rational consists in following those rules? Explanation has, indeed, to end somewhere, but that doesn't imply that it can end anywhere. Is a list of rules a proper stopping place for the explanation of what rationality consists in? There are, I think, two different cases that we should distinguish. Suppose that it turns out that there is only one rule of practical rationality, so that the theory of practical rationality says that an agent is practically rational if and only if she follows that rule. Suppose that, on the other hand, it turns out that there are a number of rules of theoretical rationality, so that the theory of theoretical rationality says that an agent is theoretically rational if and only if she follows those rules. Assuming that the theories are otherwise adequate (assuming, for example, that they are not self-contradictory, that they correctly account for our pre-theoretic judgments regarding what actions and beliefs are rational, etc.), it seems to me that it will be more plausible to say that the theory of practical rationality is complete as it stands than to say that the theory of theoretical rationality is complete as it stands. If there are a number of different rules, then the question "What do they have in common?" is perfectly legitimate, whereas it doesn't make sense if there is only one rule. That doesn't mean that there couldn't be a deeper explanation of why the single rule of practical rationality is a rule of practical rationality, but it does mean that we are lacking one important reason to think that there must be such an explanation.
Those two possible cases (the case where there is only one rule and the case where there are a number of rules) are associated with another interesting difference among possible theories. If a theory of, say, practical rationality says that there is only one rule of practical rationality, then, for any given action, either it conforms to the rule or it doesn't. If it does, the action is rational, if it doesn't, it is irrational. Now suppose that there are a number of rules of theoretical rationality. In that case, any given belief can be such that it conforms to some rule or rules and, at the same time, violates another rule or rules. Is the belief, in that case, rational or irrational? One possible answer, the one that I will adopt, is to say that, in the case where there are a number of rules, they are rules of prima facie (or perhaps pro tanto) rationality. In that case, whether any given belief is rational or irrational will depend on how the different rules are to be weighed against each other in case of conflict. The laying down of such principles might turn out to be another task for a theory of rationality—but it might also turn out that rationality itself is only involved in the rules, whereas the principles for weighing rules against each other are, for example, pragmatic principles.
So, to go back to our first question, an epistemological theory is not really complete with a mere list of epistemic principles. It still has to tell us why those principles are the correct ones.
Second, is there a general form that the rules must take? I don't think there is a form that the rules must take, at least if 'form' is taken as referring to superficial linguistic form, for there surely are going to be different but equivalent formulations of the same rules. Now, that doesn't mean that we cannot stipulate a canonical form in which the rules are to be formulated, and that is exactly what I am going to do. The general form of a rule of rationality will be a conditional of the following sort:
GENERAL FORM OF A RULE OF RATIONALITY (first pass): If you are in state s, then f.
Where the f-ing in question is a believing of a specific proposition, a disbelieving of that proposition, or a suspension of judgment about it.
Rules of rationality have normative force. That may seem trivial, but we need to exercise some care in spelling out exactly what we mean when we say it. At a minimum, we mean that rules of rationality are not intended as descriptions of when and how beliefs are actually formed and sustained and when and how actions are actually performed, but rather, if it is correct to think of them as descriptions at all, they are descriptions of when and how beliefs ought to be formed and sustained and when and how actions ought to be performed. But the normative force applies to the rule, the whole conditional, not just to the consequent. Now, there could be rules whose antecedent is such that it doesn't make sense to apply normative force to it, and so if the antecedent obtains the only meaningful obligation that the rule leaves open is to perform the action or form the belief mentioned in the consequent. Suppose that one rule of practical rationality says the following: "If you desire to kill your hamster, and you believe that by smashing him with a hammer you will kill him, then smash your hamster with the hammer". In that case, the rule doesn't force you to smash your hamster if you have the requisite desire and belief—you can also stop having that desire and/or that belief. Compare that with this possible rule of theoretical rationality: "If it looks to you as if there is a tomato in front of you, then believe that there is a tomato in front of you". Does it make sense to say, in this case, that the rule doesn't force you to believe that there is a tomato in front of you even if it does look to you as if there is a tomato in front of you because you can stop having those experiences? Of course, there is no problem in one sense of "stop having those experiences"—you can just close your eyes, for example. But the question is whether it makes sense to say that you rationally ought to stop having those experiences. I think it doesn't. 
If so, then the rule instructs you to believe that there is a tomato in front of you. It is still a conditional rule, for it instructs you to believe that there is a tomato in front of you if certain circumstances obtain—but, when the circumstances do obtain, you are not free not to believe: you are not free to rationally make the circumstances cease to obtain.
Third, how is the framework of epistemic principles related to the explanatory thesis? An explanation of why a rational agent believes a certain proposition will always include a belief in its explanandum (of course!) and a number of states (of the agent and/or the world) in its explanans. My proposal, then, is that something is a principle of theoretical rationality if and only if its input-state is the explanans and its output-state the explanandum of the explanation of the belief of a rational agent.
Fourth, what is it to follow a principle? I'm not going to give a complete characterization of what it is to follow a principle, and fortunately I don't have to, but I will assume that to follow a principle is something different from merely "falling under" it—something that can happen to you just by chance—and that it is not necessary, to follow a principle, to think about it consciously or even be able to formulate it. To follow a principle is not merely to fall under it, because you can fall under a conditional, I suppose, just by making the consequent true or by making the antecedent false. In the case of belief, what the principles require is that, when you form a belief, the antecedent of one of the relevant principles must figure in the explanation of why you have that belief—and similarly for the case of action.
3. Kinds of internalism/externalism distinctions
As noted by William Alston, the distinction between internalist and externalist theories of epistemic justification is usually wielded by self-proclaimed internalists against unsuspecting externalists. The idea is that there are some internalist constraints on epistemic justification that some epistemologists violate in putting forward their view. It behooves the internalist, then, to clarify what those constraints are. Unfortunately, there seems to be no clear consensus in the internalist camp about this. Following Conee and Feldman (2001), we can distinguish at least two kinds of internalism, "access internalism" (or "accessibilism", for short), on the one hand, and mentalist internalism ("mentalism"), on the other.
Both versions of internalism can be presented as supervenience theses. The general idea is that whether (and also, presumably, to what extent) a belief is justified supervenes on a certain kind of configuration internal to the subject. Accessibilism cashes out the relevant meaning of "internal" in epistemic terms:
ACCESSIBILISM (first pass): What justifies a belief for a subject S supervenes upon factors to which S has a special sort of access.
Obviously, accessibilism will not be an intelligible thesis until we are told what kind of thing that special sort of access is, and what it is to "have" that kind of access. The general idea of special access, it seems to me, is better characterized negatively. There are some facts which are external to the subject in question, in the sense of "external" which we have inherited from Descartes—a sense of "external" in which, for example, the fact that I have hands is external to me. Call the kind of access that we have to those facts "ordinary access". Accessibilism, then, can be more informatively characterized thus:
ACCESSIBILISM: What justifies a belief for a subject S supervenes upon factors to which S has a non-ordinary sort of access.
Admittedly, this negative characterization of accessibilism is still not entirely satisfactory, precisely because it is only a negative characterization—we still haven't been told what kind of access we do have to the facts upon which the justificatory status of our beliefs supervenes, only what kind of access we don't have. Still, a negative characterization is better than no characterization at all, and we have at least some gesturing towards the idea that the facts upon which the justificatory status of our beliefs supervenes are internal in the Cartesian sense—whatever that sense is.
What about "having" that special access? Here we have two main alternatives. Either accessibilism claims that having the access to what justifies our beliefs implies having accessed those items in the privileged way, or, more plausibly, it only implies that we have the ability to access them in that special way.
Instead of focusing on the kind of access that we have to the factors that justify our beliefs, mentalism focuses directly on the facts themselves, and requires that they be mental:
MENTALISM: What justifies a belief for a subject S supervenes upon factors which are internal to S's mental life.
As Conee and Feldman remark in "Internalism Defended", many philosophers have failed to distinguish between accessibilism and mentalism—and this can be explained by the fact that those philosophers think that the positions are at least extensionally equivalent, that the things to which we have the special sort of access are all and only items of our mental life.
We have characterized both accessibilism and mentalism as supervenience theses regarding "what justifies a belief". Given our framework in terms of epistemic principles, we could mean two different things by that phrase: we could be referring to the items that figure in the antecedents of the principles, or we could be referring to what makes the principles the correct ones. What justifies my current belief that I am typing? Two kinds of answers seem possible: on the one hand, what justifies that belief for me is the fact that I am having a certain complex set of experiences (together, perhaps, with certain memories); on the other hand, if there is something in virtue of which those experiences justify my belief, then the obtaining of that fact also justifies my belief.
When we combine mentalism and accessibilism with these two different kinds of items to which the distinction can be applied, we obtain four respects in which a theory can be either internalist or externalist. So, assuming that the four choices are independent of each other, we have at least sixteen different positions regarding the internalism/externalism debate—small wonder, then, that participants in the debate often seem to be talking past each other! The possible positions can be summarized in the following table (where a "Yes" in the intersection of, for example, the row labeled "Mentalism" with the column labeled "Antecedents" means that the theory is mentalist regarding the antecedents of the epistemic principles):
4. Antecedent Mentalism
Let's call the thesis that all epistemic principles must appeal to a mental item in their antecedent "antecedent mentalism". Conee and Feldman defend antecedent mentalism by arguing that it best accounts for our intuitions regarding cases—especially evil-demon-style cases. But a theory can account for our intuitions and yet be inadequate—for example, it might be highly disjunctive (in the extreme, it can be just the disjunction of our intuitions regarding the cases), or lacking in explanatory force for some other reason. Is there any rationale for antecedent mentalism—a rationale deeper than the fact that it accounts for our intuitions, which is something that it shares with very many inadequate sets of principles? I think there is.
In "Internalism Explained", Ralph Wedgwood has provided an argument for antecedent mentalism that I find quite compelling. In this section I will provide my version of it. I will focus mainly on the rationality of perceptual beliefs, and then say a few things about other sources of belief in the next section.
Imagine that I am cooking, following a recipe that calls for a tomato. I open the refrigerator door and I see a red and round tomato. Nothing in the situation that I am imagining is strange or otherwise different from usual cases of seeing a tomato. In that case, I rationally believe, and know, that there is a tomato in the refrigerator. Moreover, part of the explanation of why I know it is that I am seeing that there is a tomato in the refrigerator. I could have known the very same thing for other reasons: because my wife told me, for example; or because I put one there yesterday and think that it has not been taken out; or for other reasons. But none of that happens now. I did not know that there was a tomato in the refrigerator before I saw it, and I came to rationally believe (and know) that there is one because I saw it. Let's call this situation "the good case".
Now, I said that at least part of the explanation of why I know that there is a tomato is that I am seeing it. But another fact to be explained is that my belief is rational. According to the explanatory thesis, if we knew what explains my belief we would know what makes it rational. What is, then, the explanation of my belief that there is a tomato? You might think that the answer must also appeal to the fact that I see that there is a tomato in the refrigerator. If we are tempted to give this explanation, we would also be tempted to say that the following is one of the epistemic principles:
Direct perceptual principle: You rationally ought to be such that if you see that p, then you believe that p.
In the next section we will see that similar considerations would support analogous principles for other belief-sources. All of them share the feature that their antecedent mentions a factive mental state. For our purposes, it will suffice to characterize a factive mental state as an attitude towards a proposition that entails that proposition's truth. Thus, seeing is a factive mental state, as is hearing, knowing, etc. The direct perceptual principle is thus a form of antecedent externalism regarding perceptual beliefs, for whether you see that p depends on whether p, and whether p is not, in general, something mental or something to which you have any special kind of access.
But now imagine that I am cooking again, and I need another tomato. There was, indeed, a tomato in the refrigerator, but my friends have taken it out and replaced it with a papier-mâché tomato, to play a joke on me. The papier-mâché tomato is a very careful replica of a real tomato. Actually, the papier-mâché tomato looks to me exactly like a real tomato would look to me in the same situation. I have no idea about the intentions or actions of my friends regarding the tomato in the refrigerator. On seeing the papier-mâché tomato in the refrigerator, I come to rationally believe that there is a tomato in the refrigerator. But now, I surely don't know that there is a tomato in the refrigerator—if for no other reason, because there is none (although there are other reasons). Let's call this situation "the bad case".
Now, as I said, it is clear that in the bad case I don't know that there is a tomato in the refrigerator, but it is almost equally clear that my belief is still rational. The direct perceptual principle cannot explain the rationality of my belief in the bad case, though, precisely because it mentions a factive mental state in its antecedent, and the proposition that I believe is in this case false.
What does explain my belief in this case? Traditionally, philosophers have appealed to objects, such as sense data, or to states of the perceiving subject, such as ways of being appeared to or ways things look, to play that role. In what follows, I will talk in terms of ways things look. It is the fact that it looks to me as if there is a tomato in the refrigerator that explains my belief that there is a tomato in the refrigerator. And, given that my belief, so explained, is rational, there is reason to suppose that the following is a correct epistemic principle:
PERCEPTION: You rationally ought to be such that if it looks to you as if p, then you believe that p.
But now we have a problem. What explains my belief in the good case? According to the direct perceptual principle, it is my seeing the tomato. But PERCEPTION, although motivated by the bad case, applies also to the good case—for not only do I see a tomato in the refrigerator, it also looks to me as if there is a tomato in the refrigerator. We have, then, a superabundance of explanations for the good case. And, paraphrasing Kim's comments on a related subject, too many explanations can be as bad as no explanation at all.
But there is a plausible principle of proportionality for explanations which seems to rule out the direct principle as having explanatory power even in the good case. The principle, which Wedgwood borrows from Yablo's discussion of causal proportionality, can be formulated as follows:
Principle of proportionality for explanations (PPE): If one fact is partially constituted by a second fact, and a certain effect would still have been produced even if the second fact had obtained while the first fact had not, then if either fact explains that effect, it is the second fact rather than the first.
Now, there is a very simple argument from PPE to the conclusion that the direct principle is inadequate. The argument is the following:
(1) PPE.
(2) My seeing that p is partially constituted by its looking to me that p.
(3) Either its looking to me that p or my seeing that p explains why I believe that p in the good case.
(C) It is the fact that it looks to me that p, and not the fact that I see that p, that explains why I believe that p in the good case.
The argument is valid. But what about its premises?
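Indeed, the validity is easy to check: the argument is just an application of PPE, instantiated to the case at hand. A minimal propositional sketch in Lean makes this explicit (the premise names are my own glosses, and the counterfactual clause of PPE, which the bad case establishes, is written out as a separate hypothesis rather than left implicit):

```lean
-- Propositional sketch of the argument's form.
--   const : my seeing that p is partially constituted by its looking to me that p
--   cf    : I would still have believed that p had it looked to me that p
--           without my seeing that p (what the bad case establishes)
--   explL : the fact that it looks to me that p explains my belief that p
--   explS : the fact that I see that p explains my belief that p
-- PPE, instantiated to this case, is the conditional hypothesis `ppe`.
example (const cf explL explS : Prop)
    (ppe : const → cf → (explL ∨ explS) → explL ∧ ¬explS)
    (h₁ : const) (h₂ : cf) (h₃ : explL ∨ explS) :
    explL ∧ ¬explS :=
  ppe h₁ h₂ h₃
```

The proof term is a single application of the instantiated principle, which is all the argument's validity amounts to; the philosophical work is done by the premises.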
Premise (1) is PPE itself. Even though the idea of facts being partially constituted by other facts is far from being transparent, I think that we need not, for our purposes, spend too much time trying to make it more clear. A couple of examples will suffice to give the general idea: the fact that Socrates guzzled the hemlock is partially constituted by the fact that he drank the hemlock; the fact that he confessed his love in the lounge is partially constituted by the fact that he confessed his love; etc. It is necessary (but obviously not sufficient) for a fact A to partially constitute another fact B that the following be true: "necessarily, if B then A". But the necessity in question need not be knowable a priori—the fact that this glass contains water is partially constituted by the fact that it contains hydrogen, but this is not something that we can know a priori; the constitution in question is metaphysical, not linguistic (nor conceptual, in a linguistic sense of "conceptual"). Let A be, again, the constituting fact, and let B be the constituted fact. Suppose now that we want to explain a certain other fact C. Given that B obtains, we are tempted, let's suppose, to cite it as an explanation of C. PPE says that, if A would have been a good explanation of C had it obtained without B, then A, and not B, is the explanation of C even in the actual case. Socrates' death is explained by his drinking the hemlock, not by his guzzling it, for he would have died had he drunk it without guzzling it; she laughed hysterically because he confessed his love, not because he did it in the lounge, for she would have laughed hysterically had he done it in some other place.
Whether fact A screens off fact B's explanatory powers depends, on an intuitive level, on the order of generality of the explanandum. Thus, the fact that Socrates died when he died is explained by the fact that he guzzled the hemlock—as opposed to drinking it slowly; the fact that she laughed hysterically in the lounge is explained by his confessing his love in the lounge—as opposed to in the cafeteria. PPE claims that, in these and similar cases, the constituted facts retain their explanatory powers precisely because the constituting facts wouldn't have explained the explanandum in its absence—and this seems the right diagnosis.
Let's turn now to premise (2). In a nutshell, I think that (2), and analogous claims regarding other sources of knowledge, are the default position. A bit pedantically, the default position can be characterized thus: all factive mental states are partially constituted by non-factive mental states. The thesis, remember, is metaphysical. It is not being claimed that factive mental states can be partially analyzed in terms of non-factive mental states. Arguments against the analyzability of factive mental states partly in terms of non-factive mental states, therefore, are not arguments against the metaphysical thesis. But there is a philosophical position, disjunctivism, which would deny both claims (the metaphysical as well as the analytical), and that, therefore, does represent a challenge to that presupposition. This, however, is not the place to discuss disjunctivism. Those attracted to it can regard my argument in this paper as establishing a conditional claim: if disjunctivism is false, then PERCEPTION is the correct principle—and analogous principles are correct regarding other sources.
Premise (3) is, I think, innocent enough. Of course, its looking to me as if there is a tomato and my seeing a tomato are not the only two possible explanations of the fact that I believe that there is a tomato in front of me. I could believe it because of some deep trauma in my infancy, or because of some physiological condition. But if we take into account that we are now focusing on what would explain the perceptual belief of a rational subject, the direct principle and PERCEPTION do seem like the only two plausible alternatives. We should also remember that we are talking about more immediate kinds of explanations, for if we didn't then there could be something that explains both my seeing a tomato and its seeming to me as if I see a tomato (something like the presence of a tomato in my visual field).
In this section I argued for antecedent mentalism, the thesis that the antecedents of epistemic principles must mention a non-factive mental state. My argument relied on a principle of proportionality for explanations, which is independently plausible and allowed us to adjudicate the explanatory competition between factive and non-factive mental states. I carried on my discussion exclusively in terms of perceptual beliefs. In the next section I want to consider, however briefly and tentatively, other belief sources.
5. Other principles
Before launching into the brief and tentative consideration of other belief sources, I want to remark once again on its tentativeness. In fact, it seems to me that the principles that I will advance here are surely wrong as fundamental epistemic principles. I nevertheless proceed because my main aim in advancing them is not to get at the truth, but to show certain problems that arise in the search—problems that I have no reason to think wouldn't arise regarding the true epistemic principles, and that are related to the internalism/externalism debate. Another source of the tentativeness of the principles resides in the fact that they should be taken as giving pro tanto reasons for the doxastic attitudes mentioned in their consequents. In that case, whether any given belief is rational or irrational will depend on how the different principles are to be weighed against each other in case of conflict (or, better put, in case of overlapping spheres of influence). The laying down of such meta-principles might turn out to be another task for a theory of rationality—but it might also turn out that rationality itself is only involved in the principles, whereas the meta-principles for weighing principles against each other are, for example, pragmatic meta-principles.
Besides perception, which we have just considered, introspection, memory, reason, and, more recently, testimony have been taken as belief sources. The inclusion of memory on this list makes clear that we are using a slightly technical sense of "belief source", for if you remember some truth, then it wasn't memory itself that explained why you first believed that truth. (Memory can sometimes be a source of belief in this stronger sense: you remember a scene that you saw yesterday, and, as a result, you form the belief that there were flowers in the room, a belief that you didn't form yesterday. This is sometimes called "generative memory", as opposed to the previous "retentive memory".) Analogous considerations apply, actually, not only to memory, but to every other belief source as well. I might ask my wife whether we have any tomatoes, and when she tells me that there is a tomato in the refrigerator I believe her; but then, when I open the refrigerator door and see the tomato, the source of my belief at that time is perception. The belief source of belief p at t, then, is what explains that the subject believes that p at t. The same point could be put in different terminology, by saying that we are interested not only in belief formation, but, more generally, in belief revision—which includes the retention and the deletion, as well as the acquisition, of beliefs.
Let's turn first to testimony. In analogy with the perceptual case, there are two principles that we should consider:
Direct testimony principle (first pass): You rationally ought to be such that, if you hear S testify that p, then you believe that p.
TESTIMONY (first pass): You rationally ought to be such that, if it seems to you that you are told that p, then you believe that p.
But would we want to say that it is rational to believe whatever you are told, no matter how implausible it sounds or how unreliable the testifier seems to be? We should, perhaps, modify the proposed principles thus:
Direct testimony principle: You rationally ought to be such that if you hear S testify that p and you don't rationally believe that S is unreliable on this occasion, then you believe that p.
TESTIMONY: You rationally ought to be such that if it seems to you that you are told that p by S and you don't rationally believe that S is unreliable on this occasion, then you believe that p.
I say "you don't rationally believe that S is unreliable on this occasion" because if someone testifies to you that p but you irrationally believe that the testifier is unreliable, I still think that you have good reasons to believe that p. And the "rationally" modifier should be understood in the sense of theoretical rationality. If your analyst tells you that your mother hates you, your belief that he is unreliable can be practically rational, but it might be theoretically irrational, and, in that case, you would still have good reason to believe what he tells you.
Those considerations apply to the direct testimony principle as well as to TESTIMONY. But the argument from PPE in the previous section does discriminate between the two principles, in favor of TESTIMONY. Your hearing S testify that p is partially constituted by its seeming to you that you are being told that p. This implies, it seems to me, a kind of reductionism about testimony, for TESTIMONY could plausibly be construed as a case of PERCEPTION—where what it seems to you is that you are being told that p. But with respect to the dispute between Reidian and Humean conceptions of testimony, where what is at stake is whether or not rational acceptance of testimony must be preceded by an argument to the effect that the testifier is in this particular case reliable (and which can also be viewed as a dispute between reductivist and non-reductivist accounts of testimony), our formulation of TESTIMONY does favor the Reidian side. This fact doesn't reflect my preferences in the matter, but, given that only the Reidian position gives a place to testimony as a separate source of rational belief (for the Humean, testimony would be a case of inductive reason), I favor it only to show how you could construct an independent principle for testimony in my framework.
What about memory? Take generative memory first. You look for a moment at a complex scene; some time later, you remember some features of the scene which didn't then prompt belief (or did, but the belief didn't last), but now do ("Ah! There was a dog in the car"). In this case, generative memory works rather like perception, except that the belief is formed as a response to the way things looked in the past. You might also remember that your friend said that she wouldn't go to the party, and now come to believe it (although you didn't pay attention to it when she said it). More generally, it seems that generative memory works with the inputs of other sources to generate a delayed doxastic response. If so, we could try this principle of rational generative memory belief:
G-MEMORY: You ought to be such that if either things look to you as if p at t, or it seems to you that you are told that p by S at t and you don't rationally believe that S was unreliable on this occasion, or... [insert here a disjunction of the inputs of the rest of the principles of rational belief], then you believe that p at some t* such that t* = t + e, e > 0.
The alternative direct principle, which is ruled out by the argument from PPE, would be the following:
Direct g-memory principle: You ought to be such that if either you see that p at t, or you are told that p by S at t and you don't rationally believe that S was unreliable on this occasion, or... [insert here a disjunction of the inputs of the rest of the principles of rational belief], then you believe that p at some t* such that t* = t + e, e > 0.
Why should t* be greater than t (in particular, why couldn't they be the same), and how large should e be? We have to take a time at some distance from the time of the initial input, on pain of confusing memory with perception or some other belief source. Therefore, e should be at least as large as it takes to make that difference between memory and other belief sources. How large could e at most be? It is pretty obvious that the answer is: it depends. The better memory you have, the larger it can be. But, what does it mean to say that your memory is good? Just that e can be pretty large? In a sense, yes. But a full explanation of the relevant sense of "can" will have to wait until section 7.
With respect to retentive memory, a possible account of it in terms of principles is the following:
R-MEMORY: You ought to be such that if you believe that p at t, then you believe that p at some t* such that t* = t + e, e > 0.
And the direct version would be:
Direct r-memory principle: You ought to be such that if you know that p at t, then you believe that p at some t* such that t* = t + e, e > 0.
Analogous comments regarding e apply here as well. Just as generative belief borrows its inputs from other sources' inputs and generates a delayed doxastic response, retentive memory borrows its inputs from other sources' outputs (beliefs) and generates a delayed doxastic response—which, in this case, consists in copying the input.
Introspection can be conceived as the internal analogue of perception or as a form of intuitive reason. It could also be conceived as a sui generis faculty, but I don't know how one might go about doing that, so I won't give an independent account of introspection.
Reason comes mainly in two flavors: intuitive or direct reason, as when we just "see" that 2+2=4 or that nothing can be green all over and red all over at the same time, and inferential reason, as when we conclude that the butler must have done it, given that whoever did it was barefooted, and only the butler was barefooted at the relevant time. With respect to inferential reason, there is the temptation of modeling the principles of rational inferential beliefs on rules of logic. There is, for example, the temptation of saying that the following is one such principle:
MP: You ought to be such that if you rationally believe that p and that if p, then q, then you believe that q.
Many authors have argued, however, that the temptation should be resisted. One argument is the following: the mere fact that you rationally believe that some proposition q follows from other things that you rationally believe doesn't have any tendency at all to show that you rationally ought to believe that q. It might well be that, on the contrary, you rationally ought to stop believing whatever implies q. This is easy to see if q is a contradiction, but lesser absurdities will yield the same result.
The argument, I think, is formally correct as it stands, but it is a case of ignoratio elenchi: MP doesn't say that if you rationally believe that p and that if p, then q, then you rationally ought to believe that q. The point has to do, again, with the scope of the deontic operator: it doesn't conditionally affect the consequent; rather, it categorically affects the whole conditional. MP is equivalent, therefore, to the following:
MP: You ought to be such that either you don't rationally believe that p and that if p, then q or you believe that q.
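The scope point can be rendered schematically. Writing O for the categorical rational "ought" and B for belief, with the beliefs in the antecedent read as rational beliefs (the symbols are mine, introduced only for illustration):

```latex
% Narrow scope -- the reading the objection targets, which MP does not assert:
% the ought conditionally attaches to the consequent.
\[ \big( Bp \wedge B(p \to q) \big) \;\to\; O\,Bq \]

% Wide scope -- MP itself: the ought attaches to the whole conditional,
% which is why MP is equivalent to the disjunctive reformulation.
\[ O\Big( \big( Bp \wedge B(p \to q) \big) \to Bq \Big)
   \;\equiv\;
   O\Big( \neg\big( Bp \wedge B(p \to q) \big) \vee Bq \Big) \]
```

On the wide-scope reading, one can comply either by believing q or by abandoning one of the antecedent beliefs—which is precisely the response envisaged for the case where q is absurd.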
Although the observation about the scope of the deontic operator effectively rebuts the argument against MP, there is a related (although much less virulent) criticism that won't be so easily taken care of. The related criticism is that MP is too specific. Why choose it over, for example, this (or any other) analogous principle?:
AND: You ought to be such that if you rationally believe that p and that q, then you believe that p and q.
The answer should be that there is no need to choose: we can have both MP and AND as principles of rational inferential reason beliefs, together with many other principles. But the point of the criticism that MP (and also AND, for that matter) is too specific was precisely that there is a more general principle that covers MP, AND, and any other analogous principle:
I-REASON: You ought to be such that if you rationally believe that p, then you believe every q that follows from p.
The immediate reaction to I-REASON is that it must be wrong for the following reason: none of us believes everything that follows from what we rationally believe, nor are we to the least extent irrational for not doing so. Again, this is no objection to I-REASON, which provides only a pro tanto, not necessarily dominating reason to believe whatever follows from what you rationally believe. But you might think that I-REASON is wrong for another, related reason: shouldn't it be restricted to rationally believed consequences of what we believe, thus?:
I-REASON*: You ought to be such that if you rationally believe that p and that q follows from p, then you believe q.
(Note that there is an important difference between MP and I-REASON*: the difference between believing in the truth of a material (or, perhaps, indicative) conditional and believing that some proposition follows from another.) I don't think that we should modify I-REASON in the direction of I-REASON*, however, essentially for the reasons dramatized by Lewis Carroll in "What the Tortoise Said to Achilles". If you think that, for example, believing that p and that if p, then q is not enough to make it rational for you to believe that q (waiving for the sake of the example other reasons that you might have not to believe that q), if you think that you have to add the belief that (if p and (if p, then q), then q), or maybe the belief that q follows from (p and (if p, then q)), then you are off in a vicious regress. For whatever reasons there are to think that the initial set of beliefs won't do will equally show that the widened set won't do either.
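The Tortoise-style regress can be made explicit. Under the following regimentation (mine, not Carroll's), each stage adds exactly the bridging belief that the objection demands:

```latex
\[ \Gamma_0 = \{\, p,\; p \to q \,\}, \qquad
   \Gamma_{n+1} = \Gamma_n \cup \Big\{ \big( \textstyle\bigwedge \Gamma_n \big) \to q \Big\} \]
% If believing the members of Gamma_0 is not enough to make belief in q rational,
% the conditional added in Gamma_{n+1} is just one more believed premise, and the
% same complaint recurs; no finite stage closes the gap.
```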
You might protest that the example that I gave is not fair to the objection. Take, instead, the case where you believe in the Peano axioms: do you have even a pro tanto, not necessarily dominating reason to believe every remote theorem that follows from them, even those that you don't believe that follow? Fair enough. The case brings to light a complication in the analysis of inferential reason as a source of rational belief: in many cases, perhaps in all of them, believing that q because you believe that p and q follows from p (and remember that we want to capture this kind of explanatory connection in our principles) would carry with it belief in the logical connection as well. I'm even prepared to be argued into thinking that the connection is necessary: that it is not possible to believe that q because you believe that p and q follows from p without also believing that q follows from p. But what I don't see is why this belief should play any explanatory role in your believing that q. More strongly: I think it cannot play any explanatory role, for, if it did, you would be vulnerable to the Tortoise attack. So, to go back to the example: maybe it is impossible to believe in some remote theorem based on the Peano axioms without also believing that the remote theorem follows from the Peano axioms, but that doesn't mean that this extra belief plays any explanatory role—and it better not.
The case also helps to bring out another complication: for most of us, the Peano axioms would play a more or less indirect role in the explanation of our belief in the remote theorem: there would be a chain of inference, each of them regulated by I-REASON, that can be traced back to belief in the Peano axioms (a belief whose rationality would presumably have to be explained in terms of intuitive reason). But there might be subjects whose belief in the remote theorem is explained by their belief in the Peano axioms in a more direct way, in the same way in which, for most of us, belief that it is not raining and it is nighttime explains belief in it is nighttime and it is not raining. What explains this difference? Keep that question in mind for section 7.
Let's turn now to intuitive (or direct) reason, the faculty responsible for our believing that 2+2=4, that nothing can be red all over and green all over at the same time, and maybe also responsible for our beliefs about our own mental states. Does such direct reason work through intermediaries? Some people, like George Bealer, believe it does. Bealer believes that there are "intuitions", or "intellectual seemings" which are akin to perceptual seemings. Just as it seems to me that there is a table in front of me, it also seems to me that 2+2=4, or that I have a headache. With those intuitions at hand, we could try to formulate a principle of direct reason, thus:
D-REASON: You ought to be such that if you have the intellectual intuition that p, then you believe that p.
Notice that it wouldn't be an objection to D-REASON that intellectual intuitions are to be understood, as Sosa (1996) suggests, as inclinations to believe on the basis of understanding.
But what if we think that there are no such things as intellectual seemings, or, at any rate, that we haven't been told enough about them to evaluate their epistemic relevance? Do we then have to despair of finding any principle of rational direct reason beliefs? (By the way, an analogous question would arise if you doubt any of my previous accounts of rational beliefs for the other sources.) We might have to despair of being able to formulate such a principle, but there is no reason to think that there is no such principle. Remember that a principle of rational belief is directly tied to the explanation of belief. If there is an explanation available, then there is a principle. And the suggestion that there are no intellectual seemings, or even that there are no intermediaries of any kind on which direct reason relies to generate belief, is not the suggestion that beliefs generated by direct reason have no explanation at all—it better not be that, for that is highly implausible. But there still remains a fair question: even if we cannot formulate a precise principle of rational direct reason beliefs, can we say anything at all about how that faculty regulates belief?
We have now before us three unanswered questions: what does it mean to say that someone has a good memory?; what is the difference between most of us and the subject who can directly infer the remote theorem from the Peano axioms?; and, finally, can we say anything at all about the faculty of rational direct reason if it doesn't work through intermediaries? I think that the answers to all three questions revolve around a common notion. But, before presenting it, we should take a look at the possibility of correctness internalism.
6. Correctness internalism?
We can turn now to the question about the correctness condition for principles of theoretical rationality: a condition that will tell us what all the principles of theoretical rationality necessarily have in common, and that will help explain why they are principles of theoretical rationality. In this section I will look at two attempts to provide an internalist correctness condition, and will find them both wanting. In the next section, I present an externalist correctness condition.
Before doing that, though, I would like to briefly address one possible reaction to our question about the correctness condition. One could, it seems, just say that there is no such condition—that there is nothing epistemically relevant shared by all the principles, nothing in virtue of which they are the correct principles. The question about theoretical rationality must end with a list of principles, or so this possible position would have it. This might be one way of understanding evidentialism, the view that epistemic justification consists in proportioning belief to the evidence that one has—where the evidence can be interpreted as the inputs of our principles. Now, rejecting the intelligibility of the question about a correctness condition is not strictly speaking an internalist answer to the question—it is just not to give an answer at all. This is a position that should only be accepted in extremis—only when everything else has failed; for it amounts to a surrender of explanatory power in one's theory.
A different, but related, answer is to say that the different epistemic principles are something like analytically true: they partly define what it is to have good reasons. This would be an answer to the question about the correctness condition analogous to Strawson's "solution" to the problem of induction. I think that it is an interesting but ultimately unsatisfactory answer, although I won't argue against it here. Let me just mention one problem that someone advancing that line should address: it seems prima facie implausible to say that the fundamental epistemic principles are analytic, for the simple reason that, whatever they are, they are surely terribly complicated.
Let's now turn to the discussion of the two internalist correctness conditions that I was able to find in the literature. I should say first that the two authors that I will discuss (James Pryor and Ralph Wedgwood) present these conditions only in passing, and only as possible positions, explicitly claiming that they will not defend them. My objections to the positions, then, should not be taken as objections to the more general epistemological project of these authors.
The first internalist correctness condition that I want to address is James Pryor's. In "The Skeptic and the Dogmatist", Pryor argues for the view that our perceptual experiences give us prima facie justification for beliefs about our surroundings in accordance with them. In laying down PERCEPTION as a principle of theoretical rationality, then, I am agreeing with Pryor. Pryor wishes his official view about perceptual justification to remain neutral about why it is that experience as of a hand in front of me gives me prima facie justification for believing that there is a hand in front of me—about the correctness condition for perceptual justification. In a footnote, however, he does make reference to three possible positions, two of which he rejects, and one of which he finds at least plausible. The position that he favors is that what explains why our experiences give us immediate prima facie justification is
the peculiar "phenomenal force" or way our experiences have of presenting propositions to us. Our experiences represent propositions in such a way that "it feels as if" we could tell that those propositions are true—and that we are perceiving them to be true—just by virtue of having them so represented. (Of course, to be able to articulate this "feeling" takes a high grade of reflective awareness.) I think this "feeling" is part of what distinguishes the attitude of experiencing that p from other propositional attitudes, like belief and visual imagination. Beliefs and visual images might come to us irresistibly, without having that kind of "phenomenal force." (...) It is difficult to explain what this "phenomenal force" amounts to, but I think that it is an important notion, and that it needs to be part of the story about why our experiences give us the justification they do.
I confess that I am a bit puzzled by Pryor's use of scare quotes—I'm not sure whether he takes the "phenomenal force" of experiences to be a "feeling" or not. But let's suppose he does—let's call it the "veracity feeling". That would nicely explain why the condition is internalist: if the correctness of PERCEPTION is explained in terms of a way that perceptual experiences feel to the subject, then this is certainly correctness mentalism.
There are two main questions about Pryor's condition. The first one is: is the presupposition that only experiences have the veracity feeling true? To answer that question to any detailed degree we would have to be told more about what that feeling is. What we are told is that it is a feeling as if the proposition presented is true—and as if we are perceiving it to be true. Well, isn't rational intuition just like that too? I'm thinking that 2+2=4. That proposition is being presented to me in thought as if it were true—and as if I were perceiving it to be true.
But perhaps Pryor need not be worried about this, for two reasons. First, there would be nothing wrong if that feeling in virtue of which perceptual experiences justify beliefs were also present in other sources of beliefs—if Pryor is prepared to argue that the feeling provides the source with prima facie justificatory power whenever it is present. Second, he could, if he wished to, differentiate perception from rational intuition by saying that the veracity feeling is presented in a special sort of way in perceptual experience—in the visual way, or the auditory way, or...
But there is another question which is more serious. Suppose that we agree that there is such a veracity feeling associated with perceptual experiences. So what? What good does the veracity feeling do to the experience—or to the belief, or to the relation between the two? Why would it be epistemically better to have a belief that corresponds to an experience with the veracity feeling as opposed to a belief which doesn't correspond to any such feeling? Suppose that there were a society of believers whose perceptual apparatus works just like ours except that their experiences lack the veracity feeling. Would the beliefs that they form as a response to experience be any worse than ours, epistemically speaking?
I suppose that Pryor could object to that last question on the basis that the veracity feeling is essential to experience, in the sense that what the subjects that I am imagining would be doing would not be experiencing—precisely because they lack the veracity feeling. I could grant the essential nature of the veracity feeling and still hold that it doesn't make any epistemic difference, of course, but, it seems, I would no longer be able to appeal to the little thought experiment in order to argue for that idea. But is it so clear that I wouldn't? Suppose that someone suggests that the reason why I like water so much is that it contains hydrogen. I say that it couldn't be that, for I would still like it just as much even if it didn't contain hydrogen. Does the fact that nothing which doesn't contain hydrogen could be water prevent the thought experiment from having legitimacy in this case? It doesn't seem so, and I don't see any relevant difference between this case and the one about the veracity feeling.
At any rate, we need not appeal to the thought experiment to support the view that the veracity feeling, by itself, does nothing epistemically good to the belief in question. Recall that it is supposed to be the veracity feeling itself that explains why perceptual experiences justify beliefs. But would a feeling of some other kind, say a feeling of disgust, equally explain this justificatory power? Clearly not. So why would a veracity feeling do it? What is it about the fact that it is a feeling as if the proposition presented were true, as opposed to a feeling as if the proposition presented were disgusting (a feeling which, by the way, some experiences do have), that makes it fit to explain the justificatory power of experiences?
Pryor might have been thinking that believing what feels true is being coherent in an epistemically relevant sense, and that is why the veracity feeling does its work. If you experience as if the lights are on, and if Pryor is right in thinking that this experience presents the proposition that the lights are on with the veracity feeling attached to it, then at the very least there would be prima facie some incoherence in your believing that the lights are off—and maybe there is some way of transforming this into the claim that there would be prima facie some incoherence in your not believing that the lights are on (although I don't see any clear route from the one to the other).
We must remember that the veracity feeling is not a belief. Pryor is not saying that having an experience as if p is, inter alia, believing that p. If the veracity feeling were a belief of this kind, then to say that not believing that p when that proposition is presented to you with the veracity feeling attached to it is incoherent will be to say too little (and maybe also too much): it is impossible both to believe that p and not to believe that p.
Keeping in mind that the veracity feeling is not a belief, then, we should re-examine the claim that there would be something prima facie incoherent in not believing a proposition presented to you with the veracity feeling attached to it. The objection that I wish to press against this line is a cousin of one that I already presented to the original position. Why would it be prima facie epistemically incoherent not to believe a proposition with the veracity feeling attached to it whereas it wouldn't be even prima facie epistemically incoherent not to believe a proposition that is presented with a different kind of feeling attached to it (say, when I imagine a disgusting scene)? Why should our beliefs cohere with propositions presented to us with the veracity feeling as opposed to cohere with propositions presented to us with the disgusting feeling?
Note that there is an obvious answer to these objections that is not available to Pryor: our beliefs should cohere with propositions presented to us with the veracity feeling because propositions that are presented to us with that feeling tend to be true. That is, I take it, what explains the intuitive idea that, should there be such a thing as the veracity feeling, our beliefs should cohere with the propositions that are presented to us with such a feeling. This answer is not available to Pryor, or indeed to any other internalist, because what would be doing the epistemic work in that case would be not the feeling itself, but the fact that the feeling is reliably connected to the truth of the proposition presented.
Let's now turn to Wedgwood's attempt at constructing an internalist correctness condition. According to him, what makes the principles of theoretical rationality correct (for example, what makes it the case that an experience as if p justifies a belief that p) is that those principles are "built into" the structure of the cognitive capacities that the subject has. What Wedgwood means by talking of principles (he says "rules") that are "built into" cognitive capacities, I think, is that having, say, the capacity of perceptually acquiring information consists in (perhaps inter alia) being such that one follows the principle that if it appears as if p, then one forms the belief that p. There are other principles that cognitive subjects follow, principles that are not thus built into our cognitive capacities—but Wedgwood thinks that we follow these principles by following the basic principles, which tell us that the non-basic ones are reliable. So the question about the correctness condition for principles of theoretical rationality reduces, in Wedgwood's framework, to the question about the correctness condition of the basic principles. And it is this question that Wedgwood answers by appealing to the idea that some principles are built into our cognitive capacities.
The same objection that I presented against Pryor's conception of a correctness condition applies to Wedgwood's—and, perhaps, more clearly so. What good does the fact that they are built into our cognitive capacities do to the principles, or to the belief formed by following them, or to the relation between the principles and the beliefs? Why would it be epistemically better to have a belief that corresponds to a principle that comes built into one of our cognitive capacities as opposed to a belief which doesn't correspond to any such principle?
Wedgwood cannot appeal to a natural answer to our question: our beliefs should be formed according to principles that are built into our cognitive capacities because beliefs formed according to those principles tend to be true. That, I take it, would tell us why it is epistemically good to form beliefs according to those principles. And if not that, then what?
Now, there is one thing about principles that are built into our cognitive capacities (if there are any such principles) that all of us non-skeptics should accept. If you are a non-skeptic, then you think that most of the beliefs that, pre-theoretically, we would think are epistemically justified really are so. And, if there are any principles that are built into our cognitive capacities, then those principles are followed in forming many of our beliefs. Therefore, if you are a non-skeptic and you think that there are some principles of theoretical rationality that are built into our cognitive capacities, then you should also think that there must be something epistemically good about those principles. All that is right, but it would plainly be a fallacy to infer that, therefore, what is good about principles that are built into our cognitive capacities is the fact that they are so built into them.
Those, then, are the two internalist conceptions of the correctness condition that I know of. Neither of them will do, and for the same reason—namely, neither of the conditions is such that, intuitively, it would do any epistemic good to the principles, or to the beliefs formed by following them, or to the relation between the principles and the beliefs. But if correctness internalism is wrong, then what is the right view about the correctness condition?
7. Correctness reliabilism
I propose the following as a correctness condition on principles of theoretical rationality:
RELIABILITY: The antecedent of a principle must be reliably connected to the truth of the proposition mentioned in its consequent.
What is it for A to be reliably connected to B? In general terms, it is for there to be a counterfactual-supporting regularity between A and B, such that, if A were to obtain, B would tend to obtain. The higher that tendency is, the higher the reliability of the connection between A and B. How high must it be in order for us to say that A and B are reliably connected simpliciter (as opposed to, for example, saying that their connection is more reliable than the connection between C and D)? High enough: there is an inevitable vagueness in the notion of reliability. If you have irresistible regimentation urges, I suggest you go relativist, for different degrees of reliability will be useful in different situations. We should also characterize what it is for a connection between A and B's being F to be reliable. It is for there to be a counterfactual-supporting regularity between A and B's being F such that, if A were to obtain, and B were to obtain because A obtained, then B would tend to be F.
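For those with the regimentation urges just mentioned, the relativist proposal can be sketched roughly as follows. (The sketch is illustrative only: the symbols Rel, Pr, and the threshold θ are labels I introduce here, not part of the official machinery, and Pr should be read not as subjective credence but as the counterfactual-supporting frequency over relevantly close situations.)

```latex
% A is reliably connected to B, relative to a contextually supplied threshold \theta:
\mathrm{Rel}_{\theta}(A, B) \iff \Pr(B \mid A) \ge \theta
% A is reliably connected to B's being F:
\mathrm{Rel}_{\theta}(A, \text{$B$'s being $F$}) \iff
  \Pr(\text{$B$ is $F$} \mid A \wedge (\text{$B$ because $A$})) \ge \theta
```

Going relativist, on this sketch, just means letting different contexts supply different values of θ, rather than fixing one degree of reliability that settles every case.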
RELIABILITY says, then, that the antecedent and the consequent of a principle of theoretical rationality must be connected in a way such that, if the antecedent were to obtain, then the belief mentioned in the consequent would tend to be true. To lay down RELIABILITY as a necessary condition on principles of theoretical rationality is, then, to claim that a conditional would not be a principle of theoretical rationality if it didn't comply with this condition.
Isn't RELIABILITY trivially satisfied in the case of beliefs in necessary truths? A belief in a necessary truth is automatically such that, if it were to be held, for whatever reason, it would be true.
My answer to the objection is to accept it. RELIABILITY is indeed trivially satisfied in the case of belief in necessary truths. But that is not, I think, a damaging objection to RELIABILITY itself. After all, it is advanced only as a necessary condition, and the fact that any old conditional with a necessary truth in its consequent would satisfy it doesn't have any tendency at all to show that it is not a necessary condition. But there is also something more to say. The other necessary condition required by our framework, that there be a rationalizing explanatory connection between the antecedent and the consequent of a principle of theoretical rationality, seems to me to help in the case of necessary truths. It would be nice, of course, to have necessary and sufficient conditions for something's being a rationalizing explanatory connection, but, short of that, we have our intuitive judgments about what counts as such a connection and what doesn't. It wouldn't be a principle of theoretical rationality, for example, that you ought to be such that if the Pope says that p then you believe that p, even restricted to necessary propositions, and even though the fact that the Pope says it could explain why you believe it. This is not, I repeat, to say that RELIABILITY is doing any independent work here, for there is indeed a reliable connection between the Pope's saying that p and p's being true, if we restrict ourselves to necessary propositions. I suspect, although I won't develop the suggestion, that the fact that the Pope principle wouldn't appeal to any faculty of the subject plays a definitive role here.
Let's return now to the three questions that we left dangling from section 4. First, what does it mean to say that someone has a good memory? We said in the previous section that a memory faculty is better the larger e can be in G-MEMORY and R-MEMORY. But what is the relevant sense of "can" here? If we wish, we can just lay down a version of either principle with an infinite e, but that wouldn't count as a principle of theoretical rationality. What other condition should the principle comply with so that how large e is is not an arbitrary matter? The answer is: RELIABILITY. G-MEMORY and R-MEMORY should include an e as large as is compatible with their complying with RELIABILITY. To have a good memory is to be able to reliably remember things for a long period of time. This means that different principles will be adequate for different subjects; but, at a rather obvious level of abstraction, the principles will be the same: G-MEMORY and R-MEMORY can be formulated in terms of a t* that is as much greater than t as is compatible with the reliability of the connection.
The second question was: what is the difference between most of us and the subject who can directly infer the remote theorem from the Peano axioms? The answer, again, is RELIABILITY: that a subject has superior powers of logical reasoning just means that the explanatory relation between the Peano axioms and what for us is a remote theorem is, for him, a reliable connection.
The third question was: can we say anything at all about the faculty of rational direct reason if it doesn't work through intermediaries? Well, as long as intuitive reason beliefs are explained at all (and I don't think I can understand the alternative), we can lay down as a rule that the explanation must comply with RELIABILITY. And so, even if we cannot say anything else about intuitive reason, we can say that it must be a reliable faculty, and that seems substantive enough.
8. Conclusion

In this paper I have argued that the notion of epistemic justification has both internalist and externalist components. The internalist component is given by the fact that the antecedents of epistemic principles always contain essentially a non-factive mental state; the externalist component is given by the fact that a central correctness condition for epistemic principles is that the truth of the antecedent be reliably connected to the truth of the proposition mentioned in the consequent. My argument for antecedent mentalism is based on a principle of proportionality for explanations, which rules out that the mental states mentioned in the antecedents of epistemic principles could be factive; my argument for correctness externalism is an argument by elimination: none of the internalist alternatives to a reliability condition is able to do its job.
References

Alston, William (1986), "Internalism and Externalism in Epistemology", reprinted in Alston (1989), pp. 185-226.
Alston, William (1988), "An Internalist Externalism", reprinted in Alston (1989), pp. 227-45.
Alston, William (1989), Epistemic Justification (Ithaca: Cornell University Press).
Bealer, George (1993), "The Incoherence of Empiricism", in Steven J. Wagner and Richard Warner (eds.), Naturalism (Notre Dame, IN: Univ. of Notre Dame Press), pp. 163-96.
Cohen, Stewart (1984), "Justification and Truth", Philosophical Studies 46, pp. 279-95.
Conee, Earl and Feldman, Richard (1985), "Evidentialism", Philosophical Studies 48, pp. 15-44.
Foley, Richard (1993), Working Without a Net (New York: Oxford University Press).
Goldman, Alvin (1986), Epistemology and Cognition (Cambridge, Mass.: Harvard University Press).
Goldman, Alvin (1988), "Strong and Weak Justification", Philosophical Perspectives, Vol. 2: Epistemology, pp. 51-71.
Greco, John (2000), Putting Skeptics in Their Place (New York: Cambridge University Press).
Harman, Gilbert (1986), Change in View (Cambridge, MA: MIT Press).
Kim, Jaegwon (1989), "Mechanism, Purpose, and Explanatory Exclusion", Philosophical Perspectives 3: Philosophy of Mind and Action Theory, reprinted in Kim, Supervenience and Mind (New York: Cambridge University Press: 1993), pp. 237-64.
Lewis, C. I. (1946), An Analysis of Knowledge and Valuation (La Salle, Ill.: Open Court).
Nagel, Thomas (1970), The Possibility of Altruism (Oxford: Clarendon Press).
Pryor, James (2000), "The Skeptic and the Dogmatist", Noûs 34, pp. 517-49.
Pryor, James (2001), "Highlights of Recent Epistemology", British Journal for the Philosophy of Science 52, pp. 1-30.
Sosa, Ernest (1991), Knowledge in Perspective (New York: Cambridge University Press).
Sosa, Ernest and David Galloway (1999), "Man the Rational Animal?", Synthese, pp. 1-14.
Wedgwood, Ralph (1999), "The A Priori Rules of Rationality", Philosophy and Phenomenological Research 59, pp. 113-31.
Wedgwood, Ralph (forthcoming), "Internalism Explained", Philosophy and Phenomenological Research.
Yablo, Stephen (1992), "Cause and Essence", Synthese 93, pp. 403-49.