Saturday, 6 May 2017

The Pre-Kripkean Puzzles are Back

Yes, but does Nature have no say at all here?! Yes.
It is just that she makes herself heard in a different way.
Wittgenstein (MS 137).

Modality was already puzzling before Kripke, although the potted history of the thing tends to make it seem as if, just before Kripke, philosophers by and large thought they had a good understanding of modality. But there were deep problems and puzzles all along, and I think many were alive to them.

There is a funny thing about the effect of Kripke’s work which I have been starting to grasp lately. It seems to have jolted people out of certain dogmas, even though the problems with those dogmas were already there. The idea of the necessary a posteriori sort of stunned those ways of thinking. But once the dust settles and we learn to factor out the blatantly empirical aspect from subjunctive modality - two main ways of doing this have been worked out, more on which in a moment - the issue comes back, and those ways of thinking, and the problems with them, are all still there.

(When I was working on my account of subjunctive necessity de dicto, I thought of most pre-Kripkean discussions of modality as irrelevant and boring. Now that I have worked that account out, they seem more relevant.)

What are the two ways of factoring out the aposterioricity of subjunctive modality? There is the two-dimensional way: construct “worlds” using the sort of language that doesn’t lead to necessary a posteriori propositions, and then make the truth-value of subjunctive modal claims involving the sort of language that does lead to them depend on which one of the worlds is actual.
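Schematically (a rough sketch of one standard way of setting this up, suppressing many details and variations): assign each sentence S a two-dimensional intension, a function from pairs of worlds - the first considered as actual, the second as counterfactual - to truth-values. Then the necessity operator quantifies only over the second coordinate:

```latex
% v: world considered as actual; w: world considered as counterfactual
[\![\Box S]\!](v, w) = T \iff \text{for all worlds } w',\ [\![S]\!](v, w') = T
```

The empirical element is thereby quarantined in the first coordinate: whether a sentence like 'Hesperus is Phosphorus' comes out necessary depends on which world occupies the 'actual' slot, while the sweep across the second coordinate is an a priori matter.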

This is currently the most prominent and best-known approach. However, it involves heady idealizations, many perplexing details, and various questionable assumptions. I think the difficulty of the two-dimensional approach has kept us in a kind of post-Kripkean limbo for a surprisingly long time now. Except perhaps in a few minds, it has not yet become very clear how the old pre-Kripkean problems are still lying in wait for us. I have hopes that the second way of factoring out will move things forward more powerfully (while I simultaneously hope for a clearer understanding of two-dimensionalism).

What is the second way? It is to observe that the subjunctively necessary propositions are those which are members of the deductive closure of the propositions which are both true and C, where C is some a priori tractable property. (On my account of C-hood, the closure version of the analysis is equivalent to the somewhat easier to understand claim that a proposition is necessary iff it is, or is implied by, a proposition which is both C and true. On Sider’s account of C-hood this equivalence fails.)
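In rough symbols (my shorthand: Cl for deductive closure, ⊨ for implication, taken to be reflexive so that the 'is' disjunct is covered), the two formulations are:

```latex
% Closure version:
\Box p \iff p \in \mathrm{Cl}\,\{\, q : q \text{ is true} \wedge C(q) \,\}

% Single-implier version ('is, or is implied by'):
\Box p \iff \exists q\, \bigl( C(q) \wedge q \text{ is true} \wedge q \vDash p \bigr)
```

One natural diagnosis of when these come apart (mine, not argued for here): if C-hood is preserved under conjunction, a derivation of p from several C-and-true premises can be traded in for one from their single C-and-true conjunction, and the two versions coincide; if C-hood is a list that conjoining can take you outside of, the closure version can outrun the single-implier version.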

My account of subjunctive necessity explains condition C as inherent counterfactual invariance, which in turn is defined using the notion of a genuine counterfactual scenario description. And it is with these notions that the old-style puzzles come back up. Sider’s account has it that C-hood is just a conventional matter - something like an arbitrary, disjunctive list of kinds of propositions. (Here we get a revival of the old disagreements between conventionalists and those who were happy to explain modality semantically, but suspicious of conventionalism.)

What are these returning puzzles all about? They are about whether, and in what way, meaning and concepts are arbitrary. And about whether, and in what way, the world speaks through meaning and concepts. Hence the quote at the beginning, and the quote at the end of this companion post.

Tuesday, 2 May 2017

On Warren's 'The Possibility of Truth by Convention'

Recently I read Jared Warren's 'The Possibility of Truth by Convention'. Sometimes when I read a paper, I strongly suspect I will end up finding it wrong-headed, and accordingly go in with a spot-the-fallacy attitude. I confess I did this in this case, but as I read it my attitude changed. Having read it, I now think I have had a prejudice against conventionalist ideas. And this makes sense: I have been trying to develop an analysis of subjunctive necessity de dicto, and an explanation of apriority, which appeal crucially to considerations I have been thinking of as broadly semantic. And one important defensive point for me has been that this does not entail conventionalism of any kind. I still think that's true, and still think it's important to point out since many philosophers are dead against conventionalism, but I think getting used to making that defensive point has led me to underestimate conventionalism. I may not agree with it (I suppose I'm agnostic now), but Warren's paper has helped me to see that there is more to it than I had been willing to allow.

Still, there is a point late in the argument that I have an issue with. In this post I will briefly summarize the key moves in Warren's argument and then raise this issue.

Warren discusses a widely adhered-to 'master argument' against conventionalism which runs as follows. The basic idea behind it is that truth by convention is a confused idea because, while conventions may make it the case that a sentence expresses the particular proposition it does, conventions cannot make the proposition itself true (unless it is itself about conventions).

Master Argument:

P1. Necessarily, a sentence S is true iff (p is a proposition & S means p & p is the case).
P2. It's not the case that linguistic conventions make it the case that p.
C. Therefore, it's not the case that linguistic conventions make it the case that S is true.

Warren points out that 'making it the case that' admits of different readings. One is metaphysical, as in truthmaker theory. Another is causal, as in 'causes it to be the case that'. But another is explanatory, as in 'explains why it is the case that'. This explanatory reading, Warren contends, is what real conventionalism should be understood as working with. And, Warren argues convincingly, the argument isn't valid on that reading, since explanatory 'makes it the case that' contexts are hyperintensional: if you take a sentence embedded in such a context and substitute for it a sentence which is intensionally equivalent, you sometimes change the truth-value of the sentence it was embedded in. Warren's illustrative example:
[I]t is true that God's decree of ‘let there be light’ made it the case that (in the relevant sense) light exists, but it is false that either 2 + 2 = 5 or God decreeing ‘let there be light’ made it the case that (in the relevant sense) light exists.
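To make the substitution explicit (my gloss, not Warren's): writing φ for "God decrees 'let there be light'", the two clauses filling the Δ-position are intensionally equivalent, since 2 + 2 = 5 is impossible:

```latex
\Box\bigl( \varphi \leftrightarrow (2 + 2 = 5 \,\vee\, \varphi) \bigr)
\quad\text{because}\quad \neg\Diamond\,(2 + 2 = 5)
```

Yet substituting the disjunction for φ in 'φ made it the case that light exists' flips the truth-value, so 'makes it the case that' contexts, on the explanatory reading, do not license substitution of intensional equivalents.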
So, the Master Argument isn't valid on the explanatory reading of 'makes it the case that'. But can't this be patched up? As Warren notes:
if proponents of the argument accept a special principle requiring that explanations of sentential truth must also explain why the proposition expressed obtains, then a modified version of the master argument can be mounted that doesn't assume the intensionality of explanatory contexts.
Warren considers the prospects of shoring up the Master Argument with the principle he calls Propositional Explanation:
Propositional Explanation: If Δ (explanatorily) makes it the case that sentence S is true, then Δ (explanatorily) makes it the case that p (where p is a proposition and S means that p).
But Warren argues that the conventionalist has no good reason to accept this, and that it comes out of a way of thinking about the philosophy of language - he uses the phrase 'meta-semantic picture' - which they 'can, do, and should reject' (which makes the anti-conventionalist argument pretty weak). On the way of thinking Warren has in mind, propositions are in some sense more fundamental, and the truth of sentences is in some sense derivative of the truth of propositions. 

Now, I am happy to agree that conventionalists 'can, do, and should reject' this sort of picture of the philosophy of language. But I am not so sure that they should therefore deny Propositional Explanation. Maybe they can (and even should) accept Propositional Explanation, not because propositions come first in the order of explanation, but because - on their picture - once you've explained the truth of a sentence, you get an explanation of the truth of the proposition it expresses "for free". They can still block the Master Argument, however, by denying P2.

(Note in this connection that we should arguably separate two uses of 'the case' in this discussion. In the first premise of the Master Argument - 

P1. Necessarily, a sentence S is true iff (p is a proposition & S means p & p is the case).

- 'is the case', for the argument to work against truth by convention, should be read as 'true'. But in the second premise - 

P2. It's not the case that linguistic conventions make it the case that p.

- the second 'the case' is part of the phrase 'makes it the case that' which, Warren argues, is intended by the conventionalist to pick out an explanatory relation. And here we're really talking about making it the case, in this sense, that a proposition is true - and this gets passed over if we just write 'makes it the case that p'.)

Now, on my suggestion, the conventionalist's reason for accepting Propositional Explanation would be anathema to the anti-conventionalist for whom propositions are more fundamental, just as the anti-conventionalist's reason is anathema to the conventionalist. But maybe they can (and even should) agree on Propositional Explanation itself. This doesn't leave the conventionalist in much of a pickle, since they can - instead of trying to deny Propositional Explanation - just hammer their explanatory reading of 'makes it the case that' and use that to deny P2, arguing that P2 may be right if 'makes it the case that' is read metaphysically or causally, but that it is false on their intended reading.

Why doesn't Warren suggest going this way? His reasons are suggested in these passages:
(...) a version of conventionalism about arithmetical truth might maintain that the truth of ‘2 + 2 = 4’ is fully explained by our linguistic conventions while also thinking that a full explanation of why 2 + 2 = 4 is a matter internal to mathematics and therefore should appeal to mathematical facts rather than linguistic facts.
And: 
Premise (2) will be justified by some argument to the effect that it would be extremely odd and implausible to think that our linguistic conventions could fully explain why 2 + 2 = 4 (e.g.), since this will be true in languages with markedly different linguistic conventions than our own and would have been the case even if our linguistic conventions had never existed. 
Wanting to allow for these points seems to make Warren think conventionalists should deny Propositional Explanation. But note that the above points are about 'why 2 + 2 = 4', i.e., not about why some proposition has some status. So for these points to tell against Propositional Explanation, propositions have to be thought of as having a very close metaphysical relationship to states of affairs (whose explanations, if they are mathematical states of affairs for instance, should be internal to mathematics). But it seems to me that that way of thinking about propositions is anathema to the conventionalist, who should instead see them as a kind of abstraction from sentences and our uses of them. That is why they can accept Propositional Explanation on the grounds that once you've explained the truth of a sentence you get an explanation of the truth of its expressed proposition "for free". And that is why they can deny P2.

So, the latter part of Warren's argument seems, if I'm reading him right, to be this: the conventionalist, having defended themselves against the Master Argument by pointing out that they intend an explanatory reading of 'makes it the case that' on which that argument is invalid, should go on to respond to the modified Master Argument by protesting that it rests on a principle, Propositional Explanation, which is anathema to their approach to the philosophy of language. But I suspect that it may be better for them to embrace Propositional Explanation - not because propositions are more fundamental in the way their opponents think they are, but because if you explain the truth of a sentence, you get an explanation of the truth of the expressed proposition "for free" - and instead to deny P2, which is what is really anathema to their approach to the philosophy of language.

The conventionalist can hold that the Master Argument is invalid and that it rests on a false premise, and that the modified Master Argument, i.e. the Master Argument augmented with Propositional Explanation, is valid but unsound, not because Propositional Explanation is false, but because of the false premise that the plain Master Argument also contained.

This is of course not a fundamental disagreement with Warren's overall project here. In a broad sense, I am working alongside Warren and trying to give the conventionalist more options (something I am surprised to find myself doing!). If I have a disagreement with Warren here, it is about which option is best for them.

Reference

Warren, Jared (2014). The Possibility of Truth by Convention. Philosophical Quarterly 65 (258):84-93.

Friday, 21 April 2017

Explaining the A Priori in Terms of Meaning and Essence

It wasn't just the positivists who thought there was a tight connection between meaning and truth in the case of a priori propositions:
However, it seems to me that nevertheless one ingredient of this wrong theory of mathematical truth [i.e. conventionalism] is perfectly correct and really discloses the true nature of mathematics. Namely, it is correct that a mathematical proposition says nothing about the physical or psychical reality existing in space and time, because it is true already owing to the meaning of the terms occurring in it, irrespectively of the world of real things. What is wrong, however, is that the meaning of the terms (that is, the concepts they denote) is asserted to be something man-made and consisting merely in semantical conventions. (Gödel (1951/1995), p. 320.)
Perhaps we should try to recover some insight from the idea, nowadays highly unfashionable within philosophy (but alive and well in the broader intellectual culture, I think), that a priori truths like those of mathematics are in some sense true owing to their meanings. Philosophers often used to express this by calling such propositions 'necessarily true', but since Kripke that sort of usage has been crowded out by another.
  
Noteworthy in this connection is that Kripke was not altogether gung ho about his severance of necessity from apriority:
The case of fixing the reference of ‘one meter’ is a very clear example in which someone, just because he fixed the reference in this way, can in some sense know a priori that the length of this stick is a meter without regarding it as a necessary truth. Maybe the thesis about a prioricity implying necessity can be modified. It does appear to state some insight which might be important, and true, about epistemology. In a way an example like this may seem like a trivial counterexample which is not really the point of what some people think when they think that only necessary truths can be known a priori. Well, if the thesis that all a priori truth is necessary is to be immune from this sort of counterexample, it needs to be modified in some way. [...] And I myself have no idea [how] it should be modified or restated, or if such a modification or restatement is possible. (Kripke (1980), p. 63.)
This may make it sound like the required modification would consist in somehow ruling out the problematic contingent a priori truths from the class of truths whose epistemic status is to be explained. But Chalmers' idea of the tyranny of the subjunctive suggests another route: try instead to find a different notion of necessity - indicative, as opposed to subjunctive, necessity; truth in all worlds considered as actual, rather than truth in all worlds considered as counterfactual - better suited to the explanation of apriority.

Now, in Chalmers' epistemic two-dimensionalist framework, indicative necessity is itself explained in epistemic terms. But if we try for a more full-bloodedly semantic conception of it, we may get something more explanatory of the special epistemic status of a priori truths. The notion we are after is something like: a proposition is indicatively necessary iff, given its meaning, it cannot but be true. And the modality here is not supposed to be epistemic.

But what aspect of its meaning? Sometimes 'meaning' covers relationships to things out in the world, and even the things out there themselves. What we are interested in is internal meaning. Putnam's Twin Earth thought experiment - though this is not how he used it - lets us see the distinction we need here. We want to talk about meaning in the sense in which Earth/Twin Earth pairs of propositions mean the same. This can be articulated using the middle-Wittgenstein idea of the role an expression plays in the system it belongs to (see Wittgenstein (1974, Part I)).

So, what if we say that a proposition is indicatively necessary iff any proposition with its internal meaning must, in a non-epistemic sense, be true? Can indicative necessity in this sense be used to explain apriority?

Maybe not, since there are indicatively necessary truths which are indicatively necessary only because their instantiation requires their truth. Example: language exists. (Language is here understood as a spatiotemporal phenomenon.) This is indicatively necessary, because any proposition with its internal meaning must be true, if only because the very existence of that proposition requires it to be true. Its truth is guaranteed by the preconditions of its utterance, but - you might think - not by the internal meaning itself. It is interesting to note that it is indicatively necessary, yet it lacks the special character of a priori propositions whereby they, in some sense, don't place specific requirements on the world.

This situation pattern-matches with Fine's celebrated (1994) distinction between necessary and essential properties. Socrates is necessarily a member of the set {Socrates}, but that membership is not part of his essence, since it doesn't have enough to do with Socrates as he is in himself. Likewise, he is necessarily distinct from the Eiffel Tower, but this is no part of his essence. So let us throw away the ladder of indicative necessity and instead home in on the notion of essential truth. A proposition is essentially true iff it is of its internal meaning's essence to be true (i.e. to be the internal meaning of a true proposition).
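In rough symbols (my shorthand: im(p) for the internal meaning of p, and a Fine-style operator □ₓ for 'it is true in virtue of the essence of x that'), the two notions come to:

```latex
% Indicative necessity:
\mathrm{INec}(p) \iff \Box\, \forall q\, \bigl( \mathrm{im}(q) = \mathrm{im}(p) \rightarrow \mathrm{True}(q) \bigr)

% Essential truth:
\mathrm{ETrue}(p) \iff \Box_{\mathrm{im}(p)}\, \mathrm{True}(p)
```

If Fine is right that truth in virtue of essence entails necessity but not conversely, essential truth should entail indicative necessity but not vice versa - and 'language exists' is exactly the sort of witness to the failure of the converse.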

Thus, with encouragement from Gödel and Kripke, we can develop ideas from Chalmers, Putnam, Wittgenstein, and Fine, to yield:

To say that a proposition is a priori is to say that it can, in some sense, be known independently of experience. (You may need experience to get the concepts you need to understand the proposition, but you don't need any particular further experience to know that the proposition is true.) What is distinctive about these propositions which explains their being knowable in that peculiar way? It is that their internal meanings - their roles in language - are, of their very essence, the internal meanings of true propositions; any proposition with that internal meaning must be true, and not for transcendental reasons relating to the pre-conditions of the instantiation of the proposition, but as a result of that internal meaning in itself.

So we can have an account of apriority which explains it in terms of a tight connection between meaning and truth, freed of its accidental associations with conventionalist and deflationary views about meaning, modality and essence.

This is not to say that a priori propositions' truth is to be explained in a case by case way by considerations about meaning and essence. That would be to crowd out the real mathematical justifications of non-trivial mathematical truths. But explaining apriority in general in this way wards off misunderstandings which come from treating a priori truths too much like empirical truths. And that is what makes it an explanation.

References

Chalmers, David J. (1998). The tyranny of the subjunctive. (unpublished)

Fine, Kit (1994). Essence and modality. Philosophical Perspectives 8:1-16.

Gödel, Kurt (1951/1995). Some basic theorems on the foundations of mathematics and their implications. In Solomon Feferman (ed.), Kurt Gödel, Collected Works. Oxford University Press 290-304. (Originally delivered on 26 December 1951 as the 25th annual Josiah Willard Gibbs Lecture at Brown University.) 

Kripke, Saul A. (1980). Naming and Necessity. Harvard University Press.

Putnam, Hilary (1973). Meaning and reference. Journal of Philosophy 70 (19):699-711. 

Wittgenstein, Ludwig (1974). Philosophical Grammar. University of California Press.

Modern Quantificational Logic Doesn't Subsume Traditional Logic

It seems to be a received view about the relationship of traditional Aristotelian logic to modern quantificational logic that the inferences codified in the old-fashioned syllogisms - All men are mortal, Socrates is a man, etc. - are all, in some sense, subsumed by modern quantificational logic. (I know I have tended to assume this.)

But what about:

P1. All men are mortal.
C. Everything is such that (it is a man → it is mortal)?

This is a logical inference. It is not of the form 'A therefore A'. It embodies a very clever logical discovery! P1 and C are not the same statement. Talk of 'translating' the former by means of the latter papers over all this.
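Putting C in symbols makes the asymmetry vivid: the premise is an English categorical sentence, the conclusion a quantified conditional in a quite different notation:

```latex
\text{All men are mortal} \;\therefore\; \forall x\, \bigl( \mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x) \bigr)
```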

Modern quantificational logic does not really capture the inferences captured by traditional logic, any more than it captures this link between the two. It does capture inferences which, given logical insight, can be seen to parallel those codified by traditional logic, but that is not the same thing.

Monday, 3 April 2017

Reflections on My Claim that Inherent Counterfactual Invariance is Broadly Semantic

This post presupposes knowledge of my account of subjunctive necessity de dicto as expressed in my thesis and in a paper derived from it which I have been working on. (I hope my self-criticism here doesn't cause any should-have-been-blind referees to reject the paper. A revise-and-resubmit verdict I could live with.) Here I try to take a next step in getting clear about the status and significance of the account.

In my thesis and derived paper, I propose that a proposition is necessary iff it is, or is implied by, a proposition which is both inherently counterfactually invariant (ICI) and true, and explicate this notion of ICI.

I claim that ICI is broadly semantic, and put this forward as a key motivation and virtue of the account. I don’t provide much argument for this claim - the intention, I suppose, was that this would just seem self-evident. But I have become increasingly aware of the importance of the fact that this could be challenged, and the importance of getting clearer about the underlying primitive notion of a genuine counterfactual scenario description (CSD).

I do provide one reason, near the end of my presentation of my account, for thinking that ICI is broadly semantic given my preferred approach to propositions and meaning. But there are two reasons for wanting more. One is that it may be hoped that my claim that ICI is broadly semantic could be justified independently of my particular approach to propositions and meaning, where I advocate understanding what I distinguish as the ‘internal’ component of meaning as role in language system. A second, perhaps more suggestive, reason for wanting more is that, even given my preferred approach, the argument I give is basically this: ICI is explained in terms of how a proposition - its negation, really - behaves in certain contexts - namely CSDs. But here of course I have to single out genuine CSDs.

And here’s the thing. (At least, the following seems to be right.) For my claim that ICI is broadly semantic to hold water, the notion of genuineness of a CSD had better be broadly semantic. For it is not enough for a notion to be broadly semantic that it can be characterized in terms of appearance in certain sorts of linguistic context C, where C-hood is blatantly extra-semantic. For instance, we may say a proposition has G iff it (or its negation, to make this more like the ICI case) doesn’t appear in any description which has the property of being written in some notebook I have in my room. In that case, it is plain that whether or not a proposition has G is not a matter of its meaning or nature.
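In the same rough symbols as above (my rendering), the notebook property has the form:

```latex
G(p) \iff \neg\exists d\, \bigl( \mathrm{InMyNotebook}(d) \wedge \mathrm{AppearsIn}(\neg p,\, d) \bigr)
```

This has exactly the shape of a definition via (non-)appearance in a class of descriptions, but since being written in my notebook is blatantly extra-semantic, so is G. Whether ICI is broadly semantic thus turns entirely on whether genuineness of CSD is.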

So, I now think that the little argument I give at the end of my presentation of my account - about how my particular approach to propositions and meaning 'fits well' with the notion of ICI as broadly semantic - only goes so far. As an argument that, given my particular approach, the notion of ICI is or should be seen as broadly semantic, it is weak, since it gives no reason to think that the all-important notion of genuineness of CSD is broadly semantic.

Further, I want to put forward my account, and I think it has theoretical value, independently of whether a case can be made that genuineness of CSD is broadly semantic. And so my whole presentation of why my account is interesting, and of its motivation, is somewhat crude. As a story about what caused the account, and the specific things I was thinking, it may have some interest. But as a way of situating the theory and giving a sense of what its value (within philosophy) consists in, it is crude and not really to the point. I do of course hint at other sources of interest (e.g. that the account clarifies the relationship between the notion of necessity and those of truth and implication), and don't rest everything on the 'semantic hunch', but I do perhaps give it too prominent a place - or at least, an incompletely justified place.

So, is the notion of a 'genuine counterfactual scenario description' broadly semantic? And what does it mean to be broadly semantic? I may follow up with a post addressing these questions more thoroughly, but for now a couple of remarks. Whatever it is to be broadly semantic, it is not to be conventional in any sense. The idea is perhaps better gotten at, in some ways, by saying that genuine CSD-hood is a conceptual matter. But really I need to roll up my sleeves and investigate this more closely - it is not merely a question of hitting on some formulation. Finally, the following passage from §520 of Wittgenstein's Investigations seems very much to the point when it comes to the questions and difficulties I find myself coming up against here, and may help me plumb the depths of the matter:
“So does it depend wholly on our grammar what will be called (logically) possible and what not,—i.e. what that grammar permits?”—But surely that is arbitrary!—Is it arbitrary?—It is not every sentence-like formation that we know how to do something with, not every technique has an application in our life [...].

Tuesday, 14 March 2017

Quantification and 'Extra Constant' Semantics

(The following is a companion piece to this offsite post.)

In a fascinating new paper entitled 'Truth via Satisfaction?', N.J.J. Smith argues that the Tarskian style of semantics for first-order logic (hereafter 'FOL'), which employs the special notion of satisfaction by numbered sequences of objects, does not provide an explication of the classical notion of truth - the notion of saying it like it is - but that the second-most prominent style of semantics for FOL, which works by considering what you get if you introduce a new constant, does. I agree with him about the first claim but disagree with him about the second.

My main point in this post, however, is not to argue that Smith's preferred style of semantics for FOL fails to explicate the classical notion of truth. I will do that a bit at the end - although not in a very fundamental way - but the main point will be to draw out a moral about how we should think about the 'extra constant' semantics for FOL, and more generally, about how we need to be careful in certain philosophical contexts to distinguish mathematical relations (such as 'appearing in an ordered-pair with') from genuinely semantic ones (such as 'refers to'). The failure to do this, in fact, is what made Tarski introduce his convoluted satisfaction apparatus which others have muddle-headedly praised as some sort of great insight. (I blogged about this debacle, to this day largely unrecognized as such by the logical community, offsite in 2015.)

By way of intuitive explanation of the universal quantifier clause of his preferred semantics for FOL, Smith writes: 
Consider a name that nothing currently has—say (for the sake of example) ‘Rumpelstiltskin’. Then for ‘Everyone in the room was born in Tasmania’ to say it how it is is for ‘Rumpelstiltskin was born in Tasmania’ to say it how it is—no matter who in the room we name ‘Rumpelstiltskin’. (p. 8 in author-archived version).
But this kind of explanation is not generally correct. Get a bunch of things with no names and stick them in a room. Now, doesn’t this purported explication of what it is for quantified claims to be true run, in the case of the claim ‘Everything in this room is unnamed’, as follows: for ‘Everything in this room is unnamed’ to say it how it is is for ‘Rumpelstiltskin is unnamed’ to say it how it is - no matter what in the room we name ‘Rumpelstiltskin’? And this, I think, is very clearly false; by hypothesis, everything in the room in question is unnamed, so surely ‘Everything in this room is unnamed’ says it how it is. But if we name one of the things in the room ‘Rumpelstiltskin’, then ‘Rumpelstiltskin is unnamed’ will certainly not say it how it is.

Now, as Smith pointed out to me in correspondence, this problem with unnamedness can be avoided by considering another method of singling out objects, such as attaching a red dot to them. (The worry arises that some objects are abstract and so it makes no sense to talk about attaching a red dot to them, but I won't pursue that here.) Then you can use a slightly different form of explanation, and say that for 'Everything in the room is unnamed' to say it how it is is for 'The thing with the red dot on it is unnamed' to say it how it is no matter which thing in the room has the red dot on it. Now we will of course get a counterexample involving 'red-dotlessness' but we can then just consider a different singling-out device.

But this slightly different style of explanation is also not generally viable, as becomes clear when we consider, not unnamedness, but unreferred-to-ness. Things which haven't been named but have been referred to, say by a definite description, count as unnamed but not as unreferred-to. And let's stipulate that we are talking only about singular reference - so that even if 'All the unreferred-to things' in some sense refers to the unreferred-to things, it doesn't singularly refer to them, so this wouldn't stop them from being unreferred-to in the relevant sense.

Now, applying the new style of explanation involving an arbitrary singling-out method to the case of 'Everything in this room is unreferred-to', we get:
For 'Everything in this room is unreferred-to' to say it how it is is for 'The thing with the red dot on it is unreferred-to' to say it how it is, no matter which thing we put the red dot on.
And this is wrong, not because the thing has a red dot on it, but because 'The thing with the red dot on it is unreferred-to' can't be true, whereas the quantified claim can be.

No analogous problem arises in the formal setting. If we specify that 'G' is to be mapped to the set of things in some room and 'F' is to be mapped to the set of unreferred-to things, and consider '(x)(Gx → Fx)', then neither Smith's preferred style of semantics for FOL nor the silly Tarskian style causes any sort of problem, since for there to exist a function which maps some constant c to an object o is compatible with o being unreferred-to. Thus we get the desired truth-value for '(x)(Gx → Fx)'.

(You might now think: OK, but what if we replace unreferred-to-ness with not-being-mapped-to-by-any-function, or whatever? Don't we then get the wrong truth-value? Well, no, because - at least on a classical conception of functions - nothing is unmapped-to-by-any-function.)
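Here is a minimal sketch, in Python, of how the 'extra constant' quantifier clause computes this (my toy implementation, not code from Smith's paper; the formula encoding is invented for illustration): to evaluate a universally quantified formula, extend the interpretation with a fresh constant and require the instance to come out true however that constant is interpreted.

```python
# A minimal sketch of the 'extra constant' semantics for FOL. Formulas are
# nested tuples:
#   ('atom', 'F', t)  -- predicate applied to a term; t is a constant, or a
#                        variable that is substituted away before evaluation
#   ('not', f), ('implies', f, g), ('forall', 'x', f)
# interp maps predicate letters to sets of objects and constants to objects.

def substitute(formula, var, const):
    """Replace free occurrences of a variable with a constant."""
    kind = formula[0]
    if kind == 'atom':
        _, pred, term = formula
        return ('atom', pred, const if term == var else term)
    if kind == 'not':
        return ('not', substitute(formula[1], var, const))
    if kind == 'implies':
        return ('implies', substitute(formula[1], var, const),
                substitute(formula[2], var, const))
    if kind == 'forall':
        _, v, body = formula
        # Don't substitute under a quantifier that rebinds the variable.
        return formula if v == var else ('forall', v, substitute(body, var, const))
    raise ValueError(f'unknown formula: {formula!r}')

def is_true(formula, interp, domain):
    """Evaluate a closed formula under an interpretation."""
    kind = formula[0]
    if kind == 'atom':
        _, pred, const = formula
        return interp[const] in interp[pred]
    if kind == 'not':
        return not is_true(formula[1], interp, domain)
    if kind == 'implies':
        return (not is_true(formula[1], interp, domain)
                or is_true(formula[2], interp, domain))
    if kind == 'forall':
        _, var, body = formula
        fresh = '_c_' + var                  # a constant new to the language
        instance = substitute(body, var, fresh)
        # The 'extra constant' clause: true iff the instance holds however
        # the extended interpretation maps the fresh constant into the domain.
        return all(is_true(instance, {**interp, fresh: obj}, domain)
                   for obj in domain)
    raise ValueError(f'unknown formula: {formula!r}')

# '(x)(Gx → Fx)' with 'G' the things in some room and 'F' the unreferred-to
# things. Suppose a and b are in the room and both are unreferred-to:
domain = {'a', 'b', 'c'}
interp = {'G': {'a', 'b'}, 'F': {'a', 'b'}}
formula = ('forall', 'x', ('implies', ('atom', 'G', 'x'), ('atom', 'F', 'x')))
print(is_true(formula, interp, domain))  # True
```

Mapping the fresh constant to an object is a purely mathematical relation, not an act of naming or referring, so extending the interpretation is compatible with the object being unreferred-to - which is why the formal clause delivers the right truth-value where the informal gloss in terms of naming does not.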

So, quantified propositions are not correctly explicated by talking about the truth-values of propositions you get by naming things. Nor are they correctly explicated by adopting a non-semantic singling-out device and then considering propositions which talk about 'The thing' singled out. This in itself shouldn't really be news, but also noteworthy is that, despite such explications being incorrect, the style of semantics for FOL which works via consideration of an extra constant gives no undesired results, and is arguably better than the Tarski-style semantics, which is needlessly complicated and is born of philosophical confusion. (Still, it does create a danger that students of it will wrongly think that you can explain quantified propositions in the way shown here to be incorrect.)

What does this mean for Smith's claim that 'extra constant' style semantics for FOL explicates the classical conception of truth, the conception of saying it like it is? Well, I think that's an interestingly wrong idea anyway, and probably deeper things should be said about it, but: Smith's incorrect informal gloss of the formal quantification clause - which gloss, as we have seen, cannot be corrected by moving to an arbitrary singling-out device and talking about 'The thing' singled out - certainly seems to be doing important argumentative work in his paper. His main claim, bereft of the spurious support of the informal gloss, is as far as I can see completely without support.

Many thanks to N.J.J. Smith for discussion.

Wednesday, 15 February 2017

The Resurgence of Metaphysics as a Notational Convenience

Reading Jessica Wilson's interesting new SEP entry on Determinables and Determinates, the following speculation occurred to me: the oft-remarked-upon resurgence of metaphysics heralded by the work of David Lewis, D.M. Armstrong and others was driven in part by cognitive resource limitations and practicalities of notation; putting things metaphysically often lightens our cognitive loads and makes thinking and writing more efficient in many philosophical situations.

Wilson's piece is dripping with metaphysical turns of phrase, but much of what she says could be re-expressed in a conceptual or linguistic key. I think this goes for a good deal of contemporary metaphysics. However, converting metaphysically-expressed ideas and claims into a conceptual or linguistic key may make them a bit fiddlier to think and express. And if you're doing hard philosophy and need to think and express a lot of things, this extra cost is going to pile up. Sometimes, having things in a conceptual or linguistic register may make things clearer, and sometimes it may be essential. But for many purposes the metaphysical register does fine, and often has the benefit of being less resource-hungry.

Yes, some metaphysics may not be capturable in conceptual or linguistic terms, and perhaps even in favourable cases the capturing will not be complete or perfect. And there are doubtless other important things going on behind the sociological phenomenon of the resurgence of metaphysics. But maybe this is part of the story.

UPDATE: Brandon Watson (at the end of a post on Fitch's paradox) links to this post, writing: 'I'm very interested, of course, in accounts of how philosophical scenes get transformed, how ideas transmogrify, and the like. This hypothesis for the rise of analytic metaphysics makes considerable amount of sense, and is probably true.' This is encouraging!