Vagueness is a well-known problem
in logic. Imagine, for example, a rough table-top being gently sanded flatter
and flatter. Eventually it will become flat (i.e. flat enough to count as flat
in some apposite context). However, since ‘flat’ is not so precisely defined
that sanding away a few scratches could be enough to flatten the table-top,
after each bit of sanding the table-top will still not be flat, from
which it follows that it will never be flat. That contradiction is a problem
that cannot be solved just by redefining ‘flat’ more precisely, because all the
terms of natural languages are, in such ways, at least a little vague, and it
is within such languages that we all reason. So, there is a borderline, between the table-top
being flat and it not being flat, that is more like a pencil line than a
mathematical line – there are borderline cases of flatness – but, there is no
region between the table-top being flat and it not being flat where it is
neither flat nor not flat, because in such a region the table-top would not be
flat and yet would be flat (which the meaning of ‘not’ rules out).
Nevertheless, it is logically possible for the table-top to be about as flat as
not. At such times it would not so much be false as *only about as false as not* to say that it was not flat, and similarly, a little later, to say that it was flat (which resolves our logical problem). At such times, we might be more likely to say that the table-top was getting flat, since that would be true. We reason best with descriptions that are either true or else not true (false, in classical logic).

Of course, ‘getting flat’ is no less vague than ‘flat’, but its borderlines are in different places; and in general, while we cannot remove all the imprecision from our languages, we can always move the borderlines out of the way of our logical language-use. Our words are defined as precisely as our purposes have required them to be, with the two classical truth-values – ‘true’ and ‘false’ – meeting at a place where descriptions are described as well by ‘not true’ as by ‘true’. We do not have to do much with such descriptions, other than identify them as needing to be replaced with truer descriptions, and so we need only add the following definition to the classical definitions of ‘true’ and ‘false’: To say, of what is about as much the case as not, that it is the case, or that it is not the case, is to say something that is about as true as not.

A description that is much truer than not will be true enough to count as true (by definition of ‘much’), while one that is not much truer than not will be about as true as not (by definition of ‘about’); and if we need to make sharper distinctions than that, then we need to avoid borderline cases and use classical logic. We do not need a formal definition of ‘as true as not’ (in some non-classical logic), because mathematical precision is inapposite when the sharp distinction between something being the case and it not being the case is absent. It would, in particular, be wrong to model the idea that self-referential claims like ‘this claim is not true’ are *about as true as not* by giving such claims truth-values of 0.5, as the fuzzy logicians do.
Now, while there are similar resolutions of the other semantic paradoxes (see other posts of mine), the set-theoretic paradoxes have no such resolutions: sets are essentially non-variable collections, and it makes no sense to think of a collection as being about as variable as not. That distinction, between semantic and set-theoretic paradoxes, originates with Frank Ramsey, who was a mathematical constructivist; and quite a few mathematicians believe that the set-theoretic paradoxes show that there are too many numbers – too many possible sizes of sets – for them all to exist as distinct numbers. But such constructivism seems to clash with the objectivity of arithmetic: how could 2 exist but not, say, 4? Four is just two twos. So, most mathematicians think that the set-theoretic paradoxes should be showing something else, which may have motivated formalising the borderline truth-value in a mathematics that would then apply, instead of classical logic, to those paradoxes.

But in fact, although the existence of whole numbers, *n*, is essentially the possibility of sets of *n* objects, and although such possibilities are intuitively timeless, such possibilities can emerge as distinct possibilities from more general possibilities. To see that, consider how the possibility of *you* would have been, had you never existed, the possibility of *someone just like you*: looking back now, there was always the possibility of *you yourself*, as well as that more general possibility; but there could have been no such distinction had you never existed. It is, then, logically possible for distinct numbers to emerge in an unending stream from some more indistinct coexistence – as possibilities inherent in the concept of *a thing* – and so a coherent story can be told of 1 + 1 = 2 existing – via the concept of *another thing of the same kind* – and 2 + 1 = 3 existing, along with the question of what 2 + 2 is, and only then 2 + 2 = 2 + 1 + 1 = 3 + 1 = 4 existing.
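That arithmetical chain can be spelled out in the usual successor notation – a standard Peano-style sketch of the steps just described, offered only as an illustration, not as part of the original argument:

```latex
% Each number is the successor (+1) of the previous one,
% so 2 + 2 unfolds step by step, as in the story above.
\begin{align*}
2 + 2 &= 2 + (1 + 1) && \text{since } 2 = 1 + 1\\
      &= (2 + 1) + 1 && \text{associativity of addition}\\
      &= 3 + 1       && \text{since } 2 + 1 = 3\\
      &= 4           && \text{since } 3 + 1 = 4
\end{align*}
```

Each line uses only a number already "in existence" at the corresponding stage of the story, which is why the question of what 2 + 2 is can arise before its answer, 4, does.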
Note that such a story might be more plausible were the small natural numbers replaced by large transfinite numbers. Furthermore, if the concepts involved were divine conceptions, then such arithmetic would be as objective as anything. So the main reason why the set-theoretic paradoxes are paradoxical is the prevailing atheism within science (which is all but a *reductio ad absurdum* of atheism).