I was frustrated by my inability to get anything out of the brief writeup about Voevodsky’s univalent foundations in this month’s Notices. Fortunately, one of our graduate students, who has been reading the homotopy type theory book, has promised to explain it to me. I’m excited!
Already, hardly understanding anything, I have learned one interesting thing about type theory. (I haven’t yet learned anything about homotopy type theory.) It asks us to throw aside assertions like “P is true” as the basic objects of mathematical interest, and instead to concentrate on assertions like “p is a proof of P.”
In this context, what is conjunction? To say “x is a proof of P and Q” is to say that x is a pair (p,q) where p is a proof of P and q is a proof of Q.
What is disjunction? In other words, what do we mean by “a proof of P or Q”? Well, there are two kinds of proofs of P or Q; there are proofs of P, and there are proofs of Q.
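A minimal sketch of these two readings in Lean 4 (a proof assistant built on this proofs-as-objects idea; the ∧/∨ notation and the anonymous-pair brackets are Lean's):

```lean
-- A proof of P ∧ Q is literally a pair of proofs.
example (P Q : Prop) (p : P) (q : Q) : P ∧ Q := ⟨p, q⟩

-- A proof of P ∨ Q is a proof of P, tagged "left"...
example (P Q : Prop) (p : P) : P ∨ Q := Or.inl p

-- ...or a proof of Q, tagged "right".
example (P Q : Prop) (q : Q) : P ∨ Q := Or.inr q
```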
And this is why intuitionistic logic (which as I understand it is the kind of logic you get when you take this point of view seriously) rejects the law of the excluded middle. In the logic I know, you can just check from the truth table that (P or not-P) is always true, so (P or not-P) is the same thing as T.
But in type land, a proof of (P or not-P) must be either a proof of P or a proof of not-P. If neither of these is available, then there’s no proof of (P or not-P), so it can’t be the same as T.
I know in some sense this seems tautological — but I do find it rather striking as an illustration of an actual substantial difference between the two set-ups.
As Vladimir teaches me more, I’ll keep posting!
P.S.: by the way: in type theory, a proof of an implication “P implies Q” is a function from proofs of P to proofs of Q. (Or does it have to be an explicit algorithm that returns a proof of Q given a proof of P? I don’t know.) This is not the same as a proof of “not-P or Q”, which would have to be either a proof of not-P or a proof of Q, a completely different thing. So the logical account of implication I’m used to also dies!
On the other hand, a proof of Q is a proof of (not-P or Q). What’s more, if I have a proof q of Q, then the constant function sending every proof of P to q is a proof of (P implies Q).
I think the above paragraph is a correct proof in the type-theoretic sense of (Q implies (not-P or Q)) and also (Q implies (P implies Q)). What do you think?
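For what it's worth, both claims typecheck as one-liners in Lean 4 (a sketch, using Lean's arrow for implication and Or.inr for the right-hand tag):

```lean
-- Q implies (not-P or Q): a proof of Q, tagged, is a proof of the disjunction.
example (P Q : Prop) : Q → (¬P ∨ Q) := fun q => Or.inr q

-- Q implies (P implies Q): the constant function that ignores the proof of P.
example (P Q : Prop) : Q → (P → Q) := fun q _ => q
```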
(Note that I haven’t told you what a type is yet. That’s because I don’t know. I told you, I only understand one thing so far.)
And I guess I think there is no proof of ((not-P or Q) implies (P implies Q)), nor is there a proof of ((P implies Q) implies (not-P or Q)). Those two things are just different.
Is this type theory something coming from computer science? (Well, through computer science, even if it originates in mathematics.)
If it is roughly what I’m thinking it might be, I think the best way to really understand it is to do some programming in some dialect of ML (or other explicitly type-inferring functional language with parametric polymorphism).
I’m sitting in on a HoTT course in the CS dept. here, and it’s been a lot of fun. Having a computer check your proofs is really really satisfying. Though, if they expect mathematicians to use Agda it’s going to have to get a lot easier to install. For homework, I just did the excellent exercise of proving not (not (P or not P)) constructively. Something was a bit broken in my intuition as I assumed that for each of the two flavors of De Morgan’s law you’d lose one direction, when actually three of the four statements are totally fine.
They’re teaching the homotopy version in the CS department? Cool!
Wait, so are you saying that there’s a proof of not (not (P or not P)) even though there’s no proof of (P or not P), or am I wrong in the post that there’s no proof of (P or not P)?
That’s right, there’s no constructive proof of (P or not P) but there is a constructive proof of not (not (P or not P)). There’s no contradiction here because constructively not (not Q) is weaker than Q!
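Here is one way that constructive proof can be written, sketched in Lean 4 (where ¬A unfolds to A → False):

```lean
-- Given a refuter f of (P ∨ ¬P), note that f itself yields a proof of ¬P
-- (feed it any p : P tagged left), and then feed that to f tagged right.
example (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun f => f (Or.inr (fun p => f (Or.inl p)))
```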
There is a proof that ((not P or Q) implies (P implies Q)). This is the same as proving (((not P) implies (P implies Q)) and (Q implies (P implies Q))). And there are proofs of the two constituent parts: you’ve given one for the latter, and the former is equivalent to ((P and (not P)) implies Q), which is a tautology.
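A sketch of the whole implication in Lean 4 (`absurd` packages the fact that P together with not-P proves anything):

```lean
example (P Q : Prop) : (¬P ∨ Q) → (P → Q) :=
  fun h p => match h with
    | Or.inl np => absurd p np  -- P together with ¬P proves anything, Q included
    | Or.inr q  => q            -- the constant-function case
```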
Noah: That’s a consequence of Radford’s theorem, right? :)
In the classical world, if I want to prove (P or Q), I have many more options available than just proving P or proving Q. For example, I can prove (P or R) and once I’ve done that I can prove (R implies Q). In words, I’d be saying something like, “If P isn’t true, then R must be true. But in that case, Q is true.” We use this kind of argument the whole time. I realize that it isn’t constructive, but is there really no flexibility of this approximate kind in HoTT?
Tim — if you do the steps constructively, then it’s fine — the first step gives either a proof of P or a proof of R. If it’s the former, it’s also a proof of (P or Q). If the latter, your proof of (R implies Q) is a transformation rule that turns it into a proof of Q, which is again a proof of (P or Q).
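The two steps, sketched in Lean 4:

```lean
-- From a proof of (P or R) and a rule (R implies Q), build a proof of (P or Q).
example (P Q R : Prop) (h : P ∨ R) (f : R → Q) : P ∨ Q :=
  match h with
  | Or.inl p => Or.inl p      -- a proof of P is already a proof of (P or Q)
  | Or.inr r => Or.inr (f r)  -- transform the proof of R into a proof of Q
```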
Evan — so this is a part I don’t get. A proof of ((P and not-P) implies Q) would be a way of transforming every proof of P-and-not-P into a proof of Q. Now if I think of “way of transforming” as meaning “function,” then sure, there’s such a thing, because I don’t think there are any proofs of P-and-not-P (though I’m somewhat confused about whether I’m “allowed to say that” in this context.) But if I think of “way of transforming” as being “algorithm for transforming,” then am I still OK?
I think you’re missing the key property of 0. 0 is the type with no constructors, and so, by recursion, it has a trivially defined map to any other type (see page 45 of HoTT). (Warning: just because it has no constructors doesn’t necessarily mean that there’s nothing of that type, see footnote 9 on page 55 of HoTT.) A proof of not P is exhibiting something of type P -> 0, that is, a way of starting with a proof of P and constructing a contradiction. If you have p : P (read: p of type P) and f : P -> 0, then just by composition you have f p (read: f of p) of type 0, and hence by recursion you have (rec0 Q) (f p) of type Q.
Or in Agda:
open import Data.Product -- standard library module that defines “and” (the product type ×)
data empty : Set where -- this defines “empty” as having no constructors
RecursionEmpty : (P : Set) -> empty -> P -- the recursion principle for empty
RecursionEmpty P () -- the absurd pattern (): there are no cases to give
not : Set -> Set
not P = (P -> empty)
proof : (P Q : Set) -> (P × (not P)) -> Q
proof P Q (p , f) = (RecursionEmpty Q) (f p)
Or said more simply, given something of type 0 you can *construct* something of any other type by doing nothing. This is the whole point of the type 0, and it is the analogue of the fact that a contradiction implies anything in classical logic.
Yes, what Noah said. One really needs to be careful throwing around words like “tautology” when doing stuff like this!
I think that one of the big conceptual hurdles here is what exactly it means to do things “constructively.” A naïve interpretation might be that since it’s “impossible” to construct a term of type 0, there’s no way to prove things of the form P -> 0 constructively. But of course, this is nonsense: for instance, if I have a proof p : P, then it’s easy to construct a proof of (not (not P)) = ((P -> 0) -> 0): given a proof f : P -> 0, send it to f(p).
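That last construction, as a one-line Lean 4 sketch:

```lean
-- Double negation introduction: given p : P, refute any refuter f : ¬P
-- by handing it p.
example (P : Prop) (p : P) : ¬¬P := fun f => f p
```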
JSE — “But if I think of “way of transforming” as being “algorithm for transforming,” then am I still OK?” Yep, that’s a good way to think of it!
The “…function for transforming proofs of X into proofs of Y…” point of view can also work, but it has to act on not just proofs-without-assumptions, but proofs with extra assumptions. In your case, for ((P or not P) => Q), there needs to be not just a function acting on proofs-without-assumptions of (P or not P) (since, as you say, there may be no such proofs), but also functions taking proofs of (P or not P) under any set of assumptions to proofs of Q under the same assumptions.
Category-theoretically, this is like the difference between a function from global elements of X to global elements of Y (which may trivially exist if X has no global elements), and a natural family of functions from Hom(Z,X) to Hom(Z,Y) for all Z, which corresponds (by the Yoneda lemma) to a morphism X -> Y, or equivalently to a global element of [X,Y].
I’m not sure if this is type theory, exactly. This is the Brouwer-Heyting-Kolmogorov interpretation of intuitionistic logic.
Regarding types, I wrote a blog post that may or may not be helpful.
I love mathematics; I enjoy analytic philosophy. Somewhere in between exists an uncanny valley which induces a sense of revulsion.
It may be worth mentioning here the existence of minimal logic (see also this math.SE question about it), an even weaker subset of intuitionistic logic which may be slightly more in line with JSE’s intuition: It doesn’t prove the principle of explosion (though it does prove a weak version), or “not A implies (A implies B)”, or “(B or not A) implies (A implies B)”. But as the math.SE question shows, it seems to be horribly weak; hardly seems like it would catch on.
When I was in school, I read through (some? most? all? of) Sheaves in Geometry and Logic: A First Introduction to Topos Theory, by Saunders Mac Lane and Ieke Moerdijk, and I seem to recall a) enjoying it and b) thinking that it was relevant to intuitionistic logic. (E.g. it would talk about contexts in which not(not(not(X))) was equivalent to not(X).) So I figured I’d mention it in case you were interested.
Or, if you want to go even a little weirder into logic that may or may not be related to intuitionistic stuff, you can look into Navya-Nyaya logic – the books that I have for that are Materials for the Study of Navya-Nyaya Logic by Daniel Ingalls and The Navya-Nyaya Doctrine of Negation: The Semantics and Ontology of Negative Statements in Navya-Nyaya Philosophy, by Bimal Krishna Matilal. It’s been even longer since I’ve read those, but I think they were interesting, perhaps even interesting to place next to contemporary mathematical approaches to logic. Could be wrong, though…
I used to think I already knew this, but then I read your post and realized I never really really knew it until now. Your wording is perfect, and I’ll be using it next time someone asks me about this! :)