How we might have viewed the continuum hypothesis as a fundamental axiom necessary for mathematics
By mounting a philosophical historical thought experiment, I argue that our attitude toward the continuum hypothesis could easily have been very different than it is.
This is the transcript of a talk I gave at the Oxford Philosophy of Mathematics Seminar on 19 May 2025, in the elegant Ryle Room in the Faculty of Philosophy. There was a one-hour presentation, followed by one hour of Q & A with comments by Daniel Isaacson, Timothy Williamson, Wesley Wrigley, Christopher Scambler, Alex Paseau, Marcus Giaquinto, Beau Mount, and others.
The talk was based on my paper:
Joel David Hamkins, “How the continuum hypothesis could have been a fundamental axiom,” Journal for the Philosophy of Mathematics (2024), DOI:10.36253/jpm-2936, arxiv:2407.02463.
Transcript
Introduction
Chris Scambler: Okay, thanks everyone for coming. So, we have a speaker today who needs little introduction in many places, and certainly not here. We all know Joel Hamkins, back from Notre Dame, to talk to us about how we might have viewed the continuum hypothesis as a fundamental axiom.
Joel David Hamkins: Well, thank you very much for the introduction, and it's such an honor to be here at the Phil Maths Seminar, and such a pleasure to be back in Oxford and see so many friends, so… I want to talk about the continuum hypothesis, so let's just jump right into it.
I'm going to go pretty quickly through the kind of background stuff, which I'm hoping is familiar to many people.
And then we're going to get into the new part, which is the thought experiment argument, in this third section. So let me just jump into it.
Continuum hypothesis
Cantor famously proved that the reals are uncountable, that the cardinality of the reals is strictly larger than that of the natural numbers, and once you have that, of course, it's an extremely natural question to ask whether there's any infinity in between them. He formulated the continuum hypothesis, which is the assertion that there is no infinity between the natural numbers and the real numbers.
And so, in other words, in ZFC, the continuum hypothesis asserts that the continuum is the first uncountable infinity. So, of course, in ZFC, we have all these cardinals here, and we can prove they're the same: the cardinality of the reals, or the continuum, beth_1, is the same as 2 to the aleph_0, the cardinality of the power set of the natural numbers. Those are provably equal in ZFC. The new part of the continuum hypothesis is that these cardinals are also equal to aleph one.
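In symbols, a minimal rendering of what was just said (the displayed notation is mine, not from the talk):

$$\beth_1 \;=\; |\mathbb{R}| \;=\; 2^{\aleph_0}, \qquad \text{CH:}\quad 2^{\aleph_0} = \aleph_1.$$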
Cantor formulated it in the late 19th century, and he spent his life in frustration trying to prove or refute it, without ever coming to an answer.
It was the first problem on Hilbert's famous list of open problems that went on to guide 20th century mathematical research.
Cantor proved that it holds for closed sets. It's easy to show it holds for open sets, and he proved it for closed sets with the Cantor-Bendixson theorem. He had the idea of working up to more complicated sets, and that strategy is partially fulfilled, because, for example, it follows from the existence of large cardinals that the continuum hypothesis also holds for all projectively definable sets, which is a way of continuing that idea.
It was open for decades, until in 1938 Kurt Gödel at least proved that you cannot refute the continuum hypothesis, because he proved that CH holds in the constructible universe L. This is the set-theoretic universe that he constructed, in which all of the ZFC axioms are true, as well as the continuum hypothesis and the generalized continuum hypothesis. So because of this, if ZF is consistent, it follows that ZF plus the axiom of choice plus the continuum hypothesis is also consistent.
It was several more decades until we had the Cohen results, showing that you also cannot prove the continuum hypothesis. This is using the method of forcing, of course, which allows you to take any model of set theory and construct a larger model of set theory, the forcing extension, and you can do so in such a way that CH fails in the extension.
Therefore, the continuum hypothesis is independent of the axioms of ZFC, it is neither provable nor refutable.
In fact, the forcing argument shows not just that you can force not CH, but you can also force the positive version, CH itself. So if you have any model of set theory, then you can go to a forcing extension where CH is true, and to another one where it's false, and a further one where it's true again. You can turn it on and off like a light switch, and in fact, CH is an example of a kind of statement that's called a switch in the modal logic of forcing, a statement that you can turn on and off, like an Orey sentence, if you know what that means.
By now, forcing has been used in thousands of arguments, and has helped to reveal the pervasive ubiquity of the independence phenomenon. Almost every non-trivial assertion of infinite combinatorics has been shown independent of ZFC. This is the general situation, and CH was maybe a first instance of this.
None of that really answers the question whether it's true. Maybe this is the question we really want to know: Is it true or not?
Of course, the independence might just be showing us that ZFC is a weak theory. Any non-trivial statement is independent over any sufficiently weak theory, and so the fact that CH is independent of ZFC might not be significant, it might just be saying, look, we need to strengthen the base theory in order to settle this question.
Gödel had hoped that we would settle CH by the strong axioms of infinity, by the large cardinals, but this turns out to be wrong in light of the Lévy-Solovay theorem, which shows us that all of the common large cardinal hypotheses (inaccessible cardinals, Mahlo cardinals, weakly compact, measurable, supercompact, everything) are preserved by the forcing of both CH and of not CH. So if you have a large cardinal, then you can have that large cardinal with CH, and you can also have it with not CH, and so none of the large cardinal notions will settle CH in light of this theorem.
The CH arises really throughout set theory. It's true not just in Gödel's constructible universe, but also in all of the other canonical inner models that are constructed for large cardinals, such as L[μ] and the extender models L[E], and so on. These are the canonical inner models of large cardinals, and the continuum hypothesis is true in all of them.
But also, CH is refuted by all of the standard forcing axioms, like Martin's axiom at omega one, or the proper forcing axiom, or Martin's maximum. These latter two both imply that the continuum is equal to aleph two. Martin's axiom doesn't settle the value of the continuum, except it does show that it's at least aleph two.
Also, the negation of the continuum hypothesis is routinely considered in the subject of cardinal characteristics of the continuum, where we study the bounding number and the dominating number and the other cardinal characteristics, because the subject is trivialized under CH. The whole point of that subject is to see how these different cardinal characteristics can differ, but if CH holds they don't differ. And so it's only interesting in the case of not CH.
Philosophical arguments about CH
OK, we can't settle CH on the basis of mathematical proof from the ZFC axioms. So set theorists have consequently offered various philosophical arguments in an attempt to settle CH. I want to mention a few, and then I'm going to get into the new one that this talk is really about.
One of the most interesting ones to my way of thinking is Freiling's axiom of Symmetry. This is the axiom that says that if you have a function that maps real numbers x to countable sets of real numbers, then there should be two real numbers, x and y, so that y is not in the set that's attached to x, and x is not in the set that's attached to y.
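In symbols, the axiom can be rendered as follows, a minimal sketch where $[\mathbb{R}]^{\leq\aleph_0}$ denotes the collection of countable sets of reals (the notation is mine, not from the talk):

$$\mathrm{AS}:\qquad \forall f:\mathbb{R}\to[\mathbb{R}]^{\leq\aleph_0}\ \ \exists x,y\in\mathbb{R}\ \ \bigl(\,y\notin f(x)\ \wedge\ x\notin f(y)\,\bigr).$$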
And Freiling gave a kind of philosophical proof of this axiom by thinking about throwing darts at the real line, which is also part of the title of his paper. OK, we throw two darts at the real line. Maybe the first one lands at some point x, and that point x determines the countable set of real numbers A sub x. So then, almost surely, because it is a countable set, almost surely, this second dart is not landing in that set. So it's going to be not in A sub x.
And that second point, y, also determines some countable set A sub y. And now, the point of Freiling's argument, and the reason why it's called the axiom of symmetry, is that it shouldn't matter which dart we thought of as the first dart, and which dart we thought of as the second dart. There's a kind of symmetry in that choice, and so therefore, we should also think that almost surely x is not in the set determined by y. So therefore, neither x nor y is in the other point's set, and so we have fulfilled the axiom.
And so the argument isn't just that there are these two maybe difficult to find points that satisfy the property. The argument is that basically almost any two points are going to satisfy the property, if you just pick them randomly.
In my experience, it's quite easy to convince a room full of mathematics graduate students of the truth of this axiom using this kind of dart explanation that I've just given. Many mathematicians will readily accept that explanation.
OK, it is a little more difficult if there are senior mathematicians or more experienced people in the room, particularly people who have heard of the axiom before, but if it's the first time, then maybe this argument is a little bit convincing that there should be these two points, x and y, that have the property. Young mathematicians readily agree to this axiom on the basis of Freiling's argument in my experience.
Until you do the second half of the explanation, which is that the axiom, in fact, settles the CH. It is equivalent to the negation of CH.
There's an elementary argument showing that the axiom of symmetry is simply equivalent to the negation of CH.
There are some stronger versions, where you attach the countable set not just to one point, but to pairs of points, so that pairs of points are associated with a countable set of real numbers; or maybe you have n-tuples of points, each attached to a countable set of real numbers.
And then, for example, in the pair case, you would throw 3 darts at the real line. What you're hoping for is that each of the real numbers is not in the set that's determined by the other two. So that's a kind of slightly stronger version of the axiom of symmetry, which seems to have the same philosophical justification. And it's equivalent to the assertion that the continuum is strictly larger than aleph two in the three dart case or aleph n in the n plus 1 dart case.
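This generalization can be rendered in symbols as follows, a sketch in my own notation, with the equivalences as just asserted in the talk:

$$\mathrm{AS}_n:\ \ \forall f:[\mathbb{R}]^n\to[\mathbb{R}]^{\leq\aleph_0}\ \ \exists x_0,\dots,x_n\ \ \forall i\ \bigl(x_i\notin f(\{x_j : j\neq i\})\bigr), \qquad \mathrm{AS}_n \iff 2^{\aleph_0} > \aleph_n.$$

In particular, $\mathrm{AS}_1$ is the ordinary axiom of symmetry, equivalent to the negation of CH.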
OK, Freiling's result was not generally accepted as a solution to CH. It's treated as a kind of curiosity, or a kind of warning about non-measurable sets. For although the sets A sub x are each countable, the probabilistic reasoning really concerns a subset of the plane, the set of pairs (x,y) such that x is in the set determined by y.
And with that set, it doesn't really make sense to talk about "almost surely" unless you know that that set of pairs is a measurable set, which it generally won't be if the continuum hypothesis holds.
And so, mathematicians, who have a kind of more informed attitude about it, reject the Freiling philosophical argument, and rather take the result as a warning about the measure-theoretic monsters.
That response is somewhat similar to the response that mathematicians give to the Banach-Tarski paradox in regard to the axiom of choice. They accept the bizarre conclusion of the Banach-Tarski paradox. They don't take it as a refutation of AC, but rather as a warning that we need to be careful with non-measurable sets.
Woodin has advanced philosophical arguments actually on both sides of the continuum hypothesis debate. He made a case years ago for the negation based on considerations of Omega logic and forcing absoluteness. But then more recently he is arguing for CH on the basis of his theory of the ultimate L, which is a canonical inner model accommodating even the largest large cardinals.
Kreisel argues, and also Dan Isaacson defends this view, that CH is settled in second-order set theory, in second order ZFC. Zermelo, in his famous quasi-categoricity result proved that the models of second-order ZFC are exactly the models V sub kappa for inaccessible cardinals kappa.
You chop the universe off at an inaccessible cardinal. These are also the same thing as the Grothendieck-Zermelo universes that are used in category theory. But in particular, they are linearly ordered, and they all agree for a long way on initial segments of the universe, and therefore they will agree on whether CH holds, because that's settled already down low, at the level of V omega plus 2.
So all of the models of ZFC2 agree on the answer. We don't know which it is; we don't know what the answer is, but they all give the same answer, and this is the point.
Critics of this line of reasoning take it as begging the question, as circular, because the very meaning of second-order logic is grounded in set theory, and for example, if you're a pluralist about set theory, then of course you're going to be a pluralist about second-order logic. And so one concept of set will give rise to universes in which those V kappas all have the same answer, but they can give a different answer under an alternative conception.
CH has been at the center of the debate on pluralism that's taking place in set theory. On one side of that dichotomy, we have the universe view, also known as set-theoretic monism. According to this view, there is a unique, absolute background concept of set, which is instantiated in the cumulative universe of all sets, and all set-theoretic assertions have a definitive truth value in that set-theoretic universe. Every mathematical question has its answer there, and so in particular, CH has a definite answer according to the universe view.
The main challenge for the universe view, of course, is that the central discovery of set theory has been an enormous range of set-theoretic possibility. The most powerful tools that we have are most naturally understood as methods of constructing alternative set-theoretic universes, alternative set concepts. I am thinking of forcing and ultrapowers and all the other methods that we use in set theory to build alternative models of set theory. Set theory has been about constructing as many different models of set theory as possible, made to exhibit diverse features or specific relationships with other models.
And that leads you to the multiverse view, or set-theoretic pluralism, which is the philosophical position holding that there are numerous distinct legitimate concepts of set each giving rise to its own set theoretic universe.
There is a fruitful analogy to be made with geometry, to my way of thinking. Geometry for thousands of years was viewed as the mathematics of space; there was the one true geometry. But of course, this point of view was splintered with the discovery of non-Euclidean geometry, and now we have many different geometries: hyperbolic geometry, Euclidean geometry, spherical geometry, elliptical geometry, and so on. The point being that almost all mathematicians today are pluralists when it comes to geometric truths.
The set-theoretic pluralism view is that a similar thing is happening in set theory.
Many set theorists have yearned for what I call a dream solution by which we settle the CH by finding the missing axiom. We're going to find the axiom that we all agree is a manifest principle that's true of the concept of sets, and which happens to settle CH.
But I have argued that this is impossible because of our experience with multiple kinds of set theoretic worlds, some of which have CH and some of which don't. The situation with CH is not merely that it's formally independent and we don't know anything more about it. Rather, we have an informed deep understanding of how it could be true and how it could be false. So, we grew up in these worlds, and we moved from one to the other while controlling other subtle features about them.
And therefore, if someone were to present a new principle and prove that it implies not CH, then we couldn't take it as manifestly true. That's exactly what happened with the axiom of symmetry, right? You present this principle, which seems manifestly true on the naive account, but then once you know that it implies not CH, then you're no longer really willing to take it as a manifestly true principle.
It would be like someone proposing a principle implying that only Brooklyn really exists, but we already know about Manhattan and the other boroughs.
Is CH an open question? Sometimes you can find people in print saying CH is an open question. But I have argued that it's incorrect to describe CH as an open question. The answer to CH consists of this deep body of knowledge that we have about how it behaves in the set-theoretic multiverse, how we can force it or force its negation while controlling for all the other set-theoretic features that we might want. That is a kind of answer to CH.
The CH thought experiment
OK, so now I want to move into the new argument that I'm proposing in this talk, a kind of thought experiment. I want to describe how our attitude towards the continuum hypothesis could easily have been very different than it currently is.
I'm going to claim that if our mathematical history had been just a little bit different, in a way that I find quite reasonable, if mathematical discoveries had been made in a slightly different order, then we would naturally view the continuum hypothesis as a fundamental axiom, one necessary for mathematics, and indeed indispensable even for calculus. So, that's what I want to do.
Here's the thought experiment to begin. Let's imagine in the early days of calculus that Newton and Leibniz had provided somewhat fuller accounts of their ideas about infinitesimals.
In the actual world, of course, we didn't have that. It was lacking. Berkeley famously mocked the foundations of infinitesimals, writing: "And what are these same evanescent increments? They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?"
It simply wasn't clear enough what kind of thing the infinitesimals were in the early days. Were they part of the ordinary number system, or did they somehow transcend it? It just wasn't clear.
So what I want to do is imagine that they were a little bit more clear. I want to imagine that they posited two number realms: the ordinary real numbers, and then a larger realm of numbers. I'm going to call them the hyperreal numbers. This is a kind of anachronism to use the contemporary terminology for this thing, but I'm going to use that word anyways. I'm not saying that they would use that word at all, I'm just saying they're positing two number realms, the ordinary numbers and then this expanded realm that has the infinitesimals in it.
That idea alone immediately addresses, to my way of thinking, the withering Berkeley criticism, because it releases the tension of this paradoxical claim. The problem with infinitesimals is that they're positive, but they're smaller than every positive number. This just seems contradictory, because they would have to be smaller than themselves, and so on. But if you have the two-number-realm idea, it's not a contradiction anymore, because you're only saying that the infinitesimals are positive in the larger realm, and they're smaller than every positive real number. They are fitting into the infinitesimal gap, just like we understand infinitesimals to be. So there's no contradiction there. It releases that tension, and having these two number realms also enables us to clarify exactly how the infinitesimal number system relates to the real numbers. I also think it enables a kind of frank discussion of the nature of the real numbers and how they work.
OK, I want to imagine a little bit further. Let's imagine Leibniz clarifying the existence assertions about infinitesimals. Maybe he said something like: "Every conceivable gap in the real numbers is filled by infinitesimals."
So this is the existence assertion for infinitesimals. For example, the gap between the real number 0 and the positive real numbers should have infinitesimals in it, but also there should be hyperreal numbers at infinitesimal distance from the square root of 2, and so on. Every real number should have its infinitesimal neighborhood. Those are gaps that should be filled, according to imaginary Leibniz.
There is also the gap above all the real numbers, the infinite hyperreal numbers, and those would be the reciprocals of the infinitesimal numbers near zero. So we would want the hyperreal field to be non-Archimedean because of that.
The contemporary perspective on that way of thinking would be that it's an incipient form of saturation. Countable saturation is the principle that says that if you have a weakly increasing sequence of numbers in R-star and a weakly decreasing sequence of numbers strictly above it, then there is some hyperreal number strictly in between. So if we have two sequences like this, then there should be a hyperreal number strictly in between. This is exactly what it means to be countably saturated.
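In symbols, a minimal sketch of the order form of countable saturation just described (notation mine):

$$x_0 \le x_1 \le x_2 \le \cdots \;<\; z \;<\; \cdots \le y_2 \le y_1 \le y_0$$

That is, whenever $x_0\le x_1\le\cdots$ and $\cdots\le y_1\le y_0$ are sequences in $\mathbb{R}^*$ with every $x_n$ strictly below every $y_m$, there is some $z\in\mathbb{R}^*$ with $x_n < z < y_m$ for all $n$ and $m$.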
The reals are not countably saturated. It's something like a Dedekind cut, except it's not a Dedekind cut, because we're not saying that the number in between is the least upper bound. Maybe all the x's are equal to each other, for example, and this is why you're getting infinitesimals: a number that is strictly bigger than a given real number, but strictly smaller than all the real numbers above it. So this is not exactly the same as a Dedekind cut. This is the countable saturation property.
The actual Leibniz, the historical Leibniz, was inclined to higher orders of infinitesimals. Countable saturation gives you orders of infinitesimality, because if we have an infinitesimal, then you're also going to have infinitesimals relative to it, and so on, by countable saturation.
And Berkeley had complained that some mathematicians, notably Leibniz and L'Hôpital, hold that there are infinitesimal quantities of all orders, and there are infinitesimals of infinitesimals of infinitesimals, without ever coming to an end. Berkeley was complaining about the absurdity of that, but now I'm saying, well, that my imaginary Leibniz is positively asserting that there are these orders of infinitesimality.
Euler also explored the vast space of orders of infinity. He looked at the functions x and x squared and x cubed, and he looked at square root of x, and he tried to understand how these different polynomials and functions relate to one another with respect to their growth behaviors at infinity. Those ideas led to further work of Hausdorff with the Hausdorff gaps, and so on, if you know what these are, and the work of Hardy on the orders of infinity. So, it's a similar kind of thing. Those structures are countably saturated in exactly the same way.
So now let's turn to the imaginary Newton, and I imagine that he's going to say: "The two number realms fulfill all the same fundamental mathematical laws."
Of course, we need these laws. I mean, when we're working with the hyperreal numbers, we want them to behave like real numbers, according to the same rules of distributivity and so on. For example, and this is what I wanted to write down on the board here.
(writing on whiteboard)
If you compute the derivative of x squared, I mean, using the infinitesimal delta, you would take x plus delta squared minus x squared over delta. Let's distribute it out. You know, I'm not going to take a limit as it goes to zero, but delta is an infinitesimal number, and I perform this calculation. When you expand this out, you get x squared, what is it, 2x delta plus delta squared minus x squared over delta. The x squareds cancel, the deltas cancel, and you get 2x plus delta. And that's why the derivative of x squared is 2x, because this is vanishing to 0, and so on, the ghost—we just let that disappear, and the derivative of x squared is 2x. So we want to calculate with infinitesimals and the same rules of mathematics to apply. Both Newton and Leibniz were doing that.
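Here is that whiteboard computation written out, a rendering of what was just described, where $\delta$ is a nonzero infinitesimal and $\approx$ means infinitely close:

$$\frac{(x+\delta)^2 - x^2}{\delta} \;=\; \frac{x^2 + 2x\delta + \delta^2 - x^2}{\delta} \;=\; \frac{2x\delta + \delta^2}{\delta} \;=\; 2x + \delta \;\approx\; 2x.$$

Discarding the remaining infinitesimal $\delta$ (Berkeley's ghost of a departed quantity) gives the derivative $2x$.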
So the hyperreal numbers should be an ordered field. In fact, a real closed field, so every positive number should have a square root, and every odd degree polynomial should have a root, and so forth.
From a contemporary perspective, of course, that is an incipient form of what we would call the transfer principle. The transfer principle is the principle saying that the real field is an elementary submodel of the hyperreal field. And we want this not just for the field structure, but for any extra predicates that we put on, we want to be able to use those predicates and functions. We want to be able to apply the log function and the sine function and so on in the hyperreal context, and the transfer principle allows us to do that.
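A minimal sketch of the transfer principle in symbols, assuming a language containing symbols for whatever functions and predicates on $\mathbb{R}$ we care about (such as $\sin$ and $\log$):

$$\mathbb{R} \preccurlyeq \mathbb{R}^{*}, \qquad\text{that is,}\qquad \mathbb{R}\models\varphi(a_1,\dots,a_n)\ \iff\ \mathbb{R}^{*}\models\varphi(a_1,\dots,a_n)$$

for every first-order formula $\varphi$ in that language and all real numbers $a_1,\dots,a_n$.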
The actual Newton had, in fact, a deflationary attitude towards infinitesimals and fluxions. He wrote that it was just a method of avoiding the tediousness of the old calculations. There's nothing new happening. They were eliminable, a conservative extension, which is a kind of hint at the transfer principle, since it's saying that the laws are the same, nothing new is happening.
The thought experiment is that Newton and Leibniz have expressed the idea of two number realms with vaguely expressed ideas that we could now look at as forms of the transfer and saturation principles. That's what I want.
I'm not requiring that they have a full-blown theory of the hyperreals with nonstandard analysis. That can come in time. They just need the vague idea of two number realms, together with those other vague ideas. But then, of course, rigor comes over the years, and over hundreds of years we would get more precise accounts of exactly their nature.
We know for a fact that transfer and saturation are fundamentally coherent and correct accounts of the hyperreal numbers, and furthermore, those ideas are sufficient for a highly successful and insightful development of the fundamental theory of calculus. It's possible, because of non-standard analysis, to found calculus on those ideas. So they wouldn't meet any obstacle if that's the track that they were headed on.
For example, you could look at Keisler's wonderful book, I think it's called Elementary Calculus: An Infinitesimal Approach. It's like an intro calc book that you would get in, you know, Calc 101. It is not using epsilons and deltas, but is based on infinitesimal numbers and the hyperreal numbers. If you open the front cover, you see the properties of the hyperreal numbers: if epsilon is an infinitesimal number, then 1 over epsilon is an infinite number, and so on, and it gives you all the basics.
And he goes on to develop all of calculus using those infinitesimal ideas. So even with very simple ideas at the Calc 1 level, it is extremely robust as a theory of calculus, and there would be no need with that approach to calculus to ever adopt the Bolzano-Weierstrass use of epsilons and deltas, and so on.
OK. Consider that in our actual history, even a completely incoherent account of infinitesimals was extremely successful and led to many discoveries, including all of the fundamental theorems of calculus. In the old days, they proved all the fundamental theorems of calculus without any proper limits and so on, and they only had this extremely flawed concept of infinitesimal.
It raises the question: Do you need a rigorous foundation for insightful mathematical discoveries of enduring importance?
And the answer is: Apparently not! Because that's exactly what we had in calculus. It wasn't a good foundation, and yet it was extremely robust and valuable, and they proved all the right theorems, even though their foundations weren't any good.
Okay, so in the thought experiment, of course, we get all the actual theorems that were proved with the bad foundation, but it's going to be more correct, and it's going to be more rigorous, because it's a proper foundation. So they would get an enduring calculus based on infinitesimals proceeding roughly along the lines of non-standard analysis.
This is really what my thought experiment is, that calculus would become based on the hyperreals in a much more profound and thorough manner than it is currently.
At bottom, the proposal is that the hyperreal numbers would become one of the standard number systems that mathematicians discovered and became familiar with. We have the natural numbers and the integers and the rational field, and the real numbers, and the complex numbers. But next to them, in the thought experiment world, would be the hyperreal numbers. It would be one of the standard structures that mathematics was about.
Gödel comes out in favor of the imaginary history. This was at the 1974 conference, I believe, at Princeton, the Robinson conference. I don't know if you were there (referring to Dan Isaacson). You were in New York at this time, right?
Dan Isaacson: This is actually in the preface to the second edition of Robinson's book.
JDH: Oh, I see, okay. So, but it's based on his public remarks, right?
Isaacson: I don't know where he first said it, but it's certainly…
JDH: That's what I read. Maybe I have it wrong. I'm going to go look it up.
Okay. So, Gödel says there are good reasons to believe that non-standard analysis, in some version or other, will be the analysis of the future, and he describes the process by which we move from the integers to the rationals, and so on. The next quite natural step after the reals, namely the introduction of infinitesimals, has simply been omitted. We have omitted it. I think in coming centuries, it will be considered a great oddity in the history of mathematics that the first exact theory of infinitesimals was developed 300 years after the invention of the differential calculus.
So what he's saying is that the actual history is weird, it's the imaginary history that should have happened. I think that is pretty strong support for the reasonableness of the thought experiment world that I'm describing.
Categoricity
OK, so now I want to come to the categoricity part. Isaacson, taking inspiration from Kreisel, describes the process by which mathematicians come to know their mathematical structures. We become familiar with the structure, we find the essential features of it, and then we prove that those features categorically characterize it up to isomorphism. He writes, "the reality of mathematics turns ultimately on the reality of particular structures, the reality of a particular structure constituting the subject matter of a branch of mathematics is given by its categorical characterization of the principles which determine that structure to within isomorphism."
The categorical accounts of our particular structures become the framework of our mathematical reality itself, is how one might say it.
Indeed, that's exactly how things played out at the end of the 19th century and the early 20th century. Mathematicians began to provide categorical accounts of all of the standard fundamental number systems. We identify the axioms that are true in the structure, such that those axioms furthermore determine that structure up to isomorphism. Of course, usually you have to use second-order logic when you're doing that, since in light of the Löwenheim-Skolem theorem, there's no first-order theory that can be categorical in that way for an infinite structure.
One of the first and most important is Dedekind's categorical account of the natural numbers and the theory of the successor relation. Any two models of Dedekind's theory of the successor are isomorphic. The theory says zero is not a successor, the successor function is one-to-one, and second-order induction, asserting that every number is generated from 0 by successor in the sense that if you have a set of numbers and it contains zero, and it's closed under successor, then it contains all the numbers. So that's the Dedekind account of a successor relation, and it's categorical. It determines that structure uniquely up to isomorphism.
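A minimal rendering of the Dedekind axioms in second-order form, with $0$ and the successor function $S$ (notation mine):

$$\forall x\ \bigl(S(x)\neq 0\bigr), \qquad \forall x\,\forall y\ \bigl(S(x)=S(y)\to x=y\bigr), \qquad \forall X\ \Bigl[\,0\in X \ \wedge\ \forall x\,\bigl(x\in X\to S(x)\in X\bigr)\ \to\ \forall x\ (x\in X)\,\Bigr].$$

Dedekind's theorem is that any two models of these axioms are isomorphic.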
Using that, mathematicians provided categorical accounts of the integer ring. For the integers, you want to go sort of both ways. There's an inductive version going both ways. Or you can take a definition of it as the same-difference relation in the natural numbers. There's a lot of different ways to get a categorical account of the integers.
The rational field is going to be the quotient field of the integers, and this is going to be categorical.
Cantor gave a categorical account of the rational order, the countable endless dense linear order. He proved, introducing the back-and-forth method, that any two countable endless dense linear orders are isomorphic. That's a categorical account of that order.
Huntington provided the categorical account of the real field: it is the unique complete ordered field. And the complex numbers can be characterized as the unique algebraic closure of the real field, or you could also characterize them as the unique algebraically closed field of characteristic 0 and size continuum. There are a lot of different equivalent characterizations.
Each of the central structures in mathematics around that time found its categorical account. And these categorical accounts enable the coherence of the mathematical enterprise in the manner that Dan Isaacson described. Categoricity is what enables us to refer to those various fundamental structures by their defining characteristics. We don't have to talk about a particular complete ordered field that we built, say, in set theory, using Dedekind cuts in the rationals, or whatever particular construction it was. We can just say the complete ordered field, and there's only one up to isomorphism, and if we only care about it up to isomorphism, then that is going to be completely adequate.
For this reason those categoricity results implement the philosophy of structuralism in a very direct manner. They're connected deeply with the mathematical use of structuralism. And I want to mention an issue that I have observed, namely, that there are really two independent strands of structuralism in the literature. There is a philosophical strand which generally grows out of Benacerraf, and many philosophers are talking about structuralism, the structuralism that's coming out of Benacerraf. But when you talk with mathematicians, many of them have very strong views on structuralism, and they've thought a lot about structuralism, but almost never will they mention Benacerraf. It's just not part of what they're talking about. Rather, they're tracing their ideas on structuralism back to the categoricity arguments, beginning with Dedekind. And so, when you're thinking about structuralism, it is fruitful to compare these two strands, because they're not often distinguished as much as they might be.
Back to the thought experiment. In the imaginary history, the hyperreal numbers have become a core structure on the list. Natural numbers, integers, real, complex,... hyperreals. And so we would demand a categorical account of that structure. It would be necessary for the coherence of mathematics.
But is it possible? Can we give a categorical account of the hyperreal numbers?
The answer is yes, we can prove a categoricity theorem for the hyperreal numbers. The final step of my thought experiment is that we imagine a Zermelo-like figure who writes down a theory, just like the actual Zermelo wrote down his axioms in order to give an account of the well-ordering theorem. In my thought experiment world, the Zermelo-like figure is writing down a theory that's going to provide the categorical account of the hyperreal numbers. And he's going to prove the hyperreal categoricity theorem.
If you assume ZFC plus CH, then there is, up to isomorphism, a unique smallest countably saturated real closed field. So we get categoricity for the hyperreals, if we assume CH.
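As a summary statement, the theorem just described, together with the converse coming later in the talk, can be put like this (phrasing mine):

$$\mathrm{ZFC} \vdash\ \ \mathrm{CH} \iff \text{there is, up to isomorphism, a unique countably saturated real closed field of size } 2^{\aleph_0}.$$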
You can prove this in a back-and-forth argument. It's similar to Cantor's back-and-forth argument for the countable dense linear order. You build up the isomorphism in a sequence of length omega using finite partial isomorphisms. At each stage, you get another point, and it fits in with the previous ones, and you can extend the isomorphism to accommodate that new point, and you go back and forth in this way. And get the full isomorphism there.
In the hyperreal categoricity theorem, it's exactly the same, except now it is transfinite. It's a little more difficult because it's a transfinite construction of length omega one. If you have CH, and you have two countably saturated real closed fields of size continuum, which is aleph one, then we build up the isomorphism with countable partial isomorphisms. We look at the next point we haven't yet handled; it fits in some way with the previous countably many points, and there's a point just like that on the other side because of saturation, so we can extend the isomorphism. We build up the isomorphism this way. So it's just a transfinite generalization of Cantor's back-and-forth construction.
OK, we didn't really need very much of the transfer principle. All we needed to prove that was that it's a real closed field, which is only a little of what I had imagined Newton to have said. The theorem, in the actual history, was proved in 1955, in a slightly stronger version: under CH, there is up to isomorphism only one real closed field of size continuum whose order is countably saturated. So we only need the order to be countably saturated, not the whole structure. In fact, they're equivalent anyways.
So in this way, from the continuum hypothesis, we can prove that there's a unique hyperreal structure up to isomorphism.
But there's no categoricity result without CH. There's no theorem; if you don't have CH, you can't do it. The first hint of that was shown by Roitman, who showed it's relatively consistent with the negation of CH to have multiple non-isomorphic hyperreal fields arising as ultrapowers. Those ultrapowers are always countably saturated, so each would be a countably saturated real closed field.
But they wouldn't all be isomorphic under non-CH. Then Alan Dow proved that, in fact, it's just equivalent: if CH fails, then there are always multiple ultrapowers that are non-isomorphic, non-isomorphic even in the order; the orders aren't isomorphic. So therefore, CH is simply equivalent to the hyperreal categoricity theorem.
CH holds if and only if there's a unique countably saturated real closed field of size continuum. With CH, we have categoricity for the hyperreals. Without CH, we lack categoricity for the hyperreals.
It is easy to find mathematicians talking about "the hyperreals". If you search on MathOverflow, there are dozens of occurrences of that phrase. But it's not actually meaningful. I mean, "the"… what is "the"? There's not just one structure. Without the categoricity result, you're not entitled to speak of "the." ZFC doesn't prove that there's a unique hyperreal structure, as we just said; you need the continuum hypothesis for "the."
I have argued that the lack of a categoricity result for the hyperreals actually explains, in part, the hesitancy of mathematicians to do non-standard analysis. The epsilon-delta approach to calculus and non-standard analysis are two alternatives, and there's definite hesitancy, and you can explain a lot of the hesitancy by upbringing, because we all learned the epsilon-delta approach, and so on, and so it's a cultural phenomenon, because mathematicians who later learned the non-standard analysis approach, you know, liked it. We can translate back and forth, and so maybe it doesn't matter that much, but there is a definite hesitancy to adopt the nonstandard analysis. And part of the reason might be that it's a bit incoherent to base this whole theory on the hyperreals, if the hyperreals are not a thing. It's many things. Which R star are you going to use? And they're not all isomorphic in general, right? Which one do we use? How can we even describe which one to use? We can't pick any of them out by a property that they have, because there is no categorical account. The lack of categoricity leads to a kind of lack of reference.
The thought experiment, at bottom, is that the hyperreal field has become a core idea, present from the beginning and essential for calculus.
Calculus would be based, in the non-standard analysis manner, on the hyperreals. At first in a pre-rigorous manner, but then with increasing rigor and sophistication. And eventually, mathematicians demanded categorical accounts of all of their structures, and this would be required also for the hyperreals. And it's possible to do that, but only if you have CH.
So this is how CH gets on the list of fundamental axioms: because it's a necessary part of providing meaning in mathematics, even at the level of calculus. It's indispensable for the foundations of calculus.
Of course, that's an extrinsic justification for CH. We get enormous extrinsic support for CH because of that desired consequence, a categorical account of the hyperreals. But of course, it's similar to a way of thinking about ZFC, which provides an enormously successful account of the real numbers. It proves so much of what we want to prove about the real numbers. We can prove the categorical account in ZFC, and much, much more.
Similarly, CH would be seen as vital for the account of the hyperreals.
In the thought experiment world, knowing human nature, once you have extrinsic justification, then of course mathematicians would find it very easy to rationalize intrinsic justifications, in a manner that arguably has occurred with other axioms, such as the axiom of choice. The axiom of choice maybe at first was viewed with extrinsic support, but now I think it's quite common to take it as having enormous intrinsic support. Many people, myself included, take the axiom of choice as something close to a law of logic, which is a way of saying that it has intrinsic support.
I'm imagining maybe one of the forms of intrinsic justification would go like this. The continuum hypothesis, which is definitely true because it's needed to provide the hyperreal categoricity theorem, is obviously true because it unifies the two methods of producing uncountable infinities. CH is equivalent to the identity of aleph one and beth one. Aleph one is one way of getting to a larger infinity, in the Hartogs manner, and the other way is taking the power set. To unify these two accounts of the uncountable is equivalent to CH. So that's possibly, speculatively, an intrinsic justification.
A critic wants hyperreal fields of larger cardinalities, because the hyperreals I was talking about were just the ones of size continuum. We can satisfy this critic. If you assume ZFC plus the generalized continuum hypothesis, then you can prove that there is up to isomorphism a unique, saturated, real closed field in every uncountable regular cardinality. So we get hyperreals at the level of aleph 2, aleph 3, and so on. All the regular cardinals are going to have their hyperreal field at that level. And in fact, the existence of those saturated models is equivalent to the generalized continuum hypothesis.
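In symbols, a sketch of the generalization just described, as asserted in the talk (phrasing mine):

$$\mathrm{ZFC} + \mathrm{GCH} \vdash\ \ \text{for every uncountable regular cardinal }\kappa\text{, there is, up to isomorphism, a unique saturated real closed field of size }\kappa,$$

and, per the claim above, the existence of those saturated fields throughout the uncountable cardinals is in turn equivalent to GCH.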
So we have this sort of tower of infinitesimality continuing through all the higher cardinals, and I take that as a kind of natural continuation of the saturation ideas that Leibniz and Euler and Hausdorff and Hardy were thinking about, which I mentioned earlier. We are just looking at higher levels of those ideas.
Ultimately, what we get is those hyperreal fields of all the different cardinalities would form an elementary chain, whose union is the surreal numbers. So it is going all the way up to the proper class-sized fields.
Let me turn to a slightly different topic. Tim Williamson has speculated on how it might be that other beings, with physical and mathematical powers similar to humans, would settle the continuum hypothesis. He introduced the concept of a normal mathematical process, and said that one normal mathematical process is adopting a new axiom, and if set theorists finally resolve CH, that is how they will do it. Of course, just arbitrarily assigning some formula the status of an axiom does not count as a normal mathematical process, because doing so fails to make the formula part of mathematical knowledge. So in particular, we can't resolve CH simply by tossing a coin in order to determine it. We want to know whether it holds, not merely to have a true or false belief one way or the other. And the question arises, when does acceptance of an axiom constitute mathematical knowledge?
I believe that my thought experiment engages with Williamson's challenge in that regard. The people in my thought experiment world are not flipping a coin to decide. They're choosing CH for a sound mathematical reason. I have described the richer context and process that would lead mathematicians to CH. They would introduce it as an axiom because it enables them to prove this absolutely required result, the categoricity theorem for the hyperreal numbers, which is necessary for the coherence and reality of their practice. So I take them to be undertaking, in accepting that axiom, a normal mathematical process.
Maddy has written on the contingency of ZFC, and the very situation of my thought experiment is that ZFC is contingent: if history had been different, we would have different axioms. She says the fact that these few ZFC axioms are commonly enshrined in the opening pages of mathematics texts should be viewed as a historical accident, not as a sign of their privileged epistemological or metaphysical status. She was mainly concerned about extensions of ZFC by large cardinals, not about the kind of extension I'm talking about. But it is another kind of historical contingency, showing how we might have taken CH as a fundamental principle.
Consider the discovery via forcing that there could be multiple non-isomorphic hyperreal fields. With forcing, we can make not CH happen, and in those models, there would be multiple different non-isomorphic hyperreal fields. I think that this would, in the thought experiment world, be seen as bizarre and chaotic. It would be a little bit similar to a situation that we know happens now: if you drop the axiom of choice, it's possible that there are two different algebraic closures of the rational field. Normally, when you have the rational field, you can take the algebraic closure of it; these are called the algebraic numbers, and there's only one such closure. This is a standard result, but you need the axiom of choice to prove that theorem. Actually, it's consistent with ZF that there is a countable algebraic closure of the rational field, as well as an uncountable one, which is completely bizarre.
If you ask mathematicians about the algebraic numbers in these ZF models, the reply that you usually get is that those models do not understand the algebraic numbers. They have a wrong theory of what the algebraic numbers are like.
I think a similar thing would happen. A similar view would happen in my thought experiment world about the models where CH was failing, where you had multiple non-isomorphic hyperreal fields. They would be seen as weird and bizarre and that those models do not understand the hyperreals, in exactly the same way that we think about this Läuchli result as weird and bizarre.
In the imaginary universe that I'm describing, forcing would be seen as a little bit less successful, precisely because it doesn't preserve the fundamental theory. One of the important things about forcing is that if you have a ZFC model, and you go to a forcing extension, you still have ZFC in the forcing extension. But if your basic theory is ZFC plus CH, it's no longer true that every forcing extension still has CH, so the basic theory isn't preserved.
Those models maybe would be viewed in a similar way that the symmetric model construction is viewed in today's world. Symmetric models are a manner of building models of ZF that don't have the axiom of choice, and in particular, you can build some really badly behaved models in which choice fails, like the Läuchli model and the others, where you have, you know, amorphous sets, or infinite sets that are not Dedekind infinite, and so on. These weird models where choice fails are constructed with this method called the symmetric model construction, and I think the attitude that mathematicians and set theorists have towards those models, that it's a way of building weird models, would be shared by forcing in general, at least amongst the forcing that doesn't preserve CH.
Conclusion
Let me wrap up here. My thesis is that we could have had a very different perspective on the continuum hypothesis. Early mathematicians could have been clearer about infinitesimals, positing distinct number realms. The hyperreal numbers would have become a core structure of vital importance, woven into the heart of calculus. All the core structures require a categorical characterization for meaning and reference, but a categorical account of the hyperreals is possible only with CH. And therefore, CH would have been part of our foundational theory. It would have had extrinsic justification because of that, and in turn, intrinsic justification. We would have seen CH as necessary for mathematics, indispensable even for calculus.
So that's it. Thank you very much.
Q & A discussion
Chris Scambler: Okay, so let's get started with the Q&A. During the talk, there was a question from Alan Slomson, or at least it looks like maybe there was. I don't know if Alan is here and would like to ask that question, but, um… others who may have questions, put your hand up, and I'll take notes. If he's not here right now, I'll come back to him later. So, Wes, why don't you start?
Wesley Wrigley: Well, thanks very much for the talk. Really, really fascinating stuff. My question is just about, so what makes this a kind of alternate history? Or maybe the question is sort of how it fits in with your own view, because, I mean, you mentioned the quote of Gödel. That seems to have some normativity built into it. This is how things should have happened. In the future, people will see that they've gone wrong, and will have that. So, I suppose, yeah, maybe my question is just, why isn't this a really nice argument for CH? I take it, I think it can't be, on your view, but… well, given how… given that history didn't happen the way you described. Why shouldn't we go back and correct it and… Yeah, we actually do need a categorical characterisation, or just CH. You've got the dream sort of breakthrough done, perhaps.
JDH: Excellent. So, that's a really good question. Um, you're totally right, I mean, I think I agree with you that there's this normative aspect of Gödel's view on that matter, that the history should have been like that, whereas I'm really only arguing that it could have been. And it wasn't like that, and so I'm arguing for this sort of contingency about ZFC as the fundamental theory; the fundamental theory could have been different. Maybe there are a lot of different ways; maybe type theory could have done better in the early days, and set theory would be this sort of niche thing, and we would all be doing type theory in a much more rigorous way, and so on.
I'm not sure how to answer the question about the connection with pluralism, though, because, of course, if you're a monist, then you think that there is an answer, and so if you think that it could have been a fundamental axiom, then that's the same thing as arguing that it IS a fundamental axiom, if you're you know, if you're a hard-core monist. And so, in a way, the thought experiment argument is talking with the monist a little bit, but I'm not going as far as they are. I mean, if they accept this argument that it could have been a fundamental axiom, then on the basis of the monism, then they would deduce that it is already a fundamental axiom.
Whereas I'm not making that final step. But I still think it's important to realize that we could have this different way of thinking about CH as a fundamental axiom, which I think is interesting, and I think it is shedding light on how we should think about some of these issues. So, I'm sorry, that's not a very good answer, but…
Wesley: No, no, I think that was helpful, thank you.
Joshua Loo: Sure. Hey, I was going to ask a follow-up question, which is, how much room is there, as a pluralist, to say that following this path would have been better, if not the right answer, at least. How much sort of flesh can you put on those bones? Because that seems to be quite a tempting conclusion, that it would have been better to go this way.
JDH: Oh, I see. Yeah, that's not at all how I was thinking about it, so that's quite interesting that you say it's tempting. I mean, I don't think I was trying to hint at that conclusion, rather.
Joshua: Well, at least I'm tempted by that.
JDH: I think rather the lesson that I would like to take as a pluralist from this, if the thought experiment is successful, is that I would want to look at what other kinds of theories we can give similar reasonable-seeming accounts for, theories that we might have arrived at. For example, is there a similar kind of thought experiment where you would arrive at not CH? Maybe there would be something about some fundamental thing requiring a distinction between the bounding number and the dominating number or something, and the ideas would only make sense if CH failed. I mean, I can imagine such arguments. I don't know any successful such arguments, but it would seem that the pluralists might be tempted to kind of give a lay of the land, you know, what are the possible situations that we might have found ourselves in. And that would be a kind of pluralist undertaking, right, to investigate all those different possible fundamental axioms that we might have been led to for similar kinds of reasons, with sound motivations.
Joshua: Yes, I agree with that.
Chris Scambler: So, I forgot to say, if somebody has a question from, uh, the online portion of the meeting. Let us know in the chat, we'll put your hand up as Daniel has done. And I will call on you, as I will call on Daniel right now.
Daniel: Hi, yeah, I mean, I love this paper and this talk, and I guess I had a follow-up to the first question. In a certain frame of mind, where I find this to be a compelling argument for CH, I then start to think that what you've done in this paper is provide a kind of error theory, about how adventitious historical developments can sometimes shape in profound ways what we accept. We know that this is the case independently, outside of mathematics. A lot of the content of specific religious traditions is very widely accepted, devoutly believed, a paradigm of devout belief, but is in fact often contingent on historical developments that need not have occurred, and it's sometimes surprising to think so, but this does happen. So, analogously, in this case, what we have is a reason to think that for specific, contingent historical reasons, we are in a situation where we regard the question of CH as deeply open. But we now have an explanation of how this need not have occurred. What do you think of that kind of diagnosis of what this historical story is doing?
JDH: Right, I guess that's basically what I'm trying to argue for with the contingency, the historical contingency. I mean, I think that much of what you said resonates with what I'm saying. For example, one can easily imagine that, say, Brouwer and Bishop had been more successful in their arguments, and that mathematicians were tempted to found more mathematics on the constructive approaches, explicit computation, and so on. There would be a different culture of mathematics, in such a way that it would be more strongly founded on intuitionistic logic and constructive axiom systems. And that would be another example of the kind of historical contingency for the axioms: ZFC, in that kind of thought experiment world, would be rejected and replaced with more constructive analogues. And I guess there are probably quite a lot of other instances of such situations that you can imagine, and maybe giving some legs to those thought experiments would make you realize that the list of axioms that we have written down is contingent in exactly the way that you were describing.
Daniel: All right, thanks. That's helpful, thanks.
Chris: So, do you want to stop sharing the screen, then we would see… Oh, okay, sure, yeah, I can… They can bring it back if it's needed.
JDH: Like this?
Chris: Yeah, no, it'll go to the next speaker. Then you turn this camera off. Okay, uh, Beau?
Beau Mount: Yes, thank you. That's really interesting. I think I was going to ask Wes's question, so I will make this just a kind of follow-up to that. I think the universist can take on board a great deal of this methodology and, properly understood, can assent to your claim that in a certain sense this could have been a fundamental axiom, as long as we leave open whether we're taking that description of the practices in a factive sense or not. Because what I think the universist will say this demonstrates is that rational mathematicians who progressed through a certain historical trajectory could have been in a position where they had very strong reasons for believing CH and not particularly convincing reasons for believing not CH. But that, of course, is compatible with thinking that there are other trajectories on which the reasons could have come in a different order. For any really complex question on which there are interesting and difficult arguments on both sides, it seems really plausible that there could have been rational inquirers who encountered those arguments in an order that would leave them, at the end of it, with very strong reason for believing P as opposed to not P, and vice versa. That's surely compatible with there being a fact of the matter. We should admit that people can sometimes have very strong reasons for believing things that are nonetheless false. So I'm just saying, I think that nothing you've presented here is anything the universist has to reject as an investigative methodology.
JDH: That's interesting, thank you. Let me try to push back against it a little bit. If you have the universe point of view, a sort of set-theoretic monism, there's the one true set-theoretic universe, and you imagine the thought experiment of the people who became obsessed with the hyperreal numbers in the calculus and insisted that calculus be done that way, which is a slightly more disparaging way of describing my thought experiment, right? So much so that those people required CH in their fundamental axioms, because that's what it took to get the categoricity result for the hyperreals, which is what they thought was important. But actually, we already know: do we really need a categoricity result for the hyperreals in order to make sense of mathematics? Well, no, because in our actual world we don't have that theorem, it's not a ZFC theorem, and we do calculus fine, right? Now, let's imagine that Woodin's arguments for not CH based on Omega logic are successful, so that on the universe view, with the one true set-theoretic universe, we're getting not CH. If that were true, then we would have reason to look at those thought experiment people and say: look, they got the truths wrong. They thought CH should hold, because they were obsessed with the hyperreals, but actually we know, because of Omega logic, that not CH should be the case. And so this would be a way, on the universe view, of criticizing the thought experiment people as not being reasonable, right? I mean, if we had reason to think not CH, then we should think that they were wrong to found calculus on the hyperreals and insist on a categorical account, because that requires them to deny not CH, which on this view would be the fundamental axiom.
Isn't that a way of pushing back at your suggestion that it's somehow automatically compatible with the universe view? Because in this way of thinking, I think it wouldn't be compatible. I mean, if you have the universe view and you are still agnostic about CH, because you don't yet know, then I agree with you that you could take the thought experiment as possible evidence that CH is true in the one true set-theoretic world. And because of Omega logic, maybe not CH is true, and we still don't know, so that's a sense in which it's compatible, and I would agree.
But I can imagine that if one of those sides, you know, was more successful, it would be grounds for rejecting the reasonableness of the other side.
Beau: Well, I mean, it would be grounds for thinking that they have a false belief.
JDH: Right.
Beau: And, you know, I can think that, you know, Newtonian mechanics is a false theory, but nonetheless think that it was extremely reasonable for people in the 17th century, on the basis of the evidence they had to think that it was correct.
JDH: Oh, I see, that's really interesting, I like that.
Beau: So, I mean, this is just, like treating it in the same way we would treat any other disputed factual question.
JDH: Right, so the question would be: if you're amongst universists, who believe there's one truth, is there a fact of the matter about CH, and what does the thought experiment say about this fact of the matter if we disagree? You know, that's quite interesting.
Chris: Marcus Giaquinto.
Marcus: Hi, Thanks for an extremely interesting talk. Can I be heard?
Room: Yes, yes.
Marcus: There's just one thing that worries me about your argument that we could easily have adopted CH in this counterfactual world, which is… Well, at some point you acknowledge that to do calculus with infinitesimals, you don't need the categoricity of the hyperreals. You don't need it; you just need the transfer principle. But later on, towards the end of your talk, you said that categoricity was necessary for the coherence of the practice. And there, I think, towards the end, you were making a stronger claim than there really is warrant for, because you don't need the categoricity. And of course, as you pointed out, you don't have it in Robinson's non-standard analysis.
JDH: Right, that's absolutely right.
Marcus: All you need it for is to justify mathematicians or others who talk about THE hyperreals. And that's just a small mistake; it's not a big deal. So this need for CH, which rides on the back of the supposed need for the categoricity of the hyperreals, is a weak point, isn't it?
JDH: No, I don't agree. And I tried to argue at length about that. This is what Section 4 was about, when I'm talking about Isaacson's views on categoricity as extremely important. I totally agree with the mathematics of what you're saying. You can do nonstandard analysis without categoricity, and that is how it's done, because we do it in ZFC. And we do it with one hyperreal field, and we don't really care which one it is, even though they're not isomorphic. But nevertheless, I view it as a sort of core mathematical demand that all of our fundamental structures should have categorical accounts, and that's the part where I'm saying it's essential, it's essential for the meaning and the reference of our mathematical activity.
The reference of the terms in our mathematical activity and the meaning of these structures: I'm saying the practice of providing categorical accounts of our core structures is vital. And I'm taking the Isaacson-style argument to rely on that part, so not just on the practice of non-standard analysis, which I agree does not require categoricity, but rather on this philosophical demand for categoricity in all our fundamental structures.
One could ask: do you need categoricity of the real numbers to do real analysis? Suppose that there were, in some sense, multiple different real number systems. As there are, for example, in intuitionistic logic: they have the Dedekind reals and the Cauchy reals and so on, and they're not the same, and the field is a mess because of that. They have to be constantly saying what kind of real numbers they're talking about, which system they're in, and so on, when they're doing their arguments. So the fact that we have categoricity for the real numbers is extremely helpful for our mathematical understanding of what's going on in that subject. And I would claim the same would be true in calculus if it were founded more thoroughly on the hyperreal numbers. We would also want that kind of situation in a calculus founded on non-standard numbers.
Chris: Okay, go ahead, Marcus, please.
Marcus: Well, people had some grasp of the rational numbers before they started thinking of them as the smallest field containing the integers. And the same with the real numbers: people were operating with them and had some familiarity with them before they were conceived of as the completion of the rationals. Really, all that came in with Dedekind. But the use and understanding of those number systems was already there before that. I've said it.
JDH: Sure, that's true, but you can't deny that the categoricity result for the real numbers is seen as a core, foundational result in real analysis. It's theorem 1 in, well, I don't know for a fact that it's theorem 1, but it's a core theorem at the beginning of any book on real analysis, proving that the reals are the unique complete ordered field. It's a core part of our understanding of what's going on in the subject. So even though it's possible to do real analysis without having that categoricity result, and you're totally right about that, the result provides a fundamental unity to the subject in a way that's extremely welcome to mathematicians. And I think that if we were doing a similar kind of investigation in calculus using the hyperreals, categoricity would serve a similar role. It's possible to do it without, I totally admit that, but knowing that you have a categorical structure would be extremely welcome and would be seen as a fundamentally important result in the subject.
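For reference, the categoricity result being appealed to here can be stated in one line; the symbolic rendering below is a standard one, not taken from the talk itself.

```latex
% Dedekind/Huntington categoricity of the real numbers: any two
% Dedekind-complete ordered fields are isomorphic, by a unique isomorphism.
\[
  \langle R_1, +, \cdot, 0, 1, < \rangle \;\cong\; \langle R_2, +, \cdot, 0, 1, < \rangle
  \quad\text{for any two complete ordered fields } R_1, R_2 .
\]
```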
Chris: So we have a lot of questions, so I'm going to skip the follow-up. Oh, sure, okay, sure. You're on my list somewhere. So, next up, it's Benedikt.
Benedikt: (partly garbled) So there's something promising for you about that position. Whereas, as I understand it, your argument for pluralism sort of takes it as a premise that we should take current set-theoretic practice quite seriously. So it seems like there's an asymmetry there: we shouldn't take the current practice more seriously than the thought experiment practice. Whereas the universist can say, well, the fact that (garbled) a very different conception of set doesn't look like a threat to mine. That's the question.
JDH: Great. Thank you. Well, I don't really take myself to have been arguing for pluralism in this talk.
Benedikt: (garbled)
JDH: Okay, that's fine. So I think you're right that the universist is not basing their views on current practice, but rather there's this sense that the concept of set is clear enough, that it has only one meaning, and that it gives rise to the one true set-theoretic universe; that's the kind of idea, I think, motivating those who have the universe view. Whereas if you're denying that, you need to explain what these different concepts of set are that might have given rise to the alternatives, right, for a set-theoretic pluralist? And one way of doing that is to look at the different set concepts that we've built in our current practice. We have forcing and so on, and inner models and so on. We've built all these different models, which are naturally interpreted as alternative set concepts. So it's not that you're being led to pluralism because of practice; it's rather that the current practice is providing you with this abundant array of alternative set concepts, and it seems like there's an abundance of alternative possible concepts of set, and each of them would give rise to its own set-theoretic universe.
Okay, and then when you combine it with this kind of historical thought experiment way of thinking, I suppose it just leads even more to alternative ways of thinking about what we might take as alternative set-theoretic universes or alternative concepts of set. So it's connected with looking at practice. My argument about the dream solution, on the other hand, is explicitly appealing to the kind of sociological phenomenon that you mentioned. Namely, I argued that we can't have a dream solution of CH, because we've already become familiar with the set concepts in which CH is true and the other ones in which it's false, and they both seem totally set-theoretic and completely reasonable. So if you have an axiom that implies not CH, then I can't accept it as a fundamental principle of sets, because I already know what it's like to live in a world where the opposite hypothesis holds. You know, I can't accept your assertion that only Brooklyn exists, because I've been on Fifth Avenue in Manhattan, and I know it exists, and it's a perfectly reasonable place. So that's the kind of sociological argument, right?
If you think about how the dream solution argument works in my thought experiment world, I think it's interesting to consider that, because in that thought experiment world they wouldn't have had that experience of the not CH worlds. They would have had CH as an axiom, and I argued explicitly that they would have looked at the not CH worlds as weird, because those are the worlds where you don't have the correct theory of the hyperreals, you have these weird multiple non-isomorphic hyperreal fields, which is bizarre in the same way that having multiple algebraic closures of Q is seen as weird. So they wouldn't accept the dream solution argument; they would have said, look, CH is true, because we use it to prove the hyperreal categoricity theorem. So in a sense, in that world, the dream solution method is successful, sort of inverted, though. They're not assuming an axiom to prove CH; rather, they have this strongly desired conclusion, the hyperreal categoricity theorem, that they want to prove, and the only way to prove it is with CH. But it's similar to adopting the dream solution.
Chris: Michael Collins.
Michael: Oh, thank you. I'm going to take you off on a slight tangent, if you don't mind, Joel.
JDH: Sure, that's fine.
Michael: I should warn everyone else: I come from pure algebra, and am not in any sense a logician. But I was fascinated by one comment, which in fact you just brought up in your last answer, about the algebraic closure of the rationals. Because what you said is that you need the axiom of choice to prove, essentially, uniqueness up to isomorphism.
JDH: Yes.
Michael: If you just took ZF and threw in the continuum hypothesis, but not the axiom of choice, what would the outcome be?
JDH: Oh, I see, that's interesting
Michael: Because if you could not prove it, or more importantly, if you could prove there are multiple algebraic closures, then that would imply the continuum hypothesis does not imply the axiom of choice. And this takes me back, and I declare my age in the process: Cohen's theorem was the big thing when I was an undergraduate.
JDH: Thank you for the question, that's quite interesting. If you don't have the axiom of choice and you're working in ZF, then you have to be a little bit more careful about what you mean by the continuum hypothesis, because statements that are equivalent in ZFC are not equivalent just in ZF. For example, one way of saying the continuum hypothesis is that the reals have size aleph one; this is one of the standard ways of saying CH in ZFC. But in ZF it's not equivalent to the statement that there are no infinities between the naturals and the reals; you cannot prove the equivalence. For example, the axiom of determinacy, which is incompatible with the axiom of choice, proves that there are no infinities between the natural numbers and the reals: every set of reals is either countable or equinumerous with the whole set of reals. But it doesn't prove that the reals have size aleph one, because the reals are not well-orderable under AD. So that's a situation where the continuum is not aleph one, and yet there are no infinities between the natural numbers and the real numbers. Now, if we take the strong form of CH, the assertion that the reals are well-orderable and of size aleph one, then I strongly believe you can prove the uniqueness of the algebraic closure of the rationals. I'd have to think more about it, but I suspect that's true. I'm sorry, I don't have proofs of either one of these, it's just a kind of intuition, but if you can well-order the reals, then I think you could use that information to provide the isomorphism that you want between the two algebraic closures of Q. But if you only have that there are no infinities between the natural numbers and the reals, then I don't really see how you would be able to prove it. In fact, maybe in Läuchli's model, in which there are multiple algebraic closures of Q, he even has CH in that form; I'd have to look it up. I don't know the details, but it's quite an interesting question.
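The distinction just drawn can be summarized as follows; the notation is standard, and the symbolic rendering is not from the talk itself.

```latex
% Two formulations of CH that ZFC proves equivalent, but ZF alone does not:
%   (weak)   every set of reals is countable or equinumerous with all of R
%   (strong) the reals are well-orderable and have size aleph_1
\[
  \text{(weak CH)}\ \ \forall A \subseteq \mathbb{R}\,\bigl(|A| \le \aleph_0 \ \lor\ |A| = |\mathbb{R}|\bigr)
  \qquad
  \text{(strong CH)}\ \ |\mathbb{R}| = \aleph_1
\]
% Under ZF + AD the weak form holds while the strong form fails,
% since AD implies that the reals are not well-orderable.
```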
Another related question is that the generalized continuum hypothesis implies the axiom of choice. So if you just have CH, you're not going to get any choice from it, unless you have the strong form; as I said, then you get choice at the level of the reals, because you can well-order them. But if you have the GCH, then this implies AC. So I don't know…
Go ahead.
Michael: Okay, thanks. I was just going to say thank you. This point in fact raises a large number of other interesting questions in the area of the relationship between the axiom of choice and the continuum hypothesis, or the lack of a relationship between them. I suppose I was being pedantic.
JDH: Great, thank you very much for the question. Alright, thank you.
Daniel Isaacson: Oh, okay. Thank you very much. I'm in agreement, as I indicated; I think it's a very exciting development. I would be inclined to want to go further than you do, and see it as a basis for taking the continuum hypothesis as true. But then one has to wonder about the situation when forcing came to be used. I mean, you might even say, well, it's lucky we didn't have that attitude, because then Cohen wouldn't have bothered to develop and find the forcing notion, and then we wouldn't have had all this wonderful mathematics. However, and I wonder what your thought about this is, I don't think that's the case. I think there is a long tradition, a long practice in mathematics, that when you find axioms for theories, you want to know whether they're independent. And you have that in set theory: the whole business of showing the axiom of replacement is independent from separation and so on. Those developments, and when choice can be proved and when it can't be, are very important mathematical developments. So I do think that it would still have been important to develop forcing as a means of establishing, as it turned out, that the continuum hypothesis is not provable from the other axioms. I don't think that counts against, and indeed it counts in favor of, taking the continuum hypothesis as an axiom. That was just a thought about it.
JDH: Yeah, that's really quite interesting. The way I understand your remarks, and I was trying to hint at it earlier: if you're a set-theoretic monist, which I take you to be, and you think that it's possible that CH is a fundamental axiom, then you have to think that it is a fundamental axiom, and I think that's basically the argument that you just gave us, right? That's a step too far for me, but I think your other remarks are really quite interesting. So, for example, another case would be the parallel postulate in geometry: we want to know, is it provable from the other axioms? And that's a case where, in asking whether it's possible in geometry that the parallel postulate is false, you're led thereby to non-Euclidean geometry, which exactly undermines the monist's perspective, because that development is exactly what gave rise to geometric pluralism, right? I mean, we have Euclidean geometry and the non-Euclidean geometries and all the others precisely because of that kind of activity of looking at whether the axiom was provable or not.
And so, you're right about how things would work in the thought experiment world, that the mathematicians would say, well, can we prove CH from the other axioms? And then they would develop forcing, and so on, and prove that, no, they can't. But then they would discover these other set-theoretic worlds where CH was false, and so on, and so maybe this would undermine the monist perspective in exactly the same way that it did in geometry.
And so maybe that's one development that would happen, or it's one way of making sense of that kind of activity. I think it's really quite interesting.
Isaacson: I mean, on that parallel, when I consider it: there was the huge question, for several thousand years, whether the parallel postulate is provable on the basis of the other axioms. The demonstration that it wasn't came later, after the development of non-Euclidean geometry, but we can reorder the developments as we like.
JDH: Right.
Isaacson: Conceptually, I mean. But in the end, you get a situation in which we see that the parallel postulate is true in the geometry of the plane, and then we have these other geometries. And I would suppose, then, one could say that the situation in ZFC would be that by the hyperreal categoricity theorem we see that the continuum hypothesis is a true axiom for that mathematics, and then you can consider other mathematics, the mathematics of other structures, and, as you do in geometry, use negations of the parallel postulate for those theories, as you can in set theory. It does seem to me unclear, or maybe it's just ignorance, but I don't think that in set theory we have the situation of the non-continuum-hypothesis set theories having the conceptual clarity that the non-Euclidean geometries have. And indeed, not the applications either.
JDH: Well, we just have many more models. Yeah. There's comparatively few geometries.
Isaacson: Yes, I mean, which is a plus.
Room: (laughter)
JDH: So it's easier to be a pluralist if there's just very few worlds, which is closer to universism in a way.
Isaacson: Well, there's a conceptual clarity in that…
JDH: Right, right, I agree. About this idea that we should always be seeking independence for our axioms: I think this is often mentioned, but sometimes exaggerated, because it's not always true. We have many theories where the axioms aren't independent, including some of the standard foundational theories, like Peano arithmetic. The induction axioms are not independent; they rather form a hierarchy that is increasing in strength. Sigma-2 induction implies sigma-1 induction, and is implied by sigma-3 induction. So there's huge redundancy in the standard axiomatization of PA, and the same thing happens in ZFC with the replacement axiom and the separation axiom and so on, if you think of sigma-n separation. It's a hierarchy, a scheme of axioms that are getting stronger and stronger, and therefore are not independent. You could make them independent by, say, subtracting off the earlier part from each axiom, replacing it with the assertion that either the earlier axiom fails or we have this stronger thing. If you do that with all of them, then I think they become independent, technically, but that seems extremely artificial; the natural way to look at it is rather as a non-independent, increasing hierarchy of strength. And so I don't agree that we should always seek this kind of independence in our fundamental axioms, because the main foundational theories don't have that feature.
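The hierarchy being described is the familiar stratification of PA by the complexity of the induction formulas; in the standard notation (not used in the talk itself):

```latex
% I\Sigma_n denotes the fragment of PA with induction restricted to \Sigma_n formulas.
% Each level proves the one below it, so the instances of the induction scheme
% are far from independent, and PA is the union of the hierarchy:
\[
  I\Sigma_1 \;\subsetneq\; I\Sigma_2 \;\subsetneq\; I\Sigma_3 \;\subsetneq\; \cdots,
  \qquad
  \mathrm{PA} \;=\; \bigcup_{n \in \mathbb{N}} I\Sigma_n .
\]
```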
Isaacson: I mean, independence of axioms as against instances of a schema is a different matter.
JDH: Oh, I see, if you think of the schema as one axiom. Yes.
Isaacson: And then it's not independent.
JDH: Yeah, that's true.
Isaacson: Okay, so thank you.
JDH: Sure.
From back of the room: (partly garbled) Uh, yeah, this is sort of spread across a bunch of different questions. I guess the worry is that it could turn out that P could have been indispensable to one mathematical practice, and not P indispensable to another practice. Now, I'm not saying that this is true for CH; there are some reasons why not, as far as I know, for CH. But I just wanted to ask: how powerful do you think this sort of argument can be, and what do you think is the best response to that worry?
JDH: Right. So that's very good. I think it's important; I would really like to have a similar thought experiment for not CH. That would be great, but I don't have one, and I don't have any argument that there isn't one. But I could imagine some kind of focus on the… let's see, I gave an argument in my multiverse paper about what I there called the power set size axiom. The argument was, let me see if I remember the details: people looking at the symmetric groups on sets of different cardinalities. For any set of a given cardinality, you can look at the group of permutations of its elements. If you have a bigger set, you get a bigger group, right? But it's not actually always bigger in cardinality, because it could be that 2 to the kappa equals 2 to the lambda even when kappa is smaller than lambda, and the size of the symmetric group is 2 to the kappa. So maybe you have some naive argument that the symmetric group on kappa is not isomorphic to the symmetric group on lambda when they're different, because they have different sizes, and so maybe someone is going to want the map taking each cardinal to the size of its power set to be injective. Right. So I can imagine maybe there's some kind of argument like that that's going to make us land with not CH. I don't know such an argument, but maybe there is one. And that would be a way of addressing this symmetry, and maybe it would be a rebuttal to the universe-view argument that this thought experiment is a philosophical proof of CH, right? If there were an equally compelling thought experiment landing you at not CH, then we would know that, well, look, we can either have a categorical account of the hyperreals, or we can have XYZ, whatever it is in that other explanation. But I don't know what it is.
So does this kind of dichotomy happen in mathematics? I don't know, I'm not sure, but I would love to find out if I could.
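The cardinal arithmetic behind the symmetric-group example is worth recording; these are standard facts, stated here in notation that was not part of the talk.

```latex
% For an infinite cardinal kappa, the full symmetric group has size 2^kappa:
\[
  |\mathrm{Sym}(\kappa)| \;=\; \kappa^{\kappa} \;=\; 2^{\kappa}.
\]
% So the naive expectation that strictly larger sets have strictly larger
% symmetric groups amounts to the strict monotonicity of the continuum
% function, the "power set size axiom":
\[
  \kappa < \lambda \;\Longrightarrow\; 2^{\kappa} < 2^{\lambda},
\]
% which is not a theorem of ZFC: it is consistent, for example, that
% 2^{\aleph_0} = 2^{\aleph_1}.
```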
Another question from the back: So, I think you've provided a very convincing story of how CH might have become a basic axiom. But I can imagine a second chapter in that history, where we lose CH again as an axiom, because some mathematicians are interested in a minimal axiomatization. And, as has been said before, you can develop analysis without CH either way. Let's say the criterion would be: our basic axiomatization of mathematics is the minimal one we need so that 99 percent of mathematicians can develop their own mathematics in the framework. Then in that second chapter of the history, we would lose CH again. And I think there's evidence for that in actual practice. Grothendieck wanted an inaccessible cardinal to develop algebraic geometry, but nowadays geometers just prefer not to assume that, and a more recent project, which is now used as a sort of standard reference for algebraic geometry, doesn't assume the existence of an inaccessible cardinal. So we would develop the theory with a minimal axiomatization. I guess my claim would be that any history, counterfactual or not, would converge to ZFC, assuming that the composition of mathematicians is roughly the same, that they care about roughly the same things. That's why we would lose CH but not the axiom of choice, because without choice the algebraists wouldn't have bases for their vector spaces.
JDH: That's a very good remark. I address a little bit of it in the paper, where I talk about exactly these issues, though I don't draw the conclusion that you draw. For example, when you think about ZFC in current mathematical practice, there are many pockets of communities of people working in much, much weaker systems. For example, in constructive mathematics they don't even use classical logic, right? And they're working out the nature of constructive mathematics, or homotopy type theory, or whatever; in these weaker systems they are working in the kind of weaker framework that you're talking about. And yet, for most of mathematics, ZFC is a kind of default foundation. So I would expect in my thought experiment world that, of course, people would also be working in the theory without CH, developing the consequences just of ZFC alone, or even much less, in the same way that constructive mathematicians today are working out the consequences of those weak foundations, or are using, you know, Kripke-Platek set theory or ZFC- or any of the other extremely weak theories that we have available. Of course those would be investigated.
I'm not quite with you that mathematics always converges to the minimal theory. I don't think that's true. I mean, it's obviously not true, because ZFC is the quite standard axiomatization, and it's far beyond what's needed. And even with this work you described about abandoning the universe axiom in algebraic geometry, I think it's still quite standard to talk about universes. It's all over MathOverflow, for example; it's really quite common, and people have questions about it, and so on. So it's not as if the universe axiom has been abandoned in that subject. People still use it and talk about it, even if the work that you mentioned doesn't happen to do so. But maybe it's also something like large cardinal set theory, where at any moment, if it is ever convenient or needed to assume a slightly larger large cardinal, you just go ahead and do it. That's the practice, and so I think if the algebraic geometers ever needed a universe, I would fully expect them to say, well, okay, let's give ourselves a universe, and then maybe another one on top of it. And that would be, I think, probably pretty standard practice. So I'm not sure it's correct that we always converge to the weakest theory. That's not what we observe in mathematical practice now, and I wouldn't expect it in my thought experiment.
Chris: Alex.
Alex: Thanks so much, Joel. I had a historical question. I want to know a bit more about why you assumed there was no… you imagine this world in which real analysis ends up looking a little bit different from Newton and Leibniz onwards, it's got the hyperreals, but then somehow there's no interference from that development with other areas of mathematics, and in set theory we still end up with ZFC. This is a sort of very naive historical question, but if we imagine this alternative, wouldn't there be a knock-on effect on the rest of mathematics? Would ZFC as we know it still have been developed? What do you think? I was expecting you to fill in that bit of the story, but you changed one thing quite radically, and the other thing is exactly the same, and you see no connections between them whatsoever.
JDH: Oh, I see. Yeah, that's a very good point. Well, I guess the Zermelo-like figure that I was imagining, who provided the hyperreal categoricity theorem, was using ZFC plus CH. So of course I was assuming that in the thought experiment world we would also have the categorical accounts of the reals, and so on, but that doesn't require CH; you can do it in ZFC. And so that kind of practice… In a sense, I mentioned that one can view ZFC as providing the successful theory of the real numbers, and you only needed CH as an extra bit in order to provide this categorical account also of the hyperreal numbers. So I'm not sure that there would be any kind of tension. It's just this add-on axiom, and the rest of the ZFC development, why wouldn't it be pretty similar?
Alex: I'm just wondering, I'm just thinking, you know, I really don't have an argument against it; I just want to know a little bit more about how the history would have gone, you know, the development of analytic geometry, the arithmetization of analysis, and so on. Those would obviously be quite different in the imaginary world, and yet they have no knock-on effect, and come the late 19th, early 20th century, we still get exactly the same theory. In a way, maybe you should be even more radical: why are you assuming ZFC is in there? Is it somehow not connected to the development?
JDH: Oh, I see. Or maybe we would have some other theory plus CH in there. Is that the idea?
Alex: Yeah, maybe it's something very different. Well, you know, I don't have an argument that we would end up somewhere different; I'm just wondering why you assume that we don't.
JDH: Right. Yes. Okay, I don't know if I have a very good answer, but one observation you might make is that if you have a robust conception of the hyperreal numbers, and you know that one way of constructing them is with ultrapowers, or if you have, say, a robust conception of the transfer principle, then this implies that there are ultrafilters. You can prove this from the transfer principle quite easily: you just take a non-standard integer, and then you say a set is large if its non-standard analogue contains that point, in other words, if it expresses a feature that this non-standard number has. This was an objection made by, um, who was one of the critics of nonstandard analysis that I've written on? I'm sorry, I'm not going to remember the name. (Added later: it was Connes.) One of the objections that mathematicians might make to nonstandard analysis is that it's giving you ultrafilters, which of course is a highly non-constructive thing. And so that suggests that any kind of robust account of the hyperreal numbers is going to be connected in this way with ultrafilters and other consequences of the axiom of choice. So those people would tend to look more favorably on the axiom of choice, for example, than otherwise. And so maybe in the thought experiment world, ultrafilters would play a much bigger role earlier on than they did in actual history, where they were a comparatively late construction. I mean, we didn't have the Łoś theorem until the 1950s or so, right? So this could have come much earlier if we had been better with ultrafilters from an earlier age. Maybe forcing would have been invented earlier, therefore, because it's also connected with ultrafilters in this way. But otherwise, I'm sorry, I don't have a very good answer to your question.
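The ultrafilter construction just described can be written out explicitly; this is a standard rendering, with notation not from the talk.

```latex
% Fix a nonstandard integer N in *N \setminus N (one exists by transfer together
% with the existence of the hyperreal extension), and declare a set of naturals
% to be "large" when its nonstandard extension contains N:
\[
  U \;=\; \{\, A \subseteq \mathbb{N} \;:\; N \in {}^{*}\!A \,\}.
\]
% Since {}^{*}(\mathbb{N}\setminus A) = {}^{*}\mathbb{N} \setminus {}^{*}\!A, exactly one of A
% and its complement lies in U, so U is an ultrafilter; and U is nonprincipal,
% because every finite A satisfies {}^{*}\!A = A, which omits the nonstandard N.
```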
Timothy Williamson: Yeah, so I'm just wondering how you're understanding what counts as one of these core mathematical theories for which the practice demands that we have categoricity. Because we don't have categoricity for set theory itself, so presumably you wouldn't count set theory as a core mathematical theory in that sense. Is there some independent way of saying what disqualifies it?
JDH: Great. Well, I think we do have categoricity for set theory in the same style as the Huntington result and the Dedekind result, and so on, because, first of all, we have quasi-categoricity coming from Zermelo.
Tim: Yeah, but I mean, that's still short of categoricity.
JDH: But you can prove a fully categorical account; in one of my papers I do so. Zermelo only gets quasi-categoricity because he's only talking about subsets of the elements of his models, instead of subclasses of the model itself, which would be the full second-order logic. He's not really using full second-order logic over his models; the second-order separation axiom that Zermelo needs to prove his quasi-categoricity says: take a set that I have, and then I have all the actual subsets of it. But he's not saying that all the classes that are possible over the universe are subject to replacement. Because if he did that, for example, you might want to say that every subclass of the universe is a set, I mean, that every subclass of the universe is realized as a set in the universe, and this provides a fully categorical account. It's not ZFC2; it's a strengthening of it, by saying that not only are we making sure to have the full width, but we're also having the full height, right? And then you get a fully categorical account. And it's exactly similar, a set-theoretic analogue of the categorical accounts of the reals, and so on. So I'm not sure I agree that we don't have a categorical account in set theory. I think we do.
The usual way of stating it, you're right, is just the quasi-categoricity, but there is a way of improving that to get a fully categorical theory.
Tim: But then it's maybe surprising that that isn't a more celebrated result, if this is a requirement of mathematical practice which is in fact met, but hardly anybody knows that it is.
JDH: Yeah. It's similar to a paper Tony Martin wrote, what is it called? If someone could help me out, it's about the uniqueness of the concept of set. He wrote an article in Topoi, and the argument given there is basically the categoricity argument that I just described. He doesn't state it that way, but he's talking about the uniqueness of the concept of set, and you can see in that phrase already that it's something about categoricity.
So, I agree, I think it should be more celebrated. Robin Solberg and I prove this in our paper on categorical cardinals, which grew out of his dissertation. But I don't know why it's not more celebrated. I don't know why.
Tim: And why people don't worry about it.
JDH: Somehow, maybe it's connected with this sort of upwardly extensible idea of set theory. It's maybe incompatible with reflection ideas, which is part of what Robin and I write about, the tension between reflection and categoricity, right? Reflection says anything that's true in the whole universe is already true, somehow, along the way, in a fragment of the universe. But of course, if you think that you should have a categorical account of the universe, then you're going to be denying reflection, right? Because you want your account to be true only of the whole universe and not of any fragment, since that fragment isn't the whole thing. So there's this tension between these set-theoretic ideas of reflection and categoricity, and it's connected with the reason we like the quasi-categoricity and maybe don't talk so much about the full categoricity: the fully categorical account must necessarily use concepts to which reflection does not apply. But we definitely want reflection, so maybe that's part of the explanation.
Chris: Well, thanks very much, Joel, for a very interesting talk.
(applause)
Chris: There's going to be drinks at the Royal Oak and then dinner at the Giggly Squid; if anyone wants to join, just meet us outside.
“The answers provided by PD are better and more uniform and so forth than the corresponding answers provided by V=L, which leads to a vision in contrast of set theory as the land of counterexamples and bad news, while in PD so many things work smoothly.”
This aesthetic judgment may be good, but I don’t know because I have no idea what you mean by “smoothly” vs “bad news”.
As a third alternative, if you assume a Real Valued Measurable Cardinal, do the answers to these questions in descriptive set theory come out more like PD or more like V=L, and how do they compare according to the aesthetic criteria you are using?
Thanks for this. Not a pluralism question: Given Prof. Woodin's more recent philosophically oriented lectures (i.e. justifying additional axioms, https://youtu.be/WaSBt0RZBRY?si=VUHT2IRGcRT8R0tj) and the comments on the usefulness of ZFC+PD in closing off second-order PA, how does that fit with the categoricity you talk of? I think that a world of ZFC+PD+CH is possible (i.e. PD is compatible with CH) but ZF+AD is not… is the process to find the point (in V) at which CH remains compatible (or independent, and thus able to be assumed as an axiom) while also retaining the benefits something like PD provides (in its own usefulness with respect to PA)… akin to Woodin's own Ultimate L?