The Clarkkkkson vs. the xkcd Number

I just stumbled upon this webpage in which some kid (years ago, presumably in high school) who uses odd slang defined a huge number by putting various operators together and making recursive call after recursive call. He supposes that this is the biggest number anyone’s ever bothered to concisely define. My intuition is that A(g64, g64) (feel free to call it the xkcd number) is bigger. However, intuition is completely useless in this kind of question.

The Clarkkkkson defined:

(And the xkcd number is the result of the Ackermann function with Graham’s Number as both the arguments, as defined in this comic.)
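For concreteness, here is a minimal Python sketch of the two-argument Ackermann function the comic refers to (hopelessly impractical for arguments anywhere near g64, of course):

```python
def A(m, n):
    # Ackermann-Peter function; only feasible for tiny inputs.
    # The xkcd number A(g64, g64) is far beyond any computation.
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(A(2, 3))  # 9
print(A(3, 3))  # 61
```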

Anyone want to take a crack at setting up some correspondences and demonstrating which is bigger — The Clarkkkkson or the xkcd number?

168 thoughts on “The Clarkkkkson vs. the xkcd Number”

  1. Umm, hate to be a bother, but what is the actual xkcd number? Rather, can you define it in a slightly more precise way, or link to something that can?


  2. If you don’t know what Graham’s number is or what the Ackermann function is, you can use Wikipedia (like I did) to learn what they are.

    I feel like the xkcd number would be bigger, but I think to really be sure, we should evaluate both of them and compare. Now if we can just get the attention of Chuck Norris….


  3. I mean, the first thing seems like it would be just expressing the hyper function thingy in Knuth arrow notation, then you can draw a relation between the first steps of the Clarkkkkson and Graham’s Number. I am guessing the Clarkkkkson is bigger than Graham’s Number, but it would be a starting point.


  4. Wikipedia claims that Ackermann(m, n) is hyper(2, m, n + 3) − 3.
    Wikipedia also claims that Graham’s number is:
    Define f(n) = hyper(3, n + 2, 3) = 3→3→n; then, using functional powers, G = f^64(4).
    So we could build a moderately large, yet feasible, tree of hyper operators to define the xkcd number in terms of hyper operators.

    Wikipedia’s hyper operator is the same as Clarkson’s hypf. I suspect they’ve both been reading the same primary sources.

    Clarkson’s “ck” function builds trees of hyper operators, so it could easily build a bigger tree – note I didn’t say bigger number – than the xkcd number.

    However, K, which is the starting point of the actual “number” that Clarkson wanted to define, is a function of time. Therefore “the clarkkkkson” is also a function of time.

    What does it mean for a constant to be compared in size to a function of time? They’re incomparable. And if you pinned down a date and asked “Is the Clarkkkkson of that date larger or smaller than the xkcd number?” I still wouldn’t know, because though the clarkkkkson undoubtedly has a larger tree, it’s not the size of the hyper operator tree, it’s the shape that matters.
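    A hedged sketch of those two Wikipedia identities in Python (assuming the usual right-associated hyper operator; only tiny arguments terminate):

```python
import sys
sys.setrecursionlimit(100000)

def hyper(a, n, b):
    # hyper(a,0,b)=b+1, hyper(a,1,b)=a+b, hyper(a,2,b)=a*b, hyper(a,3,b)=a**b,
    # and each higher level iterates the one below it.
    if n == 0:
        return b + 1
    if n == 1:
        return a + b
    if n == 2:
        return a * b
    if n == 3:
        return a ** b
    if b == 0:
        return 1
    return hyper(a, n - 1, hyper(a, n, b - 1))

def ackermann(m, n):
    # The Wikipedia identity quoted above: A(m, n) = hyper(2, m, n+3) - 3.
    return hyper(2, m, n + 3) - 3

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```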


  5. How about if you’re feeling bad about it, we can just say (and I’m using _’s for subscripts, because I don’t know a cleaner way to do it) A(g_64, g_64) is xkcd_1, A(g_g_64, g_g_64) is xkcd_2, and so on.

    And then if you’re feeling extra silly, we can throw in xkcd_xkcd_1, or xkcd_xkcd_xkcd_1.

    It’s fun to pile ridiculous functions on top of each other.


  6. Can I also just note that Ackermann also came up with my favorite sequence, the Ackermann numbers? I love a sequence that goes 1, 4, a number incomprehensibly vast.


  7. Johnicholas Hines: Yeah, I was assuming we were just using K from some particular time around when this was written. I’m happy with anything between a googol and 10^googol. If the answer depends on K that sensitively, we can always just look for the dividing line.


  8. The creator of the Clarkkkkson number claims:

    …you’re not allowed to use a previously defined number, unless you defined it. And you didn’t. So you can’t.

    So perhaps the xkcd number being defined in terms of Graham’s number is against some kind of rules? 😛


  9. Well, how do you define “defining a number”?

    If in doubt, I can say I just use G+1 as my number, and that’s my very own defined large number.
    If I am not allowed to do this, nothing is possible, because neither of us has defined any set of numbers, such as the complex numbers, from scratch. Thus, we all build on predefined numbers, all the way down.


  10. Since The Clarkkkkson is a function of time and the xkcd Number is a constant, and since the question is already pretty hard simply because of the sheer size of the numbers, why don’t we change the original question to:

    For what moment in time does The Clarkkkkson equal the xkcd Number?

    Also, I think it’s interesting a question can become this hard simply because it’s about really big numbers.

    There’s also



  11. What better input for the fastest-growing mathematical function than the largest number ever seriously used in mathematics?

    Whichever one is larger, it’s clear that the xkcd number has more digits than the number of particles in Graham’s number of universes! Now that’s some distinction.

    @Martijn: It’s probable that they will never be equal, and at one point The Clarkkkkson will leap dramatically in size to surpass the xkcd number. The problem is that this might have already happened, or perhaps might not happen until after the heat death of the universe. We’re not sure which.


  12. But if one can generalize every single function the Clarkkkkson uses to be defined on the real numbers, including the number of recursions, it’s possible to render the Clarkkkkson function continuous and thus (theoretically) to know at what point (in the domain of positive reals) the xkcd number is equal to it. If it’s continuous at every point, and it diverges as t approaches infinity, the IVT means that it must equal the xkcd number at some point. Right? BTW, for the number of recursions to be quadratic, say C = (5/4)F, where (1)G = G, define it maybe to be C(C(C(C(x)))) = F(F(F(F(F(x))))), and then define the real arguments to be the limit of a convergent sequence of quadratic arguments. The gamma function provides an intuitive answer for generalizing the factorials, and for kappa squaring, one could use the obvious K(0)^(2^t) = K(t), etc.

    DISCLAIMER: I could be totally wrong. Don’t hate the player, hate the name.


  13. It’s nice to know that I’m not the only one in the world who creates floor tile rules… I seriously thought I was the only one


  14. Uhh, I meant to say rational where I said quadratic. I’ve been reading a lot of math recently and so I thought Q. The rest is history.


  15. From what I read on that page it looks like the Clarkkkkson number is continuously growing. It uses kappa, which is a number that has been squared every day for 10 or so years, right?
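    A back-of-the-envelope sketch of what daily squaring does, assuming (purely for illustration; the site’s actual kappa starts from a different seed) that the number starts at 2: after d days it equals 2^(2^d), and ten years in, even the digit count is itself a number with over a thousand digits.

```python
# If a number starts at 2 and is squared once a day, after d days it
# equals 2**(2**d).  (Assumed seed value 2, for illustration only.)
d = 3650  # roughly ten years of daily squarings
# Integer arithmetic to dodge float overflow: log10(2) ~ 30103/100000,
# so the decimal digit count of 2**(2**d) is about 2**d * 30103 // 100000.
digit_count = (2 ** d) * 30103 // 100000
print(len(str(digit_count)))  # 1099: the digit count itself has 1099 digits
```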


  16. After reading the comic the other day I decided to read up on the Ackermann function. I realized what the problem was with it when I got a stack overflow error with A(4,3). I had gone past the maximum recursion depth of 16000 (I forget the exact figure, but it was in LispWorks Pro).

    That said, I would have to suggest that the bigger number may not be A(G64,G64), but the number of recursions that are needed to solve A(G64,G64).

    I would have to presume that no matter how fast the return values of the Ackermann function increase, the number of recursions needed to arrive at the answer will be much higher.

    The really tricky thing here would be to figure out how to find the exact number of recursive calls it would take to solve any particular input to the Ackermann function without running it and keeping a count of how many times the function gets called.
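    A quick sketch of counting those calls for small inputs (the helper and its counter argument are hypothetical names, not anything from the site; the counts blow up even faster than the values):

```python
def ackermann_with_count(m, n, counter):
    # counter[0] tallies how many times the function is entered.
    counter[0] += 1
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann_with_count(m - 1, 1, counter)
    return ackermann_with_count(
        m - 1, ackermann_with_count(m, n - 1, counter), counter)

c = [0]
print(ackermann_with_count(2, 3, c), c[0])  # 9 44
```

    As for predicting the count without running it: the number of calls satisfies its own recurrence in m and n, so in principle it can be analyzed the same way the values are.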


  17. Can’t you, you know, multiply them and add one? I think that would be bigger. It would also be a lot easier.


  18. @Martijn: using dynamic programming to help compute the Ackermann function is kind of like using a crowbar as a lever for picking up the Himalayas rather than just picking them up. We’ll never, ever, ever be able to compute A(4, 3).


  19. @Jeff: Actually, using dynamic programming is more like creating a container an order of magnitude smaller than the Himalayas, and then creating a scale model in the container ;). However, I really don’t think that the computation of A(4,3) is provably equivalent to the halting problem (I could be wrong), so it is possible to compute it, possibly on a quantum computer.


  20. Shaun: When stack overflow gets you down, it’s time to remove recursion.

    I agree with the previous posters that A(4,3) isn’t going to work on PCs unless you have a large-number library to assist… but recursion removal is a handy tool to have at your disposal.

    If the stack can’t handle your recursive calls, make your own dynamic stack with a linked list and just let your function iterate, pushing previous states on to your stack and popping them off when necessary. Then you’re limited not by OS stack size, but by available physical & virtual memory.
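    A minimal sketch of that transformation for the Ackermann function: the pending outer calls live in an ordinary Python list instead of the call stack, so memory is the only limit.

```python
def ackermann_iter(m, n):
    # Explicit-stack version: each entry on the stack is a pending "m"
    # waiting for the current n to be resolved.  Limited by available
    # memory rather than the interpreter's call-stack depth.
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1
        elif n == 0:
            stack.append(m - 1)
            n = 1
        else:
            stack.append(m - 1)
            stack.append(m)
            n -= 1
    return n

print(ackermann_iter(3, 3))  # 61
```

    (It still won’t get you A(4,3), of course; the stack just moves from the OS to the heap.)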

    This guy explains it in some of his popular books:


  21. Well, it’d certainly help if the ck(…) function were well-defined, because our good friend… you know… doesn’t define it consistently.

    Here’s definition one:
    … but that isn’t all. Another function is used. The Clarkkkkson function ck() is an extension of the hyperfactorial function. It goes like this:

    ck(class, operator, number, repeats)

    The repeats part means how many times the answer is fed back into the hypf() function, for example:

    ck(1,2,4,2) =
    hypf(1,2,4) = 4! * 3! * 2! = 288
    hypf(1,2,288) = 288! * 287! * 286! * … * 3! * 2! = ??????
    What we get from this is that ck(c, o, n, r) is defined as:

    hypf(c, o, n) — if r = 1, or
    ck(c, o, hypf(c, o, n), r – 1) — if r > 1
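    Taking that first reading at face value, a sketch in Python. Only the worked example’s case is implemented (class 1 with operator 2, i.e. a product of ordinary factorials); everything else on the page is out of computational reach anyway.

```python
import math

def hypf(c, o, n):
    # Sketch of the example case only: class c=1 (plain factorial)
    # combined with operator o=2 (multiplication), matching the page's
    # hypf(1,2,4) = 4! * 3! * 2! = 288.
    assert c == 1 and o == 2, "only the worked example's case is implemented"
    result = 1
    for k in range(2, n + 1):
        result *= math.factorial(k)
    return result

def ck(c, o, n, r):
    # First reading of the definition: feed the answer back in as the
    # "number" argument, r times in total.
    if r == 1:
        return hypf(c, o, n)
    return ck(c, o, hypf(c, o, n), r - 1)

print(ck(1, 2, 4, 1))  # 288
```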

    However, then he describes it another way:
    The 288 from the first hypf() is fed into the second hypf(). Only two hypf()’s are used, because we are only repeating twice. If we went even to three repeats, the answer would be uncomputable (probably). So get ready for the big one. The Clarkkkkson is worked out like this:

    ck(K,K,K,K) = A_1

    The latter of the two is, of course, CLEARLY bigger, and so that’s probably what he was truly driving at. But if this guy were a serious mathematician at all, he would have caught that error in logic. So in truth, I’m not certain even he knows what he’s talking about, so whether his number is the “largest in the world” or not… well, I’m betting against it, especially since the guy hasn’t touched his homework since 1998. 😉


  22. Huh, it cut off my results at some point — Darn HTML markup throwing a comment in there:

    Anyway, the second definition is ck(c, o, n, r) =
    hypf(c, o, n) — if r = 1, or
    ck(hypf(c, o, n), hypf(c, o, n), hypf(c, o, n), r – 1) — if r > 1


  23. So the definition of the function gets a little messed up at the end when Clarkson says “ck(K,K,K,K)=A_1, ck(A_1,A_1,A_1,A_1)=A_2, … until you get A_K”. What he means to say is clearly “ck(K,K,K,K)=A_K where A_1=hypf(K,K,K), A_2=hypf(K,K,A_1), etc”

    Someone should let the poor kid know he fucked it up at the finish.

    Anyhow, assuming what he meant for it to mean, A(g_64,g_64) is still vastly larger. The general idea to show this is just upper-bounding (terribly, but still totally good enough) ck(K,K,K,K)

    The constant that he uses, K, is quite small compared to the numbers we’re looking at. K


  24. (ick. using gt and lt in html… also, it looks like Missing Link beat me to some of this. Including the HTML screwing things up thing.)

    The constant that he uses, K, is quite small compared to the numbers we’re looking at. K < 2^(2^3650) (which it will roughly equal at the ten year mark).
    Order-n hyperfactorial is strictly smaller than order-n hyper (so k(!n) < k(^n)k), which means that hypf(p,n,k) < (k(^p)k) (^n+1) k
    which itself is much much less than k (^n+p+1) k
    which equals k (^n+p+2) 2

    so for ck(K,K,K,K):
    A_1 << K (^2K+2) 2
    A_2 << (K (^2K+2) 2) (^2K+2) 2


  25. (comment deletion would be a handy trick)

    so for ck(K,K,K,K):
    A_1 << K (^2K+2) 2
    A_2 << (K (^2K+2) 2) (^2K+2) 2 < K (^4K+5) 2
    so A_n << K (^[2^n * (K+3)]) 2

    ck(K,K,K,K), then, is roughly a [K * 2^K]-order hyper operation, so less than a 2^2^2^2^12 order hyper operation, which is, again, less than a (2(^8)2)-order hyper operation.

    A(g_64,g_64) looks like taking a g_64-order, so 3(^64)3-order, hyper.


  26. My middle school had these hideous brown tiles… but in various places, broken tiles had been replaced with hideous tan tiles. Naturally, the tan tiles were laser turrets that made the rows and columns of tiles in all four directions, out to the next wall, result in death. There weren’t enough broken tiles to completely invalidate entire hallways, so you just had to remember which columns were safe to walk along, and then be sure to step over the invalidated rows.


  27. The Clarkkkkson site uses the lower hyper operator, so I’m not even sure if this number is as big as Graham’s. The class-n factorial is pathetic compared to the hyper operator. You can bound it above with lots of room to move by realizing n!


  28. Hmm… my comment got cut off as soon as I used a less than sign. Will this work now?



  29. Both Clarkkkkson’s stuff and the Ackermann function and Conway chained arrow notation are nothing more than recursion, and I would say the last two ways of getting large numbers have much more style and scientific prestige than the first one, thus making the xkcd number more prestigious. What would be interesting to do is to try to write the xkcd number in Conway notation (find bounds, as it won’t be representable exactly).

    And for large numbers really used in solving mathematical problems, take a look for example at Skewes’ number. Those are the articles to read regarding these questions.


  30. Also notice that in the clarkkkkson website, the author incorrectly defines 4^^5 = 1.34 * 10^154, as he uses the incorrect notation for repeated exponentiation, that is:

    (((4^4)^4)^4)^4 = 4^256

    where the correct version is

    4^(4^(4^(4^4))) = 4^(4^(4^256))

    which of course is vastly bigger.
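    A tiny illustration with towers of 4s. One level down the gap is already visible, and, neatly, the right-associated tower of just three 4s equals the page’s left-associated tower of five (4^256, the 1.34 × 10^154 figure):

```python
left_three = (4 ** 4) ** 4               # ((4^4)^4) = 4^16
right_three = 4 ** (4 ** 4)              # 4^(4^4) = 4^256
left_five = (((4 ** 4) ** 4) ** 4) ** 4  # the page's "4^^5" = 4^256

print(left_three)                # 4294967296
print(right_three == left_five)  # True
print(len(str(right_three)))     # 155  (i.e. about 1.34e154)
```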

    See for a nicely formatted explanation!

    I’m an applied mathematician by trade so I’m not so used to the concept of big numbers, but I’m intrigued by this problem and will look into it, much to the detriment of the actual real work I should be doing! 🙂


  31. Way too much notational abuse going on; there’s a reason mathematicians use the function notation at times. Which would be why he screwed up so many calculations: the author is confused by his own abuse of notation.

    A grasp of recursive definitions would go a long way to clearing up the abuse.


  32. Notational abuse aside, it would be nice if we could compromise.

    It seems as if A(clarKKKKson,clarKKKKson) is a nice size.


  33. A(Clarkkkkson, Clarkkkkson)? While we all know the Ackermann function grows excessively fast, and this number is vastly larger than the Clarkkkkson, the Clarkkkkson is so large that this is relatively much like adding 1 to a googolplex. You can’t really tell the difference between Clarkkkkson and A(Clarkkkkson, Clarkkkkson), just as you can’t really tell the difference between 10^10^100 and 10^10^100 + 1 (provided of course you’re given a less obvious representation of the numbers :P).

    And about the “incorrect notation,” there is absolutely nothing incorrect about it. The website is simply using a different definition (the “lesser,” “lower,” or “left-associated” hyper operator) than the more common one you are familiar with. See the Wikipedia article:

    But yeah, the Clarkkkkson is way bigger:


  34. I like Nicholas’ idea, although I was thinking of calling the busy beaver function with A(clarKKKKson, clarKKKKson) as the argument. That would throw everything out of whack.

    I asked one of my friends who is currently in college, instead of my not-currently-in-college status. Perhaps he will relay my question to his teachers, and we’ll get an answer.

    Oh, I figured out that K is 10^63(^3028)2.


  35. Perhaps I should have mentioned that K is only that for the date of 1/1/2007. It grows.


  36. Meh, they’re both small. Almost all integers are bigger than either of them. 🙂


  37. Ok, touché Lothar, but I still like the standard notation better 🙂 And well done on the Clarkkkkson proof… you unknowingly just saved me a few enjoyable but ultimately wasted hours trying to figure out which was larger (especially as I was going to first try finding an *upper* bound on the Clarkkkkson and prove it’s lower than xkcd).

    I’m presuming your proof means the Clarkkkkson number is bigger even for a hyper operator defined the “other way”, which means if you redefine it the right-to-left way (and maybe make a few more changes, for example redefining the higher-order factorial as some higher order hyper… I mean come on, everyone knows n!


  38. Yes, it’s bigger even though it’s defined the lesser way. This is of course the part my proof glossed over completely, so it’s not rigorous, but you could even assume lowerhyper(a, b, c) ~ HigherHyper(a, b/g_64, c) and the proof still stands. From my findings, though, I think lowerhyper(a, b, c) is generally between HigherHyper(a, b-2, c) and HigherHyper(a, b-1, c) for b greater than 4, though I’m nowhere near sure. This is all from finding an upper bound to lowerhyper(a, b, c) since I too started by trying to prove C < xkcd. Once I realized that xkcd isn’t actually significantly larger than Graham’s number (oh man, what an understatement! But in some form of hyper-relative sense it’s true – it’s between g_65 and g_66, while Clarkkkkson is greater than g_K [probably by leaps and bounds]), I started trying it the other way.
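    A small sketch of the two conventions for tiny arguments (nothing here proves the asymptotic claim above; it just shows the lower form falling behind immediately):

```python
def hyper(a, n, b):
    # Right-associated hyper (the common convention):
    # n=2 is multiplication, n=3 is exponentiation, higher levels iterate.
    if n == 2:
        return a * b
    if n == 3:
        return a ** b
    if b == 1:
        return a
    return hyper(a, n - 1, hyper(a, n, b - 1))

def lower(a, n, b):
    # Left-associated ("lower") hyper, the Clarkkkkson page's convention:
    # a (n) b = (a (n) (b-1)) (n-1) a.
    if n <= 3:
        return hyper(a, n, b)
    if b == 1:
        return a
    return hyper(lower(a, n, b - 1), n - 1, a)

print(lower(2, 4, 4), hyper(2, 4, 4))  # 256 65536
```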


  39. You know those SAT questions where you have to decide which of two numbers is bigger, column A or column B? I’d love to see this particular example on a test.

    Or did they take those out along with the analogies?


  40. I would place my bet that A(g_64,g_64) is far larger. The number of recursions necessary to compute the “Clarkson” function seems much smaller than the number necessary for the Ackermann.


  41. I agree. The Clarkson number K is computable. Maybe not by a household computer, but something can do it, I’m sure. xkcd, on the other hand, is itself based upon a number on the order of K, then run back through a function which grows impossibly large with numbers 5 or higher entered into either argument.


Comments are closed.