Fair Chess and Simultaneous Games
The game of chess is famously flawed by the first-move advantage: the player who moves first has a significant edge. Let's fix it and make it fair.
In my previous post, I described an approach for extending multi-stage programming (MSP) to infinity and beyond. We start with a simple computing “machine” M[0] such as the lambda calculus. Each machine M[a] has a successor machine M[a+1] which is capable of building and reasoning about programs from M[a], yielding towers of nested machines. From there, we note that MSP itself can’t be implemented in any such finite M[n], but must exist in some limit machine M[ω], yielding transfinite towers of machines M[a] for each ordinal a.
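To make the tower a little more concrete, here is a minimal sketch of how a successor machine can embed the one below it: terms of M[a+1] get constructors for building and inspecting terms of M[a]. This is my own illustration rather than the post's formalism, and the names Base, Machine, Quote, and Splice are hypothetical.

```haskell
-- M[0]: a bare lambda calculus, with nothing below it to quote.
data Base
  = BVar String
  | BLam String Base
  | BApp Base Base

-- A successor machine: lambda calculus plus the ability to build
-- (Quote) and manipulate (Splice) programs of the machine below it.
data Machine lower
  = Var String
  | Lam String (Machine lower)
  | App (Machine lower) (Machine lower)
  | Quote lower                 -- a program of the lower machine, held as data
  | Splice (Machine lower)      -- an expression expected to produce such a program

-- A finite tower of machines; the limit M[ω] is what these posts go on to build.
type M0 = Base
type M1 = Machine M0
type M2 = Machine M1
```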
In Part 1 I attempted to give some intuition for the process of iterating reflection to infinity and beyond: any time we can iterate reflection infinitely, we can reflect on this infinite process. In this post, I’ll formalize that notion in a simple programming language; I recommend reading Part 1 before this one. Code is linked at the bottom.
In the beginning, I wrote blog posts.
Once I ran out of ideas, I started to write blog posts about my experience writing blog posts. These were my first meta-posts.
As my ego inflated ever larger, I began to post about my meta-posts.
…
My last two posts were about ordinals and pure lists, and they got a combined whopping 11 claps, so I’m selling out and writing a post about both. In particular, this post is about a data structure I discovered for lists of transfinite length. A repo is linked at the bottom.
Linked lists are simple and ubiquitous. When used as a stack, they allow for constant time pushing and popping. Furthermore, they are purely functional: multiple processes can branch off of a linked list and push their own items without stepping on each other’s toes, so long as they avoid mutating the list. However, indexing into a list is slow: retrieving the i’th element of a list requires i “hops” along the items. While indexing isn’t technically required of a stack, it is often desired. (For example, when implementing an interpreter with De Bruijn indices, variable lookup requires indexing into the stack representing the environment.) As the title suggests, it turns out that we can improve upon lists to achieve O(log(i)) indexing without forfeiting purity and O(1) push/pop. I call them rainbow lists.
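For reference, here is the baseline being improved: an ordinary cons list used as a purely functional stack, where push and pop are O(1) but looking up the i’th element walks i cells. This sketch is not the rainbow list structure itself (that lives in the linked repo); it just pins down the interface and the cost being attacked.

```haskell
-- Ordinary cons lists as a purely functional stack.

push :: a -> [a] -> [a]
push = (:)                        -- O(1); the old list is shared as the tail

pop :: [a] -> Maybe (a, [a])
pop []       = Nothing
pop (x : xs) = Just (x, xs)       -- O(1); the tail is the previous stack

index :: Int -> [a] -> Maybe a
index _ []       = Nothing
index 0 (x : _)  = Just x
index i (_ : xs) = index (i - 1) xs   -- i hops to reach element i
```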
Let’s say we’ve defined a function f from ordinals to ordinals and want to plot it like we might plot a function from integers to integers. We want to line up our ordinals on the x- and y-axes, and plot a point at (x, f(x)) for each ordinal x. Well, where should we actually put those ordinals on the axes? This simple question is the topic of this post.