First, a brief history of state. In the “beginnings” of computers, there were two models of computation that are now quite famous: the lambda calculus and Turing machines. While these two models are equivalent in representational ability (Turing completeness), the Turing model became the more popular of the two. Why? I’m sure there are quite a lot of reasons, but the one I wish to emphasize now is that Turing machines are fundamentally built on the notion of program state. The infinite tape acts as read-write memory, which embodies the idea that an algorithm is “instructions that act on data.” Under similar principles, we eventually get things such as assembly language and C, fundamentally grounded in the idea that computers are machines with bits of memory that are flipped on and off by code. The rhythm of changing data with a bunch of instructions makes this model easy for most people to comprehend: just yell orders at your computer and it’ll give you results.

However, this understanding of how an algorithm works is not the proper one to use. Suppose you travel to a new universe. If you bring a computer with you, it will no longer function properly, since the laws of physics no longer necessarily apply. The data on your computer will become corrupted, and you won’t be able to run code. What will still work in this universe are mathematical axioms. The axioms tell you exactly what you are assuming, and those assumptions won’t change in any parallel universe. So if we base our programming language on mathematical axioms, it will continue to function in any universe, and the lambda calculus does just that. No wonder it’s called functional programming!

Now, I know you’re fully convinced and will go delete your C code base right away, but before you do, let’s discuss why we still want to reason about state. The answer comes down to one question: how do we execute a program in the real world? Even if we don’t see state in the lambda calculus, the piece of code itself is data and requires memory to store it. Even if we write the operations on paper, the paper itself is data. Since we live in the real world with energy consumption and other laws of physics, there is no way of getting around the notion of state. It’s an engineering issue rather than a coding problem, but it’s still quite an important one.

Here’s an interesting example of a “computer” I thought of. Consider a marble track. Each track attachment is simply one of the combinator calculus operators or something, so we can express code by snapping tracks together. Then we can run a marble through the track, verifying the correctness of the proof we expressed. Can you find the “state” in this example? You might think of the position of the marble as the state. Or you might think of the track itself as a piece of state since it is a real-world resource. Either way, we are still stuck with reasoning about state.

Fundamentally, our definition of state comes from the fact that we want to store the program in memory. Memory in this case is the bits on the computer’s hard drive. How to represent the program in memory is an interesting problem, but we’ll treat it as an implementation detail. We have uploaded this one giant expression into the computer, and then this code steps forward to become another expression. This is modeled by the computer’s hard drive jumping from one state to another. How it does this jump is also an interesting problem, which we will abstract out by saying it’s just magic. So there we have it. It’s a real-world computer program.
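
To make the “expression stepping forward” picture concrete, here is a minimal sketch in Haskell (all names here are hypothetical, made up for this example): the program is just a data value, and running it means repeatedly rewriting that value into the next one.

```haskell
-- The program is data: a value of type Expr stored in memory.
data Expr
  = Lit Int
  | Add Expr Expr
  deriving Show

-- One step of evaluation: rewrite the expression into a slightly more
-- evaluated expression, the "jump from one state to another".
step :: Expr -> Maybe Expr
step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
step (Add a b) =
  case step a of
    Just a' -> Just (Add a' b)
    Nothing -> fmap (Add a) (step b)
step (Lit _) = Nothing  -- already a value, nothing left to do

-- Run to completion by stepping until no step applies.
run :: Expr -> Expr
run e = maybe e run (step e)

main :: IO ()
main = print (run (Add (Lit 1) (Add (Lit 2) (Lit 3))))  -- Lit 6
```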

Now you may be thinking: we store far more than just the program in memory. Let’s zoom in on this block of finite memory on our computer. We have the space allotted to hold the program code as it morphs, but we also have some extra space left over. We might as well use this extra space to store more information as an added feature in our language. As such, we can flip bits in this area on command, where flipping bits moves the computer from one state to another, no different from executing code normally. This is what is known as the heap. If we view the whole block of memory as our whole program, writing to memory is a functional programming process. It simply steps to a new program state where the memory is updated, much like the code updates during execution. We thus express our entire program as a pair <h, e>, where h is the heap and e is the program.
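
Here is a minimal sketch of the <h, e> idea in Haskell (again, the names are hypothetical): the whole machine state is a pair of a heap and the expression being run, and both heap reads and heap writes are just pure functions from one state to the next.

```haskell
import qualified Data.Map as Map

type Loc  = Int
type Heap = Map.Map Loc Int

-- A tiny language whose programs can read and write the heap.
data Expr
  = Write Loc Int Expr      -- write a value at a location, then continue
  | Read Loc (Int -> Expr)  -- read a location, then continue with the value
  | Done Int

-- The whole program is a pair <h, e>.
type State = (Heap, Expr)

-- Writing to memory just steps to a new state with an updated heap.
step :: State -> State
step (h, Write loc v rest) = (Map.insert loc v h, rest)
step (h, Read loc k)       = (h, k (Map.findWithDefault 0 loc h))
step s@(_, Done _)         = s

run :: State -> Int
run (_, Done n) = n
run s           = run (step s)

main :: IO ()
main = print (run (Map.empty, Write 0 42 (Read 0 Done)))  -- 42
```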

Now there are various interactions these two pieces of the state can have. The easiest to understand is when we can’t write to the heap at all. This means the heap stays empty, since we didn’t include this feature in our language. The other common one is arbitrary reads and writes. The heap can grow as you allocate new things (no longer just finite memory), things in the heap can change as you run the program, and you can look up data in the heap as much as you want. It does, however, make expressions difficult to analyze on their own, because the heap interacts very strongly with the expression as it runs.

I would like to explain another kind of heap interaction that is very nice to have: write once, read as many times as you want. Why would we want this? Well, it’s a way of directly encoding functional programming languages as a series of reads and writes to memory. It’s called immutable memory. As soon as you write to it, it stays there forever and no information is destroyed, which is the same property we want in functional programming.
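
Here is a minimal sketch of a write-once heap in Haskell (the names writeOnce and readLoc are made up for this example): every write allocates a fresh location, nothing is ever overwritten, and a location, once written, can be read as many times as you like.

```haskell
import qualified Data.Map as Map

type Loc    = Int
type Heap a = Map.Map Loc a

-- Writing always allocates a fresh location; existing entries never change.
writeOnce :: a -> Heap a -> (Loc, Heap a)
writeOnce v h = (loc, Map.insert loc v h)
  where loc = Map.size h

-- Reading is unrestricted: read as many times as you want.
readLoc :: Loc -> Heap a -> Maybe a
readLoc = Map.lookup

main :: IO ()
main = do
  let (l1, h1) = writeOnce "hello" Map.empty
      (l2, h2) = writeOnce "world" h1
  print (readLoc l1 h2, readLoc l2 h2, readLoc l1 h2)  -- repeated reads are fine
```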

This does have some drawbacks. For example, if we want to allocate an array and fill in the values one by one, we cannot do so. As soon as we instantiate the array, it stays that way forever. We cannot fill in the missing values later, as that would mean mutating the memory we have. This means we must copy and paste all the old values of the array into a new array every single time we want to add a new element to the array. This sounds very yikes!
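
Here is a minimal sketch of that cost in Haskell, using the standard immutable Data.Array (fillOneByOne is a made-up name): every single update with (//) produces a whole new array, copying every old value over.

```haskell
import Data.Array

-- Fill an immutable array "one element at a time": each (//) update
-- allocates a brand new array and copies all the old values into it.
fillOneByOne :: Int -> Array Int Int
fillOneByOne n = go (listArray (0, n - 1) (replicate n 0)) 0
  where
    go arr i
      | i >= n    = arr
      | otherwise = go (arr // [(i, i * i)]) (i + 1)

main :: IO ()
main = print (elems (fillOneByOne 5))  -- [0,1,4,9,16]
```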

But wait. Would we ever care about the partially initialized array? Not really. As soon as we assign a new value to the array, we can forget about the old one. We lose all references to the old array, and as such, we can garbage collect it. If we are going to garbage collect it anyway, we can premeditate this garbage collection and thus “garbage collect in place” by simply rewriting the same location in memory. With this method, we maintain immutable memory and all of its nice programming language properties while not losing the efficiency guarantees of mutable memory, thanks to this garbage collection optimization. Additionally, no information has to be lost, because a good enough garbage collector can remember old versions and backtrack.

We would still like access to the mutable properties of state within our programming language, so it would be nice to split the heap into two pieces: a write-once part and a write-as-many-times-as-you-want part. This way, we can keep proofs about these two kinds of heaps separate: we maintain nice functional properties in our write-once heap while having the power of side effects in our unrestricted heap. This is actually doable, and really nice. All programming languages should consider doing something like this. I think Rust has nice features like this, but I don’t know the details.
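
This is not the exact “garbage collect in place” optimization described above, but Haskell’s ST monad gives a flavor of the same idea: fill the array with in-place writes while nothing else can see it, then freeze it into an ordinary immutable array, so the outside world only ever observes immutable memory.

```haskell
import Data.Array
import Data.Array.ST

-- Build the array with destructive writes inside ST, then hand back an
-- immutable Array. Callers never observe any mutation.
fillInPlace :: Int -> Array Int Int
fillInPlace n = runSTArray $ do
  arr <- newArray (0, n - 1) 0                        -- mutable while building
  mapM_ (\i -> writeArray arr i (i * i)) [0 .. n - 1]
  return arr                                          -- frozen on the way out

main :: IO ()
main = print (elems (fillInPlace 5))  -- [0,1,4,9,16]
```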

Now, a curious mind might be asking: are heaps all there is to the state we can add as features to our language? The answer is no: don’t forget about the expression in our state. Since we have a copy of our code stored in memory, we can do whatever we want to it as a stepping operation in our language. Imagine what kind of crazy hacks that might lead to.

And one of these hacks is quite well known: the control flow operators call/cc and throw. Let’s cut a hole in the program so that we have an evaluation context E[.] and an expression e that fits in the hole. We can take this E[.] and store it somewhere else in memory. This is known as the call/cc or let/cc or C operation. We then continue running our program normally, except we can always arbitrarily decide to hot-swap the currently running program with E[.] by filling in the hole. This is known as throw or A. We’ve essentially duplicated our computer! We copy-pasted a large chunk of memory into another part of memory!
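
Here is a minimal sketch of this using Haskell’s Control.Monad.Cont (the names search and abort are made up for this example): callCC captures the surrounding evaluation context E[.] as the function abort, and calling abort later swaps the rest of the computation for that stored context.

```haskell
import Control.Monad.Cont

-- Scan a list; if we ever see a negative number, throw away the rest of the
-- loop by jumping straight back to the captured context with that number.
search :: [Int] -> Cont r Int
search xs = callCC $ \abort -> do
  mapM_ (\x -> if x < 0 then abort x else return ()) xs
  return 0  -- reached only if we never aborted

main :: IO ()
main = do
  print (runCont (search [1, 2, -3, 4]) id)  -- -3: we jumped out early
  print (runCont (search [1, 2, 3]) id)      -- 0: ran to the end
```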

Another one of these interesting hacks is parallelism. We suddenly split the code up into several different sections, and even split the heap as well! We then run all these pieces simultaneously, then merge the memory chunks back together. There is quite a lot of active research into how to efficiently split up this memory into chunks so that we don’t need to duplicate as much of it, along with how these different processes can communicate with each other through shared memory. All of this is simply utilizing new ways of reasoning about the chunk of memory in our computer.
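
Here is a minimal sketch of the split/run/merge shape using base’s Control.Concurrent (it does not model heap splitting faithfully, and sumChunk is a made-up name): give each worker its own chunk of the data, run the chunks at the same time, then merge the results back together.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- Each worker sums its own chunk and reports the result through an MVar.
sumChunk :: MVar Int -> [Int] -> IO ()
sumChunk out chunk = putMVar out (sum chunk)

main :: IO ()
main = do
  let (left, right) = splitAt 3 [1 .. 6 :: Int]  -- split the "memory" in two
  outL <- newEmptyMVar
  outR <- newEmptyMVar
  _ <- forkIO (sumChunk outL left)               -- run both pieces simultaneously
  _ <- forkIO (sumChunk outR right)
  l <- takeMVar outL                             -- merge the chunks back together
  r <- takeMVar outR
  print (l + r)                                  -- 21
```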

This magical hard drive on your computer is probably not limited to just a <h, e> pair. Maybe there’s something more out there that we haven’t discovered. There may be a new way to utilize memory, or maybe even some new understanding of memory that allows us to add features to our language we didn’t have before. Even quantum computation sounds interesting because it’s a new notion of state! Much like computers run programming languages, programming languages evolve as computers do, as we keep discovering new ways of engineering things into state. I would like to leave you guys on this note: what is the next big idea about state?