Structural Normalization (0: prelude)
In 1935, Gentzen [1] introduced natural deduction as a system of formal logic and wanted to prove it sound. To do this, he also invented the sequent calculus, proved it sound (via the Hauptsatz, better known today as Cut Elimination), and showed that every natural deduction proof can be translated into it via a funny intermediate system called LHJ (the "H" stands for Hilbert, because the system is presented with axioms, aka Hilbert-style). Since then, there have been several variants of this proof involving various intermediate calculi, notably by Prawitz [2], whose system of normal natural deduction will play a role in my own investigations.
Folks like me who study relationships between logic and computation care about this proof at least in part because propositional natural deduction corresponds to the simply-typed lambda calculus (STLC) via propositions-as-types. It is often stated, somewhat hand-wavingly, that "soundness of propositional natural deduction" is, via props-as-types, equivalent to normalization for the simply-typed lambda calculus (i.e., every STLC term normalizes). Certainly, at least for minimal logic (STLC with functions as the only type constructor), "running an STLC term" (aka a natural deduction proof) through such a translation produces a normal-form STLC term. I will refer to this procedure as Structural Normalization (hence the title of this post).
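To make "normal-form STLC term" concrete, here is a minimal sketch, with a representation and names of my own choosing (not taken from any of the cited systems): STLC terms in de Bruijn form and a check for beta-normal form.

```python
# A minimal sketch, with names of my own choosing: STLC terms in
# de Bruijn form, plus a beta-normal-form check.
from dataclasses import dataclass


class Tm:
    """Base class for lambda calculus terms."""


@dataclass(frozen=True)
class Var(Tm):
    index: int          # de Bruijn index


@dataclass(frozen=True)
class Lam(Tm):
    body: Tm            # type annotations omitted; they play no role here


@dataclass(frozen=True)
class App(Tm):
    fn: Tm
    arg: Tm


def is_normal(t: Tm) -> bool:
    """A term is beta-normal iff it contains no redex App(Lam(...), _)."""
    if isinstance(t, Var):
        return True
    if isinstance(t, Lam):
        return is_normal(t.body)
    # t is an application: it is a redex iff its head is a lambda
    if isinstance(t.fn, Lam):
        return False
    return is_normal(t.fn) and is_normal(t.arg)


# Under propositions-as-types, λx. x is a proof of A ⊃ A, already normal...
identity = Lam(Var(0))

# ...while (λx. x)(λx. x) proves the same proposition but is not normal:
# structural normalization should take it to some normal-form proof.
redex = App(identity, identity)
```

Translating a proof through the intermediate calculi and back, as in Gentzen's argument, should always land in the `is_normal` fragment.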
However, the proof on the logic side doesn't actually establish STLC normalization. It only establishes that every inhabited type has a normal-form inhabitant. We don't automatically get the result that the resulting normal term is in any way related (say, beta-eta equal) to the term we started with. Indeed, on the logic side, it's easy to regard the proofs themselves as merely devices for establishing facts about provability. But if we are to think of the computational content of the proof as a "normalization procedure", we have more work to do to show that it actually computes the right thing.
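To see why the weaker statement falls short, here is a hedged illustration (names my own; the de Bruijn term representation is re-declared so the snippet stands alone): two distinct beta-normal proofs of the same proposition, and a "normalizer" that witnesses only the weak claim.

```python
# A hedged illustration with names of my own choosing: two distinct
# beta-normal inhabitants of the proposition A ⊃ (A ⊃ A).
from dataclasses import dataclass


@dataclass(frozen=True)
class Var:
    index: int   # de Bruijn index


@dataclass(frozen=True)
class Lam:
    body: object


@dataclass(frozen=True)
class App:
    fn: object
    arg: object


# λx. λy. x and λx. λy. y both inhabit A -> (A -> A), both are
# beta-normal, and neither is beta-eta equal to the other.
proj_fst = Lam(Lam(Var(1)))
proj_snd = Lam(Lam(Var(0)))


def weak_normalize(t):
    """A "normalizer" that suffices for the weaker claim: it ignores its
    input and returns some fixed normal inhabitant of the type.  Every
    inhabited type still gets a normal-form inhabitant, but the output
    need not be beta-eta equal to the input."""
    return proj_snd
```

A procedure like `weak_normalize` is "sound" in the sense of producing normal inhabitants, yet it maps `proj_fst` to a term that is not beta-eta equal to it, which is exactly the gap between the logical result and STLC normalization.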
So what does structural normalization actually do? Does it normalize STLC terms? What do the intermediate representations look like at each step? Recently, I've been trying to answer these questions by implementing the procedure and running it on some examples. In the interest of keeping posts short so that I actually finish them, I'm going to start describing what I did in the next installment.
If I were writing a paper, I'd need to tell you what other people have done to investigate this and similar questions, such as Zucker [3] and Herbelin [4], and I'd be under some obligation to argue that what I did is somehow new. But this is a blog post, so I make no such claim, and in fact I have not (yet) read Zucker's or Herbelin's papers. Thanks to Guillaume for pointing me in the right direction to this related work so that I may at least gesture to it here.
- [1] Gentzen, Gerhard. Investigations into logical deduction. Mathematische Zeitschrift 39, 1 (1935).
- [2] Prawitz, Dag. Natural Deduction: A Proof-Theoretical Study. Almqvist & Wiksell, Stockholm, 1965.
- [3] Zucker, J. I. Correspondence between cut-elimination and normalization, parts I and II. Annals of Mathematical Logic 7 (1974), pp. 1–156.
- [4] Herbelin, Hugo. A λ-calculus structure isomorphic to Gentzen-style sequent calculus structure. CSL 1994. https://dl.acm.org/doi/10.5555/647844.736588