Interpretations of Classical Logic Using λ-calculus
Freddie Agestam
Abstract
Lambda calculus was introduced in the 1930s as a computation model.
It was later shown that the simply typed λ-calculus has a strong connec-
tion to intuitionistic logic, via the Curry-Howard correspondence. When
the type of a λ-term is seen as a proposition, the term itself corresponds
to a proof, or construction, of the object the proposition describes.
In the 1990s, it was discovered that the correspondence can be ex-
tended to classical logic, by enriching the λ-calculus with control oper-
ators found in some functional programming languages such as Scheme.
These control operators operate on abstractions of the evaluation context,
so-called continuations. These extensions of the λ-calculus allow us to find
computational content in non-constructive proofs.
The most studied of these is the λµ-calculus, which we will focus on, as it has
several interesting properties. Here it is possible to define catch and throw
operators. The type constructors ∧ and ∨ are definable from only → and ⊥.
Terms in λµ-calculus can be translated to λ-calculus using continuations.
In addition to presenting the main results, we go in depth to understand
control operators and continuations and how they set limitations
on the evaluation strategies. We look at the control operator C, as well as
call/cc from Scheme. We find λµ-calculus and C equivalents in Racket, a
Scheme implementation, and implement the operators ∧ and ∨ in Racket.
Finally, we find and discuss terms for some classical propositional tautolo-
gies.
Contents
1 Lambda Calculus 1
1.1 The Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Alpha-conversion . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 Beta-reduction . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Normal Forms . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Computability . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Typed Lambda Calculus . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Simply Typed Lambda Calculus . . . . . . . . . . . . . . 4
2 The Curry-Howard Isomorphism 6
2.1 Intuitionistic Logic . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.1 Proofs as Constructions . . . . . . . . . . . . . . . . . . . 6
2.1.2 Natural Deduction . . . . . . . . . . . . . . . . . . . . . . 7
2.1.3 Classical and Intuitionistic Interpretations . . . . . . . . . 7
2.2 The Curry-Howard Isomorphism . . . . . . . . . . . . . . . . . . 10
2.2.1 Additional Type Constructors . . . . . . . . . . . . . . . . 10
2.2.2 Proofs as Programs . . . . . . . . . . . . . . . . . . . . . . 13
3 Lambda Calculus with Control Operators 14
3.1 Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Evaluation Strategies . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2.1 Common Evaluation Strategies . . . . . . . . . . . . . . . 15
3.2.2 Comparison of Evaluation Strategies . . . . . . . . . . . . 17
3.3 Continuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3.1 Continuations in Scheme . . . . . . . . . . . . . . . . . . . 18
3.3.2 Continuation Passing Style . . . . . . . . . . . . . . . . . 18
3.4 Control Operators . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4.1 Operators A and C . . . . . . . . . . . . . . . . . . . . . . 20
3.4.2 Operators A and C in Racket . . . . . . . . . . . . . . . . 21
3.4.3 call/cc . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4.4 Use of Control Operators . . . . . . . . . . . . . . . . . . 22
3.5 Typed Control Operators . . . . . . . . . . . . . . . . . . . . . . 23
4 λµ-calculus 24
4.1 Syntax and Semantics . . . . . . . . . . . . . . . . . . . . . . . . 24
4.1.1 Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.1.2 Typing and Curry-Howard Correspondence . . . . . . . . 25
4.1.3 Restricted Terms . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Computation Rules . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2.1 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2.2 Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.2.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3 Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3.1 Implementation in Racket . . . . . . . . . . . . . . . . . . 29
4.3.2 Operators throw and catch . . . . . . . . . . . . . . . . . 30
4.3.3 Type of call/cc . . . . . . . . . . . . . . . . . . . . . . 31
4.3.4 A Definition of C . . . . . . . . . . . . . . . . . . . . . . . 32
4.4 Logical Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4.1 CPS-translation of λµ-calculus . . . . . . . . . . . . . . . 33
4.5 Extension with Conjunction and Disjunction . . . . . . . . . . . 36
4.5.1 Definitions in λµ-calculus . . . . . . . . . . . . . . . . . . 37
4.5.2 A Complete Correspondence . . . . . . . . . . . . . . . . 38
4.6 Terms for Classical Tautologies . . . . . . . . . . . . . . . . . . . 38
5 References 42
A Code in Scheme/Racket 43
A.1 Continuations with CPS . . . . . . . . . . . . . . . . . . . . . . . 43
A.2 Continuations with call/cc . . . . . . . . . . . . . . . . . . . . . 44
A.3 Classical Definitions of Pairing and Injection . . . . . . . . . . . 45
A.3.1 Base Definitions . . . . . . . . . . . . . . . . . . . . . . . 45
A.3.2 Disjunction . . . . . . . . . . . . . . . . . . . . . . . . . . 46
A.3.3 Conjunction . . . . . . . . . . . . . . . . . . . . . . . . . . 47
A.3.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
B Code in Coq 49
Figure 1: Chapter dependency graph — a directed acyclic graph (partial order)
over the sections 1 (λ-calculus), 2.1 (Intuitionistic logic), 2.2 (Curry-Howard),
3 (Control operators) and 4 (λµ-calculus), showing in what order sections can
be read. The reader may do his/her own topological sorting of the content.
1 Lambda Calculus
The lambda calculus was introduced in the 1930s by Alonzo Church, as an at-
tempt to construct a foundation for mathematics using functions instead of sets.
We assume the reader has some basic familiarity with λ-calculus and go quickly
through the basics. For a complete introduction, see Hindley and Seldin [HS08],
considered the standard book. For a short and gentle introduction, see [Roj97].
We start by introducing the untyped lambda calculus.
1.1 The Calculus
Definition 1.1 (Lambda-terms). Lambda terms are recursively defined, where
x ranges over an infinite set of variables, as¹

M, N ::= x | λx.M | MN

where x is called a variable, λx.M an abstraction, and MN an application.
Variables are written with lower case letters. Upper case letters denote
meta variables. In addition to variables, λ and ., our language also consists
of parentheses to delimit expressions where necessary and specify the order of
computation.
The λ (lambda) is a binder, so all free occurrences of x in M become bound
in the expression λx.M . A variable is free if it is not bound. The set of free
variables of a term M is denoted FV(M).
A variable that does not occur at all in a term is said to be fresh for that
term.
A term with no free variables is said to be closed. Closed terms are also
called combinators.
Terms that are syntactically identical are denoted as equivalent with the
relation ≡.
Remark 1.1 (Priority rules). To avoid cluttering our terms with parentheses,
we use the standard conventions.
• Application is left-associative: NMP means (NM)P.
• The lambda abstractor binds more weakly than application: λx.MN means
λx.(MN).
1.1.1 Substitution
Definition 1.2 (Substitution of terms). Substitution of a variable x in a term
M with a term N , denoted M [x := N ], recursively replaces all free occurrences
of x with N .
¹If the reader is unfamiliar with the recursive definition syntax, it is called Backus–Naur
Form (BNF).
We assume x ≠ y. The variable z is assumed to be fresh for both M and N.

x[x := N] = N
y[x := N] = y
(λx.M)[x := N] = λx.M
(λy.M)[x := N] = λy.M[x := N]   if y ∉ FV(N)
(λy.M)[x := N] = λz.M[y := z][x := N]   if y ∈ FV(N)
(M1M2)[x := N] = M1[x := N] M2[x := N]
When substituting (λy.M)[x := N], y is not allowed to occur freely in N,
because that y would then become bound by the binder and the term would change
its meaning. A substitution that prevents variables from becoming bound in this
way is called capture-avoiding.
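Definition 1.2 can be turned into executable code almost clause for clause. The following is an illustrative sketch (not from the thesis); the datatype names `Var`, `Lam`, `App` and the helpers `free_vars` and `subst` are our own:

```python
from dataclasses import dataclass
from itertools import count

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

_fresh = count()  # source of fresh variable names

def free_vars(t):
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.fn) | free_vars(t.arg)

def subst(t, x, n):
    """Capture-avoiding M[x := N], clause by clause as in Definition 1.2."""
    if isinstance(t, Var):
        return n if t.name == x else t            # x[x:=N] = N,  y[x:=N] = y
    if isinstance(t, App):
        return App(subst(t.fn, x, n), subst(t.arg, x, n))
    if t.param == x:                              # (λx.M)[x:=N] = λx.M
        return t
    if t.param not in free_vars(n):               # y ∉ FV(N): go under the binder
        return Lam(t.param, subst(t.body, x, n))
    z = f"z{next(_fresh)}"                        # y ∈ FV(N): rename y to fresh z first
    return Lam(z, subst(subst(t.body, t.param, Var(z)), x, n))
```

For instance, `subst(Lam("y", Var("x")), "x", Var("y"))` renames the bound `y` before substituting, so the free `y` being substituted in is not captured.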
1.1.2 Alpha-conversion
Renaming bound variables in a term, using capture-avoiding substitution, is
called α-conversion. We identify terms that can be α-reduced to each other, or
say that they are equal modulo α-conversion.
λx.M ≡α λy.M[x := y] if y ∉ FV(M)
From here on, we write ≡ to mean ≡α, no longer requiring that bound
variables have the same names.
We want this property since the name of a bound variable does not matter
semantically — it is only a placeholder.
1.1.3 Beta-reduction
The only computational rule of the lambda calculus is β-reduction, which is
interpreted as function application.
(λx.M)N →β M [x := N ]
Sometimes η-reduction is also mentioned. It is valid only when x ∉ FV(M).
λx.Mx →η M
The notation ↠β is used to denote several (0 or more) steps of β-reduction.
It is a reflexive and transitive relation on lambda terms.
1.2 Properties
1.2.1 Normal Forms
Definition 1.3 (Redex and Normal Form). A term of the form (λx.M)N is
called a redex (reducible expression). A term containing no redexes is said to be
normal, or in normal form.
A term in normal form has no further possible reductions, so the computation
of the term is complete. We can think of a term in normal form as a
fully computed value.
The following theorem states that we may do reductions in any order we
want.
Theorem 1.1 (Church–Rosser Theorem). If A ↠β B and A ↠β C, then there
exists a term D such that B ↠β D and C ↠β D.
For a proof, see [SU06].
A calculus with a reduction rule → satisfying this property is called confluent.
A relation → having this property is said to have the Church–Rosser property.
Theorem 1.2. The normal form of a term, if it exists, is unique.
Proof. Follows from Theorem 1.1.
Theorem 1.3 (Normal Order Theorem²). The normal form of a term, if it
exists, can be reached by repeatedly evaluating the leftmost-outermost³ redex.
The Normal Order Theorem presents an evaluation strategy for reaching
the normal form. For evaluation strategies, see further Section 3.2, which also
presents an informal argument for why the theorem holds.
1.2.2 Computability
Normal forms do not always exist. Here is a well-known example:
Example 1.1 (The Ω-combinator). The term λx.xx takes an argument and
applies it to itself. Applying this term on itself, self-application applied on
self-application, we get the following expression, often denoted by Ω:
Ω ≡ (λx.xx)(λx.xx)
This combinator is a redex that reduces to itself and can thus never reach a
normal form.
Since this combinator can regenerate itself, we suspect that the lambda
calculus is powerful enough to accomplish recursion. To accomplish recursion, a
function must have access to a copy of itself, i.e. be applied to itself. A recursion
operator was presented by Haskell Curry, known as the Y -combinator:
Y ≡ λf.(λx.f(xx))(λx.f(xx))
It has the reduction rule
Yf ↠β f(Yf)
when applied to any function f . The function f is passed Yf as its first argument
and can use this to call itself.
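Curry's Y-combinator can be demonstrated in any language with first-class functions. The sketch below is ours, not from the thesis; since Python evaluates arguments eagerly (call-by-value), we use the η-expanded call-by-value variant, often called the Z-combinator (Y itself would loop forever under strict evaluation):

```python
# Call-by-value fixed-point combinator Z, the η-expanded form of Curry's Y:
# Z ≡ λf.(λx.f(λv.xxv))(λx.f(λv.xxv))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A recursive function built with no self-reference in its definition:
# `self` is the copy of the function that Z feeds back in, just like Yf.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

fact(5)  # 120
```

The η-expansion `λv.xxv` delays the self-application until an argument arrives, which is exactly what makes the combinator usable under call-by-value.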
With the accomplishment of recursion, it is not surprising that the lambda
calculus is Turing complete.
Functions of multiple arguments can be achieved through partial application.
Note that the function spaces A × B → C and A → (B → C) are in bijection. A
function taking two arguments a and b, giving c, can be replaced by a function
taking the first argument a and evaluating to a new function taking b as an
argument and giving c. This transformation of functions, enabling partial
application, is called currying. In λ-calculus, the function (a, b) ↦ c is then
λa.λb.c
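Currying is easy to illustrate in code. A small sketch of our own (the names `add`, `add_curried` and `succ` are hypothetical):

```python
# Uncurried: one function of a pair of arguments, type (A × B) → C.
def add(pair):
    a, b = pair
    return a + b

# Curried: λa.λb.(a + b), type A → (B → C).
add_curried = lambda a: lambda b: a + b

add((2, 3))        # 5
add_curried(2)(3)  # 5

# Partial application: supplying only the first argument yields a new function.
succ = add_curried(1)
succ(41)           # 42
```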
²Sometimes called the second Church–Rosser theorem.
³Reducing the leftmost-innermost redex will also do.
1.3 Typed Lambda Calculus
Lambda calculus can be regarded as a minimalistic programming language:
it has terms and a computation rule. As in a programming language, it is
meaningful to add types to restrict the possible applications of a term. For
example, if we have a term representing addition, it is reasonable to restrict its
arguments to being something numeric.
The study of types is called type theory.
1.3.1 Types
The types are constructed from a set of base types.
• If A ∈ BaseTypes, then A is a type.
• If A1, A2 are types, then A1 → A2 is a type.
The → is an example of a type constructor, since it constructs new types out
of existing types. Other type constructors will appear later (Section 2.2.1). A
type of the form A1 → A2 is a function type: it represents a function that takes
an argument term of type A1 and produces a result term of type A2.
These types should not be thought of as domain and codomain, because the
functions we are using in lambda calculus are not functions between sets. The
types should instead be thought of as a predicate that a term must satisfy.
The set of base types could be for example some types used in programming
languages: {int, real, bool}. We will not be using any specific base types,
just metatypes such as A1, A2, and will thus not be concerned with what the
base types actually are.
Remark 1.2 (Precedence rule). A→ B → C means A→ (B → C).
1.3.2 Simply Typed Lambda Calculus
We will add simple types⁴ to lambda calculus, using only the type constructor →.
To denote that a term M is of type A, we write M : A. We often add the
type to bound variables, for example λx : A.M.
Definition 1.4 (Typing judgement). Let Γ be a set of statements of the form
x : A. Γ is then called a typing environment.
A statement of the form Γ ⊢ M : A, meaning that from Γ we can derive
M : A, is called a typing judgement.
The type of a term can be derived using the rules of Figure 2. The following
theorem asserts that these rules work as we expect them to, so that the type of
a term is not ambiguous.
Theorem 1.4 (Subject reduction). If M : A and M ↠β M′, then M′ : A.
Proof. Induction on the typing rules.
⁴Simple types are called simple as opposed to dependent types, where types may take
parameters. We will only consider simple types.
Γ, x : A ⊢ x : A   (ax)

Γ, x : A ⊢ M : B
─────────────────   (λ abs)
Γ ⊢ λx.M : A → B

Γ ⊢ M : A → B    Γ ⊢ N : A
───────────────────────────   (λ app)
Γ ⊢ MN : B
Figure 2: Typing judgements for λ-calculus.
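The rules of Figure 2 translate directly into a type checker for Church-style terms, where each binder carries its type annotation. The following is an illustrative sketch of ours, with a hypothetical encoding of terms and types as nested tuples:

```python
# Terms: ("var", x), ("lam", x, A, body) for the annotated binder λx:A.M,
#        and ("app", M, N).
# Types: base types as strings, arrow types as ("->", A, B).

def typeof(term, env):
    tag = term[0]
    if tag == "var":                  # (ax): look up x : A in Γ
        return env[term[1]]
    if tag == "lam":                  # (λ abs): Γ, x:A ⊢ M : B gives Γ ⊢ λx.M : A→B
        _, x, a, body = term
        return ("->", a, typeof(body, {**env, x: a}))
    _, m, n = term                    # (λ app): M : A→B and N : A give MN : B
    tm, tn = typeof(m, env), typeof(n, env)
    if isinstance(tm, tuple) and tm[0] == "->" and tm[1] == tn:
        return tm[2]
    raise TypeError(f"cannot apply {tm} to {tn}")

# λx:A. x  has type  A → A
identity = ("lam", "x", "A", ("var", "x"))
typeof(identity, {})  # ("->", "A", "A")
```

Self-application such as `("app", ("var", "x"), ("var", "x"))` is rejected for every choice of environment, mirroring the discussion of the untypability of Ω below.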
Note that our type restriction restricts what terms we may work with — what
terms we may compose (apply) — but not the computation (the reduction) of
these terms. Thus we still have the confluence property.
The next result is a quite powerful one: A term that can be typed (its type
can be derived) can also be normalised (a normal form can be computed).
Theorem 1.5 (Weak normalisation). If Γ ⊢ M : A for some A, then M has
a normal form. In other words, there is a finite chain of reductions M →β
M1 →β M2 →β · · · →β Mn such that Mn is in normal form.
For a proof, see [SU06]. The proof requires some work, but the idea used
in [SU06] is to look at the complexity, or informally the size, of the type. With
a certain strategy for choosing the next redex to reduce, the complexity of
the type will decrease. Note that this is not trivial, because some reductions can
increase the length of the term (counted in the number of characters) when
the function body contains something complex.
In fact, we have a stronger result.
Theorem 1.6 (Strong normalisation). Every chain of reductions M →β M1 →β
M2 →β · · · starting from a typable term must reach a normal form Mn after a
finite number of reductions.
Again, for a proof, see [SU06].
As a result, the Ω-combinator cannot be given a type, since it has no normal
form. A recursion operator, such as the Y-combinator, cannot be typed either. If
a function can be self-applied, it must have its own type D as the first part of
its type, which would look something like

D = D → A

for some A. Trying to make such a recursive definition of D will expand to
something infinite. Attempts at typing other terms which involve some sort of
self-application will also result in an “infinite type”.
Since the simply typed lambda calculus satisfies strong normalisation, all
computations are guaranteed to halt and the calculus is not Turing complete.
As well as there being terms that cannot be typed, there are types without
closed terms of that type.
Definition 1.5 (Type inhabitation). A type A is said to be inhabited if there
exists some term M such that M : A. M is then called an inhabitant of the
type A.
A type is said to be empty if it is not inhabited.
The type inhabitation problem is to decide whether a type is inhabited.
2 The Curry-Howard Isomorphism
The Curry-Howard isomorphism, or the Curry-Howard correspondence, is a
correspondence between intuitionistic logic and simply typed lambda calculus.
This correspondence will be central in our main topic. We first introduce intu-
itionistic logic.
2.1 Intuitionistic Logic
The reader is assumed to have seen the “standard logic” before, i.e. classical
logic. Central in classical logic is that all propositions are either true or false.
Intuitionistic logic was introduced in the early 1900s, when mathematicians
began to doubt the consistency of the foundations of mathematics. The central
idea is that a mathematical object can only be shown to exist by giving a
construction of the object.
Some principles taken for granted in classical logic are no longer valid. One
of these is the principle of the excluded middle, “for any proposition P , we
must have P ∨ ¬P”, which is immediate in classical logic. Likewise, double
negation elimination (also known as reductio ad absurdum), “¬¬P → P for any
formula P”, is not valid. Intuitionistic logic rejects proofs that rely on these and
other equivalent principles.
Probably the most famous example demonstrating the difference is the following.

There exist irrational numbers a, b such that a^b is rational.

Proof: √2 is known to be irrational. Consider (√2)^√2. By the principle
of excluded middle, it is either rational or irrational. If it is
rational, we are done. If it is irrational, take a = (√2)^√2, b = √2, and
we have a^b = ((√2)^√2)^√2 = (√2)² = 2, which is rational.
This proof is non-constructive, since it does not tell us which of the two choices
the numbers a and b are. There exist other proofs that do exhibit the numbers
(it can be shown that (√2)^√2 is irrational). However, such a proof
becomes much longer.
In fact, by requiring that all proofs contain constructions, fewer theorems
can be proved. On the other hand, a proof with a construction provides a
richer understanding. Often, we do not want to merely show the existence of
an object, but to construct the object itself.
2.1.1 Proofs as Constructions
In intuitionistic logic, a proof is a construction. The most known interpretation
of intuitionistic proofs is the BHK-interpretation, which follows. A construction
is defined inductively, where P and Q are propositions:
• A construction of P ∧Q is a construction of P and a construction of Q.
• A construction of P ∨Q is a construction of P or a construction of Q, as
well as an indicator of whether it is P or Q.
Γ, φ ⊢ φ   (ax)

Γ, φ ⊢ ψ
──────────   (→I)
Γ ⊢ φ → ψ

Γ ⊢ φ → ψ    Γ ⊢ φ
───────────────────   (→E)
Γ ⊢ ψ

Γ ⊢ φ    Γ ⊢ ψ
───────────────   (∧I)
Γ ⊢ φ ∧ ψ

Γ ⊢ φ ∧ ψ
──────────   (∧E)
Γ ⊢ φ

Γ ⊢ φ ∧ ψ
──────────   (∧E)
Γ ⊢ ψ

Γ ⊢ φ
──────────   (∨I)
Γ ⊢ φ ∨ ψ

Γ ⊢ ψ
──────────   (∨I)
Γ ⊢ φ ∨ ψ

Γ ⊢ φ ∨ ψ    Γ, φ ⊢ σ    Γ, ψ ⊢ σ
──────────────────────────────────   (∨E)
Γ ⊢ σ

Γ ⊢ ⊥
──────   (⊥E)
Γ ⊢ φ
Figure 3: Natural deduction for intuitionistic propositional logic.
• A construction of P → Q is a function that from a construction of P
constructs a construction of Q.
• A construction of ¬P (or P → ⊥) is a function that produces a contra-
diction from every construction of P .
• There is no construction of ⊥.
Such a construction is also called a witness.
The classical axiom ¬¬P → P does not make any sense under a constructive
interpretation of proofs. To prove a proposition P we need a proof of P . But
given a proof of ¬¬P , we cannot extract a construction of P .
2.1.2 Natural Deduction
The notation Γ ` P , where Γ is a set of propositions, means that from Γ,
we can derive P . The set Γ is called the assumption set and the statement
Γ ` P is called a proof judgement. The derivation rules for natural deduction
in intuitionistic logic is shown in Figure 3.
This notation is called the sequent notation, where all free assumptions are
written to the left of the ⊢ (turnstile). The reader may be more familiar with the
tree-style notation, where assumptions are written in the leaves of a tree and
marked as discharged during the derivation. The sequent-style notation is much
easier to reason about formally, whereas the tree notation is easier to read.
2.1.3 Classical and Intuitionistic Interpretations
Natural deduction formalises our concept of a proof. But we also need to
introduce the concept of the (truth) value of a formula, which corresponds to our
notion of a formula being “true” or “false” when the atomic propositions it
consists of are given certain values. When we use classical logic, our propositions
are assigned values from Boolean algebras. The most well-known such algebra is B,
which consists of only two elements, {0, 1}. When we study intuitionistic logic,
our equivalent of Boolean algebras are called Heyting algebras. The Boolean
algebras and the Heyting algebras specify how the constants and connectives
⊥, ⊤, ∨, ∧, ¬, → are interpreted. We do not define these algebras here; the
interested reader can find them in for example [SU06], or in a simpler presentation
covering only classical logic in [Car13]. We point out that all Boolean algebras
are Heyting algebras.
This is a quite technical area, but we go quickly through it, a bit informally,
and state the main results without proofs.
Definition 2.1. A valuation is any function v : Γ → H, sending every atomic
proposition p in Γ to an element v(p) of the Heyting algebra H. The value ⟦P⟧v
of a proposition P is defined inductively:
• ⟦p⟧v = v(p)
• ⟦⊥⟧v = ⊥
• ⟦⊤⟧v = ⊤
• ⟦P ∨ Q⟧v = ⟦P⟧v ∨ ⟦Q⟧v
• ⟦P ∧ Q⟧v = ⟦P⟧v ∧ ⟦Q⟧v
• ⟦P → Q⟧v = ⟦P⟧v → ⟦Q⟧v
• ⟦¬P⟧v = ¬⟦P⟧v
where the constants and connectives ⊥, ⊤, ∨, ∧, ¬, → on the right-hand side are
interpreted in H.
We introduce some notation:
• The notation v,Γ |= P means that under the valuation v of Γ, P is true.
• The notation H,Γ |= P means that v,Γ |= P for all v : Γ→ H.
• The notation Γ |= P means that H,Γ |= P for all Heyting algebras H.
• The notation |= P means that Γ |= P with Γ = ∅.
For classical logic, we do the corresponding definitions, replacing the Heyting
algebra H with the Boolean algebra B.
Definition 2.2 (tautology). A tautology is a formula P that is true under all
valuations. We write |= P .
A well-known example of a Boolean algebra is the following. Note that there
exist other Boolean algebras.
Example 2.1. The Boolean algebra B is the set {0, 1}, with
• a ∨ b as max(a, b)
• a ∧ b as min(a, b) = a × b
• ¬a as 1 − a
• a → b as ¬a ∨ b
• ⊤ as 1
• ⊥ as 0
Theorem 2.1. A formula in classical logic is true under all valuations into
Boolean algebras iff it is true under all valuations into B. Or, more compactly:
B, Γ |= P ⇐⇒ Γ |= P
This theorem states that to verify whether Γ |= P, it is sufficient to test all
assignments of atomic propositions to 0 or 1, i.e. to draw “truth tables”.
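Theorem 2.1 justifies the truth-table method, which is easy to mechanise. A brute-force sketch of ours (the names `ops`, `is_tautology` and `peirce` are hypothetical), using the connectives of B from Example 2.1:

```python
from itertools import product

# Connectives of the Boolean algebra B = {0, 1} (Example 2.1):
ops = {
    "or":  max,                           # a ∨ b as max(a, b)
    "and": min,                           # a ∧ b as min(a, b)
    "not": lambda a: 1 - a,               # ¬a as 1 − a
    "imp": lambda a, b: max(1 - a, b),    # a → b as ¬a ∨ b
}

def is_tautology(formula, atoms):
    """Test every 0/1 valuation of the atoms, i.e. draw the truth table."""
    return all(formula(dict(zip(atoms, vals))) == 1
               for vals in product((0, 1), repeat=len(atoms)))

# Peirce's law ((P → Q) → P) → P, a classical tautology from Example 2.3:
peirce = lambda v: ops["imp"](ops["imp"](ops["imp"](v["P"], v["Q"]),
                                         v["P"]),
                              v["P"])

is_tautology(peirce, ["P", "Q"])  # True
```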
Example 2.2. Consider subsets of R, where int(S) (the interior) is the largest
open set contained in the subset S ⊆ R. The open subsets of R form a Heyting
algebra, with
• A ∨ B as set union A ∪ B
• A ∧ B as set intersection A ∩ B
• ¬A as the interior of the set complement, int(A^C)
• A → B as int(A^C ∪ B)
• ⊤ as the entire set R
• ⊥ as the empty set ∅
Note that all these sets are open, as required (given that A and B are open).
We will denote this Heyting algebra by HR.
We can immediately find a valuation where the Principle of the Excluded
Middle, P ∨ ¬P, is not valid. Let v : {P} → HR, with v(P) = (0, ∞). Then
⟦¬P⟧v = int(R \ (0, ∞)) = int((−∞, 0]) = (−∞, 0), so ⟦P ∨ ¬P⟧v = (0, ∞) ∪
(−∞, 0) = R \ {0} ≠ R = ⟦⊤⟧v.
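This counterexample can even be computed. The sketch below is our own: it represents an open subset of R as a sorted tuple of disjoint open intervals and implements ∨, ∧ and ¬ of HR on that representation; plugging in v(P) = (0, ∞) reproduces the failure of P ∨ ¬P:

```python
from math import inf

# Open subsets of R as sorted tuples of disjoint open intervals (a, b).
TOP, BOT = ((-inf, inf),), ()

def union(A, B):                    # A ∨ B
    out = []
    for a, b in sorted(A + B):
        # Merge only genuinely overlapping intervals:
        # (0, 1) ∪ (1, 2) must keep the gap at the excluded point 1.
        if out and a < out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return tuple(out)

def inter(A, B):                    # A ∧ B
    out = [(max(a, c), min(b, d))
           for a, b in A for c, d in B if max(a, c) < min(b, d)]
    return tuple(sorted(out))

def neg(A):                         # ¬A = int(R \ A)
    if not A:
        return TOP
    ends = [-inf] + [b for _, b in A]    # right endpoints shifted left
    starts = [a for a, _ in A] + [inf]   # left endpoints shifted right
    return tuple((l, r) for l, r in zip(ends, starts) if l < r)

P = ((0.0, inf),)                   # v(P) = (0, ∞)
union(P, neg(P)) != TOP             # True: ⟦P ∨ ¬P⟧v = R \ {0} ≠ R
```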
Theorem 2.2. A formula in intuitionistic logic is true under all valuations into
Heyting algebras iff it is true under all valuations into HR. Or, more compactly:
HR, Γ |= P ⇐⇒ Γ |= P
This theorem states that verifying whether Γ |= P holds intuitionistically comes
down to searching for counterexamples among open sets on the real line.
We have two central results:
Theorem 2.3 (Soundness Theorem).
Γ ⊢ P =⇒ Γ |= P
Stated in words: “what can be proved, is also true under all valuations”.
Theorem 2.4 (Completeness Theorem).
Γ |= P =⇒ Γ ⊢ P
Stated in words: “what is true under all valuations, can also be proved”.
These two theorems are true for both classical and intuitionistic logic, using
Boolean and Heyting algebras respectively. These two theorems assert that
our notions of “provable” and “true” coincide. A tautology is then a formula
that can be derived with all assumptions discharged, i.e. what we usually call a
theorem.
Example 2.3 (Classical, but not intuitionistic, tautologies).
• Double negation elimination, or reductio ad absurdum (RAA): ¬¬P → P
• The principle of excluded middle (PEM): P ∨ ¬P
• Peirce’s law: ((P → Q) → P) → P
• Proof by contradiction: (¬Q → ¬P) → (P → Q)
• Negation of conjunction (de Morgan’s laws): ¬(P ∧ Q) → (¬P ∨ ¬Q)
• Proof by cases: (P → Q) → (¬P → Q) → Q⁵
• RAA implies PEM: (¬¬P → P) → P ∨ ¬P
To prove this, one first concludes using truth tables that these are classical
tautologies. To show that they are not intuitionistic tautologies, one finds a
valuation into a Heyting algebra, e.g. HR, where they are not true. The reader
is encouraged to construct such counterexamples in HR for some of these
propositions.
2.2 The Curry-Howard Isomorphism
Typed lambda calculus can be introduced with another motivation. Under the
BHK-interpretation of proofs, proofs are constructions. If p is a construction
and P is a formula, we let p : P denote that p is a construction of P .
We now see a similarity to lambda calculus: If q(p) : Q can be constructed
given a p : P , then the function λp.q(p) is a construction of the formula P → Q.
We get the following correspondence:
proof ↔ term
proposition ↔ type
assumption set ↔ typing environment
implication ↔ function
unprovable formula ↔ empty type
undischarged assumption ↔ free variable
normal derivation ↔ normal form
tautology ↔ combinator
The list can be extended, for example in order to find a correspondence for
the other logical connectives. That is what we will do now.
2.2.1 Additional Type Constructors
To get a correspondence for ∨, ∧ and ⊥, we extend the simply typed lambda
calculus with new constructs.
⁵Or (P → Q) ∧ (¬P → Q) → Q, by currying.
We add a new type ⊥, which is empty. This type is hard to interpret; it
can be read as contradiction. A term of this type can be interpreted as
abortion of the computation. Another interpretation is that a term of type ⊥ is an
“oracle” that can produce anything. It has a destruction rule: given an oracle,
we can produce a term of any type. Since such an oracle is impossible, the
type ⊥ is an empty type.
For ∧ and ∨, we introduce two new type constructors alongside our old (→).
They also have destructors, or elimination rules, as well as reduction rules. The
type ⊥ only has a destructor.
Γ, x : A ⊢ x : A   (ax)

Γ, x : A ⊢ M : B
─────────────────   (λ abs)
Γ ⊢ λx.M : A → B

Γ ⊢ M : A → B    Γ ⊢ N : A
───────────────────────────   (λ app)
Γ ⊢ MN : B

Γ ⊢ M : A    Γ ⊢ N : B
────────────────────────   (pair)
Γ ⊢ pair(M, N) : A ∧ B

Γ ⊢ M : A ∧ B
───────────────   (fst)
Γ ⊢ fst(M) : A

Γ ⊢ M : A ∧ B
───────────────   (snd)
Γ ⊢ snd(M) : B

Γ ⊢ M : A
────────────────────   (inl)
Γ ⊢ inl(M) : A ∨ B

Γ ⊢ M : B
────────────────────   (inr)
Γ ⊢ inr(M) : A ∨ B

Γ ⊢ M : A ∨ B    Γ, a : A ⊢ N1 : C    Γ, b : B ⊢ N2 : C
─────────────────────────────────────────────────────────   (case)
Γ ⊢ case(M, λa.N1, λb.N2) : C

Γ ⊢ M : ⊥
────────────────   (any)    where A is any type
Γ ⊢ any(M) : A
Figure 4: Typing judgements for λ-calculus.
Function If A1, A2 are types, then A1 → A2 is a type. It is interpreted as a
function.
The data constructor for → is λ. It has the destructor apply(M,N), but
we typically write just MN . The reduction rule is
apply(λx.M,N) →β M [x := N ]
or simply (λx.M)N →β M [x := N ]
Pair If A1, A2 are types, then A1 ∧A2 is a type. It is interpreted as a pair (or
Cartesian product).
The data constructor for ∧ is pair(, ). It has the destructors fst(M) and
snd(M) (projection) and the reduction rules
fst(pair(M,N))→∧ M
snd(pair(M,N))→∧ N
Injection If A1, A2 are types, then A1 ∨ A2 is a type. It is interpreted as a
union type, which can be analysed by cases.
The data constructors for ∨ are inl() and inr(). It has the destructor
case(, , ) and the reduction rules
case(inl(M), λa.N1, λb.N2)→∨ (λa.N1)M
case(inr(M), λa.N1, λb.N2)→∨ (λb.N2)M
Empty Type We add the base type ⊥, which does not have a constructor. It
has a destructor any(), without a reduction rule. The type of any(M) is
any type we want.
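The appendix implements pairing and injection classically in Racket; as an informal illustration here, the same constructors, destructors and reduction behaviour can be sketched with closures and tagged values (the encoding is our own):

```python
# Constructors and destructors for ∧ (pair) and ∨ (injection) as plain closures.
pair = lambda m, n: lambda sel: m if sel else n   # pair(M, N)
fst  = lambda p: p(True)                          # fst(pair(M, N)) →∧ M
snd  = lambda p: p(False)                         # snd(pair(M, N)) →∧ N

inl  = lambda m: ("inl", m)                       # inl(M) : A ∨ B
inr  = lambda m: ("inr", m)                       # inr(M) : A ∨ B

def case(s, on_l, on_r):
    """case(M, λa.N1, λb.N2): analyse the tag and apply the matching branch."""
    tag, value = s
    return on_l(value) if tag == "inl" else on_r(value)

fst(pair(1, 2))                                   # 1
case(inl(3), lambda a: a + 1, lambda b: b * 10)   # 4, i.e. (λa.N1)M
case(inr(3), lambda a: a + 1, lambda b: b * 10)   # 30, i.e. (λb.N2)M
```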
Example 2.4 (Injection type). The type A ∨ B can be interpreted as a type
where either A or B works for the purpose. An example from programming can
be the operator “less than”, <. It can compare terms of both int and real.
The type for such a term < is then (int ∨ real) ∧ (int ∨ real)→ bool.
The term < itself does a case analysis on the types of its two arguments to
determine how they should be compared.
Our new typing judgements are shown in Figure 4. Note that if we strip our
typing statements of their variables and just keep the types, we get exactly the
rules in Figure 3. We extend our “glossary” from previously:
logical connective ↔ type constructor
introduction ↔ constructor
elimination ↔ destructor
implication ↔ function
conjunction ↔ pair
disjunction ↔ injection
absurdity ↔ the empty type
Theorem 2.5 (The Curry-Howard Isomorphism (for intuitionistic logic)).
Let types(Γ) denote {A | x : A ∈ Γ}.
(i) Let Γ be a typing environment. If Γ ⊢ M : A, then types(Γ) ⊢ A.
(ii) Let Γ be an assumption set. If Γ ⊢ A, then there are Γ′, M such that
Γ′ ⊢ M : A and types(Γ′) = Γ.
Proof. Induction on the derivation. The set Γ′ can be constructed as
Γ′ = {x_A : A | A ∈ Γ}, where x_A denotes a fresh variable for every A.
This is a quite remarkable result. To show that a formula P is intuitionistically
valid, it is sufficient to construct a term M of type P, i.e. an inhabitant.
As a bonus, we also get a proof, since the derivation of the typing judgement
M : P also contains the proof of the formula P, which is retrieved by removing
all variables in each typing judgement.
Conversely, a term of a type A can be constructed from a proof that A is a
valid intuitionistic formula.
Remark 2.1. Note that the word “isomorphism” is not a perfect description
of the correspondence. Some types can have several inhabitants, still different
after α-conversion and normalisation, if there are several variables of the same
type. The type derivation thus contains more information than an intuitionistic
deduction (in sequent notation), because the variables reveal which construc-
tion is used in which situation. A solution is to use the tree notation, with
annotations of which assumption is discharged at which step in the deduction.
2.2.2 Proofs as Programs
Under the Curry-Howard correspondence, proofs are terms and propositions
are types. Since the λ-calculus is a Turing complete functional programming
language, a proof can be seen as a program. When we find an intuitionistic proof
for a proposition, we also have an algorithm for constructing an object having
that property. Conversely, if we have an algorithm for constructing an object
having a property, given that the algorithm is correct, we can also construct a
proof from it showing the correctness of the algorithm. This is the content of
the proofs-as-programs principle.
It was discovered in the 1990s that the correspondence can be extended
to classical logic, if lambda calculus is extended to λµ-calculus. This will
also be our main topic further on. Although we have argued for the
non-constructiveness of classical proofs, where algorithms cannot be constructed,
we will try to find some computational content in the classical proofs.
3 Lambda Calculus with Control Operators
We will introduce control operators into lambda calculus. Control operators
allow abstraction of the execution state of the program, enabling program jumps,
as in imperative languages. We will look at examples from the functional
programming language Scheme, which is based on lambda calculus. We will work
with untyped lambda calculus for most of this chapter, so Scheme is suitable
since it is dynamically typed.
To understand control operators, we will first introduce the concepts of con-
texts, evaluation strategies and continuations.
With the notion of control operators, we will be able to extend the Curry-
Howard isomorphism to classical logic in the next chapter.
3.1 Contexts
Definition 3.1. A context is a term with a hole in it, i.e. a subterm has been
replaced with a “hole”, denoted by �.
Example 3.1. For example, in the term
(λx.λy.x)(λz.z)(λx.λy.y)
the subterm λz.z can be replaced by a hole �, to get the context
(λx.λy.x)�(λx.λy.y)
If we call this context E, then substituting a term M for the hole in E, which
is denoted E[M ], will give us the term
(λx.λy.x)M(λx.λy.y)
4
In lambda calculus, if we allow any subterm to be replaced by a hole, we can
define contexts as
E ::= � | EM | ME | λx.E (1)
where M is a lambda-term and x is a lambda-variable.
We will make several more limited definitions of contexts for different pur-
poses.
3.2 Evaluation Strategies
One use of contexts is to define an evaluation strategy. An evaluation strategy
specifies the order of evaluation of terms. In untyped and simply typed lambda
calculus, we have the confluence property, which guarantees that the normal
form is unique. That is, two computations of the same term, using different
sequences of β-reductions, cannot yield two different results (if by “result”
we mean a term in normal form).
In both these calculi, all evaluation strategies are thus allowed. Using the
notion of contexts, we can express the β-reduction rules as
(λx.M)N →β M [N/x]
M →β N
E[M ]→β E[N ]
which captures the idea that redices in subterms can also be reduced. Writing
these rules explicitly, using the definition of contexts in equation (1), we get
(λx.M)N →β M [N/x]    (β)

M →β N
PM →β PN    (λright)

M →β N
MP →β NP    (λleft)

M →β N
λx.M →β λx.N    (λbody)
In other calculi, one may choose to restrict the possible evaluation strategies.
This might be required to obtain a confluent calculus. It can also be of interest
when implementing a programming language, in order to achieve computational
efficiency or a specific behaviour. For example, the rule λbody is typically not
used in programming languages — the body M of a function λx.M is not
evaluated before application.
3.2.1 Common Evaluation Strategies
The following defines an evaluation strategy where arguments of a function are
evaluated before the function is applied, a so called call-by-value strategy. It is
commonly used in programming languages, for example in Scheme.
Definition 3.2 (Call-by-value lambda calculus). Let x range over variables, M
over lambda terms, and define values as variables or λ-abstractions:
V ::= x | λx.M
and define contexts as follows:
E ::= � | V E | EM
Then define β-reduction as
(λx.M)V →β M [V/x]    (βcbv)

M →β N
E[M ]→β E[N ]    (Ccbv)
The rule (βcbv) specifies that arguments to a function are evaluated
before function application. The context definition specifies that arguments are
evaluated left to right. 6
To better understand why the contexts define a certain evaluation strategy,
we suggest that the reader construct an arbitrary lambda term, use the Backus-
Naur grammar definition of contexts to derive a syntax tree, and use this syntax
tree to derive the order of evaluation of the term.
6 Most call-by-value programming languages specify the order of argument evaluation;
Scheme and C do not. In Scheme and other functional languages the order does not matter,
since (the pure part of) the language has no side-effects. For details on Scheme evaluation
order, see R7RS section 4.1.3 [R7R13]. In C the order is unspecified and varies between
compilers.
Example 3.2. The term below, a non-value function applied to a non-value
argument, separated by brackets for legibility,
[(λx.(λz.z)x)(λy.y)][(λa.λb.a)(λc.c)]
has, in a left-to-right call-by-value calculus, the context derivation
E ⇒ E M ⇒ (V E) M ⇒ (V �) M

The derivation places the hole inside the left application; the context closest to
the hole marks the redex that is reduced next. After one step of reduction, the
term looks like
[(λz.z)(λy.y)][(λa.λb.a)(λc.c)]
and happens to have the same context derivation. After another step of reduction,
the function has been reduced to a value and the evaluation moves to the argument.
The term looks like
[(λy.y)][(λa.λb.a)(λc.c)]
and the context derivation looks like

E ⇒ V E ⇒ V (V E) ⇒ V (V �)
4
If instead right-to-left call-by-value evaluation is wanted, one defines contexts
as
E ::= � | EV | ME
We have already encountered an evaluation strategy: In untyped lambda
calculus, the Normal order theorem (Theorem 1.3) states that normal order
reduction will always reach the normal form if it exists.
Definition 3.3 (Normal order evaluation). Define contexts by
E ::= � | EM | λx.E
and use the usual rule for β-reduction. This defines a strategy where the leftmost
redex is reduced first and the insides of functions are reduced before application.
This is also called the leftmost-innermost reduction. 7
7As commented on in Theorem 1.3, both leftmost-innermost and leftmost-outermost strate-
gies will reach the normal form if it exists. Normal order reduction is typically defined to be
leftmost-outermost, but leftmost-innermost is easier to define with contexts, which is why
leftmost-innermost was chosen in this example.
Another well-known evaluation strategy is call-by-name, which we will use
later.
Definition 3.4 (Call-by-name evaluation). Define contexts by
E ::= � | EM
and use the usual rule for β-reduction. This defines a strategy where the leftmost
redex is reduced first, but the insides of functions are not reduced.
The memoized version of call-by-name is called call-by-need and is also
known as lazy evaluation in programming.
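As a small illustration of the difference between these strategies, here is a worked reduction of our own: under call-by-name, the argument of a function is substituted unevaluated, and the redex that ends up under the λ is left alone.

```latex
% call-by-name (E ::= [] | EM): the head redex is contracted,
% the argument (\lambda z.z)\,a is substituted without being evaluated
(\lambda x.\lambda y.x)\,((\lambda z.z)\,a) \;\to_\beta\; \lambda y.\,((\lambda z.z)\,a)
```

A call-by-value strategy would instead reduce the argument first, (λx.λy.x)((λz.z)a) →β (λx.λy.x)a →β λy.a, here reaching the normal form.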
3.2.2 Comparison of Evaluation Strategies
Below is an informal argument that the Normal Order Theorem (Theorem 1.3)
holds.
Definition 3.5. Given an evaluation strategy, we say that an evaluation of a
term has terminated if the term is a value and the evaluation strategy cannot
reduce the term further.8
The criterion that an evaluation of a term has terminated is weaker than the
evaluation having reached normal form.
If the evaluation of a term never terminates, it is because a subterm loops or
reproduces itself in some way, as for example the Ω-combinator does (see Example 1.1).
In certain cases, one evaluation strategy can fail to terminate while another
succeeds. The other strategy may choose a different order of evaluation, so
that the problematic term never has to be evaluated, because it is never used.
For example, consider a constant function applied to the Ω-combinator:
(λx.a)Ω
A call-by-value strategy will try to reduce the argument before application, and
end up in an infinite reduction of the Ω-combinator. Normal order reduction
or call-by-name reduction will not evaluate the argument before application, and
the term evaluates to a. In general, the call-by-name strategy never evaluates
a term until it is used, and is therefore the evaluation strategy most likely to
terminate.
Still, call-by-name leaves the body of a function unevaluated, so if the func-
tion body contains redices, it is not in normal form. Normal order reduction
is like call-by-name, except that it also evaluates function bodies (first or last),
which is why it will find the normal form if it exists. However, if the function
body contains something that loops, normal order reduction will become caught
in the loop, while call-by-name terminates.
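The informal argument above can be made concrete with a worked reduction of our own, recalling that Ω = (λx.xx)(λx.xx) reduces to itself:

```latex
% call-by-value: the argument must become a value before application
(\lambda x.a)\,\Omega \;\to_\beta\; (\lambda x.a)\,\Omega \;\to_\beta\; \cdots \quad \text{(never terminates)}

% call-by-name: the argument is substituted unevaluated and then discarded
(\lambda x.a)\,\Omega \;\to_\beta\; a
```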
8We only consider evaluation strategies where the term after termination is a value (a
variable or an abstraction), like those mentioned above. Such evaluation strategies are the
only meaningful ones, but it is technically possible to construct one that terminates without
producing a value.
3.3 Continuations
Definition 3.6. A continuation is a procedural abstraction of a context. That
is, if E is a context, then λz.E[z] is a continuation. The variable z is assumed
to be fresh for the context.
What we mean by a continuation depends slightly on which definition of contexts
we are using. The idea is the same, though — it is a function λz.M, where
z occurs exactly once in M. If M is evaluated, evaluation will start at the
point where z occurs.
3.3.1 Continuations in Scheme
From a programming perspective, a different description of a continuation can
be given: A continuation is a procedure waiting for the result (of the current
computation). Consider the following Scheme code
(define divide
(lambda (a b)
(/ a b)))
(display (divide 5 3))
The expression (divide 5 3) is evaluated in the context (display []). A
procedural abstraction of that context has the form (lambda (z) (display z)).
Now let us rewrite the code, so that we instead send this continuation to the
current computation, divide, and let divide apply this function to the result
once it is done. This continuation is a function waiting for the result of the
computation of the division.
(define divide
(lambda (a b k)
(k (/ a b))))
(divide 5 3 (lambda (z)
(display z)))
Now divide is passed its continuation as the final parameter k. This is the
idea of continuation-passing-style.
3.3.2 Continuation Passing Style
In continuation-passing-style programming (CPS), one embraces this idea:
Every function is passed its continuation as a final parameter.
All functions are then rewritten to follow this idea. The resulting expressions
will look like the old expressions written inside out.
Example 3.3 (Pythagoras’ formula for hypotenuse). Using Pythagoras’ theorem,
we have the formula for the hypotenuse, c = √(a² + b²), which in Scheme reads
(define pythagoras
(lambda (a b)
(sqrt (+ (* a a) (* b b)))))
which is rewritten into CPS as
(define pythagoras
(lambda (a b k)
(mul a a (lambda (a2)
(mul b b (lambda (b2)
(add a2 b2 (lambda (a2b2)
(sqrt a2b2 k)))))))))
assuming that mul, add and sqrt are also rewritten in CPS versions. 4
One effect of writing a program in CPS is that all intermediate results, such
as a², here named a2, are made explicit. Another effect is that the evaluation
order becomes explicit — a2 is calculated before b2 in the pythagoras example.
With CPS, all functions have access to their continuations. This means that
functions potentially have access to other continuations as well, since continuations
can be passed around. In the example below, safe-divide is a helper function
that divides numbers. What is interesting is that it is given another continuation,
fail, in addition to its own, success. The parent procedure that calls safe-divide
can pass a second continuation, perhaps a top-level procedure, to be called if an
error occurs (b is zero). Then safe-divide can abort the computation by ignoring
the parent procedure and instead calling the top-level procedure fail. Now we
have a functional program that can jump!
Example 3.4 (CPS with multiple continuations). For an example program, see
Appendix A.1.
(define safe-divide
(lambda (a b fail success)
(if (= b 0)
(fail "Division by zero!")
(success (/ a b)))))
4
Since continuations can be passed around, they can also be invoked (called)
multiple times, producing other interesting results. That is left to the reader to
explore.
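As a starting point for that exploration, here is a minimal sketch of our own: a CPS-style procedure that deliberately invokes the continuation it is given twice, so the rest of the computation runs on its own result.

```racket
;; add1-cps-twice receives its continuation k, computes x + 1,
;; and then invokes k twice: once on the result, and once
;; on the result of the first invocation.
(define (add1-cps-twice x k)
  (k (k (+ x 1))))

;; The continuation "multiply by 10" is applied to 6, then to 60.
(display (add1-cps-twice 5 (lambda (z) (* z 10)))) ;; prints 600
```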
3.4 Control Operators
The term control flow refers to the order in which statements of a program are
executed. We are usually not too concerned with the control flow when working
with functional programming, because there are no side effects. The addition
of continuations to a language will however add more imperative features.
We established that the hole in a context represents the current point of
evaluation. It is therefore also the current position of the control flow. An
abstraction of a context — a continuation — is therefore a function that allows
us to return to a certain point of the control flow when we invoke it. In a
functional language, calling another function is typically the only way to jump
to another part of the program, but with access to continuations, the program
may jump anywhere.
To use these ideas, our program must have access to the context. Either we
can write the entire program in continuation-passing style, or we can extend
the language with a control operator operating on the context. Such operators
give terms full access to the context where they are applied. An abstraction of
the current context is called the current continuation.
Since we are operating on contexts, and therefore on the state of the pro-
gram, we may suspect that the order of evaluation of terms — our evaluation
strategy — can affect the outcome of the program. Not only will the choice
of an evaluation strategy affect whether a program halts — we may also lose
confluence.
3.4.1 Operators A and C
Following Griffin 1990 [Gri90], we introduce two control operators for manipulating
the contexts. They were originally introduced by Felleisen 1988 [FWFD88].
It can be assumed below that we use either the call-by-value or the call-by-name
lambda calculus — which one does not matter, unless otherwise specified.
The operator A, for abort, abandons the current context. It has the reduction
rule
E[A(M)]→A M
The operator C, for control, abstracts on the current context and applies its
argument to that abstraction. 9
E[C(M)]→C Mλz.A(E[z])
The term λz.A(E[z]) is an abstraction of a context, a continuation, and fur-
thermore the same context where C was applied — the current continuation.
Expressing the reduction rule in words, we say that C applies its argument to
the current continuation.
What does the application do? The continuation becomes bound to a variable
in M, let us call it k. If k is never invoked (applied), then M will return
normally. However, if k is invoked, then it is applied to some subterm N of M
in some context E1:
E[C(M)] →C Mλz.A(E[z])
. . .
→ E1[kN ]
→ E1[(λz.A(E[z]))N ]
→ E1[A(E[N ])]
→ E[N ]
9 Note that, since we excluded the rule E ::= . . . | λx.E, the term Mλz.A(E[z]) will not
exit the abstraction and reduce to E[z]. The insides of functions are not evaluated before
application.
Why is the continuation not simply λz.E[z]? The operator A is included to
ensure that control exits the context where the continuation is applied. When
we are done evaluating E[N], we do not want to return to E1. Compare this
to Example 3.4 with the safe-divide procedure, where application of the fail
continuation exits the parent procedure of safe-divide, so that control never
returns to the parent procedure.
Note that A can be defined from C by taking a dummy argument d:
E[A(M)]→A E[C(λd.M)]
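To check this definition, we can unfold one step of reduction: C applies λd.M to the current continuation, which the dummy parameter d then simply discards.

```latex
E[\mathcal{C}(\lambda d.M)]
  \;\to_{\mathcal{C}}\; (\lambda d.M)\,(\lambda z.\mathcal{A}(E[z]))
  \;\to_\beta\; M
```

which agrees with the rule E[A(M)] →A M.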
3.4.2 Operators A and C in Racket
Felleisen started in 1990 the team behind the programming language Racket, a
variant of Scheme. Racket has several control operators, including the operators
abort and control with behaviour corresponding to A and C.10
The syntax looks like
(abort expr)
(control k expr)
where k is the name of the variable to which the continuation is bound. If
M = λk.N , then the Racket term corresponding to C(M) is (control k N).
The operator control is evaluated like
E[(control k expr)]→C ((lambda (k) expr) (lambda (z) E[z]))
Example 3.5 (Operators control and abort in Racket). From the Racket
shell:
> (control k (* 3 2)) ;; (* 3 2)
6
> (abort (* 3 2)) ;; (* 3 2) (same as previous)
6
> (+ 4 (control k (* 3 2))) ;; (* 3 2)
6
> (+ 4 (abort (* 3 2))) ;; (* 3 2) (same as previous)
6
> (+ 4 (control k (k (* 3 2)))) ;; (+ 4 (* 3 2))
10
> (+ 4 (control k (* 3 (k 2)))) ;; (* 3 (+ 4 2))
18
> (+ 4 (control k (* 3 (control k1 (k 2))))) ;; (+ 4 2)
6
> (+ 4 (control k (* 3 (abort (k 2))))) ;; (+ 4 2) (same as previous)
6
4
10 The operators are available in the racket/control module. Put (require racket/control)
at the top of your file.
3.4.3 call/cc
Scheme provides the operator call-with-current-continuation, or call/cc
for short, for manipulating the context. It has a behaviour similar to C, except
that it always returns to the context where it was called (which C will not do if
the continuation is never invoked). Adding such an operator K would give the
following reduction rule
E[K(M)]→K E[Mλz.A(E[z])]
If the evaluation exits normally, it will return to the outer E. If the continuation
is invoked, it will return to the inner E.
Example 3.6 (Comparison between control and call/cc). If the continuation
k is never invoked, then control and call/cc act differently. The operator
control will abort, i.e. skip the outer context (+ 4 []), whereas call/cc will
return to that context.
> (+ 4 (control k (* 3 2))) ;; (* 3 2)
6
> (+ 4 (call/cc (lambda (k) (* 3 2)))) ;; (+ 4 (* 3 2))
10
4
Example 3.7 (Loss of confluence). The result of a computation can depend
on the evaluation order. The same expression can have different results in a
call-by-value and call-by-name calculus. This example can even have different
results in (different implementations of) Scheme, since the order of evaluation
of arguments is not specified in the standard.
In a call-by-name calculus, it would depend on the (probably hidden)
implementation of the function +.
(call/cc
(lambda (k)
(call/cc
(lambda (l)
(+ (k 10) (l 20))))))
If k is evaluated first, the result is 10, but if l is evaluated first, it is 20. 4
3.4.4 Use of Control Operators
As pointed out earlier, control operators allow us to jump to any part of the
program. It is basically the equivalent of the imperative GOTO. Abstracting on
the current context corresponds to setting a label, and invoking the continuation
jumps to that label.
For this reason, Scheme’s call/cc has received similar criticism as GOTO.
Excessive use of call/cc can lead to a program where the control flow is very
hard to follow — so called “spaghetti code”. However, there are some useful
high-level applications of call/cc such as exceptions, threads and generators,
to mention a few.
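As a sketch of one such higher-level use (the procedure names are ours), here is an exception-like early exit: a product over a list of numbers that aborts as soon as it encounters a zero, skipping the remaining multiplications.

```racket
;; product: multiply the elements of lst, but if a zero is found,
;; invoke the continuation return to exit the whole loop with 0.
(define (product lst)
  (call/cc
   (lambda (return)
     (let loop ((l lst) (acc 1))
       (cond ((null? l) acc)
             ((zero? (car l)) (return 0)) ;; jump out immediately
             (else (loop (cdr l) (* acc (car l)))))))))

(display (product '(1 2 3 4))) ;; prints 24
(newline)
(display (product '(1 2 0 4))) ;; prints 0
```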
A throw/catch-mechanism — exceptions — is particularly easy to simulate,
because that is exactly how call/cc works. We repeat that call/cc has the
reduction rule
E[K(M)]→K E[Mλz.A(E[z])]
During the evaluation of E[K(M)], the continuation is bound to some vari-
able that we again refer to as k. We then have M = λk.T for some T .
E[K(M)] →K E[Mλz.A(E[z])]
→ E[T [k := λz.A(E[z])]]
If k is never invoked, we have a normal return to the context E. If, during the
evaluation of T , k is applied to some argument N , the rest of the term T is
ignored and N is returned. We have an exceptional return where N is thrown
to the context E.
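A minimal sketch of this throw/catch pattern in Scheme (procedure and variable names are ours): the continuation bound by call/cc plays the role of throw, and the call/cc application itself is the catch.

```racket
;; If b = 0, the symbol 'division-by-zero is thrown to the context
;; of the call/cc application; the division is never performed.
(define (div-or-throw a b)
  (call/cc
   (lambda (throw)
     (when (= b 0) (throw 'division-by-zero))
     (/ a b))))

(display (div-or-throw 6 3)) ;; prints 2
(newline)
(display (div-or-throw 6 0)) ;; prints division-by-zero
```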
3.5 Typed Control Operators
So far we have only considered untyped lambda calculus with our control oper-
ators. Griffin 1990 [Gri90] attempts to give the operator C a type.
We have the reduction rule
E[C(M)]→C Mλz.A(E[z])
and we want that evaluation preserves typing (subject reduction), so both sides
should have the same type.
Let E[z] be of type B if z : A. The continuation should thus be of type
A → B. The term M takes this continuation as an argument and returns a
result of the same type B as E[z], so it has type (A → B) → B. So what
about preservation of types? Well, if E[C(M)] is to be well typed, where the
hole in E has type A and M has type (A → B) → B, then C(M) : A, so
C : ((A → B) → B) → A.
This derivation has an important consequence. We noted before that A(M) =
C(λd.M) for some dummy variable d. If M is a closed term of type B, then
we can deduce any type A for the term A(M). That is, from the typing
judgement M : B we can derive the typing judgement A(M) : A for any type A.
To get a logically consistent typing, we must then have that B is the proposition
which has no proof, B = ⊥. The operator C then has the type of double
negation elimination, ¬¬A→ A. 11
This conclusion suggests a connection between control operators and classical
logic. We will now turn to an extension of the lambda calculus entirely based
on this idea, the λµ-calculus.
11The reader may point out that if B = ⊥, then E[] : ⊥, so the rule can never be applied
because ⊥ is not inhabited. Griffin suggests a solution to this problem, see further [Gri90].
4 λµ-calculus
After Griffin’s discovery of the connection between control operators and classical
logic, several calculi have been constructed. The best known and most studied
is the λµ-calculus, originally introduced by Parigot in [Par92], which will allow
us to extend the Curry-Howard isomorphism to classical logic.
The λµ-calculus is the simply typed λ-calculus enriched with a binder µ for
abstracting on the current continuation. These continuations will be referred
to as addresses and have separate variables. The calculus is typed and satisfies
subject reduction, confluence and strong normalisation.
4.1 Syntax and Semantics
We extend the λ-calculus with a binder µ for abstracting on continuations.
We will initially just consider the simply typed lambda calculus with the type
constructor →. The typing judgements for our new terms will correspond to
the derivation rules in classical logic, considering only the connective → (see
Figure 5). An extension with pairs and injections will be discussed in Section 4.5.
4.1.1 Terms
Definition 4.1 (Raw λµ-terms). The raw terms of the λµ-calculus are defined
as follows:
M,N ::= x | λx.M | MN | [α]M | µα.M
where x is a λ-variable and α is a µ-variable. The µ-variables are also referred
to as addresses.
Remark 4.1. Convention: Roman letters will be used for λ-variables and Greek
letters for µ-variables.
We have extended the lambda-calculus with two new constructs: address
application [·] on terms, and address abstraction µ, which binds free addresses.
These two constructs are control operators.
• Address abstraction abstracts on the current context and the current con-
tinuation is bound to that address. Evaluation of µα.M evaluates M with
the addition that α is a reference to the current context.
• Address application [α]M applies an address α (a continuation) to a term
M .
That is, if we have an expression
E[µα. · · · [α]M · · · ]
where α is applied to the term M , the evaluation of M will exit the current
context and resume where α is bound, i.e. in the context E. Thus the resulting
expression of this jump will be E[M ].
Note that this is somewhat informal since we have neither defined the con-
texts we will use, nor what reduction rules we will allow.
Addresses are thus a special type of variables used for continuations. Ad-
dresses can only be invoked and abstracted, not passed as arguments to functions
like normal λ-terms are.
Γ, A ⊢ A    (ax)

Γ, A ⊢ B
Γ ⊢ A → B    (→ I)

Γ ⊢ A → B    Γ ⊢ A
Γ ⊢ B    (→ E)

Γ, ¬A ⊢ ⊥
Γ ⊢ A    (RAA)
Figure 5: Derivation rules for our subset of classical logic.
We introduce some additional notation. By writing µ .M we will mean µγ.M
for some γ not free in M . By FVµ(M) we denote the set of free addresses in
M .
Remark 4.2 (First class continuations). Since addresses cannot be passed
directly to functions, in programming terms we would say that addresses are
not “first-class objects”. This means that they cannot appear anywhere a
normal λ-term may appear — single addresses α are not listed as terms in
our inductive definition of λµ-terms. However, an address α can be passed
to a function via the term λx.[α]x. Anywhere we want a first-class address
α, we can use the term λx.[α]x instead. So in practice, we have first-class
addresses/continuations.
Remark 4.3. The use of square brackets for address application has nothing to
do with contexts or substitution; this notation for address application follows
the convention in the literature, and merely happens to look similar.
Remark 4.4 (Precedence rules). Convention: The control operators have low-
est precedence. [α]MN means [α](MN).
4.1.2 Typing and Curry-Howard Correspondence
In this chapter we consider a subset of classical logic, with the connective →,
the constant ⊥, and ¬, where ¬A abbreviates A→ ⊥. The derivation rules for
natural deduction are shown in Figure 5. Note that the rule RAA replaces the
rule ⊥E as a stronger version; ⊥E is given by RAA without discharging any
assumptions.
We will now add types to the λµ-calculus. Consistent with the conclusions
of Griffin, we will let addresses (continuations) have the type A → ⊥, where A
is the type of the term M that the address is applied to. We abbreviate A → ⊥
as ¬A. An address application [α]M thus has type ⊥. It can be thought of as
an aborted computation. Abstraction of an address restores the type A that the
address was applied to.
The typing judgements for λµ-calculus are shown in Figure 6. Comparing
the typing judgements with the rules for natural deduction, we see what our
extended Curry-Howard correspondence will consist of: address abstraction
(µ abs) corresponds to the rule (RAA). Address application (µ app) has no
Γ, x : A ⊢ x : A    (ax)

Γ, x : A ⊢ M : B
Γ ⊢ λx.M : A → B    (λ abs)

Γ ⊢ M : A → B    Γ ⊢ N : A
Γ ⊢ MN : B    (λ app)

Γ, α : ¬A ⊢ M : A
Γ, α : ¬A ⊢ [α]M : ⊥    (µ app)

Γ, α : ¬A ⊢ M : ⊥
Γ ⊢ µα.M : A    (µ abs)

Figure 6: Typing judgements for λµ-calculus.
corresponding rule, but it corresponds to deriving ⊥, using rule (→ E) with ¬A
and A as premises. 12
Note particularly when assumptions are discharged.
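As a check of the typing rules, the term λx.[α]x from Remark 4.2 indeed has the type of the address it wraps: if α : ¬A, then

```latex
\frac{\Gamma,\,\alpha:\neg A,\,x:A \;\vdash\; x : A}
     {\Gamma,\,\alpha:\neg A,\,x:A \;\vdash\; [\alpha]x : \bot}\;(\mu\text{ app})
\qquad
\frac{\Gamma,\,\alpha:\neg A,\,x:A \;\vdash\; [\alpha]x : \bot}
     {\Gamma,\,\alpha:\neg A \;\vdash\; \lambda x.[\alpha]x : A \to \bot}\;(\lambda\text{ abs})
```

so λx.[α]x : ¬A, the same type as α, which is why it can stand in for the address as a first-class function.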
We can now construct a Curry-Howard isomorphism for classical logic. The
construction is taken from [SU06].
Theorem 4.1 (The Curry-Howard Isomorphism (for classical logic and the type
constructor →)).
Let types(Γ) denote {A | x : A ∈ Γ}.
(i) Let Γ be a typing environment. If Γ ⊢ M : A, then types(Γ) ⊢ A.
(ii) Let Γ be an assumption set. If Γ ⊢ A, then there are Γ′, M such that
Γ′ ⊢ M : A and types(Γ′) = Γ.
Proof. Induction on the derivation. The set Γ′ can be constructed as
Γ′ = {xA : A | A ∈ Γ}, where xA denotes a fresh variable for every proposition
A. The construction in (ii) is not quite trivial. Addresses are introduced with
(ax) and applied using (λ app); these rules make no distinction between addresses
and variables (recall that single addresses are not themselves λµ-terms). It is
only when addresses become bound that there is a distinction. Therefore the
translation of the rule RAA, which binds addresses, must be done with some
care. We consider two cases when we apply the rule RAA (see again Figure 5).
The premise is Γ,¬A ⊢ ⊥ and the conclusion is Γ ⊢ A. Is ¬A ∈ Γ?
• ¬A ∈ Γ (the assumption ¬A is not discharged).
Introduce a new variable α (i.e. not of the form x¬A, so α /∈ Γ′). By the
induction hypothesis, there is Γ′ ⊢ M : ⊥. Take the typing judgement to
be Γ′ ⊢ (µα : ¬A.M) : A.
• ¬A /∈ Γ (the assumption ¬A is discharged).
By the induction hypothesis, there is Γ′, x¬A : ¬A ⊢ M : ⊥, with
x¬A : ¬A /∈ Γ′ (the variable x¬A may already occur in the typing environment).
This means that x¬A can be free in M, but it does not occur in the form
. . . [x¬A] . . . as required for addresses. We replace all occurrences
of x¬A with λz.[x¬A]z, which gives us a valid term, and then bind the
address. That is, we take
Γ′ ⊢ (µx¬A : ¬A.M [x¬A := λz.[x¬A]z]) : A
as the typing judgement.
12 Not every derivation of ⊥ corresponds to an address application, though. It might just
as well be a normal application (λ app), since variables, not only addresses, may have type ¬A.
4.1.3 Restricted Terms
In practice we will add a requirement on the λµ-terms: address application and
abstraction may only appear in pairs. Then an address invocation will still have
a normal return type.
Definition 4.2 (Restricted λµ-terms). The restricted13 terms of the λµ-calculus
are defined as follows:
M,N ::= x | λx.M | MN | µα.C
and the commands
C ::= [α]M
where x is a λ-variable and α is a µ-variable.
If we ignore typing, any raw term can be transformed into a restricted term
by replacing
• every bare application [α]M, having no accompanying abstraction, with µ .[α]M
• every bare abstraction µα.M, having no accompanying application, with
µα.[α]M.
However, this does not work well with typing, because we then require that α
is applicable to M (i.e. µα.[α]M has the same type as M), which is not always
the case.
Most results work well with both raw and restricted terms; where one of them
is required, this will be stated.
4.2 Computation Rules
The next step is to define reduction rules, to be able to compute values of terms.
4.2.1 Substitution
To define the reduction rules for our extension, we first need to define con-
texts as well as substitution for addresses, so called structural substitution. The
λµ-calculus is a call-by-name calculus, so we will use call-by-name contexts
(compare Definition 3.4).
Definition 4.3 (λµ-contexts). The λµ-contexts are defined recursively
E ::= � | EN
where N is a λµ-term.
13called “restricted” as in [SU06]
Substitution of an address and a context for an address in a term, denoted
M [α := βE], recursively replaces all commands [α]N with [β]E[N ′], where
N ′ := N [α := βE]. For example if E = �, this will just replace all free
occurrences of α with β.
Definition 4.4 (Structural substitution). Structural substitution is recursively
defined:
x[α := βE] := x
(λx.M)[α := βE] := λx.M [α := βE]
(MN)[α := βE] := M [α := βE]N [α := βE]
([α]M)[α := βE] := [β]E[M [α := βE]]
([γ]M)[α := βE] := [γ]M [α := βE] if γ 6= α
(µγ.C)[α := βE] := µγ.C[α := βE]
As with usual substitution of λ-variables, we want the structural substitution
to be capture avoiding, both for λ- and µ-variables.
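A small worked instance of Definition 4.4, of our own, taking the context E = � N:

```latex
([\alpha](\lambda x.x))[\alpha := \beta(\square\,N)]
  \;=\; [\beta]\,\bigl((\lambda x.x)\,N\bigr)
```

the command [α](λx.x) is replaced by [β]E[λx.x]: the addressed term is plugged into the hole of � N and re-addressed to β.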
4.2.2 Reduction
Reduction in λµ-calculus is given by the following rules:
(λx.M)N →β M [x := N ]
µα.[α]M →µη M if α /∈ FVµ(M)
[β]µα.C →µR C[α := β�]
(µα.C)M →µC µα.C[α := α(�M)]
The closure of these rules is denoted by (→µ).
The first rule (→β) is the usual β-reduction. The second rule (→µη) is the
address analogue of η-reduction, hence its name. It corresponds to having a
proof Σ of P, assuming ¬P, deriving ⊥ and then applying the RAA-rule, giving
P again. If ¬P is not discharged in Σ, we could just as well use P directly. We
use tree notation for clarity:
Σ
⋮
P    [¬P]¹
⊥    (→ E)
P    (RAA¹)

⇒

Σ
⋮
P
If the last step does not discharge ¬P anywhere in Σ, this is a detour which can
be eliminated.
The third rule (→µR) is renaming of addresses. Instead of jumping to the
address α, and from there jump to the address β, one can directly jump to the
address β.
The fourth rule (→µC) states that instead of jumping to α, and there apply
the result on a term M , we may instead first do the application, and then jump
to address α.
Iterated application of the fourth rule can be written as
E[µα.C] ↠ µα.C[α := αE]
More generally: instead of jumping to address α and there evaluating the
term in a context, we may first evaluate the term in this context, and then
jump to address α. Thus the jump is pushed towards the end of the evaluation.
Compare this to natural deduction, where one often tries to push the use of
the RAA-rule towards the end, so as to have the rest of the proof intuitionistically
valid.
We add types to the bound variables to clarify the type:
(λx : A.M)N →β M [x := N ]
µα : ¬A.[α]M →µη M if α /∈ FVµ(M)
[β]µα : ¬A.C →µR C[α := β�]
(µα : ¬(A→ B).C)M →µC µα : ¬B.C[α := α(�M)]
Note that the rule (→µC) changes the type of α. The overall type of the term
is unchanged.
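As a worked example combining these rules (our own), assume α /∈ FVµ(N) and reduce:

```latex
(\mu\alpha.[\alpha]\lambda x.x)\,N
  \;\to_{\mu C}\;  \mu\alpha.[\alpha]((\lambda x.x)\,N)
  \;\to_\beta\;    \mu\alpha.[\alpha]N
  \;\to_{\mu\eta}\; N
```

the rule (→µC) pushes the application inside the µ-abstraction, after which the µη-rule removes the now-trivial detour.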
4.2.3 Properties
As with the simply typed λ-calculus, we have the following three properties for
λµ-calculus, which we again state without proof. We refer to [SU06] for a proof.
Theorem 4.2 (Subject reduction). If M : A and M ↠β M′, then M′ : A.
Proof. Induction on the derivation of the type.
Theorem 4.3 (Strong normalisation). Every chain of reductions M →β M1 →β
M2 . . . must reach a normal form Mn after a finite number of reductions.
Theorem 4.4 (Confluence). If A ↠β B and A ↠β C, then there exists a term
D such that B ↠β D and C ↠β D.
4.3 Interpretation
As in the previous chapter about control operators, we present operators from
the racket/control module in Racket which correspond to our operators. As
mentioned before, control operators can be used to implement catch and throw
operators. We will shortly introduce such operators in λµ-calculus. We will also
find a λµ-term for the operator call/cc, and its type.
4.3.1 Implementation in Racket
In Racket, prompt-tags can be used as µ-variables. The functions
call-with-continuation-prompt (abbreviated as call/prompt ) and
abort-current-continuation (abbreviated as abort/cc) then correspond to
µ-abstraction and µ-variable application, respectively.
The syntax for these control operators looks like
µ-abstraction: (call/prompt proc [prompttag handler] args . . .)
µ-application: (abort/cc proc [prompttag])
To simplify the syntax for our purpose, we can define
(define (mu address proc)
(call/prompt proc address identity))
The argument proc must be a procedure that can be called when the corre-
sponding expression should evaluate. We should thus take the corresponding
expression expr and wrap it in an argument-less lambda expression, (lambda
() expr), to prevent it from evaluating before being called. 14
The programs one can write in Racket using prompt-tags are of course a
bit different from what can be achieved in λµ-calculus — for example, Racket
uses call-by-value evaluation and has dynamic typing. Nevertheless, the idea of
setting labels and jumping to them is the same. However, substitution is not
capture-avoiding for prompt-tags in Racket.
Example 4.1. Primality checking of n can be done by testing divisibility with
all numbers [2, n). This procedure uses a jump to exit the testing once a divisor
is found.
(define (prime? n)
(define (divisible-check i)
(if (= (modulo n i) 0)
(abort/cc alpha #f)
#t))
;;; define mu-variable
(define alpha (make-continuation-prompt-tag 'alpha))
(if (> n 1)
(mu alpha (lambda ()
(andmap divisible-check (range 2 n))))
#f))
4
4.3.2 Operators throw and catch
The address application and abstraction can be interpreted as catch and throw
operators in functional languages. An expression of the type µα.M catches any
terms labelled with α in M . The expression [α]N throws (labels) N to the
address α. We define two new operators capturing this idea, but for restricted
terms, as done in [SU06] and [GKM13].
catchα M := µα.[α]M
throwα M := µδ.[α]M    where δ is fresh, δ ∉ FVµ(M)
Since substitution of µ-variables is capture-avoiding, we get statically bound
exceptions: a term will not catch any exceptions thrown by its argument, only
those thrown by itself. This differs from most programming languages where
exceptions conversely are dynamically bound.
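To illustrate (our own example), consider the β-step

(λx.catchα x)(throwα 2) →β catchα′ (throwα 2)

Since the substitution renames the bound address α to a fresh α′, the throw in the argument still refers to the outer, free address α: it is not caught by the catch it is plugged into. With dynamically bound exceptions, it would be.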
14Such a wrapping is called a thunk. It can be used to simulate call-by-name in a call-by-value language.
We can derive reduction rules for throw and catch operators.
catchα M →µη M    if α ∉ FVµ(M)
throwα catchβ M →µR throwα M[β := α□]
catchα catchβ M →µR catchα M[β := α□]
throwα throwβ M →µR throwβ M
catchα throwα M →µR catchα M
E[catchα M] ↠µC catchα M[α := αE]
E[throwα M] ↠µC throwα M
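For example (unfolding the definitions ourselves), the rule catchα throwα M →µR catchα M is obtained as follows:

catchα throwα M ≡ µα.[α](µδ.[α]M) →µR µα.([α]M)[δ := α□] ≡ µα.[α]M ≡ catchα M

since the fresh δ does not occur in [α]M.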
Example 4.2. Consider a variant of the λµ-calculus with natural numbers.
Here we can formulate the term
catchα (K 0) throwα 2
where K is the K-combinator K ≡ λx.λy.x.
A programming language with call-by-value evaluation will evaluate the ar-
guments to K before the application. Thus the expression throwα 2 will be
evaluated and 2 will be thrown, which is the overall result of the expression.
A programming language with call-by-name evaluation only evaluates terms
where it is needed. The arguments to K will not be evaluated before application.
The result of the application K 0 (throwα 2) is therefore 0, which is the overall
result of the expression. 4
Which result is correct? We expect two functional (and therefore side-effect-
free) programming languages to produce the same result after evaluation of
the same expression, given that they halt. This example demonstrates that
operations on the control flow depend on the order of evaluation. According to
the reduction rules we have defined, a correct evaluation of the term would be
catchα ((λx.λy.x) 0) throwα 2 →β
catchα (λy.0) throwα 2 →β
catchα 0 →µη
0
so the “correct” result is 0 which is not surprising, since λµ-calculus is a call-
by-name calculus. If we were to extend our contexts with
E ::= · · · | NE
and add a reduction rule
M(µα.C) → µα.C[α := α(M□)]
then also the call-by-value evaluation of this term would be correct, but then
we would lose confluence, as demonstrated by this example.
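Concretely (our own reduction sequences, using the term of Example 4.2), the extended system would give the term two distinct normal forms:

catchα ((K 0) (throwα 2)) ↠β catchα 0 →µη 0

but also, applying the new rule to the argument first,

catchα ((K 0) (throwα 2)) → catchα (throwα 2) →µR catchα 2 →µη 2

so 0 and 2 would both be normal forms of the same term, and confluence fails.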
4.3.3 Type of call/cc
The formula ((A→ B)→ A)→ A, known as Peirce’s law, is a classical tautol-
ogy that is not intuitionistically valid. The type is therefore not inhabited in the
usual lambda calculus, but in the λµ-calculus it is inhabited by the following
term.
λx : (P → Q)→ P.µα : ¬P.[α]x(λz : P.µβ : ¬Q.[α]z)
Using the catch and throw-operators, as well as omitting the type annota-
tions, we get a more compact term.
λx.catchαx(λz.throwαz)
If we call this term P, we get the following reduction rule for P. It is obtained
by first applying →β and then �µC .
E[PM] ↠ catchα E[M(λz.throwα E[z])]
As noted by Parigot in the original paper on λµ-calculus [Par92], the be-
haviour of this term is the same as that of call/cc . Compare this reduction
rule with the reduction rule for the term K in Section 3.4.3. The throwing to
address α exits the context just like the operator A.
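The behaviour is easy to observe directly in Racket (a standard illustration, not specific to this thesis):

;; call/cc captures the surrounding context (+ 1 []) as k.
;; Invoking (k 2) abandons the pending (+ 10 ...) and delivers
;; 2 to that context, so the whole expression evaluates to 3.
(+ 1 (call/cc
      (lambda (k)
        (+ 10 (k 2)))))

If k is never invoked, the value of the body is returned as usual, just as with the term P.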
4.3.4 A Definition of C
Parigot [Par92] also presents a term that acts as operator C.
If we in P change the application of α : ¬P to an application of the free
variable γ : ¬Q, we instead get the type ((P → Q) → Q) → P . If we
furthermore take ⊥ as Q, we have an inhabitant of the type ¬¬P → P . Note that this
is not a combinator (!), since γ is free.
λx : (P → ⊥)→ ⊥.µα : ¬P.[γ]x(λz : P.µβ : ¬⊥.[α]z)
We call this term N. The somewhat different reduction rule is
E[NM] ↠ µα.[γ]M(λz.throwα E[z])
The difference is that if the continuation λz.throwα E[z] is never invoked
in M, the term does not return to the context E; instead the result is thrown to the
free address γ. Compare with the operator C in Section 3.4.1.
We will later (see Section 4.6) find a combinator inhabiting ¬¬P → P .
4.4 Logical Embeddings
A logical embedding is a translation of theorems in classical logic to counterparts
in intuitionistic logic.
Definition 4.5. The Kolmogorov negative translation translates a formula in
classical propositional logic to a formula in intuitionistic propositional logic. It
is defined as follows:
k(α) := ¬¬α if α is an atomic proposition
k(φ→ ψ) := ¬¬(k(φ)→ k(ψ))
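For example (computed directly from the definition), for atomic propositions P and Q:

k(P → Q) = ¬¬(k(P) → k(Q)) = ¬¬(¬¬P → ¬¬Q)

Note that every subformula receives a double negation.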
Theorem 4.5. ` ψ in classical propositional logic, if and only if ` k(ψ) in
intuitionistic propositional logic.
For a proof, see [SU06].
4.4.1 CPS-translation of λµ-calculus
As discussed in Section 3.4, an alternative way to write a program with control
operators is to write the program in continuation passing style, CPS.
We will now explore this idea and find an inductive CPS-translation for
our terms. The translation is a standard one, found in for example [SU06]. A
program in λµ-calculus will be transformed into a CPS-program in λ-calculus.
The translation of a term M will be denoted by M̄. Since all functions are
single-parameter functions, we modify our previous description of CPS (where
the continuation is the final parameter): all terms must take a continuation as
parameter, usually denoted by k.
Below follows an informal motivation on how λµ-terms should be translated
to CPS. We initially ignore types.
Functions are values, so they should be “returned” with the continuation.
The function body must be transformed though, so the translation of λx.M is
λk.k(λx.M̄).
Variables are placeholders for other terms, so they should be given the con-
tinuation. The terms that will substitute them will take a continuation as first
parameter. The translation of x is λk.xk.
Applications MN are trickier. The terms M and N must be translated, but
then we want to extract M to be able to apply it to N . M̄ should therefore
be given a continuation λm. · · ·m N̄ · · · that applies it to the argument N̄. The
result of the application m N̄ should then be passed the continuation k. The
complete translation of MN is thus λk.M̄(λm.m N̄ k).
How do we deal with addresses? Addresses are similar in behaviour to
continuations. Address abstraction means that we ignore the given continuation
k and supply our own. The continuation we supply is the identity function λd.d
that will return to this point. We also abstract on a continuation α (the address)
that will be visible to inner functions, so they can exit or jump to this point.
The translation of µα.M could thus be λk.(λα.M̄(λd.d))k. But we realise the
outermost k is unnecessary, since α also is a continuation, so the criterion “all
terms take a continuation as parameter” is met by simplifying to λα.M̄(λd.d),
which is our translation.
Address application also means supplying another continuation than the
given one, but the difference is that no new continuation is created — instead
we use one from the scope. The address application [α]M is simply λk.M̄α,
where k is ignored and α from the scope is provided instead.
Definition 4.6 (CPS-translation of λµ-terms). The CPS-translation of a (raw)
λµ-term M , denoted M̄, is defined inductively:
x̄ = λk.xk
(λx.M)‾ = λk.k(λx.M̄)
(MN)‾ = λk.M̄(λm.m N̄ k)
([α]M)‾ = λk.M̄α
(µα.M)‾ = λα.M̄(λd.d)
where k ∉ FV(M̄, N̄) and m, d are fresh variables.
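As a check (our own computation), translating the term µα.[α]x gives

(µα.[α]x)‾ = λα.([α]x)‾(λd.d) = λα.(λk.x̄α)(λd.d) →β λα.x̄α = λα.(λk.xk)α →β λα.xα

which, up to renaming of the bound variable, is the translation x̄ = λk.xk of x itself, as expected from the rule (→µη).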
Is this translation well-typed, and how? What is the type of a continuation?
Particularly, what is the type of the “top-continuation”, the continuation that
will be applied to the final result? Assume applying this top-continuation to the
final result yields a term of some special result type R. Then a continuation k
taking a term x of type A and producing the final result must have type A → R.
That is, the CPS-translation λk.xk of x must have type (A → R) → R.
Since addresses are translated directly to continuations, and they have type
A→ ⊥, we see that ⊥ is an appropriate choice as R.
The typed versions of the translations look as follows, assuming that A,A1, A2 are base
types:
x : A
  x̄ = λk : ¬A.xk,   where x : ¬¬A in the translated context
λx.M : A1 → A2
  (λx.M)‾ = λk : ¬(¬¬A1 → ¬¬A2).k(λx : ¬¬A1.M̄),   where M̄ : ¬¬A2
MN : A2, with M : A1 → A2
  (MN)‾ = λk : ¬A2.M̄(λm : ¬¬A1 → ¬¬A2.m N̄ k),   where M̄ : ¬¬(¬¬A1 → ¬¬A2) and N̄ : ¬¬A1
[α]M : ⊥, with M : A and α : ¬A
  ([α]M)‾ = λk : ¬⊥.M̄α,   where M̄ : ¬¬A, α : ¬A
µα : ¬A.C : A, with C : ⊥
  (µα.C)‾ = λα : ¬A.C̄(λd : ⊥.d),   where C̄ : ¬¬⊥
If A,A1, A2 are not base types, we need to translate them recursively as well.
What does the general case look like? We get the following result:
Theorem 4.6 (CPS-translation corresponds to Kolmogorov translation). If
Γ ⊢ M : A in λµ-calculus, and Γ̄ = {x : k(A) | (x : A) ∈ Γ}, then Γ̄ ⊢ M̄ : k(A)
in simply typed λ-calculus.
Proof. Induction on the typing derivation of M .
We also want “equality” on terms to be preserved. Terms should not only
have the same type, but they should be equal in some sense: up to reduction.
Lemma 4.7 (Term equality). Let ↠∗ be a relation with the Church-Rosser
property. Define a relation
A ∼ B ⇔def ∃C : A ↠∗ C and B ↠∗ C
The relation ∼ is an equivalence relation.
Proof. The relation is obviously symmetric and reflexive. For transitivity, assume
A ∼ B and B ∼ C, that is, for some A′ and C′ we have A ↠∗ A′,
B ↠∗ A′, B ↠∗ C′ and C ↠∗ C′. Then by the Church-Rosser property, considering
B, A′ and C′, there is an E such that A′ ↠∗ E and C′ ↠∗ E. Hence A ↠∗ E
and C ↠∗ E, so A ∼ C.

A         B         C
  ↘     ↙   ↘     ↙
    A′          C′
      ↘        ↙
          E
We will denote this equivalence relation by =β and =µ for →β and →µ
respectively.
For our preservation of equivalence, we will need restricted terms. If we
use restricted terms, we can simplify the translation of address application and
abstraction.
x̄ = λk.xk
(λx.M)‾ = λk.k(λx.M̄)
(MN)‾ = λk.M̄(λm.m N̄ k)
([α]M)‾ = M̄α
(µα.C)‾ = λα.C̄
When only restricted terms are considered, this CPS-translation is sometimes
presented instead, as in [GKM13].
We will now show that the CPS-translation preserves equality. The proof is
a detailed variant of the proof in [SU06].
Theorem 4.8 (Equality preservation of CPS-translation). Let M and N be
restricted λµ-terms. Then M̄ =β N̄ if and only if M =µ N .
Proof. It is sufficient to show that A →µ B implies Ā =β B̄. We have to
study our reduction rules and see what happens to substitution under CPS-translation.
(→β) (M[x := N])‾ =β M̄[x := N̄]
(→µR) (C[α := β□])‾ =β C̄[α := β]
(→µC) (C[α := α(□M)])‾ =β C̄[α := λc.c M̄ α]
Then also note that, since all CPS-translations of terms are abstractions, with
a continuation as parameter, we have λk.P̄ k =β P̄, where k ∉ FV(P̄).
Now we need to show that
(→β) ((λx.M)N)‾ =β (M[x := N])‾
(→µη) (µα.[α]M)‾ =β M̄ if α ∉ FVµ(M)
(→µR) ([β]µα.C)‾ =β (C[α := β□])‾
(→µC) ((µα.C)M)‾ =β (µα.C[α := α(□M)])‾
We take each of the rules in order.
(→β)
((λx.M)N)‾ = λk.(λk.k(λx.M̄))(λm.m N̄ k)
↠β λk.(λx.M̄)N̄ k
→β λk.M̄[x := N̄]k
=β M̄[x := N̄]
=β (M[x := N])‾
(→µη) We assume α ∉ FVµ(M).
(µα.[α]M)‾ = λα.(λk.M̄α)(λd.d)
↠β λα.M̄α
=β M̄
(→µR) Assuming raw terms, we show a weaker statement, which implies the
stronger statement if we restrict ourselves to restricted terms.
([β]µα.[γ]M)‾ = λk.(λα.(λk.M̄γ)(λd.d))β
↠β λk.(M̄γ)[α := β]
=β ([γ]M[α := β□])‾
The restriction we have made is that C must be of the form [γ]M .
(→µC)
((µα.C)M)‾ = λk.(λα.C̄(λd.d))(λc.c M̄ k)
→β λk.(C̄[α := λc.c M̄ k])(λd.d)
≡α λα.(C̄[α := λc.c M̄ α])(λd.d)
=β (µα.C[α := α(□M)])‾
Example 4.3. As pointed out in the proof, the rule (→µR) is exactly where
equality fails for raw terms. For example
[β]µα.x →µR x
but ([β]µα.x)‾ = λk.(λα.(λk.xk)(λd.d))β
and x̄ = λk.xk
where λk.xk ≠β λk.(λα.(λk.xk)(λd.d))β
The restriction of the rule (→µR) to a new rule (→µR∗) is sufficient to get
equality also for raw terms.
[β]µα.[γ]M →µR∗ [γ]M[α := β□]
4
Are the raw terms too unrestricted? Will restricted terms be sufficient?
No, many of our classical tautologies require raw terms to be inhabited in λµ-
calculus (see Section 4.6). But as a system for computation, the restricted terms
are sufficient, because all terms can be rewritten to restricted terms if we allow
the type of the term to change.
4.5 Extension with Conjunction and Disjunction
So far we have only considered classical logic with the connective →, corre-
sponding to λµ-calculus with the type constructor →. The λµ-calculus can be
extended with type constructors for ∧ and ∨, as done with the λ-calculus in
Section 2.2.1. This gives us a complete Curry-Howard-correspondence.
A more interesting extension is to define the connectives in λµ-calculus, as
done in [SU06]. A similar construction, though not in λµ-calculus, was done
by Griffin [Gri90]. It is essentially the same construction, but using λ-calculus
with the operators A and C instead.
4.5.1 Definitions in λµ-calculus
The connectives ∧ and ∨ cannot be defined through → (and ⊥) in intuitionistic
logic, but definitions can easily be found in classical logic using truth tables.
We will use these definitions, which are symmetrical in A and B:
OR: A ∨B := (A→ ⊥)→ (B → ⊥)→ ⊥
AND: A ∧B := (A→ B → ⊥)→ ⊥
In λµ-calculus we can find corresponding terms and find a computational mean-
ing of these definitions from classical logic. That is, we can find definitions of
inl(), inr(), case(, , ), pair(, ), fst(), snd() instead of adding them as axioms.
We start with the definition of ∨:
A ∨B := (A→ ⊥)→ (B → ⊥)→ ⊥
inl() : x ↦ inl(x) := λx.λf1.λf2.f1x
inr() : x ↦ inr(x) := λx.λf1.λf2.f2x
case(, , ) : (e, g1, g2) ↦ case(e, g1, g2) := λe.λg1.λg2.µα.e(λx.[α]g1x)(λx.[α]g2x)
We add type annotations to bound variables, to clarify the type:
inl() := λx : A.λf1 : ¬A.λf2 : ¬B.f1x
inr() := λx : B.λf1 : ¬A.λf2 : ¬B.f2x
case(, , ) := λe : A ∨B.λg1 : A→ C.λg2 : B → C.µα : ¬C.e(λx : A.[α]g1x)(λx : B.[α]g2x)
Note that if we ignore typing, the terms for inl() and inr() already solve
the task for untyped λ-calculus. They are injection terms that only need to be
applied to two selection functions f1, f2 and use the correct one, depending on
whether it was a left or right injection.
The problem is then with the typing: something of type ⊥ must be returned,
so it is impossible to return the result g_i x, which is of type C, or in any other way
extract it. The only solution is to throw the result to a level above and simulate
a return of type ⊥.
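As a sanity check (our own computation), destructing a left injection yields the expected reduction:

case(inl(a), g1, g2) ↠β µα.inl(a)(λx.[α]g1x)(λx.[α]g2x) ↠β µα.[α]g1a →µη g1a

provided α ∉ FVµ(g1a).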
The definition of ∧ follows:
A ∧B := (A→ B → ⊥)→ ⊥
pair(, ) : (a, b) ↦ pair(a, b) := λa.λb.λs.sab
fst() : p ↦ fst(p) := λp.µα.p(λa.λb.[α]a)
snd() : p ↦ snd(p) := λp.µα.p(λa.λb.[α]b)
We add types to bound variables:
pair(, ) : (a, b) ↦ pair(a, b) := λa : A.λb : B.λs : A→ B → ⊥.sab
fst() : p ↦ fst(p) := λp : A ∧B.µα : ¬A.p(λa : A.λb : B.[α]a)
snd() : p ↦ snd(p) := λp : A ∧B.µα : ¬B.p(λa : A.λb : B.[α]b)
As with injection, if types are disregarded, the pair constructor already solves
the task for untyped λ-calculus. Applied to a selector, the correct component
will be extracted. This is insufficient in a typed calculus, because the return
type should be ⊥, not A or B. Therefore an address abstraction and application
must be used, to simulate a return of type ⊥, when we in fact return one of the
components of the pair.
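Again as a check (our own computation), projecting from a pair reduces as expected:

fst(pair(a, b)) →β µα.pair(a, b)(λa′.λb′.[α]a′) ↠β µα.[α]a →µη a

provided α ∉ FVµ(a).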
See Appendix A.3 for an implementation in Racket of the pair and injection
constructors and destructors using λµ-calculus. This implementation uses
continuation prompts, the Racket equivalent of addresses, described in
Section 4.3.1. Some work has been put into simulating a typed language, since
Racket is untyped. The implementation does type checks in constructors and
destructors as well as in address application and abstraction.
4.5.2 A Complete Correspondence
With our extension with pairs and injections, we get a complete Curry-Howard
correspondence, for classical logic with all logical connectives. The theorem
statement and proof are as before.
Since we have not added any rules to our calculus, our results from previ-
ously hold. The λµ-calculus with pairs and injections is confluent and strongly
normalising.
We have another result, quite unexpected. If we can ignore the use of
addresses, except in the definition of pairs and injections, we have exactly
the λ-calculus with type constructors →, ∧, ∨, corresponding to intuitionis-
tic logic. This means that the proofs of strong normalisation and confluence for
λµ-calculus also prove these properties for λ-calculus extended with pairs and
injections.
4.6 Terms for Classical Tautologies
When we introduced the concept of a proof being a construction, we listed sev-
eral tautologies (in Boolean algebras) which had no constructions (see Example 2.3).
In our search for constructions for them, we introduced continuations,
control operators and finally the λµ-calculus. This now allows us to find terms
for these propositions, since all classical tautologies, when interpreted as types,
are inhabited in the λµ-calculus, by the Curry-Howard correspondence. The
reader familiar with the programming language Coq can compare the terms
below with the Coq-terms presented in Appendix B.
We have already seen the term for Peirce’s law, as it corresponds to call/cc
by the Curry-Howard correspondence (see Section 4.3.3). We include it here
again in our list.
Example 4.4. Peirce’s law, ((P → Q)→ P )→ P , is inhabited by the term,
λx : (P → Q)→ P.µα : ¬P.[α]x(λp : P.µβ : ¬Q.[α]p)
We have also written it more compactly as
λx.catchαx(λz.throwαz)
4
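In Racket, this term can be written directly with call/cc (our own illustration; the name peirce is ours):

(define (peirce f)
  (call/cc (lambda (k) (f k))))

;; (peirce (lambda (esc) 42)) evaluates to 42, since esc is ignored.
;; (peirce (lambda (esc) (+ 1 (esc 41)))) evaluates to 41,
;; since invoking esc exits past the pending (+ 1 ...).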
We now present terms of other classical tautologies. If we change Peirce’s
law to ((P → Q) → Q) → P , we get another classical tautology, best known
when we take Q as ⊥ and get (¬¬P → P ). Despite the similarities, this term
cannot be written as a restricted term, only as a raw term.
Example 4.5. Reductio ad absurdum, (¬¬P → P ), is inhabited by the term
λx.µπ.x(λp.[π]p)
and type annotated
λx : ¬¬P.µπ : ¬P.x(λp : P.[π]p)
4
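To see how this term computes (our own reduction, using (→β) and the iterated (→µC)): applying it to M in a context E gives

E[(λx.µπ.x(λp.[π]p))M] →β E[µπ.M(λp.[π]p)] ↠µC µπ.M(λp.[π]E[p])

assuming π is not free in M, so M receives the current context E, reified as the continuation λp.[π]E[p].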
The commonly used proof technique proof by contradiction is, not surpris-
ingly, also not valid intuitionistically. Here is a construction in λµ-calculus.
Example 4.6. Proof by contradiction, (¬Q → ¬P ) → (P → Q), is inhabited
by the term
λx.λp.µα.x(λq.[α]q)p
λx : (¬Q→ ¬P ).λp : P.µα : ¬Q.x(λq : Q.[α]q)p
4
Another interesting combinator comes from de Morgan's laws. Three of de
Morgan's laws are inhabited in λ-calculus; the fourth is not. Here are the first
three.
¬(P ∨Q)→ (¬P ∧ ¬Q) λc.pair(λp.c inl(p), λq.c inr(q))
(¬P ∧ ¬Q)→ ¬(P ∨Q) λc.λd.case(d, fst(c), snd(c))
(¬P ∨ ¬Q)→ ¬(P ∧Q) λd.λc.case(d, λp.p fst(c), λq.q snd(c))
The fourth follows here in λµ-calculus.
Example 4.7. Negation of conjunction (de Morgan’s laws), ¬(P ∧Q)→ (¬P ∨
¬Q), is inhabited by the term
λx.µγ.x(pair(µα.[γ]inl(λp.[α]p), µβ.[γ]inr(λq.[β]q)))
Annotated with types, the term looks like
λx : ¬(P∧Q).µγ : ¬(¬P∨¬Q).x(pair(µα : ¬P.[γ]inl(λp : P.[α]p), µβ : ¬Q.[γ]inr(λq : Q.[β]q)))
4
Another interesting proposition is that RAA implies the Principle of Excluded
Middle (PEM). Neither RAA nor PEM is derivable in intuitionistic
logic, but when we add the derivation rule double negation elimination (RAA)
to natural deduction and get classical logic, both are derivable. Nonetheless,
“RAA → PEM” is not derivable in intuitionistic logic. This is initially quite
confusing, which is what makes this proposition interesting.
The reason that there is a difference between adding a derivation rule RAA
and showing an implication from RAA, is that the derivation rule really looks
like ∀P.(¬¬P → P ). That is, we may eliminate double negations in any formula,
while the implication from RAA only allows us to eliminate double negations
in front of P . As we see in the inhabitant below, we eliminate double negation
not in front of P , but in front of P ∨ ¬P .
Example 4.8. RAA implies PEM, (¬¬P → P ) → (P ∨ ¬P ), is inhabited by
the term
λr.µα.[α]inl(r(λn.[α]inr(n)))
or type annotated
λr : ¬¬P → P.µα : ¬(P ∨ ¬P ).[α]inl(r(λn : ¬P.[α]inr(n)))
4
Understanding the term for the principle of the excluded middle itself, is
possibly the hardest, since it is not a function.
Example 4.9 (Principle of Excluded Middle). The following term inhabits the
principle of excluded middle, P ∨ ¬P :
µα : ¬(P ∨ ¬P ).[α]inr(λp : P.µβ : ¬⊥.[α]inl(p))
Skipping the µ-abstraction of the address β is also an inhabitant, but this way
we get a restricted term. We can write it with catch and throw-operators:
catchαinr(λp : P.throwαinl(p))
If we denote this term by M, we have the following reduction rule
E[M] ↠µ catchα E[inr(λp : P.throwα E[inl(p)])]
The “result” of this computation is that both computations, for terms of P
and ¬P respectively, are done in parallel. 4
Like with the combinator P, which acts like call/cc, two computations are
done in parallel. But P differs in that one of these computations will
eventually become the final result: either the continuation is ignored, or it is
used. If the continuation is used, that computation becomes the final result;
if it is ignored, the outer computation becomes the final result. However,
with the combinator M, neither of the two computations can be eliminated (else
we would know whether we have P or ¬P ).
Example 4.10 (Eliminating the Principle of Excluded Middle). If we try to
eliminate M, with the case(, , )-destructor, maybe we can simplify it further?
We first observe what happens if we rewrite M using the classical definitions
of disjunction:15
catchα λf1.λf2.f2(λp.throwα(λf1.λf2.f1p))
Now if this term is applied to two selector functions F1 : ¬P and F2 : ¬¬P , we
get
(catchα λf1.λf2.f2(λp.throwα(λf1.λf2.f1p)))F1F2 ↠µC
catchα ((λf1.λf2.f2(λp.throwα((λf1.λf2.f1p)F1F2)))F1F2) ↠β
catchα F2(λp.throwα F1p)
15Type annotated, the term looks like
µα : ¬(P ∨ ¬P ).[α]λf1 : ¬P.λf2 : ¬¬P.f2(λp : P.µβ : ¬⊥.[α](λf1 : ¬P.λf2 : ¬¬P.f1p))
Remember that the type of α usually changes when we apply the rule (→µC).
Here the new type becomes ¬⊥.
We now try to use the case(, , )-destructor, with two functions G1 : P → Q
and G2 : ¬P → Q. We expect to get a term of type Q.
case(M, G1, G2) ↠β
µβ.(catchα inr(λp : P.throwα inl(p)))(λx.[β]G1x)(λx.[β]G2x) ↠µ
µβ.(catchα [β]G2(λp.throwα [β]G1p))
If we call this term M̂, we again get a similar result. (Note that since β : ¬Q,
we have M̂ : Q as expected.)
E[M̂] ↠µ µβ.(catchα [β]E[G2(λp.throwα [β]E[G1p])])
So we did not advance much further — we skipped the inr() and inl() con-
structors, but introduced another address β. 4
The next classical tautology is proof by cases. We do not know whether we
have P or ¬P , but if we can show that either of them implies Q, then we must
have Q. This is written (P → Q) ∧ (¬P → Q) → Q, or, using currying for a less
complex term, (P → Q) → (¬P → Q) → Q. The reasoning behind this
term builds on the principle of excluded middle, which is why it cannot be constructive.
Example 4.11. Proof by cases, (P → Q) → (¬P → Q) → Q, is inhabited by
the term
λp : P → Q.λn : ¬P → Q.µα : ¬Q.[α]n(λx : P.[α]px)
or omitting type annotations
λp.λn.µα.[α]n(λx.[α]px)
Note that we can also write it as
λp : P → Q.λn : ¬P → Q.case(M, p, n)
using the combinator M for the principle of excluded middle. 4
5 References
References
[Car13] Jesper Carlström. Logic. Matematiska institutionen, Stockholms
Universitet, 2013.
[FWFD88] Matthias Felleisen, Mitch Wand, Daniel Friedman, and Bruce Duba.
Abstract continuations: A mathematical semantics for handling full
jumps. In Proceedings of the 1988 ACM Conference on LISP and
Functional Programming, LFP ’88, pages 52–62, New York, NY,
USA, 1988. ACM.
[GKM13] Herman Geuvers, Robbert Krebbers, and James McKinna. The
λµT-calculus. volume 164 of Annals of Pure and Applied Logic,
pages 676–701. Elsevier B.V., 2013.
[Gri90] Timothy G. Griffin. A formulae-as-type notion of control. In Pro-
ceedings of the 17th ACM SIGPLAN-SIGACT Symposium on Prin-
ciples of Programming Languages, POPL ’90, pages 47–58, New
York, NY, USA, 1990. ACM.
[HS08] J. Roger Hindley and Jonathan P. Seldin. Lambda-Calculus and
Combinators: An Introduction. Cambridge University Press, New
York, NY, USA, 2 edition, 2008.
[Par92] Michel Parigot. λµ-calculus: An algorithmic interpretation
of classical natural deduction. In Proceedings of the International
Conference on Logic Programming and Automated Reasoning,
LPAR ’92, pages 190–201, London, UK, 1992. Springer-Verlag.
[R7R13] Revised7 Report on the algorithmic language Scheme (small lan-
guage). http://trac.sacrideo.us/wg/wiki/R7RSHomePage, 2013.
[Roj97] Raúl Rojas. A tutorial introduction to the lambda calculus.
http://www.inf.fu-berlin.de/lehre/WS03/alpi/lambda.pdf, 1997.
[SU06] Morten Heine Sørensen and Paweł Urzyczyn. Lectures on the Curry-
Howard Isomorphism, volume 149 of Studies in Logic and the Foun-
dations of Mathematics. Elsevier, 2006.
A Code in Scheme/Racket
Code examples in Scheme and Racket.
A.1 Continuations with CPS
This program uses the procedure safe-div mentioned in Example 3.4 (here
named safe-divide). The main program uses elemwise-divide to divide two
lists element-wise. The entire program is written in CPS. If the division fails,
safe-divide aborts by invoking the top-continuation with the message
“Division by zero!”. Then control never returns to elemwise-divide.
Programmers might notice that elemwise-divide uses tail recursion. This
is essential for the construction to work, ensuring that the recursive call only
runs if continue-here is invoked. It is also much more memory efficient when
working with continuations, since with tail recursion the parent continuation is
always reused in the recursion. CPS-programs might otherwise consume a lot of memory.
#lang racket
(define safe-divide
(lambda (a b fail success)
(if (= b 0)
(fail "Division by zero!")
(success (/ a b)))))
(define elemwise-divide
(lambda (list1 list2 k)
;; define tail recursive helper function
(define iter
(lambda (list1 list2 result k)
;; base case — empty list
(if (or (empty? list1) (empty? list2))
(k result)
;; else — construct continuation at this point
(let [(continue-here
(lambda (the-quotient)
;; make recursive call
(iter (rest list1) (rest list2)
(append result (list the-quotient)) k)))]
;; and call safe-divide with that continuation
(safe-divide (first list1) (first list2) k continue-here)))))
(iter list1 list2 empty k)))
(define top-continuation
(lambda (result)
(display result)
(newline)))
;; run program
(elemwise-divide (list 1 2 3 6) (list 5 0 3 4) top-continuation)
A.2 Continuations with call/cc
A variant of the previous program (safe-div example), using call/cc instead
of CPS, for comparison. Tail recursion is no longer required, but it has been
kept, so that this program will be similar to the previous one.
#lang racket
(define safe-divide
(lambda (a b fail success)
(if (= b 0)
(fail "Division by zero!")
(success (/ a b)))))
(define elemwise-divide
(lambda (list1 list2 k)
;; define tail recursive helper function
(define iter
(lambda (list1 list2 result k)
;; base case — empty list
(if (or (empty? list1) (empty? list2))
(k result)
;; else — construct the current continuation
(let [(the-quotient
(call/cc
(lambda (continue-here)
;; call safe-divide with that continuation
(safe-divide (first list1) (first list2) k continue-here))))]
;; and make recursive-call
(iter (rest list1) (rest list2)
(append result (list the-quotient)) k)))))
(iter list1 list2 empty k)))
;; run program
(display
(call/cc
(lambda (top-continuation)
(elemwise-divide (list 1 2 3 6) (list 5 0 3 4) top-continuation))))
(newline)
A.3 Classical Definitions of Pairing and Injection
The classical definitions of pairing and injection. Scheme is an untyped
language and does few type checks. The tricky part of defining pairing and
projection is to get the types correct, so we need to do type checks at every application.
Therefore, Racket contracts have been used to check the typing. Since contracts
cannot take parameters, we need to first create the constructors and destructors
for the specific combination of types that we want to use. I.e. to use a pairing
constructor for types A and B, we need to call ((pair A B) arg1 arg2).
There is a custom type for ⊥, bot, which consists of the value 'bot.
A.3.1 Base Definitions
#lang racket
;; mu.rkt — address abstraction & application operators
(require racket/control)
(provide (all-defined-out))
;;;;; Some type aliases
(define bot 'bot)
(define bot/c bot)
(define type/c (-> any/c boolean?))
(define address/c pair?)
;;;;;; Control operators
(define/contract (make-address type name)
(-> type/c symbol? address/c)
(cons type (make-continuation-prompt-tag name)))
(define/contract (mu address expr-thunk)
(-> address/c (-> any/c) any/c)
(define/contract (func addr expr-th)
(-> continuation-prompt-tag? (-> bot/c) (car address))
(call/prompt expr-th addr identity))
(func (cdr address) expr-thunk))
(define/contract (ab address expr-value)
(-> address/c any/c bot/c)
(define/contract (func addr expr-val)
(-> continuation-prompt-tag? (car address) bot/c)
(abort/cc addr expr-val)
bot)
(func (cdr address) expr-value))
(define address-abstract mu)
(define address-apply ab)
A.3.2 Disjunction
#lang racket
;; disjunction.rkt — disjunction operators
(require "mu.rkt")
(provide inl inr or-case)
;; Some type aliases
(define (left/c t)
(-> t bot/c))
(define (right/c t)
(-> t bot/c))
(define (injection/c t0 t1)
(-> (left/c t0) (right/c t1) bot/c))
;;;; Left constructor
(define/contract (inl t0 t1)
(-> type/c type/c any/c)
(define/contract (constructor-left x)
(-> t0 (injection/c t0 t1))
(lambda (left right)
(left x)))
constructor-left)
;;;; Right constructor
(define/contract (inr t0 t1)
(-> type/c type/c any/c)
(define/contract (constructor-right x)
(-> t1 (injection/c t0 t1))
(lambda (left right)
(right x)))
constructor-right)
;;;; Destructor
(define/contract (or-case t0 t1 t2)
(-> type/c type/c type/c any/c)
(define/contract (case-apply inj left right)
(-> (injection/c t0 t1) (-> t0 t2) (-> t1 t2) t2)
(define alpha (make-address t2 'alpha))
(define/contract (retrieve-left x)
(left/c t0)
(ab alpha (left x)))
(define/contract (retrieve-right x)
(right/c t1)
(ab alpha (right x)))
(mu alpha (lambda ()
(inj retrieve-left retrieve-right))))
case-apply)
A.3.3 Conjunction
#lang racket
;; conjunction.rkt — conjunction operators
(provide pair fst snd)
(require "mu.rkt")
;; Some type aliases
(define (select/c t0 t1)
(-> t0 t1 bot/c))
(define (pair/c t0 t1)
(-> (select/c t0 t1) bot/c))
;;;;;; Constructor
(define/contract (pair t0 t1)
(-> type/c type/c any/c)
(define/contract (pair-constructor a b)
(-> t0 t1 (pair/c t0 t1))
(lambda (selector)
(selector a b)))
pair-constructor)
;;;;; Left destructor
(define/contract (fst t0 t1)
(-> type/c type/c any/c)
(define/contract (fst-selector pair)
(-> (pair/c t0 t1) t0)
(define alpha (make-address t0 ’alpha))
(define/contract (fst-retrieve a b)
(select/c t0 t1)
(ab alpha a))
(mu alpha (lambda ()
(pair fst-retrieve))))
fst-selector)
;;;;; Right destructor
(define/contract (snd t0 t1)
(-> type/c type/c any/c)
(define/contract (snd-selector pair)
(-> (pair/c t0 t1) t1)
(define alpha (make-address t1 ’alpha))
(define/contract (snd-retrieve a b)
(select/c t0 t1)
(ab alpha b))
(mu alpha (lambda ()
(pair snd-retrieve))))
snd-selector)
A.3.4 Examples
Some examples of the constructions in use.
#lang racket
;; examples.rkt — Examples of conjunction and disjunction
(require "conjunction.rkt")
(require "disjunction.rkt")
;;;;;;; Conjunction examples
> (define my-pair ((pair number? symbol?) 10 ’hi))
> ((fst number? symbol?) my-pair)
10
> ((snd number? symbol?) my-pair)
’hi
;;;;;;; Disjunction examples
> (define foo ((inl number? symbol?) 20))
> (define bar ((inr number? symbol?) ’hello))
>
> ((or-case number? symbol? string?) foo number->string symbol->string)
"20"
> ((or-case number? symbol? string?) bar number->string symbol->string)
"hello"
B Code in Coq
Below are the derivations of the classical tautologies that were given terms in
Section 4.6, as well as the first three of de Morgan's laws, also presented there.
RAA has been added as an axiom in order to prove these theorems. Care has
been taken to give the variables the same names as in the terms. The reader can
run the Coq code and compare the terms; they are indeed very similar. One
difference is how RAA is handled: in Coq, abstracting on the address α is
done in two steps, apply RAA. followed by intro alpha..
Another difference, although not visible here, is that the addresses are first-class
objects here, so we may write simply alpha instead of (fun p: P => (alpha p)).
That is, we need not write λx.[α]x instead of α. Nevertheless, we have used the
more verbose syntax λx.[α]x below for the sake of similarity.
Theorem RAA (P : Prop):
~(~P) -> P.
Proof.
Admitted.
Definition peirce (P Q:Prop) :
((P->Q)->P)->P.
Proof.
intro x.
apply RAA.
intro alpha.
apply alpha.
apply x.
intro p.
apply RAA.
intro _beta.
exact (alpha p).
Qed.
Definition raa (P : Prop) :
~(~P) -> P.
Proof.
intro x.
apply RAA.
intro pi.
apply x.
exact (fun p: P => (pi p)).
Qed.
Definition proof_by_contradiction (P Q:Prop):
(~Q->~P) -> (P->Q).
Proof.
intro x.
intro p.
apply RAA.
intro alpha.
apply x.
exact (fun q:Q => (alpha q)).
exact p.
Qed.
Definition morgan1 (P Q: Prop):
~(P \/ Q) -> (~P /\ ~Q).
Proof.
intro c.
split.
exact (fun p: P => (c (or_introl p))).
exact (fun q: Q => (c (or_intror q))).
Qed.
Definition morgan2 (P Q: Prop) :
(~P /\ ~Q) -> ~(P \/ Q).
Proof.
intros c d.
case d.
exact (proj1 c).
exact (proj2 c).
Qed.
Definition morgan3 (P Q: Prop):
(~P \/ ~Q) -> ~(P /\ Q).
Proof.
intros d c.
case d.
exact (fun p: ~P => (p (proj1 c))).
exact (fun q: ~Q => (q (proj2 c))).
Qed.
Definition morgan4 (P Q: Prop):
~(P /\ Q) -> (~P \/ ~Q).
Proof.
intro x.
apply RAA.
intro gamma.
apply x.
split.
apply RAA.
intro alpha.
apply gamma.
exact (or_introl (fun p: P => alpha p)).
apply RAA.
intro _beta.
apply gamma.
exact (or_intror (fun q: Q => _beta q)).
Qed.
Definition RAA_implies_PEM (P: Prop) :
(~~P -> P) -> (P \/ ~P).
Proof.
intro r.
apply RAA.
intro alpha.
apply alpha.
apply or_introl.
apply r.
exact (fun n:~P => (alpha (or_intror n))).
Qed.
Definition PEM (P: Prop):
P \/ ~P.
Proof.
apply RAA.
intro alpha.
apply alpha.
apply or_intror.
intro p.
apply RAA.
intro _beta.
apply (alpha (or_introl p)).
Qed.
Definition proof_by_cases (P Q: Prop) :
(P -> Q) -> (~P -> Q) -> Q.
Proof.
intros p n.
apply RAA.
intro alpha.
apply alpha.
apply n.
exact (fun x:P => (alpha (p x))).
Qed.
Definition proof_by_cases_using_pem (P Q: Prop) :
(P -> Q) -> (~P -> Q) -> Q.
Proof.
intros p n.
assert (pem : P \/ ~P).
apply PEM.
case pem.
exact p.
exact n.
Qed.