<strong>Abstract</strong>: <code>(λa M)</code>
</blockquote>
We'll tend to write <code>(λa M)</code> as just `(\a M)`, so we don't have to write out the markup code for the <code>λ</code>. You can yourself write <code>(λa M)</code> or `(\a M)` or `(lambda a M)`.
<blockquote>
<strong>Application</strong>: <code>(M N)</code>
    (x (\x x))
    ((\x (x x)) (\x (x x)))
The lambda calculus has an associated proof theory. For now, we can regard the proof theory as having just one rule, called the rule of **beta-reduction** or "beta-contraction". Suppose you have some expression of the form:
    ((\a M) N)

The rule of beta-reduction permits a transition from that expression to the following:

    M [a:=N]
What this means is just `M`, with any *free occurrences* inside `M` of the variable `a` replaced with the term `N`.
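As a concrete sketch of that substitution operation, here's a short Python fragment. The nested-tuple encoding of terms is just an illustrative choice of mine, not standard notation, and the function is deliberately naive: it doesn't rename binders to avoid variable capture.

```python
def subst(term, var, repl):
    """M [a:=N]: replace free occurrences of var in term with repl.
    Naive sketch -- it does not rename binders to avoid variable capture."""
    if isinstance(term, str):                    # a variable
        return repl if term == var else term
    if term[0] == 'lam':                         # ('lam', v, body) encodes (\v body)
        if term[1] == var:                       # var is rebound here, so no
            return term                          # occurrence below is free
        return ('lam', term[1], subst(term[2], var, repl))
    return ('app', subst(term[1], var, repl), subst(term[2], var, repl))

# (a (\a a)) [a:=n] touches only the free occurrence of a:
print(subst(('app', 'a', ('lam', 'a', 'a')), 'a', 'n'))
# ('app', 'n', ('lam', 'a', 'a'))
```

Notice that the occurrence of `a` under the `\a` binder is left alone, exactly as the definition of substitution for *free* occurrences requires.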
> T is defined to be `(x (\x (\y (x (y z)))))`
The first occurrence of `x` in T is free. The `\x` we won't regard as being an occurrence of `x`. The next occurrence of `x` occurs within a form that begins with `\x`, so it is bound. The occurrence of `y` is bound; and the occurrence of `z` is free.
Here's an example of beta-reduction:

    ((\x (y x)) z) ~~> (y z)

When we write:

    M ~~> N

We'll mean that you can get from M to N by one or more reduction steps. Hankin uses the symbol <code><big><big>→</big></big></code> for one-step contraction, and the symbol <code><big><big>↠</big></big></code> for zero-or-more step reduction. Hindley and Seldin use <code><big><big><big>⊳</big></big></big><sub>1</sub></code> and <code><big><big><big>⊳</big></big></big></code>.
When M and N are such that there's some P that M reduces to by zero or more steps, and that N also reduces to by zero or more steps, then we say that M and N are **beta-convertible**. We'll write that like this:
    M <~~> N
This is what plays the role of equality in the lambda calculus. Hankin uses the symbol `=` for this. So too do Hindley and Seldin. Personally, I keep confusing that with the relation to be described next, so let's use this notation instead. Note that `M <~~> N` doesn't mean that each of `M` and `N` are reducible to each other; that only holds when `M` and `N` are the same expression. (Or, with our convention of only saying "reducible" for one or more reduction steps, it never holds.)
In the metatheory, it's also sometimes useful to talk about formulas that are syntactically equivalent *before any reductions take place*. Hankin uses the symbol <code>≡</code> for this. So too do Hindley and Seldin. We'll use that too, and will avoid using `=` when discussing metatheory for the lambda calculus. Instead we'll use `<~~>` as we said above. When we want to introduce a stipulative definition, we'll write it out longhand, as in:
> T is defined to be `(M N)`.
We'll regard the following two expressions:
    (\x (x y))
    (\z (z y))
as syntactically equivalent, since they only involve a typographic change of a bound variable. Read Hankin section 2.3 for discussion of different attitudes one can take about this.
Note that neither of those expressions are identical to:
    (\x (x w))
because here it's a free variable that's been changed. Nor are they identical to:
    (\y (y y))
because here the second occurrence of `y` is no longer free.
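One way to check these judgments mechanically is the device of de Bruijn indices: replace each bound variable with a number counting how many binders sit between it and its `\`. Terms that differ only typographically in their bound variables then come out literally identical. The Python encoding below is my own illustrative choice, not from any of the texts:

```python
def debruijn(term, env=()):
    """Rewrite each bound variable as its distance (in binders) from its \\,
    so alpha-equivalent terms get syntactically identical translations."""
    if isinstance(term, str):
        return env.index(term) if term in env else ('free', term)
    if term[0] == 'lam':                   # ('lam', v, body) encodes (\v body)
        return ('lam', debruijn(term[2], (term[1],) + env))
    return ('app', debruijn(term[1], env), debruijn(term[2], env))

def alpha_eq(s, t):
    return debruijn(s) == debruijn(t)

xy = ('lam', 'x', ('app', 'x', 'y'))   # (\x (x y))
zy = ('lam', 'z', ('app', 'z', 'y'))   # (\z (z y))
xw = ('lam', 'x', ('app', 'x', 'w'))   # (\x (x w))
yy = ('lam', 'y', ('app', 'y', 'y'))   # (\y (y y))
print(alpha_eq(xy, zy))   # True: only a bound variable was renamed
print(alpha_eq(xy, xw))   # False: a free variable was changed
print(alpha_eq(xy, yy))   # False: the second occurrence of y is no longer free
```

The three results line up with the three verdicts above: renaming a bound variable preserves equivalence, while changing a free variable or capturing one does not.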