Lists and List Comprehensions
In Kapulet, what does [ [x, 2*x] | x from [1, 2, 3] ] evaluate to?
[[1,2], [2,4], [3,6]]
What does [ 10*x + y | y from [4], x from [1, 2, 3] ] evaluate to?
[14, 24, 34]
Using either Kapulet's or Haskell's list comprehension syntax, write an expression that transforms [3, 1, 0, 2] into [3, 3, 3, 1, 2, 2]. Here is a hint, if you need it.
letrec
  dup (n, x) = case n of 0 then []; _ then x & dup (n-1, x) end;
  # you can assume join is already defined, but we'll redefine it here anyway
  join xss = fold_right ((&&), []) xss  # we help ourselves to &&/append in join
in [dup(x, x) | x from [3, 1, 0, 2]]
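The hint's definitions transcribe directly into Python; here is a quick sketch (Python is our choice for a runnable illustration, and the fold_right helper is ours, mimicking Kapulet's):

```python
from functools import reduce

def fold_right(f, z, xs):
    # fold_right (f, z) [x1, ..., xn] = f(x1, f(x2, ... f(xn, z)))
    return reduce(lambda acc, x: f(x, acc), reversed(xs), z)

def dup(n, x):
    # case n of 0 then []; _ then x & dup (n-1, x) end
    return [] if n == 0 else [x] + dup(n - 1, x)

def join(xss):
    # join xss = fold_right ((&&), []) xss, where && is list append
    return fold_right(lambda xs, zs: xs + zs, [], xss)

result = join([dup(x, x) for x in [3, 1, 0, 2]])
# result == [3, 3, 3, 1, 2, 2]
```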
Last week you defined head in terms of fold_right. Your solution should be straightforwardly translatable into one that uses our proposed right-fold encoding of lists in the Lambda Calculus. Now define empty? in the Lambda Calculus. (It should require only small modifications to your solution for head.)
let empty? = \xs. xs (\x z. false) true in ...
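Because Python's lambdas are untyped, the right-fold encoding and this empty? transcribe literally; the to_bool decoder is our own helper, not part of the encoding:

```python
# Church booleans
true  = lambda y: lambda n: y
false = lambda y: lambda n: n

# right-fold encoding: [] is \f z. z, and cons is \x xs. \f z. f x (xs f z)
empty = lambda f: lambda z: z
cons  = lambda x: lambda xs: lambda f: lambda z: f(x)(xs(f)(z))

# empty? = \xs. xs (\x z. false) true
is_empty = lambda xs: xs(lambda x: lambda z: false)(true)

to_bool = lambda b: b(True)(False)  # decode a Church boolean (helper, ours)
```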
If we encode lists in terms of their left-folds, instead, [a, b, c] would be encoded as \f z. f (f (f z a) b) c. The empty list would still be encoded as \f z. z. What should cons be, for this encoding?
let left_cons = \x xs. \f z. xs f (f z x) in ...
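A Python transcription of the left-fold encoding (the to_list decoder is our own helper for checking results):

```python
# left-fold encoding: [a, b, c] is \f z. f (f (f z a) b) c
left_empty = lambda f: lambda z: z

# left_cons = \x xs. \f z. xs f (f z x)
left_cons = lambda x: lambda xs: lambda f: lambda z: xs(f)(f(z)(x))

# decode by left-folding Python list append over the encoding (helper, ours)
to_list = lambda xs: xs(lambda z: lambda x: z + [x])([])

abc = left_cons('a')(left_cons('b')(left_cons('c')(left_empty)))
# to_list(abc) == ['a', 'b', 'c']
```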
Continuing to encode lists in terms of their left-folds, what should last be? last [a, b, c] should result in c. Let last of the empty list result in whatever err is bound to.
let left_last = \xs. xs (\z x. x) err in ...
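This one is a one-liner in Python too; err is just an arbitrary error value of our choosing:

```python
left_empty = lambda f: lambda z: z
left_cons = lambda x: lambda xs: lambda f: lambda z: xs(f)(f(z)(x))

err = 'err'  # arbitrary error value (ours)

# left_last = \xs. xs (\z x. x) err
left_last = lambda xs: xs(lambda z: lambda x: x)(err)

abc = left_cons('a')(left_cons('b')(left_cons('c')(left_empty)))
# left_last(abc) == 'c'; left_last(left_empty) == 'err'
```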
Continuing to encode lists in terms of their left-folds, how should we write head? This is challenging.
The hint/solution gave this answer:
let m = \a. \n b. n a in
let f = \w b. w m b in
let left_head = \xs. (xs f I) I err in ...
Here's another, more straightforward answer. We observed before that left-folding the cons function over a list reverses it. Hence, it is easy to reverse a list defined in terms of its left-fold:
let left_empty = \f z. z in                            ; this is the same as for the right-fold encoding of lists
let flipped_left_cons = \xs x. \f z. xs f (f z x) in   ; left-folding supplies arguments in a different order than cons expects
let left_reverse = \xs. xs flipped_left_cons left_empty in ...
Then we can get the list's head by reversing it and getting the last member of the result, which we saw in the previous problem is easy:
let left_head = \xs. left_last (left_reverse xs) in ...
I'm not sure which of the two solutions presented here is better. The one given in the hint traverses the list only once, whereas the one gotten by reversing the list and getting the last member of the result traverses the list twice. But the former strategy does more complicated work at each step of the traversal (both conceptually and in number of applications), so in the end it might be computationally "cheaper" to use the latter strategy.
Here is yet a third solution, which is probably the most efficient:
let box = \a. \v. v a in
let left_head = \xs. xs (\b x. (K (b (K x)))) (box err) I in ...
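The box-based definition also transcribes directly into Python, which makes it easy to convince yourself that it really does retrieve the head (err is again an arbitrary error value of ours):

```python
K = lambda x: lambda y: x
I = lambda x: x
box = lambda a: lambda v: v(a)
err = 'err'  # arbitrary error value (ours)

left_empty = lambda f: lambda z: z
left_cons = lambda x: lambda xs: lambda f: lambda z: xs(f)(f(z)(x))

# left_head = \xs. xs (\b x. K (b (K x))) (box err) I
left_head = lambda xs: xs(lambda b: lambda x: K(b(K(x))))(box(err))(I)

abc = left_cons('a')(left_cons('b')(left_cons('c')(left_empty)))
# left_head(abc) == 'a'; left_head(left_empty) == 'err'
```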
Suppose you have two lists of integers, left and right. You want to determine whether those lists are equal, that is, whether they have all the same members in the same order. How would you implement such a list comparison? You can write it in Scheme or Kapulet using letrec, or if you want more of a challenge, in the Lambda Calculus using your preferred encoding for lists. If you write it in Scheme, don't rely on applying the built-in comparison operator equal? to the lists themselves. (Nor on the operator eqv?, which might not do what you expect.) You can however rely on the comparison operator =, which accepts only number arguments. If you write it in the Lambda Calculus, you can use your implementation of leq?, requested below, to write an equality operator for Church-encoded numbers. Here is a hint, if you need it.
(The function you're trying to define here is like eqlist? in Chapter 5 of The Little Schemer, though you are only concerned with lists of numbers, whereas the function from The Little Schemer also works on lists containing symbolic atoms --- and in the final version from that Chapter, also on lists that contain other, embedded lists.)
letrec list_eq? (left, right) = case (left, right) of
  ([], []) then 'true;
  ([], _) then 'false;
  (_, []) then 'false;
  (x&xs, y&ys) when x == y then list_eq? (xs, ys);
  (_, _) then 'false
end in list_eq?
(define list_eq?
  (lambda (left right)
    (cond
      ((null? left) (null? right))
      ((null? right) #f)
      ((= (car left) (car right)) (list_eq? (cdr left) (cdr right)))
      (else #f))))
In the Lambda Calculus, helping ourselves to definitions of succ, head, tail, reverse, and leq?:
let true = \y n. y in
let and = \p q. p q p in
let pair = \a b. \f. f a b in
let fst = \x y. x in
let num_eq? = \l r. and (leq? l r) (leq? r l) in
let length = \xs. xs (\x z. succ z) 0 in
let check = \x p. p (\b ys. pair (and b (num_eq? x (head ys))) (tail ys)) in
let list_eq? = \xs ys.
  and (num_eq? (length xs) (length ys))   ; this step could be skipped
                                          ; if you were prepared to assume the lists always have the same length
      (xs check (pair true (reverse ys)) fst) in ...
Recall our proposed encoding for the numbers, called "Church's encoding". As we explained last week, it's similar to our proposed encoding of lists in terms of their folds. Give a Lambda Calculus definition of zero? for numbers so encoded. (It should require only small modifications to your solution for empty?.)
let zero? = \n. n (\z. false) true in ...
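A Python transcription, parallel to the one for empty? (to_bool is again our own decoding helper):

```python
# Church numerals: n is \s z. s (s ... (s z)) with n applications of s
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))

true  = lambda y: lambda n: y
false = lambda y: lambda n: n

# zero? = \n. n (\z. false) true
is_zero = lambda n: n(lambda z: false)(true)

to_bool = lambda b: b(True)(False)  # decoding helper (ours)
```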
In last week's homework, you gave a Lambda Calculus definition of succ for Church-encoded numbers. Can you now define pred? Let pred 0 result in whatever err is bound to. This is challenging. For some time theorists weren't sure it could be done. (Here is some interesting commentary.) However, in this week's notes we examined one strategy for defining tail for our chosen encodings of lists, and given the similarities we explained between lists and numbers, perhaps that will give you some guidance in defining pred for numbers.
Here is the solution we were guiding you towards, due to Paul Bernays:
; helping ourselves to 0 and succ
let pair = \a b. \f. f a b in
let snd = \x y. y in
let shift = \p. p (\x y. pair (succ x) x) in
let pred = \n. n shift (pair 0 err) snd in ...
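Bernays' trick transcribes directly into Python: shift walks a pair (n+1, n) up from (0, err), so after n steps the second component is the predecessor (err and the to_int decoder are our own helpers):

```python
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))

pair = lambda a: lambda b: lambda f: f(a)(b)
snd = lambda x: lambda y: y
err = 'err'  # arbitrary error value (ours)

# shift = \p. p (\x y. pair (succ x) x)
shift = lambda p: p(lambda x: lambda y: pair(succ(x))(x))

# pred = \n. n shift (pair 0 err) snd
pred = lambda n: n(shift)(pair(zero)(err))(snd)

to_int = lambda n: n(lambda k: k + 1)(0)  # decoding helper (ours)
```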
Here is another solution, which is attributed variously to Martin Bunder and F. Urbanek, or J. Velmans:
let box = \a. \v. v a in
let pred = \n. \s z. n (\b. box (b s)) (K z) I in ...
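The box-based pred also runs as written in Python; note that here pred 0 comes out as 0 rather than err (to_int is again our own decoding helper):

```python
K = lambda x: lambda y: x
I = lambda x: x
box = lambda a: lambda v: v(a)

zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))

# pred = \n. \s z. n (\b. box (b s)) (K z) I
pred = lambda n: lambda s: lambda z: n(lambda b: box(b(s)))(K(z))(I)

to_int = lambda n: n(lambda k: k + 1)(0)  # decoding helper (ours)
```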
That can be hard to wrap your head around. If you trace this expression through, you'll see that:
when n == 0, it reduces to \s z. (K z) I, or \s z. z
when n == 1, it reduces to \s z. (box z) I, or \s z. z
when n == 2, it reduces to \s z. (box (s z)) I, or \s z. s z
when n == 3, it reduces to \s z. (box (s (s z))) I, or \s z. s (s z)
A box is \v. v a; it stands to pair a b as one stands to two. Since boxes (like pairs) are really just functions, the technique used in this definition of pred is akin to that used in the hint for last week's homework.
(Want a further challenge? Define map2 in the Lambda Calculus, using our right-fold encoding for lists, where map2 g [a, b, c] [d, e, f] should evaluate to [g a d, g b e, g c f]. Doing this will require drawing on a number of different tools we've developed, including that same strategy for defining tail. Purely extra credit.)
let next = \g. \x p. p (\zs ys. pair (cons (g x (head ys)) zs) (tail ys)) in
let finish = \zs ys. (empty? ys) zs
                     ; else ys are too long
                     err_different_lengths in
let map2 = \g. \xs ys. (leq? (length xs) (length ys))
                       (xs (next g) (pair empty (reverse ys)) finish)
                       ; else ys are too short
                       err_different_lengths in ...
Define leq? for numbers (that is, ≤) in the Lambda Calculus. Here is the expected behavior, where one abbreviates succ zero, and two abbreviates succ (succ zero).
leq? zero zero ~~> true
leq? zero one  ~~> true
leq? zero two  ~~> true
leq? one  zero ~~> false
leq? one  one  ~~> true
leq? one  two  ~~> true
leq? two  zero ~~> false
leq? two  one  ~~> false
leq? two  two  ~~> true
...
You'll need to make use of the predecessor function, but it's not essential to understanding this problem that you have successfully implemented it yet. You can treat it as a black box.
It's easiest to do this if you define pred so that pred 0 ~~> 0:
let pred = \n. n shift (pair 0 0) snd in
let sub = \l r. r pred l in
let leq? = \l r. zero? (sub l r) in ...
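Transcribing the whole chain (pred, truncating subtraction, leq?) into Python lets you check the behavior table above mechanically; to_bool is our own decoding helper:

```python
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))
true  = lambda y: lambda n: y
false = lambda y: lambda n: n

pair = lambda a: lambda b: lambda f: f(a)(b)
snd = lambda x: lambda y: y
shift = lambda p: p(lambda x: lambda y: pair(succ(x))(x))

# pred with pred 0 ~~> 0 (seed the pair with (0, 0) instead of (0, err))
pred = lambda n: n(shift)(pair(zero)(zero))(snd)

# sub = \l r. r pred l  (truncating subtraction)
sub = lambda l: lambda r: r(pred)(l)

is_zero = lambda n: n(lambda z: false)(true)

# leq? = \l r. zero? (sub l r)
leq = lambda l: lambda r: is_zero(sub(l)(r))

to_bool = lambda b: b(True)(False)  # decoding helper (ours)
one, two = succ(zero), succ(succ(zero))
```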
Here is another solution. Jim crafted this particular implementation, but as with a great deal of the CS knowledge he's gained over the past eight years, Oleg Kiselyov pointed the way.
let leq? = (\base build consume. \l r. r consume (l build base) fst)
           ; where base is (pair true junk)
           ; and build is (\p. pair false p)
           ; and consume is (\p. p fst p (p snd))
           in ...
That can be generalized to give this nice implementation of num_cmp:
; `num_cmp x y lt eq gt` returns lt when x<y, eq when x==y, gt when x>y
let num_cmp = (\base build consume. \l r. r consume (l build base) fst)
              ; where base is (pair (\a b c. b) (K (\a b c. a)))
              ; and build is (\p. pair (\a b c. c) p)
              ; and consume is (\p. p fst p (p snd) (p snd))
              in
let num_eq? = \x y. num_cmp x y false true false in ...
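A Python transcription of num_cmp, with base, build, and consume spelled out as the comments describe; we pass plain strings as the lt/eq/gt results just to observe which branch is selected:

```python
pair = lambda a: lambda b: lambda f: f(a)(b)
fst = lambda x: lambda y: x
snd = lambda x: lambda y: y
K = fst

zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))

# base is (pair (\a b c. b) (K (\a b c. a)))
base = pair(lambda a: lambda b: lambda c: b)(K(lambda a: lambda b: lambda c: a))
# build is (\p. pair (\a b c. c) p)
build = lambda p: pair(lambda a: lambda b: lambda c: c)(p)
# consume is (\p. p fst p (p snd) (p snd))
consume = lambda p: p(fst)(p)(p(snd))(p(snd))

# num_cmp l r lt eq gt
num_cmp = lambda l: lambda r: r(consume)(l(build)(base))(fst)

one, two = succ(zero), succ(succ(zero))
```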
Reduce the following forms, if possible:
Kxy ~~> x
KKxy ~~> Ky
KKKxy ~~> Kxy ~~> x
SKKxy ~~> Kx(Kx)y ~~> xy
SIII ~~> II(II) ~~> II ~~> I
SII(SII) ~~> I(SII)(I(SII)) ~~> SII(SII) ~~> ...; the reduction never terminates
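You can check the terminating reductions by transcribing S, K, and I into Python and applying them to test values; we deliberately skip SII(SII), since evaluating it would recurse forever:

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

r1 = K('x')('y')          # Kxy ~~> x
r2 = K(K)('x')('y')('z')  # KKxy ~~> Ky, so KKxyz ~~> y
r3 = K(K)(K)('x')('y')    # KKKxy ~~> Kxy ~~> x
r4 = S(K)(K)('x')         # SKKx ~~> Kx(Kx) ~~> x
r5 = S(I)(I)(I)('a')      # SIII ~~> I, so SIII a ~~> a
# SII(SII) is omitted: its reduction never terminates,
# and the Python transcription would recurse without bound
```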
Give Combinatory Logic combinators (that is, expressed in terms of S, K, and I) that behave like our boolean functions. Provide combinators for true, false, neg, and, or, and xor.
As Dan pointed out in the Wednesday session, this problem is easy to solve if you first come up with a Combinatory translation of the schema \p. if p then M else N:
[\p. p M N] = @p (p [M] [N]) = S (@p (p [M])) (@p [N]) = S (S (@p p) (@p [M])) (@p [N]) = S (S I (@p [M])) (@p [N]) = S (S I (K [M])) (K [N])
The last step assumes that p does not occur free in the expressions M and N.
Next, we translate true ≡ \y n. y as K, and false ≡ \y n. n as KI. Then we rely on the following equivalences:
neg ≡ \p. if p then false else true
and ≡ \p. if p then I else K false   ; then: and p q <~~> if p then I q else K false q ~~> if p then q else false
or  ≡ \p. if p then K true else I    ; then: or p q <~~> if p then K true q else I q ~~> if p then true else q
xor ≡ \p. if p then neg else I       ; then: xor p q <~~> if p then neg q else I q ~~> if p then neg q else q
Then we just make use of the schema derived above to complete the translations. For example, and ≡ \p. if p then I else K false becomes S (SI (K I)) (K (K false)) ≡ S (SI (K I)) (K (K (KI))).
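We can sanity-check these translations in Python, with to_bool as our own decoding helper:

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

true = K
false = K(I)

# and ≡ S (SI (K I)) (K (K false))
AND = S(S(I)(K(I)))(K(K(false)))
# neg ≡ S (SI (K false)) (K true)
NEG = S(S(I)(K(false)))(K(true))

to_bool = lambda b: b(True)(False)  # decoding helper (ours)
```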
In the case of neg, this produces S (SI (K false)) (K true) ≡ S (SI (K (KI))) (K K). A more compact result can be obtained by observing that for boolean arguments, neg can also be defined as \p y n. p n y, which is just the C combinator.
Using the mapping specified in this week's notes, translate the following lambda terms into combinatory logic:
[\x. x] = @x x = I
[\x y. x] = @x [\y. x] = @x (@y x) = @x (Kx) = S (@x K) (@x x) = S (KK) I
; in general, expressions of the form S(KM)I will behave just like M, for any expression M
[\x y. y] = @x [\y. y] = @x (@y y) = @x I = KI
[\x y. y x] = @x [\y. y x] = @x (@y (y x)) = @x (S (@y y) (@y x)) = @x (S I (Kx)) = S (@x (SI)) (@x (Kx)) = S (K(SI)) K
; this is the T combinator
[\x. x x] = @x (x x) = S (@x x) (@x x) = SII
[\x y z. x (y z)] ; this is the B combinator; worked out step by step:
@x (@y (@z (x (y z)))) =
@x (@y (S (@z x) (@z (y z)))) =
@x (@y (S (Kx) (@z (y z)))) =
@x (@y (S (Kx) y)) =
@x (S (@y (S (Kx))) (@y y)) =
@x (S (K (S (Kx))) (@y y)) =
@x (S (K (S (Kx))) I) =
S (@x (S (K (S (Kx))))) (@x I) =
S (S (@x S) (@x (K (S (Kx))))) (@x I) =
S (S (KS) (@x (K (S (Kx))))) (@x I) =
S (S (KS) (S (@x K) (@x (S (Kx))))) (@x I) =
S (S (KS) (S (KK) (@x (S (Kx))))) (@x I) =
S (S (KS) (S (KK) (S (@x S) (@x (Kx))))) (@x I) =
S (S (KS) (S (KK) (S (KS) (@x (Kx))))) (@x I) =
S (S (KS) (S (KK) (S (KS) K))) (@x I) =
S (S (KS) (S (KK) (S (KS) K))) (KI)
The long result above can also be expressed more concisely as S (K S) K, which behaves the same way.
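Both the long translated form and the concise S (KS) K compose functions the same way; a quick Python check (the two numeric functions are just our test inputs):

```python
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

# concise form: B = S (KS) K, so B f g x ~~> f (g x)
B_short = S(K(S))(K)
# the long form from the derivation: S (S (KS) (S (KK) (S (KS) K))) (KI)
B_long = S(S(K(S))(S(K(K))(S(K(S))(K))))(K(I))

f = lambda n: n + 1  # test inputs (ours)
g = lambda n: n * 2

r1 = B_short(f)(g)(3)  # f (g 3) == 7
r2 = B_long(f)(g)(3)
```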
For each of the above translations, how many Is are there? Give a rule for describing what each I corresponds to in the original lambda term.
This generalization depends on you omitting the translation rule:
6. @a(Xa) = X if a is not in X
With that shortcut rule omitted, there turns out to be one I in the result corresponding to each occurrence of a bound variable in the original term.
Evaluation strategies in Combinatory Logic
Find a term in CL that behaves like Omega does in the Lambda Calculus. Call it Skomega. (The term SII(SII), whose reduction we saw above never terminates, will do.)
Are there evaluation strategies in CL corresponding to leftmost reduction and rightmost reduction in the Lambda Calculus? What counts as a redex in CL?
A redex is any I with one argument, or K with two arguments, or S with three arguments.
Leftmost reduction corresponds to applying the rules for an I, K, or S in head position before reducing any of their arguments; rightmost reduction corresponds to reducing the arguments first.
Consider the CL term K I Skomega. Does leftmost (alternatively, rightmost) evaluation give results similar to the behavior of K I Omega in the Lambda Calculus, or different? What features of the Lambda Calculus and CL determine this answer?
Leftmost reduction terminates with I; rightmost reduction never terminates. This is because "leftmost reduction" corresponds to "head-redex first", which gives the head combinator the opportunity to ignore some of its (in this case, non-terminating) arguments without trying to reduce them. If the arguments are reduced first, the head combinator loses that opportunity.
What should count as a thunk in CL? What is the equivalent constraint in CL to forbidding evaluation inside of a lambda abstract?
Recall that a thunk is a function of the form \(). EXPRESSION, where EXPRESSION won't be evaluated until the function is applied (to the uninformative argument ()). We don't really have () in the Lambda Calculus. The closest approximation would be I, since a triple is \f. f a b c, a pair is \f. f a b, and following the same pattern a one-tuple would be \f. f a (not the same as a), and the zero-tuple would be I. But really in the untyped Lambda Calculus, it doesn't matter whether you supply I to an abstract that's going to ignore its argument, or any other argument. Instead, you could get the same effect as a thunk by just using \x. EXPRESSION, where x is a variable that doesn't occur free in EXPRESSION.
The way to do the same thing in Combinatory Logic is K (EXPRESSION). The constraint we need is that arguments to K are never reduced until there are enough of them to apply the rule for K. (If we furthermore insist on leftmost reduction, then expressions will never be reduced when in the position of an argument to another combinator. Only the reduction rules for the outermost head combinator are ever followed.)
More Lambda Practice
Reduce to beta-normal forms:
(\x. x (\y. y x)) (v w) ~~> v w (\y. y (v w))
(\x. x (\x. y x)) (v w) ~~> v w (\x. y x)
(\x. x (\y. y x)) (v x) ~~> v x (\y. y (v x))
(\x. x (\y. y x)) (v y) ~~> v y (\u. u (v y))
(\x y. x y y) u v ~~> u v v
(\x y. y x) (u v) z w ~~> z (u v) w
(\x y. x) (\u. u) ~~> \y u. u, also known as false
(\x y z. x z (y z)) (\u v. u) ~~> \y z. (\u v. u) z (y z) ~~> \y z. z, also known as false