    What if the way we implemented the search procedure looked something like
    this?

    At a given stage in the search, we wouldn't just apply some function `f` to
    the head at this stage and the result accumulated so far (from folding the
    same function, and a base value, to the tail at this stage)...and then pass
    the result of that application to the embedding, more-leftward computation.

    We'd *instead* give `f` a "handler" that expects the result of the current
    stage *as an argument*, and then evaluates to what you'd get by passing that
    result leftwards up the list, as before.

    Why would we do that, you say? Just more flamboyant lifting?

    [...]

    This "handler" encodes the search's having finished, and delivering a final
    answer to whatever else you wanted your program to do with the result of the
    search. If you like, at any stage in the search you might just give an
    argument to *this* handler, instead of giving an argument to the handler that
    continues the list traversal leftwards. Semantically, this would amount to
    *aborting* the list traversal! (As we've said before, whether the rest of the
    list traversal really gets evaluated will depend on what evaluation order is
    in place. But semantically we will have avoided it.)

    [...] Suppose, for instance, that we're searching the list for the element
    `3`, and the traversal has reached a stage whose head is `3`, so that `f`
    gets applied to that head:

        f 3

    `f`'s job would be to check whether `3` matches the element we're searching
    for (here also `3`), and if it does, just evaluate to the result of passing
    `true` to the abort handler. If it doesn't, then evaluate to the result of
    passing `false` to the continue-leftwards handler.

    [...] of the list multiplied to, because that would affect the answer you
    passed along to the continue-leftwards handler.
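    To make this concrete before we encode it as a lambda term, here is a small
    OCaml sketch of a search written in this handler-passing style. The sketch is
    our own illustration, not code from the page, and the names `search` and
    `larger_computation` are our own choices.

        (* search target lst continue abort: a right-to-left fold written in
           continuation-passing style.  The work done for each head receives this
           stage's answer via `continue`, which carries it leftwards; or it can
           hand an answer straight to `abort`, cutting the traversal short. *)
        let rec search target lst continue abort =
          match lst with
          | [] -> continue false           (* base of the fold: start the leftward journey with false *)
          | x :: rest ->
              search target rest
                (fun sofar ->
                   if x = target
                   then abort true         (* found it: give the abort handler an argument *)
                   else continue sofar)    (* otherwise keep passing the answer leftwards *)
                abort

        (* The "larger computation" is supplied as both handlers: whether the
           traversal finishes early or falls off the left end of the list, the
           answer lands in the same place. *)
        let () =
          let larger_computation found =
            print_endline (if found then "found it" else "not in the list") in
          search 3 [5; 4; 3; 2; 1] larger_computation larger_computation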
    A **version 5** list encodes the kind of fold operation we're envisaging
    here, in the same way that v3 (and v4) lists encoded the simpler fold
    operation. Roughly, the list `[5;4;3;2;1]` would look like this:

        \f z continue_leftwards_handler abort_handler.
            <the fold of f and z over the tail [4;3;2;1]>
            (\result_of_fold_over_4321. f 5 result_of_fold_over_4321
              continue_leftwards_handler abort_handler)
            abort_handler

        ; or, expanding the fold over [4;3;2;1]:

        \f z continue_leftwards_handler abort_handler.
            (\continue_leftwards_handler abort_handler.
                <the fold of f and z over the tail [3;2;1]>
                (\result_of_fold_over_321. f 4 result_of_fold_over_321
                  continue_leftwards_handler abort_handler)
                abort_handler
            )
            (\result_of_fold_over_4321. f 5 result_of_fold_over_4321
              continue_leftwards_handler abort_handler)
            abort_handler

        ; and so on

    Remarks: the `larger_computation` handler should be supplied as both the
    `continue_leftwards_handler` and the `abort_handler` for the leftmost
    application, where the head `5` is supplied to `f`. That's because the result
    of this application should be passed to the larger computation, whether it's
    a "fall off the left end of the list" result or an "I'm finished, possibly
    early" result. The `larger_computation` handler also then gets passed to the
    next rightmost stage, where the head `4` is supplied to `f`, as the
    `abort_handler` to use if that stage decides it has an early answer.

    Finally, notice that we don't have the result of applying `f` to `4` (etc.)
    given as an argument to the application of `f` to `5` (etc.). Instead, we
    pass

        (\result_of_fold_over_4321. f 5 result_of_fold_over_4321 <one_handler> <another_handler>)

    *to* the application of `f` to `4` as its "continue" handler. The application
    of `f` to `4` can decide whether this handler, or the other, "abort" handler,
    should be given an argument.

    [...] of the complex expression semantically depending only on this, not on
    that. A demon evaluator who custom-picked the evaluation order to make things
    maximally bad for you could ensure that all the semantically unnecessary
    computations got evaluated anyway. We don't have any way to prevent that.
    Later, we'll see ways to *semantically guarantee* one evaluation order rather
    than another. Though even then the demonic evaluation-order-chooser could
    make it take unnecessarily long to compute the semantically guaranteed
    result. Of course, in any real computing environment you'll know you're
    dealing with a fixed evaluation order.

    [...]

        ; here's the abort_handler
        larger_computation
        in

        let extract_tail = ; left as exercise

    These functions are used like this:

        [...]

    [...] your reach. And once you have followed it, you'll be well on your way
    to appreciating the full terrible power of continuations.

    Of course, like everything elegant and exciting in this seminar, [Oleg
    discusses it in much more detail](http://okmij.org/ftp/Streams.html#enumerator-stream).
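    As a cross-check on the definitions sketched above, here is a rough OCaml
    rendering of the same right-fold-with-handlers idea. It is our own sketch,
    not the page's code, and the names `empty_list`, `make_list`, and `contains`
    are our own choices.

        (* A v5 list is its own fold: it takes f, a base value z, a continue
           handler, and an abort handler. *)

        (* The empty list just hands the base value to its continue handler. *)
        let empty_list = fun _f z continue _abort -> continue z

        (* make_list h t: fold over the tail t first, giving it a continue
           handler that does the remaining work for the head h, the same shape
           as the expansion shown above. *)
        let make_list h t = fun f z continue abort ->
          t f z (fun sofar -> f h sofar continue abort) abort

        (* Searching with early abort: f either hands true straight to the abort
           handler, or passes false leftwards to the continue handler. *)
        let contains target lst larger_computation =
          lst
            (fun hd _sofar continue abort ->
               if hd = target then abort true else continue false)
            false
            larger_computation   (* used as the continue_leftwards_handler *)
            larger_computation   (* and also as the abort_handler *)

        let () =
          let lst =
            make_list 5 (make_list 4 (make_list 3 (make_list 2 (make_list 1 empty_list)))) in
          contains 3 lst (fun found ->
            print_endline (if found then "found it" else "fell off the left end"))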
    *Comments*:

    1.  The technique deployed here, and in the v2 lists, and in our
        implementations of pairs and booleans, is known as
        **continuation-passing style** programming.

    2.  We're still building the list as a right fold, so in a sense the
        application of `f` to the leftmost element `5` is "outermost". However,
        this "outermost" application is getting lifted, and passed as a *handler*
        to the next right application, which is in turn getting lifted, and
        passed to its next right application, and so on. So if you trace the
        evaluation of the `extract_head` function applied to the list
        `[5;4;3;2;1]`, you'll see `1` gets passed as a "this is the head sofar"
        answer to its `continue_handler`; then that answer is discarded and `2`
        is passed as a "this is the head sofar" answer to *its*
        `continue_handler`, and so on. All those steps have to be evaluated to
        finally get the result that `5` is the outer/leftmost head of the list.
        That's not an efficient way to get the leftmost head.

        We could improve this by building lists as left folds when implementing
        them as continuation-passing style folds. We'd just replace the
        definition of `make_list` above with:

            let make_list = \h t. \f z continue_handler abort_handler.
                f h z (\z. t f z continue_handler abort_handler) abort_handler

        Now `extract_head` can return the leftmost head directly, using its
        `abort_handler`:

            let extract_head = \lst larger_computation. lst
                (\hd sofar continue_handler abort_handler. abort_handler hd)
                junk
                larger_computation
                larger_computation

    3.  To extract tails efficiently, too, it'd be nice to fuse the apparatus
        developed in these v5 lists with the ideas from the v4 lists, above. But
        that also is left as an exercise.


5.  **Implementing (self-balancing) trees**

    In [[Assignment3]] we proposed a very ad-hoc-ish implementation of trees.

    Think about how you'd implement them in a more principled way. You could use
    any of the version 1 through version 5 implementations of lists as a model.

    To keep things simple, we'll stick to binary trees. A node will either be a
    *leaf* of the tree, or it will have exactly two children.

    There are two kinds of trees to think about. In one sort of tree, it's only
    the tree's leaves that are labeled:

            .
           / \
          .   3
         / \
        1   2

    Linguists often use trees of this sort. The inner, non-leaf nodes of the
    tree do have associated values. But what those values are can be determined
    from the structure of the tree and the values of the node's left and right
    children. So the inner node doesn't need its own independent label.

    In another sort of tree, the tree's inner nodes are also labeled:

            4
           / \
          2   5
         / \
        1   3

    When you want to efficiently arrange an ordered collection, so that it's
    easy to do a binary search through it, this is the way you usually structure
    your data.

    These latter sorts of trees can helpfully be thought of as ones where *only*
    the inner nodes are labeled. Leaves can be thought of as special, dead-end
    branches with no label:

              .4.
             /   \
            2     5
           / \   / \
          1   3 x   x
         / \ / \
        x   x x  x

    In our earlier discussion of lists, we said they could be thought of as data
    structures of the form:

        Empty_list | Non_empty_list (its_head, its_tail)

    And that could in turn be implemented in v2 form as:

        the_list (\head tail. non_empty_handler) empty_handler

    Similarly, the leaf-labeled tree:

            .
           / \
          .   3
         / \
        1   2

    can be thought of as a data structure of the form:

        Leaf (its_label) | Non_leaf (its_left_subtree, its_right_subtree)

    and that could be implemented in v2 form as:

        the_tree (\left right. non_leaf_handler) (\label. leaf_handler)

    And the node-labeled tree:

              .4.
             /   \
            2     5
           / \   / \
          1   3 x   x
         / \ / \
        x   x x  x

    can be thought of as a data structure of the form:

        Leaf | Non_leaf (its_left_subtree, its_label, its_right_subtree)

    and that could be implemented in v2 form as:

        the_tree (\left label right. non_leaf_handler) leaf_result
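    As a gloss on these two shapes of data structure, here they are as OCaml
    variant types, with pattern matching playing the role of supplying the
    `non_leaf_handler` and the `leaf_handler`/`leaf_result`. This is our own
    sketch, not the page's code, and the type and constructor names
    (`leaf_labeled`, `Branch`, `Stub`, and so on) are our own choices.

        (* the leaf-labeled sort: only the leaves carry labels *)
        type 'a leaf_labeled =
          | Leaf of 'a                                    (* Leaf (its_label) *)
          | Branch of 'a leaf_labeled * 'a leaf_labeled   (* Non_leaf (left, right) *)

        (* the node-labeled sort: inner nodes carry labels; leaves are bare dead ends *)
        type 'a node_labeled =
          | Stub                                             (* the unlabeled x's *)
          | Node of 'a node_labeled * 'a * 'a node_labeled   (* Non_leaf (left, label, right) *)

        (* the two example trees drawn above *)
        let t1 : int leaf_labeled = Branch (Branch (Leaf 1, Leaf 2), Leaf 3)

        let t2 : int node_labeled =
          Node (Node (Node (Stub, 1, Stub), 2, Node (Stub, 3, Stub)),
                4,
                Node (Stub, 5, Stub))

        (* case analysis in the role of:
           the_tree (\left label right. non_leaf_handler) leaf_result *)
        let rec sum_labels t =
          match t with
          | Stub -> 0
          | Node (left, label, right) -> sum_labels left + label + sum_labels right

        let () = Printf.printf "%d\n" (sum_labels t2)   (* prints 15 *)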
    What would correspond to "folding" a function `f` and base value `z` over a
    tree? Well, if it's an empty tree:

        x

    we should presumably get back `z`. And if it's a simple, non-empty tree:

          1
         / \
        x   x

    we should expect something like `f z 1 z`, or `f <fold over the left subtree> label_of_this_node <fold over the right subtree>`.
    (It's not important what order we say `f` has to take its arguments in.)

    A v3-style implementation of node-labeled trees, then, might be:

        let empty_tree = \f z. z in
        let make_tree = \left label right. \f z. f (left f z) label (right f z) in
        ...

    Think about how you might implement other tree operations, such as getting
    the label of the root (topmost node) of a tree; extracting the left subtree
    of a node; and so on.

    Think about different ways you might implement leaf-labeled trees.

    If you had one tree and wanted to make a larger tree out of it, adding in a
    new element, how would you do that?

    When using trees to represent linguistic structures, one doesn't have
    latitude about *how* to build a larger tree. The linguistic structure you're
    trying to represent will determine where the new element should be placed,
    and where the previous tree should be placed.

    However, when using trees as a computational tool, one usually does have
    latitude about how to structure a larger tree---in the same way that we had
    the freedom to implement our sets with lists whose members were just
    appended in the order we built the set up, or instead with lists whose
    members were ordered numerically.

    When building a new tree, one strategy for where to put the new element and
    where to put the existing tree would be to always lean towards a certain
    side. For instance, to add the element `2` to the tree:

          1
         / \
        x   x

    we might construct the following tree:

          1
         / \
        x   2
           / \
          x   x

    or perhaps we'd do it like this instead:

          2
         / \
        x   1
           / \
          x   x

    However, if we always leaned to the right side in this way, then the tree
    would get deeper and deeper on that side, but never on the left:

            1
           / \
          x   2
             / \
            x   3
               / \
              x   4
                 / \
                x   5
                   / \
                  x   x

    and that wouldn't be so useful if you were using the tree as an arrangement
    to enable *binary searches* over the elements it holds. For that, you'd
    prefer the tree to be relatively "balanced", like this:

              .4.
             /   \
            2     5
           / \   / \
          1   3 x   x
         / \ / \
        x   x x  x

    Do you have any ideas about how you might efficiently keep the new trees
    you're building pretty "balanced" in this way?

    This is a large topic in computer science. There's no need for you to learn
    the various strategies that computer scientists have developed for doing
    this. But thinking in broad brush-strokes about what strategies might be
    promising will help strengthen your understanding of trees, and useful ways
    to implement them in a purely functional setting like the lambda calculus.
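    To make that closing discussion concrete, here is a small OCaml sketch (our
    own illustration, not from the page) of the insertion strategies just
    described. It reuses the node-labeled tree shape from the earlier sketch;
    the function names are our own choices, and nothing here is yet
    self-balancing.

        type 'a tree =
          | Stub                              (* an unlabeled, dead-end leaf: the x's above *)
          | Node of 'a tree * 'a * 'a tree

        (* One leaning strategy: the new element becomes the root and the old
           tree is pushed down the right side (the second picture above). *)
        let add_leaning_right x t = Node (Stub, x, t)

        (* Another: keep the old root and hang the new element at the bottom
           right (the first picture above).  Adding 2, 3, 4, 5 to the tree whose
           root is 1 this way produces exactly the long right spine drawn
           above. *)
        let rec add_at_bottom_right x t =
          match t with
          | Stub -> Node (Stub, x, Stub)
          | Node (l, y, r) -> Node (l, y, add_at_bottom_right x r)

        (* An ordered insertion keeps smaller elements to the left and larger
           ones to the right, so the tree supports binary search; but it can
           still degenerate into a spine when the input arrives already sorted,
           which is the problem self-balancing schemes (AVL trees, red-black
           trees, and so on) are designed to fix. *)
        let rec insert_ordered x t =
          match t with
          | Stub -> Node (Stub, x, Stub)
          | Node (l, y, r) ->
              if x < y then Node (insert_ordered x l, y, r)
              else Node (l, y, insert_ordered x r)

        let () =
          (* inserting in this order happens to give the balanced tree drawn above *)
          let t = List.fold_left (fun t x -> insert_ordered x t) Stub [4; 2; 5; 1; 3] in
          match t with
          | Node (_, root, _) -> Printf.printf "root label: %d\n" root
          | Stub -> ()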