|Notes (4-up)|Notes (1-up)|Relevant textbook material|
|---|---|---|
|Haskell programming|Haskell programming|Chapter 2 (Davie)|
|Techniques and methods|Techniques and methods|Chapter 3 (Davie)|
|Types|Types|Chapter 4 (Davie)|
|Advanced concepts¹|Advanced concepts¹|Sections 5.1 to 5.9², Chapter 7, Sections 9.1 to 9.4 (Davie)|
|Prolog programming|Prolog programming|Chapters 1, 2, 3, 4, 5, 6, 10 (Brna)|
|Control, search³|Control, search³|Chapters 7 and 9.1 (Brna); State space search extra|
|Modifying the knowledge base and the search strategy⁴|Modifying the knowledge base⁴|Section 9.2 (Brna)|
reflect :: BST a -> BST a
reflect Nil          = Nil
reflect (Node x l r) = Node x (reflect r) (reflect l)
This would be a slightly more elaborate structural induction over trees.
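The property such an induction would typically establish is that reflect is an involution: mirroring a tree twice restores the original. A self-contained sketch (re-declaring BST and reflect so the snippet runs on its own; the BST declaration is an assumption matching the constructors used above):

```haskell
-- A binary tree type matching the constructors used by reflect.
data BST a = Nil | Node a (BST a) (BST a)
  deriving (Eq, Show)

reflect :: BST a -> BST a
reflect Nil          = Nil
reflect (Node x l r) = Node x (reflect r) (reflect l)

-- The property to prove by structural induction:
-- reflecting twice gives back the original tree.
involution :: Eq a => BST a -> Bool
involution t = reflect (reflect t) == t
```

Checking `involution` on a few sample trees is a good sanity test before attempting the inductive proof itself.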
There are countless examples of structural induction over lists. A notable class of properties proven by induction concerns optimization. This class includes fusion laws, which can be applied to code to reduce the number of list traversals. One of the simplest fusion laws is (map f) . (map g) == map (f . g), which illustrates the idea nicely: we perform one map instead of two. Such properties have relatively simple proofs by structural induction, which you are invited to try.
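A small sketch of the fusion law in action (the concrete functions f and g are arbitrary choices for illustration):

```haskell
-- The fusion law: (map f) . (map g) == map (f . g).
-- The left side traverses the list twice; the right side once.
fusedExample :: [Int] -> ([Int], [Int])
fusedExample xs = (twoPasses, onePass)
  where
    f = (* 2)               -- arbitrary example functions
    g = (+ 1)
    twoPasses = (map f . map g) xs  -- two traversals
    onePass   = map (f . g) xs      -- one traversal, same result
```

Both components of the result are equal for every input list, which is exactly what the inductive proof establishes.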
Still on optimization: using accumulating parameters to improve the efficiency of code is fine, but such code is more complex than a straightforward implementation. For any function with accumulating arguments you should make sure (preferably by a proof) that it computes the same thing as the original. For example, one should prove (using structural induction) that reverse x == pour x, to make sure that the optimization in pour did not break anything. You are invited to try it.
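A sketch of the two versions side by side. The name pour is taken from the notes; its body below is an assumption, a plausible accumulating-parameter definition of list reversal:

```haskell
-- Straightforward reverse: each (++) walks the partial result,
-- giving quadratic running time overall.
slowReverse :: [a] -> [a]
slowReverse []       = []
slowReverse (x : xs) = slowReverse xs ++ [x]

-- Assumed definition of pour: an accumulating parameter collects
-- the elements as the list is consumed, giving linear running time.
pour :: [a] -> [a]
pour xs = go xs []
  where
    go []       acc = acc
    go (y : ys) acc = go ys (y : acc)
```

The proof obligation is slowReverse x == pour x for every list x; the usual route is to prove the stronger lemma go x acc == slowReverse x ++ acc by structural induction on x.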