Exercise 1.13 of SICP

Exercise 1.13: Prove that Fib(n) is the closest integer to \phi^n/\sqrt{5}, where \phi = \frac{1 + \sqrt{5}}{2}. Hint: Let \psi = \frac{1-\sqrt{5}}{2}. Use induction and the definition of the Fibonacci numbers (see section 1.2.2) to prove that Fib(n) = \frac{\phi^n-\psi^n}{\sqrt5}.

F_n = F_{n-1} + F_{n-2}
F_0 = 0
F_1 = 1

F_n is a linear second-order recurrence with constant coefficients. Its characteristic equation is r^2-r-1=0, with roots
r = \phi , \psi
This means that the closed-form solution is of the form F_n = \alpha\phi^n+\beta\psi^n
The constants \alpha and \beta are determined by the initial conditions below.
F_0 = \alpha + \beta = 0
\alpha = -\beta
F_1 = \alpha\phi + \beta\psi = \alpha(\phi-\psi) = 1
\alpha = \frac{1}{\phi-\psi} = \frac{1}{\sqrt5}, \qquad \beta = -\frac{1}{\sqrt5}
The closed expression has the following form:
F_n = \frac{\phi^n}{\sqrt5}-\frac{\psi^n}{\sqrt5}=\frac{\phi^n-\psi^n}{\sqrt5}

Now that the derivation of the closed form for F_n is done, here's the proof by induction.
\phi = \frac{1 + \sqrt{5}}{2}
\psi = \frac{1-\sqrt{5}}{2}
F_n = \frac{\phi^n-\psi^n}{\sqrt5}
F_0 = \frac{\phi^0-\psi^0}{\sqrt5}=0
F_1 = \frac{\phi^1-\psi^1}{\sqrt5}=1
Perform the induction step (strong induction: assume the formula holds for both n and n-1):
Assume: F_n = \frac{\phi^n-\psi^n}{\sqrt5} and F_{n-1} = \frac{\phi^{n-1}-\psi^{n-1}}{\sqrt5}
F_{n+1} = F_{n}+F_{n-1} = \frac{\phi^n-\psi^n}{\sqrt5} + \frac{\phi^{n-1}-\psi^{n-1}}{\sqrt5}
F_{n+1} = \frac{\phi^{n-1}(\phi+1)-\psi^{n-1}(\psi+1)}{\sqrt5}
Because \phi and \psi are roots of r^2-r-1=0, \phi^2 = \phi+1 and \psi^2 = \psi+1, so
F_{n+1} = \frac{\phi^{n-1}\phi^2-\psi^{n-1}\psi^2}{\sqrt5}=\frac{\phi^{n+1}-\psi^{n+1}}{\sqrt5}

The proof that F_n is the closest integer to \frac{\phi^n}{\sqrt5} is straightforward: the error term satisfies \left|\frac{\psi^n}{\sqrt5}\right| \le \frac{1}{\sqrt5} < \frac{1}{2} for all n \ge 0, because
\psi = \frac{1-\sqrt{5}}{2} \approx -0.618
and |\psi| < 1, so \lim_{n \to \infty}\psi^n = 0. For example, |\psi^{30}| \approx 0.0000005374904998555718 while \phi^{30} \approx 1860498.
\epsilon = 1-\frac{\phi^{30}-\psi^{30}}{\phi^{30}} \approx 2.8899105330992825 \cdot 10^{-13}
Clearly, for n>30, F_n \approx \frac{\phi^n}{\sqrt5} is an excellent approximation.

n approx (\phi^n/\sqrt5) exact (F_n) epsilon = |approx - exact|
0 0.4472135955 0 0.4472135955
1 0.72360679775 1 0.27639320225
2 1.17082039325 1 0.17082039325
3 1.894427191 2 0.105572809
4 3.06524758425 3 0.0652475842499
5 4.95967477525 5 0.0403252247502
6 8.0249223595 8 0.0249223594996
7 12.9845971347 13 0.0154028652506
8 21.0095194942 21 0.00951949424901
9 33.994116629 34 0.00588337100159

Every epsilon is below 1/2, so rounding \phi^n/\sqrt5 to the nearest integer recovers F_n.
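A quick numerical check of the rounding claim (this snippet is my own sketch, not from the book; fib is the naive recursive definition from section 1.2.2):

```scheme
(define (fib n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

(define phi (/ (+ 1 (sqrt 5)) 2))

;; round phi^n / sqrt(5) to the nearest integer
(define (fib-approx n)
  (round (/ (expt phi n) (sqrt 5))))

;; (fib-approx n) agrees with (fib n) for every n tried
```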

Exercise 1.12 of SICP

Exercise 1.12: The following pattern of numbers is called Pascal’s triangle.

\begin{tabular}{rccccccccc}row=0:&    &    &    &    &  1\\\noalign{\smallskip\smallskip}row=1:&    &    &    &  1 &    &  1\\\noalign{\smallskip\smallskip}row=2:&    &    &  1 &    &  2 &    &  1\\\noalign{\smallskip\smallskip}row=3:&    &  1 &    &  3 &    &  3 &    &  1\\\noalign{\smallskip\smallskip}row=4:&  1 &    &  4 &    &  6 &    &  4 &    &  1\\\noalign{\smallskip\smallskip}\end{tabular}

The numbers at the edge of the triangle are all 1, and each number inside the triangle is the sum of the two numbers above it. Write a procedure that computes elements of Pascal’s triangle by means of a recursive process.

  1. The first condition recognizes that the first element of every row is always 1, hence P(n,0) = 1 for every n.
  2. The second condition recognizes that the last element of every row is always 1, hence P(n,n) = 1 for every n.
  3. The final condition states that each interior number is the sum of the number one row up and one column back and the number one row up in the same column: P(n,k)=P(n-1,k-1)+P(n-1,k).

Both 1 and 2 hold true for the top node, P(0,0) = 1.
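The three conditions translate directly into a procedure (a sketch of mine; the original listing is an image):

```scheme
;; P(n, k): element k of row n of Pascal's triangle
(define (pascal n k)
  (cond ((= k 0) 1)      ; first element of every row
        ((= k n) 1)      ; last element of every row
        (else (+ (pascal (- n 1) (- k 1))   ; up a row, back a column
                 (pascal (- n 1) k)))))     ; up a row, same column
```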

> (pascal 0 0)
1
> (pascal 7 2)
21
> (pascal 16 8)
12870

Exercise 1.11 of SICP

Exercise 1.11: A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n - 1) + 2f(n - 2) + 3f(n - 3) if n \ge 3. Write a procedure that computes f by means of a recursive process. Write a procedure that computes f by means of an iterative process.

The idea is to use a triple of integers a, b, c initialized to f(2) = 2, f(1) = 1, and f(0) = 0, and to repeatedly apply the simultaneous transformations
a \leftarrow a + 2b + 3c
a is now f(n)
b \leftarrow a
b is now f(n-1)
c \leftarrow b
c is now f(n-2)

This is nothing but a slightly more complex version of the Fibonacci iteration example from earlier:
F_n = F_{n-1} + F_{n-2} where F(n) = n if n<2
a \leftarrow a + b
a is now F(n)
b \leftarrow a
b is now F(n-1)
A recursive implementation of f(n) is straightforward:
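A sketch of the recursive version (the rule f(n) = n for n < 3 is the base case):

```scheme
(define (f-rec n)
  (if (< n 3)
      n
      (+ (f-rec (- n 1))
         (* 2 (f-rec (- n 2)))
         (* 3 (f-rec (- n 3))))))
```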

> (f-rec 1)
1
> (f-rec 3)
4
> (f-rec 6)
59
> (f-rec 8)
335

Iterative implementation of f(n)
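A sketch of the iterative version, using the a, b, c state transformations described above (count tracks how many transformations remain):

```scheme
(define (f n)
  (define (iter a b c count)   ; a = f(k), b = f(k-1), c = f(k-2)
    (if (= count 0)
        a
        (iter (+ a (* 2 b) (* 3 c)) a b (- count 1))))
  (if (< n 3)
      n
      (iter 2 1 0 (- n 2))))   ; start at k = 2 and step up to n
```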

> (f 1)
1
> (f 3)
4
> (f 6)
59
> (f 8)
335

Exercise 1.10 of SICP

Exercise 1.10: The following procedure computes a mathematical function called Ackermann’s function.
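For reference, the procedure from the book:

```scheme
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))
```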

What are the values of the following expressions?

> (A 1 10)
1024
> (A 2 4)
65536
> (A 3 3)
65536

Consider the following procedures, where A is the procedure defined above:
Give concise mathematical definitions for the functions computed by the procedures f, g, and h for positive integer values of n. For example, (k n) computes 5n^2.

(define (f n) (A 0 n))
f(n) = 2n
(define (g n) (A 1 n))
(g 3)
(A 1 3)
(A 0 (A 1 2))
(* 2 (A 0 (A 1 1)))
(* 2 (* 2 2)) = 8
g(n) = 2^n
(define (h n) (A 2 n))
(h 3)
(A 2 3)
(A 1 (A 2 2))
From the previous result I know that (h 3) will look like 2^{A(2,2)}
(A 2 2)
(A 1 (A 2 1))
From the previous result I know that (A 2 2) will look like 2^{A(2,1)}
h(3) = 2^{2^{A(2,1)}} = 2^{2^2} = 16
h(n) = 2^{2^{\cdot^{\cdot^{2^2}}}}
Where the exponent repeats n times.
This operation is called tetration.
(define (k n) (* 5 n n))
k(n) = 5n^2

Exercise 1.8 of SICP

> (cube-root 1.0 8)
> (cube-root 1.0 1e9)
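Exercise 1.8 asks for a cube-root procedure using Newton's approximation (x/y^2 + 2y)/3. A sketch matching the calls above (the argument order, guess then x, is assumed from them; the end test follows the relative-change strategy of exercise 1.7, with an assumed tolerance):

```scheme
;; Newton's method for cube roots: a better guess is (x/g^2 + 2g)/3
(define (cube-root guess x)
  (define (improve g)
    (/ (+ (/ x (* g g)) (* 2 g)) 3))
  (let ((next (improve guess)))
    (if (< (abs (/ (- next guess) guess)) 1e-10)  ; stop when the change is tiny
        next
        (cube-root next x))))
```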

Exercise 1.9 of SICP

Exercise 1.9: Each of the following two procedures defines a method for adding two positive integers in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its argument by 1.
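The two procedures (the book defines both of them as +; they are renamed here so they can coexist in one snippet):

```scheme
(define (inc n) (+ n 1))
(define (dec n) (- n 1))

;; expands as the first trace below: a linear recursive process
(define (plus-rec a b)
  (if (= a 0)
      b
      (inc (plus-rec (dec a) b))))

;; expands as the second trace below: a linear iterative process
(define (plus-iter a b)
  (if (= a 0)
      b
      (plus-iter (dec a) (inc b))))
```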


(+ 4 5)
(inc (+ 3 5))
(inc (inc (+ 2 5)))
(inc (inc (inc (+ 1 5))))
(inc (inc (inc (inc (+ 0 5)))))
(inc (inc (inc (inc 5))))
(inc (inc (inc 6)))
(inc (inc 7))
(inc 8)


(+ 4 5)
(+ 3 6)
(+ 2 7)
(+ 1 8)
(+ 0 9)

Exercise 1.7 of SICP

Exercise 1.7: The good-enough? test used in computing square roots will not be very effective for finding the square roots of very small numbers. Also, in real computers, arithmetic operations are almost always performed with limited precision. This makes our test inadequate for very large numbers. Explain these statements, with examples showing how the test fails for small and large numbers. An alternative strategy for implementing good-enough? is to watch how guess changes from one iteration to the next and to stop when the change is a very small fraction of the guess. Design a square-root procedure that uses this kind of end test. Does this work better for small and large numbers?


x_{n+1} = \frac{1}{2}(x_n+\frac{S}{x_n}).
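My reconstruction of both versions discussed below: sqrt-iter with the fixed absolute threshold in good-enough?, and sqrt-iter2 with the relative-change test (great-enough?). The 1e-4 threshold matches the discussion that follows; the 1e-10 relative tolerance in great-enough? is an assumption:

```scheme
(define (square x) (* x x))
(define (average a b) (/ (+ a b) 2))
(define (improve guess x) (average guess (/ x guess)))

;; fixed absolute threshold -- fails for very small and very large x
(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 1e-4))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))

;; relative change between successive guesses (tolerance assumed)
(define (great-enough? old new)
  (< (abs (/ (- new old) new)) 1e-10))

(define (sqrt-iter2 guess x)
  (let ((next (improve guess x)))
    (if (great-enough? guess next)
        next
        (sqrt-iter2 next x))))
```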

> (sqrt-iter 1.0 16e-8)
> (sqrt-iter 1.0 16e64)
*** Infinite Loop ***

> (sqrt-iter2 1.0 16e-8)
> (sqrt-iter2 1.0 16e8)
40000.16244495736

The reason sqrt-iter does so much worse with small numbers is that 1e-4, my threshold in good-enough?, happens to be much larger than 16e-8. Once guess starts converging to the square root and becomes something close to 4e-3, (* guess guess) becomes (* 4e-3 4e-3), which is 16e-6. Both 16e-6 and 16e-8 are much smaller than the threshold, which makes good-enough? nearly meaningless. In this particular case guess eventually gets down to about 0.008: (16e-8) - (.008)^2 = (16e-8) - (6.4e-5) = about (-6e-5). Since the magnitude of this is smaller than the threshold, the test passes and guess gets returned, even though the true square root is 4e-4, twenty times smaller.

The reason sqrt-iter does so poorly with large numbers is the limited precision of floating-point arithmetic. Eventually guess gets somewhere close to 4e32. In the good-enough? procedure the step (abs (- 16e64 (square 4.0e32))) gets evaluated, and it comes out to about 2.33e49 even though the true difference should be very close to 0. Since floating-point numbers have a finite number of bits for the significand, the gap between adjacent representable numbers near 16e64 is enormous, far larger than the 1e-4 threshold. good-enough? therefore returns false even though guess is essentially exact, and improve keeps trying to refine a guess that cannot be improved: a few changes in the last decimal places make no difference to numbers this large, so the loop never terminates.

great-enough? avoids both pitfalls by taking ratios of numbers of similar magnitude. For both small and large inputs, the old guess and the new guess are close to each other, so their ratio is a well-scaled quantity near 1; this sidesteps both the precision problem and the threshold-sensitivity problem. The disparity between successive guesses would have to be impossibly large to push the test outside its useful range. See the growth of the error term below for why that can theoretically never be the case.

Error Growth for this algorithm:
x_{n+1} = \frac{1}{2}(x_n+\frac{S}{x_n})
\epsilon_n = \frac{(x_n)^2}{S} - 1
(x_n)^2 = S(\epsilon_n+1)
(x_{n+1})^2 = \frac{1}{4}\left(x_n^2+2S+\frac{S^2}{x_n^2}\right)
\frac{(x_{n+1})^2}{S} = \frac{1}{4}\left((\epsilon_n+1)+2+\frac{1}{(\epsilon_n+1)}\right)
\frac{(x_{n+1})^2}{S} = \frac{(\epsilon_n+1)^2+2(\epsilon_n+1)+1}{4(\epsilon_n+1)}
\frac{(x_{n+1})^2}{S} = \frac{\left((\epsilon_n+1)+1\right)^2}{4(\epsilon_n+1)}
\epsilon_{n+1}=\frac{(x_{n+1})^2}{S}-1 = \frac{\left((\epsilon_n+1)+1\right)^2}{4(\epsilon_n+1)}-1 = \frac{\epsilon_n^2}{4(\epsilon_n+1)}
For small \epsilon_n this gives \epsilon_{n+1} \approx \epsilon_n^2/4: the error is squared at every step, which is why the iteration converges so quickly.

> (error 1 (- (/ 1.0 16e-8) 1))
16
The actual epsilon value is only 1.6492585075411625e-11.
After only 16 iterations the algorithm can return a solution with accuracy better than one part in a billion; the actual value is closer to one part in a hundred billion.

Exercise 1.6 of SICP

Exercise 1.6: Alyssa P. Hacker doesn’t see why if needs to be provided as a special form. “Why can’t I just define it as an ordinary procedure in terms of cond?” she asks. Alyssa’s friend Eva Lu Ator claims this can indeed be done, and she defines a new version of if:
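Eva's definition from the book:

```scheme
(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))
```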

Eva demonstrates the program for Alyssa:

> (new-if (= 2 3) 0 5)

> (new-if (= 1 1) 0 5)

Delighted, Alyssa uses new-if to rewrite the square-root program:
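The rewritten program, with the square-root helpers filled in as in section 1.1.7 (loading these definitions is fine, but actually calling sqrt-iter never returns, as explained below):

```scheme
(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))

(define (average a b) (/ (+ a b) 2))
(define (improve guess x) (average guess (/ x guess)))
(define (good-enough? guess x)
  (< (abs (- (* guess guess) x)) 0.001))

;; evaluating e.g. (sqrt-iter 1.0 2.0) loops forever
(define (sqrt-iter guess x)
  (new-if (good-enough? guess x)
          guess
          (sqrt-iter (improve guess x)
                     x)))
```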

What happens when Alyssa attempts to use this to compute square roots? Explain.

sqrt-iter implemented with new-if creates an infinite loop because Scheme uses Applicative Order Evaluation by default. In a regular if statement the sub-expressions are evaluated only as needed, not all at once. This difference in evaluation order is what makes passing arguments to an if statement different from passing arguments to ordinary procedures such as new-if in an Applicative Order system. The way sqrt-iter is implemented using new-if implies the following evaluation: good-enough? returns true or false (it doesn't matter which), and guess is already a number, in simplest form, so it doesn't get evaluated any further. The infinite recursion occurs at the recursive sqrt-iter call, because that argument gets evaluated no matter what good-enough? returns: new-if is just a regular procedure, and under AOE all arguments must be fully evaluated before being passed to a procedure.

Another example of the difference between argument evaluation:

> (if (> 1 0) 1 (/ 1 0))
1
> (new-if (> 1 0) 1 (/ 1 0))
*** ERROR — Divide by zero
(/ 1 0)

Clearly, a standard if doesn't evaluate the alternative until it must: it would only be evaluated if the predicate (> 1 0) evaluated to false, which never happens. new-if, on the other hand, evaluates everything right away; the interpreter tries to evaluate all three arguments and ends up attempting to pass (new-if #t 1 *ERROR!*).

It's interesting that making new-if behave correctly only requires changing the evaluation strategy to Normal Order Evaluation; the sqrt-iter code would then work. I wouldn't want to trace the NOE version of sqrt-iter, though, because the improve calls don't get evaluated and just get passed along, growing with every recursive call, which makes them a nightmare to untangle on paper.

Applicative Order vs Normal Order cont.

Exercise 1.5:
Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is using applicative-order evaluation or normal-order evaluation. He defines the following two procedures:

Then he evaluates the expression

What behavior will Ben observe with an interpreter that uses applicative-order evaluation? What behavior will he observe with an interpreter that uses normal-order evaluation? Explain your answer. (Assume that the evaluation rule for the special form if is the same whether the interpreter is using normal or applicative order: The predicate expression is evaluated first, and the result determines whether to evaluate the consequent or the alternative expression.)
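For reference, the procedures and the expression from the book (the expression is left in a comment, since evaluating it under applicative order loops forever):

```scheme
(define (p) (p))

(define (test x y)
  (if (= x 0)
      0
      y))

;; the expression Ben evaluates:
;; (test 0 (p))
```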

Applicative Order Evaluation:

Will become an infinite loop. Since AOE evaluates all the arguments as much as possible before passing them to test, and p has no terminating case, the argument (p) is a case of infinite recursion and the program goes into an infinite loop.

Normal Order Evaluation:

This time (p) won't be evaluated: test enters the conditional, which terminates by returning 0. The pesky else branch is never run, and hence the infinite loop never comes up. However, if the first argument to test were anything but 0, an infinite loop would still occur.

Exercise 1.3 of SICP

Exercise 1.3: Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.
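One way to do it (procedure names are mine): find the smallest argument and sum the squares of the other two.

```scheme
(define (square x) (* x x))
(define (sum-of-squares a b) (+ (square a) (square b)))

;; sum the squares of the two larger arguments by excluding the smallest
(define (sum-largest-squares a b c)
  (cond ((and (<= a b) (<= a c)) (sum-of-squares b c))
        ((and (<= b a) (<= b c)) (sum-of-squares a c))
        (else (sum-of-squares a b))))
```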

Just for fun another way:
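A sketch of one such alternative (my own take): square all three, then drop the square of the minimum.

```scheme
(define (square x) (* x x))

;; sum of all three squares, minus the square of the smallest
(define (sum-largest-squares2 a b c)
  (- (+ (square a) (square b) (square c))
     (square (min a b c))))
```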