Exercise 1.22 of SICP

Exercise 1.22: Most Lisp implementations include a primitive called runtime (my implementation of Scheme doesn’t, so I define a workaround) that returns an integer that specifies the amount of time the system has been running (measured, for example, in microseconds). The following timed-prime-test procedure, when called with an integer n, prints n and checks to see if n is prime. If n is prime, the procedure prints three asterisks followed by the amount of time used in performing the test.

Using this procedure, write a procedure search-for-primes that checks the primality of consecutive odd integers in a specified range. Use your procedure to find the three smallest primes larger than 1000; larger than 10,000; larger than 100,000; larger than 1,000,000. Note the time needed to test each prime. Since the testing algorithm has order of growth of \Theta(\sqrt{n}), you should expect that testing for primes around 10,000 should take about \sqrt{10} times as long as testing for primes around 1000. Do your timing data bear this out? How well do the data for 100,000 and 1,000,000 support the \sqrt{n} prediction? Is your result compatible with the notion that programs on your machine run in time proportional to the number of steps required for the computation?
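Since neither timed-prime-test nor my search-for-primes appears above, here is a minimal sketch. It assumes SICP's trial-division prime? test from section 1.2.6, and it fakes runtime with Racket's current-inexact-milliseconds (an assumption on my part; substitute whatever clock your implementation provides):

```scheme
;; Trial-division primality test from SICP section 1.2.6.
(define (square x) (* x x))
(define (divides? a b) (= (remainder b a) 0))
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))
(define (smallest-divisor n) (find-divisor n 2))
(define (prime? n) (= n (smallest-divisor n)))

;; Workaround for the missing runtime primitive.
;; current-inexact-milliseconds is Racket-specific (an assumption);
;; dividing by 1000 reports elapsed time in seconds, as in the transcripts.
(define (runtime) (/ (current-inexact-milliseconds) 1000))

(define (report-prime elapsed-time)
  (display " *** ")
  (display elapsed-time))
(define (start-prime-test n start-time)
  (when (prime? n)
    (report-prime (- (runtime) start-time))))
(define (timed-prime-test n)
  (newline)
  (display n)
  (start-prime-test n (runtime)))

;; Test every odd integer in [a, b] for primality, timing each test.
(define (search-for-primes a b)
  (cond ((> a b) 'done)
        ((even? a) (search-for-primes (+ a 1) b))
        (else (timed-prime-test a)
              (search-for-primes (+ a 2) b))))
```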

> (timed-prime-test 68720001023)
68720001023 *** .4850001335144043

> (timed-prime-test 70368760954879)
70368760954879 *** 16.844000101089478

Since this exercise is a bit dated, testing numbers on the order of 10^6 for primality doesn’t give my system any noticeable workout. I test my ranges starting from 10^{10}.
> (search-for-primes (expt 10 10) (- (expt 10 11) 1))

10000000019 *** .14100003242492676
10000000033 *** .1399998664855957
10000000061 *** .14000010490417483

According to the theoretical calculations, the numbers on the order of 10^{11} should take 0.44 \approx 0.14 \cdot \sqrt{10} seconds.
Error: 17%

> (search-for-primes (expt 10 11) (- (expt 10 12) 1))

100000000003 *** .5310001373291016
100000000019 *** .5320000648498535
100000000057 *** .54699993133544923

1.71 \approx 0.54 \cdot \sqrt{10} seconds.
Error: 3%

> (search-for-primes (expt 10 12) (- (expt 10 13) 1))

1000000000039 *** 1.7660000324249268
1000000000061 *** 1.75
1000000000063 *** 1.76500010490417483

5.57 \approx 1.76 \cdot \sqrt{10} seconds.
Error: 2%

> (search-for-primes (expt 10 13) (- (expt 10 14) 1))

10000000000037 *** 5.672000169754028
10000000000051 *** 5.672000169754028
10000000000099 *** 5.65599989891052253

It seems that \Theta(\sqrt{n}) is indeed a good predictor of the time the algorithm takes to test a number for primality.

Exercise 1.21 of SICP

Exercise 1.21: Use the smallest-divisor procedure to find the smallest divisor of each of the following numbers: 199, 1999, 19999.
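For reference, smallest-divisor as given in SICP section 1.2.6, with its find-divisor and divides? helpers:

```scheme
;; Finds the smallest integer divisor of n that is >= 2.
(define (square x) (* x x))
(define (divides? a b) (= (remainder b a) 0))
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)   ; no divisor up to sqrt(n) => n is prime
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))
(define (smallest-divisor n) (find-divisor n 2))
```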

> (smallest-divisor 199)
199
> (smallest-divisor 1999)
1999
> (smallest-divisor 19999)
7

Iterative version of expmod

Recursive version:
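This is expmod as given in SICP section 1.2.6 — it computes base^exp mod m in a logarithmic number of steps, but the process it generates is recursive:

```scheme
(define (square x) (* x x))

;; base^exp mod m by successive squaring; deferred remainder operations
;; pile up on the stack, so the process is recursive.
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))
```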

I was a bit surprised that the problem set didn’t ask to rewrite the above algorithm in an iterative form. So here it is.
Iterative Version:
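A sketch of one possible iterative version — the accumulator a maintains the invariant that (a \cdot b^e) mod m is unchanged from state to state, so no deferred operations accumulate:

```scheme
;; Iterative expmod: invariant (a * b^e) mod m is constant across states,
;; and equals the answer when e reaches 0.
(define (expmod base exp m)
  (define (iter b e a)
    (cond ((= e 0) a)
          ((even? e) (iter (remainder (* b b) m) (/ e 2) a))
          (else (iter b (- e 1) (remainder (* a b) m)))))
  (iter base exp 1))
```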

Exercise 1.20 of SICP

Exercise 1.20: The process that a procedure generates is of course dependent on the rules used by the interpreter. As an example, consider the iterative gcd procedure given above. Suppose we were to interpret this procedure using normal-order evaluation, as discussed in section 1.1.5. (The normal-order-evaluation rule for if is described in exercise 1.5.) Using the substitution method (for normal order), illustrate the process generated in evaluating (gcd 206 40) and indicate the remainder operations that are actually performed. How many remainder operations are actually performed in the normal-order evaluation of (gcd 206 40)? In the applicative-order evaluation?

GCD Algorithm:
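This is gcd as given in SICP section 1.2.5 (Euclid's algorithm):

```scheme
;; Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a.
(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (remainder a b))))
```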

Normal Order:

Every time an R (a nested remainder expression) appears in the (= R 0) predicate of an if, that R gets evaluated.
#{R1} + #{R2} + #{R3} + #{R4} = 1 + 2 + 4 + 7 = 14 times remainder is run in the if predicates.
R3 is also evaluated once in order to return the actual result, contributing its #{R3} = 4 remainder calls.
14 + #{R3} = 14 + 4 = 18
Number of times the remainder function is called: 18.

Applicative Order:
(gcd 206 40)
(gcd 40 (remainder 206 40))
(gcd 40 6)
(gcd 6 (remainder 40 6))
(gcd 6 4)
(gcd 4 (remainder 6 4))
(gcd 4 2)
(gcd 2 (remainder 4 2))
(gcd 2 0)
2
Number of times the remainder function is called: 4.

Exercise 1.19 of SICP

Exercise 1.19: There is a clever algorithm for computing the Fibonacci numbers in a logarithmic number of steps. Recall the transformation of the state variables a and b in the fib-iter process of section 1.2.2: a <- a + b and b <- a. Call this transformation T, and observe that applying T over and over again n times, starting with 1 and 0, produces the pair Fib(n + 1) and Fib(n). In other words, the Fibonacci numbers are produced by applying T^n, the nth power of the transformation T, starting with the pair (1, 0). Now consider T to be the special case of p = 0 and q = 1 in a family of transformations T_{pq}, where T_{pq} transforms the pair (a, b) according to a <- bq + aq + ap and b <- bp + aq. Show that if we apply such a transformation T_{pq} twice, the effect is the same as using a single transformation T_{p'q'} of the same form, and compute p' and q' in terms of p and q. This gives us an explicit way to square these transformations, and thus we can compute T^n using successive squaring, as in the fast-expt procedure. Put this all together to complete the following procedure, which runs in a logarithmic number of steps:

Definition of a_1 and b_1:
a_1 \leftarrow bq + aq + ap
b_1 \leftarrow bp + aq
T^2_{pq} = T_{p'q'}
a_1 = bq + aq + ap
b_1 = bp + aq
a_2 = b_1q + a_1q + a_1p
b_2 = b_1p + a_1q
The key here is to substitute the definitions of a_1 and b_1 into a_2 and b_2 in order to get p' and q'.
I start with b_2 because the expression is simpler to work with. The values I get for p' and q' must also work for a_2.
b_2 = b_1p + a_1q = (bp+aq)p + (bq+aq+ap)q
b_2 =  bp^2+aqp + bq^2+aq^2+aqp
b_2 =  b(p^2+ q^2)+a(2qp+q^2)
p'=(p^2+ q^2)
q'= (2qp+q^2)
b_2 = bp'+aq'
a_2 is trickier to check because of the number of terms, so I’ll take a few shortcuts here in showing the math.
a_2 = bpq + aq^2 + bq^2 + aq^2 + aqp + bqp + aqp + ap^2
a_2 = b(2qp+q^2) + a(2qp+q^2) + a(p^2+ q^2)
a_2 = bq'+aq'+ap'
Finally: T^2_{pq} = T_{p'q'} where p'=  p^2 + q^2 and q'= 2qp + q^2
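Plugging p' and q' into the procedure skeleton from the exercise gives the standard completed version:

```scheme
;; Logarithmic Fibonacci via successive squaring of the T_pq transformation.
(define (fib n)
  (fib-iter 1 0 0 1 n))
(define (fib-iter a b p q count)
  (cond ((= count 0) b)
        ((even? count)
         (fib-iter a
                   b
                   (+ (* p p) (* q q))    ; p' = p^2 + q^2
                   (+ (* 2 p q) (* q q))  ; q' = 2pq + q^2
                   (/ count 2)))
        (else (fib-iter (+ (* b q) (* a q) (* a p))  ; a <- bq + aq + ap
                        (+ (* b p) (* a q))          ; b <- bp + aq
                        p
                        q
                        (- count 1)))))
```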

> (fib 10)
55
> (fib 100)
354224848179261915075

Exercise 1.18 of SICP

Exercise 1.18: Using the results of exercises 1.16 and 1.17, devise a procedure that generates an iterative process for multiplying two integers in terms of adding, doubling, and halving and uses a logarithmic number of steps.
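A sketch of one possible solution, assuming double and halve helpers as in exercise 1.17. The state variable acc keeps the invariant acc + b \cdot n unchanged from state to state, so the answer is acc when n reaches 0:

```scheme
(define (double x) (+ x x))
(define (halve x) (/ x 2))

;; Iterative multiplication in Theta(log n) steps.
;; Invariant: acc + (b * n) equals the original product.
(define (fast-mult-iter b n acc)
  (cond ((= n 0) acc)
        ((even? n) (fast-mult-iter (double b) (halve n) acc))
        (else (fast-mult-iter b (- n 1) (+ acc b)))))
(define (fast-mult b n) (fast-mult-iter b n 0))
```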

Exercise 1.17 of SICP

Exercise 1.17: The exponentiation algorithms in this section are based on performing exponentiation by means of repeated multiplication. In a similar way, one can perform integer multiplication by means of repeated addition. The following multiplication procedure (in which it is assumed that our language can only add, not multiply) is analogous to the expt procedure:

This algorithm takes a number of steps that is linear in b. Now suppose we include, together with addition, operations double, which doubles an integer, and halve, which divides an (even) integer by 2. Using these, design a multiplication procedure analogous to fast-expt that uses a logarithmic number of steps.
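The definitions behind the mult and fast-mult calls in the transcript below aren’t shown, so here is my reconstruction — mult is the linear repeated-addition procedure from the exercise text, fast-mult the logarithmic analogue of fast-expt:

```scheme
;; Linear multiplication by repeated addition: Theta(b) steps.
(define (mult a b)
  (if (= b 0)
      0
      (+ a (mult a (- b 1)))))

(define (double x) (+ x x))
(define (halve x) (/ x 2))

;; Logarithmic multiplication, analogous to fast-expt: Theta(log b) steps.
(define (fast-mult a b)
  (cond ((= b 0) 0)
        ((even? b) (double (fast-mult a (halve b))))
        (else (+ a (fast-mult a (- b 1))))))
```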

The difference between \Theta(n) and \Theta(\log n) is apparent for large numbers.
On my system:
> (mult 2 (expt 20 8))
Takes forever, I had to interrupt it.
> (fast-mult 2 (expt 20 8))
51200000000

The answer is almost instantaneous.

Exercise 1.16 of SICP

Exercise 1.16: Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the observation that (b^{\frac{n}{2}})^2 = (b^2)^{\frac{n}{2}}, keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product a \cdot b^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)

For me the thing to notice about this algorithm is the following property:
Notice how, when the exponent n is odd, b^{7} = b \cdot (b^{6}) = b \cdot (b^2 \cdot (b^{4}))
All the b’s outside the parentheses get accumulated in the state variable a of the algorithm.
If b = 2 and n = 7:
(fast-expt-iter 2 7 1)
(fast-expt-iter 2 6 2)
(fast-expt-iter 4 3 2)
(fast-expt-iter 4 2 8)
(fast-expt-iter 16 1 8)
(fast-expt-iter 16 0 128)
128
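The trace above corresponds to this sketch of the procedure (my reconstruction, with the state variables ordered b, n, a to match the calls shown):

```scheme
;; Invariant: a * b^n is unchanged from state to state.
(define (fast-expt-iter b n a)
  (cond ((= n 0) a)
        ((even? n) (fast-expt-iter (* b b) (/ n 2) a))  ; square the base, halve n
        (else (fast-expt-iter b (- n 1) (* a b)))))     ; fold one b into a
(define (fast-expt b n) (fast-expt-iter b n 1))
```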

Exercise 1.15 of SICP

Exercise 1.15: The sine of an angle (specified in radians) can be computed by making use of the approximation \sin x \approx x if x is sufficiently small, and the trigonometric identity
\sin(x) = 3\sin(\frac{x}{3})-4\sin^3(\frac{x}{3})
to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered “sufficiently small” if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in the following procedures:
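These are the book's definitions (SICP section 1.2.3, exercise 1.15):

```scheme
(define (cube x) (* x x x))

;; The triple-angle reduction: sin x = 3 sin(x/3) - 4 sin^3(x/3).
(define (p x) (- (* 3 x) (* 4 (cube x))))

;; Shrink the angle by thirds until it is small enough that sin x ~ x.
(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
```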

a. How many times is the procedure p applied when (sine 12.15) is evaluated?
5 times.
(p (sine 4.05))
(p (p (sine 1.35)))
(p (p (p (sine 0.45))))
(p (p (p (p (sine 0.15)))))
(p (p (p (p (p (sine 0.05))))))
(p (p (p (p (p 0.05)))))
(p (p (p (p 0.14950000000000002))))
(p (p (p 0.43513455050000005)))
(p (p 0.9758465331678772))
(p -0.7895631144708228)
-.39980345741334

b. What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?
The order of growth in the number of steps is \Theta(\log_3 a), because the number of times p is applied is the number of times a must be divided by 3 before it falls below 0.1. The space growth is the same, since each application of p is a deferred operation held on the stack.

Exercise 1.14 of SICP

Exercise 1.14: Draw the tree illustrating the process generated by the count-change procedure of section 1.2.2 in making change for 11 cents. What are the orders of growth of the space and number of steps used by this process as the amount to be changed increases?
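For reference, count-change as given in SICP section 1.2.2:

```scheme
;; Number of ways to make change for amount using US coin denominations.
(define (count-change amount) (cc amount 5))
(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount (- kinds-of-coins 1))
                 (cc (- amount (first-denomination kinds-of-coins))
                     kinds-of-coins)))))
(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))
```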

The order of growth of the space is \Theta(n), where n is the amount to be changed: the space needed is proportional to the maximum depth of the recursion tree.
The number of steps used by this process grows as \Theta(n^c) as the amount to be changed increases. I have no idea if this is true, nor do I have a proof; it is merely a guess. I need a little more algorithm analysis under my belt to solve this one.