## Exercise 1.32 of SICP

Exercise 1.32:
a. Show that sum and product (exercise 1.31) are both special cases of a still more general notion called accumulate that combines a collection of terms, using some general accumulation function:

(accumulate combiner null-value term a next b)

Accumulate takes as arguments the same term and range specifications as sum and product, together with a combiner procedure (of two arguments) that specifies how the current term is to be combined with the accumulation of the preceding terms and a null-value that specifies what base value to use when the terms run out. Write accumulate and show how sum and product can both be defined as simple calls to accumulate.

b. If your accumulate procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process.

Part A:
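A recursive accumulate can be sketched as follows, with sum and product falling out as one-line calls (signatures follow the book):

```scheme
;; Combine (term a), (term (next a)), ..., (term b) with combiner,
;; using null-value when the terms run out.
(define (accumulate combiner null-value term a next b)
  (if (> a b)
      null-value
      (combiner (term a)
                (accumulate combiner null-value term (next a) next b))))

;; sum and product are now special cases.
(define (sum term a next b)
  (accumulate + 0 term a next b))

(define (product term a next b)
  (accumulate * 1 term a next b))
```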

```
> (simpson-integral cube 1 2 1000)
3.75
> (wallis 5000)
3.1419067029355268
```

It’s good to know the general versions of sum and product still work.

Part B:

Iterative versions of product and sum follow directly from an iterative accumulate function.
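An iterative accumulate carries the running result in an accumulator; for commutative, associative combiners like + and * it computes the same answer as the recursive version:

```scheme
(define (accumulate combiner null-value term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (combiner (term a) result))))
  (iter a null-value))
```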

## Exercise 1.31 of SICP

Exercise 1.31:
a. The sum procedure is only the simplest of a vast number of similar abstractions that can be captured as higher-order procedures. Write an analogous procedure called product that returns the product of the values of a function at points over a given range. Show how to define factorial in terms of product. Also use product to compute approximations to $\pi$ using the formula
$\frac{\pi}{4} = \frac{2\cdot4\cdot4\cdot6\cdot6\cdot8\cdots}{3\cdot3\cdot5\cdot5\cdot7\cdot7\cdots}$
b. If your product procedure generates a recursive process, write one that generates an iterative process. If it generates an iterative process, write one that generates a recursive process.

Part A:

Notice that the product function looks almost the same as sum. The only things truly different are the multiplication operation and the base case being multiplicative identity. Factorial doesn’t require much except the identity function and a trivial next function.
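A sketch of product and the factorial built on it (identity and inc are the trivial helpers mentioned above):

```scheme
(define (product term a next b)
  (if (> a b)
      1
      (* (term a)
         (product term (next a) next b))))

(define (identity x) x)
(define (inc n) (+ n 1))

;; n! is the product of the identity function over 1..n.
(define (factorial n)
  (product identity 1 inc n))
```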

It’s easier to think about the Wallis product if it’s written slightly differently:
$\frac{\pi}{4} = \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdot \frac{8}{7} \cdots$
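In that form the k-th factor is (k+1)/(k+2) when k is odd and (k+2)/(k+1) when k is even, which translates directly into a term procedure. This sketch assumes the product and inc from part a; the terms are computed in floating point to avoid huge exact rationals:

```scheme
;; k-th factor of the rewritten product: 2/3, 4/3, 4/5, 6/5, ...
(define (wallis-term k)
  (if (odd? k)
      (/ (+ k 1.0) (+ k 2.0))
      (/ (+ k 2.0) (+ k 1.0))))

(define (wallis n)
  (* 4 (product wallis-term 1 inc n)))
```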

```
> (wallis 5)
2.9257142857142857
> (wallis 50)
3.1719440917427058
> (wallis 500)
3.1447232866889245
> (wallis 5000)
3.1419067029355268
> (wallis 10000)
3.1417497057380523
```

While the results do tend to $\pi$, they do so very slowly.

Part B:

This is the iterative version of product. It looks similar to the iterative version of sum.
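A sketch of that iterative product:

```scheme
(define (product term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (* result (term a)))))
  (iter a 1))
```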

## Exercise 1.30 of SICP

Exercise 1.30: The sum procedure above generates a linear recursion. The procedure can be rewritten so that the sum is performed iteratively. Show how to do this by filling in the missing expressions in the following definition:

The iterative solution:
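Filling in the missing expressions in the book’s template gives:

```scheme
(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (+ result (term a)))))
  (iter a 0))
```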

## Exercise 1.29 of SICP

Exercise 1.29: Simpson’s Rule is a more accurate method of numerical integration than the method illustrated above. Using Simpson’s Rule, the integral of a function f between a and b is approximated as
$\frac{h}{3}[y_0+4y_1+2y_2+4y_3+2y_4+ \cdots + 2y_{n-2} + 4y_{n-1} + y_n]$
where $h=\frac{b-a}{n}$, for some even integer n, and $y_k = f(a + kh)$. (Increasing n increases the accuracy of the approximation.) Define a procedure that takes as arguments f, a, b, and n and returns the value of the integral, computed using Simpson’s Rule. Use your procedure to integrate cube between 0 and 1 (with n = 100 and n = 1000), and compare the results to those of the integral procedure shown above.
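A sketch of the procedure (the names simpson-integral and cube match the REPL sessions in these notes; sum and inc are the higher-order sum and the increment helper from this section):

```scheme
(define (simpson-integral f a b n)
  (define h (/ (- b a) n))
  (define (y k) (f (+ a (* k h))))
  ;; Leading coefficient: 1 at the endpoints, 4 for odd k, 2 for even k.
  (define (coefficient k)
    (cond ((or (= k 0) (= k n)) 1)
          ((odd? k) 4)
          (else 2)))
  (define (term k) (* (coefficient k) (y k)))
  (* (/ h 3) (sum term 0 inc n)))
```

Since Simpson’s Rule is exact for cubic polynomials, integrating cube gives the exact answer for any even n.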

There are several things I want to note here. The auxiliary functions for the leading coefficients and for h make life a lot easier. It’s also important to note that the variables a and b in sum do not play the same role as in the earlier integral example: here they just count indices from a = 0 to b = n. It might not be immediately obvious, but when term is called from sum, a (the start of the integration interval) is already bound from the original call to simpson-integral; term receives only the index, which it promptly evaluates.

## Exercise 1.28 of SICP

Exercise 1.28: One variant of the Fermat test that cannot be fooled is called the Miller-Rabin test (Miller 1976; Rabin 1980). This starts from an alternate form of Fermat’s Little Theorem, which states that if n is a prime number and a is any positive integer less than n, then a raised to the (n – 1)st power is congruent to 1 modulo n. To test the primality of a number n by the Miller-Rabin test, we pick a random number a < n and raise it to the (n – 1)st power modulo n using the expmod procedure. However, whenever we perform the squaring step in expmod, we check to see if we have discovered a “nontrivial square root of 1 modulo n,” that is, a number not equal to 1 or n – 1 whose square is equal to 1 modulo n. It is possible to prove that if such a nontrivial square root of 1 exists, then n is not prime. It is also possible to prove that if n is an odd number that is not prime, then, for at least half the numbers a < n, computing $a^{n-1}$ in this way will reveal a nontrivial square root of 1 modulo n. (This is why the Miller-Rabin test cannot be fooled.) Modify the expmod procedure to signal if it discovers a nontrivial square root of 1, and use this to implement the Miller-Rabin test with a procedure analogous to fermat-test. Check your procedure by testing various known primes and non-primes. Hint: One convenient way to make expmod signal is to have it return 0.

Carmichael Numbers: 561, 1105, 1729, 2465, 2821, 6601
Primes: 2, 3, 10169, 31337, 1000249, 382176531747930913347461352433
Non-primes: 6, 27, 49, 1024
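A sketch of the Miller-Rabin pieces: expmod is modified so its squaring step returns 0 when it finds a nontrivial square root of 1 modulo m (square-check is a helper name of my choosing, not from the book):

```scheme
;; Square x mod m, but signal 0 on a nontrivial square root of 1.
(define (square-check x m)
  (if (and (not (= x 1))
           (not (= x (- m 1)))
           (= (remainder (* x x) m) 1))
      0
      (remainder (* x x) m)))

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (square-check (expmod base (/ exp 2) m) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))

(define (miller-rabin-test n)
  (define (try-it a)
    (= (expmod a (- n 1) n) 1))
  (try-it (+ 1 (random (- n 1)))))

(define (fast-prime? n times)
  (cond ((= times 0) #t)
        ((miller-rabin-test n) (fast-prime? n (- times 1)))
        (else #f)))
```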

```
> (fast-prime? 561 10)
#f
> (fast-prime? 1105 10)
#f
> (fast-prime? 1729 10)
#f
> (fast-prime? 2465 10)
#f
> (fast-prime? 2821 10)
#f
> (fast-prime? 6601 10)
#f
> (fast-prime? 2 10)
#t
> (fast-prime? 3 10)
#t
> (fast-prime? 10169 10)
#t
> (fast-prime? 31337 10)
#t
> (fast-prime? 1000249 10)
#t
> (fast-prime? 382176531747930913347461352433 10)
#t
> (fast-prime? 6 10)
#f
> (fast-prime? 27 10)
#f
> (fast-prime? 49 10)
#f
> (fast-prime? 1024 10)
#f
```

## Exercise 1.27 of SICP

Exercise 1.27: Demonstrate that the Carmichael numbers listed in footnote 47 really do fool the Fermat test. That is, write a procedure that takes an integer n and tests whether $a^n$ is congruent to a modulo n for every a < n.
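A sketch of a procedure with the name used below; it checks the congruence for every a from 1 to n – 1, relying on the book’s expmod:

```scheme
(define (test-prime? n)
  (define (congruent? a)
    (= (expmod a n n) a))
  (define (check a)
    (cond ((= a n) #t)
          ((congruent? a) (check (+ a 1)))
          (else #f)))
  (check 1))
```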

Carmichael numbers and their prime factorizations:

- 561 = 3 · 11 · 17
- 1105 = 5 · 13 · 17
- 1729 = 7 · 13 · 19
- 2465 = 5 · 17 · 29
- 2821 = 7 · 13 · 31
- 6601 = 7 · 23 · 41

```
> (test-prime? 561)
#t
> (test-prime? 1105)
#t
> (test-prime? 1729)
#t
> (test-prime? 2465)
#t
> (test-prime? 2821)
#t
> (test-prime? 6601)
#t
```

These numbers indeed fool the test. I will trace through the case where a = 10 and n = 561. To save space I will leave the remainder procedure out but apply it as if it were there; I don’t think much is lost by applying remainder implicitly.

This exercise and the Miller-Rabin test are getting me to think about Discrete Logarithms.

## Exercise 1.26 of SICP

Exercise 1.26: Louis Reasoner is having great difficulty doing exercise 1.24. His fast-prime? test seems to run more slowly than his prime? test. Louis calls his friend Eva Lu Ator over to help. When they examine Louis’s code, they find that he has rewritten the expmod procedure to use an explicit multiplication, rather than calling square:
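Louis’s version, as given in the exercise, replaces the call to square with an explicit multiplication, which evaluates expmod twice:

```scheme
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (* (expmod base (/ exp 2) m)
                       (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
```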

“I don’t see what difference that could make,” says Louis. “I do.” says Eva. “By writing the procedure like that, you have transformed the $\Theta(\log n)$ process into a $\Theta(n)$ process.” Explain.

Just as the tree-recursive procedure for Fibonacci numbers did, the double call to expmod builds a binary tree of calls. Here the tree has depth $\log_2 n$, and the number of nodes in a binary tree of that depth (that is, the number of computational steps) is $2^{\log_2 n} = n$. This means the algorithm is indeed $\Theta(n)$.

## Exercise 1.25 of SICP

Exercise 1.25: Alyssa P. Hacker complains that we went to a lot of extra work in writing expmod. After all, she says, since we already know how to compute exponentials, we could have simply written
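Her proposed version, as given in the exercise (fast-expt is the successive-squaring exponentiation from section 1.2.4):

```scheme
(define (expmod base exp m)
  (remainder (fast-expt base exp) m))
```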

Is she correct? Would this procedure serve as well for our fast prime tester? Explain.

While the procedure is mathematically accurate and would produce the right results, fast-expt would produce enormous numbers that would then have to be reduced modulo m. For example:

```
(expmod 2 10 7)
(remainder (fast-expt 2 10) 7)
(remainder 1024 7)
```

The 1024 might not seem so bad at first.

```
(expmod 2 100 7)
(remainder (fast-expt 2 100) 7)
(remainder 1267650600228229401496703205376 7)
```

That’s a bit excessive.

The version below keeps the numbers in check.
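This is the book’s expmod, which reduces modulo m after every squaring and multiplication, so intermediate values never grow much beyond m:

```scheme
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
```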

I am taking some liberty here in skipping a few minor steps.
```
(expmod 2 10 7)
(remainder (square (expmod 2 5 7)) 7)
(remainder (square (remainder (* 2 (expmod 2 4 7)) 7)) 7)
(remainder (square (remainder (* 2 (remainder (square (expmod 2 2 7)) 7)) 7)) 7)
(remainder (square (remainder (* 2 (remainder (square (remainder (square (expmod 2 1 7)) 7)) 7)) 7)) 7)
(remainder (square (remainder (* 2 (remainder (square (remainder (square (remainder (* 2 1) 7)) 7)) 7)) 7)) 7)
(remainder (square (remainder (* 2 (remainder (square (remainder (square (remainder 2 7)) 7)) 7)) 7)) 7)
(remainder (square (remainder (* 2 (remainder (square (remainder (square 2) 7)) 7)) 7)) 7)
(remainder (square (remainder (* 2 (remainder (square (remainder 4 7)) 7)) 7)) 7)
(remainder (square (remainder (* 2 (remainder (square 4) 7)) 7)) 7)
(remainder (square (remainder (* 2 2) 7)) 7)
(remainder (square (remainder 4 7)) 7)
(remainder (square 4) 7)
2
```

The key difference is that at no point do any of the numbers grow to be much larger than the base.

## Exercise 1.24 of SICP

Exercise 1.24: Modify the timed-prime-test procedure of exercise 1.22 to use fast-prime? (the Fermat method), and test each of the 12 primes you found in that exercise. Since the Fermat test has $\Theta(\log n)$ growth, how would you expect the time to test primes near 1,000,000 to compare with the time needed to test primes near 1000? Do your data bear this out? Can you explain any discrepancy you find?

I arbitrarily chose to run the fast-prime? test 100 times. As in 1.22, I ran it only on ranges larger than the book suggested.
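The only change from exercise 1.22 is in start-prime-test; a sketch, assuming timed-prime-test and report-prime are as in that exercise:

```scheme
(define (start-prime-test n start-time)
  (if (fast-prime? n 100)
      (report-prime (- (runtime) start-time))))
```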

```
> (search-for-primes (expt 10 10) (- (expt 10 11) 1))

10000000019 *** .016000032424926758
10000000033 *** .016000032424926758
10000000061 *** .016000032424926758

> (search-for-primes (expt 10 11) (- (expt 10 12) 1))

100000000003 *** .016000032424926758
100000000019 *** .016000032424926758
100000000057 *** .015000104904174805

> (search-for-primes (expt 10 12) (- (expt 10 13) 1))

1000000000039 *** .016000032424926758
1000000000061 *** 0.
1000000000063 *** .015999794006347656

> (search-for-primes (expt 10 13) (- (expt 10 14) 1))

10000000000037 *** .015999794006347656
10000000000051 *** .016000032424926758
10000000000099 *** .014999866485595703
```

The test is too fast to give meaningful times to compare against. I would expect numbers of order $10^6$ to take 2 times longer to test than numbers of order $10^3$. The reasoning follows from the properties of logarithms: $\log(10^6) = 6$ and $\log(10^3) = 3$.

In my case the time near $10^{10}$ was about .016, so I would predict about .032 near $10^{20}$:
```
> (search-for-primes (expt 10 20) (- (expt 10 21) 1))

100000000000000000039 *** .03099989891052246
100000000000000000129 *** .03099989891052246
100000000000000000151 *** .03099989891052246
```

## Exercise 1.23 of SICP

Exercise 1.23: The smallest-divisor procedure shown at the start of this section does lots of needless testing: After it checks to see if the number is divisible by 2 there is no point in checking to see if it is divisible by any larger even numbers. This suggests that the values used for test-divisor should not be 2, 3, 4, 5, 6, …, but rather 2, 3, 5, 7, 9, …. To implement this change, define a procedure next that returns 3 if its input is equal to 2 and otherwise returns its input plus 2. Modify the smallest-divisor procedure to use (next test-divisor) instead of (+ test-divisor 1). With timed-prime-test incorporating this modified version of smallest-divisor, run the test for each of the 12 primes found in exercise 1.22. Since this modification halves the number of test steps, you should expect it to run about twice as fast. Is this expectation confirmed? If not, what is the observed ratio of the speeds of the two algorithms, and how do you explain the fact that it is different from 2?
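A sketch of next and the modified find-divisor (smallest-divisor, divides?, and square as in the book):

```scheme
(define (next n)
  (if (= n 2) 3 (+ n 2)))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (next test-divisor)))))
```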

12 Primes from 1.22:

1. 10000000019 *** .14100003242492676
2. 10000000033 *** .1399998664855957
3. 10000000061 *** .1400001049041748
4. 100000000003 *** .5310001373291016
5. 100000000019 *** .5320000648498535
6. 100000000057 *** .5469999313354492
7. 1000000000039 *** 1.7660000324249268
8. 1000000000061 *** 1.75
9. 1000000000063 *** 1.7650001049041748
10. 10000000000037 *** 5.672000169754028
11. 10000000000051 *** 5.672000169754028
12. 10000000000099 *** 5.6559998989105225

Using timed-prime-test with the new next function:

1. 10000000019 *** .07799983024597168
2. 10000000033 *** .07800006866455078
3. 10000000061 *** .07800006866455078
4. 100000000003 *** .2969999313354492
5. 100000000019 *** .29600000381469727
6. 100000000057 *** .2969999313354492
7. 1000000000039 *** .9849998950958252
8. 1000000000061 *** .9840002059936523
9. 1000000000063 *** .9839999675750732
10. 10000000000037 *** 3.171999931335449
11. 10000000000051 *** 3.1570000648498535
12. 10000000000099 *** 3.171999931335449
| Using next | Exercise 1.22 | Factor of speedup |
| --- | --- | --- |
| 0.07799983 | 0.141000032 | 1.807696658 |
| 0.078000069 | 0.139999866 | 1.794868503 |
| 0.078000069 | 0.140000105 | 1.79487156 |
| 0.296999931 | 0.531000137 | 1.787879664 |
| 0.296000004 | 0.532000065 | 1.797297493 |
| 0.296999931 | 0.546999931 | 1.841751036 |
| 0.984999895 | 1.766000032 | 1.792893625 |
| 0.984000206 | 1.75 | 1.778454912 |
| 0.983999968 | 1.765000105 | 1.793699353 |
| 3.171999931 | 5.67200017 | 1.788146372 |
| 3.157000065 | 5.67200017 | 1.796642399 |
| 3.171999931 | 5.655999899 | 1.78310215 |

Judging by the speedup column, there is roughly a two-fold increase in the speed of determining whether a number is prime. The observed ratio of about 1.8 falls slightly short of 2, plausibly because next does a little extra work (an equality test) compared with simply adding 1.