Exercise 1.7: The good-enough?
test used in computing square roots will not be very effective
for finding the square roots of very small numbers. Also, in real computers, arithmetic operations
are almost always performed with limited precision. This makes our test inadequate for very
large numbers. Explain these statements, with examples showing how the test fails for small
and large numbers. An alternative strategy for implementing good-enough?
is to watch how guess
changes from one iteration to the next and to stop when the change is a very small fraction
of the guess. Design a square-root procedure that uses this kind of end test. Does this work
better for small and large numbers?
Algorithm:
$$ x_{n+1} = \frac{1}{2}(x_n+\frac{S}{x_n}) $$
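To get a concrete feel for how quickly this converges, here are the first few iterations for $S = 2$ starting from $x_0 = 1$ (plain arithmetic, rounded to six places):
$$ x_1 = \tfrac{1}{2}\left(1+\tfrac{2}{1}\right) = 1.5,\qquad x_2 = \tfrac{1}{2}\left(1.5+\tfrac{2}{1.5}\right) \approx 1.416667,\qquad x_3 \approx 1.414216 $$
Already $x_3$ agrees with $\sqrt{2} \approx 1.414214$ to five decimal places.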
;; Square root by Newton's method (the update rule above).
(define (average a b)
  (/ (+ a b) 2))

(define (improve guess x)
  (average guess (/ x guess)))

;; Absolute-error test: stop when guess^2 is within 1e-4 of x.
(define (good-enough? guess x)
  (< (abs (- x (* guess guess))) 0.0001))

(define (sqrt-iter guess x)
  (if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x)
                 x)))
> (sqrt-iter 1.0 16e-8)
.007819325057615187
> (sqrt-iter 1.0 16e64)
*Infinite loop — never returns*
;; Relative-change test: stop when the new guess differs from the old one
;; by less than one part in 10^4.
(define (great-enough? oldguess guess)
  (< (abs (- 1 (/ oldguess guess))) 0.0001))

(define (sqrt-iter2 guess x)
  (if (great-enough? guess (improve guess x))
      guess
      (sqrt-iter2 (improve guess x)
                  x)))
> (sqrt-iter2 1.0 16e-8)
4.000016244484425e-4
> (sqrt-iter2 1.0 16e8)
40000.16244495736
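As a quick sanity check (not part of the transcript above), the small-number result can be compared against the exact root, which is 4e-4:

;; relative error of the sqrt-iter2 result for 16e-8; the exact root is 4e-4
(- (/ 4.000016244484425e-4 4e-4) 1)   ; roughly 4.1e-6, comfortably inside the 1e-4 tolerance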
The reason sqrt-iter does so much worse with small numbers is that 1e-4, the threshold used in good-enough?, is itself much larger than 16e-8. Once guess starts converging toward the square root, say to something close to 4e-3, (* guess guess) becomes (* 4e-3 4e-3), which is 16e-6. Both 16e-6 and 16e-8 are already far below the threshold, so good-enough? becomes nearly meaningless. In this particular case guess eventually gets down to about 0.008: (16e-8) - (0.008)^2 = (16e-8) - (6.4e-5) ≈ -6e-5. Since the magnitude of this is smaller than the threshold, the test passes and guess gets returned, even though the true root is 4e-4.
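A quick check with the value returned in the transcript above confirms this: the absolute-error test is satisfied by a guess that is off by a factor of about 20.

;; guess^2 is about 6.1e-5, so (abs (- x (* guess guess))) is about 6.1e-5 < 1e-4
(good-enough? .007819325057615187 16e-8)   ; => #t, yet the true root is 4e-4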
The reason sqrt-iter does so poorly with large numbers is the limited precision of floating-point arithmetic. Eventually guess gets very close to 4e32, the true root of 16e64, and good-enough? evaluates (abs (- 16e64 (* 4.0e32 4.0e32))), which turns out to be about 2.33e49. Mathematically the result should be essentially 0, but floating-point numbers devote only a finite number of bits to the mantissa and exponent, so near 1.6e65 adjacent representable values are spaced roughly 2.3e49 apart. The difference between x and the square of the guess can therefore never come anywhere near the 1e-4 threshold. good-enough? keeps returning false even though guess is already as close to the true square root as a double can represent, and improve cannot do any better: changing the last few bits of a number this large makes no meaningful difference, so sqrt-iter loops forever.
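The same limit can be seen directly at the REPL (a minimal check, using 4.0e32, the true root of 16e64):

;; Near 1.6e65 adjacent doubles are spaced roughly 2.3e49 apart, so even the
;; best representable guess leaves a residual of about 2.33e49 (the value
;; reported above) -- hopelessly far from the 1e-4 threshold.
(abs (- 16e64 (* 4.0e32 4.0e32)))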
great-enough? avoids both pitfalls by comparing a ratio of two numbers of similar magnitude. Whether the guesses are tiny or huge, successive guesses are close to one another, so their ratio is always a number near 1 and the test is insensitive to the absolute scale of x. For very small inputs the ratio remains meaningful because the old and new guesses are not small relative to each other, so there is no risk of falling below the sensitivity of the threshold; for very large inputs the precision problem disappears because a relative change of one part in 10^4 is representable at any magnitude. For the loop not to terminate, successive guesses would have to keep differing by more than one part in 10^4 forever, and the growth of the error term derived below shows why that can never happen.
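Because the test looks only at the ratio of successive guesses, it behaves identically at any scale. A small sketch with made-up guess values:

;; a relative change of 5e-5 passes the 1e-4 test regardless of magnitude
(great-enough? 1.00005 1.0)          ; => #t
(great-enough? 1.00005e32 1.0e32)    ; => #t
(great-enough? 1.00005e-8 1.0e-8)    ; => #t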
Error Growth for this algorithm:
$$ x_{n+1} = \frac{1}{2}(x_n+\frac{S}{x_n}) $$
$$ \epsilon_n = \frac{(x_n)^2}{S} - 1 $$
$$ (x_n)^2 = S(\epsilon_n+1) $$
$$ (x_{n+1})^2 = \frac{1}{4}\left(x_n^2+2S+\frac{S^2}{x_n^2}\right) $$
$$ \frac{(x_{n+1})^2}{S} = \frac{1}{4}\left((\epsilon_n+1)+2+\frac{1}{(\epsilon_n+1)}\right) $$
$$ \frac{(x_{n+1})^2}{S} = \frac{(\epsilon_n+1)^2+2(\epsilon_n+1)+1}{4(\epsilon_n+1)} $$
$$ \frac{(x_{n+1})^2}{S} = \frac{\left((\epsilon_n+1)+1\right)^2}{4(\epsilon_n+1)} $$
$$ \epsilon_{n+1}=\frac{(x_{n+1})^2}{S}-1 = \frac{\left((\epsilon_n+1)+1\right)^2}{4(\epsilon_n+1)}-1 $$
$$ \epsilon_{n+1}=\frac{(\epsilon_n+2)^2}{4(\epsilon_n+1)}-1 $$
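Simplifying the last expression shows why convergence is so fast: the numerator collapses to $\epsilon_n^2$.
$$ \epsilon_{n+1}=\frac{(\epsilon_n+2)^2-4(\epsilon_n+1)}{4(\epsilon_n+1)}=\frac{\epsilon_n^2}{4(\epsilon_n+1)}\approx\frac{\epsilon_n^2}{4}\quad\text{for small }\epsilon_n $$
For large $\epsilon_n$ the same expression is roughly $\epsilon_n/4$, so the error first shrinks by about a factor of 4 per step and then quadratically, which is why only 16 iterations are needed even when starting from $\epsilon_1 \approx 6.2\times10^6$.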
;; Iterate the error recurrence above, counting the steps needed to drive
;; epsilon below 1e-9. (square is built into MIT Scheme; it is defined here
;; so the snippet stands on its own.)
(define (square x) (* x x))
(define (error n eps)
  (if (< (abs eps) 1e-9)
      n
      (error (+ n 1)
             (- (/ (square (+ eps 2))
                   (* 4 (+ eps 1)))
                1))))
> (error 1 (- (/ 1.0 16e-8) 1))
16
The epsilon at that point is only 1.6492585075411625e-11. After only 16 iterations the algorithm can return a result accurate to better than one part in a billion; the actual error is closer to one part in sixty billion.