Things that surprised me

These are concepts from math, computer science, and physics that have surprised me over the years:

Recursion

My first formal introduction to recursion was through Fibonacci numbers. While the uses of these numbers are quite surprising (e.g. in the analysis of the Euclidean algorithm’s running time), I was never really impressed with them. They seem simple enough. What really floored me about recursion was the solution to the Towers of Hanoi problem and, subsequently, the 8 Queens problem. On the surface they seemed so difficult to write a good, simple solution for, but recursion made them almost trivial. Recursion still amazes me today.
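The Towers of Hanoi solution really is almost trivial once you trust the recursion. A minimal sketch (method and parameter names are my own, not from the post):

```java
// Recursive Towers of Hanoi: to move n disks, move n-1 out of the way,
// move the largest disk, then move the n-1 back on top of it.
public class Hanoi {
    // Moves n disks from peg 'from' to peg 'to' using 'spare'; returns the move count.
    static int solve(int n, char from, char to, char spare) {
        if (n == 0) return 0;
        int moves = solve(n - 1, from, spare, to);   // clear the top n-1 disks
        System.out.println("move disk " + n + ": " + from + " -> " + to);
        moves += 1;
        moves += solve(n - 1, spare, to, from);      // restack them on the moved disk
        return moves;
    }

    public static void main(String[] args) {
        System.out.println("total moves: " + solve(3, 'A', 'C', 'B')); // 2^3 - 1 = 7
    }
}
```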

Mathematical Induction

Induction ties into recursion above, but it needs its own category in my list. Induction did not even seem fair when I first learned it. It was as if I was cheating. I could prove statements that I knew almost nothing about with minimal effort, just by slogging through the algebra. At the end I still felt like I hadn’t really grasped the problem. A good example at the time was the sum of cubes.
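The sum-of-cubes proof really is just algebra. A sketch of the induction (standard material, not from the post):

```latex
% Claim: 1^3 + 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2
% Base case (n = 1): both sides equal 1.
% Inductive step: assume the claim for n and add (n+1)^3:
\left(\frac{n(n+1)}{2}\right)^2 + (n+1)^3
  = (n+1)^2\left(\frac{n^2}{4} + n + 1\right)
  = (n+1)^2\,\frac{(n+2)^2}{4}
  = \left(\frac{(n+1)(n+2)}{2}\right)^2
```

which is exactly the claimed formula with n replaced by n + 1, so the "cheating" goes through.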

Lambda Calculus
If there is one good way to study recursion, it’s lambda calculus. This field has augmented my understanding of every single item below and still continues to: from combinators to formal renaming schemes to deriving the natural numbers (more on this later).

I have learned as much about Computer Science from this language as I have from everything else I’ve encountered in this field combined: functional programming, OO, macros, closures, higher order functions, bottom-up programming, parsers, syntax. Each of these areas has either been completely revolutionized in my head or planted where before there was nothing.
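Deriving the natural numbers from nothing but functions is one of those revolutions. A sketch of Church numerals using Java lambdas (all names here are mine; a numeral n is just "apply f n times"):

```java
import java.util.function.Function;
import java.util.function.UnaryOperator;

public class Church {
    // A Church numeral takes a function f and returns "apply f n times":
    // n = \f.\x. f (f (... (f x)))
    interface Num extends Function<UnaryOperator<Integer>, UnaryOperator<Integer>> {}

    static final Num ZERO = f -> x -> x;                                        // \f.\x. x
    static Num succ(Num n) { return f -> x -> f.apply(n.apply(f).apply(x)); }   // \n.\f.\x. f (n f x)
    static Num plus(Num m, Num n) {                                             // \m.\n.\f.\x. m f (n f x)
        return f -> x -> m.apply(f).apply(n.apply(f).apply(x));
    }

    // Convert back to an ordinary int by counting applications of (+1) from 0.
    static int toInt(Num n) { return n.apply(x -> x + 1).apply(0); }

    public static void main(String[] args) {
        Num one = succ(ZERO), two = succ(one);
        System.out.println(toInt(plus(two, two))); // prints 4
    }
}
```

The numerals are generic over what f and x actually are; pinning them to Integer just keeps the Java types readable.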

Dedekind Cuts
I had never imagined that it’s possible to derive the real and complex numbers out of the integers. What an amazing feat (see: Foundations of Analysis by Landau).

Derivatives and Integrals
These two things are the reason I decided to also get a Math degree. I really wanted to understand why and how they work.
I remember what I thought when I saw derivatives in high school like it was yesterday: “So I take a function f(x), change the argument I am passing to it to x+h, where h is so small that I can never write down a number y greater than zero that is smaller than h. Subtract f(x), and then divide this difference by another effectively zero ‘number,’ and I get the rate of change with respect to x. Great!” Once that became clear, I ran into another issue.
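That high-school picture of the difference quotient is easy to watch converge numerically. A small sketch (names are mine) of (f(x+h) − f(x))/h for a finite but shrinking h:

```java
import java.util.function.DoubleUnaryOperator;

public class DiffQuotient {
    // (f(x+h) - f(x)) / h for a small but finite h -- the pre-limit picture
    // of the derivative described above.
    static double derivative(DoubleUnaryOperator f, double x, double h) {
        return (f.applyAsDouble(x + h) - f.applyAsDouble(x)) / h;
    }

    public static void main(String[] args) {
        // d/dx of x^2 at x = 3 is exactly 6; the quotient approaches it as h shrinks.
        for (double h : new double[] {1e-1, 1e-4, 1e-7}) {
            System.out.printf("h = %g: %f%n", h, derivative(x -> x * x, 3.0, h));
        }
    }
}
```

For f(x) = x² the quotient works out to exactly 6 + h, so you can see the error shrink linearly with h.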

The situation got worse with Leibniz notation. I could do things like “cancel out” infinitesimals, (dx/dt)·(dt/dy) = dx/dy, but I could not do this with derivatives of order higher than 1. Makes perfect sense!
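The failure at second order is real: writing y and x as functions of a parameter t, the chain rule produces extra terms that no naive cancellation of d’s accounts for (a standard computation, not from the post):

```latex
\frac{dy}{dx} = \frac{dy/dt}{dx/dt},
\qquad
\frac{d^2y}{dx^2}
  = \frac{d}{dt}\!\left(\frac{dy/dt}{dx/dt}\right)\Big/\frac{dx}{dt}
  = \frac{x'\,y'' - y'\,x''}{(x')^3}
  \neq \frac{d^2y/dt^2}{d^2x/dt^2}
```

The (x′)³ in the denominator and the mixed x″, y″ terms are exactly what the fraction-cancelling picture can’t produce.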

For integrals it was worse: “I take f(x), multiply it by something that makes no sense by itself, add up all the columns in the interval, and BAM!… I got the function I was looking for. Great!”
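The "add up all the columns" picture is the Riemann sum, and it can be made literal in a few lines (a sketch; names are mine):

```java
import java.util.function.DoubleUnaryOperator;

public class RiemannSum {
    // Sum f(x) * dx over n columns spanning [a, b] -- the "columns" picture
    // of the definite integral, using the midpoint of each column.
    static double integrate(DoubleUnaryOperator f, double a, double b, int n) {
        double dx = (b - a) / n, sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += f.applyAsDouble(a + (i + 0.5) * dx) * dx;
        }
        return sum;
    }

    public static void main(String[] args) {
        // The integral of x^2 on [0, 1] is 1/3; 1000 columns gets very close.
        System.out.println(integrate(x -> x * x, 0.0, 1.0, 1_000));
    }
}
```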

Taylor’s Theorem
I was delighted when I found this was possible. I always wanted to prove it. When I finally got to it in Real Analysis and was able to prove it, it was one of the most satisfying experiences of my college career.
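What the theorem promises is easy to watch numerically: the partial sums of a Taylor series home in on the function. A sketch for e^x around 0 (names are mine):

```java
public class TaylorExp {
    // Partial sums of the Taylor series for e^x at 0: sum of x^k / k!.
    static double expTaylor(double x, int terms) {
        double term = 1.0, sum = 0.0;  // term starts as x^0 / 0! = 1
        for (int k = 0; k < terms; k++) {
            sum += term;
            term *= x / (k + 1);       // next term: x^(k+1) / (k+1)!
        }
        return sum;
    }

    public static void main(String[] args) {
        // 15 terms already agree with Math.exp to many digits at x = 1.
        System.out.println(expTaylor(1.0, 15) + " vs " + Math.exp(1.0));
    }
}
```

Taylor's theorem with remainder is exactly the statement that the error of such a partial sum is controlled by the next derivative, which is what makes proofs like the one in Real Analysis possible.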

Fourier Series/Transform
This started in my Differential Equations course and was made worse by my Signals and Systems EE class. In DEq I was told about Laplace transforms; in EE I was told about Fourier transforms. I had no idea how these could possibly work. Fourier series were also quite disturbing: if I add up a bunch of infinitely smooth functions, I can get non-smooth ones. This was a pretty big deal. These transforms are still on my list to prove.
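The disturbing part is easy to demonstrate: summing perfectly smooth sines converges to a square wave with jumps. A sketch of the classic partial sum (names are mine):

```java
public class SquareWave {
    // Partial Fourier sum for a square wave: (4/pi) * sum of sin((2k+1)x)/(2k+1).
    // Every term is infinitely smooth, yet the limit jumps at multiples of pi.
    static double partialSum(double x, int terms) {
        double sum = 0.0;
        for (int k = 0; k < terms; k++) {
            int n = 2 * k + 1;
            sum += Math.sin(n * x) / n;
        }
        return 4.0 / Math.PI * sum;
    }

    public static void main(String[] args) {
        // Away from the jump the sum settles near +1; at the jump point it is 0.
        System.out.println(partialSum(Math.PI / 2, 200));
        System.out.println(partialSum(0.0, 200));
    }
}
```

Plot it near x = 0 and you also see the Gibbs overshoot that no finite number of smooth terms ever shakes off.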

Lagrangian and Hamiltonian Formulation of Newtonian Mechanics

I am still working on these concepts. I was never satisfied by their explanations in class. You’ll have to watch the Sussman lecture to understand why.

Noether’s theorem
Of all the physics I have encountered, this is in my mind the most amazing theorem. Even in simplified form I am in awe.

Of course the list has more entries, but these are my top 10.

Sussman Thought Process

I found a great lecture by Gerald Sussman about thought processes and computers. I stumbled upon it when reading about the Feynman algorithm for solving problems.
The Feynman Algorithm:

1. Write down the problem.
2. Think real hard.
3. Write down the solution.

His talk about Structure and Interpretation of Classical Mechanics reminds me of a quote by Knuth: “Science is what we understand well enough to explain to a computer. Art is everything else we do.”

World Magnetic Model

In about a dozen math courses I remember having to prove (always by induction) that the sum of the first N natural numbers (1 + 2 + 3 + … + N) is equal to N(N+1)/2, and always thinking: this is pointless. I stand corrected.

At work I was tasked with calculating magnetic declination using Java instead of the C implementation NOAA uses. The program is pretty much a function: MagDec: (double, double, double) -> double, (elevation, latitude, longitude) -> magnetic_dec. A key component used to calculate the declination is a file of spherical harmonic coefficients gathered by satellites. It looks something like this:

My goal was to create S = G(m,n) – G_dot(m,n) and C = H(m,n) – H_dot(m,n) for each value of m and n.
Since Java IO isn’t the most convenient thing in the world, instead of sscanf() I was forced to use java.util.Scanner, which IMO is not nearly as convenient for this type of data organization.
I realized that if I have a flat list it looks something like this:

Written in a different way:

degree:        G(1)  G(2)  G(3)  …  G(12)
coefficients:   2     3     4   …    13

which means there are 13(13+1)/2 – 1 = 90 coefficients for G. If I am presented with a single column of numbers like Col B. and I want to pick out G(2,1), all I need to do is realize that there are 3(3+1)/2 – 1 coefficients up to and including G(2,2), which puts G(2,1) at position 3(3+1)/2 – 2. Of course it would’ve been a lot easier if I could do the straightforward thing easily, like:
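Setting the parsing aside, the triangular-number indexing itself fits in one line. A sketch, assuming the flat list orders coefficients as (n, m) with n = 1..12 and m = 0..n (my reading of the layout, not NOAA’s spec; names are mine):

```java
public class GaussIndex {
    // Degree n contributes n+1 coefficients (m = 0..n), so the degrees
    // 1..n-1 before it contribute n(n+1)/2 - 1 entries in total.
    // That makes the 0-based flat position of G(n, m):
    static int flatIndex(int n, int m) {
        return n * (n + 1) / 2 - 1 + m;
    }

    public static void main(String[] args) {
        System.out.println(flatIndex(2, 1));   // the G(2,1) lookup from the post
        System.out.println(flatIndex(12, 12)); // last coefficient, position 89 of 90
    }
}
```

The same n(n+1)/2 from all those induction proofs, earning its keep.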