Why Study Algorithms?

What it feels like studying algorithms

This is a question that is plaguing me at the moment as I force myself to relearn calculus for Analysis of Algorithms at Oregon State University. In moments like this, where the concepts are abstract and I need to learn even more abstract concepts so I can understand the first class of abstract ideas, motivation is key.

Well, the short answer is that you don’t need this body of knowledge to build a wide range of applications and features. In my world, many of the concerns that common sorting, searching, and general optimization algorithms address aren’t real concerns, because they’ve been abstracted into the language or framework. I’m able to do my job because someone else has already solved the pieces that would otherwise have to be built from scratch. So while learning merge sort and analyzing its complexity is a fun exercise, I won’t be writing it from scratch anytime soon.

The real value in this course, or at least in the parts I’ve found valuable so far, is in the techniques behind the algorithms. Merge sort, for example, is a divide-and-conquer algorithm. The idea behind this class of algorithms is that you can solve a problem faster if you break the input down into smaller and smaller pieces until you’re left with a trivial problem to solve. You solve each small piece, then combine it with another solved piece, and repeat until everything has been combined and the whole problem is solved.
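
To make that shape concrete, here’s a minimal merge sort sketch in Python. It’s not something I’d reach for over the built-in sorted(), but it shows the divide-and-combine pattern:

```python
def merge_sort(items):
    # A list of 0 or 1 elements is the trivial, already-solved case.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide: solve the left half
    right = merge_sort(items[mid:])   # divide: solve the right half
    return merge(left, right)         # combine the two solved halves

def merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one side may still have leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```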

This past week, we’ve been learning about dynamic programming, another classic paradigm for solving problems. Most dynamic programming problems involve taking a complex problem, breaking it into a series of subproblems, and saving the solutions to those subproblems so that the larger problem can be answered. A common example is computing the Nth term of the Fibonacci sequence. You can solve it with naive recursion at a steep computational cost, or you can solve it with dynamic programming, where each number in the sequence is computed once and saved in memory (a technique called memoization) and the next number is computed from those saved values. Dynamic programming lets us take problems that would otherwise have exponential complexity and solve them in polynomial time instead.
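
Here’s a small sketch of what that looks like in Python (assuming the usual convention that fib(0) = 0 and fib(1) = 1):

```python
def fib(n, memo=None):
    # Naive recursion recomputes the same terms over and over (exponential time).
    # Memoization saves each term the first time it is computed, so every term
    # is computed exactly once.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]        # reuse a previously saved subproblem
    if n <= 1:
        return n              # base cases: fib(0) = 0, fib(1) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))  # 12586269025, returned instantly; the naive version would crawl
```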

So why study algorithms? In short, because time is money. Energy is money. And computers are designed to optimize both. Unfortunately, computers only do what you tell them to do, all the way down to what they choose to remember. Our job is to figure out what to tell them and how.