I’m over halfway through Oregon State University’s post-bacc computer science program. It’s a (mildly) grueling gauntlet of 15 courses, made more grueling by the fact that I’ve been working full-time and taking two courses a quarter (with an exception made for this past summer, when only one course was on the docket to give me some time to get married 🙂 ). To date, I’ve taken, or am currently taking, the following courses (in no particular order):

  • Introduction to Computer Science I
  • Introduction to Computer Science II
  • Discrete Structures in Computer Science
  • Data Structures 
  • Web Development 
  • Introduction to Databases
  • Computer Architecture & Assembly Language
  • Analysis of Algorithms
  • Operating Systems (currently in progress)
  • Software Engineering I (currently in progress)

And have the following classes in front of me:

  • Software Engineering II
  • Introduction to Usability Engineering
  • Intro to Computer Networks
  • Mobile and Cloud Software Development
  • Software Projects (Program Capstone)

As in any program designed to teach a diverse group of students, all with different learning styles and backgrounds, course quality and difficulty vary. In my estimation, the most interesting courses have also been the most difficult ones.

Introduction to Computer Science II was seemingly designed to weed out students who weren’t strong programmers. The course was heavy on writing code (C++ is the primary language used in OSU’s program, and it was heavily featured here) with a focus on object-oriented design patterns.

Discrete Structures in Computer Science likely would have been easier if my algebra and proof-writing muscles weren’t so rusty. Once you’re warmed up, however, the course is a fascinating exercise in inductive reasoning and a solid introduction to set and graph theory. This course has proven especially helpful as I investigate disciplines related to artificial intelligence, which lean heavily on the kind of notation this class introduces.

Computer Architecture & Assembly Language is a trip. While C++ exposes you to concepts like pointers and memory allocation – things most languages abstract away – assembly is a different beast. Here, you learn to move data between memory and CPU registers and see how loops, conditional statements, and functions are built from the ground up. You’re also introduced to the fetch-decode-execute cycle that every CPU runs. In short, you learn what a computer is and how it does all the beautiful things that we take for granted.

Analysis of Algorithms was, conceptually, the most difficult course in the program. The workload was lighter than Introduction to Computer Science II, but the last time I looked at limits and derivatives was in college. I again found myself going to Khan Academy on the weekends to brush up on calculus, but once the basics were down, it was off to the races. This course was the first one where I felt like a “computer scientist” as the concepts require an abstract way of thinking that goes beyond “just” programming or writing software. Here, you get to see Big-O notation and algorithm analysis, dynamic programming, complexity theory, graph theory and algorithms, and your classical searching and sorting algorithms. The course quality itself leaves something to be desired, but the topics are truly beautiful. 

The other courses aren’t “bad”, per se, but they either lack the rigor or depth of the courses above, or the organization, lectures, and coursework are so poor as to be distracting. That said, each course has its own nuggets of interesting content, and most importantly, the continued act of solving problems is key to learning how to be a “developer”, “engineer”, or “computer scientist” (or whatever you want to call someone who moves bits around at high speeds).

And now I’m into the home stretch! Over the next three quarters I’ll wrap up 7 courses, with the majority of the coursework left (usability engineering, software engineering, cloud computing) squarely in my wheelhouse. Come August this year, I’ll be the proud owner of a B.S. in C.S. from Oregon State University and will be moving on to Georgia Tech’s Master’s in C.S. program. The time commitment of this program has been stressful at times, but I wouldn’t trade this experience for anything.

Another week of reading, lectures, confused Googling, and skimming through videos is in the bag. This week was all about NP-completeness. It has been fun to experience an “aha!” moment in each course I’ve taken so far, and this was an especially fun one, in large part because answering these questions is seemingly beyond the reach of computers (in any reasonable amount of time).

We’re accustomed to computers being incredibly fast. So accustomed that we forget just how fast they are. They’re really fast. I’m writing this on a 4-year-old computer with a 2.5 GHz Intel Core i7 processor. That number is how many cycles the computer’s system clock completes in a second: 2,500,000,000 cycles every second. The CPU in this machine is quite powerful; it can execute around 9 instructions per cycle on a single core, and there are 4 cores in total.
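If you multiply those numbers out (a rough, best-case, back-of-the-envelope sketch using the specs above, not a benchmark of anything), you get something like this:

```javascript
// Back-of-the-envelope throughput from the specs above. The instructions-per-cycle
// figure is a rough peak estimate, so treat this as an upper bound, not a measurement.
const cyclesPerSecond = 2.5e9;   // 2.5 GHz clock
const instructionsPerCycle = 9;  // rough peak for a single core
const cores = 4;

const instructionsPerSecond = cyclesPerSecond * instructionsPerCycle * cores;
console.log(instructionsPerSecond.toLocaleString()); // roughly 90,000,000,000 instructions per second
```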

This all adds up to a lot of numbers and those numbers represent commands that we expect the computer to execute for us so that we can….. I dunno….. watch videos of cute cats.

All this is to say: when you hear that a computer can’t find the answer to a problem, that’s a big deal. These are the sorts of problems that a reasonably programmed (or so we think) computer, even one with enough processors to require seriously loud fans, just can’t solve once the domain gets large enough.

The full list of these problems is here and I suppose there are two ways to think of this. One is very abstract. The other is with a concrete example. I think it’s important to understand the core essence of these problems, so I’ll begin with the abstraction.

Imagine a problem whose possible solutions are all the possible groupings of the problem’s variables. Say there are n variables and each can take 2 values; that’s 2^n possible groupings to check. Generally, each candidate also takes a bit of work to verify. Call that m. Now the total time to find the solution to the problem is proportional to 2^n * m. If n is 5 and m is 10, then 2^5 * 10 = 320. Let’s say that represents seconds, so we’re at 5 minutes, 20 seconds. If n is 10 and m is 100, then we’re at 102,400 seconds, or roughly 1,706 minutes (a bit over a day).
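To make that shape concrete, here’s a toy brute-force search in JavaScript. The problem (a tiny subset-sum instance) and all of the names are mine, purely for illustration; the point is the 2^n loop multiplied by the per-candidate verification work:

```javascript
// A toy brute-force search, just to show the 2^n * m shape described above.
function bruteForceSearch(variables, verify) {
  const n = variables.length;
  // 2^n candidate "groupings": every true/false assignment of the n variables.
  for (let mask = 0; mask < 2 ** n; mask++) {
    const candidate = variables.map((_, i) => ((mask >> i) & 1) === 1);
    // Each candidate costs some amount of work (the "m" above) to verify.
    if (verify(candidate)) {
      return candidate;
    }
  }
  return null; // no grouping satisfied the verifier
}

// Example: which subset of these numbers sums to 12? (a tiny subset-sum instance)
const nums = [3, 5, 7, 9, 2];
const answer = bruteForceSearch(nums, (picks) =>
  nums.reduce((sum, x, i) => sum + (picks[i] ? x : 0), 0) === 12
);
console.log(answer); // [ false, true, true, false, false ], i.e. 5 + 7 = 12
```

Five numbers means only 32 candidates, which is nothing. But every variable you add doubles the loop, which is exactly why the arithmetic above gets ugly so quickly.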

So small changes in those numbers cause big shifts in our ability to solve a problem in a time that makes sense. All essentially because the problem’s answer could be this or this or this or this. If we had a computer that could calculate all of the this’s in parallel, with as many processors as there are or’s (i.e., a single processor dedicated to exploring each possible solution), then we could do this in polynomial time. But you can see just how quickly that becomes a problem. Too many processors equals fires (or really loud fans).

The other way to think of this is through a concrete problem. One of the easier ones (in my opinion) to understand is the traveling salesperson problem:

The problem is essentially a question of permutations. Given a list of cities and the distances between them, what is the shortest possible route that visits each city and returns to the origin city? Again, to get to the answer we have to look at one path or another path or another path or another path. There’s no known shortcut for identifying the shortest route; in the worst case, you essentially have to compute the lengths of path after path. You can imagine that with more cities there are more paths, and with a large enough set the problem becomes so expensive that we cannot reasonably answer it.
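Here’s a rough sketch of what brute-forcing it looks like in code. The cities and distances are made up for illustration, and this is the naive check-every-route approach, not a serious solver:

```javascript
// A naive traveling-salesperson solver that literally checks every route.
function shortestTour(cities, dist) {
  const [start, ...rest] = cities;
  let best = { route: null, length: Infinity };

  // Try every permutation of the remaining cities: that's (n-1)! routes.
  const permute = (remaining, route, length) => {
    if (remaining.length === 0) {
      const total = length + dist[route[route.length - 1]][start]; // close the loop
      if (total < best.length) best = { route: [start, ...route, start], length: total };
      return;
    }
    for (let i = 0; i < remaining.length; i++) {
      const next = remaining[i];
      const prev = route.length ? route[route.length - 1] : start;
      permute(
        remaining.filter((_, j) => j !== i),
        [...route, next],
        length + dist[prev][next]
      );
    }
  };

  permute(rest, [], 0);
  return best;
}

// Tiny made-up example: 4 cities with symmetric distances.
const dist = {
  A: { B: 10, C: 15, D: 20 },
  B: { A: 10, C: 35, D: 25 },
  C: { A: 15, B: 35, D: 30 },
  D: { A: 20, B: 25, C: 30 },
};
console.log(shortestTour(['A', 'B', 'C', 'D'], dist));
// -> { route: [ 'A', 'B', 'D', 'C', 'A' ], length: 80 }
```

Four cities is 6 routes. Twenty cities is roughly 1.2 * 10^17 routes, and that factorial growth is the whole problem.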

In short, these are the problems that baffle us with or without computers. That they all share some key characteristics says something about the nature of these problems, and about problems in general. In fact, one of the key notions of NP-complete problems is that if you can solve one of them in polynomial time (i.e., efficiently, non-exponentially), then you can solve all of them in polynomial time. But perhaps some problems are just too complex for a person or a machine to solve.

Having just finished my midterm in Analysis of Algorithms (yes, the class is as dry as it sounds), my brain is still sharp on a few topics; one of them being dynamic programming, which I mentioned in my last post. In that post, wherein I tried to find motivation for forcing myself to relearn calculus, I used the classic example of trying to calculate the nth term of the Fibonacci sequence.

I thought it would be helpful to see this example running with some real code. Below, we have a JavaScript function – fibRecursive – that takes an integer as a parameter. This integer represents the term that we want from the Fibonacci sequence. For example, a call to the function like fibRecursive(6) would return 8.
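The actual code is in the full post, but here’s a minimal sketch of what a function like fibRecursive presumably looks like, plus a memoized variant of my own to show the dynamic-programming payoff:

```javascript
// A sketch of a plain recursive Fibonacci function. Each call branches into two
// more, so the work grows roughly exponentially with n.
function fibRecursive(n) {
  if (n <= 2) return n === 0 ? 0 : 1; // base cases: fib(1) = fib(2) = 1
  return fibRecursive(n - 1) + fibRecursive(n - 2);
}

console.log(fibRecursive(6)); // 8, matching the example above

// The dynamic-programming twist (my own memoized variant): cache results so each
// term is computed once, turning exponential work into linear work.
function fibMemo(n, cache = new Map()) {
  if (n <= 2) return n === 0 ? 0 : 1;
  if (!cache.has(n)) cache.set(n, fibMemo(n - 1, cache) + fibMemo(n - 2, cache));
  return cache.get(n);
}

console.log(fibMemo(50)); // 12586269025, and it returns instantly
```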


Continue Reading A Short Story About Dynamic Programming

This is a question that is plaguing me at the moment as I force myself to relearn calculus for Analysis of Algorithms at Oregon State University. In moments like this, where the concepts are abstract and I need to learn even more abstract concepts so I can understand the first class of abstract ideas, motivation is key.

Why does this matter? I’ve been working in the industry for over five years and never needed this knowledge before, so why now?

Well, the short answer is that you don’t need this body of knowledge to develop a wide range of applications and features. In my world, many of the concerns that common sorting, searching, and general optimization algorithms address aren’t real concerns, because they’ve been abstracted away into the language or framework. I’m able to do my job because someone else has already solved the pieces that would otherwise have to be built from scratch. So while learning merge sort and analyzing its complexity is a fun exercise, I won’t be writing it from scratch anytime soon.


Continue Reading Why Study Algorithms?

I’m about halfway through Oregon State University’s (Go Beavers!) post-bacc program for computer science, but feel like I’ve just entered the belly of the beast. On the docket for the summer is CS 271 – Computer Architecture and Assembly Language; a fine relaxing course to take in the months before and during my wedding, right? Not so much.

The material is dense as we learn how to move data around in memory and perform basic operations on its contents. The class focuses on IA-32, the 32-bit version of the x86 instruction set architecture found in early IBM workstations and personal computers, and later in embedded systems for phones, aerospace tech, and electronic musical instruments. I’m only a few weeks in, but it’s already painfully obvious to me that assembly is not like any other language I’ve used.

Continue Reading Oh The Places You’ll Go! …… with Assembly