However, I’ve become fairly proficient in it these past two years and have grown to appreciate the language for what it is: blazing fast and fine-tuned for several specific use cases. Unlike JavaScript (my language of choice while at work), C does not come with dozens of associated frameworks that come and go at a moment’s notice. It also doesn’t split its time between the hard logic of an application and managing the user interface. In many ways, C runs the world of computers around us without us even knowing.

The story behind C is the most interesting part of the language. Developed in the early 70’s and launched into production around 1972/1973, C was born from necessity. In the late 60’s, Dennis Ritchie and Ken Thompson began writing an operating system, Unix, first on the PDP-7 and soon after on the PDP-11 (pictured above – it’s the size of a huge refrigerator and its processing power isn’t even close to the power of my phone).

Most of the operating system’s logic was written in assembly at first, but this proved to be fairly clunky. Assembly’s limited support for logical constructs made development such a painstaking process that Ritchie decided to write his own language, built specifically for Unix, and so C came into existence.

The language is so basic that its primitives don’t include strings, memory management is left up to the programmer (there is no garbage collection), and objects are nowhere to be found. However, the language’s relationship to the Unix operating system means that it is nearly ubiquitous, as Unix (or some variant) is found on nearly every web server, the majority of smartphones, and the computer I’m using to write this post (a MacBook Pro). Wherever Unix is found, C is right behind, managing all of the commands you type into a terminal, the boot process of your computer, and, well, most anything you do.

Like the core functionality of Unix, which has a philosophy of limited design, C has not grown much beyond its humble beginnings. Features have been added incrementally, but much of my own work with the language is done in C99, a standard written nearly 20 years ago. One of my favorite bits of back-and-forth that I have with Scott Fennell, our lead developer at LexBlog, is about his love of languages that are well-defined and static. The more a language changes, the harder it is for teams and individuals to manage. Imagine if English were changing at the same breakneck pace as JavaScript; we would barely be able to communicate from day to day.

As I’ve grown less enamored of just getting a project up and running and have seen the value of maintainable systems and software, I must admit that I’ve come around to Mr. Fennell’s side of the argument. Give me a language that doesn’t change but does the job just as well, if not better, than any of its counterparts. That’s not to say that I want to spend my days writing C, but it wouldn’t be the worst thing.

So far, I’ve completed the following classes:

- Introduction to Computer Science I
- Introduction to Computer Science II
- Discrete Structures in Computer Science
- Data Structures
- Web Development
- Introduction to Databases
- Computer Architecture & Assembly Language
- Analysis of Algorithms
- Operating Systems (currently in progress)
- Software Engineering I (currently in progress)

And I have the following classes in front of me:

- Software Engineering II
- Introduction to Usability Engineering
- Intro to Computer Networks
- Mobile and Cloud Software Development
- Software Projects (Program Capstone)

Like in any program designed to teach a diverse group of students, all with different learning styles and coming from different backgrounds, the course quality and difficulty varies. In my estimation, the most interesting courses have also been what I consider the most difficult ones.

Introduction to Computer Science II was seemingly designed to weed out students who were not strong programmers. The course was heavy on writing code (C++ is the primary language used in OSU’s program and it was heavily featured here), with a focus on object-oriented design patterns.

Discrete Structures in Computer Science likely would have been easier if my algebra and proof-writing muscles weren’t so rusty. Once you’re warmed up, however, the course is a fascinating exercise in inductive reasoning and a solid introduction to set and graph theory. This course has proved especially helpful as I investigate disciplines related to artificial intelligence, which lean heavily on the kind of notation you’re introduced to in this class.

Computer Architecture & Assembly Language is a trip. While C++ exposes you to concepts like pointers and memory allocation – things most languages abstract away – assembly is a different beast. Here, you learn to move memory around on the CPU and see how loops, conditional statements, and functions are built from the ground up. You’re also introduced to the fetch-decode-execute cycle that all CPUs utilize. In short, you learn what a computer is and how it does all the beautiful things that we take for granted.

Analysis of Algorithms was, conceptually, the most difficult course in the program. The workload was lighter than Introduction to Computer Science II, but the last time I looked at limits and derivatives was in college. I again found myself going to Khan Academy on the weekends to brush up on calculus, but once the basics were down, it was off to the races. This course was the first one where I felt like a “computer scientist” as the concepts require an abstract way of thinking that goes beyond “just” programming or writing software. Here, you get to see Big-O notation and algorithm analysis, dynamic programming, complexity theory, graph theory and algorithms, and your classical searching and sorting algorithms. The course quality itself leaves something to be desired, but the topics are truly beautiful.

The other courses aren’t “bad”, per se, but they either lack the rigor or depth of the courses above, or the organization, lectures, and coursework are so poor as to be distracting. That said, each course has its own nuggets of interesting content, and most importantly, the continued act of solving problems is key to learning how to be a “developer”, “engineer”, or “computer scientist” (or whatever you want to call someone who moves bits around at high speeds).

And now I’m into the home stretch! Over the next three quarters I’ll wrap up 7 courses, with the majority of the coursework left (usability engineering, software engineering, cloud computing) all things that are in my wheelhouse. Come August this year, I’ll be the proud owner of a B.S. in C.S. from Oregon State University and moving on to Georgia Tech’s Master’s in C.S. program. The time commitment of this program has been stressful at times, but I wouldn’t trade this experience for anything.

We’re accustomed to computers being incredibly fast. So accustomed that we forget just how fast they are. They’re really fast. I’m writing this on a 4-year-old computer with a 2.5 GHz Intel Core i7 processor. That number equates to how many cycles the system clock of this computer runs in a second: 2,500,000,000 cycles in one second. The CPU in this machine is quite powerful, too. Each core can execute multiple instructions per cycle, and there are 4 cores running.

This all adds up to a lot of numbers, and those numbers represent commands that we expect the computer to execute for us so that we can… I dunno… watch videos of cute cats.

All of this is to say: when you hear that a computer can’t find an answer to a problem, it’s a big deal. These are the sorts of problems that a reasonably programmed (or so we think) computer – even one with enough processors to require deafening banks of fans – just can’t solve if the domain is large enough.

The full list of these problems is here, and I suppose there are two ways to think about them. One is very abstract; the other is with a concrete example. I think it’s important to understand the core essence of these problems, so I’ll begin with the abstraction.

Imagine that you have a problem whose possible solutions are all of the possible groupings of the variables in the problem. Let’s say each of *n* variables can take one of 2 values; that’s 2^*n* possible groupings to check. Generally, you also have to do a bit of work to verify each candidate. Let’s call that *m*. Now the total time to find the solution to the problem is proportional to 2^*n* * *m*. If *n* is 5 and *m* is 10, then 2^5 * 10 = 320. Let’s say that represents seconds, so we’re at 5 minutes, 20 seconds. If *n* is 10 and *m* is 100, then we’re at 102,400 seconds, or roughly 1,706 minutes (a bit over a day).
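The arithmetic above can be sketched in a few lines of JavaScript (the function name here is mine, purely for illustration):

```javascript
// Rough time to brute-force a problem with n two-valued variables,
// where verifying each of the 2^n candidate groupings takes m seconds.
function bruteForceSeconds(n, m) {
  return Math.pow(2, n) * m;
}

bruteForceSeconds(5, 10);   // 320 seconds (5 minutes, 20 seconds)
bruteForceSeconds(10, 100); // 102,400 seconds
bruteForceSeconds(50, 1);   // ~1.1 quadrillion seconds – tens of millions of years
```

Notice that bumping *n* from 10 to 50 is what does the damage; the exponential term swamps any realistic verification cost *m*.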

So small increases in the numbers cause big shifts in our ability to solve a problem in a time that makes sense, essentially because the problem’s answer could be this *or* this *or* this *or* this. If we had a computer that could simply calculate all of the this’s in parallel – one processor dedicated to exploring each possible solution – then we could do this in polynomial time. But you can see just how quickly that becomes a problem of its own. Too many processors equals fires (or really loud fans).

The other way to think of this is through a concrete problem. One of the easier ones (in my opinion) to understand is the traveling salesperson problem:

The problem is essentially a question of permutations. Given a list of cities and the distances between those cities, what is the shortest possible route that visits each city and returns to the origin city? Again, to get to the answer we have to look at one path *or* another path *or* another path *or* another path. There is no way to know which path is the shortest without first computing the lengths of all the paths. You can imagine that with more cities there are more paths, and with a large enough set, the problem is so complex that we cannot reasonably answer it.
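A brute-force version of that search might look like the sketch below (the city names and distances are made up for illustration). It tries every permutation of cities, which is exactly the O(n!) explosion described above:

```javascript
// Brute-force traveling salesperson: try every permutation of cities
// and keep the shortest round trip. Factorial time – hopeless for large n.
function shortestTour(distances) {
  const cities = Object.keys(distances);
  const [start, ...rest] = cities;
  let best = { path: null, length: Infinity };

  function permute(path, remaining, length) {
    const last = path[path.length - 1];
    if (remaining.length === 0) {
      // Close the loop back to the origin city and compare.
      const total = length + distances[last][start];
      if (total < best.length) best = { path: [...path, start], length: total };
      return;
    }
    for (const next of remaining) {
      permute(
        [...path, next],
        remaining.filter((c) => c !== next),
        length + distances[last][next]
      );
    }
  }

  permute([start], rest, 0);
  return best;
}

// A tiny made-up map of four cities and pairwise distances.
const distances = {
  A: { B: 10, C: 15, D: 20 },
  B: { A: 10, C: 35, D: 25 },
  C: { A: 15, B: 35, D: 30 },
  D: { A: 20, B: 25, C: 30 },
};

shortestTour(distances); // shortest round trip: A → B → D → C → A, length 80
```

Four cities means only 6 round trips to check; at 20 cities the same code would need to walk on the order of 10^17 permutations.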

In short, these are the problems that baffle us with or without computers. That they all share some key characteristics says something about the nature of these problems and about problems in general. In fact, one of the key notions of NP-complete problems is that if you can solve one in polynomial time (i.e., efficiently, or non-exponentially) then you can solve *all* such problems in polynomial time. But perhaps some problems are just too complex for a person or machine to do.

I thought it would be helpful to see this example running with some real code. Below, we have a JavaScript function – *fibRecursive* – that takes an integer as a parameter. This integer represents the term that we want from the Fibonacci sequence. For example, a call to the function like so, *fibRecursive(6)*, would return 8.
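A minimal sketch of such a function, assuming the usual 1, 1, 2, 3, 5, 8… convention for the sequence:

```javascript
// Naive recursive Fibonacci: the nth term is the sum of the two before it.
function fibRecursive(n) {
  if (n <= 2) return 1; // the first two terms are both 1
  return fibRecursive(n - 1) + fibRecursive(n - 2);
}

fibRecursive(6); // 8
```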

In order to compute that, this function recursively calls itself twice for every term up to the term you specified. This sort of recurrence relation is exponential – specifically O(2^n). That means the number of recursive calls roughly doubles with every additional term: to compute the 8th term, we perform on the order of 2^8 (that’s 256) actions.

While this is a very elegant way of explaining, in code, how the Fibonacci sequence works, it is not the most performant way of computing these sorts of values. There is, however, a very well-defined algorithmic paradigm that we can apply, and you guessed it: dynamic programming.

The general notion of dynamic programming is that the problem exhibits some optimal substructure (i.e., within the solution to the problem are solutions to smaller problems – as a byproduct of computing the 5th term in the Fibonacci sequence, we will compute the 1st, 2nd, 3rd, and 4th terms) and that these problems overlap (i.e., you must revisit each subproblem’s solution repeatedly in order to get the final solution).

The function below shows how we can use a technique called memoization to save the solutions to smaller subproblems in a lookup table (typically an array of some shape and size, but any data structure that helps you solve the problem is acceptable) and then refer back to that solution for the next subproblem. The difference here is that we compute each solution to the subproblems once, as opposed to computing them 2^n times. This means that the function below – *fibDynamicProgramming* – runs in O(n) time, a considerable uptick in performance. To get to the 8th term in the sequence, we’re actually performing only 8 constant-time computations.
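A sketch of what that memoized version might look like, with a plain array (indexed by term) serving as the lookup table:

```javascript
// Memoized Fibonacci: each term is computed once and cached in the memo
// array, so later calls for the same term are constant-time lookups.
function fibDynamicProgramming(n, memo = [undefined, 1, 1]) {
  if (memo[n] !== undefined) return memo[n]; // already solved this subproblem
  memo[n] =
    fibDynamicProgramming(n - 1, memo) + fibDynamicProgramming(n - 2, memo);
  return memo[n];
}

fibDynamicProgramming(8); // 21
```

The naive version grinds to a halt somewhere around the 40th or 50th term; this one computes the 50th instantly because each subproblem is solved exactly once.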

Magic! To see more common dynamic programming problems and solutions, I would suggest taking a look at this YouTube playlist by Tushar Roy. His explanations are more thought-out and concise than anything I plan to write.

Why does this matter? I’ve been working in the industry for over five years and never needed this knowledge before. Why now?

Well, the short answer is that you don’t need this body of knowledge to develop a wide range of applications and features for applications. In my world, many of the concerns that common sorting, searching, and general optimization algorithms address are not real concerns because they’ve been abstracted into parts of the language or framework. I’m able to do my job because someone else has figured out how to do the parts of it that would otherwise need to be created from scratch. So while learning merge sort and analyzing its complexity is a fun exercise, I’ll not be writing it from scratch anytime soon.

The real value in this course – or at least the part I find most valuable so far – is the set of techniques behind the algorithms. Merge sort, for example, is a divide-and-conquer algorithm. The idea behind this class of algorithms is that you can solve a problem faster by breaking the problem set down into smaller and smaller sets until you’re left with a trivial problem to solve. You solve that small set, then combine it with another solved set, and repeat until all of the solved sets have been combined and you have a completely solved set.
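That split-solve-combine structure can be sketched in a few lines (a rough illustration, not an optimized implementation):

```javascript
// Merge sort: split the array in half, sort each half recursively,
// then merge the two sorted halves back together.
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // a single element is trivially sorted
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

// Combine two already-sorted arrays into one sorted array.
function merge(left, right) {
  const out = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i)).concat(right.slice(j));
}

mergeSort([5, 2, 9, 1, 7]); // [1, 2, 5, 7, 9]
```

The "conquer" step is the merge: because both halves arrive sorted, combining them takes a single linear pass, which is where the overall O(n log n) bound comes from.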

This past week, we’ve been learning about dynamic programming – another classical paradigm for solving problems. Most dynamic programming problems are solved by taking a complex problem, breaking it into a series of subproblems, and saving the solutions to those subproblems so that the larger problem can be answered. A common example is solving for the Nth term of the Fibonacci sequence. This problem can be solved using plain recursion at a high computational cost, *or* it can be solved using dynamic programming, where each number in the sequence is computed once and saved in memory (this technique is called memoization), and the next number is computed from those saved numbers. Dynamic programming lets us take problems that would otherwise be on the order of exponential complexity and solve them in polynomial time instead.

So why study algorithms? In short, because time is money. Energy is money. And computers are designed to optimize both. Unfortunately, computers only do what you tell them to do, all the way down to what they choose to remember. Our job is to figure out what to tell them and how.

The material is dense as we learn to program how to move memory around on a computer and perform basic actions on the contents of said memory. The class is focused on IA-32 – the 32-bit version of the x86 instruction set architecture found in early IBM workstations and personal computers, and later in embedded systems for phones, aerospace tech, and electronic musical instruments. I’m only a few weeks in, but already it’s painfully obvious to me that assembly is not like any other language I’ve used.

There is clearly a steep learning curve, and I’m at the point where that curve seems daunting, but I’m confident there will come a point in time where the syntax and concepts click. Now is not that time, but the beginning never is. In these moments, I tend to search for papers, blog posts, and YouTube videos that help motivate me or explain the concepts from a different angle.

In a late night search, I stumbled across the following set of Computerphile interviews with Matt Phillips, a video game programmer from the United Kingdom (Manchester, to be exact), who is working on building a SEGA Genesis/Mega Drive game (Tanglewood) in assembly:

I would strongly suggest watching these videos in this order as they lead very well from one to the next and you can marvel at the effort it takes to string together a game in such a low-level language.

It was especially interesting to see how the language on the boards Matt is working with differed from what I’m currently learning. It’s one thing to hear about how machine-specific assembly is and quite another to see the different syntax, memory registers, and peripherals throughout the videos.

Check out the game trailer here:
