How I Got Through Oregon State University’s Online Computer Science Post-Bacc

I enjoy blogging for a number of reasons. It helps me organize, solidify, and advance my thinking. It also provides a platform to put my ideas into a bottle and send them out into the ocean that is the internet. Every so often, a bottle with a message returns to me, usually in the form of an email in my inbox. Most of those emails are from people asking for guidance through (or into) Oregon State University’s post-baccalaureate computer science program or Georgia Tech’s Master of Science in Computer Science.

While I’m still working through GT’s program (should be done spring of 2021!), my time at OSU wrapped up in August of 2019 with a Bachelor’s in Computer Science. As I owe a lot of my personal success to the resources and communities that have sprung up around the program, I wanted to take a moment to write about my path through the degree in the hopes of leaving some breadcrumbs for new and prospective students.

[Read More]

A GT OMSCS Course Review – Robotics: AI Techniques (CS7638)

Robotics: AI Techniques marked the beginning of my foray into Georgia Tech’s OMSCS machine learning and artificial intelligence offerings. As I mentioned in my review of High Performance Computer Architecture (HPCA), my other Georgia Tech courses have focused on computing systems. This was mostly a function of the ML/AI courses being popular enough to be difficult to register for, and of the computing systems courses being among the most well-regarded classes in the program. However, my chosen specialization for this degree is machine learning, and so this semester it was time to get going.

I chose this as my first AI-centric course because it’s relatively “easy” (I put this in quotes because easy is in the eye of the beholder) and entirely project-based, allowing me to level up my Python while also giving me time to explore mathematical concepts that were completely new territory. As I’m taking CS6601 – Artificial Intelligence in the fall, I wanted a course that would provide exposure to the field of study without being overwhelming. I can say pretty definitively that this course did all of those things while being pretty fun at the same time.

[Read More]

A GT OMSCS Course Review – High Performance Computer Architecture (CS6290)

For me, the biggest draw of Georgia Tech’s OMSCS program was its extensive machine learning and artificial intelligence curriculum. There are other online Master’s programs from well-regarded schools (the University of Texas and the University of Illinois immediately come to mind), but none as established as Georgia Tech and none with classes that felt worth the time and investment. However, through a quirk of scheduling (most of the ML/AI courses are in high demand and fill up quite quickly), three of my first four classes at GT have focused on computing systems.

Now, it would be unfair to blame this purely on scheduling. It would have been quite easy to take different courses from different specializations, but I got into this program to learn and to challenge myself, and the computing systems offerings come highly recommended from the community of OMSCS students and are known for their difficulty. High Performance Computer Architecture (HPCA) certainly belongs in that conversation, and like the other computing systems courses that I’ve taken so far (Graduate Introduction to Operating Systems and Advanced Operating Systems being the other two), I left the class with a far better grasp on and appreciation for the internals of computers.

[Read More]

The Business of Bits

If your vocation is one that manages computers, you’re in the business of bits. That is to say, you’re somehow responsible for the writing and/or reading of binary digits. Ones and zeros. Bits.

A bit is the fundamental unit of information for computers. One bit represents a binary logical state, as it can take one of two values (again, 0 or 1). Alone, a bit doesn’t tell us much (information theory even measures exactly how much), but string bits together and magic happens. If you work with computers at a higher level, like writing a web app in JavaScript or PHP, it’s easy to forget this, although you’ll certainly encounter bits from time to time. If you work with computers at an even higher level, like just opening Excel from time to time, then you’re apt to think most of this is gibberish. At the lowest levels of software, however, it’s impossible to escape bits.
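To make that concrete, here’s a minimal Python sketch of my own (the bit pattern is just an example) showing how the same string of bits can be read as a number and then as text:

```python
# Eight bits on their own are just 0s and 1s; strung together they become data.
bits = [0, 1, 0, 0, 0, 0, 0, 1]  # most-significant bit first

# Fold the bits into a single integer: shift left, then drop in the next bit.
value = 0
for bit in bits:
    value = (value << 1) | bit

print(value)       # 65
print(chr(value))  # 'A' -- the very same bits, now interpreted as ASCII text
```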

[Read More]

Computers Are (Really) Advanced Guessing Machines

One of my favorite (personal) sayings about computers is that they are highly advanced guessing machines. You can see this play out practically with things like branch prediction, where a processor must guess the path of a logical branch based on the history of that branch. This heuristic is analogous to how many humans guess; we use history as a predictor for future events. While HPCA has many similar techniques, this scenario is even more common in the other Georgia Tech course that I’m taking this semester, Robotics: AI Techniques.
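To make the “history as a predictor” idea concrete, here’s a toy two-bit saturating counter in Python, a classic branch-prediction scheme. This is my own illustrative sketch (the branch outcomes are made up), not anything from the course materials:

```python
class TwoBitPredictor:
    """States 0-1 predict 'not taken', states 2-3 predict 'taken'."""

    def __init__(self):
        self.state = 2  # start weakly 'taken'

    def predict(self):
        return self.state >= 2  # True means we guess the branch is taken

    def update(self, taken):
        # Nudge the counter toward the actual outcome, saturating at 0 and 3.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

predictor = TwoBitPredictor()
outcomes = [True, True, False, True, True, True, False, True]  # a loop-like branch
hits = 0
for actual in outcomes:
    if predictor.predict() == actual:
        hits += 1
    predictor.update(actual)
print(f"{hits}/{len(outcomes)} predictions correct")  # 6/8 -- history is a decent guess
```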

Artificial intelligence is the pinnacle of guessing: it takes practical techniques (like search algorithms) and combines them with statistical tricks built primarily on probability distributions (usually Gaussian). The mathematics behind these distributions, in my opinion, often confuses and distracts from what is actually a delightfully simple concept.
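As a rough sketch of that simple concept (my own toy example, with invented numbers, not code from the course), here’s how a one-dimensional Gaussian can weight competing guesses against a sensor reading:

```python
import math

def gaussian(x, mu, sigma):
    """Probability density of x under a normal distribution N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# Two guesses about where a robot might be; the sensor reads 5.1.
measurement, sigma = 5.1, 0.5
for guess in (5.0, 7.0):
    weight = gaussian(measurement, guess, sigma)
    print(f"guess {guess}: weight {weight:.4f}")
# The guess near the measurement gets nearly all the weight -- that's the whole trick.
```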

[Read More]

Not Everything Is About Computers – Sometimes It’s About Bread

Most of my posts are indirectly or directly about computers. For those that know me well, this is no surprise; I feel at home when my hands are on a keyboard. That said, my life expands out of my office from time to time. Usually onto a forest trail or sidewalk for long runs and walks, but also into the kitchen where a few hours of my time pass on a daily basis.

Recently, I’ve taken up baking bread after my wife brought back a sourdough starter from Sea Wolf Bakery, a local bakery in the Wallingford neighborhood here in Seattle. While the starter went largely unused for years (aside from a regular feeding every 3-4 weeks to keep it alive), life is now such that bread baking is easy to incorporate into my daily schedule.

After several unsatisfactory baking sessions and plenty of playing around with different techniques and variables, my bread has finally reached its happy place. Each loaf gets great oven spring, the texture is chewy but not dense, and the air pockets are well-distributed. It’s also much more flavorful, as I’ve adapted the process to include a 12-hour proof in the fridge to let the yeast work its magic without overproofing the loaf.

The overall process is delightfully simple, although it requires some planning and care. A normal bread-baking day starts out when I wake up to make coffee. While the coffee grounds steep in the French press, I pull my starter out of the fridge and measure out 40g of it into a new container along with 160g of water and 160g of flour. The goal here is to refresh the starter so that the yeast reactivates and gets to work. Around 5 or 6pm (about 10 hours later) the starter should look like so:

[Image: bubbles forming in the refreshed starter]

Those bubbles are indicative of the yeast working its magic. The real test, however, is when you drop the starter into the water to begin the bread-making process.

[Read More]

The Engineering Art of Balancing Desire with Reality (as told by processor caches)

[Image: AMD Zen architecture diagram]

In a course about high performance computer architecture, it’s no surprise that most of the time is spent discussing how to speed up computers using their architecture. It’s almost as though the name of the course tells you exactly what to expect.

This week in CS6290 at Georgia Tech, we’ve moved on to caches, which play a key role in speeding up the retrieval of information. The processor’s goal is crunching data, which is held either in main memory (RAM) or on disk (an SSD or HDD). To get that data, the processor issues requests for memory addresses and retrieves the data from whichever storage unit holds it.
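As a rough illustration (my own sketch, far simpler than any real design and not the course’s code), here’s a toy direct-mapped cache in Python that splits each address into a tag, an index, and a block offset and reports hits and misses:

```python
BLOCK_SIZE = 64   # bytes per cache line (illustrative numbers)
NUM_SETS = 256    # lines in the cache

cache = [None] * NUM_SETS  # each entry holds the tag of the block it currently caches

def access(address):
    block = address // BLOCK_SIZE   # which block of memory the address falls in
    index = block % NUM_SETS        # which cache line that block maps to
    tag = block // NUM_SETS         # what distinguishes it from other blocks at that line
    if cache[index] == tag:
        return "hit"                # data is already sitting in the fast cache
    cache[index] = tag              # otherwise fetch from memory and remember the block
    return "miss"

# Nearby addresses share a block, so only the first access to it misses.
for addr in (0x1000, 0x1008, 0x1010, 0x2000, 0x1000):
    print(hex(addr), access(addr))  # miss, hit, hit, miss, hit
```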

[Read More]

Processor Pipelines and the Foundation of Computing Systems

This will probably be my last semester at Georgia Tech that includes a computing systems course (unless high performance computing becomes available again online). The rest of my coursework will be focused on my specialization – machine learning – and while I’m excited to focus more on the questions that brought me to this program, I will undoubtedly miss computing systems.

The beautiful part of this area of computer science is that it is where the rubber meets the road. Theory meets application, application provides lessons that feed back into theory, and that theory then feeds into other applications.

[Read More]

Everything In Its Right Place – A Primer on Hardware Support for High Performance Computer Architecture

My wife and I have a running joke in the house: whenever one of us moves something to its “correct” resting place, it’s usually punctuated by breaking out into song.

Computer science is the practical application of many other sciences (solid state physics, calculus, linear algebra, information science, etc., etc., etc.), but it is at its most exacting and least forgiving the closer to the hardware you get. Here, everything truly does have its right place.

[Read More]

Checklists Are Important for Everything – Especially Processors

Every so often, a few posts come across my desk at the same time, and it reminds me of how at some basic level, all work is the same work, just manifested in different ways. Checklists and agendas, which are near and dear to my heart, are crucial for communicating and getting things done correctly across a team. They represent an agreement, a contract, reflections of expectations.

When you enter a meeting that has gone off the rails, it’s likely that either someone has torpedoed the agenda or one was never established. Likewise, any time I’ve needed to get a project back into a manageable state, a forced, prioritized to-do list is my weapon of choice.

Similarly, sequential logical steps are the bread and butter of processors. Most of my high performance computer architecture course is focused on how processors squeeze every possible optimization out of a program’s instructions. There are dozens of ways they do this (branch prediction, loop unrolling, data caches, etc.), but perhaps the most approachable is how a processor issues and executes instructions.
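In that checklist spirit, here’s a tiny Python sketch of my own (the instructions and latencies are made up) of an in-order processor that only issues an instruction once every item on its list, the source registers, is checked off:

```python
# (name, destination register, source registers, latency in cycles) -- all illustrative.
program = [
    ("LOAD r1",      "r1", [],           3),
    ("ADD r2,r1,r1", "r2", ["r1"],       1),
    ("MUL r3,r2,r1", "r3", ["r2", "r1"], 1),
]

ready_at = {}  # register -> cycle at which its value becomes available
cycle = 1
for name, dest, sources, latency in program:
    # The checklist: stall until every source register has been produced.
    while any(ready_at.get(src, float("inf")) > cycle for src in sources):
        print(f"cycle {cycle}: stall ({name} waiting on {sources})")
        cycle += 1
    print(f"cycle {cycle}: issue {name}")
    ready_at[dest] = cycle + latency  # result is ready after the instruction's latency
    cycle += 1
```

Running it shows the ADD stalling for two cycles while the slow LOAD finishes, which is exactly the kind of bubble that the techniques above try to hide.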

[Read More]