Friday, December 26, 2008

Practice

I recently read Malcolm Gladwell's new book Outliers, and one of the great points he makes is the huge impact practice has on an individual's ability to master a specialty. While I don't necessarily agree that the 10,000-Hour Rule applies to every field, it is definitely on the right track. It doesn't matter whether it is skiing, painting, playing hockey, or coding; the human brain and nervous system require continuous repetition to really master a technique. I haven't figured out what the magic number of hours is for programming (I'm not sure there is one), but there is no question that the best programmers I have come across are not math geniuses or naturally gifted at understanding some esoteric computer technology. They are talented because of the intriguing and diverse projects they have had the opportunity to work on.

They are masters because they continuously practice.

Now, Joel might argue that a great programmer also needs to think at multiple levels of abstraction, can't have gone to a JavaSchool, and needs to understand the nuts and bolts of C and low-level programming. Let's assume you have all this and are on your way to greatness at the next challenging new startup. Is the project you are working on forcing you to break out of your comfort zone with C (remember, you didn't go to a JavaSchool)? Are you learning something new? When other people in your department mention the work they are doing, do you think, "Wow, that sounds really interesting too"?

A lot of companies are not interested in solving something in a new way. They need a program written to take data from location A and move it to location B, and they need it done in a month. It doesn't matter if the program is particularly fast or if you developed some new design pattern to generate an elegant solution. What matters is that your boss can tick off your program's completion on his Gantt chart and report that the project is on schedule! If you are on this type of project, you are not practicing. You are biding your time until the next mundane program is needed. Except this time they need it to take that data you so carefully moved to location B and move it back to A. (Why did we move it to B to begin with?!)

Find a project that is challenging. Diversify. Practice. Computer science is too fast-evolving a field to spend even six months of your life not learning. Yes, there is a lot of white noise, but there are also a lot of gems being discovered. And there are plenty of other companies looking for people who can write faster programs with more elegant solutions.

Tuesday, December 9, 2008

Consistency

In a recent Java refactoring project I was updating some threading code and came across a HashMap that needed to be converted to a ConcurrentHashMap. I made a one-line change from:

Map myMap = new HashMap();

to

Map myMap = new ConcurrentHashMap();


Before I started working on other parts of the threading problem, it seemed like a good idea to run my unit tests just to see what changing the implementation would do. What happened? A good old Java NullPointerException. The problem, it turned out, was in the get() method: get() was being called with a null key, and ConcurrentHashMap was throwing an NPE. Why didn't this happen with HashMap? I thought I should double-check the Javadoc for ConcurrentHashMap, in case I had made a silly assumption that the two classes had the same interfaces and contracts. Here is the excerpt from the first paragraph of the ConcurrentHashMap page:
A hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. This class obeys the same functional specification as Hashtable, and includes versions of methods corresponding to each method of Hashtable. However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is not any support for locking the entire table in a way that prevents all access. This class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details.
The Javadoc says (ignoring the gritty details of threading) that these classes are interchangeable. The problem is that the devil is in the details, and the details here are the way Java handles unchecked exceptions. Both classes extend the same parents and implement the same interface methods, but as it turns out, HashMap and ConcurrentHashMap treat a null key totally differently. For these classes to be truly interchangeable, ConcurrentHashMap.get() could not throw an exception where HashMap.get() does not. Going into a little more detail, here are the actual Javadoc comments on the get() methods:

HashMap.get()
Returns:
    the value to which this map maps the specified key, or null if the map contains no mapping for this key.

ConcurrentHashMap.get()
Returns:
    the value to which the key is mapped in this table; null if the key is not mapped to any value in this table.
Throws:
    NullPointerException - if the key is null.
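
To see the divergence in isolation, here is a minimal sketch (the class and variable names are mine, not from the project's code) that makes the same call against both implementations:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        // HashMap quietly treats a null key as "no mapping" and returns null
        Map<String, String> plain = new HashMap<String, String>();
        System.out.println(plain.get(null)); // prints "null"

        // ConcurrentHashMap rejects the null key outright
        Map<String, String> concurrent = new ConcurrentHashMap<String, String>();
        System.out.println(concurrent.get(null)); // throws NullPointerException
    }
}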

All of a sudden someone decided to throw an unchecked exception? Certainly there must be a reason behind it, so I decided to look at the Java source code:

ConcurrentHashMap:

public V get(Object key) {
    int hash = hash(key); // throws NullPointerException if key null
    ...

HashMap:

public V get(Object key) {
    if (key == null)
        return getForNullKey();
    int hash = hash(key.hashCode());
    ...

It comes down to a small null check present in one method and missing in the other. The Sun Java developers might have had a good reason for this, but that misses the point. In OOP there is nothing more important than truth in advertising. Claiming one thing and doing another is far more dangerous than simply saying "use at your own risk".
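
Until the contracts actually match, the caller has to supply the truth in advertising. A minimal defensive sketch, reusing myMap from the example above (the guard itself is my suggestion, not the JDK's):

// Treat a null key as "no mapping" regardless of whether myMap is
// a HashMap or a ConcurrentHashMap, instead of risking an NPE.
Object value = (key == null) ? null : myMap.get(key);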

I find this is common in a large number of APIs, and it really seems to rear its ugly head when those APIs are updated. Sometimes it is because the developer updating the API makes a bad assumption about how people use it, sometimes it is a technical restriction, and sometimes it is just a mistake. What is really scary about this one is that it is a runtime exception: the evil type of exception that normally sneaks through (even with good unit tests) and then hits you in production. Users beware!
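
One way to keep this particular surprise out of production is to pin the contract down with a targeted test. A sketch using JUnit 4 (the test class and method names are mine; the asserted behavior is straight from the Javadoc above):

import java.util.concurrent.ConcurrentHashMap;

import org.junit.Test;

public class NullKeyContractTest {
    // Documents, and guards against silent changes to, the fact that
    // ConcurrentHashMap.get() rejects null keys with a NullPointerException.
    @Test(expected = NullPointerException.class)
    public void getWithNullKeyThrows() {
        new ConcurrentHashMap<String, String>().get(null);
    }
}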

Sunday, November 30, 2008

Early Branching

Recently a coworker and I were looking at a versioning problem with some code that had been integrated into the current release branch (from some parallel branch), and we stopped and asked ourselves: "Why are these integration issues always so complicated, and why do we always hit them at the end of a release cycle?"

I have been fortunate enough to work for software companies where the build was a transparent luxury that developers knew almost nothing about (it just worked!), and for companies where the build was some Machiavellian Rube Goldberg machine that worked only if no one made any mistakes and sage was burned at the right hour on the night before a release. The interesting thing is that regardless of the technologies, languages, or platforms the build system used, the one thing that seemed to make the biggest difference was when the branches were cut.

Of all the branching strategies I have come across, the one I've witnessed the greatest success with is early branching. Why does this seem to work better than late branching (or variants of merge/propagate early and often)? I think for a few key reasons:
  1. Branching is done for clear and coherent reasons. As soon as a release is planned, a branch is cut. This ties clear release requirements to a physical code base against which those requirements can be evaluated at any given moment.
  2. It isolates potentially conflicting parallel work, helping to minimize developer collisions and build downtime.
  3. It reduces concurrent branch explosion. (I have seen companies with 9-10 concurrent branches, all hoping they could merge them together at the 11th hour and release.)
  4. Potentially underestimated tasks ("Hey, we need to support a new platform!") are identified early, and release plans (or requirements) can be adjusted accordingly.
While I think #1 is probably the one that gives developers and project managers the biggest benefit, #4 is where I've seen hours, days, even weeks of time saved. It is self-evident that discovering potential problems early in a software cycle is better than discovering them late, but it is surprising how often this is missed because no one foresaw any major problems. Developers are also human, and as opposed to strategies that are more laissez-faire in their version control restrictions (often relying on the wise developer to remember to do all the right integrations), this approach minimizes mistakes and is more forgiving when mistakes do happen. And if we have learned anything from books like Microserfs or Dreaming in Code, it's that software is hard enough without adding unforgiving process into the mix.

Sunday, November 23, 2008

Gastronomique


Cooking as an art has seen a popularity explosion in America over the last 15 years: from dedicated cooking channels to reality cooking shows, from the Slow Food movement and the backlash against the fast food industry to the boom in organic/fresh grocers like Whole Foods. Cooking has been, and continues to be, an activity where I find peace. To focus on creating something exquisite and present it for others to savor is not only rewarding but meditative. I must own 50 cookbooks on various subjects and cuisines, but it is Thomas Keller's French Laundry Cookbook that inspires the name of this blog.

Rather than the typical organized listing of recipes, Keller titles many of his chapters "The Importance of ___". From building good relationships with specialty vendors, to big-pot blanching, to the importance of staff dinner, each chapter helps the reader understand why professional cooking is a holistic approach to both people and ingredients. Keller's philosophies and passions apply to so much of life outside of cooking that I hope this blog will serve as my personal interpretation of this idea in software, society, and the Bay Area.

Saturday, November 22, 2008

Underindulgence

I grew up in what would certainly be considered a modest- to low-income family in America. Both my parents were teachers in an affluent city whose primary constituents were doctors, lawyers, and retired movie stars. My family made do with what was available, and if there was one lesson we learned, it was sharing. Now, most people think of sharing as that altruistic forfeiture in which one makes some sacrifice so that another may benefit.

Twenty years later, I have witnessed a variation of sharing that has become the horror of my workplace. Normally, I would chalk this type of offense up to the office idiot, or perhaps the nameless dirty neighborhood filth-leprechaun who sneaks in to commit some vile offense that employees later come upon in the kitchen and exclaim, "Holy Christ... who filled the sink with leftover curry?!"

This particular Offender has decided that whatever free office breakfast, lunch, or snack is offered, they will partake of exactly half an item and leave the rest for another (sharing, right?). Now, the polite, if not sanitary, way to do this would be to cut the item in half. Hey, there is even a knife right there! Unfortunately, the evidence suggests The Offender simply uses their mighty cake-hole grinders to tear off a piece of the item, leaving behind the rest for some poor soul to stumble upon and ponder, "What the hell happened here?"

In the spirit of my advice to the voters who gave Bush the deuce run this last election, I plead to The Offender: if you can't finish a whole muffin, consider abstaining this time around.