Tuesday, October 20, 2009

When Bad is Good


If there is one thing I have learned from Predictably Irrational, Traffic, or your pick of any Gladwell book, it is that the human thought process is unbelievably complex. We are so wired to behave in certain ways that despite all logic and foresight we are destined to repeat the same mistakes, victims of our brain's own evolutionary biology. I will get back to this concept, but for the time being let me explain what this has to do with Apple and Google.

At the recent San Francisco Stackoverflow DevDays I had the opportunity to sit through two presentations, one from each of the two biggest players in mobile OS platforms: Apple (iPhone) and Google (Android). I will start out with the highlights from the iPhone talk. Read the rules for developing on the iPhone and see if they sound friendly and inviting:
  • You have to learn a 20+ year old programming language with a toolkit that is almost as old
  • You must follow all our programming and UI guidelines to the letter when developing your application. Oh, and we mean it!
  • You must give us 30% of the profit you make from selling your application
  • We can reject your application for any number of reasons you might not like
  • Your application will only ever be able to run on one kind of phone. Adios portability!
Where do I sign up? No seriously, where?! (BTW, great talk Rory Blyth.) Believe it or not, these rules are exactly why Apple's long-term iPhone vision is paying off and Google's never will. This lock-down policy is why the experience of using an iPhone is so consistent. Why the battery life seems to remain steady despite me adding 10 new apps. Why gestures like sliding a panel, toggling a switch, and entering text are consistent in every application. Most importantly, why I have never seen a dialog box like this.


I don't want to see dialog boxes like that. Especially when my only option is "Force close". As opposed to what? There is only one choice! Just make it for me, hide the whole ordeal, and let me move on.

During the Android talk the representative from Google referred to this as "the error dialog which G1 owners should all be familiar with by now". A huge, ugly dialog box. Shit happens, force close, but try again mate! He explained to the room that this wasn't Android's fault. It was due to Android application programmers who didn't understand threading. Threading of all things! He reminded us that the best thing about Android was that their marketplace didn't have draconian rules and guidelines like those meanies over at Apple. Anyone can make an application for Android, and we all know that when everyone gets to play, no one wins.
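The threading mistake the Google speaker was describing is easy to sketch. Below is a minimal, hypothetical illustration in plain Java (names like fetchFeed and loadFeedAsync are mine, and java.util.concurrent stands in for Android's own classes; a real Android app would hop back to the UI thread with something like Handler.post() instead of invoking the callback directly):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class BackgroundWork {
    // Daemon worker thread so the JVM can still exit normally.
    private static final ExecutorService worker =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

    // A slow operation. Calling this directly on the UI/event thread freezes
    // the app -- on Android, that freeze is what summons the force-close dialog.
    static String fetchFeed() {
        try {
            Thread.sleep(100); // stand-in for a slow network or disk call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "feed data";
    }

    // The fix: run the slow work on a worker thread and deliver the result
    // through a callback, so the UI thread never blocks.
    static void loadFeedAsync(Consumer<String> updateScreen) {
        worker.submit(() -> updateScreen.accept(fetchFeed()));
    }
}
```

The point is not the specific classes; it's that the platform lets you block the UI thread, and nothing but developer discipline prevents the ugly dialog.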

Which brings me back to my intro. You see, Apple understands the economics of the crazy way humans work. Programmers are going to write crappy code, and programmers don't like big corporations telling them what they can and can't do. We know best! We must have error dialogs! How dare you not let me use Java, Ruby, or the language of the month to write my iPhone app! We must have our application work on all 39 different models because that's more sales! This should all be obvious!

And yet it's all wrong. What we want is not always what is good for us.

In order to create a consistent phone experience, in order to have it work reliably (OK, forget AT&T for this argument, I'm talking about Apple's role), in order for the battery to have a decent lifespan, in order to sell tons of applications in a unified, consistent manner, they had to do the very things that should have turned developers away. And it worked. And now people are lining up to give Apple 30% of their profits, just to get in on the only mobile platform in town.

Thursday, July 2, 2009

Real Incremental Find

How many times do you find yourself searching for something where you don't even know what to search for? This happens to me all the time. I can't remember the name of some new artist whose song I recently heard and loved, so I search for some of the song lyrics. I can't remember the name of that great sushi place I just ate at, so I search a nearby cross-street I remember passing + "great sushi". But what if you have absolutely no idea? A recent example I heard (from Joel Spolsky) was someone who was looking for how to create the perfect latte. It turns out that in order to do this you need to perfect steaming your milk until you get it into a state called "microfoam". How would you ever know that term to ask the question: "How do I create perfect milk microfoam"?

As it turns out, this happens in programming all the time. Often you start playing around with some new language and you really want to do "X", but it just doesn't work. As it turns out, "X" has a special esoteric name in that language, and if you only knew that word you would solve your problem in 5 minutes. Instead you spend the next 30 minutes trying to type in searches like:
"Why do I get an exception when I pass a mutable object to NewLanguage.specialFunction(var), but not all the time..."
To me, this is one of the next logical evolutions of search, and typeahead is the perfect place for it. I don't want incremental find (typeahead) to finish my sentence for me, I want it to finish my thought. I know, I know, that seems like a lot to ask. Natural language has so many ambiguities, and computers are so explicit and logical. But the information is out there, AI can be improved, and the graphs and connections are waiting to be formed. I will know someone (Google? Bing? ...Cuil?) is on the right track the minute I start typing "Will this day never en..." and the first thing that comes up in my typeahead drop-down is:

"You need a vacation: How about Fiji?"

Sunday, March 1, 2009

Antique


Buying Amazon’s new Kindle was an interesting transition for me. I have always been passionate about physical collections. I started out collecting comics and Games Workshop figures as a youth. Later in life it was records for DJing, and books. A full bookcase or a room of milk crates packed with vinyl is a thing of beauty. Like the rest of the 21st century, I am slowly making the transition toward new mediums for these items. There wasn’t a specific defining event which instigated this, and the transition will never be 100% (which is why record companies are so wrong when they assume the digital downloader isn’t also buying hard copies). This is why beautiful album art and companies like McSweeney’s will always have a place on my bookshelf.

It is the uniqueness and ambiance that physical items have. You remember when and where you bought that music album by looking at the jewel case. Or maybe you can’t quite remember, but you still romanticize about it. “Yah, I got that Nirvana album the day it came out!” Perhaps you can even remember where you purchased a particularly old and faded book, or that good friend who lent it to you, then moved to the other side of the country, and never got it back. It is the same way with photos. Those old, grainy, washed-out pictures from the 1970s look like the 1970s! Super 8 videos look… well, dated.

Not as much in the digital age. As I begin to capture high-definition video (I can barely believe HD camcorders are a consumer product) of my newborn daughter, and fill up my hard drive with digital pictures, I wonder if she will be able to visually “date” them when she is older. And I am not talking about dating via the subject of the photo. Will she have the experience I had of going through my Mom’s scrapbook of poorly taken, strangely-contrasted Polaroids where it’s hard to make out if the photo is me or my brother? What is the digital version of the yellowing borders around an old photo, or the grainy image and artifacting on a VHS tape? Does an external hard drive with a handmade sticker saying “Baby’s First Year” emit any sort of nostalgia?

There is beauty to things that physically age. Are we losing this in the digital age? Once video and photograph resolution exceeds the human ability to discern between real and captured, will everyone’s life be perfectly preserved for all eternity, just as if the images were taken the day before?

Oh and the Kindle... yes I love it. It has lightened my bag by at least 1-2 pounds and I already have three more books queued to read. Maybe it is the engineer in me, but I am still excited to see what the next new compact, greener, easier to manage digital tool gets invented for me to buy.

Thursday, January 22, 2009

Change


There are two classic stereotypes of software groups that most developers are familiar with. On the left you have the small start-up team with 5-10 engineers all slaving away and working together to beat the 6-month deadline before the competition comes out with your new idea, or worse, the coffers run dry. On the right you have the corporate goliaths with 47 levels of middle management and engineering groups in 23 countries, all following multi-year feature release schedules with documentation and specs that rival War and Peace. During a conversation between two engineers, if one of them mentions that terrible start-up that worked them to death, or the huge tech firm where they worked on a project for 5 years and reported to 9 managers, the other engineer can nod and say: "Boy, I know what you mean... that's why I could never work at that sort of company."

Both of these companies have issues (and plenty of them). At the small start-ups you know what you are in for, and the same holds true at the IBMs and Microsofts of the world. Really, regardless of whether you’re a junior developer or a VP of Engineering, you just have to figure out what environment you work best in. Strategies for building a cohesive engineering team that generates quality software can be very different at the two. There are great books and blogs that go into many of these strategies, and even a few ideas that work in both environments. However, what seems to be the hardest to figure out are good strategies for all the companies in between. What happens when your small start-up starts to become not so small?

Once a software group reaches 30-50 people you start to enter a dangerous area. Some companies expand and think they need to transition to the "big" model. They quickly hire more managers, and before you know it there is a manager for every 3 people, every engineer has 5 status reports to complete each week, and productivity grinds to a halt. Just as bad are the companies that don’t bring in good leadership and have 50 people all running around with a start-up mentality, where interfacing between each team becomes a nightmare. During this growth period you also run the risk of losing the people who think the company is moving away from the start-up they love and transitioning into the Office Spaces of the world. To get through this you are going to have to figure out what works best for your software group. Different tactics are going to work better at different places, but in general here are a few of the things I have noticed.
  1. If you have good existing senior engineers and managers that are willing to stick with the company as it gets bigger, make sure they work well as a team. If they don't, it will be a disaster. As new engineers join the different teams they will soon pick up on the conflicts and cliques and division will only get worse.
  2. Don't outsource. One of the worst things you can do as a department grows is solidify the impression that your company is turning into a cold corporate giant where money reigns supreme. Transitioning is already difficult enough without hallway rumors that John's team is next to go, or that all of support just packed their bags. Also, early on this will not save your company money. Companies of this size don’t have the manpower to dedicate to managing an offshore team. So you either increase your salary expenses by bringing in people specifically hired to help manage offshore work, or you dig into developer time to manage it. Either way is not free, and most likely it will only lower your productivity.
  3. Be careful with the Architecture Astronauts. If you have senior engineers who already love the 6 month R&D projects that never go anywhere and have been with the company for so long that people aren't even sure who they report to, it's only going to get worse.
  4. If you are a manager, take the time to compliment other team members (the old, the new, and especially the ones not on your team). It could be as simple as, "I saw that check-in you did last night, nice work!" It's the easiest thing in the world and it makes your coworkers feel connected. As your department continues to grow this really helps the teams feel connected.
  5. Train the people you want to see move up the ladder. Don't grab the member of a team who has been with the company the longest and throw him into management. If you don't have the time or resources to train them, you are solving a temporary need with a long-term disaster. When that person is totally overwhelmed and their team is 6 weeks behind schedule, you will see why management by lottery has never worked.

I equate this period to the software department's version of Crossing the Chasm. You need to keep cohesion as you expand. If you pile on the bureaucracy and corporate attitude, you might get a big surprise when productivity slows and so does the economy. Suddenly you have a 15% workforce reduction, and now you are back to the small team, except with big-company red tape. Likewise, you can't continue with fly-by-night coding and no bug tracking, then pray the CEO isn't upset when you try to explain not having enough manpower for the next huge project despite all the new hires. Work slowly at it, and you will find you keep the good people, even some of the ones who mourn the days when everyone worked out of the coffee shop downstairs.

Friday, December 26, 2008

Practice

I recently read Malcolm Gladwell's new book Outliers, and one of the great points he makes is the huge impact practice has on an individual's ability to become a master of his specialty. While I don't necessarily agree that the Rule of 10,000 Hours applies to all fields, it is definitely on the right track. It doesn't matter if it is skiing, painting, playing hockey, or coding; the human brain and nervous system require some amount of continuous repetition to really master a technique. I haven't figured out what the magic number of hours to become a master is for programming (I'm not sure there is one), but there is no question that the best programmers I have come across are not math geniuses or naturally gifted in understanding some esoteric computer technology. They are talented because of the intriguing and diverse projects they have had the opportunity to work on.

They are masters because they continuously practice.

Now, Joel might argue a great programmer also needs to be able to think at multiple levels of abstraction, and they can't have gone to a Javaschool, and they need to understand the nuts and bolts of C and low level programming. Let's assume you have all this and are on your way to greatness at the next new challenging startup. Is the project you are working on forcing you to break out of your comfort zone with C (remember you didn't go to Javaschool)? Are you learning something new? When other people in your department mention the work they are doing do you think, "Wow that sounds really interesting too?"

A lot of companies are not interested in solving something in a new way. They need a program written to take data from location A and move it to location B, and they need it done in a month. It doesn't matter if the program is particularly fast or if you developed some new design pattern to generate an elegant solution. What matters is that your boss can tick off your program's completion on his Gantt chart and report that the project is on schedule! If you are on this type of project, you are not practicing. You are biding your time until the next mundane program is needed. Except this time they need it to take that data you so carefully moved to location B and move it back to A. (Why did we move it to B to begin with?!)

Find a project that is challenging. Diversify. Practice. Computer science is too fast-evolving a field to spend even 6 months of your life not learning. Yes, there is a lot of white noise, but there are also a lot of gems being discovered. And there are plenty of other companies looking for people who can write faster programs and do so with elegant solutions.

Tuesday, December 9, 2008

Consistency

In a recent Java refactoring project I was updating some threading code and came across a HashMap that needed to be converted to a ConcurrentHashMap. I made a one line change from:

Map myMap = new HashMap();

to

Map myMap = new ConcurrentHashMap();


It seemed like a good idea, before I started working on other parts of the threading problem, to run my unit tests just to see what changing the implementation would do. What happened? Good old Java NullPointerException. The problem, it turned out, was the get() method. get() was being called with a null key and ConcurrentHashMap was throwing an NPE. Why didn't this happen with HashMap? I thought perhaps I should double-check the Javadoc for ConcurrentHashMap, in case I had made a silly assumption that they both had the same interfaces and contracts. Here is the excerpt from the first paragraph of the ConcurrentHashMap Javadoc page:
A hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. This class obeys the same functional specification as Hashtable, and includes versions of methods corresponding to each method of Hashtable. However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is not any support for locking the entire table in a way that prevents all access. This class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details.
They say (ignoring the gritty details of threading) that these two classes are interchangeable. The problem is that the devil is in the details, and the details are in the way Java handles uncaught exceptions. They both extend the same classes and have the same interface methods, but as it turns out, HashMap and ConcurrentHashMap treat a null key totally differently. If these classes were truly interchangeable, ConcurrentHashMap.get() couldn't throw an exception where HashMap.get() doesn't. Going into a little more detail, here are the actual Javadoc comments on the get() method calls:

HashMap.get()
Returns:
the value to which this map maps the specified key, or null if the map contains no mapping for this key.
ConcurrentHashMap.get()
Returns:
the value to which the key is mapped in this table; null if the key is not mapped to any value in this table.
Throws:
NullPointerException - if the key is null.

All of a sudden someone decided they would throw an uncaught exception? Certainly there must be a reason behind it. I decided to look at the Java source code:

ConcurrentHashMap:

public V get(Object key) {
    int hash = hash(key); // throws NullPointerException if key null
    ...

HashMap:

public V get(Object key) {
    if (key == null)
        return getForNullKey();
    int hash = hash(key.hashCode());
    ...

It turns out it comes down to a small null check in one method and the same check missing in the other. The Sun Java developers might have had a good reason for this, but that misses the point. With OOP there is nothing more important than truth in advertising. When you claim one thing and do another, that is far more dangerous than simply saying "use at your own risk".
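The contract difference is easy to demonstrate in a few lines. This is a minimal sketch using just the two classes, showing exactly the behavior the Javadoc quotes describe:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> plain = new HashMap<>();
        System.out.println(plain.get("missing")); // null: no mapping, no exception
        System.out.println(plain.get(null));      // null: HashMap permits a null key

        Map<String, String> concurrent = new ConcurrentHashMap<>();
        System.out.println(concurrent.get("missing")); // null: still fine
        try {
            concurrent.get(null); // throws: ConcurrentHashMap rejects null keys
        } catch (NullPointerException e) {
            System.out.println("NPE on null key");
        }
    }
}
```

A one-line swap of the constructor call changes a silent null return into a runtime exception, which is exactly why my unit tests blew up.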

I find this is common in a large number of APIs, and it really seems to rear its ugly head when those APIs update. Sometimes this is because the developer updating the API makes a bad assumption about how people use it, sometimes it's a technical restriction, and sometimes it's just a mistake. What is really scary about this one is that it is a runtime exception. That evil type of exception that normally sneaks through (even with good unit tests) and then hits you in production. Users beware!

Sunday, November 30, 2008

Early Branching

Recently a coworker and I were looking at a versioning problem with some code that had been integrated into the current release branch (from some parallel branch), and we stopped and asked ourselves: "Why are these integ issues always so complicated, and why do we always hit them at the end of a release cycle?"

I have had the fortunate experience of working for software companies where the build was a transparent luxury that developers knew almost nothing about (it just worked!), and companies where the build was some Machiavellian Rube Goldberg machine that worked only if no one made any mistakes and sage was burned at the right hour on the night before a release. The interesting thing is that despite the technologies, languages, or platforms the build systems used, the one thing that seemed to make the biggest difference was when the branches were cut.

Of all the branching strategies I have come across, the one I've witnessed the greatest success with is early branching. Why does this seem to work better than late branching (or variants of merge/propagate early/often)? I think for a couple of key reasons:
  1. Branching is done for clear and coherent reasons. As soon as a release is planned, a branch is cut. It ties clear release requirements to a physical code base against which those documents can be evaluated at any given moment.
  2. It isolates potentially conflicting parallel work and helps to minimize developer collisions and build downtime.
  3. Reduces concurrent branch explosion. (I have seen companies with 9-10 concurrent branches all hoping they can merge them together at the 11th hour and release)
  4. Potentially underestimated tasks (Hey we need to support a new platform!) are identified early and release plans (or requirements) can be adjusted accordingly.
While I think #1 is probably the one that gives developers and project managers the biggest benefit, #4 is where I've seen hours, days, even weeks of time saved. It is self-evident that knowing about potential problems early in a software cycle is better than late, but it is also surprising how often this is missed because no one foresaw any major problems. Developers are also human, and as opposed to strategies that are more laissez-faire in their version control restrictions (often relying on the wise developer to remember to do all the right integing), this approach minimizes mistakes and is more forgiving when mistakes do happen. And if we have learned anything from books like Microserfs or Dreaming in Code, it's that software is hard enough without adding unforgiving process into the mix.