Breaking The Ties That Bind

Posted in: Personal Branding

Today, KLSX, a radio station in the Los Angeles area, announced that it is switching formats from all talk to some kind of Top 40 music format. Due to the sorry state of the radio market in most of the US, especially on the music-oriented stations, I have been listening to KLSX pretty regularly over the last 18 months or so. So I have been listening intently today, ever since the morning shift, to hear each DJ’s take on the situation.

There has been one common thread through all of the shows today: each has been encouraging listeners to visit their web pages once the station goes off the air on Friday night. Adam Carolla, who does the morning shift, stated quite clearly that starting Monday morning his show will continue in podcast format, basically uninterrupted. Frosty, Heidi and Frank, who do the midday shift, were also encouraging people to visit their website, where they will post updates on whatever it is they do next.

Now, KLSX is an interesting station and may not be run like many others. For example, it is very interesting that each show/personality on the station has its own online presence outside of the station’s own website, and was allowed to promote that presence openly.

The internet has changed the model: these DJs were only loosely coupled to the station they were on. This means they could easily take their show, and their listeners, to a new station with only minor cosmetic station-identification logo changes on their own websites. The radio station itself has simply become a vehicle for the DJs to deliver their own content.

It is, of course, an interesting time for a radio station, which by all reports was doing well in its various timeslots, to make such a dramatic change. The satellite radio market is on the verge of collapse, and if it does go under, there will be a glut of DJ personalities running back to find a safe terrestrial home.

On a personal note, I have been thinking how similar my own situation is to the way these DJs were operating. In addition to my work with The Juggernaut Group, I have a day job. I also do contracting work on the side and I help to write this blog – none of which is in any way connected to my day job. If I were to get laid off from my day job it would have very little influence on my online presence. Any good will I have generated through my online presence would continue on. Anyone who wants to do business with me would easily be able to find me via the internet, no matter where I went for my next gig.

I have essentially become much less connected to my day job. My credibility and hence ability to find another position has much more to do with my online presence than my previous employer ever will.

Session Paparazzi

Posted in: Enterprise Java

An interesting trend I have noticed at conferences lately is the use of digital cameras during a presentation to take photos of the slides being displayed. I am not sure what the objective is; perhaps these are just truly dedicated folks who can’t wait for the presentations to be posted online after the conference.

Perhaps there should be a movement to frown upon this practice, or at least a campaign to get them to turn their flash bulbs off. Better yet, just take some notes and wait for the slides on the post-conference website.

Get it right the first time – there is no going back!

Posted in: Software Development Best Practices

How many times have you been writing a piece of code and thought, “Oh, that piece is going to be tough. I will put a dumb version in for the moment and come back and put in the complete version later”? Probably often. Particularly if it is near the end of the project.

Usually this choice seems pretty benign, in fact it is second nature to most developers. But what are the issues with this approach?

The reality is that very few people ever get to go back and fix poorly written code. Even with all the hype around refactoring these days, it really is not a widespread practice (yet). Projects get pushed out the door prematurely to meet the almighty deadline, and the team moves straight on to either the next version or the next project. In fact, it is probably fair to say that if you have a bunch of time for refactoring (assuming you are working on a “traditional” project where refactoring is a luxury at the end and not an everyday occurrence), you are probably on a downward spiral in terms of job security. Your team should be 100% committed all the time – any other situation is probably a bad sign.

So what’s an engineer to do? Simple, just get it right the first time.

Getting it right the first time is actually spectacularly difficult, for a whole bunch of reasons, many of which the engineer has no control over. What seemed perfectly logical will at some point turn out to have been a total waste or miscalculation. This is one of the main reasons the agile community has evolved. Agilists accept that getting it right straight off the bat is difficult, so they follow practices that limit the possible wasted effort: by getting feedback early and often, they can more quickly identify when things are going wrong and change course.

If you don’t have the luxury of working in a progressive company where agile techniques are embraced, you really have to be able to swallow some hard truths. The biggest of these: be careful what code you write, as it will likely end up in a production environment, and once that happens you will likely be supporting it for a long time, perhaps years or even decades. I once read a story (I forget where – someone please let me know if this sounds familiar) about a development team that wanted to ensure they did not commit too early to any technology or give customers false impressions about how much progress had been made. So they literally prototyped the UI experience with the customer using cutout felt squares, pinning them to a board and rearranging them until the customer had what they needed.

So think about what you are writing. Quality is something you need to worry about now, not just during the QA phase of your project. And quality is more than just whether the code meets the functional requirements without setting the computer on fire. Quality is everything from following your team’s coding conventions and documenting your code, to extensibility and maintainability.

So if you are in an agile team, realise that the higher the quality of the code base, the more value it has to your company and/or your customer. To that end, refactor mercilessly. If you work in a traditional team, before you write a single line of code, think to yourself “would I want to maintain this piece of code I am thinking of writing” because that is exactly what you will be doing if you write it.

Upgrades, upgrades, upgrades everywhere

Posted in: Enterprise Java

We are bombarded with hype about new versions of tools (like IDEs), new libraries (like a new JDK), or new software packages (like a new object-relational mapper) all day long. But do we even need them? Can’t we basically create what we need to complete these mundane everyday tasks with the tools we already have? Aren’t most of these new releases just a way to sell new software licenses? Even with open source software, upgrading to a new release has an inherent cost that needs to be weighed.

Here is a real world example. I am currently involved in a project that includes upgrading the code base from JDK 1.4 to JDK 5 as part of its scope. My Engineer brain says that this upgrade is not only necessary, but just a plain good idea. There are some new language features in JDK 5 that we can take advantage of, so there is a benefit. But what are the costs? There is definitely a cost involved in auditing the existing code for anything incompatible with the new JDK, and in putting together a plan and effort estimate to address those issues. There is also an optional cost of retrofitting the existing code to take advantage of the new language features (like enums, for example). There is a learning curve cost. Perhaps there is a confusion cost for some junior engineers (I am thinking of the autoboxing feature). And there is a cascading cost: we now also have to consider upgrading our application server environment to support the JDK 5 code.
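For the curious, here is the flavor of the JDK 5 features in question – enums, generics, the enhanced for loop and autoboxing – in one small snippet (the class and method names are just illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class Jdk5Features {
    // Type-safe enum replaces the old int-constant pattern
    enum Status { PENDING, ACTIVE, CLOSED }

    // Generics remove the casts from collection code
    static int sum(List<Integer> values) {
        int total = 0;
        for (Integer v : values) {   // enhanced for loop
            total += v;              // auto-unboxing: Integer -> int
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<Integer>();
        nums.add(1);                 // autoboxing: int -> Integer
        nums.add(2);
        nums.add(3);
        System.out.println(sum(nums));      // 6
        System.out.println(Status.ACTIVE);  // ACTIVE
    }
}
```

Note the autoboxing lines – convenient, but exactly the kind of silent conversion that can confuse a junior engineer reading the code for the first time.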

So is all of this cost worth it? Will the benefits outweigh the risk and give the company some kind of ROI? I think in most cases, the answer is actually a definite yes.

Take the Java 5 upgrade example. Apart from the obvious technical benefits, there are a whole host of positive trickle-down benefits. For starters, I think many companies underestimate the importance of keeping the geeks happy. Engineers like to use new tools and stay on the cutting edge. They like to be able to read articles in the latest journals and apply the lessons learned straight away. Happy Engineers are good employees. They are more productive. They are more collaborative. They tend not to be looking quite so hard for that next dream job. They tend to be more forgiving of working conditions that might not be ideal. Engineers are creatures who like to learn and keep learning. Working on the same code and project and tools year after year will be the death of an Engineer, and you will end up with just mediocre Coders left on your team, as the Engineers will be long gone.

So before you decide on your next upgrade, think about all of the costs involved; there are many hidden ones you may overlook at first glance. But then make sure you also think about all of the benefits, both short and long term. Not every upgrade will make sense, but I bet that a majority of them will.

Enough is Enough

Posted in: Software Development Best Practices, Software Development Team Leadership

I am personally interested in processes (or removing processes), practices and tools that allow software teams to deliver on time and with high quality on a consistent basis. Now that sounds pretty benign and uninspired. It sounds like a common goal that everyone in the software industry would share. Am I naive to think that it is more than reasonable to expect bug-free software? I know many people think that at a mathematical level, debugging many of today’s complex systems is simply unrealistic. But didn’t humans design the compilers and the IDEs and everything else related to software? Don’t we by definition control the monster? Are we really so smart as to have invented something that is beyond our own abilities and comprehension already? Is this the start of the rise of the machines? I choose to think not.

I do believe that bug-free software is the goal of every software project, even if it isn’t explicitly stated. The problem is the commitment to achieving that goal. Quality is often intangible, at least on the high end (low quality is often very tangible). And intangible things are easy to ignore or sacrifice. Quality often comes at the end of a software project and when deadlines are tight, it is the first to be discounted.

Here is the story of what finally inspired me to start this blog.

I was sitting at San Francisco airport, waiting to fly home. I live in southern California, so it is a short flight, and American Airlines only uses little regional jets for the SFO to Orange County route. The flight was not full and the plane is small to start with, so passenger loading was done pretty quickly and we were set for an on-time takeoff. But we didn’t take off. We just sat there and sat there. Finally the stewardess announced over the intercom that the delay was because they had introduced a new automated system that week to do some of the paperwork required to be given a green light for takeoff, and that they were having some trouble getting it to work, so they were going back to the manual system so we could get underway.

The most disturbing part of this was the exact words used by the stewardess during the announcement. She said “we are using a new computerized automated system and as with all new software there are some bugs and glitches”. I found that statement to be very profound.

My assumption is that an airline stewardess represents a typical lay user of software – in other words, she has no specialized knowledge of how to write software or what it is like to work on a software team. She represents a typical user who only sees software from the end-user perspective. And from her perspective, software usually has bugs. Usually has bugs. Can that be true? With all of the software written, and all the smart people who write it, does software still usually have bugs? At least anecdotally, in my experience, that does appear to be the case.

So I say to that, ENOUGH IS ENOUGH.

Software is now part of a lot of people’s everyday lives. It is responsible for many tasks that, if completed incorrectly, could cost human lives. It is also responsible for many tasks that, if completed incorrectly, have a financial impact on companies and individuals. There is also a social impact from poor quality software. What was the cost in terms of lost time for the people sitting on that plane with me? What is 20 minutes worth? Time is the only non-renewable resource, so 20 minutes is actually priceless; its value is immeasurable.

Every time you relax your definition of quality you are failing as a software engineer. You are contributing to the continued and currently deserved poor perception that people have of the software industry. And ironically, you are also immediately failing the customer who is probably the one pushing you to do the relaxing. It is a basic, fundamental principle of software development that a relentless pursuit of delivering quality software is actually the shortest and quickest path to delivering a project.

Real world development methodologies

Posted in: Software Development Team Leadership

Came across this great post by Scott Berkun on The Berkun Blog:

http://www.scottberkun.com/blog/2007/asshole-driven-development/

I had a couple to add:

SCDD – Seagull Consultant Driven Development
This is where a company that already has a development team, but believes it is not working to its fullest potential, decides to bring in a consultant to set things straight. The consultant, whose compensation package is not tied to the delivery of the project, comes in, waves his hands a few times, draws a set of cascading boxes on a whiteboard, drops a few key acronyms, hands out some boilerplate process documents and then leaves. The original development team is then left to sort out the mess and to deliver a whole bunch of design documentation that the consultant told upper management would reduce risk and guarantee on-time delivery. The consultant has effectively behaved like a seagull – he flew in briefly, crapped all over everything and left just as quickly.

ADD – Analysis Driven Development or HGDD – Holy Grail Driven Development
This is where a project that is deemed “high priority” by a customer is never really defined and scoped out. The development team is brought in (way too early) to attempt, in some magical fashion, to divine what it is the customer needs and provide effort estimates. However, since the customer has no bounds set, they want to explore every possible variation of the feature set and also want to see every possible alternative development plan. The development team gets mired in Microsoft Project files and Gantt charts and never writes a single line of code. The customer continues to search for the development plan that gets them every conceivable feature, yet can be delivered on time, for almost no money, with no risk whatsoever. In the end the customer has spent so much money figuring out what they want, and how to get it optimally, that they have to settle for a scaled-back version to squeeze out a release before a deadline.

Are you hiring a hacker, a coder or an engineer?

Posted in: Software Development Team Leadership

There are many types of developers who work in the IT industry. When it comes time to hire a new developer it can be tough to figure out which kind of developer you are interviewing. When I do an interview I try to place the candidate in one of 3 categories – hacker, coder or engineer.

The Hacker

How to spot one:

  • Has been coding since they were 12, but has no notable credentials (no college degree, no professional certifications etc.)
  • Believes all problems can be solved with some kind of “script”
  • Is unable to answer questions related to emerging industry trends
  • Appears uninterested in training opportunities
  • Their answers will be 1 or 2 words in length and no matter how much you coax them, they will not expand on them
  • Has an unhealthy fascination with the game World of Warcraft, to the point it will probably affect their performance as an employee

When to hire one:
Hiring a hacker is always a gamble for a development role. Often they make better sys admins than actual developers, since they are quite comfortable working all-nighters in dark rooms (as long as there is a supply of energy drinks and pop-tarts). They can also thrive in a QA role where automation of repetitive tasks is a key skill (hackers also tend to like to break things).

When not to hire one:
A Hacker can become a rogue and distracting element in your development team. If you are looking for a team member that will have a focus on quality and meeting deadlines, a Hacker is not for you. If you want an employee that will work a regular schedule that doesn’t include them showing up at 11am looking like they have had no sleep, then a Hacker is not for you. These are the kinds of employees that you find out have been running a personal ecommerce site piggybacked on one of your corporate servers for the last 6 months.


The Coder

How to spot one:

  • Probably has a degree, but has no professional credentials (no certifications etc.)
  • Can answer common programming questions, maybe even understands Object Oriented principles, but probably struggles to answer big picture architecture questions
  • Probably cannot answer questions related to emerging industry trends
  • Doesn’t have a strong interest in getting training, attending conferences or expanding their skills in general
  • You will have to work to get them to expand on their answers beyond 1 or 2 words
  • Their references are probably positive
  • They like to play World of Warcraft at least a couple of nights a week

When to hire one:
A Coder is usually a good hire if you are simply looking to increase your development bandwidth. They will be able to contribute to a project in terms of pumping out code, but will need a solid design or requirements to follow closely. You will also need to make sure there is an Engineer close by to keep an eye on them.

When not to hire one:
If you are looking for a people leader or a technology leader, a Coder is not for you. Also don’t expect to be able to have a philosophical discussion about the state of the software industry with a Coder.


The Engineer

How to spot one:

  • Has a degree, maybe even an advanced degree, but is not a new graduate. To be an Engineer, some real world exposure is needed to shake off the bad stuff they learned in college and develop their own ideas.
  • Has a strong opinion about industry trends
  • Asks questions about training opportunities and budgets for attending industry conferences
  • Can name some tech authors they like or industry luminaries they agree with
  • Asks questions about what development processes and tools are used at your company
  • May not be able to answer every low level code or syntax question you throw at them (this is not a concern, Engineers are usually thinking at a higher level and know how to quickly find the documentation about the API they want to use if they need to)
  • Is reasonably articulate and their answers are usually more than 1 or 2 words in length
  • The interview will be much more of a conversation than an interrogation
  • Are aware of the game World of Warcraft, but don’t seem to have time to play it because of all of the tech books and journals they are trying to keep up with

When to hire one:
In general, the Engineer is the employee you are looking for. Just like in sports where you have franchise players, a good Engineer or two on your team can make or break a whole company. They are going to push the envelope in terms of the pace of development, introducing new tools, technologies and ideas and also the basic way in which the team operates. If you are working on projects that require novel solutions to be created, you will need an Engineer, as a Coder will not bring this kind of skill to the table. Also if you want to be able to increase the number of tasks you can delegate, the Engineer is for you.

When not to hire one:
Depending on the project you are working on, an Engineer can sometimes be a bit like having a sledgehammer in your hand when all you want to do is tap in a small nail. If you are working on small, simple projects, or the work is generally a matter of taking a set of pre-defined requirements and pumping out a solution that requires little thought, then an Engineer may become bored quickly and you will find them moving on soon. Also expect an Engineer to have a solid idea of how things should work in order to be effective. Do not expect to mold an Engineer – you should be hiring them to help evolve your team, not be persuading them to adopt the way you do things now. Additionally, an Engineer will be aware of the going pay rate for someone with their skill set, so do not expect to get them cheap.


Conclusion

Not all developers are created equal. When you have an open requisition for a new developer, be aware of what kind you are truly looking for. If you need an Engineer but only have the budget for a Coder, take a hard look at the longer-term implications for your team if you don’t get the Engineer. Once you are in the interview, make sure you have questions set up to help you quickly determine what kind of developer you are talking to. And never forget what an unusual amount of influence World of Warcraft will have on your hiring decisions!

Do I really need to check for bugs in your code?

Posted in: Enterprise Java, Software Development Best Practices

An API is made up of not only code, but also documentation. Imagine the Java JDK without the associated JavaDocs – would it still be as popular or useful? Of course not. In fact without the provided documentation and JavaDoc tool to generate more documentation, the Java language probably would not have been anywhere near as successful as it is.

So while you can enforce many constraints and cover many issues at a code level, there are some issues that are better dealt with at the documentation level. We see comments like “if you pass ‘X’ to this method, the result is undefined”. Perfectly legitimate to say this if that is true of the nature of the code – mathematical algorithms are often dealt with this way.

But now that I have gone to the effort to say this about my method, what happens if someone does pass ‘X’ to my method? Do I need to handle that situation? Should I just throw an exception? If I do nothing, and my code fails gracefully or crashes horribly, is it a bug in my code?

I propose that in fact as a client, it is necessary to comply with the documented API just as much as you comply with the coded API.

Here is an example. Suppose you are writing some code and you choose to make use of a class from the JDK core classes. You take a quick look at the JavaDocs to make sure you know what you are doing, and off you go. You are an agilist, so you also write a thorough set of unit tests and make an effort to cover edge cases and corner conditions. On one of these corner conditions you notice the test is failing. You look at your logging output and see that the method you are calling from the JDK is throwing a runtime exception. You check the parameter values being passed using your favorite debugger and see that the value makes sense to you. You then go back to the JavaDocs and notice that they mention certain values not being valid, and that an undefined runtime exception is thrown when those values are passed. At this point you have a choice: change your code to work with the API, or complain that the JDK has a bug in it. Of course you change your code – I am not saying there are no bugs in the JDK, but you should consider yourself unlucky if you actually stumble across one.
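To make the scenario concrete, here is a sketch using a real documented JDK contract – the JavaDoc for String.substring(int) states that an IndexOutOfBoundsException is thrown for out-of-range indexes. The helper method here is hypothetical; the point is that the client complies with the documented contract rather than calling the exception a JDK bug:

```java
public class SubstringContract {
    // String.substring(int) documents an IndexOutOfBoundsException
    // when beginIndex is negative or larger than the string length.
    // The client's job is to respect that contract.
    static String safeTail(String s, int beginIndex) {
        // Comply with the documented contract before calling the API
        if (beginIndex < 0 || beginIndex > s.length()) {
            return "";
        }
        return s.substring(beginIndex);
    }

    public static void main(String[] args) {
        System.out.println(safeTail("hello", 2));   // llo
        System.out.println(safeTail("hello", 99));  // empty string, not an exception
    }
}
```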

So if you are writing an API, go ahead and write code defensively, but don’t kill yourself coding against every possible asinine value someone might try to pass to you. If your code interface is designed nicely, you write good documentation and you write a comprehensive set of unit tests, then as far as I am concerned you are pretty much off the hook. Don’t waste time and lines of code defending against obvious bugs in your client’s code. Besides, if you have enough time to write code to defend against bugs in other people’s code as well as your own, you are probably on a dead-end project and are destined to be looking for work soon anyway.

I see this on a regular basis: engineers who get caught up in the whole defensive coding idea, or in test-driven development. Not that either concept is bad, but when the first 30 lines of every method are sanity checks for parameters, then you should maybe reconsider how you are spending your productive hours. And don’t get me started about when I see this in non-public methods – really, you can’t trust your own code to pass you the right values?
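A sketch of the balance I am suggesting (the method names and payment formula here are purely illustrative): validate once at the public boundary where untrusted callers enter, and use cheap assert statements to document internal invariants instead of 30 lines of sanity checks:

```java
public class ValidationBoundary {
    // Public API boundary: this is where untrusted input arrives,
    // so this is where the validation belongs.
    public static double monthlyPayment(double principal, double annualRate, int months) {
        if (principal <= 0 || annualRate < 0 || months <= 0) {
            throw new IllegalArgumentException(
                "principal and months must be positive, rate non-negative");
        }
        return computePayment(principal, annualRate, months);
    }

    // Non-public code trusts its callers. An assert documents the
    // invariant without cluttering the method with defensive checks.
    private static double computePayment(double principal, double annualRate, int months) {
        assert principal > 0 && months > 0 : "caller violated contract";
        if (annualRate == 0) {
            return principal / months;
        }
        double monthlyRate = annualRate / 12;
        return principal * monthlyRate / (1 - Math.pow(1 + monthlyRate, -months));
    }
}
```

Callers who pass garbage get a clear exception at the boundary; everything behind the boundary stays lean.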

The Problem with Maven Releases and Continuous Integration

Posted in: Enterprise Java

I mentioned in a meeting recently that I would like to find a better way to make releases with Maven by leveraging CruiseControl (or another CI tool) to save us work and as a result time.

The main problem currently is created because of the flexibility of using Maven’s SNAPSHOT dependency mechanism. Currently when CC does a build, it just creates a simple SNAPSHOT build and does not change the pom file at all. This is a good thing, as it pushes the latest code out to the shared repository and then when developers do their next build they automatically get the latest code, even if they are not working directly on that project.

The downside is that the builds created by CC are not formal releases and so are not ready to go to QA or production. When we do want to move something to QA, we have to stop and do a manual release, and then a second manual release for production. There are a few reasons for this:

  1. Currently when CC does a build there is no tag applied to our CVS repository to mark the build, which means the build is not repeatable. This is intentional for now, since there is really no need to repeat a SNAPSHOT build. It can easily be solved, though – we can have CC apply a tag – but that still leaves the next issue, which is more significant.
  2. Because CC is making a Maven SNAPSHOT build, it does not resolve the SNAPSHOT dependencies to other projects, so the build is still not repeatable, even if we had a CVS tag applied.
  3. The QA releases are tagged/named with suffixes like 1.0.0-rc## (for release candidate) or 1.0.0-beta## (where the #s are numbers indicating iterative releases to QA are being made) to clearly indicate they have not passed QA yet.
  4. Because the QA releases are tagged/named in such a way a whole second release needs to be done to clean up the tag/name to just something like 1.0.0

Here is what I think needs to happen instead:

  1. We continue to have CC create and publish SNAPSHOT builds to the shared repository, just as it does now, no changes.
  2. We add an extra step to the CC build that (on a successful build) tags CVS, then creates a branch based on that tag.
  3. It then does a checkout on the newly created branch.
  4. It then updates all of the pom.xml files where SNAPSHOT versions are used in dependencies to other projects/plugins etc. More on these updates in a second.
  5. It then checks those updated poms back into the branch.
  6. It then does a proper Maven release on that branch and publishes that to the shared repository. That is the end of life for that branch, it only exists to allow the release to be made, nobody would ever use this branch directly.

So, for regular development work, nothing changes, you would continue to use SNAPSHOT versions where appropriate. But, we now no longer need to do a special release when we go to QA, and in addition, if a QA build gets approved, it can go straight to production, we do not need to do a special production build anymore.

This all sounds good, but the tricky part is the “updates the poms” part that I mentioned above in step 4. I do not think this functionality exists in Maven today.

So here is how I think it would work:

  • We change to use build numbers that look like this “<major version>.<minor version>.<micro version>.<build number>” – the big difference is the “build number” part. Whenever there is a successful CC build, this number gets incremented. One of these releases with a unique build number will be given to QA. If QA approves that build number, that same build number goes to production.
  • Once a build goes to production, the major, minor and/or micro version numbers get incremented (just like we do now), and the build number automatically rolls back to zero.
  • When CC does a build, it examines the pom file of the project for dependencies to other projects/plugins that are SNAPSHOT versions. It then looks in the shared repository for the latest build number of those artifacts and changes the pom to depend on that specific build number instead of the SNAPSHOT. For example, if CC finds a dependency to version 1.0.1-SNAPSHOT of artifact-X, CC goes to the shared repository and finds that the highest build number for that release is 1.0.1.0064, so it changes the dependency in the pom to be that release number.
  • CC continues until there are no SNAPSHOT dependencies left.
  • Now CC checks in the updated pom (or poms plural if it is a multi-module project) into the branch.
  • Now CC just does a normal Maven release based on the branch and deploys the release to the shared repository. However, it needs to determine the correct build number for the main project being released. Basically this is the same algorithm used to determine the highest build numbers for all of the dependencies that we just finished. Except this time when we find the highest existing build number, we add 1 to it and use that for the release process.
  • The Maven release process finishes with the project in question being released with an incremented build number based on a branch in CVS. The trunk is unchanged, hence the normal SNAPSHOT releases continue as before.
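The build-number resolution described above could be sketched in Java along these lines (the class and method names are hypothetical – this is the algorithm, not an existing Maven plugin):

```java
import java.util.Arrays;
import java.util.List;

public class BuildNumberResolver {
    // Given a SNAPSHOT version like "1.0.1-SNAPSHOT" and the versions
    // already in the shared repository, find the highest matching build
    // number (the fourth segment of major.minor.micro.build), or -1.
    static int highestBuildNumber(String snapshotVersion, List<String> published) {
        String base = snapshotVersion.replace("-SNAPSHOT", "");
        int highest = -1;
        for (String v : published) {
            if (v.startsWith(base + ".")) {
                int build = Integer.parseInt(v.substring(base.length() + 1));
                highest = Math.max(highest, build);
            }
        }
        return highest;
    }

    // Resolve a SNAPSHOT dependency to the newest concrete build,
    // e.g. "1.0.1-SNAPSHOT" -> "1.0.1.0064".
    static String resolveDependency(String snapshotVersion, List<String> published) {
        int build = highestBuildNumber(snapshotVersion, published);
        if (build < 0) {
            return snapshotVersion; // nothing published yet; leave as-is
        }
        String base = snapshotVersion.replace("-SNAPSHOT", "");
        return String.format("%s.%04d", base, build);
    }

    // For the project being released: highest existing build number plus
    // one (or .0000 for the first build after a version bump).
    static String nextReleaseVersion(String snapshotVersion, List<String> published) {
        String base = snapshotVersion.replace("-SNAPSHOT", "");
        int build = highestBuildNumber(snapshotVersion, published) + 1;
        return String.format("%s.%04d", base, build);
    }

    public static void main(String[] args) {
        List<String> repo = Arrays.asList("1.0.1.0063", "1.0.1.0064", "1.0.0.0012");
        System.out.println(resolveDependency("1.0.1-SNAPSHOT", repo));  // 1.0.1.0064
        System.out.println(nextReleaseVersion("1.0.1-SNAPSHOT", repo)); // 1.0.1.0065
    }
}
```

In the CC build this logic would run over every SNAPSHOT dependency in the checked-out branch’s poms before the release is made.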

That’s it.

Now I see the Maven release plugin has had some updates in recent releases with features like the non-interactive build now being possible by passing build numbers on the command-line. But I do not see any mention of the type of functionality I outlined above.

Am I just missing it, or is there a project out there for us to sink our teeth into?

Better the devil you know

Posted in: Software Development Best Practices, Software Development Team Leadership

It seems nearly every day a new technology or pattern or paradigm bursts onto the scene. Lately we have seen things like AJAX and Ruby and other web-oriented technologies getting a lot of buzz, and plenty of people jumping on the bandwagon. And don’t get me started on SOA!

But what about the trusted tools we already know? Do we need to be forever pursuing mastery of these new technologies, applying them to each new project and retroactively applying them to existing (already functioning) projects? What value does that bring? It really only makes sense if the benefits of the new technology outweigh the costs of adoption – and those costs can be higher than you think.

To illustrate the point, let’s take a team of engineers working in an internal IT shop. They have worked to identify a core set of tools and technologies to specialize in. They have recruited for those skills, and their training dollars have been spent to improve those skills. In addition, they have wisely created coding style guides, documented the best practices they have chosen to follow, and, best of all, set up a continuous integration server that builds and tests their code as fast as they can commit it. All in all, they are quite comfortable in their chosen domain and are working to maintain their skills appropriately.

Now, for some reason, they are asked to consider technology “X”. This could happen for any number of reasons: maybe somebody writing the functional requirements was overzealous and mentioned it in those documents, or a customer has heard of “X” and simply wants their project developed with it, for no reason other than that they have heard of it and want to “contribute” to the process.

Assuming the team is reasonably mature, with a good mix of senior and junior talent and a grasp of fundamental engineering principles (quite rare, possibly?), what is the cost of using “X” on the new project instead of their existing tools?

As in most organizations that practice some form of waterfall process, they start by asking the engineering team for a detailed estimate of how long the new project will take to implement. Ignoring the fact that it is a gross error to attempt such an estimate at this stage of the project, the team dutifully works to create it. But now they are faced with the problem that they are not really sure how long things take with “X”. They are comfortable with their current set of tools and could probably come up with a good estimate given enough time, but now they are really going to struggle. So they scramble to find some documentation, but of course “X” is the new sexy technology with a lot of buzz and no solid documentation or established best practices. The team’s only real choice (assuming the estimate is needed quickly) is to SWAG it, and then likely pad the SWAG to allow for their unfamiliarity with “X”.

So the first issue we have is a wildly unreliable effort estimate due to a lack of experience and confidence with “X”.

Now it’s time for design. But of course the team has no inherent knowledge of “X”, so they get nothing for free in the design phase and have to start from scratch on every detail. The design phase progresses and the team makes some headway, but there are some nagging questions they simply cannot find answers for. The team identifies these issues and decides that the only real way to resolve them is to engage external help from the only people who know “X”: the vendor that just launched it. The vendor is of course more than happy to help, but since they are in great demand (being the only ones with any skills in this area), their rates are astronomical. The team reluctantly engages the external resources and embarks on a proof of concept to help finish the design phase.

So the second issue is a design phase that takes an incredible amount of time, most of which is really “learning” time rather than design time, along with a huge cost outlay for professional services.

Time to implement.

The team is trying to be as agile as possible within the bounds of their company’s tolerance, so they are quite committed to unit testing and continuous integration, but pair programming and other practices are still just “crazy talk”. Unfortunately, “X” doesn’t have a good toolkit for writing unit tests. The vendor says they are working on it, but it is not on the release calendar yet. Looks like unit testing is out. The team starts implementing, but their preferred IDE doesn’t support “X”, so now they have to run two large IDE applications on their (already underpowered) machines at the same time: one to create the “X” parts, and one for access to the rest of their projects. Additionally, the team has been committed to repeatable builds and strict release processes ever since some code slipped out to production without being tested or put into their version control system. Technology “X” builds fine within the customized IDE the vendor has provided, but there is no support for compiling from command-line build tools. So the team assigns resources to write the necessary build tool plugins so they can properly build, tag, and release the project.

There is a lot of additional overhead in the implementation phase when using a brand new technology. These overheads shrink with each subsequent project, but the cost on the first few is quite significant. The lack of integrated tools really does hamper the team, and the costs of the extended development time add up quickly.

Eventually the team finishes implementing, but since no automated unit or integration tests have been running, the quality is questionable. Nevertheless, the QA team comes along and wants to test the project. But once again, “X” is so new that there are no tools for the QA team either. And even if tools were available, the QA team would never have used them before, so there would be a training and ramp-up cost involved. So QA is going to be a manual process. As a result, some of the regression testing is not applied consistently, and a lot of bugs slip through the cracks and make it to production.

The cost here is of course an extended QA timeline. But possibly more costly is the low quality of the end product.

The project finally goes to production, but of course there is the immediate flurry of bugs and feature enhancement requests that are associated with a 1.0 release.

So the team starts again with the requirements gathering phase. But when it comes to design and implementation, it turns out that, in hindsight, the code wasn’t architected as well as it could have been, now that the team knows more about “X”. That makes the new features difficult to add. They hack them in, and so begins the downward spiral of building more and more features on top of a constantly degrading base. Technical debt increases exponentially.

This cycle of adding features and then fixing the bugs those features introduced continues for a while. But now two of the team members on the project are leaving for greener pastures (it’s surprising they lasted this long, really). So the HR person comes by your office and gets a detailed job description from you, which includes all of the skills from the existing technologies but now also includes “X”. The HR person, not knowing any better, says thank you and goes off to do the best keyword-matching they can.

A day or two later they come back looking frustrated. They are having trouble finding anyone with the mandatory skills you described. It turns out that the two skill sets you must now support are not common bedfellows, so recruiting is going to be an issue. You really have only one choice: recruit for one skill and spend the money and time to train for the other.

So here is an ongoing maintenance cost that will be higher than before. Recruiting for skills that don’t usually show up together can be a big issue. And if you are lucky enough to find people with both skills, they are likely to cost more precisely because of that uncommon combination.

So before you choose to adopt the latest sexy technology because all of the weekly e-newsletters you get are mentioning it, think about what it is truly going to cost your team and company. Is it worth it? I propose that in the majority of cases, it is not.