Why Automated Testing is Important – Part 2

Posted in: Software Development Best Practices

In Part 1 of this series I described the characteristics that make up a good Automated Test. Here in Part 2 I will explore the benefits you will enjoy by creating those good tests, and why the time spent on making them is a no-brainer investment.
Continue reading »

Why Automated Testing is Important – Part 1

Posted in: Software Development Best Practices

The adoption of Automated Testing strategies and tools, both in Agile and traditional teams, has been patchy – some teams and communities have embraced it, but many organizations still perceive it as a burden that just slows down development. Those that see the writing and execution of tests as an additional, costly task separate from development have missed some of the main benefits of an expertly manicured test suite.
Continue reading »

Quote Of The Week – 2009/12/11

Posted in: Quotable Quotes

Programmers are responsible for software quality – quality in their own work, quality in the products that incorporate their work, and quality at the interfaces between components. Quality has never been and will never be tested in. The responsibility is both moral and professional.

Boris Beizer
(from Software Testing Techniques, Chapter 13)

Microsoft Hates Testing … Um, No Surprise There

Posted in: Software Development Best Practices

A colleague of mine forwarded an article to me during this last week, which he prefaced with the following statement …

guys, I’ll write it in all caps and bold:

I AM NOT PROMOTING OR IN AGREEMENT OF ANY OF THE POINTS THE ARTICLE MAKES.

… which raises the question: why did he send it not only to me, but to an entire team of people? I choose to believe it was because he is an enlightened soul who understands that the best way to reinforce your own beliefs is to read more of the opposing point of view, not more of the view you already hold. I am lucky to have a few of these souls working for me right now.
Continue reading »

Twitter Recap for Week Ending 2009-06-15

Posted in: Social Networking

Twitter Recap for Week Ending 2009-05-24

Posted in: Social Networking
  • Family Blog Update: Subscribe! http://bit.ly/2M6nX #
  • First beta of Adium 1.4 with Twitter support just released. http://bit.ly/XkhAo #
  • Netflix just sent me Australia #
  • New Hoodoo Gurus album rumored for September. Sweet! #
  • RT @alleyinsider: Palm Pre Launching June 6 For $199 $PALM $S by @fromedome http://bit.ly/AhzGK – should make @gorkeyv happy #
  • Just added myself to the http://wefollow.com twitter directory under: #software #agile #j #
  • Just felt another quake here in OC. Confirmed with other people in the office, but not seeing much on the USGS site. #
  • OK, USGS now saying 4.1 magnitude and looks like the exact same location as the one on Sunday night. http://bit.ly/8nyvP #
  • @JonathanGiles there has been wireless in the last few years, but it can be spotty and usually non-functional the first morning in reply to JonathanGiles #
  • Go #Lakers! Ok, even I didn’t believe that. #
  • Netflix just sent me The X-Files: I Want to Believe #
  • Don’t miss Sun’s going away party at #JavaOne. Make sure Oracle gets the right message. http://bit.ly/QcLX2 #
  • D’oh, My Name Is Earl just got cancelled. http://bit.ly/SxuL3 #
  • Testing out ping.fm #
  • 2nd Test of #Ping.fm #
  • Trying to figure out how to control all of my various content streams and get them to the right people without drowning anyone. #
  • Testing ping.fm from Blackberry #
  • #JCP Party at #JavaOne http://bit.ly/QJtMt #
  • #Hulu’s first live-streaming concert = Dave Matthews Band on June 1st. http://ping.fm/ElXz2 #
  • According to this site http://ping.fm/c2hB8 my “Power Animal” is a Honey Badger! Can this be true?? #
  • Just saw Star Trek. Total man-crush on James T. Kirk #
  • Internet connection is down … is suicide really painless as the opening credits of MASH taught me? #
  • Internet connection restored finally. Suicide averted. Verizon blows! #
  • Rolled 7 games this morning, averaged 151, not bad. And then breakfast in central park with wife and little one. God bless long weekends. #
  • @jazzlifejunkie What % of the latest Jonas Bros tour are you getting in exchange for Kalia? Just curious. in reply to jazzlifejunkie #

Enough is Enough

Posted in: Software Development Best Practices, Software Development Team Leadership

I am personally interested in processes (or removing processes), practices and tools that allow software teams to deliver on time and with high quality on a consistent basis. Now that sounds pretty benign and uninspired. It sounds like a common goal that everyone in the software industry would share. Am I naive to think that it is more than reasonable to expect bug-free software? I know many people think that at a mathematical level, debugging many of today’s complex systems is simply unrealistic. But didn’t humans design the compilers and the IDEs and everything else related to software? Don’t we by definition control the monster? Are we really so smart as to have invented something that is beyond our own abilities and comprehension already? Is this the start of the rise of the machines? I choose to think not.

I do believe that bug-free software is the goal of every software project, even if it isn’t explicitly stated. The problem is the commitment to achieving that goal. Quality is often intangible, at least on the high end (low quality is often very tangible). And intangible things are easy to ignore or sacrifice. Quality often comes at the end of a software project and when deadlines are tight, it is the first to be discounted.

Here is the story of what finally inspired me to start this blog.

I was sitting at San Francisco airport, waiting to fly home. I live in southern California, so it is a short flight, and American Airlines only uses little regional jets for the SFO to Orange County route. The flight was not full and the plane is small to start with, so passenger loading was done pretty quickly and we were set for an on-time takeoff. But we didn’t take off. We just sat there and sat there. Finally the stewardess announced over the intercom that the delay was due to a new automated system, introduced that week, which handled some of the paperwork required to get a green light for takeoff. They were having trouble getting it to work, so they were going back to the manual system so we could get underway.

The most disturbing part of this was the exact words used by the stewardess during the announcement. She said “we are using a new computerized automated system and as with all new software there are some bugs and glitches”. I found that statement to be very profound.

My assumption is that an airline stewardess represents a typical layman user of software – in other words she has no specialized knowledge of how to write software or what it is like to work in a software team. She represents a typical user that only sees software from the end-user perspective. And from her perspective software usually has bugs. Usually has bugs. Can that be true? Over all of the software written and all the smart people that write it, does software still usually have bugs? At least anecdotally in my experience that does appear to be the case.

So I say to that, ENOUGH IS ENOUGH.

Software is now part of a lot of people’s everyday lives. It is responsible for many tasks that, if completed incorrectly, could cost human lives. It is also responsible for many tasks that, if completed incorrectly, have a financial impact on companies and individuals. There is also a social impact from poor-quality software. What was the cost in terms of lost time for the people sitting on that plane with me? What is 20 minutes worth? Time is the only non-renewable resource, so 20 minutes is actually priceless; its value is immeasurable.

Every time you relax your definition of quality you are failing as a software engineer. You are contributing to the continued and currently deserved poor perception that people have of the software industry. And ironically, you are also immediately failing the customer who is probably the one pushing you to do the relaxing. It is a basic, fundamental principle of software development that a relentless pursuit of delivering quality software is actually the shortest and quickest path to delivering a project.

Better the devil you know

Posted in: Software Development Best Practices, Software Development Team Leadership

It seems nearly every day a new technology or pattern or paradigm bursts onto the scene. Lately we have seen things like AJAX and Ruby and other web-oriented technologies getting a lot of buzz, with plenty of people jumping on the bandwagon. And don’t get me started on SOA!

But what about the trusted tools we already know? Do we need to be forever pursuing mastery of these new technologies, applying them to each new project and retroactively applying them to existing (already functioning) projects? What value does that bring? It really only makes sense if the benefits of the new technology outweigh the costs of adoption – and those costs can be higher than you think.

To illustrate the point, let’s take a team of engineers working for an internal IT shop. They have worked to identify a core set of tools and technologies to specialize in. They have recruited for those skills, and their training dollars have been spent to improve those skills. In addition they have wisely created coding style guides, they have documented the best practices they have chosen to follow, and best of all they have a continuous integration server set up that builds and tests their code as fast as they can commit it. All in all, they are quite comfortable in their chosen domain and are working to maintain their skills appropriately.
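The kind of fast, automated check such a continuous integration server runs on every commit can be sketched in a few lines. Everything here is hypothetical – the function name `normalize_flight_code` and the behavior it tests are invented for illustration, not taken from any real project:

```python
import unittest


def normalize_flight_code(code: str) -> str:
    """Hypothetical production function: tidy up a user-entered flight code."""
    return code.strip().upper()


class NormalizeFlightCodeTest(unittest.TestCase):
    """The sort of small, fast unit test a CI server runs on every commit."""

    def test_strips_whitespace_and_uppercases(self):
        self.assertEqual(normalize_flight_code("  aa123 "), "AA123")

    def test_leaves_clean_input_unchanged(self):
        self.assertEqual(normalize_flight_code("AA123"), "AA123")
```

A CI server would execute a suite like this (for example via `python -m unittest`) after every commit and flag the build as broken the moment an assertion fails.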

Now, for some reason, they are asked to consider technology “X”. This could happen for a bunch of reasons – maybe somebody writing the functional requirements was overzealous and mentioned it in those documents, or a customer has heard of “X” and simply wants their project developed with it, for no particular reason except that they have heard of it and want to “contribute” to the process.

Assuming the team is reasonably mature, with a good mix of senior and junior talent and a grasp of fundamental engineering principles (possibly quite rare?), what is the cost of using “X” on the new project instead of their existing tools?

As in most organizations that practice some form of waterfall process, they start by asking the engineering team for a detailed estimate of how long the new project will take to implement. Ignoring the fact that attempting such an estimate at this stage of the project is a gross error, the team dutifully works to create it. But now they are faced with the problem that they are not really sure how long it takes to do things with “X”. They are comfortable with their current set of tools and could probably come up with a good estimate if given enough time, but now they are really going to struggle. So they scramble to find some documents, but of course “X” is the new sexy technology, with a lot of buzz but no solid documentation or defined best practices. The team’s only true choice (assuming the estimate is needed quickly) is to SWAG it, and then likely pad the SWAG to allow for the unfamiliarity of “X”.

So the first issue we have is a wildly unreliable effort estimate due to a lack of experience and confidence with “X”.

Now it’s time for design. But of course the team has no inherent knowledge of “X”, so they get nothing for free in the design phase and have to start from scratch on every detail. The design phase progresses and the team is able to make some headway, but there are some nagging questions that the team simply cannot find answers for. The team identifies these issues and decides that the only real way to resolve them is to engage some external help from the only people who know “X” – the vendor that just launched it. The vendor is of course more than happy to help, but since they are in great demand (because they are the only ones with any skills in this area), their rates are astronomical. The team reluctantly engages the external resources and embarks on a Proof Of Concept to help finish the design phase.

So the second issue we have is a design phase that takes an incredible amount of time, most of which can be considered more “learning” time than design time and we also have a huge cost outlay for professional services.

Time to implement.

The team is trying to be as agile as possible within the bounds of their company’s tolerance, so they are quite committed to unit testing and continuous integration, but pair programming and other practices are still just “crazy talk”. Unfortunately “X” doesn’t have a good toolkit for writing unit tests. The vendor says they are working on it, but it is not on the release calendar yet. Looks like unit testing is out. The team starts implementing, but their preferred IDE doesn’t support “X”, so now they have to run two large IDE applications on their (already underpowered) machines at the same time so they can create the “X” parts while still having access to the rest of their projects. Additionally, the team has been committed to repeatable builds and a strict release process ever since some code slipped out to production without being tested or put into their code versioning system. Technology “X” builds fine within the customized IDE the vendor has provided, but there is no support for compiling from other command-line based build tools. So the team assigns resources to write the necessary build tool plugins so that they can appropriately build, tag and release the project.

There is a lot of additional overhead in the implementation phase when using a brand new technology. Over time these overheads reduce on each subsequent project, but the cost on the first few is quite significant. The lack of integrated tools really does hamper the team and the costs incurred because of the extended development time will add up quickly.

Eventually the team finishes implementing, but since no automated unit or integration tests have been running, the quality is questionable. Nevertheless, the QA team comes along and wants to test the project. But once again, “X” is so new that there are no tools for the QA team either. And even if there were tools available, the QA team would never have used them before, so there would be a training and ramp-up cost involved. So QA is going to be a manual process. As a result, some of the regression testing is not applied consistently, and a lot of bugs slip through the cracks and make it to production.

The cost here is of course an extended QA time line. But possibly more costly is the low quality of the end product.

The project finally goes to production, but of course there is the immediate flurry of bugs and feature enhancement requests that are associated with a 1.0 release.

So the team starts again with the requirements gathering phase. But when it comes to the design and implementation phase, it turns out in hindsight that the code wasn’t architected as well as it could have been – now that the team knows more about “X” – which makes the new features difficult to add. They hack the new features in, and so begins the downward spiral of building more and more features on top of a constantly degrading base. Technical debt increases exponentially.

This cycle of adding features and then fixing the bugs the new features introduced continues for a while. But now two members of the project team are leaving for greener pastures (it’s surprising they lasted this long, really). So the HR person comes by your office and gets a detailed job description from you, which includes all of the skills from the other technologies but now also includes “X”. The HR person, not knowing any better, says thank you and goes off to do the best keyword-matching process they can.

A day or two later they come back looking frustrated. They are having trouble finding anyone with the mandatory skills you described. It turns out that the two skill sets you must now support are really not common bedfellows, so recruiting is going to be an issue. You really only have one choice – recruit for one skill and spend the money and time to train for the other.

So here is an ongoing maintenance cost that will be higher than before. Recruiting for skills that don’t usually show up together can be a big issue. If you are lucky enough to find people with the skills, they are likely to be higher priced just because they have that uncommon combination.

So before you choose to adopt the latest sexy technology because all of the weekly e-newsletters you get are mentioning it, think about what it is truly going to cost your team and company. Is it worth it? I propose that in the majority of cases, it is not.