Application Development

Expert Judgment and Passing the Sniff Test

Looking for two very informative posts on estimating? Check out one by Josh Nankivel of pmStudent and another by Glen Alleman of Herding Cats. Both discuss estimating techniques that work for them. I wanted to take a moment to add my two cents. Though I certainly believe estimating should be more science than art, I look at estimates from a different perspective. As a disclosure, I'm not the one doing the estimating on this project, so I'm not going to say I agree or disagree with any one technique. Depending on your situation, one estimating technique may provide more accurate results than another.

What I would like to add, from my perspective, is the need for expert judgment. If you are an expert in a given estimating technique and it gives you the results you and your customer(s) need, does that not validate it as an acceptable estimating choice?

If the estimating technique does not produce the desired results, wouldn't it fail the metaphorical sniff test?

Recently, I questioned a vendor's estimate by checking it against a different technique. I used a parametric estimate to see if the vendor's numbers would pass or fail my sniff test.

What exactly is a parametric estimate?

An estimating technique that uses a statistical relationship between historical data and other variables to calculate an estimate for activity parameters, such as scope, cost, budget, and duration. (Source: PMBOK, page 439)
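
To make that concrete, here's a minimal sketch of a parametric estimate in TypeScript. The productivity rate, labor cost, and unit count below are hypothetical placeholders for illustration, not figures from this project.

```typescript
// A sketch of a parametric estimate: scale a historical productivity
// rate by the size of the new work. All numbers below are hypothetical
// placeholders, not figures from this project.

interface HistoricalRate {
  hoursPerUnit: number; // e.g., hours per screen or per report
  costPerHour: number;  // fully loaded labor rate
}

function parametricEstimate(units: number, rate: HistoricalRate) {
  const hours = units * rate.hoursPerUnit;
  return { hours, cost: hours * rate.costPerHour };
}

// Example: 40 screens at a historical 25 hours/screen and $120/hour.
const estimate = parametricEstimate(40, { hoursPerUnit: 25, costPerHour: 120 });
console.log(`${estimate.hours} hours, $${estimate.cost}`); // 1000 hours, $120000
```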

So, why did the vendor's estimate not pass my sniff test? As part of a standard estimating practice, software vendors should include time for fixing bugs. Upon review of a recent status report, I noticed the vendor reporting half as many bugs discovered in the current build as had been estimated. When asked about this, the vendor was very excited to confirm that they had indeed found half as many defects as originally estimated, and predicted a cost savings of several hundred thousand dollars for the project. Going into the current build, I knew the historical standard deviation for defect counts and the variance I could reasonably expect. The reported number fell well below that range.
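
For illustration, here's a rough sketch of that kind of sniff test: compare the reported defect count against the historical baseline and flag anything outside the expected variance. The baseline, standard deviation, and reported count are all hypothetical numbers, not the vendor's actual figures.

```typescript
// A sketch of the sniff test described above: compare the reported
// defect count against the historical baseline and flag anything that
// falls outside the expected variance. All numbers are hypothetical.

function sniffTest(
  observed: number,
  baselineMean: number,
  stdDev: number,
  k = 2 // how many standard deviations count as "expected variance"
): string {
  if (observed < baselineMean - k * stdDev) {
    return "Suspicious: far fewer defects than the baseline predicts";
  }
  if (observed > baselineMean + k * stdDev) {
    return "Suspicious: far more defects than the baseline predicts";
  }
  return "Within expected variance";
}

// Example: a baseline of 120 defects per build with a standard
// deviation of 15, and the vendor reports 60 -- four sigma low.
console.log(sniffTest(60, 120, 15)); // "Suspicious: far fewer defects..."
```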

So, why were they discovering so few bugs? At first glance, I see two possible explanations. [1] Quality in development improved. [2] Quality in testing worsened. Either way, you get the same initial result: fewer defects identified.

We'll know the true answer once initial user acceptance testing begins.  If there were no baselines to compare the actuals to, I might not have given it a second thought.


The Pain Of IE6 And Application Development

Yesterday, a vendor advised my client that a newly requested feature doesn't work quite right with Internet Explorer (IE) 6. The feature works fine with all "modern" browsers, but IE6 is a major pain point.

You may ask yourself why we're even having this conversation. Well, because we're talking about the Federal Government. There are legacy applications out there that were built for IE6, and migrating them is not easy. There are some Agencies which ONLY use IE6, where the users don't have permissions to install a new browser.

So, what do you do? Do you embed a browser check in your code and advise users they need a different browser? Do you "fix" what would otherwise be a clean implementation by making it work with IE6?

I've seen issues with IE6 happen over and over again. Even with my own website(s), I pay attention to legacy Internet Explorer traffic. I'm happy to report my IE6 traffic is down to 11% of my overall traffic, from 21% a year ago. Still, I will continue to test against IE6 until it falls below 10%.

What lesson can we take away from this? Do your homework! The vendor should have done an analysis (or known the stakeholders' system requirements) before implementing the new feature. Catching it in QA is too late. A little due diligence or prototyping could have saved a lot of time and money. Knowing the current customer base, the vendor should have known this feature would not be accessible to all users and advised the customer. What would you do?
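
For the browser-check option mentioned above, a minimal sketch might look like the following. It relies on user-agent sniffing, which is fragile in general but pragmatic for ruling out a single legacy browser; the warning message is just a placeholder.

```typescript
// A sketch of a browser check: detect IE6 from the user-agent string
// and warn the user instead of failing silently. UA sniffing is
// fragile in general, but for a single legacy browser it is the
// pragmatic route. The warning text is a placeholder.

function isIE6(userAgent: string): boolean {
  // IE6 identifies itself as "MSIE 6." in its user-agent string.
  return /MSIE 6\./.test(userAgent);
}

if (typeof navigator !== "undefined" && isIE6(navigator.userAgent)) {
  alert("This feature requires a newer browser than Internet Explorer 6.");
}
```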

I would love to read your comments or feedback.  Please post them below.

Regards,

Derek