Monday, June 23, 2008

This Blog is 80% Complete

Copyright 2008-2009, Paul Jackson, all rights reserved

How many of you have sat in a weekly status meeting and heard the phrase “I’m 80% done with that task”?  Or 90%, 75%, 50%, etc.?

It’s at this point in a status meeting that my mind starts to wander to other things.  It might wander to the real work I need to do once I get out of the meeting or to the kayaking I plan to do over the weekend, but one way or another I’m not paying attention to these status reports any more.

Why?  Because, in my mind, the only valid status for a task is binary: complete or not complete.

If you have to report a percentage of an individual task in a weekly status, then your task wasn't broken down enough to begin with.  A larger task made up of several steps might well be 80% complete after a week of work, but that figure should be rolled up from the completeness of its components.

What triggered this post was a status meeting where it was reported that a task was 75% complete – the line-item in Project was scheduled for forty (40) days.  It wasn't a rolled-up, summary task; it didn't represent another, more detailed technical schedule -- it was just forty days of some work.  And it wasn't alone -- there were plenty of twenty and thirty day tasks to keep it company.

God might have been able to accurately estimate Noah's deluge at 40 days of effort (and even He had to work nights to meet the deadline), but I think this is beyond the abilities of most software developers without breaking it down a bit.

In my opinion, even forty hours is too large for a single task and should be broken down further -- just the act of thinking through the necessary steps drives a better understanding of the level of effort involved, and that better understanding results in better, more accurate estimates.

Something I'm hearing more often in projects is a request for ROM (Rough Order of Magnitude) estimates -- as though changing the terminology from SWAG makes it somehow more acceptable or reliable.  Personally, I like the "magnitude" part -- like an earthquake on the Richter Scale, a ROM sits on a logarithmic scale: each step up in the estimate means an exponentially more destructive miss.  E.g., the margin for error in a 40-day ROM isn't forty times that of a 1-day ROM -- it's exponentially worse ... approaching the catastrophic.

Also, like the ROM chip, once that estimate's written, it's read-only. That ROM's what you're stuck with and you'll be held to it.

I'm reminded of a place I used to work that did consulting for county government.  Everything the owner estimated was a ROM and every ROM was "two weeks". 

"Sure, we can do that for you.  Take about two weeks."

Time and materials later, it's amazing how many billable hours there could be in a two-week estimate.

The opposite extreme from the ROM is scheduling and status reporting so granular that it gets in the way of doing the work.  I worked with a project lead once who wanted the tasks in his MS Project schedule measured in hours and status updates twice a day.  More time was spent providing updates (and justifying or explaining any deviation from the estimate) than was spent actually coding.  Luckily, this didn't last long.

When I'm in charge of an effort, we break things down until the individual tasks are about a day's effort -- many less, some more, but the target's about a day.  Then the developers sign up for the tasks they're confident they can complete in each development iteration (typically a week or two) -- within the technical team we estimate the effort of the individual tasks, but from the Project Lead's perspective, they all have a duration of the iteration length.  Status at the project level is binary: done / not done.  I adapted this a bit from a process I found in Managing Projects with Visual Studio Team System:

If a task is 99% complete, it's reported as not done.  If all the tasks for an iteration aren't done, the iteration's not done.  Just like with a build -- if 99% of the projects build, the build's still broken.

This is what project leads should be concerned with.  Not "did every task take the estimated amount of time", but "is the project on track for completion".  Manageable iterations with manageable workloads accomplish this.
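
To make the roll-up concrete, here's a quick sketch of the binary status idea.  The types are just for illustration -- they're not from MS Project or Team System -- but they show why a 99%-complete task contributes nothing to an iteration's status:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative types only -- not from MS Project or Team System.
class WorkTask
{
    public string Name { get; set; }
    public double EstimatedDays { get; set; } // used only within the technical team
    public bool IsDone { get; set; }          // the only status the project lead sees
}

class Iteration
{
    private readonly List<WorkTask> _tasks = new List<WorkTask>();
    public List<WorkTask> Tasks { get { return _tasks; } }

    // Binary roll-up: the iteration is done only when every task in it is done.
    // A 99%-complete task counts as not done, so the iteration is not done.
    public bool IsDone
    {
        get { return _tasks.Count > 0 && _tasks.All(t => t.IsDone); }
    }
}
```

No percentages bubble up anywhere -- the estimates stay inside the team, and the lead sees only done / not done.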

This blog is now 83% complete.


Monday, June 16, 2008

GiSTEQ PhotoTrackr Lite

Copyright 2008-2009, Paul Jackson, all rights reserved

I've been on vacation for the last week, so I've had very little to do with technology or programming.  But I did have a chance to play with a new toy that might be of interest to those who like playing with technology as much as I do.

It's a little GPS receiver that simply records your position every 15 seconds, plus software that matches the timestamps on your digital photos to the GPS trip record -- it then geotags your photos for services like Panoramio.
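
The matching idea itself is simple enough to sketch.  This isn't the PhotoTrackr's code -- just an illustration of the concept, assuming the trip record has already been parsed into timestamped points and the photo's timestamp has been read from its EXIF data:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative types -- not part of the GiSTEQ software.
class TrackPoint
{
    public DateTime Time { get; set; }
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

static class GeoTagger
{
    // Find the logged position closest in time to when the photo was taken.
    // With a fix recorded every 15 seconds, the nearest point is at most
    // a few seconds away from the moment the shutter clicked.
    public static TrackPoint FindNearest(IEnumerable<TrackPoint> track, DateTime photoTime)
    {
        return track
            .OrderBy(p => Math.Abs((p.Time - photoTime).TotalSeconds))
            .FirstOrDefault();
    }
}
```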

The full article is at my kayaking blog.

Sunday, June 8, 2008

PayPal Plug-In

Copyright 2008-2009, Paul Jackson, all rights reserved

Not .Net or programming related, but useful nonetheless.

I'm always a bit reluctant to enter credit card numbers when shopping at a new site, because I don't know how good their security is.  Once my credit card's past the SSL layer at their site, what becomes of it?

Is it being stuffed into some unsecured database?

Is it being transmitted in clear text as part of a SOAP message throughout their SOA?

Is the order actually being processed by hand, so my credit card's being printed out and stored in a file somewhere?  Or shipped around via email?

PayPal has released a browser plug-in that eliminates these concerns for me. 

One of its features is generation of one-time or multi-use "Secure Cards" -- MasterCard numbers tied to your PayPal account.  This allows your PayPal account to be used securely at any site, even those that don't explicitly support PayPal.

Once installed, the plug-in adds an icon-menu to the browser's toolbar:

[Screenshot: PayPal icon-menu in the browser toolbar]

"Generate Secure Card" prompts for PayPal login:

[Screenshot: PayPal login prompt]

The image verification confirms you're sending the information to PayPal and, combined with the PayPal Security Key:

[Screenshot: PayPal login with image verification and Security Key]

This seems like a very secure login.

You're then prompted to choose either a single- or multi-use card number to generate:

[Screenshot: single- or multi-use card number selection]

And, presto, you have a secure card number to use for your purchase(s):

[Screenshot: generated Secure Card number]

And, no, the card number above isn't valid any more -- nice try.

Monday, June 2, 2008

Manycore Computing - Parallel Extension Library June CTP Available

Copyright 2008-2009, Paul Jackson, all rights reserved

Microsoft has released the June 2008 CTP of the Parallel Extensions library for .Net 3.5.  Learning this library, and the concepts behind well-designed, well-behaved multi-threaded applications, is becoming more and more critical to an application's success: the days of being able to count on faster processors being available by the time our applications ship are behind us.  Instead, we may find our applications running on PCs with a greater number of slower cores.  This Manycore Shift is upon us already and, as developers, we need to be prepared for it.

Our user-communities expect this of us.  Years ago, a Microsoft Word user expected printing a large document to tie up the application for however long it took the printer to spew out the pages.  Then print spooling was introduced and the users' expectations changed -- they came to expect the application to be returned to their control faster, because the spooler could send data to the printer in the background.  Today, the user expects an instantaneous return of the application when they select Print -- no matter how large the document.  They expect to be able to immediately edit, save and edit the document again while the application sends the document, in the state it was in when they selected Print, to the spooler in the background.  Furthermore, they expect all those other things that used to be synchronous operations (spell check, grammar check, etc.) to happen behind the scenes without slowing down their use of the application's main functionality.  They even expect the application to correct typing errors in the background, while they move on to make new ones.

We, as developers, expect this of our tools as well -- with Visual Studio Intellisense and syntax checking while we code.  One of the first comments made about the recently released Microsoft Source Analyzer for C# was essentially: "Why doesn't it check in the background while I type and display blue squigglies under what needs to be fixed?"  The expectations are reasonable and achievable, given the computing power available on the average desktop and the tools available, but how many of us writing line-of-business applications truly take the time to understand multi-threaded programming and build these features into our applications?

Threading used to be hard for the typical business-software developer.  The steps to create and manage new threads were very different from anything they'd been exposed to before, and the libraries were arcane and poorly documented.  But all of that's changing: the threading functionality in .Net 2.0, and now libraries like the Parallel Extensions, PLINQ and CAB, insulate the developer from the complexities of threading and make it incredibly simple to start tasks on new threads ... and therein lies a new danger.

A co-worker and I regularly send each other emails, the gist of which is: threading is the work of the Devil.  Not because it's difficult to create a thread or start a process on it, but because the implications of concurrency in a large business application with multiple dependencies still have to be dealt with, no matter how easy it is to send work to the background.  For the typical business-software developer, who's spent an entire career in a single-threaded environment, it's hard.  It requires a different conceptual mindset.

As Microsoft continues to work on libraries and extensions to make threading easier to implement, and I'm sure they will, I hope they also put as much effort into learning resources that help developers understand the implications of using these new tools; and I hope that we, as developers, put as much effort into learning the best practices and fundamental concepts of parallel computing as we do into learning the mechanics of the tools.
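
To give a taste of what the library makes easy -- and of the trap that comes with it -- here's a small sketch.  It uses the Parallel.For and PLINQ patterns the extensions expose; the exact namespaces moved around between the June CTP and later releases, so treat the usings as approximate:

```csharp
using System;
using System.Linq;
using System.Threading;   // Parallel and Interlocked; the CTP put Parallel here,
                          // later releases moved it to System.Threading.Tasks.

class ParallelSketch
{
    static void Main()
    {
        int[] orders = Enumerable.Range(1, 1000).ToArray();

        // PLINQ: one extra method call turns a sequential query into a parallel one.
        int[] bigOrders = orders.AsParallel()
                                .Where(o => o % 7 == 0)
                                .ToArray();

        // The danger: Parallel.For makes it trivial to run the body on many threads,
        // but it does nothing about shared state.  This unsynchronized increment
        // will intermittently lose updates -- a classic race condition.
        int unsafeTotal = 0;
        Parallel.For(0, orders.Length, i => { unsafeTotal += orders[i]; });

        // One fix: make the shared update atomic.
        int safeTotal = 0;
        Parallel.For(0, orders.Length, i => Interlocked.Add(ref safeTotal, orders[i]));

        Console.WriteLine("{0} multiples of 7, safe total {1}", bigOrders.Length, safeTotal);
    }
}
```

The mechanics take one line; the hard part is noticing that unsafeTotal was never safe to begin with.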