Tuesday, July 22, 2008

Microsoft Source Analyzer (StyleCop) Extensions Controversy – Part Two

Copyright 2008-2009, Paul Jackson, all rights reserved

A couple of weeks ago I got irritable because a blog posting on how to extend StyleCop for TFS TeamBuild had been removed “at Microsoft’s request”.  That posting got a bit of attention on dotNetKicks, which quite surprised me. 

I was really pleased to see today that the redacted post that originally got my thong in a bunch is now back, and there’s an MSDN blog entry trying to clear up the confusion around the whole issue.

Apparently the released name of the tool (Microsoft Source Analyzer – now Microsoft StyleCop) confused me a bit – in that, I assumed with a “Microsoft” prefix that it was an official offering with a project team and a budget.  That seems not to be the case; according to that clarification blog, it’s actually the work of a single developer at Microsoft ("on evenings and weekends") – which makes it all the more impressive, in my opinion.  (If Jason Allor ever leaves Microsoft and shows up at your company for an interview, hire him.  StyleCop would be a phenomenal, well-designed product if it had a team and a budget; for one guy in his spare time ... )

In addition, the StyleCop blog now announces an upcoming version which will include an SDK and documentation, making my own custom rules tutorial obsolete.  This is a good thing.

Another good thing is that I learned a new word from bharry's comments: "kerfufel".


Monday, July 7, 2008

Customer Requirements and Online Dating

Copyright 2008-2009, Paul Jackson, all rights reserved

A couple years ago, my wife and I decided to sign up for eHarmony.   You know, the Internet dating service with the patented, million-point matchmaking system?

Yes, we were already married.  No, we weren’t having trouble and needing to know what was out there.

It was kitten’s suggestion.  (I call my wife “kitten”, the reason for which I explain on my kayaking blog.)

Her theory was that we’d both sign up and see if eHarmony’s sophisticated algorithms would match us with each other.  “It’ll be fun,” she said.

Now, as soon as I heard the suggestion, I knew that there was no way this could end well.  This was an idea that, upon hearing, you just know is going to screw up your life somehow.  But kitten wanted to do it, so I agreed.  If I’d only known the true horror of what would come from this, I’d have hacked eHarmony and brought their servers to a grinding halt.

So kitten goes first, signs up, runs through all the questions and, at the end, out pops a list of hundreds, if not thousands, of compatible matches … which, for the low, low price of $49.95 a month, they’ll be happy to put her in contact with.

Then it’s my turn. 

I start filling out the questionnaire.  Question after question after question … it’s thorough as hell.  And I’m answering everything as honestly as I can.  I get to the end and ...

See, I'm not sure what would have been worse ... to get to the end and have more matches than her, or fewer.  And what would have happened if we weren't in each other's matches?  I don't know what would be worse, because none of that's what happened.  Instead, I got a message from eHarmony. 

It was a nice message.  Politely, even cautiously, worded – with the sort of tone people use with mad dogs or, possibly, wild-eyed men running through the streets with firearms.   I don't have the exact words, but essentially it said:

We're sorry.  Very, very sorry, actually, but there's nothing we can do for you.  We have no one in our entire database who we'd even remotely consider subjecting to a date with you.  In fact, we won't even try to look anymore.

Keep your money.  Put your credit card away.  We don't want it.  Even if you insist, we still won't take your money -- there's simply nothing we can do for you.  Please consider applying the $49.95 per month to a good therapist.

Okay, so I made that last bit about the therapist up, but the rest is the gist of what they said.

That's right ... an online dating service refused to take my money.

For years now, I've been hearing about it from kitten: "You're lucky, because there's no one else who'd have you ... but me, I have options.  Hundreds of options.  Thousands, maybe.  You?  You got no options."

I can't even really argue with her.

What does this have to do with software development?

Well, eHarmony did something hard ... they told the customer "no".

They could have taken my money and matched me up with whatever hopeless, unmatchable women they let into the service to accommodate guys like me.  After all, eHarmony didn't know I was already married and just wanting to see what the list looked like ... from their perspective, I was a real customer, and what I wanted was to join a dating service, right?  They could have even told me "outlook looks bleak, but we'll try" and taken my money.

So as a customer, I told them what I wanted: to join a dating service.  And they told me "no". 

They told me "no", because they looked beyond what I said I wanted to what would really meet my underlying requirement -- joining the dating service isn't the requirement, it's what the customer thinks is the requirement.  The real requirement is to meet someone compatible.  They looked at what the customer really needed and said "no, we can't do that".

If they went ahead and just did what the customer asked, let me join, despite my apparently unmatchable personality, I would have been a dissatisfied customer.  But their business-model is built on success-rates, not raw numbers, so they said, “no”. 

Telling the customer "no" is hard, whether you're a consultant or an employee whose customer is your business-user.  It's hard because that customer's paying the bills ... he's got the money ... and he's telling you exactly what he wants.  But what he wants may not get him what he needs ... and what he needs may not be achievable.

One of the first projects I ever worked on was a program for the court clerk in traffic court.  As the judge heard cases, the clerk was to use the program to capture the outcome and sentence, then it would generate all the right paperwork.  The customer said they needed to see every option on screen at the same time.  All the controls to capture verdict, adjudication, probation, fines, jail time, community service ... everything had to be on the screen at once.  The customer said that's what they wanted.

Visual Basic 1.0 -- we quickly ran out of GDI handles, resorting to putting graphics containers on the form and drawing the static text ourselves in order to free up the handles used by the labels.

We ran out of screen space -- this was when the typical resolution was 640x480, but $500 ATI graphics cards got us up to 1024x768 and solved that problem.

The users couldn't see the controls at that resolution -- too small.  $1500, 17" NEC monitors solved that one for us.

Clerk of Courts, remember -- it's a government contract so money is no object.

And the customer got what they asked for.  They hated it.  That user-interface sucked.

Then we went to write a similar application for criminal court.  How'd we get that contract when the traffic app sucked?  It's local politics; you don't want to know the things the owner of that company did to keep their business.

So we meet with the criminal clerks to find out their requirements, and what do you think they want?  One of the first things they say is, "everything has to be right there on the screen all the time."

This time we said, "no."  This time we asked, "why?"

Turns out, it wasn't about the user being able to see everything at once, it was about speed.  They had to get to things fast enough to keep up with the judge speaking.  Being able to get to things quickly enough was the real requirement, not having everything visible at once.  They weren’t giving us the requirement when they said “everything on screen at once”, they were giving us the technical solution.

The real requirement was one we could meet in other ways, and the criminal application was much better and more usable than the traffic app.

The first time around, we did what the customer told us they wanted -- we let the customer drive the technical solution, instead of having them articulate the problem and then presenting them with a solution.

I've always been glad that I encountered that early in my career, because I've seen the same sort of “requirement” over and over again and that lesson has always served as a reminder to look beyond the customer’s statements – and tell the customer that we aren’t going to do it that way.

As software developers, we should be prepared to analyze every requirement under the microscope of “why”, get to the real, underlying need that the customer has, and present them with a technical solution that meets their real needs.

We also need to manage their expectations – just like eHarmony does.

I recently joined a project and I find myself saying something quite often in response to the customer’s requirements: “Instantaneous is not a service-level agreement.”

Some of the customers have unrealistic expectations – as unrealistic, apparently, as my finding someone other than kitten who’ll put up with me.

If I don’t manage that expectation and tell them, “no, you will not get an ‘instantaneous’ response to your request, it will actually take some time”, then they’ll be dissatisfied with the result and the project will have failed. 

It’s hard and they don’t like it, but we have to remember to do it.

Today, a customer asked us to change the key that moves the focus from field-to-field in a Windows application from Tab to the semi-colon (;).  I’ll be telling them “no”.

Thursday, July 3, 2008

Come On, Microsoft, Isn’t This a Little Ridiculous?

Copyright 2008-2009, Paul Jackson, all rights reserved

Update July 22, 2008: There've been some clarifications and updates that make this a non-issue.

I started this blog last month with a series of posts on Microsoft Source Analyzer, or StyleCop, a new product released on Code Gallery that can best be described as fxCop for source code.  Where fxCop works on the compiled binary, StyleCop works on the source code itself.  Since we’re currently reviewing our coding standards here, I jumped on this product, because checking source code for naming, spacing, etc. standards is a royal pain.

The first thing I ran into was a conflict between our standards and those that ship with StyleCop.  We require that private field names start with an underscore.  So, assuming that StyleCop was extensible, as fxCop is, I did a quick Google for “customizing stylecop rules” and found nothing.  So I did a little digging (with Reflector) in the public interface of MSA and figured out how to do it.  Since the information wasn’t already out there, I figured I now had something to contribute to the community other than long-winded discourse on the need to learn how to do Manycore programming well.  And to let people know about it, I responded to posts in the StyleCop forum about custom rules.

In response, I caught some crap from the Microsoft team about violating the license agreement by using Reflector.  A few of my posts were deleted.  The community argued the whole license issue, because there are two license agreements involved – the Code Gallery one that pops up when you download and the StyleCop one that appears when you install.  I don’t think anyone had a reasonable expectation when installing the product that these would be different. 

In any case, the whole discussion about licensing disappeared from the forum and no more was said about it.

Then today I see this thread in the forum.

Apparently, someone wanted to integrate StyleCop with the Team Foundation Server build process.  So he did a little digging with Reflector, figured out how to do it, and blogged about it.  Then he responded to a forum query on how to do this.  Sound familiar?

Where his story differs from mine is that he apparently had more communication with Microsoft than I did.  His blog entry now reads only:

“This MSBuild task has been removed per request by Microsoft.”

Hey, Redmond!  What were you thinking?

You released this product to a community of people interested in how technology works … customization and integration with other tools are critical to our work process … how could you not know we’d try to figure it out and make it work for us?

Yeah, maybe these things will start breaking with a future version of StyleCop, but the community wants them now.  We’re not idiots, we know that if the API changes our stuff will break … that’s a risk we’re willing to take because we want the functionality now.  We want to use the product now because it’s useful to us, but not without these features.  Fine, you didn’t have time to provide them in this release … let us do it!  That’s what “community” is about.

Or are you seriously suggesting that our use of Reflector somehow endangers your intellectual property?  Gentlemen, if it’s truly that important to you, then I submit to you that .Net was the wrong environment to write it in. 

There are a lot of smart people out here who are willing to take your work, build on it the things you didn’t have time to, and make your products more useful and better.  By making things like custom rules and MSBuild tasks available now, StyleCop becomes more beneficial to more people … people who, without those things, would shrug and say, “Nice idea, but it doesn’t do what I need”, and delete the whole damn thing.

Ridiculous.


Monday, June 23, 2008

This Blog is 80% Complete

Copyright 2008-2009, Paul Jackson, all rights reserved

How many of you have sat in a weekly status meeting and heard the phrase “I’m 80% done with that task”?  Or 90%, 75%, 50%, etc.?

It’s at this point in a status meeting that my mind starts to wander to other things.  It might wander to the real work I need to do once I get out of the meeting or to the kayaking I plan to do over the weekend, but one way or another I’m not paying attention to these status reports any more.

Why?  Because, in my mind, the only valid status for a task is binary: complete or not complete.

If you have to report a percentage of an individual task in a weekly status, then your task wasn’t broken down enough to begin with.  Now a larger task made up of several steps might be 80% complete after a week of work, but that should be extrapolated from the completeness of its components.
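To make the roll-up concrete, here's a toy sketch (the types and names are mine, purely illustrative): the only status an individual task ever reports is a boolean, and any percentage exists only as a computation over those booleans.

using System;
using System.Linq;

// Toy illustration: an individual task is done or not done; a percentage
// is only meaningful as a roll-up computed from those binary statuses.
public class WorkItem
{
    public string Name;
    public bool Done;   // the only status a task gets to report
}

public static class StatusRollup
{
    public static int PercentComplete(WorkItem[] items)
    {
        if (items.Length == 0)
        {
            return 0;
        }

        return 100 * items.Count(item => item.Done) / items.Length;
    }
}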

What triggered this post was a status meeting where it was reported that a task was 75% complete – the line-item in Project was scheduled for forty (40) days.  It wasn't a rolled-up, summary task; it didn't represent another, more detailed technical schedule -- it was just forty days of some work.  And it wasn't alone -- there were plenty of twenty and thirty day tasks to keep it company.

God might have been able to accurately estimate Noah's deluge at 40 days of effort (and even He had to work nights to meet the deadline), but I think this is beyond the abilities of most software developers without breaking it down a bit.

In my opinion, forty hours is too large a task and should be broken down further -- just the act of thinking about the necessary steps will help drive a better understanding of the level of effort involved.  And that better understanding of the effort results in better, more accurate estimates.

Something I'm hearing more often in projects is a request for ROM (Rough Order of Magnitude) estimates -- as though changing the terminology from SWAG makes it somehow more acceptable or reliable.  Personally, I like the "magnitude" part -- like measuring an earthquake on the Richter Scale, ROMs are logarithmic: the likely error grows by orders of magnitude as the initial estimate increases.  E.g., the margin for error in a 40-day ROM is going to be orders of magnitude more destructive than that of a 1-day ROM ... approaching the catastrophic.

Also, like the ROM chip, once that estimate's written, it's read-only. That ROM's what you're stuck with and you'll be held to it.

I'm reminded of a place I used to work that did consulting for county government.  Everything the owner estimated was a ROM and every ROM was "two weeks". 

"Sure, we can do that for you.  Take about two weeks."

Time and materials later, it's amazing how many billable hours there could be in a two-week estimate.

The opposite extreme from the ROM is a schedule and status reporting that's so granular as to impact the ability to do work.  I worked with a project lead once who wanted tasks for his MS Project schedule measured in hours and status updates twice a day.  More time was spent providing updates (and justifying or explaining a deviation from the estimate) than was spent actually coding.  Luckily this didn't last long.

When I'm in charge of an effort, we break things down until the individual tasks are about a day's effort -- many less, some more, but the target's about a day.  Then the developers sign up for the tasks they're confident they can complete in each development iteration (typically a week or two).  Within the technical team we estimate the effort of the individual tasks, but from the Project Lead's perspective, they all have a duration of the iteration length.  Status at the project level is binary: done / not done.  I adapted this a bit from a process I found in Managing Projects with Visual Studio Team System:

If a task is 99% complete, it's reported as not done.  If all the tasks for an iteration aren't done, the iteration's not done.  Just like with a build -- if 99% of the projects build, the build's still broken.

This is what project leads should be concerned with.  Not "did every task take the estimated amount of time", but "is the project on track for completion".  Manageable iterations with manageable workloads accomplish this.

This blog is now 83% complete.


Monday, June 16, 2008

GiSTEQ PhotoTrackr Lite

Copyright 2008-2009, Paul Jackson, all rights reserved

I've been on vacation for the last week, so I've had very little to do with technology or programming.  But I did have a chance to play with a new toy that might be of interest to those who like playing with technology as much as I do.

It's a little GPS receiver that simply records your position every 15 seconds, and some software that matches the timestamp on your digital photos to the GPS trip record -- it then geotags your photos for services like Panoramio.
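The matching idea itself is simple enough to sketch (my own illustration, not GiSTEQ's actual code): for each photo, find the fix in the trip record that's closest in time to the photo's timestamp.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only -- find the GPS fix recorded closest in time
// to a photo's timestamp.
public class GpsFix
{
    public DateTime Time;
    public double Latitude;
    public double Longitude;
}

public static class GeoTagger
{
    public static GpsFix NearestFix(IEnumerable<GpsFix> track, DateTime photoTime)
    {
        // With a fix every 15 seconds, the nearest one is at most ~7.5 seconds off.
        return track.OrderBy(fix => Math.Abs((fix.Time - photoTime).Ticks)).First();
    }
}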

The full article is at my kayaking blog.

Sunday, June 8, 2008

PayPal Plug-In

Copyright 2008-2009, Paul Jackson, all rights reserved

Not .Net or programming related, but useful nonetheless.

I'm always a bit reluctant to enter credit card numbers when shopping at a new site, because I don't know how good their security is.  Once my credit card's past the SSL layer at their site, what becomes of it?

Is it being stuffed into some insecure database?

Is it being transmitted in clear text as part of a SOAP message throughout their SOA?

Is the order actually being processed by hand, so my credit card's being printed out and stored in a file somewhere?  Or shipped around via email?

PayPal has released a browser plug-in that eliminates these concerns for me. 

One of its features is generation of one-time or multi-use "Secure Cards" -- MasterCard numbers tied to your PayPal account.  This allows your PayPal account to be used securely at any site, even those that don't explicitly support PayPal.

Once installed, the plug-in adds an icon-menu to the browser's toolbar:

[screenshot]

"Generate Secure Card" prompts for PayPal login:

[screenshot]

The image-verification ensures you're sending the information to PayPal, and it can be combined with the PayPal Security Key:

[screenshot]

This seems like a very secure login.

You're then prompted to choose either a single- or multi-use card number to generate:

[screenshot]

And, presto, you have a secure card number to use for your purchase(s):

[screenshot]

And, no, the card number above isn't valid any more -- nice try.

Monday, June 2, 2008

Manycore Computing - Parallel Extension Library June CTP Available

Copyright 2008-2009, Paul Jackson, all rights reserved

Microsoft has released the June 2008 CTP of the Parallel Extensions for .Net 3.5 library. Learning this library, and the concepts behind well-designed, well-behaved multi-threaded applications, is becoming more and more critical to an application's success: the days of being able to count on faster and faster processors being available by the time our applications release are behind us. Instead, we may be faced with having our applications run on PCs with a greater number of slower cores. This Manycore Shift is upon us already and, as developers, we need to be prepared for it.

Our user-communities expect this of us. Years ago, a Microsoft Word user expected printing a large document to tie up the application for however long it took the printer to spew out the pages. Then print spooling was introduced and the users' expectations changed -- they came to expect the application to be returned to their control faster, because the spooler could send data to the printer in the background. Today, the user expects an instantaneous return of the application when they select Print -- no matter how large the document. They expect to be able to immediately edit, save and edit the document again while the application sends the document, in the state it was in when they selected Print, to the spooler in the background. Furthermore, they expect all those other things that used to be synchronous operations (spell check, grammar check, etc.) to now happen behind the scenes without slowing down their use of the application's main functionality. They even expect the application to correct typing errors in the background, while they move on to make new ones.

We, as developers, expect this of our tools, as well -- with Visual Studio Intellisense and syntax checking while we code. One of the first comments made about the recently released Microsoft Source Analyzer for C# was essentially: "Why doesn't it check in the background while I type and display blue-squigglies under what needs to be fixed?"

The expectations are reasonable and achievable, given the computing power available on the average desktop and the tools available, but how many of us writing line-of-business applications truly take the time to understand multi-threaded programming and build these features into our applications?

Threading used to be hard to do for the typical business-software developer. The steps to create and manage new threads were very different from anything they'd been exposed to before, and the libraries were arcane and poorly-documented. But all of that's changing: the threading functionality in .Net 2.0, and now libraries like the Parallel Extension Library, PLINQ and CAB, insulate the developer from the complexities of threading and make it incredibly simple to start tasks on new threads ... and therein lies a new danger.

A co-worker and I rather regularly send each other emails, the gist of which is: threading is the work of the Devil. Not because it's difficult to create a thread or start a process on it, but because the implications of concurrency in a large business application with multiple dependencies still have to be dealt with, no matter how easy it is to send work to the background. For the typical business software developer, who's spent an entire career in a single-threaded environment, it's hard. It requires a different conceptual mindset.

As Microsoft continues to work on libraries and extensions to make threading easier to implement, and I'm sure they will, I hope they also put as much effort into learning resources to help developers understand the implications of using these new tools; and I hope that we, as developers, put as much effort into learning the best-practices and fundamental concepts of parallel computing as we do into learning the mechanics of the tools.
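For the curious, the library's entry points look roughly like this -- a minimal sketch, assuming the June CTP's System.Threading.Parallel API (it's prerelease, so verify the namespace against your install):

using System;
using System.Threading;  // the June 2008 CTP places Parallel here

class ManycoreDemo
{
    static void Main()
    {
        // Sequential: one core does all the work, one item at a time.
        for (int i = 0; i < 10; i++)
        {
            DoWork(i);
        }

        // Parallel: the library partitions the iterations across cores.
        // The iterations must be independent of each other -- the library
        // makes the threading easy, not the concurrency implications.
        Parallel.For(0, 10, i => DoWork(i));
    }

    static void DoWork(int i)
    {
        Console.WriteLine("Working on item {0}", i);
    }
}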

Friday, May 30, 2008

Real-time Source Analysis soon?

Copyright 2008-2009, Paul Jackson, all rights reserved

Howard van Rooijen is apparently working on a combination of Microsoft Source Analysis and ReSharper to get real-time style checking.

Thursday, May 29, 2008

Testing Custom Source Analysis (StyleCop) Rules

Copyright 2008-2009, Paul Jackson, all rights reserved

Regardless of whether Microsoft expected it or not, it appears that the user-community is actively interested in writing custom rules for Microsoft Source Analysis for C# (StyleCop). Sergey Shishkin has been blogging about StyleCop and has worked out the details of test-driven development and unit testing for StyleCop custom rules on his blog.

Tuesday, May 27, 2008

Part III: Creating Custom Rules for Microsoft Source Analyzer

Copyright 2008-2009, Paul Jackson, all rights reserved

So now it's time to create a real, useful custom rule for StyleCop (Source Analyzer).

The rule I'm going to write is to accommodate the naming standards for private fields where I work: they must begin with an underscore, followed by a lower-case letter. This conflicts with two of the default StyleCop rules, so I'll be turning those off and using my custom rule instead.




The first step is to create a new project with references to the Microsoft.SourceAnalysis and Microsoft.SourceAnalysis.CSharp DLLs from the StyleCop install directory.
Next, I set up the project to allow debugging of the rules by following the instructions in Part IIa of this series.  Then I created a class and XML file as described in Part I:
<?xml version="1.0" encoding="utf-8" ?>
<SourceAnalyzer Name="Demo Custom Rules">
  <Description>
    Demonstration of a working, useful custom rule.
  </Description>
  <Rules>
    <RuleGroup Name="Naming Rules">
      <Rule Name="PrivateFieldNameMustStartWithUnderscoreFollowedByLowerCase" CheckId="DM1001">
        <Context>Private field names must start with an underscore followed by a lower-case letter.</Context>
        <Description>Private field names must start with an underscore character.</Description>
      </Rule>
    </RuleGroup>
  </Rules>
</SourceAnalyzer>

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SourceAnalysis.CSharp;
using Microsoft.SourceAnalysis;

namespace CustomRuleDemo
{
    [SourceAnalyzer(typeof(CsParser))]
    public class NamingRules : SourceAnalyzer
    {
        public override void AnalyzeDocument(CodeDocument document)
        {
            Param.RequireNotNull(document, "document");
            CsDocument document2 = (CsDocument)document;
            if ((document2.RootElement != null) && !document2.RootElement.Generated)
            {
                // the actual analysis will be filled in below
            }
        }
    }
}
One caveat I found concerns the correspondence between the class name and the XML file name.  By default, StyleCop will try to load an XML resource with the same FullName as the Type it found descending from SourceAnalyzer.  You can also specify an XML file resource in the SourceAnalyzer attribute: [SourceAnalyzer(typeof(CsParser), "SomeXmlFile.xml")]

If your custom rule fails to execute and doesn't show up in the Source Analysis Settings, check this correspondence.

Each source file is represented as nested CsElements in the CsDocument. So in a typical .cs file, each using directive would be an element under the CsDocument, then the namespace would be an element; within the namespace, each defined class would be a child element, etc.:
Document
    using
    using
    using
    namespace
        class
            field
            field
            method
        class
            field
            method
... and so on.
So in order to check the naming of all the fields, we'll need a method to recursively process the elements and all their children -- and the base SourceAnalyzer class has a Cancel property, so we'll want to stop processing if this becomes True:

private bool processElement(CsElement element)
{
    if (base.Cancel)
    {
        return false;
    }

    foreach (CsElement child in element.ChildElements)
    {
        if (!this.processElement(child))
        {
            return false;
        }
    }

    return true;
}

And we'll need to pass the RootElement of the CsDocument into that method to get things started:

public override void AnalyzeDocument(CodeDocument document)
{
    Param.RequireNotNull(document, "document");
    CsDocument document2 = (CsDocument)document;
    if ((document2.RootElement != null) && !document2.RootElement.Generated)
    {
        processElement(document2.RootElement);
    }
}
For this rule, we're interested in fields only, so we want to check the type of each element (CsElement.ElementType) and we want to ensure that we don't run checks on generated code (CsElement.Generated). Since we're going to be validating the naming of these fields, we're primarily interested in the Name property.

But the Name property of CsElement isn't what we're after.  That value contains the type of element, so it would have a value of "field _myFieldName".  What we really want is the CsElement.Declaration.Name property, which will have just the name of the field ("_myFieldName").  Finally, the check we're going to make is to ensure that the name starts with an underscore and that the second character of the name is lower-case:

if (element.ElementType == ElementType.Field &&
    !element.Generated)
{
    // A violation occurs if the name doesn't start with "_" or if the
    // character after the underscore isn't lower-case.  (This assumes
    // names of at least two characters -- a bare "_" would need a guard.)
    if (!element.Declaration.Name.StartsWith("_", StringComparison.Ordinal) ||
        element.Declaration.Name.Substring(1, 1).ToLower() != element.Declaration.Name.Substring(1, 1))
    {
        // report the violation here (see below)
    }
}

And add the violation to the base class's collection:

base.AddViolation(
    element,
    "PrivateFieldNameMustStartWithUnderscoreFollowedByLowerCase",
    new object[0]);
The name of the violation being passed to AddViolation should match the name in the XML file.
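Putting the pieces together, the whole analyzer looks something like this (my assembly of the snippets above; the download link at the end of this post has the actual source):

using System;
using Microsoft.SourceAnalysis;
using Microsoft.SourceAnalysis.CSharp;

namespace CustomRuleDemo
{
    [SourceAnalyzer(typeof(CsParser))]
    public class NamingRules : SourceAnalyzer
    {
        public override void AnalyzeDocument(CodeDocument document)
        {
            Param.RequireNotNull(document, "document");
            CsDocument document2 = (CsDocument)document;
            if ((document2.RootElement != null) && !document2.RootElement.Generated)
            {
                processElement(document2.RootElement);
            }
        }

        private bool processElement(CsElement element)
        {
            if (base.Cancel)
            {
                return false;
            }

            // Check fields; skip anything that's generated code.
            if (element.ElementType == ElementType.Field && !element.Generated)
            {
                string name = element.Declaration.Name;
                if (!name.StartsWith("_", StringComparison.Ordinal) ||
                    name.Substring(1, 1).ToLower() != name.Substring(1, 1))
                {
                    base.AddViolation(
                        element,
                        "PrivateFieldNameMustStartWithUnderscoreFollowedByLowerCase",
                        new object[0]);
                }
            }

            // Recurse into child elements (classes within namespaces, etc.).
            foreach (CsElement child in element.ChildElements)
            {
                if (!this.processElement(child))
                {
                    return false;
                }
            }

            return true;
        }
    }
}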

And that's it!  Press F5 to debug and you should get a fresh instance of Visual Studio.  Create a new project there with a class like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ClassLibrary3
{
    public class Class1
    {
        private int _temp1;
        private int _Temp2;
        private int temp3;
        private int Temp4;
    }
}
Running Source Analysis should then generate warnings for the three fields that violate the rule (_Temp2, temp3 and Temp4).
I hope this information was helpful.  You can download the full source of the custom rule described above.

Part IIa: Creating Custom Rules for Microsoft Source Analyzer

Copyright 2008-2009, Paul Jackson, all rights reserved

Before moving on to Part III and doing some real work with the custom rule, I want to describe how to set up Visual Studio so you can debug your rules.




First, you'll have to start an instance of Visual Studio to use for working on the custom rule project. This instance must be started without the DLLs for your custom rule being present in the Source Analyzer install directory. This is because you'll want Visual Studio to copy your newly built DLLs there as part of the build process -- if a copy of the DLLs is present when you start Visual Studio, then they'll be loaded by Source Analyzer and will be in use when you do a build.

Next, set your project's Debug properties to start an external program (a new instance of Visual Studio):


The path to Visual Studio depends on where you installed it, but should be something like: C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe


And set the Command Line Arguments to the solution file you plan to use for testing your rules -- in my case: "C:\vs\SourceAnalyzerSample\ClassLibrary3\ClassLibrary3.sln"


Finally set the project's Post Build Event to copy the DLLs and PDBs for your custom rule to the Source Analyzer install directory:


xcopy "$(TargetDir)SourceAnalyzerSample*.*" "C:\Program Files\Microsoft Source Analysis Tool for C#" /y

You could set your project's target directory to build directly into the Source Analyzer install directory, but I prefer to have it as a separate step.

Now when you press F5 to start debugging your custom rule project, a new instance of Visual Studio should start with your testing project loaded and your newly built rules in place.

Part II: Creating Custom Rules for Microsoft Source Analyzer

Copyright 2008-2009, Paul Jackson, all rights reserved

In Part I we set up a simple custom rule for the Microsoft Source Analyzer (StyleCop) that displays a rule violation for every source file in the project. Now in Part II, I'll explain the elements of the XML file and source code that went into that.



Starting with the elements of the XML file:
<?xml version="1.0" encoding="utf-8" ?>
<SourceAnalyzer Name="Custom Rules">
  <Description>
    Custom rules added to analyzer.
  </Description>
  <Rules>
    <RuleGroup Name="Custom Rules Group">
      <Rule Name="MyCustomRule" CheckId="CR0001">
        <Context>This is a custom rule.</Context>
        <Description>This is a custom rule description.</Description>
      </Rule>
    </RuleGroup>
  </Rules>
</SourceAnalyzer>

In Settings, StyleCop creates a hierarchy of rules based on the SourceAnalyzer Name-attribute, the RuleGroups and the Rules in the XML file.


When the XML above is loaded into settings by the Source Analyzer, the SourceAnalyzer element's Name attribute becomes a node under C#; each RuleGroup becomes a node under that; and each Rule is contained in its RuleGroup.

The CheckID attribute of a Rule must consist of two capital letters and four digits.

The Context element of a Rule is what displays in Visual Studio analysis results and can contain {0} string formatting placeholders (which we'll see in Part III).

The Description element of a Rule is what displays to the user in Source Analysis Settings when they're choosing which Rules to enforce.

You can use Reflector (one of the top five utilities a .Net developer must have, in my opinion) to examine the Rules included with StyleCop and the associated XML files.

Now on to the code:


namespace SourceAnalyzerSample
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.SourceAnalysis;
    using Microsoft.SourceAnalysis.CSharp;
    using System.Windows.Forms;

    [SourceAnalyzer(typeof(CsParser))]
    public class SampleAddInRules : SourceAnalyzer
    {
        public override void AnalyzeDocument(CodeDocument document)
        {
            Param.RequireNotNull(document, "document");
            CsDocument document2 = (CsDocument)document;
            if ((document2.RootElement != null) && !document2.RootElement.Generated)
            {
                AddViolation(document2.RootElement, "MyCustomRule", new object[0]);
            }
        }
    }
}

Our simple example from Part I flags a violation for every file it analyzes without actually checking anything -- this was done to demonstrate the minimum code necessary to create a rule and generate a violation.  I used Reflector on the included Rules to determine what the minimal code should look like.

First, we need the references and using directives for Microsoft.SourceAnalysis and Microsoft.SourceAnalysis.CSharp.

Then we create a class inherited from SourceAnalyzer and add a SourceAnalyzer attribute on the class, giving it a parameter of typeof(CsParser). StyleCop uses Reflection to find classes inherited from SourceAnalyzer to add to its rules. The CsParser type tells StyleCop that this class analyzes C# source files. Although I didn't find a VB parser or rules in my download, maybe someone at Microsoft is working on one?

We next need to override the AnalyzeDocument method from the SourceAnalyzer base-class. This is the entry point StyleCop will use to run our rule and pass it each source file in the project. Each source file is passed in as a parameter of type CodeDocument.

As part of the Microsoft.SourceAnalysis assembly, they've included a Param class that has a number of methods on it to validate parameters passed to methods.  We use this to require that the CodeDocument parameter passed isn't null.  As an aside, I've seen similar functionality in a class called Guard included in a lot of patterns & practices code -- it seems like there's a lot of duplicate code going into validating method parameters ... sounds like framework to me.
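That sort of guard helper is simple enough to sketch (the names here are mine -- the real Param and Guard classes have richer APIs):

using System;

// Minimal parameter-validation helper of the kind that keeps getting re-written.
public static class Guard
{
    public static void NotNull(object value, string parameterName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(parameterName);
        }
    }
}

Usage is then a one-liner at the top of every public method: Guard.NotNull(document, "document");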

Anyway, after ensuring that we were passed a CodeDocument, we want to cast it to a CsDocument. CodeDocument is a base-class and, presumably, there'll be VbDocument and FsDocument coming at some point in the future.

The next step is to check some things on the document.  In this case, we're checking to ensure that the document has a RootElement and that it isn't generated code.  The source code is treated as a hierarchy of elements containing other elements, which we'll see more of in Part III.  We want to avoid analyzing generated code, because it doesn't make sense to create a bunch of style warnings for code that, theoretically, a human will never have to read.  Of course, this presumes that the code generator followed the rules for marking its generated code as such.

Finally, we're going to create the violation.  The AddViolation method has a number of overloads.

In general, the method takes:

  • The CodeElement that violated the rule;
  • An Enum or String identifying the Rule that's been violated;
  • An array of Objects -- this array is used to fill {0} placeholders in a formatted string;

You also have the option of passing in a line number (an Int32 parameter in some of the overloads) identifying the line of code that caused the violation.
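For example, if a rule's Context were "Field '{0}' must start with an underscore.", you'd fill the placeholder through the object array, following the same pattern as the code above (the line-number overloads work the same way, with an extra Int32):

// Snippet only: fills the {0} placeholder in the rule's Context text.
base.AddViolation(
    element,                                      // the offending element
    "MyCustomRule",                               // rule name from the XML file
    new object[] { element.Declaration.Name });   // value for {0}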

And that's it for the code.

You can use Reflector against the included Rules to learn more about the different types of CodeElements and how to check specific things, which is what we'll be doing in Part III when we create a rule to ensure that private fields begin with an underscore, followed by a lower-case character and have no other underscores in the name.

Part I: Creating Custom Rules for Microsoft Source Analyzer

Copyright 2008-2009, Paul Jackson, all rights reserved

Last week (May 23, 2008), Microsoft released their Source Analyzer for Visual Studio 2008 (also known as StyleCop):

http://code.msdn.microsoft.com/sourceanalysis

http://blogs.msdn.com/sourceanalysis/


Since manual code reviews for style and formatting are a huge time waster, I jumped all over this for use in my shop. Unfortunately, my coworkers can't just accept doing things the Microsoft way; they have to be a bit different, so I started investigating what was involved in creating custom rules.




Part I of this tutorial will create a basic custom rule that loads in StyleCop and adds a rule violation message.  It will add the rule violation for every source file, every time.  In Part II, I'll explain what each piece does and then, in Part III, we'll change it to actually do something useful.

The first step is to download and install the Source Analyzer from: http://code.msdn.microsoft.com/sourceanalysis/Release/ProjectReleases.aspx

The default install will go to "C:\Program Files\Microsoft Source Analysis Tool for C#" and add context-menu options to the Solution Explorer in Visual Studio 2008:


Source Analyzer uses Reflection to examine every DLL in its install directory to find custom rules, so all we'll need to do is create a class library with the right classes and attributes and Source Analyzer will automatically load our new rules.
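If you're curious how that works, reflection-based plug-in discovery generally looks something like this sketch (illustrative only, not StyleCop's actual code -- the stand-in base class marks where Microsoft.SourceAnalysis's SourceAnalyzer would be):

using System;
using System.IO;
using System.Reflection;

// Stand-in for the real base class in Microsoft.SourceAnalysis.
public abstract class SourceAnalyzer { }

public static class RuleDiscovery
{
    public static void DiscoverRules(string installDir)
    {
        foreach (string dll in Directory.GetFiles(installDir, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(dll);
            foreach (Type type in assembly.GetTypes())
            {
                // A concrete class deriving from SourceAnalyzer is an analyzer add-in.
                if (!type.IsAbstract && type.IsSubclassOf(typeof(SourceAnalyzer)))
                {
                    Console.WriteLine("Found analyzer: {0}", type.FullName);
                    // The host would instantiate the type and load the matching
                    // embedded XML resource (type.FullName + ".xml") here.
                }
            }
        }
    }
}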

Create a new ClassLibrary project, then add references to the Microsoft.SourceAnalysis and Microsoft.SourceAnalysis.CSharp assemblies:

Then add a new class and an XML file with the same name.  Set the XML file's properties to Embedded Resource, and don't copy it to the output directory:

The XML file should contain the following:
<?xml version="1.0" encoding="utf-8" ?>
<SourceAnalyzer Name="Custom Rules">
  <Description>
    Custom rules added to analyzer.
  </Description>
  <Rules>
    <RuleGroup Name="Custom Rules Group">
      <Rule Name="MyCustomRule" CheckId="CR0001">
        <Context>This is a custom rule.</Context>
        <Description>This is a custom rule description.</Description>
      </Rule>
    </RuleGroup>
  </Rules>
</SourceAnalyzer>
And the class should look like this:


namespace SourceAnalyzerSample
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.SourceAnalysis;
    using Microsoft.SourceAnalysis.CSharp;
    using System.Windows.Forms;

    [SourceAnalyzer(typeof(CsParser))]
    public class SampleAddInRules : SourceAnalyzer
    {
        public override void AnalyzeDocument(CodeDocument document)
        {
            Param.RequireNotNull(document, "document");
            CsDocument document2 = (CsDocument)document;
            if ((document2.RootElement != null) && !document2.RootElement.Generated)
            {
                AddViolation(document2.RootElement, "MyCustomRule", new object[0]);
            }
        }
    }
}

(download the source code)

Build the project and copy its DLL to the Source Analysis install directory. Now, when you run Source Analysis on a project, you should get a warning about the custom rule for every file in the project:

In Part II, I'll explain the elements of the XML file and the code; then, in Part III, I'll demonstrate a useful example.

One of the differences between the standards where I work and Microsoft's is that we require private fields to start with an underscore ("_"), while the Microsoft rules provided by StyleCop require they start with a lower-case letter and contain no underscores. So I'll be turning off the two Microsoft rules (SA1306 and SA1309) and creating a custom rule to enforce our standards.


Wednesday, January 2, 2008

Customizing Microsoft StyleCop Rules

Copyright 2008-2009, Paul Jackson, all rights reserved

Part I: Creating Custom Rules for Microsoft StyleCop

Part II: Creating Custom Rules for Microsoft StyleCop

Part IIa: Creating Custom Rules for Microsoft StyleCop

Part III: Creating Custom Rules for Microsoft StyleCop

Testing Custom StyleCop Rules

Posts on Parallel and Multithreaded Programming

Copyright 2008-2009, Paul Jackson, all rights reserved

Book Review: Concurrent Programming on Windows

Parallel Programming in .Net 4.0 and VS2010: Part I – The Parallel Task Library (Parallel.For, Parallel.Foreach() and Invoke())

Parallel Programming in .Net 4.0 and VS2010: Part II – WinForms, Tasks and Service-Level Agreements

Older, Visual Studio 2010 CTP Articles:

Parallel Programming in .Net 4.0 and Visual Studio 2010: Part I – The Parallel Task Library

Manycore Computing (.Net Parallel Extensions)

Parallel Programming in .Net 4.0 and Visual Studio 2010: Part II – WinForms, Tasks and Service-Level Agreements

Multithreading the Visual Studio 2010 CTP

Enabling Parallel Debugging Tools in the Visual Studio 2010 CTP

Parallel Programming in .Net 4.0 and Visual Studio 2010: BlockingCollection<T>

Parallel Programming in .Net 4.0 and Visual Studio 2010: Future<T>

Parallel.For(…) – A Deeper Dive

TaskManager: The Range Rover of the .Net 4.0 Parallel Extensions

An Interesting Side-Effect of Concurrency: Removing Items from a Collection While Enumerating

A Bit About the Performance of Concurrent Collections in .Net 4.0

Tuesday, January 1, 2008

User Group Sessions, Classes and Consulting

Copyright 2008-2009, Paul Jackson, all rights reserved

User Group Sessions

I make the following sessions available to User Groups and Code Camps in Florida.  Each session is designed to run approximately one hour.  To schedule me to present at your meeting, contact me at pjackson@lovethedot.net.

These sessions are also available for presentation to corporate-groups.  Although there is no charge for the session, corporate groups may be asked to cover travel expenses based on the site’s distance from Orlando, FL.

Parallel Programming in .Net 4 – Part I – An Overview

An introduction to the Parallel Extensions coming in .Net 4.0.  This session skims the surface of what will be available to developers, covering the basics of Parallel.For, Parallel.ForEach and Parallel.Invoke, followed by a discussion of Tasks and some of the utility classes from the upcoming release.

Parallel Programming in .Net 4 – Part II – A Deeper Dive

Building on Part I, this session takes a much deeper dive into aspects of the .Net 4.0 Parallel Extensions, including the architecture and managing parallel tasks through the TaskManager.  In-depth coverage of Parallel.For, covering all of the options for controlling and working with the parallel loop.

Improving Developer Productivity with Guidance Automation

Note: While still interesting and useful, this session is a bit dated, given the extensibility of Visual Studio 2008 and the new extensibility options coming in Visual Studio 2010.

This session introduces the Guidance Automation Extensions and Guidance Automation Toolkit (GAX/GAT) from Microsoft patterns & practices.  Explore the concept of software factories and see how generating code can improve developer productivity.  Specifically this session covers the building of a guidance package that will implement the UI Threading Pattern (InvokeRequired, BeginInvoke, etc.) for all public methods on a UserControl.

Introduction to Dependency Injection (ObjectBuilder or Unity)

Explore the concepts of Dependency Injection and Inversion of Control with either the ObjectBuilder or Unity dependency injection libraries.  Learn what DI is, what the benefits are and why you should be using it.

The importance of usability and user-experience

This non-technical, user-focused discussion covers the concepts of user-interface design and usability testing in order to improve the overall user experience. 

Cloud-computing: Amazon S3

Leverage the online storage capacity of Amazon S3 to store data remotely.

Cloud-Computing: Amazon EC2

Learn how to manage a server in the Cloud – run a Windows server for $0.12 per CPU hour with virtually unlimited scalability.

Classes

Classes can run from one to five days, depending on customer customizations and needs. 

Each class can be customized to focus on the material your staff needs the most and eliminate concepts you’re unlikely to ever use.  This allows you to optimize the learning experience for your staff and maximize your training dollars.

The cost is $500 per day for up to 20 students, location and equipment not included.  This comes to $2500 for a full, five-day, instructor-led class, customized with the material of your choice.  If your company does not have the facilities and equipment to hold classes, third-parties can typically be found.

One-on-one training is also available for $250 per day.  These classes typically go much faster than group training because a single student receives all of the instructor’s attention. 

For information on customizing and scheduling a class, email pjackson@lovethedot.net.

The Cloud on $10 a Day: leveraging Amazon S3 and EC2 to create scalable, cost-effective solutions

  • Learn the advantages of Cloud Computing, including scalability and cost-savings
  • Examine the Amazon offerings and pricing models
  • Signing up for S3 and EC2
  • Learn the basics of S3 storage: buckets and paths
  • Putting, getting and removing files with S3
  • S3 Performance Considerations
  • Third-party Wrappers make S3 even easier
  • Learn the basics of EC2 computing: server instances and images
  • Create an EC2 server image
  • Administrate EC2 servers
  • Scale-on-demand: Learn how to dynamically create server instances to handle load
  • Databases on EC2 servers
  • S3 and EC2 together

Introduction to .Net and C#

  • Learn C#  language features such as types, variables, and control constructs
  • Use object-oriented features such as class, interface, protection, and inheritance
  • Use exceptions for error-handling
  • Examine the use of properties to implement the private data/public accessor pattern
  • Group related types  with namespaces
  • Delegates and events
  • Override methods
  • Deployment considerations
  • Learn the difference between inheritance and interfaces
  • Build a basic Windows Forms GUI

Programming the Composite UI Application Block (CAB)

  • Understand CAB
  • Understand Dependency Injection and ObjectBuilder
  • Understand the Smart Client Software Factory
  • CAB Modules
  • Events
  • Extending the UI
  • Commands
  • Modules
  • Workspaces
  • Model-View-Controller Pattern
  • Services

Coding to Design Patterns

  • Understand design patterns as a vocabulary
  • Understand the basics of Unified Modeling Language (UML)
  • Refactor code to use the most appropriate pattern 
  • Singleton Pattern
  • Factory Pattern
  • Decorator Pattern
  • Hole in the Middle Pattern
  • Observer Pattern
  • Strategy Pattern
  • Template Pattern
  • Command Pattern
  • State Pattern
  • Proxy Pattern
  • Adapter Pattern
  • Façade Pattern
  • Iterator Pattern
  • Visitor Pattern
  • Composite Pattern
  • Anti Patterns like the Magic Pushbutton

Introduction to Windows Presentation Foundation

  • Understand differences between XBAP, XAML and Silverlight
  • Build user interfaces with Visual Studio 2008
  • Create animations and special effects
  • Use themes to change the application’s appearance 
  • Deploy your application using ClickOnce
  • Understand the differences between a control template, a custom control, and a user control

Introduction to ASP.NET

  • Understand the ASP.NET compilation engine
  • Build pages using server-side controls, custom controls, and user controls
  • Validate user input using validation controls
  • Display and update data using declarative data binding
  • Employ security to authenticate and authorize users of your application
  • Use master pages and themes to create a consistent look and feel
  • Manage data for the users of your application
  • Use caching to improve performance
  • Add AJAX features to your pages

Usability and User Experience

  • Learn the basics of effective user-interface design
  • Understand the importance of standards and consistency
  • Understand the purpose of a Usability Study
  • Design and Implement a Sample Usability Study

Introduction to Visual Studio Team System

  • Use VSTS to enforce best practices of software development
  • Understand the VSTS tools and how and where they fit into the project lifecycle
  • Learn how VSTS can be configured to use different software methodologies
  • Use graphical modeling tools to create a system design and validate its deployment
  • Employ test-driven development to produce robust code
  • Harness the power of source control
  • Learn how to manage testing and track bugs
  • Set up a project portal to access all project documentation
  • Manage databases with Visual Studio Team Edition for Database Professionals
  • Schedule Builds with Team Build
  • Use Team Build continuous integration features

Consulting

I am available for short-term consulting contracts, typically focusing on specialized topics such as:

  • Cloud-computing solutions with Amazon S3 and EC2
  • User Interface Reviews and Design
  • Usability Studies
  • Application and System Architecture, Design and Code Reviews
  • Visual Studio Team System and Team Foundation Server Installation, Setup, Process and Procedures

For further information or to discuss a consulting contract, email pjackson@lovethedot.net.