Saturday, October 2, 2010

Pearl #1

This is something I had never envisioned...
throw null

can you guess what happens?
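If you'd rather check than guess, here is a minimal sketch (the class and method names are mine). The point is that the statement compiles, because the null literal is assignable to any reference type, including Throwable:

```java
// "throw null;" is legal at compile time: null is assignable to Throwable.
// At runtime, however, the JVM cannot throw "nothing", so it raises a
// NullPointerException instead.
public class ThrowNull {

    static String tryThrowNull() {
        try {
            throw null; // compiles fine; throws NullPointerException at runtime
        } catch (NullPointerException e) {
            return "NullPointerException";
        }
    }

    public static void main(String[] args) {
        System.out.println("throw null produced: " + tryThrowNull());
    }
}
```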

Friday, September 17, 2010

maven clover2 plugin adventures

Today at my day job I ran into a strange problem running the clover2 plugin. I had been tweaking the assembly descriptors so that:

  • the assembly plugin creates a zip file containing the project's jar and all the dependencies
  • the project's jar is zipped in the main "directory" of the zip file
  • the dependencies are zipped to the lib/ subdir of the zip file
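The layout described above can be expressed in an assembly descriptor roughly like this (an illustration of the intended result, not the descriptor from the original post, which has not survived):

```xml
<assembly>
  <id>dist</id>
  <formats>
    <format>zip</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <!-- the project's own jar goes to the root of the zip -->
  <files>
    <file>
      <source>${project.build.directory}/${project.build.finalName}.jar</source>
      <outputDirectory>/</outputDirectory>
    </file>
  </files>
  <!-- all dependencies go to lib/, the project artifact itself excluded -->
  <dependencySets>
    <dependencySet>
      <outputDirectory>lib</outputDirectory>
      <useProjectArtifact>false</useProjectArtifact>
    </dependencySet>
  </dependencySets>
</assembly>
```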

I have inherited an assembly descriptor that seemed to work well, until...


and then it turned out that the assembly descriptor worked... by accident (more likely because of some bug). The original assembly descriptor looked like this:


What happened is that in maven 3.0 all the jar files got zipped to the lib subdirectory. This was awkward. Naturally, I wanted to get to the bottom of this - or at least make the assembly work the same way in mvn 2.x and 3.0. So I came up with this descriptor:


Now this all seemed to work well, until...


which caused the build to fail. What? How come? The interesting thing is that the project was essentially empty - I was creating a new project and setting it up in maven, svn, hudson and whatnot. And by empty I mean - no code (yet). So I fired up the command line:

mvn clover2:instrument clover2:clover
[INFO] [clover2:instrumentInternal]
[WARNING] No Clover instrumentation done on source files as no matching sources files found
[INFO] [resources:resources]
[INFO] [compiler:compile]
[INFO] [resources:testResources]
[INFO] [compiler:testCompile]
[INFO] [surefire:test]
[INFO] [jar:jar]
[INFO] [assembly:single {execution: make-assembly}]
[INFO] Reading assembly descriptor: /src/main/assembly/assembly.xml
[INFO] Processing DependencySet (output=lib)
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Failed to create assembly: Error creating assembly archive distribution: You must set at least one file.

Wait, WTF? Why on earth is this running an assembly?! Oh, wait, it also runs all the phases of the default lifecycle before that - WHAT? I began thinking and fired it up again:

mvn package

That worked - huh? Now, this seemed VERY suspicious. So I googled for some documentation and figured out why clover2:instrument forks the build lifecycle - because it will install the instrumented artifacts! You might wonder why you would want to install instrumented artifacts. So do I - if you have an answer, let me know!

And of course because of that the above invocation is NOT the best way to test your coverage, because:
  • it forks the lifecycle, which can result in launching the same tests twice (once for just testing, and once for coverage, which does not make much sense)
  • the assembly plugin behaves differently in the forked lifecycle
  • the forked lifecycle executes up to install phase, which means that every time you run this, a snapshot version of the instrumented artifacts is installed into the local repo >8-O
All of this might seem obvious, but I figured out from this documentation page that the best way to run clover is

mvn clover2:setup test clover2:clover

This does not fork the lifecycle, the assembly plugin does not get launched and all goes well. I am still wondering about why the assembly plugin works in a different way in the forked lifecycle, though...
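For completeness, this is roughly what the plugin declaration would look like in the pom (the groupId/artifactId are the Atlassian coordinates of the clover2 plugin from that era; the licenseLocation path is a placeholder - adjust to wherever your Clover license actually lives):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>com.atlassian.maven.plugins</groupId>
      <artifactId>maven-clover2-plugin</artifactId>
      <!-- pick the version matching your Clover license -->
      <configuration>
        <licenseLocation>${user.home}/clover.license</licenseLocation>
      </configuration>
    </plugin>
  </plugins>
</build>
```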

Sunday, August 15, 2010

Can we measure *anything* in software teams?

Code Complete, a great book about software construction, suggests starting with at least some measurements:
  1. number of lines of code written
  2. number of defects detected
  3. number of work months
  4. total dollars
Then the measurements should be standardized between projects, and only after they are implemented across the company should they be taken into account in understanding the process of development. They should influence your thinking about the process and about what the measurements actually improve.

An example might be introducing the continuous integration game into your hudson server. This will for sure make people optimize for it. In one of the projects we have set it up so that it takes into account checkstyle and PMD violations, broken builds, successful builds, failing tests and so on. Even though the game is still missing a couple of measurements that could potentially be useful (such as code coverage: +10 points for each additional percentage of previously uncovered code, or something to that extent), it immediately had an influence on what ended up in the repository.

People tend to optimize for whatever measurement you are taking. This is a fact of life, and I think that Joel is somewhat right in his judgement that extrinsic motivation replaces intrinsic motivation. Is there any way that we can measure things and still steer clear of the "people optimize for the measurement" effect?

Story points - measure of complexity or effort?

Ever since I started working in scrum I have had the impression that story point estimation is just a rough estimation of the effort it would take to implement a given user story. Practices like keeping reference stories from prior sprints (which helps keep the numbers from growing towards infinity) and using fibonacci numbers to estimate stories were passed on to me by an experienced scrum master, and I took them as if they were the rules of scrum. But recently I have begun to wonder whether treating story points as a measure of effort actually makes sense.

Story points as effort
Using story points as a rough measure of effort has its proponents, and it makes a lot of sense if you are doing a couple of things:

  • you have a longer-term backlog definition
  • you are doing regular backlog grooming sessions, wherein you re-estimate the story points for the stories on the product backlog
  • you are using the story point estimates for release planning
Especially the last item is important, and it leads me to a question: what if you don't have a fixed release schedule/date? What if you deploy whatever you have developed as you go along? What if you don't have a longer-term backlog defined and the priorities change rapidly? Does it still make sense to use story points as a means of estimating effort? Or...

Story points as a measure of complexity and uncertainty
Can we make another use of the story point measure? Can we turn it into a measure of complexity and uncertainty about a given story? In this scenario we would be estimating the complexity of a story along with its uncertainty (for example: we will be using a new technology that is supposed to be quite easy, but no one on the team has ever used it thus far). Could we then base the commitment on either the task breakdown results or the story point estimation (whichever fills up first)? Or should the task breakdown and the story point estimation always go hand in hand?

Managerial view
I had not realized this until someone pointed it out to me recently, but whenever a manager sees any number that can be measured, his/her eyes start to shine. When there are numbers, the numbers can be compared! So we can compare different teams based on the numbers - or perhaps individual programmers as well! Wow! This is how story points, which are supposed to be an aid for the team in planning its work and commitment, become a reporting means for higher management to judge the team's productivity.