I’ve been running NCrunch for quite some time now and I’ve grown to like it. However, they’ve decided to charge $159 for a single named user license. This prompted me to have a look at the competition, specifically Mighty Moose.
The first difference I noticed after installation is that Mighty Moose gives no feedback on a per-line basis. The argument from the developer of MM (Greg Young) is that if you need that feedback your method is likely too large and complex anyway, and that you should refactor your methods so that you can keep a clear picture of all the code paths in them. Instead, a circle is shown next to the implementation indicating the risk of potential breakage if you change that specific method (it’ll show green, yellow or red). These are called risk margins.
The risk margin indicators make sense, and I’d probably come to the point where I’d use them to make better decisions about changes in code. I work as a consultant, so as Greg Young himself points out, we spend a lot of time in other people’s code. What doesn’t make sense to me, though, is that we don’t seem to get any feedback in the gutter, when looking at an implementation, as to whether or not the tests covering it are passing. When I make a test fail by changing something in the implementation, all risk indicators are still green. I find this slightly counterintuitive: I would expect a failing test to override the indicator and tell me that something has gone wrong. However, should you open your test class, you will get red crosses in the gutter next to the tests that are currently failing.
Thinking about this from a TDD point of view confuses me even more. Let’s imagine I want to create the actual implementation of an interface, but before I do that I write a bunch of tests that will fail. When switching to my new and empty class, I’m shown a risk indicator that tells me everything is OK, even though there are failing tests. As I go along and implement the features described by my tests, there’s no feedback on how I’m doing.
In order to get information on the state of your tests you need to bring up the ContinuousTests window (Ctrl-Shift-Y,G), which shows a list of the tests that are failing. A neat feature of that window is that you can navigate the list and bring up more information (I) or debug the highlighted test (D).
So, it seems to me that the point of running tests continuously (in order to give continuous feedback) is rather missed, given that the feedback Mighty Moose provides is only available by giving up screen real estate (for a permanent display of the ContinuousTests window) or by taking deliberate steps to check on your tests. The latter alternative is basically identical to invoking your test runner manually, the very thing we wanted to avoid by using a continuous test platform.
After all this, are there no nice things about Mighty Moose? Well, yes. There’s the static and dynamic analysis graph, and there’s the sequence diagram. But these things do not make a sleek continuous test runner in my eyes.
Seems that I’ll have to get the license for NCrunch.