I’ve been running NCrunch for quite some time now and I’ve grown to like it. However, they’ve decided to charge $159 for a single named user license. This prompted me to have a look at the competition, specifically Mighty Moose.

The first difference I noticed after installation is that Mighty Moose gives no feedback on a per-line basis. The argument from the developer of MM (Greg Young) is that if you need that feedback your method is likely too large and complex anyway, and that you should refactor your methods so that you can keep a clear picture of all the code paths in them. What is shown next to the implementation is instead a circle indicating the risk of potential breakage if you change that specific method (it’ll show green, yellow or red). These are called risk margins.

The risk margin indicators make sense, and I’d probably come to the point where I’d use them to make better decisions about changes in code. I work as a consultant, so as Greg Young himself points out, we spend a lot of time in other people’s code. What doesn’t make sense to me, though, is that we don’t seem to get any feedback in the gutter when looking at an implementation as to whether the tests covering it are passing. When I make a test fail by changing something in the implementation, all risk indicators are still green. I find this slightly counterintuitive; I would expect a failing test to override the risk indicator and tell me that something has gone wrong. However, should you open your test class, you will get red crosses in the gutter next to the tests that are currently failing.

Thinking about this from a TDD point of view confuses me even more. Let’s imagine I want to create the actual implementation of an interface, but before I do that I write a bunch of tests that will fail. When switching to my new and empty class I’m shown a risk indicator that tells me that everything is OK, even though there are failing tests. As I go along and implement the features described by my tests, there’s no feedback on how I’m doing.
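
To make the scenario concrete, here is a minimal sketch of what I mean (the names are hypothetical and I’m assuming NUnit as the test framework):

    // A minimal sketch of the scenario (hypothetical names; assuming NUnit).
    using System;
    using NUnit.Framework;

    // The interface describing the behaviour I want to build.
    public interface IPriceCalculator
    {
        decimal Total(decimal unitPrice, int quantity);
    }

    // The new and empty class: next to this, the risk margin stays green
    // even though the test below is failing.
    public class PriceCalculator : IPriceCalculator
    {
        public decimal Total(decimal unitPrice, int quantity)
        {
            throw new NotImplementedException();
        }
    }

    // A test written up front; it fails until Total() is implemented,
    // but only the test class gets a red cross in its gutter.
    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void Total_MultipliesUnitPriceByQuantity()
        {
            IPriceCalculator calculator = new PriceCalculator();
            Assert.AreEqual(20m, calculator.Total(10m, 2));
        }
    }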

In order to get information on the state of your tests you need to bring up the ContinuousTests window (Ctrl-Shift-Y, G), which shows a list of the tests that are failing. Some neat features of that window are that you can navigate that list and bring up more information (I) or debug the highlighted test (D).

So, it seems to me that the point of running tests continuously (in order to give continuous feedback) is somewhat missed, given that the feedback Mighty Moose offers is only found by giving up screen real estate (for a permanent display of the ContinuousTests window) or by taking deliberate steps to check on your tests. The latter alternative is basically identical to invoking your test runner, which is exactly what we wanted to avoid by using a continuous test platform.

After all this, are there no nice things about Mighty Moose? Well, yes, there are: the static and dynamic analysis graph, and the sequence diagram. But these things do not make a sleek continuous test runner in my eyes.

Seems that I’ll have to get the license for NCrunch.

3 Responses to “Continuous testing – Mighty Moose vs. NCrunch”
  1. Brian Sayatovic says:

    Thank you for this comparison! I’m contemplating the same thing you are, and you’ve given me things to think about that I hadn’t previously considered. Have you provided your criticisms to the Mighty Moose team? I think you have some very valid points.

  2. Thanks for this post. I’m probably going to wind up buying NCrunch shortly, but I was thinking of what to do when it bombs out tomorrow (install MM or download NCrunch trial), and it was helpful to see a comparison from the perspective of another NCrunch user. I think I wouldn’t be happy about not having the dots.

  3. Greg Young says:

    I tried leaving a comment a while ago but it was disabled, I guess.

    Re: margins

    It is very easy for us to make the margin turn into an X. Maybe this is a good idea, maybe not. I would see it as useful if you also had the graph node turning red for easy navigation to the test(s) that failed. This is not generally how people navigate, though, and the rest of your post details your expectation not of a tool but of a tool that keeps the same interaction you’ve become accustomed to.

    “So, it seems to me that the point of running tests continuously (in order to give continuous feedback) is somewhat missed, given that the feedback Mighty Moose offers is only found by giving up screen real estate (for a permanent display of the ContinuousTests window) or by taking deliberate steps to check on your tests. The latter alternative is basically identical to invoking your test runner, which is exactly what we wanted to avoid by using a continuous test platform.”

    You have rather obviously only played with MM for a very, very short period of time; it works differently than NCrunch. Your issues seem to center around wanting them to work in identical fashions. There are four other distinct ways of viewing test failures without having the “feedback window” open. Check the status bar in VS: it tells you success/failure. There is growl/snarl support (to give messages that you control). There is a red/green overlay on top of VS (transparent, and it doesn’t steal focus). There are even lolcats (happy kitty/sad kitty)! The status bar is on by default. The others are turned on in config or require additional software to be installed (growl/snarl, which are fully configurable in terms of how you want the message to appear on your machine).

    As to the feedback window, have you found Ctrl+Shift+J? It’s not intended to be a “permanent” window; that key toggles its visibility. Most people run in full screen mode (or split screen) and use that key to toggle the window when they have a failure. From that window: (Enter -> go to test) (I -> view detailed test run information, e.g. output and the exception, which you can navigate through) (D -> debug the failing test, including setting a breakpoint). That window always has build/test errors in it. If you see red, that’s where you go. It’s not intended to always be open.

    There are some benefits to this method of navigation as well. I am always 2 keystrokes away from navigating to any failed test, 2 keystrokes away from being in a debugging session of a failing test, and 2 keystrokes away from viewing detailed run failures (from which I can navigate to anywhere associated on the failure chain).

    btw:

    “Thinking about this from a TDD point of view confuses me even more. Let’s imagine I want to create the actual implementation of an interface, but before I do that I write a bunch of tests that will fail. When switching to my new and empty class I’m shown a risk indicator that tells me that everything is OK, even though there are failing tests. As I go along and implement the features described by my tests, there’s no feedback on how I’m doing.”

    1) This is not TDD?
    2) In this case what you would do is run and keep seeing tests disappear off your failing list.
    3) Even if we were to put up an “X” in the margin: say you have three failing tests on this method and you just passed one. What is the visual indicator? It’s still an X, as you still have 2/3 failing tests.

    Cheers,

    Greg
