Thursday, November 11, 2010

Fixture-free gets a quick answer

Someone on Stack Overflow asked what the current directory is while a fixture is executing. I didn't remember, so I ran a quick test to find out:

|with|type|System.Environment|
|show|current directory|

No fixture, no IDE, no build. Just the answer I wanted.

Thursday, August 26, 2010

Fixture-free in the rain

I'll be presenting at Agile Vancouver in November on "Fixture-free Automated Acceptance Testing". This is a brief excerpt from a discussion that sparked the idea for the presentation:

Fixture-free is about exercising the System Under Test directly with a testing DSL like Fit or Slim (or whatever). It does require developer-tester collaboration, because the System Under Test needs to expose classes and methods that are (a) testable and (b) named with the ubiquitous language. (Otherwise it degenerates into trying to write code in a poor excuse for a programming language!) But once we have this, testers can do lots of stuff without additional developer help. For a (trivial) example: if we have a class Account with methods deposit, withdraw, and balance, then with fitSharp and no fixture code we can write:

|with|new|Account|

|deposit|100|

|withdraw|25|

|check|balance|75|

In reality, 100% fixture-free is unlikely, but we can drastically reduce the need for fixtures and for developers to write fixture code for every test.
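The System Under Test behind a table like the one above could be as small as this sketch, written here in Java for illustration (the class and method names come from the example; the implementation details are my own assumption):

```java
// Hypothetical System Under Test: no fixture code, just domain
// methods named with the ubiquitous language the table uses.
class Account {
    private int balance = 0;

    public void deposit(int amount) {
        balance += amount;
    }

    public void withdraw(int amount) {
        balance -= amount;
    }

    public int balance() {
        return balance;
    }
}
```

The table rows map straight onto these methods, which is the whole point: the tester writes tests in the domain language, and no translation layer is needed.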

Monday, August 9, 2010

What Am I Missing?

So Agile2010 is underway this week and due to the eternal constraints of time and money, I'm not there. I'm catching some of the buzz vicariously via Twitter and starting to miss ... well, what am I missing?

Based on past experience, I know I would be enjoying myself if I were there. I know the intense and intimate gathering of the AAFTT group would fill my brain with ideas that would percolate throughout the year. I know the energy and passion of @unclebobmartin would be contagious. I know the wisdom of a seemingly off-hand comment by @RonJeffries would reveal itself to me about a week later. I know a session by @jbrains or @GeePawHill would make me realize, yet again, that my understanding of agile development is incomplete and there's still another layer of mastery for me to pursue.

Oops, that's right! Those last two didn't make it on to the program, did they? And not being there gives me a chance to reflect once more on what is in the program. Of course, there's some truth in the observation that it's not really about the program, it's about the people and the chance to reconnect with old friends and meet new ones. But I think that's really only true if you're there on a speaker's comp. If it's costing you $3000 to be there, there better be some damn good content in the program!

Let's go back to basics. What is absolutely essential for a successful agile project? From my developer-centric POV, it's good developers. I don't believe that agile only works with rock-star developers. But it only works with developers who are dedicated to the concept of the software craft - that we are professionals and we work every day to hone our craft. This means working continuously to improve our understanding of SOLID, TDD and clean code. These aren't skills that you master in a 3-hour tutorial. These are skills you develop throughout an entire career.

What do I see dominating the Agile2010 program? Lots of good sessions on learning, communicating, coaching, creativity, adoption strategies. Sessions that I would probably come out of saying "interesting" and "thought-provoking". But ultimately, sessions that were easy to digest. Not sessions where I encountered concepts that are fundamentally difficult and require hard work to master. Not sessions where I struggled to write good tests and emerged with a determination to rework and discuss the examples over the coming weeks until I finally felt I understood them.

Perhaps I'm in the minority. @HackerChick tweeted about a tutorial on TDD where only a quarter of the attendees showed up with laptops, prepared to get their hands dirty. Perhaps Agile2010 isn't the conference for me. A conference where the technical track is only 1/15 of the program. And the technical track includes sessions on "The Butterfly Effect" and how to "Walk and Code". But I worry about another crop of agile converts, filled with all the soft skills and strategies they need to succeed, headed for failure because they don't know about the hard work and dedication needed to build that essential ingredient: the agile developer.

Monday, July 12, 2010

If it's hard to test, look at the design

I was reading recently about a clever technique to verify that a series of calls to a mock object occurred in the correct order. The calls were passing items from a sorted list to a mock view object, and the author was trying to test that the list was sorted correctly.

foreach (Widget widget in sortedWidgetList)
    view.Show(widget);

The fact that it was difficult to verify the order of calls should first make us wonder if there is a problem in the design, before we look for clever ways to write the test.

Depending on the order of calls indicates a temporal coupling between the classes under test. It means the target of the calls probably has to maintain some kind of state between the calls. These are things we'd like to avoid if we can.

In this case, the solution is simple: pass the sorted list to the view in a single call, rather than passing each item separately in a series of calls.

view.Show(sortedWidgetList);
Now our test is easier to write, and our code is cleaner. The test told us what to do.
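To make the idea concrete, here's a rough Java sketch (all the names here are mine, not the original author's) of how the single-call design lets even a hand-rolled mock verify one argument instead of a sequence of calls:

```java
import java.util.List;
import java.util.stream.Collectors;

// A hand-rolled mock view: it just records the one list it was shown.
class MockView {
    List<String> shown;

    void show(List<String> widgets) {
        shown = widgets;
    }
}

class Presenter {
    // Sort, then pass the whole list in a single call:
    // the test checks one argument, not an order of calls,
    // and the view needs no state between calls.
    void present(List<String> widgets, MockView view) {
        List<String> sorted =
            widgets.stream().sorted().collect(Collectors.toList());
        view.show(sorted);
    }
}
```

The test becomes a single equality assertion on the list the mock received, with no ordering machinery at all.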

Saturday, April 24, 2010

Isolate Your Microtests

I've been working on and off on the FitNesse code base for several months. As a Windows developer (maybe the only one contributing to FitNesse?), I run into a few issues that no one else seems to encounter. One of them has been intermittent failures when running the suite of over 2000 microtests (aka unit tests). Several of the tests create files in a test directory, and then, like good tests should, clean up by deleting the files and the directory. Occasionally, these tests fail with an exception on the delete. Today I finally realized what's been happening. My IDE is receiving notifications of file changes and is trying to read the test directory to update its directory tree display. Most of the time, the directory is gone before the IDE can read it, but sometimes it's reading the directory when the microtest tries to delete it and the delete fails. The solution was simple - I put the test directory name in the IDE's file ignore list. Green bar all the time!

Lesson one: know your IDE. But the bigger lesson, I think, is: isolate your microtests. Isolate them even from the file system. We all know that we need to isolate our microtests from databases and networks, and the file system is another external dependency that we should break as well. I always use a file system interface that my microtests can implement with an 'in-memory' file system or a mocking tool. Sometimes I've wondered if this was really necessary, but today I think it definitely is.
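The seam I'm describing can be as small as this Java sketch (the interface and class names are my own, not FitNesse's):

```java
import java.util.HashMap;
import java.util.Map;

// A seam between the code under test and the real file system.
interface FileSystem {
    void write(String path, String content);
    String read(String path);
    void delete(String path);
    boolean exists(String path);
}

// In-memory implementation for microtests: no IDE, virus scanner,
// or other process can interfere with these "files", and nothing
// is left behind to clean up.
class InMemoryFileSystem implements FileSystem {
    private final Map<String, String> files = new HashMap<>();

    public void write(String path, String content) { files.put(path, content); }
    public String read(String path) { return files.get(path); }
    public void delete(String path) { files.remove(path); }
    public boolean exists(String path) { return files.containsKey(path); }
}
```

Production code gets a real implementation backed by java.io; the microtests get this one.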

Update: I've found other potential causes, e.g., virus scanning, that might occasionally collide with microtests doing file I/O. So don't.

Wednesday, April 7, 2010

Tweaking fitSharp performance

It's convenient to use wildcards in the FitNesse path:
!path c:\mypath\*.dll
However, if a lot of unnecessary files are loaded by this path specification, performance can suffer. The biggest overhead that fitSharp incurs, on top of running your System Under Test, is searching assemblies looking for classes. So if you can limit the assemblies that it has to search, this can help:
!path c:\mypath\myfixtures.dll
Sometimes every little bit counts!

Tuesday, March 16, 2010

Fun with null objects

I'm doing some heavy lifting in Java for the first time in a few years, as I'm writing a wiki parser for FitNesse. I came across the good old Null Object pattern as I was rendering the HTML output from the parser:

String translation = currentToken.render(scanner);
Since I'm using a backtracking parser design, this operation may fail. So how do I communicate that to the caller? Of course, I can return null and check for that. But passing nulls around is something I try to avoid at any cost. Well, any reasonable cost. I could throw an exception, but this isn't really an unexpected situation or an illegal operation, it's part of normal operation. I could also add a method to check if I need to backtrack:

if (currentToken.canRender(scanner)) {
    String translation = currentToken.render(scanner);
    ...
}

However, canRender() could be very expensive as it might have to do all the work that render() does. Often, I wouldn't worry about this, but one of the goals of this parser is to be high-performance. I could cache the results of the work that canRender() does so that render() wouldn't have to duplicate it. But then my design is getting more stateful and more complicated, and my experience tells me to avoid that if possible.

So I decided to return an object that can tell the caller if the operation worked, and if so, what the result is. There's probably a Java idiom for doing this, but not knowing what that might be, I came up with my own:

Maybe<String> translation = currentToken.render(scanner);
if (!translation.isNothing()) {
    result.append(translation.getValue());
    ...
}

Here's my implementation of the Maybe class:

public class Maybe<T> {
    public static final Maybe<String> noString = new Maybe<String>("*nothing*", true);

    private final T value;
    private final boolean isNothing;

    public Maybe(T value) {
        this(value, false);
    }

    private Maybe(T value, boolean isNothing) {
        this.value = value;
        this.isNothing = isNothing;
    }

    public T getValue() {
        return value;
    }

    public boolean isNothing() {
        return isNothing;
    }
}

Update: Nat Pryce, whose Java skills are infinitely greater than mine, has posted his implementation.

Monday, March 1, 2010

Things I Ponder About

I've been working on a new parser for fitSharp and one of the things I've been kicking around is how to model a tree structure. This is what I'm currently looking at:

public interface Tree<T> {
    T Value { get; }
    IEnumerable<Tree<T>> Branches { get; }
}

This says that a tree node has a collection of tree nodes that are its branches.

The existing Fit code base takes a different approach, something like this:

public interface Tree<T> {
    T Value { get; }
    Tree<T> Child { get; }
    Tree<T> Sibling { get; }
}

This says that a tree node is related to two other nodes, a child and a sibling. (Fit calls them Parts and More). This model actually gives us more information, since we can get a node's sibling, which the first doesn't provide.

But when I started writing implementing classes, I ran into a tangle. I wanted a method to add a branch:

public void AddBranch(Tree<T> newBranch) { ... }
With the first model, this was simple. But with the second model, I need to be able to modify a branch to link in the new branch. So I need to know more about the branches than the basic Tree interface tells me. The additional information it provides comes with a cost.
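To show what "simple" means here, consider a Java analogue of the first model (names translated from the C# interface above; this is my own sketch, not fitSharp code). Adding a branch is a one-liner, because branches are just a collection:

```java
import java.util.ArrayList;
import java.util.List;

// Java analogue of the first Tree model: a node's branches are
// simply a collection of nodes.
class ListTree<T> {
    private final T value;
    private final List<ListTree<T>> branches = new ArrayList<>();

    ListTree(T value) {
        this.value = value;
    }

    T getValue() { return value; }
    Iterable<ListTree<T>> getBranches() { return branches; }

    // No sibling links to maintain: just append to the collection.
    void addBranch(ListTree<T> newBranch) {
        branches.add(newBranch);
    }
}
```

With the child/sibling model, the same operation would have to walk the sibling chain and mutate the last node's link, which is exactly the extra knowledge and coupling described above.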

Do I really want to pay for the extra benefit of knowing a node's sibling? I don't think so. One thing I find confusing in the Fit code base is that a tree node has two uses. Sometimes it's used as a single node (a table, row or cell) and sometimes as a collection (tables, rows or cells) that can be iterated over using the sibling link. If I mean a collection, I think I should use a collection, like the Branches in the first model.

These are the things I ponder.

Thursday, January 21, 2010

Troubleshooting with fitSharp and FitNesse

To use fitSharp with FitNesse, we set the FitNesse TEST_RUNNER variable.
!define TEST_RUNNER {c:\program files\fitSharp\Runner.exe}
For troubleshooting and debugging, there is an alternate test runner we can use.
!define TEST_RUNNER {c:\program files\fitSharp\RunnerW.exe}
This runner pops up a window before it starts executing tests.

At this point, we can attach our debugger to the RunnerW process. When we click 'Go', the tests are executed. If an unhandled exception occurs, we'll see it in the window.

When we're finished, we click 'Go' again to finish the test.


Monday, January 4, 2010

Accessing the System Under Test with fitSharp

FitNesse Java Slim has a new feature in the latest release for directly accessing methods on the system under test. fitSharp has had an equivalent feature for a while, courtesy of the old FitLibrary .NET code base. We simply implement the DomainAdapter interface on our Slim fixture.

using fitSharp.Machine.Model;

namespace fitSharp.Samples {
    public class SampleSlimFixture : DomainAdapter {
        private readonly SampleDomain systemUnderTest = new SampleDomain();

        public object SystemUnderTest {
            get { return systemUnderTest; }
        }

        public void SomeFixtureMethod() {...}
    }

    public class SampleDomain {
        public void SomeMethod() {...}
    }
}

Now we can invoke methods on the fixture and the SUT class from a Slim table.

|script|SampleSlimFixture|
|SomeFixtureMethod|
|SomeMethod|