Tapestry Training -- From The Source

Let me help you get your team up to speed in Tapestry ... fast. Visit howardlewisship.com for details on training, mentoring and support!

Monday, December 06, 2004

HiveMind and EasyMock

I've become a big fan of EasyMock. EasyMock is a way to create mock implementations of interfaces, for use in your unit tests. This fits in really well with the overall IoC concept, since when testing class A you can inject mock versions of interfaces B and C.

With EasyMock, you create a control for your interface. The control is an instance of MockControl. From the control, you can get the mock object itself. You then train the mock (I call it a "zombie"). As you invoke methods on the zombie, the sequence of operations is observed by the control.

When you invoke a non-void method, you tell the control what value to return. You can also have any method throw an exception.

You then switch the zombie into replay mode and plug it into the class you are testing. That code interacts with the zombie as if it was, well, whatever it should be.
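The record/replay model can be sketched with a plain JDK dynamic proxy. This is only a toy illustration of the idea, not EasyMock's actual implementation; the Greeter interface and all the names here are invented for the example:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy illustration of EasyMock's record/replay model: the "control"
// records expected calls made on the "zombie", then replays and
// verifies them.
public class ToyMockControl implements InvocationHandler
{
    private static class Expectation
    {
        final String methodName;
        final Object[] args;
        Object returnValue;

        Expectation(String methodName, Object[] args)
        {
            this.methodName = methodName;
            this.args = args == null ? new Object[0] : args;
        }
    }

    private final List<Expectation> expectations = new ArrayList<>();
    private boolean replaying = false;
    private int cursor = 0;

    @SuppressWarnings("unchecked")
    public <T> T getMock(Class<T> iface)
    {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, this);
    }

    // Trains the most recently recorded call to return a value.
    public void setReturnValue(Object value)
    {
        expectations.get(expectations.size() - 1).returnValue = value;
    }

    public void replay() { replaying = true; }

    // Fails if any expected call was never made.
    public void verify()
    {
        if (cursor != expectations.size())
            throw new IllegalStateException("Expected "
                    + expectations.size() + " calls, but saw " + cursor);
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args)
    {
        if (!replaying)
        {
            // Recording: remember the call; return a placeholder.
            expectations.add(new Expectation(method.getName(), args));
            return null;
        }

        // Replaying: the call must match the next expectation, in order.
        if (cursor >= expectations.size())
            throw new IllegalStateException("Unexpected call: " + method.getName());

        Expectation e = expectations.get(cursor++);

        if (!e.methodName.equals(method.getName())
                || !Arrays.equals(e.args, args == null ? new Object[0] : args))
            throw new IllegalStateException("Out-of-sequence call: " + method.getName());

        return e.returnValue;
    }

    public interface Greeter
    {
        String greet(String name);
    }

    public static void main(String[] args)
    {
        ToyMockControl control = new ToyMockControl();
        Greeter zombie = control.getMock(Greeter.class);

        // Train the zombie: invoke the method, then set the return value.
        zombie.greet("world");
        control.setReturnValue("Hello, world");

        control.replay();

        System.out.println(zombie.greet("world")); // prints "Hello, world"

        control.verify();
    }
}
```

The same control object plays both roles: while recording it remembers calls, and while replaying it checks each call against the recorded script.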

HiveMind includes a HiveMindTestCase base class that improves this somewhat. Using EasyMock out-of-the-box, you write a lot of test code that just manages the controls. HiveMindTestCase does this for you, which makes it more practical when you have half a dozen pairs of controls and zombies.

For example, here's part of the test for Tapestry's ExternalService:

    public void testService() throws Exception
    {
        MockControl cyclec = newControl(IRequestCycle.class);
        IRequestCycle cycle = (IRequestCycle) cyclec.getMock();

        IExternalPage page = (IExternalPage) newMock(IExternalPage.class);

        Object[] serviceParameters = new Object[0];

        cycle.getParameter(ServiceConstants.PAGE);
        cyclec.setReturnValue("ActivePage");

        cycle.getPage("ActivePage");
        cyclec.setReturnValue(page);

        LinkFactory lf = newLinkFactory(cycle, serviceParameters);

        cycle.setServiceParameters(serviceParameters);
        cycle.activate(page);
        page.activateExternalPage(serviceParameters, cycle);

        ResponseOutputStream ros = new ResponseOutputStream(null);

        ResponseRenderer rr = (ResponseRenderer) newMock(ResponseRenderer.class);

        rr.renderResponse(cycle, ros);

        replayControls();

        ExternalService es = new ExternalService();
        es.setLinkFactory(lf);
        es.setResponseRenderer(rr);

        es.service(cycle, ros);

        verifyControls();
    }
This code tests the main code path through this method:
    public void service(IRequestCycle cycle, ResponseOutputStream output) throws ServletException,
            IOException
    {
        String pageName = cycle.getParameter(ServiceConstants.PAGE);
        IPage rawPage = cycle.getPage(pageName);

        IExternalPage page = null;

        try
        {
            page = (IExternalPage) rawPage;
        }
        catch (ClassCastException ex)
        {
            throw new ApplicationRuntimeException(EngineMessages.pageNotCompatible(
                    rawPage,
                    IExternalPage.class), rawPage, null, ex);
        }

        Object[] parameters = _linkFactory.extractServiceParameters(cycle);

        cycle.setServiceParameters(parameters);

        cycle.activate(page);

        page.activateExternalPage(parameters, cycle);

        _responseRenderer.renderResponse(cycle, output);
    }

Back in the test code, the newControl() method creates a new MockControl. The newMock() method creates a control and returns its mock ... in our unit test, we create an IExternalPage mock instance to stand in for a page named "ActivePage" and ensure that it is passed to IRequestCycle.activate().

The replayControls() and verifyControls() methods come from HiveMindTestCase; they invoke replay() and verify() on each control created by newControl() or newMock().

While replaying, the zombie and the control work together to ensure that each method is invoked in sequence and with the correct parameters.

The verifyControls() at the end is very important; an incorrect or out-of-sequence method call will be picked up in-line (an exception is thrown), but an omitted method call can only be discovered by verifying the mock.

This technique takes a bit of getting used to; the "training" stage can easily throw you the first time through. Alternatives to EasyMock, such as jMock, generally employ a different model, where you train the control (not the zombie), passing in method names and identifying arguments or other expectations in various ways. I suspect jMock is ultimately more powerful and expressive, but I find EasyMock more effective. Your mileage may vary.

Regardless of which framework you choose, this technique works really well! In fact, it is often useful to use this technique to define the interface; this is best explained by example. I was recently recoding how Tapestry performs runtime bytecode enhancement of component classes. The old code was monolithic, one big class. I broke that up into a bunch of "workers". Each worker was passed an object, an EnhancementOperation, that was a facade around a lot of runtime bytecode machinery. EnhancementOperation has methods such as addInterface() and addMethod() that are directly related to code generation, and a number of other methods, such as claimProperty(), that are more about organizing and verifying the whole process.

What's fun is that I coded and tested each worker, extending the EnhancementOperation interface as needed. Once all the workers were tested, I wrote the implementation of EnhancementOperation, and tested that. The final bit was an integration test to verify that everything was wired together properly.

Now, if I had sat down (as I might have done in the past) and tried to figure out all of, or even most of, the EnhancementOperation interface first, I doubt I would have done as good a job. Further, having EnhancementOperation be an un-implemented interface meant that it was exceptionally fluid ... I could change the interface to my heart's content, without having to keep an implementation (and that implementation's tests) in sync.
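Here's a sketch of that workflow; every name in it is invented for illustration (this is not Tapestry's real enhancement API). A small, focused worker is written and tested against a hand-rolled stand-in for the EnhancementOperation-style interface, before any real implementation of that interface exists:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the interface grows only as workers demand
// methods from it; the real implementation comes last.
public class WorkerSketch
{
    // A stand-in for an EnhancementOperation-style facade interface.
    public interface Operation
    {
        void addInterface(Class<?> iface);
        void claimProperty(String propertyName);
    }

    // One small worker in the (hypothetical) enhancement pipeline.
    public static class SerializableWorker
    {
        public void performEnhancement(Operation op)
        {
            op.addInterface(java.io.Serializable.class);
        }
    }

    // A trivial recording fake standing in for the not-yet-written
    // real implementation of the interface.
    public static class RecordingOperation implements Operation
    {
        public final List<String> calls = new ArrayList<>();

        public void addInterface(Class<?> iface)
        {
            calls.add("addInterface:" + iface.getName());
        }

        public void claimProperty(String propertyName)
        {
            calls.add("claimProperty:" + propertyName);
        }
    }

    public static void main(String[] args)
    {
        RecordingOperation op = new RecordingOperation();
        new SerializableWorker().performEnhancement(op);

        // The worker's expectations of the interface are now pinned
        // down by the recorded calls.
        System.out.println(op.calls);
    }
}
```

Each worker test documents exactly what the eventual implementation must support, so the interface stays fluid until the last moment.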

In sum, EnhancementOperation was defined through a process of exploration; the process of building my workers drove the contents of the interface and the requirements of the implementation. And all of this was possible because of the flexibility afforded by EasyMock.

Out of the box, EasyMock only supports interfaces; behind the scenes, it uses JDK proxies for the zombies. There's an extension that uses bytecode enhancement to allow arbitrary classes to be instrumented as mock object zombies.

This is cool as I rework the Tapestry test suite; often components collaborate and there is no interface; I still want to be able to mock up the other component, so I need to use the EasyMock enhancements.
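The interface-only limitation comes straight from JDK dynamic proxies themselves, as a quick sketch shows (the Renderable and Component names are invented for the example):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Demonstrates the limitation described above: JDK dynamic proxies
// accept only interfaces, which is why mocking a concrete class
// requires bytecode enhancement instead.
public class ProxyLimitation
{
    interface Renderable { void render(); }

    static class Component { void render() { } }

    public static void main(String[] args)
    {
        InvocationHandler handler = (proxy, method, a) -> null;

        // Proxying an interface works fine.
        Renderable ok = (Renderable) Proxy.newProxyInstance(
                Renderable.class.getClassLoader(),
                new Class<?>[] { Renderable.class }, handler);
        ok.render();

        // Proxying a concrete class fails with IllegalArgumentException.
        try
        {
            Proxy.newProxyInstance(
                    Component.class.getClassLoader(),
                    new Class<?>[] { Component.class }, handler);
        }
        catch (IllegalArgumentException expected)
        {
            System.out.println("classes cannot be proxied: "
                    + expected.getMessage());
        }
    }
}
```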

Ultimately, I extended HiveMindTestCase's newControl() method to allow either an interface or a class to be specified, and to do the right thing for each. Here's an example, where I'm testing how a Block and RenderBlock component work together:

    public void testNonNullBlock()
    {
        Creator c = new Creator();

        MockControl bc = newControl(Block.class);
        Block b = (Block) bc.getMock();

        RenderBlock rb = (RenderBlock) c.newInstance(RenderBlock.class, new Object[]
        { "block", b });

        IMarkupWriter writer = newWriter();
        IRequestCycle cycle = newRequestCycle();

        b.getInserter();
        bc.setReturnValue(null);

        b.setInserter(rb);

        b.renderBody(writer, cycle);

        b.setInserter(null);

        replayControls();

        rb.render(writer, cycle);

        verifyControls();
    }

I'm testing RenderBlock, so I use the Creator (an improved version of Tapestry's TestAssist) to create and initialize a RenderBlock instance (even though RenderBlock is an abstract class).

The Block, which will have methods invoked on it by the RenderBlock, is a mock object (a zombie). Notice that it's created exactly the same way as a mock for an interface.

This is a huge improvement over the old approach for testing in Tapestry: build a simple application inside the integration test framework. The problem is, the integration test framework is very slow (several seconds per test) and cranky. A small amount of integration testing is important, but it is too cumbersome for the number of tests a product as sophisticated as Tapestry requires (at the time of this writing, Tapestry has approximately 545 tests, including about 30 integration tests). Unit tests, by definition, are more precise about failures.

I predict that the number of tests in Tapestry may well double before Tapestry 3.1 leaves beta! But then again, I'm pretty well test infected!
