Tapestry Training -- From The Source

Let me help you get your team up to speed in Tapestry ... fast. Visit howardlewisship.com for details on training, mentoring and support!

Tuesday, December 28, 2010

Is it time to switch back to IntelliJ?

I've been trying to stick with Eclipse now for a while. I switched from IntelliJ 8 to Eclipse because IntelliJ stopped working for Tapestry (I still don't know why) and because my fingers were getting tied in knots switching between IntelliJ and Eclipse. I have to switch back and forth because my Tapestry training uses Eclipse and I got a lot of negative feedback the times I tried to get people to use IntelliJ as part of the training.

In any case, I've been gritting my teeth against the travesty that is Eclipse; the inconsistent behaviors, the needless complexity, the lack of sense in much of the UI. I often wonder what IDE the Eclipse developers use, because it couldn't possibly be Eclipse itself, or they would have fixed some of the more brain-damaged stuff years ago.

One thing that is currently killing me is that Eclipse has no concept that some source folders contain production code and some contain test code. Because of that, when I collect coverage, EMMA instruments the test code as well as the production code. That not only throws off coverage figures, but breaks some of my tests (advanced ones about bytecode manipulation, that are sensitive to when EMMA adds new fields or methods to existing classes).

Fortunately, EMMA has an option to restrict its instrumentation (though it is a global preference, not configurable for individual projects) ... but shouldn't Eclipse understand this distinction natively? IntelliJ does, and it helps prevent a lot of problems, not just with coding, but with testing as well.

I keep hoping that there's a better, faster, simpler solution out there ... something that is elegant and precise. Eclipse is dumb as a sackful of hammers, and IntelliJ is almost fractal in its complexity, and ugly to boot.

I haven't found my perfect IDE yet. Maybe it's NetBeans?

Thursday, December 16, 2010

Announcing Tapestry 5.2

I'm very proud to announce that the next major release of Tapestry, release 5.2, is now available as Tapestry version 5.2.4.

This is the first stable release of Tapestry since 5.1.0.5 (back in April 2009), which is far too long a cycle. You might wonder: what's been the holdup? The answer, for me personally, is that I've been using Tapestry on two very, very different applications for two very, very different clients, and I've been updating Tapestry to embrace the real-world concerns of both of them. At the same time, I've done about a dozen public and private Tapestry training sessions and gathered reams of input from my students.

Let's talk about some of the major enhancements in this release:

Removal of Page Pooling

Prior versions of Tapestry used a page pool; for each page, Tapestry would track multiple instances of the page, binding one page instance to a particular request. This was an important part of Tapestry's appeal ... all the issues related to multi-threading were taken over by the framework, and you could code your pages and components as simple POJOs, without worrying about the threading issues caused by running inside a servlet container.

Unfortunately, pages are big: a page is not just one object but the root of a large tree of objects ... components and templates, bindings for component parameters, component resources, and all the extra infrastructure (lists and maps and such) to tie it together. Some of the largest Tapestry projects have hit memory problems when they combined deeply componentized pages with large numbers of parallel threads.

Tapestry 5.2 rewrites the rules here; only a single page tree is now needed for each page; the page and component classes have an extra transformation step that moves per-request data out of the objects themselves and into a per-thread Map object. Now, any number of requests can operate at the same time, without requiring additional page instances. Even better, the old page pooling mechanism included some locking and blocking that also gets jettisoned in the new approach. It's just a big win all around.

Live Service Reloading

People love the ability to change page and component classes in a Tapestry application and see the changes immediately; prior to 5.2 the same people would be disappointed that they couldn't change their services and see changes just as immediately. Tapestry 5.2 eliminates that restriction in most cases.

This is super handy for services such as DAOs (data access objects) where it is now possible to tweak a Hibernate query and see the results as immediately as changing some content in a template. This is another Tapestry feature that you'll find you can't live without once you use it the first time!

ClassTransformation API Improvements

At the heart of Tapestry is the Class Transformation API: the extensible pipeline that is the basis for how Tapestry transforms simple POJOs into working components. Prior to 5.2, if you wanted to do any interesting transformations, you had to master the Javassist pseudo-Java language.

Tapestry 5.2 reworks the API; it is now possible to do all kinds of interesting transformations in strict Java code; Javassist has been walled off, with an eventual goal to eliminate it entirely.

Query Parameter Support

Tapestry traditionally has stored information in the HTTP request path. For example, a URL might be /viewaccount/12345; the viewaccount part of the URL is the name of a page, and the 12345 part is the ID of an Account object. Tapestry calls the latter part the page activation context (which can contain one or more values).

That works well when a page has a fixed set of values for the page activation context, but not so well when the values may vary. For instance, you may be doing a search and want to store optional query parameters to identify the query term or the page number.

Tapestry 5.2 adds the @ActivationRequestParameter annotation that automates the process of gathering such data, encoding into URLs as query parameters, and making it available in subsequent requests.
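
A rough sketch of what this looks like in a page class (SearchResults and its fields are hypothetical, not part of the framework):

import org.apache.tapestry5.annotations.ActivationRequestParameter;

public class SearchResults
{
    // Encoded into generated links as ?term=... and restored from the query parameter on later requests.
    @ActivationRequestParameter
    private String term;

    // Non-String fields work too; Tapestry's ValueEncoder machinery handles the conversion.
    @ActivationRequestParameter
    private Integer pageNumber;
}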

Testing

A lot of work has gone into Tapestry's testing support, especially the base classes that support integration testing using Selenium. The new base classes make it easy to write test cases that work independently, or as part of a larger test, automatically starting and stopping Selenium and Jetty as appropriate. Further, Tapestry expands on Selenium's failure behavior, so that failures result in a capture of the page contents as both HTML and a PNG image file. It is simply much faster and much easier to write real, useful tests for Tapestry.

JSR-303 Support

Tapestry now supports the Bean Validation JSR, converting the standard validation annotations into client-side and server-side validations.
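
For example (a hedged sketch; the Registration page and its field are invented, and getters/setters are omitted), a field can simply carry the standard javax.validation annotations, and Tapestry's form support turns them into the corresponding checks:

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Registration
{
    // Validated both in the browser and on the server when the enclosing Form is submitted.
    @NotNull
    @Size(min = 3, max = 50)
    private String userName;
}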

Documentation

Tapestry's documentation has always been a challenge; for Tapestry 5.2, we've been doing a massive rework, doing a better job of getting you started using Tapestry. It's still a work in progress, but since it's based on a live Confluence wiki (and no longer tied to the release process), the documentation is free to evolve quickly.

Better yet, you don't have to be a committer to write documentation ... just sign your Apache Contributor License Agreement.

And in terms of exhaustive guides ... Igor Drobiazko's book is being translated from German to English as Tapestry 5 In Action.

Summary

I'm very proud of what we've accomplished over the last 18 months; we've added new committers, new documentation, and lots of new features. We even have a fancy new logo, and a new motto: Code Less, Deliver More.

Tapestry 5 was designed so that it would be possible to make big improvements to the internals and add new features to the framework without impacting existing users and applications; Tapestry 5.2 has validated that design and philosophy. It delivers a lot of punch in a non-disruptive way.

So, if you are looking for a high-productivity, high-performance web framework that doesn't get in your way, it's a great time to take a closer look at Tapestry!

Friday, November 12, 2010

Starting and Stopping Jetty Gracefully with Groovy and JMX

I'm working on a project that uses Tapestry and ActiveMQ together; it works great on my Mac, but on my client's Windows workstation, ActiveMQ doesn't shut down cleanly and corrupts its local files pretty consistently.

Unfortunately, there isn't a way (using RunJettyRun, the Eclipse plugin for Jetty) to gracefully shut down Jetty. You just pull the plug on it, mid-execution.

Looking for a solution, I realized that Jetty can expose most of its internals via JMX; this would allow us to start it up and shut it down cleanly in development.

So, I created a Groovy LaunchApp class to launch Jetty with JMX enabled:

package com.example.main

import java.lang.management.ManagementFactory 


import org.eclipse.jetty.jmx.MBeanContainer 
import org.eclipse.jetty.server.Server 
import org.eclipse.jetty.webapp.WebAppContext
import org.slf4j.LoggerFactory

/** 
 * Alternative to the RunJettyRun Eclipse plugin that allows greater control over how Jetty starts up.
 */
class LaunchApp {
 
 static PORT = 8080
 
 public static void main(String[] args) {
  
  def LOG = LoggerFactory.getLogger(LaunchApp.class)
  
  LOG.info "Starting up Jetty ${Server.getVersion()} instance on port $PORT ..."
  
  def server = new Server(PORT)
  
  server.stopAtShutdown = true
  server.gracefulShutdown = 1000 // 1 second
  
  def context = new WebAppContext()
  
  context.setContextPath "/"
  context.setWar "src/main/webapp"
  
  server.setHandler context
  
  def mBeanServer = ManagementFactory.getPlatformMBeanServer();
  def mBeanContainer = new MBeanContainer(mBeanServer);
 
  server.container.addEventListener(mBeanContainer);
  
  mBeanContainer.start();
  
  server.start()
  
  LOG.info "Join the fun at http://localhost:$PORT/landing"
  
  server.join()
  
  LOG.info("Jetty instance has shut down")
 }
}
... and a Groovy StopApp class:
package com.example.main

import javax.management.ObjectName
import javax.management.remote.JMXConnectorFactory
import javax.management.remote.JMXServiceURL

/** 
 * The flip-side of {@link LaunchApp}, this tool locates the running Jetty instance and uses JMX
 * to request a graceful shutdown.
 */
class StopApp {
 
 static JMX_PORT = 8085
 
 static JMX_URL = "service:jmx:rmi:///jndi/rmi://localhost:$JMX_PORT/jmxrmi"
 
 public static void main(String[] args) {
  println "Shutting down Jetty instance"
  
  def connector = JMXConnectorFactory.connect(new JMXServiceURL(JMX_URL), null)
  
  connector.connect null
  
  def connection = connector.getMBeanServerConnection()
  
  def on = new ObjectName("org.eclipse.jetty.server:type=server,id=0")
  
  connection.invoke on, "stop", null, null 
  
  println "Shutdown command sent"   
 }
}

The only trick is to ensure that LaunchApp's JMX MBean server is exposed for access, so you need the following system properties set:

-Dcom.sun.management.jmxremote.port=8085
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

Thursday, November 04, 2010

First Rule of LinkedIn: Customize the Message

As far as social networks go, I like LinkedIn a lot. Nice site, lots of features, generally quite useful. In fact, I like it enough to be careful about who I link to ... for example, I only link to people I've met in person (or, at least, had a phone conversation with).

For some people, social networks are a game and the score is the number of connections. Thus, I get a fair number of "cold" link requests.

Here's a trick: if you want someone like me to link to you, you need to customize the message. It had better say something like "Hey Howard, we hung out at JavaOne last year." or "I attended your talk on Clojure." or something else real and personal, or it's quite, quite likely to be deleted immediately.

Monday, November 01, 2010

Tapestry 5.2.2

... and the latest version of Tapestry, 5.2.2, is now available. This is the second beta release for Tapestry 5.2, addressing a few bugs in 5.2.1, and adding a couple of minor non-disruptive improvements ... read about it in the release notes.

Tapestry 5.2.2 is available for download, or via Maven:

<dependency>
    <groupId>org.apache.tapestry</groupId>
    <artifactId>tapestry-core</artifactId>
    <version>5.2.2</version>
</dependency>

I expect some minor issues will be addressed in Tapestry 5.2.3. Expect that in a week or so.

Thursday, October 28, 2010

Gradle is Great

My initial experience with Gradle is very positive. I've been quickly able to convert over my Maven POMs to Gradle build files, which are wonderfully concise. You can see the last of the results in this commit (work backwards to the parent commits to see some of the Gradle build scripts).
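
To give a flavor of that conciseness (a minimal sketch, not one of the actual converted builds; the dependency coordinates are just placeholders), a Gradle build file for a simple Java project boils down to a few lines:

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.apache.tapestry:tapestry-core:5.2.2'
    testCompile 'org.testng:testng:5.14'
}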

So far, Gradle is living up to its motto: A better way to build.

Next up is dealing with Maven artifact deployments, and the generation of project reports.

I will absolutely be using Gradle for all upcoming projects, and will hopefully revise Tapestry itself to use Gradle at some point soon.

Google Analytics vs. localhost

Are you using Google Analytics and trying to test it on your pages? Does it appear to do absolutely nothing?

After a lot of playing around, we discovered that Google Analytics appears to check if the URL is for "http://localhost" (any port) and disables itself. No warning, even if you are using the new debug version of the Google Analytics script (which is still minimized and obfuscated, which kind of flies in the face of debugging). The GA script just does nothing.

Re-open your page using "http://127.0.0.1:8080/" and it works like a charm.

There is the briefest of mentions in a comment in their forums that kind of indicates this is how it is supposed to work. Ugh. It should be in bold font: This script disables itself when the accessed page is on localhost.

It does make sense that you don't want to flood GA with spurious page hits while you are developing and testing your application ... but would it kill them to send a warning to the FireBug console?

Update: using the asynchronous API along with the debug version (.../u/ga_debug.js) does appear to work, even on localhost. When I first looked at this a couple of months ago, the debug script was not available.

Monday, October 04, 2010

Back in London at SkillsMatter

I'm back in London for another Tapestry training workshop; I can't wait to see how well the revisions and extensions are going to work with a fresh crew of Tapestry developers.

Thursday, September 30, 2010

What a waste: Kindle and Interactive Fiction

I think it's a terrible waste that the Kindle doesn't support Interactive Fiction. I would love to be able to play IF games untethered, and IF is the right kind of game for the Kindle, which excels at presenting text and falls flat at anything that requires a real refresh rate (the e-ink screen refreshes very, very slowly).

I know the Kindle has a development kit; I wonder what it would take to get IF games playing on it?

Friday, September 17, 2010

New testing lab for Tapestry Workshop @ SkillsMatter (London, Oct 5th)

I'm once again partnering with SkillsMatter to teach my full Tapestry workshop.

I've just finished up the materials for the new lab that covers testing of Tapestry components and applications. I'm quite pleased with how it came out, and I'm really looking forward to this training. According to SkillsMatter, there's still room in the class for a couple more students ... if you want to get started with Tapestry, or you want to get your Tapestry Ninja Skills going, this is the way to do it, fast!

The class will be taught at SkillsMatter's offices in London, from October 5th through the 8th.

Tapestry at JavaOne 2010

Just a reminder: I'll be presenting Tapestry: State of the Union this coming Monday, Sep 20th at 8:30 PM, in Moscone South 309.

Because of pressing demands and client commitments, I'm only going to be in San Francisco briefly: from Sunday afternoon to Tuesday evening.

Blogger finally checks for spam!

This is a big relief for me, because every time I posted a new entry, I'd get half a dozen spams, in Chinese. Their new spam filter seems to work nicely. I may eventually open up comments again for non-registered users.

Thursday, August 19, 2010

Groovin' on the Testin'

I'm at the point now where I'm writing Groovy code for (virtually) all my unit and integration tests. Tapestry's testing code is pretty densely written ... care of all those explicit types and all the boilerplate EasyMock code.

With Groovy, that code condenses down nicely, and the end result is more readable. For example, here's an integration test:

    @Test
    void basic_links() {
        clickThru "ActivationRequestParameter Annotation Demo"
        
        assertText "click-count", ""
        assertText "click-count-set", "false"
        assertText "message", ""
        
        clickAndWait "link=increment count"
        
        assertText "click-count", "1"
        assertText "click-count-set", "true"
        
        clickAndWait "link=set message"
        
        assertText "click-count", "1"
        assertText "click-count-set", "true"
        assertText "message", "Link clicked!"        
    }

That's pretty code; the various assert methods are simple enough that we can strip away the unnecessary parentheses.

What really hits home, though, is making use of closures. A lot of the unit and integration tests have a big setup phase where, often, several mock objects are being created and trained, followed by some method invocations on the subject, followed by some assertions.

With Groovy, I can easily encapsulate that as template methods, with a closure that gets executed to supply the meat of the test:

class JavaScriptSupportAutofocusTests extends InternalBaseTestCase
{
    private autofocus_template(expectedFieldId, cls) {
        def linker = mockDocumentLinker()
        def stackSource = newMock(JavaScriptStackSource.class)
        def stackPathConstructor = newMock(JavaScriptStackPathConstructor.class)
        def coreStack = newMock(JavaScriptStack.class)
        
        // Adding the autofocus will drag in the core stack
        
        expect(stackSource.getStack("core")).andReturn coreStack
        
        expect(stackPathConstructor.constructPathsForJavaScriptStack("core")).andReturn([])
        
        expect(coreStack.getStacks()).andReturn([])
        expect(coreStack.getStylesheets()).andReturn([])
        expect(coreStack.getInitialization()).andReturn(null)
        
        JSONObject expected = new JSONObject("{\"activate\":[\"$expectedFieldId\"]}")
        
        linker.setInitialization(InitializationPriority.NORMAL, expected)
        
        replay()
        
        def jss = new JavaScriptSupportImpl(linker, stackSource, stackPathConstructor)
        
        cls jss
        
        jss.commit()
        
        verify()
    }
    
    @Test
    void simple_autofocus() {
        
        autofocus_template "fred", { 
            it.autofocus FieldFocusPriority.OPTIONAL, "fred"
        }
    }
    
    @Test
    void first_focus_field_at_priority_wins() {
        autofocus_template "fred", {
            it.autofocus FieldFocusPriority.OPTIONAL, "fred"
            it.autofocus FieldFocusPriority.OPTIONAL, "barney"
        }
    }
    
    @Test
    void higher_priority_wins_focus() {
        autofocus_template "barney", {
            it.autofocus FieldFocusPriority.OPTIONAL, "fred"
            it.autofocus FieldFocusPriority.REQUIRED, "barney"
        }
    }
}

That starts being neat; with closures as a universal adapter interface, it's really easy to write readable test code, where you can see what's actually being tested.

I've been following some of the JDK 7 closure work and it may make me more interested in coding Java again. Having a syntax nearly as concise as Groovy (but still typesafe) is intriguing. Further, they have an eye towards efficiency as well ... in many cases, the closure is turned into a synthetic method of the containing class rather than an entire standalone class (the way inner classes are handled). This is good news for JDK 7 ... and I can't wait to see it tame the class explosion in languages like Clojure and Scala.

Tuesday, August 17, 2010

Tapestry Frequently Asked Questions

I'm taking some time to work on the Tapestry documentation ... starting with the FAQ. It's great fun, though this could get to be quite large. I'm just spewing out content right now; over time we'll clean it up, reorganize it, and add further hyperlinks and annotations.

In fact, as I'm working on the FAQ, I'm thinking this might be the best way to document open source projects in general. User guides and reference documents are rarely read; everyone just Googles their question, so put those questions in their most findable format. Also, it's hard to write a consistent user guide start to finish ... but more reasonable to document one tidbit at a time.

Also, I'm reminded of The Little Schemer, a book that teaches the entire Scheme language (a Lisp variant) via a series of questions of ever broadening scope.

Feel free to suggest additional FAQ topics on the Tapestry Users mailing list.

Monday, August 09, 2010

Tapestry 5.2 leaves the gate

It's been a long time coming. Originally, I had thought we'd be producing Tapestry 5.2 six to eight months after Tapestry 5.1 ... instead, it's been more like 14 months just to get to the alpha release. Why? Well, in that time, I've personally changed jobs (back to an independent consultant), and I've been actively using the nightly snapshots of Tapestry 5.2 in two different projects for two different clients. I've had a lot of chances to see Tapestry in practice and, as always, identify the rough edges and smooth them out.

This new release enhances one of Tapestry's secret strengths: meta-programming. It is now ridiculously easy to extend the behavior of components, or of methods and fields within components, using annotations ... without getting mixed up in all that Javassist business. I'm using that now just about everywhere you might think about using a base class: everything from securing page access, to caching, to integration with Google Analytics.

The big change here is the switch from pooled pages to singletons: In Tapestry 5.1 and earlier, Tapestry kept a pool of page instances. On each request, a localized page instance was pulled from the pool, used exclusively by the one request thread, then returned to the pool. The pool had to be able to expand dynamically, and shrink to release memory.

Starting with Tapestry 5.2, the page pool is deprecated (and only enabled with extra configuration). Instead, a single page instance is created and shared between threads. That may raise your red alert flag ... doesn't that make Tapestry non-thread-safe?

Nope. Tapestry now reworks your simple POJO classes, changing access to all local mutable fields to instead store the value in a per-thread Map. It's an extrapolation of how Tapestry already manages persistent fields (storing the persistent field values in the Session between requests) ... but it now applies to all request-scoped state.

It's an interesting trade off: a lot less memory (just a single instance of each page and all its components) for a bit more work during each request. Part of the reason for this alpha release is to get this code into more hands and get more performance analysis on the result. I'm confident that these changes will not noticeably affect small applications and reasonable request loads but will make a big difference in handling larger applications with heavy request loads.

Meanwhile, the goal is to keep the APIs stable, address a bunch of bugs, and get another release out soon, then vote that up as a beta release. Preferably before JavaOne!

Friday, July 30, 2010

Choosing the Right Web Framework

Thank you Google Alerts, for pointing out this article on choosing a Java web framework. It's over a year old, but I think the things that make Tapestry special have only gotten stronger in the intervening time.

Wednesday, July 28, 2010

Git on Mac OS X: Don't ignore case!

By default, Mac OS X uses a case insensitive file system, and Git seems to honor that. The problem is, most programming languages, especially Java, are case sensitive. Class "JavaScriptSupport" needs to be in file "JavaScriptSupport.java" and not "JavascriptSupport.java". This is even worse when sharing code via a repository since some other developers may check out code on a case sensitive file system.

I was just renaming some classes, from things like "JavascriptStack" to "JavaScriptStack" (because the language is called "JavaScript" not "Javascript") ... and I was dismayed that Git saw that as an in-place update to a file, not a rename of the file.

Unfortunately, it's not as simple as git config core.ignorecase false to make Git do the right thing. That's an essential part of it, but Git still sees changes to the original naming of the file as a change, not a deletion.

I had to use the trick of one commit renaming JavascriptStack.java --> JSStack.java, then a second commit renaming JSStack.java --> JavaScriptStack.java.
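
In concrete terms, the workaround looks something like this (hypothetical paths; the intermediate name just needs to differ by more than case):

$ git mv JavascriptStack.java JSStack.java
$ git commit -m "Rename JavascriptStack.java to a temporary name"
$ git mv JSStack.java JavaScriptStack.java
$ git commit -m "Rename to the final JavaScriptStack.java spelling"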

Monday, July 26, 2010

Tapestry 5 Training in London: Oct 5 - 8

I'm once again partnering with SkillsMatter to teach my full Tapestry workshop. This is an expanded version of the class, which is growing from three days up to four; the additional day will ensure that we have time for all the existing materials, and add a new section on testing using TestNG, Selenium and Groovy. It will also give us more time to explore student directed ideas, such as security and meta-programming.

The class will be taught at SkillsMatter's offices in London, from October 5th through the 8th.

Wednesday, July 14, 2010

Everyone out of the Pool! Tapestry goes singleton!

Tapestry applications are inherently stateful: during and between requests, information in Tapestry components (values stored in fields) sticks around. This is a great thing: it lets you program a web application in a sensible way, using stateful objects full of mutable properties and methods to operate on those properties.

It also has its downside: Tapestry has to maintain a pool of page instances. And in Tapestry, page instances are big: a tree of hundreds or perhaps thousands of interrelated objects ... the Tapestry structural objects that form the basic page structure, the component and mixin objects hanging off that tree, the binding objects that connect parameters of components to properties of their containing component, the template objects that represent elements and content from component templates, and many, many more that most Tapestry developers are kept unaware of.

This has proven to be a problem with the biggest and busiest sites constructed using Tapestry. Keeping a pool of those objects, checking them in and out, and discarding them when no longer needed drains precious resources, especially heap space.

So that seems like an irreconcilable problem eh? Removing mutable state from pages and components would turn Tapestry into something else entirely. On the other hand, allowing mutable state means that applications, especially big complex applications with many pages, become memory hogs.

I suppose one approach would be to simply create a page instance for the duration of a request, and discard it at the end. However, page construction in Tapestry is very complicated, and although some effort was expended in Tapestry 5.1 to reduce the cost of page construction, that cost is still significant. Additionally, Tapestry is full of small optimizations that improve performance ... assuming a page is reused over time. Throwing away pages is a non-starter.

So we're back to square one ... we can't eliminate mutable state, but (for large applications) we can't live with it either.

The best solution would be to require that all those mutable fields be, instead, ThreadLocal objects, and to change all the logic that accesses them to instead read and write values to the ThreadLocal. Oh, and clean up each and every one at the end of the request, so that information doesn't bleed through to the next request. That would be an incredible imposition on Tapestry developers.

Fortunately, Tapestry has lots of options for meta-programming Tapestry component classes.

Tapestry has already been down this route: the way persistent fields are handled gives the illusion that the page is kept around between requests. You might think that Tapestry serializes the page and stores the whole thing in the HttpSession. In reality, Tapestry is shuffling just the individual persistent field values into and out of the session. To both the end user and the Tapestry developer, it feels like the entire page is live between requests, but it's really a bit of a shell game, providing an equivalent page instance that has the same values in its fields.

What's going on in trunk (Tapestry 5.2 alpha) right now is extrapolating that concept from just persistent fields to all mutable fields. Every access to every mutable field in a Tapestry page is converted, as part of the class transformation process, into an access against a per-thread Map of keys and values. Each field gets a unique identifying key. The Map is discarded at the end of the request.

The end result is that a single page instance can be used across multiple threads without any synchronization issues and without any field value conflicts.

This idea was suggested in years past, but the APIs to accomplish it (as well as the necessary meta-programming savvy) just wasn't available. However, as a side effect of rewriting and simplifying the class transformation APIs in 5.2, it became very reasonable to do this.

Let's take an important example: the handling of typical, mutable fields. This is the responsibility of the UnclaimedFieldWorker class, part of the Tapestry component class transformation pipeline. UnclaimedFieldWorker finds fields that have not been "claimed" by some other part of the pipeline and converts them to read and write their values to the per-thread Map. A claimed field may store an injected service, asset or component, or be a component parameter.

public class UnclaimedFieldWorker implements ComponentClassTransformWorker
{
    private final PerthreadManager perThreadManager;

    private final ComponentClassCache classCache;

    static class UnclaimedFieldConduit implements FieldValueConduit
    {
        private final InternalComponentResources resources;

        private final PerThreadValue<Object> fieldValue;

        // Set prior to the containingPageDidLoad lifecycle event
        private Object fieldDefaultValue;

        private UnclaimedFieldConduit(InternalComponentResources resources, PerThreadValue<Object> fieldValue,
                Object fieldDefaultValue)
        {
            this.resources = resources;

            this.fieldValue = fieldValue;
            this.fieldDefaultValue = fieldDefaultValue;
        }

        public Object get()
        {
            return fieldValue.exists() ? fieldValue.get() : fieldDefaultValue;
        }

        public void set(Object newValue)
        {
            fieldValue.set(newValue);

            // This catches the case where the instance initializer method sets a value for the field.
            // That value is captured and used when no specific value has been stored.

            if (!resources.isLoaded())
                fieldDefaultValue = newValue;
        }

    }

    public UnclaimedFieldWorker(ComponentClassCache classCache, PerthreadManager perThreadManager)
    {
        this.classCache = classCache;
        this.perThreadManager = perThreadManager;
    }

    public void transform(ClassTransformation transformation, MutableComponentModel model)
    {
        for (TransformField field : transformation.matchUnclaimedFields())
        {
            transformField(field);
        }
    }

    private void transformField(TransformField field)
    {
        int modifiers = field.getModifiers();

        if (Modifier.isFinal(modifiers) || Modifier.isStatic(modifiers))
            return;

        ComponentValueProvider<FieldValueConduit> provider = createFieldValueConduitProvider(field);

        field.replaceAccess(provider);
    }

    private ComponentValueProvider<FieldValueConduit> createFieldValueConduitProvider(TransformField field)
    {
        final String fieldName = field.getName();
        final String fieldType = field.getType();

        return new ComponentValueProvider<FieldValueConduit>()
        {
            public FieldValueConduit get(ComponentResources resources)
            {
                Object fieldDefaultValue = classCache.defaultValueForType(fieldType);

                String key = String.format("UnclaimedFieldWorker:%s/%s", resources.getCompleteId(), fieldName);

                return new UnclaimedFieldConduit((InternalComponentResources) resources,
                        perThreadManager.createValue(key), fieldDefaultValue);
            }
        };
    }
}

That seems like a lot, but let's break it down bit by bit.

    public void transform(ClassTransformation transformation, MutableComponentModel model)
    {
        for (TransformField field : transformation.matchUnclaimedFields())
        {
            transformField(field);
        }
    }

    private void transformField(TransformField field)
    {
        int modifiers = field.getModifiers();

        if (Modifier.isFinal(modifiers) || Modifier.isStatic(modifiers))
            return;

        ComponentValueProvider<FieldValueConduit> provider = createFieldValueConduitProvider(field);

        field.replaceAccess(provider);
    }

The transform() method is the lone method for this class, as defined by ComponentClassTransformWorker. It uses a method on the ClassTransformation to locate all the unclaimed fields. TransformField is the representation of a field of a component class during the transformation process. As we'll see, it is very easy to intercept access to the field.

Some of those fields are final or static and are just ignored. A ComponentValueProvider is a callback object: when the component (whatever it is) is first instantiated, the provider will be invoked and the return value stored into a new field. A FieldValueConduit is an object that takes over responsibility for access to a TransformField: internally, all read and write access to the field is passed through the conduit object.

So, what we're saying is: when the component is first created, use the callback to create a conduit, and change any read or write access to the field to pass through the created conduit. If a component is instantiated multiple times (either in different pages, or within the same page) each instance of the component will end up with a specific FieldValueConduit.

Fine so far; it comes down to what's inside the createFieldValueConduitProvider() method:

    private ComponentValueProvider<FieldValueConduit> createFieldValueConduitProvider(TransformField field)
    {
        final String fieldName = field.getName();
        final String fieldType = field.getType();

        return new ComponentValueProvider<FieldValueConduit>()
        {
            public FieldValueConduit get(ComponentResources resources)
            {
                Object fieldDefaultValue = classCache.defaultValueForType(fieldType);

                String key = String.format("UnclaimedFieldWorker:%s/%s", resources.getCompleteId(), fieldName);

                return new UnclaimedFieldConduit((InternalComponentResources) resources,
                        perThreadManager.createValue(key), fieldDefaultValue);
            }
        };
    }

Here we capture the name of the field and its type (expressed as String). Inside the get() method we determine the initial default value for the field: typically just null, but may be 0 (for a primitive numeric field) or false (for a primitive boolean field).

Next we build a unique key used to store and retrieve the field's value inside the per-thread Map. The key includes the complete id of the component and the name of the field: thus two different component instances, in the same page or across different pages, will have their own unique key.

We use the PerthreadManager service to create a PerThreadValue for the field. You can think of a PerThreadValue as a specialized kind of ThreadLocal that automatically cleans itself up at the end of the request.
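
As a tiny illustration (the key here is made up; the methods are the same ones the conduit below uses):

PerThreadValue<Integer> count = perThreadManager.createValue("example:count");

count.exists();   // false until set() is called during the current request
count.set(1);
count.get();      // 1, visible only to this thread, and discarded at the end of the request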

Lastly, we create the conduit object. Let's look at the conduit in more detail:

    static class UnclaimedFieldConduit implements FieldValueConduit
    {
        private final InternalComponentResources resources;

        private final PerThreadValue<Object> fieldValue;

        // Set prior to the containingPageDidLoad lifecycle event
        private Object fieldDefaultValue;

        private UnclaimedFieldConduit(InternalComponentResources resources, PerThreadValue<Object> fieldValue,
                Object fieldDefaultValue)
        {
            this.resources = resources;

            this.fieldValue = fieldValue;
            this.fieldDefaultValue = fieldDefaultValue;
        }

We use the special InternalComponentResources interface because we'll need to know if the page is loading, or in normal operation (that's coming up). We capture our initial guess at a default value for the field (remember: null, false or 0) but that may change.

        public Object get()
        {
            return fieldValue.exists() ? fieldValue.get() : fieldDefaultValue;
        }

Whenever code inside the component reads the field, this method will be invoked. It checks to see if a value has been stored into the PerThreadValue object during this request; if so, the stored value is returned; otherwise, the field default value is returned.

Notice the distinction here between null and no value at all. Just because the field is set to null doesn't mean we should switch over to the default value (assuming the default is not null).

The last hurdle is updates to the field:

      public void set(Object newValue)
        {
            fieldValue.set(newValue);

            // This catches the case where the instance initializer method sets a value for the field.
            // That value is captured and used when no specific value has been stored.

            if (!resources.isLoaded())
                fieldDefaultValue = newValue;
        }

The basic logic is just to stuff the value assigned to the component field into the PerThreadValue object. However, there's one special case: a field initialization (whether it's in the component's constructor, or at the point in the code where the field is first defined) turns into a call to set(). We can differentiate the two cases because that update occurs before the page is marked as fully loaded, rather than in normal use of the page.

And that's it! Now, to be honest, this is much more detail than a typical Tapestry developer ever needs to know. However, it's a good demonstration of how Tapestry's class transformation APIs make Java code fluid; capable of being changed dynamically (under carefully controlled circumstances).

Back to pooling: how is this going to affect performance? That's an open question, and putting together a performance testing environment is another task at the top of my list. My suspicion is that the new overhead will not make a visible difference for small applications (dozens of pages, reasonable number of concurrent users) ... but for high end sites (hundreds of pages, large numbers of concurrent users) the avoidance of pooling and page construction will make a big difference!

Thursday, June 24, 2010

Tapestry 5.2: Improved Query Parameter Support

I just checked in some very nice changes for Tapestry 5.2; you can now easily store data about a page in the URL as query parameters:

  @ActivationRequestParameter
  private String name;

By annotating a page (not a component!) field this way, the field will be mapped to the query parameter "name". When a page render link or component event link for the page is created, the current value of the field will be added as parameter "name". When that link is triggered to form a request, the parameter will be read and the field updated from the query parameter value.

It isn't limited to strings ... it uses the whole ValueEncoder machinery so that you can encode numbers or even Hibernate entities (represented in the URL as their primary key).

Cool stuff, if I do say so myself. Even I'm still learning how to flex the massive amount of meta-programming muscle that Tapestry provides. It turns out that the combination of component method advice with custom events triggered on the page can do some really sophisticated things!

Tuesday, June 08, 2010

Who Wants The Func? Gotta Have That Func!

I've been entranced by the concept of laziness since I first really considered it while teaching myself a bit of Haskell. Laziness is the idea that no computation takes place until it is actually needed ... an idea that is common in the functional programming world and one that works best with immutable data.

Why immutable? This has been covered extensively elsewhere, but the gist is that when you have any kind of mutable data (any field that can ever change its value), you add time as an invisible input to your expressions. Literally, the time that any single expression is evaluated relative to other changes to mutable state will affect the outcome of the expression, often in non-predictable ways. In the mathematical world, a function will always return the same value for the same inputs ... in the fuzzy, dirty world of Object Orientation, a method invocation may return different values at different times based on mutable state. Not necessarily mutable state in the object being invoked, but in some other object, somewhere, that the invoked object depends on.

This is why parallel programming in the OO world seems so hard. It requires locks on mutable data, which brings its own problems, such as deadlocks. It can feel like a tottering house of cards.

But remove mutability from this picture and an entirely different world emerges. Functions do behave as functions; same inputs: same result. Side effects disappear, because there's no mutable state. Evaluation of expressions is no longer linked to time: it can be evaluated in parallel threads, or can be deferred until absolutely needed.

That last bit is the laziness. Laziness is a way to bootstrap your code up to a simpler, clearer expression of your algorithms ... once you embrace laziness, you can see that a good amount of the code you write (using mutable data especially) is a case of premature optimization.

Back to Tapestry; as far back as Tapestry 4.0 (where HiveMind and the use of Inversion of Control and Dependency Injection were introduced), Tapestry's internal code has had many functional characteristics. The base unit of work in the Tapestry IoC container is an interface, not a function ... but often those interfaces have a single method. That makes the interfaces look a lot like functions, ready for the kind of composition possible in a functional programming language. Sure, it's a bit clumsy compared to a real functional programming language ... but the power is still there.

Tapestry 5 uses these features to handle a lot of Aspect Oriented Programming concepts; for instance, services are lazily instantiated, and they can be decorated and advised to provide cross-cutting concerns. In fact, Tapestry uses functional composition extensively for all kinds of meta-programming.

Meanwhile, outside the realm of Tapestry, my exposure to Clojure has really sold me on the functional approach, and I take to immutable data structures like a warm, comforting blanket. I miss all that when I'm working with ordinary Lists and Sets from the Collections API.

Given that Tapestry does a lot of complex things, I started work on a simple functional library. What I've created is not nearly as complex as Functional Java; I think it does less, but does it more cleanly. It's more focused.

The idea is that you'll create a Flow from some source (usually, a Collection). You can then map, filter, reduce, append, concatenate, and iterate the values inside the Flow. Further, the Flows are lazy (as with Haskell and Clojure); all evaluation is deferred until absolutely necessary, and it's thread safe. You can also have infinite flows.

It all starts in a static class F (for Functional) that has the initial factories for Flows. This example uses the F.range() method to create a Flow for a range of integers:

System.out.println(F.range(1, 100).filter(F.gt(10)).map(
  new Mapper<Integer, Integer>() {
   public Integer map(Integer value) {
    return value * 2;
   }
  }).take(3).toList());

When executed, this code prints the following: [22, 24, 26]. That is, it dropped the values less than or equal to 10; it then multiplied each remaining value by 2 and converted the first three to a list.

  • F.range() creates a lazy Flow of integers in the range (from 1 to 99; the upper range is non-inclusive).
  • filter() is a Flow method that keeps only some values, based on a Predicate
  • F.gt() is a static factory method; it creates a Predicate that compares a Number value from the Flow against the provided value
  • map() is a Flow method that applies a Mapper to each value in the Flow
  • take() takes a limited number of values from the front of the Flow
  • toList() converts the Flow into a non-modifiable List

Here we mapped from Integer to Integer, but it would have been possible to map to a different type. At each stage, a new (immutable) Flow object is created.

What about laziness? Well if we modify the code a bit:

System.out.println(F.range(1, 100).filter(F.gt(10)).map(
  new Mapper<Integer, Integer>() {
   public Integer map(Integer value) {
    System.out.println("** mapping " + value);
    return value * 2;
   }
  }).take(3).toList());
The new output is:
** mapping 11
** mapping 12
** mapping 13
[22, 24, 26]

... in other words, although we write code that makes it appear that the entire Flow is transformed by the map() call, the reality is that individual values from the original flow are mapped, just once, as needed. The code we write focuses on the flow of transformations, from the input to the final result: "start with the range, retain the values greater than 10, multiply each by two, keep just the first three".

Does this make a difference? Not with trivial cases like this example. The functional code could be rewritten in standard Java as:

List<Integer> result = new ArrayList<Integer>();
for (int i = 1; i < 100; i++)
{
  if (i > 10)
  {
    result.add(i * 2);
  }
}

result = result.subList(0, 3);

System.out.println(result);

Yes, this code is shorter, but it does more work (computing many doubled values that are not needed). We could do some extra work to keep a count of result values and end the loop earlier, but that increases the cyclomatic complexity even further. The extra work is not a big deal here, but if the transformations were more expensive (say, re-drawing images in different sizes, or reading data from a database) the unnecessary work would become quite significant.
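
For comparison, an early-exit version of the loop might look like this (just a sketch of the idea); the termination condition now has to know about the result size:

List<Integer> result = new ArrayList<Integer>();
for (int i = 1; i < 100 && result.size() < 3; i++)
{
  if (i > 10)
  {
    result.add(i * 2);
  }
}

System.out.println(result);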

And is the traditional Java code really shorter? What if we create a reusable factory function:

public static Mapper<Integer,Integer> multiplyBy(final int multiplicand)
{
  return new Mapper<Integer, Integer>() 
  {
    public Integer map(Integer value)
    {
      return value * multiplicand;
    }
  };
}

Then our original Flow expressions becomes:

System.out.println(F.range(1, 100).filter(F.gt(10)).map(multiplyBy(2)).take(3).toList());

... meaning that, once we have a good collection of these Mapper and Predicate factory methods, we can have the efficient, lazy code and it will be more concise and readable.

Anyway, tapestry-func is a work in progress, but it's very promising, and already being used in both Tapestry 5.2, and in some of my clients' code.

Monday, May 31, 2010

Tapestry Happenings: 5.2 beta soon?, New Committers, Dynamic component

There's lots and lots going on with Tapestry right now. We're gearing up to bring Tapestry 5.2 into a beta phase ... hopefully a short one before a final GA release.

We've been busy adding new committers to the team. Not every vote has been successful, but that just shows that the system works.

Lots of people are working on a terrific refresh of the Tapestry home page and documentation. Not just new text, but a new, more timely approach (based on Confluence) and a terrific new layout. Just lots of energy going on there. I can't wait for it to be ready ... it really makes Tapestry look like the first-class citizen it is.

Meanwhile, I've been busy for my clients, writing some useful code.

I've managed to get the nightly builds of TapX working again ... just in time. I've created an exciting new component: Dynamic.

Dynamic is targeted for use in skinning applications. Tapestry's structure makes skinning an application a little tricky, especially if you want to take a standard application and customize it for different customers. Tapestry templates are expected to be uniform across all instances of a component (in the same way that a class's methods and fields are fixed across all instances). This makes it tricky to create a component that renders differently for different users, which is the essence of skinning. Often, you end up either simulating a JSP include and injecting a blob of raw markup content into the Tapestry DOM ... or you end up with an evil nest of conditionals and indirection.

The TapX Dynamic component exists to mix a dynamic template file with live Tapestry components. The dynamic template is any well-formed XML document that can be represented as a Resource; mostly, it is output as-is ... until Tapestry hits an element, such as a <div>, with a special id. When the id is of the form param:block-name, the Dynamic component replaces the dynamic template element with the content of the Block parameter passed to the Dynamic component. We're really weaving Tapestry and the external template together here.

I'm working on a good, simple example here ... but anyone who has struggled with this issue will appreciate what Dynamic accomplishes. It's not unlike running SiteMesh at the Tapestry component level.
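
In the meantime, a made-up dynamic template (the block name "content" is hypothetical) gives the flavor; the element whose id is param:content is replaced with the "content" Block passed to the Dynamic component, while everything else is output as-is:

<html>
  <body>
    <h1>Customer-specific banner goes here</h1>
    <div id="param:content"/>
  </body>
</html>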

Friday, April 30, 2010

Inform 7's Birthday

Long, long ago, some of my first programs were interactive fictions. Really simple stuff, "go north", "kill rat", that kind of thing. While I was hacking that kind of thing together in Basic, others had gone much further: the masters of Infocom. Their system, written way back in the late 70's and early 80's, predates Java but has many similar features, including a bytecode-based runtime portable across different operating systems and hardware platforms.

Sure games have gone in a different direction with incredible 3d graphics, but there's still a certain joy in playing interactive fiction games; it really is like playing a short story. The games have evolved from classic dungeon crawls into something more, with many of the best games eschewing puzzles and focusing on interaction between the player and the non-player characters.

The modern way to write these games, if you are so inclined, is in the Inform 7 environment. The Inform team try to come up with new releases every year, on the "birthday"; this month is the 17th birthday of the original Inform language (Inform was originally a more C-like object-oriented language that's evolved over the years into its current state).

Here's a getting started screencast:

Inform 7 Introductory Screencast from Aaron Reed on Vimeo.

I've gone a bit deep with Inform in the past, and hope to do more in the future. It's a truly amazing piece of software ... the language is generally a natural language (I call it "the mother of all DSLs") with features combining object-oriented, rules-based and even aspect-oriented programming. In fact, the next release, due shortly, even includes map and reduce operations! The language is very powerful, allowing for concise ways to deal with deep cross-cutting concerns, while still allowing for human ambiguity.

More than that, the IDE is truly full and integrated. It has extensive documentation (both a front-to-back manual and a cookbook), and error messages include hyperlinks to your code and to the manual pages. It includes hundreds of short examples that can be pasted directly into the editor with a single click. It has built-in testing features (shown in the screencast above). It provides an incredible cross-reference of your project (integrated with the built-in libraries) ... even an automatic map of your game world. It's truly a labor of love, and I wish any of the tools I work with day to day showed so much innovation and usefulness.

If you've ever wanted to write interactive fiction, or just want to play with a really fascinating alternative language, give Inform a try.

Monday, April 19, 2010

Setting up committer Git access for Tapestry 5

Given the problems I'm having, I decided to set up a new local Git repository for further work. Here's how to do it:

$ git clone git://git.apache.org/tapestry5.git

This sets up a new working folder, tapestry5. It takes a while to download the necessary Git repository objects.

$ cd tapestry5
$ curl http://git.apache.org/authors.txt -o .git/authors.txt
$ git config svn.authorsfile .git/authors.txt

This fetches the current list of authors so that proper names appear in various Git reports, then configures Git to make use of the file.

$ git svn init --prefix=origin/ --trunk=trunk https://svn.apache.org/repos/asf/tapestry/tapestry5

This tells Git where to sync from and to.

$ git svn rebase

And that finishes things up, ensuring that you have all the most recent revisions.

From here on in, the two commands you need the most are git svn rebase (to pull in repository changes) and git svn dcommit (to push deltas back to Subversion). You should always rebase before a dcommit.

Perhaps that's not quite complete; I generally create local Git topic branches. I start my work with git co -Blocal to create (or overwrite) my local branch, do my work there as a series of commits, then run git co trunk ; git rebase local to move those commits back over to trunk before git svn dcommit (the full sequence is sketched below). This helps a lot if you ever have to deal with a merge.
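
Spelled out as a full sequence (git co is just a checkout alias):

$ git co -Blocal        # create (or overwrite) the local topic branch
# ... do the work as a series of commits ...
$ git co trunk
$ git rebase local      # move the topic-branch commits back over to trunk
$ git svn rebase        # always rebase before a dcommit
$ git svn dcommit       # push the deltas back to Subversion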

When I'm fixing particular bugs, I often create a branch named after the bug id.

Update: Removed the --tags and --branches arguments ... mostly because of how horribly Git SVN works with branches (don't try it!), and to make the init step nice and fast.

Git & Svn : Not Always A Match Made In Heaven

Apache is stuck using Subversion ... so I've been using the Git/Svn integration built into Git for a while now. The good news is that most of the Git workflow comes with it ... you can create private branches, do local commits to your local repository, and build up a series of changes to dcommit ("delta commit") into SVN.

But that doesn't always work. For reasons I don't understand (given that there have been no other commits to SVN since I started work in my private branch), I keep getting the following error:

Applying: Provide the missing asset request handler for the virtual "stack" folder.
Committing to https://svn.apache.org/repos/asf/tapestry/tapestry5/trunk ...
 M tapestry-core/src/main/java/org/apache/tapestry5/internal/services/ClasspathAssetAliasManagerImpl.java
 M tapestry-core/src/main/java/org/apache/tapestry5/services/LibraryMapping.java
 M tapestry-core/src/test/java/org/apache/tapestry5/internal/services/ClasspathAssetAliasManagerImplTest.java
 A tapestry-core/src/test/java/org/apache/tapestry5/services/LibraryMappingTest.java
Committed r935569
W: 86533530aac8673a9e107e323de5201b7187270f and refs/remotes/origin/trunk differ, using rebase:
:040000 040000 9c78596ee3f916f012c51d8927b4aa31d497f17b 8eb2b9b4f28e825e223c736eaa664bb53018258e M tapestry-core
Current branch trunk is up to date.
# of revisions changed  
before:
 07b37e03cbc17012247d2221e795023c564d8228
0830b5f383dc94ae16088185efefac2e1358cf30
0bf378bcafc3f5372b67edc50d7de5bed8713cd0
95c87c5d7a2435df6bfced0d858bfdcb6ff26f22
3cd4ea4d9b225fd5013e1ce72cb9bac6d5b3e5e2
7b95d935099ebbbeb81845c1b8170a89d6ca6421
af310cfa9a5552aab2574c1e345b3beb049fb040
20805630fc67b83b4ca946b942716aeba4c80bef
b3ef5e069942a30e2dce45a35e4be16382c0108d
1be75a15b7f203c927bc2aa34f43dda59ca968e3
555d94ebab122a688b7a1c0af253bf73609f88f5
729757eb3c35e14e126cb6ef16f5032d95d1cc4a
79dcfa32b291454bf9c652d635374d60638b8fb8
304d12f9d7d040f4dc231d213df663fcdf3863b6
0d626a7b0648735ab83bc7a2fd241390eb92e4e2 

after:
 86533530aac8673a9e107e323de5201b7187270f
07b37e03cbc17012247d2221e795023c564d8228
0830b5f383dc94ae16088185efefac2e1358cf30
0bf378bcafc3f5372b67edc50d7de5bed8713cd0
95c87c5d7a2435df6bfced0d858bfdcb6ff26f22
3cd4ea4d9b225fd5013e1ce72cb9bac6d5b3e5e2
7b95d935099ebbbeb81845c1b8170a89d6ca6421
af310cfa9a5552aab2574c1e345b3beb049fb040
20805630fc67b83b4ca946b942716aeba4c80bef
b3ef5e069942a30e2dce45a35e4be16382c0108d
1be75a15b7f203c927bc2aa34f43dda59ca968e3
555d94ebab122a688b7a1c0af253bf73609f88f5
729757eb3c35e14e126cb6ef16f5032d95d1cc4a
79dcfa32b291454bf9c652d635374d60638b8fb8
304d12f9d7d040f4dc231d213df663fcdf3863b6
0d626a7b0648735ab83bc7a2fd241390eb92e4e2 
 If you are attempting to commit  merges, try running:
  git rebase --interactive --preserve-merges  refs/remotes/origin/trunk 
Before dcommitting
~/work/t5-project
$

I did the right things; git co trunk followed by git svn rebase, then git rebase revised-assets-12apr2010. It claimed to replay my branch changes on top of the trunk branch, but regardless, the dcommit failed.

Doing some hunting around with Google, I found a partial explanation, that at least gives me a way forward. I'd still like to know how I got into this predicament.

At this point I just keep blindly entering the command: git reset --hard 705ccfb1e27d303a9db62de755b2fcfcca9a02f6 ; git svn rebase; git svn dcommit and get one Git commit further each time (that's the Git hash code for my final change in my original branch). Joy.

Sunday, April 18, 2010

An extended stay in London

If you were unable to attend my "In the Brain Of" talk last Tuesday, it's now available at SkillsMatter ... voice, video and slides. This was a fun session, even if I was jet lagged to the point of being dizzy.

Meanwhile, I'm still hanging out in London until at least the end of this week. I expect to work from my hotel room, visit a couple of clients, and maybe have a bit of fun. Drop me a line.

In London, looking for a Tapestry job? I've been contacted by a recruiter who is looking for you! They're building a team of perhaps 20 developers (I'm not yet sure who the actual client is). Drop me a line and I'll hook you up!

Tuesday, April 06, 2010

Meta-Programming Java with Tapestry

A significant amount of what Tapestry does is meta programming: code that modifies other code. Generally, we're talking about adding behavior to component classes, which are transformed as they are loaded into memory. The meta-programming is the code that sees all those annotations on methods and fields, and rebuilds the classes so that everything works at runtime.

Unlike AspectJ, Tapestry does all of its meta-programming at runtime. This fits in better with live class reloading, and also allows for loaded libraries to extend the meta-programming that's built-in to the framework.

All the facilities Tapestry has evolved to handle meta-programming make it easy to add new features. For example, I was doing some work with the Heartbeat environmental object. Heartbeat allows you to schedule part of your behavior for "later". First off, why would you need this?

A simple example is the relationship between a Label component and a form control component such as TextField. In your template, you may use the two together:

  <t:label for="email"/>
  <t:textfield t:id="email"/>

The for parameter there is not a simple string, it is a component id. You can see that in the source for the Label component:

    @Parameter(name = "for", required = true, allowNull = false, defaultPrefix = BindingConstants.COMPONENT)
    private Field field;

Why does for="email" match against the email component, and not some property of the page named email? That's what the defaultPrefix annotation attribute does: it says "pretend there's a component: prefix on the binding unless the programmer supplies an explicit prefix."

So you'd think that would wrap it up: we just need to do the following in the Label code:

  writer.element("label", "for", field.getClientId());

Right? Just ask the field for its client-side id and now all is happy.

Alas, that won't work. The Label component renders before the TextField, and the clientId property is not set until the TextField renders. What we need to do is wait until they've both rendered, and then fill in the for attribute after the fact.

That's where Heartbeat comes in. A Heartbeat represents a container such as a Loop or a Form. A Heartbeat starts, and accumulates deferred commands. When the Heartbeat ends, the deferred commands are executed. Also, Heartbeats can nest.

Using the Heartbeat, we can wait until the end of the current heartbeat after both the Label and the TextField have rendered and then get an accurate view of the field's client-side id. Since Tapestry renders a DOM (not a simple text stream) we can modify the Label's DOM Element after the fact.

Without the meta-programming, it looks like this:

    @Environmental
    private Heartbeat heartbeat;

    private Element labelElement;

    boolean beginRender(MarkupWriter writer)
    {
        final Field field = this.field;

        decorator.beforeLabel(field);

        labelElement = writer.element("label");

        resources.renderInformalParameters(writer);

        Runnable command = new Runnable()
        {
            public void run()
            {
                String fieldId = field.getClientId();

                labelElement.forceAttributes("for", fieldId, "id", fieldId + "-label");

                decorator.insideLabel(field, labelElement);          
            }
        };
        
        heartbeat.defer(command);

        return !ignoreBody;
    }

See, we've gotten the active Heartbeat instance for this request and we provide a command, as a Runnable. We capture the label's Element in an instance variable, and force the values of the for (and id) attributes. Notice all the steps: inject the Heartbeat environmental, create the Runnable, and pass it to defer().

So where does the meta-programming come in? Well, since Java doesn't have closures, Tapestry has a pattern of using component methods for the same purpose. Following that line of reasoning, we can replace the Runnable instance with a method call that has special semantics, triggered by an annotation:

    private Element labelElement;

    boolean beginRender(MarkupWriter writer)
    {
        final Field field = this.field;

        decorator.beforeLabel(field);

        labelElement = writer.element("label");

        resources.renderInformalParameters(writer);

        updateAttributes();

        return !ignoreBody;
    }

    @HeartbeatDeferred
    private void updateAttributes()
    {
        String fieldId = field.getClientId();

        labelElement.forceAttributes("for", fieldId, "id", fieldId + "-label");

        decorator.insideLabel(field, labelElement);
    }

See what's gone on here? We invoke updateAttributes, but because of this new annotation, @HeartbeatDeferred, the code doesn't execute immediately; it waits for the end of the current heartbeat.

What's more surprising is how little code is necessary to accomplish this. First, the new annotation:

@Target(ElementType.METHOD)
@Retention(RUNTIME)
@Documented
@UseWith(
{ COMPONENT, MIXIN, PAGE })
public @interface HeartbeatDeferred
{

}

The @UseWith annotation is for documentation purposes only, to make it clear that this annotation is for use with components, pages and mixins ... but can't be expected to work elsewhere, such as in services layer objects.

Next we need the actual meta-programming code. Component meta-programming is accomplished by classes that implement the ComponentClassTransformWorker interface.

public class HeartbeatDeferredWorker implements ComponentClassTransformWorker
{
  private final Heartbeat heartbeat;

  private final ComponentMethodAdvice deferredAdvice = new ComponentMethodAdvice()
  {
    public void advise(final ComponentMethodInvocation invocation)
    {
      heartbeat.defer(new Runnable()
      {

        public void run()
        {
          invocation.proceed();
        }
      });
    }
  };

  public HeartbeatDeferredWorker(Heartbeat heartbeat)
  {
    this.heartbeat = heartbeat;
  }

  public void transform(ClassTransformation transformation, MutableComponentModel model)
  {
    for (TransformMethod method : transformation.matchMethodsWithAnnotation(HeartbeatDeferred.class))
    {
      deferMethodInvocations(method);
    }
  }

  void deferMethodInvocations(TransformMethod method)
  {
    validateVoid(method);

    validateNoCheckedExceptions(method);

    method.addAdvice(deferredAdvice);

  }

  private void validateNoCheckedExceptions(TransformMethod method)
  {
    if (method.getSignature().getExceptionTypes().length > 0)
      throw new RuntimeException(
          String
              .format(
                  "Method %s is not compatible with the @HeartbeatDeferred annotation, as it throws checked exceptions.",
                  method.getMethodIdentifier()));
  }

  private void validateVoid(TransformMethod method)
  {
    if (!method.getSignature().getReturnType().equals("void"))
      throw new RuntimeException(String.format(
          "Method %s is not compatible with the @HeartbeatDeferred annotation, as it is not a void method.",
          method.getMethodIdentifier()));
  }
}

It all comes down to method advice. We can provide method advice that executes around the call to the annotated method.

When the advice is triggered, it does not call invocation.proceed() immediately (which would continue on to the original method). Instead, it builds a Runnable command and defers it into the Heartbeat. When that command is eventually executed, the invocation proceeds and the annotated method is finally invoked.

That just leaves a bit of configuration code to wire this up. Tapestry uses a chain-of-command to identify all the different workers (there's more than a dozen built in) that get their chance to transform component classes. Since HeartbeatDeferredWorker is part of Tapestry, we need to extend contributeComponentClassTransformWorker() in TapestryModule:

  public static void contributeComponentClassTransformWorker(
      OrderedConfiguration<ComponentClassTransformWorker> configuration)
  {

    ...

    configuration.addInstance("HeartbeatDeferred", HeartbeatDeferredWorker.class, "after:RenderPhase");
  }

Meta-programming gives you the ability to change the semantics of Java programs and eliminate boilerplate code while you're at it. Because Tapestry is a managed environment (it loads, transforms and instantiates the component classes) it is a great platform for meta-programming. Whether your concerns are security, caching, monitoring, parallelization or something else entirely, Tapestry gives you the facilities you need to move Java from what it is to what you would like it to be.

Tuesday, March 30, 2010

In The Brain Of Howard Lewis Ship

While I'm in London for three days of Tapestry 5 Training, I'll also be giving an evening In The Brain Of talk ... on Tapestry, because there's not that much else rattling around my brain lately. Whereas Ben's talk was about lessons learned at the tail end of a Tapestry project, my talk gives you a point of reference on what Tapestry is all about, and why you want to start using it.

Swing by, take in the talk, and come on out for a pint or two! The talk is Tuesday, April 13th at 18:30.

Saturday, March 27, 2010

Getting Ready for SkillsMatter Training

Talk about putting too much on your plate ... I'm working to update my existing Tapestry workshop to release 5.2, even though it is an alpha snapshot. The new features, especially live class reloading in the services layer, are just too useful to ignore.

Thursday, March 25, 2010

Improvements to Live Class Reloading in 5.2

I just spent the morning making some improvements and fixes to live class reloading. The umbrella for reloading has now been extended to any class that Tapestry instantiates ... whether that is via ObjectLocator.proxy() or any variation of Configuration.addInstance(). In all those cases, when Tapestry is doing the instantiating, and there's a service interface, what gets created is a reloadable proxy ... as long as there is a class file available.
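For instance, here is a minimal sketch with hypothetical Indexer / IndexerImpl types (and assuming the interface-plus-implementation form of ObjectLocator.proxy()); a builder method that hands back such a proxy now gives you an implementation that reloads during development:

  public static Indexer buildIndexer(ObjectLocator locator)
  {
    // Tapestry creates the proxy and instantiates IndexerImpl itself, so when
    // IndexerImpl.class changes on disk, the old class (and its class loader)
    // can be discarded and the new one loaded in its place.
    return locator.proxy(Indexer.class, IndexerImpl.class);
  }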

The interesting part is that for development you get the live class reloading behavior. In production, you will not, because all the classes will be packaged up in JARs, not on the file system.

Tapestry Training Returns to London: April 14 - 16, 2010

I'll be returning to SkillsMatter to teach my Tapestry workshop. The course runs from April 14th to the 16th. I had a great time teaching the course back in February: SkillsMatter has a great facility, just perfect for a hands-on class like this one.

Monday, March 15, 2010

Procrastination and JavaOne 2010: See you in 2011!

Well, that's what I get for waiting until the last day ... by the time I had a chance to put together a submission or two for JavaOne 2010, the site was down (from about 10pm on). Kind of frustrating, I was looking forward to talking about Tapestry and Clojure (my main speaking staples) in front of another big crowd.

Friday, March 12, 2010

Live Service Reloading in Tapestry 5.2

A common question I get during Tapestry training sessions is: Why can't Tapestry reload my services as well as my pages and components? It does seem odd that I talk about how agile Tapestry is, with the live class reloading, and how nicely OO it is, what with services ... but when you move common logic to a service, you lose the agility, because services do not live reload.

This came up yet again, during my latest training session, in London.

I've considered this before, and I've been opposed to live service reloading for a couple of reasons. First, live reloading requires creating new class loaders, and that causes conflicts with other frameworks and libraries. You get those crazy ClassCastExceptions even though the class name is correct (same name, different class loader, different class). Further, in Tapestry IoC, services can be utilized to make contributions to other services ... changing one service implementation, or one module, can cause a ripple effect across an untraceable number of other services. How do you know what needs to be reloaded or re-initialized?

When I last really considered this, back in the HiveMind days, my conclusion was that it was not possible to create a perfect reloading process: one that would ensure that the live-reloaded Registry (and all of its services with all their internal state) would be an exact match for what configuration you'd get by doing a cold restart.

So I shelved the idea, thinking that simply redeploying the WAR containing the application (and the services and modules) would accomplish the desired effect.

But as they say, The Perfect Is The Enemy Of The Good. One very sharp student, Andreas Pardeike, asked: Why not just reload the service implementations?

Why not indeed? Why not limit the behavior to something understandable, trackable, and not very expensive? Most of the infrastructure was already present, used for reloading of component classes. What about ClassCastExceptions? In Tapestry, service implementations are already buried under multiple layers of dynamically generated proxies that implement the service interface. The underlying service implementation is never automatically exposed.

A few hours of work later ... and we have live service reloading. Each reloadable service gets its own class loader, just to load the service implementation class. When Tapestry is periodically checking for updated files, it checks each reloadable service. If necessary, the existing instance, class and class loader are discarded, and a new class loader is created for the updated .class file.
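In practice, that means the common case, a service bound with a separate interface and implementation class, is the reloadable one. A minimal sketch, with hypothetical UserFinder / UserFinderImpl types:

  public static void bind(ServiceBinder binder)
  {
    // Tapestry instantiates UserFinderImpl itself, behind the UserFinder proxy,
    // so in development the implementation class (and its dedicated class
    // loader) can be thrown away and reloaded when the .class file changes.
    binder.bind(UserFinder.class, UserFinderImpl.class);
  }

Edit UserFinderImpl, save, and the periodic file check picks up the new class on the next request.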

This is going to make a big difference for me, and for most Tapestry developers. Both applications I'm working on have enough Hibernate entities and other clutter to take some time (20 - 30 seconds) to restart, and most functionality is hidden past a login page. Being able to change a service, for example to tweak a Hibernate query, with the same speed with which I can tweak a template or component class, is just one more thing to keep me in the flow and super productive.

Give it a try ... it's one more step towards making Tapestry so compelling, you wouldn't think of using anything else!

Thursday, March 11, 2010

Why Eclipse leaves me wanting

I think I've come to understand why Eclipse leaves me always feeling a bit frustrated. Yes, it is more stable than IDEA, uses less memory, has some documentation, and a lot of acceptance ... but even so, it just leaves me cold (and I was an early adopter, signed up for the beta way back in 2000!).

Keystrokes are not modal

The fact that I can type a common keystroke into an Eclipse window and not know what it will do is painful. How a keystroke is interpreted depends on what perspective is active, what view or editor has focus, and what kind of data is being edited in the editor. That's dead wrong; keystrokes are about muscle memory, and muscle memory remembers motion, not context. The end result is that I get frustrated hitting keystrokes and seeing nothing happening. It doesn't help that I cycle between Mac and a PC on most days.

You can't have it your way

A tool as powerful and extensible as Eclipse walks the tightrope of offering lots of features and customizations without overwhelming the user. Alas, Eclipse is lying in a broken heap fifty feet below that tightrope. Eclipse has an unending set of options and defaults for things I don't care about, but anything I do care about never seems to be present. Here are a few examples off the top of my head:

  • Stop running launches when I close the project (I often have to kill them from the command line)
  • Give me a quick way to stop all running launches
  • Why so many steps to implement an interface? It's the second most common thing I do!
  • How about a button to quickly relaunch the current running launch?
  • Why are the available refactorings so paltry and where are the 3rd party ones?

Who's eating their own dog food?

When I used IDEA, I was constantly struck by little details that showed that the IDE developers were also its prime users. For example, it has open-type and open-resource dialogs much like Eclipse ... but each recognizes the keystroke for the other, so that if you mistakenly activate the open-type dialog, you just hit the normal keystroke to switch over to open-resource. Eclipse makes you cancel the dialog first.

Another example: in IDEA if you rename a field, it notices the getter and setter and will offer to rename those as well.

IDEA also has lots of quick fixes everywhere, such as "implement this interface" and lots of other tiny, cool things I miss every single day I use Eclipse. It's been about a year since I gave up on IDEA and I still miss it.

Is it cultural or organizational? Eclipse gives me the impression that day-to-day developers either have no concept of how the IDE gets used (and what rough spots are causing some serious chafing) OR they are somehow prohibited from fixing things that are obviously wrong.

If you love IDEA so much why don't you marry it?

So why don't I use IDEA anymore? Two main reasons: first, it's become very bloated, to the point that unless you go in and shut off a ton of features, it's unusable on my hardware. Merlyn has the same problem doing GWT work on his MacBook Pro ... all the help it gives you comes at a cost in terms of CPU and memory utilization and some instability.

Secondly, I tried (even before IDEA went open source) to use IDEA in my training labs and I hit a stone wall of non-acceptance. Switching to Eclipse was a benefit to my students since, even running in Ubuntu instead of Windows, it was familiar and easy to navigate. It also out-performs IDEA inside my Ubuntu Virtual Machine. I simply lack the ability to switch between the two on a constant basis without getting completely confused and frustrated. I had to choose one, and I chose Eclipse: stable and accepted, even if it is brain dead.

Why call it Ugly?

One thing I don't get is how many people claim Eclipse is "ugly" and IDEA "beautiful". I found IDEA to be overly chock-full of modal dialogs and a number of improperly resized (or non-resizable) dialogs and windows. It's a real dog's breakfast in terms of UI, and has the classically ugly Swing look and feel.

I've always found Eclipse to look sharp and somewhat elegant. You can have a debate about the technical merits of SWT vs. AWT and Swing, or the ability to tune Swing to look like SWT ... but SWT out of the box is simply a better L&F visually.

On a Mac they both suck at keyboard navigation, though.

There, I've vented. See what going cold-turkey from Twitter can do?

Monday, March 08, 2010

Java Champion

You might call it petty, you might call it vain, but I've aspired to be recognized as a Java Champion for the last couple of years. The process by which you are selected for this is a bit secretive, but I've finally gotten the nod and joined the roster.

My larger goal for Tapestry has always been to create a web application platform so compelling that it would draw developers to the Java programming language, just to be able to use it. Of course, that's not so much a goal as it is a journey. Technologically, I think Tapestry has the chops to embrace that goal (or journey) ... and looking at current discussions and developments in the Tapestry world, I think the other critical areas where Tapestry is lagging (namely, Documentation and Marketing) may come around.

Want to do your part? Blog about Tapestry ... what you like, what you don't like, what's missing, and what's hidden.

Monday, February 22, 2010

March of Progress

Or should that be "Late February of Progress"? I have to say I'm a bit envious right now of Rich Hickey ... I can see that he's continuing on like a steam roller, extending and improving Clojure. I guess he's having some success in generating Research and Design budget from funding companies. I can see, following his threads, that he's working on yet more concurrency metaphors for Clojure, which is a good thing (though eventually there'll need to be a big book just to describe them all).

I'm on a different track, in that I fund Tapestry out of pocket while doing training and project work. In some cases, those merge, such as when I add specific features to Tapestry for a specific client.

I'm of two minds here: doing project work keeps me grounded in real requirements for Tapestry. I see what works really well, and what needs some polishing. On the other hand, I come up with ideas for new components, improvements, and integrations all the time and barely have enough free time (between clients, ordinary Tapestry maintenance, and this special project) to even document my ideas, never mind implement, test and distribute them.

So, should I set up a funding option like Rich's? Well, that wouldn't help my current clients (I'm committed to getting their apps into production), but it may change how I would look for future work.

Friday, February 19, 2010

Evolving the Meta Programming in Tapestry 5

I've set a goal of removing Javassist from Tapestry 5 and I've made some nice advances on that front. Tapestry uses Javassist inside the web framework layer to load and transform component classes.

All that code is now rewritten to updated APIs that no longer directly expose Javassist technology. In other words, where in the past the transformer code would write pseudo-Java and add it to a method using Javassist (for example, adding value = null; to the containingPageDidDetach() method), Tapestry 5.2 will instead add advice to the (otherwise empty) containingPageDidDetach() method, and the advice will use a FieldAccess object to set the value field to null.

Basically, I've vastly reduced the number of operations possible using the ClassTransformation API. Before, it was pretty much unbounded due to the expressive power of Javassist. Now a small set of operations exist that can be combined into any number of desired behaviors:

  • Add new implemented interfaces to a component Class
  • Add new fields to a Class
  • Initialize the value of a field to a fixed value, or via a per-instance callback
  • Delegate read and write access to a field to a provided FieldValueConduit delegate
  • Add new methods to a component Class with empty implementations
  • Add advice to any method of a class
  • Create a MethodAccess object from a method, to allow a method to be invoked (regardless of visibility)
  • Create a FieldAccess object from a field, to allow the field to be read or updated (regardless of visibility)

What's amazing is that these few operations, combined in different ways, support all the different meta-programming possible in Tapestry 5.1. There are costs and benefits to this new approach.
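To give a feel for how far those few operations stretch, here is a sketch, entirely hypothetical and not part of Tapestry, of a @Monitored annotation that times annotated component methods. It combines just two of the operations above (match methods by annotation, add advice), using the same API as the HeartbeatDeferred worker shown earlier on this page (imports omitted, as in the other snippets; the logger here is plain SLF4J):

public class MonitoredWorker implements ComponentClassTransformWorker
{
  private final Logger logger = LoggerFactory.getLogger(MonitoredWorker.class);

  public void transform(ClassTransformation transformation, MutableComponentModel model)
  {
    for (TransformMethod method : transformation.matchMethodsWithAnnotation(Monitored.class))
    {
      final String id = method.getMethodIdentifier();

      method.addAdvice(new ComponentMethodAdvice()
      {
        public void advise(ComponentMethodInvocation invocation)
        {
          long start = System.nanoTime();

          invocation.proceed();

          logger.info(String.format("%s took %,d ns", id, System.nanoTime() - start));
        }
      });
    }
  }
}

Such a worker would be contributed to the ComponentClassTransformWorker configuration just like the HeartbeatDeferred worker.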

Costs

There will be many more objects associated with each component class: new objects to represent advice on methods, and new objects to provide access to private component fields and methods.

Javassist could be brutally efficient; the new approach adds several layers of method invocation that were not present in 5.1.

Incorrect use of method advice can corrupt or disable logic provided by the framework and is hard to debug.

Benefits

We can eventually switch out Javassist for a more stable, more mainstream, better supported framework such as ASM. ASM should have superior performance to Javassist (no tedious Java-ish parse and compile, just raw bytecode manipulation).

The amount of generated bytecode is lower in many cases: fewer methods and fields to accomplish the same behavior.

The generated bytecode is more regular across different uses: fewer edge cases and less untested generated bytecode.

Key logic returns to "normal" code space, rather than being indirectly generated into "Javassist" code space ... this is easier to debug as there's some place to put your breakpoints!

Summary

Overall, I'm pretty happy with what's been put together so far. In the long run, we'll trade instantiation of long lived objects for dynamic bytecode generation. There's much more room to create ways to optimize memory and overall resource utilization, and the coding model is similar (closures and callbacks vs. indirect programming via Javassist script). I'm liking it!

Thursday, February 11, 2010

Live reloading of Tapestry services?

During today's Tapestry Training at SkillsMatter, the question about live class reloading for Tapestry services came up.

Now, my normal response is to talk about class loaders, and mysterious class-cast exceptions it would cause, and the need to shut down and restart the container, etc.

But an idea went around ... what about just live reloading of implementation classes? That sparked some thoughts.

See, it seems to me that it should be possible to create a class loader that loads a single class (and, perhaps, inner classes of that single class), much as Tapestry uses a class loader to load pages and components.

In fact, it should be possible to have separate class loader for every implementation class that just performs the reload of that one class. A periodic check of the file modification date stamps could trigger the release of the class loader (and the current instance) and the instantiation of a new class loader, and the loading of the updated class.

You wouldn't be able to change service interfaces this way, or module classes (including contributions and the like) ... but changing a service implementation should be a snap. This would especially be useful for DAOs while creating and tuning database queries.

I think there would be some limitations here: services that are built via builder methods would not work; neither would services that export this (typically, as a listener to events published by another service). However, the vast majority of services could, I think, be automatically reloaded.
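To make that limitation concrete, here is a sketch (with hypothetical names) of the builder-method case that would not be reloadable:

  public static StatementDao buildStatementDao(Logger logger)
  {
    // The module code invokes the constructor directly; Tapestry only ever
    // sees the finished instance, so there is no implementation class for it
    // to discard and reload.
    return new StatementDaoImpl(logger);
  }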

This is worth spending some time on ... if I can pull it off, it would be an incredible coup!

The only downside is that some services may need to move from tapestry-core to tapestry-ioc and some of those may, in fact, be public already (but not widely used).

Devoxx: Clojure Talk Now Available

A full video of my Devoxx 2009 talk, Clojure: Functional Concurrency for the JVM, is now available.

The talk runs about 40 minutes and does not include the questions and answers from the end. You can see I was just a touch jetlagged and perhaps awed by the size of the rooms at Devoxx ... they are full scale movie theaters, with gigantic screens above, and powerful lights shining directly in your eyes. Worse, I was getting terrible feedback for the first few minutes.

Don't forget to rate the talk at SpeakerRate!

In any case, I've evolved this talk quite a bit since last November, and the current version focuses on the language itself rather than the concurrency features of Clojure. That's the talk I just gave in Paris.

Meanwhile, my first Tapestry training at SkillsMatter continues (tomorrow is day three of three) and it's just been a lot of fun. It's interesting to see that the students are gaining at the extremes: so many people miss basic features like the ?. operator of the property expression language, or how dependencies work in the IoC container ... and the very same people are really interested in the advanced meta-programming and AOP techniques available to Tapestry.

Monday, February 08, 2010

Paris Clojure Talk

I had a terrific time spreading the word about Clojure tonight, followed by some fun and spirited discussions over dinner. People are intrigued by Clojure, even as they struggle with a strategy for bringing it into their organization.

If you attended, please rate my talk at SpeakerRate!

Committed to Tapestry

Quite a few people have commented on Ten Years of Tapestry, many to note some of the many other great projects being built with Tapestry as a foundation.

We keep a list of tutorials and extensions on the Tapestry home page, with many other sites noted on the wiki (here and here).

Meanwhile, a particular comment from Peter Rietzler was so compelling, it deserves to be top level, so here it goes:

Although our web application uses Tapestry we are using all the Tapestry support stuff far beyond the web tier - I thought that it might be interesting to see that the Tapestry framework is pretty useful in other environments too :)

First and most noteworthy Tapestry IOC got our first choice as dependency injection container, both because of its simplicity and the power of contributions and service overrides. We are building a highly modular (and massively unit- and integration- tested) application where Tapestry IOC's concept of modules and contributions has proven a perfect choice for us. I've written a couple of blog entries about this issue about a year ago when we were searching for a light-weight alternative to OSGi: Is OSGi going to become the next EJB ? and How to Design Software for Flexibility, Reusability and Scalability without loosing KISS principles!

We wrote and contributed Tapestry extensions for popular unit testing frameworks: Unitils (included in next release) and the Spock framework.

Additionally we are heavily using Tapestry services (such as pipelines and chains) in our core services.

Even the type coercion infrastructure has proven very useful for us. We developed a quite powerful Groovy DSL for Enterprise Data Mediation which is targeted to non-developers and we use Tapestry type coercion (with some extensions) tightly embedded in our DSL to free our e-business managers from the burden of providing correct types.

Our whole project heavily relies on small contributions of commands that are instantiated in high volumes at runtime and need environmental stuff injected - another point where Tapestry IOC has proven to be very useful.

Cheers and many thanks for your awesome work,

Peter

btw: I've forgot to mention that we presented our module system with Tapestry IOC at the Austrian Enterprise Java User Group meeting along with another talk about Spring DM and OSGi held by Sam Brannen last autumn.

Thursday, February 04, 2010

Ten Years of Tapestry

I recently realized that the first prototype of Tapestry was written ten years ago! It all started as a home project in my living room, with the original inspiration coming from some brief exposure to WebObjects.

Even the "new" codebase, Tapestry 5, is well over three years old at this point.

How long can I ride this dragon? Pretty far, I think ... Tapestry keeps getting better, I keep learning new things, and the community keeps growing. I'm also very impressed by the other Tapestry committers, who have really been stepping up to the plate, not just with code, but with infrastructure issues and the backporting of bug fixes.

I think there are a lot of exciting things afoot in the larger Tapestry world right now. Powerful new features are in the 5.2 code base (still in alpha), including enhancements for JSR-303 (bean validation) and a lot of (backwards compatible) changes to the way component classes are enhanced at runtime. I'm also steaming ahead with a number of big improvements to how JavaScript is organized in the rendered page.

Outside of the core project, there's quite a lot going on. Here's a few things that have caught my attention recently:

First off, there's Wooki, a sizable Tapestry application (open source, on GitHub) for collaborative book writing. It's very pretty to look at, and the code looks quite ship-shape (no pun intended). I think Wooki is not only going to prove useful on its own terms, but is also going to serve as a great example code base for Tapestry.

Next up is Tynamo ... think Rails/Grails meets Tapestry. It's an extension to Tapestry that supports even faster RAD development, automatically creating CRUD (Create Read Update Delete) pages for Hibernate entities. These same people have been building REST support for Tapestry as well as conversational state. Lots of good stuff here (though I haven't had a chance to try it out in detail).

I've been busy with my own Tapestry Extensions project at GitHub. I'm in a lucky space ... I'm adding features to Tapestry and TapX to fit my client's needs.

We're also seeing the deployments of some very large Tapestry 5 applications, such as SeeSaw which is the UK's answer to Hulu ... streaming video on demand. This is expected to be one of the highest bandwidth sites in Europe once it leaves beta.

The shame of it is ... I'm just the creator of the framework; I don't know 1% of what's going on with applications developed in Tapestry. If you are working on something cool, please drop me a line!