FINN tech blog

Posts Tagged ‘front-end’

Profiling and debugging view templates

Ever needed to profile the tree of JSPs rendered server-side? Most companies do, and I’ve seen elaborate and rather convoluted ways of doing so.

With Tiles-3 you can use the PublisherRenderer to profile and debug not just the tree of JSPs but the full tree of any view templates rendered, whether they be JSP, Velocity, FreeMarker, or Mustache.

At FINN all web pages print such a tree at the bottom of the page. This helps us see what templates were involved in the rendering of that page, and which templates are slow to render.

We also embed wrapping comments into the html source, like

<!-- start: frontpage_geoUserData.jsp -->
...template output...
<!-- end: frontpage_geoUserData.jsp :it took: 2ms-->

The code please

To do this, register and then attach your own listener to the PublisherRenderer. For example, in your TilesContainerFactory (the class you extend to set up and configure Tiles), add something like this to the method createTemplateAttributeRenderer:

    protected Renderer createTemplateAttributeRenderer(BasicRendererFactory rendererFactory, ApplicationContext applicationContext, TilesContainer container, AttributeEvaluatorFactory attributeEvaluatorFactory) {
        Renderer renderer = super.createTemplateAttributeRenderer(rendererFactory, applicationContext, container, attributeEvaluatorFactory);
        PublisherRenderer publisherRenderer = new PublisherRenderer(renderer);
        publisherRenderer.addListener(new MyListener());
        return publisherRenderer;
    }

Then implement your own listener. This implementation writes just the wrapping comments with profiling information:

class MyListener implements PublisherRenderer.RendererListener {

    public void start(String template, Request request) throws IOException {
        boolean first = null == request.getContext("request").get("started");
        if (!first) {
            // the first check avoids writing before a template's doctype tag
            request.getPrintWriter().println("\n<!-- start: " + template + " -->");
        } else {
            request.getContext("request").put("started", Boolean.TRUE);
        }
        startStopWatch(request);
    }

    public void end(String template, Request request) throws IOException {
        Long time = stopStopWatch(request);
        if (null != time) {
            request.getPrintWriter().println("\n<!-- end: " + template
                                         + " :it took: " + time + "ms -->");
        }
    }

    private void startStopWatch(Request request) {
        Deque<StopWatch> stack = (Deque<StopWatch>) request.getContext("request").get("stack");
        if (null == stack) {
            stack = new ArrayDeque<StopWatch>();
            request.getContext("request").put("stack", stack);
        }
        StopWatch watch = new StopWatch();
        watch.start();
        stack.push(watch);
    }

    private Long stopStopWatch(Request request) {
        Deque<StopWatch> stack = (Deque<StopWatch>) request.getContext("request").get("stack");
        return 0 < stack.size() ? stack.pop().getTime() : null;
    }
}

It’s easy to see the possibilities for both simple and complex profiling that open up here, all while staying agnostic to the language of each particular template. Learn more about Tiles-3 here.
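The per-request stack of stopwatches that makes the nested timings work can be sketched in plain Java, independent of the Tiles API. Class and method names here are illustrative, not part of Tiles:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: nested start/end calls produce an indented render tree
// with per-template timings, mirroring what the Tiles listener writes out.
class TemplateTimer {
    private final Deque<Long> stack = new ArrayDeque<Long>();
    private final StringBuilder log = new StringBuilder();

    void start(String template) {
        // indentation depth equals how many templates are currently rendering
        log.append(indent()).append("start: ").append(template).append('\n');
        stack.push(System.nanoTime());
    }

    void end(String template) {
        long elapsedMs = (System.nanoTime() - stack.pop()) / 1000000L;
        log.append(indent()).append("end: ").append(template)
           .append(" took ").append(elapsedMs).append("ms").append('\n');
    }

    private String indent() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < stack.size(); i++) sb.append("  ");
        return sb.toString();
    }

    String report() {
        return log.toString();
    }
}
```

Calling start/end around each template render yields exactly the kind of indented tree we print at the bottom of the page.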

Putting a mustache on Tiles-3

We’re proud to see a contribution from one of our developers end up in the Tiles-3 release!

The front-end architecture of FINN.no is evolving to be a lot more advanced, and a lot more work is being done by client-side scripts. In order to maintain first-time rendering speeds and to prevent duplicating template code, we needed something which allowed us to reuse templates both client- and server-side. This is where mustache templates have come into play. We could’ve gone ahead and done a large template framework review, like others have done, but we instead opted to just solve the problem with the technology we already had.

Morten Lied Johansen’s contribution allows Tiles-3 to render mustache templates. Existing jsp templates can be rewritten into mustache without having to touch surrounding templates or code!

The code please

To get Tiles-3 to do this, include the tiles-request-mustache library and configure your TilesContainerFactory like:

    protected void registerAttributeRenderers(...) {
        ...
        MustacheRenderer mustacheRenderer = new MustacheRenderer();
        rendererFactory.registerRenderer("mustache", mustacheRenderer);
    }

    protected Renderer createTemplateAttributeRenderer(...) {
        final ChainedDelegateRenderer chainedRenderer = new ChainedDelegateRenderer();
        // add each of the registered renderers, eg jsp and mustache, to the chain
        ...
        return chainedRenderer;
    }

then you’re free to replace existing tiles attributes like

<put-attribute name="my_template" value="/WEB-INF/my_template.jsp"/>

with stuff like

<put-attribute name="my_template" value="/my_template.mustache"/>
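For illustration, a hypothetical /my_template.mustache serving the same slot might look like the fragment below; the same template file can then also be rendered by client-side scripts:

```mustache
<h1>{{title}}</h1>
<ul>
  {{#items}}
  <li>{{name}}</li>
  {{/items}}
</ul>
```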

Good stuff FINN!

XSS protection: whose responsibility?

In a multi-tier application who can be responsible for XSS protection?
Must security belong to a dedicated team…or can it be a shared responsibility?
Today XSS protection is typically tackled by front end developers.
 Let’s challenge the status quo.  

New Applications vs Legacy Applications

For protection against Stored XSS many applications have the luxury of ensuring any text input from the user, or from a CMS system, is made clean and safe before being written to database. Given a clean database the only XSS protection required is around request values, for example values from url parameters, cookies, and form data.


But some applications, especially legacy applications, are in a different boat. Databases typically have lots of existing data in many different formats and tables, so it’s often no longer feasible to focus on protecting data on its way into the system. In this situation it is the front end developers who pay the price for the poor quality of backend data and are left to protect everything. This often results in a napalm-the-whole-forest style of XSS protection, where every single variable written out in the front end templates goes through some equivalent of

            <c:out value="${someText}"/>

This makes sense, but… if you don’t have control, is your only option to be so paranoid?

A Messed up World

To illustrate the problem let’s create a simple example by defining the following service

  interface AdvertisementService {
    Advertisement getAdvertisement(long id);
  }

  interface Advertisement {
    /** returns plain text title */
    String getTitle();
    /** returns the description, which may contain html if isDescriptionHtml() returns true */
    String getDescription();
    /** indicates the description is html */
    boolean isDescriptionHtml();
  }

The web application, having already fetched an advertisement in the control tier, somewhere would have a view template looking something like

        <h1><c:out value="${advertisement.title}"/></h1>
        <p>
            <c:out value="${advertisement.description}"
                   escapeXml="${!advertisement.descriptionHtml}"/>
        </p>
Here we add another dimension: simple escaping with c:out won’t work if you actually want to write html (and what is the safety and quality of such html data?).

When this service is used by different applications, each with its own view templates, and maybe also exposed through web services, you will no doubt end up protecting it many times over: system developers in the web services, and front end developers in each of the presentation layers. There will likely be confusion over the safety and quality of the data coming from this service, and of course everyone will do it differently, so nothing will be consistent.

  • Is there a better way?
  • Can we achieve a consistent XSS protection in such an environment?
  • How many developers need to know about XSS and about the origin of the data being served?

In the above code, if we can guarantee that the service _always_ returns safe data, then we can simplify it by removing the isDescriptionHtml() method. The same code and view template would become

  interface AdvertisementService {
    Advertisement getAdvertisement(long id);
  }

  /** All fields xss protected */
  interface Advertisement {
    /** returns plain text title */
    String getTitle();
    /** returns the description, which may contain safe html */
    String getDescription();
  }

By introducing one constraint, that all data is XSS protected in the services tier, we have provided a simpler and more solid service API, and allowed all applications to have simpler, more concise, more readable view templates.


With a clean database, all non-html data can be escaped as it comes in. Take advantage of Apache’s StringEscapeUtils.escapeHtml(..) from the commons-lang library. For incoming html one can take advantage of a rich html manipulation tool, like JSoup, to clean, normalise, and standardise it.
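To make the effect concrete, here is a hand-rolled sketch of what such escaping does for the characters that matter most for XSS; the real StringEscapeUtils.escapeHtml(..) covers the full entity table, so use the library rather than this:

```java
// Minimal, illustrative sketch of html escaping for XSS-relevant characters.
// StringEscapeUtils.escapeHtml(..) in commons-lang handles many more entities.
class EscapeDemo {
    static String escapeHtml(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '<': sb.append("&lt;");   break;
                case '>': sb.append("&gt;");   break;
                case '&': sb.append("&amp;");  break;
                case '"': sb.append("&quot;"); break;
                default:  sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With the dangerous characters neutralised on the way in, the stored text can be written out verbatim later.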

With legacy or foreign data, especially in applications with an exposed service architecture and/or multiple front ends, a different approach is best: ensure nothing unsafe ever comes out of the services tier. For html data the services will often be filtering many snippets of html over and over again, so this needs to be fast, and a heavy html manipulation library like JSoup isn’t appropriate any more.
A suitable library is xss-html-filter, a port from libfilter. It is fast and has an easy API

String safeHtml = new HTMLInputFilter().filter( "some html" );

If we do this it means:

  • XSS protection is not duplicated; instead it is clear who is responsible for what,
  • c:out becomes just junk, in both verbosity and performance*,
  • service APIs become simpler,
  • view templates look the same for both new and legacy applications,
  • system developers become responsible for protecting database data, which creates a natural incentive for them to clean up existing data and to ensure new data comes in safe.

* No matter what, an inescapable fact is that all front end developers must have a concrete understanding that any value fresh from the request is prone to Reflected XSS, and nobody but them can be responsible for protecting these values.
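For instance, a request value echoed straight back must be escaped in the template itself; a minimal JSP illustration (the parameter name q is only illustrative):

```jsp
<%-- anything fresh from the request may carry a script payload --%>
You searched for: <c:out value="${param.q}"/>
```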

  XSS protection has become a basic knowledge requirement for all developers.
    Like most things in security… it’s always only as strong as its weakest link.

At FINN, because we take security seriously, and because we know we are only human and need some room for the occasional mistake, we run security audits for each release. These include using WatchCom reports, tools like Acunetix, and custom tests using Cucumber.

Multiple versions of Chrome on OS X

As a professional web developer you should use the bleeding-edge version of Google Chrome, so you’re prepared for what your users will see a few weeks ahead. But the dev version tends to be a bit more buggy (naturally), and you have to test that your page works with both the beta and the stable version. Google Chrome has a command-line parameter to specify another profile, but it’s a bit tricky to add command-line parameters to a Mac application. Yesterday I found a blog post by Duo Consulting. They have made a nice little script that generates an app with the profile you specify. I modified it a bit to allow running different versions of Google Chrome, as well as a profile for each of them.

To make the install process even easier for others I’ve zipped the generated applications (they just contain a script).

Disclaimer: Each version will have its own profile, so you have to set up each one from scratch with bookmarks, plugins and everything else. As Scott mentions in the comments, you can use the sync feature in Chrome to keep the installations in sync.

  1. First download each version and rename the original Google Chrome apps to “Google Chrome Stable”, “Google Chrome Beta” and “Google Chrome Dev” and put them in Applications under your user folder.
    I.e.: /Users/gregers/Applications/
  2. Unzip and put the apps in /Applications
  3. Don’t select any of them as default browser, since that will use the default profile instead of the custom one. Instead go to Safari -> Preferences -> General -> Default web browser -> Select… -> Go to /Applications and choose the version of Chrome you want as default :)

If you’re interested in the script I used to make the wrapper-apps, you can find it here:

The Great Leap – getting serious with JavaScript at FINN

We at FINN are blessed to be based in the same country as one of the brightest young stars when it comes to testing in JavaScript and applying Test Driven Development: Christian Johansen (the poster above, “Krafttak for JavaScript”, was used to promote the event).
He works as a developer at Gitorious during the daytime, but he also cranks out frameworks like SinonJS, and he has even published a book on how to do Test Driven Development in JavaScript.

Driving your design with tests

We first had an amazing live coding session which he had previously done at FrontTrends. The task was to create a type-ahead widget which sent requests to a server if the delay was more than 50 milliseconds. Driving the design of an HTML widget with tests is just as awesome as doing the same thing with JUnit on the back-end. You get a nice set of really simple objects which provide you with a set of tools to create the functionality you want. This is very different from the one-object-with-everything kind of code you see a lot of when you do not focus on design before you code. You can accomplish good clean code without

Refactor some legacy code with tests

We have just recently ported parts of our platform related to advanced search to a new Java framework, but we did not port much of the JavaScript code. The only thing we have done is to prevent it from flooding the global scope and to extract the JS code into a separate file. Christian gave us the challenge of trying to test this code. This was by no means an easy task.
The code was not written with testing in mind. It has some weird coding errors which for some reason do not produce errors in the user interface. Stuff like a function which one would expect to return a boolean, but when we tried to assertTrue on the result of a validation it failed and we couldn’t see why.

function validateForm() {
    var valid = true;
    // the selector is illustrative; the original iterated the form's inputs
    $("input").each(function() {
        valid &= validateInput($(this));
    });
    return valid;
}

if (validateForm()) { ....}

That was until we noticed the bitwise AND operator used to update valid, which yields 0 or 1, not true or false as you might expect since the valid variable is initialized with true.
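A quick illustration of the coercion (the helper name is made up for the demo):

```javascript
// valid &= r coerces both operands to numbers, so the result is 0 or 1,
// never true or false — which is why assertTrue(validateForm()) surprised us.
function demoValid(results) {
    var valid = true;
    results.forEach(function (r) {
        valid &= r;   // becomes Number(valid) & Number(r)
    });
    return valid;     // 1 or 0, even though it started as the boolean true
}
```

So even when everything validates, demoValid([true, true]) returns 1, and a strict comparison with true fails.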

Two wrongs make a right?

To proceed with testing this advanced search field we had to assert whether the red border was on or not. This was the only thing visible from outside of the object. We hit a bit of a snag when it came to the css function in jQuery, which it turns out does not return a color if you pass in just border. However, if you pass in border-left-color you get the result rgb(204, 0, 0).

function validateInput(element) {
    var val = element.val();
    if (val.length > 0 && !val.match(/^\d+$/)) {
        element.css({border: "1px solid #CC0000"});
        element.focus(function() {
            element.css({border: "1px solid #BBBBCC"});
        });
        return false;
    }
    return true;
}

These are just a few of the things we needed to figure out in order to add a test harness to some of our legacy code. It is very, very hard work and you really need some cojones to start doing it. It was truly inspiring to have Christian over, and we just have no excuse anymore for not writing tests for our JavaScript code. This is a huge leap forward in terms of getting serious with our JavaScript code.

Developing for mobile is punk rock!

The title of this post is of course a blatant lie. Mobile development is nothing but pain from here until eternity (no, not really). But luckily the brave developers at FINN aren’t scared of a little pain, and therefore we decided to do a mobile development triple-header gig to showcase some of the different approaches to mobile development. Due to some kids falling ill we had to reduce the show to a double header. It did not end up making a difference, as the guys doing the talks totally kicked ass!

Oh, and the picture on the left is a flyer we put up everywhere to market the event at the headquarters. Finnfrontend is the name of one of our communities of practice, dedicated to front-end development. Unfortunately, with the event reduced to a double header, we missed out on getting the low-down on native iOS app development.

Mobile web on FINN.no

Frank and Sven Inge from Team Mobile at FINN presented the beta for our mobile web service. As previously announced, the beta is now available for testing and feedback. We are very excited about this application and we feel that it takes mobile web to a new level here in Norway.

Frank presented some of the plans going forward with the beta (which I am not going to tell here). He also shared some of the experiences and pains with developing for mobile devices. Naturally there are quite a few challenges with compatibility and providing a consistent look and feel across multiple platforms. This has been the bread and butter for web developers for years and it is no surprise that it is hard on mobile devices.

The Team also talked about some of the design decisions they have made and why they ended up solving a challenge a certain way. Choosing to display all information and letting the user scroll as opposed to applying the show/hide-style functionality we are used to from the desktop web seems like a really good choice.

The mobile web solution looks awesome and we can not wait to see the final version! I can only say that it definitely will ship before the 17th of May!

App development with HTML+CSS and Phonegap

Audun Kjelstrup from Nordaker shared his experiences with mobile app development using HTML+CSS+JavaScript for multiple platforms using Phonegap. Drawing on his past experiences with developing the Dolly Dimple pizza configurator for iPhone and the NHO Conference app for iOS, Android and Symbian he covered some of the challenges with this kind of development.

Audun recommended going for a web based app if you want a user experience which differs a lot from what comes out of the iOS UIKit. This was mainly due to the cost of development and the work required to create custom experiences such as the Dolly Dimple pizza configurator. He also pointed out that web based apps will always struggle to match the snappiness and scroll smoothness of a native application. Creating a web based app for multiple platforms is hard work (no real surprise there), and having to deal with older Symbian based phones was a lot of pain (no real surprise there either), so you should consider carefully which platforms you target.
The slides from this presentation can be downloaded from the Cocoa Heads Oslo Meetup where it was held some days ago.

This was an awesome session with two great talks which highlighted some of the challenges with creating applications for mobile devices across multiple platforms. Thanks to Audun and Team Mobile!

Graded browser support at FINN

Inspired by the kick-ass developers at Yahoo! and their work on Graded Browser Support (GBS), we at FINN decided to adopt graded browser support as a way to communicate what level of support we have for different combinations of browsers and platforms.

Creating the support matrix

FINN is the largest site in Norway when it comes to traffic, and we have a good framework for statistics. In order to create the support matrix we took the statistics for the most popular browser and platform combinations and put them in a grid. This gave us a matrix showing which browser/platform combinations we need to consider from a business perspective. Although this was a good start, we also needed to figure out what level of support we should provide each combination. Numbers and usage alone do not provide a good enough basis for setting support levels.

The cost of support

Having read the early drafts of the awesome book Secrets of the JavaScript Ninja by John Resig, we decided to follow his approach to creating a GBS matrix and perform a cost-benefit analysis. What we did was a quick survey among our developers to figure out what the costs were for supporting different user agents on certain platforms. The results are displayed in the chart below and show our subjective opinions on how much effort is required to support a certain browser. Note that this chart will not look the same for everyone. It will vary based on the skills of your developers, what browsers they work in, etc, etc.

Support levels

Our levels of support are somewhat simpler than those of Yahoo!; we provide only three levels:

  • A-support: no visual or functional errors, all errors are reported and all features should be tested for each release.
  • B-support: no functional errors, core features should be tested for each release, visual errors are only reported with a low priority
  • C-support: no functional errors in core features, all other errors are not reported.

This is how we created our GBS matrix, and we hope it will inspire you to do the same for your shop. We are going to do a similar exercise for mobile, but that is still a work in progress.

The matrix is available on and here is an English translation.

the ultimate view

This article has been rewritten for Tiles-3.

A story of getting the View layer up and running quickly in Spring, using Tiles with wildcards, fallbacks, and definition includes, applying the Composite pattern and Convention over Configuration to keep ongoing xml changes to a minimum.
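As a taste of the composite approach, a Tiles definition composes a page out of named attributes, and with wildcards one definition can cover many pages. The names and paths below are illustrative, not our actual configuration:

```xml
<tiles-definitions>
  <!-- one wildcard definition covers every page matching the pattern;
       {1} is replaced with whatever matched the * -->
  <definition name="*" template="/WEB-INF/layouts/main.jsp">
    <put-attribute name="header" value="/WEB-INF/tiles/header.jsp"/>
    <put-attribute name="body"   value="/WEB-INF/tiles/{1}.jsp"/>
    <put-attribute name="footer" value="/WEB-INF/tiles/footer.jsp"/>
  </definition>
</tiles-definitions>
```

This is the Convention over Configuration point: a new page needs only a new body template, not a new xml entry.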


From the architect’s perspective, Apache Tiles is rare exotic Italian marble laid out exquisitely, while SiteMesh is the steel wiring stuck inside the concrete slab. The beauty of the tiles is always admired, a key component in creating an eye-catching surrounding.


At FINN we were redesigning our control and view layers. We, being the architectural team of six, had already decided on Spring-Web as the framework for the control layer due to its flexibility and a design that gave us a better, simpler migration path. For the front end we were a little unclear. In a department of ~60 developers we knew that the popular vote would lead us towards SiteMesh. And we knew why: for practical purposes SiteMesh gives the front end developer more flexibility and less xml editing. But we knew SiteMesh has some serious shortcomings…

SiteMesh shortcomings:

  • from a design perspective, the Decorator pattern doesn’t combine with MVC as elegantly as the Composite pattern does
  • it requires holding all possible html for a request in a buffer, demanding large amounts of memory
  • it is unable to flush the response before the response is complete
  • it requires more overall processing, due to processing all the potentially included fragments
  • it does not guarantee thread safety
  • it does not provide any structure or organisation amongst jsps, making refactorings and other tricks awkward

One of the alternatives we looked at was Apache Tiles. It follows the Composite pattern, but within that allows one to take advantage of the Decorator pattern using a ViewPreparer. This meant it provided by default what we considered a superior design, yet could, if necessary, also do what SiteMesh was good at. It already had integration with Spring, and the way it worked meant that once the Spring-Web controller code was executed, Spring’s view resolver would pass the ball on to Tiles, letting it do the rest. This gave us a clear MVC separation and an encapsulation ensuring thread safety within the view domain.

Yet the most valuable benefit Tiles was going to offer wasn’t realised until we started experimenting a little more…
