FINN tech blog


Archive: ‘Systems development’

Log4j2 in production – making it fly



Now that log4j2 is the predominant logging framework in use at FINN.no, why not share the good news with the world and provide a summary of how this exciting new technology was introduced into our platform.

let’s use just one logging framework

In early September 2013 it became the responsibility of one of our engineering teams to introduce a “best practice for logging” for all of FINN.

The proposal put forward was that we would standardise on one backend logging framework, while there would be no need to standardise on the abstraction layer used directly in our code.

The rationale for not standardising the logging abstractions was…
  • nearly every codebase, through a tree of dependencies, already includes all the different logging abstraction libraries, so the hard work of configuring them against the chosen logging framework is still required,
  • the APIs of the different logging abstractions are not difficult for programmers to switch between when moving from one project to another.

While the rationale for standardising the logging framework was…
  • makes life easier for programmers with one documented “best practice” for all,
  • makes it possible through an in-house library to configure all abstraction layers, creating less configuration for programmers,
  • makes life easier for operations knowing all jvms log the same way,
  • makes life easier for operations knowing all log files follow the same format.

log4j2 wins hands down

Log4j2 was chosen as the logging framework given…
  • it provided all the features that logback was becoming popular for,
  • compared with old log4j and logback, it was the only framework written in java with modern concurrency (i.e. no hard synchronised methods/blocks),
  • it provided a significant performance improvement (1000-10000 times faster)
  • it had a more active community (logback had been announced as the replacement for the old log4j, but log4j2 brought new momentum to its apache community).

This proposal was put to a vote among FINN programmers
        – 73% agreed, 27% were unsure, no-one disagreed.

when nightly compression jams

Earlier on in this process we hit a bug in the old log4j where nightly compression of already rotated logfiles was locking up all requests in any (or most) jvms for up to ten seconds. The fault traced back to poor java concurrency code in the original log4j (which logback cloned). It was exacerbated by us running scores of jvms for all our different microservices on the same machines, so that when nightly compression kicked in it did so all in parallel. Possible fixes here were to
  a) stop compression of log files,
  b) make loggers async, or
  c) migrate over quickly to log4j2.

After some investigation, (c) was ruled out because no logstash plugin for log4j2 was ready, and moving forward without the json logfiles and the logstash & kibana integration was not an option. (a) was chosen as a temporary solution.

ready, steady, go…

Later on, when we started the work of upgrading all our services from thrift-0.6.1 to thrift-0.9.1, we took the opportunity to kill two birds with one stone. Log4j2 was out of beta, and we had ironed out the issues around the logstash plugin.

We’d be lying if we told you it was all completely pain free;
 introducing log4j2 came with some concerns and hurdles.

    • Using a release candidate of log4j2 in production led to some concerns. So far the only consequence has been slow startup times (e.g. even small services paused for ~8 seconds during startup). This was due to log4j2 having to scan all classes for possible log4j plugins, and was fixed in 2.0-rc2. On the bright side – our use of the release candidate meant we spotted the issue early and provided a patch so that the upcoming initial release of log4j2 would support shaded jarfiles, which we depend on heavily.
    • Operations had expressed concerns, raised from the earlier problem around nightly compression, that even if code no longer blocked while compression happened in a background thread, the number of parallel compressions spawned would lead to IO contention, which in turn leads to CPU contention. Because of this very real concern extensive tests have been executed; so far they’ve shown no measurable impact (under 1ms) upon services within the FINN platform. Furthermore this problem can easily be circumvented by adding the SizeBasedTriggeringPolicy to your appender, thereby enforcing a limit on how much parallel compression can happen at midnight.
    • The new logstash plugin (which FINN has actively contributed to on github) caused a few breakages to the format expected by our custom logstash parsers written by operations. Unfortunately this parser is based on the old log4j format, which we are trying to escape. Breakages here were: log events on separate lines, avoiding commas at the end of lines between log events, thread context in the wrong format, etc. These were tackled with pull requests on github and patch versions of our commons-service (the library used to pre-configure the correct dependency tree for log4j2 artifacts and to properly plug in all the different logging abstraction libraries).
    • Increased memory usage from switching from sync loggers to async loggers impacted services with very small heaps. The async logger used is based on the lmax-disruptor, which pre-allocates its ringBuffer at maximum capacity. By default this ringBuffer is configured to queue at most 256k log events. This can be adjusted with the “AsyncLoggerConfig.RingBufferSize” system property (a minimal sketch follows this list).
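
As a minimal, purely illustrative sketch (not FINN’s actual setup): the ring buffer property mentioned above, together with log4j2’s standard Log4jContextSelector property for enabling async loggers, can be set before log4j2 initialises – either as -D flags on the JVM or very early in main():

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public final class LoggingBootstrap {

    public static void main(String[] args) {
        // Route all loggers through the lmax-disruptor backed async implementation.
        System.setProperty("Log4jContextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

        // Shrink the pre-allocated ring buffer from its 256k-event default so that
        // services with very small heaps are not penalised.
        System.setProperty("AsyncLoggerConfig.RingBufferSize", "65536");

        Logger log = LogManager.getLogger(LoggingBootstrap.class);
        log.info("async logging configured");
    }
}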

simply beautiful

To wrap it up: the hurdles have been there, but they were trivial and easy to deal with, while the benefits of introducing log4j2, and moving to async loggers, make it well worth it…
    • The “best practice” for log4j2 included changing all loggers to be async, and this means that the performance of the FINN platform (which consists primarily of in-memory services) is no longer tied to and affected by how the disks are performing (it was crazy that it was before).
    • More and more applications are generating consistent logfiles according to our best practices.
    • More and more applications are actually plugging in the various different logging abstractions used by all their various third-party dependencies.
    • All the advantages people liked about logback.
    • An easier approach to changing loglevels at runtime through jmx.
    • Profiling applications in crisis is easier for outsiders (one less low-level behavioural difference).
    • Loggers are no longer a visible bottleneck in jvms under duress.
    • And naturally the performance increase.

Because of the significant performance provided by the lmax-disruptor we also use the open-sourced statsd client that takes advantage of it.

We love NPM!


What? Why is FINN.no donating to scale NPM? I thought you guys were a pure Java shop? It is true, we used to be a pure Java shop. However, over the past three years we have adopted new technologies to solve specific problems. We have used Ruby and Cucumber for some time to build a platform for continuous delivery, and it has worked out beautifully! Our front-end developers, on the other hand, have been forced to deal with outdated and unsuitable tools for doing their job. This is largely because innovation in front-end development does not happen in the Java community. Most of the exciting tools are written in Node, and this has become a frustration and a challenge for us.

In the past year FINN has been gradually making a transition away from using only Java-based tools for front-end development and towards a NodeJS-powered tool set. We are now at a point where we are on the brink of rolling this out for our projects. Having worked with Node for a while we have learned to appreciate the Node ecosystem that is NPM. Being part of such a vibrant ecosystem of modules makes the transition easier, and it also inspires us to become better at giving back. Therefore we try to give back to NPM when we can.

When the scale NPM campaign was launched it was obvious that this was something we wanted to be a part of. It is an investment in our own happiness in a sense, as NPM is becoming a very important part of our technology portfolio.

Nodeify all the things

So where is it that we use Node in our technology stack today? Earlier this year we moved away from JsTestDriver in favour of Karma-runner. This meant that we needed to create a trojan horse to carry the goodness of Node/NPM into existing Java projects without causing too many problems for developers with no knowledge of Node. A part of this scheme was the frontend-maven-plugin, which gives us control over which Node version projects use and allows developers without Node installed to build projects and run tests without having to learn anything about Node.

Currently we are working towards removing the need for Maven when building and deploying pure JavaScript projects, using Node instead. The end result is of course to have more web applications built using Node. Today we have just two running in a production environment.

Package Management conflicts with Continuous Delivery


The idea of package management is to correctly operate and bundle together the various components in any system. The practice of package management is a consequence of the design and evolution of each component’s API.

Package management is tedious

   but necessary. It can also help to address the ‘fear of change’.

We can minimise package management by minimising API. But we can’t minimise API if we don’t have experience with where it comes from. You can’t define for yourself what the API of your code is. It goes well beyond your public method signatures. Anything that, when changed, can break a consumer is API.

Continuous Delivery isn’t void of API

   despite fixed and minimised interfaces between runtime services, each runtime service also contains an API in how it behaves. The big difference though is that you own the release change, a la the deployment event, and if things don’t go well you can roll back. Releasing artifacts in the context of package management cannot be undone. Once you have released the artifact you must presume someone has already downloaded it and you can’t get it back. The best you can do is release a new version and hope everyone upgrades to it quickly.

Push code out from behind the shackles of package management

   take advantage of continuous delivery! Bear in mind that a healthy modular systems design comes from making sure you get the api design right – so the amount one can utilise CD is ultimately limited, unless you want to throw out modularity. In general we let components low in the stack “be safe” by focusing on api design over delivery time, and the opposite for components high in the stack.

High in the stack doesn’t refer to front-end code

   Code at the top of the stack is that which is free of package management and completely free for continuous deployment. Components with direct consumers no longer sit at the top of the stack. As a component’s consumers multiply, and it becomes a transitive dependency, it moves further down the stack. Typically the entropy of a component corresponds to its position in the stack. Other components forced into package management can be those where parallel versions need to be deployed.

Some simple rules to abide by…

  • don’t put configuration into libraries.
    because this creates version-churn and leads to more package management

  • don’t put services into libraries.
    same reason as above.

  • don’t confuse deploying with version releases.
    don’t release every artifact as part of a deployment pipeline.
    separate concerns of continuous delivery and package management.


  • try to use a runtime service instead of a compile-time library.
    this minimises API, in turn minimising package management,

  • try to re-use standard APIs (REST, message-buses, etc).
    the less API you own the less package management.
    but don’t cheat! data formats are APIs, and anything exposed that breaks stuff when changed is API.

Dark Launching and Feature Toggles


Make sure to distinguish between these two.

They are not the same thing,
  and it’s a lot quicker to just Dark Launch.

In addition, Dark Launching promotes incremental development, continuous delivery, and modular design. Feature Toggles need not do so, and can even be counter-productive.

Dark Launching is an operation to silently and freely deploy something, giving you time to ensure everything operates as expected in production and the freedom to switch back and forth while bugfixing. A Dark Launch’s goal is often about being completely invisible to the end-user. It also isolates the context of the deployment to the component itself.

Feature Toggling, in contrast, is often the ability to A/B test new products in production. Feature Toggling is typically accompanied with measurements and analytics, eg NetInsight/Google-Analytics. Feature Toggles may also extend to situations when the activation switch of a dark launch can only happen in a consumer codebase, or when only some percentage of executions will use the dark launched code.

Given that one constraint of any decent enterprise platform is that all components must fail gracefully, Dark Launching is the easiest solution, and a golden opportunity to ensure your new code fails gracefully. Turn the new module on and it’s dark launched and in use; any problems, turn it off again. You also shouldn’t have to worry about only running some percentage of executions against the new code: let it all go to the new component, and if the load is too much the excess should also fail gracefully and fall back to the old system.

Dark Launching is the simple approach as it requires no feature toggling framework, or custom key-value store of options. It is a DevOps goal that remains best isolated to the context of DevOps – in a sense the ‘toggling’ happens through operations and not through code. When everything is finished it is also the easier approach to clean up. Dealing with and cleaning up old code takes up a lot of our time and is a significant hindrance to our ability to continually innovate. In contrast any feature toggling framework can risk encouraging a mess of outdated, arcane, and unused properties and code paths. KISS: always try and bypass a feature toggle framework by owning the ‘toggle’ in your own component, rather than forcing it out into consumer code.
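
As a purely hypothetical sketch of owning the ‘toggle’ inside your own component (illustrative names, not FINN code): consumers keep calling the same interface, the new implementation is tried when the dark launch is switched on, and any failure falls back gracefully to the old system.

interface AdCounter {
    long countViews(long adId);
}

public class DarkLaunchedAdCounter implements AdCounter {

    private final AdCounter newImplementation;   // the dark launched component
    private final AdCounter oldImplementation;   // the existing, proven system
    private volatile boolean darkLaunchEnabled;  // flipped by operations, not by consumer code

    public DarkLaunchedAdCounter(AdCounter newImplementation, AdCounter oldImplementation) {
        this.newImplementation = newImplementation;
        this.oldImplementation = oldImplementation;
    }

    public void setDarkLaunchEnabled(boolean enabled) {
        this.darkLaunchEnabled = enabled;
    }

    @Override
    public long countViews(long adId) {
        if (darkLaunchEnabled) {
            try {
                return newImplementation.countViews(adId);
            } catch (RuntimeException e) {
                // fail gracefully: any problem in the new component falls back to the old system
            }
        }
        return oldImplementation.countViews(adId);
    }
}

When the dark launch is judged a success, the wrapper and the old implementation are simply deleted – there is no toggle framework or property store left behind to clean up.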

Where components have CQS it gets even better. First the command component is dark launched, whereby it runs in parallel and data can be compared between the old and new systems (blue-green deployments). Later on the query component is dark launched. While the command components can run and be used in parallel for each and every request, the query components cannot. When the dark launch of the query component is final, the old system is completely turned off.

Now the intention of this post isn’t to say we don’t need feature toggling, but to give terminology to, and distinguish between, two different practices instead of lumping everything under the term “feature toggling”. And to discourage using a feature toggling framework for everything just because we fail to understand the simpler alternative.

In the context of front-end changes, it’s typical that for a new front-end feature to come into play some new backend components have been required. These will typically have been dark launched. Once that is done, the front-end will introduce a feature toggle rather than a dark launch, because it’s either introducing something new to the user or wanting to introduce something new to a limited set of users. So even here dark launching can be seen not as a “cool” alternative, but as the prerequisite practice.

Reference: “DevOps for Developers” By Michael Hüttermann

I wish I knew my consumers – Maven Reverse Dependency

At FINN.no, being a developer fixing bugs in a library is a breeze. Getting every user of your library to use the fix, however, is a different story. How do you know who to notify? I mean, I know my library’s dependencies, but who “out there” has a dependency on the component where I just fixed a bug? I wish I knew. Enter maven-dependency-graph.

The idea was born on the plane back home from a Copenhagen hosted conference. Graph database. Download neo4j and start dabbling at a maven plugin. Flying time Copenhagen – Oslo was too short, all of a sudden.

From there, the idea slept for a couple of years. Until the need arose somewhere among the developers. With 100+ different applications running with common core services and libraries, everybody suddenly needed to know who depended on their code which had recently been bugfixed. So the old idea was dusted off and once more saw the light of day. We needed to upgrade the server installation and the API to neo4j – which took some time to grasp; but after some playing around, it became obvious and easy.

The idea was to have every project report its dependencies to a graph database, building the tree of dependencies on each commit. This constitutes one half of the plugin. Over time, all projects will have reported their dependencies, and from there on part two of the plugin comes into use. It will examine the reverse dependencies to the *current* maven project, and report all incoming dependencies to it in the maven log. Hey, presto! We now know who out there uses us! And even which version they are using, thanks to two different keys into the built-in lucene index engine.

The plugin is published on github @ Finn Technology’s account. Feel free!
@gardleopard and @roarjoh

Usage examples

Dependencies to current maven project:

mvn no.finntech:dependency-mapper-maven-plugin:read
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building greenpages thrift-client 3.4.5-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- dependency-mapper-maven-plugin:1.0-SNAPSHOT:read (default-cli) @ commons-thrift-client ---
[INFO] Resolving reverse dependencies
[INFO] no.finntech.travel.supplier:supplier-client:1.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.1.1
[INFO] no.finntech.cop:client:1.1-SNAPSHOT -> no.finntech:commons-thrift-client:3.1.1
[INFO] no.finntech.oppdrag-services:iad-model:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3
[INFO] no.finntech:minfinn:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3
[INFO] no.finntech:service-user:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3
[INFO] no.finntech:service-oppdrag:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3
[INFO] no.finntech:kernel:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3

(umpteen lines skipped…)

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.957s
[INFO] Finished at: Thu Jan 31 09:50:19 CET 2013
[INFO] Final Memory: 9M/211M
[INFO] ------------------------------------------------------------------------

Usage of third party framework (using neo4j’s included admin interface):

Leaving the Tower of Babel

Here at FINN we use mostly Java for our day-to-day work, but we do have some applications and modules written in other languages. We currently have code in Java, Ruby, JavaScript, Objective-C and Scala, supporting things as diverse as testing frameworks and iOS-apps.

Recently there has been some discussion about what we should be using for new projects, as teams of developers start suggesting they should use something they think would be easier and more effective. The arguments against are based on how easy it would be to recruit new developers who know these new technologies, how well they would perform under our kinds of loads, and whether the assumption that we can be more effective is actually true.

Our strategy is firm on this point: experiments can be done, but for major applications serving the public we should be using Java for the time being. This strategy is founded on a wish to reduce the size of our technology portfolio, so that we don’t have to spend time and effort bringing developers up to speed when they switch teams. There’s also a higher chance that new hires will be familiar with Java, while other technologies would often require a period of training when people first start. We are, however, considering whether it’s time to take a second look at our chosen languages and see if we should expand our portfolio.

During these discussions, we created a quick poll and sent it out on our internal discussion board. The poll asked questions like “How long have you worked as a developer?”, “Which language do you use for your day-to-day work?”, “If you were free to choose, which language would you use for your next project?” and “Which programming languages do you know?”. Our definition of “know” was very open in this poll, lowering the bar so that people who had maybe written a single example program and understood a bit of code could check the box.

Out of the nearly 100 people who work with development, 56 answered. We can assume that the people who did answer were the people who were interested in the topic, and so might not be a representative selection, but the results are interesting nonetheless.

So.. what did we learn?

Experience

Being a poll made in the midst of a discussion, mostly for fun, this part wasn’t framed in a way that gives us much useful information. We can say that the average developer who answered has worked for around seven or eight years, but with what looks like a reasonably fair spread across all groups. There are almost as many with a couple of years’ experience as there are people with 10 to 15 years.

Primary language in day-to-day work

Being a predominantly Java shop, most of the respondents were using Java for their daily work. 35 out of 56 were Java developers. 11 respondents worked with JavaScript daily, while four worked with Scala, three with Ruby, two with Objective-C, and finally one database developer worked mostly in T-SQL (the SQL variant used in Sybase).

Most popular choice if allowed to chose freely

This was the most interesting part of the poll, with many insights. It is also the most loaded part, as the results could easily short-circuit any discussion about a future choice of language.

The bad news (or good, depending on your point of view) is that there was no clear answer from this section. Keep in mind that around 40 people didn’t answer the poll, and could be assumed to be content with the current status quo.

The biggest group was Java, not surprisingly. The surprising part is that only 16 people would choose Java if they could choose freely. This is much lower than expected, but still makes up the largest group. Of these, 13 people are already using Java today. 20 Java developers would like to use something else.

So, if Java was the largest group, at 16 people, what does that mean for the remainder of the results? Well, 10 people would have chosen Scala, while JavaScript, Ruby, Clojure and Groovy clock in at about four or five respondents each. At the bottom of the pack we have Python and Objective-C, with three and two respondents choosing them.

Six people were interested enough to answer the poll, but chose the non-committal “I don’t care, as long as I’m making good stuff for our users” option.

The number of possible answers here was a rather large selection of popular and not-so-popular-but-quite-well-known languages, so the fact that the list only includes eight languages is a sign that it’s not a completely random selection. Still, quite a lot of discussion is needed before any one of those languages gets center stage.

A couple of curious details found in this part of the poll include the fact that the only people who would choose JavaScript are people who already work in JavaScript every day. Groovy, Clojure, Objective-C and Scala have all managed to be chosen by a person who doesn’t actually know the language. There are only three people who would switch *to* Java, if allowed to choose.

Which languages do we “know”?

As mentioned earlier, the bar for “knowing” a language was set quite low in this poll, to get more diversity in answers. This resulted in some interesting numbers.

Being primarily a Java shop, it might not surprise anyone that a full 100% know Java. A little over three fourths know JavaScript, while Ruby and T-SQL were known to about half. These are the main languages most of us have some sort of dealings with in our daily work, so that they score high is not surprising.

Next on the list are PHP and Python, with around 40% knowing them.

We run primarily Sybase for our databases, with a small number of MySQL and PostgreSQL servers for smaller applications and modules. We are trying to standardise on PostgreSQL, but this isn’t reflected in knowledge, with 37.5% knowing how to code MySQL-procedures, 35.7% knowing how to code for Oracle, and only 25% knowing their way around a PostgreSQL codebase.

Our operations department has had a “vendetta” against Perl for several years, but there are still more people who know Perl than there are people who know Scala, with the score 18 to 15. Groovy has quite a following too, with 13 respondents knowing Groovy.

We have a small Apps team working with iOS-development, but Objective-C has a reach far outside that team, with 10 respondents knowing Objective-C.

Common Lisp is known to five respondents, while Clojure surprisingly was only known to seven. As you can read elsewhere in this blog, we had a Clojure workshop at our Technology Day earlier this summer, and these results might be telling us something about the language, or about our teachers, when it didn’t manage to “stick” better than this.

We also have people who know some Erlang, Smalltalk, Lua, Haskell, Scheme, Eiffel and ML/SML.

How many languages do we “know”?

On average, we know 6.8 languages each. The most knowledgeable person knows 16 languages, while the least knowledgeable knows only one. The high experience respondents, 16 years or more, have a higher average than the rest of us, with 9.4, while the rest of the “experience brackets” know somewhere between six and seven languages.

Several programmer gurus seem to think you should strive to teach yourself one new language every year, and it would seem at least one of our respondents has been able to follow this advice. For those of us with less spare time, a less ambitious strategy might be more compatible, but that we should strive to learn something new every once in a while seems to be good advice.

Now what?

As mentioned previously, this poll was not a serious attempt at gaining actionable insights. The results should be taken with a large helping of salt, and at most used as a basis for discussions. On the other hand, I know we will be discussing this topic going forward, because the fact that only 16 of 56 wanted to use Java tells us it’s time to start the discussion. There are possibly 40 developers who would prefer Java who didn’t respond, so it’s too early to draw conclusions, but we have a place to start.

There are other questions falling out of this too:

  • There are two developers who want to work with Objective-C, and eight Java developers who want to work with Scala, so why do we have so few internal applicants when we have openings on the teams working with these languages?
  • What is it that makes developers want to work with languages they don’t even know?
  • Of the respondents, more than half the Java developers want to use something else. Why is that?

 

Foraging in the landscape of Big Data

 
This is the first article describing FINN.no’s coming of age with Big Data. It’ll cover the challenges that gave rise to the need for it, the challenges in accomplishing new goals with new tools, and the challenges that remain ahead.

Big Data is for many just another vague and hyped-up trend getting more than its fair share of attention. The general definition, from Wikipedia, is that big data covers the scenario where existing tools fail to process the increasing amount or dimensions of data. This can mean anything from:

      α – the existing tools being poor (while large companies pour $$$ into scaling existing solutions up) or
      β – the status quo requiring more data to be used, to
      γ – a requirement for faster and more connected processing and analysis on existing data.


The latter two are also described as big data’s three V‘s: Volume, Variety, Velocity. If the theoretical definition isn’t convincing you, put it into context against some of today’s big data crunching use-cases…
    • online advertising combining content analysis with behavioural targeting,
    • biomedical’s DNA sequence tagging,
    • pharmaceutics’s metabolic network modelling,
    • health services detecting disease/virus spread via internet activity & patient records,
    • the financial industry ranging from credit scores at retail level to quant trading,
    • insurance companies crunching actuarial data,
    • US defence programs for offline (ADAMS) and online (CINDER) threat detection,
    • environmental research into climate change and atmospheric modelling, and
    • neuroscience research into mapping the human brain’s neural pathways.


On the other hand big data is definitely no silver bullet. It cannot give you answers to questions you haven’t yet formulated (pattern recognition aside), and so it doesn’t give one excuses to store overwhelming amounts of data where the potential value in it is still undefined. And it certainly won’t make analysis of existing data sets initially any easier. In this regard it’s less to do with the difficulty of achieving such tasks and more to do with the potential to solve what was previously impossible.

Often companies can choose their direction and the services they will provide, but in any competitive market, failing to match a competitor’s offerings can result in the fall of the company. Here Big Data earns its hype, with many a CEO concerned enough to pay attention. And it probably gives many CEOs a headache, as the possibilities it opens up, albeit tempting or necessary, create significant challenges in and of themselves. The multiple vague dimensions of big data also allow the critics plenty of room to manoeuvre.

One can argue that scaling up – buying more powerful machines or more expensive software – solves the problem (α). If all you have is this problem then sure, it’s a satisfactory solution. But ask yourself: are you successfully solving today’s problem while forgetting your future?

“If we look at the path, we do not see the sky.” – Native American saying

One can also argue away the need for such vast amounts of data (β). Through various strategies – more aggressive normalisation of the data, storing data for shorter periods, or persisting data in more ways in different places – the size of each individual data set can be significantly reduced. Excessively normalising data has its benefits and is what one may do in the Lean development approach for any new feature or product. Indeed a simpler datamodel trickles through into a simpler application design, in turn leaving more content, more productive, pragmatic developers. Nothing to be scoffed at in the slightest. But in this context the Lean methodology isn’t about any one state or snapshot in time illustrating the simple and the minimalistic; rather it’s about evolution, the processes involved, and their direction. Much like the KISS saying: it’s not about doing it simple stupid but about *keeping* it simple stupid. The question here is how overly normalised data evolves as its product becomes successful and more complicated over time. Anyone who has had to deal awkwardly with numerous and superfluous tables, joins, and indexes in a legacy application, because it failed over time to continually improve its datamodel due to compatibility needs, knows what we’re talking about. There is another problem with such legacy applications that follow a datamodel-centric design: the datamodel itself becomes a public API, and its many consumers create this need for compatibility and the resulting inflexible datamodel. But this is not so much an underlying problem as an overlapping one, as one loses oversight of the datamodel in its completeness and of whether it is represented optimally.




It is also difficult to deny the amount of data we’re drowning in today.
“90% of the data that exists today was created in the last two years.. the sheer volume of social media and mobile data streaming in daily, businesses expect to use big data to aggregate all of this data, extract information from it, and identify value to clients and consumers.. Data is no longer from our normalised datasets sitting in our traditional databases. We’re accessing broader, possibly external, sources of data like social media or learning to analyse new types of data like image, video, and audio..” – greenbookblog.org.

And one may also argue that existing business intelligence solutions (can) provide the analysis required from all already existing datasets (γ). This ignores a future of possibilities: take for example the research going into behavioural targeting, giving glimpses into the challenges of modern marketing as events and trends spark, shift, and evolve through online social media with ever faster frequencies – just the tip of the iceberg when one thinks forward to being able to connect face and voice recognition to emotional pattern matching analysis. But it also defaults to the conservative opinion that business intelligence need be but a post-mortem analysis of events and trends, providing insights intended and required only for internal review. The notion that such large scale analysis is of sole benefit to company strategy must become the telltale sign of companies failing to see how users and the likes of social media dramatically change what is possible in product development today.

The methodologies of innovation therefore change. Real-time analysis of user behaviour plays a forefront role in decisive actions on which product features and interfaces will become successful. This has the potential to cut down the risk of product development’s internal guesswork about a product’s popularity at any given point in time. In turn this cuts down time-to-market, bringing the product’s release date closer to its moment of maximum popularity potential. Startup companies know that success doesn’t come only from a cleverly designed product; there is a significant factor of luck involved in releasing the right product at the right time. Large, and very costly, marketing campaigns can only extend, or synthetically create, this potential moment by so much.

This latter point, around the extents and performance of big data analysis and the differentiation it creates between business analysis and richer, more spontaneous product development and innovation, is for FINN an important and consequential factor in our forage into big data. Here at FINN a product owner recently said: “FINN is already developing its product largely built around the customer feedback and therefore achieving continuous innovation?”. Of course what he meant to say was “from numbers we choose to collect, we generate the statistics and reports we think are interesting, and from these statistics we freely interpret them to support the hypothesis we need for further product development…” We couldn’t be further from continuous innovation if it hitched a lift to the other side of Distortionville¹.

“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts” – Arthur Conan Doyle

FINN isn’t alone here; I’d say most established companies, while trying to brand themselves as innovative, are still struggling to break out of their existing models of product development based upon internal business analysis. No one said continuous innovation was easy, and there are a lot of opinions out there on this, but along with shorter deployment cycles I reckon there are two keywords: truth and transparency. Tell your users the truth and watch how they respond. For example, give them the statistics that show them their ads stop getting visits after a week, and then observe how they behave – to which solution do they flock to regain traffic to their ads. Don’t try to solve all their problems for them; rather try to enable them. You’ll probably make more money out of them, and by telling them the truth you’ve removed a vulnerability, or what some fancifully refer to as “a business secret”.

There’s also a potential problem with organisational silos. Large companies that have invested properly in business intelligence and data warehousing will have assigned these roles of data collection, aggregation, and analysis to a separate team or group of experts: typically trained database administrators, statisticians, and traffic analysts. They are rarely the programmers; the programmers are on the front lines building the product. Such a split can parallel the sql vs nosql camps. This split from the programmers, whom you rely on to make continuous innovation a reality, can run the risk of stifling any adoption of big data. With the tools enabling big data, the programmers can generate reports and numbers previously only obtainable from the business intelligence and data warehousing departments, and can serve them to your users at web response times – integrating such insights and intelligence into your product. Such new capabilities don’t supersede these traditional departments; rather everyone needs to accept the new: working together to face new challenges with old wisdom. The programmers working on big data, even if tools and data become shared between these two organisational silos, cannot replace the needs of business intelligence any more than business intelligence can undertake big data’s potential. As data and data sources continue to increase year after year, the job of asking the right questions, even knowing how to formulate the questions correctly, needs all hands on deck. Expecting your programmers to do it all might well swamp them into oblivion, but it isn’t just the enormity of the new challenges involved; it’s that these challenges have an integral nature to them that programmers aren’t typically trained to tackle. Big Data can be used as an opportunity not only to introduce exciting new tools, paradigms, and potential into the company but also as a way to help remove existing organisational silos.

The need for big data at FINN came from a combination of (α) and (γ). The statistics we show users for their ads had traditionally been accumulated and stored in a sybase table. These statistics included everything from page views, “tip a friend” emails sent, clicks on promoted advertisement placements, ads marked as favourites, and whatever else you can imagine.

(α) FINN is a busy site, the busiest in Norway, and we display ~50 million ad pages each day. Like a lot of web applications we had a modern scalable presentation and logic tier based upon ten tomcat servers but just one not-so-scalable monster database sitting in the data tier. The sybase procedure responsible for writing to the statistics table ended up being our biggest thorn. It used 50% of the database’s write execution time, 20% of total procedure execution time, and the overall load it created on the database accounted for 30% of ad page performance. It was a problem we had lived with long enough that Operations knew quickly to turn off the procedure if the database showed any signs of trouble. Over one period of troublesome months Operations wrote a cronjob to turn off the procedure automatically during peak traffic hours – when ads were receiving the most traffic we had to stop counting altogether, embarrassing to say the least!

(γ) On top of this, product owners in FINN had for years been requesting that we provide statistics on a per-day basis. The existing table had tinkered with this idea for some of the numbers that didn’t accumulate so high, e.g. “tip a friend” emails, but for page views this was completely out of the question – not even the accumulated totals were working properly.

At the time we were in the process of modularising the FINN web application. The time was right to turn statistics into something modern and modular. We wanted an asynchronous, fault-tolerant, linearly scaling, and durable solution. The new design uses the Command-Query Separation pattern, with two separate modules: one for counting and one for displaying statistics. The counting achieves asynchronicity, scalability, and durability by using Scribe. The backend persistence and statistics module achieves all goals by using Cassandra and Thrift. As an extension of the push-on-change model, the counting stores denormalised data which is later normalised into the views the statistics module requires, using MapReduce jobs within a Hadoop cluster.
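
A purely illustrative sketch of that command/query split (these interfaces are hypothetical, not FINN’s actual modules): the command side is a fire-and-forget counter sitting behind the ad page, while the query side serves the normalised, per-day views.

import java.time.LocalDate;

public final class AdStatisticsContract {

    /** Command module: records events asynchronously, never blocking the ad page render. */
    public interface AdEventCounter {
        void countPageView(long adId);
        void countTipAFriend(long adId);
    }

    /** Query module: reads the views produced by the MapReduce normalisation jobs. */
    public interface AdStatisticsQuery {
        long pageViews(long adId, LocalDate day);
        long accumulatedPageViews(long adId);
    }

    private AdStatisticsContract() { }
}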


The resulting project is kick-ass, let me say. Especially Cassandra – it is a truly amazing modern database: linearly scalable, decentralised, elastic, fault-tolerant, durable; with a rich datamodel that often provides superior approaches to joins, grouping, and ordering compared with traditional sql. But we’ll spend more time describing the project in technical detail in a later article.

A challenge we face now is broader adoption of the project and the technologies involved. Various departments, from developers to tech support, want to read the data, regardless of whether it is traditional or ‘big data’, and the habit was always to read it directly from production’s Sybase. And it’s a habit that’s important in fostering a data-driven culture within the company, without having to encourage datamodel-centric designs. With our Big Data solution this hasn’t been so easy. Without this transparency into the data, developers, tech support, and product owners alike seem to be failing to initiate further involvement. To solve this, since our big data is stored in Cassandra, we’d love to see a read-only web-based gui interface based on caqel.

…to be continued…




—-




the ultimate view — Tiles-3

A story of getting the View layer up and running quickly in Spring…

Since the original article, parts of the code have been accepted upstream and are now available as part of the Tiles-3 release, so the article has been updated – it’s all even simpler!


Based upon the Composite pattern and Convention over Configuration we’ll pump steroids into
   a web application’s view layer
      with four simple steps using Spring and Tiles-3
         to make organising large complex websites elegant with a minimum of xml editing.



 




Background

At FINN.no we were redesigning our control and view layers. The architectural team had decided on Spring-Web as a framework for the control layer due to its flexibility and for providing us a simpler migration path. For the front end we were a little unclear. In a department of ~60 developers we knew that the popular vote would lead us towards SiteMesh. And we knew why – for practical purposes sitemesh gives the front end developer more flexibility and definitely less xml editing.
But sitemesh has some serious shortcomings…

SiteMesh shortcomings:
  • from a design perspective the Decorator pattern can undermine the separation MVC intends,
  • requires all possible html for a request to be buffered, requiring large amounts of memory,
  • unable to flush the response before the response is complete,
  • requires more overall processing due to all the potentially included fragments,
  • does not guarantee thread safety, and
  • does not provide any structure or organisation amongst jsps, making refactorings and other tricks awkward.

One of the alternatives we looked at was Apache Tiles. It follows the Composite pattern, but within that allows one to take advantage of the Decorator pattern using a ViewPreparer. This meant it provided by default what we considered a superior design, but if necessary it could also do what SiteMesh was good at. It already had integration with Spring, and the way it worked meant that once the Spring-Web controller code had executed, Spring’s view resolver would pass the model on to Tiles, letting it do the rest. This gave us a clear MVC separation and an encapsulation ensuring single thread safety within the view domain.
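
To make that wiring concrete, here is a minimal sketch using the standard spring-webmvc Tiles-3 integration classes (the definitions path is illustrative, and this shows only the basic setup rather than our full convention-over-configuration arrangement):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.ViewResolver;
import org.springframework.web.servlet.view.tiles3.TilesConfigurer;
import org.springframework.web.servlet.view.tiles3.TilesViewResolver;

@Configuration
public class TilesViewConfig {

    @Bean
    public TilesConfigurer tilesConfigurer() {
        TilesConfigurer configurer = new TilesConfigurer();
        // where the Tiles definition files live; wildcards keep xml editing to a minimum
        configurer.setDefinitions("/WEB-INF/tiles/**/tiles.xml");
        return configurer;
    }

    @Bean
    public ViewResolver tilesViewResolver() {
        // once the controller has run, Spring hands the model over to Tiles via this resolver
        return new TilesViewResolver();
    }
}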

“Tiles has been indeed the most undervalued project in past decade. It was the most useful part of struts, but when the focus shifted away from struts, tiles was forgotten. Since then struts has been outpaced by spring and JSF, however tiles is still the easiest and most elegant way to organize a complex web site, and it works not only with struts, but with every current MVC technology.” – Nicolas Le Bas

Yet the best Tiles was going to give wasn’t realised until we started experimenting a little more…

Dependency Injection with constructors?

Pic of Neo/The Matrix





The debate whether to use
  constructors, setters, fields, or interfaces
    for dependency injection is often heated and opinionated.
Should you have a preference?


The argument for Constructor Injection

We had a consultant working with us reminding us to take a preference towards Constructor injection. Indeed we had a large code base using predominantly setter injection because in the past that is what the Spring community recommended.

The arguments for constructor injection go like this:

  • Dependencies are declared public, providing clarity in the wiring of Dependency Inversion,
  • Safe construction, what must be initialised must be called,
  • Immutability, fields can be declared final, and
  • Clear indication of complexity through numbers of constructor parameters.

And that Setter injection can be used when needed for cyclic dependencies, optional and re-assignable dependencies, to support multiple/complicated variations of construction, or to free up the constructor for polymorphism purposes.
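
A small illustrative example of those points (names are invented for this sketch, not FINN code): the dependencies are declared publicly in the one constructor, the fields can be final, and the number of constructor parameters makes the class’s complexity visible.

import java.time.Clock;
import java.time.Instant;

public class AdService {

    private final AdRepository repository;   // mandatory dependency, immutable once constructed
    private final Clock clock;               // mandatory dependency, immutable once constructed

    public AdService(AdRepository repository, Clock clock) {
        this.repository = repository;        // safe construction: nothing is left half-initialised
        this.clock = clock;
    }

    public boolean isExpired(long adId) {
        return repository.expiryDate(adId).isBefore(clock.instant());
    }
}

interface AdRepository {
    Instant expiryDate(long adId);
}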

Being a big fan of Inversion of Control, but not overly fond of Dependency Injection frameworks, something smelt wrong to me. Yet solely within the debate of constructor versus setter injection I don’t disagree that constructor injection has the advantage. Having been using Spring’s dependency injection through annotations a little recently, and building a favouritism towards field injection, I was happy to get the chance to ponder it over, to learn and to be taught new things. What was it I was missing? Is there a bigger picture?

API vs Implementation

If there is a bigger picture it has to be around the Dependency Inversion argument, since this is known to be potentially complex. The point of using constructor injection here is that 1) through a public declaration and injection of dependencies we build an explicit graph showing the dependency inversion throughout the application, and 2) even if the application is wired magically by a framework, such injection must still be possible in the same way without the framework (e.g. when writing tests). The latter (2) is interesting in that the requirement on “dependency injection” is itself also inverted: the framework providing dependency injection is removed from the architectural design and becomes solely an implementation detail. But it is the graph in (1) that becomes an important facet in the following analysis.

With this dependency graph in mind, what happens when we bring into the picture a desire to distinguish between API and implementation design…

The DI graph that constructor injection is asked to clarify will fall into one of two categories:
   ‘implementation-specific’, an interface defines the public API and the DI is held private by the constructor in the implementation class,
   ‘API-specific’ when the class has no interface. Here everything public is a fully exposed api. There is no implementation-protected visibility here for injectable constructors.

By introducing the constraint of only ever using constructor-based injection, in pursuit of a clarified dependency graph, you remove, or at least hinder, the ability to publicly distinguish between API and implementation design.

This distinction between API and implementation is important in being able to create a simple API. The previous blog post “using the constretto configuration factory” is a coincidental example of this. I think the work in Constretto has an excellent implementation design to it, but this particular issue raised frustrations that the API was not as simple as it could have been. Indeed, to obtain the “simplest api”, Constretto (intentionally or not) promotes the use of Spring’s injection, a loose coupling that can be compared to reflection. It may be that our usage of Constretto’s API, where we wanted to isolate groups of properties, was not what the author originally intended, but this only reinforces the need for designing the simplest possible API.

Therefore it is important to sometimes have all dependency injection completely hidden in the implementation. A clean elegant API must take precedence over a clean elegant implementation. And to achieve this one must first make that distinction between API and Implementation design.

Taking this further we can introduce the distinction between API and SPI. Here a good practice is to stick to using final classes for APIs and interfaces for SPIs. By the same argument as above, SPIs can’t use constructor injection because interfaces don’t have constructors.

Inversion-of-Control vs Dependency-Injection

What about the difference between IoC and DI? They are overlapping concepts, and the subtlety between “the contexts” and “the dependencies” is rarely emphasised enough. (Java EE 6 has tried to address the distinction between contexts and dependencies at the implementation level with the CDI spec.) The difference between the two, nuanced as it may be, can help illustrate that the DI graph in any application deserves attention in multiple dimensions.

   

Drawing an application’s architecture as a graph where the vertical axis represents the request stack (that which is typically categorised into the architectural layers view, control, and model/services) and the horizontal axis represents the breadth of each architectural layer, it can be demonstrated that:
   IoC generally forms the passing and layering of contexts downwards.

   The api-specific DI fulfils the layering of such contexts, and these contexts can be dependencies directly or helper classes holding such dependencies. Such dependencies must therefore be initially defined high up in the stack.

   The DI that is implementation-specific is at most only visible inside each architectural layer and is the DI that is represented horizontally on the graph. Possibly still within the definition of IoC it can also be considered a “wiring of collaborating components”. The need for clarity in the dependency graph isn’t as critical and so applications here often tend towards Service Locators, Factories, and Injectable Singletons. On the other hand many of the existing Service Locator implementations have been poor enough to push people towards (and possibly it was an instigator for the initial implementations of) dependency injection.

   Constructor injection works easily horizontally, especially when instantiation of objects is under one’s control, but has potential hurdles when working vertically down through the graph. Sticking to constructor injection horizontally can also greatly help when the wiring of an application is difficult, by ensuring that dependency injection has succeeded by the time each object is constructed. Missing setter, field, or interface injection and Service Locators may not report an error until actually used at runtime.

A simple illustration of difficulty with vertical constructor injection is looking at these helper contexts and how they may be layering contexts through delegation rather than repetitive instantiation, a pattern more applicable for an application with a deep narrow graph. This exemplifies a pattern that has often relied on proxy classes.

   Another illustration is having to instantiate the initial context at the very top of the request/application stack: it involves instantiating all the implementations of dependencies used in contexts down through the stack. This is when dependency inversion explodes – the case where the IoC becomes up-front and explicit, and the encapsulation of implementation is lost through an unnecessary leak of abstractions. A problem paralleling this is trying to apply checked exceptions up through the request stack: one answer is that we need different checked exceptions per architectural layer (another answer is anchored exceptions). With dependencies we would end up requiring different dependency types per architectural layer, and this could lead to dependency types from inner domains needing to be declared in the outer domains. Here we can instead declare resource loaders in the initial context and then let each architectural layer build its own context from scratch, with dependencies constructed from configuration. But this comes full circle, back to a design similar to a service locator. Something similar has happened with annotations: by bringing Convention over Configuration to DI, what was once loose wiring with xml has become the magic of the convention, and begins too to resemble the service locator or naming lookups.

follow the white rabbit/The Matrix


For a legacy application this likely becomes all too much – the declaring of all dependencies required throughout all these contexts – and so relying on a little loose-coupling magic (be it reflection or spring injection) is our way out. Indeed this seems to be one of the reasons spring dependency injection was introduced into FINN.
And so we’ve become less worried about the type of injection used…

Broad vs Deep Applications

FINN.no is generally a broad application with a shallow contextual stack. Here is the traditional view-control-model design and the services inside the model layer typically interact directly with the data stores and maybe interact with one or two peer services.

Focusing on the interfaces to the services, we see there is a huge amount of public api available to the controller layer and very little in defined contexts, except a few parameters, or maybe the whole parameter map, and the current user object. There is therefore very little inversion of control in our contexts; it is often just parameterisation. (Why we often use interfaces to define service APIs is interesting, since we usually have no intention of client code supplying its own implementations – it is definitely not SPIs that are being published. Such interfaces are used as a poor-man’s simplification of the API declaration of public methods within the final classes. Albeit these interfaces do make it easy to create stubs and mocks for tests.)

In this design the implementation details of service-layer dependencies is rarely passed down through contexts but rather hard baked into the application. And in a product like FINN it probably always will be hard baked in. Hard baked here doesn’t mean it can’t be changed or mocked for testing, but that it is not a dynamic component, it is not contextual, and so does not belong in the architectural design of the application.

In such a broad architectural layer I can see two problems in trying to obtain a perfect DI graph:

   cyclic dependencies: bad but forgiven when existing as peers within a group. In this case constructor injection fails. We can define one as the lesser or auxiliary service and fall-back to the setter/field injection just for it, but if they are real equal peers this could be a bullet-in-the-foot approach and using field injection for both with documentation might be the better approach.

   central dependencies: these are the “core” dependencies used throughout the bulk of the services, the database connection, resource loaders, etc. If we enforce these to be injected via constructors then we in turn are enforcing a global-store of them. Such a global store would typically be implemented as a factory or singleton. Then what is the point of injection? Worse yet is that this could encourage us to start passing the spring application context around through all our services. A service locator may better serve our purpose…

Hopefully by now you've guessed that we really should be more interested in modularisation of the code. Breaking up this very broad services layer into appropriate groups is an easier and more productive first step to take. And during this task we have found that discovering and visualising the DI graph is not the problem. Untangling it is. Constructor injection can be used to prevent these tangles, but so can tools like maven reporting and sonar. This shows that the DI graph is actually more easily visualised through the class's import statements than through constructor parameters.

With modularisation we can minimise contexts, isolate dependency chains, publish contextual inversion of control into APIs, declare interface-injection for SPIs, and move dependency injection into wired constructors.

   

Back to Constructor injection

So it’s true that constructor injection goes beyond just DI in being able to provide some IoC. But it alone can not satisfy Inversion of Control in any application unless you are willing to overlook API and SPI design. DI is not a subset or union of IoC: it has uses horizontally and in loose-coupling configuration; and IoC is not a subset or union of DI: to insinuate such would mean IoC can only be implemented using spring beans leading to an application of only spring beans and singletons. In the latter case IoC will often become forgotten outside the application’s realm of DI.

Constructor injection is especially valid when it's desired for code to be used via both spring injection and manual injection, and it does make test code more natural java code. But imagine manually injecting every spring bean in an oversized legacy broad-stack application using constructor injection without the spring framework: is this really a possibility, let's be serious? What you would likely end up with is one massive factory with initialisation code constructing all the services instead of the spring xml, and lookups against this factory on every request. What's the point here? This isn't where IoC is supposed to take us.
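
To illustrate the point, here is a minimal sketch of what that massive factory tends to look like once the container is gone; every class name here is hypothetical and stands in for scores of real services:

  // hand-wired replacements for what the spring xml used to do
  class UserService {
  }

  class AdService {
      private final UserService userService;

      AdService(UserService userService) {
          this.userService = userService;
      }
  }

  final class ApplicationFactory {
      // every service ends up constructed and held here...
      private static final UserService USER_SERVICE = new UserService();
      private static final AdService AD_SERVICE = new AdService(USER_SERVICE);

      private ApplicationFactory() {
      }

      // ...and looked up here on every request: a service locator in disguise
      static AdService adService() {
          return AD_SERVICE;
      }
  }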

If code is being moved towards a distributed and modular architecture you should be aware of how it clashes with the DI fan club.

If code is in development and you are uncertain whether the dependency should be obtained through a service locator or declared publicly, giving dependency inversion, and in the spirit of lean you think it smart to not yet make the decision, then using field injection can be the practical solution.

And just maybe you are not looking to push Dependency Inversion out into the API, and because you think of Spring's ApplicationContext (or BeanFactory) as your application's Service Locator, you use field injection as a way to automate service locator lookups.

For the majority of developers, the majority of the time, you will be writing new code, not caring about dependency injection trashing inversion of control, wanting lots of easy-to-write tests, and not worrying about API design, so it's ok to have a healthy preference towards constructor injection…

  Pic of Morpheus/The Matrix  


Keep questioning everything…
  …by remaining focused on what is required from the code at hand we can be pragmatic in a world full of rules and recommendations. This isn't about laziness or permitting poor code, but about being the idealist: the person who knows the middle way between the pragmatist and the ideologue. By knowing what can be dropped, when, and for how long, we can incrementally evolve complex code towards a modular design in a sensible, sustainable, and practical way.
  In turn this means the programmer gets the chance to catch their breath and remember that paramount to their work are the people: those who will develop against the design and the end-users of the product.




Credits:
A large and healthy dose of credit must go to Kaare Nilsen for being a sparring partner in the discussion that lead up to this article.

XSS protection: whose responsibility?

In a multi-tier application who can be responsible for XSS protection?
Must security belong to a dedicated team…or can it be a shared responsibility?
Today XSS protection is typically tackled by front end developers.
 Let’s challenge the status quo.  


New Applications vs Legacy Applications

For protection against Stored XSS many applications have the luxury of ensuring any text input from the user, or from a CMS, is made clean and safe before being written to the database. Given a clean database the only XSS protection required is around request values, for example values from url parameters, cookies, and form data.


Image by The Planet via Flickr -- http://www.flickr.com/photos/26388913@N05/4879419700

But some applications, especially legacy applications, are in a different boat. Databases typically have lots of existing data in many different formats and tables, so it's often no longer feasible to focus on protecting data on its way into the system. In this situation it is the front end developers that pay the price for the poor quality of backend data and are left to protect everything. This often results in a napalm-the-whole-forest style of xss protection where every single variable written out in the front end templates goes through some equivalent of

            <c:out value="${someText}"/>

This makes sense, but… if you don't have control over the data, is your only option to be this paranoid?

A Messed up World

To illustrate the problem let’s create a simple example by defining the following service

  interface AdvertisementService{
    Advertisement getAdvertisement(long id);
  }
 
  interface Advertisement{
    /** returns plain text title */
    String getTitle();
    /** returns the description, which may contain html if isDescriptionHtml() returns true */
    String getDescription();
    /** indicates the description is html */
    boolean isDescriptionHtml();
  }

The web application, having already fetched an advertisement in the control tier, would somewhere have a view template looking something like

    <div>
        <h1><c:out value="${advertisement.title}"/></h1>
        <p>
            <c:out value="${advertisement.description}" 
                      escapeXml="${!advertisement.descriptionHtml}"/>
        </p>
    </div>
Here we add another dimension: simple escaping with c:out won't work if you actually want to write out html (and what then of the safety and quality of such html data?).

When this service is used by different applications, each with their own view templates, and maybe also exposed through web services, you no doubt end up protecting it many times over: system developers in the web services, and front end developers in each of the presentation layers… likely there will be confusion over the safety and quality of data coming from this service, and of course everyone will be doing it differently so nothing will be consistent.

  • Is there a better way?
  • Can we achieve a consistent XSS protection in such an environment?
  • How many developers need to know about XSS and about the origin of the data being served?

In the above code if we can guarantee that the service _always_ returns safe data then we can simplify it by removing the isDescriptionHtml() method. The same code and view template would become

  interface AdvertisementService{
    Advertisement getAdvertisement(long id);
  }
 
  /** All fields xss protected */
  interface Advertisement{
    /** returns plain text title */
    String getTitle();
    /** return description, which may contain safe html */
    String getDescription();
  }
    <div>
        <h1>${advertisement.title}</h1>
        <p>${advertisement.description}</p>
    </div>

By introducing one constraint, that all data is xss protected in the services tier, we have provided a simpler and more solid service API, and allowed all applications to have simpler, more concise, more readable view templates.

Solutions

Given a clean database, all non-html data can be escaped as it comes in. Take advantage of Apache's StringEscapeUtils.escapeHtml(..) from the commons-lang library. For incoming html one can take advantage of a rich html manipulation tool, like jsoup, to clean, normalise, and standardise it.
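
As a rough sketch of that incoming-data approach, assuming commons-lang and jsoup are on the classpath (the InputSanitizer wrapper itself is made up for illustration):

  import org.apache.commons.lang.StringEscapeUtils;
  import org.jsoup.Jsoup;
  import org.jsoup.safety.Whitelist;

  final class InputSanitizer {

      /** plain text fields: escape everything so any markup is stored inert */
      static String cleanPlainText(String input) {
          return StringEscapeUtils.escapeHtml(input);
      }

      /** html fields: strip anything outside a conservative whitelist */
      static String cleanHtml(String input) {
          return Jsoup.clean(input, Whitelist.basic());
      }
  }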

With legacy or foreign data, especially in applications with an exposed service architecture and/or multiple front ends, a different approach is best: ensure nothing unsafe ever comes out of the services tier. For the html data the services will often be filtering many snippets of html over and over again, so this needs to be fast, and a heavy html manipulation library like jsoup isn't appropriate any more.
A suitable library is xss-html-filter, a port of libfilter. It is fast and has an easy API:

String safeHtml = new HTMLInputFilter().filter( "some html" );
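
And a hedged sketch of where such a call could live so the guarantee stays inside the services tier, reusing the simplified Advertisement interface from above; the SafeAdvertisement class is made up for illustration, and the HTMLInputFilter import is omitted since its package depends on the xss-html-filter version:

  class SafeAdvertisement implements Advertisement {
      private final String title;
      private final String description;

      SafeAdvertisement(String rawTitle, String rawHtmlDescription) {
          // plain text title: escape it outright
          this.title = org.apache.commons.lang.StringEscapeUtils.escapeHtml(rawTitle);
          // html description: filter it once, on its way out of the services tier
          this.description = new HTMLInputFilter().filter(rawHtmlDescription);
      }

      public String getTitle() {
          return title;
      }

      public String getDescription() {
          return description;
      }
  }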

If we do this it means
  • xss protection is not duplicated, and it is made clear who is responsible for what,
  • c:out becomes just needless verbosity and a performance cost*,
  • service APIs become simpler,
  • view templates look the same for both new and legacy applications,
  • system developers become responsible for the protection of database data, and this creates a natural incentive for them to clean up existing data and ensure new data comes in safe.

* No matter what, an inescapable fact is that all front end developers must have a concrete understanding that any value fresh from the request is prone to Reflected XSS, and there's nobody but them that can be responsible for protecting these values.

  XSS protection has become a basic knowledge requirement for all developers.
    Like much to do with security… it's always only as strong as its weakest link.

At FINN.no, because we take security seriously, and because we know we are only human and need some room for the occasional mistake, we run security audits for each release. These include WatchCom reports, tools like Acunetix, and custom tests using Cucumber.
