FINN tech blog


We love NPM!


What? Why is FINN.no donating to scale NPM? I thought you guys were a pure Java shop? It is true, we used to be a pure Java shop. However, over the past three years we have adopted new technologies to solve specific problems. We have used Ruby and Cucumber for some time to build a platform for continuous delivery, and it has worked out beautifully! Our front-end developers, on the other hand, have been forced to deal with outdated and unsuitable tools for doing their job. This is largely because most of the innovation in front-end tooling does not happen in the Java community. Most of the exciting tools are written in Node, and this has become a frustration and a challenge for us.

In the past year FINN has been gradually making a transition away from using only Java-based tools for front-end development and towards a NodeJS-powered tool set. We are now at a point where we are on the brink of rolling this out for our projects. Having worked with Node for a while we have learned to appreciate the ecosystem that is NPM. Being part of such a vibrant ecosystem of modules makes the transition easier, and it also inspires us to become better at giving back. Therefore we try to give back to NPM when we can.

When the scale NPM campaign was launched it was obvious that this was something we wanted to be a part of. It is an investment in our own happiness in a sense, as NPM is becoming a very important part of our technology portfolio.

Nodeify all the things

So where is it that we use Node in our technology stack today? Earlier this year we moved away from JsTestDriver in favour of Karma-runner. This meant that we needed to sneak a trojan horse containing the goodness of Node/NPM into existing Java projects without causing too many problems for developers with no knowledge of Node. A part of this scheme was the frontend-maven-plugin, which enables us to control which Node version projects use and allows developers without Node installed to build projects and run tests without having to learn anything about Node.
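For developers this trojan horse is mostly invisible: the plugin downloads a pinned Node/NPM into the project itself and drives npm and the test runner from the ordinary Maven lifecycle. Roughly, and only as a sketch (the goal names below are from the plugin's documentation as we remember it, not necessarily our exact setup), a build looks like:

# no Node installed on the machine – the regular Maven build handles everything
mvn clean test
# behind the scenes the frontend-maven-plugin runs, approximately:
#   frontend:install-node-and-npm   downloads the pinned Node/NPM into the project directory
#   frontend:npm install            fetches the JavaScript dependencies
#   frontend:karma                  runs the JavaScript tests as part of the Maven test phase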

Currently we are working towards removing the need for using Maven to build and deploy pure JavaScript projects using Node. The end result is of course to have more web applications built using Node. Today we have just two which are running in a production environment.

Low-fi Coffee Surveillance

There is a Norwegian start-up called appear.in which enables anyone to do online video conferencing with only a web browser. They utilize WebRTC to empower users to create rooms where they can hang out, do meetings, etc. We had the pleasure of having Ingrid from appear.in over for a visit and she suggested we contribute to their blog about use cases for appear.in. That's how we became use case #7 on their blog. Naturally this should've been an Arduino-powered thing with lots of sophisticated technology, but we decided we'd just get something simple working first and then make it better later on.

This is a really exciting platform as it empowers developers to add video/audio communication with little effort. Dealing with WebRTC, which is still a moving specification, can be quite challenging to implement properly. There are numerous corner cases and differences between implementations.

Update: Through the wonders of the internet, it turns out that we have brought the webcam thing full circle. A user on Hacker News posted a comment which linked to a Wikipedia article about the first webcam. It turns out its use case was exactly the same as ours.

Package Management conflicts Continuous Delivery


The idea of package management is to correctly operate and bundle together various components in any system. The practice of package management is a consequence of the design and evolution of each component's API.

Package management is tedious

   but necessary. It can also help to address the ‘fear of change’.

We can minimise package management by minimising API. But we can't minimise API if we don't have experience with where it comes from. You can't define for yourself what the API of your code is. It extends well beyond your public method signatures. Anything that, when changed, can break a consumer is API.

Continuous Delivery isn’t void of API

   despite fixed and minimised interfaces between runtime services, each runtime service also contains an API in how it behaves. The big difference, though, is that you own the release change, a la the deployment event, and if things don't go well you can roll back. Releasing artifacts in the context of package management cannot be undone. Once you have released the artifact you must presume someone has already downloaded it and you can't get it back. The best you can do is release a new version and hope everyone upgrades to it quickly.

Push code out from behind the shackles of package management

   take advantage of continuous delivery! Bear in mind that a healthy modular systems design comes from making sure you get the API design right – so the amount one can utilise CD is ultimately limited, unless you want to throw out modularity. In general we let components low in the stack "be safe" by focusing on API design over delivery time, and the opposite for components high in the stack.

High in the stack doesn’t refer to front-end code

   Code at the top of the stack is free of package management and completely free for continuous deployment. Components with direct consumers no longer sit at the top of the stack. As a component's consumers multiply, and it becomes a transitive dependency, it moves further down the stack. Typically a component's entropy corresponds to its position in the stack. Other components forced into package management can be those where parallel versions need to be deployed.

Some simple rules to abide by…

  • don’t put configuration into libraries.
    because this creates version-churn and leads to more package management

  • don’t put services into libraries.
    same reason as above.

  • don’t confuse deploying with version releases.
    don’t release every artifact as part of a deployment pipeline.
    separate concerns of continuous delivery and package management.


  • try to use a runtime service instead of a compile-time library.
    this minimises API, in turn minimising package management.

  • try to re-use standard APIs (REST, message-buses, etc).
    the less API you own the less package management.
    but don’t cheat! data formats are APIs, and anything exposed that breaks stuff when changed is API.

Dark Launching and Feature Toggles


Make sure to distinguish between these two.

They are not the same thing,
  and it’s a lot quicker to just Dark Launch.

In addition Dark Launching promotes incremental development, continuous delivery, and modular design. Feature Toggles need not do so, and can possibly be counter-productive.

Dark Launching is an operation to silently and freely deploy something. It gives you time to ensure everything operates as expected in production, and the freedom to switch back and forth while bugfixing. A Dark Launch's goal is often to be completely invisible to the end-user. It also isolates the context of the deployment to the component itself.

Feature Toggling, in contrast, is often the ability to A/B test new products in production. Feature Toggling is typically accompanied with measurements and analytics, eg NetInsight/Google-Analytics. Feature Toggles may also extend to situations when the activation switch of a dark launch can only happen in a consumer codebase, or when only some percentage of executions will use the dark launched code.

Given that one constraint of any decent enterprise platform is that all components must fail gracefully, Dark Launching is the easiest solution, and a golden opportunity to ensure your new code fails gracefully. Turn the new module on and it's dark launched and in use; if there are any problems, turn it off again. You also shouldn't have to worry about only running some percentage of executions against the new code: let it all go to the new component, and if the load is too much the excessive load should also fail gracefully and fall back to the old system.

Dark Launching is the simple approach as it requires no feature toggling framework, or custom key-value store of options. It is a DevOps goal that remains best isolated to the context of DevOps – in a sense the ‘toggling’ happens through operations and not through code. When everything is finished it is also the easier approach to clean up. Dealing with and cleaning up old code takes up a lot of our time and is a significant hindrance to our ability to continually innovate. In contrast any feature toggling framework can risk encouraging a mess of outdated, arcane, and unused properties and code paths. KISS: always try and bypass a feature toggle framework by owning the ‘toggle’ in your own component, rather than forcing it out into consumer code.

Where components have CQS it gets even better. First the command component is dark launched, whereby it runs in parallel and data can be compared between the old and new system (blue-green deployments). Later on the query component is dark launched. While the command components can run and be used in parallel for each and every request, the query components cannot. When the dark launch of the query component is final the old system is completely turned off.

Now the intention of this post isn't to say we don't need feature toggling, but to give terminology to, and distinguish between, two different practices instead of lumping everything under the term "feature toggling". And to discourage using a feature toggling framework for everything just because we fail to understand the simpler alternative.

In the context of front-end changes, it's typical that for a new front-end feature to come into play some new backend components have been required. These will typically have been dark launched. Once that is done, the front-end will introduce a feature toggle rather than a dark launch, because it's either introducing something new to the user or wanting to introduce something new to a limited set of users. So even here dark launching can be seen not as a "cool" alternative, but as the prerequisite practice.

Reference: “DevOps for Developers” By Michael Hüttermann

given the git

 

“There is no way to do CVS right.” – Linus



FINN is migrating from subversion to the ever trendy git.
   We’ve waited years for it to happen,
      here we’ll try to highlight why and how we are doing it.

Working together

There's no doubt that git gives us a cleaner way of working on top of each other. Wherever you promote peer review you need a way of moving changesets from one computer to the next without having to commit to, and via, the trunk where everyone is affected. In Subversion, creating custom branches comes with too much (real or perceived) overhead, so the approach at best falls back to throwing patches around. Coming away from a pair-programming session it's better when developers go back to their own desk with such a patch so they can work on it a bit more and finish it properly with tests, docs, and a healthy dose of clean coding. It properly credits them as the author rather than appearing as if someone else took over and committed their work. Git's decentralisation of repositories provides the cleaner way by replacing these patches with private repositories and its easy-to-use branches.

Individual productivity

Git improves the individual's productivity with the benefits of stashing, squashing, resetting, and rebasing. A number of programmers had for a number of years already been on the bandwagon using git-svn against our subversion repositories. This was real proof of the benefits, given the headaches of git-svn (you can't move files, and renaming files can corrupt the repository).

With Git, work is encouraged to be done on feature branches and merged into master as complete (squashed/rebased) changesets with clean and summarised commit messages.

  1. This improves efforts towards continuous deployment due to a more stable HEAD.
  2. Rolling back any individual feature regardless of its age is a far more manageable task.
  3. By squashing all those checkpoint commits we typically make we get more meaningful, contextual, and accurate commit messages.

Reading isolated and complete changesets provides clear oversight, to the point that reading code history becomes enjoyable rather than a chore. Equally important is that documentation residing so close to, if not with, the code comes with real permanence. There is no documentation more accurate over time than the code itself and the commit messages to it. Lastly, writing and rewriting good commit messages will alleviate any culture of jira issues with vague, or completely inadequate, descriptions as teams hurry themselves through scrum methodologies where little attention is given to what is written down.
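To make this concrete, here is a minimal sketch of such a feature-branch flow (branch names and the exact sequence are illustrative, not a prescribed recipe):

git checkout -b feature/new-search-filter master
# ...hack away, committing checkpoints as often as you like...
git fetch
git rebase -i origin/master        # replay onto the latest master and squash the checkpoints
git checkout master
git pull --rebase
git merge feature/new-search-filter
git push origin master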

Maintaining forks

Git makes maintaining forks of upstream projects easy.

With Git:
  1. fork the upstream repository,
  2. branch, fix and commit,
  3. create the upstream pull request,
  4. while you wait for the pull request to be accepted/rejected use your custom in-house artifact.

With Subversion:
  1. file an upstream issue,
  2. checkout the code,
  3. fix and store the fix in a patch attached to the issue,
  4. while you wait use the in-house custom artifact built from the patched but uncommitted codebase.



Both processes are largely the same but it’s safer and so much easier using a forked git repository over a bunch of patch files.
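For the Git case the whole dance is only a handful of commands (repository and branch names here are hypothetical):

# fork the upstream project first, then:
git clone git@github.com:our-fork/somelib.git
git checkout -b fix/issue-123
# ...fix, commit, then push the branch and open the pull request upstream...
git push origin fix/issue-123
# while waiting for the pull request, release the patched in-house artifact from the fork
mvn clean deploy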

Does Git-Flow have any advantage?

We put some thought into how to organise our repositories, branches, and workflows. The best introductory article we've come across so far is from sandofsky and should be considered mandatory reading. Beyond this, one popular approach is organising branches using Git Flow. This seemed elegant but upon closer inspection comes with more disadvantages than benefits…

  • the majority needs to ‘checkout develop’ after cloning (there are more developers than ops),
  • master becomes but a sequence of "tags", and develop therefore becomes a superfluous branch; a floating "stable" tag is a better solution than any branch that simply marks states,
  • it was popular but didn’t form any standard,
  • requires a script not available in all GUIs/IDEs, otherwise it is but a convention,
  • prevents you from getting your hands dirty with the real Git, how else are you going to learn?,
  • it goes against having focus and gravity towards continuous integration, which promotes an always stable HEAD. That is, we desire fewer stabilisation and QA branches, and more individual feature and fix branches.

GitHub Flow gives a healthy critique of Git-Flow and it helped us identify our own style better. GitHub Flow focuses on continuous integration, "deploy to production every day" rather than releasing, and relying on git basics rather than adding another plugin to our development environment.

Our workflows

So we devised two basic and flexible workflows: one for applications and one for services and libraries. Applications are end-user products and things with no defined API, like batch jobs. Services are the runtime services that build up our platform; each comes with a defined API and a client-server separation in artifacts. Applications are deployed to environments, but because no other codebase depends on them their artifacts are never released. Services, with their APIs and client-side libraries, are released using semantic versions, and their server-side code is deployed to environments in the same manner as Applications. The differences between Applications and Services/Libraries warrant two different workflow styles.

Both workflow styles use master as the stable branch. Feature branches come off master. An optional “integration” (or “develop”) branch may exist between master and feature branches, for example CI build tools might automatically merge integration changes back to master, but take care not to fall into the anti-pattern of using merges to mark state.

The workflow for Applications may use an optional stable branch that deployments are made from; this is used by projects that have not perfected continuous deployment. Here bug fix branches are taken from the stable branch and forward ported to master. For applications practising continuous deployment a more GitHub-style approach may be taken: deployments are made from finished feature branches, and after running successfully in production those feature branches are merged to master.

The workflow for Services is based upon each release creating a new semantic version and a git tag for it. Continuous deployment off master is encouraged but is limited by how compatible the API and the client libraries are with the HEAD code in the master branch – code that is released and deployed must work together. Instead of the optional stable branch, optional versioned branches may exist. These are created lazily from the release tag when the need for a bug fix on a previous release arises, or when master code no longer provides compatibility with the released artifacts currently in use. The latter case marks the point when deployments start to occur off the versioned branch instead of off master. Bug fix branches are taken from the versioned branch and forward ported to master.
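As an illustration (version numbers and branch names are made up), lazily creating a versioned branch when a fix is needed on an older release might look like:

# a bug turns up in the released 2.3 client while master has moved on incompatibly
git checkout -b 2.3.x 2.3.0                    # version branch created lazily from the release tag
git checkout -b bugfix/timeout-handling 2.3.x
# ...fix, commit, merge back and release 2.3.1 from the version branch...
git checkout 2.3.x
git merge bugfix/timeout-handling
# forward-port the fix to master
git checkout master
git cherry-pick <the bugfix commits>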

Similar to Services are Libraries. These are artifacts that have no server-side code. They are standalone code artifacts serving as compile-time dependencies within the platform. A Library is released, but never itself deployed to any environment. Libraries are void of any efforts towards continuous deployment but otherwise follow a very similar workflow to Services – typically they give longer support to older versions and therefore have multiple release branches active.

How any team operates their workflow is up to them, free to experiment to see what is effective. At the end of the day as long as you understand the differences between merge and rebase then evolving from one workflow to another over time shouldn’t be a problem.

Infrastructure: Atlassian Stash

The introduction of Git was stalled for a year by our Ops team, as there was no repository management software they were happy enough to support (integration with existing services was important, particularly Crowd). Initially they were waiting on either Gitolite or Gitorious. Eventually someone suggested Stash from Atlassian and after a quick trial this was to be it. We're using a number of Atlassian products already: Jira, Fisheye, Crucible, and Confluence; so the integration factor was good, and so we paid for a product that was at the time incredibly overpriced with next to nothing on its feature list.

One feature the otherwise very naked Stash does come with is Projects, which provides a basic grouping of repositories. We've used this grouping to organise our repositories based on our architectural layers: "applications", "services", "libraries", and "operations". The idea is not to build fortresses with projects based on teams but to best please the outsider who is looking for some yet unknown codebase and only knows what type of codebase it is. We're hoping that Atlassian adds labels and descriptions to repositories to further help organisation. Permissions are easy: full read and write access for everyone. We'll learn quickest when we're all free to make mistakes, and it's all under version control at the end of the day.

Atlassian Stash


We’re still a cathedral

Git decentralises everything, but we're not a real bazaar: our private code is our cathedral, with typical enterprise trends like scrum and kanban in play; and so we still have the need to centralise a lot.
We still want our list of users and roles centralised. When people push to the master repository, are all commits logged against known users, or are we going to end up with multiple aliases for every developer? Or worse, junk users like "localhost"?
To tackle this we wrote a pre-push hook that authenticates usernames for all commits against Crowd. If a commit from an unknown user is encountered the push fails and the pusher needs to fix their history using this recipe before pushing again.
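The idea behind the hook is simple enough to sketch. This is not our actual script (crowd_knows_user below is just a stand-in for the real Crowd REST lookup, and a real hook must also handle branch creation and deletion), but it shows the shape of it:

#!/bin/sh
# reject pushes containing commits from authors unknown to Crowd
crowd_knows_user() {
  # in reality: an authenticated query against Crowd's user directory,
  # returning non-zero when the user is unknown
  true
}
while read oldrev newrev refname; do
  for email in $(git log --format='%ae' "$oldrev..$newrev" | sort -u); do
    if ! crowd_knows_user "$email"; then
      echo "unknown committer $email - fix your history and push again" >&2
      exit 1
    fi
  done
done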

Releases can be made off any clone, which is obviously not something we want. Released artifacts need to be permanent and unique, and deployed to our central maven repository. Maven's release plugin fortunately tackles this for us: when you run mvn release:prepare or mvn release:branch it automatically pushes the resulting changes upstream for you, as dictated by the scm details in the pom.xml.
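Cutting a release of a client library then boils down to (version numbers illustrative):

mvn release:prepare -DreleaseVersion=3.4.6 -DdevelopmentVersion=3.4.7-SNAPSHOT
mvn release:perform
# release:prepare creates the tag and pushes the version-bump commits upstream,
# as dictated by the <scm> section of the pom.xml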

Migrating repositories

Our practice with subversion was to have everything in one large subversion repository, like how Apache does it. This approach worked well for us, allowing projects and code across projects to be moved around freely. With Git it makes more sense for each project to have its own repository, as moving files along with their history between repositories is easy.

Initial attempts at conversion used svn2git, as described here, along with svndumpfilter3.

But then a Stash plugin called SubGit came along. It rocks! Converting individual projects from such a large subversion repository one at a time is easy. Remember to review the .gitattributes file afterwards; we found that in most use cases it could be deleted.


Hurdles

Integration with our existing tools (bamboo, fisheye, jira) was easier when everything was in one subversion repository. Now with scores of git repositories it is rather cumbersome. Every new git repository has to be added manually into every other tool. We’re hoping that Atlassian comes to the rescue and provides some sort of automatic recognition of new and renamed repositories.

Renaming repositories in Stash is very easy, and should be encouraged in an agile world, but it breaks the integration with other tools. Every repository rename means going through other tools and manually updating them. Again we hope Atlassian comes to the rescue.

We were worried about binary files, as our largest codebase had many and was already slow in subversion because of them. Subversion also stores all xml files by default as binary, and in a large spring-based application with a long history this might have been a problem. We were ready to investigate solutions like git-annex. All test migrations, though, showed that it was not a problem: git clones of this large codebase were super fast, and considerably smaller (subversion 4.1G -> git 1.1G).


Adaptation

Towards the end of February we were lucky enough to have Tim Berglund, Brent Beer, and David Graham from GitHub come and teach us Git. The first two days were a set course with 75 participants and covered

  • Git Fundamentals (staging, moving, copying, history, branching, merging, resetting, reverting),
  • Collaboration using GitHub (Push, pull, and fetch, Pull Requests Project Sites, Gists, Post-receive hooks), and
  • Advanced Git (Filter-Branch, Bisect, Rebase-onto, External merge/diff tools, Event Hooks, Refspec, .gitattributes).

The third day with the three GitHubbers was more of an open space with under twenty participants where we discussed various specifics to FINN’s adoption to Git, from continuous deployment (which GitHub excels at) to branching workflows.

No doubt about it, this was one of the best, if not the very best, courses held for FINN developers, and it left everyone with a resounding drive to immediately switch all codebases over to Git.

Other documentation that’s encouraged for everyone to read/watch is


Tips and tricks for beginners…

To wrap it up here’s some of the tips and tricks we’ve documented for ourselves…

Can't push because HEAD is newer
So you pull first… Then you go ahead and push, which adds two commits into history: the original and a duplicate merge from you.
You need to learn to do git rebase in such situations, better yet to do git pull --rebase.
You can make the latter permanent behaviour with
git config --global branch.master.rebase true
git config --global branch.autosetuprebase always

Colour please!
git config --global color.ui auto

Did you really want to push commits on all your branches?
This can trap people; they often expect push to be restricted to the current branch you're on. It can be enforced to be this way with git config --global push.default tracking

Pretty log display
Alias git lol to your preferred log format…
Simple oneline log formatting:
git config --global alias.lol "log --abbrev-commit --graph --decorate --all --pretty=oneline"

Oneline log formatting including committer’s name and relative times for each commit:
git config --global alias.lol "log --abbrev-commit --graph --all
      --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset'"

Compact log formatting with full commit messages, iso timestamps, file history over renames, and mailmap usernames:
git config --global alias.lol "log --follow --find-copies-harder --graph --abbrev=4
  --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %Cgreen%ai %n %C(bold blue)%aN%Creset %B'"

Cherry-picking referencing the original commit
When recording a cherry-pick commit, using the "-x" option appends the line "(cherry picked from commit …)" to the original commit message, so as to indicate which commit this change was cherry-picked from. For example git cherry-pick -x 3523dfc

Quick log to see what I've done in the last 24 hours?
git config --global alias.standup "log --all --since yesterday --author `git config --global user.email`"

What files has this project got but is ignoring?
git config --global alias.ignored "ls-files --others --ignored --exclude-standard"

Wipe all uncommitted changes
git config --global alias.wipe "reset --hard HEAD"

Edit and squash commits before pushing them
git config --global alias.ready "rebase -i @{u}"



I wish I knew my consumers – Maven Reverse Dependency

At FINN.no, being a developer fixing bugs in a library is a breeze. Getting every user of your library to use the fix, however, is a different story. How do you know who to notify? I mean, I know my library's dependencies, but who "out there" has a dependency on the component where I just fixed a bug? I wish I knew. Enter maven-dependency-graph.

The idea was born on the plane back home from a Copenhagen-hosted conference. Graph database. Download neo4j and start dabbling with a maven plugin. The flying time from Copenhagen to Oslo was suddenly too short.

From there, the idea slept for a couple of years. Until the need arose somewhere among the developers. With 100+ different applications running with common core services and libraries, everybody suddenly needed to know who depended on their code which had recently been bugfixed. So the old idea was dusted off and once more saw the light of day. We needed to upgrade the server installation and the API to neo4j – which took some time to grasp; but after some playing around, it became obvious and easy.

The idea was to have every project report its dependencies to a graph database, building the tree of dependencies on each commit. This constitutes one half of the plugin. Over time, all projects will have reported their dependencies, and from there on part two of the plugin comes into use. It will examine the reverse dependencies to the *current* maven project, and report all incoming dependencies to it in the maven log. Hey, presto! We now know who out there uses us! And even which version they are using, thanks to two different keys into the built-in lucene index engine.

The plugin is published on github @ Finn Technology’s account. Feel free!
@gardleopard and @roarjoh

Usage examples

Dependencies to current maven project:

mvn no.finntech:dependency-mapper-maven-plugin:read
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building greenpages thrift-client 3.4.5-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- dependency-mapper-maven-plugin:1.0-SNAPSHOT:read (default-cli) @ commons-thrift-client ---
[INFO] Resolving reverse dependencies
[INFO] no.finntech.travel.supplier:supplier-client:1.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.1.1
[INFO] no.finntech.cop:client:1.1-SNAPSHOT -> no.finntech:commons-thrift-client:3.1.1
[INFO] no.finntech.oppdrag-services:iad-model:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3
[INFO] no.finntech:minfinn:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3
[INFO] no.finntech:service-user:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3
[INFO] no.finntech:service-oppdrag:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3
[INFO] no.finntech:kernel:2013.2-SNAPSHOT -> no.finntech:commons-thrift-client:3.4.3

(umpteen lines skipped…)

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.957s
[INFO] Finished at: Thu Jan 31 09:50:19 CET 2013
[INFO] Final Memory: 9M/211M
[INFO] ------------------------------------------------------------------------

Usage of third party framework (using neo4j’s included admin interface):

StrataConf & Hadoop World 2012…

A summary of this year’s Strataconf & Hadoop World.
    A fascinating and inspiring conference with use-cases on both sides of an ethical divide – proof that the technologies coming are game-changers both in our industry and in society. Along with some intimidating use-cases, I've never seen such recruitment efforts at any conference before, from multi-nationals to the CIA. The need for developers and data scientists in Big Data is burning – the Apache Hadoop market is expected to reach $14 billion by 2017.

Plenty of honesty towards the hype and the challenges involved too. A barcamp, Big Data Controversies, labelled it all as Big Noise and looked at ways through the hype. It presented balancing perspectives from an insurance company's statistician who has dealt successfully with the problem of too much data for a decade and a hadoop techie who could provide much desired answers to previously impossible questions. Highlights from this barcamp were…

  • One should always use intelligent samples before ever committing to big data.
  • Unix tools can be used but they are not very fault tolerant.
  • You know when you’re storing too much denormalised data when you’re also getting high compression rates on it.
  • MapReduce isn’t everything as it can be replaced with indexing.
  • If you try to throw automated algorithms at problems without any human intervention you’re bound to bullshit.
  • Ops hate hadoop and this needs to change.
  • Respecting user privacy is important and requires a culture of honesty and common-sense within the company. But everyone needs to understand what’s illegal and why.

Noteworthy (10 minute) keynotes…

  • The End of the Data Warehouse. They are monuments to the old way of doing things: pretty packaging but failing to deliver the business value. But Hadoop too is still flawed… Also a blog available.
  • Moneyball for New York City. How NYC council started combining datasets from different departments with surprising results.
  • The Composite Database, a focus on using big data for product development. To an application programmer the concept of a database is moving from one entity into a componential architecture.
  • Bringing the ‘So What’ to Big Data, a different keynote with a sell towards going to work for the CIA. Big data isn’t about data but changing and improving lives.
  • Cloud, Mobile and Big Data. Paul Kent, a witty speaker, talks about analytics in a new world. “At the end of the day, we are closer to the beginning than we are at end of this big data revolution… One radical change hadoop and m/r brings is now we push the work to the data, instead of pulling the data out.”

Noteworthy (30 minute) presentations…

  • The Future – Hadoop-2. Hadoop YARN makes all functional programming algorithms possible, reducing the existing map reduce framework to just one of many user-land libraries. Many existing limitations are removed. Fault-tolerance is improved (namenode). 2x performance improvement on small jobs.
  • Designing Hadoop for the Enterprise Data Center. A joint talk from Cisco and Cloudera on hardware tuning to meet Hadoop's serious demands. 10G networks help. Dual-attached 1G networks are an alternative. More jobs in parallel will average out network bursts. Data-locality misses hurt the network; consider above an 80% data-locality hit rate good.
  • How to Win Friends and Influence People. LinkedIn presents four of their big data products
        ∘ Year in Review. Most successful email ever – 20% response rate.
        ∘ Network Updates.
        ∘ Skills and Endorsements. A combination of propensity to know someone and the propensity to have the skill.
        ∘ People You May Know. Listing is easy, but ranking required combining many datasets.

    All these products were written in PIG. Moving data around is the key problem. Kafka is used instead of scribe.
  • Designing for the data-driven organisation. Many companies who think they are data-driven are in fact metrics-driven. It's not the same thing. Metrics-driven companies often want interfaces with less data. Data-driven companies have data-rich interfaces presenting holistic visualisations.
  • Visualizing Networks. The art of using the correct visualisation and layout. Be careful of our natural human trait to see visual implications from familiarity and proximity – we don’t always look at the legend. A lot of examples using the d3 javascript library.

The two training sessions I attended were Testing Hadoop, and Hadoop using Hive.
Testing Hadoop.
Presented by an old accomplice from the NetBeans Governance board, Tom Wheeler. He presented an interesting perspective on testing, calling it another form of computer security: "a computer is secure if you can depend on it and its software to behave as you expect". Otherwise I took home a number of key technologies to fill in the gaps between and around our current unit and single-node integration tests on our CountStatistics project: Apache MRUnit for map/reduce units, MiniMRCluster and MiniDFSCluster for a multi-JVM integration cluster, and BigTop for ecosystem testing (pig, hive, etc). We also went through various ways to benchmark hadoop jobs using TeraSort, MRBench, NNBench, TestDFSIO, GridMix3, and SWIM. Lastly we went through a demo of the free product "Cloudera Manager" – a diagnostics UI similar to Cassandra's OpsCenter.

Hadoop using Hive.
Hive provides an SQL interface to Hadoop. It works out-of-the-box if you're using HBase, but with Cassandra as our underlying database we haven't gotten around to installing it yet. The tutorial went through many live queries on an AWS EC2 cluster, exploring the specifics of STRUCTs in schemas, JOINs, UDFs, and serdes. This is a crucial interface for making it easier for others, particularly BI and FINN økosystem, to explore freely through our data. Pig isn't the easiest way in for outsiders, but everyone knows enough SQL to get started. Fingers crossed we get Hive or Impala installed some time soon…

A number of meet-ups occurred during the evenings, one hosted at AppNexus, a company serving 10 billion real-time ads per day (with a stunning office). AppNexus does all their hadoop work using python, but they also put focus on RESTful Open APIs like we do. The other meetup represented Cassandra, by DataStax, with plenty of free Cassandra beer. The latest benchmarks prove it to be the fastest distributed database around. I was hoping to see more Cassandra at strataconf – when someone mentions big data I think of Cassandra before Hadoop.

Otherwise this US election was in the news as the big data election.

   

Leaving the Tower of Babel

Here at FINN we use mostly Java for our day-to-day work, but we do have some applications and modules written in other languages. We currently have code in Java, Ruby, JavaScript, Objective-C and Scala, supporting things as diverse as testing frameworks and iOS-apps.

Recently there has been some discussion with regards to what we should be using for new projects, as teams of developers start suggesting that they should use something they think would be easier and more effective. The arguments against are based on how easy it would be to recruit new developers who know these new technologies, how well they would perform under our kinds of loads, and whether the assumption that we would be more effective is actually true.

Our strategy is firm on this point: experiments can be done, but for major applications serving the public we should be using Java for the time being. This strategy is founded on a wish to reduce the size of our technology portfolio, so that we don't have to spend time and effort bringing developers up to speed when they switch teams. There's also a higher chance that new hires will be familiar with Java, while other technologies would often require a period of training when people first start. We are, however, considering whether it's time to take a second look at our chosen languages, and see if we should expand our portfolio.

During these discussions, we created a quick poll and sent it out on our internal discussion board. The poll asked questions like "How long have you worked as a developer?", "Which language do you use for your day-to-day work?", "If you were free to choose, which language would you use for your next project?" and "Which programming languages do you know?". Our definition of "know" was very open in this poll, lowering the bar so that people who had maybe written a single example program and understood a bit of the code could check the box.

Out of the nearly 100 people who work with development, 56 answered. We can assume that the people who did answer were the people who were interested in the topic, and might not be a representative selection, but the results are interesting nonetheless.

So.. what did we learn?

Experience

Being a poll made in the midst of a discussion, mostly for fun, this part of the poll wasn't framed in a way that gives us much useful information. We can say that the average developer who answered has worked for around seven or eight years, but with what looks like a reasonably fair spread across all groups. There are almost as many with a couple of years' experience as there are people with 10 to 15 years.

Primary language in day-to-day work

Being a predominantly Java shop, most of the respondents were using Java for their daily work. 35 out of 56 were Java developers. 11 respondents worked with JavaScript daily, while four worked with Scala, three with Ruby, two with Objective-C, and finally one database developer worked mostly in T-SQL (the SQL variant used in Sybase).

Most popular choice if allowed to choose freely

This was the most interesting part of the poll, with many insights. It is also the most loaded part, as the results could easily short-circuit any discussion about the future choice of language.

The bad news (or good, depending on your point of view) is that there was no clear answer from this section. Keep in mind that around 40 people didn't answer this poll, and could be assumed to be content with the current status quo.

The biggest group was Java, not surprisingly. The surprising part is that only 16 people would choose Java if they could choose freely. This is much lower than expected, but still makes up the largest group. Of these, 13 people are already using Java today. 20 Java developers would like to use something else.

So, if Java was the largest group, at 16 people, what does that mean for the remainder of the results? Well, 10 people would have chosen Scala, while JavaScript, Ruby, Clojure and Groovy clock in at about four or five respondents each wanting to use them. At the bottom of the pack we have Python and Objective-C, with three and two respondents choosing them.

Six people were interested enough to answer the poll, but chose the non-committal "I don't care, as long as I'm making good stuff for our users" option.

The number of possible answers here was a rather large selection of popular and not-so-popular-but-quite-well-known languages, so the fact that the list only includes eight languages is a sign that it’s not a completely random selection. Still, quite a lot of discussion is needed before any one of those languages gets center stage.

A couple of curious details found in this part of the poll include the fact that the only people who would choose JavaScript are people who are already working in JavaScript every day. Groovy, Clojure, Objective-C and Scala have all managed to be chosen by a person who doesn't actually know the language. There are only three people who would switch *to* Java, if allowed to choose.

Which languages do we “know”?

As mentioned earlier, the bar for “knowing” a language was set quite low in this poll, to get more diversity in answers. This resulted in some interesting numbers.

Being primarily a Java shop, it might not surprise anyone that a full 100% know Java. A little over three fourths know JavaScript, while Ruby and T-SQL were known to about half. These are the main languages most of us have some sort of dealings with in our daily work, so that they score high is not surprising.

Next on the list is PHP and Python, with around 40% knowing them.

We run primarily Sybase for our databases, with a small number of MySQL and PostgreSQL servers for smaller applications and modules. We are trying to standardise on PostgreSQL, but this isn't reflected in knowledge: 37.5% know how to code MySQL procedures, 35.7% know how to code for Oracle, and only 25% know their way around a PostgreSQL codebase.

Our operations department has had a "vendetta" against Perl for several years, but there are still more people who know Perl than there are people who know Scala, with the score 18 to 15. Groovy has quite a following too, with 13 respondents knowing Groovy.

We have a small Apps team working with iOS-development, but Objective-C has a reach far outside that team, with 10 respondents knowing Objective-C.

Common Lisp is known to five respondents, while Clojure, surprisingly, was only known to seven respondents. As you can read elsewhere in this blog, we had a Clojure workshop at our Technology Day earlier this summer, and these results might be telling us something about the language, or our teachers, given it didn't manage to "stick" better than this.

We also have people who know some Erlang, Smalltalk, Lua, Haskell, Scheme, Eiffel and ML/SML.

How many languages do we “know”?

On average, we know 6.8 languages each. The most knowledgeable person knows 16 languages, while the least knowledgeable knows only one. The high-experience respondents, those with 16 years or more, have a higher average than the rest of us, at 9.4, while the other experience brackets know somewhere between six and seven languages.

Several programmer gurus seem to think you should strive to teach yourself one new language every year, and it would seem at least one of our respondents has been able to follow this advice. For those of us with less spare time, a less ambitious strategy might be more compatible, but that we should strive to learn something new every once in a while seems to be good advice.

Now what?

As mentioned previously, this poll was not a serious attempt at gaining actionable insights. The results should be taken with a large helping of salt, and at most used as a basis for discussions. On the other hand, I know we will be discussing this topic going forward, because the fact that only 16 of 56 wanted to use Java tells us it's time to start the discussion. There are possibly 40 developers who would prefer Java who didn't respond, so it's too early to draw conclusions, but we have a place to start.

There are other questions falling out of this too:

  • There are two developers who want to work with Objective-C, and eight Java developers who want to work with Scala, so why do we have so few internal applicants when we have openings on the teams working with these languages?
  • What is it that makes developers want to work with languages they don’t even know?
  • Of the respondents, more than half the Java developers want to use something else. Why is that?

 

Foraging in the landscape of Big Data

 
This is the first article describing FINN.no's coming of age with Big Data. It'll cover the challenges giving rise to the need for it, the challenges in accomplishing new goals with new tools, and the challenges that remain ahead.

Big Data is for many just another vague and hyped-up trend getting more than its fair share of attention. The general definition, from Wikipedia, is that big data covers the scenario where existing tools fail to process the increasing amount or dimensions of data. This can mean anything from:

      α – the existing tools being poor (while large companies pour $$$ into scaling existing solutions up) or
      β – the status quo requiring more data to be used, to
      γ – a requirement for faster and more connected processing and analysis on existing data.


The latter two are also described as big data's three V's: Volume, Variety, Velocity. If the theoretical definition isn't convincing you, put it into context against some of today's big data crunching use-cases…
    • online advertising combining content analysis with behavioural targeting,
    • biomedical’s DNA sequence tagging,
    • pharmaceutics’s metabolic network modelling,
    • health services detecting disease/virus spread via internet activity & patient records,
    • the financial industry ranging from credit scores at retail level to quant trading,
    • insurance companies crunching actuarial data,
    • US defence programs for offline (ADAMS) and online (CINDER) threat detection,
    • environmental research into climate change and atmospheric modelling, and
    • neuroscience research into mapping the human brain’s neural pathways.


On the other hand big data is definitely no silver bullet. It cannot give you answers to questions you haven’t yet formulated (pattern recognition aside), and so it doesn’t give one excuses to store overwhelming amounts of data where the potential value in it is still undefined. And it certainly won’t make analysis of existing data sets initially any easier. In this regard it’s less to do with the difficulty of achieving such tasks and more to do with the potential to solve what was previously impossible.

Often companies can choose their direction and the services they will provide, but in any competitive market failing to match the competitors' offerings can result in the fall of a company. Here Big Data earns its hype, with many a concerned CEO paying attention. And it probably gives many CEOs a headache, as the possibilities it opens, albeit tempting or necessary, create significant challenges in and of themselves. The multiple vague dimensions of big data also allow the critics plenty of room to manoeuvre.

One can argue that scaling up (buying more powerful machines or more expensive software) solves the problem (α). If all you have is this problem then sure, it's a satisfactory solution. But ask yourself: are you successfully solving today's problem while forgetting your future?

“If we look at the path, we do not see the sky.” – Native American saying

One can also argue away the need for such vast amounts of data (β). Through various strategies (more aggressive normalisation of the data, storing data for shorter periods, or persisting data in different ways in different places) the size of each individual data set can be significantly reduced. Excessively normalising data has its benefits and is what one may do in the Lean development approach for any new feature or product. Indeed a simpler datamodel trickles through into a simpler application design, in turn leaving more content, more productive, pragmatic developers. Nothing to be scoffed at in the slightest. But in this context the Lean methodology isn't about any one state or snapshot in time illustrating the simple and the minimalistic; rather it's about evolution, the processes involved and their direction. Much like the KISS saying: it's not about doing it simple stupid but about *keeping* it simple stupid. The question here is how overly normalised data evolves as its product becomes successful and more complicated over time. Anyone who has had to deal awkwardly with numerous and superfluous tables, joins, and indexes in a legacy application, because it failed over time to continually improve its datamodel due to the need for compatibility, knows what we're talking about. There is another problem with such legacy applications that follow a datamodel-centric design: the datamodel itself becomes a public API, and its many consumers create this need for compatibility and the resulting inflexible datamodel. This isn't so much an underlying problem as an overlapping one, as one loses oversight of the datamodel in its completeness and in an optimal representation.




It is also difficult to deny the amount of data we're drowning in today.
“90% of the data that exists today was created in the last two years.. the sheer volume of social media and mobile data streaming in daily, businesses expect to use big data to aggregate all of this data, extract information from it, and identify value to clients and consumers.. Data is no longer from our normalised datasets sitting in our traditional databases. We’re accessing broader, possibly external, sources of data like social media or learning to analyse new types of data like image, video, and audio..” – greenbookblog.org.

And one may also argue that existing business intelligence solutions (can) provide the analysis required from all already existing datasets (γ). This ignores a future of possibilities: take for example the research going into behavioural targeting, giving glimpses into the challenges of modern marketing as events and trends spark, shift, and evolve through online social media with ever faster frequencies – just the tip of the iceberg when one thinks forward to being able to connect face and voice recognition to emotional pattern matching analysis. But it also defaults to the conservative opinion that business intelligence need only be a post-mortem analysis of events and trends, providing insights intended and required only for internal review. The notion that such large scale analysis is of sole benefit to company strategy must become the telltale sign of companies failing to see how users, and the likes of social media, dramatically change what is possible in product development today.

The methodologies of innovation therefore change. Real-time analysis of user behaviour plays a forefront role in decisive actions on which product features and interfaces will become successful. This has the potential to cut down the risk of product development's internal guesswork about a product's popularity at any given point in time. In turn this cuts down time-to-market, bringing the product's release date closer to its moment of maximum popularity potential. Startup companies know that success doesn't come only from a cleverly designed product; there is a significant factor of luck involved in releasing the right product at the right time. Large, and very costly, marketing campaigns can only extend, or synthetically create, this potential moment by so much.

This latter point, around the extent and performance of big data analysis and the differentiation it creates between business analysis and richer, more spontaneous product development and innovation, is for FINN an important and consequential factor in our forage into big data. Here at FINN a product owner recently said: "FINN is already developing its product largely built around customer feedback and therefore achieving continuous innovation?". Of course what he meant to say was "from the numbers we choose to collect, we generate the statistics and reports we think are interesting, and from these statistics we freely interpret them to support the hypothesis we need for further product development…" We couldn't be further from continuous innovation if it hitched a lift to the other side of Distortionville¹.

“It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts” – Arthur Conan Doyle

FINN isn't alone here; I'd say most established companies, while trying to brand themselves as innovative, are still struggling to break out of their existing models of product development based upon internal business analysis. No one said continuous innovation was easy, and there are a lot of opinions out there on this, but along with shorter deployment cycles I reckon there are two keywords: truth and transparency. Tell your users the truth and watch how they respond. For example give them the statistics that show them their ads stop getting visits after a week, and then observe how they behave, and to which solution they flock to regain traffic to their ads. Don't try to solve all their problems for them, rather try to enable them. You'll probably make more money out of them, and by telling them the truth you've removed a vulnerability, or what some like to refer to as "a business secret".

There's also a potential problem with organisational silos. Large companies having invested properly in business intelligence and data warehousing will have assigned the roles of data collection, aggregation, and analysis to a separate team or group of experts, typically trained database administrators, statisticians, and traffic analysts. They are rarely the programmers; the programmers are on the front lines building the product. Such a split can parallel the sql vs nosql camps. This split against the programmers, whom you rely on to make continuous innovation a reality, can run the risk of stifling any adoption of big data. With the tools enabling big data the programmers can generate reports and numbers previously only possible from the business intelligence and data warehousing departments, and can serve them to your users at web response times – integrating such insights and intelligence into your product. Such new capabilities don't supersede these traditional departments; rather everyone needs to accept the new, working together to face new challenges with old wisdom. The programmers working on big data, even if tools and data become shared between these two organisational silos, cannot replace the needs of business intelligence any more than business intelligence can undertake big data's potential. As data and data sources continue to increase year after year, the job of asking the right questions, even knowing how to formulate the questions correctly, needs all hands on deck. Expecting your programmers to do it all might well swamp them into oblivion, but it isn't just the enormity of the new challenges involved, it's that these challenges have an integral nature to them that programmers aren't typically trained to tackle. Big Data can be used as an opportunity not only to introduce exciting new tools, paradigms, and potential into the company, but as a way to help remove existing organisational silos.

The need for big data at FINN came from a combination of (α) and (γ). The statistics we show users for their ads had traditionally been accumulated and stored in a Sybase table. These statistics included everything from page views, "tip a friend" emails sent, clicks on promoted advertisement placements, ads marked as favourite, and whatever else you can imagine.

(α) FINN is a busy site, the busiest in Norway, and we display ~50 million ad pages each day. Like a lot of web applications we had a modern scalable presentation and logic tier based upon ten tomcat servers but just one not-so-scalable monster database sitting in the data tier. The sybase procedure responsible for writing to the statistics table ended up being our biggest thorn. It used 50% of the database’s write execution time, 20% of total procedure execution time, and the overall load it created on the database accounted for 30% of ad page performance. It was a problem we had lived with long enough that Operations knew quickly to turn off the procedure if the database showed any signs of trouble. Over one period of troublesome months Operations wrote a cronjob to turn off the procedure automatically during peak traffic hours – when ads were receiving the most traffic we had to stop counting altogether, embarrassing to say the least!

(γ) On top of this, product owners in FINN had for years been requesting that we provide statistics on a per-day basis. The existing table had tinkered with this idea for some of the numbers that didn't accumulate so high, e.g. "tip a friend" emails, but for page viewings this was completely out of the question – not even the accumulated totals were working properly.

At the time we were in the process of modularising the FINN web application. The time was right to turn statistics into something modern and modular. We wanted an asynchronous, fault-tolerant, linearly scaling, and durable solution. The new design uses the Command-Query Separation pattern, with two separate modules: one for counting and one for displaying statistics. The counting achieves asynchronicity, scalability, and durability by using Scribe. The backend persistence and statistics module achieves all goals by using Cassandra and Thrift. As an extension of the push-on-change model, the counting stores denormalised data which is later normalised, using MapReduce jobs within a Hadoop cluster, into the views the statistics module requires.


The resulting project is kick-ass, let me say. Especially Cassandra; it is a truly amazing modern database: linearly scalable, decentralised, elastic, fault-tolerant, durable, with a rich datamodel that often provides superior approaches to joins, grouping, and ordering compared to traditional sql. But we'll spend more time describing the project in technical detail in a later article.

A challenge we face now is broader adoption of the project and the technologies involved. Various departments, from developers to tech support, want to read the data, regardless of whether it is traditional or 'big data', and the habit has always been to read it directly from production's Sybase. And it's a habit that's important in fostering a data-driven culture within the company, without having to encourage datamodel-centric designs. With our Big Data solution this hasn't been so easy. Without this transparency into the data, developers, tech support, and product owners alike seem to be failing to initiate further involvement. To solve this, since our big data is stored in Cassandra, we'd love to see a read-only web-based GUI based off caqel.

…to be continued…




—-




the ultimate view — Tiles-3

A story of getting the View layer up and running quickly in Spring…

Since the original article, parts of the code have been accepted upstream and are now available as part of the Tiles-3 release, so the article has been updated — it's all even simpler!


Based upon the Composite pattern and Convention over Configuration we’ll pump steroids into
   a web application’s view layer
      with four simple steps using Spring and Tiles-3
         to make organising large complex websites elegant with minimal XML editing.



 




Background

At FINN.no we were redesigning our control and view layers. The architectural team had decided on Spring-Web as the framework for the control layer due to its flexibility and because it provided us a simpler migration path. For the front end we were a little unclear. In a department of ~60 developers we knew that the popular vote would lead us towards SiteMesh. And we knew why – for practical purposes SiteMesh gives the front-end developer more flexibility and definitely less XML editing.
But SiteMesh has some serious shortcomings…

SiteMesh shortcomings:
  • from a design perspective the Decorator pattern can undermine the separation MVC intends,
  • requires all possible HTML for a request to be buffered, requiring large amounts of memory,
  • unable to flush the response before the response is complete,
  • requires more overall processing due to all the potentially included fragments,
  • does not guarantee thread safety, and
  • does not provide any structure or organisation amongst JSPs, making refactorings and other tricks awkward.

One of the alternatives we looked at was Apache Tiles. It follows the Composite pattern, but within that allows one to take advantage of the Decorator pattern using a ViewPreparer. This meant it provided by default what we considered a superior design, but if necessary it could also do what SiteMesh was good at. It already had integration with Spring, and the way it worked meant that once the Spring-Web controller code had executed, Spring's view resolver would pass the model on to Tiles, letting it do the rest. This gave us a clear MVC separation and an encapsulation ensuring thread safety within the view domain.

“Tiles has been indeed the most undervalued project in past decade. It was the most useful part of struts, but when the focus shifted away from struts, tiles was forgotten. Since then struts has been outpaced by spring and JSF, however tiles is still the easiest and most elegant way to organize a complex web site, and it works not only with struts, but with every current MVC technology.” – Nicolas Le Bas

Yet the best Tiles was going to give wasn’t realised until we started experimenting a little more…
