The Tuenti release and development process blog post series

It’s time to announce a blog post series I’m publishing on the Tuenti developers’ blog.

It’s about the release and development process: I’ll try to explain how we work internally at our company, from the moment a developer starts programming until the code reaches the production servers, passing through the development environment, Jenkins, continuous integration and delivery, and some internal tools we’ve developed to automate and ease the process.

This first part of the series is just an introduction to what is coming next and shows the differences between how Tuenti worked in the past (approximately 4 years ago) and how it works now.

In a nutshell, before there were many manual and error-prone tasks that ended up causing bugs on the site; now everything is fast, reliable and automatic, with no manual intervention at all.

I will announce the forthcoming posts here.



After Selenium Conf 2012

I’m one of the guys who went to the Selenium Conf 2012 in London and it was awesome!

Very good talks and very nice people; congratulations to the organizers and especially to Simon Stewart and Jason Huggins.

I could write lots about what I saw there, but I don’t have much time, so I’m going to write some brief conclusions:

  • Most of the top-tier companies test their software in a (more or less) similar way, which indicates that the testing world is converging on a single path. This brings very good things such as future standardization and good cooperation between people within the testing world. Great news.
  • In my current job at Tuenti, we are also on the same page. In some cases we achieve better results and have a better testing framework/infrastructure than some companies working along the same lines. That made me feel very proud and happy about what we are doing at Tuenti.
  • Everyone has problems with their tests:
    • slowness
    • flakiness
    • brittle tests
    • non-determinism produced by asynchronous AJAX requests
    • DOM changes made by developers and the consequent test updates
    • third-party elements in tests
  • Some of them could (kind of) fix their problems, but there isn’t a good or standard solution to all of them. Here the testing world has a lot of room for improvement. I have some ideas for some of these problems.
  • Surprisingly for me, Cucumber (or at least BDD) is becoming more and more popular.

While watching some of the talks, I thought that maybe in the future I could write an article or even give a talk about how testing at Tuenti has evolved since my first days in October 2009. Some highlights:

  • We used to run 400 tests in 3 hours; now we run 11,000 in 23 minutes
  • We used CruiseControl; now we use Jenkins
  • Every release we had about 40 tests out of 400 failing randomly due to significant flakiness; now, helped by some magic tricks, I’m proud to say that zero tests out of 11,000 fail

Triggering Jenkins jobs from the SCM push to avoid the evil polling


If you have a continuous integration infrastructure with Jenkins, you might have your jobs configured to poll your SCM in order to trigger a build when there are changes. But this has some problems when Jenkins has several nodes and a large number of jobs.

In my case, I configured the Jenkins jobs to poll the SCM repository (Mercurial) every 5 minutes. What does that mean?

  • A very big load on the SCM repository server due to the large number of polls from every Jenkins job.
  • Unexpected polling behaviour due to Jenkins bugs or SCM plugin bugs (at least in the Mercurial one):
    • Jobs launched with no changes, because the polling couldn’t be performed when, for example, the workspace of the previous build was not available
    • Jobs not launched even though there are changes, because of the loss of the threads that manage the channel connections between the master and the slaves (a Jenkins bug)
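
One common way to get rid of the polling is to turn the trigger around: instead of every job asking the repository for changes, the repository tells Jenkins when something has been pushed. With Mercurial this can be done with a changegroup hook that hits the Jenkins Mercurial plugin’s notifyCommit endpoint. The snippet below is only a sketch of the idea (the hostnames and repository URL are placeholders, and it is not necessarily the exact setup described in the full post):

    # .hg/hgrc on the central repository (sketch; hostnames and paths are placeholders)
    [hooks]
    # After every push, tell Jenkins which repository changed so it can check
    # the affected jobs, instead of each job polling on a timer.
    changegroup.notify-jenkins = curl -s "http://jenkins.example.com/mercurial/notifyCommit?url=http://hg.example.com/myrepo" > /dev/null

The effect is that Jenkins checks for changes only when a push has actually occurred, instead of every few minutes from every job.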

Continue reading

Fixing the postbuild order problem of the upstream/downstream solution I wrote

Some days ago I wrote a post about how to get the overall status in the upstream job from the downstream jobs in Hudson or Jenkins.

A user (v22) tested my code and had some problems, and indeed, there is a problem.

As I said in my reply to the comment, there is a problem with the execution order of the post-build actions: it is not deterministic and depends on how the job configuration is stored in an XML file located on the master node.

The relationship between the upstream jobs and the downstream jobs is established when the fingerprints are recorded. This is done in a post-build action, just like the Groovy execution. If the Groovy execution runs before the fingerprint recording, there is no upstream job set yet, and therefore the Groovy code fails.

In that reply I proposed a workaround to change the execution order of the post-build actions, but it does not work as I expected, so there is no way to force the Groovy execution to run after the fingerprint recording.
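
To make the failure mode concrete, here is a small defensive check (a sketch, not the actual fix teased below) that a Groovy Postbuild script can use to detect that the fingerprints have not been recorded yet:

    // Sketch: detect that the upstream relationship is not available yet,
    // i.e. the fingerprint recorder has not run before this script.
    def upstreamBuilds = manager.build.getUpstreamBuilds()

    if (upstreamBuilds.isEmpty()) {
        manager.listener.logger.println(
            "No upstream builds found: fingerprints not recorded yet, skipping.")
    } else {
        // the upstream/downstream relationship is in place, safe to use it
    }

getUpstreamBuilds() is backed by the recorded fingerprints, so an empty map is exactly the symptom of the wrong post-build order.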

There is an open ticket in the Jenkins and Hudson issue trackers to add a way to specify the execution order of the post-build actions.

But everything has a solution, and a solution that always works 😉

Continue reading

Jenkins / Hudson: getting the overall status in the upstream job from the downstream jobs

A few days ago I asked a question about this:

I wanted the result of the upstream job to depend on the results of the downstream jobs. I mean:

If the upstream job gets a “stable” result but one of the downstream jobs gets an “unstable” or “failed” result, I want my upstream job to show the worst status.

In my case, I wanted to speed up the test execution by parallelizing the build across different downstream jobs, so I wanted to have the overall result of the build in the main job (the upstream one) in order to see the result of the complete build in a single place.

I couldn’t find any good solution, only workarounds. So I started to investigate and discovered the best Jenkins / Hudson plugin I have ever seen: the Groovy Postbuild plugin.

With this plugin you can do almost whatever you want with Jenkins / Hudson. It allows you to execute a Groovy script in the post-build phase and offers you the “build” and “hudson” Java objects.

The solution to my problem was a simple Groovy script added to the downstream jobs. The code gets the upstream job, accesses its last build, checks the result and, if that result is better than the downstream one, updates it.
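
As an illustration of the idea only (a sketch, not the exact script from the post; it assumes the Groovy Postbuild plugin’s manager object and the fingerprint-based upstream relationship discussed in the post about the post-build order problem above):

    import hudson.model.Result

    // Sketch: propagate a worse downstream result to the related upstream build.
    def downstream = manager.build

    // getUpstreamBuilds() maps each upstream project to the build number this
    // build is related to through the recorded fingerprints.
    downstream.getUpstreamBuilds().each { project, buildNumber ->
        def upstream = project.getBuildByNumber(buildNumber)
        if (upstream != null && downstream.result.isWorseThan(upstream.result)) {
            // Depending on the Jenkins version, setResult() refuses to change a
            // finished build, so the field is written directly from Groovy.
            upstream.@result = downstream.result
            upstream.save()
        }
    }

Because both jobs need to record fingerprints for getUpstreamBuilds() to return anything, this is also where the post-build ordering problem described above comes from.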

This is the piece of code:

Continue reading