Building a remote compilation service for various Qt targets

Targeting many platforms with the same code base is the reason why many of us choose to use Qt. In the Tepee3D project, we would like to attract people to develop widgets using Qt and QML. Currently, Tepee3D runs on 8 platforms, and asking developers to compile their widgets for each of these platforms would be a lot of trouble. If a new platform were added, they would have to either cross-compile Qt for that platform or set up a dedicated system on which to compile it.

On the other hand, building a remote compilation service where developers can request their widget to be built on a given platform would solve that issue. Using Jenkins would be one solution, but setting up a dedicated queueing system where build requests are distributed to nodes configured to build for a given platform isn't much harder.

Several queueing and messaging systems are available, but we chose to use RabbitMQ. Before thinking about distributing build jobs, though, you should think of a way to access the source code you want to build. On our website, when a developer asks to build a new widget, a git repository is automatically created and a project template is provided. That way, the developer pushes their changes to the repository, and we have easy access to the code and can tell a worker for a given platform to clone the repository and build the provided projects. If you don't want to go that route, putting the code on an NFS share or in online cloud storage is a good solution too.

Now that we have a way to access the code to build, we can set up our workers and queues.

RabbitMQ can be used with many languages. We chose Python as it is simple to set up on many platforms, has reasonable performance, and can get a lot done in a few lines. To interact with RabbitMQ in Python, we used the pika module.

You can install it (on Linux) with the following command:


sudo pip install pika=="0.9.12"

At the time of this writing, we've encountered issues with the latest version, 0.9.13, which is why we recommend 0.9.12.

RabbitMQ works by sending messages, which can be anything from JSON to plain text, through queues. Workers then connect to those queues and retrieve the messages.
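Conceptually, a direct exchange routes each message to every queue bound with a matching routing key. The snippet below is a toy model of that routing in plain Python, not the pika API, and the queue names are just examples:

```python
from collections import defaultdict, deque

# Toy model of a direct exchange: routing_key -> list of bound queues
bindings = defaultdict(list)
queues = defaultdict(deque)

def queue_bind(queue, routing_key):
    # Bind a queue to the exchange under a routing key
    bindings[routing_key].append(queue)

def basic_publish(routing_key, body):
    # A direct exchange delivers the message to every queue bound with that exact key
    for queue in bindings[routing_key]:
        queues[queue].append(body)

# Each platform queue is bound under its own name, as in the real setup below
queue_bind('linux_x86', 'linux_x86')
queue_bind('android_arm', 'android_arm')

basic_publish('android_arm', '{"widget_id": 1}')
print(queues['android_arm'][0])
```

The real broker does this durably and across the network, but the routing logic is the same: a message published with routing key android_arm ends up only in the android_arm queue.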

Installation instructions for RabbitMQ can be found at : http://www.rabbitmq.com/download.html

Once it is up and running, you should create a virtual host and a user, and give the user the right permissions.


rabbitmqctl add_vhost your_vhost

rabbitmqctl add_user your_user your_password

rabbitmqctl set_permissions -p your_vhost your_user ".*" ".*" ".*"

Once you have your user and virtual host, you can proceed with the creation of the queues in RabbitMQ. These queues will be durable, meaning they persist across broker restarts and keep their messages.

Below is the code necessary to declare the queues on the server.


import pika

connectionCredentials = pika.PlainCredentials('your_user', 'your_password')
connectionParameters = pika.ConnectionParameters(host='your_hostname',
                                                 port=5672,
                                                 virtual_host='your_vhost',
                                                 credentials=connectionCredentials)
connection = pika.BlockingConnection(connectionParameters)
channel = connection.channel()

qt_platforms = ['linux_x86', 'linux_x86_64', 'android_arm', 'windows_x86', 'windows_x86_64', 'results']

# Declare the exchange which can be seen as the gate/router between messages and queues
channel.exchange_declare(exchange="build_exchange", exchange_type='direct', durable=True)

for platform in qt_platforms:
   # Declaration of a queue
   channel.queue_declare(queue=platform, exclusive=False, durable=True)
   # Assign the queue to an exchange and a routing_key, in this case the routing key is the same as the queue's name
   channel.queue_bind(exchange="build_exchange", queue=platform, routing_key=platform)

connection.close()

You should only create the queues and the exchange once. As they are durable, they won't be removed from RabbitMQ even when it is restarted. Note that there is a results queue that will be used to retrieve the results of the build operations executed by the workers.
If you need to add a new queue, there is no harm in running that same script again; if a queue or exchange has already been declared, RabbitMQ will simply ignore the declaration.

Once your queues are declared, you can publish messages to them. You can wrap that in another Python script that you run periodically using a cron job or a Celery task on your server.

Our build request messages are sent to the RabbitMQ queues as JSON and contain the id of the widget to build and the address of the git repository where the code is hosted. You can change that according to your needs.
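For example, a message body for a hypothetical widget might look like this (the values are made up); the worker simply decodes it with json.loads on the other side:

```python
import json

# Example build request as published on the queue (illustrative values)
body_json = json.dumps({"widget_id": 42,
                        "repo_name": "my_widget",
                        "request_id": 7})

# On the worker side, the fields are recovered with json.loads
build_request = json.loads(body_json)
print(build_request["repo_name"])  # my_widget
```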

import pika
import json

connectionCredentials = pika.PlainCredentials('your_user', 'your_password')
connectionParameters = pika.ConnectionParameters(host='your_hostname',
                                                 port=5672,
                                                 virtual_host='your_vhost',
                                                 credentials=connectionCredentials)
connection = pika.BlockingConnection(connectionParameters)
channel = connection.channel()
channel.confirm_delivery()

# Perform a database query to retrieve the build requests that have not been sent to the queues yet
requests = []  # SQL QUERY HERE

for widget_request in requests:
    # The routing key here is the name of the platform: linux_x86, linux_x86_64, ...
    routingKey = widget_request.platform.name
    # The database id of our widget, useful to save data about the widget once we have the result of a build
    widgetId = widget_request.widget.widget_id
    # The address of the git repository where our widget's source code is hosted
    repoName = widget_request.widget.repo_name
    # We wrap those up in a JSON body
    body_json = json.dumps({"widget_id": widgetId,
                            "repo_name": repoName,
                            "request_id": widget_request.id})
    # Publish the message on the exchange and route it according to the routing key.
    # delivery_mode=2 makes messages persistent and reply_to holds the name of the queue workers will reply to.
    if channel.basic_publish(exchange="build_exchange",
                             routing_key=routingKey,
                             body=body_json,
                             properties=pika.BasicProperties(delivery_mode=2,
                                                             reply_to="results",
                                                             content_type="application/json")):
        print ">>>>>>>> Job correctly published to queue"
    else:
        print "<<<<<<<< Job couldn't be published to the queue"

connection.close()
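If you go the cron route, and assuming the script above is saved as publish_build_requests.py (a hypothetical name and path), the crontab entry could look like:

```shell
# Publish pending build requests every 5 minutes (hypothetical path)
*/5 * * * * /usr/bin/python /path/to/publish_build_requests.py
```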

Now that our build requests are properly distributed over RabbitMQ queues, we need workers to build them. But first, as I've become accustomed to rebuilding Qt every two weeks,
here is a script you can use to build it on Linux for Linux x86, Linux x86_64, QNX (BlackBerry PlayBook) and Android ARM.

#!/bin/bash -xe

QT_BUILD_DESTDIR=/data/lemire_p/Qt
QT_SOURCES=/data/lemire_p/Qt/qt-everywhere-opensource-src-5.1.1
QT_BUILD_VERSION=Qt_5.1.1
# FOR QNX
BBNDK_LOCATION=/data/lemire_p/Programs/bbndk-2.1.0
# FOR ANDROID
ANDROID_NDK_LOCATION=/data/lemire_p/Programs/android-ndk-r8e
ANDROID_SDK_LOCATION=/data/lemire_p/Programs/android-sdk-linux
# NUMBER OF THREADS
THREADS=2

ARGS=("$@")

if [ ${#ARGS[@]} != 1 ]; then
   exit 1
fi

PLATFORM=$1

cd "$QT_SOURCES"
cd qtbase
# CLEAN SOURCES
rm -f $(find . -name Makefile)
rm -f $(find . -name .qmake.cache)
rm -f $(find . -name '*.o')
rm -f $(find . -name 'moc_*')


case $PLATFORM in
 "gcc_64") ./configure -shared -release -opensource -confirm-license -opengl -silent -verbose -prefix "$QT_BUILD_DESTDIR/$QT_BUILD_VERSION/$PLATFORM" -nomake tests -nomake examples -platform linux-g++-64 && make -j $THREADS && make install;;
 "gcc") ./configure -shared -release -opensource -confirm-license -opengl -silent -verbose -prefix "$QT_BUILD_DESTDIR/$QT_BUILD_VERSION/$PLATFORM" -nomake tests -nomake examples -platform linux-g++-32 && make -j $THREADS && make install;;
 "android_arm") ./configure -shared -release -opensource -confirm-license -opengl -silent -verbose -prefix "$QT_BUILD_DESTDIR/$QT_BUILD_VERSION/$PLATFORM" -nomake tests -nomake examples -xplatform android-g++ -android-ndk $ANDROID_NDK_LOCATION -android-sdk $ANDROID_SDK_LOCATION -android-ndk-host linux-x86_64 -android-toolchain-version 4.7 && make -j $THREADS && make install;;
 "qnx") source "$BBNDK_LOCATION/bbndk-env.sh"; ./configure -shared -release -opensource -confirm-license -opengl es2 -silent -verbose -prefix "$QT_BUILD_DESTDIR/$QT_BUILD_VERSION/$PLATFORM" -nomake examples -nomake tests -device blackberry-playbook-armv7le -no-neon && make -j $THREADS && make install;;
esac

cd ..

echo "Building Qt SubModules"

# ADD SUBMODULES OF QT YOU WANT TO BUILD, IN THE RIGHT ORDER IF YOU KNOW IT
QT_MODULES=( 'qtjsbackend' 'qtdeclarative' 'qtmultimedia' 'qtgraphicaleffects'
'tepee3d-qt3d' )

for ((i = 0; i < ${#QT_MODULES[@]}; i++)); do
 echo "Building ${QT_MODULES[${i}]}"
 cd "$QT_SOURCES/${QT_MODULES[${i}]}"
 "$QT_BUILD_DESTDIR/$QT_BUILD_VERSION/$PLATFORM/bin/qmake" && make && make install
done

Save it to a file, call the script and pass it gcc (for linux_x86), gcc_64 (for linux_x86_64), android_arm or qnx, and it should build Qt for you, assuming you've properly edited the variables and have the required libraries installed.

Now let's move on to the workers. Below is a worker template for Linux:


#!/usr/bin/env python

import os
import pika
import sys
import json
import subprocess
import logging
from datetime import datetime

logging.basicConfig()
exchangeKey = 'build_exchange'

connectionCredentials = pika.PlainCredentials('your_user', 'your_password')
connectionParameters = pika.ConnectionParameters(host='your_hostname',
                                                 port=5672,
                                                 virtual_host='your_vhost',
                                                 credentials=connectionCredentials)
connection = pika.BlockingConnection(connectionParameters)
channel = connection.channel()
# Tells the worker to only fetch one message at a time so that the workload is evenly shared among workers that can build for a given platform
channel.basic_qos(prefetch_count=1)
# NO NEED TO DECLARE QUEUES AND EXCHANGES AS THEY ARE ALREADY DECLARED BY THE TEPEE3D SERVER
# Tell RabbitMQ to redistribute messages assigned to this worker that have not been acknowledged (in case the worker shutdown unexpectedly)
channel.basic_recover(requeue=True)

print ' [*] Waiting for messages. To exit press CTRL+C'

# Configure here the paths to your qmake executables for the platforms you want to build. This changes for every worker
platform_qt_conf = {'linux_x86_64' : '/data/lemire_p/Qt/Qt_5.1.0/gcc_64/bin/qmake',
                    'linux_x86' : '/data/lemire_p/Qt/Qt_5.1.0/gcc/bin/qmake',
                    'android_arm' : '/data/lemire_p/Qt/Qt_5.1.0/android/bin/qmake',
                    'qnx' : '/data/lemire_p/Qt/Qt_5.1.0/qnx/bin/qmake',
                   }

# The callback invoked every time a message is received on one of the queues the worker listens to
def callback(ch, method, properties, body):
    print " [x] %r:%r:%r" % (method.routing_key, "Received new build job at ", datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    # Perform the build
    build_request = json.loads(body)

    # CLONE PLUGIN REPO
    # BUILD PLUGIN
        # QMAKE
        # MAKE
        # MAKE INSTALL

    p = subprocess.Popen('rm -rf /tmp/' + build_request.get('repo_name') + ' && ' +
                         'git clone ' + build_request.get('repo_name') + '.git /tmp/' + build_request.get('repo_name') + ' && ' +
                         'cd /tmp/' + build_request.get('repo_name') + ' && ' +
                         platform_qt_conf.get(method.routing_key) + ' && ' +
                         'make && ' +
                         ('make install_qml_folder' if method.routing_key.startswith('android_') else 'make install'),
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT,
                         shell=True)
    log = p.communicate()[0]

    status = 'error'
    build_artifacts_location = ""
    if p.returncode == 0:
        status = 'success'
        # If the build succeeded, the build artifacts are bundled in an archive which is uploaded
        # to our server using scp. What you do after a build is up to you; use this as inspiration.
        build_dir = '/tmp/' + build_request.get('repo_name') + '/' + build_request.get('repo_name') + '_Library'
        tar_archive = build_request.get('repo_name') + '.tar.xz'
        artifact_server = '~/builds/' + method.routing_key + '/' + build_request.get('repo_name')
        print build_dir
        if os.path.exists(build_dir):
            print 'scp -r ' + build_dir + ' widget_build_upload:~/builds/' + method.routing_key + '/' + build_request.get('repo_name')
            p2 = subprocess.Popen('cd ' + build_dir + ' && ls > index' + ' && ' +
                                  'tar -cJf ../' + tar_archive + ' . && ' +
                                  'cd .. && ' +
                                  'ssh widget_build_upload \"rm -rf ' + artifact_server + ' && mkdir -p ' + artifact_server + '\" && ' +
                                  'scp -r ' + tar_archive + ' widget_build_upload:' + artifact_server + '/ && ' +
                                  'ssh widget_build_upload \"' + 'tar -xf ' + artifact_server + '/' + tar_archive + ' -C ' + artifact_server + ' && ' +
                                  'rm ' + artifact_server + '/' + tar_archive + '\"',
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.STDOUT,
                                  shell=True)
            log += p2.communicate()[0]
            if p2.returncode == 0:
                build_artifacts_location = 'http://tepee3d.dyndns.org/builds/' + method.routing_key + '/' + build_request.get('repo_name')

    # SEND THE RESULT BACK
    reply = {"widget_id": build_request.get('widget_id'),
             "request_id": build_request.get('request_id'),
             "status": status,
             "log": log.decode('ascii', 'ignore').encode('utf-8', 'ignore'),
             "build_artifacts": build_artifacts_location}

    json_reply = json.dumps(reply)

    print "Sending reply at " + datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    # The reply is sent to the results queue
    ch.basic_publish(exchange=exchangeKey,
                     routing_key=properties.reply_to,
                     body=json_reply,
                     properties=pika.BasicProperties(delivery_mode=2))
    # We acknowledge the build_request to RabbitMQ so that it won't be processed again by another worker (in case of failure)
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Tell the worker to listen for messages on each of the queues defined in platform_qt_conf,
# without automatic acknowledgement. Note that callback must be defined before this point.
for queue_name in list(platform_qt_conf.keys()):
    channel.basic_consume(callback,
                          queue=queue_name,
                          no_ack=False)

channel.start_consuming()

This template can then be modified to suit your configuration. Once properly configured, you can run the script on your worker machines, and RabbitMQ will properly distribute build requests across all your workers.
This even works on a Raspberry Pi! You can run several instances of this script if you have the resources to. Otherwise, don't worry: one worker per platform or per machine will work, and jobs will be buffered in the queues while waiting to be executed.

Like the build request distribution script from earlier, you can write a small script that periodically retrieves messages from the results queue and updates your database accordingly. You should run it on the same machine that does the job distribution.

    import pika
    import json

    connectionCredentials = pika.PlainCredentials('your_user', 'your_password')
    connectionParameters = pika.ConnectionParameters(host='your_hostname',
                                                     port=5672,
                                                     virtual_host='your_vhost',
                                                     credentials=connectionCredentials)
    connection = pika.BlockingConnection(connectionParameters)
    channel = connection.channel()

    def result_callback(ch, method, properties, body):
        print "Received result"
        build_result = json.loads(body)
        if build_result.get('status') == "success":
            print "Success"
        else:
            print "Error"

        # Update your DB here if you want to

        # Acknowledge the message
        ch.basic_ack(delivery_tag=method.delivery_tag)

    def close_connection_callback():
        print "Timeout reached"
        channel.stop_consuming()
        connection.close()

    channel.basic_consume(result_callback, queue='results', no_ack=False)
    # Consume for 30 seconds, then exit
    connection.add_timeout(30, close_connection_callback)
    channel.start_consuming()

That's it! With these simple scripts, I hope you've seen how easily you can create a remote compilation service for several Qt platforms.
Using RabbitMQ ensures that the service scales easily: you can add new workers for a given platform if you need to handle more load.
If you have any questions, do not hesitate to contact me.
