Tupperware: Mozilla apps in Docker!

Announcing Tupperware, a setup for Mozilla apps in Docker! Tupperware is portable, reusable, and containerized. But unlike typical tupperware, please do not put it in the microwave.


Why?

This project was born out of a need to lower the barrier to entry for new contributors to Release Engineering (RelEng) maintained apps and services. Historically, RelEng has had more difficulty attracting community contributors than other parts of Mozilla, due in large part to how much knowledge is needed just to get going in the first place. For a new contributor, jumping into any of the code bases that RelEng maintains can be quite overwhelming, and that often means quickly losing the new contributor to exasperation. Beyond new contributors, Tupperware is also great for experienced contributors, helping them keep an unpolluted development environment and test patches.

What?

Currently Tupperware includes the following Mozilla apps:

BuildAPI – a Pylons project used by RelEng to surface information collected from two databases that our buildbot masters update as they run jobs.

BuildBot – a job (read: builds and tests) scheduling system that queues and executes jobs when the required resources are available, and reports the results.

Dependency apps currently included:

RabbitMQ – a messaging queue used by RelEng apps and services

MySQL – Forked from orchardup/mysql

How?

Vagrant is used as a quick and easy way to provision the docker apps and make the setup truly plug-and-play. The current setup has a single Vagrantfile that launches BuildAPI and BuildBot along with their dependency apps, RabbitMQ and MySQL.
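
For a sense of what that provisioning amounts to, here is a rough sketch of the docker commands that vagrant up effectively drives. The image names and ports are taken from later posts in this series; the actual Vagrantfile logic may differ:

docker run -d -p 5672:5672 -p 15672:15672 -p 4369:4369 -name rabbitmq johnlzeller/rabbitmq
docker run -d -p 3306:3306 -name mysql orchardup/mysql
docker run -d -p 8888:8888 -link rabbitmq:mq -link mysql:sql -name buildapi johnlzeller/buildapi
docker run -d -p 8000:8000 -link mysql:sql -name buildbot johnlzeller/buildbot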

How to run:

– Install Vagrant 1.6.3

– hg clone https://hg.mozilla.org/build/tupperware/ && cd tupperware && vagrant up (takes >10 minutes the first time)

Where to see apps:

– BuildAPI: http://127.0.0.1:8888/

– BuildBot: http://127.0.0.1:8000/

– RabbitMQ Management: http://127.0.0.1:15672/

Troubleshooting tips are available in the Tupperware README.

What’s Next?

Now that Tupperware is out there, it's open to contributors! The setup does not need to stay solely usable for RelEng apps and services, so please submit bugs to add new ones! There are already a few ideas for adding functionality to Tupperware:

  • Bug 1027410 - Add Treeherder docker container to Tupperware
  • Bug 1027412 - Add multiple vagrant setups to Tupperware to customize setup
  • Bug 1027417 - Have MySQL docker app in Tupperware load database schemas

Have ideas? Submit a bug!

 

Manual Testing of Arbitrary Builds

When a new selfserve-agent change is pushed to production, it's necessary to verify functionality with some manual testing. Here are some steps for basic testing (a consolidated script follows the list):

  1. If there is no new try job to mess with, then submit one (see ReleaseEngineering/TryServer):

     

    • hg clone http://hg.mozilla.org/mozilla-central
    • cd mozilla-central
    • echo "THING" >> README.txt
    • hg qnew test-patch
    • hg qref --message "try: -b o -p linux64 -u none -t none"
    • hg push -f ssh://hg.mozilla.org/try/
  2. In my case you can see the try job running here: https://tbpl.mozilla.org/?tree=Try&rev=3a5e6ca198d8

     

    • If the push is successful it'll give you your own link
  3. Submit a blank arbitrary job request to https://secure.pub.build.mozilla.org/buildapi/self-serve/try/builders/Linux x86-64 try build/3a5e6ca198d8 using trigger_arbitrary_job.py
  4. python trigger_arbitrary_job.py --buildername "Linux x86-64 try build" --branch try --rev 3a5e6ca198d8

     

    • Leaving --file out so that files = []
  5. See running job here https://secure.pub.build.mozilla.org/buildapi/revision/try/3a5e6ca198d8
  6. Check for pending job at https://secure.pub.build.mozilla.org/buildapi/self-serve/try/rev/3a5e6ca198d8
  7. Also check https://tbpl.mozilla.org/?tree=Try&rev=3a5e6ca198d8
  8. Buildbot status can be checked by finding the appropriate master on the buildapi page https://secure.pub.build.mozilla.org/buildapi/revision/try/3a5e6ca198d8
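
Putting the steps together, the whole flow looks roughly like this as a shell session (the revision hash and builder name are the examples from this walkthrough; substitute your own):

hg clone http://hg.mozilla.org/mozilla-central && cd mozilla-central
echo "THING" >> README.txt
hg qnew test-patch
hg qref --message "try: -b o -p linux64 -u none -t none"
hg push -f ssh://hg.mozilla.org/try/
# once the push reports your tbpl link, trigger and watch the arbitrary job
python trigger_arbitrary_job.py --buildername "Linux x86-64 try build" --branch try --rev 3a5e6ca198d8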
 

Deployed BuildAPI bug fix, L2 Access, Tupperware

A bunch of new stuff!

New Things

New Bugs

What's next?

  • Test arbitrary builds for Bug 1009565 – Triggering arbitrary jobs gets branch wrong
  • Make initial commit to hg.m.o/build/tupperware
  • Troubleshoot buildbot web interface on localhost:8501 
  • Make multiple Vagrantfiles to choose from based on required setup needs
  • Publish docker images to new Docker Index for Mozilla repo 
  • Create a wiki doc for Tupperware  
  • Create mysql-app that can load database schemas

That is all for now!

 

BuildAPI, Buildbot, RabbitMQ and MySQL containers are all up! Some testing left…

BuildAPI, Buildbot, RabbitMQ and MySQL containers are all up now! To run them, pull http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup and run 'vagrant up' from the vagrant-docker-setup/ directory.

The vagrant up command will take several minutes to run the first time because it needs to pull the docker images from the Docker Index at docker.io. More to come tomorrow on this. NOTE: Buildbot seems to be running, but I have not been able to test *full* functionality just yet. However, the buildapi-app, rabbitmq-app and orchardup/mysql containers run together just fine.

To view (a few quick shell checks follow this list):

  • BuildAPI: localhost:8888
  • RabbitMQ: localhost:15672
  • Buildbot: localhost:8501 – NOT YET
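
The shell checks (curl -I just fetches response headers; the buildbot one is expected to fail for now):

curl -I http://localhost:8888/     # BuildAPI
curl -I http://localhost:15672/    # RabbitMQ management
curl -I http://localhost:8501/     # Buildbot – NOT YET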

Keep checking back!

New

  • Added specific app users to mysql with passwords
  • Added version row with value 6 to schedulerdb
  • Showed that an added job from buildapi will show up in mysql on buildbot
  • The malformed url error was caused by the URL in the config not picking up the environment variable
  • Once the env var was imported, I was still getting the malformed url, but this time because I had not created a password for the user. I ran into this same problem when setting up my local buildbot instance. There is a regex checking that the url is not malformed, and it does not take kindly to the absence of a password, regardless of the fact that mysql is okay with a user having no password at all (see the sketch after this list)
  • Uploaded images for johnlzeller/rabbitmq, johnlzeller/buildapi and johnlzeller/buildbot to Docker Index
  • Verified that entire setup can be run in Vagrant
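
To illustrate the malformed url point: buildbot parses its db_url with a regex that expects a password component to be present. A sketch of the failing versus working shapes (the username and password are placeholders; the host is the linked mysql container's address):

mysql://buildbot_user@172.17.0.2/schedulerdb    # fails the regex: no password component
mysql://buildbot_user:somepassword@172.17.0.2/schedulerdb    # passes, even though mysql itself allows passwordless users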

What's next?

  • Create a repo on hg.mozilla.org/build to hold the Vagrantfile and Dockerfiles for the images, and push them to the new hg.m.o/build repo
  • Troubleshoot why the buildbot web interface is not showing up on localhost:8501
  • Publish setup to blog

After initial release

  • One of two things should happen:

    1. Have mysql-app set up to load its own schemas and users
    2. Have individual apps only load schemas and users if they do not already exist… this ensures persistence of the databases
  • Look into using the VOLUME docker command to set up an easy way to share a host directory for editing purposes. The goal here is to make it easy to make changes to the running dev setup and to test that setup. Currently, the docker setup just runs the tip of each repository for buildapi and buildbot (a sketch follows this list)
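
On the VOLUME idea, the simplest form is probably just bind-mounting a host checkout over the container's copy at run time; a hypothetical example, with the host path as a placeholder:

docker run -t -i -p 8888:8888 -link rabbitmq:mq -link mysql:sql -v /path/to/buildapi:/buildapi -name buildapi buildapi-app /bin/bash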

Questions

  • Why/how does schedulerdb.version get populated with a version number like 6? Buildbot-app was failing because there was no row in version. I just added 6, since that is what my local schedulerdb dump had, but is there a more appropriate way to do this? Does this check need to be changed? The assert can be found on line 35 of /usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/db/schema/manager.py (the by-hand fix is sketched below)
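
For reference, the by-hand fix amounted to something like this (assuming the table and its single column are both named version, which is what the assert checks):

mysql schedulerdb -e "INSERT INTO version (version) VALUES (6);"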
 

BuildAPI-app, RabbitMQ-app and orchardup/mysql are working correctly

BuildAPI-app, RabbitMQ-app and orchardup/mysql are working correctly. This post is a short update on working through the What's next list from the previous post. Here is the updated list:

What's next?

The next steps are these:

  • Resolve exceptions.ValueError in buildbot-app
  • Resolve sqlalchemy.exc.OperationalError in buildapi-app
  • Link rabbitmq, mysql, and buildapi and test that everything works
  • Link mysql, and buildbot and test that everything works
  • Link rabbitmq, mysql, buildapi AND buildbot and test that the whole package works
  • See if there is a good way to load the statusdb and schedulerdb schemas into mysql in a mysql-app setup built on the orchardup/mysql image. This would prevent the redundancy of loading schemas in both buildapi-app and buildbot-app
 

Linking of docker containers and further issues with buildbot-app

All docker containers now exist, and one of the only things left is to get all the containers playing nicely with one another.

MySQL-app

I set out to break mysql out into its own docker container and made good progress, but before proceeding further with debugging some setup problems, I checked whether anyone was opposed to using an existing mysql docker container as a foundation for our own. There are hundreds of mysql docker containers out there, so it seemed silly to duplicate work unnecessarily. No one objected, so I went ahead and picked out a mysql docker container to use. I chose orchardup/mysql from the Docker Index because it is pretty barebones and adds a nice feature: environment variables can be set in the container at runtime to do things like set up your own usernames, passwords, databases, and so on.

After a while of trying to modify the run scripts that the orchardup/mysql image uses to launch the mysql server, I decided to back down for the time being. I was attempting to use orchardup/mysql as a base for our own mysql-app, so that our app could then do the additional loading of the statusdb and schedulerdb schemas. This proved to be a pain, so rather than fight it further, I went with the redundant option of having buildapi-app and buildbot-app each individually load the schemas they need into the database, regardless of whether the schema already exists. I am not happy with this as a permanent solution for this development setup, but it should work well for our initial setup.

This also means that vagrant will now simply just need to pull the orchardup/mysql image, run it, forward ports, and link it with the other container apps, making this the lightest setup.

I modified buildbot-app and buildapi-app to use the newly created environment variables for the mysql app when connecting to and using the databases (docker creates these variables in the containers at run time when linking).
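
As a sketch of what consuming those variables can look like, a container start script might substitute them into the config before launching the app (the placeholder tokens and config path here are hypothetical; the variable names follow docker's linking convention):

# fill in the linked containers' addresses before starting the app
sed -i "s/__SQL_HOST__/$SQL_PORT_3306_TCP_ADDR/" /buildapi/config.ini
sed -i "s/__MQ_HOST__/$MQ_PORT_5672_TCP_ADDR/" /buildapi/config.ini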

Buildbot-app

When I went to test buildbot-app, I ran into an exceptions.ValueError: Malformed url:

(Buildbot)root@96fbd42254f3:/# /start_buildbot.sh
mysql: option '-h' requires an argument
cd master && buildbot start $PWD
Following twistd.log until startup finished..
/usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/scripts/logwatcher.py:52: PotentialZombieWarning: spawnProcess called, but the SIGCHLD handler is not installed. This probably means you have not yet called reactor.run, or called reactor.run(installSignalHandler=0). You will probably never see this process finish, and it may become a zombie process.
  env=os.environ,
2014-05-21 05:35:52+0000 [-] Log opened.
2014-05-21 05:35:52+0000 [-] twistd 9.0.0 (/usr/bin/python2.7 2.7.3) starting up.
2014-05-21 05:35:52+0000 [-] reactor class: twisted.internet.selectreactor.SelectReactor.
2014-05-21 05:35:52+0000 [-] monkeypatch_twisted_cbLogin applied
2014-05-21 05:35:52+0000 [-] Creating BuildMaster -- buildbot.version: 0.8.2-hg-f6d9311d9246-production-0.8
2014-05-21 05:35:52+0000 [-] loading configuration from /Buildbot/build-master/master.cfg
2014-05-21 05:35:52+0000 [-] unable to import dnotify, so Maildir will use polling instead
2014-05-21 05:35:52+0000 [-] JacuzziAllocator 44938192: created
2014-05-21 05:35:52+0000 [-] nextAWSSlave: start
2014-05-21 05:35:52+0000 [-] nextAWSSlave: start
2014-05-21 05:35:54+0000 [-] JacuzziAllocator 37763792: created
2014-05-21 05:35:54+0000 [-] nextAWSSlave: start
2014-05-21 05:35:54+0000 [-] nextAWSSlave: start
2014-05-21 05:35:59+0000 [-] finished loading config file
2014-05-21 05:36:01+0000 [-] BuildMaster listening on port tcp:9000
2014-05-21 05:36:01+0000 [-] configuration update started
2014-05-21 05:36:01+0000 [-] configuration update failed
2014-05-21 05:36:01+0000 [-] Unhandled Error
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/master.py", line 628, in loadTheConfigFile
        d = self.loadConfig(f)
      File "/usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/master.py", line 933, in loadConfig
        d.addCallback(lambda res:
      File "/usr/local/lib/python2.7/dist-packages/Twisted-9.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py", line 190, in addCallback
        callbackKeywords=kw)
      File "/usr/local/lib/python2.7/dist-packages/Twisted-9.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py", line 181, in addCallbacks
        self._runCallbacks()
    --- <exception caught here> ---
      File "/usr/local/lib/python2.7/dist-packages/Twisted-9.0.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py", line 323, in _runCallbacks
        self.result = callback(self.result, *args, **kw)
      File "/usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/master.py", line 934, in <lambda>
        self.loadConfig_Database(db_url, db_poll_interval))
      File "/usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/master.py", line 1055, in loadConfig_Database
        db_spec = DBSpec.from_url(db_url, self.basedir)
      File "/usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/db/dbspec.py", line 175, in from_url
        raise ValueError("Malformed url")
    exceptions.ValueError: Malformed url
    

The buildmaster took more than 10 seconds to start, so we were unable to
confirm that it started correctly. Please 'tail twistd.log' and look for a
line that says 'configuration update complete' to verify correct startup.

make: *** [start] Error 1

It's possible this has to do with the mysql setup, as I may not have linked things up fully. More testing tomorrow.

Buildapi-app

To run rabbitmq, mysql, and buildapi all linked together, run these commands in sequence:

  • docker run -d -p 5672:5672 -p 15672:15672 -p 4369:4369 -name rabbitmq rabbitmq-app
  • docker run -d -p 3306:3306 -name=mysql orchardup/mysql
  • docker run -t -i -p 8888:8888 -link rabbitmq:mq -link mysql:sql -name buildapi buildapi-app /bin/bash

This will drop you into a bash shell session in buildapi-app.

When I attempt to run /start_selfserve_buildapi.sh I receive the following error:

root@e141d055c1c7:/# ./start_selfserve_buildapi.sh
Starting subprocess with file monitor
Running reloading file monitor
2014-05-21 06:37:13,352 Kombu connection revived
2014-05-21 06:37:13,353 Connected to amqp://selfserveagent@172.17.0.2:5672//
Traceback (most recent call last):
  File "/usr/local/bin/paster", line 9, in <module>
    load_entry_point('PasteScript==1.7.3', 'console_scripts', 'paster')()
  File "/usr/local/lib/python2.7/dist-packages/paste/script/command.py", line 84, in run
    invoke(command, command_name, options, args[1:])
  File "/usr/local/lib/python2.7/dist-packages/paste/script/command.py", line 123, in invoke
    exit_code = runner.run(args)
  File "/usr/local/lib/python2.7/dist-packages/paste/script/command.py", line 218, in run
    result = self.command()
  File "/usr/local/lib/python2.7/dist-packages/paste/script/serve.py", line 276, in command
    relative_to=base, global_conf=vars)
  File "/usr/local/lib/python2.7/dist-packages/paste/script/serve.py", line 313, in loadapp
    **kw)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
    return loadobj(APP, uri, name=name, **kw)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
    return context.create()
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
    return self.object_type.invoke(self)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 146, in invoke
    return fix_call(context.object, context.global_conf, **context.local_conf)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
    val = callable(*args, **kw)
  File "/buildapi/buildapi/config/middleware.py", line 55, in make_app
    config = load_environment(global_conf, app_conf)
  File "/buildapi/buildapi/config/environment.py", line 66, in load_environment
    init_scheduler_model(scheduler_engine)
  File "/buildapi/buildapi/model/__init__.py", line 7, in init_scheduler_model
    scheduler_db_meta.reflect(bind=engine)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/schema.py", line 2342, in reflect
    conn = bind.contextual_connect()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 2284, in contextual_connect
    self.pool.connect(),
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 209, in connect
    return _ConnectionFairy(self).checkout()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 370, in __init__
    rec = self._connection_record = pool._do_get()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 696, in _do_get
    con = self._create_connection()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 174, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 255, in __init__
    self.connection = self.__connect()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 315, in __connect
    connection = self.__pool._creator()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/strategies.py", line 80, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 275, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python2.7/dist-packages/MySQLdb/__init__.py", line 81, in Connect
    return Connection(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py", line 187, in __init__
    super(Connection, self).__init__(*args, **kwargs2)
sqlalchemy.exc.OperationalError: (OperationalError) (2005, "Unknown MySQL server host 'SQL_PORT_3306_TCP_ADDR' (0)") None None

Note that the "host" in the error is the literal environment variable name, SQL_PORT_3306_TCP_ADDR, which suggests the variable was never expanded into the config. Again, this looks like a linking issue, and more testing is necessary (some quick checks are sketched below).
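
Since the buildapi container drops you into a bash session, a couple of quick checks should narrow this down (a sketch; the mysql client may need -u/-p options depending on which users are configured):

env | grep SQL_    # are the link variables actually set in this container?
mysql -h "$SQL_PORT_3306_TCP_ADDR" -P "$SQL_PORT_3306_TCP_PORT" -e "SHOW DATABASES;"    # can we reach the linked mysql?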

What's next?

The next steps are these:

  • Resolve exceptions.ValueError in buildbot-app
  • Resolve sqlalchemy.exc.OperationalError in buildapi-app
  • Link rabbitmq, mysql, and buildapi and test that everything works
  • Link mysql, and buildbot and test that everything works
  • Link rabbitmq, mysql, buildapi AND buildbot and test that the whole package works
  • See if there is a good way to load the statusdb and schedulerdb schemas into mysql in a mysql-app setup built on the orchardup/mysql image. This would prevent the redundancy of loading schemas in both buildapi-app and buildbot-app

Things I found useful here

  • docker logs <container id>
  • vboxmanage modifyvm boot2docker-vm --natpf1 delete http
  • vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8888,,8888"

Things to look into

  • VOLUME docker command
  • Renaming apps as mozilla/buildapi-dev and mozilla/buildbot-dev
  • Setting multiple natpf's for boot2docker testing
 

Vagrant can now run BuildAPI and RabbitMQ apps

Continuing on from my previous post, I verified that buildapi and selfserve-agent are truly connected and able to exchange over the amqp, and that the entire buildapi application is running well by running similar procedures that work in my local setup.

Once I did that, I updated the Vagrantfile to forward vagrant port 8888 to host port 8888, and to build and start the rabbitmq-app and buildapi-app. In the wild, the Vagrantfile won't have docker build the images; instead it will pull them from Mozilla's docker repository, which will be much faster. As it stands, running vagrant up from scratch takes about 10-15 minutes the first time.
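
Once the images are published, provisioning should reduce to pulls like these instead of local builds (image names here follow the ones used elsewhere in this series; Mozilla's repository naming may differ):

docker pull johnlzeller/rabbitmq
docker pull johnlzeller/buildapi
docker pull johnlzeller/buildbot
docker pull orchardup/mysql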

Here's how you can NOW run a fully functional BuildAPI app locally with a single command :)

  1. hg clone http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup
  2. cd vagrant-docker-setup
  3. vagrant up
 

MySQL databases are all setup in BuildAPI-app docker container!

As I stated in the previous post, the next step was to set up databases. I spent time attempting to make sqlite work in this situation, but ran into issues with buildapi connecting to the sqlite databases. Rather than chase that down a rabbit hole, I double-checked the configuration in production buildapi and was reminded by the configs that production runs mysql, so I went with mysql here as well. This setup required adding the following to the Dockerfile:

RUN apt-get install -y mysql-server

RUN chown mysql.mysql /var/run/mysqld/

RUN mysql_install_db # Installs mysql database schemas

RUN /usr/bin/mysqld_safe &

After this, everything was peachy except for the sql schemas available in the current buildapi repo. Those schemas are for sqlite, so I dumped my own mysql schemas for use here and loaded them with the following commands:

mysql < status_schema.mysql

mysql < scheduler_schema.mysql
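
The schema files themselves can be produced from a local mysql instance with schema-only dumps, roughly like so (add -u/-p options as needed):

mysqldump --no-data statusdb > status_schema.mysql

mysqldump --no-data schedulerdb > scheduler_schema.mysql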

I went ahead and submitted a patch in Bug 1007994 to add the mysql-specific schemas to the buildapi repo, but for now I included the schemas with the files in the buildapi-app directory.

I uploaded the current contents of the buildapi-app docker container and it launches with schemas all loaded and running well.

I am still having some issues verifying that selfserve-agent can execute commands from data sent to it over the amqp by buildapi. Further testing is needed to fix this. I am currently getting a 404 error in my tests, but that might be a peripheral problem rather than selfserve-agent not getting data from the amqp.

Left to do on buildapi-app is to:

  • Test that buildapi and selfserve-agent are truly connected and able to exchange over the amqp
  • Test the entire buildapi application by running similar procedures that should work in my local setup

Links I found useful for this:

  • http://ijonas.com/devops-2/building-a-docker-based-mysql-server/

 

BuildAPI-app is almost up!

I am very close to having the buildapi-app docker container working completely. I left off last time without selfserve-agent set up and with a kombu error.

In order to set up selfserve-agent properly, I had to include a selfserve-agent.ini file at the base of the container for selfserve-agent.py to use when called with: python buildapi/buildapi/scripts/selfserve-agent.py -w. Additionally, I included a simple bash script to ensure that the container is able to launch both processes side by side without blocking one another (a sketch follows).
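
That bash script is essentially just backgrounding one process and then starting the other. A minimal sketch, assuming buildapi is served via paster and the paths match this container's layout:

# launch the agent in the background so it doesn't block the web app
python buildapi/buildapi/scripts/selfserve-agent.py -w &
# then serve buildapi in the foreground
paster serve buildapi/config.ini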

The error I was having with kombu was because I did not have rabbitmq-app running. Kombu is used (as carrot was before it) to make a connection to the amqp that rabbitmq sets up as an mq. After getting rabbitmq-app up, it needed to be linked with buildapi-app, and once it was, it became clear that localhost was not the proper host for buildapi or selfserve-agent to use to find the amqp. When docker links containers, it allocates all the ports and IPs for them and makes these new connections available to you in the form of environment variables. Once I had the two apps up and linked by running:

docker run -d -p 5672:5672 -p 15672:15672 -p 4369:4369 -name rabbitmq rabbitmq-app

docker run -t -i -p 8888:8888 -link rabbitmq:mq -name buildapi buildapi-app /bin/bash     # bash so that I can play with the variables

Then I was able to run env and see the environment variables that docker set up:

HOSTNAME=ee13bea5d0db
TERM=xterm
MQ_PORT_4369_TCP_ADDR=172.17.0.2
MQ_PORT_5672_TCP=tcp://172.17.0.2:5672
MQ_PORT_5672_TCP_PORT=5672
MQ_PORT_5672_TCP_ADDR=172.17.0.2
MQ_PORT_15672_TCP_PORT=15672
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MQ_PORT_4369_TCP_PORT=4369
PWD=/
MQ_PORT_15672_TCP_ADDR=172.17.0.2
SHLVL=1
HOME=/
MQ_PORT=tcp://172.17.0.2:4369
MQ_PORT_15672_TCP=tcp://172.17.0.2:15672
MQ_PORT_4369_TCP=tcp://172.17.0.2:4369
MQ_PORT_4369_TCP_PROTO=tcp
MQ_PORT_5672_TCP_PROTO=tcp
MQ_NAME=/buildapi/mq
MQ_PORT_15672_TCP_PROTO=tcp
_=/usr/bin/env

As you can see, the proper host to look at is 172.17.0.2 instead of localhost. Luckily, since these are environment variables, we can insert them into our configs by name rather than hard-coding them.

After this step, I was still getting a kombu error, this time caused by not having proper login credentials for the amqp. To fix it, I had to add a userid and password to the config.ini and selfserve-agent.ini files in buildapi. However, buildapi/buildapi/lib/mq.py does not open the kombu connection with the userid and password parameters filled in, so I had to patch that file. I also opened a bug to land this patch, or to have documentation written for the proper procedure. The patch is simply:

@@ -21,16 +21,18 @@ import logging
 log = logging.getLogger(__name__)
 
 class ConfigMixin(object):
 
     def setup_config(self, config):
         self.heartbeat = int(config.get('mq.heartbeat_interval', '0'))
         conn = Connection(config['mq.kombu_url'],
                           heartbeat=self.heartbeat,
+                          userid=config['mq.userid'],
+                          password=config['mq.password'],
                           transport_options={'confirm_publish': True})
         self.connection = connections[conn].acquire(block=True)
         self.exchange = Exchange(config['mq.exchange'], type='topic', durable=True)
 
     def get_queue(self, queue_name, routing_key):
         return Queue(queue_name,
                      durable=True,
                      routing_key=routing_key,
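
For reference, the matching additions to config.ini and selfserve-agent.ini look roughly like this (the userid matches the one visible in the amqp connection log elsewhere in this series; the password is a placeholder):

mq.userid = selfserveagent
mq.password = <your password here>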

Once all of this was fixed and setup, it appears that buildapi and selfserve-agent were able to connect to the amqp perfectly fine!

Left to do on buildapi-app is to:

  • Test that buildapi and selfserve-agent are truly connected and able to exchange over the amqp
  • Setup the databases properly and load them with temporary data
  • Test the entire buildapi application by running similar procedures that should work in my local setup

Updates to this setup can again be found in my user repo http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup/

 

Docker containers up for RabbitMQ and BuildAPI

After spending some time on the buildapi-app docker container, I realized that my issues with kombu were likely due to selfserve-agent setup that I had not yet done. So before diving into that, I went ahead and started getting rabbitmq working. I referenced the following while working on rabbitmq-app and buildapi-app:

http://blog.daviddias.me/get-your-feet-wet-with-docker/
https://github.com/cloudezz/cloudezz-images/tree/master/cloudezz-rabbitmq

http://docs.docker.io/use/working_with_links_names/

I finally worked the app to the point that rabbitmq was up and all the proper users were installed. However, I was not able to access the RabbitMQ management page as I should have been able to at http://127.0.0.1:15672.
I began flipping through the boot2docker tutorial again and noticed that the final step to launching the app was to run the following sequence of commands:

$ boot2docker down # The VM must be stopped
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8888,,8888"
$ boot2docker up

When I initially did this tutorial, I noticed that the vboxmanage command seemed to be connecting host port 8888 and guest port 8888, but I didn't give it much more thought. Well, it turns out my initial understanding of using Boot2Docker was incorrect. Boot2docker simply runs a VM in vbox called boot2docker-vm, and within this VM is an environment with docker fully installed and working. This is important because I was under the impression that my docker containers were running in my own environment and not in a VM themselves. Because of this misunderstanding, I was puzzled as to why I needed to run this vboxmanage command to expose port 8888 between guest and host, after I had already exposed port 8888 in the Dockerfile of the tutorial and launched the container with docker run -p 8888:8888.

Silly me, the true structure is that
1) the exposed port in the Dockerfile tells the docker container to make 8888 available for exposing, and
2) the command docker run -p 8888:8888 connects port 8888 in the docker container with port 8888 IN THE BOOT2DOCKER-VM.
This seems obvious now, but I had apparently overlooked this simple concept. The container's port was connected just fine to the boot2docker-vm, but I couldn't see it from my own local OS because I hadn't forwarded the port from the boot2docker-vm to my local machine!
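
Putting both layers together, the full forwarding chain for the tutorial's port looks like this (commands as used elsewhere in this post):

$ docker run -p 8888:8888 ...    # container port 8888 -> boot2docker-vm port 8888
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8888,,8888"    # vm port 8888 -> host port 8888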

Once I discovered my mistake, I ran the command vboxmanage showvminfo boot2docker-vm and was greeted with a ton of info about the vm, including the following:

NIC 1: MAC: 0800279C9CFD, Attachment: NAT, Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 1 Settings:  MTU: 0, Socket (send: 64, receive: 64), TCP Window (send:64, receive: 64)
NIC 1 Rule(0):   name = docker, protocol = tcp, host ip = 127.0.0.1, host port = 5000, guest ip = , guest port = 4243
NIC 1 Rule(1):   name = http, protocol = tcp, host ip = 127.0.0.1, host port = 8888, guest ip = , guest port = 8888
NIC 1 Rule(2):   name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 2022, guest ip = , guest port = 22
NIC 2:           disabled
NIC 3:           disabled
NIC 4:           disabled
NIC 5:           disabled
NIC 6:           disabled
NIC 7:           disabled
NIC 8:           disabled

The line reading NIC 1 Rule(1) shows the result of setting host port 8888 and guest port 8888.

You cannot set more than one http rule for NIC 1 (perhaps you could enable NIC 2 and set an http rule there, but I didn't bother with it), so I deleted the previous NIC 1 http rule and added one for port 15672 so that I could check for the rabbitmq management portal at 127.0.0.1:15672:

$ boot2docker down
$ vboxmanage modifyvm boot2docker-vm --natpf1 delete http
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,15672,,15672"
$ boot2docker up

After a rebuild/launch of rabbitmq-app, I visited 127.0.0.1:15672 and was greeted with the management portal! Huzzah!
With that I was able to verify that the rabbitmq-app was finished!

From there I needed to ensure that the buildapi-app container could connect to the rabbitmq-app container, so I deleted port 15672 from the NIC 1 http rule and added port 8888 again so that I could visit the buildapi page. (Yes, I could have kept 15672.)

In order to get these containers communicating, I needed to specify a link between them, and I used these docs to help me figure that part out.

I was able to confirm that buildapi is up and running and its pages can be visited in self-serve. There is no db info, so no revisions show up, but that is expected since I am not feeding anything into those dbs. Left to do on the buildapi-app container is to get kombu and selfserve-agent running.

I have uploaded the current working Dockerfiles for these apps to http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup/

Bash commands that were useful today:

for f in $(docker ps -a -q); do docker rm -f $f; done; # Remove all docker containers
for f in $(docker images -q); do docker rmi -f $f; done; # Remove all docker images