Tupperware: Mozilla apps in Docker!

Announcing Tupperware, a setup for Mozilla apps in Docker! Tupperware is portable, reusable, and containerized. But unlike typical tupperware, please do not put it in the microwave.


Why?

This is a project born out of a need to lower the barriers to entry for new contributors to Release Engineering (RelEng) maintained apps and services. Historically, RelEng has had greater difficulty attracting community contributors than other parts of Mozilla, due in large part to how much knowledge is needed just to get going in the first place. For a new contributor, it can be quite overwhelming to jump into any of the code bases that RelEng maintains, and that often leads to quickly losing the new contributor to exasperation. Beyond new contributors, Tupperware is great for experienced contributors as well: it helps keep a development environment unpolluted and makes testing patches easier.

What?

Currently Tupperware includes the following Mozilla apps:

BuildAPI – a Pylons project used by RelEng to surface information collected from two databases updated through our buildbot masters as they run jobs.

BuildBot – a job (read: builds and tests) scheduling system that queues and executes jobs when the required resources are available, and reports the results.

Dependency apps currently included:

RabbitMQ – a message queue used by RelEng apps and services

MySQL – Forked from orchardup/mysql

How?

Vagrant is used as a quick and easy way to provision the Docker apps and make the setup truly plug n’ play. The current setup has a single Vagrantfile which launches BuildAPI and BuildBot, along with their dependency apps RabbitMQ and MySQL.

How to run:

– Install Vagrant 1.6.3

– hg clone https://hg.mozilla.org/build/tupperware/ && cd tupperware && vagrant up (takes >10 minutes the first time)

Where to see apps:

– BuildAPI: http://127.0.0.1:8888/

– BuildBot: http://127.0.0.1:8000/

– RabbitMQ Management: http://127.0.0.1:15672/

Troubleshooting tips are available in the Tupperware README.

What’s Next?

Now that Tupperware is out there, it’s open to contributors! The setup does not need to stay solely usable for RelEng apps and services. So please submit bugs to add new ones! There are a few ideas for adding functionality to Tupperware already:

  • Bug 1027410 - Add Treeherder docker container to Tupperware
  • Bug 1027412 - Add multiple vagrant setups to Tupperware to customize setup
  • Bug 1027417 - Have MySQL docker app in Tupperware load database schemas

Have ideas? Submit a bug!

 

BuildAPI, Buildbot, RabbitMQ and MySQL containers are all up! Some testing left…

BuildAPI, Buildbot, RabbitMQ and MySQL containers are all up now! To run them, pull http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup and run 'vagrant up' from the vagrant-docker-setup/ directory.

The vagrant up command will take several minutes to run the first time because it needs to pull the docker images from the Docker Index at docker.io. More to come tomorrow on this. NOTE: Buildbot seems to be running, but I have not been able to test *full* functionality just yet. However, the buildapi-app, rabbitmq-app and orchardup/mysql containers run together just fine.

To view:

  • BuildAPI: localhost:8888
  • RabbitMQ: localhost:15672
  • Buildbot: localhost:8501 – NOT YET

Keep checking back!

New

  • Added specific app users to mysql with passwords
  • Added version row with value 6 to schedulerdb
  • Showed that a job added from buildapi shows up in mysql on buildbot
  • The malformed url error was caused by the URL not picking up the environment variable
  • Once the env var was imported, I was still getting the malformed url error, but this time it was because I had not created a password for the user. I remember running into this same problem when I was setting up my local buildbot instance. There is a regex checking that the url is not malformed, and it does not take kindly to the absence of a password, regardless of the fact that mysql is okay with a user not having a password at all.
  • Uploaded images for johnlzeller/rabbitmq, johnlzeller/buildapi and johnlzeller/buildbot to Docker Index
  • Verified that entire setup can be run in Vagrant
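The password check described above can be illustrated with a simplified stand-in. This pattern is my own illustration, not buildbot's actual regex: the point is that a URL with a user but no password simply fails to match.

```python
import re

# Illustrative only: a simplified stand-in for the kind of pattern
# buildbot uses to validate its database URL. The real check lives in
# buildbot's db schema code; this regex is an assumption.
DB_URL_RE = re.compile(
    r"^(?P<driver>\w+)://(?P<user>\w+):(?P<passwd>\w+)@(?P<host>[\w.]+)/(?P<db>\w+)$"
)

def is_well_formed(url):
    """Return True only when the URL carries both a user and a password."""
    return DB_URL_RE.match(url) is not None
```

With a pattern like this, `mysql://buildbot:secret@localhost/schedulerdb` passes, while the password-less `mysql://buildbot@localhost/schedulerdb` is rejected even though mysql itself would accept that user.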

What's next?

  • Create a repo on hg.mozilla.org/build for holding the Vagrantfile and Dockerfiles for the images, and populate the new hg.m.o/build repo with them
  • Troubleshoot why the buildbot web interface is not showing up on localhost:8501
  • Publish setup to blog

After initial release

  • Have one of two things happen:

    1. Have mysql-app set up to load its own schemas and users
    2. Have individual apps load schemas and users only if they do not already exist… this ensures persistence of the databases
  • Look into using the VOLUME docker command to set up an easy way to share a host directory for editing purposes. The goal here is to make it easy to make changes to the running dev setup and to test that setup. Currently, the docker setup just runs the tip of each repository for buildapi and buildbot
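Option 2 above could look something like this sketch, using sqlite as a stand-in for mysql and a hypothetical table name:

```python
import sqlite3

def ensure_schema(conn):
    # Create tables only when they are missing, so existing data
    # persists across container restarts (table name is hypothetical).
    row = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name='buildrequests'"
    ).fetchone()
    if row is None:
        conn.execute(
            "CREATE TABLE buildrequests (id INTEGER PRIMARY KEY, complete INTEGER)"
        )

conn = sqlite3.connect(":memory:")
ensure_schema(conn)                      # first run: creates the table
conn.execute("INSERT INTO buildrequests (complete) VALUES (0)")
ensure_schema(conn)                      # second run: no-op, the row survives
```

The same check-before-create idea applies to mysql users and databases; the point is that rerunning the app's startup never clobbers data that is already there.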

Questions

  • Why/how does schedulerdb.version get populated with a version number int like 6? Buildbot-app was failing because there was no row in version. I just added 6, since that is what my local schedulerdb dump had, but is there a more appropriate way to do this? Does this check need to be changed? The assert can be found on line 35 of /usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/db/schema/manager.py
 

BuildAPI-app, RabbitMQ-app and orchardup/mysql are working correctly

BuildAPI-app, RabbitMQ-app and orchardup/mysql are working correctly. This post is a short update on working through the What's next list from the previous post. Here is the updated list:

What's next?

The next steps are these:

  • Resolve exceptions.ValueError in buildbot-app
  • Resolve sqlalchemy.exc.OperationalError in buildapi-app
  • Link rabbitmq, mysql, and buildapi and test that everything works
  • Link mysql, and buildbot and test that everything works
  • Link rabbitmq, mysql, buildapi AND buildbot and test that the whole package works
  • See if there is a good way to load statusdb and schedulerdb schemas into mysql in a mysql-app setup built on the orchardup/mysql image. This would prevent the redundancy of loading schemas in buildapi-app and buildbot-app
 

Vagrant can now run BuildAPI and RabbitMQ apps

Continuing on from my previous post, I verified that buildapi and selfserve-agent are truly connected and able to exchange over the amqp, and that the entire buildapi application is running well by running similar procedures that work in my local setup.

Once I did that, I updated the Vagrantfile to forward the vagrant port 8888 to the host port 8888, and to build and start the rabbitmq-app and buildapi-app. In the wild, the Vagrantfile will not have docker build the images; rather, it will pull them from Mozilla's docker repository, which will be a much faster process. As it stands, running vagrant up from scratch the first time takes about 10-15 minutes to launch.

Here's how you can NOW run a fully functional BuildAPI app locally with a single command :)

  1. hg clone http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup
  2. cd vagrant-docker-setup
  3. vagrant up
 

MySQL databases are all set up in the BuildAPI-app docker container!

As I stated in the previous post, the next step was to set up databases. I spent time attempting to make sqlite work in this situation, but ran into issues with buildapi connecting to the sqlite databases. Rather than chase that rabbit hole, I double-checked the configuration in production buildapi and was reminded by the configs that production runs mysql, so I went with mysql as well. This setup required adding the following to the Dockerfile:

RUN apt-get install -y mysql-server

RUN chown mysql.mysql /var/run/mysqld/

RUN mysql_install_db # Installs mysql database schemas

RUN /usr/bin/mysqld_safe &

After this, everything was peachy except for the sql schemas available in the current buildapi repo. Those schemas are for sqlite, so I dumped my own mysql schemas for use here, and loaded them with the following commands:

mysql < status_schema.mysql

mysql < scheduler_schema.mysql

I went ahead and submitted a patch to add the mysql specific schemas to the buildapi repo in Bug 1007994, but for now I added the schemas in with the files in the buildapi-app directory.

I uploaded the current contents of the buildapi-app docker container and it launches with schemas all loaded and running well.

I am still having some issues verifying that selfserve-agent can execute commands from data sent to it over the amqp by buildapi. Further testing is needed to fix this issue. I am currently getting a 404 error with my tests, but that might be a peripheral problem rather than selfserve-agent not getting data from the amqp.

Left to do on buildapi-app is to:

  • Test that buildapi and selfserve-agent are truly connected and able to exchange over the amqp
  • Test the entire buildapi application by running similar procedures that should work in my local setup

Links I found useful for this:

  • http://ijonas.com/devops-2/building-a-docker-based-mysql-server/

 

BuildAPI-app is almost up!

I am very close to having the buildapi-app docker container working completely. I left off last time without selfserve-agent set up, and with a kombu error.

In order to set up selfserve-agent properly, I had to include a selfserve-agent.ini file in the base of the docker container for selfserve-agent.py to use when called with python buildapi/buildapi/scripts/selfserve-agent.py -w. Additionally, I included a simple bash script to ensure that the container is able to launch both processes side by side without blocking one another.
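The launch script's job can be sketched like this, as a Python stand-in for the actual bash script (the commented-out invocations are assumptions, not the container's real commands):

```python
import subprocess

def run_side_by_side(commands):
    """Start every command without blocking on any of them, then wait
    for all of them; a stand-in for the container's launch script."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]

# Inside the container this would be roughly (hypothetical invocations):
# run_side_by_side([
#     ["paster", "serve", "config.ini"],
#     ["python", "buildapi/buildapi/scripts/selfserve-agent.py", "-w"],
# ])
```

The key detail is that both processes are started before either is waited on, so neither blocks the other.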

The error I was having with kombu was because I did not have rabbitmq-app running. Kombu is used (as carrot was before it) to make a connection to the amqp broker that rabbitmq sets up as an mq. After getting rabbitmq-app up, it needed to be linked with buildapi-app, and once it was, it became clear that localhost was not the proper host on which buildapi or selfserve-agent should look for the amqp. When docker links containers, it allocates all the ports and IPs for them, and makes these new connections available to you in the form of environment variables. I got the 2 apps up and linked by running:

docker run -d -p 5672:5672 -p 15672:15672 -p 4369:4369 -name rabbitmq rabbitmq-app

docker run -t -i -p 8888:8888 -link rabbitmq:mq -name buildapi buildapi-app /bin/bash     # bash so that I can play with the variables

Then I was able to run env and see the environment variables that docker set up:

HOSTNAME=ee13bea5d0db
TERM=xterm
MQ_PORT_4369_TCP_ADDR=172.17.0.2
MQ_PORT_5672_TCP=tcp://172.17.0.2:5672
MQ_PORT_5672_TCP_PORT=5672
MQ_PORT_5672_TCP_ADDR=172.17.0.2
MQ_PORT_15672_TCP_PORT=15672
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MQ_PORT_4369_TCP_PORT=4369
PWD=/
MQ_PORT_15672_TCP_ADDR=172.17.0.2
SHLVL=1
HOME=/
MQ_PORT=tcp://172.17.0.2:4369
MQ_PORT_15672_TCP=tcp://172.17.0.2:15672
MQ_PORT_4369_TCP=tcp://172.17.0.2:4369
MQ_PORT_4369_TCP_PROTO=tcp
MQ_PORT_5672_TCP_PROTO=tcp
MQ_NAME=/buildapi/mq
MQ_PORT_15672_TCP_PROTO=tcp
_=/usr/bin/env

As you can see, the proper host to look at is 172.17.0.2 instead of localhost. Luckily, since these are environment variables, we can just reference them in our configs by name rather than hard-coding them.
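For example, a config helper could pick the broker address out of those variables. The variable names follow docker's link naming shown above; the localhost fallback and the function names are my own assumptions:

```python
import os

def broker_host(default="localhost"):
    """Prefer the address docker injected for the linked 'mq' container,
    falling back to a default when running outside docker links."""
    return os.environ.get("MQ_PORT_5672_TCP_ADDR", default)

def broker_url(userid, password):
    """Build an amqp URL from the linked container's address and port."""
    return "amqp://%s:%s@%s:%s//" % (
        userid, password, broker_host(),
        os.environ.get("MQ_PORT_5672_TCP_PORT", "5672"))
```

With the containers linked as above, `broker_url("buildapi", "buildapi")` would come out as `amqp://buildapi:buildapi@172.17.0.2:5672//`.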

After this step, I was still getting a kombu error, which was caused by not having proper login credentials for the amqp. In order to fix this I had to add a userid and password to the config.ini and selfserve-agent.ini files in buildapi. However, buildapi/buildapi/lib/mq.py does not open the kombu connection with the userid and password parameters filled in, so I had to patch this file. I also opened a bug to handle this patch, or to have documentation generated for the proper procedure. The patch is simply:

@@ -21,16 +21,18 @@ import logging
 log = logging.getLogger(__name__)
 
 class ConfigMixin(object):
 
     def setup_config(self, config):
         self.heartbeat = int(config.get('mq.heartbeat_interval', '0'))
         conn = Connection(config['mq.kombu_url'],
                           heartbeat=self.heartbeat,
+                          userid=config['mq.userid'],
+                          password=config['mq.password'],
                           transport_options={'confirm_publish': True})
         self.connection = connections[conn].acquire(block=True)
         self.exchange = Exchange(config['mq.exchange'], type='topic', durable=True)
 
     def get_queue(self, queue_name, routing_key):
         return Queue(queue_name,
                      durable=True,
                      routing_key=routing_key,

Once all of this was fixed and set up, buildapi and selfserve-agent appeared to connect to the amqp perfectly fine!

Left to do on buildapi-app is to:

  • Test that buildapi and selfserve-agent are truly connected and able to exchange over the amqp
  • Setup the databases properly and load them with temporary data
  • Test the entire buildapi application by running similar procedures that should work in my local setup

Updates to this setup can again be found in my user repo http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup/

 

Docker containers up for RabbitMQ and BuildAPI

After spending some time on the buildapi-app docker container, I realized that my issues with kombu were likely due to selfserve-agent setup that I have not yet done. So before diving into that, I went ahead and started getting rabbitmq working. I referenced the following while working on rabbitmq-app and buildapi-app:

http://blog.daviddias.me/get-your-feet-wet-with-docker/
https://github.com/cloudezz/cloudezz-images/tree/master/cloudezz-rabbitmq

http://docs.docker.io/use/working_with_links_names/

I finally worked the app to the point that rabbitmq was up and all the proper users were installed. However, I was not able to access the RabbitMQ management page as I should have been able to at http://127.0.0.1:15672.
I began flipping through the boot2docker tutorial again and noticed that the final step to launching the app was to run the following sequence of commands:

$ boot2docker down # The VM must be stopped
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8888,,8888"
$ boot2docker up

When I initially did this tutorial, I noticed that the vboxmanage command seemed to be connecting host port 8888 and guest port 8888, but I didn't give it much more thought. It turns out my initial understanding of Boot2Docker was incorrect. Boot2docker simply runs a VM in vbox called boot2docker-vm, and within this vm is an environment with docker fully installed and working. This is important because I was under the impression that my docker containers were running in my own environment and not in a vm themselves. Because of this misunderstanding, I was puzzled as to why I needed to run this vboxmanage command to expose port 8888 on the guest and host, after I had already exposed port 8888 in the Dockerfile of the tutorial and launched the container with docker run -p 8888:8888.

Silly me, the true structure is that
1) the exposed port in the Dockerfile tells the docker container to make 8888 available for exposing, and
2) the command docker run -p 8888:8888 does the connecting of port 8888 in the docker container with port 8888 IN THE BOOT2DOCKER-VM
This seems obvious now, but I apparently had overlooked this simple concept. The container's port was connected just fine to the boot2docker-vm, but I couldn't see them on my own local OS because I hadn't proceeded to forward the ports from the boot2docker-vm to my local machine!

Once I discovered my mistake, I ran the command vboxmanage showvminfo boot2docker-vm and was greeted with a ton of info about the vm, including the following:

NIC 1: MAC: 0800279C9CFD, Attachment: NAT, Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 1 Settings:  MTU: 0, Socket (send: 64, receive: 64), TCP Window (send:64, receive: 64)
NIC 1 Rule(0):   name = docker, protocol = tcp, host ip = 127.0.0.1, host port = 5000, guest ip = , guest port = 4243
NIC 1 Rule(1):   name = http, protocol = tcp, host ip = 127.0.0.1, host port = 8888, guest ip = , guest port = 8888
NIC 1 Rule(2):   name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 2022, guest ip = , guest port = 22
NIC 2:           disabled
NIC 3:           disabled
NIC 4:           disabled
NIC 5:           disabled
NIC 6:           disabled
NIC 7:           disabled
NIC 8:           disabled

The line reading NIC 1 Rule(1) shows the result of setting host port 8888 and guest port 8888.

You cannot set more than one rule named http for NIC 1 (perhaps you could enable NIC 2 and set another http rule there, but I didn't bother with it), so I deleted the previous NIC 1 http rule and added one for port 15672 so that I could test the presence of the rabbitmq management portal at 127.0.0.1:15672

$ boot2docker down
$ vboxmanage modifyvm boot2docker-vm --natpf1 delete http
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,15672,,15672"
$ boot2docker up

After a rebuild/launch of rabbitmq-app, I visited 127.0.0.1:15672 and was greeted with the management portal! Huzzah!
With that I was able to verify that the rabbitmq-app was finished!

From there I needed to ensure that the buildapi-app container could connect to the rabbitmq-app container, so I deleted port 15672 from the NIC 1 http rule and added port 8888 again so that I could visit the buildapi page there. (Yes I could have kept 15672)

In order to get these containers communicating, I needed to specify a link between them, and I used these docs to help me figure that part out.

I was able to confirm that buildapi is up and running and pages can be visited in self-serve. There is no db info, so no revisions show up, but that is expected since I am not feeding in anything to fill those dbs with data. Left to do on the buildapi-app container is to get kombu and selfserve-agent running.

I have uploaded the current working Dockerfiles for these apps to http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup/

Bash commands that were useful today:

for f in $(docker ps -a -q); do docker rm -f $f; done; # Remove all docker containers
for f in $(docker images -q); do docker rmi -f $f; done; # Remove all docker images

 

 

Further Steps with Vagrant

Today I spent a bunch of time looking into how to set up a vagrant instance, how to pack it with docker, how to run multiple docker containers in that vagrant instance, and how to link them all together to play nicely. Throughout this process I have narrowed down that vagrant will be running hashicorp/precise64, and docker will have 5 apps, one each for buildapi, mysql, redis, rabbitmq and buildbot. Once I have linked all of this together properly, there will be a directory structure that looks a little like this:

./Vagrantfile

./redis-app/Dockerfile

./rabbitmq-app/Dockerfile

./mysql-app/Dockerfile

./buildbot-app/Dockerfile

./buildapi-app/Dockerfile

The Vagrantfile will load a base image of Ubuntu 12.04 Precise 64-bit, install docker (tip), build the images for buildapi (tip), mysql (tip), redis (v2.4.5), rabbitmq (tip) and buildbot (tip), expose all the necessary ports for each, link them properly, and then run each container as a daemon. Once this is all set up, the user should be able to simply pull this repo (I am assuming this directory will become a repository on hg.mozilla.org/build), run "vagrant up", and then see buildapi at 127.0.0.1:5000 and buildbot at 127.0.0.1:8501. The awesome added benefit is that files can be shared between your normal development environment and the VM that vagrant started up, simply by placing and editing files in the base directory where the Vagrantfile lives! This will make development of these apps a lot smoother for our community members, with the super simple push-button command line interface that vagrant provides for getting these VMs started up. Additionally, once this works it can easily be expanded with more and more apps as we see fit! :)

 

Vagrant + Docker = Happy Fun Times!

I am working with Vagrant and Docker for the first time, and it is awesome! We are working towards capturing all of our "how to install" knowledge for buildbot, buildapi and redis in code by making a set of docker containers that play well together inside of Vagrant. In this setup, we assume that the user has Ubuntu (or an equivalent linux) running in virtualbox. Once that one VM exists, we will have a separate docker container per app (buildbot, buildapi, redis, etc.). The benefit of connecting these docker containers inside vagrant is that it lets any combination be run: buildbot, buildapi standalone, or the whole "system". This also allows more modules to be added later.

So far I have installed Vagrant, Docker and Boot2Docker, worked my way through the Boot2Docker/Docker tutorials (including a sweet hello world app example from David Dias), and I have begun working my way through the Vagrant tutorials. Other than all that prep, I have also begun writing up the Dockerfiles needed to set up each of our apps. I think we will need one each for buildapi, buildbot, redis, and rabbitmq.

I got most of the way through the BuildAPI and RabbitMQ Dockerfiles before I got to a point where I needed to get Vagrant installed and figured out, so that I could also create a Vagrantfile to go along with this entire setup.

Once I have completed this entire setup, you will be able to get a buildapi, buildbot, redis, rabbitmq, and selfserve system up and running simply by downloading the Vagrantfile and running 'vagrant up'. How cool is that?!

More updates on this next week!


 

RabbitMQ Deux: SUCCESS!

I spoke with catlee today to see if he could send over a copy of the scripts he used to set up buildapi as a user on rabbitmq, and he did. Coop warned that there might be some finicky issues specific to my Mac's environment (i.e. paths, etc). Indeed, when I attempted to run the script with the RabbitMQ server off, I got the error "Error: unable to connect to node rabbit@localhost: nodedown". Then, when I turned the server on, I got the error "Error: {noproc,{gen_server2,call,[worker_pool,next_free,infinity]}}". Obviously something was not quite right, so I did some more looking around. I found that RabbitMQ comes with a set of plugins that are disabled by default; once I enabled those, I could go into the web app, add buildapi as a user, and change some config options on buildapi, and BAM! It magically began accepting entries into the db.

Here is the step by step I used to get RabbitMQ up and running and working with buildapi on Mac OSX.

  1. If MacPorts is not already installed, then go here.
  2. Once you've ensured that MacPorts is installed you can install RabbitMQ: sudo port install rabbitmq-server

    • The instructions for this can be found here
  3. Once RabbitMQ is installed, you need to add buildapi as a user. Enable the rabbitmq_management plugin: rabbitmq-plugins enable rabbitmq_management

    • The instructions for this can be found here
  4. Then restart RabbitMQ: sudo /opt/local/etc/LaunchDaemons/org.macports.rabbitmq-server/rabbitmq-server.wrapper restart
  5. Now go to http://localhost:15672/ and use the username/password combo of guest/guest
  6. Once in, go to 'Admin'
  7. Select the 'Add a user' option and enter the following

    • Username: buildapi
    • Password: buildapi
    • Tags: administrator
  8. Now submit the new user by selecting 'Add user'
  9. Once you have added 'buildapi' as a new user, you will see it listed under the 'All users' section above
  10. Select 'buildapi' and a window for permissions will come up
  11. Make sure that the permissions are set to the following

    • Virtual Host: /
    • Configure regexp: .*
    • Write regexp: .*
    • Read regexp: .*
  12. Now submit these permissions by selecting 'Set Permission'
  13. Once you have done this, the only thing left is to adjust the config.ini file at the root of buildapi to include the following lines

    • carrot.hostname = localhost
    • carrot.userid = buildapi
    • carrot.password = buildapi
    • carrot.exchange = buildapi.control
    • carrot.consumer.queue = buildapi-web
  14. Once you have made sure that the previous lines were added to your config.ini file in buildapi, then start up buildapi
  15. Go to http://localhost:15672/#/connections and a connection with the username 'buildapi' should be listed and the state should be 'running'
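For reference, the lines from step 13 parse like any other ini-style key/value block. The [app:main] section name here is an assumption; buildapi's actual config.ini layout may differ:

```python
# Python 3's configparser shown for the sketch; buildapi itself is Python 2.
from configparser import ConfigParser

CARROT_CONFIG = """
[app:main]
carrot.hostname = localhost
carrot.userid = buildapi
carrot.password = buildapi
carrot.exchange = buildapi.control
carrot.consumer.queue = buildapi-web
"""

cfg = ConfigParser()
cfg.read_string(CARROT_CONFIG)
settings = dict(cfg["app:main"])   # e.g. settings["carrot.userid"] == "buildapi"
```

The userid/password pair here is exactly what the new rabbitmq user from steps 7-8 expects, which is why the connection in step 15 shows up under the 'buildapi' username.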

And that's that! I attempted to click 'rebuild' again from a branch page like try and it worked! The database entry was successful!

Now that I have this mq issue figured out with the help of catlee and coop (thanks guys!), I will move on to the following:

  • Update the wiki doc on Setting up a Local Virtualenv for BuildAPI with the newfound instructions for getting RabbitMQ installed on Mac.
  • Begin writing up unittests to test for proper entry of new buildrequests into the schedulerdb
  • Write up the needed logic to enter a single buildrequest
  • Review the logic
  • Lather, Rinse, Repeat