BuildAPI, Buildbot, RabbitMQ and MySQL containers are all up! Some testing left…

BuildAPI, Buildbot, RabbitMQ and MySQL containers are all up now! To run it, pull http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup and run 'vagrant up' from the vagrant-docker-setup/ directory.
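
Spelled out, that is:

hg clone http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup
cd vagrant-docker-setup/
vagrant up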

The vagrant up command will take several minutes to run the first time because it needs to pull the docker images from the Docker Index at docker.io. More to come tomorrow on this. NOTE: Buildbot seems to be running, but I have not been able to test *full* functionality just yet. However, the buildapi-app, rabbitmq-app and orchardup/mysql containers run together just fine.
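
If you want to pre-pull (or re-pull) the images by hand, they are the three I pushed plus orchardup/mysql:

docker pull johnlzeller/buildapi
docker pull johnlzeller/buildbot
docker pull johnlzeller/rabbitmq
docker pull orchardup/mysql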

To view

  • BuildAPI: localhost:8888
  • RabbitMQ: localhost:15672
  • Buildbot: localhost:8501 – NOT YET

Keep checking back!

New

  • Added specific app users to mysql with passwords
  • Added version row with value 6 to schedulerdb
  • Showed that a job added from buildapi shows up in mysql on the buildbot side
  • The malformed url error was caused by the URL not picking up the environment variable
  • Once the env var was imported, I was still getting the malformed url error, but this time it was because I had not created a password for the user. I remember running into this same problem when setting up my local buildbot instance. There is a regex checking that the url is not malformed, and it does not take kindly to the absence of a password, even though mysql itself is perfectly okay with a user having no password at all (see the example after this list).
  • Uploaded images for johnlzeller/rabbitmq, johnlzeller/buildapi and johnlzeller/buildbot to Docker Index
  • Verified that entire setup can be run in Vagrant
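
To make the malformed url point above concrete, the difference is just whether the connection url carries a password (host/user/db names here are made up for illustration):

mysql://buildapi@mysql-host/schedulerdb          # no password: the regex rejects this as malformed
mysql://buildapi:secret@mysql-host/schedulerdb   # with a password: accepted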

What's next?

  • Create a repo on hg.mozilla.org/build for holding the Vagrantfile and Dockerfiles for the images, and update that new hg.m.o/build repo with the Vagrantfile and Dockerfiles
  • Troubleshoot why the buildbot web interface is not showing up on localhost:8501
  • Publish setup to blog

After initial release

  • One of two things should happen:

    1. Have mysql-app set up to load its own schemas and users
    2. Have individual apps only load schemas and users if they do not already exist… this ensures persistence of the databases (a rough sketch of this check follows this list)
  • Look into using the VOLUME docker command to set up an easy way to share a host directory for editing purposes. The goal here is to make it easy to make changes to the running dev setup and to test that setup. Currently, the docker setup just runs the tip of each repository for buildapi and buildbot.
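
A rough sketch of the option 2 check mentioned above (the host variable and schema path are placeholders, not what the apps currently do):

# Only create and load schedulerdb if it does not already exist (placeholder host/path)
if ! mysql -h $DB_HOST -u root -e 'USE schedulerdb'; then
    mysql -h $DB_HOST -u root -e 'CREATE DATABASE schedulerdb'
    mysql -h $DB_HOST -u root schedulerdb < /path/to/schedulerdb_schema.sql
fi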

Questions

  • Why/how does schedulerdb.version get populated with a version number int like 6? Buildbot-app was failing because there was no row in version. I just added 6 to it, since that is what my local schedulerdb dump had, but is there a more appropriate way to do this? Does this check need to be changed? The assert can be found on line 35 of /usr/local/lib/python2.7/dist-packages/buildbot-0.8.2_hg_f6d9311d9246_production_0.8-py2.7.egg/buildbot/db/schema/manager.py
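
For the record, the workaround was just a manual insert along these lines (the column name is an assumption about the schema, so check it against a real dump):

mysql -u root schedulerdb -e "INSERT INTO version (version) VALUES (6);"   # column name assumed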

BuildAPI-app, RabbitMQ-app and orchardup/mysql are working correctly

BuildAPI-app, RabbitMQ-app and orchardup/mysql are working correctly. This post is a short update from working through the What's next list in the previous post. Here is the updated list:

What's next?

The next steps are these:

  • Resolve exceptions.ValueError in buildbot-app
  • Resolve sqlalchemy.exc.OperationalError in buildapi-app
  • Link rabbitmq, mysql, and buildapi and test that everything works
  • Link mysql, and buildbot and test that everything works
  • Link rabbitmq, mysql, buildapi AND buildbot and test that the whole package works (a rough sketch of these link commands follows this list)
  • See if there is a good way to load statusdb and schedulerdb schemas into mysql in a mysql-app setup built on the orchardup/mysql image. This would prevent the redundancy of loading schemas in buildapi-app and buildbot-app
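
The whole-package linking from the last few items will presumably end up looking something like this (the mq alias is what I have been using; the db alias and the buildbot port are guesses at this point):

docker run -d -name mysql orchardup/mysql
docker run -d -p 5672:5672 -p 15672:15672 -name rabbitmq rabbitmq-app
docker run -d -p 8888:8888 -link rabbitmq:mq -link mysql:db -name buildapi buildapi-app
docker run -d -p 8501:8501 -link mysql:db -name buildbot buildbot-app     # db alias and port are guesses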

BuildAPI-app is almost up!

I am very close to having the buildapi-app docker container working completely. I left off last time without selfserve-agent set up, and with a kombu error.

In order to set up selfserve-agent properly, I had to include a selfserve-agent.ini file in the base of the Docker image, to be used by selfserve-agent.py when it is called with: python buildapi/buildapi/scripts/selfserve-agent.py -w. Additionally, I included a simple bash script to ensure that the container is able to launch both processes side by side without blocking one another.
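
A minimal sketch of what that launcher looks like (the paster serve command and the paths are illustrative, not a verbatim copy of the script in the repo):

#!/bin/bash
# Start selfserve-agent in the background so it does not block the web app
python buildapi/buildapi/scripts/selfserve-agent.py -w &
# Run the buildapi web app in the foreground so the container stays alive (command/path illustrative)
paster serve buildapi/config.ini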

The error I was having with kombu was because I did not have rabbitmq-app running. Kombu is used (as carrot was before) to make the connection to the amqp broker that rabbitmq provides as an mq. After getting rabbitmq-app up, it needed to be linked with buildapi-app, and once it was, it became clear that localhost was not the proper host for buildapi or selfserve-agent to use when looking for the amqp. When docker links containers, it allocates all the ports and IPs for them and makes these new connections available to you in the form of environment variables. Once I had the two apps up and linked by running:

docker run -d -p 5672:5672 -p 15672:15672 -p 4369:4369 -name rabbitmq rabbitmq-app

docker run -t -i -p 8888:8888 -link rabbitmq:mq -name buildapi buildapi-app /bin/bash     # bash so that I can play with the variables

Then I was able to run env and see the environment variables that docker set up:

HOSTNAME=ee13bea5d0db
TERM=xterm
MQ_PORT_4369_TCP_ADDR=172.17.0.2
MQ_PORT_5672_TCP=tcp://172.17.0.2:5672
MQ_PORT_5672_TCP_PORT=5672
MQ_PORT_5672_TCP_ADDR=172.17.0.2
MQ_PORT_15672_TCP_PORT=15672
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MQ_PORT_4369_TCP_PORT=4369
PWD=/
MQ_PORT_15672_TCP_ADDR=172.17.0.2
SHLVL=1
HOME=/
MQ_PORT=tcp://172.17.0.2:4369
MQ_PORT_15672_TCP=tcp://172.17.0.2:15672
MQ_PORT_4369_TCP=tcp://172.17.0.2:4369
MQ_PORT_4369_TCP_PROTO=tcp
MQ_PORT_5672_TCP_PROTO=tcp
MQ_NAME=/buildapi/mq
MQ_PORT_15672_TCP_PROTO=tcp
_=/usr/bin/env

As you can see, the proper host to look at is 172.17.0.2 instead of localhost. Luckily, since these are environment variables, we can just insert them into our configs by name rather than hard coding them.
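
For example, a small startup step can splice the link variables into the config before anything launches (a sketch; the sed approach and the exact url format are mine, not necessarily what the repo does):

# Sketch: splice the linked host/port into the existing mq.kombu_url line in config.ini
sed -i "s|^mq.kombu_url.*|mq.kombu_url = amqp://${MQ_PORT_5672_TCP_ADDR}:${MQ_PORT_5672_TCP_PORT}//|" config.ini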

After this step, I was still getting a kombu error, which was caused by not having proper login credentials for the amqp. In order to fix this I had to add a userid and password to the config.ini and selfserve-agent.ini files in buildapi. However, buildapi/buildapi/lib/mq.py does not open the kombu connection with the userid and password parameters filled in, so I had to patch this file. I also opened a bug to handle this patch, or to have documentation generated for the proper procedure. The patch is simply:

@@ -21,16 +21,18 @@ import logging
 log = logging.getLogger(__name__)
 
 class ConfigMixin(object):
 
     def setup_config(self, config):
         self.heartbeat = int(config.get('mq.heartbeat_interval', '0'))
         conn = Connection(config['mq.kombu_url'],
                           heartbeat=self.heartbeat,
+                          userid=config['mq.userid'],
+                          password=config['mq.password'],
                           transport_options={'confirm_publish': True})
         self.connection = connections[conn].acquire(block=True)
         self.exchange = Exchange(config['mq.exchange'], type='topic', durable=True)
 
     def get_queue(self, queue_name, routing_key):
         return Queue(queue_name,
                      durable=True,
                      routing_key=routing_key,

Once all of this was fixed and set up, buildapi and selfserve-agent appeared to connect to the amqp perfectly fine!

Left to do on buildapi-app is to:

  • Test that buildapi and selfserve-agent are truly connected and able to exchange over the amqp
  • Setup the databases properly and load them with temporary data
  • Test the entire buildapi application by running similar procedures that should work in my local setup

Updates to this setup can again be found in my user repo http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup/


Docker containers up for RabbitMQ and BuildAPI

After spending some time on the buildapi-app docker container, I realized that my issues with kombu were likely due to selfserve-agent setup that I had not yet done. So before diving into that I went ahead and started getting rabbitmq working. I referenced the following while working on rabbitmq-app and buildapi-app:

http://blog.daviddias.me/get-your-feet-wet-with-docker/
https://github.com/cloudezz/cloudezz-images/tree/master/cloudezz-rabbitmq

http://docs.docker.io/use/working_with_links_names/

I finally worked the app to the point that rabbitmq was up and all the proper users were installed. However, I was not able to access the RabbitMQ management page, which should have been available at http://127.0.0.1:15672.

I began flipping through the boot2docker tutorial again and noticed that the final step to launching the app was to run the following sequence of commands:

$ boot2docker down # The VM must be stopped
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8888,,8888"
$ boot2docker up

When I initially did this tutorial, I noticed that the vboxmanage command seemed to be connecting host port 8888 and guest port 8888, but I didn't give it much more thought. Well it turns out my initial understanding of using Boot2Docker was incorrect. Boot2docker is simply running a VM in vbox called boot2docker-vm, and within this vm is an environment with docker fully installed and working. This is important because I was under the impression that my docker containers were running in my env and not in a vm themselves. Because of this misunderstanding, I was puzzled as to why I needed to run this vboxmanage command to expose port 8888 on the guest and host, after I had already exposed port 8888 in the Dockerfile of the tutorial, and launched the container with the command docker run -p 8888:8888.

Silly me, the true structure is that:

  1. the exposed port in the Dockerfile tells the docker container to make 8888 available for exposing, and
  2. the command docker run -p 8888:8888 connects port 8888 in the docker container to port 8888 IN THE BOOT2DOCKER-VM.

This seems obvious now, but I had apparently overlooked this simple concept. The container's port was connected just fine to the boot2docker-vm, but I couldn't see it from my own local OS because I hadn't forwarded the ports from the boot2docker-vm to my local machine!
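
To spell out all three hops in one place (using the tutorial's port 8888, with <image> as a placeholder):

Dockerfile:  EXPOSE 8888                                  # the container offers the port
docker run:  docker run -p 8888:8888 <image>              # container port -> boot2docker-vm port
host:        vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8888,,8888"   # vm port -> 127.0.0.1 on the host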

Once I discovered my mistake, I ran the command vboxmanage showvminfo boot2docker-vm and was greeted with a ton of info about the vm, including the following:

NIC 1: MAC: 0800279C9CFD, Attachment: NAT, Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 1 Settings:  MTU: 0, Socket (send: 64, receive: 64), TCP Window (send:64, receive: 64)
NIC 1 Rule(0):   name = docker, protocol = tcp, host ip = 127.0.0.1, host port = 5000, guest ip = , guest port = 4243
NIC 1 Rule(1):   name = http, protocol = tcp, host ip = 127.0.0.1, host port = 8888, guest ip = , guest port = 8888
NIC 1 Rule(2):   name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 2022, guest ip = , guest port = 22
NIC 2:           disabled
NIC 3:           disabled
NIC 4:           disabled
NIC 5:           disabled
NIC 6:           disabled
NIC 7:           disabled
NIC 8:           disabled

The line reading NIC 1 Rule(1) shows the result of setting host port 8888 and guest port 8888.

You cannot set more than one http rule for NIC 1 (perhaps you can enable NIC 2 and set the http rule there, but I didn't bother with it), so I deleted the previous NIC 1 http rule and added one for port 15672 so that I could test the presence of the rabbitmq management portal at 127.0.0.1:15672:

$ boot2docker down
$ vboxmanage modifyvm boot2docker-vm --natpf1 delete http
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,15672,,15672"
$ boot2docker up

After a rebuild/launch of rabbitmq-app, I visited 127.0.0.1:15672 and was greeted with the management portal! Huzzah!
With that I was able to verify that the rabbitmq-app was finished!

From there I needed to ensure that the buildapi-app container could connect to the rabbitmq-app container, so I deleted port 15672 from the NIC 1 http rule and added port 8888 again so that I could visit the buildapi page there. (Yes I could have kept 15672)

In order to get these containers communicating, I needed to specify a link between them, and I used these docs to help me figure that part out.

I was able to confirm that buildapi is up and running and that pages can be visited in self-serve. There is no db info, so no revisions show up, but that is expected since I am not feeding anything in to fill those dbs with data. Left to do on the buildapi-app container is to get kombu and selfserve-agent running.

I have uploaded the current working Dockerfiles for these apps to http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup/

Bash commands that were useful today:

for f in $(docker ps -a -q); do docker rm -f $f; done; # Remove all docker containers
for f in $(docker images -q); do docker rmi -f $f; done; # Remove all docker images


Vagrant + Docker = Happy Fun Times!

I am working with Vagrant and Docker for the first time, and it is awesome! We are working towards capturing all of our "how to install" knowledge for buildbot, buildapi and redis in code, by making a set of docker containers that can play well together inside of Vagrant. In this setup, we assume that the user has Ubuntu (or an equivalent linux) running in a virtualbox. Once that one VM exists, we will have separate docker instances per app (buildbot, buildapi, redis, etc.). The benefit of connecting these docker instances inside vagrant is that it lets any combo of buildbot, buildapi standalone, or the full "system" be run, and it allows more modules to be added later.

So far I have installed Vagrant, Docker and Boot2Docker, worked my way through the Boot2Docker/Docker tutorials (including a sweet hello world app example from David Dias), and begun working my way through the Vagrant tutorials. Other than all that prep, I have also begun writing up the Dockerfiles needed to set up each of our apps. I think we will need one for each app.

I got most of the way through the BuildAPI and RabbitMQ Dockerfiles before I got to a point where I needed to get Vagrant installed and figured out, so that I could also then create a Vagrantfile to go along with this entire setup.

Once I have completed this entire setup, you will be able to get a buildapi, buildbot, redis, rabbitmq, and selfserve system up and running simply by downloading the Vagrantfile and running 'vagrant up'. How cool is that?!

More updates on this next week!

Useful links: