Manual Testing of Arbitrary Builds

When a new selfserve-agent change is pushed to production, it's necessary to verify functionality with some manual testing. Here are the basic testing steps:

  1. If there is no new try job to mess with, then submit one (see ReleaseEngineering/TryServer):

     

    • hg clone http://hg.mozilla.org/mozilla-central
    • cd mozilla-central
    • echo "THING" >> README.txt
    • hg qnew test-patch
    • hg qref --message "try: -b o -p linux64 -u none -t none"
    • hg push -f ssh://hg.mozilla.org/try/
  2. In my case you can see the try job running here: https://tbpl.mozilla.org/?tree=Try&rev=3a5e6ca198d8

     

    • If the push is successful, the push output will give you your own link
  3. Submit a blank arbitrary job request to https://secure.pub.build.mozilla.org/buildapi/self-serve/try/builders/Linux x86-64 try build/3a5e6ca198d8 using trigger_arbitrary_job.py
  4. python trigger_arbitrary_job.py --buildername "Linux x86-64 try build" --branch try --rev 3a5e6ca198d8

     

    • Leaving --file out so that files = []
  5. See the running job at https://secure.pub.build.mozilla.org/buildapi/revision/try/3a5e6ca198d8
  6. Check for the pending job at https://secure.pub.build.mozilla.org/buildapi/self-serve/try/rev/3a5e6ca198d8
  7. Also check https://tbpl.mozilla.org/?tree=Try&rev=3a5e6ca198d8
  8. Buildbot status can be checked by finding the appropriate master on the buildapi page https://secure.pub.build.mozilla.org/buildapi/revision/try/3a5e6ca198d8 (a curl sketch for scripting these checks follows below)
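If you find yourself repeating steps 5 through 8, they are easy to script. Here is a minimal curl sketch; authenticating with LDAP credentials over basic auth and the format=json parameter are my assumptions about self-serve, so adjust as needed:

REV=3a5e6ca198d8
curl -su "$LDAP_USER:$LDAP_PASS" "https://secure.pub.build.mozilla.org/buildapi/revision/try/$REV"                      # running jobs
curl -su "$LDAP_USER:$LDAP_PASS" "https://secure.pub.build.mozilla.org/buildapi/self-serve/try/rev/$REV?format=json"    # pending jobs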

BuildAPI-app is almost up!

I am very close to having the buildapi-app docker container working completely. I left off last time without selfserve-agent set up, and with a kombu error.

In order to set up selfserve-agent properly, I had to include a selfserve-agent.ini file in the base of the Docker image, to be used by selfserve-agent.py when it is called with python buildapi/buildapi/scripts/selfserve-agent.py -w. Additionally, I included a simple bash script to ensure that the container is able to launch both processes side by side without blocking one another.
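For reference, a minimal sketch of what that launch script can look like is below. The paster serve line is just my stand-in for however the web app is actually started (buildapi is a Pylons app), so treat it as a placeholder:

#!/bin/bash
# Run the agent in the background, then the web app in the foreground,
# so the container stays alive as long as buildapi is serving.
python buildapi/buildapi/scripts/selfserve-agent.py -w &
exec paster serve config.ini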

The kombu error I was hitting was because I did not have rabbitmq-app running. Kombu is used (as carrot was before it) to make the connection to the AMQP broker that RabbitMQ provides as the message queue. After getting rabbitmq-app up, it needed to be linked with buildapi-app, and once it was, it became clear that localhost was not the proper host for buildapi or selfserve-agent to use when looking for the broker. When docker links containers, it allocates the ports and IPs for them and makes these new connections available to you in the form of environment variables. Once I had the two apps up and linked by running:

docker run -d -p 5672:5672 -p 15672:15672 -p 4369:4369 -name rabbitmq rabbitmq-app

docker run -t -i -p 8888:8888 -link rabbitmq:mq -name buildapi buildapi-app /bin/bash     # bash so that I can play with the variables

Then I was able to run env and see the environment variables that docker set up:

HOSTNAME=ee13bea5d0db
TERM=xterm
MQ_PORT_4369_TCP_ADDR=172.17.0.2
MQ_PORT_5672_TCP=tcp://172.17.0.2:5672
MQ_PORT_5672_TCP_PORT=5672
MQ_PORT_5672_TCP_ADDR=172.17.0.2
MQ_PORT_15672_TCP_PORT=15672
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MQ_PORT_4369_TCP_PORT=4369
PWD=/
MQ_PORT_15672_TCP_ADDR=172.17.0.2
SHLVL=1
HOME=/
MQ_PORT=tcp://172.17.0.2:4369
MQ_PORT_15672_TCP=tcp://172.17.0.2:15672
MQ_PORT_4369_TCP=tcp://172.17.0.2:4369
MQ_PORT_4369_TCP_PROTO=tcp
MQ_PORT_5672_TCP_PROTO=tcp
MQ_NAME=/buildapi/mq
MQ_PORT_15672_TCP_PROTO=tcp
_=/usr/bin/env

As you can see, the proper host to look at is 172.17.0.2 instead of localhost. Luckily, since these are environment variables, we can just insert them into our configs by name rather than hard-coding them.
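One way to sketch that insertion (illustrative, not necessarily my exact mechanism) is to rewrite the broker address in the configs at container startup, using the variables docker injects for the linked mq container. The amqp://localhost form of the existing mq.kombu_url is an assumption here:

# Point both configs at the linked rabbitmq container instead of localhost
sed -i "s|amqp://localhost|amqp://$MQ_PORT_5672_TCP_ADDR:$MQ_PORT_5672_TCP_PORT|" config.ini selfserve-agent.ini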

After this step, I was still getting a kombu error, which was caused by not having proper login credentials for the AMQP broker. In order to fix this I had to add a userid and password to the config.ini and selfserve-agent.ini files in buildapi. However, buildapi/buildapi/lib/mq.py does not open the kombu connection with the userid and password parameters filled in, so I had to patch this file. I also opened a bug to handle this patch, or to have documentation generated for the proper procedure. The patch is simply:

@@ -21,16 +21,18 @@ import logging
 log = logging.getLogger(__name__)
 
 class ConfigMixin(object):
 
     def setup_config(self, config):
         self.heartbeat = int(config.get('mq.heartbeat_interval', '0'))
         conn = Connection(config['mq.kombu_url'],
                           heartbeat=self.heartbeat,
+                          userid=config['mq.userid'],
+                          password=config['mq.password'],
                           transport_options={'confirm_publish': True})
         self.connection = connections[conn].acquire(block=True)
         self.exchange = Exchange(config['mq.exchange'], type='topic', durable=True)
 
     def get_queue(self, queue_name, routing_key):
         return Queue(queue_name,
                      durable=True,
                      routing_key=routing_key,

Once all of this was fixed and set up, buildapi and selfserve-agent were able to connect to the AMQP broker perfectly fine!
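For reference, the config side of the fix amounts to adding the two keys that the patched mq.py reads. Something like the following goes into both config.ini and selfserve-agent.ini (guest/guest are placeholder credentials, not the real ones):

# Append the MQ credentials that mq.py now passes to kombu
cat >> selfserve-agent.ini <<'EOF'
mq.userid = guest
mq.password = guest
EOF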

Left to do on buildapi-app is to:

  • Test that buildapi and selfserve-agent are truly connected and able to exchange messages over AMQP (see the sketch after this list)
  • Set up the databases properly and load them with temporary data
  • Test the entire buildapi application by running the same procedures that should work in my local setup
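For the first item, one quick sanity check I can think of is to ask the RabbitMQ management API whether the expected queues exist and have consumers. This is a sketch: it assumes port 15672 is reachable from wherever you run it (see the port-forwarding discussion below) and uses placeholder credentials:

curl -su guest:guest http://127.0.0.1:15672/api/queues | python -m json.tool     # queue names and consumer counts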

Updates to this setup can again be found in my user repo http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup/

Docker containers up for RabbitMQ and BuildAPI

After spending some time on the buildapi-app docker container, I realized that my issues with kombu were likely due to selfserve-agent setup that I had not yet done. So before diving into that, I went ahead and started getting rabbitmq working. I referenced the following while working on rabbitmq-app and buildapi-app:

http://blog.daviddias.me/get-your-feet-wet-with-docker/
https://github.com/cloudezz/cloudezz-images/tree/master/cloudezz-rabbitmq

http://docs.docker.io/use/working_with_links_names/

I finally got the app to the point where rabbitmq was up and all the proper users were installed. However, I was not able to access the RabbitMQ management page, which should have been reachable at http://127.0.0.1:15672.
I began flipping through the boot2docker tutorial again and noticed that the final step to launching the app was to run the following sequence of commands:

$ boot2docker down # The VM must be stopped
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8888,,8888"
$ boot2docker up

When I initially did this tutorial, I noticed that the vboxmanage command seemed to be connecting host port 8888 and guest port 8888, but I didn't give it much more thought. Well, it turns out my initial understanding of using Boot2Docker was incorrect. Boot2docker simply runs a VM in vbox called boot2docker-vm, and within this vm is an environment with docker fully installed and working. This is important because I was under the impression that my docker containers were running in my own environment and not in a vm themselves. Because of this misunderstanding, I was puzzled as to why I needed to run this vboxmanage command to expose port 8888 on the guest and host, after I had already exposed port 8888 in the Dockerfile of the tutorial and launched the container with the command docker run -p 8888:8888.

Silly me. The true structure is that:

  1. the exposed port in the Dockerfile tells the docker container to make 8888 available for exposing, and
  2. the command docker run -p 8888:8888 connects port 8888 in the docker container with port 8888 IN THE BOOT2DOCKER-VM.

This seems obvious now, but I had apparently overlooked this simple concept. The container's port was connected just fine to the boot2docker-vm, but I couldn't see it from my own local OS because I hadn't forwarded the port from the boot2docker-vm to my local machine!
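To put the whole chain in one place, here is a sketch of all three layers using port 8888 (my-app stands in for whatever image you are running):

# Layer 1: in the Dockerfile, EXPOSE 8888 makes the port available
# Layer 2: connect container port 8888 to boot2docker-vm port 8888
docker run -p 8888:8888 my-app
# Layer 3: forward boot2docker-vm port 8888 to host port 8888
boot2docker down
vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,8888,,8888"
boot2docker up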

Once I discovered my mistake, I ran the command vboxmanage showvminfo boot2docker-vm and was greeted with a ton of info about the vm, including the following:

NIC 1: MAC: 0800279C9CFD, Attachment: NAT, Cable connected: on, Trace: off (file: none), Type: virtio, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 1 Settings:  MTU: 0, Socket (send: 64, receive: 64), TCP Window (send:64, receive: 64)
NIC 1 Rule(0):   name = docker, protocol = tcp, host ip = 127.0.0.1, host port = 5000, guest ip = , guest port = 4243
NIC 1 Rule(1):   name = http, protocol = tcp, host ip = 127.0.0.1, host port = 8888, guest ip = , guest port = 8888
NIC 1 Rule(2):   name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 2022, guest ip = , guest port = 22
NIC 2:           disabled
NIC 3:           disabled
NIC 4:           disabled
NIC 5:           disabled
NIC 6:           disabled
NIC 7:           disabled
NIC 8:           disabled

The line reading NIC 1 Rule(1) shows the result of setting host port 8888 and guest port 8888.

You cannot set more than one http rule for NIC 1 (perhaps you can enable NIC 2 and set the http rule there, but I didn't bother with it), so I deleted the previous NIC 1 http rule and added one for port 15672 so that I could check for the rabbitmq management portal at 127.0.0.1:15672:

$ boot2docker down
$ vboxmanage modifyvm boot2docker-vm --natpf1 delete http
$ vboxmanage modifyvm boot2docker-vm --natpf1 "http,tcp,127.0.0.1,15672,,15672"
$ boot2docker up

After a rebuild/launch of rabbitmq-app, I visited 127.0.0.1:15672 and was greeted with the management portal! Huzzah!
With that I was able to verify that the rabbitmq-app was finished!

From there I needed to ensure that the buildapi-app container could connect to the rabbitmq-app container, so I deleted the port 15672 NIC 1 http rule and added one for port 8888 again so that I could visit the buildapi page there. (Yes, I could have kept 15672.)

In order to get these containers communicating, I needed to specify a link between them, and I used these docs to help me figure that part out.
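The short version of linking (a sketch; the names mirror the commands shown in the newer post above) is to name the rabbitmq container and then link it into buildapi under an alias, after which docker injects MQ_* environment variables describing the connection:

docker run -d -name rabbitmq rabbitmq-app
docker run -link rabbitmq:mq buildapi-app env | grep MQ_     # show the injected connection variables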

I was able to confirm that buildapi is up and running and that its pages can be visited in self-serve. There is no db info, so no revisions show up, but that is expected since I am not yet feeding in anything to fill those dbs with data. Left to do on the buildapi-app container is to get kombu and selfserve-agent running.

I have uploaded the current working Dockerfiles for these apps to http://hg.mozilla.org/users/jozeller_mozilla.com/vagrant-docker-setup/

Bash commands that were useful today:

for f in $(docker ps -a -q); do docker rm -f $f; done; # Remove all docker containers
for f in $(docker images -q); do docker rmi -f $f; done; # Remove all docker images