Ansible Tower POC

As promised in a previous post, I have taken a look at Ansible Tower as the front end to the Ansible engine. Having demoed it at a recent work presentation for network automation and POC purposes, I thought I would document it here for posterity. I reused a lab I had previously built with virtual Dell OS10 devices, the details of which you can refer to here:

Tower installs via a simple script, the details of which I won't get into here. Suffice it to say, it took a few attempts and some dependencies to be installed, but after 4-5 tries it finally went in. After the install you are presented with the login screen:


After a successful login with the default username of admin, you are presented with the dashboard, which gives you a single-pane-of-glass view of your job status as a graph, recently used job templates (which are basically a wrapper for your playbooks) and recently run jobs with their completion status:


As a front end to the Ansible engine, I found Tower easy to navigate and not cluttered or busy with too many knobs and buttons, which can be the case with many enterprise-grade configuration management tools. The interface was very simple.

Everything in Tower starts off life as a project, which is the housing for your playbooks (the meat and potatoes of your automation scripts). The section to note on this page is the method by which you "pull" or "source" your playbooks. This can be from a number of sources, as displayed in the graphic below:


We have several options from which to source our playbooks, including Git, Subversion, Red Hat Insights and the one I chose: manual, as I had the playbooks local to my Ansible control server.
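With a manual project, Tower looks for playbook directories under its local project base path on the Tower host (/var/lib/awx/projects by default). A playbook along the lines of the show_connection_status.yml used later in this post might look something like this — a sketch only, as the group name, hostnames and exact module usage here are illustrative assumptions, not taken from the lab:

```yaml
---
# Hypothetical playbook sketch for a manually sourced project.
# Assumes a "datacentre" group in inventory and the dellos10_command
# module available on the control node.
- name: Gather connection status from Dell OS10 devices
  hosts: datacentre
  connection: network_cli
  gather_facts: no

  tasks:
    - name: Run a show command on each device
      dellos10_command:
        commands:
          - show lldp neighbors
      register: lldp_output

    - name: Display the output
      debug:
        var: lldp_output.stdout_lines
```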

The next tab over is the Inventory tab, which points Tower at your source for device identification. Again, I went static on this and just pointed it at my hosts file in the root of my playbook directory. You can, through the Tower API, access dynamic inventory from IPAM systems and other third-party tools.
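My static hosts file was nothing fancy. In Ansible's YAML inventory format, a file of roughly this shape would do the job (the group names, hostnames and addresses below are illustrative, not the exact ones from the lab):

```yaml
---
# Hypothetical static inventory with spine and leaf groups.
all:
  children:
    spines:
      hosts:
        spine1:
          ansible_host: 192.168.10.1
        spine2:
          ansible_host: 192.168.10.2
    leafs:
      hosts:
        leaf1:
          ansible_host: 192.168.10.11
        leaf2:
          ansible_host: 192.168.10.12
        leaf3:
          ansible_host: 192.168.10.13
```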


After this tab, we have the templates page, which is where most of the construct is built and from which the jobs we want to run are sourced.


We get an overview of the templates in the system and can click into each one for more details pertaining to that job template:


In here we get information regarding the name of the job template, the inventory it is sourcing for targeting devices, the CLI credentials required to gain access to the device, the project source mentioned previously, and the playbook (in this case show_connection_status.yml). There are also some other settings that offer optimizations in how the tool handles processing the job, such as fact caching, which Ansible defines as follows:

“Tower can store and retrieve facts on a per-host basis through an Ansible Fact Cache plugin. This behavior is configurable on a per-job template basis. Fact caching is turned off by default but can be enabled to serve fact requests for all hosts in an inventory related to the job running. This allows you to use job templates with --limit while still having access to the entire inventory of host facts”

As you can see, I have needle-nosed my blast radius for the job by using a limit to target only 2 spines and 3 leafs. As stated above, fact caching enhances the limit functionality, allowing you to use the limit option while still having access to the entire inventory of host facts. One last interesting point to note on this tab is the "job type" listed in the top right-hand corner, which classifies the job as either a run job or a check job:
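The limit itself is just an Ansible host pattern string, and fact caching is what lets a limited run still see facts gathered for hosts outside the limit. A sketch of both ideas (the host names and fact variable here are assumptions for illustration):

```yaml
# Hypothetical limit pattern as entered in the job template:
#   spine1:spine2:leaf1:leaf2:leaf3
#
# With fact caching enabled, a task in the limited run can still
# read facts previously cached for a host outside the limit:
- name: Reference a cached fact from an out-of-limit host
  debug:
    msg: "spine3 version: {{ hostvars['spine3']['ansible_net_version'] | default('no cached fact') }}"
```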


This, in effect, allows jobs to be set up in read-only mode for those NOC teams that you require to run checks on the network without necessarily allowing them to make changes.
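A check job is essentially the Tower equivalent of running ansible-playbook with --check: tasks report what they would change without changing anything. Individual tasks can also opt out of check mode. A sketch, assuming the Dell OS10 config and command modules (the config line is illustrative):

```yaml
# Hypothetical tasks illustrating check-mode behaviour.
- name: This change is only previewed when run as a check job
  dellos10_config:
    lines:
      - ip name-server 10.0.0.53

- name: Always run for real, even in a check job
  dellos10_command:
    commands:
      - show running-configuration
  check_mode: no
```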

Finally, we have the jobs tab, which gives a broad overview of the success or failure of any job runs we have rolled out in the environment:


We can run a job by clicking on the rocket icon to the right of the job:


We can then access the job by clicking on the job itself from this view:


Tower gives a nice display screen from which we can easily see details of the job/playbook run. To the left we get a sidebar status of the job, including whether the job ran successfully, the start and end date, the template used, the user who launched the job, the inventory, the playbook name, the credential details used for accessing the device, forks (the number of parallel processes Ansible uses for device access), limits, tags and any extra variables we may want to include at run time. To the right we get a nice console output of the playbook details, including task names, timestamps, output from included playbook directives, and a completion summary/recap when the job ends.
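Those extra variables are supplied at launch time as YAML (or JSON) and override matching variables defined in the playbook. Something like the following would be typical (the variable names below are purely illustrative, not from the lab):

```yaml
---
# Hypothetical extra variables passed at job launch time.
target_vlan: 100
interface_list:
  - ethernet1/1/1
  - ethernet1/1/2
```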


In summary, I have found Ansible Tower a very compelling addition as a workflow tool on top of the Ansible engine, and I would highly recommend it as a tool that could be leveraged by enterprises thinking of using Ansible in production to manage their networking and server environments. As it is licensed on a per-node basis, the larger the network, the more expensive it could become to manage with Tower. Having said that, the costs can be offset by the benefit of reduced TCO and fewer hours spent on manual configuration and troubleshooting in those environments where avoiding downtime is critical. It is a tool that sits nicely in the world of DevOps and NetDevOps.
