Network Automation talk by Andrius Benokraitis

Check out this great talk given by Andrius Benokraitis from the Ansible team at Red Hat on how to leverage Ansible and start small on the journey to automating networking in the enterprise space, without having to boil the ocean. Andrius makes the case that network engineers don’t need to reinvent themselves or become programmers overnight: most of the work is already done through Ansible modules, enabling network operations teams to leverage the simplicity and elegance of Ansible to automate their networks with playbooks and Ansible Tower.

Dell OS6 Campus Switches with Ansible

Following on from my last post on configuring a Spine/Leaf setup with Dell OS10 and Ansible, I continue here in a similar vein, this time with Dell OS6 campus switches, looking at how to leverage Ansible for configuration deployment and information gathering.

My setup for this example is very simple: the Ansible control server and 2 Dell N3048 series switches as my test devices.
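A minimal inventory for a setup like this might look as follows. The group name, hostnames and management IPs here are placeholders of my own, not the exact values from my lab:

```yaml
# inventory.yaml -- sketch only; group name, hostnames and IPs are placeholders
all:
  children:
    campus:
      hosts:
        os6-sw1:
          ansible_host: 192.168.1.10
        os6-sw2:
          ansible_host: 192.168.1.11
      vars:
        ansible_network_os: dellos6
        ansible_connection: network_cli
```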

In the first example I want to do 3 things: first, get a list of each device’s interfaces and their status; secondly, get the MAC address table from each device to see what’s connected and on which interface; and thirdly, take a backup/snapshot of the current config and store it in a directory on my control server for later reference.

The playbook for this will look something like this:
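A sketch of those three tasks, using the stock dellos6_command module, could look like this. The task names, host group and backup path are my own placeholders, not the exact playbook:

```yaml
---
# get_mac_intf.yml -- sketch; host group and backup path are placeholders
- name: Gather interface status, MAC tables and a config backup
  hosts: campus
  connection: network_cli
  gather_facts: no
  tasks:
    - name: Get interface status
      dellos6_command:
        commands: show interfaces status
      register: intf_status

    - name: Display interface status
      debug:
        var: intf_status.stdout_lines

    - name: Get MAC address table
      dellos6_command:
        commands: show mac address-table
      register: mac_table

    - name: Display MAC address table
      debug:
        var: mac_table.stdout_lines

    - name: Get running configuration
      dellos6_command:
        commands: show running-config
      register: running_cfg

    - name: Back up running configuration to the control server
      copy:
        content: "{{ running_cfg.stdout[0] }}"
        dest: "./configs/{{ inventory_hostname }}.cfg"
```

The final task writes each switch’s config to a per-host file under a local configs directory, which is why the backup shows up as a copy operation in the playbook output.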


I run my playbook with the command below:

ansible-playbook -i inventory.yaml get_mac_intf.yml

In the snippet below you can see it showing the interface status. This is the first task in the playbook:


Secondly it will run the task to get the mac address tables:


Thirdly we get a verbose output of the running configuration on the switch to the screen and then finally we back up that configuration to a local directory on our control system:


In this case the task for the backup is listed as a “COPY” operation:


We can verify our running-config is in place by referencing our local configs directory:


In the next example, I want to get a snapshot of what firmware version I am running on my Dell OS6 switches. I can do this quite easily with the playbook below:
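A sketch of what such a playbook might contain, again leaning on dellos6_command (the filename and task names are my own placeholders):

```yaml
---
# get_version_os6.yml -- hypothetical filename; gathers firmware versions
- name: Gather firmware versions from Dell OS6 switches
  hosts: campus
  connection: network_cli
  gather_facts: no
  tasks:
    - name: Run show version
      dellos6_command:
        commands: show version
      register: version_out

    - name: Display the version output
      debug:
        var: version_out.stdout_lines
```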


When I run the playbook it gives me the information in the snippet below, taken from the output of the “show version” command. This could be really handy if I need to get this not just from 2 switches, but from hundreds of switches that could exist in my production campus network:


Lastly, as I did in my last post for OS10, I may want to apply a standard configuration to my campus switches. I can do this with another playbook, “stnd_config_os6.yaml”.

This file looks like this:


In it I want to do 3 things, apply an ACL inline to the switches, apply standard campus VLANs and set SNMP details pertaining to my particular environment.
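A sketch of those three tasks using the dellos6_config module might look like the following. The ACL entries, VLAN IDs and SNMP values are illustrative placeholders, not my actual environment’s standards:

```yaml
---
# stnd_config_os6.yaml -- sketch; ACL, VLAN and SNMP values are placeholders
- name: Apply standard campus configuration
  hosts: campus
  connection: network_cli
  gather_facts: no
  tasks:
    - name: Apply inline ACL
      dellos6_config:
        parents: ip access-list STD-CAMPUS
        lines:
          - permit ip 10.0.0.0 0.255.255.255 any

    - name: Apply standard campus VLANs
      dellos6_config:
        parents: vlan 100
        lines:
          - name USERS

    - name: Set SNMP details
      dellos6_config:
        lines:
          - snmp-server community campus-ro ro
          - snmp-server location Campus-Block-A
```

Because dellos6_config only pushes lines that are missing from the running configuration, re-running this playbook against an already-compliant switch results in no changes.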

I run the playbook with the following command:

ansible-playbook -i inventory.yaml stnd_config_os6.yaml

I can then check my result by logging onto the switch:



In this particular case I only applied the configuration to switch 1. To verify this, I can run the playbook again targeting both switches, confirming that switch 1 has the standard config applied but switch 2 does not. I achieve this by running the playbook in check mode (the --check flag). This validates my config and also shows a key attribute of Ansible, idempotence: the ability of the tool to detect when a change is required on a device and when it is not. This is how we avoid configuration drift and reach desired state in our network. In the output below we can see that switch 2 is out of desired state, as indicated by the orange status for all 3 tasks in my playbook, and will need remediation.


All the examples in this post were produced using the standard out-of-the-box Ansible modules for Dell Networking, namely the dellos6_command and dellos6_config modules, highlighting once again how easily Ansible can be leveraged to automate your network.

In my next post I will look at how we can incorporate version control and workflow tools such as Git and Jenkins to keep our configurations consistent and in line with the DevOps approach to configuration management…

Dell OS10 Spine/Leaf BMP and Deployment with Ansible

It’s been a while since my last article, but I just wanted to put up a post about a work project I have been working on that demonstrates how to deploy and provision a spine/leaf switch setup using Dell OS10 devices. For the purposes of this lab demo, I used a virtual edition of the Dell OS10 operating system, which can be run inside VMware ESX. This is the same operating system that runs on Dell’s hardware data center switches such as the Z9100 or S6100 series devices. A screenshot of my vSphere setup is below:


The topology consists of 2 spines and 4 leafs. I pre-stage the VMs in DHCP on my Ansible control server so that when they boot for the first time they acquire an IP address, pull down their OS from the server over HTTP and bootstrap themselves onto the network.
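The pre-staging itself is just ordinary DHCP host reservations on the control server. An illustrative ISC dhcpd fragment is below; the subnet, MAC and IP addresses are made up, and the DHCP options that hand the OS10 installer URL to the switch are omitted:

```conf
# /etc/dhcp/dhcpd.conf fragment -- hypothetical addresses for illustration
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.200;
}

host spine1 {
  hardware ethernet 00:50:56:aa:bb:01;
  fixed-address 192.168.10.11;
}
```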


This is what the topology will look like on completion:


After the switch boots up with its OS installed, we can then apply a base configuration to it, such as management IP, SNMP, NTP etc. The variables for this are referenced in the all.yaml file under the group_vars directory.

Please reference my GitHub repo for more details.

The base config is referenced by a Jinja2 template “base.j2”, iterating over variables from the all.yaml file and inputting those elements into the base configuration applied to the switches:


When we cat the Jinja2 file, it looks like this:
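A sketch of what a template in this style might contain is below. The variable names are my assumptions for illustration, not the exact template:

```jinja
! base.j2 -- sketch; variable names are assumptions
hostname {{ hostname }}
!
interface mgmt 1/1/1
 ip address {{ mgmt_ip }}
 no shutdown
!
snmp-server community {{ snmp_community }} ro
{% for server in ntp_servers %}
ntp server {{ server }}
{% endfor %}
```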


As stated earlier, it references the all.yaml file in the group_vars directory. I put it there as this base config applies to all data centre switches. The all.yaml file is below:
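A group_vars file of this shape might look like the following; the values are illustrative only, and as noted below, real credentials belong in Ansible Vault:

```yaml
# group_vars/all.yaml -- illustrative values only; keep real
# credentials in Ansible Vault, not plain text
ansible_user: admin
ansible_password: admin
snmp_community: public
ntp_servers:
  - 10.0.0.1
  - 10.0.0.2
```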


The CLI credentials are also listed here. This is just for lab demonstration purposes; in a real production deployment you would hide these login credentials away in Ansible Vault or use SSH keys.

After we have the base config applied, we can then move on to building out our data centre fabric with the datacenter.yaml playbook file. This file references 3 roles within our Ansible playbook directories: an interfaces role, a routing role for BGP, and a further configuration addition for SNMP. Interestingly, these roles come pre-baked through Ansible Galaxy. We don’t have to do any heavy lifting building the YAML or Jinja code for these items, as they are already built for us. The only items we need to change are the device-specific variables that are unique to our environment, such as IP addresses or VLAN numbers.
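A play pulling in three such roles could be as short as this. The role names here follow the Dell-Networking naming convention on Ansible Galaxy, though the exact names should be checked against Galaxy before use:

```yaml
---
# datacenter.yaml -- sketch; verify role names against Ansible Galaxy
- name: Build the spine/leaf fabric
  hosts: datacenter
  connection: network_cli
  gather_facts: no
  roles:
    - Dell-Networking.dellos-interface
    - Dell-Networking.dellos-bgp
    - Dell-Networking.dellos-snmp
```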


Here we see, inside the role for BGP, the YAML data model used to populate the configuration parameters for our deployment:
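To give a flavour of the shape of such a data model, a hypothetical per-host variables file for a spine might look like this. The keys and values are illustrative, not the exact schema the role consumes:

```yaml
# host_vars/spine1.yaml -- hypothetical sketch of a BGP data model;
# keys and values are illustrative, not the role's exact schema
os10_bgp:
  asn: 65001
  router_id: 10.0.0.1
  neighbor:
    - ip: 172.16.1.2
      remote_asn: 65101
      admin: up
    - ip: 172.16.1.6
      remote_asn: 65102
      admin: up
```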


This shows the tree view of the role directory:


Our playbook directory now looks like this:


We are ready to run the datacenter.yaml playbook and deploy the spine/leaf network. After the playbook has run, we can check our configuration and BGP peers and validate that everything is working as planned:


If we want to validate that our network never deviates from our standards, we can run our playbook in check mode to make sure we don’t get configuration drift or errors in our deployment. You could call this continuous network validation testing:


As we have all green in the playbook output, we can rest assured we have a good configuration applied to the devices.

It is worth mentioning again that Ansible offers a rich platform from which to apply consistent network configuration management across devices from many different vendors. As seen in this post, Dell networking devices are no exception: there is a wide array of prebuilt, ready-to-go roles in Ansible Galaxy that can be leveraged to get your Dell datacenter up and running in minutes:


In my next post I will demonstrate applying some standard config and gathering some basic facts from Dell OS6 campus switches. Stay tuned…