Network Automation talk by Andrius Benokraitis

Check out this great talk by Andrius Benokraitis from the Ansible team at Red Hat on how to leverage Ansible and start small when automating networking in the enterprise space, without having to boil the ocean. Andrius makes the case that network engineers don’t need to reinvent themselves or become programmers overnight, as most of the work is already done through Ansible modules, enabling network operations teams to leverage the simplicity and elegance of Ansible to automate their networks with playbooks and Ansible Tower.

Dell OS6 Campus Switches with Ansible

Following my last post on configuring a Spine/Leaf setup with Dell OS10 and Ansible, I continue here in a similar vein, this time with Dell OS6 campus switches, showing how to leverage Ansible for configuration deployment and information gathering.

My setup for this example is very simple: the Ansible control server and two Dell N3048 series switches as my test devices.

In the first example I want to do three things: first, get a list of each device’s interfaces and their status; second, get the MAC address tables from each device to see what’s connected and on which interface; and third, take a backup/snapshot of the current config to a directory on my control server for later reference.

The playbook for this will look something like this:
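A playbook along these lines covers all three tasks. This is a hedged sketch assuming the pre-collections dellos6_command module; the inventory group name (dellos6) and backup path (./configs) are my own placeholders, not necessarily those used in the post:

```yaml
---
# Sketch only: group name, paths and task layout are illustrative.
- name: Gather interface status, MAC tables and config backup (Dell OS6)
  hosts: dellos6
  connection: network_cli
  gather_facts: no

  tasks:
    - name: Get interface status
      dellos6_command:
        commands:
          - show interfaces status
      register: intf_status

    - name: Display interface status
      debug:
        var: intf_status.stdout_lines

    - name: Get MAC address table
      dellos6_command:
        commands:
          - show mac address-table
      register: mac_table

    - name: Display MAC address table
      debug:
        var: mac_table.stdout_lines

    - name: Get running configuration
      dellos6_command:
        commands:
          - show running-config
      register: run_cfg

    - name: Back up running configuration to the control server
      copy:
        content: "{{ run_cfg.stdout[0] }}"
        dest: "./configs/{{ inventory_hostname }}.cfg"
      delegate_to: localhost
```

Note the backup step delegates the copy to localhost so the file lands on the control server rather than the switch.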


I run my playbook with the command below:

ansible-playbook -i inventory.yaml get_mac_intf.yml

The snippet below shows the interface status; this is the first task in the playbook:


Next it runs the task to get the MAC address tables:


Finally, we print a verbose output of the switch’s running configuration to the screen, and then back that configuration up to a local directory on our control system:


In this case the task for the backup is listed as a “COPY” operation.


We can verify our running-config is in place by checking our local configs directory:


In the next example, I want to get a snapshot of what firmware version I am running on my Dell OS6 switches. I can do this quite easily with the playbook below:
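A sketch of such a playbook, again assuming the dellos6_command module and an illustrative dellos6 inventory group of my own naming:

```yaml
---
# Sketch only: a single task that captures "show version" per switch.
- name: Gather firmware versions from Dell OS6 switches
  hosts: dellos6
  connection: network_cli
  gather_facts: no

  tasks:
    - name: Run show version
      dellos6_command:
        commands:
          - show version
      register: version_output

    - name: Display firmware version
      debug:
        var: version_output.stdout_lines
```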


When I run the playbook it gives me the information in the snippet below, from the output of the “show version” command. This could be really handy if I need to get this not just from 2 switches, but from the hundreds of switches that could exist in my production campus network:


Lastly, as I did in my last post for OS10, I may want to apply a standard configuration to my campus switches. I can do this with another playbook, “stnd_config_os6.yaml”.

This file looks like this:


In it I want to do three things: apply an ACL to the switches, apply standard campus VLANs, and set SNMP details pertaining to my particular environment.
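A hedged sketch of how those three tasks could be laid out with dellos6_config; the ACL entries, VLAN IDs and SNMP strings below are placeholders, not the values used in the post:

```yaml
---
# Sketch of a stnd_config_os6.yaml: all values are placeholders.
- name: Apply standard campus configuration to Dell OS6 switches
  hosts: dellos6
  connection: network_cli
  gather_facts: no

  tasks:
    - name: Apply management ACL
      dellos6_config:
        parents: ip access-list MGMT-IN
        lines:
          - permit ip 10.0.0.0 0.0.0.255 any

    - name: Apply standard campus VLANs
      dellos6_config:
        lines:
          - vlan 10
          - vlan 20
          - vlan 30

    - name: Set SNMP details
      dellos6_config:
        lines:
          - snmp-server community campus-ro ro
          - snmp-server location "Campus-Block-A"
```

Because dellos6_config only pushes lines that are missing from the running config, re-running this playbook against an already compliant switch reports no changes.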

I run the playbook with the following command:

ansible-playbook -i inventory.yaml stnd_config_os6.yaml

I can then check my result by logging onto the switch:



In this particular case I only applied the configuration to switch 1. To verify this I can run the playbook again, this time targeting both switches, to confirm that switch 1 has the standard config applied but switch 2 does not. I achieve this by running the playbook in check mode (the --check flag). This validates my config and also demonstrates a key attribute of Ansible, idempotence: the ability of the tool to detect when a change is required on a device and when it is not. This is how we avoid configuration drift and reach desired state in our network. In the output below we can see that switch 2 is out of desired state, as indicated by the orange status for all 3 tasks in my playbook, and will need remediation.


All the examples in this post were produced using the standard out-of-the-box Ansible modules for Dell Networking, namely the dellos6_command and dellos6_config modules, highlighting once again how easily Ansible can be leveraged to automate your network.

In my next post I will look at how we can incorporate version control and workflow tools such as Git and Jenkins to keep our configurations consistent and in line with the DevOps approach to configuration management…

Dell OS10 Spine/Leaf BMP and Deployment with Ansible

It’s been a while since my last article, but I wanted to put up a post about a project I have been working on that demonstrates how to deploy and provision a spine/leaf switch setup using Dell OS10 devices. For the purposes of this lab demo, I used a virtual edition of the Dell OS10 operating system, which can be run inside VMware ESXi. This is the same operating system that runs on Dell’s hardware data centre switches such as the Z9100 or S6100 series devices. A screenshot of my vSphere setup is below:


The topology consists of 2 spines and 4 leafs. I pre-stage the VMs in DHCP on my Ansible control server so that on first boot they acquire an IP address, pull down their OS from the server over HTTP, and bootstrap themselves onto the network.


This is what the topology will look like on completion:


After the switch boots up with its OS installed, we can then apply a base configuration to the switch, such as the management IP, SNMP, and NTP. The variables for this are referenced in the all.yaml file under the group_vars directory.

Please reference my GitHub repo for more details.

The base config is generated from a Jinja2 template, “base.j2”, which iterates over variables from the all.yaml file and inserts those elements into the base configuration applied to the switches:


When we cat the Jinja2 file, it looks like this:
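A base.j2 along these lines would do the job; the variable names (mgmt_ip, ntp_server, snmp_community) and the exact CLI syntax are my own illustrative choices, not necessarily those in the repo:

```jinja2
{# Illustrative base.j2 sketch: variable names and CLI syntax are placeholders #}
hostname {{ inventory_hostname }}
!
interface mgmt1/1/1
 no ip address dhcp
 ip address {{ mgmt_ip }}
 no shutdown
!
ntp server {{ ntp_server }}
!
snmp-server community {{ snmp_community }} ro
```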


As stated earlier it references the all.yaml file in the group_vars directory. I put it here as this base config applies to all data centre switches. The all.yaml file is below:
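An all.yaml along these lines supplies the values the base template consumes, plus the CLI credentials; every key and value here is a placeholder of mine:

```yaml
---
# Illustrative group_vars/all.yaml: keys and values are placeholders.
ansible_user: admin
ansible_password: admin
ansible_network_os: dellos10
ntp_server: 10.0.0.1
snmp_community: lab-ro
```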


The CLI credentials are also listed here. This is just for lab demonstration purposes; in a real production deployment you would hide these login credentials away in Ansible Vault or use SSH keys.

After the base config is applied, we can move on to building out our data centre fabric with the datacenter.yaml playbook file. This file references 3 roles within our Ansible playbook directories: an interfaces role, a routing role for BGP, and a further configuration addition for SNMP. Interestingly, these roles come pre-baked through Ansible Galaxy. We don’t have to do any heavy lifting in building the YAML or Jinja code for these items as they are already built for us. The only items we need to change are the device-specific variables unique to our environment, such as IP addresses or VLAN numbers.
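The playbook itself can stay very short, since the roles do the work. The role names in this sketch follow the Dell-Networking naming convention on Ansible Galaxy, but verify the exact names on Galaxy before using them; the datacenter group name is mine:

```yaml
---
# Sketch of datacenter.yaml: role names should be verified on Ansible Galaxy.
- name: Build the data centre fabric
  hosts: datacenter
  connection: network_cli
  gather_facts: no
  roles:
    - Dell-Networking.dellos-interface
    - Dell-Networking.dellos-bgp
    - Dell-Networking.dellos-snmp
```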


Here we see, inside the BGP role, the YAML data model used to populate the configuration parameters for our deployment:
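The data model is shaped roughly like this hedged sketch; the exact schema is defined by the Galaxy role, so consult the role’s README rather than taking these keys as authoritative:

```yaml
---
# Illustrative BGP data model for the Galaxy role: keys and values are
# placeholders; the role's README documents the real schema.
dellos_bgp:
  asn: 65001
  router_id: 10.0.0.1
  neighbor:
    - ip: 10.1.1.2
      remote_asn: 65002
      admin: up
```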


This view shows the tree view of the role directory:


Our playbook directory now looks like this:


We are ready to run the datacenter.yaml playbook and deploy the spine leaf network. After the playbook is run we can check our configuration and BGP peers and validate everything is working as planned:


To validate that our network never deviates from our standards, we can run our playbook in check mode to make sure we don’t get configuration drift or errors in our deployment. You could call this continuous network validation testing:


As we have all green in the playbook output, we can rest assured we have a good configuration applied to the devices.

Update 08/06/2018:

We can also check some telemetry information regarding the Interface status and attached nodes with the show_connection_status playbook I added:



The first task sends a simple show interface status to the switch to get the status of the up interfaces. The second task sends a show ip arp to see which hosts are attached, their respective IP and MAC addresses, and which interfaces they are connected to, whether they be upstream spines or connected servers:
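Put together, the playbook could look something like this sketch, assuming the dellos10_command module and a datacenter inventory group of my own naming:

```yaml
---
# Sketch of a show_connection_status playbook: group name is illustrative.
- name: Check interface status and attached nodes
  hosts: datacenter
  connection: network_cli
  gather_facts: no

  tasks:
    - name: Get status of up interfaces
      dellos10_command:
        commands:
          - show interface status
      register: intf_status

    - name: Display interface status
      debug:
        var: intf_status.stdout_lines

    - name: Get attached hosts via ARP
      dellos10_command:
        commands:
          - show ip arp
      register: arp_table

    - name: Display ARP table
      debug:
        var: arp_table.stdout_lines
```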



It is worth mentioning again that Ansible offers a rich platform for applying consistent network configuration management to devices from many different vendors. As this post shows, Dell networking devices are no exception: as seen below, there is a wide array of prebuilt, ready-to-go roles in Ansible Galaxy that can be leveraged to get your Dell datacenter up and running in minutes:


In my next post I will demonstrate applying some standard config and gathering some basic facts from Dell OS6 campus switches. Stay tuned…

Netconf and YANG on IOS-XE

I recently read an article by Michael Kashin over at Networkop detailing how to use NETCONF commands with IOS-XE devices. Check out his article:

It is very interesting that Cisco is finally seeing the sense in building vendor-agnostic means to program your switches and routers, especially in the enterprise space; Juniper has had much better support for this for ages now. Of course, the next evolution beyond NETCONF is OpenConfig, which is being driven by the ISP community as they strive to get away from vendor-specific data models for network programmability.

Since all this fun stuff can be done with the CSR 1000v running IOS-XE 16.x, I thought I would build a lab outlining the mechanics and play around with the data models.

Coming soon…


Welcome to this, my first post on my new blogging site for all things network automation related!

In this first installment I am deploying one of my favourite technologies: L3 MPLS VPN.

The goal of this first article is to outline how to set up an L3 MPLS VPN lab from scratch for testing purposes, without having to touch the CLI of a single router (except for activating SSH and a management IP, which Ansible requires to work 🙂).

In the screenshot below is an outline of the network setup.


In the screenshot below is an outline of my playbooks involved in the setup:


You can visit my GitHub repo to check out the details of the various components involved and clone it to set this up for yourself.

The lab starts off with a shell script, which is just a wrapper that spins up the 3 playbooks involved. The first playbook builds the MPLS core, followed by the customer routers, and the run finishes by checking end-to-end connectivity between the PE loopbacks to test the core, followed by an end-to-end test between the ACME CEs to verify BGP connectivity is working. This is by no means an exhaustive test and can be modified to include other testing if required. Feel free to add any suggestions and comments; I’m a novice at this, so any and all opinions are welcome. Let’s get the community going on automation!

In the screenshot below you can see the provisioning script that starts the deployment:


Between each phase of the deployment I allow a resting period of 60 seconds so the core can build its adjacencies and LDP and BGP have time to settle before the customer routers come online.
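As an aside, the same sequencing, including the 60-second rest periods, could also be expressed as a single master playbook instead of a shell wrapper. This sketch uses the playbook filenames from this post with Ansible’s import_playbook and pause:

```yaml
---
# Alternative to the shell wrapper: one master playbook with built-in pauses.
- import_playbook: deploy_pe.yml

- name: Let the core build its adjacencies (LDP/BGP)
  hosts: localhost
  gather_facts: no
  tasks:
    - pause:
        seconds: 60

- import_playbook: deploy_ce.yml

- name: Let the CE BGP sessions establish
  hosts: localhost
  gather_facts: no
  tasks:
    - pause:
        seconds: 60

- import_playbook: connection_check.yml
```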

I would like to extend many thanks to Bernd Malmqvist for his fantastic articles, and would highly recommend you check out his many great posts on using Ansible for everything networking-related. I borrowed heavily from his Cisco provisioning lab to build this MPLS lab and, in the spirit of true DevOps, reused his script above to bootstrap my own playbooks. Thanks Bernd!

When the script starts we get the visual output of the first playbook, deploy_pe.yml, which builds the VPN core:


After 60 seconds the CE deployment kicks off with the deploy_ce.yml playbook:


As an example, below is one of the template files I used to build the PE routing configuration for IPv4, VPNv4, and BGP:
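A fragment in the same spirit might look like this; the variable names (bgp.asn, loopback0, neighbors) are my own illustrative choices rather than the repo’s actual keys:

```jinja2
{# Illustrative PE BGP template fragment: variable names are placeholders #}
router bgp {{ bgp.asn }}
 bgp router-id {{ loopback0 }}
{% for n in bgp.neighbors %}
 neighbor {{ n.ip }} remote-as {{ n.remote_as }}
 neighbor {{ n.ip }} update-source Loopback0
{% endfor %}
 !
 address-family vpnv4
{% for n in bgp.neighbors %}
  neighbor {{ n.ip }} activate
  neighbor {{ n.ip }} send-community extended
{% endfor %}
```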


This Jinja2 template references a .yml data file under the host_vars directory, which Ansible uses to populate the configuration:
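A host_vars data file for a PE router might be structured along these lines; the names and addresses are placeholders of mine:

```yaml
---
# Illustrative host_vars file for a PE router: values are placeholders.
loopback0: 10.255.0.1
bgp:
  asn: 65000
  neighbors:
    - ip: 10.255.0.2
      remote_as: 65000
    - ip: 10.255.0.3
      remote_as: 65000
```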


When the deployment of PE and CE routers completes we finish with the connection_check.yml playbook below:


This finishes up the roll out and verifies we have PE-PE and CE-CE connectivity.

On subsequent runs of the playbooks we can see the idempotent power of Ansible and how it contributes to attaining a desired-state configuration standard in the network: all devices now show as green and no further changes are required, as seen below for the CE playbook rerun:


Well, that about covers this 10,000-foot view in my first automation article. I hope you enjoyed it; do check out my Git repo, and keep an eye out for further articles and guides through the world of network and systems automation. I look forward to hearing your feedback and learning more on this exciting and curious journey of DevOps and automation. Please leave comments below.

I will soon follow up this lab with a short video on YouTube outlining the detailed steps…