Dell OS10 Spine/Leaf BMP and Deployment with Ansible

It’s been a while since my last article, but I wanted to put up a post about a work project that demonstrates how to deploy and provision a spine-leaf switch topology using Dell OS10 devices. For this lab demo I used the virtual edition of the Dell OS10 operating system, which can be run inside VMware ESXi. This is the same operating system that runs on Dell’s hardware data centre switches, such as the Z9100 and S6100 series devices. A screenshot of my vSphere setup is below:

[Screenshot: vSphere setup]

The topology consists of two spines and four leaf switches. I pre-stage the VMs in DHCP on my Ansible control server so that when they boot for the first time they acquire an IP address, pull down their OS from the server over HTTP, and bootstrap themselves onto the network.
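As a rough illustration, the DHCP reservations on the control server could look something like the ISC dhcpd snippet below. The subnet, MAC addresses, and the bootfile path are placeholders for this lab, and the exact DHCP options OS10 expects for bare metal provisioning should be checked against Dell’s documentation:

```
# /etc/dhcp/dhcpd.conf -- illustrative snippet, not the actual lab config
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.200;
  option routers 192.168.10.1;
}

host spine1 {
  hardware ethernet 00:50:56:aa:bb:01;   # VM NIC MAC (placeholder)
  fixed-address 192.168.10.11;
  # Point the switch at the OS image on the Ansible control server's
  # HTTP service (URL path is an assumption):
  option bootfile-name "http://192.168.10.1/os10/os10-installer.bin";
}
```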

[Screenshot: DHCP clients]

This is what the topology will look like on completion:

[Diagram: spine-leaf topology]

After the switch boots up with its OS installed, we can apply a base configuration to it, covering items such as the management IP, SNMP, and NTP. The variables for this are referenced in the all.yaml file under the group_vars directory.

Please reference my GitHub repo for more details:

https://github.com/gitmurph/dell-os10-lab

The base config is rendered from a Jinja2 template, “base.j2”, which iterates over variables from the all.yaml file and inserts those values into the base configuration applied to the switches:

[Screenshot: base config playbook]

When we cat the Jinja2 file, it looks like this:

[Screenshot: base.j2 Jinja2 template]

As stated earlier, it references the all.yaml file in the group_vars directory. I put it there because this base config applies to all data centre switches. The all.yaml file is below:

[Screenshot: all.yaml]
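In case the screenshots are hard to read, here is a minimal sketch of what such a base template could look like. This is an illustration rather than the actual repo contents; variable names like ntp_servers and snmp_community are assumptions:

```jinja
{# base.j2 -- illustrative sketch; expects variables such as
   ntp_servers and snmp_community from group_vars/all.yaml #}
hostname {{ inventory_hostname }}
!
{% for server in ntp_servers %}
ntp server {{ server }}
{% endfor %}
!
snmp-server community {{ snmp_community }} ro
```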

The CLI credentials are also listed here, which is fine for lab demonstration purposes, but in a real production deployment you would hide these login credentials away in Ansible Vault or use SSH keys.
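A common Vault pattern, sketched below with assumed variable names, is to keep only an indirection in the committed group_vars file and store the real secret in a separate file encrypted with `ansible-vault encrypt`:

```yaml
# group_vars/all.yaml -- safe to commit; points at the vaulted value
ansible_user: admin
ansible_password: "{{ vault_ansible_password }}"

# group_vars/vault.yaml -- encrypted with ansible-vault, never in
# plain text in the repo:
# vault_ansible_password: <real password lives here>
```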

After the base config is applied, we can move on to building out our data centre fabric with the datacenter.yaml playbook. This file references three roles within our Ansible playbook directories: an interfaces role, a routing role for BGP, and a further configuration addition for SNMP. Conveniently, these roles come pre-baked through Ansible Galaxy, so we don’t have to do any heavy lifting in building the YAML or Jinja code for these items; it is already built for us. The only items we need to change are the device-specific variables unique to our environment, such as IP addresses and VLAN numbers.
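The playbook wiring for this is short; a sketch is below. The role names here follow the Dell-Networking naming seen on Galaxy but are assumptions, so they may differ from the actual repo:

```yaml
# datacenter.yaml -- illustrative sketch; role names are assumptions
- hosts: datacenter
  connection: network_cli
  roles:
    - Dell-Networking.dellos-interface
    - Dell-Networking.dellos-bgp
    - Dell-Networking.dellos-snmp
```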

[Screenshot: datacenter.yaml playbook]

Here we see, inside the BGP role, the YAML data model used to populate the configuration parameters for our deployment:

[Screenshot: BGP role YAML data model]
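For a spine, the per-device variables feeding that data model might look roughly like the sketch below. The key names approximate the Dell BGP role’s schema and the ASNs and IPs are placeholders; check the role’s README for the exact structure:

```yaml
# host_vars/spine1.yaml -- illustrative; key names and values are
# assumptions approximating the Dell BGP role's data model
dellos_bgp:
  asn: 65001
  router_id: 10.0.0.1
  neighbor:
    - ip: 10.1.1.2          # leaf1 fabric link
      remote_asn: 65101
      admin: up
    - ip: 10.1.2.2          # leaf2 fabric link
      remote_asn: 65102
      admin: up
```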

This shows a tree view of the role directory:

[Screenshot: BGP role directory tree]

Our playbook directory now looks like this:

[Screenshot: playbook directory tree]

We are now ready to run the datacenter.yaml playbook and deploy the spine-leaf network. After the playbook has run, we can check our configuration and BGP peers to validate that everything is working as planned:

[Screenshot: BGP peers up]

If we need to verify that our network never deviates from our standards, we can run the playbook in check mode (`ansible-playbook datacenter.yaml --check`, optionally with `--diff`) to make sure we don’t get configuration drift or errors in our deployment. You could call this continuous network validation testing:
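Check mode can also be pinned at the play level so a scheduled run only ever reports drift and never changes the devices. The sketch below assumes illustrative role names:

```yaml
# validate.yaml -- illustrative drift-detection play; check_mode: true
# forces every task to run in check mode regardless of CLI flags
- hosts: datacenter
  connection: network_cli
  check_mode: true
  roles:
    - Dell-Networking.dellos-bgp
```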

[Screenshot: playbook check-mode run]

As we have all green in the playbook output, we can rest assured that a good configuration is applied to the devices.

Update 08/06/2018:

We can also check some telemetry information regarding interface status and attached nodes with the show_connection_status playbook I added:

[Screenshot: DellOS10-Lab playbook directory]

[Screenshot: show_connection_status playbook contents]

The first task sends a simple show interface status command to the switch to get the status of the up interfaces. The second task sends a show ip arp command to see which hosts are attached, their respective IPs and MAC addresses, and which interfaces they are connected to, whether they are upstream spines or connected servers:
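Those two tasks can be sketched roughly as below. The module name dellos10_command matches the older Dell modules shipped with Ansible (the newer dellemc.os10 collection calls it os10_command), and the task layout here is an illustration rather than the actual playbook contents:

```yaml
# Illustrative tasks for show_connection_status -- module and task
# names are assumptions based on the Dell OS10 Ansible modules
- name: Get status of up interfaces
  dellos10_command:
    commands:
      - show interface status
  register: intf_status

- name: Get attached hosts via the ARP table
  dellos10_command:
    commands:
      - show ip arp
  register: arp_table

- name: Display the interface output
  debug:
    var: intf_status.stdout_lines
```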

[Screenshot: interface status output]

[Screenshot: ARP table / attached hosts output]

It is worth repeating that Ansible offers a rich platform from which to apply consistent network configuration management across devices from many different vendors. As this post shows, Dell networking devices are no exception: Ansible Galaxy offers a wide array of pre-built, ready-to-go roles that can be leveraged to get your Dell data centre up and running in minutes:

[Screenshot: Dell roles on Ansible Galaxy]
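To pull a set of these roles down reproducibly, they can be listed in a requirements file and installed with `ansible-galaxy install -r requirements.yml`. The role names below are illustrative, so confirm the exact names on Galaxy first:

```yaml
# requirements.yml -- illustrative; verify role names on Ansible Galaxy
- src: Dell-Networking.dellos-interface
- src: Dell-Networking.dellos-bgp
- src: Dell-Networking.dellos-snmp
```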

In my next post I will demonstrate applying some standard config and gathering some basic facts from Dell OS6 campus switches. Stay tuned…
