It’s been a while since my last article, but I wanted to put up a post about a work project that demonstrates how to deploy and provision a spine-leaf switch setup using Dell OS10 devices. For the purposes of this lab demo, I used the virtual edition of the Dell OS10 operating system, which can be run inside VMware ESX. This is the same operating system that runs on Dell’s hardware data centre switches, such as the Z9100 or S6100 series devices. A screenshot of my vSphere setup is below:
The topology consists of 2 spines and 4 leafs. I pre-stage the VMs in DHCP on my Ansible control server so that, when they boot for the first time, they acquire an IP address, pull down their OS from the server over HTTP, and bootstrap themselves onto the network.
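To make the later playbooks easier to follow, an Ansible inventory for a topology like this could be grouped along the following lines. This is only a sketch: the hostnames, management IPs and group names are assumptions for illustration, not taken from my lab.

```yaml
# inventory.yaml - illustrative inventory for 2 spines and 4 leafs
# (hostnames, IPs and group names are assumptions for this sketch)
all:
  children:
    datacenter:
      children:
        spine:
          hosts:
            spine1: { ansible_host: 10.10.10.11 }
            spine2: { ansible_host: 10.10.10.12 }
        leaf:
          hosts:
            leaf1: { ansible_host: 10.10.10.21 }
            leaf2: { ansible_host: 10.10.10.22 }
            leaf3: { ansible_host: 10.10.10.23 }
            leaf4: { ansible_host: 10.10.10.24 }
      vars:
        # connection settings for the OS10 CLI modules; value names assumed,
        # check the module docs for your Ansible version
        ansible_network_os: dellos10
        ansible_connection: network_cli
```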
This is what the topology will look like on completion:
After the switch boots up with its OS installed, we can then apply a base configuration to it, covering items such as the management IP, SNMP, and NTP. The variables for this are referenced in the all.yaml file under the group_vars directory.
Please see my GitHub repo for more details:
The base config is generated from a Jinja2 template, “base.j2”, which iterates over the variables in the all.yaml file and renders those elements into the base configuration applied to the switches:
When we cat the Jinja2 file, it looks like this:
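To give a feel for the shape of such a template, here is a rough sketch of what a base.j2 of this kind might contain. The variable names (ntp_servers, snmp_community, and so on) and the OS10 CLI lines are assumptions for illustration; the actual template is in the repo.

```jinja
{# base.j2 - illustrative sketch of a base-config template.
   Variable names and CLI syntax are assumptions, not the repo's actual file. #}
hostname {{ inventory_hostname }}
!
{% for server in ntp_servers %}
ntp server {{ server }}
{% endfor %}
!
snmp-server community {{ snmp_community }} ro
snmp-server location "{{ snmp_location }}"
```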
As stated earlier, the template references the all.yaml file in the group_vars directory. I put the variables there because this base config applies to all data centre switches. The all.yaml file is below:
The CLI credentials are also listed here, but this is just for lab demonstration purposes. In a real production deployment you would hide these login credentials away in Ansible Vault or use SSH keys.
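As an illustration only, a minimal group_vars/all.yaml for a lab like this could look something like the sketch below; the key names and values are assumptions and are not copied from the repo.

```yaml
# group_vars/all.yaml - illustrative only; key names and values are assumptions
ansible_user: admin          # lab only - use Ansible Vault or SSH keys in production
ansible_password: admin      # lab only

ntp_servers:
  - 10.10.10.5
  - 10.10.10.6

snmp_community: labsnmp
snmp_location: "Lab DC1"
```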
After we have the base config applied, we can move on to building out our data centre fabric with the datacenter.yaml playbook file. This file references 3 roles within our Ansible playbook directories: an interfaces role, a BGP routing role, and a further configuration addition for SNMP. Interestingly, these roles come pre-baked through Ansible Galaxy, so we don’t have to do any heavy lifting in building the YAML or Jinja code for these items; it is already built for us. The only items we need to change are the device-specific variables unique to our environment, such as IP addresses and VLAN numbers.
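A sketch of what a playbook along these lines might look like is below. The role names follow the Dell-Networking naming that was published on Ansible Galaxy at the time; treat them as assumptions and check Galaxy for the current Dell EMC role or collection names.

```yaml
# datacenter.yaml - illustrative sketch; role names are assumptions,
# check Ansible Galaxy for the exact Dell EMC role/collection names
---
- name: Build the spine-leaf fabric
  hosts: datacenter
  connection: network_cli
  gather_facts: false
  roles:
    - Dell-Networking.dellos-interface
    - Dell-Networking.dellos-bgp
    - Dell-Networking.dellos-snmp
```

Because each role reads its parameters from group and host variables, the playbook itself stays this short.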
Here we can see, inside the BGP role, the YAML data model used to populate the configuration parameters for our deployment:
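The exact schema is defined by the role itself, so the snippet below is only an illustration of the general shape of such a data model; the key names are assumptions, and the role’s README documents the real ones.

```yaml
# host_vars/leaf1.yaml - illustrative BGP data model; key names are assumptions,
# the role's README defines the actual schema
dellos_bgp:
  asn: 65101
  router_id: 10.0.0.11
  neighbor:
    - ip: 192.168.1.0
      remote_asn: 65001
      admin: up
    - ip: 192.168.2.0
      remote_asn: 65002
      admin: up
  state: present
```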
This is a tree view of the role directory:
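These Galaxy roles follow the standard layout that ansible-galaxy init produces, roughly as below (the template file name is assumed):

```
dellos-bgp/
├── defaults/
│   └── main.yml        # default variable values
├── handlers/
│   └── main.yml
├── meta/
│   └── main.yml        # role metadata and dependencies
├── tasks/
│   └── main.yml        # tasks that render and push the configuration
├── templates/
│   └── dellos-bgp.j2   # Jinja2 template for the BGP config (name assumed)
└── vars/
    └── main.yml
```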
Our playbook directory now looks like this:
We are now ready to run the datacenter.yaml playbook and deploy the spine-leaf network. After the playbook has run, we can check our configuration and BGP peers and validate that everything is working as planned:
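Running the playbook and then spot-checking BGP from the control server might look something like the following; the module name and show command are what I would expect for OS10, but verify them against your Ansible version and platform.

```
$ ansible-playbook -i inventory.yaml datacenter.yaml

# ad-hoc check of BGP peers on one leaf (module name and command assumed)
$ ansible leaf1 -i inventory.yaml -m dellos10_command -a "commands='show ip bgp summary'"
```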
If we need to validate that our network never deviates from our standards, we can run the playbook in check mode to make sure we don’t get configuration drift or errors in our deployment. You could call this continuous network validation testing:
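For example, a periodic run such as the one below, using the standard ansible-playbook flags, reports any task that would change device state without actually applying anything:

```
$ ansible-playbook -i inventory.yaml datacenter.yaml --check --diff
```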
As the playbook output is all green, we can rest assured that a good configuration has been applied to the devices.
It is worth mentioning again that Ansible offers a rich platform for applying consistent network configuration management to devices from many different vendors. As this post shows, Dell networking devices are no exception: as seen below, there is a wide array of prebuilt, ready-to-go roles in Ansible Galaxy that can be leveraged to get your Dell data centre up and running in minutes:
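Installing one of these roles is a one-liner. The role name below uses the older Dell-Networking Galaxy naming and is an assumption, so search Galaxy for the current Dell EMC roles or collections:

```
$ ansible-galaxy install Dell-Networking.dellos-bgp
```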
In my next post I will demonstrate applying some standard config and gathering some basic facts from Dell OS6 campus switches. Stay tuned…