By Michael O’Leary, Senior Consultant
Overview
As part of our daily consulting practice, Avocado engineers work with many clients at different stages of DevOps maturity. For clients in the early stages of their DevOps journey, we recommend creating several environments prior to production. These environments need to be repeatable and should mimic the environments further up the chain to production. A common set of environments for releasing application and automation code consists of:
- Local Development Environment.
- Shared Development Environment.
- Testing Environment.
- Stage Environment.
- Production Environment.
For the purposes of this article we are going to focus on creating a multi-node development environment for a Splunk architecture. This is useful for DevOps/Splunk engineers developing on their local workstation or laptop. For this we are going to create a standard Splunk infrastructure (a Cluster Master, two Search Heads and two Indexers). In addition, we are going to use an automation technology to provision the servers and ensure the infrastructure is immutable. For this reason I have also created a management server called “mgmt”, which will allow Ansible playbooks to be run within the environment by users on Windows operating systems.
Note: Feel free to follow along step by step (I haven’t documented every detail here); all of this code will be released publicly on GitHub. The architecture, which will be a plain vanilla install of Splunk with some clustering configured, will look like the diagram below. This meets the minimum requirements of a Splunk cluster with data replication.
For this article we are going to use the following technologies:
- VirtualBox – An open source virtualisation tool for creating virtual machines.
- Vagrant – A wrapper tool for orchestrating the creation and configuration of virtual machines.
- Ansible – A powerful, self-documenting automation tool.
Note: While I’m using the above tools, I don’t cover their installation; it is sufficiently covered in the documentation for each toolset. See the links above to find the documentation.
Getting Started
Let’s start by creating a blank project directory and a Vagrantfile to house the configuration for our vagrant lab environment.
Starting in my Workspace directory, issue the following commands:
[sourcecode language="plain"]
mkdir multinode-lab
cd multinode-lab
vagrant init base
[/sourcecode]
If those commands worked correctly, you should now see the Vagrantfile when you do a directory listing.
Doing a cat of the file should reveal all the default configuration and comments that come with a freshly generated Vagrantfile.
That looks a little busy, so let’s tidy it up by removing all the comments we don’t need.
[sourcecode language="plain"]
✔ ~/Workspace/multinode-lab [master|✚ 1] 10:18 $ sed -i '' '/^.*#.*$/d' Vagrantfile
✔ ~/Workspace/multinode-lab [master|✚ 2] 12:38 $
[/sourcecode]
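Note: the empty string after -i is BSD/macOS sed syntax. If you are following along on Linux, GNU sed takes the optional backup suffix as part of the -i flag itself, so the equivalent command is:
[sourcecode language="plain"]
sed -i '/^.*#.*$/d' Vagrantfile
[/sourcecode]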
Your Vagrantfile should look something like this now.
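As a rough sketch, assuming a stock vagrant init base followed by the sed command above (the exact contents vary slightly between Vagrant versions), the stripped file should contain little more than the bare configure block:
[sourcecode language="plain"]
$ cat Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "base"
end
[/sourcecode]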
There are multiple different vagrant box types that can be utilised, including Windows, Linux and Unix, so there are plenty of options out there to meet your development needs. Vagrant boxes created by the community are located at the following URL: https://atlas.hashicorp.com/boxes/search
Instead of the base box, I’ve decided to use CentOS 7 as the default box. To do this I’ve simply changed the value of config.vm.box from base to centos/7. I’ve also set config.ssh.insert_key to false. The reasons for this are marginally quicker boot times, and that I don’t need to replace the default Vagrant key in my local environment.
[sourcecode language="plain"]
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.ssh.insert_key = false
end
[/sourcecode]
Ensure your Vagrantfile looks like the code above.
Now that I have my basic configuration, let’s test that we can bring up our default box and SSH to it.
[sourcecode language="plain"]
✔ ~/Workspace/multinode-lab [master|✚ 1] 09:10 $ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'centos/7'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'centos/7' is up to date...
==> default: Setting the name of the VM: multinode-lab_default_1473635489166_21269
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: No guest additions were detected on the base box for this VM! Guest
    default: additions are required for forwarded ports, shared folders, host only
    default: networking, and more. If SSH fails on this machine, please install
    default: the guest additions and repackage the box to continue.
    default:
    default: This is not an error message; everything may continue to work properly,
    default: in which case you may ignore this message.
==> default: Rsyncing folder: /Users/moleary/Workspace/multinode-lab/ => /vagrant
✔ ~/Workspace/multinode-lab [master|✚ 1]
[/sourcecode]
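To finish the test, SSH into the box and confirm we have landed on a CentOS 7 guest. A representative session (your prompt and exact point release will differ) looks something like this:
[sourcecode language="plain"]
$ vagrant ssh
[vagrant@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[vagrant@localhost ~]$ exit
[/sourcecode]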
Everything in the vagrant up output looks good apart from the fact that we don’t have guest additions installed. That’s not a huge problem, but it does prevent us from mounting a shared volume between the host and guest machines, which is functionality I find useful; we can tackle that later. In the meantime, let’s destroy the vagrant machine we have created.
[sourcecode language="plain"]
✔ ~/Workspace/multinode-lab [master|✚ 1] 09:19 $ vagrant destroy
==> default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...
✔ ~/Workspace/multinode-lab [master|✚ 1]
[/sourcecode]
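As an aside, if you would rather close the guest additions gap straight away, one common community option (we don’t depend on it for the rest of this article) is the vagrant-vbguest plugin, which attempts to install matching guest additions into the guest at boot time:
[sourcecode language="plain"]
vagrant plugin install vagrant-vbguest
[/sourcecode]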
Install the Supporting Vagrant Plugins
Vagrant comes with a long list of plugins that extend its functionality. For this tutorial I need three: Landrush, which provides local DNS resolution within the vagrant environment, and the Ansible and Shell provisioners, which will provision our vagrant boxes. Ansible can be used to provision the vagrant boxes directly if you are using a Unix or Linux operating system; if you are using a Windows operating system, we can create an additional box to act as the provisioner.
[sourcecode language="plain"]
✔ ~/Workspace/multinode-lab [master|✚ 2] 11:49 $ vagrant plugin list
shell (0.0.1)
vagrant-aws (0.7.0)
vagrant-hostmanager (1.8.1)
vagrant-share (1.1.5, system)
✔ ~/Workspace/multinode-lab [master|✚ 2]
[/sourcecode]
Let’s install the plugins using the following command:
[sourcecode language="plain"]
✔ ~/Workspace/multinode-lab [master|✚ 2] 11:49 $ vagrant plugin install landrush ansible
Installing the 'landrush' plugin. This can take a few minutes...
Installed the plugin 'landrush (1.1.2)'!
Installing the 'ansible' plugin. This can take a few minutes...
Installed the plugin 'ansible (0.2.2)'!
✔ ~/Workspace/multinode-lab [master|✚ 2] 12:00 $
[/sourcecode]
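Running vagrant plugin list again should now show landrush and ansible alongside the previously installed plugins, along these lines:
[sourcecode language="plain"]
$ vagrant plugin list
ansible (0.2.2)
landrush (1.1.2)
shell (0.0.1)
vagrant-aws (0.7.0)
vagrant-hostmanager (1.8.1)
vagrant-share (1.1.5, system)
[/sourcecode]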
For a list of vagrant plugins, see the GitHub wiki page: https://github.com/mitchellh/vagrant/wiki/Available-Vagrant-Plugins
Let’s Go Multi-Node
Most architectures don’t consist of one box on a workstation, so we are going to replicate the architecture mentioned above. I’ve added a subdirectory, ansible/vars/cluster/guests, under our lab directory, and in it a guests.yml file that defines the nodes. I’ve also added ram values for each node so I can override the default ram value of 256MB. In addition, I’ve updated the Vagrantfile to reflect the changes for the multi-node environment.
[sourcecode language="plain"]
default_ram = '256'
default_domain = 'avocado.lab'
default_env = "cluster"

environment = ENV['ANSIBLE_ENV'] || default_env
environment_domain = "#{environment}.#{default_domain}"

require 'yaml'
guests = YAML.load(File.open(File.join(File.dirname(__FILE__), "ansible/vars/#{environment}/guests/guests.yml"), File::RDONLY).read)

Vagrant.configure("2") do |config|
  config.landrush.enabled = true
  config.landrush.tld = default_domain
  config.vm.box = "centos/7"
  config.ssh.insert_key = false

  guests.each do |name,options|
    config.vm.define name do |guest_config|
      hostname = [name, environment_domain].join('.')
      guest_config.vm.host_name = hostname
      memory = options['ram'] || default_ram
      guest_config.vm.provider :virtualbox do |v|
        v.name = hostname
        v.cpus = 1
        v.memory = memory.to_s
      end
    end
  end
end
[/sourcecode]
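One thing worth noting in the code above: because the environment name is read from the ANSIBLE_ENV environment variable, the same Vagrantfile can drive an entirely different set of guests. For example, assuming you had created a second definitions file at ansible/vars/dev/guests/guests.yml, you could bring up a “dev” flavour of the lab with:
[sourcecode language="plain"]
ANSIBLE_ENV=dev vagrant up
[/sourcecode]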
In addition to configuring the nodes, I’ve used the Landrush plugin to provide me with DNS names: each node’s hostname is its name followed by the environment and domain name. Thus the FQDNs of the nodes will be:
- clm01.cluster.avocado.lab
- shd01.cluster.avocado.lab
- shd02.cluster.avocado.lab
- idx01.cluster.avocado.lab
- idx02.cluster.avocado.lab
- ufw01.cluster.avocado.lab
- mgmt.cluster.avocado.lab
The contents of the guests.yml file are as follows:
[sourcecode language="plain"]
---
clm01:
  ram: '512'
shd01:
  ram: '512'
shd02:
  ram: '512'
idx01:
  ram: '512'
idx02:
  ram: '512'
ufw01:
  ram: '512'
mgmt:
  ram: '512'
[/sourcecode]
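Scaling the lab out is then just a matter of adding entries to this file. For example, you could append an extra node from the shell (hfw01 here is a hypothetical name for illustration, not part of the architecture above):
[sourcecode language="plain"]
cat >> ansible/vars/cluster/guests/guests.yml <<'EOF'
# hfw01 is a hypothetical example node
hfw01:
  ram: '1024'
EOF
[/sourcecode]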
Running vagrant status will show the hosts defined in the guests.yml file.
[sourcecode language="plain"]
✔ ~/Workspace/multinode-lab [master|✚ 2] 14:18 $ vagrant status
Current machine states:

clm01                     not created (virtualbox)
shd01                     not created (virtualbox)
shd02                     not created (virtualbox)
idx01                     not created (virtualbox)
idx02                     not created (virtualbox)
ufw01                     not created (virtualbox)
mgmt                      not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
✔ ~/Workspace/multinode-lab [master|✚ 2]
[/sourcecode]
Running the vagrant up command will start all seven nodes.
[sourcecode language="plain"]
✔ ~/Workspace/multinode-lab [master|✚ 2…41] 08:56 $ vagrant up
Bringing machine 'clm01' up with 'virtualbox' provider...
Bringing machine 'shd01' up with 'virtualbox' provider...
Bringing machine 'shd02' up with 'virtualbox' provider...
Bringing machine 'idx01' up with 'virtualbox' provider...
Bringing machine 'idx02' up with 'virtualbox' provider...
Bringing machine 'ufw01' up with 'virtualbox' provider...
Bringing machine 'mgmt' up with 'virtualbox' provider...
==> clm01: Importing base box 'centos-7.2-64-base'...
==> clm01: Matching MAC address for NAT networking...
==> clm01: Setting the name of the VM: clm01.cluster.avocado.lab
==> clm01: Clearing any previously set network interfaces...
==> clm01: [landrush] virtualbox requires an additional private network; adding it
==> clm01: Preparing network interfaces based on configuration...
    clm01: Adapter 1: nat
    clm01: Adapter 2: hostonly
==> clm01: Forwarding ports...
    clm01: 22 (guest) => 2400 (host) (adapter 1)
==> clm01: Running 'pre-boot' VM customizations...
==> clm01: Booting VM...
==> clm01: Waiting for machine to boot. This may take a few minutes...
    clm01: SSH address: 127.0.0.1:2400
    clm01: SSH username: vagrant
    clm01: SSH auth method: private key
    clm01: Warning: Remote connection disconnect. Retrying...
==> clm01: Machine booted and ready!
==> clm01: Checking for guest additions in VM...
==> clm01: Setting hostname...
==> clm01: Configuring and enabling network interfaces...
==> clm01: Landrush IP not installed in guest yet (or it's an outdated version). Installing now.
[landrush] Using enp0s8 (172.28.128.3)
==> clm01: [landrush] adding machine entry: clm01.cluster.avocado.lab => 172.28.128.3
[landrush] Using enp0s8 (172.28.128.3)
[landrush] Host DNS resolver config looks good.
==> clm01: Mounting shared folders...
    clm01: /vagrant => /Users/moleary/Workspace/multinode-lab
==> shd01: Importing base box 'centos-7.2-64-base'...
==> shd01: Matching MAC address for NAT networking...
==> shd01: Setting the name of the VM: shd01.cluster.avocado.lab
==> shd01: Clearing any previously set network interfaces...
==> shd01: [landrush] virtualbox requires an additional private network; adding it
==> shd01: Preparing network interfaces based on configuration...
    shd01: Adapter 1: nat
    shd01: Adapter 2: hostonly
==> shd01: Forwarding ports...
    shd01: 22 (guest) => 2410 (host) (adapter 1)
==> shd01: Running 'pre-boot' VM customizations...
==> shd01: Booting VM...
==> shd01: Waiting for machine to boot. This may take a few minutes...
    shd01: SSH address: 127.0.0.1:2410
    shd01: SSH username: vagrant
    shd01: SSH auth method: private key
    shd01: Warning: Remote connection disconnect. Retrying...
==> shd01: Machine booted and ready!
==> shd01: Checking for guest additions in VM...
==> shd01: Setting hostname...
==> shd01: Configuring and enabling network interfaces...
==> shd01: Landrush IP not installed in guest yet (or it's an outdated version). Installing now.
[landrush] Using enp0s8 (172.28.128.4)
==> shd01: [landrush] adding machine entry: shd01.cluster.avocado.lab => 172.28.128.4
[landrush] Using enp0s8 (172.28.128.4)
[landrush] Host DNS resolver config looks good.
==> shd01: Mounting shared folders...
    shd01: /vagrant => /Users/moleary/Workspace/multinode-lab
==> shd02: Importing base box 'centos-7.2-64-base'...
==> shd02: Matching MAC address for NAT networking...
==> shd02: Setting the name of the VM: shd02.cluster.avocado.lab
==> shd02: Clearing any previously set network interfaces...
==> shd02: [landrush] virtualbox requires an additional private network; adding it
==> shd02: Preparing network interfaces based on configuration...
    shd02: Adapter 1: nat
    shd02: Adapter 2: hostonly
==> shd02: Forwarding ports...
    shd02: 22 (guest) => 2411 (host) (adapter 1)
==> shd02: Running 'pre-boot' VM customizations...
==> shd02: Booting VM...
==> shd02: Waiting for machine to boot. This may take a few minutes...
    shd02: SSH address: 127.0.0.1:2411
    shd02: SSH username: vagrant
    shd02: SSH auth method: private key
    shd02: Warning: Remote connection disconnect. Retrying...
==> shd02: Machine booted and ready!
==> shd02: Checking for guest additions in VM...
==> shd02: Setting hostname...
==> shd02: Configuring and enabling network interfaces...
==> shd02: Landrush IP not installed in guest yet (or it's an outdated version). Installing now.
[landrush] Using enp0s8 (172.28.128.5)
==> shd02: [landrush] adding machine entry: shd02.cluster.avocado.lab => 172.28.128.5
[landrush] Using enp0s8 (172.28.128.5)
[landrush] Host DNS resolver config looks good.
==> shd02: Mounting shared folders...
    shd02: /vagrant => /Users/moleary/Workspace/multinode-lab
==> idx01: Importing base box 'centos-7.2-64-base'...
==> idx01: Matching MAC address for NAT networking...
==> idx01: Setting the name of the VM: idx01.cluster.avocado.lab
==> idx01: Clearing any previously set network interfaces...
==> idx01: [landrush] virtualbox requires an additional private network; adding it
==> idx01: Preparing network interfaces based on configuration...
    idx01: Adapter 1: nat
    idx01: Adapter 2: hostonly
==> idx01: Forwarding ports...
    idx01: 22 (guest) => 2420 (host) (adapter 1)
==> idx01: Running 'pre-boot' VM customizations...
==> idx01: Booting VM...
==> idx01: Waiting for machine to boot. This may take a few minutes...
    idx01: SSH address: 127.0.0.1:2420
    idx01: SSH username: vagrant
    idx01: SSH auth method: private key
    idx01: Warning: Remote connection disconnect. Retrying...
==> idx01: Machine booted and ready!
==> idx01: Checking for guest additions in VM...
==> idx01: Setting hostname...
==> idx01: Configuring and enabling network interfaces...
==> idx01: Landrush IP not installed in guest yet (or it's an outdated version). Installing now.
[landrush] Using enp0s8 (172.28.128.6)
==> idx01: [landrush] adding machine entry: idx01.cluster.avocado.lab => 172.28.128.6
[landrush] Using enp0s8 (172.28.128.6)
[landrush] Host DNS resolver config looks good.
==> idx01: Mounting shared folders...
    idx01: /vagrant => /Users/moleary/Workspace/multinode-lab
==> idx02: Importing base box 'centos-7.2-64-base'...
==> idx02: Matching MAC address for NAT networking...
==> idx02: Setting the name of the VM: idx02.cluster.avocado.lab
==> idx02: Clearing any previously set network interfaces...
==> idx02: [landrush] virtualbox requires an additional private network; adding it
==> idx02: Preparing network interfaces based on configuration...
    idx02: Adapter 1: nat
    idx02: Adapter 2: hostonly
==> idx02: Forwarding ports...
    idx02: 22 (guest) => 2421 (host) (adapter 1)
==> idx02: Running 'pre-boot' VM customizations...
==> idx02: Booting VM...
==> idx02: Waiting for machine to boot. This may take a few minutes...
    idx02: SSH address: 127.0.0.1:2421
    idx02: SSH username: vagrant
    idx02: SSH auth method: private key
    idx02: Warning: Remote connection disconnect. Retrying...
==> idx02: Machine booted and ready!
==> idx02: Checking for guest additions in VM...
==> idx02: Setting hostname...
==> idx02: Configuring and enabling network interfaces...
==> idx02: Landrush IP not installed in guest yet (or it's an outdated version). Installing now.
[landrush] Using enp0s8 (172.28.128.7)
==> idx02: [landrush] adding machine entry: idx02.cluster.avocado.lab => 172.28.128.7
[landrush] Using enp0s8 (172.28.128.7)
[landrush] Host DNS resolver config looks good.
==> idx02: Mounting shared folders...
    idx02: /vagrant => /Users/moleary/Workspace/multinode-lab
==> ufw01: Importing base box 'centos-7.2-64-base'...
==> ufw01: Matching MAC address for NAT networking...
==> ufw01: Setting the name of the VM: ufw01.cluster.avocado.lab
==> ufw01: Clearing any previously set network interfaces...
==> ufw01: [landrush] virtualbox requires an additional private network; adding it
==> ufw01: Preparing network interfaces based on configuration...
    ufw01: Adapter 1: nat
    ufw01: Adapter 2: hostonly
==> ufw01: Forwarding ports...
    ufw01: 22 (guest) => 2450 (host) (adapter 1)
==> ufw01: Running 'pre-boot' VM customizations...
==> ufw01: Booting VM...
==> ufw01: Waiting for machine to boot. This may take a few minutes...
    ufw01: SSH address: 127.0.0.1:2450
    ufw01: SSH username: vagrant
    ufw01: SSH auth method: private key
    ufw01: Warning: Remote connection disconnect. Retrying...
==> ufw01: Machine booted and ready!
==> ufw01: Checking for guest additions in VM...
==> ufw01: Setting hostname...
==> ufw01: Configuring and enabling network interfaces...
==> ufw01: Landrush IP not installed in guest yet (or it's an outdated version). Installing now.
[landrush] Using enp0s8 (172.28.128.8)
==> ufw01: [landrush] adding machine entry: ufw01.cluster.avocado.lab => 172.28.128.8
[landrush] Using enp0s8 (172.28.128.8)
[landrush] Host DNS resolver config looks good.
==> ufw01: Mounting shared folders...
    ufw01: /vagrant => /Users/moleary/Workspace/multinode-lab
==> mgmt: Importing base box 'centos-7.2-64-base'...
==> mgmt: Matching MAC address for NAT networking...
==> mgmt: Setting the name of the VM: mgmt.cluster.avocado.lab
==> mgmt: Clearing any previously set network interfaces...
==> mgmt: [landrush] virtualbox requires an additional private network; adding it
==> mgmt: Preparing network interfaces based on configuration...
    mgmt: Adapter 1: nat
    mgmt: Adapter 2: hostonly
==> mgmt: Forwarding ports...
    mgmt: 22 (guest) => 2222 (host) (adapter 1)
==> mgmt: Running 'pre-boot' VM customizations...
==> mgmt: Booting VM...
==> mgmt: Waiting for machine to boot. This may take a few minutes...
    mgmt: SSH address: 127.0.0.1:2222
    mgmt: SSH username: vagrant
    mgmt: SSH auth method: private key
    mgmt: Warning: Remote connection disconnect. Retrying...
==> mgmt: Machine booted and ready!
==> mgmt: Checking for guest additions in VM...
==> mgmt: Setting hostname...
==> mgmt: Configuring and enabling network interfaces...
==> mgmt: Landrush IP not installed in guest yet (or it's an outdated version). Installing now.
[landrush] Using enp0s8 (172.28.128.9)
==> mgmt: [landrush] adding machine entry: mgmt.cluster.avocado.lab => 172.28.128.9
[landrush] Using enp0s8 (172.28.128.9)
[landrush] Host DNS resolver config looks good.
==> mgmt: Mounting shared folders...
    mgmt: /vagrant => /Users/moleary/Workspace/multinode-lab
✔ ~/Workspace/multinode-lab [master|✚ 2] 14:36 $
[/sourcecode]
You should now have seven virtual machines that can be pinged using their corresponding DNS names.
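For example, pinging the cluster master by its Landrush name (the address shown is the one assigned in the run above; yours may differ) should look something like:
[sourcecode language="plain"]
$ ping -c 1 clm01.cluster.avocado.lab
PING clm01.cluster.avocado.lab (172.28.128.3): 56 data bytes
64 bytes from 172.28.128.3: icmp_seq=0 ttl=64 time=0.412 ms
[/sourcecode]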
In Part 2 we will explore integrating Ansible playbooks to provision the virtual machines on the fly.