Introduction
Part 1 of this series gave a high-level overview of what Terraform is. Part 2 showed how to use Terraform to deploy a single virtual machine inside an OpenStack cloud.
This blog post will show how to deploy multiple virtual machines inside an OpenStack cloud to act as a highly available (HA) pair. High availability means the service is designed to remain available even when an individual component fails. To achieve this, some form of fault tolerance (the ability to withstand interruption) needs to be introduced.
There are several different ways to accomplish this. The method I will outline in this blog post uses a “virtual IP address”, an IP address shared among a group of servers. An election takes place among the servers, and the winner hosts the virtual IP. If the elected leader becomes unavailable, a new leader is elected.
keepalived is a popular open source project that implements this kind of system by way of the Virtual Router Redundancy Protocol (VRRP).
Required Materials
This is a complex demo that requires several different pieces. To make it easier to follow along, I have bundled everything together here.
If you want to cut to the chase and go straight to the final product, do the following:
$ source /path/to/your/openrc
$ git clone https://github.com/jtopjian/terraform-openstack-keepalived
$ cd terraform-openstack-keepalived
$ terraform apply
The rest of this blog post will describe what each piece is actually doing.
The Terraform Configuration
Let’s look at the main.tf file. There are several OpenStack resources listed here:
Key Pair
resource "openstack_compute_keypair_v2" "keepalived" {
  name       = "keepalived"
  public_key = "${file("key/id_rsa.pub")}"
}
The first resource declares an SSH key pair that will be uploaded to your OpenStack account. You need to generate the actual key, though, by doing the following:
$ ssh-keygen -f key/id_rsa
Security Group
resource "openstack_compute_secgroup_v2" "keepalived" {
  name        = "keepalived"
  description = "Rules for keepalived tests"

  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }

  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "::/0"
  }

  rule {
    from_port   = 80
    to_port     = 80
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }

  rule {
    from_port   = 80
    to_port     = 80
    ip_protocol = "tcp"
    cidr        = "::/0"
  }

  rule {
    from_port   = 1
    to_port     = 65535
    ip_protocol = "tcp"
    self        = true
  }

  rule {
    from_port   = 1
    to_port     = 65535
    ip_protocol = "udp"
    self        = true
  }
}
The second resource is a security group. Security groups provide a firewall-like service for your OpenStack virtual machines. This security group allows outside traffic to reach ports 22 and 80 of the virtual machines (over both IPv4 and IPv6), and additionally allows unrestricted TCP and UDP traffic between members of the group.
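As a rough mental model, the rule set above can be sketched as a simple match function. This is purely illustrative: real OpenStack security groups are stateful and enforced at the hypervisor, not a match loop like this.

```python
# Toy model of the rule matching performed by the security group above.
RULES = [
    {"from_port": 22, "to_port": 22,    "protocol": "tcp", "source": "anywhere"},
    {"from_port": 80, "to_port": 80,    "protocol": "tcp", "source": "anywhere"},
    {"from_port": 1,  "to_port": 65535, "protocol": "tcp", "source": "self"},
    {"from_port": 1,  "to_port": 65535, "protocol": "udp", "source": "self"},
]

def allowed(port, protocol, from_group_member):
    """Decide whether an inbound packet matches any rule."""
    for rule in RULES:
        if rule["protocol"] != protocol:
            continue
        if not rule["from_port"] <= port <= rule["to_port"]:
            continue
        # "anywhere" rules admit all sources; "self" rules only admit
        # traffic from other members of the same security group.
        if rule["source"] == "anywhere" or from_group_member:
            return True
    return False

print(allowed(22, "tcp", False))    # True: SSH is open to the world
print(allowed(5000, "tcp", False))  # False: other ports are closed externally
print(allowed(5000, "udp", True))   # True: group members talk on any port
```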
Server Group
resource "openstack_compute_servergroup_v2" "keepalived" {
  name     = "keepalived"
  policies = ["anti-affinity"]
}
The third resource declares a Server Group. Here’s an excellent article about using the Affinity and Anti-Affinity server groups in OpenStack.
The server group being created will ensure that each virtual machine is placed on a different compute node inside the OpenStack cloud. Compute nodes are the physical servers that host virtual machines. If a compute node goes offline (due to a hardware issue, for example), the virtual machines it hosts become unavailable as well.
Since we’re creating a highly-available cluster, we want to ensure that all virtual machines are hosted on different compute nodes. After all, if all virtual machines were on the same compute node, and that one compute node went offline, our entire highly-available cluster would also be offline.
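The effect of the anti-affinity policy can be sketched with a toy scheduler. This is purely illustrative (the real Nova scheduler is far more involved): each instance in the group must land on a compute node that no other group member is already using.

```python
# Toy scheduler illustrating the "anti-affinity" server group policy.
def place(instances, hosts):
    """Map each instance to a distinct host; fail if the cloud is too small."""
    placement = {}
    for instance in instances:
        # Only consider hosts not already used by a member of the group.
        free = [h for h in hosts if h not in placement.values()]
        if not free:
            raise RuntimeError("No valid host found (anti-affinity)")
        placement[instance] = free[0]
    return placement

print(place(["keepalived-1", "keepalived-2"], ["node-1", "node-2", "node-3"]))
```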
Floating IP
resource "openstack_compute_floatingip_v2" "keepalived" {
  pool = "nova"
}
The fourth resource allocates a Floating IP to our OpenStack account. This will act as the Virtual IP Address.
Instances
resource "openstack_compute_instance_v2" "keepalived-1" {
  name            = "keepalived-1"
  image_name      = "Ubuntu 14.04"
  flavor_name     = "m1.tiny"
  key_pair        = "${openstack_compute_keypair_v2.keepalived.name}"
  security_groups = ["${openstack_compute_secgroup_v2.keepalived.name}"]

  scheduler_hints {
    group = "${openstack_compute_servergroup_v2.keepalived.id}"
  }
}

resource "openstack_compute_instance_v2" "keepalived-2" {
  name            = "keepalived-2"
  image_name      = "Ubuntu 14.04"
  flavor_name     = "m1.tiny"
  key_pair        = "${openstack_compute_keypair_v2.keepalived.name}"
  security_groups = ["${openstack_compute_secgroup_v2.keepalived.name}"]

  scheduler_hints {
    group = "${openstack_compute_servergroup_v2.keepalived.id}"
  }
}
The fifth and sixth resources are virtual machines, or “instances”. You can see that they are identical except that one is called “keepalived-1” and the other is called “keepalived-2”.
You can also see that they reference the Key Pair, Security Group, and Server Group created earlier. In Terraform, these references create implicit dependencies, meaning the two instances will not be created until the referenced resources have been successfully created.
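The creation order Terraform derives from those references can be sketched with a topological sort. The short resource names below are illustrative stand-ins for the resources above, not Terraform identifiers:

```python
# Sketch of dependency-ordered creation: an instance is only created
# after every resource it references exists.
from graphlib import TopologicalSorter

# resource -> set of resources it references
deps = {
    "keepalived-1": {"keypair", "secgroup", "servergroup"},
    "keepalived-2": {"keypair", "secgroup", "servergroup"},
}

# static_order() yields dependencies before their dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)  # the three referenced resources come before either instance
```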
Templates
resource "template_file" "keepalived-1" {
  filename = "templates/keepalived.conf.tpl"

  vars {
    my_uuid     = "${openstack_compute_instance_v2.keepalived-1.id}"
    peer_uuid   = "${openstack_compute_instance_v2.keepalived-2.id}"
    my_ip       = "${openstack_compute_instance_v2.keepalived-1.access_ip_v4}"
    peer_ip     = "${openstack_compute_instance_v2.keepalived-2.access_ip_v4}"
    floating_ip = "${openstack_compute_floatingip_v2.keepalived.address}"
    my_state    = "MASTER"
    my_priority = "101"
  }

  connection {
    user     = "ubuntu"
    key_file = "key/id_rsa"
    host     = "${openstack_compute_instance_v2.keepalived-1.access_ip_v6}"
  }

  provisioner "local-exec" {
    command = "echo \"${template_file.keepalived-1.rendered}\" > scripts/keepalived-1.conf"
  }

  provisioner "file" {
    source      = "scripts"
    destination = "scripts"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo bash /home/ubuntu/scripts/bootstrap.sh"
    ]
  }
}
resource "template_file" "keepalived-2" {
  filename = "templates/keepalived.conf.tpl"

  vars {
    my_uuid     = "${openstack_compute_instance_v2.keepalived-2.id}"
    peer_uuid   = "${openstack_compute_instance_v2.keepalived-1.id}"
    my_ip       = "${openstack_compute_instance_v2.keepalived-2.access_ip_v4}"
    peer_ip     = "${openstack_compute_instance_v2.keepalived-1.access_ip_v4}"
    floating_ip = "${openstack_compute_floatingip_v2.keepalived.address}"
    my_state    = "BACKUP"
    my_priority = "100"
  }

  connection {
    user     = "ubuntu"
    key_file = "key/id_rsa"
    host     = "${openstack_compute_instance_v2.keepalived-2.access_ip_v6}"
  }

  provisioner "local-exec" {
    command = "echo \"${template_file.keepalived-2.rendered}\" > scripts/keepalived-2.conf"
  }

  provisioner "file" {
    source      = "scripts"
    destination = "scripts"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo bash /home/ubuntu/scripts/bootstrap.sh"
    ]
  }
}
The seventh and eighth resources are templates. Templates are a special kind of Terraform resource: they take a text file as input and substitute values into marked areas of the text (a process known as interpolation).
In this case, the text file is located at “templates/keepalived.conf.tpl“. The values that will be filled in are listed in the “vars” block. You can see that the vars, or variables, contain information about both of the virtual machines that were created.
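The template file itself isn’t reproduced in this post. As a rough illustration (the repository’s actual file will differ), a keepalived configuration template using those vars might look something like this. The interface name, virtual router ID, and notify script path are all assumptions:

```conf
# Hypothetical sketch only, not the actual templates/keepalived.conf.tpl
vrrp_instance keepalived_demo {
    state ${my_state}           # MASTER on keepalived-1, BACKUP on keepalived-2
    interface eth0              # assumed interface name
    virtual_router_id 51        # arbitrary, but must match on both peers
    priority ${my_priority}     # 101 beats 100, so keepalived-1 wins elections

    unicast_src_ip ${my_ip}     # VRRP over unicast between the two peers
    unicast_peer {
        ${peer_ip}
    }

    # On promotion to MASTER, a notify script could call the OpenStack API
    # to associate ${floating_ip} with this instance (${my_uuid}).
    notify_master "/etc/keepalived/notify-master.sh"
}
```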
Again, because the virtual machines are referenced in the template resources, implicit dependencies are created. This is important because it ensures Terraform launched both instances successfully before moving on. It also lets each instance learn details about its peer (UUIDs and IP addresses) without any hard-coding. As described in the previous parts, this is a very important feature.
There are four other blocks in each template resource: one “connection” block and three “provisioner” blocks.
The first, connection, describes how Terraform can remotely access the virtual machines.
The “provisioner” blocks execute commands. The first, a “local-exec” provisioner, writes the rendered template to a file in the local “scripts” directory. The second, a “file” provisioner, copies the “scripts” directory to the virtual machine. The third, a “remote-exec” provisioner, runs the “scripts/bootstrap.sh” script remotely on the virtual machine.
Action
With all of the resources in place, running “terraform apply” will do the following:
- Create the Key Pair.
- Create the Security Group.
- Create the Server Group.
- Create the Floating IP.
- Create two virtual machines.
- Create a keepalived configuration file for each virtual machine and place it in “scripts”.
- Copy “scripts” to each virtual machine.
- Run “bootstrap.sh” on each virtual machine, which will:
  - Install keepalived.
  - Install the keepalived.conf configuration file generated by the Template resource.
  - Start keepalived, which will:
    - Determine if it’s the MASTER or BACKUP virtual machine.
    - Add the Floating IP to itself if it’s the MASTER.
All of that will happen in a single command. Impressive, right?
Failover
With everything up and running, let’s test the failover. By default, “keepalived-1” will have the Floating IP. If I shut off “keepalived-1”, the Floating IP will transfer to “keepalived-2”. Turning “keepalived-1” back on will return the Floating IP to it.
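That sequence can be sketched as a small simulation of the priority-based election. The node names and priorities come from the configuration above; keepalived’s default preemption behavior (a returning higher-priority node reclaims MASTER) is assumed:

```python
# Simulating the failover sequence: the reachable node with the highest
# VRRP priority holds the Floating IP at any point in time.
PRIORITIES = {"keepalived-1": 101, "keepalived-2": 100}

def holder(alive):
    """Which reachable node currently holds the Floating IP?"""
    return max(alive, key=PRIORITIES.get) if alive else None

timeline = [
    ("both nodes up", {"keepalived-1", "keepalived-2"}),
    ("keepalived-1 shut off", {"keepalived-2"}),
    ("keepalived-1 back on", {"keepalived-1", "keepalived-2"}),
]

for event, alive in timeline:
    print(f"{event}: Floating IP on {holder(alive)}")
```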