Build a Versatile OpenStack Lab with Kolla

Hone your OpenStack skills with a full deployment in a single virtual machine. By John S. Tonello

It's hard to go anywhere these days without hearing about the urgent need for on-premises cloud environments that are agile, flexible and don't cost an arm and a leg to build and maintain. Yet getting your hands on a real OpenStack cluster—the de facto standard—can be downright impossible.

Enter Kolla-Ansible, an official OpenStack project that allows you to deploy a complete cluster successfully—including Keystone, Cinder, Neutron, Nova, Heat and Horizon—in Docker containers on a single, beefy virtual machine. It's actually just one of an emerging group of official OpenStack projects that containerize the OpenStack control plane so users can deploy complete systems in containers and Kubernetes.

To date, for those who don't happen to have a bunch of extra servers loaded with RAM and CPU cores handy, DevStack has served as the go-to OpenStack lab environment, but it comes with some limitations. Chief among them: you can't effectively reboot a DevStack system. In fact, rebooting generally bricks your instances and renders the rest of the stack largely unusable. DevStack also limits your ability to experiment beyond core OpenStack modules, whereas Kolla lets you build systems that mimic full production capabilities, make changes and pick up where you left off after a shutdown.

In this article, I explain how to deploy Kolla, starting from the initial configuration of your laptop or workstation, to configuration of your cluster, to putting your OpenStack cluster into service.

Why OpenStack?

As organizations of all shapes and sizes look to speed the development and deployment of mission-critical applications, many turn to public clouds like Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine, Rackspace and many others. All make it easy to build the systems you and your organization need quickly. Still, these public cloud services come at a price—sometimes a steep price you only learn about at the end of a billing cycle. Anyone in your organization with a credit card can spin up servers, even ones containing proprietary data and protected by inadequate security safeguards.

OpenStack, a community-driven open-source project with thousands of developers worldwide, offers a robust, enterprise-worthy alternative. It gives you the flexibility of public clouds in your own data center. In many ways, it's also easier to use than public clouds, particularly when OpenStack administrators properly set up networks, carve out storage and compute resources, and provide self-service capabilities to users. It also has tons of add-on capabilities to suit almost any use case you can imagine. No wonder that, by some industry estimates, 75% of private clouds are built using OpenStack.

Still, getting OpenStack up and running remains a challenge. It doesn't rely on any particular brand of hardware, but it does require machines with plenty of memory and CPU cores, and that alone is a roadblock for many looking to try it. The Kolla project gets you past that hurdle.

What You'll Need

Kolla can be run in a single virtual machine (or bare-metal box), known as an "all-in-one" deployment. You also can set it up to use multiple VMs, which is called "multinode". In this article, I show how to deploy the former in a virtual machine created with KVM, the Linux virtualization service based on libvirt. I successfully deployed Kolla on a Dell 5530 with 32GB of RAM and an i7 CPU with 12 cores, but I also did it on a machine with 16GB of RAM and four cores. You can allocate whatever you have. Obviously, the more RAM and cores, the better your OpenStack cluster will perform.

I used KVM for this deployment, but theoretically, you could use VirtualBox, VMware Desktop or another hypervisor. The base of the system is Docker, so just make sure you're using a system that can run it. Don't worry if you don't know much about Docker; Kolla uses Ansible to automate the creation of images and the containers themselves.

To install KVM, check the requirements for your distribution, keeping in mind you'll need libvirtd, qemu and virt-manager (for GUI management). On Ubuntu, this would be:


$ sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager

On Fedora, you'd use:


$ sudo dnf -y install bridge-utils libvirt virt-install qemu-kvm

On openSUSE, you'd install the KVM patterns:


$ sudo zypper -n install patterns-openSUSE-kvm_server patterns-server-kvm_tools

As part of your workstation configuration, I recommend setting up bridged networking. This will enable you to connect to the Kolla VM (and the OpenStack instances you create on it) directly from the host machine. Without this, KVM defaults to a NAT configuration that isolates VMs from the host. (You'll see how to set up bridged network connections below.)

Finally, Kolla supports two Linux distributions for deployment: CentOS and Ubuntu. Your host machine can be any flavor of Linux you want (or even Windows or Mac), but the main VM will be one of the two flavors listed above. That doesn't mean you can't create OpenStack images for your OpenStack instances based on other Linux flavors. You can, and you have a lot of options. For this lab though, I'm using CentOS 7 for the main Kolla VM.

Prepare Your Workstation

To work properly, Kolla wants two active NICs, and in a perfect world, they would sit on distinct subnets, but they don't need to. More important for this lab is that you can access your Kolla VM and your OpenStack instances from the host machine, and to do that, set up a bridge.

In my case, my workstation has two distinct networks, one internal and one external. For the internal, I used 10.128.1.0/24, but you can create a subnet that suits your needs. My subnet spans several physical and virtual servers on my lab network, including DNS servers, so I was able to take advantage of those resources automatically. Just be careful to carve out enough network resources to suit your needs. I needed only about 50 IPs, so creating a /24 was plenty for OpenStack instances and all my other servers.

You have several options for setting up bridging depending on your Linux distribution. On most, you can build a bridge simply by editing config files from the command line; others make it easy with graphical tools, like openSUSE's YaST. Regardless, the premise is the same. Instead of assigning network parameters to the physical network device—eth0, eth1, enp3s0 and so on—you bind the unconfigured physical device to a separate bridge device, which gets the static IP, netmask, gateway, DNS servers and other network parameters.

Historically, Ubuntu users would edit /etc/network/interfaces to set up a bridge, which might look something like this:


auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
address 10.128.1.10
netmask 255.255.255.0
gateway 10.128.1.1
dns-nameservers 10.128.1.2 10.128.1.3 8.8.8.8
dns-search example.com
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0

Current versions of Ubuntu (and other distributions) use netplan, which might look something like this:


network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: no
  bridges:
    br0:
      dhcp4: yes
      interfaces:
        - enp3s0
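
After saving the file, you can apply the new bridge configuration without rebooting:


$ sudo netplan apply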

See the Resources section at the end of this article for more information on using Netplan.

For distributions that use /etc/sysconfig/-style configuration files (such as CentOS and openSUSE), a separate bridge file references a physical device. On openSUSE, for example, ifcfg-br0 would be created along with ifcfg-eth0 under /etc/sysconfig/network/:


$ sudo vi /etc/sysconfig/network/ifcfg-br0

BOOTPROTO='static'
BRIDGE='yes'
BRIDGE_FORWARDDELAY='0'
BRIDGE_PORTS='eth0'
BRIDGE_STP='off'
BROADCAST='10.128.1.255'
ETHTOOL_OPTIONS=''
IPADDR='10.128.1.10/24'
STARTMODE='auto'

$ sudo vi /etc/sysconfig/network/ifcfg-eth0

BOOTPROTO='none'
NAME='AX88179 Gigabit Ethernet'
STARTMODE='hotplug'

Depending on how your network is managed (NetworkManager, Wicked or systemd-networkd), you should restart that service before proceeding. If things still seem to be out of whack, try rebooting.
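
For example, depending on which service manages your interfaces, one of the following should restart it:


$ sudo systemctl restart NetworkManager
$ sudo systemctl restart wicked
$ sudo systemctl restart systemd-networkd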

Create the Kolla Virtual Machine

This deployment of OpenStack using Kolla relies on a single, beefy virtual machine. The more resources you can commit to it, the better OpenStack will perform. At a bare minimum, give the VM two NICs, two virtual disks and enough RAM and vCPUs to run the roughly 40 Docker containers Kolla deploys—I strongly suggest at least 10GB of RAM and six vCPUs. Also, if you have an SSD or NVMe drive, use that for your VM storage. Solid-state drives will improve performance dramatically and speed the initial deployment. Remember to size the disks based on your anticipated use cases. If you plan to create 200GB worth of volumes for your OpenStack instances, make the second virtual disk at least 200GB.

Figure 1. When creating your KVM virtual machine, remember to check the "Customize configuration before install" box, so you can add a second storage device and a second network interface.
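
If you prefer the command line to the virt-manager GUI shown in Figure 1, a single virt-install command can create a suitable VM. This is only a sketch—it assumes your bridge is named br0, that the CentOS 7 minimal ISO sits in your home directory, and that 16GB of RAM, six vCPUs, a 40GB system disk and a 200GB second disk suit your hardware—but it captures the two disks and two NICs the deployment expects:


$ sudo virt-install --name kolla --memory 16384 --vcpus 6 \
    --disk size=40 --disk size=200 \
    --network bridge=br0 --network bridge=br0 \
    --cdrom ~/CentOS-7-x86_64-Minimal-1810.iso \
    --os-variant centos7.0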

Prepare CentOS

Step through the basic configuration of CentOS and reboot. To save resources and time, don't bother installing a desktop environment. Once the system restarts, log in and perform a couple of housekeeping tasks, including setting up a static IP address—no bridging here, just a static address for eth0. Don't configure the eth1 interface, but verify that it exists. Your /etc/sysconfig/network-scripts/ifcfg-eth0 should look something like this:


DEVICE='eth0'
HWADDR='00:AA:0C:28:46:6B'
Type=Ethernet
UUID=25a7bad9-616a-40a0-ace5-52aa0af9fdb7
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.128.1.20
NETMASK=255.255.255.0
GATEWAY=10.128.1.1
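
You can confirm that both interfaces are present—eth0 and eth1 should both be listed—with:


$ ip link show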

A few times when I created the CentOS 7 VM, I found that it would rename eth0 to eth1 automatically and persist that way. Kolla requires you to specify and hard-code the interface names in the configuration file, so this unwanted name change breaks the install. If that happens, just run the following to fix it (no reboot required):


$ sudo ip link set eth1 down
$ sudo ip link set eth1 name eth0
$ sudo ip link set eth0 up

Install the Required Packages

You theoretically can run the following install commands in one fell swoop, but it's better to run them individually to isolate any errors. The epel-release and other packages are required by Kolla, and if any of them fail to install, the rest of the installation will fail:


$ sudo yum update
$ sudo yum install epel-release
$ sudo yum install python-pip
$ sudo yum install python-devel libffi-devel gcc openssl-devel libselinux-python
$ sudo yum install ansible git

Update pip to avoid issues later:


$ sudo pip install --upgrade pip

Install kolla-ansible

You'll need elements of the kolla-ansible package for the install, but you won't use this system version of the application to execute the individual commands later. Keep that in mind if you run into errors during the deployment steps:


$ sudo pip install kolla-ansible --ignore-installed 

Set Up Git and Clone the Kolla Repos

The installation is done primarily from code stored in GitHub. Set your Git identity and clone the two repositories. (You'll need GitHub credentials—namely a public SSH key from your Kolla host VM added to your GitHub settings—only if you plan to use SSH remotes or push changes.)


$ git config --global user.name "Your Name"
$ git config --global user.email "your@github-email"
$ git clone https://github.com/openstack/kolla
$ git clone https://github.com/openstack/kolla-ansible
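
Cloning without arguments gives you the master branch. If you'd rather pin the tooling to a specific release, you can check out a matching stable branch in each repo—stable/stein, current as of this writing, is one example:


$ (cd kolla && git checkout stable/stein)
$ (cd kolla-ansible && git checkout stable/stein)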

Figure 2. Your working directory now should look like this, containing the kolla and kolla-ansible directories from GitHub.

Copy Some Configuration Files and Install kolla-ansible Requirements

Several configuration files provided by the kolla-ansible Git repo must be copied to locations on your Kolla host VM. The requirements.txt files list all the necessary Python packages, and pip installs any that aren't already satisfied:


$ sudo cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/
$ sudo cp /usr/share/kolla-ansible/ansible/inventory/* .
$ sudo pip install -r kolla/requirements.txt
$ sudo pip install -r kolla-ansible/requirements.txt

Copy the Configuration Files

Once the requirements are installed, a number of new resources become available and must be copied to /etc/kolla/ and your working directory:


$ sudo mkdir -p /etc/kolla
$ sudo cp -r kolla-ansible/etc/kolla/* /etc/kolla
$ sudo cp kolla-ansible/ansible/inventory/* .

Create Cinder Volumes for LVM

It's possible to spin up your Kolla cluster without Cinder (the OpenStack block storage component), but you won't be able to create instances other than ones built from the tiny CirrOS image. Since this particular lab uses LVM for the back end, you need to create a volume group named cinder-volumes—Kolla's default—on the second virtual disk you added to your Kolla host VM. Use pvcreate and vgcreate (to learn more, see the Cinder guide link in the Resources section):


$ sudo pvcreate /dev/sda 
$ sudo vgcreate cinder-volumes /dev/sda 
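
You can verify the volume group exists before moving on:


$ sudo vgs cinder-volumes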

Figure 3. If you created a SATA disk when you set up your Kolla host VM, the drive will show up as sda.

Edit the Main Kolla Configuration Settings

Kolla gets information about your virtual environment from the main configuration file, /etc/kolla/globals.yml. Ensure that the following items are set and the lines are uncommented:


# Define the installation type
config_strategy: "COPY_ALWAYS"
kolla_base_distro: "centos"
kolla_install_type: "binary"

# "master" ensures you're pulling the latest release; you also
# can designate a specific OpenStack version
openstack_release: "master"

# This must match the name of the first NIC on the host
network_interface: "eth0"

# This must match the name of the second NIC on the host
neutron_external_interface: "eth1"

# Any unused IP address in the eth0 subnet
kolla_internal_vip_address: "10.128.1.250"

# If the initial deployment fails to bring up the VIP address,
# change "51" to "251"
keepalived_virtual_router_id: "51"

enable_cinder: "yes"
enable_cinder_backend_iscsi: "yes"
enable_cinder_backend_lvm: "yes"
enable_heat: "yes"

Note: you can enable a wide variety of other OpenStack resources here, but for an initial deployment, I recommend this relatively minimal configuration. Also note that this configuration provides Heat and Cinder.
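
Before moving on, it's worth double-checking what's actually active in the file. A quick way to list every uncommented setting:


$ grep -Ev '^\s*(#|$)' /etc/kolla/globals.yml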

Auto-Generate Passwords

OpenStack requires a number of different credentials, and Kolla provides a script to generate them for you. It also provides them, as necessary, to various components during deployment:


$ sudo kolla-ansible/tools/generate_passwords.py 

Later, you'll need the Horizon dashboard login credentials, which are created along with the rest of the passwords. Issue the following command to get the "admin" user password:


$ grep keystone_admin_password /etc/kolla/passwords.yml

Install the Heat Packages

Heat, OpenStack's orchestration service, automates the deployment of full application stacks within your OpenStack environment. I recommend adding this component so you can experiment with building stacks, not just instances:


$ sudo pip install openstack-heat 

Set Up qemu as the VM Type

Because you're running a nested installation of OpenStack in a virtual machine, you need to tell Kolla to use qemu as the hypervisor instead of KVM, the default. Create a new directory and a configuration file:


$ sudo mkdir -p /etc/kolla/config/nova 

Create the file /etc/kolla/config/nova/nova-compute.conf and include the following:


[libvirt]
virt_type=qemu
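
If you prefer, you can create the same file in one step from the shell:


$ sudo tee /etc/kolla/config/nova/nova-compute.conf <<'EOF'
[libvirt]
virt_type=qemu
EOF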

Bootstrap the Kolla Containers

You're now ready to deploy OpenStack! If all the installation steps up to now have completed without errors, your environment is good to go.

When executing the following commands, be sure to use the version of kolla-ansible located in the folder you downloaded from GitHub, not the system version. The system version will not work properly.

Note that you're instructing the system to bootstrap the "all-in-one" deployment, not "multinode". The deploy command can take some time depending on your system resources and whether you're using an SSD or spinning disk for storage. Kolla is launching about 40 Docker containers, so be patient:


$ sudo kolla-ansible/tools/kolla-ansible -i all-in-one bootstrap-servers
$ sudo kolla-ansible/tools/kolla-ansible -i all-in-one prechecks
$ sudo kolla-ansible/tools/kolla-ansible -i all-in-one deploy

Figure 4. Each step offers details as it's happening, so you can follow along and troubleshoot any issues.

Again, the deploy step can take some time—an hour or more. You can follow that progress by running sudo docker ps from a separate shell. Some containers may appear to be "stuck" or show lots of restarts. This is normal. Avoid any urge to halt the install.

Figure 5. Run sudo docker ps in a separate shell to follow along as Kolla deploys the containers it needs to build your OpenStack.
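
If you'd rather have a rolling status view than rerun docker ps by hand, something like this works (it assumes your sudo credentials are already cached in that shell):


$ watch -n 10 'sudo docker ps --format "{{.Names}}: {{.Status}}"'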

When the all-in-one deploy steps complete successfully (failed=0), you may want to make a snapshot of the VM at this point. It's a good place to roll back to in case you run into problems later.

Install the OpenStack Client Tools and Run post-deploy

When the bootstrapping is complete, your OpenStack cluster will be up and running. It's actually accessible and usable in its current form, but the Kolla project provides some additional automation that adds resources and configures networking for you:


$ sudo pip install python-openstackclient --ignore-installed python-glanceclient python-neutronclient
$ sudo kolla-ansible/tools/kolla-ansible post-deploy

Kolla provides an initialization step that brings everything together. The init-runonce script creates networks, keys and instance flavors, among other things. Be sure to edit the file to match your public network configuration before proceeding. That way, your OpenStack instances will immediately have access to your network rather than the script's default, which won't do you any good if your subnet doesn't match it:


$ vi kolla-ansible/tools/init-runonce 

Edit the following lines to match your own network. Using the previous example network (10.128.1.0/24), your entries might look like this:


EXT_NET_CIDR='10.128.1.0/24'    # This will become public1
EXT_NET_RANGE='start=10.128.1.100,end=10.128.1.149'    # These 50 addresses will be floating IPs
EXT_NET_GATEWAY='10.128.1.1'    # Your network gateway

Run the Final Initialization

This is a good time to take a second snapshot of your Kolla host VM. Once you run init-runonce in the next step, you can't roll back.

Start by sourcing the admin user's openrc.sh file, and then kick off the init script:


$ source /etc/kolla/admin-openrc.sh 
$ kolla-ansible/tools/init-runonce 

Figure 6. A sample of the output from the init-runonce script.
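
With initialization done, you also can launch a test instance from the command line rather than waiting for Horizon. The names below—the cirros image, m1.tiny flavor, mykey keypair and demo-net network—are the defaults init-runonce creates; adjust them if you changed the script:


$ openstack server create --image cirros --flavor m1.tiny \
    --key-name mykey --network demo-net demo1
$ openstack server list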

Launch the Horizon Dashboard

If everything goes well, you now have a working OpenStack cluster. You can access it via Horizon at the kolla_internal_vip_address you set in the /etc/kolla/globals.yml file (10.128.1.250 in this example):


http://kolla_internal_vip_address 

Username: admin
Password: $ grep keystone_admin_password /etc/kolla/passwords.yml

Figure 7. The OpenStack Horizon Login

After a moment, you'll be taken to the main OpenStack overview dashboard. Go ahead and explore the interface, including Compute→Instances and Network→Network Topology. In the latter, you'll notice your public network already configured, along with a private subnet and a router that connects them. Also be sure to look at Compute→Images, where you'll see cirros, a small OS image you can deploy immediately as a working instance.

Figure 8. The OpenStack Horizon Dashboard

Figure 9. Launch an instance using the provided cirros qcow2 image.

As you explore, try to keep in mind that this whole cluster is running on a single VM, and it may be slow to respond at times. Be patient, or if you can't be patient and you have more resources available, power off the cluster, and add more RAM and CPU to your virtual machine.

Restarting Your Cluster

If you want to shut down your cluster, make sure there are no running processes (like an instance in mid-launch), and issue a sudo poweroff command on the Kolla host. This shuts down the Docker containers and takes everything offline. You also can issue sudo docker stop $(sudo docker ps -q) to stop all the containers yourself before shutting down. When you restart the Kolla VM, your OpenStack cluster will take a little time to bring all the containers back, but the system will be intact with all the resources just as you left them. In most cases, your instances won't auto-start, so you'll need to start them from the dashboard. To restart your Kolla cluster after a shutdown, start all the related OpenStack containers by issuing this command:


sudo docker start $(sudo docker ps -aq)

This finds all the existing containers—running or stopped—and starts them.

Resources
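
Netplan: https://netplan.io

Kolla-Ansible Documentation: https://docs.openstack.org/kolla-ansible/latest

OpenStack Cinder Documentation: https://docs.openstack.org/cinder/latest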

About the Author

John Tonello is a Global Technical Marketing Manager for SUSE, where he specializes in software-defined infrastructure. He's been a Linux user and enthusiast since building his first Slackware system from diskette more than 20 years ago.