Having joined Cybera a year ago, I got to attend my first-ever OpenStack Summit last month, in Boston. I was really impressed by how large the OpenStack community is, and how many projects are part of it! There were more than 5,000 attendees (from 1,000 companies and 63 countries) at the Summit, who came to learn about the 46 ongoing OpenStack projects.
Every project is created and maintained by contributor groups from around the world, who design it to support different workflow needs and accommodate various OpenStack configuration requirements. Each project has its own purpose and its own niche where it can be installed and used.
I wanted to familiarize myself with three of these projects, and will outline the highlights from the workshops I attended:
Barbican
Barbican is the OpenStack Key Manager service. It provides secure storage, provisioning, and management of “secrets”, and supports symmetric keys, asymmetric keys, and raw secrets.
How it works:
Secrets (encrypted data or encryption keys) are stored in the datastore (MySQL/PostgreSQL + SQLAlchemy). When the API receives an incoming REST request, the request either 1) goes directly to the datastore and is processed synchronously, or 2) goes to the queue (Oslo messaging) to be distributed among workers and processed asynchronously. (The latter option can also involve interactions with third parties, such as certificate authorities.)
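The split between synchronous and asynchronous handling can be sketched roughly like this (a minimal Python illustration of the dispatch idea, not Barbican’s actual code — the function and field names here are invented):

```python
import queue
import threading

# Invented stand-ins for the datastore and the Oslo messaging queue.
datastore = {}
task_queue = queue.Queue()

def handle_request(request):
    """Dispatch an incoming request: simple reads/writes hit the
    datastore synchronously; work that may involve third parties
    (e.g. a certificate authority) is queued for a worker."""
    if request["type"] == "store_secret":
        datastore[request["name"]] = request["payload"]  # synchronous path
        return {"status": "stored"}
    elif request["type"] == "issue_certificate":
        task_queue.put(request)                          # asynchronous path
        return {"status": "pending"}

def worker():
    """A worker drains the queue, e.g. talking to a certificate authority."""
    while True:
        request = task_queue.get()
        if request is None:
            break
        datastore[request["name"]] = "certificate-for-" + request["name"]
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()
print(handle_request({"type": "store_secret", "name": "db-pass", "payload": "s3cret"}))
print(handle_request({"type": "issue_certificate", "name": "web-tls"}))
task_queue.join()
task_queue.put(None)
t.join()
```

The point of the split is that a slow third-party call never blocks the API: the client immediately gets a “pending” answer while a worker finishes the job.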
How to use it:
Among other things, Barbican can be used to store and retrieve passphrases using the OpenStack command line client. This could be helpful for scripts, puppet configurations, and config files.
For instance, to store a password, the following command can be used:
SECRET_REF=$(openstack secret store --secret-type passphrase \
  --name "passphrase" --payload 'Be77erPa$$phrazE' -c secret_ref -f value)
And then to retrieve the password in the script:
PASSPHRASE=$(openstack secret get --payload $SECRET_REF -c payload -f value)
echo $PASSPHRASE
I find the idea of Barbican very interesting, as it is very common for people to store passwords in plain text in config files. Barbican offers a much more secure way to store these passwords, and I’m curious to see if this is something Cybera can use.
Kuryr
Kuryr (Czech for “courier”) is an OpenStack network management tool that creates a unified interface between Docker and Neutron networks. Docker is very popular right now, and many companies use it intensively. What is interesting about Kuryr is that it extends Docker, allowing Neutron’s rich and mature networking functionality to be added to containers.
How it works:
Kuryr is a Docker network plugin that uses Neutron to provide networking services to Docker containers. It implements VIF binding between veth pairs (from the container namespace to the Neutron namespace).
How to use it:
The following situations are examples of how Kuryr creates a “bridge” between Docker and Neutron:
- If the network was created using Docker, Neutron will automatically create a corresponding network and subnet with the same range.
- If a new Docker container is launched on an existing Docker/Kuryr network, Neutron will create a port for this IP address and subnet (as well as a security group with exposed ports, if you have any).
- If the network was created using Neutron, you can recreate it in Docker by passing the neutron.net.name option with the network’s name.
Note: Docker (with Kuryr) will not allow you to create more networks than the Neutron quota allows.
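The bridging behaviour above can be illustrated with a toy model (pure Python, just to show the mapping logic — Kuryr itself is a Docker libnetwork plugin, and the classes and quota value here are invented for the sketch):

```python
NEUTRON_NETWORK_QUOTA = 2  # invented quota for the sketch

class FakeNeutron:
    """Stand-in for Neutron: tracks networks and ports."""
    def __init__(self):
        self.networks = {}  # name -> subnet CIDR
        self.ports = []

    def create_network(self, name, cidr):
        if len(self.networks) >= NEUTRON_NETWORK_QUOTA:
            raise RuntimeError("Neutron quota exceeded")
        self.networks[name] = cidr

    def create_port(self, network, ip):
        self.ports.append((network, ip))

class FakeKuryr:
    """Stand-in for the Kuryr plugin: mirrors Docker calls into Neutron."""
    def __init__(self, neutron):
        self.neutron = neutron

    def docker_network_create(self, name, cidr):
        # A Docker-created network gets a matching Neutron network/subnet.
        self.neutron.create_network(name, cidr)

    def docker_container_run(self, network, ip):
        # A new container on a Kuryr network gets a Neutron port.
        self.neutron.create_port(network, ip)

neutron = FakeNeutron()
kuryr = FakeKuryr(neutron)
kuryr.docker_network_create("app-net", "10.0.0.0/24")
kuryr.docker_container_run("app-net", "10.0.0.5")
kuryr.docker_network_create("net2", "10.0.1.0/24")
try:
    kuryr.docker_network_create("net3", "10.0.2.0/24")
except RuntimeError as err:
    quota_error = str(err)  # Docker (with Kuryr) cannot exceed the Neutron quota
```

The last call fails on purpose: since every Docker network is backed by a Neutron network, the Neutron quota is the hard limit.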
Vitrage
Vitrage is an OpenStack “Root Cause Analysis” service that can be used for organizing, analyzing and expanding OpenStack alarms and events.
How it works:
- Vitrage Data Sources: draw information from different services (Nova, Cinder, Nagios alarms, physical resources, etc.).
- Vitrage Graph: stores the information collected from the data sources, including the relationships between the entities, and makes it accessible to the Vitrage Evaluator.
- Vitrage Evaluator: analyses the state of the Vitrage Graph, applies template rules (“if this, then that”), and assesses the “Root Cause Analysis” relationships between alarms.
How it can be used:
Example situation:
If a switch fails, Nagios triggers an alarm, which goes to the Vitrage data sources. The Vitrage Graph identifies and stores the relationship between the switch and the host connected to it, as well as the VM on that host. The Vitrage Evaluator assesses how the switch outage can impact this VM. It triggers an alarm on the VM, and adds the relationship between the alarm on the switch and the alarm on the VM to the Vitrage Topology. The Evaluator can also change the state of the VM (to “ERROR”).
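The scenario above can be mimicked with a tiny “if this, then that” evaluation over an entity graph (an invented Python sketch of the idea, not Vitrage’s API):

```python
# Entity graph, as Vitrage would store it: switch -> host -> VM.
graph = {
    "switch-1": ["host-1"],
    "host-1": ["vm-1"],
}
alarms = set()
states = {"vm-1": "ACTIVE"}

def descendants(entity):
    """All entities reachable from `entity` in the graph."""
    found = []
    for child in graph.get(entity, []):
        found.append(child)
        found.extend(descendants(child))
    return found

def evaluate(raised_alarm_on):
    """An 'if this, then that' rule: when an alarm is raised on an
    entity, deduce an alarm on every entity it affects, and mark
    affected VMs as ERROR."""
    alarms.add(raised_alarm_on)
    for entity in descendants(raised_alarm_on):
        alarms.add(entity)  # deduced alarm, linked to the root cause
        if entity.startswith("vm"):
            states[entity] = "ERROR"

# Nagios reports a switch failure:
evaluate("switch-1")
print(sorted(alarms))   # alarms on the switch, the host, and the VM
print(states["vm-1"])   # ERROR
```

Walking the graph from the failed switch is what lets the deduced VM alarm point back at the switch as its root cause, rather than appearing as an unrelated event.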
I have worked before with monitoring (Telegraf + Graphite) and automation (StackStorm), and Vitrage was an interesting project for me to learn, as it ties the two together (i.e. it has elements of both monitoring and automation).
Final Thoughts
OpenStack is definitely a huge project, with a large number of subprojects, each designed for a different area. It’s impossible to know and understand all of them, but it was interesting to learn more about these three, and to better understand OpenStack’s possibilities and development directions.