Automating OpenShift Installs

Introduction

I’ve been involved with OpenShift since its pre-Kubernetes days. I’ve also been through its rewrite when Kubernetes came on the scene about four years ago. I’ve been through the evolution of DevOps being built around Kubernetes and the birth of “Cloud Native” and the CNCF.

When I started working on automating installs on my GitHub page, the first thought that came to mind was people asking “Why?”. I’ve done a lot of engagements with many customers, and every one of them started with the “one cluster to rule them all” frame of mind, only to end up with multiple clusters across multiple data centers.

If you’re running Kubernetes/OpenShift in production, you will quickly learn that you can’t just keep doing what you’ve always done and simply swap in Kubernetes. (Kelsey Hightower has a great talk where he says “You can’t rub Kubernetes over your situation to make it better”.)

In the end you are going to be running many clusters and automating that is going to save you lots of time.

Let’s get started!

Technologies Used

I used the following technologies in my tests:

  • OpenShift Container Platform v3.11
  • Red Hat Enterprise Virtualization 4.2
  • Red Hat Identity Manager 4.6
  • Ansible 2.7

Although I haven’t tested it, this same setup should work with OKD, oVirt, and FreeIPA as well.

For those not familiar with RHEV/oVirt or Red Hat IdM/FreeIPA, I’ll give a short explanation of what each provides.

RHEV/oVirt provides a virtualization platform that is comparable to VMware ESXi with vCenter. RHEV has a rich API and templating system that I will be leveraging to create the VMs where I’ll be installing OpenShift.

Red Hat IdM/FreeIPA provides a centrally managed Identity, Policy, and Audit server. It combines LDAP, MIT Kerberos, NTP, DNS, and a CA certificate system. For my testing I am mainly using the DNS system to dynamically create my DNS entries via the API.

Prework and Assumptions

I’m going to make a few assumptions, mostly having to do with infrastructure. These are things that are either out of the scope of this post or things that I’m assuming you already have in place in your environment.

  • I have created a VM template based on the section in the OpenShift docs that describes how to prepare your hosts for OpenShift.
  • All of these hosts have the correct SSH keys installed.
  • I have an IPA domain/realm of cloud.chx that I also use for DNS.
  • I have DHCP set up with all my IPs in DNS (forward AND reverse are set up; a quick check is shown below).
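
Since the installer and my playbooks lean on those DNS entries, it’s worth confirming both lookups resolve before kicking anything off. The hostname and IP below are just placeholders for whatever your DHCP hands out:

  $ dig +short master1.cloud.chx    # forward lookup: should return the host's IP
  $ dig +short -x 192.168.1.51      # reverse lookup: should return the FQDN (example IP)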

Automated Installs

I’m going to go through, at a high level, some of the sections of my playbook from my GitHub repo. For a more detailed overview, please see the repo itself. This is just to give you an idea of the thought process behind automating installs.

RHEV/oVirt Auth

So first and foremost, you’ll need to set up how Ansible authenticates to RHEVM/oVirt. This just sets up the credentials that subsequent ovirt_* module calls will use. The config looks like this:

  - name: Get RHEVM Auth
    ovirt_auth:
      url: https://rhevm.cloud.chx/ovirt-engine/api
      username: admin@internal
      password: "{{ lookup('env','OVIRT_PASSWORD') }}"
      insecure: true
      state: present

Note the {{ lookup('env','OVIRT_PASSWORD') }} value. This says that Ansible will look the password up in an environment variable (I’ll be doing this a lot in my playbook).
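
In practice that just means exporting the variable before running the playbook. A minimal sketch (the playbook name here is hypothetical; use whatever your copy of the repo calls it):

  $ export OVIRT_PASSWORD='changeme'     # read by lookup('env','OVIRT_PASSWORD')
  $ export IPA_PASSWORD='changeme'       # used later by the IPA DNS tasks
  $ ansible-playbook provision-ocp.yml   # hypothetical playbook name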

VM Creation

In order to create a VM from my template, I will need to call the ovirt_vm module. This is where you specify the size and specs of the servers. You also specify the template (the one I created after running through the host preparation guide).

  - name: Create VMs for OpenShift
    ovirt_vm:
      auth: "{{ ovirt_auth }}"
      name: "{{ item }}"
      comment: This is {{ item }} for the OCP cluster
      state: running
      cluster: Default
      template: rhel-7.6-template
      memory: 16GiB
      memory_max: 24GiB
      memory_guaranteed: 12GiB
      cpu_threads: 2
      cpu_cores: 2
      cpu_sockets: 1
      type: server
      operating_system: rhel_7x64
      nics:
      - name: nic1
        profile_name: ovirtmgmt
      wait: true
    with_items:
      - master1
      - app1
      - app2
      - app3

Adding OCS Disk

Since I am using OpenShift Container Storage (OCS), I used the ovirt_disk module to attach an extra disk to my application servers:

  - name:  Attach OCS disk to VM 
    ovirt_disk:
      auth: "{{ ovirt_auth }}"
      name: "{{ item }}_disk2"
      vm_name: "{{ item }}"
      state: attached
      size: 250GiB
      storage_domain: vmdata
      format: cow
      interface: virtio_scsi
      wait: true
    with_items:
      - app1
      - app2
      - app3

Creating an install hostfile

In order to install OpenShift v3.11, you will need to create an Ansible hosts file (since OpenShift v3.x uses Ansible to install). I use Ansible templating to dynamically create this file for the installation. Here I am pulling the information for the servers I just created and using it to build my Ansible hosts file for OpenShift:

  - name: Obtain VM information
    ovirt_vm_facts:
      auth: "{{ ovirt_auth }}"
      pattern: name=master* or name=app* and cluster=Default
      fetch_nested: true
      nested_attributes: ips

  - name: Write out a viable hosts file for OCP installation
    template:
      src: ../templates/poc-generated_inventory.j2
      dest: ../output_files/poc-generated_inventory.ini
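
The template itself isn’t shown in this post, but to give you an idea of the approach, here is a minimal, illustrative sketch of what poc-generated_inventory.j2 could look like. It loops over the ovirt_vms fact gathered above; a real OCP 3.11 inventory needs many more OSEv3 variables than this:

  # poc-generated_inventory.j2 (illustrative sketch only)
  [OSEv3:children]
  masters
  etcd
  nodes

  [OSEv3:vars]
  ansible_user=root
  openshift_deployment_type=openshift-enterprise

  [masters]
  {% for vm in ovirt_vms if vm.name.startswith('master') %}
  {{ vm.fqdn }}
  {% endfor %}

  [etcd]
  {% for vm in ovirt_vms if vm.name.startswith('master') %}
  {{ vm.fqdn }}
  {% endfor %}

  [nodes]
  {% for vm in ovirt_vms %}
  {{ vm.fqdn }} openshift_node_group_name="{{ 'node-config-master' if vm.name.startswith('master') else 'node-config-compute' }}"
  {% endfor %}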

Now that I have that file, I add (what will be) the master to the in-memory inventory in order to copy the generated inventory over to the master.

  - name: Obtain Master1 VM information
    ovirt_vm_facts:
      auth: "{{ ovirt_auth }}"
      pattern: name=master1 and cluster=Default
      fetch_nested: true
      nested_attributes: ips

  - name: Set Master1 VM Fact
    set_fact:
      ocp_master: "{{ ovirt_vms.0.fqdn }}"

  - name: Add "{{ ocp_master }}" to in memory inventory
    add_host:
      name: "{{ ocp_master }}"

  - name: Copy inventory to "{{ ocp_master }}"
    copy:
      src: ../output_files/poc-generated_inventory.ini
      dest: /etc/ansible/hosts
      owner: root
      group: root
      mode: 0644
    delegate_to: "{{ ocp_master }}"

Note I’m using delegate_to in order to reference the master that just got provisioned.

Creating DNS entries

Since I’m using IPA for DNS, I will be tapping into the API in order to create entries. I set up some defaults (like domains and such) as variables:

- hosts: all
  vars:
    wildcard_domain: "osa.cloud.chx"
    console_fqdn: "openshift.cloud.chx"
    zone_fwd: "cloud.chx"

Using these variables, I created DNS entries pointing the wildcard DNS and the console DNS to the master (where these objects will be running):

  - name: Create DNS CNAME record
    ipa_dnsrecord:
      ipa_host: ipa1.cloud.chx
      ipa_pass: "{{ lookup('env','IPA_PASSWORD') }}"
      ipa_user: admin
      ipa_timeout: 30
      validate_certs: false
      zone_name: "{{ zone_fwd }}"
      record_name: openshift
      record_type: 'CNAME'
      record_value: "{{ ocp_master }}."
      record_ttl: 3600
      state: present

  - name: IPA create Wildcard DNS record
    ipa_dnsrecord:
      ipa_host: ipa1.cloud.chx
      ipa_pass: "{{ lookup('env','IPA_PASSWORD') }}"
      ipa_user: admin
      ipa_timeout: 30
      validate_certs: false
      zone_name: "{{ zone_fwd }}"
      record_name: '*.osa'
      record_type: 'CNAME'
      record_value: "{{ ocp_master }}."
      record_ttl: 3600
      state: present

Note: If the entries are already set, this overwrites them. If nothing is set, it will create them.
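
A quick way to confirm the records took (assuming your resolver points at the IPA server) is to query them directly; any name under the wildcard should resolve the same way the console name does:

  $ dig +short openshift.cloud.chx       # the console CNAME
  $ dig +short anything.osa.cloud.chx    # an arbitrary name under the wildcard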

Preparing the hosts

When I created the VM template, I made it as generic as possible. Still, it’s nice to make sure the servers are updated with the proper packages. This is kind of ugly and a work in progress.

  - name: Update packages via ansible from "{{ ocp_master }}"
    shell: |
      ansible all -m shell -a "subscription-manager register --username {{ lookup('env','OREG_AUTH_USER') }} --password {{ lookup('env','OREG_AUTH_PASSWORD') }}"
      ansible all -m shell -a "subscription-manager attach --pool {{ lookup('env','POOL_ID') }}"
      ansible all -m shell -a "subscription-manager repos --disable=*"
      ansible all -m shell -a "subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-ose-3.11-rpms --enable=rhel-7-server-ansible-2.6-rpms --enable=rh-gluster-3-client-for-rhel-7-server-rpms"
      #ansible all -m shell -a "yum -y update"
      # Temp fix because of https://access.redhat.com/solutions/3949501
      ansible all -m shell -a "yum -y update --exclude java-1.8.0-openjdk*"
      ansible all -m shell -a "systemctl reboot"
    delegate_to: "{{ ocp_master }}"

  - name: Wait for servers to restart
    wait_for:
      host: "{{ ocp_master }}"
      port: 22
      delay: 30
      timeout: 300

In the above I’m using the shell module. Ideally you’d want to use the redhat_subscription module along with package/yum (a rough sketch of that approach follows below). For right now this should work fine.

Note: I reference a bug where you have to add an exclude to your yum update command.
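
For reference, here’s a rough, untested sketch of what that could look like using the redhat_subscription, rhsm_repository, and yum modules, written as tasks that would run against all of the cluster hosts (the pool ID and repo list mirror the shell commands above):

  # Sketch only: assumes the same environment variables as before
  - name: Register the host and attach the subscription pool
    redhat_subscription:
      username: "{{ lookup('env','OREG_AUTH_USER') }}"
      password: "{{ lookup('env','OREG_AUTH_PASSWORD') }}"
      pool_ids:
        - "{{ lookup('env','POOL_ID') }}"
      state: present

  - name: Enable only the repos OCP 3.11 needs
    rhsm_repository:
      name:
        - rhel-7-server-rpms
        - rhel-7-server-extras-rpms
        - rhel-7-server-ose-3.11-rpms
        - rhel-7-server-ansible-2.6-rpms
        - rh-gluster-3-client-for-rhel-7-server-rpms
      state: enabled

  - name: Update all packages (keeping the openjdk exclude from the bug above)
    yum:
      name: '*'
      state: latest
      exclude: java-1.8.0-openjdk*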

Running the installer

Now that I have my hosts file for the installer, my DNS in place, and my VMs ready to go, I can go ahead with the install. From my laptop, my playbook runs the installer playbooks directly on the master. Again I’m using delegate_to to do this.

  - name: Running OCP prerequisites from "{{ ocp_master }}"
    shell: |
      ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
    delegate_to: "{{ ocp_master }}"

  - name: Running OCP installer from "{{ ocp_master }}"
    shell: |
      ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
    delegate_to: "{{ ocp_master }}"

  - name: Display OCP artifacts from "{{ ocp_master }}"
    shell: oc get pods --all-namespaces
    register: ocpoutput
    delegate_to: "{{ ocp_master }}"

  - debug: var=ocpoutput.stdout_lines

Once the install is done I run an oc get pods --all-namespaces to see what the status of everything is, and I print it out.

Troubleshooting

I took a “cattle” approach in building this. Since I’m creating everything dynamically, I basically destroyed everything, fixed what was wrong, and then re-ran the playbook. This is the beauty of automating installs. Here is an example of my destroy/uninstall playbook:

  - name: Delete VM
    ovirt_vm:
      auth: "{{ ovirt_auth }}"
      name: "{{ item }}"
      state: absent
      cluster: Default
      wait: true
    with_items:
      - master1
      - master2
      - master3
      - infra1
      - infra2
      - infra3
      - app1
      - app2
      - app3
      - lb

As you can see, I just delete everything. Since my DNS gets updated on creation, there’s no real NEED to remove those entries either! (Although it’s probably good practice to do so; a sketch of that follows.)
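
If you do want the uninstall to clean up DNS as well, the same ipa_dnsrecord module can remove the records by flipping state to absent. A minimal sketch for the console record (the wildcard record is handled the same way, and this assumes the ocp_master fact from earlier or a hard-coded value):

  - name: Remove the console CNAME record
    ipa_dnsrecord:
      ipa_host: ipa1.cloud.chx
      ipa_pass: "{{ lookup('env','IPA_PASSWORD') }}"
      ipa_user: admin
      validate_certs: false
      zone_name: cloud.chx
      record_name: openshift
      record_type: 'CNAME'
      record_value: "{{ ocp_master }}."
      state: absent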

Summary

In this blog I took you through a high-level overview of how you can automate Kubernetes/OpenShift installations using open source tools. I used OpenShift, RHEV, Red Hat IdM, and Ansible specifically in my example.

You can also apply this to other tools like dynamic DNS, Kubernetes, Puppet, Chef, VMware ESXi/vCenter, etc. The tools aren’t necessarily important, but getting to the point where you are automating installs is!

If you plan on running Kubernetes/OpenShift in production, you will be running many clusters, and having a way to stamp these out will be paramount for your cloud native environment.

Use less YAML

I have been thinking about what my first blog post should be about. I figured since I just took the CKA (by the way, I passed!), I have Kubernetes shorthand commands on the brain, so I’ll write about using less YAML when working with k8s.

When studying for the CKA, I came across a lot of blogs/howtos that show things like creating pods and deployments by writing a YAML file and using kubectl create -f … or (what’s worse) a cat <<EOF | kubectl create -f - to create a resource.

Now don’t get me wrong; I’m not bashing using YAML. When working with Kubernetes, you’ll inevitably have to use YAML at some point, and it’s a completely valid way to do things. But when you’re doing something like an exam, where time is precious, these shortcuts can come in handy!

During the CKA exam you have, if you average it out, about 7 minutes per question. So time is precious, and I learned how to generate resources through the kubectl command to save time.

So to create a deployment, you can do the following:

$ kubectl create deployment welcome-php --image=quay.io/redhatworkshops/welcome-php:latest
deployment.apps/welcome-php created

This creates all my resources:

$ kubectl get deploy,rs,pod
NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/welcome-php   1         1         1            1           7m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.extensions/welcome-php-57db6cbb6   1         1         1       7m

NAME                              READY   STATUS    RESTARTS   AGE
pod/welcome-php-57db6cbb6-kjtcz   1/1     Running   0          7m


Now, I can actually create a service by exposing the deployment:

$ kubectl expose deploy welcome-php --port=8080 --target-port=8080
service/welcome-php exposed


Now I have all the resources I need for my application:

$ kubectl get deploy,svc,rs,pod
NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/welcome-php   1         1         1            1           15m

NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/welcome-php   ClusterIP   100.71.31.164   <none>        8080/TCP   1m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.extensions/welcome-php-57db6cbb6   1         1         1       15m

NAME                              READY   STATUS    RESTARTS   AGE
pod/welcome-php-57db6cbb6-kjtcz   1/1     Running   0          15m


If you look at kubectl create -h, it’ll show you what you can create via the CLI. Here is a snippet:

Available Commands:
  clusterrole         Create a ClusterRole.
  clusterrolebinding  Create a ClusterRoleBinding for a particular ClusterRole
  configmap           Create a configmap from a local file, directory or literal value
  deployment          Create a deployment with the specified name.
  job                 Create a job with the specified name.
  namespace           Create a namespace with the specified name
  poddisruptionbudget Create a pod disruption budget with the specified name.
  priorityclass       Create a priorityclass with the specified name.
  quota               Create a quota with the specified name.
  role                Create a role with single rule.
  rolebinding         Create a RoleBinding for a particular Role or ClusterRole
  secret              Create a secret using specified subcommand
  service             Create a service using specified subcommand.
  serviceaccount      Create a service account with the specified name


So to recap, you can run the following to create and expose your application without using any YAML:

$ kubectl create deployment welcome-php --image=quay.io/redhatworkshops/welcome-php:latest
$ kubectl expose deploy welcome-php --port=8080 --target-port=8080


You can even do the same with a pod and a NodePort service. (Note that I named the NodePort service the same as the pod; the service’s default selector of app=nginx then matches the label I gave the pod.)

$ kubectl run nginx --image=nginx --generator=run-pod/v1 -l app=nginx
pod/nginx created

$ kubectl create service nodeport nginx --node-port=32000 --tcp=80:80 
service/nginx created
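
To convince yourself the service actually picked up the pod, check its endpoints; if the selector matched, the pod’s IP will be listed there.

$ kubectl get endpoints nginx
$ kubectl get svc nginx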


When working with Kubernetes, you will run into lots of YAML that you will be copying and pasting. You can save yourself some typing if you use kubectl to create these resources for you!