Getting Familiar With ClusterAPI

Introduction

There are many tools around to get a Kubernetes cluster up and running. Some of these include kops, kubeadm, openshift-ansible, and kubicorn (just to name a few). There is even "The Hard Way", made famous by Kelsey Hightower.

Some of these tools (like kops and kubicorn) aim to manage your entire stack, from the infrastructure layer all the way up to the Kubernetes layer. This is what I like to think of as a fully managed install. Other tools take the UPI (User Provided Infrastructure) approach: tools like openshift-ansible and kubeadm let you bring already existing infrastructure and just layer Kubernetes on top of it.

ClusterAPI is a Kubernetes SIG project that brings a declarative approach to setting up Kubernetes clusters. The idea here is that you describe a "desired state" (your described cluster) and ClusterAPI reconciles it for you. The SIG has two goals for ClusterAPI: 1. use declarative, Kubernetes-style APIs and 2. be environment agnostic (while still being flexible).
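
To make the declarative idea concrete: once a cluster is up, its desired state lives as ordinary Kubernetes objects (Clusters, Machines, and so on) that the controllers keep reconciling. A minimal sketch of poking at them, assuming the Cluster API objects have been pivoted into the new cluster (which is what the create flow later in this post does) and that kubeconfig points at it:

# kubectl --kubeconfig kubeconfig get clusters,machines
# kubectl --kubeconfig kubeconfig describe machine controlplane-0

Editing a Machine spec (or adding a new Machine object) is how you would change the cluster; the provider then does the AWS work to make reality match.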

This diagram, taken from their GitHub repo, shows the architecture:

In this blog I'm going to go through an example of installing Kubernetes on AWS using the ClusterAPI AWS provider.

Prerequisites

So I mostly followed the quickstart on the GitHub page. It lists some good tools to have (some as must-haves and others as nice-to-haves). To summarize, here are the must-haves (a quick sanity check follows the list):

  • Linux or Mac (no Windows support at this time)
  • AWS Credentials
  • An IAM role to give to the k8s control-plane
  • KIND
    • KIND has its own dependencies, including docker
  • The gettext package installed
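
A quick sanity check for the must-haves (gettext is what provides envsubst, which I believe the helper script below uses to fill in its templates; exact version flags may vary):

# kind version
# docker version --format '{{.Server.Version}}'
# envsubst --version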

There are also some optional nice-to-haves listed in the quickstart; I'd recommend at least the AWS CLI and kubectl, since I use both later in this post.

Once you have those, you'll need to install the CLI tools. Below is what I installed as of 19-MAR-2019; please see the releases page for the latest binaries.

# wget https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.1.1/clusterawsadm-linux-amd64
# wget https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.1.1/clusterctl-linux-amd64
# chmod +x clusterctl-linux-amd64
# chmod +x clusterawsadm-linux-amd64
# mv clusterctl-linux-amd64 /usr/local/bin/clusterctl
# mv clusterawsadm-linux-amd64 /usr/local/bin/clusterawsadm
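
A quick check that both binaries ended up on the PATH:

# which clusterctl clusterawsadm
/usr/local/bin/clusterctl
/usr/local/bin/clusterawsadm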

I also downloaded the examples tarball to help generate some files I'll need later.

# wget https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.1.1/cluster-api-provider-aws-examples.tar
# tar -xf cluster-api-provider-aws-examples.tar

Setting up environment variables

There is a helper script in the cluster-api-provider-aws-examples.tar tarball that generates most of the manifests for you. The doc explains some, but not all, of the environment variables you need to export. I dug around in the script and found that these are helpful to set:

export AWS_REGION="us-west-1"
export AWS_ACCESS_KEY_ID="XXXXXXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/ZZZZZZZZZZ"
export SSH_KEY_NAME="chernand-ec2"
export CLUSTER_NAME="pony-unicorns"
export CONTROL_PLANE_MACHINE_TYPE="m4.xlarge"
export NODE_MACHINE_TYPE="m4.xlarge"

When exporting SSH_KEY_NAME, you need to make sure this key exists in AWS already.
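
If you're not sure the key is there (or you want to push one up), the AWS CLI can check and import it. This is a hedged sketch: the ~/.ssh/id_rsa.pub path is my assumption, and newer CLI versions may want fileb:// instead of file://.

# aws ec2 describe-key-pairs --key-names "${SSH_KEY_NAME}" --region "${AWS_REGION}"
# aws ec2 import-key-pair --key-name "${SSH_KEY_NAME}" --region "${AWS_REGION}" \
--public-key-material file://$HOME/.ssh/id_rsa.pub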

I verified my exports with the AWS CLI:

# aws sts get-caller-identity
{
    "Account": "123123123123",
    "UserId": "TH75ISMYR3F4RCHUS3R1D",
    "Arn": "arn:aws:iam::123123123123:user/clusterapiuser"
}

Generating manifests

When I untarred the cluster-api-provider-aws-examples.tar file, it created an aws dir in my current working directory.

# tree ./aws
./aws
├── addons.yaml
├── cluster-network-spec.yaml.template
├── cluster.yaml.template
├── generate-yaml.sh
├── getting-started.md
├── machines.yaml.template
└── provider-components-base.yaml

Running the generate-yaml.sh script in this directory will generate the manifest files needed by the installer.

# cd ./aws
# ./generate-yaml.sh 
Done generating /root/aws/out/cluster.yaml
Done generating /root/aws/out/machines.yaml
Done copying /root/aws/out/addons.yaml
Generated credentials
Done writing /root/aws/out/provider-components.yaml
WARNING: /root/aws/out/provider-components.yaml includes credentials

Go into the out directory and examine these files, making sure they match what you set in your environment variables.

# cd out
# cat *
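
For a quicker spot check, grep for the values you exported. The field names here (region, instanceType, keyName) are my guess at what the templates render to, so adjust them to whatever you actually see in the files:

# grep -E "region|instanceType|keyName|${CLUSTER_NAME}" *.yaml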

Once you’re okay with these manifests…you can move along to the installer!

Installing Kubernetes on AWS

Using clusterctl, I created a cluster with the following command:

# cd /root/aws/out
# clusterctl create cluster -v 3 \
--bootstrap-type kind \
--provider aws \
-m machines.yaml \
-c cluster.yaml \
-p provider-components.yaml \
-a addons.yaml

You should see the following output

I0319 19:11:27.808556   25430 createbootstrapcluster.go:27] Creating bootstrap cluster
I0319 19:11:27.808667   25430 kind.go:57] Running: kind [create cluster --name=clusterapi]
I0319 19:12:10.001664   25430 kind.go:60] Ran: kind [create cluster --name=clusterapi] Output: Creating cluster "clusterapi" ...
 • Ensuring node image (kindest/node:v1.13.3) 🖼  ...
 ✓ Ensuring node image (kindest/node:v1.13.3) 🖼
 • Preparing nodes 📦  ...
 ✓ Preparing nodes 📦
 • Creating kubeadm config 📜  ...
 ✓ Creating kubeadm config 📜
 • Starting control-plane 🕹️  ...
 ✓ Starting control-plane 🕹️
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="clusterapi")"
kubectl cluster-info
I0319 19:12:10.001735   25430 kind.go:57] Running: kind [get kubeconfig-path --name=clusterapi]
I0319 19:12:10.043264   25430 kind.go:60] Ran: kind [get kubeconfig-path --name=clusterapi] Output: /root/.kube/kind-config-clusterapi
I0319 19:12:10.046231   25430 clusterdeployer.go:78] Applying Cluster API stack to bootstrap cluster
I0319 19:12:10.046258   25430 applyclusterapicomponents.go:26] Applying Cluster API Provider Components
I0319 19:12:10.046273   25430 clusterclient.go:919] Waiting for kubectl apply...
I0319 19:12:11.757657   25430 clusterclient.go:948] Waiting for Cluster v1alpha resources to become available...
I0319 19:12:11.765143   25430 clusterclient.go:961] Waiting for Cluster v1alpha resources to be listable...
I0319 19:12:11.792776   25430 clusterdeployer.go:83] Provisioning target cluster via bootstrap cluster
I0319 19:12:11.852236   25430 applycluster.go:36] Creating cluster object pony-unicorns in namespace "default"
I0319 19:12:11.877091   25430 clusterdeployer.go:92] Creating control plane controlplane-0 in namespace "default"
I0319 19:12:11.897136   25430 applymachines.go:36] Creating machines in namespace "default"
I0319 19:12:11.915500   25430 clusterclient.go:972] Waiting for Machine controlplane-0 to become ready...

What's happening here is that the installer creates a local Kubernetes cluster using KIND; that local bootstrap cluster then uses your credentials to stand up the real Kubernetes cluster on AWS. Open another terminal window and you can watch the following pods come up.
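
In that second terminal, point kubectl at the temporary KIND cluster first (this is the same export the installer prints), then run the command below:

# export KUBECONFIG="$(kind get kubeconfig-path --name="clusterapi")"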

# kubectl  get pods  --all-namespaces 
NAMESPACE             NAME                                               READY   STATUS    RESTARTS   AGE
aws-provider-system   aws-provider-controller-manager-0                  1/1     Running   0          72s
cluster-api-system    cluster-api-controller-manager-0                   1/1     Running   0          72s
kube-system           coredns-86c58d9df4-4r2jx                           1/1     Running   0          73s
kube-system           coredns-86c58d9df4-lg2zd                           1/1     Running   0          73s
kube-system           etcd-clusterapi-control-plane                      1/1     Running   0          24s
kube-system           kube-apiserver-clusterapi-control-plane            1/1     Running   0          5s
kube-system           kube-controller-manager-clusterapi-control-plane   1/1     Running   0          16s
kube-system           kube-proxy-qj7qp                                   1/1     Running   0          73s
kube-system           kube-scheduler-clusterapi-control-plane            1/1     Running   0          17s
kube-system           weave-net-qpcq2                                    2/2     Running   0          73s

Once they are all running, tail the log of the aws-provider-controller-manager-0 pod to see what’s happening (useful for debugging).

# kubectl logs -f -n aws-provider-system aws-provider-controller-manager-0

Once it's done, you'll see output that looks something like this (note that the KIND cluster is only temporary):

I0319 19:22:58.000769   25430 clusterdeployer.go:143] Done provisioning cluster. You can now access your cluster with kubectl --kubeconfig kubeconfig
I0319 19:22:58.000823   25430 createbootstrapcluster.go:36] Cleaning up bootstrap cluster.
I0319 19:22:58.000832   25430 kind.go:57] Running: kind [delete cluster --name=clusterapi]
I0319 19:22:58.882121   25430 kind.go:60] Ran: kind [delete cluster --name=clusterapi] Output: Deleting cluster "clusterapi" ...
$KUBECONFIG is still set to use /root/.kube/kind-config-clusterapi even though that file has been deleted, remember to unset it
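
Since the bootstrap cluster is gone at this point, do what that message says in any terminal where you exported KUBECONFIG:

# unset KUBECONFIG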

Now for the moment of truth…let's see if I can see my cluster…

# kubectl get nodes --kubeconfig=kubeconfig 
NAME                                      STATUS   ROLES    AGE   VERSION
ip-10-0-0-11.us-west-1.compute.internal   Ready    <none>   70m   v1.13.3
ip-10-0-0-88.us-west-1.compute.internal   Ready    master   72m   v1.13.3

It works! I have one control-plane node and one worker node. It looks like they are also preparing for multi-master, since I can see that an ELB was created for the API server.

# kubectl config view --kubeconfig=kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://pony-unicorns-apiserver-1159964288.us-west-1.elb.amazonaws.com:6443
  name: pony-unicorns
contexts:
- context:
    cluster: pony-unicorns
    user: kubernetes-admin
  name: kubernetes-admin@pony-unicorns
current-context: kubernetes-admin@pony-unicorns
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
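
You can also confirm the load balancer from the AWS CLI. A hedged example using the classic ELB API (which is what the generated DNS name above looks like); the exact load balancer name is generated for you:

# aws elb describe-load-balancers --region "${AWS_REGION}" \
--query "LoadBalancerDescriptions[].{name:LoadBalancerName,dns:DNSName}"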

Uninstalling and cleanup

To delete everything I created, I used the clusterctl command

# clusterctl delete cluster \
--bootstrap-type kind \
--kubeconfig kubeconfig -p provider-components.yaml 
I0319 20:37:14.156586    2852 clusterdeployer.go:149] Creating bootstrap cluster
I0319 20:37:14.156630    2852 createbootstrapcluster.go:27] Creating bootstrap cluster
I0319 20:37:56.466031    2852 clusterdeployer.go:157] Pivoting Cluster API stack to bootstrap cluster
I0319 20:37:56.466130    2852 pivot.go:67] Applying Cluster API Provider Components to Target Cluster
I0319 20:37:57.876975    2852 pivot.go:72] Pivoting Cluster API objects from bootstrap to target cluster.
I0319 20:38:33.196752    2852 clusterdeployer.go:167] Deleting objects from bootstrap cluster
I0319 20:38:33.196782    2852 clusterdeployer.go:214] Deleting MachineDeployments in all namespaces
I0319 20:38:33.198438    2852 clusterdeployer.go:219] Deleting MachineSets in all namespaces
I0319 20:38:33.200085    2852 clusterdeployer.go:224] Deleting Machines in all namespaces
I0319 20:38:43.227284    2852 clusterdeployer.go:229] Deleting MachineClasses in all namespaces
I0319 20:38:43.229738    2852 clusterdeployer.go:234] Deleting Clusters in all namespaces
I0319 20:41:13.253792    2852 clusterdeployer.go:172] Deletion of cluster complete
I0319 20:41:13.254168    2852 createbootstrapcluster.go:36] Cleaning up bootstrap cluster.

This was the easiest and most straightforward part of the whole process.
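
To double-check that nothing was left behind, you can look for instances that appear to belong to the cluster. This assumes the instances carry a Name tag starting with the cluster name, which may not match how the provider actually tags things, so treat it as a sketch:

# aws ec2 describe-instances --region "${AWS_REGION}" \
--filters "Name=tag:Name,Values=${CLUSTER_NAME}*" \
--query "Reservations[].Instances[].{id:InstanceId,state:State.Name}"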

Conclusion

In this blog I took a look at ClusterAPI and tested the ClusterAPI AWS Provider. The ClusterAPI SIG aims to unify how we provision the infrastructure for Kubernetes clusters. It aims to rebuild what we currently have out there by learning from tools like kops, kubicorn, and Ansible.

The project is still in its infancy and is bound to change. I encourage you to try it out and provide feedback. There is also a channel on the Kubernetes Slack that you can join.