Installing the HCL Connections Component Pack 6.5 CR1 – Part 2: Installing Kubernetes, Calico and Helm
In a series of articles I’m trying to fill the gaps in the HCL documentation regarding the Component Pack. In the first part I covered the installation and configuration of Docker. In this 2nd part I’ll cover the installation of Kubernetes together with Calico and Helm. After this, the basic infrastructure is set up and the actual installation of the Component Pack can begin. The installation I’m doing is a non-HA Kubernetes platform: one master and 2 worker nodes. If you need to set up an HA Kubernetes platform, you have to do a few extra steps, but if you use this manual and combine it with the HA documentation from HCL, you should be good.
Installing Kubernetes
Preparations
Step one in the HCL documentation for the Kubernetes install is to disable swap on your master and worker nodes. This step is still valid. To switch off swap until the next reboot, type:
swapoff -a
To disable swap after a reboot, edit your /etc/fstab file and comment out the line with your swap drive.
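If you want to script that edit as well, something like the line below should do it. It assumes the swap entry is the only line in /etc/fstab containing the word “swap”, so check the file afterwards:
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab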
Kubernetes also doesn’t work with SELinux enabled. The HCL documentation tells you to disable it with setenforce 0, but that only disables it until the next reboot. My client would like a proper SELinux policy, which means they need to see which SELinux policies to change. For this, set SELinux to permissive mode. You do this until reboot by typing:
setenforce permissive
and changing your /etc/selinux/config file. I scripted that with this line:
sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
The HCL documentation also mentions that:
Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. To avoid this problem, run the following commands to ensure that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
sudo bash -c 'cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
sysctl --system
By default this value should already be 1, so this step shouldn’t be necessary. However, I have found some reports of problems where the above was the solution, so I advise sticking to the documentation at this point.
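To verify the value, you can query it directly (note that this key only exists once the br_netfilter kernel module is loaded):
sysctl net.bridge.bridge-nf-call-iptables
This should return net.bridge.bridge-nf-call-iptables = 1.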
The Kubernetes version in the standard CentOS repository is ancient, so you can’t use that. At the client where I’m installing the Component Pack, they actually already had Kubernetes in a private repository, so I didn’t need to add an extra repository. If you do, the procedure from the HCL documentation hasn’t changed on this point, so:
sudo bash -c 'cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF'
Installation
At the time of writing, the latest stable version of Kubernetes is 1.18.0. HCL has mentioned that they wish to stay current with Kubernetes versions, so we should expect compatibility with Kubernetes 1.18, though maybe not with the current version of the Component Pack. The Component Pack was verified with Kubernetes version 1.17.2, so that’s the version we’re going to install.
Update: in the meantime HCL has updated their documentation; they state that they tested against Kubernetes 1.18.2 and continue to test with the latest versions. Use the most current Kubernetes version if you’re doing a new installation.
yum install -y kubelet-1.17.2* kubeadm-1.17.2* kubectl-1.17.2* --disableexcludes=kubernetes
To prevent automatic updates of Kubernetes, you should disable the repository with:
yum-config-manager --disable kubernetes*
To make sure the kubelet starts on system start, you have to finish the installation with:
systemctl enable kubelet
You have to perform these steps on all master and worker nodes.
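To double-check that every machine ended up on the same version, you can run:
kubeadm version
kubelet --version
Both should report v1.17.2 on all masters and workers.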
Configuring Kubernetes
In the next step, the HCL documentation lets you create a kubeadm-config.yaml file to initialise the Kubernetes master, with one variant for the case where you want to use pod security policies and one where you don’t. That part is only valid for Kubernetes 1.11 and you will find that it will not work with Kubernetes 1.17.2. As for the admission plugin for the Pod Security Policy: it is enabled by default in later versions of Kubernetes, so there’s no need to do anything special in this area anymore. I could provide a working kubeadm-config.yaml file here (a sketch follows the option descriptions below), but as we don’t need any advanced options, it’s actually much easier to just use the options in the init command, like:
kubeadm init --kubernetes-version=v1.17.2 --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint=con-k8sm.example.com --node-name=con-k8sm
You should always use the first 2 options; the latter 2 are optional. To explain these options:
--kubernetes-version=v1.17.2
If you don’t define this option, Kubernetes will use version “stable-1” by default. We specifically want 1.17.2 here, so that’s what I defined.
--pod-network-cidr=192.168.0.0/16
Here you define the network segment that Calico will use. It doesn’t really matter much which segment you use, as long as it’s a segment that is currently not routable from your servers. 192.168.0.0/16 is the Calico default.
--control-plane-endpoint=con-k8sm.example.com
Kubernetes by default binds to the network adapter that’s defined by the hostname of the machine. I can think of 3 reasons why you may want to change that:
- You want to build a highly available Kubernetes cluster, where you define the address of the load balancer here, or you want to keep the option to do so at a later stage
- You have multiple NICs in your machine and you want to bind the Kubernetes master to a different NIC than the default
- You want to bind the Kubernetes master to an alias instead of the machine hostname. This was my reason to use this option
--node-name=con-k8sm
If you don’t use this option, Kubernetes will use the hostname of the current machine as the node name. The hostnames of my machines here are names like app409, so not particularly descriptive. Each machine has a far more descriptive alias in DNS though, so I prefer to have all nodes named after this alias.
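For reference, a kubeadm-config.yaml that should be roughly equivalent to the init command above would look like the sketch below (untested, based on the v1beta2 config API that kubeadm 1.17 uses; adjust the names to your own environment):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  # equivalent of --node-name
  name: con-k8sm
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# equivalent of --kubernetes-version
kubernetesVersion: v1.17.2
# equivalent of --control-plane-endpoint
controlPlaneEndpoint: "con-k8sm.example.com:6443"
networking:
  # equivalent of --pod-network-cidr
  podSubnet: "192.168.0.0/16"
You would then initialise the master with kubeadm init --config=kubeadm-config.yaml instead of passing the options on the command line.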
The kubeadm init command above produces a long output, which I will list below so you can compare:
W0515 16:08:25.103068 128993 validation.go:28] Cannot validate kubelet config - no validator is available
W0515 16:08:25.103134 128993 validation.go:28] Cannot validate kube-proxy config - no validator is available
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [con-k8sm kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local con-k8sm.example.com] and IPs [10.96.0.1 10.8.85.220]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [acc-con-k8sm localhost] and IPs [10.8.85.220 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [acc-con-k8sm localhost] and IPs [10.8.85.220 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0515 16:08:29.918297 128993 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0515 16:08:29.919446 128993 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 33.505599 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node con-k8sm as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node con-k8sm as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: q75cp2.ffy7sfw2ibiboo0y
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:

  kubeadm join con-k8sm.example.com:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join con-k8sm.example.com:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
If you did not provide a control-plane-endpoint in your init command, you will see a warning message, which you may ignore. Note the “RBAC Roles” part in the output above: it shows that the admission plugin is already there and that you can simply add the pod security policy, if you so wish, by executing the commands according to the HCL documentation:
unzip -p /<path-to>/ComponentPack-6.5.0.1.zip microservices_connections/hybridcloud/support/psp/privileged-psp-with-rbac.yaml > privileged-psp-with-rbac.yaml
kubectl apply -f privileged-psp-with-rbac.yaml
podsecuritypolicy.policy/privileged created
clusterrole.rbac.authorization.k8s.io/privileged-psp created
rolebinding.rbac.authorization.k8s.io/kube-system-psp created
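If you want to verify that the policy is indeed in place, you can list the pod security policies:
kubectl get psp
The privileged policy should show up in that list.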
To be able to use kubectl commands, you first need to perform the actions mentioned in the log above, so log in with the ID you want to control your Kubernetes cluster from (or stay on root) and type:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
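At this point you can already run:
kubectl get nodes
Don’t be alarmed if the master shows up as NotReady; that’s expected, as no pod network has been deployed yet. The node should switch to Ready once Calico is installed in the next step.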
Before we continue with adding the worker nodes to the cluster, we first need to install Calico.
Installing Calico
The first question to ask is which version of Calico to install. In their reference implementation, HCL validated against Calico 3.11, which was the current version at that time. Calico is the network layer for the pods, and the important thing for this layer is that it’s compatible with the Kubernetes version we’re running. It’s not directly bound to the components of the Component Pack, so we have some freedom here. The latest version of Calico, 3.14 at the time I write this blog, is compatible with Kubernetes 1.16 all the way up to the current version, 1.18, so at this moment it’s safe to use the latest version of Calico. The commands:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
Maybe you wonder what happened to the separate RBAC file from the HCL documentation. Don’t worry, it’s now part of the calico.yaml above.
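You can follow the rollout of the Calico pods with:
kubectl get pods -n kube-system -w
Once the calico-node and calico-kube-controllers pods report Running, the master node should report Ready in kubectl get nodes.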
Joining the worker nodes
The next step is to join the worker nodes to the Kubernetes cluster. The command to do this was in the output of kubeadm init on the master. You can add extra options; I added --node-name=con-k8s1 for my first worker node, which refers to the DNS alias of that node.
Before you run the command, make sure your firewall is configured to allow traffic from your nodes to your master and vice versa on ports 6443 and 10250.
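If you’re using firewalld, which is the default on CentOS/RHEL, opening these ports would look something like this (run it on the master and on the workers as appropriate for your setup):
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload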
kubeadm join con-k8sm.example.com:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name=con-k8s1
W0515 15:54:51.878660 107169 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Do this on all worker nodes. If all went well, kubectl get nodes on the master will show you a cluster like this:
kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
con-k8s1   Ready    <none>   3d4h   v1.17.2
con-k8s2   Ready    <none>   3d4h   v1.17.2
con-k8sm   Ready    master   3d4h   v1.17.2
To be able to use kubectl commands on your nodes too, you have to copy your config file from the master to the nodes, as described in the HCL documentation:
mkdir -p $HOME/.kube
scp root@Master_IP_address:$HOME/.kube/config $HOME/.kube
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The last line is only necessary if you use a different ID than root.
Helm installation
The last component for this 2nd part is Helm. Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters. As the components of the component pack each have a Helm chart for deployment, it’s an essential component to install.
At the time of writing, the latest version of Helm is 3.2.1. The Helm charts of the Component Pack, however, were written for Helm v2 and validated against Helm v2.16.3. You should therefore stick to the 2.16 branch, which is also still maintained. You should be fine using the latest version of this branch, which at the time of writing is v2.16.7.
The commands to enter are:
wget https://get.helm.sh/helm-v2.16.7-linux-amd64.tar.gz
tar -zxvf helm-v2.16.7-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm init
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
rm -f helm-v2.16.7-linux-amd64.tar.gz
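To check that both the Helm client and the Tiller deployment that helm init creates are working, you can run:
helm version
It should report both a Client and a Server version of v2.16.7; it can take a minute before the tiller-deploy pod is ready and the server version shows up.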
If all went well, kubectl get pods -n kube-system should give you something like:
kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7d4d547dd6-dnvf5   1/1     Running   3          3d5h
calico-node-27zwq                          1/1     Running   3          3d5h
calico-node-l9hf2                          1/1     Running   0          3d5h
calico-node-rhb6m                          1/1     Running   0          3d5h
coredns-6955765f44-dfnp8                   1/1     Running   3          3d5h
coredns-6955765f44-wcvrg                   1/1     Running   3          3d5h
etcd-acc-con-k8sm                          1/1     Running   4          3d5h
kube-apiserver-con-k8sm                    1/1     Running   5          3d5h
kube-controller-manager-con-k8sm           1/1     Running   3          3d5h
kube-proxy-2smzm                           1/1     Running   0          3d5h
kube-proxy-9pvr8                           1/1     Running   3          3d5h
kube-proxy-tq7jj                           1/1     Running   0          3d5h
kube-scheduler-con-k8sm                    1/1     Running   3          3d5h
tiller-deploy-566d8c9b77-72qp6             1/1     Running   0          3d5h
In my case, all didn’t go well initially, as my customer had an HTTPS traffic inspector which intercepted all HTTPS traffic and replaced the certificates with its own. That doesn’t work with Kubernetes trying to pull tiller from the k8s.gcr.io repository. The way to find out what was wrong was this command:
kubectl describe pod tiller-deploy-566d8c9b77-72qp6 -n kube-system
Just a quick tip in case you see a status other than “Running”.
Test your Kubernetes DNS server
If all your pods are up, it’s time to test whether your Kubernetes environment can resolve DNS entries within the environment. HCL documented that here, and this page is still valid. In short, run these commands:
kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
kubectl get pods busybox
Check if the pod is running. If it is, type:
kubectl exec -ti busybox -- nslookup kubernetes.default
If all is working well, you’ll see something like:
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
If you see something else, along the lines of “can’t resolve ‘kubernetes.default'”, you have a problem which you need to fix first. You’ll find pointers on what to do in the HCL documentation and in the Kubernetes documentation.
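A good first check in that case is whether the CoreDNS pods themselves are running (they still carry the kube-dns label):
kubectl get pods -n kube-system -l k8s-app=kube-dns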
If everything did work, you can remove the busybox container by typing:
kubectl delete -f https://k8s.io/examples/admin/dns/busybox.yaml
This finishes the 2nd part on installing the Component Pack. You now have a working Kubernetes cluster and are ready to install the actual components of the Component Pack.
References
Kubeadm init – all the options for the init command explained
Install Calico networking and network policy for on-premises deployments
Helm – The package manager for Kubernetes