Installing the HCL Connections Component Pack 6.5 CR1 – Part 1: Installing Docker

As I’m currently installing the HCL Connections 6.5 CR1 Component Pack at a customer, I’m running into a lot of points where the HCL documentation is simply outdated or very confusing. In a series of articles I plan to write about the caveats in the documentation, to hopefully help you with your installation. In this first part I cover Docker.

Update: HCL reviewed this document and gave their go-ahead, so if you follow the instructions below regarding the configuration of Docker, you still have a supported configuration.

As a prerequisite for the Component Pack, you need to install Docker and Kubernetes. In the documentation you’ll find references to outdated Docker and Kubernetes versions (Docker 17.03/18.06, Kubernetes v1.11.9, Calico v3.3 and Helm v2.11.0). I found only one page which lists the currently recommended versions: Docker-ce 19.03.5+, Kubernetes 1.17.2, Calico v3.11 and Helm v2.16.3. In the meantime some things have changed in the world of Docker, Red Hat etc. Let’s start with Docker. I’ll begin with the recommended setup according to HCL, but be sure to read on!

Installing Docker-ce

So the version currently recommended by HCL is Docker 19.03, yet the HCL documentation lists the commands to install older versions of Docker. Use these commands to install 19.03:

## yum-config-manager is part of yum-utils
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
## make sure only the stable Docker CE repository is enabled
yum-config-manager --disable docker*
yum-config-manager --enable docker-ce-stable
yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-19.03*
systemctl enable docker.service
## disable the Docker repositories again, so a later 'yum update' won't pull in a newer Docker
yum-config-manager --disable docker*
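
To verify which version you actually ended up with (a quick sanity check of my own, not part of the HCL documentation), you can run:

## show the installed docker-ce package and the client version
yum list installed 'docker-ce*'
docker --version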

Configuring Docker

Docker 17.06 introduced the option to let Docker configure direct-lvm mode for you. This means you can replace all the steps mentioned in step 3 of this page, like creating a physical volume (pvcreate), a volume group (vgcreate) etc., with this:

vi /etc/docker/daemon.json

and insert:

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xdf",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
} 

Make sure to replace /dev/xdf with your actual block device. However, when you start Docker with this configuration, you’ll find the following line in the Docker log:

level=warning msg="[graphdriver] WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release"

So what’s going on here? Somewhere in, I believe, the 17.0x branch, Docker introduced support for OverlayFS 2 (a.k.a. overlay2). However, the RHEL/CentOS kernel lacked support for overlay2 until RHEL 7.2, and SELinux support for overlay2 was only added in RHEL/CentOS 7.4. Currently we’re on RHEL/CentOS 7.6, so this is old news now, but it wasn’t at the time the documentation was written. Docker has since standardised on overlay2 and it is now the default driver. About devicemapper, Docker states:

The devicemapper storage driver is deprecated in Docker Engine 18.09, and will be removed in a future release. It is recommended that users of the devicemapper storage driver migrate to overlay2.

The same page does mention:

Docker supports the following storage drivers:

  • overlay2 is the preferred storage driver, for all currently supported Linux distributions, and requires no extra configuration.
  • aufs is the preferred storage driver for Docker 18.06 and older, when running on Ubuntu 14.04 on kernel 3.13 which has no support for overlay2.
  • devicemapper is supported, but requires direct-lvm for production environments, because loopback-lvm, while zero-configuration, has very poor performance. devicemapper was the recommended storage driver for CentOS and RHEL, as their kernel version did not support overlay2. However, current versions of CentOS and RHEL now have support for overlay2, which is now the recommended driver.

So, should we still use devicemapper for a new installation? I advise against it. Some of the benefits of the overlay2 driver are that starting and stopping containers is faster and that performance is better when multiple containers use the same container image. That last part is very relevant for the Component Pack, as by default 3 replicas of each container are deployed. I haven’t found any downsides to the overlay2 driver, and as it’s the driver recommended by Docker, future versions will be tested against it much better than against devicemapper. To me, these are all very valid arguments against still using devicemapper.
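
If you want to check up front whether overlay2 is an option on your machine, a quick check I use (my own addition, not part of the HCL documentation) is:

## the overlay module must be loadable and known to the kernel
modprobe overlay
grep overlay /proc/filesystems
## and the kernel should be a reasonably recent RHEL/CentOS 7 kernel
uname -r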

So much for the storage driver. Let’s get one step ahead of the Kubernetes installation. If you initialised Kubernetes with the settings from the HCL documentation, you would see the following warning:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

This URL will tell you:

When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager. Systemd has a tight integration with cgroups and will allocate cgroups per process. It’s possible to configure your container runtime and the kubelet to use cgroupfs. Using cgroupfs alongside systemd means that there will be two different cgroup managers.

Control groups are used to constrain resources that are allocated to processes. A single cgroup manager will simplify the view of what resources are being allocated and will by default have a more consistent view of the available and in-use resources. When we have two managers we end up with two views of those resources. We have seen cases in the field where nodes that are configured to use cgroupfs for the kubelet and Docker, and systemd for the rest of the processes running on the node becomes unstable under resource pressure.

Changing the settings such that your container runtime and kubelet use systemd as the cgroup driver stabilized the system. Please note the native.cgroupdriver=systemd option in the Docker setup below.

You have to do the above before you create your nodes, so now is the right time to do so. This means adding options to your /etc/docker/daemon.json, as I’ve done below. Compared to the Kubernetes documentation I’ve omitted the override_kernel_check option, as that, again, is only needed for versions older than CentOS 7.6.
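
To see which cgroup driver your running Docker daemon currently uses (and to confirm the change later on), something like this will do:

## prints 'Cgroup Driver: cgroupfs' or 'Cgroup Driver: systemd'
docker info 2>/dev/null | grep -i 'cgroup driver'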

Concluding, how should you install Docker? If you have a dedicated block device for Docker, put a volume group on it. Below I assume the device is called /dev/sdb; check what yours is called using ‘lsblk’. Then do:

pvcreate /dev/sdb
vgcreate docker /dev/sdb
lvcreate -n dockerdata -l 100%FREE docker
mkfs -t xfs -n ftype=1 /dev/docker/dockerdata 
## Stop Docker if it wasn't stopped already 
systemctl stop docker 
## remove the current Docker dirs. This assumes a new installation. Make a backup if you actually need something which is already there 
rm -rf /var/lib/docker/* 
## add the new drive to your fstab 
echo "/dev/mapper/docker-dockerdata   /var/lib/docker xfs     defaults        0 0" >> /etc/fstab mount -a 
## create docker daemon.json file
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
## create the systemd drop-in directory for Docker
mkdir -p /etc/systemd/system/docker.service.d
## and enable and start Docker
systemctl enable docker
systemctl start docker

Check with ‘docker info’ whether everything went correctly. In case you wonder whether you should use XFS or Ext4 as the filesystem: they both work, but XFS is the recommended filesystem for Docker with overlay2. However, it’s essential that you create the filesystem with ftype=1. If you don’t want to use an extra block device, you can check with ‘xfs_info /var/lib/docker’ whether your existing filesystem is configured that way.
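
For completeness, these are roughly the checks I do after the installation (assuming you used the daemon.json above):

## storage driver should read overlay2, cgroup driver should read systemd
docker info 2>/dev/null | grep -iE 'storage driver|cgroup driver'
## the backing filesystem must be xfs with ftype=1
xfs_info /var/lib/docker | grep ftype
## and, if the host has internet access, a quick smoke test
docker run --rm hello-world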


On to Part 2