WARNING Running CRI-O on a “non-supported” system like Container Linux is currently very far from optimal. For curious people, it should nevertheless work “perfectly”.
SIDENOTE If it is not clear from the WARNING: I wouldn’t recommend this for production (yet). Personally, I will soon replace Docker with CRI-O in my private Kubernetes cluster though, because Docker keeps causing issues in my cluster.
This post shows how you can install and run CRI-O for Kubernetes on Container Linux. CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI). Some things are not (fully) implemented in CRI-O yet, such as container metrics (see the Kubernetes CRI link).
- A Golang development environment (e.g. Getting Started - The Go Programming Language) with a set `$GOPATH`
- The CRI-O source code (`git clone`) in your `$GOPATH` (also covered in Step 1 - Build CRI-O)
- Container Linux machines with Kubernetes (in a Kubernetes cluster)
Why switch to CRI-O?
For me the reason is simple: “Docker seems to break down every few weeks in my private cluster and causes nodes to need a reboot.” Since running CRI-O, the Kubelet has gone “quiet”: no more logs about issues with the container runtime (in my case, Docker produced a good amount of logs in the Kubelet).
I suggest you take a look at a few posts about CRI-O to get a general understanding of what CRI-O is:
- 6 Reasons why CRI-O is the best runtime for Kubernetes - Project Atomic
- CRI-O, the Project to Run Containers without Docker, Reaches 1.0 - The New Stack
But you are most likely here because you already know that, right? ;)
Another point for me is that, in the best case, it will be “shipped” with Kubernetes at some point in time, so that you “just” upgrade the Kubelet and, et voilà, you get the new CRI-O version. I would totally want it to be like that.
Step 1 - Build CRI-O
You can either use release binaries (which are not linked on the releases page (yet?)) or build CRI-O yourself; for that, check out the Getting started section of the CRI-O README.md. Before building the “master” branch of CRI-O, I’d recommend checking CRI-O’s Kubernetes compatibility matrix.
Step 2 - Upload CRI-O and “Dependencies”
NOTE It is possible that the dependent libraries are named differently. Use `ldd FILENAME` to show which libraries need to be copied. All libraries listed as `not found` need to be copied over as well.
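As a sketch of that check, the following filters `ldd` output down to the unresolved libraries. It uses mock output so it can run anywhere; on a node you would run `ldd /opt/bin/crio` (or whichever binary you uploaded) directly:

```shell
# Mock ldd output for illustration; on the node use: ldd /opt/bin/crio
ldd_output='libgpgme.so.11 => not found
	libc.so.6 => /lib64/libc.so.6 (0x00007f2a00000000)'
# Keep only the libraries the dynamic linker could not resolve
missing=$(printf '%s\n' "$ldd_output" | grep 'not found' | awk '{print $1}')
echo "$missing"
```

Every library name this prints has to be copied to the node alongside the binary.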
On “all” servers which should be switched to/use CRI-O as the container runtime, the `/opt/bin` directory needs to be created, e.g. with `sudo mkdir -p /opt/bin`.
Now upload the following from Step 1 to the nodes:

- the compiled `crio` and `conmon` binaries to `/opt/bin`
- the `libgpgme.so.11` library that CRI-O needs
- the `conntrack` binary to `/opt/bin`
- the missing library dependencies that `ldd` reported
Step 3 - CRI-O Systemd Service
NOTE The Systemd service units shown here are modified versions of the originals!
Create the Systemd service unit for CRI-O. To signal CRI-O that a shutdown has started, a second Systemd service unit is used.
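As a sketch, the main unit could look like the following, loosely based on the upstream `contrib/systemd/crio.service` but adapted to the `/opt/bin` layout of this post. The unit path `/etc/systemd/system/crio.service` and the `$CRIO_OPTIONS` variable name are assumptions here:

```ini
# /etc/systemd/system/crio.service (sketch)
[Unit]
Description=Open Container Initiative Daemon
After=network-online.target

[Service]
Type=notify
# Flags live in a separate file, see Step 4 - Configure CRI-O
EnvironmentFile=-/etc/sysconfig/crio
ExecStart=/opt/bin/crio $CRIO_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
LimitNOFILE=1048576

[Install]
WantedBy=multi-user.target
```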
Step 4 - Configure CRI-O
Create the directory `/etc/sysconfig` if it doesn’t exist yet, and create the file `/etc/sysconfig/crio` in it with the following content:
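A minimal sketch of `/etc/sysconfig/crio`, assuming the service unit expands a `$CRIO_OPTIONS` environment variable (the variable name is an assumption; it carries exactly the flags described below):

```shell
# /etc/sysconfig/crio (sketch)
CRIO_OPTIONS="--pause-image=k8s.gcr.io/pause-amd64:3.0 --conmon=/opt/bin/conmon"
```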
The `/etc/sysconfig/crio` file holds a few flags which do the following:
- `--pause-image=k8s.gcr.io/pause-amd64:3.0` - Use `k8s.gcr.io/pause-amd64:3.0` as the pause image (only needed if you don’t change the `pause_image` in the `crio.conf` that will be shown in this section).
- `--conmon=/opt/bin/conmon` - Use `/opt/bin/conmon` as the `conmon` binary, instead of searching the default locations for it.
Now, to configure CRI-O “in-depth”, `crio config --default > /etc/crio/crio.conf` is used to generate a config with sane defaults.
There are two other files needed from the GitHub kubernetes-incubator/cri-o repository.
The first file is to be placed at `/etc/crio/seccomp.json`; it can be found here: GitHub kubernetes-incubator - master. The second is to be placed at `/etc/containers/policy.json`; the original can be found here: GitHub kubernetes-incubator - master `test/policy.json`, but I recommend using this slimmed down version:
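A commonly used slimmed-down `/etc/containers/policy.json` simply accepts any image; if the post’s original version differed, treat this as an approximation:

```json
{
  "default": [
    {
      "type": "insecureAcceptAnything"
    }
  ]
}
```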
An important part is to make CRI-O not fail for images that would normally be pulled from Docker Hub (i.e. image names given without a registry server). To fix that, modify `/etc/crio/crio.conf` and add `"docker.io",` to the list of “default”/unqualified registries.
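In the `crio.conf` of that era this was a `registries` list; a sketch (the section it lives in may differ between CRI-O versions):

```toml
# Registries consulted for unqualified image names such as "nginx:latest"
registries = [
  "docker.io"
]
```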
I recommend changing the following other values too:
- `storage_driver = ""` - Change to `storage_driver = "overlay"` to use overlay storage for the containers and images.
- `pids_limit = 1024` - Set the maximum process ID limit for a container.
- `enable_shared_pid_namespace = false` - Whether to enable a shared process ID namespace between the containers of a Pod.
You may also need to change the `network_dir` option to reflect the CNI path you use.
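Putting the recommended changes together, the touched keys in `/etc/crio/crio.conf` could look like this. Section placement follows the `crio.conf` layout of the time; treat it as a sketch, not a full config, and the `network_dir` path is only a common example:

```toml
[crio]
storage_driver = "overlay"

[crio.runtime]
conmon = "/opt/bin/conmon"
pids_limit = 1024
enable_shared_pid_namespace = false

[crio.image]
pause_image = "k8s.gcr.io/pause-amd64:3.0"

[crio.network]
# Adjust to the directory your CNI configs actually live in
network_dir = "/etc/cni/net.d/"
```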
NOTE My current `/etc/crio/crio.conf` is available on GitHub as a Gist:
GitHub Gist - galexrt - My current CRI-O config file
Now that CRI-O is ready to be used, we can continue to get Kubelet to use CRI-O.
Step 5 - Configure Kubelet to use CRI-O
The following flags need to be added to the Kubelet:
- `--container-runtime=remote` - Use a remote container runtime (over the CRI).
- `--container-runtime-endpoint=unix:///run/crio/crio.sock` - Where the container runtime can be reached.
- `--image-service-endpoint=unix:///run/crio/crio.sock` - Where the (container runtime) image service endpoint can be reached.
- `--runtime-request-timeout=10m` - Timeout for container runtime requests.
These flags go into the `kubelet.service` unit used to start the Kubelet.
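As a sketch, on a systemd-managed Kubelet the flags could be added via a drop-in. The drop-in path and the `KUBELET_EXTRA_ARGS` variable name are assumptions; Container Linux setups using kubelet-wrapper would add the flags to the `ExecStart` line instead:

```ini
# /etc/systemd/system/kubelet.service.d/10-crio.conf (sketch)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote \
 --container-runtime-endpoint=unix:///run/crio/crio.sock \
 --image-service-endpoint=unix:///run/crio/crio.sock \
 --runtime-request-timeout=10m"
```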
More details on the flags can be found at GitHub kubernetes-incubator/cri-o - master
Additionally, a mount for the host path `/opt/bin` is required, because the CRI-O binaries are not shipped within the `quay.io/coreos/hyperkube` image. // TODO test if this is necessary
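With the Container Linux kubelet-wrapper, such a host path is typically exposed through the `RKT_RUN_ARGS` environment variable; a sketch (the exact unit layout depends on your `kubelet.service`):

```ini
[Service]
Environment="RKT_RUN_ARGS=--volume opt-bin,kind=host,source=/opt/bin \
 --mount volume=opt-bin,target=/opt/bin"
```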
Additionally, you should add `crio.service` to, and remove `docker.service` from, the `Before=` section (if you have one).
Now that the CRI-O and Kubelet Systemd service units have been created and/or modified, we need to reload Systemd with `sudo systemctl daemon-reload`.
Step 6 - Start CRI-O and restart Kubelet
NOTE The `.service` suffix can be omitted (as in `sudo systemctl start crio`), as there is no other unit (for example a `.device` unit) with that name. For more info on that, please refer to the Systemd man pages.
If there was no error during the start of CRI-O, you are ready to restart the Kubelet and let it start the containers through CRI-O, e.g. with `sudo systemctl restart kubelet`.
If you want to make sure that the Kubelet uses CRI-O successfully, you can check the logs of the Kubelet by running `journalctl -u kubelet -xe`.
Now that the Kubelet should use CRI-O as the Container Runtime (CRI), you can move on to Step 7 - Test your new Container Runtime.
Step 7 - Test your new Container Runtime
There are multiple ways to test if CRI-O starts containers:
- `crictl` can list the containers and images on the node. Example to list the containers: `crictl --image-endpoint=/run/crio/crio.sock --runtime-endpoint=/run/crio/crio.sock ps`. For more information on how to use `crictl`, see GitHub kubernetes-incubator - master.
- `runc` can be used directly on the node, for example `runc list` to show the containers.
- Kubernetes shows you the Pod status per node: `kubectl get --all-namespaces pods -o wide | grep $NODE_NAME`.
“Timeout” or “Connection refused” when trying to exec into a Pod
If you have a firewall on the nodes, you need to allow CRI-O’s so-called `stream_port`, which by default listens on `10010/TCP`. It needs to be accessible by the Kubernetes masters.
See GitHub Gist - galexrt - My current CRI-O config file - Line 29-30:
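Those lines correspond to the streaming settings in `crio.conf`; a sketch with the default value (older CRI-O versions may express the port as an integer instead of a string):

```toml
[crio.api]
# Port used for exec/attach/port-forward streaming; open it towards the masters
stream_port = "10010"
```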
This should get you started with installing/running CRI-O for Kubernetes on Container Linux (previously CoreOS). For questions about the post, please leave a comment below, thanks!