Sysadmin Garden of Eden

CRI-O + Container Linux: How to Install

Table of Contents
  1. Intro
    1.1. Requirements
  2. Why switch to CRI-O?
  3. Step 1 - Build CRI-O
  4. Step 2 - Upload CRI-O and “Dependencies”
  5. Step 3 - CRI-O Systemd Service
  6. Step 4 - Configure CRI-O
  7. Step 5 - Configure Kubelet to use CRI-O
  8. Step 6 - Start CRI-O and restart Kubelet
  9. Step 7 - Test your new Container Runtime
  10. Troubleshooting
    10.1. “Timeout” or “Connection refused” when trying to exec into a Pod
  11. Summary

CRI-O and Container Linux Logo


WARNING Currently it is far from optimal to run CRI-O on a “non-supported” system like Container Linux. For the curious, though, the setup described here should work “perfectly”.
SIDENOTE If it is not clear from the WARNING, I wouldn’t recommend this for production (yet). Personally I will soon replace Docker with CRI-O in my private Kubernetes cluster though, because Docker causes issues in my cluster.

This post shows how to install and run CRI-O for Kubernetes on Container Linux.
CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI).
Some things are not (fully) implemented in CRI-O yet, such as container metrics (see the Kubernetes CRI link).


Why switch to CRI-O?

For me the reason is simple: “Docker seems to break down every few weeks in my private cluster and causes nodes to need a reboot.”
After running CRI-O for a while now, Kubelet has gone “quiet”: no more logs about issues with the container runtime (in my case, Docker caused a good amount of noise in the Kubelet logs).

I suggest you take a look at a few posts about CRI-O to get a general understanding of what CRI-O is:

But you are most likely here because you already know that, right? ;)

Another point for me is that, in the best case, CRI-O will be “shipped” with Kubernetes at some point in time, so that you “just” upgrade Kubelet and, et voilà, you get the new CRI-O version. I would totally want it to be like that.

Step 1 - Build CRI-O

You can either use the release binaries (which are not linked to the releases (yet?)) or build CRI-O yourself; for that, check out the Getting started section of the CRI-O README.md.
Before building the “master” branch of CRI-O, I’d recommend checking CRI-O’s Kubernetes compatibility matrix.

Step 2 - Upload CRI-O and “Dependencies”

NOTE It is possible that the dependent libraries are named differently. Use ldd FILENAME to show which libraries need to be copied. All libraries listed as “not found” must be copied to the /opt/lib64 directory.
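A quick illustration of that check, using /bin/ls as a stand-in binary (on the node you would point ldd at /opt/bin/crio and /opt/bin/conmon instead):

```shell
# Count the shared libraries the binary cannot resolve; every
# "not found" entry is a library that must be copied to /opt/lib64.
missing=$(ldd /bin/ls | grep -c 'not found' || true)
echo "missing libraries: $missing"
```

On a healthy system this prints "missing libraries: 0".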

On “all” servers which should be switched to/use CRI-O as the Container Runtime the /opt/bin directory needs to be created:

$ mkdir -p /opt/bin /opt/lib64 /etc/crio /var/lib/crio

Now upload the crio and conmon (not common) binaries compiled in Step 1 - Build CRI-O to the /opt/bin directory.
Upload the missing libgpgme.so.11 library for CRI-O to /opt/lib64.

Upload the conntrack binary to /opt/bin and its missing library dependencies to /opt/lib64.
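For instance, a hypothetical upload from the build machine could look like this (the host name is a placeholder for one of your nodes; adjust to your environment):

```
$ scp crio conmon conntrack core@node1.example.com:/opt/bin/
$ scp libgpgme.so.11 core@node1.example.com:/opt/lib64/
$ ssh core@node1.example.com 'chmod +x /opt/bin/crio /opt/bin/conmon /opt/bin/conntrack'
```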

Step 3 - CRI-O Systemd Service

NOTE The Systemd service units shown here are modified versions of the originals!

Create the Systemd service unit at /etc/systemd/system/crio.service:

[Unit]
Description=Open Container Initiative Daemon

[Service]
EnvironmentFile=-/etc/sysconfig/crio
ExecStart=/opt/bin/crio $CRIO_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target

The original Systemd service unit was taken from GitHub kubernetes-incubator/cri-o - master contrib/systemd/crio.service.

To signal CRI-O that a shutdown has started, a second Systemd service unit is used at /etc/systemd/system/crio-shutdown.service:

[Unit]
Description=Shutdown CRIO containers before shutting down the system

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/rm -f /var/lib/crio/crio.shutdown
ExecStop=/usr/bin/mkdir -p /var/lib/crio ; /usr/bin/touch /var/lib/crio/crio.shutdown

[Install]
WantedBy=multi-user.target

The original Systemd service unit was taken from GitHub kubernetes-incubator/cri-o - master contrib/systemd/crio-shutdown.service.

Step 4 - Configure CRI-O

Create the directory /etc/sysconfig if it doesn’t exist yet, and create the file /etc/sysconfig/crio in it with the following content:

CRIO_OPTIONS="--pause-image=k8s.gcr.io/pause-amd64:3.0 --conmon=/opt/bin/conmon"

The /etc/sysconfig/crio file holds a few flags which do the following: --pause-image sets the pause container image used for Pods, and --conmon tells CRI-O where to find the conmon binary (uploaded to /opt/bin earlier).

Now, to configure CRI-O “in-depth”, crio config --default > /etc/crio/crio.conf is used to generate a config with sane defaults (the --default flag).
Two other files are needed from the GitHub kubernetes-incubator/cri-o repository.
The first is to be placed at /etc/crio/seccomp.json; it can be found here: GitHub kubernetes-incubator - master seccomp.json.
The second is to be placed at /etc/containers/policy.json; the original can be found here: GitHub kubernetes-incubator - master test/policy.json, but I recommend using this slimmed-down version:

{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports": {
        "docker": {}
    }
}
(This file is also available as a Gist: GitHub Gist - galexrt - My current CRI-O config file - policy.json)

An important change is needed so that CRI-O does not fail for unqualified images (images named without a registry server, which would normally be pulled from Docker Hub). To fix that, modify /etc/crio/crio.conf as follows:

# registries is used to specify a comma separated list of registries to be used
# when pulling an unqualified image (e.g. fedora:rawhide).
registries = [
    "docker.io",
]

The change is to add "docker.io", to the list of “default”/unqualified registries.

I recommend changing the following other values too:

You may also need to change the network_dir option to reflect your CNI path used (default is /etc/cni/net.d/).
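As a sketch of such an edit, demonstrated on a scratch copy (on the node you would edit /etc/crio/crio.conf directly; the CNI path shown is only an assumed example for a Container Linux setup):

```shell
# Create a scratch copy containing the default value...
printf 'network_dir = "/etc/cni/net.d/"\n' > crio.conf.example
# ...and point network_dir at a different CNI config directory.
sed -i 's|^network_dir = .*|network_dir = "/etc/kubernetes/cni/net.d/"|' crio.conf.example
cat crio.conf.example
# prints: network_dir = "/etc/kubernetes/cni/net.d/"
```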

NOTE My current /etc/crio/crio.conf is available on GitHub as a Gist:

GitHub Gist - galexrt - My current CRI-O config file - Click to expand

Now that CRI-O is ready to be used, we can continue to get Kubelet to use CRI-O.

Step 5 - Configure Kubelet to use CRI-O

The following flags need to be added to the kubelet.service:

The below snippet is from a kubelet.service that uses the /usr/lib/coreos/kubelet-wrapper:

ExecStart=/usr/lib/coreos/kubelet-wrapper \
--container-runtime=remote \
--container-runtime-endpoint=unix:///run/crio/crio.sock \
--image-service-endpoint=unix:///run/crio/crio.sock \
--runtime-request-timeout=10m \

More details on the flags can be found at GitHub kubernetes-incubator/cri-o - master kubernetes.md.

Additionally, a mount for the host path /opt/bin is required, because the CRI-O binaries are not shipped within the quay.io/coreos/hyperkube image (I have not fully verified this step yet).
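With kubelet-wrapper, one way to add that mount (a sketch: kubelet-wrapper honours the RKT_RUN_ARGS environment variable for extra rkt arguments; the volume name is arbitrary) is a line like this in the kubelet.service [Service] section:

```
Environment="RKT_RUN_ARGS=--volume opt-bin,kind=host,source=/opt/bin \
  --mount volume=opt-bin,target=/opt/bin"
```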

Additionally, you should add crio.service to, and remove docker.service from, the kubelet.service Before= section (if you have one). It should look like this:

[Unit]
Before=crio.service


Now that the CRI-O and Kubelet Systemd service units have been created and/or modified we need to reload Systemd:

$ systemctl daemon-reload

Step 6 - Start CRI-O and restart Kubelet

$ systemctl start crio.service crio-shutdown.service

SIDENOTE The .service suffix can be omitted in this case, as there is no other unit (for example a .mount or .device) with that name. For more info, please refer to the Systemd man pages.

If there were no errors while starting CRI-O, you are ready to restart Kubelet and let it start the containers through CRI-O:

$ systemctl restart kubelet.service

If you want to make sure that Kubelet uses CRI-O successfully, you can check the logs of Kubelet by running journalctl -u kubelet -xe.

Now that Kubelet should be using CRI-O as the Container Runtime (CRI), you can move on to Step 7 - Test your new Container Runtime.

Step 7 - Test your new Container Runtime

There are multiple ways to test if CRI-O starts containers:
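For example (commands are illustrative; NODENAME is a placeholder for one of your node names, and nginx is just a convenient test image), you can check which runtime a node reports and start a throwaway Pod:

```
$ kubectl get node NODENAME -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# should print something starting with "cri-o://" instead of "docker://"
$ kubectl run crio-test --image=nginx --restart=Never
$ kubectl get pod crio-test -o wide
$ kubectl delete pod crio-test
```

If the Pod reaches the Running state on that node, CRI-O started the container.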


Troubleshooting

“Timeout” or “Connection refused” when trying to exec into a Pod

If you have a firewall on the nodes, you need to allow CRI-O’s so-called stream_port, which by default listens on 10010/TCP. It needs to be accessible by the Kubernetes masters.
See GitHub Gist - galexrt - My current CRI-O config file - Line 29-30: stream_port.
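If you manage the firewall with iptables, a rule along these lines would open the port (the 10.0.0.0/24 master subnet is a placeholder for your own network, and the INPUT chain is an assumption about your ruleset):

```
# Allow the Kubernetes masters to reach CRI-O's stream_port:
-A INPUT -p tcp -s 10.0.0.0/24 --dport 10010 -j ACCEPT
```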


Summary

This should get you started with installing and running CRI-O for Kubernetes on Container Linux (previously CoreOS).
For questions about the post, please leave a comment below, thanks!

Have Fun!