Currently it is far from optimal to run CRI-O on a “no package manager” system like Container Linux. For the curious, though, this should work “perfectly”.

If it is not clear from the above: WARNING, I wouldn't recommend this for production (yet). Personally, I will soon replace Docker with CRI-O in my private Kubernetes cluster though, because Docker causes issues in my cluster.
This post shows how you can install and run CRI-O for Kubernetes on Container Linux (previously CoreOS). CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI). Some things are not (fully) implemented in CRI-O yet, like container metrics (see the Kubernetes CRI link).
- Golang (e.g., Getting Started - The Go Programming Language) with a set up `$GOPATH`
- CRI-O source code (`git clone`) in your `$GOPATH` (will be explained in Step 1 - Build CRI-O)
- Container Linux (/ CoreOS) machines
- `ldd` - Linux tool to show the shared object dependencies of binaries.
Why switch to CRI-O?
For me the reason is simple: “Docker seems to break down every few weeks in my private cluster and causes nodes to need a reboot.” Since running CRI-O for a bit now, the Kubelet has gone “quiet”: no more logs about issues with the container runtime (in my case, Docker caused a good amount of logs in the Kubelet).
I suggest you take a look at a few posts about CRI-O to get a general understanding of what CRI-O is:
- 6 Reasons why CRI-O is the best runtime for Kubernetes - Project Atomic
- CRI-O, the Project to Run Containers without Docker, Reaches 1.0 - The New Stack
But you are most likely here because you already know that, right? ;)
Another point for me is that, in the best case, CRI-O will be “shipped” with Kubernetes at some point in time, so that you “just” upgrade the Kubelet and, et voilà, you get the new CRI-O version. I would totally want it to be like that.
Step 1 - Build CRI-O
You need to make sure you have the dependencies installed that CRI-O expects; they can be found in the Build and Run Dependencies section of Build and install CRI-O from source on cri-o/cri-o GitHub. You can either use release binaries (which are not linked to the releases (yet?)) or build CRI-O on your own; for that, check out Build and install CRI-O from source on cri-o/cri-o GitHub (cri-o/cri-o README.md - Getting started section).
Before you just go ahead and build from the master branch, I'd recommend you check out CRI-O's Kubernetes compatibility matrix and choose a release branch, e.g., `release-1.14` for Kubernetes 1.14.
Summarized, to build the binaries from source, run:
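A sketch of the build, assuming the dependencies from above are installed, `$GOPATH` is set, and you picked the `release-1.14` branch from the compatibility matrix example:

```shell
# Clone into the GOPATH layout CRI-O's Makefile expects:
git clone https://github.com/cri-o/cri-o "$GOPATH/src/github.com/cri-o/cri-o"
cd "$GOPATH/src/github.com/cri-o/cri-o"

# Build from a release branch, not master:
git checkout release-1.14
make
```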
That will produce all CRI-O binaries (among them `crio` and `conmon`). These binaries are needed and will be referenced in the upcoming steps.
Step 2 - Upload CRI-O and “Dependencies”
It is possible that the dependent libraries are named differently on your (build) machine. Use `ldd crio` to show which libraries need to be copied: all libraries shown as `not found` need to be copied to the hosts.
On “all” servers which should be switched to use CRI-O as the container runtime, we need to create some directories:
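Based on the paths used throughout this post, that would be:

```shell
# /opt/bin for the binaries, /opt/lib64 for the copied libraries:
sudo mkdir -p /opt/bin /opt/lib64
```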
Now upload the compiled `crio` (and `conmon`) binaries from Step 1 - Build CRI-O to the `/opt/bin` directory on the hosts.
After that, run `ldd /opt/bin/crio` (and the same for `conmon`) to see which libraries need to be uploaded too.
Example output of `ldd`:

If you look at the example output for an (unrelated) `ffmpeg` binary, you see some entries with `not found` behind the arrow. These libraries need to be copied to the hosts.
For CRI-O this can possibly be the `libgpgme.so.11` library, meaning that you copy the `libgpgme.so.11` of your build machine / laptop to the hosts.
Upload all libraries that are `not found` in the `ldd` output for the `crio` and `conmon` binaries. The path to each library should be shown as in the above example output.
To verify that the copied libraries are correct, you can run `LD_LIBRARY_PATH=:/opt/lib64 ldd /opt/bin/crio`; it should now show no `not found` entries anymore.
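To collect the `not found` entries automatically, you can filter the `ldd` output. A small sketch (the sample output below is made up for illustration; on a real host you would pipe `ldd /opt/bin/crio` instead):

```shell
# Sample ldd output standing in for `ldd /opt/bin/crio` on a host
# that is missing libgpgme:
ldd_output='	linux-vdso.so.1 (0x00007ffd0d5fe000)
	libgpgme.so.11 => not found
	libseccomp.so.2 => /lib/x86_64-linux-gnu/libseccomp.so.2 (0x00007f1a00000000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f19ffc00000)'

# Print only the libraries that still need to be copied to /opt/lib64:
printf '%s\n' "$ldd_output" | awk '/not found/ {print $1}'
# -> libgpgme.so.11
```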
Depending on which version of Container Linux (CoreOS) you run, you may need to copy the following additional software dependencies to your hosts:
Make sure to verify that the software is not on the servers already, e.g., use `command -v runc` or `which runc` (where `runc` is the software you are checking for).

- `runc` - If that is missing, something is probably “wrong” with your host's Container Linux.
- `conntrack` - Upload the `conntrack` binary to `/opt/bin` and its `not found` library dependencies to `/opt/lib64`. Afterwards, `LD_LIBRARY_PATH=:/opt/lib64 ldd /opt/bin/conntrack` should show no `not found` library entries.
Step 3 - CRI-O Systemd Service
The systemd service units shown here are modified versions of the originals! Keep that in mind if you have made your own modifications to the service unit files.
Create the systemd service unit for CRI-O:
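A sketch of what the unit can look like. The unit path, the `EnvironmentFile=` name, and the `CRIO_CONFIG_OPTIONS` variable are assumptions here (adjusted for the `/opt/bin` layout from Step 2); compare with the upstream `contrib/systemd/crio.service`:

```ini
# /etc/systemd/system/crio.service (path is an assumption)
[Unit]
Description=Container Runtime Interface for OCI (CRI-O)
After=network-online.target

[Service]
Type=notify
# Flags are read from the file created in Step 4 - Configure CRI-O:
EnvironmentFile=-/etc/sysconfig/crio
ExecStart=/opt/bin/crio $CRIO_CONFIG_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=10
LimitNOFILE=1048576
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
```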
The original systemd service unit was taken from the cri-o/cri-o GitHub repository (master branch).
To signal CRI-O that a shutdown has started, a second systemd service unit is used:
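A sketch of the shutdown unit. The unit path and the `/var/lib/crio/crio.shutdown` marker file follow the upstream contrib unit, but verify both against your CRI-O version:

```ini
# /etc/systemd/system/crio-shutdown.service (path is an assumption)
[Unit]
Description=Shutdown CRI-O containers before shutting down the system

[Service]
Type=oneshot
# The marker file tells CRI-O that a system shutdown (not just a
# service restart) is in progress:
ExecStart=/usr/bin/mkdir -p /var/lib/crio
ExecStart=/usr/bin/touch /var/lib/crio/crio.shutdown
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```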
The original of this unit was also taken from the cri-o/cri-o GitHub repository (master branch).
Step 4 - Configure CRI-O
Create the directory `/etc/sysconfig` if it doesn't exist yet, and create / update the file `/etc/sysconfig/crio` in it with the following content:
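A sketch of such a file. The variable name `CRIO_CONFIG_OPTIONS` is an assumption and must match whatever your `crio.service` unit passes to the binary:

```shell
# /etc/sysconfig/crio -- flags handed to crio by the systemd unit.
# Point CRI-O directly at the uploaded conmon binary:
CRIO_CONFIG_OPTIONS="--conmon /opt/bin/conmon"
```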
The `/etc/sysconfig/crio` file holds a few flags which do the following:

- `--conmon` - Points CRI-O directly at the uploaded `conmon` binary, instead of letting it search through the default paths.
There are two other files needed from the GitHub cri-o/cri-o repository:
The first file is to be placed at `/etc/crio/seccomp.json`; it can be found in the kubernetes-incubator/cri-o repository (master branch).
The second is to be placed at `/etc/containers/policy.json`; the original can be found in the kubernetes-incubator/cri-o repository (master branch) at `test/policy.json`. As you may see from the path where the file is located, it is probably just used for testing, so I'd recommend you use this “slimmed down” version:
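A minimal version that accepts images from anywhere (this is the well-known containers/image default-allow policy; tighten it if you need signature verification):

```json
{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ]
}
```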
(This file is also available as a Gist: GitHub Gist - galexrt - My current CRI-O config file.)
Now that the parts around CRI-O are configured, let's configure CRI-O “in-depth” using the `/etc/crio/crio.conf` file. You can use the `crio config --default > /etc/crio/crio.conf` command to generate a config with sane defaults (the `--default` flag adds the sane defaults).
An important point to change in the `crio.conf`, to make CRI-O “Docker backwards compatible”, is to make CRI-O not fail for images without an image registry server. Normally such images are served from Docker Hub, but CRI-O by default would fail. To fix that, modify the `/etc/crio/crio.conf` as follows:
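A sketch of the relevant section (the key name and placement follow CRI-O 1.14's generated config; verify against your own `crio config --default` output):

```toml
# /etc/crio/crio.conf (excerpt)
[crio.image]
# Uncomment / add "docker.io" so unqualified image names
# (e.g. "nginx:latest") are pulled from Docker Hub:
registries = [
  "docker.io",
]
```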
The change is to uncomment / add `"docker.io"` to the list of “default” / unqualified registries to pull from.
Also, I recommend you change the following other values too:

- `storage_driver = ""` - To `storage_driver = "overlay"`, to use overlay storage for the containers and images. `overlay` is the default, but it is good to “enforce” a default in case that may change at one point.
- `pids_limit = 10240` (or higher) - Sets the maximum process ID limit for a container.
- `enable_shared_pid_namespace = false` - Enables a shared process ID namespace between the containers of a Pod (depends on if you want / need it).

You may also need to change the `network_dir` config option to reflect the CNI (config) path used by your setup.
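Sketched out in the config file, with section placement as in CRI-O 1.14's generated config (verify against your own file, and adjust `network_dir` to your CNI setup):

```toml
# /etc/crio/crio.conf (excerpt)
[crio]
storage_driver = "overlay"

[crio.runtime]
pids_limit = 10240
enable_shared_pid_namespace = false

[crio.network]
network_dir = "/etc/cni/net.d/"
```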
The `/etc/crio/crio.conf` I am using is available on GitHub as a Gist: GitHub Gist - galexrt - My current CRI-O config file.
Now that CRI-O is ready to be used, we can continue to configure Kubelet to use CRI-O.
Step 5 - Configure Kubelet to use CRI-O
The following flags need to be added to the Kubelet's flags:

- `--container-runtime=remote` - Use a remote container runtime (the CRI).
- `--container-runtime-endpoint=unix:///run/crio/crio.sock` - Where the container runtime can be reached.
- `--image-service-endpoint=unix:///run/crio/crio.sock` - Where the (container runtime) image service endpoint can be reached.
- `--runtime-request-timeout=10m` - Timeout for container runtime requests.
(If you are using dynamic Kubelet configuration files, the option to change is `criSocket:`. Set it to the CRI-O socket, `unix:///run/crio/crio.sock`.)
The below snippet is from a `kubelet.service` that uses the CoreOS `kubelet-wrapper` script:
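A sketch of the relevant part, assuming the stock Container Linux wrapper path `/usr/lib/coreos/kubelet-wrapper`; only the CRI-O related flags are shown, the rest of your `ExecStart=` line stays as it is:

```ini
# Excerpt from kubelet.service
[Service]
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/crio/crio.sock \
  --image-service-endpoint=unix:///run/crio/crio.sock \
  --runtime-request-timeout=10m \
  ...
```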
More details on the flags and / or commands can be found in the cri-o/cri-o GitHub repository (master branch).
Additionally, you need to add a mount for the host path `/opt/bin` to the same path in the `kubelet-wrapper`. That is so the Kubelet is able to reach the CRI-O binaries, as they were / are not shipped within the `kubelet-wrapper` image. The lines for that will look like this:
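A sketch of the added volume / mount pair in rkt's syntax (the volume name `opt-bin` is an arbitrary choice):

```ini
# In kubelet.service -- appended to the RKT_RUN_ARGS environment variable:
Environment="RKT_RUN_ARGS=--volume opt-bin,kind=host,source=/opt/bin \
  --mount volume=opt-bin,target=/opt/bin"
```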
(Add them to the `RKT_RUN_ARGS` environment variable in your `kubelet.service` unit file. An example of how this can look can be found in the kubelet-wrapper “Allow pods to use rbd volumes” documentation in the coreos/coreos-kubernetes GitHub repository.)
After that, we should add `crio.service` as a dependency to the `kubelet.service`; this causes systemd to start the Kubelet only after `crio.service` is running. This can be achieved by adding / editing the `After=` section in your `kubelet.service` unit file. If `docker.service` is in the `After=` section, go ahead and remove it.
It should look like this:
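A minimal sketch (other `After=` entries you already have, e.g. `network-online.target`, stay on the line):

```ini
# In kubelet.service:
[Unit]
After=crio.service
```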
Now that the CRI-O and Kubelet systemd service units have been created and / or modified, we need to reload systemd:
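```shell
sudo systemctl daemon-reload
```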
Step 6 - Start CRI-O and restart Kubelet
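Enable and start the CRI-O service:

```shell
sudo systemctl enable crio
sudo systemctl start crio
```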
(`.service` can be omitted in this case, as there is no other unit (e.g. a `.device` unit) with that name. For more info on that, please refer to the systemd man pages.)
If there was no error during the start of CRI-O, you are ready to restart the Kubelet and let it start the containers through CRI-O:
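```shell
sudo systemctl restart kubelet
```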
If you want to make sure that the Kubelet uses CRI-O successfully, you can check the Kubelet's logs by running `journalctl -u kubelet -xe`.
Now that the Kubelet should be using CRI-O as the container runtime (CRI), you can move on to Step 7 - Test your new Container Runtime.
Step 7 - Test your new Container Runtime
There are multiple ways to test if CRI-O starts containers:
- `crictl` can list the containers and images on the node. Example to list the containers: `crictl --image-endpoint=/run/crio/crio.sock --runtime-endpoint=/run/crio/crio.sock ps`. For more information on how to use `crictl`, see the kubernetes-incubator cri-tools repository (master branch).
- `runc` can be used like this:
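A sketch, assuming CRI-O uses runc's default state directory (`/run/runc`); the list may be empty if no Pods are scheduled on the node yet:

```shell
# List containers known to runc (CRI-O's containers show up here):
sudo runc list
```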
- Kubernetes shows you the Pod status per node: `kubectl get --all-namespaces pods -o wide | grep $NODE_NAME`.
Step 8 - Disable and stop Docker
Now that we are sure the containers are running fine with CRI-O, go ahead and stop and disable the `docker.service` on the host(s) using the following commands:
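```shell
sudo systemctl stop docker.service
sudo systemctl disable docker.service
sudo systemctl mask docker.service
```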
This masks the `docker.service`; systemd will then symlink `/dev/null` “in place” of the unit file, so the service can't be started anymore.
“Timeout” or “Connection refused” when trying to `kubectl exec` into a Pod
If you have a firewall on the nodes, you may need to allow CRI-O's so-called `stream_port`, which by default is listening on `10010/TCP`. It needs to be accessible by the Kubernetes masters.
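With iptables, a rule like the following would open the port (the `10.0.0.0/24` network is a placeholder for your masters' network; adjust it):

```shell
# Allow the Kubernetes masters to reach CRI-O's stream port:
sudo iptables -A INPUT -p tcp --dport 10010 -s 10.0.0.0/24 -j ACCEPT
```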
This should get you started with installing and running CRI-O for Kubernetes on Container Linux (previously CoreOS).
You can basically put most of these “copy files” commands into your Container Linux / Ignition config; even cloud-config would be one way to automate it during OS install / boot.
For questions about the post, please leave a comment below, thanks!