Table of Contents
In this post, I'll go over using GitLab CI to continuously deliver your application's containers to Kubernetes. This is the second post in a three-post series about Kubernetes and GitLab. The first post can be found here: Edenmal - GitLab + Kubernetes: Perfect Match for Continuous Delivery with Container.
NOTE Please check the requirements before beginning.
- Kubernetes cluster
- Running GitLab instance
- `kubectl` binary (with Kubernetes cluster access)
- `StorageClass` configured in Kubernetes
- `ReadWriteMany` persistent storage (for example CephFS using Rook)
The manifests shown in this blog post are also available on GitHub here: GitHub - galexrt/kubernetes-manifests. If you want the latest version of the manifests, I recommend you check out the repository (though I try to keep the blog post and the repository in the same state/version).
Step 1 - Verify Kubernetes cluster “connectivity”
To check if you have proper Kubernetes cluster connection, run the following command:
`kubectl cluster-info` shows whether you can connect to the cluster and lists the available cluster services (in the output: the Kubernetes apiserver and the KubeDNS service).
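The connectivity check can be run like this (assuming `kubectl` is already configured, e.g. via `~/.kube/config`):

```
kubectl cluster-info
```

If the connection works, the apiserver and KubeDNS endpoints are listed; otherwise you'll get a connection error.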
Step 2 - Get GitLab CI Register Token from GitLab
Go to your GitLab instance and open the Admin area. To get there, click the wrench icon next to the search bar in the top right of any GitLab page:
Next click on the Runners tab in the navbar.
Et-voilà! There is your token (in the picture the token would be where the blanked out area is).
Now copy the token somewhere safe for usage in Step 3 - Write manifest for GitLab CI Runners.
NOTE Keep the token securely stored!
Step 3 - Write Kubernetes manifests for GitLab CI Runners
This step shows the Kubernetes manifests for the GitLab CI Runners.
Kubernetes API References
The Kubernetes API references can be found on the official Kubernetes.io page here: https://kubernetes.io/docs/reference/
All manifests should be compatible with Kubernetes
Namespace for the build tasks
Replace `YOUR_GITLAB_BUILD_NAMESPACE` with a name for the namespace in which the GitLab CI builds are run. A separate namespace for the GitLab CI builds is useful for detecting stuck containers (for example, when the runner had an issue and did not clean up).
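A minimal sketch of such a namespace manifest (the name is a placeholder you should replace):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: YOUR_GITLAB_BUILD_NAMESPACE
```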
ConfigMap for the Environment Variables and Script
First, we'll start with the `ConfigMap` for the basic configuration of the GitLab CI Runner image, using environment variables:
You have to point `YOUR_GITLAB_CI_SERVER_URL` to your GitLab instance's URL with `/ci` appended to it (for example, `https://gitlab.example.com/ci`).
You also need to replace `YOUR_GITLAB_BUILD_NAMESPACE` with the name of the namespace from the step Namespace for the build tasks in every manifest that follows.
The `ConfigMap` also adds resource requests and limits to the build containers that are run. You can change them as long as they follow the Kubernetes resource limit units.
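A rough sketch of what such a `ConfigMap` could look like (the object name and the exact variable set are assumptions; which variables your runner version supports is shown by `gitlab-ci-multi-runner register --help`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-ci-runner-cm
  namespace: YOUR_GITLAB_BUILD_NAMESPACE
data:
  REGISTER_NON_INTERACTIVE: "true"
  CI_SERVER_URL: "YOUR_GITLAB_CI_SERVER_URL"
  RUNNER_EXECUTOR: "kubernetes"
  # Namespace in which the build Pods are created
  KUBERNETES_NAMESPACE: "YOUR_GITLAB_BUILD_NAMESPACE"
  # Resource requests and limits for the build containers
  KUBERNETES_CPU_REQUEST: "500m"
  KUBERNETES_CPU_LIMIT: "1"
  KUBERNETES_MEMORY_REQUEST: "512Mi"
  KUBERNETES_MEMORY_LIMIT: "1Gi"
```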
When you have added new options to the `ConfigMap`, you need to delete each GitLab CI Runner Pod, as environment variables from a `ConfigMap` are only read on container start. This is currently a limitation of using Kubernetes `ConfigMap`s this way.
`envFrom` helps keep manifests shorter by moving the environment variables out to a `ConfigMap`.
To add additional options (flags), run `gitlab-ci-multi-runner register --help` in the Pod to see all available flags with their matching environment variable counterpart (in square brackets, without the `$` sign) next to the flag name. Just add the env vars for the flags you want to configure, as shown in the `ConfigMap` above.
Example output of
gitlab-ci-multi-runner register --help:
To run a command in the GitLab CI Runner Pods, use `kubectl exec -n YOUR_GITLAB_BUILD_NAMESPACE -it gitlab-ci-runner-0 /bin/bash`. This will drop you into the Pod and give you a shell.
Next up is the `ConfigMap` which contains a small script that registers, runs, and unregisters the GitLab CI Runner.
The runner unregister will only be triggered when the Pod is terminated normally through Kubernetes (`SIGTERM` signal). In the case of a forced termination of the Pod (`SIGKILL` signal), the CI Runner won't unregister itself. Cleanup of such "killed" CI Runners has to be done manually.
More information on Pod termination can be found here: Kubernetes - Pods - Termination of Pods.
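A simplified sketch of what such a script could look like (the runner name handling and flags are assumptions; the real flags come from `register --help`, and the configuration comes from the environment variables set via `envFrom`):

```shell
#!/bin/bash
# Unregister a runner with this Pod's name from GitLab.
unregister() {
    gitlab-ci-multi-runner unregister --name "$(hostname)"
}

# SIGTERM (normal Pod termination) triggers the cleanup.
# SIGKILL cannot be trapped, so a force-killed runner stays
# registered and has to be cleaned up manually.
trap unregister TERM

# Clean up a possible leftover registration with the same name
# (e.g. after a NodeLost event), then register and run.
unregister || true
gitlab-ci-multi-runner register --non-interactive
gitlab-ci-multi-runner run &
wait
```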
Secret for the Token Environment Variable
Then we'll use the GitLab CI Runner token to create a secret in Kubernetes, so the runners can use it for registration.
To encode the token as base64, you can run something like this:
(`base64` should be available and already installed on most Linux distributions.)
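For example, with `YOUR_GITLAB_CI_TOKEN` standing in for the real token:

```shell
# -n prevents echo from appending a trailing newline,
# which would otherwise end up in the encoded value
echo -n "YOUR_GITLAB_CI_TOKEN" | base64
```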
Replace `YOUR_BASE64_ENCODED_TOKEN` with the output from the above command.
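The Secret itself could then look roughly like this (the object name and the key are assumptions; the key must match the environment variable the runner expects for its registration token):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-ci-token
  namespace: YOUR_GITLAB_BUILD_NAMESPACE
type: Opaque
data:
  REGISTRATION_TOKEN: YOUR_BASE64_ENCODED_TOKEN
```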
StatefulSet for actually running the Runners in Kubernetes
This container specification for the GitLab CI Runner has a twist added to it. On start, the runner tries to unregister any runner with the same name. This is especially useful when a node is lost (`NodeLost` event). It then re-registers itself and begins to run. On a normal stop of the Pod, the GitLab CI Runner will run the unregister command to try to unregister itself, so it won't be used by GitLab anymore. This is done using Kubernetes lifecycle "hooks"; documentation about them can be found here: Kubernetes.io - Container Lifecycle Hooks.
Another awesome thing is that `envFrom` allows you to specify `ConfigMap`s to be used as environment variables (variables are only set when they match the regex for valid environment variable names).
Be sure to use the latest available image tag for the GitLab CI Runner (`image:`). The available image tags for the `gitlab/gitlab-runner` image can be found here: gitlab/gitlab-runner Tags - Docker Hub.
It is not recommended to use "dynamic static" tags (e.g., `bleeding`). It is best to use a specific tag, which makes debugging easier.
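A condensed sketch of what the `StatefulSet` could look like (all object names, the tag, and the script path are assumptions; adjust them to the names you used in your own `ConfigMap`s and `Secret`):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gitlab-ci-runner
  namespace: YOUR_GITLAB_BUILD_NAMESPACE
spec:
  serviceName: gitlab-ci-runner
  replicas: 2
  selector:
    matchLabels:
      app: gitlab-ci-runner
  template:
    metadata:
      labels:
        app: gitlab-ci-runner
    spec:
      serviceAccountName: gitlab-ci
      containers:
      - name: gitlab-ci-runner
        image: gitlab/gitlab-runner:v10.3.0  # pick a specific tag, not a moving one
        command: ["/bin/bash", "/scripts/run.sh"]
        # Pull configuration and token in as environment variables
        envFrom:
        - configMapRef:
            name: gitlab-ci-runner-cm
        - secretRef:
            name: gitlab-ci-token
        # Unregister the runner on normal Pod termination
        lifecycle:
          preStop:
            exec:
              command: ["gitlab-runner", "unregister", "--all-runners"]
        volumeMounts:
        - name: scripts
          mountPath: /scripts
      volumes:
      - name: scripts
        configMap:
          name: gitlab-ci-runner-scripts
```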
RBAC: ServiceAccount, Role and Rolebinding
If you are not using RBAC, skip this section.
I know that the `Role` used here can and should be improved. Though, as it is not a wildcard `ClusterRole`, it is not that bad, at least for my setup, since the `YOUR_GITLAB_BUILD_NAMESPACE` namespace is dedicated to GitLab CI.
As noted above:
- These are very very very (did I say very?) open permissions granted to the GitLab CI runners.
- These can/should/must be refined and reduced for “real” production environments.
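A sketch of the three RBAC objects (names are placeholders; the rules are deliberately broad, matching the caveat above, and should be narrowed for production):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-ci
  namespace: YOUR_GITLAB_BUILD_NAMESPACE
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-ci
  namespace: YOUR_GITLAB_BUILD_NAMESPACE
rules:
# Very open, namespace-scoped permissions - refine these for production!
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log", "secrets", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-ci
  namespace: YOUR_GITLAB_BUILD_NAMESPACE
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-ci
subjects:
- kind: ServiceAccount
  name: gitlab-ci
  namespace: YOUR_GITLAB_BUILD_NAMESPACE
```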
If you had the time to refine/reduce them, please let me know in the comments. I'm happy to give you a shoutout in the post and add your improved `Role`.
Should you experience issues creating the `Role` object and you are on GKE (other Kubernetes-as-a-Service platforms may be affected too), please take a look at this GitHub issue: GitHub coreos/prometheus-operator RBAC on GKE - extra step needed #357.
Thanks to Parama Danoesubroto for commenting about this for GKE (see his comment http://disq.us/p/1r9y13d)!
Step 4 - Create the manifests
To interact with the Kubernetes cluster, the `kubectl` program is used. To create/"run" a manifest on the Kubernetes cluster, the `create` subcommand is used.
An example looks like this: `kubectl create -f FILE_NAME`, where `FILE_NAME` is the name of the file you saved the manifest(s) in.
To specify multiple manifests in one go, just add `-f FILE_NAME` per file behind the `create` subcommand, like this: `kubectl create -f FILE_NAME_1 -f FILE_NAME_2 -f FILE_NAME_3 ...`.
Now create the manifests you wrote in Step 3 - Write Kubernetes manifests for GitLab CI Runners.
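Assuming you saved the manifests under file names like these (adjust them to whatever you actually used), creating everything in one go could look like:

```
kubectl create -f namespace.yaml \
    -f configmaps.yaml \
    -f secret.yaml \
    -f rbac.yaml \
    -f statefulset.yaml
```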
Step 5 - Check the GitLab CI Runners
For an example GitLab CI pipeline construct, please take a look at the repository on GitHub galexrt/presentation-gitlab-k8s.
Go to the GitLab Admin area and there to the “Runners” tab as shown above.
In the table/list below where you got the GitLab Runner token from, the runners should have appeared. If not, check the Troubleshooting section below.
Check the logs of the GitLab CI runner Pods
To get logs from one of the GitLab CI runner pods, you can use the following command:
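For example (the Pod name follows the `StatefulSet` naming scheme, here assuming the first replica):

```
kubectl logs -n YOUR_GITLAB_BUILD_NAMESPACE -f gitlab-ci-runner-0
```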
The `-f` option causes the logs to be streamed, like with the `tail -f` command.
Check the logs for any errors.
Now you should have two (or more, depending on whether you changed the `replicas` count in the `StatefulSet` manifest) GitLab CI Runners that use Kubernetes as a so-called executor for CI tasks.