
Deploy Actions Runner Controller (ARC) using ArgoCD: A Step-by-Step Guide

Explore self-hosted GitHub Actions workflows by understanding how to deploy Actions Runner Controller with the simplicity and efficiency of ArgoCD.

Ashish Kurmi
November 28, 2023


Introduction

Managing Actions Runner Controller (ARC) deployments can be tedious, especially when you're dealing with upgrades and maintenance. In this blog post, we'll cover how to deploy Actions Runner Controller using ArgoCD, a popular Kubernetes GitOps tool.

What is Actions Runner Controller (ARC)?

ARC is a Kubernetes operator designed to manage and scale self-hosted GitHub Actions runner pods. It is one of the go-to options for running GitHub Actions workflows in a self-hosted environment. Please refer to our previous blog posts in the series to learn more about Actions Runner Controller:

Introduction to GitHub Actions Runner Controller: A Blog Series

How to Use Docker in Actions Runner Controller (ARC) Runners Securely

Looking to secure your ARC environments? Our eBPF-powered, Kubernetes-native solution for AWS EKS, GCP GKE, and Azure AKS is made just for that! Explore how you can fortify your ARC cluster with us.

Why Use ArgoCD?

ArgoCD is a declarative, GitOps Continuous Delivery (CD) tool for Kubernetes. It is an open-source project typically used in conjunction with a Continuous Integration (CI) platform such as GitHub Actions to manage Kubernetes deployments. With ArgoCD, your Kubernetes resources are version-controlled and can be automatically updated to match the state in your Git repository. This makes it an ideal choice for deploying and managing ARC.

Prerequisites

Before you proceed, ensure you have the following setup:

  • A Kubernetes cluster for hosting ARC and ArgoCD
  • kubectl configured to manage the above cluster
  • Helm CLI

We assume that you will deploy ArgoCD in the Kubernetes cluster first and then use it to manage ARC resources.

Setting up ArgoCD and sealed-secrets

Deploy ArgoCD

To deploy ArgoCD, run the following commands, as described in the official documentation:

kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
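
Before moving on, you may want to wait until the ArgoCD pods are ready; a simple way to do that is:

kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s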

We will be using the ArgoCD CLI, which you can install on your machine by following these steps.
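
For example, on macOS you can typically install the CLI with Homebrew, or on Linux download the release binary (the commands below are illustrative; adjust the platform and install path as needed):

# macOS
brew install argocd

# Linux
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd
sudo mv argocd /usr/local/bin/argocd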

Deploy sealed-secrets

For managing ARC’s GitHub Personal Access Token (PAT) (more on this in the next section) and other secrets, we will use sealed-secrets.

For production, generate a key pair, store it securely offline, and configure sealed-secrets to use this key. This way, you won’t have to change your encrypted secrets in your Git repository every time you rebuild your Kubernetes cluster.

Generate a key pair

You should perform this step only once and store private.key and certificate.crt securely. For subsequent installations, skip this step and reuse the private.key and certificate.crt you generated previously in the next step.

openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout private.key -out certificate.crt -subj "/CN=sealed-secrets/O=my-org"

Install sealed-secrets

Run the following commands to install sealed-secrets with Helm, using the key pair generated above.

kubectl create secret tls sealed-secrets-key --key=private.key --cert=certificate.crt -n kube-system 

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets  

helm install sealed-secrets sealed-secrets/sealed-secrets -n kube-system --set secretName=sealed-secrets-key --atomic 
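
Once the controller is running, you can optionally confirm that kubeseal can reach it and that it picked up your key by fetching the public certificate, which should match certificate.crt:

kubeseal --fetch-cert --controller-name sealed-secrets --controller-namespace kube-system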

ArgoCD Configuration

CLI

To access the ArgoCD API server, run the following kubectl port forwarding command in a terminal.

kubectl port-forward svc/argocd-server -n argocd 8080:443

Run the following commands to configure the argocd CLI to use this ArgoCD instance:

ADMIN_PASSWORD=$(argocd admin initial-password -n argocd | head -n 1)

argocd login localhost:8080 --insecure  --username "admin" --password "${ADMIN_PASSWORD}"
[Screenshot: argocd CLI initialization]

Web UI

For this installation guide, you don't need to use the web UI. However, you can access the ArgoCD web interface at https://localhost:8080 (via the port-forward above) to track progress or troubleshoot issues.

GitHub Repository for ArgoCD applications

If you don't already have a GitHub repository for hosting ArgoCD applications, create one in your GitHub organization. For better security, this should be a private repository. We have shared the ArgoCD apps used in this blog post at https://github.com/step-security/code-samples/tree/main/deploy-arc-using-argocd
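
If you use the GitHub CLI, creating such a repository can be as simple as the following (the organization and repository names are placeholders):

gh repo create <your-org>/argocd-apps --private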

Personal Access Token (PAT) to authorize ArgoCD

As the ArgoCD repository that you created above is private, ArgoCD requires a GitHub PAT to access the repository. Follow these steps:

  1. Create a new GitHub bot user named argocd
  2. Give the bot user read access to the ArgoCD repository
  3. Create a GitHub PAT for the bot user with repo permissions as shown in the image below:
[Screenshot: required GitHub PAT permissions]

You can also create an equivalent fine-grained PAT with your personal account.

Onboard the GitHub repository onto ArgoCD

argocd repo add <Repository URL> \
            --type git \
            --name argocd \
            --username <GitHub Bot Username> \
            --password <PAT>
[Screenshot: ArgoCD with the Git repository configured]
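
You can optionally verify that ArgoCD can reach the repository:

argocd repo list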

Deploy Actions Runner Controller using ArgoCD

Create a GitHub Personal Access Token for ARC

In order to authorize ARC to access the relevant GitHub resources, you will need a GitHub Personal Access Token with appropriate permissions. Ideally, create a new GitHub bot account for ARC, authorize it to access the GitHub repositories/organizations/enterprises you need, create the PAT from that account, and store the token safely as you'll need it later.

Another way to authorize ARC is to create a GitHub App and provide the private key to ARC. One advantage of using a GitHub App over a PAT is that the App has higher API throttling limits.
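
As a rough sketch of the GitHub App option for the community-supported controller (the secret name and keys shown here follow the ARC documentation, but verify them against the chart version you deploy):

kubectl create secret generic controller-manager -n actions-runner-system \
  --from-literal=github_app_id=<APP ID> \
  --from-literal=github_app_installation_id=<INSTALLATION ID> \
  --from-file=github_app_private_key=<PATH TO PRIVATE KEY PEM FILE>

In this guide, however, we stick with the PAT model and manage the token through sealed-secrets.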

You can read more about ARC-supported GitHub authentication models and required PAT/GitHub App permissions here.

Setting Up the ARC ArgoCD App of Apps

An ArgoCD App of Apps is an ArgoCD application that consists solely of other ArgoCD applications. To simplify ARC installation, we have authored the following ARC App of Apps, which installs all the required components to build a functional ARC cluster. Please clone the repository into your ArgoCD repository before proceeding.
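
Conceptually, an App of Apps is just an ArgoCD Application whose source path contains other Application manifests. A minimal, illustrative sketch (the repository URL and path are placeholders, not the exact manifest used in this post) looks like this:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: arc-app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-org>/<your-argocd-repo>.git
    path: <path containing the child Application manifests>
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true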

There are two flavors of ARC: community-supported ARC and GitHub-supported ARC. You can learn about how these flavors differ from each other in our ARC series introductory blog post. Please follow one of the following sections depending on which flavor you are using.

In both models, we create two ArgoCD apps, one for the controller and one for hosting runner pods. The advantage of having a separate ArgoCD app for runners is that one can create multiple runners using the same ArgoCD app definition in GitHub by passing different parameters.

Community-Supported ARC

Sealed secret

1. Create github_token_secret.yml with the base64-encoded GitHub PAT in the following format (see the encoding tip below the manifest):

apiVersion: v1
kind: Secret
metadata:
  name: controller-manager
  namespace: actions-runner-system
type: Opaque
data:
  github_token: <BASE64 ENCODED GITHUB PAT>
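
To produce the base64-encoded value for github_token, you can encode the PAT like this (the -n flag avoids encoding a trailing newline):

echo -n "<GITHUB PAT>" | base64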

2. Run the following command to create a new file named github_token_sealed_secret.yml with the sealed secret.

kubeseal -f github_token_secret.yml -w github_token_sealed_secret.yml --controller-name sealed-secrets --controller-namespace kube-system

cat github_token_sealed_secret.yml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: controller-manager
  namespace: actions-runner-system
spec:
  encryptedData:
    github_token: Ag...1g
  template:
    metadata:
      creationTimestamp: null
      name: controller-manager
      namespace: actions-runner-system
    type: Opaque

3. Replace spec.encryptedData.github_token in community-supported/controller/templates/pat-secret.yaml with the value of spec.encryptedData.github_token from github_token_sealed_secret.yml.

ARC Controller

1. Copy the app into the ArgoCD repository you created above and run the following argocd CLI command:

argocd app create arc-apps \
    --dest-namespace argocd \
    --dest-server https://kubernetes.default.svc \
    --repo https://github.com/step-security/code-samples.git \
    --path deploy-arc-using-argocd/community-supported/controller \
    --sync-policy automated --auto-prune --self-heal

[Screenshot: ArgoCD web UI with the community-supported ARC controller app]
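
You can optionally check the application's sync and health status from the CLI:

argocd app get arc-apps
argocd app wait arc-apps --health --timeout 300
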
ARC Runners

As ARC runners depend on the ARC controller, they cannot be created as part of the ARC App of Apps. Furthermore, having a separate ArgoCD application for ARC runners gives us the ability to deploy multiple runner sets if required.

In the GitHub runner ArgoCD app, we are not using any runner labels. The runner is also not part of a specific runner group. You can update the ArgoCD app with these attributes if required. Define runner configurations inside the values directory. For example, we have created step_security.yaml to create a runner for the step-security organization.

argocd app create arc-runners \
    --dest-namespace argocd \
    --dest-server https://kubernetes.default.svc \
    --repo https://github.com/step-security/code-samples.git \
    --path deploy-arc-using-argocd/community-supported/runners \
    --sync-policy automated --auto-prune --self-heal

[Screenshot: ArgoCD web UI with the community-supported ARC runner app]

Check Pod Status

At this point, if you list all pods, you will see the ARC controller and runner pods in the Running state.

kubectl get pods -A
NAMESPACE               NAME                                                READY   STATUS              RESTARTS   AGE
actions-runner-system   actions-runner-controller-5c996cd9c7-hq7n8          2/2     Running             0          12m
arcrunner               step-security-wlg9g-9jnlc                           0/2     Running             0          29s
argocd                  argocd-application-controller-0                     1/1     Running             0          79m
argocd                  argocd-applicationset-controller-5bf97c679b-lr9lq   1/1     Running             0          79m
argocd                  argocd-dex-server-f7648d898-r8khk                   1/1     Running             0          79m
argocd                  argocd-notifications-controller-6cf7579685-jwgn6    1/1     Running             0          79m
argocd                  argocd-redis-6976fc7dfc-tmk42                       1/1     Running             0          79m
argocd                  argocd-repo-server-8477fdffc7-sl4xq                 1/1     Running             0          79m
argocd                  argocd-server-7c7d77f474-th989                      1/1     Running             0          79m
cert-manager            cert-manager-5cd87c47c6-pgw5z                       1/1     Running             0          12m
cert-manager            cert-manager-cainjector-767844f895-kmnl5            1/1     Running             0          12m
cert-manager            cert-manager-webhook-855778f88d-h65mb               1/1     Running             0          12m
kube-system             aws-node-5mc8z                                      2/2     Running             0          89m
kube-system             aws-node-jfvs8                                      2/2     Running             0          89m
kube-system             coredns-59754897cf-bhhqp                            1/1     Running             0          93m
kube-system             coredns-59754897cf-gch55                            1/1     Running             0          93m
kube-system             kube-proxy-gxlt8                                    1/1     Running             0          89m
kube-system             kube-proxy-lzdln                                    1/1     Running             0          89m
kube-system             sealed-secrets-5557bf4fb4-4n4rj                     1/1     Running             0          78m

GitHub-supported ARC

ARC Controller

Deploy Actions Runner Controller by running the following command:

argocd app create actions-runner-controller-apps \
    --dest-namespace argocd \
    --dest-server https://kubernetes.default.svc \
    --repo https://github.com/step-security/code-samples.git \
    --path deploy-arc-using-argocd/github-supported/controller \
    --sync-policy automated --auto-prune --self-heal
[Screenshot: ArgoCD web UI with the GitHub-supported ARC controller app]

Sealed secret

In the community-supported model, the ARC controller had access to the GitHub PAT. In this model, you need to grant the runners access to a GitHub PAT instead.

1. Create github_token_secret_runner.yml with the base64-encoded GitHub PAT in the following format.

apiVersion: v1 
kind: Secret 
metadata: 
  name: runner-pat 
  namespace: arc-runners 
type: Opaque 
data: 
  github_token: <BASE64 ENCODED GITHUB PAT>

2. Run the following command to create a new file named github_token_secret_runner_sealed.yml with the sealed secret.

kubeseal -f github_token_secret_runner.yml -w github_token_secret_runner_sealed.yml --controller-name sealed-secrets --controller-namespace kube-system

cat github_token_secret_runner_sealed.yml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: runner-pat
  namespace: arc-runners
spec:
  encryptedData:
    github_token: Ag...GL
  template:
    metadata:
      creationTimestamp: null
      name: runner-pat
      namespace: arc-runners
    type: Opaque

3. Update deploy-arc-using-argocd/github-supported/runners/templates/pat-secret.yaml with the value of the sealed secret from github_token_secret_runner_sealed.yml.

ARC Runner

Just like the community-supported ARC flavor, the ArgoCD app for runner pods does not use any runner labels, and the runner is not part of a specific runner group. You can update the ArgoCD app with these attributes if required. Deploy the ARC runner by running the following command:

argocd app create arc-runner \
    --dest-namespace argocd \
    --dest-server https://kubernetes.default.svc \
    --repo https://github.com/step-security/code-samples.git \
    --path deploy-arc-using-argocd/github-supported/runners \
    --helm-set runner.githubConfigUrl="https://github.com/step-security/harden-runner" \
    --sync-policy automated --auto-prune --self-heal
[Screenshot: ArgoCD web UI with the GitHub-supported ARC runner app]
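
Because the GitHub configuration URL is passed as a Helm parameter, you can deploy additional runner sets by reusing the same path with a different application name and URL. The values below are illustrative, and depending on how the chart names its resources you may also need to override the runner set name or namespace:

argocd app create arc-runner-second \
    --dest-namespace argocd \
    --dest-server https://kubernetes.default.svc \
    --repo https://github.com/step-security/code-samples.git \
    --path deploy-arc-using-argocd/github-supported/runners \
    --helm-set runner.githubConfigUrl="https://github.com/<another-org-or-repo>" \
    --sync-policy automated --auto-prune --self-heal
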
Check Pod Status

Run the following command to confirm that all Kubernetes pods are in the Running state.

kubectl get pods -A
NAMESPACE     NAME                                                           READY   STATUS    RESTARTS   AGE
arc           actions-runner-controller-gha-rs-controller-7cb94f78cd-cqszz   1/1     Running   0          21m
arc           arc-runner-set-754b578d-listener                               1/1     Running   0          13s
argocd        argocd-application-controller-0                                1/1     Running   0          125m
argocd        argocd-applicationset-controller-5bf97c679b-lr9lq              1/1     Running   0          125m
argocd        argocd-dex-server-f7648d898-r8khk                              1/1     Running   0          125m
argocd        argocd-notifications-controller-6cf7579685-jwgn6               1/1     Running   0          125m
argocd        argocd-redis-6976fc7dfc-tmk42                                  1/1     Running   0          125m
argocd        argocd-repo-server-8477fdffc7-sl4xq                            1/1     Running   0          125m
argocd        argocd-server-7c7d77f474-th989                                 1/1     Running   0          125m
kube-system   aws-node-5mc8z                                                 2/2     Running   0          136m
kube-system   aws-node-jfvs8                                                 2/2     Running   0          136m
kube-system   coredns-59754897cf-bhhqp                                       1/1     Running   0          139m
kube-system   coredns-59754897cf-gch55                                       1/1     Running   0          139m
kube-system   kube-proxy-gxlt8                                               1/1     Running   0          136m
kube-system   kube-proxy-lzdln                                               1/1     Running   0          136m
kube-system   sealed-secrets-5557bf4fb4-4n4rj                                1/1     Running   0          125m

Troubleshooting

Running argocd CLI commands on the ArgoCD server pod

We have frequently seen Kubernetes port forwarding drop connections with errors like the following:

Handling connection for 8080
E1127 02:14:16.784173   88896 portforward.go:381] error copying from remote stream to local connection: readfrom tcp6 [::1]:8080->[::1]:52063: write tcp6 [::1]:8080->[::1]:52063: write: broken pipe
Handling connection for 8080
Handling connection for 8080
E1127 02:14:46.817383   88896 portforward.go:370] error creating forwarding stream for port 8080 -> 8080: Timeout occurred

If you consistently see these errors, an alternative is to run the ArgoCD CLI commands from inside the ArgoCD server pod itself. To use this method, create a new ArgoCD local account, generate an API token for it, and use that token when running ArgoCD commands. The steps are shown below.

echo "Update ConfigMap"
kubectl get configmap argocd-cm -n argocd -o yaml > argocd-cm.yaml
cat <<EOL >> argocd-cm.yaml
data:
accounts.githubactions: login,apiKey
policy.csv: |
  g, argo-account, role:admin
EOL
kubectl apply -f argocd-cm.yaml -n argocd

echo "Retrieve admin credentials"
ARGOSERVERPODNAME=$(kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o custom-columns=NAME:.metadata.name --no-headers)
echo "ARGOSERVERPODNAME: $ARGOSERVERPODNAME"

ADMIN_PASSWORD=$(kubectl exec -n argocd $ARGOSERVERPODNAME -- argocd admin initial-password -n argocd | head -n 1)
echo "ADMIN_PASSWORD: $ADMIN_PASSWORD"
kubectl exec -n argocd $ARGOSERVERPODNAME -- argocd login argocd-server.argocd.svc:443 --insecure  --username "admin" --password "${ADMIN_PASSWORD}"

echo "Updating account password"
kubectl exec -n argocd $ARGOSERVERPODNAME -- argocd account update-password --account githubactions --current-password $ADMIN_PASSWORD --new-password $ADMIN_PASSWORD

echo "Updating argocd-rbac-cm configmap"
kubectl get configmap argocd-rbac-cm -n argocd -o yaml > argocd-rbac-cm.yaml
cat <<EOL >> argocd-rbac-cm.yaml
data:
policy.csv: |
  p, githubactions, *, *, *, allow
EOL
kubectl apply -f argocd-rbac-cm.yaml
rm argocd-rbac-cm.yaml

kubectl exec -n argocd $ARGOSERVERPODNAME -- argocd login argocd-server.argocd.svc:443 --insecure --username githubactions --password $ADMIN_PASSWORD
ARGOCDAUTHTOKEN=$(kubectl exec -n argocd $ARGOSERVERPODNAME -- argocd account generate-token)
echo "ARGOCDAUTHTOKEN: $ARGOCDAUTHTOKEN"

kubectl exec -n argocd $ARGOSERVERPODNAME -- argocd repo add ... --server argocd-server.argocd.svc:443 --insecure --auth-token $ARGOCDAUTHTOKEN

Also Read: Secure your Actions Runner Controller (ARC) Environment using StepSecurity

Happy Deploying!

A compromised dependency or build tool can exfiltrate source code and CI/CD secrets from GitHub Actions runners. We recommend implementing runtime CI/CD security to prevent such threats. To learn more about how you can do that, get in touch with our team.

Try StepSecurity For Free
