Support for Gitea Actions #459

Open
opened 2023-06-21 16:11:08 +00:00 by Claas · 9 comments

Gitea has supported Actions for a while now; are there any plans to include them in this Helm chart?

Is there anything holding it back on a technical basis? I could try to add it, but I don't want to waste time if it's impossible in the first place.

Member

The helm chart is not blocking here; all you need to do is enable it in `app.ini`: https://docs.gitea.com/administration/config-cheat-sheet#actions-actions
The default is `false` at the moment and I think going with the default is fine.

I am not sure if there is any support for k8s at the moment, so you might be facing issues. I think this is on the roadmap, but not as one of the next features (I could be wrong about this).
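For reference, with this chart the `app.ini` setting from the linked cheat sheet can be supplied through the `gitea.config` values section, which is rendered into `app.ini`. A minimal sketch (the exact values key layout is an assumption; check the chart's README):

```yaml
# values.yaml fragment: enables the [actions] section in app.ini
gitea:
  config:
    actions:
      ENABLED: true
```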

Author

Enabling it in `app.ini` is easy, yes, but I was thinking more about adding actual runners.

Structurally similar to, say, memcached:

```yaml
actions:
  enable: true
  runner:
    - name: defaultrunner
      image: runnerxyz:release
```

My (personal) goal would be that everything gets set up automatically, at least for a reasonable default runner (whatever that is). I'm currently sifting through the docs on how to best deploy runners in k8s; maybe it's doable for me.

Member

If there's k8s support, we can definitely think about adding support for runners. I don't know where the runner repo stands atm with k8s backend support, so if you get something going or find out that it's already possible, feel free to let us know and propose a design :) Happy to discuss and take a look.

Proposing something first might be best to avoid long discussions about the design etc. for the chart :)

Author

I've looked into the documentation. Starting runners doesn't seem too hard; however, they need to be registered using one-time tokens.

These tokens can't be pre-set, so there has to be some kind of interaction with the already running Gitea instance, I'm afraid.

The only viable option I can see is to first deploy Gitea itself, then retrieve tokens from Gitea via the API using the admin/user credentials supplied in the values, and finally start and register the act_runners with these tokens.

The token retrieval itself could be done by an initContainer in the act_runner pod(s).

pat-s added the
kind
proposal
upstream
gitea
labels 2023-06-27 19:38:26 +00:00
pat-s changed title from Are there any plans to support actions? to Support for Gitea Actions 2023-06-27 19:38:38 +00:00
pat-s pinned this 2023-06-27 19:38:55 +00:00
Author

I finally found some time to actually dig into it a bit.

There seems to be some discussion in the Gitea repo as well about the token mechanism:
https://github.com/go-gitea/gitea/issues/24101
https://github.com/go-gitea/gitea/issues/24635

There is even a CLI command available (in the current 1.20-RC, not 1.19):
https://github.com/go-gitea/gitea/pull/23762

However, that still means one would have to exec into another pod after the actual Gitea pod has started.
I tried using an init container right before the gitea container, but that produced a wonderful segfault when executing the CLI command.

My current approach is to use a bitnami kubectl container as an init container for the actual runners, do a `kubectl exec ...` from the init container into the gitea pod, and then share the token via an in-memory volume. That kinda sorta works, but it feels rather hacky and absolutely defeats the point of the token.
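A rough sketch of that kubectl-exec workaround (the label selector, volume name, and the `pods/exec` RBAC for the ServiceAccount are assumptions, not something the chart provides):

```yaml
# init container for the act_runner pod; requires a ServiceAccount
# that may "get" pods and "create" pods/exec in the namespace
initContainers:
  - name: fetch-runner-token
    image: bitnami/kubectl:latest
    command:
      - bash
      - -c
      - |
        # find the running Gitea pod and generate a one-time
        # registration token inside it, shared via an in-memory volume
        GITEA_POD=$(kubectl get pods -l app.kubernetes.io/name=gitea \
          -o jsonpath='{.items[0].metadata.name}')
        kubectl exec "$GITEA_POD" -- \
          gitea actions generate-runner-token > /token/runner-token
    volumeMounts:
      - name: token          # emptyDir with medium: Memory
        mountPath: /token
```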


With 1.20.4 I was able to do the registration from the act_runner pod when I added the `GITEA__SERVER__LOCAL_ROOT_URL` environment variable to the gitea pod, then mounted the generated config directory into the act_runner pod and executed the registration.


With this statefulset and an additional parameter for the helm chart, I was able to do fully automatic deployments of act_runner.

`values.yaml`:

```yaml
...
deployment:
  env:
    - name: GITEA__ACTIONS__ENABLED
      value: 'true'
    - name: GITEA__SERVER__LOCAL_ROOT_URL
      value: http://gitea-http:3000
...
```

Tested with the following workflow:

```yaml
name: Actions Demo
run-name: ${{ gitea.actor }} is testing out Actions 🚀
on: [push]
jobs:
  Explore-Actions:
    container:
      image: node:18
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - name: List files in the repository
        run: |
          ls ${{ github.workspace }}
```

Statefulset:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: act-runner
  name: act-runner
  namespace: gitea
spec:
  selector:
    matchLabels:
      app: act-runner
  template:
    metadata:
      labels:
        app: act-runner
    spec:
      initContainers:
        - name: gitea
          image: gitea/gitea:1.20.4-rootless
          command:
            - bash
            - -c
            - |
              set -xe;
              if [[ ! -e /workdir/.runner ]]; then

              cat >/workdir/config.yaml <<EOF
              log:
                level: debug
              cache:
                enabled: false
              EOF

                gitea actions generate-runner-token > /var/run/token
              fi
          env:
            - name: GITEA_APP_INI
              value: /data/gitea/conf/app.ini
          volumeMounts:
            - mountPath: /data
              name: data
            - mountPath: /var/run
              name: var-run
            - mountPath: /workdir
              name: act-runner-workdir
      containers:
        - name: act-runner
          image: gitea/act_runner:latest
          workingDir: /workdir
          command:
            - bash
            - -c
            - |
              set -x;
              while ! ls /var/run/docker.sock; do
                echo 'waiting for docker daemon...';
                sleep 5;
              done;
              if [[ -e /var/run/token ]]; then
                act_runner register -c /workdir/config.yaml \
                  --no-interactive \
                  --instance "$GITEA_INSTANCE_URL" \
                  --token "$(cat /var/run/token)" \
                  --name "$HOSTNAME" \
                  --labels "$GITEA_RUNNER_LABELS";
              fi
              act_runner daemon -c /workdir/config.yaml
          env:
            - name: GITEA_INSTANCE_URL
              value: http://gitea-http:3000
            - name: GITEA_RUNNER_NAME
              value: ubuntu-latest
            - name: GITEA_RUNNER_LABELS
              value: ubuntu-latest
          volumeMounts:
            - mountPath: /var/run
              name: var-run
            - mountPath: /workdir
              name: act-runner-workdir
        - name: dind
          image: docker:23.0.6-dind
          command:
            - dockerd
            - --host
            - unix:///var/run/docker.sock
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /var/run
              name: var-run
      volumes:
        - name: var-run
          emptyDir: {}
        - name: data
          persistentVolumeClaim:
            claimName: gitea-shared-storage
  volumeClaimTemplates:
    - metadata:
        name: act-runner-workdir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
```
Member

Thanks for sharing!

Would you mind porting this into a PR so we can discuss implementation details?

Some ad-hoc thoughts:

  • The commands in `initContainers` should be ported to the existing init containers
  • The runner token should be defined in a k8s secret
  • I can't see why `GITEA__SERVER__LOCAL_ROOT_URL` would be needed?
  • Given that we switched to deployments, I'd prefer using this type instead of a statefulset for the runner resource
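For illustration, the secret-based idea from the second bullet would presumably look something like this (the secret name and key are made up; the next comment discusses why a statically provisioned token may not work):

```yaml
# Hypothetical: a pre-provisioned registration token as a k8s Secret,
# to be mounted or injected into the act_runner pod
apiVersion: v1
kind: Secret
metadata:
  name: act-runner-registration
type: Opaque
stringData:
  token: "<one-time registration token>"
```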

Here are my findings:

  • The runner token is generated by the Gitea server. I don't think it is possible to save it to a static secret.
  • `GITEA__SERVER__LOCAL_ROOT_URL` is required, as the gitea command uses this variable to determine where to request the token.
  • A statefulset is needed, as the `.runner` file is different per runner, so you need `volumeClaimTemplates` in the spec, and this can't be done with deployments.

Problems:

  • The act_runner image from https://gitea.com/gitea/act_runner is not usable; you need to rebuild it and add at least nodejs to be able to execute the checkout action.
  • HPA won't work, as there is no deregister hook and no metrics are exposed for how many tasks are in the queue.

Here is the final YAML without a root user:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: act-runner-config
data:
  config.yaml: |
    log:
      level: debug
    cache:
      enabled: false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: act-runner
  name: act-runner
  namespace: gitea
spec:
  selector:
    matchLabels:
      app: act-runner
  template:
    metadata:
      labels:
        app: act-runner
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      initContainers:
        # As there is no dedicated gitea CLI image, using the whole image.
        # Alternative solution:
        # curl -v -X POST -d '{"scope":""}' \
        #   -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' \
        #   http://gitea-http:3000/api/internal/actions/generate_actions_runner_token
        - name: gitea
          image: gitea/gitea:1.20.5-rootless
          command:
            - bash
            - -exc
            - test -e /act-runner-data/.runner || gitea actions generate-runner-token > /act-runner-data/token
          env:
            - name: GITEA_APP_INI
              value: /data/gitea/conf/app.ini
          volumeMounts:
            - mountPath: /data
              name: gitea-shared-storage
            - mountPath: /act-runner-data
              name: act-runner-data
      containers:
        - name: act-runner
          image: registry.local/act_runner:20231015-171754
          workingDir: /data
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
          env:
            - name: DOCKER_HOST
              value: tcp://127.0.0.1:2376
            - name: DOCKER_TLS_VERIFY
              value: "1"
            - name: DOCKER_CERT_PATH
              value: /certs/server
            - name: GITEA_RUNNER_REGISTRATION_TOKEN_FILE
              value: /data/token
            - name: GITEA_INSTANCE_URL
              value: http://gitea-http:3000
            - name: GITEA_RUNNER_LABELS
              value: ubuntu-latest
            - name: CONFIG_FILE
              value: /data/config.yaml
          volumeMounts:
            - mountPath: /data/config.yaml
              name: act-runner-config
              subPath: config.yaml
            - mountPath: /data
              name: act-runner-data
            - mountPath: /certs/server
              name: docker-certs
        - name: dind
          image: docker:24.0.6-dind-rootless
          env:
            - name: DOCKER_HOST
              value: tcp://127.0.0.1:2376
            - name: DOCKER_TLS_VERIFY
              value: "1"
            - name: DOCKER_CERT_PATH
              value: /certs/server
          securityContext:
            # allowPrivilegeEscalation: true
            # privileged is required, as rootlesskit is unable to mount "/" with the error:
            # [rootlesskit:child ] error: failed to share mount point: /: permission denied
            # Maybe there are even more problems, but this one doesn't allow going further.
            privileged: true
          volumeMounts:
            - mountPath: /certs/server
              name: docker-certs
      volumes:
        - name: act-runner-config
          configMap:
            name: act-runner-config
        - name: docker-certs
          emptyDir: {}
        - name: gitea-shared-storage
          persistentVolumeClaim:
            claimName: gitea-shared-storage
  volumeClaimTemplates:
    - metadata:
        name: act-runner-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
```
Reference: gitea/helm-chart#459