
Tekton Repo CI/CD

Why does Tekton pipelines have a folder called tekton? Cuz we think it would be cool if the tekton folder were the place to look for CI/CD logic in most repos!

We dogfood our project by using Tekton Pipelines to build, test and release Tekton Pipelines!

This directory contains the Tasks and Pipelines that we use.

The Pipelines and Tasks in this folder are used for:

  1. Manually creating official releases from the official cluster
  2. Automated nightly releases

To start from scratch and use these Pipelines and Tasks:

  1. Install Tekton (nightly releases run on v0.3.1; manual releases require v0.7.0 or newer, see Install Tekton below)
  2. Setup the Tasks and Pipelines
  3. Create the required service account + secrets

Create an official release

Official releases are performed from the prow cluster in the tekton-releases GCP project. This cluster already has the correct version of Tekton installed.

To make a new release:

  1. (Optionally) Apply the latest versions of the Tasks + Pipelines
  2. (If you haven't already) Install tkn
  3. Run the Pipeline
  4. Create the new tag and release in GitHub (see one way of doing that here). TODO(#530): Automate as much of this as possible with Tekton.
  5. Add an entry to the README at HEAD for docs and examples for the new release (README.md#read-the-docs).
  6. Update the new release in GitHub with the same links to the docs and examples, see v0.1.0 for example.
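The tagging half of step 4 can be sketched with plain git. This is a hypothetical walkthrough run against a throwaway local repo so it is safe to execute; the GitHub release itself is created in the web UI, and in the real flow you would tag the release commit in the pipeline repo and push the tag to the upstream remote:

```shell
set -e
# Throwaway repo standing in for the real checkout.
cd "$(mktemp -d)"
git init -q .
git -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "release commit"

# In the real flow COMMIT is the revision the release was built from.
COMMIT=$(git rev-parse HEAD)
git tag v0.99.0 "$COMMIT"        # real flow would follow with: git push upstream v0.99.0
git tag --points-at "$COMMIT"    # confirm the tag landed on the right commit
```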

Run the Pipeline

To use tkn to run the publish-tekton-pipelines Task and create a release:

  1. Pick the revision you want to release and update the resources.yaml file to add a PipelineResource for it, e.g.:

    apiVersion: tekton.dev/v1alpha1
    kind: PipelineResource
    metadata:
      name: tekton-pipelines-vX-Y-Z
    spec:
      type: git
      params:
      - name: url
        value: https://github.com/tektoncd/pipeline
      - name: revision
        value: revision-for-vX.Y.Z-invalid-tags-boouuhhh # REPLACE with the commit you'd like to build from (not a tag, since that's not created yet)
    
  2. To run against your own infrastructure, also set up the required credentials for the release-right-meow service account. (If you are running in the production cluster the default account should already have these credentials, and release-right-meow might already exist in the cluster.)

  3. Connect to the production cluster:

    gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases
    
  4. Run the release-pipeline (assuming you are using the production cluster and all the Tasks and Pipelines already exist):

    # Create the resources - i.e. set the revision that you want to build from
    kubectl apply -f tekton/resources.yaml
    
    # Change the environment variable to the version you would like to use.
    # Be careful: due to #983 it is possible to overwrite previous releases.
    export VERSION_TAG=v0.X.Y
    export IMAGE_REGISTRY=gcr.io/tekton-releases
    
    # Double-check the git revision that is going to be used for the release:
    kubectl get pipelineresource/tekton-pipelines-git -o=jsonpath="{'Target Revision: '}{.spec.params[?(@.name == 'revision')].value}{'\n'}"
    
    tkn pipeline start \
      --param=versionTag=${VERSION_TAG} \
      --param=imageRegistry=${IMAGE_REGISTRY} \
      --serviceaccount=release-right-meow \
      --resource=source-repo=tekton-pipelines-git \
      --resource=bucket=tekton-bucket \
      --resource=builtBaseImage=base-image \
      --resource=builtEntrypointImage=entrypoint-image \
      --resource=builtKubeconfigWriterImage=kubeconfigwriter-image \
      --resource=builtCredsInitImage=creds-init-image \
      --resource=builtGitInitImage=git-init-image \
      --resource=builtNopImage=nop-image \
      --resource=builtBashImage=bash-image \
      --resource=builtGsutilImage=gsutil-image \
      --resource=builtControllerImage=controller-image \
      --resource=builtWebhookImage=webhook-image \
      --resource=builtDigestExporterImage=digest-exporter-image \
      --resource=builtPullRequestInitImage=pull-request-init-image \
      --resource=builtGcsFetcherImage=gcs-fetcher-image \
      pipeline-release
    

TODO(#569): Normally we'd use the image PipelineResources to control which image registry the images are pushed to. However since we have so many images, all going to the same registry, we are cheating and using a parameter for the image registry instead.
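Because of #983 an existing release can be overwritten, so a quick sanity check on VERSION_TAG before starting the pipeline is cheap insurance. The guard below is a hypothetical helper, not part of the release tooling:

```shell
# Hypothetical pre-flight check: refuse to start the pipeline
# unless VERSION_TAG looks like vX.Y.Z.
VERSION_TAG=v0.8.0
if printf '%s' "$VERSION_TAG" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "tag ok: $VERSION_TAG"
else
  echo "invalid tag: $VERSION_TAG" >&2
  exit 1
fi
```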

Nightly releases

The nightly release pipeline is triggered nightly by Prow.

This Pipeline uses:

The nightly release Pipeline is currently missing Tasks which we want to add once we are able:

  • The unit tests aren't run due to the data race reported in #1124
  • Linting isn't run because it is flaky (#1205)
  • Build isn't run because it uses workingDir which is broken in v0.3.1 (kubernetes/test-infra#13948)

Install Tekton

Some of the Pipelines and Tasks in this repo must work with Tekton v0.3.1 (due to kubernetes/test-infra#13948) so that they can be used with Prow.

Specifically, nightly releases are triggered by Prow, so they are compatible with v0.3.1, while full releases are triggered manually and require Tekton >= v0.7.0.

# If this is your first time installing Tekton in the cluster you might need to give yourself permission to do so
kubectl create clusterrolebinding cluster-admin-binding-someusername \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)

# For Tekton v0.3.1 - apply version v0.3.1
kubectl apply --filename https://storage.googleapis.com/tekton-releases/previous/v0.3.1/release.yaml

# For Tekton v0.7.0 - apply version v0.7.0 - do not apply both versions in the same cluster!
kubectl apply --filename https://storage.googleapis.com/tekton-releases/previous/v0.7.0/release.yaml
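To decide which release.yaml applies, you can compare the installed version against the v0.7.0 requirement with sort -V. This is a minimal sketch that assumes you already know the installed version string; the variable names are illustrative:

```shell
# Hypothetical helper: INSTALLED is assumed known
# (e.g. read off the controller image tag in the cluster).
INSTALLED=v0.3.1
REQUIRED=v0.7.0

# If REQUIRED sorts first, INSTALLED is new enough.
if [ "$(printf '%s\n%s\n' "$REQUIRED" "$INSTALLED" | sort -V | head -n1)" = "$REQUIRED" ]; then
  echo "ok: $INSTALLED satisfies >= $REQUIRED"
else
  echo "too old: $INSTALLED is below $REQUIRED"
fi
```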

Setup

Add all the Tasks to the cluster, including the golang Tasks from the tektoncd/catalog, and the release pre-check Task from tektoncd/plumbing.

For nightly releases, use a version of the tektoncd/catalog tasks that is compatible with Tekton v0.3.1:

# Apply the Tasks we are using from the catalog
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/14d38f2041312b0ad17bc079cfa9c0d66895cc7a/golang/lint.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/14d38f2041312b0ad17bc079cfa9c0d66895cc7a/golang/build.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/14d38f2041312b0ad17bc079cfa9c0d66895cc7a/golang/tests.yaml

For full releases, use a version of the tektoncd/catalog tasks that is compatible with Tekton v0.7.0 (master) and install the pre-release check Task from plumbing too:

# Apply the Tasks we are using from the catalog
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/golang/lint.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/golang/build.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/golang/tests.yaml
kubectl apply -f https://raw.githubusercontent.com/tektoncd/plumbing/master/tekton/prerelease_checks.yaml
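The three catalog manifests above can equivalently be applied in a loop; shown here as a dry run (echo prints each command instead of executing it, so it can be run without a cluster):

```shell
# Dry-run loop over the golang catalog tasks; drop the echo to actually apply.
CATALOG=https://raw.githubusercontent.com/tektoncd/catalog/master/golang
for f in lint.yaml build.yaml tests.yaml; do
  echo kubectl apply -f "$CATALOG/$f"
done
```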

Apply the tasks from the pipeline repo:

# Apply the Tasks and Pipelines we use from this repo
kubectl apply -f tekton/ci-images.yaml
kubectl apply -f tekton/publish.yaml
kubectl apply -f tekton/publish-nightly.yaml
kubectl apply -f tekton/release-pipeline.yaml
kubectl apply -f tekton/release-pipeline-nightly.yaml

# Apply the resources - note that when manually releasing you'll re-apply these
kubectl apply -f tekton/resources.yaml

Tasks from this repo are:

Service account and secrets

In order to release, these Pipelines use the release-right-meow service account, which uses release-secret and has Storage Admin access to tekton-releases and tekton-releases-nightly.
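The shape of the service account wiring is roughly the following. This is a hypothetical sketch for orientation only; the real definition lives in tekton/account.yaml:

```yaml
# Sketch only - see tekton/account.yaml for the actual manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-right-meow
secrets:
- name: release-secret
```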

After creating these service accounts in GCP, the kubernetes service account and secret were created with:

KEY_FILE=release.json
GENERIC_SECRET=release-secret
ACCOUNT=release-right-meow

# Connected to the `prow` cluster in the `tekton-releases` GCP project
GCP_ACCOUNT="$ACCOUNT@tekton-releases.iam.gserviceaccount.com"

# 1. Create a private key for the service account
gcloud iam service-accounts keys create $KEY_FILE --iam-account $GCP_ACCOUNT

# 2. Create kubernetes secret, which we will use via a service account and directly mounting
kubectl create secret generic $GENERIC_SECRET --from-file=./$KEY_FILE

# 3. Add the docker secret to the service account
kubectl apply -f tekton/account.yaml
kubectl patch serviceaccount $ACCOUNT \
  -p "{\"secrets\": [{\"name\": \"$GENERIC_SECRET\"}]}"

Supporting scripts and images

Some supporting scripts have been written using Python 2.7:

  • koparse - Contains logic for parsing release.yaml files created by ko
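The gist of what koparse does can be illustrated in shell. This is illustrative only (the real koparse is a Python tool that parses the YAML properly): given a release.yaml produced by ko, pull out the image references:

```shell
# Fabricated sample standing in for a ko-produced release.yaml.
cat > /tmp/release-sample.yaml <<'EOF'
      containers:
      - image: gcr.io/tekton-releases/controller@sha256:abc123
      - image: gcr.io/tekton-releases/webhook@sha256:def456
EOF

# Crude extraction of the image references ko wrote into the manifest.
grep -o 'gcr\.io/[^ ]*' /tmp/release-sample.yaml
```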

ko image

In order to run ko, and to be able to use a cluster's default credentials, we need an image which contains:

  • ko
  • golang - Required by ko to build
  • gcloud - Required to auth with default namespace credentials

The image which we use for this is built from tekton/ko/Dockerfile.

go-containerregistry#383 tracks publishing an official ko image; once that exists we hope to move to it.