Init script fails without Logs #149

Closed
opened 2021-04-25 15:17:28 +00:00 by mattn · 13 comments

The init script has been causing my Helm deploy to fail for some time. I ended up modifying the init script to print out some context to help debug, and then... it worked?

I don't know if this is useful information for anyone. I'm still looking for a long-term solution or an explanation of the actual issue. I will update here if I find one.

Values.yaml

I'm running in a homelab environment; here's my values file:
ingress:
  enabled: false
persistence:
  enabled: true
  size: 10Gi
  storageClass: manual
  existingClaim: gitea-pv-claim
postgresql:
  persistence:
    size: 10Gi
    storageClass: manual
    existingClaim: gitea-pv-claim-postgres
  volumePermissions:
    enabled: true
  service:
    annotations:
      networking.istio.io/exportTo: "."
gitea:
  admin:
    username: admin
    password: pass
    email: admin@admin.admin
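
For comparison, the same values file could also be applied with plain Helm to take Terraform out of the picture. A minimal sketch (the gitea-charts repo alias and the values.yaml path are my own placeholders):

helm repo add gitea-charts https://dl.gitea.io/charts
helm upgrade --install gitea gitea-charts/gitea --version 2.2.5 -n gitea -f values.yaml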

Chart apply

And here's how I'm applying the chart:
resource "helm_release" "gitea" {
  name       = "gitea"
  repository = "https://dl.gitea.io/charts"
  chart      = "gitea"
  version    = "2.2.5"
  namespace  = "${kubernetes_namespace.gitea.metadata[0].name}"
  values = [
    "${data.template_file.chart-values.rendered}"
  ]
  depends_on = [
    kubectl_manifest.gitea-pv,
    kubectl_manifest.gitea-pvc,
    kubectl_manifest.gitea-pv-postgres,
    kubectl_manifest.gitea-pvc-postgres
  ]
}

pod description

Unmodified init script results
kubectl get pods -n gitea
NAME                              READY   STATUS       RESTARTS   AGE
gitea-0                           0/2     Init:Error   1          16s
gitea-memcached-c8547c9c9-v2xtq   2/2     Running      0          20m
gitea-postgresql-0                2/2     Running      0          20m
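
What's missing here are the init container's own logs (hence the title). They can be requested like this, but in this case there was nothing useful to show:

kubectl logs gitea-0 -c init -n gitea            # current attempt
kubectl logs gitea-0 -c init -n gitea --previous # previously crashed attempt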

kubectl describe pods -n gitea
Name:         gitea-0
Namespace:    gitea
Priority:     0
Node:         k8worker1/10.0.64.129
Start Time:   Sun, 25 Apr 2021 10:27:23 -0400
Labels:       app=gitea
              app.kubernetes.io/instance=gitea
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=gitea
              app.kubernetes.io/version=1.13.7
              controller-revision-hash=gitea-8679c9568c
              helm.sh/chart=gitea-2.2.5
              istio.io/rev=default
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=gitea
              service.istio.io/canonical-revision=1.13.7
              statefulset.kubernetes.io/pod-name=gitea-0
              version=1.13.7
Annotations:  checksum/config: 988c8a17f2339df2b2f22c4651ae47d476de14fd1cb010b97b62c1a0d7abc0b6
              checksum/ldap: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
              checksum/oauth: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
              cni.projectcalico.org/podIP: 10.0.117.129/32
              cni.projectcalico.org/podIPs: 10.0.117.129/32
              k8s.v1.cni.cncf.io/networks: istio-cni
              kubectl.kubernetes.io/default-container: gitea
              kubectl.kubernetes.io/default-logs-container: gitea
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/interceptionMode: REDIRECT
              sidecar.istio.io/status:
                {"initContainers":["istio-validation"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","i...
              traffic.sidecar.istio.io/excludeInboundPorts: 15020
              traffic.sidecar.istio.io/includeInboundPorts: *
              traffic.sidecar.istio.io/includeOutboundIPRanges: *
Status:       Pending
IP:           10.0.117.129
IPs:
  IP:           10.0.117.129
Controlled By:  StatefulSet/gitea
Init Containers:
  istio-validation:
    Container ID:  containerd://f5f697da4f36d44963e2f2a1194654f23d72f5e93afd95b9fe7bfec41f33be4d
    Image:         gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7
    Image ID:      gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15021,15020
      --run-validation
      --skip-rule-apply
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 25 Apr 2021 10:27:36 -0400
      Finished:     Sun, 25 Apr 2021 10:27:36 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7smv (ro)
  init:
    Container ID:  containerd://07ec6f6e5d7fdde9d44e271413f7683a6386e5205ff6caa258441fcd8ada6cac
    Image:         gitea/gitea:1.13.7
    Image ID:      docker.io/gitea/gitea@sha256:1b32b27c45550254245f81c1d95bb3a1e9c08570eadbc241ead000e5fbecb79e
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/sbin/init_gitea.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 25 Apr 2021 10:33:35 -0400
      Finished:     Sun, 25 Apr 2021 10:33:35 -0400
    Ready:          False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /data from data (rw)
      /etc/gitea/conf from config (rw)
      /usr/sbin from init (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7smv (ro)
Containers:
  gitea:
    Container ID:
    Image:          gitea/gitea:1.13.7
    Image ID:
    Ports:          22/TCP, 3000/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       tcp-socket :http delay=200s timeout=1s period=10s #success=1 #failure=10
    Readiness:      tcp-socket :http delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SSH_LISTEN_PORT:  22
      SSH_PORT:         22
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7smv (ro)
  istio-proxy:
    Container ID:
    Image:         gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7
    Image ID:
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      gitea-0 (v1:metadata.name)
      POD_NAMESPACE:                 gitea (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      CANONICAL_SERVICE:              (v1:metadata.labels['service.istio.io/canonical-name'])
      CANONICAL_REVISION:             (v1:metadata.labels['service.istio.io/canonical-revision'])
      PROXY_CONFIG:                  {}

      ISTIO_META_POD_PORTS:          [
                                         {"name":"ssh","containerPort":22,"protocol":"TCP"}
                                         ,{"name":"http","containerPort":3000,"protocol":"TCP"}
                                     ]
      ISTIO_META_APP_CONTAINERS:     gitea
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_METAJSON_ANNOTATIONS:    {"checksum/config":"988c8a17f2339df2b2f22c4651ae47d476de14fd1cb010b97b62c1a0d7abc0b6","checksum/ldap":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","checksum/oauth":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

      ISTIO_META_WORKLOAD_NAME:      gitea
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/gitea/statefulsets/gitea
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7smv (ro)
      /var/run/secrets/tokens from istio-token (rw)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
      limits.cpu -> cpu-limit
      requests.cpu -> cpu-request
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  init:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gitea-init
    Optional:    false
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gitea
    Optional:    false
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  gitea-pv-claim
    ReadOnly:   false
  kube-api-access-w7smv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m22s                  default-scheduler  Successfully assigned gitea/gitea-0 to k8worker1
  Normal   Pulled     9m13s                  kubelet            Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine
  Normal   Created    9m10s                  kubelet            Created container istio-validation
  Normal   Started    9m9s                   kubelet            Started container istio-validation
  Normal   Pulled     7m24s (x5 over 9m6s)   kubelet            Container image "gitea/gitea:1.13.7" already present on machine
  Normal   Created    7m22s (x5 over 9m4s)   kubelet            Created container init
  Normal   Started    7m21s (x5 over 9m3s)   kubelet            Started container init
  Warning  BackOff    4m9s (x21 over 8m55s)  kubelet            Back-off restarting failed container


Name:         gitea-memcached-c8547c9c9-v2xtq
Namespace:    gitea
Priority:     0
Node:         k8worker1/10.0.64.129
Start Time:   Sun, 25 Apr 2021 10:27:23 -0400
Labels:       app.kubernetes.io/instance=gitea
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=memcached
              helm.sh/chart=memcached-4.2.20
              istio.io/rev=default
              pod-template-hash=c8547c9c9
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=memcached
              service.istio.io/canonical-revision=latest
Annotations:  cni.projectcalico.org/podIP: 10.0.117.131/32
              cni.projectcalico.org/podIPs: 10.0.117.131/32
              k8s.v1.cni.cncf.io/networks: istio-cni
              kubectl.kubernetes.io/default-container: memcached
              kubectl.kubernetes.io/default-logs-container: memcached
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/interceptionMode: REDIRECT
              sidecar.istio.io/status:
                {"initContainers":["istio-validation"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","i...
              traffic.sidecar.istio.io/excludeInboundPorts: 15020
              traffic.sidecar.istio.io/includeInboundPorts: *
              traffic.sidecar.istio.io/includeOutboundIPRanges: *
Status:       Running
IP:           10.0.117.131
IPs:
  IP:           10.0.117.131
Controlled By:  ReplicaSet/gitea-memcached-c8547c9c9
Init Containers:
  istio-validation:
    Container ID:  containerd://d8624d6dad1c23af8dc99d15ae26dd994ce46444ca7a5b58a86bdc4f2b1abf1a
    Image:         gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7
    Image ID:      gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15021,15020
      --run-validation
      --skip-rule-apply
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 25 Apr 2021 10:27:36 -0400
      Finished:     Sun, 25 Apr 2021 10:27:36 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-297l2 (ro)
Containers:
  memcached:
    Container ID:  containerd://67c42596e6cb4fc8dd786cc22bbea73ebe5b684d5d6eaecad632d12452f1043f
    Image:         docker.io/bitnami/memcached:1.6.6-debian-10-r54
    Image ID:      docker.io/bitnami/memcached@sha256:d74d2d6594054a56c1ccca816ee52fa8bdf26ed42d363c2bf7ba0c5caba6125c
    Port:          11211/TCP
    Host Port:     0/TCP
    Args:
      /run.sh
    State:          Running
      Started:      Sun, 25 Apr 2021 10:27:42 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   tcp-socket :memcache delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  tcp-socket :memcache delay=5s timeout=3s period=5s #success=1 #failure=3
    Environment:
      BITNAMI_DEBUG:       false
      MEMCACHED_USERNAME:
      MEMCACHED_PASSWORD:  <set to the key 'memcached-password' in secret 'gitea-memcached'>  Optional: false
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-297l2 (ro)
  istio-proxy:
    Container ID:  containerd://8827ece9864721dee26ef48f10445eb6228525c69456171e9db06b1d79380c7d
    Image:         gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7
    Image ID:      gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Running
      Started:      Sun, 25 Apr 2021 10:27:47 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      gitea-memcached-c8547c9c9-v2xtq (v1:metadata.name)
      POD_NAMESPACE:                 gitea (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      CANONICAL_SERVICE:              (v1:metadata.labels['service.istio.io/canonical-name'])
      CANONICAL_REVISION:             (v1:metadata.labels['service.istio.io/canonical-revision'])
      PROXY_CONFIG:                  {}

      ISTIO_META_POD_PORTS:          [
                                         {"name":"memcache","containerPort":11211,"protocol":"TCP"}
                                     ]
      ISTIO_META_APP_CONTAINERS:     memcached
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      gitea-memcached
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/gitea/deployments/gitea-memcached
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-297l2 (ro)
      /var/run/secrets/tokens from istio-token (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
      limits.cpu -> cpu-limit
      requests.cpu -> cpu-request
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-297l2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m22s                  default-scheduler  Successfully assigned gitea/gitea-memcached-c8547c9c9-v2xtq to k8worker1
  Normal   Pulled     9m13s                  kubelet            Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine
  Normal   Created    9m10s                  kubelet            Created container istio-validation
  Normal   Started    9m9s                   kubelet            Started container istio-validation
  Normal   Pulled     9m6s                   kubelet            Container image "docker.io/bitnami/memcached:1.6.6-debian-10-r54" already present on machine
  Normal   Created    9m3s                   kubelet            Created container memcached
  Normal   Started    9m3s                   kubelet            Started container memcached
  Normal   Pulled     9m3s                   kubelet            Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine
  Normal   Created    9m1s                   kubelet            Created container istio-proxy
  Normal   Started    8m58s                  kubelet            Started container istio-proxy
  Warning  Unhealthy  8m57s (x2 over 8m57s)  kubelet            Readiness probe failed: dial tcp 10.0.117.131:11211: connect: connection refused
  Warning  Unhealthy  8m57s                  kubelet            Readiness probe failed: Get "http://10.0.117.131:15021/healthz/ready": dial tcp 10.0.117.131:15021: connect: connection refused


Name:         gitea-postgresql-0
Namespace:    gitea
Priority:     0
Node:         k8worker1/10.0.64.129
Start Time:   Sun, 25 Apr 2021 10:27:23 -0400
Labels:       app.kubernetes.io/instance=gitea
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=postgresql
              controller-revision-hash=gitea-postgresql-844c4dbdd
              helm.sh/chart=postgresql-9.7.2
              istio.io/rev=default
              role=master
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=postgresql
              service.istio.io/canonical-revision=latest
              statefulset.kubernetes.io/pod-name=gitea-postgresql-0
Annotations:  cni.projectcalico.org/podIP: 10.0.117.133/32
              cni.projectcalico.org/podIPs: 10.0.117.133/32
              k8s.v1.cni.cncf.io/networks: istio-cni
              kubectl.kubernetes.io/default-container: gitea-postgresql
              kubectl.kubernetes.io/default-logs-container: gitea-postgresql
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/interceptionMode: REDIRECT
              sidecar.istio.io/status:
                {"initContainers":["istio-validation"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","i...
              traffic.sidecar.istio.io/excludeInboundPorts: 15020
              traffic.sidecar.istio.io/includeInboundPorts: *
              traffic.sidecar.istio.io/includeOutboundIPRanges: *
Status:       Running
IP:           10.0.117.133
IPs:
  IP:           10.0.117.133
Controlled By:  StatefulSet/gitea-postgresql
Init Containers:
  istio-validation:
    Container ID:  containerd://14f75cb582e99aaf7953339a248861c2d4c4f277e3df8a6d3de0a48b26343d21
    Image:         gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7
    Image ID:      gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15021,15020
      --run-validation
      --skip-rule-apply
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 25 Apr 2021 10:27:36 -0400
      Finished:     Sun, 25 Apr 2021 10:27:36 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x857v (ro)
  init-chmod-data:
    Container ID:  containerd://dcf2ebfecfc695daefa3342a641d9188f5f7b7999e7cfe2793c549e7134791d0
    Image:         docker.io/bitnami/minideb:buster
    Image ID:      docker.io/bitnami/minideb@sha256:933e213e76b6712185adc09e8a173d02bdf8db53f94a6e400006ecf6d6ef4a40
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -cx
      chown 1001:1001 /bitnami/postgresql
      mkdir -p /bitnami/postgresql/data
      chmod 700 /bitnami/postgresql/data
      find /bitnami/postgresql -mindepth 1 -maxdepth 1 -not -name "conf" -not -name ".snapshot" -not -name "lost+found" | \
        xargs chown -R 1001:1001
      chmod -R 777 /dev/shm

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 25 Apr 2021 10:27:45 -0400
      Finished:     Sun, 25 Apr 2021 10:27:45 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        250m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /bitnami/postgresql from data (rw)
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x857v (ro)
Containers:
  gitea-postgresql:
    Container ID:   containerd://8261dc04b762b66e56de99c7719175842c0180b0ab260f11d3745bea806b8562
    Image:          docker.io/bitnami/postgresql:11.9.0-debian-10-r34
    Image ID:       docker.io/bitnami/postgresql@sha256:3114b406e0ea358432ada0c6c273592bc4ff28821f505e55f51c57404fbee086
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 25 Apr 2021 10:27:50 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [/bin/sh -c exec pg_isready -U "gitea" -d "dbname=gitea" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/sh -c -e exec pg_isready -U "gitea" -d "dbname=gitea" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                        false
      POSTGRESQL_PORT_NUMBER:               5432
      POSTGRESQL_VOLUME_DIR:                /bitnami/postgresql
      PGDATA:                               /bitnami/postgresql/data
      POSTGRES_USER:                        gitea
      POSTGRES_PASSWORD:                    <set to the key 'postgresql-password' in secret 'gitea-postgresql'>  Optional: false
      POSTGRES_DB:                          gitea
      POSTGRESQL_ENABLE_LDAP:               no
      POSTGRESQL_ENABLE_TLS:                no
      POSTGRESQL_LOG_HOSTNAME:              false
      POSTGRESQL_LOG_CONNECTIONS:           false
      POSTGRESQL_LOG_DISCONNECTIONS:        false
      POSTGRESQL_PGAUDIT_LOG_CATALOG:       off
      POSTGRESQL_CLIENT_MIN_MESSAGES:       error
      POSTGRESQL_SHARED_PRELOAD_LIBRARIES:  pgaudit
    Mounts:
      /bitnami/postgresql from data (rw)
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x857v (ro)
  istio-proxy:
    Container ID:  containerd://d8cbf45d1dbc259d6d0702b4d8389e1d9a227ef68a830c8e6333abd9f4448f31
    Image:         gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7
    Image ID:      gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --concurrency
      2
    State:          Running
      Started:      Sun, 25 Apr 2021 10:27:52 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      gitea-postgresql-0 (v1:metadata.name)
      POD_NAMESPACE:                 gitea (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      CANONICAL_SERVICE:              (v1:metadata.labels['service.istio.io/canonical-name'])
      CANONICAL_REVISION:             (v1:metadata.labels['service.istio.io/canonical-revision'])
      PROXY_CONFIG:                  {}

      ISTIO_META_POD_PORTS:          [
                                         {"name":"tcp-postgresql","containerPort":5432,"protocol":"TCP"}
                                     ]
      ISTIO_META_APP_CONTAINERS:     gitea-postgresql
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      gitea-postgresql
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/gitea/statefulsets/gitea-postgresql
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x857v (ro)
      /var/run/secrets/tokens from istio-token (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
      limits.cpu -> cpu-limit
      requests.cpu -> cpu-request
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  dshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  1Gi
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  gitea-pv-claim-postgres
    ReadOnly:   false
  kube-api-access-x857v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  9m22s  default-scheduler  Successfully assigned gitea/gitea-postgresql-0 to k8worker1
  Normal   Pulled     9m13s  kubelet            Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine
  Normal   Created    9m10s  kubelet            Created container istio-validation
  Normal   Started    9m9s   kubelet            Started container istio-validation
  Normal   Pulling    9m7s   kubelet            Pulling image "docker.io/bitnami/minideb:buster"
  Normal   Pulled     9m3s   kubelet            Successfully pulled image "docker.io/bitnami/minideb:buster" in 3.402404816s
  Normal   Created    9m1s   kubelet            Created container init-chmod-data
  Normal   Started    9m     kubelet            Started container init-chmod-data
  Normal   Pulled     8m58s  kubelet            Container image "docker.io/bitnami/postgresql:11.9.0-debian-10-r34" already present on machine
  Normal   Created    8m55s  kubelet            Created container gitea-postgresql
  Normal   Started    8m55s  kubelet            Started container gitea-postgresql
  Normal   Pulled     8m55s  kubelet            Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine
  Normal   Created    8m53s  kubelet            Created container istio-proxy
  Normal   Started    8m53s  kubelet            Started container istio-proxy
  Warning  Unhealthy  8m52s  kubelet            Readiness probe failed: Get "http://10.0.117.133:15021/healthz/ready": dial tcp 10.0.117.133:15021: connect: connection refused

Editing init script

Download the script

kubectl get secret gitea-init -n gitea -o yaml > secret.yaml
cat secret.yaml    # copy the base64 payload into $PAYLOAD
echo "$PAYLOAD" | base64 --decode > script.sh
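
Alternatively, the script can be pulled out in one step with jsonpath (the key name init_gitea.sh matches the /usr/sbin/init_gitea.sh filename in the pod spec above):

kubectl get secret gitea-init -n gitea -o jsonpath='{.data.init_gitea\.sh}' | base64 --decode > script.sh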

Adding simple debug output

#!/bin/bash
echo "start"
chown 1000:1000 /data
echo "chown success"
mkdir -p /data/git/.ssh
echo "mkdir success"
chmod -R 700 /data/git/.ssh
echo "chmod /data/git/.ssh success"
mkdir -p /data/gitea/conf
echo "mkdir data/gitea/conf success"
cp /etc/gitea/conf/app.ini /data/gitea/conf/app.ini
echo "cp app.ini success"
chmod a+rwx /data/gitea/conf/app.ini
echo "chmod app.init success"
nc -v -w2 -z gitea-postgresql 5432 && \
su git -c ' \
set -x; \
gitea migrate; \
gitea admin create-user --username  matt --password "Jewish123" --email "matt@salmon.sec" --admin --must-change-password=false \
|| \
gitea admin change-password --username matt --password "Jewish123"; \
'
echo "done......"

Write script back and re-create pod

# Edit script.sh to change payload
kubectl apply -f secret.yaml
secret/gitea-init configured

kubectl delete pod gitea-0 -n gitea
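
Instead of hand-editing the base64 payload in secret.yaml, the secret can also be regenerated straight from the edited file (again assuming the key name init_gitea.sh). Note that Helm will overwrite this secret on the next upgrade, so this is only for debugging:

kubectl create secret generic gitea-init -n gitea \
  --from-file=init_gitea.sh=script.sh \
  --dry-run=client -o yaml | kubectl apply -f -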

Wait a little while...

kubectl get pods -n gitea
NAME                              READY   STATUS    RESTARTS   AGE
gitea-0                           2/2     Running   0          35s
gitea-memcached-c8547c9c9-v2xtq   2/2     Running   0          25m
gitea-postgresql-0                2/2     Running   0          25m

View output

 kubectl logs gitea-0 -c init -n gitea
start
chown success
mkdir success
chmod /data/git/.ssh success
mkdir data/gitea/conf success
cp app.ini success
chmod app.init success
done......
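
As a final sanity check that Gitea itself came up once the init container finished (assuming the chart's HTTP service is named gitea-http):

kubectl port-forward -n gitea svc/gitea-http 3000:3000
# in another terminal:
curl -I http://localhost:3000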
The init script has been causing my helm deploy to fail for some time. I ended up modfying the init script to printout some context to help, and then... it worked? I don't know if this is useful information for anyone. I'm still looking for a long-term solution or explaintation of the actual issue. Will update here if i discover. # Values.yaml <details> <summary>I'm running on a homelab environment, here's my value's file:</summary> ingress: enabled: false persistence: enabled: true size: 10Gi storageClass: manual existingClaim: gitea-pv-claim postgresql: persistence: size: 10Gi storageClass: manual existingClaim: gitea-pv-claim-postgres volumePermissions: enabled: true service: annotations: networking.istio.io/exportTo: "." gitea: admin: username: admin password: pass email: admin@admin.admin </details> # Chart apply <details> <summary>and here's how I'm applying the chart</summary> resource "helm_release" "gitea" { name = "gitea" repository = "https://dl.gitea.io/charts" chart = "gitea" version = "2.2.5" namespace = "${kubernetes_namespace.gitea.metadata[0].name}" values = [ "${data.template_file.chart-values.rendered}" ] depends_on = [ kubectl_manifest.gitea-pv, kubectl_manifest.gitea-pvc, kubectl_manifest.gitea-pv-postgres, kubectl_manifest.gitea-pvc-postgres ] } </details> # pod description <details> <summary>Unmodifyed init script results</summary> kubectl get pods -n gitea NAME READY STATUS RESTARTS AGE gitea-0 0/2 Init:Error 1 16s gitea-memcached-c8547c9c9-v2xtq 2/2 Running 0 20m gitea-postgresql-0 2/2 Running 0 20m kubectl describe pods -n gitea Name: gitea-0 Namespace: gitea Priority: 0 Node: k8worker1/10.0.64.129 Start Time: Sun, 25 Apr 2021 10:27:23 -0400 Labels: app=gitea app.kubernetes.io/instance=gitea app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=gitea app.kubernetes.io/version=1.13.7 controller-revision-hash=gitea-8679c9568c helm.sh/chart=gitea-2.2.5 istio.io/rev=default security.istio.io/tlsMode=istio service.istio.io/canonical-name=gitea service.istio.io/canonical-revision=1.13.7 statefulset.kubernetes.io/pod-name=gitea-0 version=1.13.7 Annotations: checksum/config: 988c8a17f2339df2b2f22c4651ae47d476de14fd1cb010b97b62c1a0d7abc0b6 checksum/ldap: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 checksum/oauth: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 cni.projectcalico.org/podIP: 10.0.117.129/32 cni.projectcalico.org/podIPs: 10.0.117.129/32 k8s.v1.cni.cncf.io/networks: istio-cni kubectl.kubernetes.io/default-container: gitea kubectl.kubernetes.io/default-logs-container: gitea prometheus.io/path: /stats/prometheus prometheus.io/port: 15020 prometheus.io/scrape: true sidecar.istio.io/interceptionMode: REDIRECT sidecar.istio.io/status: {"initContainers":["istio-validation"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","i... 
traffic.sidecar.istio.io/excludeInboundPorts: 15020 traffic.sidecar.istio.io/includeInboundPorts: * traffic.sidecar.istio.io/includeOutboundIPRanges: * Status: Pending IP: 10.0.117.129 IPs: IP: 10.0.117.129 Controlled By: StatefulSet/gitea Init Containers: istio-validation: Container ID: containerd://f5f697da4f36d44963e2f2a1194654f23d72f5e93afd95b9fe7bfec41f33be4d Image: gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7 Image ID: gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55 Port: <none> Host Port: <none> Args: istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i * -x -b * -d 15090,15021,15020 --run-validation --skip-rule-apply State: Terminated Reason: Completed Exit Code: 0 Started: Sun, 25 Apr 2021 10:27:36 -0400 Finished: Sun, 25 Apr 2021 10:27:36 -0400 Ready: True Restart Count: 0 Limits: cpu: 2 memory: 1Gi Requests: cpu: 100m memory: 128Mi Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7smv (ro) init: Container ID: containerd://07ec6f6e5d7fdde9d44e271413f7683a6386e5205ff6caa258441fcd8ada6cac Image: gitea/gitea:1.13.7 Image ID: docker.io/gitea/gitea@sha256:1b32b27c45550254245f81c1d95bb3a1e9c08570eadbc241ead000e5fbecb79e Port: <none> Host Port: <none> Command: /usr/sbin/init_gitea.sh State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Sun, 25 Apr 2021 10:33:35 -0400 Finished: Sun, 25 Apr 2021 10:33:35 -0400 Ready: False Restart Count: 6 Environment: <none> Mounts: /data from data (rw) /etc/gitea/conf from config (rw) /usr/sbin from init (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7smv (ro) Containers: gitea: Container ID: Image: gitea/gitea:1.13.7 Image ID: Ports: 22/TCP, 3000/TCP Host Ports: 0/TCP, 0/TCP State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Liveness: tcp-socket :http delay=200s timeout=1s period=10s #success=1 #failure=10 Readiness: tcp-socket :http delay=5s timeout=1s period=10s #success=1 #failure=3 Environment: SSH_LISTEN_PORT: 22 SSH_PORT: 22 Mounts: /data from data (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7smv (ro) istio-proxy: Container ID: Image: gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7 Image ID: Port: 15090/TCP Host Port: 0/TCP Args: proxy sidecar --domain $(POD_NAMESPACE).svc.cluster.local --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --log_output_level=default:info --concurrency 2 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Limits: cpu: 2 memory: 1Gi Requests: cpu: 100m memory: 128Mi Readiness: http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30 Environment: JWT_POLICY: third-party-jwt PILOT_CERT_PROVIDER: istiod CA_ADDR: istiod.istio-system.svc:15012 POD_NAME: gitea-0 (v1:metadata.name) POD_NAMESPACE: gitea (v1:metadata.namespace) INSTANCE_IP: (v1:status.podIP) SERVICE_ACCOUNT: (v1:spec.serviceAccountName) HOST_IP: (v1:status.hostIP) CANONICAL_SERVICE: (v1:metadata.labels['service.istio.io/canonical-name']) CANONICAL_REVISION: (v1:metadata.labels['service.istio.io/canonical-revision']) PROXY_CONFIG: {} ISTIO_META_POD_PORTS: [ {"name":"ssh","containerPort":22,"protocol":"TCP"} ,{"name":"http","containerPort":3000,"protocol":"TCP"} ] ISTIO_META_APP_CONTAINERS: gitea ISTIO_META_CLUSTER_ID: Kubernetes ISTIO_META_INTERCEPTION_MODE: REDIRECT ISTIO_METAJSON_ANNOTATIONS: 
{"checksum/config":"988c8a17f2339df2b2f22c4651ae47d476de14fd1cb010b97b62c1a0d7abc0b6","checksum/ldap":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855","checksum/oauth":"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"} ISTIO_META_WORKLOAD_NAME: gitea ISTIO_META_OWNER: kubernetes://apis/apps/v1/namespaces/gitea/statefulsets/gitea ISTIO_META_MESH_ID: cluster.local TRUST_DOMAIN: cluster.local Mounts: /etc/istio/pod from istio-podinfo (rw) /etc/istio/proxy from istio-envoy (rw) /var/lib/istio/data from istio-data (rw) /var/run/secrets/istio from istiod-ca-cert (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7smv (ro) /var/run/secrets/tokens from istio-token (rw) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: istio-envoy: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: Memory SizeLimit: <unset> istio-data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> istio-podinfo: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.labels -> labels metadata.annotations -> annotations limits.cpu -> cpu-limit requests.cpu -> cpu-request istio-token: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 43200 istiod-ca-cert: Type: ConfigMap (a volume populated by a ConfigMap) Name: istio-ca-root-cert Optional: false init: Type: Secret (a volume populated by a Secret) SecretName: gitea-init Optional: false config: Type: Secret (a volume populated by a Secret) SecretName: gitea Optional: false data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: gitea-pv-claim ReadOnly: false kube-api-access-w7smv: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m22s default-scheduler Successfully assigned gitea/gitea-0 to k8worker1 Normal Pulled 9m13s kubelet Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine Normal Created 9m10s kubelet Created container istio-validation Normal Started 9m9s kubelet Started container istio-validation Normal Pulled 7m24s (x5 over 9m6s) kubelet Container image "gitea/gitea:1.13.7" already present on machine Normal Created 7m22s (x5 over 9m4s) kubelet Created container init Normal Started 7m21s (x5 over 9m3s) kubelet Started container init Warning BackOff 4m9s (x21 over 8m55s) kubelet Back-off restarting failed container Name: gitea-memcached-c8547c9c9-v2xtq Namespace: gitea Priority: 0 Node: k8worker1/10.0.64.129 Start Time: Sun, 25 Apr 2021 10:27:23 -0400 Labels: app.kubernetes.io/instance=gitea app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=memcached helm.sh/chart=memcached-4.2.20 istio.io/rev=default pod-template-hash=c8547c9c9 security.istio.io/tlsMode=istio service.istio.io/canonical-name=memcached service.istio.io/canonical-revision=latest Annotations: cni.projectcalico.org/podIP: 10.0.117.131/32 cni.projectcalico.org/podIPs: 10.0.117.131/32 k8s.v1.cni.cncf.io/networks: istio-cni 
kubectl.kubernetes.io/default-container: memcached kubectl.kubernetes.io/default-logs-container: memcached prometheus.io/path: /stats/prometheus prometheus.io/port: 15020 prometheus.io/scrape: true sidecar.istio.io/interceptionMode: REDIRECT sidecar.istio.io/status: {"initContainers":["istio-validation"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","i... traffic.sidecar.istio.io/excludeInboundPorts: 15020 traffic.sidecar.istio.io/includeInboundPorts: * traffic.sidecar.istio.io/includeOutboundIPRanges: * Status: Running IP: 10.0.117.131 IPs: IP: 10.0.117.131 Controlled By: ReplicaSet/gitea-memcached-c8547c9c9 Init Containers: istio-validation: Container ID: containerd://d8624d6dad1c23af8dc99d15ae26dd994ce46444ca7a5b58a86bdc4f2b1abf1a Image: gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7 Image ID: gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55 Port: <none> Host Port: <none> Args: istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i * -x -b * -d 15090,15021,15020 --run-validation --skip-rule-apply State: Terminated Reason: Completed Exit Code: 0 Started: Sun, 25 Apr 2021 10:27:36 -0400 Finished: Sun, 25 Apr 2021 10:27:36 -0400 Ready: True Restart Count: 0 Limits: cpu: 2 memory: 1Gi Requests: cpu: 100m memory: 128Mi Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-297l2 (ro) Containers: memcached: Container ID: containerd://67c42596e6cb4fc8dd786cc22bbea73ebe5b684d5d6eaecad632d12452f1043f Image: docker.io/bitnami/memcached:1.6.6-debian-10-r54 Image ID: docker.io/bitnami/memcached@sha256:d74d2d6594054a56c1ccca816ee52fa8bdf26ed42d363c2bf7ba0c5caba6125c Port: 11211/TCP Host Port: 0/TCP Args: /run.sh State: Running Started: Sun, 25 Apr 2021 10:27:42 -0400 Ready: True Restart Count: 0 Requests: cpu: 250m memory: 256Mi Liveness: tcp-socket :memcache delay=30s timeout=5s period=10s #success=1 #failure=6 Readiness: tcp-socket :memcache delay=5s timeout=3s period=5s #success=1 #failure=3 Environment: BITNAMI_DEBUG: false MEMCACHED_USERNAME: MEMCACHED_PASSWORD: <set to the key 'memcached-password' in secret 'gitea-memcached'> Optional: false Mounts: /tmp from tmp (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-297l2 (ro) istio-proxy: Container ID: containerd://8827ece9864721dee26ef48f10445eb6228525c69456171e9db06b1d79380c7d Image: gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7 Image ID: gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55 Port: 15090/TCP Host Port: 0/TCP Args: proxy sidecar --domain $(POD_NAMESPACE).svc.cluster.local --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --log_output_level=default:info --concurrency 2 State: Running Started: Sun, 25 Apr 2021 10:27:47 -0400 Ready: True Restart Count: 0 Limits: cpu: 2 memory: 1Gi Requests: cpu: 100m memory: 128Mi Readiness: http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30 Environment: JWT_POLICY: third-party-jwt PILOT_CERT_PROVIDER: istiod CA_ADDR: istiod.istio-system.svc:15012 POD_NAME: gitea-memcached-c8547c9c9-v2xtq (v1:metadata.name) POD_NAMESPACE: gitea (v1:metadata.namespace) INSTANCE_IP: (v1:status.podIP) SERVICE_ACCOUNT: (v1:spec.serviceAccountName) HOST_IP: (v1:status.hostIP) CANONICAL_SERVICE: (v1:metadata.labels['service.istio.io/canonical-name']) CANONICAL_REVISION: 
(v1:metadata.labels['service.istio.io/canonical-revision']) PROXY_CONFIG: {} ISTIO_META_POD_PORTS: [ {"name":"memcache","containerPort":11211,"protocol":"TCP"} ] ISTIO_META_APP_CONTAINERS: memcached ISTIO_META_CLUSTER_ID: Kubernetes ISTIO_META_INTERCEPTION_MODE: REDIRECT ISTIO_META_WORKLOAD_NAME: gitea-memcached ISTIO_META_OWNER: kubernetes://apis/apps/v1/namespaces/gitea/deployments/gitea-memcached ISTIO_META_MESH_ID: cluster.local TRUST_DOMAIN: cluster.local Mounts: /etc/istio/pod from istio-podinfo (rw) /etc/istio/proxy from istio-envoy (rw) /var/lib/istio/data from istio-data (rw) /var/run/secrets/istio from istiod-ca-cert (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-297l2 (ro) /var/run/secrets/tokens from istio-token (rw) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: istio-envoy: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: Memory SizeLimit: <unset> istio-data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> istio-podinfo: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.labels -> labels metadata.annotations -> annotations limits.cpu -> cpu-limit requests.cpu -> cpu-request istio-token: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 43200 istiod-ca-cert: Type: ConfigMap (a volume populated by a ConfigMap) Name: istio-ca-root-cert Optional: false tmp: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kube-api-access-297l2: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m22s default-scheduler Successfully assigned gitea/gitea-memcached-c8547c9c9-v2xtq to k8worker1 Normal Pulled 9m13s kubelet Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine Normal Created 9m10s kubelet Created container istio-validation Normal Started 9m9s kubelet Started container istio-validation Normal Pulled 9m6s kubelet Container image "docker.io/bitnami/memcached:1.6.6-debian-10-r54" already present on machine Normal Created 9m3s kubelet Created container memcached Normal Started 9m3s kubelet Started container memcached Normal Pulled 9m3s kubelet Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine Normal Created 9m1s kubelet Created container istio-proxy Normal Started 8m58s kubelet Started container istio-proxy Warning Unhealthy 8m57s (x2 over 8m57s) kubelet Readiness probe failed: dial tcp 10.0.117.131:11211: connect: connection refused Warning Unhealthy 8m57s kubelet Readiness probe failed: Get "http://10.0.117.131:15021/healthz/ready": dial tcp 10.0.117.131:15021: connect: connection refused Name: gitea-postgresql-0 Namespace: gitea Priority: 0 Node: k8worker1/10.0.64.129 Start Time: Sun, 25 Apr 2021 10:27:23 -0400 Labels: app.kubernetes.io/instance=gitea app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=postgresql controller-revision-hash=gitea-postgresql-844c4dbdd 
              helm.sh/chart=postgresql-9.7.2
              istio.io/rev=default
              role=master
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=postgresql
              service.istio.io/canonical-revision=latest
              statefulset.kubernetes.io/pod-name=gitea-postgresql-0
Annotations:  cni.projectcalico.org/podIP: 10.0.117.133/32
              cni.projectcalico.org/podIPs: 10.0.117.133/32
              k8s.v1.cni.cncf.io/networks: istio-cni
              kubectl.kubernetes.io/default-container: gitea-postgresql
              kubectl.kubernetes.io/default-logs-container: gitea-postgresql
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/interceptionMode: REDIRECT
              sidecar.istio.io/status:
                {"initContainers":["istio-validation"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","i...
              traffic.sidecar.istio.io/excludeInboundPorts: 15020
              traffic.sidecar.istio.io/includeInboundPorts: *
              traffic.sidecar.istio.io/includeOutboundIPRanges: *
Status:       Running
IP:           10.0.117.133
IPs:
  IP:  10.0.117.133
Controlled By:  StatefulSet/gitea-postgresql
Init Containers:
  istio-validation:
    Container ID:  containerd://14f75cb582e99aaf7953339a248861c2d4c4f277e3df8a6d3de0a48b26343d21
    Image:         gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7
    Image ID:      gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables -p 15001 -z 15006 -u 1337 -m REDIRECT -i * -x -b * -d 15090,15021,15020 --run-validation --skip-rule-apply
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 25 Apr 2021 10:27:36 -0400
      Finished:     Sun, 25 Apr 2021 10:27:36 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x857v (ro)
  init-chmod-data:
    Container ID:  containerd://dcf2ebfecfc695daefa3342a641d9188f5f7b7999e7cfe2793c549e7134791d0
    Image:         docker.io/bitnami/minideb:buster
    Image ID:      docker.io/bitnami/minideb@sha256:933e213e76b6712185adc09e8a173d02bdf8db53f94a6e400006ecf6d6ef4a40
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -cx
      chown 1001:1001 /bitnami/postgresql
      mkdir -p /bitnami/postgresql/data
      chmod 700 /bitnami/postgresql/data
      find /bitnami/postgresql -mindepth 1 -maxdepth 1 -not -name "conf" -not -name ".snapshot" -not -name "lost+found" | \
        xargs chown -R 1001:1001
      chmod -R 777 /dev/shm
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 25 Apr 2021 10:27:45 -0400
      Finished:     Sun, 25 Apr 2021 10:27:45 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      250m
      memory:   256Mi
    Environment:  <none>
    Mounts:
      /bitnami/postgresql from data (rw)
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x857v (ro)
Containers:
  gitea-postgresql:
    Container ID:   containerd://8261dc04b762b66e56de99c7719175842c0180b0ab260f11d3745bea806b8562
    Image:          docker.io/bitnami/postgresql:11.9.0-debian-10-r34
    Image ID:       docker.io/bitnami/postgresql@sha256:3114b406e0ea358432ada0c6c273592bc4ff28821f505e55f51c57404fbee086
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 25 Apr 2021 10:27:50 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   exec [/bin/sh -c exec pg_isready -U "gitea" -d "dbname=gitea" -h 127.0.0.1 -p 5432] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  exec [/bin/sh -c -e exec pg_isready -U "gitea" -d "dbname=gitea" -h 127.0.0.1 -p 5432 [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ] ] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      BITNAMI_DEBUG:                        false
      POSTGRESQL_PORT_NUMBER:               5432
      POSTGRESQL_VOLUME_DIR:                /bitnami/postgresql
      PGDATA:                               /bitnami/postgresql/data
      POSTGRES_USER:                        gitea
      POSTGRES_PASSWORD:                    <set to the key 'postgresql-password' in secret 'gitea-postgresql'>  Optional: false
      POSTGRES_DB:                          gitea
      POSTGRESQL_ENABLE_LDAP:               no
      POSTGRESQL_ENABLE_TLS:                no
      POSTGRESQL_LOG_HOSTNAME:              false
      POSTGRESQL_LOG_CONNECTIONS:           false
      POSTGRESQL_LOG_DISCONNECTIONS:        false
      POSTGRESQL_PGAUDIT_LOG_CATALOG:       off
      POSTGRESQL_CLIENT_MIN_MESSAGES:       error
      POSTGRESQL_SHARED_PRELOAD_LIBRARIES:  pgaudit
    Mounts:
      /bitnami/postgresql from data (rw)
      /dev/shm from dshm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x857v (ro)
  istio-proxy:
    Container ID:  containerd://d8cbf45d1dbc259d6d0702b4d8389e1d9a227ef68a830c8e6333abd9f4448f31
    Image:         gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7
    Image ID:      gcr.io/istio-testing/proxyv2@sha256:d18c888a9c0f2fb70f9ff55c9c5a19e2e443ecfe7dfdd983dcc02d74d1613b55
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy sidecar --domain $(POD_NAMESPACE).svc.cluster.local --proxyLogLevel=warning --proxyComponentLogLevel=misc:error --log_output_level=default:info --concurrency 2
    State:          Running
      Started:      Sun, 25 Apr 2021 10:27:52 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      gitea-postgresql-0 (v1:metadata.name)
      POD_NAMESPACE:                 gitea (v1:metadata.namespace)
      INSTANCE_IP:                   (v1:status.podIP)
      SERVICE_ACCOUNT:               (v1:spec.serviceAccountName)
      HOST_IP:                       (v1:status.hostIP)
      CANONICAL_SERVICE:             (v1:metadata.labels['service.istio.io/canonical-name'])
      CANONICAL_REVISION:            (v1:metadata.labels['service.istio.io/canonical-revision'])
      PROXY_CONFIG:                  {}
      ISTIO_META_POD_PORTS:          [ {"name":"tcp-postgresql","containerPort":5432,"protocol":"TCP"} ]
      ISTIO_META_APP_CONTAINERS:     gitea-postgresql
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      gitea-postgresql
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/gitea/statefulsets/gitea-postgresql
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x857v (ro)
      /var/run/secrets/tokens from istio-token (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
      limits.cpu -> cpu-limit
      requests.cpu -> cpu-request
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  dshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  1Gi
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  gitea-pv-claim-postgres
    ReadOnly:   false
  kube-api-access-x857v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  9m22s  default-scheduler  Successfully assigned gitea/gitea-postgresql-0 to k8worker1
  Normal   Pulled     9m13s  kubelet            Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine
  Normal   Created    9m10s  kubelet            Created container istio-validation
  Normal   Started    9m9s   kubelet            Started container istio-validation
  Normal   Pulling    9m7s   kubelet            Pulling image "docker.io/bitnami/minideb:buster"
  Normal   Pulled     9m3s   kubelet            Successfully pulled image "docker.io/bitnami/minideb:buster" in 3.402404816s
  Normal   Created    9m1s   kubelet            Created container init-chmod-data
  Normal   Started    9m     kubelet            Started container init-chmod-data
  Normal   Pulled     8m58s  kubelet            Container image "docker.io/bitnami/postgresql:11.9.0-debian-10-r34" already present on machine
  Normal   Created    8m55s  kubelet            Created container gitea-postgresql
  Normal   Started    8m55s  kubelet            Started container gitea-postgresql
  Normal   Pulled     8m55s  kubelet            Container image "gcr.io/istio-testing/proxyv2:1.11-alpha.21a12a752207e3328ad44f2fbd3cb612ef30b9c7" already present on machine
  Normal   Created    8m53s  kubelet            Created container istio-proxy
  Normal   Started    8m53s  kubelet            Started container istio-proxy
  Warning  Unhealthy  8m52s  kubelet            Readiness probe failed: Get "http://10.0.117.133:15021/healthz/ready": dial tcp 10.0.117.133:15021: connect: connection refused
</details>

# Editing init script

Download the script

```
kubectl get secret gitea-init -n gitea -o yaml > secret.yaml
cat secret.yaml
# Copy payload
echo $PAYLOAD | base64 --decode > script.sh
```

Adding simple debug

```bash
#!/bin/bash
echo "start"
chown 1000:1000 /data
echo "chown success"
mkdir -p /data/git/.ssh
echo "mkdir success"
chmod -R 700 /data/git/.ssh
echo "chmod /data/git/.ssh success"
mkdir -p /data/gitea/conf
echo "mkdir data/gitea/conf success"
cp /etc/gitea/conf/app.ini /data/gitea/conf/app.ini
echo "cp app.ini success"
chmod a+rwx /data/gitea/conf/app.ini
echo "chmod app.init success"
nc -v -w2 -z gitea-postgresql 5432 && \
su git -c ' \
  set -x; \
  gitea migrate; \
  gitea admin create-user --username matt --password "Jewish123" --email "matt@salmon.sec" --admin --must-change-password=false \
  || \
  gitea admin change-password --username matt --password "Jewish123"; \
'
echo "done......"
```

Write script back and re-create pod

```
# Edit script.sh to change payload
kubectl apply -f secret.yaml
secret/gitea-init configured
kubectl delete pod gitea-0 -n gitea
```

Wait a little while...

```
kubectl get pods -n gitea
NAME                              READY   STATUS    RESTARTS   AGE
gitea-0                           2/2     Running   0          35s
gitea-memcached-c8547c9c9-v2xtq   2/2     Running   0          25m
gitea-postgresql-0                2/2     Running   0          25m
```

View output

```
kubectl logs gitea-0 -c init -n gitea
start
chown success
mkdir success
chmod /data/git/.ssh success
mkdir data/gitea/conf success
cp app.ini success
chmod app.init success
done......
```
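For reference, here is a compact sketch of that round trip that avoids hand-editing base64 inside secret.yaml. The secret key name `init_gitea.sh` is an assumption about this chart version; check `kubectl get secret gitea-init -n gitea -o yaml` for the actual key:

```bash
# Decode the current init script (key name assumed to be init_gitea.sh)
kubectl get secret gitea-init -n gitea \
  -o jsonpath='{.data.init_gitea\.sh}' | base64 --decode > script.sh

# ... edit script.sh to add the echo statements shown above ...

# Re-encode and apply the edited script, then recreate the pod so the init container reruns
kubectl create secret generic gitea-init -n gitea \
  --from-file=init_gitea.sh=script.sh \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl delete pod gitea-0 -n gitea
```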
Member

Hi there,

It would be nice to have the logs of the failed init script.
Please keep in mind that the init container might fail repeatedly until the PostgreSQL database is ready to accept connections.

Author

That's what made me go so far down this path of editing the script with simple debug output - there were no logs! I tried to get logs from the init container with `kubectl logs gitea-0 -c init ...` and got nothing, and then dug into pulling logs from containerd on the node host. I was able to see logs for everything else (including PostgreSQL confirming it was up and ready to accept connections) but nothing from the init container. Even the symlinked log files in /var/log were empty for this init container.

As soon as I added printouts - it produced logs and everything worked!

Member

This is weird; normally you won't get logs until PostgreSQL is available, because of the

`nc -v -w2 -z gitea-postgresql 5432`

Once PostgreSQL has started, you will receive logs from the SQL and gitea admin commands.


I am running into the same problem. I modified the init script to include a "while sleep" loop so that I could run a shell on the container.

```
bash-5.1$ nc -vvvv -w2 -z gitea-db-postgresql 5432
nc: gitea-db-postgresql (10.43.45.187:5432): Connection refused
sent 0, rcvd 0
bash-5.1$
```

In my case, the database is configured separately rather than as part of the chart; however, I have also confirmed that the database is accepting connections from pods in the same namespace.

I then tested against a few arbitrary public-facing endpoints and got a similar result. It seems that the container is unable to make any external connections.

Any ideas?

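A minimal sketch of that kind of change, assuming it is appended to the end of the chart's init script: a sleep loop keeps the otherwise-failing init container alive so it can be inspected interactively.

```bash
# Appended to the end of the init script purely for debugging (remove once done):
# keeps the init container running so a shell can be attached to it.
while true; do
  sleep 3600
done
```

With that in place, `kubectl exec -it gitea-0 -c init -n gitea -- sh` gives a shell inside the init container for running the `nc` tests above.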

Actually, @mattn, just looking at your configs again: I'm also running Istio in this namespace.

It seems that the istio-proxy sidecar only starts after all init containers have finished, which would prevent outbound connections from them. I'm seeing [some discussion](https://discuss.istio.io/t/k8s-istio-sidecar-injection-with-other-init-containers/845) of it on the Istio forums.

Currently looking for a way around this, but the general recommendation is: "avoid network IO in your init containers".

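One workaround that comes up in those Istio threads is to exclude the database port from the sidecar's outbound redirection, so init-container traffic to PostgreSQL bypasses the (not-yet-running) proxy. A sketch of the pod-template annotation; how annotations get injected into the Gitea StatefulSet depends on the chart version, and bypassing the mesh for this port may not be acceptable in every environment:

```yaml
# Hypothetical pod-template annotation; traffic to port 5432 skips the Envoy redirect,
# so the init container can reach the database before the istio-proxy sidecar exists.
metadata:
  annotations:
    traffic.sidecar.istio.io/excludeOutboundPorts: "5432"
```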
Author

@EternalDeiwos Ah yes, thanks for the debugging and references. This is absolutely the issue here!

It seems like it'll be a while before Istio supports this properly.

It seems like the only known-working hack would be to remove the init container and move the logic somewhere else. I'll do some experimenting and let you know if I come up with anything.

I'm not up to speed on the developer practices or expectations here, but I assume a one-off hack to support a specific use case isn't something the Gitea devs will want to support.

Member

I also had some issues with Istio, but those were due to filesystem permissions. The solution there was to use the chart's 3.0.0 version and enable the rootless image, since Istio had some issues setting the correct file permissions.

However, I did not test the current version with Istio.

To move the logic out of the init container, we would need to run another command on the main container: run the init script there and have it invoke the original container's run command.

Not sure I really like this approach :D. I'll think about this issue and see if I can come up with some other ideas.

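To make the discussed approach concrete, it would look roughly like the snippet below in the StatefulSet template: run the init logic from the main container and then exec the normal entrypoint. The script path and entrypoint are placeholders rather than the chart's actual layout, and the main container can still race the sidecar on startup:

```yaml
# Sketch only: init logic moved into the main container's command.
containers:
  - name: gitea
    image: gitea/gitea:1.13.7        # version taken from the pod labels above
    command: ["/bin/sh", "-c"]
    args:
      # /usr/sbin/init_gitea.sh and /usr/bin/entrypoint are illustrative paths
      - /usr/sbin/init_gitea.sh && exec /usr/bin/entrypoint
```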
Author

@luhahn I don't blame you, the approach would be a regression in code quality for sure.


I just ran into a version of this myself. I'm not running Istio, but I had the same problem (failing init container with no logs). I followed the steps in the OP and, sure enough, the init container ran and the process moved forward.

In my case, I think what's going on is that the last command of the init script is failing. Since it returns a non-zero exit code, the init container itself is marked as failed by Kubernetes. When I added the final `echo "Done"` to the script, the last command now succeeds, so Kubernetes thinks the init container succeeded and moves on.

I was able to see some logs, and my issue appears to be that the database connection is being denied:

> Access denied for user 'gitea'@'ip' (using password: YES)

I checked with an interactive container and I can log in with the password configured in the environment variable `GITEA__database__PASSWD`, but I don't know whether the init script is using that password or the dummy one I provided in the Helm values.yaml file.

I'm using a Secret for the actual database password (see [this issue](https://gitea.com/gitea/helm-chart/issues/60#issuecomment-316380)), but if I don't also supply db details in `gitea.config.database` the chart fails to install. I understood from the linked issue and [this page](https://docs.gitea.io/en-us/install-with-docker/#managing-deployments-with-environment-variables) that the `GITEA__*` env variables take precedence over other config, but maybe that doesn't apply in the init container?

Any ideas on how I can debug this further?

Also, I'm happy to make a PR to add some basic output to the init script if that would be appreciated. Just let me know.

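The exit-code behaviour described above is plain shell semantics: without `set -e`, a script's exit status is that of its last command, so a trailing `echo` masks an earlier failure. A tiny illustration:

```bash
#!/bin/sh
# Without `set -e`, the failure of an earlier command is not propagated;
# the script (and therefore the init container) reports the status of the last command.
false          # fails with exit code 1...
echo "Done"    # ...but exits 0, so Kubernetes marks the init container as Succeeded
```

Which is why adding `set -e` (or explicit error handling around the `nc`/`gitea` calls) together with some logging would surface these failures instead of hiding them.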

After posting the last comment I realized I could just use the actual database password in `gitea.config.database` in the values.yaml file as a test... it was late.

Sure enough, that worked perfectly.

So, I see three main issues here:

  1. When running with Istio, the init container won't be able to use the network like the init script needs.

  2. The `GITEA__` environment variables don't seem to be respected in the init script. In addition to the password issue, I noticed that the `nc` line didn't use the host configured in `GITEA__database__HOST` but defaulted to localhost:3306 until I configured `gitea.config.database.HOST`.

  3. The init script doesn't produce any output on some failure modes which makes debugging challenging.

As mentioned, I'm happy to help with the third one, but unfortunately the first two are a bit out of my league (I don't know Go at all).

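For anyone hitting the same thing, the workaround described above amounts to putting the real connection details under `gitea.config.database` in values.yaml. A sketch with placeholder values; the keys mirror Gitea's app.ini `[database]` section:

```yaml
gitea:
  config:
    database:
      DB_TYPE: mysql                    # or postgres
      HOST: my-db.example.com:3306      # placeholder host:port
      NAME: gitea
      USER: gitea
      PASSWD: "real-password-here"      # per the observation above, the init script reads this,
                                        # not the GITEA__database__PASSWD environment variable
```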
Member

Will get back to this issue soon, sorry for the delay!


This problem still persists in 4.x; is it possible to move the database connection testing and migration to a job rather than have it as an init container?

Currently init containers run before sidecars are started and Istio prevents connection to the database otherwise.

Member

> Is it possible to move the database connection testing and migration to a job rather than have it as an init container?

The init container relies on the database connection if a DB other than SQLite is used, so moving that logic completely out of the init container would risk instability of the init container itself.

Maybe it would be possible to use [Chart hooks](https://helm.sh/docs/topics/charts_hooks/) for that?

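For what it's worth, a chart-hook variant would look roughly like the Job below, which opts out of sidecar injection entirely. This is only a sketch: the image, mounted script, and credential wiring are placeholders and would need to match what the init container currently gets from the chart:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: gitea-init
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"   # no sidecar, so outbound DB traffic is not intercepted
    spec:
      restartPolicy: Never
      containers:
        - name: init
          image: gitea/gitea:1.13.7                                # placeholder image
          command: ["/bin/sh", "/etc/gitea-init/init_gitea.sh"]    # hypothetical mounted script
```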
lunny closed this issue 2022-03-02 00:25:50 +00:00