nginx ingress microk8s - Service "default/gitea-http" does not have any active Endpoint #253

Closed
opened 2021-11-21 19:46:13 +00:00 by NathanDotTo · 10 comments

I have enabled ingress on microk8s with the command `microk8s enable ingress`.

I see this message in the nginx-ingress-microk8s-controller pod log:

```
Only Ingresses with class "public" will be processed by this Ingress controller
```

What that turns out to mean is that the ingress config in `values.yaml` has to have:

```
ingress:
  enabled: false
  className: public
  ...
```

Without the `className: public` the ingress is ignored:

```
I1121 19:43:45.152469       6 store.go:352] "Ignoring ingress" ingress="default/gitea" kubernetes.io/ingress.class="" ingressClassName="nginx"
```
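As a quick check, the ingress classes a cluster actually serves can be listed with a standard kubectl query (a sketch; on microk8s you may need to prefix commands with `microk8s`):

```shell
# List the IngressClass resources the cluster knows about; on microk8s
# the bundled nginx ingress controller registers the class "public"
kubectl get ingressclass
```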

Even with this `className: public` setting, though, I get:

```
W1121 19:44:33.113743       6 controller.go:977] Service "default/gitea-http" does not have any active Endpoint.
```

Any ideas please?
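For reference, a `values.yaml` ingress section that the microk8s controller would actually pick up might look roughly like this (a sketch; the host is a placeholder and the field names follow the chart version discussed in this thread):

```yaml
ingress:
  enabled: true          # the chart only renders the Ingress when enabled
  className: public      # the class the microk8s nginx controller watches
  hosts:
    - host: git.example.com    # placeholder host
      paths:
        - path: /
          pathType: Prefix
```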

Member

Typically, Kubernetes ingresses also look for a specific annotation to distinguish between different parallel ingresses. Have you tried adding such an annotation?
https://stackoverflow.com/a/67041204

If this does not do the trick: the message "No active endpoint" could also mean that something within the traffic path (external request -> ingress -> service -> endpoint -> pod -> application) is misconfigured. Inactive endpoints usually indicate that the pod selectors set on the service do not match any running pods.
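The selector-to-pod match described above can be checked directly (a sketch; the service name and namespace follow this thread, and the label shown is only an example):

```shell
# Does the Service currently have any endpoints?
kubectl -n default get endpoints gitea-http

# Which label selector does the Service use to find pods?
kubectl -n default get svc gitea-http -o jsonpath='{.spec.selector}{"\n"}'

# Are there Ready pods carrying those labels? If this list is empty, or the
# pods are not Ready, the controller logs "does not have any active Endpoint"
kubectl -n default get pods --selector app.kubernetes.io/name=gitea
```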

Author

To install nginx so that it works with `ingressClassName: nginx`, use:

```
# https://kubernetes.github.io/ingress-nginx/deploy/
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```
NathanDotTo changed title from nginx ingress microk8s - ` Service "default/gitea-http" does not have any active Endpoint.` to nginx ingress microk8s - Service "default/gitea-http" does not have any active Endpoint 2021-11-22 10:57:38 +00:00
Author

It occurred to me that the `gitea-http` service depends on the `gitea-0` pod. The `gitea-0` pod takes some time to come up, as it is waiting on the `postgresql` service.

I used the Helm chart *without* the ingress, i.e., `enabled: false`. The services eventually started, and `curl 127.0.0.1:3000` returned a web page.

I then used `k create -f ingress.yaml` to apply the ingress, where the content of `ingress.yaml` is below. I now do *not* see the error message `Service "default/gitea-http" does not have any active Endpoint.`

This *seems* to imply that the ingress can only be applied when all of the services are active. This is an unsatisfactory conclusion if one assumes that it should be possible to apply the ingress in the Helm chart.

The `ingress.yaml` is a copy of the output from `helm --debug install --dependency-update gitea . -f values.yaml` with `enabled: true` set for the `ingress` section in `values.yaml`.

```
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  labels:
    helm.sh/chart: gitea-0.0.0
    app: gitea
    app.kubernetes.io/name: gitea
    app.kubernetes.io/instance: gitea
    app.kubernetes.io/version: "1.15.4"
    version: "1.15.4"
    app.kubernetes.io/managed-by: Helm
spec:
  ingressClassName: nginx
  rules:
    - host: <VM host name>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea-http
                port:
                  number: 3000
```
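A possible way to keep the chart-managed ingress enabled while avoiding the transient warning is to let Helm block until the workload is Ready (a sketch; `--wait` and `--timeout` are standard Helm flags, and the timeout value here is a guess). As far as I understand, the warning is transient either way, since the controller re-syncs once endpoints appear.

```shell
# --wait blocks until pods, PVCs and services report Ready;
# --timeout bounds the wait (gitea plus postgresql can take a few minutes)
helm install gitea gitea-charts/gitea -f values.yaml --wait --timeout 10m
```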
Member

> This seems to imply that the ingress can only be applied when all of the services are active. This is an unsatisfactory conclusion if one assumes that it should be possible to apply the ingress in the Helm chart.

In general, all Kubernetes resources (such as Ingress, Deployment, Service) can be applied at any time, in any order. If one resource depends on another, Kubernetes waits until the dependency is available. Ingress resources in particular can be applied at any time: if the ingress controller (in your case NGINX, IIRC) is configured correctly, it automatically picks up new Ingress resources.

As you said, this is now working for you, right? I'd still like to try to reproduce the behavior you described. Please share:

- microk8s version
- the Gitea Helm chart version used (I guess it's the master branch version, based on Gitea version `1.15.4`. Right?)
- the actual changes made to `values.yaml` compared to the default values. You can skip the Gitea configuration itself, and please redact any sensitive data.

If you use the latest [available version 4.1.1](https://artifacthub.io/packages/helm/gitea/gitea), do you still get this error?

Author

Thank you for following up. This is not working properly at all, just inching forward. I am actually going to install a Git server on the VM itself and move on now. This Gitea container is a small piece of a larger project, and it has sadly swallowed up far too much time.

But I will be able to test any suggestions you make, as I do want to see how this *should* work.

I am using:

```
sudo snap install microk8s --classic --channel=1.21/stable
```

This Helm chart:

https://gitea.com/NathanDotTo/helm-chart/src/branch/master/values.yaml

Which is a fork I made last week.

So, actually, this should produce the same result:

```
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
helm install gitea gitea-charts/gitea
```

In the end I did not change the Helm chart's `values.yaml`; instead, I applied the ingress separately, after the services and pods were up and running.

I installed ingress-nginx with Helm:

```
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```

I am running on an Ubuntu 21.10 server as a vSphere VM. I also have the build files for the Packer template and the Terraform config for the VM itself, if that helps.

Please also note:

```
microk8s enable storage dns:<vSphere lab DNS server list>
```

The storage addon is required for postgresql, and the DNS addon is required as well; otherwise the `gitea` service cannot find the `postgresql` service.

Member

Thanks for the detailed description, and sorry for misunderstanding your previous comment. I will have a closer look at this in the next days.

The master branch is not the same as the latest release, because it contains unreleased changes. But I don't think this really changes the current situation.

Author

Many thanks. I also configure `sudo iptables -P FORWARD ACCEPT` on the microk8s VM.

Member

> I will have a closer look at this in the next days.

Obviously, I was not able to have a closer look "the next days". Is this issue still valid, @NathanDotTo?

justusbunsi added the status/needs-feedback label 2023-04-21 14:39:29 +00:00
Member

Stale and possibly outdated -> closing

pat-s closed this issue 2023-07-17 19:58:48 +00:00

Reference: gitea/helm-chart#253