Logs flooded with "failed authentication attempt" from ssh.go #224

Closed
opened 2021-09-25 08:41:18 +00:00 by pat-s · 22 comments
Member

Currently running an instance with 1.15.3 (1 replica, RUN_MODE=dev) and my logs are flooded with the messages below.

So far everything seems to be working in the instance though, including SSH.
I did not make any changes to the SSH config of the chart. Any idea which setting might cause this?

```
2021/09/25 08:36:15 ...dels/notification.go:739:GetUIDsAndNotificationCounts() [I] [SQL] SELECT user_id, count(*) AS count FROM notification WHERE user_id IN (SELECT user_id FROM notification WHERE updated_unix >= $1 AND updated_unix < $2) AND status = $3 GROUP BY user_id [1632558963 1632558973 1] - 1.115848ms
2021/09/25 08:36:15 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43484 with error: EOF
2021/09/25 08:36:15 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43484
2021/09/25 08:36:18 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43504 with error: EOF
2021/09/25 08:36:18 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43504
2021/09/25 08:36:18 ...s/issue_stopwatch.go:51:GetUserStopwatches() [I] [SQL] SELECT "id", "issue_id", "user_id", "created_unix" FROM "stopwatch" WHERE (stopwatch.user_id = $1) [2] - 763.567µs
2021/09/25 08:36:19 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43510 with error: EOF
2021/09/25 08:36:19 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43510
2021/09/25 08:36:20 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43524 with error: EOF
2021/09/25 08:36:20 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43524
2021/09/25 08:36:20 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43532 with error: EOF
2021/09/25 08:36:20 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43532
2021/09/25 08:36:20 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43534 with error: EOF
2021/09/25 08:36:20 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43534
2021/09/25 08:36:20 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43546 with error: EOF
2021/09/25 08:36:20 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43546
2021/09/25 08:36:21 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43562 with error: EOF
2021/09/25 08:36:21 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43562
2021/09/25 08:36:22 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43568 with error: EOF
2021/09/25 08:36:22 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43568
2021/09/25 08:36:22 modules/ssh/ssh.go:259:sshConnectionFailed() [W] Failed connection from 10.0.1.150:43576 with error: EOF
2021/09/25 08:36:22 modules/ssh/ssh.go:261:sshConnectionFailed() [W] Failed authentication attempt from 10.0.1.150:43576
```

Could a malicious SSH scanner be hitting your IP?

Author
Member

The IP is an internal node IP, so maybe it was from an initial HA attempt. I read afterwards that there is no real support for this yet, even though one could set `replicaCount: 2`.

The logs are fine now (unclear why), so I'll close here :)

pat-s closed this issue 2021-09-25 19:13:32 +00:00
Author
Member

It's back and I don't have a clue yet what's causing this. It seems to also slow down the service to some degree.

The `10.` IP addresses look like the requests are coming from an internal instance. Yet there is no instance with the logged address?

pat-s reopened this issue 2022-03-05 16:02:19 +00:00
Author
Member

So I found that the requests are coming from the ingress pod. Could there be a relation to https://gitea.com/gitea/helm-chart#ssh-and-ingress?
I don't fully understand the role of `metallb` here.

Currently SSH is working in our instance, but it seems some service is not completely happy.

Contributor

This is definitely internal, as I changed the SSH port to some random port and am still seeing this flood of logs within a few seconds. So it's not an external brute-force SSH attempt.

I used a K8s [external LoadBalancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/) to expose the SSH port, and immediately these logs start to emerge; changing ports doesn't make any difference. When the LoadBalancer is removed, there are no logs.

Changing the log options in `values.yaml`, even setting SSH logs to disabled, doesn't help.

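For context, exposing the SSH port this way typically means a Service of type LoadBalancer in front of the Gitea pod; a minimal sketch, assuming default labels and ports (the name, namespace and selector below are illustrative, not taken from the chart):

```yaml
# Minimal sketch of a LoadBalancer Service exposing Gitea's SSH port.
# Service name, namespace and selector labels are assumptions; adjust to your release.
apiVersion: v1
kind: Service
metadata:
  name: gitea-ssh-lb
  namespace: gitea
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: gitea   # must match your Gitea pod labels
  ports:
    - name: ssh
      protocol: TCP
      port: 22        # port exposed by the load balancer
      targetPort: 22  # SSH port inside the Gitea pod
```

A load balancer's TCP health checks opening and immediately closing connections would produce exactly the kind of EOF / failed-authentication entries shown above, which would also explain why they appear only while the LoadBalancer exists.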
Author
Member

So I've done some research and I think it's related to the nginx ingress.
The requests come from it, and it seems the ingress is trying to establish an SSH connection to the Gitea pod but fails.
I guess because there is no SSH key pair between the two.
I am not sure why the ingress does this in the first place, as it should only do the port forwarding.
So I'm not sure if this is really a Gitea issue or rather one of the nginx ingress.
I haven't found anything related to this behavior in the nginx ingress docs.

A workaround is to change the log level in Gitea so that "warnings" are not issued anymore:

```yml
gitea:
  config:
    log:
      LEVEL: error
```

@pi3ch
In your case (#303) it seems your config is wrong, i.e. you specified `log:` top-level whereas it needs to be under `gitea.config`.
Contributor

Thanks @pat-s. I didn't post my full helm config, but I did have it in the right order.
Here is the snippet from `/data/gitea/conf/app.ini`:

```
[log]
LEVEL    = Error
MODE     = console
COLORIZE = false
```

or

```
[log]
LEVEL    = Error
MODE     = console
COLORIZE = false

[log.console]
LEVEL = Error
```

They didn't solve the issue for me. Also tried `error` instead of `Error`.
Author
Member

Did you perhaps change this within a running session?

The workaround MUST work, as it simply suppresses all log output of level "warning". The fact that you still see them can only mean that the log option is not picked up/honored.

Could you maybe try setting only the LOG LEVEL in the config and avoid any other log-related adjustments?
And check the logs for possible messages about the log entry not being picked up?

I can directly reproduce this by switching between "Error" and "Warning" in 1.16.5.

Contributor

I changed the helm values and verified the changes inside the pod after the upgrade. Bumped the version to `1.16.15` and removed all other entries for `log`.

```
# cat /data/gitea/conf/app.ini
[log]
LEVEL = error

# env
GITEA_APP_INI=/data/gitea/conf/app.ini
GITEA_TEMP=/tmp/gitea
```

and I'm still getting the following (I also tried the `critical` and `none` levels):

```
# kubectl logs gitea-0
Connection closed by 10.104.0.11 port 64948
kex_exchange_identification: Connection closed by remote host
Connection closed by 10.104.0.7 port 53758
kex_exchange_identification: Connection closed by remote host
Connection closed by 10.104.0.11 port 64950
kex_exchange_identification: Connection closed by remote host
Connection closed by 10.104.0.5 port 17076
```

All other config settings are applied except `log`. Strange!
Author
Member

Your messages differ somewhat from mine reported above, so they might not be issued by the "warning" logger, and hence it seems you cannot suppress them by setting the log level to error.
So it might be that they are coming from some other ingress configuration.

Do you also use an nginx ingress?

FWIW we have set the following two options in [this](https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx) helm chart to allow SSH port forwarding to the Gitea deployment (Terraform notation):

```
set {
  name  = "tcp.22"
  value = "gitea/gitea-ssh:22"
}

set {
  name  = "controller.service.nodePorts.tcp.22"
  value = var.NODE_PORT_SSH
}
```

where `NODE_PORT_SSH` is the node port defined for the ingress deployment.
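For readers not using Terraform, the same two settings can be expressed directly in the ingress-nginx chart's values; a minimal sketch (the node port number is a placeholder to replace with your own):

```yaml
# ingress-nginx values.yaml sketch: forward TCP 22 to the chart's gitea-ssh service
# and pin the controller's node port. 30022 is only a placeholder for your NODE_PORT_SSH.
tcp:
  "22": "gitea/gitea-ssh:22"
controller:
  service:
    nodePorts:
      tcp:
        "22": 30022
```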
Contributor

Screenshots from admin:

![image](/attachments/cfa07d94-b7f3-415f-8ffb-8ae693a66090)

![image](/attachments/87d48132-b4e4-4457-930c-88a4d6189d14)
Contributor

Good point, indeed they look different.
I don't use the nginx ingress for SSH (only HTTP) and instead use a k8s [external LoadBalancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/). When the LoadBalancer is deployed, these logs immediately start to emerge. When the LoadBalancer is removed, there are no logs.

> Your messages differ somewhat from mine reported above, so they might not be issued by the "warning" logger, and hence it seems you cannot suppress them by setting the log level to error.
> So it might be that they are coming from some other ingress configuration.
>
> Do you also use an nginx ingress?
>
> FWIW we have set the following two options in [this](https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx) helm chart to allow SSH port forwarding to the Gitea deployment (Terraform notation):
>
> ```
> set {
>   name  = "tcp.22"
>   value = "gitea/gitea-ssh:22"
> }
>
> set {
>   name  = "controller.service.nodePorts.tcp.22"
>   value = var.NODE_PORT_SSH
> }
> ```
>
> where `NODE_PORT_SSH` is the node port defined for the ingress deployment.
Contributor

OK, I think I know what it is. These are not Gitea logs but sshd logs, which are set to `INFO`.

Contributor

The SSH log level is set in `/etc/template/sshd_config`. Change `LogLevel` to `ERROR` or `FATAL` and restart sshd (`s6-svc -r /etc/s6/openssh`). Issue resolved.

This is NOT a permanent fix, as a pod restart will bring the log level back to `INFO`.


A more permanent solution is to configure it via environment variables (https://github.com/go-gitea/gitea/issues/19232). PR: https://github.com/go-gitea/gitea/pull/19274

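Once that change is released, the idea would be to pass the variable through the chart; a hypothetical values.yaml sketch, assuming the chart exposes a `statefulset.env` list for extra container environment variables (verify the key against your chart version before relying on it):

```yaml
# Hypothetical chart values: pass SSH_LOG_LEVEL to the Gitea container.
# `statefulset.env` is an assumption about the values layout, not a confirmed key.
statefulset:
  env:
    - name: SSH_LOG_LEVEL
      value: "ERROR"
```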
Contributor

> A more permanent solution is to configure it via environment variables (https://github.com/go-gitea/gitea/issues/19232). PR: https://github.com/go-gitea/gitea/pull/19274

When is this fix going to be released? Can't find it in the latest v1.16.8.

Member

1.17.0 based on the milestone of the mentioned Gitea PR.

Contributor

@justusbunsi There is a bug in the helm chart: `SSH_LOG_LEVEL` is not set as an environment variable, so the setup script still doesn't apply it to the SSH config.

https://gitea.com/gitea/helm-chart/src/branch/main/templates/gitea/statefulset.yaml#L206

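In other words, the container spec rendered by the statefulset template would need an entry roughly like the following for the image's setup script to pick the value up (a sketch of the missing piece, not the actual chart code):

```yaml
# Sketch of the env entry missing from the rendered Gitea container spec.
# The value is an example; the setup script only needs the variable to be present.
env:
  - name: SSH_LOG_LEVEL
    value: "ERROR"
```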
Member

Looks like this is resolved. Closing it now. Feel free to reopen.

Author
Member

@justusbunsi I might be missing something, but I think this is not resolved. The issue still applies as `SSH_LOG_LEVEL` is not defined by default. So unless one manually sets `SSH_LOG_LEVEL` or sets `gitea.log.LEVEL: error`, one is experiencing this issue.

Why would it be bad to define `SSH_LOG_LEVEL` in the statefulset defaults as suggested in #358?

Member

Oh. My bad. Due to the GitNex design I mixed up some things and thought #358 was an issue with just a reference to this one.
In that case it would be enough to specify the environment variable. But as there is already the pull request to add this, I agree with you.
Let's reopen both.

Member

Closed by #358
