Logs flooded with "failed authentication attempt" from ssh.go
#224
Reference: gitea/helm-chart#224
Currently running an instance with 1.15.3 (1 replica, RUN_MODE=dev) and my logs are flooded with the messages below.
So far everything seems to be working in the instance though, including SSH.
I did not make any changes to the SSH config of the chart. Any idea which setting might cause this?
Could a malicious SSH scanner be hitting your IP?
The IP is an internal node IP, so maybe it was from an initial HA attempt. I read afterwards that there is no real support for that yet, even though one can set `replicaCount: 2`.

The logs are fine now (unclear why), so I'll close here :)
It's back and I don't have a clue yet what's causing this. It seems to also slow down the service to some degree.
The `10.` IP addresses look like the requests are coming from an internal instance, yet there is no instance with the logged address.

So I found the requests are coming from the ingress pod. Could there be a relation to https://gitea.com/gitea/helm-chart#ssh-and-ingress? I don't fully understand the role of `metallb` here.

Currently SSH is working in our instance, but it seems some service is not completely happy.
This is definitely internal, as I changed the SSH port to some random port and still saw this flood of logs within a few seconds. So it's no external SSH brute-force attempt.
I used a K8s external LoadBalancer to expose the SSH port, and immediately these logs started to emerge; changing ports doesn't make any difference. When the LoadBalancer is removed, there are no logs.
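For reference, exposing SSH through an external LoadBalancer with this chart is typically done via the service values. A minimal sketch, assuming the chart's `service.ssh` block (field names should be verified against the `values.yaml` of your chart version):

```yaml
# values.yaml (sketch): expose the SSH service via a LoadBalancer
# instead of the default ClusterIP type
service:
  ssh:
    type: LoadBalancer
    port: 22
```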
Changing the log options in `values.yaml`, even setting SSH logs to disabled, doesn't help.

So I've done some research and I think it's related to the nginx ingress.
The requests come from it, and it seems the ingress is trying to establish an SSH connection to the Gitea pod but fails, I guess because there is no SSH key pair between the two.
I am not sure why the ingress does this in the first place, as it should only do port forwarding.
So I'm not sure whether this is really a Gitea issue or rather one with the nginx ingress.
I haven't found anything related to this behavior in the nginx ingress docs.
A workaround is to change the log level in Gitea, so that "warnings" are not issued anymore.
@pi3ch In your case (#303) it seems your config is wrong, i.e. you specified `log:` at the top level whereas it needs to be under `gitea.config`.
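The placement described above would look roughly like this. A sketch only; the `error` value is an example, and the key casing follows Gitea's app.ini conventions:

```yaml
# values.yaml (sketch): log settings belong under gitea.config,
# not at the top level of the values file
gitea:
  config:
    log:
      LEVEL: error
```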
Thanks @pat-s. I didn't post my full helm config, but I did have it in the right place:
Here is the snippet from /data/gitea/conf/app.ini:

or

They didn't solve the issue for me. I also tried `error` instead of `Error`.

Did you change this within a running session eventually?
The workaround MUST work as it simply suppresses all log outputs of level "warning". The fact you still see them can just mean that the log option is not picked up/honored.
Could you maybe try only setting the LOG LEVEL in the config and avoid any additional other log related adjustments?
And could you check the logs for any messages about the log entry not being picked up?
I can directly reproduce this by switching between "Error" and "Warning" in 1.16.5.
I change the helm values and verify the changes inside the pod after the upgrade. I bumped the version to `1.16.15` and removed all other entries for `log`, and I am still getting them (I also tried the `critical` and `none` levels).

All other config settings apply except `log`. Strange!

Your messages differ somewhat from mine reported above, so they might not be issued by the "warning" logger, and hence it seems you cannot suppress them by setting the log level to error.
So it might be that they are coming from some other ingress configuration.
Do you also use an nginx ingress?
FWIW, we have set the following two options in this helm chart to allow SSH port forwarding to the Gitea deployment (Terraform notation):

where `NODE_PORT_SSH` is the node port defined for the ingress deployment.

Screenshots from admin:
Good point, indeed they look different.
I don't use the nginx ingress for SSH (only HTTP) and instead use a k8s external LoadBalancer. When the LoadBalancer is deployed, these logs immediately start to emerge. When the LoadBalancer is removed, there are no logs.
OK, I think I know what it is. These are not Gitea logs but sshd logs, and sshd's log level is set to `INFO`.

The SSH log level is set in `/etc/template/sshd_config`. Change `LogLevel` to `ERROR` or `FATAL` and restart sshd (`s6-svc -r /etc/s6/openssh`). Issue resolved.

This is NOT a permanent fix, as a pod restart will reset the log level to `INFO`.
A more permanent solution is to configure it via environment variables (https://github.com/go-gitea/gitea/issues/19232). PR: https://github.com/go-gitea/gitea/pull/19274
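The temporary in-pod workaround described above can be sketched as follows, run from a shell inside the Gitea pod (e.g. via `kubectl exec`). The paths are as reported in this thread and may differ between image versions:

```shell
# Temporary workaround: lower sshd's log level inside the running pod.
# This edits the pod filesystem only and is lost on pod restart.
sed -i 's/^LogLevel .*/LogLevel ERROR/' /etc/template/sshd_config

# Restart the s6-supervised sshd so the new config takes effect.
s6-svc -r /etc/s6/openssh
```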
When is this fix going to be released? I can't find it in the latest v1.16.8.
1.17.0 based on the milestone of the mentioned Gitea PR.
@justusbunsi There is a bug in the helm chart: `SSH_LOG_LEVEL` is not set as an environment variable, so the setup script still doesn't apply it to the SSH config.
https://gitea.com/gitea/helm-chart/src/branch/main/templates/gitea/statefulset.yaml#L206
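Until the chart defines it by default, the environment variable can be supplied manually. A sketch, assuming the chart exposes a `statefulset.env` list for extra container environment variables (verify the value name against your chart version):

```yaml
# values.yaml (sketch): pass SSH_LOG_LEVEL so the container's setup
# script can template it into sshd_config on startup
statefulset:
  env:
    - name: SSH_LOG_LEVEL
      value: "ERROR"
```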
Looks like this is resolved. Closing it now. Feel free to reopen.
@justusbunsi I might be missing something, but I think this is not resolved. The issue still applies, as `SSH_LOG_LEVEL` is not defined by default. So unless one manually sets `SSH_LOG_LEVEL` or sets `gitea.log.LEVEL: error`, one experiences this issue.

Why would it be bad to define `SSH_LOG_LEVEL` in the statefulset defaults, as suggested in #358?

Oh. My bad. Due to the GitNex design I mixed up some things and thought #358 was an issue with just a reference to this issue.
In that case it would be enough to specify the environment variable. But as there is already a pull request to add this, I agree with you.
Let's reopen both.
Closed by #358