Gitea chart will recreate INTERNAL_TOKEN every 21:51/52 GMT if the token is defined before installing. #43
Reference: gitea/helm-chart#43
I installed the Gitea chart, only to find that the repository refused pushes every day until the service was restarted, failing with a 403 error when accessing /api/internal during the push.
After digging into the pod, I found that INTERNAL_TOKEN was regenerated around 21:51 GMT every day without the service restarting. The pre-receive hook then uses the new internal token to access /api/internal, while that token is not yet accepted by the running server.
To reproduce, install the Gitea chart without setting INTERNAL_TOKEN/SECRET_KEY in values.yaml.
I am not familiar with Go or the Gitea codebase, but I am curious which code causes this problem.
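As a workaround, the tokens can be pinned in values.yaml so the chart never regenerates them. A minimal sketch, assuming the chart maps `gitea.config` into app.ini sections (the placeholder values below are hypothetical; generate real ones with `gitea generate secret INTERNAL_TOKEN` and `gitea generate secret SECRET_KEY`):

```yaml
gitea:
  config:
    security:
      # Hypothetical placeholders -- replace with output of:
      #   gitea generate secret INTERNAL_TOKEN
      #   gitea generate secret SECRET_KEY
      INTERNAL_TOKEN: "<your-generated-internal-token>"
      SECRET_KEY: "<your-generated-secret-key>"
```

With both values fixed in app.ini, the pre-receive hook and the running server always agree on the same internal token.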
Is this Issue still relevant?
Sounds like an internal cronjob running at that time. Can you find out which one fires around then?
If this is an issue with the Helm chart rather than with an internal cronjob as suspected, it will probably be fixed by #239.
Closing this for now. As mentioned above, with 5.0.0 both values will be persistent on new installs. Please open a new issue or reopen this one if you run into similar issues in the future.