Provide "docker run" options for the containers launched by the runner #79
Hey,
Not sure if this is already possible, but I find it really necessary to be able to provide extra options for the containers that the runner launches, so I can, for example, add volumes to them:
-v /path/to/dir/:/path/to/dir
If this is already possible, please let me know!
I understand this is something that wouldn't be possible on GitHub Actions, but since we're self-hosting Gitea on our servers, having some extra flexibility here would be great!
Thank you very much!!
Due to the missing container-options flag / config, you currently have to add them to your workflow file.
See the second example with container: in https://github.com/nektos/act/issues/1696#issuecomment-1483385747
Not tested with act_runner; it works in act.
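For illustration, such a workflow-level override might look roughly like this (a sketch only, untested with act_runner as noted above; the image name and mount path are placeholders):

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: node:20-bookworm          # placeholder image
      # extra "docker run" options for this job's container
      options: -v /path/to/dir/:/path/to/dir
    steps:
      - run: ls /path/to/dir           # the mounted host directory should be visible here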
Thanks. I didn't know this!
That should be enough for me, if it works!!
I'm giving it a try.
I see I cannot use env.XYZ like this:

Which is OK - we can just hardcode the values.

However: I was expecting echo "$OUTPUT1" (the 3rd line from the bottom) to print hello, but it doesn't.

Now - a few notes:
- I run Gitea in Docker, with docker.sock mounted from the host.
- I understand -v /var/run/act/workflow/:/var/run/act/workflow/ mounts the host's /var/run/act/workflow/ folder, so this is not a valid test. I saw the folder was created on the host and contains all the Step files. I deleted that folder.
- I also tried without -v /var/run/act/workflow/:/var/run/act/workflow/. I see the folder wasn't created on the host, but hello isn't printed either.
- Not sure if the act_runner tries to add the volume itself, using the real folder or so.

The thing is that since both Gitea and act_runner run in Docker, I'm not exactly sure how act_runner could make this work for $GITHUB_OUTPUT.

I see needs in your example; however, that is not implemented (yet) in Gitea Actions and needs protocol changes... The ones who implemented this in the first place forgot that (needs works in nektos/act). Please read my list of defects:
I'm still using my own GitHub Actions Simulator for gitea, which I posted in the initial gitea actions issue before this became reality
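For reference, the output / needs pattern being tested above looks roughly like this in plain GitHub Actions workflow syntax (job and step names are made up for this sketch; per the comments here, job outputs were not yet passed through by Gitea Actions at the time):

jobs:
  job1:
    runs-on: ubuntu-latest
    # expose a step output as a job output
    outputs:
      output1: ${{ steps.step1.outputs.output1 }}
    steps:
      - id: step1
        run: echo "output1=hello" >> "$GITHUB_OUTPUT"
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - env:
          OUTPUT1: ${{ needs.job1.outputs.output1 }}
        run: echo "$OUTPUT1"   # expected to print "hello"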
To be clear, needs works in Gitea Actions; what is not implemented is output.

Actually, not in the version I have right now. There have been a number of fixes since, also from issues I raised. I was waiting for a new version of Gitea, but I've just realized that act_runner can be updated separately (of course, since I've downloaded a separate binary for it :-) !). I'll look into upgrading it now to the latest version.
Although some fixes go into Gitea itself, like https://github.com/go-gitea/gitea/pull/23789
I was referring to the needs context ${{ needs.<id>.result }}, because it has been empty.

like this?
Hey @seepine
I was thinking just something as simple as:
options: -v /root-path/:/root-path/ -v /var/run/act/workflow/:/var/run/act/workflow/
which seems to exist already, but there appear to be some caveats/limitations at the moment, based on the comments above.
I'd say we'll give it a bit of time while the Gitea devs continue to implement what's missing and make improvements, and then we can come back to this to see what we can do to improve this scenario :-)
I see that both outputs and needs have been implemented. Is needs.job.outputs.output supposed to be working on latest? It doesn't seem to be on my side.
Sometimes you just want to inject the same options into all started containers (e.g. to set proxy settings), so it's better to "force" them through act_runner instead of repeating the same configuration over and over in each repository workflow.

I've built my own act_runner image with a simple patch to let container.options be injected into all started containers. See #265
In this case you may add them to options: in the act-runner configuration file. This is what I do [1] and it appears to work without problems.

Unless I misunderstand something, this issue looks like it is already fixed.
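For anyone looking for the same thing, the relevant part of the act_runner configuration would look something like this (a sketch; the mount path is a placeholder, and the field names follow the example config shipped with act_runner, which the comments here refer to as container.options):

container:
  # extra options passed to the job containers started by the runner,
  # e.g. volumes or proxy-related settings
  options: -v /path/to/dir/:/path/to/dir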
No, it's not. options are applied only to the default container.

If you use a custom container to run your jobs (and that happens most of the time, i.e. if you need Node.js, Python or other environments not available in the default container), then those options are not applied. You actually need to specify them within the job definition (which is exactly what I want to avoid) and pollute all of your repos.
Actually: options defined in runner config