Configurable network (copy options from default bridge network when creating a custom bridge network) #232

Open
opened 2023-06-07 08:32:13 +00:00 by crow · 11 comments

Problem

I have a network interface with a lower MTU than the default Docker MTU (`1500`).

I configured the MTU in `daemon.json`, but when a runner is instantiated the MTU is `1500` again. This seems to be because a new network is created for each [RunContext](https://gitea.com/gitea/act/src/branch/main/pkg/runner/run_context.go#L373), and I don't know why it does not respect the MTU configured in Docker's `daemon.json`.

Solution

Allow [options](https://docs.docker.com/engine/reference/commandline/network_create/#options) to be passed down to the Docker client.

Since this is an upstream issue as well, I opened the issue here, because it seems like the [API](https://gitea.com/gitea/act/src/branch/main/pkg/container/docker_network.go#L12) does not allow options to be passed down.
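
For illustration, passing driver options down when the network is created could look roughly like this with the Docker Go SDK (a minimal sketch, not act's actual code; newer SDK versions have moved these types from `types` to the `network` package):

```go
package example

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// createJobNetwork is a sketch: it creates a bridge network and passes an
// explicit MTU as a driver option, the same key "docker network create -o"
// accepts for the bridge driver.
func createJobNetwork(ctx context.Context, name, mtu string) error {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return err
	}
	defer cli.Close()

	_, err = cli.NetworkCreate(ctx, name, types.NetworkCreate{
		Driver: "bridge",
		Options: map[string]string{
			"com.docker.network.driver.mtu": mtu,
		},
	})
	return err
}
```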

Workaround

I added the options statically and compiled act_runner myself.

Member

My testing:

1. Configure `mtu` in `daemon.json`:

   ```json
   {
     // others
     "mtu": 2000
   }
   ```

   Apply and restart Docker.

2. Create a Docker network from the command line:

   ```bash
   docker network create -d bridge my-net
   ```

3. Inspect the network:

   ```bash
   docker network inspect my-net
   ```

output:

```json
[
    {
        "Name": "my-net",
        "Id": "aaf4c132eb44eea7012e67a8bc4e2b6dd05657c4848912747442a8a6e910875a",
        "Created": "2023-06-07T09:01:12.36757659Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
```

As we can see, `mtu` does not exist in the options.
So I guess this is Docker's default behavior: a user-defined network won't follow the `mtu` configured in Docker's `daemon.json`.

But I think we can add an item like `container.network_driver_opts` (refer to the [docs](https://docs.docker.com/engine/reference/commandline/network_create/#bridge-driver-options)) to the runner's config file to make network options configurable.
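
As a rough illustration of that idea (the struct, field name, and YAML tag below are hypothetical, not existing act_runner code), such an option could live in the runner's container config:

```go
package config

// Container is a hypothetical sketch of the runner's container config section;
// only the NetworkDriverOpts field is relevant here, and it does not exist today.
type Container struct {
	// NetworkDriverOpts would be forwarded as driver options (the "-o" flags of
	// "docker network create") when the runner creates the per-job network,
	// e.g. {"com.docker.network.driver.mtu": "1450"}.
	NetworkDriverOpts map[string]string `yaml:"network_driver_opts"`
}
```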

Author

@sillyguodong

Yes, this is indeed the issue. That said, I have no idea what to do next, since the issue cascades into the forked `act` package.

Does Gitea intend to make minimal changes to the package and avoid diverging from upstream? Or was the intention to create a point-in-time fork and make changes to fit Gitea's purposes?

Owner

We have no plan to hard fork act from upstream, and I am cautious about the solution of adding `container.network_driver_opts`: will this result in more and more Docker-related settings being added to act_runner's configuration?

Author

@wolfogre

Yes, but is this an issue?

I understand the argument about overwhelming configuration. But Docker has a lot of sane defaults, and most users would not have to touch this configuration in the first place.

Or is there an argument against exposing specific configuration that could potentially break the CI?

These are just assumptions based on my experience. Please elaborate.

Owner

I've got another idea: copy the options from the default bridge network when creating a custom bridge network, since users can modify `daemon.json` to update the options of the default bridge.

And it makes sense to copy the default options when creating custom networks for jobs.

We cannot do that via `docker network create --config-from bridge xxx`, because `bridge` isn't a "configuration network". But we can still inspect the network via the SDK client, then create a custom network with the same options.
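
A minimal sketch of that idea with the Docker Go SDK (assuming an older SDK where these types live in the `types` package; the actual change in act may differ):

```go
package example

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// createNetworkFromDefaultBridge reads the driver options of the default
// "bridge" network, which do reflect daemon.json settings such as the MTU,
// and reuses them for a new custom bridge network. In practice some
// bridge-specific keys (e.g. com.docker.network.bridge.name) may need to be
// filtered out before copying.
func createNetworkFromDefaultBridge(ctx context.Context, cli client.APIClient, name string) error {
	def, err := cli.NetworkInspect(ctx, "bridge", types.NetworkInspectOptions{})
	if err != nil {
		return err
	}
	_, err = cli.NetworkCreate(ctx, name, types.NetworkCreate{
		Driver:  "bridge",
		Options: def.Options,
	})
	return err
}
```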

Author

That is a good idea.

But how do you propose to do this without modifying the forked act repo?

Owner

😄 You may have misunderstood. By "we have no plan to hard fork act from upstream" I meant it's a soft fork: we will follow upstream regularly, though we won't stay identical to it.

Of course we modify the forked act repo, see https://gitea.com/gitea/act/pulls?state=closed

Author

Ahhh

Well that makes sense. Haha... My bad 😬

Then this approach seems to make the most sense. Good thinking 👍

wolfogre changed title from Configurable network to Configurable network (copy options from default bridge network when creating a custom bridge network) 2023-06-12 05:37:42 +00:00
sillyguodong self-assigned this 2023-06-13 09:12:20 +00:00

Is this still being worked on? Without this feature it seems impossible to use act_runner in my setup.

I'd love to help but I have no experience with Go.

Author

@northcode

As a temporary workaround, you can do this:

Vendor the dependencies:

```bash
go mod vendor
```

Edit this file and add the `mtu` option as below:

```go
// vendor/github.com/nektos/act/pkg/container/docker_network.go#19
Options: map[string]string{
	`com.docker.network.driver.mtu`: `Your desired mtu`,
},
```

After this, you can compile the binary yourself.


@crow thank you very much! that works! I hit a new roadblock getting docker build to work in k8s, but I'm currently digging through some other issues to figure out a solution to that.
