I have a bridge device, br0, set up with systemd-networkd that replaces my primary Ethernet interface, eth0. With the br0 bridge, Incus can create containers/VMs that have unique MAC addresses and get IP addresses assigned by my DHCP server (sudo incus profile device add <profileName> eth0 nic nictype=bridged parent=br0). Additionally, the containers/VMs can directly contact the host, unlike with MACVLAN.
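
For context, the bridge itself is the usual systemd-networkd arrangement, roughly like this (the file names are just whatever you choose under /etc/systemd/network/):

    # /etc/systemd/network/br0.netdev -- create the bridge device
    [NetDev]
    Name=br0
    Kind=bridge

    # /etc/systemd/network/eth0.network -- enslave eth0 to the bridge
    [Match]
    Name=eth0

    [Network]
    Bridge=br0

    # /etc/systemd/network/br0.network -- the bridge takes the host's address
    [Match]
    Name=br0

    [Network]
    DHCP=ipv4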

With Docker, I can’t see a way to get the same feature set from its options. I have MACVLAN working, but it’s even shoddier than the Incus implementation: it can’t do DHCP without a poorly maintained plugin, and the host cannot contact the container at all because of how MACVLAN works (which precludes running something like a DNS server in a container that the host itself needs to rely on).
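
The furthest I’ve gotten with Docker’s macvlan driver is static IPAM, where I have to carve a chunk of the LAN out myself so Docker’s address picks don’t collide with the DHCP server (the subnet/range values here are just examples):

    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      --ip-range=192.168.1.192/27 \
      -o parent=eth0 \
      macnet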

Is there an option I’ve missed with Docker’s bridge driver to specify an existing parent device? Or can I make another bridge device off of br0 and bind Docker to that one, host-like? Searching really fell apart once I got to this point.
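
The closest thing I’ve found is the bridge driver’s com.docker.network.bridge.name option, but as far as I can tell it only names the bridge that Docker itself creates and NATs behind, so something like the below still doesn’t put containers on the LAN the way Incus’s parent=br0 does (names and subnet are illustrative):

    docker network create -d bridge \
      -o com.docker.network.bridge.name=docker-br0 \
      --subnet=172.30.0.0/24 \
      mybridge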

Also, if someone knows how to match Incus’s networking capability with Podman, I would love to hear it. I’m eyeing a move to Podman Quadlets (with Debian 13) after I’ve gotten well-versed with Docker (and its vast support infrastructure to learn from).
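
From what I’ve read so far, a Quadlet is just a systemd-style unit file that Podman turns into a container service; a minimal sketch, with the image and file name as placeholders:

    # ~/.config/containers/systemd/whoami.container
    [Container]
    Image=docker.io/traefik/whoami:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target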

Hoping someone has solved this and wants to share their powers. I can always put Docker/Podman inside an Incus container, but I’d like to avoid onioning if possible.

  • glizzyguzzler@lemmy.blahaj.zone (OP) · 2 days ago

    This was very insightful, and I’d like to say I meaningfully grokked 90% of it!

    For an Incus container with its unique-MAC interface: yes, if I run a Docker container inside that Incus container and leave the Docker container in its default bridge mode, then I get the desired feature set (with the power of onions).
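
    For anyone else trying this: the Incus container needs nesting enabled before Docker will run inside it (container name is illustrative):

        incus config set mycontainer security.nesting=true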

    And thanks for explaining CNI; I’ve seen it referenced but didn’t fully get how it’s involved. I see that Podman uses it to make a MACVLAN interface that can do DHCP (until 5.0, where the replacement, Netavark, seems to be feature-compatible for MACVLAN), so Podman will sidestep the pain point of having to mark out a no-go zone of IPv4s on the DHCP server for Docker, as you mentioned. Close enough for containers that the host doesn’t need to talk to.
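
    If I’m reading the Netavark docs right, the new-style equivalent looks roughly like this (parent interface and network name are illustrative, and I believe a netavark dhcp-proxy service has to be running to actually handle the leases):

        podman network create -d macvlan \
          -o parent=eth0 \
          --ipam-driver dhcp \
          macnet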

    So in summary:

    • I’ve got Docker doing the most it can manage with MACVLAN, and there are no extra magicks to be done on it.

    • Podman will still use MACVLAN (so still no host-to-container comms), but it can use DHCP to get an address for the MACVLAN container.

    • If the host must talk to a MACVLAN container, I can either use the MACVLAN bypass you linked to above (sketched after this list) or put the Docker/Podman container inside an Incus container with its bridge mode.

    • Kubernetes continues to sound very powerful and flexible but is definitely beyond my reach yet. (Womp womp)
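
    For my own notes, that MACVLAN bypass boils down to giving the host its own MACVLAN sub-interface and routing the containers’ addresses through it; a rough sketch, with addresses and names as examples:

        # host-side macvlan shim so host <-> macvlan containers can talk
        ip link add macvlan-shim link eth0 type macvlan mode bridge
        ip addr add 192.168.1.250/32 dev macvlan-shim
        ip link set macvlan-shim up
        # route the containers' range via the shim
        ip route add 192.168.1.192/27 dev macvlan-shim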

    Thanks again for taking the time to type and explain all of that!

    • litchralee@sh.itjust.works · 2 days ago

      Kubernetes does indeed have a learning curve, but it’s also strangely accommodating for single-node setups, which can then be expanded just by adding components rather than tearing the whole thing down and starting again. In that sense, it’s a great learning platform for managing larger or commercial clusters, if only to get experience with the unique challenges inherent to scaling up.

      But that might be more of a !homelab@lemmy.ml point of view haha