I ran into a class of container networking problem where the usual answers were all slightly wrong.
I had Plex in a container. Plex inspects the interfaces it can see and publishes those addresses to clients. On a normal bridge network, that meant Plex happily discovered its container-side 10.x address and advertised it, even though nothing on my LAN could reach it directly. Clients would try that dead address first, wait, and then eventually fall back to the real one.
The recommended way to run Plex in a Docker container is with host networking (both linuxserver/plex and plexinc/pms-docker suggest it). That does remove the dead container address, but it creates the opposite problem: Plex can now see every host address. That included addresses I did not want it to advertise, because my server does a lot more than Plex and uses a multitude of addresses to do it.
What I wanted was:
- Host networking, but without the discovery of multiple IP addresses
- Plex does not support simply binding to one address; it insists on doing its own discovery
- So if I could somehow show Plex just one network interface with just one address …
That turns out to be a very different shape than normal Docker bridge networking.
The Problem with Bridge Networking
With ordinary bridge mode, the container gets a private address on a Docker-managed subnet, and then selected ports are published on the host. That is great for a lot of workloads. It is not great for applications that inspect their own interfaces and advertise what they find, because the bridge network is Docker-specific and not reachable from outside Docker. With bridge networking, plus an extra “Custom server access URLs” entry to let local clients find my published IP, I got this from Plex:
```json
[
  {
    // The Docker local network IP
    // Unreachable by LAN clients
    "address": "10.88.0.24",
    "port": 32400
  },
  {
    // The desired LAN IP
    "address": "192.168.1.16",
    "port": 32400
  },
  {
    // My public IP
    "address": "88.41.56.99",
    "port": 32400
  }
]
```
The 10.88.0.24 address was the problem. It was a perfectly valid address inside the container namespace, but it was not a useful address for clients on my LAN. Plex still published it because, from Plex’s point of view, it existed.
This is the key point: port publishing changes how traffic reaches the container from the outside. It does not change what the application sees from the inside.
So if the app auto-discovers interfaces, bridge mode can give it the wrong world-model.
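One way to see this world-model difference directly is to ask a throwaway container what addresses it can see. A quick sketch (the exact addresses depend on your Docker bridge subnet):

```shell
# On the default bridge network, the container sees only its
# Docker-internal address (e.g. 172.17.x.x or 10.88.x.x) --
# exactly what an interface-inspecting app would then advertise.
docker run --rm alpine ip addr show

# With host networking, the same command lists every address the
# host has: the opposite failure mode.
docker run --rm --network host alpine ip addr show
```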
Why Host Networking Was Still Wrong
The obvious next attempt was host networking.
That does remove the Docker bridge address, because there is no separate container network anymore. But now the process sees the host network namespace directly. If the host has multiple IPs, VLAN interfaces, VPN interfaces, or secondary addresses, the application may discover all of those as well. For Plex, that was far worse than plain bridge networking: remote clients would now try many local addresses that will never work on their network. An example listing could be:
```json
[
  {
    // The wrong LAN IP
    "address": "192.168.1.12",
    "port": 32400
  },
  {
    // More wrong LAN IPs
    "address": "192.168.1.13",
    "port": 32400
  },
  {
    "address": "192.168.1.14",
    "port": 32400
  },
  {
    "address": "192.168.1.15",
    "port": 32400
  },
  {
    // The desired LAN IP
    "address": "192.168.1.16",
    "port": 32400
  },
  {
    "address": "192.168.1.32",
    "port": 32400
  },
  {
    // My public IP
    "address": "88.41.56.99",
    "port": 32400
  }
]
```
So bridge mode was too isolated in the wrong way, and host mode was too shared in the wrong way.
What I Actually Needed
I needed the container to look like a small machine on the LAN.
That is where macvlan and ipvlan come in.
Both are Linux networking primitives that container runtimes expose. Docker is not inventing a new network model here; it is wiring together kernel features that already exist.
Three Short Networking Models
macvlan
macvlan gives each container its own layer-2 identity. To the rest of the network, the container looks like a separate device with its own MAC address and its own IP. That is why it is so useful for applications that insist on inspecting their own interfaces: the app sees a real LAN-facing identity instead of a Docker-internal bridge address. Docker’s docs describe this as useful for applications that expect to be directly connected to the physical network, which is exactly the class of problem I had. See the Docker macvlan docs.
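As a sketch, creating such a network looks like this (the parent interface `eno1` and the subnet are assumptions; adjust them to your LAN):

```shell
# Each container attached to this network gets its own MAC
# address and its own LAN IP, as if it were a separate device
# plugged into the same switch port as the host.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eno1 \
  macvlan_lan
```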
ipvlan L2
ipvlan in layer-2 mode aims for a similar result, but without giving every container a separate MAC address. The containers share the parent interface’s layer-2 identity, while still getting their own IPs and their own network namespaces. In practice, this still gives the application a real LAN-facing IP to discover, which is what matters here, but it tends to be friendlier to switches and port-security policies because you are not multiplying MAC addresses on one port. The best primary reference here is the Linux kernel ipvlan documentation, plus the Docker ipvlan docs.
ipvlan L3
ipvlan in layer-3 mode is a different model. The containers are no longer ordinary peers on the same LAN segment. Instead, the host effectively routes traffic to container-side subnets. Docker’s docs are explicit here: L3 mode should be on a separate subnet from the host’s default namespace, broadcast and multicast are filtered, and external systems need routes pointing back at the Docker host. That can be a clean design, but it is not the answer when the goal is “make this containerized app think it has exactly one ordinary LAN interface”. For that, L2 is the interesting mode.
The Important Catch
This part is easy to miss, because macvlan and ipvlan both sound like “put the container directly on the LAN and everything becomes simpler”. That is only partly true. The container gets a much cleaner view of the network. The host does not.
For macvlan, Docker’s docs explicitly say that containers attached to a macvlan network cannot communicate with the host directly. If you need that, you have to add more plumbing, such as a host-side macvlan interface or a second network attachment. See the macvlan considerations.
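The usual extra plumbing for host-to-container traffic is a host-side macvlan interface on the same parent. A sketch, not a complete recipe; the interface name, the spare address 192.168.1.250, and the container address are all assumptions:

```shell
# Create a macvlan interface for the host itself, on the same
# parent NIC the containers use, and give it a spare LAN address.
ip link add macvlan-shim link eno1 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up

# Route traffic for the container's address through the shim,
# since the parent interface cannot reach macvlan children directly.
ip route add 192.168.1.16/32 dev macvlan-shim
```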
For ipvlan L2, Docker says the containers cannot ping the underlying host interfaces, and that the default namespace is not reachable by design. In other words: this also isolates the host from the container network path you just created. See the ipvlan L2 docs.
For ipvlan L3, the situation is different again: the container networks are routed, not same-subnet peers. That means you must treat them as routed networks and distribute routes accordingly. On the plus side, once the necessary routing is in place (on the LAN side to reach your host, and on the host to reach the ipvlan L3 subnet), everything just works.
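To make that concrete, here is a sketch of what the routed setup could look like. The subnet 10.100.0.0/24 and the host address 192.168.1.12 are assumptions:

```shell
# On the Docker host: an ipvlan L3 network on its own subnet,
# separate from the host's LAN subnet.
docker network create -d ipvlan \
  --subnet=10.100.0.0/24 \
  -o ipvlan_mode=l3 \
  -o parent=eno1 \
  ipvlan_l3

# On the LAN router (or on each client): point the container
# subnet at the Docker host so replies can find their way back.
ip route add 10.100.0.0/24 via 192.168.1.12
```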
So the short version is:
- macvlan: real LAN identity, but host communication is awkward
- ipvlan L2: real LAN identity, but the host’s default namespace is still intentionally isolated
- ipvlan L3: routed design, not “container as a normal LAN peer”
That trade-off is fine for my Plex case, because my main requirement was not “make host-to-container management easy”. It was “make Plex advertise the right address”.
Why I Landed on ipvlan L2
For my case, ipvlan L2 was the most honest shape.
It gives the container one real IP on the LAN. No port bindings are needed, because the container is not hiding behind the host. Plex sees that IP as its own address. It does not see 10.88.x.x, and it does not see the host’s other addresses either.
Conceptually, this is what I wanted:
```
LAN
├─ host            192.168.1.12
└─ plex container  192.168.1.16
```
And inside the container, Plex should effectively just see:
```
eth0 -> 192.168.1.16
lo   -> 127.0.0.1
```
A Minimal Example
Create an ipvlan L2 network on the host’s LAN interface:
```shell
docker network create -d ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o ipvlan_mode=l2 \
  -o parent=eno1 \
  plex_lan
```
Then run Plex with a fixed address on that network:
```shell
docker run -d \
  --name plex \
  --network plex_lan \
  --ip 192.168.1.16 \
  plexinc/pms-docker:latest
```
At that point, clients talk directly to 192.168.1.16:32400, and Plex sees 192.168.1.16 as its local address.
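To verify the container really has the single-interface view, something like this sketch can help (it assumes the network and container from above, and that the image ships the `ip` tool):

```shell
# Expect only lo plus one LAN-facing interface carrying
# 192.168.1.16 -- no bridge address, no extra host addresses.
docker exec plex ip addr show

# From another LAN machine (not the Docker host itself, which is
# intentionally isolated from the ipvlan network by design):
curl -s http://192.168.1.16:32400/identity
```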
Trade-offs
The trade-off with ipvlan and macvlan is that you are moving closer to “real networking”:
- the container consumes a real IP
- LAN addressing becomes your responsibility
- host communication is no longer the convenient default
Wrap-up
Now my Plex happily auto-discovers and publishes exactly two addresses: my LAN IP and my public IP (which of course has the appropriate port-forwarding rules set up).
