Bug 1592932 - no network access in containers when doing 'podman run' on RHELAH
Summary: no network access in containers when doing 'podman run' on RHELAH
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: podman
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Assignee: Dan Williams
QA Contact: Martin Jenner
URL:
Whiteboard:
Depends On:
Blocks: 1593419
 
Reported: 2018-06-19 15:33 UTC by Micah Abbott
Modified: 2019-03-01 15:43 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1593419
Environment:
Last Closed: 2019-03-01 15:43:42 UTC
Target Upstream Version:
Embargoed:



Description Micah Abbott 2018-06-19 15:33:57 UTC
On a RHELAH 7.5.1-1 system, I'm unable to get network access in a container when using `podman run`.  The host VM is running in our internal OpenStack instance.

# rpm-ostree status
State: idle
Deployments:
● ostree://rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard
                   Version: 7.5.1.1 (2018-05-22 00:51:05)
                    Commit: c28680604bc84f472804a8f8c787917496739bc61529cbee7c474f68d4daeb81
              GPGSignature: Valid signature by 567E347AD0044ADE55BA8A5F199E2F91FD431D51

# rpm -q containernetworking-plugins podman runc
containernetworking-plugins-0.7.0-4.gitb51d327.el7.x86_64
podman-0.4.1-4.gitb51d327.el7.x86_64
runc-1.0.0-27.rc5.dev.git4bb1fe4.el7.x86_64

# date; podman run -it docker.io/alpine ping -c 5 1.1.1.1; date
Tue Jun 19 15:30:39 UTC 2018                                                        
PING 1.1.1.1 (1.1.1.1): 56 data bytes                                                                                                                                                                             
                                                                                                                                                                                                                  
--- 1.1.1.1 ping statistics ---                                                                                                                                                                                   
5 packets transmitted, 0 packets received, 100% packet loss                         
Tue Jun 19 15:30:53 UTC 2018         

# journalctl -b --since 15:30:39 --until 15:30:53               
-- Logs begin at Mon 2017-09-25 18:58:03 UTC, end at Tue 2018-06-19 15:30:53 UTC. --                   
Jun 19 15:30:39 micah-rhelah-ltt kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Jun 19 15:30:39 micah-rhelah-ltt kernel: cni0: port 2(vethdb53473c) entered blocking state
Jun 19 15:30:39 micah-rhelah-ltt kernel: cni0: port 2(vethdb53473c) entered disabled state
Jun 19 15:30:39 micah-rhelah-ltt kernel: device vethdb53473c entered promiscuous mode
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2672] device (vethdb53473c): carrier: link connected
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2678] manager: (vethdb53473c): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jun 19 15:30:39 micah-rhelah-ltt kernel: cni0: port 2(vethdb53473c) entered blocking state
Jun 19 15:30:39 micah-rhelah-ltt kernel: cni0: port 2(vethdb53473c) entered forwarding state
Jun 19 15:30:39 micah-rhelah-ltt kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2860] device (vethdb53473c): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2874] ifcfg-rh: add connection in-memory (8b79b037-95b9-3a87-92d7-1e0d752ed4d7,"Wired connection 3")
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2878] settings: (vethdb53473c): created default wired connection 'Wired connection 3'
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2919] ifcfg-rh: add connection in-memory (58ef8f27-af90-464f-94b0-9be2d23a6a4d,"vethdb53473c")
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2923] device (vethdb53473c): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2928] device (vethdb53473c): Activation: starting connection 'vethdb53473c' (58ef8f27-af90-464f-94b0-9be2d23a6a4d)
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2973] device (vethdb53473c): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2975] device (vethdb53473c): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2977] device (vethdb53473c): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2977] device (cni0): bridge port vethdb53473c was attached
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2977] device (vethdb53473c): Activation: connection 'vethdb53473c' enslaved, continuing activation
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2978] device (vethdb53473c): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2981] device (vethdb53473c): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.2983] device (vethdb53473c): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Jun 19 15:30:39 micah-rhelah-ltt NetworkManager[739]: <info>  [1529422239.3144] device (vethdb53473c): Activation: successful, device activated.
Jun 19 15:30:39 micah-rhelah-ltt dbus[681]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Jun 19 15:30:39 micah-rhelah-ltt systemd[1]: Starting Network Manager Script Dispatcher Service...
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.12DZKZ}
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: about to waitpid: 14204
Jun 19 15:30:39 micah-rhelah-ltt dbus[681]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Jun 19 15:30:39 micah-rhelah-ltt nm-dispatcher[14200]: req:1 'up' [vethdb53473c]: new request (5 scripts)
Jun 19 15:30:39 micah-rhelah-ltt systemd[1]: Started Network Manager Script Dispatcher Service.
Jun 19 15:30:39 micah-rhelah-ltt nm-dispatcher[14200]: req:1 'up' [vethdb53473c]: start running ordered scripts...
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: about to accept from console_socket_fd: 9
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: about to recvfd from connfd: 15
Jun 19 15:30:39 micah-rhelah-ltt systemd[1]: Unit iscsi.service cannot be reloaded because it is inactive.
Jun 19 15:30:39 micah-rhelah-ltt kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Jun 19 15:30:39 micah-rhelah-ltt oci-umount[14236]: umounthook <debug>: prestart container_id:0c02181a0380 rootfs:/var/lib/containers/storage/overlay/676fe2d3c25734977419a5fb7e5f4f6a57b409d00afc13cb943d8362ced14
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: console = {.name = '/dev/ptmx9 15:30:39 conmon: conmon <ninfo>: about to recvfd from connfd: 15
                                                '; .fd = 9}
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: container PID: 14213
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: attach sock path: /var/run/libpod/socket/0c02181a03802310a476ea0bc1bd8f8735642fe3dd9a558101200ccb26f4e252/attach
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/libpod/socket/0c02181a03802310a476ea0bc1bd8f8735642fe3dd9a558101200ccb26f4e252/attach}
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: ctl fifo path: /var/lib/containers/storage/overlay-containers/0c02181a03802310a476ea0bc1bd8f8735642fe3dd9a558101200ccb26f4e252/userdata/ctl
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: terminal_ctrl_fd: 16
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: Accepted connection 10
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: Got ctl message: 1 25 211
Jun 19 15:30:39 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: Message type: 1, Height: 25, Width: 211
Jun 19 15:30:46 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: Got ctl message: 1 51 211
Jun 19 15:30:46 micah-rhelah-ltt conmon[14203]: conmon <ninfo>: Message type: 1, Height: 51, Width: 211



I'm able to ping the same IP from the host just fine.

# ping -c 5 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=49 time=10.7 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=49 time=10.7 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=49 time=10.9 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=49 time=10.8 ms
64 bytes from 1.1.1.1: icmp_seq=5 ttl=49 time=10.9 ms

--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 10.762/10.852/10.933/0.114 ms


Additionally, I can work around the problem by using `--net=host`:

# date; podman run --net=host -it docker.io/alpine ping -c 5 1.1.1.1; date
Tue Jun 19 15:33:04 UTC 2018
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=49 time=11.017 ms
64 bytes from 1.1.1.1: seq=1 ttl=49 time=10.942 ms
64 bytes from 1.1.1.1: seq=2 ttl=49 time=10.869 ms
64 bytes from 1.1.1.1: seq=3 ttl=49 time=10.905 ms
64 bytes from 1.1.1.1: seq=4 ttl=49 time=10.823 ms

--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 10.823/10.911/11.017 ms
Tue Jun 19 15:33:09 UTC 2018

# journalctl -b --since 15:33:04 --until 15:33:09
-- Logs begin at Mon 2017-09-25 18:58:03 UTC, end at Tue 2018-06-19 15:33:09 UTC. --
Jun 19 15:33:04 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.JIZQKZ}
Jun 19 15:33:04 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: about to waitpid: 14339
Jun 19 15:33:04 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: about to accept from console_socket_fd: 9
Jun 19 15:33:04 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: about to recvfd from connfd: 15
Jun 19 15:33:05 micah-rhelah-ltt kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Jun 19 15:33:05 micah-rhelah-ltt oci-umount[14351]: umounthook <debug>: prestart container_id:242724ae195d rootfs:/var/lib/containers/storage/overlay/c1b9a4ef424694e972fbf6c1df0632fdf964564842832e20d52cbfd819da0
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: console = {.name = '/dev/ptmx9 15:33:04 conmon: conmon <ninfo>: about to recvfd from connfd: 15 
                                                '; .fd = 9}
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: container PID: 14345
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: attach sock path: /var/run/libpod/socket/242724ae195d138258ed8f602b52e4535a645fd86f94a9df33c8d1bd9238160f/attach
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: addr{sun_family=AF_UNIX, sun_path=/var/run/libpod/socket/242724ae195d138258ed8f602b52e4535a645fd86f94a9df33c8d1bd9238160f/attach}
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: ctl fifo path: /var/lib/containers/storage/overlay-containers/242724ae195d138258ed8f602b52e4535a645fd86f94a9df33c8d1bd9238160f/userdata/ctl
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: terminal_ctrl_fd: 16
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: Accepted connection 10
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: Got ctl message: 1 51 211
Jun 19 15:33:05 micah-rhelah-ltt conmon[14338]: conmon <ninfo>: Message type: 1, Height: 51, Width: 211

Comment 2 Micah Abbott 2018-06-19 15:45:11 UTC
> On a RHELAH 7.5.1-1 system, I'm able to get network access in a container when using `podman run`. 

That should say "I'm *unable* to get network access" obviously

Comment 3 Daniel Walsh 2018-06-19 16:44:48 UTC
podman version

Micah, have you tried building podman from GitHub to see if it works?

Comment 4 Micah Abbott 2018-06-19 17:04:16 UTC
Dan, I did build from git master; see below


RPM version 
--------------
# podman version
Version:       0.4.1
Go Version:    go1.9.2
OS/Arch:       linux/amd64


git master version
---------------------
# /srv/podman version
Version:       0.6.4-dev
Go Version:    go1.10.3
OS/Arch:       linux/amd64

# date; /srv/podman run  -it docker.io/alpine ping -c 5 1.1.1.1; date
Tue Jun 19 17:02:13 UTC 2018
PING 1.1.1.1 (1.1.1.1): 56 data bytes

--- 1.1.1.1 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
Tue Jun 19 17:02:27 UTC 2018

Comment 5 Micah Abbott 2018-06-19 17:06:25 UTC
Surprisingly, I'm able to get network access on regular RHEL Server:

# cat /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.5 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.5"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.5 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.5:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.5"

# rpm -q containernetworking-plugins podman runc
containernetworking-plugins-0.7.0-4.gitb51d327.el7.x86_64
podman-0.4.1-4.gitb51d327.el7.x86_64
runc-1.0.0-27.rc5.dev.git4bb1fe4.el7.x86_64

# podman version
Version:       0.4.1
Go Version:    go1.9.2
OS/Arch:       linux/amd64

# date; podman run -it docker.io/alpine ping -c 5 1.1.1.1; date                                                                                                                        
Tue Jun 19 13:04:59 EDT 2018
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=48 time=11.128 ms
64 bytes from 1.1.1.1: seq=1 ttl=48 time=11.122 ms
64 bytes from 1.1.1.1: seq=2 ttl=48 time=11.197 ms
64 bytes from 1.1.1.1: seq=3 ttl=48 time=11.114 ms
64 bytes from 1.1.1.1: seq=4 ttl=48 time=11.078 ms

--- 1.1.1.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 11.078/11.127/11.197 ms
Tue Jun 19 13:05:03 EDT 2018

Comment 6 Micah Abbott 2018-06-19 18:30:40 UTC
Dan suggested there might be an iptables/CNI problem, so I looked at the current state of the iptables rules on both hosts.  There are definitely differences, but I'm unsure whether any of them is a smoking gun.


RHELAH 7.5.1-1
----------------
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
ACCEPT     all  --  10.88.0.7            anywhere
DOCKER-ISOLATION  all  --  anywhere             anywhere
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere



RHEL 7 Server
--------------
# iptables -L                
Chain INPUT (policy ACCEPT)                              
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
INPUT_direct  all  --  anywhere             anywhere
INPUT_ZONES_SOURCE  all  --  anywhere             anywhere
INPUT_ZONES  all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere             ctstate INVALID
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
FORWARD_direct  all  --  anywhere             anywhere
FORWARD_IN_ZONES_SOURCE  all  --  anywhere             anywhere
FORWARD_IN_ZONES  all  --  anywhere             anywhere
FORWARD_OUT_ZONES_SOURCE  all  --  anywhere             anywhere
FORWARD_OUT_ZONES  all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere             ctstate INVALID
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
OUTPUT_direct  all  --  anywhere             anywhere

Chain FORWARD_IN_ZONES (1 references)
target     prot opt source               destination
FWDI_public  all  --  anywhere             anywhere            [goto]
FWDI_public  all  --  anywhere             anywhere            [goto]

Chain FORWARD_IN_ZONES_SOURCE (1 references)
target     prot opt source               destination

Chain FORWARD_OUT_ZONES (1 references)
target     prot opt source               destination
FWDO_public  all  --  anywhere             anywhere            [goto]
FWDO_public  all  --  anywhere             anywhere            [goto]

Chain FORWARD_OUT_ZONES_SOURCE (1 references)
target     prot opt source               destination

Chain FORWARD_direct (1 references)
target     prot opt source               destination

Chain FWDI_public (2 references)
target     prot opt source               destination
FWDI_public_log  all  --  anywhere             anywhere
FWDI_public_deny  all  --  anywhere             anywhere
FWDI_public_allow  all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere

Chain FWDI_public_allow (1 references)
target     prot opt source               destination

Chain FWDI_public_deny (1 references)
target     prot opt source               destination

Chain FWDI_public_log (1 references)
target     prot opt source               destination

Chain FWDO_public (2 references)
target     prot opt source               destination
FWDO_public_log  all  --  anywhere             anywhere
FWDO_public_deny  all  --  anywhere             anywhere
FWDO_public_allow  all  --  anywhere             anywhere

Chain FWDO_public_allow (1 references)
target     prot opt source               destination

Chain FWDO_public_deny (1 references)
target     prot opt source               destination

Chain FWDO_public_log (1 references)
target     prot opt source               destination

Chain INPUT_ZONES (1 references)
target     prot opt source               destination
IN_public  all  --  anywhere             anywhere            [goto]
IN_public  all  --  anywhere             anywhere            [goto]

Chain INPUT_ZONES_SOURCE (1 references)
target     prot opt source               destination

Chain INPUT_direct (1 references)
target     prot opt source               destination

Chain IN_public (2 references)
target     prot opt source               destination
IN_public_log  all  --  anywhere             anywhere
IN_public_deny  all  --  anywhere             anywhere
IN_public_allow  all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere             anywhere

Chain IN_public_allow (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh ctstate NEW

Chain IN_public_deny (1 references)
target     prot opt source               destination

Chain IN_public_log (1 references)
target     prot opt source               destination

Chain OUTPUT_direct (1 references)
target     prot opt source               destination
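One caveat when reading the dumps above (a diagnostic sketch, not part of the original report): plain `iptables -L` hides `-i`/`-o` interface matches, so ACCEPT rules that look identical in these listings may actually be scoped to different bridges (e.g. docker0 rather than the cni0 bridge seen in the journal output). A more revealing dump would be:

```shell
# Show FORWARD rules with interface matches and per-rule packet/byte
# counters, which plain `iptables -L` omits:
iptables -L FORWARD -n -v

# Or print the exact rule specs, including any -i/-o interface matches:
iptables -S FORWARD
```

If the RELATED,ESTABLISHED ACCEPT rules turn out to be scoped to docker0 only, that would explain why return traffic to containers on cni0 is dropped by the FORWARD policy.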

Comment 7 Micah Abbott 2018-06-19 19:37:02 UTC
Sadly, this is also affecting RHELAH 7.5.2

# rpm-ostree status
State: idle; auto updates disabled
Deployments:
● ostree://custom:rhel-atomic-host/7/x86_64/standard
                   Version: 7.5.2 (2018-06-09 02:40:55)
                    Commit: db4a302e874cdd9cc9517a63133cfdf05e23cb684faae166b444c74cf7c146e8
              GPGSignature: Valid signature by 567E347AD0044ADE55BA8A5F199E2F91FD431D51

  ostree://rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard
                   Version: 7.5.1 (2018-05-08 16:36:53)
                    Commit: c0211e0b703930dd0f0df8b9f5e731901fce8e15e00b3bc76d3cf00df44eb6e8
              GPGSignature: Valid signature by 567E347AD0044ADE55BA8A5F199E2F91FD431D51

# rpm -q containernetworking-plugins podman runc
containernetworking-plugins-0.7.0-101.el7.x86_64
podman-0.6.1-3.git3e0ff12.el7.x86_64
runc-1.0.0-27.rc5.dev.git4bb1fe4.el7.x86_64

# podman version
Version:       0.6.1
Go Version:    go1.9.2
OS/Arch:       linux/amd64

# podman run docker.io/alpine ping -c 5 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes

--- 1.1.1.1 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss

Comment 8 Micah Abbott 2018-06-19 20:50:35 UTC
Additional workaround if you don't want to use `--net=host`:

<baude> dcbw, if a guy was stuck with a binary that didn't have that, is there a simple iptables command he could run ?
<dcbw> baude: if you have the container's IP address you can:
<dcbw> iptables -t nat -A FORWARD -d <ipaddr> -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
<baude> k
<dcbw> or for the entire bridge, iptables -t nat -A FORWARD -o <cni bridge name> -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
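Note that the FORWARD chain lives in the filter table, not the nat table, so the `-t nat` in the IRC transcript above is likely a slip; as written those commands would fail. A corrected sketch (the container address 10.88.0.7 is taken from the FORWARD chain listing earlier in this bug; your container's IP will differ):

```shell
# FORWARD is a filter-table chain; allow return traffic to a single
# container IP (example address from the iptables output above):
iptables -A FORWARD -d 10.88.0.7 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Or cover the entire CNI bridge (cni0, per the journal output above):
iptables -A FORWARD -o cni0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```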

Comment 9 Micah Abbott 2018-06-20 13:04:07 UTC
@dcbw suggested that this PR would address this problem:

https://github.com/containernetworking/plugins/pull/75

Comment 10 Micah Abbott 2018-06-20 14:05:15 UTC
Removing the request for blocker:

- this isn't technically a regression
- there have been no customer cases about this
- there is a workaround available

Comment 11 Daniel Walsh 2019-01-10 20:40:27 UTC
Seems that PR is still languishing.

Comment 12 Dan Williams 2019-03-01 15:43:42 UTC
The latest extras-rhel-7.6 branch in dist-git includes the podman firewall workaround in version 1.0.1. I believe that fixes this bug.
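For context, the fix referenced earlier (containernetworking/plugins PR #75) added a `firewall` CNI plugin that installs the missing forwarding rules automatically. A hedged sketch of how it might appear in a podman CNI conflist (file name, network name, and subnet here follow podman's common defaults and may differ on a given system):

```json
{
  "cniVersion": "0.3.1",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "firewall" }
  ]
}
```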

