Bug 1258350 (docker_local_resolver) - Docker ignores local DNS resolver
Summary: Docker ignores local DNS resolver
Keywords:
Status: CLOSED EOL
Alias: docker_local_resolver
Product: Fedora
Classification: Fedora
Component: docker
Version: 23
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Antonio Murdaca
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: Default_Local_DNS_Resolver
 
Reported: 2015-08-31 07:16 UTC by Tomáš Hozza
Modified: 2016-12-20 14:31 UTC
CC List: 20 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-20 14:31:36 UTC
Type: Bug
Embargoed:
thozza: needinfo-


Attachments

Description Tomáš Hozza 2015-08-31 07:16:15 UTC
Description of problem:
Currently Docker ignores localhost addresses in /etc/resolv.conf. Docker upstream is planning to solve this in the future with some kind of DNS proxy service. There is a proposed solution using iptables rules.
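For illustration, such an iptables-based workaround could look roughly like the sketch below on the host (a sketch only, not the exact upstream proposal; it assumes the default docker0 bridge, a resolver listening on 127.0.0.1, and a host firewall that permits the traffic):

# as root, allow traffic from docker0 to be DNATed to the loopback address
sysctl -w net.ipv4.conf.docker0.route_localnet=1
# redirect DNS queries coming from containers to the local resolver
iptables -t nat -A PREROUTING -i docker0 -p udp --dport 53 -j DNAT --to-destination 127.0.0.1:53
iptables -t nat -A PREROUTING -i docker0 -p tcp --dport 53 -j DNAT --to-destination 127.0.0.1:53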

Actual results:
The localhost address from /etc/resolv.conf is ignored.

Expected results:
The local DNS resolver should be used by containers.

Additional info:
upstream ticket - https://github.com/docker/docker/issues/14627

Comment 1 Tomáš Hozza 2015-08-31 07:17:12 UTC
I assigned this bug to PJP right away, since he is driving these changes in Docker.

Comment 2 Daniel Walsh 2015-09-28 17:56:49 UTC
This issue seems to have died, or at least gone to a deep deep sleep.

Comment 3 Tomáš Hozza 2015-09-29 06:13:08 UTC
PJP, any news?

Comment 4 pjp 2015-10-01 05:11:32 UTC
  Hello Tomas, Dan,

(In reply to Tomas Hozza from comment #3)
> PJP, any news?

Sorry, I was super occupied at work and could not spend time on this. I will start on it again over the weekend.

Thank you.

Comment 5 Daniel Walsh 2015-10-28 17:45:12 UTC
Update?

Comment 6 Tomáš Hozza 2015-11-30 10:02:51 UTC
(In reply to Daniel Walsh from comment #5)
> Update?

We are restarting the change process for "Default DNS resolver" as well as the work on the necessary parts. Most of the discussion is happening upstream.

PJP, can you please determine whether we will need a downstream patch for F24, or is there anything usable upstream?

Comment 7 pjp 2015-12-01 06:18:52 UTC
(In reply to Tomas Hozza from comment #6)
> (In reply to Daniel Walsh from comment #5)
> PJP, can you please determine whether we will need a downstream patch for
> F24, or is there anything usable upstream?

  Yes, will do and update here at the earliest.

Sorry, I got caught up with a lot of things before. Thank you.

Comment 8 Daniel Walsh 2016-01-06 18:50:30 UTC
Docker is working on this also, I believe, or is that you guys?

Comment 9 Tomáš Hozza 2016-01-07 08:31:50 UTC
(In reply to Daniel Walsh from comment #8)
> Docker is working on this also, I believe, or is that you guys?

It's the Docker team. They wanted to implement a whole DNS proxy service instead of just addressing our one use case. Based on their comments, this should also be fixed by their proposed changes.

Some more info is here:
https://github.com/docker/libnetwork/pull/841

Comment 10 Daniel Walsh 2016-01-07 14:57:13 UTC
Yes, I was following that; I just wanted to confirm that it fixes this issue. I say we go with their solution and wait for it. Hopefully this will be in docker-1.10.

Comment 11 Tomáš Hozza 2016-01-26 15:08:18 UTC
Just a note that we discovered a problem caused by the fact that Docker ignores the local resolver.

If the local resolver has forward zones configured, e.g. for a specific domain reachable over a VPN, and the VPN-provided resolver has an internal view of that domain, then the internal names are not resolvable from within containers. So when you e.g. want to build a container using 'docker build' and the build references the internal domain, the build will fail.
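To illustrate (the domain name and VPN resolver address below are made up), the host's local resolver may carry a forward zone like this, which containers cannot benefit from because they never talk to the local resolver:

# unbound forward zone on the host (illustrative values)
forward-zone:
    name: "internal.example.com"
    forward-addr: 10.0.0.10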

I know that this is basically an implication of the situation in Docker, but I wanted to document this use case.

Comment 12 Daniel Walsh 2016-02-22 21:27:06 UTC
Tomas, does docker-1.10 fix this issue?

Comment 13 pjp 2016-02-23 03:30:33 UTC
(In reply to Daniel Walsh from comment #12)
> Tomas, does docker-1.10 fix this issue?

IIUC, the embedded DNS server in docker-1.10 still does not connect to a local resolver on the host. It requires the resolver on the host to listen on a non-localhost (non-127.0.0.1) interface and that address to be supplied with the --dns option. It forwards to the external resolver only those requests that it could not resolve itself.
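As a rough sketch of what that means in practice (assuming unbound as the local resolver, the default docker0 address 172.17.0.1, and an illustrative drop-in file name), the host-side configuration would be roughly:

# /etc/unbound/conf.d/docker.conf (illustrative path)
server:
    interface: 172.17.0.1
    access-control: 172.17.0.0/16 allow

plus --dns=172.17.0.1 added to OPTIONS in /etc/sysconfig/docker, followed by restarting both services.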

Comment 14 Nick Coghlan 2016-02-26 09:14:03 UTC
I started hitting this on Fedora 23, and updating to 1.10 didn't resolve it (although it did introduce a wrinkle: I needed to install docker-v1.10-migrator and run "v1.10-migrator-local -s devicemapper" to convert the local images to the new format).

My current host name resolution configuration:

$ cat /etc/resolv.conf
# Generated by NetworkManager
search redhat.com
nameserver 127.0.0.1

The docker0 bridge is on 172.17.0.1, so would it be possible to bind the local resolver to that, and tell docker to use it for external DNS resolution?

Comment 15 pjp 2016-02-29 05:07:24 UTC
Hello Nick,

(In reply to Nick Coghlan from comment #14)
> The docker0 bridge is on 172.17.0.1, so would it be possible to bind the
> local resolver to that, and tell docker to use it for external DNS
> resolution?

  I think that'd defeat the idea of having a local resolver. The steps below should help:

  -> https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Resolver#Docker

Please let us know if you face any issues. Thank you.

Comment 16 Nick Coghlan 2016-02-29 06:48:30 UTC
I went down a different path, which was to create a dedicated dnsmasq instance for Docker to use: http://stackoverflow.com/questions/35693117/giving-docker-containers-access-to-a-dnsmasq-local-dns-resolver-on-the-host/35693118

There are some aspects of my current configuration that I definitely don't like (mainly that I wasn't able to figure out a nice way of configuring firewalld, so I ended up dropping the network firewall between the host and containers entirely), but it does work.

If I've understood the way dnsmasq configures itself correctly, then the instance binding itself to docker0 should be passing queries to the resolver on lo, rather than directly to the external DNS servers.
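For reference, the dedicated instance boils down to a configuration along these lines (a sketch, not my exact setup):

# dnsmasq configuration for the Docker-facing instance (sketch)
interface=docker0
# do not bind the loopback, where the main resolver already listens
except-interface=lo
bind-interfaces
# upstream servers are taken from /etc/resolv.conf by default,
# which on this host points at the resolver on 127.0.0.1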

(Also setting this back to ASSIGNED, since 1.10 didn't fix it)

Comment 17 Tomáš Hozza 2016-02-29 08:27:18 UTC
(In reply to Nick Coghlan from comment #16)
> I went down a different path, which was to create a dedicated dnsmasq
> instance for Docker to use:
> http://stackoverflow.com/questions/35693117/giving-docker-containers-access-
> to-a-dnsmasq-local-dns-resolver-on-the-host/35693118

Thank you for the link. This is definitely something users can do to work around the problem. The same setup should also work with Unbound.

> If I've understood the way dnsmasq configures itself correctly, then the
> instance binding itself to docker0 should be passing queries to the resolver
> on lo, rather than directly to the external DNS servers.

It depends on your configuration, but dnsmasq reads /etc/resolv.conf by default, so it should use the local DNS resolver. Dnsmasq does not do DNSSEC validation by default, but that may not be something you care about.
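If DNSSEC validation is something you want, dnsmasq can be told to validate; a minimal sketch (the option names are dnsmasq's own, but the trust anchor file path may differ between distributions):

# in the dedicated instance's dnsmasq configuration
conf-file=/usr/share/dnsmasq/trust-anchors.conf
dnssec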

Comment 18 Nick Coghlan 2016-04-07 06:56:40 UTC
I've discovered another interesting problem with my setup: it doesn't play nicely with docker-compose.

Setting the composed services to "network_method: bridge" gets things working again, so I suspect it's a problem with the dedicated resolver not being present on the docker-compose created networks.
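For reference, the fragment I mean looks roughly like this (a sketch; the actual key in the v2 compose file format is network_mode, if I recall correctly):

# docker-compose.yml (sketch)
version: '2'
services:
  web:
    image: fedora:23
    network_mode: bridge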

Comment 19 Tomáš Hozza 2016-04-07 09:19:53 UTC
(In reply to Nick Coghlan from comment #18)
> Setting the composed services to "network_method: bridge" gets things
> working again, so I suspect it's a problem with the dedicated resolver not
> being present on the docker-compose created networks

Can you please explain what you mean by "dedicated resolver not being present on the docker-compose created network"? How does this work when there is no local resolver on the machine on which you run docker-compose? Do those machines directly use the resolvers from /etc/resolv.conf?

E.g. when you use libvirt, it runs its own dnsmasq instance on the network it creates, and this instance forwards all queries to the local resolver running on localhost. I would expect docker-compose to do something similar.

Comment 20 Nick Coghlan 2016-04-08 00:48:33 UTC
I haven't looked into the details of what docker-compose is getting wrong, but the behaviour I see is:

1. I have "OPTIONS='--selinux-enabled --log-driver=journald --dns=172.17.0.1'" configured in /etc/sysconfig/docker
2. This works correctly for containers started on the default Docker bridge network (including allowing lookup of services only available via VPN from the host)
3. For containers started by docker-compose in the default per-app network mode, /etc/resolv.conf had the DNS address as something like "127.0.0.11"
4. Name resolution in those containers didn't work at all (not even for finding other services in the compose)

I don't know how it works when there's no local resolver on the host - I don't have that configuration readily available.

Comment 21 Daniel Walsh 2016-06-03 13:37:19 UTC
So is this a docker-compose bug?

Comment 22 Michael Hampton 2016-06-03 19:52:31 UTC
Not sure why this bug was reassigned to docker-compose. It's pretty obviously a docker upstream issue.

If there's a problem with docker-compose that doesn't merely reproduce this bug, please open a new bug report.

Comment 23 Daniel Walsh 2016-08-19 20:10:44 UTC
Since the git pull request got merged upstream, does this fix the problem?

Comment 24 Daniel Walsh 2016-10-18 13:18:58 UTC
I am closing this bug as fixed in the current release. Reopen it if it still happens in docker-1.12.

Comment 25 Tomáš Hozza 2016-10-18 13:56:02 UTC
I will test this once the version is available in a stable Fedora release. I will reopen the bug in case it doesn't work with 1.12.

Comment 26 Fedora End Of Life 2016-11-24 12:25:03 UTC
This message is a reminder that Fedora 23 is nearing its end of life.
Approximately four weeks from now Fedora will stop maintaining
and issuing updates for Fedora 23. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as EOL if it remains open with a Fedora 'version'
of '23'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 23 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 27 Fedora End Of Life 2016-12-20 14:31:36 UTC
Fedora 23 changed to end-of-life (EOL) status on 2016-12-20. Fedora 23 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

