Bug 1268847 - ssh fails to connect to VPN hosts - hangs at "expecting SSH2_MSG_KEX_ECDH_REPLY"
Summary: ssh fails to connect to VPN hosts - hangs at "expecting SSH2_MSG_KEX_ECDH_REPLY"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: openconnect
Version: 24
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: David Woodhouse
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-05 12:34 UTC by udayb
Modified: 2016-12-19 07:52 UTC
CC List: 8 users

Fixed In Version: openconnect-7.08-1.fc25 openconnect-7.08-1.fc24
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-19 06:02:47 UTC
Type: Bug
Embargoed:



Description udayb 2015-10-05 12:34:12 UTC
Description of problem:

ssh fails to connect to hosts on the VPN; it hangs at "debug1: expecting SSH2_MSG_KEX_ECDH_REPLY" in the -vvv debug output. Lowering the VPN interface's MTU to 1200 fixes the problem. This happens with every server on the VPN that I tried (running different OSes - Ubuntu, CentOS), so it is not server-specific. I started noticing this after a large package update following a long gap; I never had this problem on this VPN before.
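
A minimal sketch of the workaround, assuming the VPN interface is named vpn0 (with vpnc it may be tun0 or similar - check 'ip link' for the actual name):

sudo ip link set mtu 1200 dev vpn0    # lower the VPN interface MTU as a workaround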



Version-Release number of selected component (if applicable):
NetworkManager-vpnc-1.0.2-1.fc22.x86_64
openssh-clients-6.9p1-7.fc22.x86_64

How reproducible:
Always


Steps to Reproduce:
1. Connect to the VPN.
2. Run: ssh -vvv <any host on the VPN>

Actual results:

Hangs at debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
After a long time, the connection is closed.

Expected results:
Should have connected. 

Additional info:

Comment 1 Jirka Klimes 2015-10-06 11:29:38 UTC
Can you try NetworkManager-1.0.6-6.fc22? It fixes a VPN MTU bug - bug 1244547.
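
For reference, a sketch of how to check the installed build and pull in the newer one (the exact commands are an assumption about the reporter's setup; the fix should arrive through the regular updates repository):

rpm -q NetworkManager NetworkManager-vpnc    # check the currently installed versions
sudo dnf update NetworkManager               # should bring in 1.0.6-6.fc22 or later once published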

Comment 2 udayb 2015-10-06 13:05:13 UTC
Works fine with NetworkManager-1.0.6-6.fc22. The MTU is set to 1406 (it was 1500 earlier).

Comment 3 Fedora Admin XMLRPC Client 2015-10-14 14:49:02 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 4 Marco Driusso 2015-11-12 19:08:54 UTC
I'm facing the same bug, even with NetworkManager-1.0.6-7.fc22. In my case an MTU of 1406 is not enough for a successful connection; I have to set it to 1200. Ubuntu has a similar bug: https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1254085.

Some additional info:
VPN type: openconnect
NetworkManager-openconnect.x86_64 1.0.2-1.fc22 
openconnect.x86_64 7.06-1.fc22

Comment 5 Gabriele Turchi 2016-03-01 17:22:50 UTC
I have the same problem with NetworkManager-1.0.10-2.fc22.x86_64 and vpnc. I need to set the MTU to 1340; the default is 1412.

I suspect it would be better to allow the user to set this value by hand...
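
In the meantime, one way to set it by hand automatically is a NetworkManager dispatcher script; the sketch below assumes the vpn-up action and the VPN_IP_IFACE variable described in the NetworkManager dispatcher documentation, and the 1340 value from this comment:

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-vpn-mtu  (root-owned, executable)
# $2 is the dispatcher action; VPN_IP_IFACE names the VPN interface on vpn-up
if [ "$2" = "vpn-up" ] && [ -n "$VPN_IP_IFACE" ]; then
    ip link set mtu 1340 dev "$VPN_IP_IFACE"
fi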

Comment 6 Fedora End Of Life 2016-07-19 19:19:31 UTC
Fedora 22 changed to end-of-life (EOL) status on 2016-07-19. Fedora 22 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.

Comment 7 Marco Driusso 2016-08-23 14:28:27 UTC
I'm still experiencing the same bug in Fedora 24 (and I also had it in Fedora 23). It still appears to be unresolved in Ubuntu as well (see the link in comment 4, where workarounds are also proposed). Again, the workaround I use is lowering the VPN MTU to 1200 with:
# ip li set mtu 1200 dev vpn0

Info:
NetworkManager.x86_64                 1:1.2.4-2.fc24    
NetworkManager-openconnect.x86_64     1.2.2-1.fc24
openconnect.x86_64                    7.07-2.fc24

Comment 8 David Woodhouse 2016-08-23 15:30:57 UTC
It would be useful to know precisely what the problem is. Is OpenConnect not negotiating the correct MTU for its connection to your server... or is your internal network broken, with a lower MTU merely working around its brokenness?

After you SSH into the remote machine, can you put your MTU back *up* again, then attempt 'ping -M do' with different packet sizes in both directions? Does your VPN server correctly send ICMP 'needs fragmentation' responses when you send a packet that's too large? If not, start by finding the idiot sysadmin and nailing them to the wall until they stop blocking ICMP....
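
For example, a probe loop along these lines could be used (the host variable and size list are placeholders; ping adds 28 bytes of IP and ICMP headers on top of -s):

remote=<host-on-the-VPN>                      # placeholder
for size in 1172 1300 1358 1372 1472; do      # probe with DF set
    ping -M do -c 1 -s "$size" "$remote" && echo "size $size fits"
done

The same loop should then be run from the remote side towards the local VPN address.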

Comment 9 Marco Driusso 2016-08-24 18:28:13 UTC
Ok, some additional info.

First, I tried connecting to the VPN with AnyConnect from Windows; there the MTU is also 1406, i.e. the same value openconnect sets and that causes the ssh problem.

I tried what was asked in comment 8, but unfortunately it seems that ICMP is blocked, since I don't get any answer from the VPN server. Anyway, the results are below:

1) First I connect to the VPN through openconnect (NetworkManager), which gives:

[me@local ~]$ ip addr list
...
8: vpn0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1406 qdisc fq_codel state UP group default qlen 500
    link/none 
    inet 172.30.224.1/24 brd 172.30.224.255 scope global vpn0
       valid_lft forever preferred_lft forever
    inet6 fe80::bdc:9f50:b231:19bf/64 scope link flags 800 
       valid_lft forever preferred_lft forever

2) Then I lower the MTU so that I can ssh to the remote node. I found that the maximum MTU at which ssh to the remote node still works is 1386 (I don't know where this value comes from). Hence, after:

[me@local ~]$  sudo ip li set mtu 1386 dev vpn0

I have:

[me@local ~]$ ip addr list
...
8: vpn0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1386 qdisc fq_codel state UP group default qlen 500
    link/none 
    inet 172.30.224.1/24 brd 172.30.224.255 scope global vpn0
       valid_lft forever preferred_lft forever
    inet6 fe80::bdc:9f50:b231:19bf/64 scope link flags 800 
       valid_lft forever preferred_lft forever

3) Now I ssh to the remote node, and raise the MTU again so that the situation is the same as in point 1).

4) Then I ping from local (172.30.224.1) to remote (172.30.50.172), obtaining:

[me@local ~]$ ping 172.30.50.172 -c 3 -M do -s 1358
PING 172.30.50.172 (172.30.50.172) 1358(1386) bytes of data.
1366 bytes from 172.30.50.172: icmp_seq=1 ttl=63 time=70.2 ms
1366 bytes from 172.30.50.172: icmp_seq=2 ttl=63 time=70.5 ms
1366 bytes from 172.30.50.172: icmp_seq=3 ttl=63 time=73.4 ms
--- 172.30.50.172 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 70.233/71.422/73.494/1.470 ms

[me@local ~]$ ping 172.30.50.172 -c 3 -M do -s 1359
PING 172.30.50.172 (172.30.50.172) 1359(1387) bytes of data.
--- 172.30.50.172 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

5) Finally, I ping from remote (172.30.50.172) to local (172.30.224.1), obtaining:

[me@remote ~]$ ping 172.30.224.1 -c 3 -M do -s 1358
PING 172.30.224.1 (172.30.224.1) 1358(1386) bytes of data.
1366 bytes from 172.30.224.1: icmp_req=1 ttl=63 time=71.1 ms
1366 bytes from 172.30.224.1: icmp_req=2 ttl=63 time=70.2 ms
1366 bytes from 172.30.224.1: icmp_req=3 ttl=63 time=121 ms
--- 172.30.224.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 70.259/87.630/121.472/23.933 ms

[me@remote ~]$ ping 172.30.224.1 -c 3 -M do -s 1359
PING 172.30.224.1 (172.30.224.1) 1359(1387) bytes of data.
--- 172.30.224.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2014ms
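
For what it's worth, the sizes in these runs line up exactly with the 1386 figure from point 2):

1358 bytes ICMP payload + 8 bytes ICMP header + 20 bytes IPv4 header = 1386 bytes on the wire -> fits the MTU, delivered
1359 + 8 + 20 = 1387 bytes -> one byte over the MTU, and the probe is silently dropped rather than producing an ICMP "fragmentation needed" error, consistent with ICMP being filtered somewhere on the path.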

Comment 10 David Woodhouse 2016-12-14 12:04:26 UTC
OpenConnect 7.08 has automatic MTU detection. Does it fix this?

Comment 11 Fedora Update System 2016-12-14 12:57:36 UTC
openconnect-7.08-1.fc25 has been submitted as an update to Fedora 25. https://bodhi.fedoraproject.org/updates/FEDORA-2016-236fdd6917

Comment 12 Fedora Update System 2016-12-14 12:58:35 UTC
openconnect-7.08-1.fc24 has been submitted as an update to Fedora 24. https://bodhi.fedoraproject.org/updates/FEDORA-2016-4e680d77fa

Comment 13 Fedora Update System 2016-12-15 05:04:03 UTC
openconnect-7.08-1.fc24 has been pushed to the Fedora 24 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-4e680d77fa
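
On Fedora 24 that boils down to something like the following (a sketch; updates-testing is the standard testing repository described on the wiki page above):

sudo dnf --enablerepo=updates-testing upgrade openconnect
rpm -q openconnect    # should now report openconnect-7.08-1.fc24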

Comment 14 Fedora Update System 2016-12-15 05:07:40 UTC
openconnect-7.08-1.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-236fdd6917

Comment 15 Marco Driusso 2016-12-15 20:24:02 UTC
(In reply to David Woodhouse from comment #10)
> OpenConnect 7.08 has automatic MTU detection. Does it fix this?

Yes, openconnect-7.08-1.fc24 fixes it, many thanks!

Comment 16 Fedora Update System 2016-12-19 06:02:47 UTC
openconnect-7.08-1.fc25 has been pushed to the Fedora 25 stable repository. If problems still persist, please make note of it in this bug report.

Comment 17 Fedora Update System 2016-12-19 07:52:52 UTC
openconnect-7.08-1.fc24 has been pushed to the Fedora 24 stable repository. If problems still persist, please make note of it in this bug report.

