Bug 1711054 - Ovirt not setting network.remote-dio on new volumes leaves them unable to be added as storage domains
Summary: Ovirt not setting network.remote-dio on new volumes leaves them unable to be added as storage domains
Keywords:
Status: CLOSED DUPLICATE of bug 1592916
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Gluster
Version: 4.3.3.7
Hardware: x86_64
OS: Linux
unspecified
medium
Target Milestone: ---
Target Release: ---
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Duplicates: 1711060
Depends On:
Blocks:
 
Reported: 2019-05-16 20:00 UTC by Darrell
Modified: 2019-06-17 07:25 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-17 07:25:09 UTC
oVirt Team: Gluster
Embargoed:


Attachments
part of the vdsm log on 4.3.4 RC1 (6.66 KB, text/plain)
2019-05-17 08:12 UTC, Strahil Nikolov
no flags
Gluster mount log created during the failure of the creation (10.03 KB, text/plain)
2019-05-17 08:13 UTC, Strahil Nikolov
no flags

Description Darrell 2019-05-16 20:00:26 UTC
Description of problem:
Creating a new storage domain on a gluster volume fails.

Version-Release number of selected component (if applicable):
Ovirt 4.3.3.7
Gluster 6.1

How reproducible:
100%

Steps to Reproduce:
1. Create gluster volume (web gui or CLI)
2. Attempt to add the new volume as a storage domain

Actual results:
Fails due to 
ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110)

Expected results:
Storage domain successfully created

Additional info:
From troubleshooting with Nir on ovirt-users, we isolated the problem to network.remote-dio being disabled on the new gluster volume. Setting network.remote-dio to enable corrected the problem and allowed the storage domain to be created. Disabled appears to be the default for that option, in Gluster 6.1 at least (I don't have other versions available to test).
I note that this setting is part of the gluster virt group settings, but apparently it isn't set by ovirt during 'Optimize for Virt Store'. Perhaps 'Optimize for Virt Store' should apply the gluster group instead of (apparently) using hard-coded settings?
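For reference, the workaround and the suggested fix correspond to gluster CLI commands along these lines (a sketch; <VOLNAME> is a placeholder for the affected volume):

# the manual workaround confirmed in this report
gluster volume set <VOLNAME> network.remote-dio enable

# applying the predefined virt group profile, as suggested above
gluster volume set <VOLNAME> group virt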

Comment 2 Strahil Nikolov 2019-05-16 21:00:58 UTC
*** Bug 1711060 has been marked as a duplicate of this bug. ***

Comment 3 Strahil Nikolov 2019-05-16 21:03:42 UTC
The same behaviour can be seen on 4.3.4 RC1 with Gluster 6.1.
'Optimize for virt store' did not set the option, which prevented using the volume as gluster-based storage.
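A quick way to verify what the optimization actually applied (a sketch; the volume name is a placeholder):

gluster volume get <VOLNAME> network.remote-dio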

Comment 4 Krutika Dhananjay 2019-05-17 06:21:58 UTC
1. Could you provide the output of `gluster volume info $VOLNAME` for the affected volume?

2. Could you also share the brick and mount logs of the affected volume from a failed run of `dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct`? If you need to toggle the remote-dio value just to hit the failure, please do that and share the logs from that run.



-Krutika

Comment 5 Strahil Nikolov 2019-05-17 08:10:07 UTC
Let's keep the communication in BZ 1711060.

Comment 6 Strahil Nikolov 2019-05-17 08:10:52 UTC
Ignore the last comment.

Comment 7 Strahil Nikolov 2019-05-17 08:12:33 UTC
Created attachment 1569959 [details]
part of the vdsm log on 4.3.4 RC1

Comment 8 Strahil Nikolov 2019-05-17 08:13:06 UTC
Created attachment 1569960 [details]
Gluster mount log created during the failure of the creation

Comment 9 Strahil Nikolov 2019-05-17 08:23:41 UTC
New volume info at that time:

Volume Name: data_fast4
Type: Replicate
Volume ID: fe9144fa-2fa2-4fde-b17c-0b41308b205f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast4/data_fast4
Brick2: gluster2:/gluster_bricks/data_fast4/data_fast4
Brick3: ovirt3:/gluster_bricks/data_fast4/data_fast4 (arbiter)
Options Reconfigured:
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.enable-shared-storage: enable

I set remote-dio to on in order to create the storage domain.

The strange thing is that setting the gluster volume back to remote-dio off and mounting it on '/mnt' doesn't reproduce the issue:

[root@ovirt1 mnt]#  dd if=/dev/zero of=$(pwd)/test bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00368095 s, 1.1 MB/s
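
For reference, the toggle-and-retest sequence described above corresponds roughly to the following (a sketch using the volume name from this comment and an arbitrary mount point):

gluster volume set data_fast4 network.remote-dio off
mount -t glusterfs gluster1:/data_fast4 /mnt
dd if=/dev/zero of=/mnt/test bs=4096 count=1 oflag=direct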



Here is a snippet from the vdsm log for another fresh gluster volume, where the failing command is a little bit different:

2019-05-16 10:15:21,175+0300 INFO  (jsonrpc/2) [storage.StorageDomain] sdUUID=591d9b61-5c7d-4388-a6b7-ab03181dff8a domainName=data_fast2 remotePath=gluster1:/data_fast2 domClass=1, block_size=512, alignment=1048576 (nfsSD:86)
2019-05-16 10:15:21,195+0300 INFO  (jsonrpc/2) [IOProcessClient] (591d9b61-5c7d-4388-a6b7-ab03181dff8a) Starting client (__init__:308)
2019-05-16 10:15:21,207+0300 INFO  (ioprocess/16362) [IOProcess] (591d9b61-5c7d-4388-a6b7-ab03181dff8a) Starting ioprocess (__init__:434)
2019-05-16 10:15:21,216+0300 INFO  (jsonrpc/2) [storage.StorageDomain] Creating domain metadata directory u'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md' (fileSD:450)
2019-05-16 10:15:21,253+0300 INFO  (jsonrpc/2) [storage.xlease] Formatting index for lockspace u'591d9b61-5c7d-4388-a6b7-ab03181dff8a' (version=1) (xlease:673)
2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n" from=::ffff:192.168.1.2,43864, flow_id=4a54578a, task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in createStorageDomain
    storageType, domVersion, block_size, alignment)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in create
    block_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in _prepareMetadata
    cls.format_external_leases(sdUUID, xleases_path)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in format_external_leases
    xlease.format_index(lockspace, backend)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in format_index
    index.dump(file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in dump
    file.pwrite(INDEX_BASE, self._buf)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in pwrite
    self._run(args, data=buf[:])
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1093, in _run
    raise cmdutils.Error(args, rc, "[suppressed]", err)
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [storage.TaskManager.Task] (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') aborting: Task is aborted: u'Command [\'/usr/bin/dd\', \'iflag=fullblock\', u\'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\', \'oflag=direct,seek_bytes\', \'seek=1048576\', \'bs=256512\', \'count=1\', \'conv=notrunc,nocreat,fsync\'] failed with rc=1 out=\'[suppressed]\' err="/usr/bin/dd: error writing \'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\': Invalid argument\\n1+0 records in\\n0+0 records out\\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\\n"' - code 100 (task:1181)
2019-05-16 10:15:21,297+0300 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n" (dispatcher:87)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
    return m(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in prepare
    raise self.error
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n"

Comment 10 Krutika Dhananjay 2019-05-17 08:44:54 UTC
Not sure I understand. Are you saying turning remote-dio off is no longer causing IO to fail? Have you tried toggling it multiple times to see if it is stable?

Let's wait for Darrell also to get back.

Also, I'd need the brick logs for the failure. They'd be under /var/log/glusterfs/bricks.

-Krutika

Comment 11 Strahil Nikolov 2019-05-17 09:02:37 UTC
It seems so.
I will try to create a fresh volume via the CLI and then provide the logs.
My system is 4.3.4 RC1, but gluster has not been touched between 4.3.3.7 and 4.3.4 RC1.

Let's see Darrell's output...

Comment 12 Darrell 2019-05-17 16:11:52 UTC
Redid the experiment, as I'd already destroyed the volume from the first test, and reconfirmed the same results: created the volume in the web GUI, tried to add a storage domain, and it failed with the same error (as expected). Note that the DIRECT_IO_TEST file is created, but empty and owned by root. The same happens for dd tests run on the volume mounted on /mnt. These logs are from the engine's attempt to create the storage domain.

Root ownership of the DIRECT_IO_TEST file was important, as I had to remove it before being able to create the domain after enabling network.remote-dio, just to confirm that was the only change needed. Expected, perhaps, but it would be good if the engine had cleaned it up when the original test failed.
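
For reference, the manual fix described here boils down to something like the following (a sketch; the mount point path is illustrative):

gluster volume set test network.remote-dio enable
rm /path/to/mountpoint/__DIRECT_IO_TEST__    # clear the stale, root-owned file left by the failed attempt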

[root@boneyard telsin]# gluster vol info test
 
Volume Name: test
Type: Replicate
Volume ID: c4a22ffc-b328-4d5c-a63f-5adab1e9a93e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.50.3.11:/v0/bricks/test
Brick2: 10.50.3.10:/v0/bricks/test
Brick3: 10.50.3.12:/v0/bricks/test
Options Reconfigured:
network.ping-timeout: 30
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.server-quorum-ratio: 50

vdsm.log:
2019-05-17 10:48:02,787-0500 INFO  (jsonrpc/5) [vdsm.api] START connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.10:10.50.3.12', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'10.50.3.11:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.100.90.5,44732, flow_id=d0bc00dd-b181-4636-afd0-4064283195d7, task_id=0ef787fa-d714-4962-a6e8-95ac45b59b59 (api:48)
2019-05-17 10:48:02,815-0500 INFO  (jsonrpc/2) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': '', 'pool_status': 'connected', 'lver': 41L, 'domains': u'3747c20e-34ba-44ac-9476-61818f88a269:Active,0dce69a7-40c2-4860-85db-4f3a5526bc11:Active', 'master_uuid': u'0dce69a7-40c2-4860-85db-4f3a5526bc11', 'version': '4', 'spm_id': 2, 'type': 'GLUSTERFS', 'master_ver': 134}, 'dominfo': {u'3747c20e-34ba-44ac-9476-61818f88a269': {'status': u'Active', 'diskfree': '398593884160', 'isoprefix': '', 'alerts': [], 'disktotal': '410640973824', 'version': 4}, u'0dce69a7-40c2-4860-85db-4f3a5526bc11': {'status': u'Active', 'diskfree': '398535950336', 'isoprefix': '', 'alerts': [], 'disktotal': '416429375488', 'version': 4}}} from=::ffff:10.100.90.5,43830, task_id=682d746d-68d2-41c7-91c5-3f0927e39459 (api:54)
2019-05-17 10:48:02,815-0500 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.05 seconds (__init__:312)
2019-05-17 10:48:02,939-0500 INFO  (jsonrpc/5) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/glusterSD/10.50.3.11:_test' (storageServer:168)
2019-05-17 10:48:02,940-0500 INFO  (jsonrpc/5) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/glusterSD/10.50.3.11:_test mode: None (fileUtils:199)
2019-05-17 10:48:02,940-0500 WARN  (jsonrpc/5) [storage.StorageServer.MountConnection] Using user specified backup-volfile-servers option (storageServer:275)
2019-05-17 10:48:02,940-0500 INFO  (jsonrpc/5) [storage.Mount] mounting 10.50.3.11:/test at /rhev/data-center/mnt/glusterSD/10.50.3.11:_test (mount:204)
2019-05-17 10:48:03,089-0500 INFO  (jsonrpc/5) [IOProcessClient] (Global) Starting client (__init__:308)
2019-05-17 10:48:03,096-0500 INFO  (ioprocess/12048) [IOProcess] (Global) Starting ioprocess (__init__:434)
2019-05-17 10:48:03,114-0500 INFO  (itmap/0) [IOProcessClient] (/glusterSD/10.50.3.11:_test) Starting client (__init__:308)
2019-05-17 10:48:03,121-0500 INFO  (itmap/1) [IOProcessClient] (/glusterSD/dev-san:_gv0) Starting client (__init__:308)
2019-05-17 10:48:03,123-0500 INFO  (ioprocess/12097) [IOProcess] (/glusterSD/10.50.3.11:_test) Starting ioprocess (__init__:434)
2019-05-17 10:48:03,128-0500 INFO  (itmap/2) [IOProcessClient] (/glusterSD/dev-san:_gvOvirt) Starting client (__init__:308)
2019-05-17 10:48:03,131-0500 INFO  (ioprocess/12105) [IOProcess] (/glusterSD/dev-san:_gv0) Starting ioprocess (__init__:434)
2019-05-17 10:48:03,136-0500 INFO  (ioprocess/12111) [IOProcess] (/glusterSD/dev-san:_gvOvirt) Starting ioprocess (__init__:434)
2019-05-17 10:48:03,139-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]} from=::ffff:10.100.90.5,44732, flow_id=d0bc00dd-b181-4636-afd0-4064283195d7, task_id=0ef787fa-d714-4962-a6e8-95ac45b59b59 (api:54)
2019-05-17 10:48:03,139-0500 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.35 seconds (__init__:312)
2019-05-17 10:48:03,883-0500 INFO  (jsonrpc/1) [vdsm.api] START connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.10:10.50.3.12', u'id': u'253295f9-e5a2-4864-992e-7dc6f54b2222', u'connection': u'10.50.3.11:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.100.90.5,44732, flow_id=2b3475ad, task_id=c121c2a0-05ec-42d2-aed7-13bfc4d485e4 (api:48)
2019-05-17 10:48:03,887-0500 INFO  (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'253295f9-e5a2-4864-992e-7dc6f54b2222'}]} from=::ffff:10.100.90.5,44732, flow_id=2b3475ad, task_id=c121c2a0-05ec-42d2-aed7-13bfc4d485e4 (api:54)
2019-05-17 10:48:03,888-0500 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.01 seconds (__init__:312)
2019-05-17 10:48:03,892-0500 INFO  (jsonrpc/7) [vdsm.api] START createStorageDomain(storageType=7, sdUUID=u'f8222c62-65d0-4257-913b-b8309a195e97', domainName=u'gTest', typeSpecificArg=u'10.50.3.11:/test', domClass=1, domVersion=u'4', block_size=512, max_hosts=250, options=None) from=::ffff:10.100.90.5,44732, flow_id=2b3475ad, task_id=e1ac11d3-f19e-4e32-a210-9cd88742b8b8 (api:48)
2019-05-17 10:48:04,205-0500 WARN  (jsonrpc/7) [storage.LVM] Reloading VGs failed (vgs=[u'f8222c62-65d0-4257-913b-b8309a195e97'] rc=5 out=[] err=['  Volume group "f8222c62-65d0-4257-913b-b8309a195e97" not found', '  Cannot process volume group f8222c62-65d0-4257-913b-b8309a195e97']) (lvm:442)
2019-05-17 10:48:04,224-0500 INFO  (periodic/31) [vdsm.api] START repoStats(domains=()) from=internal, task_id=c65982d4-6a9d-4eaf-a7cb-34f6b84e44d4 (api:48)
2019-05-17 10:48:04,225-0500 INFO  (periodic/31) [vdsm.api] FINISH repoStats return={u'3747c20e-34ba-44ac-9476-61818f88a269': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00101691', 'lastCheck': '6.1', 'valid': True}, u'0dce69a7-40c2-4860-85db-4f3a5526bc11': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00131841', 'lastCheck': '5.7', 'valid': True}} from=internal, task_id=c65982d4-6a9d-4eaf-a7cb-34f6b84e44d4 (api:54)
2019-05-17 10:48:04,238-0500 INFO  (jsonrpc/7) [storage.StorageDomain] sdUUID=f8222c62-65d0-4257-913b-b8309a195e97 domainName=gTest remotePath=10.50.3.11:/test domClass=1, block_size=512, alignment=1048576 (nfsSD:86)
2019-05-17 10:48:04,254-0500 INFO  (jsonrpc/7) [IOProcessClient] (f8222c62-65d0-4257-913b-b8309a195e97) Starting client (__init__:308)
2019-05-17 10:48:04,261-0500 INFO  (ioprocess/12146) [IOProcess] (f8222c62-65d0-4257-913b-b8309a195e97) Starting ioprocess (__init__:434)
2019-05-17 10:48:04,306-0500 ERROR (jsonrpc/7) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110)
2019-05-17 10:48:04,306-0500 INFO  (jsonrpc/7) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=2b3475ad, task_id=e1ac11d3-f19e-4e32-a210-9cd88742b8b8 (api:52)
2019-05-17 10:48:04,306-0500 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='e1ac11d3-f19e-4e32-a210-9cd88742b8b8') Unexpected error (task:875)
2019-05-17 10:48:04,306-0500 INFO  (jsonrpc/7) [storage.TaskManager.Task] (Task='e1ac11d3-f19e-4e32-a210-9cd88742b8b8') aborting: Task is aborted: 'Storage Domain target is unsupported: ()' - code 399 (task:1181)
2019-05-17 10:48:04,306-0500 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH createStorageDomain error=Storage Domain target is unsupported: () (dispatcher:83)
2019-05-17 10:48:04,306-0500 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 399) in 0.42 seconds (__init__:312)
2019-05-17 10:48:05,106-0500 INFO  (jsonrpc/3) [vdsm.api] START disconnectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=10.50.3.10:10.50.3.12', u'id': u'253295f9-e5a2-4864-992e-7dc6f54b2222', u'connection': u'10.50.3.11:/test', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.100.90.5,44732, flow_id=f7709310-0b0d-4fce-b01c-4a42dd50072c, task_id=6245c346-7bfa-48a9-9edf-8532ef491347 (api:48)
2019-05-17 10:48:05,107-0500 INFO  (jsonrpc/3) [storage.Mount] unmounting /rhev/data-center/mnt/glusterSD/10.50.3.11:_test (mount:212)
2019-05-17 10:48:05,313-0500 INFO  (jsonrpc/3) [vdsm.api] FINISH disconnectStorageServer return={'statuslist': [{'status': 0, 'id': u'253295f9-e5a2-4864-992e-7dc6f54b2222'}]} from=::ffff:10.100.90.5,44732, flow_id=f7709310-0b0d-4fce-b01c-4a42dd50072c, task_id=6245c346-7bfa-48a9-9edf-8532ef491347 (api:54)
2019-05-17 10:48:05,314-0500 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer succeeded in 0.20 seconds (__init__:312)

brick logs:
[2019-05-17 15:47:16.857040] I [MSGID: 100030] [glusterfsd.c:2849:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.1 (args: /usr/sbin/glusterfsd -s 10.50.3.11 --volfile-id test.10.50.3.11.v0-bricks-test -p /var/run/gluster/vols/test/10.50.3.11-v0-bricks-test.pid -S /var/run/gluster/35185988e170e6ed.socket --brick-name /v0/bricks/test -l /var/log/glusterfs/bricks/v0-bricks-test.log --xlator-option *-posix.glusterd-uuid=7643ebf9-7661-4b9b-935b-93fa4c85ab6f --process-name brick --brick-port 49152 --xlator-option test-server.listen-port=49152)
[2019-05-17 15:47:16.857723] I [glusterfsd.c:2556:daemonize] 0-glusterfs: Pid of current running process is 11027
[2019-05-17 15:47:16.862835] I [socket.c:902:__socket_server_bind] 0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2019-05-17 15:47:16.869909] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
[2019-05-17 15:47:16.869947] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-05-17 15:47:16.874653] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2019-05-17 15:47:16.874700] I [MSGID: 101190] [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
[2019-05-17 15:47:16.874928] I [rpcsvc.c:2698:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2019-05-17 15:47:16.876961] I [socket.c:902:__socket_server_bind] 0-socket.test-changelog: closing (AF_UNIX) reuse check socket 14
[2019-05-17 15:47:16.877172] I [trash.c:2450:init] 0-test-trash: no option specified for 'eliminate', using NULL
Final graph:
+------------------------------------------------------------------------------+
  1: volume test-posix
  2:     type storage/posix
  3:     option glusterd-uuid 7643ebf9-7661-4b9b-935b-93fa4c85ab6f
  4:     option directory /v0/bricks/test
  5:     option volume-id c4a22ffc-b328-4d5c-a63f-5adab1e9a93e
  6:     option shared-brick-count 1
  7:     option brick-uid 36
  8:     option brick-gid 36
  9: end-volume
 10:  
 11: volume test-trash
 12:     type features/trash
 13:     option trash-dir .trashcan
 14:     option brick-path /v0/bricks/test
 15:     option trash-internal-op off
 16:     subvolumes test-posix
 17: end-volume
 18:  
 19: volume test-changelog
 20:     type features/changelog
 21:     option changelog-brick /v0/bricks/test
 22:     option changelog-dir /v0/bricks/test/.glusterfs/changelogs
 23:     option changelog-barrier-timeout 120
 24:     subvolumes test-trash
 25: end-volume
 26:  
 27: volume test-bitrot-stub
 28:     type features/bitrot-stub
 29:     option export /v0/bricks/test
 30:     option bitrot disable
 31:     subvolumes test-changelog
 32: end-volume
 33:  
 34: volume test-access-control
 35:     type features/access-control
 36:     subvolumes test-bitrot-stub
 37: end-volume
 38:  
 39: volume test-locks
 40:     type features/locks
 41:     option enforce-mandatory-lock off
 42:     subvolumes test-access-control
 43: end-volume
 44:  
 45: volume test-worm
 46:     type features/worm
 47:     option worm off
 48:     option worm-file-level off
 49:     option worm-files-deletable on
 50:     subvolumes test-locks
 51: end-volume
 52:  
 53: volume test-read-only
 54:     type features/read-only
 55:     option read-only off
 56:     subvolumes test-worm
 57: end-volume
 58:  
 59: volume test-leases
 60:     type features/leases
 61:     option leases off
 62:     subvolumes test-read-only
 63: end-volume
 64:  
 65: volume test-upcall
 66:     type features/upcall
 67:     option cache-invalidation off
 68:     subvolumes test-leases
 69: end-volume
 70:  
 71: volume test-io-threads
 72:     type performance/io-threads
 73:     option low-prio-threads 32
 74:     subvolumes test-upcall
 75: end-volume
 76:  
 77: volume test-selinux
 78:     type features/selinux
 79:     option selinux on
 80:     subvolumes test-io-threads
 81: end-volume
 82:  
 83: volume test-marker
 84:     type features/marker
 85:     option volume-uuid c4a22ffc-b328-4d5c-a63f-5adab1e9a93e
 86:     option timestamp-file /var/lib/glusterd/vols/test/marker.tstamp
 87:     option quota-version 0
 88:     option xtime off
 89:     option gsync-force-xtime off
 90:     option quota off
 91:     option inode-quota off
 92:     subvolumes test-selinux
 93: end-volume
 94:  
 95: volume test-barrier
 96:     type features/barrier
 97:     option barrier disable
 98:     option barrier-timeout 120
 99:     subvolumes test-marker
100: end-volume
101:  
102: volume test-index
103:     type features/index
104:     option index-base /v0/bricks/test/.glusterfs/indices
105:     option xattrop-dirty-watchlist trusted.afr.dirty
106:     option xattrop-pending-watchlist trusted.afr.test-
107:     subvolumes test-barrier
108: end-volume
109:  
110: volume test-quota
111:     type features/quota
112:     option volume-uuid test
113:     option server-quota off
114:     option deem-statfs off
115:     subvolumes test-index
116: end-volume
117:  
118: volume test-io-stats
119:     type debug/io-stats
120:     option unique-id /v0/bricks/test
121:     option log-level INFO
122:     option latency-measurement off
123:     option count-fop-hits off
124:     subvolumes test-quota
125: end-volume
126:  
127: volume /v0/bricks/test
128:     type performance/decompounder
129:     option auth.addr./v0/bricks/test.allow *
130:     option auth-path /v0/bricks/test
131:     option auth.login.24909ee0-24cc-4f78-af68-275187589fae.password 2ce65fda-fb86-416d-9e29-861ac3a738c6
132:     option auth.login./v0/bricks/test.allow 24909ee0-24cc-4f78-af68-275187589fae
133:     subvolumes test-io-stats
134: end-volume
135:  
136: volume test-server
137:     type protocol/server
138:     option transport.socket.listen-port 49152
139:     option rpc-auth.auth-glusterfs on
140:     option rpc-auth.auth-unix on
141:     option rpc-auth.auth-null on
142:     option rpc-auth-allow-insecure on
143:     option transport-type tcp
144:     option transport.address-family inet
145:     option auth.login./v0/bricks/test.allow 24909ee0-24cc-4f78-af68-275187589fae
146:     option auth.login.24909ee0-24cc-4f78-af68-275187589fae.password 2ce65fda-fb86-416d-9e29-861ac3a738c6
147:     option auth-path /v0/bricks/test
148:     option auth.addr./v0/bricks/test.allow *
149:     option transport.socket.keepalive 1
150:     option transport.socket.ssl-enabled off
151:     option event-threads 4
152:     option transport.socket.keepalive-time 20
153:     option transport.socket.keepalive-interval 2
154:     option transport.socket.keepalive-count 9
155:     option transport.listen-backlog 1024
156:     subvolumes /v0/bricks/test
157: end-volume
158:  
+------------------------------------------------------------------------------+
[2019-05-17 15:47:18.977033] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.12"
[2019-05-17 15:47:18.977055] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
[2019-05-17 15:47:18.977068] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:263830b2-de4d-4f5b-9049-e829925db095-GRAPH_ID:6-PID:22730-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
[2019-05-17 15:47:18.979652] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.12"
[2019-05-17 15:47:18.979670] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
[2019-05-17 15:47:18.979693] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:360b06ff-d4a7-4d68-b6e9-bde7096e3d82-GRAPH_ID:0-PID:6085-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
[2019-05-17 15:47:18.980363] I [rpcsvc.c:870:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 0
[2019-05-17 15:47:19.017248] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.11"
[2019-05-17 15:47:19.017275] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
[2019-05-17 15:47:19.017294] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:d424cc54-9a87-403e-b217-631362eeaff3-GRAPH_ID:0-PID:11061-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
[2019-05-17 15:47:20.602649] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.10"
[2019-05-17 15:47:20.602669] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
[2019-05-17 15:47:20.602679] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:a332606e-0f41-4f72-9060-a4742add08b0-GRAPH_ID:0-PID:26592-HOST:necropolis.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
[2019-05-17 15:47:20.611796] I [MSGID: 115036] [server.c:499:server_rpc_notify] 0-test-server: disconnecting connection from CTX_ID:a332606e-0f41-4f72-9060-a4742add08b0-GRAPH_ID:0-PID:26592-HOST:necropolis.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
[2019-05-17 15:47:20.611900] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-test-server: Shutting down connection CTX_ID:a332606e-0f41-4f72-9060-a4742add08b0-GRAPH_ID:0-PID:26592-HOST:necropolis.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
[2019-05-17 15:47:20.855104] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.10"
[2019-05-17 15:47:20.855134] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
[2019-05-17 15:47:20.855144] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:f654a9a1-0241-4415-a88f-b0a2b83b2301-GRAPH_ID:0-PID:26419-HOST:necropolis.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
[2019-05-17 15:47:20.856227] I [rpcsvc.c:870:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 1
[2019-05-17 15:48:03.062247] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.11"
[2019-05-17 15:48:03.062271] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
[2019-05-17 15:48:03.062282] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:471a7ddb-e566-4364-a171-8de6c75b9dde-GRAPH_ID:0-PID:12024-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
[2019-05-17 15:48:04.264875] E [MSGID: 113069] [posix-entry-ops.c:2123:posix_create] 0-test-posix: open on /v0/bricks/test/__DIRECT_IO_TEST__ failed [Invalid argument]
[2019-05-17 15:48:04.264931] I [MSGID: 115071] [server-rpc-fops_v2.c:1546:server4_create_cbk] 0-test-server: 25: CREATE /__DIRECT_IO_TEST__ (00000000-0000-0000-0000-000000000001/__DIRECT_IO_TEST__), client: CTX_ID:471a7ddb-e566-4364-a171-8de6c75b9dde-GRAPH_ID:0-PID:12024-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0, error-xlator: test-posix [Invalid argument]
[2019-05-17 15:48:05.120532] I [MSGID: 115036] [server.c:499:server_rpc_notify] 0-test-server: disconnecting connection from CTX_ID:471a7ddb-e566-4364-a171-8de6c75b9dde-GRAPH_ID:0-PID:12024-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
[2019-05-17 15:48:05.120658] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-test-server: Shutting down connection CTX_ID:471a7ddb-e566-4364-a171-8de6c75b9dde-GRAPH_ID:0-PID:12024-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
[2019-05-17 15:50:15.508396] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.12"
[2019-05-17 15:50:15.508429] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
[2019-05-17 15:50:15.508439] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:35a55e7e-7e14-4653-918e-095997a81ed3-GRAPH_ID:0-PID:9455-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
[2019-05-17 15:50:15.526169] I [MSGID: 115036] [server.c:499:server_rpc_notify] 0-test-server: disconnecting connection from CTX_ID:35a55e7e-7e14-4653-918e-095997a81ed3-GRAPH_ID:0-PID:9455-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
[2019-05-17 15:50:15.526271] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-test-server: Shutting down connection CTX_ID:35a55e7e-7e14-4653-918e-095997a81ed3-GRAPH_ID:0-PID:9455-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
[2019-05-17 15:52:16.992388] I [rpcsvc.c:870:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 2

Comment 13 Strahil Nikolov 2019-05-19 11:39:26 UTC
It seems that my case is different from Darrell's, even though the symptoms are the same.

Now I have destroyed all 4 storage domains, the gluster volumes and the VDO, and recreated everything again (4 gluster volumes on a single VDO).
This time the VDO has '--emulate512=true' and no issues have been observed.
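
For reference, the VDO recreation mentioned above would look roughly like this (a sketch; device and VDO names are placeholders, and the exact --emulate512 value syntax may differ between vdo versions):

vdo create --name=vdo1 --device=/dev/sdX --emulate512=true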

Gluster volume options before 'Optimize for virt':

Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Gluster volume after 'Optimize for virt':

Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
network.ping-timeout: 30
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.enable-shared-storage: enable

After that, adding the volumes as storage domains (via the UI) worked without any issues.

Comment 14 Krutika Dhananjay 2019-05-20 08:48:08 UTC
(In reply to Darrell from comment #12)
> Redid the experiment as I'd destroyed the volume from the first test
> already, reconfirmed same results. Created volume in the web GUI, tried to
> add storage domain, failed with same error (as expected). Note that the
> DIRECT_IO_TEST file is created, but empty and owned by root. Same for DD
> tests when run on the volume mounted on /mnt. These logs are from the
> engine's attempt to create the storage domain.
> 
> Root ownership of the DIRECT_IO_TEST file was important, as I had to remove
> it before being able to create the domain after enabling network.remote-dio
> just to confirm it was the only change needed. Expected, perhaps, but would
> be good if it had been cleaned up by the engine when the original test
> failed.
> 
> [root@boneyard telsin]# gluster vol info test
>  
> Volume Name: test
> Type: Replicate
> Volume ID: c4a22ffc-b328-4d5c-a63f-5adab1e9a93e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.50.3.11:/v0/bricks/test
> Brick2: 10.50.3.10:/v0/bricks/test
> Brick3: 10.50.3.12:/v0/bricks/test
> Options Reconfigured:
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> server.event-threads: 4
> client.event-threads: 4
> cluster.choose-local: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> auth.allow: *
> user.cifs: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> cluster.server-quorum-ratio: 50
> 
> vdsm.log:
> 2019-05-17 10:48:02,787-0500 INFO  (jsonrpc/5) [vdsm.api] START
> connectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options':
> u'backup-volfile-servers=10.50.3.10:10.50.3.12', u'id':
> u'00000000-0000-0000-0000-000000000000', u'connection': u'10.50.3.11:/test',
> u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None) from=::ffff:10.100.90.5,44732,
> flow_id=d0bc00dd-b181-4636-afd0-4064283195d7,
> task_id=0ef787fa-d714-4962-a6e8-95ac45b59b59 (api:48)
> 2019-05-17 10:48:02,815-0500 INFO  (jsonrpc/2) [vdsm.api] FINISH
> getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix':
> '', 'pool_status': 'connected', 'lver': 41L, 'domains':
> u'3747c20e-34ba-44ac-9476-61818f88a269:Active,0dce69a7-40c2-4860-85db-
> 4f3a5526bc11:Active', 'master_uuid':
> u'0dce69a7-40c2-4860-85db-4f3a5526bc11', 'version': '4', 'spm_id': 2,
> 'type': 'GLUSTERFS', 'master_ver': 134}, 'dominfo':
> {u'3747c20e-34ba-44ac-9476-61818f88a269': {'status': u'Active', 'diskfree':
> '398593884160', 'isoprefix': '', 'alerts': [], 'disktotal': '410640973824',
> 'version': 4}, u'0dce69a7-40c2-4860-85db-4f3a5526bc11': {'status':
> u'Active', 'diskfree': '398535950336', 'isoprefix': '', 'alerts': [],
> 'disktotal': '416429375488', 'version': 4}}} from=::ffff:10.100.90.5,43830,
> task_id=682d746d-68d2-41c7-91c5-3f0927e39459 (api:54)
> 2019-05-17 10:48:02,815-0500 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.getInfo succeeded in 0.05 seconds (__init__:312)
> 2019-05-17 10:48:02,939-0500 INFO  (jsonrpc/5)
> [storage.StorageServer.MountConnection] Creating directory
> u'/rhev/data-center/mnt/glusterSD/10.50.3.11:_test' (storageServer:168)
> 2019-05-17 10:48:02,940-0500 INFO  (jsonrpc/5) [storage.fileUtils] Creating
> directory: /rhev/data-center/mnt/glusterSD/10.50.3.11:_test mode: None
> (fileUtils:199)
> 2019-05-17 10:48:02,940-0500 WARN  (jsonrpc/5)
> [storage.StorageServer.MountConnection] Using user specified
> backup-volfile-servers option (storageServer:275)
> 2019-05-17 10:48:02,940-0500 INFO  (jsonrpc/5) [storage.Mount] mounting
> 10.50.3.11:/test at /rhev/data-center/mnt/glusterSD/10.50.3.11:_test
> (mount:204)
> 2019-05-17 10:48:03,089-0500 INFO  (jsonrpc/5) [IOProcessClient] (Global)
> Starting client (__init__:308)
> 2019-05-17 10:48:03,096-0500 INFO  (ioprocess/12048) [IOProcess] (Global)
> Starting ioprocess (__init__:434)
> 2019-05-17 10:48:03,114-0500 INFO  (itmap/0) [IOProcessClient]
> (/glusterSD/10.50.3.11:_test) Starting client (__init__:308)
> 2019-05-17 10:48:03,121-0500 INFO  (itmap/1) [IOProcessClient]
> (/glusterSD/dev-san:_gv0) Starting client (__init__:308)
> 2019-05-17 10:48:03,123-0500 INFO  (ioprocess/12097) [IOProcess]
> (/glusterSD/10.50.3.11:_test) Starting ioprocess (__init__:434)
> 2019-05-17 10:48:03,128-0500 INFO  (itmap/2) [IOProcessClient]
> (/glusterSD/dev-san:_gvOvirt) Starting client (__init__:308)
> 2019-05-17 10:48:03,131-0500 INFO  (ioprocess/12105) [IOProcess]
> (/glusterSD/dev-san:_gv0) Starting ioprocess (__init__:434)
> 2019-05-17 10:48:03,136-0500 INFO  (ioprocess/12111) [IOProcess]
> (/glusterSD/dev-san:_gvOvirt) Starting ioprocess (__init__:434)
> 2019-05-17 10:48:03,139-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
> connectStorageServer return={'statuslist': [{'status': 0, 'id':
> u'00000000-0000-0000-0000-000000000000'}]} from=::ffff:10.100.90.5,44732,
> flow_id=d0bc00dd-b181-4636-afd0-4064283195d7,
> task_id=0ef787fa-d714-4962-a6e8-95ac45b59b59 (api:54)
> 2019-05-17 10:48:03,139-0500 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.connectStorageServer succeeded in 0.35 seconds
> (__init__:312)
> 2019-05-17 10:48:03,883-0500 INFO  (jsonrpc/1) [vdsm.api] START
> connectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options':
> u'backup-volfile-servers=10.50.3.10:10.50.3.12', u'id':
> u'253295f9-e5a2-4864-992e-7dc6f54b2222', u'connection': u'10.50.3.11:/test',
> u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None) from=::ffff:10.100.90.5,44732, flow_id=2b3475ad,
> task_id=c121c2a0-05ec-42d2-aed7-13bfc4d485e4 (api:48)
> 2019-05-17 10:48:03,887-0500 INFO  (jsonrpc/1) [vdsm.api] FINISH
> connectStorageServer return={'statuslist': [{'status': 0, 'id':
> u'253295f9-e5a2-4864-992e-7dc6f54b2222'}]} from=::ffff:10.100.90.5,44732,
> flow_id=2b3475ad, task_id=c121c2a0-05ec-42d2-aed7-13bfc4d485e4 (api:54)
> 2019-05-17 10:48:03,888-0500 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.connectStorageServer succeeded in 0.01 seconds
> (__init__:312)
> 2019-05-17 10:48:03,892-0500 INFO  (jsonrpc/7) [vdsm.api] START
> createStorageDomain(storageType=7,
> sdUUID=u'f8222c62-65d0-4257-913b-b8309a195e97', domainName=u'gTest',
> typeSpecificArg=u'10.50.3.11:/test', domClass=1, domVersion=u'4',
> block_size=512, max_hosts=250, options=None) from=::ffff:10.100.90.5,44732,
> flow_id=2b3475ad, task_id=e1ac11d3-f19e-4e32-a210-9cd88742b8b8 (api:48)
> 2019-05-17 10:48:04,205-0500 WARN  (jsonrpc/7) [storage.LVM] Reloading VGs
> failed (vgs=[u'f8222c62-65d0-4257-913b-b8309a195e97'] rc=5 out=[] err=[' 
> Volume group "f8222c62-65d0-4257-913b-b8309a195e97" not found', '  Cannot
> process volume group f8222c62-65d0-4257-913b-b8309a195e97']) (lvm:442)
> 2019-05-17 10:48:04,224-0500 INFO  (periodic/31) [vdsm.api] START
> repoStats(domains=()) from=internal,
> task_id=c65982d4-6a9d-4eaf-a7cb-34f6b84e44d4 (api:48)
> 2019-05-17 10:48:04,225-0500 INFO  (periodic/31) [vdsm.api] FINISH repoStats
> return={u'3747c20e-34ba-44ac-9476-61818f88a269': {'code': 0, 'actual': True,
> 'version': 4, 'acquired': True, 'delay': '0.00101691', 'lastCheck': '6.1',
> 'valid': True}, u'0dce69a7-40c2-4860-85db-4f3a5526bc11': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00131841',
> 'lastCheck': '5.7', 'valid': True}} from=internal,
> task_id=c65982d4-6a9d-4eaf-a7cb-34f6b84e44d4 (api:54)
> 2019-05-17 10:48:04,238-0500 INFO  (jsonrpc/7) [storage.StorageDomain]
> sdUUID=f8222c62-65d0-4257-913b-b8309a195e97 domainName=gTest
> remotePath=10.50.3.11:/test domClass=1, block_size=512, alignment=1048576
> (nfsSD:86)
> 2019-05-17 10:48:04,254-0500 INFO  (jsonrpc/7) [IOProcessClient]
> (f8222c62-65d0-4257-913b-b8309a195e97) Starting client (__init__:308)
> 2019-05-17 10:48:04,261-0500 INFO  (ioprocess/12146) [IOProcess]
> (f8222c62-65d0-4257-913b-b8309a195e97) Starting ioprocess (__init__:434)
> 2019-05-17 10:48:04,306-0500 ERROR (jsonrpc/7) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-17 10:48:04,306-0500 INFO  (jsonrpc/7) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=::ffff:10.100.90.5,44732, flow_id=2b3475ad,
> task_id=e1ac11d3-f19e-4e32-a210-9cd88742b8b8 (api:52)
> 2019-05-17 10:48:04,306-0500 ERROR (jsonrpc/7) [storage.TaskManager.Task]
> (Task='e1ac11d3-f19e-4e32-a210-9cd88742b8b8') Unexpected error (task:875)
> 2019-05-17 10:48:04,306-0500 INFO  (jsonrpc/7) [storage.TaskManager.Task]
> (Task='e1ac11d3-f19e-4e32-a210-9cd88742b8b8') aborting: Task is aborted:
> 'Storage Domain target is unsupported: ()' - code 399 (task:1181)
> 2019-05-17 10:48:04,306-0500 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> (dispatcher:83)
> 2019-05-17 10:48:04,306-0500 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call StorageDomain.create failed (error 399) in 0.42 seconds (__init__:312)
> 2019-05-17 10:48:05,106-0500 INFO  (jsonrpc/3) [vdsm.api] START
> disconnectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options':
> u'backup-volfile-servers=10.50.3.10:10.50.3.12', u'id':
> u'253295f9-e5a2-4864-992e-7dc6f54b2222', u'connection': u'10.50.3.11:/test',
> u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None) from=::ffff:10.100.90.5,44732,
> flow_id=f7709310-0b0d-4fce-b01c-4a42dd50072c,
> task_id=6245c346-7bfa-48a9-9edf-8532ef491347 (api:48)
> 2019-05-17 10:48:05,107-0500 INFO  (jsonrpc/3) [storage.Mount] unmounting
> /rhev/data-center/mnt/glusterSD/10.50.3.11:_test (mount:212)
> 2019-05-17 10:48:05,313-0500 INFO  (jsonrpc/3) [vdsm.api] FINISH
> disconnectStorageServer return={'statuslist': [{'status': 0, 'id':
> u'253295f9-e5a2-4864-992e-7dc6f54b2222'}]} from=::ffff:10.100.90.5,44732,
> flow_id=f7709310-0b0d-4fce-b01c-4a42dd50072c,
> task_id=6245c346-7bfa-48a9-9edf-8532ef491347 (api:54)
> 2019-05-17 10:48:05,314-0500 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.disconnectStorageServer succeeded in 0.20 seconds
> (__init__:312)
> 
> brick logs:
> [2019-05-17 15:47:16.857040] I [MSGID: 100030] [glusterfsd.c:2849:main]
> 0-/usr/s
> bin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.1 (args:
> /usr/sbi
> n/glusterfsd -s 10.50.3.11 --volfile-id test.10.50.3.11.v0-bricks-test -p
> /var/r
> un/gluster/vols/test/10.50.3.11-v0-bricks-test.pid -S
> /var/run/gluster/35185988e
> 170e6ed.socket --brick-name /v0/bricks/test -l
> /var/log/glusterfs/bricks/v0-bric
> ks-test.log --xlator-option
> *-posix.glusterd-uuid=7643ebf9-7661-4b9b-935b-93fa4c
> 85ab6f --process-name brick --brick-port 49152 --xlator-option
> test-server.liste
> n-port=49152)
> [2019-05-17 15:47:16.857723] I [glusterfsd.c:2556:daemonize] 0-glusterfs:
> Pid of
>  current running process is 11027
> [2019-05-17 15:47:16.862835] I [socket.c:902:__socket_server_bind]
> 0-socket.glus
> terfsd: closing (AF_UNIX) reuse check socket 9
> [2019-05-17 15:47:16.869909] I [MSGID: 101190]
> [event-epoll.c:680:event_dispatch
> _epoll_worker] 0-epoll: Started thread with index 0
> [2019-05-17 15:47:16.869947] I [MSGID: 101190]
> [event-epoll.c:680:event_dispatch
> _epoll_worker] 0-epoll: Started thread with index 1
> [2019-05-17 15:47:16.874653] I [MSGID: 101190]
> [event-epoll.c:680:event_dispatch
> _epoll_worker] 0-epoll: Started thread with index 2
> [2019-05-17 15:47:16.874700] I [MSGID: 101190]
> [event-epoll.c:680:event_dispatch
> _epoll_worker] 0-epoll: Started thread with index 3
> [2019-05-17 15:47:16.874928] I
> [rpcsvc.c:2698:rpcsvc_set_outstanding_rpc_limit] 
> 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
> [2019-05-17 15:47:16.876961] I [socket.c:902:__socket_server_bind]
> 0-socket.test
> -changelog: closing (AF_UNIX) reuse check socket 14
> [2019-05-17 15:47:16.877172] I [trash.c:2450:init] 0-test-trash: no option
> speci
> fied for 'eliminate', using NULL
> Final graph:
> +----------------------------------------------------------------------------
> --+
>   1: volume test-posix
>   2:     type storage/posix
>   3:     option glusterd-uuid 7643ebf9-7661-4b9b-935b-93fa4c85ab6f
>   4:     option directory /v0/bricks/test
>   5:     option volume-id c4a22ffc-b328-4d5c-a63f-5adab1e9a93e
>   6:     option shared-brick-count 1
>   7:     option brick-uid 36
>   8:     option brick-gid 36
>   9: end-volume
>  10:  
>  11: volume test-trash
>  12:     type features/trash
>  13:     option trash-dir .trashcan
>  14:     option brick-path /v0/bricks/test
>  15:     option trash-internal-op off
>  16:     subvolumes test-posix
>  17: end-volume
>  18:  
>  19: volume test-changelog
>  20:     type features/changelog
>  21:     option changelog-brick /v0/bricks/test
>  22:     option changelog-dir /v0/bricks/test/.glusterfs/changelogs
>  23:     option changelog-barrier-timeout 120
>  24:     subvolumes test-trash
>  25: end-volume
>  26:  
>  27: volume test-bitrot-stub
>  28:     type features/bitrot-stub
>  29:     option export /v0/bricks/test
>  30:     option bitrot disable
>  31:     subvolumes test-changelog
>  32: end-volume
>  33:  
>  34: volume test-access-control
>  35:     type features/access-control
>  36:     subvolumes test-bitrot-stub
>  37: end-volume
>  38:  
>  39: volume test-locks
>  40:     type features/locks
>  41:     option enforce-mandatory-lock off
>  42:     subvolumes test-access-control
>  43: end-volume
>  44:  
>  45: volume test-worm
>  46:     type features/worm
>  47:     option worm off
>  48:     option worm-file-level off
>  49:     option worm-files-deletable on
>  50:     subvolumes test-locks
>  51: end-volume
>  52:  
>  53: volume test-read-only
>  54:     type features/read-only
>  55:     option read-only off
>  56:     subvolumes test-worm
>  57: end-volume
>  58:  
>  59: volume test-leases
>  60:     type features/leases
>  61:     option leases off
>  62:     subvolumes test-read-only
>  63: end-volume
>  64:  
>  65: volume test-upcall
>  66:     type features/upcall
>  67:     option cache-invalidation off
>  68:     subvolumes test-leases
>  69: end-volume
>  70:  
>  71: volume test-io-threads
>  72:     type performance/io-threads
>  73:     option low-prio-threads 32
>  74:     subvolumes test-upcall
>  75: end-volume
>  76:  
>  77: volume test-selinux
>  78:     type features/selinux
>  79:     option selinux on
>  80:     subvolumes test-io-threads
>  81: end-volume
>  82:  
>  83: volume test-marker
>  84:     type features/marker
>  85:     option volume-uuid c4a22ffc-b328-4d5c-a63f-5adab1e9a93e
>  86:     option timestamp-file /var/lib/glusterd/vols/test/marker.tstamp
>  87:     option quota-version 0
>  88:     option xtime off
>  89:     option gsync-force-xtime off
>  90:     option quota off
>  91:     option inode-quota off
>  92:     subvolumes test-selinux
>  93: end-volume
>  94:  
>  95: volume test-barrier
>  96:     type features/barrier
>  97:     option barrier disable
>  98:     option barrier-timeout 120
>  99:     subvolumes test-marker
> 100: end-volume
> 101:  
> 102: volume test-index
> 103:     type features/index
> 104:     option index-base /v0/bricks/test/.glusterfs/indices
> 105:     option xattrop-dirty-watchlist trusted.afr.dirty
> 106:     option xattrop-pending-watchlist trusted.afr.test-
> 107:     subvolumes test-barrier
> 108: end-volume
> 109:  
> 110: volume test-quota
> 111:     type features/quota
> 112:     option volume-uuid test
> 113:     option server-quota off
> 114:     option deem-statfs off
> 115:     subvolumes test-index
> 116: end-volume
> 117:  
> 118: volume test-io-stats
> 119:     type debug/io-stats
> 120:     option unique-id /v0/bricks/test
> 121:     option log-level INFO
> 122:     option latency-measurement off
> 123:     option count-fop-hits off
> 124:     subvolumes test-quota
> 125: end-volume
> 126:  
> 127: volume /v0/bricks/test
> 128:     type performance/decompounder
> 129:     option auth.addr./v0/bricks/test.allow *
> 130:     option auth-path /v0/bricks/test
> 131:     option auth.login.24909ee0-24cc-4f78-af68-275187589fae.password
> 2ce65fd
> a-fb86-416d-9e29-861ac3a738c6
> 132:     option auth.login./v0/bricks/test.allow
> 24909ee0-24cc-4f78-af68-2751875
> 89fae
> 133:     subvolumes test-io-stats
> 134: end-volume
> 135:  
> 136: volume test-server
> 137:     type protocol/server
> 138:     option transport.socket.listen-port 49152
> 139:     option rpc-auth.auth-glusterfs on
> 140:     option rpc-auth.auth-unix on
> 141:     option rpc-auth.auth-null on
> 142:     option rpc-auth-allow-insecure on
> 143:     option transport-type tcp
> 144:     option transport.address-family inet
> 145:     option auth.login./v0/bricks/test.allow
> 24909ee0-24cc-4f78-af68-2751875
> 89fae
> 146:     option auth.login.24909ee0-24cc-4f78-af68-275187589fae.password
> 2ce65fd
> a-fb86-416d-9e29-861ac3a738c6
> 147:     option auth-path /v0/bricks/test
> 148:     option auth.addr./v0/bricks/test.allow *
> 149:     option transport.socket.keepalive 1
> 150:     option transport.socket.ssl-enabled off
> 151:     option event-threads 4
> 152:     option transport.socket.keepalive-time 20
> 153:     option transport.socket.keepalive-interval 2
> 154:     option transport.socket.keepalive-count 9
> 155:     option transport.listen-backlog 1024
> 156:     subvolumes /v0/bricks/test
> 157: end-volume
> 158:  
> +------------------------------------------------------------------------------+
> [2019-05-17 15:47:18.977033] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.12"
> [2019-05-17 15:47:18.977055] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
> [2019-05-17 15:47:18.977068] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:263830b2-de4d-4f5b-9049-e829925db095-GRAPH_ID:6-PID:22730-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
> [2019-05-17 15:47:18.979652] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.12"
> [2019-05-17 15:47:18.979670] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
> [2019-05-17 15:47:18.979693] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:360b06ff-d4a7-4d68-b6e9-bde7096e3d82-GRAPH_ID:0-PID:6085-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
> [2019-05-17 15:47:18.980363] I [rpcsvc.c:870:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 0
> [2019-05-17 15:47:19.017248] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.11"
> [2019-05-17 15:47:19.017275] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
> [2019-05-17 15:47:19.017294] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:d424cc54-9a87-403e-b217-631362eeaff3-GRAPH_ID:0-PID:11061-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
> [2019-05-17 15:47:20.602649] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.10"
> [2019-05-17 15:47:20.602669] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
> [2019-05-17 15:47:20.602679] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:a332606e-0f41-4f72-9060-a4742add08b0-GRAPH_ID:0-PID:26592-HOST:necropolis.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
> [2019-05-17 15:47:20.611796] I [MSGID: 115036] [server.c:499:server_rpc_notify] 0-test-server: disconnecting connection from CTX_ID:a332606e-0f41-4f72-9060-a4742add08b0-GRAPH_ID:0-PID:26592-HOST:necropolis.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
> [2019-05-17 15:47:20.611900] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-test-server: Shutting down connection CTX_ID:a332606e-0f41-4f72-9060-a4742add08b0-GRAPH_ID:0-PID:26592-HOST:necropolis.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
> [2019-05-17 15:47:20.855104] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.10"
> [2019-05-17 15:47:20.855134] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
> [2019-05-17 15:47:20.855144] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:f654a9a1-0241-4415-a88f-b0a2b83b2301-GRAPH_ID:0-PID:26419-HOST:necropolis.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
> [2019-05-17 15:47:20.856227] I [rpcsvc.c:870:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 1
> [2019-05-17 15:48:03.062247] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.11"
> [2019-05-17 15:48:03.062271] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
> [2019-05-17 15:48:03.062282] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:471a7ddb-e566-4364-a171-8de6c75b9dde-GRAPH_ID:0-PID:12024-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
> [2019-05-17 15:48:04.264875] E [MSGID: 113069] [posix-entry-ops.c:2123:posix_create] 0-test-posix: open on /v0/bricks/test/__DIRECT_IO_TEST__ failed [Invalid argument]
> [2019-05-17 15:48:04.264931] I [MSGID: 115071] [server-rpc-fops_v2.c:1546:server4_create_cbk] 0-test-server: 25: CREATE /__DIRECT_IO_TEST__ (00000000-0000-0000-0000-000000000001/__DIRECT_IO_TEST__), client: CTX_ID:471a7ddb-e566-4364-a171-8de6c75b9dde-GRAPH_ID:0-PID:12024-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0, error-xlator: test-posix [Invalid argument]
> [2019-05-17 15:48:05.120532] I [MSGID: 115036] [server.c:499:server_rpc_notify] 0-test-server: disconnecting connection from CTX_ID:471a7ddb-e566-4364-a171-8de6c75b9dde-GRAPH_ID:0-PID:12024-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
> [2019-05-17 15:48:05.120658] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-test-server: Shutting down connection CTX_ID:471a7ddb-e566-4364-a171-8de6c75b9dde-GRAPH_ID:0-PID:12024-HOST:boneyard.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
> [2019-05-17 15:50:15.508396] I [addr.c:54:compare_addr_and_update] 0-/v0/bricks/test: allowed = "*", received addr = "10.50.3.12"
> [2019-05-17 15:50:15.508429] I [login.c:110:gf_auth] 0-auth/login: allowed user names: 24909ee0-24cc-4f78-af68-275187589fae
> [2019-05-17 15:50:15.508439] I [MSGID: 115029] [server-handshake.c:550:server_setvolume] 0-test-server: accepted client from CTX_ID:35a55e7e-7e14-4653-918e-095997a81ed3-GRAPH_ID:0-PID:9455-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0 (version: 6.1) with subvol /v0/bricks/test
> [2019-05-17 15:50:15.526169] I [MSGID: 115036] [server.c:499:server_rpc_notify] 0-test-server: disconnecting connection from CTX_ID:35a55e7e-7e14-4653-918e-095997a81ed3-GRAPH_ID:0-PID:9455-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
> [2019-05-17 15:50:15.526271] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-test-server: Shutting down connection CTX_ID:35a55e7e-7e14-4653-918e-095997a81ed3-GRAPH_ID:0-PID:9455-HOST:ossuary.int.ohgnetworks.com-PC_NAME:test-client-0-RECON_NO:-0
> [2019-05-17 15:52:16.992388] I [rpcsvc.c:870:rpcsvc_handle_rpc_call] 0-rpc-service: spawned a request handler thread for queue 2

In your case, open() itself failed with O_DIRECT. It seems the backend filesystem on which your gluster bricks are hosted doesn't support O_DIRECT.

[2019-05-17 15:48:04.264875] E [MSGID: 113069] [posix-entry-ops.c:2123:posix_create] 0-test-posix: open on /v0/bricks/test/__DIRECT_IO_TEST__ failed [Invalid argument]

What FS are you using for bricks?

Turning on remote-dio helps in your case because that option's job is to knock off the O_DIRECT flag before open().
See https://github.com/gluster/glusterfs/blob/836e5b6b76ee74a486b6d41c4a2d3de91e4ad2c1/xlators/protocol/client/src/client.c#L37

What this means is that your volume will no longer honor the O_DIRECT flag.
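
A quick way to confirm this on your setup (just a sketch; the probe filename is made up, and I'm assuming the brick path from the log above) is to attempt an O_DIRECT write directly on the brick filesystem, bypassing gluster:

  # run on the brick host, against the brick directory itself
  dd if=/dev/zero of=/v0/bricks/test/__direct_io_probe__ bs=4096 count=1 oflag=direct
  rm -f /v0/bricks/test/__direct_io_probe__

If that dd fails with "Invalid argument", the backend filesystem itself is rejecting O_DIRECT, which matches the posix_create error above. Repeating the same dd on the fuse mount should then succeed only with network.remote-dio enabled, because the flag is stripped on the client side before open().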

Going by Nir's thread, oVirt expects these I/O operations to use direct I/O.

Sahina/Nir,
Is the "DIRECT-IO" nature of the io operations a strict requirement for oVirt? Do we expect any consistency issues in case the flag is not honored under the hood?

-Krutika

Comment 15 Darrell 2019-05-20 20:26:56 UTC
Ah, that makes sense. I had ZFS bricks for that volume, and ZFS on Linux 0.7.x won't support O_DIRECT until version 0.8, which is still in RC status.

To confirm, I created a new volume for my production systems with XFS bricks, applied the virt group, and then disabled network.remote-dio on the volume. oVirt was able to create a storage domain with no problem.
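
For reference, the commands were roughly the following (the volume name here is just a placeholder for my actual production volume):

  gluster volume set prodvol group virt
  gluster volume set prodvol network.remote-dio disable
  gluster volume get prodvol network.remote-dio

Even with remote-dio disabled, the storage domain creation succeeded, since XFS handles O_DIRECT natively.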

Looks like the two of us hit different edge cases; sorry for the trouble.

Comment 16 Strahil Nikolov 2019-05-20 21:00:25 UTC
And here comes the question ...
Do we really support only storage that can emulate 512-byte sectors? We have 3PARs that report 4096/16384 (minimum/optimum), and in that case this type of storage will be unavailable.
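
For context, the values I'm referring to can be checked with something like this (the device path is just an example):

  blockdev --getss --getpbsz --getiomin --getioopt /dev/mapper/3par_lun
  lsblk -o NAME,LOG-SEC,PHY-SEC,MIN-IO,OPT-IO /dev/mapper/3par_lun

On our 3PAR LUNs the minimum/optimal I/O sizes come back as 4096/16384.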

Comment 17 Sahina Bose 2019-06-17 07:25:09 UTC
(In reply to Strahil Nikolov from comment #16)
> And here comes the question ...
> Do we really support only storage that can emulate 512-byte sectors? We have
> 3PARs that report 4096/16384 (minimum/optimum), and in that case this type of
> storage will be unavailable.

We do not support storage that cannot emulate 512-byte sectors, at least until Bug 1592916 is fixed.

Closing this bug.

*** This bug has been marked as a duplicate of bug 1592916 ***

