Bug 1481611 - Mongo does not work on secondary (BE?) arches
Summary: Mongo does not work on secondary (BE?) arches
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: rubygem-moped
Version: rawhide
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Vít Ondruch
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: ZedoraTracker PPCTracker 1457953
 
Reported: 2017-08-15 07:57 UTC by Vít Ondruch
Modified: 2017-08-28 16:19 UTC
CC List: 15 users

Fixed In Version: rubygem-moped-1.5.3-5.fc25 rubygem-moped-1.5.3-5.fc26
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-27 06:22:45 UTC
Type: Bug
Embargoed:



Description Vít Ondruch 2017-08-15 07:57:39 UTC
Description of problem:
I am trying to build rubygem-database_cleaner. It works just fine on the primary arches, but it does not look like Redis is accessible on s390x and ppc64:

https://koji.fedoraproject.org/koji/taskinfo?taskID=21238196

Since this is a noarch package, I can still build it, but it is an annoying issue because it causes mass rebuild failures and failing Koschei builds.

Version-Release number of selected component (if applicable):
redis-3.2.9-3.fc27.ppc64
redis-3.2.9-3.fc27.s390x

How reproducible:
Always


Steps to Reproduce:
1. $ fedpkg co rubygem-database_cleaner
2. $ cd rubygem-database_cleaner
3. $ fedpkg scratch-build --srpm --arches armv7hl aarch64 i686 ppc64 ppc64le s390x x86_64

Actual results:



Expected results:


Additional info:

Comment 1 Dan Horák 2017-08-15 08:05:43 UTC
hmm, it fails on ppc64le as well, so it doesn't look like a big endian (only) problem ...

Comment 2 Vít Ondruch 2017-08-15 08:17:25 UTC
(In reply to Dan Horák from comment #1)
> hmm, it fails on ppc64le as well, so it doesn't look like a big endian (only)
> problem ...

That is a different error ...

Comment 3 Vít Ondruch 2017-08-15 08:18:15 UTC
But it does *not* fail on ARM, and ARM is BE, isn't it?

Comment 4 Dan Horák 2017-08-15 08:22:02 UTC
(In reply to Vít Ondruch from comment #3)
> But it does *not* fail on ARM, and ARM is BE, isn't it?

nope, only s390x and ppc64 are big endian now in Fedora

Comment 5 Nathan Scott 2017-08-16 02:16:14 UTC
If someone could distil the failure down to a simple test case, I'd be happy to take a look at the redis server code and diagnose further.  Those failed build messages are a bit too opaque for me to decipher.

[ideally, a redis-cli command (or series thereof) that can reproduce the issue would help greatly in resolving this]

Comment 6 Vít Ondruch 2017-08-16 11:28:47 UTC
Mea culpa. I am really sorry, I made a mistake. The issue is not with Redis, but with Mongo. Going to reassign ...

Comment 7 Vít Ondruch 2017-08-16 12:32:42 UTC
Trying this locally, I can simulate the issue when the MongoDB server is not running:

~~~
$ ruby -r moped -e "s = ::Moped::Session.new(['127.0.0.1:27017'], database: 'test'); s.use :moped; p s[:users].find.one"
W, [2017-08-16T14:18:16.559826 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:16.811022 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:17.062197 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:17.313449 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:17.566640 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:17.819639 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:18.071277 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:18.322270 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:18.575943 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:18.828969 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:19.079826 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:19.331822 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:19.583756 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:19.835952 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:20.088706 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:20.341233 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:20.593571 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:20.846111 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:21.098913 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
W, [2017-08-16T14:18:21.351574 #25]  WARN -- :   MOPED: Could not connect to any node in replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]>, refreshing list.
/usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:270:in `with_secondary': Could not connect to any secondary or primary nodes for replica set <Moped::Cluster nodes=[<Moped::Node resolved_address="127.0.0.1:27017">]> (Moped::Errors::ConnectionFailure)
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/cluster.rb:268:in `with_secondary'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/session/context.rb:104:in `with_node'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/session/context.rb:43:in `query'
	from /usr/share/gems/gems/moped-1.5.3/lib/moped/query.rb:115:in `first'
	from -e:1:in `<main>'
~~~

but as soon as the server is running, I get a correct, meaningful result:

~~~
$ mongod --dbpath=. --logpath ./mongod.log --fork
about to fork child process, waiting until server is ready for connections.
forked process: 71
child process started successfully, parent exiting

$ ruby -r moped -e "s = ::Moped::Session.new(['127.0.0.1:27017'], database: 'test'); s.use :moped; p s[:users].find.one"
nil
~~~

But this does not work on Koji.

Just for fun, I tried the mongo CLI:

~~~
$ echo "db.adminCommand('listDatabases')" | mongo
MongoDB shell version v3.4.6
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.6
{
	"databases" : [
		{
			"name" : "admin",
			"sizeOnDisk" : 12288,
			"empty" : false
		},
		{
			"name" : "local",
			"sizeOnDisk" : 8192,
			"empty" : false
		}
	],
	"totalSize" : 20480,
	"ok" : 1
}
bye
~~~

This seems to work just fine locally as well as on Koji.

So this suggests some networking issue ...

Comment 8 Vít Ondruch 2017-08-16 12:43:41 UTC
Actually, there are reports that moped has some issues on BE systems:

https://github.com/mongoid/moped/issues/390
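
For context, the failure mode described in that issue comes down to integer byte order: the MongoDB wire protocol and BSON encode int32 values as little-endian, so any Ruby pack/unpack call that uses the native-endian directive decodes them wrongly on big-endian hosts. A minimal sketch of that pitfall (illustrative only, not moped's actual code):

~~~
# Illustrative sketch of the endianness pitfall (not moped's actual code).
# The MongoDB wire protocol and BSON store int32 values as little-endian.
bytes = [42].pack("l<")      # 4 bytes, little-endian, as seen on the wire

p bytes.unpack("l<").first   # => 42 on every arch (explicit little-endian)
p bytes.unpack("l").first    # native order: 42 on x86_64, 704643072 on s390x/ppc64
~~~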

Comment 9 Vít Ondruch 2017-08-16 13:31:04 UTC
The patch seems to help.
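
For anyone who wants to re-check this on an affected arch, the reproducer from comment 7 also works as a small script (a sketch, assuming the patched rubygem-moped is installed and a local mongod is listening on 127.0.0.1:27017):

~~~
# Verification sketch: run on s390x/ppc64 with the patched rubygem-moped
# installed and a local mongod listening on 127.0.0.1:27017 (assumptions).
require "moped"

session = Moped::Session.new(["127.0.0.1:27017"], database: "test")
session.use :moped
p session[:users].find.one   # expect nil instead of Moped::Errors::ConnectionFailure
~~~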

Comment 10 Fedora Update System 2017-08-16 14:02:12 UTC
rubygem-moped-1.5.3-5.fc26 has been submitted as an update to Fedora 26. https://bodhi.fedoraproject.org/updates/FEDORA-2017-f972093bcb

Comment 11 Fedora Update System 2017-08-16 14:11:47 UTC
rubygem-moped-1.5.3-5.fc25 has been submitted as an update to Fedora 25. https://bodhi.fedoraproject.org/updates/FEDORA-2017-81cdd8b6dd

Comment 12 Fedora Update System 2017-08-18 21:53:52 UTC
rubygem-moped-1.5.3-5.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-81cdd8b6dd

Comment 13 Fedora Update System 2017-08-19 18:53:01 UTC
rubygem-moped-1.5.3-5.fc26 has been pushed to the Fedora 26 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-f972093bcb

Comment 14 Fedora Update System 2017-08-27 06:22:45 UTC
rubygem-moped-1.5.3-5.fc25 has been pushed to the Fedora 25 stable repository. If problems still persist, please make note of it in this bug report.

Comment 15 Fedora Update System 2017-08-28 16:19:19 UTC
rubygem-moped-1.5.3-5.fc26 has been pushed to the Fedora 26 stable repository. If problems still persist, please make note of it in this bug report.

