Bug 2099331 - crm_attribute default output changed to "(null)" instead of empty, breaks redis resource agent
Summary: crm_attribute default output changed to "(null)" instead of empty, breaks redis resource agent
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: pacemaker
Version: 9.0
Hardware: All
OS: All
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: 9.1
Assignee: Chris Lumens
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-20 15:10 UTC by Damien Ciabrini
Modified: 2022-11-15 10:05 UTC
CC List: 6 users

Fixed In Version: pacemaker-2.1.4-2.el9
Doc Type: No Doc Update
Doc Text:
This issue was not in a released build
Clone Of:
Environment:
Last Closed: 2022-11-15 09:49:38 UTC
Type: Bug
Target Upstream Version: 2.1.5
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1978997 0 None None None 2022-06-20 15:10:53 UTC
Red Hat Issue Tracker RHELPLAN-125776 0 None None None 2022-06-20 15:33:19 UTC
Red Hat Product Errata RHBA-2022:7937 0 None None None 2022-11-15 09:50:09 UTC

Description Damien Ciabrini 2022-06-20 15:10:54 UTC
Description of problem:
In OpenStack CI, we're consuming the latest pacemaker (pacemaker-2.1.3-2.el9.x86_64), and our deployment can no longer promote the Redis resource managed by the redis resource agent.

After further inspection, it looks like recent versions of crm_attribute return "(null)" instead of an empty string when an attribute is not found in the CIB, e.g.:

# crm_attribute --type crm_config --name REDIS_REPL_INFO -s redis_replication --query -q 2>/dev/null
(null)

or

# crm_attribute --promotion -n nonexisting-attribbute -N standalone -G --quiet
(null)
crm_attribute: Error performing operation: No such device or address


This may confuse many resource agents. So far we've confirmed that the redis resource agent cannot cope with this behaviour, as it expects an empty string in order to cycle from the started to the promotable state.
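A minimal sketch of the failure mode, using a stub shell function in place of the real crm_attribute (the stub and the variable names are illustrative, not taken from the agent's actual code):

```shell
#!/bin/sh
# Stub that mimics the pre-fix crm_attribute behaviour for a missing
# attribute: it prints "(null)" instead of nothing. Illustrative only.
crm_attribute_stub() {
    printf '(null)\n'
    return 1   # the real tool also exits nonzero for a missing attribute
}

# Capture the output; "|| true" because the tool exits nonzero here.
master="$(crm_attribute_stub --query --quiet --name REDIS_REPL_INFO 2>/dev/null)" || true

# Typical resource-agent idiom: an empty result means "attribute unset,
# bootstrap the promotion"; anything else is taken as real data.
if [ -z "$master" ]; then
    echo "no replication info yet - eligible for promotion"
else
    echo "following master: $master"   # the regression lands here with "(null)"
fi
```

With the pre-fix output, the `-z` test never matches, so the agent treats the literal string "(null)" as a real master name and never bootstraps promotion.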


Version-Release number of selected component (if applicable):
pacemaker-2.1.3-2.el9.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Deploy a standalone OpenStack cloud in a VM (this creates a containerized master/slave redis resource).

Actual results:
The redis-bundle resource stays in the Unpromoted state

Expected results:
The redis-bundle resource should go to the Promoted state automatically

Additional info:
The old behaviour is known to work up to pacemaker-2.1.2-4.el9.x86_64

Comment 1 Takashi Kajinami 2022-06-20 15:20:01 UTC
I guess the output change was made by https://github.com/ClusterLabs/pacemaker/commit/3f1565b95d5e5314c9bbb6edc91aa949a6c05935, which is present in pacemaker 2.1.3 and later.

Comment 2 Ken Gaillot 2022-06-27 21:09:36 UTC
Fixed in upstream main branch as of commit 9853f4d05

Comment 4 Ken Gaillot 2022-06-29 14:32:05 UTC
QA: Only the redis and rabbitmq agents are known to be potentially affected by this issue, but the issue itself is in the crm_attribute tool, so the only test needed is:

    crm_attribute --query --quiet --name $NAME --node $NODE 2>/dev/null

where $NAME is the name of an attribute that does not exist, and $NODE is any node in the cluster. Before the fix, with the 2.1.3 or 2.1.4-1 packages, it will output "(null)"; after the fix, it will not output anything.
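That check can be wrapped in a tiny helper; a sketch where check_null_fix is a hypothetical name and the live crm_attribute call is shown commented out (NAME/NODE must be supplied on a real cluster):

```shell
#!/bin/sh
# Hypothetical helper: given the captured output of the crm_attribute
# query, report whether the "(null)" fix is present.
check_null_fix() {
    if [ -z "$1" ]; then
        echo "PASS: no output for a nonexistent attribute"
    else
        echo "FAIL: got '$1', expected no output"
        return 1
    fi
}

# On a real cluster ($NAME = nonexistent attribute, $NODE = any node):
#   check_null_fix "$(crm_attribute --query --quiet --name "$NAME" --node "$NODE" 2>/dev/null)"

# Demonstration with canned outputs:
check_null_fix "(null)" || true   # pre-fix behaviour (2.1.3 / 2.1.4-1): prints FAIL
check_null_fix ""                 # post-fix behaviour (2.1.4-2): prints PASS
```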

Comment 5 lejeczek 2022-07-08 07:34:41 UTC
This also affects CentOS Stream 8 with pacemaker-2.1.3-2.el8.x86_64. There I have to fool the cluster by creating a constraint with 'move --master'; if that constraint does not exist, the cluster logs:
...
3442363:S 07 Jul 2022 20:11:24.184 # Unable to connect to MASTER: (null)
3442363:S 07 Jul 2022 20:11:25.187 * Connecting to MASTER no-such-master:6379
...

It would be great to have the fixes sent to CentOS ASAP as well.
thanks, L.

Comment 9 Ken Gaillot 2022-07-11 17:09:13 UTC
(In reply to lejeczek from comment #5)
> This also affects CentOS Stream 8 with pacemaker-2.1.3-2.el8.x86_64. There
> I have to fool the cluster by creating a constraint with 'move --master';
> if that constraint does not exist, the cluster logs:
> ...
> 3442363:S 07 Jul 2022 20:11:24.184 # Unable to connect to MASTER: (null)
> 3442363:S 07 Jul 2022 20:11:25.187 * Connecting to MASTER no-such-master:6379
> ...
> 
> It would be great to have the fixes sent to CentOS ASAP as well.
> thanks, L.

The fix is also in the pacemaker-2.1.4-3.el8 build

Comment 10 Markéta Smazová 2022-07-13 15:03:46 UTC
before fix
-----------
[root@virt-245 ~]# rpm -q pacemaker
pacemaker-2.1.4-1.el9.x86_64

[root@virt-245 ~]# pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: virt-245 (version 2.1.4-1.el9-dc6eb4362e) - partition with quorum
   * Last updated: Wed Jul 13 16:31:15 2022
   * Last change:  Tue Jul 12 09:58:47 2022 by root via cibadmin on virt-245
   * 2 nodes configured
   * 2 resource instances configured
 Node List:
   * Online: [ virt-245 virt-246 ]

PCSD Status:
  virt-245: Online
  virt-246: Online

[root@virt-245 ~]# crm_attribute --query --quiet --name test --node virt-246 2>/dev/null
(null)

[root@virt-245 ~]# crm_attribute --query --quiet --name test --node virt-246
(null)
crm_attribute: Error performing operation: No such device or address


after fix
----------
[root@virt-259 ~]# rpm -q pacemaker
pacemaker-2.1.4-2.el9.x86_64

[root@virt-259 ~]# pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: virt-260 (version 2.1.4-2.el9-dc6eb4362e) - partition with quorum
   * Last updated: Wed Jul 13 16:31:58 2022
   * Last change:  Wed Jul 13 15:54:22 2022 by root via cibadmin on virt-259
   * 2 nodes configured
   * 2 resource instances configured
 Node List:
   * Online: [ virt-259 virt-260 ]

PCSD Status:
  virt-260: Online
  virt-259: Online

[root@virt-259 ~]# crm_attribute --query --quiet --name test --node virt-259 2>/dev/null

[root@virt-259 ~]# crm_attribute --query --quiet --name test --node virt-259
crm_attribute: Error performing operation: No such device or address

Marking verified in pacemaker-2.1.4-2.el9.

Comment 12 errata-xmlrpc 2022-11-15 09:49:38 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:7937

