Bug 1394862 - libdb: Assumes that internal condition variable layout never changes
Summary: libdb: Assumes that internal condition variable layout never changes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: libdb
Version: 26
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Petr Kubat
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: https://fedoraproject.org/wiki/Common...
Depends On:
Blocks: 1397087
 
Reported: 2016-11-14 15:35 UTC by Jan Pazdziora
Modified: 2017-09-13 14:39 UTC
CC List: 35 users

Fixed In Version: libdb-5.3.28-24.fc26 libdb-5.3.28-24.fc24 libdb-5.3.28-24.fc25
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-07-07 23:00:28 UTC
Type: Bug
Embargoed:


Attachments
upstream patch (30.94 KB, patch), 2017-05-16 08:07 UTC, Petr Kubat
rpm lock check patch (1.94 KB, patch), 2017-05-30 10:22 UTC, Petr Kubat
rpm lock check patch v2 (2.12 KB, patch), 2017-06-01 09:10 UTC, Petr Kubat
rpm lock check patch v3 (2.28 KB, patch), 2017-06-01 10:55 UTC, Petr Kubat
failed upgrade (20.79 KB, text/plain), 2017-06-12 20:28 UTC, Lukas Slebodnik
Cond var ppc fix (3.35 KB, patch), 2017-06-19 12:47 UTC, Petr Kubat
rpm lock check patch v4 (2.56 KB, patch), 2017-06-19 12:48 UTC, Petr Kubat


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1397087 0 high CLOSED Rpm hangs after updating to glibc >= 2.25 2022-05-16 11:32:56 UTC
Red Hat Bugzilla 1483553 0 unspecified CLOSED glibc dnf script error: BDB1539 Build signature doesn't match environment 2022-05-16 11:32:56 UTC
Sourceware 21119 0 P3 UNCONFIRMED Unify the pthread_mutex_t definitions 2021-01-18 09:43:07 UTC

Internal Links: 1397087

Description Jan Pazdziora 2016-11-14 15:35:21 UTC
Description of problem:

Building Fedora rawhide container image from Dockerfile

FROM fedora:rawhide
RUN dnf install -y httpd

works fine.

However, adding RUN dnf upgrade -y glibc before the dnf install line causes the build process to hang.

Version-Release number of selected component (if applicable):

glibc-2.24.90-13.fc26.x86_64

How reproducible:

Deterministic for this particular Dockerfile on my machines, but it is not always apr-util, or the second package, that it gets stuck on.

Steps to Reproduce:
1. Have Dockerfile

FROM fedora:rawhide
RUN dnf upgrade -y glibc
RUN dnf install -y httpd

2. Attempt to build the image: docker build -t rawhide .

Actual results:

Install  7 Packages

Total download size: 1.7 M
Installed size: 4.7 M
Downloading Packages:
--------------------------------------------------------------------------------
Total                                           1.1 MB/s | 1.7 MB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Installing  : apr-1.5.2-4.fc25.x86_64                                     1/7 
  Installing  : apr-util-1.5.4-3.fc24.x86_64                                2/7

and here the process hangs.

Expected results:

All packages installed, image created.

Additional info:

The same hang happens with plain docker run:

docker run --rm -ti fedora:rawhide bash -c 'dnf upgrade -y glibc && dnf install -y httpd'

With glibc-2.24.90-2.fc26.x86_64 (which is on the three-month-old fedora:rawhide image), things work; that is why I am filing this against glibc.

Comment 1 Jan Pazdziora 2016-11-14 15:38:05 UTC
The OS and environment on the host do not matter -- I see the same hang on Fedora 24 with docker-1.10.3-54.gite03ddb8.fc24.x86_64 and on RHEL 7.3 with docker-1.12.3-4.el7.x86_64.

Comment 2 Florian Weimer 2016-11-14 15:38:50 UTC
Please provide a process tree (ps axuf) when the hang occurs.  Attaching GDB to the hanging subprocess would be helpful, too, but it could prove difficult to get GDB to pick up the proper separate debuginfo.  Alternatively, please create a coredump of the hanging process with gcore and upload it somewhere.  Then we can analyze it offline.

Comment 3 Jan Pazdziora 2016-11-15 10:48:29 UTC
The bt against the hung process is

#0  0x00007fd8520f82c1 in futex_wait (private=<optimized out>, expected=4294967295, futex_word=0x7fd842759c04) at ../sysdeps/unix/sysv/linux/futex-internal.h:61
#1  futex_wait_simple (private=<optimized out>, expected=4294967295, futex_word=0x7fd842759c04) at ../sysdeps/nptl/futex-internal.h:135
#2  __pthread_cond_destroy (cond=cond@entry=0x7fd842759be0) at pthread_cond_destroy.c:54
#3  0x00007fd846d969cf in __db_pthread_mutex_destroy (env=env@entry=0x55ac4daae720, mutex=mutex@entry=340) at ../../src/mutex/mut_pthread.c:757
#4  0x00007fd846d9610f in __db_tas_mutex_destroy (env=env@entry=0x55ac4daae720, mutex=mutex@entry=340) at ../../src/mutex/mut_tas.c:602
#5  0x00007fd846e4dda8 in __mutex_free_int (env=0x55ac4daae720, locksys=locksys@entry=1, indxp=indxp@entry=0x7fd84036f550) at ../../src/mutex/mut_alloc.c:248
#6  0x00007fd846e4e3c5 in __mutex_free (env=env@entry=0x55ac4daae720, indxp=indxp@entry=0x7fd84036f550) at ../../src/mutex/mut_alloc.c:217
#7  0x00007fd846eb329b in __memp_bhfree (dbmp=dbmp@entry=0x55ac4cc383f0, infop=0x55ac4daf5b30, mfp=mfp@entry=0x7fd8403052d0, hp=<optimized out>, 
    bhp=bhp@entry=0x7fd84036f550, flags=flags@entry=1) at ../../src/mp/mp_bh.c:663
#8  0x00007fd846eb5aca in __memp_fget (dbmfp=dbmfp@entry=0x55ac4d9e50f0, pgnoaddr=pgnoaddr@entry=0x7ffc5622bb5c, ip=ip@entry=0x7fd84045f770, txn=txn@entry=0x0, 
    flags=flags@entry=8, addrp=addrp@entry=0x7ffc5622bb60) at ../../src/mp/mp_fget.c:479
#9  0x00007fd846ebb4fc in __memp_ftruncate (dbmfp=dbmfp@entry=0x55ac4d9e50f0, txn=0x0, ip=0x7fd84045f770, pgno=pgno@entry=19, flags=flags@entry=0)
    at ../../src/mp/mp_method.c:856
#10 0x00007fd846e7199e in __db_free (dbc=dbc@entry=0x55ac4d9e8280, h=<optimized out>, flags=flags@entry=0) at ../../src/db/db_meta.c:525
#11 0x00007fd846e75ddb in __db_doff (dbc=dbc@entry=0x55ac4d9e8280, pgno=<optimized out>) at ../../src/db/db_overflow.c:479
#12 0x00007fd846da3a8f in __bam_ditem (dbc=dbc@entry=0x55ac4d9e8280, h=0x7fd84036b388, indx=indx@entry=7) at ../../src/btree/bt_delete.c:138
#13 0x00007fd846da8160 in __bam_iitem (dbc=dbc@entry=0x55ac4d9e8280, key=key@entry=0x7ffc5622c1d0, data=data@entry=0x7ffc5622c1a0, op=6, flags=flags@entry=0)
    at ../../src/btree/bt_put.c:421
#14 0x00007fd846da2e5d in __bamc_put (dbc=0x55ac4d9e8280, key=0x7ffc5622c1d0, data=0x7ffc5622c1a0, flags=14, pgnop=0x7ffc5622c0b4)
    at ../../src/btree/bt_cursor.c:2240
#15 0x00007fd846e5d1dc in __dbc_iput (dbc=0x55ac4daad330, key=0x7ffc5622c1d0, data=0x7ffc5622c1a0, flags=14) at ../../src/db/db_cam.c:2136
#16 0x00007fd846e5f70d in __dbc_put (dbc=dbc@entry=0x55ac4daad330, key=key@entry=0x7ffc5622c1d0, data=data@entry=0x7ffc5622c1a0, flags=<optimized out>, 
    flags@entry=14) at ../../src/db/db_cam.c:2049
#17 0x00007fd846e6c9c1 in __dbc_put_pp (dbc=0x55ac4daad330, key=0x7ffc5622c1d0, data=0x7ffc5622c1a0, flags=14) at ../../src/db/db_iface.c:2751
#18 0x00007fd84754b235 in dbiCursorPut.isra.5 () from /lib64/librpm.so.7
#19 0x00007fd84754d211 in updateIndex.part.8 () from /lib64/librpm.so.7
#20 0x00007fd84754d5e1 in db3_idxdbPut () from /lib64/librpm.so.7
#21 0x00007fd847553168 in tag2index () from /lib64/librpm.so.7
#22 0x00007fd847556c6b in rpmdbAdd () from /lib64/librpm.so.7
#23 0x00007fd84756a86d in rpmpsmRun () from /lib64/librpm.so.7
#24 0x00007fd84757e125 in rpmteProcess () from /lib64/librpm.so.7
#25 0x00007fd847584a3e in rpmtsRun () from /lib64/librpm.so.7
#26 0x00007fd84559d204 in rpmts_Run () from /usr/lib64/python3.5/site-packages/rpm/_rpm.cpython-35m-x86_64-linux-gnu.so
#27 0x00007fd8523b67e9 in PyCFunction_Call (func=<built-in method run of TransactionSet object at remote 0x7fd843940b28>, 
    args=(<method at remote 0x7fd84458a1c8>, '', 64), kwds=<optimized out>) at /usr/src/debug/Python-3.5.2/Objects/methodobject.c:98
#28 0x00007fd85236eeb7 in PyObject_Call (func=<built-in method run of TransactionSet object at remote 0x7fd843940b28>, arg=<optimized out>, kw=<optimized out>)
    at /usr/src/debug/Python-3.5.2/Objects/abstract.c:2165
#29 0x00007fd852425d17 in PyEval_CallObjectWithKeywords (func=func@entry=<built-in method run of TransactionSet object at remote 0x7fd843940b28>, 
    arg=arg@entry=(<method at remote 0x7fd84458a1c8>, '', 64), kw=kw@entry=0x0) at /usr/src/debug/Python-3.5.2/Python/ceval.c:4609
#30 0x00007fd852389b38 in methoddescr_call (descr=<optimized out>, args=(<method at remote 0x7fd84458a1c8>, '', 64), kwds=0x0)
    at /usr/src/debug/Python-3.5.2/Objects/descrobject.c:250
#31 0x00007fd85236eeb7 in PyObject_Call (func=<method_descriptor at remote 0x7fd845bfd708>, arg=<optimized out>, kw=<optimized out>)
    at /usr/src/debug/Python-3.5.2/Objects/abstract.c:2165
#32 0x00007fd85242a491 in do_call (nk=<optimized out>, na=4, pp_stack=0x7ffc5622ca60, func=<optimized out>) at /usr/src/debug/Python-3.5.2/Python/ceval.c:4965
#33 call_function (oparg=<optimized out>, pp_stack=0x7ffc5622ca60) at /usr/src/debug/Python-3.5.2/Python/ceval.c:4761
#34 PyEval_EvalFrameEx (
    f=f@entry=Frame 0x7fd842bae048, for file /usr/lib64/python3.5/site-packages/rpm/transaction.py, line 103, in run (self=<TransactionSet(_probFilter=64) at remote 0x7fd843940b28>, callback=<method at remote 0x7fd84458a1c8>, data=''), throwflag=throwflag@entry=0) at /usr/src/debug/Python-3.5.2/Python/ceval.c:3260
#35 0x00007fd85242dcfb in fast_function (nk=<optimized out>, na=<optimized out>, n=3, pp_stack=0x7ffc5622cba0, func=<optimized out>)
    at /usr/src/debug/Python-3.5.2/Python/ceval.c:4832
#36 call_function (oparg=<optimized out>, pp_stack=0x7ffc5622cba0) at /usr/src/debug/Python-3.5.2/Python/ceval.c:4759
#37 PyEval_EvalFrameEx (
    f=f@entry=Frame 0x55ac4da0bba8, for file /usr/lib/python3.5/site-packages/dnf/base.py, line 735, in _run_transaction (self=<BaseCli(_tempfile_persistor=None, _ds_callback=<DepSolveProgressCallBack(loops=1) at remote 0x7fd843950710>, _comps=None, _group_persistor=None, _plugins=<Plugins(plugin_cls=[<type at remote 0x55ac4bdef758>, <type at remote 0x55ac4bde9c38>, <type at remote 0x55ac4bdef3a8>, <type at remote 0x55ac4bdf24b8>, <type at remote 0x55ac4bdfcdd8>, <type at remote 0x55ac4bdf1f58>, <type at remote 0x55ac4bde8a38>, <type at remote 0x55ac4bdedaa8>, <type at remote 0x55ac4bdf3f08>, <type at remote 0x55ac4bdf3b58>], plugins=[<DebuginfoInstall(cli=<Cli(cmdstring='dnf install -y httpd ', command=<InstallCommand(cli=<...>, opts=<Namespace(ip_resolve=None, debuglevel=None, command=['install'], help=False, color=None, setopts=[], allowerasing=None, rpmverbosity=None, quiet=None, repos_ed=[], showdupesfromrepos=None, releasever=None, verbose=None, plugins=None, assumeno=None, version=None, excludepkgs=[], debugsolver=N...(truncated), throwflag=throwflag@entry=0) at /usr/src/debug/Python-3.5.2/Python/ceval.c:3260
#38 0x00007fd85242f5c3 in _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=args@entry=0x55ac4c8bac90, 
    argcount=1, kws=0x55ac4c8bac98, kwcount=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name='_run_transaction', qualname='Base._run_transaction')
    at /usr/src/debug/Python-3.5.2/Python/ceval.c:4047
#39 0x00007fd85242be39 in fast_function (nk=<optimized out>, na=<optimized out>, n=<optimized out>, pp_stack=0x7ffc5622cdb0, func=<optimized out>)
    at /usr/src/debug/Python-3.5.2/Python/ceval.c:4842
#40 call_function (oparg=<optimized out>, pp_stack=0x7ffc5622cdb0) at /usr/src/debug/Python-3.5.2/Python/ceval.c:4759
#41 PyEval_EvalFrameEx (
    f=f@entry=Frame 0x55ac4c8baaa8, for file /usr/lib/python3.5/site-packages/dnf/base.py, line 661, in do_transaction (self=<BaseCli(_tempfile_persistor=None, _ds_callback=<DepSolveProgressCallBack(loops=1) at remote 0x7fd843950710>, _comps=None, _group_persistor=None, _plugins=<Plugins(plugin_cls=[<type at remote 0x55ac4bdef758>, <type at remote 0x55ac4bde9c38>, <type at remote 0x55ac4bdef3a8>, <type at remote 0x55ac4bdf24b8>, <type at remote 0x55ac4bdfcdd8>, <type at remote 0x55ac4bdf1f58>, <type at remote 0x55ac4bde8a38>, <type at remote 0x55ac4bdedaa8>, <type at remote 0x55ac4bdf3f08>, <type at remote 0x55ac4bdf3b58>], plugins=[<Debugi---Type <return> to continue, or q <return> to quit---

The ps output in the container is

# ps axuwwf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root        11  0.0  0.0  12588  3716 ?        Ss   10:46   0:00 bash
root        34  0.0  0.0  39828  3256 ?        R+   10:48   0:00  \_ ps axuwwf
root         1  1.3  2.2 541256 91700 ?        Ss+  10:46   0:01 /usr/libexec/system-python /usr/bin/dnf install -y httpd

Comment 4 Florian Weimer 2016-11-15 10:56:46 UTC
(In reply to Jan Pazdziora from comment #3)
> The bt against the hung process is
> 
> #0  0x00007fd8520f82c1 in futex_wait (private=<optimized out>,
> expected=4294967295, futex_word=0x7fd842759c04) at
> ../sysdeps/unix/sysv/linux/futex-internal.h:61
> #1  futex_wait_simple (private=<optimized out>, expected=4294967295,
> futex_word=0x7fd842759c04) at ../sysdeps/nptl/futex-internal.h:135
> #2  __pthread_cond_destroy (cond=cond@entry=0x7fd842759be0) at
> pthread_cond_destroy.c:54

This could be due to glibc-swbz13165.patch (new condition variable implementation).

Comment 5 Florian Weimer 2016-11-15 13:41:29 UTC
I was able to reproduce it, but only in a Docker image.  Here's the condvar state:

(gdb) print *cond
$2 = {__data = {{__wseq = 0, __wseq32 = {__low = 0, __high = 0}}, {
      __g1_start = 0, __g1_start32 = {__low = 0, __high = 0}}, 
    __g_refs = {0, 0}, __g_size = {0, 0}, 
    __g1_orig_size = 4294967295, __wrefs = 4294967295, 
    __g_signals = {0, 0}}, 
  __size = '\000' <repeats 32 times>, "\377\377\377\377\377\377\377\377\000\000\000\000\000\000\000", __align = 0}

Comment 6 Torvald Riegel 2016-11-15 15:19:53 UTC
It seems that the root cause is that the program tries to use a condvar instance that has been initialized with a prior version of glibc.  The condvar appears to be in a database file (/var/lib/rpm/__db.002 in our reproducer).  With the (glibc-internal) condvar definition in the old glibc version, the bits represent a valid, unused condvar:
$4 = {__data = {__lock = 0, __futex = 0, __total_seq = 0, 
                __wakeup_seq = 0,  __woken_seq = 0, __mutex = 
                0xffffffffffffffff, __nwaiters = 0,  __broadcast_seq = 0},

The same bits do not represent a valid condvar if given the condvar definition in the new version of glibc.

Storing the condvar in the database file means that effectively, the program is expecting different versions of glibc to use the same bit representations of internal data structures.  This is not guaranteed to work.

It seems that either BDB should be changed to not store the condvar in database files, or dnf needs to reinitialize the databases it relies on when glibc is updated.
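To make the failure mode concrete, here is a minimal C sketch of the pattern described above: a pthread_cond_t whose bits live in a file-backed MAP_SHARED mapping and are later reused without reinitialization. This is not libdb's actual code; the file name and layout are invented for illustration.

~~~
/* Sketch only: a condvar placed in a file-backed shared mapping.
 * File name and layout are made up; this is not libdb's code. */
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("region.db", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, sizeof(pthread_cond_t)) != 0)
        return 1;

    pthread_cond_t *cond = mmap(NULL, sizeof(*cond), PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
    if (cond == MAP_FAILED)
        return 1;

    /* First opener: initialize the condvar; its bits end up on disk. */
    pthread_condattr_t attr;
    pthread_condattr_init(&attr);
    pthread_condattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(cond, &attr);
    pthread_condattr_destroy(&attr);

    /* In the bug's scenario, a later process (running a newer glibc)
     * maps the same file and operates on the persisted bits.  Those
     * bits are then interpreted with a different internal layout, and
     * a call such as this destroy can hang, as in the backtrace in
     * comment 3.  Within this single program it is harmless. */
    pthread_cond_destroy(cond);

    munmap(cond, sizeof(*cond));
    close(fd);
    return 0;
}
~~~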

Comment 7 Florian Weimer 2016-11-15 15:23:57 UTC
The hang goes away if I add a “rpm --rebuilddb” step to the container build file:

FROM fedora:rawhide
RUN dnf upgrade -y glibc
RUN rpm --rebuilddb
RUN dnf install -y httpd

Comment 8 Jan Pazdziora 2016-11-15 16:15:42 UTC
(In reply to Torvald Riegel from comment #6)
> It seems that the root cause is that the program tries to use a condvar
> instance that has been initialized with a prior version of glibc.

Thank you for the analysis.

> The
> condvar appears to be in a database file (/var/lib/rpm/__db.002 in our
> reproducer).  With the (glibc-internal) condvar definition in the old glibc
> version, the bits represent a valid, unused condvar:
> $4 = {__data = {__lock = 0, __futex = 0, __total_seq = 0, 
>                 __wakeup_seq = 0,  __woken_seq = 0, __mutex = 
>                 0xffffffffffffffff, __nwaiters = 0,  __broadcast_seq = 0},
> 
> The same bits do not represent a valid condvar if given the condvar
> definition in the new version of glibc.

Can the same issue happen during upgrade from (say) Fedora 24 to rawhide?

How hard would it be to make this old unused bit value also understood as unused value in new glibc?

> Storing the condvar in the database file means that effectively, the program
> is expecting different versions of glibc to use the same bit representations
> of internal data structures.  This is not guaranteed to work.
> 
> It seems that either BDB should be changed to not store the condvar in
> database files, or dnf needs to reinitialize the databases it relies on when
> glibc is updated.

It would likely need to be rpm, not dnf.

Comment 9 Carlos O'Donell 2016-11-15 16:46:50 UTC
(In reply to Torvald Riegel from comment #6)
> It seems that the root cause is that the program tries to use a condvar
> instance that has been initialized with a prior version of glibc.  The
> condvar appears to be in a database file (/var/lib/rpm/__db.002 in our
> reproducer).  With the (glibc-internal) condvar definition in the old glibc
> version, the bits represent a valid, unused condvar:
> $4 = {__data = {__lock = 0, __futex = 0, __total_seq = 0, 
>                 __wakeup_seq = 0,  __woken_seq = 0, __mutex = 
>                 0xffffffffffffffff, __nwaiters = 0,  __broadcast_seq = 0},
> 
> The same bits do not represent a valid condvar if given the condvar
> definition in the new version of glibc.
> 
> Storing the condvar in the database file means that effectively, the program
> is expecting different versions of glibc to use the same bit representations
> of internal data structures.  This is not guaranteed to work.
> 
> It seems that either BDB should be changed to not store the condvar in
> database files, or dnf needs to reinitialize the databases it relies on when
> glibc is updated.

This is unsupported by POSIX; you must _always_ reinitialize the POSIX thread objects, per:
~~~
2.9.9 Synchronization Object Copies and Alternative Mappings

For barriers, condition variables, mutexes, and read-write locks, [TSH] [Option Start]  if the process-shared attribute is set to PTHREAD_PROCESS_PRIVATE, [Option End]  only the synchronization object at the address used to initialize it can be used for performing synchronization. The effect of referring to another mapping of the same object when locking, unlocking, or destroying the object is undefined. [TSH] [Option Start]  If the process-shared attribute is set to PTHREAD_PROCESS_SHARED, only the synchronization object itself can be used for performing synchronization; however, it need not be referenced at the address used to initialize it (that is, another mapping of the same object can be used). [Option End]  The effect of referring to a copy of the object when locking, unlocking, or destroying it is undefined.

For spin locks, the above requirements shall apply as if spin locks have a process-shared attribute that is set from the pshared argument to pthread_spin_init(). For semaphores, the above requirements shall apply as if semaphores have a process-shared attribute that is set to PTHREAD_PROCESS_PRIVATE if the pshared argument to sem_init() is zero and set to PTHREAD_PROCESS_SHARED if pshared is non-zero.
~~~

The serialization of the POSIX thread object into the database is undefined behaviour as noted under "The effect of referring to a copy of the object when locking, unlocking, or destroying it is undefined."

You may only use initialized objects.

I'm moving this to libdb to get fixed.
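For contrast, here is a small sketch of the usage POSIX does permit: the synchronization objects in a shared region are initialized fresh with PTHREAD_PROCESS_SHARED every time the region is created, rather than trusting bits persisted by an earlier glibc. The struct and function names below are illustrative, not libdb's.

~~~
/* Sketch: initialize process-shared sync objects when the region is
 * (re)created, before any other process attaches.  Illustrative names. */
#include <pthread.h>

struct shared_region {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    /* ... application data ... */
};

int region_init_sync(struct shared_region *r)
{
    pthread_mutexattr_t ma;
    pthread_condattr_t  ca;

    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&r->lock, &ma);
    pthread_mutexattr_destroy(&ma);

    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&r->cond, &ca);
    pthread_condattr_destroy(&ca);

    return 0;
}
~~~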

Comment 10 Florian Weimer 2016-11-15 16:57:49 UTC
(In reply to Carlos O'Donell from comment #9)

Torvald and I looked at the POSIX wording as well, and it is not sufficiently clear if the libdb usage is actually undefined.  POSIX likely intends this to be undefined: some implementations do not have a unified page cache, and the effect of the mutex/condition variable initialization may happen completely outside the context of the file mapping.

The key question is whether you may continue to use the resource after the process which has initialized it unmapped the file and exited.  The historic Berkeley DB use case requires the initialization to persist, although I do think it is problematic, as explained above.  Even the existing database environment liveness checks are likely insufficient because they cannot detect the case where all applications detached orderly, and reattach after a glibc update.

Comment 11 Florian Weimer 2016-11-15 20:30:04 UTC
I found where POSIX says that the libdb usage is undefined.  It's in the specification for mmap and munmap:

“The state of synchronization objects such as mutexes, semaphores, barriers, and conditional variables placed in shared memory mapped with MAP_SHARED becomes undefined when the last region in any process containing the synchronization object is unmapped.”

If we posit that glibc updates need system reboots, this means that the format of these data structures on disk does not have to be retained across glibc updates.

Comment 12 Carlos O'Donell 2016-11-15 20:58:10 UTC
(In reply to Florian Weimer from comment #11)
> I found where POSIX says that the libdb usage is undefined.  It's in the
> specfication for mmap and unmmap:
> 
> “The state of synchronization objects such as mutexes, semaphores, barriers,
> and conditional variables placed in shared memory mapped with MAP_SHARED
> becomes undefined when the last region in any process containing the
> synchronization object is unmapped.”
> 
> If we posit that glibc updates need system reboots, this means that the
> format of these data structures on disk does not have to be retained across
> glibc updates.

Correct, with the alignment and size being constrained by the ABI, so that needs to remain constant, but yes, the internal details of the type and the bit patterns and their meanings can and may change from release to release. You must reinitialize the objects before using them.

(In reply to Florian Weimer from comment #10)
> (In reply to Carlos O'Donell from comment #9)
> 
> Torvald and I looked at the POSIX wording as well, and it is not
> sufficiently clear if the libdb usage is actually undefined.  POSIX likely
> intends this to be undefined: some implementations do not have a unified
> page cache, and the effect of the mutex/condition variable initialization
> may happen completely outside the context of the file mapping.

The "why" is covered in the non-normative text under "Alternate Implementations Possible" here:
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_init.html

> The key question is whether you may continue to use the resource after the
> process which has initialized it unmapped the file and exited.  The historic
> Berkeley DB use case requires the initialization to persist, although I do
> think it is problematic, as explained above.  Even the existing database
> environment liveness checks are likely insufficient because they cannot
> detect the case where all applications detached orderly, and reattach after
> a glibc update.

You cannot continue to use the synchronization resource after the last mapping is removed, as you noted in comment 11.

(In reply to Jan Pazdziora from comment #8)
> (In reply to Torvald Riegel from comment #6)
> > The
> > condvar appears to be in a database file (/var/lib/rpm/__db.002 in our
> > reproducer).  With the (glibc-internal) condvar definition in the old glibc
> > version, the bits represent a valid, unused condvar:
> > $4 = {__data = {__lock = 0, __futex = 0, __total_seq = 0, 
> >                 __wakeup_seq = 0,  __woken_seq = 0, __mutex = 
> >                 0xffffffffffffffff, __nwaiters = 0,  __broadcast_seq = 0},
> > 
> > The same bits do not represent a valid condvar if given the condvar
> > definition in the new version of glibc.
> 
> Can the same issue happen during upgrade from (say) Fedora 24 to rawhide?

Yes, the same issue may happen when upgrading from Fedora 24 to Rawhide. It is conceivable that this issue becomes a blocker for F26 if it reproduces consistently during installs or upgrades.

This needs to be fixed in Berkeley DB; this is an error in the use of the POSIX threads primitives which constrains the implementation from moving forward with beneficial changes, e.g. correctness and performance improvements for condition variables (the reason the internal type representation changed).

> How hard would it be to make this old unused bit value also understood as
> unused value in new glibc?

TL;DR: we would never support detecting the old unused bit value. The cost is too high for other applications.

You would have to detect all possible marshalled bit patterns and reinitialize the condition variable in all API interfaces that might be passed the unmarshalled state. The cost is non-zero, it is on the hot path for all other applications that use the type (even with versioning, apps pay the price until they are recompiled), and it would only be to support an undefined use.

> > Storing the condvar in the database file means that effectively, the program
> > is expecting different versions of glibc to use the same bit representations
> > of internal data structures.  This is not guaranteed to work.
> > 
> > It seems that either BDB should be changed to not store the condvar in
> > database files, or dnf needs to reinitialize the databases it relies on when
> > glibc is updated.
> 
> It would likely need to be rpm, not dnf.

It would actually be in Berkeley DB, which is where we have assigned this bug, i.e. libdb.

Comment 13 Carlos O'Donell 2016-11-15 21:08:36 UTC
(In reply to Carlos O'Donell from comment #12)
> > How hard would it be to make this old unused bit value also understood as
> > unused value in new glibc?
> 
> TLDR; we would never support detecting of the old unused bit value. The cost
> is too high for other applications.
> 
> You would have to detect all possible marshalled bit patterns and
> reinitialize the condition variable in all API interfaces that might be
> passed the unmarshalled state. The cost is non-zero, the cost is on the hot
> path for all other applications that use the type (even with versioning apps
> pay the price until they are recompiled), and it would be only to support an
> undefined use.

FYI.

For the sake of openness and transparency, there are two cases which glibc doesn't support but could; they are the following:

(1) Mixed implementations with PTHREAD_PROCESS_SHARED.

(a) Process A running glibc X initializes a condition variable in shared memory with PTHREAD_PROCESS_SHARED.
(b) Process B running glibc X+1 (with a different condition variable definition) attaches to that shared memory and tries to use it.

It is clear that process A and process B have two different understandings of the condition variable. The use of PTHREAD_PROCESS_SHARED has to trigger some kind of versioned data interface e.g. a condvar with a version field which detects the version in use and switches to the appropriate algorithm. This would incur significant costs when using PTHREAD_PROCESS_SHARED objects, and the use case has never become so important to support that we have supported it. Either way the glibc upgrade for Process B required a reboot, so as Florian argues you need to reboot, in which case you get a consistent view of the condvar and everything works optimally. The text as written in POSIX argues the above should be supported, but glibc doesn't support it.

(2) Mixed 32-bit and 64-bit implementations with PTHREAD_PROCESS_SHARED (similar to (1)).

While this is similar to (1) and can be considered a case of "another implementation of narrower type" I want to call it out separately.

The other case is a mixed 32-bit and 64-bit process case where the processes share types in shared memory, and this doesn't work correctly for the obvious reason that the types are different ABIs. Making it work could again be part of making PTHREAD_PROCESS_SHARED truly "process shared" across an ABI boundary for 32-bit and 64-bit code (reserve a bit in the version field for an ABI flag). Again this use case has never been important enough from a user perspective that we would attempt to fix it (though we face a similar problem in nscd with shared maps of the database caches).

Comment 14 Torvald Riegel 2016-11-16 10:43:58 UTC
I had a quick look at src/mutex/mut_pthread.c, and I'd like to stress that this applies to all pthreads synchronization primitives, not just condvars.  libdb should not try to persist instances of these types; if it does, the database becomes tied to a particular glibc version.  A new rwlock implementation is in the queue for testing in Rawhide, and we may alter plain mutexes too in the future.

libdb will have to either implement custom synchronization, or it has to require clients to all use the same glibc version and architecture (including 32b vs. 64b on x86_64, for example).

Comment 15 Fedora Admin XMLRPC Client 2016-11-16 13:31:44 UTC
This package has changed ownership in the Fedora Package Database.  Reassigning to the new owner of this component.

Comment 16 Torvald Riegel 2017-01-03 13:29:49 UTC
glibc upstream now contains the new condition variable.  Has any work been started on the libdb (users) side to fix this libdb bug?

libdb has to be fixed very soon or we will get into issues when updating glibc.  I can help with how to fix this in libdb, but I do need help from the libdb maintainers, in particular in terms of (1) testing, (2) ensuring that all users of libdb enable whatever solution we come up with, and (3) notifying other distributions about this.

Who are the people that should be involved in this?  I suppose upstream-first isn't possible in this case?

Comment 17 Petr Kubat 2017-01-03 13:54:28 UTC
Thanks for bringing this up.
I have not yet managed to take a deeper look at this but I can see how this might prove problematic in the near future. I will definitely appreciate any help you can offer with fixing libdb.
As for testing this change, libdb has some kind of test suite that might be useful but has not been used yet in any way. I will take a look at it and see if I can get it working.

I am not sure whether we are going to get any support from upstream with this issue given that we are running on an older (possibly already unsupported?) release, but I can try asking anyway.

Comment 18 Torvald Riegel 2017-01-03 17:31:42 UTC
Note that I haven't ever worked with or on libdb before, so it would be good to get people involved that have some experience.  I don't have the spare time to see this through in detail.  I can help regarding how to use pthreads properly and with general concurrency considerations.

To reiterate, the problem is that libdb uses the glibc-internal binary representation in databases, and either (1) libdb or (2) how dnf uses libdb causes new glibc code to try to work on the binary representation written by old glibc code.

I'm splitting this up into (1) and (2) because it is not yet clear to me who exactly is at fault here.  libdb's documentation on upgrading libdb (see docs/upgrading/upgrade_process.html in libdb's source) is rather vague and fails to include glibc (or whatever provides pthreads) in its considerations but does mention the compiler; this is odd at least from our perspective because things like data structure layout are part of the ABI guarantees but glibc-internal representations are not part of the ABI.  libdb could state that any pthreads-implementation update is to be considered a libdb major/minor update -- but it does not do that AFAIK, and it's unclear whether that would help.

libdb could require that only the same build of libdb including the same build of the pthreads implementation are active on the same database concurrently.  If it would shut down all mutexes when there are no references left and reinitialize them when a new reference comes in, this should be sufficient to not trigger the glibc update problem under that requirement.  In this case, it may be simply dnf's fault if dnf does not enforce that no process must hold a reference on dnf's databases while glibc is upgraded.
Second, I have looked a bit at the code for libdb mutex, mutex regions, and environments, but I haven't figured out yet whether libdb actually destroys all mutexes when the last reference to a database is removed.
Nonetheless, making this scenario work could be a solution.

Another solution could be to say that glibc upgrades are like libdb major/minor upgrades, and then follow (some of) the steps outlined in docs/upgrading/upgrade_process.html for every user of libdb.  This requires touching more packages, but may be easier to enforce than the first solution, which requires ensuring that libdb clients do not hold a reference across updates of glibc.

A third solution would be to version databases and include glibc's version in there.  But that doesn't provide automatic compatibility, and would probably require quite a few changes to libdb.

I guess it would be best to look at the first solution first, starting with trying to find out what libdb actually guarantees or intends to guarantee.  Any suggestions for where to get this information?

Comment 19 Panu Matilainen 2017-01-04 06:53:47 UTC
Just FWIW: dnf does not use libdb directly, rpm does. Dnf is only involved as a librpm API consumer. The rpmdb is never closed during a transaction so there's always a "reference" to the database environment when an upgrade is in process.

From rpm POV the simplest option would be to just flick on DB_PRIVATE mode. That more or less equals disabling db locking, but non-root queries (which I'd assume to be the vast majority) already run in DB_PRIVATE mode for the simple reason of not having permission to grab locks, and the world hasn't ended. Single writer is enforced via a separate transaction lock anyway.
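As a sketch of what "flicking on DB_PRIVATE mode" could look like at the Berkeley DB API level (the flag combination and helper name are assumptions for illustration, not rpm's actual configuration):

~~~
/* Sketch: open a private (in-process) environment so no mutex/condvar
 * state is persisted in region files.  Flags/paths are illustrative. */
#include <db.h>
#include <stdio.h>

int open_private_env(DB_ENV **out, const char *home)
{
    DB_ENV *dbenv;
    int ret;

    if ((ret = db_env_create(&dbenv, 0)) != 0)
        return ret;

    /* DB_PRIVATE keeps the regions in process-private memory rather
     * than in file-backed shared regions under `home`. */
    ret = dbenv->open(dbenv, home,
                      DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_PRIVATE,
                      0644);
    if (ret != 0) {
        fprintf(stderr, "env open: %s\n", db_strerror(ret));
        dbenv->close(dbenv, 0);
        return ret;
    }

    *out = dbenv;
    return 0;
}
~~~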

Comment 20 Petr Kubat 2017-01-04 11:39:06 UTC
(In reply to Torvald Riegel from comment #18)
> Second, I have looked a bit at the code for libdb mutex, mutex regions, and
> environments, but I haven't figured out yet whether libdb actually destroys
> all mutexes when the last reference to a database is removed.
> Nonetheless, making this scenario work could be a solution.

To my knowledge, mutexes located in libdb's regions are not destroyed or re-initialized when the last process that references them exits.
 
> Another solution could be to say that glibc upgrades are like libdb
> major/minor upgrades, and then follow (some of) the steps outlined in
> docs/upgrading/upgrade_process.html for every user of libdb.  This requires
> touching more packages, but may be easier to enforce than the first
> solution, which requires checking whether libdb clients cannot hold a
> reference across updates of glibc.

This would be possible for the packages we know of, but not helpful for user applications built on libdb, which would find themselves unable to use it and would have to make some manual changes on their own.
This could however be used as a workaround for rpm which would otherwise block the upgrade to F26.

> A third solution would be to version databases and include glibc's version
> in there.  But that doesn't provide automatic capability, and would probably
> require quite a few changes to libdb.
> 
> I guess it would be best to look at the first solution first, starting with
> trying to find out what libdb actually guarantees or intends to guarantee. 
> Any suggestions for where to get this information?

Unfortunately my knowledge about libdb is also very limited, given that I have not been taking care of it for very long.
I have contacted upstream regarding this issue. The thread is publicly available at:
https://community.oracle.com/message/14170069
My hope is that Oracle might help us answer the questions we have about their implementation at least.

Comment 21 Petr Kubat 2017-01-06 07:49:59 UTC
Discussion with Oracle moved to email. They promised to take a look at this issue but it will take a while before they can work on it actively. Do we know when the new implementation is going to get into an upstream glibc release, so Oracle knows what timeframe to work with?

I have also verified with Oracle that libdb does not do any reinitialization checks of synchronization primitives on its own, so the only way to currently work around this is by opening the environment with the DB_RECOVER (or DB_PRIVATE as Panu pointed out) flag set.

Comment 22 Petr Kubat 2017-02-13 09:53:40 UTC
Forwarding new information from upstream:

----

We have been looking at this.  this is a bit tricky.   The problem as we see it, is the glibc library changes the pthread_cond_t structure in a way that breaks backward compatibility.   So we cannot look at structure size and make any conclusions.  The idea that we came up with is this ....

   -  add a check in the configure code to see if gnu_get_libc_version() exists on the system -- This should be true for nearly all Linuxes
 - if true, we compile in some extra code that is executed during the environment open
 - This code will clean the mutex region if the glibc version is different.

  consequences of this patch ....

  - we will store the glibc version in the environment and on open we will grab the current version and if we detect a mismatch we will recreate the region files.  This should prevent the hang that you are seeing.

----

This could work as a workaround for the issue; however, I have voiced some concerns regarding already existing environments that do not have the glibc version variable yet (as we had problems with modifying environment structures before).
Also, upstream asks whether it is possible to get into a situation in which two rpm processes are accessing the libdb environment, each using a different glibc version (after performing the glibc upgrade).
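For reference, a rough sketch of the check upstream describes, built around gnu_get_libc_version(); how the previously seen string is stored in the environment is deliberately left out, since that is exactly the part the concerns above are about.

~~~
/* Sketch: decide whether the mutex region should be recreated because
 * glibc changed.  stored_version is whatever a previous environment
 * open recorded (NULL on first open); where/how it is stored is not
 * modeled here. */
#include <gnu/libc-version.h>
#include <stdbool.h>
#include <string.h>

bool glibc_version_changed(const char *stored_version)
{
    const char *current = gnu_get_libc_version();  /* e.g. "2.24.90" */

    return stored_version == NULL || strcmp(stored_version, current) != 0;
}
~~~

On a mismatch (or a missing stored value) the region files would be recreated, as described in the proposal above.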

Comment 23 Panu Matilainen 2017-02-13 10:15:26 UTC
(In reply to Petr Kubat from comment #22)
> Also upstream asks whether it is possible to get into a situation in which
> two rpm processes are accessing libdb environment each using a different
> glibc version (after performing the glibc upgrade).

This is one of the reasons why calling rpm from rpm scriptlets is such a bad idea: while it appears to work most of the time, every once in a blue moon it breaks in that single transaction where glibc innards change; there's no way to predict when this might happen, and reproducing it can be a bit tricky too.

It's also a fine example of the wacko complications involved with the rpm database...

Comment 24 Petr Kubat 2017-02-13 11:09:02 UTC
(In reply to Panu Matilainen from comment #23)
> This is one of the reasons why calling rpm from rpm scriptlets is such a bad
> idea: while it appears to work most of the time, every once in a blue moon
> it breaks in that single transaction where glibc innards change and there's
> no way to predict when this might happen and reproducing can be a bit tricky
> too.
> 
> It's also a fine example of the wacko complications involved with the rpm
> database...

Ah, right. I forgot running rpm inside an rpm scriptlet is a thing. Thanks for reminding me.

Comment 25 Panu Matilainen 2017-02-13 11:15:03 UTC
(In reply to Petr Kubat from comment #24)
> 
> Ah, right. I forgot running rpm inside a rpm scriptlet is a thing. Thanks
> for reminding me.

It's very much a discouraged thing. Not that discouraging ever *stopped* people from doing it though...

Comment 26 Florian Weimer 2017-02-13 11:19:34 UTC
(In reply to Petr Kubat from comment #22)
> Forwarding new information from upstream:
> 
> ----
> 
> We have been looking at this.  this is a bit tricky.   The problem as we see
> it, is the glibc library changes the pthread_cond_t structure in a way that
> breaks backward compatibility.   So we cannot look at structure size and
> make any conclusions.  The idea that we came up with is this ....
> 
>    -  add a check in the configure code to see if gnu_get_libc_version()
> exists on the system -- This should be true for nearly all Linuxes
>  - if true, we compile in some extra code that is executed during the
> environment open
>  - This code will clean the mutex region if the glibc version is different.
> 
>   consequences of this patch ....
> 
>   - we will store the glibc version in the environment and on open we will
> grab the current version and if we detect a mismatch we will recreate the
> region files.  This should prevent the hang that you are seeing.
> 
> ----

I don't think this is quite sufficient because the data structure layout could change without the glibc version changing.

I suppose we could provide a pthread_layout_version_np function in glibc which returns an int that identifies the version of the current pthread data structure layout.  It is a bit tricky to come up with a single version number which is consistent across all distributions, but we can probably provide something.

Comment 27 Torvald Riegel 2017-02-13 11:57:58 UTC
(In reply to Florian Weimer from comment #26)
> (In reply to Petr Kubat from comment #22)
> > Forwarding new information from upstream:
> > 
> > ----
> > 
> > We have been looking at this.  this is a bit tricky.   The problem as we see
> > it, is the glibc library changes the pthread_cond_t structure in a way that
> > breaks backward compatibility.   So we cannot look at structure size and
> > make any conclusions.  The idea that we came up with is this ....
> > 
> >    -  add a check in the configure code to see if gnu_get_libc_version()
> > exists on the system -- This should be true for nearly all Linuxes
> >  - if true, we compile in some extra code that is executed during the
> > environment open
> >  - This code will clean the mutex region if the glibc version is different.
> > 
> >   consequences of this patch ....
> > 
> >   - we will store the glibc version in the environment and on open we will
> > grab the current version and if we detect a mismatch we will recreate the
> > region files.  This should prevent the hang that you are seeing.
> > 
> > ----
> 
> I don't think this is quite sufficient because the data structure layout
> could change without the glibc version changing.
> 
> I suppose we could provide a pthread_layout_version_np function in glibc
> which returns an int which refers to version number of the current pthread
> data structure layout.  it is a bit tricky to come up with a single version
> number which is consistent across all distributions, but we can probably
> provide something.

I agree that we could build such a version number, but I don't think we should.  First, libdb should show that it can't just do reference counting, for example, and re-initialize any condvars it uses whenever a database transitions from not being used to used.  I'll talk with libdb upstream about this.

Comment 28 Howard Chu 2017-02-13 12:46:51 UTC
If you can guarantee that a reboot will occur between version changes, a simple fix is to always configure BDB to use a shared memory region instead of a mmap'd file for its environment. Then the shared memory region is automatically destroyed/recreated by rebooting the system.
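A sketch of this suggestion at the API level: opening the environment with DB_SYSTEM_MEM keeps the regions in System V shared memory, which a reboot discards. The flag combination and the segment key below are illustrative assumptions.

~~~
/* Sketch: environment regions in SysV shared memory instead of
 * file-backed mmaps, so a reboot naturally throws them away. */
#include <db.h>

int open_sysmem_env(DB_ENV **out, const char *home)
{
    DB_ENV *dbenv;
    int ret;

    if ((ret = db_env_create(&dbenv, 0)) != 0)
        return ret;

    /* DB_SYSTEM_MEM needs a base segment id for the regions. */
    dbenv->set_shm_key(dbenv, 20170213);  /* arbitrary example key */

    ret = dbenv->open(dbenv, home,
                      DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_SYSTEM_MEM,
                      0644);
    if (ret != 0) {
        dbenv->close(dbenv, 0);
        return ret;
    }

    *out = dbenv;
    return 0;
}
~~~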

Comment 29 Petr Kubat 2017-02-21 13:46:51 UTC
Some more information from the discussion with libdb upstream.

We have 2 ideas we are looking at right now.

1. have BDB store the glibc version and, on an open, do some checking to see if the glibc version changed and force a recovery under the covers
2. DBENV recovery on glibc version change via a %posttrans cleanup action

The first idea I have already mentioned in comment 22.
The second idea is something I have thrown into the discussion as a workaround specifically for rpm. I have tried looking at this idea a bit more to see if it would actually work and found out that there are additional issues connected with it.
AFAIK there are 3 ways to go about forcing the environment to recover:
First "rpm --rebuilddb", but this does not work as it needs a transaction lock that is held by the rpm process doing the install/update action.
Second is "db_recover -h /var/lib/rpm" which is the libdb way of doing the recovery. This does work in recovering the environment but results in the rpm process throwing DB_RUNRECOVERY errors.
The third way is just removing the problematic region files by force via "rm -f /var/lib/__db.00*", but it seems rpm recreates them with the old glibc structures in place when the db/dbenv is closed.

Generally libdb upstream is against modifying their code to rebuild the environment every time it is accessed as that (together with the implementation of an "in-use" counter) would be a non-negligible hit to performance.

Comment 30 Petr Kubat 2017-02-22 08:48:49 UTC
A correction to the third way of environment recovery ("rm -f /var/lib/__db.00*") - the region files are not recreated on closing the db/dbenv, but during dnf's verification process, which is still a problem since the hang is not removed.

Comment 31 Petr Kubat 2017-02-22 09:03:53 UTC
The underlying problem with the verification step might be the same as the one I encountered a while ago in yum (bug 1351060).

Comment 32 Fedora End Of Life 2017-02-28 10:36:52 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 26 development cycle.
Changing version to '26'.

Comment 33 Petr Kubat 2017-05-03 06:37:14 UTC
Some more information from upstream:

They are going with the basic solution that they had originally proposed (cleaning up the environment files on glibc version change). The main issue is doing this in a dynamic way.

Right now upstream is waiting for a pass on all of their regression tests, so hopefully we will get a patch we can work with soon.

Comment 34 Petr Kubat 2017-05-16 08:07:23 UTC
Created attachment 1279226 [details]
upstream patch

Earlier today, upstream sent me the promised patch for our version of libdb (attached).
Only had time to do a quick test but it seems to work out of the box. Will do a thorough review later.

Comment 35 Fedora Update System 2017-05-23 22:43:34 UTC
libdb-5.3.28-18.fc24 has been submitted as an update to Fedora 24. https://bodhi.fedoraproject.org/updates/FEDORA-2017-9d674be444

Comment 36 Fedora Update System 2017-05-23 22:44:08 UTC
libdb-5.3.28-18.fc25 has been submitted as an update to Fedora 25. https://bodhi.fedoraproject.org/updates/FEDORA-2017-6e056b68bf

Comment 37 Fedora Update System 2017-05-23 22:44:32 UTC
libdb-5.3.28-18.fc26 has been submitted as an update to Fedora 26. https://bodhi.fedoraproject.org/updates/FEDORA-2017-a4c41ecc27

Comment 38 Adam Williamson 2017-05-23 22:50:49 UTC
As this is blocking Fedora 26 beta, and the patch tested out with a scratch build (see https://bugzilla.redhat.com/show_bug.cgi?id=1443415#c19 ), I went ahead and submitted updates for all releases. Petr, of course if you spot anything wrong on review, I will withdraw the updates.

For F24 and F25 I synced up with master, pulling in a couple of other recent fixes; I reviewed them and they seemed to be quite safe to include, but again, please let me know if not.

Comment 39 Petr Kubat 2017-05-24 06:36:10 UTC
As for the patch, I found one thing that bothered me and might also pose some issues:

~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~

diff -r db78da0996b1 src/env/env_region.c
--- a/src/env/env_region.c  Mon Sep 09 11:09:35 2013 -0400
+++ b/src/env/env_region.c  Sat Apr 29 04:10:18 2017 -0700
@@ -14,17 +14,54 @@
 #include "dbinc/log.h"
 #include "dbinc/txn.h"

+#define static
 static int  __env_des_get __P((ENV *, REGINFO *, REGINFO *, REGION **));
 static int  __env_faultmem __P((ENV *, void *, size_t, int));
 static int  __env_sys_attach __P((ENV *, REGINFO *, REGION *));

~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~

I already have a patch for this (and other minor things) prepared but wanted to first get in touch with upstream to ask if there is not a good reason for the define to stay in (there is not).
I will make new builds for libdb across all the branches that already have the original fix.

Comment 40 Adam Williamson 2017-05-24 06:56:26 UTC
Thanks for the heads-up - please do go ahead and edit my updates with your new builds if you have the power, or submit new updates to supersede them.

Comment 41 Petr Kubat 2017-05-24 07:50:39 UTC
New builds have been added to the existing updates.

As for how libdb now handles environment rebuild checks - when the environment is opened, libdb checks the modify timestamp of libpthread.so, compares it with the value stored in the environment, and on mismatch it exclusively locks the environment and rebuilds it with the new variables in place.
Libdb now also rebuilds the environment the same way when either the internal layout of its regions' variables changes (as is the case with the addition of pthreads_timestamp) or libdb is updated to a newer version.
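A rough illustration of that timestamp comparison (not the exact code shipped in the update); the library path and the way the stored value reaches the function are assumptions for the sketch.

~~~
/* Sketch: compare libpthread's on-disk modify time against a value
 * remembered in the environment; a mismatch means the environment
 * should be rebuilt.  Path and storage are illustrative only. */
#include <stdbool.h>
#include <sys/stat.h>
#include <time.h>

bool pthread_lib_changed(const char *libpthread_path, time_t stored_mtime)
{
    struct stat st;

    if (stat(libpthread_path, &st) != 0)
        return true;  /* be conservative: force a rebuild */

    return st.st_mtime != stored_mtime;
}
~~~

A caller would pass something like "/lib64/libpthread.so.0" (an assumed path) and the timestamp recorded at the previous environment open.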

Comment 42 Fedora Update System 2017-05-25 19:20:08 UTC
libdb-5.3.28-19.fc26 has been pushed to the Fedora 26 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-a4c41ecc27

Comment 43 Lukas Slebodnik 2017-05-26 07:32:49 UTC
(In reply to Adam Williamson from comment #38)
> As this is blocking Fedora 26 beta, and the patch tested out with a scratch
> build (see https://bugzilla.redhat.com/show_bug.cgi?id=1443415#c19 ), I went
> ahead and submitted updates for all releases. Petr, of course if you spot
> anything wrong on review, I will withdraw the updates.
> 
> For F24 and F25 I synced up with master, pulling in a couple of other recent
> fixes; I reviewed them and they seemed to be quite safe to include, but
> again, please let me know if not.

Adam,
Thank you very much. I really appreciate that you "tested" this update properly.
[root@vm-174 ~]# dnf check-update
error: rpmdb: BDB0113 Thread/process 17153/139837515298560 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 -  (-30973)
error: cannot open Packages database in /var/lib/rpm
Error: Error: rpmdb open failed
[root@vm-174 ~]# 
[root@vm-174 ~]# rpm -q libdb
error: rpmdb: BDB0113 Thread/process 17153/139837515298560 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 -  (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 17153/139837515298560 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 -  (-30973)
error: cannot open Packages database in /var/lib/rpm
package libdb is not installed

Comment 44 Adam Williamson 2017-05-26 07:36:59 UTC
Lukas: I didn't test it; people hitting the bug did, and it fixed the bug. Updates go to testing in order to find problems like that. I already unpushed the updates several hours ago. You may want to follow https://bugzilla.redhat.com/show_bug.cgi?id=1443415 , the discussion is more active there.

Comment 45 Lukas Slebodnik 2017-05-26 07:47:45 UTC
Then it would be good to close this BZ as a duplicate or vice versa.

Comment 46 Lukas Slebodnik 2017-05-26 11:58:13 UTC
(In reply to Adam Williamson from comment #44)
> Lukas: I didn't test it; people hitting the bug did, and it fixed the bug.
And that is the biggest problem. It is a critical package and you blindly pushed 5.3.28-18 to updates-testing.

You would not have pushed the package to updates-testing if you had tried dnf update + links from koji.
  e.g. dnf update https://kojipkgs.fedoraproject.org//packages/libdb/5.3.28/18.fc26/x86_64/libdb-5.3.28-18.fc26.x86_64.rpm https://kojipkgs.fedoraproject.org//packages/libdb/5.3.28/18.fc26/x86_64/libdb-utils-5.3.28-18.fc26.x86_64.rpm

Because it does not work. I know it is not just your problem, because the package maintainer didn't test their build (5.3.28-19) either (dnf update + links from koji).

> Updates go to testing in order to find problems like that.
Fedora users should just help with testing packages in updates-testing; they should not substitute for this role. updates-testing should be used to find corner cases, but this is a basic use case: updating from updates -> updates-testing.

> I already unpushed the updates several hours ago.
Then there is a bug in Fedora release engineering. It is obvious that unpushed packages were released. It would be good to prevent such a situation in the future. If you know where to file a ticket for Fedora releng, then I would appreciate it if you could do that.

Comment 47 Petr Kubat 2017-05-26 12:08:33 UTC
Lukas,
libdb has been tested. It is unfortunate that another issue broke rpm in some configurations (I guess yours included), but that is something that most of the time cannot be foreseen, which is exactly the reason why we push packages to updates-testing, where a lot of users with different configurations can install and test the package.

This is not a libdb issue, nor is it a dnf/yum issue. If anything, this is either an issue of rpm, which allows calling rpm commands inside scriptlets, or of the packages that actually do it, even though it is strongly discouraged. But let us not throw blame around and concentrate on fixing things instead.

For anyone interested, here follows a log from my box with the openldap-servers package installed and libdb being updated (via rpm -Uvv "libdb-5.3.28-19.fc26.x86_64.rpm"):

~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~
D: %triggerin(openldap-servers-2.4.44-10.fc25.x86_64): scriptlet start
fdio:       2 writes,      368 total bytes in 0.000009 secs
D: %triggerin(openldap-servers-2.4.44-10.fc25.x86_64): execv(/bin/sh) pid 8412
D: Plugin: calling hook scriptlet_fork_post in selinux plugin
D: setexecfilecon: (/bin/sh) 
+ '[' 2 -eq 2 ']'
++ wc -l
++ sort -u
++ sed 's/\.[0-9]*$//'
++ rpm -q '--qf=%{version}\n' libdb
+ '[' 1 '!=' 1 ']'
+ rm -f /var/lib/ldap/rpm_upgrade_libdb
+ exit 0
D: %triggerin(openldap-servers-2.4.44-10.fc25.x86_64): waitpid(8412) rc 8412 status 0
error: rpmdb: DB_LOCK->lock_put: Lock is no longer valid
error: db5 error(22) from dbcursor->c_close: Invalid argument
Segmentation fault (core dumped)
~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~

Not sure yet why libdb dies here with a segfault. It should be complaining about an environment version mismatch instead.

Comment 48 Panu Matilainen 2017-05-26 13:14:48 UTC
It's not even just scriptlets calling rpm, it's *anything* opening the rpmdb ... during a transaction that happens to change the futex implementation or such. Which occurs once every few years - totally unpredictable except for the fact that the next time WILL come, most likely when everybody has mostly forgotten the issue exists at all :-/

Comment 49 Lukas Slebodnik 2017-05-26 14:54:13 UTC
(In reply to Petr Kubat from comment #47)
> Lukas,
> libdb has been tested. It is unfortunate that another issue broke rpm in
> some configurations (I guess yours included)

It is not broken just in some configuration. It is broken for everyone on 26.

I am sorry, but you didn't test the upgrade of your own package and you are trying to persuade us that there is nothing wrong in libdb. If there is a change in libdb which requires a change in rpm, then both packages should be updated at once, or both maintainers should be in sync. But breaking the upgrade for everyone is not a solution. libdb is a ***critical*** package on Fedora (due to rpm).

> but that is something that most
> of the times cannot be forseen, which is exactly the reason why we are
> pushing packages to updates-testing where a lot of users with different
> configurations can install and test the package.
>

BTW, updates-testing is a poor excuse. I do not want to say that you had to test all test cases with rpm, but upgrading your own package is a minimal use case which should not have to be tested by users.



Are you sure it is not a bug in libdb?
Then please explain why it happens when libdb-5.3.28-17.fc26.x86_64.rpm is upgraded to a newer version. And libdb-5.3.28-17.fc26.x86_64.rpm is the default version in f26 https://apps.fedoraproject.org/packages/libdb

sh# rpm -qa libdb*
libdb-utils-5.3.28-17.fc26.x86_64
libdb-5.3.28-17.fc26.x86_64

sh# rpm -q --scripts libdb libdb-utils
postinstall program: /sbin/ldconfig
postuninstall program: /sbin/ldconfig

sh# rpm -Uvh https://kojipkgs.fedoraproject.org//packages/libdb/5.3.28/19.fc26/x86_64/libdb-5.3.28-19.fc26.x86_64.rpm https://kojipkgs.fedoraproject.org//packages/libdb/5.3.28/19.fc26/x86_64/libdb-utils-5.3.28-19.fc26.x86_64.rpm
Retrieving https://kojipkgs.fedoraproject.org//packages/libdb/5.3.28/19.fc26/x86_64/libdb-5.3.28-19.fc26.x86_64.rpm
Retrieving https://kojipkgs.fedoraproject.org//packages/libdb/5.3.28/19.fc26/x86_64/libdb-utils-5.3.28-19.fc26.x86_64.rpm

Preparing...                          ################################# [100%]
Updating / installing...
   1:libdb-5.3.28-19.fc26             ################################# [ 25%]
error: rpmdb: DB_LOCK->lock_put: Lock is no longer valid
error: db5 error(22) from dbcursor->c_close: Invalid argument

Comment 50 Petr Kubat 2017-05-26 16:49:56 UTC
>Then please explain why it happens when libdb-5.3.28-17.fc26.x86_64.rpm is upgraded to newer version.

If you were to provide proper debugging output (like the example I posted in my last comment) I would be able to answer that question. Alas, you did not, and so I cannot.

>It's not even just scriptlets calling rpm, it's *anything* opening the rpmdb ... 

True, but it feels like the scriptlet use case is one of the more visible ones, given that I seem to hit issues with them pretty often.
In any case, I am out of ideas on how to make this work for scriptlets. We cannot remove the environment during scriptlets as that breaks rpm, since it does not expect that a scriptlet might do that. On the other hand, even if we were able to make scriptlets use the old libdb shared library (is that even possible after it is already installed?), that would just cause issues in the packages running those scriptlets (since those would just fail on an environment version mismatch).
This leaves us once again with trying to keep the environment signature the same between the two versions, which, however, would mean we could no longer remember the pthreads version libdb was built against, as it is currently saved in the environment.

Comment 51 Petr Kubat 2017-05-29 07:49:38 UTC
>On the other hand, even if we were able to make scriptlets use the old libdb shared library (is that even possible after it is already installed?), that would just cause issues in the packages running those scriptlets

Extending this idea, a (very) dirty way to make this work would be to install the newer libdb-5.3.so file to some other location (or under a different name, say libdb-5.3.so.new) and then move it to the proper location in a %posttrans. This way openldap-servers' scriptlets would use the old version of libdb, as it is still installed. As a result of openldap-servers' trigger being run against the older version, its libdb environment would not be upgraded, but that should not pose any issues as far as I can see, since libdb would just rebuild the environment by itself the next time it is opened.

However, as the new shared library would essentially be available only after the %posttrans has gone through, any %postun or other %posttrans scriptlets that need libdb installed to work would fail...

What do you guys think? Any other ideas?

Comment 52 Panu Matilainen 2017-05-29 08:16:22 UTC
I think all it needs is to fail cleanly in this situation, whether it's a scriptlet or manual rpm query that's causing the mismatch.

If somebody's scriptlets fail to do the right thing, it's only a case of "told you so" - we don't want to go out of our way and introduce dirty hacks to support an unsupported thing.

Comment 53 Petr Kubat 2017-05-29 08:49:05 UTC
>I think all it needs is to fail cleanly in this situation, whether it's a scriptlet or manual rpm query that's causing the mismatch.

I agree, but making the scriptlet/other rpm process fail cleanly in this situation will be pretty difficult, given that the patch introduces an automatic environment rebuild mechanism specifically designed to avoid failing due to a version mismatch...

I guess I could remove the environment rebuild on signature change for the time being and just fail on version mismatch, but that would need some additional modifications on the rpm side - it would need to rebuild its environment, as it would find itself failing for the same reason (due to the addition of pthreads timestamp tracking).

Comment 54 Panu Matilainen 2017-05-29 09:11:57 UTC
I only skimmed through the patch, but it seems to me the ingredients to do the right thing are there: it (tries to) get an exclusive lock on the primary region initially for creation of the environment, and then downgrades to a shared lock for the rest of the operation.

Clearly it should only try to rebuild the environment if it's the sole owner of the environment (and AFAICS that's what the patch does), and during a transaction that would not be the case.

Except that, of course, in a transaction where libdb itself is being upgraded, the region is not locked at all. So, possible implementation bugs aside, the patch looks sound to me, only it doesn't handle the rather special case of updating itself. It should somehow detect that the region is in use but not locked, and just fail without trying to rebuild the environment in that case.
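For anyone following along, here is a minimal sketch of the exclusive-then-shared fcntl() locking pattern being described. The helper name and details are made up for illustration; this is not the actual upstream patch.

~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~
/* Illustrative only, not the actual libdb patch. */
#include <fcntl.h>
#include <unistd.h>

/* Take a whole-file lock of the given type (F_WRLCK or F_RDLCK) on the
 * primary region file.  Setting F_RDLCK while already holding F_WRLCK
 * converts (downgrades) the existing lock in place. */
static int region_lock(int fd, short type)
{
    struct flock fl;

    fl.l_type = type;
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;                    /* length 0 = the whole file */
    return fcntl(fd, F_SETLK, &fl);  /* non-blocking: -1 if someone else holds it */
}

/* Usage: if region_lock(fd, F_WRLCK) succeeds, we are the sole user and may
 * (re)build the environment; region_lock(fd, F_RDLCK) then downgrades to a
 * shared lock for normal operation. */
~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~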

Comment 55 Petr Kubat 2017-05-29 09:20:09 UTC
>So it should somehow detect that the region is in use but not locked, and just fail without trying to rebuild the environment in that case.

As the addition of locks on the first region file was itself a result of trying to make libdb aware of whether other processes are accessing the environment, I am not sure there is any other way to check this right now.
But it is a good direction to follow, thanks! I will also ask upstream if there are other solutions they might have scrapped in favour of locking.

Comment 56 Panu Matilainen 2017-05-29 09:24:40 UTC
Another possibility could be making rpm take the region lock too; that should then make the current patch do the right thing.

Comment 57 Panu Matilainen 2017-05-29 09:26:47 UTC
Hmph, but of course that still depends on somehow getting a newer rpm into the users' system before trying to update libdb, so it just shifts the problem elsewhere, doesn't actually solve it.

Comment 58 Panu Matilainen 2017-05-29 09:40:38 UTC
One more note: in the short term at least, it might be best to just leave out the automatic environment rebuild. Rpm still has this workaround in its %posttrans:

> # XXX this is klunky and ugly, rpm itself should handle this
> dbstat=/usr/lib/rpm/rpmdb_stat
> if [ -x "$dbstat" ]; then
>    if "$dbstat" -e -h /var/lib/rpm 2>&1 | grep -q "doesn't match library version \| Invalid argument"; then
>        rm -f /var/lib/rpm/__db.*
>    fi
>fi
>exit 0

Comment 59 Petr Kubat 2017-05-29 10:41:27 UTC
>Hmph, but of course that still depends on somehow getting a newer rpm into the users' system before trying to update libdb, so it just shifts the problem elsewhere, doesn't actually solve it.

Yep that is the major difficulty in fixing this - how to do it only via a libdb update.

>Rpm still has this workaround in its %posttrans: ...

I am aware of this workaround, but it does not help us for two main reasons:
1) It only removes the region files when either the minor or major version of libdb changes, so we would either have to bump the versions downstream or change this %posttrans to also trigger on DB_VERSION_MISMATCH.
2) afaik there is still the issue with yum/dnf which I touched on in comment 31, so the environment would just be recreated with the old structures in place after nuking it...

Comment 60 Petr Kubat 2017-05-29 14:02:22 UTC
>Another possibility could be making rpm take the the region lock too, that should  then make the current patch do the right thing.

Actually, how about the other way around?
Afaik rpm creates some .lock files during package updates. How are these used? Would it be possible for libdb to check whether an update is in progress?

Comment 61 Petr Kubat 2017-05-30 10:22:57 UTC
Created attachment 1283384 [details]
rpm lock check patch

Looking at rpm's .lock file for information about a possible ongoing update seemed like a good way to work around the issue (although it is very much another rpm-specific hack), so I have tried it out (patch attached) and confirmed it is working. Tested with a complete update from F25 to F26 without any hangs occurring. All packages using rpm in their scriptlets failed to remove the environment, resulting in DB_VERSION_MISMATCH as expected, and the environment got rebuilt when next accessed.

As far as I can see there should not be any negative impact from this patch, as the check is only used when an incompatible environment is encountered and is non-blocking, falling back to normal environment checks when the rpm lock is unavailable.

It would be nice if someone could take a look at the patch and review it. One thing I am not sure about is whether to try to get an exclusive lock on rpm's .lock file, or just a reader lock...
I also need to modify configure to properly set up the path to the .lock file.

Comment 62 Panu Matilainen 2017-05-30 12:05:18 UTC
AFAICS that hits problems with the crazy semantics of fcntl() locks: if (and when) that test runs in the rpm that is actually doing the transaction, the lock it holds gets released there. It's not just the explicit F_UNLCK, the mere act of closing the fd will cause the lock to get released.

You could avoid the F_UNLCK by doing an F_GETLK instead of actually locking - if F_GETLK reports a conflicting lock then somebody else is already holding it and libdb should not rebuild the environment. It's racy of course, but no more so than what your patch does, I think.
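A minimal sketch of that F_GETLK probe, for illustration only (the helper name is made up, this is not the actual patch; note that F_GETLK only reports locks held by *other* processes):

~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~
/* Illustrative only, not the actual patch. */
#include <fcntl.h>
#include <unistd.h>

/* Ask the kernel whether a write lock on the whole file would conflict with
 * an existing lock.  Returns 1 if another process holds a lock, 0 if not,
 * -1 on error.  No lock is taken or released by this call. */
static int rpm_lock_held(int fd)
{
    struct flock fl;

    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;
    if (fcntl(fd, F_GETLK, &fl) == -1)
        return -1;
    /* The kernel sets l_type to F_UNLCK if nothing would block us;
     * locks held by the calling process itself are not reported. */
    return fl.l_type != F_UNLCK;
}
~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~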

However that still leaves the problem of fd closing. You'd have to deliberately leak the fd, or maybe store it into a static variable and only close it from an atexit() handler or such - closing it on db close is not okay because rpm reopens the db while holding the transaction lock. So it gets really ugly :-/

Rpm itself would be in a better position to fiddle with these locks of course. One possibility might be having rpm drop to readonly DB_PRIVATE mode when it detects a held transaction lock, which should avoid most disasters even if the exact behavior is murky.

Comment 63 Petr Kubat 2017-05-30 12:27:58 UTC
>AFAICS that hits problems with the crazy semantics of fcntl() locks: if (and when) that test runs in the rpm that is actually doing the transaction, the lock it holds gets released there. It's not just the explicit F_UNLCK, the mere act of closing the fd will cause the lock to get released.

Ah, damn. I completely forgot about this. So libdb would essentially drop rpm's lock right at the start the next time the db is accessed after the update, since a rebuild of the environment would be needed... messy. Thanks for spotting this!

>However that still leaves the problem of fd closing. You'd have to deliberately leak the fd, or maybe store it into a static variable and only close it from an atexit() handler or such - closing it on db close is not okay because rpm reopens the db while holding the transaction lock. So it gets really ugly :-

I know libdb has some form of fd management built in that might help, but I would have to look into this some more to check where and when the descriptors are released. I guess it would most likely be at environment close though...

Comment 64 Panu Matilainen 2017-05-30 13:00:21 UTC
While we're talking about *cough* creative *cough* approaches: one could check for the existence of a lock without interfering with it by looking at /proc/locks. Determine the inode of /var/lib/rpm/.rpm.lock if it exists and look for it in /proc/locks in the sixth column:

[pmatilai@sopuli ~]$ ls -i /srv/test/var/lib/rpm/.rpm.lock 
13631516 /srv/test/var/lib/rpm/.rpm.lock
[pmatilai@sopuli ~]$ grep :13631516 /proc/locks 
1: POSIX  ADVISORY  WRITE 13458 fd:01:13631516 0 EOF

If it's there, it means rpm is in the middle of a transaction.

Comment 65 Petr Kubat 2017-05-31 10:04:20 UTC
I looked at libdb's fd management subsystem and, as I expected, it releases any still-held descriptors when the environment is closed, which does not help us at all since rpm opens and closes the environment multiple times during an upgrade. Libdb is also very vocal about having to release the descriptors...

>Determine the inode of /var/lib/rpm/.rpm.lock if it exists and look for it in /proc/locks in the sixth column:

Creativity aside, wouldn't that approach also need to have the .rpm.lock file open when looking at its inode number?

I wonder if we could not just get away with using the rpm lock check in libdb as is (well, with the modification of only checking whether we could lock the file, as Panu suggested)...
The way I see it, right now (using the patch) we are able to survive the update of libdb, which is the most important thing, as any update to rpm (or dnf, but rpm makes more sense I guess) itself would be applied by the time the next rpm command is run. Which makes our hands a little bit less tied.
At this point rpm would just need to make sure that the environment is touched (hence rebuilt) before the transaction lock is taken, since the rpm lock check is never hit when the environment does not need to be rebuilt.

afaics dnf opens the environment before any rpm transactions take place so this should already be safe when only working through dnf.

Comment 66 Petr Kubat 2017-05-31 10:17:47 UTC
>Creativity aside, wouldn't that approach also need having the .rpm.lock file open when trying to look at its inode number?

I guess we wouldn't need to actually open the file if we went through the directory's dirents trying to find the .rpm.lock entry...

Comment 67 Panu Matilainen 2017-05-31 10:47:38 UTC
You can stat() a file to get the inode without opening it; that's not a problem.

But the more I think about this, having the system libdb muck about with rpm locks seems ... not so healthy. The rpm spec has an option to build with a bundled libdb - for occasions just like this, one might say :)

Comment 68 Petr Kubat 2017-05-31 11:39:05 UTC
>But the more I think about this, having system libdb muck about rpm locks seems ... not so healthy. Rpm spec has an option to build with bundled libdb, for occasions just like this one might say :)

Well, yes. As I said before, it is still an ugly hack that we would need to remove as soon as possible if we went through with it.

The only "clean" alternative that I see right now is pushing the patch with the automatic rebuild stripped out, fixing rpm so it expects a DB_VERSION_MISMATCH error when opening the environment, and hoping that other packages depending on libdb have some checks of their own in place.

Comment 69 Adam Williamson 2017-05-31 18:45:28 UTC
Note the parent bug is blocking Beta, and we have the second Go/No-Go for Beta tomorrow, and at present we have no *other* blockers. So it'd be really good to get something in that makes this work for now, if it's at all possible, even if it's technically a Dirty Hack (TM).

Comment 70 Petr Kubat 2017-06-01 06:50:58 UTC
Adam, thanks for the heads-up.
For the time being I will just push the fix that looks at rpm's transaction lock, since that does not need any modifications on rpm's side to work. I will change it so as to leak the descriptor (the test is not run every time the environment is opened, so it should not hog too many resources) in order not to drop rpm's lock.

Also upstream replied to me that they are looking into the issue so they might provide some other ideas later on.

Comment 71 Petr Kubat 2017-06-01 09:10:34 UTC
Created attachment 1284053 [details]
rpm lock check patch v2

Attached is a modified patch as per comment 70. It would be nice if someone took a quick look at it before I push it.

Comment 72 Panu Matilainen 2017-06-01 09:37:58 UTC
+    fd = open(RPMLOCK_PATH, O_RDWR);
+    if (fd == -1)
+        return 1;

If the file doesn't exist at all then there can be no lock there either, so you could return 0 in that case. But it probably doesn't matter.

I have to say I would feel less bad about this if the patch limited this check to environments whose path contains /var/lib/rpm, rather than doing it for all users of libdb. That way completely unrelated software doesn't end up leaking ominous-looking fds because of this.
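To make the two suggestions concrete, here is a rough sketch of the shape the check could take. The function name and return semantics are hypothetical; RPMLOCK_PATH is assumed to point at the lock file from comment 64, and the actual lock probe is elided.

~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~
/* Illustrative only, not the actual patch revision. */
#include <errno.h>
#include <fcntl.h>
#include <string.h>

#define RPMLOCK_PATH "/var/lib/rpm/.rpm.lock"   /* assumed path, see comment 64 */

/* Returns nonzero if an environment rebuild should be skipped because an
 * rpm transaction may be in progress. */
static int rpm_lock_blocks_rebuild(const char *env_home)
{
    int fd;

    if (strstr(env_home, "/var/lib/rpm") == NULL)
        return 0;                 /* not the rpmdb environment: don't bother */

    fd = open(RPMLOCK_PATH, O_RDWR);
    if (fd == -1)
        return errno != ENOENT;   /* missing lock file = no lock held */

    /* ...probe the lock on fd here (e.g. via F_GETLK as in comment 62)...
     * The fd is deliberately not closed: closing any fd for this file would
     * drop a lock rpm itself may hold on it (see comments 62 and 70). */
    return 0;
}
~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~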

Comment 73 Petr Kubat 2017-06-01 10:55:52 UTC
Created attachment 1284124 [details]
rpm lock check patch v3

>I have to say I would feel less bad about this if the patch limited this thing to environments whose path contains /var/lib/rpm

Yeah, that might actually be a bit better. Thanks!
Modified patch attached.

Comment 74 Panu Matilainen 2017-06-01 11:00:13 UTC
/me likes (but didn't test), thanks.

It's still one helluva gross hack but at least it's now a precision hack :)

Comment 75 Fedora Update System 2017-06-01 12:45:13 UTC
libdb-5.3.28-21.fc26 has been submitted as an update to Fedora 26. https://bodhi.fedoraproject.org/updates/FEDORA-2017-a4c41ecc27

Comment 76 Petr Kubat 2017-06-01 12:48:20 UTC
Pushed the changes to rawhide and f26 branches and added the f26 build to the update.

Note that previous releases do not need to have the fixes applied as all of the magic happens after libdb is updated to the fixed version. As such I have removed the upstream fix from those (f24, f25) branches.

Comment 77 Adam Williamson 2017-06-01 18:43:36 UTC
"Note that previous releases do not need to have the fixes applied as all of the magic happens after libdb is updated to the fixed version. As such I have removed the upstream fix from those (f24, f25) branches."

Uh - but in that case, how do we know that libdb is updated before glibc/libpthread is updated? When you're upgrading from F24 or F25 to F26, if glibc/libpthread is updated before libdb, won't that trigger the bug?

Comment 78 Adam Williamson 2017-06-02 05:30:43 UTC
It'd be really great if people could help test the new update hard: test that it fixes the actual bug, test it in the scenarios that caused trouble with -18 and -19, etc. We want to get this fix out for Beta, but we don't want to send it unless we're sure it's good. Thanks a lot!

Comment 79 Petr Kubat 2017-06-02 06:19:08 UTC
>Uh - but in that case, how do we know that libdb is updated before glibc/libpthread is updated?

I am not sure how the install order is created from the list of packages, but I would expect that dependencies are installed first (and from testing the F25 -> F26 update that seems to be the case). So the install order should be glibc -> libdb -> whatever package depends on libdb.

This also means that by the time a package that would be able to trigger the bug is installed, a newer version of libdb should already be present on the box (if not, then that package's requires need to be changed) and it would not trigger the condvar issue, since a newer libdb environment (and a rebuild of the old one) would be needed first.

Comment 80 Panu Matilainen 2017-06-02 11:21:24 UTC
Packages running rpm from scriptlets can happen anywhere in the order, never mind external accesses to the rpmdb.

I think we really need to have a patched up libdb present on the system before upgrading to the new glibc.

Comment 81 Fedora Update System 2017-06-02 13:00:37 UTC
libdb-5.3.28-21.fc25 has been submitted as an update to Fedora 25. https://bodhi.fedoraproject.org/updates/FEDORA-2017-6e056b68bf

Comment 82 Fedora Update System 2017-06-02 13:01:12 UTC
libdb-5.3.28-21.fc24 has been submitted as an update to Fedora 24. https://bodhi.fedoraproject.org/updates/FEDORA-2017-9d674be444

Comment 83 Fedora Update System 2017-06-04 05:10:46 UTC
libdb-5.3.28-21.fc26 has been pushed to the Fedora 26 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-a4c41ecc27

Comment 84 Fedora Update System 2017-06-05 12:07:44 UTC
libdb-5.3.28-21.fc24 has been submitted as an update to Fedora 24. https://bodhi.fedoraproject.org/updates/FEDORA-2017-9d674be444

Comment 85 Fedora Update System 2017-06-07 17:41:10 UTC
libdb-5.3.28-21.fc24 has been submitted as an update to Fedora 24. https://bodhi.fedoraproject.org/updates/FEDORA-2017-9d674be444

Comment 86 Fedora Update System 2017-06-07 20:16:03 UTC
libdb-5.3.28-21.fc25 has been submitted as an update to Fedora 25. https://bodhi.fedoraproject.org/updates/FEDORA-2017-6e056b68bf

Comment 87 Fedora Update System 2017-06-08 16:04:48 UTC
libdb-5.3.28-21.fc24 has been pushed to the Fedora 24 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-9d674be444

Comment 88 Fedora Update System 2017-06-08 16:10:48 UTC
libdb-5.3.28-21.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-6e056b68bf

Comment 89 Fedora Update System 2017-06-08 21:07:22 UTC
libdb-5.3.28-21.fc26 has been pushed to the Fedora 26 stable repository. If problems still persist, please make note of it in this bug report.

Comment 90 Fedora Update System 2017-06-09 09:23:58 UTC
libdb-5.3.28-21.fc24 has been pushed to the Fedora 24 stable repository. If problems still persist, please make note of it in this bug report.

Comment 91 Fedora Update System 2017-06-09 11:29:18 UTC
libdb-5.3.28-21.fc25 has been pushed to the Fedora 25 stable repository. If problems still persist, please make note of it in this bug report.

Comment 92 stan 2017-06-09 20:53:16 UTC
I just did a dnf update in F25 and picked up the libdb-5.3.28-21.fc25 package.  Not sure if it was from testing or stable, as I have both enabled.  However, there are issues.  When the libdb package was being installed, dnf popped up an error saying there was a database mismatch in the rpm database, though the rest of the updates completed.  But, at the end, dnf hung, and put up this message,

BDB1537 //var/lib/rpm/__db.001: unable to read system-memory information: Input/output error.

I remembered reading an email about this in the test list, and followed the instructions in that email.
rm /var/lib/rpm/__db.*
rpm --rebuilddb

That completed, and I will reboot and see if there are any problems.  I had to restart my mail client in order for it to access its database of stored passwords.

Comment 93 Adam Williamson 2017-06-09 20:57:28 UTC
Yeah, you're not the only one, it now seems a few people are still having issues with -21 :( Very sorry about this, this bug is turning into a bit of a nightmare. We'll try and do all we can to deal with it.

Comment 94 stan 2017-06-09 21:13:17 UTC
After the rpm database rebuild, and a reboot, everything seems to be working just fine again.  I ran a dnf update, and it properly told me that there was nothing to do, so the update that had the rpm db problems got integrated.  

A little excitement, and no lasting harm done.  :-)

> Very sorry about this,

Not to worry.

Comment 95 Christian Krause 2017-06-11 08:45:19 UTC
I just stumbled over this bug during a regular "dnf update" as well:

During the update, libdb was updated to libdb.x86_64 5.3.28-21.fc25.

$ dnf update
...
Complete!
Segmentation fault (core dumped)
$

The error messages of following "dnf update" or "rpm -qa" are similar to the ones already reported in:

https://bugzilla.redhat.com/show_bug.cgi?id=1394862#c43

I recovered from the situation via "rpm -v --rebuilddb".

Comment 96 Panu Matilainen 2017-06-12 09:59:47 UTC
So people are seeing crashes after dnf prints out "Complete!", at which point there shouldn't be much going on except rpmdb being *closed*. 

Unless there's something else at play, that is: for example https://bugzilla.redhat.com/show_bug.cgi?id=1397087#c48 shows etckeeper being involved. I'm not really familiar with it, but it does involve dnf plugins that execute before and after (and maybe during, dunno) transaction, and seems to query rpmdb before and after the transaction.

But try as I might, I'm not able to reproduce those post-trans crashes, with or without etckeeper :-/

Comment 97 Petr Kubat 2017-06-12 11:57:52 UTC
From what I can see in dnf's code nothing else should access the rpmdb after the "Complete!" string is printed out, except a call to "cli.command.run_transaction()" which for the upgrade command seems to be a nop...

Unfortunately I cannot reproduce it either. From what I can see from the logs it seems like something is accessing the rpmdb (and rebuilding it) after the transaction goes through (the same behaviour as we have seen when it was removed during scriptlets). The difference here is that it segfaults after the transaction has successfully completed, so an "rpm --rebuilddb" fixes the issue.

Comment 98 Petr Kubat 2017-06-12 14:17:40 UTC
I managed to reproduce the issue artificially by introducing a sleep() directly after the part of dnf's code where "Complete!" is printed out, and then accessing the rpmdb from another process (rpm -qi libdb as root).
At this point it makes sense that the transaction lock is no longer held by rpm, as the transaction has already gone through, so the environment is removed by the other process.
As for where the segfault happens, here is the Python backtrace:

~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~
Traceback (most recent call first):
  File "/usr/lib/python3.5/site-packages/dnf/rpm/transaction.py", line 48, in close
    self.ts.closeDB()
  File "/usr/lib/python3.5/site-packages/dnf/base.py", line 373, in _ts
    self._priv_ts.close()
  File "/usr/lib/python3.5/site-packages/dnf/base.py", line 329, in _closeRpmDB
    del self._ts
  File "/usr/lib/python3.5/site-packages/dnf/base.py", line 301, in close
    self._closeRpmDB()
  File "/usr/lib/python3.5/site-packages/dnf/base.py", line 102, in __exit__
    self.close()
  File "/usr/lib/python3.5/site-packages/dnf/cli/main.py", line 62, in main
    return _main(base, args)
  File "/usr/lib/python3.5/site-packages/dnf/cli/main.py", line 177, in user_main
    errcode = main(args)
  File "/usr/bin/dnf", line 58, in <module>
    main.user_main(sys.argv[1:], exit_code=True)
~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~

We can see dnf tries to close the rpmdb handle but fails to do so, since the environment has already been removed by the other process.

Comment 99 Lukas Slebodnik 2017-06-12 20:28:27 UTC
Created attachment 1287101 [details]
failed upgrade

BTW, I can see a failure when upgrading from -21 to -22 on F26.
I didn't catch it earlier because my nightly tests were not running due to hypervisor maintenance.

Comment 100 sixpack13 2017-06-12 20:32:00 UTC
TL;DR

hit this bug today

update/-grade was 
from libdb.x86_64 5.3.28-21.fc26
to   libdb.x86_64 5.3.28-22.fc26

NO messages during upgrade, but after "sudo dnf update --refresh -v" I got:
error: db5 error(5) from dbenv->open: Input/output error
error: cannot open Packages index using db5 - Input/output error (5)
error: cannot open Packages database in /var/lib/rpm
Error: Error: rpmdb open failed

rm -rf __db* and rpm -v --rebuilddb fixed it (so far)

Comment 101 sixpack13 2017-06-12 20:36:00 UTC
Sorry, Lukas and I wrote our comments simultaneously, but Lukas saved his first!

Comment 102 stan 2017-06-12 21:47:56 UTC
Well, I hit this bug again today when updating to libdb.x86_64 5.3.28-22.fc25 in F25.  But it didn't happen after the Complete!, it happened before it.  I don't think I was doing anything to access the rpm database.

Here's the error message:

Upgraded:
  exfalso.noarch 3.9.1-1.fc25        fusion-icon.noarch 1:0.2.4-1.fc25  jansson.x86_64 2.10-2.fc25      jansson-devel.x86_64
  2.10-2.fc25  libdb.x86_64 5.3.28-22.fc25   libdb-cxx.x86_64 5.3.28-22.fc25  libdb-cxx-devel.x86_64 5.3.28-22.fc25
    libdb-devel.x86_64 5.3.28-22.fc25  libdb-utils.x86_64 5.3.28-22.fc25  lua-lxc.x86_64 2.0.8-2.fc25     lxc.x86_64
    2.0.8-2.fc25           lxc-libs.x86_64 2.0.8-2.fc25  lxqt-admin.x86_64 0.11.1-5.fc25  python3-lxc.x86_64 2.0.8-2.fc25
      quodlibet.noarch 3.9.1-1.fc25      tbb.x86_64 2017.7-1.fc25           tbb-devel.x86_64 2017.7-1.fc25  tbb-doc.x86_64
      2017.7-1.fc25

      Tracer:
        Program 'tracer' crashed with following error:

        b'error: db5 error(5) from dbenv->open: Input/output error\nerror: cannot open Packages index using db5 -
        Input/output error (5)\nerror: cannot open Packages database in /var/lib/rpm\nTraceback (most recent call last):\n
        File "/usr/bin/tracer", line 34, in <module>\n    tracer.main.run()\n  File
        "/usr/lib/python3.5/site-packages/tracer/main.py", line 45, in run\n    return router.dispatch()\n  File
        "/usr/lib/python3.5/site-packages/tracer/resources/router.py", line 52, in dispatch\n    controller =
        DefaultController(self.args, self.packages)\n  File "/usr/lib/python3.5/site-packages/tracer/controllers/default.py",
        line 62, in __init__\n    self.applications = self.tracer.trace_affected(self._user(args.user))\n  File
        "/usr/lib/python3.5/site-packages/tracer/resources/tracer.py", line 96, in trace_affected\n    for file in
        self._PACKAGE_MANAGER.package_files(package.name):\n  File
        "/usr/lib/python3.5/site-packages/tracer/resources/PackageManager.py", line 55, in package_files\n    return
        self.package_managers[0].package_files(pkg_name)\n  File
        "/usr/lib/python3.5/site-packages/tracer/packageManagers/dnf.py", line 34, in package_files\n    if
        self._is_installed(pkg_name):\n  File "/usr/lib/python3.5/site-packages/tracer/packageManagers/rpm.py", line 151, in
        _is_installed\n    mi = ts.dbMatch(\'name\', pkg_name)\n_rpm.error: rpmdb open failed\n'
        Please visit https://github.com/FrostyX/tracer/issues and submit the issue. Thank you
        We apologize for any inconvenience
        Complete!

I didn't go to github and enter the issue, as it seems like it probably isn't related to tracer.  That is, I'm thinking tracer is just an innocent victim of the issue in rpm.

Comment 103 stan 2017-06-12 22:04:32 UTC
I removed the temp files in /var/lib/rpm (__db.00?) and rebuilt the rpm database (rpm --rebuilddb), and a subsequent install using dnf completed flawlessly.  So, this doesn't seem to do any lasting harm.

Comment 104 Adam Williamson 2017-06-12 22:08:37 UTC
Hum, so if we send out -22, it's gonna cause problems *again* for people who already got -21? Great...

Comment 105 Chris Murphy 2017-06-13 03:41:37 UTC
I've got this happening as well on a dnf update of a system that had -21 with no problems, and now the rpmdb appears corrupt.
https://bugzilla.redhat.com/show_bug.cgi?id=1394862#c100

Comment 106 Petr Kubat 2017-06-13 06:43:01 UTC
>Hum, so if we send out -22, it's gonna cause problems *again* for people who already got -21?

Yep. This is due to -21 having modified libdb's internal environment structures, which was needed to fix the original issue of this bz. Since the older versions do not have the automatic rebuild mechanism, libdb will fail when trying to access the new environment...

Also note that this will happen for everyone currently on -21 and not only those that hit the bug I described in comment 98.

Comment 107 Panu Matilainen 2017-06-13 08:00:04 UTC
(In reply to Petr Kubat from comment #98)
> Managed to reproduce the issue artificially by introducing a sleep()
> directly after the part of dnf's code where the "Complete!" is printed out
> and accessing the rpmdb from another process (rpm -qi libdb as root).
> At this point it makes sense that the transaction lock is no longer taken by
> rpm as the transaction has already gone through so the environment is
> removed by the other process.

Okay, this makes perfect sense then: dnf post-transaction plugins run after the rpm transaction has finished, and any such plugin executing rpm will cause such problems. etckeeper is the one plugin I know that would match the pattern, but maybe there are others. At any rate, this explains why many people are not seeing problems but some are.

A dnf plugin executing rpm is just about as bad an idea as running rpm queries from scriptlets, but changing that would require significant changes to etckeeper, I suppose.

Anyway, since the remaining issue with -21 is now pretty well understood, my suggestion is that -22 is pulled out of circulation ASAP (and/or replaced with -23 that includes the patch from -21 again), reverting the patch will only cause even more trouble at this point.

The more productive approach might be banging all drums available to tell people to:
1) run the libdb -21 update in a transaction of its own, with --noplugins
2) if it crashes, keep calm and run rpm --rebuilddb
3) inform people about the known causes of problems (etckeeper-dnf at this point), most people will not experience any problems

Comment 108 Lukas Slebodnik 2017-06-13 08:10:08 UTC
(In reply to Panu Matilainen from comment #107)
> The more productive approach might be banging all drums available to tell
> people to:
> 1) run the libdb -21 update in a transaction of its own, with --noplugins
> 2) if it crashes, keep calm and run rpm --rebuilddb
> 3) inform people about the known causes of problems (etckeeper-dnf at this
> point), most people will not experience any problems
How do you want to achieve that? IIRC dnf does not have a way to inform users about important/breaking changes before an upgrade. Correct me if I am wrong.
Debian (apt) has such a feature.

And I would say that most Fedora users do not follow the Fedora mailing lists before upgrading.

Comment 109 Petr Kubat 2017-06-13 08:42:31 UTC
>Anyway, since the remaining issue with -21 is now pretty well understood, my suggestion is that -22 is pulled out of circulation ASAP (and/or replaced with -23 that includes the patch from -21 again), reverting the patch will only cause even more trouble at this point.

Agreed. Doing a -23 makes more sense since I still need to incorporate the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1460003 anyway.

>And I would say that most of fedora users does not follow fedora mailing lists before upgrade.

True. But every user should be able to find information about the issue if it is hit, be it in Common bugs, one of the bugzillas, or the actual bodhi update, as long as it is properly documented.
We can also suggest that users reboot (through the bodhi tool), which afaics also fixes the problem. But I am not sure what marking that checkbox actually does, since I have never had the need to use it...

Comment 110 Adam Williamson 2017-06-13 16:15:11 UTC
Panu: yup, that's pretty much what we agreed upon in another discussion. I'll be pulling the -22 updates and writing some blog posts etc. about this today. Sorry for all the confusion and ping-ponging, everyone.

Petr: I'll wait on you to do a -23, then, since you have that other fix to include.

Comment 111 Dominik 'Rathann' Mierzejewski 2017-06-13 23:29:43 UTC
It looks like the "tracer" dnf plugin is another trigger for this (in my case).

python3-dnf-plugins-extras-tracer-0.0.12-4.fc25.noarch
python3-tracer-0.6.12-4.fc25.noarch
tracer-common-0.6.12-4.fc25.noarch

Comment 112 Petr Kubat 2017-06-14 07:06:15 UTC
>It looks like the "tracer" dnf plugin is another trigger for this (in my case).

Dominik, thanks for the report!

I can confirm that the tracer dnf plugin does indeed trigger the issue some dnf users might be running into. Fortunately, once you rebuild the rpmdb after hitting the dnf update crash you are unlikely to run into any such issues again, as libdb will just use its environment lock, which it holds until the environment is closed (= until after the dnf plugins have run).

Comment 113 Petr Kubat 2017-06-19 12:46:43 UTC
Attaching a new set of patches (applied after the upstream patch):

db-5.3.28-condition-variable-ppc.patch
Fixes the issues described in bug 1460003. The root problem of that bug was libdb not failing when only the modification timestamp of libpthread has changed, e.g. when glibc is updated after the libdb -21 build has already been installed (as is the case in F24/25 -> F26 updates).
Unfortunately the issue is not fixed as easily as forcing libdb to fail every time it encounters a libpthread timestamp change. Dnf accesses the rpmdb a few times during the update process after glibc has been updated (= timestamp changed), and if libdb were to fail in those situations the update process might not complete successfully.
The best fix I could come up with is taking a look at libdb's lock on its environment and checking whether the pid of the process holding the lock is the same as the pid of the current process, as that ensures the environment is accessed with the correct glibc version loaded.
Afaik there is no easy way to do this check (for flock-style locks), so I have written a small parser that looks at /proc/locks and extracts the pid from there.

db-5.3.28-rpm-lock-check.patch
Since I have written a /proc/locks parser as part of the patch described above, I have modified the check for rpm's transaction lock to use it as well (to at least avoid the fd leaks the original patch had).
I also fixed a few issues with the original patch (properly unlock the environment when not able to get the rpm lock, and return success when the rpm lock file does not exist).
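To illustrate the idea (a simplified sketch with made-up names, not the attached patch): stat() the locked file, find its inode in /proc/locks, and compare the fifth column (the pid of the lock holder) against getpid().

~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~
/* Simplified sketch, not the attached patch. */
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns 1 if the lock on 'path' appears to be held by the calling process,
 * 0 otherwise.  A real implementation should also match the device numbers
 * in the sixth column, not just the inode. */
static int lock_held_by_self(const char *path)
{
    struct stat st;
    char line[256], needle[32];
    FILE *fp;
    long pid = 0;

    if (stat(path, &st) != 0 || (fp = fopen("/proc/locks", "r")) == NULL)
        return 0;
    snprintf(needle, sizeof(needle), ":%lu ", (unsigned long)st.st_ino);
    while (fgets(line, sizeof(line), fp)) {
        if (strstr(line, needle)) {
            /* fifth column is the pid of the process holding the lock */
            sscanf(line, "%*s %*s %*s %*s %ld", &pid);
            break;
        }
    }
    fclose(fp);
    return pid == (long)getpid();
}
~~~~~~~~~~~~~~snip~~~~~~~~~~~~~~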

Comment 114 Petr Kubat 2017-06-19 12:47:49 UTC
Created attachment 1289098 [details]
Cond var ppc fix

Comment 115 Petr Kubat 2017-06-19 12:48:34 UTC
Created attachment 1289099 [details]
rpm lock check patch v4

Comment 116 Fedora Update System 2017-06-27 07:33:21 UTC
libdb-5.3.28-24.fc26 has been submitted as an update to Fedora 26. https://bodhi.fedoraproject.org/updates/FEDORA-2017-2b68e14594

Comment 117 Fedora Update System 2017-06-27 17:21:23 UTC
libdb-5.3.28-24.fc24 has been pushed to the Fedora 24 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-014d67fa9d

Comment 118 Fedora Update System 2017-06-27 17:26:46 UTC
libdb-5.3.28-24.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-372bb1edb3

Comment 119 Fedora Update System 2017-06-27 20:26:18 UTC
libdb-5.3.28-24.fc26 has been pushed to the Fedora 26 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2017-2b68e14594

Comment 120 Fedora Update System 2017-07-07 23:00:28 UTC
libdb-5.3.28-24.fc26 has been pushed to the Fedora 26 stable repository. If problems still persist, please make note of it in this bug report.

Comment 121 Fedora Update System 2017-07-12 01:50:18 UTC
libdb-5.3.28-24.fc24 has been pushed to the Fedora 24 stable repository. If problems still persist, please make note of it in this bug report.

Comment 122 Fedora Update System 2017-07-12 03:21:51 UTC
libdb-5.3.28-24.fc25 has been pushed to the Fedora 25 stable repository. If problems still persist, please make note of it in this bug report.

Comment 123 Colin Walters 2017-09-05 16:52:52 UTC
I'm not entirely sure if it's this bug, but it seems related; I can't do a `yum update` from the latest registry.fedoraproject.org/fedora:26 container:

# docker run --rm -ti registry.fedoraproject.org/fedora:26 bash
# yum update 
yum -y upgrade glibc
Last metadata expiration check: 0:05:36 ago on Tue Sep  5 16:46:10 2017.
Dependencies resolved.
...
Upgrading:
 glibc                                                    x86_64                                        2.25-10.fc26                                          updates                                        3.4 M
 glibc-common                                             x86_64                                        2.25-10.fc26                                          updates                                        889 k
 glibc-langpack-en                                        x86_64                                        2.25-10.fc26                                          updates                                        291 k
 libcrypt-nss                                             x86_64                                        2.25-10.fc26                                          updates                                         53 k

Transaction Summary
...
Upgrade  4 Packages

Total download size: 4.6 M
Downloading Packages:
...
Failed Delta RPMs increased 4.6 MB of updates to 5.5 MB (-20.1% wasted)
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                           1/1 
  Upgrading        : glibc-common-2.25-10.fc26.x86_64                                                                                                                                                          1/8 
  Upgrading        : glibc-langpack-en-2.25-10.fc26.x86_64                                                                                                                                                     2/8 
  Running scriptlet: glibc-2.25-10.fc26.x86_64                                                                                                                                                                 3/8 
  Upgrading        : glibc-2.25-10.fc26.x86_64                                                                                                                                                                 3/8 
  Running scriptlet: glibc-2.25-10.fc26.x86_64                                                                                                                                                                 3/8 
  Upgrading        : libcrypt-nss-2.25-10.fc26.x86_64                                                                                                                                                          4/8 
  Running scriptlet: libcrypt-nss-2.25-10.fc26.x86_64                                                                                                                                                          4/8 
  Cleanup          : libcrypt-nss-2.25-9.fc26.x86_64                                                                                                                                                           5/8 
  Running scriptlet: libcrypt-nss-2.25-9.fc26.x86_64                                                                                                                                                           5/8 
  Cleanup          : glibc-common-2.25-9.fc26.x86_64                                                                                                                                                           6/8 
  Cleanup          : glibc-langpack-en-2.25-9.fc26.x86_64                                                                                                                                                      7/8 
  Cleanup          : glibc-2.25-9.fc26.x86_64                                                                                                                                                                  8/8 
  Running scriptlet: glibc-2.25-9.fc26.x86_64                                                                                                                                                                  8/8 
BDB1539 Build signature doesn't match environment
failed loading RPMDB
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.

