Bug 1751272 - rpm-ostree status shows "Warning: failed to finalize previous deployment"
Summary: rpm-ostree status shows "Warning: failed to finalize previous deployment"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: rpm-ostree
Version: 31
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Jonathan Lebon
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: F31FinalFreezeException
 
Reported: 2019-09-11 14:57 UTC by Parag Nemade
Modified: 2020-11-30 14:39 UTC
CC List: 10 users

Fixed In Version: rpm-ostree-2019.6-1.fc31
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-28 00:02:52 UTC
Type: Bug
Embargoed:


Attachments
rpm-ostree journal (deleted)
2019-09-11 16:02 UTC, Parag Nemade
json output (deleted)
2019-09-12 03:14 UTC, Parag Nemade
13th Sept journal logs (deleted)
2019-09-13 05:38 UTC, Parag Nemade
18th Sept journal logs (deleted)
2019-09-18 12:46 UTC, Parag Nemade
gzipped 18th Sept journal logs (deleted)
2019-09-18 12:52 UTC, Parag Nemade


Links
Github ostreedev/ostree pull 1926 (closed): boot/finalize-staged: Run after systemd-journal-flush.service (last updated 2021-01-17 05:40:49 UTC)

Description Parag Nemade 2019-09-11 14:57:26 UTC
Description of problem:
I have been observing this behavior for the last 2 weeks now. Whenever I run "rpm-ostree upgrade", reboot, and then run "rpm-ostree status", it always prints 2 lines in red:
Warning: failed to finalize previous deployment
         check `journalctl -b -1 -u ostree-finalize-staged.service`

Then when I reboot again and check the output of "rpm-ostree status", these lines go away. Also,
[parag@f31sb ~]$ journalctl -b -1 -u ostree-finalize-staged.service
-- Logs begin at Mon 2019-07-15 08:11:39 IST, end at Wed 2019-09-11 20:25:06 IST. --
Sep 11 20:12:41 f31sb systemd[1]: Started OSTree Finalize Staged Deployment.
Sep 11 20:14:19 f31sb systemd[1]: Stopping OSTree Finalize Staged Deployment...
Sep 11 20:14:19 f31sb ostree[11220]: Finalizing staged deployment


Version-Release number of selected component (if applicable):
[parag@f31sb ~]$ rpm -qa rpm rpm-ostree
rpm-4.15.0-0.beta.2.module_f31+5103+45c9cb69.x86_64
rpm-ostree-2019.5.7.gcac5aa41-3.fc31.x86_64


How reproducible:
always

Steps to Reproduce:
1. Run "rpm-ostree upgrade"
2. Reboot and run "rpm-ostree status"
3. Observe the 2 red lines stated above (see the sketch below)
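
A minimal shell transcript of these steps, assuming a single upgrade-and-reboot cycle:

$ rpm-ostree upgrade
$ systemctl reboot
# after logging back in:
$ rpm-ostree status                                   # prints the two red warning lines
$ journalctl -b -1 -u ostree-finalize-staged.service  # the check suggested by the warning itself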

Actual results:
Warning: failed to finalize previous deployment


Expected results:
The warning should not appear.

Additional info:

Comment 1 Jonathan Lebon 2019-09-11 15:44:37 UTC
Hmm, and the system upgraded OK otherwise? If so, it's likely that the code that searches the journal for error messages is getting it wrong. Could you share the full journal logs for one of the boots (and reboots) in which you `rpm-ostree upgrade` and reboot? (E.g.: reboot, run rpm-ostree upgrade, reboot, then collect the journals for `-b -1 -o json-pretty` and `-b 0 -o json-pretty`).
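
For example, the requested journals could be collected with something like this (the output file names are arbitrary):

$ journalctl -b -1 -o json-pretty > journal-prev-boot.json
$ journalctl -b 0 -o json-pretty > journal-curr-boot.json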

Comment 2 Colin Walters 2019-09-11 15:49:58 UTC
Running `journalctl -u ostree-finalize-staged.service` (i.e. without the `-b` so we gather all boots) would likely help.

Comment 3 Parag Nemade 2019-09-11 16:02:24 UTC
Created attachment 1614147 [details]
rpm-ostree journal

The system upgraded successfully.
I picked one of the F31 SB VMs and started it, ran rpm-ostree upgrade on it, rebooted, and then ran:
journalctl -u ostree-finalize-staged.service > rpmostree-journal.txt

I am attaching this file here. Please note this system uses Hindi as the default locale.

Comment 4 Colin Walters 2019-09-11 17:18:42 UTC
Hm, no error messages in there.  Odd.

Comment 5 Parag Nemade 2019-09-11 17:34:38 UTC
But I keep reproducing it every time. On this test machine I have now layered a few more packages and rebooted, and again I see that warning message.

Comment 6 Jonathan Lebon 2019-09-11 19:27:00 UTC
> Created attachment 1614147 [details]

`-u ostree-finalize-staged.service` is not reliable unfortunately (see https://github.com/systemd/systemd/issues/2913). E.g. I'm not seeing the "Finalizing staged deployment" and "Bootloader updated" messages (the latter being the message ID we look for to know the finalization was successful). Can you retry but with `SYSLOG_IDENTIFIER=ostree -o json-pretty`?

Comment 7 Parag Nemade 2019-09-12 03:14:28 UTC
Created attachment 1614296 [details]
json output

I used this command:
$ journalctl -o verbose SYSLOG_IDENTIFIER=ostree -o json-pretty > journalctl-json-pretty.txt

Comment 8 Jonathan Lebon 2019-09-12 14:47:30 UTC
Hmm, indeed the "Bootloader updated"/"Transaction complete" messages are missing. I'm not sure offhand why that is. Maybe let's cast a wider net and just include all the messages in case there are any other subtle errors happening elsewhere? (I.e. without the SYSLOG_IDENTIFIER=ostree filter, but maybe in that case, only upload the logs for just one or two boots?).

Hmm, it's a long shot, but I'm wondering whether this could somehow be ostree-finalize-staged.service running so late now that journald doesn't have a chance to write out to disk before rebooting. Do you have any /var mounts? Do your reboot logs end with "Journal stopped"?
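
One way to check both of those questions, assuming stock util-linux and systemd tooling:

$ findmnt /var              # non-empty output means /var is a separate mount
$ findmnt /var/log/journal  # same check for the journal directory itself
$ journalctl -b -1 SYSLOG_IDENTIFIER=systemd-journald | tail -n 3   # look for "Journal stopped"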

Comment 9 Parag Nemade 2019-09-13 05:37:24 UTC
I am also confused that the "Bootloader updated"/"Transaction complete" messages are not found. But let me describe how my system is used again:
1) I start my system; when I see the notification that updates are available, I run the "rpm-ostree upgrade" command
2) This command works as usual, without any extra message in the gnome-terminal
3) I reboot, see the bootloader entry for the new ostree deployment, and let my system boot into it
4) I log in and check the "rpm-ostree status" output (even when I check it, say, after 1 hr), and I still see "failed to finalize previous deployment"
5) Then I reboot, log in, and check "rpm-ostree status" again; there is no warning


Yes, on my host system I have an extra /var/junk partition, which always mounts successfully and which I can use.

You are right that my reboot logs show "Journal stopped" messages, e.g.:
====================================================================================
Sep 13 09:08:30 f31sb systemd[1]: Shutting down.
Sep 13 09:08:30 f31sb lvm[8535]:   3 logical volume(s) in volume group "fedora" unmonitored
Sep 13 09:08:30 f31sb systemd[1]: Hardware watchdog 'iTCO_wdt', version 0
Sep 13 09:08:30 f31sb systemd[1]: Set hardware watchdog to 10min.
Sep 13 09:08:30 f31sb kernel: watchdog: watchdog0: watchdog did not stop!
Sep 13 09:08:30 f31sb kernel: printk: systemd-shutdow: 70 output lines suppressed due to ratelimiting
Sep 13 09:08:30 f31sb systemd-shutdown[1]: Syncing filesystems and block devices.
Sep 13 09:08:30 f31sb systemd-shutdown[1]: Sending SIGTERM to remaining processes...
Sep 13 09:08:30 f31sb systemd-journald[718]: Journal stopped
-- Reboot --
Sep 13 09:08:44 localhost kernel: microcode: microcode updated early to revision 0xb4, date = 2019-04-01
Sep 13 09:08:44 localhost kernel: Linux version 5.3.0-0.rc6.git0.1.fc31.x86_64 (mockbuild.fedoraproject.org) (gcc version 9.1.1 20190605 (Red Hat 9.1.1-2) (GCC)) #1 SMP Mon Aug 26 13:01:25 UTC 2019
======================================================================================

So on this host system I had a few reboots, but the "Journal stopped" message appeared only once. I guess it appeared when I rebooted after the upgrade?

To cross-verify this, I am now starting my F31 SB VM and upgrading it:
1) I started the VM and, before upgrading, executed:
[parag@localhost ~]$ journalctl --since=today SYSLOG_IDENTIFIER=systemd-journald
-- Logs begin at Tue 2019-08-20 10:07:12 IST, end at Fri 2019-09-13 10:49:27 IST. --
Sep 13 10:31:59 localhost systemd-journald[299]: Journal started
Sep 13 10:31:59 localhost systemd-journald[299]: Runtime Journal (/run/log/journal/3bc6eaeb8e4142b4afabded586da3e8a) is 8.0M, max 99.3M, 91.3M free.
Sep 13 10:32:00 localhost systemd-journald[299]: Journal stopped
Sep 13 10:32:01 localhost.localdomain systemd-journald[299]: Received SIGTERM from PID 1 (systemd).
Sep 13 10:32:01 localhost.localdomain systemd-journald[659]: Journal started
Sep 13 10:32:01 localhost.localdomain systemd-journald[659]: Runtime Journal (/run/log/journal/3bc6eaeb8e4142b4afabded586da3e8a) is 8.0M, max 99.3M, 91.3M free.
Sep 13 10:32:01 localhost.localdomain systemd-journald[659]: Time spent on flushing to /var is 92.813ms for 995 entries.
Sep 13 10:32:01 localhost.localdomain systemd-journald[659]: System Journal (/var/log/journal/3bc6eaeb8e4142b4afabded586da3e8a) is 192.0M, max 1.1G, 970.0M free.
[parag@localhost ~]$ 

2) Executed rpm-ostree upgrade; also executed "rpm-ostree status", whose output was normal

3) Rebooted and checked the "rpm-ostree status" output, which showed that warning (the 2 red lines)

4) Checked the journal command again, which had just added the lines below to the journal:
================================================================================
-- Reboot --
Sep 13 10:56:11 localhost systemd-journald[298]: Journal started
Sep 13 10:56:11 localhost systemd-journald[298]: Runtime Journal (/run/log/journal/3bc6eaeb8e4142b4afabded586da3e8a) is 8.0M, max 99.3M, 91.3M free.
Sep 13 10:56:12 localhost systemd-journald[298]: Journal stopped
Sep 13 10:56:13 localhost.localdomain systemd-journald[298]: Received SIGTERM from PID 1 (systemd).
Sep 13 10:56:13 localhost.localdomain systemd-journald[659]: Journal started
Sep 13 10:56:13 localhost.localdomain systemd-journald[659]: Runtime Journal (/run/log/journal/3bc6eaeb8e4142b4afabded586da3e8a) is 8.0M, max 99.3M, 91.3M free.
Sep 13 10:56:13 localhost.localdomain systemd-journald[659]: Time spent on flushing to /var is 53.765ms for 992 entries.
Sep 13 10:56:13 localhost.localdomain systemd-journald[659]: System Journal (/var/log/journal/3bc6eaeb8e4142b4afabded586da3e8a) is 192.0M, max 1.1G, 970.0M free.
================================================================================

5) Then rebooted this system to repeat steps 3 and 4 above. I got the same output as in step 4.

In conclusion, I have one system with an extra /var mount and another with no extra /var mount, but both systems show "Journal stopped" messages, and that has become the pattern. I am not sure how it can be related to the "failed to finalize previous deployment" warning.

I am also attaching a text file here from this F31 VM which includes the journal messages above.

Comment 10 Parag Nemade 2019-09-13 05:38:16 UTC
Created attachment 1614705 [details]
13th Sept journal logs

Do ask for any more logs if needed.
Thank you.

Comment 11 Jonathan Lebon 2019-09-17 19:35:06 UTC
OK, I just wanted to verify that you didn't have e.g. /var/log or /var/log/journal themselves as separate mounts, with some funkiness going on where systemd was not able to write everything back.

> Created attachment 1614705 [details]
> 13th Sept journal logs

I meant the full logs, i.e. `journalctl --since=today --output=json-pretty` (or similar, so that it covers the boot where finalization ran).

Is the F31 VM where you're seeing this mostly OOTB? If so, I can try to see if I can reproduce this.

Comment 12 Parag Nemade 2019-09-18 12:46:35 UTC
Created attachment 1616218 [details]
18th Sept journal logs

Comment 13 Parag Nemade 2019-09-18 12:52:42 UTC
Created attachment 1616219 [details]
gzipped 18th Sept journal logs

Ah, but you need logs up until the tree is finalized. So it took me:
1) Start system, run "rpm-ostree upgrade"
2) reboot
3) "rpm-ostree status" showed warning
4) reboot
5) Now there is no warning, and at this stage this log file was created.

This time I gzipped the txt file.

Comment 14 Parag Nemade 2019-09-18 12:58:42 UTC
Also, this VM does not have any extra partitions.
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 963M     0  963M   0% /dev
tmpfs                    994M     0  994M   0% /dev/shm
tmpfs                    994M  1.3M  993M   1% /run
/dev/mapper/fedora-root   12G  6.8G  4.1G  63% /sysroot
tmpfs                    994M   80K  994M   1% /tmp
/dev/vda1                976M  142M  768M  16% /boot
tmpfs                    199M     0  199M   0% /run/user/1000
tmpfs                    199M  5.7M  194M   3% /run/user/42


I can reproduce this issue on both host and guest F31 SB systems.

Comment 15 Parag Nemade 2019-09-19 05:00:00 UTC
While debugging further, I also found that the grub menu entries are doubled, e.g.:
Fedora 31.20190917.n.2 (Workstation Edition) (ostree)
Fedora 31.20190916.n.0 (Workstation Edition) (ostree)
Fedora 31.20190917.n.2 (Workstation Edition) (ostree)
Fedora 31.20190916.n.0 (Workstation Edition) (ostree)

The first entry looks normal (5 lines) when 'e' is pressed, but the second entry contains 15 lines.

Comment 16 Jens Petersen 2019-09-24 11:12:42 UTC
Does running `ostree admin finalize-staged` and rebooting help for you?

Comment 17 Jens Petersen 2019-09-24 11:22:35 UTC
BTW I think this bug is reproducible on any fresh SB31 install.

Comment 18 Parag Nemade 2019-09-24 12:08:52 UTC
Yes, it works. I installed the new ostree update, executed `ostree admin finalize-staged`, and rebooted.
Then `rpm-ostree status` did not show any warning. But the boot loader still showed duplicate boot entries.

Comment 19 Jens Petersen 2019-09-24 13:32:09 UTC
(In reply to Jens Petersen from comment #17)
> BTW I think this bug is reproducible on any fresh SB31 install.

I mean after any ostree deployment transaction.

Comment 20 Jens Petersen 2019-09-24 13:52:08 UTC
From my testing, the simplest workaround after an ostree transaction, e.g.:

$ rpm-ostree install zsh

seems to be to run:

$ sudo ostree admin finalize-staged
$ reboot

Then no warning is seen in rpm-ostree status.
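
Putting that together as one sequence (zsh is just the example transaction from above):

$ rpm-ostree install zsh
$ sudo ostree admin finalize-staged
$ systemctl reboot
# after logging back in:
$ rpm-ostree status   # no warning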

Comment 21 Jonathan Lebon 2019-09-24 16:28:26 UTC
> Blocks: 1644940

Hmm, not sure if it's bad enough to warrant a freeze exception. Note that (AFAICT) it still does update; it's just that it throws a spurious (albeit scary) warning after reboot.

> BTW I think this bug is reproducible on any fresh SB31 install.

Thanks for checking this. Investigating now.

Comment 22 Jonathan Lebon 2019-09-24 18:37:19 UTC
OK, I can confirm this happens OOTB on SB31, though it doesn't happen every time. It does not reproduce on an f31 FCOS image I built.
Will dig deeper.
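
For reference, the upstream fix linked above (ostreedev/ostree pull 1926) orders finalization after systemd-journal-flush.service, so that its shutdown-time work runs while journald can still write messages out to the persistent journal on /var. As an illustration only (the actual change ships in ostree's own unit files, not as a drop-in), the same ordering could be expressed like this:

# hypothetical drop-in: /etc/systemd/system/ostree-finalize-staged.service.d/order.conf
[Unit]
After=systemd-journal-flush.service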

Comment 23 Parag Nemade 2019-09-25 03:09:45 UTC
A third problem discovered now: for the last 2-3 days, every time I install a new ostree deployment and reboot, the system never reaches grub and my laptop hangs. Then I need to power down and hard-start the T480s laptop, and then it succeeds.

Comment 24 Jens Petersen 2019-09-25 05:41:59 UTC
(In reply to Jonathan Lebon from comment #22)
> OK, I can confirm this happens OOTB on SB31. Though it doesn't happen
> everytime. It does not reproduce on an f31 FCOS I built.

Thanks!

It can break toolbox, for example.

Comment 25 Fedora Update System 2019-09-25 17:31:58 UTC
FEDORA-2019-5f32602fd4 has been submitted as an update to Fedora 31. https://bodhi.fedoraproject.org/updates/FEDORA-2019-5f32602fd4

Comment 26 Fedora Update System 2019-09-26 02:56:19 UTC
ostree-2019.4-1.fc31, rpm-ostree-2019.6-1.fc31 has been pushed to the Fedora 31 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-5f32602fd4

Comment 27 Parag Nemade 2019-09-26 06:24:47 UTC
Jonathan,

I am unable to conclude whether the duplicate boot loader entries in grub are considered a bug, and which component I should report it against. After updating to this new fix, I still see 2 entries in grub for each installed ostree deployment.

Comment 28 Jonathan Lebon 2019-09-26 14:30:45 UTC
> I am unable to conclude whether the duplicate boot loader entries in grub are considered a bug, and which component I should report it against. After updating to this new fix, I still see 2 entries in grub for each installed ostree deployment.

This should probably be a separate bug report, because that one might actually be worth an exception given how confusing it is. Anyway, it should be fixed by:
https://pagure.io/workstation-ostree-config/pull-request/156

Comment 29 Fedora Update System 2019-09-26 18:17:33 UTC
FEDORA-2019-5f32602fd4 has been submitted as an update to Fedora 31. https://bodhi.fedoraproject.org/updates/FEDORA-2019-5f32602fd4

Comment 30 Fedora Update System 2019-09-27 02:28:48 UTC
ostree-2019.4-2.fc31, rpm-ostree-2019.6-1.fc31 has been pushed to the Fedora 31 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-5f32602fd4

Comment 31 Fedora Update System 2019-09-28 00:02:52 UTC
ostree-2019.4-2.fc31, rpm-ostree-2019.6-1.fc31 has been pushed to the Fedora 31 stable repository. If problems still persist, please make note of it in this bug report.

Comment 32 Fedora Update System 2020-04-02 16:15:04 UTC
FEDORA-2020-df95a90cce has been submitted as an update to Fedora 31. https://bodhi.fedoraproject.org/updates/FEDORA-2020-df95a90cce

Comment 33 Fedora Update System 2020-04-03 20:50:01 UTC
FEDORA-2020-df95a90cce has been pushed to the Fedora 31 testing repository.
In short time you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-df95a90cce`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2020-df95a90cce

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.

Comment 34 Fedora Update System 2020-04-09 18:18:37 UTC
FEDORA-2020-df95a90cce has been pushed to the Fedora 31 stable repository.
If problem still persists, please make note of it in this bug report.

Comment 35 Junjie Yuan 2020-11-28 12:24:45 UTC
This issue appeared again on Fedora Silverblue 33.20201127.0.

[junjie@xps ~]$ rpm-ostree status
State: idle
Warning: failed to finalize previous deployment
         error: During /etc merge: While computing configuration diff: Error reading from file: Input/output error
         check `journalctl -b -1 -u ostree-finalize-staged.service`
Deployments:
● ostree://fedora:fedora/33/x86_64/silverblue
                   Version: 33.20201127.0 (2020-11-27T00:41:09Z)
                BaseCommit: 474dce288460e1e75b15f7ef5d5e21aa1c20650c7f4bf814dba04ecdb6d89729
              GPGSignature: Valid signature by 963A2BEB02009608FE67EA4249FD77499570FF31
           LayeredPackages: google-chrome-stable

[junjie@xps ~]$ journalctl -b -1 -u ostree-finalize-staged.service
-- Logs begin at Fri 2020-10-23 16:46:17 CST, end at Sat 2020-11-28 20:20:59 CST. --
Nov 28 19:58:34 xps systemd[1]: Finished OSTree Finalize Staged Deployment.
Nov 28 19:59:10 xps systemd[1]: Stopping OSTree Finalize Staged Deployment...
Nov 28 19:59:10 xps ostree[9140]: Finalizing staged deployment
Nov 28 19:59:10 xps ostree[9140]: error: During /etc merge: While computing configuration diff: Error reading from file>
Nov 28 19:59:10 xps systemd[1]: ostree-finalize-staged.service: Control process exited, code=exited, status=1/FAILURE
Nov 28 19:59:10 xps systemd[1]: ostree-finalize-staged.service: Failed with result 'exit-code'.
Nov 28 19:59:10 xps systemd[1]: Stopped OSTree Finalize Staged Deployment.

[junjie@xps ~]$ rpm -q rpm-ostree ostree
warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
rpm-ostree-2020.8-1.fc33.x86_64
ostree-2020.8-1.fc33.x86_64

[junjie@xps ~]$ rpm-ostree upgrade --preview
⠄ Receiving metadata objects: 0/(estimating) -/s 0 bytes... 
Receiving metadata objects: 0/(estimating) -/s 0 bytes... done
Enabled rpm-md repositories: fedora-cisco-openh264 updates fedora google-chrome
Updating metadata for 'fedora-cisco-openh264'... done
rpm-md repo 'fedora-cisco-openh264'; generated: 2020-08-25T19:10:34Z
Updating metadata for 'updates'... done
rpm-md repo 'updates'; generated: 2020-11-27T00:49:07Z
Updating metadata for 'fedora'... done
rpm-md repo 'fedora'; generated: 2020-10-19T23:27:19Z
Updating metadata for 'google-chrome'... done
rpm-md repo 'google-chrome'; generated: 2020-11-19T20:41:45Z
Importing rpm-md... done
AvailableUpdate:
        Version: 33.20201128.0 (2020-11-28T01:16:50Z)
         Commit: 11a57ff749420f5e88c3f481616201dd8955f08f1749bee98ff3cf63808a7e14
   GPGSignature: 1 signature
                 Signature made Sat 28 Nov 2020 09:16:54 AM CST using RSA key ID 49FD77499570FF31
                 Good signature from "Fedora <fedora-33-primary>"
       Upgraded: c-ares 1.16.1-3.fc33 -> 1.17.0-1.fc33
                 cups-filters 1.28.5-1.fc33 -> 1.28.5-3.fc33
                 cups-filters-libs 1.28.5-1.fc33 -> 1.28.5-3.fc33
                 gtkmm30 3.24.2-3.fc33 -> 3.24.2-4.fc33
                 hplip 3.20.9-2.fc33 -> 3.20.9-4.fc33
                 hplip-common 3.20.9-2.fc33 -> 3.20.9-4.fc33
                 hplip-libs 3.20.9-2.fc33 -> 3.20.9-4.fc33
                 libgcrypt 1.8.6-4.fc33 -> 1.8.7-1.fc33
                 libsane-airscan 0.99.18-1.fc33 -> 0.99.21-1.fc33
                 libsane-hpaio 3.20.9-2.fc33 -> 3.20.9-4.fc33
                 pam 1.4.0-8.fc33 -> 1.4.0-9.fc33
                 qpdf-libs 10.0.3-1.fc33 -> 10.0.4-1.fc33
                 sane-airscan 0.99.18-1.fc33 -> 0.99.21-1.fc33
                 thermald 2.3-2.fc33 -> 2.4-1.fc33

Comment 36 Jonathan Lebon 2020-11-30 14:39:30 UTC
> During /etc merge: While computing configuration diff: Error reading from file: Input/output error

This is a different issue. If it's reproducible, please file a separate bug for this.

