Bug 1717663 - python-priority fails to build with Python 3.8
Summary: python-priority fails to build with Python 3.8
Keywords:
Status: CLOSED DUPLICATE of bug 1709800
Alias: None
Product: Fedora
Classification: Fedora
Component: python-priority
Version: rawhide
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Robert-André Mauchin 🐧
QA Contact: Fedora Extras Quality Assurance
URL: https://copr.fedorainfracloud.org/cop...
Whiteboard:
Depends On:
Blocks: PYTHON38
 
Reported: 2019-06-05 21:44 UTC by Miro Hrončok
Modified: 2019-06-05 21:59 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-05 21:59:56 UTC
Type: ---
Embargoed:



Description Miro Hrončok 2019-06-05 21:44:45 UTC
python-priority fails to build with Python 3.8.0b1. See https://copr.fedorainfracloud.org/coprs/g/python/python3.8/package/python-priority/ for actual logs. This report is automated and not very verbose, but I'll get back here with details.

Comment 1 Miro Hrončok 2019-06-05 21:59:02 UTC
=================================== FAILURES ===================================
_______________ TestPriorityTreeOutput.test_period_of_repetition _______________

self = <test_priority.TestPriorityTreeOutput object at 0x7f697a9ed150>

    @given(STREAMS_AND_WEIGHTS)
>   def test_period_of_repetition(self, streams_and_weights):
        """
        The period of repetition of a priority sequence is given by the sum of
        the weights of the streams. Once that many values have been pulled out
        the sequence repeats identically.

test/test_priority.py:492: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python2.7/site-packages/hypothesis/core.py:600: in execute
    % (test.__name__, text_repr[0])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <hypothesis.core.StateForActualGivenExecution object at 0x7f697a9ed510>
message = 'Hypothesis test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weigh... (257, 249),\n (10715, 247)]) produces unreliable results: Falsified on the first call but did not on a subsequent one'

    def __flaky(self, message):
        if len(self.falsifying_examples) <= 1:
>           raise Flaky(message)
E           Flaky: Hypothesis test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weights=[(29729, 157),
E            (2627, 16),
E            (17000, 17),
E            (13695, 160),
E            (90876124250409049, 71),
E            (23759, 4),
E            (1, 1),
E            (23041, 162),
E            (104, 81),
E            (80, 201),
E            (257, 249),
E            (10715, 247)]) produces unreliable results: Falsified on the first call but did not on a subsequent one

/usr/lib/python2.7/site-packages/hypothesis/core.py:770: Flaky
---------------------------------- Hypothesis ----------------------------------
Falsifying example: test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weights=[(29729, 157),
 (2627, 16),
 (17000, 17),
 (13695, 160),
 (90876124250409049, 71),
 (23759, 4),
 (1, 1),
 (23041, 162),
 (104, 81),
 (80, 201),
 (257, 249),
 (10715, 247)])
Unreliable test timings! On an initial run, this test took 308.50ms, which exceeded the deadline of 200.00ms, but on a subsequent run it took 169.27 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.

You can reproduce this example by temporarily adding @reproduce_failure('4.23.4', 'AIUDVOhAnJ8DExSED9ICAITPEHICAGr9n4ID+QKFtrUBBRCxRrQEN7mcAwEBlQAAAQCSAMQHBIG0AKFtASnOUIMGBwBAnsg0AAECAPg+BwMAU7X2Aw==') as a decorator on your test case
==================== 1 failed, 161 passed in 491.49 seconds ====================

Looks like a flaky test :(
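The property being tested (and the suggested workaround) can be illustrated with a toy sketch. This is not the actual scheduler from python-priority; `weighted_round_robin` is a hypothetical stand-in that merely demonstrates the invariant the failing test checks: once `sum(weights)` values have been pulled from the sequence, it repeats identically.

```python
from itertools import islice

def weighted_round_robin(streams_and_weights):
    """Yield stream ids forever; each stream appears `weight` times per
    period. A toy stand-in for priority's tree scheduler, for
    illustration only (the real library interleaves streams)."""
    while True:
        for stream_id, weight in streams_and_weights:
            for _ in range(weight):
                yield stream_id

streams = [(1, 2), (3, 1), (5, 3)]
period = sum(weight for _, weight in streams)  # 6
seq = weighted_round_robin(streams)
first = list(islice(seq, period))
second = list(islice(seq, period))
# The period of repetition equals the sum of the weights.
assert first == second
```

Note that the failure itself is not a falsified property but a Hypothesis `Flaky` error caused by the 200 ms per-example deadline: the example exceeded the deadline on the first run and not on the second. As the output above suggests, decorating the test with `@settings(deadline=None)` would disable the deadline check for this test.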

Comment 2 Miro Hrončok 2019-06-05 21:59:56 UTC

*** This bug has been marked as a duplicate of bug 1709800 ***

