Bug 1717663
Summary: | python-priority fails to build with Python 3.8 | | |
---|---|---|---|
Product: | [Fedora] Fedora | Reporter: | Miro Hrončok <mhroncok> |
Component: | python-priority | Assignee: | Robert-André Mauchin 🐧 <zebob.m> |
Status: | CLOSED DUPLICATE | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
Severity: | urgent | Docs Contact: | |
Priority: | unspecified | | |
Version: | rawhide | CC: | python-sig, zebob.m |
Target Milestone: | --- | | |
Target Release: | --- | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
URL: | https://copr.fedorainfracloud.org/coprs/g/python/python3.8/package/python-priority/ | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2019-06-05 21:59:56 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Bug Depends On: | | | |
Bug Blocks: | 1686977 | | |
Description
Miro Hrončok, 2019-06-05 21:44:45 UTC
```
=================================== FAILURES ===================================
_______________ TestPriorityTreeOutput.test_period_of_repetition _______________

self = <test_priority.TestPriorityTreeOutput object at 0x7f697a9ed150>

    @given(STREAMS_AND_WEIGHTS)
>   def test_period_of_repetition(self, streams_and_weights):
        """
        The period of repetition of a priority sequence is given by the sum
        of the weights of the streams. Once that many values have been
        pulled out the sequence repeats identically.

test/test_priority.py:492:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python2.7/site-packages/hypothesis/core.py:600: in execute
    % (test.__name__, text_repr[0])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <hypothesis.core.StateForActualGivenExecution object at 0x7f697a9ed510>
message = 'Hypothesis test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weigh... (257, 249),\n (10715, 247)]) produces unreliable results: Falsified on the first call but did not on a subsequent one'

    def __flaky(self, message):
        if len(self.falsifying_examples) <= 1:
>           raise Flaky(message)
E           Flaky: Hypothesis test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weights=[(29729, 157),
E             (2627, 16),
E             (17000, 17),
E             (13695, 160),
E             (90876124250409049, 71),
E             (23759, 4),
E             (1, 1),
E             (23041, 162),
E             (104, 81),
E             (80, 201),
E             (257, 249),
E             (10715, 247)]) produces unreliable results: Falsified on the first call but did not on a subsequent one

/usr/lib/python2.7/site-packages/hypothesis/core.py:770: Flaky
---------------------------------- Hypothesis ----------------------------------
Falsifying example: test_period_of_repetition(self=<test_priority.TestPriorityTreeOutput at 0x7f697a9ed150>, streams_and_weights=[(29729, 157), (2627, 16), (17000, 17), (13695, 160), (90876124250409049, 71), (23759, 4), (1, 1), (23041, 162), (104, 81), (80, 201), (257, 249), (10715, 247)])

Unreliable test timings! On an initial run, this test took 308.50ms, which
exceeded the deadline of 200.00ms, but on a subsequent run it took 169.27 ms,
which did not. If you expect this sort of variability in your test timings,
consider turning deadlines off for this test by setting deadline=None.

You can reproduce this example by temporarily adding
@reproduce_failure('4.23.4', 'AIUDVOhAnJ8DExSED9ICAITPEHICAGr9n4ID+QKFtrUBBRCxRrQEN7mcAwEBlQAAAQCSAMQHBIG0AKFtASnOUIMGBwBAnsg0AAECAPg+BwMAU7X2Aw==')
as a decorator on your test case
==================== 1 failed, 161 passed in 491.49 seconds ====================
```

Looks like a flaky test :(

*** This bug has been marked as a duplicate of bug 1709800 ***
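The Hypothesis error above is a timing flake, not a real falsification: the first run exceeded the 200 ms per-example deadline, a rerun did not, so Hypothesis raises `Flaky`. The message itself suggests the fix of setting `deadline=None`. A minimal sketch of that pattern, using a stand-in property rather than the real `test_period_of_repetition` from python-priority:

```python
# Sketch: disabling Hypothesis' per-example deadline for a timing-sensitive
# property test, as the Flaky error message above suggests. The property
# here is a hypothetical stand-in, not the real python-priority test.
from hypothesis import given, settings, strategies as st

# Stand-in for STREAMS_AND_WEIGHTS: lists of (stream_id, weight) pairs.
streams_and_weights = st.lists(
    st.tuples(st.integers(min_value=1), st.integers(min_value=1, max_value=255))
)

@settings(deadline=None)  # no per-example time limit, so slow runs can't flake
@given(streams_and_weights)
def test_weight_sum_is_bounded(pairs):
    # Trivial stand-in property: every weight is >= 1, so the total weight
    # is at least the number of streams.
    assert sum(w for _, w in pairs) >= len(pairs)

# Calling the decorated function makes Hypothesis run many generated examples.
test_weight_sum_is_bounded()
```

With `deadline=None`, Hypothesis still shrinks and reports genuine falsifying examples; it only stops treating a one-off slow execution as a failure.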