Bug 1919912 (CVE-2020-26418) - CVE-2020-26418 wireshark: Kafka dissector memory leak (wnpa-sec-2020-16)
Summary: CVE-2020-26418 wireshark: Kafka dissector memory leak (wnpa-sec-2020-16)
Keywords:
Status: CLOSED WONTFIX
Alias: CVE-2020-26418
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: Red Hat Product Security
QA Contact:
URL:
Whiteboard:
Depends On: 1919913 1924649
Blocks:
 
Reported: 2021-01-25 11:50 UTC by Dhananjay Arunesh
Modified: 2021-09-28 17:02 UTC
CC: 9 users

Fixed In Version: wireshark 3.2.9, wireshark 3.4.1
Clone Of:
Environment:
Last Closed: 2021-06-29 20:58:01 UTC
Embargoed:



Comment 1 Dhananjay Arunesh 2021-01-25 11:51:55 UTC
Created wireshark tracking bugs for this issue:

Affects: fedora-all [bug 1919913]

Comment 3 Mauro Matteo Cascella 2021-02-01 10:24:26 UTC
External References:

https://www.wireshark.org/security/wnpa-sec-2020-16

Comment 4 Mauro Matteo Cascella 2021-02-01 14:31:58 UTC
Statement:

This issue does not affect the versions of `wireshark` as shipped with Red Hat Enterprise Linux 5, 6, and 7, as they did not include support for the Apache Kafka dissector.

Comment 5 Mauro Matteo Cascella 2021-02-03 11:09:23 UTC
Rather than a memory leak, I'd consider this bug an improper validation of the decompression size (while decoding packets captured in a pcap file or coming from the network), leading to an assertion failure and possible crash. Among other things, the patch checks the 'length' argument of decompress() in epan/dissectors/packet-kafka.c.

---
#define MAX_DECOMPRESSION_SIZE (50 * 1000 * 1000) // Arbitrary
if (length > MAX_DECOMPRESSION_SIZE) {
    expert_add_info(pinfo, NULL, &ei_kafka_bad_decompression_length);
    return FALSE;
}

