
Posts

Showing posts from June, 2023

Re: Kafka Streaming: RocksDbSessionBytesStoreSupplier seems lost data in Kubernetes

The class `RocksDbSessionBytesStoreSupplier` is in an `internal` package and thus you should not use it directly. Instead, you should use the public factory class `org.apache.kafka.streams.state.Stores`. However, your usage seems correct in general. Not sure why you pass in the supplier directly though? In the end, if you want to set a name for the store, you can use `Materialized.as("...")`, and you can set retention time via `Materialized#withRetention(...)` (which would be the proper usage of the API). Besides this, the store should be backed by a changelog topic and thus you should never lose any data, independent of your deployment. Of course, I would recommend using a StatefulSet and re-attaching storage to the pod to avoid re-creating the store from the changelog. HTH, -Matthias On 6/28/23 8:49 AM, An, Hongguo (CORP) wrote: > Hi: > I am using RocksDbSessionBytesStoreSupplier in my kafka streaming application for an aggregation li...
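A minimal, untested sketch of the public-API shape described above, assuming string keys, a list-valued aggregate, and a made-up store name and durations (none of these come from the thread):

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.SessionStore;

public class SessionAggregationSketch {

    // Name the session store via Materialized.as(...) and set retention with withRetention(...),
    // instead of instantiating the internal RocksDbSessionBytesStoreSupplier directly.
    static KTable<Windowed<String>, List<String>> aggregate(KStream<String, String> stream,
                                                            Serde<String> stringSerde,
                                                            Serde<List<String>> listSerde) {
        Materialized<String, List<String>, SessionStore<Bytes, byte[]>> materialized =
                Materialized.<String, List<String>, SessionStore<Bytes, byte[]>>as("pft-session-store") // hypothetical name
                        .withRetention(Duration.ofDays(7))
                        .withKeySerde(stringSerde)
                        .withValueSerde(listSerde);

        return stream
                .groupByKey(Grouped.with(stringSerde, stringSerde))
                .windowedBy(SessionWindows.ofInactivityGapAndGrace(Duration.ofMillis(100), Duration.ofMillis(50)))
                .aggregate(ArrayList::new,
                           (key, value, list) -> { list.add(value); return list; },    // aggregator
                           (key, left, right) -> { left.addAll(right); return left; }, // session merger
                           materialized);
    }
}
```

Note that the retention passed to `withRetention(...)` must be at least the inactivity gap plus the grace period, mirroring the retention argument the supplier took in the original snippet.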

Kafka Streaming: RocksDbSessionBytesStoreSupplier seems lost data in Kubernetes

Hi: I am using RocksDbSessionBytesStoreSupplier in my kafka streaming application for an aggregation like this: var materialized = Materialized.<String, List<CDCRecord>>as( new RocksDbSessionBytesStoreSupplier(env.getProperty("messages.cdc.pft.topic", "NASHCM.PAYROLL.PFT.FILENUMBER"), Duration.parse(env.getProperty("pft.duration", "P7D")).toMillis())) .withKeySerde(stringSerde) .withValueSerde(listSerde); stream.windowedBy(SessionWindows .with(Duration.parse(env.getProperty("pft.gap", "PT0.1S"))) .grace(Duration.parse(env.getProperty("pft.duration", "PT0.05S"))) ) .aggregate(ArrayList::new, (k, v, list)->{list.add(v); return list;}, ...

Migration from 2.7 to 3.5 and random URP

Hi, I'm attempting a migration from 2.7 to 3.5.0 in the way it's recommended in the documentation. 1. Freeze the following settings: inter.broker.protocol.version=2.7-IV2 log.message.format.version=2.7-IV2 2. Upgrade the Kafka version to 3.5.0 3. Change inter.broker.protocol.version to 3.5 4. Change log.message.format.version to 3.5 After step 2, I get random under-replicated partitions on some topics (all of them are empty). But the way they are under-replicated seems weird to me: ``` Topic: __consumer_offsets Partition: 13 Leader: 10001 Replicas: 10001,10000,10002 Isr: 10002,10000 ``` It turns out the leader is no longer part of the ISR. I didn't even know this was possible. Does anyone know how this is possible and if this is a known issue? Kind Regards, J
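For anyone who wants to double-check this state programmatically rather than via the CLI, here is a hedged sketch (bootstrap server is a placeholder) that lists partitions whose leader has dropped out of the ISR, using the Java AdminClient:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class LeaderNotInIsrCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // Describe the topic and flag any partition whose leader is missing from its ISR list.
            TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                                         .topicNameValues().get("__consumer_offsets").get();
            for (TopicPartitionInfo p : desc.partitions()) {
                if (p.leader() != null && !p.isr().contains(p.leader())) {
                    System.out.printf("Partition %d: leader %d not in ISR %s%n",
                                      p.partition(), p.leader().id(), p.isr());
                }
            }
        }
    }
}
```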

Offsets: consumption and production in rollback

I have some doubts regarding message consumption and production, as well as transactional capabilities. I am using a Kafka template to produce a message within a transaction. After that, I execute another transaction that produces a message and intentionally throws a runtime exception to simulate a transaction rollback. Next, I use the Kafka AdminClient to retrieve the latest offset for the topic partition and the consumer group's offsets for the same topic partition. However, when I compare the offset numbers, I notice a difference. In this example, the consumer has 4 offsets, while the topic has only 2. I have come across references to this issue in a Spring-Kafka report, specifically KAFKA-10683, where developers describe it as either Bogus or Pseudo Lag. I am keen on resolving this problem, and I would greatly appreciate hearing about your experiences and knowledge regarding this matter. Thank you very much Henry
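A hedged sketch of the comparison described above, assuming a topic "tx-demo", a group "demo-group", and a local broker (all placeholders). The relevant detail is that each transaction commit or abort writes a control marker that occupies one offset, so the log-end offset can sit ahead of the group's committed offset without any real lag ("pseudo lag"):

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class PseudoLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        TopicPartition tp = new TopicPartition("tx-demo", 0);                     // placeholder
        try (Admin admin = Admin.create(props)) {
            // Latest offset in the partition's log.
            long endOffset = admin.listOffsets(Map.of(tp, OffsetSpec.latest()))
                                  .partitionResult(tp).get().offset();
            // Offset committed by the consumer group for the same partition.
            OffsetAndMetadata committed = admin.listConsumerGroupOffsets("demo-group")
                                               .partitionsToOffsetAndMetadata().get().get(tp);
            System.out.printf("log end offset=%d, group committed=%s%n", endOffset, committed);
            // A gap here does not necessarily mean unread records: transaction control markers
            // (and, for read_committed consumers, aborted records) also consume offsets.
        }
    }
}
```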

Release plan required for version 3.5.1

Hi Team, There is a vulnerability reported against snappy-java-1.1.8.4.jar; are we impacted by this if we are using only the client jar and the Kafka server? Below are the vulnerabilities that are still open, and we are unable to find any details on these CVEs in Jira. In which version are these CVEs planned to be resolved? CVE-2022-42003 CVE-2022-42004 CVE-2023-34454 CVE-2023-34453 CVE-2023-35116 Kindly share the release plan for version 3.5.1. Regards, Sahil

Streams/RocksDB: Why Universal Compaction?

Hello there! I was wondering if anyone (perhaps an early developer or power-user of Kafka Streams) knows why the Streams developers made the default setting for RocksDB compaction "Universal" compaction rather than "Level" compaction? My understanding (in which I am extremely UNconfident) is as follows— Supposedly Universal compaction leads to lower write amplification after compaction finishes. In a run of Universal compaction, all data is compacted; as per the RocksDB documentation it is possible for temporary write amplification of up to 2x during this process. There have also been reports of "write stalls" during this process [1]. In Level compaction, only certain levels (tiers of SST files) are compacted at once, meaning that the compaction process is shorter and less intensive, but that write amplification after compaction finishes is higher than with universal compaction. Can anyone confirm/deny/correct this? [1] https://...
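For anyone who wants to experiment with level compaction instead of the default, Kafka Streams exposes RocksDB tuning via the `rocksdb.config.setter` config. A hedged sketch follows; the class name and the single option set here are illustrative, not a recommendation:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.CompactionStyle;
import org.rocksdb.Options;

public class LevelCompactionConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        // Override the compaction style for every RocksDB store in this Streams application.
        options.setCompactionStyle(CompactionStyle.LEVEL);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // Nothing to release; only needed if setConfig() allocates RocksDB objects (e.g. caches).
    }
}

// Registered via:
//   props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, LevelCompactionConfigSetter.class);
```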

Re: required dependent jars on kafka-clients-3.3.1.jar

Hi Sahil I am curious to understand why you don't consume only the kafka-clients package. You can find the package at: https://search.maven.org/artifact/org.apache.kafka/kafka-clients/3.3.1/jar . You don't need to import the entire Kafka distribution just to use the clients. Nevertheless, if you still want to understand the dependency tree for kafka-clients, I would encourage you to look at the "clients" project in our gradle file and that should get you started in identifying the dependencies. [1] https://github.com/apache/kafka/blob/6f7682d2f4ecc8110f80cb6301de02f512d36a53/build.gradle#L1331 -- Divij Vaidya On Mon, Jun 19, 2023 at 9:28 AM Sahil Sharma D <sahil.d.sharma@ericsson.com.invalid> wrote: > Hi Team, > > We are using Kafka 3.3.1 in our product, there are multiple jars bundled > in it. We are using only kafka-clients-3.3.1.jar out of those jars. > > Can you please help us in identifying the jars which are bei...

TAC Applications for Community Over Code North America and Asia now open

Hi All, (This email goes out to all our user and dev project mailing lists, so you may receive this email more than once.) The Travel Assistance Committee has opened up applications to help get people to the following events: *Community Over Code Asia 2023 - * *August 18th to August 20th in Beijing, China* Applications for this event close on the 6th of July, so time is short; please apply as soon as possible. TAC is prioritising applications from the Asia and Oceania regions. More details on this event can be found at: https://apachecon.com/acasia2023/ For more information on how to apply, please read: https://tac.apache.org/ *Community Over Code North America - * *October 7th to October 10th in Halifax, Canada* Applications for this event close on the 22nd of July. We expect many applications so please do apply as soon as you can. TAC is prioritising applications from the North and South America regions. More details on this event can be found at: ht...

Re: Process to Upgrade Zookeeper from 2.7.0 to 3.4.1

Gaurav, On Thu, 15 Jun 2023 at 15:27, Gaurav Pande < gaupande21@gmail.com > wrote: > > Hi Divij, > > Thanks a lot for detailed explanation. > > One last thing(stupid question) if you don't mind : > > Presently I have only a single Zookeeper running in standalone mode, My > query is if I standup 2 new zk nodes how should I make my ensemble work > w.r.t Should I configure two new zk nodes and restart them first or should > I restart the standalone existing Zookeeper by adding two new zk nodes > entry ? Switching from standalone mode to 3 nodes is doable, but you must test it in a dev environment: you risk that your system falls into a split-brain situation. You have to configure the list of servers in the zookeeper configuration file and also add an "id" configuration file on each server. I don't know how Kafka bundles ZooKeeper, so I am not going to paste instructions here. You can as...
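For reference, a hedged sketch of what the ensemble configuration mentioned above typically looks like (hostnames and paths are placeholders; adapt to however your Kafka distribution ships ZooKeeper, e.g. config/zookeeper.properties):

```
# Every node in the 3-node ensemble lists all servers:
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=10
syncLimit=5
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888

# Each node additionally needs a "myid" file under dataDir containing only its own id,
# e.g. on zk1.example.com:
#   echo 1 > /var/lib/zookeeper/myid
```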

Re: Process to Upgrade Zookeeper from 2.7.0 to 3.4.1

Hi Divij, Thanks a lot for the detailed explanation. One last thing (stupid question), if you don't mind: Presently I have only a single Zookeeper running in standalone mode. My query is: if I stand up 2 new zk nodes, how should I make my ensemble work? Should I configure the two new zk nodes and restart them first, or should I restart the existing standalone Zookeeper after adding the two new zk node entries? Regards, Gaurav Pande On Wed, 14 Jun, 2023, 22:50 Divij Vaidya, < divijvaidya13@gmail.com > wrote: > Gaurav > > You can find the compatibility matrix for Zk here: > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=240882784#KIP902:UpgradeZookeeperto3.8.1-Compatibility,Deprecation,andMigrationPlan > > > More specifically, for your use case of migrating from Kafka 2.7 to Kafka > 3.4, you will not face any hiccups. Kafka 2.7 uses Zk version 3.5.x and > Kafka 3.4 uses Zk version 3.6.x. Quoting from Zookeeper...

Re: [ANNOUNCE] Apache Kafka 3.5.0

Thanks for running this release, Mickael! On Thu, Jun 15, 2023 at 4:27 PM Mickael Maison < mimaison@apache.org > wrote: > The Apache Kafka community is pleased to announce the release for Apache > Kafka 3.5.0. > > This is a minor release and it includes fixes and improvements from 201 > JIRAs. > > All of the changes in this release can be found in the release notes: > https://downloads.apache.org/kafka/3.5.0/RELEASE_NOTES.html > > An overview of the release can be found in our announcement blog post: > https://kafka.apache.org/blog > > You can download the source and binary release (Scala 2.12 and Scala 2.13) > from: > https://kafka.apache.org/downloads#3.5.0 > > > --------------------------------------------------------------------------------------------------- > > Apache Kafka is a distributed streaming platform with four core APIs: > ** The Producer API allows an application to publi...

Re: [kafka-clients] [ANNOUNCE] Apache Kafka 3.5.0

Mickael, Thanks for driving the release! Best, Bruno On 15.06.23 10:27, Mickael Maison wrote: > The Apache Kafka community is pleased to announce the release for Apache > Kafka 3.5.0. > > This is a minor release and it includes fixes and improvements from 201 > JIRAs. > > All of the changes in this release can be found in the release notes: > https://downloads.apache.org/kafka/3.5.0/RELEASE_NOTES.html > < https://downloads.apache.org/kafka/3.5.0/RELEASE_NOTES.html > > > An overview of the release can be found in our announcement blog post: > https://kafka.apache.org/blog < https://kafka.apache.org/blog > > > You can download the source and binary release (Scala 2.12 and Scala > 2.13) from: > https://kafka.apache.org/downloads#3.5.0 > < https://kafka.apache.org/downloads#3.5.0 > > > ------------------------------------------------------------------------------------------...

[ANNOUNCE] Apache Kafka 3.5.0

The Apache Kafka community is pleased to announce the release for Apache Kafka 3.5.0. This is a minor release and it includes fixes and improvements from 201 JIRAs. All of the changes in this release can be found in the release notes: https://downloads.apache.org/kafka/3.5.0/RELEASE_NOTES.html An overview of the release can be found in our announcement blog post: https://kafka.apache.org/blog You can download the source and binary release (Scala 2.12 and Scala 2.13) from: https://kafka.apache.org/downloads#3.5.0 --------------------------------------------------------------------------------------------------- Apache Kafka is a distributed streaming platform with four core APIs: ** The Producer API allows an application to publish a stream of records to one or more Kafka topics. ** The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them. ** The Streams API allows an application to act as a...

Re: Process to Upgrade Zookeeper from 2.7.0 to 3.4.1

Gaurav You can find the compatibility matrix for Zk here: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=240882784#KIP902:UpgradeZookeeperto3.8.1-Compatibility,Deprecation,andMigrationPlan More specifically, for your use case of migrating from Kafka 2.7 to Kafka 3.4, you will not face any hiccups. Kafka 2.7 uses Zk version 3.5.x and Kafka 3.4 uses Zk version 3.6.x. Quoting from Zookeeper's documentation for 3.6.0 [1] *"This is the first release for 3.6 branch. It comes with lots of new features and improvements around performance and security. It is also introducing new APIS on the client side. ZooKeeper clients from 3.4 and 3.5 branch are fully compatible with 3.6 servers. The upgrade from 3.5.7 to 3.6.0 can be executed as usual, no particular additional upgrade procedure is needed."* Hence, you can update your Zk cluster first and upgrade it to 3.6.x. Your existing 2.7 brokers will continue to work since they will be using Zk 3.5...

Re: Process to Upgrade Zookeeper from 2.7.0 to 3.4.1

Hi Luke, Thanks for helping here. I am using the Zookeeper that comes with Apache Kafka itself, is it the same? So I am not using an external zk binary. Regards, Gaurav On Wed, 14 Jun, 2023, 17:43 Luke Chen, < showuon@gmail.com > wrote: > Hi Gaurav, > > Please check Zookeeper's doc for upgrading guide. > > Thanks. > Luke > > On Wed, Jun 14, 2023 at 12:03 PM Gaurav Pande < gaupande21@gmail.com > > wrote: > > > Hi Guys, > > > > Could anyone help on this query? > > > > Regards, > > Gaurav > > > > On Tue, 13 Jun, 2023, 11:40 Gaurav Pande, < gaupande21@gmail.com > wrote: > > > > > Hello Guys, > > > > > > Iam new in this space, I was going through the documentation of > Upgrading > > > Kafka brokers to 3.4 from any previous version with Zookeeper mode , > > but I > > > couldn't find any Upgrade process...

Re: Process to Upgrade Zookeeper from 2.7.0 to 3.4.1

Hi Gaurav, Please check Zookeeper's doc for upgrading guide. Thanks. Luke On Wed, Jun 14, 2023 at 12:03 PM Gaurav Pande < gaupande21@gmail.com > wrote: > Hi Guys, > > Could anyone help on this query? > > Regards, > Gaurav > > On Tue, 13 Jun, 2023, 11:40 Gaurav Pande, < gaupande21@gmail.com > wrote: > > > Hello Guys, > > > > Iam new in this space, I was going through the documentation of Upgrading > > Kafka brokers to 3.4 from any previous version with Zookeeper mode , > but I > > couldn't find any Upgrade process for Zookeeper. > > > > Iam using Zookeeper provided by Kafka binary in this case 2.7.0 and not > > installed zk externally. > > > > So what's the process of Upgrading Zookeeper? And should I upgrade > > Zookeeper first or 3 Kafka brokers? > > > > Note - I have single Zookeeper and 3 Kafka brokers at this point. ...

RE: CVEs related to Kafka

Hi Luke, Please find my queries inline: https://issues.apache.org/jira/browse/KAFKA-14107 [Sahil: As mentioned in this ticket, CVE-2022-2048 and CVE-2022-2047 were fixed in versions 2.8.2, 3.3.0, 3.0.2, 3.1.2, 3.2.3. We are using Kafka version 3.3.1 and still we are getting these CVEs.] https://issues.apache.org/jira/browse/KAFKA-14256 [Sahil: There is no CVE mentioned in this ticket; can you please share which CVEs have been resolved in this ticket? As per this ticket, KAFKA-14256 is solved in 3.4.0; however, it is not mentioned in the Release Notes of v3.4.0.] Regards. Sahil -----Original Message----- From: Luke Chen < showuon@gmail.com > Sent: 10 May 2023 10:50 AM To: users@kafka.apache.org Cc: Tauzell, Dave < Dave.Tauzell@surescripts.com > Subject: Re: CVEs related to Kafka Hi Sahil, > in which version of Kafka these will be fixed https://issues.apache.org/jira/browse/KAFKA-14320 https://issues.apache.org/jira...

Re: Process to Upgrade Zookeeper from 2.7.0 to 3.4.1

Hi Guys, Could anyone help on this query? Regards, Gaurav On Tue, 13 Jun, 2023, 11:40 Gaurav Pande, < gaupande21@gmail.com > wrote: > Hello Guys, > > Iam new in this space, I was going through the documentation of Upgrading > Kafka brokers to 3.4 from any previous version with Zookeeper mode , but I > couldn't find any Upgrade process for Zookeeper. > > Iam using Zookeeper provided by Kafka binary in this case 2.7.0 and not > installed zk externally. > > So what's the process of Upgrading Zookeeper? And should I upgrade > Zookeeper first or 3 Kafka brokers? > > Note - I have single Zookeeper and 3 Kafka brokers at this point. > > Regards, > Gaurav >

Re: Apache Kafka consumer consumes messages with "partition" option, but not with "group" option

Hi, If you have a large number of partitions in your topic, it can take a really long time before you start seeing messages on the console. So, using the partition id is the right approach, but you just need to be patient at the command line. Out of interest, how long did you wait for the output from the console consumer? If you need to know the partition id, you will need to use a custom program to compute it based on the key. (You could have a look at the murmur2 source code in the Kafka GitHub repository and try to create a simple command-line tool to compute the partition id using the key.) However, using the --group option will only set the consumer group id of your instance of kafka-console-consumer.sh. Regards, Neeraj On Tuesday, 13 June, 2023 at 05:26:24 pm GMT+10, Geithner, Wolfgang Dr. < w.geithner@gsi.de > wrote: This is a copy of a topic I posted in stackoverflow ( https://stackoverflow.com/questions/76458064/apache-kafka-consumer-consumes-message...
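A hedged sketch of the "compute the partition from the key" idea mentioned above (assumes string keys and the default partitioner; the key and partition count are placeholders):

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class PartitionForKey {
    public static void main(String[] args) {
        String key = args.length > 0 ? args[0] : "some-key";                      // placeholder key
        int numPartitions = args.length > 1 ? Integer.parseInt(args[1]) : 12;     // placeholder count
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);                   // StringSerializer uses UTF-8
        // The default partitioner hashes the serialized key with murmur2 and takes it modulo the partition count.
        int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.printf("key '%s' -> partition %d of %d%n", key, partition, numPartitions);
    }
}
```

This matches the built-in behaviour only for records that carry a key; keyless records are distributed differently (sticky partitioning).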

Re: Consuming an entire partition with control messages

Sounds like a bug in the aiokafka library to me. If the last message in a topic partition is a tx-marker, the consumer should step over it and report the correct position after the marker. The official KafkaConsumer (i.e., the Java one) does the exact same thing. -Matthias On 5/30/23 8:41 AM, Vincent Maurin wrote: > Hello ! > > I am working on an exactly-once stream processor in Python, using the > aiokafka client library. My program stores a state in memory, that is > recovered from a changelog topic, like in kafka streams. > > On each processing loop, I am consuming messages, producing messages > to an output topic and to my changelog topic, within a transaction. > > When I need to restart a runner, to restore the state in memory, I > have a routine consuming the changelog topic from the beginning to the > "end" with a read_commited isolation level. Here I am struggling to > define when to stop my recovery : ...
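For reference, a hedged sketch of the equivalent restore loop with the Java consumer (topic, partition, and bootstrap server are placeholders): under read_committed, stopping once position() reaches the end offset works even when the last entry in the partition is a transaction marker, because the consumer steps over the marker and advances its position past it.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class ChangelogRestoreSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");          // placeholder
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

        TopicPartition tp = new TopicPartition("my-changelog", 0);                     // placeholder
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(List.of(tp));
            consumer.seekToBeginning(List.of(tp));
            long endOffset = consumer.endOffsets(List.of(tp)).get(tp);
            // Poll until the consumer's position reaches the end offset; a trailing marker
            // never shows up as a record, but position() still moves past it.
            while (consumer.position(tp) < endOffset) {
                for (ConsumerRecord<byte[], byte[]> record : consumer.poll(Duration.ofMillis(500))) {
                    // apply the record to the in-memory state here
                }
            }
        }
    }
}
```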

Apache Kafka consumer consumes messages with "partition" option, but not with "group" option

This is a copy of a topic I posted on stackoverflow ( https://stackoverflow.com/questions/76458064/apache-kafka-consumer-consumes-messages-with-partition-option-but-not-with-g ), where I haven't gotten any answer yet. Searching the web did not yield any helpful results either. Hence, I am addressing this mailing list: I am running a plain Apache Kafka server (version 3.4.1) which I would like to connect to a Telegraf consumer. The Telegraf ```[[inputs.kafka_consumer]]``` plugin has the option to consume by Kafka "group". When starting Telegraf, I get an error message: [inputs.kafka_consumer] Error in plugin: consume: kafka server: Request was for a consumer group that is not coordinated by this broker Hence, I started to investigate my setup by using the Kafka console tools and found that when executing ./kafka-console-consumer.sh --bootstrap-server myserver:9092 --topic test --partition 0 and sending messages via ```kafka-console-consumer.sh```, these...

Process to Upgrade Zookeeper from 2.7.0 to 3.4.1

Hello Guys, I am new in this space. I was going through the documentation for upgrading Kafka brokers to 3.4 from any previous version in Zookeeper mode, but I couldn't find any upgrade process for Zookeeper. I am using the Zookeeper provided by the Kafka binary, in this case 2.7.0, and have not installed zk externally. So what's the process for upgrading Zookeeper? And should I upgrade Zookeeper first or the 3 Kafka brokers? Note - I have a single Zookeeper and 3 Kafka brokers at this point. Regards, Gaurav

Announcing the Community Over Code 2023 Streaming Track

Hi all, Community Over Code < https://communityovercode.org/ >, the ASF conference, will be held in Halifax, Nova Scotia October 7-10, 2023. The call for presentations < https://communityovercode.org/call-for-presentations/ > is open now through July 13, 2023. I am one of the co-chairs for the stream processing track, and we would love to see you there and hope that you will consider submitting a talk. About the Streaming track: There are many top-level ASF projects which focus on and push the envelope for stream and event processing. ActiveMQ, Beam, Bookkeeper, Camel, Flink, Kafka, Pulsar, RocketMQ, and Spark are all house-hold names in the stream processing and analytics world at this point. These projects show that stream processing has unique characteristics requiring deep expertise. On the other hand, users need easy to apply solutions. The streaming track will host talks focused on the use cases and advances of these projects as well as othe...

Re: Request Contributor's Permissions

You're welcome! On Thu, Jun 8, 2023 at 9:59 PM Steven Booke < steviebeee55@gmail.com > wrote: > I am able to assign the ticket to myself now. Thank you Josep! > > On Thu, Jun 8, 2023 at 12:52 PM Josep Prat <josep.prat@aiven.io.invalid> > wrote: > > > Try again now please. I think you should be able to do it now (I granted > > your username to the contributor role). Recently the ASF changed the > > process on Jira creation and I wasn't sure that this step was still > needed. > > > > Best, > > > > On Thu, Jun 8, 2023 at 9:38 PM Steven Booke < steviebeee55@gmail.com > > > wrote: > > > > > Yes, that is the ticket I want to assign myself to and am still unable > to > > > do so. Could you assign it to me please? And could you help me in > getting > > > the correct permissions so that in the future I can assign myself to > > > ti...

Re: Request Contributor's Permissions

I am able to assign the ticket to myself now. Thank you Josep! On Thu, Jun 8, 2023 at 12:52 PM Josep Prat <josep.prat@aiven.io.invalid> wrote: > Try again now please. I think you should be able to do it now (I granted > your username to the contributor role). Recently the ASF changed the > process on Jira creation and I wasn't sure that this step was still needed. > > Best, > > On Thu, Jun 8, 2023 at 9:38 PM Steven Booke < steviebeee55@gmail.com > > wrote: > > > Yes, that is the ticket I want to assign myself to and am still unable to > > do so. Could you assign it to me please? And could you help me in getting > > the correct permissions so that in the future I can assign myself to > > tickets? > > > > On Thu, Jun 8, 2023 at 12:22 PM Josep Prat <josep.prat@aiven.io.invalid> > > wrote: > > > > > Hi Steven, > > > > > > I think you should b...

Re: Request Contributor's Permissions

Try again now please. I think you should be able to do it now (I granted your username to the contributor role). Recently the ASF changed the process on Jira creation and I wasn't sure that this step was still needed. Best, On Thu, Jun 8, 2023 at 9:38 PM Steven Booke < steviebeee55@gmail.com > wrote: > Yes, that is the ticket I want to assign myself to and am still unable to > do so. Could you assign it to me please? And could you help me in getting > the correct permissions so that in the future I can assign myself to > tickets? > > On Thu, Jun 8, 2023 at 12:22 PM Josep Prat <josep.prat@aiven.io.invalid> > wrote: > > > Hi Steven, > > > > I think you should be able to assign yourself to issues. Is this the one > > you want to assign to yourself ( > > https://issues.apache.org/jira/browse/KAFKA-14995 )? If you can't, I'll > > assign it to you. > > > > Best, > ...

Re: Request Contributor's Permissions

Yes, that is the ticket I want to assign myself to and am still unable to do so. Could you assign it to me please? And could you help me in getting the correct permissions so that in the future I can assign myself to tickets? On Thu, Jun 8, 2023 at 12:22 PM Josep Prat <josep.prat@aiven.io.invalid> wrote: > Hi Steven, > > I think you should be able to assign yourself to issues. Is this the one > you want to assign to yourself ( > https://issues.apache.org/jira/browse/KAFKA-14995 )? If you can't, I'll > assign it to you. > > Best, > > On Thu, Jun 8, 2023 at 9:14 PM Steven Booke < steviebeee55@gmail.com > > wrote: > > > Hi Josep, > > > > What I mean is I need Jira permissions to assign myself to a ticket. I > have > > already successfully created a Jira account using the ASF self serve > portal > > but am unable to assign myself to a ticket. For reference my Jira > u...

Re: Request Contributor's Permissions

Hi Steven, I think you should be able to assign yourself to issues. Is this the one you want to assign to yourself ( https://issues.apache.org/jira/browse/KAFKA-14995 )? If you can't, I'll assign it to you. Best, On Thu, Jun 8, 2023 at 9:14 PM Steven Booke < steviebeee55@gmail.com > wrote: > Hi Josep, > > What I mean is I need Jira permissions to assign myself to a ticket. I have > already successfully created a Jira account using the ASF self serve portal > but am unable to assign myself to a ticket. For reference my Jira username > is spbooke. > > Thank You > > Steven Booke > > On Thu, Jun 8, 2023 at 11:49 AM Josep Prat <josep.prat@aiven.io.invalid> > wrote: > > > Hi Steven, > > > > If what you mean is that you want to send a PR to Kafka, you don't need > any > > special permissions. You can fork the Apache Kafka repository, do your > > changes, and cre...

Re: Request Contributor's Permissions

Hi Josep, What I mean is I need Jira permissions to assign myself to a ticket. I have already successfully created a Jira account using the ASF self serve portal but am unable to assign myself to a ticket. For reference my Jira username is spbooke. Thank You Steven Booke On Thu, Jun 8, 2023 at 11:49 AM Josep Prat <josep.prat@aiven.io.invalid> wrote: > Hi Steven, > > If what you mean is that you want to send a PR to Kafka, you don't need any > special permissions. You can fork the Apache Kafka repository, do your > changes, and create a PR. > > Let me know if this is what you need. > > Best, > > On Thu, Jun 8, 2023 at 7:57 PM Steven Booke < steviebeee55@gmail.com > > wrote: > > > To whom this may concern, > > > > I am requesting contributor's permissions so that I may make my first > > contribution to the apache/kafka repository. > > > > -- > > Regar...