
Posts

Showing posts from June, 2025

Re: Consumer not receiving messages when subscribing to a topic but can receive message when assigning a partition

Hi Ranganath, If messages are only received when a specific partition is assigned, but not when subscribing via a consumer group, the likely causes are: --> The consumer config has enable.auto.commit=false, but no manual offset commits are being made (commitSync() is missing). As a result, Kafka thinks there are no new messages to consume for the group. --> Also, if offsets were already committed earlier, --from-beginning has no effect unless the offsets are reset. Recommended fixes: 1. Call kafkaConsumer.commitSync() after polling records in the Java code. 2. Temporarily set enable.auto.commit=true to allow automatic commits. 3. For the CLI, reset the group offset using: kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group console --topic input --reset-offsets --to-earliest --execute Regards, Sisindri M. On Thu, Jun 26, 2025 at 1:03 AM Samudrala, Ranganath [USA] <Samudrala_Ranganath@bah.com.invalid> wro...
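A minimal sketch of fix 1 in Java, using the topic `input` and group `console` from the CLI command in this thread; this is an illustrative example, not the original poster's code, and it assumes a broker at localhost:9092.

```java
// Sketch only: poll-then-commit loop for a consumer configured with
// enable.auto.commit=false, as in this thread.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommittingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "console");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("input"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // Without this call, the group's committed offsets never advance.
                consumer.commitSync();
            }
        }
    }
}
```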

Re: Kafka4 commons-beanutils:1.9.4

Hi Sachin, Please check KAFKA-19359 <https://issues.apache.org/jira/browse/KAFKA-19359> for more info. Thanks. Luke On Thu, Jun 26, 2025 at 5:44 PM Sachin Jangle <sachin.jangle@oracle.com.invalid> wrote: > Hi, > > CVE-2025-48734 has been identified in the third-party library > commons-beanutils version 1.9.4. > Requesting confirmation on the following: > > * Is a fix available in a later version of Kafka 4? > > * If not, is there any recommended workaround or mitigation for the > current version? > Thanks, > Sachin Jangle >

Kafka4 commons-beanutils:1.9.4

Hi, CVE-2025-48734 has been identified in the third-party library commons-beanutils version 1.9.4. Requesting confirmation on the following: * Is a fix available in a later version of Kafka 4? * If not, is there any recommended workaround or mitigation for the current version? Thanks, Sachin Jangle

Consumer not receiving messages when subscribing to a topic but can receive message when assigning a partition

Hello, I have been struggling to receive messages when I subscribe to a topic or when I use a consumer group. However, when I assign a partition I am able to receive messages. What am I doing wrong?
=======================
Consumer config:
=======================
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
group.id=console
socket.connection.setup.timeout.max.ms=5000
retry.backoff.max.ms=10000
max.poll.records=20
reconnect.backoff.max.ms=10000
socket.connection.setup.timeout.ms=2000
request.timeout.ms=5000
reconnect.backoff.ms=2000
read_uncommitted=read_committed
bootstrap.servers=localhost:9092
retry.backoff.ms=2000
enable.auto.commit=false
allow.auto.create.topics=true
fetch.max.wait.ms=5000
connections.max.idle.ms=600000
session.timeout.ms=1800000
max.poll.interval.ms=2000
auto.offset.reset=earliest
default.api.timeout.ms=5000
==========...

Re: Submission for "Powered By Apache Kafka" page

Hi, You can open a pull request against the https://github.com/apache/kafka-site repository For an example, see https://github.com/apache/kafka-site/pull/662 Thanks, Mickael On Sat, Jun 21, 2025 at 12:28 AM Maria Angelella < maria.angelella@dattell.com > wrote: > Hi Apache Kafka team, > > I'm reaching out on behalf of Dattell to request inclusion on the "Powered > By Apache Kafka" page. Below is a suggested description for consideration. > > *Description:* > > Dattell is a data architecture company that uses Apache Kafka to power the > core of our own platform operations and provides fully managed Kafka > support for organizations worldwide. Kafka plays a central role in the > event-driven systems we design, helping teams build resilient, > high-throughput architectures across cloud and on-prem deployments. > > *Logo:* Attached. > > *Site link:* https://dattell.com > > Please ...

How to retrieve consumer group offset committed timestamp?

Hello Kafka community members, I am reaching out to ask if there is a straightforward way to retrieve the timestamp of when a consumer group commits an offset for a topic partition. From what I understand, it is possible to obtain this information by scanning or continuously reading from the __consumer_offsets internal Kafka topic and storing the data externally. However, I am wondering if there might be a better or more recommended approach that I may have overlooked. Use Cases: There are two main use cases I am considering: 1. Monitoring. As a Kafka cluster maintainer, if a developer notices that their consumers unintentionally stopped consuming records from a Kafka topic, we can quickly use the last committed time to roughly estimate when the issue may have started occurring. 2. Finer control over consumer group offset retention. Currently, we manage consumer group offset retention using the offsets.retention.minutes confi...
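As a rough sketch of the "read __consumer_offsets" approach described above (the formatter class name has moved between Kafka versions; the one below is the 2.x-era name, so verify against the classes your broker ships):

```shell
# Sketch: continuously read commit records (which carry a commit timestamp)
# from the internal offsets topic. Internal topics are hidden from consumers
# by default, hence exclude.internal.topics=false.
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic __consumer_offsets \
  --consumer-property exclude.internal.topics=false \
  --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"
```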

Re: Kafka 3.x end of support timeline

Hi Sergey, There is no plan for a 3.10 release. 3.9.x is the last release series in the 3.x major version family. Regarding EOL, Apache Kafka supports the last 3 minor/major releases, which at this moment means 3.8.x, 3.9.x, and 4.0.x. Once 4.1.x is released, we will drop support for 3.8.x. And once the 4.2.0 release happens (probably in around 4 months), 3.9.x will reach EOL. If many community users need longer support for the 3.9.x series, please let us know. Best, On Wed, Jun 25, 2025 at 11:09 AM Sergey Ivanov <delta45666@gmail.com> wrote: > Hello, > > Could you please help us clarify the support timeline for Kafka 3.x > releases? > > According to the following page: > > https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy > , > it seems that the community generally supports the latest three minor > releases. However, the transiti...

Re: Kafka - Failed to clean up log for __consumer_offsets-45 in dir

Hola, simply put: Windows is not supported. Good luck. On Wed, Jun 25, 2025, 11:10 KrishnaSai Dandu <krishnasai.dandu@bluepal.com> wrote: > Hi, good afternoon! > > We are using a Spring Boot Java application. Below are the Kafka > dependencies: > <properties> <java.version>17</java.version> <!-- Use > consistent Kafka version --> <kafka.version>2.8.0</kafka.version> > <zookeeper.version>3.6.3</zookeeper.version> > <scala.version>2.13</scala.version> </properties> <!-- Kafka > Dependencies - All same version to avoid conflicts --> <dependency> > <groupId>org.apache.kafka</groupId> > <artifactId>kafka_${scala.version}</artifactId> > <version>${kafka.version}</version> <exclusions> > <exclusion> <groupId>org.slf4j</...

Kafka 3.x end of support timeline

Hello, Could you please help us clarify the support timeline for Kafka 3.x releases? According to the following page: https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy , it seems that the community generally supports the latest three minor releases. However, the transition from 3.x to 4.x appears to be a special case. Could you please share more details on the expected end-of-life (EOL) timeline for Kafka 3.x versions? Are you going to release new minor/feature versions for 3.x release (3.10.x etc)? Even an approximate window would be very helpful for our migration planning. Thank you in advance! -- Best Regards, Sergey Ivanov

Kafka - Failed to clean up log for __consumer_offsets-45 in dir

Hi, good afternoon! We are using a Spring Boot Java application. Below are the Kafka dependencies:
<properties>
    <java.version>17</java.version>
    <!-- Use consistent Kafka version -->
    <kafka.version>2.8.0</kafka.version>
    <zookeeper.version>3.6.3</zookeeper.version>
    <scala.version>2.13</scala.version>
</properties>
<!-- Kafka Dependencies - All same version to avoid conflicts -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_${scala.version}</artifactId>
    <version>${kafka.version}</version>
    <exclusions>...

Submission for "Powered By Apache Kafka" page

Hi Apache Kafka team, I'm reaching out on behalf of Dattell to request inclusion on the "Powered By Apache Kafka" page.  Below is a suggested description for consideration. Description: Dattell is a data architecture company that uses Apache Kafka to power the core of our own platform operations and provides fully managed Kafka support for organizations worldwide. Kafka plays a central role in the event-driven systems we design, helping teams build resilient, high-throughput architectures across cloud and on-prem deployments. Logo: Attached. Site link:   https://dattell.com Please let us know if there's any additional information or process needed to complete this request. We'd be honored to be included. Best regards, Maria Hatfield Director https://dattell.com

Re: Add user to contributors list

Hi, no, for that you need to become a committer. Information on becoming a committer can be found at the following link: https://kafka.apache.org/contributing tl;dr: you need to contribute to Apache Kafka, help the community, and demonstrate good knowledge of the project. If this sounds appealing to you, you are welcome to pursue committership. Best, Bruno On 18.06.25 18:26, Mahesh Sambharam wrote: > With this, do I get an @apache.org mail for my name or username? > > > On Wed, 18 Jun 2025 at 9:00 PM, Matthias J. Sax <mjsax@apache.org> wrote: > >> You should be all set. >> >> On 6/16/25 8:14 PM, Mahesh Sambharam wrote: >>> My username is maheshsambaram. >>> >>> Thanks, >>> Mahesh >>> >>> On Tue, 17 Jun 2025 at 4:26 AM, Matthias J. Sax <mjsax@apache.org> >> wrote: >>> >>>> If you refer to Jira, please let us know your use...

Re: Upcoming Kafka conference

Hi everyone. I realized that the previous email was sent to the users email list rather than to Bill. I am sorry about the spam and confusion that caused. On Tue, Jun 17, 2025 at 6:17 PM Tirtha Chatterjee < tirtha.p.chatterjee@gmail.com > wrote: > Hi Bill > > I do not have a submission for a talk, but I just wanted to check if you > are looking for someone to help with the evaluation or judgement of > submissions. If so, I would be happy to volunteer my time and effort. > > Some context about me - I am Tirtha Chatterjee, a senior engineer at > Apple on the data caching team. Before this, I have worked on the Kafka > team at LinkedIn for around 3 years, and before that, I was one of the > founding engineers on the Amazon MSK team, having built much of the > infrastructure monitoring and automated health management there. > > I have worked with Apache Kafka for around 8 years, and actively > collaborated on the tie...

Re: Upcoming Kafka conference

Hi Bill, I do not have a submission for a talk, but I just wanted to check if you are looking for someone to help with the evaluation or judgement of submissions. If so, I would be happy to volunteer my time and effort. Some context about me - I am Tirtha Chatterjee, a senior engineer at Apple on the data caching team. Before this, I worked on the Kafka team at LinkedIn for around 3 years, and before that, I was one of the founding engineers on the Amazon MSK team, having built much of the infrastructure monitoring and automated health management there. I have worked with Apache Kafka for around 8 years, and actively collaborated on the tiered storage KIP and code changes, back-porting them to Kafka 3.0 for use at LinkedIn. I committed bug-fix patches that eventually made it into upstream Kafka. I have presented a talk at the Flink Forward conference in 2020 about how we do health monitoring at scale for Kafka clusters at MSK. Here is the link to it - https:...

Re: KIP-429 vs. KIP 848?

Sorry, correction: KIP-429 appeared in Kafka 2.4.0, and KIP-848 appeared in 3.7. Paul From: Brebner, Paul <Paul.Brebner@netapp.com.INVALID> Date: Tuesday, 17 June 2025 at 12:32 pm To: Kafka Users <users@kafka.apache.org>, dev <dev@kafka.apache.org> Subject: KIP-429 vs. KIP 848? EXTERNAL EMAIL - USE CAUTION when clicking links or attachments Hi all, time for me to ask a silly question please! I'm puzzled about the transition from KIP-429 https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol ...

KIP-429 vs. KIP 848?

Hi all, time for me to ask a silly question please! I'm puzzled about the transition from KIP-429 https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol to KIP-848 https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol KIP-429 appeared in Kafka 4.0, and KIP-848 appeared in 3.7 (along with the first mention of the group.protocol consumer configuration, with values classic or consumer – slightly confusing names – I assume classic is the pre-2.4 rebalancing protocol, and consumer is the new one, which is actually managed by the broker 😉). But can you still use KIP-429 in 4.0? How? I'd like to do some performance tests between the two incremental rebalancing KIPs in 4.0. Regards, Paul

Re: Kafka Connect on Kubernetes: Statefulset vs Deployment

Hi Prateek, I'm not sure how you are testing this. A Kafka Connect cluster in distributed mode uses the group management protocol to coordinate (distribute tasks across workers). This is set to "sessioned" by default, which aims to minimize task movements during rebalancing. On Mon, Jun 16, 2025 at 6:04 AM Prateek Kohli <prateek.kohli@ericsson.com.invalid> wrote: > > Thanks a lot @Vignesh & @Raphael Mazelier for your detailed replies. > > Even I thought the same, but I read this and now I'm a bit confused. > > "In a Kafka Connect cluster, each worker node is identified by its advertised address. This identity is crucial because connectors and tasks are assigned to specific workers based on it. > > When you use a Kubernetes Deployment, rolling updates result in Pods being recreated with new IPs and hostnames. Kafka Connect interprets these as entirely new worker nodes joining the cluster, while the old ones are se...

RE: Kafka Connect on Kubernetes: Statefulset vs Deployment

Thanks a lot @Vignesh & @Raphael Mazelier for your detailed replies. Even I thought the same, but I read this and now I'm a bit confused. "In a Kafka Connect cluster, each worker node is identified by its advertised address. This identity is crucial because connectors and tasks are assigned to specific workers based on it. When you use a Kubernetes Deployment, rolling updates result in Pods being recreated with new IPs and hostnames. Kafka Connect interprets these as entirely new worker nodes joining the cluster, while the old ones are seen as having left. As a result, Kafka Connect takes some time (typically around 5 minutes) to recognize that the old nodes have departed and to reassign their tasks to the remaining active workers. During this delay, some tasks may remain inactive, leading to reduced service availability." Strimzi also switched to using StrimziPodSet some time ago because of this issue. https://github.com/strimzi/strimzi-kafka-opera...

Re: Kafka Connect on Kubernetes: Statefulset vs Deployment

Kafka Connect is a stateless component by design. It relies on external Kafka topics to persist its state, including connector configurations, offsets, and status updates. In a distributed Kafka Connect cluster, this state is managed through the following configurable topics: - config.storage.topic – stores connector configurations - offset.storage.topic – stores source connector offsets - status.storage.topic – stores the status of connectors and tasks Because Kafka Connect does not maintain any state locally, it is not dependent on a specific IP address or hostname. As a result, it is best to deploy Kafka Connect using a *Kubernetes Deployment* rather than a *StatefulSet*, since Deployments are better suited for stateless applications and provide more flexibility with scaling and rolling updates. Additionally, it is common practice to expose the Kafka Connect REST API via an *Ingress*, allowing external systems to submit and manage connec...
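For illustration, the three storage topics above map directly onto the distributed worker configuration; a minimal fragment might look like the following (topic names are examples, not required values):

```properties
# connect-distributed.properties (illustrative fragment)
bootstrap.servers=localhost:9092
group.id=connect-cluster
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
# Internal topics should be replicated for durability
config.storage.replication.factor=3
offset.storage.replication.factor=3
status.storage.replication.factor=3
```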

Re: Kafka Connect on Kubernetes: Statefulset vs Deployment

I created the Docker+Kubernetes setup for our Kafka Connect at my current job. I use a standard Deployment. Kafka Connect doesn't care about hostname or IP. The sole trick is to inject the connector configuration at runtime (if you want). -- Raph On 14/06/2025 2:12 pm, Prateek Kohli wrote: > Hi All, > > I'm building a custom Docker image for Kafka Connect and planning to run it > on Kubernetes. I'm a bit stuck on whether I should use a Deployment or a > StatefulSet. > > From what I understand, the main difference that could affect Kafka Connect > is the hostname/IP behaviour. With a Deployment, pod IPs and hostnames can > change after restarts. With a StatefulSet, each pod gets a stable hostname > (like connect-0, connect-1, etc.) > > My question is: does it really matter for Kafka Connect if the pod > IPs/hostnames change, considering it's a stateless application? > > Thanks

Kafka Connect on Kubernetes: Statefulset vs Deployment

Hi All, I'm building a custom Docker image for Kafka Connect and planning to run it on Kubernetes. I'm a bit stuck on whether I should use a Deployment or a StatefulSet. From what I understand, the main difference that could affect Kafka Connect is the hostname/IP behaviour. With a Deployment, pod IPs and hostnames can change after restarts. With a StatefulSet, each pod gets a stable hostname (like connect-0, connect-1, etc.). My question is: does it really matter for Kafka Connect (task reassignment) if the pod IPs/hostnames (which also serve as the worker ID) change on restarts, considering it's a stateless application? Thanks


How to use aclpublisher in an external application to get acl info for authorize

I am using an external application to authorize requests before they are sent to the broker. Before KRaft, I initialized AclAuthorizer, called configure(), and then called authorize() (the ZooKeeper watcher inside AclAuthorizer took care of loading and updating ACLs). With KRaft, how do I use this feature? How can I load the ACLs before calling authorize(), and how do I make sure changed ACLs are reflected? Regards, Nanda

Re: KafkaProducer partitionsFor v/s KafkaAdminClient describeTopics

Hi Anand, Typically in Kafka, it is useful for consumers to know the number of partitions (as the number of consumers must be <= the number of partitions). So one way for consumers to find partitions is the KafkaConsumer class's partitionsFor(topic) method: https://kafka.apache.org/40/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#partitionsFor(java.lang.String) Regards, Paul Brebner From: Subra I <iamsubra100@gmail.com> Date: Wednesday, 11 June 2025 at 12:42 am To: users@kafka.apache.org <users@kafka.apache.org>, dev@kafka.apache.org <dev@kafka.apache.org> Subject: KafkaProducer partitionsFor v/s KafkaAdminClient describeTopics EXTERNAL EMAIL - USE CAUTION when clicking links or attachments Hi All, I need to know the number of partitions for a topic before producing/consuming. Some users may set the number of partitions for a given topic but some users may just set the number of partitions in server.properties...

Re: How to execute the application.

Hey Vinothini, welcome to the Kafka community! 👋 Great to hear you're interested in contributing! Here's how you can get started: a. Getting familiar with Kafka: to get hands-on quickly and understand how Kafka works, we recommend starting with the Kafka Quickstart Guide [1]. It walks you through setting up a Kafka broker locally and producing and consuming messages — a great way to build intuition. b. Setting up Kafka for local development: if you're planning to make changes to the Kafka codebase or test patches locally: - Clone the repo [2] - Follow the Development README [3] to build the project and run a broker. This will help you: - Build Kafka from source - Run unit and integration tests - Start a Kafka broker locally with custom changes c. Exploring good first issues: you can browse beginner-friendly issues using the newbie label on Jira [4]. We're glad to have you on board — happy learning! [1]: https://kaf...

Re: Kraft mode - Authz errors while doing alterconfig via admin client

Hi Nanda, It's great you figured it out. "KIP-1157 < https://cwiki.apache.org/confluence/display/KAFKA/KIP-1157%3A+Enforce+KafkaPrincipalSerde+Implementation+for+KafkaPrincipalBuilder >: Enforce KafkaPrincipalSerde Implementation for KafkaPrincipalBuilder" is proposed to fix this issue. Thank you. Luke On Wed, Jun 11, 2025 at 12:39 AM Nanda Naga <nandanaga@microsoft.com.invalid> wrote: > I figured this out issue - it is due to missing > serialization/deserialization logic for the custom principal > > Regards, > Nanda > > -----Original Message----- > From: Nanda Naga <nandanaga@microsoft.com.INVALID> > Sent: Friday, June 6, 2025 1:19 PM > To: users@kafka.apache.org > Subject: [EXTERNAL] Kraft mode - Authz errors while doing alterconfig via > admin client > > [You don't often get email from nandanaga@microsoft.com.invalid. Learn > why this is important at https://aka.ms/L...

Upcoming Kafka conference

Hi All, Current is a conference dedicated to the data streaming, Apache Kafka®, Apache Flink®, and Apache Iceberg™ communities. Now is the time to turn your ideas and experiences into a talk to share with the community. The Current 2025 CFP <https://sessionize.com/current-2025-new-orleans/> is open *until June 15*.

RE: Kraft mode - Authz errors while doing alterconfig via admin client

I figured out this issue - it is due to missing serialization/deserialization logic for the custom principal. Regards, Nanda -----Original Message----- From: Nanda Naga <nandanaga@microsoft.com.INVALID> Sent: Friday, June 6, 2025 1:19 PM To: users@kafka.apache.org Subject: [EXTERNAL] Kraft mode - Authz errors while doing alterconfig via admin client [You don't often get email from nandanaga@microsoft.com.invalid. Learn why this is important at https://aka.ms/LearnAboutSenderIdentification ] In the broker and controller server properties, I have set up the custom principal builder class name and custom ACL authorizer (extends StandardAuthorizer) class name properly. Normal produce/consume on topics that have ACLs works fine using the custom principal and custom ACL authorizer, and it works for inter-controller auth calls. But when requests are sent via the admin client (using command-prompt calls) or via code that uses the admin client, ...

KafkaProducer partitionsFor v/s KafkaAdminClient describeTopics

Hi All, I need to know the number of partitions for a topic before producing/consuming. Some users may set the number of partitions for a given topic, but some users may just set the number of partitions in server.properties. It should work for both cases. Which method is better for production-grade software: 1. KafkaProducer partitionsFor 2. KafkaAdminClient describeTopics: you need to extract the topic you are interested in and get the partition info from the metadata. A quick Google search tells me that the describeTopics approach is more comprehensive. Also, will the use of KafkaProducer partitionsFor work for clustered environments? Thanks, Anand
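For reference, a hedged sketch of option 2 using the Admin client (method names per the Kafka 3.1+ Javadoc; `my-topic` and localhost:9092 are placeholders):

```java
// Sketch: counting a topic's partitions with the Admin client.
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;

public class PartitionCount {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // describeTopics() fetches fresh metadata from the cluster;
            // allTopicNames() is available in Kafka 3.1+ clients.
            TopicDescription desc = admin.describeTopics(List.of("my-topic"))
                    .allTopicNames().get()   // Map<String, TopicDescription>
                    .get("my-topic");
            System.out.println("partitions: " + desc.partitions().size());
        }
    }
}
```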

CVE-2025-27819: Apache Kafka: Possible RCE/Denial of service attack via SASL JAAS JndiLoginModule configuration

Severity: important Affected versions: - Apache Kafka 2.0.0 through 3.3.2 Description: In CVE-2023-25194, we announced the RCE/denial-of-service attack via SASL JAAS JndiLoginModule configuration in the Kafka Connect API. But the Kafka Connect API is not the only component vulnerable to this attack; the Apache Kafka brokers also have this vulnerability. To exploit it, the attacker needs to be able to connect to the Kafka cluster and have the AlterConfigs permission on the cluster resource. Since Apache Kafka 3.4.0, we have added a system property ("-Dorg.apache.kafka.disallowed.login.modules") to disable problematic login module usage in SASL JAAS configuration. Also, by default "com.sun.security.auth.module.JndiLoginModule" is disabled in Apache Kafka 3.4.0, and "com.sun.security.auth.module.JndiLoginModule,com.sun.security.auth.module.LdapLoginModule" is disabled by default in Apache Kafka 3.9.1/4.0.0. Credit: Ziyang Li (finder) ...
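As a configuration sketch, the system property mentioned above can be passed to the broker JVM via KAFKA_OPTS; the module list shown matches the 3.9.1/4.0.0 defaults described in the advisory:

```shell
# Explicitly disallow the JNDI and LDAP login modules in SASL JAAS configs
# (requires Apache Kafka 3.4.0 or later, where this property was added).
export KAFKA_OPTS="-Dorg.apache.kafka.disallowed.login.modules=com.sun.security.auth.module.JndiLoginModule,com.sun.security.auth.module.LdapLoginModule"
bin/kafka-server-start.sh config/server.properties
```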

CVE-2025-27818: Apache Kafka: Possible RCE attack via SASL JAAS LdapLoginModule configuration

Severity: important Affected versions: - Apache Kafka 2.3.0 through 3.9.0 Description: A possible security vulnerability has been identified in Apache Kafka. This requires AlterConfigs access to the cluster resource, or access to a Kafka Connect worker and the ability to create/modify connectors on it with an arbitrary Kafka client SASL JAAS config and a SASL-based security protocol, which has been possible on Kafka clusters since Apache Kafka 2.0.0 (Kafka Connect 2.3.0). When configuring the broker via config file or the AlterConfigs command, or a connector via the Kafka Connect REST API, an authenticated operator can set the `sasl.jaas.config` property for any of the connector's Kafka clients to "com.sun.security.auth.module.LdapLoginModule", which can be done via the `producer.override.sasl.jaas.config`, `consumer.override.sasl.jaas.config`, or `admin.override.sasl.jaas.config` properties. This will allow the server to connect to the attacker's ...

CVE-2025-27817: Apache Kafka Client: Arbitrary file read and SSRF vulnerability

Severity: important Affected versions: - Apache Kafka Client 3.1.0 through 3.9.0 Description: A possible arbitrary file read and SSRF vulnerability has been identified in the Apache Kafka Client. Apache Kafka Clients accept configuration data for setting up the SASL/OAUTHBEARER connection with the brokers, including "sasl.oauthbearer.token.endpoint.url" and "sasl.oauthbearer.jwks.endpoint.url". Apache Kafka allows clients to read an arbitrary file and return the content in the error log, or to send requests to an unintended location. In applications where Apache Kafka Client configurations can be specified by an untrusted party, attackers may use the "sasl.oauthbearer.token.endpoint.url" and "sasl.oauthbearer.jwks.endpoint.url" configurations to read arbitrary contents of the disk and environment variables or make requests to an unintended location. In particular, this flaw may be used in Apache Kafka Connect to escalate from REST API access...

Kraft mode - Authz errors while doing alterconfig via admin client

In the broker and controller server properties, I have set up the custom principal builder class name and custom ACL authorizer (extends StandardAuthorizer) class name properly. Normal produce/consume on topics that have ACLs works fine using the custom principal and custom ACL authorizer, and it works for inter-controller auth calls. But when requests are sent via the admin client (using command-prompt calls or code that uses the admin client), I see the default principal (KafkaPrincipal) being passed from broker to controller instead of my custom principal. Anything I missed here? If you need any more details, I can share. Regards, Nanda

Unsubscription Request

Hi Team, Please unsubscribe - srinivas.v59@wipro.com Thanks