
Posts

Kafka Streams 4.1.2 offsets not being committed after upgrade from Spring Boot 3.5.8 to 4.0.5

Hello Kafka users, I am looking for help with a Kafka Streams offset commit issue that started after upgrading. We have a Spring Cloud Stream application using the Kafka Streams binder.

Old working stack:
* Spring Boot 3.5.9
* Spring Cloud 2025.0.0
* spring-cloud-stream-binder-kafka-streams 4.3.0
* kafka-streams 3.9.1

New stack with the issue:
* Spring Boot 4.0.5
* Spring Cloud 2025.1.1
* spring-cloud-stream-binder-kafka-streams 5.0.1
* kafka-streams 4.1.2

Broker version:
* Kafka brokers 3.9.0

Symptoms:
* The Kafka Streams application processes new records normally.
* However, committed offsets for the Streams application do not appear to advance.
* In Kafka UI, consumer group lag stays high for all partitions, while application-level metrics show near real-time processing of new events.
* On restart, the application starts reading from the beginning of the topic again.
* When I set auto.offset.reset=no...
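One way to confirm whether commits are actually reaching the broker (as opposed to a Kafka UI display issue) is to describe the group directly with the stock tooling. A minimal sketch, assuming a local broker; `my-streams-app` is a placeholder — Kafka Streams uses the `application.id` as the consumer group id, so substitute yours:

```shell
# Describe the Streams app's consumer group and watch the committed offsets.
# "my-streams-app" is a placeholder for your application.id.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group my-streams-app
# If CURRENT-OFFSET never advances while LOG-END-OFFSET grows across runs,
# commits are genuinely not reaching the broker (not just a UI artifact).
```

Note that Kafka Streams commits on the `commit.interval.ms` schedule rather than per record, so you would expect CURRENT-OFFSET to move in steps, not continuously.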

Tiered Storage recovery after cluster Deletion

Hi everyone, I wanted to understand if there is a way to recover topic data from Kafka tiered storage after the Kafka cluster is deleted. The local data is gone, but the data in tiered storage still exists. Is there a way to bring up a new Kafka cluster, assign the same tiered storage bucket to it, and have it read the data by resyncing the metadata? Thanks, Vruttant Mankad

Best practice for KRaft controller recovery during disaster

Hi Kafka experts, I am running Kafka on Windows. I have controllers (around 5) and brokers running on separate machines. During disaster recovery (say all controller machines' data is wiped out, or the controller logs folder is corrupt) but the broker machines are intact, what is the best practice to bring the KRaft controllers back with the metadata info? Regards, Nanda
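For the storage re-initialization part, a sketch of what the first step usually looks like, assuming you can still read `meta.properties` from a surviving broker's log directory (paths below are examples, and on Windows the `.bat` variant of the script applies). Note this only recreates an empty metadata log under the original cluster id so brokers will accept the rebuilt quorum; it does not by itself restore the metadata contents, which is exactly the open question here:

```shell
# Recover the original cluster id from a surviving broker's log dir
# (example path -- use the directory from your broker's log.dirs).
grep cluster.id /var/kafka-logs/meta.properties

# Re-initialize storage on each rebuilt controller with THAT cluster id.
# On Windows: bin\windows\kafka-storage.bat
bin/kafka-storage.sh format \
  --cluster-id <cluster-id-from-meta.properties> \
  --config config/controller.properties
```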

Applications for Travel Assistance to Community Over Code Beijing now open

Hello, forwarding a message from the ASF Travel Assistance Committee:

> The Travel Assistance Committee is now accepting applications for Community Over Code Asia 2026, to be held in Beijing, China between the 7th and 9th of August. The application deadline is the 29th of May, so there is plenty of time to apply. If you require a visa, please apply early; do not wait until you know whether you are accepted. The same goes for speakers: apply even though you do not yet know whether you will be accepted, so that your application is in. Places for travel assistance are limited, so those deemed most in need will get higher priority. If you do not need assistance to get to Beijing but you know of others who might be interested, feel free to spread the word!
>
> Good luck to all who apply. For more details, check out https://tac.apache.org/

-Matthias

Message loss during reassignment when producing with acks=1

Hi, while testing Kafka 3.9 (KRaft), we encountered Unclean Leader Elections (ULE) during a partition reassignment. We found an existing issue that describes the problem: https://issues.apache.org/jira/browse/KAFKA-19148 As suggested in the issue, the ULE appears to be a false positive and is fixed in Kafka 4.1 (which we have verified). However, when reproducing the problem using the README attached to KAFKA-19148, we observed message loss (not only reordering) while producing to the cluster with acks=1 (--producer-property acks=1) during the reassignment.

My question: Is this message loss expected behavior when producing with acks=1 during partition reassignment?

Notes:
- The KAFKA-19148 issue has not received any updates for a long time (and is primarily about ULE).
- I understand that acks=1 increases throughput but is not safe if the leader is removed imme...
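For comparison, the durability-oriented producer settings usually recommended when loss during leadership changes is unacceptable can be sketched as a config fragment. The values below are illustrative, not a prescription for this cluster:

```properties
# producer settings -- illustrative durable configuration
acks=all
enable.idempotence=true

# broker/topic side (example value): with replication.factor=3,
# requiring 2 in-sync replicas means an ack implies the write
# survives the loss of the leader.
# min.insync.replicas=2
```

With acks=1, by contrast, the leader acknowledges before any follower has replicated the write, so records acknowledged just before the leader moves can be lost.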

CVE-2026-33558: Apache Kafka, Apache Kafka Clients: Information Exposure Through Network Client Log Output

Severity: moderate

Affected versions:
- Apache Kafka 0.11.0 through 3.9.1
- Apache Kafka 4.0.0
- Apache Kafka Clients (org.apache.kafka:kafka-clients) 0.11.0 through 3.9.1
- Apache Kafka Clients (org.apache.kafka:kafka-clients) 4.0.0

Description: An information exposure vulnerability has been identified in Apache Kafka. The NetworkClient component outputs the entire contents of requests and responses in the logs at the DEBUG log level. By default, the log level is set to INFO. If the DEBUG level is enabled, sensitive information can be exposed via the logged requests and responses. The full list of impacted requests and responses is:
* AlterConfigsRequest
* AlterUserScramCredentialsRequest
* ExpireDelegationTokenRequest
* IncrementalAlterConfigsRequest
* RenewDelegationTokenRequest
* SaslAuthenticateRequest
* createDelegationTokenResponse
* describeDelegationTokenResponse
* SaslAuthenticateRespon...
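One practical way to keep the rest of an application at DEBUG while avoiding this exposure is to pin the NetworkClient logger at INFO. A minimal Log4j 2 sketch, assuming a `log4j2.properties` setup; the logger name matches the class `org.apache.kafka.clients.NetworkClient`:

```properties
# log4j2.properties (example) -- keep NetworkClient at INFO so
# request/response payloads are not logged even when the rest of
# the application runs at DEBUG.
logger.kafkaNetworkClient.name = org.apache.kafka.clients.NetworkClient
logger.kafkaNetworkClient.level = INFO
```

Upgrading past the affected versions remains the proper fix; this only limits the exposure in the meantime.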

CVE-2026-33557: Apache Kafka: Missing JWT token validation in OAUTHBEARER authentication

Severity: important

Affected versions:
- Apache Kafka 4.1.0 through 4.1.1

Description: A possible security vulnerability has been identified in Apache Kafka. By default, the broker property `sasl.oauthbearer.jwt.validator.class` is set to `org.apache.kafka.common.security.oauthbearer.DefaultJwtValidator`, which accepts any JWT token without validating its signature, issuer, or audience. An attacker can generate a JWT token from any issuer with the `preferred_username` set to any user, and the broker will accept it. We advise Kafka users running v4.1.0 or v4.1.1 to set the config `sasl.oauthbearer.jwt.validator.class` to `org.apache.kafka.common.security.oauthbearer.BrokerJwtValidator` explicitly to avoid this vulnerability. As of Kafka v4.1.2 and v4.2.0 and later, the issue is fixed and the JWT token is validated correctly.

Credit: Павел Романов <promanov1994@gmail.com> (finder)

References:
https://kafka.apache.org/cve-list
https://kafka.apac...
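The advisory's workaround for 4.1.0/4.1.1 is a one-line broker config change; as a fragment (the file name is the conventional one, adjust to your deployment):

```properties
# server.properties -- explicit workaround for Kafka 4.1.0 / 4.1.1
# (per the advisory above; fixed in 4.1.2 and 4.2.0+, where this
# validator behavior is the default)
sasl.oauthbearer.jwt.validator.class=org.apache.kafka.common.security.oauthbearer.BrokerJwtValidator
```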