Posts

Re: Kafka Streams 4.1.2 offsets not being committed after upgrade from Spring Boot 3.5.8 to 4.0.5

Hi Bill,

Thanks for the quick reply. Here is some of the output I get with debug logging and kafka-consumer-groups.sh.

Command:

./kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group 'MyProcessingApplication'

Debug logging enabled with:

logging.level.org.apache.kafka.streams.processor.internals.StreamTask: DEBUG

Starting with a clean topic with no events. Output:

GROUP                    TOPIC      PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                                                                                        HOST     CLIENT-ID
MyProcessingApplication  myTopic.0  4          -               0               -    MyProcessingApplication-509fdb18-f2ec-43f9-a214-50c3d309fde5-StreamThread-1-consumer-6a48934d-d93f-41d5-a89b-4faecaaba720  /hostIp  MyProcessingApplication-509fdb18-f2ec-43f9-a214-50c3d309fde5-StreamThread-1-consumer
MyProcessingApplication  myTo...
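For reference, the StreamTask debug logging mentioned above can be enabled through standard Spring Boot logging properties; a minimal application.properties sketch (the broader package entry is an optional addition, not something from the original report):

```properties
# Enable DEBUG logging for Kafka Streams task commit handling
logging.level.org.apache.kafka.streams.processor.internals.StreamTask=DEBUG
# Optionally broaden to the whole internals package while diagnosing
logging.level.org.apache.kafka.streams.processor.internals=DEBUG
```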

Re: Kafka Streams 4.1.2 offsets not being committed after upgrade from Spring Boot 3.5.8 to 4.0.5

Sorry to hear about the issues. Can you provide log files to help diagnose the problem?

Thanks,
Bill

On Fri, Apr 24, 2026 at 4:53 AM STROUCKEN Yves <yves.stroucken@soprasteria.com> wrote:
> Hello Kafka users,
> I am looking for help with a Kafka Streams offset commit issue that
> started after upgrading.
> We have a Spring Cloud Stream application using the Kafka Streams binder.
> Old working stack:
>
> * Spring Boot 3.5.9
> * Spring Cloud 2025.0.0
> * spring-cloud-stream-binder-kafka-streams 4.3.0
> * kafka-streams 3.9.1
> New stack with the issue:
>
> * Spring Boot 4.0.5
> * Spring Cloud 2025.1.1
> * spring-cloud-stream-binder-kafka-streams 5.0.1
> * kafka-streams 4.1.2
> Broker version:
>
> * Kafka brokers 3.9.0
> Symptoms:
>
> * The Kafka Streams application processes new records normally.
> * However, committed offsets for the S...

Kafka Streams 4.1.2 offsets not being committed after upgrade from Spring Boot 3.5.8 to 4.0.5

Hello Kafka users,

I am looking for help with a Kafka Streams offset commit issue that started after upgrading. We have a Spring Cloud Stream application using the Kafka Streams binder.

Old working stack:

* Spring Boot 3.5.9
* Spring Cloud 2025.0.0
* spring-cloud-stream-binder-kafka-streams 4.3.0
* kafka-streams 3.9.1

New stack with the issue:

* Spring Boot 4.0.5
* Spring Cloud 2025.1.1
* spring-cloud-stream-binder-kafka-streams 5.0.1
* kafka-streams 4.1.2

Broker version:

* Kafka brokers 3.9.0

Symptoms:

* The Kafka Streams application processes new records normally.
* However, committed offsets for the Streams application do not appear to advance.
* In Kafka UI, consumer group lag stays high for all partitions, while application-level metrics show near real-time processing of new events.
* On restart, the application starts reading from the beginning of the topic again.
* When I set auto.offset.reset=no...
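One angle worth checking in a setup like this is the commit cadence: at-least-once Kafka Streams commits offsets on commit.interval.ms (30 seconds by default). A hypothetical Spring Cloud Stream YAML sketch of making that explicit through the binder (property placement assumed from the binder's `configuration` pass-through; the application id is taken from the group name above):

```yaml
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              # Kafka Streams application id, also used as the consumer group
              application.id: MyProcessingApplication
              # Commit offsets every 5 s instead of the 30 s default
              commit.interval.ms: 5000
```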

Tiered Storage recovery after cluster Deletion

Hi everyone,

I wanted to understand whether there is a way to recover topic data from Kafka tiered storage after the Kafka cluster has been deleted. The local data is gone, but the data in tiered storage still exists. Is there a way to bring up a new Kafka cluster, assign the same tiered-storage bucket to it, and have it read the data by resyncing the metadata?

Thanks,
Vruttant Mankad
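For context, tiered storage is attached to a cluster via broker configuration roughly as sketched below; the remote storage manager class name depends entirely on the plugin in use and is a placeholder here, while the metadata manager shown is Kafka's topic-based default (which stores remote-log metadata in an internal topic, i.e. in the cluster that was deleted):

```properties
# server.properties: enable the tiered storage subsystem (KIP-405)
remote.log.storage.system.enable=true
# Placeholder: actual class depends on the RemoteStorageManager plugin used
remote.log.storage.manager.class.name=com.example.MyRemoteStorageManager
# Default metadata manager; keeps remote-log metadata in an internal topic
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
```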

Best practice for Kraft controller recovery during disaster

Hi Kafka experts,

I am running Kafka on the Windows platform. I have around 5 controllers and several brokers, each running on separate machines. During disaster recovery (say the data on all controller machines is wiped out, or the controller log directories are corrupt) while the broker machines remain intact, what is the best practice for bringing the KRaft controllers back with the metadata intact?

Regards,
Nanda
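For context, a KRaft controller's identity comes from its static configuration plus a metadata log directory that is initialized with bin/kafka-storage.sh format using the original cluster ID, so both pieces matter for recovery. A sketch of the relevant controller.properties entries (host names, node ids, and the Windows log path are placeholders, not values from the original question):

```properties
process.roles=controller
node.id=1
# All five quorum voters; id@host:port entries are placeholders
controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093,4@controller4:9093,5@controller5:9093
controller.listener.names=CONTROLLER
listeners=CONTROLLER://:9093
# The directory that kafka-storage.sh format initializes and that was lost
log.dirs=C:/kafka/controller-logs
```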

Applications for Travel Assistance to Community Over Code Beijing now open

Hello, forwarding a message from the ASF Travel Assistance Committee:

> The Travel Assistance Committee is now accepting applications for Community Over Code Asia 2026, to be held in Beijing, China between the 7th and 9th of August. The application deadline is the 29th of May, so there is plenty of time to apply. If you require a visa, please apply early; do not wait until you know whether you have been accepted. The same goes for speakers: you should apply even though you do not yet know whether you will be accepted, so that your application is in. Places for travel assistance are limited, so those deemed most in need will get higher priority. If you do not need assistance to get to Beijing but know of others who might be interested, feel free to spread the word!
>
> Good luck to all those who apply. For more details, check out https://tac.apache.org/

-Matthias

Message loss during reassignment when producing with acks=1

Hi,

While testing Kafka 3.9 (KRaft), we encountered unclean leader elections (ULE) during a partition reassignment. We found an existing issue that describes the problem: https://issues.apache.org/jira/browse/KAFKA-19148

As suggested in the issue, the ULE appears to be a false positive and is fixed in Kafka 4.1 (which we have verified). However, when reproducing the problem using the README attached to KAFKA-19148, we observed message loss (not only reordering) while producing to the cluster with acks=1 (--producer-property acks=1) during the reassignment.

My question: is this message loss expected behavior when producing with acks=1 during partition reassignment?

Notes:
- The KAFKA-19148 issue has not received any updates for a long time (and is primarily about ULE).
- I understand that acks=1 increases throughput but is not safe if the leader is removed imme...
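For comparison, a configuration sketch that trades throughput for durability during leader changes: with acks=1 the leader acknowledges before replication, so records acknowledged just before a leader change can be lost, whereas the settings below require replication before acknowledgement (the specific values, such as min.insync.replicas=2, are illustrative):

```properties
# Producer side: require acknowledgement from all in-sync replicas
acks=all
enable.idempotence=true
# Topic/broker side: refuse writes unless at least 2 replicas are in sync
min.insync.replicas=2
# Disallow out-of-sync replicas from becoming leader
unclean.leader.election.enable=false
```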