Posts

Showing posts from November, 2024

Kafka cluster collapsed for no reason

Hello! We have a 3-broker Kafka cluster (KRaft), with brokers and KRaft controllers co-located on the same nodes (CPU: 16, RAM: 32 GB). We have 2241 topics and 107262 online partitions, with 23652 client connections. The Kafka version is 3.6.1. Yesterday we had trouble from 12:08 to 12:11: many logs on all brokers indicated inter-node connection troubles. Here are the logs from the first broker:

[2024-11-29 13:08:35,199] INFO [Partition communication.notificationmanager.sendnotification.in-42 broker=0] Shrinking ISR from 1,0,2 to 0,2. Leader: (highWatermar...
[2024-11-29 13:08:35,207] INFO [Partition cldkafka.out-24 broker=0] Shrinking ISR from 2,1,0 to 0. Leader: (highWatermark: 655805, endOffset: ...
[2024-11-29 13:08:45,244] INFO [Partition communication.notificationmanager.sendnotification.in-42 broker=0] ISR updated to 0,2 and version updated to 57
[2024-11-29 13:08:45,273] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set( communication.notificationmanager.s...

Re: Data Corruption in Netty 4.1.111.Final (Kafka 3.9.0)

This was fixed with this PR https://github.com/apache/kafka/pull/17860 On Wed, Nov 20, 2024 at 9:49 AM Thomas Thornton < tthornton@salesforce.com > wrote: > With the completion of KAFKA-17046 > < https://issues.apache.org/jira/browse/KAFKA-17046 >, the netty version > has been upgraded to 4.1.111.Final in Kafka 3.9.0. This netty version has a > known issue with data corruption, see netty issue > < https://github.com/netty/netty/issues/14126 > and grpc-java issue > < https://github.com/grpc/grpc-java/issues/11284 >. It is localized to > grpc-java. This causes data corruption in any Kafka application that uses > grpc-java (e.g., debezium-vitess-connector which runs on Kafka Connect). We > should upgrade to a newer version to avoid this data corruption. I > requested Jira access to open a ticket for this. > > For anyone who is trying to resolve this, the workaround is manually > removing these 4.1.111.Final...

Re: Explicitly creating topology topics in a streams app

Just FYI Streams explicitly disables auto topic creation. This is because we want to detect e.g. accidental deletion of internal topics, since that can/will result in data loss. Better to shut down and get someone's attention so they can try and revive the deleted topic or decide what to do. Not entirely sure which streams examples you were looking at but I'd guess the reason we set auto.create.topics.enable=true is not for the Streams app but for the consumers we use to push test data, so we don't have to explicitly create input topics. On Thu, Nov 21, 2024 at 3:31 PM John D. Ament < johndament@apache.org > wrote: > Hi Paul > > > > On Thu, Nov 21, 2024 at 6:06 PM Brebner, Paul > <Paul.Brebner@netapp.com.invalid> wrote: > > > Hi John, > > > > I'm not a Kafka streams expert but have experimented a few times – I > > recall that Kafka Streams does need to create/use "internal topics...

Re: Explicitly creating topology topics in a streams app

Hi Paul On Thu, Nov 21, 2024 at 6:06 PM Brebner, Paul <Paul.Brebner@netapp.com.invalid> wrote: > Hi John, > > I'm not a Kafka streams expert but have experimented a few times – I > recall that Kafka Streams does need to create/use "internal topics" – and > security has to be set on clients correctly from memory. > This may help? > https://kafka.apache.org/23/documentation/streams/developer-guide/manage-topics > And this > > https://kafka.apache.org/23/documentation/streams/developer-guide/security.html#streams-developer-guide-security Thanks, yeah, I've seen these two. The topics I'm referring to are the internal topics around state store, and other similar internal use cases as found at https://github.com/apache/kafka/blob/trunk/docs/streams/developer-guide/dsl-topology-naming.html#L83-L95 , which are generally (I believe) considered the non-user topics. To give a little more context, we run ...
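For operators who need to pre-create topics or grant ACLs up front, the internal topic names Streams derives are predictable, following the `<application.id>-<name>-changelog` / `-repartition` convention from the dsl-topology-naming docs linked above. A minimal sketch of deriving the changelog names (the application id and store names below are illustrative, not from the thread):

```python
def internal_changelog_topics(application_id, store_names):
    """Derive the changelog topic names Kafka Streams will use for the
    given state stores, per the documented
    <application.id>-<name>-changelog convention."""
    return [f"{application_id}-{store}-changelog" for store in store_names]

# Example with hypothetical names:
print(internal_changelog_topics("orders-app", ["order-totals"]))
# → ['orders-app-order-totals-changelog']
```

Repartition topics follow the same pattern with a `-repartition` suffix; naming stateful operators explicitly (via `Materialized`/`Named`) keeps these names stable across topology changes.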

Re: Explicitly creating topology topics in a streams app

Hi John, I'm not a Kafka streams expert but have experimented a few times – I recall that Kafka Streams does need to create/use "internal topics" – and security has to be set on clients correctly from memory. This may help? https://kafka.apache.org/23/documentation/streams/developer-guide/manage-topics And this https://kafka.apache.org/23/documentation/streams/developer-guide/security.html#streams-developer-guide-security Regards, Paul Brebner, NetApp From: John D. Ament < johndament@apache.org > Date: Friday, 22 November 2024 at 8:46 am To: users@kafka.apache.org < users@kafka.apache.org > Subject: Explicitly creating topology topics in a streams app Hi, When I look at most stream examples I notice that they tend to include t...

Explicitly creating topology topics in a streams app

Hi, When I look at most stream examples I notice that they tend to include the broker setting: auto.create.topics.enable=true My understanding is that this isn't usually recommended for production environments, so we have it off. We have started to play with Kafka Streams apps a bit but noticed that they fail in our production environments with errors around "UNKNOWN_TOPIC_OR_PARTITION". I suspect it's related to not having auto create enabled. Is there an option to force the client to create the topology topics that they use for Kafka Streams apps? Or is auto creation required? Thanks, John

Re: [DISCUSS] Java 23 Support for 3.9.x

Hello, I also think this sounds like a reasonable suggestion, also given that 3.9 was the first version to make KRaft feature complete, which means it's the first version to bring production readiness for KRaft, in many contexts. +1 from me. BR, On Thu, Nov 21, 2024 at 10:51, Josep Prat <josep.prat@aiven.io.invalid > wrote: > Hi all, > Given 3.9 is the latest version before a major, and 4.0.0 comes with > breaking changes (Zookeeper removal, Scala and Java version drops...) I > think it's fair to assume that a considerable subset of Kafka users will be > using 3.9 for a little longer than we usually support versions (roughly 1 > year). I would be in favour of considering 3.9 as an LTS and offer support > for, let's say, 18 to 24 months. > In that spirit, I would be in favour of releasing a future 3.9 version that > can run with JDK 23. > > Best, > > On Thu, Nov 21, 2024 at 3:46 AM Chia-Ping Tsai <...

Re: [DISCUSS] Java 23 Support for 3.9.x

Hi all, Given 3.9 is the latest version before a major, and 4.0.0 comes with breaking changes (Zookeeper removal, Scala and Java version drops...) I think it's fair to assume that a considerable subset of Kafka users will be using 3.9 for a little longer than we usually support versions (roughly 1 year). I would be in favour of considering 3.9 as an LTS and offer support for, let's say, 18 to 24 months. In that spirit, I would be in favour of releasing a future 3.9 version that can run with JDK 23. Best, On Thu, Nov 21, 2024 at 3:46 AM Chia-Ping Tsai < chia7712@gmail.com > wrote: > Dear all, > > It seems the main question is whether we should consider version 3.9 as an > LTS release. For example, will we continue with versions like 3.9.1, 3.9.2, > ... 3.9.100? > > If yes, we should backport both KIP-1006 and support for future JDKs. > > If not, backporting KIP-1006 to 3.9 would be sufficient to fix the issue of > run...

Re: [DISCUSS] Java 23 Support for 3.9.x

Dear all, It seems the main question is whether we should consider version 3.9 as an LTS release. For example, will we continue with versions like 3.9.1, 3.9.2, ... 3.9.100? If yes, we should backport both KIP-1006 and support for future JDKs. If not, backporting KIP-1006 to 3.9 would be sufficient to fix the issue of running version 3.9 under JDK 23, even if JDK 23 is still not officially supported by 3.9. Best, Chia-Ping Greg Harris <greg.harris@aiven.io.invalid> wrote on Thursday, November 21, 2024 at 4:37 AM: > > Has the SecurityManager been fully removed in JDK 23? > > What is the effect of running Kafka 3.9.0 with JDK 23? > > The SecurityManager has been degraded, so by default our users experience > an UnsupportedOperationException. They can work around this by setting a > system property. > In JRE 24, JEP-486 [1] has removed this workaround, so an unpatched 3.9.x > will experience an UnsupportedOperationException unconditional...

Re: [DISCUSS] Java 23 Support for 3.9.x

> Has the SecurityManager been fully removed in JDK 23? > What is the effect of running Kafka 3.9.0 with JDK 23? The SecurityManager has been degraded, so by default our users experience an UnsupportedOperationException. They can work around this by setting a system property. In JRE 24, JEP-486 [1] has removed this workaround, so an unpatched 3.9.x will experience an UnsupportedOperationException unconditionally. > I see https://issues.apache.org/jira/browse/KAFKA-17638 > which explicitly adds JDK 23 to our CI with a fix version of 4.0.0. Lack of > support for JDK 23 in 3.9.x is not a bug, it is what we planned (as far as > I can tell). Originally we were planning to get this change into 3.9.0, but we missed the merge deadline. I opened that ticket afterwards to be fixed in 4.0.0 because that's the next release. The patch was always intended to be backportable, and I intended to backport it [2]. I understand that if we consider Java 23 su...
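For readers looking for the system-property workaround mentioned above: on JDK releases where the SecurityManager is degraded but not yet removed (i.e., before JEP 486 lands), the JVM flag below restores the old behavior. How you wire it in (KAFKA_OPTS, a service unit, etc.) is deployment-specific; treat the placement here as an assumption:

```
# JVM flag workaround for the degraded SecurityManager (pre-JEP 486 JDKs).
# Wiring via KAFKA_OPTS is the usual Kafka pattern but is an assumption
# about your deployment.
KAFKA_OPTS="-Djava.security.manager=allow"
```

Note this stops working once JEP 486 removes the mechanism entirely, which is the point of the backport discussion.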

Re: [DISCUSS] Java 23 Support for 3.9.x

Greg, I have not been following this closely, so apologies for some basic questions. Has the SecurityManager been fully removed in JDK 23? What is the effect of running Kafka 3.9.0 with JDK 23? By "4.0 breaking changes" do you mean changes to our JDK/Scala supported versions, removal of ZK, Kafka API changes, or something else? In general, I do not think we should change our supported JDK versions in a hotfix release. I see https://issues.apache.org/jira/browse/KAFKA-17638 which explicitly adds JDK 23 to our CI with a fix version of 4.0.0. Lack of support for JDK 23 in 3.9.x is not a bug, it is what we planned (as far as I can tell). Also, I feel that we should not add too much to 3.9.x aside from actual bugs. If we backport things into 3.9.x, it will slow adoption of 4.x and increase our maintenance burden over time. Just my $0.02 Thanks! David A On Wed, Nov 20, 2024 at 12:22 PM Greg Harris <greg.harris@aiven.io.invalid> wrote: ...

Data Corruption in Netty 4.1.111.Final (Kafka 3.9.0)

With the completion of KAFKA-17046 < https://issues.apache.org/jira/browse/KAFKA-17046 >, the netty version has been upgraded to 4.1.111.Final in Kafka 3.9.0. This netty version has a known issue with data corruption, see netty issue < https://github.com/netty/netty/issues/14126 > and grpc-java issue < https://github.com/grpc/grpc-java/issues/11284 >. It is localized to grpc-java. This causes data corruption in any Kafka application that uses grpc-java (e.g., debezium-vitess-connector which runs on Kafka Connect). We should upgrade to a newer version to avoid this data corruption. I requested Jira access to open a ticket for this. For anyone who is trying to resolve this, the workaround is manually removing these 4.1.111.Final netty dependencies in the /kafka/libs directory and installing another netty version, e.g.:

```
RUN rm -f /kafka/libs/netty-codec-4.1.111.Final.jar
RUN curl -sfSL -o /kafka/libs/netty-codec-4.1.110.Final.jar https://repo1.maven...
```

[DISCUSS] Java 23 Support for 3.9.x

Hi all, Now that 3.9.0 is released and 4.0.x is progressing, I'd like to understand everyone's expectations about the 3.9.x branch, and ask for a specific consensus on Java 23 support. Some context that I think is relevant to the discussion: * KIP-1006 [1] proposes a backwards-compatible strategy for handling the ongoing removal of the SecurityManager, which is merged and due to release in 4.0.0 [2]. * KIP-1012 [3] rejected ongoing parallel feature development on a 3.x branch while having trunk on 4.x. * During the 3.9.0 release, the patch [2] was rejected [4] due to being a new feature which did not meet the feature freeze deadline. * Other than the SecurityManager removal, there are additional PRs which would also need to be backported for full Java 23 support [5] including a Scala patch upgrade. * Downstream users are asking for a backport [6] because adding support for Java 23 would obligate them to also include the 4.0 breaking changes. So while addi...

Kafka service failure seen during scaling

Hi Team, We tried to scale Kafka from a single broker to 3 brokers. During scaling, we get the error below. We are using Apache Kafka version 3.8.0.

[2024-11-13 23:00:57,251] ERROR Encountered fatal fault: Unable to apply PartitionChangeRecord record at offset 264919 on standby controller, from the batch with baseOffset 264919 (org.apache.kafka.server.fault.ProcessTerminatingFaultHandler)
java.lang.RuntimeException: Tried to create partition YFqfehupTfah0LfzGbw-wA:1, but no topic with that ID was found.
    at org.apache.kafka.controller.ReplicationControlManager.replay(ReplicationControlManager.java:526)
    at org.apache.kafka.controller.QuorumController.replay(QuorumController.java:1504)
    at org.apache.kafka.controller.QuorumController.access$1700(QuorumController.java:179)
    at org.apache.kafka.controller.QuorumController$QuorumMetaLogListener.lambda$handleCommit$0(QuorumController.java:1083)
    at org.apache.kafka.controller.QuorumController$QuorumMetaLogListener.lambda$appendR...

Re: CVE-2024-31141: Apache Kafka Clients: Privilege escalation to filesystem read-access via automatic ConfigProvider

Hi Everyone, Due to an oversight, the Affected versions are incorrect. Version 3.7.1 of kafka-clients is not vulnerable. This is the correct data: Affected versions: - Apache Kafka Clients 2.3.0 through 3.5.2 - Apache Kafka Clients 3.6.0 through 3.6.2 - Apache Kafka Clients 3.7.0 This issue affects Apache Kafka Clients: from 2.3.0 through 3.5.2, 3.6.2, 3.7.0. Thanks, Greg Harris On Mon, Nov 18, 2024 at 10:42 AM Greg Harris < gharris@apache.org > wrote: > Severity: moderate > > Affected versions: > > - Apache Kafka Clients 2.3.0 through 3.5.2 > - Apache Kafka Clients 3.6.0 through 3.6.2 > - Apache Kafka Clients 3.7.0 through 3.7.1 > > Description: > > Files or Directories Accessible to External Parties, Improper Privilege > Management vulnerability in Apache Kafka Clients. > > Apache Kafka Clients accept configuration data for customizing behavior, > and includes ConfigProvider plugins in order to...

CVE-2024-31141: Apache Kafka Clients: Privilege escalation to filesystem read-access via automatic ConfigProvider

Severity: moderate Affected versions: - Apache Kafka Clients 2.3.0 through 3.5.2 - Apache Kafka Clients 3.6.0 through 3.6.2 - Apache Kafka Clients 3.7.0 through 3.7.1 Description: Files or Directories Accessible to External Parties, Improper Privilege Management vulnerability in Apache Kafka Clients. Apache Kafka Clients accept configuration data for customizing behavior, and includes ConfigProvider plugins in order to manipulate these configurations. Apache Kafka also provides FileConfigProvider, DirectoryConfigProvider, and EnvVarConfigProvider implementations which include the ability to read from disk or environment variables. In applications where Apache Kafka Clients configurations can be specified by an untrusted party, attackers may use these ConfigProviders to read arbitrary contents of the disk and environment variables. In particular, this flaw may be used in Apache Kafka Connect to escalate from REST API access to filesystem/environment access, which ma...
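To make the attack surface concrete, the sketch below shows the configuration pattern the CVE describes (the file path is illustrative): once a ConfigProvider is enabled on a worker or client, any party who can submit configuration containing a placeholder can make it read local files or environment variables:

```
# Enabling the built-in FileConfigProvider — the mechanism the CVE concerns.
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
# An untrusted party that can inject configuration can then resolve
# placeholders such as:
#   ${file:/etc/passwd:some-key}
# which the provider reads from the local filesystem.
```

This is why the advisory highlights Kafka Connect: its REST API accepts connector configurations, turning REST access into filesystem/environment read access wherever such providers are enabled.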

Re: Kafka Connect

Hi, I think SMTs (KIP-66) could work for your case. https://kafka.apache.org/documentation.html#connect_transforms Regards, OSB On Fri, Nov 15, 2024, 03:03 Surbhi Mungre < mungre.surbhi@gmail.com > wrote: > Can Kafka Connect be used to read messages from one Kafka Cluster, apply > some basic transformation and write messages to another Kafka Cluster? I > did not find a Kafka Connect Connector in the list of connectors provided > by Confluence[1]. I only found a Replicator[2] but for my use-case I want > to apply some transformation on the messages. > > Instead of using Kafka Connect, does it make more sense to use Kafka > Streams or Spark Streaming. I want to perform very simple transformations. > > [1] https://www.confluent.io/product/connectors/ > [2] > https://docs.confluent.io/platform/current/multi-dc-deployments/replicator/ > > Thanks, > -Surbhi >
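For reference, wiring an SMT into a connector config looks like the sketch below. The connector name, cluster addresses, topics, and the choice of MirrorSourceConnector plus the stock InsertField transform are all illustrative assumptions, on the premise that the needed transformation is simple enough for a stock SMT:

```
{
  "name": "replicate-with-transform",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
  "source.cluster.alias": "src",
  "source.cluster.bootstrap.servers": "src-kafka:9092",
  "target.cluster.bootstrap.servers": "dst-kafka:9092",
  "topics": "orders.*",
  "transforms": "tagOrigin",
  "transforms.tagOrigin.type": "org.apache.kafka.connect.transforms.InsertField$Value",
  "transforms.tagOrigin.static.field": "origin_dc",
  "transforms.tagOrigin.static.value": "us-east-1"
}
```

If the transformation outgrows what chained SMTs can express, that is usually the signal to move to Kafka Streams or Flink instead.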

Fwd: Query on MM2 Compatibility for Kafka Versions

---------- Forwarded message --------- From: Sharma, Atul < atusharm@visa.com > Date: Fri, Nov 15, 2024, 12:16 PM Subject: Query on MM2 Compatibility for Kafka Versions To: Atul Sharma < atul.sharma.mat17@itbhu.ac.in > Hi Team, I have a question regarding MM2 compatibility. Since MM2 comes bundled with the Apache Kafka code and utilizes the Connect framework, I was wondering: Can we use MM2 that ships with Kafka 3.8.0 to replicate data from a Kafka cluster running version 2.5.0 to another cluster running version 2.8.0? Thank you. Atul

Re: Kafka Connect

We have a similar use case and we use Flink for the transformation. Flink reads from Kafka, does the transformation and writes back to Kafka. Thanks, Prince > On Nov 14, 2024, at 8:22 PM, Neeraj Vaidya <neeraj.vaidya@yahoo.co.in.INVALID> wrote: > > I don't think KStreams is a good option just by itself for inter site replication. > How about using a replication technology like MM2 to first replicate to a topic in the destination cluster and then run KStreams client there in the destination cluster to consume, transform and then produce to your final topic. > > Regards, > Neeraj > > >> On 15 Nov 2024, at 1:04 PM, Surbhi Mungre < mungre.surbhi@gmail.com > wrote: >> >> Can Kafka Connect be used to read messages from one Kafka Cluster, apply >> some basic transformation and write messages to another Kafka Cluster? I >> did not find a Kafka Connect Connector in the list of connectors provided >...

Re: Kafka Connect

I don't think KStreams is a good option just by itself for inter site replication. How about using a replication technology like MM2 to first replicate to a topic in the destination cluster and then run KStreams client there in the destination cluster to consume, transform and then produce to your final topic. Regards, Neeraj > On 15 Nov 2024, at 1:04 PM, Surbhi Mungre < mungre.surbhi@gmail.com > wrote: > > Can Kafka Connect be used to read messages from one Kafka Cluster, apply > some basic transformation and write messages to another Kafka Cluster? I > did not find a Kafka Connect Connector in the list of connectors provided > by Confluence[1]. I only found a Replicator[2] but for my use-case I want > to apply some transformation on the messages. > > Instead of using Kafka Connect, does it make more sense to use Kafka > Streams or Spark Streaming. I want to perform very simple transformations. > > [1] https://www...

Kafka Connect

Can Kafka Connect be used to read messages from one Kafka Cluster, apply some basic transformation and write messages to another Kafka Cluster? I did not find a Kafka Connect Connector in the list of connectors provided by Confluent[1]. I only found a Replicator[2] but for my use-case I want to apply some transformation on the messages. Instead of using Kafka Connect, does it make more sense to use Kafka Streams or Spark Streaming? I want to perform very simple transformations. [1] https://www.confluent.io/product/connectors/ [2] https://docs.confluent.io/platform/current/multi-dc-deployments/replicator/ Thanks, -Surbhi

Correct way to override Kafka Connect producer settings?

Hey Kafka fam, What's the correct way to set task-level overrides to producer settings in a Kafka Connect task? For example, with MirrorMaker2, I'd expect the following "producer.override.*" based configs to work based on the documentation, but in reality this does not change any of the producer behavior and the default 1MB "max.request.size" is still used: https://github.com/apache/kafka/blob/3.9.0/docs/connect.html#L60

{
  "producer.override.max.request.size": "26214400",
  "producer.override.batch.size": "524288",
  "producer.override.buffer.memory": "524288000",
  "producer.override.receive.buffer.bytes": "33554432",
  "producer.override.send.buffer.bytes": "33554432",
  "producer.override.compression.type": "gzip",
  "name": "mm2-cpc",
  "connector.class": "org.apa...
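One thing worth checking (an assumption about the setup, since the worker configuration isn't shown in the post): per-connector `producer.override.*` / `consumer.override.*` keys are silently ignored unless the Connect worker's override policy (KIP-458) permits them:

```
# In the Connect worker config (e.g. connect-distributed.properties).
# The default policy is "None", which silently drops producer.override.*
# keys from connector configs; "All" permits any override.
connector.client.config.override.policy=All
```

If MM2 is run in its dedicated-driver mode rather than as connectors on a Connect cluster, the relevant knobs differ, so the mode in use matters here too.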

Re: Kafka "kafka-metadata-quorum.sh" regression in 3.9.0

Hi Jesús, That's part of the change in KIP-853: https://cwiki.apache.org/confluence/display/KAFKA/KIP-853%3A+KRaft+Controller+Membership+Changes#KIP853:KRaftControllerMembershipChanges-describe--status Thanks. Luke On Tue, Nov 12, 2024 at 11:48 PM Jesus Cea < jcea@jcea.es > wrote:
> In Kafka 3.8.1 I see this:
>
> """
> /home/kafka/bin/kafka-metadata-quorum.sh --command-config
> /home/kafka-broker-data/command.properties --bootstrap-server
> [HIDDEN]:9092 describe --status
> ClusterId: 8a31cmC7Tn-IHxEDnIfQoA
> LeaderId: 1001
> LeaderEpoch: 117580
> HighWatermark: 66256270
> MaxFollowerLag: 0
> MaxFollowerLagTimeMs: 134
> CurrentVoters: [1000,1001,1002]
> CurrentObservers: [0,1,3,4]
> """
>
> In 3.9.0 I see this:
>
> """
> /home/kafka/bin/kafka-metadata-quorum.sh --c...

Data Loss from Kafka Connect when Schema Registry requests fail

Hi, we noticed data loss, i.e. dropped records, when running Debezium on Kafka Connect with Apicurio Schema Registry. Specifically, multiple times we have observed that a single record is dropped when we get this exception (full stack trace < https://gist.github.com/twthorn/917bf3cc576f2b486dde04b16a60d681 >). Failed to send HTTP request to endpoint: http://schema-registry.service.prod-us-east-1-dw1.consul:8080/apis/ccompat/v6/subjects/prod.<keyspace>.<table>-key/versions?normalize=false This exception is raised by the Kafka Connect worker, which receives it from the Confluent schema registry client. This seems to be a network blip; afterwards there are no errors and it continues processing data without issue. But there is data loss for one record that was received almost exactly one minute prior to when this exception is logged. We have observed the behavior with that same timeline occur on different days several weeks apart. We have the...

Kafka transactions allow aborted reads, lost writes, and torn transactions

Hello all, I've spent the last few months testing Bufstream, a Kafka-compatible system. In the course of that research, we discovered that the Kafka transaction protocol allows aborted reads, lost writes, and torn transactions: https://jepsen.io/analyses/bufstream-0.1.0 In short, the protocol assumes that message delivery is ordered, but sends messages over different TCP connections, to different nodes, with automatic retries. When network or node hiccups (e.g. garbage collection) delay delivery of a commit or abort message, that message can commit or abort a different, later transaction. Committed transactions can actually be lost. Aborted transactions can actually succeed. Transactions can be torn into parts: some of their effects committed, others lost. We've reproduced these problems in both Bufstream and Kafka itself, and we believe every Kafka-compatible system is most likely susceptible. KIP-890 may help. Client maintainers may also b...

Kafka "kafka-metadata-quorum.sh" regression in 3.9.0

In Kafka 3.8.1 I see this:

"""
/home/kafka/bin/kafka-metadata-quorum.sh --command-config /home/kafka-broker-data/command.properties --bootstrap-server [HIDDEN]:9092 describe --status
ClusterId: 8a31cmC7Tn-IHxEDnIfQoA
LeaderId: 1001
LeaderEpoch: 117580
HighWatermark: 66256270
MaxFollowerLag: 0
MaxFollowerLagTimeMs: 134
CurrentVoters: [1000,1001,1002]
CurrentObservers: [0,1,3,4]
"""

In 3.9.0 I see this:

"""
/home/kafka/bin/kafka-metadata-quorum.sh --command-config /home/kafka-broker-data/command.properties --bootstrap-server [HIDDEN]:9092 describe --status
ClusterId: 8a31cmC7Tn-IHxEDnIfQoA
LeaderId: 1002
LeaderEpoch: 117581
HighWatermark: 66261499
MaxFollowerLag: 0
MaxFollowerLagTimeMs: 38
CurrentVoters: [{"id": 1000, "directoryId": null, "endpoints...

Safe removal of stray partition logs from a broker

Hello, I am currently using kafka 3.7 in kraft mode, have cluster of 3 controllers and 5 brokers. I issued a `/opt/kafka/bin/kafka-topics.sh ... --topic T --delete` on a topic whose sole partition had only one replica on a broker that was at the time offline (in process of recovering). The operation succeeded and by the time the broker got online it's possible that the topic had gotten automatically recreated by some consumer or producer. At that moment the broker moved the logs into a dir named something like `topic-partition.[0-9a-f]*-stray`. Now the logs dir has hundreds of GB in these stray directories and I am wondering what is the safest way to clean this mess up. In this particular case I do not care for the contents of the original topics. But I am very reluctant to simply remove the directories manually from the underlying disk. I couldn't find a mention in the documentation. The comment in the source code [1] does not allude to what should be done with su...

Re: [ANNOUNCE] Apache Kafka 3.9.0

Thanks Colin for all the hard work. Regards, Apoorv Mittal On Fri, Nov 8, 2024 at 6:37 AM Josep Prat <josep.prat@aiven.io.invalid> wrote: > Hi Colin, > > Thanks for running the release!! > > Best, > > ------------------ > Josep Prat > Open Source Engineering Director, Aiven > josep.prat@aiven.io | +491715557497 | aiven.io > Aiven Deutschland GmbH > Alexanderufer 3-7, 10117 Berlin > Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen, > Anna Richardson, Kenneth Chen > Amtsgericht Charlottenburg, HRB 209739 B > > On Fri, Nov 8, 2024, 07:23 Satish Duggana < satish.duggana@gmail.com > > wrote: > > > Thanks Colin for all your hard work on running the 3.9.0 release. > > Thanks to all the contributors to this release. > > > > ~Satish. > > > > > > ~Satish. > > > > On Fri, 8 Nov 2024 at 04:42, Colin McCabe < cmccabe@ap...