
Posts

Showing posts from August, 2021

Re: Security vulnerabilities in kafka:2.13-2.6.0/2.7.0 docker image

Hi Ashish, I suggest that you upgrade to V2.8. I checked 2 of the CVEs, and they are fixed (or not used, like libfetch) in V2.8. If you still find that these CVEs exist in V2.8, please raise it. Thank you. Luke On Wed, Sep 1, 2021 at 4:07 AM Ashish Patil < ashish.patil@gm.com > wrote: > Hi Team > > I wanted to use the 2.6.0 docker image for Kafka but It has lots of > security vulnerabilities. > Please find the below list of security vulnerabilities > ** > CVE-2021-36159 > CVE-2020-25649 < https://github.com/advisories/GHSA-288c-cq4h-88gq > > CVE-2021-22926 > CVE-2021-22922 > CVE-2021-22924 > CVE-2021-22922 > CVE-2021-22924 > CVE-2021-31535 > CVE-2019-17571 < https://github.com/advisories/GHSA-2qrg-x229-3v8q > > ** > > I did raise this issue here > https://github.com/wurstmeister/kafka-docker/issues/681 but it looks like > the issue is within the Kafka binary. > > Do we...

Security vulnerabilities in kafka:2.13-2.6.0/2.7.0 docker image

Hi Team, I wanted to use the 2.6.0 docker image for Kafka, but it has lots of security vulnerabilities. Please find the below list of security vulnerabilities ** CVE-2021-36159 CVE-2020-25649 CVE-2021-22926 CVE-2021-22922 CVE-2021-22924 CVE-2021-22922 CVE-2021-22924 CVE-2021-31535 CVE-2019-17571 ** I did raise this issue here https://github.com/wurstmeister/kafka-docker/issues/681 but it looks like the issue is within the Kafka binary. Do we have any plan to fix this in the coming version, or any suggestions around this? Thanks Ashish

Re: [VOTE] 3.0.0 RC1

Small correction to my previous email. The actual link for public preview of the 3.0.0 blog post draft is: https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache6 (see also the email thread with title: [DISCUSS] Please review the 3.0.0 blog post) Best, Konstantine On Tue, Aug 31, 2021 at 6:34 PM Konstantine Karantasis < kkarantasis@apache.org > wrote: > > Hello Kafka users, developers and client-developers, > > This is the second release candidate for Apache Kafka 3.0.0. > It corresponds to a major release that includes many new features, > including: > > * The deprecation of support for Java 8 and Scala 2.12. > * Kafka Raft support for snapshots of the metadata topic and > other improvements in the self-managed quorum. > * Deprecation of message formats v0 and v1. > * Stronger delivery guarantees for the Kafka producer enabled by default. > * Optimizations in OffsetFetch and FindCoordinator...

[VOTE] 3.0.0 RC1

Hello Kafka users, developers and client-developers, This is the second release candidate for Apache Kafka 3.0.0. It corresponds to a major release that includes many new features, including: * The deprecation of support for Java 8 and Scala 2.12. * Kafka Raft support for snapshots of the metadata topic and other improvements in the self-managed quorum. * Deprecation of message formats v0 and v1. * Stronger delivery guarantees for the Kafka producer enabled by default. * Optimizations in OffsetFetch and FindCoordinator requests. * More flexible Mirror Maker 2 configuration and deprecation of Mirror Maker 1. * Ability to restart a connector's tasks on a single call in Kafka Connect. * Connector log contexts and connector client overrides are now enabled by default. * Enhanced semantics for timestamp synchronization in Kafka Streams. * Revamped public API for Stream's TaskId. * Default serde becomes null in Kafka Streams and several other configuration chang...

Re: Kafka consumer and heartbeat threads stuck/hanging up

On Thu, Aug 26, 2021 at 4:44 PM Shantam Garg (Customer Service and Transact) < shantam.garg@flipkart.com > wrote: > Hello all, > > I have come across this behavior in our production cluster where the > *Kafka consumers and heartbeat threads are getting stuck/hanging up* without > any error. > > As the heartbeat threads are stuck, the session timeout is breached and > the consumers are thrown out of the live consumer group. I have tried debugging > it by enabling debug logs for Coordinator and Consumer but couldn't find > the reason for this; my hunch is that it's getting stuck in some deadlock, > but I'm not sure what's causing it. > Please let me know if anyone has any idea on how to debug this issue. > > Kafka version: *2.7* > Consumer client: *kafka-client:2.7* > > *Configs:* > max.poll.interval.ms : 300000 > session.timeout.ms : 10000 > heartbeat.interval.ms : 3000 > fetch.min.bytes: 1 >...

Kafka does not check the expiry of SSL certificates if host.cer is included in the truststore?

Hello fellow Kafka users, I have come across this behaviour of Kafka while using it in SASL_SSL mode. My observations are: When we exclude the host.cer => the expiry date of certificate[1] in the certificate chain of the keystore is considered. When we include the host.cer => no expiry is checked, even for all 3 certificates in the chain. Can anyone help me understand whether it's a known behaviour in Kafka or an issue? Any help would be appreciated. Thanks, Deepak

Re: Ensuring that the message is persisted after acknowledgement

Kunal, I recommend looking at the broker and topic parameters that include the term "flush", such as https://kafka.apache.org/documentation/#topicconfigs_flush.messages Kafka lets you configure how often log messages are flushed to disk, either per topic or globally. The default settings leave the flushing completely to the OS. Kafka was designed to take full advantage of the OS page cache because it significantly improves performance for both producers and consumers, allowing them to write to and read from memory. If your application requires absolute disk persistence and you are willing to take a significant performance hit, you can set the topic property flush.messages to 1 for any topic that requires this guarantee. — Peter > On Aug 24, 2021, at 10:31 PM, Kunal Goyal < kunal.goyal@cohesity.com > wrote: > > Hi Sunil > > The article that you shared talk...
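
As a minimal sketch (not from the thread), the per-topic override can also be applied programmatically with the Java AdminClient; the topic name "durable-topic" and broker address are placeholders, and kafka-configs.sh achieves the same thing.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ForceFlushPerMessage {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        try (Admin admin = Admin.create(props)) {
            // Hypothetical topic name; use the topic that needs the durability guarantee.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "durable-topic");
            // flush.messages=1 forces an fsync after every appended message,
            // at the significant performance cost mentioned above.
            AlterConfigOp setFlush = new AlterConfigOp(
                new ConfigEntry("flush.messages", "1"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                Map.of(topic, Collections.singletonList(setFlush))).all().get();
        }
    }
}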

Re: Ensuring that the message is persisted after acknowledgement

Hi Sunil, The article that you shared talks about acks. But even if the message is received by all in-sync replicas and Kafka sends a response back to the producer, it is possible that none of the replicas has flushed the messages to disk. So, if all the replicas crash for some reason, the messages would be lost. For our application, we require some way to guarantee that the messages are persisted to disk. Regards, Kunal On Tue, Aug 24, 2021 at 8:40 PM Vairavanathan Emalayan < vairavanathan.emalayan@cohesity.com > wrote: > > > ---------- Forwarded message --------- > From: sunil chaudhari < sunilmchaudhari05@gmail.com > > Date: Fri, Aug 20, 2021 at 8:00 AM > Subject: Re: Ensuring that the message is persisted after acknowledgement > To: < users@kafka.apache.org > > Cc: Vairavanathan Emalayan < vairavanathan.emalayan@cohesity.com > > > > Hi Kunal, > This article may help you. > > https://bet...

How to use CRL (Certificate Revocation List) with Kafka

Hi, We have a private CA and our Kafka brokers' certificates are signed by that private CA. A bunch of external clients connect to our broker, and before connecting they download the private CA's cert and add it to their truststore. Everything works fine. On the Kafka broker side, we want to use a CRL before we authenticate any client. Just wondering how we can use CRL or OCSP (Online Certificate Status Protocol) with Kafka? I couldn't find any documentation around it, so I thought of asking the community. Any help would be appreciated. Thanks. --Darshan
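
Kafka itself does not expose a dedicated CRL or OCSP setting, so one option (an assumption about the setup, not a documented Kafka feature) is to turn on the JDK's own revocation checking before the broker builds its SSLContext, using standard JSSE/PKIX properties; a minimal sketch:

import java.security.Security;

// A minimal sketch, not a Kafka configuration: these are standard JDK properties,
// typically passed as -D flags in KAFKA_OPTS, that make the TLS stack check
// revocation while validating client certificate chains. Test carefully against
// the private CA before relying on it.
public class EnableJvmRevocationChecking {
    public static void main(String[] args) {
        // Check revocation status during certificate path validation.
        System.setProperty("com.sun.net.ssl.checkRevocation", "true");
        // Allow the validator to follow CRL Distribution Point extensions in the certs.
        System.setProperty("com.sun.security.enableCRLDP", "true");
        // Optionally enable OCSP as well.
        Security.setProperty("ocsp.enable", "true");
        // ... start the broker / build the SSLContext only after these are set ...
    }
}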

Re: Kafka Streams Handling uncaught exceptions REPLACE_THREAD

Thank you for your help, I will check it and try it :-) On Mon, Aug 16, 2021 at 11:45 AM Bruno Cadonna < cadonna@apache.org > wrote: > Hi Yoda, > > for certain cases, Kafka Streams allows you to specify handlers that > skip the problematic record. Those handlers are: > > 1. deserialization exception handler configured in > default.deserialization.exception.handler > 2. time extractor set in default.timestamp.extractor and in the Consumed > object > 3. production exception handler configured in > default.production.exception.handler > > Kafka Streams provides implementations for handlers 1 and 2 to skip the > problematic records, that are LogAndContinueExceptionHandler and > LogAndSkipOnInvalidTimestamp, respectively. > > For some more details have a look at > > https://docs.confluent.io/platform/current/streams/faq.html#failure-and-exception-handling > > If problematic records cause an excep...
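
For reference, the REPLACE_THREAD option from the thread title is set through the handler added in KIP-671 (Kafka Streams 2.8+). A minimal sketch, with a placeholder topology and configs:

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

public class ReplaceThreadExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "replace-thread-demo"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");                      // placeholder topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.setUncaughtExceptionHandler(exception -> {
            // Replace the dead stream thread instead of shutting down the client.
            // The failing record will be reprocessed, so a poison-pill record still
            // needs one of the skip handlers Bruno describes above.
            return StreamThreadExceptionResponse.REPLACE_THREAD;
        });
        streams.start();
    }
}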

Re: Kafka attempt rediscovery never happen

Hi Antonio, What you can do is check the broker-side log to see if there's any info. If not, you can also enable DEBUG logs to get more info for troubleshooting. Thank you. Luke On Mon, Aug 23, 2021 at 11:27 PM Antonio Pires < smborgespires@gmail.com > wrote: > Hello fellow Kafka users, > > I've been trying to understand what happened to one of my Consumers but > with no luck so far, but maybe someone could help me with some insight. > > I got these lines on the logs > > │ 20:43:43,909 INFO com.mms.logic [] - > Successfully consumed message. > correlationId=ce62d95e-09e4-4f8f-b166-5193d582733b topic=nj_topic > partition=2 offset=371360 > │ 20:43:55,281 ERROR > org.apache.kafka.clients.consumer.internals.ConsumerCoordinator [] - > [Consumer clientId=consumer-mms-1, groupId=mms] Offset commit failed on > partition nj_topic-2 at offset 371361: The coordinator is not aware...

Conditional Produce in Kafka (KAFKA-2260)

Hi all, I am new to the Kafka project and while having a discussion with a colleague, I was pointed to the JIRA KAFKA-2260 < https://issues.apache.org/jira/browse/KAFKA-2260 > [1] which talks about adding the ability to perform "conditional produce" requests to Kafka. This idea sounded very exciting to me and I can see parallels with Optimistic Concurrency Control in traditional databases, wherein Kafka could be used as a "system of record", or as an arbiter for disparate systems working on the same data. The natural scale-out architecture of Kafka makes this ability even more interesting than the more monolithic databases that generally offer this. I found a system which is already doing something like this today - Waltz < https://wepay.github.io/waltz/docs/introduction > [2]. Following the discussion in the JIRA [1] itself and then on KIP-27 < https://cwiki.apache.org/conflue...

Kafka attempt rediscovery never happen

Hello fellow Kafka users, I've been trying to understand what happened to one of my Consumers, with no luck so far; maybe someone could help me with some insight. I got these lines on the logs │ 20:43:43,909 INFO com.mms.logic [] - Successfully consumed message. correlationId=ce62d95e-09e4-4f8f-b166-5193d582733b topic=nj_topic partition=2 offset=371360 │ 20:43:55,281 ERROR org.apache.kafka.clients.consumer.internals.ConsumerCoordinator [] - [Consumer clientId=consumer-mms-1, groupId=mms] Offset commit failed on partition nj_topic-2 at offset 371361: The coordinator is not aware of this member. │ 20:43:55,282 INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator [] - [Consumer clientId=consumer-mms-1, groupId=mms] OffsetCommit failed with Generation{generationId=17, memberId='consumer-mms-1-c69e919b-2719-42cf-bf76-687fb1b0ea18', protocol='range'}: The coordinator is not aware of this member. ...

ReplicaFetcherThread marked as failed and NotLeaderOrFollowerException after upgrade to 2.7.1

Hello, we have recently updated our production Kafka cluster from 2.6.1 to 2.7.1 and started receiving 2 types of errors: 1. When a broker is restarted, upon starting up it produces a lot of warnings with information about an old partition leader epoch and: ... [2021-08-23 15:25:55,629] INFO [ReplicaFetcher replicaId=10, leaderId=11, fetcherId=2] Partition redacted-topic1-name-19 has an older epoch (44) than the current leader. Will await the new LeaderAndIsr state before resuming fetching. (kafka.server.ReplicaFetcherThread) [2021-08-23 15:25:55,630] WARN [ReplicaFetcher replicaId=10, leaderId=11, fetcherId=2] Partition redacted-topic1-name-19 marked as failed (kafka.server.ReplicaFetcherThread) ... At the end of broker startup I get: [2021-08-23 15:25:55,645] INFO [ReplicaFetcherManager on broker 10] Removed fetcher for partitions Set(...[a lot of partitions], redacted-topic1-name-19, [a lot of partitions]...) (kafka.server.ReplicaFetcherManager) 2. While runnin...

Kafka Zookeeper SSL - include cert and key.

Hi, I am using this command to test my zookeeper SSL connection: openssl s_client -showcerts -connect 55.55.55.55:2280 -cert /root/ca-old3/intermediate/certs/intermediate.cert.pem -key /root/ca-old3/intermediate/private/intermediate.key.pem That works great and I have this msg "Authenticated" from the log: [2021-08-22 20:19:36,173] INFO Authenticated Id '1.2.44222.1.9.1=#16137365637572697479406b696e7374612e636f6d,OU=Engineering,O=Ltd,ST=CA,C=US' for Scheme 'x509' (org.apache.zookeeper.server.auth.X509AuthenticationProvider) So I assume that one is working properly. What if I don't want to use the -cert and -key options with the openssl command? I tried that with these steps: 1. I chained kac-zookeeper_cluster.cert.pem and intermediate.cert.pem using this command: cat /root/myca/intermediate/certs/kac-zookeeper_cluster.cert.pem /root/ca-old3/intermediate/certs/intermediate.cert.pem > /root/ca-old3/intermediate...

Re: Ensuring that the message is persisted after acknowledgement

Rather than forcing writes to disk after each message, the usual method of ensuring durability is to configure the topic with a Replication Factor of 3, min.insync.replicas=2 and have producers configure acks=all. This ensures that the record has been replicated to at least 2 brokers before an acknowledgement is sent back to the producer. If you really want to force an fsync after each record, you could configure the brokers with log.flush.interval.messages=1 https://kafka.apache.org/documentation/#brokerconfigs_log.flush.interval.messages If you only want that behavior for certain topics, you could configure the topic with flush.messages=1 https://kafka.apache.org/documentation/#topicconfigs_flush.messages Do note that this is explicitly not recommended in the documentation. On Fri, Aug 20, 2021 at 8:01 AM sunil chaudhari < sunilmchaudhari05@gmail.com > wrote: > Hi Kunal, > This article may help you. > > https://betterprogramming.pub/kafka...
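
A minimal sketch of the replication-based setup described above (an acks=all producer against a topic created with replication factor 3 and min.insync.replicas=2); the topic name, partition count and broker address are placeholders:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableProduceExample {
    public static void main(String[] args) throws Exception {
        String bootstrap = "localhost:9092"; // placeholder

        // Create the topic with RF=3 and min.insync.replicas=2.
        Properties adminProps = new Properties();
        adminProps.put("bootstrap.servers", bootstrap);
        try (Admin admin = Admin.create(adminProps)) {
            NewTopic topic = new NewTopic("durable-topic", 6, (short) 3)
                .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(List.of(topic)).all().get();
        }

        // acks=all: the send future only completes once all in-sync replicas have the record.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            // get() blocks until the acknowledgement, i.e. at least min.insync.replicas
            // brokers (here 2 of the 3 replicas) have the record in their log.
            producer.send(new ProducerRecord<>("durable-topic", "key", "value")).get();
        }
    }
}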

Re: Ensuring that the message is persisted after acknowledgement

Hi Kunal, This article may help you. https://betterprogramming.pub/kafka-acks-explained-c0515b3b707e Cheers, Sunil. On Fri, 20 Aug 2021 at 8:11 PM, Kunal Goyal < kunal.goyal@cohesity.com > wrote: > Hello, > > We are exploring using Kafka for our application. Our requirement is that > once we write some messages to Kafka, it should be guaranteed that the > messages are persisted to disk. > We found this > < > https://www.quora.com/Does-Kafka-sync-data-to-disk-asynchronously-like-Redis-does > > > article which says that a Kafka broker acknowledges a record after it has > written the record to the buffer of the I/O device; it does not issue an > explicit fsync operation nor does it wait for the OS to confirm that the > data has been written. Is this statement true for the current > implementation? If so, is there any way in which we can ensure fsync is > called before acknowledgement of messages? ...

Ensuring that the message is persisted after acknowledgement

Hello, We are exploring using Kafka for our application. Our requirement is that once we write some messages to Kafka, it should be guaranteed that the messages are persisted to disk. We found this < https://www.quora.com/Does-Kafka-sync-data-to-disk-asynchronously-like-Redis-does > article which says that a Kafka broker acknowledges a record after it has written the record to the buffer of the I/O device; it does not issue an explicit fsync operation nor does it wait for the OS to confirm that the data has been written. Is this statement true for the current implementation? If so, is there any way in which we can ensure fsync is called before acknowledgement of messages? Any help would be appreciated. -- Thanks & Regards Kunal Goyal

Re: Kafka Streams leave group behaviour

As Boyang mentioned, Kafka Streams intentionally does not send a LeaveGroup request when shutting down. This is because often the shutdown is not due to a scaling down event but instead some transient closure, such as during a rolling bounce. In cases where the instance is expected to start up again shortly after, we originally wanted to avoid having that member's tasks redistributed across the remaining group members since this would disturb the stable assignment and could cause unnecessary state migration and restoration. We also hoped to limit the disruption to just a single rebalance, rather than forcing the group to rebalance once when the member shuts down and then again when it comes back up. So it's really an optimization for the case in which the shutdown is temporary. That said, many of those optimizations are no longer necessary or at least much less useful given recent features and improvements. For example, rebalances are now lightweight, so...

Re: Kafka metrics to calculate number of messages in a topic

That will get you a good approximation, but it's not guaranteed to be completely accurate. Offsets in Kafka are not guaranteed to be continuous. For topics with log compaction enabled, the removed records will leave (potentially very large) holes in the offsets. Even for topics without log compaction, it's possible for there to be small holes in the offsets. Transaction markers for Exactly Once processing will also use offsets. On Mon, Aug 9, 2021 at 11:28 PM Dhirendra Singh < dhirendraks@gmail.com > wrote: > Hi All, > I have a requirement to display the total number of messages in a topic in > grafana dashboard. > I am looking at the metrics exposed by kafka broker and came across the > following metrics. > kafka_log_log_logendoffset > kafka_log_log_logstartoffset > > My understanding is that if I take the difference of > kafka_log_log_logendoffset and kafka_log_log_logstartoffset it should > result in the numbe...
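
A minimal sketch (not from the thread) of that approximation using the AdminClient's listOffsets API instead of the broker metrics; the topic name and broker address are placeholders, and the caveats above about compaction holes and transaction markers still apply:

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartition;

public class ApproximateMessageCount {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        String topic = "my-topic";                          // placeholder

        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(java.util.List.of(topic))
                .all().get().get(topic);

            // Ask for the start and end offset of every partition.
            Map<TopicPartition, OffsetSpec> earliest = new HashMap<>();
            Map<TopicPartition, OffsetSpec> latest = new HashMap<>();
            desc.partitions().forEach(p -> {
                TopicPartition tp = new TopicPartition(topic, p.partition());
                earliest.put(tp, OffsetSpec.earliest());
                latest.put(tp, OffsetSpec.latest());
            });

            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> start =
                admin.listOffsets(earliest).all().get();
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> end =
                admin.listOffsets(latest).all().get();

            // Sum of (log end offset - log start offset) per partition: an upper bound,
            // not an exact count, for the reasons explained above.
            long approxCount = start.keySet().stream()
                .mapToLong(tp -> end.get(tp).offset() - start.get(tp).offset())
                .sum();
            System.out.println("Approximate message count for " + topic + ": " + approxCount);
        }
    }
}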

Kafka-consumer-groups.sh says "no active members" but the CURRENT_OFFSET is moving

Hi Folks, We recently came across a weird scenario where we had a consumer group consuming from multiple topics. When we ran the kafka-consumer-groups.sh command multiple times, we saw that the CURRENT-OFFSET is advancing; however, we also saw a line printed: *"Consumer group 'GROUP_ABC' has no active members."* The consumer lag graph shows no data from this group. Here is the output from kafka-consumer-groups.sh: ./bin/kafka-consumer-groups.sh --bootstrap-server <HOST:PORT> --group GROUP_ABC --describe *Consumer group 'GROUP_ABC' has no active members.* GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID *GROUP_ABC* TOPIC_A 0 2280611 3861697 1581086 - - - *GROUP_ABC* TOPIC_A 1 3845015 3845015 ...

Re: DescribeTopics could return deleted topic

Got it, thanks for explaining that Guozhang! On Tue, Aug 17, 2021 at 3:08 PM Guozhang Wang < wangguoz@gmail.com > wrote: > Since DescribeTopics is just handled via a Metadata request behind the > scene, it is possible that if the request is sent to some brokers with > stale metadata (not yet received the metadata update request). But I would > not expect it to "always" return the deleted topic partition, unless the > broker being queried is partitioned from the controller and hence would > never receive the metadata update request. > > > Guozhang > > On Sun, Aug 8, 2021 at 6:08 PM Boyang Chen < reluctanthero104@gmail.com > > wrote: > > > Hey there, > > > > Has anyone experienced the case where the admin delete topic command was > > issued but the DescribeTopics command always returns the topic partition? > > What's the expected time for the topic metadata to disappear? ...

Re: High disk read with Kafka streams

Hello Mangat, Thanks for reporting your observations. I have some questions: 1) Are your global state stores also in-memory or persisted on disks? 2) Are your Kafka and KStreams colocated? Guozhang On Tue, Aug 10, 2021 at 6:10 AM mangat rai < mangatmodi@gmail.com > wrote: > Hey All, > > We are using the low level processor API to create kafka stream > applications. Each app has 1 or more in-memory state stores with caching > disabled and changelog enabled. Some of the apps also have global stores. > We noticed from the node metrics (kubernetes) that the stream applications > are consuming too much disk IO. On going deeper I found following > > 1. Running locally with docker I could see some pretty high disk reads. I > used `docker stats` and got `BLOCK I/O` as `438MB / 0B`. To compare we did > only a few gigabytes of Net I/O. > 2. In kubernetes, `container_fs_reads_bytes_total` gives us pretty big > numbers wher...

Re: DescribeTopics could return deleted topic

Since DescribeTopics is just handled via a Metadata request behind the scenes, it is possible to get a stale answer if the request is sent to a broker with stale metadata (one that has not yet received the metadata update request). But I would not expect it to "always" return the deleted topic partition, unless the broker being queried is partitioned from the controller and hence would never receive the metadata update request. Guozhang On Sun, Aug 8, 2021 at 6:08 PM Boyang Chen < reluctanthero104@gmail.com > wrote: > Hey there, > > Has anyone experienced the case where the admin delete topic command was > issued but the DescribeTopics command always returns the topic partition? > What's the expected time for the topic metadata to disappear? > > Boyang > -- -- Guozhang
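
Not from the thread, but as a minimal sketch of one way to observe the propagation delay: delete the topic, then poll describeTopics until the queried broker reports it as unknown. The topic name and broker address are placeholders.

import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class WaitForTopicDeletion {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            admin.deleteTopics(List.of("doomed-topic")).all().get();
            while (true) {
                try {
                    admin.describeTopics(List.of("doomed-topic")).all().get();
                    Thread.sleep(500); // still visible in this broker's metadata
                } catch (ExecutionException e) {
                    if (e.getCause() instanceof UnknownTopicOrPartitionException) {
                        break; // the metadata update has reached this broker
                    }
                    throw e;
                }
            }
        }
    }
}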

MirrorMaker2 is not always starting tasks

I am using MirrorMaker 2.0 and running it via the MirrorMaker.java < https://github.com/apache/kafka/blob/2.7/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMaker.java > class. This class will start up its own worker and `DistributedHerder`. After upgrading from version `2.4.0` to `2.7.1` I noticed that when I start up MirrorMaker it does not always start tasks - just the connector is running. Doing some stop/starts will eventually start the tasks too. I summarised the problem and my investigations in this JIRA: https://issues.apache.org/jira/browse/KAFKA-13196 Does anyone face similar behavior? Or am I doing something inappropriate? Thanks, Jozef

Secure connection using cert and keyfile

Hi, I am running this command to test my zookeeper SSL connection: openssl s_client -showcerts -connect 55.55.55.55:2280 -CAfile /certs/ca-chain.cert.pem -cert /root/ca/intermediate/certs/intermediate.cert.pem -key /root/ca/intermediate/private/intermediate.key.pem It works just fine, so that covers using openssl s_client to connect to ZooKeeper. How can I connect my Kafka server using the -cert and -key options, like the command mentioned above? I need to do that to avoid getting SSL errors, because I cannot use ssl.clientAuth in my ZooKeeper config; I only have version 3.5.5 (which does not support ssl.clientAuth). Any ideas on how I can connect my Kafka server using the -cert and -key options? Best regards, John Mark Causing

RE: Kafka checks the validity of SSL certificates keystore or trust store?

Hello, Can anyone help me with the below information: which SSL certificate's validity does Kafka check when evaluating expiry: the keystore or the truststore? Thanks in advance! Best regards, Deepak From: Deepak Jain Sent: 12 August 2021 15:01 To: users@kafka.apache.org Cc: Alap Patwardhan < alap@cumulus-systems.com >; Prashant Ahire < prashant.ahire@cumulus-systems.com > Subject: Kafka checks the validity of SSL certificates keystore or trust store? Hello, We are using Kafka for data uploading via SSL. While doing the SSL certificate expiry test, we found that Kafka checks the expiry of the keystore and does not start when the current date exceeds the validity end date of the keystore, and dumps the following exception in server.log ------------------------------------------------------------------------------------START-OF-EXCEPTION------------------------------------------------------------------------------------------------------...

Re: Kafka Streams Handling uncaught exceptions REPLACE_THREAD

Hi Yoda, for certain cases, Kafka Streams allows you to specify handlers that skip the problematic record. Those handlers are: 1. deserialization exception handler configured in default.deserialization.exception.handler 2. time extractor set in default.timestamp.extractor and in the Consumed object 3. production exception handler configured in default.production.exception.handler Kafka Streams provides implementations for handlers 1 and 2 to skip the problematic records, namely LogAndContinueExceptionHandler and LogAndSkipOnInvalidTimestamp, respectively. For some more details have a look at https://docs.confluent.io/platform/current/streams/faq.html#failure-and-exception-handling If problematic records cause an exception in user code, the user code needs to provide functionality to skip the problematic record. Best, Bruno On 10.08.21 13:26, Yoda Jedi Master wrote: > Hi Bruno, thank you for your answer. > I mean that the message that ca...
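
A minimal sketch of wiring up handlers 1 and 2 with the skip-and-continue implementations mentioned above; the application id and bootstrap servers are placeholders:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
import org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestamp;

public class SkipBadRecordsConfig {
    public static Properties streamsConfig() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "skip-bad-records-demo"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // placeholder
        // 1. Skip records that cannot be deserialized instead of failing the thread.
        props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                  LogAndContinueExceptionHandler.class);
        // 2. Skip records whose timestamp is invalid instead of failing.
        props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                  LogAndSkipOnInvalidTimestamp.class);
        return props;
    }
}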

zookeeper ssl alert “alert bad certificate”

Hi there! I am using Kafka (version 2.3.0) and Zookeeper (version 3.5.5-3) - the stable version is 3.6.3. When I test the SSL of my Zookeeper using this command: openssl s_client -showcerts -connect 127.0.0.1:2280 -CAfile /certs/ca-chain.cert.pem I am getting this error: 140371409225024:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:../ssl/record/rec_layer_s3.c:1543:SSL alert number 42 But if I install Zookeeper version 3.5.7 or later, I can add this in my zoo.cnf or zookeeper.properties: ssl.clientAuth=want and then I no longer see any SSL errors. Any tips/suggestions on how to fix this SSL error without upgrading? (I don't want to upgrade at the moment, to avoid other conflicts with Kafka Cruise Control and others.) Thanks in advance! Best regards, John Mark Causing

Re: Kafka Mirror Maker 2 - source topic keep getting created

Have you tried running mm2 in a distributed mode instead of standalone/dedicated? Try using this file below that I have working on an AWS EC2 instance: name=mm2 clusters = source, target source.bootstrap.servers = source:9092 target.bootstrap.servers = target:9092 group.id = mm2 ## all replication factors have to be greater than min.insync.replicas = 2, which is the default msk configuration. source.config.storage.replication.factor = 4 target.config.storage.replication.factor = 4 source.offset.storage.replication.factor = 4 target.offset.storage.replication.factor = 4 source.status.storage.replication.factor = 4 target.status.storage.replication.factor = 4 source->target.enabled = true target->source.enabled = false offset-syncs.topic.replication.factor = 4 heartbeats.topic.replication.factor = 4 checkpoints.topic.replication.factor = 4 topics = .* tasks.max = 2 replication.factor = 4 refresh.topics.enabled = true sync.topic.configs.enab...

Re: Kafka 2.8.0 "KRaft" - advertised.listeners port mismatch?

On Thu, Aug 12, 2021 at 11:44 AM code@uweeisele.eu < code@uweeisele.eu > wrote: > this is a bug which I also have already noticed. This has already been > fixed ( https://github.com/apache/kafka/pull/10935 ) and it will be > released with Kafka 3.0.0 ( > https://issues.apache.org/jira/browse/KAFKA-13003 ). Thank you for the reply (and for contributing the fix)! This may be a better question for the development list, but I wonder if this fix would be a candidate for inclusion in a 2.8.1 release, and (if so) what the process would be to backport it. Regards, Mike

Re: Kafka Streams leave group behaviour

You are right, Uwe. Kafka Streams won't leave the group, regardless of dynamic or static membership. If you want to have a fast scale down, consider trying static membership and use the admin command `removeMemberFromGroup` when you need to rescale. Boyang On Thu, Aug 12, 2021 at 4:37 PM Lerh Chuan Low < lerhchuan@gmail.com > wrote: > I think you may have stumbled upon this: > https://issues.apache.org/jira/browse/KAFKA-4881 . 1 thing that you could > try is using static membership - we have yet to try that though so can't > comment yet on how that might work out. > > On Thu, Aug 12, 2021 at 11:29 PM code@uweeisele.eu < code@uweeisele.eu > > wrote: > > > Hello all, > > > > I have a question about the Group Membership lifecycle of Kafka Streams, > > or more specific about when Kafka Streams does leave the consumer group > (in > > case of dynamic membership). > > > > My expectation wa...
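
For the static-membership path, the Java AdminClient exposes that operation as removeMembersFromConsumerGroup. A minimal sketch, with placeholder group id, group.instance.id and broker address:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

public class RemoveStaticMemberExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // Evict one static member (identified by its group.instance.id) so its
            // tasks are reassigned immediately instead of waiting for the session timeout.
            RemoveMembersFromConsumerGroupOptions options =
                new RemoveMembersFromConsumerGroupOptions(
                    List.of(new MemberToRemove("streams-instance-1"))); // placeholder instance id
            admin.removeMembersFromConsumerGroup("my-streams-app", options).all().get();
        }
    }
}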

Re: Kafka Streams leave group behaviour

I think you may have stumbled upon this: https://issues.apache.org/jira/browse/KAFKA-4881 . 1 thing that you could try is using static membership - we have yet to try that though so can't comment yet on how that might work out. On Thu, Aug 12, 2021 at 11:29 PM code@uweeisele.eu < code@uweeisele.eu > wrote: > Hello all, > > I have a question about the Group Membership lifecycle of Kafka Streams, > or more specific about when Kafka Streams does leave the consumer group (in > case of dynamic membership). > > My expectation was, that a call to the method KafkaStreams.close() also > sends a LeaveGroup request to the coordination (if dynamic membership is > used). However, its seems that this is not the case (at least in my case > the request was not send). Only if I explicitly call > KafkaStreams.removeStreamThread() a LeaveGroup request is sent to the > coordinator. I used the WordCount example located in > https://git...