Posts

Showing posts from September, 2021

Re: question about mm2 on consumer group offset mirroring

Hey Calvin, the property you're looking for is emit.checkpoints.interval.seconds. That's how often MM2 will write checkpoints, which include consumer group offsets. Ryanne On Thu, Sep 30, 2021, 9:18 AM Calvin Chen < pingc.sh@hotmail.com > wrote: > Hi all > > I have a question about MirrorMaker 2, on the consumer group offset > mirroring: what is the duration for MM2 to detect a consumer group offset > change and mirror it to the remote Kafka consumer group? > > I have my MM2 config defined as below: > > > {{ kafka01_name }}->{{ kafka02_name }}.sync.group.offsets.enabled = true > {{ kafka02_name }}->{{ kafka01_name }}.sync.group.offsets.enabled = true > > refresh.topics.interval.seconds=10 > refresh.groups.interval.seconds=10 > > so I would expect the consumer group offset mirroring to happen around > every 10 seconds, but during testing I see that sometimes consumer group offset > mirroring is qu...
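For reference, a minimal MM2 properties sketch of the intervals involved (cluster aliases `A` and `B` and the broker addresses are placeholders, and the default values are from memory, so double-check against your Kafka version's docs):

```properties
clusters = A, B
A.bootstrap.servers = a-broker:9092
B.bootstrap.servers = b-broker:9092

A->B.enabled = true
A->B.sync.group.offsets.enabled = true

# How often MirrorCheckpointConnector emits checkpoints (default 60s).
emit.checkpoints.interval.seconds = 10
# How often translated offsets are written to the target consumer group (default 60s).
sync.group.offsets.interval.seconds = 10
```

Note that refresh.topics.interval.seconds and refresh.groups.interval.seconds (from the original question) control how often MM2 re-scans for new topics/groups, not how often offsets are checkpointed, which may explain the variable delay observed.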

question about mm2 on consumer group

Hi all, I see that for MM2, a mirrored topic name has the source Kafka cluster put as a prefix, but a consumer group doesn't. Why not name consumer groups the same way as topics? Is it because consumer group/offset mirroring happens not only from source-kafka to remote-kafka but also from remote-kafka to source-kafka, so we need to make sure consumer group names are the same across Kafkas? I ran a test and it looks like consumer group offset mirroring is one-way: say I have a topic/acl (consumer group) created in source-kafka, and I produce and consume messages from source-kafka; I can see the topic and consumer group offset are mirrored to remote-kafka, which is very good. But when I produce to source-kafka and consume from remote-kafka, I see the consumer group offset is updated in remote-kafka but not in source-kafka. In my MM2 config I put them cross-mirrored, so in this case, even if I consume from remote-kafka, I would expect source-kafka to get the consumer group offset updated, but it is not, what...
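On the prefix part of the question: the source-cluster prefix on mirrored topics comes from the replication policy, not from anything group-specific; consumer groups keep their original names under the default policy. A hedged sketch of the relevant setting (availability of the identity policy depends on your Kafka version):

```properties
# DefaultReplicationPolicy prefixes mirrored topics with the source alias,
# e.g. "source-kafka.mytopic"; consumer group names are left unchanged.
replication.policy.class = org.apache.kafka.connect.mirror.DefaultReplicationPolicy

# Newer Kafka releases also ship an identity policy that drops the prefix:
# replication.policy.class = org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```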

question about mm2 on consumer group offset mirroring

Hi all, I have a question about MirrorMaker 2, on the consumer group offset mirroring: what is the duration for MM2 to detect a consumer group offset change and mirror it to the remote Kafka consumer group? I have my MM2 config defined as below:

{{ kafka01_name }}->{{ kafka02_name }}.sync.group.offsets.enabled = true
{{ kafka02_name }}->{{ kafka01_name }}.sync.group.offsets.enabled = true
refresh.topics.interval.seconds=10
refresh.groups.interval.seconds=10

so I would expect the consumer group offset mirroring to happen around every 10 seconds, but during testing I see that sometimes consumer group offset mirroring is quick and sometimes it takes minutes, so I would like to know how offsets are mirrored and why there is a time difference. Thanks -Calvin

Re: Question about controller behavior when fenced

Hi Andrew, Every broker receives a notification about the /controller znode being created and will also receive the latest epoch. The old controller will get some partitions assigned as a result of being considered part of the cluster by the new controller. This is part of the normal partition allocation duties of a controller. The old controller will not shut down; it will become one of the non-controller brokers, if I may call it that. Regards, Neeraj On Thursday, 30 September, 2021, 12:12:08 am GMT+10, Andrew Grant < andrewgrant243@gmail.com > wrote: Hi all, I had a question about controller behavior when fenced. From my understanding epoch numbers are used to fence brokers who might think they're still the controller but really in the meantime a new broker has been elected as the new controller. My question is, how does a broker realize it's no longer the controller? And when it does realize this, does it shut down or does it maybe log someth...

mm2: size of target vs source topic

Hi all, I'm setting up two new clusters today and using MM2 to replicate data from clusterA --> clusterB. I noticed that the target topic has the same number of records but is smaller by about 5x: the source topic is 6.975 MB and the target topic is 1.136 MB, with the same number of records. Both clusters are using gzip. Any idea why this is? Thanks,
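One possible (unconfirmed) explanation is recompression: MM2 re-produces the records, so the target's on-disk size reflects the MM2 producer's batching and compression, not the source producer's, and larger batches usually compress better with gzip. A sketch of the alias-prefixed client overrides that shape this on the target side (alias `clusterB` and the values are placeholders):

```properties
# Per-cluster client overrides in the MM2 properties file.
clusterB.producer.compression.type = gzip
clusterB.producer.batch.size = 262144
clusterB.producer.linger.ms = 100
```

If the sizes need to be compared fairly, comparing record counts and uncompressed payload sizes rather than segment sizes on disk avoids this effect.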

MM2 unreachable clusters

Hello, I have 50 on-prem field devices that each function as a single-broker Kafka cluster. I am replicating to a multi-broker "master" cluster in a datacenter. I've been doing this by running the original MirrorMaker in the datacenter. (Because of concerns about unauthorized data access, we cannot run MirrorMaker on the field devices.) This has a lot of overhead because I am running an instance of MirrorMaker for every remote node. I have been testing MirrorMaker 2 to reduce this overhead because it supports several clusters in the configuration. However, it seems like when any of the clusters are unreachable, none of the replication works. Is there a workaround for this, or a better way to replicate topics from several remote clusters? Here's an example of my mm2 properties:

bootstrap.server= 192.0.2.1:9092
replication.factor=2
checkpoints.topic.replication.factor=1
heartbeats.topic.replication.factor=1
offset-syncs.topic.replication.factor=1
offset.st...
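For comparison, a hedged sketch of what a multi-cluster MM2 properties file usually looks like (aliases and addresses are placeholders): each cluster gets its own alias-prefixed bootstrap.servers, and each replication flow is enabled pairwise:

```properties
clusters = dc, field1, field2
dc.bootstrap.servers = dc-broker:9092
field1.bootstrap.servers = 192.0.2.1:9092
field2.bootstrap.servers = 192.0.2.2:9092

field1->dc.enabled = true
field2->dc.enabled = true

replication.factor = 2
checkpoints.topic.replication.factor = 1
heartbeats.topic.replication.factor = 1
offset-syncs.topic.replication.factor = 1
```

One commonly suggested workaround for the unreachable-cluster problem is to run a separate MM2 instance (or separate properties file) per source cluster, so that an unreachable cluster only stalls its own flow rather than the shared process.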

Question about controller behavior when fenced

Hi all, I had a question about controller behavior when fenced. From my understanding epoch numbers are used to fence brokers who might think they're still the controller but really in the meantime a new broker has been elected as the new controller. My question is, how does a broker realize it's no longer the controller? And when it does realize this, does it shutdown or does it maybe log something and continue on with its other duties? I suspect the latter but wanted to check. Links to any resources that might answer the question would also be helpful!! Thanks, Andrew -- Andrew Grant 8054482621

Not able to replicate groups in Mirror Maker 2

Hi, Initially I created a topic named "quickstart-events", produced some messages into it, then consumed it from the kafka-console-consumer with consumer group "quickstartGroup", and now I want to replicate the group from source to destination. When I run the describe command for the group in the source cluster:

~/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group quickstartGroup

the output I'm getting is:

Consumer group 'quickstartGroup' has no active members.

GROUP           TOPIC             PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
quickstartGroup quickstart-events 1          9               12              3    -            -     -
quickstartGroup quickstart-events 0          9               12              3    -            -     -

Here, the topic is getting replicated but when I run the...
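For group offsets to show up on the target, the checkpoint and group-sync settings need to be enabled for the flow, and the group must be matched by the groups filter. A hedged properties sketch (aliases `source` and `target` are placeholders):

```properties
source->target.enabled = true
# Translate and apply checkpointed offsets to the target cluster's group.
source->target.sync.group.offsets.enabled = true
# Which consumer groups are checkpointed (default matches all groups).
source->target.groups = quickstartGroup
# How often MM2 re-scans the source for matching groups.
refresh.groups.interval.seconds = 60
```

Note that offset syncing is generally only applied while the group is inactive on the target, so an actively consuming target group would not be updated.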

Re: [kafka-clients] Re: [DISCUSS] Before you Vote for Apache Kafka RCs, Watch This!

Thanks for the feedback, David. On Tue, Sep 28, 2021 at 4:05 AM David Jacot < djacot@confluent.io > wrote: > Israel, > > I am not aware of any apache docker repository. It seems preferable > to keep it in your own repository at this stage. Otherwise, you would > depend on committers/PMC members to update the image which is > not ideal for your project, I suppose. > > Best, > David > > On Sat, Sep 25, 2021 at 12:56 PM Israel Ekpo < israelekpo@gmail.com > wrote: > >> David >> >> I have one more question: do we have a Docker Hub account where the >> Docker images can be hosted? >> >> I am fine using my personal account but it would be better if it were >> something like apache/kafka:3.1.0-rc2 or just kafka/3.1.0-rc2 instead of >> izzyacademy/kafka:3.1.0-rc2 >> >> I can still use my personal account but it would look better and give >> others opportu...

Re: [kafka-clients] Re: [DISCUSS] Before you Vote for Apache Kafka RCs, Watch This!

Israel, I am not aware of any apache docker repository. It seems preferable to keep it in your own repository at this stage. Otherwise, you would depend on committers/PMC members to update the image which is not ideal for your project, I suppose. Best, David On Sat, Sep 25, 2021 at 12:56 PM Israel Ekpo < israelekpo@gmail.com > wrote: > David > > I have one more question: do we have a Docker Hub account where the Docker > images can be hosted? > > I am fine using my personal account but it would be better if it were > something like apache/kafka:3.1.0-rc2 or just kafka/3.1.0-rc2 instead of > izzyacademy/kafka:3.1.0-rc2 > > I can still use my personal account but it would look better and give > others opportunities to manage the images as well without depending on my > personal Docker Hub account > > Your initial feedback was great and I am going to trim the prep process by > prepping some base images to cut...

Differences in java API and Kafka Connect Publishers?

Hi folks, Here's the question: rather than developing a model for dividing the work among publishers, wouldn't the publisher offset internal topic manage that? It is supposed to help the publisher understand where it is in publishing events and where to start looking for the next event to publish. This would only work for a three-publisher model if the publishers shared the internal publisher offset topic. I'm assuming that isn't the case, and that each publisher instance would have its own internal offset topic? Now, I know the answer to these questions using the standard Java Publisher API, but what are the answers when using Kafka Connect publishers? Best Regards, -- Doug Whitfield | Enterprise Architect, OpenLogic, Perforce Software

MirrorMaker 2 - sync newly created topics automatically

Hi Kafka Users, I'm running MM2 to replicate data from one region to another. It works great, except that newly created topics are not being picked up automatically, and I need to restart MM2 in order to sync those topics as well. Is there a way to configure MM2 to pick up newly created topics automatically? Thanks, Tomer
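MM2 does poll the source for new topics on an interval, so in principle no restart should be needed; a hedged sketch of the settings involved (aliases are placeholders, and the default interval is from memory):

```properties
# How often MM2 re-scans the source cluster for topics matching the filter
# (the default is fairly long, on the order of 10 minutes).
refresh.topics.interval.seconds = 60

# Make sure the topic filter actually matches the newly created topics.
source->target.topics = .*
```

If new topics still require a restart even with these set, it may be a version-specific issue; checking the release notes of a newer MM2 release before restarting habitually would be worthwhile.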

Re: [kafka-clients] Re: [DISCUSS] Before you Vote for Apache Kafka RCs, Watch This!

David I have one more question: do we have a Docker Hub account where the Docker images can be hosted? I am fine using my personal account but it would be better if it were something like apache/kafka:3.1.0-rc2 or just kafka/3.1.0-rc2 instead of izzyacademy/kafka:3.1.0-rc2 I can still use my personal account but it would look better and give others opportunities to manage the images as well without depending on my personal Docker Hub account Your initial feedback was great and I am going to trim the prep process by prepping some base images to cut down the Docker image build elapsed time Thank you again Sincerely Israel On Fri, Sep 24, 2021 at 6:45 PM Israel Ekpo < israelekpo@gmail.com > wrote: > > Thanks for the feedback David. I really appreciate it. > > I would work on eliminating some items in the docker container that are > not needed and making this as a base image that already has > those dependencies pre-baked. ...

Re: [kafka-clients] Re: [DISCUSS] Before you Vote for Apache Kafka RCs, Watch This!

Thanks for the feedback David. I really appreciate it. I would work on eliminating some items in the docker container that are not needed and making this as a base image that already has those dependencies pre-baked. That could significantly trim the elapsed time between when they fire up the container and when they can get started with the validation. Let me update this and will share another update again soon. I appreciate this feedback Thanks. On Fri, Sep 24, 2021 at 10:51 AM 'David Jacot' via kafka-clients < kafka-clients@googlegroups.com > wrote: > Hi Israel, > > Thank you for this initiative. > > The tool seems pretty cool and I think that it could be useful as well. I > haven't tried it yet though. > > I just watched the video and I found the part which requires to build > the docker container really long, perhaps even longer than the > validation itself (running the test apart). Could we simplify ...

Re: `java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS` after upgrading `Kafka-clients` from 2.5.0 to 3.0.0

I've had something similar on a different embedded Kafka project. Most likely your issue is that you are putting kafka-clients 3.0.0 on the classpath alongside the Kafka server in version 2.7.1, which is the version brought in by your spring-kafka-test dependency. Since the Kafka server itself depends on kafka-clients, if you upgrade kafka-clients but not the server on the same classpath, you can get code mismatches like this. I think you need to wait for a new version of spring-kafka-test. You can try bumping the org.apache.kafka:kafka_2.13 dependency to 3.0.0, but there's no guarantee it will work. On Fri, Sep 24, 2021 at 09:24, Bruno Cadonna < cadonna@apache.org > wrote: > Hi Bruce, > > I do not know the specific root cause of your errors but what I found is > that Spring 2.7.x is compatible with clients 2.7.0 and 2.8.0, not with > 3.0.0 and 2.8.1: > > https://spring.io/projects/spring-kafka > > Best. > Bruno ...
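In build terms, the advice amounts to keeping kafka-clients pinned to the same Kafka version as the embedded broker rather than upgrading clients alone. A hedged Maven sketch (the 2.7.1 version here is an assumption matching the broker version mentioned above):

```xml
<!-- spring-kafka-test transitively brings in the Kafka broker (kafka_2.13)
     at a fixed version; overriding only kafka-clients mixes server and
     client jars on one classpath and causes NoSuchFieldError-style breaks. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <!-- Keep aligned with the embedded broker version, not the latest release. -->
  <version>2.7.1</version>
</dependency>
```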

Re: [DISCUSS] Before you Vote for Apache Kafka RCs, Watch This!

Hi Israel, Thank you for this initiative. The tool seems pretty cool and I think that it could be useful as well; I haven't tried it yet though. I just watched the video and I found the part that requires building the docker container really long, perhaps even longer than the validation itself (running the tests apart). Could we simplify this? Automating the validation process is a great and risky thing at the same time. Imagine if we introduced a bug in the scripts: we could all miss an issue in the RC. This is the advantage of our current, boring and manual process. I feel like we would always need some sort of manual sanity checks anyway. It is hard to say if this will really help to have more people validating the release candidates. However, we could advertise it again when we publish RCs for 3.1.0, by mentioning it in the thread, and see if it helps folks. Cheers, David On Fri, Sep 24, 2021 at 2:06 PM Israel Ekpo < israelekpo@gmail.com ...

Re: [DISCUSS] Before you Vote for Apache Kafka RCs, Watch This!

Hello Everyone, Please take a moment to review this and share your thoughts. It would be great to have more community involvement during release candidate validations. I am wondering if this should be split up between the site docs repo and the core code repo, or kept just in the code repo. Should we include this in future release candidate voting notifications to the community? Also, do you think we even need this at all? When you have a moment, please let me know. Thanks On Wed, Sep 15, 2021 at 7:18 PM Israel Ekpo < israelekpo@gmail.com > wrote: > Before you Vote for Apache Kafka RCs, Watch This! > > https://youtu.be/T1VqFszLuQs > > Hello Kafka Community Members and welcome to the Apache Kafka Release > Party! > > As part of an effort (KAFKA-9861) to get more community participation > during release candidate (RC) validations, we have created the following > steps to hopefully allow more community members to participa...

Re: `java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS` after upgrading `Kafka-clients` from 2.5.0 to 3.0.0

Hi Bruce, I do not know the specific root cause of your errors but what I found is that Spring 2.7.x is compatible with clients 2.7.0 and 2.8.0, not with 3.0.0 and 2.8.1: https://spring.io/projects/spring-kafka Best. Bruno On 24.09.21 00:25, Chang Liu wrote: > Hi Kafka users, > > I start running into the following error after upgrading `Kafka-clients` from 2.5.0 to 3.0.0. And I see the same error with 2.8.1. I don't see a working solution by searching on Google: https://stackoverflow.com/questions/46914225/kafka-cannot-create-embedded-kafka-server < https://stackoverflow.com/questions/46914225/kafka-cannot-create-embedded-kafka-server > > > This looks like backward incompatibility of Kafka-clients. Do you happen to know a solution for this? > > ``` > java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS > > at kafka.server.Defaults$.<init>(KafkaConfig.scala:242) > at kafka.server.Defaults$.<cl...

Re: Internal Connect REST endpoints are insecure

I worked in this space, but I didn't take the https options within Kafka Connect; instead I deployed Kafka Connect in a Kubernetes cluster and leveraged an Ingress exposing https to allow clients access to my Kafka Connect REST API. Thanks, Vignesh On Fri, Sep 17, 2021 at 10:31 AM Kuchansky, Valeri < VKuchansky@rbbn.com > wrote: > Hi Community Members, > > I am following the available documents to have the kafka-connect REST API > secured, in particular this one: > https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface > I do not see that using any of the listeners.https.ssl.* options makes any > difference. > I would appreciate any help in creating a valid configuration. > My running Kafka version is 2.5. > > Thanks, > Valeri
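For the Connect worker itself, the KIP-208 settings look roughly like the sketch below (paths and passwords are placeholders). A common pitfall is that the listeners.https.ssl.* options only apply when an https listener is actually declared in the worker's listeners:

```properties
# connect-distributed.properties
listeners = https://0.0.0.0:8443

listeners.https.ssl.keystore.location = /path/to/keystore.jks
listeners.https.ssl.keystore.password = changeit
listeners.https.ssl.key.password = changeit
listeners.https.ssl.truststore.location = /path/to/truststore.jks
listeners.https.ssl.truststore.password = changeit
```

Terminating TLS outside the worker (as with the Kubernetes Ingress approach above) is a reasonable alternative when the worker-side options prove awkward.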

`java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS` after upgrading `Kafka-clients` from 2.5.0 to 3.0.0

Hi Kafka users, I started running into the following error after upgrading `kafka-clients` from 2.5.0 to 3.0.0, and I see the same error with 2.8.1. I don't see a working solution by searching on Google: https://stackoverflow.com/questions/46914225/kafka-cannot-create-embedded-kafka-server This looks like a backward incompatibility in kafka-clients. Do you happen to know a solution for this?

```
java.lang.NoSuchFieldError: DEFAULT_SASL_ENABLED_MECHANISMS
at kafka.server.Defaults$.<init>(KafkaConfig.scala:242)
at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
at kafka.server.KafkaConfig$.<init>(KafkaConfig.scala:961)
at kafka.server.KafkaConfig$.<clinit>(KafkaConfig.scala)
at kafka.server.KafkaConfig.LogDirProp(KafkaConfig.scala)
at org.springframework.kafka.test.EmbeddedKafkaBroker.afterPropertiesSet(EmbeddedKafkaBroker.java:298)
at org...
```

Re: [ANNOUNCE] Apache Kafka 3.0.0

Thank you for your hard work, Konstantine!! Best, Dongjin On Thu, Sep 23, 2021 at 1:14 AM Guozhang Wang < wangguoz@gmail.com > wrote: > Kudos to Konstantine! Congrats to everyone. > > On Tue, Sep 21, 2021 at 9:01 AM Konstantine Karantasis < > kkarantasis@apache.org > wrote: > > > The Apache Kafka community is pleased to announce the release for Apache > > Kafka 3.0.0 > > > > It is a major release that includes many new features, including: > > > > * The deprecation of support for Java 8 and Scala 2.12. > > * Kafka Raft support for snapshots of the metadata topic and other > > improvements in the self-managed quorum. > > * Deprecation of message formats v0 and v1. > > * Stronger delivery guarantees for the Kafka producer enabled by default. > > * Optimizations in OffsetFetch and FindCoordinator requests. > > * More flexible MirrorMaker 2 configuration and deprecat...

Re: [ANNOUNCE] Apache Kafka 3.0.0

Kudos to Konstantine! Congrats to everyone. On Tue, Sep 21, 2021 at 9:01 AM Konstantine Karantasis < kkarantasis@apache.org > wrote: > The Apache Kafka community is pleased to announce the release for Apache > Kafka 3.0.0 > > It is a major release that includes many new features, including: > > * The deprecation of support for Java 8 and Scala 2.12. > * Kafka Raft support for snapshots of the metadata topic and other > improvements in the self-managed quorum. > * Deprecation of message formats v0 and v1. > * Stronger delivery guarantees for the Kafka producer enabled by default. > * Optimizations in OffsetFetch and FindCoordinator requests. > * More flexible MirrorMaker 2 configuration and deprecation of MirrorMaker > 1. > * Ability to restart a connector's tasks on a single call in Kafka Connect. > * Connector log contexts and connector client overrides are now enabled by > default. > * Enhanced semant...

Log cleaner is not cleaning log of a partition

Hi All, I have a Kafka cluster with 3 brokers; my Kafka version is 2.5.0. I have a topic named "order" with 3 partitions and a replication factor of 3. The cleanup policy of the topic is "compact,delete". The topic is evenly distributed across all brokers:

broker 0 => partitions 0,1,2
broker 1 => partitions 0,1,2
broker 2 => partitions 0,1,2

What I have noticed is that the log cleaner on broker 1 is not cleaning the log of partition 0. It is cleaning all other partitions of the topic on that broker, and all partitions of that topic on the other brokers. I have checked the Kafka log carefully and did not find any log cleaning message for partition 0, nor any error logged. This is causing the log of the partition to grow continuously, eventually filling the disk space. This happens randomly for a partition on one of the brokers; after a broker restart the log cleaner starts cleaning the partition again, but some other partition gets affected. Any hint what cau...
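Not a diagnosis, but for anyone tuning this: the topic-level knobs below control when the cleaner considers a compact,delete partition dirty enough to clean (values are illustrative, defaults from memory):

```properties
cleanup.policy = compact,delete
# Fraction of dirty bytes before a partition becomes eligible for cleaning
# (default 0.5); a partition below this ratio is skipped, not broken.
min.cleanable.dirty.ratio = 0.5
# The active segment is never cleaned; rolling segments sooner exposes
# more of the log to the cleaner.
segment.ms = 3600000
```

Checking the broker's log-cleaner.log for a dead cleaner thread is also a common first step; a cleaner thread that died on one broker would match the per-broker, fixed-by-restart symptom described here.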

Re: CVE-2021-38153: Timing Attack Vulnerability for Apache Kafka Connect and Clients

Hi Randall, Could you please share the JIRA ticket or the fixing commit? It might help to evaluate the impact better. Thank you! Ivan On Tue, 21 Sept 2021 at 19:37, Randall Hauch < rhauch@apache.org > wrote: > Severity: moderate > > Description: > > Some components in Apache Kafka use `Arrays.equals` to validate a > password or key, which is vulnerable to timing attacks that make brute > force attacks for such credentials more likely to be successful. Users > should upgrade to 2.8.1 or higher, or 3.0.0 or higher where this > vulnerability has been fixed. The affected versions include Apache > Kafka 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, > 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.7.1, and > 2.8.0. > > Credit: > > Apache Kafka would like to thank J. Santilli for reporting this issue. > > References: > https://kafka.apache.org/cve-list >

CVE-2021-38153: Timing Attack Vulnerability for Apache Kafka Connect and Clients

Severity: moderate Description: Some components in Apache Kafka use `Arrays.equals` to validate a password or key, which is vulnerable to timing attacks that make brute force attacks for such credentials more likely to be successful. Users should upgrade to 2.8.1 or higher, or 3.0.0 or higher where this vulnerability has been fixed. The affected versions include Apache Kafka 2.0.0, 2.0.1, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.6.0, 2.6.1, 2.6.2, 2.7.0, 2.7.1, and 2.8.0. Credit: Apache Kafka would like to thank J. Santilli for reporting this issue. References: https://kafka.apache.org/cve-list

[ANNOUNCE] Apache Kafka 3.0.0

The Apache Kafka community is pleased to announce the release for Apache Kafka 3.0.0. It is a major release that includes many new features, including:

* The deprecation of support for Java 8 and Scala 2.12.
* Kafka Raft support for snapshots of the metadata topic and other improvements in the self-managed quorum.
* Deprecation of message formats v0 and v1.
* Stronger delivery guarantees for the Kafka producer enabled by default.
* Optimizations in OffsetFetch and FindCoordinator requests.
* More flexible MirrorMaker 2 configuration and deprecation of MirrorMaker 1.
* Ability to restart a connector's tasks on a single call in Kafka Connect.
* Connector log contexts and connector client overrides are now enabled by default.
* Enhanced semantics for timestamp synchronization in Kafka Streams.
* Revamped public API for Stream's TaskId.
* Default serde becomes null in Kafka Streams and several other configuration changes.

You may read a more detailed list of ...