
Posts

Showing posts from July, 2022

An existing connection was forcibly closed by the remote host

Hi, I'm starting to learn Kafka and I've hit my first problem. I started ZooKeeper and Kafka, both on localhost, and everything seems fine. But when I try to execute the command:
zookeeper-shell.bat localhost:2181 ls /brokers/ids
I get this error:
Connecting to localhost:2181
WATCHER:: WatchedEvent state:SyncConnected type:None path:null
[0]
[2022-07-30 19:54:22,524] ERROR Exiting JVM with code 0 (org.apache.zookeeper.util.ServiceUtils)
And in the ZooKeeper log:
[2022-07-30 19:54:53,335] INFO Expiring session 0x10000190aa20001, timeout of 30000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2022-07-30 19:58:05,393] WARN Close of session 0x10000190aa20002 (org.apache.zookeeper.server.NIOServerCnxn)
java.io.IOException: An existing connection was forcibly closed by the remote host
at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method)
at java.base/sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at java.base/sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.jav...

Expand embedded cluster at the Integration test

Hello, I need to test a case with Kafka cluster expansion in an integration test. I see that EmbeddedKafkaCluster and IntegrationTestHarness are the classes commonly used for integration tests, but they don't support expanding the test cluster. What do you suggest using in this case? Does it make sense to create an issue & PR to add a cluster expansion function to EmbeddedKafkaCluster or IntegrationTestHarness? Or maybe I'm missing something and cluster expansion functionality was intentionally not added to these classes? -- With best regards, Taras Ledkov

Re: Apache Kafka acknowledgement

Hi Raj, The Kafka design is based on fully decoupled producers and consumers, where the responsibility of the producer ends after a record has been successfully produced. Since a producer is not aware of other applications that might be consuming the record, it can continue with its own flow. The same is true for a consumer: consumers are usually not aware of which application produced a record, only where to read the record and how to deserialize and interpret it. This allows new applications to be added as producers or consumers of the data. This is one of the reasons why Kafka does not contain a built-in feature to send an acknowledgement to the producer when the record has been consumed. It only has built-in support to acknowledge delivery of a record to the partition, which is controlled by the acks setting in a producer. If the design requires an acknowledgement that the record has been picked up by a consumer, then the easiest way I can think of is to create a...
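
As a hedged illustration of the broker-level acknowledgement described above (the original question mentions Python; this sketch uses the Java client, with a hypothetical broker address and topic name), acks plus a send callback is the closest built-in delivery signal a producer gets:

```java
// Sketch only: broker address "localhost:9092" and topic "orders" are hypothetical.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AckedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all: the broker acknowledges only after all in-sync replicas have the record.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("orders", "key-1", "value-1");
            // The callback reports delivery to the partition; it says nothing about consumers.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    System.err.println("Delivery failed: " + exception);
                } else {
                    System.out.printf("Delivered to %s-%d at offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            producer.flush();
        }
    }
}
```

An acknowledgement that a consumer actually read the record would have to be built on top, for example with a separate reply topic, as the answer goes on to suggest.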

Re: Kafka certificate monitoring

Hello Sandeep, When using Strimzi and Kafka, I created a small app that parses the cert file and pushes the validity period as a Prometheus metric. You can run the app periodically in a cron job; you would mount the secret within the job pod and parse the file. I was using the Prometheus push gateway for that, but you can also make it run within a deployment and expose the metrics to Prometheus. Fares On Thu, Jul 28, 2022 at 04:42, Luke Chen < showuon@gmail.com > wrote: > Hi Sandeep, > > AFAIK, Kafka doesn't expose this kind of metrics. > I did a quick search, and found there's a similar request in Strimzi. > https://github.com/strimzi/strimzi-kafka-operator/issues/3761 > > Maybe you can help contribute it? Either to Kafka or to Strimzi? :) > > Thank you. > Luke > > On Wed, Jul 27, 2022 at 11:28 PM Sandeep M < sunyar123@gmail.com > wrote: > > > Hi Team, > > > > I using Kafka with st...
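
A minimal sketch of the cert-parsing approach described above, using only the JDK (the file path is a hypothetical mounted-secret location; pushing the value to Prometheus is left out):

```java
// Sketch only: the certificate path is a hypothetical mounted-secret location.
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.time.Duration;
import java.time.Instant;

public class CertExpiryCheck {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0] : "/mnt/cluster-ca-cert/ca.crt";
        try (InputStream in = new FileInputStream(path)) {
            X509Certificate cert = (X509Certificate) CertificateFactory
                    .getInstance("X.509")
                    .generateCertificate(in);
            // Remaining validity; a cron job could push this as a gauge to a Prometheus push gateway.
            Duration remaining = Duration.between(Instant.now(), cert.getNotAfter().toInstant());
            System.out.printf("Subject: %s%n", cert.getSubjectX500Principal());
            System.out.printf("Not after: %s (%d days remaining)%n",
                    cert.getNotAfter(), remaining.toDays());
        }
    }
}
```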

Re: question: kafka stream Tumbling Window can't close when no producer sending message

Hello, Yes, this is correct. There is a difference between what we call "stream time" and regular "wall-clock time". All the windowing operations need to be deterministic, otherwise your results would depend on when you run your program. For that reason, we have "stream time", which takes its clock from the incoming records' timestamps instead of the system clock. Stopping the producer means you also stop sending new record timestamps, and you have effectively paused the clock. As a consequence, the open windows can't close (nor can other temporal operations continue). If you're trying to write a test, my suggestion is to send a dummy record (to each partition) with a much later timestamp, which will cause stream time to advance, close out all your windows, and flush your results to the outputs. Now that you know the terminology, you'll be able to find documentation and presentations online about "stream time". It...
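
A minimal test sketch of the dummy-record trick, assuming a hypothetical 5-minute tumbling-window count topology with suppression; the last record's far-future timestamp advances stream time so the window closes and the final result is flushed:

```java
// Sketch only: topic names, window size, and timestamps are hypothetical.
import java.time.Duration;
import java.time.Instant;
import java.util.Properties;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.*;
import org.apache.kafka.streams.kstream.*;

public class StreamTimeDemoTest {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
               .count()
               // Emit only the final result once stream time passes the window end.
               .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
               .toStream((windowedKey, count) -> windowedKey.key())
               .to("output", Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "stream-time-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // unused by the test driver

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            TestInputTopic<String, String> in =
                    driver.createInputTopic("input", new StringSerializer(), new StringSerializer());
            TestOutputTopic<String, Long> out =
                    driver.createOutputTopic("output", new StringDeserializer(), new LongDeserializer());

            Instant start = Instant.parse("2022-07-01T00:00:00Z");
            in.pipeInput("key", "a", start);
            in.pipeInput("key", "b", start.plusSeconds(60));
            // Dummy record with a much later timestamp: stream time jumps forward,
            // the first window closes, and its suppressed count is flushed to "output".
            in.pipeInput("key", "dummy", start.plus(Duration.ofHours(1)));

            out.readKeyValuesToList().forEach(System.out::println);
        }
    }
}
```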

Re: Kafka certificate monitoring

Hi Sandeep, AFAIK, Kafka doesn't expose this kind of metrics. I did a quick search, and found there's a similar request in Strimzi. https://github.com/strimzi/strimzi-kafka-operator/issues/3761 Maybe you can help contribute it? Either to Kafka or to Strimzi? :) Thank you. Luke On Wed, Jul 27, 2022 at 11:28 PM Sandeep M < sunyar123@gmail.com > wrote: > Hi Team, > > I using Kafka with strimzi as kafka operator. I wanted to monitor kafka > certificate expiry. We have Rancher kubernetes management tool which has > built in Prometheus and Grafana. How could we monitor certificate expiry > with current setup. > > Regards, > Sandeep >

[RESULTS] [VOTE] Release Kafka version 3.2.1

The vote for RC3 has passed with eight +1 votes (three binding) and no -1 votes. Here are the results: +1 votes PMC: * Randall Hauch * Rajini Sivaram * Bill Bejeck Committers: None Community: * Christopher Shannon * Federico Valeri * Dongjoon Hyun * Jakub Scholz * Matthew de Detrich 0 Votes: None -1 Votes: None Vote Thread: https://lists.apache.org/thread/kcr2xncr762sqy79rbl83w0hzw85w775 I'll continue with the release process and send out the release announcement over the next few days. Thanks! David Arthur

Re: [VOTE] 3.2.1 RC3

I'm closing out the vote now. Thanks to everyone who voted. The RC passed with the required number of votes. I'll send out the results thread shortly. Cheers, David Arthur On Wed, Jul 27, 2022 at 11:54 AM Bill Bejeck < bbejeck@gmail.com > wrote: > Hi David, > > Thanks for running the release! > > I did the following steps: > > - Validated all signatures and checksums > - Built from source > - Ran all the unit tests > - I spot-checked the doc. I did notice the same version number as > Randal - but I expect that will get fixed when the docs are updated with > the release. > > +1(binding) > > Thanks, > Bill > > On Tue, Jul 26, 2022 at 5:56 PM Matthew Benedict de Detrich > <matthew.dedetrich@aiven.io.invalid> wrote: > > > Thanks for the RC, > > > > I ran the full (unit + integration) tests using Scala 2.12 and 2.13 > across > >...

Apache Kafka acknowledgement

Hi Team! I developed a simple Kafka producer & consumer module. I just want to confirm whether a message is delivered or not. I couldn't find any documentation, reference, or sample code. Could you please guide me on this? Development Language: Python Thanks! Raj

Re: [VOTE] 3.2.1 RC3

Hi David, Thanks for running the release! I did the following steps: - Validated all signatures and checksums - Built from source - Ran all the unit tests - I spot-checked the doc. I did notice the same version number as Randall - but I expect that will get fixed when the docs are updated with the release. +1(binding) Thanks, Bill On Tue, Jul 26, 2022 at 5:56 PM Matthew Benedict de Detrich <matthew.dedetrich@aiven.io.invalid> wrote: > Thanks for the RC, > > I ran the full (unit + integration) tests using Scala 2.12 and 2.13 across > OpenJDK (Linux) 11 and 17 and all tests passed apart from a single one > which is documented at https://issues.apache.org/jira/browse/KAFKA-13514 > > +1 (non binding) > > > > On Fri, Jul 22, 2022 at 3:15 AM David Arthur < davidarthur@apache.org > > wrote: > > > Hello Kafka users, developers and client-developers, > > > > This is...

Kafka certificate monitoring

Hi Team, I am using Kafka with Strimzi as the Kafka operator. I want to monitor Kafka certificate expiry. We have the Rancher Kubernetes management tool, which has built-in Prometheus and Grafana. How could we monitor certificate expiry with the current setup? Regards, Sandeep

Re: [VOTE] 3.2.1 RC3

Thanks for the RC, I ran the full (unit + integration) tests using Scala 2.12 and 2.13 across OpenJDK (Linux) 11 and 17 and all tests passed apart from a single one which is documented at https://issues.apache.org/jira/browse/KAFKA-13514 +1 (non binding) On Fri, Jul 22, 2022 at 3:15 AM David Arthur < davidarthur@apache.org > wrote: > Hello Kafka users, developers and client-developers, > > This is the first release candidate of Apache Kafka 3.2.1. > > This is a bugfix release with several fixes since the release of 3.2.0. A > few of the major issues include: > > * KAFKA-14062 OAuth client token refresh fails with SASL extensions > * KAFKA-14079 Memory leak in connectors using errors.tolerance=all > * KAFKA-14024 Cooperative rebalance regression causing clients to get stuck > > > Release notes for the 3.2.1 release: > https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/RELEASE_NOTES.html > > > ...

Re: [VOTE] 3.2.1 RC3

Hi David, +1 (binding) Verified signatures, ran quickstart with binaries, built from source and verified with quickstart, checked some javadocs. Thanks for the RC, David! Regards, Rajini On Tue, Jul 26, 2022 at 4:32 PM Randall Hauch < rhauch@gmail.com > wrote: > Thanks for the RC, David. > > I was able to successfully complete the following: > > - Installed 3.2.1 RC3 and performed quickstart for broker and > Connect (using Java 17) > - Verified signatures and checksums > - Verified the tag > - Manually compared the release notes to JIRA > - Build release archive from the tag, installed locally, and ran a portion > of quickstart > - Manually spotchecked the Javadocs and release notes linked above > - The site docs at https://kafka.apache.org/32/documentation.html still > reference the 3.2.0 version (as expected), but I verified that putting the > contents of > > https://home.apache.org/...

Re: [VOTE] 3.2.1 RC3

Thanks for the RC, David. I was able to successfully complete the following: - Installed 3.2.1 RC3 and performed quickstart for broker and Connect (using Java 17) - Verified signatures and checksums - Verified the tag - Manually compared the release notes to JIRA - Built the release archive from the tag, installed locally, and ran a portion of quickstart - Manually spot-checked the Javadocs and release notes linked above - The site docs at https://kafka.apache.org/32/documentation.html still reference the 3.2.0 version (as expected), but I verified that after putting the contents of https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/kafka_2.12-3.2.1-site-docs.tgz into the "32" directory of a local Apache server running https://github.com/apache/kafka-site , the proper 3.2.1 version was referenced. So I'm +1 (binding) Best regards, Randall On Thu, Jul 21, 2022 at 8:15 PM David Arthur < davidarthur@apache.org > wrote: > Hello Kafka u...

Re: [VOTE] 3.2.1 RC3

+1 (non-binding). I ran my tests with the staged binaries and everything seems to work fine. Thanks for running the release. Jakub On Fri, Jul 22, 2022 at 3:15 AM David Arthur < davidarthur@apache.org > wrote: > Hello Kafka users, developers and client-developers, > > This is the first release candidate of Apache Kafka 3.2.1. > > This is a bugfix release with several fixes since the release of 3.2.0. A > few of the major issues include: > > * KAFKA-14062 OAuth client token refresh fails with SASL extensions > * KAFKA-14079 Memory leak in connectors using errors.tolerance=all > * KAFKA-14024 Cooperative rebalance regression causing clients to get stuck > > > Release notes for the 3.2.1 release: > https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/RELEASE_NOTES.html > > > > **** Please download, test and vote by Wednesday July 27, 2022 at 17:00 PT. > **** > Kafka's KEYS file containing PGP keys...

Re: [ANNOUNCE] New Committer: Chris Egerton

Congratulations, Chris!! -John On Mon, Jul 25, 2022, at 20:22, Luke Chen wrote: > Congratulations Chris! Well deserved! > > Luke > > On Tue, Jul 26, 2022 at 5:39 AM Anna McDonald < jbfletch@happypants.org > > wrote: > >> Congratulations Chris! Time to Cellobrate! >> >> anna >> >> On Mon, Jul 25, 2022 at 4:23 PM Martin Gainty < mgainty@hotmail.com > wrote: >> >> > Congratulations Chris! >> > >> > martin~ >> > ________________________________ >> > From: Mickael Maison < mimaison@apache.org > >> > Sent: Monday, July 25, 2022 12:25 PM >> > To: dev < dev@kafka.apache.org >; Users < users@kafka.apache.org > >> > Subject: [ANNOUNCE] New Committer: Chris Egerton >> > >> > Hi all, >> > >> > The PMC for Apache Kafka has invited Chris Egerton as a committer, and >> ...

Re: [ANNOUNCE] New Committer: Chris Egerton

Thanks to Mickael and the PMC for this privilege, and to everyone here for their well wishes. I look forward to continuing to work with this wonderful community. Cheers, Chris On Mon, Jul 25, 2022, 17:39 Anna McDonald < jbfletch@happypants.org > wrote: > Congratulations Chris! Time to Cellobrate! > > anna > > On Mon, Jul 25, 2022 at 4:23 PM Martin Gainty < mgainty@hotmail.com > wrote: > > > Congratulations Chris! > > > > martin~ > > ________________________________ > > From: Mickael Maison < mimaison@apache.org > > > Sent: Monday, July 25, 2022 12:25 PM > > To: dev < dev@kafka.apache.org >; Users < users@kafka.apache.org > > > Subject: [ANNOUNCE] New Committer: Chris Egerton > > > > Hi all, > > > > The PMC for Apache Kafka has invited Chris Egerton as a committer, and > > we are excited to announce that he accepted! > > ...

Re: [ANNOUNCE] New Committer: Chris Egerton

Congratulations Chris! Well deserved! Luke On Tue, Jul 26, 2022 at 5:39 AM Anna McDonald < jbfletch@happypants.org > wrote: > Congratulations Chris! Time to Cellobrate! > > anna > > On Mon, Jul 25, 2022 at 4:23 PM Martin Gainty < mgainty@hotmail.com > wrote: > > > Congratulations Chris! > > > > martin~ > > ________________________________ > > From: Mickael Maison < mimaison@apache.org > > > Sent: Monday, July 25, 2022 12:25 PM > > To: dev < dev@kafka.apache.org >; Users < users@kafka.apache.org > > > Subject: [ANNOUNCE] New Committer: Chris Egerton > > > > Hi all, > > > > The PMC for Apache Kafka has invited Chris Egerton as a committer, and > > we are excited to announce that he accepted! > > > > Chris has been contributing to Kafka since 2017. He has made over 80 > > commits mostly around Kafka Connect. His most...

Re: [ANNOUNCE] New Committer: Chris Egerton

Congratulations Chris! Time to Cellobrate! anna On Mon, Jul 25, 2022 at 4:23 PM Martin Gainty < mgainty@hotmail.com > wrote: > Congratulations Chris! > > martin~ > ________________________________ > From: Mickael Maison < mimaison@apache.org > > Sent: Monday, July 25, 2022 12:25 PM > To: dev < dev@kafka.apache.org >; Users < users@kafka.apache.org > > Subject: [ANNOUNCE] New Committer: Chris Egerton > > Hi all, > > The PMC for Apache Kafka has invited Chris Egerton as a committer, and > we are excited to announce that he accepted! > > Chris has been contributing to Kafka since 2017. He has made over 80 > commits mostly around Kafka Connect. His most notable contributions > include KIP-507: Securing Internal Connect REST Endpoints and KIP-618: > Exactly-Once Support for Source Connectors. > > He has been an active participant in discussions and reviews on the > mailing lis...

Re: [ANNOUNCE] New Committer: Chris Egerton

Congratulations Chris! martin~ ________________________________ From: Mickael Maison < mimaison@apache.org > Sent: Monday, July 25, 2022 12:25 PM To: dev < dev@kafka.apache.org >; Users < users@kafka.apache.org > Subject: [ANNOUNCE] New Committer: Chris Egerton Hi all, The PMC for Apache Kafka has invited Chris Egerton as a committer, and we are excited to announce that he accepted! Chris has been contributing to Kafka since 2017. He has made over 80 commits mostly around Kafka Connect. His most notable contributions include KIP-507: Securing Internal Connect REST Endpoints and KIP-618: Exactly-Once Support for Source Connectors. He has been an active participant in discussions and reviews on the mailing lists and on Github. Thanks for all of your contributions Chris. Congratulations! -- Mickael, on behalf of the Apache Kafka PMC

Re: [ANNOUNCE] New Committer: Chris Egerton

Congrats Chris! On Mon, Jul 25, 2022, 18:33 Matthew Benedict de Detrich <matthew.dedetrich@aiven.io.invalid> wrote: > Congratulations! > > -- > Matthew de Detrich > Aiven Deutschland GmbH > Immanuelkirchstraße 26, 10405 Berlin > Amtsgericht Charlottenburg, HRB 209739 B > > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen > m: +491603708037 > w: aiven.io e: matthew.dedetrich@aiven.io > On 25. Jul 2022, 18:26 +0200, Mickael Maison < mimaison@apache.org >, wrote: > > Hi all, > > > > The PMC for Apache Kafka has invited Chris Egerton as a committer, and > > we are excited to announce that he accepted! > > > > Chris has been contributing to Kafka since 2017. He has made over 80 > > commits mostly around Kafka Connect. His most notable contributions > > include KIP-507: Securing Internal Connect REST Endpoints and KIP-618: > > Exactly-Once Sup...

Re: [ANNOUNCE] New Committer: Chris Egerton

Congratulations! -- Matthew de Detrich Aiven Deutschland GmbH Immanuelkirchstraße 26, 10405 Berlin Amtsgericht Charlottenburg, HRB 209739 B Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen m: +491603708037 w: aiven.io e:  matthew.dedetrich@aiven.io On 25. Jul 2022, 18:26 +0200, Mickael Maison < mimaison@apache.org >, wrote: > Hi all, > > The PMC for Apache Kafka has invited Chris Egerton as a committer, and > we are excited to announce that he accepted! > > Chris has been contributing to Kafka since 2017. He has made over 80 > commits mostly around Kafka Connect. His most notable contributions > include KIP-507: Securing Internal Connect REST Endpoints and KIP-618: > Exactly-Once Support for Source Connectors. > > He has been an active participant in discussions and reviews on the > mailing lists and on Github. > > Thanks for all of your contributions Chris. Congratulations! > ...

Re: [ANNOUNCE] New Committer: Chris Egerton

Congrats Chris! ——— Josep Prat Aiven Deutschland GmbH Immanuelkirchstraße 26, 10405 Berlin Amtsgericht Charlottenburg, HRB 209739 B Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen m: +491715557497 w: aiven.io e: josep.prat@aiven.io On Mon, Jul 25, 2022, 18:26 Mickael Maison < mimaison@apache.org > wrote: > Hi all, > > The PMC for Apache Kafka has invited Chris Egerton as a committer, and > we are excited to announce that he accepted! > > Chris has been contributing to Kafka since 2017. He has made over 80 > commits mostly around Kafka Connect. His most notable contributions > include KIP-507: Securing Internal Connect REST Endpoints and KIP-618: > Exactly-Once Support for Source Connectors. > > He has been an active participant in discussions and reviews on the > mailing lists and on Github. > > Thanks for all of your contributions Chris. Congratulations! > > -- Mi...

[ANNOUNCE] New Committer: Chris Egerton

Hi all, The PMC for Apache Kafka has invited Chris Egerton as a committer, and we are excited to announce that he accepted! Chris has been contributing to Kafka since 2017. He has made over 80 commits mostly around Kafka Connect. His most notable contributions include KIP-507: Securing Internal Connect REST Endpoints and KIP-618: Exactly-Once Support for Source Connectors. He has been an active participant in discussions and reviews on the mailing lists and on Github. Thanks for all of your contributions Chris. Congratulations! -- Mickael, on behalf of the Apache Kafka PMC

Kafka Consumer JMX incoming byte rate to 0

Hi there, I've noticed from time to time, though not often, that the Kafka consumer JMX incoming byte rate drops to 0, while the consumer is consuming as expected. SELECT average(newrelic.timeslice.value) FROM Metric WHERE metricTimesliceName = 'MessageBroker/Kafka/Internal/consumer-node-metrics/incoming-byte-rate' The Kafka consumer version is 2.7.1, and the broker side is 2.6.2. Any idea what the problem could be? We are thinking about downgrading the consumer version to rule out anything version-related. Thanks Jose Manuel Vega Monroy Senior Backend Developer Direct: +350 Mobile: +34(0) 633710634 WHG (International) Ltd | 6/1 Waterport Place | Gibraltar |

Re: [VOTE] 3.2.1 RC3

gpg: Good signature from "David Arthur (CODE SIGNING KEY) < davidarthur@apache.org >" [unknown]
/tmp/tmp.lRpAY/kafka-3.2.1-src.tgz.sha512: OK
Build using Gradle 7.3.3, Java 11 and Scala 2.13.8: BUILD SUCCESSFUL in 5m 34s
Unit and integration tests: BUILD SUCCESSFUL in 37m 47s
I also used Scala 2.13 binaries and staged Maven artifacts to run some client applications. +1 (non binding) Thanks On Fri, Jul 22, 2022 at 4:21 PM Christopher Shannon < christopher.l.shannon@gmail.com > wrote: > > +1 (non binding) > > I built from source and ran through some of the tests including all the > connect runtime tests. I verified that KAFKA-14079 was included and the fix > looked good in my tests. > > On Thu, Jul 21, 2022 at 9:15 PM David Arthur < davidarthur@apache.org > wrote: > > > Hello Kafka users, developers and client-developers, > > > > This is the first release candidate of Apache Ka...

Re: [VOTE] 3.2.1 RC3

+1 (non binding) I built from source and ran through some of the tests including all the connect runtime tests. I verified that KAFKA-14079 was included and the fix looked good in my tests. On Thu, Jul 21, 2022 at 9:15 PM David Arthur < davidarthur@apache.org > wrote: > Hello Kafka users, developers and client-developers, > > This is the first release candidate of Apache Kafka 3.2.1. > > This is a bugfix release with several fixes since the release of 3.2.0. A > few of the major issues include: > > * KAFKA-14062 OAuth client token refresh fails with SASL extensions > * KAFKA-14079 Memory leak in connectors using errors.tolerance=all > * KAFKA-14024 Cooperative rebalance regression causing clients to get stuck > > > Release notes for the 3.2.1 release: > https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/RELEASE_NOTES.html > > > > **** Please download, test and vote by Wednesday July 27, 2022 at 1...

perf test on kafka and mirror2

Hi Kafka Experts, I recently worked on a Kafka perf test. My Kafka active/standby clusters and MirrorMaker 2 are all set up with SASL_SSL, and kafka-producer-perf-test.sh is set up with SASL_SSL too. I use the Kafka default settings (message.max.size, batch size... no changes). The test result shows the active Kafka cluster (5 brokers, CPU 16 GHz, memory 32 GB, located in Santa Clara) throughput is ~65 MB/s for 1 million records of 10 KB each; does that look normal? However, MirrorMaker 2 is slow at mirroring messages from the active cluster to the standby one (the standby cluster is located in another city, Wenatchee); it took 10 minutes for MirrorMaker 2 to finish mirroring all the records, so this does not look normal. So my questions are: do we have docs on tuning Kafka performance with SASL_SSL enabled, and do we have docs on tuning Kafka MirrorMaker 2 with SASL_SSL enabled? Thanks.
orker@devks-ca-dev:~/kafka$ ./kafka_2.13-3.2.0/bin/kafka-producer-perf-test.sh --topic vks-perf-tst-10240-mirror --throughput -1 --num-r...

Re: Performance test latency

Dear Kafka Community, Please, don't support rossian terrorism. By supporting the Central Bank of Russia / Ministry of Finance of the Russian Federation and any rossian organization, you support the occupants in Ukraine. On Fri, 22 Jul 2022 at 13:13, Ivanov, Evgeny...

Performance test latency

Hi everyone, I ran several performance tests (one shown below) and the overall throughput looks good, but I don't understand why it shows huge latency (about 2 seconds). Could you please explain what that latency means? It can't be the latency between records or batches. What is it then?
kafka-producer-perf-test.sh --producer-props bootstrap.servers=server1:9095 --producer.config kafka/client/consumer.properties --topic TopicC3part --throughput -1 --record-size 1000 --num-records 1000000 --print-metrics
53137 records sent, 10625.3 records/sec (10.13 MB/sec), 1501.9 ms avg latency, 2384.0 ms max latency.
87152 records sent, 17430.4 records/sec (16.62 MB/sec), 1908.7 ms avg latency, 2365.0 ms max latency.
75728 records sent, 15142.6 records/sec (14.44 MB/sec), 2120.1 ms avg latency, 2878.0 ms max latency.
78736 records sent, 15747.2 records/sec (15.02 MB/sec), 2064.6 ms avg latency, 2826.0 ms max latency.
94688 records sent, 18933.8 records/sec (18.06 MB/sec), 1717.0 ms avg lat...

[VOTE] 3.2.1 RC3

Hello Kafka users, developers and client-developers, This is the first release candidate of Apache Kafka 3.2.1. This is a bugfix release with several fixes since the release of 3.2.0. A few of the major issues include: * KAFKA-14062 OAuth client token refresh fails with SASL extensions * KAFKA-14079 Memory leak in connectors using errors.tolerance=all * KAFKA-14024 Cooperative rebalance regression causing clients to get stuck Release notes for the 3.2.1 release: https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/RELEASE_NOTES.html **** Please download, test and vote by Wednesday July 27, 2022 at 17:00 PT. **** Kafka's KEYS file containing PGP keys we use to sign the release: https://kafka.apache.org/KEYS Release artifacts to be voted upon (source and binary): https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/ Maven artifacts to be voted upon: https://repository.apache.org/content/groups/staging/org/apache/kafka/ Javadoc: https://home....

Re: Designing a low latency consumer

Here's a great resource. Includes links to even more at the end. https://www.confluent.io/blog/configure-kafka-to-minimize-latency/ ---------- Forwarded message ---------- > From: Vasant Surya Teja < vasant.teja@gmail.com > > To: users@kafka.apache.org > Cc: > Bcc: > Date: Wed, 20 Jul 2022 18:13:02 -0500 > Subject: Designing a low latency consumer > Hi Team, > > I need a small advise from you on how to design a low latency consumer. I > am aiming to reduce the amount of time it takes to transfer data from Kafka > to my sink. During this research I came across two properties. They are: > 1. fetch.min.bytes > 2. fetch.max.wait.ms > I am planning to reduce the default values on both these properties to half > and see if I can increase the efficiency of my consumer. Is this a good > idea to tune the default values or should I leave them as it is? Can you > please let me know the repercussions related to th...

Designing a low latency consumer

Hi Team, I need some advice from you on how to design a low-latency consumer. I am aiming to reduce the amount of time it takes to transfer data from Kafka to my sink. During this research I came across two properties: 1. fetch.min.bytes 2. fetch.max.wait.ms I am planning to reduce the default values of both these properties by half and see if I can increase the efficiency of my consumer. Is it a good idea to tune the default values, or should I leave them as they are? Can you please let me know the repercussions related to this? Thanks Teja
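
For reference, a minimal consumer sketch (hypothetical broker, group, and topic names) showing where those two properties go; the lowered values are illustrative rather than recommended, and the trade-off is extra broker round-trips in exchange for fetches returning sooner:

```java
// Sketch only: broker address, group id, and topic name are hypothetical.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LowLatencyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "low-latency-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Defaults are 1 byte and 500 ms; a fetch returns as soon as either threshold is met.
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 250);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```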

OutOfRange event handler

Hello, I have an issue with detecting OffsetOutOfRange. When this happens, in the logs I see: ConsumerCoordinator.refreshCommittedOffsetsIfNeeded() Fetcher.handleOffsetOutOfRange() SubscriptionsState.maybeSeekUnvalidated() It's OK and works perfectly, but I want to be notified about this event, because I need to save info about the consumer group offset, the topic offset, and the lag between them. Is there any way to do this? What is the best practice? Best regards, Sasha Korn, Java Dev
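
One hedged option (a sketch, not necessarily the best practice being asked about): set auto.offset.reset=none so the out-of-range condition surfaces as an exception the application can record before seeking; the topic, group, and broker address below are hypothetical:

```java
// Sketch only: topic "events", group "audit-demo", and broker address are hypothetical.
import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetOutOfRangeException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetResetAuditor {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "audit-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // "none": do not silently reset; raise OffsetOutOfRangeException instead.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                try {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    records.forEach(r -> System.out.println(r.value()));
                } catch (OffsetOutOfRangeException e) {
                    // Record the out-of-range offsets and the current log end before recovering.
                    Map<TopicPartition, Long> badOffsets = e.offsetOutOfRangePartitions();
                    Map<TopicPartition, Long> end = consumer.endOffsets(badOffsets.keySet());
                    badOffsets.forEach((tp, offset) -> System.err.printf(
                            "Out of range on %s: stored=%d, log-end=%d, gap=%d%n",
                            tp, offset, end.get(tp), end.get(tp) - offset));
                    // Recover however the application prefers, e.g. jump to the latest offset.
                    consumer.seekToEnd(badOffsets.keySet());
                }
            }
        }
    }
}
```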

Re: Inquiry about using SSL encryption and SASL authentication for Kafka without specifying IP address in SAN in the CA certificate

Hi Deepak, Unfortunately you cannot disable that with the default client implementation, as far as I know. The SSL connection is created using the SSL implementation provided by the JVM. It might be possible to do this with a different or custom SSL implementation, or with a custom SSL engine. You can control that with the ssl.engine.factory.class property. This can have other side effects on the applications, and it is a lot of work. Kind regards, Richard Bosch Developer Advocate Axual BV https://axual.com/ On Thu, Jul 14, 2022 at 12:46 PM Deepak Jain < deepak.jain@cumulus-systems.com > wrote: > Hi Richard, > > Thanks for your response. > > We are using IP in the advertised.listener and also passing IP in the > property ' bootstrap.servers' while instantiating KafkaConsumer class. But > in the server certificate only dns is used as SAN and not IP due to some > security concerns. > > Regarding hostna...

RE: Inquiry about using SSL encryption and SASL authentication for Kafka without specifying IP address in SAN in the CA certificate

Hi Richard, Thanks for your response. We are using the IP in advertised.listener and also passing the IP in the 'bootstrap.servers' property while instantiating the KafkaConsumer class. But in the server certificate only the DNS name is used as the SAN, not the IP, due to some security concerns. Regarding disabling the hostname verifier, we are able to do it by setting the client property ssl.endpoint.identification.algorithm to an empty string. But the customer is asking the query below, whose answer can only be provided by the Kafka team: Query: Is there any way to enable hostname verification for Kafka communication between broker and client without specifying the IP address in the SAN? Regards, Deepak -----Original Message----- From: Richard Bosch < richard.bosch@axual.com > Sent: 13 July 2022 20:57 To: users@kafka.apache.org Subject: Re: Inquiry about using SSL encryption and SASL authentication for Kafka without specifying IP address in SAN in the CA certificate Cautio...
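
For completeness, a minimal client-properties sketch showing the ssl.endpoint.identification.algorithm setting mentioned above (broker address, truststore path, and password are hypothetical; SASL mechanism and JAAS settings are omitted). Note that disabling hostname verification weakens protection against man-in-the-middle attacks:

```java
// Sketch only: broker address and truststore details are hypothetical placeholders.
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class NoHostnameVerificationConfig {
    public static Properties clientProps() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "10.0.0.5:9093");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/kafka/client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        // Empty string switches off hostname verification against the certificate's SAN/CN.
        props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");
        return props;
    }
}
```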

Re: Consume data-skewed partitions using Kafka-streams causes consumer load balancing issue.

Hello Ankit, Kafka Streams's rebalance protocol is trying to balance workloads based on the num.partitions (more specifically, the num.tasks which is derived from the input partitions) but not on the num.messages or num.bytes, so they would not be able to handle data-skewness across partitions unfortunately. In practice, if a KS app is reading multiple topics, the data skewness could be remedied since an instance could get the heavy partitions of a topic, while getting light partitions of another topic. But if your app is only reading a single topic that has data skewness, it's hard to balance the throughput. Guozhang On Thu, Jul 7, 2022 at 7:29 AM ankit Soni < ankit.soni.geode@gmail.com > wrote: > Hello kafka-users, > > I have 50 topics, each with 32 partitions where data is being ingested > continuously. > > Data is being published in these 50 partitions externally (no control) > which causes data skew amount t...

Re: Inquiry about using SSL encryption and SASL authentication for Kafka without specifying IP address in SAN in the CA certificate

Hi Deepak, I'm not sure what you mean by an IP in the CA certificate. The CA certificates are used to determine who signed a provided certificate and whether it is valid. So when I connect to a broker using an IP address, the server must provide a server certificate containing that IP as a SAN to verify the handshake, signed by a CA that the client trusts. If the IP address is used in the advertised listener configuration, or if only the listener is configured with the IP address, then the client will fail as well, because the client will open a new connection using the addresses provided by the broker, which are IP based. Can you check that the IP address is set as a SAN in the broker server certificates? And that the Kafka broker configuration uses listeners like this? listener=SSL://1.2.3.4:9092 advertised.listener=SSL://hostname:9092 This means that the hostname is used to connect to the broker, and the hostname must be in the SAN to successfully connect. ...

DataMass Summit waiting for speakers

Dear Hadoop Community, The CFP for the DataMass Gdańsk Summit < https://summit.datamass.io/?utm_source=mailing_lists&utm_medium=email&utm_campaign=getindata_CFP > has been extended! You can join the event and share your experience by submitting a presentation proposal until 15 July 2022. What is the DataMass Summit? The DataMass Gdańsk Summit is aimed at people who use the cloud in their daily work to solve Big Data, Data Science, Machine Learning and AI problems. The main idea of the conference is to promote knowledge and experience in designing and implementing tools for solving difficult and interesting challenges. Key topics - Data engineering in the cloud – how to ingest, store, transform your data in an efficient way. - Real time streaming – when you need to make decisions without delay. - Data intensive applications – patterns and templates for architectures that can handle hundreds of TB and PB of data. ...