
Posts

Showing posts from September, 2019

Re: Kafka Streams can't run normally after restart/redeployment

You could try increasing retries and see if that helps, as well as lowering the producer batch size. (I think the retries default is Integer.MAX_VALUE when you're on Kafka Streams version 2.1 or higher, so you can definitely increase it beyond 5.) Additionally, you could look at the "delivery.timeout.ms" config property; the default is 2 minutes, but you could experiment with increasing it as well. Another property to check if you're getting timeout exceptions would be "default.api.timeout.ms". Those are just some initial ideas, good luck!

Alex

On Thu, Sep 26, 2019 at 6:02 PM Xiyuan Hu <xiyuan.huhu@gmail.com> wrote:
> Thanks Alex! Some updates:
>
> I tried to restart the service with the staging pool, which has far less
> traffic than the production environment. After the restart, the application
> works fine without issues. I assume the reason I can't restart the service in
> production is the huge lag in pro...
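The properties mentioned above are plain producer configs; a sketch of where they would go (values are illustrative, not tuned recommendations — in a Streams app they can be set through StreamsConfig with the "producer." prefix):

```properties
# retries: default is Integer.MAX_VALUE (2147483647) on Streams >= 2.1
retries=2147483647
# total time allowed to report success or failure of a send
# (default 120000 = 2 minutes)
delivery.timeout.ms=300000
# default timeout for client API calls (default 60000)
default.api.timeout.ms=120000
# lower than the 16384-byte default, per the batch-size suggestion above
batch.size=8192
```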

Re: One Partition missing a node in ISR

I deleted the topic now, and with topic auto-create enabled it was immediately recreated and all is in sync again. Will keep an eye on this to see if it happens again....

On 30-Sep-19 3:12 PM, Sebastian Schmitz wrote:
> Hello again,
>
> after like 15 minutes I have now this result:
>
> root@kafka_node_1:/opt/kafka_2.12-2.3.0/bin#
> ./kafka-reassign-partitions.sh --bootstrap-server localhost:9092
> --zookeeper node1:2181 --reassignment-json-file move2.json --verify
> Status of partition reassignment:
> Reassignment of partition my_topic-7 completed successfully
> Reassignment of partition my_topic-14 completed successfully
> Reassignment of partition my_topic-8 completed successfully
> Reassignment of partition my_topic-4 completed successfully
> Reassignment of partition my_topic-3 completed successfully
> Reassignment of partition my_topic-13 completed successfully
> Reassignment of partition my_topic-1 completed ...

Is restart necessary when adding users in SASL/PLAIN ?

Hi everyone! Our Kafka cluster (Ambari distribution) uses the SASL/PLAIN authentication mechanism. I am adding new users, and it seems a restart is required to enable them. So I want to know whether brokers must be restarted in this situation. If yes, is there a workaround to avoid this? Otherwise, SASL/PLAIN is less useful.

张祥 <18133622460@163.com>
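For background on why the restart is needed: with the default PlainLoginModule, the broker-side user list lives in the static JAAS configuration, which is read only at broker startup (usernames and passwords below are placeholders):

```
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};
```

Adding another user_* entry here only takes effect after a broker restart. Common workarounds: plug in a custom sasl.server.callback.handler.class (available since Kafka 2.0) that validates PLAIN credentials against an external store, or switch to SASL/SCRAM, where credentials are stored in ZooKeeper and can be added at runtime with kafka-configs.sh, no restart required.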

Re: Achiving at least once Delivery on top of Kafka

Hi, This is not the complete requirements set we need. Sorry for any inconvenience. Thank you. Regards.

On Sun, 29 Sep 2019 at 18:25, Isuru Boyagane <isuruboyagane.16@cse.mrt.ac.lk> wrote:
> Hi,
>
> We are implementing a use case that needs strict at-least-once delivery.
> Even in the case of node failures, no messages must be lost.
>
> We are trying to find the least restrictive configuration that can
> give us at-least-once delivery. The following is what we found:
>
> - If we use acks=1, we can't guarantee that messages will not be lost.
> - If we use acks=all, we will have good data safety, but an unclean
>   leader failover may still lead to data loss.
>
> As we found, setting acks=all and unclean.leader.election.enable=false
> will give us data safety so that no message will be lost (sacrificing
> some availability of the system).
> ...
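Pulling together the settings the thread converges on (a sketch; min.insync.replicas is an addition the quoted mail does not mention, but it is the usual companion to acks=all):

```properties
# Producer side: require acknowledgement from all in-sync replicas
acks=all

# Broker/topic side: never elect an out-of-sync replica as leader
unclean.leader.election.enable=false

# Topic side: require at least 2 in-sync replicas before a write succeeds
min.insync.replicas=2
```

With replication.factor=3 and min.insync.replicas=2, acks=all writes survive a single broker failure; the consumer side must also commit offsets only after processing to keep the at-least-once guarantee end to end.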

Re: Broker regularly facing memory issues

Thanks for the input. I do plan on reducing the heap for the next restart. Initially I'd thought that too little heap memory could also be the reason, but since then I've learned that cannot be the issue. I also found the number 262144 online, but I believe those results were related to Elasticsearch.

On Fri, Sep 27, 2019 at 2:31 PM Karolis Pocius <karolis.pocius@sentiance.com.invalid> wrote:
> How did you arrive at the 10 GB JVM heap value? I'm running Kafka on 16 GB
> RAM instances with ~4000 partitions each and only assigning 5 GB to the JVM,
> of which Kafka only seems to be using ~2 GB at any given time.
>
> Also, I've set vm.max_map_count to 262144 -- didn't use any formula to
> estimate that, must have been some answer I found online, but it's been
> doing its trick -- no issues so far.
>
> On Fri, Sep 27, 2019 at 11:29 AM Arpit Gogia <arpit@ixigo.com> wrote:
>
> > Hello Kafka user group
> ...
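For reference, the vm.max_map_count value quoted above is an OS-level sysctl, not a Kafka config; a sketch of setting it persistently (the number is the one from the thread, not a derived recommendation):

```
# /etc/sysctl.conf -- raise the per-process memory map limit
vm.max_map_count=262144
```

Apply it without a reboot via `sysctl -w vm.max_map_count=262144`. A rough lower bound for Kafka: each log segment memory-maps its index and timeindex files, so budget at least two maps per segment across all partitions the broker hosts.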

Re: Only use SSL to encrypt the authentication messages

So, is this a no?

On 09/29/2019 17:38, Pere Urbón Bayes <pere.urbon@gmail.com> wrote:
Hi,
if you're worried about the performance impact, I would suggest:
* first, benchmark the impact; from experience I would expect around a 30 to 40% performance degradation.
* use Java 11, and you will see a lot less performance impact when using SSL.
-- Pere

On Sun, 29 Sep 2019, 08:28 张祥 <18133622460@163.com> wrote:
Hi everyone! I am enabling SASL/PLAIN authentication for our Kafka cluster, and I am aware it should be used with SSL encryption. But that may bring a performance impact, so I am wondering whether it is possible to use SSL only to encrypt the authentication messages and leave the data unencrypted. Thanks.

张祥 <18133622460@163.com>
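For context on why the answer is effectively no: encryption in Kafka is chosen per listener via its security protocol, not per message type, so the SASL exchange and the data share one channel. The closest approximation is running two listeners (host and ports below are placeholders):

```properties
# server.properties: authenticate over TLS on one listener,
# move data in the clear on another
listeners=SASL_SSL://broker1:9093,PLAINTEXT://broker1:9092
```

But the PLAINTEXT listener carries no authenticated identity at all, so splitting this way gives up authentication on the data path, which defeats the purpose. And with SASL_PLAINTEXT, the PLAIN credentials themselves cross the wire unencrypted.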

Re: Idempotent Producers and Exactly Once Consumers

If you only enable "read_committed", yes. If you set `processing.guarantees="exactly_once...
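The Streams property being referenced here is processing.guarantee (singular, in StreamsConfig); a minimal config fragment:

```properties
# Kafka Streams application config.
# Default is at_least_once; exactly_once internally enables an
# idempotent, transactional producer and read_committed consumption.
processing.guarantee=exactly_once
```

This is a Streams-level setting applied per application; plain consumer applications still need isolation.level=read_committed set explicitly.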

Re: Idempotent Producers and Exactly Once Consumers

This is not only a configuration question. As in all messaging solutions, you need to make sure that all involved applications ensure once-and-only-once delivery, from producer(s) over Kafka to consumer(s).

> On 30.08.2019 at 22:00, Peter Groesbeck <peter.groesbeck@gmail.com> wrote:
>
> For a producer that emits messages to a single topic (i.e. no single
> message is sent to multiple topics), will enabling idempotency but not
> transactions provide exactly-once guarantees for downstream consumers of
> said topic?
>
> Ordering is not important; I just want to make sure consumers consume each
> message only once.
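For reference, the producer-side part of this is a single setting, but it only deduplicates the producer's own internal retries within a session; it does not cover an application that calls send() twice, or a consumer that reprocesses after a failure:

```properties
# Producer config: the broker deduplicates retried batches using the
# producer id + per-partition sequence numbers (within one producer session)
enable.idempotence=true
# required alongside idempotence:
acks=all
max.in.flight.requests.per.connection=5
```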

Re: One Partition missing a node in ISR

Hello again,

after like 15 minutes I have now this result:

root@kafka_node_1:/opt/kafka_2.12-2.3.0/bin# ./kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --zookeeper node1:2181 --reassignment-json-file move2.json --verify
Status of partition reassignment:
Reassignment of partition my_topic-7 completed successfully
Reassignment of partition my_topic-14 completed successfully
Reassignment of partition my_topic-8 completed successfully
Reassignment of partition my_topic-4 completed successfully
Reassignment of partition my_topic-3 completed successfully
Reassignment of partition my_topic-13 completed successfully
Reassignment of partition my_topic-1 completed successfully
Reassignment of partition my_topic-15 completed successfully
Reassignment of partition my_topic-6 completed successfully
Reassignment of partition my_topic-11 completed successfully
Reassignment of partition my_topic-0 completed successfully
Reassignment of partition my_topic-12 comp...

Re: One Partition missing a node in ISR

Hello, I just ran kafka-reassign-partitions with --generate to create the JSON and then with --execute to run it. Now when checking with --verify I can see that 4 partitions are successful (it has now changed from only one partition not having all replicas in the ISR to 12 not having all in the ISR), but the others are still in progress.... That status remains:

root@kafka_node_1:/opt/kafka_2.12-2.3.0/bin# ./kafka-topics.sh --bootstrap-server localhost:9092 --topic my_topic --describe
Topic:my_topic    PartitionCount:16    ReplicationFactor:3    Configs:segment.bytes=1073741824,message.format.version=2.3-IV1,retention.bytes=1073741824
    Topic: my_topic    Partition: 0    Leader: 1    Replicas: 2,3,1    Isr: 1
    Topic: my_topic    Partition: 1    Leader: 1    Replicas: 3,1,2 ...

Re: Idempotent Producers and Exactly Once Consumers

Does a Kafka Streams consumer also have that same limitation of possible duplicates?

Thanks,
Chris

On Fri, Sep 27, 2019 at 11:56 AM Matthias J. Sax <matthias@confluent.io> wrote:
> Enabling "read_committed" only ensures that a consumer does not return
> uncommitted data.
>
> However, on failure, a consumer might still read committed messages
> multiple times (if you commit offsets after processing). If you commit
> offsets before you process messages, and a failure happens before
> processing finishes, you may "lose" those messages, as they won't be
> consumed again on restart.
>
> Hence, if you have a "consumer only" application, not much has changed, and
> you still need to take care in your application code of potential
> duplicate processing of records.
>
> -Matthias
>
> On 9/27/19 7:34 AM, Alessandro Tagliapietra wrote:
> > You can achieve exactly once o...
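The commit-after vs commit-before distinction above can be sketched without a broker. The simulation below is hypothetical (it models only the offset bookkeeping, not Kafka client code), with a crash injected between the commit step and the processing step of the second message:

```python
# Simulate the two offset-commit strategies: commit-after-processing
# yields duplicates on restart (at-least-once); commit-before-processing
# drops the in-flight message (at-most-once).

def simulate(commit_before_processing):
    messages = ["m0", "m1", "m2"]
    processed = []
    committed = 0  # next offset to read after a restart

    # First run: m0 is handled fully (processed and committed).
    processed.append(messages[0])
    committed = 1

    # While handling m1, the crash lands between the two steps:
    if commit_before_processing:
        committed = 2  # offset committed, then CRASH before processing m1
    else:
        processed.append(messages[1])  # processed, then CRASH before commit

    # Restart: resume from the last committed offset.
    for msg in messages[committed:]:
        processed.append(msg)
        committed += 1
    return processed

print(simulate(commit_before_processing=False))  # ['m0', 'm1', 'm1', 'm2']
print(simulate(commit_before_processing=True))   # ['m0', 'm2']
```

Kafka Streams commits offsets only after processing (and, with exactly_once, atomically together with the output), so a plain Streams app is at-least-once by default rather than at-most-once.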