Posts

Showing posts from August, 2020

Re: MM2 max.request.size setting

Even with the errors complaining, it does appear to have worked. Thanks much! James Lavoy Senior Infrastructure Engineer Verizon - Protectwise 1601 Wewatta Street #700 Denver, Co > On Aug 31, 2020, at 2:16 PM, nitin agarwal < nitingarg456@gmail.com > wrote: > > I am not sure why it is picked in ConsumerConfig also. But you can verify > in producer configuration that MM2 writes in logs. > > Thanks, > Nitin > > On Tue, Sep 1, 2020 at 1:14 AM James Lavoy > <james.lavoy@protectwise.com.invalid> wrote: > >> Are you sure this is the proper syntax? >> >> [2020-08-31 19:44:11,074] WARN The configuration >> 'override.max.request.size' was supplied but isn't a known config. >> (org.apache.kafka.clients.consumer.ConsumerConfig:362) >> max.request.size = 1048576 >> >> Seeing this in output after adding those values. >> >> James Lavoy...

Re: MM2 max.request.size setting

I am not sure why it is picked up in ConsumerConfig also, but you can verify it in the producer configuration that MM2 writes to its logs. Thanks, Nitin On Tue, Sep 1, 2020 at 1:14 AM James Lavoy <james.lavoy@protectwise.com.invalid> wrote: > Are you sure this is the proper syntax? > > [2020-08-31 19:44:11,074] WARN The configuration > 'override.max.request.size' was supplied but isn't a known config. > (org.apache.kafka.clients.consumer.ConsumerConfig:362) > max.request.size = 1048576 > > Seeing this in output after adding those values. > > James Lavoy > > Senior Infrastructure Engineer > Verizon - Protectwise > > 1601 Wewatta Street > #700 > Denver, Co > > > On Aug 31, 2020, at 1:08 PM, nitin agarwal < nitingarg456@gmail.com > > wrote: > > > > If you have clusters like DC1 and DC2 then you can add the following > > configuration: > > DC1->DC2....

Streams constantly rebalancing

When running an application on the Mac it works fine, but when running exactly the same app and config on the Raspberry Pi it constantly says it is "Rebalancing" the streams:
2020-08-31 12:47:11 INFO org.apache.kafka.common.utils.AppInfoParser$AppInfo <init> Kafka version: 2.6.0
2020-08-31 12:47:11 INFO org.apache.kafka.common.utils.AppInfoParser$AppInfo <init> Kafka commitId: 62abe01bee039651
2020-08-31 12:47:11 INFO org.apache.kafka.common.utils.AppInfoParser$AppInfo <init> Kafka startTimeMs: 1598903231499
2020-08-31 12:47:11 WARNING org.apache.kafka.streams.StreamsConfig checkIfUnexpectedUserSpecifiedConsumerConfig Unexpected user-specified consumer config: enable.auto.commit found. User setting (true) will be ignored and the Streams default setting (false) will be used
2020-08-31 12:47:11 INFO org.apache.kafka.streams.KafkaStreams setState stream-client [pi-test-84721b40-dfa1-4848-b3de-5c75610...
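No fix appears in this thread, but two details stand out. The WARNING above is expected: Kafka Streams manages offset commits itself, so a user-supplied enable.auto.commit is ignored. For constant rebalancing on slow hardware like a Pi, raising the consumer timeouts is a common first step; a minimal sketch, with values that are illustrative assumptions rather than anything from the thread:

  # Kafka Streams properties sketch -- values are illustrative assumptions
  # (do not set enable.auto.commit; Streams forces it to false)
  consumer.max.poll.interval.ms=600000   # allow more time between poll() calls on slow hardware
  consumer.session.timeout.ms=30000      # tolerate slower heartbeats before a rebalance is triggered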

Re: MM2 max.request.size setting

Are you sure this is the proper syntax? [2020-08-31 19:44:11,074] WARN The configuration 'override.max.request.size' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig:362) max.request.size = 1048576 Seeing this in output after adding those values. James Lavoy Senior Infrastructure Engineer Verizon - Protectwise 1601 Wewatta Street #700 Denver, Co > On Aug 31, 2020, at 1:08 PM, nitin agarwal < nitingarg456@gmail.com > wrote: > > If you have clusters like DC1 and DC2 then you can add the following > configuration: > DC1->DC2.producer.override.max.request.size=6291456 > > Thanks, > Nitin > > On Tue, Sep 1, 2020 at 12:35 AM James Lavoy > <james.lavoy@protectwise.com.invalid> wrote: > >> Through the connect-mirror-maker.sh helper script. >> >> James Lavoy >> >> Senior Infrastructure Engineer >> Verizon - Prot...

Re: MM2 max.request.size setting

If you have clusters like DC1 and DC2 then you can add the following configuration: DC1->DC2.producer.override.max.request.size=6291456 Thanks, Nitin On Tue, Sep 1, 2020 at 12:35 AM James Lavoy <james.lavoy@protectwise.com.invalid> wrote: > Through the connect-mirror-maker.sh helper script. > > James Lavoy > > Senior Infrastructure Engineer > Verizon - Protectwise > > 1601 Wewatta Street > #700 > Denver, Co > > > On Aug 31, 2020, at 1:04 PM, nitin agarwal < nitingarg456@gmail.com > > wrote: > > > > How are you running MM2 ? > > > > Thanks, > > Nitin > > > > On Mon, Aug 31, 2020 at 11:23 PM James Lavoy > > <james.lavoy@protectwise.com.invalid> wrote: > > > >> Good day, > >> > >> I'm trying to use MM2 to setup replication between two Kafka clusters, I > >> have hit an issue that I believe is beca...
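For reference, the full shape of the fix as a minimal mm2.properties sketch (the cluster aliases DC1/DC2 and the 6291456 value come from the thread; broker addresses are placeholders):

  # mm2.properties -- minimal sketch; broker addresses are placeholders
  clusters = DC1, DC2
  DC1.bootstrap.servers = dc1-broker1:9092
  DC2.bootstrap.servers = dc2-broker1:9092
  DC1->DC2.enabled = true
  DC1->DC2.topics = .*
  # per-flow producer override from the thread; the WARN it triggers in
  # ConsumerConfig appears to be harmless, per James's follow-up
  DC1->DC2.producer.override.max.request.size = 6291456

Run it with the helper script named in the thread: bin/connect-mirror-maker.sh mm2.properties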

Re: MM2 max.request.size setting

Through the connect-mirror-maker.sh helper script. James Lavoy Senior Infrastructure Engineer Verizon - Protectwise 1601 Wewatta Street #700 Denver, Co > On Aug 31, 2020, at 1:04 PM, nitin agarwal < nitingarg456@gmail.com > wrote: > > How are you running MM2 ? > > Thanks, > Nitin > > On Mon, Aug 31, 2020 at 11:23 PM James Lavoy > <james.lavoy@protectwise.com.invalid> wrote: > >> Good day, >> >> I'm trying to use MM2 to setup replication between two Kafka clusters, I >> have hit an issue that I believe is because of hitting the ProducerConfig >> max.request.size setting, however I seem unable to find the proper string >> to override the default setting in the mm2 config. >> >> This is the specific value I'm talking about: >> [2020-08-31 15:42:17,066] INFO ProducerConfig values: >> max.request.size = 1048576 >> Does anybody k...

Re: MM2 max.request.size setting

How are you running MM2 ? Thanks, Nitin On Mon, Aug 31, 2020 at 11:23 PM James Lavoy <james.lavoy@protectwise.com.invalid> wrote: > Good day, > > I'm trying to use MM2 to setup replication between two Kafka clusters, I > have hit an issue that I believe is because of hitting the ProducerConfig > max.request.size setting, however I seem unable to find the proper string > to override the default setting in the mm2 config. > > This is the specific value I'm talking about: > [2020-08-31 15:42:17,066] INFO ProducerConfig values: > max.request.size = 1048576 > Does anybody know off-hand the right config line to override this limit? > > Thank you for your time and attention. > > > > James Lavoy > > Senior Infrastructure Engineer > Verizon > > >

MM2 max.request.size setting

Good day, I'm trying to use MM2 to set up replication between two Kafka clusters. I have hit an issue that I believe is caused by hitting the ProducerConfig max.request.size setting; however, I seem unable to find the proper string to override the default setting in the MM2 config. This is the specific value I'm talking about: [2020-08-31 15:42:17,066] INFO ProducerConfig values: max.request.size = 1048576 Does anybody know off-hand the right config line to override this limit? Thank you for your time and attention. James Lavoy Senior Infrastructure Engineer Verizon

Re: Can't trace any sendfile system call from Kafka process

Hi Ming, Maybe this ticket could be useful to you: https://issues.apache.org/jira/browse/KAFKA-7504 Guozhang On Fri, Aug 28, 2020 at 8:21 AM Ming Liu < mingaliu@gmail.com > wrote: > Hi > One major reason that Kafka is fast is that it uses sendfile() > for zero copy, as described at > https://kafka.apache.org/documentation/#producerconfigs , > > This combination of pagecache and sendfile means that on a Kafka > cluster where the consumers are mostly caught up you will see no read > activity on the disks whatsoever as they will be serving data entirely > from cache. > > However, when I ran tracing on all my kafka brokers, I didn't get a > single sendfile system call. Why this discrepancy? > > sudo ./syscount -p 126806 -d 30 > Tracing syscalls, printing top 10... Ctrl+C to quit. > [17:44:10] > SYSCALL COUNT > epoll_wait 108482 > write 107165 ...

Kafka cluster cannot connect to zookeeper

Thank you for the update steps! I've successfully expanded my zookeeper. But what should I do with the Kafka cluster that can't connect to zookeeper? The Kafka cluster can work normally now, but it cannot be operated. Thanks for your help!! ________________________________ From: Manoj.Agrawal2@cognizant.com < Manoj.Agrawal2@cognizant.com > Sent: August 30, 2020, 11:03 To: users@kafka.apache.org < users@kafka.apache.org > Subject: Re: Reply: Kafka cluster cannot connect to zookeeper Try below: 1. Update conf/zoo.cfg with the configuration of the existing node and the new server node 2. Add myid under dataDir 3. Restart the existing zookeeper node 4. Start the new zookeeper node 5. Update conf/zoo.cfg with the configuration of the existing (2 zookeeper) nodes and the new server node 6. Add myid under dataDir 7. Restart the existing zookeeper nodes 8. Start the new zookeeper node On 8/29/2...

Re: Reply: Kafka cluster cannot connect to zookeeper

Try below: 1. Update conf/zoo.cfg with the configuration of the existing node and the new server node 2. Add myid under dataDir 3. Restart the existing zookeeper node 4. Start the new zookeeper node 5. Update conf/zoo.cfg with the configuration of the existing (2 zookeeper) nodes and the new server node 6. Add myid under dataDir 7. Restart the existing zookeeper nodes 8. Start the new zookeeper node On 8/29/20, 3:52 AM, "Li,Dingqun" < lidingqun@agora.io > wrote: [External] I updated zookeeper with this process: 1. Update conf/zoo.cfg with the configuration of the two new server nodes 2. Add myid under dataDir 3. Restart the existing zookeeper node 4. Start the other two zookeeper nodes 5. The existing zookeeper node changes from stand-alone to leader. zookeeper version 3.4.14-4, kafka version 2.3.0. This is part of the lo...
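For concreteness, a sketch of the ensemble section of conf/zoo.cfg after the expansion to three nodes (hostnames, ports, and paths are standard placeholders, not from the thread):

  # conf/zoo.cfg on every node -- hostnames are placeholders
  server.1=zk1:2888:3888
  server.2=zk2:2888:3888
  server.3=zk3:2888:3888

Each host's dataDir/myid must contain just that node's own id, matching its server.N line (e.g. echo 2 > /var/lib/zookeeper/myid on zk2).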

Reply: Kafka cluster cannot connect to zookeeper

I updated zookeeper with this process: 1. Update conf/zoo.cfg with the configuration of the two new server nodes 2. Add myid under dataDir 3. Restart the existing zookeeper node 4. Start the other two zookeeper nodes 5. The existing zookeeper node changes from stand-alone to leader. zookeeper version 3.4.14-4, kafka version 2.3.0. This is part of the log: [2020-08-28 08:43:23,872] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0x5 zxid:0x2000005e0 txntype:-1 reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NodeExists for /brokers/topics (org.apache.zookeeper.server.PrepRequestProcessor) [2020-08-28 08:43:23,945] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0x6 zxid:0x2000005e1 txntype:-1 reqpath:n/a Error Path:/config/changes Error:KeeperErrorCode = NodeExists for /config/changes (org.apache.zookeeper.server.PrepRequestProcessor)...

Read only Access(ACL) for all topics in cluster to user

Hi, We are using Kafka 2.2.1 and we have a requirement to provide read-only access for a user to all topics existing in the Kafka cluster. Is there any way we can add a Kafka ACL rule for read access at the cluster level, or for all topics (topic*), for a user? Thanks, Manoj A
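For Kafka 2.2.1 with the ZooKeeper-based authorizer, the wildcard resource can express this; a sketch, with the principal and ZooKeeper address as placeholders:

  # read-only access to every topic for one user
  bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk1:2181 \
    --add --allow-principal User:readonly-user \
    --operation Read --operation Describe --topic '*'

  # consuming through a consumer group also needs Read on the group
  bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk1:2181 \
    --add --allow-principal User:readonly-user \
    --operation Read --group '*'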

Re: Kafka cluster cannot connect to zookeeper

You haven't described how you are adding zookeeper. The right way to add zookeeper is one host at a time: 1. Update the existing zookeeper node's conf/zoo.cfg by adding the new host 2. Restart the zk process on the existing host 3. Start the zk process on the new node On 8/28/20, 8:20 AM, "Li,Dingqun" < lidingqun@agora.io > wrote: [External] We have one zookeeper node and two Kafka nodes. We then expanded the zookeeper cluster: changed the configuration of the zookeeper node, restarted it, and added two zookeeper nodes. After that, my Kafka cluster could not connect to the zookeeper cluster, and there was no information available in the log. What should we do? Thank you

Kafka cluster cannot connect to zookeeper

We have one zookeeper node and two Kafka nodes. We then expanded the zookeeper cluster: changed the configuration of the zookeeper node, restarted it, and added two zookeeper nodes. After that, my Kafka cluster could not connect to the zookeeper cluster, and there was no information available in the log. What should we do? Thank you

Reply: Kafka cluster cannot connect to zookeeper

We have one zookeeper node and two Kafka nodes. We then expanded the zookeeper cluster: changed the configuration of the zookeeper node, restarted it, and added two zookeeper nodes. After that, my Kafka cluster could not connect to the zookeeper cluster, and there was no information available in the log. What should we do? Thank you ________________________________ From: Li,Dingqun Sent: August 28, 2020, 14:43 To: users@kafka.apache.org < users@kafka.apache.org > Subject: Kafka cluster cannot connect to zookeeper We have one zookeeper node and two Kafka nodes. We then expanded the zookeeper cluster: changed the configuration of the zookeeper node, restarted it, and added two zookeeper nodes. After that, my Kafka cluster could not connect to the zookeeper cluster, and there was no information available in the log. What should we do? Thank you

Re: JNI linker issue on ARM (Raspberry PI)

Thanks for the update Steve! This is very helpful and I find the blog is a good read too! Appreciate your contribution to the community. Guozhang On Thu, Aug 27, 2020 at 11:06 AM Steve Jones < jones.steveg@gmail.com > wrote: > Ok so an update here, it's just a learning thing alongside the day job so > it took a little longer than expected. I upgraded to 2.6 but the same > thing happened, the reason is that rocksDB doesn't include the .so for the > Raspberry Pi platform, hence the linker error. Solution was to build the > JNI jar on the Raspberry PI. > > Blogged it so there is a record in case someone else has the challenge. > One 'interesting' piece was that although the application had never > successfully run there was a record of the state somewhere in the broker so > once the linker issue was the resolved it failed because it couldn't access > the RocksDB database in the state that it thought existed...

Can't trace any sendfile system call from Kafka process

Hi, One major reason that Kafka is fast is that it uses sendfile() for zero copy, as described at https://kafka.apache.org/documentation/#producerconfigs : This combination of pagecache and sendfile means that on a Kafka cluster where the consumers are mostly caught up you will see no read activity on the disks whatsoever as they will be serving data entirely from cache. However, when I ran tracing on all my kafka brokers, I didn't get a single sendfile system call. Why this discrepancy?
sudo ./syscount -p 126806 -d 30
Tracing syscalls, printing top 10... Ctrl+C to quit. [17:44:10]
SYSCALL      COUNT
epoll_wait   108482
write        107165
epoll_ctl    95058
futex        86716
read         86388
pread        26910
fstat        9213
getrusage    120
close        27
open         21
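One way to test for the call directly instead of through a top-10 summary, assuming strace is available (the PID is the one from the thread):

  # trace only sendfile from the running broker process
  sudo strace -f -e trace=sendfile -p 126806

One possible explanation for seeing none at all: zero-copy only applies to plaintext consumer fetches. If consumers connect over TLS, the broker must encrypt data in user space, so sendfile is not used.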

Re: Can we use VIP ip rather than Kafka Broker host name in bootstrap string

I use a VIP in my production system and I haven't had any issues On Wed, Aug 26, 2020 at 5:21 PM Peter Bukowinski < pmbuko@gmail.com > wrote: > I do something like this in my environment to simplify things. We use a > consul service address, e.g 'kafka.service.subdomain.consul', to provide > the VIP, which returns the address of a live broker in the cluster. Kafka > clients use that address in their configs. It works very well. > > — > Peter > > > On Aug 26, 2020, at 11:54 AM, Manoj.Agrawal2@cognizant.com wrote: > > > > Hi All , > > Can we use VIP ip rather than Kafka Broker host name in bootstrap > string at producer side ? > > Any concern or recommendation way >

Re: JNI linker issue on ARM (Raspberry PI)

Ok so an update here, it's just a learning thing alongside the day job so it took a little longer than expected. I upgraded to 2.6 but the same thing happened; the reason is that RocksDB doesn't include the .so for the Raspberry Pi platform, hence the linker error. The solution was to build the JNI jar on the Raspberry Pi. Blogged it so there is a record in case someone else has the challenge. One 'interesting' piece was that although the application had never successfully run, there was a record of the state somewhere in the broker, so once the linker issue was resolved it failed because it couldn't access the RocksDB database in the state that it thought existed; changing the application.id made it all work fine. Not sure that counts as a defect on the Kafka side and I didn't have time to track down the root cause. http://service-architecture.blogspot.com/2020/08/getting-rocksdb-working-on-raspberry-pi.html On Mon, 24 Aug 2020 at 20:53, J...
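Where changing application.id is undesirable, the reset tool that ships with Kafka is an alternative way to clear stale application state; a sketch (the application id and broker address are placeholders, and this is a suggestion, not what the post above did):

  bin/kafka-streams-application-reset.sh \
    --application-id pi-test \
    --bootstrap-servers localhost:9092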

Re: Can we use VIP ip rather than Kafka Broker host name in bootstrap string

I do something like this in my environment to simplify things. We use a consul service address, e.g 'kafka.service.subdomain.consul', to provide the VIP, which returns the address of a live broker in the cluster. Kafka clients use that address in their configs. It works very well. — Peter > On Aug 26, 2020, at 11:54 AM, Manoj.Agrawal2@cognizant.com wrote: > > Hi All , > Can we use VIP ip rather than Kafka Broker host name in bootstrap string at producer side ? > Any concern or recommendation way
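The client side of this setup is just the bootstrap line; a sketch using the consul name Peter mentions (the port is a placeholder):

  # producer/consumer properties -- bootstrap via a service VIP
  bootstrap.servers=kafka.service.subdomain.consul:9092

One caveat worth noting: bootstrap.servers is only used for the initial metadata request. After that, clients connect to brokers at their advertised.listeners addresses, so those must also be resolvable and reachable by clients; the VIP alone is not enough.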

Re: Please add our company to https://kafka.apache.org/powered-by

Hello Sebastian, feel free to open a PR against https://github.com/apache/kafka-site/ to update the `powered-by.html` file. -Matthias On 8...

Can we use VIP ip rather than Kafka Broker host name in bootstrap string

Hi All, Can we use a VIP IP rather than the Kafka broker host names in the bootstrap string on the producer side? Any concerns, or a recommended way to do this?

Re: Request to get added to Assignee list

Done. Thanks for your interest in Apache Kafka! On Wed, Aug 26, 2020 at 1:41 PM Aakash Gupta < aakash.gupta96@outlook.com > wrote: > > Hi, > > Please add me to the JIRA Assignee list. I would like to start contributing. > > Jira user id: aakashgupta96 > Full Name: Aakash Gupta > > Apologies in case I've sent this request to the wrong mailing list. > > Thanks, > Aakash Gupta

Re: Not able to connect to bootstrap server when one broker down

What error are you getting? Can you share the exact error? What version of the Kafka lib is on the client side? On 8/25/20, 7:50 AM, "Prateek Rajput" <prateek.rajput@flipkart.com.INVALID> wrote: [External] Hi, please, if anyone can help it will be a huge favor. *Regards,* *Prateek Rajput* < prateek.rajput@flipkart.com > On Tue, Aug 25, 2020 at 12:06 AM Prateek Rajput < prateek.rajput@flipkart.com > wrote: > Hi everyone, > I am new to Kafka, and recently started working on kafka in my company. We > recently migrated our client and cluster from the *0.10.x* version to > *2.3.0*. I am facing this issue quite often. > I have provided all brokers in *bootstrap.servers* config to instantiate > the producer client but while using this client for batch publishing, > sometimes some of my mappers get stuck. > I debugged and found that one broker was down (for some ...

Python error after upgrading confluent-kafka library

I upgraded confluent-kafka from 1.0.1 to 1.4.2. After upgrading via 'pip install', I am getting the following error on import. I am using Python 2.7. Has anyone else encountered this issue?
Traceback (most recent call last):
  import confluent_kafka as kafka
  File "/home/sshil/virtual_envs/ma_venv/local/lib/python2.7/site-packages/confluent_kafka/__init__.py", line 19, in <module>
    from .deserializing_consumer import DeserializingConsumer
  File "/home/sshil/virtual_envs/ma_venv/local/lib/python2.7/site-packages/confluent_kafka/deserializing_consumer.py", line 19, in <module>
    from confluent_kafka.cimpl import Consumer as _ConsumerImpl
ImportError: /home/sshil/virtual_envs/ma_venv/local/lib/python2.7/site-packages/confluent_kafka/cimpl.so: undefined symbol: PyUnicodeUCS2_FromObject
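An undefined PyUnicodeUCS2_* symbol is the classic sign of a C extension compiled against a narrow-unicode (UCS2) Python 2 being loaded by a wide-unicode (UCS4) interpreter, or vice versa. A quick check of which interpreter build is in use (65535 means UCS2, 1114111 means UCS4):

  python -c "import sys; print(sys.maxunicode)"

A hedged fix, assuming a compiler and the librdkafka headers are available, is to force a local build against the running interpreter: pip install --no-binary confluent-kafka confluent-kafka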

Re: Not able to connect to bootstrap server when one broker down

Hi Rohit, We checked that ISR was available and all leaders were there. It was failing during instantiation of the client, while making the very first connection to a broker, because it randomly selected from the *bootstrap.servers* list only that broker which was down, and tried to connect to that server only. But when we killed that stuck mapper's attempt and retried, it ran fine. *Regards,* *Prateek Rajput* < prateek.rajput@flipkart.com > On Tue, Aug 25, 2020 at 8:32 PM rohit garg < rohit.garg41@gmail.com > wrote: > Please check your ISR using the describe command and see if there is a leader > available when one of the brokers is down. > > Thanks and Regards, > Rohit > > On Tue, Aug 25, 2020, 20:20 Prateek Rajput > <prateek.rajput@flipkart.com.invalid> wrote: > > > Hi, please, if anyone can help it will be a huge favor. > > > > *Regards,* > > *Prateek Rajput* < prateek.rajput@flipkart.co...

Re: Not able to connect to bootstrap server when one broker down

Please check your ISR using the describe command and see if there is a leader available when one of the brokers is down. Thanks and Regards, Rohit On Tue, Aug 25, 2020, 20:20 Prateek Rajput <prateek.rajput@flipkart.com.invalid> wrote: > Hi, please, if anyone can help it will be a huge favor. > > *Regards,* > *Prateek Rajput* < prateek.rajput@flipkart.com > > > > On Tue, Aug 25, 2020 at 12:06 AM Prateek Rajput < > prateek.rajput@flipkart.com > > wrote: > > > Hi everyone, > > I am new to Kafka, and recently started working on kafka in my company. > We > > recently migrated our client and cluster from the *0.10.x* version to > > *2.3.0*. I am facing this issue quite often. > > I have provided all brokers in *bootstrap.servers* config to instantiate > > the producer client but while using this client for batch publishing, > > sometimes some of my mappers get stuck. > > I ...

Re: Not able to connect to bootstrap server when one broker down

Hi, please, if anyone can help it will be a huge favor. *Regards,* *Prateek Rajput* < prateek.rajput@flipkart.com > On Tue, Aug 25, 2020 at 12:06 AM Prateek Rajput < prateek.rajput@flipkart.com > wrote: > Hi everyone, > I am new to Kafka, and recently started working on Kafka in my company. We > recently migrated our client and cluster from the *0.10.x* version to > *2.3.0*. I am facing this issue quite often. > I have provided all brokers in the *bootstrap.servers* config to instantiate > the producer client, but while using this client for batch publishing, > sometimes some of my mappers get stuck. > I debugged and found that one broker was down (for some maintenance > activity). It was getting stuck because the mapper's client was trying > to connect to that node only for the very first time, and it was failing > with a NoRouteToHost exception. > I have read that the very first time the client will select a random...
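For context on the first-connection behavior described above: bootstrap.servers is only used to find the cluster, and listing several brokers gives the client other addresses to try when one host is unreachable. The baseline config looks like this sketch (hosts are placeholders):

  # producer properties sketch -- hosts are placeholders
  bootstrap.servers=broker1:9092,broker2:9092,broker3:9092

The thread reports that the 2.3.0 client can still pin to a single down address on its very first connection, so the full list is necessary but, evidently, not always sufficient.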