Posts

Showing posts from July, 2019

Kafka S3 Connector

Hi Team, I handle operations on a Big Data Platform. We have a Kafka cluster that has to be connected to S3 buckets from on-prem servers. While searching, I found that only Confluent Kafka has S3 connectors, so I was wondering whether Apache has any solution in place for this. Kindly suggest how to achieve Kafka and S3 connectivity. Thanks & Regards, Himansu Kumar Panigrahy Mercedes-Benz Research and Development India Pvt Ltd, Embassy Crest, Plot No. 05, EPIP Zone, Phase 1, Whitefield Road, Bangalore - 560066, India Email - himansu.panigrahy@daimler.com Web - www.mbrdi.co.in
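
For context: Apache Kafka itself ships the Kafka Connect framework but no Apache-maintained S3 sink; Confluent's kafka-connect-s3 plugin is freely available and runs on a plain Apache Kafka Connect worker, and tools like Pinterest's Secor are another option. A minimal sketch of such a sink configuration, with bucket, topic, and sizing values as placeholders:

    # Sketch only -- assumes Confluent's kafka-connect-s3 jar is on the
    # Connect worker's plugin.path; all values below are placeholders
    name=s3-sink
    connector.class=io.confluent.connect.s3.S3SinkConnector
    tasks.max=2
    topics=my-topic
    s3.bucket.name=my-bucket
    s3.region=us-east-1
    storage.class=io.confluent.connect.s3.storage.S3Storage
    format.class=io.confluent.connect.s3.format.json.JsonFormat
    flush.size=1000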

Kafka SSL Issue Observed

Hi team, any update on the below issue? Regards, Soumya From: Nayak, Soumya R. Sent: Wednesday, July 31, 2019 11:37 AM To: users@kafka.apache.org Subject: Kafka SSL Issue Observed Hi team, I am using SSL and SASL PLAIN on the Kafka brokers (cluster of 4 nodes). Kafka version: 1.0.0. I am observing the below issue with regard to SSL. Why is this issue happening? Is it addressed in the latest versions? [2019-07-30 06:11:35,629] WARN Failed to send SSL Close message (org.apache.kafka.common.netw java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) at org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.jav...

Re: Partition assignment in kafka streams

You cannot hook into partition assignment, and I am not sure what exactly you want to do. You can get lo...
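
For anyone with a similar question: while there is no rebalance callback to hook into, the assignment can at least be observed from outside. A minimal Java sketch, assuming a running KafkaStreams instance and the localThreadsMetadata() API of this era (roughly Kafka 2.0+):

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.processor.TaskMetadata;
    import org.apache.kafka.streams.processor.ThreadMetadata;

    public class AssignedPartitions {
        // "streams" is an already-started KafkaStreams instance (assumption);
        // polling this is a workaround, not a rebalance callback.
        static void printAssignment(KafkaStreams streams) {
            for (ThreadMetadata thread : streams.localThreadsMetadata()) {
                for (TaskMetadata task : thread.activeTasks()) {
                    System.out.println(task.taskId() + " -> " + task.topicPartitions());
                }
            }
        }
    }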

Re: Site Issues

Thanks, Tristan! ________________________________ From: Tristan Lannigan < tristanlannigan@gmail.com > Sent: Wednesday, July 31, 2019 9:56 AM To: users@kafka.apache.org Subject: Site Issues Hi there, I found a grammatical error on https://kafka.apache.org/uses under the *Stream Processing* header, halfway into the paragraph. "Published" is used where "publish" is appropriate. Regarding the Contact Us page, the mailto: links on the page all have a space (" ") between the mailto: and the actual email address. I found myself right-clicking to copy the link from my browser, but pasting it in Gmail results in the space being encoded ("%20"). Take care

Re: TLS Communication With Zookeeper Cluster

Documentation on "Managing ZK cluster nodes" was disappointingly thin, so I asked: is there any product currently in use which uses ZK to configure its nodes? The answer is Apache Solr https://lucene.apache.org/solr/guide/6_6/solrcloud.html Makes sense? ________________________________ From: Nayak, Soumya R. < snayak@firstam.com > Sent: Wednesday, July 31, 2019 2:00 AM To: users@kafka.apache.org Subject: TLS Communication With Zookeeper Cluster Hi Martin, Thanks for the reply. What exactly is SOLR? In my case I have set up a ZooKeeper cluster (3 nodes) across three Azure Ubuntu VMs, each VM having one node. Regards, Soumya -----Original Message----- From: Martin Gainty < mgainty@hotmail.com > Sent: Tuesday, July 30, 2019 5:21 PM To: users@kafka.apache.org Subject: Re: TLS Communication With Zookeeper Cluster MG>definitely implement 3.5 ZK+ as suggested _____________________________...

Re: How to unsubscribe from the Kafka user group

I already sent an email to that address, but I'm still getting mail from the list. Any solutions? On 30-Jul-2019 7:24 PM, Jakub Scholz < jakub@scholz.cz > wrote: I think you should send an email to users-unsubscribe@kafka.apache.org - that should get you unsubscribed. Jakub On Tue, Jul 30, 2019 at 3:25 PM ASHOK MACHERLA < iAshok7@outlook.com > wrote: > Hi Teammates > > Can anyone guide me to unsubscribe from the Kafka user group? > > > Sent from Outlook< http://aka.ms/weboutlook > >

Re: Not able to start Kafka

Please send your email again correctly, as the images haven't been delivered (probably blocked by security policies). On Wed, 31 Jul 2019 at 16:40, J Vel, Kumaravel < kumaravel.j@wabco-auto.com > wrote: > Dear, > > I'm getting the following error while starting Kafka: > > [image: image.png] > > [image: image.png] > [image: image.png] > *Best regards,* > Kumaravel J > > WABCO IT > * BI IT Digital Big Data Analytics* > Mob: +91 95000 62307 > kumaravel.j@wabco-auto.com

Site Issues

Hi there, I found a grammatical error on https://kafka.apache.org/uses under the *Stream Processing* header, halfway into the paragraph. "Published" is used where "publish" is appropriate. Regarding the Contact Us page, the mailto: links on the page all have a space (" ") between the mailto: and the actual email address. I found myself right-clicking to copy the link from my browser, but pasting it in Gmail results in the space being encoded ("%20"). Take care

Not able to start Kafka

Dear, I'm getting the following error while starting Kafka: Best regards, Kumaravel J WABCO IT BI IT Digital Big Data Analytics Mob: +91 95000 62307 kumaravel.j@wabco-auto.com

Log Retention

Hi Manna, Thanks for the update. Will try with log.cleaner.enable=false. If false, will that parameter override the log.retention.hours and log.retention.bytes properties, and vice versa when true? Regards, Soumya -----Original Message----- From: M. Manna < manmedia@gmail.com > Sent: Tuesday, July 30, 2019 4:46 PM To: Kafka Users < users@kafka.apache.org > Subject: Re: Log Retention Hi, instead of tweaking retention time/size, you can try using log.cleaner.enable=false. It's true by default as of 0.9.0.1. However, I haven't tried that myself. There is always going to be an impact since it takes up resources. And in production, the general idea is to do housekeeping at an acceptable rate. That again depends on how much cost you can bear. Thanks, On Tue, 30 Jul 2019 at 11:29, Nayak, Soumya R. < snayak@firstam.com > wrote: > Hi Team, > > There are two variables in Kafka - log.retention.hours and ...

Re: TLS Communication With Zookeeper Cluster

No. There is quorum SSL between ZooKeeper servers on different ports. https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#Quorum+TLS Note: please don't use self-signed certificates in production, and put the CAs in your trust store. Client ports are for clients. > Am 30.07.2019 um 12:41 schrieb Nayak, Soumya R. < snayak@firstam.com >: > > Thanks Harsha for the link. > > As I am using a zookeeper cluster, in the below link there is a mention that there is no SSL support between zookeeper servers. (Will any future version have this feature?) > > So is it that the zookeeper servers will talk to each other on the clientPort - 2181 and the kafka brokers will talk to these zookeeper servers over SSL on the secureClientPort - 2281? > > Please confirm if it's correct or if there is anything I am missing. > > Regards, > Soumya > > -----Original Message----- > From: Harsha < kafka@harsha.io > ...
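
For reference, the Quorum TLS settings from that admin page look roughly like this in zoo.cfg, assuming ZooKeeper 3.5.5+ with paths and passwords as placeholders:

    # quorum (server-to-server) TLS, ZooKeeper 3.5.5+
    sslQuorum=true
    serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
    ssl.quorum.keyStore.location=/path/to/quorum-keystore.jks
    ssl.quorum.keyStore.password=changeit
    ssl.quorum.trustStore.location=/path/to/quorum-truststore.jks
    ssl.quorum.trustStore.password=changeit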

Kafka SSL Issue Observed

Hi team, I am using SSL and SASL PLAIN on the Kafka brokers (cluster of 4 nodes). Kafka version: 1.0.0. I am observing the below issue with regard to SSL. Why is this issue happening? Is it addressed in the latest versions? [2019-07-30 06:11:35,629] WARN Failed to send SSL Close message (org.apache.kafka.common.netw java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) at org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:212) at org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:170) at org.apache.kafka.common.utils.Utils.closeAll(Utils.java:703) at org.apach...

TLS Communication With Zookeeper Cluster

Hi Martin, Thanks for the reply. What exactly is SOLR? In my case I have set up a ZooKeeper cluster (3 nodes) across three Azure Ubuntu VMs, each VM having one node. Regards, Soumya -----Original Message----- From: Martin Gainty < mgainty@hotmail.com > Sent: Tuesday, July 30, 2019 5:21 PM To: users@kafka.apache.org Subject: Re: TLS Communication With Zookeeper Cluster MG>definitely implement 3.5 ZK+ as suggested ________________________________ From: Nayak, Soumya R. < snayak@firstam.com > Sent: Tuesday, July 30, 2019 6:41 AM To: users@kafka.apache.org Subject: TLS Communication With Zookeeper Cluster Thanks Harsha for the link. As I am using a zookeeper cluster, in the below link there is a mention that there is no SSL support between zookeeper servers. (Will any future version have this feature?) MG>JavaDeveloper claims affirmative *if* you configure SSL on SOLR nodes Step 6: Co...

Re: How to unsubscribe from the Kafka user group

I think you should send an email to users-unsubscribe@kafka.apache.org - that should get you unsubscribed. Jakub On Tue, Jul 30, 2019 at 3:25 PM ASHOK MACHERLA < iAshok7@outlook.com > wrote: > Hi Teammates > > Can anyone guide me to unsubscribe from the Kafka user group? > > > Sent from Outlook< http://aka.ms/weboutlook > >

Re: TLS Communication With Zookeeper Cluster

MG>definitely implement 3.5 ZK+ as suggested ________________________________ From: Nayak, Soumya R. < snayak@firstam.com > Sent: Tuesday, July 30, 2019 6:41 AM To: users@kafka.apache.org Subject: TLS Communication With Zookeeper Cluster Thanks Harsha for the link. As I am using a zookeeper cluster, in the below link there is a mention that there is no SSL support between zookeeper servers. (Will any future version have this feature?) MG>JavaDeveloper claims affirmative *if* you configure SSL on SOLR nodes Step 6: Configure Solr properties in ZooKeeper Before you start any SolrCloud nodes, you must configure your Solr cluster properties in ZooKeeper, so that Solr nodes know to communicate via SSL. The urlScheme cluster-wide property needs to be set to https before any Solr node starts up. Use the command below: 1. server\scripts\cloud-scripts\zkcli.bat -zkhost localhost:2181 -cmd clusterprop -name urlScheme -val...

Re: Log Retention

Hi, instead of tweaking retention time/size, you can try using log.cleaner.enable=false. It's true by default as of 0.9.0.1. However, I haven't tried that myself. There is always going to be an impact since it takes up resources. And in production, the general idea is to do housekeeping at an acceptable rate. That again depends on how much cost you can bear. Thanks, On Tue, 30 Jul 2019 at 11:29, Nayak, Soumya R. < snayak@firstam.com > wrote: > Hi Team, > > There are two variables in Kafka - log.retention.hours and > log.retention.bytes. > What value has to be set for the above two variables so that there is no > limit on hours or bytes and the messages stay forever? Is it a > production-level setup? > Will it impact Kafka performance and use much disk space, or can we > change the above variable values in between and restart the brokers? > > Why I am asking the above is in hyperledger fabric bl...

TLS Communication With Zookeeper Cluster

Thanks Harsha for the link. As I am using a zookeeper cluster, in the below link there is a mention that there is no SSL support between zookeeper servers. (Will any future version have this feature?) So is it that the zookeeper servers will talk to each other on the clientPort - 2181 and the kafka brokers will talk to these zookeeper servers over SSL on the secureClientPort - 2281? Please confirm if it's correct or if there is anything I am missing. Regards, Soumya -----Original Message----- From: Harsha < kafka@harsha.io > Sent: Monday, July 29, 2019 4:26 PM To: users@kafka.apache.org Subject: Re: TLS Communication With Zookeeper Cluster Here is the guide https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide you need ZooKeeper 3.5 or higher for TLS. On Mon, Jul 29, 2019, at 1:21 AM, Nayak, Soumya R. wrote: > Hi Team, > > Is there any way a mutual TLS setup can be done with > zookeeper? If any...
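
For reference, the server side of the linked SSL guide boils down to something like this in zoo.cfg, assuming ZooKeeper 3.5.5+ (older 3.5.x passed the ssl.* values as -Dzookeeper.ssl.* JVM flags instead; paths and passwords are placeholders):

    # plaintext port can stay alongside the TLS port during migration
    clientPort=2181
    secureClientPort=2281
    serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
    ssl.keyStore.location=/path/to/keystore.jks
    ssl.keyStore.password=changeit
    ssl.trustStore.location=/path/to/truststore.jks
    ssl.trustStore.password=changeit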

Log Retention

Hi Team, There are two variables in Kafka - log.retention.hours and log.retention.bytes. What value has to be set for the above two variables so that there is no limit on hours or bytes and the messages stay forever? Is it a production-level setup? Will it impact Kafka performance and use much disk space, or can we change the above variable values in between and restart the brokers? Why I am asking the above is that in the Hyperledger Fabric blockchain framework, Kafka and ZooKeeper are used as a messaging service, but recently many people have complained about its pruning of data for unknown reasons. So I just wanted to know how best Kafka can be configured so that there are no issues with pruning. Currently ZooKeeper version 3.4.10 and Kafka version 1.0.0 are used. Will the issues be resolved by migrating to the latest versions? Regards,
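
For reference, the documented "no limit" values for these settings are below; a minimal server.properties sketch, assuming time- and size-based deletion are what is pruning the data:

    # -1 disables time-based deletion (log.retention.ms takes precedence
    # over log.retention.hours when both are set)
    log.retention.ms=-1
    # -1 (the default) disables size-based deletion
    log.retention.bytes=-1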

Re: Partition assignment in kafka streams

Hi All, The main reason for knowing the partitions is to have localized routing based on the partitions assigned to a set of stream tasks. This would really help in my use case. Thanks On Mon, Jul 29, 2019 at 8:58 PM Navneeth Krishnan < reachnavneeth2@gmail.com > wrote: > Hi, > > I'm using the processor topology for my use case and I would like to get > the partitions assigned to a particular stream instance. I looked at the > addSource function but I don't see a way to add a callback to get notified > when partition assignment or reassignment happens. Please advise. > > Thank you >

Partition assignment in kafka streams

Hi, I'm using the processor topology for my use case and I would like to get the partitions assigned to a particular stream instance. I looked at the addSource function but I don't see a way to add a callback to get notified when partition assignment or reassignment happens. Please advise. Thank you

Smoothly expanding log.dirs

Hey all! I was wondering how you all might recommend expanding your log.dirs settings safely. Specifically, when you add new entries to your log.dirs setting on your brokers and execute a rolling restart to have the brokers take the new configuration, how do you spread out your partitions into the new directory that was added? I've noticed that this will not happen automatically for existing partitions (on version 1.1.1). A relatively straightforward, albeit dangerous, solution is to simply remove all contents from the existing log.dirs defined in the broker's configuration (including the new one). I'd really rather not do that going forward if I could avoid it, as it does put additional strain on the cluster for what I feel should be possible to do mostly within a single broker. I am aware that there is a more manual method available by modifying the files in the partition directories, outlined for the most part here: https://youtu.be/qDjEveb8vzk?t=15...
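
For what it's worth, since Kafka 1.1 (KIP-113) kafka-reassign-partitions.sh can move existing replicas between log directories, including within a single broker, via a "log_dirs" field in the reassignment JSON. A rough sketch with names and paths as placeholders (exact flags vary by version; on 1.1.x the tool takes --zookeeper for the reassignment plus --bootstrap-server for the log-dir move):

    # move.json -- "log_dirs" is parallel to "replicas"; "any" leaves
    # a replica in its current directory
    {"version":1,"partitions":[
      {"topic":"my-topic","partition":0,"replicas":[1],
       "log_dirs":["/data/kafka-logs-new"]}]}

    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --bootstrap-server broker1:9092 \
      --reassignment-json-file move.json --execute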

Kafka stops on cleaner-offset-checkpoint not found

I am stuck at this and hopeless. I have asked the same question here: https://stackoverflow.com/questions/57247237/error-failed-to-access-checkpoint-file-cleaner-offset-checkpoint-in-dir-tmp-kaf but I didn't get any response yet. Could you please help me resolve this? [2019-07-28 04:11:14,310] ERROR Failed to access checkpoint file cleaner-offset-checkpoint in dir /tmp/kafka-logs (kafka.log.LogCleaner) org.apache.kafka.common.errors.KafkaStorageException: Error while reading checkpoint file /tmp/kafka-logs/cleaner-offset-checkpoint Caused by: java.nio.file.NoSuchFileException: /tmp/kafka-logs/cleaner-offset-checkpoint at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) at java...
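
One thing that stands out in the trace: the data directory is /tmp/kafka-logs, and many distros periodically clean /tmp, which deletes checkpoint files out from under a running or restarting broker. A common fix, with the path as a placeholder, is to point log.dirs somewhere persistent in server.properties:

    # server.properties -- keep Kafka data out of /tmp
    log.dirs=/var/lib/kafka/data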

Re: Print RocksDb Stats

Hi Muhammed, RocksDB is not an in-memory store. If you use only InMemoryKeyValueStore, you are not using any RocksDB. Best, Bruno On Wed, Jul 17, 2019 at 3:26 PM Muhammed Ashik < ashikes@gmail.com > wrote: > > Hi, I'm trying to log the RocksDB stats with the below code, but am not > observing any logs. > I'm enabling this as the off-heap memory grows indefinitely over a > period of time. > We were using InMemoryKeyValueStore only; I was not sure whether kafka-streams uses > RocksDB as the default store. > > Kafka Streams version - 2.0.0 > > class CustomRocksDBConfig extends RocksDBConfigSetter { > override def setConfig(storeName: String, options: Options, configs: > util.Map[String, AnyRef]): Unit = { > > val stats = new Statistics > stats.setStatsLevel(StatsLevel.ALL) > options.setStatistics(stats) > .setStatsDumpPeriodSec(600) > options > .setInfoLogLevel(...
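
For completeness, a RocksDBConfigSetter like the one above only takes effect if it is registered in the Streams config, and it only applies to RocksDB-backed (persistent) stores, not to InMemoryKeyValueStore. A minimal Java sketch, assuming the poster's CustomRocksDBConfig class:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // ignored entirely by in-memory stores
    props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
              CustomRocksDBConfig.class);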

Re: TLS Communication With Zookeeper Cluster

MG>below ________________________________ From: Nayak, Soumya R. < snayak@firstam.com > Sent: Monday, July 29, 2019 4:20 AM To: users@kafka.apache.org Subject: TLS Communication With Zookeeper Cluster Hi Team, Is there any way a mutual TLS communication setup can be done with zookeeper? If there are any references, can you please let me know. I am trying to set up a Zookeeper cluster (3 Zookeepers) and Kafka cluster (4 Kafka Brokers) using docker images in Azure Ubuntu VM servers. Also, there is a new protocol of RAFT-ETCD. How is it when compared to the Kafka Zookeeper setup? MG>CoreOS installation with etcd installed and running under systemctl then MG>Kubernetes services (apiserver, controller-manager, kubelet, proxy) installed and running under CoreOS systemctl MG> https://coreos.com/blog/running-kubernetes-example-on-CoreOS-part-1/ MG>since ETCD-keystore != JSSE_keystore MG>for now I would suggest gett...

Re: TLS Communication With Zookeeper Cluster

Here is the guide https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide you need ZooKeeper 3.5 or higher for TLS. On Mon, Jul 29, 2019, at 1:21 AM, Nayak, Soumya R. wrote: > Hi Team, > > Is there any way a mutual TLS communication setup can be done with > zookeeper? If there are any references, can you please let me know. > > I am trying to set up a Zookeeper cluster (3 Zookeepers) and Kafka > cluster (4 Kafka Brokers) using docker images in Azure Ubuntu VM > servers. > > > Also, there is a new protocol of RAFT-ETCD. How is it when compared to > the Kafka Zookeeper setup? > > Regards, > Soumya

TLS Communication With Zookeeper Cluster

Hi Team, Is there any way a mutual TLS communication setup can be done with zookeeper? If there are any references, can you please let me know. I am trying to set up a Zookeeper cluster (3 Zookeepers) and Kafka cluster (4 Kafka Brokers) using docker images in Azure Ubuntu VM servers. Also, there is a new protocol of RAFT-ETCD. How is it when compared to the Kafka Zookeeper setup? Regards, Soumya

Replica assignment with rack awareness

Hi, I wanted to confirm the behavior when a new topic is created in a cluster where brokers have the broker.rack parameter configured. Let's say we have a 6-broker cluster: 3 brokers in rack 'RACK1' (broker ids: 1,2,3) and 3 brokers in rack 'RACK2' (broker ids: 4,5,6). I created a topic with replication factor 4. Partition 0: Replica set = 1,4,2,5 (equal distribution of replicas across racks). Will replicas always be equally distributed across racks, or can the distribution also be uneven, like Replica set = 1,4,2,3, where 1,2,3 are from the same rack? thanks, ashish
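
For reference, a quick way to see where the replicas actually landed is the stock CLI of this era (hostnames and topic names are placeholders):

    # server.properties on each broker
    broker.rack=RACK1

    # then inspect the assignment
    bin/kafka-topics.sh --zookeeper zk1:2181 --describe --topic my-topic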

Re: Rebalancing algorithm is extremely suboptimal for long processing

That seems to be a real bug -- and a pretty common one. We will look into it asap. Guozhang On Thu, Jul 25, 2019 at 7:26 AM Raman Gupta < rocketraman@gmail.com > wrote: > I'm looking forward to the incremental rebalancing protocol. In the > meantime, I've updated to Kafka 2.3.0 to take advantage of the static > group membership, and this has actually already helped tremendously. > However, unfortunately while it was working initially, some streams > are now unable to start at all, due to a code error in the broker > during the consumer join request: > > [2019-07-25 08:14:11,978] ERROR [KafkaApi-1] Error when handling > request: > clientId=x-stream-4a43d5d4-d38f-4cb0-8741-7a6c685abf15-StreamThread-1-consumer, > correlationId=6, api=JOIN_GROUP, > > body={group_id=x-stream,session_timeout_ms=10000,rebalance_timeout_ms=300000,member_id=,group_instance_id=lcrzf-1,protocol_type=consumer,protocols=[{name=stream,metad...

Re: [EXTERNAL] Handling of inter.broker.protocol.version and log.message.format.version after upgrading

Hi Manuel, thanks. I did the three rolling restarts, or in my case three deployments of containers with different config... But the question now is whether I should do these three restarts each time I update to the latest Kafka, or whether I can now just change both versions to the latest one without three restarts, as it's a small change and not two major versions. I already upgraded Kafka itself from 2.1.1 to 2.3.0 in the meantime, but still use 2.1-IV2 as the version for log.message.format.version and inter.broker.protocol.version... So should I do two more deployments, each one changing one version at a time, or just bump everything up to 2.3-IV1 in one go for the next config-change deployment? Thanks Sebastian On 25-Jul-19 5:52 PM, Jose Manuel Vega Monroy wrote: > Hi Sebastian, > > Recently we upgraded exactly from that same version to that version, and > you need just two rolling restarts if you want to keep the same log > format version, or three to upgra...
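
For reference, the documented procedure is one rolling restart per change; a server.properties sketch of the sequence for this particular upgrade (versions taken from the thread):

    # step 1: run the 2.3.0 binaries with both values still pinned
    inter.broker.protocol.version=2.1
    log.message.format.version=2.1

    # step 2: rolling restart with the protocol bumped
    inter.broker.protocol.version=2.3
    log.message.format.version=2.1

    # step 3: rolling restart with the format bumped, once all clients cope
    inter.broker.protocol.version=2.3
    log.message.format.version=2.3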

Re: Rebalancing algorithm is extremely suboptimal for long processing

I'm looking forward to the incremental rebalancing protocol. In the meantime, I've updated to Kafka 2.3.0 to take advantage of the static group membership, and this has actually already helped tremendously. However, unfortunately while it was working initially, some streams are now unable to start at all, due to a code error in the broker during the consumer join request: [2019-07-25 08:14:11,978] ERROR [KafkaApi-1] Error when handling request: clientId=x-stream-4a43d5d4-d38f-4cb0-8741-7a6c685abf15-StreamThread-1-consumer, correlationId=6, api=JOIN_GROUP, body={group_id=x-stream,session_timeout_ms=10000,rebalance_timeout_ms=300000,member_id=,group_instance_id=lcrzf-1,protocol_type=consumer,protocols=[{name=stream,metadata=java.nio.HeapByteBuffer[pos=0 lim=64 cap=64]}]} (kafka.server.KafkaApis) java.util.NoSuchElementException: None.get at scala.None$.get(Option.scala:366) at scala.None$.get(Option.scala:364) at kafka.coordinator.group.GroupMetadata.generat...

Getting started with Kafka topic to store multiple types

Hi All, I am new to Kafka and still getting myself acquainted with the product. I have a basic question around using Kafka. I want to store in a Kafka topic a string value against some keys and a HashMap value against others. For this purpose, I have created two different producers as below, which I instantiate with two different producer instances. Note that I need to create two different producers since I want to use generic types properly; with a single producer for both a String and a Map I would need <String, Object> in the generic types, and Object is too generic, which I don't want to allow, so I defined two different producers... private static Producer<String, String> basicProducer = null; private static Producer<String, Map> hashProducer = null; Now, if I want to use other streams classes such as KTable or GlobalKTable from where I would read my data, then these classes also req...
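
A minimal Java sketch of the two-producer approach, assuming a hypothetical MapSerializer as a stand-in for a real (e.g. JSON) value serde:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.Serializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TwoProducers {
        // toy stand-in for a real JSON serializer (hypothetical)
        static final class MapSerializer implements Serializer<Map<String, String>> {
            @Override public void configure(Map<String, ?> configs, boolean isKey) { }
            @Override public byte[] serialize(String topic, Map<String, String> data) {
                return data.toString().getBytes(java.nio.charset.StandardCharsets.UTF_8);
            }
            @Override public void close() { }
        }

        public static void main(String[] args) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092"); // placeholder
            // same props, different value serializers -- generics stay precise
            Producer<String, String> basicProducer =
                new KafkaProducer<>(p, new StringSerializer(), new StringSerializer());
            Producer<String, Map<String, String>> hashProducer =
                new KafkaProducer<>(p, new StringSerializer(), new MapSerializer());
            basicProducer.send(new ProducerRecord<>("topic-a", "key1", "value1"));
            Map<String, String> m = new HashMap<>();
            m.put("field", "value");
            hashProducer.send(new ProducerRecord<>("topic-b", "key2", m));
            basicProducer.close();
            hashProducer.close();
        }
    }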

Re: [EXTERNAL] Handling of inter.broker.protocol.version and log.message.format.version after upgrading

Hi Sebastian, Recently we upgraded exactly from that same version to that version, and you need just two rolling restarts if you want to keep the same log format version, or three to upgrade that version too. Personally, I would recommend you keep those properties; for example, you could keep the current log format version for compatibility with old client versions. Cheers, Get Outlook for Android< https://aka.ms/ghei36 > From: Sebastian Schmitz Sent: Thursday, 25 July, 03:22 Subject: [EXTERNAL] Handling of inter.broker.protocol.version and log.message.format.version after upgrading To: users@kafka.apache.org Hey guys, I wonder what I should do with both settings for future upgrades after having finished the upgrade from 0.10.1 to 2.1.1. Should I just remove it from the config so it will default to the current version of Kafka, or do I have to do the rolling upgrade with changing versions and restarting Kafka three times each time? Thanks Sebas...

Handling of inter.broker.protocol.version and log.message.format.version after upgrading

Hey guys, I wonder what I should do with both settings for future upgrades after having finished the upgrade from 0.10.1 to 2.1.1. Should I just remove it from the config so it will default to the current version of Kafka, or do I have to do the rolling upgrade with changing versions and restarting Kafka three times each time? Thanks Sebastian

Add Agoda to the Powered by page

Hi, Could you please add Agoda ( https://www.agoda.com/ ) to the powered-by< http://kafka.apache.org/powered-by > page with the following description and logo: Apache Kafka powers the backbone of Agoda's data pipeline with trillions of events streaming through daily across multiple data centers. The majority of the events are destined for analytical systems and directly influence business decisions at one of the world's fastest growing online travel booking platforms. Logo: https://i1.wp.com/www.agoda.com/wp-content/uploads/2017/10/Main_Agoda_Logo.png Best regards Johan Lundahl --------------------------------------------------- Agoda Services Co., Ltd. e: johan.lundahl@agoda.com ---------------------------------------------------

Re: Repeating UNKNOWN_PRODUCER_ID errors for Kafka streams applications

Hello Pieter, Thanks for your updates (and sorry I lost your email in my inbox for a month)! In this release cycle (2.4) we are actively working on KIP-360, and it should resolve the UNKNOWN_PRODUCER_ID issue completely. In the meantime, I think I also agree with you that we do not need to purge data on every commit - not because it helps resolve this issue, as that should really be resolved by KIP-360, but because it is not necessary and may be costly with a short commit interval (especially with EOS we would have 10 purge round-trips per second). However, the consuming streams instance of that repartition topic may not be the same as the producing streams instance, so deciding the purge frequency from producing traffic is tricky to implement. After thinking about it, I think just having an internal hard-coded purging interval (say, 30 secs) is sufficient, since it may not really be worth exposing as a config. Guozhang On Mon, Jun 24, 2019 at 6:55 AM P...