
Posts

Showing posts from October, 2019

Re: serdeProps what does that mean

I guess that should be possible. Kafka Streams would not (de)serialize its configuration `Properties` ...

Kafka Streams OffsetOutOfRangeException / restart to recover

I'm getting an OffsetOutOfRangeException accompanied by the log message "Updating global state failed. You can restart KafkaStreams to recover from this error." But I've restarted the app several times and it's not recovering, it keeps failing the same way. Is this error message just wrong (and should be changed), or if not, why might the restart recovery not be happening? I see there's a cleanUp() method on KafkaStreams that looks like it's what's needed, but I don't find any callers of it in the streams source code. thx, Chris
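If it helps, a minimal sketch (the app id, bootstrap servers, and the topic inside buildTopology() are placeholders standing in for the real application) of wiping local state before restarting with the KafkaStreams.cleanUp() method mentioned above; cleanUp() deletes the local state directory for the application id and must be called while the instance is not running, so the global store is rebuilt from the topic on the next start():

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.Topology;

    public class RestartWithCleanup {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            KafkaStreams streams = new KafkaStreams(buildTopology(), props);
            // cleanUp() removes the local state directory for this application id,
            // so the (global) state stores are restored from their topics on start().
            streams.cleanUp();
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }

        private static Topology buildTopology() {
            // Placeholder: stands in for the real topology with its global store.
            StreamsBuilder builder = new StreamsBuilder();
            builder.globalTable("global-input-topic");   // placeholder topic
            return builder.build();
        }
    }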

Re: Needless group coordination overhead for GlobalKTables

Thanks Bruno, filed https://issues.apache.org/jira/browse/KAFKA-9127 . On Wed, Oct 30, 2019 at 2:06 AM Bruno Cadonna < bruno@confluent.io > wrote: > Hi Chris, > > Thank you for the clarification. Now I see what you mean. If your > topology works correctly, I would not file it as a bug but as a > possible improvement. > > Best, > Bruno > > On Wed, Oct 30, 2019 at 1:20 AM Chris Toomey < ctoomey@gmail.com > wrote: > > > > Bruno, > > > > I'm using a fork based off the 2.4 branch. It's not the global consumer > but > > the stream thread consumer that has the group id since it's built with > the > > main consumer config: > > > https://github.com/apache/kafka/blob/065411aa2273fd393e02f0af46f015edfc9f9b55/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L1051 > > . > > > > It shouldn't be creating a regular consumer for t...

Re: kafka stream ktable with suppress operator

Hi Matthias, Some additional information: after I restarted the app, it went into endless rebalancing. The join rate looks like the attachment below; it basically rebalances every 5 minutes. I checked each node's logs and found the warning below. On node A: 2019/10/31 10:13:46 | 2019-10-31 10:13:46,543 WARN [kafka-coordinator-heartbeat-thread | XXX] o.a.k.c.c.i.AbstractCoordinator [Consumer clientId=XXX-StreamThread-1-consumer, groupId=XXX] This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. 2019/10/31 10:13:46 | 2019-10-31 10:13:46,544 INFO [kafka-coordinator-heartbeat-thread | XXX] o.a.k.c.c.i.Abstrac...
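For reference, a small sketch (property values are illustrative, not recommendations; the app id and bootstrap servers are placeholders) of the two knobs the warning mentions, set on the Streams configuration with the consumer prefix so they reach the stream-thread consumer:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class PollTuning {
        static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "XXX");                 // placeholder app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
            // Allow more time between poll() calls before the member leaves the group...
            props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG), 600_000);
            // ...and/or return fewer records per poll() so each iteration finishes sooner.
            props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 100);
            return props;
        }
    }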

Re: Exactly once transactions

Thanks a lot for all the suggestions! The Streams API is not really what I need, but I looked into the Streams sources and found that it initializes transactional Producers in ConsumerRebalanceListener.onPartitionsAssigned, which is called in KafkaConsumer.poll before fetching records. Looks like this approach solves the race I was talking about. Sergi On Thu, Oct 31, 2019 at 12:10, Matthias J. Sax < matthias@confluent.io > wrote: > I would recommend to read the Kafka Streams KIP about EOS: > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-129%3A+Streams+Exactly-Once+Semantics > > Fencing is the most critical part in the implementation. Kafka Streams > basically uses a dedicated `transactional.id` per input topic-partition. > Hence, if a rebalance happens and a partition is re-assigned, it is > ensured that only one "instance" of a consumer-producer pair can commit > the transactions successfully, and the "new produc...

Re: serdeProps what does that mean

Hi, I'm sorry for the confusion. My question should be: 'are there limitations on what you can put in this configuration map'? Could you put objects with state in the map, for instance? Will this map be serialized? Thanks On Thu, Oct 31, 2019, at 10:35, Matthias J. Sax wrote: > Well, the `configure()` method is there to configure the serde :) > > Hence, it might be a good place to instantiate the state of the serde, > i.e., to create a `SchemaStore` instance. > > For the passed-in map: the full `StreamsConfig` will be passed into it, > hence, you can add any configuration you want to the global > `KafkaStreams` configuration and it will be forwarded. > > Hope this helps. > > > -Matthias > > On 10/28/19 5:34 AM, Bart van Deenen wrote: > > Hi all > > > > I need a custom serde for Kafka. Our serde needs a SchemaStore instance with some state, and I'm trying to figure out how to...
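A sketch of the pattern being described (the class, config key, MyEvent type, and SchemaStore stub are made up for illustration): the configuration map is handed to the serde in-process and is not serialized by Streams, so a common approach is to put a simple setting into the StreamsConfig properties and build the stateful object inside configure():

    import java.util.Map;
    import org.apache.kafka.common.serialization.Deserializer;
    import org.apache.kafka.common.serialization.Serde;
    import org.apache.kafka.common.serialization.Serializer;

    // Stand-ins for the poster's own types, just so the sketch compiles.
    class SchemaStore { SchemaStore(String url) { } }
    class MyEvent { }

    public class SchemaStoreSerde implements Serde<MyEvent> {

        public static final String SCHEMA_STORE_URL_CONFIG = "my.serde.schema.store.url"; // made-up key

        private SchemaStore schemaStore;

        @Override
        public void configure(final Map<String, ?> configs, final boolean isKey) {
            // The map carries the flattened KafkaStreams configuration, including any
            // custom entries added to the StreamsConfig Properties; it is passed
            // within the same JVM and never (de)serialized.
            final String url = (String) configs.get(SCHEMA_STORE_URL_CONFIG);
            this.schemaStore = new SchemaStore(url);
        }

        @Override
        public void close() { }

        @Override
        public Serializer<MyEvent> serializer() {
            // Would use schemaStore; omitted in this sketch.
            throw new UnsupportedOperationException("omitted in this sketch");
        }

        @Override
        public Deserializer<MyEvent> deserializer() {
            // Would use schemaStore; omitted in this sketch.
            throw new UnsupportedOperationException("omitted in this sketch");
        }
    }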

Re: kafka stream ktable with suppress operator

Hi Matthias, When I redeploy the application with the same application id, it causes a rebalance loop: partition revoked -> rebalance -> offset reset to zero -> partition assigned -> partition revoked. The app was running well before the redeployment, but once redeployed, it keeps rebalancing for hours and I have to switch to a new application id to stop that. You mentioned that reduce() uses RocksDB stores by default while suppress() is in-memory. Is that the reason that reduce() has both -repartition and -changelog topics while suppress() only has a -changelog topic? And could that be related to the shutdown hook? If I don't provide a shutdown hook and perform a redeployment, will it cause the above issue? Thanks! On Thu, Oct 31, 2019 at 5:55 AM Matthias J. Sax < matthias@confluent.io > wrote: > > Just a follow up: currently, suppress() only supports in-memory stores > (note, that `suppress()` has it's own st...

Re: kafka stream ktable with suppress operator

Just a follow up: currently, suppress() only supports in-memory stores (note, that `suppress()` has it'...
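For readers following the thread, a minimal sketch (topic names, window sizes, serdes, and the app id are placeholders) of the windowed-reduce-plus-suppress shape being discussed: reduce() is materialized in a window store (RocksDB-backed by default) with its own -changelog topic, while suppress() keeps an in-memory buffer backed by a separate -changelog topic.

    import java.time.Duration;
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Suppressed;
    import org.apache.kafka.streams.kstream.Suppressed.BufferConfig;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class SuppressExample {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.Long()))
                   .groupByKey()
                   .windowedBy(TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)))
                   // reduce() materializes a window store with a -changelog topic
                   .reduce(Long::sum)
                   // suppress() buffers results in memory until the window closes;
                   // the buffer has its own -changelog topic
                   .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
                   .toStream()
                   .to("output-topic");

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "suppress-example");   // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }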

Re: Attempt to prove Kafka transactions work

Quite a project to test transactions... The current system test suite is part of the code base: https://...

Re: serdeProps what does that mean

Well, the `configure()` method is there to configure the serde :) Hence, it might be a good place to inst...

Re: Kafka Streams Daily Aggregation

Btw: There is an example implementation of a custom daily window that considers time zones. Maybe it helps:...

Re: Kafka EOL policy question.

Bug fix releases are done "on demand" and there is no strict policy to not do a bug fix release f...

Re: Exactly once transactions

I would recommend to read the Kafka Streams KIP about EOS: https://cwiki.apache.org/confluence/display/KAFK...

Re: Exactly once transactions

Sergi, have you looked at using the Kafka Streams API? If you're just consuming from a topic, transforming the data, then writing it out to a sink topic, you can probably do that relatively easily using the Streams DSL. No need to manually subscribe, poll, or commit offsets. Then if you want exactly-once guarantees, you just set the processing.guarantee property to "exactly_once" and you're all set. However, since it sounds like your application isn't stateful, I think your only concern is producing duplicate messages to the sink topic, right? (You don't have any internal state that could get messed up in the event of crashes, network failures, etc.) Do you have control over who/what consumes the sink topic? If so, can you make that consumer tolerant of duplicate messages? Exactly-once works well in my experience, but there is overhead involved, so only use it if you need it. :) Alex On Wed, Oct 30, 2019 at 10:04 PM Ki...
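To make the suggestion concrete, a minimal sketch (topic names, app id, bootstrap servers, and the mapValues transformation are placeholders) of a consume-transform-produce app in the Streams DSL where the only exactly-once-specific part is the processing.guarantee setting:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;

    public class ExactlyOnceCopyTransform {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "copy-transform");      // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
            // Turn on exactly-once processing; Streams manages the transactional
            // producers, offset commits, and fencing internally.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("topic-a", Consumed.with(Serdes.String(), Serdes.String()))
                   .mapValues(value -> value.toUpperCase())   // placeholder transformation
                   .to("topic-b", Produced.with(Serdes.String(), Serdes.String()));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }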

Kafka Connect JDBC Connector | Closing JDBC Connection after each poll()

We are going to use the Kafka Connect JDBC Source Connector to ingest data from Oracle databases. We have one Kafka JDBC connector per Oracle DB. Looking at the JDBC connector implementation, if we have N maxTasks per connector, there will be N JDBC connections to the server. These connections are kept alive and are not closed after each poll(). Our DB admins have strict limits on the number of live connections to the DB servers. Because of this we are thinking about closing the JDBC connection after each poll(), since our poll intervals are usually 10 minutes or more. Is this supported natively by the JDBC connector, or do we have to write a patch?
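If a patch or custom task ends up being the route, a rough sketch (the config key, table, query, and topic names are made up; incremental offset logic, pacing between polls, and error handling are omitted) of the open-query-close-per-poll idea in a custom Kafka Connect SourceTask:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.errors.ConnectException;
    import org.apache.kafka.connect.source.SourceRecord;
    import org.apache.kafka.connect.source.SourceTask;

    public class CloseAfterPollJdbcTask extends SourceTask {
        private String jdbcUrl;

        @Override
        public void start(Map<String, String> config) {
            jdbcUrl = config.get("connection.url");   // assumed config key for this sketch
        }

        @Override
        public List<SourceRecord> poll() throws InterruptedException {
            List<SourceRecord> records = new ArrayList<>();
            // Open the connection only for the duration of one poll, then close it,
            // so no connection is held during the long wait between polls.
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, payload FROM my_table")) {  // placeholder query
                while (rs.next()) {
                    records.add(new SourceRecord(
                            Collections.singletonMap("table", "my_table"),      // source partition
                            Collections.singletonMap("id", rs.getLong("id")),   // source offset
                            "my-topic", Schema.STRING_SCHEMA, rs.getString("payload")));
                }
            } catch (SQLException e) {
                throw new ConnectException(e);
            }
            return records;
        }

        @Override
        public void stop() { }

        @Override
        public String version() { return "sketch"; }
    }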

Re: Exactly once transactions

Hi, It may not be for your case, but I have implemented an example of a Kafka transaction: https://medium.com/@mykidong/kafka-transaction-56f022af1b0c ; in this example, offsets are saved to an external DB. - Kidong On Thu, Oct 31, 2019 at 11:39 AM, Sergi Vladykin < sergi.vladykin@gmail.com > wrote: > Ok, so what is the advice? Not to use Kafka transactions ever because they > are unusable in real life? > Can you please provide a recipe for how to make it work in the simple scenario: > no databases, just two topics, no admin actions. > > Sergi > > On Wed, Oct 30, 2019 at 22:39, Jörn Franke < jornfranke@gmail.com > wrote: > > > Please note that for exactly-once it is not sufficient to simply set an > > option. The producer and consumer must individually make sure that they > > only process the message once. For instance, the consumer can crash and > it > > may then resend already submitted messages ...

Re: Exactly once transactions

Ok, so what is the advice? Not to use Kafka transactions ever because they are unusable in real life? Can you please provide a recipe for how to make it work in the simple scenario: no databases, just two topics, no admin actions. Sergi On Wed, Oct 30, 2019 at 22:39, Jörn Franke < jornfranke@gmail.com > wrote: > Please note that for exactly-once it is not sufficient to simply set an > option. The producer and consumer must individually make sure that they > only process the message once. For instance, the consumer can crash and it > may then resend already submitted messages, or the producer might crash and > might write the same message twice to a database, etc. > Or due to a backup and restore or through a manual admin action all these > things might happen. > Those are not "edge" scenarios. In operations they can happen quite often, > especially in a containerized infrastructure. > You have to consider this for all messaging so...

Re: Exactly once transactions

Please note that for exactly-once it is not sufficient to simply set an option. The producer and consumer must individually make sure that they only process the message once. For instance, the consumer can crash and may then resend already submitted messages, or the producer might crash and write the same message twice to a database, etc. All these things might also happen due to a backup and restore or a manual admin action. Those are not "edge" scenarios; in operations they can happen quite often, especially in a containerized infrastructure. You have to consider this for all messaging solutions (not only Kafka) in your technical design. > On 30.10.2019 at 20:30, Sergi Vladykin < sergi.vladykin@gmail.com > wrote: > > Hi! > > I am investigating the possibilities of "exactly once" Kafka transactions for > the consume-transform-produce pattern. As far as I understand, the logic must > be the following (in pseudo-code): > ...

Exactly once transactions

Hi! I am investigating the possibilities of "exactly once" Kafka transactions for the consume-transform-produce pattern. As far as I understand, the logic must be the following (in pseudo-code): var cons = createKafkaConsumer(MY_CONSUMER_GROUP_ID); cons.subscribe(TOPIC_A); for (;;) { var recs = cons.poll(); for (var part : recs.partitions()) { var partRecs = recs.records(part); var prod = getOrCreateProducerForPart(MY_TX_ID_PREFIX + part); prod.beginTransaction(); sendAllRecs(prod, TOPIC_B, partRecs); prod.sendOffsetsToTransaction(singletonMap(part, lastRecOffset(partRecs) + 1), MY_CONSUMER_GROUP_ID); prod.commitTransaction(); } } Is this the right approach? It looks to me like there is a possible race here and the same record from topic A could be committed to topic B more than once: if rebalancing happens after our thread polled a record and before creating a producer, then another thread wil...
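For readers following along, a compilable rendering of the pseudo-code above (topic, group, and transactional-id names are placeholders; the rebalance race the question raises is not addressed here, and producer cleanup is omitted):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TransactionalCopyLoop {
        private static final Map<TopicPartition, KafkaProducer<String, String>> producers = new HashMap<>();

        public static void main(String[] args) {
            Properties cprops = new Properties();
            cprops.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
            cprops.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");         // placeholder
            cprops.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
            cprops.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
            cprops.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            cprops.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cprops)) {
                consumer.subscribe(Collections.singletonList("topic-a"));
                while (true) {
                    ConsumerRecords<String, String> recs = consumer.poll(Duration.ofSeconds(1));
                    for (TopicPartition part : recs.partitions()) {
                        KafkaProducer<String, String> prod = producerFor(part);
                        prod.beginTransaction();
                        long lastOffset = -1L;
                        for (ConsumerRecord<String, String> rec : recs.records(part)) {
                            prod.send(new ProducerRecord<>("topic-b", rec.key(), rec.value()));
                            lastOffset = rec.offset();
                        }
                        // Commit the consumed offsets as part of the same transaction.
                        prod.sendOffsetsToTransaction(
                                Collections.singletonMap(part, new OffsetAndMetadata(lastOffset + 1)),
                                "my-consumer-group");
                        prod.commitTransaction();
                    }
                }
            }
        }

        // One transactional producer per input partition, as in the pseudo-code.
        private static KafkaProducer<String, String> producerFor(TopicPartition part) {
            return producers.computeIfAbsent(part, p -> {
                Properties pprops = new Properties();
                pprops.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
                pprops.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-tx-" + p);        // placeholder prefix
                pprops.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
                pprops.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
                KafkaProducer<String, String> prod = new KafkaProducer<>(pprops);
                prod.initTransactions();
                return prod;
            });
        }
    }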

A few Kafka consumer threads do not consume

Kafka details: Kafka version kafka_2.12-2.2.0, 3-broker Kafka cluster. Kafka topic config: 60 partitions, replication factor 3, min.insync.replicas 2. Overview of the application's Kafka usage: 1. The Collector application receives data from a Kafka topic (60 partitions). 2. The application has 4 instances running on 4 different server machines. 3. An application instance starts multiple consumer threads. 4. This is done through a Java executor service, e.g.: ExecutorService es = Executors.newFixedThreadPool(NoOfconsumers); 5. An auto.offset.reset: 'earliest' property is also passed to each Kafka consumer thread. 6. Each partition in the topic is assigned to a single consumer thread using the Kafka assign method. 7. A "seekToBeginning" method is called inside the Kafka consumer thread to consume from the first offset of the partition. 8. The Kafka consumer group id is kept constant throughout the application i...
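A condensed, single-process sketch (broker address, topic, and group id are placeholders; how the 60 partitions are divided among the 4 instances is omitted) of the per-partition consumer threads described above, using assign() plus seekToBeginning():

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PartitionConsumerPool {
        public static void main(String[] args) {
            int partitions = 60;
            ExecutorService es = Executors.newFixedThreadPool(partitions);
            for (int p = 0; p < partitions; p++) {
                final int partition = p;
                es.submit(() -> consumePartition(partition));
            }
        }

        private static void consumePartition(int partition) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "collector-group");         // placeholder
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

            TopicPartition tp = new TopicPartition("collector-topic", partition); // placeholder topic
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.assign(Collections.singletonList(tp));          // manual assignment, no rebalance
                consumer.seekToBeginning(Collections.singletonList(tp)); // start from the first offset
                while (!Thread.currentThread().isInterrupted()) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        // process rec ...
                    }
                }
            }
        }
    }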

Transactions and offsets topics in a multi-rack environment

When setting up Kafka in a network with racks or multiple datacenters and manual replica placement, is it required that the transaction and offsets topics match the topology as a user topic would? Are producers or consumers affected by changing these with live traffic? -- Sorry, this was sent from mobile. Will do less grammar and spell checking than usual.

Kafka EOL policy question.

Hi All, I'm trying to understand the Kafka EOL policy. As stated in https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan : """What Is Our EOL Policy? Given 3 releases a year and the fact that no one upgrades three times a year, we propose making sure (by testing!) that rolling upgrade can be done from each release in the past year (i.e. last 3 releases) to the latest version. We will also attempt, as a community, to do bugfix releases as needed for the last 3 releases.""" So does it mean that 2.1.0 is the oldest release that still gets bugfixes as of now, and that once 2.4 is released, 2.1.0 won't get any further bugfixes? If not, please help me understand which releases still get bugfixes as of now and when they will lose support. Any help is appreciated. Regards, Vitalii.

Re: Needless group coordination overhead for GlobalKTables

Hi Chris, Thank you for the clarification. Now I see what you mean. If your topology works correctly, I would not file it as a bug but as a possible improvement. Best, Bruno On Wed, Oct 30, 2019 at 1:20 AM Chris Toomey < ctoomey@gmail.com > wrote: > > Bruno, > > I'm using a fork based off the 2.4 branch. It's not the global consumer but > the stream thread consumer that has the group id since it's built with the > main consumer config: > https://github.com/apache/kafka/blob/065411aa2273fd393e02f0af46f015edfc9f9b55/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L1051 > . > > It shouldn't be creating a regular consumer for the topic since my topology > only has a single element, the GlobalKTable, which is populated by the > global consumer. My Scala code: > > val builder: StreamsBuilder = new StreamsBuilder() > val gTable = builder.globalTable[K, V](...) > val stream = n...

Re: Needless group coordination overhead for GlobalKTables

Bruno, I'm using a fork based off the 2.4 branch. It's not the global consumer but the stream thread consumer that has the group id, since it's built with the main consumer config: https://github.com/apache/kafka/blob/065411aa2273fd393e02f0af46f015edfc9f9b55/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L1051 . It shouldn't be creating a regular consumer for the topic since my topology only has a single element, the GlobalKTable, which is populated by the global consumer. My Scala code: val builder: StreamsBuilder = new StreamsBuilder() val gTable = builder.globalTable[K, V](...) val stream = new KafkaStreams(builder.build(), props) stream.start() I can disable the stream thread consumer by configuring num.stream.threads = 0, but why does it create this stream thread consumer in the first place if it hasn't been requested in the topology? thx, Chris On Tue, Oct 29, 2019 at 2:08 PM Bruno Cadonna < bruno@confluent...
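A Java rendering of the workaround mentioned above (the app id and topic are placeholders; the configs are standard Streams settings): with only a GlobalKTable in the topology, setting num.stream.threads to 0 leaves just the global consumer, which does not use a consumer group.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.GlobalKTable;

    public class GlobalTableOnly {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            GlobalKTable<String, String> table = builder.globalTable(
                    "reference-topic",                                  // placeholder topic
                    Consumed.with(Serdes.String(), Serdes.String()));

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "global-table-only");   // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
            // Per the thread above: with no stream threads, only the global consumer is created.
            props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 0);

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }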