
Posts

Showing posts from December, 2023

Using MirrorMaker2 | How to conclude that clusters are in sync?

Hi, I am using MirrorMaker2 to do unidirectional mirroring of all topics from one Kafka cluster to another. I am doing this to migrate clusters to reduce storage size. I am looking for a way to conclude the exercise, confirm that the clusters are in sync, and then start migrating consumers to the target cluster. Please help me with metrics or validation steps to conclude that the clusters are in sync and all messages of a particular topic have been migrated to the target cluster. Thanks, Akshaya
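
One rough way to check progress here is to compare each source topic's log-end offsets with those of its mirrored counterpart on the target cluster, alongside MirrorMaker2's own metrics such as replication-latency-ms. Offsets need not line up one-for-one after retention or compaction, so treat this only as a heuristic, not proof of completeness. Below is a minimal AdminClient sketch, with placeholder bootstrap addresses and topic names, assuming DefaultReplicationPolicy naming (topic "orders" from the cluster alias "source" becomes "source.orders" on the target):
-----
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class MirrorSyncCheck {

    // Sums the log-end offsets of a topic's partitions on one cluster.
    static long endOffsetSum(String bootstrap, String topic) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        try (Admin admin = Admin.create(props)) {
            int partitions = admin.describeTopics(List.of(topic))
                    .allTopicNames().get().get(topic).partitions().size();
            Map<TopicPartition, OffsetSpec> request = new HashMap<>();
            for (int p = 0; p < partitions; p++) {
                request.put(new TopicPartition(topic, p), OffsetSpec.latest());
            }
            Map<TopicPartition, ListOffsetsResultInfo> endOffsets = admin.listOffsets(request).all().get();
            return endOffsets.values().stream().mapToLong(ListOffsetsResultInfo::offset).sum();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder addresses; with DefaultReplicationPolicy the mirrored topic
        // is prefixed with the source cluster alias, e.g. "source.orders".
        long sourceEnd = endOffsetSum("source-kafka:9092", "orders");
        long targetEnd = endOffsetSum("target-kafka:9092", "source.orders");
        System.out.printf("source end offsets=%d, target end offsets=%d, difference=%d%n",
                sourceEnd, targetEnd, sourceEnd - targetEnd);
    }
}
-----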

Re: Is the KafkaStreams#store() method thread-safe?

You need to wrap the HashMap with a ConcurrentHashMap implementation, otherwise you will receive no thread notifications. ________________________________ From: Kohei Nozaki < kohei@apache.org > Sent: Wednesday, December 27, 2023 5:52:02 PM To: users@kafka.apache.org < users@kafka.apache.org > Subject: Re: Is the KafkaStreams#store() method thread-safe? Hi Sophie, thank you so much for sharing that. It all makes sense to me. Unfortunately my application uses REPLACE_THREAD, so it seems like I need a workaround for this until this thread unsafeness is removed. As I raised in my first email, would sharing only the ReadOnlyWindowStore instance with other threads be a workaround for this? Would the store object here be able to capture the changes that would be made by rebalancing? I've filed a ticket here (I'm interested in submitting a patch, but I cannot m...

Re: Is the KafkaStreams#store() method thread-safe?

Hi Sophie, thank you so much for sharing that. It all makes sense to me. Unfortunately my application uses REPLACE_THREAD, so it seems like I need a workaround for this until this thread unsafeness is removed. As I raised in my first email, would sharing only the ReadOnlyWindowStore instance with other threads be a workaround for this? Would the store object here be able to capture the changes that would be made by rebalancing? I've filed a ticket here (I'm interested in submitting a patch, but I cannot make any commitment): https://issues.apache.org/jira/browse/KAFKA-16055 Regards, Kohei > On Dec 27, 2023, at 5:43, Sophie Blee-Goldman < sophie@responsive.dev > wrote: > > Hey Kohei, > > Good question -- I don't think there's exactly a short answer to this > seemingly simple question so bear with me for a second. > > My understanding is that KafkaStreams#store is very much intended to be > thread-safe, and would...

Re: Is the KafkaStreams#store() method thread-safe?

Hey Kohei, Good question -- I don't think there's exactly a short answer to this seemingly simple question so bear with me for a second. My understanding is that KafkaStreams#store is very much intended to be thread-safe, and would have been back when it was first added a long time ago, and the javadocs should probably be updated to reflect that. That said, you are totally right that whatever the intention, it is technically not completely thread safe anymore since the storeProviders map can be mutated when threads are added or removed. Of course, as long as you are not adding or removing StreamThreads in your application, it should be effectively thread-safe (but note: this includes using the REPLACE_THREAD option with the StreamsUncaughtExceptionHandler) We should go ahead and fix this of course. I'm pretty sure we can just change the HashMap to a ConcurrentHashMap and be done with it -- there's already locking around the actual map modifications...

Re: where to capture a failed task's exception

Hey Akash, Thanks for the question! For a direct answer, no: throwing exceptions from poll() is only one of many ways that a task can fail. If you look at the AK source, every failure ultimately uses the AbstractStatus.State.FAILED enum [1]. You can trace the usages of this enum back to see all of the ways that a connector or task can fail. The failure trace is also exposed in the REST API, which is a stable public API you can depend on. [1]: https://github.com/apache/kafka/blob/d582d5aff517879b150bc2739bad99df07e15e2b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractStatus.java#L27C13-L27C13 Happy to help, Greg On Tue, Dec 26, 2023 at 2:41 PM Akash Dhiman < akashdhiman.me@gmail.com > wrote: > > Hello, > > I have a requirement where I need to detect failed tasks based on the > specific errors and emit a metric only when it doesn't fail based on these > specific errors (these include unknown_topic_or_partition...
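
Building on the answer above, here is a minimal sketch of checking the status endpoint from Java rather than wrapping poll(); the worker address and connector name are placeholders, and a real implementation should parse the JSON and inspect each task's "state" and "trace" fields instead of doing substring checks:
-----
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FailedTaskCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder worker address and connector name.
        String url = "http://localhost:8083/connectors/my-connector/status";
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        String body = resp.body();
        // Crude string checks for illustration only; parse the JSON in real code.
        boolean failed = body.contains("\"state\":\"FAILED\"");
        boolean expectedFailure = body.contains("UnknownTopicOrPartitionException")
                || body.contains("ConfigException");
        if (failed && !expectedFailure) {
            System.out.println("Task failed for an unexpected reason; emit the metric here.");
        }
    }
}
-----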

where to capture a failed task's exception

Hello, I have a requirement where I need to detect failed tasks based on specific errors and emit a metric only when the failure is not caused by those specific errors (these include unknown_topic_or_partition, specific cases of ConfigException, etc.). I know about a similar metric accessible via Prometheus, but that gives me the count of failed tasks regardless of the reason. I was thinking that wrapping the task's poll method in a try/catch block would be sufficient, where I inspect the error details in the catch block and emit the metric when they don't match the errors I want to exclude, but I am unsure if this captures all possible scenarios for which a task might fail. Is it guaranteed that every exception/error for which a task might fail gets surfaced via the poll method?

Re: Question about using public classes from the internal packages

On Wed, Dec 20, 2023 at 6:00 PM Vikram Singh < vikram.singh@clouzersolutions.com > wrote: > Hello Team, > I have done setup of 3 node kafka cluster(3.4.1) all nodes within cluster > are configured as broker and controller due to some reasons i have > restarted kafka cluster. but after restarting kafka cluster nodes are not > getting in the same cluster. Can you please help me out with this issue? > > On Tue, Dec 19, 2023 at 10:00 PM Bruno Cadonna < cadonna@apache.org > wrote: > >> Hi John, >> >> In general, we do not guarantee anything on APIs of the internal >> package. That is also the reason why you do not need a KIP to change >> those classes. Any class for which the build generates Javadoc is >> considered public API [1]. For public APIs we guarantee backwards >> compatibility. >> >> Best, >> Bruno >> >> [1] >> >> https://cwi...

Is the KafkaStreams#store() method thread-safe?

Hello, I have Kafka Streams questions related to thread safety. In my Kafka Streams application, I have the 2 threads below: * Thread A: this creates a Topology object including state stores and everything and then eventually calls the constructor of the KafkaStreams class and the start() method. * Thread B: this has a reference to the KafkaStreams object that thread A created. This periodically calls KafkaStreams#store on the object, gets a ReadOnlyWindowStore instance and reads the data in the store for monitoring purposes. I'm wondering if what my app does is OK in terms of thread safety. I'm not so worried about ReadOnlyWindowStore because the javadoc says: "Implementations should be thread-safe as concurrent reads and writes are expected." But as for KafkaStreams#store, I'm not so sure if it is OK to call from separate threads. One thing which concerns me is that it touches a HashMap, which is not thread-safe, here: https://github.com/apache/kafka/b...
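
For reference, here is a minimal sketch of what thread B's periodic read could look like, assuming a windowed store named "my-window-store" with String keys and Long values (placeholders). It retries on InvalidStateStoreException, which Streams throws while a store is not queryable, for example during a rebalance:
-----
import java.time.Duration;
import java.time.Instant;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

public class StoreMonitor {

    // Called periodically from thread B; `streams` is the instance thread A started.
    static long countRecentValues(KafkaStreams streams, String key) throws InterruptedException {
        while (true) {
            try {
                ReadOnlyWindowStore<String, Long> store = streams.store(
                        StoreQueryParameters.fromNameAndType(
                                "my-window-store", QueryableStoreTypes.windowStore()));
                Instant now = Instant.now();
                long count = 0;
                try (WindowStoreIterator<Long> it =
                             store.fetch(key, now.minus(Duration.ofMinutes(5)), now)) {
                    while (it.hasNext()) {
                        it.next();
                        count++;
                    }
                }
                return count;
            } catch (InvalidStateStoreException e) {
                // Store is not queryable yet (e.g. during a rebalance); retry.
                Thread.sleep(500);
            }
        }
    }
}
-----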

Strange error in Kafka server logs

Hi, I have a problem with the SASL_SSL configuration of Kafka. In server.log there is a strange error: [2023-12-21 00:22:17,254] DEBUG Setting SASL/SCRAM_SHA_256 server state to FAILED (org.apache.kafka.common.security.scram.internals.ScramSaslServer) [2023-12-21 00:22:17,256] DEBUG Set SASL server state to FAILED during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator) [2023-12-21 00:22:17,257] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Failed authentication with /127.0.0.1 (channelId=127.0.0.1:9092-127.0.0.1:63474-6) (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector) My server.properties: sasl.enabled.mechanisms=SCRAM-SHA-256 listeners=SASL_SSL://localhost:9092 advertised.listeners=SASL_SSL://localhost:9092 sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256 security.inter.broker.protocol=SASL_SSL ssl.keystore....
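
Invalid-credentials failures with SCRAM usually mean either that the SCRAM user was never created on the broker side (for example with kafka-configs --alter --add-config 'SCRAM-SHA-256=[password=...]' --entity-type users) or that the client's JAAS credentials or mechanism do not match. For comparison, a minimal client-side configuration sketch, with placeholder username, password and truststore paths:
-----
import java.util.Properties;

public class ScramClientConfig {

    // Client properties for SASL_SSL + SCRAM-SHA-256. Username, password and
    // truststore values are placeholders; they must match a SCRAM credential
    // that actually exists on the brokers.
    static Properties scramClientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"alice\" password=\"alice-secret\";");
        props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
        props.put("ssl.truststore.password", "truststore-secret");
        return props;
    }
}
-----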

Re: Question about using public classes from the internal packages

Hello Team, I have set up a 3-node Kafka cluster (3.4.1); all nodes within the cluster are configured as both broker and controller. For some reason I had to restart the Kafka cluster, but after the restart the nodes are not joining the same cluster. Can you please help me out with this issue? On Tue, Dec 19, 2023 at 10:00 PM Bruno Cadonna < cadonna@apache.org > wrote: > Hi John, > > In general, we do not guarantee anything on APIs of the internal > package. That is also the reason why you do not need a KIP to change > those classes. Any class for which the build generates Javadoc is > considered public API [1]. For public APIs we guarantee backwards > compatibility. > > Best, > Bruno > > [1] > > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals > > On 12/18/23 1:46 PM, John Brackin wrote: > > Hi all, > > > > I have a question about using classes that ar...

Re: Question about using public classes from the internal packages

Hi John, In general, we do not guarantee anything on APIs of the internal package. That is also the reason why you do not need a KIP to change those classes. Any class for which the build generates Javadoc is considered public API [1]. For public APIs we guarantee backwards compatibility. Best, Bruno [1] https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals On 12/18/23 1:46 PM, John Brackin wrote: > Hi all, > > I have a question about using classes that are are public but the class is > contained in the internals packages Generally, would it be supported if I > wrote Kafka code referencing these classes? > > This question comes from an attempt to add a CacheListener to a > TimestampedKeyValueStore. The line of code looks like this: > > cachingEnabled = ((WrappedStateStore) this.store).setFlushListener(new > AggregateCacheFlushListener<>(context), false); > > To gain acce...

Question about using public classes from the internal packages

Hi all, I have a question about using classes that are public but contained in the internals packages. Generally, would it be supported if I wrote Kafka code referencing these classes? This question comes from an attempt to add a CacheListener to a TimestampedKeyValueStore. The line of code looks like this: cachingEnabled = ((WrappedStateStore) this.store).setFlushListener(new AggregateCacheFlushListener<>(context), false); To gain access to setFlushListener, the code needs to cast the underlying state store object to an org.apache.kafka.streams.state.internals.WrappedStateStore object. Kind regards, John Brackin

Re: Can a controller in a kafka kraft cluster be a bootstrap server

Hello, why am I getting the logs below continuously, and how can I configure the client to avoid this? 11:32:43.469 WARN org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=javaproducer11703811766106] Received invalid metadata error in produce request on partition CCR_CLZ_COM_STATUS_MONITOR-PREDEV_123-0 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.. Going to request metadata update now On Wed, Dec 13, 2023 at 11:51 AM Vikram Singh < vikram.singh@clouzersolutions.com > wrote: > Hello Luke, > Please look into below logs, > > 12:32:15.813 WARN org.apache.kafka.clients.NetworkClient - [AdminClient > clientId=test-lgn-clz-com-v0.0.43-INS_CLZ_COM-TEST_123-7e87eb30-69e9-4746-b351-beac3a085383-admin] ...

Re: In Kafka KRaft can controllers participate as bootstrap servers

Hello, why am I getting the logs below continuously, and how can I configure the client to avoid this? 11:32:43.469 WARN org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=javaproducer11703811766106] Received invalid metadata error in produce request on partition CCR_CLZ_COM_STATUS_MONITOR-PREDEV_123-0 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.. Going to request metadata update now On Thu, Dec 14, 2023 at 1:50 AM David Arthur <david.arthur@confluent.io.invalid> wrote: > Only brokers can be specified as --bootstrap-servers for AdminClient (the > bin/kafka-* scripts). > > In 3.7, we are adding the ability to bootstrap from KRaft controllers for > certain scripts. In this case, the scripts will use ...

Re: In Kafka KRaft can controllers participate as bootstrap servers

Only brokers can be specified as --bootstrap-servers for AdminClient (the bin/kafka-* scripts). In 3.7, we are adding the ability to bootstrap from KRaft controllers for certain scripts. In this case, the scripts will use --bootstrap-controllers (the details are in https://cwiki.apache.org/confluence/display/KAFKA/KIP-919%3A+Allow+AdminClient+to+Talk+Directly+with+the+KRaft+Controller+Quorum+and+add+Controller+Registration ) But in general, no, controllers cannot be used as bootstrap servers. -David On Tue, Dec 5, 2023 at 10:05 AM Dima Brodsky < ddbrodsky@gmail.com > wrote: > Hello, question, > > If I have my kafka cluster behind a VIP for bootstrapping, is it possible > to have the controllers participate in the bootstrap process or only > brokers can? > > Thanks! > ttyl > Dima > > -- > ddbrodsky@gmail.com > > "The price of reliability is the pursuit of the utmost simplicity. > It is a price whi...

Re: Can a controller in a kafka kraft cluster be a bootstrap server

Hello Luke, Please look into below logs, 12:32:15.813 WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-lgn-clz-com-v0.0.43-INS_CLZ_COM-TEST_123-7e87eb30-69e9-4746-b351-beac3a085383-admin] Error connecting to node kafka-0-0.kafka.test.svc.cluster.local:9092 (id: 0 rack: null) java.net.UnknownHostException: kafka-0-0.kafka.test.svc.cluster.local at java.net.InetAddress.getAllByName0(InetAddress.java:1281) ~[?:1.8.0_212] at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_212] at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_212] at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27) ~[kafka-clients-3.4.1.jar:?] at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:110) ~[kafka-clients-3.4.1.jar:?] at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:510) ~[kafka-clients-3.4.1.jar:?] 12:36:42.366 E...

Re: Can a controller in a kafka kraft cluster be a bootstrap server

Hi Vikram, It would be good if you could share client and broker logs for troubleshooting. Thanks. Luke On Wed, Dec 13, 2023 at 1:15 PM Vikram Singh <vikram.singh@clouzersolutions.com.invalid> wrote: > > Hello, > I have 3 node kafka cluster, when one node goes down for some reason the > request which are serving on down node is not routing to other running > node, It takes me always to restart the services, > Running on kafka version 3.2.1 (kraft mode) > > On Mon, Dec 11, 2023 at 12:33 PM Luke Chen < showuon@gmail.com > wrote: > > > Hi Dima, > > > > You can set "process.roles=controller,broker" to get what you want. > > Otherwise, the controller role cannot be served as a broker. > > > > Thanks. > > Luke > > > > On Sat, Dec 9, 2023 at 3:59 AM Dima Brodsky < ddbrodsky@gmail.com > wrote: > > > > > Hello, > > > > > >...

Re: Kafka frequent disconnection issue

Hello, I have a 3-node Kafka cluster. When one node goes down for some reason, the requests being served by the down node are not routed to the other running nodes; I always have to restart the services. Running Kafka version 3.2.1 (KRaft mode). On Mon, Dec 11, 2023 at 12:40 PM Luke Chen < showuon@gmail.com > wrote: > Hi Ankit, > > We can't see the log snippet. > But it looks normal to disconnect when connections.max.idle.ms expires. > When increasing the connections.max.idle.ms value, there might be some > activities in the connection during this time (ex: at 5 min 10 sec), so the > idle timer is reset. > > Thanks. > Luke > > On Fri, Dec 8, 2023 at 11:06 PM Ankit Nigam > <ankit.nigam@ericsson.com.invalid> wrote: > > > Hi Team, > > > > > > > > We are using Apache Kafka 3.3.1 in our application. We have created > Kafka > > Admin Client , Kafka Producer and Kaf...

Re: Can a controller in a kafka kraft cluster be a bootstrap server

Hello, I have a 3-node Kafka cluster. When one node goes down for some reason, the requests being served by the down node are not routed to the other running nodes; I always have to restart the services. Running Kafka version 3.2.1 (KRaft mode). On Mon, Dec 11, 2023 at 12:33 PM Luke Chen < showuon@gmail.com > wrote: > Hi Dima, > > You can set "process.roles=controller,broker" to get what you want. > Otherwise, the controller role cannot be served as a broker. > > Thanks. > Luke > > On Sat, Dec 9, 2023 at 3:59 AM Dima Brodsky < ddbrodsky@gmail.com > wrote: > > > Hello, > > > > Would the following configuration be valid in a kafka kraft cluster > > > > So lets say we had the following configs for a controller and a broker: > > > > === controller - > > > > > https://github.com/apache/kafka/blob/6d1d68617ecd023b787f54aafc24a4232663428d/config/kraft/contro...

[DISCUSS] Kafka Connect source task interruption semantics

Hi all, I'd like to solicit input from users and maintainers on a problem we've been dealing with for source task cleanup logic. If you'd like to pore over some Jira history, here's the primary link: https://issues.apache.org/jira/browse/KAFKA-15090 To summarize, we accidentally introduced a breaking change for Kafka Connect in https://github.com/apache/kafka/pull/9669 . Before that change, the SourceTask::stop method [1] would be invoked on a separate thread from the one that did the actual data processing for the task (polling the task for records, transforming and converting those records, then sending them to Kafka). After that change, we began invoking SourceTask::stop on the same thread that handled data processing for the task. This had the effect that tasks which blocked indefinitely in the SourceTask::poll method [2] with the expectation that they could stop blocking when SourceTask::stop was invoked would no longer be capable of graceful s...
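
To make the concern concrete, below is a purely illustrative SourceTask sketch (all names hypothetical) of the defensive pattern that works under either behavior: poll() uses a bounded wait on its record queue instead of blocking indefinitely, so the task can shut down gracefully whether stop() arrives on a separate thread or on the polling thread itself:
-----
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class BlockingSourceTask extends SourceTask {

    private final BlockingQueue<SourceRecord> queue = new LinkedBlockingQueue<>();
    private volatile boolean stopping = false;

    @Override
    public String version() {
        return "0.0.1";
    }

    @Override
    public void start(Map<String, String> props) {
        // A real task would start a background thread here that feeds `queue`.
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        if (stopping) {
            return null;
        }
        // Bounded wait rather than an unbounded take(): poll() returns regularly
        // (null means "no data right now"), so the framework gets a chance to call
        // stop() even when stop() runs on the same thread that calls poll().
        SourceRecord record = queue.poll(1, TimeUnit.SECONDS);
        return record == null ? null : List.of(record);
    }

    @Override
    public void stop() {
        stopping = true;
    }
}
-----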

Re: [ANNOUNCE] Apache Kafka 3.5.2

Hello, Is there a "git" issue with 3.5.2. When I look at github I see the 3.5.2 tag. But if I make the repo an upstream remote target I don't see 3.5.2. Any ideas what could be up? Thanks! ttyl Dima On Mon, Dec 11, 2023 at 3:36 AM Luke Chen < showuon@apache.org > wrote: > The Apache Kafka community is pleased to announce the release for > Apache Kafka 3.5.2 > > This is a bugfix release. It contains many bug fixes including > upgrades the Snappy and Rocksdb dependencies. > > All of the changes in this release can be found in the release notes: > https://www.apache.org/dist/kafka/3.5.2/RELEASE_NOTES.html > > > You can download the source and binary release from: > https://kafka.apache.org/downloads#3.5.2 > > > --------------------------------------------------------------------------------------------------- > > > Apache Kafka is a distributed streaming platform with four core...

Extend DefaultReplicationPolicy for MirrorSourceConnector / MirrorMaker

Hi Guys, we have an already running Kafka Connect cluster. I want to add a MirrorSourceConnector with a custom replication policy: ----- curl --request POST \ --url http://localhost:9083/connectors \ --header 'Accept: application/json' \ --header 'Content-Type: application/json' \ --data '{ "name" : "cluster-source", "config":{ "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector", ... "replication.policy.class" : "org.example.CustomReplicationPolicy", ... } }' ----- This leads to an error: ----- { "error_code": 400, "message": "Connector configuration is invalid and contains the following 1 error(s):\nInvalid value org.example.CustomReplicationPolicy for configuration replication.policy.class: Class org.example.CustomReplicationPolicy could not be found.\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorT...
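
The 400 response means the Connect worker cannot find org.example.CustomReplicationPolicy on its classpath or plugin path; the class needs to be packaged into a jar, placed in a directory on the worker's plugin.path (or on its classpath), and the worker restarted before the configuration will validate. As a purely illustrative sketch of such a class (using the class name from the request above), one common approach is to extend DefaultReplicationPolicy and override formatRemoteTopic, for example to keep the original topic name instead of prefixing it with the source cluster alias; if that is the actual goal, the IdentityReplicationPolicy bundled with newer releases may already cover it:
-----
package org.example;

import org.apache.kafka.connect.mirror.DefaultReplicationPolicy;

// Illustrative only: keeps the source topic name unchanged on the target cluster
// instead of prefixing it with the source cluster alias. Note that an identity-style
// policy also affects how already-replicated topics are detected, which is what the
// bundled IdentityReplicationPolicy handles.
public class CustomReplicationPolicy extends DefaultReplicationPolicy {

    @Override
    public String formatRemoteTopic(String sourceClusterAlias, String topic) {
        return topic;
    }
}
-----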

Re: [ANNOUNCE] Apache Kafka 3.5.2

Thanks for running the release, and thanks to all the contributors! Mickael On Mon, Dec 11, 2023 at 1:56 PM Josep Prat <josep.prat@aiven.io.invalid> wrote: > > Thanks Luke for running the release! > > Best! > > On Mon, Dec 11, 2023 at 12:34 PM Luke Chen < showuon@apache.org > wrote: > > > The Apache Kafka community is pleased to announce the release for > > Apache Kafka 3.5.2 > > > > This is a bugfix release. It contains many bug fixes including > > upgrades the Snappy and Rocksdb dependencies. > > > > All of the changes in this release can be found in the release notes: > > https://www.apache.org/dist/kafka/3.5.2/RELEASE_NOTES.html > > > > > > You can download the source and binary release from: > > https://kafka.apache.org/downloads#3.5.2 > > > > > > ------------------------------------------------------------------------------------...

Re: [ANNOUNCE] Apache Kafka 3.5.2

Thanks Luke for running the release! Best! On Mon, Dec 11, 2023 at 12:34 PM Luke Chen < showuon@apache.org > wrote: > The Apache Kafka community is pleased to announce the release for > Apache Kafka 3.5.2 > > This is a bugfix release. It contains many bug fixes including > upgrades the Snappy and Rocksdb dependencies. > > All of the changes in this release can be found in the release notes: > https://www.apache.org/dist/kafka/3.5.2/RELEASE_NOTES.html > > > You can download the source and binary release from: > https://kafka.apache.org/downloads#3.5.2 > > > --------------------------------------------------------------------------------------------------- > > > Apache Kafka is a distributed streaming platform with four core APIs: > > > ** The Producer API allows an application to publish a stream of records to > one or more Kafka topics. > > ** The Consumer API allows an appli...

[ANNOUNCE] Apache Kafka 3.5.2

The Apache Kafka community is pleased to announce the release for Apache Kafka 3.5.2 This is a bugfix release. It contains many bug fixes including upgrades the Snappy and Rocksdb dependencies. All of the changes in this release can be found in the release notes: https://www.apache.org/dist/kafka/3.5.2/RELEASE_NOTES.html You can download the source and binary release from: https://kafka.apache.org/downloads#3.5.2 --------------------------------------------------------------------------------------------------- Apache Kafka is a distributed streaming platform with four core APIs: ** The Producer API allows an application to publish a stream of records to one or more Kafka topics. ** The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them. ** The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an outp...

Re: Kafka frequent disconnection issue

Hi Ankit, We can't see the log snippet. But it looks normal to disconnect when connections.max.idle.ms expires. When increasing the connections.max.idle.ms value, there might be some activities in the connection during this time (ex: at 5 min 10 sec), so the idle timer is reset. Thanks. Luke On Fri, Dec 8, 2023 at 11:06 PM Ankit Nigam <ankit.nigam@ericsson.com.invalid> wrote: > Hi Team, > > > > We are using Apache Kafka 3.3.1 in our application. We have created Kafka > Admin Client , Kafka Producer and Kafka consumer in the application using > the default properties. > > > > Once our application starts we are observing below disconnects logs every > 5 minutes for Admin client and once for Kafka consumer and producer in our > application. Also at around the same time disconnection logs are being > observed in Kafka debug server logs.. > > Below is log snippet for the same. > > > ...
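
If the periodic disconnects themselves are the concern: they are harmless, since clients reconnect on demand, but the interval can be raised via connections.max.idle.ms on the client side (the admin client's default is 5 minutes, matching the cadence described above). A minimal sketch for the admin client with a placeholder bootstrap address; the same property exists for producers and consumers:
-----
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class LongerIdleAdmin {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Close idle connections after 10 minutes instead of the 5-minute default.
        props.put(AdminClientConfig.CONNECTIONS_MAX_IDLE_MS_CONFIG, "600000");
        try (Admin admin = Admin.create(props)) {
            // ... use the client as usual; fewer idle-disconnect log lines.
        }
    }
}
-----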

Re: Can a controller in a kafka kraft cluster be a bootstrap server

Hi Dima, You can set "process.roles=controller,broker" to get what you want. Otherwise, the controller role cannot be served as a broker. Thanks. Luke On Sat, Dec 9, 2023 at 3:59 AM Dima Brodsky < ddbrodsky@gmail.com > wrote: > Hello, > > Would the following configuration be valid in a kafka kraft cluster > > So lets say we had the following configs for a controller and a broker: > > === controller - > > https://github.com/apache/kafka/blob/6d1d68617ecd023b787f54aafc24a4232663428d/config/kraft/controller.properties > > process.roles=controller > node.id =1 > controller.quorum.voters=1@host1:9093 > listeners=CONTROLLER://:9093,BROKER://:9092 > controller.listener.names=CONTROLLER > > advertised.listeners=BROKER://:9092,CONTROLLER://:9093 > inter.broker.listener.name =BROKER > > === broker - > > https://github.com/apache/kafka/blob/6d1d68617ecd023b787f54aafc24a4232663428d...

Re: Relation between fetch.max.bytes, max.partition.fetch.bytes & max.poll.records

> is there any gain on number of network calls being made Basically, no. However, since the next fetch requests are sent only after previously fetched records are processed, setting max.poll.records too low can negatively affect the network-call frequency, depending on how you process records. If every poll() returns only a small number of records, batch processing (which is generally more efficient) isn't possible, so processing the previously fetched records can take longer due to the reduced throughput. Also posted on SO: https://stackoverflow.com/a/77633494/3225746 On Sun, Dec 10, 2023 at 3:42 Debraj Manna < subharaj.manna@gmail.com > wrote: > Can someone please clarify my below doubt? The same has been asked on stack > overflow also. > > https://stackoverflow.com/q/77630586/785523 > > > On Fri, 8 Dec, 2023, 21:33 Debraj Manna, < subharaj.manna@gmail.com > wrote: > > > Thanks again. > > > > Another follow...
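
To make the distinction concrete, here is a minimal consumer configuration sketch with placeholder bootstrap address, group and topic: fetch.max.bytes and max.partition.fetch.bytes shape the fetch responses returned by the brokers, while max.poll.records only caps how many of the already-fetched, buffered records each poll() hands back:
-----
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FetchConfigDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Broker-side limits on a single fetch response / per partition.
        props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 50 * 1024 * 1024);
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1024 * 1024);
        // Client-side cap on how many buffered records one poll() returns.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            System.out.println("records returned by this poll: " + records.count());
        }
    }
}
-----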

Re: Relation between fetch.max.bytes, max.partition.fetch.bytes & max.poll.records

Can someone please clarify my below doubt? The same has been asked on stack overflow also. https://stackoverflow.com/q/77630586/785523 On Fri, 8 Dec, 2023, 21:33 Debraj Manna, < subharaj.manna@gmail.com > wrote: > Thanks again. > > Another follow-up question, since max.poll.records has nothing to do with > fetch requests, then is there any gain on number of network calls being > made between consumer & broker if max.poll.records is set to 1 as against > let's say the default 500. > > On Wed, Dec 6, 2023 at 7:21 PM Haruki Okada < ocadaruma@gmail.com > wrote: > >> poll-idle-ratio-avg=1.0 doesn't immediately mean fetch throughput problem >> since if processing is very fast, the metric will always be near 1.0. >> >> On Mon, Dec 4, 2023 at 13:09 Debraj Manna < subharaj.manna@gmail.com > wrote: >> >> > Thanks for the reply. >> > >> > I read KIP >> > < ...

Can a controller in a kafka kraft cluster be a bootstrap server

Hello, Would the following configuration be valid in a Kafka KRaft cluster? So let's say we had the following configs for a controller and a broker: === controller - https://github.com/apache/kafka/blob/6d1d68617ecd023b787f54aafc24a4232663428d/config/kraft/controller.properties process.roles=controller node.id=1 controller.quorum.voters=1@host1:9093 listeners=CONTROLLER://:9093,BROKER://:9092 controller.listener.names=CONTROLLER advertised.listeners=BROKER://:9092,CONTROLLER://:9093 inter.broker.listener.name=BROKER === broker - https://github.com/apache/kafka/blob/6d1d68617ecd023b787f54aafc24a4232663428d/config/kraft/broker.properties process.roles=broker node.id=2 controller.quorum.voters=1@host1:9093 listeners=BROKER://:9092 inter.broker.listener.name=BROKER advertised.listeners=BROKER://:9092 The controller not only advertises itself as a controller but also as a broker. If the controller is contacted by a client as a bootstrap server wil...