Posts

Showing posts from December, 2024

Startup error due to log.dirs being a symbolic link

Hi, Our team has been using Apache Kafka for over a year. Recently, we planned to replace the Kafka log directory with a symbolic link. For example, in server.properties we set `log.dirs=/some/path/to/logdir`, where `/some/path/to/logdir` is a symbolic link to `/actual/path/to/logdir`. For more context, the two paths reside on different disk partitions. However, we keep failing to start Kafka. Here's the error message: ```
[2024-12-30 18:14:52,434] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-12-30 18:14:52,444] INFO [KafkaServer id=MYID] Rewriting /path/to/logdir/meta.properties (kafka.server.KafkaServer)
[2024-12-30 18:14:52,445] ERROR Error while writing meta.properties to /path/to/logdir (org.apache.kafka.storage.internals.log.LogDirFailureChannel)
java.nio.file.FileAlreadyExistsException: /path/to/logdir
at java.base/sun.nio.fs.UnixException.translateToIOExcepti...
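For reference, the layout described above is easy to reproduce locally. This is only a sketch using temporary stand-in directories (the post's real paths are elided); it shows that the symlink and the directory it points to are two distinct paths, which is relevant because the error log reports the path Kafka actually operates on:

```shell
# Stand-in directories for the layout described above. In the post, the
# link and its target live on different disk partitions; here we just use
# a temp dir so the sketch is runnable anywhere.
real_dir="$(mktemp -d)"
link_path="${real_dir}.link"
ln -s "$real_dir" "$link_path"

# The symlink and its target are different paths; tools (and code) that
# canonicalize paths will see the target, not the link.
echo "link   : $link_path"
echo "target : $(readlink "$link_path")"
```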

Re: Extracting key-value pair from Produce Request API.

Hi Chain Head, Are you seeing any errors, or just getting empty strings for k and v? If you are seeing empty strings, it could well be that the ByteBuffers returned by Record.key() and Record.value() have their positions set to the ByteBuffer's length, i.e. at the end of the ByteBuffer. Have you checked the value of position() coming back from those ByteBuffers, and have you also tried creating the string from position 0 rather than using the position() result? (Disclaimer: I haven't tried writing any code to test the values returned by the key/value position() methods.) Regards, David Finnie Infrasoft On 28/12/2024 2:17, Chain Head wrote: > Hi, > I am struggling to get the key-value pair from the Produce Request API. I > want to write it to a Buffer for further processing. I can't seem to get > the `k` and `v` values whereas the `.keySize` and `.valueSize` are reported > correctly. Please advise how to extract the key...
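The position() hypothesis above can be checked with plain java.nio, without any Kafka classes. This stdlib-only sketch (class and method names are illustrative, not Kafka's API) shows that a buffer whose position sits at its limit yields zero readable bytes, and that reading from position 0 through a duplicate() recovers the content without disturbing the original buffer:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BufferPositionDemo {
    // Reads all bytes of the buffer from position 0 via an independent
    // duplicate, so the source buffer's own position/limit are untouched.
    static String readAll(ByteBuffer buf) {
        ByteBuffer dup = buf.duplicate(); // shares content, independent position
        dup.rewind();                     // read from position 0
        byte[] bytes = new byte[dup.remaining()];
        dup.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("my-key".getBytes(StandardCharsets.UTF_8));
        buf.position(buf.limit()); // position at the end, as hypothesized above

        // A naive read from the current position sees nothing left:
        System.out.println("remaining = " + buf.remaining()); // prints remaining = 0

        // Reading from position 0 via a duplicate recovers the bytes:
        System.out.println("rewound = " + readAll(buf)); // prints rewound = my-key
    }
}
```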

Re: Extracting key-value pair from Produce Request API.

Hi, I apologize for misunderstanding your initial email. Unfortunately I still don't understand your question. Could you clarify what result you expect from your code, and what the actual behavior is? Maybe also try and simplify the reproduction case. I see confusing use of a String constructor that could be causing your problem or masking it. Thanks, Greg On Fri, Dec 27, 2024, 5:47 PM Chain Head < mrchainhead@gmail.com > wrote: > Hi, > I am looking at parsing Produce request API on broker side. This is for > simulating a broker. No consumer is involved. Also, I am using 3.8.0. > > On Sat, 28 Dec, 2024, 04:47 Greg Harris, <greg.harris@aiven.io.invalid> > wrote: > > > Hi, > > > > Thanks for your question. > > > > It appears you're using the legacy consumer API, which was removed in > 2.0.0 > > and is no longer supported. > > I would strongly suggest building on top of...

Re: Extracting key-value pair from Produce Request API.

Hi, I am looking at parsing the Produce request API on the broker side. This is for simulating a broker. No consumer is involved. Also, I am using 3.8.0. On Sat, 28 Dec, 2024, 04:47 Greg Harris, <greg.harris@aiven.io.invalid> wrote: > Hi, > > Thanks for your question. > > It appears you're using the legacy consumer API, which was removed in 2.0.0 > and is no longer supported. > I would strongly suggest building on top of the modern Java Consumer API at > this time. > > The modern API exposes the deserialized headers via the > ConsumerRecord#headers method: > > https://kafka.apache.org/39/javadoc/org/apache/kafka/clients/consumer/ConsumerRecord.html > > Hope this helps, > Greg > > On Fri, Dec 27, 2024, 10:19 AM Chain Head < mrchainhead@gmail.com > wrote: > > > Hi, > > I am struggling to get the key-value pair from the Produce Request API. I > > want to write it to a Buffer f...

Re: Extracting key-value pair from Produce Request API.

Hi, Thanks for your question. It appears you're using the legacy consumer API, which was removed in 2.0.0 and is no longer supported. I would strongly suggest building on top of the modern Java Consumer API at this time. The modern API exposes the deserialized headers via the ConsumerRecord#headers method: https://kafka.apache.org/39/javadoc/org/apache/kafka/clients/consumer/ConsumerRecord.html Hope this helps, Greg On Fri, Dec 27, 2024, 10:19 AM Chain Head < mrchainhead@gmail.com > wrote: > Hi, > I am struggling to get the key-value pair from the Produce Request API. I > want to write it to a Buffer for further processing. I can't seem to get > the `k` and `v` values whereas the `.keySize` and `.valueSize` are reported > correctly. Please advise how to extract the key value pairs from the > Produce request API payload. > > For better format, see https://pastebin.com/ZKad1ET6 > > MemoryRecords partit...

RE: Seeking Advice on Restricting Kafka ACLs for Specific Operations

I am also posting this question on Stack Overflow: https://stackoverflow.com/questions/79312372/seeking-advice-on-restricting-kafka-acls-for-specific-operations From: Achar, Bharath Kumar Cm Sent: Friday, December 27, 2024 10:17 PM To: users@kafka.apache.org Subject: Seeking Advice on Restricting Kafka ACLs for Specific Operations Hello Folks, I'm encountering a challenge with Kafka ACLs related to "Alter Cluster" privileges. Currently, granting "Alter Cluster" allows users to manage their ACLs, as it inherits CREATE_ACLS and DELETE_ACLS. However, users can also add ClusterAction and AlterConfigs permissions on the "Cluster" resource, which we want to restrict because it could enable them to modify broker configurations. I'm exploring two potential solutions and would appreciate guidance: 1. PolicyViolationException: Is it possible to leverage PolicyViolationException to block users from adding Clust...

Seeking Advice on Restricting Kafka ACLs for Specific Operations

Hello Folks, I'm encountering a challenge with Kafka ACLs related to "Alter Cluster" privileges. Currently, granting "Alter Cluster" allows users to manage their ACLs, as it inherits CREATE_ACLS and DELETE_ACLS. However, users can also add ClusterAction and AlterConfigs permissions on the "Cluster" resource, which we want to restrict because it could enable them to modify broker configurations. I'm exploring two potential solutions and would appreciate guidance: 1. PolicyViolationException: Is it possible to leverage PolicyViolationException to block users from adding ClusterAction or AlterConfigs on the "Cluster" resource? 2. Custom Authorizer: Alternatively, can we modify the Kafka source code to implement a custom authorizer? For instance, tweaking the StandardAuthorizer< https://github.com/apache/kafka/blob/trunk/metadata/src/main/java/org/apache/kafka/metadata/authorizer/StandardAuthorizer.java > to explicitly ...
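Option 2 above (a custom authorizer) amounts to layering a deny rule over the normal ACL check. A real implementation would extend Kafka's StandardAuthorizer from the metadata module; the following is only a stdlib model of the intended rule, with illustrative names, to make the logic concrete:

```java
public class ClusterOpGuard {
    // Subset of Kafka operations relevant to the post; names are illustrative.
    enum Operation { CLUSTER_ACTION, ALTER_CONFIGS, CREATE_ACLS, DELETE_ACLS, ALTER }

    // Deny rule modeled on the post: on the Cluster resource, non-super
    // users may still manage ACLs (inherited from Alter Cluster), but may
    // not use ClusterAction or AlterConfigs.
    static boolean allowedOnCluster(Operation op, boolean superUser) {
        if (superUser) {
            return true; // super users bypass the restriction
        }
        return op != Operation.CLUSTER_ACTION && op != Operation.ALTER_CONFIGS;
    }

    public static void main(String[] args) {
        System.out.println(allowedOnCluster(Operation.CREATE_ACLS, false));   // prints true
        System.out.println(allowedOnCluster(Operation.ALTER_CONFIGS, false)); // prints false
    }
}
```

In the real authorizer this check would run before delegating to the standard ACL evaluation, so that the two restricted operations are denied regardless of what ACLs a user has managed to grant themselves.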

Extracting key-value pair from Produce Request API.

Hi, I am struggling to get the key-value pair from the Produce Request API. I want to write it to a Buffer for further processing. I can't seem to get the `k` and `v` values, whereas `.keySize` and `.valueSize` are reported correctly. Please advise how to extract the key-value pairs from the Produce request API payload. For better formatting, see https://pastebin.com/ZKad1ET6

MemoryRecords partitionRecords = (MemoryRecords) partitionData.records();
for (RecordBatch batch : partitionRecords.batches()) {
    // Iterate through records of a batch
    Buffer batchBuffer = Buffer.buffer();
    Iterator<org.apache.kafka.common.record.Record> it = batch.iterator();
    while (it.hasNext()) {
        org.apache.kafka.common.record.Record record = it.next();
        String k = "";
        String v = "";
        for (Header header : record.headers()) {
            v = new String(head...

[HELP] KRaft cluster SASL_PLAINTEXT (PLAIN) does not work without super.users configured

Hi community, greetings to you! I set up a Kafka KRaft cluster in containers using the apache/kafka:3.9.0 Docker image. When I enabled SASL_PLAINTEXT for inter-broker and inter-controller communication and configured ACLs, I set the user 'kafka' (which is used for inter-broker and inter-controller SASL authentication) as a super user. Everything looks fine! The container cluster configuration is persisted to host folders.

# docker-compose file
services:
  kafka-1:
    image: "apache/kafka:3.9.0"
    hostname: kafka-1
    container_name: kafka-1
    environment:
      # Broker configuration
      KAFKA_NODE_ID: 1
      KAFKA_LOG_DIRS: '/var/lib/kafka/data'
      KAFKA_PROCESS_ROLES: "broker,controller"
      KAFKA_CONTROLLER_QUORUM_VOTERS: '1@kafka-1:19092,2@kafka-2:19092,3@kafka-3:19092'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PL...
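The behavior described (the cluster only works once the inter-broker user is a super user) matches the general rule that, when an authorizer is enabled, the principal used for inter-broker and inter-controller SASL is authorized like any other client. A minimal server.properties fragment along those lines, with illustrative values matching the 'kafka' user above:

```properties
# Illustrative fragment. Once an authorizer is enabled, the SASL principal
# used for inter-broker/inter-controller traffic must either be a super
# user or be granted the relevant ACLs explicitly, otherwise
# broker-to-broker and controller requests are denied.
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
super.users=User:kafka
allow.everyone.if.no.acl.found=false
```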

RE: Does Replication Throttling Work with "*"

Hi Jason, Yes, replication throttling does work; however, the "entity-default" configuration does not work in Kafka 3.5, so you have to set the broker configs individually with "entity-name". This is a bug ( https://issues.apache.org/jira/browse/KAFKA-10190 ), which I fixed in Kafka 3.9. Hope this helps, Harry On 2024/12/10 17:06:57 Jason Taylor wrote: > Hello, > > Has anyone successfully made replication throttling work? I am doing the > following on Apache Kafka 3.5 > > ./kafka_2.13-3.7.1/bin/kafka-configs.sh \ > --bootstrap-server $BROKERS \ > --entity-type brokers \ > --entity-default \ > --alter \ > --add-config leader.replication.throttled.rate=50000000 \ > --command-config client.properties > > ./kafka_2.13-3.7.1/bin/kafka-configs.sh \ > --bootstrap-server $BROKERS \ > --entity-type brokers \ > --entity-default \ > --alter \ > --add-config follower.repl...
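The per-broker workaround described above (needed on clusters without the KAFKA-10190 fix) can be scripted. Broker ids, the rate, and paths here are illustrative; `echo` is used so the generated commands can be reviewed before actually running them against a cluster:

```shell
BROKERS="localhost:9092"   # placeholder bootstrap servers
RATE=50000000              # bytes/sec, as in the original post

# Pre-3.9 workaround: apply the throttle per broker with --entity-name
# instead of the broken --entity-default.
for id in 0 1 2; do
  cmd="./bin/kafka-configs.sh --bootstrap-server $BROKERS --entity-type brokers --entity-name $id --alter --add-config leader.replication.throttled.rate=$RATE,follower.replication.throttled.rate=$RATE"
  echo "$cmd"
done
```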

Re: Kafka streams lost messages on repartition topic during rebalancing

Updating Kafka Streams would be enough. -Bill On Wed, Dec 18, 2024 at 6:50 AM TheKaztek < thekaztek@gmail.com > wrote: > Would updating the kafka streams client library be enough ? Or should the > cluster be updated to ? > > wt., 17 gru 2024 o 19:28 Bill Bejeck <bill@confluent.io.invalid> > napisał(a): > > > Hi, > > > > I think you could be hitting KAFKA-17635 > > < https://issues.apache.org/jira/browse/KAFKA-17635 > which has been fixed > > in > > Kafka Streams v 3.7.2 . > > It's been released this week, is it possible to upgrade and try it out? > > > > -Bill > > > > On Tue, Dec 17, 2024 at 4:10 AM TheKaztek < thekaztek@gmail.com > wrote: > > > > > Hi, we have a worrying problem in the project where we use kafka > streams. > > > > > > > > > > > > We had an incident where during heavy load o...

CVE-2024-56128: Apache Kafka: SCRAM authentication vulnerable to replay attacks when used without encryption

Severity: low Affected versions: - Apache Kafka 0.10.2.0 before 3.7.2 - Apache Kafka 3.8.0 Description: Incorrect Implementation of Authentication Algorithm in Apache Kafka's SCRAM implementation. Issue Summary: Apache Kafka's implementation of the Salted Challenge Response Authentication Mechanism (SCRAM) did not fully adhere to the requirements of RFC 5802 [1]. Specifically, as per RFC 5802, the server must verify that the nonce sent by the client in the second message matches the nonce sent by the server in its first message. However, Kafka's SCRAM implementation did not perform this validation. Impact: This vulnerability is exploitable only when an attacker has plaintext access to the SCRAM authentication exchange. However, the usage of SCRAM over plaintext is strongly discouraged as it is considered an insecure practice [2]. Apache Kafka recommends deploying SCRAM exclusively with TLS encryption to protect SCRAM exchanges from intercept...
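The missing check the advisory describes is small: per RFC 5802, the nonce in the client-final message must be the client's original nonce with the server's nonce appended, and the server must verify this. A stdlib sketch of that validation (method and class names are illustrative, not Kafka's internal API):

```java
public class ScramNonceCheck {
    // RFC 5802: the client-final-message carries r=<client-nonce><server-nonce>.
    // The server must reject the exchange if this concatenation check fails,
    // which prevents replaying a captured client-final message against a
    // freshly issued server nonce.
    static boolean nonceValid(String clientFirstNonce, String serverNonce, String clientFinalNonce) {
        return clientFinalNonce.equals(clientFirstNonce + serverNonce);
    }

    public static void main(String[] args) {
        System.out.println(nonceValid("cnonce", "snonce", "cnoncesnonce")); // prints true
        System.out.println(nonceValid("cnonce", "fresh", "cnoncesnonce"));  // prints false
    }
}
```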

Re: Kafka streams lost messages on repartition topic during rebalancing

Would updating the Kafka Streams client library be enough? Or should the cluster be updated too? Tue, 17 Dec 2024 at 19:28 Bill Bejeck <bill@confluent.io.invalid> wrote: > Hi, > > I think you could be hitting KAFKA-17635 > < https://issues.apache.org/jira/browse/KAFKA-17635 > which has been fixed > in > Kafka Streams v 3.7.2 . > It's been released this week, is it possible to upgrade and try it out? > > -Bill > > On Tue, Dec 17, 2024 at 4:10 AM TheKaztek < thekaztek@gmail.com > wrote: > > > Hi, we have a worrying problem in the project where we use kafka streams. > > > > > > > > We had an incident where during heavy load on our app (dozens of millions > > of records in 15 minutes span on an input topic of the stream) we decided > > to add additional instances of the app (5 -> 10) and some of the > > already-running instances were restarted (there a...

Re: Kafka streams lost messages on repartition topic during rebalancing

Hi, I think you could be hitting KAFKA-17635 < https://issues.apache.org/jira/browse/KAFKA-17635 > which has been fixed in Kafka Streams v 3.7.2 . It's been released this week, is it possible to upgrade and try it out? -Bill On Tue, Dec 17, 2024 at 4:10 AM TheKaztek < thekaztek@gmail.com > wrote: > Hi, we have a worrying problem in the project where we use kafka streams. > > > > We had an incident where during heavy load on our app (dozens of millions > of records in 15 minutes span on an input topic of the stream) we decided > to add additional instances of the app (5 -> 10) and some of the > already-running instances were restarted (there are 16 partitions on an > input topic, 5 brokers in a cluster). > > > > During the restart/rebalance we saw a massive drop in consumer group lag on > one of repartition topics (in 1 minute after rebalance it went down by 40 > million records from around 50 m...

Kafka streams lost messages on repartition topic during rebalancing

Hi, we have a worrying problem in the project where we use kafka streams. We had an incident where during heavy load on our app (dozens of millions of records in 15 minutes span on an input topic of the stream) we decided to add additional instances of the app (5 -> 10) and some of the already-running instances were restarted (there are 16 partitions on an input topic, 5 brokers in a cluster). During the restart/rebalance we saw a massive drop in consumer group lag on one of repartition topics (in 1 minute after rebalance it went down by 40 million records from around 50 million). This fact is visible in broker logs, where we can see requests to delete records on that topic (so effectively moving forward logStartOffset) by a stream thread. This is a normal operation of course for a repartition topic, but in this case the offset was forwarded by a concerning number of messages, for example 3.5 million records forward on one of the partitions (there are 16 par...

Need help with issues setting up OAuth2 SASL authentication on Kafka

Hi All, I have the following setup: Kafka broker (3.9.0), Kafka producer (for now, using the console producer in Kafka itself). This setup works fine for basic TCP, TLS, and even SASL authentication using PLAIN and SHA-256. Now I am trying to set up OAuth2 SASL authentication on this setup and get an invalid_token error from the Kafka broker during SASL authentication. This is what my configuration looks like (only properties relevant to SASL OAuth are included):

server.properties:

sasl.enabled.mechanisms=OAUTHBEARER
# JWKS URL from the openid-configuration URL for the oauth2 host
sasl.oauthbearer.jwks.endpoint.url=https://<oauth2 host name>:443/admin/v1/SigningCert/jwk
listener.name.sasl_plaintext.sasl.enabled.mechanisms=OAUTHBEARER
# verified that this is in sync with the values for aud and iss from the access token
sasl.oauthbearer.expected.audience="<aud value>"
sasl.oauthbearer.expected.issuer="<issuer URL>"
l...

Re: [ANNOUNCE] Apache Kafka 3.7.2

Thanks for running the release, Matthias! This one has some important fixes. -Bill On Fri, Dec 13, 2024 at 6:17 PM Matthias J. Sax < mjsax@apache.org > wrote: > The Apache Kafka community is pleased to announce the release for Apache > Kafka 3.7.2 > > This is a bug-fix release, closing 21 Jira tickets. > > All of the changes in this release can be found in the release notes: > https://www.apache.org/dist/kafka/3.7.2/RELEASE_NOTES.html > > You can download the source and binary release (Scala 2.12 and 2.13) from: > https://kafka.apache.org/downloads#3.7.2 > > > --------------------------------------------------------------------------------------------------- > > > Apache Kafka is a distributed streaming platform with four core APIs: > > > ** The Producer API allows an application to publish a stream of records > to one or more Kafka topics. > > ** The Consumer API allows an applicat...

[ANNOUNCE] Apache Kafka 3.7.2

The Apache Kafka community is pleased to announce the release for Apache Kafka 3.7.2 This is a bug-fix release, closing 21 Jira tickets. All of the changes in this release can be found in the release notes: https://www.apache.org/dist/kafka/3.7.2/RELEASE_NOTES.html You can download the source and binary release (Scala 2.12 and 2.13) from: https://kafka.apache.org/downloads#3.7.2 --------------------------------------------------------------------------------------------------- Apache Kafka is a distributed streaming platform with four core APIs: ** The Producer API allows an application to publish a stream of records to one or more Kafka topics. ** The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them. ** The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output t...

[RESULT] [VOTE] Release Kafka version 3.7.2

This vote passes with 8 +1 votes (4 binding) and no 0 or -1 votes.

+1 votes

PMC Members:
* Bill Bejeck
* Divij Vaidya
* Luke Chen
* Matthias J. Sax

Committers:
* Andrew Schofield

Community:
* Jiunn-Yang
* TengYao
* Federico Valeri

0 votes
* No votes

-1 votes
* No votes

Vote thread: https://lists.apache.org/thread/og02lmp3pf7hpc4hsdo12yg06rgosfqm

I'll continue with the release process and the release announcement will follow in the next few days. -Matthias

Re: [VOTE] 3.7.2 RC1

Thanks all for verifying the RC and for voting. I am closing the vote now as accepted. Will send a follow up email with the result, and move forward with the release. Also: +1 (binding) from my side -Matthias On 12/8/24 2:47 AM, Federico Valeri wrote: > Hi, thanks for running the release. > > - Built from source with JDK 17 > - Ran all unit and integration tests > - Spot checked documentation and javadoc > - Ran ZK and KRaft quickstarts > - Ran Docker image > > +1 non binding > > Cheers > Fede > > On Sun, Dec 8, 2024 at 8:57 AM TengYao Chi < kitingiao@gmail.com > wrote: >> >> Hi Matthias, >> >> Thanks for running the 3.7.2 release. >> I did some validation on my local machine: >> >> - Built from the source /w JDK21 >> - Ran all tests (including unit test and integration test) >> - Ran the quickstart separately using both KRaft an...