
Correct way to override Kafka Connect producer settings?

Hey Kafka fam,
What's the correct way to set task-level overrides for producer settings in a Kafka Connect connector? For example, with MirrorMaker2, I'd expect the following "producer.override.*" configs to work based on the documentation, but in reality they don't change any of the producer behavior and the default 1MB "max.request.size" is still used:

https://github.com/apache/kafka/blob/3.9.0/docs/connect.html#L60


{
  "producer.override.max.request.size": "26214400",
  "producer.override.batch.size": "524288",
  "producer.override.buffer.memory": "524288000",
  "producer.override.receive.buffer.bytes": "33554432",
  "producer.override.send.buffer.bytes": "33554432",
  "producer.override.compression.type": "gzip",
  "name": "mm2-cpc",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorCheckpointConnector",
  "clusters": "msksource,mskdest",
  "source.cluster.alias": "msksource",
  "target.cluster.alias": "mskdest",
  "target.cluster.bootstrap.servers": "{TARGET CLUSTER BROKERS ADDRESS}",
  "source.cluster.bootstrap.servers": "{SOURCE CLUSTER BROKERS ADDRESS}",
  "tasks.max": "1"
}


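For reference, I'm creating the connector through the Connect REST API, roughly like this (the hostname, port, and file name are placeholders for my actual setup):

# Submit the connector config (saved as mm2-cpc.json) to the Connect worker
curl -s -X PUT -H "Content-Type: application/json" \
  --data @mm2-cpc.json \
  http://localhost:8083/connectors/mm2-cpc/config

# Read the config back to confirm the producer.override.* keys were stored
curl -s http://localhost:8083/connectors/mm2-cpc/config
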
I know I can set this in the Kafka Connect worker properties file to apply to all connectors, but I can't seem to get the override to apply at the connector/task level.
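
Is a worker-side setting also required for these to take effect? From what I can tell, per-connector client overrides are gated by the worker's connector.client.config.override.policy, so I'd expect something like the snippet below in the worker properties (values taken from my reading of the docs, not verified on my cluster). If I'm reading KIP-722 right, "All" has been the default since Kafka 3.0, but it seems worth checking what the worker is actually running with.

# connect-distributed.properties (worker config)
# "All" lets connectors override any producer/consumer client setting via
# producer.override.* / consumer.override.*; "Principal" restricts overrides
# to security-related settings, and "None" rejects such overrides entirely.
connector.client.config.override.policy=All

# Worker-wide producer defaults (applied to all connectors) still live here, e.g.:
producer.max.request.size=26214400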

Thanks in advance,
Mazrim
