RE: Validating Kafka Disk Throughput Formulas (Write & Read)

Hi @Brebner, Paul<mailto:Paul.Brebner@netapp.com>,

Thanks for your reply.

I checked this calculator: https://github.com/instaclustr/code-samples/blob/main/Kafka/TieredStorage/kafka_calculator_graphs.html

But I don't think it considers scenarios with replication lag. For example, if my replicas are significantly behind the leader, then when they fetch data from the leader, the leader will need to read from disk to serve those followers. Shouldn't we consider disk bandwidth for this scenario as well?
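To make the scenario concrete, here is a rough sketch of what I mean. The `lagging_fraction` parameter is purely illustrative (it is not a knob in the linked calculator): it is the fraction of follower fetches that miss the pagecache because the replicas are far behind.

```python
# Illustrative only: extend the worst-case read estimate so the
# follower-fetch term is scaled by how much of the follower traffic
# actually hits disk (lagging replicas reading old segments).
def read_throughput_with_lag(ingest_mbps: float,
                             replication_factor: int,
                             consumer_groups: int,
                             lagging_fraction: float) -> float:
    # Follower fetches that fall out of the pagecache hit disk.
    follower_reads = ingest_mbps * (replication_factor - 1) * lagging_fraction
    # Assume worst case for consumers: every group reads from disk.
    consumer_reads = ingest_mbps * consumer_groups
    return follower_reads + consumer_reads

# 100 MB/s ingest, RF = 3, 2 consumer groups:
# fully lagging replicas -> 100 * 2 * 1.0 + 100 * 2 = 400 MB/s
# fully caught-up replicas -> 0 + 200 = 200 MB/s
print(read_throughput_with_lag(100, 3, 2, 1.0))  # 400.0
print(read_throughput_with_lag(100, 3, 2, 0.0))  # 200.0
```

Does modelling the follower-fetch disk reads this way (or simply assuming the full RF − 1 term hits disk in the worst case) seem reasonable?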

From: Brebner, Paul <Paul.Brebner@netapp.com>
Sent: 02 December 2025 07:23
To: users@kafka.apache.org
Cc: Prateek Kohli <prateek.kohli@ericsson.com>
Subject: Re: Validating Kafka Disk Throughput Formulas (Write & Read)

Hi Prateek,

You may find this blog I wrote on Kafka sizing useful:

https://www.instaclustr.com/blog/how-to-size-apache-kafka-clusters-for-tiered-storage-part-1/

With the associated calculator here: https://github.com/instaclustr/code-samples/tree/main/Kafka/TieredStorage
This one in particular: https://github.com/instaclustr/code-samples/blob/main/Kafka/TieredStorage/kafka_calculator_graphs.html
Just download and use locally with a browser. For your example you want "delayed consumers" only.

Regards, Paul Brebner
NetApp Instaclustr

From: Prateek Kohli via users <users@kafka.apache.org<mailto:users@kafka.apache.org>>
Date: Monday, 1 December 2025 at 8:49 pm
To: users <users@kafka.apache.org<mailto:users@kafka.apache.org>>
Cc: Prateek Kohli <prateek.kohli@ericsson.com<mailto:prateek.kohli@ericsson.com>>
Subject: Validating Kafka Disk Throughput Formulas (Write & Read)

Hi everyone,

I'm working on capacity planning for Kafka and wanted to validate two formulas I'm using to estimate cluster-level disk throughput in a worst-case scenario (when all reads come from disk due to large consumer lag and replication lag).

1. Disk Write Throughput
Write_Throughput = Ingest_MBps × Replication_Factor (e.g., 3)
Explanation:
Every MB of data written to Kafka is stored on all replicas (leader + followers), so total disk writes across the cluster scale linearly with the replication factor.

2. Disk Read Throughput (worst case, cache hit = 0%)
Read_Throughput = Ingest_MBps × (Replication_Factor − 1 + Number_of_Consumer_Groups)
Explanation:
Leaders must read data from disk to:

* serve followers (RF − 1 times), and
* serve each consumer group (each group reads the full stream).
If pagecache misses are assumed (e.g., heavy lag), all of these reads hit disk, so the terms add up.
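Again as a sanity check, a minimal sketch of the worst-case read formula (example numbers are mine):

```python
# Worst-case cluster-wide disk read throughput (0% pagecache hits):
# leaders read each byte from disk once per follower (RF - 1) and
# once per consumer group (each group consumes the full stream).
def disk_read_throughput(ingest_mbps: float,
                         replication_factor: int,
                         consumer_groups: int) -> float:
    return ingest_mbps * (replication_factor - 1 + consumer_groups)

# Example: 100 MB/s ingest, RF = 3, 2 consumer groups
# -> 100 * (2 + 2) = 400 MB/s of disk reads.
print(disk_read_throughput(100, 3, 2))  # 400
```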

Are these calculations accurate for estimating cluster disk throughput under worst-case conditions?
Any corrections or recommendations would be appreciated.

Regards,
Prateek Kohli
