Hello,
Put simply, Windows is not a supported platform for running the Kafka broker.
The failed rename in your log (.timeindex.cleaned -> .timeindex.swap) is the
long-standing Kafka-on-Windows file-locking problem (see KAFKA-1194): the
broker memory-maps its index files, and Windows refuses the rename while a
mapping is still active.
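If you have to keep this embedded broker alive on a Windows box for local
development, one commonly used stopgap (a sketch only, not an official fix;
kafkaProps is the Properties object from your own snippet) is to switch the
log cleaner off so it never attempts the rename that Windows blocks:

    // Stopgap sketch for Windows-only local development. Assumes kafkaProps
    // is the java.util.Properties instance from the original message and is
    // applied before the broker starts. With the cleaner disabled, the
    // .cleaned -> .swap rename that Windows rejects is never attempted.
    kafkaProps.setProperty("log.cleaner.enable", "false");

The trade-off: compacted topics such as __consumer_offsets are no longer
compacted and will grow without bound, and retention-based segment deletion
can still trip over the same file locking. For anything that must run
continuously, run the broker on Linux, e.g. under WSL2 or in a container.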
Good luck.
On Wed, Jun 25, 2025, 11:10 KrishnaSai Dandu <krishnasai.dandu@bluepal.com>
wrote:
> Hi, good afternoon!
>
> We are using a Spring Boot Java application. Below are the Kafka
> dependencies:
> <properties>
>     <java.version>17</java.version>
>     <!-- Use consistent Kafka version -->
>     <kafka.version>2.8.0</kafka.version>
>     <zookeeper.version>3.6.3</zookeeper.version>
>     <scala.version>2.13</scala.version>
> </properties>
>
> <!-- Kafka dependencies - all same version to avoid conflicts -->
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_${scala.version}</artifactId>
>     <version>${kafka.version}</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.slf4j</groupId>
>             <artifactId>slf4j-log4j12</artifactId>
>         </exclusion>
>         <exclusion>
>             <groupId>log4j</groupId>
>             <artifactId>log4j</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-clients</artifactId>
>     <version>${kafka.version}</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-metadata</artifactId>
>     <version>${kafka.version}</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-raft</artifactId>
>     <version>${kafka.version}</version>
> </dependency>
>
> <!-- ZooKeeper - single version, duplicates removed -->
> <dependency>
>     <groupId>org.apache.zookeeper</groupId>
>     <artifactId>zookeeper</artifactId>
>     <version>${zookeeper.version}</version>
>     <exclusions>
>         <exclusion>
>             <groupId>org.slf4j</groupId>
>             <artifactId>slf4j-log4j12</artifactId>
>         </exclusion>
>         <exclusion>
>             <groupId>log4j</groupId>
>             <artifactId>log4j</artifactId>
>         </exclusion>
>         <exclusion>
>             <groupId>commons-logging</groupId>
>             <artifactId>commons-logging</artifactId>
>         </exclusion>
>     </exclusions>
> </dependency>
>
> I am also including the ZooKeeper and Kafka broker configurations:
>
> kafkaProps.setProperty("zookeeper.connect", "localhost:2181"); //
> ZooKeeper address kafkaProps.setProperty("broker.id",
> String.valueOf(brokerId)); // Broker ID
> kafkaProps.setProperty("listeners", "PLAINTEXT://" + listener); // Listener
> address kafkaProps.setProperty("log.dirs", logDir); // Log
> directory kafkaProps.setProperty("num.network.threads", "3");
> kafkaProps.setProperty("num.io.threads", "8");
> kafkaProps.setProperty("log.retention.hours", "168"); // Retention period
> in hours kafkaProps.setProperty("log.segment.bytes", "1073741824");
> // 1GB log segment size kafkaProps.setProperty("
> log.retention.check.interval.ms", "300000"); // Retention check interval
> kafkaProps.setProperty("offsets.topic.replication.factor", "2");
> kafkaProps.setProperty("transaction.state.log.replication.factor", "2");
> kafkaProps.setProperty("transaction.state.log.min.isr", "2");
> kafkaProps.setProperty("auto.create.topics.enable", "true");
> kafkaProps.setProperty("num.partitions", "2");
> kafkaProps.setProperty("default.replication.factor", "2");
> // Custom log cleaner and file handling // Here we can
> attempt to use FileChannel to handle log files
> kafkaProps.setProperty("log.cleaner.threads", "2"); // Number of log
> cleaner threads
> kafkaProps.setProperty("log.cleaner.io.max.buffer.size", "10485760"); //
> Max buffer size for cleaner (10 MB)
> kafkaProps.setProperty("log.cleaner.dedupe.buffer.size", "10485760"); //
> Dedupe buffer size (10 MB)
> kafkaProps.setProperty("log.cleaner.min.cleanable.ratio", "0.5"); // Min
> ratio for cleanable logs (default is 0.5)
> kafkaProps.setProperty("log.cleaner.enable", "true"); // Enable log cleaner
> Error log:
>
> 2025-06-23 21:11:10 DEBUG [pool-11-thread-1] [] org.hibernate.SQL - insert into nms_event_adjacent_kavach_information_field_element_status (all_field_elements,crc,created_at,date,frame_number,mac_code,message_length,message_sequence,nms_system_id,packet_message_length,packet_message_sequence,packet_name,receiver_identifier,sender_identifier,specific_protocol,stationary_kavach_id,system_version,message_time,total_field_elements) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
> 2025-06-23 21:11:10 ERROR [kafka-log-cleaner-thread-0] [] kafka.server.LogDirFailureChannel - Failed to clean up log for __consumer_offsets-45 in dir D:\14-05-25\kafka-logs1 due to IOException
> java.nio.file.FileSystemException: D:\14-05-25\kafka-logs1\__consumer_offsets-45\00000000000000000000.timeindex.cleaned -> D:\14-05-25\kafka-logs1\__consumer_offsets-45\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process
>     at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
>     at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
>     at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:403)
>     at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:293)
>     at java.base/java.nio.file.Files.move(Files.java:1432)
>     at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:904)
>     at kafka.log.AbstractIndex.renameTo(AbstractIndex.scala:210)
>     at kafka.log.LazyIndex$IndexValue.renameTo(LazyIndex.scala:155)
>     at kafka.log.LazyIndex.$anonfun$renameTo$1(LazyIndex.scala:79)
>     at kafka.log.LazyIndex.renameTo(LazyIndex.scala:79)
>     at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:496)
>     at kafka.log.Log.$anonfun$replaceSegments$4(Log.scala:2402)
>     at kafka.log.Log.$anonfun$replaceSegments$4$adapted(Log.scala:2402)
>     at scala.collection.immutable.List.foreach(List.scala:333)
>     at kafka.log.Log.replaceSegments(Log.scala:2402)
>     at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:613)
>     at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:538)
>     at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:537)
>     at scala.collection.immutable.List.foreach(List.scala:333)
>     at kafka.log.Cleaner.doClean(LogCleaner.scala:537)
>     at kafka.log.Cleaner.clean(LogCleaner.scala:511)
>     at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:380)
>     at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:352)
>     at kafka.log.LogCleaner$CleanerThread.tryCleanFilthiestLog(LogCleaner.scala:332)
>     at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:321)
>     at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
>     Suppressed: java.nio.file.FileSystemException: D:\14-05-25\kafka-logs1\__consumer_offsets-45\00000000000000000000.timeindex.cleaned -> D:\14-05-25\kafka-logs1\__consumer_offsets-45\00000000000000000000.timeindex.swap: The process cannot access the file because it is being used by another process
>         at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
>         at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
>         at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:317)
>         at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:293)
>         at java.base/java.nio.file.Files.move(Files.java:1432)
>         at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:901)
>         ... 20 common frames omitted
> 2025-06-23 21:11:10 ERROR [LogDirFailureHandler] [] kafka.log.LogManager - Shutdown broker because all log dirs in D:\14-05-25\kafka-logs1 have failed
>
> Requirement:
> Can you help us debug this issue? We need Kafka to run continuously,
> without any breakage, in our local environment.
>
> Thank you,
> Krishna Sai Dandu