These properties can't be triggered programmatically. Kafka uses an
internal thread pool called the "Log Cleaner" that asynchronously does
the job of deleting old segments ("delete") and removing superseded
records ("compact").
Whatever the S3 connector picks up is already compacted and/or deleted.
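While compaction can't be invoked on demand, the topic-level configs below influence how eagerly the log cleaner runs, which may reduce what the connector sees. This is only a sketch using standard Kafka topic configs; the topic name `my-table-topic` and the broker address are placeholders:

```shell
# Lower the dirty ratio so the cleaner kicks in once 10% of the log is
# uncleaned (default is 50%), and roll segments more often -- only closed
# segments are eligible for compaction, so a smaller segment.ms makes
# records compactable sooner.
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-table-topic \
  --add-config min.cleanable.dirty.ratio=0.1,segment.ms=600000,max.compaction.lag.ms=3600000
```

There is still no guarantee the cleaner finishes before a given connector poll, so the S3 sink can always observe some not-yet-compacted records.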
— Ricardo
On Tue, 2020-09-22 at 11:50 +0200, Daniel Kraus wrote:
> Hi,
> I have a KStreams app that outputs a KTable to a topic with cleanup
> policy "compact,delete".
> I have the Confluent S3 Connector to store this table in S3 where I do
> further analysis with Hive.
> Now my question is, if there's a way to trigger log compaction right
> before the S3 Connector reads the data so I store less data in S3
> than when it simply copies all data from the stream?
> Thanks, Daniel