Quite a project to test transactions...
The current system test suite is part of the code base:
https://github.com/apache/kafka/tree/trunk/tests/kafkatest/tests
There are of course also some unit/integration tests for transactions.
There is also a blog post that describes at a high level what testing
was done when EOS was introduced:
https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/
And yes, transactions are built on some assumptions, and if you configure
your system incorrectly or violate those assumptions, it may break. We
have also fixed some bugs since the first release. And there might be more
bugs --- software is always buggy. However, for all practical purposes,
transactions should work.
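
To make those assumptions concrete, here is a minimal sketch (plain Java
clients) of the transaction-related settings an EOS read-process-write
application typically needs; the broker address, group id, and
transactional.id are made up for the example:

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;

public class EosClientConfigs {

    // Producer side: idempotence plus a stable transactional.id.
    static Properties producerProps() {
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        p.put(ProducerConfig.ACKS_CONFIG, "all");
        // The transactional.id is what lets the broker fence zombie producers.
        p.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-app-input-0");
        return p;
    }

    // Consumer side: read only committed data and do not auto-commit;
    // offsets are committed as part of the producer's transaction instead.
    static Properties consumerProps() {
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app");
        c.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        c.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        return c;
    }
}
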
We would of course love it if you could share your test results! If you
discover a bug, please report it so we can fix it.
-Matthias
On 10/28/19 10:06 AM, Edward Capriolo wrote:
> On Sunday, October 27, 2019, Boyang Chen <reluctanthero104@gmail.com> wrote:
>
>> Hey Edward,
>>
>> just to summarize and make sure I understood your question: you want to
>> implement some chaos testing to validate the Kafka EOS model, but are not
>> sure how to start, or are curious about whether there is already work in
>> the community doing that?
>>
>> For the correctness of Kafka EOS, we have tons of unit tests and system
>> tests to prove its functionality. They could be found inside the repo. You
>> could check them out and see if we still have gaps (which I believe we
>> definitely have).
>>
>> Boyang
>>
>> On Fri, Oct 25, 2019 at 7:25 PM Edward Capriolo <edlinuxguru@gmail.com>
>> wrote:
>>
>>> Hello all,
>>>
>>> I used to work in adtech. Adtech was great. CPM for ads is $1-$5 per
>>> thousand ad impressions. If numbers are 5% off, you can blame JavaScript
>>> click trackers.
>>>
>>> Now, I work in a non-adtech industry, and they are really, really serious
>>> about exactly once.
>>>
>>> So there is this blog:
>>>
>>> https://www.confluent.io/blog/transactions-apache-kafka/
>>>
>>> Great little snippet of code. I think I can copy it and implement it
>>> correctly. But if you read the section about zombie fencing, you learn
>>> that you either need to manually assign partitions or use the rebalance
>>> listener and have N producers. Hunting around GitHub is not super helpful;
>>> some code snippets are less complete than even the snippet in the blog.
>>>
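>>> For concreteness, here is a rough sketch of the read-process-write loop I
>>> have in mind, with one producer per manually assigned partition and the
>>> partition number baked into the transactional.id for fencing (topic names,
>>> group id, and the id scheme are just my assumptions, not from the blog):
>>>
>>> import java.time.Duration;
>>> import java.util.Collections;
>>> import java.util.HashMap;
>>> import java.util.Map;
>>> import java.util.Properties;
>>>
>>> import org.apache.kafka.clients.consumer.ConsumerRecord;
>>> import org.apache.kafka.clients.consumer.ConsumerRecords;
>>> import org.apache.kafka.clients.consumer.KafkaConsumer;
>>> import org.apache.kafka.clients.consumer.OffsetAndMetadata;
>>> import org.apache.kafka.clients.producer.KafkaProducer;
>>> import org.apache.kafka.clients.producer.ProducerRecord;
>>> import org.apache.kafka.common.TopicPartition;
>>> import org.apache.kafka.common.errors.ProducerFencedException;
>>> import org.apache.kafka.common.serialization.StringDeserializer;
>>> import org.apache.kafka.common.serialization.StringSerializer;
>>>
>>> public class ReadProcessWrite {
>>>     public static void main(String[] args) {
>>>         // One instance per input partition, e.g. "java ReadProcessWrite 3".
>>>         TopicPartition in = new TopicPartition("input", Integer.parseInt(args[0]));
>>>
>>>         Properties c = new Properties();
>>>         c.put("bootstrap.servers", "localhost:9092");
>>>         c.put("group.id", "rpw");
>>>         c.put("isolation.level", "read_committed");
>>>         c.put("enable.auto.commit", "false");
>>>         c.put("key.deserializer", StringDeserializer.class.getName());
>>>         c.put("value.deserializer", StringDeserializer.class.getName());
>>>
>>>         Properties p = new Properties();
>>>         p.put("bootstrap.servers", "localhost:9092");
>>>         // Partition-scoped transactional.id: a restarted instance fences
>>>         // the zombie that previously owned this partition.
>>>         p.put("transactional.id", "rpw-input-" + in.partition());
>>>         p.put("key.serializer", StringSerializer.class.getName());
>>>         p.put("value.serializer", StringSerializer.class.getName());
>>>
>>>         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c);
>>>              KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
>>>             consumer.assign(Collections.singletonList(in));  // manual assignment
>>>             producer.initTransactions();                     // fences older producers
>>>             while (true) {
>>>                 ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
>>>                 if (records.isEmpty()) continue;
>>>                 producer.beginTransaction();
>>>                 Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
>>>                 for (ConsumerRecord<String, String> r : records) {
>>>                     producer.send(new ProducerRecord<>("output", r.key(), r.value()));
>>>                     offsets.put(in, new OffsetAndMetadata(r.offset() + 1));
>>>                 }
>>>                 // Input offsets commit atomically with the output records.
>>>                 producer.sendOffsetsToTransaction(offsets, "rpw");
>>>                 producer.commitTransaction();
>>>             }
>>>         } catch (ProducerFencedException fenced) {
>>>             // A newer instance with the same transactional.id took over; exit.
>>>         }
>>>     }
>>> }
>>>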
>>> I looked at what spring-kafka does. It does get the zombie fencing correct
>>> with respect to the fencing id; other bits of the code seem plausible.
>>>
>>> Notice I said "plausible", because I do not count a few end-to-end tests
>>> running on a single VM as solid enough evidence that this works in the
>>> face of failures.
>>>
>>> I have been contemplating how one stress tests this exactly-once concept,
>>> with something Jepsen-like or something brute force that I can run for 5
>>> hours in a row.
>>>
>>> If I faithfully implement the code in the transactional read-write loop
>>> and feed it into my Jepsen-like black-box tester, it should:
>>>
>>> Create a topic with 10 partitions, start launching the read-write
>>> transaction code, start feeding input data (maybe strings like 1-1000),
>>> now start randomly killing VMs with kill -9 and graceful exits, maybe even
>>> killing Kafka, and make sure 1-1000 pop out on the other end.
>>>
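>>> A rough sketch of the checker side, assuming the inputs are the strings
>>> "1" through "1000" and land on an "output" topic (the names are made up):
>>>
>>> import java.time.Duration;
>>> import java.util.Collections;
>>> import java.util.HashMap;
>>> import java.util.Map;
>>> import java.util.Properties;
>>>
>>> import org.apache.kafka.clients.consumer.ConsumerRecord;
>>> import org.apache.kafka.clients.consumer.KafkaConsumer;
>>> import org.apache.kafka.common.serialization.StringDeserializer;
>>>
>>> public class ExactlyOnceChecker {
>>>     public static void main(String[] args) {
>>>         Properties props = new Properties();
>>>         props.put("bootstrap.servers", "localhost:9092");
>>>         props.put("group.id", "eos-checker");
>>>         props.put("auto.offset.reset", "earliest");
>>>         // Only committed records count; aborted transactions stay invisible.
>>>         props.put("isolation.level", "read_committed");
>>>         props.put("key.deserializer", StringDeserializer.class.getName());
>>>         props.put("value.deserializer", StringDeserializer.class.getName());
>>>
>>>         Map<String, Integer> seen = new HashMap<>();
>>>         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
>>>             consumer.subscribe(Collections.singletonList("output"));
>>>             long deadline = System.currentTimeMillis() + 60_000;
>>>             while (System.currentTimeMillis() < deadline) {
>>>                 for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
>>>                     seen.merge(r.value(), 1, Integer::sum);
>>>                 }
>>>             }
>>>         }
>>>         // Exactly once means every value shows up exactly one time:
>>>         // a count of 0 is a lost message, a count above 1 is a duplicate.
>>>         for (int i = 1; i <= 1000; i++) {
>>>             int count = seen.getOrDefault(String.valueOf(i), 0);
>>>             if (count != 1) {
>>>                 System.out.println("value " + i + " seen " + count + " times");
>>>             }
>>>         }
>>>     }
>>> }
>>>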
>>> I thought of some other "crazy" ideas. One such idea:
>>>
>>> If I make a transactional "echo" (read x; write x back to the same topic),
>>> run N instances of that, and kill them randomly: if I am losing messages
>>> (and not duplicating messages) then the topic would eventually have no
>>> data.
>>>
>>> Or should I make a program with some math formula, like receive x, write
>>> xx. If duplication is happening, I would start seeing multiple xx's.
>>>
>>> Or send 1,000,000,000 messages through and have a consumer log them to a
>>> file. Then use an ETL tool to validate that the messages come out on the
>>> other side.
>>>
>>> Or should I use a NoSQL store with increments and count up, and ensure no
>>> key has been incremented twice?
>>>
>>> Note: I realize I can just use Kafka Streams or Storm, which have their
>>> own systems to guarantee "exactly once", but I am looking for a way to
>>> prove what can be done with pure Kafka (and not just prove it works for
>>> adtech, where 5% here or there is good enough).
>>>
>>> I imagine someone somewhere must be doing this. How? Where? Any tips? Is
>>> it part of some Kafka release stress test? I'm down to write it if it does
>>> not exist.
>>>
>>> Thanks,
>>> Edward
>>>
>>
>
> Boyang,
>
> Just to summarize and make sure I understood your question: you want to
> implement some chaos testing to validate the Kafka EOS model, but are not
> sure how to start, or are curious about whether there is already work in
> the community doing that?
>
> Yes.
>
> I am not an expert in this field, but I know that distributed systems can
> mask failures. For example, if you have an atomic increment, you might unit
> test it and it works fine, but if you run it for 40 days it might double
> count one time.
>
> "For the correctness of Kafka EOS, we have tons of unit tests and system
> tests to prove its functionality. They could be found inside the repo."
>
> I've been a developer for a while, so the phrase "there are tests" never
> tells me everything. Tests reveal the presence of bugs, not their absence.
>
> Can you please point me at the tests? My curiosity is whether there is a
> systematic, in-depth strategy here and how much rigor there is.
>
> In my environment I need to quantify and use rigor to prove out these
> things, things that you might take for granted. For example, I have to
> prove that ZooKeeper works as expected when we lose a datacenter. Most
> people 'in the know' take it for granted that Kafka and ZK do what is
> advertised when configured properly. I have to test that out and document
> my findings.
>
> For Kafka transactions, the user-space code needs to be written properly
> and configured properly, along with the server being set up properly. It is
> not enough for me to check out Kafka, run 'sbt test', and declare victory
> after the unit tests pass.
>
> What I am effectively looking for is the anti-Jepsen blog that says: we
> threw the kitchen sink at this and these transactions are bulletproof. Here
> is our methodology, here are some charts, here is xyz. Here is how we run
> it every minor release.
>
> I'm not trying to be a PITA; educate me on how bulletproof this is and how
> I can reproduce the results.