
Re: Too many commits

That's the problem, I think.
The gist of it (without going back on my part and checking the details of
the docs) is that the broker isn't seeing a poll from your consumer within
the allowed interval (max.poll.interval.ms) and thinks it's dead (crashed
or hung). The group rebalances, and then you try to commit offsets for an
assignment that is no longer valid.
The solution is pausing the consumer. However, pausing the consumer doesn't
mean you can stop polling; it just means poll isn't going to give you
anything back -- so you still need the loop to keep running.
That means you need to:

1. Get the message.
2. Pause the consumer (call pause on it).
3. Run your message processing in another thread.
4. Keep polling while the consumer is paused and your message is being
processed.
5. When the processing thread is done, call resume on the consumer.

I remember seeing a nice example of this by someone online, but I'm sorry I
don't have the link to it offhand.
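In the meantime, the steps above can be sketched roughly like this. This is
a minimal, self-contained illustration of the pattern, not real Kafka code:
FakeConsumer here is a hypothetical stand-in (real clients, e.g. the Java
KafkaConsumer, take a collection of TopicPartitions in pause()/resume(),
and poll() takes a timeout), but the shape of the loop is the same.

```python
import threading
import time

class FakeConsumer:
    """Hypothetical stand-in for a Kafka consumer: while paused,
    poll() still counts as liveness toward the broker but returns
    no records."""

    def __init__(self, messages):
        self._messages = list(messages)
        self._paused = False
        self.committed = []
        self.polls_while_paused = 0

    def poll(self):
        if self._paused:
            self.polls_while_paused += 1
            return None
        return self._messages.pop(0) if self._messages else None

    def pause(self):
        self._paused = True

    def resume(self):
        self._paused = False

    def commit(self, msg):
        self.committed.append(msg)


def slow_process(msg, consumer):
    time.sleep(0.05)       # simulate slow message processing
    consumer.commit(msg)   # commit only after processing succeeds


def poll_loop(consumer, total):
    handled = 0
    worker = None
    while handled < total or worker is not None:
        msg = consumer.poll()              # steps 1 and 4: never stop polling
        if worker is not None and not worker.is_alive():
            worker.join()
            consumer.resume()              # step 5: processing done, resume
            worker = None
        if msg is not None:
            consumer.pause()               # step 2: pause before slow work
            worker = threading.Thread(target=slow_process,
                                      args=(msg, consumer))
            worker.start()                 # step 3: process in another thread
            handled += 1
        time.sleep(0.01)


consumer = FakeConsumer(["m1", "m2"])
poll_loop(consumer, total=2)
print(consumer.committed)               # ['m1', 'm2']
print(consumer.polls_while_paused > 0)  # True: loop kept polling while paused
```

The key point is that the poll loop itself never blocks on the slow work;
the broker keeps seeing polls, so it never declares the consumer dead.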

Hope this helps.

On Thu, Apr 25, 2019 at 9:26 AM yuvraj singh <19yuvrajsingh90@gmail.com>
wrote:

> Yeah, it's taking time; that's why I am doing manual commits to achieve
> at-least-once semantics.
>
>
> Thanks
> Yubraj Singh
>
> On Thu, Apr 25, 2019, 12:49 PM Dimitry Lvovsky <dlvovsky@gmail.com> wrote:
>
> > Are your processes taking a long time between commits — does consuming
> each
> > message take a long while?
> >
> > On Thu, Apr 25, 2019 at 08:50 yuvraj singh <19yuvrajsingh90@gmail.com>
> > wrote:
> >
> > > Hi all,
> > >
> > > In my application I am committing every offset to Kafka one by one,
> > > and my max poll size is 30. I am facing a lot of commit failures, so
> > > is it because of the above reasons?
> > >
> > > Thanks
> > > Yubraj Singh
> > >
