
Engagement Programming: Super simple rate limit programming

by Adam Green on November 25, 2013

in Engagement Programming, Rate Limits

Rate limits are a constant concern when doing engagement programming with the REST API. I’ve settled on an incremental approach. Instead of building a rate accounting infrastructure that measures the remaining requests for each API call, I find it easier to write scripts that break high-usage tasks into manageable chunks that won’t exceed the limits. I then schedule a cronjob to repeat these scripts at a frequency that will stay below the rate limit. If for some reason I get back a 429 error code that signifies an exceeded rate limit, I have the script exit and let it try again later based on the cronjob schedule.
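As an illustration, here is a minimal sketch in Python of one such chunked script, in the shape of a follow-back task. The endpoint queue query, credential values, and the requests/requests_oauthlib HTTP layer are assumptions for the sketch, not code from this site; the point is the fixed chunk size per cron cycle and the immediate exit on a 429.

import sys
import requests
from requests_oauthlib import OAuth1

# Hypothetical credentials for the account this script manages.
AUTH = OAuth1("consumer_key", "consumer_secret", "access_token", "access_secret")

# Keep each cron cycle small enough to stay under the rate limit.
CHUNK_SIZE = 10

def pending_user_ids(limit):
    # Placeholder for a database query that returns the next batch of work,
    # e.g. SELECT user_id FROM follow_queue LIMIT ...
    return []

# This script is meant to be run repeatedly by cron, e.g. every 15 minutes.
for user_id in pending_user_ids(CHUNK_SIZE):
    resp = requests.post(
        "https://api.twitter.com/1.1/friendships/create.json",
        params={"user_id": user_id},
        auth=AUTH,
    )
    if resp.status_code == 429:
        # Rate limit exceeded: stop now and let the next cron run retry.
        sys.exit(0)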

To support this code, I also record every API call in an api_log database table that saves the account I was working with, the API request made, and the HTTP code returned. A separate script checks this table and emails me if the number of rate limit errors over the last hour exceeds a predefined level.
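A sketch of that logging and alerting, using sqlite3 and smtplib from the Python standard library. The table layout, alert threshold, and email addresses are all assumptions made for the example, not the actual 140dev schema.

import sqlite3
import smtplib
from email.message import EmailMessage

db = sqlite3.connect("api_log.db")
db.execute("""CREATE TABLE IF NOT EXISTS api_log (
    account TEXT,
    request TEXT,
    http_code INTEGER,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)""")

def log_api_call(account, request, http_code):
    # Called after every REST API request a script makes.
    db.execute(
        "INSERT INTO api_log (account, request, http_code) VALUES (?, ?, ?)",
        (account, request, http_code),
    )
    db.commit()

RATE_LIMIT_ALERT_THRESHOLD = 20  # assumed "predefined level"

def check_rate_limit_errors():
    # Run separately (e.g. hourly from cron); email if 429s pile up.
    (count,) = db.execute(
        "SELECT COUNT(*) FROM api_log "
        "WHERE http_code = 429 AND created_at >= datetime('now', '-1 hour')"
    ).fetchone()
    if count > RATE_LIMIT_ALERT_THRESHOLD:
        msg = EmailMessage()
        msg["Subject"] = f"{count} rate limit errors in the last hour"
        msg["From"] = "alerts@example.com"
        msg["To"] = "me@example.com"
        msg.set_content("Consider lowering the cron frequency or chunk size.")
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)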

This decoupled approach allows multiple scripts that use the same API call to overlap. That is the reality of a complex system. Trying to be too much of a control freak and coordinating every script so it never hits a rate limit ends up in diminishing returns. You spend more time on the scaffolding surrounding your code and less on actually getting work done.

Some people are afraid of ever triggering a rate limit error out of fear of suspension, but I have never had that problem. Remember, I back off as soon as I get the first error response. My code monitoring the api_log table also warns me if the system is getting overloaded. I can then reschedule the cronjobs at a lower rate, or have each script make fewer requests in each cycle.

That's how I've learned to manage Twitter API rate limits with a minimum of work.
