<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>140dev &#187; Rate Limits</title>
	<atom:link href="http://140dev.com/twitter-api-programming-blog/category/rate-limits/feed/" rel="self" type="application/rss+xml" />
	<link>http://140dev.com</link>
	<description>Twitter API Programming Tips, Tutorials, Source Code Libraries and Consulting</description>
	<lastBuildDate>Wed, 31 Jul 2019 10:03:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6</generator>
		<item>
		<title>Engagement Programming: Super simple rate limit programming</title>
		<link>http://140dev.com/twitter-api-programming-blog/engagement-programming-super-simple-rate-limit-programming/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/engagement-programming-super-simple-rate-limit-programming/#comments</comments>
		<pubDate>Mon, 25 Nov 2013 13:16:31 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Engagement Programming]]></category>
		<category><![CDATA[Rate Limits]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=2526</guid>
		<description><![CDATA[Rate limits are a constant concern when doing engagement programming with the REST API. I&#8217;ve settled on an incremental approach. Instead of building a rate accounting infrastructure that measures the remaining requests for each API call, I find it easier to write scripts that break high-usage tasks into manageable chunks that won&#8217;t exceed the limits. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><strong>Rate limits</strong> are a constant concern when doing <strong>engagement programming</strong> with the REST API. I&#8217;ve settled on an incremental approach. Instead of building a rate accounting infrastructure that measures the remaining requests for each API call, I find it easier to write scripts that break high-usage tasks into manageable chunks that won&#8217;t exceed the limits. I then schedule a cronjob to repeat these scripts at a frequency that will stay below the rate limit. If for some reason I get back a 429 error code that signifies an exceeded rate limit, I have the script exit and let it try again later based on the cronjob schedule. </p>
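<p>As a rough illustration of the chunked approach, here is a minimal Python sketch (the chunk size, the task queue format, and the handle() helper are hypothetical, not the actual scripts described here):</p>
<pre><code>import sys
import requests  # assumes the requests library is installed

CHUNK_SIZE = 50  # small enough that one cron cycle stays under the limit

def process_chunk(task_queue):
    # Work through one manageable chunk; cron reruns the script later.
    for task in task_queue[:CHUNK_SIZE]:
        resp = requests.get(task["url"], params=task["params"])
        if resp.status_code == 429:
            # Rate limit exceeded: back off immediately and let the
            # next scheduled cron run pick up where this one left off.
            sys.exit(0)
        handle(resp.json())  # hypothetical per-task handler
</code></pre>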
<p>To support this code, I also record all API calls in an api_log database table that saves the account I was working with, the API request made, and the HTTP code returned. A separate script checks this table and emails me if the number of rate limit errors over the last hour exceeds a pre-defined level. </p>
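<p>The logging side needs very little code. A sketch using the Python standard library, with sqlite3 standing in for the real database (the created_at column is an assumption on top of the fields named above):</p>
<pre><code>import sqlite3

db = sqlite3.connect("api_log.db")

def log_call(account, api_request, http_code):
    # Save the account used, the request made, and the HTTP code returned.
    db.execute(
        "INSERT INTO api_log (account, api_request, http_code, created_at) "
        "VALUES (?, ?, ?, datetime('now'))",
        (account, api_request, http_code))
    db.commit()

def rate_limit_errors_last_hour():
    # A separate cron script emails an alert when this count
    # exceeds a pre-defined level.
    row = db.execute(
        "SELECT COUNT(*) FROM api_log WHERE http_code = 429 "
        "AND created_at > datetime('now', '-1 hour')").fetchone()
    return row[0]
</code></pre>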
<p>This decoupled approach allows multiple scripts to overlap on the same API call. This is the reality of a complex system. Trying to be too much of a control freak and coordinating all my scripts so they never hit a rate limit ends up in diminishing returns. You end up spending more time on the scaffolding surrounding your code, and less on actually getting work done. </p>
<p>Some people avoid ever triggering a rate limit error for fear of account suspension, but I have never had that problem. Remember, I back off as soon as I get the first error response. My code monitoring the api_log table also warns me if my system is getting overloaded. I can then reschedule the cronjobs at a lower rate, or have each script make fewer requests in each cycle. </p>
<p id="ttext">I learned how to manage Twitter API rate limits with a minimum of work. </a></p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/engagement-programming-super-simple-rate-limit-programming/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why are Twitter API limits kept a secret?</title>
		<link>http://140dev.com/twitter-api-programming-blog/why-are-twitter-api-limits-kept-a-secret/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/why-are-twitter-api-limits-kept-a-secret/#comments</comments>
		<pubDate>Sat, 23 Nov 2013 16:19:19 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Rate Limits]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=2502</guid>
		<description><![CDATA[Twitter has always been vague when it comes to limits in the API. The search API is a good example. Until version 1.1 the search API had rate limits, but they were never revealed in the docs or by dev support staff. The most we were able to extract were statements about there being enough [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Twitter has always been vague when it comes to limits in the API. The search API is a good example. Until version 1.1 the search API had rate limits, but they were never revealed in the docs or by dev support staff. The most we were able to extract were statements about there being enough requests to handle the needs of most apps. Eventually the guesstimate of 200 search API requests per hour became the unwritten assumption. Other areas still left uncertain are the windows of time used to rate limit actions like posting tweets and DMs during the day. We are given daily limits, such as posting up to 1,000 tweets a day, but within the day there are secret limits based on unspecified time periods. The number of apps an account is able to create is another secret. I have some ideas on how the API dev community may be able to fill in these gaps in the future, but first I&#8217;d like to explore the psychology and corporate strategy driving this secrecy. </p>
<p><strong>Twitter is under constant attack</strong><br />
You have to look at this from the point of view of Twitter&#8217;s own developers and sysadmins. They are under attack from thousands, perhaps millions, of spammers and bots. There is @mention spam, reply spam, DM spam, follow spam, account creation spam, and probably kinds of spam I&#8217;ve never noticed. If you had to keep a massive server farm alive while getting beaten on constantly by waves of bots, you would naturally develop a bunker mentality. These are not true denial of service attacks, but they must feel like that when you are on the receiving end. </p>
<p>A classic way of dealing with online attacks is to reveal as few details as possible about your true limits. If you told everyone that they could only send X tweets per hour, spammers would instantly adjust their code to match X. Why make it easy for your attackers to just slip within your limits? </p>
<p>Unfortunately, before API 1.1 allowed Twitter to track every request to a specific account, it was impossible to tell the good devs from the spammers. I know that Twitter wants good developers to build on the API. If they didn&#8217;t it would just get turned off. But the primary task has been keeping the service going, and some good devs and their apps are just going to be collateral damage. </p>
<p><strong>The Twitter API is a byproduct, not a product</strong><br />
The true role of the Twitter API is to support Twitter.com and Twitter&#8217;s own apps. Allowing outsiders to also access the API was a very clever hack by Twitter that started before APIs were assumed to be offered by web services. If the primary job of the API is driving Twitter&#8217;s own products, then the limits have to be flexible and adjust to a constantly changing feature set. Twitter can&#8217;t tie itself down by promising 3rd party developers that there will be an exact set of limits for every action. With the rate of change Twitter&#8217;s code undergoes I&#8217;m willing to bet that the support staff doesn&#8217;t even know what the limits are for many requests at a given time. </p>
<p><strong>API limits have to react to demand and server load</strong><br />
For much of Twitter&#8217;s early years, just keeping the service up in the face of rapid growth was a huge challenge. One obvious solution was to throttle back API limits during periods of heavy load. Scaling Twitter and managing the load have been largely solved, but when there is a revolution, or a natural disaster, or someone like Mitt Romney makes a speech, the load still spikes. I&#8217;m sure the API is still throttled back at these times. </p>
<p><strong>Thou shalt not reverse engineer</strong><br />
Finally, we come to the lawyers. When reading Twitter&#8217;s terms of service it becomes clear that the lawyers were given the task of cutting off the possibility of a competitor duplicating Twitter. Since lawyers have to work within the constraints of the legal system and their own lack of technical knowledge, the common legal answer is &#8220;keep it vague.&#8221; Don&#8217;t reveal anything that will help a competitor understand your product&#8217;s internal machinery. I&#8217;ve dealt with IP lawyers enough to have gotten that answer many times. So management asks the lawyers what to say about limits, the lawyers say to reveal as little as possible, and this answer gets communicated back to the support staff. </p>
<p>What secret limits have you figured out? Do you know the true limits on search results returned or the number of apps an account can create? Share them here or <a href="http://twitter.com/140dev">tweet me</a>. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/why-are-twitter-api-limits-kept-a-secret/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Twitter Consultant Tip: Start with Twitter API rate limits</title>
		<link>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-start-with-twitter-api-rate-limits/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-start-with-twitter-api-rate-limits/#comments</comments>
		<pubDate>Fri, 01 Jun 2012 14:32:24 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Rate Limits]]></category>
		<category><![CDATA[Twitter consultant]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1497</guid>
		<description><![CDATA[A good Twitter consultant should start any discussion with a potential client by reviewing the Twitter API rate limits on the features they want. This is really a case of form follows function. Twitter has defined what developers should be doing through their wide range of rate limits, and you better pay attention to them [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A good Twitter consultant should start any discussion with a potential client by reviewing the Twitter API rate limits on the features they want. This is really a case of form follows function. Twitter has defined what developers should be doing through their wide range of rate limits, and you better pay attention to them before promising to deliver on a client&#8217;s dream app. One rule I start with is: tweets are easy, followers are hard. You can get tons of tweets with the streaming API, which has no rate limits. Getting followers, on the other hand, is something Twitter clearly does not want you to do in bulk. </p>
<p>For example, you can request up to 5,000 followers of a specific account at a time, but you can only get user profiles for these followers at a rate of 100 per API call. This means that there is no way to get the details on all the followers of a major celebrity in anything like real-time. </p>
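<p>The arithmetic behind that claim, as a quick sketch (the follower count is purely illustrative):</p>
<pre><code>import math

followers = 2000000  # a mid-sized celebrity

id_calls = math.ceil(followers / 5000)      # follower IDs: 400 calls
profile_calls = math.ceil(followers / 100)  # full profiles: 20,000 calls

# At 350 REST API calls per hour (see the next paragraph), those
# 20,000 calls mean more than 57 hours just to hydrate one account.
print(id_calls, profile_calls)
</code></pre>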
<p>Another aspect of rate limits that has a big impact on code architecture is deciding which entity is making the API call. If you do everything from a single server with the OAuth keys for the app itself, you only get 350 calls per hour with the REST API. But if you let users login through OAuth, you can use their keys and get 350 calls per hour with each set of keys. Just 100 users give you 35,000 calls per hour. Pretty powerful reason to build OAuth login into a site, right? This is perfectly kosher. You are not really taking anything from the users. Each app they authorize gets 350 calls per hour with a different set of keys based on the same person. </p>
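<p>In code this just means cycling through stored user tokens instead of always using the app&#8217;s own keys. A hedged sketch (load_tokens_from_db() and oauth_request() are hypothetical helpers):</p>
<pre><code>from itertools import cycle

# Tokens collected as each user logged in through OAuth.
user_tokens = load_tokens_from_db()  # hypothetical helper
token_pool = cycle(user_tokens)

def api_call(url, params):
    # Each set of user keys carries its own 350 calls per hour,
    # so rotating the pool multiplies the effective rate limit.
    token = next(token_pool)
    return oauth_request(url, params, token)  # hypothetical OAuth wrapper
</code></pre>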
<p>You can also play with rate limits by offloading some of the functionality into the user&#8217;s browser. When you make an API call that isn&#8217;t using OAuth keys, such as the search API, the rate limit is charged to the IP of the server that connects to Twitter. A way around this limit is to call the search API with Javascript from the web page. In that case, the IP of the user&#8217;s browser is absorbing the rate limit. That can scale up to any number of users. We take advantage of this technique in the <a href="http://thisrth.at/">ThisrThat</a> app we built for a client. </p>
<p>There are lots of other rate limit tricks you can use, but this is enough to make the point that a Twitter consultant needs to make new clients aware of the limits they will be facing before a complete feature list is created. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-start-with-twitter-api-rate-limits/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Overcoming 502 errors while backfilling tweets</title>
		<link>http://140dev.com/twitter-api-programming-blog/overcoming-502-errrors-while-backfilling-tweets/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/overcoming-502-errrors-while-backfilling-tweets/#comments</comments>
		<pubDate>Fri, 04 Feb 2011 13:15:55 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Rate Limits]]></category>
		<category><![CDATA[Streaming API]]></category>
		<category><![CDATA[Twitter Politics]]></category>
		<category><![CDATA[Twitter Server Errors]]></category>
		<category><![CDATA[2012 candidates]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1178</guid>
		<description><![CDATA[I&#8217;m collecting all the tweets for possible 2012 candidates with the Streaming API, and I wanted to make sure I was getting every one of their tweets. I built a backfilling script to go through every tweet in each of these accounts, and add any that weren&#8217;t already in the database. This uses the /statuses/user_timeline [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I&#8217;m collecting all the tweets for possible 2012 candidates with the Streaming API, and I wanted to make sure I was getting every one of their tweets. I built a backfilling script to go through every tweet in each of these accounts, and add any that weren&#8217;t already in the database. This uses the <a href="http://dev.twitter.com/doc/get/statuses/user_timeline">/statuses/user_timeline</a> call to get the past tweets. I ran into a problem with tons of 502 errors from the Twitter API, as many as one every two or three API calls. </p>
<p>Taylor Singletary on the Dev mailing list suggested dropping the count parameter to avoid timeout errors, and this has helped a lot. I was using a count of 200 tweets per call to keep the number of calls low. This gave me all the data in about 100 calls, but with the errors I wasn&#8217;t able to complete the process before hitting the rate limit. I tried dropping the count to 100, and this allowed the script to finish with a total of 298 calls. </p>
<p>So now I have the catch-22 of needing to do more API calls to avoid the errors that cause too many API calls. The only solution I see is to cut the count parameter to a level low enough to avoid errors, and then spread the backfilling out over multiple hours to stay within the rate limit. </p>
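<p>A sketch of that compromise (Python; the version 1 endpoint is the one linked above, and the sleep intervals and store_missing() helper are assumptions to tune):</p>
<pre><code>import time
import requests  # assumes the requests library is installed

COUNT = 100  # low enough to avoid the 502 timeouts described above

def backfill(screen_name, max_id=None):
    while True:
        params = {"screen_name": screen_name, "count": COUNT}
        if max_id:
            params["max_id"] = max_id
        resp = requests.get(
            "http://api.twitter.com/1/statuses/user_timeline.json",
            params=params)
        if resp.status_code == 502:
            time.sleep(60)  # transient server error: wait, then retry
            continue
        tweets = resp.json()
        if not tweets:
            break  # reached the end of the available timeline
        store_missing(tweets)  # hypothetical de-duplicating insert
        max_id = tweets[-1]["id"] - 1  # page backward through history
        time.sleep(30)  # spread the calls out to stay under the limit
</code></pre>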
<p>I think the ultimate solution is to do a steady level of backfilling spread over the entire day. I haven&#8217;t had to do backfilling in the past, because I was treating the Streaming API tweet collection as a high volume sampling mechanism. As long as I got lots of tweets on a particular subject, it was good. Now that I want to maintain a database of every tweet made by the candidates I have to backfill to make sure nothing was missed by streaming. This seems to be necessary, since every time I run the backfill I get two to three tweets that didn&#8217;t get sent by streaming. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/overcoming-502-errrors-while-backfilling-tweets/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Twitter Consultant Tip: Get all the Twitter data you need for free</title>
		<link>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-get-all-the-twitter-data-you-need-for-free/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-get-all-the-twitter-data-you-need-for-free/#comments</comments>
		<pubDate>Fri, 03 Dec 2010 15:24:55 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Consulting Tips]]></category>
		<category><![CDATA[Rate Limits]]></category>
		<category><![CDATA[Streaming API]]></category>
		<category><![CDATA[Twitter Developers]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1143</guid>
		<description><![CDATA[Since the announcement of the Twitter-Gnip partnership, there have been lots of news stories and blog posts stating that this is the end of the independent developer, because there is no more free Twitter data. This is completely wrong. You can get all the Twitter data you need, as long as you don&#8217;t want *all* [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Since the announcement of the Twitter-Gnip partnership, there have been lots of news stories and blog posts stating that this is the end of the independent developer, because there is no more free Twitter data. This is completely wrong. You can get all the Twitter data you need, as long as you don&#8217;t want *all* the Twitter data. What Twitter is selling through Gnip is up to 50% of the full Firehose, which means 50% of all tweets. That is 50 million tweets a day at the present time. Twitter is also selling the entire Firehose to search engines, like Google and Bing.</p>
<p>Nobody is going to convince me that an independent consultant or a private corporation needs a copy of every single tweet. What these people need is all the tweets for a specific set of keywords or from specific users, and that is still free through the streaming API. Using the <a href="http://dev.twitter.com/pages/streaming_api_methods#statuses-filter">/statuses/filter</a> request you can get all the tweets for up to 400 keywords and 5,000 users. All you have to do is decide which words or users you need to track when you make the request.</p>
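<p>A minimal sketch of that request (Python; the version 1 streaming endpoint and basic auth shown reflect how streaming worked at the time, and the keyword list and store_tweet() helper are illustrative):</p>
<pre><code>import requests  # assumes the requests library is installed

# Up to 400 track keywords and 5,000 follow user IDs per connection.
params = {"track": "python,twitter,api"}

resp = requests.post(
    "http://stream.twitter.com/1/statuses/filter.json",
    data=params,
    auth=("username", "password"),  # placeholder credentials
    stream=True)

for line in resp.iter_lines():
    if line:
        store_tweet(line)  # hypothetical: parse the JSON, insert into DB
</code></pre>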
<p>What Twitter won&#8217;t let you do is try and grab every single tweet, store them in a database, and then deliver them for selected keywords or users. That is the definition of a search engine. If you really have the bandwidth and server capacity needed to do this for 100,000,000 tweets each day, why in the world should Twitter deliver this to you for free and foot the bill for its side of the bandwidth and server capacity? That is just absurd. But then a lot of Web 2.0 was exactly that. Thankfully, it is now over.</p>
<p>This ability to get all the tweets, as long as you limit the keywords and users to a reasonable number was restated again today by John Kalucki of the Twitter API team. A developer asked on the <a href="http://groups.google.com/group/twitter-development-talk/browse_thread/thread/469f65900acad14d">API mailing list</a>:</p>
<blockquote><p>If I am using the statuses/filter streaming API, with a &#8220;track=&#8221; query<br />
that is not overly broad, and my client never receives any &#8220;limit&#8221;<br />
responses, can I assume that the results returned represent all the<br />
results from the entire firehose?  In other words, in the absence of<br />
&#8220;limit&#8221; response, is my visibility into the firehose 100%?</p></blockquote>
<p>John responded:</p>
<blockquote><p>Yes, where firehose is the stream of all public statuses, with some low-quality accounts removed.</p></blockquote>
<p>From my usage of the streaming API, this is correct.</p>
<p>But what about even higher limits? Shouldn&#8217;t data be free? Maybe it should be in a perfect world, but in the real world bandwidth, servers, and labor costs aren&#8217;t. If you actually need more than the default level of access, you can request a higher level, which is often given for free. If you need all of Twitter&#8217;s data, you should share Twitter&#8217;s costs, because you better have a business model that supports your side of the costs.</p>
<p>And if you need <a href="http://140dev.com/free-twitter-api-source-code-library/">source code</a> for gathering tweets from the streaming API and storing them in a database, that is still free also.</p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-get-all-the-twitter-data-you-need-for-free/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Raffi is planning new HTTP response codes for the API</title>
		<link>http://140dev.com/twitter-api-programming-blog/raffi-is-planning-new-http-response-codes-for-the-api/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/raffi-is-planning-new-http-response-codes-for-the-api/#comments</comments>
		<pubDate>Mon, 27 Sep 2010 18:27:35 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Rate Limits]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=95</guid>
		<description><![CDATA[@raffi has a detailed post on the problems with the error codes returned by the Twitter API to report rate limit errors and possible changes to them. This is interesting for a number of reasons: If his post is taken at face value, it means that he has a lot of latitude to make some [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://twitter.com/raffi">@raffi</a> has a <a href="http://mehack.com/inventing-a-http-response-code-aka-seriously">detailed post</a> on the problems with the error codes returned by the Twitter API to report rate limit errors and possible changes to them. This is interesting for a number of reasons:</p>
<ul>
<li>If his post is taken at face value, it means that he has a lot of latitude to make some serious changes to the behavior of the API. </li>
<li>I&#8217;m assuming that somebody will eventually have to approve these changes, but Raffi makes no mention of any design meetings or internal debate taking place. It is as if he is discussing code that is totally under his control. </li>
<li>The response codes buried within the control structures of every script that interacts with the API are subject to change, so it pays to centralize how they are handled (see the sketch after this list). </li>
</ul>
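<p>One way to limit the damage from a change like that is to route every API response through a single checkpoint, so a new code means editing one function rather than every script. A sketch (not part of Raffi&#8217;s proposal; 420 and 429 are both codes Twitter has used for rate limiting):</p>
<pre><code>import requests  # resp below is a requests.Response

class RateLimitError(Exception): pass
class ApiError(Exception): pass

def check_response(resp):
    # Single choke point: if Twitter changes its response codes,
    # only this function has to change.
    if resp.status_code == 200:
        return resp.json()
    if resp.status_code in (420, 429):
        raise RateLimitError(resp.text)
    raise ApiError(resp.status_code)
</code></pre>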
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/raffi-is-planning-new-http-response-codes-for-the-api/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
