<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>140dev &#187; Database Cache</title>
	<atom:link href="http://140dev.com/twitter-api-programming-blog/category/database-cache/feed/" rel="self" type="application/rss+xml" />
	<link>http://140dev.com</link>
	<description>Twitter API Programming Tips, Tutorials, Source Code Libraries and Consulting</description>
	<lastBuildDate>Wed, 31 Jul 2019 10:03:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6</generator>
		<item>
		<title>How is Twitter programming better than Twitter search?</title>
		<link>http://140dev.com/twitter-api-programming-blog/how-is-twitter-programming-better-than-twitter-search/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/how-is-twitter-programming-better-than-twitter-search/#comments</comments>
		<pubDate>Fri, 08 Jun 2012 12:51:28 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Data Mining Tweets]]></category>
		<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Quality Control]]></category>
		<category><![CDATA[Twitter Spam]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1540</guid>
		<description><![CDATA[This is a question I frequently get asked by new clients. They know there is a Twitter API available to collect tweets, but they have no idea how the results differ from just asking for tweets with Search.Twitter.com. I&#8217;ve recently explained the fact that a tweet database lets you create a long-term store that cannot [&#8230;]]]></description>
				<content:encoded><![CDATA[<p></p><p>This is a question I frequently get asked by new clients. They know there is a Twitter API available to collect tweets, but they have no idea how the results differ from just asking for tweets with Search.Twitter.com. I&#8217;ve recently <a href="http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-tweet-data-is-priceless/">explained</a> the fact that a tweet database lets you create a long-term store that cannot be reproduced or purchased any other way. That is just the starting point. The real advantage of Twitter API programming is the way it allows you to add value to a collection of tweets:</p>
<ul>
<li>You can apply <a href="http://140dev.com/twitter-api-programming-blog/screening-a-tweet-stream-for-quality-control/">quality control rules</a> that let you filter out false positives for the keywords you are using in your collection query.</li>
<li>I also like to apply simple &#8220;filth controls&#8221; to all tweet streams that get displayed on sites. This starts with a list of George Carlin&#8217;s 7 words you can&#8217;t say on television, and grows into a list of the more creative racist and misogynist words so popular on Twitter. Excluding tweets with these words makes Twitter seem much more civilized.</li>
<li>A simple <a href="http://140dev.com/twitter-api-programming-blog/language-detection-for-tweets-part-1/">language detection algorithm</a> will let you keep tweets in a specific language and exclude all others.</li>
<li>By checking the tweets you receive for spammy words, like free, coupon, buy now, or sale, you can clean out a high percentage of spam tweets. If you also check new tweets for duplicates, you can identify spammers and blacklist them.</li>
<li>If you screen the user account data for each tweet&#8217;s author, you can exclude accounts that have a spammy profile, such as a default avatar, no followers, or an account that has only been in existence a few days.</li>
<li>Or you can come up with an influence algorithm, such as follower count or frequency of mentions, to select tweets from the most influential users.</li>
</ul>
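<p>A minimal sketch of the word-list screens described above, assuming a hypothetical passes_word_filter() helper that is not part of the 140dev library:</p>

```php
<?php
// Hypothetical word-list screen in the spirit of the filth and spam
// rules above. Returns true if the tweet text contains no blocked word.
function passes_word_filter($tweet_text, array $blocked_words) {
  $text = mb_strtolower($tweet_text, 'UTF-8');
  foreach ($blocked_words as $word) {
    // \b word boundaries avoid false hits inside longer words ("freedom")
    $pattern = '/\b' . preg_quote(mb_strtolower($word, 'UTF-8'), '/') . '\b/u';
    if (preg_match($pattern, $text)) {
      return false;
    }
  }
  return true;
}

// Spammy words named in the post; a production list would be much longer
$spam_words = array('free', 'coupon', 'buy now', 'sale');
var_dump(passes_word_filter('Great article on the Twitter API', $spam_words)); // bool(true)
var_dump(passes_word_filter('Buy now and get one FREE!', $spam_words));        // bool(false)
```

<p>The same helper can back several of the screens: one call with the filth list, one with the spam list, before a tweet is inserted into the database.</p>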
<p>These are just the generic ways to add value to a tweet aggregation site. Once you start working with a client with specific application needs, there are many ways to add value to Twitter. This is an iterative process that keeps improving the quality of your tweet collection.</p>
<p>So the simple answer to the question is that Twitter programming produces much higher quality results than Twitter search.</p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/how-is-twitter-programming-better-than-twitter-search/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Twitter Consultant Tip: Tweet data is priceless</title>
		<link>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-tweet-data-is-priceless/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-tweet-data-is-priceless/#comments</comments>
		<pubDate>Thu, 31 May 2012 20:12:52 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Data Mining Tweets]]></category>
		<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Twitter consultant]]></category>
		<category><![CDATA[Twitter Database Programming]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1489</guid>
		<description><![CDATA[Most of the Twitter consulting I do involves some form of tweet collection and storage in a database. Even when clients approach me with this in mind, they hardly ever realize just how valuable tweet data can be. In fact, it is priceless in the truest sense of the word, because there is no way [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Most of the Twitter consulting I do involves some form of tweet collection and storage in a database. Even when clients approach me with this in mind, they hardly ever realize just how valuable tweet data can be. In fact, it is priceless in the truest sense of the word, because there is no way to buy tweets after they are sent. You either capture them in real-time, or they are gone forever. Anyone who wants to work as a Twitter consultant needs to be able to explain that value-added message to potential clients. Here are the key selling points to keep in mind. </p>
<p>The Twitter search API only goes back in time 5 to 6 days, and will only return up to 1,500 tweets for any query. If you want old tweets from the API, that is an absolute limit. The streaming API is much more responsive, and will return up to 1% of the total stream, meaning that you can get up to 3 million tweets a day on any query, but these tweets are returned in real-time, not after the fact. So if you want to get all the tweets for a query, you must set up the streaming API connection <em>before you need the results</em>.  Then you must store them in a database for later retrieval. </p>
<p>The <a href="https://twitter.com/tos">Twitter terms of service</a> (TOS) allow you to store tweets for use on your own server, either for display or analysis, but there are strict limitations on reselling this data. You can sell it in discrete data sets as a file, such as a PDF or Excel file, but you cannot resell it as an API or real-time service. This means that if someone has already collected tweets that you need, you are forbidden from buying them as a continuous stream for display on your site. If you haven&#8217;t collected them yourself, you can&#8217;t have a real-time display of tweets on your site, even if you are willing to pay for them. </p>
<p>But what about Twitter&#8217;s data partners, Gnip and DataSift? These sites don&#8217;t publicize this limitation on their sites, but they are also forbidden by Twitter&#8217;s license from selling tweets for display on other sites. The tweets you buy from them may only be used for analysis, such as in a product like Radian6. </p>
<p>All of this means that once a client has built up a long-term database of tweets, they have a priceless resource. There is no price at which these tweets can be bought and sold for continuous display. That makes a tweet database an incredibly valuable resource, and it means that you have to start collecting tweets and saving them in advance. There is no going back for them. </p>
<p>Once clients understand this, they suddenly become very acquisitive. They can collect all the tweets about politicians, celebrities, athletes, TV shows, etc., and have an iron-clad barrier to entry against any competitor coming along later. That is a valuable selling tool for any Twitter consultant who can do this type of database programming. My free, <a href="http://140dev.com/free-twitter-api-source-code-library/">open source library</a> is a good starting point for this type of coding. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-tweet-data-is-priceless/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Simple PHP/MySQL database library source code: db_lib.php</title>
		<link>http://140dev.com/twitter-api-programming-blog/simple-php-mysql-database-library-source-code/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/simple-php-mysql-database-library-source-code/#comments</comments>
		<pubDate>Tue, 15 May 2012 14:28:07 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Twitter Database Programming]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1378</guid>
		<description><![CDATA[There seems to be a good amount of interest in the new set of tutorials I&#8217;ve started writing, and most of the code I produce interacts with a MySQL database, so I&#8217;m going to post the code for my standard database library here. This makes it easy for me to link to this post multiple [&#8230;]]]></description>
				<content:encoded><![CDATA[<p></p><p>There seems to be a good amount of interest in the new set of tutorials I&#8217;ve started writing, and most of the code I produce interacts with a MySQL database, so I&#8217;m going to post the code for my standard database library here. This makes it easy for me to link to this post multiple times, rather than include the source of this library in multiple posts. This is a simplified version of the library included in the <a href="http://140dev.com/free-twitter-api-source-code-library/twitter-database-server/db-lib-php/">140dev Framework</a>. </p>
<p>The login info for this library is kept in a separate script called db_config.php. For the sample code shown on this blog, this configuration file is kept in the same directory as the db_lib.php script. Security-minded programmers will probably want to keep this in a different location on their server, preferably outside the web-accessible directories. </p>
<p><strong>db_config.php</strong><br />
<table><tr><td class="line_numbers"><pre>1
2
3
4
5
6
7
</pre></td><td class="code"><pre>&lt;?php
// db_config.php
$db_host = 'localhost';
$db_user = 'ENTER USER NAME HERE';
$db_password = 'ENTER USER PASSWORD HERE'; 
$db_name = 'ENTER DATABASE NAME HERE'; 
?&gt; </pre></td></tr></table></p>
<p>The actual library code is written as a PHP class. This allows the code to open a MySQL connection once, and then keep it open for the entire time the script using the library is running. The library contains simple functions for preparing data for insertion, running any SQL query, checking to see if a value already exists in a table, and table insertion and update functions. </p>
<p><strong>db_lib.php</strong><br />
<table><tr><td class="line_numbers"><pre>1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
</pre></td><td class="code"><pre>&lt;?php
// db_lib.php

class db
{
  public $dbh;

  // Create a database connection for use by all functions in this class
  function __construct() {

    require_once('db_config.php');
    
    $this-&gt;dbh = mysqli_connect($db_host,
      $db_user, $db_password, $db_name);
    if (!$this-&gt;dbh) {
      exit('Unable to connect to DB');
    }
	// Set every possible option to utf-8
    mysqli_query($this-&gt;dbh, 'SET NAMES &quot;utf8&quot;');
    mysqli_query($this-&gt;dbh, 'SET CHARACTER SET &quot;utf8&quot;');
    mysqli_query($this-&gt;dbh, 'SET character_set_results = &quot;utf8&quot;,' .
        'character_set_client = &quot;utf8&quot;, character_set_connection = &quot;utf8&quot;,' .
        'character_set_database = &quot;utf8&quot;, character_set_server = &quot;utf8&quot;');
  }
  
  // Create a standard data format for insertion of PHP dates into MySQL
  public function date($php_date) {
    return date('Y-m-d H:i:s', strtotime($php_date));	
  }
  
  // All text added to the DB should be cleaned with mysqli_real_escape_string
  // to block attempted SQL insertion exploits
  public function escape($str) {
    return mysqli_real_escape_string($this-&gt;dbh,$str);
  }
    
  // Test to see if a specific field value is already in the DB
  // Return false if no, true if yes
  public function in_table($table,$where) {
    $query = 'SELECT * FROM ' . $table . 
      ' WHERE ' . $where;
    $result = mysqli_query($this-&gt;dbh,$query);
    return mysqli_num_rows($result) &gt; 0;
  }

  // Perform a generic select and return a pointer to the result
  public function select($query) {
    $result = mysqli_query( $this-&gt;dbh, $query );
    return $result;
  }
    
  // Add a row to any table
  public function insert($table,$field_values) {
    $query = 'INSERT INTO ' . $table . ' SET ' . $field_values;
    mysqli_query($this-&gt;dbh,$query);
  }
  
  // Update any row that matches a WHERE clause
  public function update($table,$field_values,$where) {
    $query = 'UPDATE ' . $table . ' SET ' . $field_values . 
      ' WHERE ' . $where;
    mysqli_query($this-&gt;dbh,$query);
  } 
 
}  
?&gt;</pre></td></tr></table></p>
<p>There will be practical examples of using this library throughout the tutorials coming up in the blog. Just to show the simplest example possible, here is a script that makes a database connection and then runs a &#8220;SHOW TABLES&#8221; MySQL query. </p>
<p><strong>db_lib_demo.php</strong><br />
<table><tr><td class="line_numbers"><pre>1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
</pre></td><td class="code"><pre>&lt;?php 
// db_lib_demo.php

// Create a database connection
require_once('db_lib.php');
$oDB = new db;

// Run a MySQL query
$query = &quot;SHOW TABLES&quot;;
$result = $oDB-&gt;select($query);

// Retrieve the first row of results as an array
$row = mysqli_fetch_assoc($result);
print_r($row);

?&gt;</pre></td></tr></table></p>
<p>This script can be run from the command line over SSH or Telnet, or however you normally connect to your server.</p>
<p><code># <b>php db_lib_demo.php</b></code><br />
<table><tr><td class="line_numbers"><pre>1
2
3
4
</pre></td><td class="code"><pre> Array
(
    [Tables_in_140dev_tutorials] =&gt; rss_feed
)</pre></td></tr></table></p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/simple-php-mysql-database-library-source-code/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Dealing with tweet bursts</title>
		<link>http://140dev.com/twitter-api-programming-blog/dealing-with-tweet-bursts/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/dealing-with-tweet-bursts/#comments</comments>
		<pubDate>Fri, 27 Jan 2012 14:11:25 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Streaming API]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Twitter Politics]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1309</guid>
		<description><![CDATA[This week we got crushed by the State of the Union speech. We normally get about 30,000 to 50,000 tweets per day in the 2012twit.com database, and our largest server can handle that without showing any appreciable load. During the SOTU, tweet volume exploded. We got 500,000 tweets in about 4 hours. I was [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>This week we got crushed by the State of the Union speech. We normally get about 30,000 to 50,000 tweets per day in the <a href="http://2012twit.com">2012twit.com</a> database, and our largest server can handle that without showing any appreciable load. During the SOTU, tweet volume exploded. We got 500,000 tweets in about 4 hours. I was able to keep the server going by shutting down some processes that weren&#8217;t needed, but it was a challenge. This issue of bursts of tweets seems to be getting worse. In the case of Twitter and politics, people are getting used to talking back to the TV through Twitter. With 9 months left until the election, I needed to find some solutions. </p>
<p>I spent a lot of time over the last 2 days trying to find the problem, and discovered that it was not parsing the tweets that was killing us, but inserting the raw tweet data into the json_cache table. I use a <a href="http://140dev.com/twitter-api-programming-tutorials/twitter-api-database-cache/">two-phase processing system</a>: the raw tweets delivered by the streaming API are inserted as fast as possible into a cache table, and a separate parsing phase then breaks them out into a normalized schema. You can get the <a href="http://140dev.com/free-twitter-api-source-code-library/">basic code</a> for this as open source. </p>
<p>It looks like Twitter has been steadily increasing the size of the basic payload that it sends for each tweet in the streaming API. That makes sense, since people are demanding more data. Yesterday they announced some insane scheme where every tweet will include data about countries that don&#8217;t want tweets with specific words to be displayed. This will only get worse.</p>
<p>I realized that I have never actually needed to go back and reparse the contents of json_cache, and I had long ago added purging code to my 2012twit system to delete anything in that table older than 7 days. I tried clearing out the json_cache table on my server and modifying the code to delete each tweet as soon as it was parsed. This cut the table from an average of several hundred thousand rows to about 50. The load on that server dropped right away, and during the GOP debate last night it stayed very low. </p>
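<p>A rough sketch of the parse-then-purge change described above; the payload fields (id_str, text, user) match the streaming API format, but the helper itself and the DELETE shown in the comment are illustrative, not the exact 2012twit code:</p>

```php
<?php
// Illustrative first half of the two-phase system: decode a raw cached
// payload and keep only the fields the normalized schema needs.
function parse_cached_tweet($raw_json) {
  $tweet = json_decode($raw_json, true);
  if ($tweet === null || !isset($tweet['id_str'])) {
    return null; // malformed payload: skip it
  }
  return array(
    'tweet_id'    => $tweet['id_str'],
    'screen_name' => $tweet['user']['screen_name'],
    'tweet_text'  => $tweet['text'],
  );
}

// In the parsing script the purge now happens right after a successful
// parse, instead of waiting for a 7-day cleanup pass, e.g. (assuming an
// integer primary key on json_cache):
//   mysqli_query($dbh, 'DELETE FROM json_cache WHERE id = ' . (int)$row['id']);
```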
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/dealing-with-tweet-bursts/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Dealing with 500 errors when retrieving user data</title>
		<link>http://140dev.com/twitter-api-programming-blog/dealing-with-500-errors-when-retrieving-user-data/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/dealing-with-500-errors-when-retrieving-user-data/#comments</comments>
		<pubDate>Thu, 24 Nov 2011 15:32:43 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[API error]]></category>
		<category><![CDATA[Database Cache]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1237</guid>
		<description><![CDATA[I got a call the other day from a developer who was receiving various 500 series errors when trying to gather large amounts of Twitter user data. The API has a number of errors in the 500 range, all of which generally mean that the Twitter servers are overloaded. The API is built on the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p></p><p>I got a call the other day from a developer who was receiving various 500 series errors when trying to gather large amounts of Twitter user data. The API has a number of errors in the 500 range, all of which generally mean that the Twitter servers are overloaded. The API is built on the principle of staying alive while handling as many requests as possible. If the load gets too high or a request takes too long to process, the request is dumped and one of the 500 errors is returned.</p>
<p>The specific requirement for this developer was getting information on all the followers of his app&#8217;s users. He was doing this in brute-force fashion every 24 hours: first looking up all the followers 5,000 at a time with the <a href="https://dev.twitter.com/docs/api/1/get/followers/ids">/followers/ids</a> call, then getting the profile data for each of these followers 100 at a time with <a href="https://dev.twitter.com/docs/api/1/get/users/lookup">/users/lookup</a>. This is a very intensive use of the API, and it is exactly what Twitter doesn&#8217;t want you to do. Look at the hint they are offering by returning 5,000 follower ids in a single call, but doling out profile data on only 100 users. They are telling us not to request too much user data.</p>
<p>Whenever possible you should be caching data you get from the API. User profiles are a perfect example. Instead of requesting data on every user every 24 hours, it is much better to store user profiles in a database, and request this data less often. Cutting back to once every 7 days reduces the number of API calls by 86%. I recommended that he adopt this type of caching and then check the user ids he receives from /followers/ids against the user database table. If the user is new or hasn&#8217;t been updated recently, then request the profile with /users/lookup. </p>
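<p>The 7-day refresh rule can be reduced to a small staleness check; the function name and arguments here are hypothetical:</p>

```php
<?php
// Hypothetical staleness test for a cached user profile.
// $last_updated is the MySQL datetime stored with the profile row;
// null means the user has never been cached at all.
function profile_needs_refresh($last_updated, $now, $max_age_days = 7) {
  if ($last_updated === null) {
    return true; // new user: fetch the profile
  }
  $age_seconds = strtotime($now) - strtotime($last_updated);
  return $age_seconds > $max_age_days * 86400;
}

var_dump(profile_needs_refresh('2011-11-01 00:00:00', '2011-11-24 00:00:00')); // bool(true)
var_dump(profile_needs_refresh('2011-11-23 00:00:00', '2011-11-24 00:00:00')); // bool(false)
var_dump(profile_needs_refresh(null, '2011-11-24 00:00:00'));                  // bool(true)
```

<p>Each id returned by /followers/ids is checked against the user table with a test like this, and only stale or missing profiles get queued for /users/lookup.</p>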
<p>It also helps to be opportunistic about caching. Many of the API calls return a user&#8217;s profile in the payload. If you get this data anywhere in your code, take advantage of this opportunity to cache it. </p>
<p>The other solution to 500 errors is to request less data each time. As I said, a 500 error is often a timeout. While the /users/lookup call allows you to request 100 users at a time, try backing off to just 50. It will take more API calls, but you&#8217;ll have a better chance of getting results without an error. This type of logic should be built into your code: if a request triggers a 500 error, scale back the quantity requested and repeat the call. </p>
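<p>The scale-back-and-repeat logic might look like this sketch; the $fetch callable stands in for the real /users/lookup request, and halving on each failure is just one reasonable strategy:</p>

```php
<?php
// Illustrative retry loop: on a 500-style failure, halve the batch size
// and try again, down to a floor of one user per request.
// $fetch takes a batch size and returns a result array, or false to
// signal that the API answered with a 5xx error.
function fetch_with_backoff($fetch, $batch_size = 100, $min_batch = 1) {
  while ($batch_size >= $min_batch) {
    $result = $fetch($batch_size);
    if ($result !== false) {
      return $result;
    }
    // Request was dumped by an overloaded server: ask for half as much
    $batch_size = (int)($batch_size / 2);
  }
  return false; // even the smallest request failed
}

// Stub that "times out" until the batch is 25 users or fewer
$stub = function ($n) { return $n <= 25 ? array('batch' => $n) : false; };
$result = fetch_with_backoff($stub, 100); // succeeds on the third try, batch size 25
```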
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/dealing-with-500-errors-when-retrieving-user-data/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>First day in the life of the 140dev framework</title>
		<link>http://140dev.com/twitter-api-programming-blog/first-day-in-the-life-of-the-140dev-framework/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/first-day-in-the-life-of-the-140dev-framework/#comments</comments>
		<pubDate>Thu, 11 Nov 2010 05:52:13 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[140dev Source Code]]></category>
		<category><![CDATA[Custom Twitter Client]]></category>
		<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Streaming API]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Tweet Display]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=926</guid>
		<description><![CDATA[The first day has gone well. I announced the code on the Twitter dev list, and got 16 visitors to the site. The good thing is that the average pages per visitor was 7, and they spent an average of 11 minutes on the site. So people who get to the code are giving it [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The first day has gone well. I announced the code on the Twitter dev list, and got 16 visitors to the site. The good thing is that the average pages per visitor was 7, and they spent an average of 11 minutes on the site. So people who get to the code are giving it a good amount of attention. </p>
<p>I&#8217;m also using this code as a starting point to teach my son Web programming. He has taken a course in Java in school, and has experience setting up WordPress blogs, but hasn&#8217;t done any PHP or server-based coding. Working through the install process for 140dev with him was very informative. I&#8217;m going to rewrite the <a href="http://140dev.com/free-twitter-api-source-code-library/twitter-database-server/install/">install page</a> for the Twitter Database Server based on his feedback. </p>
<p>The biggest problem is identifying the target audience. Is it people who have never used Telnet or worked at a Unix-style prompt? Is it someone who has coded in PHP for a while, but has never used the Twitter API? One solution is to produce tutorials and programming primers to help bridge this gap. </p>
<p>I also want to make the install process much simpler. That is the obvious blocking point. If I can make the install on the database server module easy, the rest of the framework will be a breeze. </p>
<p>Overall, I&#8217;m happy with the first day&#8217;s results. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/first-day-in-the-life-of-the-140dev-framework/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 140dev Twitter framework is now available</title>
		<link>http://140dev.com/twitter-api-programming-blog/the-140dev-twitter-framework-is-now-available/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/the-140dev-twitter-framework-is-now-available/#comments</comments>
		<pubDate>Wed, 10 Nov 2010 03:34:47 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[140dev Source Code]]></category>
		<category><![CDATA[Custom Twitter Client]]></category>
		<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=922</guid>
		<description><![CDATA[I just opened up access to version 0.10 of my free Twitter source code library. At first it just has modules for a tweet aggregation database and a tweet display plugin. My next module will be a WordPress display plugin. I also have a lot of tutorials planned that can use this source code as [&#8230;]]]></description>
				<content:encoded><![CDATA[<p></p><p>I just opened up access to version 0.10 of my <strong><a href="http://140dev.com/free-twitter-api-source-code-library/" title="Download free Twitter source code">free Twitter source code library</a></strong>. At first it just has modules for a <strong>tweet aggregation database</strong> and a <strong>tweet display</strong> plugin. My next module will be a WordPress display plugin. I also have a lot of tutorials planned that can use this source code as examples. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/the-140dev-twitter-framework-is-now-available/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>WordPress architecture is in place</title>
		<link>http://140dev.com/twitter-api-programming-blog/wordpress-architecture-is-in-place/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/wordpress-architecture-is-in-place/#comments</comments>
		<pubDate>Fri, 15 Oct 2010 15:30:39 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[140dev Source Code]]></category>
		<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Integrating Twitter with Wordpress]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=276</guid>
		<description><![CDATA[I rebuilt the code for the 140dev system using a WP-style architecture. As expected, the messiest part was getting the directory paths right. I think the idea of putting most of the functionality into separate plugin directories will be a big win. There will be a very small core of tweet collection code and general [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I rebuilt the code for the 140dev system using a WP-style architecture. As expected, the messiest part was getting the directory paths right. I think the idea of putting most of the functionality into separate plugin directories will be a big win. There will be a very small core of tweet collection code and general purpose functions. If everything else is a plugin, others can build onto the system. <a href="http://140dev.com/demo/">Here</a> is a very simple demo of the code in use. </p>
<p>The source of this page shows how easy it is to add Twitter functionality to any Web page:<br />
<code><br />
&lt;html&gt;<br />
&lt;head&gt;<br />
&lt;title&gt;Tweet Display from Database Cache&lt;/title&gt;<br />
&lt;meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /&gt;<br />
&lt;link rel="stylesheet"<br />
    href="http://140dev.com/demo/themes/default.css" type="text/css" /&gt;<br />
&lt;script type="text/javascript"<br />
    src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"&gt;&lt;/script&gt;<br />
&lt;script type="text/javascript"<br />
    src="http://140dev.com/demo/plugins/tweet_display/site.js"&gt;&lt;/script&gt;<br />
&lt;/head&gt;<br />
&lt;body&gt;<br />
&lt;?php print file_get_contents('http://140dev.com/demo/plugins/tweet_display'); ?&gt;<br />
&lt;/body&gt;<br />
&lt;/html&gt;<br />
</code></p>
<p>I&#8217;ll write up a tutorial to describe the architecture I used. The key is a decoupled approach. The 140dev server can deliver tweets to any site. That allows a single collection point for interaction with the Twitter API, but any number of display sites for the aggregated tweets.</p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/wordpress-architecture-is-in-place/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Decisions about the tweet aggregation code</title>
		<link>http://140dev.com/twitter-api-programming-blog/decisions-about-the-tweet-aggregation-code/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/decisions-about-the-tweet-aggregation-code/#comments</comments>
		<pubDate>Wed, 13 Oct 2010 23:09:06 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[140dev Source Code]]></category>
		<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Integrating Twitter with Wordpress]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Twitter Developers]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=264</guid>
		<description><![CDATA[I&#8217;ve been thinking about how to prepare the tweet aggregation code used for the dev tweets page. I know that I want to add multiple modules to the system, and make it easy to customize the appearance of the tweet list. This sounds a lot like the WordPress model with plugins and themes, so I&#8217;m [&#8230;]]]></description>
				<content:encoded><![CDATA[<p></p><p>I&#8217;ve been thinking about how to prepare the <strong>tweet aggregation code</strong> used for the <a href="http://140dev.com/twitter-development-team-tweets/">dev tweets page</a>. I know that I want to add multiple modules to the system, and make it easy to customize the appearance of the tweet list. This sounds a lot like the WordPress model with plugins and themes, so I&#8217;m going to adopt a similar code architecture. If I do this right, others will be able to add their own plugins and themes. I also want to make installation as drop dead simple as possible. Finally, I&#8217;m going to call the code base 140Dev. I already use this name for the company and site. Why bother coming up with a new product name, only to have to rename the site and company to match the product later? And so it begins&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/decisions-about-the-tweet-aggregation-code/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New Twitter API tutorial comparing search API and streaming API</title>
		<link>http://140dev.com/twitter-api-programming-blog/new-twitter-api-tutorial-comparing-search-api-and-streaming-api/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/new-twitter-api-tutorial-comparing-search-api-and-streaming-api/#comments</comments>
		<pubDate>Sun, 10 Oct 2010 15:34:39 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Search API]]></category>
		<category><![CDATA[Streaming API]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=244</guid>
		<description><![CDATA[I just finished a tutorial on the two methods of searching for tweets. Whenever this subject comes up on the Twitter developers mailing list, the usual response is that the streaming API is best, but that depends on your goals and programming ability. If you want to search for tweets in the past, or if [&#8230;]]]></description>
				<content:encoded><![CDATA[<p></p><p>I just finished a tutorial on the two methods of <a href="http://140dev.com/twitter-api-programming-tutorials/aggregating-tweets-search-api-vs-streaming-api/" title="searching for tweets with the Twitter API">searching for tweets</a>. Whenever this subject comes up on the Twitter developers mailing list, the usual response is that the streaming API is best, but that depends on your goals and programming ability. If you want to search for tweets in the past, or if you are not a very experienced programmer, the search API is the right choice. On the other hand, the streaming API will deliver tweets in real-time, which is very impressive for an app. I lay out all the pros and cons <a href="http://140dev.com/twitter-api-programming-tutorials/aggregating-tweets-search-api-vs-streaming-api/" title="searching for tweets with the Twitter API">here</a>. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/new-twitter-api-tutorial-comparing-search-api-and-streaming-api/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
