<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>140dev &#187; Tweet Aggregation</title>
	<atom:link href="http://140dev.com/twitter-api-programming-blog/category/tweet-aggregation/feed/" rel="self" type="application/rss+xml" />
	<link>http://140dev.com</link>
	<description>Twitter API Programming Tips, Tutorials, Source Code Libraries and Consulting</description>
	<lastBuildDate>Wed, 31 Jul 2019 10:03:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6</generator>
		<item>
		<title>Editing tweets will be impossible</title>
		<link>http://140dev.com/twitter-api-programming-blog/editing-tweets-will-be-impossible/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/editing-tweets-will-be-impossible/#comments</comments>
		<pubDate>Tue, 17 Dec 2013 15:04:25 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Twitter API]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=2631</guid>
		<description><![CDATA[The idea is floating around that Twitter will allow users to edit their tweets, possibly for a limited period of time. I agree with the desire for this feature, but it won&#8217;t work. Once a tweet is published, copies of it are delivered in real-time to thousands of data collection scripts capturing tweets with the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The <a href="http://gigaom.com/2013/12/16/why-adding-a-feature-to-edit-tweets-after-the-fact-might-not-be-such-a-good-thing/">idea</a> is floating around that Twitter will allow users to edit their tweets, possibly for a limited period of time. I agree with the desire for this feature, but it won&#8217;t work. Once a tweet is published, copies of it are delivered in real-time to thousands of data collection scripts capturing tweets with the streaming API. These copies are then put into thousands of databases, in some cases with the goal of permanent storage. It is impossible for Twitter to force changes to these stored copies of tweets once they are delivered by the API. </p>
<p>This problem is apparent right now when tweets are deleted. Old copies remain all over the Web. In theory, the Twitter Terms of Service requires developers to remove deleted tweets from their own tweet storage. The streaming API even sends out a signal to alert developers of deleted tweets, so they can be removed. In practice, this issue is ignored, and deleted tweets remain permanently available at sites like Topsy.com. </p>
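<p>As a sketch of what honoring that delete signal involves: the streaming API delivers it as its own JSON message, and a consumer only needs the status id from the notice. SQLite and a made-up tweets table stand in here for a real collection database:</p>

```python
import json
import sqlite3

# Stand-in tweet store; a real aggregator would use its MySQL tweets table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tweets (tweet_id INTEGER PRIMARY KEY, tweet_text TEXT)")
db.execute("INSERT INTO tweets VALUES (123, 'soon to be deleted')")

def handle_delete_notice(raw_line):
    """Process one line from the stream, if it turns out to be a delete notice."""
    msg = json.loads(raw_line)
    if "delete" in msg:
        # The notice carries only the ids of the deleted status and its author.
        status_id = msg["delete"]["status"]["id"]
        db.execute("DELETE FROM tweets WHERE tweet_id = ?", (status_id,))

# A delete notice reduced to its documented shape.
handle_delete_notice('{"delete": {"status": {"id": 123, "user_id": 456}}}')
remaining = db.execute("SELECT COUNT(*) FROM tweets").fetchone()[0]
```

<p>The mechanism is trivial; as the paragraph above says, the hard part is that almost nobody runs it.</p>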
<p>Keeping track of changes due to editing tweets will be even more difficult. The edit may be allowed on Twitter.com, but then a mechanism must be added to the streaming API to notify all consumers of this stream that an edit has been made. Even if API users wanted to update their own databases to reflect the edits, pushing through these changes to an entire tweet collection system would require major rewrites. The contents of each tweet, such as hashtags, @mentions, and URLs, are spread throughout complex data structures with the expectation that they are written once and never change. I see it as so unlikely that these rewrites will be made that, as a general rule, it just won&#8217;t happen. </p>
<p>Twitter can try to allow edits to tweets, but the results will just be a mess, with multiple copies of each tweet popping up from different sources. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/editing-tweets-will-be-impossible/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Initial impressions of Rackspace&#8217;s cloud servers</title>
		<link>http://140dev.com/twitter-api-programming-blog/initial-impressions-of-rackspaces-cloud-servers/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/initial-impressions-of-rackspaces-cloud-servers/#comments</comments>
		<pubDate>Mon, 20 Feb 2012 15:06:50 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Server configuration]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1318</guid>
		<description><![CDATA[I have a bad habit acquired from my years as a Dot Com CTO. When the time comes to pick a server for a new project, I always overbuy. I&#8217;d rather pay a hundred dollars more per month then have a server that can&#8217;t take the load. One of the driving forces behind this decision [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I have a bad habit acquired from my years as a Dot Com CTO. When the time comes to pick a server for a new project, I always overbuy. I&#8217;d rather pay a hundred dollars more per month than have a server that can&#8217;t take the load. One of the driving forces behind this decision is the time it takes to migrate to a more powerful server, if I discover that it is needed. So I have a collection of dedicated servers that I lease from various webhosts that collectively waste about $500 a month. That&#8217;s a small percentage when considered as a cost of doing business across all my clients, but it is still wasteful. </p>
<p>I&#8217;ve thought about moving to Amazon&#8217;s AWS system, but every time I look at the docs I get turned off. I&#8217;m an application builder, not a professional sysadmin. I have no problem managing a Linux-based server, but when it comes to configuring and optimizing Apache and MySQL, I turn to professionals. The AWS docs make it clear that this system is built by people who LOVE tweaking servers. They also seem to love really detailed command-driven operations spanning several lines, with very complex parameter names and very odd capitalization. I couldn&#8217;t care less about that. If I could just say, &#8220;Create a new server instance, and make it this big,&#8221; I&#8217;d be thrilled. </p>
<p>That is what I have now found with <a href="http://rackspace.com">Rackspace.com</a>. Their cloud servers let me clone multiple server instances, and upsize or downsize them with a menu. I&#8217;m testing this for a client who wants to collect a high-volume flow of tweets. His search terms for the Twitter streaming API retrieve about 60,000 tweets an hour. If I had to lease a dedicated server for this, I would have spent at least $150 to $200 a month to be sure I could handle the load. Instead I had my sysadmin create the cheapest server instance at Rackspace, at $11 per month. The entire pricing structure is <a href="http://www.rackspace.com/cloud/cloud_hosting_products/servers/pricing/">here</a>. </p>
<p>Once the basic server configuration with all of my code was set up, I made a server image with Rackspace&#8217;s control panel that could be used to create a new server instance in minutes without having to pay my sysadmin again. I ran the tweet collection for a few hours, and found that this server size was too small. The server load went up above 3.0, and queries were completely stalled. All I had to do was ask for the next size server ($22 a month) using the menu, and 10 minutes later I was up again with the new configuration. This ran much better for inserting tweets, but queries were still too slow. So after a few hours of watching the server, I decided to bump it up again to the next size at $44. This configuration looks like it will work. Server load is about 0.5, and the queries we need to run complete in a few seconds. </p>
<p>Overall this has been a great experience. I love the idea of being able to size up gradually until I see the server handling a real world load, and then downsizing if that load drops. I&#8217;m going to start moving some of my existing sites across to Rackspace next with the hope of saving at least $300 to $400 a month. Then I&#8217;ll be experimenting with clusters of different sized servers to handle more complex site requirements. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/initial-impressions-of-rackspaces-cloud-servers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Dealing with tweet bursts</title>
		<link>http://140dev.com/twitter-api-programming-blog/dealing-with-tweet-bursts/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/dealing-with-tweet-bursts/#comments</comments>
		<pubDate>Fri, 27 Jan 2012 14:11:25 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Streaming API]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Twitter Politics]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1309</guid>
		<description><![CDATA[This week we got crushed by the State of the Union speech. We normally get about 30,000 to 50,000 tweets per day in the 2012twit.com database, and our largest server can handle that without any showing any appreciable load. During the SOTU tweet volume exploded. We got 500,000 tweets in about 4 hours. I was [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>This week we got crushed by the State of the Union speech. We normally get about 30,000 to 50,000 tweets per day in the <a href="http://2012twit.com">2012twit.com</a> database, and our largest server can handle that without showing any appreciable load. During the SOTU, tweet volume exploded. We got 500,000 tweets in about 4 hours. I was able to keep the server going by shutting down some processes that weren&#8217;t needed, but it was a challenge. This issue of bursts of tweets seems to be getting worse. In the case of Twitter and politics, people are getting used to talking back to the TV through Twitter. With 9 months left until the election, I needed to find some solutions. </p>
<p>I spent a lot of time over the last 2 days trying to find the problem, and discovered that it was not parsing the tweets that was killing us, but inserting the raw tweet data into the json_cache table. I use a <a href="http://140dev.com/twitter-api-programming-tutorials/twitter-api-database-cache/">two-phase processing system</a>, with the raw tweet delivered by the streaming API getting inserted as fast as possible into a cache table, and then a separate parsing phase breaking it out into a normalized schema. You can get the <a href="http://140dev.com/free-twitter-api-source-code-library/">basic code</a> for this as open source. </p>
<p>It looks like Twitter has been steadily increasing the size of the basic payload that it sends for each tweet in the streaming API. That makes sense, since people are demanding more data. Yesterday they announced some insane scheme where every tweet will include data about countries that don&#8217;t want tweets with specific words to be displayed. This will only get worse.</p>
<p>I realized that I have never actually needed to go back and reparse the contents of json_cache, and I had long ago added purging code to my 2012twit system to delete anything in that table older than 7 days. I tried clearing out the json_cache table on my server and modifying the code to delete each tweet as soon as it was parsed. This cut the table from an average of several hundred thousand rows to about 50. The load on that server dropped right away, and during the GOP debate last night it stayed very low. </p>
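<p>The whole change amounts to a purge inside the parsing phase. A rough sketch of the two-phase flow, with SQLite and a simplified schema standing in for the real MySQL tables:</p>

```python
import json
import sqlite3

# SQLite stands in for the framework's MySQL tables in this sketch.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE json_cache (cache_id INTEGER PRIMARY KEY AUTOINCREMENT, raw_tweet TEXT);
CREATE TABLE tweets (tweet_id INTEGER PRIMARY KEY, tweet_text TEXT, user_id INTEGER);
""")

def collect(raw_tweet_json):
    """Phase 1: store the raw payload as fast as possible, with no parsing."""
    db.execute("INSERT INTO json_cache (raw_tweet) VALUES (?)", (raw_tweet_json,))

def parse_cache():
    """Phase 2: break cached payloads into the normalized schema, then purge them."""
    for cache_id, raw in db.execute("SELECT cache_id, raw_tweet FROM json_cache").fetchall():
        t = json.loads(raw)
        db.execute("INSERT OR IGNORE INTO tweets VALUES (?, ?, ?)",
                   (t["id"], t["text"], t["user"]["id"]))
        # Deleting each row right after it is parsed keeps the cache table tiny.
        db.execute("DELETE FROM json_cache WHERE cache_id = ?", (cache_id,))

collect('{"id": 1, "text": "hello", "user": {"id": 99}}')
parse_cache()
```

<p>The design choice is simple: the cache buys burst tolerance, and since its contents are never reread after parsing, holding onto them is pure cost.</p>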
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/dealing-with-tweet-bursts/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Screening a tweet stream for quality control</title>
		<link>http://140dev.com/twitter-api-programming-blog/screening-a-tweet-stream-for-quality-control/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/screening-a-tweet-stream-for-quality-control/#comments</comments>
		<pubDate>Fri, 20 Jan 2012 12:05:37 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Quality Control]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1252</guid>
		<description><![CDATA[We&#8217;ve been working on a college football recruiting site called DirectSnap.com for a couple of months, and the most interesting aspect of the technology behind this site is the quality control algorithm I had to develop. Most of the tweet streams we work on, such as 2012twit.com, are based on collecting tweets for either a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>We&#8217;ve been working on a college football recruiting site called <a href="http://directsnap.com">DirectSnap.com</a> for a couple of months, and the most interesting aspect of the technology behind this site is the quality control algorithm I had to develop. Most of the tweet streams we work on, such as <a href="http://2012twit.com">2012twit.com</a>, are based on collecting tweets for either a set of screen names or real names that are distinctive, such as politicians. When you find a match for Newt Gingrich or Mitt Romney, you can be fairly sure you have the right person. </p>
<p>In the case of DirectSnap, the tweet collection is based on the first and last name of 250 high school football players. Right away I knew I would have a problem when I found Michael Moore in the list of potential recruits. Randy Johnson was going to be even trickier, since the baseball player with this name was likely to be tweeted about by the same sports fans as the football recruit we were tracking. Identifying college teams is also tricky. For example, the word &#8216;Florida&#8217; in a tweet with a player&#8217;s name could refer to the University of Florida or Florida State University. </p>
<p><a href="http://directsnap.com"><img class="alignleft" src="http://140dev.com/blog_images/quality.png" /></a></p>
<p>The solution I came up with was creating a list of exclusion keywords for each player and team. If a tweet contains &#8216;Michael Moore&#8217;, but it also has words like fat, hypocrite, film, or liberal, it probably is not about the football player. A tweet with a player&#8217;s name is assigned to the University of Florida if it contains &#8216;Florida&#8217;, but not &#8216;Florida State&#8217;. This first level of screening did a good job of filtering out false positives, such as the wrong Michael Moore,  but we wanted to curate the tweets automatically to select the highest quality. The goal was to end up with a tweet stream that was much more interesting than what you could get with Twitter&#8217;s search. </p>
<p>To do this we added a set of high quality words to the quality screen, like the team position or hometown name of each player. We found that tweets with this extra information were generally from users who were serious about reporting details, not just random fans chanting a player&#8217;s name repeatedly. We used these quality words in two ways. Each time a quality word was found in a tweet, 1 point was added to a quality score for the tweet and for the user who sent the tweet. This allows us to select tweets for display that have a minimum quality score, and that are from a user with a minimum quality score. </p>
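<p>The scoring logic itself fits in a few lines. The keyword sets below are invented for illustration; the real lists were curated per player and per team:</p>

```python
# Hypothetical keyword lists for one player (the real ones were hand-curated).
EXCLUSION_WORDS = {"fat", "hypocrite", "film", "liberal"}
QUALITY_WORDS = {"linebacker", "gainesville"}  # e.g. team position, hometown

user_scores = {}  # running quality score per screen name

def score_tweet(text):
    """Return None to reject the tweet, otherwise its quality score."""
    words = set(text.lower().split())
    if words & EXCLUSION_WORDS:
        return None  # probably the wrong Michael Moore
    return len(words & QUALITY_WORDS)  # 1 point per quality word found

def screen(screen_name, text):
    """Keep a tweet only if it passes the exclusion filter; score tweet and sender."""
    score = score_tweet(text)
    if score is None:
        return False
    user_scores[screen_name] = user_scores.get(screen_name, 0) + score
    return True
```

<p>Display code can then require a minimum score on both the tweet and its sender, exactly as described above.</p>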
<p>To see how well this system works, try comparing the <a href="http://directsnap.com/recruits/2012/michael-moore/">DirectSnap page for Michael Moore</a> and <a href="https://twitter.com/#!/search/michael%20moore">Twitter search</a> for the same words. My experience is that users find false positives very upsetting. They think computers actually understand what they are searching for, and when they see a false positive, the reaction is always that the website is &#8220;stupid&#8221;. My favorite example of this is when people complain about Google Alerts for their own name returning blog posts or tweets they have written. The reaction is usually &#8220;How stupid can Google be? Doesn&#8217;t it know that I don&#8217;t want to be alerted about my own writing?&#8221; On the other hand, they never seem to be upset about missing results. So ending up with a subset of all possible matches, but with no visible false positives, is always the best goal. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/screening-a-tweet-stream-for-quality-control/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Collecting #OWS tweets with the 140dev framework</title>
		<link>http://140dev.com/twitter-api-programming-blog/collecting-ows-tweets-with-the-140dev-framework/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/collecting-ows-tweets-with-the-140dev-framework/#comments</comments>
		<pubDate>Mon, 24 Oct 2011 19:25:56 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[140dev Source Code]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Twitter Politics]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1223</guid>
		<description><![CDATA[Our work with Twitter and politics has now moved beyond the 2012 election. We just set up a tweet collection database to track the Occupy Wall Street movement. It uses the 140dev framework to collect all tweets containing #ows, #occupy, and #occupywallstreet. This will be used to document our tools and methodology to automate a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Our work with Twitter and politics has now moved beyond the 2012 election. We just set up a <a href="http://ows.140elect.com/tweet_stats.php">tweet collection database</a> to track the Occupy Wall Street movement. It uses the 140dev framework to collect all tweets containing #ows, #occupy, and #occupywallstreet. </p>
<p><img src="http://140dev.com/blog_images/ows.png"> </p>
<p>This will be used to document our tools and methodology to automate a Twitter engagement campaign. In this case we&#8217;ll be building up a new political account we started called <a href="http://twitter.com/4more">@4more</a>. You can follow this work on our <a href="http://140elect.com/2012-analysis/tweet-collection-database-for-4more-engagement/">140elect.com</a> blog. </p>
<p>Update, 1/21/12: The tweet flow for #OWS has dwindled down to a point where it isn&#8217;t worth tracking, so we&#8217;ve turned this system off for now. Hopefully the Occupy movement will become active again in the spring. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/collecting-ows-tweets-with-the-140dev-framework/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SocStudies.com: A new approach to college study with Twitter</title>
		<link>http://140dev.com/twitter-api-programming-blog/socstudies-com-a-new-approach-to-college-study-with-twitter/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/socstudies-com-a-new-approach-to-college-study-with-twitter/#comments</comments>
		<pubDate>Thu, 01 Sep 2011 03:32:48 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Education]]></category>
		<category><![CDATA[The next wave]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=1202</guid>
		<description><![CDATA[The thing I find so great about doing Twitter consulting is the wide range of vertical applications that are completely open for new development. One of them is higher level education. We&#8217;ve just launched a new site called Social Studies that applies the techniques of tweet aggregation to help students study collectively. This work was [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The thing I find so great about doing Twitter consulting is the wide range of vertical applications that are completely open for new development. One of them is higher education. We&#8217;ve just launched a new site called <a href="http://socstudies.com">Social Studies</a> that applies the techniques of tweet aggregation to help students study collectively. </p>
<p><a href="http://socstudies.com"><img src="http://140dev.com/blog_images/socialstudies.png"></a></p>
<p>This work was done under contract with a Colgate University freshman, Peter McGrath, who has a vision for the next-generation response to Facebook. Peter grew up with social networks, and now wants to use his experience to add a social layer to college study. I think this may be an example of Web 3.0 in its earliest stages. </p>
<p>This first version of <a href="http://socstudies.com">Social Studies</a> is a tweet database that lets you view tweets based on colleges and individual classes, but the next phase of development will add a point system and game dynamics to make the whole experience a lot more fun. </p>
<p>Do you have a vision of the next great application for Twitter? We can help you bring it to life. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/socstudies-com-a-new-approach-to-college-study-with-twitter/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Twitter consultant tip: Creating a sales lead spreadsheet</title>
		<link>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-sales-lead-spreadsheet/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-sales-lead-spreadsheet/#comments</comments>
		<pubDate>Mon, 15 Nov 2010 13:52:34 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[Consulting Tips]]></category>
		<category><![CDATA[Data Mining Tweets]]></category>
		<category><![CDATA[Lead Generation]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Twitter Marketing]]></category>
		<category><![CDATA[User Ranking]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=995</guid>
		<description><![CDATA[Part of the sales process for Twitter consulting is convincing a new client that Twitter is more than just another way to broadcast their message. You have to show them that what appears to be a random stream of tweets is really a collection of highly qualified sales prospects. By aggregating Twitter users as well [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Part of the sales process for <strong>Twitter consulting</strong> is convincing a new client that Twitter is more than just another way to broadcast their message. You have to show them that what appears to be a random stream of tweets is really a collection of highly qualified <strong>sales prospects</strong>. By aggregating Twitter users as well as their Tweets, you can extract a great set of <strong>sales leads</strong> along with their contact info. One way to quickly demonstrate the value of tweet aggregation is to deliver an Excel spreadsheet of sales prospects that meet the client&#8217;s needs. </p>
<p>When you <strong>aggregate tweets</strong> from the <strong>Twitter streaming API</strong>, it also returns the complete account profile for each user. You can data mine this collection of users to extract highly targeted lists of users, along with their geographical location and home page URL. </p>
<p>The free <a href="http://140dev.com/free-twitter-api-source-code-library/">140dev Twitter framework</a> is an example of the code you will need to do the tweet aggregation. The schema for the <a href="http://140dev.com/free-twitter-api-source-code-library/twitter-database-server/mysql-database-schema/">MySQL database</a> it creates shows that it has a table for all the aggregated tweets, linked to a table of the tweeting users. Since all of this data is collected for a specific set of keywords, you can then extract personal details on the users who tweet these keywords the most with a simple SQL statement:</p>
<p><code>SELECT count(*) AS cnt, users.screen_name, users.name, users.location, users.url<br />
FROM tweets, users<br />
WHERE tweets.user_id = users.user_id<br />
AND users.location != ''<br />
AND users.url != ''<br />
GROUP BY tweets.user_id<br />
ORDER BY cnt DESC<br />
LIMIT 1000</code></p>
<p>The 140dev framework&#8217;s example database collects tweets for the keyword &#8220;recipe&#8221;, so this query gives us the most active tweeters in the food world. Here are the results in phpMyAdmin:</p>
<p><img src="http://140dev.com/tutorial_images/sales_leads.png"></p>
<p>You can then export the results from phpMyAdmin to an <a href="http://140dev.com/download/sales_leads.xls">Excel spreadsheet</a>, and email it to your client. This gives them solid data in a familiar form. Twitter doesn&#8217;t deliver email addresses, and doesn&#8217;t even collect phone numbers, but you do get each user&#8217;s home page URL. This can be used to gather other contact info, a task that is easily farmed out to people on freelance sites like <a href="https://www.mturk.com/mturk/welcome">Mechanical Turk</a>. </p>
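<p>The phpMyAdmin export step can also be scripted. Here is a rough sketch that runs the same query and writes a CSV file that Excel opens directly; SQLite, an invented seed of users and tweets, and an in-memory output stand in for the real MySQL database and file:</p>

```python
import csv
import io
import sqlite3

# Stand-in for the 140dev users/tweets tables, with hypothetical seed data.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users (user_id INTEGER PRIMARY KEY, screen_name TEXT,
                    name TEXT, location TEXT, url TEXT);
CREATE TABLE tweets (tweet_id INTEGER PRIMARY KEY, user_id INTEGER);
INSERT INTO users VALUES (1, 'chef_one', 'Chef One', 'Boston', 'http://example.com/chef1');
INSERT INTO users VALUES (2, 'no_contact', 'No Contact', '', '');
INSERT INTO tweets VALUES (10, 1), (11, 1), (12, 1), (13, 2);
""")

# The query from the post; users without location and URL are filtered out.
rows = db.execute("""
    SELECT COUNT(*) AS cnt, users.screen_name, users.name, users.location, users.url
    FROM tweets JOIN users ON tweets.user_id = users.user_id
    WHERE users.location != '' AND users.url != ''
    GROUP BY tweets.user_id
    ORDER BY cnt DESC
    LIMIT 1000
""").fetchall()

out = io.StringIO()  # a real script would use open('sales_leads.csv', 'w', newline='')
writer = csv.writer(out)
writer.writerow(["tweet_count", "screen_name", "name", "location", "url"])
writer.writerows(rows)
```

<p>The result is the same spreadsheet of ranked prospects, without a manual export step.</p>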
<p>So the next time you want to convince a client that Twitter is not just a bunch of kids talking to each other, you can just create a tweet aggregation database for the client&#8217;s industry keywords, let it collect data for a few days, and pull out a list of targeted users. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/twitter-consultant-tip-sales-lead-spreadsheet/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>First day in the life of the 140dev framework</title>
		<link>http://140dev.com/twitter-api-programming-blog/first-day-in-the-life-of-the-140dev-framework/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/first-day-in-the-life-of-the-140dev-framework/#comments</comments>
		<pubDate>Thu, 11 Nov 2010 05:52:13 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[140dev Source Code]]></category>
		<category><![CDATA[Custom Twitter Client]]></category>
		<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Streaming API]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Tweet Display]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=926</guid>
		<description><![CDATA[The first day has gone well. I announced the code on the Twitter dev list, and got 16 visitors to the site. The good thing is that the average pages per visitor was 7, and they spent an average of 11 minutes on the site. So people who get to the code are giving it [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The first day has gone well. I announced the code on the Twitter dev list, and got 16 visitors to the site. The good thing is that the average pages per visitor was 7, and they spent an average of 11 minutes on the site. So people who get to the code are giving it a good amount of attention. </p>
<p>I&#8217;m also using this code as a starting point to teach my son Web programming. He has taken a course in Java in school, and has experience setting up WordPress blogs, but hasn&#8217;t done any PHP or server-based coding. Working through the install process for 140dev with him was very informative. I&#8217;m going to rewrite the <a href="http://140dev.com/free-twitter-api-source-code-library/twitter-database-server/install/">install page</a> for the Twitter Database Server based on his feedback. </p>
<p>The biggest problem is identifying the target audience. Is it people who have never used Telnet or worked at a Unix-style prompt? Is it someone who has coded in PHP for a while, but has never used the Twitter API? One solution is to produce tutorials and programming primers to help bridge this gap. </p>
<p>I also want to make the install process much simpler. That is the obvious blocking point. If I can make the install on the database server module easy, the rest of the framework will be a breeze. </p>
<p>Overall, I&#8217;m happy with the first day&#8217;s results. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/first-day-in-the-life-of-the-140dev-framework/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The 140dev Twitter framework is now available</title>
		<link>http://140dev.com/twitter-api-programming-blog/the-140dev-twitter-framework-is-now-available/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/the-140dev-twitter-framework-is-now-available/#comments</comments>
		<pubDate>Wed, 10 Nov 2010 03:34:47 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[140dev Source Code]]></category>
		<category><![CDATA[Custom Twitter Client]]></category>
		<category><![CDATA[Database Cache]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=922</guid>
		<description><![CDATA[I just opened up access to version 0.10 of my free Twitter source code library. At first it just has modules for a tweet aggregation database and a tweet display plugin. My next module will be a WordPress display plugin. I also have a lot of tutorials planned that can use this source code as [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I just opened up access to version 0.10 of my <strong><a href="http://140dev.com/free-twitter-api-source-code-library/" title="Download free Twitter source code">free Twitter source code library</a></strong>. At first it just has modules for a <strong>tweet aggregation database</strong> and a <strong>tweet display</strong> plugin. My next module will be a WordPress display plugin. I also have a lot of tutorials planned that can use this source code as examples. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/the-140dev-twitter-framework-is-now-available/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>140dev open source progress report</title>
		<link>http://140dev.com/twitter-api-programming-blog/open-source-progress-report/</link>
		<comments>http://140dev.com/twitter-api-programming-blog/open-source-progress-report/#comments</comments>
		<pubDate>Fri, 22 Oct 2010 19:31:52 +0000</pubDate>
		<dc:creator>Adam Green</dc:creator>
				<category><![CDATA[140dev Source Code]]></category>
		<category><![CDATA[Custom Twitter Client]]></category>
		<category><![CDATA[Streaming API]]></category>
		<category><![CDATA[Tweet Aggregation]]></category>
		<category><![CDATA[Tweet Display]]></category>

		<guid isPermaLink="false">http://140dev.com/?p=319</guid>
		<description><![CDATA[I&#8217;ve been cleaning up the code the last few days. I finally got around to switching to tweet entities. They&#8217;ve been around for a while, but every time people complained about a problem Twitter HQ said that developers should code for their occasional disappearance. I don&#8217;t have time to code for a major data component [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I&#8217;ve been cleaning up the code the last few days. I finally got around to switching to tweet entities. They&#8217;ve been around for a while, but every time people complained about a problem, Twitter HQ said that developers should code for their occasional disappearance. I don&#8217;t have time to code for a major data component being there part of the time. The complaints have died down, so I guess it is stable now. I don&#8217;t disagree with the Twitter model of getting improvements out early, but I stay away from the bleeding edge. </p>
<p>One of the nice things about entities is that they include the expanded versions of shortened URLs. That saves a ton of processing time for each user of the API, and even more bandwidth for the target of popular URLs. Unfortunately, they only contain the original value of URLs that are shortened by Twitter itself through its t.co domain. </p>
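<p>As a sketch of that payoff, pulling the expanded links out of a tweet becomes a one-liner once entities are trusted. The payload below is an invented minimal shape, trimmed to the parts that matter here:</p>

```python
# A tweet payload trimmed to an invented minimal shape; the entities block is
# where the API reports the hashtags, @mentions, and URLs it has already parsed.
tweet = {
    "text": "New recipe posted http://t.co/abc123",
    "entities": {
        "urls": [
            {"url": "http://t.co/abc123",
             "expanded_url": "http://example.com/recipes/chili"},
        ],
        "hashtags": [],
        "user_mentions": [],
    },
}

def expanded_urls(tweet):
    """Prefer the expanded form when the API supplies one, else keep the short URL."""
    return [u.get("expanded_url") or u["url"] for u in tweet["entities"]["urls"]]
```

<p>Without entities, each consumer would have to follow every shortened URL itself, which is exactly the processing and bandwidth cost described above.</p>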
<p>I&#8217;m also making sure that every piece of text is pulled out into config files. I want this code to work for any language. </p>
<p>My son is coming back from school for the weekend, and I want to review it with him. He&#8217;s my target user. He knows a little PHP and JavaScript. If I can make this system easily installable for him, then it will be ready to make public. </p>
]]></content:encoded>
			<wfw:commentRss>http://140dev.com/twitter-api-programming-blog/open-source-progress-report/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
