I Remember

I struggle when I reflect on September 11th.

In many ways I’ve moved on, taking with me everything that I have learned. A lot about the world. A lot about myself.

I struggle when I reflect because I often feel like I don’t need to. I tell myself that I’ve remembered enough.

Somewhere in my mind I know that the true purpose of reflection is not to remember. Instead the true purpose is to learn. For that reason, I’m not going to move on as quickly today. Instead I’m going to write it down as I reflect. To share what I remember and what has led to many lessons learned.

I remember a daze that lasted for a long time. It started the moment I turned on the television and saw one tower ablaze. I’m not sure how long it lasted, but it may have been weeks. It didn’t seem that things could be allowed to go back to normal.

I remember the silent sky. The eeriness of a world in which there is no air travel.

I remember community. An instant bond with people around the country. Especially that morning, as everybody was still trying to figure out what had just happened, business went on while people shared dazed observations.

I remember calling my friend Alan in New York to make sure he was okay. I hadn’t spoken to him in almost a year, but I felt the need to try. I remember being amazed that already, just several hours after the attack, he was reassuring me that New York was strong enough to handle this.

I remember Peter Jennings. Until then I had no attachment to television news. Jennings became my news anchor. He spoke to me during those days and he did a damn good job.

I remember anger. Anger at the hijackers. Anger at the world for failing one another. Anger at reactions against the wrong people.

I remember a strange fear. Fear that it happened, not that it would happen again. A fear of what would come next because it had happened.

I remember Ryan Adams and New York, New York. I remember seeing the video for the first time, sitting in my apartment with my friend Chuck. I think I remember both of us being speechless.

It’s strange. The things that you remember, the things you don’t. The new things that you will remember the next time you reflect.

However strange, it’s probably good to do it every once in a while. To take those reflections and those memories and to learn from them. To never stop remembering, but to especially never stop learning.

‘Twitter For Newsrooms’ – But Why?

Twitter launched a site today, titled “Twitter For Newsrooms”, in which they provide some information on which Twitter tools or clients are available for journalists as well as how to use them. Throughout the handful of pages, the assumption is made that a newsroom needs Twitter in order to find sources and readers.

we know Twitter is a tool all journalists can use to find sources faster, tell stories better, and build a bigger audience for their work.

It is a marketing document, so that’s a valid assumption for Twitter to want to make, but a critical argument is missing.

Why exactly does a newsroom need Twitter?

What does an investment in Twitter provide that makes it worth a newsroom’s time and money? How is this investment better than a team creating the tool that a newsroom needs, one that can directly target their subscriber base?

At the very least, especially because they know it is a useful tool, Twitter should provide some kind of statistical data. It doesn’t have to be too in-depth, but something to show that sources and readers exist with the potential to have a major impact.

  • Of 200 million users, how many are potential sources? (Tweet # times/day)
  • Of 200 million users, how many are potential readers? (Click # links/day)

While print circulation appears to be dying, and for good reason, millions of newspapers and magazines are still sold every day. Without Twitter, millions of people are still reading articles in print and online.

Twitter is hot right now, and I’m an active user, but news organizations should know they still have a chance to be creative on their own, outside of a silo.

“What I Read” 10 Years From Now

I’m a big fan of the Atlantic’s “What I Read” series that pops up every once in a while. I always learn something cool about popular pundits and writers, and I usually find something else to add to my own list.

In 10 years, I do wonder if it will be a bit like a high school yearbook for some of those who have been interviewed.


Experimenting With Feedhose – Part II

My experiment with Dave Winer’s Feedhose protocol started off a week ago with a simple river that auto-updated with the latest headlines from the NY Times. As I said at the time, I enjoy pushing out that first draft to help grasp what else I want to do with it.

So far, it’s been really useful. I leave a browser window open all day pulling headlines from the NY Times as they are published.

It only took a few hours before I started wishing for more.

My next step was to branch off a bit. I was able to hook into The Guardian through their pretty fabulous open platform API. This wasn’t as easy as hooking into Dave’s Feedhose, but it was pretty close. Long polling isn’t an option and I have to keep track of my own cursor. The Guardian’s API keeps track of things by page number and items per page instead of item number, so some duplicate content has to be parsed out. The API is snappy though, which is very helpful. Right now I’ve been making a request every two minutes for updated articles and I’ll probably up that to every minute pretty soon.
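The poll-and-deduplicate loop boils down to something like the sketch below (Python here purely for illustration; the endpoint shape, field names, and `YOUR-API-KEY` placeholder are assumptions about the Open Platform, not a transcript of my actual code):

```python
import json
import time
import urllib.request

GUARDIAN_SEARCH = "https://content.guardianapis.com/search"  # Open Platform search endpoint (assumed)
API_KEY = "YOUR-API-KEY"  # placeholder

def fetch_latest(page_size=10):
    """Request the newest articles; the API pages by page number
    and items per page rather than by item number."""
    url = (f"{GUARDIAN_SEARCH}?order-by=newest&page-size={page_size}"
           f"&api-key={API_KEY}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["response"]["results"]

def new_items(results, seen):
    """Drop articles we've already displayed; because the cursor is
    page-based, consecutive polls overlap and duplicates must be parsed out."""
    fresh = [r for r in results if r["id"] not in seen]
    seen.update(r["id"] for r in fresh)
    return fresh

def poll(interval=120):
    """Every couple of minutes, print whatever is genuinely new."""
    seen = set()
    while True:
        for item in reversed(new_items(fetch_latest(), seen)):
            print(item["webTitle"])
        time.sleep(interval)
```

The `seen` set is the whole trick: the request always returns the newest page, and the set filters it down to just the items the river hasn’t shown yet.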

Two sources still wasn’t enough, so when I saw Dave mention his AFP feed (coming soon here) in a comment on his post covering Erik Kidd’s Feedhose client, I decided to plug in the BBC to see what would happen. Sure enough, it had already been hooked up and has been working smoothly all day.

Of course, once I had three separate pages up tracking real time news from these great sources, the next logical “what if” became combining those into one easy format.

So here it is, with a brand new domain, Feed River Wire.

Note: The one caveat with The Guardian is their news traffic drops drastically during the late night hours, London time. NY Times seems to have a decent stream throughout the night. We’ll see about the BBC tonight.

A Feed River Wire

A few days ago, Dave Winer published a firehose for feeds that is currently hooked into a near real time feed of stories from the NY Times. In attempting to figure out what to do with a service like this, I find pushing out a first draft always seems to help. And since the explanation of the first draft is longer than a comment, here’s a blog post. 🙂

The Feed River Wire can be broken down as:

  1. Reads feeds. In this case, the NY Times feeds with support from Dave’s long-poll RSS server.
  2. Displays feeds as a river. In this context, to me, a river means: if it isn’t new, then it isn’t news. Dive in and read. No worries.
  3. Hooks the river to the wire. As new items are pulled in from the long-poll server, they are displayed on the front page almost immediately.

The river flows by.

Now, for the technical details.

Every hour, a script runs. This script is allowed to run anywhere from 55-58 minutes before it dies.

This script looks at the last known seed (cursor) and makes a request to Dave’s server for any items that have come in since the last request. It tells the server to hold the request open for 30 seconds if no data is immediately available.

If data becomes available during that time, it is served up immediately.

If data does not become available during that 30 seconds, the script pauses for 60 seconds and then runs again. This pause time fluctuates depending on the number of empty requests. We don’t want to pound a server that isn’t giving up data, that won’t help.

If for some reason we go around 20 minutes without an update, the script stops and waits for the next hourly start trigger. I figure if we’ve gone 20 minutes without news from the NY Times, it’s either a slow news day or we can take a break during the middle of the night until the next hourly script is run.
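Put together, one hourly cycle of the script looks something like this sketch (Python for illustration; `fetch` stands in for the long-poll request and `store` for writing to the river, both assumptions rather than the actual code):

```python
import time

def run_once(fetch, store, cursor, now=time.time, sleep=time.sleep,
             max_runtime=55 * 60, idle_limit=20 * 60):
    """One hourly cycle: long-poll for items, back off when quiet,
    stop after 20 idle minutes or roughly 55 minutes of runtime."""
    start = now()
    last_update = start
    empty_polls = 0
    while now() - start < max_runtime:
        # ask the server for anything since the last known seed (cursor),
        # holding the request open up to 30 seconds if nothing is ready
        items, cursor = fetch(cursor, timeout=30)
        if items:
            store(items)          # new items flow onto the river
            last_update = now()
            empty_polls = 0
            continue
        if now() - last_update > idle_limit:
            break                 # slow news; wait for the next hourly trigger
        empty_polls += 1
        # the pause grows with consecutive empty requests -- no pounding
        sleep(min(60 * empty_polls, 300))
    return cursor
```

The `now` and `sleep` parameters are only there so the loop can be exercised without waiting on a real clock.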

On the front end, the end user is greeted with the 20 most recent items of the river whenever the page is loaded. If the page is not closed, it will ping back to the server every 5 seconds looking for new items. If one is found, it will populate.


So check it out. No fancy bells and whistles, but I learned a couple things and the wheels are turning on draft two. Hopefully Dave can get a few more feeds hooked into the hose. 🙂

On Banks


a financial institution that accepts deposits and channels the money into lending activities

I’m excited to see that somebody with the developer chops of Alex Payne has hooked up with a startup that promises a simpler bank.

It’s amusing to me that for all the annoyances I’ve had with banks over the years, I’ve never really imagined that anything could be too different about the way they were run. The biggest change in recent memory is the move to non-brick-and-mortar establishments, like ING, that allow for smaller operational costs. This, of course, allows for better customer benefits and bigger profits.

So, at the risk of sounding naive, since I got to thinking about what a bank should be tonight, and since BankSimple is asking for feedback, here’s what I think a bank should provide.

A bank should live on:

  • Customers who deposit money are the only reason banks are in business.
  • Lending should provide (1) funding for operations and (2) passive income for customers.
  • Above everything else, the bank should keep the faith of the customers who trust it with their money.

That’s pretty much it.

If a bank wants to make a ton of money, they should explore other high risk avenues and leave the trusting customers to their solid banks.

If a bank wants to make enough money while making their customers happy, there are always low risk lending opportunities out there that allow you to both cover your costs and give at least a small amount back to your faithful customers.

And then.

In the 21st century, a bank should also live on electronic access that is:

  • Secure.
  • Always available.
  • User friendly.
  • Transparent.

So many banks are trapped in the 20th century with what can be done on the web, and that’s scary. Every time I do something on my non-ING bank’s website, I see the “https”, but I still have a hard time believing it. It’d be nice for once to be blown away by the design of a bank’s website, to have no troubles with the way data was collected, and to go away believing that my information was secure.

Once the first three are covered, it would ultimately be nice to see what data my bank had about me, how they used that data, and what methods they used to keep everything contained to my eyes only.

That adds up to a lot of words about how simple a bank should be, but the concepts are easy – take my money, be nice about it, hold it tight, and be sure to toss a little back my way every once in a while.

I Want To Publish More Stuff

Really, I want to do more writing. It seems though that every time I bring up a post window in Educer to start pounding away at the keyboard, I justify myself away from the post because (a) it’s not long enough, (b) it doesn’t fit the normal subject matter, or (c) it just doesn’t come out the way I want it to.

In order to get around that, I hope, I’m starting to establish more places for material to be published. I can get my quick thought, RT, slow chat, whatever done on Twitter. As of today, I can hopefully get the quick thought that runs a little longer than 140 characters done at A Little Longer. I’m soon to set up a Tumblr blog that I can use to share media and other random bits. And I already have a Posterous site set up that I love to use when capturing pictures on my phone as kind of a mobile flow thing.

The end goal with all of this is to establish some kind of place for all of this published content to flow through. My own personal aggregator of sorts. The thought right now is that I can take what I learned from building My Status Cloud over the summer and apply it to a real time river of me at some central location.

Could all be very interesting. We’ll see.

The (De) Construction Of Twitter

$new_standard = strtolower("Twitter");

In the last couple of weeks, both WordPress and Tumblr have announced support for the Twitter API.

The immediate benefits are that any forward thinking Twitter client can now also be a WordPress or Tumblr client as well. Tweetie, one of the most popular iPhone clients, has had support for this for a while and immediately became the tool of choice for testing the new features out. Choices for users expand.

So, with that development aside, where next? I see three things.

1) WordPress should publish an official plugin for WordPress.org that enables the Twitter API for any blog. This act alone could create millions of possible twitter servers.

2) WordPress/Tumblr should make a big deal about how their new changes are also already tied in with the real-time protocols RSSCloud and PubSubHubbub. This helps make the new twitter servers real time.

3) Everybody outside of Twitter should huddle for a brief second and add some new syntax to the existing twitter api that allows for a piece of metadata to be attached (urls to start), call it optional, and implement.

Or, in short: now that you’ve shown how easy it is to implement Twitter’s API, rip it out of their hands, build a new community, and then market the hell out of it.

Google’s DNS

Google launched a public DNS option today as part of their “effort to make the web faster”. It comes complete with a concise write-up and extremely easy to remember IPs ( and

I’ve switched my connection from OpenDNS to Google for now. While all of the benchmarks that I’ve done on my end show that Google is slower than a few alternatives, I have a feeling it will get faster over time. Give them a few days to adapt to the new traffic.

If you’re looking for benchmark possibilities, I tried both namebench, which was pretty cool, and DNS Benchmark, which I like a lot.

My First Stab At A Trending Topics App – Toppics

The other night I pushed out Toppics, my first little app that plays with Twitter’s Trending Topics.

At the moment it grabs the current trending topics from Twitter every several minutes, then searches every few minutes for new tweets that mention twitpic.com along with one of those topics. Toppics, get it. 🙂

Version 0.0 is very basic, but very fun. For example, I know when a football game starts because all of a sudden two team names pop up and I have jersey pictures from both sides. I’ve been able to determine that tweeps really like the Christmas tree at the Four Seasons by watching that category for the last day.

The display only covers the last 24 hours, and that does two things. One – it keeps the pictures relevant. One “Monday Night” trending topic is different from another. Two – it keeps picture counts low. I’m learning quickly that some trends just don’t generate pictures. I hope to add some more features soon as well, possibly refraining from creating a topic page until it has content to display.
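The core of it, pairing a trend with twitpic links and applying the 24-hour cutoff, reduces to something like this sketch (Python for illustration; the tweet dict fields `created_at` and `urls` are assumptions about what the search results get parsed into, not Toppics’ actual internals):

```python
import time

DAY = 86400  # seconds in 24 hours

def build_query(topic):
    """Search query pairing a trending topic with twitpic links;
    quoting keeps multi-word topics intact."""
    return f'"{topic}" twitpic.com'

def recent_pictures(tweets, now=None):
    """Keep only twitpic links from the last 24 hours, so one
    'Monday Night' trend isn't mixed with another's pictures."""
    now = time.time() if now is None else now
    return [u for t in tweets if now - t["created_at"] <= DAY
            for u in t["urls"] if "twitpic.com" in u]
```

A topic page would then only be worth rendering when `recent_pictures` comes back non-empty.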

The next goal is to add content. It’d be nice to grab visuals from sources other than Twitpic, especially for the topics that don’t generate a lot of traffic. And, while visuals are great, if I can add some context with text, that would be ideal.

All in all, it’s another playground. Feel free to play.