What I’ve Been Up To

It’s been a year and a half since the last time I posted anything on this blog, and boy has a lot happened since. Here’s what I’ve been up to:

App.net

As the twenty-ninth user of App.net I was pretty excited about its potential. I had lots of great conversations there, in part due to my vow to read every single post on the platform during the fundraiser period (which ended up being around 30,000 posts). Shortly thereafter I mentioned to another user that I was thinking about asking them to hire me, and he demanded I fly out to San Francisco immediately, on his dime. Long story short, a few weeks later I was switching coasts to become their new Developer Advocate, giving up my NYC apartment and putting to rest Siftee, the Twitter client that had been perpetually in some state of development.

I had an awesome time getting to know App.net and its great developer community. I spent a lot of time working on documentation for the API, being a beta tester for our third party devs and planning hackathons. After a few months though it became clear that I wasn’t the right fit for the team and it was time for me to look for something else. The only thing was I really had no idea what I wanted to do next so…

Europe

I packed a carry-on and departed for two months in Europe. Luckily I have a fantastic hobby in swing dancing, which came in handy: the global dancer network is very accommodating about hosting, and there were some big dance events coming up. My first stops were the Berlin Lindy Exchange, the London Swing Festival and the Paris Lindy Exchange, each happening on successive weekends. After that my travels took me through Brugge, Amsterdam, Prague, Budapest and then a hop over to Barcelona before returning to London and then to NYC, my hometown. I’ll have to share that experience in more detail in some other blog posts.

Landmark

Shortly after returning home to New York I began a journey through a rather audaciously titled “Curriculum for Living” course offered by Landmark Education, best known for its Landmark Forum. The experience has been eye-opening, and the personal development work I’ve been doing there has been tremendous for me, even if I feel I still have a long way to go in becoming the kind of person I really want to be. Besides a general boost in confidence and feelings of relatedness to other people, some specific outcomes are an unbelievable road trip I just completed (more below) and reconnecting with the cofounders from my first startup, whom I had refused to speak to for nearly six years. Another thing I’ve been doing, which I will blog about, is conducting personal interviews with friends and associates about how they see me: my strengths, my weaknesses, and so on (if you want me to interview you, please let me know!).

Work

I’ve been doing some consulting work with Fusebox, a digital agency I used to work at full time. I’ve helped them evaluate new clients and projects, gathered requirements for a large-scale financial / social application and built some loan calculator JavaScript widgets.

What I’m most proud of, though, is building and launching ListeningToTheEnemy.com, which showcases my mother’s massive anti-war art project (currently looking for venues). I’m thrilled to finally see some of her work online, and it was a great way for me to get into doing modern HTML/CSS work.

I also spent a little time hanging out at the Flatiron School (run by my good friend Avi Flombaum) trying to be helpful, getting to know some of the students and otherwise checking out what goes on there. It’s an amazing program.

Dance

I’ve been doing a ton of swing dancing. What can I say? It’s fun, it’s challenging, it’s a great way to meet people and a great reason to travel. Besides the events I went to in Europe, I took part in Swing Out New Hampshire (a five day summer camp) in August, the San Francisco Lindy Exchange in September, the Ultimate Lindy Hop Showdown in New Orleans in October and my fourth annual pilgrimage to Lindy Focus in Asheville, North Carolina in December.

While I was at Lindy Focus I met Rosie, a wonderful Belgian girl who was traveling the US solo for three months. After just a few minutes of talking to her about my travels through Europe, she shocked me with a proposal to join her for a road trip out west in a spray-painted campervan she had reserved for February. After a few weeks of indecision, a couple of Skype chats with Rosie (who was constantly on the move) and some help from my coach and friends at Landmark, I decided to throw caution to the wind and go along for the ride.

Road Trip

Once I was committed to the idea of taking a road trip with someone I barely knew, I got very excited about exploring some of the country’s finest natural wonders. We planned out a trip that ultimately lasted some three weeks and three thousand miles taking us through San Francisco, Big Sur, Hearst Castle, Sequoia National Park, Las Vegas, Zion National Park, Bryce Canyon, Antelope Canyon, Grand Canyon, Monument Valley, Canyon de Chelly, the Hopi Mesas, Flagstaff, Sedona, Phoenix, Joshua Tree National Park and finally Los Angeles.

The trip was definitely one of the best experiences of my life. Besides experiencing some awe-inspiring places, which challenged me to work on my photography and driving skills, I made a wonderful friend and learned that yes, I could be cooped up with someone for several weeks without getting bored, having a single argument or otherwise freaking out (we saved our freaking out for dangling off the top of the 108-story Stratosphere hotel in Vegas).

Today

I’ve only been back from the road trip for a couple of weeks, and while I have a strong desire to stay in motion, being in motion without any sense of direction often means you’re really just spinning your wheels. So for now I’m taking stock of where I’ve been, what I’ve accomplished, what I’ve left unresolved, where I can improve and where I want to go from here. If there’s anything I’ve learned, it’s that there are a lot of wide-open roads out there.

Posted in Personal.


How App.net Can Change Everything

A caveat: this article represents my current understanding of the planned App.net platform as laid out by its founder Dalton Caldwell through many different sources, as well as my own personal views of its potential future development. It is not definitive.

What is App.net?

App.net is a service dedicated to providing a new infrastructure for social web applications that will never be funded through ad revenue. It is the brainchild of Dalton Caldwell and Bryan Berg, co-founders of Mixed Media Labs. The vision for App.net crystallized as an audacious proposal from Dalton after he received an overwhelming response to a post he wrote on what Twitter could have been, itself a response to a blog post from Michael Sippey, the Director of Consumer Product at Twitter.

On July 13th, a fundraiser was launched to raise $500k in 30 days in order to prove the demand for a new way of thinking about web startups – one where users pay for compelling products rather than being turned into products themselves, to be marketed to and mined for information by advertisers. The funding goal was met on August 12th, with more than 7,000 backers donating at tiers of $50, $100 and $1,000.

How does App.net benefit users?

App.net is fundamentally driven by a desire to empower users. It has the potential to establish a new standard of excellence in the treatment of user data and permissions, one that will be innate to the core platform upon which App.net developers build. App.net has publicly declared that its most valuable asset is its users’ trust.

Some of the things I’m most excited about are:

  • Users own the content they create and are already able to download a complete archive of their content at any point in time, something that has been a rarity among web startups (Twitter has failed to support this for six years despite declaring the intention to do so many times).
  • Users will likely have a universal id that will connect them across any services they use that run on the App.net platform. This infrastructure would support seamless discovery of friends across services and a robust view of the permissions a user has granted across all the services they use (a hypothetical sketch of what that might look like follows this list).
  • By charging for access to the underlying infrastructure, spam will be heavily disincentivized.
  • As an infrastructure company, App.net’s business motivation is to encourage a vibrant ecosystem of applications and novel uses of data. The most interesting social applications we’ll see in the next phase of the web will be built on App.net.
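To make the universal id and permissions-view ideas concrete, here is a minimal sketch. Every name in it (the classes, the fields, the scope strings) is my own invention for illustration; App.net has not published a spec for any of this.

```python
from dataclasses import dataclass, field

@dataclass
class Grant:
    """One permission a user has given one service (all fields invented)."""
    service: str   # e.g. "alpha", "some-photo-app"
    scopes: list   # e.g. ["read_posts", "write_posts"]

@dataclass
class UniversalIdentity:
    """A single id shared by every service on the platform (hypothetical)."""
    user_id: str   # stable across all services on the platform
    username: str
    grants: list = field(default_factory=list)

    def permissions_report(self):
        """The 'robust view' of everything this user has authorized."""
        return {g.service: g.scopes for g in self.grants}

me = UniversalIdentity("12345", "orian", [
    Grant("alpha", ["read_posts", "write_posts"]),
    Grant("some-photo-app", ["read_posts"]),
])
print(me.permissions_report())
# {'alpha': ['read_posts', 'write_posts'], 'some-photo-app': ['read_posts']}
```

The point is the single user_id: because every service hangs off the same identity, a user could review (or revoke) everything they have authorized in one place.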

How does App.net benefit developers?

In the last few years, the rapid acceleration in the creation of startups can be attributed in great measure to the arrival of cloud-hosted infrastructure like Amazon Web Services and of web application development frameworks like Ruby on Rails and Python/Django.

Amazon Web Services meant that startups no longer had to worry about how to set up and maintain their server infrastructure or how many machines they needed to buy in order to handle spikes in traffic. Instead, they could plug into Amazon’s system which treats storage space and computational cycles as a utility. It’s like drawing electricity from the wall – you don’t worry about it running out, and you only pay for what you use.

Frameworks like Ruby on Rails provide powerful yet flexible guidelines and core capabilities for programmers to build web applications with. They free developers from having to think too much about the common issues of developing a web app so that they can instead focus on the bits that make what they are building unique.

App.net will combine the simplicity of cloud infrastructure with the power of web frameworks to deliver the best platform for developing social web applications. Social web apps are built around concepts like users, posts, connecting and sharing. App.net will provide a scalable infrastructure and a base model for these concepts upon which startups can innovate without reinventing the same wheels again and again. Developers will spend less time just trying to make their applications functional, so they can have more time to make them unique and useful.

Is App.net vaporware?

Absolutely not. The current infrastructure for App.net is built using the codebase from PicPlz, a photo sharing service that supported hundreds of thousands of users and tens of millions of API calls monthly. After the new App.net initiative was announced, a brand new UI was built in two weeks for an alpha service to demonstrate the viability of the platform. In the one week since this alpha was made available to backers of the initiative, over 3,500 users have joined the service and 40,000 messages have been created. In that same period 13 web apps, 5 mobile clients, 2 browser extensions and 5 API libraries for the platform have been released. There is an actively curated list of projects running on the platform.

Is App.net a Twitter clone?

No, it is not. There is definitely a great deal of misunderstanding about this currently. The service that the early backers of the platform have been using this past week, which can be viewed at alpha.app.net, is a testing ground for the capabilities of the platform. It does heavily resemble Twitter. It also hasn’t been given a specific name to distinguish it from the core App.net platform, and this has contributed to the confusion. For clarity’s sake, for the rest of this post I’m going to refer to the particular network we’ve been playing with this past week as Alpha. Alpha is just one network running on top of the App.net infrastructure, and in the future there should be hundreds if not thousands. Each of these networks will have its own userbase and its own apps, browser extensions, etc., but they will share a common infrastructure and many core capabilities. In fact they will be greatly enhanced by having standardized ways of talking to each other.

Going forward I do believe Alpha will continue to play a vital role in the success of the App.net platform, which I will discuss in more detail below. The important thing to realize is that App.net’s core business is not Alpha – it is the platform that powers Alpha.

Will App.net be another Diaspora?

It is understandable to equate App.net to Diaspora, but it is not accurate. I believe App.net will succeed where Diaspora has for all intents and purposes failed, for a variety of reasons:

  • App.net is not vaporware. Diaspora was funded on an idea and an initial goal of raising $10,000 but was able to raise over $200k due to the tech community’s excitement over a user-controlled and privacy focused alternative to Facebook. It took three and a half months before any software was released. App.net has exceeded its funding goal of $500k and established a user service and a developer API during the funding period despite an aura of extreme pessimism by the tech community as to its viability, in large part due to the perceived failure of Diaspora.
  • App.net has traction. App.net has exceeded 10,000 backers. A third of those users are already actively using Alpha and have contributed more than 40,000 posts. Well known third party developers with extensive experience with the Twitter and Facebook APIs are actively developing tools and services for the platform, have already released working products and are contributing to testing and debugging the App.net API.
  • App.net has a business model. From the outset, App.net will be charging users $50/yr for access to Alpha, and developers an additional $50/yr to access the platform API. Moving forward App.net will model its pricing on running a sustainable business without any ad revenue.
  • Dalton and his team have the necessary experience. This to me is by far the most critical factor. Frequently the human factor is lost when discussing the merits and viability of any particular startup. Building software and a business with the scope envisioned by Diaspora and App.net is incredibly challenging. Diaspora was started by four college students with no prior background in running a startup. App.net has a skilled team of twelve led by an entrepreneur who built a service, iMeem, that at its peak had 26 million users. Founders deal with tremendous amounts of stress, as I have learned first hand several times over. Much of Dalton’s motivation for App.net is a reflection of a deep regret over the mistreatment of iMeem’s third party developers after MySpace acquired it and then shut down its API without warning. It is a testament to him that he was able to bounce back from this as well as cut his losses on PicPlz and continue to innovate. It doesn’t always work out that way. Ilya Zhitomirskiy, one of the four co-founders of Diaspora, suffered from depression and committed suicide at the age of 22. I think it’s absolutely tragic and I wish there were greater awareness of just how hard the startup founder life is (I was really hesitant to add this last bit about Ilya, but it is a significant part of the Diaspora story and shouldn’t go unmentioned).

What is App.net’s business model?

App.net will charge developers for access to the platform. Controversially, App.net also currently charges users of Alpha for access to the network. There is healthy debate going on about the pricing model for App.net but one thing is absolutely clear: App.net will not run ads on Alpha and will not have an ad-supported revenue model.

As an infrastructure play I think App.net has a lot of options for how to develop its revenue model. Here’s what I think they should do:

  • Charge developers a basic fee for access to the platform and a network for developers only – similar to Alpha (let’s call it Dev).
  • Charge applications based on the number of active users they have. This could be tiered or scale linearly. There should be some basic threshold (say 5 users) that is free, so that purely experimental apps can still flourish.
  • Charge applications a utility expense based on the resources they consume, the same way Amazon Web Services does. This would enable App.net to be a feasible platform for media hosting in addition to messaging (a sketch of how such a bill might be computed follows this list).
  • Keep Alpha as a paid access network for the time being in the spirit of lean development. Those who want to be there in the early days because they feel it is worth it can pay to be there (all the current backers are already doing that).
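To make the per-user and utility pricing ideas concrete, here is a minimal sketch of how such a bill might be computed. Every rate below is a placeholder I made up (only the 5-user free threshold comes from the list above); none of this is announced App.net pricing.

```python
def monthly_bill(active_users, gb_stored, gb_transferred):
    """Sketch of a usage-based platform bill (all rates invented)."""
    FREE_USERS = 5           # purely experimental apps pay no user fee
    PER_USER = 0.05          # $ per active user beyond the free threshold
    PER_GB_STORED = 0.10     # utility pricing, AWS-style
    PER_GB_TRANSFERRED = 0.12

    user_fee = max(0, active_users - FREE_USERS) * PER_USER
    utility_fee = gb_stored * PER_GB_STORED + gb_transferred * PER_GB_TRANSFERRED
    return round(user_fee + utility_fee, 2)

print(monthly_bill(active_users=4, gb_stored=1, gb_transferred=2))        # 0.34
print(monthly_bill(active_users=10000, gb_stored=50, gb_transferred=200)) # 528.75
```

The shape of the function embodies the policy: a hobby app with four users pays almost nothing, while a ten-thousand-user app pays in proportion to the load it puts on the infrastructure.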

How can App.net prove its infrastructure model?

In order to succeed, I believe App.net needs to show that it can effectively support multiple networks running on the same base infrastructure and data models while letting each add its own unique attributes. Additionally, App.net needs to show that it can effectively manage permissions for the data that is shared within and across networks. To do this in the near term, I suggest that App.net establish a secondary network for the developer tier of backers (the Dev network I described above), then work out how developer accounts can operate concurrently on Alpha and Dev and how they can cross-post between them while maintaining the overall privacy of the Dev network (which requires a different level of subscription payment than Alpha).

Why should App.net continue to build the Alpha network and eventually make it free?

As I linked to above, there is a healthy debate occurring as to whether Alpha should be a paid service for all users. I believe that in the long term it should not be, but in the short term the status quo is fine, as App.net has already proven its ability to gain traction with a paid approach. There is also some indication from Dalton that Alpha may not live much beyond its current form as App.net transitions to being solely an infrastructure provider and relies on third parties to establish UIs for interacting with the platform. I believe this would be a mistake.

In my mind, continuing to nurture Alpha is a vital element for the success of the platform. There are several reasons for this:

  • Bootstrapping the universal user id. One of the biggest potential benefits of the App.net platform is adoption of a universal user id that would enable your identity to seamlessly move between an unlimited number of web services running on the platform. The biggest hurdle with this is the initial identity creation, and Alpha can bear the brunt of this burden.
  • Give people something to understand relative to Twitter and Facebook. The reality is that grasping the full potential of the App.net platform is challenging. Most of the people who initially visit it will be looking for “a better social network”. They should be able to find it here.
  • Developers will benefit from a reference implementation. Alpha can serve as a showcase for the best ideas occurring in the ecosystem and provide third party developers something to measure their own efforts against.
  • Providing discovery for other networks. Providing users with a central place to see posts originating from a wide range of networks running on the App.net platform will benefit everyone.

I personally believe that it will be in the best interests of the platform to eventually transition Alpha to being an entirely free service. There will be many opportunities for paid networks to spring up on the platform that cater to certain industries or hobbies or age groups, but it will be difficult for anyone but the core App.net team to operate a large-scale free network on the platform without significant investor backing, which leads back to the original problems App.net was envisioned to solve. Again, in the short term I think things are fine the way they are, but in the long term I believe it would be a disservice to the world, and likely a poor choice for the overall health of the App.net ecosystem, not to open the doors to Alpha for everyone, for free.

Won’t the App.net Alpha network compete with third party developers?

Yes, it will – for discovering and incorporating fundamental ideas that benefit the entire ecosystem. The reality is it is likely impossible for a platform to improve without competing with services that were created to address the platform’s deficiencies. However, given that App.net’s business model centers on providing the best infrastructure for third parties to build upon, improvements to the core platform should always stand to benefit the entire ecosystem. The best thing App.net can do to prevent screwing over developers is to maintain the open discussion that it has so far (and could continue to do on Dev) and to build a visible roadmap so developers know what is coming.

Will advertising be allowed on App.net?

I believe so, yes. A fundamental misunderstanding about the App.net platform so far has been that there will never be any ads running anywhere on the platform. I believe that is incorrect. What is correct is that there will never be any ads run on Alpha, or anywhere else, to fund the operations of App.net the company. Ads can make sense in content networks, and any such networks running on the App.net platform should be allowed to run them. But they do not make sense to support an infrastructure service, which is at its core what App.net is. If you want a better understanding of the App.net ad thesis, read this interview with Dalton.

Why doesn’t Dalton understand network effects!?

Excuse me, but I believe Dalton understands network effects better than almost anyone in this industry right now. While everyone is up in arms about the chilling effect of gated access to a social network, they are completely missing two other potentially massive sources of network effects: a developer / platform ecosystem that supports thousands of interoperable services, and a core infrastructure that provides an extremely high level of customer satisfaction. Not only has Dalton demonstrated profound insight into network effects at a macro level but at a micro level as well. He has stated that he intentionally and carefully titrated the addition of new members into Alpha so as to avoid an inverse network effect: a social network filled with n00bs who have no idea what is going on and end up hating the experience. If you’re looking for network effects on App.net, I suggest you look here.

What are the risks?

Although I have been extremely excited by the potential of the App.net platform there are certainly many things that could go wrong. Here are a few that come to mind:

  • Investors. App.net has been built by Mixed Media Labs using existing code from PicPlz. Mixed Media Labs has already received several million dollars in investor backing and it isn’t immediately clear how in line Dalton’s backers are with his new vision, or how much control he maintains, or essentially what, if anything, prevents the company from being forced to follow the same path that Twitter seems to be on. I feel pretty confident that there isn’t anything to worry about here, but it would be very nice to have some more concrete information from Dalton.
  • Baggage. Much of the conversation on Alpha up to this point has been about how to implement “missing” features that can be found on Twitter. There is a real risk that innovation will be stymied by the pursuit of copying existing lousy social networking mechanisms. However, there have also been some truly great threads that have questioned fundamental assumptions about how these things should work, and again I’m pretty confident that there are great things coming.
  • Rushing. Building a robust yet flexible API to support the widest variety of social network implementations is no easy task. Having lots of users on Alpha has already put pressure on the team to build rapidly, and this could come at the expense of “doing things right” for the long term.
  • Security. Having thousands of networks utilizing the same core user id could be a recipe for disaster if accounts aren’t secure.
  • Decentralization. App.net will have to figure out how to provide redundancy or better yet how to decentralize as the platform grows. We don’t want thousands of startups to grind to a halt due to central points of failure. We also don’t want all these startups to cease to function should App.net the company decide to shut down, for whatever reason.

Conclusion

If it succeeds, App.net will undermine the basic economic premise of the entire current social web ecosystem and this is a good, good thing. This is a tremendous opportunity to dream big and put aside ingrained thinking.

I’m excited to contribute to this endeavor. If you are too, join alpha.app.net today (as an extra incentive, it’s your last opportunity to guarantee that your Twitter handle will be available to you). If I helped convince you to join, please say hi and let me know! I’m @orian.

Posted in App.net, Social Web OS, Twitter.


We Need a Social Web OS

The Internet has always been fundamentally social. Its evolution can be described as a series of advancements that have harnessed the innate power of social activity. First, the Internet itself made computers social by allowing them to talk to each other through standardized network protocols. Second, the web made content social by allowing documents to talk to each other through hyperlinks. Third, APIs made software social by allowing applications to talk to each other through exposed programming interfaces. And finally, social networks made the creators and consumers of web content social by allowing them to find one another and share.

But there is a problem. The problem is that the latter two stages of the Internet, which we used to more frequently refer to as Web 2.0, arose in a fundamentally different manner than the first two. They arose as a series of protected hubs. Each of these hubs has drawn from the great strength and flexibility of the underlying social networks of the Internet and the web to come into existence. Yet most have only begrudgingly given back, being interoperable with one another only when it suits their own purposes. They haven’t been built by academics whose fundamental goals were to improve the human condition through the sharing of knowledge, like that guy at the Olympics opening ceremony. They haven’t been built with government backing to facilitate knowledge transfer between universities and research labs. Instead they’ve been built on venture capital, an industry that seems to have lost its way, drifting from its beginnings of seeking outsized financial returns by investing in radical ideas and new markets toward seeking “guaranteed” returns through eyeballs and ad revenue.

Businesses built around controlling eyeballs run fundamentally counter to the nature of social, the nature of the Internet. Social is connecting, sharing, repurposing – the one thing it is not is controlling. But this does not mean successful businesses cannot be built on the web. It means we need to rethink how we build them. David Weinberger, one of the authors of The Cluetrain Manifesto, offers the following in his book Everything is Miscellaneous:

“The commoditization of knowledge enables greater value to be built from it, just as commoditized nails and lumber let us build better family homes for more people. But now more than ever, knowledge’s value will come from the understanding it enables.

And since the commoditization of knowledge includes its easy accessibility, business loses one of its traditional assets. Information may not want to be free, in Stewart Brand’s memorable phrase, but it sure wants to be dirt cheap. The good news for customers is that miscellanized, commoditized knowledge sparks competition and innovation. The good news for businesses is that they can focus on providing the goods and services that are at the heart of their value.”

It is Time for a Social Web OS

It is time for us to take what we’ve learned from years of building and using social software, suss out the common elements, identify the common weaknesses and lay down a new layer of Internet infrastructure designed from the ground up to support a new generation of social services. In programmer speak, it’s time for us to refactor.

This new layer requires three things: standards, frameworks and platforms. Now, everyone has different definitions of these terms so I’ll clarify mine. In this context standards provide agreed upon specifications for how data is defined and structured, frameworks provide methodology for storing, manipulating and distributing data, and platforms provide infrastructure for implementing frameworks.

I may sound crazy, but I firmly believe there is a way to universally model the social web. What I don’t mean is that there is a way for us to define all the fields we could ever possibly want to use in social web applications. What I do mean is that there is more than enough commonality in the fundamental elements of social web applications for us to define a set of standards and frameworks that are flexible enough to encapsulate any ultimate manifestation while adhering to a core that inherently promotes openness and connectedness.

Why do I believe this is possible? It’s the nature of technology and abstraction – we can see examples everywhere we look. Programming languages enabled us to give instructions to computers in human readable form, rather than ones and zeroes. Operating systems enabled us to build software that could run without worrying about all the intricacies of the hardware running underneath. TCP/IP enabled computers to share data with each other over a network. DNS enabled computers to find each other across the globe. HTTP and HTML gave us a way to reference and retrieve documents across the Internet. The list goes on and on.

Any specific collection of standards, frameworks and platforms will have quirks and deficiencies. But when well thought out, their benefits often far outweigh their drawbacks. Imagine, if you will, a world where you don’t have to create a new identity for every web service that you use. Imagine a world where every new startup didn’t have to reinvent the wheel of how it defines users, how it handles permissions, how it defines content, how it makes that content accessible, how it makes its services compatible with others. If that sounds utterly implausible, I would ask you how implausible it would have sounded just a few years ago to say that soon most startups wouldn’t own or manage their own server infrastructure.

Building Blocks of the Social Web OS

I believe there is a fundamental set of building blocks we can use to develop a new generation of social web applications. I’ll list those building blocks here, and in the future I plan to link to my own or other people’s ideas on how to standardize them, develop a framework for utilizing them and a platform for implementing them.

The building blocks of the Social Web OS are:

  • Users
  • Groups
  • Relationships
  • Permissions
  • Annotations
  • Posts
  • Versions
  • Events
  • Feeds
  • Notifications

Going into the details of each of these here would be too lengthy, but I don’t think it takes too much effort to understand what each of these elements represents or how they interoperate. For those who want to build the future of the Internet, standardizing these elements (whether that means de facto standardization or something more “official”) and how they interact is where we should be investing our energy. Building platforms that allow startups to focus on building new networks that utilize and augment these building blocks, rather than reinventing them, will unleash a torrent of innovation.
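To give a feel for what I mean, here is a minimal sketch of a few of these building blocks as plain data types. The specific fields are my own guesses, not a proposed standard; the point is that a small shared core can be extended per service without reinvention.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class User:
    id: str
    username: str

@dataclass
class Relationship:
    source: str   # User.id
    target: str   # User.id
    kind: str     # "follows", "blocks", "member_of"...

@dataclass
class Permission:
    subject: str    # who is granted access (a User.id or group id)
    object_id: str  # what they can access (e.g. a Post.id or Feed.id)
    actions: list   # ["read"], ["read", "annotate"]...

@dataclass
class Post:
    id: str
    author: str   # User.id
    body: str
    created_at: datetime
    annotations: dict = field(default_factory=dict)  # per-service extensions

# A photo service and a check-in service can both reuse the same core,
# adding only their unique fields as namespaced annotations:
p = Post("1", "alice", "Sunset over Big Sur", datetime.now())
p.annotations["photoapp:exif"] = {"camera": "X100"}
p.annotations["checkinapp:geo"] = {"lat": 36.27, "lon": -121.81}
```

Versions, Events, Feeds and Notifications would layer on the same way: a common shape, plus service-specific extensions.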

Facebook, Twitter, Google+, LinkedIn, Flickr, Instagram, Foursquare, WordPress, Drupal, Basecamp, Dropbox, Google Docs, Reddit, Digg. Tell me which of these services cannot be modeled using the above elements. Tell me which of these services would not benefit from avoiding reinventing each of the above components, only adding new logic and fields for whatever they do that is unique. Tell me which of these services would not benefit from being fundamentally interoperable. Seriously, I want to know what I’m missing.

Posted in Social Web OS.


Building a Better Social Web

For the last few years we’ve experienced a renaissance of human connectivity through the web. The rise of social networks and APIs has allowed us to connect and share information with one another in ways we could hardly have imagined only a few years ago. But as these services mature, their need to generate revenue is beginning to expose fundamental flaws in the ways we have established this connectivity. The fundamental openness of the web is under attack from all sides, from ill-conceived copyright regulation to bandwidth capping to a renewed focus on monetization through advertising by the major social networks. While these will all continue to be areas of debate for some time, I believe now is a critical time for anyone who uses the social web to get involved in the conversation.

A few weeks ago, Twitter made waves by indicating its renewed focus on becoming a media entity and locking down the user experience of its service. Many people immediately began engaging in a conversation about what this means for the individuals who use this service (I certainly had my say). One well known entrepreneur, Dalton Caldwell, has stepped up with an audacious proposal to compete with Twitter. I commend Dalton for this effort, and I admire the speed and resolve with which he is attempting to respond. However, I don’t believe his current plan is nearly audacious enough.

What Dalton and many others see (myself included) is that there is something fundamentally wrong about the concentration of power in the hands of a few corporate social networks, which will inevitably have to make sacrifices in providing the best experience for their users in order to satisfy their true constituents – their investors and their advertisers. Dalton’s current proposal seeks to shift this balance of power away from these entities and into the hands of 3rd party developers. But that is not enough. Large entities, small startups, and individual developers all play a necessary role in a vibrant web ecosystem, but we are losing sight of the most important thing – us. The users. The individuals. We need to acknowledge that any social system that doesn’t fundamentally empower the individuals that comprise it will ultimately be replaced by something that does. It is time to return balance to the force.

Whew, okay. So what am I really talking about here? I’m talking about ownership of identity and content. The way the social web has evolved thus far, we’ve had really haphazard handling of both. What this has meant in practical terms is that I have an identity on Twitter, I have an identity on Facebook, I have an identity on Foursquare and whatever else. On every service I need to recreate my connections. On every service I need to manage permissions for how my content is shared with others, both on said services and between them. The fundamental problem is not the managing of these things (it’s an issue we can’t get around, but we can change how it’s handled). The fundamental problem is that these services are structured from a viewpoint that they – not you – own your identity, your connections, your permissions and your content (I’m not talking about what their terms of service say, I’m talking about the effective reality of how they operate).

For the most part these services have decided to play nice and allow users to import their connections and share content from one to the other, but that isn’t always the case. The important realization to have here is they don’t *have* to, because you create your identity within their system and they are simply doing you a favor by allowing you to bring it somewhere else until they decide it no longer serves their interests. There have been many instances of services deciding to restrict other services from importing your connections. Twitter already restricts how much of your own old content you can retrieve, and they can play favorites with who gets to retrieve your content, in what quantities and at what speed.

I could go on about the implications, but for the sake of brevity I’m going to turn to my version of an audacious proposal for an open social web. In fact, this is more of an audacious goal than a proposal, because I don’t want to try to elaborate on technical requirements in this post and I’m hoping the technorati will have something to say about how to actually get this done.

What we need is an open architecture that provides the following:

Identity

  • Individuals own and manage a universal web identity (or multiple identities) independent of any service provider.
  • Startups provide services to assist users in managing and augmenting their identities.

Connectivity

  • Individuals own and manage an address book of other identities they are connected to, which is independent of any particular social network.
  • Individuals own and manage a collection of permissions for how the content they create can be received / accessed / augmented through particular channels. This might mean a type or piece of content is only available to some identities through some services, or available to anybody or available only to the content creator.
  • Startups provide services to assist users in managing their address books and their permissions.

Distribution

  • Individuals own and manage a data store of the content they create.
  • Startups provide services to assist in creating content, distributing content based on the individual’s permissions, and replicating that content back to the individual’s personal archive.

What you may have picked up on is that this model changes the nature of the relationship between web startups and users. In the programming world, we have a concept called Model-View-Controller. It’s an effective way of building software by separating code according to data storage (the model), visual interface (the view) and the logic for the exchange and manipulation of information between the data store and the visual interface (the controller). If you think of social web services in a similar manner, right now most of these networks are seeking to own all three parts, but they shouldn’t. Users should own and manage their data and the logic for how it is exchanged with others. Startups should focus on providing the best experience (essentially the best “view”) for empowering users to do so.
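To illustrate the analogy, here is a toy sketch (not the architecture itself) in which the model (the individual’s own data store) and the controller (the individual’s permission rules) belong to the user, while a startup supplies only the view. All names and structures are invented.

```python
# Model: a personal data store owned by the individual, not by any startup.
personal_store = {
    "identity": "orian",
    "content": [
        {"id": 1, "text": "walk/don't-walk glitch at 14th & 6th", "channels": ["public"]},
        {"id": 2, "text": "draft thoughts on Twitter", "channels": ["friends"]},
    ],
}

# Controller: the individual's own rule for who may receive a piece of content.
def permitted(item, viewer_channel):
    return viewer_channel in item["channels"]

# View: a startup renders whatever the user's permissions allow, and nothing else.
def render_timeline(store, viewer_channel):
    return [item["text"] for item in store["content"] if permitted(item, viewer_channel)]

print(render_timeline(personal_store, "public"))   # shows only post 1
print(render_timeline(personal_store, "friends"))  # shows only post 2
```

Swap in a different startup’s render_timeline and the user’s data and permissions travel with them, untouched.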

The technical implementation of this architecture is left as an exercise for the reader. JK! Well, I’ve got some ideas for how this can actually be done, but I do hope that if this model resonates at all, good implementation ideas will be forthcoming from the community. I’ll share my own in a subsequent post. I’m sure many will point out that something like this has been tried before, and that is likely true, but sometimes you need not just the right ideas but the right timing. I think this might be the time. While this is an approach that existing social networks undoubtedly would not like, I believe it can replicate all the existing functionality they provide while instituting a much needed balance between all parties.

Thoughts?

Posted in Twitter.


What Twitter Wants

I have had a love / hate relationship with Twitter for four years. As a technologist, it is impossible not to be enamored with the transformative effect Twitter has had not just within my industry but the world at large. As an entrepreneur and perhaps an idealist, it is impossible not to be embittered by the trajectory upon which Twitter has set itself as a company.

In a recent ominous blog post, Twitter hinted at a further shift toward becoming a media portal and away from being an open platform for communication. Twitter has already earned its place in the history of the Internet, and I have been very glad to be involved with it as a user and a developer during its exciting stages of rapid growth and innovation. What saddens me is that as a company Twitter seems hell-bent on relegating itself to being a precursor for something else, something better, abandoning its radical and innovative roots for staid ideas of commercialization in order to emulate a decade-old model that will make it just another media entity, if not completely defunct.

When Steve Jobs died, much was said about the fact that as a visionary he changed not one, but five industries. Few other entrepreneurs can make such a claim. In a similar sense, Twitter revolutionized five different areas of the web: real-time, mobile, non-reciprocal social networking, short-form communication, and the use of APIs. In comparison I would say that Google and Facebook each revolutionized two. The former in search (PageRank) and advertising (AdWords). The latter in activity streams (the news feed) and content sharing (tagging people in photos and posts).  The use of a social network identity as an authentication mechanism for other services is another major innovation, but I’m really not sure who deserves the most credit for that, Facebook or Twitter.

I am not claiming that Twitter was the first to introduce any of these ideas nor that their implementations were the best. What they did do is solidify these ideas in the new technological order. Twitter demonstrated the feasibility and value of delivering streams of information to the masses in real-time. It unlocked the extraordinary information dissemination potential of social networks that did not require reciprocal connections among users. It established the role of the mobile device in producing content for the web and disseminating news. It validated the idea that artificial constraints on the length of content (where we thought the web clearly had an advantage over paper) could lead to an explosion in the creation of content (how many people have written a blog post vs. posted a tweet?). Finally, it established the concept of web services as robust open platforms from which thousands of programs and startups could bloom.

It is because of this that as a web developer and an entrepreneur I feel I owe such a debt to Twitter. It has opened my eyes to so many exciting possibilities. And that is why I am so sad to think that some time in the near future I might not be using it.

My story as a Twitter developer

In 2009, my then girlfriend Whitney Hess was well on her way to prominence as a new voice in the field of User Experience. She attributed much of her success to her prolific blogging and her tireless engagement with a rapidly growing collection of followers on Twitter. Yet as this follower-base grew she was increasingly frustrated with how cumbersome the experience of using Twitter was for a professional. I could see how powerful this new communications channel was and yet how far it had to go to reach its potential.

Information management has always fascinated me. My previous startup was focused on building an intelligent RSS reader that would reduce information overload by collating related posts across the blogosphere. I was excited to turn my attention to trying to solve all sorts of interesting problems with the creation and consumption of tweets. How do you make it easy to filter tweets by keywords or relevance? How do you figure out who to follow? How do you discover what your followers are interested in? Where they are? What they talk about? Who they know that you might want to know? How do you find that important thing that someone said weeks ago? Is it through searching just your own streams? Is it through tagging tweets in order to “bookmark” them? How do you see all your old conversations with someone else?

Working on these problems became my passion for the next two years; I poured hundreds of hours into my little Twitter client, Siftee. It meant I got to learn all the ins and outs of the Twitter API, including all its ugly parts (I’ve already written about what’s wrong with the Twitter API). It meant I had to design and build my own infrastructure and APIs to do the things it seemed Twitter was never going to do, like enable search in your sent tweets and DMs, sorting and filtering of followers, tagging tweets, seeing conversation histories, et cetera (I owe a debt to Stephen Michaels over at Fusebox for developing the backend components that made those features a reality).

I arrived at the SXSW 2011 technology conference excited to show off my progress on Siftee. The angst over the acquisition of Tweetie the previous year, just before the announcement of promoted tweets at Twitter’s Chirp developer conference, had long ago cast a gloom over the developer community, but I had pressed on since I saw no signs that Twitter was interested in addressing the needs of the professional market any time soon. A day or two later, however, the infamous Ryan Sarver memo would drop, informing developers that continuing to build 3rd party Twitter clients was not in their best interests. I tried to press on, but that’s when potential investors left and right started telling me “it’s over.” I probably would have stopped working on my project if it hadn’t been for Rachel Sklar, a media insider, TechStars mentor and Twitter power user who immediately signed on as an advisor after a demonstration I gave her last summer.

Although Siftee was getting traction and continues to get new signups every day, friends and investors continued to tell me that building on Twitter’s platform was crazy, and, well, I had rent to pay and mouths to feed (I have a cat). Siftee is now more of a side project for me, but I can’t quite let go of it because I know that most of the problems I was trying to solve have still not been addressed, and they likely never will be.

I continue to believe that they are worth solving.

What just happened

A few days ago, Twitter’s consumer product manager Michael Sippey (formerly the VP of product at Six Apart) published a post on Twitter’s developer blog entitled Delivering a consistent Twitter experience. It reiterated 3rd party developers’ subordinate status as first detailed in the Ryan Sarver memo and laid the groundwork for Twitter to further clarify its vision of a more tightly-controlled experience.

It has created a firestorm of concern and criticism. Mathew Ingram warned Twitter to remember what happened to MySpace and Digg (and AOL, and AIM, and…). He wonders if anyone would use a truly open alternative. Dave Winer reminded developers of the folly of building on a corporate API. Ben Popper and Tim Carmody lamented as Twitter follows Facebook down the walled garden path. Anil Dash (who spent years working with Mr. Sippey at Six Apart) thinks we’re all just a bunch of whiny bitches.

But let’s step back for a second…

Twitter’s purpose

Back in 2009 Whitney and I started playing a little game. We would post messages like “NE corner 14st 6th ave #godontgo”. The walk signals in New York City had recently started acting glitchy, showing with increasing frequency both the walk and don’t walk signs at the same time. We decided to track our encounters with these confused boxes in real-time using Twitter. We wondered if our friends would catch on and join in the fun (a few did). We thought one day we could present the accumulated data to the city as a crowdsourced effort to help solve a municipal problem. I’d love to show you one of those tweets now, but I can’t.

We knew about Twitter’s technical limitations at the time – search didn’t go back more than a few days and users weren’t allowed to retrieve more than their last 3,200 tweets. We weren’t that concerned, after all Twitter was still a fledgling company with limited resources and they issued constant reminders that all these tweets were safely locked away somewhere. What we never imagined was that three years, a billion dollars and hundreds of new engineers later things would be exactly the same.

Of course, there is a simple solution for retrieving all your old tweets. All you have to do is ask nicely – and be a senior journalist at a respected news organization like National Public Radio. Andy Carvin was able to retrieve his archive of (at the time) 95,000 tweets doing just that. Now, the purpose of this request was so that Mr. Carvin could study his massive amount of first hand reporting on the Arab Spring, an effort for which some said he should earn a Pulitzer, and I applaud his monumental efforts. As a side effect he got to revisit pleasant memories of his daughter in his very first tweet, something I imagine a lot of other people with thousands of old tweets would like to be able to do as well.

Being an acclaimed journalist wasn’t the only way to get special treatment with Twitter data. As an early developer on the platform, I have the privilege of “whitelisting”. My @orian account can make up to 10,000 requests for data from Twitter every hour, whereas yours most likely can only make 350. Additionally I have access to the “Site Streams” service which enables developers to track tweets and keywords across thousands of users in real-time. Whitelisting is no longer available, and Site Streams (which was intended to serve as somewhat of a replacement) remains in perpetual beta with signs that it too will likely be shut down. At the time, whitelisting was granted to developers on the premise that they would build interesting applications and expand the ecosystem. Nowadays Twitter would prefer that that be left to the pros – as in them and a handful of designated partners who will charge you enterprise level rates to retrieve precious Twitter data, even if it’s your own.

The beauty of Twitter was that it didn’t have a purpose. This of course is what led to the common characterization of Twitter as being a platform for telling people what you had for lunch. The reality is that this is what separated Twitter from Facebook. It didn’t provide a context that said “here, this is for sharing stuff between friends.” Want to see what someone is saying? Go ahead and follow them, they don’t need to follow you back. Want to provide customer support for your product? Have at it. Want to build an emergency broadcasting service for your town? By all means. Want to track your dieting progress? Good idea! Want to “hear” what time it is according to Big Ben? Um, sure, why not?

Providing a better user experience

Ever since the Ryan Sarver memo, the party line at Twitter has been that reducing the diversity in the ecosystem will result in a better user experience, one guided by Twitter’s expert hand. Yet, as Nick Bilton of the New York Times points out, Twitter’s sites and apps remain a cacophony of confusion more than a year later.

More importantly, nearly all of the useful additions to the original idea of Twitter as a one-to-many short message service originated outside of Twitter HQ. The @reply? Outside Twitter HQ. The hashtag? Outside Twitter HQ. The retweet? Outside Twitter HQ. Search? That came from Summize, which Twitter acquired. Analytics? Twitter’s current efforts are based on its acquisition of BackType. Lists? Twibes and TweepML (now dead) did it first. In-line images? Brizzly (now dead) did it first. Pull-to-refresh? Loren Brichter, creator of Tweetie, invented that. Twitter acquired Loren’s company atebits and made its products the first official Twitter apps for the iPhone and iPad, and then set about patenting the pull-to-refresh feature (to Twitter’s credit, their approach to patents seems to be innovative, although that’s up for debate). By the way, Loren quit working for Twitter back in November.

Luckily, those are all examples of ideas that have made it into the core of Twitter. What about everything else? What about Proxlet, a popular service that let people filter their timelines so they didn’t have to see Foursquare checkins or dopey hashtag memes if they didn’t want to? Twitter shut them down with no advance warning. It never came back. What about Favstar, a service that scours Twitter activity to find the best stuff that users are favoriting? Developer Tim Haines put his API plans for the service on hold for fear that Twitter would shut him down. And then there’s TweetDeck, which Twitter hurriedly acquired to keep from the hands of Bill Gross out of fear he might roll out an ad network faster (better?) than they would, or even possibly a competing social network. This was a move that pretty clearly did not benefit end users.

So what does Twitter believe to be the next big step in improving the user experience? Primarily it seems to be the, er, expansion of content delivered in expanded tweets. I’ve watched as developers in the Twitter forums have pleaded to be able to make their media content appear in Twitter’s side pane and expanded tweet views, only to receive the same response: “we’re not taking more partners at this time”. Twitter could have gone the way of supporting existing embedded media standards years ago by utilizing a service like Embedly, which now supports 218 different content providers at last count, but instead they are focused on getting developers to use their proprietary “Twitter cards” which, not surprisingly, requires a formal agreement with and approval from Twitter. [edit: Ryan Sarver wants to correct some inaccuracies here, noting that there are hundreds of partners for Twitter Cards and joining doesn't require a deal of any kind. However, here is the form for participating in Twitter Cards which explicitly states you need approval from Twitter, so I don't see how my original statement is inaccurate.]

That’s a far cry from annotations, an idea Twitter announced at the Chirp conference. With annotations, developers would be able to add any arbitrary metadata they wanted to tweets, so tweets could suddenly carry any kind of additional meaning their creators wanted to add to them. It was an idea so radical I could barely wrap my head around it. I remember meeting a bunch of Twitter engineers in the lobby at the conference. When I introduced myself, they said “oh, you’re that guy who sent the letter to Ryan.” Just before the conference I had sent Ryan Sarver my thoughts on how Twitter was mismanaging the developer community, which, to his credit, was at his request. Apparently the letter had circulated. They then asked me what I thought was more important, user streams or annotations (the two major engineering efforts announced at the conference). When I said “annotations”, the engineers on that team cheered. Twitter took to its blog to detail how this would be part of their strategy to create enduring value. Annotations never happened.
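Annotations never shipped, so there is no canonical format to point to, but the pitch (arbitrary, namespaced metadata riding along with a tweet) might have looked something like this sketch, reusing the #godontgo game from earlier in this post. The structure below is my guess for illustration, not Twitter’s draft spec.

```python
# A purely illustrative payload; the namespaces and fields are invented.
tweet = {
    "id": 123456789,
    "text": "NE corner 14st 6th ave #godontgo",
    "annotations": [
        {"geo": {"lat": 40.7376, "lon": -73.9962}},
        {"godontgo": {"signal_state": "both", "intersection": "14th St & 6th Ave"}},
    ],
}

# A client that understands the "godontgo" namespace could assemble a civic
# dataset from tweets like this one; any other client just shows the text.
for annotation in tweet["annotations"]:
    for namespace, data in annotation.items():
        print(namespace, data)
```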

What Twitter wants

Twitter has figured out what it wants to be when it grows up. It wants to be The New Media. Twitter doesn’t want to come to you. It wants you to come to it. This, we’re told, will provide a better experience. And it will – for their advertisers and their investors.

In its youth Twitter might have thought its purpose was to empower us as creators. But it has grown up, and the conventional wisdom of grownups is that it’s far more profitable to think of everyone as consumers.

This is just the natural outcome for a social network once it leaves the hands of its early adopters and reaches scale. Twitter isn’t trying to create a Horrible Network for Kardashians. It’s just a reflection of all the normal people that showed up to use the service. At least, that’s what Anil Dash argues. But Anil is wrong. Twitter has the hand that guides the users, not the other way around. Twitter gets to decide whether the engineering priority is to support Justin Bieber having millions of followers (here’s @biz with the Bieber boxes) or whether all users should be able to access all their old tweets.

Anil takes issue with programmers like Dalton Caldwell (and presumably myself) who are lamenting what Twitter could have been, claiming we’re just a bunch of “hippie utopian technological triumphalists” who are upset because Twitter isn’t just for geeks anymore. Well, programmers will always be in the minority in established social networks. Concluding that we’re just resentful of this status is wrong. I certainly can’t speak for everyone, but I know I was trying to make things better for whoever wanted to use Siftee, even if they liked Kardashians.

Others are saying that Twitter is just taking a cue from Apple, which clearly stands out as an example of the benefits of top-down control. Color me skeptical. I certainly believe Twitter would be a better service and product today if they had continued to work with the developer community rather than against it. Some see Jack Dorsey as the heir apparent to Steve Jobs (though some do not). And, ya know, I understand where Twitter is coming from. What company ever made good money focusing on building great products that made their users more productive? I guess that’s why Apple became the world’s largest company by selling ads.

What Twitter should do

I hope that the contents of this blog post add some useful context to the vibrant discussion that is emerging about what Twitter is, has been, and may become. If Twitter wants to remain a truly open network and bring 3rd-party developers back to the platform, here are three things it should do:

First, Twitter should issue a memo explicitly reversing course on the policy that 3rd parties should not develop Twitter clients and acknowledge that this policy was not the right approach to fostering future innovation on the Twitter platform. This memo should be accompanied by a strategy document detailing how Twitter will work with 3rd parties to develop guidelines for the rollout of new platform features and branding requirements while maintaining overall implementation flexibility.

Second, Twitter should announce a revised ad distribution model, and a timeline for implementation, which provides for free ad-supported access to tweet streams as well as paid access to tweet streams without ads. In addition, some portion of revenue generated from ads delivered through 3rd parties should revert to those 3rd parties. For further discussion on the benefits of this approach, see Nova Spivack’s excellent piece A Solution to the Twitter API Problem. By the way, the revenue share model is actually what Twitter CEO Dick Costolo said he was going to do two years ago “in a very transparent way.”

Third, Twitter should announce an effort similar to Google’s Data Liberation Front to provide users complete access to the content which they have created and continue to own under the current Terms of Service. Nova Spivack’s piece makes salient points regarding the law around content ownership and distribution, and I do hope that someone with expertise starts to take a closer look at Twitter’s artificial restrictions on retrieving your own content. At a minimum, Twitter should provide a way to request and receive a one-time data dump of your own Tweets and Direct Messages should you choose to take them elsewhere (and without qualification as to who you are or why you want them, so long as you are the author).

Note that these are not feature suggestions. There’s a reason for that: not everyone needs the same features. That’s the beauty of an open ecosystem. At some point Twitter’s thinking on open ecosystems flipped, where the costs are now perceived to outweigh the benefits. My suggestions only reflect how to flip it back. The rest takes care of itself.

Conclusion

Through Twitter I’ve met hundreds of interesting people, spawning thousands of conversations. I’ve used it to grow my skills as a programmer, an entrepreneur and maybe even a writer. It has been fascinating to watch its growth and see its impact on the world.

Obviously I’m not happy about the direction Twitter is headed in, and I haven’t been for a long time. Do I think any of my suggestions above will happen? No, I don’t. I think Twitter will continue to spread FUD until what’s left of the ecosystem remains wilting in the carefully arranged flower beds of its walled garden, foregoing the legacy of all the good ideas that got it to where it is today.

Mind you, I don’t think we’re losing these good ideas. The web doesn’t seem to let good ideas go. That’s why I remain excited about how they might take hold elsewhere. For that, I’ll leave you with some final thoughts on Twitter from someone who articulated them way better than I could:

Twitter needs to decentralize or it will die. Maybe not tomorrow, maybe not even in a decade, but it was (and, I think, remains) my belief that all communications media will inevitably be decentralized, and that all businesses who build walled gardens will eventually see them torn down. [...]

The call for a decentralized Twitter speaks to deeper motives than profit: good engineering and social justice. Done right, a decentralized one-to-many communications mechanism could boast a resilience and efficiency that the current centralized Twitter does not. Decentralization isn’t just a better architecture, it’s an architecture that resists censorship and the corrupting influences of capital and marketing. At the very least, decentralization would make tweeting as fundamental and irrevocable a part of the Internet as email. [...]

So while I don’t expect Twitter to master its own destiny as far as the decentralization of the medium goes, I do support the idea, and I hope that Twitter as a business can coexist with the need for the world to have a free, open, reliable, and verifiable way for humans to instantly communicate in a one-to-many fashion.

- Alex Payne, original engineer of the Twitter API

Posted in Twitter.


What’s wrong with the Twitter API

I want to make something clear from the start: I love Twitter, though sometimes I wonder if I’m suffering from Stockholm syndrome. I have devoted much of the last three years of my life to working with the Twitter API and continue to pursue building the world’s best Twitter client for professionals in the form of Siftee. Recently Twitter staff has been reaching out to developers with a renewed vigor in the hopes of recapturing some of the goodwill and enthusiasm that has been squandered in the past two years. I applaud them for that. The new developer discussion site and documentation portal are significant improvements. Jack Dorsey has reached out for feedback. I recently spent time on the phone with Jason Costa, Twitter’s developer relations manager, at his request.  I think these are all good signs for the ecosystem.

With that said, there is a lot of feedback to give. This post is a technical one focused on the API itself, not on Twitter’s relationship with developers (I’ll save that for another day). Although it’s technical I’ve tried really hard to make it readable to “normals” :)

Before I start I want to thank Ryan Sarver for lending me his ear in the past, and especially Taylor Singletary for providing lots of great support over the years to not just me but the entire developer community.

Without further ado, here’s what’s wrong with the Twitter API…

Users can’t fetch all their old content.

Twitter imposes a variety of artificial limitations on how far back you can access tweets. Users can only access the most recent 800 tweets in their home timeline, their most recent 800 mentions and the most recent 3200 tweets they’ve sent. Additionally a user may have favorited tweets that have become “too old” to be retrieved. What this effectively means is most long-term users of Twitter will never be able to access all of their old content.

Twitter imposed these constraints early on due to their limited infrastructure capacity and a need to focus on reliable accessibility for recent content. While I can understand an ongoing need to prevent third parties from crawling all of Twitter’s old content, I believe that individual users should be able to access all old content directly relating to their account, which means all their Mentions, Direct Messages, Favorites and sent Tweets, without restriction.

With Siftee we attempt to archive as much of this content as possible for our users so they can search over it and see old conversations.
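For the curious, the archiving dance looks roughly like this (a sketch in Python using the requests library; archive_sent_tweets is my own name, and auth stands in for a configured OAuth handler):

    import requests

    API = "https://api.twitter.com/1/statuses/user_timeline.json"

    def archive_sent_tweets(screen_name, auth):
        # Page backward with max_id until Twitter stops returning
        # tweets. For long-time users this ends at the ~3200 cap,
        # not at their actual first tweet.
        tweets = []
        max_id = None
        while True:
            params = {"screen_name": screen_name, "count": 200}
            if max_id is not None:
                params["max_id"] = max_id
            page = requests.get(API, params=params, auth=auth).json()
            if not page:
                break
            tweets.extend(page)
            max_id = page[-1]["id"] - 1  # continue below the oldest seen
        return tweets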

You can’t fetch all the replies to a tweet.

Twitter’s Issue 142 is one of the oldest and most infamous shortcomings of the API: there is no way to retrieve all the replies to a particular tweet. Issue 142 is about to celebrate its third birthday, has the dubious honor of being assigned to a programmer who no longer works for Twitter, and has been given a status of “WontFix”. I have long felt this is the most obvious shortcoming of the entire API, and addressing it has more potential than any other item in this post to fundamentally change the nature of the Twitter experience. Consider that Twitter allows you to see everyone who has retweeted a tweet – allowing you to see all the replies to a tweet would revitalize Twitter as a medium for conversation rather than just broadcasting. Unfortunately Twitter sees itself as an information delivery system rather than a social network, so this is likely to continue to go unresolved.
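The closest thing to a workaround I know of only reaches back as far as the search index does: search for tweets directed at the author, then look each candidate up to check whether it is really a reply to the tweet in question. A rough sketch (replies_to is a hypothetical helper, and auth a configured OAuth handler for requests):

    import requests

    def replies_to(tweet_id, author_screen_name, auth):
        # Best-effort workaround for Issue 142. The search index only
        # reaches back a few days, so older replies are unrecoverable.
        candidates = requests.get(
            "http://search.twitter.com/search.json",
            params={"q": "to:" + author_screen_name, "rpp": 100},
        ).json().get("results", [])
        # Search results are stripped-down (more on that below), so
        # each candidate must be looked up individually, burning rate
        # limit along the way.
        replies = []
        for c in candidates:
            full = requests.get(
                "https://api.twitter.com/1/statuses/show/%d.json" % c["id"],
                auth=auth,
            ).json()
            if full.get("in_reply_to_status_id") == tweet_id:
                replies.append(full)
        return replies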

You can’t see who favorited a tweet.

Similar to not being able to fetch all the replies to a particular tweet, you cannot fetch a list of users who have favorited a particular tweet. While Twitter’s User Streams API supports notifying users when their tweets are favorited in real-time, there are no methods for finding out who favorited your tweets (or anyone else’s) in the past. This seems like a missed opportunity for user discovery.
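The only crumb on offer is the real-time favorite event delivered over User Streams. A sketch of catching it (assuming stream_lines is an already-connected stream yielding newline-delimited JSON messages):

    import json

    def watch_for_favorites(stream_lines):
        # Favorite events only arrive as they happen; there is no way
        # to ask who favorited a tweet in the past.
        for line in stream_lines:
            if not line.strip():
                continue  # keep-alive newline
            message = json.loads(line)
            if message.get("event") == "favorite":
                who = message["source"]["screen_name"]
                what = message["target_object"]["id"]
                print("%s favorited tweet %d" % (who, what))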

Native retweets don’t allow for comments.

When Twitter takes a phenomenon that is naturally emerging from the ecosystem and attempts to formalize it, a good rule of thumb would be that the result should keep all the existing functionality of the phenomenon and ideally make some or all of it better. When Twitter rolled out native retweeting they solved a number of “problems” that most users probably didn’t care about (making tweets look like they came from the original author; making sure the tweet text wasn’t tampered with) while eliminating a very significant element of the original phenomenon (adding your own commentary to someone else’s tweet).

We now have a situation where most Twitter clients support both the original “RT” approach to retweeting as well as the native retweet functionality. This sort of potential UI confusion is cited as one of the main reasons why Twitter wants to stop third parties from developing new clients. Of course this completely fails to acknowledge the reality that if Twitter’s own solution to retweeting were actually better across the board than the original behavior, it would have been nearly universally adopted. I don’t mean to be harsh, but I feel like I’m in pretty good company: Twitter’s own co-founder and Executive Chairman Jack Dorsey has said he doesn’t use the native retweet functionality because it doesn’t fit how he retweets, by which he means he likes to include his own comments. I’m pretty sure he’s not the only one.

Think of it this way: the current system only allows for implicit agreement. There is no way to natively retweet someone while stating “I totally disagree with this”. Imagine only being allowed to quote political candidates you agree with. This is similar to Facebook’s problematic decision to create a Like button but no corresponding Dislike. It artificially skews potential activity in the system. This doesn’t make sense for a social network, but as I noted earlier, Twitter doesn’t consider itself a social network.

At this point it might be very difficult for Twitter to improve the situation. One idea is that Twitter clients could implement a behavior where a user could follow up a retweet with a separate reply to the original tweet and have the two visually linked.
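A sketch of what that might look like (retweet_with_comment is my own invention; note that the API ignores in_reply_to_status_id unless the reply text mentions the author):

    import requests

    def retweet_with_comment(tweet_id, author, comment, auth):
        # Native-retweet the original, then post the commentary as a
        # reply to it, so a client can visually pair the two.
        requests.post(
            "https://api.twitter.com/1/statuses/retweet/%d.json" % tweet_id,
            auth=auth,
        )
        return requests.post(
            "https://api.twitter.com/1/statuses/update.json",
            data={"status": "@%s %s" % (author, comment),
                  "in_reply_to_status_id": tweet_id},
            auth=auth,
        ).json()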

DMs can’t be marked as replies.

Direct messages do not have the in_reply_to property that tweets have – in other words there is no way to explicitly link one direct message as being a reply to another. This means there is no way to break up direct message conversations between two users except perhaps by how much time has passed between messages, which is a very unreliable way of breaking up conversations. All this requires as a fix is implementing the exact same functionality that already exists for tweets to mark one as in reply to another.
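For concreteness, here is the time-gap heuristic clients are currently reduced to (a sketch; assume messages is a list of DMs sorted oldest-first, each with created_at already parsed to a Unix timestamp):

    def split_by_time_gap(messages, gap_seconds=3600):
        # Treat any long silence as a conversation boundary. This is
        # a crude guess, which is exactly the problem.
        conversations = []
        current = []
        last_time = None
        for m in messages:
            if last_time is not None and m["created_at"] - last_time > gap_seconds:
                conversations.append(current)
                current = []
            current.append(m)
            last_time = m["created_at"]
        if current:
            conversations.append(current)
        return conversations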

DMs aren’t threaded as conversations.

Developers access direct messages via two endpoints: direct_messages and direct_messages/sent. The first represents a user’s inbound direct messages and the second represents all their outbound direct messages. This is, unfortunately, the entirely wrong model for how private messaging should be represented. This model makes it extremely difficult to surface old conversations between the user and another specific person because it requires the developer to go back in time by loading all old direct messages the user has sent and received just to find the ones sent to or received from a specific account. This is why Twitter clients that show DMs as per-user conversations (including twitter.com) don’t let you go as far back in time as you would like to go. It simply becomes unwieldy and requires too many potentially extraneous calls to the API.

The right model would allow developers to fetch a list of accounts the user has had DM conversations with in reverse chronological order (the same way your phone shows you who you’ve been texting) and then fetch just the messages between the user and another specific account (again, the same way your phone does it).
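Until Twitter provides that, rebuilding phone-style threads has to happen client-side, which only works if you have already fetched everything, which is exactly the expensive part. Roughly:

    from collections import defaultdict

    def thread_dms(inbound, outbound):
        # inbound/outbound: everything fetched from direct_messages
        # and direct_messages/sent. Group by the other party, sort
        # each thread oldest-first, then order threads by recency,
        # the way a phone presents text messages.
        threads = defaultdict(list)
        for dm in inbound:
            threads[dm["sender_screen_name"]].append(dm)
        for dm in outbound:
            threads[dm["recipient_screen_name"]].append(dm)
        for msgs in threads.values():
            msgs.sort(key=lambda m: m["id"])
        return sorted(threads.items(),
                      key=lambda item: item[1][-1]["id"],
                      reverse=True)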

Search results aren’t tweets.

Twitter search does something very weird. It doesn’t return tweets! It returns information that roughly resembles tweets but leaves out many of the standard fields and totally changes one very important one – the user id of the sender. A warning on the Search API page notes that the issue (Issue 214) is being “tracked”. Unfortunately this issue has been “tracked” since 2008, when Twitter acquired Summize to form the core of its search capabilities. The last comment on Issue 214 sums things up pretty well: it’s “more or less obvious that they are not ever going to fix it”.
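You can see the damage for yourself by hydrating any search result and comparing ids (a sketch; check_search_id_mismatch is my name, not Twitter’s):

    import requests

    def check_search_id_mismatch(result, auth):
        # result: one entry from search.json's "results" array.
        full = requests.get(
            "https://api.twitter.com/1/statuses/show/%d.json" % result["id"],
            auth=auth,
        ).json()
        print("search said user id %s" % result["from_user_id"])
        print("the real user id is %s" % full["user"]["id"])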

Twitter developers could pretty easily fix the issues of missing and incorrect fields in search results by taking all the tweet IDs returned in a search and looking up the original tweets, but unfortunately…

Tweets can’t be looked up in bulk.

Twitter provides some nifty bulk lookup tools such as users/lookup which lets a developer send up to 100 user IDs or screen names to Twitter and get back a full representation of that user (their real name, bio, friends and follower counts, etc). Unfortunately there is no equivalent for tweets. If you have a list of tweet IDs (unique numbers linked to each tweet) and you want to look up the actual corresponding tweets, you have to do it one at a time.

This wasn’t really such a big deal for a long time, as there was rarely a situation where you would need to look up lots of tweets using ids. Twitter makes lots of different services available for getting tweets in bulk such as fetching a user’s home timeline or their mentions. But what if a developer wanted to look up something that Twitter didn’t provide a service for, say, for example, the most favorited tweets? At one point in time they might have turned to a third party such as Favstar. Unfortunately (I seem to be using that word a lot) Twitter made what I consider to be one of their greatest strategic errors by changing their Terms of Service to prevent third parties from making tweets available via their own APIs. Instead, “If you provide an API that returns Twitter data, you may only return IDs (including tweet IDs and user IDs)” (section 4.A. of the ToS). Twitter allows developers to get up to 200 tweets at a time for things like a user’s home timeline, counting it as a single request against your rate limit (the number of times you can request certain things from Twitter per hour). But getting a similar 200 tweets from a third party service requires getting 200 tweet IDs and then requesting each tweet individually from Twitter, using up over half the current rate limit of 350 requests per hour.
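The asymmetry is easy to see in code (a sketch; both helper names are mine):

    import requests

    def lookup_users(user_ids, auth):
        # Bulk lookup exists for users: 100 ids per request,
        # so 200 users costs just 2 requests.
        users = []
        for i in range(0, len(user_ids), 100):
            chunk = user_ids[i:i + 100]
            resp = requests.get(
                "https://api.twitter.com/1/users/lookup.json",
                params={"user_id": ",".join(str(u) for u in chunk)},
                auth=auth,
            )
            users.extend(resp.json())
        return users

    def lookup_tweets(tweet_ids, auth):
        # No bulk equivalent for tweets: 200 ids costs 200 requests,
        # more than half of the 350/hour rate limit.
        return [requests.get(
                    "https://api.twitter.com/1/statuses/show/%d.json" % t,
                    auth=auth,
                ).json()
                for t in tweet_ids]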

Talking about Twitter’s recent terms of service changes and their implications is fodder for a different post. The takeaway here is that Twitter should never have forced third parties to make only tweet IDs available through their own APIs if Twitter wasn’t ready to release a corresponding bulk tweet lookup service.

Lists don’t show @replies to people you follow who are not members of the list.

There are a number of characteristics of Twitter lists that can be very confusing for users. One thing many people don’t realize is you do not need to follow the accounts you put on a list. In fact, lists can be a great way to keep track of things people are saying without following them and cluttering up your home timeline. Another thing many people don’t realize about lists is that they will never show tweets that are replies to accounts that are not on the list. This, in my opinion, is not a good thing.

The reason this is not a good thing is because many people think of lists as a way to organize the people they are already following into more manageable groups. If I follow someone and I put them on a list, such as my “Twitter Developers” list, I would expect any tweets of theirs that I see in my home timeline to also appear on my Twitter Developers timeline. But that is not the case. For example, if I didn’t put myself on my Twitter Developers list I would not see any @replies to me from anyone on my Twitter Developers list when looking at that list, even though I would see them in my home timeline. This is confusing.

There are good reasons from an infrastructure perspective as to why Twitter may have had to build lists this way. It would be great if Twitter would enable an option to request list tweets as either filtered or unfiltered for @replies to accounts you follow whether or not they are on the list. This is not too likely to happen, but the missing @replies could be pretty easily restored on the client side by merging the appropriate tweets from a user’s home timeline into their applicable list timelines.
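The client-side merge might look something like this (a sketch; assume list_member_ids is a set of user ids already fetched via the lists API):

    def restore_list_replies(list_tweets, home_tweets, list_member_ids):
        # Pull tweets out of the home timeline that were written by
        # list members but filtered from the list timeline because
        # they reply to a non-member, then merge and re-sort.
        seen = set(t["id"] for t in list_tweets)
        extras = [t for t in home_tweets
                  if t["user"]["id"] in list_member_ids
                  and t["id"] not in seen]
        merged = list_tweets + extras
        merged.sort(key=lambda t: t["id"], reverse=True)  # newest first
        return merged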

Lists should not be capped.

Twitter imposes two limitations on lists: a user cannot create more than 20 of them, and each of them cannot have more than 500 members. Neither of these restrictions makes much sense, and my guess is that they do more harm toward user adoption of lists than they do good in terms of preventing Twitter’s infrastructure from being overloaded (which is the only justification for these restrictions in the first place).

Capping lists at 500 members doesn’t make sense given that Twitter can clearly already handle generating timelines for accounts that are following tens of thousands of people. Most users will not build such large lists for personal organization anyway, but consider cases such as building a list of conference attendees (Twitter arguably had its first major spurt of adoption at the 2007 SXSW conference as a way for attendees to communicate) or maintaining a company directory (Twitter maintains a list of all its employees – a list which apparently has special privileges as it currently has more than 500 members).

As for limiting accounts to 20 lists, again this seems very arbitrary and unnecessary. Most users will never create more than 20 lists, but those who would want to likely have good reason to. What if a university wanted to create a list of all the students in each of its courses? What if a large company wanted to create a list for each of its departments? What if the US government wanted to create a list of government officials on Twitter in each of the 50 states? As we continue to generate more and more information, curation becomes ever more necessary. Twitter should be trying to get ahead of the curve on this, especially with new services like Google+ entering the fray.

Errors aren’t consistent.

This is purely anecdotal but from my extensive experience in building Siftee I can say that the Twitter API throws a lot of random errors that frequently don’t accurately reflect whatever the problem may be. Sometimes Twitter is having a capacity issue but the API tells you you’re requesting some information that doesn’t exist (when it does). Sometimes it spits HTML at you rather than a properly formed error response. Experience indicates there are a wide variety of possible error messages one can receive from the Twitter API but unfortunately these messages aren’t documented anywhere. Yes, there is an error codes and responses documentation page but this seems to just scratch the surface of actual error messages you may receive.
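Every serious client ends up growing a defensive wrapper along these lines (a sketch; resp is a response object from the requests library):

    import json

    def parse_twitter_response(resp):
        # The body may be a proper JSON error, or raw HTML from a
        # capacity hiccup, and the two need very different handling.
        body = resp.text
        if body.lstrip().startswith("<"):
            raise IOError("Twitter returned HTML (status %d), "
                          "probably a capacity problem; retry later"
                          % resp.status_code)
        data = json.loads(body)
        if isinstance(data, dict) and "error" in data:
            raise ValueError("API error: %s" % data["error"])
        return data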

The crossdomain.xml for non-search APIs is too restrictive.

I saved this one for last as this is a pet peeve that only impacts developers using Flash (and possibly Silverlight). Twitter hates Flash (along with everybody else these days). Why do I say that? Because Twitter has since day one made it impossible for Flash to directly access any Twitter services except the Search API. This is very simply due to the lack of a sufficiently open crossdomain.xml file on Twitter servers which Flash needs to satisfy security constraints. I’ve been bringing this issue up for three years here, here, here, here and here. Siftee is currently built in Flash using the Flex framework and every request it makes cannot go directly to Twitter but instead has to be routed through a PHP proxy due to this extremely stupid constraint that no other major web service imposes. Regardless of how you feel about Flash, it makes no sense for Twitter to alienate a whole industry of developers simply because it can’t get around to reviewing this issue.
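For the record, the fix is a few lines of XML served at twitter.com/crossdomain.xml, the way the Search API already does it. Something to this effect:

    <?xml version="1.0"?>
    <!DOCTYPE cross-domain-policy SYSTEM
      "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
    <cross-domain-policy>
      <allow-access-from domain="*" />
    </cross-domain-policy>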

Conclusion

This post is not meant to be exhaustive. There are lots of other things Twitter could be doing. In fact we’re doing lots of great stuff with Siftee that can’t currently be done with the Twitter API. My goal was to cover what I see as some of the long-standing issues that should have been addressed a long time ago. I don’t mean to suggest that all of these things are easy to fix. I don’t work for Twitter and I am no expert in building large-scale web services. However after several years and raising more than a billion dollars I would have expected many of these issues to be non-existent.

I’m sure I’ll be adding to this as I expand my own knowledge of the Twitter API. In the meantime I look forward to seeing what comes of Twitter’s renewed interest in gathering feedback from developers.

Posted in Uncategorized.


Where have I been?

Three years is a long time to go between posts. I started this blog because I needed to get something out of my system. Since then I’ve worked at a great digital agency, gone through an intense personal relationship, become addicted to swing dancing and begun a new startup. I’ve met a ton of awesome people in the last few years and I’m excited about what’s in store for me. I hope I make the time to write more frequently. I don’t know if I will, but I do know I have a lot to say…

Posted in Uncategorized.


Getting Out the Vote: The Pen is Mightier?

Last Call for Change @ BAM

Recently I spent a few hours making calls for the Obama campaign at the Brooklyn Academy of Music. Although I was really inspired when I arrived and saw just how many people were plugging away and how excited people seemed, I left with a feeling of dissatisfaction. The reason is that the experience served as a vivid reminder of how poorly information technology is being utilized by our democracy.

The first problem to become obvious was that people in battleground states were getting multiple repeat calls from the campaign, as many as a dozen the same day in the final stretch. This was frustrating for both the voter and the callers. One person near me reached a voter who said "If I get one more call from Obama’s campaign I’m not going to vote for him". I reached a number of people who were patient enough with me to explain that this was the nth time they had been called and that yes they knew where their polling location was. I also called plenty of people who simply hung up as soon as I announced myself. Keep in mind that I was supposedly calling a list of "Obama supporters". I put that in quotes because it was never made clear whether that just meant registered Democrats or something else. Either way, in several hours of calling, I don’t think I reached one person who sounded particularly happy to hear from me. That might be because I was tasked with calling back only people that had previously been marked as either having been left a message or were otherwise unavailable (I was working off already marked up call sheets and instructed to mark this second round of calls using boxes rather than circles for effect). Other people seemed to be having some pleasant conversations, I guess I just lucked out.

The next obvious problem was that the majority of voters on the calling rolls either were not answering their phones or the numbers were out of date. For those of us calling Florida, we were first instructed to not leave messages if we were unable to reach the voter, then we were told we should. There was some implication that the campaign didn’t want to risk giving people information about their polling location if they were not Obama supporters, but that in the final hours of the campaign it made more sense to leave messages than not. The implication was made more explicit when we were told that if we reached a McCain supporter we should simply apologize for calling them and hang up, and not give them their polling location information. This really bothered me. The justification was supposedly that there was so little time left, it had to be spent only on Obama supporters, but I wonder if the instructions were any different earlier in the campaign. It had a tinge of voter suppression to it and that’s something I don’t want to be involved with whether you are for my candidate or not. On top of that, it’s potentially a missed opportunity for the campaign to breed unity by telling people "it doesn’t matter if you don’t want to vote for our candidate, we want to make sure you have the opportunity to vote, because it is important". I can guarantee you the McCain camp wouldn’t do that, and just like the good business practice of telling people where to go when you don’t have what they are looking for, it breeds trust.

The next thing that only became apparent after making calls for a while was that the system for flagging calls doesn’t entirely make sense. For each call completed, you had to mark one of Voted / Will Vote / Won’t Vote / McCain Supporter / Wrong Number / Left Message / NA. My first concern was that I definitely spoke to some people who sounded like they were telling me I had called the wrong number just to get me to go away. I understand that, I just don’t know what impact it has on voter information files once that goes back into the system (if it does at all). The NA option is to be used if the person you’re calling is not home, refuses the call, is hostile, asks to be called back, has a language barrier or for any other reason. We were told to explicitly write down "hostile" if that was the case. However, on the script sheet there is a note that says "For Data Entry, all NA = ‘Not Home’". Well hello, if that isn’t a boneheaded way to make sure that people who don’t want to get called end up getting called lots and lots of times, I don’t know what is.

None of that bothered me as much as the options for recording someone’s voting intention. We were told that if someone is going to vote for Obama we should mark "Will Vote", but if they were voting for McCain mark "McCain". The problem with this wasn’t immediately apparent to me until I reached my first voter who told me her intention was to vote, but who she was voting for was "none of your damn business". So, now it would seem I should mark that as "Will Vote", except that those are supposed to mean "will vote for Obama". I started wondering how skewed internal polling might be if every "Will Vote" counted that way, since I was reaching a lot more people who told me they would be voting but not who they were voting for than those who explicitly said Obama. Once this really sunk in, I decided to not even ask explicitly who they were voting for because most people seemed annoyed by it and it was uncomfortable, and if someone was a McCain supporter they would quickly make that known whether you asked or not. The same issue of course exists for people who said they voted early.

So looking back on all of this, I’m struck by the fact that most of this is primarily an IT problem. A lot of people, including campaign staff, were noting how the abundance of cell phones has changed the nature of phone drives. It was certainly amazing to be able to show up with a cell phone and be making calls five minutes later. But what is even more amazing to me is that no one is asked to bring a laptop if they have one. Looking around, I would guess that the majority of people making calls owned a laptop. Why were we all working off of sheets of paper, many of which had already been marked up several times by other volunteers? Why weren’t we working off our laptops and WiFi, requesting a single voter at a time to call and recording the results directly – thus bypassing the need for campaign staff to be constantly tallying, and preventing unnecessary repeat calls (not to mention the difficulty in keeping track of both your script and your place in the middle of a long list of names)? Supposedly such a tool exists for people who want to make calls from home, so why weren’t we encouraged to use that at the gathering?

I frankly wouldn’t have been surprised by any of this at other candidates’ campaign drives, but this is for Obama, who supposedly "gets" technology more than any other candidate up to this point and has run the most well organized campaign ever seen by the Democratic party. How can I believe Obama’s campaign promise to streamline the healthcare system through better use of Information Technology when it isn’t even getting done in his own campaign for the presidency? This isn’t even touching on our pathetic electronic voting systems (I’ll save that for after the election). How is it that any of a hundred two-person web startups out there can properly keep track of their user base but the Democratic party still can’t? Hopefully Obama’s CTO will get serious on this so that by the second decade we could get a little more caught up with the new millennium, but I’m not holding my breath.

Posted in Politics.


Thinking About Sound and Software Interfaces

I’m always exploring concepts surrounding User Interface development, and a video I recently came across got me thinking about sound and how it relates to interactive experiences. Audible cues are the bastard child of UI design, to be used sparingly or not at all. There is a rather low tolerance for sound cues in desktop applications, and almost none for anything delivered in a web browser. Yet we all know that when done right, sounds can deeply affect our experience. I’ll return to that in a second, but first the clip:

This is a video demonstrating the McGurk effect, a deep interconnection between our auditory and visual processing systems. Watch the video and listen to the sounds being uttered. Then close your eyes and play the video again.

I think this effect is quite profound and to me serves as a reminder of a resource we may be overlooking when it comes to designing effective software interfaces. There is only one type of software that I regularly interact with that uses sound heavily for interface cues, and that is video games. The most obvious usage is to convey positioning in space. In a first-person shooter, you rarely get visual cues of a monster creeping up behind you, but you get plenty of auditory ones. In real-time strategy games there is often a concept of a radar showing activity across the game map, but it is usually the sounds of battle in the distance that alert you to something going on first.

In a game I’ve been playing recently, Company of Heroes, text alerts are displayed on the screen when a unit comes under attack, but they are easy to ignore. What isn’t easy to ignore is a tank commander screaming "holy shit, they’ve got Panzers!" which immediately gives me a sense of three things: who is under attack (voices for different types of units are very distinguishable), roughly where they are (whether the audio is off to my left or right and how loud it is) and even what kind of unit is attacking (a German Panzer tank). In older RTS games, units would often acknowledge a command by saying something like "understood" or "moving out", but in CoH they will say things like "quick, get on that turret" which lets me know that not only did they get the move order but that I indeed properly selected a turret to be captured.

Time and state can also be richly enhanced by auditory cues. Consider the classic Super Mario Brothers. There is always a clock ticking, but no one pays attention until the music speeds up indicating you are almost out of time (and actually makes you feel like you need to move faster). Or consider when Mario grabs an invincibility star. Even though there is a visual cue (a blinking Mario), it is the change to "invincibility music" that we are more aware of, including the cue for when the invincibility is about to run out. If you don’t think so, try playing without the sound on.

Okay so how does this relate to non-gaming software? In my mind right now there is only one common usage of audio cues for indicating what is going on in an application, and that is alerts. We often use little chimes to indicate things like an error occurring or some incorrect key being pressed. I think a more useful example is that of the dings used to notify you of a response in an instant messaging client. Many people may turn those off if they get too annoying. Gmail chat has a great adaptation where no audio cues are used if the chat is my currently active window, since I can see what is going on, but they are used if I have switched away to some other window. But beyond alerts, I am hard pressed to come up with really compelling examples of sound enhancing the functionality and, more importantly, usability of an application.

One area to explore is in conveying the general "ambiance" of an application. When I was working at 9mmedia we were approached by a client who was trying to create a more engaging interface for a network monitoring application. This app would have people sitting watching it all day for emergencies, and they wanted some way to make it more interesting so people wouldn’t fall asleep at the wheel. The end result was an app that looked like a radar station in a submarine, complete with a sweeping band showing the status of servers as blips. To enhance the effect, we used sound during the log in process, where upon entering correct credentials, the login box would seal away behind a door and two metal plates would unlock and open to reveal the control panel – all using lots of heavy clanking and whirring noises. The effect made you feel more like the captain of a nuclear submarine than a low-level sys admin.

Is that example applicable to all software? Certainly not. But at the same time if I asked you to think of how you could make a real-time network monitoring app more compelling, is that the kind of thing you would come up with? More often than not the answer will be no, and that might be a missed opportunity. Here’s another example I am just coming up with on the fly: imagine you are searching for a good bar in your neighborhood. You might search on Google Maps and then click around to read some reviews. What if the review information was instead converted to an auditory cue such that as I panned around the map (perhaps even when walking down the street using street view) I heard "bar noise" that was louder at more popular locations, just like the real world. I could easily pan around searching for auditory "hot spots" and perhaps avoid the time consuming process of clicking around looking for reviews.

I’m trying to think about ways in which even more basic elements of software UI could be significantly enhanced via sound. For example, are there auditory cues which would actually enhance basic functionality like cut/paste, zoom in/out, scroll, window management, drag-and-drop, etc, or are they all doomed to be gimmicky?

Posted in User Interface.


An Abbreviated History

Going forward, I expect much of my writing to be about two general areas of interest of mine, entrepreneurship and human-computer interaction. I thought a good place to start would be by sharing a bit of my background and how my experiences have shaped my interests and vice versa.

My first real computer was a Headstart Turbo 286. I vaguely remember writing something in my third grade class about how excited I was when my parents bought it. We had a family friend who was in some technical industry come over to set it up for us, and I remember watching hundreds of lines of text scroll by the screen as he installed DOS and Windows 3.1 and whatever else. I had no idea what was going on, but I knew I wanted to understand.

Computer Shopper Magazine

My first fascination was really with the hardware. Maybe it was from a childhood spent playing with Lego and Erector sets, I don’t know. By the time I was thirteen I convinced my parents I had read enough magazines to build a computer myself. I ended up assembling a Pentium 120 machine in a monster tower case, with parts I had spent hours upon hours researching. When I finally got the thing working after many headaches (including blowing one power supply, scaring the begeezus out of me that I had just killed the whole thing and wasted my parents’ money) it was the first time I truly had the feeling I could do anything I set my mind to. I think it really set me free on a course of exploration for the rest of my life that I am truly grateful for.

I was fortunate again when I discovered the fledgling computer science program at Stuyvesant High School. Single-handedly run by the best teacher I’ve ever had, Mike Zamansky, this is where I was introduced to the mysterious world of code, my best high school friends, and two of my future business partners. I learned the ins and outs of procedural programming with Turbo Pascal and formed a particular fondness for recursion and linked-lists. After that it was a combination of C and Assembly programming while learning the fundamentals of 3D rendering.

I was a bit reluctant to go to college. It was 1999 and there were kids my age getting rich writing HTML. I thought if I stayed in school I would miss the boat, but I did it anyway, enrolling at the University of Michigan. U of M had a somewhat strange offering in that you could major in Computer Science through either the liberal arts or engineering colleges. Something told me that going in through liberal arts was right for me. I’m not sure why… maybe it was because I grew up in SoHo with sculptor/painter parents (my mother was commissioned for the Rothenberg Memorial permanently on display in Stuyvesant).

I spent two summers interning for Fidelity Investments in Boston, working in the Fidelity Center for Applied Technology. The Center is an R&D facility focused on employing experimental technologies in the workplace. It was fascinating to see the challenges of applying new technologies such as voice-over-Internet and biometric security in real world settings. It was here that I first learned Flash, getting to try my hand at the crossroads of programming and visual design in order to build interactive pieces that would communicate some of what the Center was researching.

At school though, I was struggling. I was doing okay following the core CS curriculum, but I was bored out of my mind. I felt I was rehashing a lot of what I had already learned in high school and I found academic computer science dry and unfulfilling. So, I got an apartment near campus and took a term off.

Taking Time Off

The next six months were pivotal in shaping my outlook on life. I spent a great deal of time reading, mostly futurist manifestos and books on entrepreneurship. Books like The Design of Everyday Things and The Inmates Are Running the Asylum resonated with my growing belief that many of the frustrations we experience with technology should not be approached as engineering challenges, but rather as design / psychology challenges. Books like Engines of Creation and The Age of Spiritual Machines completely and utterly rewired my outlook on our future.

As a result, I had a newfound desire to plot my own course of education, and I put together a proposal for a specialized degree in Human-Computer Interaction, a field focused on understanding how people interact with technology. It was a lengthy process and ultimately my proposal failed to win approval from a university review board. So, I decided to switch to the General Studies program, which allowed me to do essentially the same coursework but graduate with a degree that some joked was originally created in order to graduate the university’s football players. I took courses in cognitive psychology, computational linguistics, information technology and global politics, virtual reality environments, entrepreneurship, complex systems and a host of others. Many of these were graduate level, and ironically none of them would have counted toward a degree in computer science. Taking time off and then graduating with a General Studies degree empowered me to believe I could pursue the things that were important to me without having to follow the "standard" path, and it was very fulfilling.

After graduating, I began working for a former professor who had been a successful consultant to the manufacturing industry specializing in Lean Manufacturing. I was helping him build multimedia training material and he was teaching me principles of entrepreneurship. It was exciting. Then one day I received a call that he had crashed a small aircraft he was piloting en route to deliver some of this material to a client. He and one of my co-workers, a 24-year-old engineering student, had perished. Had some circumstances been different I probably could have been on that plane.

I returned home to New York a bit shell-shocked. I got my bearings in part by joining a fledgling web design firm that specialized in rich interactive sites. It was here that I would hone my Flash development skills and learn to work in a punishingly fast-paced environment. It was a wild ride, but after two years it was time to move on. I was getting eager to start my own company. I shifted into doing freelance work while I tried to discover a "big idea" I could invest myself in. At some point one came to me, and soon Eluciv Knowledge was born, the company whose demise I recently chronicled. And now again I find myself with both a sense of loss and an eagerness to discover what’s next…

Posted in Uncategorized.