
Hacking Twitter on JetBlue

21:41 November 24th, 2007 by terry. Posted under companies, me, python, twitter. 7 Comments »

I have much better and more important things to do than hack on my ideas for measuring Twitter growth.

But a man’s gotta relax sometime.

So I spent a couple of hours at JFK and then on the plane hacking some Python to pull down tweets (is this what other people call Twitter posts?), pull out their Twitter id and date, convert the dates to integers, write this down a pipe to gnuplot, and put the results onto a graph. I’ve nothing much to show right now. I need more data.
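
In case that description is too compressed, here is a rough sketch of the pipeline. It assumes the statuses/show XML endpoint (the same one the shell loop below hits), uses the requests library for brevity, and does almost no error handling. A sketch, not the exact code I ran:

import calendar
import subprocess
import time
import xml.etree.ElementTree as ET

import requests

def fetch(tweet_id, auth):
    """Return (id, unix time) for a tweet, or None if it isn't there."""
    url = 'http://twitter.com/statuses/show/%d.xml' % tweet_id
    response = requests.get(url, auth=auth)
    if response.status_code != 200:
        return None
    root = ET.fromstring(response.content)
    created = root.findtext('created_at')  # e.g. 'Tue Mar 21 20:50:14 +0000 2006'
    when = calendar.timegm(time.strptime(created, '%a %b %d %H:%M:%S +0000 %Y'))
    return int(root.findtext('id')), when

def plot(points):
    """Write (time, id) pairs down a pipe to gnuplot."""
    gp = subprocess.Popen(['gnuplot', '-persist'], stdin=subprocess.PIPE,
                          universal_newlines=True)
    gp.stdin.write("set xlabel 'time'; set ylabel 'tweet id'\n")
    gp.stdin.write("plot '-' with points\n")
    for tweet_id, when in points:
        gp.stdin.write('%d %d\n' % (when, tweet_id))
    gp.stdin.write('e\n')
    gp.stdin.close()

if __name__ == '__main__':
    auth = ('terrycojones', 'xxx')  # placeholder password, as in the loop below
    results = [fetch(i, auth) for i in range(20, 200000, 5000)]
    plot([r for r in results if r is not None])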

But the story with Twitter ids is apparently not that simple. While you can get tweets from very early on (like #20 that I pointed to earlier), and you can get things like #438484102 which is a recent one of mine, it’s not clear how the intermediate range is populated. Just to get a feel for it, I tried several loops like the following at the shell:

# Probe every 5000th status id to see which ones exist.
i=5000

while [ $i -lt 200000 ]
do
  wget --http-user terrycojones --http-passwd xxx \
    http://www.twitter.com/statuses/show/$i.xml
  i=`expr $i + 5000`
  sleep 1
done

Most of these were highly unsuccessful. I doubt that’s because there’s widespread deleting of tweets by users. So maybe Twitter are using ids that are not sequential.

Of course if I wasn’t doing this for the simple joy of programming I’d start by doing a decent search for the graph I’m trying to make. Failing that I’d look for someone else online with a bundle of tweets.

I’ll probably let this drop. I should let it drop. But once I get started down the road of thinking about a neat little problem, I sometimes don’t let go. Experience has taught me that it is usually better to hack on it like crazy for 2 days and get it over with. It’s a bit like reading a novel that you don’t want to put down when you know you really should.

One nice sub-problem is deciding where to sample next in the Twitter id space. You can maintain something like a heap of areas – where area is the size of the triangle defined by two tweets: their ids and dates. That probably sounds a bit obscure, but I understand it :-) Gradient of the growth curve is interesting – you probably want more samples when the gradient is changing fastest. Adding time between tweets to gradient gives you a triangle whose area you can measure. There are simpler approaches too, like uniform sampling, or some form of binary splitting of interesting regions of id space. Along the way you need to account for pages that give you a 404. That’s a data point about the id space too.
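
In case that really is too obscure, here is a sketch of the heap idea. It assumes a sample(id) function, like the fetch above, that returns an (id, time) pair for an existing tweet or None for a 404; the refinement policy, and the naive 404 handling, are just one way to do it:

import heapq

def area(p1, p2):
    """Area of the triangle defined by two (id, time) sample points."""
    (id1, t1), (id2, t2) = p1, p2
    return abs(id2 - id1) * abs(t2 - t1) / 2.0

def refine(sample, lo, hi, n):
    """Take up to n more samples between the known points lo and hi,
    always splitting the interval whose triangle has the largest area."""
    points = [lo, hi]
    heap = [(-area(lo, hi), lo, hi)]
    while heap and n > 0:
        _, p1, p2 = heapq.heappop(heap)
        if p2[0] - p1[0] < 2:
            continue  # no ids left between these two samples
        mid = (p1[0] + p2[0]) // 2
        n -= 1
        result = sample(mid)
        if result is None:
            continue  # a 404: a fuller version would record it and re-split
        points.append(result)
        heapq.heappush(heap, (-area(p1, result), p1, result))
        heapq.heappush(heap, (-area(result, p2), result, p2))
    return sorted(points)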


Twitter creeps in

21:30 November 21st, 2007 by terry. Posted under me, tech, twitter. 3 Comments »

I often notice little things about how I work that I think point out value. One sign that a piece of UI is right is when you start to look for it in apps that don’t have it. For example, after I had started using mouse gestures in Opera I’d find myself wanting to make mouse gestures in other applications. When mice first started to have a wheel, I was skeptical. Support of the mouse wheel was not universal across applications. When I found myself trying to scroll with the mouse wheel in applications that didn’t support it, I knew it was right.

Tonight I came home and went to my machine. The first thing I did was to check what was going on in Twitter. That’s pretty interesting, at least for someone like me. I’ve been sending email on pretty much a daily basis for 25 years. It’s pretty much always the first thing I look at when I come back to my machine. Occasionally these days I find myself first going into Google reader to see what’s new, but that’s pretty rare and I might be looking for something specific. Tonight, I think for the first time, Twitter was where I went to – and not just for the general news, but for communications between and about people I know or am interested in. Much more interesting than looking through my email.

I’m one of those who thought Twitter was pretty silly when I first signed up (Dec 2006). I only used it once, and also found it intolerably slow. But it’s grown on me. And I find definite value there.

A few examples:

  1. I’d mailed Dick Costolo a few times in the past. Then I saw him twittering that he was drinking cortados. So I figured he must be in Spain. I mailed him, and he was. As a result I ended up at the FOWA conference in London the next day and met a bunch of people.
  2. On Tuesday I went out and bought a Wii in Manhattan to take back to my kids in Spain. I twittered about heading out to do it. I got an email a bit later from @esteve telling me to take the Wii back as they are region-locked. So I did.
  3. A week or so ago I was reading some tweets and noticed that someone had just been out to dinner in Manhattan with someone else that I wanted to meet. So I sent a mail to the first person and was soon swapping mails with the second.
  4. I’ve noticed about 5 times that interesting people were going to be in Barcelona and so I’ve mailed them out of the blue. That’s really good – people on holiday are often happy to have a beer and a chat. I’d have had no idea they were going to literally be outside my door were it not for Twitter.

Not exactly Brownian motion in Manhattan

22:08 November 19th, 2007 by terry. Posted under companies, tech. 10 Comments »

Today after some meetings I went out for a walk. I’m staying on 12th Street between 5th and University in Manhattan.

I had intended to “just wander around” pretty much at random. That’s what I really felt like too. But in the back of my mind, not quite so far back that I wasn’t aware of it, my brain was making sure that, like it or not, I went to the Apple store on the corner of 5th Avenue and Central Park.

I really have no need of an Apple store. There’s nothing I would buy, nothing I need. But.

So off I wandered… Broadway, 6th Av, 5th Av. I stopped briefly in many stores, had a coffee and a muffin, tried to tell myself that I actually wasn’t going to the Apple store. But.

I saw iPods and iPhones aplenty along the way. Hundreds of them. All identically priced. Best Buy, CompUSA, Circuit City, all the small electronics shops on 5th Av. No need, no need at all to go to the Apple store. None.

I’m walking up 5th Av in the boring super-rich area, Cartier, Dunhill, DeBeers. There can be no doubt whatsoever that I am heading to the Apple store. Most of my mind doesn’t want to go, but my legs and body seem determined. They know I need it.

And there it is. Amazing. I’ve been in several of these stores before, including this one, but there’s something you just have to see and feel. Maybe it’s the church of the 21st century… people are drawn in to worship an abstract god, to kneel at the altar and finger the icons.

It really is amazing. To me the Apple store is about the hippest place in Manhattan. Here you get to see all sorts of cool cats just hanging out with their favorite hardware. The place is full. Full of people from all over the world who’ve come to buy Apple gear. The place has a very definite atmosphere, and it’s not the atmosphere of a regular computer store. There are hundreds of Apple products out, they’re all on, and people are using them – surfing the web, reading email, listening to music, marveling. Spend half an hour in there people watching, and you want to run out and buy AAPL stock.

Apple and Nokia are two companies that really understand the importance of appearance, design, and fashion in technology. I think Nokia were the first company to see clearly that a phone is not just a phone – it’s a statement about yourself. It’s something you take out and leave on the table at the cafe, or casually flip open when you need to impress someone or get laid. Apple understands it even better. I’ll walk nearly 50 blocks just to get a fix – not to buy, just to look at the products, look at the people, be amazed at it all.

Fortunately I’m old enough to know that I don’t really need any of those shiny objects. I have a first generation iPod that I never use. I have a dead-simple phone that I don’t feel any need to upgrade. I haven’t bought myself a computer in I don’t know how long – maybe 10 years (I always get them through work). I’m not even sure that I’d own a computer if I didn’t work from home. But I sure do like to look at hardware. The new iPod nano is extraordinarily beautiful – dimensions, sleekness, feel, everything about it is divine – and at $149 (4GB) or $199 (8GB) it doesn’t feel expensive. But I know I simply wouldn’t use it. What a pity!


The Mahalo-Wikipedia-Google love triangle

00:12 November 18th, 2007 by terry. Posted under companies. 28 Comments »

Lots of people seem to like dumping on Mahalo and Jason Calacanis. For example, Andrew Baron recently posted about Why Mahalo is Fundamentally Flawed.

Try Googling “Mahalo sucks” and you’ll get about 232,000 hits. Take your pick of the highly critical coverage.

Some of the negative commentary on Mahalo is probably due to professional and personal jealousy. Some of it is due to the fact that it’s early days yet. And I think some of it may be due to Jason happily telling people to look left while he goes right.

How can Jason raise money for Mahalo at valuations north of $100M? Surely there must be a revenue plan that holds water? If you want to argue that Mahalo is a failure and that Jason is simply a ceaseless self-marketer full of hot air, you’ll need to argue that some of the same things are true of Mahalo’s investors. Or maybe we’re in a bubble and they’ve all simply lost it.

Here’s what I think is going on.

Firstly, I think Jason is using a little smoke and mirrors when he calls Mahalo a search engine and frequently compares Mahalo’s “search” results to Google’s. With few exceptions, everyone seems to be buying it! With few exceptions, people compare Mahalo with Google – presumably because Jason tells them to and because he talks about being a search engine. And, with few exceptions, the technorati tell us that Mahalo is a pretty crappy search engine.

I agree, because Mahalo is not a search engine. Putting a box labeled “Search” on your web site to dig hits out of your own content does not make you a search engine – if it did, millions of sites would qualify. Passing queries off to Google and showing the results does not make you a search engine, either. Telling people to compare your content with Google’s results does not make you a search engine. Nor does putting the words “search engine” in your company’s strapline.

Mahalo will never be a search engine, and almost certainly does not want to be a search engine. That would be suicidal.

I believe their strategy is entirely different and that the relevant comparison is not with Google, but with Wikipedia.

Mahalo is a rapidly growing collection of carefully curated content. Mahalo is Wikipedia with a different model of control, ownership, and content creation. It’s a benevolent dictator with a purchase agreement instead of a loose anarchy with the GNU Free Documentation License.

If you want to compare Mahalo to something, compare it to Wikipedia. Jason is a huge fan of Wikipedia. And here he is begging Jimbo Wales not to leave $100M/yr on the table. Interesting.

Right now Mahalo has roughly 25K pages. Google has information on, let’s say, 10 billion pages. By this simplistic measure, Google is about 400,000 times bigger than Mahalo. You’re not going to catch or compete with Google using people to make content. Yes, you can use Google for things you don’t have static pages for, as Mahalo does. But Mahalo is not a search engine. Never will be.

Now consider Wikipedia. Wikipedia has 1.2M English pages. That means that, in English, Wikipedia is a mere 48 times larger than Mahalo! Now we’re talking. Mahalo are currently adding something like 1,000 pages a week. Suppose Jason manages to double that quite soon. That would be 100K pages a year, or about 8.3% of Wikipedia annually. So I think it’s conceivable that Mahalo could catch Wikipedia. Even if they keep a steady ship and only gain linearly they could easily be 35-40% the size of Wikipedia in 4 years’ time.
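
The back-of-the-envelope arithmetic, using only the numbers in the paragraph above, for anyone who wants to check it:

mahalo = 25 * 1000
wikipedia = 1200 * 1000
print(wikipedia // mahalo)          # 48: Wikipedia is 48x bigger (in English)
per_year = 100 * 1000               # ~2,000 pages/week at the doubled rate
print(100.0 * per_year / wikipedia)                 # 8.3% of Wikipedia per year
print(100.0 * (mahalo + 4 * per_year) / wikipedia)  # ~35% of Wikipedia in 4 years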

But sheer number of pages is only part of the story. Because the distribution of search requests will follow some kind of power law, you can pick up (say) half of all search requests by covering only a small number of the most popular topics, and, as always, leave the long tail to Google.

So with a small finite amount of work, you can cover a very large chunk of Wikipedia. And I think that’s exactly what Mahalo are aiming to do.
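
A toy illustration of why: suppose queries follow a Zipf distribution (the k-th most popular query occurs with frequency proportional to 1/k). The query count here is made up, but assume 10 million distinct queries. How many of the most popular ones must you cover to handle half of all requests?

def zipf_coverage(n_queries, target):
    """How many top queries cover the target fraction of all requests?"""
    total = sum(1.0 / k for k in range(1, n_queries + 1))
    covered, k = 0.0, 0
    while covered / total < target:
        k += 1
        covered += 1.0 / k
    return k

print(zipf_coverage(10 * 1000 * 1000, 0.5))  # about 2,400 queries

A few thousand carefully made pages out of ten million possible queries: that is the long-tail argument in a nutshell.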

A few weeks ago I pulled down all of Mahalo’s URIs for another project. Here’s a tiny sample – and I really did pick this out at random:

    http://www.mahalo.com/Valerie_Plame_Affair
    http://www.mahalo.com/Violence_on_Television
    http://www.mahalo.com/Violent_Crime_Rate
    http://www.mahalo.com/Virginia_Tech_Report
    http://www.mahalo.com/Voting_Machine_Controversy
    http://www.mahalo.com/Walter_Reed_Army_Medical_Center
    http://www.mahalo.com/War_Wounded
    http://www.mahalo.com/Washington_D.C._Lobbying_Scandal
    http://www.mahalo.com/Abdullah_Gul
    http://www.mahalo.com/Alan_Garcia
    http://www.mahalo.com/Alex_Salmond
    http://www.mahalo.com/Angela_Merkel

So what? you might ask. Well, let’s replace www.mahalo.com with en.wikipedia.org/wiki in the above. We get:

    http://en.wikipedia.org/wiki/Valerie_Plame_Affair
    http://en.wikipedia.org/wiki/Violence_on_Television
    http://en.wikipedia.org/wiki/Violent_Crime_Rate
    http://en.wikipedia.org/wiki/Virginia_Tech_Report
    http://en.wikipedia.org/wiki/Voting_Machine_Controversy
    http://en.wikipedia.org/wiki/Walter_Reed_Army_Medical_Center
    http://en.wikipedia.org/wiki/War_Wounded
    http://en.wikipedia.org/wiki/Washington_D.C._Lobbying_Scandal
    http://en.wikipedia.org/wiki/Abdullah_Gul
    http://en.wikipedia.org/wiki/Alan_Garcia
    http://en.wikipedia.org/wiki/Alex_Salmond
    http://en.wikipedia.org/wiki/Angela_Merkel

And guess what? All those URIs actually work! See below for a possible reason for this uncanny coincidence.

Research question: what percentage of Mahalo URIs work as Wikipedia URIs with the above simple substitution? I may do this test when I get a little more time. I bet the answer is high.
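
The test itself would be only a few lines. Something like this sketch, assuming a plain-text file of Mahalo URIs, one per line (the filename is made up), and being polite about the request rate:

import time
import requests

def wikipedia_equivalent(mahalo_uri):
    return mahalo_uri.replace('http://www.mahalo.com/',
                              'http://en.wikipedia.org/wiki/')

def hit_rate(uris):
    """Fraction of the URIs that resolve at Wikipedia after substitution."""
    hits = 0
    for uri in uris:
        response = requests.head(wikipedia_equivalent(uri),
                                 allow_redirects=True)
        if response.status_code == 200:
            hits += 1
        time.sleep(1)  # don't hammer Wikipedia
    return float(hits) / len(uris)

uris = [line.strip() for line in open('mahalo-uris.txt') if line.strip()]
print('%.1f%% work as Wikipedia URIs' % (100 * hit_rate(uris)))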

Ask yourself again: does Mahalo look more like Google or more like Wikipedia?

The idea of Mahalo-as-search-alternative-to-Google is just Jason operating Mahalo in stealth mode in broad daylight. “Hey, Rocky, watch me pull a search engine out of my hat! Oops! That’s not a search engine. I swear there was a search engine in there somewhere.”

How is Mahalo different from Wikipedia?

A big one is that Mahalo owns all its content. If Mahalo puts one of your pages on its site, you’ll first sign a purchase agreement in which the

    Seller hereby irrevocably sells, grants, assigns, conveys and transfers to Mahalo, exclusively and forever, Seller’s entire right, title and interest in and to the SeRPs

and in which you warrant that the content is legit, in which you fully indemnify Mahalo, and in which you agree to let them be your agent and attorney should they need to take some action to obtain or protect the content.

In consideration you get $10-$15 which you can have in cash. Or, in a wonderfully ironic and masterful gesture, you can have your earnings donated to the Wikimedia Foundation! That’s just brilliant, I love it. How can you not be in awe of that? The guy’s a genius.

Talking of genius, just look at the language on the payment details page at Mahalo: “A Greenhouse Guide begins their career in the Greenhouse…”. You see? Writing articles for Mahalo is the beginning of a career. George Lakoff would probably count that as a classic example of framing (also see here).

Unlike Wikipedia, Mahalo owns every word of its content. That means they can sell it. That they can be acquired. But who would want to acquire Mahalo? Wait.

What other differences are there between Wikipedia and Mahalo?

Another big one is the millions of links on the internet that point to Wikipedia pages. Those little tubules that make up the internets, with Google’s PageRank worming its way down each and every one, assigning and passing on credit.

There are two things here: 1) the links themselves and 2) the high consequent position Wikipedia’s pages have on Google.

Can Mahalo get large numbers of people to link to their pages? If the pages are any good (and they are), then why not? Plus, it may be that Mahalo can catch Wikipedia in terms of how many people link to them.

According to the Netcraft October 2007 Web Server Survey, the number of servers on the net has been growing at an amazing 5% per month!

That’s just the rate of increase of new servers, not the rate of new pages being put onto existing sites. Let’s assume the Netcraft server number isn’t too far from the overall growth, and that the web roughly doubles in size every two years. That means if the size today is X, then in 4 years, towards the end of Jason’s horizon, it will be size 4X. If so, there are 3X pages yet to come into existence. The creators of these will have a choice to point links at Wikipedia or Mahalo. If popular momentum can be shifted to Mahalo, it can grab a large chunk of the link pie.

All of which brings us, inevitably, to Google.

Quick survey question: when you need to find something that you know you’d be happy to read in Wikipedia, do you first go to Wikipedia, find English (or your language), find their search box, enter your query, and click on the link? Or do you go to Google and take its Wikipedia link?

I thought so – you use Google. It’s a uniform way to get to things, it’s likely integrated into your browser, and they generally do a better and faster job of indexing sites’ content than the sites do themselves. So the existence and massive popularity of Wikipedia drives traffic to Google. And Google of course drives traffic to Wikipedia. The two of them are dating. But Wikipedia is not the perfect lover: they stubbornly refuse to put ads on their pages, to share the love. Along comes Jason Calacanis, then at AOL, to whom this is all very clear. He tells Wikipedia in no uncertain terms that with all that traffic they could make $100M per year from ads on just the home page. He points to a conservative estimate of the worth of Wikipedia at $600M, and his own estimate is $5B. Hmmmmm. What’s an entrepreneur to do when he sees someone leaving that much value on the table?

Back to Google. They would like to have more content. Traditionally, when you got back a page of their search results, you wouldn’t see links to pages on Google – that wouldn’t make sense: there were no pages on Google, after all. Google was supposed to point you to other pages. It was an index to help you find the things you actually wanted to look at. That was the old model. These days, Google is buying content (e.g., YouTube) and pointing their search results at their content, neatly taking the ad revenue in both places. All the better if the content comes with indemnification.

You can see where I’m going. Mahalo already does advertising with Google. In fact, they’re already a premium AdSense publisher, to the surprise of some. If ads on the single front page of Wikipedia could generate $100M annually, what could ads on all Mahalo pages generate if Mahalo grows to rival Wikipedia?

And… who weighs the importance of links (and other unknown factors) in Google’s results page? Yes, of course, Google does. According to this Fast Company article, Mahalo gets 65% of the revenue Google makes when Mahalo sends its users into Google. And Google makes money when it sends its users into Mahalo.

If there’s really (say) $1B of value to be had by building a successful commercial version of Wikipedia, you can see why Google might have some interest in nudging links to Mahalo a little higher in its results. Maybe even higher than the equivalent page for Wikipedia. Now would be a good moment to remember that I illustrated above just how trivial it can be to match up equivalent Mahalo and Wikipedia pages… Got it? User enters a query, Google does the search and finds a highly-linked Wikipedia page, then in an instant they can construct and display a link to the equivalent Mahalo page instead, optionally displaying the Wikipedia page below the fold. Would that qualify as evil?

All of which leads to a very clear answer to my “who would want to acquire Mahalo?” question. Interestingly, Google will want to wait until Mahalo is big (they will know exactly when, supposing Mahalo keeps using AdSense). They want Mahalo to be independent and with strong momentum before they turn the corporate intake valve in the Mahalo direction.

Can Jason build a viable alternative to Wikipedia? I bet he can. He has the lessons of Wikipedia. He doesn’t have the anarchy factor. He has no spam. He knows what he’s doing, and he’s in control. It’s a content play, and Jason is a content guy. An editor with a track record of building valuable content in this way. He’s playing to his strengths. The engineering is not nearly as daunting as building a Google. He has the money. As he ramps it up he’s going to have more money.

Who’s going to stop him? Certainly not Google – that’s not in their interest at all. Almost certainly not Wikipedia – unless they start putting up ads and funneling large amounts of money back to Google. And Jason is unlikely to shoot himself in the foot either – quite the reverse.

So if that’s the strategy, and if he’s on track with content (as he seems to be), and if the content is passably good, or better (which it is), and if he has a good understanding with his “friends at Google” (which you can bet he does — let’s not forget the Sequoia factor either), and if the revenue numbers are about right, then a $175M valuation for an upcoming round to accelerate things might look like a steal.

Along the way, Jason gets to have a quiet inner smile at all the people whining about how Mahalo is a crap search engine. He feeds the fire all the while, telling them to go ahead, make his day, and compare Mahalo’s results to Google’s (but not Wikipedia’s). Misdirecting attention towards Google and having people write him off probably suits him just fine. Meanwhile, they’re getting on with the real mission.


Flakey Twitter and the use of consecutive ids

05:54 November 16th, 2007 by terry. Posted under companies, tech, twitter. 2 Comments »

Twitter was just inaccessible for maybe a couple of hours. Prior to that there was a 9-day gap in their timeline, noticed by at least a few people. And twitters I send quite regularly fail to show up at all.

I wonder what could be going on over there? Things certainly don’t feel very stable.

A friend signed up tonight. Using the Twitter API you can see her id. It’s a bit over 10 million. You can also see the id of her first twitter, a bit over 417 million. The earliest twitter available on the system is number 20 “just setting up my twttr” sent at 20:50:14 on Tue Mar 21 2006 by Jack Dorsey who has user id 12 (the lowest user I’ve seen).

Given that Twitter seem to be using consecutive ids for users and twitters, and that you can pull dates out of their API, it would be pretty easy to make graphs showing growth in users and twitters over time. You could probably also infer downtime by looking for periods when no twitters appeared; that would be pretty easy too. Beyond a certain point in time it would be very accurate (i.e., when there are so many twitters arriving that a twittering gap is suspicious), and you could calculate confidence estimates.
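
The downtime inference could be as simple as this sketch, assuming you have already collected a list of (tweet id, unix time) samples via the API. Sort by id, look at the seconds-per-tweet rate between consecutive samples, and flag spans that are far slower than the median (the factor of 10 threshold is arbitrary):

def suspicious_gaps(samples, factor=10.0):
    """Yield (start, end) times where tweets arrived far slower than usual."""
    samples = sorted(samples)
    rates = []
    for (id1, t1), (id2, t2) in zip(samples, samples[1:]):
        if id2 > id1:
            rates.append((float(t2 - t1) / (id2 - id1), t1, t2))
    typical = sorted(rate for rate, _, _ in rates)[len(rates) // 2]  # median
    for rate, start, end in rates:
        if rate > factor * typical:
            yield start, end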

I don’t have time for all that though.

But I wonder if Google did something like that as part of their competitive analysis when they decided to buy Jaiku, or if Twitter’s investors did it, and how the numbers would match up with whatever Twitter management might claim. I’ve no idea or opinion at all about any of that, btw. But if it were me, I don’t think I’d expose all that information by using consecutive ids for users and their twitters.


Is Andrew Parker secretly running Union Square Ventures?

18:06 November 14th, 2007 by terry. Posted under companies, me. 6 Comments »

Fred Wilson stirred up the entrepreneurial blogosphere 6 months ago with a series of posts wondering about the influence of founder age on startup success. I wrote one of my typically long comments.

Today I was making myself a coffee, and thinking about how fast/slow I can move, and how that’s changed over the years. When I was 24 I perhaps had more energy, but I often acted in a quite unfocused way. Now I’m 44, and I still have tons of energy. E.g., I was up coding last night until 6:30am, and then got up at 10am this morning and continued, so I’m not exactly loafing around with slippers and a pipe reflecting on my glory days. But I also have 3 kids, and other things going on. I have to act in a much more focused way or I couldn’t do the things I want to do.

But…, I then thought, the life of your average VC probably has some strong similarities. A couple of kids, insanely busy when working, regularly carving out quality time for family, needing to stay very organized and on top of things, needing to keep multiple balls rolling, etc. Those thoughts led me to reconsider Fred’s posting, but in the context of VCs.

Might it be that the best VC general partners would actually be a bunch of 24-year-olds? Of course they could have some older guys as analysts. What do 40-50 year-old VCs have that 20-30 year-olds don’t that makes them more qualified and better as VCs? If you want to argue that experience makes the older better, you probably need to argue that for entrepreneurs too. If you want to argue that the energy of youth makes for a better entrepreneur, you might need to argue that for VCs too. If you want to argue that young founders have unique insight into what products will be successful, you might think the same would be true of young VCs — if there were any.

It takes a massive amount of work to create and build a startup. Unless you’re a superstar, it’s also a huge amount of work to get funded. You have to go begging and scraping, on bended knee, hat in hand, to make mature and otherwise sober people with a lot of money believe in you. And that’s all done against a background of very steep odds. Similarly, it’s a massive amount of work to raise a venture fund. You have to make even more mature and more sober people with even more money believe in you. And you have to do it in a much less forgiving environment, also against steep odds.

Thirty or even twenty years ago, most CEOs would probably have scoffed at the idea that a 20-year-old could start and run a company, and sell it for tens or hundreds of millions, or even a billion, or take it public. We now know that that actually happens, and the idea that the very young can do it, including getting financial backing, is no longer foreign. Might not the same one day be true for fund managers? When will we see the first VC fund run by a couple of twenty-somethings? Will they exhibit a marked preference for funding older founders?

Back when Fred was posting, I pointed Howard Gutowitz to one of the postings. A couple of days later, Howard told me that he’d talked about it to his brother:

    Robert made what is actually an interesting suggestion: get a figurehead 26 year old to be the CEO. Turn the old game around.

I think that’s pretty amusing.

Maybe Andrew Parker is actually running Union Square Ventures. Turn the old game around.


The young are different

15:40 November 14th, 2007 by terry. Posted under companies. 5 Comments »

[These are some comments I wrote in reply to a Fred Wilson posting, The Age Question (final post). I’ve pulled them out here because I feel like it, and I want to link to them.]

I really don’t see what all the fuss is about. Fred posts some observations about the numbers he’s seeing, and people take it personally. It could hardly be clearer that he’s not ageist. This is shooting the messenger.

The world is a changing place, as usual. Technology and knowledge are better understood and packaged and available, especially to youth, as usual. Kids of 15 can do things today (including starting companies, which takes 10 minutes on the web) that older people wouldn’t have dreamed of at 15, as usual.

There’s a tendency to let two questions overlap here. One is whether young people do better at starting and running companies, etc. The other is whether young people are inherently different simply because they’re young.

I don’t think there’s enough evidence on the first question to know the answer. Personally, I doubt that young inexperienced people are better than older more experienced people on this question.

But, young people ARE different. They think differently. It’s very clear. The easiest way to see it is to ask yourself whether (at least with regard to technology) you’re more advanced (or similar word) than your own parents. Of course you are. Of course. How could you not be? Ask all your friends the same question, and they’ll probably all say the same thing. Now just extrapolate.

The “but we invented it” argument doesn’t hold water. Bicycles were invented many generations ago, and in many respects haven’t changed that much. But look at the innovations in cycling of the last 20 years! Kids are doing things on bicycles that people of my generation (I’m 43) never contemplated. Or take skateboards. The basic form hasn’t changed much in the last 20 years. Arguably, my generation invented the skateboard. But when we were kids the Ollie and grinding etc. hadn’t been thought of and the kinds of tricks that regular kids in the street are doing today would have blown our minds in the 70s. We had almost exactly the same gear. Back then, the ultimate in coolness was to do a few 360’s and maybe a handstand. If a mundane object like a skateboard or a bicycle can be put to such novel use, who can doubt what the internet/web can be made to do/look like in the hands of people who push the envelope? It’s very clear that successive generations of users (aka “kids”) push the envelope.

The internet, and of course the web, are not mature technologies. Our kids are still discovering amazing new ways to use bikes 120 years after the invention of the “safety” bicycle. So we probably have a fair way to go on the internet/web. The sheer numbers alone argue that most of the innovation will come from people much younger than the current generation of users, some of whom are undoubtedly yet to be born, and none of whom will have had anything to do with the invention.

In summary: are youth better at starting and running companies? Don’t know. Are they different from us? Hell yes.

It’s probably useful to keep the two questions apart.

Another thought worth considering: in some cases (see above) it may take adults to invent something and bring it to market, but kids to figure out how to really use it.


Twittering from inside emacs

04:34 November 12th, 2007 by terry. Posted under python, tech, twitter. Comments Off on Twittering from inside emacs

I do everything I can from inside emacs. Lately I’ve been thinking a bit about the Twitter API and social graphs.

Tonight I went and grabbed python-twitter, a Python API for Twitter. Then I wrote a quick python script to post to Twitter:

import sys
import twitter  # python-twitter

# Post the first command-line argument as a Twitter status update.
twit = twitter.Api(username='terrycojones', password='xxx',
                   input_encoding='iso-8859-1')
twit.PostUpdate(sys.argv[1])

and an equally small emacs lisp function to call it:

(defun tweet (mesg)
  "Prompt for MESG and post it to Twitter via the external tweet script."
  (interactive "MTweet: ")
  (call-process "tweet" nil 0 nil mesg))

so now I can M-x tweet from inside emacs, or simply run tweet from the shell.

Along the way I wrote some simple emacs hook functions to tweet whenever I visited a new file or switched into Python mode. I’m sure that’s not so interesting to my faithful Twitter followers, but it does raise interesting questions. I also thought about adding a mail-send-hook function to tweet every time I send a mail (and to whom). Probably not a good idea.

You can follow me on Twitter. Go on, you know you want to.

Anyway, Twitter is not the right place to publish information like this. Something more general would be nicer…


Multiplying with Roman numerals

18:14 November 10th, 2007 by terry. Posted under companies, representation, tech. 1 Comment »

I like thinking about the power of representation, particularly inside computers. I wrote about it earlier in the year and gave a couple of examples. Here’s another.

Think about how you might have done multiplication with Roman numerals. Why is it so difficult?

It’s not because multiplication is inherently so hard. Roman numerals were just a terribly awkward way to represent numbers. However, if you introduce the concept of a zero and use a positional representation, things become much easier.

Note that the problem hasn’t changed, only the representation has. A new representation can make things that look like problems go away.
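
To make that concrete, here is the whole trick in a few lines of Python: convert out of the awkward representation, do the “hard” operation trivially, and convert back.

VALUES = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'), (100, 'C'),
          (90, 'XC'), (50, 'L'), (40, 'XL'), (10, 'X'), (9, 'IX'),
          (5, 'V'), (4, 'IV'), (1, 'I')]

def from_roman(s):
    """Roman numeral -> int, peeling off the largest prefix each time."""
    n = 0
    for value, numeral in VALUES:
        while s.startswith(numeral):
            n += value
            s = s[len(numeral):]
    return n

def to_roman(n):
    """Int -> Roman numeral."""
    out = []
    for value, numeral in VALUES:
        while n >= value:
            out.append(numeral)
            n -= value
    return ''.join(out)

print(to_roman(from_roman('XIV') * from_roman('XXIII')))  # CCCXXII (14 * 23 = 322)

All of the difficulty lives in the conversions; the multiplication itself is untouched.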

I claim that we are still using Roman numerals to manage information online (and on the desktop for that matter). Until we do something about it, we’ll probably continue butting our heads against the same problems and they’ll probably continue to appear intractable.

At Fluidinfo, everything we do is based on a new way to represent information.


Daylight robbery in Berlin

21:49 November 6th, 2007 by terry. Posted under companies, tech, travel. Comments Off on Daylight robbery in Berlin

I’m sitting in a hotel in Berlin, the Hotel Ibis Berlin Mitte. They’ve done a deal with Vodafone to provide wifi access for their guests.

Here’s the price list:

  • 30 minutes – 5.95 euros, or $8.66
  • 2 hours – 12.95 euros, or $18.85
  • 24 hours – 29.95 euros, or $43.59

There’s no option to connect/disconnect and use your time bit by bit. You have to take it all at once, making the 24-hour option particularly attractive.

Way to go Vodafone! You idiots. With bargain basement rates like these I will certainly keep coming back. Same goes for you Ibis Hotel. Typical phone company strategy – maximally fuck your customers in the short term.


Powerset hampered by limited resources? Oh please

19:54 November 2nd, 2007 by terry. Posted under companies, tech. Comments Off on Powerset hampered by limited resources? Oh please

I don’t mean to appear cold-hearted. I have a heart. Really. But news of a shakeup at Powerset given release delays doesn’t come as a surprise at all.

What is surprising is to read that Powerset has “been hampered by limited resources.” Oh puhleease. Since when has $12.5M (minimum) in funding qualified as having limited resources?

Delays in getting hold of the Xerox NLP API caused fundamental problems? I used that API (ten years ago, admittedly) and, sorry to say, it’s not the key to unlocking the natural language understanding puzzle. But it was widely trumpeted as the key to Powerset knocking off Google. The mysterious all-powerful NLP API from the mysterious all-fumbling Xerox PARC finally lands in the hands of a commercial company poised to Make Good! Powerset had snatched the NLP crown jewels out from under Google’s nose!

It wouldn’t surprise me if PARC were glad to get rid of the rusty old thing. “Psssst, buddy. You over there… wanna buy an antique NLP API owned by former royalty? S’good fer what ails ya.”

OK, I’m being a bit sarcastic and silly. I guess I just have limited patience for these projects and especially for the breathless hype that surrounds them.

I’ve often wondered about Powerset (and Metaweb) hitting the wall. Lots of hype, pressure, and funding. Lots of people. High burn rate. And revenue coming from…… where exactly? And that’s not to mention the blow to our confidence that Powerset were really onto something deep when they let a genius programmer drink and get away from his handlers at a dotcom-style bash.

I’d say the real reason Powerset are “hampered” is the fact that they’re trying to solve something that’s practically impossible.

If you look at it that way, then I suppose having only $12.5M to achieve the impossible really is a case of having limited resources.

Stay tuned. There’s a long, nasty and heartless blog posting locked up inside me about people and companies that chase words like “understanding”, “meaning” and “intelligence”.


That deep sucking sound

05:00 November 2nd, 2007 by terry. Posted under companies, tech. Comments Off on That deep sucking sound

First of all, let me just say that Jason Calacanis is a genius.

Having said that, I most humbly submit that his posting today on Facebook’s WORST two features is a little off in one regard.

I also hate the way Facebook tries to pull me into its world when they could so easily just deliver my message in email. But I don’t think they’re doing it for the page views.

I think they’re doing it because they want to take over the world.

Once there were PCs. Then came the mass migration towards online apps that run in one’s browser. Microsoft got that one right, though they were a little early in calling it (culling it?), and then ironically were present at the conception and birth of what really kicked it into high gear, XMLHttpRequest. But that’s another story.

So what’s next? Or, what would Facebook like to be next? Well, the obvious next step in the progression: mass migration to a particular platform running inside your browser. It just makes sense.

Except it doesn’t.

But that’s what I think Facebook will be going after. They want us all in there. That’s where we should be sending and receiving messages, IMs, poking each other, and, of course, throwing expensive virtual food. Why not twitter in Facebook? Why not Tabblo in Facebook? Why not watch videos inside of Facebook? Why not everything in Facebook?

There’s no way it can work, but I bet that’s what they’re going after – to as great an extent as possible. A bundle of cash will make for some nice acquisitions, as would an IPO. And everything they buy is going to wind up inside Facebook.

I’ll save my reasons for why it can’t work for another posting.


Twitterquake

01:34 November 1st, 2007 by terry. Posted under companies, twitter. Comments Off on Twitterquake

Fastest news in the West? Twitter wins hands down.

[image: twitterquake]


Risk and reward, from the investor POV

23:20 October 30th, 2007 by terry. Posted under companies. Comments Off on Risk and reward, from the investor POV

I’m very curious about the correlation of perceived early stage (seed, series A) startup risk and the eventual reward. That is, if you plotted a set of well-informed potential investors’ perception of a collection of startup companies’ risk against the eventual performance of those companies, what would the plot look like?

Would it be low-risk low-reward and high-risk high-reward? Would it be all over the place?

In an earlier post, the blind leading the blind?, I wrote about it being extremely hard (or impossible) to assess value. I’m sure that’s true in one-off cases, and that it’s true for entrepreneurs who by definition have relatively little practical experience with startups (sure, they may have read a lot, we’ve all read a lot – I mean in actually doing them).

But is the same true for investors? It’s certain that the bulk of investors get their results all over the map – some things that look safe end up worthless, some things that look like big risks end up paying off big time, and everything in between. If it were otherwise, the investment game wouldn’t be what it is – it would be much safer and more predictable.

One question I have relates to the relationship between extreme risk and extreme reward that I wrote about earlier. I.e., if risk and reward always go together, then you can’t reap a huge reward without taking huge risks.

But perception of risk is subjective. A run-of-the-mill VC might see something as hugely risky, not make the bet, and so miss a huge payoff. But an exceptional VC, presented with the same investment opportunity, might see accurately that it was in fact destined to be big (i.e., consider it relatively low risk) and make the home-run investment. Do such VCs exist? If so, and we knew who they were, we’d be clamoring to pitch to them, to invest alongside them, to study their methods. Does Sequoia fall into this category? Or are they just lucky, or do they simply have access to better quality deals because of their track record, etc.?

That’s why I’d love to see a plot of perceived risk against reward. I think in general my feelings about entrepreneurs would also hold with investors – that the really rewarding things are strongly correlated with the really risky. An investor with that sort of profile doesn’t really know any more than the rest of us. But might there exist a class of investor who can look at things that are going to be hugely rewarding and not think that they’re also high risk?

Here’s another way to put it: Maybe risk and reward always go hand in hand, are always proportional, that to get high rewards you must take high risk. If so, a VC firm cannot possibly be sitting on the next Google (or whatever) unless they have one or more companies in their portfolio that scare the shit out of them. Or, might it be the case that while risk and reward appear to go hand in hand to many, there are a few superb investors who perceive risk differently and looking at their plots we’d see that they made tons of money by taking what looked to them like low-risk bets? Or maybe the entire premise is wrong and risk and reward actually aren’t well correlated, let alone perceived risk and reward.

This is a bit of a rambling post, I know. But it’s what I’m thinking of these days. I would at least know how to make the scatter plot I’m imagining, and it’s fun to speculate on its shape – both across multiple VCs and for them individually.


On Andreessen on platforms

17:14 October 26th, 2007 by terry. Posted under companies, tech. 2 Comments »

[This taken from my comment on Fred Wilson‘s posting Andreessen on Platforms, in which he discussed Marc Andreessen‘s posting The three kinds of platforms you meet on the Internet.]

I think Marc’s posting has two flaws. The first, which is serious, is that he didn’t put enough thought into it. The second, less of a problem, is that in several places it comes across as biased and a bit of a Level 3 sales pitch. I may be guilty of the former in what follows. Certainly my reply is a bit piecemeal – but there are only so many hours in the day.

In what follows, when I talk about “you”, I mean you the humble individual programmer.

Firstly, things become clearer if we categorize Marc’s Levels 1, 2 and 3 differently. Level 1 and 2 are two sides of the same coin:

  • Level 1: You write an app, and you call out to an API (a library of functions) that someone else has written.
  • Level 2: You write functions, and an app that someone else has written calls you (treats your code as a library function it can call).

To me these things are opposites. Within Level 2, there are two classes:

  • Level 2a: You write functions. An app that someone else has written calls your code, which runs on your server.
  • Level 2b: You write functions. An app that someone else has written calls your code, which runs on their server.

My Level 2b is what Marc calls Level 3. I’ll continue to use his terms.

Note that only in Level 1 are you really writing a full app. In Level 2 and 3 you’re writing functions that are called from an existing application (like Facebook or Photoshop) that you almost certainly didn’t write. To make you feel better, they give your functions pleasing names like “plug-in” (Photoshop), “extension” (Firefox), and even “app” (Facebook).

To me that’s a more logical division of the 3 classes. I see no reason at all to call Level 1 a “platform”. You are writing an app. You’re calling someone else’s libraries – some of them will be local, some will be on the network. You’re not writing a platform. The only platform here is in the local OS of the machine your app is running on.

If we stop calling Level 1 a platform, it makes that word much less cloudy. That means that things like Photoshop, Firefox, and Facebook (Level 2), and Ning, Salesforce.com, and Second Life (Level 3) all provide platforms for you. But Flickr, del.icio.us, the Google Maps API, etc., are not platforms and calling them that is just confusing. They’re just APIs or libraries that other apps can call (across the network, in these cases).

Next, virtually ALL applications in operation today are running in Level 3 platforms. Most of them run in the environment provided by operating systems.

Once you look at things that way, you see that the thing which is important is the runtime environment provided by the Level 3 platform you are already running on. Is it fast, secure, scalable, flexible, etc.? Can you write the kinds of things you want to write with it? Should you try something else?

I think Marc didn’t look at his Level 3 this way, or at least not clearly.

    Now, traditionally in the field of computing, there has been a single main way of providing a platform. You provided a computer system — a mainframe, a PC operating system, a database, or even an ERP system or a game — that contained a programming environment that let people create and run code, plus an API that let them hook into the core system in various ways and do things.

    The Internet — as a massive distributed system of many millions of internetworked computers running many different kinds of software — complicates things, and gives rise to three new models of platform that you see playing out in the Internet industry today.

I don’t think they’re all platforms, and I don’t think any of them are new :-)

    But let me say up front — they’re all good. In no way do I intend to cast aspersions on what anyone I discuss is doing. Having a platform is always better than not having a platform, period. Platforms are good, period.

Hey, all platforms are great. But some are greater than others…

    Level 1 is what I call an “Access API”.

    This is undoubtedly a very useful thing and has now been proven effective on a widespread basis. However, the fact that this is also what most people think of when they think of “Internet platform” has been seriously confusing, as this is a sharply limited approach to the idea of providing a platform.

Do most people think of things like the Flickr API as being internet platforms? If it’s sharply limited (I agree), then please let’s not call it a platform.

    What’s the problem? The entire burden of building and running the application itself is left entirely to the developer. The developer needs to provide her own runtime system, programming language, database, servers, storage, networking, bandwidth, and security, and needs to take responsibility for running all of the above — and then exposing the application to users. This is a very high bar in terms of both technical expertise and financial resources.

This is painting an overly bleak picture. Almost every application programmer on earth uses an off-the-shelf runtime system (e.g., an OS or a Java sandbox), off-the-shelf databases, servers, networking, etc. Yes they choose a programming language (as they do if they choose to use a Level 3 system). It’s work to pick these things out and combine them but that’s a very far cry from shouldering the _entire_ burden.

This is an example of what feels like salesmanship in Marc’s article. He’s right in general, but the way he puts it feels slanted.

    As a consequence, you don’t see that many applications get built relative to what you’d think would be possible with these APIs — in fact, uptake of web services APIs has been nothing close to what you saw with previous widespread platforms such as Windows or the Mac.

And this isn’t a good comparison. It’s comparing use of a Level 1 API to use of what Marc later tells us is a Level 3 system (a traditional OS).

    Because of this and because Level 1 platforms are still highly useful, notwithstanding their limitations, I believe we will see a lot more of them in the future — which is great. And in fact, as we will see, Level 2 and Level 3 platforms will typically all incorporate a Level 1-style access API as well.

Right. In fact Level 1 platforms (aka APIs) underpin all of Marc’s levels. Which is to say that even if he’s right, the Level 1 “platform” isn’t going away or lessening in importance – that’s because it’s not a platform at all. It’s a API, and libraries of functions exposed as APIs are useful things to have around. Likewise, APIs on the local OS aren’t about to go away either – in fact they’re crucial to the operation of the OS, just as they are to the operation of a level 3 platform (which is also running in a Level 3 OS).

So Level 1 isn’t going anywhere, or getting less important.

    When you develop a Facebook app, you are not developing an app that simply draws on data or services from Facebook, as you would with a Level 1 platform. Instead, you are building an app that acts like a “plug-in” into Facebook — your app literally shows up within the Facebook user experience, often as a box in the middle of a page that Facebook otherwise defines, such as a user profile page.

Here (as with Photoshop or Firefox), your code is like a library function you write that is called by another app. In this case, your code runs on your server, and the calling app (usually on another server, if it’s a web app) takes your results and displays them (often to a web browser).

    Level 3 is what I call a “Runtime Environment”.

    In a Level 3 platform, the huge difference is that the third-party application code actually runs inside the platform — developer code is uploaded and runs online, inside the core system. For this reason, in casual conversation I refer to Level 3 platforms as “online platforms”.

And here, your code is like a library function you write that is called by another app. In this case, your code runs on the platform’s server, and the calling app (on their server) takes your results and displays them (often to a web browser).

    Obviously this is a huge difference from Level 2. And this difference — and what makes it possible — is why I think Level 3 platforms are the future.

And the past.

There follow a number of breathless paragraphs that describe exactly why it’s hard to build an OS, and what the advantages are once you manage it.

Then it’s acknowledged that yes, this is all… just like having an OS!

So those long paragraphs feel like Marc is either completely blind to an _extremely_ obvious and almost perfect analogy, or, like he’s a salesman trying out a snow job on just how incredibly amazing these totally new Level 3 platforms will be. It’s impossible to think #1, so I’m left feeling #2.

    The Level 3 Internet platform approach is ironically much more like the computer industry’s typical platform model than Levels 2 or 1.

    Back to basics: with a traditional platform, you take a computer, say a PC, with an operating system like Windows. You create an application. The application code runs right there, on the computer. It doesn’t run elsewhere — off the platform somewhere — it just runs right there — technically, within a runtime environment provided by the platform. For example, an application written in C# runs within Microsoft’s Common Language Runtime, which is part of Windows, which is running on your computer.

At which point you note that basically all programs already run in a Level 3 platform:

    I say this is ironic because I’m not entirely sure where the idea came from that an application built to run on an Internet platform would logically run off the platform, as with Level 1 (Flickr-style) or Level 2 (Facebook-style) Internet platforms. That is, I’m not sure why people haven’t been building Level 3 Internet platforms all along — apart from the technological complexity involved.

But nothing is running “off platform”. It’s all already Level 3. Yes, there are differences in environment… coming up.

    So who’s building Level 3 Internet platforms now?

    First, I am — Ning has been built from the start to be a Level 3 platform.

    Second, in a completely different domain, Salesforce.com is also taking a Level 3 platform approach

    Third, and again in a completely different domain, Second Life is a Level 3 platform.

    Fourth, Amazon is — I would say — “sort of” building a Level 3 Internet platform with EC2 and S3. I say “sort of” because EC2 is more focused on providing a generic runtime environment for any kind of code than it is for building any specific kind of application — and because of that, there are no real APIs in EC2 that you wouldn’t just have on your own PC or server.

Ah, there’s a very interesting bias…

The generic traditional PC OS is a Level 3 platform, despite the fact that it’s not specifically geared towards any particular use. But EC2/S3 are somehow only sort of Level 3 precisely because they have the exact same property???

    By this, I mean: Ning within our platform provides a whole suite of APIs for easily building social networking applications; Salesforce within its platform provides a whole suite of APIs for easily building enterprise applications; Second Life within its platform provides a whole suite of APIs for easy building objects that live and interact within Second Life. EC2, at least for now, has no such ambitions, and is content to be more of a generic hosting environment.

    However, add S3 and some of Amazon’s other web services efforts to the mix, and you clearly have at least the foundation of a Level 3 Internet platform.

I might argue this the other way round. Things like Ning and Second Life and Facebook are trying to be real Level 3 platforms to allow people to build a wide range of apps (i.e., 3rd party functions that they call), but they’re only “sort of” true Level 3 because they’re built for a specific purpose and so are only useful for that purpose – even if the purpose is broad, like “the” social network.

Things that are more generic, like EC2 and S3, are more like the generic computational environment provided by a traditional OS. And for that reason, one can expect them to be used for a wider range of applications (including standalone applications, not just code that lives within the Facebook or Ning world). For that reason you might expect that applications written against them will be longer-lived, as they will not die as fashion and coolness moves its fickle hand from MySpace to Facebook to Ning to…?

Would you buy a used Level 3 platform from this man?

    Fifth and last, Akamai, coming from a completely different angle, is tackling a lot of the technical requirements of a Level 3 Internet platform in their “EdgeComputing” service — which lets their customers upload Java code into Akamai’s systems. The Java code then runs on the “edge” of the network on Akamai’s servers, and is distributed, managed, and secured so that it runs at scale and without stepping on other customers’ applications.

    This is not a full Level 3 Internet platform, nor do I think Akamai would argue that it is, but there are significant similarities in the technical challenges, and it’s certainly worth watching what they do with their approach over time.

Why is it not a full Level 3 platform? Because it doesn’t have a particular focus?

    I believe that in the long run, all credible large-scale Internet companies will provide Level 3 platforms. Those that don’t won’t be competitive with those that do, because those that do will give their users the ability to so easily customize and program as to unleash supernovas of creativity.

Oh my!

But having already said that Level 3 platforms will need underlying Level 2 and Level 1, it doesn’t seem like the Level 3 providers are driving the lesser levels out of the marketplace.

One might instead argue that it’s the Level 3 providers who are most likely to disappear. We’ve seen exactly that happen in the traditional Level 3 world (operating systems), while some applications and many great libraries hop happily from one Level 3 environment to the next.

    I think there will also be a generational shift here. Level 3 platforms are “develop in the browser” — or, more properly, “develop in the cloud”. Just like Internet applications are “run in the browser” — or, more properly, “run in the cloud”. The cloud being large-scale Internet services run on behalf of users by large Internet companies and other entities. I think that kids coming out of college over the next several years are going to wonder why anyone ever built apps for anything other than “the cloud” — the Internet — and, ultimately, why they did so with anything other than the kinds of Level 3 platforms that we as an industry are going to build over the next several years — just like they already wonder why anyone runs any software that you can’t get to through a browser. Granted, I’m overstating the point but I’m doing so for clarity, and I’m quite confident the point will hold.

But everything _already_ runs “in the cloud” on a Level 3 platform. Your local OS has far more functionality, more speed, more libraries, more space, more flexibility, etc., for you to run your applications in. OK, I’m being a bit difficult, and understating the point. Maybe.

Now to the main question, which I think is a valid one, but which Marc doesn’t answer.

Before we had operating systems with all their benefits (see the long list of benefits Marc tells us will accrue from his Level 3 – ease of use! open source! buying and selling code that just runs!), a forward-looking person could have looked ahead and predicted the rise of the operating system. What sorts of programs, what supernovas of creativity, might they have predicted?

Marc looks ahead…

A new platform typically enables a new set of applications that were not previously possible. Why else would there be a need for a new platform?

But: keep this in mind; look for the new applications that a new platform makes possible, as opposed to evaluating the new platform on the basis of whether or not you see older classes of applications show up on it right away.

But gives us no examples at all.

I’m extremely interested in this. What will these applications be?

Is it true that what we can build with these future systems is not “possible” without them? Or just not feasible? Where does their extra power come from? I think it’s NOT principally from the great diversity of apps that can be written to run on these platforms, but from what you gain by having a large number of apps running in the _same environment_ – be it in an OS with a file system, a process subsystem and communicating processes, or a Level 3 internet platform with whatever it provides.

In the fullness of time, whenever that is, we may see the rise of truly open internet Level 3 platforms that will challenge the well-funded closed commercial ones. Meanwhile, I’m happy to _only_ be working away at Level 1.

The value of APIs to startups

17:02 October 26th, 2007 by terry. Posted under companies, tech, twitter. 2 Comments »

[This is pulled from my comments and questions on Fred Wilson’s posting Every Product Is A Platform on September 10, 2007]

My question to VCs and others is where you see value in having others build on an API. I can see some arguments – visibility and branding, pushing maturity of the API, giving you an under-the-radar tap with which you can experiment with increasing traffic, maybe giving you ideas for products (if you’re the kind to take that route), and finding (and then hiring) good hackers who love your product. But these are all indirect benefits. I’m curious about why, from an investor’s POV, there’s value in having others build on the API. There are 250+ things built on the del.icio.us API. Were they of value? Did they increase revenue in any direct way? If you argue that there’s great direct value, can I therefore walk into your office, claim that thousands of people will write apps using my API, and argue for a massive valuation? :-)

Do any of the companies offering an API have a strategy for monetizing it, or simply recouping costs for bandwidth, servers, etc.? Sure, the exposure is great. But, as I was once taught, you can die from over-exposure.

Here’s another way of looking at my question: if API traffic is 10x bigger than interactive web traffic, then just 1/11th of Twitter’s computing resources are being used to support their (arguably) most important customers. Maybe the site could have been many times faster if they had opened up API usage more slowly. I found the Twitter web interface unusably slow in the first 6 months after I heard about it – a feeling that many shared. Is that because they were actually using 90% of their resources to support apps they didn’t write and didn’t benefit (directly, financially) from? That’s a very delicate line to choose to walk. At that level of diverting resources from normal users, there’s a huge risk of blowing it. Hence my question about value. Sure, the 3rd-party apps are cool and exciting – but are they so important that it makes sense to give your front-line customers a miserable time, making your service extremely slow?
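
To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python (assuming, as the figures above do, that computing resources scale roughly with traffic):

# If API traffic is 10x interactive web traffic, and resources scale
# roughly with traffic, the website gets just 1 part in 11.
api_to_web = 10.0
web_share = 1 / (1 + api_to_web)           # ~0.091, i.e. 1/11th
api_share = api_to_web / (1 + api_to_web)  # ~0.909
print("web: %.1f%%, API: %.1f%%" % (100 * web_share, 100 * api_share))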

To go to another extreme, imagine releasing an API that was so powerful that thousands of people wrote to it, but which had no user-facing component. How is that going to make you money unless you charge for it? E.g., Amazon’s S3. If you charge, like Amazon, I understand the model. If you don’t charge and the API is eating 90% of your resources, you may be shooting yourself in the foot rather severely.

It’s an interesting problem. As I said earlier, I agree with you that if you can do it, product should drive platform. Twitter could have followed that route, but apparently went the other way round. Or maybe things were just totally out of control and they unexpectedly found themselves in this 10:1 situation.

One thing’s for sure: if you’re using 10/11ths of your resources on your (non-paying) API customers, you should definitely make sure the rest of the world knows about it :-)

Twisting the towel

04:01 October 26th, 2007 by terry. Posted under companies, tech. Comments Off on Twisting the towel

Russell and I met with Esther Dyson (a Fluidinfo investor) recently. After she’d listened to our presentation and seen the latest demo, she said that we’d “given the towel another half twist” and that we should carry on twisting.

She was referring to the process of tightening up and focusing company vision, strategy, business plan, etc.

I liked the analogy a lot. Twisting a wet towel is fun. It’s hard work, and it gets harder. But it’s surprising and satisfying to see just how much water you can get out of the thing before you let nature take its course and finish the job.

It also applies to writing documents. I spent most of 2005 writing a proposal to start a research institute for the computational study of infectious diseases (still in the works, though I’m no longer directly involved). Thanks to the repeated insistence of Derek Smith in Zoology at Cambridge, the document went through about 5 iterations, each more painful and difficult than the one before. It drove me nuts. But it was amazing how much better the thing became at each round, and the end result was hugely satisfying.

I’m going through the same process now with Fluidinfo as we prepare to raise our first round of outside financing. Putting together a slide show, executive summary, and demo is a ton of work. I’ve been round the loop a few times already. Earlier tonight I gave a presentation to Vicente López, general manager of the Barcelona Media Centre for Innovation. He poked holes in the presentation from start to finish. I took notes.

So I just spent the last 6 hours slowly twisting the towel. As a result the presentation is much improved. I figure we still have a couple of half twists left to do.

Meanwhile, I’ve paused to reward myself by knocking off today’s blog entry.

smell the fear

14:24 June 30th, 2007 by terry. Posted under companies, tech. Comments Off on smell the fear

Entertainment retailers are not happy that Prince is giving away his upcoming album, via a deal with the Mail on Sunday newspaper. Their reaction is one of abject fear with a sprinkling of nonsense:

It would be an insult to all those record stores who have supported Prince throughout his career

All those stores making all that money, colluding to fix prices, over all those years, and they were just doing it to support the artists! My heart bleeds for them.

You can almost smell the fear.

my O’Reilly number

11:18 June 25th, 2007 by terry. Posted under books, companies, tech. Comments Off on my O’Reilly number

I like O’Reilly technical books. Back in 1987 I put together some notes to write a book on the vi editor, and later considered submitting the idea to O’Reilly. I used to think I knew just about everything there was to know about vi, at least as a user, and I spent a small amount of time fiddling with its code to fix some limitations. Of course, now that I’m a hardened emacs user, it’s a good thing I didn’t blot my career early by writing a book on a crappy editor like vi.

I just did a quick count of the O’Reilly titles on my shelves: I have fifty-five.

And you?

literary arbitrage

14:58 June 20th, 2007 by terry. Posted under books, companies. Comments Off on literary arbitrage

The two books I just bought on Amazon.com cost me $37.74, plus shipping to Spain of $13.47, for a total of $51.21.

The same books are available on Amazon.co.uk for a total of £28.35, plus shipping to Spain of £5.97 and VAT of £1.37 for a grand total of £35.69 or USD $71.15.

So you can pay $51 to have the books shipped (in theory) from the US, or pay roughly 40% more and have them shipped (in theory) from the UK. The difference in shipping time isn’t much either, in practice. Even if the price of mailing in the UK were free and there were no VAT, it would still be cheaper to have books sent from the US.
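
Here’s a quick Python sketch that checks the arithmetic, using only the figures quoted above (the exchange rate is the one implied by £35.69 = $71.15):

usd_total = 37.74 + 13.47        # US books + shipping to Spain = $51.21
gbp_total = 28.35 + 5.97 + 1.37  # UK books + shipping + VAT = £35.69
usd_per_gbp = 71.15 / 35.69      # implied exchange rate, ~1.99

uk_in_usd = gbp_total * usd_per_gbp
print("US order: $%.2f" % usd_total)    # $51.21
print("UK order: $%.2f" % uk_in_usd)    # $71.15
print("UK premium: %.0f%%" % (100 * (uk_in_usd / usd_total - 1)))  # ~39%

# Even with free UK shipping and no VAT, the UK books alone would cost
print("UK books alone: $%.2f" % (28.35 * usd_per_gbp))  # ~$56.52 – still dearer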

The dollar hit a 26-year low against the pound in April of this year (2007). If it keeps falling and Amazon don’t adjust their pricing, I might start a side business in literary arbitrage.
