Archive for November, 2007

reCaptcha added

Friday, November 30th, 2007

I’m stuck in the Oakland airport with a 3 hour delay on a flight to Vegas. Bambi, who steadfastly refuses to blog for reasons unknown, has dinner waiting for me there. Bummer.

Meanwhile, Russell, who does blog and makes a mean Irish coffee, tells me I need to add a Captcha to this blog, so I’ve installed the very clever reCaptcha. Enjoy.

All in all a pretty thrilling night here at the airport. Battery #2 is halfway done. Me too.

Elevator status report

Thursday, November 29th, 2007

Tonight I caught the BART from San Francisco back to Oakland. Waiting for the train to arrive and then again when I got off, I heard the world’s stupidest public announcement. It was so weird that I took out my laptop and typed it in. Here it is, pretty much word for word:

Elevator status report: all station elevators are currently in service. All station agents please make sure your status boards reflect this information.

This was broadcast to the whole station. It reminds me of town criers of old: “It’s 9pm and all’s well.”

The blogging honeymoon is over

Sunday, November 25th, 2007

I can’t take it any more. I can’t handle the pace of blogging every day. The honeymoon is over, barely a month after I got the blogmobile back on the road. I just don’t have that much to say.

Actually, I do. There are things I am dying to write – for example about how I think the current debate about data ownership and privacy and control could be resolved. But those thoughts say too much about what we’re building for my taste right now. It’s very hard to know how to balance letting information out for the sake of attention with keeping information in for the sake of competitive advantage. I don’t know the answer to that one.

I have a list of things I could easily write about – it’s just that I don’t really feel like it.

Tomorrow I have a couple of meetings in LA and then I’m heading up to the Bay area until Friday.

Hacking Twitter on JetBlue

Saturday, November 24th, 2007

I have much better and more important things to do than hack on my ideas for measuring Twitter growth.

But a man’s gotta relax sometime.

So I spent a couple of hours at JFK and then on the plane hacking some Python to pull down tweets (is this what other people call Twitter posts?), pull out their Twitter id and date, convert the dates to integers, write this down a pipe to gnuplot, and put the results onto a graph. I’ve nothing much to show right now. I need more data.
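For the curious, the gnuplot end of that pipeline might look something like this. It's a sketch only: the fetching half used the Twitter API of the day, and the function name is mine.

```python
import subprocess

def gnuplot_script(samples, title='tweets'):
    """Build a gnuplot script plotting (unix_time, tweet_id) points.

    `samples` is a list of (unix_time, tweet_id) tuples already pulled
    from the API, with the dates converted to integers as described above.
    """
    header = ("set xlabel 'time'; set ylabel 'tweet id'\n"
              "plot '-' using 1:2 with points title '%s'\n" % title)
    data = '\n'.join('%d %d' % (t, i) for t, i in sorted(samples))
    return header + data + '\ne\n'

# To actually see the graph (assumes gnuplot is on the PATH):
# subprocess.run(['gnuplot', '-persist'],
#                input=gnuplot_script(samples).encode())
```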

But the story with Twitter ids is apparently not that simple. While you can get tweets from very early on (like #20 that I pointed to earlier), and you can get things like #438484102 which is a recent one of mine, it’s not clear how the intermediate range is populated. Just to get a feel for it, I tried several loops like the following at the shell:


i=0
while [ $i -lt 200000 ]; do
  # the status URL host didn't survive in the original; this is the
  # old Twitter REST endpoint of the time
  wget --http-user terrycojones --http-passwd xxx \
    http://twitter.com/statuses/show/$i.xml
  i=`expr $i + 5000`
  sleep 1
done

Most of these were highly unsuccessful. I doubt that’s because there’s widespread deleting of tweets by users. So maybe Twitter are using ids that are not sequential.

Of course if I wasn’t doing this for the simple joy of programming I’d start by doing a decent search for the graph I’m trying to make. Failing that I’d look for someone else online with a bundle of tweets.

I’ll probably let this drop. I should let it drop. But once I get started down the road of thinking about a neat little problem, I sometimes don’t let go. Experience has taught me that it is usually better to hack on it like crazy for 2 days and get it over with. It’s a bit like reading a novel that you don’t want to put down when you know you really should.

One nice sub-problem is deciding where to sample next in the Twitter id space. You can maintain something like a heap of areas – where area is the size of the triangle defined by two tweets: their ids and dates. That probably sounds a bit obscure, but I understand it :-) Gradient of the growth curve is interesting – you probably want more samples when the gradient is changing fastest. Adding time between tweets to gradient gives you a triangle whose area you can measure. There are simpler approaches too, like uniform sampling, or some form of binary splitting of interesting regions of id space. Along the way you need to account for pages that give you a 404. That’s a data point about the id space too.
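If you're wondering what I mean by a heap of areas, here's a rough sketch (my own function names and conventions; the "area" is just the id gap times the time gap, halved):

```python
import heapq

def next_sample(known):
    """Pick the next tweet id to probe, given known (id, unix_time) pairs.

    Pushes each adjacent pair onto a max-heap keyed by the area of the
    triangle with legs (id gap, time gap), then probes the midpoint of
    the widest one. Illustrative only; in practice a 404 on the probed
    id would be fed back in as a data point too.
    """
    known = sorted(known)
    heap = []
    for (id1, t1), (id2, t2) in zip(known, known[1:]):
        area = (id2 - id1) * (t2 - t1) / 2.0
        heapq.heappush(heap, (-area, id1, id2))  # negate: heapq is a min-heap
    if not heap:
        return None
    _, id1, id2 = heapq.heappop(heap)
    return (id1 + id2) // 2  # midpoint of the most interesting gap
```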

Free Wifi with JetBlue at JFK terminal 6

Friday, November 23rd, 2007


I’ve never flown with JetBlue before. I’m in Terminal 6 at JFK and JetBlue has free Wifi. That’s so sensible. I’ll even blog about it. I’ll remember it. And next time I have a chance to fly out of here with JetBlue, I will.

Compare that with probably 70 flights I’ve taken out of JFK since 2000. Not a single one has offered free Wifi. It’s always the expensive pay-through-the-nose access.

Compare that with Berlin’s Tegel airport. I was there a couple of weeks ago and there were 5 Wifi providers. But they all operated under the aegis of the airport, and their prices were all outrageous. Four of them charged an identical fee of €5.95 for 30 minutes (USD 8.83), and one was half that. I guess that’s supposed to be competition.

Las Vegas airport is the only other airport I know of with free Wifi. Surely there must be a site one can go to to check on this sort of thing. I’d use it when booking travel. Maybe Kayak or some other modern flight search engine will add it one day as an airline criterion.

At least JetBlue can do something right.

Off to LA

Friday, November 23rd, 2007

I’m off to LA today. I’m staying with an old school friend. I’ve not been blogging much in NY. Yesterday was extremely warm and very unlike November. But I spent most of the day and night inside working. Today it’s cold again. I’m done with laundry, and about to pack. I’m flying Jetblue to Burbank. No doubt all my details will be safely in the hands of the US government before the plane takes off. On Monday after a couple of meetings, it’s up to the bay area.

Can’t stand perl

Thursday, November 22nd, 2007

I’ve just spent the last 7 hours working on a bunch of old Perl code that maintains a company equity plan. It’s been pain, pain, pain the whole way. I can’t believe I ever thought Perl was cool and fun. I can’t believe I wrote that stuff. I can’t believe it’s almost midnight.

But, I’m nearly done.

Twitter creeps in

Wednesday, November 21st, 2007

I often notice little things about how I work that I think point out value. One sign that a piece of UI is right is when you start to look for it in apps that don’t have it. For example, after I had started using mouse gestures in Opera I’d find myself wanting to make mouse gestures in other applications. When mice first started to have a wheel, I was skeptical. Support of the mouse wheel was not universal across applications. When I found myself trying to scroll with the mouse wheel in applications that didn’t support it, I knew it was right.

Tonight I came home and went to my machine. The first thing I did was to check what was going on in Twitter. That’s pretty interesting, at least for someone like me. I’ve been sending email on pretty much a daily basis for 25 years. It’s pretty much always the first thing I look at when I come back to my machine. Occasionally these days I find myself first going into Google reader to see what’s new, but that’s pretty rare and I might be looking for something specific. Tonight, I think for the first time, Twitter was where I went to – and not just for the general news, but for communications between and about people I know or am interested in. Much more interesting than looking through my email.

I’m one of those who thought Twitter was pretty silly when I first signed up (Dec 2006). I only used it once, and also found it intolerably slow. But it’s grown on me. And I find definite value there.

A few examples:

  1. I’d mailed Dick Costolo a few times in the past. Then I saw him twittering that he was drinking cortados. So I figured he must be in Spain. I mailed him, and he was. As a result I ended up at the FOWA conference in London the next day and met a bunch of people.
  2. On Tuesday I went out and bought a Wii in Manhattan to take back to my kids in Spain. I twittered about heading out to do it. I got an email a bit later from @esteve telling me to take the Wii back as they are region-locked. So I did.
  3. A week or so ago I was reading some tweets and noticed that someone had just been out to dinner in Manhattan with someone else that I wanted to meet. So I sent a mail to the first person and was soon swapping mails with the second.
  4. I’ve noticed about 5 times that interesting people were going to be in Barcelona and so I’ve mailed them out of the blue. That’s really good – people on holiday are often happy to have a beer and a chat. I’d have had no idea they were going to literally be outside my door were it not for Twitter.

No comment

Tuesday, November 20th, 2007

For some reason I’m not receiving email notification of comments on this blog. I’ve just noticed a bunch of comments I’d not seen. I’m looking into it.

Not exactly Brownian motion in Manhattan

Monday, November 19th, 2007

Today after some meetings I went out for a walk. I’m staying on 12th Street between 5th and University in Manhattan.

I had intended to “just wander around” pretty much at random. That’s what I really felt like too. But in the back of my mind, not quite so far back that I wasn’t aware of it, my brain was making sure that, like it or not, I went to the Apple store on the corner of 5th Avenue and Central Park.

I really have no need of an Apple store. There’s nothing I would buy, nothing I need. But.

So off I wandered… Broadway, 6th Av, 5th Av. I stopped briefly in many stores, had a coffee and a muffin, tried to tell myself that I actually wasn’t going to the Apple store. But.

I saw iPods and iPhones aplenty along the way. Hundreds of them. All identically priced. Best buy, Comp USA, Circuit City, all the small electronic shops on 5th Av. No need, no need at all to go to the Apple store. None.

I’m walking up 5th Av in the boring super-rich area, Cartier, Dunhill, DeBeers. There can be no doubt whatsoever that I am heading to the Apple store. Most of my mind doesn’t want to go, but my legs and body seem determined. They know I need it.

And there it is. Amazing. I’ve been in several of these stores before, including this one, but there’s something you just have to see and feel. Maybe it’s the church of the 21st century… people are drawn in to worship an abstract god, to kneel at the altar and finger the icons.

It really is amazing. To me the Apple store is about the hippest place in Manhattan. Here you get to see all sorts of cool cats just hanging out with their favorite hardware. The place is full. Full of people from all over the world who’ve come to buy Apple gear. The place has a very definite atmosphere, and it’s not the atmosphere of a regular computer store. There are hundreds of Apple products out, they’re all on, and people are using them – surfing the web, reading email, listening to music, marveling. Spend half an hour in there people watching, and you want to run out and buy AAPL stock.

Apple and Nokia are two companies that really understand the importance of appearance, design, and fashion in technology. I think Nokia were the first company to see clearly that a phone is not just a phone – it’s a statement about yourself. It’s something you take out and leave on the table at the cafe, or casually flip open when you need to impress someone or get laid. Apple understands it even better. I’ll walk nearly 50 blocks just to get a fix – not to buy, just to look at the products, look at the people, be amazed at it all.

Fortunately I’m old enough to know that I don’t really need any of those shiny objects. I have a first generation iPod that I never use. I have a dead-simple phone that I don’t feel any need to upgrade. I haven’t bought myself a computer in I don’t know how long – maybe 10 years (I always get them through work). I’m not even sure that I’d own a computer if I didn’t work from home. But I sure do like to look at hardware. The new iPod nano is extraordinarily beautiful – dimensions, sleekness, feel, everything about it is divine – and at $149 (4GB) or $199 (8GB) it doesn’t feel expensive. But I know I simply wouldn’t use it. What a pity!

The Mahalo-Wikipedia-Google love triangle

Sunday, November 18th, 2007

Lots of people seem to like dumping on Mahalo and Jason Calacanis. For example, Andrew Baron recently posted about Why Mahalo is Fundamentally Flawed.

Try Googling “Mahalo sucks” and you’ll get about 232,000 hits. Take your pick of the highly critical coverage.

Some of the negative commentary on Mahalo is probably due to professional and personal jealousy. Some of it is due to the fact that it’s early days yet. And I think some of it may be due to Jason happily telling people to look left while he goes right.

How can Jason raise money for Mahalo at valuations north of $100M? Surely there must be a revenue plan that holds water? If you want to argue that Mahalo is a failure and that Jason is simply a ceaseless self-marketer full of hot air, you’ll need to argue that some of the same things are true of Mahalo’s investors. Or maybe we’re in a bubble and they’ve all simply lost it.

Here’s what I think is going on.

Firstly, I think Jason is using a little smoke and mirrors when he calls Mahalo a search engine and frequently compares Mahalo’s “search” results to Google’s. With few exceptions, everyone seems to be buying it! With few exceptions, people compare Mahalo with Google – presumably because Jason tells them to and because he talks about being a search engine. And, with few exceptions, the technorati tell us that Mahalo is a pretty crappy search engine.

I agree, because Mahalo is not a search engine. Putting a box labeled “Search” on your web site to dig hits out of your own content does not make you a search engine – if it did, millions of sites would qualify. Passing queries off to Google and showing the results does not make you a search engine, either. Telling people to compare your content with Google’s results does not make you a search engine. Nor does putting the words “search engine” in your company’s strapline.

Mahalo will never be a search engine, and almost certainly does not want to be a search engine. That would be suicidal.

I believe their strategy is entirely different and that the relevant comparison is not with Google, but with Wikipedia.

Mahalo is a rapidly growing collection of carefully curated content. Mahalo is Wikipedia with a different model of control, ownership, and content creation. It’s a benevolent dictator with a purchase agreement instead of a loose anarchy with the GNU Free Documentation License.

If you want to compare Mahalo to something, compare it to Wikipedia. Jason is a huge fan of Wikipedia. And here he is begging Jimbo Wales not to leave $100M/yr on the table. Interesting.

Right now Mahalo has roughly 25K pages. Google has information on, let’s say, 10 billion pages. By this simplistic measure, Google is about 400,000 times bigger than Mahalo. You’re not going to catch or compete with Google using people to make content. Yes, you can use Google for things you don’t have static pages for, as Mahalo does. But Mahalo is not a search engine. Never will be.

Now consider Wikipedia. Wikipedia has 1.2M English pages. That means that, in English, Wikipedia is a mere 48 times larger than Mahalo! Now we’re talking. Mahalo are currently adding something like 1,000 pages a week. Suppose Jason manages to double that quite soon. That would be 100K pages a year, or about 8.3% of Wikipedia annually. So I think it’s conceivable that Mahalo could catch Wikipedia. Even if they keep a steady ship and only gain linearly they could easily be 35-40% the size of Wikipedia in 4 years’ time.

But sheer number of pages is only part of the story. Because the distribution of search requests will follow some kind of power law, you can pick up (say) half of all search requests by only covering a small number of them, and, as always, leave the long tail to Google.
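To see how strong that effect is, assume query frequencies are roughly Zipfian (an assumption, but a standard one for search traffic):

```python
def zipf_coverage(k, n, s=1.0):
    """Fraction of total query volume covered by the k most frequent
    of n distinct queries, if frequencies follow Zipf's law with
    exponent s (frequency of rank i proportional to 1/i**s)."""
    head = sum(1.0 / i**s for i in range(1, k + 1))
    total = sum(1.0 / i**s for i in range(1, n + 1))
    return head / total

# Covering just the top 1,000 of 1,000,000 distinct queries
# already captures about half of all query volume.
print(round(zipf_coverage(1000, 1000000), 2))
```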

So with a small finite amount of work, you can cover a very large chunk of Wikipedia. And I think that’s exactly what Mahalo are aiming to do.

A few weeks ago I pulled down all of Mahalo’s URIs for another project. Here’s a tiny sample – and I really did pick this out at random:

So what? you might ask. Well, let’s replace the Mahalo hostname with Wikipedia’s in the above. We get:

And guess what? All those URIs actually work! See below for a possible reason for this uncanny coincidence.

Research question: what percentage of Mahalo URIs work as Wikipedia URIs with the above simple substitution? I may do this test when I get a little more time. I bet the answer is high.
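The test itself would only take a few lines of Python. A sketch, with the caveat that the hostname mapping below is my reconstruction of the simple substitution described above, and `sample_uris` is whatever sample of Mahalo URIs you have on hand:

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def wikipedia_equivalent(mahalo_uri):
    """Map a Mahalo URI to its Wikipedia twin by swapping the hostname
    (the simple substitution described above; the path is kept as-is)."""
    return 'https://en.wikipedia.org/wiki' + urlparse(mahalo_uri).path

def works(uri):
    """True if the URI answers a HEAD request with a 200 (needs network)."""
    try:
        return urlopen(Request(uri, method='HEAD')).status == 200
    except Exception:
        return False

# hits = sum(works(wikipedia_equivalent(u)) for u in sample_uris)
# print('%.0f%% work on Wikipedia' % (100.0 * hits / len(sample_uris)))
```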

Ask yourself again: does Mahalo look more like Google or more like Wikipedia?

The idea of Mahalo-as-search-alternative-to-Google is just Jason operating Mahalo in stealth mode in broad daylight. “Hey, Rocky, watch me pull a search engine out of my hat! Oops! That’s not a search engine. I swear there was a search engine in there somewhere.”

How is Mahalo different from Wikipedia?

A big one is that Mahalo owns all its content. If Mahalo puts one of your pages on its site, you’ll first sign a purchase agreement in which the

Seller hereby irrevocably sells, grants, assigns, conveys and transfers to Mahalo, exclusively and forever, Seller’s entire right, title and interest in and to the SeRPs

and in which you warrant that the content is legit, in which you fully indemnify Mahalo, and in which you agree to let them be your agent and attorney should they need to take some action to obtain or protect the content.

In consideration you get $10-$15 which you can have in cash. Or, in a wonderfully ironic and masterful gesture, you can have your earnings donated to the Wikimedia Foundation! That’s just brilliant, I love it. How can you not be in awe of that? The guy’s a genius.

Talking of genius, just look at the language on the payment details page at Mahalo: “A Greenhouse Guide begins their career in the Greenhouse…”. You see? Writing articles for Mahalo is the beginning of a career. George Lakoff would probably count that as a classic example of framing (also see here).

Unlike Wikipedia, Mahalo owns every word of its content. That means they can sell it. That they can be acquired. But who would want to acquire Mahalo? Wait.

What other differences are there between Wikipedia and Mahalo?

Another big one is the millions of links on the internet that point to Wikipedia pages. Those little tubules that make up the internets, with Google’s PageRank worming its way down each and every one, assigning and passing on credit.

There are two things here: 1) the links themselves and 2) the high consequent position Wikipedia’s pages have on Google.

Can Mahalo get large numbers of people to link to their pages? If the pages are any good (and they are), then why not? Plus, it may be that Mahalo can catch Wikipedia in terms of how many people link to them.

According to the Netcraft October 2007 Web Server Survey, the number of servers on the net has been growing at an amazing 5% per month!

That’s just the rate of increase of new servers, not the rate of new pages being put onto existing sites. Let’s assume the Netcraft server number isn’t too far from the overall growth, and that the web roughly doubles in size every two years. That means if the size today is X, then in 4 years, towards the end of Jason’s horizon, it will be size 4X. If so, there are 3X pages yet to come into existence. The creators of these will have a choice to point links at Wikipedia or Mahalo. If popular momentum can be shifted to Mahalo, it can grab a large chunk of the link pie graph.
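A quick sanity check on those numbers: at the raw Netcraft rate the web would actually double every 14 months or so, so doubling every 2 years really is the conservative assumption.

```python
import math

# 5% per month compounds fast: how long to double?
doubling_months = math.log(2) / math.log(1.05)   # about 14.2 months

# the more conservative assumption used above: doubling every 2 years
size_now = 1.0                                   # call today's size X
size_in_4_years = size_now * 2 ** (4 / 2)        # 4X, so 3X yet to come
```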

All of which brings us, inevitably, to Google.

Quick survey question: when you need to find something that you know you’d be happy to read in Wikipedia, do you first go to Wikipedia, find English (or your language), find their search box, enter your query, and click on the link? Or do you go to Google and take its Wikipedia link?

I thought so – you use Google. It’s a uniform way to get to things, it’s likely integrated into your browser, and they generally do a better and faster job of indexing sites’ content than the sites do themselves. So the existence and massive popularity of Wikipedia drives traffic to Google. And Google of course drives traffic to Wikipedia. The two of them are dating. But Wikipedia is not the perfect lover: they stubbornly refuse to put ads on their pages, to share the love. Along comes Jason Calacanis, then at AOL, to whom this is all very clear. He tells Wikipedia in no uncertain terms that with all that traffic they could make $100M per year from ads on just the home page. He points to a conservative estimate of the worth of Wikipedia at $600M, and his own estimate is $5B. Hmmmmm. What’s an entrepreneur to do when he sees someone leaving that much value on the table?

Back to Google. They would like to have more content. Traditionally, when you got back a page of their search results, you wouldn’t see links to pages on Google – that wouldn’t make sense: there were no pages on Google, after all. Google was supposed to point you to other pages. It was an index to help you find the things you actually wanted to look at. That was the old model. These days, Google is buying content (e.g., YouTube) and pointing their search results at their content, neatly taking the ad revenue in both places. All the better if the content comes with indemnification.

You can see where I’m going. Mahalo already does advertising with Google. In fact, they’re already a premium adsense publisher to the surprise of some. If ads on the single front page of Wikipedia could generate $100M annually, what could ads on all Mahalo pages generate if Mahalo grows to rival Wikipedia?

And… who weighs the importance of links (and other unknown factors) in Google’s results page? Yes, of course, Google does. According to this Fast Company article, Mahalo gets 65% of revenue Google makes when it sends its users into Google. And Google makes money when it sends its users into Mahalo.

If there’s really (say) $1B of value to be had by building a successful commercial version of Wikipedia, you can see why Google might have some interest in nudging links to Mahalo a little higher in its results. Maybe even higher than the equivalent page for Wikipedia. Now would be a good moment to remember that I illustrated above just how trivial it can be to match up equivalent Mahalo and Wikipedia pages… Got it? A user enters a query, Google does the search and finds a highly-linked Wikipedia page, then in an instant they can construct and instead display a link to the equivalent Mahalo page, optionally displaying the Wikipedia page below the fold. Would that qualify as evil?

All of which leads to a very clear answer to my “who would want to acquire Mahalo?” question. Interestingly, Google will want to wait until Mahalo is big (they will know exactly when, supposing Mahalo keeps using adsense). They want Mahalo to be independent and with strong momentum before they turn the corporate intake valve in the Mahalo direction.

Can Jason build a viable alternative to Wikipedia? I bet he can. He has the lessons of Wikipedia. He doesn’t have the anarchy factor. He has no spam. He knows what he’s doing, and he’s in control. It’s a content play, and Jason is a content guy. An editor with a track record of building valuable content in this way. He’s playing to his strengths. The engineering is not nearly as daunting as building a Google. He has the money. As he ramps it up he’s going to have more money.

Who’s going to stop him? Certainly not Google – that’s not in their interest at all. Almost certainly not Wikipedia – unless they start putting up ads and funneling large amounts of money back to Google. And Jason is unlikely to shoot himself in the foot either – quite the reverse.

So if that’s the strategy, and if he’s on track with content (as he seems to be), and if the content is passably good, or better (which it is), and if he has a good understanding with his “friends at Google” (which you can bet he does — let’s not forget the Sequoia factor either), and if the revenue numbers are about right, then a $175M valuation for an upcoming round to accelerate things might look like a steal.

Along the way, Jason gets to have a quiet inner smile at all the people whining about how Mahalo is a crap search engine. He feeds the fire all the while, telling them to go ahead, make his day, and compare Mahalo’s results to Google’s (but not Wikipedia’s). Misdirecting attention towards Google and having people write him off probably suits him just fine. Meanwhile, they’re getting on with the real mission.

Dinner at Els Quatre Gats

Saturday, November 17th, 2007


I’m just back from the annual Mosterín Höpping family dinner at Els Quatre Gats (the four cats). We sat at a table right under the famous Ramon Casas painting above. It’s nice to just walk over to Picasso‘s old haunt of 100 years ago and be surrounded by all that history. I was just browsing the Wikipedia entry for Picasso. It’s amazing to think that he was burning his own paintings to stay warm during winter.

Flakey Twitter and the use of consecutive ids

Friday, November 16th, 2007

Twitter was just inaccessible for maybe a couple of hours. Prior to that there was a 9-day gap in their timeline, noticed by at least a few people. I quite regularly have twitters I send not show up at all.

I wonder what could be going on over there? Things certainly don’t feel very stable.

A friend signed up tonight. Using the Twitter API you can see her id. It’s a bit over 10 million. You can also see the id of her first twitter, a bit over 417 million. The earliest twitter available on the system is number 20 “just setting up my twttr” sent at 20:50:14 on Tue Mar 21 2006 by Jack Dorsey who has user id 12 (the lowest user I’ve seen).

Given that Twitter seem to be using consecutive ids for users and twitters, and that you can pull dates out of their API, it would be pretty easy to make graphs showing growth in users and twitters over time. You could probably also infer downtime by looking for periods when no twitters appeared. This would be pretty easy too. Beyond a certain point in time it would be very accurate (i.e., when there are so many twitters arriving that a twittering gap is suspicious), and you could calculate confidence estimates.
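As a sketch of the downtime idea: given sampled (id, date) pairs, flag stretches where the time elapsed per id is far above the norm. The function and the threshold are mine, and this assumes, as above, that ids really are consecutive.

```python
from datetime import datetime, timedelta

def suspicious_gaps(samples, factor=10):
    """Given sampled (tweet_id, timestamp) pairs, return the (id1, id2)
    intervals whose seconds-per-id rate is more than `factor` times the
    median rate, hinting at possible downtime. Illustrative heuristic
    only, not anything Twitter actually exposes."""
    samples = sorted(samples)
    rates = []  # (seconds elapsed per id consumed, id1, id2)
    for (id1, t1), (id2, t2) in zip(samples, samples[1:]):
        if id2 > id1:
            rates.append(((t2 - t1).total_seconds() / (id2 - id1), id1, id2))
    if not rates:
        return []
    median = sorted(r for r, _, _ in rates)[len(rates) // 2]
    return [(id1, id2) for r, id1, id2 in rates if r > factor * median]
```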

I don’t have time for all that though.

But I wonder if Google did something like that as part of their competitive analysis when they decided to buy Jaiku, or if Twitter’s investors did it, and how the numbers would match up with whatever Twitter management might claim. I’ve no idea or opinion at all about any of that btw. But I don’t think I’d be exposing all that information by using consecutive ids for users and their twitters.

Stop in the middle of something

Thursday, November 15th, 2007

Over the years I’ve found little tricks to keep myself working efficiently.

For some reason I have it in mind that people think it’s good to finish off tasks before taking a break or heading to bed. I find that that’s a really bad strategy. If you finish what you’re doing before you stop, then when you re-start work, you’ll have to look for a next task to begin. There’s overhead in that no matter when you do it. But when you go back to work you have less momentum. It’s harder to begin on something new from a cold start.

I find I’m much more efficient if I leave tasks incomplete before taking a break. That way when I come back to them I just pick up where I was. So I’ll leave a line of code only half written, or open a quote and not close it so the syntax coloring in my editor looks wacky. Or I’ll simply stop writing in the middle of a sentence. With code you have the advantage that you can leave it syntactically incorrect, so there’s no chance you can forget to fix what you were doing.

If you’re heading to bed, it’s best to take your task to the point where what remains is just mechanical. That way you don’t continue to think about a problem when you should be falling asleep.

Right now, in another window, I’m running a bunch of unit tests. When I finish this blog entry I’m not going to go look at them, I’m going to bed. I’m pretty sure they’ll run successfully. In the morning I’ll have to do a few mechanical things – fix any failed tests and re-test, merge my changes, re-run tests, close a ticket, and remove a completed branch. By the time I’m done with that I’ll be in the swing of things and starting a new task will be easy.

Is Andrew Parker secretly running Union Square Ventures?

Wednesday, November 14th, 2007

Fred Wilson stirred up the entrepreneurial blogosphere 6 months ago with a series of posts wondering about the influence of founder age on startup success. I wrote one of my typically long comments.

Today I was making myself a coffee, and thinking about how fast/slow I can move, and how that’s changed over the years. When I was 24 I perhaps had more energy, but I often acted in a quite unfocused way. Now I’m 44, and I still have tons of energy. E.g., I was up coding last night until 6:30am, and then got up at 10am this morning and continued, so I’m not exactly loafing around with slippers and a pipe reflecting on my glory days. But I also have 3 kids, and other things going on. I have to act in a much more focused way or I couldn’t do the things I want to do.

But….., I then thought, the life of your average VC probably has some strong similarities. A couple of kids, insanely busy when working, regularly carving out quality time for family, needing to stay very organized and on top of things, needing to keep multiple balls rolling, etc. Those thoughts led me to reconsider Fred’s posting, but in the context of VCs.

Might it be that the best VC general partners would actually be a bunch of 24-year-olds? Of course they could have some older guys as analysts. What do 40-50-year-old VCs have that 20-30-year-olds don’t that makes them more qualified and better as VCs? If you want to argue that experience makes the older better, you probably need to argue that for entrepreneurs too. If you want to argue that the energy of youth makes for a better entrepreneur, you might need to argue that for VCs too. If you want to argue that young founders have unique insight into what products will be successful, you might think the same would be true of young VCs — if there were any.

It takes a massive amount of work to create and build a startup. Unless you’re a superstar, it’s also a huge amount of work to get funded. You have to go begging and scraping, on bended knee, hat in hand, to make mature and otherwise sober people with a lot of money believe in you. And that’s all done against a background of very steep odds. Similarly, it’s a massive amount of work to raise a venture fund. You have to make even more mature and more sober people with even more money believe in you. And you have to do it in a much less forgiving environment, also against steep odds.

Thirty or even twenty years ago, most CEOs would probably have scoffed at the idea that a 20-year-old could start and run a company, and sell it for tens or hundreds of millions, or even a billion, or take it public. We now know that that actually happens, and the idea that the very young can do it, including getting financial backing, is no longer foreign. Might not the same one day be true for fund managers? When will we see the first VC fund run by a couple of twenty-somethings? Will they exhibit a marked preference for funding older founders?

Back when Fred was posting, I pointed Howard Gutowitz to one of the postings. A couple of days later, Howard told me that he’d talked about it to his brother:

Robert made what is actually an interesting suggestion: get a figurehead 26 year old to be the CEO. Turn the old game around.

I think that’s pretty amusing.

Maybe Andrew Parker is actually running Union Square Ventures. Turn the old game around.

The young are different

Wednesday, November 14th, 2007

[These are some comments I wrote in reply to a Fred Wilson posting, The Age Question (final post). I’ve pulled them out here because I feel like it, and I want to link to them.]

I really don’t see what all the fuss is about. Fred posts some observations about the numbers he’s seeing, and people take it personally. It could hardly be clearer that he’s not ageist. This is shooting the messenger.

The world is a changing place, as usual. Technology and knowledge are better understood, better packaged, and more available, especially to youth, as usual. Kids of 15 can do things today (including starting companies, which takes 10 minutes on the web) that older people wouldn’t have dreamed of at 15, as usual.

There’s a tendency to let two questions overlap here. One is whether young people do better at starting and running companies, etc. The other is whether young people are inherently different simply because they’re young.

I don’t think there’s enough evidence on the first question to know the answer. Personally, I doubt that young inexperienced people are better than older more experienced people on this question.

But young people ARE different. They think differently. It’s very clear. The easiest way to see it is to ask yourself whether (at least with regard to technology) you’re more advanced than your own parents. Of course you are. Of course. How could you not be? Ask all your friends the same question, and they’ll probably all say the same thing. Now just extrapolate.

The “but we invented it” argument doesn’t hold water. Bicycles were invented many generations ago, and in many respects haven’t changed that much. But look at the innovations in cycling of the last 20 years! Kids are doing things on bicycles that people of my generation (I’m 43) never contemplated. Or take skateboards. The basic form hasn’t changed much in the last 20 years. Arguably, my generation invented the skateboard. But when we were kids the Ollie, grinding, etc. hadn’t been thought of, and the kinds of tricks that regular kids in the street are doing today would have blown our minds in the 70s. We had almost exactly the same gear. Back then, the ultimate in coolness was to do a few 360s and maybe a handstand. If a mundane object like a skateboard or a bicycle can be put to such novel use, who can doubt what the internet/web can be made to do and look like in the hands of people who push the envelope? It’s very clear that successive generations of users (aka “kids”) push the envelope.

The internet, and of course the web, are not mature technologies. Our kids are still discovering amazing new ways to use bikes 120 years after the invention of the “safety” bicycle. So we probably have a fair way to go on the internet/web. The sheer numbers alone argue that most of the innovation will come from people much younger than the current generation of users, some of whom are undoubtedly yet to be born, and none of whom will have had anything to do with the invention.

In summary: are youth better at starting and running companies? Don’t know. Are they different from us? Hell yes.

It’s probably useful to keep the two questions apart.

Another thought worth considering: in some cases (see above) it may take adults to invent something and bring it to market, but kids to figure out how to really use it.

Bicycle removal problem – solution

Tuesday, November 13th, 2007

I wrote earlier about a bicycle removal problem. I wasn’t exactly flooded with responses. Anyway, here’s my solution.

You buy a large collection of bike locks. Then you drive around the city and lock people’s bike locks to the bike racks (or poles or railings or whatever). People can still take their bikes and their locks away as usual, and your lock then falls to the ground (but is still attached to the bike rack). Once every X weeks your people go out and do a sweep – taking away bikes and bike fragments that still have your lock on their lock, recovering your locks that are no longer locked to other locks, and using these to lock the locks of bikes that do not have your lock on them.

You can put a brightly colored ring on your locks with a warning message: “This bike will be taken away in X weeks if it has not been used.”

Twittering from inside emacs

Monday, November 12th, 2007

I do everything I can from inside emacs. Lately I’ve been thinking a bit about the Twitter API and social graphs.

Tonight I went and grabbed python-twitter, a Python API for Twitter. Then I wrote a quick python script to post to Twitter:

import sys
import twitter

# Post the message given on the command line.
twit = twitter.Api(username='terrycojones', password='xxx')
twit.PostUpdate(sys.argv[1])

and an equally small emacs lisp function to call it:

(defun tweet (mesg)
  (interactive "MTweet: ")
  (call-process "tweet" nil 0 nil mesg))

so now I can M-x tweet from inside emacs, or simply run tweet from the shell.

Along the way I wrote some simple emacs hook functions to tweet whenever I visited a new file or switched into Python mode. I’m sure that’s not so interesting to my faithful Twitter followers, but it does raise interesting questions. I also thought about adding a mail-send-hook function to tweet every time I send a mail (and to whom). Probably not a good idea.

You can follow me on Twitter. Go on, you know you want to.

Anyway, Twitter is not the right place to publish information like this. Something more general would be nicer…

Beach, chicken, hospital

Sunday, November 11th, 2007

Today we ate roasted chicken sitting outside Opollo in Barceloneta. We sat in the sun on a wooden terrace right on the beach. The sand was one step from our table. Fergus and Amy were there and the 4 big kids ate and played for a couple of hours, Lucas in a t-shirt the whole time. It was 19°C (66F) when I headed down there at about 2pm.

Then we took Findus across the street to Hospital del Mar, where they filmed the hospital scenes for Todo Sobre Mi Madre. Findus has had a bit of fever following a vaccination. It took us about 10 minutes to get seen by a doctor. They did some tests on him, including blood oxygen, taking a snot sample and doing a viral analysis while we waited. It’s all extremely good, with several people looking after you, all very friendly and professional, etc. This is just walking in off the street on a Sunday afternoon. We were there about 90 minutes, and they gave us the results (a virus causing bronchitis), prescriptions, etc. Universal health care can never work, right? Socialized medicine is evil, right?

And then home. All walking, no need for a car, or even the metro. I love having such a local life here. I like having my kids grow up by the sea, and that they can play for hours in the sun on the beach in the winter. I haven’t owned a car for almost 12 years.

Things like that.

Multiplying with Roman numerals

Saturday, November 10th, 2007

I like thinking about the power of representation, particularly inside computers. I wrote about it earlier in the year and gave a couple of examples. Here’s another.

Think about how you might have done multiplication with Roman numerals. Why is it so difficult?

It’s not because multiplication is inherently so hard. Roman numerals were just a terribly awkward way to represent numbers. However, if you introduce the concept of a zero and use a positional representation, things become much easier.

Note that the problem hasn’t changed; only the representation has. A new representation can make things that look like problems go away.
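To make this concrete, here’s a small Python sketch (mine, not part of the argument above) that multiplies Roman numerals the only sensible way: convert to a positional representation, multiply, and convert back.

```python
# Multiply Roman numerals by switching representations:
# Roman -> positional integer -> multiply -> Roman.

ROMAN = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
         (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
         (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def from_roman(s):
    # A numeral smaller than its successor is subtractive (IV, XC, ...).
    total = 0
    for i, c in enumerate(s):
        v = VALUES[c]
        if i + 1 < len(s) and VALUES[s[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

def to_roman(n):
    # Greedily emit the largest numeral that fits.
    result = []
    for value, numeral in ROMAN:
        while n >= value:
            result.append(numeral)
            n -= value
    return ''.join(result)

def multiply_roman(a, b):
    return to_roman(from_roman(a) * from_roman(b))
```

For example, multiply_roman('XII', 'IV') gives 'XLVIII'. All the real work is in the change of representation; the multiplication itself is a single `*`.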

I claim that we are still using Roman numerals to manage information online (and on the desktop for that matter). Until we do something about it, we’ll probably continue butting our heads against the same problems and they’ll probably continue to appear intractable.

At Fluidinfo, everything we do is based on a new way to represent information.