Archive for November, 2008

Changing POV under Twitter

Wednesday, November 26th, 2008

One thing I’d like to be able to do in Twitter is change my point of view. That is, see what Twitter looks like from the POV of another user.

Given Twitter’s asymmetric follower model and the prevalence of @ messaging, it’s very common to run across a fragment of a conversation that seems potentially interesting. It’s also common not to be following the full set of people who are interacting.

For example, four people might be exchanging tweets on a subject, and you may follow just one of them. So you’ll see roughly one quarter of the thread. Right now, to get the context for the discussion you need to go take a look at the archives of the various people and try to piece the conversation together. You have to do this one tweeter at a time. Or you could temporarily follow the people involved and then page backwards through time to see the flow of tweets. With some work on the server side, Twitter could let you see this using the Twitter search interface (you’d need to put in the names of the various parties though).

It would be much simpler and much cooler to just click a link beside a user’s name and get that user’s POV. You’d see what they see, except for the people whose tweets are private and which you’re prevented from seeing by the Twitter permission system. Not only could you see more or all of a conversation, I bet it would be really interesting to see Twitter from someone else’s POV. You could click on the @Replies tab to see all replies to that user, etc. There’s no reason why not – it’s all public data, and you can easily fetch the @replies using the search interface. I think wandering around inside the Twitterverse jumping from the POV of one identity to another would be fascinating. It reminds me of wandering around inside the Wayback Machine, except it’s the present.

That would all be pretty easy to implement, even for a 3rd party using the Twitter API. It would be nice if Twitter were to implement it themselves. I could do the basics myself in a few hours, but I’d rather not. This is also something that could be accessed via a Firefox extension or Greasemonkey – install it and get an extra button next to every tweet. The button switches you to the POV of the tweeter.
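A rough sketch of the client-side part is easy to imagine. Below, `fetch_following` and `fetch_timeline` are hypothetical stand-ins for real Twitter API calls (assume each timeline is a newest-first list of `(timestamp, user, text)` tuples); the interesting bit is just merging the timelines of everyone the target user follows:

```python
import heapq

def fetch_following(user):
    # Hypothetical: would use the Twitter API to list who `user` follows.
    raise NotImplementedError

def fetch_timeline(user):
    # Hypothetical: would return [(timestamp, user, text), ...], newest
    # first, for `user`. Protected accounts would simply return nothing
    # unless the permission system lets you see them.
    raise NotImplementedError

def pov(user, fetch_following=fetch_following, fetch_timeline=fetch_timeline):
    # Approximate `user`'s POV: their own tweets merged with the
    # timelines of everyone they follow, newest first.
    timelines = [fetch_timeline(u)
                 for u in [user] + list(fetch_following(user))]
    return list(heapq.merge(*timelines, key=lambda tweet: tweet[0],
                            reverse=True))
```

A Greasemonkey button next to each tweet would then just replace the page content with `pov(tweeter)`.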

All we need is someone to build it.

I have several more Twitter blog posts I’d love to write. The most interesting, to me, is all about evolutionary biology, sex, and the meaning of life itself. But no time, no time. I’ve finally added a Twitter category to this blog, and was surprised to find 14 posts that fit it. Am I obsessed?

As usual, make sure you follow me :-)

A kinder and more consistent defer.inlineCallbacks

Friday, November 21st, 2008

Here’s a suggestion for making Twisted’s inlineCallbacks function decorator more consistent and less confusing. Let’s suppose you’re writing something like this:

    @defer.inlineCallbacks
    def func():
        # Do something (the body may or may not yield).
        pass

    result = func()

There are 2 things that could be better, IMO:

1. func may not yield. In that case, you get an AttributeError when inlineCallbacks tries to send() to something that’s not a generator. Or worse, the call to send might actually work, and do who knows what. I.e., func() could return an object with a send method but which is not a generator. For some fun, run some code that calls the following decorated function (see if you can figure out what will happen before you do):

    @defer.inlineCallbacks
    def f():
        class yes():
            def send(x, y):
                print 'yes'
                # accidentally_destroy_the_universe_too()
        return yes()

2. func might raise before it gets to its first yield. In that case you’ll get an exception thrown when the inlineCallbacks decorator tries to create the wrapper function:

    File "/usr/lib/python2.5/site-packages/twisted/internet/", line 813, in unwindGenerator
      return _inlineCallbacks(None, f(*args, **kwargs), Deferred())
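The reason the exception escapes at call time is a basic property of Python generators: a function is only a generator function if its body contains a yield, and a generator’s body doesn’t execute until it’s first advanced. A small illustration (plain Python, no Twisted needed):

```python
def plain():
    # No yield: this is an ordinary function, so the exception escapes
    # as soon as it's called -- i.e., while inlineCallbacks is still
    # evaluating f(*args, **kwargs).
    raise ValueError('boom')

def gen():
    # Has a yield: calling it just builds a generator; the raise only
    # happens on the first next()/send(), where it can be caught and
    # routed to an errback.
    raise ValueError('boom')
    yield

try:
    plain()
    raised_at_call_time = False
except ValueError:
    raised_at_call_time = True

g = gen()  # No exception here.
try:
    next(g)
    raised_on_advance = False
except ValueError:
    raised_on_advance = True
```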

There’s a simple and consistent way to handle both of these. Just have inlineCallbacks do some initial work based on what it has been passed:

    import types

    from twisted.internet import defer
    from twisted.python.util import mergeFunctionMetadata

    def altInlineCallbacks(f):
        def unwindGenerator(*args, **kwargs):
            deferred = defer.Deferred()
            try:
                result = f(*args, **kwargs)
            except Exception, e:
                # Case 2: f raised before (or instead of) yielding.
                deferred.errback(e)
                return deferred
            if isinstance(result, types.GeneratorType):
                return defer._inlineCallbacks(None, result, deferred)
            # Case 1: f didn't yield, so result is a plain value.
            deferred.callback(result)
            return deferred

        return mergeFunctionMetadata(f, unwindGenerator)

This has the advantage that (barring e.g., a KeyboardInterrupt in the middle of things) you’ll *always* get a deferred back when you call an inlineCallbacks decorated function. That deferred might have already called or erred back (corresponding to cases 1 and 2 above).

I’m going to use this version of inlineCallbacks in my code. There’s a case for making it into Twisted itself: inlineCallbacks is already cryptic enough in its operation that anything we can do to make it more uniform and less surprising is worth doing.

You might think that case 1 rarely comes up. But I’ve hit it a few times, usually when commenting out sections of code for testing. If you accidentally comment out the last yield in func, it no longer returns a generator and that causes a different error.
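That failure mode is easy to reproduce without Twisted. Whether a def produces a generator is decided by the mere presence of a yield in its body, which is exactly what an isinstance check against types.GeneratorType detects:

```python
import types

def with_yield():
    yield 42

def without_yield():
    # Imagine the function's only yield was commented out for testing.
    return 42

# with_yield() builds a generator; without_yield() returns a plain int,
# and calling .send() on an int raises AttributeError.
assert isinstance(with_yield(), types.GeneratorType)
assert not isinstance(without_yield(), types.GeneratorType)
```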

And case 2 happens to me too. Having inlineCallbacks try/except the call to func is nicer because it means I don’t have to be quite so defensive in coding. So instead of having to write

    try:
        d = func()
    except Exception:
        # Do something.

and try to figure out what happened if an exception fired, I can just write d = func() and add errbacks as I please (they then have to figure out what happened). The (slight?) disadvantage to my suggestion is that with the above try/except fragment you can tell if the call to func() raised before ever yielding. You can detect that, if you need to, with my approach if you’re not offended by looking at d.called immediately after calling func.

The alternate approach also helps if you’re a novice, or simply being lazy/careless/forgetful, and writing:

    d = func()

thinking you have your ass covered, but you actually don’t (due to case 2).

There’s some test code here that illustrates all this.

bzr viz is so nice

Thursday, November 20th, 2008

A year ago we switched from SVN to Bazaar for source code control. I started using source code control in 1989 with RCS after comparing it with SCCS. Then I duly moved to CVS and on to SVN. In retrospect, they all sucked pretty badly but each in turn was a big improvement and seemed great at the time.

The topic of source code control is a very complex one. There’s tons of debate online about the advantages of various packages. I don’t want to get into details, plus there are details that I don’t fully appreciate anyway. It really is complex – at least if you want to do anything even a little bit sophisticated, e.g., with multiple users working on multiple branches.

Anyway, we wanted to move away from SVN, which is cumbersome, too manual and heavyweight (at least in its handling of branches), and requires you to talk to a centralized server all the time. Plus it has no handling of directories or symbolic links, and you lose history in merging. There are other problems and annoyances too.

A distributed version control system seemed like the way to go.

We looked closely at Bazaar and Mercurial. I was prejudiced towards Mercurial. I liked its name, I liked the coolness of the Qwerty-symmetric hg command, and above all I liked how lightweight and simple it is. We took a quick look at Git, but it looked like a bit of a hodge-podge and we’re Python fanboys, so we fairly quickly decided against it.

From what I’ve read, all of Bazaar, Mercurial and Git are excellent. It’s clear that they leave SVN for dead. When I run across open source projects, especially new projects, that are still using SVN I silently raise an inner eyebrow.

But like I said, I don’t want to get into details. What I do want to do is say that I really like a plugin for Bazaar called viz (aka visualise). It’s in bzr-gtk in case you use apt-get.

You just type bzr viz and it pops a glorious window with a visualization of your branching and merging history. The image above is just a fragment of the full window. The most recent activity is at the top, so as you look down the page you’re looking at older and older branches and merges. On the left you see the branch numbers. The vertical lines are the branches, the left-most being the trunk (in this case). You can see that the 2 right-most branches have no activity in the fragment shown.

If you want to take a look at more of the window, showing a different part of the tree, click on the following image.

Not only does Bazaar make branching really lightweight, it takes all the uncertainty out of the process (ever try merging branches in SVN, reading the log file to make sure you’ve got the right revision numbers before entering the extremely long command?). Plus you get full history when merging (and this is nicely displayed in the output of bzr log) and with a tool like bzr viz you can just see the history. Our tree has some much more complex sections, including one where Esteve had 25 branches going at the same time! And yes, they all got merged to trunk. Bazaar makes branching and merging so simple you just start to do it all the time, and it becomes very natural. Then you just merge whatever you like into whatever you like and gradually merge your way back into the trunk (after merging the trunk into your branch first to have a look at things). It’s great.

That’s it. No time for blogging. I’m waiting for someone to upload a patch so I can continue working. Meanwhile, lightweight distributed version control has really changed how we work. It’s much much better. If you’re still using SVN and haven’t checked out Bazaar or Mercurial (and there are several others), you really should.

Passion and the creation of highly non-uniform value

Monday, November 10th, 2008

Here, finally, are some thoughts on the creation of value. I don’t plan to do as good a job as the subject merits, but if I don’t take a rough stab at it, it’ll never happen.

I’ll first explain what I mean by “the creation of highly non-uniform value”. I’m talking about ideas that create a lot of (monetary) value for a very small number of people. If you made a graph and on the X axis put all the people in the world, in sorted order of how much they make from an idea, and on the Y axis you put value they each receive, we’re talking about distributions that look like the image on the right.

In other words, a setting in which a very small number of people try to get extremely rich. I.e., startup founders, a few key employees, their investors, and their investors’ investors. BTW, I don’t want to talk about the moral side of this, if there is one. There’s nothing to stop the obscenely rich from giving their money away or doing other charitable things with it.

So let’s just accept that many startup founders, and (in theory) all venture investors, are interested in turning ideas into wealth distributions that look like the above.

I was partly beaten to the punch on this post by Paul Graham in his essay Why There Aren’t More Googles? Paul focused on VC caution, and with justification. But there’s another important part of the answer.

One of the most fascinating things I’ve heard in the last couple of years is an anecdote about the early Google. I wrote about it in an earlier article, The blind leading the blind:

…the Google guys were apparently running around search engine companies trying to sell their idea (vision? early startup?) for $1M. They couldn’t find a buyer. What an extraordinary lack of.. what? On the one hand you want to laugh at those idiot companies (and VCs) who couldn’t see the huge value. OK, maybe. But the more extraordinary thing is that Larry Page and Sergey Brin couldn’t see it either! That’s pretty amazing when you think about it. Even the entrepreneurs couldn’t see the enormous value. They somehow decided that $1M would be an acceptable deal. Talk about a lack of vision and belief.

So you can’t really blame the poor VCs or others who fail to invest. If the founding tech people can’t see the value and don’t believe, who else is going to?

I went on to talk about what seemed like it might be a necessary connection between risk and value.

Image: Lost Tulsa

Following on…

After more thought, I’m now fairly convinced that I was on the right track in that post.

It seems to me that the degree to which a highly non-uniform wealth distribution can be created from an idea depends heavily on how non-obvious the value of the idea is.

If an idea is obviously valuable, I don’t think it can create highly non-uniform wealth. That’s not to say that it can’t create vast wealth, just that the distribution of that wealth will be more widely spread. Why is that the case? I think it’s true simply because the value will be apparent to many people, there will be multiple implementations, and the value created will be spread more widely. If the value of an idea is clear, others will be building it even as you do. You might all be very successful, but the distribution of created value will be more uniform.

Obviously it probably helps if an idea is hard to implement too, or if you have some other barrier to entry (e.g., patents) or create a barrier to adoption (e.g., users getting positive reinforcement from using the same implementation).

I don’t mean to say that an idea must be uniquely brilliant, or even new, to generate this kind of wealth distribution. But it needs to be the kind of proposition that many people look at and think “that’ll never work.” Even better if potential competitors continue to say that 6 months after launch and there’s only gradual adoption. Who can say when something is going to take off wildly? No-one. There are highly successful non-new ideas, like the iPod or YouTube. Their timing and implementation were somehow right. They created massive wealth (highly non-uniformly distributed in the case of YouTube), and yet many people wrote them off early on. It certainly wasn’t plain sailing for the YouTube founders – early adoption was extremely slow. Might Twitter, a pet favorite (go on, follow me), create massive value? Might Mahalo? Many people would have found that idea ludicrous 1-2 years ago – but that’s precisely the point. Google is certainly a good example – search was supposedly “done” in 1998 or so. We had Alta Vista, and it seemed great. Who would’ve put money into two guys building a search engine? Very few people.

If it had been obvious the Google guys were doing something immensely valuable, things would have been very different. But they traveled around to various companies (I don’t have this first hand, so I’m imagining), showing a demo of the product that would eventually create $100-150B in value. It wasn’t clear to anyone that there was anything like that value there. Apparently no-one thought it would be worth significantly more than $1M.

I’ve come to the rough conclusion that that sort of near-universal rejection might be necessary to create that sort of highly non-uniform wealth distribution.

There are important related lessons to be learned along these lines from books like The Structure of Scientific Revolutions and The Innovator’s Dilemma.

Now back to Paul’s question: Why aren’t there more Googles?

Part of the answer has to be that value is non-obvious. Given the above, I’d be willing to argue (over beer, anyway) that that’s almost by definition.

So if value is non-obvious, even to the founders, how on earth do things like this get created?

The answer is passion. If you don’t have entrepreneurs who are building things out of sheer driving passion, then hard projects that require serious energy, sacrifice, and risk-taking simply won’t be built.

As a corollary, big companies are unlikely to build these things – because management is constantly trying to assess value. That’s one reason to rue the demise of industrial research, and a reason to hope that cultures that encourage people to work on whatever they want (e.g., Google, Microsoft research) might be able to one day stumble across this kind of value.

This gets me to a recent posting by Tim Bray, which encourages people to work on things they care about.

It’s not enough just to have entrepreneurs who are trying to create value. As I’m trying to say, practically no-one can consistently and accurately predict where undiscovered value lies (some would argue that Marc Andreessen is an exception). If it were generally possible to do so, the world would be a very different place – the whole startup scene and venture/angel funding system would be different, supposing they even existed. Even if it looks like a VC or entrepreneur can infallibly put their finger on undiscovered value, they probably can’t. One-time successful VCs and entrepreneurs go on to attract disproportionately many great companies, employees, funding, etc., the next time round. You can’t properly separate their raw ability to see undiscovered value from the strong bias towards excellence in the opportunities they are later afforded. Successful entrepreneurs are often refreshingly and encouragingly frank about the role of luck in their success. They’re done. VCs are much less sanguine – they’re supposed to have natural talent, they’re trying to manufacture the impression that they know what they’re doing. They have to do that in order to get their limited partners to invest in their funds. For all their vaunted insight, roughly only 25% of VCs provide returns that are better than the market. The percentage generating huge returns will of course be much smaller, as in turn will be those doing so consistently. I reckon the whole thing’s a giant crap shoot. We may as well all admit it.

I have lots of other comments I could make about VCs, but I’ll restrict myself to just one as it connects back to Paul’s article.

VCs who claim to be interested in investing in the next Google cannot possibly have the next Google in their portfolio unless they have a company whose fundamental idea looks like it’s unlikely to pan out. That doesn’t mean VCs should invest in bad ideas. It means that unless VCs make bets on ideas that look really good – but which are e.g., clearly going to be hard to build, will need huge adoption to work, appear to be very risky long-shots, etc. – then they can’t be sitting on the next Google. It also doesn’t mean VCs must place big bets on stuff that’s highly risky. A few hundred thousand can go a long way in a frugal startup.

I think this is a fundamental tradeoff. You’ll very frequently hear VCs talk about how they’re looking for companies that are going to create massive value (non-uniformly distributed, naturally), with massive markets, etc. I think that’s pie in the sky posturing unless they’ve already invested in, or are willing to invest in, things that look very risky. That should be understood. And so a question to VCs from entrepreneurs and limited partners alike: if you claim to be aiming to make massive returns, where are your necessary correspondingly massively risky investments? Chances are you won’t find any.

There is a movement in the startup investment world towards smaller funds that make smaller investments earlier. I believe this movement is unrelated to my claim about non-obviousness and highly non-uniform returns. The trend is fuelled by the realization that lots of web companies are getting going without the need for traditional levels of financing. If you don’t get in early with them, you’re not going to get in at all. A big fund can’t make (many) small investments, because their partners can’t monitor more than a handful of companies. So funds that want to play in this area are necessarily smaller. I think that makes a lot of sense. A perhaps unanticipated side effect of this is that things that look like they may be of less value end up getting small amounts of funding. But on the whole I don’t think there’s a conscious effort in that direction – investors are strongly driven to select the least risky investment opportunities from the huge number of deals they see. After all, their jobs are on the line. You can’t expect them to take big risks. But by the same token you should probably ignore any talk of “looking for the next Google”. They talk that way, but they don’t invest that way.

Finally, if you’re working on something that’s being widely rejected or whose value is being widely questioned, don’t lose heart (instead go read my earlier posting) and don’t waste your time talking to VCs. Unless they’re exceptional and serious about creating massive non-uniformly distributed value, and they understand what that involves, they certainly won’t bite.

Instead, follow your passion. Build your dream and get it out there. Let the value take care of itself, supposing it’s even there. If you can’t predict value, you may as well do something you really enjoy.

Now I’m working hard to follow my own advice.

I had to learn all this the hard way. I spent much of 2008 on the road trying to get people to invest in Fluidinfo, without success. If you’re interested to know a little more, earlier tonight I wrote a Brief history of an idea to give context for this posting.

That’s it for now. Blogging is a luxury I can’t afford right now, not that I would presume to try to predict which way value lies.

Expecting and embracing startup rejection

Sunday, November 9th, 2008

When I was younger, I didn’t know what to make of it when people rejected my ideas. Instead of fighting it, trying again, or improving my delivery, I’d just conclude that the rejector was an idiot, and that it was their loss if they didn’t get it.

For example, I put considerable time and effort into writing academic papers, several of which were rejected, to my surprise. I’d never considered that the papers might not be accepted. When this happened, I wouldn’t re-submit them or try to re-publish them. By then I would usually have moved on to doing something else anyway.

When I applied for jobs, it never entered my mind that I might not be wanted. How could anyone not want me? After a couple of years working on my current ideas, I applied for a computer science faculty position at over 40 US universities. I refused to emphasize my well-received and published Ph.D. work, of which I was and am still proud, because I was no longer working in that area.

I was convinced the new ideas would be recognized as being strong.

But guess what? I was summarily rejected by all 40+ universities. I only got one interview, at RPI. No other school even wanted to meet me. I kept all the rejection letters. I still have them. (Amusingly, I was swapping emails with Ben Bederson earlier this year and it transpired that he’d had the same experience, also with 40 universities, and he too kept all his rejection letters!)

You never learn more than when you’re being humbled.

I’ve now returned to those same ideas and have been working on them for the last 3 years. In January 2007 I went and met with a couple of the most appropriately visionary VCs to tell them what I was building. I was naïve enough to think they might back me at that early point. Wrong. They suggested I come back with a demo to concretely illustrate what the system would allow people to do. That was easier said than done – the system is not simple. I spent 2007 building the core engine, a 90% fully-functional demo of the major application, several smaller demo apps (including a Firefox toolbar extension built by Esteve Fernandez), and added about 20 sample data sets to further illustrate possibilities.

That’ll show ’em, right? I went out in November 2007 armed with all this, and began talking to a variety of potential investors. I was sure VCs would be falling over themselves to invest, especially given that we were working on some mix of innovative search, cloud computation, APIs, and various Web 2.0 concepts, and that tons of VCs claimed to be looking for the Next Big Thing in search, and for Passionate Entrepreneurs tackling Hard Problems who wanted to build Billion Dollar Companies, etc., etc.

You guessed it. Over the next year literally dozens of potential investors all said no. The demo wasn’t enough. Would people use it? Could we build the real thing? Would it scale? Where was the team? What are you doing in Barcelona? “Looks fascinating, do please let us know when you’ve released it and are seeing adoption,” they almost universally told me. The standout exception to this was Esther Dyson, who agreed to invest immediately after seeing the demo, and whose courage I hope I can one day richly reward.

What to make of all this rejection?

One thing that became clear is that if you’re smarter than average, you’ll almost by definition be constantly thinking of things too early. Maybe many years too early. Your ideas will seem strange, doubtful, and perhaps plain wrong to many people.

This makes you realize how important timing is.

Being right with an idea too early and trying to build a startup around it is similar to correctly knowing a company is going to fail, and immediately rushing out to short its stock. Even though you’re right, you can be completely wiped out if the stock’s value rises in the short term. You were brilliant, insightful, and 100% correct – but you were too early.

Getting timing right can clearly be partly based on calculation and reason. But given that many startups are driven by founder passion, I think luck in timing plays an extremely important role in startup success. And the smarter and more far-sighted you are, the greater the chance that your timing will be wrong.

So that’s the first thing to understand: if you’re smarter than average, your ideas will, on average, be ahead of their time. Some level of rejection comes with the territory.

But I’d go much further than that, and claim that if you are not seeing a very high level of rejection in trying to get a new idea off the ground, you’re probably not working on anything that’s going to change the world or be extremely valuable.

That might sound like an outrageous extrapolation (or even wishful thinking, given my history). Later tonight I plan to explain this claim in a post on the connections between passion, value, non-obviousness, and rejection. That’s the subject I really want to write about.

For now though, I simply want to say that I’ve come to understand that having one’s ideas regularly rejected is a good sign. It tells you you’re either on a fool’s errand, or that you’re doing something that might actually be valuable and important.

If you’re not going to let rejection get you down, you might content yourself by learning to ignore it. But you can do better. You can come to regard it as positive and affirming. Without becoming pessimistic or in any way accepting defeat, you can come to expect to be rejected and even to embrace it.

If you can do that, rejection loses its potential for damage. As Paul Graham pointed out, the impact of disappointment can destroy a startup. That’s an important observation, and a part of why startups can be so volatile and such a wild ride.

I don’t mean to suggest that you don’t also do practical things with rejection too – like learn from it. That’s very important and will help you shape your product, thoughts, presentation, expectations, etc. Again, see Paul’s posting.

But I think the mental side of rejection is more important than the practical. The mental side has more destructive potential. You have to figure out how to deal with it. If you look at it the right way you can turn it into something that’s almost by definition positive, as I’ve tried to illustrate.

In a sense I even relish it, and use it for fuel. There are little tricks I sometimes use to keep myself motivated. I even keep a list of them (and no, you can’t see it). One is imagining that some day all the people who rejected me along the way will wring their hands in despair at having missed such an opportunity :-)

I’ve not been universally rejected, of course. There are lots of people who know what we’re doing and are highly supportive (more on them at a later point). If I’d been universally rejected, or rejected by many well-known people whose opinions I value, I probably would have stopped by now.

I’ve had to learn to see a high level of rejection as not just normal but a necessary (but not sufficient!) component of a correct belief that you’re doing something valuable.

Stay tuned for the real point of this flurry of blogging activity.

Brief history of an idea

Sunday, November 9th, 2008

In 1997 I had a fairly simple idea.

I spent a year thinking about it obsessively: where it might lead, what it would enable, how to implement it, etc. As I considered and refined it during this time, I began to appreciate its generality and power. I soon realized you could use it computationally to do anything and everything.

By day I was running Teclata S.L., a too-early web company in Barcelona. By night, very often until 4 or 5am, I was up thinking and writing, trying to get a better grip on what I’d dreamed up. I wrote hundreds of pages of notes. That’s a year of thinking, with no coding at all.

Teclata was killing me. I was burning to work on my ideas instead. I was exhausted, and several times literally in tears with the frustration. So in 1998 we sold the company and I started as a postdoc in Jim Hollan‘s Distributed Cognition and Human Computer Interaction lab in the Department of Cognitive Science at UCSD. I spent a year there writing code to build a prototype. I worked typically 15 hours a day for a whole year, producing roughly 30K lines of C code. During that year I think I went out only a few nights, saw exactly one movie, and made no new friends apart from Al Davis, a brilliant homeless guy who hung around campus. The prototype implementation worked, and I built some trivial applications on top of it.

I’ve now spent most of 2006-2008 back working on the same ideas, founding Fluidinfo in the process.

From March 2006 I began re-coding my UCSD work in Python. During 2007 I built a demo version, writing the core engine myself. One other person, Daniel Burr, worked on building a web front-end for a particularly important application. That was another 30K lines of code. To fund that effort, we sold a small apartment we owned in Barcelona and I took a small investment from my parents. During 2008, with funding from Esther Dyson, I’ve worked on a full distributed implementation, with Esteve Fernandez. We’re getting there, and are planning to release something in early 2009. I can’t wait to be able to talk about it all more openly.

That’s 5 years of my life working to reduce this idea to practice.

The opportunity cost in the last few years has been huge. The career risk feels huge. The salary cost has definitely been huge. I’m 45, and my most expensive personal possession is either my CD player or a €200 bicycle I bought about 5 years ago (I have a few computers – but they’re all company/job bought). Plus, I have 3 kids.

I have many other blog posts I’d love to write about this journey (e.g., pond scum), but blogging is a low priority right now. I’ve just jotted down these notes in order to point to them from an upcoming post on the importance of passion, innovation, and the creation of value.

I’ve had plenty of time to think about the subject.

[Update: here are the follow-on posts: one on rejection and one on passion and value.]

A Python metaclass for Twisted allowing __init__ to return a Deferred

Monday, November 3rd, 2008

OK, I admit, this is geeky.

But we’ve all run into the situation in which you’re using Python and Twisted, and you’re writing a new class and you want to call something from the __init__ method that returns a Deferred. This is a problem. The __init__ method is not allowed to return a value, let alone a Deferred. While you could just call the Deferred-returning function from inside your __init__, there’s no guarantee of when that Deferred will fire. Seeing as you’re in your __init__ method, it’s a good bet that you need that function to have done its thing before you let anyone get their hands on an instance of your class.

For example, consider a class that provides access to a database table. You want the __init__ method to create the table in the db if it doesn't already exist. But if you're using Twisted's twisted.enterprise.adbapi module, the ConnectionPool's runInteraction method returns a Deferred. You can call it to create the tables, but you don't want the instance of your class back in the hands of whoever's creating it until the table is created. Otherwise they might call a method on the instance that expects the table to be there.

A cumbersome solution would be to add a callback to the Deferred you get back from runInteraction and have that callback add an attribute to self to indicate that it is safe to proceed. Then all your class methods that access the db table would have to check to see if the attribute was on self, and take some alternate action if not. That's going to get ugly very fast. Plus, your caller has to deal with you potentially not being ready.
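To make the ugliness concrete, here's a toy sketch of that ready-flag approach. The FakeDeferred class below is a minimal stand-in for Twisted's Deferred (so the example runs standalone), and the Table class and its methods are hypothetical, not from any real code:

```python
class FakeDeferred:
    # Toy stand-in for a Twisted Deferred: callbacks added before the
    # result arrives are queued and run when callback() fires.
    def __init__(self):
        self._callbacks = []
        self._fired = False
        self._result = None

    def addCallback(self, fn):
        if self._fired:
            fn(self._result)
        else:
            self._callbacks.append(fn)
        return self

    def callback(self, result):
        self._fired = True
        self._result = result
        for fn in self._callbacks:
            fn(result)


class Table:
    # The 'ready flag' approach: every db method must check the flag.
    def __init__(self, creationDeferred):
        self.ready = False
        creationDeferred.addCallback(self._madeTable)

    def _madeTable(self, _):
        self.ready = True

    def rows(self):
        if not self.ready:  # this check has to appear in every method
            raise RuntimeError('table not created yet')
        return ['row1', 'row2']


d = FakeDeferred()
table = Table(d)  # the caller already holds a not-yet-usable instance
try:
    table.rows()
    tooEarly = False
except RuntimeError:
    tooEarly = True

d.callback(None)  # simulate the CREATE TABLE finishing
print(tooEarly, table.rows())
```

The caller gets the instance immediately, but any method call before the Deferred fires blows up (or has to take some alternate action), which is exactly the mess the metaclass below avoids.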

I ran into this problem a couple of days ago and after scratching my head for a while I came up with an idea for how to solve this pretty cleanly via a Python metaclass. Here’s the metaclass code:

from twisted.internet import defer

class TxDeferredInitMeta(type):
    def __new__(mcl, classname, bases, classdict):
        hidden = '__hidden__'
        instantiate = '__instantiate__'
        for name in hidden, instantiate:
            if name in classdict:
                raise Exception(
                    'Class %s contains an illegally-named %s method' %
                    (classname, name))
        try:
            origInit = classdict['__init__']
        except KeyError:
            origInit = lambda self: None
        def newInit(self, *args, **kw):
            hiddenDict = dict(args=args, kw=kw, __init__=origInit)
            setattr(self, hidden, hiddenDict)
        def _instantiate(self):
            def addSelf(result):
                return (self, result)
            hiddenDict = getattr(self, hidden)
            d = defer.maybeDeferred(hiddenDict['__init__'], self,
                                    *hiddenDict['args'], **hiddenDict['kw'])
            return d.addCallback(addSelf)
        classdict['__init__'] = newInit
        classdict[instantiate] = _instantiate
        return super(TxDeferredInitMeta, mcl).__new__(
            mcl, classname, bases, classdict)

I’m not going to explain what it does here. If it’s not clear and you want to know, send me mail or post a comment. But I’ll show you how you use it in practice. It’s kind of weird, but it makes sense once you get used to it.

First, we make a class whose metaclass is TxDeferredInitMeta and whose __init__ method returns a deferred:

class MyClass(object):
    __metaclass__ = TxDeferredInitMeta
    def __init__(self):
        d = aFuncReturningADeferred()
        return d

Having __init__ return anything other than None is illegal in normal Python classes. But this is not a normal Python class, as you will now see.
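You can see that rule for yourself in an ordinary class. Python refuses at construction time:

```python
class Bad(object):
    def __init__(self):
        # Returning a value from __init__ is illegal in a normal class.
        return 42

try:
    Bad()
    raised = False
except TypeError:
    raised = True

print(raised)  # → True
```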

Given our class, we use it like this:

def cb((instance, result)):
    # instance is an instance of MyClass.
    # result is from the callback chain of aFuncReturningADeferred.
    pass

d = MyClass().__instantiate__()
d.addCallback(cb)

That may look pretty funky, but if you're used to Twisted it won't seem too bizarre. What's happening is that when you ask to make an instance of MyClass, you get back an instance of a regular Python class. It has a method called __instantiate__ that returns a Deferred. You add a callback to that Deferred and that callback is eventually passed two things. The first is an instance of MyClass, as you requested. The second is the result that came down the callback chain from the Deferred that was returned by the __init__ method you wrote in MyClass.

The net result is that you have the value of the Deferred and you have your instance of MyClass. It’s safe to go ahead and use the instance because you know the Deferred has been called. It will probably seem a bit odd to get your instance later as a result of a Deferred firing, but that’s perfectly in keeping with the Twisted way.
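If you want to poke at the mechanics without installing Twisted, here is a self-contained sketch of the same idea. It uses Python 3 metaclass syntax, a toy FakeDeferred in place of Twisted's Deferred, and my own simplified DeferredInitMeta rather than the real TxDeferredInitMeta above (no name-collision check, no maybeDeferred), so treat it as an illustration, not a drop-in:

```python
class FakeDeferred:
    # Toy stand-in for a Twisted Deferred with a chaining addCallback.
    def __init__(self):
        self._callbacks = []
        self._fired = False
        self._result = None

    def addCallback(self, fn):
        if self._fired:
            self._result = fn(self._result)
        else:
            self._callbacks.append(fn)
        return self

    def callback(self, result):
        self._fired = True
        self._result = result
        for fn in self._callbacks:
            self._result = fn(self._result)


class DeferredInitMeta(type):
    # Replace __init__ so construction only stashes the arguments; the
    # real init runs when __instantiate__ is called, and its Deferred
    # fires with (instance, result).
    def __new__(mcl, name, bases, classdict):
        origInit = classdict.get('__init__', lambda self: None)

        def newInit(self, *args, **kw):
            self._initArgs = (args, kw)

        def instantiate(self):
            args, kw = self._initArgs
            d = origInit(self, *args, **kw)  # returns a FakeDeferred
            return d.addCallback(lambda result: (self, result))

        classdict['__init__'] = newInit
        classdict['__instantiate__'] = instantiate
        return super().__new__(mcl, name, bases, classdict)


def aFuncReturningADeferred():
    d = FakeDeferred()
    d.callback('table created')  # fire immediately, just for the demo
    return d


class MyClass(metaclass=DeferredInitMeta):
    def __init__(self):
        return aFuncReturningADeferred()


results = []
d = MyClass().__instantiate__()
d.addCallback(lambda pair: results.append(pair))
instance, result = results[0]
print(isinstance(instance, MyClass), result)
```

The shape of the dance is the same: construction hands you a plain instance, __instantiate__ hands you a Deferred, and the callback receives both the instance and the result of the deferred init work.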

That's it for now. You can grab the code and a trial test suite to put it through its paces. The code could be cleaned up somewhat, and made more general. There is a caveat to using it: your class can't have its own __hidden__ or __instantiate__ methods. That could be improved, but I'm not going to bother for now, unless someone cares.