
iteranything

12:22 May 7th, 2007 by terry. Posted under python, tech. Comments Off on iteranything

Here’s a Python function to iterate over pretty much anything. In the extremely unlikely event that anyone uses this code, note that if you pass keyword arguments the order of the resulting iteration is not defined (as with iterating through any Python dictionary).

from itertools import chain
import types

def iteranything(*args, **kwargs):
    # Python 2. Positional arguments and keyword argument values are treated
    # alike; keyword values come out in whatever order the dict yields them.
    for arg in chain(args, kwargs.itervalues()):
        t = type(arg)
        if t == types.StringType:
            # Strings are iterable, but yield them whole, not char by char.
            yield arg
        elif t == types.FunctionType:
            # A function (e.g., a generator function): call it, yield its results.
            for i in arg():
                yield i
        else:
            try:
                i = iter(arg)
            except TypeError:
                # Not iterable at all: yield the value itself.
                yield arg
            else:
                # Any other iterable (list, generator, etc.): yield its elements.
                while True:
                    try:
                        yield i.next()
                    except StopIteration:
                        break

if __name__ == '__main__':
    def gen1():
        yield 1
        yield 2

    def gen2():
        yield 3
        yield 4

    assert list(iteranything()) == []
    assert list(iteranything([])) == []
    assert list(iteranything([[]])) == [[]]
    assert list(iteranything([], [])) == []
    assert list(iteranything(3)) == [3]
    assert list(iteranything(3, 4)) == [3, 4]
    assert list(iteranything(3, 4, dog='fido')) == [3, 4, 'fido']
    assert list(iteranything(3, 4, func=gen1)) == [3, 4, 1, 2]
    assert list(iteranything(3, 4, func=gen1())) == [3, 4, 1, 2]
    assert list(iteranything(3, 4, func=iteranything)) == [3, 4]
    assert list(iteranything(3, 4, func=iteranything())) == [3, 4]
    assert (list(iteranything(3, 4, func=iteranything('a', 'b', c='z'))) ==
            [3, 4, 'a', 'b', 'z'])
    assert list(iteranything(3, 4, func=iteranything('a',
        iteranything(5, 6), c='z'))) == [3, 4, 'a', 5, 6, 'z']
    assert list(iteranything(None, 'xxx', True)) == [None, 'xxx', True]
    assert list(iteranything(3, 4, [5, 6])) == [3, 4, 5, 6]
    assert list(iteranything(3, 4, gen1, gen2)) == [3, 4, 1, 2, 3, 4]
    assert list(iteranything(3, 4, gen1(), gen2())) == [3, 4, 1, 2, 3, 4]
    assert (list(iteranything(1, 2, iteranything(3, 4), None)) ==
            [1, 2, 3, 4, None])
    assert list(iteranything(1, 2, iteranything(3, iteranything(1, 2,
        iteranything(3, 4), None)))) == [1, 2, 3, 1, 2, 3, 4, None]

why data (information representation) is the key to the coming semantic web

01:51 March 19th, 2007 by terry. Posted under me, representation, tech. 5 Comments »

In my last posting I argued that we should drop all talk about Artificial Intelligence when discussing the semantic web, web 3.0, etc., and acknowledge that in fact it’s all about data. There are two points in that statement. I was scratching an itch and so I only argued one of them. So what about my other claim?

While I’m not ready to describe what my company is doing, there’s a lot I can say about why I claim that data is the important thing.

Suppose something crops up in the non-computational “real-world” and you decide to use a computer to help address the situation. An inevitable task is to take the real-world situation and somehow get it into the computational system so the computer can act on it. Thus one of the very first tasks we face when deciding to use a computer is one of representation. Given information in the real world, we must choose how to represent it as data in a computer. (And it always is a choice.)

So when I say that data is important, I’m mainly referring to information representation. In my opinion, representation is the unacknowledged cornerstone of problem solving and algorithms. It’s fundamentally important and yet it’s widely ignored.

When computer scientists and others talk about problem solving and algorithms, they usually ignore representation. Even in the genetic algorithms community, in which representation is obviously needed and is a required explicit choice, the subject receives little attention. But if you think about it, in choosing a representation you have already begun to solve the problem. In other words, representation choice is a part of problem solving. But it’s never talked about as being part of a problem-solving algorithm. In fact though, if you choose your representation carefully the rest of the problem may disappear or become so trivial that it can be solved quickly by exhaustive search. Representation can be everything.

To illustrate why, here are a couple of examples.

Example 1. Suppose I ask you to use a computer to find two positive integers that have a sum of 15 and a product of 56. First, let’s pick some representation of a positive integer. How about a 512-bit binary string for each integer? That should cover it, I guess. We’ll have two of them, so that will be 1,024 bits in our representation. And here’s an algorithm, more or less: repeatedly set the 1,024 bits at random and add the corresponding integer values to see if they sum to 15. If so, multiply them and check the product too.

But wait, wait, wait… even my 7-year-old could tell you that’s not a sensible approach. It will work, eventually. The search space has 2^1024 candidate solutions. Even if we test a billion billion billion of them per second, it’s going to take much longer than a billion years.

Instead, we could think a little about representation before considering what would classically be called the algorithm. Aha! It turns out we could actually represent each integer using just 4 bits, without risk of missing the solution. Then we can use our random (or an exhaustive) search algorithm, and have the answer in about a billionth of a second. Wow.
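
Here’s a minimal sketch of that second approach (the code and the names in it are mine, just for illustration, not from the original post):

def find_pair(target_sum=15, target_product=56, bits=4):
    # Exhaustive search over the 4-bit representation: at most 16 * 16
    # candidate pairs ever need to be examined.
    limit = 1 << bits  # 16 possible values per integer
    for a in range(1, limit):        # positive integers only
        for b in range(a, limit):
            if a + b == target_sum and a * b == target_product:
                return a, b
    return None

print(find_pair())  # -> (7, 8)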

Of course this is a deliberately extreme example. But think about what just happened. The problem and the algorithm are the same in both of the above approaches. The only thing that changed was the representation. We coupled the stupidest possible algorithm with a good representation and the problem became trivial.

Example 2. Consider the famous Eight Queens problem (8QP). That’s considerably harder than the above problem. Or is it?

Let’s represent a chess board in the computer using a 64-bit string, and make sure that exactly 8 bits are set to one to indicate the presence of a queen. We’ll devise a clever algorithm for coming up with candidate 64-bit solutions, and write code to check them for correctness. But the search space is 2^64, and that’s not a small number. It could easily take a year to run through that space, so the algorithm had better be pretty good!

But wait. If you put a queen in row R and column C, no other queen can be in row R or column C. Following that line of thinking, you can see that all possibly valid solutions can be represented by a permutation of the numbers 1 through 8. The first number in the permutation gives the column of the queen in the first row, and so on. There are only 8! = 40,320 possible arrangements that need to be checked. That’s a tiny number. We could program it up, use exhaustive search as our algorithm, and have a solution in well under a second!
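
Here’s a rough sketch of that approach too (again, the code is mine, just to illustrate the point): generate the permutations, check only the diagonals, and take the first hit.

from itertools import permutations

def solve_eight_queens(n=8):
    # perm[row] is the column of the queen in that row, so rows and columns
    # can never clash by construction; only the diagonals need checking.
    for perm in permutations(range(n)):
        if len(set(perm[row] + row for row in range(n))) == n and \
           len(set(perm[row] - row for row in range(n))) == n:
            return perm
    return None

print(solve_eight_queens())  # e.g. (0, 4, 7, 5, 2, 6, 1, 3)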

Once again, a change of representation has a radical impact on what people would normally think of as the problem. But the problem isn’t changing at all. What’s happening is that when you choose a representation you have actually already begun to solve the problem. In fact, as the examples show, if you get the representation right enough the “problem” pretty much vanishes.

These are just two simple examples. There are many others. You may not be ready to generalize from them, but I am.

I think fundamental advances based almost solely on improved representation lie just ahead of us.

I think that if we adopt a better representation of information, things that currently look impossible may even cease to look like problems.

There are other people who seem to believe this too, though perhaps implicitly. Web 3.0, whatever that is, can bring major advances without anyone needing to come up with new algorithms. Given a better representation we could even use dumb algorithms (though perhaps not pessimal algorithms) and yet do things that we can’t do with “smart” ones. I think this is the realization, justifiably exciting, that underlies the often vague talk of “web 3.0”, the “read/write web”, the “data web”, “data browsing”, the infinite possible futures of mash-ups, etc.

This is why, to pick the most obvious target, I am certain that Google is not the last word in search. It’s probably not a smart idea to try to be smarter than Google. But if you build a computational system with a better underlying representation of information you may not need to be particularly intelligent at all. Things that some might think are related to “intelligence”, including the emergence of a sexy new “semantic” web, may not need much more than improved representation.

Give a 4-year-old a book with a 90%-obscured picture of a tiger in the jungle. Ask them what they see. Almost instantly they see the tiger. It seems incredible. Is the child solving a problem? Does the brain or the visual system use some fantastic algorithm that we’ve not yet discovered? Above I’ve given examples of how better representation can turn things that a priori seemed to require problem solving and algorithms into things that are actually trivial. We can extend the argument to intelligence. I suspect it’s easy to mistake someone with a good representation and a dumb algorithm as being somehow intelligent.

I bet that evolution has produced a representation of information in the brain that makes some problems (like visual pattern matching) non-existent. I.e., not problems at all. I bet that there’s basically no problem solving going on at all in some things people are tempted to think of as needing intelligence. The “algorithm”, and I hesitate to use that word, might be as simple as a form of (chemical) hill climbing, or something even more mundane. Perhaps everything we delight in romantically ascribing to native “intelligence” is really just a matter of representation.

That’s why I believe data (aka information representation) is so extremely important. That’s where we’re heading. It’s why I’m doing what I’m doing.


the semantic web is the new AI

03:30 March 18th, 2007 by terry. Posted under me, representation, tech. Comments Off on the semantic web is the new AI

I’m not a huge fan of rationality. But if you are going to try to think and act rationally, especially on quantitative or technical subjects, you may as well do a decent job of it.

I have a strong dislike of trendy terms that give otherwise intelligent people a catchy new phrase that can be tossed around to get grants, get funded, and get laid. I spent years trying to debunk what I thought was an appalling lack of thought about Fitness Landscapes. At the Santa Fe Institute in the early 90s, this was a term that (with very few exceptions, most notably Peter Stadler) was tossed about with utter carelessness. I wrote a Ph.D. dissertation on Evolutionary Algorithms, Fitness Landscapes and Search, parts of which were thinly veiled criticism of some of the unnecessarily colorful biological language used to describe “evolutionary” algorithms. I get extremely impatient when I sense a herd mentality in the adoption of a catchy new term for talking about something that is in fact far more mundane. I get even more impatient when widespread use of the term means that people stop thinking.

That’s why I’m fed up with the current breathless reporting on the semantic web. The semantic web is the new artificial intelligence. We’re on the verge of wonders, but everyone agrees these will take a few more years to realize. Instead of having intelligent robots to do our bidding, we’ll have intelligent software agents that can reason about stuff they find online, and do what we mean without even needing to be told. They’ll do so many things, coordinating our schedules, finding us hotels and booking us in, anticipating our wishes and intelligently combining disparate information from all over the place to…. well, you get the picture.

There are two things going on in all this talk about the semantic web. One is recycled rubbish and one is real. The recycled rubbish is the Artificial Intelligence nonsense, the visionary technologist’s wet dream that will not die. Sorry folks – it ain’t gonna happen. It wasn’t going to happen last century, and it’s not going to happen now. Can we please just forget about Artificial Intelligence?

It was once thought that it would take intelligence for a computer to play chess. Computers can now play grandmaster-level chess. But they’re not one whit closer to being intelligent as a result, and we know it. Instead of admitting we were wrong, or admitting that, since it obviously doesn’t take intelligence to play chess, maybe Artificial Intelligence as a field was chasing something that was not actually intelligence at all, we move the goalposts and continue the elusive search. Obviously the development of computers that can play better-than-human-level chess (is it good chess? I don’t think we can say it is), and other advances, have had a major impact. But they’ve nothing to do with intelligence, apart from our own ingenuity at building faster, smaller, and cooler machines, with better algorithms (and, in the case of chess, bigger lookup tables) making their way into hardware.

And so it is with the semantic web. All talk of intelligence should be dropped. It’s worse than useless.

But, there has been real progress in the web in recent years. Web 2.0, whatever that means exactly, is real. Microsoft were right to be worried that the browser could make the underlying OS and its applications irrelevant. They were just 10 years too early in trying to kill it, and then, beautiful irony, they had a big hand in making it happen with their influence in getting asynchronous browser/server traffic (i.e., XmlHttpRequest and its Microsoft equivalent, the foundation of AJAX) into the mainstream.

Similarly, there is real substance to what people talk about as Web 3.0 and the semantic web.

It’s all about data.

It’s just that. One little and very un-sexy word. There’s no need to get all hot and bothered about intelligence, meaning, reasoning, etc. It’s all about data. It’s about making data more uniform, more accessible, easier to create, to share, to find, and to organize.

If you read around on the web, there are dozens of articles about the upcoming web. Some are quite clear that it’s all about the data. But many give in to the temptation to jump on the intelligence bandwagon, and rabbit on about the heady wonders of the upcoming semantic web (small-s, capital-S, I don’t mind). Intelligent agents will read our minds, do the washing, pick up the kids from school, etc.

Some articles mix in a bit of both. I just got done reading a great example, A Smarter Web: New technologies will make online search more intelligent–and may even lead to a “Web 3.0.”

As you read it, try to keep a clean separation in mind between the AI side of the romantic semantic web and simple data. Every time intelligence is mentioned, it’s vague and comes with an acknowledgment that this kind of progress may be a little way off (sound familiar?). Every time real progress and solid results are mentioned, it’s because someone had the common sense to take a bunch of data and put it into a better format, like RDF, and then take some other routine action (usually search) on it.
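
To make that concrete, here’s a toy illustration (the data and the code are invented for the example, and real RDF has far more machinery): once facts are represented as uniform subject/predicate/object triples, a completely dumb search does the work.

# Toy triple store with invented data; not real RDF tooling.
triples = [
    ('Madrid', 'isCapitalOf', 'Spain'),
    ('Spain', 'isIn', 'Europe'),
    ('Madrid', 'hasPopulation', '3.2 million'),
]

def match(s=None, p=None, o=None):
    # Return every triple consistent with the pattern; None is a wildcard.
    return [t for t in triples
            if (s is None or t[0] == s) and
               (p is None or t[1] == p) and
               (o is None or t[2] == o)]

print(match(p='isCapitalOf'))  # -> [('Madrid', 'isCapitalOf', 'Spain')]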

I fully agree with those who claim that important qualitative advances are on their way. Yes, that’s a truism. I mean that we are soon going to see faster-than-usual advances in how we work with information on the web. But the advances will be driven by mundane improvements in data architecture, and, just like computers “learning” to “play” chess, they will have nothing at all to do with intelligence.

I should probably disclose that I’m not financially neutral on this subject. I have a small company that some would say is trying to build the semantic web. To me, it’s all about data architecture.


in praise of simplicity

17:53 February 24th, 2007 by terry. Posted under books, tech. Comments Off on in praise of simplicity

In her keynote at PyCon a few minutes ago, Adele Goldberg just mentioned Mitch Resnick’s book Turtles, Termites and Traffic Jams. I wrote a review of the book for the Complexity journal in 1994 or 1995:

There is an important trade-off between realism and understanding in the construction of models of complex systems. At one extreme, a model may be so realistic that it allows no increase in understanding of the modeled system. At the other, the model may be precisely understood but be so divorced from reality that this understanding cannot be related back to the original system. The construction of a model requires that difficult choices be made about what aspects of a system should and should not be modeled, and about how abstractions, simplifications and generalizations are to be justified and implemented. Any unchecked tendency to include more than is absolutely necessary can soon result in a model that, at least aesthetically, feels somehow bloated. It is easy to underestimate the difficulty involved in these decisions, and in the requirements of good judgement and taste in the construction of models.

It was with great pleasure then that I read Mitchel Resnick’s “Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds” (1994, MIT Press). Resnick’s StarLogo system achieves a balance between simplicity and realism that it would be difficult to improve on. This is an accomplishment in itself, but Resnick takes us much further. His StarLogo is not a single model, but a platform for exploring a wide range of decentralized systems. The StarLogo system deals so effectively with the trade-off between realism and understanding, that at times one tends to forget it is an issue.

The most provocative situation in modeling, and a sure sign that a model has dealt with the trade-off well, occurs when an apparently simple model produces unexpected results. At these times, the potential for increased understanding is at its greatest. The probability of explaining the surprising results is high, because the model is apparently simple. The decentralized systems constructed and described by Resnick repeatedly produce surprises of this kind. The delightful simplicity of StarLogo makes it possible to understand what is happening, and why our expectations were incorrect. These systems, few and far between, offer the highest returns for the effort we must invest to understand and use them.

In five short chapters, Resnick guides us through thinking about centralized and decentralized mindsets, the StarLogo system, and reflections on psychology and education. The “Explorations” chapter describes simulations (or, as Resnick prefers, stimulations) of Slime Molds, Artificial Ants, Traffic Jams, Termites, Turtles, Frogs, Forest Fires, Geometry and Recursive Trees. Resnick guides us through the thinking behind the construction of these simulations, presents alternative ideas for their construction, and argues well for decentralized views of these systems. Resnick offers the reader challenges, surprises, insights, and simple heuristic guidelines that he developed as a result of these explorations. It is remarkable that Resnick includes the entire StarLogo programs for these systems in the text of the book. The code, only once slightly over two pages in length, is clear, instructive, and incredibly simple.

Resnick’s book is a little treasure. Though much of the book is presented in the context of high-school education, any temptation to discount it on this account should be resisted. Resnick has something to teach us all. If it has a failing, it is the modesty of its presentation and claims, which may retard its recognition in “higher” academic circles. Virtually every aspect of this book should be instructive to researchers involved in agent-based modeling and simulation, especially to those in biology and artificial life. To the many scientists interested in agent-based computational modeling who are, however, not computationally inclined, read this book. It is an example of someone getting a set of deceptively difficult problems absolutely right. There are many ways in which to appreciate “Turtles, Termites, and Traffic Jams.” It is an important book.


no coffee before 10am at PyCon!

17:26 February 24th, 2007 by terry. Posted under tech. Comments Off on no coffee before 10am at PyCon!

I’m at PyCon in Dallas, at the Dallas/Addison Marriott Quorum. There are 580 attendees. Morning talks start at 9 and people are milling around downstairs from 8 or so. Of course they’re looking for coffee. But unlike every other conference I’ve been to over the past 20 years, there is no coffee. But the hotel does have a small store with a Starbucks outlet in it. There’s a line of people paying $3 per head for coffee. The conference has tons of coffee after 10am, but Starbucks has the early traffic by the balls.

I smell something, and sadly it’s not a fresh-brewed roast.


UI consistency

01:52 February 20th, 2007 by terry. Posted under tech. Comments Off on UI consistency

I often wonder if I’m super sensitive to issues in user interface, or if everyone notices the same things that I do.

For example, when I find things that are inconsistent in user interfaces it really bugs me. I’m sitting in front of a beautiful Apple cinema display attached to a laptop, all made by a company that clearly cares a lot about user interface. BUT, when I want to maximize something that’s iconized, I have to remember the command to do it on an application-by-application basis. Yes, I could reach for the mouse, but I don’t want to reach for the mouse.

If I want to get my iTunes window up, I can Apple-TAB to get to iTunes, and then to get the window up from the dock I have to Apple-Option-1. If I’m tabbing over to Terminal, I have to use Apple-1. If it’s iCal, there is no key combination to maximize the window. Duh.

Nokia care about user interface too. Yet on my cheap 6070 model, no doubt with a stock version of Series 60, when I go to delete things from the messaging area, the buttons I have to press depend on the type of thing I’m deleting. If it’s a text message, I click Left (Options), Select (Delete is the first option), and then Left (Yes, I want to delete). It’s the same for the Sent Items. If I try to delete a template, I have to click Left, Down, Select, Left. If it’s a sent email message, I have to click Left, Select, Select. There are various other items in there, and I bet they have differing delete sequences too.

Those would be such simple things to make more consistent, you would think. I find stuff like that in user interfaces all the time, and I always wonder why these companies with huge budgets don’t have someone who can see these things take a look at their products for what seem like glaring inconsistencies.


will they never learn?

03:05 February 17th, 2007 by terry. Posted under companies, tech. Comments Off on will they never learn?

I find it amazing that huge corporations are unable to see that attempts to copy protect things always fail. Here’s another one gone wrong. Undoubtedly, millions were spent on getting this protection in place, and it’s picked apart by one guy in a mere 8 days.

There are so many examples of this. I guess it can’t be that “they” don’t know their schemes will be broken. Perhaps they assume that, but also know that just a small percentage of customers will avail themselves of the means to use the crack. If so, it’s certainly better to use some form of protection, but why not face facts and put less effort into making it obscure? After all, what’s the difference between an elaborate scheme that’s cracked in 8 days and a trivial one that’s cracked in an hour?


backwards E

17:34 January 29th, 2007 by terry. Posted under tech. 4 Comments »

I don’t know how much weight I’d give to this study. But it reminds me of a joke we had in mathematics at Sydney University in the early ’80s. It went like this:

∀∀∃∃

Translation: For all upside-down A’s, there exists a backwards E.

I didn’t say it was funny.


how i spent my night

13:53 January 12th, 2007 by terry. Posted under tech. Comments Off on how i spent my night

Here’s how I spent my night. Which says nothing of the amount of time I spent deep in the debugger finding the problem in the first place. Not to mention a whole bunch of extraordinarily obscure digging and thinking and hypothesizing and and and…. argh.


YouTube spam

01:16 January 10th, 2007 by terry. Posted under tech. Comments Off on YouTube spam

I just got spammed at the email address I signed up with at YouTube:

Nora72o has sent you a message

Use http://www.youtube.com/my_messages?folder=inbox&filter=messages to go directly to this message, or go to your Inbox at http://www.youtube.com/my_messages on YouTube to view all your messages.

Thanks for using YouTube!

– The YouTube Team

Thank you for using YouTube indeed. And thanks for your email address too!

Maybe someone at Google missed out on the IPO, so they’re doing a little business on the side selling the mailing list of Google’s acquisitions?


$HOME/Desktop/.. — what were you doing in there anyway?

14:33 December 1st, 2006 by terry. Posted under tech. Comments Off on $HOME/Desktop/.. — what were you doing in there anyway?

Over on the Python Dev mailing list, discussion has been raging about home directories, hidden dotfiles, user interface, etc. See this recent posting for the latest in a debate that was kicked off in November under the innocuous-sounding subject Python and the Linux Standard Base (LSB).

In the meantime, I have been forwarded a confidential Apple email from CEO Steve Jobs laying out his “roadmap for the Desktop”. In it, Jobs says he “saw the Desktop light” after former Disney CEO Michael Eisner called him a Shiite Muslim for his refusal to support efforts to root out subversive use of dotfiles and home directories in general.

Highlights from the memo:

  • Campaign branding: “A man’s home is his Desktop”.
  • Plan to completely phase out $HOME. Terminal will start in /Users/USER/Desktop. Use of cd .. to be monitored.
  • Henceforth Apple developers are to refrain from directly mentioning user’s home directories in public, in blogs, etc. Internally, when mention of a home directory cannot be avoided, the approved phrasing is “Desktop/..“.
  • There is a plan to put a small Supporter of Computational Liberty and Freedom flag icon on users’ Desktops. This flag cannot be removed, but automatically disappears if a user ever accesses $HOME directly.
  • There is a “four-part plan to undermine $HOME”:
    1. Jobs expresses admiration for the Microsoft warning dialog that appears when users try to access C:\WINDOWS and plans a similar feature for OS X.
    2. Once $HOME has been completely “annexed”, user related program activities will be moved into system-owned and managed 0700-mode /usr/sys/users/USER/home directories.
    3. User interaction will then be moved into /Users/USER/Desktop/MySpace where users can do anything they want. The parent /Users/USER/Desktop directory will then transition to being primarily managed by the system, and users will be discouraged from putting any actual files in there.
    4. Support for $HOME will be removed from bash, to be replaced with $HOMELAND (a synonym for $DESKTOP). $DESKTOP will be the default destination for the cd command. ~ will also map to $DESKTOP.
  • Perhaps most disturbing of all, there is an explicit plan to use PR and the media to fuel anti-$HOME sentiment. Use of $HOME is to be portrayed as un-American, subversive, terrorist, and effeminate. Only terrorists and insurgents use $HOME. Deliberate use of dotfiles to hide things is to be tied to terrorist use of steganography.

Folks, this is a clarion call to action. We cannot stand idly by as mute witnesses to the slow-drip erosion of personal liberties. Next we’ll be hearing that they’re taking away $HOME because we weren’t using it anyway.

Make no mistake, Big Government is behind this, egged on by the Disney lobby. I suspect they’re pushing for the infantilization of user interface, though I’m not entirely sure why (I have my suspicions). Jobs is clearly the go-to man they’re using to get it done and the route is ?? -> Disney -> Pixar -> Jobs -> Apple -> Consumer, with probable use of cut-outs. I wonder how they got to Steve.

These bastards know how to use the media better than anyone. Apple are clearly backed all the way by Hollywood and the MSM. All you ever read about these days in the media is the Desktop and how Web 2.0 is the new Desktop. Gone are the days when you could catch a glimpse in a movie of someone using a keyboard, or even ftp’ing into someone else’s $HOME.

It’s all Desktop now, all the time.

This may all sound highly improbable. But that’s exactly what they want you to think! They want to spread doubt. They want to ridicule us, to divide us, weaken us, and to scoff at us. We must band together, with one voice, with the voice of freedom and of liberty, and collectively support our home directories. Don’t let them take it away. There’s no time like the present – we must act before it is too late.

As a UNIX command-line user, you have an inalienable right to a home directory. It’s part of our history, our tradition, our identity. Let us not forget who we are!

Just remember friends, it’s not a conspiracy theory when you’re right.


undoing the YouTube deal

14:11 October 31st, 2006 by terry. Posted under companies, tech. Comments Off on undoing the YouTube deal

There’s yet more dirt on the Google / YouTube deal. I’ve thought several times already that I wouldn’t be surprised if the deal didn’t close. I don’t remember when it’s due to close, maybe as soon as December. It would be a major stumble for Google, but I’m not convinced that that would be worse than going ahead. It feels to me like the whole thing is built on sand, and that we’re seeing that now.

YouTube is changing so quickly, with tens of thousands of pieces of content being pulled very publicly, lots of semi-negative write-ups, the threat of many lawsuits, the apparently desperate last-minute behind-the-scenes deal-making that went on the day they got it done, other video sites moving to share revenue with content providers, etc.

It all has a feeling of extreme volatility to me. It wouldn’t surprise me to see YouTube’s value plummet if this continues for long. Is it too late for Google to pull the plug?

I think there’s probably almost no brand faithfulness in the online video world. Sure, coolness is important, but coolness can change overnight. And then you’re left with what? What does someone who uploads a video actually want? Basically, they want storage space, a URL to point their friends to, and maybe somewhere for people to leave comments or a rating. A bit of revenue would be nice. Your average Joe probably doesn’t care about much more. It’s a bit like buying gas.

I don’t think it makes sense to run out and short Google over this, but I do think the potential for a major disruption is there. If it happened and people suddenly saw Google as just another company, capable of making big errors, there might be a disproportionately large adjustment in their stock price.

Pure speculation of course. I’m very interested to see how this plays out.


getting away with it

17:06 October 25th, 2006 by terry. Posted under tech. Comments Off on getting away with it

This brief article discusses how thieves stole account details from two online brokerages using keystroke loggers. By some broad definition this falls into the category of “identity theft”, since the thieves could then pretend to be the user in question. I don’t really think that warrants being called identity theft, but whatever.

I think dreaming up crimes is pretty easy. The hard thing is to figure out a good way to take physical possession of the loot. At some point you have to go clean out that bank account, or have illegally bought goods delivered to a physical address, etc. It’s not easy to come up with ways that let you reap the benefits of crime while minimizing the risk associated with these physical problems. There are some classic approaches, like using cut outs, drops, etc. In the online world it seems just as hard, and perhaps more so. At the end of the day there’s a wire leading to your house, or similar.

That’s why I like the above scam. Instead of trying for a direct score, they set up a disconnect and used the accounts to create an effect in the stock market that they then took advantage of. That’s got to be much harder to nail down, especially if even moderate care is taken to disguise the trades. I think this disconnect is very clever. But perhaps it’s actually an old trick?


station-wagon full of tapes

22:29 October 20th, 2006 by terry. Posted under tech. 1 Comment »

Cringely writes about Sun’s new black box portable server farms:

The beauty of a shipping container data center isn’t just that it operates stand-alone and can be plunked down in the parking lot of your existing data center or dropped by helicopter on the roof of your headquarters building. A great proportion of its beauty lies in the shipping container’s efficiency not as a server but as a network. It’s the largest sneakernet ever built. Moving a petabyte of data across the country using even the biggest optical fiber connection could take weeks, but the Blackbox can be installed in at most a few days.

which reminded me straight away of an Andrew Tanenbaum quote:

Never underestimate the bandwidth of a station-wagon full of tapes hurtling down the highway.

which I’m pretty sure I ran across in one of Jon Bentley’s Programming Pearls books in the mid-late 80s. I was surprised not to find it though. I thought either The Back of the Envelope (which poses the question) or the Bumper-Sticker Computer Science column was a sure bet.

Anyway, the station-wagon just got a whole lot bigger.
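
In the spirit of Bentley’s back-of-the-envelope column, here’s a rough version of the comparison (the link speed and delivery time are illustrative guesses of mine, not Cringely’s or Sun’s numbers):

# Back-of-the-envelope comparison with made-up but plausible numbers.
PETABYTE_BITS = 8.0 * 10**15        # one petabyte, in bits

fiber_bps = 10 * 10**9              # assume a 10 Gb/s long-haul link
print('Fiber: %.1f days' % (PETABYTE_BITS / fiber_bps / 86400))       # ~9.3 days

truck_days = 2.0                    # assume the container arrives in two days
truck_bps = PETABYTE_BITS / (truck_days * 86400)
print('Container: %.1f Gb/s effective' % (truck_bps / 1e9))           # ~46 Gb/s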


code, launch, sell

03:53 October 20th, 2006 by terry. Posted under tech. Comments Off on code, launch, sell

I guess this guy really didn’t want to get into management. He spends 2.5 years working on a product, launches it, and simultaneously offers it for sale. Or he just figured the time was right. That’s the ultimate in minimalist business plans.


transporter

13:34 October 19th, 2006 by terry. Posted under tech. Comments Off on transporter

Here’s a nice-looking box. Pricey though.

They just got acquired by Logitech.

Memo to self: Make something people want.


real tax, virtual payment

19:27 October 18th, 2006 by terry. Posted under tech. Comments Off on real tax, virtual payment

Here’s more on politicians thinking about imposing real-world tax on purely in-game profits.

I love the comment from the person who says they’ll be happy to pay real dollars for tax on in-game profits the day they can use virtual gold to pay real world tax.


reuters second life

12:24 October 16th, 2006 by terry. Posted under tech. Comments Off on reuters second life

Reuters sets up shop inside an online game.


plug & play II

13:33 October 13th, 2006 by terry. Posted under tech. Comments Off on plug & play II

Teqlo


get your 50% productivity gain here

10:44 October 12th, 2006 by terry. Posted under companies, tech. Comments Off on get your 50% productivity gain here

Apple-funded study reveals that $1999 Apple 30″ displays result in up to 50% productivity gains*.

Hurry while stocks last.

* On certain tasks, such as mouse move and click.
