Ravi Pandya
ravi@iecommerce.com
www.iecommerce.com
+1 425 417 4180

Ravi Pandya   software | nanotechnology | economics

ABOUT ME

Ravi Pandya
Architect
Cloud Computing Futures
Microsoft
ravip at microsoft.com

03-Microsoft
00-02 Covalent
97-00 EverythingOffice
96-97 Jango
93-96 NetManage
89-93 Xanadu
88-89 Hypercube
84,85 Xerox PARC
83-89 University of Toronto, Math
86-87 George Brown College, Dance
95-Foresight Institute
97-Institute for Molecular Manufacturing

DISCLAIMER

The opinions expressed here are purely my own, and do not reflect the policy of my employer.


Sun 28 Oct 2007

The Logic of Political Survival

My friend Robert Bell pointed me at this, which has been mentioned by Angry Bear and Samizdata. It is an interesting combination of public choice theory and organizational theory. Bruce Bueno de Mesquita has developed a model of political economy he calls selectorate theory, modeling the behavior of rulers based on the size of the selectorate that has a say in choosing the ruler, and the minimal size of the winning coalition that is sufficient to determine a particular outcome. In autocratic regimes the coalition is small, so the ruler can get away with ruling for the gain of a few; in democracies the winning coalition is large, and so there is an incentive to increase overall public welfare. His book, The Logic of Political Survival, is (almost completely) available on Google Books. There's also a podcast interview which sounds interesting.
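
To make the mechanism concrete, here is a deliberately crude toy model (my own simplification, in Python; not Bueno de Mesquita's formal model): a leader splits a fixed budget between private rewards for coalition members and public goods for everyone, and the best split flips from private to public as the winning coalition grows.

    # Toy illustration of the selectorate logic (my own simplification,
    # not Bueno de Mesquita's formal model): a leader splits a fixed budget
    # between private rewards to coalition members and public goods.

    def best_allocation(budget, coalition_size, public_multiplier, population):
        """Find the private/public split that maximizes each coalition
        member's payoff, by coarse grid search."""
        best = None
        for share in (i / 100 for i in range(101)):
            private_spend = budget * share
            public_spend = budget - private_spend
            # Each member gets an equal cut of private spending, plus the
            # per-person benefit of public spending that everyone enjoys.
            payoff = (private_spend / coalition_size
                      + public_multiplier * public_spend / population)
            if best is None or payoff > best[0]:
                best = (payoff, share)
        return best

    for w in (100, 10_000, 600_000):   # junta -> party state -> mass democracy
        _, private_share = best_allocation(
            budget=1_000_000, coalition_size=w,
            public_multiplier=2.0, population=1_000_000)
        print(f"winning coalition {w:>7}: optimal private share = {private_share:.0%}")

With a coalition of 100 or 10,000 the whole budget goes to private rewards; only when the coalition is a large fraction of the population do public goods become the cheaper way to keep each member loyal.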

One interesting question is how the selectorate/winning coalition ratio evolves over time. In particular, the current U.S. administration's pursuit of the Iraq war despite strong popular opinion against it, and the concentrated private benefits such as war contracts and oil profits, run counter to Bueno de Mesquita's theory of democratic behavior (Testing Novel Implications from the Selectorate Theory of War). It is possible that in practice the effective winning coalition in the U.S. has become smaller due to effects like single-issue voting (whether pro-environment or anti-abortion) and gerrymandering.

19:19 #

Computing with molecules

There's a DNA computing group at Caltech doing some amazing work. They have defined a model for performing arbitrary Turing-complete computation using chemical reactions by mapping a Minsky Register Machine to relative molecule counts:

"Well-mixed finite stochastic chemical reaction networks with a fixed number of species can perform Turing-universal computation with an arbitrarily low error probability. This result illuminates the computational power of stochastic chemical kinetics: error-free Turing universal computation is provably impossible, but once any non-zero probability of error is allowed, no matter how small, stochastic chemical reaction networks become Turing universal. This dichotomy implies that the question of whether a stochastic chemical system can eventually reach a certain state is always decidable, the question of whether this is likely to occur is uncomputable in general."

07:03 #


Thu 25 Oct 2007

What They Said

The question raised by the Dynamo paper is answered here: Michael Stonebraker, Pat Helland, et al. say it's the end of the architectural era in databases, and it's time for a complete rewrite. In an earlier paper, Stonebraker said that special-purpose databases would be faster than general-purpose DBs for specialized tasks like stream queries. Now he's saying (and showing) that a general-purpose database can be solidly trounced (82x the performance on the TPC-C benchmark!) by a database optimized for current systems architectures. H-Store has:

  • Single threading, small transactions, in memory operation
  • Shared nothing, with hot standby for high availability
  • Application logic "in process" to avoid protocol overhead
  • Data partitioning instead of locking for transactions

They work through a taxonomy of application schemas and map them to this model. Typical business apps follow a constrained tree schema (orders, order lines, etc.), which maps very well, and then there are some interesting variations. H-Store can precompile query/update plans, conflict analysis, etc. based on a static schema.
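
To illustrate what the constrained-tree property buys you (my own sketch in Python, not H-Store code): because every order line belongs to exactly one order, and every order to exactly one customer, the entire tree for a customer can be routed to a single partition by the root key, and a new-order transaction then runs single-threaded on that one partition with no locks.

    # Sketch of partitioning a constrained tree schema (customer -> orders ->
    # order lines) by its root key, in the spirit of H-Store. Illustrative only.

    NUM_PARTITIONS = 4
    partitions = [dict() for _ in range(NUM_PARTITIONS)]   # customer_id -> record

    def partition_for(customer_id):
        # Every row in the tree is routed by the root (customer) key, so a
        # transaction on one customer touches exactly one partition.
        return partitions[hash(customer_id) % NUM_PARTITIONS]

    def new_order(customer_id, order_id, lines):
        """Runs single-threaded against one partition; no locks or latches."""
        p = partition_for(customer_id)
        customer = p.setdefault(customer_id, {"orders": {}})
        customer["orders"][order_id] = {"lines": list(lines)}

    new_order("cust-17", "ord-1", [("sku-9", 2), ("sku-3", 1)])
    new_order("cust-17", "ord-2", [("sku-5", 4)])
    print(partition_for("cust-17")["cust-17"])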

There are a number of interesting issues still to research - schema evolution, rebalancing, etc. But this is a very promising direction.

07:17 #

Dynamic optimization of hardware

Frank Vahid at UC Riverside has been working on "warp processors" - dynamically compiling hot code segments into on-chip FPGA to get substantial performance gains without requiring special hardware. His benchmarks average 7.4x performance gains with 38-94% energy reduction.

This is fascinating - a hardware analog of the dynamically optimizing just-in-time compilation used in languages like Strongtalk.
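
The trigger mechanism is similar in spirit to hot-spot detection in a JIT: profile until a region crosses a hotness threshold, then swap in an optimized version - except that a warp processor synthesizes the hot region into on-chip FPGA fabric rather than into better machine code. A rough software analogy (purely illustrative, not Vahid's system):

    import functools

    # Rough software analogy to hot-spot detection: count calls, and past a
    # threshold swap in an "optimized" implementation. A warp processor would
    # instead synthesize the hot region into configurable logic.

    def warp(threshold, optimized):
        def decorator(slow):
            calls = {"n": 0}
            @functools.wraps(slow)
            def wrapper(*args):
                calls["n"] += 1
                if calls["n"] > threshold:
                    return optimized(*args)   # hot: take the fast path
                return slow(*args)            # cold: keep profiling
            return wrapper
        return decorator

    def popcount_fast(x):
        return bin(x).count("1")

    @warp(threshold=1000, optimized=popcount_fast)
    def popcount(x):
        n = 0
        while x:
            n += x & 1
            x >>= 1
        return n

    for i in range(2000):
        assert popcount(i) == popcount_fast(i)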

06:14 #

Berkeley Parallel Browser

50 cores on your phone? Even for low-power applications, it looks like many-core will be the answer, since you get more computation per mW. The Berkeley Parallel Browser Project is working on:

  • A parallel lexer, using speculation guided by heuristics (see the sketch after this list)
  • A parallel parser, using an older, simpler CYK reduction parser that parallelizes better than modern LL/LALR parsers
  • A parallel application programming model using the Flapjax dataflow language
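
To give a flavor of the speculative lexing idea (a generic sketch of my own, not the Berkeley group's algorithm): split the input into chunks, lex each chunk in parallel under the guess that it starts outside any string, then check the guesses sequentially and re-lex only the chunks whose guessed start state turned out wrong.

    from concurrent.futures import ThreadPoolExecutor

    # Speculative parallel lexing sketch (illustrative, not the Berkeley lexer).
    # The only lexer state here is "inside a double-quoted string or not";
    # tokens that straddle a chunk boundary are simply left split in two.

    def lex_chunk(text, in_string):
        """Tiny lexer; returns (tokens, end_state)."""
        tokens, buf = [], []
        def flush():
            if buf:
                tokens.append(("STR" if in_string else "WORD", "".join(buf)))
                buf.clear()
        for ch in text:
            if ch == '"':
                flush()
                in_string = not in_string
            elif ch.isspace() and not in_string:
                flush()
            else:
                buf.append(ch)
        flush()
        return tokens, in_string

    def parallel_lex(text, num_chunks=4):
        size = max(1, len(text) // num_chunks)
        chunks = [text[i:i + size] for i in range(0, len(text), size)]
        with ThreadPoolExecutor() as pool:
            # Speculate: every chunk starts outside a string.
            guesses = list(pool.map(lambda c: lex_chunk(c, False), chunks))
        tokens, state = [], False
        for chunk, (toks, end_state) in zip(chunks, guesses):
            if state:   # speculation was wrong for this chunk: re-lex serially
                toks, end_state = lex_chunk(chunk, state)
            tokens.extend(toks)
            state = end_state
        return tokens

    print(parallel_lex('alpha "beta gamma" delta'))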

For the bigger picture, if you haven't yet read it, you should look at The Landscape of Parallel Computing Research: A View from Berkeley.

05:37 #


Sat 20 Oct 2007

Werner Vogels on Dynamo

Werner Vogels had a great paper at SOSP on the Dynamo distributed storage infrastructure underlying Amazon's applications. There were a number of interesting insights in the paper:

  • Target metrics at the 99.9th percentile of the distribution when defining SLAs, to avoid poor user experience
  • Prioritize availability over consistency, using vector clocks to assist in resolving the resulting conflicts
  • Response in the face of failure is critical, since at scale you always have failures
  • Partitioning is key for scalability

These are all great principles for developing scalable distributed systems, but in addition, there is the basic data model - single-index blob storage with optimistic transactions and intelligent conflict resolution. This is very different from the classic normalized relational transaction model for developing applications. It clearly works very well for Amazon, and you might be tempted to say it applies only to cloud-scale Internet sites. But then you hear someone like Pat Helland say this is the way you should develop service-oriented applications for the enterprise.
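
A minimal sketch of the vector clock mechanics mentioned above (generic vector clocks in Python, not Dynamo's exact representation): each replica increments its own entry when it accepts a write, one version supersedes another when its clock dominates in every entry, and two versions conflict exactly when neither dominates - at which point the application reconciles them (Amazon's shopping cart, for example, merges the item sets).

    # Minimal generic vector clocks (not Dynamo's exact representation): each
    # replica bumps its own counter on a write; version a descends from b
    # iff a's clock is >= b's clock in every entry.

    def bump(clock, node):
        c = dict(clock)
        c[node] = c.get(node, 0) + 1
        return c

    def descends(a, b):
        """True if clock a has seen everything clock b has."""
        return all(a.get(node, 0) >= count for node, count in b.items())

    def compare(a, b):
        if descends(a, b):
            return "equal" if a == b else "a supersedes b"
        if descends(b, a):
            return "b supersedes a"
        return "conflict: application must reconcile"

    v0 = bump({}, "replica-A")     # write accepted by replica A
    v1 = bump(v0, "replica-B")     # later write on the same object, via replica B
    v2 = bump(v0, "replica-C")     # concurrent write via replica C
    print(compare(v1, v0))         # a supersedes b
    print(compare(v1, v2))         # conflict: application must reconcile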

So where does that leave the relational model? Running a lot of complex legacy apps like SAP? Data mining and analysis? There's clearly a lot of value in having a well-defined declarative schema for organizing your information, and running ad hoc queries, but maybe it's not the best model for writing software. Is anyone working on a general business application platform using these principles? Or will it show up inside some cloud SaaS platform? I'll be curious to see how this develops.

08:55 #


Thu 18 Oct 2007

The Future of Personal Computing

Nicholas Carr just posted a provocative note. I think in the interest of telling a good story, he glosses over some important details. I doubt that Jonathan Ive's design aesthetic would allow Apple to sell a $199 computer. And the collision between Steve Jobs' carefully crafted perfectionism and Google's get-it-out-quick prototyping culture would make it very difficult for them to collaborate rapidly or coherently. And Microsoft has many more strengths in this area than he gives us credit for.

But those are just details; it's hard to argue with his basic thesis. For most ordinary people and for most information workers, their internet access and their browser are more important than their personal computer and its operating system (whether it's Windows, Linux, or OS X). The only real exceptions are entertainment, for which specialized devices like the iPod and Xbox are dominating, and complex creative work, like engineering and design, for which people still need real workstations. As a software developer, I have a quad-core Xeon with half a terabyte of disk, 4GB of RAM, and 3 million pixels, and I'd happily take more. But for my personal life, and an increasing part of my work life, a computer with just a browser would be Good Enough.

06:45 #

Starting up again

I see it's been 3 years since I last posted... how time flies. I'll try to do a little better this time! Last summer I moved from Windows Security to an incubation group which is, as Chris Brumme so eloquently puts it, "exploring evolution and revolution in operating systems". I'm having a lot of fun working with a variety of interesting systems technologies, including security, distributed systems, many-core, virtualization, managed systems code, dynamic resource scheduling, asynchronous & adaptive user interfaces, etc. The best part is the people - some really outstanding architects like Chris, with deep knowledge in a diverse set of areas. And we're still hiring, so if you're someone like that, send me an email.

05:16 #


© 2002-2004 Ravi Pandya | All Rights Reserved