Saturday, October 28, 2006

Workarounds for SQLAlchemy and TurboGears 1.0b

I've started work on a new TurboGears project to track merge requests for Bazaar. I was already familiar with earlier versions of TurboGears, but I decided to upgrade to 1.0b, and try out this new SQLAlchemy thing that everyone's raving about. It's supposed to be the official object-relational-mapper in TurboGears 1.1.

You have the option of using it in 1.0b, but the integration is not smooth.

Although you can use the identity framework with SQLAlchemy, the default model.py that TurboGears provides is broken. TurboGears uses the ActiveMapper extension, but the default model.py defines the relationships between groups and users, and between groups and permissions, in two places. Even though the definitions are equivalent, this is not allowed, and it fails silently.

So to use groups/permissions effectively, comment out the two many_to_many lines in Group. You'll still get users and permissions attributes on Group, but they'll be created by the 'backref' parameters on the many_to_many statements in User and Permission.
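Roughly, the Group/User/Permission part of model.py ends up looking like this. This is a from-memory sketch: the column definitions are abbreviated, the table names may differ in your quickstarted project, and the user_group and group_permission secondary tables are assumed to be defined earlier in the file.

class Group(ActiveMapper):
    class mapping:
        __table__ = 'tg_group'
        group_id = column(Integer, primary_key=True)
        group_name = column(Unicode(16), unique=True)
        # These duplicate the relationships declared (with backrefs) on
        # User and Permission below, so comment them out:
        # users = many_to_many('User', user_group, backref='groups')
        # permissions = many_to_many('Permission', group_permission,
        #                            backref='groups')

class User(ActiveMapper):
    class mapping:
        __table__ = 'tg_user'
        user_id = column(Integer, primary_key=True)
        user_name = column(Unicode(16), unique=True)
        # The backref here is what still gives Group a 'users' attribute.
        groups = many_to_many('Group', user_group, backref='users')

class Permission(ActiveMapper):
    class mapping:
        __table__ = 'tg_permission'
        permission_id = column(Integer, primary_key=True)
        permission_name = column(Unicode(16), unique=True)
        # And this backref gives Group a 'permissions' attribute.
        groups = many_to_many('Group', group_permission, backref='permissions')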

I've learned to like unit-testing. But the TurboGears TestCase object does not handle SQLAlchemy objects, so I wrote my own:

from unittest import TestCase

from sqlalchemy.ext.activemapper import ActiveMapper

import model   # the project's model module; adjust the import to your package


class TestMessages(TestCase):

    def iter_active_mappers(self):
        """Yield every ActiveMapper subclass defined in the model module."""
        for key, item in model.__dict__.items():
            try:
                if issubclass(item, ActiveMapper):
                    if item is not ActiveMapper:
                        yield item
            except TypeError:
                # issubclass() raises TypeError for things that aren't classes.
                pass

    def setUp(self):
        # Create every mapped table before each test...
        for active_mapper in self.iter_active_mappers():
            active_mapper.table.create()

    def tearDown(self):
        # ...and drop them all again afterwards.
        for active_mapper in self.iter_active_mappers():
            active_mapper.table.drop()

Finally, I also had to force it to provide my own database configuration:
config.update({'sqlalchemy.dburi': 'sqlite:///:memory:'})
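That line, plus its import, can sit near the top of the test module (a sketch, assuming the standard turbogears.config object; the point is for it to run before setUp() first creates the tables, so they end up in the in-memory database rather than the dev one):

from turbogears import config

# Use a throwaway in-memory SQLite database for the test run.
config.update({'sqlalchemy.dburi': 'sqlite:///:memory:'})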

The tg-admin tool has quite limited support for SQLAlchemy: it only does sql create. I've done sql drop quite a lot in past projects. I could adapt the tearDown method above, but I'm currently using SQLite, so rm devdata is an adequate substitute.

I hope these tips are helpful.

Update:
SQLAlchemy emits reams and reams of SQL kipple; to disable that, place sqlalchemy.echo = False in your config file.

It's great that TurboGears is so clearly committed to delivering best-of-breed components that they'll switch ORMs and templating libraries for their 1.1 release. But 1.1 isn't here yet, and while you can integrate SQLAlchemy and TurboGears today, there's a certain amount of pain involved.

Saturday, June 10, 2006

Bloom filters and Smart Servers

I just got back from a joint Mercurial / Bazaar meetup. One of the neat things about it was the bloom filter collaboration. Bryan O'Sullivan described bloom filters to us as a mechanism for greylisting. They're a way of cheaply storing large sets of keywords, with the tradeoff that testing whether a keyword is in a set may produce false positives.

Later on, they were discussing their smart server implementation. This is especially interesting to the Bazaar crew, because we are now planning to create our own smart server.

One of the tricky things about smart servers is that they must determine which revisions the server has that are not present on the client side. Mercurial currently does this using a binary search, but this can have up to 15 roundtrips, and roundtrips in network protocols can slow them down dramatically.

It came to me that bloom filters could be applied to this problem, because they allow a set to be represented compactly, and the revisions in a repository can be treated as a set.
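A bloom filter is simple enough to sketch in a few lines of Python. This is just an illustration of the idea (the class and the salted-SHA1 hashing scheme are mine), not what either Mercurial or Bazaar would actually use:

import hashlib

class BloomFilter(object):
    """A toy bloom filter: k hash functions over a bit array of m bits."""

    def __init__(self, m, k):
        self.m = m
        self.k = k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, key):
        # Derive k bit positions by hashing the key with k different salts.
        for i in range(self.k):
            digest = hashlib.sha1(('%d:%s' % (i, key)).encode('utf-8')).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key):
        # Can answer "yes" for a key that was never added (a false positive),
        # but never answers "no" for a key that was.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

revisions = BloomFilter(m=8 * 1024 * 1024, k=7)
revisions.add('some-revision-id')
print('some-revision-id' in revisions)     # True
print('another-revision-id' in revisions)  # False, barring a false positive

The filter never forgets a key it was given; all the risk is on the "is it there?" side, which is exactly the false-positive behaviour described above.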

Robert Collins stated that bloom filters can also be inverted, so that instead of the risk of false positives, you have a risk of false negatives. This is useful if the receiver sends its filter to the sender, and the sender decides which revisions to send. That way, the sender can never send revisions that depend on unknown revisions. So far, we haven't found a description of how to invert a bloom filter to produce false negatives, however.

If the sender sends its filter, then the receiver decides which tips the two machines have in common, and so false positives don't produce a risk of sending revisions that depend on unknown revisions.

Matt Mackall investigated the possibility of using blooms with Mercurial's smart server, and concluded that a bloom filter of a million revisions with a 1% error rate would take about a megabyte of data, too much to send all at once. I pointed out that we could accept a higher error rate for this data, because revisions in a set have strong correspondences: if a revision is in a set, then its direct ancestor should also be in the set. Using this rule, we can eliminate matches whose direct ancestor is not in the set; a spurious match then requires two independent false positives, so a 10% filter behaves like a 1% one again. Unfortunately, a bloom filter with a 10% error rate is still 500k or so.
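Those numbers line up with the standard sizing formula for a bloom filter, m = -n * ln(p) / (ln 2)^2 bits for n keys at false-positive rate p (the little helper function here is just for illustration):

from math import log

def bloom_bytes(n, p):
    """Size in bytes of an optimally-tuned bloom filter for n keys
    at false-positive rate p."""
    bits = -n * log(p) / log(2) ** 2
    return int(bits / 8)

print(bloom_bytes(10 ** 6, 0.01))   # about 1.2 million bytes
print(bloom_bytes(10 ** 6, 0.10))   # about 600,000 bytes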

However, it's worth noting that it's rarely necessary to send a million revisions; instead, a bloom filter of a smaller number of revisions can be sent, since repositories are usually very similar.

Thursday, March 23, 2006

More tree transforms

Here's a follow-up on tree transforms, now that I've finished implementing them.

One thing I found as I implemented tree transforms in bzr was that the modification step is probably the smallest and most boring. Content changes can be structured as delete/create pairs, so they don't need to be modifications. So the only modifications bzr does are file permission changes.

Another thing I found is that you can structure a tree transform so that all the risky stuff happens up-front, making it very unlikely that you'll run into an error that leaves your tree in an inconsistent state. The trick is to do all your creation operations into a temp directory beforehand. Then, when you're applying the transform, you rename-into-place instead of creating.

That means you're not subject to 'disk full' errors or failures in the text merge. It also means that you can safely perform merge operations that make reference to the working tree, like using '.' as BASE or OTHER in a merge. And the simpler you can make the transform application, the better.

You can even take this a step further, and defer deletions until you've completely updated the working tree. Just rename files into a temporary directory, and then delete it when the working tree has been successfully updated. That should make it relatively easy to roll back to the previous state, should you encounter an error.
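Boiled down, the apply step looks something like this. It's a deliberately simplified sketch with made-up names and arguments, not bzr's actual TreeTransform code, but it shows the ordering: risky work in a temp directory first, then deferred deletions as renames, then cheap renames into place.

import os
import shutil
import tempfile

def apply_transform(tree_root, creations, deletions):
    """A simplified transform apply: do the risky work first, in a temp
    directory, then update the real tree with cheap renames.

    'creations' maps tree-relative paths to new file contents;
    'deletions' lists tree-relative paths to remove.
    """
    limbo = tempfile.mkdtemp(dir=tree_root)

    # Risky phase: build every new file inside the temp directory.  A full
    # disk or a failed text merge stops us here, before the working tree
    # has been touched at all.
    staged = {}
    for path, content in creations.items():
        staged[path] = os.path.join(limbo, path.replace(os.sep, '_'))
        with open(staged[path], 'wb') as f:
            f.write(content)

    # Deferred deletions: rename doomed files into the temp directory
    # instead of deleting them, so the old state can still be recovered
    # if something goes wrong later.
    for path in deletions:
        os.rename(os.path.join(tree_root, path),
                  os.path.join(limbo, 'doomed-' + path.replace(os.sep, '_')))

    # Cheap phase: rename the staged files into place.
    for path in staged:
        os.rename(staged[path], os.path.join(tree_root, path))

    # Everything succeeded; only now is anything permanently discarded.
    shutil.rmtree(limbo)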

Another thing we do before applying the transform is conflict resolution. After all, there may already be files where you want to put them. Or a text merge may need to emit .BASE, .THIS and .OTHER files. Or a merge may produce a directory loop. Etc. Bzr has a set of fairly simple rules for detecting and resolving conflicts, and doesn't try to predict how different conflict resolutions could interact with each other. It just runs the resolver up to 10 times, and if that doesn't fix the problem, it gives up.
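The loop itself is about as dumb as it sounds; conceptually it's just this (the find_conflicts/resolve interface here is made up for the sketch):

def resolve_conflicts(transform, max_passes=10):
    """Keep detecting and resolving conflicts, giving up after max_passes."""
    for _ in range(max_passes):
        conflicts = transform.find_conflicts()
        if not conflicts:
            return
        for conflict in conflicts:
            transform.resolve(conflict)
    if transform.find_conflicts():
        raise AssertionError('conflicts remain after %d passes' % max_passes)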

But again, with all of this happening before a single move or deletion, there isn't a huge penalty for failure.

Thursday, March 16, 2006

A satisfying morning's hack

So standards-conformant web sites are mother-and-apple-pie kinda stuff. Everyone knows they're a good thing. The w3c has even been so helpful as to provide a way to validate anything you might have online.

Problem is, a lot of your stuff isn't online. Once you know it's valid, then it'll go online. Sure, the w3c validator will accept file uploads, but if you're doing any kinda dynamic content generation, you have to save the rendered output, and upload that. Every time.

So yesterday, (or was it Tuesday?) I hacked up a validator proxy. It's TurboGears, but it just uses CherryPy. It pulls down a specified URL, which can be a LAN-only URL, shoves it up to the w3c validator, and returns a slightly-rewritten version of the validator's response. For an encore, I hacked up the w3c validation bookmarklet, to teach it to use my proxy.
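The whole thing is only a screenful of code. A stripped-down version looks roughly like this; it's a sketch, so the controller and method names are arbitrary, and the validator's 'fragment' form field (its "validate by direct input" parameter) is from memory, so double-check it before relying on it:

import urllib
import urllib2

from turbogears import controllers, expose

W3C_VALIDATOR = 'http://validator.w3.org/check'

class Root(controllers.RootController):

    @expose()
    def validate(self, url):
        # Fetch the page ourselves; this is what lets LAN-only URLs work,
        # since it's the proxy, not the w3c, that has to reach them.
        page = urllib2.urlopen(url).read()

        # Submit the rendered page to the validator as direct input.
        data = urllib.urlencode({'fragment': page})
        result = urllib2.urlopen(W3C_VALIDATOR, data).read()

        # Hand back the validator's response (the real thing rewrites the
        # links a little so its stylesheets and images still resolve).
        return result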

Validation happens a lot more when it's easy. And it's so validating...