Archive for the 'Web' Category

Google-worthy Cartography
Greg Abbas

I'm glad that Google is in the world, and I imagine that most people would agree. Sure, they're practically taking over the Internet, but so far they seem to be mostly benevolent about it. They have a few lapses in judgment like China and (more recently) Dell, but mostly they're good guys. I think of them as the BDFLs of the Internet... if they hired Guido they can't be all bad. :-) They set a very high bar for software quality, just like (if I may say so) my employer, Apple. They have an office here in Santa Monica just a few blocks from mine, so I feel a certain connection with them.

That's why I feel obliged to make a suggestion: I've noticed that on Google Maps, the little green arrow is often not quite accurate. It's usually on the right block, but frequently not on the right building... sometimes three or four doors away from the right location. I think it'd make a good project for the "one day a week" that Google engineers are supposed to spend thinking about new, cool stuff. I mean, it's no MentalPlex, but I bet there's some way to analyze a set of data comparing where Google Maps thinks an address is to where it actually is (as manually indicated by human data entry), correlate the errors with some other variable(s), and construct an algorithm to correct whatever approximations are inherent in the geolocation data sets they're using. How cool would that be, if you could type an address into Google Earth and it showed you where the front door was, with an error margin of 3 feet instead of 300?
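Just to make the idea concrete, here's a toy sketch (hypothetical data and method, not anything Google actually does) of the simplest possible version: learn the average geocoding offset on each block from a few hand-verified addresses, then apply that offset to new lookups on the same block.

def block_offsets(ground_truth):
    # ground_truth: {block_id: [(geocoded (lat, lon), actual (lat, lon)), ...]}
    offsets = {}
    for block, pairs in ground_truth.items():
        n = float(len(pairs))
        d_lat = sum(actual[0] - guess[0] for guess, actual in pairs) / n
        d_lon = sum(actual[1] - guess[1] for guess, actual in pairs) / n
        offsets[block] = (d_lat, d_lon)
    return offsets

def corrected(point, block, offsets):
    # Shift a geocoded point by its block's average error.
    d_lat, d_lon = offsets.get(block, (0.0, 0.0))
    return (point[0] + d_lat, point[1] + d_lon)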

So that's my idea. I don't have any illusions that Google researchers haven't already thought about this problem, but maybe with a little gentle prodding they'll look at it from a fresh angle. Consider this post a friendly "open letter" to my esteemed neighbors. :-)


Avatars
Greg Abbas

Pictures are more interesting than words... it's cool to represent a person or a web site with an icon instead of just a name. I'm obviously not the first person to realize this; avatars are cropping up in more and more places on the web. I like how Lisa McMillan added favatars to her "hall of fame", for instance. But her comment that technically she's stealing their bandwidth made me think it would be courteous to set up a Django app that caches those images rather than linking straight to them. That kind of extensibility is exactly why I'm using Django! Here's a description of how it works.

My goal was to provide avatars for users who write comments on my posts. Two styles of avatars are supported: favatars (the favicon from the user's web site) and gravatars (images associated with the user's email address).

Usage. To insert an avatar, use the "avatar" template tag, which takes two parameters: the name of an object, and a size (in pixels). The object is queried for "email" and "url" attributes, which django.contrib.comments doesn't support, so I copied the comments app and added them myself. The size parameter works best if it's a multiple of 16 (because that's how big a favicon is) and no more than 80 (because that's how big a gravatar is), so I use 48. Here's what my template looks like:

  {% for comment in comment_list %}
  <li class="item" id="comment-{{ comment.id }}">
    {% avatar comment 48 %}
    ...etc...
  </li>
  {% endfor %}

The object doesn't have to define both email and url attributes... if you just want favatars, all you need is a url. If an object does have both kinds of avatars, the gravatar gets priority (because gravatars are typically higher resolution). Also, there are some variables to define in your settings.py to control how the tag operates (a sample settings.py fragment follows the list). Two are required, the rest are optional:

AVATAR_STORAGE_DIR. Required. The name of a directory where the downloaded avatar images may be stored.
STATIC_DIRS. Required. A tuple of pairs, each one containing the name of a directory and the URL with which to access it. For instance, (('/var/htdocs/media', '/site_media'),) would let the avatar tag know that if the AVATAR_STORAGE_DIR is /var/htdocs/media/my_avatars, that it can generate urls that start with "/site_media/my_avatars/".
AVATAR_REFETCH. Default: 1 day. The minimum amount of time between requests to the original web server to see if an image has changed.
AVATAR_LIMIT. Default: 0. Like AVATAR_REFETCH, but this is the minimum amount of time between requests if the user hits the "reload" button (as indicated by HTTP_PRAGMA=no-cache).
AVATAR_DEFAULT_IMAGE. Default: no image. The image to show if no real avatar is available.
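For example, a settings.py fragment matching the layout described above might look like this (the paths are illustrative, and I'm guessing that the durations are expressed in seconds):

# settings.py -- illustrative values only
AVATAR_STORAGE_DIR = '/var/htdocs/media/my_avatars'
STATIC_DIRS = (
    ('/var/htdocs/media', '/site_media'),
)
AVATAR_REFETCH = 60 * 60 * 24   # one day, assuming seconds
AVATAR_LIMIT = 0
AVATAR_DEFAULT_IMAGE = '/site_media/img/default.png'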

And here's the source code: avatar.py
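If you just want the gist without reading the whole file, the choice between the two kinds of avatar boils down to something like this simplified sketch (not the actual code from avatar.py), using gravatar's MD5-of-email URL scheme and the conventional /favicon.ico location:

import md5
from urlparse import urljoin

GRAVATAR = 'http://www.gravatar.com/avatar.php?gravatar_id=%s&size=%d'

def avatar_url(obj, size):
    # Prefer the gravatar when an email is available, since
    # gravatars are typically higher resolution than favicons.
    email = getattr(obj, 'email', None)
    if email:
        digest = md5.new(email.strip().lower()).hexdigest()
        return GRAVATAR % (digest, size)
    url = getattr(obj, 'url', None)
    if url:
        # By convention, a site's favicon lives at /favicon.ico.
        return urljoin(url, '/favicon.ico')
    return None  # i.e., fall back to AVATAR_DEFAULT_IMAGE

The real tag additionally caches the downloaded image under AVATAR_STORAGE_DIR and serves it from the corresponding STATIC_DIRS URL, which is the whole point of the exercise.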

Next, I want to use this to support favicons for a blogroll, of course. But one thing at a time. :-) There are other features that could be added, like pre-fetching images asynchronously or deleting old files from the cache, but I don't get enough traffic to justify them.


MetaWeblog & Django
Greg Abbas

As I mentioned previously, I wanted to integrate Ecto with my new Django-based blog using the "MetaWeblog" remote procedure call API. With an XMLRPC implementation ready to go, the remaining task is to write MetaWeblog methods and hook them up to my blog model. The model is pretty straightforward; for instance, here are the fields for the "Post" object:

from django.db import models
from django.contrib.auth.models import User

# Tag and STATUS_CHOICES are defined elsewhere in this file;
# STATUS_CHOICES is something like (('Draft', 'Draft'), ('Published', 'Published')).

class Post(models.Model):
    title = models.CharField(maxlength=200)
    slug = models.SlugField(prepopulate_from=('title',))
    pub_date = models.DateTimeField('date published', blank=True, null=True)
    create_date = models.DateTimeField('date created', auto_now_add=True,
        blank=True, null=True)
    update_date = models.DateTimeField('date updated', auto_now=True,
        blank=True, null=True)
    body = models.TextField()
    tags = models.ManyToManyField(Tag)
    author = models.ForeignKey(User)
    status = models.CharField(maxlength=32, choices=STATUS_CHOICES,
        radio_admin=True, default='Draft')

The MetaWeblog API is pretty well documented, but it has some features that I'm not quite ready to tackle. For now I only support a single user with a single blog, you have to be a "super user" to edit it (no sophisticated authorization scheme yet), and I haven't done media objects (e.g., images) yet either.

Almost every method takes "username" and "password" parameters, so I wrote an "authenticated" wrapper that converts them to a user object (and throws an exception if there's any authentication or authorization problem); there's a sketch of the wrapper at the end of this post. With that in place, the individual methods are short, because the Django database/model API makes things easy. For instance, the method that gets information about a blog post (given a post id) is called "getPost". My implementation looks up the post object by id and then calls a function to turn it into the struct that MetaWeblog requires:

@public
@authenticated()
def metaWeblog_getPost(user, postid):
    post = Post.objects.get(id=postid)
    return post_struct(post)

def post_struct(post):
    link = full_url(post.get_absolute_url())
    categories = post.tags.all()
    struct = {
        'postid': post.id,
        'title': post.title,
        'link': link,
        'permaLink': link,
        'description': post.body,
        'categories': [c.name for c in categories],
        'userid': post.author.id,
        }
    if post.pub_date:
        struct['dateCreated'] = format_date(post.pub_date)
    return struct

where full_url urljoins the web site host name ("www.allyourpixel.com") to a URL, and format_date calls xmlrpclib to (you guessed it) turn a date into a string. Here's the whole file: metaweblog.py. It's not a complete application, but hopefully it'll be easy enough to customize for what you're doing.
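For the curious, here's a sketch of what the "authenticated" wrapper and those two helpers might look like. This is simplified and somewhat guessed at; the real versions are in metaweblog.py. The wrapper pulls the username and password out of the argument list (MetaWeblog methods usually receive them as the second and third parameters) and hands the wrapped method a User object instead:

import xmlrpclib
from django.contrib.auth.models import User

def authenticated(pos=1):
    def decorator(func):
        def wrapper(*args):
            args = list(args)
            username = args.pop(pos)
            password = args.pop(pos)
            try:
                user = User.objects.get(username=username)
            except User.DoesNotExist:
                raise xmlrpclib.Fault(403, 'Bad login')
            # Only a super user may edit the blog (see above).
            if not (user.check_password(password) and user.is_superuser):
                raise xmlrpclib.Fault(403, 'Bad login')
            return func(user, *args)
        return wrapper
    return decorator

And the two helpers, assuming Python's standard urlparse and xmlrpclib modules:

from urlparse import urljoin

SITE_URL = 'http://www.allyourpixel.com'

def full_url(path):
    # Join the site's host name onto an absolute path like /blog/42/.
    return urljoin(SITE_URL, path)

def format_date(d):
    # xmlrpclib.DateTime serializes a date in the iso8601 format
    # that MetaWeblog clients expect.
    return xmlrpclib.DateTime(d.timetuple())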


Net Neutrality
Greg Abbas

For those of you who haven't heard, large telecommunications companies like AT&T, Verizon, and Comcast are trying to change how the internet works. Historically, innovation and free speech have thrived on the internet because anyone who wants to can participate equally. But now, instead of offering unbiased service to all web sites, these companies want to offer preferred service to corporate web sites that pay them money, leaving small sites (that can't afford such payola) out in the cold. It would be good for their shareholders, which is why they've launched a multi-million dollar lobbying campaign to make Congress let them get away with it. But it would be very bad for the public.

Luckily, there are people fighting to prevent this abuse. For instance, I wrote a letter to one of my senators, Sen. Barbara Boxer, and I received a reply that made me proud to be a constituent of hers:

As a longtime advocate for network neutrality, I believe that network operators should facilitate, not stifle, Americans' access to the Internet. Consumers will suffer if network operators are allowed to discriminate against their competitors' use of the network by giving certain content preferential treatment.

I believe that individuals' ability to define and shape their experiences on the Internet has led to an explosion of creativity that has greatly benefited our economy and our lives. I fear that if network operators abandon the principle of network neutrality, the next generation of Internet innovators will be harmed... please be assured that I will continue working to ensure that the Internet remains an engine of growth for commerce and ideas.

How cool is that?!? Also, there's a broad coalition called "Save the Internet" that's fighting for internet freedom... they have a great F.A.Q. and a list of ways you can help.

Also, beware of the FUD that the AT&T/Verizon/Comcast axis is trying to spread. They have a campaign called "hands off the internet" that's trying to convince people that Net Neutrality is somehow bad, but if you look carefully you'll notice that their arguments are suspiciously devoid of any coherent logic. It's just greed, masquerading as principle.