Friday, August 28, 2009

Praise for IBM

Note: I am not in ANY way affiliated with IBM or Lenovo. This is purely the account of a satisfied customer.

Pretty much everyone has written about their frustration with customer service X or company Y. I'm here to tell a different tale, about how a company did things RIGHT.

It started off a few months ago, when I diagnosed that the ATI video card in my ThinkPad T400 was having problems and needed to be replaced. Since this computer has two video cards you can switch between (an integrated one for battery life and a discrete one for performance), I was able to just use the other card. I didn't feel like being without my computer, so rather than get it fixed, I held off and only used the integrated card. That meant I lost dual-monitor support, but I wasn't dead in the water.

I finally decided the time was right to send it in, so I called up the technical support number. I was on hold for less than five minutes. After giving the typical info, I described in a few sentences what was wrong, including what I did to troubleshoot and diagnose the problem. At this point I was ready for the guy on the other end to start his laundry list of things that they need to check.

"Disconnect all external devices..."
"Let's try restarting the computer..."
"Let's reinstall Windows..."

I've had screens burn out in the past and still needed to go through a list like this before they would let me send the machine in for hardware repairs. Since I'm not a business user, it just seemed like other companies paid no special attention to keeping this process simple.

IBM did it right. The person listened to me, instantly recognized that I knew what I was talking about, and first checked to see if he could just send me the replacement card so I could install it myself (with no harm to my existing warranty). It turns out that wasn't an option, but he immediately determined that the computer needed hardware repair and started taking my info for where to ship the box. No fuss about checking for software issues; I had already done that and knew what needed to be done, so let's just do this. It was wonderful.

That was on Sunday night. The box arrived on Tuesday afternoon, and I promptly sent the laptop out. It's now Friday, 10 AM, and my computer is back with the problem fixed.

No doubt about it: my next PC will be a ThinkPad.

Monday, August 10, 2009

Reusable SQLAlchemy Models

While recently looking at Django, I took a liking to its idea of reusable apps. A common pattern among these apps is a model they include, plus a function you call to automatically set up a relation between it and one of your own models, giving you a "reusable model". An example would be any object that you'd like to have comments on automatically getting a Comment model generated, with the relationships to your custom-built object set up for you.

Out of mere curiosity, I wondered how difficult it would be to create a "reusable model" in SQLAlchemy. My end goal was to be able to do something like this...


@commentable
class Post(Base):
    __tablename__ = 'posts'
    id = sa.Column(sa.Integer, primary_key=True)
    text = sa.Column(sa.String)


A class that was decorated as commentable would automatically have a relation defined to contain multiple comments. After trying out a few ideas, I wrote a test for what I would want the end result to look like...


class TestModels(SATestCase):
    def test_make_comment(self):
        p = Post()
        p.text = 'SQLAlchemy is amazing!'

        text = 'I agree!'
        name = 'Mark'
        url = 'http://www.sqlalchemy.org/'

        c = Post.Comment()
        c.text = text
        c.name = name
        c.url = url
        p.add_comment(c)
        Session.add(p)

        p = self.reload(p)

        self.assertEquals(len(p.comments), 1)
        c = p.comments[0]
        self.assertEquals(c.text, text)
        self.assertEquals(c.name, name)
        self.assertEquals(c.url, url)



(As a little note, self.reload() is a helper function that forces a complete reload of the session and returns the objects you passed in once the session has been reopened.)
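
That helper isn't shown in this post, but a minimal sketch of it, assuming a scoped Session and a single-column integer "id" primary key, could look something like this...


def reload(self, obj):
    # Minimal sketch only (not the actual helper from my test base class).
    # Flush pending changes, throw away the session's in-memory state, and
    # fetch the object again by primary key.
    Session.commit()
    Session.expunge_all()
    return Session.query(type(obj)).get(obj.id)
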

First off, a little bit about how the table structure would work. AFAIK, Django stores comments in one large table with what you might call a discriminator field to determine the object type that each row relates to. In my case, every type of commentable object (Post, NewsItem, etc.) would instead get its own comment class (PostComment, NewsItemComment) as well as its own table (posts_comments, news_items_comments). No real reason to do it this way; I just thought it'd be easier.

In the end, it was actually pretty easy. Here is what the initial results look like...


import sqlalchemy as sa
from sqlalchemy.orm import class_mapper, mapper, relation


class BaseComment(object):
    pass

def build_comment_model(clazz):
    class_table_name = str(class_mapper(clazz).local_table)
    metadata = clazz.metadata

    comment_class_name = clazz.__name__ + 'Comment'
    comment_class = type(comment_class_name, (BaseComment,), {})

    comment_table_name = class_table_name + '_comments'
    comment_table = sa.Table(comment_table_name, metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column(class_table_name + '_id',
                  sa.Integer,
                  sa.ForeignKey(class_table_name + '.id')),
        sa.Column('text', sa.String),
        sa.Column('name', sa.String(100)),
        sa.Column('url', sa.String(255)),
    )

    mapper(comment_class, comment_table)

    return comment_class, comment_table

def commentable(clazz):
    comment_class, comment_table = build_comment_model(clazz)

    clazz.Comment = comment_class
    setattr(clazz, 'comments', relation(comment_class))

    def add_comment(self, comment):
        self.comments.append(comment)

    setattr(clazz, 'add_comment', add_comment)

    return clazz


First off, we have the BaseComment class. This is just an empty class for now, but you could easily see shared logic living here.
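
For example (purely hypothetical, not part of the code above), a shared __repr__ for every generated comment class could live there...


class BaseComment(object):
    # Hypothetical shared behaviour; in the code above this class is empty.
    def __repr__(self):
        return '<%s by %s: %r>' % (type(self).__name__, self.name, self.text)
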

In the build_comment_model() function, you can see where the comment class and its table get created. It creates a new class to serve as the comment model (in the case of a Post model being made commentable, that's a PostComment class inheriting from BaseComment). Using the information SQLAlchemy keeps on the mapper of the class we're decorating, we can determine things such as its table name, which we use to name our own table (the Post model's table is "posts", so we create a "posts_comments" table).
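
To make that concrete, what build_comment_model(Post) produces is roughly equivalent to writing this out by hand (for illustration only; the real code above builds it dynamically)...


# Hand-written equivalent of the generated PostComment class and table.
class PostComment(BaseComment):
    pass

posts_comments = sa.Table('posts_comments', Post.metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('posts_id', sa.Integer, sa.ForeignKey('posts.id')),
    sa.Column('text', sa.String),
    sa.Column('name', sa.String(100)),
    sa.Column('url', sa.String(255)),
)

mapper(PostComment, posts_comments)
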

Finally, the commentable() function finishes by setting the Comment attribute on the model class we're decorating, as well as adding the relation and an add_comment() method. The Comment attribute lets us easily get at the new class we created, the relation lets us easily work with the comments, and add_comment() is an example of how we can attach extra methods to the model.

Currently, there are some pitfalls:

1.) It assumes that the class has been mapped before commentable() runs. Because mapping of a class happens after the class is defined, you couldn't use commentable() as a decorator with a non-declarative (classical) mapping, since it wouldn't be able to find the mapper yet. The solution would be to just call commentable() as a plain function after the class is mapped (see the sketch after this list). I guess it's not really a pitfall per se, although I haven't actually tried it, so I'm just guessing that it works :)

2.) Right now, it assumes that the foreign key for the Comment object is a single column that corresponds to a column named "id" on whatever model it's decorating. I still need to work on this, since you might have a different column name, a different primary key type, or even a multi-column primary key. Basically, when the new comment table's metadata is being laid out, it needs to look at the decorated model's primary keys to determine how to construct its foreign keys, rather than jumping to the conclusions it does now.
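
To illustrate the first pitfall, here's a hypothetical, untested sketch (the NewsItem names are made up) of applying commentable() to a classically-mapped class by calling it after mapper() has run. One wrinkle I'm assuming away: build_comment_model() reads clazz.metadata, so with a classical mapping you'd have to put that attribute on the class yourself.


import sqlalchemy as sa
from sqlalchemy.orm import mapper

md = sa.MetaData()

news_items = sa.Table('news_items', md,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('title', sa.String(200)),
)

class NewsItem(object):
    metadata = md  # needed because build_comment_model() uses clazz.metadata

mapper(NewsItem, news_items)

# Only now, with the mapper in place, call commentable() as a plain function.
NewsItem = commentable(NewsItem)
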

Sunday, August 9, 2009

A hint for those new to Django.

After you're done with the Django tutorial, you'll probably want to start on your own project. Here's a hint that helped me, coming from a Pylons background, write that first project without getting fed up with it (especially if you're biased and ready to throw it all away and go back to your old framework at the slightest imperfection)...

In your urls.py, you have the following lines...

# Uncomment the next two lines to enable the admin:
# from django.contrib import admin
# admin.autodiscover()

In order to enable the admin, you uncomment those two lines, among a few other things.

Delete those lines. Do it, right now. And get the admin out of your mind.

In the last week, I kept finding myself making my models, opening the admin, messing around with the model's admin interface, getting annoyed at how the relationships work in the admin and how it's not how I would want users to specify them, and giving up.

The problem was that I was equating the Django admin with Django itself. In reality, what I had been doing in Pylons (writing the "admin" pages manually, but perfectly honed to what I wanted) could have been done in Django as well.

Now, perhaps eventually, once you've gotten fairly proficient with Django, and specifically after you've wrapped your head around forms, you can take a look at the admin and see how it might help you. But starting off with the admin will just distract you.

Monday, August 3, 2009

Shortening bash prompts inside a virtualenv

This post will describe a way to shorten your bash prompt in Linux so that inside the inevitable multiple directories caused by using virtualenv, you can still have a small prompt.

I wanted to take an existing library and split it out into its own library. On my computer, I was going to work with this library at ~/projects/sc/kespa_scrape.

Of course, that kespa_scrape directory is actually the directory holding the virtualenv...


kespa_scrape/
|-- bin
| |-- activate
| |-- activate_this.py
| |-- easy_install
| |-- easy_install-2.6
| |-- nosetests
| |-- nosetests-2.6
| |-- python
| |-- testall.sh
| `-- to3.sh
|-- include
| `-- python2.6 -> /usr/include/python2.6
|-- kespa_scrape
| `-- kespa_scrape
`-- lib
`-- python2.6


This is what the kespa_scrape directory looks like. You can see it has a kespa_scrape directory inside it as well; that inner directory is the root of the actual project. It will probably also contain a kespa_scrape directory of its own, which would be the actual library package. Obviously there is redundancy, but it's necessary redundancy to keep all the parts where I want them.

Some people complain about virtualenv because then you need to cd through a whole bunch of directories to start working. I make this easier with a script. This is how it is used...


gobo@gobo:~$ cd projects/sc/
gobo@gobo:~/projects/sc$ activate_project kespa_scrape
(kespa_scrape)gobo@gobo:~/projects/sc/kespa_scrape/kespa_scrape$


As you can see, I change to the directory where the project is, enter "activate_project", and it will put me into the virtual environment as well as change to the root directory of the project.

The problem I still had was that my prompt was now huge. I would typically change directories once more, into the next kespa_scrape package, and with a long enough library name I'd have a prompt that easily took up half the screen.

At first, I made sure to choose short library names, but now I've come up with a better solution. It rewrites the bash prompt to replace the virtual environment directory with two tildes, and looks like this...


gobo@gobo:~$ cd projects/sc
gobo@gobo:~/projects/sc$ activate_project kespa_scrape
(kespa_scrape)gobo@gobo:~~$


Check out the difference for yourself...


(kespa_scrape)gobo@gobo:~/projects/sc/kespa_scrape/kespa_scrape$
(kespa_scrape)gobo@gobo:~~$


Here's the full script...


#!/bin/bash

# Remove possible slash at end.
DIR=${1%/}

ACTIVATE_FILE="$DIR/bin/activate"

if [ -z "$DIR" ]; then
    echo "usage: $0 directory"
    return 1
fi

if [ ! -d "$DIR" ]; then
    echo "Directory not found: $DIR"
    return 1
fi

if [ ! -d "$DIR/$DIR" ]; then
    echo "Child directory not found: $DIR/$DIR"
    return 1
fi

if [ ! -e "$ACTIVATE_FILE" ]; then
    echo "Activate file not found. Are you sure this is a virtualenv directory?"
    return 1
fi


cd "$DIR"
source bin/activate
cd "$DIR"

CUR_DIR_CMD='pwd | sed -e '\''s!'$VIRTUAL_ENV/$DIR'!~~!g'\'' | sed -e '\''s!'$HOME'!~!g'\'
PS1=${PS1//\\w/$\($CUR_DIR_CMD\)}

alias cdp='cd '$VIRTUAL_ENV/$DIR


After checking to make sure all the files and directories exist, the script sets a variable called CUR_DIR_CMD. This variable holds a bash command that takes the current directory and replaces any instance of the virtualenv project directory with ~~. Since virtualenv sets the VIRTUAL_ENV variable, it's essentially looking for "/home/gobo/projects/sc/kespa_scrape/kespa_scrape" and replacing it with ~~. It then takes that result and replaces instances of the home directory with ~, so that if I change to a directory outside of the virtualenv, I still get '~' instead of '/home/gobo'.

Next, I need to replace every instance of \w in the prompt string with this expression. \w is the special sequence you use when setting your bash prompt to show the full path of the current directory. Replacing it with my custom command lets me control just the part of the prompt where the directory goes, while leaving the rest of the prompt (including virtualenv's addition at the beginning) intact.

The entire process also does pretty well when you change directories to outside the virtualenv, since you're just left with the directory...


(kespa_scrape)gobo@gobo:~~$ cd
(kespa_scrape)gobo@gobo:~$ cd projects/
(kespa_scrape)gobo@gobo:~/projects$


The only thing I want to fix up on it is to allow activate_project to work from ANY directory. Currently, it assumes that the project directory is right in front of you. This is good enough for now though.

----

Edit: Feb 18, 2010

Also, one thing to keep in mind for those who don't deal much with bash scripts: typically, when you launch a script, it runs inside its own shell, and when it completes, that shell closes. This means you lose things such as the new alias or the change of directory. So, you can't run the script in either of these ways:


sh /home/mark/bin/activate_project project_name
/home/mark/bin/activate_project project_name


If you did this, the script would run, but you would not see anything change, since all the changes would be made in the shell created for the script, which exits without affecting the shell you ran it from. To make the script run in your current shell, you need to use the source or dot operator.


. /home/mark/bin/activate_project project_name
source /home/mark/bin/activate_project project_name


As a side note, now you really know what that source bin/activate line does! Anyway, you don't want to deal with this hassle, so set up an alias...


alias activate_project="source /home/mark/bin/activate_project"