Tuesday, July 26, 2011

Moving blog.

Now that I'm comfortable enough with hosting and such, my blog has moved to my new web site...

Mark Hildreth's Web Site

Tuesday, April 5, 2011

Uniqueness of the 'id' attribute on an XMPP iq stanza.

The iq stanza in XMPP has an 'id' attribute. The id is used so that the sender of multiple requests can tell which request each result belongs to as the results come back. The XMPP protocol states the following about the 'id' attribute:

It is OPTIONAL for the value of the 'id' attribute to be unique globally, within a domain, or within a stream. The semantics of IQ stanzas impose additional restrictions; see IQ Semantics.


The "additional restrictions" as discussed in IQ Semantics is really the following:

1.) A request stanza MUST provide an id.
2.) A response stanza MUST provide the id of the request stanza it's responding to (see the example below).
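
In other words, a request and its response are tied together only by that id. Here's an illustrative pair of stanzas; the id value and the jabber:iq:version payload are just made up for the example:

<iq type='get' id='abc123' to='example.com'>
  <query xmlns='jabber:iq:version'/>
</iq>

<iq type='result' id='abc123' from='example.com'>
  <query xmlns='jabber:iq:version'>
    <name>ExampleServer</name>
  </query>
</iq>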

So, in conclusion, there doesn't seem to be a hard and fast rule. It's entirely legit (although probably a bad idea) to just use an empty string as your id. Looking at my console, I noticed that Pidgin used the word "purple" (since Pidgin uses the "libpurple" library) plus what appeared to be an incrementing hexadecimal number...

purplebeb0fb89
purplebeb0fb8a
purplebeb0fb8b

Thursday, November 12, 2009

Joining multiple mp4 files

I'm not going to claim to know anything about video codecs and such, so I'm pretty much at the mercy of Google when it comes to modifying video files. Having found part of the answer but not all of it, I figured I'd post the exact command line here for the lazy like me...

A forum post pointed me to the MP4Box tool, which I was able to install as part of the gpac bundle. From there, it was simply a matter of...


MP4Box a.mp4 -cat b.mp4 -out result.mp4


Here, a.mp4 is the input file, with -cat saying that b.mp4 should be concatenated onto the end of it, and -out naming the combined result.

This worked well enough for me, but since the video codec world is strange and mysterious, your results may vary.
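
I haven't tried it myself, but MP4Box is supposed to accept the -cat option multiple times, so joining more than two files should look something like this:

MP4Box a.mp4 -cat b.mp4 -cat c.mp4 -out result.mp4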

Monday, September 7, 2009

Running SQLAlchemy scripts off of Pylons' Paste configurations

I've come across a situation where I want to use cron to launch scripts that automate some tasks (fetch info from a third party, scramble their data into my database format, write it to my database). In these scripts, I want to use the models I've already created in SQLAlchemy, along with the SQLAlchemy configuration from my Paste configuration files (development.ini, production.ini).

I don't need all of the bells and whistles of Pylons, just the SQLAlchemy stuff. Luckily, there are some convenience functions that seem to have been made for a situation just like this one. Here's the code:


import sys
from paste.deploy import appconfig
from sqlalchemy import engine_from_config
from myapp.model import init_model

def setup_environment():
    if len(sys.argv) != 2:
        print 'Usage: Need to specify config file.'
        sys.exit(1)

    config_filename = sys.argv[1]
    config = appconfig('config:%s' % config_filename, relative_to='.')
    engine = engine_from_config(config)
    init_model(engine)


Of course, replace "myapp" with the name of your application.

Getting the configuration filename from the command-line arguments could of course be put somewhere else, so I'll just skip talking about that. The two main functions are appconfig and engine_from_config.

appconfig comes from paste.deploy; it reads the configuration file and returns a dictionary-like object. engine_from_config comes from SQLAlchemy; it takes that dictionary-like object, pulls out the SQLAlchemy-specific settings, and uses them to create an engine object.
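
With that in place, a cron-launched script only needs to call setup_environment() before it starts using the models. Here's a rough sketch of what such a script might look like; the Session and Widget names (and the refreshed attribute) are hypothetical and depend on how your model module is laid out:

# Hypothetical cron script; assumes setup_environment() (above) lives in the
# same file, and that myapp.model exposes the usual Pylons Session plus a
# Widget model. Both names are made up for this example.
from myapp.model import Session, Widget

def main():
    setup_environment()

    # Do whatever automated work is needed; here we just touch every Widget.
    for widget in Session.query(Widget).all():
        widget.refreshed = True

    Session.commit()

if __name__ == '__main__':
    main()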

Friday, August 28, 2009

Praise for IBM

Note: I am not in ANY way affiliated with IBM or Lenovo. This is purely a satisfied customer talking.

Pretty much everyone has written about their frustration with customer service X or company Y. I'm here to tell a different tale, about how a company did things RIGHT.

It started a few months ago when I diagnosed that the ATI video card in my Thinkpad T400 was having problems and needed to be replaced. Since this computer has two video cards you can switch between (one for battery life, one for performance), I was able to just use the other card. I didn't feel like being without my computer, so rather than get it fixed, I held off and only used the integrated card. That meant I lost dual-monitor support, but I wasn't dead in the water.

I finally decided the time was right to send it in, so I called up the technical support number. I was on hold for less than five minutes. After giving the typical info, I described in a few sentences what was wrong, including what I had done to troubleshoot and diagnose the problem. At this point I was ready for the guy on the other end to start the laundry list of things they always need to check:

"Disconnect all external devices..."
"Let's try restarting the computer..."
"Let's reinstall Windows..."

I've had screens burn out before and needed to go through a list of steps like that before the company would allow me to send the machine in for hardware repairs. Especially since I'm not a business user, it just seemed like other companies paid no attention to keeping the process simple.

IBM did it right. The person listened to me, instantly recognized that I knew what I was talking about, and first checked whether he could just send me the replacement card to install myself (with no harm to my existing warranty). It turned out that wasn't an option, but he immediately determined that the computer needed hardware repair and started collecting my info for where to ship the box. No fuss about checking for software issues; I had already done that, I knew what needed to be done, let's just do this. It was wonderful.

That was on Sunday night. The box arrived on Tuesday afternoon, and I promptly sent the computer out in it. It's now Friday, 10am, and my computer is back with the problem fixed.

No doubt, my next PC will be a Thinkpad.

Monday, August 10, 2009

Reusable SQLAlchemy Models

Recently, while looking at Django, I took a liking to its idea of reusable apps. One thing common to many of these apps is a model they include, plus a function you call to automatically set up a relation between that model and one of your own models, giving you a "reusable model". An example would be a Comment model that gets generated automatically, with relationships to your custom-built object set up for you, for any object you'd like to have comments on.

Out of mere curiosity, I wondered how difficult it would be to create a "reusable model" in SQLAlchemy. My end goal was to be able to do something like this...


@commentable
class Post(Base):
    __tablename__ = 'posts'
    id = sa.Column(sa.Integer, primary_key=True)
    text = sa.Column(sa.String)


A class that was decorated as commentable would automatically have a relation defined to contain multiple comments. After trying out a few ideas, I wrote a test for what I would want the end result to look like...


class TestModels(SATestCase):
    def test_make_comment(self):
        p = Post()
        p.text = 'SQLAlchemy is amazing!'

        text = 'I agree!'
        name = 'Mark'
        url = 'http://www.sqlalchemy.org/'

        c = Post.Comment()
        c.text = text
        c.name = name
        c.url = url
        p.add_comment(c)
        Session.add(p)

        p = self.reload(p)

        self.assertEquals(len(p.comments), 1)
        c = p.comments[0]
        self.assertEquals(c.text, text)
        self.assertEquals(c.name, name)
        self.assertEquals(c.url, url)



(As a little note, self.reload() is a helper function that will force a complete reload of the session and return the objects you passed in after the session is opened again)

First off, a little bit about how the table structure would work. AFAIK, Django stores comments in one large table and has what you might call a discriminator field to determine the object type that each row relates to. In my case, every type of commentable object (Post, NewsItem, etc.) instead gets its own comment class (PostComment, NewsItemComment) as well as its own table (posts_comments, news_items_comments). No real reason to do it this way; I just thought it'd be easier.

In the end, it was actually pretty easy. Here is what the initial results look like...


# Imports needed by this snippet.
import sqlalchemy as sa
from sqlalchemy.orm import class_mapper, mapper, relation

class BaseComment(object):
    pass

def build_comment_model(clazz):
    class_table_name = str(class_mapper(clazz).local_table)
    metadata = clazz.metadata

    comment_class_name = clazz.__name__ + 'Comment'
    comment_class = type(comment_class_name, (BaseComment,), {})

    comment_table_name = class_table_name + '_comments'
    comment_table = sa.Table(comment_table_name, metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column(class_table_name + '_id',
                  sa.Integer,
                  sa.ForeignKey(class_table_name + '.id')),
        sa.Column('text', sa.String),
        sa.Column('name', sa.String(100)),
        sa.Column('url', sa.String(255)),
    )

    mapper(comment_class, comment_table)

    return comment_class, comment_table

def commentable(clazz):
    comment_class, comment_table = build_comment_model(clazz)

    clazz.Comment = comment_class
    setattr(clazz, 'comments', relation(comment_class))

    def add_comment(self, comment):
        self.comments.append(comment)

    setattr(clazz, 'add_comment', add_comment)

    return clazz


First off, we have the BaseComment class. This is just an empty class, but you could easily imagine shared logic living here.
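
For instance (purely hypothetical, not part of the code above), BaseComment could carry behavior shared by every generated comment class:

# Hypothetical example of shared logic on the base class; the real
# BaseComment above is still just an empty class.
class BaseComment(object):
    def __repr__(self):
        return '<%s by %s>' % (type(self).__name__, self.name)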

The build_comment_model() function is where the creation of the comment table's metadata takes place. It first creates a new class to be used for the new comment model (in the case of a Post model being commentable, it creates a PostComment class that inherits from BaseComment). Using the information SQLAlchemy keeps on the mapper of the class we're decorating, we can determine things such as the name of its table, and use that to create our own table (the Post model's table name is "posts", so we create a "posts_comments" table).

Finally, the commentable() function finishes by setting the Comment attribute on the model class we're decorating, as well as adding the relation and an extra method. The Comment attribute lets us easily get at the new class we created, the relation lets us easily work with the comments, and add_comment is an example of how we can also add extra methods to the model.

Currently, there are some pitfalls:

1.) It assumes that the class has already been mapped by the time commentable() runs. Because the mapping of a class must happen after the class is created, you couldn't use commentable() as a decorator with a non-declarative style mapping, since the decorator wouldn't be able to find the mapper. The solution would be to just call the commentable() function after the class is mapped. I guess that's not really a pitfall per se, although I haven't actually tried it, so I'm just guessing that it works :)

2.) Right now, it assumes that the foreign key for the Comment object is a single column that corresponds to a column named "id" on whatever model it's decorating. I still need to work on this, since you might have a different column name, a different primary key type, or even a multi-column primary key. Basically, when the new comment table's metadata is being laid out, it needs to look at the decorated model's primary keys to determine how it should construct its foreign keys, rather than just jumping to the conclusion it does now. A rough sketch of that idea is below.
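
To make that second idea a little more concrete, here's a rough, untested sketch of how the foreign key columns could be derived from the decorated model's mapper instead of being hard-coded. The build_fk_columns() helper is hypothetical; build_comment_model() would pass its result into the sa.Table() call in place of the hard-coded '<table>_id' column:

def build_fk_columns(clazz):
    # Hypothetical helper: one foreign key column per primary key column
    # on the decorated model, instead of assuming a single 'id' column.
    local_table = class_mapper(clazz).local_table
    columns = []
    for pk_column in local_table.primary_key.columns:
        columns.append(sa.Column(
            '%s_%s' % (local_table.name, pk_column.name),  # e.g. 'posts_id'
            pk_column.type,
            sa.ForeignKey(pk_column)))
    return columns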

Sunday, August 9, 2009

A hint for those new to Django.

After you're done with the Django tutorial, you'll probably want to start on your own project. Here's a hint that helped me, coming from a Pylons background, write that first project without getting fed up with it (especially if you're biased and ready to throw it all away and go back to your old framework at the slightest imperfection)...

In your urls.py, you have the following lines...

# Uncomment the next two lines to enable the admin:
# from django.contrib import admin
# admin.autodiscover()

In order to enable the admin, you uncomment those two lines, along with doing a few other things.

Delete those lines. Do it, right now. And get the admin out of your mind.

In the last week, I kept finding myself making my models, opening the admin, messing around with the model's admin interface, getting annoyed at how the admin handles relationships (and how it's not the way I would want users to specify them), and giving up.

The problem was that I was equating the Django admin with Django itself. In reality, what I had been doing in Pylons (writing the "admin" pages manually, but perfectly honed to what I wanted) could've been done in Django as well.

Now, perhaps eventually, once you've gotten fairly proficient with Django, specifically after you've wrapped your head around the forms, you can take a look at the admin and see how it might help you. But starting off with the admin will just distract you.