
Getting Into Amazon EC2

I spent some time this weekend diving deeper into Amazon’s EC2 and all of the associated services. I’d read about EC2 and discussed it with colleagues, so I pretty much thought I knew what it was all about….. virtual hosting, right? Yeah, I was wrong. Going through the process of setting up an instance and configuring all the network and storage services is what completely changed my perspective. EC2 is really, really cool.

What is really rocking my world is the whole concept of throw-away servers. The idea that a discrete process can spin up a new server that gets built at run time, does some work, then disappears is amazing. I see this as turning the whole concept of linear scaling on its head. You don’t scale an app, you scale individual threads. Powerful stuff, especially when dealing with data mining and event processing.
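
To make this concrete, here’s a rough sketch of the pattern using the boto library (the credentials, AMI ID, and the actual “work” step are all placeholders):

from boto.ec2.connection import EC2Connection

# connect with AWS credentials (placeholders)
conn = EC2Connection('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')

# spin up a throw-away worker, built at run time
reservation = conn.run_instances('ami-12345678', instance_type='m1.small')
worker = reservation.instances[0]

# ... hand the instance its work via user-data, ssh, or a queue ...

# then make the server disappear when the job is done
worker.terminate()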

Much more coming soon…..

ORM, RDF, and Jon Postel

Had a great conversation with @cks earlier about the dichotomy between ORM and RDF/OWL when modeling enterprise data. His position was that with a pure ORM model you are more likely to have consistent data throughout your applications, because the rules & constraints have been laid out before the user ever touches the keyboard. Where he felt RDF/OWL is at odds with the ORM model is that by giving people the ability to create relationships that trigger additional inferences, you must trust them to understand the implications of their actions.

A simple example…. suppose the ontology declares that anything with legs is an :Animal, and that every :Animal needs water:
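
:hasLegs rdfs:domain :Animal .
:Animal rdfs:subClassOf :ThingThatNeedsWater .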

Now someone comes along and creates this:
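
:table :hasLegs 4 .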

An inference engine will dutifully conclude that :table is an :Animal, and therefore a :ThingThatNeedsWater. At this point it’s incumbent upon all dependent applications to understand that “:table” does not in fact need water. Again, the key is that every application must be aware of and apply the same rules or data integrity suffers. It’s not that traditional ORM and RDF/OWL can’t coexist; in some companies it may be an integrated process. What concerned @cks is that the inferences inherent to RDF/OWL introduce issues with consistency and integration, because it’s so easy for new rules to simply pop up.

I agree with everything up to this point, but where I would argue we need to be headed with enterprise apps is toward a hybrid model that blends the consistency & predictability of ORM with the freedom of RDF/OWL.

First let’s quickly take a moment to talk about freedom. When I hear a programmer use the phrase “never trust the user” I scratch my head. Sure, you should sanitize application input for the sake of security, but let’s be realistic about it: business users do not intentionally inject crap into the system. They use software to get things done. Humans will make mistakes, but so do the software applications that were written by… well, humans.

The user is the most important component of software development. If that sounds obvious, then why don’t we trust them more? My hope is that developers begin putting more trust in the user with a focus on creating software that learns from the users instead of limiting them.

So back to the hybrid model. I picture RDF/OWL as the essential meta layer above the ORM. By abstracting it with interfaces that are usable by non-techies, it becomes an engine for collecting knowledge about the relationships and attributes of the business across all dimensions. We shouldn’t be concerned with modeling absolute and irrefutable truths, because tomorrow there will be an exception. That’s the problem with strict models in the enterprise: there will always be exceptions. The ORM layer then follows on as an application-specific module where you can extract pieces of the meta layer to digest, analyze, and make use of the data at a domain level.
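
As a rough sketch of what that extraction might look like with rdflib (the ontology file, namespace, and :Supplier class are all made up):

from rdflib import Graph, Namespace, RDF

biz = Namespace('http://example.com/business#')

# the shared meta layer, collected through non-techie-friendly interfaces
meta = Graph()
meta.parse('business-ontology.rdf')

# extract just the slice this application cares about and hand it
# to the domain/ORM layer for digestion and analysis
for supplier in meta.subjects(RDF.type, biz.Supplier):
    print 'sync ORM record for %s' % supplier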

It’s about putting a higher priority on the collection of information than on enforcing rules on the information. The principle reminds me of the great quote from Jon Postel.

“be conservative in what you send, be liberal in what you receive”

Postel was of course referring to the Transmission Control Protocol, a language that computers use to speak to each other over the internet. To me, however, these words have a more universal meaning in the world of software development, which I’d categorize like this:

  • Listen more
  • Talk less
  • Prepare for exceptions
  • Trust until you are given a reason not to

Data Migration for CouchDB

Something that’s currently missing from CouchDB is a way to import/export documents. The feature may be added one day, but say you need to get your data out of CouchDB… like right now. Well, here’s how you can do it.

Before getting started, one quick side note about dealing with CouchDB data files. When you create a new database, a corresponding {db}.couch file is created; that file is your actual “database”. It’s usually in /var/lib/couchdb, but if not, check DbRootDir in your /etc/couchdb/couch.ini for the location (update: for 0.9.0 it’s now database_dir in /etc/couchdb/default.ini).
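
For reference, the relevant stanza in default.ini looks something like this (the path shown is just the common default):

[couchdb]
database_dir = /var/lib/couchdb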

Under normal circumstances you can take hot backups of these files at any time using rsync, cp, etc… it’s simply a file. The thing that got me stuck was when CouchDB went from 0.8.0 to 0.9.0 and the internal file format changed. The result was that the data needed to be moved programmatically across databases using raw JSON.
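
For example, a quick hot backup of the msg_db database used later in this post could be as simple as:

cp /var/lib/couchdb/msg_db.couch /backups/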

If you search the CouchDB mailing lists for how to get your data migrated you’ll likely come across references to the couchdb-python utilities. Dig more and you’ll see references to the tools/dump.py and tools/load.py scripts. That’s about where the trail ended for me, but after some hacking around I’ve successfully moved data from 0.8.0 to 0.9.0. As an added bonus I was able to get my hands dirty with the couchdb-python library which has been fantastic so far.
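
If you haven’t seen couchdb-python yet, here’s a quick taste (the server URL and document contents are just for illustration):

import couchdb

# connect to a running CouchDB server and open a database
server = couchdb.Server('http://localhost:5984/')
db = server['msg_db']

# store a document and read it back by its generated id
doc_id = db.create({'type': 'message', 'body': 'hello couch'})
print db[doc_id]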

One more side note, this time about couchdb-python. If you are new to CouchDB I would still recommend starting with Futon, Views, and the REST API before you move to a client library (Python or other). It will help you conceptualize how CouchDB is way more than a massive hash table or fancy object store.

So to the task at hand…. Assuming you have Python 2.4 or later, you’ll need to install three things.

  • httplib2 – a Python HTTP library. I was able to install it via apt-get on Debian, and there are packages available for other distros.
  • simplejson – a Python egg for JSON manipulation.
  • couchdb-python – a Python egg for CouchDB itself.

I was able to install the egg files using Python’s easy_install.
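
That amounts to something like this (simplejson and CouchDB are the respective egg names):

easy_install simplejson
easy_install CouchDB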

The next step is to grab tools/dump.py and tools/load.py from the couchdb-python egg file. To do this you need to unzip the CouchDB .egg that’s in site-packages and extract the files to a directory of your choice. This seems like a strange method, but it works. Someone let me know if I’m missing an easier way.
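
Since an egg is just a zip archive, something along these lines does the trick (adjust the site-packages path for your install):

unzip /usr/lib/python2.4/site-packages/CouchDB-*.egg 'tools/*' -d .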

To begin the database dump run dump.py and pass the full URL to the database you are exporting. Make sure to redirect output in order to capture the JSON.

./dump.py http://source-couchdb:5984/msg_db > msg_db.json

Once your export completes copy the .json file and the load.py to the same directory and run the following command to import the file to your target database.

./load.py --input=msg_db.json http://target-couchdb:5984/msg_db

Make sure you create the target database before you run the script or it will fail. If you haven’t created it yet, the REST API makes that a one-liner:
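
curl -X PUT http://target-couchdb:5984/msg_db

You’ll know everything is working if you see a series of statements that look like this: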

Loading document 'bda90174c1a41bad2289bfc5829008ce'
Loading document 'e45d7c2850610a01658234eeddde1fde'
Loading document 'e856071c791cd677eafbce85bb1509de'

After it completes, you can fire up Futon and you’ll see all your precious data has been loaded into your new instance of CouchDB. Victory!

© 2009 - 2019 Ross Bates